Search Results

Search found 22040 results on 882 pages for 'process improvement'.

  • What PowerShell/WSMan clients or queries are consuming more than 1000 requests per 2 seconds?

    - by makerofthings7
    Exchange 2010 remote administration tools are failing with the following error:

        [txexmb02.ibm.com] Connecting to remote server failed with the following error message : The WS-Management
        service cannot process the request. The system load quota of 1000 requests per 2 seconds has been exceeded.
        Send future requests at a slower rate or raise the system quota. The next request from this user will not be
        approved for at least 558475776 milliseconds. For more information, see the about_Remote_Troubleshooting
        Help topic.
            + CategoryInfo : OpenError: (System.Manageme....RemoteRunspace:RemoteRunspace) [], PSRemotingTransportException
            + FullyQualifiedErrorId : PSSessionOpenFailed
        VERBOSE: Connecting to TXEXHC02.ibm.com

    The help topic this error refers to says this is a WS-Man error. We're running SCOM 2007 R2 and I suspect it is driving up the query count, but I need a way to prove it.
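
    It would help to see which quota WinRM is actually enforcing. As a hedged starting point, these are stock WinRM commands (not an Exchange-specific fix) to run on the affected server:

        # List the WinRM service-side quota settings and look for the
        # 1000-requests figure reported in the error.
        Get-ChildItem WSMan:\localhost\Service

        # The same configuration in one dump, via the WinRM command line:
        winrm get winrm/config/service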

  • Black Screen after installing recommended Nvidia drivers. What to do?

    - by former_Windows_user
    New to Ubuntu. Problem description: until recently I had Windows on my computer. My hard disk is divided into two partitions: on the first (about 10 GB) I had Windows XP, and on the second (about 30 GB) I have some data. I tried to install Ubuntu 12.04 on the first, smaller partition. Since I wanted to keep the data on my second partition, I chose the third install option. During the installation I deleted the first partition, created a new one of the same size, formatted it as ext4 and mounted / on it. The installation finished fine; at the end I restarted and took the CD out when it ejected automatically (it may also have been before the restart).

    Ubuntu started, but I noticed my computer was slow. Then a prompt appeared telling me that I did not have the optimal NVIDIA driver and recommended installing a specific one. I clicked on the recommended driver, the installation apparently went fine, and at the end I had to restart the system again. I did, Ubuntu started and asked for my password, I typed it and pressed Enter, and the screen turned black and stayed that way (only the cursor was there, and I could move it). I restarted and the same thing happened again.

    Has anyone had this problem before and managed to solve it? With Windows I always installed drivers from CDs after installing Windows. Will the same CDs work for Ubuntu too, or should I find special drivers?

    P.S. During the installation I was connected to the internet and agreed to install updates and third-party software. Before I installed that problematic but recommended NVIDIA driver, I checked that there were between 6 and 7 GB of free space on the partition where I installed Ubuntu.
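
    A commonly suggested recovery path for a post-driver black screen on 12.04, as a hedged sketch (the package name is an assumption and depends on which driver the prompt installed):

        # Switch to a text console with Ctrl+Alt+F1 (or boot into recovery mode),
        # log in, then remove the proprietary driver so X falls back to nouveau:
        sudo apt-get purge nvidia-current
        sudo reboot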

  • Hosting and scaling a Facebook application in the cloud? [closed]

    - by DhruvPathak
    Possible duplicate: How to find web hosting that meets my requirements?

    We would be building a Facebook application in Django (Python), but we are still not sure where to host it economically, with good provision to scale in case the app goes viral. Some details about the app:

    - It would be HTML-based like a website, using Django as the framework.
    - We expect about 100K pageviews per day if the app goes viral.
    - Users will not generate any media content, only some database data.

    It would be great if someone with more experience could advise on the following points:

    A) Hosting on Google App Engine, Amazon EC2, or another cloud such as Rackspace: the points in App Engine's favour were ease of deployment, cost effectiveness and easy scaling; for EC2, full control of the virtual machine and Amazon's NoSQL and RDBMS database services in case we decide to use them.

    B) Does the backend technology affect the monthly cost? For example, would the CPU and memory usage of Django over, say, a PHP framework like CodeIgniter really make a remarkable difference in running costs? (Here is the article that triggered this thought process: http://journal.dedasys.com/2010/01/12/rough-estimates-of-the-dollar-cost-of-scaling-web-platforms-part-i#comments)

    C) Does something like Heroku, which provides additional services on top of Amazon EC2, prove to be better than raw cloud management?

    We are not trying to scale prematurely; we just want a good start so that we are ready to handle unpredicted growth.

  • Why is heap size fixed on JVMs?

    - by themel
    Can anyone explain to me why JVMs (I didn't check too many, but I've never seen one that didn't do it this way) need to run with a fixed maximum heap size? I know it's easier to implement on a simple contiguous heap, but the Sun JVM is now over a decade old, so I'd expect them to have had time to improve this.

    Needing to define the maximum memory size of your program at startup seems such a 1960s thing to do, and then there are the bad interactions with OS virtual memory management (GC retrieving swapped-out data, inability to determine from the OS side how much memory a Java process is really using, huge amounts of VM address space wasted (I know, you don't care on your fancy 48-bit machines...)).

    I also guess that the various sad attempts to build small operating systems inside the JVM (EE application servers, OSGi) are at least partially to blame on this circumstance, because running multiple Java processes on a system invariably wastes resources: you have to give each of them the memory it might need at peak.

    Surprisingly, Google didn't yield the storms of outrage over this that I would expect, but they may just have been buried under the millions of people discovering the fixed heap size and accepting it as a fact of life.
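
    For reference, the bounds in question are ordinary HotSpot startup flags; the heap may grow between them at runtime, but never past the maximum fixed here:

        # -Xms sets the initial heap, -Xmx the hard ceiling; both are fixed
        # for the lifetime of the process (the class name is illustrative).
        java -Xms256m -Xmx2g MyApp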

  • Move entire OS from NTFS drive to bigger ext4 drive

    - by pangel
    According to SMART data, the hard drive I currently use is about to fail, so I bought a new, bigger drive to copy the system to. The old drive is 160 GB; Ubuntu was installed on it with Wubi, so the partition is NTFS. There are a few other partitions around (recovery partition, swap...) that I don't care about. The new drive is 320 GB, and I would like the new system to run on ext4, not on NTFS.

    I looked at solutions that use dd or Clonezilla, but it seems that moving to a different filesystem prevents me from using them. I considered installing a brand-new Ubuntu on the new hard drive and then copying /home from the old drive to the new one, but I've heard there would be file-permission problems, and I would also have to reinstall all my software.

    One last thing: the NTFS drive has dead sectors. I don't know how this influences the copy process, but I mention it just in case.

    Edit: I do not care about the Windows partition; I just want Ubuntu to make the transition.
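
    Not from the original thread, but a hedged sketch of the fresh-install-plus-copy route; run as root, rsync preserves ownership and permissions, which addresses the usual /home-copy worry (the mount points and NTFS path are assumptions):

        # After installing Ubuntu on the new ext4 drive, mount the old Wubi
        # root (the root.disk loop file on the NTFS partition) and copy /home:
        sudo mkdir -p /mnt/wubi
        sudo mount -o loop /media/ntfs/ubuntu/disks/root.disk /mnt/wubi
        sudo rsync -aHv /mnt/wubi/home/ /home/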

  • How Can I Automate the Backup of a QuickBooks Server?

    - by Nick
    I have three computers: the first is the company file server, which has the QuickBooks company file, is always on, and lives in the closet; the other two are QuickBooks clients. All run XP Pro. I need a way to automatically back up the QuickBooks data file, without any user intervention.

    QuickBooks has a built-in scheduled backup utility, but from what I've read it only works when the software is running in single-user mode (and obviously putting the server into single-user mode defeats the point of having a server). Also, I'm not actually running QuickBooks itself on the server, just the "QB Database Server" process that sits in the system tray. Surely there must be a way to automate this? I'm open to any ideas/suggestions. Thanks!
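
    One hedged approach, with the caveat that copying the company file while the database server holds it open can produce an inconsistent copy, so it is safest to stop the service around the copy. The service name and paths below are assumptions (QuickBooks service names are usually version-suffixed):

        rem Nightly batch file for XP's Scheduled Tasks: stop the QuickBooks
        rem database service, copy the company file, restart the service.
        net stop "QuickBooksDB"
        xcopy "C:\QBData\Company.QBW" "\\backupserver\qb\" /Y
        net start "QuickBooksDB"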

  • Is there a difference in page fault rates between CPU bound and I/O bound processes?

    - by user198864
    I was wondering: should we expect any difference in page-fault rates between CPU-bound and I/O-bound processes? At first I thought we might, since a CPU-bound process likely makes more memory accesses per time quantum, so I'd expect it to move from locality to locality faster. At the same time, a CPU-bound process is probably given a larger working set... but that doesn't reduce the fault overhead when it hits a new locality, if that locality wasn't pre-paged in. Is there actually any real difference in page-fault rates, or am I just musing about something nonexistent? And if there is a difference, how would it impact a real-world OS like Linux?
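
    One way to move this from musing to measurement, at least on Linux: the kernel keeps per-process fault counters, so representative workloads can be compared directly (the PIDs are placeholders; sar comes from the sysstat package):

        # min_flt: faults served without I/O; maj_flt: faults that had to hit disk.
        ps -o pid,comm,min_flt,maj_flt -p 1234,5678

        # Or watch system-wide paging rates while each workload runs:
        sar -B 1 10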

  • Is this too much to ask of a game programming and development enthusiast? Am I doing this wrong?

    - by I_Question_Things_Deeply
    I have been a computer fanatic for almost a decade now. I've always loved and wondered how computers work, from the purest, lowest hardware level to the very smallest pixel on the screen, and all the software in between. That seems to be my problem, though: as I try to write code (I'm fairly fluent in C++), I sit there for enormous amounts of time in front of a text editor wondering how every line, statement, datum and function will correspond to every assembly and machine instruction performed, to everything the kernel must do to allocate memory for my compiled program, and to all the other hardware being used as well.

    For example, I would write cout << "Before memory changed" << endl; and run the debugger to get the assembly for it, then try to disassemble the assembly to machine code based on my ISA, and then research every .dll, library file, linked library, the linking process, the linker's source code, the makefile, the steps my kernel takes in processing this compilation, and the hardware beyond the processor (e.g. video card, sound card, chipset, cache latency, byte-sized registers, calling conventions, DDR3 RAM and disk drive, filesystem internals, and so many other things).

    Am I going about programming wrong? I feel I should know everything that goes on underneath the English-like syntax of a program. But the more I research every little thing, the less I actually accomplish. I can never finish anything because of this mentality, yet I feel compelled to know everything. What should I do?

  • Is there a Linux mail server with an outgoing pickup directory?

    - by Paul D'Ambra
    On my Exchange server I can drop appropriately formatted text files into the "pickup" directory and Exchange will process them. I'd like to split this bulk-mailing functionality onto another box to protect our business mail IP from the bumpy ride that our monthly newsletter gives us. (I should note at this point that the mailing is opt-in, includes an opt-out link, and only goes to people who pay to be members of our organisation.)

    The ideal solution would be to add a Linux box used just for this purpose, so we're not paying for more Exchange licenses. So, is there a Linux equivalent of the Exchange pickup directory?
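
    Postfix covers this use case through its sendmail(1) compatibility interface; its pickup(8) daemon watches the maildrop queue, and the supported way to inject a prepared message file is to feed it to the sendmail command. A hedged sketch, with the directory layout as an assumption:

        # Process every prepared RFC 2822 message file in a drop directory;
        # -t makes Postfix read recipients from the To:/Cc:/Bcc: headers.
        for f in /var/mailout/outgoing/*.eml; do
            /usr/sbin/sendmail -t < "$f" && mv "$f" /var/mailout/sent/
        done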

  • I get this error after upgrade. Please help

    - by user203404
        dpkg: dependency problems prevent configuration of initramfs-tools:
         initramfs-tools depends on initramfs-tools-bin (<< 0.99ubuntu13.2.1~); however:
          Version of initramfs-tools-bin on system is 0.103ubuntu0.2.
         klibc-utils (2.0.1-1ubuntu2) breaks initramfs-tools (<< 0.103) and is installed.
          Version of initramfs-tools to be configured is 0.99ubuntu13.2.
        dpkg: error processing initramfs-tools (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of plymouth:
         plymouth depends on initramfs-tools; however:
          Package initramfs-tools is not configured yet.
        dpkg: error processing plymouth (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of mountall:
         mountall depends on plymouth; however:
          Package plymouth is not configured yet.
        dpkg: error processing mountall (--configure):
         dependency problems - leaving unconfigured
        No apport report written because MaxReports is reached already
        No apport report written because MaxReports is reached already
        No apport report written because MaxReports is reached already
        dpkg: dependency problems prevent configuration of initscripts:
         initscripts depends on mountall (>= 2.28); however:
          Package mountall is not configured yet.
        dpkg: error processing initscripts (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of upstart:
         upstart depends on initscripts; however:
          Package initscripts is not configured yet.
         upstart depends on mountall; however:
          Package mountall is not configured yet.
        dpkg: error processing upstart (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of passwd:
         passwd depends on upstart-job; however:
          Package upstart-job is not installed.
          Package upstart which provides upstart-job is not configured yet.
        dpkg: error processing passwd (--configure):
         dependency problems - leaving unconfigured
        No apport report written because MaxReports is reached already
        No apport report written because MaxReports is reached already
        No apport report written because MaxReports is reached already
        Errors were encountered while processing:
         initramfs-tools plymouth mountall initscripts upstart passwd
        E: Sub-process /usr/bin/dpkg returned an error code (1)
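
    The output suggests initramfs-tools is stuck at an old version that newer packages (klibc-utils) have already outgrown. A commonly suggested recovery sequence, hedged because the full package state isn't visible here:

        sudo apt-get update
        sudo apt-get -f install        # let apt repair the broken dependencies
        sudo dpkg --configure -a       # finish configuring half-installed packages
        sudo apt-get dist-upgrade      # pull initramfs-tools up to the new version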

  • Archive format & tool for large amounts of data (50 GB+)

    - by marcusstarnes
    I only realised this afternoon that the ZIP format (at least as my tools produce it) has a limit that appears to be around 20 GB. I am trying to automate an archive process (using Automate) to zip/rar/whatever a collection of folders/files on one of my disks, and it always bombed out with an incomplete archive at about 20 GB. I then tried doing it manually as a ZIP file with WinRAR, and it told me about the limit. So I was wondering: what is a recommended archive format (and a tool for accomplishing the task) for archiving a large amount of data, around 50 GB?
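
    A hedged suggestion rather than a definitive answer: the 7z format has no practical size limit at this scale, and 7-Zip's command line is easy to drive from an automation tool (the paths and volume size are assumptions):

        rem a = add to archive, -t7z = 7z format, -v4g = split into 4 GB volumes
        rem (useful if the destination filesystem is FAT32, or for transport).
        7z a -t7z -v4g E:\backups\data.7z D:\DataToArchive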

  • SQL Server 2008 R2 installation Error "Not all privileges or groups referenced are assigned to the caller"

    - by CodeSlinger
    I'm getting an error halfway through a SQL Server 2008 R2 installation. The error states "Not all privileges or groups referenced are assigned to the caller" and asks me to retry or cancel. Either way, the error message returns and I must Ctrl-Alt-Delete to end the process. I have checked all permissions associated with the local account and the network domain account, and ran the installation as administrator. I have searched the web and other people are having this problem too, but I can't find a solution, so I turn to the experts: has anyone encountered this error?
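
    One workaround that comes up for this exact message is resetting the machine's local security policy, on the theory that the installer fails when privileges it expects have been stripped from the policy. Hedged: this resets the local policy to OS defaults, so review the implications before running it:

        rem From an elevated command prompt, then retry the installation.
        secedit /configure /cfg %windir%\inf\defltbase.inf /db defltbase.sdb /verbose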

  • SQL Server 2005 transactional replication break before a configured number of retries

    - by ti2
    We have a SQL Server 2000 Standard database with some tables replicated (continuous transactional replication) to dozens of SQL Server 2005 Express and MSDE machines. Step 2 of the replication agent job ("Run agent") is configured by default to retry every 1 minute, 10 times, if a problem occurs. Because the client machines get shut down at night (they are POS machines), we changed the number of retries to 5760 (4 days), so replication would not break overnight and would not need to be restarted manually.

    The problem is that every other day we have at least one machine with broken replication, with this error:

        The process could not connect to Subscriber 'POS986'.
        NOTE: The step was retried the requested number of times (5760) without succeeding. The step failed.

    It seems that SQL Server is not respecting the number of retries or the interval between retries as we configured them.

    PS: I did restart the replication job after changing the number of retries from 10 to 5760.
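
    It may be worth confirming what is actually stored on the job step, since the agent reads the retry settings from msdb at run time. A hedged check; the job name is a placeholder for your distribution agent's job:

        -- Shows retry_attempts and retry_interval as SQL Server Agent sees them.
        EXEC msdb.dbo.sp_help_jobstep
            @job_name = N'YourServer-YourPublication-POS986',
            @step_id = 2;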

  • What is the procedure to replace a failing hard drive in a RAID array?

    - by slayton
    Three years ago a co-worker set up a software RAID-6 array on Ubuntu 9.04, and now I'm getting messages from the OS that one of the drives has bad sectors and should be replaced. I'd like to remove this drive and replace it with a new one; however, I have never done this before and I'm terrified that in the process of fixing the array I'll end up ruining it. I know the device ID of the array and the device IDs of the individual drives in it, and I can physically identify the bad drive. What are the steps to replace the bad drive with a new one and get the array running again?
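
    A hedged outline of the usual mdadm procedure; the device names below are assumptions, so verify each one against /proc/mdstat and mdadm --detail before running anything:

        mdadm --detail /dev/md0                      # confirm array state and members
        mdadm --manage /dev/md0 --fail /dev/sdc1     # mark the bad member failed
        mdadm --manage /dev/md0 --remove /dev/sdc1   # remove it from the array
        # Power down, swap in the new drive, partition it to match, then:
        mdadm --manage /dev/md0 --add /dev/sdc1      # add the replacement
        watch cat /proc/mdstat                       # monitor the rebuild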

  • Saving backup files automatically in (g)Vim after saving a file

    - by Somebody still uses you MS-DOS
    I had a problem with my gVim: I lost some important modifications when my machine came back from hibernation. To avoid this kind of problem, I would like to know if it's possible to add something to my .vimrc (or a plugin) that automatically backs up every save I make to my files. Disk space is not an issue; I can delete the backup files later. I'm already using

        set backup
        set backupdir=~/.backup/vim
        set directory=~/.swap/vim

    This creates a myfile.extension~ in my ~/.backup/vim, but I would like the configuration to keep a copy of every save: ~ for the first, ~0 for the second, ~1 for the third, ~2 for the fourth, and so on. Is this possible? Do you know if there's a plugin for this?
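
    A hedged .vimrc sketch rather than a plugin: Vim lets you change the 'backupext' option just before each write, so giving it a timestamp makes every save produce a distinct backup file instead of overwriting the single ~ copy:

        " Keep all backups in one place, and stamp each one uniquely.
        set backup
        set backupdir=~/.backup/vim
        autocmd BufWritePre * let &backupext = '~' . strftime('%Y%m%d-%H%M%S')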

  • "Access denied" to C:\Documents and Settings after removing malware?

    - by Rising Star
    My Windows 7 PC became infected with the so-called "Malware Protection designed to protect" trojan while I was at work the other day. I managed to kill the process, so the malware is no longer running. The removal instructions say to delete the following file:

        c:\documents and settings\all users\application data\defender.exe

    However, when I click through to c:\documents and settings, it says "Access denied". Prior to this infection I never had any trouble accessing "Documents and Settings" or "Application Data". I've read that in Windows 7 c:\documents and settings is a junction pointing to c:\users, but I still cannot find the file defender.exe. Suggestions?
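
    For what it's worth, the "Access denied" is expected: the XP-era paths exist on Windows 7 only as hidden junctions kept for backward compatibility, and "All Users\Application Data" ultimately resolves to C:\ProgramData. A hedged check from an elevated command prompt:

        rem List the compatibility junctions on the system drive.
        dir /aL C:\

        rem The old path maps to C:\ProgramData, so search there for the file.
        dir C:\ProgramData\defender.exe /s /a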

  • Mac OS 10.7 Missing Java Completely

    - by Stuartsoft
    I'm in dire straits. I was trying to replace my Java JDK, downgrading to 1.6, and somehow managed to completely break all the previously installed versions in the process. Bottom line: my Mac has no JDK installed at all. I've tried reinstalling Java 1.7 from Oracle, and I've tried using Pacifist to manually extract the files from Apple's Java 1.6 package... nothing. When I open Terminal and run java -version, all I get is

        -bash: java: command not found

    My real goal is just to get back to Java 1.7, but even after running the installer, java is still inaccessible to Terminal and other applications.
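
    A hedged pair of diagnostics for 10.7: java_home reports the JVMs the system actually registers, which helps distinguish a broken install from a broken lookup (output will vary by machine):

        # List every registered JVM and the one the system would pick by default.
        /usr/libexec/java_home -V

        # Oracle's 1.7 JDK installs here; Apple's 1.6 lives under /System/Library.
        ls /Library/Java/JavaVirtualMachines/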

  • Inbox lock for exclusive access [duplicate]

    - by user212051
    This question already has an answer here: "Dovecot pop3: Disconnected for inactivity" (2 answers)

    - I found a remote server logged into a mailbox on my mail server.
    - That connection was released for inactivity after 10 minutes.
    - In the 10 minutes between login and the inactivity disconnect, 3 attempts to send a message to this mailbox from 3 different clients failed with "unable to lock for exclusive access: Resource temporarily unavailable".
    - After the disconnection, the 3 messages reached the mailbox fine.

    I tried to simulate the process and lock a test mailbox, but I couldn't. I am trying to understand: who can lock? Who has exclusive access? Why can only the client's server lock? And how do I solve this?
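
    If the mailboxes are mbox files, one hedged way to reproduce the situation is to hold a lock by hand and then attempt deliveries. The path is an assumption, and flock(1) takes a BSD-style lock, so if the server is configured for fcntl or dotlock locking this may not collide, which is itself informative:

        # Hold an exclusive lock on the test mailbox for 10 minutes.
        flock -x /var/mail/testuser -c 'sleep 600'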

  • enable tcp_syncookies even after reboot

    - by Tim
    I'm running Scientific Linux 6.1 and would like to set net.ipv4.tcp_syncookies=1. I've set that in /etc/sysctl.conf and, if I do a sysctl -p and then query net.ipv4.tcp_syncookies, it shows the value is properly set. Sadly, if I reboot the machine and query again, it goes back to 0. I've tried to grep around and see whether something else resets it to 0 during the boot process, but haven't turned up anything, and everything I've googled points back to sysctl.conf. The only thing I can think of is that maybe networking isn't up by the time that file gets read, but honestly, I'm a developer and well beyond my natural skills here :) I'm tempted to just set it directly in /etc/init.d/network, but that feels hackish, so I thought better of it and I'm here in search of the "right" way to do it. Any pointers?
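
    A hedged way to hunt for whatever flips the value back after sysctl.conf is applied (the paths assume a stock EL6 layout; adjust for any third-party tooling):

        # Find every place that mentions the setting during boot.
        grep -rn tcp_syncookies /etc/sysctl.conf /etc/sysctl.d /etc/init.d /etc/rc.d 2>/dev/null

        # Confirm the current value straight from the kernel after a reboot.
        cat /proc/sys/net/ipv4/tcp_syncookies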

  • Utilising a Magento server cluster to drive hot reindexing

    - by WOBenji
    We've asked a similar question in the past. Basically, we have a very large Magento store with 500,000 products that are currently reindexed once a day, during the night, and we'd like to speed this process up significantly; we're at about 4-5 hours now. It was suggested that we do the reindexing on a server cluster and replicate the database changes after they've been made, on a machine that isn't busy serving customers. But what is the mechanism for that? How do we replicate those changes across to the live site from the server cluster? Can someone point me in the right direction here?
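
    Not an endorsement of the approach, but the crude version of "replicate the changes" is often just pushing the rebuilt index tables across. A hedged sketch; the host names and the exact set of index tables are assumptions that depend on the Magento version:

        # On the standby box, after the nightly reindex finishes:
        mysqldump -h standby-db magento catalog_product_flat_1 catalogsearch_fulltext > /tmp/indexes.sql
        mysql -h live-db magento < /tmp/indexes.sql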

  • Unix/Linux permissions setup for shared hosting with Apache

    - by weiyin
    I'm in the process of setting up a server from a clean CentOS 5 install. What is the best permission structure (users, groups, Unix permissions) for running a single instance of Apache for multiple users? Ideally it should satisfy these requirements (see the sketch below):

    - Each user's websites are stored in a subdirectory of their home directory.
    - Users can edit files and permissions.
    - Apache can read the websites of all users.
    - No user can read the website files of other users.

    Bonus question: how do you add PHP and/or Perl and/or Ruby to Apache without allowing any user to access another user's files?
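
    A hedged sketch of the classic per-user-group layout, shown for one user (the names are assumptions; on CentOS 5 Apache runs as the apache user). Each user's files are group-readable by a group containing only that user and Apache, which satisfies the last requirement:

        groupadd alice-web
        useradd -m -g alice-web alice
        usermod -a -G alice-web apache            # Apache may read alice's sites only
        mkdir -p /home/alice/public_html
        chown -R alice:alice-web /home/alice
        chmod 750 /home/alice                     # other users get no access at all
        chmod -R u=rwX,g=rX,o= /home/alice/public_html

    On the bonus question: mod_php runs all scripts as the apache user, which defeats this isolation; suexec, suPHP or per-user FastCGI processes are the usual answers.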

  • Need help setting up a VPN for remote computer connection

    - by Chowdan
    I am on a low budget right now, currently in the process of starting a computer company, and I need a VPN so I can run the Dameware tools to work on customers' and partners' computers remotely. I will be working with Windows and some Apple and Linux machines. I have a desktop with an AMD Phenom II 965BE (currently running stable at 3.8 GHz), 8 GB of RAM, a Radeon HD 6870 (I know graphics aren't too useful here) and about 1.5 TB of HDD space. I am attempting to build the whole office network on this one machine, in a way that is also secure for connecting remotely to my partners' computers when they have issues, so I can diagnose and repair them remotely. What types of servers besides a VPN server would I need to create this? I have access to all Microsoft products, so I can run Windows Server 2012, Windows Server 2008 R2, or any other Microsoft software. Thanks for the help, all.
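
    Given the budget constraint, one hedged option before standing up Windows RRAS is OpenVPN, which is free and runs on the hardware described. A minimal server.conf sketch; every value here is an assumption to adapt:

        port 1194
        proto udp
        dev tun
        ca ca.crt
        cert server.crt
        key server.key
        dh dh2048.pem
        server 10.8.0.0 255.255.255.0   # VPN subnet handed out to clients
        keepalive 10 120
        persist-key
        persist-tun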

  • Is it possible to extend the Active Directory schema in a Windows 2003 DC (NOT R2) to support DFSR?

    - by JohannesH
    We're in the process of installing a brand-new Windows Server 2008 web cluster, and we would like to synchronize some files between the servers. The problem is that the DC in the domain is an old Windows Server 2003 Standard (NOT R2), which apparently doesn't contain the required extensions to the AD schema. Is it possible to upgrade the schema without upgrading the DC servers to R2? When I try to create a Replication Group on the 2008 server I get the following message:

        Error: srv.XXXXXX.XX: The Active Directory Domain Services schema on domain
        controller activedc07.srv.XXXXXX.XX cannot be read. This error might be caused
        by a schema that has not been extended, or was extended improperly. See Help
        and Support Center for information about extending the Active Directory Domain
        Services schema.
        Schema version 30 is not supported.
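
    For what it's worth, extending the schema does not require upgrading the DC's operating system: adprep runs once against the schema master, and the DFSR schema classes ship on the R2 (and 2008) installation media. A hedged sketch; the drive letter and media layout are assumptions, and taking a system-state backup of the schema master first is prudent:

        rem From Windows Server 2003 R2 disc 2 (or 2008 media), on the schema
        rem master, as a member of Schema Admins and Enterprise Admins:
        D:\CMPNENTS\R2\ADPREP\adprep.exe /forestprep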

  • Find RARs with duplicate content

    - by Scott McClenning
    I need a utility to find RAR files that contain duplicate data (i.e. files within the RARs that hash the same but may have different names). I can open the RARs and see that the CRCs are the same, but I was hoping for a more automated process that would work in bulk (hundreds of files). Hashing each RAR as a whole won't help, because the files contained within could have different names, or the archives could be compressed at different levels. If needed, a utility that extracts the contents of the RARs and then compares them would work, but it's not preferred. I would prefer a free utility for Windows, but a paid utility, or one for Linux, would be acceptable.
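
    No off-the-shelf tool comes to mind, but the CRCs the archives already store are enough to script this. A hedged sketch using the third-party Python rarfile module (pip install rarfile; it drives an unrar backend): it indexes every archived file by (CRC32, size), so identical content is flagged across archives even under different names. CRC32 is weak, so treat matches as candidates to verify by extraction:

        import collections
        import glob

        import rarfile  # third-party; needs an unrar backend installed

        seen = collections.defaultdict(list)
        for path in glob.glob("*.rar"):
            try:
                with rarfile.RarFile(path) as rf:
                    for info in rf.infolist():
                        if not info.isdir():
                            # Key on checksum and size, not on file name.
                            seen[(info.CRC, info.file_size)].append(
                                "%s:%s" % (path, info.filename))
            except rarfile.Error as exc:
                print("skipping %s: %s" % (path, exc))

        for entries in seen.values():
            if len(entries) > 1:
                print("same content:", ", ".join(entries))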
