Search Results

Search found 21061 results on 843 pages for 'bulid process'.


  • Looking for internet traffic manager software for Windows 7

    - by Semyon Perepelitsa
    I have a 128 KB/s (1 Mbit/s) internet connection. It is not very fast for downloading big files, but it is fine for browsing. When I start a big file download, I cannot surf the web normally: pages load slowly, especially images. So I pause the download, but then I regularly spend a long time reading articles, or leave the computer alone, when I could have let the file keep downloading. Is there any software that can automate this pausing and resuming, or simply regulate internet speed per application automatically? I download files using different software and protocols (Download Master for HTTP and FTP, uTorrent for torrents, many other programs updating themselves), so I don't want to be tied to one particular download manager.

    Read the article

  • Customer Experience Management for Retail 2.0 - part 2 / 2

    - by Sanjeev Sharma
    In the previous post, I discussed some of the key trends shaping the retail industry, their implications, and the challenges facing retailers seeking to regain control of the buyer-seller relationship. Is Customer Experience Management the panacea for ailing retailers now awakening to the power of the consumer? Quite honestly, customer acquisition, retention and satisfaction have been top of mind for retailers for quite some time. The missing piece of this puzzle is bringing all those countless hours of strategy and planning to fruition; this is an execution gap more than anything else. Although technology has made consumers more informed, more mobile and more social, customer experience is still largely defined by delivering on the following: consistent experiences, whether shopping online or offline; personalizable interaction ("mass market" sounds good as an internal strategy, but not when you are the buyer!); and timely order fulfillment, if not proactive notification of delays. Below is a concept architecture for streamlining front-end, mid-office and back-end interfaces through shared processes, to achieve consistency and efficiency in managing the customer experience from order capture to order provisioning.

    Read the article

  • Remove Sync Center icon

    - by Edward Brey
    I accidentally marked a shared folder as "Available Offline" in Windows Explorer on a Windows 8.1 computer. This seems to have "woken up" Sync Center and caused its icon to be displayed in the system notification area. I have since undone that by marking the folder as not available offline, and furthermore have reset CSC and disabled Offline Files, yet the Sync Center icon still appears in the overflow section of the system notification area. How do I remove the Sync Center icon and, preferably, disable the process that is displaying it? Debugging info: the registry shows that things are still enabled, even though the Sync Center and Offline Files dialogs don't indicate that anything is active:

        HKEY_CURRENT_USER\Software\Classes\Local Settings\Software\Microsoft\Windows\CurrentVersion\SyncMgr\HandlerInstances\{750FDF10-2A26-11D1-A3EA-080036587F03}
            SyncTime                   REG_BINARY  F6DDC46CBB76CF01
            Connected                  REG_DWORD   0x1
            Enabled                    REG_DWORD   0x0
            Active                     REG_DWORD   0x1
            NotifiedOnFirstActivation  REG_DWORD   0x0

        HKEY_CURRENT_USER\Software\Classes\Local Settings\Software\Microsoft\Windows\CurrentVersion\SyncMgr\HandlerInstances\{750FDF10-2A26-11D1-A3EA-080036587F03}\SyncItems

        HKEY_CURRENT_USER\Software\Classes\Local Settings\Software\Microsoft\Windows\CurrentVersion\SyncMgr\HandlerInstances\{750FDF10-2A26-11D1-A3EA-080036587F03}\SyncItems\{CBA95344-4284-48CB-8083-3BDE1FDB29A7}
            SyncTime   REG_BINARY  F6DDC46CBB76CF01
            Connected  REG_DWORD   0x1
            Enabled    REG_DWORD   0x1
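
    Based on the values above, one hedged approach is to back up the handler key, set its Active and Connected values to 0 (or delete the handler instance key outright), and restart Explorer. The value names are taken straight from the debugging info; whether Sync Center respects the change without a reboot is an assumption to verify.

        :: Sketch only: back up the key, then quiet the sync handler shown above
        reg export "HKCU\Software\Classes\Local Settings\Software\Microsoft\Windows\CurrentVersion\SyncMgr\HandlerInstances" syncmgr-backup.reg
        reg add "HKCU\Software\Classes\Local Settings\Software\Microsoft\Windows\CurrentVersion\SyncMgr\HandlerInstances\{750FDF10-2A26-11D1-A3EA-080036587F03}" /v Active /t REG_DWORD /d 0 /f
        reg add "HKCU\Software\Classes\Local Settings\Software\Microsoft\Windows\CurrentVersion\SyncMgr\HandlerInstances\{750FDF10-2A26-11D1-A3EA-080036587F03}" /v Connected /t REG_DWORD /d 0 /f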

    Read the article

  • Dedicated server: managed hosting or manage it myself?

    - by ddawber
    We're currently hosting a number of sites on a self-managed dedicated server. Some companies, however, offer a managed dedicated server hosting service. They offer: roughly the same server spec; ticketing-system support; managed daily backups; and a virtual firewall (but with a limit of 10 IP addresses allowed through at any one time). This managed hosting comes at extra expense - somewhere in the region of $500 per month - and the limit on the number of IP addresses they'll manage on the firewall is also a real pain. My thinking is that it would be better and cheaper to: stay with the same host, since the dedicated box is fine; get an Amazon AWS account and use their servers to manage backups (there are a number of good tools that can automate the process); and configure iptables myself, so that I have complete control of the firewall (a baseline sketch follows). I want to know: Is a managed virtual firewall likely to be more secure than me configuring iptables? Is it, in your opinion, best to let someone else take care of backups? And from your experience, is there anything else I'm missing that warrants using managed hosting over a DIY service? I think there is some reluctance to give up managed hosting, since a managed host in effect takes responsibility for your server, whereas any hardware or security issue with a server that we manage would mean we are forced to hold our hands up when a client site goes down. That said, I personally don't think a managed host does that much in the day-to-day running of your server (backups are automatic, OS updates are carried out with ease, etc.).
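
    For the iptables piece, a minimal default-deny baseline is short enough to sketch here. The SSH/HTTP/HTTPS port choices are examples; adjust to the services the box actually runs, and keep console or KVM access handy before applying a DROP policy on a remote machine.

        #!/bin/sh
        # Minimal iptables baseline (sketch): allow loopback, established
        # traffic, and the listed services, then deny everything else inbound.
        iptables -F
        iptables -A INPUT -i lo -j ACCEPT
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A INPUT -p tcp --dport 22 -j ACCEPT    # SSH
        iptables -A INPUT -p tcp --dport 80 -j ACCEPT    # HTTP
        iptables -A INPUT -p tcp --dport 443 -j ACCEPT   # HTTPS
        iptables -P INPUT DROP
        iptables -P FORWARD DROP
        iptables -P OUTPUT ACCEPT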

    Read the article

  • Discovery methods

    - by Owen Allen
    In Ops Center, asset discovery is a process in which the software determines what assets exist in your environment. You can't monitor an asset, or do anything to it through Ops Center, until it's discovered. I've seen a couple of questions about how to discover various types of asset, so I thought I'd explain the discovery methods and what they each do.

    Find Assets - This discovery method searches for service tags on all known networks. Service tags are small files on some hardware and operating systems that provide basic identification info. Once a service tag has been found, you provide credentials to manage the asset. This method can discover assets quickly, but only if the target assets have service tags.

    Add Assets with discovery profile - This method lets you specify targets by providing IP addresses, IP ranges, or hostnames, as well as the credentials needed to connect to and manage these assets. You can create discovery profiles for any type of asset.

    Declare asset - This method lets you specify the details of a server, with or without a configured service processor. You can then use Ops Center to install a new operating system or configure the SP. This method works well for new hardware.

    These methods are all discussed in more detail in the Asset Management chapter of the Feature Reference guide.

    Read the article

  • How can I throttle the bandwidth consumed by Windows Automatic Updates?

    - by eleven81
    We have many Windows XP computers sharing one connection to the internet. These machines are set to download all available automatic updates and then prompt the user to install them. Whenever Patch Tuesday rolls around, our internet usage pegs out and stays that way for most of the day, sometimes into the following Wednesday. This hurts! I still want the machines to start downloading the updates as soon as they are available; if it takes until Thursday or Friday before the last updates are downloaded, that's still better than the latency and dropped connections we are seeing now as a result of the internet connection bottleneck. What can I do to throttle back how rapidly each machine downloads the updates, while still having them all start the download process as soon as the updates are available? I have no desire to run a WSUS server. Also, the internet connection is more than adequate whenever there are no updates to download.
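
    Automatic Updates fetches its downloads through BITS, so capping BITS caps the updates. A per-machine sketch via the policy registry keys follows; the key and value names mirror the "limit BITS transfer rate" Group Policy as the editor recalls it for XP SP2 and later, and the rate (in Kbps) and schedule hours are assumptions to tune - verify against the Group Policy editor before rolling this out.

        :: Sketch: cap BITS background transfers (used by Automatic Updates)
        :: to 50 Kbps between 08:00 and 17:00 on each client machine.
        reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\BITS" /v EnableBITSMaxBandwidth /t REG_DWORD /d 1 /f
        reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\BITS" /v MaxTransferRateOnSchedule /t REG_DWORD /d 50 /f
        reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\BITS" /v MaxBandwidthValidFrom /t REG_DWORD /d 8 /f
        reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\BITS" /v MaxBandwidthValidTo /t REG_DWORD /d 17 /f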

    Read the article

  • AirPort Express configuration

    - by Christina
    We are trying to set up remote access to a computer that houses the server for a particular program we are running. The program's documentation says we need to configure the office router: in the firewall settings, open ports 5345-5351 (TCP only). Port forwarding: you will also need to forward the same range of ports (5345-5351) to the computer running the server. This typically requires that the computer running the server be assigned a static IP on the local network. We are having trouble figuring out which IP address we actually need to use on the client side of this program in order to reach the server computer. Can someone walk through this process? We are working on Mac OS X 10.5. Thank you in advance!
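
    A sketch of the static-IP half on OS X 10.5, using networksetup from Terminal; the service name and addresses are examples for a typical 10.0.1.x AirPort network, so substitute your own (list service names first with the -listallnetworkservices call). The port range then gets forwarded to that address in AirPort Utility, and remote clients connect to the router's public IP.

        # Sketch: pin the server Mac to a fixed LAN address (values are examples)
        networksetup -listallnetworkservices
        sudo networksetup -setmanual "Ethernet" 10.0.1.50 255.255.255.0 10.0.1.1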

    Read the article

  • GLES2.0 3D Android game performance and multithreading the update?

    - by Ofer
    I have profiled my mixed Java/C++ Android game and got the following result: https://dl.dropbox.com/u/8025882/PompiDev/AndroidProfile.png As you can see, the pink block is a C++ function that updates the game. It does things like updating the logic, but mostly it generates a "request list" for rendering. That is, I generate draw lists in C++ and then send them to Java to process and draw using GLES 2.0. Since then I have improved the update from 9 ms down to about 7 ms, but I would like to ask whether I would benefit from multithreading the update. As I understand the diagram, the function that takes the most time is the one whose color appears on the timeline, so the pink area is taken mostly by the update. The other area has MainOpenGL.Handle as its main contributor (which is my Java function), but since it isn't drawn at the top of the diagram, can I conclude that other things are happening at the same time that use the CPU? Or even GPU work that isn't shown in this diagram? I am not sure how the GPU works here: does it calculate in parallel to the CPU, or is its usage counted as part of the CPU time on an SoC? In case GPU work does happen in parallel to the CPU, I would guess that if I run this C++ update in parallel to the thread that makes the OpenGL calls, I might make use of "dead" CPU time due to GPU stalling, or get the GPU calls processed earlier because they won't have to wait for the update to finish. How do you suggest improving performance based on that? Thanks.

    Read the article

  • What is "queued Windows Error Reporting"?

    - by Rewinder
    I was cleaning up my laptop hard disk, running Windows 7, and as part of the process I ran the Disk Cleanup utility. To my surprise I saw two items in the list that were quite large (both ~300 MB): "Per user queued Windows Error Reporting" and "System queued Windows Error Reporting". I guess I had never noticed these because they were never that big. So, what are these items? Is there any particular reason why they became so large all of a sudden? And finally, is it safe to remove them?
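
    These cleanup entries correspond to crash-report folders that Windows Error Reporting has queued on disk but not yet uploaded. A sketch for eyeballing them before deleting; the paths below are the usual WER queue locations on Windows 7, but treat them as assumptions and let Disk Cleanup do the actual removal.

        :: Sketch: inspect what the two Disk Cleanup items are counting
        dir /s "%ProgramData%\Microsoft\Windows\WER\ReportQueue"
        dir /s "%LOCALAPPDATA%\Microsoft\Windows\WER\ReportQueue"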

    Read the article

  • How do you maintain focus when a particular aspect of programming takes 10+ seconds to complete?

    - by Jer
    I have a very difficult time staying focused on what I'm doing (programming-wise) when something (compilation, startup time, etc.) takes more than just a few seconds. Anecdotally that threshold seems to be about 10 seconds (and I recall reading a study that said the same thing, though I can't find it now). So what typically happens is: I make a change and then run the program to test it. That takes about 30 seconds, so I start reading something else, and before I know it 20 minutes have passed; then it takes (if I'm lucky!) another 10+ minutes to deal with the context switch back into programming. It's no exaggeration to say that some things that should take me minutes literally take hours to complete. I'm very curious what other programmers do to combat this tendency (or whether I'm unique and they don't have it?). Suggestions of any type at all are welcome - anything from "sit on your hands after hitting the compile button", to mental tricks, to "if it takes 30 seconds to start up something to test a change, then something's wrong with your development process!"

    Read the article

  • "Missing Operating System" after installing Ubuntu 12.04 from a CD on a Macbook Pro

    - by Pierre
    I followed this guide to install Ubuntu 12.04 on my MacBook Pro 8,2 (late 2011): https://help.ubuntu.com/community/MactelSupportTeam/AppleIntelInstallation I used a CD. I synced the partition table in rEFIt, and that went fine. I do have an icon to boot into Linux, but when I launch it, after a few seconds "Missing Operating System" is displayed, and that's all. How can I fix this? The only clue I see is this note in the guide: On the last dialog of the installer, be sure to click the "Advanced" button and choose to install the boot loader (GRUB) to your root Ubuntu partition, for example /dev/sda3. This will be the only partition with the EXT4 file system. In the Ubuntu 12.04 installer there is no such option, but there is a dropdown menu to select where the GRUB bootloader should be installed. It was /dev/sda by default, but I selected my root Ubuntu partition (in my case, /dev/sda5). I got a warning message (though it was the same warning even when I selected /dev/sda), and I continued the installation. Thanks in advance for your help!
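
    If the partition's boot sector ended up without GRUB, one hedged way to redo just that step is from the 12.04 live CD. This is a sketch: the device name /dev/sda5 comes from the question, and --force is needed because GRUB 1.99 discourages partition installs (it falls back to blocklists). Re-run the rEFIt partition sync afterwards.

        # Sketch from the live CD: reinstall GRUB to the root partition
        sudo mount /dev/sda5 /mnt
        sudo grub-install --force --boot-directory=/mnt/boot /dev/sda5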

    Read the article

  • Wrong kernel running after install

    - by ticktockhouse
    I have installed Ubuntu 14.04 from UNetbootin. When it reboots after the install, uname -r says: 3.5.0-17-generic. This means that no modules have been loaded for the kernel that is actually installed (3.13.0-32-generic). Does anyone know why this kernel would be running after the install process? Is it an artifact of using UNetbootin? Booting into the UNetbootin image gives the correct kernel, and thus the modules load. Knowing why is one thing, but I'm not sure how to remedy it now. Because no modules are loaded, I can't connect to the network or attach a USB drive. I've tried update-grub, which seems to find the correct kernel but doesn't seem to make the system boot from it. I've also tried selecting the kernel at boot time using "Advanced Options for Ubuntu", and the 3.13.x kernel is the only one listed. Selecting it still led to the 3.5.x kernel stubbornly loading. I'm a fairly accomplished sysadmin, but this one has me flummoxed :) Can anyone help?
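
    A few checks worth sketching, on the assumption that the BIOS is loading a leftover boot loader and kernel from somewhere other than the installed /boot (a second disk or an old partition is a common culprit). The device name is an example; confirm which disk the machine actually boots from.

        # Sketch: confirm which kernels and grub entries actually exist
        dpkg -l 'linux-image-*' | grep '^ii'
        ls /boot/vmlinuz-*
        grep -E 'menuentry|vmlinuz' /boot/grub/grub.cfg
        # then make sure grub is installed on the disk the BIOS boots from
        sudo grub-install /dev/sda && sudo update-grub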

    Read the article

  • How to create a new public AMI for Windows?

    - by user67081
    I am trying to make a Windows 2008 AMI that is a nice clean 64-bit starter pack (IIS, SQL Express, ASP.NET MVC, etc.), and I would like to make it a public AMI when it's done. Therein lies the problem: I can make an AMI from my image, no problem, but I can't seem to get new instances to generate their own passwords. The result is a new instance that works great with my password. So what is the process for turning my EBS-backed instance into an AMI that will auto-generate its password and do all the other setup steps that Amazon wants to go through when a new instance starts up? Thanks in advance.
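
    The usual answer is to re-enable the run-once plugins in the EC2Config service before imaging; otherwise they stay spent from the instance's own first boot. A sketch, with the path and plugin name given as the editor recalls them - verify both in your image:

        :: Sketch: in C:\Program Files\Amazon\Ec2ConfigService\Settings\config.xml,
        :: set the Ec2SetPassword plugin's <State> to Enabled (it flips itself
        :: back to Disabled after first boot), then create the AMI from the
        :: stopped instance so new launches regenerate their passwords.
        notepad "C:\Program Files\Amazon\Ec2ConfigService\Settings\config.xml"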

    Read the article

  • One monitor getting spilled over into other monitor: how to do a 100% reset of gnome graphics configuration

    - by Paul Nathan
    I had to kill a VMware process, and afterwards my monitor configuration is buggy. I have two monitors in a side-by-side configuration; the right-hand monitor is the secondary one. On its right edge there are about 50 pixels showing from the left side of the left-hand monitor (as if the desktop wrapped around). Further, my mouse clicks register about 50 pixels sideways from where they should; it's as if those 50 pixels between monitors got gobbled. What have I tried? I've reset the screen configuration in multiple ways, using xrandr, the multiple-monitors app, etc. The problem persists in different side-by-side configurations and also for another user, but it does not occur under Xfce. Resetting the window manager with the Compiz reset app does not fix it. I've concluded the burn-to-the-ground approach is likely best, and would like to do a 100% reset of my graphics settings. It's an Intel integrated chipset. Removing ~/.config/monitors.xml did not work. Also, interestingly, the mouse can mouse over the 50 errant pixels on the right-hand side of the right-hand monitor; I hypothesize a compositing problem at the layer where the background, selection, and clicks are caught. Inverting the right-hand monitor removes the issue, but renders the screen unusable. Even more data points: this happens in KDE as well; sometimes logging into GNOME and running xrandr --output DVI1 --auto resets it, but the issue immediately reappears when I press Alt-Tab; and with the Compiz Application Switch plugin turned on, the workspace is "pushed back" a bit and the slice on the right-hand side follows it. I'm wondering if it's a flaw in the Compiz workspace compositing configuration; I suspect the error is there. I'm on 11.10.
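
    On 11.10, Compiz keeps its settings in GConf, so a full reset of both the Compiz tree and the saved monitor layout is sketchable. The GConf path is an assumption to verify (gconftool-2 -R /apps should show it), and the output names in the xrandr line are examples from the question - check yours with xrandr -q.

        # Sketch: burn the compiz + monitor config down, then re-lay-out
        gconftool-2 --recursive-unset /apps/compiz-1   # path assumed for 11.10
        rm -f ~/.config/monitors.xml
        xrandr --output LVDS1 --auto --pos 0x0 --output DVI1 --auto --right-of LVDS1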

    Read the article

  • Filtering Client IP from Access Log for Urchin

    - by Ram Prasad
    I have some Apache logs to process, and since the webserver sits behind two levels of reverse proxies, I am getting two IPs in the X-Forwarded-For header. For example:

        208.34.234.55, 127.0.0.1 - - [29/Oct/2009:21:38:13 -0500] "GET /monkey.html HTTP/1.0" 200 20845 0 0 "http://www.monkey.com/" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.15) Gecko/2009101601 Firefox/3.0.15 (.NET CLR 3.5.30729)"

    Now, how do I filter this in Urchin (or remove it in Apache's logging) so that 127.0.0.1 is dropped from processing? Currently Urchin cannot cope with the multiple IP addresses, so it does not log the remote IP.
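
    If post-processing the logs is acceptable, one hedged fix is to strip the trailing proxy hop before handing the file to Urchin. The sed line below is a sketch keyed to the sample above (a leading field of "real-ip, 127.0.0.1"); adapt the pattern if the proxy chain changes.

        # Sketch: drop the ", 127.0.0.1" proxy hop from the address field
        sed 's/^\([^,]*\), 127\.0\.0\.1 /\1 /' access.log > access.urchin.log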

    Read the article

  • How to approach scrum task burn down when tasks have multiple peoples involvement?

    - by AgileMan
    In my company, a single task can never be completed by one individual; a separate person QAs and code-reviews each task. This means each individual gives an estimate, per task, of how long their part will take. The problem is: how should I approach burn-down? If I aggregate the hours together, assume the following estimate:

        10 hrs - Dev time
        4 hrs - QA
        4 hrs - Code review
        Task estimate = 18 hrs

    At the end of each day I ask that the task be updated with "how much time is left until it is done". However, each person generally thinks only about their own part. Should they mark their own remaining effort and then add the other estimates to it? How are you handling this? UPDATE: To clarify, at my organization each task within a story requires three people: someone to develop the task (unit tests, etc.), a QA specialist to review it (primarily integration and regression tests), and a tech lead to do code review. I don't think there is a right or wrong way, but this is our way, and that won't be changing. We work as a team to complete even the smallest slice of a story whenever possible. You cannot actually test whether something works until it is dev-complete, and you cannot review the quality of the code either, so the best you can do is split things into small logical slices so that the bare minimum of functionality can be tested and reviewed as early in the process as possible. My question, for those who work this way, is how to burn down a task that is set up like this. Unless a task has its own sub-tasks (which JIRA doesn't allow), I'm not sure of the best way to track "what's left" on a daily basis.

    Read the article

  • What are the challenges of implementing an ERP system?

    When a company decides to roll out an ERP system as part of its core business processes, it must consider and provide solutions for the following general challenges. It is important to note that this list is generic, and that every ERP rollout is as distinct as the company trying to implement the system:

    Upper management support
    Reengineering existing business processes and applications
    Integration of the ERP with other existing departmental applications
    Implementation time
    Implementation costs
    Employee training

    I recently read an article by Mano Billi called "What are the major challenges in implementing ERP?" where he outlines the common challenges to implementing an ERP system within a company. He discusses items like upper management support, altering existing systems, and how ERPs integrate with other independent systems. In addition, he covers selecting an ERP vendor, ERP consultants, and the effects of an ERP system on employees. I personally think he did a great job of outlining common issues that can cause an ERP implementation to fail, or to be less effective than it could be, if the challenges are not taken into account appropriately.

    Read the article

  • Firefox (on Windows) constantly consuming 10% CPU - is there an add-on to find the rogue tab?

    - by tbone
    I often have many Firefox windows open, each with many tabs. Now and then one of the tabs runs a web page that for some reason consumes a lot of resources. Right now I have a tab somewhere that is constantly consuming 10% of the CPU. That would be fine on its own, since my computer can easily handle it (see specs below; all other apps are responsive), but it seems to slow Firefox down: everything, everywhere is extremely laggy in Firefox, and I can see pauses while I type this. Is there a way I can isolate separate Firefox instances (or even tabs) into separate processes, so one rogue tab doesn't bog down Firefox across the entire system? Or maybe an add-on that can identify tabs consuming lots of CPU, or a way to "shut down" activity in tabs you haven't used in a while? Firefox 3.6.10, Windows 7 Ultimate 64, i7 920 @ 3.6 GHz, 12 GB RAM.

    Read the article

  • Linux - How to completely clean up a software installation

    - by Jonathan Rioux
    Hi, I am running Debian and recently upgraded to Squeeze. Since then I have had so many problems with Webmin that I decided to remove it using: apt-get remove webmin. I then downloaded the sources of Webmin 1.530 and compiled them, but the installation process was stuck for an hour, so I cancelled it. I even tried to install it using the .deb file, without success (the installation gets stuck for hours). Now I cannot install Webmin at all since I uninstalled it. I would like to know how I can completely clean up any traces of Webmin on my server; then I will retry the installation.
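
    A hedged cleanup sketch: purge the packaged remnants (purge, unlike remove, also deletes the package's config files), then delete Webmin's own directories. The paths below are the usual Webmin locations on Debian, but confirm each one exists and holds only Webmin files before removing it.

        # Sketch: remove packaged config leftovers, then Webmin's own trees
        apt-get purge webmin
        rm -rf /etc/webmin /var/webmin /usr/share/webmin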

    Read the article

  • Server suddenly running out of entropy

    - by Creshal
    Since a reboot yesterday, one of our virtual servers (Debian Lenny, virtualized with Xen) is constantly running out of entropy, leading to timeouts etc. when trying to connect over SSH / TLS-enabled protocols. Is there any way to check which process(es) is(/are) eating up all the entropy?

    Edit - what I tried:
    Adding additional entropy sources: time_entropyd, rng-tools feeding urandom back into random, pseudorandom file accesses - netted about 1 MiB additional entropy per second, problems still persisted.
    Checking for unusual activity via lsof, netstat and tcpdump - nothing. No noticeable load or anything.
    Stopping daemons, restarting permanent sessions, rebooting the entire VM - no change in behaviour.

    What in the end worked: waiting. Since about yesterday noon, there are no connection problems anymore. Entropy is still somewhat low (128 Bytes peak), but TLS/SSH sessions have no noticeable delay anymore. I'm slowly switching our clients back to TLS (all five of them!), but I don't expect any change in behaviour now.
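
    For the measurement side, the kernel exposes the pool level directly, and fuser can at least show which processes hold the random devices open; a sketch (the watch interval and device list are just defaults to adjust):

        # Sketch: watch the entropy pool drain in real time
        watch -n 1 cat /proc/sys/kernel/random/entropy_avail
        # see which processes currently have the random devices open
        fuser -v /dev/random /dev/urandom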

    Read the article

  • SATA controller installed but not working? (No drives show up/Don't see card's BIOS)

    - by johnnycakes
    Hi, I have an old Promise FastTrak S150 TX4 SATA controller card. I put it in an old machine running Windows Server 2003 and booted; the new hardware was detected and I installed the drivers. Now, in Device Manager under "SCSI and RAID Controllers", I see "Win Server 2003 Promise FastTrak S150 TX4 Controller" and "Win Server 2003 Promise RAID Console SCSI Processor Device". I previously had the card in a machine that is now dead. When I booted that machine, during the boot process I would see the card's info displayed along with the attached drives; boot would finish and my drives would be available. When I boot this new machine, I never see that screen/text, and no hard drives are available/visible. What am I missing? Thanks.

    Read the article

  • iTunes randomly plays songs while importing, and can't be stopped

    - by Steve Bennett
    I'm importing a gazillion songs over the network into iTunes. Every now and then, it starts playing the song it's currently importing. And because iTunes is basically frozen up during the import process, I can't actually stop it. Then it will suddenly jump to another song a bit later on. Pretty irritating. Is it a known issue? Anything I can do about it? Versions (oops): iTunes 10.5 (141), OS X 10.6.8

    Read the article

  • How do I replace a harddrive that is in a two-way mirror storage space on Windows 8?

    - by Jon
    I have a storage space in Windows 8 doing a two-way mirror across three hard drives. The sizes are 297 GB, 189 GB, and 70 GB. I would like to replace the 70 GB drive with a larger one. My thought was to remove that drive from the space via the Storage Spaces control panel, shut down, replace the drive with the bigger one, reboot, and add the new drive to the storage space. However, I can't find any option to remove a drive from a storage space in the control panel. Should I just shut down and swap out the small drive, or is there another process for safely replacing the old drive? (By the way, the old drive is still operational.)
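
    The Windows 8 control panel hides this, but the Storage Spaces PowerShell cmdlets expose a retire-then-remove path. The sketch below is untested and the friendly names are examples (list yours with Get-PhysicalDisk and Get-StoragePool); the idea is to retire the small disk, let the mirror repair onto the remaining two, and only then pull it.

        # Sketch (PowerShell): retire the 70 GB disk, re-mirror, then remove it
        Get-PhysicalDisk
        Set-PhysicalDisk -FriendlyName "PhysicalDisk3" -Usage Retired
        Get-VirtualDisk | Repair-VirtualDisk
        Remove-PhysicalDisk -StoragePoolFriendlyName "Pool" -PhysicalDisks (Get-PhysicalDisk -FriendlyName "PhysicalDisk3")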

    Read the article

  • Automation of software installation - should I ask for text or file?

    - by Denis
    I am preparing a software installation in a Windows environment for my application. During installation it asks for a Subscriber ID, which is entered into a text field, and I am wondering whether that is the best approach for mass installations. I know that for mass installations IT teams use systems like Microsoft System Center, which automate deployment, but I do not know much about the capabilities of such systems. Can they automate data entry into text fields? Would it be better to change the installation process to ask for a file containing the Subscriber ID rather than typed-in text? By the way, I am looking for beta testers for my software. This software lets users view Microsoft Project files without having Microsoft Project installed.
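
    Deployment systems generally do not type into installer text fields; they run installers silently with parameters. If the installer is (or can be) an MSI, the conventional route is a public property on the msiexec command line, which tools like System Center can pass verbatim. A sketch; SUBSCRIBERID is a hypothetical property name your installer would have to define (MSI public properties must be uppercase):

        :: Sketch: silent install with the ID passed as an MSI public property
        msiexec /i MyApp.msi /qn SUBSCRIBERID=12345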

    Read the article

  • How can I update Firefox add-ons automatically?

    - by Maelstrom
    Similar to this question, is it possible to update installed add-ons via the command line? I'm running YSlow with beacon reporting as a nightly cron job under OS X:

        /Applications/Firefox.app/Contents/MacOS/firefox-bin -no-remote -P YSlow http://www.example.com/ &
        PID=$!
        sleep 300
        kill $PID

    This dumps Firefox into the background and grabs the PID, waits 300 seconds (for the page to load), then kills it. If there is an update pending, the browser "hangs" waiting for a confirmation. If I do click on the "install updates" link, everything works, and then Firefox launches a new process - the $! returned by the shell is no longer valid. Can I update an add-on from the command line without confirmation? Can I curl the XPI into a file and install it without confirmation?
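
    One hedged workaround for Firefox 3.6-era profiles: fetch the XPI yourself and drop it into the profile's extensions directory, named after the add-on's ID, which Firefox picks up on the next launch. Whether it still prompts varies by version, and the profile path, URL, and add-on ID below are all examples to replace with your own.

        # Sketch: pre-stage the updated XPI in the cron profile (names are examples)
        curl -L -o /tmp/yslow.xpi "https://example.com/yslow-latest.xpi"
        cp /tmp/yslow.xpi ~/Library/Application\ Support/Firefox/Profiles/abcd1234.YSlow/extensions/yslow@yahoo-inc.com.xpi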

    Read the article
