Search Results

Search found 9320 results on 373 pages for 'task managers'.


  • Parallel Tasks in .NET 3.0

    Provide a mechanism to execute a list of tasks in parallel on multiple threads and communicate back to the calling thread useful state such as exceptions, timeouts and successful task completion.
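
    A minimal sketch of the pattern described, using only the threading primitives available in .NET 3.0 (ThreadPool, ManualResetEvent, Interlocked); the TaskResult and ParallelRunner names are illustrative, not taken from the article:

        using System;
        using System.Collections.Generic;
        using System.Threading;

        public class TaskResult
        {
            public bool Completed;      // set when the work item finished normally
            public Exception Error;     // set when the work item threw
        }

        public static class ParallelRunner
        {
            // Queues each work item on the thread pool and blocks the calling
            // thread until all items finish or the timeout elapses.
            public static IList<TaskResult> RunAll(IList<ThreadStart> actions, TimeSpan timeout)
            {
                TaskResult[] results = new TaskResult[actions.Count];
                ManualResetEvent allDone = new ManualResetEvent(false);
                int remaining = actions.Count;

                for (int i = 0; i < actions.Count; i++)
                {
                    int index = i;                      // capture the loop variable
                    results[index] = new TaskResult();
                    ThreadPool.QueueUserWorkItem(delegate
                    {
                        try { actions[index](); results[index].Completed = true; }
                        catch (Exception ex) { results[index].Error = ex; }
                        finally
                        {
                            // the last task to finish releases the waiting caller
                            if (Interlocked.Decrement(ref remaining) == 0)
                                allDone.Set();
                        }
                    });
                }

                allDone.WaitOne(timeout, false);        // returns without Set() on timeout
                return results;
            }
        }

    The calling thread then inspects each TaskResult for an exception or for an item that never completed (i.e. a timeout).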

    Read the article

  • How to Use Directory Submission Effectively

    Optimizing your site is not an easy task, because search engine algorithms are constantly changing and online competition is constantly increasing. You need a strategic approach to search engine optimization, and it begins with directory submission: directories exist for the sole purpose of giving away free backlinks. They form the base of your link building and are extremely useful for new sites.

    Read the article

  • Some Basic Points About Search Engine Optimisation

    There are many people who think that doing search engine optimisation of their website or page to improve its ranking in the various search engines is a highly skilled task that needs a lot of effort and is also very time-consuming. Thus, they consider it to be beyond their capabilities. This, however, is far from true.

    Read the article

  • Unable to back up SQL Server databases using a maintenance plan

    - by Stephen Jennings
    I am trying to create a maintenance plan that will run automatically and back up my SQL Server 2005 databases. I create a new maintenance plan, add a "Back Up Database Task", select all databases, and choose a path to back up to. When I save and try to execute this plan, I get the following error message:

      Execution failed. See the maintenance plan and SQL Server Agent job history logs for details.
      Job 'Backup.Subplan_1' failed. (SqlManagerUI)
      Program Location: at Microsoft.SqlServer.Management.SqlManagerUI.MaintenancePlanMenu_Run.PerformActions()

    I've checked the maintenance plan log, the agent log, and just about every log file I can find, and there are no entries at all to help me figure out why this is failing. If I right-click on a specific database and select "Back Up", the task succeeds. I tried changing the plan to back up just that one database and it still failed. I've tried running the plan with both Windows authentication and SQL Server authentication with the sa account. I also tried specifically granting the SQL Server Agent user account full privileges on the backup folder, but it still failed. While searching the web for clues, the only solution I've run across so far suggests running sp_configure 'allow_update', 0. I tried this, but allow_update was already set to 0 and it did not fix the problem. The Windows server and SQL Server have all updates applied to them. Thanks for any suggestions!
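
    For reference, the manual backup that succeeds when right-clicking a database corresponds to a plain T-SQL BACKUP DATABASE statement, which can also be scheduled as a SQL Server Agent job step as a stopgap while the maintenance plan is being debugged (the database name and path below are illustrative):

        -- Back up a single database to disk; WITH INIT overwrites the backup set,
        -- CHECKSUM validates pages as they are written (SQL Server 2005 syntax).
        BACKUP DATABASE [MyDatabase]
        TO DISK = N'D:\Backups\MyDatabase.bak'
        WITH INIT, CHECKSUM, STATS = 10;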

    Read the article

  • Temp file folder full, but clearing it out doesn't seem to work

    - by Vegar Westerlund
    I got this error on our build server when trying to run an MSBuild task for our test suite:

      MSB6003: The specified task executable could not be run. MSB5003: Failed to create a temporary file. Temporary files folder is full or its path is incorrect. The directory name is invalid. [C:\Users\swdev_build\bamboo-agent-home\xml-data\build-dir\XXXX-ZZZZZZ-JOB1\build.xml]

    Using the power of Google, this appears to be a problem with the %TEMP% folder running out of temp file names, apparently because a 4-digit hex name is used (for a total of 65535 temporary files). The problem is that the error persists even after going into the %TEMP% folder, deleting everything, and rebooting the machine. Does anyone know how to fix the issue and, more importantly, how to prevent it from happening again? What is the preferred way of cleaning up these temp files? Make it a part of the build process? Update: Actually, I cleared the TEMP folder of the only local user, but since the build server is running as the SYSTEM user, it probably has a temp folder somewhere else.
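
    A hedged sketch of a pre-build cleanup step, assuming the SYSTEM account's temp folder resolves to C:\Windows\Temp (the usual default); it deletes only stale *.tmp files older than a week rather than the whole folder:

        rem Clean temp files older than 7 days for the build account and for SYSTEM.
        rem forfiles ships with Windows Vista/Server 2008 and later.
        forfiles /p "%TEMP%" /m *.tmp /d -7 /c "cmd /c del /q @path"
        forfiles /p "C:\Windows\Temp" /m *.tmp /d -7 /c "cmd /c del /q @path"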

    Read the article

  • Lync CMS replication is failing for all Domain Computers

    - by Ravi Kanneganti
    I have Lync Server 2010 and Active Directory installed on two different Windows Server 2008 R2 machines. I have added a Windows 7 PC to AD, added this computer to the Trusted Application Servers Pool, and published the topology. I want to build a UCMA application to extend Lync Server functionality, and I have installed the UCMA 3.0 SDK on the same computer where Lync Server resides. However, CMS replication isn't happening: "Get-CsManagementStoreReplicationStatus" always reports UpToDate as "False" for my Windows 7 PC. I have even tried "Invoke-CSManagementStoreReplication", but nothing changed. This is the error message I can see in the log file:

      TL_WARN(TF_COMPONENT) [2]0500.07B8::04/05/2012-14:55:07.296.00000f85 (XDS_Replica_Replicator,FileDistributeTask.Execute:filedistributetask.cs(165))(000000000043B3FA) Could not distribute the file. Exception: [System.IO.IOException: The process cannot access the file because it is being used by another process. at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath) at System.IO.File.Move(String sourceFileName, String destFileName) at Microsoft.Rtc.Xds.Replication.Replicator.Common.FileDistributeTask.Execute()].
      TL_NOISE(TF_DIAG) [2]0500.07B8::04/05/2012-14:55:07.296.00000f86 (XDS_Replica_Replicator,ReplicaTaskContainer<T>.OnError:replicataskcontainer.cs(166))(00000000005C39D4) Enter.
      TL_INFO(TF_COMPONENT) [2]0500.07B8::04/05/2012-14:55:07.296.00000f87 (XDS_Replica_Replicator,ReplicaTaskContainer<T>.OnError:replicataskcontainer.cs(171))(00000000005C39D4) Task error callback is about to be called.
      TL_VERBOSE(TF_DIAG) [2]0500.07B8::04/05/2012-14:55:07.296.00000f88 (XDS_Replica_Replicator,PerReplicaTaskManager<T>.HandleTaskError:perreplicataskmanager.cs(230))(000000000385E79C) Enter.
      TL_INFO(TF_COMPONENT) [2]0500.07B8::04/05/2012-14:55:07.296.00000f89 (XDS_Replica_Replicator,PerReplicaTaskManager<T>.HandleTaskError:perreplicataskmanager.cs(234))(000000000385E79C) Task encountered an error: [ReplicaTaskContainer<FileDistributeTask>{FileDistributeTask{E:\RtcReplicaRoot\xds-replica\from-master\data.zip, E:\RtcReplicaRoot\xds-replica\working\replication\from-master\data.zip, Access failed. (E:\RtcReplicaRoot\xds-replica\from-master\data.zip)}, FileDistributeTask{E:\RtcReplicaRoot\xds-replica\from-master\data.zip, E:\RtcReplicaRoot\xds-replica\working\replication\from-master\data.zip, }}]
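
    Since the exception is a file lock ("being used by another process") on data.zip, one way to identify the process holding it is the Sysinternals Handle utility -- an assumption that the tool is downloaded separately; it is not part of Lync or Windows:

        rem Lists every process with an open handle to a path containing "data.zip"
        handle.exe data.zip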

    Read the article

  • RoboCopy fails with "the specified network name is no longer available"

    - by Justin Scott
    We have a scheduled task that runs robocopy periodically to mirror a rather large folder structure from one server to another (thousands of folders, 100,000+ files, 50+ GB in size). There is a share on the receiving server where the mirror gets stored, and we're running the task from the origin server, connecting out to the share on the receiving end. Both servers run Windows Server 2003 and are connected to the same network switch (100 Mbps). The process will sometimes complete all the way through without error. More often than not, however, at some point during the process (seemingly at random), robocopy will fail with the error "The specified network name is no longer available". It will wait 30 seconds, try the file again, and eventually give up after a number of retries. The process repeats at the next scheduled interval and may complete... or not. When this occurs, I am not able to access the share at all on the destination server from anywhere on the network for up to 30 minutes. There is nothing else on the network using this share. My question is: what does this error mean specifically? Why is the share "dropping off" and becoming inaccessible? Is there a way to prevent it and make the file mirroring more stable?
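
    While this doesn't explain the share disappearing, capping the retry behaviour and logging each run at least keeps a hung job from retrying for hours and makes the failure point visible; a sketch with illustrative paths (all switches shown are standard robocopy options):

        rem Mirror the tree, retry each failed file 5 times at 30-second intervals,
        rem use restartable mode for large files, and append a per-run log.
        robocopy D:\Data \\receiver\mirror /MIR /Z /R:5 /W:30 /NP /LOG+:C:\Logs\mirror.log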

    Read the article

  • Unable to back up SQL Server databases using a maintenance plan

    - by Diogo Lopes
    I am trying to create a maintenance plan that will run automatically and back up my SQL Server 2005 databases. I create a new maintenance plan, add a "Back Up Database Task", select all User databases, and choose a path to back up to. IMAGE in http://www.freeimagehosting.net/uploads/16be7dce43.jpg [new user limitation] When I save and try to execute this plan, I get the following error message:

      Execution failed. See the maintenance plan and SQL Server Agent job history logs for details.
      Job 'Backup.Subplan_1' failed. (SqlManagerUI)

    I've checked the maintenance plan log, the agent log, and just about every log file I can find, and there are no entries at all to help me figure out why this is failing. If I right-click on a specific database and select "Back Up", the task succeeds. I tried changing the plan to back up just that one database and it still failed. I've tried running the plan with both Windows authentication and SQL Server authentication with the sa account. I also tried specifically granting the SQL Server Agent user account full privileges on the backup folder, but it still failed. Thanks for any suggestions!
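
    When the SSMS dialog shows only the generic SqlManagerUI failure, the underlying step error is often still recorded in msdb; a hedged query to pull the most recent history rows for the plan's Agent job (the job name filter is illustrative):

        -- Most recent job-step messages for the maintenance plan's Agent job
        SELECT TOP 20 j.name, h.step_id, h.step_name, h.run_status, h.message
        FROM msdb.dbo.sysjobhistory AS h
        JOIN msdb.dbo.sysjobs AS j ON j.job_id = h.job_id
        WHERE j.name LIKE 'Backup%'
        ORDER BY h.instance_id DESC;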

    Read the article

  • Ghost Solution Suite: Booting over PXE to WinPE for re-imaging, then booting to installed OS

    - by uberdanzik
    I have 40 networked computers that need to be re-imaged each night over the network via an automatic and unattended process. This resets the computers to a default state and updates them to the latest software loads. I'm using Symantec Ghost Solution Suite 2.5. My process so far is the following:

    1. The client begins in a powered-down, Wake-on-LAN accepting state.
    2. A Ghost Console task uses Wake-on-LAN and PXE to boot the client into the WinPE environment.
    3. The client connects to the Ghost Console and re-images itself successfully.
    4. The client closes WinPE and restarts.

    PROBLEM: The client boots into the WinPE environment again instead of the newly installed OS (Windows 7). I need it to boot into Windows 7 once so that I can run a few configuration batch files, then shut down into the Wake-on-LAN state again. The Ghost Console even reports an error on the process, namely that it never rebooted into the OS. Right now it seems there is no option to stop it from booting into the PXE server's WinPE image after re-imaging. Even if I set up a PXE boot menu with other boot options, it's pointless, because it will always boot the default option. I would expect the Ghost Console task to be able to influence the PXE boot choice somehow. What do they expect us to do, turn the PXE server on and off manually? How can I get the client to boot to the OS after re-imaging? Thank you.

    Read the article

  • Lion server profile manager, device enrollment doesn't work

    - by user964406
    I am in the process of setting up Lion Server's Profile Manager to manage iPads on our local school network; I don't need to manage them while they are outside the network, and I have successfully had it working on my personal network. The school network is behind a proxy which we have no control over. I can get the iPads to view the mydevices page and install a trust cert, and I have managed to get an iPad to successfully install the remote management profile. After this, Profile Manager bugs out: it lists the active task 'new device (sending)' but is unable to complete it. If I click on the device in Profile Manager and try any of the actions, they all fail to complete. I am using the auto-generated certificates, and this works if I bring the server and iPad outside of the school network. Shortly after device enrollment, the system log on the Lion server reports the following (I replaced the actual IP address with INTERNALIP):

      Jun 4 08:40:53 mini sandboxd[760] ([778]): applepushservice(778) deny network-outbound INTERNALIP:8080
      Jun 4 08:40:53 mini sandboxd[760] ([778]): applepushservice(778) deny network-outbound INTERNALIP:8080
      Jun 4 08:40:53 mini applepushserviced[778]: Got connection error Error Domain=NSPOSIXErrorDomain Code=1 "The operation couldn't be completed. Operation not permitted" UserInfo=0x7fa483b1a340 {NSErrorFailingURLStringKey=https://albert.apple.com/WebObjects/ALUnbrick.woa/wa/deviceActivation?device=Mac OS, NSErrorFailingURLKey=https://albert.apple.com/WebObjects/ALUnbrick.woa/wa/deviceActivation?device=Mac OS}
      Jun 4 08:40:53 mini applepushserviced[778]: Failed to get client cert on attempt 2, will retry in 15 seconds

    Does anyone have any ideas on how to get past this stage? Thanks in advance.

    Read the article

  • Domain Environment + Certificate Authority + Server 2008 R2

    - by user1110302
    I have recently been delegated the task of setting up a CA in our domain environment, and have a question about why Microsoft does some things the way it does, lol. I have been reading up on the best practices for this task and have concluded that an ideal CA environment has one "offline" root CA and then two subordinate CAs for redundancy and for issuing the certs. That is all good; I understand how this works and why. But in messing with a sandbox I have set up, the way you add certificate authorities to a domain environment seems extremely trivial and against all of their best practices... Does anyone know what the purpose is of an Enterprise Root CA that is integrated into Active Directory? From what I have read, once you set up an Enterprise Root CA that is integrated into Active Directory, it stays with Active Directory for the long haul and must not be turned off, renamed, or touched under any circumstances. If this is true, that seems to go against the practice of setting up a standalone root CA, adding the subordinates, and then taking the root offline. Thanks for any feedback you may have to offer!

    Read the article

  • How to direct reverse proxy requests using wildcard vhosts

    - by HonoredMule
    I'm interested in running a reverse proxy with 2-3 virtual machines behind it. Each internal server will run multiple virtual hosts, and rather than manually configuring each individual vhost on the proxy (a variety of vhosts come and go too often for this to be practical), I would like to use something which can employ pattern matching in a sequential order to find the appropriate back-end server. For example:

    Server 1: *.dev.mysite.com
    Server 2: *.stage.mysite.com
    Server 3: *.mysite.com, dev.mysite.com, stage.mysite.com, mysite.com
    Server 4: *

    In the above configuration, task.dev.mysite.com would go to Server 1, dev.mysite.com to Server 3, yoursite.stage.mysite.com to Server 2, www.mysite.com to Server 3, and yoursite.com to Server 4. I've looked into using Squid, Varnish, and nginx so far. I have my opinions regarding their respective desirability and general suitability, but it's not readily apparent whether any of them can handle dynamic server selection in this manner without per-vhost configuration; a sketch of one possibility follows below. Apache, on the other hand, can do this handily and simply, but otherwise (aside from being well-known and familiar) seems poorly suited to a role that partly involves performance-sensitive serving. Performance isn't actually a major concern yet, but it seems foolish to use Apache if another system will perform far better and can also handle the desired 'hands-free' configuration. But so is frequently having to adjust the gateway for all production services and risk a network-wide outage... and so is setting oneself up for longer downtime later if Apache becomes a too-small bottleneck. Which of these (or other) reverse proxies can do it, or would do it best? And maybe I should post this as a separate question, but if Apache is the only practical option, how safe, reliable, and predictable is apache-mpm-event in Apache 2.2 (Ubuntu 12.04.1), particularly for a dedicated reverse proxy? As I understand it, the Event MPM was declared "safe" as of 2.4, but it's unclear whether reaching stability in 2.4 has any implications for the older (2.2) versions available in the official/stable package channels of various distros.
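
    For what it's worth, nginx's server_name selection (exact names first, then the longest matching wildcard) can express exactly this routing without any per-vhost blocks on the proxy. A minimal sketch, with illustrative upstream addresses:

        # Back-end pools (addresses are placeholders)
        upstream dev_pool     { server 10.0.0.1; }
        upstream stage_pool   { server 10.0.0.2; }
        upstream prod_pool    { server 10.0.0.3; }
        upstream default_pool { server 10.0.0.4; }

        server {                                  # Server 1: *.dev.mysite.com
            listen 80;
            server_name *.dev.mysite.com;
            location / { proxy_pass http://dev_pool; proxy_set_header Host $host; }
        }
        server {                                  # Server 2: *.stage.mysite.com
            listen 80;
            server_name *.stage.mysite.com;
            location / { proxy_pass http://stage_pool; proxy_set_header Host $host; }
        }
        server {                                  # Server 3: exact names win over the wildcards above
            listen 80;
            server_name mysite.com dev.mysite.com stage.mysite.com *.mysite.com;
            location / { proxy_pass http://prod_pool; proxy_set_header Host $host; }
        }
        server {                                  # Server 4: catch-all for everything else
            listen 80 default_server;
            server_name _;
            location / { proxy_pass http://default_pool; proxy_set_header Host $host; }
        }

    With this ordering, task.dev.mysite.com hits dev_pool (longest wildcard), dev.mysite.com hits prod_pool (exact match), and unknown names fall through to the default_server block.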

    Read the article

  • Disable write-protection on Micro SD

    - by Tim
    My task today is to open up and copy some files to 700 brand new micro SD cards. As I get going on this task, I am finding that some of the micro SD cards are telling me "sorry, this drive is write protected". To copy the files I am using a standard SD to micro SD card adapter and a USB SD card reader/writer. I have ensured that the lock switch is set to OFF on all of my adapters. As soon as I get a micro SD that says it is write protected, I can use the same adapter with another micro SD and it works fine, so I know the problem is not with my adapters. My question is: how can I disable the write protection on a micro SD card? This eHow article seems to indicate that there is also a physical switch on micro SD cards. However, I have personally never seen a micro SD with a physical switch, and none of the ones I am using today have said switch. Since these cards are brand new and thus empty, are the ones telling me they are write protected simply useless? Could this be caused by some sort of defect in the cards?
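
    If it is Windows rather than the card that has flagged the disk read-only, diskpart can sometimes clear the attribute; a sketch with an illustrative disk number (a card whose controller has locked itself at the hardware level cannot be fixed this way):

        rem Save as clear-readonly.txt and run from an elevated prompt: diskpart /s clear-readonly.txt
        rem The disk number below is illustrative - check the "list disk" output first.
        list disk
        select disk 2
        attributes disk clear readonly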

    Read the article

  • Explorer.EXE ordinal 423 not found in urlmon.dll after updates/IE8 install

    - by Zoot
    Setting up a brand new Dell Optiplex 980 with Windows XP SP3, and everything started up fine on the first boot. My first task was to install system updates, including IE8 and WGA. After the required reboot following the updates, I now get this error message:

      Explorer.EXE - Ordinal not found. The ordinal 423 could not be located in the dynamic link library urlmon.dll

    Per my cursory Google search, this forum thread places the blame squarely on IE8. The solution provided is to enter safe mode and remove IE8. Unfortunately, when I press F8 to choose to boot into safe mode, I only have the option of "Windows XP SP3 Professional" and no safe mode options. Any other ideas? Thanks in advance. FYI, I can get to the Windows Task Manager by holding down Control-Alt-Delete, but programs don't seem to run properly if you select them. I tried chatting with Dell Support, and we tried to initiate System Restore at c:\windows\system32\restore\rstrui.exe, but that had a similar "ordinal 423 not found in urlmon.dll" error.
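
    Two hedged things worth trying from the limping desktop, neither guaranteed: have System File Checker restore the mismatched urlmon.dll, or remove IE8 directly via its uninstaller instead of from Safe Mode (the uninstaller path below is the usual XP default, treat it as an assumption):

        rem Verify and repair protected system files (may prompt for the XP source files)
        sfc /scannow

        rem IE8's uninstaller on XP normally lives under %windir%\ie8\spuninst
        %windir%\ie8\spuninst\spuninst.exe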

    Read the article

  • Should one have a separate user account for work use? [closed]

    - by Tyler Wayne
    This question examines the practice of using a separate OS-level user account to divide work use from personal use (specifically, in a creative profession and on a personal computer). I recently left my in-the-flesh job to go to school, but I'm carrying on with the work remotely. I do all of my work on my laptop, and I currently have a separate user account called "Work" where I do exactly that. However, I'm now starting to question that practice. Because my hobby is the same as my job, I want to save notes of the things I learn while working. Because ideas come at any moment, I often want to throw something into my personal task manager's inbox and look at it again later. That task manager is well-suited to handle both the work and personal aspects of my life. Only my personal account has admin rights, but work sometimes requires me to install programs. My employer has no preference regarding my choice, so that is a non-issue. My work is essentially freelance web development, so advice given with that in mind will be much appreciated. Back up all opinion with some personal experience, please. Ideally, give a list of pros and cons and then name reasons for your position.

    Read the article

  • SQL Server Installation: Is it 32 or 64 bit?

    - by CapBBeard
    Recently I was performing an OS upgrade on one of our DB servers, moving from Server 2003 to Server 2008. The DBMS is SQL Server 2005. While reinstalling SQL on the new Windows installation, I went to another of our DB servers to verify a couple of settings. Now, I always thought this second server was Server 2003 x64 + SQL 2005 x64 (from what I'd been told), but I now have my doubts. I suspect it is in fact only 32-bit SQL, and I'd like to verify this. Here are some details:

    The OS is definitely 64-bit.
    xp_msver shows Platform as NT INTEL X86.
    SELECT @@VERSION shows Microsoft SQL Server 2005 - 9.00.4035.00 (Intel X86)...
    However, sqlservr.exe is not shown with '*32' in Task Manager; does anyone know why this is the case, if it is in fact 32-bit as claimed? Despite this, it does seem to be running out of the x86 Program Files folder.

    If I do the same checks on a confirmed 64-bit installation, it gives back the expected 64-bit readings, which can only prove that the server in question is running 32-bit. That being the case, the question arises of how much memory this '32-bit' install can use. Task Manager reports about 3.5 GB memory usage for sqlservr.exe (the server has 16 GB physical). I suspect that AWE has not been configured at all, and therefore the server will be significantly under-utilised (remembering that the OS is 64-bit) if SQL is simply using a 32-bit address space. Is this assumption correct? I feel the server should have SQL reinstalled as 64-bit in order to fully utilise the hardware platform; however, it is currently heavily in production, so this will be no easy task. I suspect we may just have to configure AWE correctly and let it be for the time being (unless this is a bad idea?). I apologise that this question is a little vague; I'm no SQL expert, just trying to get a handle on what's going on here.
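
    A hedged T-SQL sketch for both halves of the question: confirming the platform of the running instance, and, on a 32-bit instance, enabling AWE so it can address memory beyond its 4 GB virtual address space. The memory cap is illustrative, and the service account also needs the "Lock pages in memory" privilege for AWE to take effect:

        -- Confirm edition, build and platform of the running instance
        SELECT SERVERPROPERTY('Edition') AS Edition,
               SERVERPROPERTY('ProductVersion') AS Build,
               @@VERSION AS FullVersionString;

        -- Enable AWE and cap SQL Server's memory (illustrative 12 GB cap on a 16 GB box)
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'awe enabled', 1;
        RECONFIGURE;
        EXEC sp_configure 'max server memory (MB)', 12288;
        RECONFIGURE;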

    Read the article

  • Migrating 10-15 Websites Running Linux, LAMP, RoR, WordPress

    - by Michael
    The task is to move 10-15 websites running Linux to new servers hosted by Amazon. These boxes are currently on dedicated servers. Some sites are running WordPress, some have a custom CMS, and others might have RoR applications. Unfortunately, there is sparse documentation regarding each site and how services/files depend on each other, which means there is a lot of detective work to do. My goal is to properly document each site, what makes it work, etc., so future admins have at least something to work with. Currently my strategy is to download each site so I have a backup of the files, then scan through them looking for configuration files -- db connections, apache configs, etc. -- then create a nice spreadsheet with these findings and migrate everything out to the new server. My question to ServerFault is this: are there things you would look out for? Easier ways to handle this task that I'm missing? Points will be awarded to answers that help with efficiency. Thanks in advance.
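
    A rough starting point for the detective work, as a shell sketch run against a local copy of each site (the paths and patterns are illustrative, not exhaustive):

        #!/bin/sh
        SITE_ROOT=/backups/sites/example.com

        # Common config files for WordPress, Rails and hand-rolled PHP apps
        find "$SITE_ROOT" -type f \
            \( -name 'wp-config.php' -o -name 'database.yml' -o -name '.env' \
               -o -name 'settings.php' -o -name 'config.php' \) -print

        # Files that mention database hosts or credentials, hinting at dependencies
        grep -rIl -e 'DB_HOST' -e 'database' -e 'password' \
            --include='*.php' --include='*.yml' --include='*.rb' "$SITE_ROOT" | sort -u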

    Read the article

  • how does a computer know which IP address will route information to the internet? [closed]

    - by JohnMerlino
    Possible Duplicate: How does IPv4 Subnetting Work? For example, I have a computer with a Network Interface Card (NIC), an Ethernet card, connected by Ethernet cable to a router. There is also another computer connected by cable to another port on the router. This is a Belkin router operating over Ethernet on the LAN. When I connect to serverfault.com, the name maps to an IP address, and my computer now has the task of connecting to that IP address. But my computer itself cannot reach the serverfault IP address directly; only the router can. So the task of my computer is to find the IP address of the node that will do the routing to the public internet. How does my computer know that a particular IP address on the local network belongs to the router and is not just another computer connected to the network? Is this information configured manually in the operating system itself? Somehow my computer must know to send Ethernet frames to the router with the expectation that the router will then send the packet on to a public IP. How does it know to send it to the router? Is the router's IP address stored in my computer like a key/value pair, e.g. "router"="192.168.2.6", so that when I enter a public IP address, my computer first knows to connect to 192.168.2.6?
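
    Essentially yes: the router's address is stored as the "default gateway", either entered manually or handed out by DHCP, and it becomes the 0.0.0.0/0 entry in the host's routing table (on Windows, ipconfig /all or route print shows the same thing). A quick look on a Linux host, with illustrative output:

        # Show the default route; the "via" address is the router's LAN IP
        ip route show default
        # e.g. (illustrative output): default via 192.168.2.1 dev eth0

        # The DHCP lease is usually where that address came from (path varies by distro)
        grep -i router /var/lib/dhcp/dhclient.leases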

    Read the article

  • changing filesystem format from xfs to ext4 without losing data

    - by A.Rashad
    I have a fresh Lucid Lynx (Ubuntu 10.04) running on a laptop, where I defined the filesystems as:

    mount point / on ext4 (46 GB)
    mount point /home on jfs (63 GB)
    swap of 3 GB

    I left the machine overnight to do some task, without AC power. The next morning I found it on standby, the task completed, but the filesystem was not reachable: it gave me an I/O error. It seems there is a problem with jfs and standby. Anyway, to avoid any hassle, I want to move this mount point from jfs format to ext4. Can I do this without losing data and without the need to place the data in a temporary location until the transformation is done? Sorry to mention it, but I recall back in the Windows days we would change FAT16 to FAT32, or FAT32 to NTFS, without having to lose the data; I hope this is available on Linux. Update: The /home filesystem was xfs, not jfs, and it seems there is a bug with this filesystem for some reason; I had to re-install the OS twice until I ended up with ext4 for the entire /. However, as a conclusion, it seems that there is no way to make the conversion.
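
    For the record, there is no standard in-place xfs-to-ext4 conversion, so the usual route is copy out, re-create the filesystem, copy back. A sketch with illustrative device names, assuming a spare disk for temporary space (which the poster was hoping to avoid):

        # 1. Copy /home out to temporary space, preserving permissions, ACLs and xattrs
        sudo mkdir -p /mnt/backup
        sudo mount /dev/sdb1 /mnt/backup
        sudo rsync -aHAX /home/ /mnt/backup/home/

        # 2. Re-create the partition as ext4 (destroys its current contents)
        sudo umount /home
        sudo mkfs.ext4 /dev/sda3

        # 3. Mount it again, copy the data back, then update /etc/fstab to say ext4
        sudo mount /dev/sda3 /home
        sudo rsync -aHAX /mnt/backup/home/ /home/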

    Read the article
