Search Results

Search found 1712 results on 69 pages for 'schedule'.

Page 52 of 69

  • Intermittent NFS lockups on Isilon cluster

    - by blackbox222
    We have an Isilon cluster with 8 IQ 12000x nodes that exports storage via several NFS shares for a handful of Linux and Solaris clients. One Linux system has one of these NFS filesystems mounted, and I/O to it from that system is moderately heavy. Every 3-4 weeks (it's not on any discernible schedule, and is sometimes more or less frequent than that), we notice that all activity ceases on this NFS mount: processes hang in uninterruptible sleep, as if the network stopped working. Thirty minutes later the share recovers and things continue to work normally. The kernel log from the affected machine:

        Dec 3 10:07:29 redacted kernel: [8710020.871993] nfs: server nfs-redacted not responding, still trying
        Dec 3 10:37:17 redacted kernel: [8711805.966130] nfs: server nfs-redacted OK

    The relevant /etc/fstab line:

        nfs-redacted:/ifs/nfs/export_data/shared/...redacted... /data nfs defaults 0 0

    I've checked whether any scheduled processes (e.g. cron jobs) or Isilon-related functions (e.g. snapshots) might be causing these hangups, but I can't find anything. I'm also not aware of any network issues or maintenance that would explain it. All of the lockups last almost exactly 30 minutes according to the kernel logs. Perhaps someone has suggestions I could try? (I considered a soft mount to keep processes from hanging when they access the filesystem; however, I am wary of the corruption that could result, and it would not really solve the underlying issue anyway.)
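
    For reference, the soft-mount variant I mentioned would be a one-line fstab change along these lines; the option values are illustrative assumptions, not something I've tested here:

        # Hypothetical soft mount; timeo is in tenths of a second, retrans is the
        # retry count before the client returns an I/O error to the application.
        nfs-redacted:/ifs/nfs/export_data/shared/...redacted... /data nfs soft,timeo=100,retrans=3 0 0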

  • SQL Server Offsite Backups

    - by Eric Maibach
    We have about 1TB of SQL Server databases, and these databases generate about 200GB of data changes each day. Up to this point we have been doing weekly full backups, daily diff backups, and hourly transaction log backups. The full and diff backups are backed up to tape and taken offsite each day.

    We have been trying to move away from tapes, and our IT department purchased a Barracuda Backup device that backs up data and then sends it offsite over our internet connection. I have been trying to get this to work for our SQL Server backups and have run into a number of problems. I normally like to use SQL Server itself to perform backups rather than an agent, so that is what I tried first. However, the Barracuda device was not able to dedupe these files very well, so it ended up being too much data to send offsite and archive.

    I then tried installing the Barracuda agent and using it to back up the SQL Server databases. The problem there is that on some of the database servers I also have files that need to be backed up, and I cannot find a way to create separate backup schedules for the file backups and the SQL Server backups. Barracuda only does full or transaction log backups. So if I want hourly transaction log backups, I end up doing a file system backup every hour (which is not good); and if I only schedule the backups to run once a night, I either have to do a full backup every night or only a transaction log backup once a day. None of these scenarios are good options.

    My question is: how is everyone else getting their large SQL Server database backups offsite? Are you just using tape, or have you found an offsite backup device that works well? Is anybody else using Barracuda to back up their SQL Server databases? If so, how do you have it set up?
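
    For clarity, the native schedule I mean is just the standard T-SQL backup commands, roughly like this (database name and paths are placeholders):

        -- weekly full backup
        BACKUP DATABASE MyDB TO DISK = N'D:\Backup\MyDB_full.bak';
        -- daily differential
        BACKUP DATABASE MyDB TO DISK = N'D:\Backup\MyDB_diff.bak' WITH DIFFERENTIAL;
        -- hourly transaction log backup
        BACKUP LOG MyDB TO DISK = N'D:\Backup\MyDB_log.trn';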

  • Install and enforce a scheduled task across a Windows domain

    - by Ricket
    We have a small domain of about 70 Windows computers (XP and 7). We want to schedule a command (an update mechanism) to run on all computers periodically, and we want the task to run regardless of the computer's connection to our network (i.e. the task should run even on a laptop that isn't connected to our VPN). We have a Microsoft System Center Essentials 2010 server, so that might come in handy. The options I see are these:

    1. Do it completely manually. Install the scheduled task by hand or remotely using psexec (and the at command?) for each computer in our network, and enforce that newly imaged computers have the task installed before being deployed to an employee, or build the task into the image. High initial cost (having to do this for each of 70 computers), though building it into the image might work. But there is ongoing maintenance in making sure the task is added to everything, and I fear that a year or two down the road we will have forgotten about it, or gotten sloppy, or new IT employees will miss this step and some computers won't have the task.

    2. Have one of our servers run a script that loops through all computers and psexec's the command on each one. But that would only reach running, connected computers, so this solution wouldn't work. I suspect SCE could do something like this too, but again this is not a good solution.

    Neither of these is ideal, and I'm certain there is a better way to do it -- right? What is the best way to accomplish this task?
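
    For concreteness, option 1's remote push might look like the schtasks line below, run in a loop over machine names (the task name, command, and schedule are placeholders):

        rem Create the task on a remote machine; /RU SYSTEM runs it whether or
        rem not a user is logged on.
        schtasks /Create /S %COMPUTER% /TN "UpdateMechanism" /TR "C:\Tools\update.cmd" /SC DAILY /ST 03:00 /RU SYSTEM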

  • Is execution of sync(8) still required before shutting down linux?

    - by Amos Shapira
    I still see people recommend "sync; sync; sync; sleep 30; halt" incantations when talking about shutting down or rebooting Linux. I've been running Linux since its inception, and although this was the recommended procedure in the BSD 4.2/4.3 and SunOS 4 days, I can't recall having had to do it for at least the last ten years, during which I've probably been through thousands of Linux shutdowns/reboots.

    I suspect this is an anachronism from the days when the kernel couldn't unmount and sync the root filesystem and other critical filesystems required even in single-user mode (e.g. /tmp), so it was necessary to tell it explicitly to flush as much data as it could to disk. These days, without having found the relevant code in the kernel source yet (digging through http://lxr.linux.no and Google), I suspect the kernel is smart enough to cleanly unmount even the root filesystem, and the filesystem is smart enough to effectively do a sync(2) before unmounting itself during a normal "shutdown"/"reboot"/"poweroff". The "sync; sync; sync" should only be necessary in extreme cases where the filesystem won't unmount cleanly (e.g. physical disk failure) or the system is in a state that only forcing a direct reboot(8) will get it out of (e.g. the load is too high to let it schedule the shutdown command).

    I also never run "sync" before unmounting removable devices, and I have never hit a problem. Another example: Xen allows a DomU to be sent a "shutdown" command from the Dom0; this is considered a clean shutdown, without anyone having to log in and type the magical "sync; sync; sync" first. Am I right, or have I just been lucky for a few thousand system shutdowns?

  • How to use AND/OR Building Block content in a Word 2007 template

    - by JimmyJames
    I am creating a Schedule of Work template and am successfully using the Developer tab and Quick Parts to let the user choose content on an "either/or" basis: either A, or B, or C, etc. -- essentially choose one option from many. One Building Block control, one paragraph, nice and clean.

    What I need to do now, but cannot seem to figure out, is how to let the user choose content on an "and/or" basis: A and B; A or B and C; B and D and E or F; etc. -- essentially choose several options from many, on a variable basis. One Building Block control, maybe one paragraph, maybe three or more paragraphs. Not so clean. I thought of building choice options for all possible paragraph combinations, but I can have as many as 7 or 8 different paragraphs, so that solution quickly becomes unworkable. Multiple controls -- some of which will be left unused -- don't work either, since I cannot find an easy way to make a "Choose or Delete" control that actually deletes itself when "Delete" is chosen. Recommendations are most welcome.
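
    If a macro-enabled template is acceptable, something along these lines might be a starting point for the self-deleting control; the control title and list entry are hypothetical, and this is an untested sketch:

        ' In ThisDocument: when the user leaves a dropdown titled "OptionalClause"
        ' with "Delete" selected, remove the control along with its contents.
        Private Sub Document_ContentControlOnExit(ByVal CC As ContentControl, Cancel As Boolean)
            If CC.Title = "OptionalClause" And CC.Range.Text = "Delete" Then
                CC.Delete True   ' True removes the contents together with the control
            End If
        End Sub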

  • v2v of RHEL5 box - issues with retaining MAC address

    - by Alex Berry
    For the last week we have been troubleshooting a customer's Red Hat virtual machine running on ESXi. We've been using Veeam to try to create an off-site replica and have been having trouble getting it to work on a decent schedule, and recently we noticed issues with orphaned snapshots while looking at the datastore. You can see several snapshots in the same folder, and it's causing issues with replication and backup, so we decided the cleanest way forward was to v2v the machine to another datastore so that we had a clean single-vmdk setup to work with. This is where our trouble started.

    We first attempted a v2v using VMware Converter connecting to the powered-on machine, since we were having issues doing an offline v2v. This copied fine, but when I tried to set a static MAC using this article, http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=507, the new VM wouldn't take the address: it simply obtained a new MAC, received a DHCP lease, and then would only boot to a blank red screen, never the login screen.

    So the next step was an offline v2v, once we finally got it working. Same thing: I followed the KB to the letter and still it wouldn't take the MAC. I then tried again, and upon completion I compared the old and new VMX files, copying every identifier and variable possible, then unregistered both VMs, uploaded the new VMX file and booted, only to see the same results. Finally I did the same as above but copied the disk using dd to a second attached vmdk and attached that to the new VM -- still no luck.

    After downloading the modified VMX file after the first boot and comparing it to the original I created, I found that the BIOS UUID had changed from the one I typed in manually, so I'm assuming this may be the snagging point, but I have no idea. I've never had this issue before on a P2V, and I'm just wondering if someone could shed some light on this. Maybe it's to do with RHEL licensing?
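
    For reference, the static-MAC edit from the KB article amounts to these .vmx lines (the address shown is a placeholder; manually assigned static MACs are supposed to fall in the 00:50:56:00:00:00 to 00:50:56:3F:FF:FF range):

        ethernet0.addressType = "static"
        ethernet0.address = "00:50:56:00:00:AA"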

  • Application for time and project management

    - by user10826
    I want to improve the way I organize my projects/tasks/schedule. What I do now:

    - I keep an Excel sheet with the names of the most important tasks/projects; I look at it at the beginning of each day and decide which ones to focus on.
    - In iCal I write down events for each day, or for a concrete time (13 to 14 hours). Each day I set up the tasks I want to accomplish and allocate hours to them.
    - I use Things (Cultured Code) to keep info about tasks and projects that are less important and not yet time-allocated (GTD name: someday).
    - In Mail on Mac I create folders, named after the different projects, for the emails I want to process.
    - I save the main info for each project in FreeMind maps.

    My system works well at the moment, but it is pretty complicated to use. I want to make it better, and I am looking for something with these requirements:

    - must be 100% offline accessible
    - should use as few programs/resources as possible, ideally just one program able to manage all my info
    - lets me use the GTD methodology mixed with priorities, and allocate each task, converted to an event, on my calendar
    - gives me different daily/weekly views on a calendar so I can see the "big picture"
    - must run on Mac OS X Leopard
    - price does not matter, I will pay for this

    So, according to your experience, can you recommend something like this? Thanks

  • Automate creation of Windows startup script?

    - by Niten
    Is there a good way to automate installing local startup (rather than login) scripts in Windows XP and Windows 7, via the command line, WMI, COM, or otherwise (even Win32 if it comes to that)? I need to set up a local startup script on a large number of computers, and unfortunately Active Directory is absolutely not an option. I would like to write a script or small program that I can run on each computer to perform the startup-script installation, to save myself a lot of error-prone point-and-click manual labor.

    I see that when one uses gpedit.msc to create a local startup script, information about the script gets stored in the registry here:

        HKLM\Software\Policies\Microsoft\Windows\System\Scripts\Startup

    However, if you create such a script and then delete its registry key, the script remains listed in the local Group Policy editor; as is so often the case in Windows, apparently there is more going on there than meets the eye. This leads me to question whether it's safe to manually add subkeys for new startup scripts here (I wouldn't want my script to be overwritten by later changes made using the local Group Policy editor, for instance).

    Another option that's occurred to me is to create an item in the Task Scheduler configured to run at system startup. However, my concerns there are twofold:

    1. Can this be automated any more easily? For instance, the at command doesn't appear to let you schedule a task for system startup, and WMI's Win32_ScheduledJob interface looks unreliable (it fails to show any of my currently scheduled tasks, for one thing).
    2. Would I be able to prevent users from logging in until the scheduled startup task is completed, as can be done with "normal" Windows startup scripts?

    Thanks in advance for any suggestions; I've been banging my head against this one for a bit.
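
    On the Task Scheduler route, schtasks (rather than at) can register a boot-time task from the command line; a sketch, with task name and script path as placeholders:

        rem /SC ONSTART triggers at boot; /RU SYSTEM runs without a stored password.
        schtasks /Create /TN "LocalStartupScript" /TR "C:\Scripts\startup.cmd" /SC ONSTART /RU SYSTEM

    As far as I know, though, this won't hold user logon until the task finishes, which native startup scripts can do (concern 2 above).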

  • Backup Exec tape rotation guidelines

    - by HannesFostie
    We use Backup Exec to take care of backups for our data server, Exchange server, and one more set of systems. Each of these three is done on a separate "set" of tapes. Our goal is to be able to roll back a full two weeks, with one full backup each weekend and differential/incremental backups in between (the difference between the two doesn't matter much in our case, because the employees mostly use a very similar set of files throughout the week). While playing around with the settings, we set the time for BE to keep the full backup to 14 days, but because we have too much data this requires manual intervention each time to erase a certain tape and reuse it.

    What I would like to know is what guidelines, tricks, tips and general "stuff to think about" you keep in mind when designing your backup schedule. The type of backups (full/diff/incr) isn't of much importance in our case, as it's more or less set in stone. Made this community wiki as it's not a very specific question. Thanks in advance!

  • Application runs fine manually but fails as a scheduled task

    - by user42540
    I wasn't sure if this should go here or on Stack Overflow. I have an application that loads some files from a network share (the input folder), extracts certain data from them, and saves new files (zipped with SharpZipLib) on a different network share (the output folder). This application runs fine when you open it directly, but when it is run as a scheduled task, it fails in numerous places. The application is scheduled on a Windows 2003 server. Let me say right off the bat: the scheduled task is set to use the same login account that I am currently logged in with, so it's not failing because it's using the LocalSystem account. Something else is going on here.

    Originally, the application assigned a drive letter to the input folder using WNetGetConnectionA(). I don't remember why this was done; someone else on our team did that and she's gone now. I think there was some issue with using the WinZip command line with a UNC path. I switched from the WinZip command line utility to SharpZipLib because there were other issues with using the WinZip command line. Anyway, the application failed when trying to assign a drive letter with the error "connection already established." That wasn't true, and even after trying WNetCancelConnection() it still didn't work.

    Then I decided to just map the drive manually on the server. Now when the app calls Directory.Exists(inputFolderPath) it returns false, even though the directory does exist. So, for whatever reason, I cannot read this directory from within the application, yet I can manually navigate to the folder in Windows Explorer and open files. The app's log file shows that the user executing it on the schedule is the user I expect, not LocalSystem. Any ideas?
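
    One thing worth noting: drive letters mapped in an interactive session aren't visible to a scheduled task's separate logon session. A sketch of a wrapper that makes the job self-contained (share paths, drive letter, and executable name are placeholders):

        rem Map the share inside the job itself, run the work, then clean up.
        net use X: \\fileserver\input /persistent:no
        C:\Jobs\extractor.exe X:\ \\fileserver\output
        net use X: /delete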

  • NTBackup (on WS2k3) fails to back up remote server (WS2k8R2) with "... is not a valid drive, or you do not have access"

    - by Mark A
    We run an NTBackup job on a Windows Server 2003 R2 SP2 machine with all updates (as of Q4 2011). It works well backing up two WS2k3 servers as well as the backup server itself. However, we have been unable to successfully back up our Windows Server 2008 R2 machine ("G5-01"). It often runs for about 2GB worth of backup and then dies with one of the error messages below; it should be more like 20GB for the full server. We have tried using the admin share (C$), an explicitly shared drive, UNC paths and mapped drives. The result is the same each time; the only thing that varies is the amount of data backed up before it chokes. We've also run NTBackup from the UI, from the command line and as a scheduled task. We are backing up to 400/800GB tapes and they have plenty of space available (blank media).

        Error: \\G5-01\c is not a valid drive, or you do not have access.
        Error: \\G5-01\c$ is not a valid drive, or you do not have access.
        Error: Y: is not a valid drive, or you do not have access.
        Error: Could not access or create backup catalog files. Verify that you have full access to the working folder and there is disk space available.

    The job runs as Administrator, and we have no problems logging onto the server and transferring files. The Event Log on the WS2k8 machine is not much help, as it just shows success audits for each login. All of the hardware involved (HP DL360 G3, HP LTO Ultrium 3, Adaptec 39320A) has the latest supported drivers. We've tried a bunch of different options and are wondering where to look next to resolve the backup issue. We've been super happy with our reliable scheduled task for years, but this one is stumping us!

  • Exchange 2003 -- Mailbox Management not deleting ALL messages aged 30 days or older...

    - by tcv
    I've recently created a Mailbox Management task within Exchange 2003 that, every night, looks at the contents of the Deleted Items folder within a particular mailbox and deletes mail that's 30 days or older. The scheduled task ran on its own last night, and I have confirmed that messages within the right mailbox and the right folder were, in fact, processed. Many mails were deleted... but not every email older than 30 days. In fact, the choice seems kind of random.

    Last night, 3/10/2010 was the 30-day watermark. Mails from 3/10/2010 were deleted, sure enough, but not all of them. Mails older than 3/10/2010 were deleted as well, but, again, not all of them. The only criteria I have on the management task -- aside from the single-mailbox and single-folder scopes -- is the age criterion. The size criterion is set to Any, meaning I don't care about the size; I care about the age. It's made me wonder whether there is some sort of limit on how many mails can be processed? The schedule is set for 12am and 1am every night. Any hints appreciated.

  • No password is complex enough

    - by Blue Warrior NFB
    I have one user in my AD domain who seems unable to self-select a password. I may have another one, but they're on a different enough password-expiration schedule that I can't remember who it is right now. I can set a password via ADU&C just fine, but when he tries it via Ctrl-Alt-Del he gets the "doesn't meet complexity" message. Figuring he was just doing something like 'pAssword32', I did some troubleshooting of my own, and sure enough it doesn't want to take a password that way.

    He's one of our users who habitually uses a local account and then maps drives using his AD credentials, so he doesn't get the "your password will expire in 4 days, maybe you should change it" prompts, which makes him a frequent "my password expired, can you fix it" flyer. I don't want to keep having him set it via ADU&C over my shoulder every N days. I'm just fine setting temp passwords of 48 characters of keyboard-slamming and letting him change it to something memorable.

    My environment is at the Windows 2008 R2 functional level, and I am using fine-grained password policies. In fact, I have two such policies:

    1. For normal users (minimum length, remembered passwords)
    2. For special utility accounts

    The password complexities I've tried match both policies for length and character-set selection. The permissions on the user object look normal; SELF does indeed have the "Change Password" right. Is there some other place I should be looking for things that can affect this?
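
    For anyone wanting to sanity-check the same thing, this is a quick way (with the AD PowerShell module on 2008 R2) to confirm which fine-grained policy actually wins for the account; the username is a placeholder:

        Import-Module ActiveDirectory
        # Shows the resultant PSO applied to the user, or nothing if only
        # the Default Domain Policy applies.
        Get-ADUserResultantPasswordPolicy -Identity problemuser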

  • Run a script on user connection on the VM host

    - by Scott Chamberlain
    I have a server running a Virtual Desktop managed pool. What I would like to do is, when a user logs in, have a script check the number of available VMs and, if below a threshold, add additional VMs to the pool. The script to check the load and add to the pool is not the problem; I have that already figured out:

        $collectionName = "Test1"
        $rdvh = "vmHost.example.com"
        $minAvailableVMs = 2
        Import-Module RemoteDesktop
        $pool = Get-VirtualDesktopCollection -CollectionName $collectionName
        $availableVMs = $pool.Size - ($pool.Size * $pool.PercentInUse / 100)
        $status = Get-VirtualDesktopCollectionJobStatus $collectionName
        # Only add new servers if we are below the threshold and in the JOB_COMPLETED state
        if($availableVMs -lt $minAvailableVMs -and $status.Status -eq [Microsoft.RemoteDesktopServices.Management.VirtualDesktopCollectionJobStatus]::JOB_COMPLETED)
        {
            Add-RDVirtualDesktopToCollection -CollectionName $collectionName -VirtualDesktopAllocation @{"$rdvh" = 1}
        }

    The problem I am having is: how do I run the above script on the virtualization host/Connection Broker/some other server when a user connects? I don't think it would be appropriate to run this as a logon script inside the VM. I think there is a way to do this on the management side, but I don't know the new scripting interface in Server 2012 R2 well enough to know which cmdlets I should look for to schedule this.

    EDIT: I know System Center is perfect for this, but I do not have a license and was denied when I asked for it to be added to the budget.
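
    One broker-side possibility, sketched with schtasks: register the script as an event-triggered task on the Connection Broker, firing on session-logon events. The event channel and ID below are my assumptions, worth verifying in Event Viewer first:

        rem Event 21 in the LocalSessionManager/Operational log is recorded on
        rem session logon; adjust the channel/ID to whatever your broker logs.
        schtasks /Create /TN "ScalePool" /TR "powershell.exe -File C:\Scripts\scalepool.ps1" /SC ONEVENT /EC "Microsoft-Windows-TerminalServices-LocalSessionManager/Operational" /MO "*[System[EventID=21]]"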

  • Offline productivity

    - by Frank Meulenaar
    On some days I commute two hours (one way) on the train. I don't have any mobile internet, nor is there always WiFi service on the train. For security reasons I can't do any work on the train, so I'm trying to work on my geek time. I'm looking for general solutions for how to do this (I'm on Firefox/Windows, but I don't think it matters).

    Email works perfectly with Gmail offline: it syncs as soon as I'm online and remembers complicated stuff. So far I've used the ScrapBook plugin to store a website. It works well, but I have to download my favorite news page again every day; I want it to sync as soon as possible. It would be even better if I could click a page on my desktop and have my laptop sync it as soon as it got the chance (maybe the AutoSave plugin for ScrapBook can do this). Similarly, I use the DownloadHelper plugin to download YouTube videos, but I'd like something that automatically downloads videos from a given channel. Any tips are welcome.

    So far my early-morning schedule is: wake up, power on laptop, make coffee, power off laptop and leave within 10 minutes (enough time for Gmail to sync). But I can imagine a system where my laptop stays on during the night (or boots before I wake (and makes me coffee :])).
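
    For the news-page part, one unattended approach would be a scheduled wget mirror, assuming a Windows build of wget is installed (URL and path are placeholders):

        rem Snapshot the page plus its images/CSS for offline reading, and
        rem rewrite links so they work locally.
        wget --mirror --page-requisites --convert-links -P C:\Offline http://news.example.com/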

  • What Windows service binds a NIC to the network?

    - by Bigbio2002
    I have a server where the NIC takes several minutes to bind itself to the network upon startup (it has a statically configured IP). This causes DNS/WINS/Intersite Messaging to fail to start, since they depend on a network connection. I'm still attempting to find the root cause (I've done firmware updates and checked for odd drivers/services, no luck so far), but in the meantime I want to adjust the load order of services to ensure that the NIC binds first, before these services attempt to start. The only question is: which service is it? The server is running Server 2008 R2 and has only one NIC installed.

    (On a side note, two other small but odd problems are occurring with the server. It had the issue described in KB2298620, which I've fixed. The other problem occurs in Windows Server Backup: no events appear in the upper portion of the window, despite the fact that backups are running in the background, and whenever I attempt to modify the backup schedule it gives me the error "Not enough storage is available to process this command" and appears to fail, when in fact it succeeds. These may be separate issues, but something tells me some of them might share a common root cause.)
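
    For the load-order adjustment itself, sc.exe can add a service dependency; a sketch with placeholder names (which service actually represents "NIC is bound" is exactly the open question here):

        rem View the current dependency list first; depend= replaces it wholesale,
        rem so restate the existing entries along with the new one.
        sc qc DNS
        sc config DNS depend= Tcpip/Afd/SomeNetworkService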

  • Proper Outlook Free/Busy status when working from home

    - by rwmnau
    Our office (pretty large, about 200 people) has recently started part-time telecommuting. It's only one day a week for now, but it's already raised some questions about availability, so I wanted to ask the users here, some of whom I'm sure telecommute to a corporate job, how they set their out-of-office status. Outlook has four statuses, and here's what I (and most others?) take them to mean:

    - Free: I'm available for meetings
    - Busy: I'm in a meeting or otherwise occupied, and unavailable
    - Tentative: shy away from scheduling over this, but I'm available if needed
    - Out of Office: I'm on vacation and unavailable

    However, I don't travel for work. Do people tend to use that last status to mean they're remote but available for a phone call/bridge? As we begin to telecommute, I'll be available by phone for meetings, but not in person; any meeting can have a conference bridge, but some meetings just need to be in person. I'd like to send the right message about my status: people can schedule meetings with me on my telecommute days, but they should expect me to be on a conference bridge when they do.

    What status do people use? Does "Out of Office" correctly reflect that you're working from home, even though I perceive it to mean somebody is on vacation? Maybe I'm the only one confused here, but as a company that's never before done telecommuting of any kind, we're in the dark about standard practices. Thanks for the insight! Though this isn't a technical question directly, I'm hoping it's still applicable to the group and constructive; if it's not, please close it and accept my apology.

  • Wake a Mac display from sleep via SSH

    - by MaxGabriel
    I'm using Jenkins as a CI server, where I SSH into an iMac running OS X Mountain Lion (10.8.4) to run some UIAutomation integration tests on an iOS app. The iMac actually sits 10 ft from me (but across a table), so I'm able to see the screen. However, the tests don't wake up the display, so I often can't see them. Is there a way to wake up the display from the terminal once Jenkins has SSHed in?

    So far I have tried using AppleScript to press an arrow key, and using the Wake Assist application. I also tried setting the wake schedule to the current date. Finally, I tried the caffeinate command: caffeinate -t 300 &. The computer's "Wake for Wi-Fi network access" checkbox is enabled.

    So far my best workaround is to set the iMac to stay awake for at least 3 hours. However, it'd be nice to keep normal sleep behavior, as I hypothesize that the screen waking from sleep would alert me visually that the integration tests are running. It's also significantly cooler :)
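
    One variant that might be worth trying from the SSH session, assuming the 10.8 caffeinate supports the -u flag: -u asserts user activity, which is what actually lights the display, whereas -t alone only holds off sleep:

        # Simulate user activity for 5 seconds to wake the display.
        caffeinate -u -t 5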

  • How do you set up CRON?

    - by user1723760
    I have never used cron before, but I want to use it to perform scheduled jobs for a PHP script. The PHP script is called "inactivesession.php" and contains this code:

        <?php
        include('connect.php');

        $createDate = mktime(0, 0, 0, 10, 25, date("Y"));
        // The format must match the '%Y-%m-%d' produced by DATE_FORMAT below
        // (the original 'd-m-Y' would never compare equal).
        $selectedDate = date('Y-m-d', $createDate);

        $sql = "UPDATE Session SET Active = ? WHERE DATE_FORMAT(SessionDate, '%Y-%m-%d') <= ?";
        $update = $mysqli->prepare($sql);
        // bind_param takes its arguments by reference, so pass a variable,
        // not a literal 0.
        $active = 0;
        $update->bind_param("is", $active, $selectedDate);
        $update->execute();
        ?>

    What I want to do is that when the above date (25th Oct) is reached, the script performs the UPDATE statement. My question is: how do I use cron to do this? The server I am using is the university's server, known as helios. Does cron need to be set up on helios (do I have to call the admin for this), or is it something else that uses cron? I have never used cron before, so can you explain how it can be set up for the example above on the server I am using? Thanks

    UPDATE: I think the name of the OS is actually helios, but I am not sure; I have a Wikipedia page on this here. I will read the crontab Wikipedia page and see what I can find, but my question is: is cron already set up, so I can just go right ahead and use it, or do I need to set it up first?
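
    For the scheduling half, a crontab entry is one line of five time fields plus the command; a sketch with a placeholder path, added via "crontab -e" on helios (assuming user crontabs are allowed there):

        # min hour day-of-month month weekday
        5 0 25 10 * /usr/bin/php /home/you/inactivesession.php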

  • Scheduled Task unable to create/update any files

    - by East of Nowhere
    I have several tasks in Task Scheduler on Windows Server 2008 SP2 (32-bit), and they all successfully "do their work" except for creating or updating any files on Windows. All the tasks point to simple .cmd files that do the real work, but beyond that there's no pattern: some call robocopy with the /LOG option, some call .exe files I wrote that manipulate XML files, some just do stuff with > redirection. With all of them, if I double-click the .cmd file myself, it works fine and the files are created or updated as expected. If I run it from Task Scheduler (on the schedule or just by clicking Run), the task always completes "successfully" but without any of the desired changes to files. I don't see any "unable to create file" errors in Event Viewer either.

    The tasks all run as a specific account, and I have logged in as that account and verified that it has permissions to do everything it needs to. Further details: the tasks are set to run whether the user is logged in or not, and "Configured for" is set to "Windows Vista or Windows Server 2008" (there is no other option available).
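
    One debugging angle: wrap the task action so the .cmd's output lands in a log file, and note that scheduled tasks start in %WINDIR%\System32 unless "Start in" is set, so relative paths in the scripts may be resolving somewhere unexpected (paths below are placeholders):

        rem Task action: capture everything the script prints or errors.
        cmd /c C:\Jobs\realwork.cmd >> C:\Jobs\Logs\realwork.log 2>&1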

  • Solutions on how to use an OS X calendar as a more perfect time tracking solution for 5-10 users in a small agency?

    - by jnthnclrk
    I really like OS X's iCal. Entering events is easy with the mouse, and it also gives you a very real visual sense of how long tasks take to complete. We often work remotely in our organisation, so we use a few shared calendars between key individuals to give us an overview of hours worked, availability and schedule conflicts without too much disruption to our various hectic workflows. It really is a neat solution, especially on shared tasks. How many times have you tasked a remote colleague and then lost the thread on whether that task was completed or not? With shared calendars you get a much clearer idea of what your people are working on without having to pick up the phone or compose a chat. However, there are a few areas where this approach fails:

    - iCloud syncing often needs to be re-jiggered
    - The "view only" option on shared calendars does not seem to work, which leaves all shared calendars editable by others
    - There is no decent reporting with this workflow
    - There is no task categorisation or tagging
    - Things get very busy in iCal when working with more than 2 shared calendars

    I've looked at a few task management apps like Basecamp and Harvest, but nothing appears to let me edit my calendar natively and then sync with a 3rd party. I'm interested in solutions that improve the above workflow and let us elegantly increase the number of users.

  • Server 2008 R2 boot is at 2 hours and counting. What now?

    - by Jesse
    This morning, we rebooted our Server 2008 R2 box. No problem, it came right back up. Then we shut it down and let it install Windows updates. While it was off, we added some RAM. Then we turned it back on. The system came right back up to the "press ctrl-alt-delete" screen; so far, so good. I logged in. The system got as far as "Applying Group Policy", then spent almost an hour applying drive mappings. It finally finished that, and has now spent 30 minutes waiting on the Event Notification Service. I still haven't been able to log in, and the Remote Desktop service doesn't appear to be running yet.

    I tried viewing the event log from another machine. I see that the box is writing to the Security log, but there are no events in System or Application in the last 45 minutes. Digging through the System log of events from 45 minutes ago, I see a bunch of timeouts:

        A timeout (30000 milliseconds) was reached while waiting for a transaction response from the ShellHWDetection service. [lots of these]
        A timeout (30000 milliseconds) was reached while waiting for a transaction response from the wuauserv service.
        A timeout (30000 milliseconds) was reached while waiting for a transaction response from the SessionEnv service.
        A timeout (30000 milliseconds) was reached while waiting for a transaction response from the Schedule service.
        A timeout (30000 milliseconds) was reached while waiting for a transaction response from the CertPropSvc service.

    What can I do? Should I try shutting it down remotely, or will that do more damage?

  • powershell vs GPO for installation, configuration, maintenance

    - by user52874
    My question is about using PowerShell scripts to install, configure, update and maintain Windows 7 Pro/Ent workstations in a 2008R2 domain, versus using GPO/ADMX/MSI.

    Here's the situation: because of a comedy of cumulative corporate bumpfuggery, we suddenly found ourselves having to design, configure and deploy a full Windows Server 2008R2 and Windows 7 Pro/Enterprise environment on very short notice and a tight delivery schedule. Of course, I'm not a Windows expert by any means, and we're so understaffed that our buzzword bingo includes 'automate' and 'one-button' and 'it needs to Just Work'. (FWIW, I started with DEC, then moved on to Solaris and Cisco, then Linux of various flavors with a smattering of BSD nowadays. I use Windows for email and to fill out forms.) So we decided to bring in a contractor to do this for us, and they met the deadline. The system is up and mostly usable, and this is good; we would not have been able to do it ourselves. But it's the 'mostly' part that is proving to be the PIMA now, and I'm having to learn Microsoft stuff anyway until/if we can get a new contract with these guys for ongoing operations.

    Here's my question. The contractor used PowerShell almost exclusively for deployment, configuration and updating. My intensive reading over the last week leads me to think that the generally accepted practice for deploying, configuring and updating Microsoft stuff uses elements of GPOs and ADMX templates, along with maybe some third-party tools like PolicyPak. Are there solid reasons that I've not found yet why PowerShell scripts would be preferred over the GPO methods? I'm going to discuss this with the contractor lead when he gets back from his vacation, and he'll be straight with me (nor do I think they set us up). But I can also see this might be a religious issue, so I would still like some background. Thoughts? Or weblinks? Thanks!

  • mail server checklist..

    - by Jeff
    We recently ran into some issues with our mail server setup, so I'm preparing a list of actions we should enforce in order to maintain a proper email solution within our company. We have around 80 Exchange users, and we send mass emails to 20,000+ customers almost monthly. The checklist I currently have:

    1. McAfee MX Logic 'cloud' anti-spam functionality for incoming messages
    2. Antivirus on each computer in the company
    3. Antivirus on the Exchange and DNS servers
    4. Set up an SPF record
    5. Set up DKIM
    6. Set up DomainKeys
    7. Set up Sender ID
    8. Submit the SPF record to Microsoft, Yahoo, etc. for whitelisting purposes
    9. Configure safe size limits for messages in Exchange
    10. Keep 2 outside IPs for the mail server, in case one gets blacklisted; switch to the backup
    11. Host the internet site on a different IP than the mail server
    12. Send all company mass emails through a 3rd-party company (listtrak.com)
    13. Set up domain aliases (media, enews, and bounce) for the 3rd-party mass-mail software
    14. Verify the setup using [email protected]
    15. Configure Group Policy and our opendns.org account to prevent unwanted actions and website viewing

    For the mass emails:

    1. Schedule them to send different amounts at different times (1,000 at 10am, 1,000 at 4pm, 1,000 at 10am the next day)
    2. Set up user preferences so recipients decide what they want to receive (their interests)
    3. Send a steadier flow of email, maybe 100 a week with top new products, instead of 20,000 every other month

    If anyone has suggestions or additions/subtractions to this checklist, they are greatly appreciated. Thank you
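
    For item 4, an SPF record is a single DNS TXT entry along these lines (the IP and include hostname are placeholders; ~all soft-fails senders not listed):

        ; Authorize our MX hosts, one fixed IP, and the mass-mail provider.
        example.com.  IN  TXT  "v=spf1 mx ip4:203.0.113.10 include:listtrak.com ~all"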

  • HSphere - Only sees Apache 2 Test Page after forced shutdown?

    - by Darkwoof
    I have a dedicated server running CentOS 4.4 and HSphere 3.0 Patch 6 on a Dell PowerEdge 850, colocated at a datacenter. Last night my hosting company had to schedule a change to the power bar, and I gave them the go-ahead to shut down the server and bring it up when they were done. Since they do not have admin access to the machine, I suppose they did a forced shutdown.

    When the machine came up, I found that all my domains (and sub-domains) now point to an "Apache 2 Test Page" instead of the pre-configured sites that were running before the shutdown. This apparently only affects the standard sites running on port 80; my Webmin instance on port 1000 is still accessible, for example, as is my HSphere control panel on port 8080. I've checked the config settings for each site in the HSphere UI and didn't find anything wrong. I've also tried rebooting the server via SSH, which did not rectify the problem. I've previously done reboots with no issues; the sites would just come right back up afterwards, but not this time.

    I'm guessing some configuration file got corrupted or overwritten this time? Can anyone with HSphere experience advise on what happened and how to solve it? Thanks. (I do not have an active support agreement for HSphere since Parallels took over and raised the minimum license to 200 accounts; I only had a 25-account license for use by family and friends.)
