Search Results

Search found 4011 results on 161 pages for 'automated printing'.

Page 129/161 | < Previous Page | 125 126 127 128 129 130 131 132 133 134 135 136  | Next Page >

  • Exim: send every email with a predefined sender

    - by Gregory MOUSSAT
    We use Exim on our servers to send email only from local automated users such as root, cron, backup, etc. We have to list every possible user in /etc/email-addresses. For example:

        root: [email protected]
        cron: [email protected]
        backup: [email protected]

    This allows us to receive every email generated. The problem is that when a user is added for whatever reason (for example, some packages add a user on installation), we can forget to add it to /etc/email-addresses. Most of the time it's not a problem, but it is not clean, and the overall method is not clean. We'd like to configure Exim to send every email with the same source address, i.e. every sent email comes from [email protected]. One way could be to use a wildcard or a regular expression in /etc/email-addresses, but this is not supported. I don't currently understand Exim well enough to figure out how to modify this one way or another. Ideally, Exim should look into /etc/email-addresses first and, if there is no match, use the predefined address, but this is very secondary. There are two places where this address is used: 1. when Exim sends the MAIL FROM: command to the SMTP server, 2. inside the header. edit: The rewrite section is the original one from Debian (comments removed):

        begin rewrite
        .ifndef NO_EAA_REWRITE_REWRITE
        *@+local_domains "${lookup{${local_part}}lsearch{/etc/email-addresses} \
                           {$value}fail}" Ffrs
        *@ETC_MAILNAME "${lookup{${local_part}}lsearch{/etc/email-addresses} \
                           {$value}fail}" Ffrs
        .endif
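
    One possible direction, sketched and untested: Exim's lookup expansion takes a {found}{not-found} pair of branches, so the fail branch in the Debian rewrite rule above could be replaced with a fixed fallback address. The address [email protected] below is a placeholder for whatever default you want:

        begin rewrite
        *@+local_domains "${lookup{${local_part}}lsearch{/etc/email-addresses} \
                           {$value}{[email protected]}}" Ffrs

    This keeps the per-user entries in /etc/email-addresses working exactly as before and only applies the default when the lookup misses, which matches the "look first, fall back second" behaviour described above.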

    Read the article

  • Writing scripts that work with my emails

    - by queueoverflow
    I currently use Thunderbird as my email client, and it has some filters, but that seems to be all I can program in it. On several occasions, I have heard people talk about their automated email workflow. One example: when I do not get a reply to an email, the script sends a "nag" email asking why I did not get a response yet. Or another: I get so much mail that I cannot read it all. After a week, unread email is put on hold and the sender gets a "if it was important, reply to this email and it will be taken off hold" message. The script then takes the answer and moves the message back into the important folder. I read about FiltaQuilla, which seems nice, but it does not seem to be the kind of programming that I am looking for. How can I write general-purpose scripts like those? Do I need to write my own Python IMAP/SMTP client (if that is even possible) to do this, or can I script it, in say JavaScript, inside Thunderbird?
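
    For what it's worth, you would not have to write a client from scratch: Python's standard imaplib covers the mailbox side. A minimal sketch of the "put unread week-old mail on hold" idea, assuming a server at imap.example.com and an existing Hold folder (both placeholders):

        import imaplib
        from datetime import date, timedelta

        cutoff = (date.today() - timedelta(days=7)).strftime("%d-%b-%Y")

        M = imaplib.IMAP4_SSL("imap.example.com")   # placeholder host
        M.login("user", "secret")
        M.select("INBOX")

        # UNSEEN = unread, BEFORE = delivered before the cutoff date
        typ, nums = M.search(None, '(UNSEEN BEFORE %s)' % cutoff)
        for num in nums[0].split():
            M.copy(num, "Hold")                     # park it in the Hold folder
            M.store(num, "+FLAGS", "\\Deleted")     # and flag the INBOX copy
        M.expunge()                                 # actually remove flagged mail
        M.logout()

    The nag-mail side would be the same pattern with smtplib on top, keyed off Message-ID headers you recorded when sending.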

    Read the article

  • smartctl short test doesn't seem to complete

    - by Cédric COPY
    I am working on a project which involves automated HDD testing through smartctl. The station works fine on most products, but I have two specific products that fail the smartctl test. Those two products are both WD drives (WD2500BUDT series). Smartctl's behaviour is quite strange: the test launches without any problem, I wait about 2 minutes (the test length), and when I check smartctl I get no result at all. It's as if I had never launched any test (no fail, no success in the smartctl result). No error is returned by the command, there is nothing in syslog, etc. As I said before, the test works for other products; thousands of them have passed it. The main smartctl commands used are:

        smartctl -t short /dev/sdX     # launch test
        smartctl -l selftest /dev/sdX  # look at test result

    I have tried to use:

        smartctl -s on /dev/sdX

    or

        smartctl -o on /dev/sdX

    but that doesn't change anything. The system is Debian 6.0 with smartctl v5.40 (rev 3124) x86_64, and the HDDs are attached through a SATA-to-PCI controller, 4 HDDs connected at a time. If anyone has some hints to give on this problem, I'd appreciate it, because I have no idea how I can fix it. Thanks in advance. PS: Not sure if this was a Server Fault topic; sorry if I got it wrong!
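
    One thing worth ruling out (a hedged suggestion, not a confirmed fix): smartctl -l selftest only shows tests the drive actually logged, so polling the drive's own progress field tells you whether the test ever started or was silently aborted. The device name below is a placeholder:

        #!/bin/sh
        # Start the short self-test, then poll its status instead of sleeping blind.
        DEV=/dev/sdX                   # placeholder device

        smartctl -s on "$DEV"          # make sure SMART is enabled first
        smartctl -t short "$DEV"

        # The "Self-test execution status" field in -c output reports progress.
        while smartctl -c "$DEV" | grep -q "in progress"; do
            sleep 10
        done
        smartctl -l selftest "$DEV"

    If the status never shows "in progress" at all on the WD2500BUDT drives, the firmware is refusing or dropping the test, which points at a drive/firmware quirk rather than your station.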

    Read the article

  • Process Power to the People that Create Engagement

    - by Michael Snow
    Organizations often speak about their engagement problems as if the problem is the people they are trying to engage: employees, partners, customers and citizens. The reality of most engagement problems is that the processes put in place to engage are impersonal, inflexible, unintuitive, and often completely ignorant of the population they are trying to serve.

    Life, Liberty and the Pursuit of Delight? How appropriate, during this short week of the US Independence Day holiday, that we're focusing on People, Process and Engagement. As we celebrate this holiday in the US and the historic independence we gained (sorry, Brits!), it's interesting to think back to 1776 and the creation of that pivotal document, the Declaration of Independence. What tremendous pressure to create an engaging document and founding experience they must have felt. "On June 11, 1776, in anticipation of the impending vote for independence from Great Britain, the Continental Congress appointed five men (Thomas Jefferson, John Adams, Benjamin Franklin, Roger Sherman, and Robert Livingston) to write a declaration that would make clear to people everywhere why this break from Great Britain was both necessary and inevitable. The committee then appointed Jefferson to draft a statement. Jefferson produced a 'fair copy' of his draft declaration, which became the basic text of his 'original Rough draught.' The text was first submitted to Adams, then Franklin, and finally to the other two members of the committee. Before the committee submitted the declaration to Congress on June 28, they made forty-seven emendations to the document. During the ensuing congressional debates of July 1-4, 1776, Congress adopted thirty-nine further revisions to the committee draft." (http://www.constitution.org)

    If anything was an attempt at engaging the hearts and minds of the 13 Colonies at the time, this document certainly succeeded in its mission. Their tools at the time were pen and ink and parchment. Although the final document would later be typeset with lead type on a printing press for distribution to the colonies, all of the original drafts were handwritten. And today's enterprise complains about using "Review and Track Changes" at times. Can you imagine the manual revision-control process, or lack thereof? The collaboration? The time delays? Would implementing a better process have helped our founding fathers collaborate better? [The original article shows a rough draft of the Declaration of Independence here, one of many produced during its creation; a comparison across multiple versions of the document is available at http://www.ushistory.org/.]

    While you may not be creating a new independent nation, getting your employees to engage is crucial to your success as a company in today's world. Oracle WebCenter provides the tools that power engagement. Employees who have better tools for communication, collaboration and getting their job done are more engaged employees. Better-engaged employees create more engaged customers and partners.

    Read the article

  • Using Fedora 17's command-line 'mail' program, cannot send to Hotmail

    - by Eric Leschinski
    I am trying to use the console in Fedora 17 to send an automated email to myself. I run this:

        echo "email content" | mail -s "blah" [email protected]

    It works fine; Google treats it as spam, but once you mark it not-spam everything is cool. For Hotmail, there are policies that prevent the email from being delivered at all. I do this:

        echo "email content" | mail -s "blah" [email protected]

    and the email comes back as undeliverable; it does not even appear in the spam folder, and I get this as a response:

        ----- Transcript of session follows -----
        ... while talking to mx3.hotmail.com.:
        >>> MAIL From:<[email protected]> SIZE=685
        <<< 550 DY-001 (BAY0-MC3-F8) Unfortunately, messages from 184.90.101.28 weren't sent. Please contact your
        +Internet service provider. You can tell them that Hotmail does not relay dynamically-assigned IP ranges.
        +You can also refer your provider to http://mail.live.com/mail/troubleshooting.aspx#errors.
        554 5.0.0 Service unavailable

    So apparently Hotmail dislikes spammers so much that they block anything coming from a dynamically-assigned IP range. Google does not do this. What is the easiest way to get around this, send an email to Hotmail, and end up in their spam folder to be unblocked later by the user?
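
    Since Hotmail rejects direct delivery from dynamic IPs outright, the usual workaround is to relay through an authenticated smarthost (your ISP's, or any provider's) instead of delivering directly. A hedged sketch using heirloom-mailx, which is what Fedora's mail command is; the host and credentials are placeholders:

        echo "email content" | mail -s "blah" \
            -S smtp-use-starttls \
            -S smtp=smtp://smtp.example.com:587 \
            -S smtp-auth=login \
            -S smtp-auth-user='[email protected]' \
            -S smtp-auth-password='secret' \
            -S from='[email protected]' \
            [email protected]

    The mail then arrives from the smarthost's static IP, so Hotmail's dynamic-range block never applies.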

    Read the article

  • Additional Security Measures for Syslog over SSH

    - by Eric
    I'm currently working on setting up some secure syslog connections between a few Fedora servers. This is my current setup:

        192.168.56.110 (syslog-server) <---- 192.168.57.110 (syslog-agent)

    From the agent, I am running this command:

        ssh -fnNTx -L 1514:127.0.0.1:514 [email protected]

    This works just fine. rsyslog on the syslog-agent points to @@127.0.0.1:1514 and forwards everything to the server correctly on port 514 via the tunnel. My issue is that I want to be able to lock this down. I am going to use SSH keys so this is automated, because there will be multiple agents talking to the server. Here are my concerns: 1. Someone getting on the syslog-agent and logging into the server directly. I have taken care of this by ensuring that syslog_user has a shell of /sbin/nologin, so that user can't get a shell at all. 2. I don't want someone to be able to tunnel another port over SSH, e.g. 6666:127.0.0.1:21. I know my first line of defense against this is simply not having anything listening on those ports, and it's not an issue, but I want to be able to lock this down somehow. Are there any sshd_config settings on the server that I can use to make it so that only port 514 can be tunneled over SSH? Are there any other major security concerns I'm overlooking at this point? Thanks in advance for your help/comments.
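
    There is a server-side knob for exactly this; a sketch, untested in your layout: PermitOpen restricts which forwarding targets a user may request, and a Match block scopes it to the syslog account. Since the agents forward to 127.0.0.1:514 on the server, that is the only destination to allow:

        # /etc/ssh/sshd_config on syslog-server
        Match User syslog_user
            AllowTcpForwarding yes
            PermitOpen 127.0.0.1:514     # only the syslog tunnel target is allowed
            X11Forwarding no
            AllowAgentForwarding no

    The same restriction can be attached per key instead, by prefixing the agent's key in ~/.ssh/authorized_keys with options such as permitopen="127.0.0.1:514",no-pty,no-X11-forwarding. Any attempt to open 6666:127.0.0.1:21 is then refused by sshd.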

    Read the article

  • Nagios check_host_alive and check_ping not showing host as down

    - by Kyle
    I am using the check_host_alive command to send 5 packets every minute to all my routers at remote locations. I noticed today that I received a notification from the AT&T Global Client Support Center that a router was down (they can take 5-30 minutes to send these notices out) but never received a notice from Nagios. I went onto Nagios and it was showing the host as alive with a latency of 0 ms. This tells me it is treating the automated response from my router in the data center ("TTL expired in transit") as a reply from the remote router. Is there any way for me to tell Nagios to check where the reply is coming from? I feel like other people must have had this issue. I tested it with the check_ping command and it produced the same results. I have the command defined with %hostname% and the proper IP in the host definition, and it works fine for telling me the latency is high. Any ideas are welcome; I have already exercised my Google skills with no results. EDIT:

        root@IM-UBTU:/# /usr/local/nagios/libexec/check_ping -H 192.168.250.1 -w 100.0,10% -c 200.0,20% -vvv
        CMD: /bin/ping -n -U -w 10 -c 5 192.168.250.1
        Output: PING 192.168.250.1 (192.168.250.1) 56(84) bytes of data.
        Output: From 10.69.10.2 icmp_seq=1 Time to live exceeded

    It knows something is wrong; why doesn't it give me a warning?
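
    A hedged workaround while you dig into check_ping's parsing: wrap ping yourself and only count replies whose source is the target, since TTL-exceeded lines report the intermediate hop's address ("From 10.69.10.2 ...") while real echo replies read "bytes from <target>". A sketch, usable as a Nagios plugin (name and thresholds are placeholders):

        #!/bin/sh
        # check_direct_ping - OK only if the target itself answered
        HOST=$1
        if ping -c 5 -W 2 "$HOST" | grep -q "bytes from $HOST"; then
            echo "OK - $HOST answered directly"
            exit 0
        else
            echo "CRITICAL - no direct reply from $HOST (TTL expired or unreachable)"
            exit 2
        fi

    Exit codes 0 and 2 are what Nagios reads as OK and CRITICAL, so this can be dropped in as a check command for those remote routers.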

    Read the article

  • Getting prompted for password accessing page through script even when client and server are in same domain

    - by Munawar
    I'm trying to pull up an internal webpage in an automated fashion using the methods in the 'InternetExplorer.Application' COM object from VBScript, but I get prompted for a password, although the client and the server are both in the same domain. Predictably, when I try to access the web page manually, I don't have any problem; only when I go through cscript.exe or iexplore.exe do I get prompted. I'm trying to automate some of the smoke tests we run after a new build is deployed, and this password prompt is getting in the way. The system specs are: client machine - IE 7.0 on Windows Server 2003; server machine - Windows Server 2008. Both are in the same domain. So far I've unsuccessfully tried the following to automate the password input:

        System.Diagnostics.Process.Start

        var WinHttpReq = new ActiveXObject("WinHttp.WinHttpRequest.5.1");
        WinHttpReq.Open("GET", "http://website", false);
        WinHttpReq.SetCredentials("username", "password", 0);

    Nothing seems to work. I checked in IIS: we have only anonymous and forms authentication enabled. Is there any configuration setting on the client machine that can be tweaked to bypass this? I'd hate to do it, since that means stepping on the toes of twenty people, so the preferable way would be to input it programmatically if that's possible. Also, if you can suggest a more appropriate forum, that'd be great too. Please help.
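
    Incidentally, the WinHttpRequest attempt above never actually fires the request: Open() only prepares it. A hedged completion of the same snippet (same object model, JScript syntax):

        var WinHttpReq = new ActiveXObject("WinHttp.WinHttpRequest.5.1");
        WinHttpReq.Open("GET", "http://website", false);
        // 0 = HTTPREQUEST_SETCREDENTIALS_FOR_SERVER
        WinHttpReq.SetCredentials("username", "password", 0);
        WinHttpReq.Send();                      // without this, nothing is sent
        WScript.Echo(WinHttpReq.ResponseText);

    Note that SetCredentials only answers HTTP-auth challenges (Basic/NTLM); if the site really uses forms authentication, the script has to POST the login form instead.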

    Read the article

  • Pivot table not refreshing sort order

    - by William Anthony
    I have a Pivot Table that gets its data source from another sheet in the same workbook. I want the sort order of the data to be the same as the order in the data source, so I chose "Sort in data source order" in the Pivot Table options. The problem is that when I change the data order on the data source worksheet and then refresh the Pivot Table, the sort order doesn't change. I googled that the Pivot Table should be unlinked first and then re-linked for this to work properly, so I tried the following. The original data source has the named range origdata; the fake data source has the named range dummydata. I manually changed the data source to dummydata and then back to origdata, and the sort order did change as expected. Now I want to automate the operation, so I'm using this code in the Worksheet.Activate event (PT is the PivotTable instance):

        ...
        PT.SourceData = "dummydata"
        PT.RefreshTable
        PT.SourceData = "origdata"
        PT.RefreshTable
        ...

    Changing the data source from VBA doesn't change the sort order the way the manual method did. Why is that? Am I missing something? Maybe some routine is called when I change the data source manually via the toolbar button that isn't triggered here? Thanks in advance for your help.

    Read the article

  • How do I change the Dropbox directory on a headless GNU/Linux server?

    - by DrTwox
    I have installed Dropbox 2.0.0 via the command line on my home server (Ubuntu Server 12.04) to use for off-site automated backups, but I can't change the directory that the Dropbox daemon keeps synced. I've tried the following. The official docs say to use the desktop application, which is not applicable in my situation; however, I installed the desktop app on my desktop machine and changed the default folder location there, but I can't find where this change is stored in the ~/.dropbox/ directory, so I can't make the same change on the server. This page (and several others) recommends a Python script to do the job; looking at the script, it opens a SQLite database called ~/.dropbox/dropbox.db, which does not exist in my Dropbox install, leading me to believe the script is out of date. This forum thread suggests manually inserting the required row in the config.db database, which I did, but it made no difference. I checked the same database file on my desktop machine, and it does not have the dropbox_path key, so I presume the information in that thread is also out of date for version 2.0. I have tried to launch the Dropbox GUI configuration wizard over SSH with X11 forwarding, as suggested in one of the answers, but the binary must detect the absence of a local X11 install, and it starts a command-line daemon instead, which provides no means to change the option I need. I am currently using a symlink, as suggested in an answer, but this is a kludge; I would like to know the correct way to make the change. How do I change the Dropbox directory on a headless GNU/Linux server? Update: I've ditched Dropbox and started using Copy. Their Linux tools and support are far superior to Dropbox's. I leave this question here in case someone, someday, can answer it.
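
    For reference, the symlink kludge mentioned above looks something like this (paths are examples; the dropbox helper command assumes the dropbox.py CLI script is installed):

        # Stop the daemon, move the synced folder, and link it back into place.
        dropbox stop || pkill -f dropboxd
        mv ~/Dropbox /mnt/storage/Dropbox
        ln -s /mnt/storage/Dropbox ~/Dropbox
        ~/.dropbox-dist/dropboxd &

    The daemon follows the symlink happily; it is ugly only in the sense that the real location is hidden from Dropbox's own configuration.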

    Read the article

  • Simple Merging Of PDF Documents with iTextSharp 5.4.5.0

    - by Mladen Prajdic
    As we were working on our first SQL Saturday in Slovenia, we came to a point where we had to print out the so-called SpeedPASSes for attendees. This SpeedPASS file is a PDF and contains their raffle, lunch and admission tickets. The problem is we have to download one PDF per attendee and print that out, and printing more than 10 docs at once is a pain. So I decided to make a little console app that merges multiple PDF files into a single file that is much easier to print. I used an open-source PDF manipulation library called iTextSharp, version 5.4.5.0. This is the console program I used. It's brilliantly named MergeSpeedPASS. It only has two methods and is really short. Don't let the name fool you: it can be used to merge any PDF files. The first parameter is the name of the target PDF file that will be created. The second parameter is the directory containing the PDF files to be merged into a single file.

        using iTextSharp.text;
        using iTextSharp.text.pdf;
        using System;
        using System.IO;

        namespace MergeSpeedPASS
        {
            class Program
            {
                static void Main(string[] args)
                {
                    if (args.Length == 0 || args[0] == "-h" || args[0] == "/h")
                    {
                        Console.WriteLine("Welcome to MergeSpeedPASS. Created by Mladen Prajdic. Uses iTextSharp 5.4.5.0.");
                        Console.WriteLine("Tool to create a single SpeedPASS PDF from all downloaded generated PDFs.");
                        Console.WriteLine("");
                        Console.WriteLine("Example: MergeSpeedPASS.exe targetFileName sourceDir");
                        Console.WriteLine("         targetFileName = name of the new merged PDF file. Must include .pdf extension.");
                        Console.WriteLine("         sourceDir      = path to the dir containing downloaded attendee SpeedPASS PDFs");
                        Console.WriteLine("");
                        Console.WriteLine(@"Example: MergeSpeedPASS.exe MergedSpeedPASS.pdf d:\Downloads\SQLSaturdaySpeedPASSFiles");
                    }
                    else if (args.Length == 2)
                        CreateMergedPDF(args[0], args[1]);

                    Console.WriteLine("");
                    Console.WriteLine("Press any key to exit...");
                    Console.Read();
                }

                static void CreateMergedPDF(string targetPDF, string sourceDir)
                {
                    using (FileStream stream = new FileStream(targetPDF, FileMode.Create))
                    {
                        Document pdfDoc = new Document(PageSize.A4);
                        PdfCopy pdf = new PdfCopy(pdfDoc, stream);
                        pdfDoc.Open();
                        var files = Directory.GetFiles(sourceDir);
                        Console.WriteLine("Merging files count: " + files.Length);
                        int i = 1;
                        foreach (string file in files)
                        {
                            Console.WriteLine(i + ". Adding: " + file);
                            pdf.AddDocument(new PdfReader(file));
                            i++;
                        }
                        if (pdfDoc != null)
                            pdfDoc.Close();
                        Console.WriteLine("SpeedPASS PDF merge complete.");
                    }
                }
            }
        }

    Hope it helps you, and have fun.

    Read the article

  • Block users from Social networking websites while firewall is down

    - by SuperFurryToad
    We currently have a SonicWall firewall, which does a pretty good job of blocking social networking websites like Facebook and Bebo. The problem we are having is that sometimes we need to temporarily disable our firewall blocklist so we can update our company's page on Facebook, for example. Whenever we do this, we see an avalanche of users logging on to their Facebook pages during work time. So we need a way to block access while the firewall is down. For the sake of argument, we have two groups of users: "management" and "standard users". Standard users would have no access to Facebook, but management users would. Perhaps something like a hosts-file redirect for non-management users. This could probably be enforced via group policy calling a bat file that copies down the hosts file, depending on whether the user is in management or not. I'm keen to hear any suggestions for what the best practice would be for this in a Windows/AD environment. Yes, I know what we're doing here is trying to solve an HR problem using IT, but this is the way management wants it, and we have a lot of semi-autonomous branch offices that we don't have much day-to-day contact with, so an automated way of enforcing this would be the most preferable method.
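
    A hedged sketch of that bat file (the group name, sites, and the crude membership test are all placeholders; note that net group output truncates long usernames, so test it before trusting it):

        @echo off
        rem Skip the block for members of the "management" AD group.
        net group "management" /domain | find /i "%USERNAME%" >nul && goto :eof

        rem Everyone else gets the social sites pointed at localhost.
        set HOSTS=%SystemRoot%\System32\drivers\etc\hosts
        echo 127.0.0.1 facebook.com>> %HOSTS%
        echo 127.0.0.1 www.facebook.com>> %HOSTS%
        echo 127.0.0.1 www.bebo.com>> %HOSTS%

    Deploying it as a logon script via GPO covers the branch offices without day-to-day involvement.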

    Read the article

  • ACDSee alternatives for batch editing images

    - by Oxwivi
    I am looking for free, preferably open, alternatives to ACDSee for batch editing work. While I can do much of the work well in ACDSee, it's not entirely satisfactory despite my having paid for it. I need at least the following batch editing functions: resize using either height or width while maintaining the aspect ratio; auto contrast; text overlays; and, occasionally, cropping. Oh, and I make extensive use of renaming features as well. A couple of issues with ACDSee: I always need to highlight the Exposure section or auto contrast will not be applied, despite it being saved in the preset; and I can't define or move the cropping box, forcing me to crop tons of images manually. I'm not an advanced "power photo-editor"; I only need the basic operations I described to be automated. My personal feature wish list (I'm pretty sure something so niche doesn't exist) would be text overlays based on the image names (images are named image-1_1, image-1_2 or image-2_c1_1, image-2_c1_2, and the text overlays would be Image-1, Image-2 C1 and Image-2 C2). I tried digiKam, but damn, that thing is huge; it runs very slowly on my Pentium 4 with 1.5 GB RAM. On top of being a program with over 1 GB of files, the KDE library it uses is always slow, whether it runs on Windows or Linux.
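
    For what it's worth, every operation in that list is scriptable with ImageMagick, which is free and open; a hedged sketch (the size, font size and extension are placeholders, and -normalize is only a rough equivalent of ACDSee's auto contrast):

        # Resize to 800px wide keeping aspect, auto-contrast, caption from the
        # file name, writing to ./out so the originals stay untouched.
        mkdir -p out
        for f in *.jpg; do
            convert "$f" -resize 800x -normalize \
                    -gravity south -pointsize 24 -annotate +0+10 "${f%.*}" \
                    "out/$f"
        done

    The name-based overlay from the wish list then becomes a little string munging on ${f} before the -annotate, and mv handles the batch renaming.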

    Read the article

  • How to connect to a remote server and run some code on that particular server?

    - by seedeg
    I am implementing an automated backup scheme, so I created a shell script which first creates SQL dumps of all MySQL databases and then retrieves all the websites from /var/www on a remote server. The latter works, as I am using rsync to get the remote files. However, obviously, the SQL dumps being created are the ones on the local server, which is not what I want: I want the SQL dumps from the remote server as well. I have passwordless SSH access from the local to the remote server (I added the public key to authorized_keys), so I tried adding the following to the script:

        ssh [email protected]

    Then I tried to retrieve the SQL dumps and exit from the remote server. However, this does not work: I still have to type exit manually in the terminal before the SQL dumps are retrieved, and I don't know why. Basically this is what the script is trying to do:

        # connect to remote server
        ssh [email protected]
        # retrieve SQL dumps
        # ...code to retrieve...
        # exit from remote server
        exit
        # use rsync to get remote files of /var/www from the local server (working)

    Is there a way to connect to the remote host AND run the script's code ON THAT remote host? Many thanks in advance.
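
    The standard pattern is to pass the commands as an argument to ssh, which runs them on the remote host and then returns, instead of opening an interactive login that waits for you. A sketch, with paths and names as examples and MySQL credentials assumed to be in the remote ~/.my.cnf:

        #!/bin/sh
        # Dump all databases ON the remote host, then pull the dump over.
        ssh [email protected] 'mysqldump --all-databases > /tmp/all.sql'
        rsync -avz [email protected]:/tmp/all.sql /backups/sql/

        # Several remote commands can be grouped in a quoted block or heredoc:
        ssh [email protected] <<'EOF'
        mysqldump --all-databases > /tmp/all.sql
        gzip -f /tmp/all.sql
        EOF

    Everything between the heredoc markers executes remotely, so no manual exit is needed and the rest of the script continues locally afterwards.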

    Read the article

  • Change default profile directory per group

    - by Joel Coel
    Is it possible to force Windows to create profiles for members of one Active Directory group in a different folder from members of another Active Directory group? The school here uses DeepFreeze to protect public computers. In a nutshell, DeepFreeze prevents all changes to a hard drive, such that every time you restart the machine, the disk is identical to what it was at the time you froze it. This is a bit different from restoring an image, in that the changes were never permanently written to disk in the first place. This has a few advantages over images: faster recovery times, and it's easy to thaw the machine for a few minutes to perform maintenance such as Windows updates (which can even be automated). DeepFreeze also allows you to configure a "thawspace" partition, where changes are persistent across reboots. One of the weaknesses of DeepFreeze is that you end up needing to create a new profile every time you log in, unless your profile existed at the time the machine was frozen; and even then, any changes you make to your profile while working on a frozen machine are lost. As students frequently have legitimate needs to log in to our classroom machines, there is currently a lot of cleanup involved in removing their old profiles and changes, so I want to extend DeepFreeze to protect our classroom computers as well as the public ones. The problem is that faculty have a real need to keep a stateful profile locally on these classroom computers. The solution I would like is to configure Windows, via group policy (or even manually, if that's the way I'll have to do it), to place profile folders on the thawspace partition, but only for members of the faculty security group. Is this possible?

    Read the article

  • Doing arithmetic and passing it to the next command

    - by neurolysis
    I know how to do this in /bin/sh, but I'm struggling a bit in Windows. I know you can do arithmetic on 32-bit signed integers with:

        SET /a 2+2
        4

    But how do I pass this to the next command? For example, the process I want to perform is as follows. Consumer editions of Windows have no native sleep command (I believe?), so the usual way to sleep is to use PING with the -n switch, which waits that many seconds, minus one. The following command is effective for a silent two-second sleep:

        PING localhost -n 3 > NUL

    But I want to alias this into a sleep command. I'd like it to be elegant, so that you enter the actual number of seconds you want to sleep after the command. Right now I can do:

        DOSKEY SLEEP=PING 127.0.0.1 -n $1 > NUL

    which works, but it always sleeps 1 second less than your input, so to sleep for one second you have to run SLEEP 2. That's not exactly ideal. Is there some way for me to compute $1+1 and pass the result on to the next command in Windows? I assume there is some way of using STDOUT...
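
    One workaround, sketched: move the arithmetic into a tiny batch file instead of the DOSKEY macro, since SET /a can assign its result to a variable that PING then uses. Saved as sleep.cmd somewhere on %PATH% (the name and location are your choice):

        @echo off
        rem sleep.cmd - sleep for %1 seconds using ping's -n interval trick
        set /a count=%1+1
        ping 127.0.0.1 -n %count% > nul

    With that in place, sleep 1 really waits one second, and the off-by-one DOSKEY alias is no longer needed.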

    Read the article

  • Deploy our own software using Puppet?

    - by Ken
    (Apologies in advance for the stupidity in this question. I'm normally a programmer, not a sysadmin, but I've taken it upon myself to automate some things and clean up some other things which are automated but not in the prettiest way. :-) I've been looking around at various tools for automating software deployment to a bunch of servers, like cfengine, Puppet, and Chef. So far, Puppet looks the most appealing, but I've certainly not committed to anything yet. These tools all look like they can do a great job of keeping a bunch of servers up to date with prepackaged software. What I don't get is: how does one use a tool like Puppet to manage deployments of our own internal software? I think I'm at a loss because I've seen a thousand tutorials showing how to keep Apache ensure => latest (which is pretty cool), but nothing that quite corresponds to my use case today, which is something more like: when a human being pushes The Button, 1. pull branch A from version-control repository B, 2. run command C to compile it, 3. copy the binaries D to servers E1 through E10, and 4. on each server, run command F to make all the changes take effect. Puppet sounds great, and I totally see the advantage of declarative, idempotent configuration over some shell scripts, but I've not seen any tutorials for "you want to convert your shell scripts to Puppet (or Chef, or cfengine), so here's what you should do". Is there such a thing? Is it obvious to other people how to take the things provided in the Puppet docs and replicate the behavior I want? Am I just not getting it? What it sounds like to me, so far, is that the human being (#1) would manually package the software (#2 and #3) outside of Puppet, then manually update the Puppet config, which would trigger Puppet to update the servers... maybe? (I'm a little confused here, as I'm sure you can tell.) Thanks!
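
    For flavor, the declarative half of that workflow (steps 3-4) in Puppet looks roughly like the sketch below; every name in it is invented, and steps 1-2 would stay in a build script or CI job that drops the binary where the puppet master can serve it:

        # Hypothetical module: keep the binary current, restart on change.
        class myapp {
          file { '/opt/myapp/myapp':
            ensure => file,
            source => 'puppet:///modules/myapp/myapp',  # binaries "D" from the build
            mode   => '0755',
            notify => Service['myapp'],                 # command "F", as a service
          }
          service { 'myapp':
            ensure => running,
            enable => true,
          }
        }

    "The Button" then amounts to publishing a new binary (or bumping a package version) and letting the agents converge, rather than pushing to each server yourself.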

    Read the article

  • Tools required for a Web Development Project

    - by RBA
    Hi, I want to build a project on Linux which involves several languages (C, Perl, PHP, HTML, XML, etc.), basically a web-based project. I have chosen to build on Linux because it is open source, and because many things can be automated through scripting languages in ways I don't know how to do on Windows. So I have installed Linux in a virtual machine (host: Windows 2007, guest: CentOS, command-line only). Since I am a beginner, I want to know what tools can be used to facilitate and ease my development process. Some which I know are listed below; please share your experience on this. 1) Using PuTTY, so I can access the Linux machine from anywhere within the network. 2) Since I want to develop on Linux but use Windows as the development platform, I have downloaded the Eclipse editor (C/PHP) on Windows. But I want to know: how can I access the Linux files from there? 3) I have installed Samba, and I am still trying to figure out how to access the Linux files remotely from Windows. 4) Please share your experience on how I can ease my development process, and which other tools I can use. Please let me know if you need any other clarification.
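
    On points 2 and 3, a minimal Samba share is usually enough to let Eclipse on Windows open the Linux tree as a mapped drive; a sketch, with the path and user as placeholders:

        # /etc/samba/smb.conf - minimal share for a development tree
        [devfiles]
            path = /home/dev
            valid users = devuser
            read only = no

    Then add the Samba user and start the service (smbpasswd -a devuser; service smb start) and map \\centos-box\devfiles from Windows. An alternative worth a look is editing over SSH instead, e.g. with Eclipse's Remote System Explorer, since PuTTY/SSH is already in your list.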

    Read the article

  • What methods are available for updating a non-Internet-connected VMWare ESXi host?

    - by romandas
    I have a stand-alone installation of VMware vSphere Essentials, with a vCenter Server and 3 ESXi 4.0 host servers. The environment is intended to remain a stand-alone network, with the exception that I can "float" a workstation or server between the 'Net and the VMware network for patches and maintenance. In other installations, where the Internet is available, I've used the vSphere Host Update Utility to connect to VMware and then apply the patches to the ESXi hosts. My problem is that this utility does not seem to function unless it can connect to both VMware and the ESXi host at the same time: the scan-for-patches function will not scan the server without first connecting to VMware's site to sync its repository. Even if I sync it, disconnect from the 'Net and connect to the VMware network, it still won't scan hosts for required patches; it prompts to sync with VMware, and if you click No, the scan does not occur. Does anyone know of other options for updating the ESXi hosts in some automated fashion? I believe I can manually pull down the required patches and apply them, but this will not scale well, and in the future I'm sure I'll want something more scalable.
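
    One offline-friendly route, hedged (the bundle name below is an example; match the actual patch bundles to your build): download the offline-bundle ZIPs from VMware's patch portal on the connected side, carry them across, and apply them with the vSphere CLI's vihostupdate against each host in maintenance mode:

        # Enter maintenance mode, apply the offline bundle, then exit/reboot.
        vicfg-hostops.pl --server esx01 --operation enter
        vihostupdate.pl  --server esx01 --install --bundle ESXi400-201110001.zip
        vicfg-hostops.pl --server esx01 --operation exit

    Wrapped in a loop over the three hosts, this is scriptable enough to scale, and nothing in it needs to reach the Internet.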

    Read the article

  • Why do I need to set up Autologon values in the registry twice before it works, and can I fix this?

    - by jJack
    Background: As part of an automated testing suite I am building, I need to set up Autologon on my virtual machines "on demand". By on demand, I mean that I don't want to pre-configure my VM or any snapshot to have Autologon set up already, for security reasons and also for a huge business case. My solution so far: I copy a script to the guest machine and then use Sysinternals PsExec to execute it. The script is:

        reg add "hklm\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /f /v DefaultUserName /t REG_SZ /d myusername
        reg add "hklm\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /f /v DefaultPassword /t REG_SZ /d myfakepassword
        reg add "hklm\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /f /v DefaultDomainName /t REG_SZ /d mydomain
        reg add "hklm\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /f /v ForceAutoLogon /t REG_SZ /d 1
        reg add "hklm\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /f /v AutoAdminLogon /t REG_SZ /d 1
        reg add "hklm\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\AutoLogonChecked" /f /ve /d 1

    Note: I don't believe AutoLogonChecked is required for machines after Windows 2000, but I'm setting it just in case for now; maybe ForceAutoLogon isn't required either, not sure yet. The problem: I can see that PsExec executes this properly and all the values end up in the registry, yet when I restart the machine, the user isn't automatically logged on. When I run the script a second time and then restart the machine, the user is finally logged on. A diff between the registry states shows that the first time I run it, both the "1" for AutoAdminLogon and the DefaultPassword key are missing; the second time I execute it, these values are correctly intact as I intended. So, what is going on here? Is this expected? This post claims in the end that it really all just works (the problem there was a logoff script resetting the values), but that doesn't seem to be the case for me.
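
    If it stays flaky, one hedged alternative is Sysinternals' own Autologon tool, which sets the same Winlogon values (and can store the password via LSA rather than as plain REG_SZ) and takes its input on the command line, so it can be pushed through PsExec the same way as the script:

        autologon.exe myusername mydomain myfakepassword /accepteula

    That at least removes reg.exe from the equation when narrowing down which write is being lost on the first pass.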

    Read the article

  • Copying Windows Home Server backup offsite

    - by Simon
    What ways are there to copy a Windows Home Server backup to an offsite location? I'm talking specifically (and only) about the automated backup of my entire machine, not the shared network folders. I work away from home 90% of the time on my laptop, which has a 640 GB drive, so the shared folders are essentially useless to me. I back up every night, but if my house burns down or is broken into, I'm in serious, serious trouble! I'm really looking for some alternative way to back up my entire machine, which must not interfere with the reliability or speed with which WHS backs up my laptop every night. Either a way to "export" a complete machine backup from the server, or recommendations on non-conflicting software with which I can back up to a 1 TB drive at work, are what I'm looking for. Note: I believe that WHS uses its own completely proprietary backup and doesn't use things like a backup bit or archive bit; I just don't want to install some other backup software that will conflict. PS: I'm now running Windows 7 and just realized that I should probably check out the backup functionality it gives me. I assume that won't conflict, right? Edit: Thanks for the hosted solutions. I'd also appreciate ways to back up to an "offsite" location that I control, like my office vs. my home; I think the hosted solutions will be too slow or expensive for my needs.

    Read the article

  • In a shell script, check the version of an installed package and make a decision based on the output

    - by DJDarkViper
    I'm looking to write a cross-distro / cross-version shell script that makes sure a forced version of PHP is installed. Example: Ubuntu 12.04 has 5.3, Ubuntu 13.10 has 5.5, Debian 7 has 5.4. I need this script, when run on a distro that has an old version of PHP, to update the repo to point to a package for 5.4, and if the distro has too new a version, to downgrade to 5.4 appropriately. I'll be perfectly frank: I don't yet have a full grasp of what the shell can do, and I'm still not totally used to the existing tools. The best I can think of at the moment is:

        php -v | grep "PHP 5"

    but that returns a bunch of granular, changeable characters (PHP 5.4.4-14+deb7u5 (cli) (built: Oct 3 2013 09:24:58)), and I'm not sure what to pipe it through to extract just the characters I'm interested in. I'm not sure I'm being totally clear; basically, in an automated shell script for Linux distros, how do I extract the PHP version (preferably just the version number) and make a decision based on that output? EDIT: This line ended up doing pretty dang good:

        php -v | grep "PHP 5" | sed 's/.*PHP \([^-]*\).*/\1/' | cut -c 1-3

    It's a bit long in the tooth, but it gives me "5.3", "5.4", and "5.5", which is exactly what I need to work with.
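
    A tighter alternative, sketched: PHP can report its own version directly, with no text parsing to break when packagers add suffixes:

        # Ask PHP itself -- prints e.g. "5.4"
        ver=$(php -r 'echo PHP_MAJOR_VERSION . "." . PHP_MINOR_VERSION;')

        case "$ver" in
            5.4) echo "PHP 5.4 already installed" ;;
            5.3) echo "too old: add a 5.4 repo and upgrade" ;;
            *)   echo "newer than 5.4: pin/downgrade to 5.4" ;;
        esac

    PHP_MAJOR_VERSION and PHP_MINOR_VERSION have been built-in constants since PHP 5.2.7, so this works on every version in your list.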

    Read the article

  • Outlook signature distribution tools?

    - by HannesFostie
    Hi, We are soon changing our corporate identity, and as such we will need to change our Outlook signatures. However, being some 125 people, my fellow sysadmin and I don't want to go around changing these manually, and are thus looking for a good way to do this fully automated. Most of our desktops run XP, with an exceptional few on Windows 7; most run Outlook 2007, some 2003. Our environment is AD-centered, and most of the information (telephone number, title, etc.) will come from AD. The biggest problem I can see so far is that, because we are bilingual (Dutch and French), there will be two versions of the signature, depending on a person's main language. People currently have nothing in AD to distinguish this, but we could create a group for it, or perhaps add some sort of attribute. A cheap if not free tool would be great. eMailSignature could probably do most, if not all, of this for us, but it's a rather expensive tool, costing some 1250 euro. We just want to distribute the signatures; actual "management" is less important, as job titles don't change all that much. Any tips are welcome!
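
    The classic free route is a logon script that reads the user's AD attributes and writes the signature file into the Outlook signatures folder. A trimmed VBScript sketch; the file name is a placeholder, the language-group test is left out, and note that the Signatures folder name is localized on Dutch/French Office installs:

        Set objSysInfo = CreateObject("ADSystemInfo")
        Set objUser = GetObject("LDAP://" & objSysInfo.UserName)

        sig = objUser.DisplayName & vbCrLf & _
              objUser.Get("title") & vbCrLf & _
              "Tel: " & objUser.Get("telephoneNumber")

        Set wsh = CreateObject("WScript.Shell")
        Set fso = CreateObject("Scripting.FileSystemObject")
        path = wsh.ExpandEnvironmentStrings("%APPDATA%") & _
               "\Microsoft\Signatures\Corporate.txt"
        fso.OpenTextFile(path, 2, True).Write sig

    Membership in a "lang-NL" versus "lang-FR" group (or the custom attribute you mention) can then pick between two templates, and a GPO logon script pushes it to all 125 users.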

    Read the article

  • When run as a scheduled task, cannot save an Excel workbook when using the Excel.Application COM object in PowerShell

    - by Daniel Richnak
    I'm having an issue where I've automated creating an Excel.Application COM object in PowerShell, adding some data to a workbook, and then saving the document as an .xlsx. This works fine if:

        1. I'm already in the PowerShell interactive host and either run each command in sequence or execute it as a .ps1.
        2. I run it from cmd.exe, using the syntax: powershell.exe -command "c:\path\to\powershellscript.ps1"
        3. I create a scheduled task in Windows 7 / Server 2008 R2, use the above powershell.exe -command syntax, and use the mode "Run only when the user is logged on".

    It fails when I modify the same scheduled task but set it to "Run whether the user is logged on or not". Here's a sample script that illustrates the problem I'm having:

        $Excel = New-Object -Com Excel.Application
        $Excelworkbook = $Excel.Workbooks.Add()
        $Excelworkbook.SaveAs("C:\temp\test.xlsx")
        $Excelworkbook.Close()

    I have a theory that the COM object fails somehow if my profile isn't loaded / if it's not performed in a command window. Any ideas on which options to choose when creating the scheduled task, or which options to use when creating the Excel object or calling SaveAs()? Can anybody reproduce this? I've seen this behavior on both Server 2008 R2 and Windows 7; I haven't tried other platforms.
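
    A widely-reported workaround for Office COM automation in non-interactive sessions, hedged but cheap to try: Excel expects a Desktop folder under the system profile and fails on SaveAs when it is missing. Creating the folder(s) is enough on many systems:

        mkdir C:\Windows\System32\config\systemprofile\Desktop
        mkdir C:\Windows\SysWOW64\config\systemprofile\Desktop    # on 64-bit hosts

    Microsoft's own position is that Office is not supported for unattended automation at all, so if this stays fragile, the robust fix is writing the .xlsx without Excel (for example via the OpenXML SDK or a library like EPPlus).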

    Read the article

  • RAID 10 or RAID 5 for multiple VMs - what is the best choice?

    - by Lars Fastrup
    I have just ordered a new rig for my business. We do a lot of software development for Microsoft SharePoint and need the rig to run several virtual machines for development and test purposes; we will be using the free VMware ESXi for virtualization. For a start, we plan to build and start the following VMs, all with Windows Server 2008 R2 x64:

        1. Active Directory server
        2. MS SQL Server 2008 R2
        3. Automated build server
        4. SharePoint 2010 server, hosting our public web site and our internal intranet for a few people (the load on this server is going to be quite insignificant)
        5. 2x SharePoint 2007 development servers
        6. 2x SharePoint 2010 development servers

    Beyond that, we will need to build several SharePoint farms for testing purposes; those VMs will only be started when needed. The specs of the new rig are: Dell R610 rack server, 2x Intel Xeon E5620, 48 GB RAM, 6x 146 GB SAS drives, Dell H700 RAID controller. We believe the new server is going to make our VMs perform a lot better than our existing setup (2x Intel Xeon, 16 GB RAM, 2x 500 GB SATA in RAID 1). But we are not sure about the RAID level for the new rig: should we put the six 146 GB SAS drives in a RAID 10 configuration or a RAID 5 configuration? RAID 10 seems to offer better write performance and a lower risk of RAID failure, but it comes at the cost of less drive space. Do we need RAID 10, or would RAID 5 also be a good choice for us?
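
    For concreteness, the capacity arithmetic with six 146 GB disks: RAID 10 gives 3 x 146 = 438 GB usable and tolerates one failure per mirrored pair, while RAID 5 gives 5 x 146 = 730 GB but survives only one failure in total, and every random write pays a parity read-modify-write penalty, which matters when many VMs share the same spindles.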

    Read the article
