Search Results

Search found 8172 results on 327 pages for 'job interview'.

Page 96/327 | < Previous Page | 92 93 94 95 96 97 98 99 100 101 102 103  | Next Page >

  • Help with cron syntax

    - by Randy
    I need to set up a cron job on my web host. The documentation for my web app reads as follows: you will need to create the following cron job: /public_html/cake/console/cake -app /public_html/app master Also, I want any output written to a log file. My host's documentation says this: You can have cron send an email every time it runs a command. If you do not want an email to be sent for an individual cron job you can redirect the command's output to /dev/null like this: mycommand > /dev/null 2>&1 Can someone help me write the cron job? I don't know the syntax at all. Thanks for the help!
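
    A minimal crontab sketch, assuming an hourly schedule and a log path of /home/USER/cron.log (both placeholders; the real schedule and paths depend on the host):

      # run the CakePHP console every hour, appending stdout and stderr to a log file
      0 * * * * /public_html/cake/console/cake -app /public_html/app master >> /home/USER/cron.log 2>&1

    The five leading fields are minute, hour, day of month, month and day of week; ">> logfile 2>&1" appends both output streams to the file instead of having cron mail them.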

    Read the article

  • What should I learn next?

    - by Krysten
    I am a CS major. I've taken 2 courses in C (Intro to C and CS1) and an introductory OOP course with Java. I really like Java and feel that I have a firm understanding of OOP concepts. I am really interested in web development and would like to learn a programming language that can be used to build dynamic web applications. My question is: what language should I learn? I've narrowed it down to Python or Ruby. Also, I want to learn a programming language that will help me get a job upon graduating. So essentially, I will use this particular language to build applications that will help me get a job in the future.

    Read the article

  • Viviane Reding envisions the creation of a European CIA, a counterweight to the NSA by 2020?

    Viviane Reding envisions the creation of a European CIA, a counterweight to the NSA by 2020? In an interview given to the Greek newspaper Naftemporiki, Viviane Reding, vice-president of the Brussels Commission and commissioner for justice, proposed strengthening cooperation between the intelligence services of the Union's member states. "A counterweight to the American National Security Agency (NSA) must be formed. That is why I propose creating a European intelligence service..."

    Read the article

  • How can a single database work for website, mobile apps?

    - by user1696497
    We have developed a job portal where users can view jobs and also post jobs. We used PHP and MySQL and hosted it on WebFaction. Now we want to develop mobile apps of the job portal for Android, iOS and Windows. The database should stay synchronous and dynamically aligned between the apps and the website. Since the back-end code has to be written in Java for Android and C# for Windows, how do we manage a single synchronous database?
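
    A common way to keep one MySQL database behind several front ends, not stated in the article, is to expose it through a small HTTP/JSON API that the website and the Android, iOS and Windows apps all call, so no client talks to the database directly. A minimal PHP sketch, assuming a hypothetical api/jobs.php endpoint and placeholder credentials (fetch_all() requires the mysqlnd driver):

      <?php
      // api/jobs.php - hypothetical endpoint; every client reads the same MySQL data through it
      $db = new mysqli('localhost', 'db_user', 'db_pass', 'jobportal'); // placeholder credentials
      $rows = $db->query('SELECT id, title, description FROM jobs ORDER BY id DESC LIMIT 50')
                 ->fetch_all(MYSQLI_ASSOC);
      header('Content-Type: application/json');
      echo json_encode($rows);

    The mobile apps then only need an HTTP client and a JSON parser, and the single database stays authoritative for all of them.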

    Read the article

  • What companies do what I'm interested in? [closed]

    - by Alex
    I'm a systems guy. People change their concentrations to avoid taking operating systems, while I took it during my first semester after transferring. I'm taking compilers and networks now, and I think they're awesome. And yet there are so many job postings looking for people to do work in things like web development, and so few postings looking for people to work in kernel hacking or network engineering. What sorts of companies do these things? I'm currently awaiting a contract in the mail for an internship with VMware, so I'm not out of a job for the summer. Still, I'd like to know which companies do these things.

    Read the article

  • Money investments for software developers [closed]

    - by user91590
    Let's say you (or your company) have an amount of money that you would like to invest in improving your career, and probably make more money in the long run by landing a better job or raising your productivity. There are some classical investments that anyone would suggest: spend on programming books, since they pay for themselves very quickly; buy a better chair, since you will stay on that all day; attend conferences to improve your network of contacts. I'm looking for more examples to weigh my options in the quest for improvement. Google does not help as it only shows job ads for programmers in the financial sector.

    Read the article

  • What is the standard term for my role?

    - by sigil
    I'm doing work that involves writing code and managing developers in a "special projects" division of a large company. I'd like to define my role better and figure out if there's an industry-standard term for what I do, so that it will be easier for me to research best practices and work on a career path. What I do all day:
    - A macro that connects an Excel sheet to an Access database is acting funny; I get called in to figure out what's happening and debug it.
    - Someone needs data extracted from a bunch of files on SharePoint. I figure out a client-side solution because I'm not authorized to do anything server-side, and getting IT to do anything would take several months and need a business case.
    - A manager wants a new data entry tool for their team. I interview the manager and team members to work out the functional requirements, then design/develop/test the application.
    - Someone needs a VBA script to crunch some data for their presentation that's due in two hours. I drop everything I'm doing to hack out a quick script and run the analysis, without much in the way of testing.
    - A developer has been hired to build a database for one of the teams, since I'm working on too many different things and don't have time to take this project on in the timeframe required. I direct his work and push him to meet certain deadlines, interview stakeholders to get more info that will help him figure out how to build the necessary forms, and modify the functional requirements of the database to fit in the timeframe.
    - Someone wants to load a set of data into a GIS system and set up an ongoing refresh and reporting of this data set. I facilitate the conversation between the GIS developers and the owners of this data set, and design a demo application as proof of concept.
    It's kind of an "all-purpose programming and IT management" position, but it's not officially IT because the company has an actual IT department with a rigorously defined system of submitting requests, developing code, and managing projects. What I do, I guess, is more of a handyman job, where stuff falls to me because I'm the geekiest one in the room. Is there a standard term in the software world for what I do?

    Read the article

  • Is it a bad practice to quit a company only to begin as a consultant? [closed]

    - by niwi
    Like the title says: is it a bad practice to quit a company after a few years, only to begin as a consultant for the same 'customer'? I've been lucky enough to come into a company in the oil business from the beginning, developing software for relatively new and unused technology. Long before I even got this job, I wanted to start my own consulting firm. Is it morally wrong of me to quit my job after a few years, only to hire myself out as a 'specialist' or consultant on our systems?

    Read the article

  • Need help displaying a dynamic table

    - by Gideon
    I am designing a job rota planner for a company and need help displaying a dynamic table containing the staff details. I have the following tables in a MySQL database: Staff, Event, and Job. The Staff table holds staff details (staffid, name, address...etc), the Event table (eventid, eventName, Fromdate, Todate...etc) and the Job table holds (Jobid, Jobdate, Eventid(fk), Staffid (fk)). I need to dynamically display the available staff list from the Staff table when the user selects the EVENT and the DATE (3 drop downs: date, month, and year) from a PHP form. I need to display staff members that have not been assigned work on the selected date by checking the Jobdate in the Job table. I have been at this all day and can't get around it. I am still learning PHP and would surely appreciate any help I can get. My current code displays all staff members when an event is selected:
      if(isset($_POST['submit'])) {
          $eventId = $_POST['eventradio'];
      }
      $timePeriod = $_POST['timeperiod'];
      $Day = $_POST['day'];
      $Month = $_POST['month'];
      $Year = $_POST['year'];
      $dateValue = $Year."-".$Month."-".$Day;
      $selectedDate = date("Y-m-d", strtotime($dateValue));
      //construct the available staff list
      if ($selectedDate) {
          $staffsql = "SELECT s.StaffId, s.LastName, s.FirstName FROM Staff s WHERE s.StaffId NOT IN (SELECT J.StaffId FROM Job J WHERE J.JobDate != ".$selectedDate.")";
          $staffResult = mysql_query($staffsql) or die (mysql_error());
      }
      if ($staffResult){
          echo "<p><table cellspacing='1' cellpadding='3'>";
          echo "<th colspan=6>List of Available Staff</th>";
          echo "</tr><tr><th> Select</th><th>Id</th><th></th><th>Last Name </th><th></th><th>First Name </th></tr>";
          while ($staffarray = mysql_fetch_array($staffResult)) {
              echo "<tr onMouseOver= this.bgColor = 'red' onMouseOut =this.bgColor = 'white' bgcolor= '#FFFFFF'> <td align=center><input type='checkbox' name='selectbox[]' id='selectbox[]' value=".$staffarray['StaffId']."> </td><td align=left>".$staffarray['StaffId']." </td><td>&nbsp&nbsp</td><td align=center>".$staffarray['LastName']." </td><td>&nbsp&nbsp</td><td align=center>".$staffarray['FirstName']." </td></tr>";
          }
          echo "</table>";
      } else {
          echo "<br> The Staff list can not be displayed!";
      }
      echo "</td></tr>";
      echo "<tr><td></td>";
      echo "<td align=center><input type='submit' name='Submit' value='Assign Staff'>&nbsp&nbsp";
      echo "<input type='reset' value='Start Over'>";
      echo "</td></tr>";
      echo "</table>";
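
    A likely culprit, though not confirmed in the article, is the subquery: to exclude staff already booked on the selected date, the inner condition should match the date (= rather than !=) and the date literal needs quotes. A sketch of the corrected query, keeping the article's variable names:

      $staffsql = "SELECT s.StaffId, s.LastName, s.FirstName
                   FROM Staff s
                   WHERE s.StaffId NOT IN
                         (SELECT J.StaffId FROM Job J WHERE J.JobDate = '".$selectedDate."')";

    As written it still interpolates user input into SQL; the old mysql_* functions are deprecated, and a parameterized mysqli or PDO query would avoid injection.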

    Read the article

  • HDFS some datanodes of cluster are suddenly disconnected while reducers are running

    - by user1429825
    I have 8 slave computers and 1 master computer running Hadoop (ver 0.21). Some datanodes of the cluster were suddenly disconnected while I was running MapReduce code on 10GB of data. After all mappers finished and around 80% of the reducers had been processed, randomly one or more datanodes disconnected from the network, and then the other datanodes started to disappear from the network even if I killed the MapReduce job when I found some datanode was disconnected. I've tried changing dfs.datanode.max.xcievers to 4096, turned off the firewalls of all computing nodes, disabled SELinux and increased the open-file limit to 20000, but they didn't work at all... Does anyone have an idea how to solve this problem? The following is the error log from MapReduce:
      12/06/01 12:31:29 INFO mapreduce.Job: Task Id : attempt_201206011227_0001_r_000006_0, Status : FAILED java.io.IOException: Bad connect ack with firstBadLink as ***.***.***.148:20010 at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:889) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:820) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:427)
    And the following are logs from a datanode:
      2012-06-01 13:01:01,118 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_-5549263231281364844_3453 src: /*.*.*.147:56205 dest: /*.*.*.142:20010
      2012-06-01 13:01:01,136 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(*.*.*.142:20010, storageID=DS-1534489105-*.*.*.142-20010-1337757934836, infoPort=20075, ipcPort=20020) Starting thread to transfer block blk_-3849519151985279385_5906 to *.*.*.147:20010
      2012-06-01 13:01:19,135 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(*.*.*.142:20010, storageID=DS-1534489105-*.*.*.142-20010-1337757934836, infoPort=20075, ipcPort=20020):Failed to transfer blk_-5797481564121417802_3453 to *.*.*.146:20010 got java.net.ConnectException: Connection timed out at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:701) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:373) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:1257) at java.lang.Thread.run(Thread.java:722)
      2012-06-01 13:06:20,342 INFO org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Verification succeeded for blk_6674438989226364081_3453
      2012-06-01 13:09:01,781 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(*.*.*.142:20010, storageID=DS-1534489105-*.*.*.142-20010-1337757934836, infoPort=20075, ipcPort=20020):Failed to transfer blk_-3849519151985279385_5906 to *.*.*.147:20010 got java.net.SocketTimeoutException: 480000 millis timeout while waiting for channel to be ready for write.
ch : java.nio.channels.SocketChannel[connected local=/*.*.*.142:60057 remote=/*.*.*.147:20010] at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:246) at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:164) at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:203) at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendChunks(BlockSender.java:388) at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:476) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:1284) at java.lang.Thread.run(Thread.java:722) hdfs-site.xml <configuration> <property> <name>dfs.name.dir</name> <value>/home/hadoop/data/name</value> </property> <property> <name>dfs.data.dir</name> <value>/home/hadoop/data/hdfs1,/home/hadoop/data/hdfs2,/home/hadoop/data/hdfs3,/home/hadoop/data/hdfs4,/home/hadoop/data/hdfs5</value> </property> <property> <name>dfs.replication</name> <value>3</value> </property> <property> <name>dfs.datanode.max.xcievers</name> <value>4096</value> </property> <property> <name>dfs.http.address</name> <value>0.0.0.0:20070</value> <description>50070 The address and the base port where the dfs namenode web ui will listen on. If the port is 0 then the server will start on a free port. </description> </property> <property> <name>dfs.datanode.http.address</name> <value>0.0.0.0:20075</value> <description>50075 The datanode http server address and port. If the port is 0 then the server will start on a free port. </description> </property> <property> <name>dfs.secondary.http.address</name> <value>0.0.0.0:20090</value> <description>50090 The secondary namenode http server address and port. If the port is 0 then the server will start on a free port. </description> </property> <property> <name>dfs.datanode.address</name> <value>0.0.0.0:20010</value> <description>50010 The address where the datanode server will listen to. If the port is 0 then the server will start on a free port. </description> <property> <name>dfs.datanode.ipc.address</name> <value>0.0.0.0:20020</value> <description>50020 The datanode ipc server address and port. If the port is 0 then the server will start on a free port. 
</description> </property> <property> <name>dfs.datanode.https.address</name> <value>0.0.0.0:20475</value> </property> <property> <name>dfs.https.address</name> <value>0.0.0.0:20470</value> </property> </configuration> mapred-site.xml <configuration> <property> <name>mapred.job.tracker</name> <value>masternode:29001</value> </property> <property> <name>mapred.system.dir</name> <value>/home/hadoop/data/mapreduce/system</value> </property> <property> <name>mapred.local.dir</name> <value>/home/hadoop/data/mapreduce/local</value> </property> <property> <name>mapred.map.tasks</name> <value>32</value> <description> default number of map tasks per job.</description> </property> <property> <name>mapred.tasktracker.map.tasks.maximum</name> <value>4</value> </property> <property> <name>mapred.reduce.tasks</name> <value>8</value> <description> default number of reduce tasks per job.</description> </property> <property> <name>mapred.map.child.java.opts</name> <value>-Xmx2048M</value> </property> <property> <name>io.sort.mb</name> <value>500</value> </property> <property> <name>mapred.task.timeout</name> <value>1800000</value> <!-- 30 minutes --> </property> <property> <name>mapred.job.tracker.http.address</name> <value>0.0.0.0:20030</value> <description> 50030 The job tracker http server address and port the server will listen on. If the port is 0 then the server will start on a free port. </description> </property> <property> <name>mapred.task.tracker.http.address</name> <value>0.0.0.0:20060</value> <description> 50060 </property> </configuration>
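
    Not from the article, but one setting that lines up with the 480000 ms write timeout in the datanode log is the HDFS socket timeout pair. A hedged hdfs-site.xml sketch that raises both values (the numbers are placeholders to experiment with, not a confirmed fix, and a timeout increase only masks whatever is stalling the network):

      <property>
        <name>dfs.socket.timeout</name>
        <value>600000</value> <!-- read timeout in ms; the usual default is 60000 -->
      </property>
      <property>
        <name>dfs.datanode.socket.write.timeout</name>
        <value>960000</value> <!-- write timeout in ms; the usual default is 480000, matching the log -->
      </property>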

    Read the article

  • Networking stopped working on Ubuntu

    - by 1337Rooster
    I installed Ubuntu 10.04 through the Wubi installer (funny, I installed it today and thought I would have gotten 10.10). I had a network connection and everything was working fine. I rebooted my computer a couple of times and then suddenly I could not connect to the network, and when I click the wireless/networking icon it says "Networking Disabled". I reinstalled Ubuntu and the problem went away. After a few reboots the problem returned. I have tried restarting to see if it would come back, as well as a few other things listed below. Any other suggestions would be appreciated.
    Tried to restart networking via /etc/init.d/networking:
      amato@ubuntu:~$ sudo /etc/init.d/networking restart
      * Reconfiguring network interfaces... Ignoring unknown interface eth0=eth0. [ OK ]
    Tried to stop and start it:
      amato@ubuntu:~$ sudo /etc/init.d/networking stop
      * Deconfiguring network interfaces... [ OK ]
      amato@ubuntu:~$ sudo /etc/init.d/networking start
      Rather than invoking init scripts through /etc/init.d, use the service(8) utility, e.g. service networking start
      Since the script you are attempting to invoke has been converted to an Upstart job, you may also use the start(8) utility, e.g. start networking
      networking stop/waiting
    Tried start networking:
      amato@ubuntu:~$ start networking
      start: Rejected send message, 1 matched rules; type="method_call", sender=":1.58" (uid=1000 pid=2241 comm="start) interface="com.ubuntu.Upstart0_6.Job" member="Start" error name="(unset)" requested_reply=0 destination="com.ubuntu.Upstart" (uid=0 pid=1 comm="/sbin/init"))
      amato@ubuntu:~$ sudo start networking
      networking stop/waiting
    Tried service networking restart:
      amato@ubuntu:~$ service networking restart
      restart: Rejected send message, 1 matched rules; type="method_call", sender=":1.60" (uid=1000 pid=2248 comm="restart) interface="com.ubuntu.Upstart0_6.Job" member="Restart" error name="(unset)" requested_reply=0 destination="com.ubuntu.Upstart" (uid=0 pid=1 comm="/sbin/init"))
      amato@ubuntu:~$ sudo service networking restart
      restart: Unknown instance:
    Here are the contents of my /etc/network/interfaces:
      auto lo
      iface lo inet loopback
    I even tried to modify it to this (based on something I read online, not sure if I was doing the right thing here). Tried everything again and no luck:
      auto lo eth0
      iface lo inet loopback
      iface eth0 inet dhcp
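
    One check not in the list above, and a frequent cause of a persistent "Networking Disabled" message on 10.04, is NetworkManager remembering a disabled state across reboots. A hedged sketch of the usual inspection and reset, assuming the stock file locations:

      cat /var/lib/NetworkManager/NetworkManager.state   # look for NetworkingEnabled=false
      sudo service network-manager stop
      sudo sed -i 's/NetworkingEnabled=false/NetworkingEnabled=true/' /var/lib/NetworkManager/NetworkManager.state
      sudo service network-manager start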

    Read the article

  • How to register an agent with launchd

    - by Konrad Rudolph
    I’m unable to schedule a periodic launch with launchctl/launchd on OS X (Leopard). Basically, I’m unable to find a step-by-step list of instructions on the web and the intuitive approach doesn’t work. The sync.plist file: <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd"> <plist version="1.0"> <dict> <key>Label</key> <string>net.madrat.utils.sync</string> <key>Program</key> <string>rsync</string> <key>ProgramArguments</key> <array> <string>-ar</string> <string>/path/to/folder/</string> <string>/path/to/backup/</string> </array> <key>StartInterval</key> <integer>7200</integer> </dict> </plist> I’ve put this script inside the path ~/Library/LaunchAgents. Next, I’ve registered the script using launchctl load ~/Library/LaunchAgents/sync.plist Finally, to test that it works, I started the job: launchctl start net.madrat.utils.sync – Nothing happened. Manually executing the rsync command in the terminal yields the expected result. I’m fairly sure that the job was registered correctly because if I try to start a non-existing job, I get an error message (which I didn’t get in the above command). What did I do wrong?
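
    One detail worth checking, though not confirmed as the cause here: launchd expects an absolute path to the executable, and a bare "rsync" in the Program key can fail silently. A hedged sketch of the relevant fragment, assuming rsync lives at /usr/bin/rsync on Leopard:

      <key>ProgramArguments</key>
      <array>
          <string>/usr/bin/rsync</string>
          <string>-ar</string>
          <string>/path/to/folder/</string>
          <string>/path/to/backup/</string>
      </array>

    With the full path as the first ProgramArguments element the separate Program key can be dropped; after editing, run launchctl unload and then launchctl load on the plist so the change is picked up.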

    Read the article

  • SCCM 2012 - some remote clients unable to download some applications, 401.2 error

    - by growse
    I've got a small SCCM 2012 deployment with about 35 clients attached. Most of these clients are in the same network as the single SCCM host, but three are about 1000 miles away. Oddly, these three clients have stopped being able to download some application packages over BITS. Publishing a new package works for all the other clients, but for these three it never seems to download. If I go to the software centre, it just hangs at "0% downloaded". On the client, the DataTransfer.log says (repeatedly): CDTSJob::HandleErrors: DTS Job '{2DCBBB4C-6D84-479A-9218-885B72C834B9}' BITS Job '{E78147DD-4A26-4942-B4FD-6EC3EB77EECD}' under user 'S-1-5-18' OldErrorCount 442 NewErrorCount 443 ErrorCode 0x80072EE2 DataTransferService 30/07/2012 09:27:41 2964 (0x0B94) CDTSJob::HandleErrors: DTS Job ID='{2DCBBB4C-6D84-479A-9218-885B72C834B9}' URL='http://sccm-host:80/SMS_DP_SMSPKG$/Content_3e7f6982-6346-4f27-ae00-ad5dcb391455.1' ProtType=1 DataTransferService 30/07/2012 09:27:41 2964 (0x0B94) Cas.log says (repeatedly): Location update from CTM for content Content_3e7f6982-6346-4f27-ae00-ad5dcb391455.1 and request {AD041FCB-03D2-4FE6-A6FA-38A6B80FB2A1} ContentAccess 30/07/2012 08:33:39 5048 (0x13B8) Download location found 0 - http://lonsbrndsccm02.mcs.int.thomsonreuters.com/SMS_DP_SMSPKG$/Content_3e7f6982-6346-4f27-ae00-ad5dcb391455.1 ContentAccess 30/07/2012 08:33:39 5048 (0x13B8) Download request only, ignoring location update ContentAccess 30/07/2012 08:33:39 5048 (0x13B8) On the server, I've enabled failed request log tracing. The raw IIS log says the following: 2012-07-30 08:28:42 10.13.111.35 GET /SMS_DP_SMSPKG$/Content_3e7f6982-6346-4f27-ae00-ad5dcb391455.1/sccm /NSCP-0.4.0.172-x64.msi 80 - 10.2.27.19 Microsoft+BITS/7.5 401 2 5 293 Which is a 401.2 error, meaning access denied. The failed request log is large, but the punchline is that it chucks out a Unauthorized: Access is denied due to invalid credentials. message. All clients are members of the same domain and appear to be (otherwise) working great. I've re-installed the SCCM client, deleted and re-added the computer to SCCM. Some other packages seem to work fine, the daily anti-malware delta gets downloaded and patched without issue. Why are these packages failing?
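
    Not stated in the article, but a 401.2 from a distribution point usually points at IIS authentication on the SMS_DP_SMSPKG$ virtual directory rather than at the clients themselves. A hedged sketch of checks on the DP, assuming default IIS paths and that the DP is meant to allow anonymous access (there is also an "Allow clients to connect anonymously" option on the distribution point properties in the console):

      %windir%\system32\inetsrv\appcmd.exe list config "Default Web Site/SMS_DP_SMSPKG$" /section:system.webServer/security/authentication/anonymousAuthentication
      %windir%\system32\inetsrv\appcmd.exe list config "Default Web Site/SMS_DP_SMSPKG$" /section:system.webServer/security/authentication/windowsAuthentication

    Comparing the output against a known-good configuration would show whether the authentication settings on that virtual directory have drifted.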

    Read the article

  • Updating a staging server (from a CI server) in a Vagrant box with Chef

    - by Tomas Brambora
    I'm using Vagrant + Chef (chef_client provisioner) to create & provision a staging environment for my server. And I have a Jenkins job set up that is run every time I push to my 'develop' branch. In the Jenkins job, I would like to update & rebuild the source code of the server in the staging box and restart it. I have already written the cookbooks that install the dependencies, configure the db etc. But I'm not sure how to run only the update & rebuild & restart stuff from the cookbooks. I understand I could always tear down the whole box and rebuild it, but provisioning the box is a lengthy process so I would like to do that as little as possible. I split my server cookbook into 3 recipes: dependencies, db_setup and server. What I want to run in my Jenkins job is the "server" recipe only. But I don't understand how I can do that... If I specify the run_list on my Chef server, then I lose the ability to provision the whole box from scratch. Basically, I would like to be able to tell Vagrant from the command line what recipes Chef should run. Is that possible somehow? Cheers!
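
    A hedged option, assuming the box is registered with the Chef server and the cookbook is called, say, myserver: skip the Vagrant provisioner for this step and run chef-client inside the box with an override run list, which leaves the node's saved run_list untouched:

      vagrant ssh -c "sudo chef-client --override-runlist 'recipe[myserver::server]'"

    The Jenkins job can call that one line after the build; a later full "vagrant up"/"vagrant provision" still applies the complete run list, so rebuilding the box from scratch keeps working.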

    Read the article

  • Variable directory names over SCP

    - by nedm
    We have a backup routine that previously ran from one disk to another on the same server, but we have recently moved the source data to a remote server and are trying to replicate the job via scp. We need to run the script on the target server, and we've set up key-based scp (no username/password required) between the two servers. Using scp to copy specific files and directories works perfectly:
      scp -r -p -B [email protected]:/mnt/disk1/bsource/filename.txt /mnt/disk2/btarget/
    However, our previous routine iterates through directories on the source disk to determine which files to copy, then runs them individually through gpg encryption. Is there any way to do this only by using scp? Again, this script needs to run from the target server, and the user the job runs under only has scp (no ssh) access to the target system. The old job would look something like this:
      #Change to source dir
      cd /mnt/disk1
      #Create variable to store directories named by date YYYYMMDD
      j="20000101/"
      #Iterate through directories in the current dir to get the most recent folder name
      for i in $(ls -d */); do
          if [ "$j" \< "$i" ]; then
              j=${i%/*}
          fi
      done
      #Encrypt individual files from $j to target directory
      cd ./${j%%}/bsource/
      for k in $(ls -p | grep -v /$); do
          sudo /usr/bin/gpg -e -r "Backup Key" --batch --no-tty -o "/mnt/disk2/btarget/$k.gpg" "$/mnt/disk1/$j/bsource/$k"
      done
    Can anyone suggest how to do this via scp from the target system? Thanks in advance.
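
    scp itself cannot list remote directories, so any sketch like the one below rests on the assumption that the restricted account is also allowed to use sftp (the same SSH subsystem); if the account truly has scp-only access, the listing has to happen on the source side instead. Hostnames and paths are placeholders:

      # find the newest YYYYMMDD directory on the source host via sftp batch mode
      latest=$(printf 'cd /mnt/disk1\nls -1\n' | sftp -q -b - user@source.host | grep -E '^[0-9]{8}' | sort | tail -1)
      # pull only that directory to a local staging area, then gpg-encrypt locally as before
      scp -r -p -B "user@source.host:/mnt/disk1/${latest%/}/bsource" /mnt/disk2/staging/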

    Read the article

  • Resource reference passing in puppet

    - by paweloque
    Is it possible to pass Puppet resource references to other resources? My use case is to build a Jenkins build pipeline with Puppet. To chain Jenkins jobs into a pipeline I need to pass the successor job to a job. A subset of the definition is:
      jobs::build { "Build ${release_name}":
        release           => $release_name,
        jenkins_jobs_path => $jenkins_jobs_path,
        successors        => 'Deploy',
      }
      jobs::deploy { "Deploy ${release_name}":
        release           => $release_name,
        jenkins_jobs_path => $jenkins_jobs_path,
        successors        => 'Smoke Test',
      }
    In the definition you see that I define the successors by name, i.e. 'Deploy' and, in the case of the second job, 'Smoke Test'. What I'd like to do is pass a reference to a resource and extract the name from it:
      jobs::build { "Build ${release_name}":
        release           => $release_name,
        jenkins_jobs_path => $jenkins_jobs_path,
        successors        => Jobs::Deploy["Deploy ${release_name}"],
      }
      jobs::deploy { "Deploy ${release_name}":
        release           => $release_name,
        jenkins_jobs_path => $jenkins_jobs_path,
        successors        => Jobs::Smoke_test["Smoke Test ${release_name}"],
      }
    And then within the jobs::deploy and jobs::build definitions I'd access the resource by reference and query for its type, etc. Is it possible to achieve this in Puppet?
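
    A hedged sketch of what plain resource references can already do inside the defined type: they pass as parameters and drive ordering directly, while querying another resource's parameters is not something the classic DSL supports. The names follow the article; the regsubst() idea for recovering the title is an assumption, not a confirmed API:

      define jobs::build (
        $release,
        $jenkins_jobs_path,
        $successors,   # a resource reference such as Jobs::Deploy["Deploy foo"]
      ) {
        exec { "register ${title} in Jenkins":
          command => '/bin/true',   # placeholder for the real job-registration command
          before  => $successors,   # metaparameters accept resource references directly
        }
        # The successor's title could plausibly be pulled from the reference's string form,
        # e.g. regsubst("${successors}", '^.*\[(.*)\]$', '\1'), but its other parameters
        # cannot be queried from here in the classic DSL.
      }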

    Read the article

  • VMware vSphere 4.1 and BackupExec 2010

    - by Josh
    I'm sure a common problem with most shops is backups, their size, and the window in which you have to back up the data. What we are working with: VMware vSphere 4.1 Cluster PS4000XV Equallogic Storage Array (1.6TB Volume dedicated for Backup to Disk) Physical Backup Server with a single LTO4 drive. BackupExec 2010 R3 with the following agents, Exchange, SQL, Active Directory, VMware. Dual Gigabit MPIO Connections between all devices (Storage Array, Backup Server, VM Hosts) What we would like to accomplish: I would like to implement an efficient Backup to Disk to Tape solution where all of our VMs are backed up to the Storage Array first, and then once completely backed up to the array are replicated to tape. In the event we needed to recover, we would be able to do so directly from tape. Where we are at currently. Of the several ways I have setup the jobs in Backup Exec 2010 R3 the backup jobs all queue up at the same time, as soon as a job is finished backing up to disk it then starts that same job to tape, but pulling from the original source instead of the designated B2D location. I understand that I could create a job that backs up the "Backup to Disk" folder to tape, but in the event of restoration, I would first need to stage the data in the B2D folder before I could restore the VM. I would really like to hear from individuals in similar situations. Any and all comments and critiques are appreciated.

    Read the article

  • A possible case of hacked email account. What kind of an attack is this?

    - by Rickesh John
    I own a Yahoo mail account. I am using this account for sending resumes and receiving notifications from various job portals. But yesterday, I found that some 10-15 mails had been sent to random addresses from my account. Most of them had this format: hr@<companyname>.com I am pretty sure that I didn't send any mails to such addresses. Initially, I thought the job portals might be sending mails on my behalf and Yahoo was logging them, but then I saw the contents. The content of all those mails was a URL, which I did not click. SCARED. Also, to top it off, my "Sending Name" has been changed to 'Nice Maria'!! o_0 I have taken the necessary measures and changed my password and the secret question. I cannot delete this account as this email is registered with all the job portals and other companies. Is this a simple case of my account being compromised, or was I a victim of some web vulnerability? All the mails seem to be bot-generated, with only a URL as the message body. Please advise.

    Read the article

  • Thunderbird alerts when expected email does not arrive

    - by user871199
    I am on Ubuntu 12.04 using Thunderbird as my email client. Both are up to date in terms of updates. I have a bunch of nightly jobs that do the work and send a status mail. It gets tedious if you keep getting the same or similar mails every day, so I ended up writing a mail filter rule which causes emails to end up in their respective folders automatically. If things are going OK, I really don't need to read the emails. Failure emails are sent to a different alias - if the job runs. We recently discovered that one of the jobs had not run for a few days because someone accidentally disabled it. In order to avoid such problems in the future, I would like to set up Thunderbird in such a way that if I don't get an email from a given address within a given duration, it should alert me. My dream solution is to set up a frequency - some jobs do run every 4 hours. Is this possible? Can I set up Thunderbird (preferred) or another email client to remind me when an expected email does not show up? Based on comments and answers I received, here are the reasons why I would like to use Thunderbird:
    - We are already using Thunderbird.
    - It has calendar support via a plugin, so I suppose something is already watching the clock to remind us about events. Maybe this is just another type of event.
    - An additional job is one more failure point, and it may complicate life if it has to monitor multiple hosts.
    - Additional tools - same thing, one more failure point.
    - Thunderbird can be run across all the platforms we are using - Windows and Ubuntu. It sort of becomes a platform-independent solution.

    Read the article

  • Usage of putty in command line from Hudson

    - by kij
    Hi, I'm trying to use PuTTY on the command line from a Hudson job. The command is the following: putty -ssh -2 -P 22 USERNAME@SERVER_ADDR -pw PASS -m command.txt where 'command.txt' is a shell script to execute on the server through SSH. If I launch this command from the Windows command prompt, it works: the shell script is executed on the server machine. If I launch a build of the Hudson job configured with this batch command, it doesn't work. The build is running... and running... and running... without doing anything, and I have to stop it manually. So my question is: is it possible to launch an external program (i.e. PuTTY) from a Hudson job?
    PS: I tried the SSH plugin but it's not a really good plugin (pre/post build, fail status of the launched commands not caught by Hudson, etc.). Thanks in advance for your help. Best regards, kij
    EDIT: These are the build logs:
      [workspace] $ cmd /c call C:\WINDOWS\TEMP\hudson7429256014041663539.bat
      C:\Hudson\jobs\Artifact deployer\workspace>putty -ssh -2 -P 22 USER@SERV_ADD -pw PASS -m com.txt
      Le build a été annulé
      Finished: ABORTED
    And the Hudson.err.log file at the same time (after a stop):
      3 juin 2010 18:27:28 hudson.model.Run run INFO: Artifact deployer #6 aborted java.lang.InterruptedException at java.lang.ProcessImpl.waitFor(Native Method) at hudson.Proc$LocalProc.join(Proc.java:179) at hudson.Launcher$ProcStarter.join(Launcher.java:278) at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:83) at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:58) at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:19) at hudson.model.AbstractBuild$AbstractRunner.perform(AbstractBuild.java:601) at hudson.model.Build$RunnerImpl.build(Build.java:174) at hudson.model.Build$RunnerImpl.doRun(Build.java:138) at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:416) at hudson.model.Run.run(Run.java:1241) at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46) at hudson.model.ResourceController.execute(ResourceController.java:88) at hudson.model.Executor.run(Executor.java:124)
    My shell script only writes "hello" to a "hello.txt" file on the server, and nothing is done.
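
    A sketch of the usual workaround, assuming PuTTY's command-line companion plink.exe is available on the build machine's PATH: putty opens a GUI window, which has nothing to attach to when Hudson runs under a service account, so the build hangs, whereas plink is headless:

      plink -ssh -2 -P 22 -batch USERNAME@SERVER_ADDR -pw PASS -m command.txt

    The -batch flag disables interactive prompts (for example an uncached host key) so the build fails instead of hanging; connect once manually as the same user first so the server's host key is already cached.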

    Read the article

  • Advanced Linux file permission question (ownership change during write operation)

    - by Kent
    By default the umask is 0022: usera@cmp$ touch somefile; ls -l total 0 -rw-r--r-- 1 usera usera 0 2009-09-22 22:30 somefile The directory /home/shared/ is meant for shared files and should be owned by root and the shared group. Files created here by usern (any user) are automatically owned by the shared group. There is a cron-job taking care of changing owning user and owning group (of any moved files) once per day: usera@cmp$ cat /etc/cron.daily/sharedscript #!/bin/bash chown -R root:shared /home/shared/ chmod -R 770 /home/shared/ I was writing a really large file to the shared directory. It had me (usera) as owning user and the shared group as group owner. During the write operation the cron job was run, and I still had no problem completing the write process. You see. I thought this would happen: I am writing the file. The file permissions and ownership data for the file looks like this: -rw-r--r-- usera shared The cron job kicks in! The chown line is processed and now the file is owned by the root user and the shared group. As the owning group only has read access to the file I get a file write error! Boom! End of story. Why did the operation succeed? A link to some kind of reference documentation to back up the reason would be very welcome (as I could use it to study more details).
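
    The short version of why it succeeded: the kernel checks permissions when a file is opened, and an already-open file descriptor keeps its access even if the ownership or mode changes afterwards, so the cron job's chown/chmod could not interrupt the in-progress write. A small bash demonstration of that semantics (paths illustrative):

      exec 3>>/home/shared/bigfile          # permissions are checked here, at open() time
      sudo chown root:root /home/shared/bigfile
      sudo chmod 000 /home/shared/bigfile   # nobody may open the file any more...
      echo "still writable" >&3             # ...yet this succeeds: the descriptor was opened earlier
      exec 3>&-                             # after closing, a fresh open by usera would be denied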

    Read the article

  • PC won't PXE boot to WDS/MDT with Dell Optiplex 755

    - by Moman10
    I am trying to set up a basic MDT solution. I have set one up in the past at a previous job and it worked flawlessly, however here I'm running into a problem and am having no luck getting around it. I've installed Windows Server 2012 and MDT 2013, along with adding on the WDS role. I haven't configured much outside of the defaults for WDS, basically just set PXE response to respond to all clients (and unchecked admin approval). This machine does not run a DHCP server. I looked on the DHCP scope of our DHCP server, it shows options 66/67 checked and the server name of the WDS server is in there as well. I didn't add this but I assume it was put on during the install process (I believe I had to manually make some adjustments at my old job for this). The PC I have is a Dell Optiplex 755. I have enabled the onbard NIC w/PXE boot option in BIOS and attempted to boot. I get a "TFTP...." error but nothing offering out a DHCP address like I'm used to. In my previous job it pretty much worked right out of the box. I've verified that PortFast is enabled on the port and I've tried a couple different PCs (but both are the same model, only model I have to work with). No matter what, I get the same error. The subnet the PC is in is a different subnet than where the WDS server is sitting, but there are IP helper statements on the switch and the PCs can get regular DHCP addresses just fine from the DHCP server, just doesn't seem to get offered out a PXE boot option. I don't know if the problem is a configuration with the server or the PC itself...but after a few days of Googling I'm running out of ideas. Does anyone have a good idea of something it may be?

    Read the article

  • Log Shipping breaking daily on SQL Server 2005

    - by IT2
    I am facing a somewhat serious problem with Log Shipping on SQL Server 2005 and I am having trouble correcting it, so I will ask for some help from SF's experts. I have a Windows 2003 Server (PROD) that ships transactional log backups to two other servers:
    STAND1: Windows 2003 Server with SQL Server 2005.
    STAND2: Windows 2008 R2 Server with SQL Server 2005.
    The problem is that Log Shipping to STAND2 breaks for ~90 minutes at some times of the day and recovers without intervention. The breakage occurs at times when the backup file is larger (after reindexing, etc). I can see the message below logged on the COPY job:
      *** Error: The specified network name is no longer available
    The copy agent was breaking dozens of times a day, only to the STAND2 server, and after the changes below it "only" breaks ~ two times a day:
    - The frequency of the backup job was changed from 5 minutes to 10 minutes.
    - Instead of backing up the 4 databases to the same folder, the log backups are now saved in separate folders for each database.
    - The backup job doesn't run 24 hours a day now, only for 14 hours a day, when people are working on the database.
    - I configured the SQL Server instances on the three servers to limit their memory, leaving more memory to the OS.
    Now I don't know what to do. Any help will be much appreciated! Thanks!

    Read the article

  • Best Practices for adding Exchange Archive to current 3 server setup

    - by ADquestion
    I'm looking to add an Archive Database (which I know is just a Mailbox Database) to our current Exchange 2010 environment. I have done this in the past at a previous job, but we had a simpler setup than at this current job. I've been trying to find some best practices to make sure it's setup in an ideal way, but so far not finding the details I would prefer. Hoping someone on here can give me a few pointers. Currently we have a 3 server setup, Server1, Server2 and Server3. Three databases of course, DB1, DB2 and DB3. We have a DAG setup between them. Server1 has DB1 and DB3 on it, DB1 is not active, DB3 is active. Server2 has DB1 and DB2 on it, both are active. Server3 has DB2 and DB3 on it, both are not active. All three servers are virtual (VMware). Each one is setup identical to the other as follows: C:\ 60GB - OS E:\ 600GB - DB (currently only 90GB used, pointing to Datastore just for Server2) F:\ 200GB - Log (2GB used, pointing to same Datastore as above) G:\ 200GB - Restore (0 used, pointing to same Datastore as above) The drives are all set to Thin Provisioning, and it looks as though I have 600GB of available space. They have not been on Exchange that long and only have about 70GB worth of PSTs to import back in that will be going to the Archive Database, plus anything older than 2 years from their current inbox that will be moved into there. I was considering placing the Archive DB on the E:\ drive of Server3 (only) like the current DB, but wasn't sure if that was acceptable. I don't plan on setting the Archive DB up with the DAG, just plan on having it as a single repository for older emails and manually back it up every now and then. If anyone has any suggestions on this I would appreciate it the input. I've done it on a slightly smaller scale before and it worked well, but like to think it through before pulling the trigger, especially at a new job. :) Thanks again!

    Read the article

< Previous Page | 92 93 94 95 96 97 98 99 100 101 102 103  | Next Page >