Search Results

Search found 2088 results on 84 pages for 'jobs'.

  • How to manage process-to-CPU core affinities?

    - by Philippe
    I use a distributed user-space filesystem (GlusterFS) and I would like to be sure the GlusterFS processes always have the computing power they need. Each execution node of my grid has 2 CPUs, with 4 cores per CPU and 2 threads per core (so Linux sees 16 "processors"). My goal is to guarantee that the GlusterFS processes have enough processing power to be reliable, responsive and fast. (There is no marketing here, just the dreams of a sysadmin ;-)
    I consider two main points: the GlusterFS processes themselves, and the I/O for data access (on local disks, or remote disks). I thought about binding the Linux kernel and the GlusterFS instances to a specific "processor". I would like to be sure that:
    - no grid job will impact the kernel or the GlusterFS instances;
    - researchers' jobs won't be affected by system processes (I'd like to reserve a pool of cores for job execution and be sure that no system process will use these CPUs).
    But what about I/O? As we handle a huge amount of data (several terabytes), we'll have a lot of interrupts. How can I distribute these operations across my processors? What are the "best practices"? Thanks for your comments!
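    A minimal sketch of the usual pinning approach, assuming the standard Linux tools are available (the PID, core numbers and IRQ number below are placeholders):
        # pin a GlusterFS daemon (placeholder PID 1234) to cores 0-1, away from the job pool
        taskset -cp 0,1 1234
        # steer a disk/NIC interrupt (placeholder IRQ 45) to the same cores;
        # the value is a CPU bitmask, 0x3 = cores 0 and 1
        echo 3 > /proc/irq/45/smp_affinity
        # keep batch jobs off those cores entirely, e.g. by booting with isolcpus=2-15
        # or by confining the job scheduler to a cpuset cgroup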

  • Best tool for monitoring backups, etc. and trending statistics from that data

    - by Randy Syring
    I have done some research on Nagios, OpenNMS, and Zenoss but am not confident that I have found what I am looking for. The main driving force for me right now is being able to monitor backups. This includes MySQL, MSSQL, and eventually some file system backups. We have a tool that wraps the backup process for these different systems and collects statistics, items like:
    - number of databases backed up
    - size of db backup file
    - size of db backup file compressed
    - time to make backup
    - time to zip file
    I want to be able to: A) have notifications if the jobs are not run according to schedule, B) set thresholds on the statistics which would trigger notifications, and C) trend and graph the statistics. I am planning on sending this information to the monitoring application through an HTTP POST, or the monitoring application could pull it from a log file as well. However, we will have other processes with other "arbitrary" (from the monitoring system's perspective) statistics that we will want to monitor and trend, so flexibility is very important. The tool or tools should also be able to do general monitoring and trending of network interfaces, server load, etc. Once we get the backup monitoring in place, we will want to include those items as well. Thanks.

  • Assistance on setup to connect an offsite server to the LAN via RRAS VPN - Server 2008 R2

    - by Paul D'Ambra
    I have an office LAN protected using a ZyXEL ZyWALL USG 300. I've set up an L2TP/IPsec VPN on that which accepts connections using a shared secret, and I've tested this from multiple clients. I have a server offsite and want to set up RRAS to use a persistent connection to the VPN so that it can carry out network jobs even with no one logged in (I'm using it for Microsoft DPM secondary backup). If I create a VPN as if I were setting up a user's laptop it can dial in no problem, but if I set up a demand-dial interface in RRAS it errors. My steps:
    - I enable RRAS, ticking only demand-dial interface (branch office routing).
    - Select Network Interfaces, right-click and choose New Demand-dial Interface.
    - Name the VPN ToCompany.
    - Select "Connect using VPN", then L2TP as the VPN type, and enter the IP address (double-checked for typos!).
    - Select "Route IP packets on this interface" and specify a static route to the remote network of 10.0.0.0/24 with a metric of 1.
    - Add dial-out credentials (again double-checked for typos and confirmed with other VPN connections) and click Finish.
    - I then right-click the new interface, choose Properties and the Security tab, change data encryption to optional, select only PAP for authentication (both as per the manufacturer of the ZyWALL), click advanced settings against the type of VPN and set the shared secret.
    - Then I select the new interface, right-click and choose Connect. This dials and then errors with either 720 or 811 as the error code.
    However, if I create a VPN by going to Network & Sharing Center and setting it up as if I were creating a VPN from my laptop to the office (say), it dials successfully, so I know the VPN settings are correct and the machine can connect to the VPN. That suggests very strongly that the problem is how I'm setting up RRAS. Can anyone help?

  • Defrag starting when not scheduled - what is triggering the defrag?

    - by leroyclark
    I have a file server that is starting a defrag around 2:00 PM every day. This is killing performance, as it runs for hours because this is a file server and has multiple drives. All scheduled tasks regarding defrag have been disabled. I have verified that it is accessing the data drives (using Sysinternals tools). The reason I might have thought otherwise is that the event log has multiple entries regarding defragging a db file related to shadow copies. These drives do take shadow copy snapshots multiple times per day, but the times of them don't coincide with the defrag task. There is nothing in the event logs regarding defrag except those noted above in relation to shadow copies. I'm out of ideas looking for what is starting these jobs. One possibility is that the drives are not being defragmented, but analyzed to determine if they need to be defragmented. I manually ran an analysis and the CPU usage (by dfrgntfs.exe) seems to be similar to what I'm seeing every day while the defrag process is running. However, I've found no setting that schedules this analysis.

  • Intermittent NFS lockups on Isilon cluster

    - by blackbox222
    We have an Isilon cluster with 8 IQ 12000x nodes which exports storage via several NFS shares for a handful of Linux and Solaris clients. There is a Linux system that has one of these NFS filesystems mounted, and I/O to this filesystem from that system is moderately heavy. Every 3-4 weeks (it's not on any kind of discernible schedule, and sometimes it's more or less frequent than this), we notice that all activity ceases on this NFS mount (the process hangs, as if the network stopped working, so the process is stuck in uninterruptible sleep) - 30 minutes later, the share recovers and things continue to work normally. The kernel log from the affected machine is as follows:
    Dec 3 10:07:29 redacted kernel: [8710020.871993] nfs: server nfs-redacted not responding, still trying
    Dec 3 10:37:17 redacted kernel: [8711805.966130] nfs: server nfs-redacted OK
    relevant /etc/fstab line:
    nfs-redacted:/ifs/nfs/export_data/shared/...redacted... /data nfs defaults 0 0
    I've checked to see if there are any scheduled processes (e.g. cron jobs) or Isilon-related functions (e.g. snapshots) that might be causing these hangups, but I can't seem to find anything. I'm also not aware of any network-related issues or maintenance that would cause this. All of the lockups last almost exactly 30 minutes per the kernel logs. Perhaps someone has some suggestions I could try? (I considered a soft mount to avoid the problem of processes accessing the filesystem hanging; however, I am wary of the corruption that could result, and it would not really solve the underlying issue anyway.)
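    For reference, a sketch of the fstab direction being weighed (option values are illustrative, not tested here): keep the hard mount for data safety, but make it interruptible and set explicit timeout/retry values so a hung process can at least be killed:
        # /etc/fstab - hard mount kept, but interruptible, with explicit timeouts
        nfs-redacted:/ifs/nfs/export_data/shared/...redacted... /data nfs hard,intr,timeo=600,retrans=2 0 0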

  • How does one make sure, or even guarantee, that server time is synced correctly between dozens of servers across multiple datacenters in different locations?

    - by forestclown
    Currently our web applications contain logic to check whether the data sent to the web server has expired by comparing the timestamp of the data with the date/time of the server. Everything went well until some dude from the data center accidentally modified one of the web servers' date/time and caused disruptions in our web services. My managers are of course not happy with this, and said we shouldn't have used timestamps to check expiry in the first place... anyway...
    Network Time Protocol is implemented; because our data centers are spread across different continents, we have one NTP server in each data center. The servers within a data center have cron jobs to check the time against the NTP server in the same data center, and if the time is out of sync they auto-update the server date/time. But my managers are still not happy with this and think it could still easily cause the same problem: what if someone accidentally modifies the NTP server's date/time? What if all the NTP servers are out of sync with each other? Which NTP servers can we really trust? And so on.
    So my questions are:
    1. What is current practice for syncing date/time between servers across multiple data centers or locations?
    2. How does one manage timestamps between web apps? E.g. Server A sends data (containing a timestamp from Server A) to Server B, and Server B compares its own time with the timestamp from the data to see if it has expired (this is to avoid HTTP replay).
    3. Should we really not use timestamp checks at all?
    Thanks & best regards
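    A rough sketch of the layout usually recommended for question 1 (hostnames are placeholders): each data-center time server uses several independent upstream sources and peers with the other sites, clients run ntpd continuously instead of stepping the clock from cron, and ntpq is used to verify offsets:
        # /etc/ntp.conf on each data-center time server (placeholder hostnames)
        server 0.pool.ntp.org iburst
        server 1.pool.ntp.org iburst
        server 2.pool.ntp.org iburst
        peer   ntp-dc2.example.com    # the other sites' time servers
        peer   ntp-dc3.example.com
        # on any machine, show the offset and jitter against each peer
        ntpq -p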

  • Why is it necessary to chmod o+r the parent directory to fix a 403 access forbidden error with Nginx and Passenger?

    - by davenolan
    This may be an Nginx wrinkle, or it may be because I don't understand Unix permissions. We're using Hudson CI to deploy our staging instance, so RAILS_ROOT is /var/lib/hudson/jobs/JOBNAME/workspace.
    - Hudson runs as the hudson user
    - Nginx runs as the www-data user
    - hudson and www-data are both members of the www group
    - the root of my nginx conf points to RAILS_ROOT/public as per normal
    - RAILS_ROOT/config/environment.rb is owned by www-data (so Passenger runs as www-data)
    - RAILS_ROOT and everything in it is owned by the www group, and the group has r/w/x permissions
    As it stood, Nginx threw 403 Permission Denied when requesting any URL. error.log contained entries like this: public/index.html" is forbidden (13: Permission denied). These did not fix or change the error (each with a stop/start of Nginx): chmod 777 -R RAILS_ROOT; chgrp www -R /var/lib/hudson. I also tried running Nginx as root, and Passenger complained that it could not find config/environment (despite the path displayed on the error page being correct). The fix was to ensure everybody has read permissions on each directory in the hierarchy, in this case chmod o+r /var/lib/hudson. But if the group has read permissions on the directory, and nginx is a member of the owner group of the directory, why was it necessary to allow everyone read permissions? Is there something I have not grokked about permissions?
    $ nginx -V
    nginx version: nginx/0.7.61
    built by gcc 4.4.1 (Ubuntu 4.4.1-4ubuntu8)
    configure arguments: --prefix=/opt/nginx --add-module=/usr/lib/ruby/gems/1.8/gems/passenger-2.2.5/ext/nginx --with-http_ssl_module --with-pcre=~/src/pcre-8.00/ --with-http_stub_status_module
    $ cat /etc/lsb-release
    DISTRIB_ID=Ubuntu
    DISTRIB_RELEASE=9.10
    DISTRIB_CODENAME=karmic
    DISTRIB_DESCRIPTION="Ubuntu 9.10"
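    A hedged aside rather than a definitive answer: path traversal needs the execute (x) bit on every parent directory for whatever user the Nginx workers actually run as, independently of the read bit, and namei prints the whole chain so a missing bit stands out:
        # show the mode bits of every component along the path
        namei -m /var/lib/hudson/jobs/JOBNAME/workspace/public/index.html
        # confirm which user and group the nginx workers really run as
        ps -o user,group,comm -C nginx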

  • Reliable backup software for Windows network/Samba shares

    - by Eli
    Hi all, I have a Win2003 server that works as a PDC for a number of XP boxes and a couple of related FreeBSD boxes. I need to back up roaming profiles, non-roaming profiles via network shares, local hard drive data, and files on the FreeBSD boxes via Samba shares. I have tried Genie Backup Manager and Backup4all Pro, and both have excellent features, but both also begin to fail disastrously after more than a few days' use. Mostly, the errors seem to have come from the backup catalog getting out of sync with itself. Whatever it is, there is no excuse for backup software that says it backed up files when it really didn't, or whose log says it backed up exactly the same file 10,000 times in a single run, or that flat-out crashes, or any of the other myriad problems I've run into with these. Really sad for products that fill such an important need. Anyway, does anyone know of backup software that works reliably and can do the following?
    - Scheduled backups for multiple jobs, without a user logged in.
    - Backup from local hard drives or network shares.
    - Incremental backups.
    Thanks!
    Edit: Selected solution: I've added my (hopefully final) solution as an answer.

  • Building an Email server for mass emails

    - by EGHDK
    I recently started doing IT odd jobs for a company. The company has a pretty decent sized mailing list that costs them over $3000 per month to send email from. The company is set on creating their own email server so that it can just run and send emails to the client lists. They only send out emails roughly once a month. Has anyone had experience with this? This wouldn't really be a full email server, I guess, as it doesn't need to handle incoming messages; it just has to be able to send around 200,000 emails, once a month. What would be the best way to go about this? Online services like MailChimp have proved to be too pricey. It's not an ad that is being sent out, it's more of a monthly newsletter, so we don't need any crazy software for ROI tracking or anything like that. If I could fit 200,000 people in Gmail, I'd do it, but I don't think I can (heh... maybe I should try).
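    If the answer ends up being a self-hosted MTA, a minimal sketch of the outbound-only Postfix tuning usually involved (parameter values are illustrative; deliverability work such as SPF, DKIM and reverse DNS matters more than the software itself):
        # throttle per-destination delivery so large providers don't defer us
        postconf -e 'smtp_destination_concurrency_limit = 2'
        postconf -e 'smtp_destination_rate_delay = 1s'
        # outbound only: never accept mail from the network
        postconf -e 'inet_interfaces = loopback-only'
        postfix reload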

  • Small store infrastructure - where to begin?

    - by KevinM1
    It looks like my older brother is about to change jobs - from lawyer to shooting range proprietor - and since I'm the family 'computer guy' I have the task of coming up with and setting up the in-store equipment. Only problem: I don't know how to start or where to look. I'm a web programmer, not an IT specialist. To that end, I figured I should ask the pros.
    Users: 3 (myself, my brother, and his business partner).
    Equipment: 1 Windows (likely 7) desktop for POS software, 1 Windows desktop/laptop for backroom use (bookkeeping, etc.). Other: ??
    I'm looking for a reliable and, well, idiot-proof way to handle backups. Neither my brother nor his business partner is tech savvy (a web browser, email, MS Word and Excel are about the extent of their knowledge), so I need something they can handle. On-site would be preferable to off-site, given my brother's hesitance to have sensitive business data handled by an outside source. I'm also looking for a small on-site server; I estimate that, at most, only 2-3 users will need access. A Linux solution would keep costs down, but I'm concerned about Windows/Linux interoperability. Would the store security cameras' storage be handled by the security company, or would we have to stream that data to our own server? I know from my own experience with personal security that the company gives/loans a recording device to the home owner, but I'm not sure about business security. I know this sounds like a shopping list, and it's pretty vague. I wish I could give more detail, but between my own ignorance and things not being 100% nailed down on the business end, I'm a bit stuck. At the very least I'd like a nudge - links on a place to start, what to look for, things I need to think about, etc. - for this endeavor. Thanks.

  • mysqldump skip one table

    - by danneth
    I'm running a cron job to back up our system using mysqldump. The database contains 90 or so tables. One of these tables is HUGE and every once in a while causes the dump to fail. From the manual I see that you can specify specific tables to dump:
    shell> mysqldump [options] db_name [tbl_name ...]
    This got me thinking: what if I have two jobs, one for dumping the huge table, and one for all the others? To accomplish this it would be nice if I could do something like:
    shell> mysqldump -u backupuser -p database huge_table > db_huge_table.sql
    shell> mysqldump -u backupuser -p database --skip huge_table > db_rest.sql
    Unfortunately I'm not seeing such an option. I could of course explicitly state the 90 tables, but that just seems like a mess. Another option would be a script of some sort, but before checking that route I'll try this resource. MySQL is 5.1.61 on CentOS 6.2.
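    For what it's worth, mysqldump does have an exclusion flag that fits this split; a sketch using the names above (the flag takes the db-qualified table name and can be repeated for several tables):
        # dump only the huge table
        mysqldump -u backupuser -p database huge_table > db_huge_table.sql
        # dump everything else in the database
        mysqldump -u backupuser -p database --ignore-table=database.huge_table > db_rest.sql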

  • Method to integrate PowerShell scripts with a non-Windows workflow?

    - by Matt Simmons
    I love the smell of new machines in the morning. I'm automating a machine-creation workflow that involves several separate systems across my infrastructure, some of which involve 15-year-old Perl scripts on Solaris hosts, PXE-booting Linux systems, and PowerShell on Windows Server 2008. I can script each of the individual parts, and integrating the Linux and Unix automation is fairly straightforward, but I'm at a loss as to how to reliably tie the PowerShell scripts to the rest of the processes. I would prefer the process to begin on a Linux host, since I imagine it will end up as a web application living on an Apache server, but if it needs to begin on Windows, I am hesitantly okay with that. I would ideally like something along the lines of psexec for Linux to run against Windows, but the answer in that direction appears to be Cygwin, and as much as I appreciate all of the hard work they put in, it has never felt right, if you know what I mean. It's great for a desktop and gives a lot of functionality, but I feel like Windows servers should be treated like Windows servers and not bastardized Unix machines (which, incidentally, is my argument against OS X servers, too, and they're actually Unix). Anyway, I don't want to go with Cygwin unless that's the last and only option. So I guess what I'm asking is if there is a way to execute jobs on Windows machines from Linux. Without Cygwin. I'm open to ideas and suggestions, including "Look idiot, everyone uses Cygwin, so suck it up and deal with it". Thanks in advance!
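    Two directions that are commonly suggested, sketched with placeholder hostnames and paths (not verified against this exact setup): winexe, which speaks the same protocol psexec does, or WinRM, which is what PowerShell remoting itself uses:
        # winexe: psexec-style remote execution from a Linux host
        winexe -U 'MYDOMAIN\builduser' //winserver01 'powershell -ExecutionPolicy Bypass -File C:\scripts\provision.ps1'
        # WinRM: enable it once on the Windows side (elevated prompt): winrm quickconfig
        # then drive it from Linux with a WinRM client such as pywinrm or Ansible's winrm support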

  • How to track which process is failing logons?

    - by Massimo
    Windows Server 2003, VMware VirtualCenter 2.5. Something is continuously trying to log on to VirtualCenter using a disabled domain account; the failed logon attempts are logged by VirtualCenter in its own logs and by Windows in the Security event log. This happens roughly every minute or two. The source of the logon attempts is 127.0.0.1, so it must be some process running on the server itself. There are no services running as this user account and no scheduled jobs on the system. The Task Manager doesn't show any process running under this account, either, and the user account's name is nowhere to be found in the registry. But some process is trying to use it, and failing. It probably is not some critical process, as everything looks fine; it could be something that was installed long ago and forgotten. Whatever it is, it probably runs under another user account (possibly a system one), but is trying to log on to VC using those credentials, which are probably saved in some configuration file, since they're not stored in the registry. How can I track which process is trying (and failing) those logon attempts, either using Windows or VirtualCenter?
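    A hedged diagnostic sketch, assuming the failed logons arrive over VirtualCenter's usual ports (443/902): snapshot which local processes hold connections to those ports around the time of a failure, then map the PID back to a process and any services it hosts:
        netstat -ano | findstr ":443 :902"
        rem replace 1234 with a PID taken from the netstat output
        tasklist /svc /fi "PID eq 1234"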

  • How do you optimize your Outlook Exchange + IMAP setup?

    - by Mike
    My company provides an Outlook/Exchange account we must use for mail/calendar. Like many companies, they unfortunately also provide a ridiculously small mail quota. I got tired of managing and backing up .pst files (since I'm always in my e-mail there is never a good time to back it up), so I started storing my archived mail "in the cloud", using an IMAP server I set up on my Linux box. This has a few drawbacks for me:
    1. IMAP (at least the implementation in Outlook) is *very slow*. Furthermore, if I move a large number of messages to the IMAP server, it sometimes blocks the entire Outlook client for hours, which is quite annoying.
    2. I can't use Exchange over HTTP to do mail without launching a VPN session, because the client-side rules I have which organize my mail fail and disable themselves if the IMAP server can't be reached.
    3. If I reply to a message from my IMAP store, I have to specify an SMTP server willing to relay for me in order to send e-mail, unless I always remember to select my Exchange account while composing.
    ... but the main advantage is being very easy to back up, with a couple of cron jobs that essentially do an 'rsync'. Short of moving the IMAP server to my local host (which seems like it might have the same file-locking problems as using a .pst), my options seem limited for solving (1). I'd like to come up with a solution for (2) and (3), though. For problem (2), would it be possible to somehow tell Outlook that the IMAP server is "offline", and have it synchronize my changes during a periodic "send and receive"? If so, I wonder if it would block the Outlook client, like it does in problem (1), and if it would be compatible with the client-only rules I use to sort my mail into folders. For (3), I've looked all over the options menu and have not found a way to tell Outlook not to use a certain account for sending mail. Is anyone else crazy enough to be doing something like this? Any ideas?

  • Hadoop streaming job on EC2 stays in "pending" state

    - by liamf
    Trying to experiment with Hadoop and Streaming using the Cloudera distribution CDH3 on Ubuntu. I have valid data in hdfs:// ready for processing, and wrote a little streaming mapper in Python. When I launch a mapper-only job using:
    hadoop jar /usr/lib/hadoop/contrib/streaming/hadoop-streaming*.jar -file /usr/src/mystuff/mapper.py -mapper /usr/src/mystuff/mapper.py -input /incoming/STBFlow/* -output testOP
    Hadoop duly decides it will use 66 mappers on the cluster to process the data. The testOP directory is created on HDFS. A job_conf.xml file is created. But the job tracker UI at port 50030 never shows the job moving out of "pending" state and nothing else happens; CPU usage stays at zero (the job is created, though). If I give it a single file (instead of the entire directory) as input, same result (except Hadoop decides it needs 2 mappers instead of 66). I also tried using the "dumbo" Python utility and launching jobs using that: same result, permanently pending. So I am missing something basic: could someone help me out with what I should look for? The cluster is on Amazon EC2. Firewall issues maybe: ports are enabled explicitly, case by case, in the cluster security group.

  • Replacement for public folder workflow - I'm confused as to how SharePoint does it

    - by RodH257
    For years Microsoft has been slowly phasing out public folders; perhaps Exchange 2010 really is the LAST TIME they'll be shipped... I've heard SharePoint is the replacement, but I don't fully understand it, so can someone give me an idea of how to replace this workflow?
    In our office we have projects, each with a project number, e.g. 10353. Each job folder has a public folder, organized in a hierarchy like: Projects > Year > Folder > Subfolders. The main subfolder we use is for general correspondence. When an email is received that relates to a project, it is dragged and dropped (or right-click, move to) into a public folder; adding public folder favourites for each user helps with this. When an email is sent, we have a custom email form, which is the default email form but with a project number field next to the subject line. When you enter the job number in there, it carbon-copies our filing system, which reads the job number and puts the email in the public folder for you. If you need to refer to emails, you go to the public folder and find them there. This isn't the best with large jobs, but it works OK.
    Now, I have limited experience with SharePoint (well, WSS); we've used it to do some neat discussion boards/polls etc. as an intranet site, but I haven't seen much of its integration with Outlook. The great thing about our solution is how tightly it integrates with Outlook, which is exactly where the emails are. If you want to forward an old email, you go to the public folder and forward it - simple. Any solution that replaces it should be at least as easy as this. Improvements we would like are better searching of emails, better support in Exchange (i.e. in future versions), and also, custom forms in Outlook (the VBA kind) are being phased out, so avoiding those would be good. Does SharePoint do this? Or what solutions do this kind of thing?

  • What Windows app can sort a huge XML file?

    - by Torben Gundtofte-Bruun
    I have some enormous XML-based configuration files, with 125000 lines in them. The problem is that they are auto-generated by the system I use, and "child" tags are in a random order within their respective parent tag. This means that a diff comparison is impossible. I want to recursively sort all tags within a parent tag by the value in name="". Some parent tags only appear once and don't have a name="" parameter; these should be sorted by the tag name itself. Once the files are sorted like this, they can be compared quite easily using normal tools. We are currently using ExamXML which can match unsorted XML files, but it fails because the files are too big. Is there an application that can do this? (Windows much preferred; Linux only as a last resort) I do not want to dive into development or XSLT jobs. I am thinking that someone must have made a simple sorting tool like this already - I just can't find it using Google. Update: With help from this site, I created a small package that I want to share: XML-Sorter_v0.3.zip Update: Follow-up question here.

  • How do you set up CRON?

    - by user1723760
    I have never used cron before, but I want to use it in order to perform scheduled jobs for a PHP script. The PHP script is called "inactivesession.php" and in it is this code:
    <?php
    include('connect.php');
    $createDate = mktime(0,0,0,10,25,date("Y"));
    $selectedDate = date('d-m-Y', ($createDate));
    $inactive = 0;
    $sql = "UPDATE Session SET Active = ? WHERE DATE_FORMAT(SessionDate,'%Y-%m-%d' ) <= ?";
    $update = $mysqli->prepare($sql);
    $update->bind_param("is", $inactive, $selectedDate);
    $update->execute();
    ?>
    What I want is that when the above date is reached (25th Oct), the PHP script performs the UPDATE statement above. But my question is: how do I use cron to do this? The server I am using is the university's server known as helios. Does cron need to be set up on helios (do I have to call the admin for this), or is it something else which uses cron? I have never used cron before, so can you explain how it can be set up for the example above with the server I am using? Thanks.
    Update: I think the name of the OS is actually helios, but I am not sure; I have a Wikipedia page on this: here. I will read the crontab Wikipedia page and see what I can find, but my question is: is cron already set up, so I can just go right ahead and use it, or do I need to set it up first?
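    Assuming the account on helios is allowed to use cron, the whole setup is one crontab entry; a sketch (the path to the PHP binary is a guess - `which php` will confirm it):
        # open your personal crontab for editing
        crontab -e
        # then add: run at 00:00 on 25 October (minute hour day-of-month month day-of-week)
        0 0 25 10 * /usr/bin/php /path/to/inactivesession.php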

  • Changing the prompt in telnet

    - by wim
    With some help from people on here, I was able to set a custom prompt in an ssh session (thanks!). Now I need to do the same in telnet, but I'm not sure of the syntax I could use for that. Basically the telnet prompt is just a > character; I need to modify it to something I can more reliably detect in automation jobs. Hope this makes sense. From inside telnet, trying to escape the command with a bang, like !PS1=spam and !PS2=eggs, did not change it.
    wim@wim-acer:~$ ssh [email protected] -i ~/.ssh/guest_nopassphrase -t "export PS1='Sending a custom prompt \w \$ '; exec sh"
    Sending a custom prompt ~ $ set
    HOME='/var/tmp' IFS=' ' LOGNAME='guest' PATH='/sbin:/usr/sbin:/bin:/usr/bin' PPID='1128' PS1='Sending a custom prompt \w $ ' PS2='> ' PS4='+ ' PWD='' SHELL='/bin/sh' TERM='xterm' USER='guest'
    Sending a custom prompt ~ $ telnet localhost
    <snip>
    Entering character mode
    Escape character is '^]'.
    > !set
    CONSOLE='/dev/ttyp0' HOME='/var/tmp' IFS=' ' LOGNAME='root' PATH='/sbin:/bin:/usr/sbin:/usr/bin' PPID='546' PREVLEVEL='N' PS1='\w \$ ' PS2='> ' PS4='+ ' PWD='/var/tmp' RESPAWN_COUNT='1' RESPAWN_LAST='0' RESPAWN_MAX='5' RESPAWN_TIME='5' ROOTDEV='/dev/sla1' RUNLEVEL='5' SHELL='/bin/false' TERM='linux' USER='root'
    >
    > Connection closed by foreign host
    Sending a custom prompt ~ $ Connection to 192.168.1.124 closed.
    wim@wim-acer:~$
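    One avenue to try (a sketch only, not verified against this device): since the remote side appears to be a Bourne-style sh, send a plain PS1 assignment as the first input after the telnet login instead of using the client's ! escape:
        # typed (or written by the automation) at the remote > prompt
        PS1='AUTOPROMPT> '
        export PS1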

  • What does SQL Server's BACKUPIO wait type mean?

    - by solublefish
    I'm using SQL Server 2008 ("R1"), with some maintenance plans that back up my databases to a network share. Some of my backup jobs show long waits of type BACKUPIO. Of course it seems like this is an I/O subsystem limitation, but I'm skeptical. Perfmon stats for I/O on the production (source) server are well within normal trends for that server. The destination server shows a sustained 7 MB/s write rate, which seems incredibly low, even for a slow disk. The network link is gigabit Ethernet and nowhere near saturated. The few docs I've turned up about BACKUPIO indicate that it's not specifically a wait on I/O, surprisingly enough. This MSFT doc says it's abnormal unless you're using a tape drive, which I'm not, but it doesn't say (or I don't understand) exactly what resource is missing: http://www.docstoc.com/docs/24580659/Performance-Tuning-in-SQL-Server-2005. And this piece says it's not related to I/O performance at all: http://www.informit.com/articles/article.aspx?p=686168&seqNum=5 ("Note that BACKUPIO and IO_AUDIT_MUTEX are not related to IO performance.") Anyway, does anyone know what BACKUPIO actually means and/or what I can do to diagnose or eliminate it?
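    For anyone reproducing this, a small sketch of sampling the wait from the standard DMV before and after a backup run (sqlcmd with Windows authentication; the columns are the stock sys.dm_os_wait_stats columns):
        sqlcmd -S localhost -E -Q "SELECT wait_type, waiting_tasks_count, wait_time_ms FROM sys.dm_os_wait_stats WHERE wait_type IN ('BACKUPIO','BACKUPBUFFER','ASYNC_IO_COMPLETION')"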

  • How to get the exit code of a script started in a screen session

    - by Bettina
    Hi folks, I am currently creating a backup script which uses screen to start a backup job with rsync inside a screen session. The backup jobs are started as follows:
    screen -dmS backup /usr/bin/rsync ...
    As soon as the rsync job is finished, the screen session is terminated automatically. To make sure that the backup was successful, I would like to check the exit code of the rsync job, but unfortunately I really don't know how to get the exit code after the screen session has terminated. Does someone have a good idea how to automatically check whether the rsync job was successful or not? Would be great if someone does. I already thought about using a temp file, like this:
    screen -dmS myScreen "rsync -av ... ; echo $? > /tmp/myExitCode"
    but this unfortunately does not work. Then I thought about using stderr, like in the example below:
    screen -dmS myScreen "rsync -av ... 2> /tmp/rsync-stderr"
    None of my ideas worked out so far, since stderr is not written when I use the command above. :-(
    Would be great if someone has a good idea or even a solution. Cheers, Bettina
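    A sketch of the wrapper approach that usually works here (paths are placeholders): hand screen a sh -c command in single quotes, so $? is evaluated inside the session rather than by the shell that launches it:
        # single quotes stop the launching shell from expanding $? too early
        screen -dmS backup sh -c '/usr/bin/rsync -av /src/ /dst/ > /tmp/backup.log 2>&1; echo $? > /tmp/backup_exit'
        # later, read the stored status
        [ "$(cat /tmp/backup_exit)" -eq 0 ] && echo "backup OK" || echo "backup FAILED"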

  • Remote offscreen rendering

    - by redmoskito
    My research lab recently added a server that has a beefy NVIDIA graphics card, which we would like to use to do scientific computations. Since it isn't a workstation, we'll have to run our jobs remotely, over an ssh connection. Most of our applications require doing OpenGL rendering to an offscreen buffer, then doing image analysis on the result in CUDA. My initial investigation suggests that X11 forwarding is a bad idea, because OpenGL rendering will occur on the client machine (or rather the X11 server - what a confusing naming convention!) and will suffer network bottlenecks when sending our massive textures. We will never need to display the output, so it seems like X11 forwarding shouldn't be necessary, but OpenGL needs $DISPLAY to be set to something valid or our applications won't run. I'm sure render farms exist that do this, but how is it accomplished? I think this is probably a simple X11 configuration issue, but I'm too unfamiliar with it to know where to start. We're running Ubuntu Server 10.04, with no gdm, GNOME, etc. installed. However, the xserver-xorg package is installed.
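    Two sketches of the usual approaches, depending on whether the GPU itself has to do the rendering (display numbers and the application name are placeholders; the GPU path on a headless NVIDIA box typically also needs a small xorg.conf tweak to fake a connected monitor):
        # CPU-only fallback: a virtual framebuffer satisfies the $DISPLAY requirement
        # but never touches the NVIDIA card
        Xvfb :1 -screen 0 1280x1024x24 &
        DISPLAY=:1 ./render_job
        # GPU path: run a real X server on the card with no monitor attached and
        # point the ssh session's DISPLAY at it
        X :0 &
        DISPLAY=:0 ./render_job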

  • Cron job checking for changes in Git repository

    - by HNygard
    We have just moved our server configs to a Git repository, so there should not be any changes in any of the repository folders. I was thinking about how I could set up a cron job to check for any uncommitted changes. How could a cron job be set up to check for changes in a Git repository? Grepping the output of the git status command might just do it, but grep and cron jobs are not my strong side. Here are some sample outputs from git status.
    Standing in the folder containing the git repository (e.g. /path/gitrepo/) with changed files:
    $ git status
    # On branch master
    # Changes not staged for commit:
    #   (use "git add <file>..." to update what will be committed)
    #   (use "git checkout -- <file>..." to discard changes in working directory)
    #
    #       modified:   apache2/sites-enabled/000-default
    #
    # Untracked files:
    #   (use "git add <file>..." to include in what will be committed)
    #
    #       apache2/conf.d/test
    no changes added to commit (use "git add" and/or "git commit -a")
    Standing in the folder when there are no changes:
    $ git status
    # On branch master
    nothing to commit (working directory clean)
    Update: Being synced up with origin is not important; there should simply be no local changes. Local files that must be in place go into the .gitignore file. In addition to the server configs there are also git repos for content (static web sites, web apps, WordPress, etc.). None of the repositories should have local changes. We might use Puppet in the long run, since it's being used for development of one of the web apps.
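    A minimal sketch of the cron side (script path, repository path and schedule are placeholders): git status --porcelain prints nothing on a clean tree, and cron mails any output a job produces, so printing only when the tree is dirty is enough for a notification:
        #!/bin/sh
        # /usr/local/bin/check-git-clean - warn if the work tree has uncommitted changes
        cd /path/gitrepo || exit 1
        if [ -n "$(git status --porcelain)" ]; then
            echo "Uncommitted changes in /path/gitrepo:"
            git status --short
        fi
        # crontab entry to run the check hourly:
        # 0 * * * * /usr/local/bin/check-git-clean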

  • Cisco FWSM -> ASA upgrade broke our mail server

    - by Mike Pennington
    We send mail with unicode asian characters to our mail server on the other side of our WAN... immediately after upgrading from a FWSM running 2.3(2) to an ASA5550 running 8.2(5), we saw failures on mail jobs that contained unicode. The symptoms are pretty clear... using the ASA's packet capture utility, we snagged the traffic before and after it left the ASA...
    access-list PCAP line 1 extended permit tcp any host 192.0.2.25 eq 25
    capture pcap_inside type raw-data access-list PCAP buffer 1500000 packet-length 9216 interface inside
    capture pcap_outside type raw-data access-list PCAP buffer 1500000 packet-length 9216 interface WAN
    I downloaded the pcaps from the ASA by going to https://<fw_addr>/pcap_inside/pcap and https://<fw_addr>/pcap_outside/pcap... when I looked at them with Wireshark Follow TCP Stream, the inside traffic going into the ASA looks like this
    EHLO metabike
    AUTH LOGIN
    YzFwbUlciXNlck==
    cZUplCVyXzRw
    But the same mail leaving the ASA on the outside interface looks like this...
    EHLO metabike
    AUTH LOGIN
    YzFwbUlciXNlck==
    XXXXXXXXXXXX
    The XXXX characters are concerning... I fixed the issue by disabling ESMTP inspection:
    wan-fw1(config)# policy-map global_policy
    wan-fw1(config-pmap)# class inspection_default
    wan-fw1(config-pmap-c)# no inspect esmtp
    wan-fw1(config-pmap-c)# end
    The $5 question... our old FWSM used SMTP fixup without issues... mail went down at the exact moment that we brought the new ASAs online... what specifically is different about the ASA that it is now breaking this mail? Note: usernames / passwords / app names were changed... don't bother trying to Base64-decode this text.
