Search Results

Search found 2088 results on 84 pages for 'jobs'.

  • lighttpd silently stops logging

    - by Max Cantor
    I'm on a Slicehost 256MB VPS with Ubuntu 9.04 (Jaunty). lighttpd is the only web server process running; it listens on port 80. My lighttpd.conf can be found here. I'm using Ubuntu's default logrotate setup for lighty. At seemingly random times, lighttpd will stop logging. It is not correlated with log rotation--that is, the errors do not occur when logrotate kicks in. What happens is, I will verify that the server is serving files by hitting a URL with my browser, and I will verify that it is not logging by checking access.log and seeing that the GET request I just made is not there. Using init.d to restart the process starts logging again, without truncating or rotating the log file. That is, new requests will be logged at the end of the existing access.log file. There are no cron jobs running on this box. Any ideas?
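
    A diagnostic worth running the next time it happens (a sketch, assuming a single lighttpd process and the Ubuntu default log path; adjust to match your lighttpd.conf):

        # Does the running daemon still hold the same file that access.log points to?
        ls -l /proc/$(pidof lighttpd)/fd | grep -i log
        # Compare against the inode of the file on disk; a mismatch (or "(deleted)")
        # means something replaced the log file without telling lighttpd to reopen it.
        ls -i /var/log/lighttpd/access.log

    If the descriptor points at a deleted or rotated-away inode, the usual fixes are copytruncate-style rotation or having logrotate's postrotate step signal lighttpd to reopen its logs.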

    Read the article

  • Is there a software package that safely allows SSH via web on simple web host?

    - by spoulson
    I want to be able to use a secured web page on my shared web host to make SSH connections out to any destination. A shared web host is cheap and easy to maintain, and usually allows SSH to the web server itself. There are times I'd like to ssh into my web server but don't have direct SSH connectivity. I'm aware of consoleFISH, Ajaxterm, and Anyterm. The problem is that consoleFISH is a man-in-the-middle by design, and Ajaxterm/Anyterm require running a daemon process on the hosting server; web hosts can usually support cron jobs, but not continuously running daemon processes. Additional Apache modules are usually out, too, as they require reconfiguration of the server and affect all other customers. Are there any software packages out there that I can run on my shared web hosting account and that provide a true SSH experience within these limitations?

    Read the article

  • Request Coalescing in Nginx

    - by Marcel Jackwerth
    I have an image resize server sitting behind an nginx server. On a cold cache, two clients requesting the same file could trigger two resize jobs:

        client-01.net  GET /resize.do/avatar-1234567890/300x200.png
        client-02.net  GET /resize.do/avatar-1234567890/300x200.png

    It would be great if, in this situation, only one of the requests went through to the backend while the other client is put on hold. Varnish seems to have such a feature, called request coalescing, but that appears to be a Varnish-specific term. Is there something similar for nginx?
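
    For reference, nginx can do this when responses are cached: with cache locking, only one request per cache key is forwarded while the rest wait. A minimal sketch (the upstream name and paths are assumptions, and proxy_cache_lock requires a reasonably recent nginx):

        proxy_cache_path /var/cache/nginx/resize keys_zone=resize:10m;

        server {
            location /resize.do/ {
                proxy_pass        http://resize_backend;  # hypothetical upstream
                proxy_cache       resize;
                proxy_cache_valid 200 10m;
                proxy_cache_lock  on;   # one request populates the cache entry;
                                        # concurrent requests for the same key wait
            }
        }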

    Read the article

  • Adjusting color on one monitor in Nvidia Surround

    - by Chris Stauffer
    I'm currently running three 2560x1440 screens using Nvidia Surround. Two monitors are Yamakasi Catleaps (cheap Korean jobs) and the third is the Achievia Shimian (also Korean). The Catleaps have great color reproduction; the Shimian, however, is exceptionally blue-tinted. With normal monitors the correction would take minimal effort, but these Korean monitors have no hardware controls for it. For those who are unfamiliar, Nvidia Surround takes all three monitors and makes one big "monitor" out of them (Xinerama, for GNU/Linux folk), at a resolution of 7680x1440 in my case. Adjusting the color profile in the Nvidia control panel therefore changes the settings for ALL of the monitors simultaneously. So I am looking for software that can adjust just the Shimian (perhaps by selecting only the pixels that monitor encompasses). Does anyone know of such a program?

    Read the article

  • Successful su for user by root in /var/log/auth.log

    - by grs
    I have these sorts of entries in my /var/log/auth.log:

        Apr 3 12:32:23 machine_name su[1521]: Successful su for user1 by root
        Apr 3 12:32:23 machine_name su[1654]: Successful su for user2 by root
        Apr 3 12:32:24 machine_name su[1772]: Successful su for user3 by root

    The situation: all of the users are real accounts in /etc/passwd; none of them has a crontab of their own; all of them logged in to the machine some time earlier (anywhere from a few minutes to a few hours) via SSH or NoMachine; no cron jobs are scheduled to run at that time, and anacron is removed. I see similar entries on other days at other times; the common thread is that the users are logged in when the entries appear. They do not appear during login, but some time afterwards. This machine has a similar setup to a few others, but it is the only one where I see these entries. What causes them? Thanks
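
    One way to catch what is invoking su (a sketch, assuming auditd is installed and running):

        # Record every execution of su, tagged for easy searching
        auditctl -w /bin/su -p x -k su-watch
        # After the next entries show up in auth.log, pull the matching audit
        # records and inspect the ppid/exe fields to identify the caller
        ausearch -k su-watch -i | tail -n 40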

    Read the article

  • LPR command won't recognize CUPS printer

    - by Datapimp23
    I have a CUPS server with one shared printer configured on it. It prints test pages without problems:

        printername (Idle, Accepting Jobs, Shared)
        Description: desc
        Location:
        Driver: Zebra ZPL Label Printer (grayscale, 2-sided printing)
        Connection: socket://172.20.50.26
        Defaults: job-sheets=none, none media=oe_w288h432_4x6in sides=one-sided

    This is the output from lpstat -t; it shows that the printer is idle and accepting requests:

        admin@SERVER:~$ lpstat -t
        scheduler is running
        no system default destination
        device for printername: socket://172.20.50.26
        printername accepting requests since Thu 26 Jan 2012 01:29:35 PM CET
        printer printername is idle.  enabled since Thu 26 Jan 2012 01:29:35 PM CET

    Now when I send a print job to it via an lpr command, it won't recognize the printer:

        /usr/bin/lpr -P printername test.pdf

    Result:

        lpr: ttn_seg_zebra1: unknown printer

    What am I missing here?
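
    One thing stands out: the error names a queue (ttn_seg_zebra1) different from the one requested, which hints that the lpr being invoked is not the CUPS one, or is pointed at a different spooler. A few checks, as a sketch (assuming the command runs on the CUPS server itself):

        which lpr                   # CUPS's /usr/bin/lpr, or another spooler's binary?
        lpstat -p                   # which queues does this CUPS client actually see?
        CUPS_SERVER=localhost lpr -P printername test.pdf   # pin the client to the local cupsd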

    Read the article

  • Graphics setup tune-up checklist

    - by Click Ok
    I was trying to play the game Warzone 2100. The game runs fine, at a nice speed, but the screen shows flickering horizontal lines... My PC has an integrated GeForce Go 6100 GPU. OK, not a powerful card, but it should not be the end of the world to run a "simple" game like this (compared with other games that demand an expensive video card). So I think the problem may be the configuration of my machine. I use it primarily for programming jobs, so I have paid little attention to the video setup. I would like a checklist to tell whether my PC is "ready" for games. For example, I know that I need: the latest video drivers, and updated DirectX and OpenGL runtimes. What do you suggest? Are there also good programs to test performance and suggest improvements to the system? Thank you! PS: I'm using Windows 7

    Read the article

  • Where would an S3 upload speed cap originate?

    - by CoreyH
    I do a ton of uploading to S3, am experiencing capped speeds, and can't quite figure out how to address it. The setup: Windows Server 2008 R2 x64, external HD, using a Java-based upload tool called Jsh3ll and custom VBS scripts to kick the jobs off. Running one process at a time, I am always limited to about 4 Mbps. I have FiOS at 35/35 Mbps, so it isn't an outright limit. And I can run parallel instances all the way up to 35 Mbps, so I know the problem isn't gateway/NIC/machine/Amazon related. Running parallel instances works as a solution to a degree, but it greatly increases the complexity of my workflow; solving this would make my life dramatically easier. When I was first doing this I played around with a bunch of Windows TCP parameters and was able to briefly get unconstrained bandwidth, but it wasn't repeatable. Thoughts?
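
    One of the Windows TCP parameters worth revisiting methodically is receive-window auto-tuning, since a small fixed window caps each connection's throughput on a long path. A sketch (run from an elevated prompt; whether this is the culprit here is an open question):

        netsh interface tcp show global
        netsh interface tcp set global autotuninglevel=normal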

    Read the article

  • Bigger ProjectServer farm is performing worse

    - by MSPS DBA
    I am using Project Server 2007 SP3 with SharePoint 2007 SP3 and SQL Server 2008 R2. I have recently moved my farm from two servers (1 DB and 1 App/Web) to a much bigger farm: many servers, a clustered database, a load balancer, powerful processors, and large amounts of RAM. This farm has multiple web servers, Project app servers, SharePoint app servers, and a separate index server. But the performance of Project Server in the new farm has degraded: views take even longer to load data, and project publishing time has increased. I am also facing deadlock problems, which cause Project Server queue jobs to fail. Could anyone tell me what the reason for this might be, and where I should start looking? Is it mainly because the application server now needs to communicate with other application servers, which was not needed in the previous farm? Thanks!

    Read the article

  • CentOS, CUPS - printer management

    - by HTF
    I'm using CentOS 6.3 and trying to get a Canon PIXMA iP4950 printer to work. The printer is attached via USB. I've downloaded and installed the drivers from the Canon website, and have the printer installed in CUPS. However, when I print anything (even the test page), the job completes successfully according to the CUPS log, but the printer does not print a thing. I don't know how to debug this. I have tried changing the logging level to debug, but I don't see any errors in the error_log, and the access_log says:

        Returning IPP successful-ok for Get-Jobs (ipp://localhost:631/printers/Canon_iP4900_series) from localhost

    Note that I was able to print from another CentOS machine, though that one has a GNOME desktop.
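
    A next debugging step, as a sketch (the queue name is taken from the log line above; paths are the CUPS defaults):

        cupsctl --debug-logging                  # enable debug logging without editing cupsd.conf
        lp -d Canon_iP4900_series /etc/hosts     # send a trivial job
        tail -n 200 /var/log/cups/error_log      # trace the filter chain and the backend's exit status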

    Read the article

  • Web Filter For Multiple Networks

    - by Rob
    I have been using a Barracuda Web Filter 310 in our network and I have just had enough of it. It does not support trunking, and we have several networks with users that need to be web-filtered. (I guess if everyone just did their flippin jobs I would not have this issue, but management wants me to get it resolved.) Does anyone know of web filters, better than the Barracuda, that support network trunking, so that I can run multiple domains and subnets through one device? Thanks in advance - everyone on this site is gold in my book!

    Read the article

  • Windows VPS running Apache and MySQL; PHP scripts run slowly but CPU usage is 1-3%

    - by Roeland
    So every night I run some cron jobs. It takes about 20 minutes to process all the records; I gather the script runs something like 10,000 SQL queries. I figured the task was just that intense and needed time to complete, but I looked at CPU and memory usage, and it is super low: CPU usage is between 1% and 3%, and once in a while it bounces to around 50% for 2-3 seconds. This VPS is running Windows Server 2003 with Apache and MySQL. Does this sound right?

    Read the article

  • How to schedule a biweekly cronjob?

    - by Roman
    crontab(5) defines the following fields:

        field          allowed values
        -----          --------------
        minute         0-59
        hour           0-23
        day of month   1-31
        month          1-12 (or names, see below)
        day of week    0-7 (0 or 7 is Sun, or use names)

    and explains:

        Step values can be used in conjunction with ranges. Following a range
        with "/<number>" specifies skips of the number's value through the
        range. For example, "0-23/2" can be used in the hours field to specify
        command execution every other hour (the alternative in the V7 standard
        is "0,2,4,6,8,10,12,14,16,18,20,22").

    So, no biweekly jobs, as far as my understanding goes. I'm quite sure there are workarounds - what are yours? Or did I miss something?
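
    The usual workaround is to schedule the job weekly and let a wrapper decide whether the current week is an "on" week; a sketch (the payload path is a placeholder):

        #!/bin/sh
        # run-biweekly.sh -- invoke from a weekly crontab entry, e.g.:
        #   0 3 * * 1 /path/to/run-biweekly.sh
        week=$(( $(date +%s) / 604800 ))     # whole weeks elapsed since the Unix epoch
        [ $(( week % 2 )) -eq 0 ] || exit 0  # skip odd weeks; invert the test to shift phase
        exec /path/to/real-job.sh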

    Read the article

  • Video encoding is very slow on Amazon EC2 instance

    - by Timka
    We are using an Amazon EC2 m1.xlarge instance for video re-encoding, and the actual encoding process looks very slow: an average 250 MB video file takes about an hour to encode.

        Instance: m1.xlarge (Xeon E5645, 15 GB RAM)
        Windows Server 2008 R2 64-bit
        AviSynth 2.5 (32-bit) + ffms2 plugin (FFmpegSource 1.21)
        FFmpeg SVN-r13712 (libavutil 3213056, libavcodec 3356930,
                           libavformat 3411456, libavdevice 3407872)
        Number of parallel jobs: 3
        Average CPU utilization: ~96%

    Update #1: the source video is mp4/h.264. FFmpeg build parameters:

        --enable-memalign-hack --enable-avisynth --enable-libxvid --enable-libx264
        --enable-libgsm --enable-libfaac --enable-libfaad --enable-liba52
        --enable-libmp3lame --enable-libvorbis --enable-libtheora --enable-pthreads
        --enable-swscale --enable-gpl

    Video files are encoded to mp4/h.264 with the following extra command-line options:

        -threads 0 -coder 0 -bf 0 -refs 1 -level 30 -maxrate 10000000 -bufsize 10000000

    Read the article

  • Managing service passwords with Puppet

    - by Jeff Ferland
    I'm setting up my Bacula configuration in Puppet. One thing I want to do is ensure that each password field is different. My current thought is to hash the hostname with a secret value; that would give each file daemon a unique password, and that password can be written to both the director configuration and the file server. I definitely don't want to use one universal password, as that would permit anybody who compromised one machine to access any machine through Bacula. Is there another way to do this besides using a hash function to generate the passwords? Clarification: this is NOT about user accounts for services. It is about the authentication tokens (to use another term) in the client/server files. Example snippet:

        Director {                            # define myself
          Name = <%= hostname %>-dir
          QueryFile = "/etc/bacula/scripts/query.sql"
          WorkingDirectory = "/var/lib/bacula"
          PidDirectory = "/var/run/bacula"
          Maximum Concurrent Jobs = 3
          Password = "<%= somePasswordFunction %>"   # Console password
          Messages = Daemon
        }
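
    Sketched outside Puppet, the derivation the question describes could look like this (the secret and the 40-character truncation are assumptions, not from the question):

        # Derive a deterministic per-host password from the hostname plus a shared secret
        SECRET='some-long-random-string'     # hypothetical; keep it out of version control
        printf '%s' "$(hostname -f)${SECRET}" | sha256sum | awk '{print $1}' | cut -c 1-40

    Running the same derivation inside an ERB template would let the director and each file daemon compute identical passwords without storing one per host.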

    Read the article

  • Job queueing in Toast Titanium 10?

    - by moonslug
    I have a bunch of .MP4 video files I'm burning to DVD-Video using Toast Titanium 10 on my MacBook Pro. Right now I'm doing them one at a time; because my computer is several years old, encoding the video for a single DVD takes approximately six hours. It appears I can encode the video directly to a .toast format, although I have yet to figure out whether I can burn those directly to DVD. Also, I have quite a bit of video left to burn, and even that method would require me to intervene manually every six hours to start a new encoding or burn job. Would it be possible to queue up multiple DVD-Video encoding jobs at once and have the computer work through them automatically? The actual writing to DVD disc doesn't take nearly as long, and if all my video were encoded to begin with, the job would be much quicker. Maybe this can be accomplished with a different piece of software?

    Read the article

  • Multiple .bkf files created in Backup Exec 12.5 or 2010 - related to heavy I/O?

    - by syuusuke
    Hey everyone, I was wondering if anyone who has used Backup Exec 12.5 or 2010 has ever experienced multiple .bkf files being created for a single job. By multiple files I mean that the .bkf files are created at random sizes under 2 GB, even though I've set the option to split the file at 10 GB. A single job will sometimes create 20 .bkf files, with chunks ranging from 50 MB to 800 MB. Is this a sign of heavy I/O issues? Bandwidth limitations? I'm not sure; I'm here to seek advice and suggestions. I've set up another backup server with the exact same settings, and it creates a new .bkf file only when the 10 GB limit has been reached. I am backing up different machines, but the settings are an exact match to the problematic server's, or at least I think that server is the problem.

    Read the article

  • NetDiag + TCP Blocking?

    - by CrazyNick
    We are facing an issue with the SharePoint 2007 timer jobs every day at a specific time, so we decided to track the TCP blocking information during those hours using the NetDiag tool. "netdiag /test:ipsec" does not give us the required information. What command can be used to pull the TCP blocking information, and how do we configure it? Also, when I run "netdiag /test:ipsec /debug" it returns "IP Security test . . . . . . . . . : Skipped". What does that mean?

    Read the article

  • Recommendations: Good Network MFP Printer/Scanner

    - by Joeme
    Hi, we have a small office that is expanding. At the moment we have one HP J6424 MFP, shared using its built-in network port. It is now becoming a headache: we have daily problems with people not being able to print or scan, with jobs just sitting in the queue, or with the scanner not being detected. Sometimes people can print but not scan, sometimes scan but not print, sometimes a bit of both. We are also pretty much constantly printing or scanning, or trying to! I would like to get a laser MFP (mono is fine) that works well for scanning and printing over the network with multiple users. Alternatively, any recommendations for network scanners (sheet-feed and/or duplex a bonus)? Clients are Windows 7 and Mac. Thanks very much!

    Read the article

  • Odd SVN checkout failures occur frequently on VMware virtual machines

    - by snowballhg
    We've recently been experiencing seemingly random SVN checkout failures on our Hudson build system. Google search has failed me; I'm hoping the Super User community can help me out :-) We occasionally receive the following SVN error when our Hudson build jobs check out source via the Hudson Subversion plug-in (which uses SVNKit):

        ERROR: Failed to check out http://server/svnroot/trunk
        org.tmatesoft.svn.core.SVNException: svn: Processing REPORT request response failed:
          XML document structures must start and end within the same entity. (/svnroot/!svn/vcc/default)
        svn: REPORT request failed on '/svnroot/!svn/vcc/default'

    The issue seems to occur only when checking out on our virtual machines (Windows XP, Fedora 9, Fedora 12) using Hudson's SVN plug-in; systems that use the traditional SVN client seem to work. SVN server version: 1.6.6. Hudson version: 1.377. Hudson SVN plug-in version: 1.17. Has anyone dealt with this issue, or have any suggestions? Thanks

    Read the article

  • Unix printing a banner page on every print job

    - by yum_tacos4u
    I have a Data General Unix server that is printing a banner page on every print job. I originally thought the banner page was coming from the printer; as it is an HP printer, I used telnet to get into JetAdmin and disabled the banner page there, but this did not solve the issue. I then went into the sysadm program to see if TCP/IP printing was set to print a banner page on print jobs, but I did not see any option for it. Any help or ideas on how to disable the banner page from printing in Unix? Here is a banner page example print.

    Read the article

  • I'll be setting up a dedicated web server at work soon, my first non hobby server - What should I know?

    - by Rogue Coder
    I've been running my own dedicated server with CentOS and a LAMP stack for 2-3 years now, but it has only been hosting my own websites, which aren't super important. However, I will soon be setting up a Linux web server and a Linux database server at work, and I'm wondering what important things I should be doing. It's an internal server only, so only people in the company can access it. Should I get a slave server for each of my servers, for backups? If I do, how many backups should I keep, and how often should they be made? Right now, on my current server, I run a nightly cron job to back up my MySQL databases (usually 40 MB files once compressed) and bi-weekly cron jobs to back up my web root; I just store these files on my local computer via FTP. Also, for an internal server like this, should I look at using lighttpd or nginx to increase performance, or will Apache be fine?
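
    For the nightly database half of that routine, a minimal sketch (paths, the backup user, and the 14-dump retention are placeholders, not a policy recommendation):

        #!/bin/sh
        # nightly-db-backup.sh -- dump all databases, compress, keep the newest 14 dumps
        stamp=$(date +%F)
        mysqldump --all-databases --single-transaction -u backup -p"$BACKUP_PW" \
            | gzip > "/backup/db-$stamp.sql.gz"
        ls -1t /backup/db-*.sql.gz | tail -n +15 | xargs -r rm --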

    Read the article

  • Getting started with Windows Server 2008/2012

    - by hbrock
    First let me say, I am a programmer (not a superstar), but I want to get more into the system/network administration side of things, because there are more jobs for system/network administrators in the area where I live. Right now I am using virtual machines to learn how Windows Server 2008/2012 works and to build labs with. But how would I prove my Windows Server 2008/2012 skill set to an employer? As a programmer, I would point to my past projects, code samples, and so on. Thanks for any help.

    Read the article

  • Capture the build number for a remote-triggered Hudson job?

    - by EMiller
    I have a very simple in-house web app from which certain Hudson builds (on another server) can be triggered remotely. I have no problem triggering the builds, but I don't know how to capture the associated build number for later reference. I'm using the buildWithParameters trigger, and the actual result of that call is just a mess of HTML; I don't believe it gives me back the build number. I started down the path of pulling the whole build list for the job (via the API) and then attempting to reconcile that list against my records, but that's much more complicated than I'd like it to be. I also considered sleeping for a few seconds after launching the job and then grabbing the latestBuild from the Hudson API, but I'm sure that will go wrong at some point (someone will fire off two jobs quickly, and I'll get the association wrong).
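
    One approach that sidesteps the race: pass a unique tag as a build parameter, then search the job's remote API for it instead of trusting the latest build. A sketch with curl (the host, job name, and RUN_TAG parameter are assumptions):

        TAG="webapp-$(date +%s)-$$"
        curl -s "http://hudson:8080/job/myjob/buildWithParameters?RUN_TAG=$TAG"
        # Later: scan recent builds for the tag (crude; a real client would parse the XML)
        for n in $(curl -s "http://hudson:8080/job/myjob/api/xml" \
                     | grep -o '<number>[0-9]*</number>' | tr -dc '0-9\n'); do
            curl -s "http://hudson:8080/job/myjob/$n/api/xml" \
                | grep -q "$TAG" && { echo "triggered build is #$n"; break; }
        done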

    Read the article
