Search Results

Search found 7685 results on 308 pages for 'job scheduler'.


  • (Mac Intel) HP PS driver prints in B&W from Adobe Reader after installing Canon PS driver

    - by John B
    I have a unique problem that leaves me at a loss as to where to start troubleshooting. We have three Macs we use for graphics, two of which are PowerPC and one of which is Intel. They are set up to print to an HP 5500dn, but occasionally this printer gets tied up with a massive print job, so I installed the PS driver (iR-PSv1.81MacOSX) for the Canon C3200 printer/copier on each of the machines. Both PowerPC Macs installed it without issue, but the Intel Mac exhibits strange behavior. I've confirmed that while the Canon driver is installed (whether or not the Canon is set up for printing in the print settings), the HP 5500dn will print in color from Safari, but only prints in black and white from Adobe Reader. The Canon printer itself has not exhibited any strange behavior. As soon as the Canon driver is uninstalled, the HP 5500dn prints in color from Adobe Reader again. We run a network of Windows PCs, and the 'Mac room' mostly takes care of itself, so we don't have any experienced Mac administrators onsite. The Canon is capable of AppleTalk, but the PS driver seemed easier to work with (and AppleTalk is currently disabled on the Canon). I'm not against using the AppleTalk-compatible drivers, but I would rather use the PS driver if at all possible - I don't want to open up the proverbial can of worms. If someone has any clues or suggestions that would help troubleshoot this problem, I would be grateful. I've already done some googling, but due to the obscure nature of this problem, I haven't been very successful.
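
    A starting point for comparing the two queues is the stock CUPS command-line tools that ship with OS X (the queue name below is a placeholder; use whatever lpstat reports):

        lpstat -p                   # list the configured print queues
        lpoptions -p HP_5500dn -l   # list the queue's PPD options; check the
                                    # default ColorModel / color setting

    If the queue's default color option changed after the Canon driver install, that would point at a PPD conflict rather than at Adobe Reader itself.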

    Read the article

  • How can I take a one time full backup of a Windows Server without the need for a restore program?

    - by TheCleaner
    I have a Windows SBS server with about 500GB of data that I'm decommissioning, but I'd like to take a final backup of the server and place it on an external USB drive. I already have multiple backups of the server on disk from the past, but they are through Simpana Commvault. I'd like a backup that will simply copy the file structure, ACLs, timestamps, etc. as-is to an NTFS volume on the external drive. This way, if someone says "I need x file on the server you decommed", I can search the external drive real quick instead of firing up Commvault, cataloging, restoring, etc. I know the built-in Windows backup is great; I just don't feel like running it for a restore job on this. I'd like an option where a future restore won't require a program at all - a simple mount of the drive will suffice. I believe I can use robocopy just fine, but I'm not sure if it will grab the Windows directory, system files, and full user profiles correctly even with the /ZB option. Options? Is

        robocopy /E /ZB /COPYALL /DCOPY:DAT /MT:32 /R:5 /W:5 /LOG:copylog.log

    the way to go here?
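
    For reference, here is what those switches do as I understand them, plus one addition that might help (source and destination paths below are placeholders, and note that /DCOPY:DAT needs a reasonably recent robocopy build - older versions only accept /DCOPY:T):

        :: /E        copy subdirectories, including empty ones
        :: /ZB       restartable mode; fall back to backup mode on access-denied
        :: /COPYALL  copy data, attributes, timestamps, ACLs, owner and auditing info
        :: /MT:32    use 32 copy threads
        :: /R:5 /W:5 retry failed files 5 times, waiting 5 seconds between tries
        :: /XD       skip folders that always fail with in-use errors
        robocopy C:\ F:\SBS-final /E /ZB /COPYALL /DCOPY:DAT /MT:32 /R:5 /W:5 ^
            /XD "C:\System Volume Information" ^
            /LOG:F:\copylog.log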

    Read the article

  • Looking for a "light" compositing manager for GNOME

    - by detly
    I have an HP Pavilion DM3 (graphics: nVidia GeForce G105M) running Debian Squeeze with GNOME 2.30. My preferred DE is GNOME + Metacity + Nautilus. I'd like to use Docky, but it requires compositing. So I'm looking for a relatively "light" compositing manager. I realise that "light" is ambiguous, but I basically want something that won't chew through my notebook's battery with CPU or GPU usage. I know that Metacity is capable of compositing, but as far as I'm aware it's still in testing. Some people report that it's smooth and lightweight; others claim that it eats up processor time. I've also seen references to a problem with nVidia, but no actual details. I'm not averse to Compiz, but I haven't used it before and I don't know what to expect in terms of "weight". And maybe there's something else I haven't heard of. So can anyone recommend anything? Or dispel my idea that Metacity is not the right tool for the job? (Originally posted on GNOME forums.)
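
    To frame answers, the two candidates I know of so far, both assuming stock Debian Squeeze packages: turning on Metacity's own compositor via its GConf key, or running the standalone xcompmgr:

        # enable Metacity's built-in compositor (GNOME 2.x GConf key):
        gconftool-2 --type bool --set /apps/metacity/general/compositing_manager true

        # or, a minimal standalone compositor:
        apt-get install xcompmgr
        xcompmgr -c &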

    Read the article

  • How should I perform database maintenance on a 24x7 system

    - by solublefish
    I'm a software developer who inherited a part-time DBA role. I'm responsible for an application backed by a small, high-volume 24x7 database on SQL Server 2008. While there's other stuff in the DB, the critical piece is a 50GB, 7.5M-row table that serves 100K requests/sec during peak load, and about half that at "night". This is 99%+ read traffic, but the writes are constant, and required. I need to be able to perform periodic maintenance without a maintenance window - say an index rebuild, a job to purge old data, Windows Update, or a hardware upgrade. Most of the advice I've seen is along the lines of "MAKE a maintenance window." While I appreciate the sentiment, I hope there's another way. If it will solve this problem, I do have the ability to purchase new hardware or modify the database, the clients (a set of web services servers), and much of the application code (ADO.NET + ASP.NET). I've been thinking along the lines of using the warm spare (or a third server) to do the maintenance, and then "swapping" it into production:

    1. Synchronize the spare by restoring backups, including a current transaction log.
    2. Perform the maintenance tasks.
    3. Reconfigure clients to connect to the spare server. Existing connections finish within a minute or so.
    4. The spare server is now the production server.

    The remaining problem is that the new production server is now out of date by however long it took to perform maintenance. Is there some way that the original production server can be made to queue up changes and merge them to the spare between steps 2 and 3? Any other ideas?
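
    As a point of comparison for answers: if online operations would sidestep the window entirely, something like the following is what I had in mind for the index piece (server, database and table names are placeholders, and my understanding is that ONLINE = ON requires Enterprise Edition):

        sqlcmd -S MYSERVER -E -d MyDatabase ^
            -Q "ALTER INDEX ALL ON dbo.BigTable REBUILD WITH (ONLINE = ON, MAXDOP = 2);"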

    Read the article

  • Setting up MySQL Linux slave with a Windows master

    - by philwilks
    I'm running a MySQL 5.0 database server on Windows Server 2008. The total size of the database is about 1GB. I make daily backups, but I'd like to step up to having a slave server for extra protection. My thinking was that I wouldn't need the expense of a Windows machine to do this, and a Linux "cloud server" from Rackspace would do the job well for quite a low cost. However, I have little experience with Linux, so I have a few questions... Does this sound like a good idea? Is there anything wrong with linking Windows and Linux MySQL servers? Does Linux have the equivalent of Remote Desktop Connection, and if so, can I use it from a Windows machine? Would a particular Linux distro be well suited to this task? Rackspace offers Arch Linux, CentOS, Debian, Fedora and Ubuntu. My immediate thinking is to go with Ubuntu, as I've heard it's friendlier for people coming from a Windows background. Any comments you have would be very much appreciated! Phil
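
    In case it affects the answers, the setup I had sketched out is the standard replication pairing - this is only my rough understanding, so treat every value below as a placeholder:

        # on the Windows master (my.ini):
        [mysqld]
        server-id = 1
        log-bin   = mysql-bin

        # on the Linux slave (my.cnf):
        [mysqld]
        server-id = 2

        -- on the master, create a replication account:
        GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%' IDENTIFIED BY 'secret';

        -- on the slave, after loading a dump of the master:
        CHANGE MASTER TO
            MASTER_HOST='master.example.com',
            MASTER_USER='repl',
            MASTER_PASSWORD='secret',
            MASTER_LOG_FILE='mysql-bin.000001',
            MASTER_LOG_POS=0;
        START SLAVE;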

    Read the article

  • How to correctly configure DNS for Icelandic domains and Plesk

    - by Leonard Challis
    I have a domain registered with ISNIC (domain.is). They only let you set nameservers that pass their requirements, and I've been told it's this requirement that I need to satisfy: "Nameserver must be consistently registered in DNS, i.e. its own A resource record must be available and a corresponding PTR resource record as well." I allocated two new IP addresses from my server host and set their PTR records to ns0.domain.is and ns1.domain.is. After that I created two A records for the domain in Plesk, again ns0.domain.is and ns1.domain.is, with their respective IPs. Next, I went to the ISNIC page to register my nameservers along with the IP addresses I'd allocated, and this worked perfectly for both, without error. So the final job was to set the nameservers for the domain via ISNIC's control panel. However, when I try, I get this error:

        Test results for "NS0.DOMAIN.IS":
        The nameserver ns1.vps123.vpsprovider.com is not consistently registered in DNS (ns1.vps123.vpsprovider.com => 1123.123.123.123 => vps123.vpsprovider.com)
        The nameserver ns0.vps123.vpsprovider.com is not consistently registered in DNS (ns0.vps123.vpsprovider.com => 1123.123.123.123 => vps123.vpsprovider.com)
        The nameserver ns0.DOMAIN.IS is missing from the NS record set for DOMAIN.IS

        Test results for "NS1.DOMAIN.IS":
        The nameserver ns1.DOMAIN.IS is missing from the NS record set for DOMAIN.IS
        The nameserver ns0.DOMAIN.IS is missing from the NS record set for DOMAIN.IS

    This is really at the limits of my DNS knowledge, I'm afraid. It feels like I'm close but missing a vital part - linking the nameservers in Plesk or something?
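
    The error text itself suggests the checks ISNIC runs, which can be reproduced with dig (the IP below is a placeholder) - and the "missing from the NS record set" lines hint that the zone's NS records still point at the VPS provider's names rather than at ns0/ns1.domain.is:

        dig +short A ns0.domain.is        # must return the allocated IP
        dig +short -x 203.0.113.10        # must return ns0.domain.is (the PTR)
        dig +short NS domain.is           # must list ns0.domain.is and ns1.domain.is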

    Read the article

  • My internet drops and will only reconnect if I troubleshoot it or restart my computer.

    - by Paul
    My internet loses its connection, and when it happens my sound breaks up and my mouse stops being as smooth as it was before. This has happened before and I didn't mind it, but now it's happening literally all the time. After startup my connection seems okay, but after a few minutes, even without me doing anything, my laptop just loses its connection. It says that it is not connected and that connections are available, but if I check for connections to connect to, it shows nothing - even though there is actually a Wi-Fi connection that my phone and other laptops connect to. They don't seem to have a problem with it; just my laptop. After that happens I usually restart my laptop, but I've discovered that troubleshooting it will sometimes (not always) repair the problem - it says that restarting my wireless adapter did the job. Is there any long-term fix, something that could last or totally erase the problem? I have an Atheros AR5B97 Wireless Network Adapter and a Microsoft Virtual WiFi Miniport Adapter, both enabled. In Services, I have Wired AutoConfig and WLAN AutoConfig set to Automatic. Aspire 4750, Windows 7 (x86)
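
    For reference, restarting the adapter by hand is roughly equivalent to the following (the interface name is a placeholder - use whatever ncpa.cpl shows - and this needs an elevated prompt):

        netsh interface set interface "Wireless Network Connection" admin=disable
        netsh interface set interface "Wireless Network Connection" admin=enable
        netsh wlan show drivers    # the driver version is also worth checking here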

    Read the article

  • Problems using InfraRecorder to back up ISOs of certain CDs

    - by Voyagerfan5761
    I've gotten into the habit of backing up my CDs as ISO files, just in case the discs should be damaged, lost, or destroyed. Using InfraRecorder, the process is pretty painless. Unfortunately, I have run into at least two discs that won't back up. I get the error message:

        Can't read source disc. Retrying from sector 252270

    Sometimes this will appear repeatedly. One of the discs is my retail copy of Star Trek: Armada II; the other is disc one of DOOM 3. Both discs run flawlessly when I put them in the drive and let Windows AutoPlay them. Armada II appears as two tracks (one data, one audio) in InfraRecorder, and the error happens at the approximate track boundary. DOOM 3's first disc, however, fails much sooner (around sector 990) and appears as one solid data track. Am I simply using the wrong tools for this job? InfraRecorder is a nice free tool that I can run from my flash drive and use for most tasks of this type, but it does seem to have trouble with certain things. Ideally I'd like to hear about any workarounds people have found for this issue, but if I must switch tools I'm open to it (preferably to other free tools).
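
    One thought worth weighing in answers: a plain ISO image can only represent a single data track, so a mixed-mode disc like Armada II may need a rip format that preserves the track layout (bin/cue or bin/toc). A sketch with cdrdao, assuming a build of it is available for the platform (the device address varies; cdrdao scanbus lists it):

        cdrdao scanbus
        cdrdao read-cd --device 1,0,0 --read-raw --datafile armada2.bin armada2.toc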

    Read the article

  • background jobs and ssh connections

    - by petrelharp
    This question has come up quite a lot (really, a lot), but I'm finding the answers to be generally incomplete. The general question is "Why does/doesn't my job get killed when I exit/kill ssh?", and here's what I've found. The first question is: how general is the following information? It seems to be true for modern Debian Linux, but I am missing some bits; what do others need to know?

    1. All child processes of a shell opened over an ssh connection, backgrounded or not, are killed with SIGHUP when the ssh connection is closed, but only if the huponexit option is set: run shopt huponexit to see if this is true.
    2. If huponexit is true, then you can use nohup or disown to dissociate the process from the shell so it does not get killed when you exit.
    3. If huponexit is false, which is the default on at least some Linuxes these days, then backgrounded jobs will not be killed on normal logout. But even if huponexit is false, if the ssh connection gets killed or drops (different from a normal logout), then backgrounded processes will still get killed. This can be avoided by disown or nohup as in (2).
    4. There is some distinction between (a) processes whose parent process is the terminal and (b) processes that have stdin, stdout, or stderr connected to the terminal. I don't know what happens to processes that are (a) and not (b), or vice versa.

    Final question: how can I avoid behavior (3)? In other words, by default in Debian, backgrounded processes run along merrily by themselves after logout, but not after the ssh connection is killed. I'd like the same thing to happen to processes regardless of whether the connection was closed normally or killed. Or is this a bad idea?
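
    A compact way to reproduce the cases above in a throwaway session (sleep stands in for a real job):

        shopt huponexit        # check the current setting
        sleep 600 &            # a plain backgrounded job
        disown -h %1           # cases (2)/(3): mark it so SIGHUP is not delivered
        nohup sleep 600 &      # or start one immune from the outset

    Then close the connection normally once, and kill it another time (e.g. with ssh's ~. escape), and see which jobs survive with ps.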

    Read the article

  • How do I change the Dropbox directory on a headless GNU/Linux server?

    - by DrTwox
    I have installed Dropbox 2.0.0 via the command line on my home server (Ubuntu Server 12.04) to use for off-site automated backups, but I can't change the directory that the Dropbox daemon keeps synced. I've tried the following:

    - The official docs say to use the desktop application, which is not applicable in my situation. However, I installed the desktop app on my desktop machine and changed the default folder location, but I can't find where this change is stored in the ~/.dropbox/ directory, so I can't make the same change on the server.
    - This page (and several others) recommends a Python script to do the job. Looking at the script, it opens a SQLite database called ~/.dropbox/dropbox.db, which does not exist in my Dropbox install, leading me to believe the script is out of date.
    - This forum thread suggests manually inserting the required row in the config.db database, which I did, but it made no difference. I checked the same database file on my desktop machine, and it does not have the dropbox_path key, so I'm presuming the information in that thread is also out of date for version 2.0.
    - I have tried to launch the Dropbox GUI configuration wizard over SSH with X11 forwarding, as suggested in one of the answers, but the binary must detect the absence of a local X11 install, and it starts a command-line daemon instead, which provides no means to change the option I need.
    - I am currently using a symlink, as suggested in another answer, but this is a kludge. I would like to know the correct way to make the change.

    How do I change the Dropbox directory on a headless GNU/Linux server?

    Update: I've ditched Dropbox and started using Copy. Their Linux tools and support are far superior to Dropbox's. I leave this question here in case someone, someday, can answer it.
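
    For completeness, the symlink kludge mentioned above looks like this (the target path is just an example, and the start/stop commands assume the common dropbox.py helper script):

        dropbox stop
        mv ~/Dropbox /mnt/backups/Dropbox
        ln -s /mnt/backups/Dropbox ~/Dropbox
        dropbox start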

    Read the article

  • Windows file association for README, INSTALL, LICENSE and the like [closed]

    - by Lumi
    Possible Duplicate: How to set the default program for opening files without an extension in Windows?

    Many files originating in the UNIX world come without a file extension. Popular examples include README, INSTALL, LICENSE. We know for a fact that these are text files. It is therefore a bit disappointing not to be able to just double-click them open in Explorer and see them in Notepad (actually, Notepad2, because of the UNIX line endings, which silly Microsoft Notepad doesn't render correctly). Does anyone know of a way to create a file association for, say, README files without an extension? This could then be replicated to cover the most frequently occurring file types, and then double-clicking them open would work.

    Update (sort of in response to all your comments): Thanks, folks, your comments and answers have helped me. @Indrek, yes, I was under the assumption that you could somehow create an association for just README or Makefile, and couldn't do so for files without an extension. Turns out the contrary is true, and yes, that is a workaround that neatly solves the issue. Ultimately, I just want to be able to double-click to open a README or Makefile, that's all. @Sampo, the SendTo trick is also useful, although the usability is not as great as a straight double-click. (I'm really lazy sometimes.) Turns out the following trick using assoc and ftype from an Administrator prompt does the double-click-enabling job:

        assoc .=no_ext
        ftype no_ext=%SystemRoot%\system32\NOTEPAD.EXE %1
        :: You can see it created some entries in the registry:
        reg query hkcr\no_ext /s
        reg query hkcr\. /s

    Read the article

  • Windows ACL inheritance issues for FTP server and automated tools

    - by Martin Sall
    I have set up Cerberus FTP server. By default, the Cerberus FTP service runs under the SYSTEM account. I also have some console applications that run as scheduled tasks. They run under a dedicated "Utilities" user account which has "Log on as batch job" permissions. These console applications take uploaded FTP files, process them, and then move them to a dedicated archive folder. The problem is that my console apps throw security exceptions when trying to access the uploaded files. I tried giving the "Utilities" account Full Control permissions on the ftproot folder, and I checked the "Replace all child object permissions with inheritable permissions from this object" checkbox, but it affects only current files. When new files are uploaded, they again are not accessible by the "Utilities" account. I then tried another way and put the Cerberus FTP service under the "Utilities" account. I also needed to give the "Utilities" account permissions on the Cerberus Data folder in ProgramData. Still no luck - after this change, the Cerberus internal SOAP web service stopped working (although everything else seems to work). I need that SOAP service to be available, so running Cerberus FTP under the "Utilities" account seems not to be an option, unless I can find out what else needs to be set up for that "Utilities" account to stop Cerberus from complaining. I guess Cerberus is uploading files to some temporary folder, so those files get their permissions from that folder and keep them even after being moved to the ftproot. What would be the right solution here - one that grants the Cerberus FTP server and the "Utilities" account the minimal permissions needed to access the contents of the ftproot folder?
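
    For reference, the inheritable grant I applied in the GUI is roughly equivalent to this icacls command (the path is as described above; Modify rather than Full Control was my guess at "minimal"):

        icacls C:\ftproot /grant "Utilities":(OI)(CI)M /T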

    Read the article

  • Block users from Social networking websites while firewall is down

    - by SuperFurryToad
    We currently have a SonicWall firewall, which does a pretty good job of blocking social networking websites like Facebook and Bebo. The problem we are having is that sometimes we need to temporarily disable our firewall blocklist so we can update our company's page on Facebook, for example. Whenever we do this, we see an avalanche of users logging on to their Facebook pages during work time. So we need a way to block access while the blocklist is down. For the sake of argument, we have two groups of users - "management" and "standard users". "Standard users" would have no access to Facebook, but "management" users would. Perhaps something like a hosts-file redirect for non-management users. This could probably be enforced via group policy calling a bat file to copy down the hosts file, depending on whether the user is management or not. I'm keen to hear any suggestions for what the best practice would be for this in a Windows/AD environment. Yes, I know what we're doing here is trying to solve an HR problem using IT. But this is the way management wants it, and we have a lot of semi-autonomous branch offices that we don't have a lot of day-to-day contact with, so an automated way of enforcing this would be the most preferable method.
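
    To make the idea concrete, the bat-file approach I had in mind is roughly this (the UNC path is a placeholder), with a pre-staged hosts file that pins the blocked sites to localhost:

        :: hosts.blocked, staged on a share, containing lines like:
        ::   127.0.0.1 facebook.com
        ::   127.0.0.1 www.facebook.com
        ::   127.0.0.1 bebo.com
        copy /Y \\corp.example.com\netlogon\hosts.blocked %SystemRoot%\System32\drivers\etc\hosts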

    Read the article

  • How to test nginx proxy timeouts

    - by mkorszun
    Target: I would like to test all Nginx proxy timeout parameters in a very simple scenario. My first approach was to create a really simple HTTP server and put in some sleeps:

    - between listen and accept, to test proxy_connect_timeout
    - between accept and read, to test proxy_send_timeout
    - between read and send, to test proxy_read_timeout

    Test:

    1) Server code (Python):

        import socket
        import os
        import time
        import threading

        def http_resp(conn):
            conn.send("HTTP/1.1 200 OK\r\n")
            conn.send("Content-Length: 0\r\n")
            conn.send("Content-Type: text/xml\r\n\r\n\r\n")

        def do(conn, addr):
            print 'Connected by', addr
            print 'Sleeping before reading data...'
            time.sleep(0)  # Set to test proxy_send_timeout
            data = conn.recv(1024)
            print 'Sleeping before sending data...'
            time.sleep(0)  # Set to test proxy_read_timeout
            http_resp(conn)
            print 'End of data stream, closing connection'
            conn.close()

        def main():
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            s.bind(('', int(os.environ['PORT'])))
            s.listen(1)
            print 'Sleeping before accept...'
            time.sleep(130)  # Set to test proxy_connect_timeout
            while 1:
                conn, addr = s.accept()
                t = threading.Thread(target=do, args=(conn, addr))
                t.start()

        if __name__ == "__main__":
            main()

    2) Nginx configuration: I have extended the Nginx default configuration by setting proxy_connect_timeout explicitly and adding a proxy_pass pointing to my local HTTP server:

        location / {
            proxy_pass http://localhost:8888;
            proxy_connect_timeout 200;
        }

    3) Observations:

    - proxy_connect_timeout: even though I set it to 200s and sleep only 130s between listen and accept, Nginx returns 504 after ~60s, which might be because of the default proxy_read_timeout value. I do not understand how proxy_read_timeout could affect the connection at so early a stage (before accept). I would expect 200 here. Please explain!
    - proxy_send_timeout: I am not sure if my approach to testing proxy_send_timeout is correct - I think I still do not understand this parameter. After all, a delay between accept and read does not trigger proxy_send_timeout.
    - proxy_read_timeout: this one seems pretty straightforward; setting a delay between read and send does the job.

    So I guess my assumptions are wrong and I probably do not understand the proxy_connect and proxy_send timeouts properly. Can someone explain them to me using the above test if possible (or modifying it if required)?
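
    A simple way to see exactly when each timeout fires is to time the request from the client side while watching the server's print output:

        time curl -sv http://localhost/ -o /dev/null    # note when the 504 arrives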

    Read the article

  • AviSynth ChangeFPS: combining videos with different framerates

    - by Daniel Saner
    I have two video recordings of the same scene, but with different framerates, that I would like to combine using an AviSynth script. One video is recorded at 30fps, the other at 120fps. What I would like to do is keep them temporally synchronised, meaning that for each frame of the 30fps video, the output should display 4 frames from the 120fps video. I would like the final output video to play at 30fps, so that its duration is 4 times that of the original recordings. From AviSynth's documentation, it seems ChangeFPS is the function I'll need, since it removes and duplicates frames, while AssumeFPS just changes the playback speed (and my plan is basically to quadruple every frame of the 30fps clip). However, the filter does not seem to do what it says. If I try:

        clip30 = AviSource("0326.avi").ChangeFPS(120)
        clip120 = AviSource("0326-120fps.avi")

    it doesn't affect the playback speed or frame count of the 30fps clip at all, but removes every fourth frame from the 120fps clip, which is not at all what I want. Unfortunately, appending .ChangeFPS(7.5) to clip120 instead does not have the inverse effect - in that case, it does exactly what's to be expected. Alternatively, if I try:

        clip30 = AviSource("0326.avi").AssumeFPS(7.5)
        clip120 = AviSource("0326-120fps.avi")

    there is no effect at all; both clips are played back at 30fps, meaning that only a quarter of the 120fps clip has been shown by the time the 30fps clip is over. So how can I combine these two clips in the manner I want? I was unable to find any other internal or external filters that would help me do that. It seems to me that if ChangeFPS did what the manual says, it would be the right one for the job.
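
    In case it helps anyone answering, the combination I would have expected to work from my reading of the docs is to duplicate frames first and then relabel the playback rate - untested, so this is only a sketch of the intent:

        # quadruple every frame of the 30fps clip, then slow playback to 30fps
        clip30 = AviSource("0326.avi").ChangeFPS(120).AssumeFPS(30)
        # slow the 120fps clip to 30fps so every frame is shown
        clip120 = AviSource("0326-120fps.avi").AssumeFPS(30)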

    Read the article

  • Upstart can't determine my process' pid

    - by sirlark
    I'm writing an upstart script for a small service I've written for my colleagues. My upstart job can start the service, but when it does, it only outputs queryqueue start/running; note the lack of a pid, as given for other services.

        #/etc/init/queryqueue.conf
        description "Query Queue Daemon"
        author "---"

        start on started mysql
        stop on stopping mysql
        expect fork
        env DEFAULTFILE=/etc/default/queryqueue
        umask 007
        kill timeout 30

        pre-start script
            #if [ -f "$DEFAULTFILE" ]; then
            #    . "$DEFAULTFILE"
            #fi
            [ ! -f /var/run/queryqueue.sock ] || rm -rf /var/run/queryqueue.sock
            #exec /usr/local/sbin/queryqueue -s /var/run/queryqueue.sock -d -l /tmp/upstart.log -p $PIDFILE -n $NUM_WORKERS $CLEANCACHE $FLUSHCACHE $CACHECONN
        end script

        script
            # Originally this stanza didn't exist at all
            if [ -f "$DEFAULTFILE" ]; then
                . "$DEFAULTFILE"
            fi
            exec /usr/local/sbin/queryqueue -s /var/run/queryqueue.sock -d -l /tmp/upstart.log -p $PIDFILE -n $NUM_WORKERS $CLEANCACHE $FLUSHCACHE $CACHECONN
        end script

        post-start script
            for i in `seq 1 5` ; do
                [ -S /var/run/queryqueue.sock ] && exit 0
                sleep 1
            done
            exit 1
        end script

    The service in question is a Python script which, when run without error, forks using the code below right after checking command-line options and basic environmental sanity, so I tell upstart to expect fork:

        pid = os.fork()
        if pid != 0:
            sys.exit(0)

    The script is executable and has a Python shebang. I can send the TERM signal to the process manually, and it quits gracefully. But running stop queryqueue claims queryqueue stop/waiting while the process is still alive and well. Also, its logs indicate it never received the kill signal. I'm guessing this is because upstart doesn't know which pid it has. I've also tried expect daemon and leaving the expect clause out entirely, but there's no change in behaviour. How can I get upstart to determine the pid of the exec'd process?
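
    For what it's worth, one diagnostic that might help answers along is counting how many times the daemon actually forks, since my understanding is that expect fork expects exactly one fork and expect daemon exactly two (strace must be installed; arguments abbreviated, socket path hypothetical):

        strace -f -e trace=clone,fork /usr/local/sbin/queryqueue -s /tmp/test.sock -d 2>&1 | grep -c clone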

    Read the article

  • Any program to help me check whether an Ethernet channel can support a full-length VLAN packet?

    - by Jimm Chen
    Sometimes I face a situation where I need to know, quickly and explicitly, whether a full-length VLAN packet can traverse between two RJ45 ports - that is, an 802.1Q Ethernet frame with EtherType 0x8100. What I can do now is: get two Windows PCs; in each PC, install an Intel Gigabit NIC and the Intel-specific driver to create a virtual NIC with VLAN ID 3 assigned; connect the two PCs to the two RJ45 ports; and finally execute ping to generate a full-length Ethernet packet:

        ping -f -l 1472 <dest-IP>

    This way, I can be sure that the sent packet has the maximum "IP data payload" of 1500 bytes (20 bytes of IP header, 8 bytes of ICMP header and 1472 bytes of ICMP data). If the ping gets a reply, I know the Ethernet channel supports full-length VLAN packets. From my experiments, some home switches and broadband routers (e.g. the Linksys WRT54G) do not support full-length VLAN packet switching, so only ping -f -l 1468 succeeds. You see, I have to use an expensive Intel NIC to carry out this test, which is quite inconvenient. Most laptops today do not come with an Intel NIC, and even when one does, Intel has limitations on the models on which its VLAN driver can be installed. So, my question is: is there a small program that can let me send a full-length VLAN packet without installing a dedicated VLAN driver? Or better, a program with a stock feature that does this very job for my situation. Windows programs preferred, Linux solutions welcome. The simpler the program, the better. Thank you.
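
    One approach that might fit is hand-crafting the tagged frame in Scapy, so no VLAN driver is involved (a sketch only - addresses, interface name and VLAN ID are placeholders, and on Windows Scapy needs WinPcap):

        from scapy.all import Ether, Dot1Q, IP, ICMP, Raw, sendp

        # 1472 bytes of ICMP data -> a 1500-byte IP packet inside the 802.1Q frame
        frame = (Ether(dst="ff:ff:ff:ff:ff:ff") / Dot1Q(vlan=3) /
                 IP(dst="192.168.0.2") / ICMP() / Raw(b"X" * 1472))
        sendp(frame, iface="eth0")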

    Read the article

  • How can I report a website that uses the webmail APIs to send spam?

    - by Igoru
    I've signed up for a cool job website that, unfortunately, asks if you want to "invite your friends", and if you say yes, you can give it access to your Gmail contacts to send the invites. However, contrary to what everyone would expect, it doesn't give you a list of whom to invite; instead, it simply sends spam directly to your entire contact list, like an old-fashioned Outlook virus. When you complain about this, they simply say "we will check the application and see if there is anything that might be confusing for the users". For me and some friends (who fell for the same prank), this is a clear break with web best practices and a big disrespect of users' trust. So I would like to know what we can do to stop the website from using the Gmail/Yahoo/Outlook APIs to send spam this way. P.S.: I wonder what would have happened if I'd given this website access to post to my Facebook timeline as well. I've had a couple of calls from relatives asking about the email, and I wonder how many unrelated people got this spam, like HR addresses from my past and whatnot.

    Read the article

  • Server crash diagnosis... are there any 'black box recorder' style programs available?

    - by columbo
    My Red Hat server is crashing every three weeks or so at around 4:15 am, originally on Sunday mornings (though the last two crashes have been on Thursday mornings at around 4:15). Looking at the logs (mysql, httpd, messages), there are no clues as to why; they just seem to stop. I ran a little script to take memory readings every 15 minutes, and it too stops (with normal readings) at this time. The server is remote, at a provider, so I can only access it via the web. I use Plesk. It appears to be a scheduled job or something similar that is causing the issue, but I can see nothing in crontab. So my question is: has anyone else had this, and can you offer advice? Failing that, does anyone know of a way to get more detailed logging than that offered by the messages file? I was thinking of a black-box-style recording program, or maybe something as simple as an option somewhere to increase the level of reporting in the messages log. Thanks
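
    A possible starting point, assuming the sysstat package is available for this Red Hat release: it samples system activity every few minutes via cron, which is about the closest thing to a built-in flight recorder, and the run-parts schedule in /etc/crontab is worth comparing against the 4:15 pattern:

        yum install sysstat        # then: chkconfig sysstat on; service sysstat start
        sar -r                     # memory usage for today, sample by sample
        sar -q                     # load averages around the crash window
        cat /etc/crontab           # when do cron.daily / cron.weekly actually fire?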

    Read the article

  • Cisco 7206 error trying to copy running-config (Bad file number)

    - by jasondewitt
    I have a Cisco 7206 that terminates a bunch of PPPoA sessions for DSL users. Today I noticed that if I try to "show run", nothing happens - it doesn't show anything and just sends me right back to the command prompt. I decided I should probably try to back up the config, and that is where I'm stuck. Any time I try to copy the running-config to TFTP, or to a PCMCIA card that I know is not full, I get the following error:

        %Error opening system:/running-config (Bad file number)

    I get this error whenever I try to do anything with the running config. I've been googling around, but I haven't found anything else that talks about this error. I've seen people say to erase the nvram and then try "copy run start", but I don't want to erase the nvram until I can pull off a copy of the running-config. I would try rebooting it, but the startup-config in the nvram looks to be woefully out of date (good job, me!). Any ideas what might be wrong, or how I can get the running config off the router?
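
    For anyone diagnosing along, these are the checks I can run without touching nvram (standard IOS commands; the TFTP host below is a placeholder):

        show version                    ! uptime, IOS version, installed memory
        show memory summary             ! is the box out of free memory?
        dir system:                     ! does running-config appear in the listing at all?
        copy system:running-config tftp://192.0.2.1/7206-backup.cfg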

    Read the article

  • ubuntu 10.04 + php + postfix

    - by mononym
    I have a server running Ubuntu 10.04, PHP 5.3.5 (FPM) and Nginx. I have installed Postfix and set it to loopback-only (I only need to send). The problem is that it is not sending. If I issue, at the command line:

        echo "testing local delivery" | mail -s "test email to localhost" [email protected]

    I get the email no problem, but mail sent through PHP does not arrive. When I send it via PHP, mail.log shows:

        Mar 28 10:15:04 host postfix/pickup[32102]: 435EF580D7: uid=0 from=<root>
        Mar 28 10:15:04 host postfix/cleanup[32229]: 435EF580D7: message-id=<20120328091504.435EF580D7@FQDN>
        Mar 28 10:15:04 host postfix/qmgr[32103]: 435EF580D7: from=<root@FQDN>, size=1127, nrcpt=1 (queue active)
        Mar 28 10:15:04 host postfix/local[32230]: 435EF580D7: to=<root@FQDN>, orig_to=<root>, relay=local, delay=3.1, delays=3/0.01/0/0.09, dsn=2.0.0, status=sent (delivered to maildir)
        Mar 28 10:15:04 host postfix/qmgr[32103]: 435EF580D7: removed

    Any help appreciated. My main.cf file:

        smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU)
        biff = no

        # appending .domain is the MUA's job.
        append_dot_mydomain = no

        # Uncomment the next line to generate "delayed mail" warnings
        #delay_warning_time = 4h

        readme_directory = no

        # TLS parameters
        smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
        smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_use_tls=yes
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache

        # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for
        # information on enabling SSL in the smtp client.

        myhostname = FQDN
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        myorigin = /etc/mailname
        #myorigin = $mydomain
        mydestination = FQDN, localhost.FQDN, , localhost
        relayhost = $mydomain
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = loopback-only
        virtual_alias_maps = hash:/etc/postfix/virtual
        home_mailbox = mail/
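
    A check that might be relevant: the log above shows the PHP mail being accepted and delivered locally to root's maildir (status=sent) rather than going missing, so it is worth confirming which sendmail binary and flags PHP is actually invoking (a sketch; the typical value is shown as a comment):

        php -i | grep sendmail_path
        # expect something like: sendmail_path => /usr/sbin/sendmail -t -i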

    Read the article

  • VMM 2012 Adding Hosts in Trusted Forest

    - by Steve Evans
    I have two forests with a two-way trust between them. VMM 2012 sits in ForestA, and I can discover hosts in ForestA with no issue. When I try to discover hosts in ForestB, I hit one of two issues:

    1. If I go through the GUI, or use PowerShell just as I normally do, I get the following error on the job:

        Error (10407)
        Virtual Machine Manager could not query Active Directory Domain Services.

        Recommended Action
        Verify that the domain name and the credentials, if provided, are correct and then try the operation again.

       It doesn't matter which account I use; I've tried accounts from both forests, with Admin/Domain Admin permissions all over the place, etc.

    2. Going through the GUI (I can't find the switch in PowerShell to duplicate this), I check the "Skip AD Verification" box, and it causes the GUI to crash during discovery.

    I found an article (http://technet.microsoft.com/en-us/library/gg610641.aspx) that describes how to add a host in a disjoint namespace (even though that doesn't apply to me), and it says that VMM creates an SPN if one does not exist. So I verified that the correct SPNs exist in ForestB; that did not help. I have a case open with PSS, but they are stuck. I have VMM traces if anyone would like to see them. Any suggestions or ideas?
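
    For completeness, this is roughly how I verified the SPNs and the trust from the VMM server (standard AD tools; host and domain names below are placeholders):

        setspn -Q HOST/hyperv01.forestb.example.com
        nltest /domain_trusts
        nltest /dsgetdc:forestb.example.com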

    Read the article

  • RAID 5 with hot spare or RAID 10 with no hot spare?

    - by Boden
    Yes, this is one of those "do my job for me" questions - have some pity. :) I'm at the limit of what I can do with the number of hard drives in this server without spending a substantial amount of money. I have four drives left to configure, and I can either set them up as a RAID 5 with a dedicated hot spare, or a RAID 10 with no hot spare. The usable size of each will be the same, and RAID 5 will offer enough performance. I'm RAID 5-shy, but I also don't like the idea of running without a hot spare. I'm not so concerned about degraded performance as about the amount of time the system is without adequate redundancy. The server and drives are under a 13x5, 4-hour-response contract (although I happen to know that the nearest service provider is at least 2-3 hours away by car in the winter). I should note that the server also has two RAID 1 arrays, which would also be protected by the hot spare. Why don't they make drive cages with 9 bays! Heh.

    Read the article

  • Google bots are severely affecting site performance

    - by Lynn
    I have an aggregate site on a Linux server that pulls in feeds from a universe of about 2,000 blogs. It's on WordPress 3.4.2, and I have a cron job, staggered to run five times an hour on another server, that pulls in the stories and then publishes them to the front page of the site - this is so I don't put too much pressure on one server. However, the Google bots, which visit a few times every hour, bring the server to its knees in the mornings and evenings when there is an increase in traffic on the site. The bots have something like 30,000 links to follow at this point. How do I throttle the bots so they simply grab the new stories off the front page and stop there?

    EDIT - details of my server configuration: The server that handles all the publishing is an unmanaged instance on AWS. It mounts the NFS server and connects to the RDS to update content, etc. You get to this publishing instance via a plugin that detects the wp-admin link and then redirects you there. The front-end app server also mounts the NFS and requests data from the RDS; it is the only one that has WP Super Cache on it. The OS on the app server is Ubuntu, and the NFS runs CentOS. The front end is Nginx, and the publishing server is Apache.
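
    If robots.txt suggestions are on the table, blocking crawls of the archive/pagination URLs is the kind of thing I'm wondering about (the paths below are guesses at typical WordPress permalinks; note that Googlebot ignores Crawl-delay, so that line would only affect other bots):

        User-agent: *
        Crawl-delay: 10
        Disallow: /page/
        Disallow: /tag/
        Disallow: /author/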

    Read the article

  • Deploy our own software using Puppet?

    - by Ken
    (Apologies in advance for the stupidity in this question. I'm normally a programmer, not a sysadmin, but I've taken it upon myself to automate some things and clean up some other things which are automated but not in the prettiest way. :-) I've been looking around at various tools for automating software deployment to a bunch of servers, like cfengine, Puppet, and Chef. So far, Puppet looks the most appealing, but I've certainly not committed to anything yet. These tools all look like they can do a great job of keeping a bunch of servers up to date with prepackaged software. What I don't get is: how does one use a tool like Puppet to manage deployments of our own internal software? I think I'm at a loss because I've seen a thousand tutorials showing how to keep Apache ensure => latest (which is pretty cool), but nothing that quite corresponds to my use case today, which is something more like:

    1. a human being pushes The Button
    2. pull branch A from the version-control repository B
    3. run command C to compile it
    4. copy the binaries D to servers E1 through E10
    5. on each server, run command F to make all changes take effect

    Puppet sounds great, and I totally see the advantage of declarative, idempotent configuration over some shell scripts, but I've not seen any tutorials for "you want to update your shell scripts to Puppet (or Chef, or cfengine), so here's what you should do". Is there such a thing? Is it obvious to other people how to take the things provided in the Puppet docs and replicate the behavior I want? Am I just not getting it? What it sounds like to me, so far, is that the human being (#1) would manually package the software (#2 and #3) outside of Puppet, then manually update the Puppet config, which would trigger Puppet to update the servers... maybe? (I'm a little confused here, as I'm sure you can tell.) Thanks!
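
    To make the question concrete, the closest shape I can imagine from the tutorials I've read is something like this - a sketch only, with every name made up, and it assumes the compile step (#2/#3) already happened outside Puppet and produced a package in an internal repository:

        # module sketch: deploy our internally built package and restart the service
        class myapp {
          package { 'myapp':
            ensure => latest,   # picks up each new build from our internal repo
          }
          service { 'myapp':
            ensure    => running,
            subscribe => Package['myapp'],   # restart when the package changes
          }
        }

    Is that the intended pattern, or is a file/exec route more usual for in-house code?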

    Read the article
