Search Results

Search found 30312 results on 1213 pages for 'tactial test team'.


  • Adding Thunderbird-stable repository gives "can't find signing_key_fingerprint" error

    - by EBV2010
    I'm trying to install Thunderbird 11 on Kubuntu 10.04. I was able to do it on the machine I'm working on. To get a clean process that I can roll out to other clients, I re-installed the machine and repeated the process. This is what I did (sudo omitted for clarity):

        add-apt-repository ppa:ubuntu-mozilla-security/ppa
        apt-get update
        add-apt-repository ppa:mozilla-team/thunderbird-stable

    The last command resulted in this error:

        Error: can't find signing_key_fingerprint at https://launchpad.net/api/1.0/~mozilla-team/+archive/thunderbird-stable

    The machine as it was before re-installation gave no such message, and it was built from the same sources. Bottom line: I got Thunderbird 11.0 to run on Kubuntu 10.04, but after re-installation, adding the repository fails with this error and the PPA won't be added. Is there a way to solve the signing_key_fingerprint error?
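
    One workaround worth trying (a sketch, assuming the failure is only in the add-apt-repository helper and not in the PPA itself): add the source line by hand and import the PPA's signing key from the Ubuntu keyserver. The key ID below is a placeholder; the real one is shown on the PPA's Launchpad page.

        # run as root; KEY_ID is hypothetical -- read the actual ID off the PPA's Launchpad page
        echo "deb http://ppa.launchpad.net/mozilla-team/thunderbird-stable/ubuntu lucid main" \
            >> /etc/apt/sources.list.d/thunderbird-stable.list
        apt-key adv --keyserver keyserver.ubuntu.com --recv-keys KEY_ID
        apt-get update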


  • Unable to record using Jmeter

    - by krish
    Hi, I am trying to record an HTTP web page using JMeter 2.3.3. I have set up the JMeter proxy and tried, but it didn't work. I followed these steps: launched JMeter 2.3.3 and added a Thread Group to the Test Plan; under WorkBench > Add > Non-Test Elements, added an HTTP Proxy Server; set the proxy server to port 9090, target "Use Recording Controller", grouping "Do not group samplers", type "HTTP Request", and checked all the boxes under the HTTP sampler settings; saved the settings. Then, in the browser (IE 7.0 or Firefox 3.0.16), under connection settings, I set up a manual proxy of localhost with port 9090 (no auto-detect settings, only the manual proxy) and saved. Back in JMeter, I started the HTTP proxy server, opened a browser, and hit the web page that needs to be tested. The page does not open. In fact, because of the changes made to the browsers, no pages open at all. Whenever I hit a page it is recorded in JMeter, but without the page opening, how can I test? My work is blocked on this, so a quick answer would be much appreciated.
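
    A quick way to separate browser configuration from the proxy itself (a sketch; the URL is just an example) is to send a request through the JMeter proxy from the command line and see whether a response comes back:

        # if this hangs or errors, the problem is in the JMeter proxy, not the browser
        curl -x localhost:9090 http://example.com/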


  • Symantec NetBackup restore - Incremental backup

    - by w0051977
    We are using NetBackup as a corporate solution. Incremental backups are taken daily during the week, and a full backup is done at the weekend (Saturday). My colleague has restored a folder to how it stood at 14:00 on a Tuesday. The problem is that the restore pulls in files from the weekend backup even if they did not exist at the point in time being restored. For example, the folder we are restoring should look like this (this is how it looked on Tuesday at 14:00):

        Folder1 (folder name)
            Test.txt
            Test1.txt
            Test2.txt

    The folder looked like this at the weekend when the full backup ran (Test3.txt still existed at that point):

        Folder1 (folder name)
            Test.txt
            Test1.txt
            Test2.txt
            Test3.txt

    The actual restored folder looks like this:

        Folder1 (folder name)
            Test.txt
            Test1.txt
            Test2.txt
            Test3.txt

    Test3.txt should not have been restored, because it did not exist at the point in time of the restore. Is there a setting somewhere that we are missing? The folder in question is 200 GB; the example above is simplified. I realise this is a basic question.


  • Acer recovery disks not bootable?

    - by user13743
    We got a new Acer laptop with Vista installed at work. As it was getting ready to go out in the field, we wanted to do a burn-in test on it, so we made the recovery DVDs before we ran the test. Part of the burn-in was bonnie++, which does a destructive read/write test of the hard drive. The machine passed with flying colors, but when we tried to boot from the recovery DVD to begin re-installing the system, the machine eventually fell back to attempting a PXE boot. After some googling, it appears these 'recovery' disks expect a certain recovery partition to exist on the hard drive, are in fact not bootable at all, and are useless in the absence of the recovery partition. Is this the case, and is this "The Way Things Are" with all PC manufacturers and Windows Vista+ nowadays? How do I get my hands on actual bootable DVDs? I've emailed Acer support. I see an option on their site to purchase recovery disks, but I suspect these are the same non-bootable disks that I burned on the new system. Will Acer provide actual boot disks?


  • Sendmail Alias for Nonlocal Email Account

    - by Mark Roddy
    I admin a server running a number of web applications for a software dev team (source control, bug tracking, etc.). The server runs sendmail solely as a transport to the departmental email server, over which I have no control. Someone who is still in the department is no longer on the dev team, so I need to configure the transport agent to redirect all outgoing email addressed to them (which would be coming from these applications) to the person who has taken their place. I added an entry in /etc/aliases like this:

        [email protected]: [email protected]

    But when I run /etc/init.d/sendmail newaliases I get the following error:

        /etc/mail/aliases: line 32: [email protected]... cannot alias non-local names

    So clearly I'm doing something I shouldn't. Is there a way to get aliases to work with non-local names, or alternatively is there a way to accomplish my goal of redirecting outgoing mail for this user to another one? Technical specs, if they matter: Ubuntu 6.06, sendmail 8.13 (Ubuntu-provided package).
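
    The aliases file only accepts local names on its left-hand side, so a common approach (a sketch, assuming the sendmail build has the virtusertable feature and the departmental domain is declared as a virtual-user domain via VIRTUSER_DOMAIN in sendmail.mc; all addresses below are hypothetical) is to do the rewrite in /etc/mail/virtusertable instead, whose left side may carry a domain part:

        # /etc/mail/virtusertable
        olduser@dept.example.com    replacement@dept.example.com

    Then rebuild the map and restart sendmail:

        makemap hash /etc/mail/virtusertable < /etc/mail/virtusertable
        /etc/init.d/sendmail restart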


  • How to create a new background process in a KSH "while read" loop?

    - by yael
    The following test script has a problem. When I add the line ( sleep 5 ) & to the script, the "while read" loop does not read all lines from the file but only prints the first line. When I remove the ( sleep 5 ) & from the script, it prints all lines as defined in the file. Why does the ( sleep 5 ) & cause this, and how can I solve it? I want to create a new process (the sleep is just an example) inside the while loop:

        $ more test
        #!/bin/ksh
        while read -r line ; do
            echo Read a line:
            echo $line
            ( sleep 5 ) &
            RESULT=$!
            echo Started background sleep with process id $RESULT
            sleep 1
            echo Slept for a second
            kill $RESULT
            echo Killed background sleep with process id $RESULT
        done < file
        echo Completed

    On my Linux, with the following contents of file:

        $ more file
        123 aaa
        234 bbb
        556 ccc

    ...running ./test just gives me:

        Read a line:
        123 aaa
        Started background sleep with process id 4181
        Slept for a second
        Killed background sleep with process id 4181
        Completed
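
    A commonly suggested workaround (a sketch, built on the assumption that the background child is inheriting, and disturbing, the loop's redirected stdin) is to give the loop its own file descriptor and detach the child's stdin:

        #!/bin/ksh
        # read the file on fd 3 so background children cannot touch the loop's input
        while read -r -u3 line ; do
            echo "Read a line: $line"
            ( sleep 5 ) < /dev/null &   # child's stdin detached as well
            RESULT=$!
            sleep 1
            kill $RESULT
        done 3< file
        echo Completed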


  • How to set up a file server in a restricted corporate environment

    - by Emilio M Bumachar
    I work in a big corporation, and the disk space my team gets on the corporate file server is so low that I am considering turning my work PC into a file server. I ask this community for links to tutorials, software suggestions, and advice in general about how to set it up. My machine is an Intel Core2Duo E7500 @ 3GHz with 3 GB of RAM, running Windows XP Service Pack 3. Upgrading, formatting, or installing another OS is out of the question, but I do have Administrator privileges on the PC, and I can install programs (at least for now). A lot of security software I don't even know about is, and must remain, installed. I only need communication within the corporate network, which is not restricted. People have usernames (logins) on the corporate network, and I need to use them to restrict access: simply put, I have a list of logins of team members, and only people on the list should access the files. I have about 150 GB of free disk space and am thinking of allocating 100 GB to the team's shared files. I plan monthly backups onto co-workers' machines of the same configuration, but automation of backups is a nice, unnecessary feature: it's totally acceptable for me to manually copy the contents to a different machine once a month. Uptime is important, as everyone would use these files in their daily work. I have experience as a Python and C programmer, but no experience whatsoever as a sysadmin, and almost none of my programming experience is network programming. I'm a complete beginner at this. Thanks in advance for any help. EDIT: I honestly appreciate all the warnings, I really do, but what I plan to make available is mostly stuff that now sits solely on DVDs, just for space reasons. It's 'daily work' to read them, but 'daily work write' files will remain on the corporate server. As for the importance of uptime, I think I overstated it: a few outages are OK; it's already an improvement over getting the DVDs. As for policy, my manager is kind of on my side, and I will confirm that before making my move. As for getting more space through the proper channels, well, that was Plan A, and it's still on the table... but I don't have much hope. I'm not as "core business" as I'd like.
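
    Since the machine is on the corporate domain, plain Windows file sharing may already be enough (a sketch; the share name, path, and account names are hypothetical): create the share, then restrict it with NTFS permissions granting only the listed team logins.

        rem create the share (run in cmd.exe as Administrator)
        net share TeamFiles=D:\TeamFiles

        rem grant each listed user Change access via NTFS ACLs (XP ships cacls)
        cacls D:\TeamFiles /E /G DOMAIN\jdoe:C
        rem and remove the broad default grant
        cacls D:\TeamFiles /E /R Everyone

    One caveat worth knowing: XP Professional caps inbound connections at 10 concurrent sessions, which should still be workable for a six-person team.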


  • SSH tunnel for socks5 proxy is slow with concurrent load

    - by RawwrBag
    I SSH to a remote AWS server from Ubuntu, using ssh's port forwarding capabilities. I have tried forwarding a dynamic port (ssh -D) and a single port (ssh -L, with dante running as a remote SOCKS server); both are equally slow. I also tried different ciphers (ssh -c). Concurrent TCP connections pretty much do not work. For example, I can go to speedtest.net and start a test (which is fairly fast, and probably maxes out my line speed), but if I try to do anything else (i.e. load google.com) while the test is still running, all the additional connections seem to hang until the speed test is over. I realize OpenSSH is single-threaded; is this the problem? It doesn't even show up in my top, and the same goes for sshd on the remote server: no processor hit. Is there any way to bump ssh performance, or should I step up to OpenVPN or something better suited for this?
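
    One way to tell whether the SSH transport itself is the bottleneck (a sketch; assumes pv is installed on the client and that user/host are filled in) is to measure raw throughput over a bare SSH session, outside any forwarded port:

        # pushes zeros through the encrypted channel and reports sustained throughput
        ssh -c aes128-ctr user@server 'cat /dev/zero' | pv > /dev/null

    If this saturates the line while the SOCKS side still stalls under concurrent load, the limit is more likely the forwarding path itself: all forwarded connections are multiplexed over the one TCP session, so a bulk transfer on one channel can starve the others.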


  • remote desktop connection help?

    - by robin agrahari
    Sir, I have created a web application in Eclipse; the DB2 database server is running on my PC, so I can access the web application through the address http://localhost:8080/LimsWeb. I used the TeamViewer software to establish a remote desktop connection. Can my friend somehow connect to my PC so that when he types the above URL in his browser he will be able to fetch the pages of the application? I was able to do this by connecting in remote desktop mode, but in that case my friend was using the application running on my PC, and only inside the window provided by TeamViewer. I want him to be able to run the application on his own computer, calling the given URL from his own browser. Please help.


  • Mail not piping in postfix

    - by user220912
    I have set up a postfix server and wanted to test piping mail to my Perl script, where I can make use of it and filter the mails. As a first step I wrote a test script (PHP) that just logs a line to a text file, but I don't see any change when I send a mail. My postconf -n output:

        alias_database = hash:/etc/aliases
        append_dot_mydomain = no
        command_directory = /usr/sbin
        config_directory = /etc/postfix
        daemon_directory = /usr/libexec/postfix
        data_directory = /var/lib/postfix
        debug_peer_level = 2
        debugger_command = PATH=/bin:/usr/bin:/usr/local/bin:/usr/X11R6/bin ddd $daemon_directory/$process_name $process_id & sleep 5
        html_directory = no
        inet_interfaces = all
        inet_protocols = all
        mail_owner = postfix
        mailbox_size_limit = 0
        mailq_path = /usr/bin/mailq.postfix
        manpage_directory = /usr/share/man
        mydestination = yantratech.co.in, localhost.localdomain, localhost
        myhostname = tcmailer8.in
        mynetworks = 103.8.128.62, 103.8.128.69/101, 168.100.189.0/28, 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        myorigin = $mydomain
        newaliases_path = /usr/bin/newaliases.postfix
        queue_directory = /var/spool/postfix
        readme_directory = /usr/share/doc/postfix-2.6.6/README_FILES
        recipient_delimiter = +
        relayhost =
        sample_directory = /usr/share/doc/postfix-2.6.6/samples
        sendmail_path = /usr/sbin/sendmail.postfix
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        smtpd_tls_cert_file = /etc/pki/tls/certs/tcmailer8.in.cert
        smtpd_tls_key_file = /etc/pki/tls/private/localhost.key
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtpd_use_tls = yes
        transport_maps = hash:/etc/postfix/transport
        virtual_alias_maps = hash:/etc/postfix/virtual
        virtual_gid_maps = static:5000
        virtual_mailbox_base = /home/vmail
        virtual_mailbox_domains = /etc/postfix/vhosts
        virtual_mailbox_maps = hash:/etc/postfix/vmaps
        virtual_minimum_uid = 1000
        virtual_uid_maps = static:5000

    Here's my transport map:

        [email protected]    email_route

    My main.cf declaration:

        transport_maps = hash:/etc/postfix/transport

    My master.cf declaration:

        email_route unix - n n - - pipe
          flags=FR user=nobody argv=/etc/postfix/test.php -f $(sender) -- $(recipient)

    And my PHP script:

        #!/usr/bin/php
        <?php
        $fh = fopen('/etc/postfix/testmail.txt','a');
        fwrite($fh, "Hello it works\n");
        fclose($fh);
        ?>

    I am sending mails through telnet on localhost.
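
    Two mechanical things are easy to miss with pipe transports (assumptions worth checking, since the post doesn't say): the script must be executable by the pipe's user, and the transport map must be recompiled and postfix reloaded after editing. Also, user=nobody usually cannot create files under /etc/postfix, so pointing the test output at a world-writable location such as /tmp would rule that out as the silent failure.

        chmod 755 /etc/postfix/test.php
        postmap /etc/postfix/transport
        postfix reload
        # watch for pipe/permission errors during a delivery
        # (log path varies by distro: maillog on RHEL-style systems)
        tail -f /var/log/mail.log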


  • How to make an x.509 certificate from a PEM one?

    - by Ken
    I'm trying to test a script, locally, which involves uploading a file using a Java-based program to a FileZilla FTPES server. For the real thing, there is a real certificate on the FZ server, and the upload step (tested alone) seems to work fine. I've installed FileZilla Server on my dev box (so it'll test uploading from localhost to localhost). I don't have a real certificate for it, of course, so I used the "Generate new certificate..." button in FZ. It works fine from an interactive FTPES program (as long as I OK the unknown cert), but from my Java program it throws a javax.net.ssl.SSLHandshakeException ("unable to find valid certification path to requested target"). So how do I tell Java that this certificate is OK with me? (I know there's a way to change the Java program to accept any certificate, but I don't want to go down that route. I want to test it just as it will happen in production, and I don't want to ignore unknown certificates in production.) I found that Java has a program called "keytool" that seems to be for managing this sort of thing, but it complains that the certificate file that FZ generated is not an "x.509" file. A posting from the FZ side said it was "PEM encoded". I have "openssl" here, which looks like it's perfect for converting between certificate formats, but I think my understanding of certificate formats is wrong because I'm not seeing anything obvious. My knowledge of security certificates is a bit shaky, so if my title is stupidly wrong, please help by fixing that. :-)
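
    If keytool rejects the file, converting the PEM certificate to DER with openssl and importing that into a truststore is one route (a sketch; the file and alias names are made up, and if FZ wrote the private key into the same PEM file you would first cut out just the CERTIFICATE block):

        openssl x509 -in fzserver.pem -outform DER -out fzserver.der
        keytool -import -alias fztest -file fzserver.der -keystore testtrust.jks

    Then point the Java program at that truststore, e.g. with -Djavax.net.ssl.trustStore=testtrust.jks, which keeps the production behavior (unknown certs still rejected) while trusting this one test cert.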


  • Google Drive desktop client not updating existing files from other users

    - by cqm
    I've looked around, and there doesn't really seem to be any troubleshooting information for the Google Drive desktop client; it all assumes you are using Google Docs on the web. Anyway, my team is trying to use Google Drive like Dropbox, with multiple people editing files shared among them through the desktop, such as images. Dropbox is really good at noticing when a file's checksum changes and syncing it. Google Drive's desktop client seems not to do this at all: it appears to sync only newly created files, gives no notification that a modified version exists, and never syncs the modification, even though going online and opening the file shows the modified version. Is there any way to fix this? And the answer has nothing to do with proxy or firewall configurations. The team is using computers running OS X and Windows.


  • Could not find codec parameters in ffmpeg in Windows

    - by Grienders
    While trying to convert WMV to animated GIF using ffmpeg on Windows 7, I ran into an issue:

        Microsoft Windows [Version 6.1.7600]
        Copyright (c) 2009 Microsoft Corporation. All rights reserved.

        C:\>ffmpeg -i test.wmv test.gif
        ffmpeg version N-39877-g4fa706a Copyright (c) 2000-2012 the FFmpeg developers
          built on Apr 16 2012 14:57:12 with gcc 4.6.3
          configuration: --enable-gpl --enable-version3 --disable-w32threads
            --enable-runtime-cpudetect --enable-avisynth --enable-bzlib --enable-frei0r
            --enable-libass --enable-libcelt --enable-libopencore-amrnb
            --enable-libopencore-amrwb --enable-libfreetype --enable-libgsm
            --enable-libmp3lame --enable-libnut --enable-libopenjpeg --enable-librtmp
            --enable-libschroedinger --enable-libspeex --enable-libtheora
            --enable-libutvideo --enable-libvo-aacenc --enable-libvo-amrwbenc
            --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxavs
            --enable-libxvid --enable-zlib
          libavutil      51. 46.100 / 51. 46.100
          libavcodec     54. 14.101 / 54. 14.101
          libavformat    54.  3.100 / 54.  3.100
          libavdevice    53.  4.100 / 53.  4.100
          libavfilter     2. 70.100 /  2. 70.100
          libswscale      2.  1.100 /  2.  1.100
          libswresample   0. 11.100 /  0. 11.100
          libpostproc    52.  0.100 / 52.  0.100
        [asf @ 0000000001f3ead0] Could not find codec parameters (Video: none (MTS2 / 0x3253544D), 800x400, 30000 kb/s)
        test.wmv: could not find codec parameters

    What does this mean, and how can I solve it?
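
    The telling part is the FourCC: the video stream is tagged MTS2, and "Video: none" means this ffmpeg build finds no decoder for it. MTS2 appears to be a Microsoft screen-capture codec (an assumption based on the FourCC, not something the log states). Confirming what the file actually contains is a one-liner:

        ffprobe test.wmv

    If the stream really is in an unsupported codec, re-exporting the source from the capture tool as a plain WMV3 or H.264 file and then re-running the same ffmpeg command sidesteps the problem.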


  • CIFS Mounting Permissions

    - by malco
    I have an issue that I'm going round in circles with; I hope you can help. The setup:

        Server 1 (CIFS client) - CentOS 6.3, AD-integrated using Samba/Winbind and idmap_ad
        Server 2 (CIFS server) - CentOS 6.3, AD-integrated using Samba/Winbind and idmap_ad

    All users (apart from root) are AD-authenticated, and this, including groups etc., works happily. What's working: I have created a share on Server 2:

        [share2]
            path = /srv/samba/share2
            writeable = yes

    Permissions on the share:

        drwxrwx---. 2 root domain users 4096 Oct 12 09:21 share2

    I can log into a Windows machine as user5 (a member of domain users) and everything works as it should; for example, if I create a file it shows the correct permissions and attributes on both the MS and the Linux sides. Where I fall down: I mount the share on Server 1 using:

        # mount //server2/share2 /mnt/share2/ -o username=cifsmount,password=blah,domain=blah

    Or using fstab:

        //server2/share2 /mnt/share2 cifs credentials=/blah/.creds 0 0

    This mounts fine, but if I su, or log onto Server 1 as a normal user (say user5), and try to create a file I get:

        # touch test
        touch: cannot touch `test': Permission denied

    Then if I check the folder, the file was created, but as the cifsmount user:

        -rw-r--r--. 1 cifsmount domain users 0 Oct 12 09:21 test

    I can rename, delete, move or copy stuff around as user5; I just can't create anything. What am I doing wrong? I'm guessing it's something to do with the mount action, as when I log onto Server 2 as user5 and access the folder locally it all works as it should. Can anyone point me in the right direction?
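
    Because the share is mounted once with the cifsmount credential, every local user acts as cifsmount on the wire, whatever their local UID. One way around that (a sketch, assuming kerberized AD logins and a kernel cifs module with the multiuser option, which CentOS 6.3 has) is a multiuser mount, where each accessing user's own credentials are used for their server session:

        mount -t cifs //server2/share2 /mnt/share2 -o sec=krb5,multiuser

    This needs a working cifs.upcall/request-key setup, and users need a valid Kerberos ticket (from kinit, or obtained at login via pam) when they touch the mount.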


  • Email delivering but not receiving from Gmail

    - by Karthik Malla
    These are the DNS settings of my email server:

        65.75.241.26 / 24           PTR       softmail.me.
        accs.softmail.me.           A         65.75.241.26
        beta.softmail.me.           A         65.75.241.26
        ftp.softmail.me.            CNAME     softmail.me.
        lists.softmail.me.          CNAME     softmail.me.
        mail.softmail.me.           A         65.75.241.26
        mssql.softmail.me.          A         65.75.241.26
        ns.softmail.me.             A         65.75.241.26
        sitebuilder.softmail.me.    A         65.75.241.26
        softmail.me.                NS        ns.softmail.me.
        softmail.me.                A         65.75.241.26
        softmail.me.                MX (10)   mail.softmail.me.
        test.softmail.me.           A         65.75.241.26
        webmail.softmail.me.        A         65.75.241.26
        www.accs.softmail.me.       CNAME     accs.softmail.me.
        www.beta.softmail.me.       CNAME     beta.softmail.me.
        www.softmail.me.            CNAME     softmail.me.
        www.test.softmail.me.       CNAME     test.softmail.me.

    Incoming and outgoing mail goes through softmail.me on ports 25/110. With this setup I am able to send emails but unable to receive them. Can anyone tell me where the problem lies?
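
    Two quick outside-in checks (a sketch; run them from a host outside the server's own network) usually narrow this down: confirm the MX record is visible publicly, and confirm something answers on port 25 at that name.

        dig +short MX softmail.me
        telnet mail.softmail.me 25

    If the MX resolves but the telnet never reaches an SMTP banner, the blocker is connectivity (a firewall, or an ISP filtering inbound port 25) rather than DNS.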


  • SVG doesn't work on subdomain - some browsers try to download rather than display

    - by John Catterfeld
    I have a site with a 'development' subdomain, which displays my SVG file exactly as intended. However, when I copy the file across to www, or to any other subdomain (e.g. 'test'), some browsers try to open the file in an external editor, asking me to download it rather than displaying it. For example:

        http://development.mysite.com/test.svg - works
        http://www.mysite.com/test.svg - doesn't work

    The SVG file:

        <?xml version="1.0" standalone="no"?>
        <!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN"
            "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
        <svg xmlns="http://www.w3.org/2000/svg" version="1.1">
            <circle cx="100" cy="50" r="40" stroke="black" stroke-width="2" fill="red" />
        </svg>

    This happens in Firefox, Chrome and Safari; however, IE9 and above display the file as intended. It is Windows hosting, and I have

        <mimeMap fileExtension=".svg" mimeType="image/svg+xml" />

    in my web.config file. My hunch is that there must be some setting on the server which I need my hosting company to make. Can anyone suggest what might cause this issue?
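
    The browsers that fail are the ones honoring the Content-Type header, so comparing the header across the two subdomains (a quick sketch) shows directly whether the www site serves image/svg+xml or falls back to something generic like application/octet-stream:

        curl -I http://development.mysite.com/test.svg
        curl -I http://www.mysite.com/test.svg

    If the types differ, the web.config carrying the mimeMap is probably not being applied to the www site (or a server-level mapping overrides it), which is exactly the kind of setting the hosting company would need to change.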


  • GPO startup script not copying files

    - by marcwenger
    I created a GPO startup script to execute for computers in a specific AD container. The script takes a file from the AD netlogon share and places it in a directory on the computer. Given the right permissions (i.e. myself), I can execute the script just fine and the file copies. But it doesn't work at startup: the file does not copy over from the AD server. The startup script should run as LocalSystem (am I right?). So the question is, why do the files not copy at startup? Could it be permissions of the LocalSystem user? Is reading the registry problematic at startup? Is obtaining files from the AD netlogon folder problematic at startup? Or am I missing it completely? My test machine does have the registry key and local directories described in the script, and I myself have standard user permissions on the test machine. The AD server is Windows 2008; the test client is Windows XP SP3 (and soon to be Windows 7, where I assume permissions issues will be inevitable).

        Dim wShell, fso, oraHome, tnsHome, key, srcDir

        Set wShell = WScript.CreateObject("WScript.Shell")
        Set fso = CreateObject("Scripting.FileSystemObject")
        key = "HKLM\Software\Oracle\Oracle_Home"

        On Error Resume Next
        orahome = wShell.RegRead(key)
        If err.Number = 0 Then
            tnsHome = oraHome + "\" + "network\admin\"
            srcDir = wShell.ExpandEnvironmentStrings("%logonserver%") + "\netlogon\UpdatedFiles\"
            fso.CopyFile srcDir + "file1.ext", tnsHome, true
        End If

    Side note: to ensure that the script is properly deployed, I purposely put some errors in it, and on the next startup the error message appeared. So I know the GPO is deployed properly.
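
    One detail that can bite here (an assumption, since the post doesn't confirm it): %LOGONSERVER% is a per-user variable set at interactive logon, and it is typically not defined in the LocalSystem environment at startup, so ExpandEnvironmentStrings would hand back the literal string and the copy would then fail silently under On Error Resume Next. A minimal variant that avoids it (the domain name is hypothetical):

        ' resolve the source without %logonserver%; the domain-wide
        ' \\domain\netlogon alias lets AD pick a domain controller for you
        srcDir = "\\yourdomain.local\netlogon\UpdatedFiles\"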


  • Linux - Block ssh users from accessing other machines on the network

    - by Sam
    I have set up a virtual machine on my network for uni project development. I have 6 team members, and I don't want them to SSH in and start sniffing my network traffic. I have already set the firewall on my W7 PCs to ignore any connection attempts from the virtual machine, but I would like to go a step further and not allow any network access from the VM to other machines on my network. Team members will access the VM by SSH; the only port forwarded externally is to vm:22. The VM is running in VirtualBox on a bridged network connection, running the latest Debian. If someone could tell me how to do this I would be much obliged.
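
    Since the VM runs Debian, an iptables OUTPUT policy on the VM itself is the usual approach (a sketch, assuming the LAN is 192.168.1.0/24 with the router at 192.168.1.1; adjust to the real addressing):

        # let reply traffic for existing sessions through (e.g. the users' SSH)
        iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        # still allow talking to the router for the forwarded port, DNS, egress
        iptables -A OUTPUT -d 192.168.1.1 -j ACCEPT
        # drop everything else aimed at the local subnet
        iptables -A OUTPUT -d 192.168.1.0/24 -j DROP

    Note this only holds if the team members get unprivileged accounts; anyone with root on the VM can flush the rules. Bridged mode also lets the VM sniff any broadcast traffic it can see, so a NAT or host-only network plus a port forward is the stronger isolation.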


  • Internet Explorer ignoring PHP setcookie() from CentOS server

    - by Hussein Sabbagh
    In the past we used a Windows XAMPP server for an internal website. It worked fine apart from some intermittent issues, so we decided to move to a LAMP server on CentOS. We made the switch today, but it turns out Internet Explorer ignores every attempt I make at saving a cookie. There is no underscore in the URL being used; the URL is actually the same one the XAMPP server used, where I was able to save cookies without any problems. It really doesn't make any sense to me: all of the code is the same, and the only things to change are the version of PHP and the server OS. The website works in all browsers except IE. I can't even make a simple setcookie call work. On a blank test page I use:

        setcookie("test", "test", time()+36000, "/");
        sleep(5);
        print_r($_COOKIE);

    and there is nothing there. Our users can't log into the website because of this, and I have no idea what the issue is. If anyone can provide any clues or resolutions I would greatly appreciate it. Obviously the easy answer is to not use IE, but that is not an option in this case.
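
    Two things worth checking here. First, $_COOKIE only reflects what the browser sent with the current request, so even a successful setcookie() won't show up until the page is loaded a second time; the sleep(5) doesn't change that. Second (an assumption, but a classic cause of IE-only cookie loss right after a server move): if the new server's clock is ahead of the clients', an expiry of time()+36000 can already be in the past from IE's point of view, and IE silently discards the cookie.

        date                     # compare against a workstation's clock
        ntpdate pool.ntp.org     # one-off sync if the server has drifted

    With the clock verified, reload the test page twice and inspect the Set-Cookie header itself, e.g. with curl -i against the test page's URL.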


  • GIT Website Deployment

    - by Brian
    I am attempting to set up Git to deploy my project to different locations based on the branch (I think this is what I want to do, anyway). My current setup is this: a local dev machine running NetBeans to make changes, and a remote server hosting the Git projects (the same server runs Apache), with 2 subsites, test.FQDN.com and live.FQDN.com. What I would like to do is have one Git project (MyProject) and create a new feature branch. Any commits to the feature branch would publish to test.FQDN.com; once the features have been tested and merged into the master branch, they would publish to live.FQDN.com. I have looked at Git's post-receive hooks and was able to use the "git checkout -f" command to deploy on test.FQDN.com, but that only pulls the master branch, not the new feature branch. I do not have any funding to use a third party to make this work and would prefer to stay within Git, but I have full root access to the web server if there is a package to install which would help control this. Any suggestions would be great!
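
    A per-branch dispatch in the post-receive hook is the usual pattern here (a sketch; the work-tree paths and the feature branch name are assumptions). The hook reads one line per updated ref from stdin, so it can route each branch to its own checkout:

        #!/bin/sh
        # hooks/post-receive in the bare repo on the web server
        while read oldrev newrev ref; do
            case "$ref" in
                refs/heads/master)
                    GIT_WORK_TREE=/var/www/live.FQDN.com git checkout -f master ;;
                refs/heads/feature)
                    GIT_WORK_TREE=/var/www/test.FQDN.com git checkout -f feature ;;
            esac
        done

    Both directories need to exist and be writable by the user receiving the push, and the hook must be executable (chmod +x hooks/post-receive).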


  • How to show a warning message when entering a folder?

    - by Valter Henrique
    I don't know if this is possible, but I have a folder for which I would like to show a warning message when a user enters it. In my case it would say that the folder can be deleted without previous warning, to save some disk space. I have already created a file inside the folder with the warning message:

        WARNING!
        ######################################################################
        Please, be advised, that the folder /company-backup/amazon-s3 can be
        deleted without previous WARNING to save disk space as the
        INFRASTRUCTURE TEAM judge necessary.

        Best regards,
        Infrastructure Team.
        ######################################################################

    Is that possible? Any idea?
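
    There is no filesystem-level hook for "entering" a directory, but for interactive shells a cd wrapper gets the same effect (a sketch for bash; the file names are assumptions, and dropping it into /etc/profile.d/ applies it to everyone's login shells):

        # /etc/profile.d/dir-warning.sh (hypothetical file name)
        cd() {
            builtin cd "$@" || return
            # print the banner whenever the new directory carries one
            [ -f ./WARNING.txt ] && cat ./WARNING.txt
        }

    Users bypassing the shell (file managers, scp, cron jobs) won't see it, of course; for them the banner file itself is the only warning.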


  • How do I get a network printer installed in ubuntu 9.04?

    - by SoaperGEM
    My girlfriend's work computer now has Linux on one of the partitions, and for the most part it's running fine, except that I can't seem to get the network printers configured right. There are two of them, a Lanier MP 7500/LD275 and a Lanier MP C3000/LD430c, and Linux seems to have found them both automatically. I'll go through what I did and what exactly went wrong. I went to Administration > Printing and clicked the new printer button. It searched for printers and found them both, listed under "Network Printers", and I added them in succession. However, when I clicked "Print a Test Page", it failed, saying there was a broken pipe. The device URIs were saved as socket://[ip address]:9100. I changed these to lpd://[ip address] per an online tutorial, which at first looked like it might have worked (but didn't). Now when I try to print a test page, it first says Processing (sometimes even Processing - printing test page, 4%), but always subsequently displays Idle - /usr/lib/cups/backend/lpd failed. Help! What do I do? It seems like Linux can find these printers just fine, and the drivers seem to be in place, so what's going wrong?
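
    A quick sanity check on the original socket:// URI (a sketch; substitute the printer's real address for the TEST-NET placeholder) is to see whether the JetDirect-style port answers at all, which separates a network/port problem from a CUPS driver problem:

        nc -vz 192.0.2.10 9100

    If that connects, the socket backend should be usable and the broken pipe more likely comes from the driver/PPD; if it doesn't, the printer may have raw port 9100 printing disabled, in which case enabling it (or LPD/IPP) on the printer side is the first step.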


  • /etc/hosts: What is loghost? (fresh install of Solaris 10 update 9)

    - by cjavapro
    My /etc/hosts reads:

        #
        # Internet host table
        #
        ::1            localhost
        127.0.0.1      localhost
        XX.XX.XX.XX    myserver loghost

    What is the purpose of loghost? If it were not for loghost being in there, all the /etc/hosts files on all the servers in this particular network could be identical.

    Edit: I looked at /etc/syslog.conf:

        #ident  "@(#)syslog.conf        1.5     98/12/14 SMI"   /* SunOS 5.0 */
        #
        # Copyright (c) 1991-1998 by Sun Microsystems, Inc.
        # All rights reserved.
        #
        # syslog configuration file.
        #
        # This file is processed by m4 so be careful to quote (`') names
        # that match m4 reserved words.  Also, within ifdef's, arguments
        # containing commas must be quoted.
        #
        *.err;kern.notice;auth.notice                   /dev/sysmsg
        *.err;kern.debug;daemon.notice;mail.crit        /var/adm/messages
        *.alert;kern.err;daemon.err                     operator
        *.alert                                         root
        *.emerg                                         *
        # if a non-loghost machine chooses to have authentication messages
        # sent to the loghost machine, un-comment out the following line:
        #auth.notice            ifdef(`LOGHOST', /var/log/authlog, @loghost)
        mail.debug              ifdef(`LOGHOST', /var/log/syslog, @loghost)
        #
        # non-loghost machines will use the following lines to cause "user"
        # log messages to be logged locally.
        #
        ifdef(`LOGHOST', ,
        user.err                                        /dev/sysmsg
        user.err                                        /var/adm/messages
        user.alert                                      `root, operator'
        user.emerg                                      *
        )

    Very interesting. When shutting down, alerts go to all users, probably through *.emerg *. Looking at ifdef, it seems that the first parameter checks whether the current machine is a loghost, the second parameter is what to do if it is, and the third is what to do if it is not.

    Edit: If you want to test a logging rule, you can use svcadm restart system-log to restart the logging service and then logger -p notice "test" to send a test log message, where notice can be replaced with any facility.level pair such as user.err, auth.notice, etc.
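
    Reading the syslog.conf above, loghost is Solaris's hook for pointing syslog at a central log server without editing syslog.conf itself: rules ending in @loghost go to whichever host carries that alias. On the machine that should collect the logs, loghost stays on its own address; on every other machine it can point at the collector, which is the one line that would differ between otherwise identical /etc/hosts files (the collector's address below is hypothetical):

        # /etc/hosts on a non-loghost machine
        192.0.2.5    central-logger loghost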


  • Command or tool to display list of connections to a Windows file share

    - by BizTalkMama
    Is there a Windows command or tool that can tell me what users or computers are connected to a Windows file share? Here's why I'm looking for this: I've run into issues in the past where our deployment team has deployed BizTalk applications to one of our environments using the wrong bindings, leaving us with two receive locations pointing at the same file share (i.e. both the dev and test servers point at the dev receive location URI). When this occurs, the two environments take turns processing the received files (meaning that if I am attempting to debug something in one environment and the other environment has picked the file up, it looks as if my test file has disappeared into thin air). We have several different environments, plus individual developer machines, and I'd rather not have to check each one individually to find the culprit. I'm looking for a quick way to detect which locations are connected to the share once I notice my test files vanishing. If I can determine the connections that are invalid, I can go directly to the person responsible for that environment and avoid the time it takes to randomly ask around. And if the connections appear to be correct, I can go directly to troubleshooting where in the process the message gets lost. Any suggestions?
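
    Run on the machine that hosts the share, two built-ins cover this (a sketch; openfiles needs the "maintain objects list" flag only for tracking local handles, while files opened over the network show up without it):

        rem lists each connected computer and user name
        net session

        rem lists files opened over the network and the accounts holding them
        openfiles /query

    The same information is available in Computer Management > System Tools > Shared Folders > Sessions / Open Files, which can also be pointed at a remote server.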


  • Weekly cron executing 2 times:

    - by yes123
    Hi guys, I have placed a .sh file that runs a PHP script weekly. The script should run only once, but every Sunday it runs at 1:30 am and again at 6:50 am. Any way to fix this?

    Add 1: /etc/cron.weekly/cronweek:

        #!/bin/bash
        /usr/bin/php -f /home/my/path/to/script/cronweek.php

    Add 2: crontab file:

        # /etc/crontab: system-wide crontab
        # Unlike any other crontab you don't have to run the `crontab'
        # command to install the new version when you edit this file
        # and files in /etc/cron.d. These files also have username fields,
        # that none of the other crontabs do.

        SHELL=/bin/sh
        PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

        # m h dom mon dow user  command
        17 *    * * *   root    cd / && run-parts --report /etc/cron.hourly
        25 6    * * *   root    test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
        47 6    * * 7   root    test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
        52 6    1 * *   root    test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )
        #
        */1 * * * *     root    /usr/local/rtm/bin/rtm 35 > /dev/null 2> /dev/null
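
    Note that the crontab entry only runs cron.weekly when anacron is absent (the test -x guard), so if anacron is installed it schedules the weekly jobs itself from /etc/anacrontab, and a second copy of the job anywhere else would explain a second firing. Two quick checks (a sketch):

        # when and how anacron fires cron.weekly
        cat /etc/anacrontab
        # duplicate copies of the job in any cron location
        grep -r cronweek /etc/cron* /var/spool/cron 2>/dev/null

    If the script shows up in more than one place (cron.weekly plus cron.daily, a user crontab, or /etc/cron.d), removing the extra copy leaves a single weekly run.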

