Search Results

Search found 2581 results on 104 pages for 'mike crittenden'.

Page 34 of 104

  • Bluetooth radio device is not available

    - by Mike
    I reinstalled my Windows 7 operating system and have since been unable to detect my Logitech M555b Bluetooth mouse. Here is some info about my system: I have a working connection with a Bluetooth printer. The "My Bluetooth" section of Explorer shows the message "Bluetooth radio device is not available". Device Manager indicates that the Generic Bluetooth Radio is working correctly. The Bluetooth "F12" switch is on, and the mouse batteries are new and inserted the right way around. I'm presuming that I need a non-generic Bluetooth driver. The computer is a Clevo P150HMx (the version with the GTX485 GPU) with the default WLAN/Bluetooth combo card manufactured by Realtek (I think it's a Realtek RTL8188CE card). I think I have the drivers installed, but I still get the generic driver in Device Manager. I'm confused. Please help, I'm going mad on the touchpad. (Thanks for the touch-up, wizlog)

    Read the article

  • SharePoint site settings add on SSL port number?

    - by Mike
    WSS 3.0, IIS 6 / Windows Server 2003, CAG. We have several WSS sites on a SharePoint WSS box that talk to the outside, all of which are SSL enabled, so a CAG (Citrix Access Gateway) translates the standard port 443 to the local SSL port on the server. Everything is set up and works fine until you get into Site Settings and start rooting around - it seems like a very unstable link library. Links will try to use the local SSL port number instead of the standard 443; they try to skip the translation step. Is that the site's doing? Any ideas on how to fix it?

    Read the article

  • Completely remove MySQL from MacBook Pro

    - by mike
    I'm pretty sure I completely removed MySQL from my system, except for one thing. When I type mysql on the command line, I get this: bash: /opt/local/bin/mysql5: No such file or directory. How does the shell still know where it thinks mysql should be? I'm trying to build it myself in /usr/local, and when I do install it there, I still get that error message pointing at /opt/local.
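
    A minimal diagnostic sketch, assuming a bash shell and a leftover MacPorts alias or cached path (an assumption, since the question doesn't say how MySQL was installed):

        type -a mysql          # shows whether 'mysql' resolves to an alias, function, hashed path, or file
        alias | grep -i mysql  # MacPorts setups often leave an alias like mysql='/opt/local/bin/mysql5'
        hash -r                # clears bash's cached command locations for the current session
        grep -n mysql ~/.bash_profile ~/.bashrc ~/.profile 2>/dev/null  # find where the alias is defined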

    Read the article

  • CentOS 6 - YUM Local Repo - Ensure consistent package distribution

    - by Mike Purcell
    I've read a few guides outlining how to set up a local YUM repo, but none of them explicitly answered my question: if I set up a local YUM repo, does that mean that any CentOS servers which pull from said repo will never be "ahead" of the local YUM repo? I want to ensure a consistent package distribution across all my servers. Right now, when I do a yum update, even on a daily basis, the servers can fall out of alignment. For example, if I run yum update on my dev server in the morning, then run yum update on one of my production servers in the afternoon, the production server may pick up a new version of a package that the dev server did not, due to the time window between the update commands. Instead, I'd prefer to run yum update on my dev server, which has access to the remote upstream yum repos, let it sit for 2 weeks, and then run yum update on my production servers against the local repo on my dev server.
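
    For reference, a hedged sketch of how such a frozen mirror is commonly built with reposync and createrepo; the directory and repo IDs are illustrative assumptions, not details from the question:

        # On the dev server: mirror the upstream repos, then (re)build the repo metadata.
        reposync --repoid=base --repoid=updates -p /var/www/html/repo/
        createrepo /var/www/html/repo/
        # Production servers point only at this mirror, so they can never be "ahead" of it;
        # re-run the two commands every 2 weeks to promote a new snapshot.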

    Read the article

  • Set up Firefox to save .pages as .zip automatically

    - by Mike Dtrick
    What do I want to do? I would like Firefox to save files with the .pages extension as .zip files automatically. Scenario: You are browsing through your emails and notice that a friend has sent you an email with a file attached (a .pages file in this example). Unfortunately, you have a laptop that runs Windows. Your friend continues to send tons of emails with .pages files attached, and you are tired of manually saving the files as .zip files. Ultimately, you would like Firefox to be set up so that the download/file manager recognizes the .pages extension and automatically converts it to a .zip file. What have I done? I have saved files manually by selecting Save As "All Files" and setting the extension to .zip. I've gone through Firefox and its documentation and have not found anything on how to complete this task. Why am I doing this? To save time (only a few seconds; not the main reason). I would like to set up a simple solution that "converts" a file automatically, without having to recall the steps for doing it manually (for clients who aren't exactly tech savvy), so that clients with Windows can access the files. IMPORTANT NOTE: I am not trying to save the web page, but rather an Apple document equivalent to a Microsoft Word file. UPDATE: The really easy method would be to save one file, right-click it, choose Properties, and open all .pages files with WinRAR (or any other program that extracts files from a compressed archive). For the sake of learning, I am going to "neglect" this method and continue to do some research on Firefox add-ons. I would still like Firefox or the download manager to do the bulk of the work in converting the file.

    Read the article

  • Monitor mode 802.11 captures on OSX

    - by Mike A
    I'm trying to determine the difference between capturing 802.11 frames in the following two ways on OSX (10.8.5). It's a bit esoteric, but I use "Option 2" to capture frames for later analysis, and am wondering if I'm missing something.

    Option 1: use "airportd":

        $ sudo /usr/libexec/airportd en0 sniff

    Option 2: use "airport" to set the channel, followed by tcpdump:

        sudo /System/Library/PrivateFrameworks/Apple80211.framework/Versions/Current/Resources/airport --channel=
        sudo tcpdump -I -P -i en0 -w /tmp/capture.pcap

    (or alternatively eliminate the -w and watch packets in real time). From what I can tell: both commands, according to the wifi icon on OSX, put the interface into 'monitor' mode; both commands output a pcap file that is readable in both wireshark/tcpdump and Eye P.A.; and both commands appear to capture management, control and data frames. The rub: Option 1 disconnects you from the network. This is expected when putting an interface into 'monitor' mode. Option 2 does NOT disconnect you, provided you've set the channel to the same channel you're currently connected to. This has the distinct advantage of keeping your connection up while capturing in monitor mode. My question: Option 2 does not seem like it should work; more specifically, it does not seem like I should be able to remain connected while also capturing frames in monitor mode. On a wired NIC you can be 'promiscuous' and still send frames, but I didn't think the same was true for a wireless NIC. So: is a capture taken with Option 2 actually valid?

    Read the article

  • Enabled Network Discovery on Server, and now VNC and Squeezebox clients don't work

    - by Mike Hanson
    I've recently set up a Windows Server 2008. It's running an email server, Squeezebox server, MS SQL Server, etc., and I'm doing remote maintenance with UltraVNC. I had everything working fine. Then the server needed to access a network share on another machine, and I was prompted to turn on network discovery, which I did, choosing the Home rather than the Public option. Since doing that, some things have stopped working, while others are still fine: shared folders and the email services (ports 25 and 110) are still accessible, but VNC (port 5900) and the Squeezeboxes (port 9000) no longer work. Here's what I've tried so far: Checked the network discovery settings, to see if anything looked strange. Checked the firewall settings; those ports appear to be open. Also in the firewall settings, the entries for Private-profile Network Discovery were all on, but the Domain/Public ones were off, so I tried turning those on. In the services, turned on Function Discovery Resource Publication and SSDP Discovery. Any other suggestions?
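
    A small verification sketch using standard Windows tools (the port numbers mirror the question; nothing here changes state):

        netstat -ano | findstr ":5900 :9000"
        rem confirms VNC (5900) and the Squeezebox server (9000) are actually listening
        netsh advfirewall firewall show rule name=all | findstr /i "5900 9000"
        rem lists firewall rules mentioning those ports, including which profile (Domain/Private/Public) they apply to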

    Read the article

  • Why does my 5.1 surround work in testing only?

    - by Mike Pateras
    I've got a 5.1 speaker setup. In both this SoundMax utility (I think it came with my motherboard) and in the Windows 7 sound test, all 5.1 speakers work properly, but those are the only times I can get audio to come out of the rear speakers and the center channel. When playing games, video, music, etc., I only seem to get 2.1 channels' worth of sound, even though I've configured everything for 5.1 surround sound. How can I get my 5.1 surround sound working during actual use?

    Read the article

  • iMac memory limit

    - by Mike
    I have an iMac from the first generation of aluminum iMacs. The reported model is "iMac7,1". This iMac's manual says I can install two 2GB modules, but when the manual was written there were no modules larger than 2GB, and the machine shipped with Leopard, which I assume can handle less memory than Snow Leopard. Today 4GB modules exist, so can I put in two 4GB modules and bring it to 8GB? Thanks.

    Read the article

  • syslog ip ranges to specific files using `rsyslog`

    - by Mike Pennington
    I have many Cisco / JunOS routers and switches that send logs to my Debian server, which uses rsyslogd. How can I configure rsyslogd to send these router/switch logs to a specific file, based on their source IP address? I do not want to pollute the general system logs with these entries. For instance: all routers in Chicago (source IP block 172.17.25.0/24) should log only to /var/log/net/chicago, and all routers in Dallas (source IP block 172.17.27.0/24) only to /var/log/net/dallas. Finally, these logs should be rotated daily, kept for up to 30 days, and compressed. NOTE: I am answering my own question
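
    A minimal sketch of the rsyslog side, assuming the legacy property-filter syntax available in Debian's rsyslogd; the filter and discard operators are standard, while the file layout simply mirrors the question:

        # /etc/rsyslog.d/30-routers.conf
        :fromhost-ip, startswith, "172.17.25." /var/log/net/chicago
        & ~
        :fromhost-ip, startswith, "172.17.27." /var/log/net/dallas
        & ~
        # '& ~' discards each message after writing it, keeping it out of the general logs
        # (newer rsyslog versions spell this '& stop').

    Rotation would then be a plain logrotate stanza (again a sketch):

        # /etc/logrotate.d/netlogs
        /var/log/net/chicago /var/log/net/dallas {
            daily
            rotate 30
            compress
            missingok
            notifempty
        }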

    Read the article

  • Puppet - Is it possible to use a global var to pull in a template with the same name?

    - by Mike Purcell
    I'm new to Puppet. As such, I am trying to work out the best way to set up my manifests so they make sense. Following the DRY (don't repeat yourself) principle, I am trying to load common directives in one template, then load in environment-specific directives from a file matching the environment. Basically like this:

        # nodes.pp
        node base_dev {
            $service_env = 'dev'
        }
        node 'service1.ownij.lan' inherits base_dev {
            include global_env_specific
        }

        class global_env_specific {
            include shell::bash
        }

        # modules/shell/bash.pp
        class shell::bash inherits shell {
            notify{"Service env: ${service_env}": }
            file { '/etc/profile.d/custom_test.sh':
                content => template('_global/prefix.erb', 'shell/bash/global.erb', 'shell/bash/$service_env.erb'),
                mode => 644
            }
        }

    But every time I run puppet agent --test, Puppet complains that it can't find the shell/bash/$service_env.erb file, even though I double-checked that it exists. I know the var is accessible, because the notify statement outputs the expected value, so I suspect I am doing something which is not allowed. I know I could have a single template.erb and pass variables to the template, which would work in this case because the custom.sh file is small and there aren't many changes across environments, but for more complex configs (httpd, solr, etc.) I'd prefer to access environment-specific files. I am also aware that I can specify environment-specific module paths, but I'd prefer to handle this behavior at the template level instead of having several closely named directories. Thanks.
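
    One hedged observation, offered as a sketch rather than a confirmed answer: Puppet interpolates variables only inside double-quoted strings, so the single-quoted 'shell/bash/$service_env.erb' is passed to template() literally. Under that assumption, the file resource would become:

        file { '/etc/profile.d/custom_test.sh':
            content => template('_global/prefix.erb',
                                'shell/bash/global.erb',
                                "shell/bash/${service_env}.erb"),  # double quotes enable interpolation
            mode    => '0644',
        }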

    Read the article

  • Windows Server 2003 DNS cached lookups modification

    - by Mike
    Hi. Is it possible to modify the entries in the cached lookups? I need to temporarily change the resolution of a domain name to a different IP address, and I can't wait for DNS to update. Sorry, I forgot to mention: the server's interface has DNS set to itself, and the DNS Server service is running.
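
    A hedged sketch of the usual workaround, since the server cache can be flushed but not edited entry-by-entry; the host name and address below are placeholders:

        rem Add a local override to the hosts file, then clear both caches:
        echo 203.0.113.10  www.example.com >> %SystemRoot%\system32\drivers\etc\hosts
        rem flush the client-side resolver cache:
        ipconfig /flushdns
        rem flush the DNS Server service cache (dnscmd ships with the Server 2003 Support Tools):
        dnscmd /clearcache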

    Read the article

  • Remotely viewing IP camera on Belkin N450 DB router

    - by Mike Miller
    I need to set up a wireless IP camera (Trendnet TV-IP501W) on my network so that it is remotely visible from anywhere. Right now I have successfully connected it to my home network, but nothing else. My router is a Belkin N450 DB. Any help would be much appreciated, including what this would be referred to as, so I could more easily ask another forum. I believe it is something like "port forwarding", but I'm not sure. Ok, I believe I found this in the "virtual servers" section. It asks for enabling with a check box, description, inbound port, type, private IP, and private port. In that order, I have checked enabling, "camera", 150, TCP, 81, and 81. I'm assuming inbound ports are the numbers I use for the home network - xxx.xxx.x.150 - and the 81 was for private. I used my WAN IP and added :81 and .81, but didn't get it. What am I doing wrong?
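
    For reference, a hedged sketch of how a virtual-server entry is normally filled in; the addresses are illustrative, and the field names follow the question. The inbound port is the external port appended to the WAN IP, while the private IP/port identify the camera on the LAN:

        Description:  camera
        Inbound port: 8081             (external: browse to http://<WAN-IP>:8081)
        Type:         TCP
        Private IP:   192.168.2.150    (the camera's full LAN address, not just '150')
        Private port: 81               (the port the camera's web server listens on)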

    Read the article

  • CentOS - Configuring Puppet to play nice with SELinux

    - by Mike Purcell
    I am running into an issue every time I attempt to start the puppetmasterd service; I receive the following error message:

        root@service1 ~ # -> /etc/init.d/puppetmaster start
        Starting puppetmaster: Could not prepare for execution: Got 1 failure(s) while initializing: change from absent to directory failed: Could not set 'directory on ensure: Permission denied - /etc/puppet/ssl [FAILED]

    Apparently there was a known issue with this scenario, as outlined in this bug report; however, the bug report states the issue was resolved in selinux-policy-3.9.16-29.fc15, while the latest CentOS default upstream version is 3.7.19-155.el6_3.4. So I am trying to figure out the best solution. I can either create a local security policy to allow puppetmasterd the access it needs, or keep researching and install a newer version of selinux-policy from outside the default upstream channel. Anyone have any recommendations? Please don't recommend disabling SELinux...

    ----- Update -----

    Here is the puppet.conf:

        [main]
            # The Puppet log directory.
            # The default value is '$vardir/log'.
            logdir = /var/log/puppet

            # Where Puppet PID files are kept.
            # The default value is '$vardir/run'.
            rundir = /var/run/puppet

            # Where SSL certificates are kept.
            # The default value is '$confdir/ssl'.
            ssldir = $vardir/ssl

        [master]
            certname=puppetmaster.ownij.lan
            dns_alt_names=puppetmaster.ownij.lan

        [agent]
            # The file in which puppetd stores a list of the classes
            # associated with the retrieved configuratiion. Can be loaded in
            # the separate ``puppet`` executable using the ``--loadclasses``
            # option.
            # The default value is '$confdir/classes.txt'.
            classfile = $vardir/classes.txt

            # Where puppetd caches the local configuration. An
            # extension indicating the cache format is added automatically.
            # The default value is '$confdir/localconfig'.
            localconfig = $vardir/localconfig

            server=puppetmaster.ownij.lan

    And here are the denials per the audit log:

        type=AVC msg=audit(1349751364.985:666): avc: denied { search } for pid=15093 comm="puppetmasterd" name="/" dev=dm-2 ino=2 scontext=unconfined_u:system_r:puppetmaster_t:s0 tcontext=system_u:object_r:home_root_t:s0 tclass=dir
        type=SYSCALL msg=audit(1349751364.985:666): arch=c000003e syscall=4 success=no exit=-13 a0=1391420 a1=7fffef09ed10 a2=7fffef09ed10 a3=120c500 items=0 ppid=15092 pid=15093 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts1 ses=13 comm="puppetmasterd" exe="/usr/bin/ruby" subj=unconfined_u:system_r:puppetmaster_t:s0 key=(null)
        type=AVC msg=audit(1349751365.302:667): avc: denied { search } for pid=15093 comm="puppetmasterd" name="/" dev=dm-2 ino=2 scontext=unconfined_u:system_r:puppetmaster_t:s0 tcontext=system_u:object_r:home_root_t:s0 tclass=dir
        type=SYSCALL msg=audit(1349751365.302:667): arch=c000003e syscall=4 success=no exit=-13 a0=1d18530 a1=7fffef0d04d0 a2=7fffef0d04d0 a3=8 items=0 ppid=15092 pid=15093 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts1 ses=13 comm="puppetmasterd" exe="/usr/bin/ruby" subj=unconfined_u:system_r:puppetmaster_t:s0 key=(null)
        type=AVC msg=audit(1349751365.465:668): avc: denied { search } for pid=15093 comm="puppetmasterd" name="/" dev=dm-2 ino=2 scontext=unconfined_u:system_r:puppetmaster_t:s0 tcontext=system_u:object_r:home_root_t:s0 tclass=dir
        type=SYSCALL msg=audit(1349751365.465:668): arch=c000003e syscall=4 success=no exit=-13 a0=1af3930 a1=7fffef0c5c70 a2=7fffef0c5c70 a3=8 items=0 ppid=15092 pid=15093 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts1 ses=13 comm="puppetmasterd" exe="/usr/bin/ruby" subj=unconfined_u:system_r:puppetmaster_t:s0 key=(null)
        type=AVC msg=audit(1349751365.467:669): avc: denied { search } for pid=15093 comm="puppetmasterd" name="/" dev=dm-2 ino=2 scontext=unconfined_u:system_r:puppetmaster_t:s0 tcontext=system_u:object_r:home_root_t:s0 tclass=dir
        type=SYSCALL msg=audit(1349751365.467:669): arch=c000003e syscall=4 success=no exit=-13 a0=1b17aa0 a1=7fffef0c5c70 a2=7fffef0c5c70 a3=8 items=0 ppid=15092 pid=15093 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts1 ses=13 comm="puppetmasterd" exe="/usr/bin/ruby" subj=unconfined_u:system_r:puppetmaster_t:s0 key=(null)
        type=AVC msg=audit(1349751366.401:670): avc: denied { write } for pid=15093 comm="puppetmasterd" name="puppet" dev=dm-0 ino=132035 scontext=unconfined_u:system_r:puppetmaster_t:s0 tcontext=system_u:object_r:puppet_etc_t:s0 tclass=dir
        type=SYSCALL msg=audit(1349751366.401:670): arch=c000003e syscall=83 success=no exit=-13 a0=2d7a400 a1=1f9 a2=2d7a40f a3=7fffef0a6df0 items=0 ppid=15092 pid=15093 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts1 ses=13 comm="puppetmasterd" exe="/usr/bin/ruby" subj=unconfined_u:system_r:puppetmaster_t:s0 key=(null)

    And here is the audit log passed through audit2allow:

        root@service1 ~ # -> fgrep puppetmasterd /var/log/audit/audit.log | audit2allow -m puppetmasterd

        module puppetmasterd 1.0;

        require {
            type home_root_t;
            type puppetmaster_t;
            type puppet_etc_t;
            type puppet_var_run_t;
            type httpd_sys_content_t;
            class lnk_file { relabelfrom relabelto };
            class file { relabelfrom read getattr open };
            class dir { write read search getattr setattr };
        }

        #============= puppetmaster_t ==============
        allow puppetmaster_t home_root_t:dir { search getattr };
        allow puppetmaster_t httpd_sys_content_t:dir read;
        allow puppetmaster_t httpd_sys_content_t:file { read getattr open };
        #!!!! The source type 'puppetmaster_t' can write to a 'dir' of the following types:
        # puppet_log_t, puppet_var_lib_t, puppet_var_run_t, puppetmaster_tmp_t
        allow puppetmaster_t puppet_etc_t:dir { write setattr };
        allow puppetmaster_t puppet_etc_t:lnk_file { relabelfrom relabelto };
        allow puppetmaster_t puppet_var_run_t:file relabelfrom;
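
    For completeness, a hedged sketch of the standard audit2allow workflow for loading the module above as a local policy (whether this or a newer selinux-policy is the better fix is exactly what's being asked):

        # Build and install a local policy module from the logged denials.
        fgrep puppetmasterd /var/log/audit/audit.log | audit2allow -M puppetmasterd  # writes puppetmasterd.pp
        semodule -i puppetmasterd.pp
        # Before that, it may be worth ruling out a simple mislabeling of /etc/puppet:
        ls -ldZ /etc/puppet
        restorecon -Rv /etc/puppet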

    Read the article

  • ISA caching with no cache-related info in response header

    - by Mike M. Lin
    From the documentation, I can't figure out what criteria an ISA server uses to decide whether a cached file is valid when no cache-related info is in the response header. Let's say I got this header in my response on Thu, 13 Jan 2011 18:43:35 GMT:

        HTTP/1.1 200 OK
        Date: Thu, 13 Jan 2011 18:43:35 GMT
        Server: Apache/2.2.3 (Red Hat)
        Content-Language: en
        X-Powered-By: Servlet/2.5 JSP/2.1
        Keep-Alive: timeout=15
        Connection: Keep-Alive
        Transfer-Encoding: chunked
        Content-Type: text/html; charset=ISO-8859-1

    There's no cache directive, no Last-Modified field, no Expires field. How will the ISA server decide for how long to cache this response?

    Read the article

  • Multiple iTunes audiobook downloads

    - by Mike Pateras
    I bought an audiobook on iTunes on my computer. Can I, using the same account, download the file onto my phone via the iTunes app without being charged again? Or do I have to sync? I'm afraid to try it, because I know that if I am charged, Apple won't give me a refund, even though I've already purchased the book.

    Read the article

  • How best to manage my growing data in Excel?

    - by Mike
    This isn't a question about formulas or features in Excel. I'm debating the correct/best way to manage the growing amount of data I have to manage in Excel (I produce pivot tables/reports for my management). DATA: I record the publications we order: cost, date ordered, start and end of subscription, who requested it, when they ordered it, when I ordered it, whether it will be cancelled next year, etc. DILEMMA: Obviously we re-order a lot of the same publications, so depending on how I manage the data I could be duplicating all over the place. OPTION 1: Do I go wide - publication names as rows, with the related columns for each financial year copied and pasted after the previous year's, ready for the new FY's information? This will lead me out to column ZZ. OPTION 2: Or do I go long - each row holds one FY's information for one publication, and if we re-order or cancel a publication I re-type the publication name in a new row and fill in the appropriate columns? This will lead to a long list of publications down to row 10000, and potential for misspelling repeat-ordered publication names. IDEAS: What's the best way, thinking in terms of pivot-table best practice, being able to sum or count easily, report formatting, etc.? Any best practices much appreciated.

    Read the article

  • Cat5 wiring in my home [closed]

    - by Mike
    I have a problem with Cat5 cabling. First I ran a 30-metre length of cable to bedroom 1 and connected both ends to wall sockets using my punch-down tool; both ends look fine. I also ran a cable from bedroom 1 to bedroom 2 so my son can use the same internet connection, then connected the cables in parallel in bedroom 1, again using the punch-down tool and the same colours all the way through. I ran an Ethernet cable from the modem to the first wall socket close to it, then at the bedroom end used another Ethernet cable to connect the PC. It wouldn't connect, so I disconnected the bedroom 2 cable from the bedroom 1 socket, connected bedroom 1 to the PC, and it worked. But how do I connect bedroom 2 (obviously the Cat5 cable from bedroom 1 to 2 is in place)? As soon as I connect one wire from bedroom 2, I lose the internet connection!

    Read the article

  • Setting up 2 external monitors on a laptop with VGA splitter

    - by mike
    I have a laptop with a graphics card that supports 2 displays. I would like to know the easiest way to set it up so I can close my laptop lid and use 2 external monitors (as unique displays). I use it primarily for office applications and video, and want a quality, clear picture. The laptop has 1 VGA port, and I have two 24" 1920x1200 monitors that have VGA and DVI ports. So a few questions: Can I just use a VGA splitter? (I've seen mixed feedback on this.) Would a VGA-to-2x-DVI splitter (if such a thing exists) give better picture quality? Would I be better off upgrading to a laptop with 2 digital ports (I just see a lot with VGA and HDMI, though)? Specs: Model: Toshiba Satellite C675-S106 (Windows 7); Graphics card: Intel HD Graphics 3000 (supports 2 displays); Processor: Intel Core i3-2350M

    Read the article

  • "Network Error - 53" while trying to mount NFS share in Windows Server 2008 client

    - by Mike B
    CentOS | Windows 2008. I've got a CentOS 5.5 server running nfsd. On the Windows side, I'm running Windows Server 2008 R2 Enterprise. I have the "File Services" server role enabled, and both Client for NFS and Server for NFS are on. I'm able to successfully connect/mount the CentOS NFS share from other linux systems, but am experiencing errors connecting to it from Windows. When I try to connect, I get the following:

        C:\Users\fooadmin>mount -o anon 10.10.10.10:/share/ z:
        Network Error - 53
        Type 'NET HELPMSG 53' for more information.

    (IP and share name have been changed to protect the innocent :-) ) Additional information: I've verified low-level network connectivity between the Windows client and the NFS server with telnet (to the NFS port, TCP/2049), so I know the port is open. I've further confirmed that inbound and outbound firewall ports are present and enabled. I came across a Microsoft tech note that suggested changing the "Provider Order" so "NFS Network" is above other items like Microsoft Windows Network; I changed this and restarted the NFS client - no luck. I've confirmed that the share folder on the NFS server is readable/writable by all (777). I've tried other variations of the mount command, like mount 10.10.10.10:/share/ z: and mount 10.10.10.10:/share z: and mount -o anon mtype=hard \\10.10.10.10:/share * - no luck. As per the command output, I tried typing NET HELPMSG 53, but that doesn't tell me much - just "The network path was not found". I'm lost on how to proceed with troubleshooting. Any ideas?
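
    A short diagnostic sketch for the CentOS side; a successful telnet to TCP/2049 only proves nfsd is reachable, while mounting also requires the portmapper and mountd. Run from another host (the IP mirrors the sanitized one in the question):

        rpcinfo -p 10.10.10.10    # lists the portmapper/mountd/nfs ports registered on the server
        showmount -e 10.10.10.10  # confirms the export list is visible through any firewalls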

    Read the article

  • Is it possible to use WebMatrix with pure IIS?

    - by Mike Christensen
    I'd like to check out WebMatrix for publishing our site to IIS automatically (right now, I have to zip it up, copy it out, Remote Desktop into the server, unzip it, etc.). However, every example I can find on how to set up WebMatrix involves Azure, or a .publishsettings file that you'd get from your hosting provider. I'm curious whether I can publish to a normal, everyday IIS server running on Windows Server 2008. So far, all I've done to the IIS server is install Web Deploy, which I believe is the protocol WebMatrix uses to publish. When I enter the Remote Site Settings screen, I select Enter settings. I select Web Deploy as the protocol and type in my NT domain credentials (I'm an Admin on that server). I put in the site URL for the Site Name and Destination URL. When I click Validate Connection, I get an error (screenshot not shown). Am I doing something wrong, or is this just not possible to do?
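
    Two hedged checks on the IIS box, assuming publishing goes through the Web Management Service (WMSvc), which Web Deploy uses for remote connections on port 8172 by default:

        sc query WMSvc
        rem the Web Management Service must be running (and remote connections enabled in IIS Manager)
        netstat -ano | findstr :8172
        rem confirms something is listening on the default Web Deploy endpoint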

    Read the article

  • Windows 2008 R2 Task Scheduler triggered a task for unknown reasons

    - by Mike
    Today I arrived at the office only to find that a task, which was scheduled to trigger at 5:30 PM EST each Friday, had triggered on its own at 6:01 AM EST this morning. I checked the event logs as well as the task scheduler log, and all of the evidence points to a timed trigger starting this task with the correct credentials; however, the task history reports that the task had not been triggered since last Friday, when it ran to completion successfully. I do not have this task set to use random start times, or to start if a run was missed. This is the first time I have observed this happen in the Windows Task Scheduler, and I want to know if anyone else has come across this, why it happened, and how to fix it.
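
    Two commands that help with this kind of post-mortem, offered as a sketch ("TaskName" is a placeholder):

        schtasks /query /tn "TaskName" /v /fo LIST
        rem dumps the full task definition, including every trigger and the run-as account
        wevtutil qe Microsoft-Windows-TaskScheduler/Operational /c:50 /rd:true /f:text
        rem shows the 50 most recent scheduler events, newest first, including what launched the task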

    Read the article

  • Fast extraction of a time range from syslog logfile?

    - by mike
    I've got a logfile in the standard syslog format. It looks like this, except with hundreds of lines per second: Jan 11 07:48:46 blahblahblah... Jan 11 07:49:00 blahblahblah... Jan 11 07:50:13 blahblahblah... Jan 11 07:51:22 blahblahblah... Jan 11 07:58:04 blahblahblah... It doesn't roll at exactly midnight, but it'll never have more than two days in it. I often have to extract a timeslice from this file. I'd like to write a general-purpose script for this, that I can call like: $ timegrep 22:30-02:00 /logs/something.log ...and have it pull out the lines from 22:30, onward across the midnight boundary, until 2am the next day. There are a few caveats: I don't want to have to bother typing the date(s) on the command line, just the times. The program should be smart enough to figure them out. The log date format doesn't include the year, so it should guess based on the current year, but nonetheless do the right thing around New Year's Day. I want it to be fast -- it should use the fact that the lines are in order to seek around in the file and use a binary search. Before I spend a bunch of time writing this, does it already exist?
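
    In case no such tool turns up, a deliberately naive sh/awk sketch of the interface described above; it scans linearly instead of binary-searching and assumes the fixed-width "Mon DD HH:MM:SS" prefix, so it's a starting point rather than the requested tool:

        #!/bin/sh
        # usage: timegrep 22:30-02:00 /logs/something.log
        range="$1"; file="$2"
        start="${range%-*}"; end="${range#*-}"
        awk -v s="$start" -v e="$end" '{
            t = substr($0, 8, 5)                         # HH:MM from "Jan 11 07:48:46"
            if (s <= e) { if (t >= s && t <= e) print }  # window within one day
            else        { if (t >= s || t <= e) print }  # window crossing midnight
        }' "$file"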

    Read the article

  • Reasons for missing IP info in `last` output on pts logins?

    - by Mike Pennington
    I have five CentOS 6 linux systems at work, and encountered a rather strange issue that only seems to happen with my userid across all the linux systems I have... This is an example of the problem, from entries I excerpted from the last command:

        mpenning pts/19                   Fri Nov 16 10:32 - 10:35  (00:03)
        mpenning pts/17                   Fri Nov 16 10:21 - 10:42  (00:21)
        bill     pts/15  sol-bill.local   Fri Nov 16 10:19 - 10:36  (00:16)
        mpenning pts/1   192.0.2.91       Fri Nov 16 10:17 - 10:49  (12+00:31)
        kkim14   pts/14  192.0.2.225      Thu Nov 15 18:02 - 15:17  (4+21:15)
        gduarte  pts/10  192.0.2.135      Thu Nov 15 12:33 - 08:10  (11+19:36)
        gduarte  pts/9   192.0.2.135      Thu Nov 15 12:31 - 08:10  (11+19:38)
        kkim14   pts/0   :0.0             Thu Nov 15 12:27 - 15:17  (5+02:49)
        gduarte  pts/6   192.0.2.135      Thu Nov 15 11:44 - 08:10  (11+20:25)
        kkim14   pts/13  192.0.2.225      Thu Nov 15 09:56 - 15:17  (5+05:20)
        kkim14   pts/12  192.0.2.225      Thu Nov 15 08:28 - 15:17  (5+06:49)
        kkim14   pts/11  192.0.2.225      Thu Nov 15 08:26 - 15:17  (5+06:50)
        dspencer pts/8   192.0.2.130      Wed Nov 14 18:24          still logged in
        mpenning pts/18  alpha-console-1. Mon Nov 12 14:41 - 14:46  (00:04)

    You can see two of my pts login entries above that do not have a source IP address associated with them. My CentOS machines have as many as six other users that share the systems, but the mpenning userid is the only one that has this issue. Approximately 5% of my logins see this issue, but no other usernames exhibit this behavior. Questions: Given the kind of scripts I keep on these systems (which control much of our network infrastructure), I'm a little spooked by this and would like to understand what would cause my logins to occasionally miss source addresses. Is there anything (other than malicious activity) that would reasonably explain the behavior? Other than bash history timestamping, are there other things I can do to track the issue down? Informational: Since this started happening, I enabled bash history timestamping (i.e. HISTTIMEFORMAT="%y-%m-%d %T " in .bash_profile) and also added a few other bash history hacks; however, that does not give clues to what happened during the previous occurrences. All the systems run CentOS 6.3:

        [mpenning@typo ~]$ uname -a
        Linux typo.local 2.6.32-279.9.1.el6.x86_64 #1 SMP Tue Sep 25 21:43:11 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
        [mpenning@typo ~]$

    EDIT: If I use last -i mpenning, I see entries like this:

        mpenning pts/19  0.0.0.0          Fri Nov 16 10:32 - 10:35  (00:03)
        mpenning pts/17  0.0.0.0          Fri Nov 16 10:21 - 10:42  (00:21)
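
    One way to dig further, as a hedged sketch: last only reports what was written to /var/log/wtmp, so dumping the raw records shows whether the host field was empty at write time (pointing at whatever program opened the pty, e.g. screen/tmux splits or a local terminal) rather than corrupted later:

        utmpdump /var/log/wtmp | grep mpenning | tail -20
        # each record shows the user, the pts, the stored host field, and timestamps in plain text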

    Read the article

  • Under what conditions will sendmail try to immediately resend a message instead of waiting for the standard requeue interval?

    - by Mike B
    CentOS 5.8 | Sendmail 8.14.4. I used to think that if sendmail experienced a temporary (400-class) error during delivery, it would place the message in a deferred queue (e.g. /var/spool/mqueue) and retry an hour later. For the most part, that appears to be the case. But every now and then, I'll notice log entries like this (emails/domains renamed to protect the innocent :-) ):

        Dec 5 01:43:03 foobox-out sendmail[11078]: qBE3l7js123022: to=<[email protected]>, delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=124588, relay=exbox.foo.com. [10.10.10.10], dsn=4.0.0, stat=Deferred: 421 4.3.2 The maximum number of concurrent connections has exceeded a limit, closing transmission channel
        Dec 5 01:53:34 foobox-out sendmail[12763]: qBE3l7js123022: to=<[email protected]>, delay=00:10:31, xdelay=00:00:00, mailer=relay, pri=214588, relay=exbox.foo.com., dsn=4.0.0, stat=Deferred: 452 4.3.1 Insufficient system resources
        Dec 5 02:53:35 foobox-out sendmail[23255]: qBE3l7js123022: to=<[email protected]>, delay=01:10:32, xdelay=00:00:01, mailer=relay, pri=304588, relay=exbox.foo.com. [10.10.10.10], dsn=2.0.0, stat=Sent (<[email protected]> Queued mail for delivery)

    Why did sendmail try again just 10 minutes after the first attempt, and then wait another hour before trying again? If this is expected behavior, what scenarios will cause this faster requeue interval?
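
    A few checks that make the retry schedule visible, offered as a sketch (the option name is the standard sendmail.cf spelling):

        ps ax | grep '[s]endmail'                    # the daemon line shows the queue-run interval, e.g. "-q1h"
        grep -i 'MinQueueAge' /etc/mail/sendmail.cf  # messages younger than this are skipped on a queue run
        mailq                                        # lists deferred messages and the reason for deferral

    Note that a scheduled queue run is not the only delivery trigger; for example, an ETRN from the peer or a second queue runner can plausibly cause an earlier attempt. These are possibilities to rule out, not confirmed causes.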

    Read the article
