
  • Read access to Active Directory property (uSNCreated)

    - by Tom Ligda
    I have an issue with read access to the uSNCreated property when doing LDAP searches. If I do an LDAP search as a user who is a member of the Domain Admins group (UserA), I can see the uSNCreated property for every user. The problem is that if I do an LDAP search as a user (UserB) who is not a member of the Domain Admins group, I can see the uSNCreated property for some users (UserGroupA) but not for others (UserGroupB).

    When I compare the users in UserGroupA to the users in UserGroupB, I see a crucial difference on the "Security" tab: the users in UserGroupA have "Include inheritable permissions from this object's parent" unchecked, while the users in UserGroupB have that option checked. I also noticed that the users in UserGroupA were created earlier, and the users in UserGroupB were created recently. It's difficult to pin down, but I estimate the cutoff in creation time between the two groups is about six months ago.

    What can cause newly created users to default to having that security option checked rather than unchecked? A while back (maybe around six months ago?) I raised the domain functional level from Windows Server 2003 to Windows Server 2008 R2. Would that have had this effect? (I can't exactly downgrade the domain functional level to test it out.) Is this security option actually the cause of the restricted read access to the uSNCreated property in LDAP searches? It seems correlated, but I'm not sure about causation.

    What I want in the end is for all authenticated users to have read access to the uSNCreated property of every user when doing an LDAP search. I would also be OK with granting read access to that property to an AD group; then I could control access by adding members to the group.
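
    If granting read access to a group turns out to be the way to go, dsacls can grant the read-property (RP) right on uSNCreated as an inherited ACE on user objects. A hedged sketch with a hypothetical OU, domain, and group name; test it on a non-production OU first:

        :: Grant the LdapReaders group read access to uSNCreated on all user objects under the OU:
        dsacls "OU=Staff,DC=example,DC=com" /I:S /G "EXAMPLE\LdapReaders:RP;uSNCreated;user"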


  • Can 'Percona MySQL Data Recovery' be used to recover dropped tables if the datadir filesystem is mounted as /?

    - by Tom Geee
    According to Percona: "Unmount the filesystem or make it read-only if... you have filesystem corruption OR you have dropped tables in innodb_file_per_table format." If I have innodb_file_per_table enabled and accidentally dropped a table while the datadir lives on the / partition, can the data still be recovered? Obviously you can't work with an unmounted root filesystem. Our VPS host uses a fixed default filesystem layout which we cannot customize. I'm asking in case of any future scenario. Edit: would mounting the / filesystem over NFS onto another system as read-only be a workaround? TIA.
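
    For what it's worth, a root filesystem can sometimes be made read-only in place rather than unmounted; a hedged sketch (the remount fails if anything holds files open for writing, and the device name /dev/vda1 is a placeholder):

        # Remount the root filesystem read-only in place:
        mount -o remount,ro /
        # Failing that, boot a rescue image and attach the disk read-only from there:
        mount -o ro /dev/vda1 /mnt/recovery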


  • If I change the Windows admin user's password, then I can't log in to Outlook. Why?

    - by Tom
    I am seeing this strange behaviour with Windows 7 and Outlook 2010. If I change the password of User1 (an admin user), log in, and start Outlook, it asks for the password and keeps saying "password incorrect". I can log in with the same password on the web client. If I change User1's password back to the previous one, Outlook starts without any prompting and I'm able to send and receive emails. Is there any link between the user account, its password, and the PST file's password?


  • Intermittent 404 on select assets, LAMP stack

    - by Tom Lagier
    We have a LAMP-stack WordPress server that is serving most assets correctly. However, one plugin's CSS file and several images are returning soft 404s roughly 20% of the time. I can't find any reference to the 404 in the access logs, but the browser is definitely receiving a 404 response from somewhere (WordPress, I would assume).

    When I use an alias URL that does not match the site URL but does resolve to the asset path, the resource loads correctly 100% of the time. However, using the site URL only resolves the select, problematic assets 20% of the time. You can test one of the problematic assets here:

        http://www.mreco.org/wp-content/uploads/2014/05/zero-cost.jpg

    However, the alias link always resolves correctly:

        http://mr-eco.wordpress.promocampaigns.com/wp-content/uploads/2014/05/zero-cost.jpg

    Stranger, if I attempt to access outdated content that definitely does not exist on the server, the live URL returns the content roughly 50% of the time. Using the alias link, it 404s 100% of the time - the correct behavior. The error log and PHP error log are clean.

    A sample access log (pulled from grep 'zero-cost.jpg' /var/log/httpd/mr-eco-access_log) from several refreshes of the live direct link (where I am not seeing any 404s):

        10.166.202.202 - - [28/May/2014:20:27:41 +0000] "GET /wp-content/uploads/2014/05/zero-cost.jpg HTTP/1.1" 304 -
        10.166.202.202 - - [28/May/2014:20:27:42 +0000] "GET /wp-content/uploads/2014/05/zero-cost.jpg HTTP/1.1" 304 -
        10.166.202.202 - - [28/May/2014:20:27:43 +0000] "GET /wp-content/uploads/2014/05/zero-cost.jpg HTTP/1.1" 304 -
        10.166.202.202 - - [28/May/2014:20:27:43 +0000] "GET /wp-content/uploads/2014/05/zero-cost.jpg HTTP/1.1" 304 -
        10.176.201.37 - - [28/May/2014:20:27:56 +0000] "GET /wp-content/uploads/2014/05/zero-cost.jpg HTTP/1.1" 200 57027

    Chrome's dev tools list the following network activity before displaying 404 page content:

        zero-cost.jpg  /wp-content/uploads/2014/05  GET  404 Not Found  text/html  Other  15.9 KB  73.2 KB  953 ms  947 ms

    My Apache configuration is standard; I've listed the virtual host entry and .htaccess file below. I can provide other parts of the Apache config if necessary.

    Virtual host:

        <VirtualHost *:80>
            DocumentRoot /var/www/public_html/mr-eco.wordpress.promocampaigns.com
            ServerName www.mreco.org
            ServerAlias mreco.org mr-eco.wordpress.promocampaigns.com
            ErrorLog logs/mr-eco-error_log
            CustomLog logs/mr-eco-access_log common
            <Directory /var/www/public_html/mr-eco.wordpress.promocampaigns.com>
                AllowOverride All
                SetOutputFilter DEFLATE
            </Directory>
        </VirtualHost>

    .htaccess:

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
            RewriteEngine On
            RewriteBase /
            RewriteRule ^index\.php$ - [L]
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteRule . /index.php [L]
        </IfModule>
        # END WordPress

    I have checked for multiple A records and can confirm that there is a single A record pointing at the domain:

        ;; ANSWER SECTION:
        mreco.org.      60      IN      A       50.18.58.174

    I'm fairly new to systems administration, and at a complete loss as to what could cause this. In the past, inconsistently 404ing assets have been caused by out-of-sync instances behind a load balancer. In this case, it is a single instance behind the load balancer. Because of the inconsistency, it feels like a caching issue. We don't make use of Apache caching, and as far as I know WordPress should not be caching either.

    What I've done so far:

      • Reset WordPress permalinks
      • Disabled WordPress plugins
      • Re-generated the WordPress .htaccess file
      • Swapped the ServerName and ServerAlias directives
      • Cleared the browser cache
      • Confirmed the disk location of resources
      • Checked the PHP, access, and error logs
      • Confirmed correct DNS setup (can post if necessary)

    I'm at a total loss. Thanks for helping me out!
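
    One quick check this setup invites: request the asset directly from the instance's IP while presenting the live Host header, so whatever sits in front of Apache is bypassed. A hedged sketch; the IP comes from the dig output above:

        # Fetch just the headers, straight from the instance:
        curl -sI -H "Host: www.mreco.org" http://50.18.58.174/wp-content/uploads/2014/05/zero-cost.jpg
        # Repeat and tally the status codes to see whether the ~20% failures persist:
        for i in $(seq 1 20); do
            curl -s -o /dev/null -w "%{http_code}\n" -H "Host: www.mreco.org" \
                http://50.18.58.174/wp-content/uploads/2014/05/zero-cost.jpg
        done

    If the direct-to-IP requests never 404 while the live URL does, the stale responses are coming from the load balancer or another cache in front of Apache rather than from WordPress.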


  • What was scientifically shown to support productivity when organizing/accessing files and folders?

    - by Tom Wijsman
    I have gathered terabytes of data, but it has become a habit to store files and folders in the same folder. That folder can be seen as a kind of inbox where most files (non-installations) enter my system. This way I end up with a big collection of files that are hard to organize properly. I mostly end up making folders that match the file type, but then I still have several gigabytes of data per folder, which is not efficient enough for me to use the folders productively. I'd rather do a few clicks than have to search through the files, whether with some software product or by looking through the folder. Often the file names themselves are not descriptive, so it would be easier to recognize them if there were few files in a folder rather than thousands.

    "Scaling in the structure of directory trees in a computer cluster" summarizes this problem as follows:

        The processes of storing and retrieving information are rapidly gaining importance in science as well as society as a whole [1, 2, 3, 4]. A considerable effort is being undertaken, firstly to characterize and describe how publicly available information, for example in the world wide web, is actually organized, and secondly, to design efficient methods to access this information.

        [1] R. M. Shiffrin and K. Börner, Proc. Natl. Acad. Sci. USA 101, 5183 (2004).
        [2] S. Lawrence and C. L. Giles, Nature 400, 107-109 (1999).
        [3] R. Ferrer i Cancho and R. V. Solé, Proc. R. Soc. London, Ser. B 268, 2261 (2001).
        [4] M. Sigman and G. A. Cecchi, Proc. Natl. Acad. Sci. USA 99, 1742 (2002).

    It goes on to explain how the data is usually organized by taking a general look at it, but judging from the abstract and conclusion it does not arrive at an approach that results in a productive organization of a directory hierarchy. So, in essence, this is a problem for which I haven't found a solution yet, and I would love to see a scientific solution to it.

    Searching further, I don't seem to find anything useful or any free papers that approach this problem, so it might be that I'm looking in the wrong place. I've also noted that there are different ways to term this problem, which lead to different sets of papers. Perhaps a paper is out there, but I'm just not using the same terms as that paper does? They often use more scientific terms.

    I once heard a story about an advocate with a laptop who simply outperformed an advocate who had tons of papers, which shows how proper organization leads to productivity; but that story didn't share details on how the advocate used the laptop or how he had organized his data. In any case, it was far more useful than how most of us organize our data these days...

    Don't advise me on how I should organize my data; I'm not looking for suggestions here. I would love to see statistics or scientific measurement approaches that help me confirm that a given organization does help me reach my goal.


  • How can I debug user-mode driver failures in Windows 8?

    - by Tom
    I have a 32 GB SD card. Whenever I insert this card into my newly upgraded Windows 8 laptop, the OS stops responding normally: Metro apps won't work, the system may or may not log in, and desktop apps may or may not be able to do things. When I remove the card and restart, all is fine. As soon as I put the card back in, the system starts misbehaving again.

    I've run Windows Update, so I have the latest drivers from Microsoft. This does not occur with the 8 GB cards I have; unfortunately I only have one 32 GB card, so I can't test with others. From examining the system event log, I've determined this is happening due to a user-mode driver failure. How can I best debug this issue from here? How can I figure out which driver is involved? Will there be a Dr. Watson crash dump somewhere?

    Details:

        System
          Provider Name:   Microsoft-Windows-DriverFrameworks-UserMode
          Provider Guid:   {2E35AAEB-857F-4BEB-A418-2E6C0E54D988}
          EventID:         10110
          Version:         1
          Level:           1
          Task:            64
          Opcode:          0
          Keywords:        0x2000000000000000
          TimeCreated:     2012-10-29T00:51:57.532718300Z
          EventRecordID:   40417
          Execution:       ProcessID 1056, ThreadID 3796
          Channel:         System
          Computer:        thebrain
          Security UserID: S-1-5-18
        UserData
          UMDFHostProblem lifetime: {811E3DC4-FBC6-420B-ABCC-AD7505A36F3B}
          Problem:         code 3, detectedBy 2
          ExitCode:        3
          Operation:       code 259
          Message:         72448
          Status:          4294967295

    Edit 1: I tried using DebugView from Sysinternals (you can get it here: http://technet.microsoft.com/en-us/sysinternals/bb896647.aspx). The output it gave was not especially helpful. Then I tried connecting WinDbg to WUDFHost.exe (the process that seems to host user-mode drivers) to see if it could catch the error. Get it here: http://msdn.microsoft.com/en-US/windows/hardware/hh852363 Instructions: http://msdn.microsoft.com/en-US/library/windows/hardware/ff554716(v=vs.85).aspx That didn't help much either; it didn't catch any exceptions as I'd hoped (which would have pointed me to the cause of the crash at least). Here's the stack of one of the threads: [stack dump not preserved in this copy]
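
    One way to dig through these events without clicking around Event Viewer is to filter the System log by the provider shown above. A hedged PowerShell sketch (the provider name is taken verbatim from the event details):

        # List recent user-mode driver framework events, newest first:
        Get-WinEvent -FilterHashtable @{
            LogName      = 'System'
            ProviderName = 'Microsoft-Windows-DriverFrameworks-UserMode'
        } -MaxEvents 50 | Format-List TimeCreated, Id, Message

    The full messages often identify the driver package the UMDF host was running, which narrows down which user-mode driver is crashing when the card is inserted.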


  • Why does "commit" appear in the mysql slow query log?

    - by Tom
    In our MySQL slow query logs I often see lines that just say "COMMIT". What causes a commit to take time? Another way to ask this question is: "How can I reproduce getting a slow commit statement with some test queries?"

    From my investigation so far I have found that if there is a slow query within a transaction, then it is the slow query that gets output into the slow log, not the commit itself.

    Testing in the mysql command-line client:

        mysql> begin;
        Query OK, 0 rows affected (0.00 sec)

        mysql> UPDATE members SET myfield=benchmark(9999999, md5('This is to slow down the update')) WHERE id = 21560;
        Query OK, 0 rows affected (2.32 sec)
        Rows matched: 1  Changed: 0  Warnings: 0

    At this point (before the commit) the UPDATE is already in the slow log.

        mysql> commit;
        Query OK, 0 rows affected (0.01 sec)

    The commit happens fast; it never appeared in the slow log. I also tried an UPDATE which changes a large amount of data, but again it was the UPDATE that was slow, not the COMMIT. However, I can reproduce a slow ROLLBACK that takes 46s and gets output to the slow log:

        mysql> begin;
        Query OK, 0 rows affected (0.00 sec)

        mysql> UPDATE members SET myfield=CONCAT(myfield,'TEST');
        Query OK, 481446 rows affected (53.31 sec)
        Rows matched: 481446  Changed: 481446  Warnings: 0

        mysql> rollback;
        Query OK, 0 rows affected (46.09 sec)

    I understand why rollback has a lot of work to do and therefore takes some time. But I'm still struggling to understand the COMMIT situation - i.e. why it might take a while.
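
    For what it's worth, the COMMIT itself mostly waits on durability: with innodb_flush_log_at_trx_commit=1 (and sync_binlog=1 if binary logging is on), every commit fsyncs to disk, and under I/O contention that flush can stall long enough to cross long_query_time. A hedged sketch of how one might try to reproduce it on a throwaway server, reusing the members table from above:

        mysql> SET GLOBAL long_query_time = 0.05;             -- catch even mildly slow statements
        mysql> SET GLOBAL innodb_flush_log_at_trx_commit = 1; -- fsync the redo log at every commit
        mysql> begin;
        mysql> UPDATE members SET myfield = CONCAT(myfield, 'x');
        mysql> commit;  -- run something like dd or fio against the same disk while this executes

    Under enough disk pressure it is the commit-time fsync, rather than the UPDATE, that exceeds the threshold and lands in the slow log. This is an assumption-based sketch, not a guaranteed reproduction.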


  • T42 Thinkpad, USB boot, CF for programs and storage?

    - by Tom K.
    Is this feasible? I have a Thinkpad T40. I'd like to get one of the tiny USB drives (like the Verbatim Store 'n' Stay series) of sufficient size (8 GB?) to handle booting a basic Linux OS, then put a 16 GB or larger memory card (SD or CF) in the PCMCIA slot, with the appropriate adapter, for additional application programs and data storage. I know I could get a used or refurbished hard drive, but I've had reliability issues in the past, and I don't believe that new IDE drives are still available.


  • Enterprise Tape Backup solutions

    - by Tom O'Connor
    I'm currently attempting to re-architect a backup solution where I'm working. We've got 2 NAS devices, one in the office, one in the datacentre. The servers in the DC back up to the DC NAS, which is then replicated to the office NAS. The office NAS exports shares as CIFS and NFS; this bit is fine. At some point I'll have to expand our storage capacity; currently we've got about 1.4TB of storage space, which is about 96% full.

    Previously, the tape backup was a script that ran tar a few times and squirted data onto a tape. It worked, but was by no means a perfect solution: restores are a bit of a pest, and adding new data to the backup requires editing the script as root. It's all a bit non-ideal.

    I've been evaluating a number of "enterprise-ready" backup solutions, such as Yosemite Backup from Barracuda, Acronis Backup/Restore, and something from Arkeia. In the process of evaluating these, I've found 2 big problems:

      1. Not all of them allow backup of mounted devices (such as an NFS-mounted NAS).
      2. Many of these applications don't like our tape device.

    For the most part, (1) is essential. Our NAS has a feeble processor and can't run applications like backup agents. I suspect that the biggest problem is the tape device, which is an HP C7438A DAT72 connected via USB.

    Questions:

      • Has anyone else got a USB DAT72 device working with similar software?
      • Is there a better way to back up data from an "appliance" NAS device on which you can't run an agent? (See the sketch below for the agentless pattern I mean.)
      • Would I be totally out of my mind to specify a cheap HP or Dell server with a couple of 1TB hard disks and a SAS card to then talk to an HP Ultrium (or similar) device? The biggest drawback to this would be cost (400ish for the server, 200 for the SAS connectivity and 1700 for an LTO4 device).

    Notes: I'd love to be able to say that I'd get rid of tapes entirely and use some form of hard-disk backup. In a previous job we had LaCie USB drives, which were decidedly unreliable.
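
    On the second question: since the NAS can't run an agent, the usual pattern is to mount its export on whichever machine owns the tape drive and stream from there. A minimal sketch, assuming an NFS export nas:/export and the DAT72 drive at /dev/st0 (both placeholders):

        # Mount the NAS export read-only and stream it to tape:
        mount -o ro nas:/export /mnt/nas
        tar -cvf /dev/st0 -C /mnt/nas .

        # Spot-check the tape contents afterwards:
        mt -f /dev/st0 rewind
        tar -tvf /dev/st0 | head

    Any commercial product that can back up an arbitrary local path should be able to do the same through such a mount, which sidesteps the agent problem entirely.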


  • How many disks is too many in this RAID 5 configuration?

    - by Tom
    HP 2012i SAN, 7 disks in RAID 5 with 1 hot spare; it took several days to expand the volume from 5 to 7 300GB SAS drives. I'm looking for suggestions about when and how I would determine that having 2 volumes in the SAN, each with RAID 5, would be better. I can add 3 more drives to the controller someday. The SAN is used for ESX/vSphere VMs. Thank you.


  • JDBCRealm can't find sqlite file

    - by Tom A
    My authentication fails with java.sql.SQLException: no such table: credentials, where credentials is the name of the user/password table. I have checked the db file and the table is there. I think you also get this error when the SQLite JDBC driver can't even find the file. I am specifying my realm in a META-INF/context.xml file. Is there any trick to getting the path right? I have tried just about everything I can think of.
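
    A sketch of the kind of Realm entry this usually takes; everything except the class names is a placeholder to adapt. The detail that most often bites with SQLite is the path: a relative path in the JDBC URL resolves against Tomcat's working directory, so an absolute path is safer:

        <!-- META-INF/context.xml (hypothetical paths and column names) -->
        <Context>
            <Realm className="org.apache.catalina.realm.JDBCRealm"
                   driverName="org.sqlite.JDBC"
                   connectionURL="jdbc:sqlite:/absolute/path/to/auth.db"
                   userTable="credentials" userNameCol="username" userCredCol="password"
                   userRoleTable="roles" roleNameCol="rolename"/>
        </Context>

    If the table exists but the error persists, it may be that the driver silently created a brand-new empty database at whatever path it actually resolved, which is consistent with getting "no such table" rather than a file-not-found error.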


  • VMware vSphere: setting up an external network

    - by Tom Beech
    I've just set up VMware vSphere 5 on a remote server (a rented dedicated server) and added my first VPS (CentOS 5.8), barebones. It's not finding any IP (internal or external) on boot. I've had an extra external IP assigned to my server that I wanted to use on the VPS. I tried editing the eth0 config, adding the IP in there and turning off DHCP, but it can't find any IP, ping Google, or do any networking at all. How do I route the IP to my VPS so I can access it remotely?
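
    For the guest side, a static setup on CentOS 5.8 lives in /etc/sysconfig/network-scripts/ifcfg-eth0. A hedged sketch with placeholder addresses; the netmask and gateway must be whatever the hosting provider assigns for the extra IP (some providers route additional IPs through the host's own gateway, so their documentation decides this part):

        DEVICE=eth0
        ONBOOT=yes
        BOOTPROTO=static
        # placeholder values below; substitute the provider-assigned ones:
        IPADDR=203.0.113.10
        NETMASK=255.255.255.0
        GATEWAY=203.0.113.1

    Then restart networking with service network restart. On the host side, the VM's virtual NIC also has to sit on a vSwitch port group bridged to the physical uplink, otherwise no guest configuration will get packets out.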


  • Need help diagnosing my machine

    - by Tom Collins
    I have something that slows my computer to a crawl sometimes, even when I'm not running anything big. Yesterday all I had running (besides background apps) were Firefox and Windows Explorer, and I could barely even switch screens. Nothing shows up in Task Manager as hogging CPUs. I have all non-essential services (MySQL & MSSQL) stopped unless I need them. I made some restore points not long ago, but they disappeared. This is a development machine with a LOT of apps installed, so I really, really do not want to reinstall Windows.

    So, what I'm looking for are ideas or tools I can use to help diagnose this problem. The only clues I have are that this started right after I:

      • installed Office 2013 (with Office 2010 still installed as well),
      • installed Visual Studio 2012 (also keeping 2010 as a co-install), and
      • installed MSSQL 2012 (upgraded from 2008, no co-install).

    Also, the computer runs fine in Safe Mode. I've just run out of ideas of what to check. Any help or suggestions would be much appreciated. Thanks.

    P.S. I'm running Win 7 Pro (x64). Office is also 64-bit. Visual Studio & MSSQL are 64-bit if that option was available (not sure).


  • So how does one use RockMongo to connect to a MongoDB sharded setup with replica sets?

    - by Tom
    I'm trying to use RockMongo to connect to our cluster. Our setup consists of two shards, each of which is a replica set. I try to connect to the mongos instance, and while RockMongo connects, I get an error when trying to list the dbs:

        Execute failed: not master
        function () { return db.getCollectionNames(); }

    This has something to do with the replica sets, and everybody points to:

        $MONGO["servers"][$i] = array("replicaSet" => "xxxxx");

    This is all fine, but I have two replica sets, and as far as I understand it I should connect to the mongos instance and not directly to the members of a set. So how does one use RockMongo to connect to a MongoDB sharded setup with replica sets?


  • Can access SSH but can't access cPanel web server

    - by Tom
    I've built a CentOS 6.0 VPS and then installed the latest cPanel/WHM. This isn't my first installation, but I've noticed something weird, especially as I've never used the 6.0 version: when I tried to install cPanel it didn't recognize wget, so I installed it; then cPanel said that Perl wasn't installed, so I installed that too, and the installation went well from then on.

    Now, when I try to access the server via the browser with the IP address as I used to, it doesn't work; it just loads forever. I tried the 2087 port, still the same, but SSH works. I've also tried the commands to start the web server manually, but none of them worked. How can I fix this?

    Edit: iptables -nL result:

        root@server [~]# iptables -nL
        Chain INPUT (policy ACCEPT)
        target     prot opt source              destination
        acctboth   all  --  0.0.0.0/0           0.0.0.0/0
        ACCEPT     all  --  0.0.0.0/0           0.0.0.0/0        state RELATED,ESTABLISHED
        ACCEPT     icmp --  0.0.0.0/0           0.0.0.0/0
        ACCEPT     all  --  0.0.0.0/0           0.0.0.0/0
        ACCEPT     tcp  --  0.0.0.0/0           0.0.0.0/0        state NEW tcp dpt:22
        REJECT     all  --  0.0.0.0/0           0.0.0.0/0        reject-with icmp-host-prohibited

        Chain FORWARD (policy ACCEPT)
        target     prot opt source              destination
        REJECT     all  --  0.0.0.0/0           0.0.0.0/0        reject-with icmp-host-prohibited

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source              destination
        acctboth   all  --  0.0.0.0/0           0.0.0.0/0

        Chain acctboth (2 references)
        target     prot opt source              destination
                   tcp  --  216.119.149.168     0.0.0.0/0        tcp dpt:80
                   tcp  --  0.0.0.0/0           216.119.149.168  tcp spt:80
                   tcp  --  216.119.149.168     0.0.0.0/0        tcp dpt:25
                   tcp  --  0.0.0.0/0           216.119.149.168  tcp spt:25
                   tcp  --  216.119.149.168     0.0.0.0/0        tcp dpt:110
                   tcp  --  0.0.0.0/0           216.119.149.168  tcp spt:110
                   icmp --  216.119.149.168     0.0.0.0/0
                   icmp --  0.0.0.0/0           216.119.149.168
                   tcp  --  216.119.149.168     0.0.0.0/0
                   tcp  --  0.0.0.0/0           216.119.149.168
                   udp  --  216.119.149.168     0.0.0.0/0
                   udp  --  0.0.0.0/0           216.119.149.168
                   all  --  216.119.149.168     0.0.0.0/0
                   all  --  0.0.0.0/0           216.119.149.168
                   all  --  0.0.0.0/0           0.0.0.0/0
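
    Worth noting: the INPUT chain above only explicitly accepts established traffic and new connections to port 22, then ends in a blanket REJECT, which would produce exactly this "SSH works, web doesn't" symptom. A hedged diagnostic (these rules are not persistent across reboots, and cPanel normally manages its own firewall, so treat this as a test rather than a fix):

        # Temporarily allow HTTP and the WHM port, inserted above the REJECT rule:
        iptables -I INPUT -p tcp --dport 80 -j ACCEPT
        iptables -I INPUT -p tcp --dport 2087 -j ACCEPT

    If the browser reaches the server after that, the firewall rules, not cPanel itself, were the problem.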


  • Apache name-based virtual hosts - two domains and SSL

    - by Tom
    I'm trying to set up Apache (2.2.3) to run two websites with SSL, using both different domains and different IP addresses. Both websites run fine on port 80, but when I tried to enable SSL for website2 I got an ssl_error_bad_cert_domain error; website2 picks up the SSL cert for website1. Here is my setup in httpd.conf:

        # Website1
        NameVirtualHost 192.168.10.1:80
        <VirtualHost 192.168.10.1:80>
            DocumentRoot /var/www/html
            ServerName www.website1.org
        </VirtualHost>

        NameVirtualHost 192.168.10.1:443
        <VirtualHost 192.168.10.1:443>
            SSLEngine On
            SSLCertificateFile conf/ssl/website1.cer
            SSLCertificateKeyFile conf/ssl/website1.key
        </VirtualHost>

        # Website2
        NameVirtualHost 192.168.10.2:80
        <VirtualHost 192.168.10.2:80>
            DocumentRoot /var/www/html/chart
            ServerName www.website2.org
        </VirtualHost>

        NameVirtualHost 192.168.10.2:443
        <VirtualHost 192.168.10.2:443>
            SSLEngine On
            SSLCertificateFile conf/ssl/website2.cer
            SSLCertificateKeyFile conf/ssl/website2.key
        </VirtualHost>

    Update: in answer to Shane (this wouldn't fit in the comment box), here is the output from apachectl -S:

        VirtualHost configuration:
        192.168.10.2:80     is a NameVirtualHost
                default server www.website2.org (/etc/httpd/conf/httpd.conf:1033)
                port 80 namevhost www.website2.org (/etc/httpd/conf/httpd.conf:1033)
        192.168.10.2:443    is a NameVirtualHost
                default server bogus_host_without_reverse_dns (/etc/httpd/conf/httpd.conf:1040)
                port 443 namevhost bogus_host_without_reverse_dns (/etc/httpd/conf/httpd.conf:1040)
        192.168.10.1:80     is a NameVirtualHost
                default server www.website1.org (/etc/httpd/conf/httpd.conf:1017)
                port 80 namevhost www.website1.org (/etc/httpd/conf/httpd.conf:1017)
        192.168.10.1:443    is a NameVirtualHost
                default server bogus_host_without_reverse_dns (/etc/httpd/conf/httpd.conf:1024)
                port 443 namevhost bogus_host_without_reverse_dns (/etc/httpd/conf/httpd.conf:1024)
        wildcard NameVirtualHosts and _default_ servers:
        _default_:443       192.168.10.1 (/etc/httpd/conf.d/ssl.conf:81)
        Syntax OK
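
    Two things stand out in that apachectl -S output: neither :443 vhost declares a ServerName or DocumentRoot (hence "bogus_host_without_reverse_dns"), and a _default_:443 vhost from ssl.conf is still active on 192.168.10.1, where it can shadow the intended certificate. A hedged sketch of what the website2 SSL vhost could look like, with the default SSL vhost in /etc/httpd/conf.d/ssl.conf commented out:

        <VirtualHost 192.168.10.2:443>
            DocumentRoot /var/www/html/chart
            ServerName www.website2.org
            SSLEngine On
            SSLCertificateFile conf/ssl/website2.cer
            SSLCertificateKeyFile conf/ssl/website2.key
        </VirtualHost>

    Since each site has its own IP here, name-based matching isn't strictly required for SSL; the key is that each :443 request ends up in a vhost that names the right host and certificate.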


  • What web-based tool allows a non-technical user to manage authorized_keys files on a Linux (Fedora/CentOS/Ubuntu/Debian) server?

    - by Tom H
    (Edit: clarification below.) We have a number of groups of developers that change frequently, and a security policy requiring individual logins to servers using RSA or DSA public keys, which is achieved via the standard method of adding each id_dsa.pub to the user's authorized_keys file. I am using Chef to sync the user accounts across machines; however, our previous method of using Webmin to manage user passwords was not designed for key-based auth, and hence is not easy for non-technical users.

    The developers log in from the WAN using ssh; they can either provide their own key, or an administrator will send them a private key. The development machines are located in the cloud, and we have a single server available to host the master set of accounts. Obviously I could deploy LDAP or another centralised authentication system, but that seems overblown when Webmin worked well for the simple case. It is easy to achieve synchronised users, groups and passwords across a bunch of low-security development boxes using Webmin clustered users and groups. However, in the currently installed Webmin it is not as easy to manage authorized keys as it is to create user accounts and passwords. (It's possible, but it's not easy - some functionality is in the Usermin module, or would require some tedious steps.)

    Ideally I'd like a web interface that is pretty much dedicated to creating users and groups, can generate key pairs on the fly, and can accept pasted-in public keys to add to a user's authorized_keys file. If the tool synced the users and keys as well, that would be great, but I can use Chef to do that part if the accounts are created correctly on the "master" server.
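
    For reference, whatever tool ends up in front, it only has to perform the equivalent of this sketch when a key is pasted in (devuser is a placeholder account name; the chmods matter because sshd refuses keys in group- or world-writable locations):

        # Append a pasted public key for one developer and fix permissions:
        mkdir -p ~devuser/.ssh
        cat id_dsa.pub >> ~devuser/.ssh/authorized_keys
        chmod 700 ~devuser/.ssh
        chmod 600 ~devuser/.ssh/authorized_keys
        chown -R devuser:devuser ~devuser/.ssh

    That makes the requirement on the web tool fairly modest: a form field per user that appends to one file and normalizes permissions.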


  • General Website Security

    - by Tom
    I pay monthly for a website hosting service that provides me with PHP and FTP support. I can upload my files and create directories and such. Now, I am wondering: if I upload a folder full of images, or music, basically personal stuff, to my website and name it 'junk1234', can other people find it? Or even search engines? If so, how would I restrict anyone but those who know the folder name from seeing the files in it? Possibly .htaccess files? I also have cPanel installed.
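
    An obscure name is not protection by itself (directory indexes, shared links and referrer headers all leak), so an .htaccess file in the folder is the usual answer. Two hedged sketches for an Apache 2.2-era shared host; the paths are placeholders:

        # junk1234/.htaccess - simplest: block all web access to the folder
        Deny from all

        # Or password-protect it instead (first run: htpasswd -c /home/account/.htpasswds/junk me):
        AuthType Basic
        AuthName "Private"
        AuthUserFile /home/account/.htpasswds/junk
        Require valid-user

    cPanel's "Password Protect Directories" feature generates essentially the second variant, so that may be the easiest route here.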


  • High Availability with 2 servers?

    - by Tom R
    Is it possible to have a high-availability setup with just 2 servers, running Windows Web Server 2008 and MSSQL Web Edition (as I know Express isn't supported)? We're getting to the point where our one dedicated server needs to be scaled out, and going to a second server already more than doubles the cost, since we'd need to use Web Edition rather than Express (the db is only 500MB).


  • rsyslogd not monitoring all files

    - by Tom O'Connor
    So... I've installed Logstash, and instead of using the Logstash shipper (because it needs the JVM and is generally massive), I'm using rsyslogd with the following configuration:

        # Use traditional timestamp format
        $ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat
        $IncludeConfig /etc/rsyslog.d/*.conf

        # Provides kernel logging support (previously done by rklogd)
        $ModLoad imklog
        # Provides support for local system logging (e.g. via logger command)
        $ModLoad imuxsock

        # Log all kernel messages to the console.
        # Logging much else clutters up the screen.
        #kern.* /dev/console

        # Log anything (except mail) of level info or higher.
        # Don't log private authentication messages!
        *.info;mail.none;authpriv.none;cron.none;local6.none /var/log/messages

        # The authpriv file has restricted access.
        authpriv.* /var/log/secure

        # Log all the mail messages in one place.
        mail.* -/var/log/maillog

        # Log cron stuff
        cron.* /var/log/cron

        # Everybody gets emergency messages
        *.emerg *

        # Save news errors of level crit and higher in a special file.
        uucp,news.crit /var/log/spooler

        # Save boot messages also to boot.log
        local7.* /var/log/boot.log

    In /etc/rsyslog.d/logstash.conf there are 28 file-monitor blocks using imfile:

        $ModLoad imfile   # Load the imfile input module
        $ModLoad imklog   # for reading kernel log messages
        $ModLoad imuxsock # for reading local syslog messages

        $InputFileName /var/log/rabbitmq/startup_err
        $InputFileTag rmq-err:
        $InputFileStateFile state-rmq-err
        $InputFileFacility local6
        $InputRunFileMonitor

        ....

        $InputFileName /var/log/some.other.custom.log
        $InputFileTag cust-log:
        $InputFileStateFile state-cust-log
        $InputFileFacility local6
        $InputRunFileMonitor

        ....

        *.* @@10.90.0.110:5514

    There are 28 InputFileMonitor blocks, each monitoring a different custom application logfile. If I run:

        [root@secret-gm02 ~]# lsof | grep rsyslog
        rsyslogd 5380 root cwd   DIR  253,0       4096          2 /
        rsyslogd 5380 root rtd   DIR  253,0       4096          2 /
        rsyslogd 5380 root txt   REG  253,0     278976    1015955 /sbin/rsyslogd
        rsyslogd 5380 root mem   REG  253,0      58400    1868123 /lib64/libgcc_s-4.1.2-20080825.so.1
        rsyslogd 5380 root mem   REG  253,0     144776    1867778 /lib64/ld-2.5.so
        rsyslogd 5380 root mem   REG  253,0    1718232    1867780 /lib64/libc-2.5.so
        rsyslogd 5380 root mem   REG  253,0      23360    1867787 /lib64/libdl-2.5.so
        rsyslogd 5380 root mem   REG  253,0     145872    1867797 /lib64/libpthread-2.5.so
        rsyslogd 5380 root mem   REG  253,0      85544    1867815 /lib64/libz.so.1.2.3
        rsyslogd 5380 root mem   REG  253,0      53448    1867801 /lib64/librt-2.5.so
        rsyslogd 5380 root mem   REG  253,0      92816    1868016 /lib64/libresolv-2.5.so
        rsyslogd 5380 root mem   REG  253,0      20384    1867990 /lib64/rsyslog/lmnsd_ptcp.so
        rsyslogd 5380 root mem   REG  253,0      53880    1867802 /lib64/libnss_files-2.5.so
        rsyslogd 5380 root mem   REG  253,0      23736    1867800 /lib64/libnss_dns-2.5.so
        rsyslogd 5380 root mem   REG  253,0      20768    1867988 /lib64/rsyslog/lmnet.so
        rsyslogd 5380 root mem   REG  253,0      11488    1867982 /lib64/rsyslog/imfile.so
        rsyslogd 5380 root mem   REG  253,0      24040    1867983 /lib64/rsyslog/imklog.so
        rsyslogd 5380 root mem   REG  253,0      11536    1867987 /lib64/rsyslog/imuxsock.so
        rsyslogd 5380 root mem   REG  253,0      13152    1867989 /lib64/rsyslog/lmnetstrms.so
        rsyslogd 5380 root mem   REG  253,0       8400    1867992 /lib64/rsyslog/lmtcpclt.so
        rsyslogd 5380 root 0r    REG    0,3          0 4026531848 /proc/kmsg
        rsyslogd 5380 root 1u   IPv4 1200589517    0t0 TCP 10.10.10.90:40629->10.10.10.90:5514 (ESTABLISHED)
        rsyslogd 5380 root 2u   IPv4 1200589527    0t0 UDP *:45801
        rsyslogd 5380 root 3w    REG  253,3   17999744    2621483 /var/log/messages
        rsyslogd 5380 root 4w    REG  253,3      13383    2621484 /var/log/secure
        rsyslogd 5380 root 5w    REG  253,3       7180    2621493 /var/log/maillog
        rsyslogd 5380 root 6w    REG  253,3      43321    2621529 /var/log/cron
        rsyslogd 5380 root 7w    REG  253,3          0    2621494 /var/log/spooler
        rsyslogd 5380 root 8w    REG  253,3          0    2621495 /var/log/boot.log
        rsyslogd 5380 root 9r    REG  253,3 1064271998    2621464 /var/log/custom-application.monolog.log
        rsyslogd 5380 root 10u  unix 0xffff81081fad2e40 0t0 1200589511 /dev/log

    you can see that nowhere near 28 logfiles are actually being read. I really had to get one file monitored, so I moved it to the top and it got picked up, but I'd like to be able to monitor all 28+ files and not have to worry.

    OS is CentOS 5.5, kernel 2.6.18-308.el5, rsyslogd 3.22.1, compiled with:

        FEATURE_REGEXP:                         Yes
        FEATURE_LARGEFILE:                      Yes
        FEATURE_NETZIP (message compression):   Yes
        GSSAPI Kerberos 5 support:              Yes
        FEATURE_DEBUG (debug build, slow code): No
        Atomic operations supported:            Yes
        Runtime Instrumentation (slow code):    No

    Questions:

      • Why is rsyslogd only monitoring a very small subset of the files?
      • How can I fix this so that all the files are monitored?


  • Is it possible to use rsync over sftp (without an ssh shell)?

    - by Tom Feiner
    Rsync over ssh works great every time. However, trying to rsync to a host which allows only sftp logins, but not ssh logins, produces the following error:

        rsync -av -e ssh /source user@remotehost:/target/
        protocol version mismatch -- is your shell clean?
        (see the rsync man page for an explanation)
        rsync error: protocol incompatibility (code 2) at compat.c(171) [sender=3.0.6]

    Here's the relevant section from the rsync man page:

        This message is usually caused by your startup scripts or remote shell facility producing unwanted garbage on the stream that rsync is using for its transport. The way to diagnose this problem is to run your remote shell like this:

            ssh remotehost /bin/true > out.dat

        then look at out.dat. If everything is working correctly then out.dat should be a zero length file. If you are getting the above error from rsync then you will probably find that out.dat contains some text or data. Look at the contents and try to work out what is producing it. The most common cause is incorrectly configured shell startup scripts (such as .cshrc or .profile) that contain output statements for non-interactive logins.

    Trying this on my system produced the following in out.dat:

        ssh-dummy-shell: Command not allowed.

    As I thought, the host is not allowing ssh logins. The following link shows that it is possible to accomplish this task using FUSE with sshfs - however it is extremely slow and not fit for production use. Is there any chance of getting rsync over sftp to work?
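
    For completeness, the sshfs workaround mentioned above looks roughly like this; a sketch with placeholder paths (slow, because every block is shuttled through FUSE and the sftp subsystem, but it needs nothing on the remote side beyond sftp):

        # Mount the remote target over SFTP, rsync into the mount, then detach:
        sshfs user@remotehost:/target /mnt/remote
        rsync -av /source/ /mnt/remote/
        fusermount -u /mnt/remote

    Note that rsync's delta-transfer algorithm buys little here, since the "remote" files have to be read back over the same sftp link to compute checksums.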

