Search Results

Search found 3070 results on 123 pages for 'cron jobs'.

Page 98/123

  • Rkhunter reports file properties have changed

    - by CountMurphy
    I am running a fully updated LTS copy of Ubuntu Server. Today I ran rkhunter (as I do from time to time). This is the output I got:

        Warning: The file properties have changed:
        [15:52:25] File: /bin/ps
        [15:52:25] Current hash: f22991ec93ae966c856d367f42fc3d8a484bd827
        [15:52:25] Stored hash : 1892268bf195ac118076b1b0f53e7a637eb6fbb3
        [15:52:25] Current inode: 142902    Stored inode: 130894
        [15:52:25] Current file modification time: 1324307913 (19-Dec-2011 07:18:33)
        [15:52:25] Stored file modification time : 1260992081 (16-Dec-2009 11:34:41)

        Warning: The file properties have changed:
        [15:52:33] File: /usr/bin/ldd
        [15:52:33] Current hash: f1e2ca5aa3a28994e2cebb64c993a72b7d97b28c
        [15:52:33] Stored hash : 295d9cedb121a5e431a39a6d201ecd7ce5640497
        [15:52:33] Current inode: 2236210    Stored inode: 2234359
        [15:52:33] Current size: 5280    Stored size: 5279
        [15:52:33] Current file modification time: 1331165514 (07-Mar-2012 16:11:54)
        [15:52:33] Stored file modification time : 1295653965 (21-Jan-2011 15:52:45)

        Warning: The file properties have changed:
        [15:52:37] File: /usr/bin/pgrep
        [15:52:37] Current hash: 3eada9a96760f3e2c9111cfe32901d1432813c1d
        [15:52:37] Stored hash : ce265d0db9964b173fe5036f703a9b8d66e55df3
        [15:52:37] Current inode: 2229646    Stored inode: 2224867
        [15:52:37] Current file modification time: 1324307913 (19-Dec-2011 07:18:33)
        [15:52:37] Stored file modification time : 1260992081 (16-Dec-2009 11:34:41)

        Warning: The file properties have changed:
        [15:52:41] File: /usr/bin/top
        [15:52:41] Current hash: 6be13737d8b0950cea2f1ae3a46d4af713dbe971
        [15:52:41] Stored hash : c7b495ecef3982eeb6f08a511861b1a1ae8775e6
        [15:52:41] Current inode: 2229629    Stored inode: 2224862
        [15:52:41] Current file modification time: 1324307913 (19-Dec-2011 07:18:33)
        [15:52:41] Stored file modification time : 1260992081 (16-Dec-2009 11:34:41)

        Warning: The file properties have changed:
        [15:52:53] File: /usr/sbin/cron
        [15:52:53] Current hash: e783ca973f970aa8a4bf5edc670e690b33914c3d
        [15:52:53] Stored hash : 4718257a8060736b9058aed025c992f02a74a5a7
        [15:52:53] Current inode: 2224719    Stored inode: 2228839
        [15:52:54] Current file modification time: 1330965568 (05-Mar-2012 08:39:28)

    There were also a few others I left out. Has my server been rooted? I am running fail2ban and I do monitor failed SSH logins; nothing has come up. Could someone compare these hashes to their copy of Ubuntu Server (LTS)? Please tell me these are false positives.

    Edit: Is there something else like rkhunter I can run for a second scan?
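
    For a second opinion, Debian and Ubuntu record an MD5 sum for every packaged file, which is what the debsums tool checks (debsums -c lists changed files). A minimal sketch of the same verification by hand, assuming the standard dpkg layout; the flagged paths are taken from the output above:

        import hashlib
        import subprocess

        # Files rkhunter flagged; dpkg -S resolves which package owns each one.
        FLAGGED = ["/bin/ps", "/usr/bin/ldd", "/usr/bin/pgrep",
                   "/usr/bin/top", "/usr/sbin/cron"]

        def owning_package(path):
            # dpkg -S prints "package: /path"
            out = subprocess.check_output(["dpkg", "-S", path]).decode()
            return out.split(":")[0]

        def md5_of(path):
            h = hashlib.md5()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(8192), b""):
                    h.update(chunk)
            return h.hexdigest()

        def recorded_md5(package, path):
            # dpkg stores per-package checksums as "<md5>  <path relative to />"
            sums = "/var/lib/dpkg/info/%s.md5sums" % package
            rel = path.lstrip("/")
            with open(sums) as f:
                for line in f:
                    digest, _, name = line.strip().partition("  ")
                    if name == rel:
                        return digest
            return None

        for path in FLAGGED:
            pkg = owning_package(path)
            status = "OK" if recorded_md5(pkg, path) == md5_of(path) else "MISMATCH"
            print("%-20s %-10s %s" % (path, pkg, status))

    Note that checksums stored on the host itself prove little against a root-level attacker, who could alter the dpkg database too; comparing against freshly downloaded copies of the packages from the Ubuntu mirrors is the stronger test.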

    Read the article

  • What are the differences between enterprise software/architecture patterns and open source software?

    - by Jeffrey
    I am mainly a business app developer and I hear terms like CQRS, ServiceBus, SOA, DDD, BDD, and AOP a lot. My question is: do these patterns/practices exist only in the "enterprise" world? In contrast to the enterprise world is the open source community. Whenever there is an article about how highly trafficked sites like Digg or LiveJournal built and scaled their systems, all I hear is what open source software (Memcached, NoSQL) they used to scale or simplify the way they tackle software problems, and they rarely mention the terms above. Is it because they are not as sophisticated as enterprise-level software (I doubt it)? Or are people just making up those terms/practices/patterns in order to keep their jobs? Or am I confusing the difference between software development and website scaling?

    Read the article

  • postfix received and delivered have the same values (?)

    - by thinkingbig
    I have configured my first server (Debian with ISPConfig). Generally I want to send bulk e-mails to our users, so I configured Postfix and turned it on... but... after one hour of sending e-mails I have logs like this:

        Grand Totals
        ------------
        messages
          21886   received
          21883   delivered
              0   forwarded
              0   deferred
            234   bounced
              0   rejected (0%)
              0   reject warnings
              0   held
              0   discarded (0%)
         30805k   bytes received
         31280k   bytes delivered
              3   senders
              3   sending hosts/domains
          12588   recipients
              3   recipient hosts/domains

        Per-Hour Traffic Summary
            time       received  delivered  deferred  bounced  rejected
            ------------------------------------------------------------
            0000-0100         0          0         0        0         0
            0100-0200         0          0         0        0         0
            0200-0300         0          0         0        0         0
            0300-0400         0          0         0        0         0
            0400-0500         0          0         0        0         0
            0500-0600         0          0         0        0         0
            0600-0700         0          0         0        0         0
            0700-0800         0          0         0        0         0
            0800-0900         0          0         0        0         0
            0900-1000         0          0         0        0         0
            1000-1100         0          0         0        0         0
            1100-1200         0          0         0        0         0
            1200-1300         0          0         0        0         0
            1300-1400         0          0         0        0         0
            1400-1500         0          0         0        0         0
            1500-1600     15311      15306         0      168         0
            1600-1700      6575       6577         0       66         0
            1700-1800         0          0         0        0         0
            1800-1900         0          0         0        0         0
            1900-2000         0          0         0        0         0
            2000-2100         0          0         0        0         0
            2100-2200         0          0         0        0         0
            2200-2300         0          0         0        0         0
            2300-2400         0          0         0        0         0

        Host/Domain Summary: Message Delivery
            sent cnt   bytes    defers   avg dly   max dly   host/domain
            --------   ------   ------   -------   -------   -----------
               21521   30353k        0     3.4 m    15.5 m   wp.pl
                 355     919k        0    54.9 s    13.0 m   mysenderdomainexample.pl
                   7     8477        0     1.7 s     1.9 s   prokonto.pl

        Host/Domain Summary: Messages Received
            msg cnt    bytes    host/domain
            -------    ------   -----------
              21879   30786k    mysenderdomainexample.pl
                  5    16196    mx4.wp.pl
                  1     3200    mx3.wp.pl

        Senders by message count
            21783   [email protected]
               96   [email protected]
                6   from=<

    So, my questions are:

    1) Why do received and delivered have (approximately) the same values?
    2) How can I check whether an e-mail has actually been delivered?
    3) How do I change the default "root" and "www-data" user (From / Return-Path) to another one? I have changed this in the script, but Postfix ignores the script's values and sends every mail from root (we have PHP send crons in /etc/crontab).
    4) Why have approximately 100% of received mails been addressed to my sender host?

    Waiting for a response. Regards, TB

    Read the article

  • MySQL break out group clause from subquery

    - by Anton Gildebrand
    Here is my query:

        SELECT COALESCE(js.name, 'Lead saknas'), COUNT(j.id)
        FROM jobs j
        LEFT JOIN job_sources js ON j.job_source = js.id
        LEFT JOIN (SELECT * FROM quotes GROUP BY job_id) q ON j.id = q.job_id
        GROUP BY j.job_source

    The problem is that each job is allowed to have more than one quote, which is why I group the quotes by job_id. Sure, this works, but I don't like the solution with a subquery. How can I move the grouping out of the subquery and into the main query? I have tried adding q.job_id to the main GROUP BY clause, both before and after the existing column, but I don't get the same results.

    Read the article

  • "lock request time out period exceeded" Error When Trying to See DB Hierarchies

    - by Lloyd Banks
    I have a DB against which I can run basic queries (albeit much slower than normal). When I try to expand the hierarchy trees for tables, views, or procedures in SSMS Object Explorer, I get the "lock request time out period exceeded" error. My Report Server reports that run off objects in this DB no longer complete, and jobs associated with procedures stored in this DB also do not run. I tried using sp_who2 to find and kill all connections on the DB, but this has not solved the problem. What is going on here, and how can I resolve it?

    Read the article

  • download and process a file by ftp at set intervals, with error handling, rescheduling and status messages

    - by compound eye
    I want to download a data file from a remote FTP server to my machine at regular intervals. Once the file is downloaded I want to call another script which will process the file. My development machine is Mac OS X; the eventual deployment environment is Linux. What would be the stock-standard way to automate this? I know I can use cron to schedule curl to download the file and to run a script that processes it at regular intervals, and I know I could write a slightly more complex script or application that does this and adds error handling, rescheduling and status e-mails. But one of my requirements for this project is to write as little custom code as possible; instead I should try to use standard, tried-and-true existing tools, and if I do have to write code, to write the most straightforward code possible. The reason is that the code will potentially be installed on a large number of machines, all of which will need to be tweaked, customised and maintained by different people long after I am gone from the project, so the intention is to use well-documented, well-supported tools as much as possible. This seems such a common task that there must be tools and scripts all over the internet, written by people who have carefully considered everything that could possibly go wrong when you need to download and process a file from a remote server at regular intervals, with error handling, rescheduling and status messages. Is that what Expect is for? What would you recommend? (The system will be downloading weather prediction data every six hours, so that the system can prepare in the event of bad weather warnings.)
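
    For a sense of scale, the roll-your-own fallback the question hopes to avoid is roughly this much code: a cron-driven fetch with retries, relying on cron's own MAILTO mailing for status messages. A sketch only; every host, path and name here is illustrative:

        # crontab entry: 15 */6 * * * /usr/local/bin/fetch_forecast.py
        import ftplib
        import subprocess
        import sys
        import time

        HOST = "ftp.example.com"                       # all values illustrative
        REMOTE = "forecast/latest.grib"
        LOCAL = "/var/data/latest.grib"
        PROCESSOR = "/usr/local/bin/process-forecast"  # hypothetical downstream script
        RETRIES, DELAY = 3, 300                        # three attempts, five minutes apart

        def fetch():
            ftp = ftplib.FTP(HOST, timeout=60)
            ftp.login()                                # anonymous; pass user/password otherwise
            f = open(LOCAL, "wb")
            try:
                ftp.retrbinary("RETR " + REMOTE, f.write)
            finally:
                f.close()
                ftp.close()

        for attempt in range(1, RETRIES + 1):
            try:
                fetch()
                subprocess.check_call([PROCESSOR, LOCAL])
                sys.exit(0)                            # success: no output, cron stays silent
            except Exception as exc:
                sys.stderr.write("attempt %d/%d failed: %s\n" % (attempt, RETRIES, exc))
                time.sleep(DELAY)

        sys.exit(1)  # cron mails any output to MAILTO; non-zero exit flags the failure

    Standard tools cover most of this without custom code: wget and curl both have retry flags, and cron's MAILTO handles the status e-mail.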

    Read the article

  • How to convert a submitted form array to this array in PHP?

    - by drfanai
    Let's suppose I have the following array submitted by an HTML form:

        array(
            'firstname' => array('Sara', 'Jim'),
            'lastname'  => array('Gibson', 'Jobs')
        );

    What I want to achieve is the following array:

        array(
            array(
                'firstname' => 'Sara',
                'lastname'  => 'Gibson'
            ),
            array(
                'firstname' => 'Jim',
                'lastname'  => 'Jobs'
            )
        );

    I need a function that restructures the array automatically by processing the array data, not manually by entering the data.
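
    The reshaping itself is independent of the language; as an illustration, the same column-to-row pivot in Python (the question itself is PHP, where an indexed foreach over one field achieves the same thing):

        form = {
            "firstname": ["Sara", "Jim"],
            "lastname":  ["Gibson", "Jobs"],
        }

        # zip(*form.values()) walks the field columns in parallel, so each
        # tuple holds the i-th value of every field; pair it with the keys.
        rows = [dict(zip(form.keys(), values)) for values in zip(*form.values())]

        print(rows)
        # [{'firstname': 'Sara', 'lastname': 'Gibson'},
        #  {'firstname': 'Jim', 'lastname': 'Jobs'}]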

    Read the article

  • iPad as programming platform--What future do touch screens have with programming?

    - by user94154
    I read this question a few weeks ago, and I thought about it when I first saw the iPad. Do you think it would be possible to set up a development environment on the iPad? I think it would be awesome if there were an InstantRails app or a Django app; maybe even 280 North's Atlas could run on it :). Would you develop using an on-screen keyboard and a 10-inch screen? Steve Jobs seems to think touch screens are the future of web browsing. What future does touch have with programming?

    Read the article

  • Regular expression, .NET flavor

    - by user1440109
    Don't ask how this works, but currently it does ("^\|(.?)\|*$")... kinda. This removes all the extra pipes; that's part one. I have searched all over; no answer yet. I am using VB2011 beta, an ASP.NET web form, and VB code-behind. I want to capture the special pipe character (|), which is used to separate words, i.e. car|truck|van|cycle. The problem is that users lead with pipes, trail with them, use multiples, and put spaces before and after them, i.e. |||car||truck | van || cycle. Another example: george bush|micheal jordon|bill gates|steve jobs. This one would be correct, but when I remove spaces it takes the correct spaces out too. So I want to get rid of leading and trailing whitespace, any space before or after a |, and allow only one pipe, between alphanumeric words of course.
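
    Stated as substitutions rather than a single capture, the cleanup is three steps. A sketch (shown here in Python; the patterns themselves port directly to .NET's Regex.Replace):

        import re

        def normalize(raw):
            s = re.sub(r"\s*\|\s*", "|", raw.strip())  # drop spaces around every pipe
            s = re.sub(r"\|{2,}", "|", s)              # collapse runs of pipes to one
            return s.strip("|")                        # no leading or trailing pipe

        print(normalize("|||car||truck | van || cycle"))
        # -> car|truck|van|cycle
        print(normalize("george bush|micheal jordon|bill gates|steve jobs"))
        # -> unchanged: spaces inside words are kept

    Trimming the spaces around pipes before collapsing the runs matters; it turns inputs like "van | | cycle" into adjacent pipes that the second step can then fold into one.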

    Read the article

  • Cannot destroy ZFS snapshot: dataset already exists

    - by Morven
    I have a server (a T5220, though I doubt it matters) running Solaris 10 8/07, and I have a ZFS pool, "mysql", on internal disk. Within it I have a filesystem "mysql/data/4.1.12", which I snapshot hourly with a script from cron. I have one snapshot, created as one of those hourly snaps, that I cannot destroy. I have renamed it out of sequence to "mysql/data/4.1.12@wibble" so that my script will not try and fail to destroy it, but it was originally within the sequence, though I doubt that matters. It renames successfully, the snapshot can be navigated and read from through the .zfs/snapshot directory, and it has no clones based on it. Trying to destroy it does this:

        (265) root@web-mysql4:/# zfs destroy mysql/data/4.1.12@wibble
        cannot destroy 'mysql/data/4.1.12@wibble': dataset already exists
        (266) root@web-mysql4:/#

    which is apparently nonsensical: of course it already exists, that's the point! Has anyone seen anything like this before? Web searches show nothing obviously similar. I can provide the list of installed patches if necessary.

    Read the article

  • tftpd starts randomly

    - by Mutant
    A few days ago my Little Snitch filter started popping up tftpd. I'd never seen this before, so I immediately started freaking out, thinking my Mac had been compromised. I can't find anything unusual on the system, and the process usually dies before I can trace it (Little Snitch never allowed the connection; it just left the popup up). I finally caught it once, and found this:

        [10:32]: sudo lsof -nlP | fgrep tftp
        Password:
        tftpd   1924 18446744  cwd   DIR   1,3       1326          2  /
        tftpd   1924 18446744  txt   REG   1,3      29856  163979456  /usr/libexec/tftpd
        tftpd   1924 18446744  txt   REG   1,3     600576  163686622  /usr/lib/dyld
        tftpd   1924 18446744  txt   REG   1,3  303300608  189014898  /private/var/db/dyld/dyld_shared_cache_x86_64
        tftpd   1924 18446744   0u   IPv4  0x34a76100fcbb06e3  0t0  UDP  *:55818
        tftpd   1924 18446744   2u   IPv4  0x34a76100f1113c53  0t0  UDP  *:69

        [10:32]: ps ax | fgrep 1924
         1924   ??  S      0:00.00 /usr/libexec/tftpd -i /private/tftpboot
         1949 s000  S+     0:00.00 fgrep 1924

    For the life of me I can't figure out what is starting this. There is nothing in cron, the LaunchDaemons, etc., and Google searches haven't yielded much either. The connecting IP is different each time. So my question is: has anyone seen anything like this before?

    Read the article

  • Designing a Chart that expands as more data is entered in Excel

    - by Matt Ridge
    I have a worksheet that pulls data from another; it is designed to show only late jobs, and it works perfectly. It is broken down into quarters, and it gathers all this data and does everything I want, except this one last bit: I want it to show charts that self-populate whenever there is data in a given area, and stay blank otherwise. If more data is entered into the range, the chart should expand accordingly. Attached is a simplified workbook with what it does and what I'd like to see it do. I don't even know if this is possible. I thought writing a script so that the data changes with each addition might fix my problem, but I'm not sure that is the best way in this situation. https://dl.dropbox.com/u/3327208/Excel/Charts.xlsx

    Read the article

  • Installing NumPy on Mac OS X 10.7 (Lion) for use with Python

    - by user1744871
    I need to use NLTK and NumPy with Python. I am a newbie to Python and initially used the Python 2.7.3 that came with my Mac (currently running OS X 10.7.5). I learned that the Apple version of Python may not be robust, so I downloaded the standard version from python.org. I have downloaded the NLTK program for Mac OS X 10.7, and it seemed to install fine. I am now trying to download and install NumPy using the instructions from the SciPy website, http://www.scipy.org/Installing_SciPy/Mac_OS_X. I cloned NumPy from GitHub, but when I tried to build it using the following command:

        $ python setup.py build

    I received the following error:

        MacOS/Python: can't open file 'setup.py': [Errno 2] No such file or directory

    I also tried to build it using the scons command:

        $ python setupscons.py scons --jobs=2

    and received the following error:

        /Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python: can't open file 'setupscons.py': [Errno 2] No such file or directory

    Can anyone think of a possible workaround?

    Read the article

  • Log4net RollingFileAppender skipping every other day in ASP.NET MVC app

    - by tasty_spider_men
    Hi, I've set up some log4net logging on an ASP.NET MVC application I've had running for a little over a month now, with a rolling file appender and an SMTP appender. The application is hit several thousand times a day, and a lot of actions are logged. On top of this, a number of batch jobs run as part of the application and also write to the log. Now, as I've been occupied with other work, I haven't attended to this application much, assuming things were working based on my test setup. Unfortunately I've discovered that the rolling file appender on my production server seems to be skipping every other day, i.e. I have logs: 10/4/2010, 10/4/2010.1, 10/4/2010.2, 12/4/2010, 12/4/2010.1, 14/4/2010, 14/4/2010.1, etc. Any idea what could be causing this? It is absolutely impossible that there has been no activity on these odd days for the lifetime of the application. Cheers

    Read the article

  • What do I need to do now that I want to take my programming hobby to the next level?

    - by hohog
    I've always wanted to make games, but I did not start actively teaching myself programming until my first year of university. I kept going throughout university, learning new languages and showing off things I had made, while neglecting my major in Biology. Anyway, I ended up with an Economics degree and a portfolio of SaaS and web apps I created so I could eat during my final year. So far, I'm getting a few interviews here and there for web programming positions, but either I fail the logic pretest miserably or the job requires a comp sci degree. I mean, I can easily design and code an entire app, which I emphasize through my portfolio... but I don't know why I am so slow at the logic puzzles in prescreening interviews. So what should I do now? Get certificates in languages? Go back to school and study CS? Is it too late to get into Windows programming jobs rather than web programming?

    Read the article

  • How to skip invalid rows while inserting data into the database

    - by Dinesh
    We have a statement that inserts rows into a temporary table (say, 10 rows). While inserting the 5th row, it hits a format issue with one of the columns, throws an error, and stops inserting. What I want is for it to skip the error rows and insert the valid rows. For the error rows, it can skip the bad column and insert a null value there, with a different status.

        create table #tb_pagecontent_value (
            pageid int,
            formid uniqueidentifier,
            id_field xml,
            fieldvalue xml,
            label_final xml
        )
        ...
        insert into #tb_pagecontent_xml
        select A.pageid, B.formid, A.PageData.query('/CPageDataXML/control')
        from Pagedata A
        inner join page B on A.PageId = B.PageId
        inner join FormAssociation C on B.FormId = C.FormId
        where B.pageid in (select pageId from jobs where jobtype = 'zba' and StatusFlag != 1)

    This is the statement I want to apply that logic to. Any help is appreciated.

    Read the article

  • Java and Different Types of Stacks

    - by Rarge
    Currently the only stack I know anything about is Vector; I normally use it in place of an array, but I understand that there are other types of stacks and they all suit different jobs. The project I am currently working on requires me to insert objects at a certain position inside a stack, not always the front, and I am under the impression that Vector may not be the best class for this job. Could somebody please give me a brief description of the other types of stacks available in the Java language, with their advantages and disadvantages? Are these names Java-specific, or are they used as general terms in computer science? Thank you

    Read the article

  • How can I configure different worker pools using celery?

    - by Chris R
    I need to deploy a queued execution service with (generally) the following three classes of worker:

    1. A periodic, low-priority job class that takes a long time and can be processed serially; these jobs should use at most 0..2 workers in the system.
    2. A periodic, deadline-sensitive job class that takes a short to medium amount of time (say, topping out at 5 minutes).
    3. An ad-hoc job class that is higher priority than #1 but can interleave with #2. Any workers from class #2 that are inactive when this type of job comes in should handle it, without ever starving the pool of workers for #2.

    All three job classes are the same task; the only difference between them is how they're requested. They take the same input and generate the same output, but each one has different performance guarantees. How can I implement this using Celery?
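
    One common Celery shape for this is a single task routed to different queues, with a dedicated worker pool per queue so the concurrency caps are enforced where the workers start. A minimal sketch against a recent Celery API; module, broker and queue names are assumptions:

        from celery import Celery

        app = Celery("jobs", broker="amqp://localhost")

        @app.task
        def run_job(payload):
            """Same work for all three classes; only the queue it arrives on differs."""
            ...

        # Callers pick the queue at request time, since the classes differ
        # only in how they are requested:
        run_job.apply_async(args=[{"id": 1}], queue="slow")      # class 1
        run_job.apply_async(args=[{"id": 2}], queue="deadline")  # class 2
        run_job.apply_async(args=[{"id": 3}], queue="adhoc")     # class 3

        # Concurrency caps live in how the workers are started (shell):
        #   celery -A jobs worker -Q slow -c 2            <- never more than 2 class-1 jobs
        #   celery -A jobs worker -Q deadline,adhoc -c 8  <- idle class-2 capacity drains ad-hoc

    This gives the hard 0..2 cap for class 1 and lets otherwise-idle class-2 workers absorb ad-hoc jobs; strict ordering between the deadline and adhoc queues is not guaranteed without broker-level priorities.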

    Read the article

  • Log connections to program

    - by Zac
    Besides using iptables to log incoming connections, is there a way to log established inbound connections to a service that you don't have the source to (supposing the service doesn't log this on its own)? What I want to do is gather information based on who's connecting, to be able to tell things like what times of day the service is used the most, where in the world the main user base is, etc. I am aware I could use netstat and hook it up to a cron script, but that might not be accurate, since the script could only run as frequently as once a minute. Here is what I am thinking right now:

    1. Write a program that constantly polls netstat, looking for established connections that didn't appear in the previous poll. This idea seems like a waste of CPU time, though, since there may not be a new connection.
    2. Write a wrapper program that accepts inbound connections on whatever port the service runs on; but then I wouldn't know how to pass the connection along to the real service.

    Edit: It just occurred to me that this question might be better for Stack Overflow, though I am not certain. Sorry if this is the wrong place.
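
    A sketch of option 1, the polling approach, using ss rather than netstat; a Linux host is assumed and the port is illustrative:

        import subprocess
        import time

        PORT = ":8080"   # hypothetical service port
        seen = set()

        while True:
            # ss prints one line per established TCP connection; -n skips DNS, -t is TCP.
            out = subprocess.check_output(["ss", "-tn", "state", "established"]).decode()
            for line in out.splitlines()[1:]:          # first line is the header
                fields = line.split()
                if len(fields) < 2:
                    continue
                local, peer = fields[-2], fields[-1]   # Local:Port and Peer:Port columns
                if local.endswith(PORT) and peer not in seen:
                    seen.add(peer)
                    print(time.strftime("%Y-%m-%d %H:%M:%S"), "connection from", peer)
            time.sleep(1)

    However fast it polls, this still misses connections shorter than the interval; only a packet-level mechanism such as the iptables LOG target or conntrack accounting sees every connection, which is why option 1 can only ever be approximate.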

    Read the article

  • pyramid - How to handle complex URLs in an elegant way?

    - by Lingfeng Xiong
    I'm writing an admin website which controls several websites that share the same program and database schema but have different content. The URLs I designed look like this:

        http://example.com/site                    a list of all sites under control
        http://example.com/site/{id}               a brief overview of the site with ID id
        http://example.com/site/{id}/user          the user list of the target site
        http://example.com/site/{id}/item          a list of items sold on the target site
        http://example.com/site/{id}/item/{iid}    detailed information for one item
        ...                                        and so on

    As you can see, nearly all URLs need the site_id, and in almost all views I have to do the same common jobs, like querying the Site model against the database with site_id. I also have to pass site_id every time I invoke request.route_path. So... is there any way to make my life easier? Thanks in advance.
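
    This is what Pyramid's route factories are for: the shared lookup runs once per request, before the view. A sketch, where DBSession and Site are assumed SQLAlchemy names:

        from pyramid.config import Configurator
        from pyramid.httpexceptions import HTTPNotFound

        class SiteContext(object):
            """Route factory: loads the Site once, before the view runs."""
            def __init__(self, request):
                # DBSession and Site are hypothetical SQLAlchemy names.
                self.site = DBSession.query(Site).get(request.matchdict["id"])
                if self.site is None:
                    raise HTTPNotFound("no such site")

        config = Configurator()
        config.add_route("site_overview", "/site/{id}", factory=SiteContext)
        config.add_route("site_user", "/site/{id}/user", factory=SiteContext)
        config.add_route("site_item", "/site/{id}/item/{iid}", factory=SiteContext)

        # Each view then receives the loaded site as its context:
        #   def user_list(context, request):
        #       return {"users": context.site.users}

    The repeated site_id argument to request.route_path can be hidden the same way, for example by registering a small helper with config.add_request_method that fills in id=request.context.site.id before delegating to route_path.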

    Read the article

  • Error Exception 2048 on my website using OpenClassifieds software

    - by Rich Baxter
    I have a website called e-waitress.net; it is a jobs/employment website for the restaurant industry. I installed an open source program called OpenClassifieds, and a couple of days after installing it I get this error message when I visit my URL:

        ErrorException [ 2048 ]: date_default_timezone_get(): It is not safe to rely on the system's timezone settings. You are required to use the date.timezone setting or the date_default_timezone_set() function. In case you used any of those methods and you are still getting this warning, you most likely misspelled the timezone identifier. We selected 'America/New_York' for 'EDT/-4.0/DST' instead
        ~ APPPATH/ko322/classes/kohana/date.php [ 592 ]

    I am wondering: is this a server issue with my hosting provider, or is it within the OpenClassifieds installation? I've reinstalled the software twice, and it returns this error after a couple of days of working great. Any ideas?

    Read the article

  • How can I make a dashboard with all pending tasks using Celery?

    - by e-satis
    I want to have some place where I can watch all the pending tasks. I'm not talking about the registered functions/classes as tasks, but the actual scheduled jobs, for which I could display: name, task_id, eta, worker, etc. Using Celery 2.0.2 and djcelery, I found `inspect' in the documentation, and I tried:

        from celery.task.control import inspect

        def get_scheduled_tasks(nodes=None):
            if nodes:
                i = inspect(nodes)
            else:
                i = inspect()
            scheduled_tasks = []
            dump = i.scheduled()
            if dump:
                # scheduled() returns a dict keyed by worker name
                for worker, tasks in dump.items():
                    for task in tasks:
                        scheduled_task = {}
                        scheduled_task.update(task["request"])
                        del task["request"]
                        scheduled_task.update(task)
                        scheduled_task["worker"] = worker
                        scheduled_tasks.append(scheduled_task)
            return scheduled_tasks

    But it hangs forever on dump = i.scheduled(). Strange, because everything else works. Using Ubuntu 10.04, Django 1.0 and virtualenv.

    Read the article

  • Solution for distributing MANY simple network tasks?

    - by EmpireJones
    I would like to create some sort of distributed setup for running a ton of small, simple REST web queries in a production environment. For each 5-10 related queries executed from a node, I will generate a very small amount of derived data, which will need to be stored in a standard relational database (such as PostgreSQL). What platforms are built for this type of problem? The nature, data sizes, and quantities seem to contradict the Hadoop mindset. There are also grid-based architectures such as Condor and Sun Grid Engine, which I have seen mentioned, but I'm not sure whether these platforms have any recovery from errors (checking whether a job succeeds). What I would really like is a FIFO-type queue that I could add jobs to, with the end result of my database getting updated. Any suggestions on the best tool for the job?
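
    For a sense of the shape being asked for, the whole pipeline is a queue of fetch batches feeding one relational writer. A single-machine sketch with Python's standard library plus psycopg2; derive(), load_batches() and the SQL are hypothetical stand-ins:

        import json
        import urllib.request
        from concurrent.futures import ThreadPoolExecutor

        import psycopg2

        def fetch_batch(urls):
            """One unit of work: 5-10 related REST queries reduced to one derived row."""
            replies = [json.load(urllib.request.urlopen(u, timeout=10)) for u in urls]
            return derive(replies)              # hypothetical reduction to one row

        conn = psycopg2.connect("dbname=results")
        batches = load_batches()                # hypothetical: a list of URL groups

        with ThreadPoolExecutor(max_workers=32) as pool:
            for row in pool.map(fetch_batch, batches):
                with conn, conn.cursor() as cur:    # one commit per derived row
                    cur.execute("INSERT INTO derived (key, value) VALUES (%s, %s)", row)

    A queueing platform such as Celery or beanstalkd then replaces ThreadPoolExecutor with the same structure, and a failed batch gets re-queued rather than re-raised, which is exactly the recovery-from-errors requirement.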

    Read the article

  • In which layer should I join 2 entities together?

    - by William
    I use Spring MVC and plain JDBC. I've just learned that I should separate business processing into layers: a presentation layer, controller layer, service layer, and repository/DAO layer. Now suppose I have an entity called Person that can have multiple Jobs, where Job itself is another entity with its own properties. From what I gathered, the repository layer only manages one entity. Now that I have one entity containing another, where do I "join" them? The service layer? Suppose I want to get a person whose jobs aren't loaded yet (lazy loading), but the system might later ask what the jobs of that particular person are. What is the role of each layer in this case? Please let me know if I need to add any detail to this question.

    Read the article

  • Automatically update SVN repository on another server

    - by Mikey C
    We have two Ubuntu web servers: one is our staging server (Staging) and the other is our live server (Live). Staging has our Subversion repository as well as the latest version of our sites on it. Because the SVN server runs on Staging, I've added post-commit hook scripts so that the staging server automatically has the latest code. Easy. However, I'd like one of the repositories on Live to stay updated as well. This is a repository of images, PDFs and suchlike. When a team member commits to it, I'd like it to update automatically on the live server so the files can be used in mailings, content-managed pages, etc. I'd add something to the post-commit hook to SSH across and update, but for security we can only SSH from one server to another as the user 'commandLine', whereas the post-commit hook runs as the 'www-data' user. I'd rather not run a cron job on Live to update every 5 minutes, but I can't see another way of doing it without altering all our user permissions. Any ideas?
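
    One way around the user mismatch is to keep the hook dumb and delegate only the privileged step: the hook, running as www-data, uses sudo to run the single ssh command as commandLine, which a single sudoers line permits. A sketch of the post-commit hook, with host names and paths as assumptions:

        #!/usr/bin/env python
        # post-commit hook: Subversion calls this as <hook> REPOS-PATH REVISION
        import subprocess
        import sys

        repo, rev = sys.argv[1], sys.argv[2]

        # Only the assets repository pushes to Live (paths and host are illustrative).
        if repo == "/var/svn/assets":
            subprocess.check_call([
                "sudo", "-u", "commandLine",
                "ssh", "live.example.com", "svn update /var/www/assets",
            ])

        # One sudoers line (added via visudo) limits www-data to exactly this command:
        #   www-data ALL=(commandLine) NOPASSWD: /usr/bin/ssh live.example.com svn update /var/www/assets

    Because sudo is restricted to that exact command line, www-data gains no general ability to act as commandLine, so the existing SSH permissions stay untouched.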

    Read the article
