Search Results

Search found 20299 results on 812 pages for 'git log'.

  • gitolite post commit hook to update redmine's repository

    - by eliocs
    Hello, I currently have an Ubuntu server that has gitolite and Redmine installed. Redmine accesses repository copies which are updated by a cron task. Having a cron task pull the updates seems like overkill; is there any way a gitolite post-commit script could execute the pull as the redmine user? My current update entry looks like this:

      */15 * * * * redmine cd /home/redmine/repositories/support && git pull

    The post-commit script, I guess, should be similar. How can I give the gitolite user the privileges to execute the pull as the redmine user? Thanks in advance. P.S.: I don't have enough reputation to create the gitolite tag.
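
    A minimal sketch of the kind of hook described above, written as a server-side post-receive hook (the kind gitolite runs after a push). It assumes a sudoers rule along the lines of "git ALL=(redmine) NOPASSWD: /usr/bin/git", and the working-copy path is only illustrative:

      #!/usr/bin/env python
      # post-receive hook sketch: refresh Redmine's working copy after a push.
      # Assumes a sudoers rule lets the gitolite user run git as the redmine user
      # without a password, and that git is new enough to support the -C option.
      import subprocess

      REDMINE_COPY = "/home/redmine/repositories/support"  # illustrative path

      subprocess.check_call(
          ["sudo", "-u", "redmine", "git", "-C", REDMINE_COPY, "pull"]
      )

    With something like this in place, the 15-minute cron entry above could go away, since the working copy would be refreshed on every push.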

  • Can't install gitosis

    - by Shuoling Liu
    While trying to initialize gitosis, I get the following error. Any ideas?

      :~$ sudo -H -u git gitosis-init < ida_rsa.pub
      [sudo] password for chinablc: Sorry, try again.
      [sudo] password for chinablc:
      Traceback (most recent call last):
        File "/usr/bin/gitosis-init", line 5, in <module>
          from pkg_resources import load_entry_point
        File "/usr/lib/python2.6/dist-packages/pkg_resources.py", line 2562, in <module>
          working_set.require(__requires__)
        File "/usr/lib/python2.6/dist-packages/pkg_resources.py", line 626, in require
          needed = self.resolve(parse_requirements(requirements))
        File "/usr/lib/python2.6/dist-packages/pkg_resources.py", line 524, in resolve
          raise DistributionNotFound(req) # XXX put more info here
      pkg_resources.DistributionNotFound: gitosis==0.2

  • Is my webserver being abused for banking fraud?

    - by koffie
    For a few weeks now I have been getting a lot of 403 errors from Apache in my log files that seem to be related to a bank fraud scheme. The relevant log entries look like this (the IP 1.2.3.4 is one I made up; I did not modify the rest of each line):

      www.bradesco.com.br:80 / 1.2.3.4 - - [01/Dec/2012:07:20:32 +0100] "GET / HTTP/1.1" 403 427 "-" "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11"
      www.bb.com.br:80 / 1.2.3.4 - - [01/Dec/2012:07:20:32 +0100] "GET / HTTP/1.1" 403 370 "-" "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11"
      www.santander.com.br:80 / 1.2.3.4 - - [01/Dec/2012:07:20:33 +0100] "GET / HTTP/1.1" 403 370 "-" "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11"
      www.banese.com.br:80 / 1.2.3.4 - - [01/Dec/2012:07:20:33 +0100] "GET / HTTP/1.1" 403 370 "-" "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11"

    The LogFormat I use is:

      LogFormat "%V:%p %U %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\""

    The strange thing is that all of these domains belong to banks, and 3 out of the 4 are also on the list of the bank fraud scheme described at http://www.abuse.ch/?p=2925. I would really like to know whether my server is being abused for bank fraud or not. I suspect not, because it is returning 403 to all these requests, but any extra checks I can do to make sure my server is not being abused are welcome. I am also curious how the "bad guys" expected my server to behave: are they just expecting my server to act as a proxy to hide the IP of the fake site, or are they expecting my server to actually serve the fake banking website? Is the IP 1.2.3.4 more likely to be the IP of a victim or of a bad guy? I suspect a bad guy, because it is quite unlikely that a real person would visit 4 bank sites in one second. If it is a bad guy, I am very curious what he is trying to do.
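
    One extra check along these lines would be to confirm that requests carrying those bank hostnames never got anything other than a 403 back. A small sketch against the custom LogFormat shown above; the log path and hostname list are assumptions:

      # Scan the access log for bank-host requests that were NOT rejected with 403.
      HOSTS = ("bradesco.com.br", "bb.com.br", "santander.com.br", "banese.com.br")
      LOG = "/var/log/apache2/access.log"   # assumed location of the log shown above

      with open(LOG, errors="replace") as f:
          for line in f:
              vhost = line.split(":", 1)[0]           # %V, the first field
              if not any(h in vhost for h in HOSTS):
                  continue
              status = line.split('"')[2].split()[0]  # %>s, right after the quoted request
              if status != "403":
                  print("non-403 response:", line.rstrip())

    If that prints nothing, the server only ever refused these requests, which would support the reading that this is probe traffic rather than an actual compromise.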

  • Is there a way to correct wrongly typed password / abort the operation while on password prompt in the console in MINGW32?

    - by jakub.g
    I sometimes mistype a password when asked for it, e.g. by Git when pushing to a remote repository. The password is not displayed in the console (not even masked as asterisks). Is there a way either to correct the password or to abort the operation? Backspace for editing and Ctrl+C for aborting do not seem to work. I want to save some time instead of either waiting for the remote authentication to fail or submitting the bad password with Enter and then hitting Ctrl+C. Edit: Unfortunately Ctrl+U doesn't work for me (MINGW32 on Windows XP). Any other guesses?

  • How to analyse logs after the site was hacked

    - by Vasiliy Toporov
    One of our web projects was hacked. The attacker changed some template files in the project and one core file of the web framework (one of the well-known PHP frameworks). We found all the corrupted files with git and reverted them, so now I need to find the weak point. With high probability we can say it was not FTP or SSH password theft. The hosting provider's support specialist (after analysing the logs) said it was a security hole in our code. My questions:
    1) What tools should I use to review the Apache access and error logs? (Our server distro is Debian.)
    2) Can you give tips for detecting suspicious lines in the logs, e.g. tutorials or examples of useful regexps or techniques (see the sketch below)?
    3) How do I separate "normal user behaviour" from suspicious behaviour in the logs?
    4) Is there any way to prevent such attacks in Apache?
    Thanks for your help.
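
    Not a complete answer to question 2, but as a starting point, a short script can flag request lines that match common probe or injection signatures. The log path and the patterns are only illustrative assumptions, not an exhaustive rule set:

      import re

      ACCESS_LOG = "/var/log/apache2/access.log"   # assumed path

      # A few signatures often seen in probes and exploit attempts (illustrative only)
      SUSPICIOUS = [
          r"\.\./",                     # directory traversal
          r"union(\s|\+|%20)+select",   # SQL injection probes
          r"<script",                   # reflected XSS attempts
          r"base64_decode|eval\(",      # PHP payload markers
      ]
      pattern = re.compile("|".join(SUSPICIOUS), re.IGNORECASE)

      with open(ACCESS_LOG, errors="replace") as f:
          for line in f:
              if pattern.search(line):
                  print(line.rstrip())

    Anything this flags around the time the files changed is worth a closer look, together with the matching entries in the error log and in the framework's own logs.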

  • Fast extraction of a time range from syslog logfile?

    - by mike
    I've got a logfile in the standard syslog format. It looks like this, except with hundreds of lines per second:

      Jan 11 07:48:46 blahblahblah...
      Jan 11 07:49:00 blahblahblah...
      Jan 11 07:50:13 blahblahblah...
      Jan 11 07:51:22 blahblahblah...
      Jan 11 07:58:04 blahblahblah...

    It doesn't roll at exactly midnight, but it'll never have more than two days in it. I often have to extract a timeslice from this file. I'd like to write a general-purpose script for this, that I can call like:

      $ timegrep 22:30-02:00 /logs/something.log

    ...and have it pull out the lines from 22:30, onward across the midnight boundary, until 2am the next day. There are a few caveats:
    - I don't want to have to bother typing the date(s) on the command line, just the times. The program should be smart enough to figure them out.
    - The log date format doesn't include the year, so it should guess based on the current year, but nonetheless do the right thing around New Year's Day.
    - I want it to be fast: it should use the fact that the lines are in order to seek around in the file and use a binary search.
    Before I spend a bunch of time writing this, does it already exist?
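
    For what it's worth, a rough sketch of such a tool. It only streams the file (no binary-search seeking yet) and it sidesteps the date/year caveats by working purely on the HH:MM:SS column, switching state once the start time has been seen so that ranges crossing midnight still work; the argument handling is an assumption:

      #!/usr/bin/env python
      # Rough sketch of "timegrep 22:30-02:00 /logs/something.log".
      # Streams the file and compares only the HH:MM:SS column of syslog lines.
      import sys

      def main():
          start, end = sys.argv[1].split("-")      # e.g. "22:30", "02:00"
          start, end = start + ":00", end + ":59"
          wraps = end < start                      # range crosses midnight
          in_range = False
          with open(sys.argv[2]) as f:
              for line in f:
                  t = line[7:15]                   # "HH:MM:SS" in syslog format
                  if not in_range:
                      in_range = t >= start
                  else:
                      # detect the first timestamp past the end of the slice
                      past_end = (t < start and t > end) if wraps else (t > end)
                      if past_end:
                          break
                  if in_range:
                      print(line, end="")

      if __name__ == "__main__":
          main()

    A real implementation would still need the binary-search seek for speed and proper date handling around New Year, but the state switch above is the part that lets a plain time comparison survive the midnight boundary.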

  • bootstrapping a SparkleShare project

    - by WoJ
    I just tried SparkleShare as a possible replacement for dropbox/insynch. It looks quite promising, being based on open standards. I was wondering if someone has gone through the process of "bootstrapping" a SparkleShare project. I have the initial files I would like to keep synchronized already present on the two clients and the server (as plain files). Is there a way to set the project up so that I would not need to download/upload all the files back and forth, since they are readily available on all three systems? I guess this would involve some git kung-fu I am far from mastering. Thanks!

  • logfile deleted on Oracle database: how do I re-create it?

    - by Daniel
    For my database assignment we were looking into "database corruption", and I was asked to delete the second redo log file, which I did with the command:

      rm log02a.rdo

    This was in the $HOME/ORADATA/u03 directory. I then started up my database using:

      startup pfile=$PFILE nomount

    then mounted it with:

      alter database mount;

    Now when I try to open it with:

      alter database open;

    it gives me the error:

      ORA-03113: end-of-file on communication channel
      Process ID: 22125
      Session ID: 25 Serial number: 1

    I am assuming this is because the second redo log file is missing. log01a.rdo is still there, but not the one I deleted. How can I go about recovering from this so that I can open my database again? I have looked into the database creation scripts, and they specify the log02a.rdo file to be size 10M and part of group 2. If I do:

      select group#, member from v$logfile;

    I get:

      1 /oradata/student_db/user06/ORADATA/u03/log01a.rdo
      2 /oradata/student_db/user06/ORADATA/u03/log02a.rdo
      3 /oradata/student_db/user06/ORADATA/u03/log03a.rdo
      4 /oradata/student_db/user06/ORADATA/u03/log04a.rdo

    So it is part of group 2. If I try to add the log02a.rdo file again, I get "already part of the database". If I drop group 2 and then add it again with:

      ALTER DATABASE ADD LOGFILE GROUP 2 ('$HOME/ORADATA/u03/log02a.rdo') SIZE 10M;

    nothing happens: it supposedly alters the database, but it still won't open. Any ideas what I can do to re-create this and be able to open my database again?

  • Mount drive at /Volumes/NAME/ or similar in Cygwin

    - by Adam
    Hi, I'm using Cygwin on Windows 7. When I plug in a USB stick, the drive automatically gets mounted at /cygdrive/x, which is good and really easy to use. My problem is that the drive letter sometimes changes, and when I've got remotes set up in git (I've got one called usb at /cygdrive/h/), this sometimes breaks and I have to change the remote URL. That's just an example; there are other scenarios where I wouldn't want it to change. I like what the Mac does: it mounts a volume at /Volumes/STICK (STICK being the volume name of my USB stick). Is there any way I can do this, or something similar, under Cygwin? Thanks

  • How do I get transparent, efficient, file system snapshotting or versioning on ext3/4?

    - by shovas
    I've long thought about versioning file systems. This is a killer feature, and I've looked at Wayback, ext3cow, ZFS, FUSE-based solutions, and plain cvs/svn/git overlays. I consider ext3cow the model for my requirements: transparent and efficient, though I could do without the extra "ls abc@timestamp" feature, as long as I somehow get automated, transparent versioning of my files. It could be instantaneous, or it could be based on snapshots at intervals of 10s, 30s, 1m, 5m, 15m, etc. Just something that will efficiently deal with thousands of files in a given directory, all of various sizes, most small, but some upwards of 100 MB to 1 GB. ZFS isn't really an option as I'm on Linux (and I would prefer not to use it through FUSE, as I already have an ext3 setup I want to version, not something new). What solutions are out there?

  • How to roll the log file on startup in logback

    - by Mike Q
    Hi all, I would like to configure logback to do the following:
    - Log to a file
    - Roll the file when it reaches 50MB
    - Only keep 7 days' worth of logs
    - On startup, always generate a new file (do a roll)
    I have it all working except for the last item, the startup roll. Does anyone know how to achieve that? Here's the config...

      <appender name="File" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <layout class="ch.qos.logback.classic.PatternLayout">
          <Pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg \(%file:%line\)%n</Pattern>
        </layout>
        <File>server.log</File>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
          <FileNamePattern>server.%d{yyyy-MM-dd}.log</FileNamePattern>
          <!-- keep 7 days' worth of history -->
          <MaxHistory>7</MaxHistory>
          <TimeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
            <MaxFileSize>50MB</MaxFileSize>
          </TimeBasedFileNamingAndTriggeringPolicy>
        </rollingPolicy>
      </appender>

  • What is the likely cause of the Android runtime exception "No suitable Log implementation" related to logging?

    - by M.Bearden
    I am creating an Android app that includes a third-party jar. That third-party jar uses internal logging that fails to initialize when I run the app, with this error: "org.apache.commons.logging.LogConfigurationException: No suitable Log implementation". The third-party jar appears to use org.apache.commons.logging and to depend on log4j, specifically log4j-1.2.14.jar. I have packaged the log4j jar into the Android app. The third-party jar was packaged with a log4j.xml configuration file, which I have tried packaging into the app as an XML resource (and also as a raw resource). The "No suitable Log implementation" error message is not very descriptive, and I have no immediate familiarity with Java logging. So I am looking for likely causes of the problem (what class or configuration resources might I be missing?) or for some debugging technique that will result in a different error message that is more explicit about the problem. I do not have access to the source code of the third-party jar. Here is the exception stack trace; when I run the app, I get the following exception as soon as one of the third-party jar classes attempts to initialize its internal logging:

      DEBUG/AndroidRuntime(15694): Shutting down VM
      WARN/dalvikvm(15694): threadid=3: thread exiting with uncaught exception (group=0x4001b180)
      ERROR/AndroidRuntime(15694): Uncaught handler: thread main exiting due to uncaught exception
      ERROR/AndroidRuntime(15694): java.lang.ExceptionInInitializerError
      ERROR/AndroidRuntime(15694): Caused by: org.apache.commons.logging.LogConfigurationException: No suitable Log implementation
      ERROR/AndroidRuntime(15694):     at org.apache.commons.logging.impl.LogFactoryImpl.discoverLogImplementation(LogFactoryImpl.java:842)
      ERROR/AndroidRuntime(15694):     at org.apache.commons.logging.impl.LogFactoryImpl.newInstance(LogFactoryImpl.java:601)
      ERROR/AndroidRuntime(15694):     at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:333)
      ERROR/AndroidRuntime(15694):     at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:307)
      ERROR/AndroidRuntime(15694):     at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:645)
      ERROR/AndroidRuntime(15694):     at org.apache.commons.configuration.ConfigurationFactory.<clinit>(ConfigurationFactory.java:77)

  • How to create a log file in a folder that is created at run time

    - by swati
    Hello everyone, I am new to the Apache logger. I am using Apache log4j for my application, with the following configuration file:

      # configure the root logger
      log4j.rootLogger=INFO, STDOUT, DAILY
      # configure the console appender
      log4j.appender.STDOUT=org.apache.log4j.ConsoleAppender
      log4j.appender.STDOUT.Target=System.out
      log4j.appender.STDOUT.layout=org.apache.log4j.PatternLayout
      log4j.appender.STDOUT.layout.conversionPattern=%d{yyyy-MM-dd HH:mm:ss.SSS} [%p] %c:%L - %m%n
      # configure the daily rolling file appender
      log4j.appender.DAILY=org.apache.log4j.DailyRollingFileAppender
      log4j.appender.DAILY.File=log4jtest.log
      log4j.appender.DAILY.DatePattern='.'yyyy-MM-dd-HH-mm
      log4j.appender.DAILY.layout=org.apache.log4j.PatternLayout
      log4j.appender.DAILY.layout.conversionPattern=%d{yyyy-MM-dd HH:mm:ss.SSS} [%p] %c:%L - %m%n

    When my application runs, it creates a folder called somename_2010-04-09-23-09. My log file has to be created inside this somename_2010-04-09-23-09 folder (which is created at run time). Is there any way to do that? Is there any way to specify in the configuration file that the log file should be created at run time inside the somename_2010-04-09-23-03 folder? I would really appreciate it if someone could answer my questions. Thanks, Swati

  • http_access.log on WebSphere 6.1.0.29

    - by DavidG
    I am running WebSphere 6.1.0.29 and I need to track the requests being made to an Enterprise Application. Previously I did this by routing the requests through a proxy server, but I need to repeat the exercise and I figure there must be a simpler way. Does anyone know how to enable HTTP access logging? I have been through the console and thought I had enabled http_access.log and http_error.log via: Application servers > server1 > HTTP error and NCSA access logging (where "server1" is the application server). I enabled the service at startup and ticked the boxes to enable access logging and error logging, however... nothing has happened. I have restarted the server, restarted the Enterprise apps and even did a "find . -name" for the log files, but they don't seem to be anywhere on the system. I saw on a JavaRanch thread that someone suggested writing a custom filter for requests in an application, but this seems like wild overkill; plus I am using the logs to test a pre-built binary, so I don't want to mess with the code. Anyone have any ideas/suggestions? Help! :-)

  • Writing to a new log file each day with TraceSource

    - by Cipher
    I am using a logger in my application to write to files. The source, switch and listeners have been defined in the app.config file as follows:

      <system.diagnostics>
        <sources>
          <source name="LoggerApp" switchName="sourceSwitch" switchType="System.Diagnostics.SourceSwitch">
            <listeners>
              <add name="myListener" type="System.Diagnostics.TextWriterTraceListener" initializeData="myListener.log" />
            </listeners>
          </source>
        </sources>
        <switches>
          <add name="sourceSwitch" value="Information" />
        </switches>
      </system.diagnostics>

    Inside my .cs code, I use the logger as follows:

      private static TraceSource logger = new TraceSource("LoggerApp");
      logger.TraceEvent(TraceEventType.Information, 1, "{0} : Started the application", DateTime.Now);

    What would I have to do to create a new log file each day instead of writing to the same log file every time?

  • Log in to subdomain via main domain

    - by Mattias
    I have a website that is available through multiple domain names, like www.domain1.com ... www.domain5.com. All my customers have their own subdomain, like customer1.domain1.com, customer2.domain1.com, ..., customer351.domain4.com. Currently I don't use SSL; each customer logs in to their own account via their own subdomain. I want to change this and have all customers log in on a central login page that would use SSL, for example https://login.domain1.com, and somehow redirect each user to the correct subdomain address (subdomains that don't use SSL). How do I do this and still maintain security? One idea I had: on login, store a random value somewhere in the database, redirect to the subdomain with that random value in the query string, and after that the session takes care of it; each value can be used only once. But how secure is that? I guess someone would ask me "why?". Because SSL costs money, and unfortunately I don't have a lot of it. :D Thanks for your time!
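
    The single-use value idea described above can be hardened a little by signing the value and giving it a short lifetime, so the subdomain can verify it before even hitting the shared store. A rough sketch only; the secret, the lifetime and the in-memory set standing in for the shared database are all placeholder assumptions, not a vetted design:

      import hmac, hashlib, os, time

      SECRET = b"shared-secret-known-to-all-domains"   # placeholder: same value on login server and subdomains

      used_nonces = set()   # in practice: a table in the shared database

      def issue_token(username):
          """Built by the HTTPS login page once the password checks out."""
          nonce = os.urandom(16).hex()
          expires = str(int(time.time()) + 60)          # token is valid for one minute
          payload = "|".join([username, nonce, expires])
          sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
          return payload + "|" + sig

      def redeem_token(token):
          """Checked by the customer subdomain when the redirect arrives."""
          payload, _, sig = token.rpartition("|")
          username, nonce, expires = payload.split("|")
          expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
          if not hmac.compare_digest(sig, expected):
              return None                               # signature mismatch: tampered
          if int(expires) < time.time() or nonce in used_nonces:
              return None                               # expired or replayed
          used_nonces.add(nonce)                        # mark as used: single use only
          return username                               # safe to start the subdomain session

    Even then, the token still crosses the wire once over plain HTTP during the redirect, and the subdomain session that follows is unencrypted, so this narrows the exposure window rather than removing it.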

  • Python - Problems using mechanize to log into a difficult website

    - by user1781599
    I am trying to log in to betfair.com using mechanize. I have tried several ways but it always fails. This is the code I have developed so far; can anyone help me identify what is wrong with it and how I can improve it to log into my betfair account? Thanks,

      import cookielib
      import urllib
      import urllib2
      from BeautifulSoup import BeautifulSoup
      import mechanize
      from mechanize import Browser
      import re

      bf_username_name = "username"
      bf_password_name = "password"
      bf_form_name = "loginForm"
      bf_username = "xxxxx"
      bf_password = "yyyyy"
      urlLogIn = "http://www.betfair.com/"
      # This url I will use to verify if log in has been successful
      accountUrl = "https://myaccount.betfair.com/account/home?rlhm=0&"

      br = mechanize.Browser(factory=mechanize.RobustFactory())
      br.addheaders = [("User-Agent", "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_5_8) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.90 Safari/537.1")]
      br.open(urlLogIn)
      br.select_form(nr=0)
      print br.form
      br.form[bf_username_name] = bf_username
      br.form[bf_password_name] = bf_password
      print br.form  # just to check username and psw have been recorded correctly
      responseSubmit = br.submit()
      response = br.open(accountUrl)

      text_file = open("LogInResponse.html", "w")
      # this file should show the home page with me logged in,
      # but it shows the home page as if I was not logged in
      text_file.write(responseSubmit.read())
      text_file.close()

      text_file = open("Account.html", "w")
      # this file should show my account page, but it shows a popup with an error
      text_file.write(response.read())
      text_file.close()
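
    One thing worth checking, as a hedged suggestion rather than a known fix: the login form may not be the first form on the page, and the page may even build it with JavaScript, which mechanize cannot execute. A small sketch that selects the form by its field names instead of by position; the field names are the ones from the question, not verified against the live site:

      # Instead of br.select_form(nr=0), pick whichever form has the expected fields.
      def select_login_form(br, username_field="username", password_field="password"):
          for i, form in enumerate(br.forms()):
              names = [c.name for c in form.controls]
              if username_field in names and password_field in names:
                  br.select_form(nr=i)
                  return True
          return False

      if not select_login_form(br):
          raise RuntimeError("no form with username/password fields found "
                             "- the login page may be built by JavaScript")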

  • Logging events as an Office 2007 application opens.

    - by Joshua King
    Is it possible to log what a Microsoft Office 2007 application does as it starts up? We are having an issue with Word where it hangs on the splash screen for one particular user and no one else, and we would like to find out what is causing the hang. The Windows event viewer only shows that the application was terminated unexpectedly because of a hang.

  • tomcat6 on ubuntu fails when user set to root

    - by J G
    I'm well aware that running tomcat6 as root is really bad from a security point of view and opens the box it is running on to all kinds of security risks and attack vectors. That said: when I change the entry in /etc/init.d/tomcat6 to TOMCAT6_USER=root and then run:

      sudo /etc/init.d/tomcat6 start

    I get [fail], nothing is written to the logs under /var/log/tomcat6, and no entry for tomcat6 is created under /var/run. How do I diagnose what is going wrong?

  • Nagios vs Splunk

    - by dan_vitch
    I am looking to implement log tracking at my current company. After some research, Nagios and Splunk seem to be the two best options. I was wondering if there is a consensus on which is better. I understand that Splunk can be quite pricey if the non-free version is used. That being said, I can imagine the answer to my question will be "if you have the money use Splunk, if not use Nagios".

  • need help with logparser on IIS logs

    - by user36440
    I am using logparser 2.2 and need a script that does two things: find URLs that contain a certain value in the referer, and loop over 30 folders. What I have so far is:

      logparser -rpt:-1 "select count(*) INTO feeds.txt from u_ex100302*.log where to_lowercase(cs(Referer)) like '/feeds%'"
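
    One way to handle the "loop over 30 folders" part is to drive logparser from a small wrapper script. This is only a sketch: the parent directory, the per-folder output naming and the assumption that logparser is on the PATH are all placeholders:

      import os
      import subprocess

      LOG_ROOT = r"C:\inetpub\logs\LogFiles"   # assumed parent of the 30 log folders
      QUERY = ("select count(*) INTO {out} from {logs} "
               "where to_lowercase(cs(Referer)) like '/feeds%'")

      for folder in sorted(os.listdir(LOG_ROOT)):
          path = os.path.join(LOG_ROOT, folder)
          if not os.path.isdir(path):
              continue
          out = "feeds_{0}.txt".format(folder)              # one result file per folder
          logs = os.path.join(path, "u_ex100302*.log")
          subprocess.call(["logparser", "-rpt:-1", QUERY.format(out=out, logs=logs)])

    The referer filter stays exactly as in the query above; only the input path and the INTO target change on each pass through the loop.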

  • Free Windows Application Blocker/Monitor

    - by Click Ok
    I want a free application monitor that, when it detects certain keywords in a window title for example, closes the application (or prevents it from opening/installing). A nice extra would be if the program also logged application activity and the internet sites accessed by any browser. Thank you very much! PS: I'm using Windows 7 Ultimate.

  • SQL Server 2005, Huge LDF file.

    - by Scott Jackson
    Hi, I have a database running on SQL Server 2005. The database is 20 GB and the LDF file is 35 GB! I'm now running low on disk space and want to shrink the log file. How can I do this, and how can I stop it from happening again? Many thanks, Scott

  • How can I force a merge of all WAL files in pg_xlog back into my base "data" directory?

    - by Zac B
    Question: Is there a way to tell Postgres (9.2) to "merge all WAL files in pg_xlog back into the non-WAL data files, and then delete all WAL files successfully merged"? I would like to be able to "force" this operation; i.e. checkpoint_segments or archiving settings should be ignored. The filesystem WAL buffer (pg_xlog) directory should be emptied, or nearly emptied. It's fine if some or all of the space consumed by the pg_xlog directory is then consumed by the data directory; our DBA has asked for WAL database backups without any backlogged WALs, but space consumption is not a concern. Having near-zero WAL activity during this operation is a fine constraint. I can ensure that the database server is either shut down or not connectible (zero user-generated transaction load) during this process. Essentially, I'd like Postgres to ignore archiving/checkpoint retention policies temporarily, and flush all WAL activity to the core database files, leaving pg_xlog in the same state as if the database were recently created, with very few WAL files.

    What I've tried: I know that the pg_basebackup utility performs something like this (it generates an almost-all-WALs-merged copy of a Postgres instance's data directory), but we aren't ready to use it on all our systems yet, as we are still testing replication settings; I'm hoping for a more short-term solution. I've tried issuing CHECKPOINT commands, but they just recycle one WAL file and replace it with another (that is, if they do anything at all; if I issue them during database idle time, they do nothing). pg_switch_xlog() similarly just forces a switch to the next log segment; it doesn't flush all queued/buffered segments. I've also played with the pg_resetxlog utility. That utility sort of does what I want, but all of its usage docs seem to indicate that it destroys (rather than flushing out of the transaction log and into the main data files) some or all of the WAL data. Is that impression accurate? If not, can I use pg_resetxlog during a zero-WAL-activity period to force a flush of all queued WAL data to non-WAL data? If the answer to that is negative, how can I achieve this goal? Thanks!
