Search Results

Search found 4390 results on 176 pages for 'git daemon'.


  • iptables logging not working?

    - by vps_newcomer
    OS: Ubuntu 10.04. Logging daemon: rsyslog.

    For some reason I'm not getting any iptables logs. Even though I don't look through them very often, I'd still like to get it working for the sake of it working XD Here is my /etc/rsyslog.d/iptables.conf:

        :msg, contains, "[IPTABLES]" -/var/log/iptables.log
        & ~

    My iptables logging prefix is "[IPTABLES]" followed by whatever else (for example, "[IPTABLES] Denied xyz"). The /var/log/iptables.log file is being created, but it's not getting any entries. I can see the logging entries in dmesg, but not in syslog or messages. What's going on?

    EDIT: My iptables logging rules:

        # logging limit
        LoggingLimit=5/min
        LoggingPrefix=IPTABLES

        # logging chain
        iptables -N LOG_REJECT
        iptables -A LOG_REJECT -j LOG

        # join INPUT to LOG_REJECT
        iptables -A INPUT -j LOG_REJECT

        # logging rules
        iptables -A LOG_REJECT -p tcp -m limit --limit $LoggingLimit -j LOG --log-prefix "$LoggingPrefix Denied TCP: "   #--log-level 7
        iptables -A LOG_REJECT -p udp -m limit --limit $LoggingLimit -j LOG --log-prefix "$LoggingPrefix Denied UDP: "   #--log-level 7
        iptables -A LOG_REJECT -p icmp -m limit --limit $LoggingLimit -j LOG --log-prefix "$LoggingPrefix Denied ICMP: "  #--log-level 7

    Update: I found a thread that has the same symptoms as mine; apparently it's a kernel bug. I am using a VPS, so could anyone point me to how to upgrade my kernel or apply a workaround? I couldn't find a 2.6.34 kernel listed in apt-cache. Thread: http://www.linode.com/forums/viewtopic.php?t=5533
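
    One quick way to separate the rsyslog side from the kernel side is to inject a matching message by hand with logger(1); it goes through syslog, so if the filter is right the line should land in the target file regardless of what the kernel is doing (a sketch, assuming the stock Ubuntu rsyslog loads /etc/rsyslog.d/*.conf):

        # Hand-feed a message that matches the :msg filter, bypassing the kernel entirely
        logger "[IPTABLES] Denied TCP: test entry from logger"
        tail /var/log/iptables.log
        # If the test line shows up, the filter is fine and the break is on the
        # kernel-to-rsyslog path (consistent with the kernel bug mentioned above).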


  • How to track things that SHOULD happen, but might not have

    - by Kamiel Wanrooij
    I am running into a couple of issues with some applications we've deployed and maintain. I have the feeling we have approached this with some anti-patterns up to now, but I would like to see how to make this more flexible and stable.

    In one situation, we have a server at a client which pushes data to us to parse every night (yes, Windows Task Scheduler). This is highly unstable, however, so about once a month this doesn't happen for reasons out of our control. This heavily impacts our business, since we then run with stale data.

    In another scenario we have a lot of background job processes that should be running. We already keep them up using bluepill (http://www.github.com/arya/bluepill), but obviously restarts happen, both automatic and manual, and people forget things or systems mess up.

    What I would like to track is events that should occur or should be available: the existence of a process, the execution of a program, or the creation/age of a file, and be alerted when they don't happen or exist. We develop most things in Ruby on Rails, use NewRelic, Bluepill and Munin, and run on Ubuntu.

    I've been toying with counting "ps aux | grep processname | wc -l" in Munin scripts, or capturing the age of a file and raising alerts after 24-26 hours, stuff like that. Is there better tooling to track things that should happen, and raise alerts when they don't?

    P.S. I know some things are suboptimal, like manually having to define bluepill configs for applications and then forgetting to do so. The same goes for the push-based approach of the first application: a dedicated daemon on the client side that we control, and whose connection to us we can track, might be a much better solution.
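
    For the file-age case, a cron-driven check is one low-tech option until better tooling turns up; a sketch, with a hypothetical path and threshold:

        #!/bin/sh
        # Alert if the nightly import hasn't been refreshed in 26 hours (93600 s).
        FILE=/data/import/latest.csv   # hypothetical path
        MAX_AGE=93600
        mtime=$(stat -c %Y "$FILE" 2>/dev/null || echo 0)   # GNU stat, as on Ubuntu
        age=$(( $(date +%s) - mtime ))
        if [ "$age" -gt "$MAX_AGE" ]; then
            echo "Stale import: $FILE is ${age}s old" | mail -s "nightly push missed" ops@example.com
        fi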


  • Mysql start fails with Operating System error 13

    - by curious
    I have XAMPP on my Ubuntu Lucid system and everything worked fine. But there seems to be some problem now and MySQL won't start. I had tried to recover a few Drupal databases and so copied the raw files into the /opt/lampp/var/mysql folder like all the other database folders, and I guess that could have caused the problem. I am pasting the last few lines of the error log. Someone please help me out.

        100814 15:17:47 mysqld_safe Starting mysqld daemon with databases from /opt/lampp/var/mysql
        100814 15:17:47 [Note] Plugin 'FEDERATED' is disabled.
        100814 15:17:47 [ERROR] Can't open shared library 'libpbxt.so' (errno: 0 API version for STORAGE ENGINE plugin is too different)
        100814 15:17:47 [Warning] Couldn't load plugin named 'PBXT' with soname 'libpbxt.so'.
        100814 15:17:48 InnoDB: Operating system error number 13 in a file operation.
        InnoDB: The error means mysqld does not have the access rights to
        InnoDB: the directory.
        InnoDB: File name /opt/lampp/var/mysql/ibdata1
        InnoDB: File operation call: 'open'.
        InnoDB: Cannot continue operation.
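
    Operating system error 13 is EACCES, so the freshly copied files most likely carry the wrong owner or mode for the user mysqld runs as. A sketch of the usual fix; check the actual user first, since XAMPP builds have used both 'mysql' and 'nobody':

        # See which user the XAMPP mysqld runs (or tried to run) as
        ps aux | grep mysqld
        # Hand the data directory back to that user (assuming it's 'mysql')
        sudo chown -R mysql:mysql /opt/lampp/var/mysql
        sudo /opt/lampp/lampp stopmysql
        sudo /opt/lampp/lampp startmysql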


  • Noob tab widget example not running

    - by michbeck
    Hey all, I'm trying to reproduce the TabWidget example (http://developer.android.com/resources/tutorials/views/hello-tabwidget.html). I'm not really sure what the problem is: I get no errors while compiling, but I cannot see the application on the emulator's screen :-/ It would be excellent if anyone could have a look at my classes and tell me what my mistake is. I've packed my project here: http://etanto.com/TabTest.zip

    Thanks a lot in advance, folks!
    michbeck

    Here's the console dump from the run:

        [2010-06-10 09:18:34 - TabTest] Launching a new emulator with Virtual Device 'Virtual1'
        [2010-06-10 09:18:35 - TabTest] New emulator found: emulator-5554
        [2010-06-10 09:18:35 - TabTest] Waiting for HOME ('android.process.acore') to be launched...
        [2010-06-10 09:19:05 - TabTest] WARNING: Application does not specify an API level requirement!
        [2010-06-10 09:19:05 - TabTest] Device API version is 8 (Android 2.2)
        [2010-06-10 09:19:05 - TabTest] HOME is up on device 'emulator-5554'
        [2010-06-10 09:19:05 - TabTest] Uploading TabTest.apk onto device 'emulator-5554'
        [2010-06-10 09:19:05 - TabTest] Installing TabTest.apk...
        [2010-06-10 09:19:22 - TabTest] Success!
        [2010-06-10 09:19:22 - TabTest] \TabTest\bin\TabTest.apk installed on device
        [2010-06-10 09:19:22 - TabTest] Done!
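
    Worth noting: the dump ends after the install with no "Starting activity" line, which often means no activity in the manifest carries the MAIN/LAUNCHER intent filter, so there is nothing for the launcher (or the ADT plugin) to start. One way to probe from the command line; the component name here is a guess based on the project name:

        # Try to launch the activity explicitly and read the error, if any
        adb shell am start -n com.example.tabtest/.TabTest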


  • Understanding ulimit -u

    - by tripleee
    I'd like to understand what's going on here.

        linvx$ ( ulimit -u 123; /bin/echo nst )
        nst
        linvx$ ( ulimit -u 122; /bin/echo nst )
        -bash: fork: Resource temporarily unavailable
        Terminated
        linvx$ ( ulimit -u 123; /bin/echo one; /bin/echo two; /bin/echo three )
        one
        two
        three
        linvx$ ( ulimit -u 123; /bin/echo one & /bin/echo two & /bin/echo three )
        -bash: fork: Resource temporarily unavailable
        Terminated
        one

    I speculate that the first 122 processes are consumed by bash itself, and that the remaining ulimit governs how many concurrent processes I am allowed to have. The documentation is not very clear on this. Am I missing something?

    More importantly, for a real-world deployment, how can I know what sort of ulimit is realistic? It's a long-running daemon which spawns worker threads on demand and reaps them when the load decreases. I've had it spin the server to its death a few times. The most important limit is probably memory, which I have now limited to 200M per process, but I'd like to figure out how I can enforce a limit on the number of children. (The program does allow me to configure a maximum, but how do I know there are no bugs in that part of the code?)
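
    A point that makes the numbers less mysterious: ulimit -u caps the total number of processes belonging to the user, not just the children of this one shell, so everything else you have running is already charged against it. A quick way to see the current count (a sketch):

        # Roughly the number each new fork() is checked against
        ps -u "$USER" -o pid= | wc -l
        # The three sequential echos fit because each exits before the next forks;
        # the backgrounded ones all exist at once, which is what breaks the limit.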


  • What is the best way to make versioned files available for download

    - by Saif Bechan
    I have a small PHP framework which I want to make available for download. It lives in a git repository, but the latest commit is not always the version I want to offer for download. Is there some place I can make specific versions available for download?

    Another thing about this framework is that I release additional components for it, which also have their own versions. Is there somewhere I can put the whole project so that people can browse through everything and download what they need, or should I build this myself?
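
    git itself covers the "not always the latest commit" part: tag each release, then export a tarball from the tag rather than from the branch tip (a sketch; the names are illustrative). Hosts like GitHub will also offer per-tag downloads automatically.

        # Mark the release in the repository
        git tag -a v1.0.0 -m "Release 1.0.0"
        # Export exactly that version, regardless of what the branch has moved on to
        git archive --prefix=myframework-1.0.0/ v1.0.0 | gzip > myframework-1.0.0.tar.gz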


  • mysqldump isn't able to export a specific database, phpMyAdmin crashes

    - by Devils Child
    I'm experiencing problems with a database on my server (note: all other databases work fine). Once I try to export it with mysqldump I get this error:

        # mysqldump -u root -pXXXXXXXXX databasename > /root/databasename.sql
        mysqldump: Couldn't execute 'show table status like 'apps'': Lost connection to MySQL server during query (2013)

    Also, phpMyAdmin throws an error when selecting this database and immediately logs out. However, the web site which uses this database works fine, and I can also execute SELECT statements on the table named "apps" from the MySQL shell. I tried restarting the MySQL daemon as well as REPAIR DATABASE and REPAIR TABLE, but the problem persists. I had this problem before; then it disappeared somehow without me doing anything to resolve it. Now the problem is back and I'm simply unable to create a backup of this database.

    Used software:

    - Debian 6.0.7 x64
    - MySQL 5.1.66-0

    MySQL version:

        mysql> SHOW VARIABLES LIKE "%version%";
        +-------------------------+-------------------+
        | Variable_name           | Value             |
        +-------------------------+-------------------+
        | protocol_version        | 10                |
        | version                 | 5.1.66-0+squeeze1 |
        | version_comment         | (Debian)          |
        | version_compile_machine | x86_64            |
        | version_compile_os      | debian-linux-gnu  |
        +-------------------------+-------------------+
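
    Error 2013 on SHOW TABLE STATUS usually means the server process itself died while computing the status row; InnoDB can crash on corrupt table statistics even while plain SELECTs still work. Two probes that narrow it down (the log path is the Debian default and may differ):

        # Does touching the table's metadata crash the server?
        mysql -u root -p databasename -e "CHECK TABLE apps;"
        # Look for signal/assertion output around the time of the failure
        tail -n 50 /var/log/mysql/error.log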


  • Teamcity and Grails

    - by WaZ
    Hi there,

    My current requirement is: I have to package my Grails app and use TeamCity for continuous builds. The only problem is that the build agents don't have Groovy and Grails installed (don't ask why). I want to package my app together with the Groovy and Grails directories and check everything in to Git, so that there is no external dependency and the package has everything needed to run the app. Can anybody please help? Please let me know if you want me to rephrase my question.
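
    One way to make the build self-contained is to check the Grails distribution into the repository and point the build at it before packaging; a sketch of a TeamCity command-line build step, where the tools/grails path is illustrative:

        # Use the Grails checked into the repo instead of an agent-wide install
        export GRAILS_HOME="$PWD/tools/grails"
        export PATH="$GRAILS_HOME/bin:$PATH"
        grails clean
        grails war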


  • Sharing a fabfile across multiple projects

    - by Matthew Rankin
    Fabric has become my deployment tool of choice both for deploying Django projects and for initially configuring Ubuntu slices. However, my current workflow with Fabric isn't very DRY, as I find myself:

    - copying the fabfile.py from one Django project to another, and
    - modifying the fabfile.py as needed for each project (e.g., changing the webserver_restart task from Apache to Nginx, configuring the host and SSH port, etc.).

    One advantage of this workflow is that the fabfile.py becomes part of my Git repository, so between the fabfile.py and the pip requirements.txt I have a recreatable virtualenv and deployment process. I want to keep this advantage while becoming more DRY. It seems that I could improve my workflow by:

    - being able to pip install the common tasks defined in the fabfile.py, and
    - having a fab_config file containing the host configuration information for each project, overriding any tasks as needed.

    Any recommendations on how to increase the DRYness of my Fabric workflow?


  • What steps should I take to secure Tomcat 6.x?

    - by PAS
    I am in the process of setting up a new Tomcat deployment, and want it to be as secure as possible. I have created a 'jakarta' user and have jsvc running Tomcat as a daemon.

    - Any tips on directory permissions and such to limit access to Tomcat's files?
    - I know I will need to remove the default webapps (docs, examples, etc.); are there any best practices I should be using here?
    - What about all the config XML files? Any tips there?
    - Is it worth enabling the security manager so that webapps run in a sandbox? Has anyone had experience setting this up?
    - I have seen examples of people running two instances of Tomcat behind Apache. It seems this can be done using mod_jk or with mod_proxy. Any pros/cons of either? Is it worth the trouble?

    In case it matters, the OS is Debian lenny. I am not using apt-get because lenny only offers Tomcat 5.5 and we require 6.x. Thanks!
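
    On the permissions question, the usual pattern is: the Tomcat tree owned by root, readable but not writable by the runtime user, with only the directories Tomcat must write handed over. A sketch, assuming an install under /opt/tomcat and the 'jakarta' user from above:

        # Runtime user may read but not modify the installation
        sudo chown -R root:jakarta /opt/tomcat
        sudo chmod -R g-w,o-rwx /opt/tomcat
        # Only the writable working directories belong to the runtime user
        sudo chown -R jakarta /opt/tomcat/logs /opt/tomcat/temp /opt/tomcat/work
        # Drop the default webapps you won't serve
        sudo rm -rf /opt/tomcat/webapps/docs /opt/tomcat/webapps/examples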


  • How to auto-synchronize files with a network drive on Windows XP?

    - by stephenmm
    Windows XP: I would like to automatically synchronize files between a local drive and a network drive. I am aware of Windows Briefcase, but it is very slow and I have to tell it to synchronize. I really like the way Dropbox does its synchronization, as it is almost instantaneous; it is very impressive. I would just use Dropbox, but I cannot install it on the remote machine.

    Is there some tool, or script I can create, that will watch a particular folder for any changes and then sync those changes to the networked drive automatically and nearly instantaneously?

    CLARIFICATION: I would like this tool/script to be a daemon that starts when Windows starts and continually monitors a folder for any changes to its contents. Once it observes changes in the source or the destination, it synchronizes the files that changed (very similar to the way Dropbox works). I have a good idea of how I would do this in a Perl script, and if no tool exists that does this I will write it myself in Perl. If someone has already done this, can they share the script?
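
    robocopy (not shipped with XP itself, but available in the Windows Server 2003 Resource Kit Tools) gets surprisingly close to this with its monitoring switches; a sketch with placeholder paths:

        :: Mirror the local folder to the share; keep running, re-copying once
        :: at least 1 change has been seen and at least 1 minute has passed.
        robocopy C:\work \\server\share /MIR /MON:1 /MOT:1

    Note it only watches the source side, so this is one-way rather than Dropbox-style bidirectional sync.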


  • Is Android IPC plumbing exposed in any official and/or supported way?

    - by mathrick
    I'm interested in knowing how much the IPC mechanisms are meant to be exposed to the outside world. That is, if I wanted to impersonate a dalvik VM instance without having my app actually written in Java, am I allowed to do so, or will the protocol change the next time I look away from the screen? If it's allowed, what are the stability guarantees or lack thereof? Is there anything like documentation, or am I supposed just to read the fine sources on android.git.kernel.org? The purpose of it all would be to write apps in !Java languages while retaining the ability to construct GUIs. I don't care or mind if the code is technically inside a dalvik process as a JNI callout, what I'm interested in is "if I'm really good at pretending I'm Java over the wire, can I do everything actual Java code can? Or is there something that's only available as Java bytecode and nothing else?"


  • Finding which files were "fixed", and how many times, between two specific dates using Trac

    - by mkafkas
    I need to find out which files were fixed or changed due to a bug, and how many times, between two specific dates in an open source project which uses Trac. I selected the WebKit project for that purpose (https://trac.webkit.org/), but it could be any open source project. What can I do for that? How do I start? Do I have to use version control systems like svn or git for integration? I'm kind of a newbie with these bug-tracking and issue-tracking systems.
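
    Trac links tickets to changesets, but the per-file history lives in version control, so one pragmatic starting point is to walk the repository log between the two dates and count the paths touched. A sketch against WebKit's public Subversion repository; the URL and date range are illustrative:

        # Verbose log (-v lists changed paths) for all commits between two dates
        svn log -v -r '{2010-01-01}:{2010-03-31}' http://svn.webkit.org/repository/webkit/trunk > log.txt
        # Rank files by how often they were modified in that window
        grep '^   M ' log.txt | sort | uniq -c | sort -rn | head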


  • What should I do or learn to better prepare myself for a co-op position?

    - by Chris Vinz
    I'm currently taking computer systems at a technical institute and I will start looking for a co-op job by next September. Since summer vacation is only a few weeks away, I was wondering what I should learn or do to help me land a job and do well in it. I'm pretty sure I'm ahead of most of my classmates, since I got around a 1.5-year "head start." For now, I'm planning to learn how to use source control (git, for no reason really) and was actually thinking of learning Scheme through SICP and maybe building something nice with it at the end. On the other hand, I'm wondering if it's better to expand on what I know right now, and I'm thinking of C++ since I enjoy it a lot more than others like Java. Can I get advice on this? Thanks!


  • version control + continuous integration with Flex + Ruby or Django

    - by user306584
    Trying to pick version control, continuous integration, and hosting for a smallish Flex + Ruby or Django project.

    Questions:

    - Version control: I've used SVN and CVS in the past. I hear great things about git. Not sure what to pick.
    - Continuous integration: I've heard good things about Hudson and CruiseControl. Not sure what to pick.
    - Hosting: is my own server the only way to go? Are there decent cloud options that are not too expensive? Or should I look for some free hosting service?

    Thank you for your help!
    f


  • Simple Version Control

    - by JM01
    We work on a lot of small website projects. There are three of us in different physical locations. I would like a system that is very simple, where the main concern is checking out and checking in web files (PHP, CSS, images, JS) so that we don't accidentally overwrite each other's code. We also need a way to sync our local file systems with the files on the webserver and with each other. Rolling back to older versions is nice, but features like branching and merging are not important. It seems like git may be overkill for our purpose, or maybe not. Can you recommend anything?
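
    For what it's worth, git works fine in a plain centralized style that ignores branching entirely: one shared bare repository that everyone pulls from and pushes to. A sketch, with an illustrative server path:

        # One-time: each person clones the shared repository
        git clone ssh://server/srv/git/site.git
        # Day to day: pick up teammates' changes, then publish your own
        git pull
        git push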


  • How can I have a teammate restart the Heroku server from his machine

    - by josh
    I have a Rails app up on Heroku. Sometimes the server bombs out and I have to go to the console and execute heroku restart so that the servers get restarted. This seems to fix the problem. However, I am not at my machine all the time, and I would like a team member to have this capability as well. For this to happen, what does he need to do? Does he first need access to the GitHub repository, so that he can push and pull code, and then install Heroku's tools on his machine? Can this be done without GitHub? Can he just install the Heroku client?
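
    With the heroku gem of that era, collaborator access is granted per app and is independent of GitHub (the repository Heroku deploys from lives on Heroku itself). A sketch, with a placeholder app name:

        # Teammate's machine: install the CLI
        gem install heroku
        # App owner, once: grant the teammate access to the app
        heroku sharing:add teammate@example.com --app yourappname
        # Teammate, whenever the server wedges:
        heroku restart --app yourappname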


  • How do you install cocos2d-iphone on the Mac?

    - by johnfromberkeley
    There are no good instructions for installing cocos2d for iPhone on the Mac. I downloaded the current build from git, a folder called "cocos2d-iphone-0.99.1", and put this folder in /Developer/Library. Q: is this right?

    I tried running the file called "install_template.sh"; it said the templates were already installed. Instead, I manually dragged the template folders where they belong, and they do appear in Xcode's "New Project" dialog. But when I create a new cocos2d project, I see all these red links for project files instead of the regular black links, and when I try to open them in the Finder, nothing happens. I can tell that something is not linked. Can someone please walk me through this? Thanks!
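
    The red (missing) files suggest the manually dragged templates still point at paths the installer script would normally wire up. One common reset is to delete the stale user templates and re-run the script from the source folder; a sketch for Xcode 3's user-template location (back up anything you care about first):

        # Remove the previously installed cocos2d user templates
        rm -rf ~/Library/Application\ Support/Developer/Shared/Xcode/Project\ Templates/cocos2d*
        # Re-run the installer from wherever the source lives
        cd /Developer/Library/cocos2d-iphone-0.99.1 && ./install_template.sh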


  • What is the use of commit messages?

    - by eteubert
    Hi folks,

    I struggled with asking this question, but here it is. I have been using source control for several years, across multiple projects and different systems (svn, hg, git), and I have learned how to improve my messages by following guidelines etc. But as far as I can remember, I never ever look at them afterwards. So... how do you profit from your own commit messages? When I need to go back because I smashed something and need a fresh start, I usually just go back to the latest "node" (where I started or merged a branch). Do I write those messages just for people monitoring the project who are curious about what is going on?

    Regards
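
    Commit messages tend to pay off through search and code archaeology rather than re-reading; for example, with git (file paths illustrative):

        # Find the commit where a behavior changed, by message text
        git log --grep="timeout"
        # See which commit, and therefore which message, last touched each line
        git blame app/models/user.rb
        # Skim the story of one file
        git log --oneline -- app/models/user.rb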


  • Does Mac OS X throttle the RATE of socket creation?

    - by pbhogan
    This may seem programming related, but this is an OS question. I'm writing a small high-performance daemon that takes thousands of connections per second. It's working fine on Linux (specifically Ubuntu 9.10 on EC2). On Mac OS X, if I throw a few thousand connections at it (roughly 16350) in a benchmark that simply opens a connection, does its thing and closes the connection, the benchmark program hangs for several seconds waiting for a socket to become available before continuing (or timing out in the process). I used both Apache Bench and Siege (to make sure it wasn't the benchmark application).

    So why/how is Mac OS X limiting the RATE at which sockets can be used, and can I stop it from doing this? Or is there something else going on?

    I know there is a file descriptor limit, but I'm not hitting that. There is no error on accepting a socket; it simply hangs for a while after the first (roughly) 16000, waiting, I assume, for the OS to release a socket. This shouldn't happen, since all the prior sockets are closed at that point. They're supposed to become available at the rate they're closed, and they do on Ubuntu, but there seems to be some kind of multi-second (5-10?) delay on Mac OS X. I tried tweaking ulimit every which way. Nada.
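
    One plausible culprit is ephemeral-port exhaustion rather than a rate limit: a closed TCP connection lingers in TIME_WAIT for 2*MSL, and the default ephemeral port range on Mac OS X (49152-65535) is about 16K ports, suspiciously close to the ~16350 observed. A diagnostic sketch; shrinking MSL recycles ports faster, but treat it as a test-box experiment, not a fix:

        # Ephemeral port range (roughly 16K ports by default)
        sysctl net.inet.ip.portrange.first net.inet.ip.portrange.last
        # Maximum segment lifetime in milliseconds; TIME_WAIT lasts twice this
        sysctl net.inet.tcp.msl
        sudo sysctl -w net.inet.tcp.msl=1000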


  • Deploy tracking with Ruby on Rails and Capistrano

    - by TK
    Just as every commit has a reason and purpose, I think each deploy has a purpose and reason. Source code commits have a comment, but deploys don't have anything. How do I automatically record a reason and purpose for each deploy? I need to keep a record of:

    - Who deployed, to where, and at what time.
    - Why deploy? Bug fixes? Feature update? An emergency fix not on the iteration plan?
    - Which git or svn ref was used?

    Has anybody felt the need for this kind of system? How do you feel about my approach? How can I achieve my goal? I'm currently using Capistrano for deployment.
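
    As a crude starting point, a shell wrapper around cap deploy can capture who/why/what before handing off; a Capistrano before/after hook would be the more idiomatic home for the same record, but the shape is identical (a sketch; the log destination is arbitrary):

        #!/bin/sh
        # Usage: ./deploy.sh "reason for this deploy"
        ref=$(git rev-parse HEAD)
        echo "$(date -u '+%Y-%m-%d %H:%M:%S') user=$(whoami) ref=$ref reason=$1" >> deploys.log
        cap deploy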


  • Working with version control on a Drupal/CMS project

    - by Jens Ljungblad
    I was wondering how teams that develop sites using Drupal (or any other CMS) integrate version control (subversion, git or similar) into their workflow. You'd obviously want your custom code and theme files under version control, but when you use a CMS such as Drupal, a lot of the work consists of configuring modules and settings, all of which is stored in the database. So when you are a team of developers, how do you collaborate on a project like this? Dumping the database into a file and putting that file under version control might work, I guess, but when the site is live the client is constantly adding content, which makes syncing a bit problematic. I'd love to know how others are doing this.
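
    On the database side, Drupal's drush can at least make the snapshot step repeatable, so the configuration-bearing tables travel with the code; a sketch (the file path is arbitrary, and live content tables remain the part that fights you):

        # Snapshot the site database next to the code and commit it
        drush sql-dump --result-file=$PWD/db/dev.sql
        git add db/dev.sql && git commit -m "DB snapshot after module config changes"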


  • Missing the Rails 2.3.4 gem. Even though it's installed!

    - by Shereef
    Running Snow Leopard. Tried uninstalling and re-installing. Still getting the same error whenever I run a rake task.

        mbpro:redmine shereef$ ruby -v
        ruby 1.8.7 (2009-06-12 patchlevel 174) [i686-darwin10.0.0]
        mbpro:redmine shereef$ rails -v
        Rails 2.3.4
        mbpro:redmine shereef$ which rails
        /usr/local/bin/rails
        mbpro:redmine shereef$ gem -v
        1.3.5
        mbpro:redmine shereef$ which gem
        /usr/local/bin/gem
        mbpro:redmine shereef$ rake -v
        (in /Users/shereef/Documents/Code/BetterMeans/redmine)
        Missing the Rails 2.3.4 gem. Please gem install -v=2.3.4 rails, update your RAILS_GEM_VERSION setting in config/environment.rb for the Rails version you do have installed, or comment out RAILS_GEM_VERSION to use the latest version installed.
        mbpro:redmine shereef$ which rake
        /usr/bin/rake
        mbpro:redmine shereef$ $PATH
        -bash: /usr/local/bin:/usr/local/sbin:/usr/local/mysql/bin:/opt/local/bin:/opt/local/sbin:/opt/local/bin:/opt/local/sbin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/local/git/bin:/usr/X11/bin: No such file or directory
        mbpro:redmine shereef$
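
    One thing the transcript shows: which rake resolves to /usr/bin/rake (the system copy) while ruby and gem resolve to /usr/local/bin, so rake may be running under a different Ruby whose gem path has no Rails 2.3.4. A hedged way to test and route around the mismatch:

        # Which Rails does the /usr/local toolchain actually see?
        /usr/local/bin/gem list rails
        # Run rake under the /usr/local ruby, so it loads that ruby's gems
        /usr/local/bin/ruby -S rake -v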


  • Protocol (or service publish/discovery) to detect devices in network

    - by Gobliins
    We connect some embedded devices in a network. What I am looking for now is a way to find the devices' IPs and identify them. We work with Windows PCs, and I am about to write a C# tool that should do this. I thought about:

    - sending a UDP broadcast whose ack contains the device's IP, which would mean the device needs a daemon running to assign an IP to itself;
    - running a service (like a printer does) on the device, and on the PC just looking the service up.

    I read about things like APIPA, zeroconf, IPv4 link-local, Bonjour, DNS-SD, and mDNS; they can automatically assign IPs and publish services in a network. My questions:

    - Can someone recommend what would be good for my task?
    - The protocol or service should be light on resources (memory/CPU usage).
    - Are there standard protocols to use?
    - Is DNS a good idea, or would it be too resource-consuming just for finding a device's IP?
    - It should also work when no DHCP servers are around.

    EDIT: To clarify a bit: the IP configuration is automatic. The problem to focus on is how to tell the PC which IP in the network (or on a direct connection; in that case there would only be one) belongs to the device (identity).
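
    Since Bonjour runs on Windows (Apple's Bonjour for Windows / Bonjour SDK), the dns-sd tool is a quick way to feel out the zeroconf route before writing any C#: the device advertises a service, the PC browses and resolves it, and none of this needs a DHCP server (a sketch; the _mydevice._tcp service type is made up):

        :: On the device (or a second PC, for testing): advertise on port 8080
        dns-sd -R "Device42" _mydevice._tcp local 8080
        :: On the Windows PC: discover all instances, then resolve one to host/IP
        dns-sd -B _mydevice._tcp local
        dns-sd -L "Device42" _mydevice._tcp local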

