Search Results

Search found 467 results on 19 pages for 'brad r'.


  • My current iptable configuration doesn't work [on hold]

    - by Brad
    sudo chkconfig iptables off
    /etc/init.d/iptables on

    ### Clear/flush iptables
    sudo iptables -F
    sudo iptables -P INPUT ACCEPT
    sudo iptables -P OUTPUT ACCEPT
    sudo iptables -P FORWARD ACCEPT

    ### Allow SSH
    iptables -A INPUT -i eth0 -p tcp --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT
    iptables -A OUTPUT -o eth0 -p tcp --sport 22 -m state --state ESTABLISHED -j ACCEPT

    ### Allow YUM updates
    sudo iptables -A OUTPUT -o eth0 -p tcp --dport 80 --match owner --uid-owner 0 --state NEW,ESTABLISHED -j ACCEPT
    sudo iptables -A OUTPUT -o eth0 -p tcp --dport 443 --match owner --uid-owner 0 --state NEW,ESTABLISHED -j ACCEPT

    ### Add your rules from the link above, here
    # ftp,smtp,imap,http,https,pop3,imaps,pop3s
    sudo iptables -A INPUT -i eth0 -p tcp -m multiport --dports 21,25,143,80,443,110,993,995 -m state --state NEW,ESTABLISHED -j ACCEPT
    sudo iptables -A OUTPUT -o eth0 -p tcp -m multiport --sports 21,25,143,80,110,443,993,995 -m state --state NEW,ESTABLISHED -j ACCEPT

    ## allow dns
    sudo iptables -A OUTPUT -p udp -o eth0 --dport 53 -j ACCEPT && sudo iptables -A INPUT -p udp -i eth0 --sport 53 -j ACCEPT

    # handling pings
    sudo iptables -A INPUT -p icmp --icmp-type echo-request -j ACCEPT && sudo iptables -A OUTPUT -p icmp --icmp-type echo-reply -j ACCEPT
    sudo iptables -A OUTPUT -p icmp --icmp-type echo-request -j ACCEPT && sudo iptables -A INPUT -p icmp --icmp-type echo-reply -j ACCEPT

    # manage ddos attacks
    sudo iptables -A INPUT -p tcp --dport 80 -m limit --limit 25/minute --limit-burst 100 -j ACCEPT

    ## Implement some logging so that we know what's getting dropped
    sudo iptables -N LOGGING
    sudo iptables -A INPUT -j LOGGING
    sudo iptables -A LOGGING -m limit --limit 2/min -j LOG --log-prefix "IPTables Packet Dropped: " --log-level 7
    sudo iptables -A LOGGING -j DROP

    # once a rule affects traffic then it is no longer managed
    # so if the traffic has not been accepted, block it
    sudo iptables -A INPUT -j DROP
    sudo iptables -I INPUT 1 -i lo -j ACCEPT
    sudo iptables -A OUTPUT -j DROP

    # allow only internal port forwarding
    sudo iptables -A FORWARD -i eth0 -o eth1 -j ACCEPT
    sudo iptables -P FORWARD DROP

    # create an iptables config file
    sudo iptables-save > /root/dsl.fw

    ### Append the following to the rc.local file
    sudo nano /etc/rc.local
    ####--- /sbin/iptables-restore < sudo /root/dsl.fw
    ####--- /etc/init.d/iptables save

    ## check to see if this setting is working great.
    sudo service iptables restart
    ## log out/in testing
    sudo chkconfig iptables on

    What is the problem with this setup? If I restart the server it doesn't let me back in over SSH, and there may be a problem with yum.

    Original source of information: https://gist.github.com/Jonathonbyrd/1274837#file-instructions
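    Two hedged observations on the rules as posted, offered as a sketch rather than a confirmed diagnosis: --state is an option of the state module, so the YUM rules need an explicit -m state once another match (owner) is already loaded, and the rc.local line puts sudo inside the redirection even though rc.local already runs as root:

        # YUM rules with the state module loaded explicitly
        sudo iptables -A OUTPUT -o eth0 -p tcp --dport 80 -m owner --uid-owner 0 -m state --state NEW,ESTABLISHED -j ACCEPT
        sudo iptables -A OUTPUT -o eth0 -p tcp --dport 443 -m owner --uid-owner 0 -m state --state NEW,ESTABLISHED -j ACCEPT
        # rc.local runs as root, so the restore line needs no sudo
        /sbin/iptables-restore < /root/dsl.fw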

  • Error headers: ap_headers_output_filter() after putting cache header in htaccess file

    - by Brad
    Receiving error:

        [debug] mod_headers.c(663): headers: ap_headers_output_filter()

    after I included this within the htaccess file:

        # 6 DAYS
        <FilesMatch "\.(ico|pdf|flv|jpg|jpeg|png|gif|js|css|swf)$">
        Header set Cache-Control "max-age=518400, public"
        </FilesMatch>
        # 2 DAYS
        <FilesMatch "\.(xml|txt)$">
        Header set Cache-Control "max-age=172800, public, must-revalidate"
        </FilesMatch>
        # 2 HOURS
        <FilesMatch "\.(html|htm)$">
        Header set Cache-Control "max-age=7200, must-revalidate"
        </FilesMatch>

    Any help is appreciated as to what I could do to fix this?
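    Worth noting as a hedged aside: a [debug] line from mod_headers is trace output rather than a failure, so it only appears while LogLevel is set to debug. One way to confirm the headers are actually being applied (the URL is a placeholder, not from the post):

        # -I fetches only the response headers; check for the Cache-Control
        # value configured for that file type above.
        curl -sI http://example.com/logo.png | grep -i cache-control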

  • What tells initramfs or the Ubuntu Server boot process how to assemble RAID arrays?

    - by Brad
    The simple question: how does initramfs know how to assemble mdadm RAID arrays at startup?

    My problem: I boot my server and get:

        Gave up waiting for root device.
        ALERT! /dev/disk/by-uuid/[UUID] does not exist. Dropping to a shell!

    This happens because /dev/md0 (which is /boot, RAID 1) and /dev/md1 (which is /, RAID 5) are not being assembled correctly. What I get is /dev/md0 isn't assembled at all. /dev/md1 is assembled, but instead of using /dev/sda2, /dev/sdb2, /dev/sdc2, and /dev/sdd2, it uses /dev/sda, /dev/sdb, /dev/sdc, /dev/sdd.

    To fix this and boot my server I do:

        (initramfs) mdadm --stop /dev/md1
        (initramfs) mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
        (initramfs) mdadm --assemble /dev/md1 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
        (initramfs) exit

    And it boots properly and everything works. Now I just need the RAID arrays to assemble properly at boot so I don't have to manually assemble them. I've checked /etc/mdadm/mdadm.conf and the UUIDs of the two arrays listed in that file match the UUIDs from mdadm --detail /dev/md[0,1].

    Other details: Ubuntu 10.10, GRUB2, mdadm 2.6.7.1

    UPDATE: I have a feeling it has to do with superblocks. mdadm --examine /dev/sda outputs the same thing as mdadm --examine /dev/sda2. mdadm --examine /dev/sda1 seems to be fine because it outputs information about /dev/md0. I don't know if this is the problem or not, but it seems to fit with /dev/md1 getting assembled with /dev/sd[abcd] instead of /dev/sd[abcd]2. I tried zeroing the superblock on /dev/sd[abcd]. This removed the superblock from /dev/sd[abcd]2 as well and prevented me from being able to assemble /dev/md1 at all. I had to mdadm --create to get it back. This also put the superblocks back to the way they were.
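    A commonly suggested next step on Ubuntu, offered as a sketch rather than a confirmed fix for this box: make sure the initramfs itself carries the current array definitions, since it assembles the arrays long before the root filesystem (and its copy of mdadm.conf) is readable.

        # Append ARRAY lines generated from the running arrays, then rebuild
        # the initramfs so its embedded mdadm.conf matches reality.
        sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
        sudo update-initramfs -u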

  • Basic Auth on DirectoryIndex Only

    - by Brad
    I am trying to configure basic auth for my index file, and only my index file. I have configured it like so:

        <Files index.htm>
        Order allow,deny
        Allow from all
        AuthType Basic
        AuthName "Some Auth"
        AuthUserFile "C:/path/to/my/.htpasswd"
        Require valid-user
        </Files>

    When I visit the page, 401 Authorization Required is returned as expected, but the browser doesn't prompt for the username/password. Some further inspection has revealed that Apache is not sending the WWW-Authenticate header.

        GET http://myhost/ HTTP/1.1
        Host: myhost
        Connection: keep-alive
        User-Agent: Mozilla/5.0 (Windows NT 5.1) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.100 Safari/534.30
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Encoding: gzip,deflate,sdch
        Accept-Language: en-US,en;q=0.8
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3

        HTTP/1.1 401 Authorization Required
        Date: Tue, 21 Jun 2011 21:36:48 GMT
        Server: Apache/2.2.16 (Win32)
        Content-Length: 401
        Keep-Alive: timeout=5, max=100
        Connection: Keep-Alive
        Content-Type: text/html; charset=iso-8859-1

        <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
        <html><head>
        <title>401 Authorization Required</title>
        </head><body>
        <h1>Authorization Required</h1>
        <p>This server could not verify that you are authorized to access the document requested. Either you supplied the wrong credentials (e.g., bad password), or your browser doesn't understand how to supply the credentials required.</p>
        </body></html>

    Why is Apache doing this? How can I configure it to send that header appropriately? It is worth noting that this exact same set of directives works fine if I set it for a whole directory; it is only when I apply it to a directory index that it does not work. This is how I know my .htpasswd and such are fine. I am using Apache 2.2 on Windows. On another note, I found this listed as a bug in Apache 1.3, which leads me to believe that this is actually a configuration problem on my end.
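    A quick way to reproduce the missing challenge outside the browser (hostname as in the post):

        # A correct Basic Auth challenge pairs the 401 status with a
        # WWW-Authenticate: Basic realm="..." header; this prints both if present.
        curl -sI http://myhost/ | grep -i -e '^HTTP' -e 'www-authenticate'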

  • Ubuntu 10.04 with HD flash

    - by Brad Robertson
    Just noticed that 10.04 is out. My media server has been packed away for a few months, but I might dust it off and give 10.04 a shot, so I thought I'd see if anyone has any success stories with HD Flash in either Chrome or Firefox. I'm currently running Ubuntu 9.10 and it was a large enough pain to get VDPAU working with my Zotac Ion-ITX-C board (I eventually found an MPlayer PPA that had it compiled in). From reading the 10.04 docs it looks like this is standard now, but I'm wondering about streaming HD from, say, Flash or DivX. I've never been able to get HD Flash to play without it being extremely choppy, and I chalk this up to the lack of hardware-assisted decoding like VDPAU (a guess). My board certainly isn't a contender in CPU power or memory, which is why I've needed the hardware-accelerated decoding for HD videos in the past. Just wondering if anyone has had any success playing HD video online (Flash, DivX, or what have you).

  • How do I stop IIS from sending minutely GET requests to my proxied mongrel server?

    - by brad
    I have a Rails application on Windows Server 2008 running IIS 7.5. I am using Application Request Routing to send requests to the Mongrel server via IIS (I didn't want to set it up like this, but this is the environment I have been forced to use). IIS seems to send a GET request to the Mongrel server once every minute. This is not a huge deal, but it pollutes my logs and also creates a large amount of unwanted session data. I would really like to stop it from doing this. Is there a way?

  • DELL switch 6248 port and mac mapping using SNMP

    - by Brad
    I have a Dell 6248 switch. I connect some of my servers to it and want to know which server NIC is connected to which switch port. I tried using snmpwalk to get this information, but I can only get the MAC/IP mapping of my server NICs from the switch; I still can't get which switch port each one is connected to. I tried a tool called Managed Switch Port Mapping Tool, and it can show which switch port is connected to which NIC/IP. I used Wireshark to capture all of its SNMP packets but still can't find which SNMP OID returns this information. Does anyone know how to get this?
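    A hedged sketch of the usual SNMP route (the community string and switch IP below are placeholders): the mapping lives in the BRIDGE-MIB forwarding database, joined to interface indexes through the base-port table.

        # dot1dTpFdbPort: learned MAC address -> bridge port number
        snmpwalk -v2c -c public 192.0.2.1 .1.3.6.1.2.1.17.4.3.1.2
        # dot1dBasePortIfIndex: bridge port number -> ifIndex (joins to ifDescr)
        snmpwalk -v2c -c public 192.0.2.1 .1.3.6.1.2.1.17.1.4.1.2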

  • How do I set a postgresql password in pgpass.conf for the Administrator account on Windows Server 2008?

    - by brad
    I have a pgpass.conf file that works well for my default user. It is in C:/Users/myuser/AppData/Roaming/postgresql/pgpass.conf and reads like so:

        localhost:5432:*:postgres:password1

    I have a process that runs under the Administrator account. When I run whoami under this process I get nt authority/system. I want to be able to access the database from this process, but it gets stuck because it needs a password. I have tried putting the above pgpass.conf into C:/Users/Administrator/AppData/postgresql/pgpass.conf and C:/Users/Administrator/AppData/Roaming/postgresql/pgpass.conf, but it does not work. Is this the correct place for this file? Am I even able to do this as the Administrator? Unfortunately I cannot change the user that this process runs under.
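    A hedged observation rather than a confirmed answer: nt authority\system is the Local System account, not Administrator, and libpq looks for pgpass.conf under that account's %APPDATA%. For Local System that profile normally lives under systemprofile, so the path it would likely search is:

        C:\Windows\System32\config\systemprofile\AppData\Roaming\postgresql\pgpass.conf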

  • Supervisord appears to be running, but monitored programs aren't launched

    - by Brad Montgomery
    I've got supervisord 3.0a8 installed from the system package on Ubuntu 10.04 (64bit). The supervisor service appears to be running, but it's not launching the configured programs. Interestingly enough, this exact configuration is running on another system, and is working as expected. The main config file looks like this:

        ; /etc/supervisor/supervisord.conf
        [unix_http_server]
        chmod=0700
        file=/var/run/supervisor.sock

        [supervisord]
        logfile=/var/log/supervisor/supervisord.log
        childlogdir=/var/log/supervisor
        pidfile=/var/run/supervisord.pid

        [rpcinterface:supervisor]
        supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

        [supervisorctl]
        serverurl=unix:///var/run/supervisor.sock

        [include]
        files = /etc/supervisor/conf.d/*.conf

    A sample program config looks like this:

        ; /etc/supervisor/conf.d/sample.conf
        [program:sample]
        directory=/opt/sample
        command=/opt/sample/run.sh

    Where /opt/sample/run.sh is:

        #!/bin/bash
        while true; do
            T=`date`
            echo "[$T] Running!" >> /var/log/sample.log
            sleep 1
        done

    And, here's some additional information regarding the running instance of supervisord:

        root@myhost:~# supervisorctl version
        3.0a8
        root@myhost:~# which supervisorctl
        /usr/bin/supervisorctl
        root@myhost:~# which supervisord
        /usr/bin/supervisord
        root@myhost:~# supervisorctl status
        # NOTE that there's no output!
        root@myhost:~# supervisorctl avail
        root@myhost:~# service supervisor status
        is running
        root@myhost:~# ps aux | grep supervisor
        root  21740  0.1  0.4  40772 10056 ?      Ss  11:28  0:00 /usr/bin/python /usr/bin/supervisord
        root  21749  0.0  0.0   7624   932 pts/2  S+  11:28  0:00 grep --color=auto supervisor
        root@myhost:~# cat /var/log/supervisor/supervisord.log
        2012-04-26 11:28:22,483 CRIT Supervisor running as root (no user in config file)
        2012-04-26 11:28:22,536 INFO RPC interface 'supervisor' initialized
        2012-04-26 11:28:22,536 WARN cElementTree not installed, using slower XML parser for XML-RPC
        2012-04-26 11:28:22,536 CRIT Server 'unix_http_server' running without any HTTP authentication checking
        2012-04-26 11:28:22,539 INFO daemonizing the supervisord process
        2012-04-26 11:28:22,539 INFO supervisord started with pid 21740
        root@myhost:~# ll /etc/supervisor/conf.d/
        total 28
        drwxr-xr-x 2 root root 4096 2012-04-26 11:31 ./
        drwxr-xr-x 3 root root 4096 2012-04-25 18:38 ../
        -rw-r--r-- 1 root root   66 2012-04-26 11:31 sample.conf
        root@myhost:~# ll /opt/sample/
        total 12
        drwxr-xr-x 2 root root 4096 2012-04-26 11:32 ./
        drwxr-xr-x 4 root root 4096 2012-04-26 11:31 ../
        -rwxr-xr-x 1 root root   97 2012-04-26 11:32 run.sh*
        root@myhost:~# python
        Python 2.6.5 (r265:79063, Apr 16 2010, 13:57:41)
        [GCC 4.4.3] on linux2
        Type "help", "copyright", "credits" or "license" for more information.
        >>>

    Any help is greatly appreciated!
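    A hedged first step rather than a confirmed fix: if sample.conf was dropped into conf.d after supervisord started, the daemon won't see it until told to re-scan its configuration; if this 3.0a8 build lacks the reread/update commands, a plain service restart has the same effect.

        sudo supervisorctl reread    # re-parse config files, list what changed
        sudo supervisorctl update    # apply the changes and start new programs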

  • MySQL 5.5 on Windows server is horribly slow

    - by Brad
    I have had no luck getting MySQL 5.5 to be as fast as 5.1 or MariaDB on the exact same hardware/database/environment under Windows Server 2003 R2 or 2008 R2. My benchmarks from our application:

        MySQL 5.5 + CentOS 5.2 (XenServer virtual)  =  28 seconds (box is "busy", not buried)
        MariaDB (5.1) + Windows 2003 (physical box) = 130 seconds (box is 2% busy)
        MySQL 5.1 + Windows 2003 (physical box)     = 170 seconds (box is 2% busy)
        MySQL 5.5 + Windows 2003 (physical box)     = 305 seconds, as high as 600 (box is 2% busy)

    The only difference between these runs is the removal of skip-locking and the running of mysql_upgrade.exe to update some tables for stored procs on 5.5. Yes, I know it's a release candidate; I'm feeding that back to MySQL as well. No slow queries are logged; it doesn't think it's being slow, it just is. I'm going to start tearing into the queries themselves to see if the INSERT/SELECT plans have gone buggo on 5.5. Any help would be appreciated! Thanks
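    A hedged first diagnostic rather than a fix: several defaults changed between 5.1 and 5.5 (notably InnoDB becoming the default storage engine), so comparing the engine and durability-related variables across the two installs can be telling.

        mysql -u root -p -e "SHOW VARIABLES WHERE Variable_name IN
          ('default_storage_engine','innodb_buffer_pool_size',
           'innodb_flush_log_at_trx_commit','sync_binlog','table_open_cache')"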

  • Windows 2003 IIS FTP Server Migration w/ User Accounts

    - by Brad
    I'm trying to figure out the best way to migrate an FTP server from old hardware to new hardware. The server is on a domain, but not all the users set up on the server (to use FTP) are domain accounts; some are local to the server. For example, I have users both ways:

        domain\username
        machinename\username

    The new machine name will be different, so I need to copy all the files with permissions intact from the old server to the new server. Then I need to convert all the user accounts from the old server to the new server. Then I need to change the file permissions so that they are no longer oldserver\username but newserver\username. Can this all be accomplished with CACLS? Is there an easy way that perhaps I'm missing?

  • What parts should I get for an ASRock x58 Extreme motherboard

    - by Brad Gilbert
    I just received an ASRock X58 Extreme motherboard, for my post on this question. It was a 2009 Tom's Hardware recommended buy. It is a Core i7 motherboard with an X58 Express chipset, and it uses DDR3 RAM. What I want to know is what parts I should get to finish it off. I'm looking for some good bargains, because of a lack of funds. The most taxing game I will probably play on it is OpenTTD. The only parts I currently have that are compatible:

        A Dynex 400W power supply. It appears to be an ATX 2.1 power supply, with the addition of a -5V rail, apparently designed to be compatible with most ATX-style motherboards.
        Several PCI add-in cards: mostly 10/100 network cards, some sound cards, and some video cards with a VGA connector.
        Plenty of PATA drives: 8 GB - 80 GB hard drives, a dozen or so CD-ROM drives (only a handful of them are CD-RW), and one DVD-ROM drive.

    I have one LCD, with a 15-pin VGA connector, which I salvaged from the dump. The only thing wrong with it was some dead capacitors. It also has a stuck pixel.

  • Memcached Lagging

    - by Brad Dwyer
    Let me preface this by saying that this is a followup question to this topic. That was "solved" by switching from Solaris (SmartOS) to Ubuntu for the memcached server. Now we've multiplied load by about 5x and are running into problems again.

    We are running a site that is doing about 1000 requests/minute; each request hits Memcached with approximately 3 reads and 1 write. So load is approximately 65 requests per second. Total data in the cache is about 37M, and each key contains a very small amount of data (a JSON-encoded array of integers amounting to less than 1K). We have set up a benchmarking script on these pages and fed the data into StatsD for logging.

    The problem is that there are spikes where Memcached takes a very long time to respond. These do not appear to correlate with spikes in traffic. What could be causing these spikes? Why would memcached take over a second to reply? We just booted up a second server to put in the pool and it didn't make any noticeable difference in the frequency or severity of the spikes. This is the output of getStats() on the servers:

        Array
        (
            [-----------] => Array
                (
                    [pid] => 1364
                    [uptime] => 3715684
                    [threads] => 4
                    [time] => 1336596719
                    [pointer_size] => 64
                    [rusage_user_seconds] => 7924
                    [rusage_user_microseconds] => 170000
                    [rusage_system_seconds] => 187214
                    [rusage_system_microseconds] => 190000
                    [curr_items] => 12578
                    [total_items] => 53516300
                    [limit_maxbytes] => 943718400
                    [curr_connections] => 14
                    [total_connections] => 72550117
                    [connection_structures] => 165
                    [bytes] => 2616068
                    [cmd_get] => 450388258
                    [cmd_set] => 53493365
                    [get_hits] => 450388258
                    [get_misses] => 2244297
                    [evictions] => 0
                    [bytes_read] => 2138744916
                    [bytes_written] => 745275216
                    [version] => 1.4.2
                )
            [-----------:11211] => Array
                (
                    [pid] => 8099
                    [uptime] => 4687
                    [threads] => 4
                    [time] => 1336596719
                    [pointer_size] => 64
                    [rusage_user_seconds] => 7
                    [rusage_user_microseconds] => 170000
                    [rusage_system_seconds] => 290
                    [rusage_system_microseconds] => 990000
                    [curr_items] => 2384
                    [total_items] => 225964
                    [limit_maxbytes] => 943718400
                    [curr_connections] => 7
                    [total_connections] => 588097
                    [connection_structures] => 91
                    [bytes] => 562641
                    [cmd_get] => 1012562
                    [cmd_set] => 225778
                    [get_hits] => 1012562
                    [get_misses] => 125161
                    [evictions] => 0
                    [bytes_read] => 91270698
                    [bytes_written] => 350071516
                    [version] => 1.4.2
                )
        )

    Edit: Here is the result of a set and retrieve of 10,000 values.

    Normal:

        Stored 10000 values in 5.6118 seconds.  Average: 0.0006  High: 0.1958  Low: 0.0003
        Fetched 10000 values in 5.1215 seconds. Average: 0.0005  High: 0.0141  Low: 0.0003

    When spiking:

        Stored 10000 values in 16.5074 seconds.  Average: 0.0017  High: 0.9288  Low: 0.0003
        Fetched 10000 values in 19.8771 seconds. Average: 0.0020  High: 0.9478  Low: 0.0003
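    One hedged way to catch the spikes from the shell, independent of the PHP client (key name and host are placeholders; assumes GNU date and netcat are installed):

        # Time a single get round-trip against the memcached text protocol,
        # once per second; a spike shows up as a sudden jump in the output.
        while true; do
            start=$(date +%s.%N)
            printf 'get probe_key\r\nquit\r\n' | nc localhost 11211 > /dev/null
            date +%s.%N | awk -v s="$start" '{printf "round trip: %.4f s\n", $1 - s}'
            sleep 1
        done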

  • mod_proxy security

    - by brad
    I'm on Debian Lenny using Apache 2. In my proxy.conf I tried adding "Allow from localhost" as suggested in some other forums to get proxying to work. It didn't; it only worked if I said "Allow from all". My question is this: are there any security implications to this "Allow from all" directive? Most people were saying to make this as limited as possible, but "all" is the client, right? I want anyone, regardless of their IP, to be forwarded properly. Is there a better way to configure this?
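    A hedged open-proxy check worth running from an external host (server address is a placeholder): ask the server to fetch a third-party URL as if it were a forward proxy. A reverse-proxy-only setup should refuse; if it returns the page, "Allow from all" has effectively created an open proxy.

        # -x routes the request through the server as a proxy
        curl -sI -x http://your.server.ip:80 http://example.com/ | head -n1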

  • How do I get started with Chef?

    - by Brad Wright
    The Chef documentation is pretty bad, and Google isn't helping me. Can anyone point me at a decent article or something that would help me get started? My specific issues are:

        How do I get a client to read my configuration? chef-solo seems like the best start (I don't want to run an OpenID server or Merb).
        How do I configure Apache to serve Django? I already know how to do this via regular server configuration, but I figure an example Chef recipe would be a good start.
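    A minimal chef-solo invocation, offered as a sketch (the file names and paths are hypothetical, not from the post):

        # solo.rb tells chef-solo where cookbooks live; node.json sets the run list.
        #   solo.rb:    cookbook_path "/srv/chef/cookbooks"
        #   node.json:  { "run_list": ["recipe[apache2]"] }
        chef-solo -c solo.rb -j node.json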

  • Poor WAMP performance when using SMB/UNC Paths

    - by Brad
    I've configured a WAMP (Windows, Apache, MySQL and PHP) stack which, when configured to use local storage, takes 3-4 seconds to load. When I use an SMB/UNC share it takes 12-15 seconds to load. Here are the relevant lines in my httpd.conf:

        #DocumentRoot "//10.99.108.11/test_htdocs"
        #<Directory "//10.99.108.11/test_htdocs">
        #DocumentRoot "C:/www"
        #<Directory "C:/www">

    Is there performance tuning I can do on Windows Server 2008 R2 to improve performance, or is there another way to improve performance when using SMB?
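    A hedged tweak worth trying when the DocumentRoot sits on a network share, not a guaranteed fix: Apache's memory-mapping and sendfile optimizations are known to behave poorly over network filesystems, and both can be disabled in httpd.conf.

        # httpd.conf - safer defaults for an SMB/UNC DocumentRoot
        EnableMMAP Off
        EnableSendfile Off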

  • How do you use environment variables, such as %CommonProgramFiles%, in the PATH and have them recognized by services.exe?

    - by Brad Knowles
    I'm trying to add C:\Program Files\Common Files\xxx\xxx to the system PATH environment variable by appending %CommonProgramFiles%\xxx\xxx to the existing path. After rebooting, I open a command prompt and check the PATH. It expands correctly. However, when using Process Explorer from Sysinternals to view the Environment variables on services.exe, it shows the unexpanded version. Coincidentally, the paths using %SystemRoot% expand and are recognized just fine. I've tried altering the PATH through the Environment Variables window from System Properties and through direct Registry manipulation, neither seems to work. Is it possible to use other environment variables, besides %SystemRoot% in PATH and have services.exe understand it?

  • How do I get VDPAU working with Ubuntu 9.10?

    - by Brad Robertson
    What do I need to do to get MKV HD videos playing with VDPAU, and also Blu-ray discs? Lots of people say you need to compile the latest MPlayer (which I haven't had luck doing) for VDPAU. I found an MPlayer PPA that says it has VDPAU compiled in, so I'd like to use that. What packages do I need for playing MKV files and Blu-ray with the video decoding offloaded to my GPU? So far I haven't had any luck with any of the tutorials I've found. I'm just looking for a quick synopsis that will tell me what I'm looking for, as I'm kind of shooting in the dark. (I didn't know what VDPAU was until a few days ago.)
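    For reference, a hedged example of what a VDPAU invocation looks like once an MPlayer build with support is installed (requires an NVIDIA driver that exposes VDPAU; the file name is a placeholder):

        # -vo vdpau selects the VDPAU video output; -vc ffh264vdpau forces
        # hardware H.264 decode, the usual codec for HD MKV content.
        mplayer -vo vdpau -vc ffh264vdpau video.mkv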

  • java max heap size, how much is too much

    - by brad
    I'm having issues with a JRuby (Rails) app running in Tomcat. Occasionally page requests can take up to a minute to return (even though the Rails logs show the request was processed in seconds, so it's obviously a Tomcat issue). I'm wondering what settings are optimal for the Java heap size. I know there's no definitive answer, but I thought maybe someone could comment on my setup. I'm on a small EC2 instance which has 1.7g RAM. I have the following JAVA_OPTS:

        -Xmx1536m -Xms256m -XX:MaxPermSize=256m -XX:+CMSClassUnloadingEnabled

    My first thought is that Xmx is too high. If I only have 1.7gb and I allocated 1.5gb to Java, I feel like I'll get a lot of paging. Typically my Java process shows (in top) 1.1g resident memory and 2g virtual. I also read somewhere that setting Xms and Xmx to the same size will help, as it eliminates time spent on memory allocation. I'm not a Java person but I've been tasked with figuring out this problem and I'm trying to find out where to start. Any tips are greatly appreciated!!

    Update: I've started analyzing the garbage collection dumps using -XX:+PrintGCDetails. When I notice these occasional long load times, the GC logs go nuts. On the last one I did (which took 25s to complete) I had GC log lines such as:

        1720.267: [GC 1720.267: [DefNew: 27712K->16K(31104K), 0.0068020 secs] 281792K->254096K(444112K), 0.0069440 secs]
        1720.294: [GC 1720.294: [DefNew: 27728K->0K(31104K), 0.0343340 secs] 281808K->254080K(444112K), 0.0344910 secs]

    ...about 300 of them on a single request! Now, I don't totally understand why it's always GC'ing from ~28m down to 0 over and over.
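    A hedged starting point for a 1.7 GB instance rather than a tuned answer: leave headroom for the OS and Tomcat's native memory, and pin Xms to Xmx to avoid resize churn.

        # Assumption: nothing else memory-hungry runs on the box; adjust from
        # here while watching the -verbose:gc output.
        export JAVA_OPTS="-Xms512m -Xmx512m -XX:MaxPermSize=128m -verbose:gc -XX:+PrintGCDetails"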

  • need to delete files owned by apache - unsure how to do that

    - by Brad
    I'm running Apache on a RHEL server, and sometimes I need to delete files that I can't remove using my FTP program, because the FTP account I am logged into is not the apache user. I am on a Mac, and there must be a way to accomplish this from the terminal by SSH'ing into the server. What credentials would I need to SSH into the server and delete the files/folders owned by apache? This screen shot shows what I mean when I say the file is owned by the apache user/group: http://cl.ly/e2192e6aadc8e4688c33 Any help is appreciated.
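    A hedged sketch of the usual approach (host, account, and paths below are placeholders): any account that can SSH in and run sudo can remove files regardless of who owns them.

        ssh youruser@yourserver.example.com
        # remove the apache-owned files with elevated privileges...
        sudo rm -rf /var/www/html/cache/
        # ...or re-own them so the FTP account can manage them from now on
        sudo chown -R ftpuser:ftpuser /var/www/html/uploads/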

  • Deploy Rails app from Hudson

    - by brad
    I'm using Hudson as my CI server and it works great: builds run their tests, code metrics, all that good stuff. But at the moment that's it; there is no automated deployment, so I have to do that manually afterwards. I haven't found any sort of Capistrano plugin for Hudson, and I can't even see where I could just run my cap deploy after a successful build in Hudson. Does anyone have any idea what I need in order to automate a deployment to a testing server on a successful build? I'd like each commit to force a build and in turn deploy to testing, so I can see everything right away.
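    One hedged approach, not a Hudson-specific feature: add an "Execute shell" build step after the test steps, so Capistrano only runs if everything before it succeeded (Hudson aborts the job at the first failing step).

        # Runs on the build node from the job's workspace; assumes Capistrano
        # is installed there and config/deploy.rb defines the testing target.
        cap deploy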

  • java.lang.OutOfMemoryError: unable to create new native thread

    - by Brad
    I consistently get this exception when trying to run my JUnit tests on my Mac:

        java.lang.OutOfMemoryError: unable to create new native thread
            at java.lang.Thread.start0(Native Method)
            at java.lang.Thread.start(Thread.java:658)
            at java.util.concurrent.ThreadPoolExecutor.addIfUnderMaximumPoolSize(ThreadPoolExecutor.java:727)
            at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:657)
            at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:92)
            at com.google.appengine.tools.development.ApiProxyLocalImpl$PrivilegedApiAction.run(ApiProxyLocalImpl.java:197)
            at com.google.appengine.tools.development.ApiProxyLocalImpl$PrivilegedApiAction.run(ApiProxyLocalImpl.java:184)
            at java.security.AccessController.doPrivileged(Native Method)
            at com.google.appengine.tools.development.ApiProxyLocalImpl.doAsyncCall(ApiProxyLocalImpl.java:172)
            at com.google.appengine.tools.development.ApiProxyLocalImpl.makeAsyncCall(ApiProxyLocalImpl.java:138)

    The same set of unit tests passes perfectly fine on Ubuntu and Windows. Some information about my system resources on the Mac:

        $ ulimit -a
        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        file size               (blocks, -f) unlimited
        max locked memory       (kbytes, -l) unlimited
        max memory size         (kbytes, -m) unlimited
        open files                      (-n) 1024
        pipe size            (512 bytes, -p) 1
        stack size              (kbytes, -s) 8192
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) 266
        virtual memory          (kbytes, -v) unlimited

        $ java -version
        java version "1.6.0_24"
        Java(TM) SE Runtime Environment (build 1.6.0_24-b07-334-10M3326)
        Java HotSpot(TM) 64-Bit Server VM (build 19.1-b02-334, mixed mode)

    The reason I don't think this is an application issue is because the same tests pass in different environments. I have tried setting the heap to 1024m and 512m, and setting the stack to 64k and 128k (and each of these combinations), with no luck. My open files limit was originally 256 and I have bumped this to 1024. I have been googling around for a bit and all posts say to decrease heap size and increase stack size, but that doesn't seem to help. Anyone have any more ideas?

    EDIT: Here is some environment information on my Ubuntu box:

        $ ulimit -a
        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        scheduling priority             (-e) 20
        file size               (blocks, -f) unlimited
        pending signals                 (-i) 16382
        max locked memory       (kbytes, -l) 64
        max memory size         (kbytes, -m) unlimited
        open files                      (-n) 1024
        pipe size            (512 bytes, -p) 8
        POSIX message queues     (bytes, -q) 819200
        real-time priority              (-r) 0
        stack size              (kbytes, -s) 8192
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) unlimited
        virtual memory          (kbytes, -v) unlimited
        file locks                      (-x) unlimited

        $ java -version
        java version "1.6.0_24"
        Java(TM) SE Runtime Environment (build 1.6.0_24-b07)
        Java HotSpot(TM) 64-Bit Server VM (build 19.1-b02, mixed mode)
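    A hedged observation, not a verified fix: the conspicuous difference between the two ulimit dumps is max user processes, 266 on the Mac versus unlimited on Ubuntu. Threads count against that limit, so the JVM can run out of native threads long before it runs out of heap. A cheap experiment:

        # Raise the per-shell process/thread cap, then run the suite from the
        # same shell; raising past the hard limit may need extra privileges.
        ulimit -u 1024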

  • ubuntu ppa's and apt precedence

    - by Brad Robertson
    I'm trying to get the latest MPlayer installed with VDPAU support, but haven't had any luck so far. I found a PPA that says it works, and I want to install it. I added the PPA to my apt sources and did an update. I just want to know: how do I know that I'm now installing the mplayer package from the PPA, and not from Ubuntu's standard repositories? From what I can see I just install the same package using apt-get install mplayer. How do I know which mplayer I'm getting? Where do I specify what takes precedence?
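    One way to answer this directly, offered as a hedged sketch: apt-cache policy lists every available version of a package and which repository each comes from.

        # The version marked "Candidate:" is what `apt-get install mplayer`
        # would pull; pinning can be adjusted in /etc/apt/preferences if the
        # PPA should win explicitly.
        apt-cache policy mplayer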

  • How does java permgen relate to code size

    - by brad
    I've been reading a lot about Java memory management, garbage collection et al., and I'm trying to find the best settings for my limited memory (1.7g on a small EC2 instance). I'm wondering if there is a direct correlation between my code size and the permgen setting. According to Sun:

        The permanent generation is special because it holds data needed by the virtual machine to describe objects that do not have an equivalence at the Java language level. For example objects describing classes and methods are stored in the permanent generation.

    To me this means that it's literally storing my class definitions etc. Does this mean there is a direct correlation between my compiled code size and the permgen I should be setting? My whole app is about 40mb and I noticed we're using 256mb of permgen. I'm thinking maybe we're using memory that could be better allocated to dynamic code like object instances etc.
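    A hedged way to answer this empirically rather than from jar size (jstat ships with the JDK; <pid> stands in for the running JVM's process id):

        # The PC and PU columns report permgen capacity and utilization in KB;
        # if PU sits far below the 256m MaxPermSize, the setting can come down.
        jstat -gc <pid>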
