Search Results

Search found 6920 results on 277 pages for 'block'.

Page 91/277 | < Previous Page | 87 88 89 90 91 92 93 94 95 96 97 98  | Next Page >

  • Mount a VHD on Mac OS X

    - by janm
    Is it possible to mount a VHD file created by Windows 7 in OS X, and if so, how? I found some information about how to do this on Linux: there is a FUSE filesystem, "vdfuse", which uses the VirtualBox libraries to mount any filesystem VirtualBox supports. However, I was unable to compile the package on OS X because nearly all the headers are missing, and I doubt it would work anyway...

    EDIT #2: Okay, I got my hands dirty and finally compiled vdfuse (http://forums.virtualbox.org/viewtopic.php?f=26&t=33355&start=0) on OS X. As a starting point I used MacFUSE (http://code.google.com/p/macfuse/) and looked at its example file systems. This led me to the following build script:

        infile=vdfuse.c
        outfile=vdfuse
        incdir="your/path/to/vbox/headers"
        INSTALL_DIR="/Applications/VirtualBox.app/Contents/MacOS"
        CFLAGS="-pipe"
        gcc -arch i386 "${infile}" \
          "${INSTALL_DIR}"/VBoxDD.dylib \
          "${INSTALL_DIR}"/VBoxDDU.dylib \
          "${INSTALL_DIR}"/VBoxVMM.dylib \
          "${INSTALL_DIR}"/VBoxRT.dylib \
          "${INSTALL_DIR}"/VBoxDD2.dylib \
          "${INSTALL_DIR}"/VBoxREM.dylib \
          -o "${outfile}" \
          -I"${incdir}" -I"/usr/local/include/fuse" \
          -Wl,-rpath,"${INSTALL_DIR}" \
          -lfuse_ino64 \
          -Wall ${CFLAGS}

    You don't actually need to compile VirtualBox on your machine; just install a recent version of VirtualBox. So now I can partially mount VHDs: the separate partitions appear as block files Partition1, Partition2, ... on my mount point. However, Mac OS X does not include a loopback file system, and MacFUSE's loopback fs does not work with block files, so we still need a loopback fs to mount the block files as actual partitions.
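
    Once built, a hedged usage sketch (the VHD path and mount point are placeholders, and the exact vdfuse flags may differ by build):

        # expose the VHD's partitions as block files under a mount point
        mkdir -p /Volumes/vhd
        ./vdfuse -f /path/to/disk.vhd /Volumes/vhd
        ls /Volumes/vhd    # EntireDisk, Partition1, Partition2, ...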

    Read the article

  • iptables -- OK, now am I doing it right?

    - by Agvorth
    This is a follow-up to a previous question where I asked whether my iptables config is correct. CentOS 5.3 system. Intended result: block everything except ping, ssh, Apache, and SSL. Based on xenoterracide's advice and the other responses to the question (thanks guys), I created this script:

        # Establish a clean slate
        iptables -P INPUT ACCEPT
        iptables -P FORWARD ACCEPT
        iptables -P OUTPUT ACCEPT
        iptables -F    # Flush all rules
        iptables -X    # Delete all chains
        # Disable routing. Drop packets if they reach the end of the chain.
        iptables -P FORWARD DROP
        # Drop all packets with a bad state
        iptables -A INPUT -m state --state INVALID -j DROP
        # Accept any packets that have something to do with ones we've sent on outbound
        iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
        # Accept any packets coming or going on localhost (this can be very important)
        iptables -A INPUT -i lo -j ACCEPT
        # Accept ICMP
        iptables -A INPUT -p icmp -j ACCEPT
        # Allow ssh
        iptables -A INPUT -p tcp --dport 22 -j ACCEPT
        # Allow httpd
        iptables -A INPUT -p tcp --dport 80 -j ACCEPT
        # Allow SSL
        iptables -A INPUT -p tcp --dport 443 -j ACCEPT
        # Block all other traffic
        iptables -A INPUT -j DROP

    Now when I list the rules I get...

        # iptables -L -v
        Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target  prot opt in  out source   destination
            0     0 DROP    all  --  any any anywhere anywhere     state INVALID
            9   612 ACCEPT  all  --  any any anywhere anywhere     state RELATED,ESTABLISHED
            0     0 ACCEPT  all  --  lo  any anywhere anywhere
            0     0 ACCEPT  icmp --  any any anywhere anywhere
            0     0 ACCEPT  tcp  --  any any anywhere anywhere     tcp dpt:ssh
            0     0 ACCEPT  tcp  --  any any anywhere anywhere     tcp dpt:http
            0     0 ACCEPT  tcp  --  any any anywhere anywhere     tcp dpt:https
            0     0 DROP    all  --  any any anywhere anywhere

        Chain FORWARD (policy DROP 0 packets, 0 bytes)
         pkts bytes target  prot opt in  out source   destination

        Chain OUTPUT (policy ACCEPT 5 packets, 644 bytes)
         pkts bytes target  prot opt in  out source   destination

    I ran it and I can still log in, so that's good. Anyone notice anything major out of whack?
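
    One step the script doesn't cover: on CentOS these rules live only in memory until saved. A hedged sketch of making them survive a reboot, assuming the stock iptables init script:

        # write the running ruleset to /etc/sysconfig/iptables
        /sbin/service iptables save
        # make sure the service starts at boot
        chkconfig iptables on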

    Read the article

  • "one-off" use of http_proxy in a Chef remote_file resource

    - by user169200
    I have a use case where most of my remote_file resources and yum resources download files directly from an internal server. However, there is a need to download one or two files with remote_file from outside our firewall, which must go through an HTTP proxy. If I set the http_proxy setting in /etc/chef/client.rb, it adversely affects the recipe's ability to download yum and other files from internal resources. Is there a way to have a remote_file resource download a remote URL through a proxy without setting the http_proxy value in /etc/chef/client.rb?

    In my sample code, below, I'm downloading a Redmine bundle from rubyforge.org, which requires my servers to go through a corporate proxy. I came up with a ruby_block before and after the remote_file resource that sets the http_proxy and "unsets" it. I'm looking for a cleaner way to do this.

        ruby_block "setenv-http_proxy" do
          block do
            Chef::Config.http_proxy = node['redmine']['http_proxy']
            ENV['http_proxy'] = node['redmine']['http_proxy']
            ENV['HTTP_PROXY'] = node['redmine']['http_proxy']
          end
          action node['redmine']['rubyforge_use_proxy'] ? :create : :nothing
          notifies :create_if_missing, "remote_file[redmine-bundle.zip]", :immediately
        end

        remote_file "redmine-bundle.zip" do
          path "#{Dir.tmpdir}/redmine-#{attrs['version']}-bundle.zip"
          source attrs['download_url']
          mode "0644"
          action :create_if_missing
          notifies :decompress, "zipp[redmine-bundle.zip]", :immediately
          notifies :create, "ruby_block[unsetenv-http_proxy]", :immediately
        end

        ruby_block "unsetenv-http_proxy" do
          block do
            Chef::Config.http_proxy = nil
            ENV['http_proxy'] = nil
            ENV['HTTP_PROXY'] = nil
          end
          action node['redmine']['rubyforge_use_proxy'] ? :create : :nothing
        end
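
    One alternative worth sketching: sidestep remote_file for the one-off download and shell out to curl, which accepts a proxy per invocation. This is an assumption-laden sketch (the proxy address and URL are placeholders), not Chef's sanctioned way:

        # download through the corporate proxy without touching client.rb or ENV
        curl -x http://proxy.corp.example:3128 \
             -o /tmp/redmine-bundle.zip \
             "http://rubyforge.org/path/to/redmine-bundle.zip"

    Wrapped in an execute resource with a creates guard, this stays idempotent much like create_if_missing.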

    Read the article

  • Apache2 default vhost in alphabetical order or override with _default_ vhost?

    - by benbradley
    I've got multiple named vhosts on an Apache web server (CentOS 5, Apache 2.2.3). Each vhost has its own config file in /etc/httpd/vhosts.d, and these vhost config files are included from the main httpd conf with...

        Include vhosts.d/*.conf

    Here's an example of one of the vhost confs...

        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerName www.domain.biz
            ServerAlias domain.biz www.domain.biz
            DocumentRoot /var/www/www.domain.biz
            <Directory /var/www/www.domain.biz>
                Options +FollowSymLinks
                Order Allow,Deny
                Allow from all
            </Directory>
            CustomLog /var/log/httpd/www.domain.biz_access.log combined
            ErrorLog /var/log/httpd/www.domain.biz_error.log
        </VirtualHost>

    Now when anyone tries to access the server directly by its public IP address, they get the first vhost specified in the aggregated config (in my case, alphabetical order from the vhosts.d directory). I'd like anyone accessing the server directly by IP address to just get a 403 or a 404. I've discovered several ways to set a default/catch-all vhost, and some conflicting opinions; the options I see are listed below, with a sketch of the catch-all after the list.

    - I could create a new vhost conf in vhosts.d called 000aaadefault.conf or something, but that feels a bit nasty.
    - I could have a <VirtualHost> block in my main httpd.conf before the vhosts.d directory is included.
    - I could just specify a DocumentRoot in my main httpd.conf.
    - I could specify a default vhost in httpd.conf with _default_: http://httpd.apache.org/docs/2.2/vhosts/examples.html#default

    Would having a <VirtualHost _default_:*> block in my httpd.conf before I Include vhosts.d/*.conf be the best way for a catch-all?
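
    For what it's worth, a hedged sketch of the catch-all idea (names and paths are placeholders). With name-based vhosts on *:80, a first-loaded *:80 vhost serves the same purpose as _default_, since the name-vhost set takes precedence for that address:

        # 000-default.conf -- loaded before the named vhosts, answers any unmatched Host header
        <VirtualHost *:80>
            ServerName default.invalid
            DocumentRoot /var/www/default
            <Directory /var/www/default>
                Order Allow,Deny
                Deny from all    # every request gets a 403
            </Directory>
        </VirtualHost>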

    Read the article

  • Poor write performance on Debian server running NFS with 22TB exported JFS filesystem

    - by user143546
    I am currently running a Debian server that is exporting a large JFS filesystem (22TB) over NFS (nfs-kernel-server). When attempting to write to the NFS share, the performance is very poor. The 22TB disk is sitting on a NAS mounted using iSCSI. Writes will burst for a moment near the expected line speed, then sit idle for several seconds with very little traffic, measured in the low kB/sec; I/O wait peaks on write. When reading from the NFS mount, the system operates at the expected speed (11MB/sec). The issue does not occur when using SFTP, rsync, or local copying (non-NFS). The issue persists between stable and testing releases. On the same machine I have a 14TB ext4 filesystem using the exact same export configuration that does not share the issue; that share is not in regular use and thus not consuming resources.

    NFS server:

        cat /etc/exports
        /data2 10.1.20.86(rw,no_subtree_check,async,all_squash)

        cat /sys/block/sdb/queue/scheduler
        noop [deadline] cfq

        cat /etc/default/nfs-kernel-server
        RPCNFSDCOUNT=8
        RPCNFSDPRIORITY=0
        RPCMOUNTDOPTS=--manage-gids
        NEED_SVCGSSD=
        RPCSVCGSSDOPTS=

    NFS client:

        cat /etc/fstab
        10.1.20.100:/data2 /root/incoming nfs rw,noatime,soft,intr,noacl 0 2

        cat /sys/block/sdb/queue/scheduler
        noop [deadline] cfq

        cat /proc/mounts
        10.1.20.100:/data2/ /root/incoming nfs4 rw,noatime,vers=4,rsize=262144,wsize=262144,namlen=255,soft,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=10.1.20.86,minorversion=0,addr=10.1.20.100 0 0

    This problem has me pretty stumped. Any help would be greatly welcomed. Thanks.
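
    A hedged way to narrow down where the stall happens (hostnames and paths as above; the commands are standard coreutils/nfs-utils/procps):

        # on the client: time a synchronous write through NFS
        dd if=/dev/zero of=/root/incoming/ddtest bs=1M count=512 conv=fdatasync
        # watch per-operation NFS client counters while the write runs
        nfsstat -c
        # on the server: check whether the nfsd threads are all stuck in disk wait
        ps -eo state,wchan:30,comm | grep nfsd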

    Read the article

  • sysbench memory test on ec2 small instance

    - by caribio
    I'm seeing a problem with the sysbench memory test (the default version that's compiled in). This is on Ubuntu Maverick, sysbench installed via apt-get install sysbench. Running the same thing on Ubuntu @ Rackspace worked just as expected. While the CPU and I/O tests worked fine on EC2 servers, the memory test just runs without doing anything (notice the 0M in the test results). The instance used was the publicly available 'stock' Ubuntu image with no changes to it:

        ./ec2-run-instances ami-ccf405a5 --instance-type m1.small --region us-east-1 --key mykey

    Supplying more arguments (such as: --memory-block-size=1K --memory-total-size=102400M) didn't help. What am I doing wrong? Thanks.

        sysbench --num-threads=4 --test=memory run
        sysbench 0.4.12:  multi-threaded system evaluation benchmark

        Running the test with following options:
        Number of threads: 4

        Doing memory operations speed test
        Memory block size: 1K

        Memory transfer size: 0M

        Memory operations type: write
        Memory scope type: global
        Threads started!
        Done.

        Operations performed: 0 ( 0.00 ops/sec)
        0.00 MB transferred (0.00 MB/sec)

        Test execution summary:
            total time:                          0.0003s
            total number of events:              0
            total time taken by event execution: 0.0000
            per-request statistics:
                 min:                            18446744073709.55ms
                 avg:                            0.00ms
                 max:                            0.00ms

        Threads fairness:
            events (avg/stddev):           0.0000/0.00
            execution time (avg/stddev):   0.0000/0.00
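
    Since the packaged binary misbehaves here while the same version works elsewhere, one hedged workaround is rebuilding sysbench from source; the tarball URL and version below are assumptions, so adjust them to whatever you actually fetch:

        sudo apt-get install -y build-essential automake libtool
        wget http://downloads.sourceforge.net/project/sysbench/sysbench/0.4.12/sysbench-0.4.12.tar.gz
        tar xzf sysbench-0.4.12.tar.gz && cd sysbench-0.4.12
        ./configure --without-mysql && make && sudo make install
        # re-run the failing test against the fresh build
        sysbench --num-threads=4 --test=memory --memory-total-size=1G run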

    Read the article

  • EC2 AMI won't boot after edit

    - by Eric Lars0n
    I did something stupid: I got a new laptop, copied everything over to the new one, then wiped the old one clean. Then I realized I had forgotten to copy the private key out of .ssh that I use to connect to my AWS EBS-backed instance, so I can't log in to my custom AMI. So I created a new volume from the snapshot of the AMI, started up a public instance and attached the volume to it, and edited sshd_config to allow password login. I then unmounted the volume, detached it, made a snapshot of it, and made a new AMI from the snapshot. The new AMI launches, but never passes the status checks and is not reachable. What am I doing wrong? Or, alternatively, how can I fix my problem?

    Edit: Adding some of the console output:

        Linux version 2.6.16-xenU ([email protected]) (gcc version 4.0.1 20050727 (Red Hat 4.0.1-5)) #1 SMP Mon May 28 03:41:49 SAST 2007
        BIOS-provided physical RAM map:
        Xen: 0000000000000000 - 000000006a400000 (usable)
        980MB HIGHMEM available.
        727MB LOWMEM available.
        NX (Execute Disable) protection: active
        IRQ lockup detection disabled
        RAMDISK driver initialized: 16 RAM disks of 4096K size 1024 blocksize
        NET: Registered protocol family 2
        Registering block device major 8
        XENBUS: Timeout connecting to devices!
        Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(8,0)
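
    The panic suggests the new AMI was registered without the kernel (AKI) the original image booted with. A hedged sketch of re-registering the snapshot with an explicit kernel using the old ec2-api-tools (all IDs here are placeholders):

        ec2-register -n my-fixed-ami -d "rebuilt from snapshot" \
          --root-device-name /dev/sda1 \
          -b /dev/sda1=snap-xxxxxxxx::true \
          --kernel aki-xxxxxxxx    # copy the kernel ID from the original AMI's details

    A simpler path that avoids re-registering at all: attach the fixed volume back to a stopped instance launched from the original AMI, as its root device.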

    Read the article

  • Looking for issue tracker software for residential property management

    - by Rob
    This question is about computer software (as per SU guidelines): an application for centrally tracking issues concerning the management of a residential block of flats (apartments, as they say in the US and France). Issues are incidents and the resultant unplanned maintenance to address them, plus planned one-off maintenance and regular planned routine maintenance. I live in a block of flats and, along with other residents, am looking to more closely watch over issues with the communal, shared areas of the premises (corridors, courtyards, stairs, lifts, lights, trash/bin shed, bike stands, parking areas, etc.) and their maintenance, currently done by a property management company. Our own homes are our own affair internally; it's the outside communal areas that I am interested in. The aim is to control costs, and possibly reduce them, by proactively managing the property: using historical data to predict issues, and scrutinising maintenance charges against that data to ensure costs are as expected. Trends could also be established whereby recurrences of things can be detected and pre-empted to reduce costs. As a software professional, I'm aware of Bugzilla and Eventum, free issue trackers for software which could be customised to fit this application, but I wondered if there was something more appropriate. It might be useful for such software to be on a web server, with secure access, so that residents can log in and view the issues.

    Read the article

  • Extending ext4 partition on debian7.0 on vsphere

    - by VoidPointer
    I grew a thin-provisioned disk to 15GB when I found the original 8GB insufficient. Now the Debian guest does not recognize the change in size.

        root@debian7-x64:~# lvdisplay
          --- Logical volume ---
          LV Path                /dev/debian7-x64/root
          LV Name                root
          VG Name                debian7-x64
          LV UUID                EU6mg0-XTXC-ci3D-bQJi-7XN6-r8Hp-SYxcj0
          LV Write Access        read/write
          LV Creation host, time debian7-x64, 2013-06-25 12:02:49 +0530
          LV Status              available
          # open                 1
          LV Size                7.39 GiB
          Current LE             1892
          Segments               1
          Allocation             inherit
          Read ahead sectors     auto
          - currently set to     256
          Block device           254:0

          --- Logical volume ---
          LV Path                /dev/debian7-x64/swap_1
          LV Name                swap_1
          VG Name                debian7-x64
          LV UUID                xDNtoz-tJUq-M5D6-GGCN-gzcD-fwUv-fYYDR1
          LV Write Access        read/write
          LV Creation host, time debian7-x64, 2013-06-25 12:02:49 +0530
          LV Status              available
          # open                 2
          LV Size                376.00 MiB
          Current LE             94
          Segments               1
          Allocation             inherit
          Read ahead sectors     auto
          - currently set to     256
          Block device           254:1

        root@debian7-x64:~# pvdisplay
          --- Physical volume ---
          PV Name               /dev/sda5
          VG Name               debian7-x64
          PV Size               7.76 GiB / not usable 2.00 MiB
          Allocatable           yes (but full)
          PE Size               4.00 MiB
          Total PE              1986
          Free PE               0
          Allocated PE          1986
          PV UUID               SehkzH-Gq8Y-jI2f-27Tb-uv1Z-tR1R-5OnTxR

        root@debian7-x64:~# sfdisk -s
        /dev/sda: 15728640
        /dev/mapper/debian7--x64-root: 7749632
        /dev/mapper/debian7--x64-swap_1: 385024
        total: 23863296 blocks

    Help me extend this partition. Rebooting is not a problem, but I don't have any live CD. Environment: Debian 7, with LVM, ext4 partition, on vSphere. I can provide more details when needed.
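
    The disk itself already reports 15GB (sfdisk shows 15728640 KiB), but the partition and PV are still ~8GB, so each layer has to be grown in turn. A hedged sketch of the usual sequence, no live CD needed; double-check the device names, and note growpart comes from the cloud-utils package (if it's unavailable, the same effect comes from deleting and recreating the partitions at the same start sector with fdisk):

        # 1. grow the extended partition (sda2) and the LVM partition (sda5) to the end of the disk
        growpart /dev/sda 2
        growpart /dev/sda 5
        partprobe /dev/sda
        # 2. let LVM see the bigger partition
        pvresize /dev/sda5
        # 3. hand the new extents to the root LV and grow ext4 online
        lvextend -l +100%FREE /dev/debian7-x64/root
        resize2fs /dev/debian7-x64/root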

    Read the article

  • Use autocomplete in dropdown cells with Excel 2007?

    - by Martin
    I want to make a survey with Excel, so I have defined the answer cells as dropdown cells which only accept answers from a certain list, e.g.: the two lists List1 and List2 (yellow cells) are the possible answers for the questions in blocks 1.x and 2.x respectively (blue). There might be a block 4 with more questions, which again use List1 for their possible answers.

    My problem: I'd like the autocomplete feature to help fill in the blue dropdown cells, so that the user only types 5 and it automatically expands to "5: extremely important" or "5: extremely difficult". According to my research on the web, this should be possible if I add the list of possible answers directly above the cells where autocomplete should work (I did this with the green helper cells, which could be hidden). But I have to enter at least 4 characters, "5: e", to get the autocompleted suggestion. Is there a way to make autocomplete already replace a "5" with the corresponding valid term? As the survey file will be distributed to a lot of people "outside", I cannot use VBA magic because it may be blocked on their computers and might not work.

    EDIT: It seems to have to do with the numbers I use: if I started my list items with A, B, C instead of 1, 2, 3, it would work perfectly. Excel seems to ignore pure numbers when they are entered and does not try to autocomplete them. Is there a workaround? (I hope it is clear what I want; it seems a little difficult to explain.)

    Read the article

  • PhpMyAdmin 500 Internal Server Error on Nginx/php5-fpm/Debian

    - by ThrownAway
    I downloaded phpMyAdmin a while ago and am having a hard time getting it to work. Requesting localhost/phpmyadmin gives a 500 Internal Server Error response, but there's nothing in the error log. These are the steps I took:

    - Downloaded the newest phpMyAdmin and unzipped all the files to /var/vhosts/phpmyadmin/www/
    - Created a new php5-fpm pool and a server block in nginx
    - Changed the owner of all the files inside phpmyadmin/
    - Tried requesting localhost/phpmyadmin and localhost/phpmyadmin/setup

    phpMyAdmin is running inside a chroot, and all the files are owned by www-data, so it shouldn't be a permission error. I made a new PHP file in the same directory to produce an error, and it logs just fine, so it has to be just phpMyAdmin. Here's my php5-fpm pool:

        [phpmyadmin]
        listen = /var/vhosts/phpmyadmin/tmp/.php.sock;
        user = www-data
        group = www-data
        chroot = /var/vhosts/phpmyadmin/
        chdir = /
        php_admin_value[error_reporting] = E_ALL
        php_admin_value[error_log] = error.log
        php_admin_flag[log_errors] = on
        php_admin_flag[display_errors] = on
        php_value[session.save_handler] = files
        php_value[session.save_path] = /tmp

    And the nginx server block:

        server {
            listen 80;
            root /var/vhosts/phpmyadmin/www;
            server_name pma.domain;

            location / {
                try_files $uri $uri/ /index.html;
                autoindex on;
            }

            location ~ \.php$ {
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                include fastcgi_params;
                fastcgi_pass unix:/var/vhosts/phpmyadmin/tmp/.php.sock;
                fastcgi_param SCRIPT_FILENAME /www$fastcgi_script_name;
                fastcgi_param PATH_INFO $fastcgi_script_name;
                fastcgi_param DOCUMENT_ROOT /www;
            }

            index index.html index.htm index.php;
            try_files $uri $uri/ =404;
        }

    Any ideas what could be wrong? Why is it not producing any errors even though I've forced them on?
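
    Since the pool is chrooted, errors may be landing somewhere unexpected (error.log resolves relative to the chroot) or being swallowed before reaching a log at all. A hedged pool tweak to surface them; the directives are standard php-fpm, but the paths are guesses for this layout:

        ; forward the workers' stderr to the php-fpm master log
        catch_workers_output = yes
        ; use an absolute path *inside* the chroot, i.e. /var/vhosts/phpmyadmin/error.log on disk
        php_admin_value[error_log] = /error.log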

    Read the article

  • Running .NET code in XML file [closed]

    - by Stuart McIntosh
    We have 2 servers: one already configured with .NET, which works fine, and a new one which appears to be configured the same, but when I open an XML page in Internet Explorer it complains about the <% tag. We have IIS on Windows Server 2003 SP2. The website is configured with .NET 1.1.4322. In the ISAPI extensions I have set the .XML extension to use c:\windows\microsoft.net\framework\v1.1.4322\aspnet_isapi.dll. But the page:

        <property name="documentmaxage" value="0"/>
        <property name="documentmaxstale" value="0"/>
        <var name="m_Prompt_Path" />
        <form id="InitVoiceXmlDoc">
            <block>
                <assign name="m_Prompt_Path" expr="&quot;<% Response.Write(Request.QueryString["m_Prompt_Path"]); %>&quot;"/>
            </block>
        </form>

    gives the error:

        The XML page cannot be displayed
        Cannot view XML input using XSL style sheet. Please correct the error and then click the Refresh button, or try again later.
        The character '<' cannot be used in an attribute value. Error processing resource 'http://localhost:11119/fails.xml'. Lin...
        &quo...

    We have the same config on another server which works fine. So are there other options apart from the ISAPI extensions that I need to look at?
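
    The error looks like raw, unprocessed source reaching the browser, meaning ASP.NET never ran on the new box. One hedged thing to check besides the ISAPI mapping: whether .xml is also routed to ASP.NET's page handler in config. A sketch of the web.config fragment that would do that, assuming .NET 1.1 defaults:

        <configuration>
          <system.web>
            <httpHandlers>
              <!-- hand *.xml to the same factory that compiles .aspx pages -->
              <add verb="*" path="*.xml" type="System.Web.UI.PageHandlerFactory" />
            </httpHandlers>
          </system.web>
        </configuration>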

    Read the article

  • Ruby installed on Ubuntu 10.10 slow on one machine but not other

    - by Aaron Jensen
    I have a machine that was provisioned several months ago. RVM was used to install ruby 1.9.3-p125 as well as 1.9.3-p125-perf. When I compared raw Ruby performance against another, identical machine, the older machine smoked the new one. For example:

        ================================================================================
        With in-block needle calculation
        ================================================================================
        Rehearsal ----------------------------------------------
        detect    3.790000   0.000000   3.790000 (  3.800895)
        each      2.410000   0.000000   2.410000 (  2.420860)
        any       3.960000   0.000000   3.960000 (  3.972099)
        include   1.440000   0.000000   1.440000 (  1.442862)
        ------------------------------------ total: 11.600000sec

    vs.

        ================================================================================
        With in-block needle calculation
        ================================================================================
        Rehearsal ----------------------------------------------
        detect   10.740000   0.000000  10.740000 ( 10.769366)
        each      6.080000   0.010000   6.090000 (  6.106323)
        any      10.600000   0.000000  10.600000 ( 10.641606)
        include   4.160000   0.000000   4.160000 (  4.171530)
        ------------------------------------ total: 31.590000sec

    I attempted to reinstall 1.9.3-p125 with RVM on the fast machine, and that ruby is now slow. It's as if something changed in RVM, or I installed some package that makes compiled rubies perform significantly worse. I know this is a tough question to answer, but what should I look into in order to track down why performance has suffered so much?

    Edit: I just attempted an install with ruby-build, and the version it installed is fast. So it's something RVM is doing when it builds ruby in my environment.
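
    A hedged first diagnostic: compare how the fast and slow rubies were actually compiled, since the build can silently pick up different flags (for example from stale environment settings or missing compiler tooling):

        # print the CFLAGS and configure arguments baked into each interpreter
        ruby -rrbconfig -e 'puts RbConfig::CONFIG["CFLAGS"]'
        ruby -rrbconfig -e 'puts RbConfig::CONFIG["configure_args"]'

    A build showing -O0 (or no -O flag at all) where the fast machine shows -O2/-O3 would explain a roughly 3x gap like the one benchmarked above.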

    Read the article

  • Nginx proxy upstream cached?

    - by Julian H. Lam
    Attempting to resolve an issue that's been annoying me for a bit. I've distilled the symptoms into a set of reproducible steps:

    - I have two sites, siteA and siteB. They are both Node.js applications running on different ports (for the sake of example, 4567 and 4568).
    - Both applications have their own file in sites_available (plus a symlink from sites_enabled), which contain the directives proxy_pass http://node_siteA/ and proxy_pass http://node_siteB/ respectively, inside of a location block.
    - They also each have an upstream block (defined globally?):

          upstream node_siteA {
              server 127.0.0.1:4567;
          }

          upstream node_siteB {
              server 127.0.0.1:4568;
          }

    - Site A and Site B have nothing to do with each other.
    - Yes, I am restarting (reloading, actually) nginx every time I make a change.

    If I take down site B and attempt to access it via the web, I am served site A. Why is this?

    Thoughts: Other times, when I create a new site C, for example, nginx refuses to show me anything except "Welcome to nginx!" for ~5 minutes. This suggests a resolver timeout, perhaps? And when I access Site B after its config has been deleted and it sends me to Site A, this sounds like nginx sending me to servers in a round-robin fashion...
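
    What's described matches nginx's documented fallback: when no server block matches the requested Host, the first server defined for that port becomes the default and answers. A hedged sketch of pinning an explicit default instead (standard nginx directives):

        # catch anything that doesn't match a configured server_name
        server {
            listen 80 default_server;
            server_name _;
            return 444;    # nginx-specific: close the connection without responding
        }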

    Read the article

  • hosting company blocking google bots and crawlers [closed]

    - by Jayapal Chandran
    Hi, I have had a site for the past three years, and it has been very active for the past two. The site was working well until the hosting company started blocking Google's bots. Many pages used to appear on the first page of Google search results; after the blocking started, I couldn't see my links on the first page. Instead they appeared five pages in, or did not appear at all. Can hosting companies really be so careless that they block crawlers and don't mention it to their users? They protect themselves while putting their customers' websites at stake. I display Google ads, and this month I received only half the usual revenue for the first 10 days. I have asked other hosting companies, like Bluehost and Monster Host, about transferring my domain, on the condition that they will not block Google's bots, since that indirectly stops the business. Any kind of help would be welcome: How can I claim what I lost from the hosting company? Which other hosting companies look after their users, by announcing events like changing the IP or blocking googlebot? I worked really hard to bring up my site, but these people crashed it in a few days. :-(
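
    As a first step, a hedged way to confirm the block from outside (standard curl; substitute your own domain):

        # compare a Googlebot-identified request with a plain one
        curl -I -A "Googlebot/2.1 (+http://www.google.com/bot.html)" http://www.example.com/
        curl -I http://www.example.com/

    If the first returns 403 or times out while the second succeeds, that's evidence of user-agent filtering in front of the host; note a host can also filter by Google's IP ranges, which this check won't catch.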

    Read the article

  • iptables, blocking large numbers of IP Addresses

    - by Twirrim
    I'm looking to block IP addresses in a relatively automated fashion if they look to be 'screen scraping' content from websites that we host. In the past this was achieved by some ingenious Perl scripts and OpenBSD's pf. pf is great in that you can feed it tables of IP addresses and it will efficiently handle blocking based on them. However, for various reasons (before my time) the decision was made to switch to CentOS. iptables doesn't natively provide the ability to block large numbers of addresses (I'm told it wasn't unusual to be blocking 5000+), and I'm a bit cautious about adding that many rules to an iptables chain. ipt_recent would be awesome for doing this, plus it provides a lot of flexibility for just severely slowing down access, but there is a bug in the CentOS kernel that is stopping me from using it (reported, but awaiting a fix). Using ipset would entail compiling a more up-to-date version of iptables than comes with CentOS, which, whilst I'm perfectly capable of doing it, I'd rather avoid from a patching, security and consistency perspective. Other than those two, it looks like nfblock is a reasonable alternative. Is anyone aware of other ways of achieving this? Are my concerns about several thousand IP addresses in iptables as individual rules unfounded?
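
    For scale, a hedged sketch of what the ipset route looks like once a recent iptables/ipset pair is in place (the set name and address are placeholders; older ipset releases use the `ipset -N name iphash` spelling instead). Lookups are hash-based, so one rule covers the whole set regardless of its size:

        # one set, one rule, thousands of addresses
        ipset create scrapers hash:ip
        ipset add scrapers 192.0.2.10
        iptables -I INPUT -m set --match-set scrapers src -j DROP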

    Read the article

  • Adjust iptables

    - by madunix
    cat /etc/sysconfig/iptables:

        # Firewall configuration written by system-config-securitylevel
        # Manual customization of this file is not recommended.
        *filter
        :INPUT ACCEPT [0:0]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [0:0]
        :RH-Firewall-1-INPUT - [0:0]
        -A INPUT -j RH-Firewall-1-INPUT
        -A FORWARD -j RH-Firewall-1-INPUT
        -A RH-Firewall-1-INPUT -i lo -j ACCEPT
        -A RH-Firewall-1-INPUT -p icmp --icmp-type any -j ACCEPT
        -A RH-Firewall-1-INPUT -p 50 -j ACCEPT
        -A RH-Firewall-1-INPUT -p 51 -j ACCEPT
        -A RH-Firewall-1-INPUT -p udp --dport 5353 -d X.0.0.Y -j ACCEPT
        -A RH-Firewall-1-INPUT -p udp -m udp --dport 631 -j ACCEPT
        -A RH-Firewall-1-INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 443 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m tcp -s X.Y.Z.W --dport 3306 -j ACCEPT
        -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp -s M.M.M.M --dport 3306 -j ACCEPT
        -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
        -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT
        -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 21 -j ACCEPT
        -A RH-Firewall-1-INPUT -j REJECT --reject-with icmp-host-prohibited
        COMMIT

    I have the above iptables rules on my Linux web server (Apache/MySQL), and I want the following (see the sketch after this list):

    - Block any traffic from multiple IPs to my web server: IP1: 1.2.3.4.5, IP2: 6.7.8.9, etc.
    - Limit any one host to 20 connections to port 80, which should not affect non-malicious users but would render slowloris unusable from a single host.
    - Limit MySQL port 3306 access on my server to only the IP range A.B.C.D/255.255.255.240.
    - Block any ICMP traffic.
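
    A hedged sketch of rules matching those four items (the addresses are the placeholders from the list above; connlimit is a stock iptables match on most kernels). Note the existing `-p icmp --icmp-type any -j ACCEPT` line would have to be removed for the ICMP drop to take effect, and the per-IP blocks must sit before the ACCEPT rules:

        iptables -I RH-Firewall-1-INPUT 1 -s 1.2.3.4 -j DROP
        iptables -I RH-Firewall-1-INPUT 2 -s 6.7.8.9 -j DROP
        # cap concurrent port-80 connections per source host at 20
        iptables -A RH-Firewall-1-INPUT -p tcp --syn --dport 80 -m connlimit --connlimit-above 20 -j REJECT
        # 255.255.255.240 is a /28; replace the two existing single-host 3306 rules
        iptables -A RH-Firewall-1-INPUT -p tcp -s A.B.C.D/28 --dport 3306 -m state --state NEW -j ACCEPT
        iptables -A RH-Firewall-1-INPUT -p icmp -j DROP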

    Read the article

  • Extending partition on linux gparted but not more space in the vm

    - by Asken
    I have a VM test installation of Linux running a build server. Unfortunately I just pressed OK when adding the disk and ended up with an 8GB drive to play with. Well into testing, the builds are consuming more and more space, of course. The VM drive was resized to 21GB, and using GParted I expanded the drive partitions, which all worked fine, but when I go back into the console and run df there's still only 8GB available. How can I claim the other 13GB I added?

        fdisk -l

        Disk /dev/sda: 21.0 GB, 20971520000 bytes
        255 heads, 63 sectors/track, 2549 cylinders, total 40960000 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x0006d284

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *        2048      499711      248832   83  Linux
        /dev/sda2          501758    40959999    20229121    5  Extended
        /dev/sda5          501760    40959999    20229120   8e  Linux LVM

        vgdisplay

          --- Volume group ---
          VG Name               ct
          System ID
          Format                lvm2
          Metadata Areas        1
          Metadata Sequence No  4
          VG Access             read/write
          VG Status             resizable
          MAX LV                0
          Cur LV                2
          Open LV               2
          Max PV                0
          Cur PV                1
          Act PV                1
          VG Size               19.29 GiB
          PE Size               4.00 MiB
          Total PE              4938
          Alloc PE / Size       1977 / 7.72 GiB
          Free  PE / Size       2961 / 11.57 GiB
          VG UUID               MwiMAz-52e1-iGVf-eL4f-P5lq-FvRA-L73Sl3

        lvdisplay

          --- Logical volume ---
          LV Name                /dev/ct/root
          VG Name                ct
          LV UUID                Rfk9fh-kqdM-q7t5-ml6i-EjE8-nMtU-usBF0m
          LV Write Access        read/write
          LV Status              available
          # open                 1
          LV Size                5.73 GiB
          Current LE             1466
          Segments               1
          Allocation             inherit
          Read ahead sectors     auto
          - currently set to     256
          Block device           252:0

          --- Logical volume ---
          LV Name                /dev/ct/swap_1
          VG Name                ct
          LV UUID                BLFaa6-1f5T-4MM0-5goV-1aur-nzl9-sNLXIs
          LV Write Access        read/write
          LV Status              available
          # open                 2
          LV Size                2.00 GiB
          Current LE             511
          Segments               1
          Allocation             inherit
          Read ahead sectors     auto
          - currently set to     256
          Block device           252:1
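
    The vgdisplay output shows the partition growth already reached LVM (Free PE: 11.57 GiB), so what remains is handing that space to the root LV and growing the filesystem. A hedged sketch, assuming root is ext3/ext4 (both resize online):

        # give all free extents in the volume group to the root LV
        lvextend -l +100%FREE /dev/ct/root
        # grow the filesystem to fill the LV, while mounted
        resize2fs /dev/ct/root
        df -h /    # should now show ~17GB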

    Read the article

  • Stronger laptop_mode in Linux

    - by Vi
    Can I have a stronger laptop mode in Linux? I want to spin down the hard drive and prevent it from spinning up even if something wants to read data that isn't in cache. In general I want these modes:

    1. Normal
    2. Current laptop mode
    3. Stronger laptop mode: spin up only when something needs to read uncached data (and cache it). No spin-ups to write anything unless there is real memory pressure (exception: an explicit "sync" command in the console). The kernel is allowed to keep processes in D-sleep for 10 seconds for this.
    4. Forced laptop mode: do not spin up, period. Keep offending processes in D-sleep until I turn this mode off. As if there were a bomb in place of the hard drive.

    I also want access times tracked (mount -o atime), but I don't want the hard drive spun up only to update them. Are there settings or kernel patches that get closer to this? Maybe I should write a special I/O scheduler for "forced laptop mode"? E.g.:

        echo suspend > /sys/block/sda/queue/scheduler    # to lock the drive
        echo cfq > /sys/block/sda/queue/scheduler        # to unlock it again
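
    Short of a custom scheduler, a hedged sketch of how far the stock knobs go toward mode 3 (values are illustrative; the hdparm timings are drive-dependent):

        # keep the disk active only briefly after a read forces a spin-up
        echo 5 > /proc/sys/vm/laptop_mode
        # hold dirty data in RAM for up to 10 minutes before writeback
        echo 60000 > /proc/sys/vm/dirty_writeback_centisecs
        echo 60000 > /proc/sys/vm/dirty_expire_centisecs
        # aggressive power management; standby after ~60 s idle (12 * 5 s)
        hdparm -B 1 -S 12 /dev/sda

    Nothing in mainline gives mode 4's hard guarantee; any uncached read will still spin the drive up.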

    Read the article

  • Terminal is not letting me make commands unless I hit enter a bunch of times

    - by ninja08
    Whenever I open Terminal it normally lets me start typing commands immediately. But earlier today I did the GitHub setup here, https://help.github.com/articles/set-up-git, and all of a sudden the prompt won't accept commands unless I hit enter a few times. This is what it looks like:

        Last login: Fri Nov 9 11:43:28 on ttys001
        mysql.save: Permission denied
        mysql.save: Permission denied
        /Users/Nick/.zshrc:32: command not found: .

        ~ git: ?
        ~ git: ?
        ~ git: ?

    See the big space? That's because it simply will never show the ~ git: prompt unless I hit enter 3-4 times. Also, it never used to say ~ git: before I did the git setup. I'm not sure what I changed. I've checked the zshrc file, commenting everything out to find the line causing the problem, and it turns out to be the line source $ZSH/oh-my-zsh.sh. Within the oh-my-zsh.sh file I've commented out each block of code, starting at the top, and found that this block is causing it:

        # Load the theme
        if [ "$ZSH_THEME" = "random" ]
        then
          themes=($ZSH/themes/*zsh-theme)
          N=${#themes[@]}
          ((N=(RANDOM%N)+1))
          RANDOM_THEME=${themes[$N]}
          source "$RANDOM_THEME"
          echo "[oh-my-zsh] Random theme '$RANDOM_THEME' loaded..."
        else
          if [ ! "$ZSH_THEME" = "" ]
          then
            if [ -f "$ZSH_CUSTOM/$ZSH_THEME.zsh-theme" ]
            then
              source "$ZSH_CUSTOM/$ZSH_THEME.zsh-theme"
            else
              source "$ZSH/themes/$ZSH_THEME.zsh-theme"
            fi
          fi
        fi
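
    To pin down exactly what the theme-loading block waits on, a hedged trace using stock zsh flags:

        # log every command zsh runs during an interactive startup
        zsh -i -x 2> /tmp/zsh-trace.log
        tail -40 /tmp/zsh-trace.log    # the last lines before the hang point at the culprit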

    Read the article

  • How can one restrict network activity to only the VPN on a Mac and prevent unsecured internet activity?

    - by John
    I'm using Mac OS and connect to a VPN to hide my location and IP (I have the 'Send all traffic over VPN connection' box checked in the Network system preferences). I wish to remain anonymous and do not wish to reveal my actual IP, hence the VPN. I have a prefpane called pearportVPN that automatically connects me to my VPN when I get online. The problem is, when I connect to the internet using AirPort (or other means), I have a few seconds of unsecured internet connection before my Mac logs onto my VPN. Therefore it's only a matter of time before I inadvertently expose my real IP address in the few seconds between when I connect to the internet and when I log onto my VPN. Is there any way to block any traffic to and from my Mac that does not go through my VPN, so that nothing can connect unless I'm logged onto the VPN? I suspect I need a third-party app that blocks all traffic except through the VPN server's address, perhaps Intego VirusBarrier X6 or Little Snitch, but I'm afraid I'm not sure which is right or how to configure them. Any help would be much appreciated. Thanks!
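
    For the command-line inclined, a hedged sketch of a kill switch using the built-in ipfw firewall (present on Mac OS X of that era; the VPN server address and tunnel interface below are placeholders):

        sudo ipfw add 100 allow ip from any to any via lo0      # keep localhost working
        sudo ipfw add 200 allow ip from any to 203.0.113.5      # the VPN server itself
        sudo ipfw add 300 allow ip from any to any via ppp0     # traffic inside the tunnel
        sudo ipfw add 65000 deny ip from any to any             # everything else is dropped

    Little Snitch can express a similar policy per-application, if a GUI is preferred.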

    Read the article

  • Linux as a gateway (no NAT)

    - by Hugo
    I'm trying to configure a Linux server as a gateway/router, but I can't get it to work, and all the information I've managed to find is NAT-related. I have a public IP block for the gateway and the devices behind it, so I want the gateway to simply route packets to the internet. Again: no NATing! I've managed to get the gateway to access the internet successfully (that was just a matter of configuring the IP and GW), and the computers behind it can communicate with it.

    [EDIT: more info] This is actually an IPv6 block (2800:40:403::0/48), but I've found that most utilities and instructions can be easily adapted from IPv4 to IPv6 with little hassle. The server has two ports:

        wan: 2800:40:403::1/48
        lan: 2800:40:403::3/48

    One of the computers behind it is connected via a switch: 2800:40:403::7/48. The wan interface on the server can ping6 www.google.com without issues. The lan interface on the server and the client can mutually ping each other without issues (as well as SSH, etc). I've tried setting the server as the default gateway for the client, with no luck:

        client # route -A inet6 add default gw 2800:40:403::3 dev eth1
        server # cat /proc/sys/net/ipv6/conf/all/forwarding
        1

    I don't want any filtering/firewalling/etc, just plain routing. Thanks.
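
    One likely snag: both interfaces carry the same /48, so the upstream router believes the whole block is on-link on the wan side and sends neighbor solicitations for the clients there, which the gateway never answers. A hedged sketch of NDP proxying as a workaround (interface names are assumptions):

        sysctl -w net.ipv6.conf.all.forwarding=1
        sysctl -w net.ipv6.conf.all.proxy_ndp=1
        # answer NDP on the wan interface for each client address behind the gateway
        ip -6 neigh add proxy 2800:40:403::7 dev eth0

    The cleaner fix, if the upstream allows it, is splitting the /48 so wan and lan sit in different subnets and routing between them normally.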

    Read the article

  • file system damage

    - by jffrs
    I am trying to recover using a backup superblock on /dev/sda2, which contains Ubuntu 12.04 LTS on an ext4 partition, from an Ubuntu 10.04 live CD. The output is below:

        root@ubuntu:/home/ubuntu# fsck.ext4 -b 163840 -B 4096 /dev/sda2
        e2fsck 1.41.11 (14-Mar-2010)
        /dev/sda2 was not cleanly unmounted, check forced.
        Resize inode not valid. Recreate? yes
        Pass 1: Checking inodes, blocks, and sizes
        Programming error? block #7963637 claimed for no reason in process_bad_block.
        Programming error? block #11240437 claimed for no reason in process_bad_block.
        Root inode is not a directory. Clear? yes
        Inode 712 is in extent format, but superblock is missing EXTENTS feature
        Fix? yes
        Inode 98519 has compression flag set on filesystem without compression support. Clear? yes
        Inode 98519 has INDEX_FL flag set but is not a directory. Clear HTree index?

    What's the correct procedure?
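
    A hedged caution before answering more prompts: "superblock is missing EXTENTS feature" hints that the chosen backup superblock (or the old e2fsck 1.41.11 on the 10.04 live CD) doesn't match the ext4 filesystem, and "fixing" against the wrong superblock destroys data. A sketch of locating the real backups first; note mke2fs -n only simulates, and must be given the same parameters the filesystem was created with:

        # list where the backup superblocks should live, without writing anything
        mke2fs -n /dev/sda2
        # then retry against one of the reported backups, answering 'no' until the output looks sane
        fsck.ext4 -b 32768 /dev/sda2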

    Read the article

  • Debian boot problems

    - by psp
    I've got a Debian server with one disk. No dual boot or anything fancy, just Debian 6.0 (Squeeze). I rebooted the server today and now it doesn't boot. I get the following from GRUB:

        error: hd0,msdos out of disk

    I then get a grub prompt:

        grub rescue>

    I've been googling for ages with no luck. /etc/fstab:

        # /etc/fstab: static file system information.
        #
        # <file system> <mount point> <type> <options> <dump> <pass>
        aufs  /    aufs  rw           0 0
        tmpfs /tmp tmpfs nosuid,nodev 0 0

    I've run Debian rescue mode and looked through the syslog. I see hundreds of entries like this:

        Jun 30 22:51:08 kernel: [ 615.217382] sd 2:0:0:0: [sda] Unhandled error code
        Jun 30 22:51:08 kernel: [ 615.217385] sd 2:0:0:0: [sda] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
        Jun 30 22:51:08 kernel: [ 615.217389] sd 2:0:0:0: [sda] CDB: Read(10): 28 00 00 00 00 00 00 00 08 00
        Jun 30 22:51:08 kernel: [ 615.217399] end_request: I/O error, dev sda, logical block 0
        Jun 30 22:51:08 kernel: [ 615.217402] Buffer I/O error on device sda, logical block 0
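
    Those DID_BAD_TARGET read errors on logical block 0 point at the disk or controller rather than GRUB. A hedged first check from the rescue environment (smartmontools; the device name is assumed):

        # overall health verdict, then the attributes that flag a dying disk
        smartctl -H /dev/sda
        smartctl -A /dev/sda | grep -i -E 'realloc|pending|uncorrect'

    If SMART also fails or reports reallocated/pending sectors, image the disk with ddrescue before any further boot-repair attempts.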

    Read the article
