Search Results

Search found 8896 results on 356 pages for 'jason block'.

Page 282/356 | < Previous Page | 278 279 280 281 282 283 284 285 286 287 288 289  | Next Page >

  • iptables : how to allow incoming ftp traffic?

    - by logansama
    Hi, still fighting my way through the jungle that is called iptables. I have managed to allow FTP access outside of our LAN; both of these would work. NOTE: eth0 is the LAN interface and eth1 is the WAN interface. iptables -t filter -A FORWARD -i eth0 -p tcp --dport 20:21 -j ACCEPT or iptables -A FORWARD -i eth0 -o eth1 -p tcp --sport 20:21 --dport 1024:65535 -j ACCEPT But when I connect to an external FTP server I manage to log in, and all is fine until the client tries to list the directory contents. Then nothing happens, because the data connection is blocked: I do not have a rule set up to allow it (my last rule on the FORWARD chain blocks all traffic). I have tried a gazillion rules (many of which I did not understand) to try to allow the FTP traffic back through my server. One such rule, for example, was: iptables -A FORWARD -i eth1 -o eth0 -p tcp --sport 20:21 --dport 1024:65535 -j ACCEPT But I cannot get the listing to work; it just times out after a while. Would anyone know how to build a rule that allows the FTP directory listing / lets such traffic back in? Or have a link to sources I could work through? Thank you,
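
    A possible approach (a sketch, not taken from the post): let the kernel's FTP connection-tracking helper mark the data connections instead of guessing port ranges. Assuming the nf_conntrack_ftp module is available on this box, something along these lines may be worth testing:

        # load the FTP helper so data connections get flagged as RELATED
        modprobe nf_conntrack_ftp
        # FTP control channel out of the LAN
        iptables -A FORWARD -i eth0 -o eth1 -p tcp --dport 21 -j ACCEPT
        # anything belonging to, or related to, an existing connection (FTP data included)
        iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT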

    Read the article

  • What are ways to prevent files with the Right-to-Left Override Unicode character in their name (a malware spoofing method) from being written or read?

    - by galacticninja
    What are ways to avoid or prevent files with the RLO (Right-to-Left Override) Unicode character in their name (a malware method to spoof filenames) from being written or read on a Windows PC? More info on the RLO Unicode character here: http://www.fileformat.info/info/unicode/char/202e/index.htm http://en.wikipedia.org/wiki/Bi-directional_text Info on the RLO Unicode character when used by malware: http://www.ipa.jp/security/english/virus/press/201110/E_PR201110.html Mirror link: http://webcache.googleusercontent.com/search?q=cache:KasmfOvbVJ8J:www.ipa.jp/security/english/virus/press/201110/E_PR201110.html+&cd=1&hl=en&ct=clnk You can try this RLO character test webpage: http://www.fileformat.info/info/unicode/char/202e/browsertest.htm The RLO character is already pasted into the 'Input Test' field on that page. Try typing there and notice that the characters you type come out in reverse order (right-to-left instead of left-to-right). In filenames, the RLO character can be positioned so that the file appears to have a filename or file extension different from what it actually has. (It will still be disguised even if 'Hide extensions for known file types' is unchecked.) The only information I can find on how to prevent files with the RLO character from being run is from the Information Technology Promotion Agency, Japan website: http://www.ipa.jp/security/english/virus/press/201110/E_PR201110.html (mirror link). They advise using the Local Security Policy settings manager to block files with the RLO character in their names from being run. Can anyone recommend other good solutions to prevent files with the RLO character in their names from being written or read on the computer, or a way to alert the user when a file with the RLO character is detected? My OS is Windows 7, but I'm looking for solutions for Windows XP, Vista and 7, or a solution that will work for all of those OSes, to help people using them too.
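
    As a detection aid (a sketch, not from the post; the C:\Users path is only an example), a PowerShell scan for file names containing U+202E might look like this:

        # list files whose names contain the Right-to-Left Override character (U+202E)
        Get-ChildItem -Path C:\Users -Recurse -ErrorAction SilentlyContinue |
            Where-Object { -not $_.PSIsContainer -and $_.Name -match [char]0x202E } |
            Select-Object -ExpandProperty FullName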

    Read the article

  • reverse proxying with NGINX to two back-end servers

    - by aag
    I am trying to learn how to configure the Nginx proxy. All requests from external (www.external.com) should go to internal server 10.10.10.16:2080, except for www.external.com/nagios requests, which should go to internal 10.10.10.18. My location block looks as follows: location ~* / { proxy_buffers 16 4k; proxy_buffer_size 2k; proxy_buffering off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header Accept-Encoding ""; proxy_pass http://10.10.10.16:2080; } # # nagios server location ~* /nagios/ { proxy_buffers 16 4k; proxy_buffer_size 2k; proxy_buffering off; # proxy_set_header Host $host; # proxy_set_header X-Real-IP $remote_addr; # proxy_set_header Accept-Encoding ""; proxy_pass http://10.10.10.18; } The first location seems to work fine. However, any request to www.external.com/nagios sends the browser into the eternal pastures. Of course, 10.10.10.18/nagios was tested and works fine. What am I missing?
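
    A likely explanation (an assumption, not confirmed in the post): when several regex locations could match, nginx uses the first one defined, so location ~* / also captures /nagios requests. A sketch using plain prefix locations, which nginx matches by longest prefix instead:

        location / {
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_pass http://10.10.10.16:2080;
        }

        location /nagios/ {
            proxy_set_header Host $host;
            proxy_pass http://10.10.10.18;
        }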

    Read the article

  • Trouble Letting Users Get to Certain Sites through Squid Proxy

    - by armani
    We have Squid running on a RHEL server. We want to block users from getting to Facebook, other than a couple of specific sites, like our organization's page. Unfortunately, I can't get those specific pages unblocked without allowing ALL of Facebook through. [squid.conf] # Local users: acl local_c src 192.168.0.0/16 # HTTP & HTTPS: acl Safe_ports port 80 443 # File containing blocked sites, including Facebook: acl blocked dstdom_regex "/etc/squid/blocked_content" # Whitelist: acl whitelist url_regex "/etc/squid/whitelist" # I do know that order matters: http_access allow local_c whitelist http_access allow local_c !blocked http_access deny all [blocked_content] .porn_site.com .porn_site_2.com [...] facebook.com [whitelist] facebook.com/pages/Our-Organization/2828242522 facebook.com/OurOrganization facebook.com/media/set/ facebook.com/photo.php www.facebook.com/OurOrganization My biggest weakness is regular expressions, so I'm not 100% sure whether this is all correct. If I remove the "!blocked" part of the http_access rule, all of Facebook works. If I remove "facebook.com" from the blocked_content file, all of Facebook works. Right now, visiting facebook.com/OurOrganization gives a "The website declined to show this webpage / HTTP 403" error in Internet Explorer, and "Error 111 (net::ERR_TUNNEL_CONNECTION_FAILED): Unknown error" in Chrome. WhereGoes.com tells me the URL redirects for that URL go like this: facebook.com/OurOrganization -- [301 Redirect] -- http://www.facebook.com/OurOrganization -- [302 Redirect] -- https://www.facebook.com/OurOrganization I tried turning up the debug traffic out of Squid using "debug_options ALL,6" but I can't narrow anything down in /var/log/access.log and /var/log/cache.log. I know to issue "squid -k reconfigure" whenever I make changes to any files.
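
    One factor worth checking (an assumption, not stated in the post): after Facebook's redirect to HTTPS, Squid only sees a CONNECT www.facebook.com:443 request, so url_regex entries containing a path such as /OurOrganization can never match the encrypted traffic; only the host name is visible unless SSL interception is used. A sketch for handling the tunnel at host level:

        acl CONNECT method CONNECT
        acl fb_domain dstdomain .facebook.com
        # allow the HTTPS tunnel to Facebook itself; path-level whitelisting
        # cannot be enforced inside the encrypted tunnel
        http_access allow local_c CONNECT fb_domain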

    Read the article

  • Possible attack on my SQL server?

    - by erizias
    Checking my SQL Server log I see several entries like this: Date: 08-11-2011 11:40:42 Source: Logon Message: Login failed for user 'sa'. Reason: Password did not match for the login provided. [CLIENT: 56.60.156.50] Date: 08-11-2011 11:40:42 Source: Logon Message: Error: 18456. Severity: 14. State: 8. Date: 08-11-2011 11:40:41 Source: Logon Message: Login failed for user 'sa'. Reason: Password did not match for the login provided. [CLIENT: 56.60.156.50] Date: 08-11-2011 11:40:41 Source: Logon Message: Error: 18456. Severity: 14. State: 8. And so on... Is this a possible attack on my SQL Server from China? I looked up the IP address at ip-lookup.net, which stated it was Chinese. And what should I do? - Block the IP address in the firewall? - Delete the user sa? And how do I best protect my web server? :) Thanks in advance!
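
    Two commonly suggested hardening steps (a general sketch, not taken from the post; ops_admin is just a placeholder name): stop exposing port 1433 to the internet, and disable or rename the well-known sa login after creating a different sysadmin account:

        -- create and verify another sysadmin login first, then:
        ALTER LOGIN sa DISABLE;
        -- or move it off the name the scanners keep guessing
        ALTER LOGIN sa WITH NAME = ops_admin;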

    Read the article

  • Debugging nginx URL rewrite: How do I figure out where the problem is?

    - by pjmorse
    I have a specific URL pattern on a site which needs to be redirected to the HTTPS version. This is a Django site; Nginx checks each URL in memcached, and if it doesn't find a cached version it proxies the request to Apache/mod_python for Django to render the page. The relevant configuration block is rewrite ^/certificate https://mysite.com/certificate ; rewrite ^/([a-zA-Z]{2})/certificate https://mysite.com/certificate ; ...and it doesn't appear to be working at all. Nginx is: $ nginx -V nginx version: nginx/0.7.65 built by gcc 4.2.4 (Ubuntu 4.2.4-1ubuntu4) TLS SNI support disabled configure arguments: --prefix=/usr/local/nginx --pid-path=/var/run/nginx.pid --with-http_gzip_static_module --with-http_ssl_module How can I figure out if the problem is my patterns not matching, or a more obscure configuration problem? (The site is localized to three languages, and the localization is in the URL string, e.g. /US/news/, /DE/about, etc. It tracks localization in the session as well, defaulting to US, so if you just requested /news Django will rewrite to /US/news unless the user has a cookie indicating they're using a different localization. Django handles this, though, not Nginx.)
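
    One way to see whether the patterns match at all (a sketch, assuming the stock rewrite module): enable rewrite_log, which records every rewrite decision in the error log at notice level:

        # alongside the existing rewrite rules, in the same server block
        error_log /var/log/nginx/error.log notice;
        rewrite_log on;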

    Read the article

  • Can't mount hard drive. Ubuntu 12.04

    - by Sam
    I am trying to recover some pictures on my 320 GB Hard Disk, so I put in a Live Ubuntu CD and am in that right now. In the devices list, it shows my USB drive, but not my 320 GB Hard Disk. I can see the disk in Disk Utility (it says it's on /dev/sda), but it's not mounted, and it says it has a few bad sectors but it is OK. In Disk Usage Analyzer, it says my maximum capacity is 13.4 GB, so it's definitely not using the 320 GB Hard Disk. I tried the following: sudo mkdir /media/newhd (worked) sudo mount /dev/sda /media/newhd (didn't work. it says I must specify the filesystem type) I then tried: fsck.ext4 -f /dev/sda (didn't work. Said: Superblock invalid, trying to backup blocks. then: Bad magic number in super-block while trying to open /dev/sda. The superblock could not be read or does not describe a correct ext2 filesystem. If the device is valid and it contains an ext2 filesystem (and not swap or ufs or something else), then the superblock is corrupt, and you might try running e2fsck with an alternate superblock) Does anyone have any ideas? The whole problem started when my Windows Vista said "Can't find operating system". Any ideas on how I can get on to my hard drive at /dev/sda?
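
    A first thing worth checking (a sketch, assuming a normal partition table): /dev/sda is the whole disk, and the filesystem usually lives on a partition such as /dev/sda1; since the disk came from a Windows Vista machine, it is probably NTFS rather than ext2/ext4:

        sudo fdisk -l /dev/sda                               # list the partitions on the disk
        sudo mount -t ntfs-3g -o ro /dev/sda1 /media/newhd   # mount the first partition read-only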

    Read the article

  • Can you see something wrong in my .htaccess?

    - by AlexV
    OK, after much searching, trial and error, I've managed to create an .htaccess file that does what I wanted (see explanations and questions after the code block): <IfModule mod_rewrite.c> RewriteEngine On #1 If the requested file is not url-mapper.php (to avoid .htaccess loop) RewriteCond %{REQUEST_FILENAME} (?<!url-mapper\.php)$ #2 If the requested URI does not end with an extension OR if the URI ends with .php* RewriteCond %{REQUEST_URI} !\.(.*) [OR] RewriteCond %{REQUEST_URI} \.php.*$ [NC] #3 If the requested URI is not in an excluded location RewriteCond %{REQUEST_URI} !^/seo-urls\/(excluded1|excluded2)(/.*)?$ #Then serve the URI via the mapper RewriteRule .* /seo-urls/url-mapper.php?uri=%{REQUEST_URI} [L,QSA] </IfModule> This is what the .htaccess should do: #1 checks that the requested file is not url-mapper.php (to avoid infinite redirect loops). This file will always be at the root of the domain. #2 the .htaccess must only catch URLs that don't end with an extension (www.foo.com -- catch | www.foo.com/catch-me -- catch | www.foo.com/dont-catch.me -- don't catch) and URLs ending with .php* files (.php, .php4, .php5, .php123...). #3 some directories (and their children) can be excluded from the .htaccess (in this case /seo-urls/excluded1 and /seo-urls/excluded2). Finally, the .htaccess feeds the mapper a hidden GET parameter named uri containing the requested URI. Even though I tested it and everything works, I want to know whether what I'm doing is correct (and whether it's the "best" way to do it). I've learned a lot with this "project", but I still consider myself a beginner at .htaccess and regular expressions, so I want to triple-check it here before putting it in production...
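
    One way to verify the rules beyond manual testing (a sketch, assuming Apache 2.2 and access to the server config, since these directives are not allowed in .htaccess; in 2.4 this became LogLevel rewrite:trace3): enable the rewrite log temporarily and watch which conditions fire:

        # in the virtual host / server config, not in .htaccess
        RewriteLog /var/log/apache2/rewrite.log
        RewriteLogLevel 3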

    Read the article

  • Unable to resize ec2 ebs root volume

    - by nathanjosiah
    I have followed many of the tutorials that pretty much all say the same thing, which is basically: Stop the instance Detach the volume Create a snapshot of the volume Create a bigger volume from the snapshot Attach the new volume to the instance Start the instance back up Run resize2fs /dev/xxx However, step 7 is where the problems start happening. In any case, running resize2fs always tells me that it is already xxxxx blocks big and does nothing, even with -f passed. So I continue with tutorials which all basically say the same thing, and that is: Delete all partitions Recreate them back to what they were, except with the bigger sizes Reboot the instance and run resize2fs (I have tried these steps both from the live instance and by attaching the volume to another instance and running the commands there) The main problem is that the instance won't start back up again, and the system error log provided in the AWS console doesn't provide any errors. (It does, however, stop at the GRUB bootloader, which to me indicates that it doesn't like the partitions (yes, the boot flag was toggled on the partition with no effect).) The other thing that happens, regardless of what changes I make to the partitions, is that the instance that the volume is attached to says that the partition has an invalid magic number and the super-block is corrupt. However, if I make no changes and reattach the volume, the instance runs without a problem. Can anybody shed some light on what I could be doing wrong? Edit On my new volume of 20GB with the 6GB image, df -h says: Filesystem Size Used Avail Use% Mounted on /dev/xvde1 5.8G 877M 4.7G 16% / tmpfs 836M 0 836M 0% /dev/shm And fdisk -l /dev/xvde says: Disk /dev/xvde: 21.5 GB, 21474836480 bytes 255 heads, 63 sectors/track, 2610 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x7d833f39 Device Boot Start End Blocks Id System /dev/xvde1 1 766 6144000 83 Linux Partition 1 does not end on cylinder boundary. /dev/xvde2 766 784 146432 82 Linux swap / Solaris Partition 2 does not end on cylinder boundary. Also, sudo resize2fs /dev/xvde1 says: resize2fs 1.41.12 (17-May-2010) The filesystem is already 1536000 blocks long. Nothing to do!
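
    A possible explanation (an assumption based on the fdisk output above): resize2fs can only grow the filesystem up to the end of /dev/xvde1, and that partition is still roughly 6 GB, so the partition itself has to be enlarged first while keeping the same starting sector. A sketch using growpart from cloud-utils, with the volume attached to a helper instance as /dev/xvdf (the device name is an example):

        sudo apt-get install cloud-guest-utils   # provides growpart; package name varies by distro
        sudo growpart /dev/xvdf 1                # extend partition 1 to the end of the disk, same start sector
        sudo e2fsck -f /dev/xvdf1
        sudo resize2fs /dev/xvdf1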

    Read the article

  • Prevent Linux from processing incoming ICMP Host unreachable packets

    - by bbc
    I have a test setup with one host on a network (10.1.0.0/16) talking via TCP to another one on another network (10.2.0.0/16) and a gateway in the middle. Sometimes the TCP connection is lost, and while scanning the trace (pcap), it looks like it's because of just one ICMP Host unreachable message sent by the gateway to 10.1.0.1 at some point. 10.1.0.1 then sends a TCP RST to 10.2.0.1. In my opinion, the gateway (pfSense) is broken or not configured correctly, but anyway, for testing purposes, I'd like to block this kind of ICMP on the host (10.1.0.1) before it has an influence on my TCP connection (or does it? I'm not even sure). I've tried iptables: iptables -I INPUT -i eth0 -p icmp --icmp-type host-unreachable -j DROP but while it does a good job at preventing userspace applications like ping from receiving these ICMP messages, my TCP connection still comes to an end when the alleged "killer ICMP packet" is sent by the gateway. Am I right about how it is processed? If yes, then what can I do to achieve my goal?
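
    One thing that might be worth testing (an assumption, not verified in the post): connection tracking sees the ICMP error before the filter INPUT chain does, so dropping it in the raw table, ahead of conntrack, may behave differently from the INPUT rule above:

        iptables -t raw -I PREROUTING -i eth0 -p icmp --icmp-type host-unreachable -j DROP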

    Read the article

  • Updating ASUS BIOS on Windows 7 64bit

    - by joesavage
    I've recently decided that I want to try and utilize my two graphics cards so that I can have a dual monitor setup. Unfortunately Windows only seems to notice my most recent graphics card installation - and so I've been told that I should look into my BIOS and try to enable two graphics cards. I could not find this setting anywhere in my M2N68-AM Plus v0210 BIOS. After some further research I figured that I should perhaps upgrade my BIOS, so I searched and managed to download the latest version (v1804) as a ROM file. However I am having difficulty figuring out how to install it. I've tried using the Asus EZFlash feature built into my BIOS, but when trying to load up a variety of different ROMs that are for my motherboard/BIOS I get the error: Boot block in file is not valid! I'm not totally sure what I should do to fix this, so I'm looking into other methods of upgrading my BIOS - however I can't really find any solutions that seem to work. Asus Update is for 32-bit only, AFUDOS doesn't appear to work on my Windows 7 64-bit system (I think it's supposed to run in DOS or something - but that just sounds confusing since I know nothing about DOS). Could anybody help me with this?

    Read the article

  • Is Ubuntu a bad distro for a standalone mysql database server?

    - by DhruvPathak
    I read an article here: http://www.mysqlperformanceblog.com/2011/12/08/which-linux-distribution-for-mysql-server/ On the other end there are Debian and Ubuntu. Both use tool called dpkg for package management. There isn’t a month that I log in to a system based on either distribution where there are no issues with packages consistency. Unfinished installations, unresolved conflicts are so common that it’s just beyond simple negligence. The packaging system is just not robust enough. Another problem is that one broken package may block you from installing or uninstalling anything else. Imagine that someone left system in such shape, you prepared for downtime, stopped MySQL and… error – text editor has not been properly installed, so you cannot upgrade MySQL either until the problem is fixed. In a stressful situation when downtime clock ticks – annoying at best We prefer Ubuntu server because of familiarity and Ubuntu also being our development environment. Questions: Is Ubuntu commonly used in production for a MySQL database server? Is it ever worth the trouble to have one distro, e.g. Ubuntu, on the web servers, and another, say Red Hat, on the database server? Or is a homogeneous server pool a better choice?

    Read the article

  • PHP-FPM processes holding onto MongoDB connection states

    - by Brendan
    For the relevant part of our server stack, we're running: NGINX 1.2.3 PHP-FPM 5.3.10 with PECL mongo 1.2.12 MongoDB 2.0.7 CentOS 6.2 We're getting some strange, but predictable behavior when the MongoDB server goes away (crashes, gets killed, etc). Even with a try/catch block around the connection code, i.e: try { $mdb = new Mongo('mongodb://localhost:27017'); } catch (MongoConnectionException $e) { die( $e->getMessage() ); } $db = $mdb->selectDB('collection_name'); Depending on which PHP-FPM workers have connected to mongo already, the connection state is cached, causing further exceptions to go unhandled, because the $mdb connection handler can't be used. The troubling thing is that the try does not consistently fail for a considerable amount of time, up to 15 minutes later, when -- I assume -- the php-fpm processes die/respawn. Essentially, the behavior is that when you hit a worker that hasn't connected to mongo yet, you get the die message above, and when you connect to a worker that has, you get an unhandled exception from $mdb->selectDB('collection_name'); because catch does not run. When PHP is a single process, i.e. via Apache with mod_php, this behavior does not occur. Just for posterity, going back to Apache/mod_php is not an option for us at this time. Is there a way to fix this behavior? I don't want the connection state to be inconsistent between different php-fpm processes.
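
    A sketch of one defensive pattern (an assumption, not a confirmed fix): catch the driver's base MongoException around both the connection and its first use, and force a round trip immediately, so a stale cached handle fails inside the try block rather than later:

        try {
            $mdb = new Mongo('mongodb://localhost:27017');
            $db  = $mdb->selectDB('collection_name');
            $db->command(array('ping' => 1));   // fails here if the cached connection is dead
        } catch (MongoException $e) {           // parent class of MongoConnectionException
            die($e->getMessage());
        }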

    Read the article

  • TCP/UDP hole punching from and to the same NAT network

    - by Luc
    I was wondering if tcp/udp hole punching would still work when you are in the same network (behind a NAT), and what the packet's path would be. What happens when using hole punching on the same network, is that it will send a packet out with the same destination and source address. Only the source and destination port would differ. I imagine a router with NAT loopback enabled will handle this as it should, but how about other routers? Would they drop the packet, or would a router (the first?) from the ISP bounce the packet back after which it gets handled okay? I'm wondering because I was thinking about using this technique to circumvent a block between peers in a network (like a school network where clients can only access the internet, but any contact with each other is blocked). The only other option is to use a man in the middle as proxy (tunnel?). The disadvantage of this is that you have to have a server with significantly more bandwidth than one that would only do hole punching. Also the latency would increase significantly.

    Read the article

  • ASA access lists and Egress Filtering

    - by Nate
    Hello. I'm trying to learn how to use a cisco ASA firewall, and I don't really know what I'm doing. I'm trying to set up some egress filtering, with the goal of allowing only the minimal amount of traffic out of the network, even if it originated from within the inside interface. In other words, I'm trying to set up dmz_in and inside_in ACLs as if the inside interface is not too trustworthy. I haven't fully grasped all the concepts yet, so I have a few issues. Assume that we're working with three interfaces: inside, outside, and DMZ. Let's say I have a server (X.Y.Z.1) that has to respond to PING, HTTP, SSH, FTP, MySQL, and SMTP. My ACL looks something like this: access-list outside_in extended permit icmp any host X.Y.Z.1 echo-reply access-list outside_in extended permit tcp any host X.Y.Z.1 eq www access-list outside_in extended permit tcp any host X.Y.Z.1 eq ssh access-list outside_in extended permit tcp any host X.Y.Z.1 eq ftp access-list outside_in extended permit tcp any host X.Y.Z.1 eq ftp-data established access-list outside_in extended permit tcp any host X.Y.Z.1 eq 3306 access-list outside_in extended permit tcp any host X.Y.Z.1 eq smtp and I apply it like this: access-group outside_in in interface outside My question is, what can I do for egress filtering? I want to only allow the minimal amount of traffic out. Do I just "reverse" the rules (i.e. the smtp rule becomes access-list inside_out extended permit tcp host X.Y.Z.1 any eq smtp ) and call it a day, or can I further cull my options? What can I safely block? Furthermore, when doing egress filtering, is it enough to apply "inverted" rules to the outside interface, or should I also look into making dmz_in and inside_in acls? I've heard the term "egress filtering" thrown around a lot, but I don't really know what I'm doing. Any pointers towards good resources and reading would also be helpful, most of the ones I've found presume that I know a lot more than I do.
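
    A sketch of the general shape of an inside_in egress policy (the smtp, DNS and NTP lines are assumptions about what the server itself originates, not taken from the post):

        access-list inside_in extended permit tcp host X.Y.Z.1 any eq smtp
        access-list inside_in extended permit udp host X.Y.Z.1 any eq domain
        access-list inside_in extended permit udp host X.Y.Z.1 any eq ntp
        access-list inside_in extended deny ip any any log
        access-group inside_in in interface inside

    Because the ASA is stateful, replies to connections that arrived from outside do not need their own inside_in entries; only traffic the server originates does.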

    Read the article

  • How do I speed up and cache mmap file access over NFS on Linux?

    - by Zan Lynx
    The server and client are both 64-bit Ubuntu 10.04 LTS. The application in question is a custom app that uses mmap() for fast random file access. Its ideal state is when the entire file is cached in RAM. The network connections are really fast 10Gb Ethernet. It is a virtual server blade setup. It isn't the network connections slowing things down because everything performs superbly when using a virtual disk (iSCSI to the SAN). But when we run the application on a NFS home directory mount, performance goes to the dogs. It appears that the Linux kernel isn't caching anything. So it is reading every single disk block needed by mmap() accesses over and over and over again. The NFS mount is done through autofs, which has only default settings. /proc/mounts shows the NFS mount is done with the following options: rw,relatime,vers=3,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.11.52,mountvers=3,mountproto=tcp,addr=192.168.11.52 How can I make Ubuntu 10.04 cache the file instead of reloading it all the time?
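
    One avenue sometimes suggested for this (a sketch, and it is an assumption that it helps with mmap()-style access at all, which is worth verifying): FS-Cache via cachefilesd plus the fsc mount option, which gives NFS a persistent local cache:

        sudo apt-get install cachefilesd
        sudo sed -i 's/^#RUN=yes/RUN=yes/' /etc/default/cachefilesd
        sudo service cachefilesd start
        # then add the fsc option to the NFS mount, e.g. in the autofs map entry:
        #   -rw,hard,intr,fsc   server:/export/home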

    Read the article

  • SSH hangs when executing command remotely

    - by Serty Oan
    Client: OpenSSH_5.1p1 Debian-5ubuntu1 (Ubuntu 9.04) Server: OpenSSH_5.1p1 Debian-5 (Proxmox 2.6.24-7-pve) I use SSH to execute commands remotely on the server (module check_by_ssh of Nagios). But SSH hangs from time to time when trying to execute commands. I can log in to the server via SSH but not execute a simple 'ls'. And it seems to be blocked for all clients from the same IP address. Authentication is not the problem, whether it is done with SSH keys or a password. ssh -l root -p 2222 server.domain.tld 'ls' Here is the client debug info: debug1: Entering interactive session. debug2: callback start debug2: client_session2_setup: id 0 debug1: Sending environment. debug3: Ignored env ORBIT_SOCKETDIR *** skipping approx 40 env var ignored debug1: Sending command: ls debug2: channel 0: request exec confirm 1 It hangs there. Then, after a random amount of time, it works again (without my doing anything). Killing all sshd processes on the server seems to work too. It works from PuTTY. I saw that some people had trouble like this due to an ISP reverse DNS problem, but it does not seem to be the case here. It can work for hours and then not work for half an hour or so. What could explain this behaviour?
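
    A way to watch the server side of the hang (a sketch; port 2244 is just an example): run a second sshd in the foreground in debug mode on a spare port and reproduce against it, which usually shows exactly where session setup stops:

        # on the server
        /usr/sbin/sshd -d -d -d -p 2244
        # on the client
        ssh -vvv -l root -p 2244 server.domain.tld 'ls'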

    Read the article

  • maximum size filesystem on my test .... approach?

    - by jocco
    Hello all, I'm new to the site, and I have a question. I got this question on a test and would really like to know the correct approach to solving it. Here is the question. In an indexed filesystem the first index block (inode) has 12 direct pointers and 1 pointer to an indirect index block. The filesystem is implemented on a disk with a disk block size of 1024 bytes. All pointers are 32 bit. Question: what is the maximum file size (kilobytes) of this filesystem? If possible, I'd like not just an answer but an explanation. Edit: it was multiple choice, by the way, with 4 answers: a. 13 K b. 268 K c. 524 K d. 1036 K As for my approach, I only got as far as knowing that 1 pointer is 32 bits. Also, I found something else here on the site which seems very useful: http://stackoverflow.com/questions/2755006/understanding-the-concept-of-inodes OK, I got this far: there are 12 direct blocks and each block is 1024 bytes. 1024 * 12 = 12288 bytes, or 12 KB, directly accessible. Please correct me if I'm wrong. Each pointer is 32 bits = 4 bytes. And to be honest, at this point I'm starting to get confused, especially since my answer is way over any of the multiple choice answers.
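
    A worked sketch of the usual way this is computed (assuming 1 KB = 1024 bytes and counting only the 12 direct pointers plus the single indirect block):

        direct:    12 pointers                      -> 12 * 1024 B     = 12 KB
        indirect:  1024 B / 4 B per pointer = 256   -> 256 * 1024 B    = 256 KB
        maximum file size                           = 12 KB + 256 KB   = 268 KB   (answer b)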

    Read the article

  • Sendmail slow to accept emails

    - by Rich
    I have a PHP web app which is using SMTP to sendmail on localhost to send email. I would like sendmail to accept the mail request immediately and queue it for later sending, as I don't want user-facing request threads blocked on emails. Sendmail is installed with the default settings on RHEL web servers. Sometimes sendmail is blocking for a long time after the MAIL command is sent -- sometimes taking 60 or 90 seconds to accept the mail. The time taken is usually very close to 60 or 90 seconds, which makes me think this is some kind of timeout. I have looked in the sendmail logs, and there are plenty of "deferred" emails, but nothing which looks responsible for this delay. How can I diagnose what is slowing down sendmail? How can I configure sendmail to always accept the mail immediately and to queue it for later sending? Update: I'm not sure, but it looks like this might be linked to aol.com addresses. I strongly suspect that sendmail is doing some kind of blocking recipient address verification at the accept-email-for-sending stage. How can I disable that, so that sendmail doesn't block my UI threads? Update 2: This only seems to happen at busy times. Perhaps I am running out of sendmail threads or something? How can I check that?
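
    Two places worth looking (assumptions, not a confirmed diagnosis; delays sitting at almost exactly 60 or 90 seconds often point at ident or DNS timeouts rather than at queueing): the ident probe and DNS canonification, both tunable in sendmail.mc:

        dnl in /etc/mail/sendmail.mc
        define(`confTO_IDENT', `0')dnl   skip the ident probe entirely
        FEATURE(`nocanonify')dnl         skip DNS canonification of addresses at accept time
        dnl then rebuild sendmail.cf (e.g. make -C /etc/mail) and restart sendmail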

    Read the article

  • Restrict access to one SVN repository (overwrite default)

    - by teel
    I'm trying to set up our SVN server so that by default the group developers will have access to all repositories, but I want to override that setting on certain repositories where I want to allow access only to specific users (or separate groups). The current configuration is SVN + WebDAV on Apache2. All my repositories are located at /var/lib/svn/ In dav_svn.authz I currently have [/] @developers = rw @users = r Now I want to add one repository (let's call it secret_repo) that would only allow access to one user who is also a member of the developers group. I tried to do [secret_repo:/] * = secret_user = rw where secret_user is the user I'd like to give access to the repository, but it doesn't seem to work. Currently the server is using Apache's LDAP module to authenticate users from our Active Directory domain, and I'd like to keep it that way if possible. Also, I seem to be able to browse all my repos freely with any web browser, which I'd like to block. The second problem is that I have WebSVN on the server, which is using Apache's LDAP authentication. Everyone who is a member of our domain can access it, so I'd like to hide this secret_repo from the WebSVN listing. It's currently configured with parentPath("/var/lib/svn");. Do I really need to remove that and add every repository separately, except the ones I want to hide?
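
    The anonymous-browsing symptom suggests the authz file may not be wired into Apache at all (an assumption; the LDAP URL below is a placeholder). A minimal sketch of the mod_dav_svn block that per-repository [repo:/] sections depend on:

        <Location /svn>
            DAV svn
            SVNParentPath /var/lib/svn
            SVNListParentPath off

            AuthType Basic
            AuthName "Subversion"
            AuthBasicProvider ldap
            AuthLDAPURL "ldap://dc.example.local/DC=example,DC=local?sAMAccountName" NONE
            Require valid-user

            AuthzSVNAccessFile /etc/apache2/dav_svn.authz
        </Location>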

    Read the article

  • Wifi antenna extension with F-connector/RG-6(RG-59) cable?

    - by rjz2000
    In an older house, the wire mesh in the walls surrounding the furnace behaves like a Faraday cage and blocks wifi signals. It is also difficult to lay new cable; however, there is television cable to multiple locations because there was once a roof-mounted television antenna. It would be relatively trivial to install the wifi router at the central distribution point, then have the antenna broadcasting/receiving the signal plugged in at each of the old television outlets. I assume that it would not be too difficult to find an adapter for SMA <- F-type connectors. The cable is actually RG-59 rather than RG-6, but I assume that it still has relatively good RF isolation along its length, which is no more than a couple hundred feet in any direction. Does anyone see a problem with the idea? Will a router get confused if there is /too little/ interference between the two antennas? Is that length of cable (~100ft) too long for the signal a router broadcasts? I have seen that it is also possible to use old ~$30/each FiOS cable modems available on eBay to extend a network over television cable. However, that seems like a less elegant solution, and it might interfere with the UPnP and DLNA services I'd like to have working on a single network. Thanks if anyone has answers or suggestions before I try this project!

    Read the article

  • Blue screen of Death on Install

    - by Toby Allen
    I have a machine with Windows Vista installed. It has an Intel X25 SSD as the system drive. I want to reinstall (I plan to format and overwrite Vista) with XP. When I boot up using the Dell XP CD, it loads the initial drivers and then I get a blue screen. This is quite concerning. The installed OS works OK, but it's giving problems, so I want to remove it. Should I just format the SSD and try again? Will this make any difference? Can I do something to avoid hitting the blue screen? It's possible I had corrupt sectors on one of the other disks; will a new XP install use the system drive or drive 0? Can I force the install to use a specific drive when installing? Error: *** STOP: 0x0000007B (0xF78D2524,0x0000034,0x00000000,0x00000000) I never did find the answer; however, I removed the SSD and tried to install on the other disk - CRASH I disconnected the other disk and tried to install with only the SSD plugged in - CRASH I removed 1 block of RAM - CRASH I used a Windows 7 CD - NO CRASH

    Read the article

  • Adding a transaction ID to ruby-on-rails logs

    - by Blue Warrior NFB
    We have a RoR app (rails version 3.2.15 right now). As it has been getting busier, the log-files it's producing are becoming less and less useful for troubleshooting. When they come in like this, it's not a problem: Started GET "/accounts/28088166/kittens/22894/rendered_png?file_id=5d3eaec77954a489b5ddd75143091767&kitten_store_id=9970569bbacf7b6dbeb4eb9295960d69&size=large" for 172.16.202.30 at 2013-11-12 13:45:00 +0000 Processing by KittenController#rendered_png as HTML Parameters: {"file_id"="5d3eaec77954a489b5ddd75143091767", "kitten_store_id"="9970569bbacf7b6dbeb4eb9295960d69", "size"="large", "kitten_cam_id"="280941", "id"="kjlak357aw479607t"} Rendered text template (0.0ms) Sent data (1.8ms) Completed 200 OK in 1037.4ms (Views: 1.4ms | ActiveRecord: 98.4ms) Short request, quickly assembled, all the relevant log-lines are in one block. However, not all of our code renders in 1037ms. There are a few calls that can exceed several seconds, and during that time several of these quicker ones can come in. When that happens, it's very, very hard to identify which log-lines belong to which GET. Sent data (4.1ms) Completed 200 OK in 767.4ms (Views: 3.2ms | ActiveRecord: 72.2ms) Completed 200 OK in 2338.0ms (Views: 0.2ms | ActiveRecord: 0.0ms) Ooookaaaay... which goes to what? Is it possible to add something like a transaction-ID to these log-lines? The log-spam would be interspersed, but at least grep-magic would give me the unified entries that I need.
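
    One option that fits Rails 3.2 (a sketch; the application class name is a placeholder): tagged logging via config.log_tags, which prefixes every line with the per-request UUID so interleaved requests can be grepped apart:

        # config/environments/production.rb
        MyApp::Application.configure do
          # prefix each log line with the request's UUID (and remote IP, if useful)
          config.log_tags = [:uuid, :remote_ip]
        end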

    Read the article

  • Almost All Logical Volumes Disappeared - Recovery?

    - by Alex
    We had a hard disc crash of one of two hard discs in a software raid with a LVM on top. The server is running Citrix xenserver. On the hard disk which is still intact, the volume group gets detected well, but only one LV is left. (some hashes replaced by "x") # lvdisplay --- Logical volume --- LV Name /dev/VG_XenStorage-x-x-x-x-408b91acdcae/MGT VG Name VG_XenStorage-x-x-x-x-408b91acdcae LV UUID x-x-x-x-x-x-vQmZ6C LV Write Access read/write LV Status available # open 0 LV Size 4.00 MiB Current LE 1 Segments 1 Allocation inherit Read ahead sectors auto - currently set to 256 Block device 253:0 root@rescue ~ # vgdisplay --- Volume group --- VG Name VG_XenStorage-x-x-x-x-408b91acdcae System ID Format lvm2 Metadata Areas 1 Metadata Sequence No 4 VG Access read/write VG Status resizable MAX LV 0 Cur LV 1 Open LV 0 Max PV 0 Cur PV 1 Act PV 1 VG Size 698.62 GiB PE Size 4.00 MiB Total PE 178848 Alloc PE / Size 1 / 4.00 MiB Free PE / Size 178847 / 698.62 GiB VG UUID x-x-x-x-x-x-53w0kL I could understand if a full physical volume is lost - but why only the logical volumes? Is there any explanation for this? Is there any way to recover the logical volumes? EDIT We are here in a rescue system. The problem is that the whole server does not boot (GRUB error 22) What we are trying to do is to access the root filesystem. But everything was in the LVM. We have only this: (parted) print Model: ATA SAMSUNG HD753LJ (scsi) Disk /dev/sdb: 750GB Sector size (logical/physical): 512B/512B Partition Table: msdos Number Start End Size Type File system Flags 1 32.3kB 750GB 750GB primary boot, lvm And this 750GB LVM volume is exactly what we see on top.
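
    One recovery avenue worth checking before anything else (a sketch; the archive file name is only an example): LVM keeps metadata backups and archives, and if the rescue system can reach them, vgcfgrestore can list them and restore an older version that still describes the missing LVs:

        # list the metadata backups/archives known for this VG
        vgcfgrestore --list VG_XenStorage-x-x-x-x-408b91acdcae
        # restore a chosen archive (example name); this overwrites the current VG metadata
        vgcfgrestore -f /etc/lvm/archive/VG_XenStorage-x_00005.vg VG_XenStorage-x-x-x-x-408b91acdcae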

    Read the article

  • trouble executing php scripts with nginx

    - by lovesh
    My nginx config looks like this: server { listen 80; server_name localhost; location / { root /var/www; index index.php index.html; autoindex on; } location /folder1 { root /var/www/folder1; index index.php index.html index.htm; try_files $uri $uri/ index.php?$query_string; } location /folder2 { root /var/www/folder2; index index.php index.html index.htm; try_files $uri $uri/ index.php?$query_string; } location ~ \.php$ { fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; } } The problem with the above setup is that I am not able to execute PHP files. Now, as per my understanding of nginx config rules, when I am in my webroot (/), which is /var/www, the value of $document_root becomes /var/www, so when I request localhost/hi.php the fastcgi_param SCRIPT_FILENAME becomes /var/www/hi.php, and that is the actual path of the PHP script. Similarly, when I request localhost/folder1/hi.php, $document_root becomes /var/www/folder1 because that is specified as the root in folder1's location block, so again the fastcgi_param SCRIPT_FILENAME becomes /var/www/folder1/hi.php. But the above configuration does not work, so there is something wrong with my understanding. Please help?
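
    A likely culprit (an assumption from the config shown): root is prepended to the full request URI, so with root /var/www/folder1 inside location /folder1 a request for /folder1/hi.php is looked up as /var/www/folder1/folder1/hi.php. A sketch that keeps a single root (folder2 would follow the same pattern; alias is the alternative if the directories really live elsewhere):

        server {
            listen 80;
            server_name localhost;
            root /var/www;
            index index.php index.html;

            location /folder1/ {
                try_files $uri $uri/ /folder1/index.php?$query_string;
            }

            location ~ \.php$ {
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
            }
        }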

    Read the article
