Search Results

Search found 6942 results on 278 pages for 'enabled'.

Page 239/278 | < Previous Page | 235 236 237 238 239 240 241 242 243 244 245 246  | Next Page >

  • Windows 7 & Virtual PC and Internet (gateway) problems on host PC

    - by Mufasa
    I upgraded to Windows 7 on a PC that is a few years old. The CPU was one revision away from having Hyper-V on it, so I had to install Microsoft Virtual PC 2007 (v6.0.156.0) to run full XP instances instead of the seamless XP virtualization that is advertised so much. That's fine, though; the 'older' version is useful since I use it to run different versions of the whole XP/IE stack for testing. (I'm a web developer.) ...And for the one 16-bit application we still use at the office for scheduling. *sigh*

    The virtual instances work fine, including networking. My issue is that after a reboot or coming out of sleep mode, my host Windows 7 won't connect to the Internet. It will connect to the local network fine. If I disable the "Virtual Machine Network Services" item (I'll call it "VMNS" from here on) in the LAN Connection properties box, it starts working. But then the Virtual PC instances lose their network connectivity. If I re-enable VMNS in the same session, everything works (Internet on the host and in the virtualized instances). But after the next reboot/sleep cycle this starts over.

    The route table gave me a clue, though. After a cycle with VMNS enabled:

        IPv4 Route Table
        ===========================================================================
        Active Routes:
        Network Destination        Netmask          Gateway       Interface  Metric
                  0.0.0.0          0.0.0.0          On-link       10.0.3.51      20
                  0.0.0.0          0.0.0.0        10.0.10.10      10.0.3.51     276
        ...

    After VMNS is disabled, the first route goes away. I assume that route is how VMNS intercepts the virtualized instances' network connections and forwards them correctly? Just a guess, though.

    More info: I checked my Firewall settings and Services (because I'm sort of a control nazi and turn off a lot) but couldn't find anything that made sense or that changed anything when turned on. So it might be something there I'm missing, but I don't know what.

    My current hacked solution: I figured I'd mess with the routes myself to see if that helped, and it did. If I run route delete 0.0.0.0 on the universal (0.0.0.0) gateway routes, and add back in just the 2nd line with route add 0.0.0.0 mask 0.0.0.0 10.0.10.10--the one that points to my actual gateway (10.0.10.10)--then I don't have to mess with the disable/enable cycle of VMNS, and everything works. Running those two commands is faster than bringing up connection options and disabling and re-enabling VMNS, but I still don't want to have to use that hack script every boot either. (Oh, and I also tried hard-coding TCP/IP settings in my network adapter, including setting high metrics, etc., but that didn't help either.)

    Any suggestions on the right way to fix this?
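
    A minimal sketch of turning that route hack into a logon task until a real fix turns up; the gateway comes from the question, and the script path and task name are placeholders:

        @echo off
        rem Re-apply the working default route after a boot or resume. Run elevated;
        rem adjust the gateway to whatever "route print" shows as your real gateway.
        route delete 0.0.0.0
        route add 0.0.0.0 mask 0.0.0.0 10.0.10.10

        rem One-time registration from an elevated prompt:
        rem schtasks /create /tn "FixVPCRoute" /tr "C:\scripts\fixroute.cmd" /sc onlogon /rl highest

    This only papers over the symptom; it does not explain why VMNS leaves the extra On-link route behind after a sleep cycle.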

    Read the article

  • Software Raid 10 corrupted superblock after dual disk failure, how do I recover it?

    - by Shoshomiga
    I have a software RAID 10 with 6 x 2TB hard drives (RAID 1 for /boot); Ubuntu 10.04 is the OS. I had a RAID controller failure that put 2 drives out of sync and crashed the system. Initially the OS didn't boot up and went into initramfs instead, saying that drives were busy, but I eventually managed to bring the RAID up by stopping and re-assembling it. The OS booted up and said that there were filesystem errors; I chose to ignore them because it would remount the fs in read-only mode if there was a problem.

    Everything seemed to be working fine and the 2 drives started to rebuild. I was sure that it was a SATA controller failure because I had DMA errors in my log files. The OS crashed soon after that with ext errors. Now it's not bringing up the RAID; it says that there is no superblock on /dev/sda2, even when I assemble manually with all the device names. I also did a memtest and changed the motherboard, in addition to everything else.

    EDIT: This is my partition layout:

        Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes
        255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 4096 bytes
        I/O size (minimum/optimal): 4096 bytes / 4096 bytes
        Disk identifier: 0x0009c34a

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1   *        2048      511999      254976   83  Linux
        /dev/sdb2          512000  3904980991  1952234496   83  Linux
        /dev/sdb3      3904980992  3907028991     1024000   82  Linux swap / Solaris

    All 6 disks have the same layout: partition #1 is for the RAID 1 /boot, partition #2 is for the RAID 10 far plan, and partition #3 is swap (but sda did not have swap enabled).

    EDIT2: This is the output of mdadm --detail /dev/md1:

            Layout : near=1, far=2
        Chunk Size : 64k
              UUID : a0feff55:2018f8ff:e368bf24:bd0fce41
            Events : 0.3112126

        Number   Major   Minor   RaidDevice   State
           0       8      34         0        spare rebuilding   /dev/sdc2
           1       0       0         1        removed
           2       8      18         2        active sync        /dev/sdb2
           3       8      50         3        active sync        /dev/sdd2
           4       0       0         4        removed
           5       8      82         5        active sync        /dev/sdf2
           6       8      66         -        spare              /dev/sde2

    EDIT3: I ran ddrescue and it has copied everything from sda except a single 4096-byte sector that I suspect is the RAID superblock.

    EDIT4: Here is some more info, too long to fit here:

        lshw: http://pastebin.com/2eKrh7nF
        mdadm --detail /dev/sd[abcdef]1 (raid1): http://pastebin.com/cgMQWerS
        mdadm --detail /dev/sd[abcdef]2 (raid10): http://pastebin.com/V5dtcGPF
        dumpe2fs of /dev/sda2 (from the ddrescue-cloned drive): http://pastebin.com/sp0GYcJG

    I tried to recreate md1 based on this info with the command

        mdadm --create /dev/md1 -v --assume-clean --level=10 --raid-devices=6 --chunk=64K --layout=f2 /dev/sda2 missing /dev/sdc2 /dev/sdd2 missing /dev/sdf2

    but I can't mount it. I also tried to recreate it based on my initial mdadm --detail /dev/md1, but it still doesn't mount. It also warns me that /dev/sda2 is an ext2fs file system, but I guess that's because of ddrescue.
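
    Before any further --create attempts, a read-mostly sketch of what is usually tried first, assuming the surviving members still carry valid superblocks (adjust device names to your system):

        # Compare event counts, roles and layout recorded in each member's superblock
        # before recreating anything; device order and layout must match the original exactly.
        mdadm --examine /dev/sd[abcdef]2

        # A forced assemble uses the existing superblocks and is far safer than --create:
        mdadm --assemble --force /dev/md1 /dev/sd[abcdef]2

        # If the array comes up, check the filesystem read-only first:
        fsck.ext4 -n /dev/md1

    Every --create with the wrong order or layout overwrites metadata, so it is worth exhausting --examine/--assemble first.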

    Read the article

  • RDP exits immediately after connecting to Windows Server 2008 R2

    - by carpat
    Background: I recently got a Windows cloud VPS server. I don't have much experience with server admin (I'm a programmer), and what little I do have is with Linux servers.

    Ever since getting the server I've been having issues with RDP. I can connect about two or three times, after which point I can't connect until one of the tech guys "fixes" it (see below). When I do connect, I can stay connected for hours with no problem. When the connection problem starts, the first time I try to log in the remote desktop window pops up, starts connecting, and then exits with "Your Remote Desktop session has ended". After that, for about 10-20 minutes, if I try to connect again the connection times out with:

        Remote Desktop can't connect to the computer for one of these reasons:
        1) Remote access on the server is not enabled
        2) The remote computer is turned off
        3) The remote computer is not available on the network

    then it goes back to connecting once and immediately disconnecting.

    All of the updates are installed. The firewall has been correctly configured to let RDP traffic through. The remote setting is "Allow connections from computers running any version of Remote Desktop". I tried creating a second user, and when I can't connect, I can't connect as that user either. I've tried both soft and hard reboots, neither of which helps. I've tried connecting from two different computers (both running Windows 7) on two different networks (work and home), and the behavior is the same. Everything else on the server continues to run fine (IIS-served HTTP pages, Tomcat-served Java pages, svn, ping).

    The "fix" that the tech guys supply is simply logging into the console on their end, after which point I can connect 2 or 3 times again. The event viewer on the server has "authentication failure" (or something similar) events generated when I attempt to log in and can't. I can't get to the actual event at the moment, as I'm currently in the can't-connect stage and waiting for the techs to log in, but when I searched for the event earlier this morning I couldn't find anything useful. Can anyone help?
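
    A few read-only checks, run from an elevated prompt on the server (via the provider's console) while the problem is happening, might show whether the RDP listener itself is wedged; this is only a diagnostic sketch:

        rem Is anything still listening on the RDP port, and in what state?
        netstat -ano | findstr :3389

        rem Are stale or disconnected sessions piling up and holding the listener?
        qwinsta

        rem Is Remote Desktop still enabled at the registry level?
        reg query "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server" /v fDenyTSConnections

    If sessions never get cleaned up between the tech's console logins, session limits or a hung listener would explain why only a console login "fixes" it.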

    Read the article

  • Nginx not working properly on subdomains [SOLVED]

    - by javipas
    I've been trying to set up a SugarCRM instance. I've got a domain that has its main site on one server (www.domain.com) and I've created a subdomain (sugar.domain.com), but I want this subdomain to be hosted on another server. This second server has nginx installed, and there's a working WordPress blog there on a virtualhost, so I would need to set up a second site. To do this I've created the directory structure, and I've created an /etc/nginx/sites-enabled/sugar.domain.com configuration file with the following:

        server {
            listen 80;
            server_name sugar.domain.com *.domain.com;
            access_log /var/www/sugar/log/access.log;
            error_log /var/www/sugar/log/error.log info;

            location / {
                root /var/www/sugar;
                index index.php;
            }

            location ~ .php$ {
                fastcgi_split_path_info ^(.+\.php)(.*)$;
                fastcgi_pass backend;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /var/www/sugar/$fastcgi_script_name;
                include fastcgi_params;
                fastcgi_param QUERY_STRING $query_string;
                fastcgi_param REQUEST_METHOD $request_method;
                fastcgi_param CONTENT_TYPE $content_type;
                fastcgi_param CONTENT_LENGTH $content_length;
                fastcgi_intercept_errors on;
                fastcgi_ignore_client_abort on;
                fastcgi_read_timeout 180;
            }

            ## Disable viewing .htaccess & .htpassword
            location ~ /\.ht {
                deny all;
            }
        }

        upstream backend {
            server 127.0.0.1:9000;
        }

    As far as I know, I need the *.domain.com parameter on the "server_name" directive, but something is breaking here: I get either a 403 Forbidden error, or I get PHP code (I can read the PHP file code in the browser, like normal text) that somehow is not executed. I've tried setting permissions to 755 inside the /var/www/sugar/ directory, and I've also set the owner:group with a chown -R www-data:www-data /var/www/sugar/.

    The thing is, I don't know if my mistake is in the nginx site configuration, in my folder permissions, or somewhere else :( Could it be because the main domain (www.domain.com) is hosted on another server? Do they have to be together necessarily?
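
    One detail worth checking, offered only as a sketch rather than a diagnosis: in the PHP location the dot is unescaped, so the regex matches more than intended, and without a root/try_files inside that block a mismatch can fall through to serving raw source. A tightened variant:

        location ~ \.php$ {
            root /var/www/sugar;
            try_files $uri =404;        # return 404 instead of leaking source if the script is missing
            fastcgi_pass backend;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }

    Whether *.domain.com belongs in server_name at all depends on which server should win for the other subdomains; sugar.domain.com alone is enough for this vhost.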

    Read the article

  • New hard drives failing within weeks

    - by Jason Kealey
    I've experienced 8 hard disk failures in 3 months and have tried many things to solve the issue permanently, but I have failed. I would like to know if you have any advice for me.

    The system was running Win XP on an Asus P5W-DH Deluxe, with a RAID-1 array. I started out with 2 x 500 GB 7200 RPM Western Digital drives. One died. I took it out to RMA it. On the same day, the router was fried. I assumed a power surge had occurred and connected an older UPS to protect the system. Once I got my hands on an identical disk, I installed it and the RAID array was rebuilt. A few days later, the other drive died. I assumed the rebuild caused it to fail and took it out for RMA. Before the replacement arrived, the remaining drive died. I then discovered I could re-enable them using the Intel Matrix Storage Manager. I re-enabled both and the system seemed fine for a week, until both died again.

    I got two new 1.5 TB 7200 RPM Seagate drives and re-installed Windows 7. I also replaced the UPS and the power supply. They both died again. The voltage at the plug is stable between 120 and 122 V as per the UPS. None of the other devices have had any problems (monitors, etc.).

    At this point, I see two options: a) an electrical issue in the house that was, for some reason, not blocked by the UPS; b) something else inside the system causing surges? Motherboard? Onboard RAID controller? Failures happen fairly quickly, between 2 and 14 days after I fix the previous issue. I've just gotten a new computer (Core i7) to replace it. If it is stable, I can determine that b) was the problem. If it fries its hard drive again, I can determine that it is an electrical issue in the house.

    Do you have any other thoughts? Any tools I can run on the drives that failed to get more information about the original SMART event history?
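
    If the failed drives still spin up in a USB dock or on a spare port, smartmontools can pull whatever SMART history they kept; a quick sketch (replace /dev/sdX with the actual device, or use the Windows build with a drive letter):

        smartctl -a /dev/sdX          # identity, attributes, error log, self-test history
        smartctl -l error /dev/sdX    # just the ATA error log
        smartctl -t long /dev/sdX     # start an extended offline self-test

    Reallocated or pending sector counts climbing on several brand-new drives at once would point at power or the controller rather than the disks themselves.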

    Read the article

  • Server 2012 intermittently fails to respond to pings from single host, even with firewall disabled, but responds to non-ICMP requests fine

    - by James Westbury
    This one is kind of weird. I've got the following machines involved:

        DC01    - 10.1.2.42, Server 2012, domain controller & DNS server, physical machine
        nagiosv - 10.1.2.35, CentOS 6.4, Nagios, virtual machine
        CB01    - 10.1.3.81, Ubuntu 12.04 LTS, Couchbase server, virtual machine

    I noticed something was wrong while configuring this new Nagios VM: I started seeing DC01's state flapping. I logged into nagiosv when I saw this happening and attempted to ping DC01, both by FQDN and by IP address. Neither worked. I tried pinging the machine from CB01, which is another VM on the same virtual switch/physical NIC as nagiosv, and that worked fine. Pings were still failing from nagiosv at this time.

    DC01 is also an internal DNS server, so I ran dig google.com from nagiosv, and was able to run a query against DC01 just fine:

        ;; Query time: 1 msec
        ;; SERVER: 10.1.2.42#53(10.1.2.42)
        ;; WHEN: Fri Nov  1 07:53:51 2013
        ;; MSG SIZE  rcvd: 204

    Pings were still failing from nagiosv, though. I can ping from DC01 to nagiosv, and that works, and I can still ping DC01 from other VMs on the same physical NIC, and that works.

    I should mention at this point that I've disabled the firewall on DC01 for testing purposes, and it doesn't make a damned bit of difference. (Even with the firewall enabled, I have a blanket exception for ICMP from the local subnet, so it shouldn't make a difference, but I figured I should test it anyway.)

    I loaded up Wireshark on DC01 and pinged it from nagiosv again. What I see is a bunch of echo requests coming in and not a single reply going back out. Filtered results here, showing all ICMP traffic during a 15-second period.

    A few more bits of info: there are no IP conflicts on the network, the MAC addresses on the incoming pings match the MAC on the VM, and there are no duplicate MACs on the network as far as I can see.

    I have absolutely no idea why DC01 is failing to respond here. Any ideas?
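
    One thing worth ruling out, offered only as a diagnostic sketch: a stale or wrong ARP/neighbor entry on either end can produce exactly this kind of one-host asymmetry even though the echo requests arrive.

        rem On DC01:
        arp -a | findstr 10.1.2.35
        netsh interface ipv4 show neighbors

        # On nagiosv (interface name is an assumption; adjust to yours):
        ip neigh show 10.1.2.42
        ip neigh flush dev eth0

    If the entries look sane, capturing on the DC01 side with an icmp filter while also watching the VM host's vSwitch would show whether the replies are generated and then dropped in between.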

    Read the article

  • Opscenter repair service times out. ERROR: Requested range intersects a local range [...]

    - by jlemire-zs
    My production cluster has had the repair service enabled since April 16th, with the default 9-day time to completion, and repairs used to complete properly. However, since May 22nd it is being disabled automatically by OpsCenter.

    From /var/log/opscenter/opscenterd.log:

        [...]
        2014-06-03 21:13:47-0400 [zs_prod] ERROR: Repair task (<Node 10.1.0.22='6417880425364517165'>, (-4019838962446882275L, -4006140687792135587L), set(['zs_logging', 'OpsCenter'])) timed out after 3600 seconds.
        2014-06-03 22:16:44-0400 [zs_prod] ERROR: Repair task (<Node 10.1.0.22='6417880425364517165'>, (-4006140687792135587L, -4006140687792135586L), set(['zs_logging', 'OpsCenter'])) timed out after 3600 seconds.
        2014-06-03 22:16:44-0400 [zs_prod] ERROR: More than 100 errors during repair service, shutting down repair service
        2014-06-03 22:16:44-0400 [zs_prod] INFO: Stopping repair service
        [...]

    From /var/log/opscenter/repair_service/zs_prod.log:

        [...]
        2014-06-03 22:16:44-0400 [zs_prod] ERROR: Repair task (<Node 10.1.0.22='6417880425364517165'>, (-4006140687792135587L, -4006140687792135586L), set(['zs_logging', 'OpsCenter'])) timed out after 3600 seconds.
        2014-06-03 22:16:44-0400 [zs_prod] ERROR: Task (<Node 10.1.0.22='6417880425364517165'>, (-4006140687792135587L, -4006140687792135586L), set(['zs_logging', 'OpsCenter'])) has failed 1 times.
        2014-06-03 22:16:44-0400 [zs_prod] ERROR: 101 errors have ocurred out of 100 allowed.
        2014-06-03 22:16:44-0400 [zs_prod] ERROR: More than 100 errors during repair service, shutting down repair service
        2014-06-03 22:16:44-0400 [zs_prod] INFO: Stopping repair service

    On the nodes on which the repair fails, from /var/log/cassandra/system.log:

        ERROR [RMI TCP Connection(93502)-10.1.0.22] 2014-06-03 20:12:28,858 StorageService.java (line 2560) Repair session failed:
        java.lang.IllegalArgumentException: Requested range intersects a local range but is not fully contained in one; this would lead to imprecise repair
            at org.apache.cassandra.service.ActiveRepairService.getNeighbors(ActiveRepairService.java:164)
            at org.apache.cassandra.repair.RepairSession.<init>(RepairSession.java:128)
            at org.apache.cassandra.repair.RepairSession.<init>(RepairSession.java:117)
            at org.apache.cassandra.service.ActiveRepairService.submitRepairSession(ActiveRepairService.java:97)
            at org.apache.cassandra.service.StorageService.forceKeyspaceRepair(StorageService.java:2620)
            at org.apache.cassandra.service.StorageService$5.runMayThrow(StorageService.java:2556)
            at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)

    These errors, which occur only when the repair service is running, are the only errors these nodes experience. Outside of the repair task, the Cassandra cluster works perfectly.

    I am running OpsCenter 4.1.2 with a 6-node DSE 4.0.2 cluster installed on Linux virtual machines. The nodes run a vanilla installation of Ubuntu Server 12.04 64-bit, and DSE was installed and secured according to the provided installation documentation. I have been experiencing this problem on my development cluster for a while too (with DSE 4.0.0, 4.0.1 and 4.0.2), but I thought that was because of some configuration error on my part; the problem appeared spontaneously at some point there as well. The Cassandra cluster has been working very smoothly with good write throughput. It is very stable and has enough resources to work with. We did not notice any problems with the applications that depend on it.
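
    While this gets sorted out, one fallback people use (a sketch, not an official DataStax recommendation) is scheduling primary-range repairs per node with nodetool, which avoids the subrange repair calls that are hitting the "intersects a local range" error. Keyspace names are the two from the logs:

        # Run on each node in turn (e.g. staggered cron jobs); -pr repairs only the node's
        # primary ranges so the work is not repeated on every replica.
        nodetool repair -pr zs_logging
        nodetool repair -pr OpsCenter

    This keeps the cluster within gc_grace as long as every node gets its turn inside the repair window.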

    Read the article

  • Trying to grok Linux quotas, where is the data stored?

    - by CarpeNoctem
    All the tutorials and documentation for the Linux quota system have left me confused. For each filesystem with quotas enabled, where is the actual quota information stored? Is it filesystem metadata or is it in a file? Say user foo creates a new file on /home. How does the kernel determine whether user foo is below their hard limit? Does the kernel have to tally up quota information on that filesystem each time, or is it in the superblock or somewhere else? As far as I understand, the kernel consults the aquota.user file for the actual rules, but where is the current quota usage data stored? Can this be viewed with any tools outside repquota and the like? TIA!!

    Update: Thanks for the help. I had already read that mini-HOWTO, and I am pretty clear on the usage of the user-space tools. What I was unclear on is whether the usage data is ALSO in the file that stores the per-user limits, and you answered that with a yes.

    From what I can tell, rc.sysinit runs quotacheck and quotaon on startup. The quotacheck program analyzes the filesystem and updates the aquota.* files. It then makes use of quota.h and the quotactl() syscall to inform the kernel of quota info. From that point forward the kernel hashes that information and increments/decrements quota stats as changes occur. Upon shutdown, the init.d/halt script runs the quotaoff command RIGHT before the filesystems are unmounted. The quotaoff command does not appear to update the aquota.* files with the information the kernel has in memory. I say this because the {a,c,m}times of the aquota.user file are only updated upon a reboot of the system or by manually running the quotacheck command. It appears - as far as I can tell - that the kernel just drops its up-to-date usage data on the floor at shutdown. This information is never used to update the aquota.* files; they are updated during startup by quotacheck (rc.sysinit). That seems silly to me, since the updated info had already been collected by the kernel.

    So... in conclusion, I am still not entirely clear on the methods. ;)
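
    A small sketch of poking at the same data from user space (run as root; /home stands in for any quota-enabled mount):

        repquota -u /home        # per-user usage and limits as the kernel currently sees them
        quota -u foo             # one user's view of the same counters
        quotacheck -vugm /home   # rescan the filesystem and rewrite the aquota.* files in place

    Comparing repquota output against a fresh quotacheck is a quick way to see whether the on-disk aquota.* files have fallen behind the kernel's in-memory counters.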

    Read the article

  • New PC does not work with existing router, but works fine when connected directly to the cable modem

    - by user34786
    I bought a new desktop PC (an eMachines ET1331G-03W from Walmart) with Windows 7 installed, but I cannot access the Internet by connecting to my existing wireless router (a Linksys BEFW11S4) with a wired cable, even though all my other desktops and laptops have no problem connecting to the same router. However, the new desktop works fine and is able to connect to the Internet if I bypass the router and hook it up directly to the cable modem.

    On the new PC, when connecting to the router, typing ipconfig gives me the information below; the IP address looks wrong to me:

        Autoconfiguration IPv4 Address: 169.254.71.140
        Subnet mask: 255.255.0.0
        Default gateway: (empty)
        NetBIOS over Tcpip: Enabled

    Typing ipconfig on all the other desktops and laptops gives values like the following, which look good to me:

        Connection-specific DNS Suffix . :
        IP Address. . . . . . . . . . . . : 192.168.1.140
        Subnet Mask . . . . . . . . . . . : 255.255.255.0
        Default Gateway . . . . . . . . . : 192.168.1.1

    The wireless router is on 192.168.1.1. I do not know why the new desktop got the 169.254.71.140 IP; it should have something like 192.168.1.xxx, and it is configured to get an IP automatically via DHCP. I have tried switching cables, powering off the cable modem and router, and rebooting the new PC many times, with no luck. So I believe this is an issue with either the router or the new PC's configuration. Can someone help me figure out the issue?
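
    An address in 169.254.0.0/16 means Windows gave up waiting for a DHCP offer and assigned itself an APIPA address, so the PC and router never complete the DHCP exchange. A small sketch of forcing another attempt from an elevated command prompt:

        ipconfig /release
        ipconfig /renew
        rem Then confirm that "DHCP Enabled" is Yes and that a lease and gateway were obtained:
        ipconfig /all

    If the renew still fails on every port and cable that works for the other machines, the usual remaining suspects are the NIC driver or speed/duplex negotiation between the new gigabit NIC and the old 10/100 router; both are assumptions worth testing rather than confirmed causes.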

    Read the article

  • PHP `virtual()` with Apache MultiViews not working after upgrade to Ubuntu 12.04

    - by Izzy
    I use PHP's virtual() directive quite a lot on one of my sites, including for central elements. This worked fine for the last ~10 years -- but after upgrading (or rather moving, as it is on a new machine) to Ubuntu 12.04 it somehow got broken.

    Example setup (simplified): To make it easier to understand, I will simplify some things (contents). Say I need an HTML fragment like

        <P>For further instructions, please look <A HREF='foobar'>here</A></P>

    in multiple pages. 10 years ago, I used SSI for that, so it is put into a file in a central place -- so if e.g. the targeted URL changes, I only need to update it in one place. To serve multiple languages, I have Apache's MultiViews enabled -- and at $DOCUMENT_ROOT/central/ there are the files foobar.html (the English variant, and the default) and foobar.html.de (the German variant). Now in the PHP code, I simply placed

        <? virtual("/central/foobar"); ?>

    and let Apache take care of delivering the correct language variant.

    The problem: As said, this worked fine for about 10 years: German visitors got the German variant, all others the English one (depending on their preferred language). But after upgrading to Ubuntu 12.04 it no longer worked: either nothing was delivered from the virtual() command, or (in connection with framesets) it even ended up as binary gibberish. Trying to figure out what happens, I played with a lot of things. I first thought MultiViews was (somehow) no longer available -- but calling http://<server>/central/foobar showed the right variant, depending on the configured language preferences. This also proved there was nothing wrong with file permissions. The error.log gave no clues either (no error message thrown). Finally, just as a "last resort", I changed the PHP command to

        <? virtual("central/foobar.html"); ?>

    -- and that very same file was in fact included. So PHP's virtual() function basically worked -- but the language-dependent stuff obviously no longer worked together with it as it did before. Of course I tried to find some change (most likely in PHP's virtual() command), using Google a lot, and also searching the questions here -- unfortunately to no avail.

    Finally, the question: Putting "design questions" aside (surely today I would design things differently -- but at least currently I lack the time to change that for quite a huge number of pages): what can be done to make it work again? I surely missed something -- but I cannot figure out what...
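
    A workaround sketch, not an explanation of the underlying change: negotiate the language in PHP itself and call virtual() with an explicit filename. The paths follow the /central/foobar layout above, and the two-letter prefix check is a deliberate simplification of real Accept-Language parsing:

        <?php
        // Pick the .de variant when the browser prefers German, otherwise the default file.
        $accept = isset($_SERVER['HTTP_ACCEPT_LANGUAGE']) ? $_SERVER['HTTP_ACCEPT_LANGUAGE'] : '';
        $file = (stripos($accept, 'de') === 0) ? '/central/foobar.html.de' : '/central/foobar.html';
        virtual($file);
        ?>

    This trades MultiViews' full q-value negotiation for something that keeps working regardless of how mod_negotiation and mod_php interact on the new box.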

    Read the article

  • Is this iptables NAT exploitable from the external side?

    - by Karma Fusebox
    Could you please have a short look at this simple iptables/NAT setup? I believe it has a fairly serious security issue (due to being too simple).

    On this network there is one internet-connected machine (running Debian Squeeze/2.6.32-5 with iptables 1.4.8) acting as NAT/gateway for the handful of clients in 192.168/24. The machine has two NICs:

        eth0: internet-facing
        eth1: LAN-facing, 192.168.0.1, the default GW for 192.168/24

    The routing table is the two-NICs default, without manual changes:

        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 eth1
        (externalNet)   0.0.0.0         255.255.252.0   U     0      0        0 eth0
        0.0.0.0         (externalGW)    0.0.0.0         UG    0      0        0 eth0

    The NAT is then enabled only and merely by these actions; there are no more iptables rules (all iptables policies are ACCEPT):

        echo 1 > /proc/sys/net/ipv4/ip_forward
        /sbin/iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

    This does the job, but I miss several things here which I believe could be a security issue: there is no restriction on allowed source interfaces or source networks at all, and there is no firewalling part such as (with policies set to DROP):

        /sbin/iptables -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT
        /sbin/iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT

    And thus, the questions of my sleepless nights are:

    1. Is this NAT service available to anyone in the world who sets this machine as his default gateway? I'd say yes it is, because there is nothing indicating that an incoming external connection (via eth0) should be handled any differently from an incoming internal connection (via eth1), as long as the output interface is eth0 - and routing-wise that holds true for both external and internal clients that want to access the internet. So if I am right, anyone could use this machine as an open proxy by having his packets NATted here. Please tell me if that's right, or why it is not.

    2. As a "hotfix" I have added a "-s 192.168.0.0/24" option to the NAT-starting command. I would like to know whether not using this option was indeed a security issue or just irrelevant thanks to some mechanism I am not aware of.

    3. As the policies are all ACCEPT, there is currently no restriction on forwarding eth1 to eth0 (internal to external). But what are the effective implications of currently NOT having the restriction that only RELATED and ESTABLISHED states are forwarded from eth0 to eth1 (external to internal)? In other words, should I rather change the policies to DROP and apply the two "firewalling" rules I mentioned above, or does the lack of them not affect security?

    Thanks for clarification!
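
    For reference, a minimal hardened variant of the same setup, as a sketch only (interface names and the LAN subnet are taken from the question):

        echo 1 > /proc/sys/net/ipv4/ip_forward

        # Only masquerade traffic that really originates on the LAN:
        /sbin/iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -o eth0 -j MASQUERADE

        # Default-deny forwarding, then allow LAN->Internet and only reply traffic back in:
        /sbin/iptables -P FORWARD DROP
        /sbin/iptables -A FORWARD -i eth1 -s 192.168.0.0/24 -o eth0 -j ACCEPT
        /sbin/iptables -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT

    The -s match on POSTROUTING and the FORWARD policy together address questions 2 and 3: without them, anything routed through the box gets masqueraded and forwarded.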

    Read the article

  • Apache VirtualHost Blockhole (Eats All Requests on All Ports on an IP)

    - by Synetech inc.
    I’m exhausted. I just spent the last two hours chasing a goose that I have been after, on and off, for the past year. Here is the goal, put as succinctly as possible.

    Step 1: HOSTS file:

        127.0.0.5 NastyAdServer.com
        127.0.0.5 xssServer.com
        127.0.0.5 SQLInjector.com
        127.0.0.5 PornAds.com
        127.0.0.5 OtherBadSites.com
        …

    Step 2: Apache httpd.conf:

        <VirtualHost 127.0.0.5:80>
            ServerName adkiller
            DocumentRoot adkiller
            RewriteEngine On
            RewriteRule (\.(gif|jpg|png|jpeg)$) /p.png [L]
            RewriteRule (.*) /ad.htm [L]
        </VirtualHost>

    So basically what happens is that the HOSTS file redirects designated domains to the localhost, but to a specific loopback IP address. Apache listens for any requests on this address and serves either a transparent pixel graphic or an empty HTML file. Thus, any page or graphic on any of the bad sites is replaced with nothing (in other words, an ad/malware/porn/etc. blocker). This works great as is (and has been for me for years now).

    The problem is that these bad things are no longer limited to just HTTP traffic. For example:

        <script src="http://NastyAdServer.com:99">
        <iframe src="https://PornAds.com/ad.html">
        a Trojan using ftp://spammaster.com/[email protected];[email protected];[email protected]
        an app "phoning home" with private info in a crafted ICMP packet by pinging CardStealer.ru:99

    Handling HTTPS is a relatively minor bump: I can create a separate VirtualHost just like the one above, replacing port 80 with 443 and adding SSL directives. This leaves the other ports to be dealt with. I tried using * for the port, but then I get overlap errors. I tried redirecting all requests to the HTTPS server and vice versa, but neither worked; either the SSL requests wouldn't redirect correctly or else the HTTP requests gave the "You're speaking plain HTTP to an SSL-enabled server port…" error. Further, I cannot figure out a way to test whether other ports are being successfully redirected (I could try using a browser, but what about FTP, ICMP, etc.?)

    I realize that I could just use a port blocker (e.g. ProtoWall, PeerBlock, etc.), but there are two issues with that. First, I am blocking domains with this method, not IP addresses, so to use a port blocker I would have to get each and every domain's IP and update them frequently. Second, using this method, I can have Apache keep logs of all the ad/malware/spam/etc. requests for future analysis (my current AdKiller logs are already 466MB right now).

    I appreciate any help in successfully setting up an Apache VirtualHost blackhole. Thanks.

    Read the article

  • VPN Connection Causes Internal LAN Connection Loss with Server

    - by sleepisfortheweak
    I've tried configuring basic PPTP VPN at my small business using a number of different tutorials. As far as I can tell, the actual VPN connection works fine, but upon connecting a client, the server 'disappears' from the internal LAN. The RRAS service must be stopped before the connection is restored.

    My setup: The network is simply a DSL gateway/router to the outside, functioning as NAT/firewall/DHCP. The server is a Win Server 2008 machine at the fixed IP 192.168.1.200. The server has 1 NIC, so I used the 'custom' option when configuring RRAS. The RRAS settings should be default, except that I've disabled ports for connection types I'm not using and reduced PPTP ports to 10. I've also created an address pool and disabled DHCP packet forwarding. The server only functions as a file share and now a VPN server. Local LAN computers all have mapped network shares to the server, authenticated against local users/groups set up on the server.

    The problem: The moment a client connects through VPN, the server 'disappears' from the local network. All mapped drives disconnect and there is no response to a ping 192.168.1.200. Even if the client disconnects, the server does not re-appear at that address until the RRAS service is stopped.

    I've tried: using an address pool inside and outside the local subnet; using DHCP Relay; checking inbound/outbound filters (none enabled).

    The fact that nothing I've tried has had any effect, and that I can connect and successfully obtain an IP, tells me that it's something more fundamental I'm missing. My gut says it's either the second IP address added by the VPN client somehow taking over the interface, or traffic from the local LAN accidentally getting routed to the VPN client instead of being handled at the server once RRAS becomes 'active' when a client connects. Hopefully this is obvious to someone with real IT experience. I've been doing this a while and have almost never been stumped. I'm starting to think it might actually be something tricky, since my setup is pretty basic yet refuses to work. I'll be happy to include more info if this doesn't ring any bells right away for anyone. Thanks

    Read the article

  • Turn off / disable the performance cache

    - by jessie
    OK, I run a streaming website and my CMS is giving me an error when uploading videos: "Failed To Find Flength File". I did some research, and the answer I got from the coder is below. I did do all of that, but the only thing I could not do is turn off what he refers to as the performance cache, mentioned in the last sentence. I am on CentOS.

        Assuming the script is set up properly, you are probably dealing with some kind of write-caching. Some servers perform write-caching which prevents writing out the flength file or the entire CGITemp file during the upload. The flength file or the CGITemp file do not actually hit the disk until the upload is complete, making it worthless for reporting on progress during the upload. This may be fixed using a .htaccess file, assuming your host supports them. Here is a link to an excellent tutorial on using .htaccess files. I strongly recommend giving it a quick read before attempting to install your own .htaccess file.

        1. A mod_security module for Apache. To fix it, just create a file called .htaccess (that's a period followed by "htaccess") and put the following lines in that file. Upload the file into the directory where the Uber-Uploader CGI ".pl" scripts reside, or into some directory above it (like your server's DOCUMENT_ROOT, i.e. the top level of your webspace). htaccess files must be uploaded in ASCII mode, not BINARY. You may need to CHMOD the htaccess file to 644 (RW-R--R--).

            # Turn off mod_security filtering.
            SecFilterEngine Off
            # The below probably isn't needed,
            # but better safe than sorry.
            SecFilterScanPOST Off

        If the above method does not work, try putting the following lines into the file instead:

            SetEnvIfNoCase Content-Type \
            "^multipart/form-data;" "MODSEC_NOPOSTBUFFERING=Do not buffer file uploads"
            mod_gzip_on No

        2. "Performance Cache" enabled on OS X Server. If you're running OS X Server and the progress bar isn't working, it could be because of "performance caching". Apparently, if ANY of your hosted sites are using performance caching, then by default all sites (domains) will attempt to. The fix then is to disable the performance cache on all hosted sites.

    Read the article

  • Web services not reachable through the web browser [closed]

    - by Tony
    I am trying to reference my .asmx web services in .NET, but my server is not exposed to the internet. When I browse to the address below, I get the message quoted underneath it. What's the reason for not being able to see the directory? Am I missing something in my IIS configuration? Am I missing anything in my permissions? Just for reference, I have other folders with web services and I have the same issue with them. When I log in to the server I do it with my Windows user and password (I am using Windows authentication). I should also mention that when I enter the URL I get a popup screen asking for my user ID and password, but it seems unable to validate them, since it keeps asking a couple of times.

        http://appsvr02/Inetpub/wwwroot/DevWebApi/

        Internet Explorer cannot display the webpage

        What you can try:
        It appears you are connected to the Internet, but you might want to try to reconnect to the Internet.
        Retype the address.
        Go back to the previous page.

        Most likely causes:
        • You are not connected to the Internet.
        • The website is encountering problems.
        • There might be a typing error in the address.

        More information
        This problem can be caused by a variety of issues, including:
        • Internet connectivity has been lost.
        • The website is temporarily unavailable.
        • The Domain Name Server (DNS) is not reachable.
        • The Domain Name Server (DNS) does not have a listing for the website's domain.
        • If this is an HTTPS (secure) address, click Tools, click Internet Options, click Advanced, and check to be sure the SSL and TLS protocols are enabled under the security section.

        For offline users:
        You can still view subscribed feeds and some recently viewed webpages.
        To view subscribed feeds:
        1. Click the Favorites Center button, click Feeds, and then click the feed you want to view.
        To view recently visited webpages (might not work on all pages):
        1. Click Tools, and then click Work Offline.
        2. Click the Favorites Center button, click History, and then click the page you want to view.

    Read the article

  • How to setup Proxy Cache with Nginx and Passenger

    - by tiny
    I use Nginx and Passenger for my Rails application. I want to use the proxy cache to cache my pages; however, every request goes straight to my Rails application, and I don't know what is wrong with my configuration. Below is my configuration:

        user www-data;
        worker_processes 1;

        events {
            worker_connections 1024;
        }

        http {
            passenger_root /usr/lib/ruby/gems/1.8/gems/passenger-2.2.15;
            passenger_ruby /usr/bin/ruby1.8;
            passenger_max_pool_size 6;
            passenger_max_instances_per_app 1;
            passenger_pool_idle_time 0;
            rails_spawn_method conservative;

            include mime.types;
            default_type application/octet-stream;
            server_names_hash_bucket_size 512;
            sendfile on;
            #tcp_nopush on;
            keepalive_timeout 65;
            tcp_nodelay on;

            gzip on;
            gzip_http_version 1.0;
            gzip_vary on;
            gzip_comp_level 6;
            gzip_proxied any;
            gzip_types text/plain text/css text/javascript application/javascript application/json application/x-javascript text/xml application/xml application/xml+rss;

            proxy_cache_path /var/www/cache/webapp levels=1:2 keys_zone=webapp:8m max_size=1000m inactive=600m;

            include vhosts/*.conf;
            include /opt/nginx/conf/sites-enabled/*;
            root /var/www;
        }

        server {
            listen 127.0.0.1:3008;
            server_name localhost;
            root /var/www/yoolk_web_app/public;   # <--- be sure to point to 'public'!
            passenger_enabled on;
            rails_env development;
            passenger_use_global_queue on;
        }

        server {
            listen 80;
            server_name webpage.dev;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Host $host;
            error_page 503 http://$host/maintenance.html;

            location ~* (css|js|png|jpe?g|gif|ico)$ {
                root /var/www/web_app/public;
                expires max;
            }

            location / {
                proxy_pass http://127.0.0.1:3008/;
                proxy_cache webapp;
                proxy_cache_valid 200 10m;
            }

            #More Location
        }
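
    One common reason a Rails app never gets cached is that its responses carry Cache-Control/Expires headers and a session cookie, which nginx honours by skipping the cache. A sketch of the same location block with those headers ignored (only safe for pages that really are shareable, and depending on your nginx version responses that set cookies may still be uncacheable):

        location / {
            proxy_pass http://127.0.0.1:3008/;
            proxy_cache webapp;
            proxy_cache_valid 200 10m;
            # Ignore the upstream cache headers that normally disable caching:
            proxy_ignore_headers Expires Cache-Control;
            # Handy while testing, if your build provides the variable:
            add_header X-Cache-Status $upstream_cache_status;
        }

    Watching X-Cache-Status (or the cache directory filling up) confirms whether requests are actually being served from the cache.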

    Read the article

  • Can't install .NET framework 4.0 on Windows XP professional version 2002 SP3 (OS bug?)

    - by that guy
    The .NET Framework 4.0 install fails on Windows XP Professional Version 2002 SP3. I tried running setup using "Run as..." to make sure admin rights were used (the "Protect my computer..." tick was deselected, of course). I tried everything: installing using the online and offline setup, and Windows Update. The install goes a little way, then "rolls back" and says:

        Installation did not succeed
        .NET Framework 4 has not been installed because:
        Fatal error during installation.
        For more information about this problem, see the log file.

    The full log: http://pastebay.net/1433771 Any ideas?

    EDIT1: I have found this in the log: "BlockIf: You must install the 32-bit Windows Imaging Component (WIC) before you run Setup. Please visit the Microsoft Download Center to install WIC, and then rerun Setup...." So I found it and launched "wic_x86_enu.exe", but it said:

        WIC Setup error
        Newer version of update is already on the system.

    I have already installed .NET Framework 2.0 SP2, 3.0 SP2 and 3.5 SP1, but I need 4.0.

    EDIT2: Another attempt, and its log (this time a better copy of the log file): http://pastebin.com/gmGfbM9a (copy to Notepad, save as .htm and open with an internet browser).

    I have tried all the solutions I could find, and nothing helped. I have found something weird: when I formatted the hard drive and installed Windows XP again, .NET Framework 4.0 installed OK, but when I plugged in my 100 Mbit internet cable the operating system kind of "locked itself" and the bug returned: I could no longer install .NET Framework 4.0. There was no reason for that to happen; for example, I have a Windows Server 2003 machine on the local network, but I don't have Active Directory enabled on it or anything like that - the server just has some folders shared and that's all (all the server's "features" are default). I had a second PC with the same problem, also running XP. This seems like an operating system bug to me, and I couldn't find what was causing it. After many days I gave up: backed up everything, formatted the HDD and installed Windows 7 Professional 64-bit. .NET Framework 4.0 installed on it with no problem.

    Read the article

  • OpenSSH (Windows) does not forward X11

    - by Shulhi Sapli
    I'm running Ubuntu 13.04 in a VM and I wanted to do X11 forwarding to my host (Windows 8). So far it works fine using PuTTY and the XMing X server for Windows. But I am curious why it doesn't work when I use the OpenSSH binaries (they come bundled with Git for Windows).

    This is what I've done so far: ssh -X [email protected] (also tried with -Y), then gedit, but I received a "Cannot open display" error. echo $DISPLAY came out empty, so I tried export DISPLAY=localhost:0.0, but it still won't work. The DISPLAY environment variable that I set is exactly the same as when it runs under PuTTY. I also tried changing DISPLAY to 192.168.2.3:0.0 and other display numbers as well, but it still won't work.

    Of course I could just use PuTTY to make it work, but I was wondering why the OpenSSH binaries do not work. I have enabled all the required settings in both /etc/ssh/ssh_config and /etc/ssh/sshd_config. If I run with the -v option, this is what I get:

        F:\SkyDrive\Projects> ssh -X -v [email protected]
        OpenSSH_4.6p1, OpenSSL 0.9.8e 23 Feb 2007
        debug1: Connecting to 192.168.2.3 [192.168.2.3] port 22.
        debug1: Connection established.
        debug1: identity file /c/Users/Shulhi/.ssh/identity type -1
        debug1: identity file /c/Users/Shulhi/.ssh/id_rsa type -1
        debug1: identity file /c/Users/Shulhi/.ssh/id_dsa type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_6.1p1 Debian-4
        debug1: match: OpenSSH_6.1p1 Debian-4 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_4.6
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-cbc hmac-md5 none
        debug1: kex: client->server aes128-cbc hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Host '192.168.2.3' is known and matches the RSA host key.
        debug1: Found key in /c/Users/Shulhi/.ssh/known_hosts:2
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey,password
        debug1: Next authentication method: publickey
        debug1: Trying private key: /c/Users/Shulhi/.ssh/identity
        debug1: Trying private key: /c/Users/Shulhi/.ssh/id_rsa
        debug1: Next authentication method: password
        [email protected]'s password:

    It seems that there is no request for X11 here (I'm not sure whether there should be one at this point, either). Any pointers as to why it doesn't work?
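
    Two low-risk checks, offered as a sketch: the OpenSSH client only asks for X11 forwarding after authentication, and it silently skips the request if it cannot find an xauth program, so it is worth reading the verbose output past the password prompt and making the forwarding options explicit:

        # Look for "Requesting X11 forwarding" (or an xauth-related warning) after you authenticate:
        ssh -vvv -X [email protected]

        # Per-host settings for ~/.ssh/config, instead of passing -X/-Y every time:
        #   Host 192.168.2.3
        #       ForwardX11 yes
        #       ForwardX11Trusted yes

    If the request never appears, the bundled client's missing xauth support (an assumption about the Git-for-Windows build) is the first thing to rule out before touching the server config.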

    Read the article

  • OpenVPN Chaining

    - by noderunner
    I'm trying to set up an OpenVPN "chain", similar to what is described here.

    I have two separate networks, A and B. Each network has an OpenVPN server using a standard "road warrior" or "client/server" approach. A client can connect to either one for access to the hosts/services on that respective network. But servers A and B are also connected to each other, with a "site-to-site" connection between the two. What I'm trying to accomplish is the ability to connect to network A as a client and then make connections to hosts on network B. I'm using tun/routing for all of the VPN connections. The "chain" looks something like this:

        [Client] --- [Server A] --- [Server A] --- [Server B] --- [Server B] --- [Host B]
          (tun0)       (tun0)         (tun1)         (tun0)         (eth0)        (eth0)

    The whole idea is that server A should route traffic destined for network B through the "site-to-site" VPN set up on tun1 when a client from tun0 tries to connect. I did this simply by setting up two connection profiles on server A. One profile is a standard server config running on tun0, defining a virtual client network, IP address pool, pushing routes, etc. The other is a client connection to server B running on tun1. With ip_forwarding enabled, I then simply added a "push route" to the clients advertising a route to network B.

    On server A, this seems to work when I look at tcpdump output. If I connect as a client and then ping a host on network B, I can see the traffic getting passed from tun0 to tun1 on server A:

        tcpdump -nSi tun1 icmp

    The weird thing is that I don't see server B receiving that traffic through the tunnel. It's as if server A is sending it through the site-to-site connection like it should, but server B is completely ignoring it. When I look for the traffic on server B, it simply isn't there. A ping from server A to host B works fine, but a ping from a client connected to server A to host B does not.

    I'm wondering if server B is ignoring the traffic because the source IP does not match the client IP pool that it hands out to its own clients? Does anyone know if I need to do something on server B in order for it to see the traffic? This is a complicated problem to explain, so thanks if you stuck with me this far.
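
    For what it's worth, a sketch of the usual missing piece in this topology: server B has to be told that server A's road-warrior subnet lives behind server A's client certificate. The 10.8.0.0/24 pool below is a placeholder for whatever subnet server A actually hands out:

        # In server B's server config:
        route 10.8.0.0 255.255.255.0          # kernel/VPN route for server A's client pool
        client-config-dir ccd

        # In ccd/<common name of server A's client cert>:
        iroute 10.8.0.0 255.255.255.0         # tells OpenVPN this client owns that subnet

    Without the iroute, an OpenVPN server in client/server mode drops packets whose source address is outside the pools it knows about (logged as a bad source address from the client), which matches the symptom of traffic leaving server A but never appearing on server B.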

    Read the article

  • Architectural advice - web camera remote access

    - by Alan Hollis
    I'm looking for architectural advice. I have a client for whom I've built a website which essentially allows users to view their web cameras remotely. The current flow of data is as follows:

        1. User opens page to view the web camera image.
        2. Javascript polls a URL on the server (appended with a unique timestamp) every 1000ms.
        3. FTP access is enabled for the camera's FTP user.
        4. Web camera opens an FTP connection to the server.
        5. Web camera begins taking photos.
        6. Web camera sends each photo to the FTP server.

    On each image URL request:

        1. Server reads the latest image on the hard drive uploaded via FTP for that camera.
        2. Server deletes any older images from the server.

    This is working okay at the moment for a small number of users/cameras (about 10 users and around the same number of cameras), but we're starting to worry about the scalability of this approach.

    My original plan was that instead of having the files read from the server's disk, the web server would open an FTP connection to the FTP server and read the latest images directly from there, meaning we should have been able to scale horizontally fairly easily. But FTP connection establishment times were too slow (mainly because PHP out of the box is unable to persist FTP connections), so we abandoned this approach and went straight for reading from the hard drive.

    The firmware provider for the cameras states they're able to build an HTTP client which, instead of using FTP to upload the image, could POST the image to a web server. This seems plausible enough to me, but I'm looking for some architectural advice. My current thought is a simple Nginx/PHP/Redis stack: the web camera POSTs the latest image to Nginx/PHP, and the latest image for that camera is stored in Redis. Clients can then pull the latest image from Redis, which should be extremely quick as the images will always be stored in memory. The data flow would then become:

        1. User opens page to view the web camera image.
        2. Javascript polls a URL on the server (appended with a unique timestamp) every 1000ms.
        3. Camera is sent an HTTP request to start posting images to a provided URL.
        4. Web camera begins taking photos.
        5. Web camera sends POST requests to the server as fast as it can.

    On each image URL request:

        1. Server reads the latest image from Redis.
        2. Server tells Redis to delete the older image.

    My questions are:

        1. Are there any greater overheads to transferring images via HTTP instead of FTP?
        2. Is there a simple way to calculate how many cameras we could potentially have streaming at once?
        3. Is there any way to prevent potentially DoS'ing our own servers due to web camera requests?
        4. Is Redis a good solution to this problem?
        5. Should I abandon the PHP/Nginx combination and go for something else?
        6. Is this proposed solution actually any good?
        7. Will adding HTTPS to the mix cause posting the image to become too slow?

    Thanks in advance, Alan
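
    A minimal sketch of the proposed Redis flow using the phpredis extension; the key naming, the camera parameter and the 30-second expiry are assumptions, not an existing API:

        <?php
        $cameraId = isset($_GET['camera']) ? $_GET['camera'] : 'demo';

        $redis = new Redis();
        $redis->connect('127.0.0.1', 6379);

        if ($_SERVER['REQUEST_METHOD'] === 'POST') {
            // Camera POSTs a frame: keep only the newest one and let it expire
            // automatically if the camera stops sending (no explicit delete step needed).
            $redis->setex("camera:{$cameraId}:latest", 30, file_get_contents('php://input'));
        } else {
            // Browser polls for the latest frame.
            $frame = $redis->get("camera:{$cameraId}:latest");
            if ($frame !== false) {
                header('Content-Type: image/jpeg');
                echo $frame;
            } else {
                header('HTTP/1.1 404 Not Found');
            }
        }

    Using SETEX instead of a separate delete keeps memory bounded per camera, and because each camera only ever occupies one key, back-of-envelope capacity is roughly frame size times camera count plus request throughput, which answers part of question 2.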

    Read the article

  • How do I configure Reverse Group Membership Maintenance on an openldap server? (memberOf)

    - by emills
    I am currently working on integrating LDAP authentication into a system, and I would like to restrict access based on LDAP group. The only way to do this is via a search filter, and therefore I believe my only option is to use the "memberOf" attribute in my search filter. It is my understanding that "memberOf" is an operational attribute which can be created by the server for me any time a new "member" attribute is created for any "groupOfNames" entry on the server. My main goal is to be able to add a "member" attribute to an existing "groupOfNames" entry and have a matching "memberOf" attribute be added to the DN I provide.

    What I have managed to achieve so far: I'm still pretty new to LDAP administration, but based on what I found in the OpenLDAP admin's guide, it looks like Reverse Group Membership Maintenance, aka the "memberof overlay", would achieve exactly the effect I am looking for. My server is currently running a package installation (slapd on Ubuntu) of OpenLDAP 2.4.15, which uses "cn=config" style runtime configuration. Most of the examples I have found still reference the older "slapd.conf" method of static configuration, and I have tried my best to adapt the configurations to the new directory-based model. I have added the following entries to enable the memberof overlay module.

    Enable the module with olcModuleLoad (cn=config/cn\=module\{0\}.ldif):

        dn: cn=module{0}
        objectClass: olcModuleList
        cn: module{0}
        olcModulePath: /usr/lib/ldap
        olcModuleLoad: {0}back_hdb
        olcModuleLoad: {1}memberof.la
        structuralObjectClass: olcModuleList
        entryUUID: a410ce98-3fdf-102e-82cf-59ccb6b4d60d
        creatorsName: cn=config
        createTimestamp: 20090927183056Z
        entryCSN: 20091009174548.503911Z#000000#000#000000
        modifiersName: cn=admin,cn=config
        modifyTimestamp: 20091009174548Z

    Enable the overlay for the database and let it use its default settings (groupOfNames, member, memberOf, etc.) (cn=config/olcDatabase={1}hdb/olcOverlay\=\{0\}memberof):

        dn: olcOverlay={0}memberof
        objectClass: olcMemberOf
        objectClass: olcOverlayConfig
        objectClass: olcConfig
        objectClass: top
        olcOverlay: {0}memberof
        structuralObjectClass: olcMemberOf
        entryUUID: 6d599084-490c-102e-80f6-f1a5d50be388
        creatorsName: cn=admin,cn=config
        createTimestamp: 20091009104412Z
        olcMemberOfRefInt: TRUE
        entryCSN: 20091009173500.139380Z#000000#000#000000
        modifiersName: cn=admin,cn=config
        modifyTimestamp: 20091009173500Z

    My current result: With the above configuration, I am able to add a NEW "groupOfNames" with any number of "member" entries and have all the involved DNs updated with a "memberOf" attribute. This is part of the behavior I would expect. While I believe the following should also have been accomplished by the memberof overlay, I still do not know how to do it and would gladly welcome any advice:

        1. Add a "member" attribute to an EXISTING "groupOfNames" and have a corresponding "memberOf" attribute be created automatically.
        2. Remove a "member" attribute and have the corresponding "memberOf" attribute be removed automatically.
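
    For testing the two open points, a sketch of adding a member to an existing group and then asking for the operational attribute explicitly (all DNs are placeholders; memberOf is not returned unless requested by name):

        ldapmodify -x -D "cn=admin,dc=example,dc=com" -W <<'EOF'
        dn: cn=mygroup,ou=groups,dc=example,dc=com
        changetype: modify
        add: member
        member: uid=jdoe,ou=people,dc=example,dc=com
        EOF

        # memberOf is operational, so list it explicitly in the search:
        ldapsearch -x -D "cn=admin,dc=example,dc=com" -W \
            -b "uid=jdoe,ou=people,dc=example,dc=com" memberOf

    One caveat worth noting: the overlay only maintains memberOf for changes made while it is active; groups created before it was enabled usually need their member values re-added (or the data re-imported) before the back-links appear.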

    Read the article

  • TrueCrypt partition will no longer mount

    - by sparkyuiop
    I am hoping for some advice to help me out of my situation, with luck. I have a computer running Windows 7 Ultimate x64 with 3 hard disks installed. On my 2TB hard disk 2 (non-system disk) I have 4 partitions: one for music, another for video, a downloads partition, and a 500GB RAW TrueCrypt encrypted partition/volume that I had set up to mount with 4 photographs used as keyfiles. The 4 photographs are located in my 'Documents' partition, which is one of four partitions on my 1.5TB hard disk 1 (non-system disk). When I set up the disk encryption I did not (I'm 99% sure) create a password; I only used the 4 photograph keyfiles to mount the volume.

    Recently my 1TB hard disk 0 (system/boot) started to fail, so I decided to replace it. I was going to clone the old disk to a new disk, but decided that a fresh installation would be more beneficial. Once I had transferred all the required user data from my old hard disk 0 (the C: disk) I discarded it. I reinstalled TrueCrypt, pointed it at the partition, selected my 4 keyfile photographs and mounted my encrypted volume with no issues. In fact I mounted it several times after re-installing Windows and after reboots.

    Now, all of a sudden, when I try to mount it I get the message "Incorrect keyfile(s) and/or password or not a TrueCrypt volume". I am not sure why this happened, as I do not recall exactly what I did between last mounting the volume successfully and it not mounting. Here are some of the things I may have done that could have caused it to stop working, though I am at a loss as to where to start resolving the problem:

        1. I swapped the drive letters to a preferred order.
        2. I possibly swapped the physical SATA connectors on the mainboard.
        3. I enabled 'Hot Plugging' for the two non-system hard disk SATA ports and the DVD SATA port in the BIOS.

    I have tried changing the encrypted partition's drive letter as suggested in another post, but this does not help. On my old system the encrypted drive was drive "X". I have tried it with all the other free drive letters, but alas nothing changes. I do not recall what drive letter was allocated to the encrypted partition before I changed them all. I have not tried changing the letter back to what it possibly was to start with, as I am happy with the current layout, but I will try this if anyone thinks it would be worthwhile.

    I do hope I have managed to convey my situation in an understandable manner, and I live in hope that someone can help me recover years of personal files. Thank you very much for taking the time to read my post and for any suggestions you may offer. Regards, Phillip Thorne (UK). Anyone???

    Read the article

  • Properly setting up shared folders for users

    - by user221486
    First I would like to say thanks for helping. I have a big problem with setting up the proper permissions for shared folders. I have:

        Windows 7 x64 Ent. - name: backupfb - joined to the domain, with a shared folder on drive E: (E:\backup)
        50 clients/laptops with TSM Tivoli FastBack for Workstations, which save files to the shared folder

    I need to configure the permissions for my shared folders so that only the owner of a folder can access it. The folder structure is:

        E:\backup                  <- shared as the "backup" folder (\\backupfb\backup\)
        E:\backup\BackupAdmin      <- used by the Tivoli Storage Manager FastBack for Workstations client
                                      to download revisions and configurations; nodes require read-only
                                      access to these directories
        E:\backup\RealTimeBackup   <- client accounts create directories here that must be accessible only
                                      by the account that created them; as a result, the directory that
                                      contains data for a node is not created until that node connects
                                      to the server

    So the permissions should look like this (taken from the instructions), with inheritable permissions from the object's parent DISABLED.

    Permission entries for \\backupfb\backup\BackupAdmin:

        Allow  Users           Read, Execute      This folder, subfolders, and files
                               (Traverse Folder / Execute, List Folder / Read Data, Read Attributes,
                                Read Extended Attributes, Delete Subfolders and Files, Delete,
                                Read Permissions)
        Allow  Administrators  Full Control       This folder, subfolders, and files

    Both folders have the option "Apply these permissions to objects and/or containers within this container only" enabled. Here everything works fine.

    Permission entries for \\backupfb\backup\RealTimeBackup:

        Allow  Administrators  Full Control       This folder, subfolders, and files
        Allow  CREATOR OWNER   Full Control       This folder, subfolders, and files
        Allow  Users (domain)  Special            This folder only
                               (Traverse Folder / Execute, List Folder / Read Data, Read Attributes,
                                Read Extended Attributes, Create Files / Write Data,
                                Create Folders / Append Data, Delete Subfolders and Files,
                                Read Permissions)
        Allow  OWNER RIGHTS    Full Control       This folder, subfolders, and files

    Here I have a huge problem with CREATOR OWNER: I am able to set Full Control, but I can only apply it to "Subfolders and files only". When I change the setting to "This folder, subfolders and files" and save, it changes back to "Subfolders and files only".

    So I tried to use icacls to set up the permissions:

        @echo off
        takeown /F E:\backup\ /R /A
        for /D %%i IN (E:\backup\RealTimeBackup*) DO icacls E:\backup\RealTimeBackup\%%~nxi /grant:r cloud\%%~nxi:F /T /C
        pause

    But after that, users are able to create just one folder in \\backupfb\backup\RealTimeBackup\userfolder; the problem is with subfolders. In the log I have:

        FBW5022E Unable to access the specified file
        Explanation: The file specified is unable to be accessed. Possibly spelled incorrectly, or bad path, or permissions.
        User response: Ensure the user has the proper permissions for the file and directories involved and that the file and directory exist.

    Any ideas? Please help ;-) Thanks
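
    A sketch of the icacls loop with explicit inheritance flags, since /grant without (OI)(CI) only applies to the folder itself and not to anything created inside it later. The "cloud" domain and the path come from the question, and it assumes each folder name matches the account name:

        @echo off
        rem (OI)(CI)F = full control, inherited by files and subfolders created later on.
        for /D %%i in (E:\backup\RealTimeBackup\*) do icacls "E:\backup\RealTimeBackup\%%~nxi" /grant "cloud\%%~nxi:(OI)(CI)F" /T /C

    Note the \* in the wildcard as well; the original loop's RealTimeBackup* pattern would not enumerate the per-user subfolders.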

    Read the article

  • Asterisk server firewall script allows 2-way audio on incoming calls, but not on outgoing calls?

    - by cappie
    I'm running an Asterisk PBX on a virtual machine directly connected to the Internet, and I really want to keep script kiddies, l33t h4x0rz and actual hackers out of my server. The basic way I protect my calling bill now is by using 32-character passwords, but I would much rather also protect the server at the firewall level. The firewall script I'm currently using is below; however, without the established-connections rule (mentioned rule #1), I cannot receive incoming audio from the other party during outgoing calls:

        #!/bin/bash
        # first, clean up!
        iptables -F
        iptables -X
        iptables -t nat -F
        iptables -t nat -X
        iptables -t mangle -F
        iptables -t mangle -X
        iptables -P INPUT ACCEPT
        iptables -P FORWARD DROP # we're not a router
        iptables -P OUTPUT ACCEPT

        # don't allow invalid connections
        iptables -A INPUT -m state --state INVALID -j DROP

        # always allow connections that are already set up (MENTIONED RULE #1)
        iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT

        # always accept ICMP
        iptables -A INPUT -p icmp -j ACCEPT

        # always accept traffic on these ports
        #iptables -A INPUT -p tcp --dport 80 -j ACCEPT
        iptables -A INPUT -p tcp --dport 22 -j ACCEPT

        # always allow DNS traffic
        iptables -A INPUT -p udp --sport 53 -j ACCEPT
        iptables -A OUTPUT -p udp --dport 53 -j ACCEPT

        # allow return traffic to the PBX
        iptables -A INPUT -p udp -m udp --dport 50000:65536 -j ACCEPT
        iptables -A INPUT -p udp -m udp --dport 10000:20000 -j ACCEPT
        iptables -A INPUT -p udp --destination-port 5060:5061 -j ACCEPT
        iptables -A INPUT -p tcp --destination-port 5060:5061 -j ACCEPT
        iptables -A INPUT -m multiport -p udp --dports 10000:20000
        iptables -A INPUT -m multiport -p tcp --dports 10000:20000

        # IP addresses of the office
        iptables -A INPUT -s 95.XXX.XXX.XXX/32 -j ACCEPT

        # accept everything from the trunk IPs
        iptables -A INPUT -s 195.XXX.XXX.XXX/32 -j ACCEPT
        iptables -A INPUT -s 195.XXX.XXX.XXX/32 -j ACCEPT

        # accept everything on localhost
        iptables -A INPUT -i lo -j ACCEPT

        # accept all outgoing traffic
        iptables -A OUTPUT -j ACCEPT

        # DROP everything else
        #iptables -A INPUT -j DROP

    I would like to know which firewall rule I'm missing for all of this to work. There is so little documentation on which ports (incoming and outgoing, return ports included) Asterisk actually needs. Are there any firewall/iptables specialists here who see major problems with this script? It's frustrating not being able to find a simple firewall solution that lets me run a PBX somewhere on the Internet which is firewalled in such a way that it ONLY allows connections from and to the office, the DNS servers and the trunk(s), and only exposes SSH (port 22) and ICMP to the outside world. Hopefully, using this question, we can solve this problem once and for all.
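
    For comparison, a sketch of a tighter variant (trunk and office addresses masked exactly as in the question): keep the RELATED,ESTABLISHED rule, answer SIP only to the trunk and the office, and open the RTP range that rtp.conf actually uses (10000-20000 by default) instead of 50000-65536:

        iptables -A INPUT -i lo -j ACCEPT
        iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT

        # SIP signalling only from the trunk(s) and the office:
        iptables -A INPUT -p udp --dport 5060:5061 -s 195.XXX.XXX.XXX/32 -j ACCEPT
        iptables -A INPUT -p udp --dport 5060:5061 -s 95.XXX.XXX.XXX/32 -j ACCEPT

        # RTP media for the range configured in rtp.conf:
        iptables -A INPUT -p udp --dport 10000:20000 -j ACCEPT

        # SSH and ICMP stay open to the world; everything else is dropped by policy:
        iptables -A INPUT -p tcp --dport 22 -j ACCEPT
        iptables -A INPUT -p icmp -j ACCEPT
        iptables -P INPUT DROP

    With the stateful rule in place there is no need for the separate "return traffic" port ranges or the --sport 53 rule, which is what currently leaves large UDP ranges open to everyone.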

    Read the article
