Search Results

Search found 1519 results on 61 pages for 'chain'.

Page 46/61

  • smbclient timing out

    - by Sam Lee
    I am trying to set up a Samba share on a CentOS machine. I want to connect to this server using smbclient on OS X. Here is what happens:

      > smbclient -L X.X.X.X
      timeout connecting to X.X.X.X:445
      timeout connecting to X.X.X.X:139
      Error connecting to X.X.X.X (Operation already in progress)
      Connection to X.X.X.X failed

    What could be going wrong? Here is my iptables dump on the CentOS machine (the server):

      > iptables -L -n
      Chain INPUT (policy ACCEPT)
      target   prot opt source      destination
      ACCEPT   all  --  0.0.0.0/0   0.0.0.0/0
      REJECT   all  --  0.0.0.0/0   127.0.0.0/8  reject-with icmp-port-unreachable
      ACCEPT   all  --  0.0.0.0/0   0.0.0.0/0    state RELATED,ESTABLISHED
      ACCEPT   tcp  --  0.0.0.0/0   0.0.0.0/0    tcp dpt:445
      ACCEPT   tcp  --  0.0.0.0/0   0.0.0.0/0    tcp dpt:3000
      ACCEPT   tcp  --  0.0.0.0/0   0.0.0.0/0    tcp dpt:80
      ACCEPT   tcp  --  0.0.0.0/0   0.0.0.0/0    tcp dpt:443
      ACCEPT   tcp  --  0.0.0.0/0   0.0.0.0/0    state NEW tcp dpt:22
      ACCEPT   icmp --  0.0.0.0/0   0.0.0.0/0    icmp type 8
      REJECT   all  --  0.0.0.0/0   0.0.0.0/0    reject-with icmp-port-unreachable
      ACCEPT   tcp  --  0.0.0.0/0   0.0.0.0/0    tcp dpt:3000

    And finally, my smb.conf:

      [global]
      workgroup = workgroup
      security = SHARE
      load printers = No
      default service = global
      path = /home
      available = No
      encrypt passwords = yes

      [share]
      writeable = yes
      admin users = myusername
      path = /home/myhome/
      force user = root
      valid users = myusername
      public = yes
      available = yes
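
    Two things stand out in the dump above: port 139 is never accepted (only 445 is), and the final ACCEPT for port 3000 sits after the catch-all REJECT, so it can never match. A minimal sketch of rules that would open the remaining SMB/NetBIOS ports, assuming they must land above that REJECT (the insert position 7 is an assumption about this particular ruleset; verify with iptables -L -n --line-numbers first):

      iptables -I INPUT 7 -p tcp --dport 139 -j ACCEPT       # NetBIOS session service
      iptables -I INPUT 7 -p udp --dport 137:138 -j ACCEPT   # NetBIOS name/datagram services
      service iptables save                                  # persist across reboots on CentOS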

  • How to route to a secondary interface on the same physical ethernet?

    - by sjose3612611
    INTERNET <-> (wan) BRIDGED_DEVICE (lan) <-> ETH_ROUTER <-> LAN

    Problem: I need to access a web server on BRIDGED_DEVICE's LAN from the INTERNET via ETH_ROUTER (BRIDGED_DEVICE's web server cannot be reached from the INTERNET since it has no public management IP). I cannot configure the bridged device; it has a static IP on its LAN to which its web server binds.

    Attempt: Create a secondary/alias WAN interface on ETH_ROUTER (e.g. primary eth0.1 for internet access, secondary eth0.2 for reaching the web server on BRIDGED_DEVICE; no VLANs). eth0.1 has a public IP; eth0.2 has a static private IP in BRIDGED_DEVICE's subnet (e.g. 10.0.X.Y).

    Iptables on ETH_ROUTER: I added a port forward (DNAT) from eth0.1 to eth0.2:

      iptables -t nat -I PREROUTING -i eth0.1 -p tcp --dport 80 -j DNAT --to-destination 10.0.X.Y
      iptables -t nat -I POSTROUTING -o eth0.2 -s 10.0.X.0/24 -j MASQUERADE

    There is a stateful firewall with an overall drop policy on the FORWARD chain, hence:

      iptables -I FORWARD -i eth0.1 -d 10.0.X.Y -p tcp --dport 80 -j ACCEPT

    I can ping from ETH_ROUTER to BRIDGED_DEVICE but am unable to reach the web server from the Internet. I see the packet count increasing for the DNAT rule, but I'm not sure where the packets disappear inside ETH_ROUTER after that. ETH_ROUTER is the only device that can be configured to achieve this. If you are familiar with this scenario, please suggest what I may be missing or doing wrong, or suggest techniques to debug.
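
    A hedged debugging sketch for this kind of DNAT setup (interface names as assumed above; the TRACE target needs the kernel's packet-logging backend loaded):

      # do the rewritten packets actually leave on eth0.2, and do replies come back?
      tcpdump -ni eth0.2 tcp port 80
      # trace one connection attempt through the tables; matches show up in the kernel log
      iptables -t raw -A PREROUTING -i eth0.1 -p tcp --dport 80 -j TRACE
      # reverse-path filtering silently drops packets whose return route is asymmetric
      sysctl net.ipv4.conf.all.rp_filter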

  • Make isolinux 4.0.3 chainload itself

    - by chainloader
    I have a bootable ISO which boots into isolinux 4.0.3, and I want to make it chainload itself (my actual goal is to chainload isolinux.bin v4.0.1-debian, which should start up the Ubuntu 10.10 Live CD, but for now I just want to make it chainload itself). I can't get isolinux to chainload any isolinux.bin, no matter what version: it either freezes or shows a "checksum error" message. I'm using VMware to test the ISO. Things I have tried (chainload self):

      .com32 /boot/isolinux/chain.c32 /boot/isolinux/isolinux-debug.bin

    This shows:

      Loading the boot file...
      Booting...
      ISOLINUX 4.03 2010-10-22 Copyright (C) 1994-2010 H. Peter Anvin et al
      isolinux: Starting up, DL = 9F
      isolinux: Loaded spec packet OK, drive = 9F
      isolinux: Main image LBA = 53F00100

    ...and the machine freezes. Then I've tried this (chainload from GRUB4DOS 0.4.5b):

      chainloader /boot/isolinux/isolinux-debug.bin

    Result:

      Error 13: Invalid or unsupported executable format

    Next try (GRUB4DOS 0.4.5b again):

      chainloader --force /boot/isolinux/isolinux-debug.bin
      boot

    Result:

      ISOLINUX 4.03 2010-10-22 Copyright (C) 1994-2010 H. Peter Anvin et al
      isolinux: Starting up, DL = 9F
      isolinux: Loaded spec packet OK, drive = 9F
      isolinux: No boot info table, assuming single session disk...
      isolinux: Spec packet missing LBA information, trying to wing it...
      isolinux: Main image LBA = 00000686
      isolinux: Image checksum error, sorry...
      Boot failed: press a key to retry...

    I have tried other things, but all of them failed miserably. Any suggestions?
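
    For reference, a config-file equivalent of the boot-prompt attempt above might look like this (a sketch only: the paths are taken from the question, the file= option is my assumption from the chain.c32 documentation, and whether chain.c32 can boot an isolinux.bin lacking a boot info table is exactly what is in question here):

      LABEL chainself
        COM32 /boot/isolinux/chain.c32
        APPEND file=/boot/isolinux/isolinux-debug.bin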

  • Change OpenVZ route to pass through ip failover

    - by Kevin Campion
    I have a dedicated server with its own IP, plus another (failover) IP that points to the first. I want to change the gateway of a Proxmox virtual machine (OpenVZ) running on this dedicated server so that it goes out through the failover IP rather than the host server's main IP. Once connected to the virtual machine, when I run a traceroute:

      VE# traceroute www.google.fr
      traceroute to www.google.fr (209.85.229.104), 30 hops max, 60 byte packets
      1 MY_SERVER_NAME.ovh.net (xxx.xxx.xxx.xxx FIRST_IP_MAIN_SERVER) 0.021 ms 0.010 ms 0.009 ms

    The first hop shows the host server's main IP; I would like the traceroute to show the second (failover) IP instead.

      VE# route
      Kernel IP routing table
      Destination  Gateway    Genmask          Flags Metric Ref Use Iface
      192.0.2.1    *          255.255.255.255  UH    0      0   0   venet0
      default      192.0.2.1  0.0.0.0          UG    0      0   0   venet0

    With iptables:

      HOST# iptables -t nat -L
      Chain POSTROUTING (policy ACCEPT)
      target      prot opt source       destination
      MASQUERADE  all  --  anywhere     anywhere
      MASQUERADE  all  --  anywhere     anywhere
      SNAT        tcp  --  anywhere     10.10.101.2  tcp dpt:www state NEW,RELATED,ESTABLISHED,UNTRACKED to:SECOND_IP_FAILOVER
      SNAT        all  --  10.10.101.2  anywhere     to:SECOND_IP_FAILOVER

    10.10.101.2 is the virtual machine's IP (interface venet0). Any ideas?
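
    One detail visible in that listing: the two catch-all MASQUERADE rules precede the SNAT rules in POSTROUTING, and iptables stops at the first match, so the VE's traffic is rewritten to the host IP before the SNAT rule is ever consulted. A hedged sketch of reordering (rule numbers refer to the listing above; confirm them with --line-numbers before deleting anything):

      iptables -t nat -L POSTROUTING -n --line-numbers   # confirm positions
      iptables -t nat -D POSTROUTING 1                   # drop the first catch-all MASQUERADE
      iptables -t nat -I POSTROUTING 1 -s 10.10.101.2 -j SNAT --to-source SECOND_IP_FAILOVER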

  • my X server doesn't load a module called "glx", but my video drivers seem to be installed

    - by rumtscho
    I just got a new, very wide monitor (2560x1440) and there is no sense in maximizing my applications. So I installed CompizConfig Settings Manager and enabled Grid. Nothing happened: the shortcuts don't move application windows. I went to System - Preferences - Appearance, the Visual Effects tab. It was at "disabled". When I try to set it to "normal" or "extra", a message box appears telling me that it's searching for video drivers, then disappears, and I get an error message "Desktop effects could not be enabled". I opened Xorg.0.log and found errors there:

      (EE) Failed to load /usr/lib/xorg/modules/extensions//libglx.so
      (II) UnloadModule: "glx"
      (EE) Failed to load module "glx" (loader failed, 7)
      (II) LoadModule: "extmod"

    Going to System - Administration - Hardware Drivers, it said that there are no available and/or installed hardware drivers. But apt-get said that it cannot install nvidia-glx-185, as it is already installed. Googling my error message suggested that I install and run something called EnvyNG. This let me install the NVIDIA drivers again, and now I can see in the Hardware Drivers window that they are installed and active. But the error message in Xorg.0.log remains, and I still cannot enable the Compiz effects or use Grid. Now, I don't have enough Linux experience to understand if this is a single cause-effect chain of problems, or three independent ones, but I'd appreciate help with any of them. I am running Ubuntu 9.10; the video card is a GeForce 7600GS.
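
    A hedged sketch of the usual checks when libglx.so fails to load even though the NVIDIA packages look installed (package and file names per Ubuntu 9.10; adjust as needed):

      # which package owns the glx extension module X is actually loading?
      dpkg -S /usr/lib/xorg/modules/extensions/libglx.so
      # reinstall the NVIDIA userspace, which replaces Mesa's libglx.so with NVIDIA's copy
      sudo apt-get install --reinstall nvidia-glx-185
      # confirm xorg.conf selects the "nvidia" driver rather than "nv" or "nouveau"
      grep -n Driver /etc/X11/xorg.conf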

  • How to set up GRUB2 chainloader to other Grub (Fedora, Debian) on GPT

    - by basic6
    I'm trying to set up a dedicated GRUB2 which (chain-)loads another GRUB on a disk with a GPT partition table. Relevant partitions:

      /dev/sda1 BIOS_BOOT
      /dev/sda2 BOOT (ext2)
      /dev/sda3 FEDORA (ext4)
      /dev/sda6 DEBIAN (ext4)

    I installed Fedora first, using /dev/sda2 as the boot partition. Then I installed Debian. The Debian installer recognized the Fedora installation, added it as a boot entry, and installed its GRUB into the MBR. While this works for the moment, it's pretty messy, because every Debian update may change the boot config and remove the Fedora entry (tried it), and the other way around. That's why I want both systems to have their own boot loader, plus one main boot loader (which could reside on /dev/sda2) that loads one of them. This is what I've tried:

    - Moved everything from /dev/sda2 to /dev/sda3/boot
    - Removed the /boot mount point in Fedora (so /dev/sda2 isn't used anymore)
    - From a live Linux, installed GRUB2 to the MBR (grub-install --boot-directory=sda2 /dev/sda)
    - Wrote a menu.lst:

        title Fedora
        root (hd0,2)
        chainloader +1

      (and again for Debian)
    - Converted that to a grub.cfg script (grub-menu2cfg or something like that)
    - When booting, I actually got a GRUB2 menu with "Fedora" (and "Debian")
    - When selecting either one: error: invalid signature
    - Issued "grub-install /dev/sda6" (and ...sda3) from all kinds of live Linux systems, all of which failed with another error message (in the case of the Debian installer, without any explanation at all)
    - Added --force to the chainloader line; now it says "loading", then reboots
    - Found dozens of howtos, none of which seem to work for me

    Since I get the self-made GRUB2 menu on bootup, I've at least successfully installed the first stage of GRUB, right? When trying to chainload, some signature is checked and seems to be wrong - how do I fix it? The boot menus (Fedora with its different kernel versions, and Debian with Debian and Fedora as well) are now on the system partitions (/dev/sda3, /dev/sda6) - is there anything else to do on these partitions so they can be chainloaded? Any help is greatly appreciated.
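
    For GRUB2-to-GRUB2 on GPT, an alternative to sector chainloading is worth knowing: "chainloader +1" expects a boot sector inside the partition, which grub-install does not normally write on GPT (hence the signature error). Loading the other installation's config directly sidesteps that. A hedged grub.cfg sketch, assuming the layout above and the distros' usual config paths (Fedora keeps its config in /boot/grub2, Debian in /boot/grub - both are assumptions to verify):

      menuentry "Fedora" {
        insmod part_gpt
        insmod ext2
        set root=(hd0,gpt3)
        configfile /boot/grub2/grub.cfg
      }

      menuentry "Debian" {
        insmod part_gpt
        insmod ext2
        set root=(hd0,gpt6)
        configfile /boot/grub/grub.cfg
      }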

  • Any non-custom way to manage iptables with fail2ban and libvirt+kvm?

    - by Peter Hansen
    I have an Ubuntu 9.04 server running libvirt/kvm and fail2ban (for SSH attacks). Both libvirt and fail2ban integrate with iptables in different ways. Libvirt uses (I think) some XML config and during startup (?) configures forwarding to the VM subnet. Fail2ban installs a custom chain (probably at init) and periodically modifies it to ban/unban probable attackers. I also need to install my own rules to forward various ports to servers running in VMs and on other machines, and set up rudimentary security (e.g. drop all INPUT traffic except the few ports I want open), and of course I'd like the ability to add/remove rules safely without restarting. It seems to me iptables is a powerful tool that's sorely lacking some sort of standardized way of juggling all this stuff. Every project, and every sysadmin, seems to do it differently! (And I think there's lots of "cargo cult" admin going on here, with people cloning crude approaches like "use iptables-save like so".) Short of figuring out the gory details of exactly how both of these (and potentially other) tools manipulate the netfilter tables, and developing my own scripts or just manually executing iptables commands, is there any way to safely work with iptables while not breaking the functionality of these other tools? Any nascent standards or projects defined to bring sanity to this area? Even a helpful web page I missed that might cover at least these two packages together?
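
    There is no universal standard here, but one pattern that at least keeps hand-written rules from colliding with tool-managed ones is confining them to a dedicated chain that the other tools never touch. A hedged sketch, not a standard (chain name and port are placeholders):

      # create the chain if missing, otherwise flush only our own rules
      iptables -N local-admin 2>/dev/null || iptables -F local-admin
      # jump to it early in INPUT; iptables -C is a later addition, so on an old
      # iptables (Ubuntu 9.04 era) grep iptables-save output instead of using -C
      iptables -C INPUT -j local-admin 2>/dev/null || iptables -I INPUT 1 -j local-admin
      iptables -A local-admin -p tcp --dport 2222 -j ACCEPT

    fail2ban and libvirt each keep rewriting their own chains, so reloading this one chain leaves theirs alone.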

  • IP-dependent local port-forwarding on Linux

    - by chronos
    I have configured my server's sshd to listen on a non-standard port, 42. However, at work I am behind a firewall/proxy which only allows outgoing connections to ports 21, 22, 80 and 443. Consequently, I cannot ssh to my server from work, which is bad. I do not want to return sshd to port 22. The idea is this: on my server, locally forward port 22 to port 42 if the source IP matches the external IP of my work's network. For clarity, let us assume that my server's IP is 169.1.1.1 (on eth1), and my work's external IP is 169.250.250.250. For all IPs different from 169.250.250.250, my server should respond with the expected 'connection refused', as it does for any non-listening port. I'm very new to iptables. I have briefly looked through the long iptables manual and these related/relevant questions:

      http://serverfault.com/questions/57872/iptables-question-forwarding-port-x-to-an-ssh-port-of-different-machine-on-the-n
      http://serverfault.com/questions/140622/how-can-i-port-forward-with-iptables

    However, those questions deal with more complicated several-host scenarios, and it is not clear to me which tables and chains I should use for local port-forwarding, and whether I need 2 rules (for "question" and "answer" packets) or only 1 rule for "question" packets. So far I have only enabled forwarding via sysctl. I will start testing solutions tomorrow, and will appreciate pointers or case-specific examples for implementing my simple scenario. Is the draft solution below correct?

      iptables -A INPUT [-m state] [-i eth1] --source 169.250.250.250 -p tcp --destination 169.1.1.1:42 --dport 22 --state NEW,ESTABLISHED,RELATED -j ACCEPT

    Should I use the mangle table instead of filter? And/or the FORWARD chain instead of INPUT?
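
    For the record, a hedged sketch of the usual approach for this exact scenario: a REDIRECT rule in the nat table's PREROUTING chain. The FORWARD chain is for routed traffic, not locally delivered traffic, and connection tracking handles the "answer" packets, so a single rule for the "question" direction suffices:

      iptables -t nat -A PREROUTING -i eth1 -s 169.250.250.250 -p tcp --dport 22 -j REDIRECT --to-ports 42

    Connections from 169.250.250.250 to port 22 are then handed to sshd listening on 42, while everyone else still gets the normal refusal on 22.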

  • Unable to specify parameters to cvlc in a script

    - by VxJasonxV
    I'm creating a script that issues a few curl commands in order to access a time-protected mms stream link, then sets up a relay using cvlc (vlc's command line interface) for my own use on an unencumbered player. The curl aspect of this is working, as I can run a browser and curl side by side and get the same access URL. (It's time locked, meaning the stream will work forever, but you have to connect quickly or the URL will time out.) The very end of the script prints the command I will run, which is then followed by "exec $CMD". When I echo $CMD I get:

      cvlc --sout '#standard{access=http,mux=asf,dst=0.0.0.0:58194}' mms://[...]

    Manually copy/pasting this command in, verbatim, works perfectly fine, but as part of a script, the cvlc execution output says:

      [0x9743d0] main interface error: no suitable interface module
      [0x962120] main libvlc error: interface "globalhotkeys,none" initialization failed
      [0x9743d0] dummy interface: using the dummy interface module...
      [0xb16e30] stream_out_standard stream out error: no mux specified or found by extension
      [0xb16ad0] main stream output error: stream chain failed for `standard{mux="",access="",dst="'#standard{access=http,mux=asf,dst=0.0.0.0:58194}'"}'
      [0xb11cd0] main input error: cannot start stream output instance, aborting
      [0xb11f70] signals interface error: Caught Interrupt signal, exiting...

    Why is --sout behaving one way in a script (non-interactive shell?) vs. another way in the foreground (interactive shell)?
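
    The error line is telling: the literal single quotes end up inside the --sout value (dst="'#standard{...}'"), which is what happens when a command is stored in a plain string variable. Word splitting on $CMD does not re-parse quote characters, in any shell, interactive or not. A hedged bash sketch using an array instead, so each argument survives as one word with no stray quotes ($MMS_URL is a placeholder for the time-locked URL obtained from the curl step):

      #!/bin/bash
      # build the command as an array so the --sout value stays a single argument
      CMD=(cvlc --sout '#standard{access=http,mux=asf,dst=0.0.0.0:58194}' "$MMS_URL")
      echo "running: ${CMD[*]}"
      exec "${CMD[@]}"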

  • AWS VPC public web application connecting to database via VPN

    - by Chris
    What I am trying to do is set up a web application that is public facing but makes calls to a database that is on an internal network. I have been trying to set up an AWS VPC with a public subnet, a private subnet, and hardware VPN access, but I can't seem to get it to work. Can someone help me understand what the process flow here should be? My understanding is that I need a public subnet to handle the website requests and then a private subnet to connect to the VPN, but what I do not understand is how to send requests down the chain and get the response. Basically what I am asking is: how can I query the database via VPN from that public website? I've tried route forwarding, but I can't successfully complete the process. Does anyone have any advice on something I can read on this subject, or an FAQ on setting something like this up? Is it even possible? I'm out of my league here; this is not my area of expertise, but I'm being asked to solve this problem. Any help would be appreciated. Thanks

  • Data from a table in 1 DB needed for filter in different DB...

    - by Refracted Paladin
    I have a WinForms data entry application that uses 4 separate databases. This is an occasionally connected app that uses merge replication (SQL 2005) to stay in sync. This is working just fine. The next hurdle I am trying to tackle is adding filters to my publications. Right now we are replicating 70 MB, compressed, to each of our 150 subscribers when, truthfully, they only need a tiny fraction of that. Using filters I am able to accomplish this (see code below), but I had to make a mapping table in order to do so. This mapping table consists of 3 columns: a PrimaryID (Guid), WorkerName (varchar), and ClientID (int). The problem is I need this table present in all FOUR databases in order to use it for the filter since, to my knowledge, views or cross-db queries are not allowed in a filter statement. What are my options? It seems like I would set it up to be maintained in 1 database and then use triggers to keep it updated in the other 3 databases. In order to be part of the filter, I have to include that table in the replication set, so how do I flag it appropriately? Is there a better way altogether?

      SELECT <published_columns> FROM [dbo].[tblPlan]
      WHERE [ClientID] IN
        (SELECT ClientID FROM [dbo].[tblWorkerOwnership]
         WHERE WorkerID = SUSER_SNAME())

    Filters can also be chained together; this next one sits below the first one, so it only pulls from the first's filtered set:

      SELECT <published_columns> FROM [dbo].[tblPlan]
      INNER JOIN [dbo].[tblHealthAssessmentReview]
        ON [tblPlan].[PlanID] = [tblHealthAssessmentReview].[PlanID]
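
    A hedged T-SQL sketch of the trigger idea: maintain the mapping table in one master database and refresh the copies on change. Database and column names follow the question, OtherDb is a placeholder, and the naive full refresh is only reasonable because the mapping table is small:

      CREATE TRIGGER trg_SyncWorkerOwnership ON dbo.tblWorkerOwnership
      AFTER INSERT, UPDATE, DELETE
      AS
      BEGIN
        SET NOCOUNT ON;
        -- repeat for each of the other three databases
        DELETE FROM OtherDb.dbo.tblWorkerOwnership;
        INSERT INTO OtherDb.dbo.tblWorkerOwnership (PrimaryID, WorkerName, ClientID)
        SELECT PrimaryID, WorkerName, ClientID FROM dbo.tblWorkerOwnership;
      END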

  • Blocking an IP in Webmin

    - by Dan J
    I've been checking my /var/log/secure log recently and have seen the same bot trying to brute force its way onto my CentOS server running Webmin. I created a chain + rule in Networking - Linux Firewall: Drop if source is 113.106.88.146. But I'm still seeing the attempted logins in the log:

      Jun 6 10:52:18 CentOS5 sshd[9711]: pam_unix(sshd:auth): check pass; user unknown
      Jun 6 10:52:18 CentOS5 sshd[9711]: pam_succeed_if(sshd:auth): error retrieving information about user larry
      Jun 6 10:52:19 CentOS5 sshd[9711]: Failed password for invalid user larry from 113.106.88.146 port 49328 ssh2

    Here is the contents of /etc/sysconfig/iptables:

      # Generated by webmin
      *filter
      :banned-ips - [0:0]
      -A INPUT -p udp -m udp --dport ftp-data -j ACCEPT
      -A INPUT -p udp -m udp --dport ftp -j ACCEPT
      -A INPUT -p udp -m udp --dport domain -j ACCEPT
      -A INPUT -p tcp -m tcp --dport 20000 -j ACCEPT
      -A INPUT -p tcp -m tcp --dport 10000 -j ACCEPT
      -A INPUT -p tcp -m tcp --dport https -j ACCEPT
      -A INPUT -p tcp -m tcp --dport http -j ACCEPT
      -A INPUT -p tcp -m tcp --dport imaps -j ACCEPT
      -A INPUT -p tcp -m tcp --dport imap -j ACCEPT
      -A INPUT -p tcp -m tcp --dport pop3s -j ACCEPT
      -A INPUT -p tcp -m tcp --dport pop3 -j ACCEPT
      -A INPUT -p tcp -m tcp --dport ftp-data -j ACCEPT
      -A INPUT -p tcp -m tcp --dport ftp -j ACCEPT
      -A INPUT -p tcp -m tcp --dport domain -j ACCEPT
      -A INPUT -p tcp -m tcp --dport smtp -j ACCEPT
      -A INPUT -p tcp -m tcp --dport ssh -j ACCEPT
      -A banned-ips -s 113.106.88.146 -j DROP
      COMMIT
      # Completed
      # Generated by webmin
      *mangle
      :FORWARD ACCEPT [0:0]
      :INPUT ACCEPT [0:0]
      :OUTPUT ACCEPT [0:0]
      :PREROUTING ACCEPT [0:0]
      :POSTROUTING ACCEPT [0:0]
      COMMIT
      # Completed
      # Generated by webmin
      *nat
      :OUTPUT ACCEPT [0:0]
      :PREROUTING ACCEPT [0:0]
      :POSTROUTING ACCEPT [0:0]
      COMMIT
      # Completed
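
    One thing stands out in that saved ruleset: the banned-ips chain holds the DROP, but no rule in INPUT ever jumps to banned-ips - and even if one did after the ssh ACCEPT, the ACCEPT would match first. A hedged sketch of the fix (the Webmin-UI equivalent would be a rule at the top of the incoming chain whose action runs the banned-ips chain; that wording is an assumption about the UI):

      # evaluate the ban list before any of the service ACCEPTs
      iptables -I INPUT 1 -j banned-ips
      # persist it the CentOS way
      service iptables save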

  • Unable to run cvlc in a script

    - by VxJasonxV
    I'm creating a script that issues a few curl commands in order to access a time-protected mms stream link, then sets up a relay using cvlc (vlc's command line interface) for my own use on an unencumbered player. The curl aspect of this is working, as I can run a browser and curl side by side and get the same access URL. (It's time locked, meaning the stream will work forever, but you have to connect quickly or the URL will time out.) The very end of the script prints the command I will run, which is then followed by "exec $CMD". When I echo $CMD I get:

      cvlc --sout '#standard{access=http,mux=asf,dst=0.0.0.0:58194}' mms://[...]

    But the cvlc execution output says:

      [0x9743d0] main interface error: no suitable interface module
      [0x962120] main libvlc error: interface "globalhotkeys,none" initialization failed
      [0x9743d0] dummy interface: using the dummy interface module...
      [0xb16e30] stream_out_standard stream out error: no mux specified or found by extension
      [0xb16ad0] main stream output error: stream chain failed for `standard{mux="",access="",dst="'#standard{access=http,mux=asf,dst=0.0.0.0:58194}'"}'
      [0xb11cd0] main input error: cannot start stream output instance, aborting
      [0xb11f70] signals interface error: Caught Interrupt signal, exiting...

    Why is it ignoring my --sout input?

  • Ubuntu 12.04/12.10 can't detect Windows or any other partitions (ASUS Z77 UEFI BIOS)

    - by user971155
    I've recently finished putting together my new PC (motherboard ASUS Z77 with UEFI BIOS), and unfortunately not everything works quite well. After installing Windows 7 Ultimate on a single primary partition (SATA drive), I decided to allocate one more logical partition for additional needs. When I tried doing it with the manager, it said that it couldn't allocate the requested size, even though I certainly asked for much less than was available. I thought that it might be a Windows issue and proceeded to installing Ubuntu 12.10 x64. When the graphical interface loaded, it showed me a message stating that it can't find any other operating system on the drive. When I used the custom partitioning option, it showed me none of my current partitions (including the one with Windows). However, when I boot with the "Try Ubuntu" feature, it does find them! I find it weird though. Here's what the console presents me with:

      ubuntu@ubuntu:~$ sudo os-prober
      /dev/sda1:Windows 7 (loader):Windows:chain
      ubuntu@ubuntu:~$ sudo fdisk -l

      Disk /dev/sda: 640.1 GB, 640135028736 bytes
      255 heads, 63 sectors/track, 77825 cylinders, total 1250263728 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x00072b98

      Device Boot      Start         End     Blocks  Id System
      /dev/sda1   *      2048      206847     102400   7 HPFS/NTFS/exFAT
      /dev/sda2        206848   100020223   49906688   7 HPFS/NTFS/exFAT
      /dev/sda3     100022270  1250263039  575120385   5 Extended
      /dev/sda4     566669312  1250263039  341796864  83 Linux

    I also tried creating partitions from Disk Utility, which results in an error:

      Error creating partition: helper exited with exit code 1: In part_add_partition: device_file=/dev/sda, start=51211402240, size=1923000000, type=0x83
      Entering MS-DOS parser (offset=0, size=640135028736)
      MSDOS_MAGIC found
      looking at part 0 (offset 1048576, size 104857600, type 0x07)
      new part entry
      looking at part 1 (offset 105906176, size 51104448512, type 0x07)
      new part entry
      looking at part 2 (offset 51211402240, size 588923274240, type 0x05)
      Entering MS-DOS extended parser (offset=51211402240, size=588923274240)
      readfrom = 51211402240
      MSDOS_MAGIC found
      Exiting MS-DOS extended parser
      looking at part 3 (offset 290134687744, size 349999988736, type 0x83)
      new part entry
      Exiting MS-DOS parser
      MSDOS partition table detected
      containing partition table scheme = 1
      got it
      Error: Can't have overlapping partitions.
      ped_disk_new() failed

    Here's what I get when I try to install the system: i.stack.imgur.com/pjlb9.png, i.stack.imgur.com/g1lXN.png

    P.S. It's strange that I can't create any more partitions with either Disk Utility or Windows 7's native tools.
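
    Worth noting from the fdisk output above: primary partition /dev/sda4 (sectors 566669312-1250263039) lies entirely inside the extended partition /dev/sda3 (sectors 100022270-1250263039). That overlap is exactly what libparted refuses ("Can't have overlapping partitions"), and parted-based tools, the Ubuntu installer included, give up on the whole table, while fdisk does not. A hedged sketch of inspecting and clearing it from the live session (destructive if sda4 actually holds data, so check first):

      sudo parted /dev/sda unit s print   # parted's view; reproduces the refusal
      sudo fdisk /dev/sda                 # 'p' to print, 'd' then '4' to delete the overlapping entry, 'w' to write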

  • Is there a tool that can test what SSL/TLS cipher suites a particular website offers?

    - by Jeremy Powell
    Is there a tool that can test what SSL/TLS cipher suites a particular website offers? I've tried openssl, but if you examine the output:

      $ echo -n | openssl s_client -connect www.google.com:443
      CONNECTED(00000003)
      depth=1 /C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA
      verify error:num=20:unable to get local issuer certificate
      verify return:0
      ---
      Certificate chain
       0 s:/C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com
         i:/C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA
       1 s:/C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA
         i:/C=US/O=VeriSign, Inc./OU=Class 3 Public Primary Certification Authority
      ---
      Server certificate
      -----BEGIN CERTIFICATE-----
      MIIDITCCAoqgAwIBAgIQL9+89q6RUm0PmqPfQDQ+mjANBgkqhkiG9w0BAQUFADBM
      MQswCQYDVQQGEwJaQTElMCMGA1UEChMcVGhhd3RlIENvbnN1bHRpbmcgKFB0eSkg
      THRkLjEWMBQGA1UEAxMNVGhhd3RlIFNHQyBDQTAeFw0wOTEyMTgwMDAwMDBaFw0x
      MTEyMTgyMzU5NTlaMGgxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpDYWxpZm9ybmlh
      MRYwFAYDVQQHFA1Nb3VudGFpbiBWaWV3MRMwEQYDVQQKFApHb29nbGUgSW5jMRcw
      FQYDVQQDFA53d3cuZ29vZ2xlLmNvbTCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkC
      gYEA6PmGD5D6htffvXImttdEAoN4c9kCKO+IRTn7EOh8rqk41XXGOOsKFQebg+jN
      gtXj9xVoRaELGYW84u+E593y17iYwqG7tcFR39SDAqc9BkJb4SLD3muFXxzW2k6L
      05vuuWciKh0R73mkszeK9P4Y/bz5RiNQl/Os/CRGK1w7t0UCAwEAAaOB5zCB5DAM
      BgNVHRMBAf8EAjAAMDYGA1UdHwQvMC0wK6ApoCeGJWh0dHA6Ly9jcmwudGhhd3Rl
      LmNvbS9UaGF3dGVTR0NDQS5jcmwwKAYDVR0lBCEwHwYIKwYBBQUHAwEGCCsGAQUF
      BwMCBglghkgBhvhCBAEwcgYIKwYBBQUHAQEEZjBkMCIGCCsGAQUFBzABhhZodHRw
      Oi8vb2NzcC50aGF3dGUuY29tMD4GCCsGAQUFBzAChjJodHRwOi8vd3d3LnRoYXd0
      ZS5jb20vcmVwb3NpdG9yeS9UaGF3dGVfU0dDX0NBLmNydDANBgkqhkiG9w0BAQUF
      AAOBgQCfQ89bxFApsb/isJr/aiEdLRLDLE5a+RLizrmCUi3nHX4adpaQedEkUjh5
      u2ONgJd8IyAPkU0Wueru9G2Jysa9zCRo1kNbzipYvzwY4OA8Ys+WAi0oR1A04Se6
      z5nRUP8pJcA2NhUzUnC+MY+f6H/nEQyNv4SgQhqAibAxWEEHXw==
      -----END CERTIFICATE-----
      subject=/C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com
      issuer=/C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA
      ---
      No client certificate CA names sent
      ---
      SSL handshake has read 1777 bytes and written 316 bytes
      ---
      New, TLSv1/SSLv3, Cipher is AES256-SHA
      Server public key is 1024 bit
      Compression: NONE
      Expansion: NONE
      SSL-Session:
          Protocol  : TLSv1
          Cipher    : AES256-SHA
          Session-ID: 748E2B5FEFF9EA065DA2F04A06FBF456502F3E64DF1B4FF054F54817C473270C
          Session-ID-ctx:
          Master-Key: C4284AE7D76421F782A822B3780FA9677A726A25E1258160CA30D346D65C5F4049DA3D10A41F3FA4816DD9606197FAE5
          Key-Arg   : None
          Start Time: 1266259321
          Timeout   : 300 (sec)
          Verify return code: 20 (unable to get local issuer certificate)
      ---

    it just shows that the cipher suite is something with AES256-SHA. I know I could grep through the hex dump of the conversation, but I was hoping for something a little more elegant. I would prefer Linux tools, but Windows (or other) would be fine. This question is motivated by the security testing I do for PCI and general penetration testing.

    Update: GregS points out below that the SSL server picks from the cipher suites of the client. So it seems I would need to test all cipher suites one at a time. I think I can hack something together, but is there a tool that does particularly this?
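
    A hedged sketch of the brute-force approach the update describes, using nothing but openssl: offer each cipher suite the local openssl knows, one at a time, and see whether the handshake succeeds (host and port follow the question's example):

      #!/bin/bash
      HOST=www.google.com:443
      for cipher in $(openssl ciphers 'ALL:eNULL' | tr ':' ' '); do
        # a zero exit status from s_client means the server accepted this suite
        if echo -n | openssl s_client -connect "$HOST" -cipher "$cipher" >/dev/null 2>&1; then
          echo "supported: $cipher"
        fi
      done

    Dedicated scanners such as sslscan automate the same idea and print a tidier report.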

  • apache2 server running ruby on rails application has go daddy cert that works in chrome/firefox and ie 9 but not ie 8

    - by ryan
    I have a Rails application up on a Linode Ubuntu 11 server, running Apache2. I have a cert purchased from GoDaddy (where we also bought our domain), and the cert is installed on my server. Part of my virtual host file:

      ServerName my_site.com
      ServerAlias www.my_site.com
      SSLEngine On
      SSLCertificateFile /path/my_site.com.crt
      SSLCertificateKeyFile /path/my_site.com.key
      SSLCertificateChainFile /path/gd_bundle.crt

    The cert works fine in Chrome, Firefox and IE 9+, but in IE 8 and below I get this error:

      There is a problem with this website's security certificate. The security certificate presented by this website was issued for a different website's address.

    I'm hosting multiple Rails apps on this same server (4 right now, plus some old PHP sites that don't need SSL). I have tried googling every possible combination of the error/situation that I could think of, but at this point I'm shooting in the dark. The closest I could come up with is that some versions of IE don't support SNI. But that didn't seem to apply here, because I am getting the warning on Windows 7 machines running IE 8, and the SNI limitation only seemed to apply to IE 8 if the operating system was Windows XP. So why is this cert accepted by all browsers but giving me a warning in IE 8?

    Edit: Doing a little more digging, I figured out some more. It turns out this is affecting IE 9 as well. However, the problem seems to be that IE is not traversing the SSL chain to get to the right cert. Firefox and Chrome, when I view the certificate, show the correct one, but IE is showing one of our other sites' certificates. REAL QUESTION HERE: that being the case, why is IE not getting the right certificate when the others are, and how do I fix it?
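
    A hedged way to check the SNI angle from a shell: compare what the server returns without and with the SNI extension. If the two subjects differ, any client that does not send SNI will be handed the default (first) SSL vhost's certificate for that IP:port, which matches the symptom described:

      # no SNI: whatever cert the default SSL vhost serves
      openssl s_client -connect my_site.com:443 </dev/null 2>/dev/null | openssl x509 -noout -subject
      # with SNI, as modern browsers send it
      openssl s_client -connect my_site.com:443 -servername my_site.com </dev/null 2>/dev/null | openssl x509 -noout -subject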

  • IP tables blocking access to most hosts but some accesses being logged

    - by epo
    What am I getting wrong? A while back I locked down my web hosting service while hardening it, or at least trying to. Apache listens on port 80 only, and I set up iptables using the following:

      IPS="list of IPs"
      iptables --new-chain webtest
      # Accept all established connections
      iptables -A INPUT --protocol tcp --dport 80 --jump webtest
      iptables -A INPUT --match state --state ESTABLISHED,RELATED --jump ACCEPT
      iptables -A webtest --match state --state ESTABLISHED,RELATED --jump ACCEPT
      for ip in $IPS; do
        iptables -A webtest --match state --state NEW --source $ip --jump ACCEPT
      done
      iptables -A webtest --jump DROP

    However, looking at my Apache logs I notice various entries in access_log, e.g.:

      221.192.199.35 - - [16/May/2010:13:04:31 +0100] "GET http://www.wantsfly.com/prx2.php?hash=926DE27C156B40E55E4CFC8F005053E2D81E6D688AF0 HTTP/1.0" 404 206 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)"
      201.228.144.124 - - [16/May/2010:11:54:16 +0100] "GET /w00tw00t.at.ISC.SANS.DFind:) HTTP/1.1" 400 226 "-" "-"
      207.46.195.224 - - [16/May/2010:04:06:48 +0100] "GET /robots.txt HTTP/1.1" 200 311 "-" "msnbot/2.0b (+http://search.msn.com/msnbot.htm)"

    How are these slipping through? I don't mind the indexing bots (though I am a little surprised to see them get through). I suppose they must be getting through via the ESTABLISHED,RELATED rules. And no, I can't for the life of me remember why the first match-state rule is there. So, 2 questions: is there a better way to set up iptables to restrict access to specified hosts? And how exactly are these 3 examples slipping through?
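
    A hedged sketch of how to audit which rule is letting them in - the per-rule packet counters usually answer this directly:

      iptables -L INPUT -vn --line-numbers     # packet/byte counters per rule
      iptables -L webtest -vn --line-numbers
      iptables -Z                              # zero the counters, then replay one request
      # log whatever reaches the end of webtest instead of dropping silently;
      # the insert position must sit just above the final DROP (check the listing first)
      iptables -I webtest 2 -j LOG --log-prefix "webtest-drop: "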

  • Why can't I see all of the client certificates available when I visit my web site locally on Windows 7 IIS 7?

    - by Jay
    My team has recently moved to Windows 7 for our developer machines. We are attempting to configure IIS for application testing. Our application requires SSL and client certificates in order to authenticate.

    What I've done: I have configured IIS to require SSL and require (I also tried accept) certificates under SSL Settings. I have created the https binding and set it to the proper server certificate. I've installed all the root and intermediate chain certificates for the soft certificates properly in the Current User and Local Machine stores.

    The problem: when I browse to the web site, the SSL connection is established and I am prompted to choose a certificate. The issue is that the certificate offered is one created by my company that would be invalid for use in the application. I am not given the soft certificates that I have installed using MMC and IE. We are able to use the soft certs from our development machines against the Windows 2008 servers that host the application.

    What I did: I have attempted to copy the root CA to every folder location for the Current User and Local Machine account stores that the company certificate's root is in.

    My questions: Could I be mishandling the certs anywhere else? Could there be a local/group policy that could be blocking the other certs from use? What (if anything) should be done differently on Windows 7 versus 2008 in regards to IIS? Thanks for your help.
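
    One known difference worth checking: on Windows 7/2008 R2, Schannel sends the server's list of trusted issuers during the client-certificate handshake, and browsers only offer certificates that chain to a CA on that list - so a CA missing from the server's list filters out exactly the certs you expect to see. A hedged sketch of the commonly cited server-side workaround (the registry value is documented, but double-check the reference before applying; run as administrator, then restart the machine or the HTTP stack):

      reg add HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL /v SendTrustedIssuerList /t REG_DWORD /d 0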

  • Iptables rules, forward between two interfaces

    - by Marco
    I am having some difficulty configuring my Ubuntu server firewall. My situation is this:

      eth0 - internet
      eth1 - lan1
      eth2 - lan2

    I want clients from lan1 to be unable to communicate with clients from lan2, except for some specific services. E.g. I want clients in lan1 to be able to ssh into a client in lan2, but only that; any other communication is forbidden. So I added these rules to iptables:

      # Block all traffic between LANs, but permit traffic to the internet
      iptables -I FORWARD -i eth1 -o ! eth0 -j DROP
      iptables -I FORWARD -i eth2 -o ! eth0 -j DROP
      # Accept ssh traffic from lan1 to client 192.168.20.2 in lan2
      iptables -A FORWARD -i eth1 -o eth2 -p tcp --dport 22 -d 192.168.20.2 -j ACCEPT

    This didn't work. Doing iptables -L FORWARD -v I see:

      Chain FORWARD (policy DROP 0 packets, 0 bytes)
       pkts bytes target prot opt in    out    source    destination
         33   144 DROP   all  --  eth1  !eth0  anywhere  anywhere
          0     0 DROP   all  --  eth2  !eth0  anywhere  anywhere
      23630   20M ACCEPT all  --  any   any    anywhere  anywhere  state RELATED,ESTABLISHED
          0     0 ACCEPT all  --  eth1  any    anywhere  anywhere
        175  9957 ACCEPT all  --  eth1  any    anywhere  anywhere
        107  6420 ACCEPT all  --  eth2  any    anywhere  anywhere
          0     0 ACCEPT all  --  pptp+ any    anywhere  anywhere
          0     0 ACCEPT all  --  tun+  any    anywhere  anywhere
          0     0 ACCEPT tcp  --  eth1  eth2   anywhere  server2.lan  tcp dpt:ssh

    All packets are dropped, and the packet count for the last rule is 0. How do I have to modify my configuration? Thank you. Regards, Marco
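
    The counters above explain the behavior: the two DROP rules were inserted (-I) at the top of FORWARD, so the ssh ACCEPT appended (-A) at the bottom can never see a packet - iptables stops at the first matching rule. A hedged sketch of the reordering (interfaces and addresses as in the question):

      # put the specific ACCEPT above the inter-LAN DROPs
      iptables -I FORWARD 1 -i eth1 -o eth2 -p tcp -d 192.168.20.2 --dport 22 -j ACCEPT
      # keep a stateful rule ahead of the DROPs too, so the ssh replies from lan2 flow back
      iptables -I FORWARD 2 -m state --state RELATED,ESTABLISHED -j ACCEPT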

  • linux container bridge filters ARP reply

    - by Dani Camps
    I am using kernel 3.0, and I have configured a Linux container that is bridged to a tap interface on my host computer. This is the bridge configuration:

      :~$ brctl show bridge-1
      bridge name  bridge id          STP enabled  interfaces
      bridge-1     8000.9249c78a510b  no           ns3-mesh-tap-1
                                                   vethjUErij

    My problem is that this bridge is dropping ARP replies that come from the ns3-mesh-tap-1 interface. If I statically populate the ARP tables and ping directly, everything works, so it has to be something related to ARP. I have read about similar problems in related posts and tried the solutions explained therein, but nothing seems to work. Specifically:

      ~$ grep net.bridge /etc/sysctl.conf
      net.bridge.bridge-nf-call-arptables = 0
      net.bridge.bridge-nf-call-iptables = 0
      net.bridge.bridge-nf-call-ip6tables = 0
      net.bridge.bridge-nf-filter-vlan-tagged = 0
      net.bridge.bridge-nf-filter-pppoe-tagged = 0

    arptables and ebtables are not installed. The iptables FORWARD chain is all set to accept:

      Chain FORWARD (policy ACCEPT)
      target prot opt source destination

    The bridged interfaces are set to PROMISC:

      ~$ ifconfig
      ns3-mesh-tap-1 Link encap:Ethernet HWaddr 1a:c7:24:ef:36:1a
        ...
        UP BROADCAST PROMISC MULTICAST MTU:1500 Metric:1
      vethjUErij Link encap:Ethernet HWaddr aa:b0:d1:3b:9a:0a
        ...
        UP BROADCAST RUNNING PROMISC MULTICAST MTU:1500 Metric:1

    The MACs learned by the bridge are correct (checked with brctl showmacs). Any insight on what I am doing wrong would be greatly appreciated. Best regards, Daniel
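
    A hedged sketch for pinning down where the reply dies - watching the tap port, the veth port, and the bridge device separately usually shows which hop eats the frame. Note also that in the ifconfig output above ns3-mesh-tap-1 lacks the RUNNING flag that vethjUErij has; a tap with no attached process is treated as carrier-down, and a bridge will not forward through a down port, so that difference is worth chasing too:

      tcpdump -nei ns3-mesh-tap-1 arp   # does the reply arrive on the tap port?
      tcpdump -nei vethjUErij arp       # does it cross to the container side?
      tcpdump -nei bridge-1 arp         # does the bridge device itself see it?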

  • Is reliability reputation of mechanical keyboards overblown?

    - by Rarst
    A while back I worked up to finally buying a mechanical keyboard (~$100 range, "black" switches) and was initially quite content with the purchase. However, just outside the first year (read: as soon as the warranty expired) it started to develop repeat issues (press once, get a chain of the letter repeated) on multiple keys. It doesn't respond to generic cleaning (up to compressed air), and searching the Internet shows a noticeable number of people with similar-to-identical issues, spanning years. This makes me severely hesitant to buy another mechanical keyboard, considering:

    - every other keyboard I ever owned, including ultra-cheap crap, managed to last longer than that
    - the typing experience is nice, but not lifechanging-fan-forever nice for me
    - my choice of mechanical keyboards is severely limited: not many brands are represented in the local market, and those mostly in crazy-looking gamer models; needing a Russian (ideally Russian and Ukrainian) layout rules out international ordering
    - the price tag for the meek year of use I got out of it is plain demoralizing

    It is obvious mechanical keyboards have their fans, but shopping around for the "best fit" or getting into multiple-hundreds price tags is not something I am highly interested in. Considering my constraints and bad experience with reliability, is it practical for me to sink more money into mechanical keyboards again? In other words: manufacturers are beaming about how crazy reliable mechanical keyboards are. Are active long-time users of such keyboards confidently of the same opinion?

  • Linux Cups raspberry pi offload processing to server

    - by jaredmsaul
    I am interested in setting up a Raspberry Pi as the local end of a printing solution. In my testing, the Pi chokes when acting as a complete CUPS print server: it seems a little underpowered for some of the Ghostscript processing and other filtering that occurs - particularly on larger or complex documents, the processing time can be 5 or more minutes. My question is: can the processing be largely done elsewhere, and the prepared end product of the processing chain be fed to the Pi for output on the connected printer? In this scenario, any arbitrary document (HTML, PDF, text) is initially 'printed' on a relatively powerful machine, but the output is stored in a file. This file is then grabbed by the Pi and, with all the heavy work out of the way, easily printed using CUPS. I know files can be pushed through CUPS in raw mode, but I am fuzzy on the pros and cons and the applicability to what I describe. I have tested this with pdftops creating a .ps file and then feeding that raw to CUPS, and I think it works, but it seems like there may be a cleaner solution. This scenario would ideally work for any number of printer types.
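
    A hedged sketch of that split, spelled out (queue and host names are placeholders; raw mode only works when the file is already in a language the printer itself understands, e.g. PostScript for a PostScript printer - otherwise the fast machine would have to run the printer's own filter chain first):

      # on the fast machine: do the heavy rendering
      pdftops document.pdf document.ps
      scp document.ps pi:/tmp/
      # on the pi: hand the prepared file to CUPS, skipping its filters
      lp -d myprinter -o raw /tmp/document.ps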

  • HttpsCookieFilter - IllegalStateException: getOutputStream() has already been called for this response

    - by Mat Banik
    The following exception is thrown every once in a while, and it shows up in the localhost log file in the Tomcat log directory. If anyone knows how to get rid of it, all help would be appreciated. BTW, the filter is working fine; I just don't know why this exception is happening. Stack trace:

      java.lang.IllegalStateException: getOutputStream() has already been called for this response
        at org.apache.catalina.connector.Response.getWriter(Response.java:611)
        at org.apache.catalina.connector.ResponseFacade.getWriter(ResponseFacade.java:198)
        at javax.servlet.ServletResponseWrapper.getWriter(ServletResponseWrapper.java:112)
        at javax.servlet.ServletResponseWrapper.getWriter(ServletResponseWrapper.java:112)
        at org.springframework.web.servlet.view.freemarker.FreeMarkerView.processTemplate(FreeMarkerView.java:366)
        at org.springframework.web.servlet.view.freemarker.FreeMarkerView.doRender(FreeMarkerView.java:283)
        at org.springframework.web.servlet.view.freemarker.FreeMarkerView.renderMergedTemplateModel(FreeMarkerView.java:233)
        at org.springframework.web.servlet.view.AbstractTemplateView.renderMergedOutputModel(AbstractTemplateView.java:167)
        at org.springframework.web.servlet.view.AbstractView.render(AbstractView.java:250)
        at org.springframework.web.servlet.DispatcherServlet.render(DispatcherServlet.java:1047)
        at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:817)
        at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:719)
        at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:644)
        at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:549)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:617)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
        at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:88)
        at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:76)
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
        at com.opensymphony.sitemesh.webapp.SiteMeshFilter.doFilter(SiteMeshFilter.java:65)
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
        at org.springframework.orm.hibernate3.support.OpenSessionInViewFilter.doFilterInternal(OpenSessionInViewFilter.java:198)
        at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:76)
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
        at org.tuckey.web.filters.urlrewrite.RuleChain.handleRewrite(RuleChain.java:176)
        at org.tuckey.web.filters.urlrewrite.RuleChain.doRules(RuleChain.java:145)
        at org.tuckey.web.filters.urlrewrite.UrlRewriter.processRequest(UrlRewriter.java:92)
        at org.tuckey.web.filters.urlrewrite.UrlRewriteFilter.doFilter(UrlRewriteFilter.java:381)
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
        at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:368)
        at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.invoke(FilterSecurityInterceptor.java:109)
        at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.doFilter(FilterSecurityInterceptor.java:83)
        at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:380)
        at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:97)
        at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:380)
        at org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:78)
        at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:380)
        at org.springframework.security.web.authentication.rememberme.RememberMeAuthenticationFilter.doFilter(RememberMeAuthenticationFilter.java:119)
        at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:380)
        at org.springframework.security.web.authentication.AbstractAuthenticationProcessingFilter.doFilter(AbstractAuthenticationProcessingFilter.java:187)
        at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:380)
        at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:105)
        at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:380)
        at org.springframework.security.web.context.SecurityContextPersistenceFilter.doFilter(SecurityContextPersistenceFilter.java:57)
        at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:380)
        at org.springframework.security.web.context.SecurityContextPersistenceFilter.doFilter(SecurityContextPersistenceFilter.java:79)
        at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:380)
        at org.springframework.security.web.access.channel.ChannelProcessingFilter.doFilter(ChannelProcessingFilter.java:109)
        at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:380)
        at org.springframework.security.web.session.ConcurrentSessionFilter.doFilter(ConcurrentSessionFilter.java:109)
        at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:380)
        at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:169)
        at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:237)
        at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:167)
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
        // Here is the servlet I suspect is throwing the exception:
        at package.HttpsCookieFilter.doFilter(HttpsCookieFilter.java:38)
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
        at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
        at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
        at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
        at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
        at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
        at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
        at org.apache.coyote.http11.Http11NioProcessor.process(Http11NioProcessor.java:886)
        at org.apache.coyote.http11.Http11NioProtocol$Http11ConnectionHandler.process(Http11NioProtocol.java:721)
        at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:2256)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
        at java.lang.Thread.run(Thread.java:717)

    The HttpsCookieFilter class:

      public class HttpsCookieFilter implements Filter {
          private static Logger log = Logger.getLogger(HttpsCookieFilter.class);

          @Override
          public void destroy() {
          }

          @Override
          public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
                  throws IOException, ServletException {
              final HttpServletRequest req = (HttpServletRequest) request;
              final HttpServletResponse res = (HttpServletResponse) response;
              final HttpSession session = req.getSession(false);
              if (session != null) {
                  setCookie(req, res);
              }
              try {
                  chain.doFilter(request, response); // <- Exception thrown from here
              } catch (IllegalStateException e) {
                  log.warn("HttpsCookieFilter redirect problem! ", e);
              }
          }

          @Override
          public void init(FilterConfig arg0) throws ServletException {
          }

          private void setCookie(HttpServletRequest request, HttpServletResponse response) {
              Cookie cookie = new Cookie("JSESSIONID", request.getSession(false).getId());
              cookie.setMaxAge(-1);
              cookie.setPath(getCookiePath(request));
              cookie.setSecure(false);
              response.addCookie(cookie);
          }

          private String getCookiePath(HttpServletRequest request) {
              String contextPath = request.getContextPath();
              return contextPath.length() > 0 ? contextPath : "/";
          }
      }

    web.xml:

      <?xml version="1.0" encoding="UTF-8"?>
      <web-app version="2.5" xmlns="http://java.sun.com/xml/ns/j2ee"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee/web-app_2_5.xsd">
        <listener>
          <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
        </listener>
        <listener>
          <listener-class>org.springframework.web.context.request.RequestContextListener</listener-class>
        </listener>
        <listener>
          <listener-class>org.springframework.security.web.session.HttpSessionEventPublisher</listener-class>
        </listener>
        <filter>
          <filter-name>httpsCookieFilter</filter-name>
          <filter-class>com.iteezy.server.web.servlet.HttpsCookieFilter</filter-class>
        </filter>
        <filter-mapping>
          <filter-name>httpsCookieFilter</filter-name>
          <url-pattern>/*</url-pattern>
        </filter-mapping>
        <filter>
          <filter-name>filterChainProxy</filter-name>
          <filter-class>org.springframework.web.filter.DelegatingFilterProxy</filter-class>
        </filter>
        <filter-mapping>
          <filter-name>filterChainProxy</filter-name>
          <url-pattern>/*</url-pattern>
        </filter-mapping>
        ...

    The reason for integrating this filter comes from the Spring Security FAQ:

      "I'm using Tomcat (or some other servlet container) and have enabled HTTPS for my login page, switching back to HTTP afterwards. It doesn't work - I just end up back at the login page after authenticating."

      This happens because sessions created under HTTPS, for which the session cookie is marked as "secure", cannot subsequently be used under HTTP. The browser will not send the cookie back to the server, and any session state will be lost (including the security context information). Starting a session in HTTP first should work, as the session cookie won't be marked as secure.

  • Give Access to a Subdirectory Without Giving Access to Parent Directories

    - by allquixotic
    I have a scenario involving a Windows file server where the "owner" wants to dole out permissions to a group of users as follows:

      \\server\dir1\dir2\dir3: Read & Execute and Write
      \\server\dir1\dir2:      no permissions
      \\server\dir1:           no permissions
      \\server:                Read & Execute

    To my understanding, it is not possible to do this, because Read & Execute permission must be granted on all the parent directories in a directory chain in order for the operating system to be able to "see" the child directories and get to them. Without this permission, you can't even obtain the security context token when trying to access the nested directory, even if you have full access to the subdirectory.

    We are looking for ways to get around this without moving the data from \\server\dir1\dir2\dir3 to \\server\dir4. One workaround I thought of, but which I am not sure will work, is creating some sort of link or junction \\server\dir4 which is a reference to \\server\dir1\dir2\dir3. I am not sure which of the available options (if any) would work for this purpose if the user does not have Read & Execute permission on \\server\dir1\dir2 or \\server\dir1, but as far as I know, the options are these: NTFS symbolic link, junction, hard link. So the questions:

    - Are any of these methods suitable to accomplish my goal?
    - Are there any other methods of linking or indirectly referencing a directory, which I haven't listed above, that might be suitable?
    - Are there any direct solutions that don't involve granting Read & Execute to \\server\dir1 or \\server\dir2 but still allow access to \\server\dir1\dir2\dir3?
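
    A hedged sketch of the junction experiment described above, run locally on the server (the local path is a placeholder; junctions take local paths rather than UNC paths, and hard links apply only to files, so the realistic candidates are a junction or a directory symbolic link):

      mklink /J D:\shares\dir4 D:\shares\dir1\dir2\dir3

    Since junctions are resolved on the server side, a client opening \\server\dir4 never traverses dir1 or dir2 over the share; whether NTFS still demands traverse rights on the underlying path is the part to test (by default Windows grants the "Bypass traverse checking" privilege broadly, which may make even that a non-issue).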
