Search Results

Search found 18728 results on 750 pages for 'setup deployment'.


  • IIS can't load Oracle.Web assembly (for ASP.NET membership provider)

    - by Konamiman
    I am trying to configure an IIS web site to use an Oracle database for ASP.NET membership, but I can't get it to work. IIS doesn't seem to be able to load the assembly containing the Oracle membership provider. This is what I have so far:

    - An Oracle 10g database, online and with all the tables for ASP.NET membership created.
    - Windows 2008 R2 Standard with the web server role installed, including support for ASP.NET.
    - Oracle 11g Release 2 ODAC 11.2.0.1.2 installed. The installed components are: Oracle Data Provider for .NET, Oracle Providers for ASP.NET, Oracle Instant Client.

    The default web site on IIS (I am using that for testing) has the following web.config file:

        <?xml version="1.0" encoding="UTF-8"?>
        <configuration>
          <system.web>
            <membership defaultProvider="OracleMembershipProvider">
              <providers>
                <remove name="SqlMembershipProvider" />
                <add name="OracleMembershipProvider"
                     type="Oracle.Web.Security.OracleMembershipProvider, Oracle.Web, Version=2.112.1.2, Culture=neutral, PublicKeyToken=89b483f429c47342"
                     connectionStringName="OracleServer" />
              </providers>
            </membership>
          </system.web>
        </configuration>

    (Additional attributes on the "add" element omitted for brevity. Also, the connection string is defined for the whole server.) The Oracle.Web.dll file is in the GAC; a screenshot of the relevant part of the C:\Windows\assembly folder accompanied the original post. The web site application pool is configured for .NET 2.0 and has 32-bit applications enabled. I have allowed untrusted providers in IIS's administration.config file (just for the sake of testing; I'll explicitly add the assembly to the trusted providers list later).

    With all of this setup in place, when I click the ".NET Users" icon in the IIS manager, I get a warning about the provider having too many privileges, and when I accept it I get the following message:

        There was an error while performing this operation.
        Details: Could not load file or assembly 'Oracle.Web, Version=2.112.1.2, Culture=neutral, PublicKeyToken=89b483f429c47342' or one of its dependencies. The system cannot find the file specified.

    So, what am I missing? How can I get the Oracle membership provider to work? Thank you!

    UPDATE: It seems that the problem is not with IIS itself, but with the IIS administration tool only. When using the web site configuration tool provided by Visual Studio, everything works fine.
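
    A hedged pointer suggested by that UPDATE: on x64, the IIS manager runs as a 64-bit process, so it cannot load an assembly registered only in the 32-bit GAC, even when the application pool itself runs 32-bit (the Visual Studio configuration tool is a 32-bit process, which would explain why it works there). A quick check from a command prompt, using the standard .NET 2.0-era GAC paths:

        REM which GAC views actually contain the assembly?
        dir %windir%\assembly\GAC_32\Oracle.Web
        dir %windir%\assembly\GAC_64\Oracle.Web

    If only GAC_32 has an entry, the manager's load failure is expected rather than a sign of a broken site.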


  • Nameserver not resolving or domain not pingable [closed]

    - by Ricky
    Sorry, if anyone can think of a better title please change it! I want to host my own websites from home. For testing purposes, I have a virtual machine running a trial version of Windows Server 2008 Enterprise. Note that I currently run a VPS and host my own websites on it, but thanks to a nice speed upgrade on our line I now want to host from home. I have several domains, but I wanted to test with one: rickyoleary.com.

    Our ISP does not provide static IP addresses unless we have a business account, so I've been looking at no-ip.com. I admit my networking isn't the best, hence this question, but I've been bashing my head against this one all day. I created a host name, muffinbubble.no-ip.org, which runs on IP 86.148.124.15. I've set up IIS on the server with a simple test page and forwarded port 80 traffic from the router, and from what I can see, it's working. If I access my website at http://86.148.124.15/ (I was unable to make this a link for some reason, so please copy and paste it) I see my test page.

    So the next step was to create my nameservers. The domain is with namecheap.com, so I created the nameservers ns1.rickyoleary.com and ns2.rickyoleary.com. Both point to the same IP (and yes, that will be changed after testing), the same IP as above: 86.148.124.15. On the server itself I have set up DNS entries which I believe to be correct (a screenshot accompanied the original post), and added rickyoleary.com and www.rickyoleary.com to the host headers (or bindings) in IIS 7.0. If I look up my domain, rickyoleary.com, it shows ns1.rickyoleary.com and ns2.rickyoleary.com as the nameservers. I then tried just-ping.com against my nameserver ns1.rickyoleary.com: I get 100% packets lost, but the correct IP address is returned (I'm guessing the router does not allow pings, but is still accessible). I get no response when pinging rickyoleary.com.

    Here are the problems:

    1. I cannot ping ns1.rickyoleary.com or ns2.rickyoleary.com from a command prompt. I'm not sure if this is an issue.
    2. When I added the nameservers in Windows Server 2008 and clicked 'resolve', a message box displayed "No such host is known".
    3. I cannot ping rickyoleary.com.
    4. rickyoleary.com is not showing the test page on my server.

    Now, please note: I've waited around 6 hours for propagation. In my experience, although you're told to wait 24-48 hours, the changes are normally pretty quick, so perhaps I'm being impatient or naive to think it should all be working before then. I would really appreciate some help here. Thanks.
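
    One way to separate propagation delays from server-side problems (a sketch, assuming the dig utility is available) is to query the home server directly, bypassing the public resolver chain entirely:

        # ask the would-be nameserver itself, not your ISP's resolver
        dig @86.148.124.15 rickyoleary.com A +short
        dig @86.148.124.15 www.rickyoleary.com A +short

    If these time out while http://86.148.124.15/ serves pages, DNS traffic is likely not reaching the Windows server: DNS needs UDP (and ideally TCP) port 53 forwarded on the router, separately from the port 80 rule.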


  • IP failover with 2 nodes on different subnet: cannot ping virtual IP from second node?

    - by quanta
    I'm going to set up a redundant failover Redmine:

    - another Redmine instance was installed on the second server without problems
    - MySQL (running on the same machine as Redmine) was configured as master-master replication

    Because the nodes are in different subnets (192.168.3.x and 192.168.6.x), it seems that VIPArip is the only choice.

    /etc/ha.d/ha.cf on node1:

        logfacility none
        debug 1
        debugfile /var/log/ha-debug
        logfile /var/log/ha-log
        autojoin none
        warntime 3
        deadtime 6
        initdead 60
        udpport 694
        ucast eth1 node2.ip
        keepalive 1
        node node1
        node node2
        crm respawn

    /etc/ha.d/ha.cf on node2:

        logfacility none
        debug 1
        debugfile /var/log/ha-debug
        logfile /var/log/ha-log
        autojoin none
        warntime 3
        deadtime 6
        initdead 60
        udpport 694
        ucast eth0 node1.ip
        keepalive 1
        node node1
        node node2
        crm respawn

    crm configure show:

        node $id="6c27077e-d718-4c82-b307-7dccaa027a72" node1
        node $id="740d0726-e91d-40ed-9dc0-2368214a1f56" node2
        primitive VIPArip ocf:heartbeat:VIPArip \
            params ip="192.168.6.8" nic="lo:0" \
            op start interval="0" timeout="20s" \
            op monitor interval="5s" timeout="20s" depth="0" \
            op stop interval="0" timeout="20s" \
            meta is-managed="true"
        property $id="cib-bootstrap-options" \
            stonith-enabled="false" \
            dc-version="1.0.12-unknown" \
            cluster-infrastructure="Heartbeat" \
            last-lrm-refresh="1338870303"

    crm_mon -1:

        ============
        Last updated: Tue Jun 5 18:36:42 2012
        Stack: Heartbeat
        Current DC: node2 (740d0726-e91d-40ed-9dc0-2368214a1f56) - partition with quorum
        Version: 1.0.12-unknown
        2 Nodes configured, unknown expected votes
        1 Resources configured.
        ============
        Online: [ node1 node2 ]
        VIPArip (ocf::heartbeat:VIPArip): Started node1

    ip addr show lo:

        1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
            link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
            inet 127.0.0.1/8 scope host lo
            inet 192.168.6.8/32 scope global lo
            inet6 ::1/128 scope host
               valid_lft forever preferred_lft forever

    I can ping 192.168.6.8 from node1 (192.168.3.x):

        # ping -c 4 192.168.6.8
        PING 192.168.6.8 (192.168.6.8) 56(84) bytes of data.
        64 bytes from 192.168.6.8: icmp_seq=1 ttl=64 time=0.062 ms
        64 bytes from 192.168.6.8: icmp_seq=2 ttl=64 time=0.046 ms
        64 bytes from 192.168.6.8: icmp_seq=3 ttl=64 time=0.059 ms
        64 bytes from 192.168.6.8: icmp_seq=4 ttl=64 time=0.071 ms

        --- 192.168.6.8 ping statistics ---
        4 packets transmitted, 4 received, 0% packet loss, time 3000ms
        rtt min/avg/max/mdev = 0.046/0.059/0.071/0.011 ms

    but I cannot ping the virtual IP from node2 (192.168.6.x) or from outside. Did I miss something?

    PS: you probably want to set IP2UTIL=/sbin/ip in the /usr/lib/ocf/resource.d/heartbeat/VIPArip resource agent script if you get something like this:

        Jun 5 11:08:10 node1 lrmd: [19832]: info: RA output: (VIPArip:stop:stderr) 2012/06/05_11:08:10 ERROR: Invalid OCF_RESKEY_ip [192.168.6.8]

    http://www.clusterlabs.org/wiki/Debugging_Resource_Failures

    Reply to @DukeLion: Which router receives RIP updates? When I start the VIPArip resource, ripd runs with the configuration file below (on node1).

    /var/run/resource-agents/VIPArip-ripd.conf:

        hostname ripd
        password zebra
        debug rip events
        debug rip packet
        debug rip zebra
        log file /var/log/quagga/quagga.log
        router rip
        !nic_tag
        no passive-interface lo:0
        network lo:0
        distribute-list private out lo:0
        distribute-list private in lo:0
        !metric_tag
        redistribute connected metric 3
        !ip_tag
        access-list private permit 192.168.6.8/32
        access-list private deny any
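
    Since VIPArip works by announcing a host route via RIP, one sanity check (hedged; assumes tcpdump is installed) is to confirm that RIPv2 updates advertising 192.168.6.8/32 actually leave node1 while the resource is running:

        # RIPv2 speaks on UDP port 520; run this on node1's uplink interface
        tcpdump -ni eth1 udp port 520

    If no updates appear, or if the routers between the 192.168.3.x and 192.168.6.x subnets are not configured to accept RIP at all, nothing outside node1 will ever learn a route to the VIP, which would match the symptom of the address being pingable only locally.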


  • HT Link Sync Error after Ubuntu 10.04 LTS Installation

    - by marklab
    Update 1: I just assembled an exact replica of this server and successfully installed Ubuntu 10.04 LTS in a RAID10 configuration; the success was confirmed by logging in to the initial account. There must be a hardware component that is faulty. Since the error mentions HT, which on these boards refers to HyperTransport (the Opteron CPU interconnect), I will start with the CPUs. Please indicate if this error is more strongly associated with any other piece of hardware, or recommend another approach to this issue.

    Issue: I was attempting to install Ubuntu 10.04 LTS on this system with the board RAID10 configured. However, the installation failed at the partitioning stage by rebooting the system. Upon reboot, there is an error report after POST listing the following:

        Node0: NB WatchDog Timer Error
        Node1: HT Link Sync Error
        Node2: HT Link Sync Error
        ...
        Node7: HT Link Sync Error
        Press F1 to continue/resume.

    After pressing F1 the system will boot from the Ubuntu 10.04 LTS installation disc. However, it fails at the same stage and goes through the same process from there.

    Hardware:

    - CPU: AMD Opteron X12 6172 G34 2.1G 18MB
    - Motherboard: Supermicro H8QG6-F
    - HDD: WD Caviar Green 2TB 5.4K RPM

    Troubleshooting: I disabled RAID10 on the system and installed Ubuntu on a single drive; it installed successfully. I then went back to a RAID10 setup and attempted to install again, and this time made it through the partitioning stage. However, upon reboot the system reported "Error: file not found" and dropped me into the GRUB rescue console. I feel I have aggravated the problem at this point, because when I attempted to install from the boot disc again, the system rebooted upon hitting Enter to even start the installation process. It does the same thing when trying to boot from an Ubuntu 11 disc.

    I have not been able to find any information on this HT Link Sync Error, which I suspect started the problems I am now experiencing with the OS installation. I am also aware that Ubuntu is said not to be supported on this motherboard according to Supermicro's site; however, since I was able to install it successfully on a single drive, I do not believe it is incompatible. I would like to know why it fails to install in the RAID10 configuration.


  • Distinction between an extranet and a DMZ

    - by Markus Yrjölä
    I've been reading about intranets, extranets, DMZs and VPNs, and I'd like some clarification on extranets and DMZs. I understand that they are different types of concepts: an extranet allows limited access to some intranet resources, while a DMZ is a subnet that sits between the internet and the intranet and hosts the external-facing services. However, I'd like to know how they are distinguished in practice, in a typical setup.

    The Wikipedia article on extranets says that extranets are similar to DMZs because they are used for the same purpose (providing access to some services/resources without exposing the whole intranet). The article also states that an extranet is part of a VPN, and this TechNet article likewise states that extranet access is often implemented similarly to remote intranet access, e.g. with a VPN. The TechNet article also says that the extranet is commonly hosted inside the DMZ. A Pearson article adds: "Although [the DMZ] is technically located within the intranet, [it] can serve as the extranet as well". This is slightly confusing.

    Consider this scenario: a company has a B2C website hosted in the DMZ. The website can be accessed from anywhere, but requires user authentication. The underlying web app has its database inside the intranet and also interacts with some web services that are hosted inside the intranet (i.e. it accesses intranet resources). The way I see it, the website effectively offers restricted access to the intranet. But can it be considered an extranet? If we take the Wikipedia definition of an extranet literally - "An extranet is a computer network that allows controlled access from outside of an organization's intranet" - I think it can.

    Let's say the above can't be considered an extranet. What if we change the scenario slightly and say it's a B2B website, where access is limited to connections coming from a specific business partner (by using a site-to-site VPN, for example)? In that case it surely is an extranet, right? If so, is the difference between extranet services and any other services hosted in the DMZ simply the access restrictions?


  • IPSEC site-to-site Openswan to Cisco ASA

    - by Jim
    I received a list of commands that were run on the right side of the VPN tunnel, which is where the Cisco ASA resides. On my side I have a Linux-based firewall running Debian with Openswan installed. I am having an issue getting to Phase 2 of the VPN negotiation.

    Here is the Cisco information I was sent ({my_public_ip} = left side of the connection):

        tunnel-group {my_public_ip} type ipsec-l2l
        tunnel-group {my_public_ip} ipsec-attributes
        pre-shared-key fakefake
        crypto map vpn1 1 match address customer-ipsec
        crypto map vpn1 1 set peer {my_public_ip}
        crypto map vpn1 1 set transform-set aes-256-sha
        crypto map vpn1 interface outside
        static (outside,inside) 10.2.1.200 {my_public_ip} netmask 255.255.255.255
        crypto ipsec transform-set aes-256-sha esp-aes-256 esp-sha-hmac
        crypto ipsec security-association lifetime seconds 28800
        crypto ipsec security-association lifetime kilobytes 4608000
        crypto map vpn1 1 match address customer-ipsec
        crypto map vpn1 1 set peer {my_public_ip}
        crypto map vpn1 1 set transform-set aes-256-sha
        crypto map vpn1 interface outside
        crypto isakmp enable outside
        crypto isakmp policy 1
        authentication pre-share
        encryption aes-256
        hash sha
        group 2
        lifetime 86400

    My side, ipsec.conf:

        config setup
            klipsdebug=none
            plutodebug=none
            protostack=netkey
            #nat_traversal=yes

        conn cisco
            #name of VPN connection
            type=tunnel
            authby=secret
            #left side (my side)
            left={myPublicIP}
            leftsubnet=172.16.250.0/24   #subnet on the left side to present to the right side
            leftnexthop=%defaultroute
            #right security gateway (ASA side)
            right={CiscoASA_publicIP}    #cisco ASA
            rightsubnet=10.2.1.0/24
            rightnexthop=%defaultroute
            #crypto stuff
            keyexchange=ike
            ikelifetime=86400s
            auth=esp
            pfs=no
            compress=no
            auto=start

    ipsec.secrets file:

        {CiscoASA_publicIP} {myPublicIP}: PSK "fakefake"

    When I start ipsec from the left side (my side) I don't receive any errors; however, this is what I get when I run the ipsec auto --status command:

        000 "cisco": 172.16.250.0/24==={left_public_ip}<{left_public_ip}>[+S=C]---{left_public_ip_gateway}...{left_public_ip_gateway}--{right_public_ip}<{right_public_ip}>[+S=C]===10.2.1.0/24; prospective erouted; eroute owner: #0
        000 "cisco": myip=unset; hisip=unset;
        000 "cisco": ike_life: 86400s; ipsec_life: 28800s; rekey_margin: 540s; rekey_fuzz: 100%; keyingtries: 0
        000 "cisco": policy: PSK+ENCRYPT+TUNNEL+UP+IKEv2ALLOW+SAREFTRACK+lKOD+rKOD; prio: 24,24; interface: eth0;
        000 "cisco": newest ISAKMP SA: #0; newest IPsec SA: #0;
        000
        000 #2: "cisco":500 STATE_MAIN_I1 (sent MI1, expecting MR1); EVENT_RETRANSMIT in 10s; nodpd; idle; import:admin initiate
        000 #2: pending Phase 2 for "cisco" replacing #0

    I'm new to setting up a site-to-site IPSEC tunnel, so I'm unsure what this status information means. All I know is that it sits at "pending Phase 2" and I can't ping the other side. Another question: if I do a route -n, should I see anything relating to this connection? Also, I read a few articles whose configs contained interface="ipsec0=eth0" - is that an interface I have to create on the Debian firewall on my side? I appreciate your time looking at this.
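
    A hedged next step for a negotiation stuck in STATE_MAIN_I1 (sent MI1, expecting MR1): that state means no IKE reply ever came back, which usually points at UDP 500 being blocked or the ASA rejecting the proposal, rather than anything in Phase 2. Watching Pluto's log while re-initiating makes this visible:

        # the log path varies by distribution; auth.log is common on Debian
        tail -f /var/log/auth.log &
        ipsec auto --down cisco
        ipsec auto --up cisco

    On the interface question: with protostack=netkey there is no ipsec0 device and no route -n entry to expect; the interfaces="ipsec0=eth0" setting in older articles belongs to the KLIPS stack, so nothing needs to be created here.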


  • Why do I see a large performance hit with DRBD?

    - by BHS
    I see a much larger performance hit with DRBD than its user manual says I should get. I'm using DRBD 8.3.7 (Fedora 13 RPMs). I've set up a DRBD test and measured throughput of disk and network without DRBD:

        dd if=/dev/zero of=/data.tmp bs=512M count=1 oflag=direct
        536870912 bytes (537 MB) copied, 4.62985 s, 116 MB/s

    (/ is a logical volume on the disk I'm testing with, mounted without DRBD.)

    iperf:

        [ 4] 0.0-10.0 sec 1.10 GBytes 941 Mbits/sec

    According to Throughput overhead expectations, the bottleneck would be whichever is slower, the network or the disk, and DRBD should have an overhead of 3%. In my case network and I/O seem to be pretty evenly matched. It sounds like I should be able to get around 100 MB/s.

    So, with the raw drbd device, I get

        dd if=/dev/zero of=/dev/drbd2 bs=512M count=1 oflag=direct
        536870912 bytes (537 MB) copied, 6.61362 s, 81.2 MB/s

    which is slower than I would expect. Then, once I format the device with ext4, I get

        dd if=/dev/zero of=/mnt/data.tmp bs=512M count=1 oflag=direct
        536870912 bytes (537 MB) copied, 9.60918 s, 55.9 MB/s

    This doesn't seem right. There must be some other factor playing into this that I'm not aware of.

    global_common.conf:

        global {
            usage-count yes;
        }
        common {
            protocol C;
        }
        syncer {
            al-extents 1801;
            rate 33M;
        }

    data_mirror.res:

        resource data_mirror {
            device    /dev/drbd1;
            disk      /dev/sdb1;
            meta-disk internal;
            on cluster1 {
                address 192.168.33.10:7789;
            }
            on cluster2 {
                address 192.168.33.12:7789;
            }
        }

    For the hardware I have two identical machines:

    - 6 GB RAM
    - Quad-core AMD Phenom 3.2 GHz
    - Motherboard SATA controller
    - 7200 RPM, 64 MB cache, 1 TB WD drive

    The network is 1 Gb, connected via a switch. I know that a direct connection is recommended, but could it make this much of a difference?

    Edited: I just tried monitoring the bandwidth used to try to see what's happening. I used ibmonitor and measured average bandwidth while I ran the dd test 10 times. I got:

        avg ~450 Mbits writing to ext4
        avg ~800 Mbits writing to raw device

    It looks like with ext4, drbd is using about half the bandwidth it uses with the raw device, so there's a bottleneck that is not the network.
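
    One hedged experiment to isolate the ext4 gap: DRBD 8.3 passes the filesystem's barrier/flush requests through to the backing disk, and a journaling filesystem issues far more of them than a raw dd does. Temporarily disabling them in the resource's disk section shows whether flushes are the bottleneck (this trades safety for speed, so treat it as a diagnostic, not a recommendation, unless the controller has a battery-backed cache):

        disk {
            no-disk-barrier;
            no-disk-flushes;
        }

    Re-running the dd test on the ext4 mount with and without these options should make the source of the ~55 vs ~81 MB/s difference obvious.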


  • Android emulator performance on linux

    - by Rado
    I installed the Android SDK and Eclipse plugin on my laptop, but I was surprised to find out that the emulator eats up 100% of one of my CPU cores. I have exactly the same setup on a desktop machine that does not have this issue. Both computers are running Arch Linux and both were updated yesterday. Granted, the desktop has better hardware than the laptop, but I was expecting to get closer to 50% CPU usage than 100% on the laptop.

    Both Android virtual devices have the same specs:

        CPU: ARM
        Target: Android 2.3.3 - API Level 10
        Skin: WVGA800
        SD Card: 512M
        hw.lcd.density: 240
        vm.heapSize: 24
        hw.ramSize: 256

    The laptop host has an Intel Core 2 T7200 @ 2GHz CPU with 2Gb RAM. The desktop host has an AMD Phenom II X4 940 @ 3GHz CPU with 8Gb RAM. The Android emulator uses only one core; here are the CPU usage results.

    Laptop:

        Cpu0 : 22.8%us, 76.5%sy, 0.0%ni, 0.3%id, 0.0%wa, 0.0%hi, 0.3%si, 0.0%st
        Cpu1 : 11.2%us, 2.4%sy, 0.0%ni, 86.4%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 2055484k total, 1860304k used, 195180k free, 5276k buffers
        Swap: 2000088k total, 106872k used, 1893216k free, 350780k cached

        PID  USER PR NI VIRT RES  SHR  S %CPU %MEM TIME+   COMMAND
        2026 xyz  20 0  396m 207m 7192 R 100  10.3 4:11.58 emulator-arm

    Desktop:

        Cpu0 : 0.7%us, 0.0%sy, 0.0%ni, 99.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Cpu1 : 1.3%us, 0.0%sy, 0.0%ni, 98.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Cpu2 : 5.0%us, 1.3%sy, 0.0%ni, 91.9%id, 1.7%wa, 0.0%hi, 0.0%si, 0.0%st
        Cpu3 : 0.3%us, 0.3%sy, 0.0%ni, 99.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 7666324k total, 6506808k used, 1159516k free, 1650960k buffers
        Swap: 8988348k total, 0k used, 8988348k free, 2867300k cached

        PID  USER PR NI VIRT RES  SHR  S %CPU %MEM TIME+   COMMAND
        2811 xyz  20 0  392m 220m 6276 S 8    2.9  0:33.58 emulator-arm

    Is there any way I can improve the emulator performance on the laptop?

    [UPDATE] I ran the emulator with the same settings on the same laptop under Win7 and, after starting up, it didn't use 100% of a CPU core, unlike under Linux. Also, when I run the emulator from a terminal in Linux, I get this message on the laptop that I don't get on the desktop Linux host:

        Could not configure '/dev/hpet' to have a 1024Hz timer. This is not a fatal error, but for better emulation accuracy type:
        'echo 1024 > /proc/sys/dev/hpet/max-user-freq' as root.

    I'm not really familiar with rtc or hpet, but it doesn't seem that the max-user-freq setting does anything; I still get the same warning.
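
    The hpet hint from the warning can be applied directly (the '>' had been lost from the quoted message); the value resets on reboot, so persisting it needs an init-time hook (a sketch; on Arch of that era /etc/rc.local was the usual place):

        # one-off, as root
        echo 1024 > /proc/sys/dev/hpet/max-user-freq

        # persist across reboots
        echo 'echo 1024 > /proc/sys/dev/hpet/max-user-freq' >> /etc/rc.local

    If the warning remains even after this, the emulator may be busy-polling its virtual clock instead of using timer interrupts, which would be consistent with one core pinned at 100%.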


  • Linux Debian Security Breach - what now? [closed]

    - by user897075
    Possible Duplicate: My server's been hacked EMERGENCY

    I installed Debian (Squeeze) a while back on my home network to host some personal sites (thank god). During the installation it prompted me to enter a user other than root, so in a rush I used my name as both user and password (alex/alex, for what it's worth). I know it's horrible practice, but during the setup of this server I was always logged in as root to perform configurations, etc. A few days or a week passed and I forgot to change the password. Then I finally got my web site finished, and I opened port forwarding on my router and pointed DynDNS at my server at home. I've done this many times in the past and never had issues, but I normally use a cryptic root password and, I guess, disabled regular accounts.

    Today I reformatted my Windows 7 machine and, after spending all day tweaking and updating SP1, I looked for cloning apps and found Clonezilla. Seeing that it supports SSH cloning, I went through the process, only to discover I needed a user. So I logged into my web server, saw the user 'alex' was already there, and realized I didn't know the password. I changed the password to something cryptic and visited the home directory, only to realize there were contents such as passfile, bengos, etc. My heart sank: I've been hacked! Sure enough, there are all sorts of scripts and password files. I ran a 'last' command and it seems they last logged in on April 3rd.

    Questions:

    1. What can I do to see if they did anything destructive? Should I reformat and reinstall?
    2. How restrictive is Debian Squeeze in terms of user permissions out of the box? All my personal website stuff was created as 'root', and changing those files does not seem to have occurred.
    3. How did they determine there was a user 'alex' on the machine? Can you query any machine and figure out what the users are?
    4. It looks like they tried to run an IP scan. The other nodes on the network are running Windows 7, and one of them has seemed a little wonky as of late - is it possible they buggered up that system?
    5. What corrective action can I take to avoid this happening again, and how do I figure out what might have been changed or hacked?

    I'm hoping Debian out of the box is fairly secure and at best they managed to read some of my source code. :p

    Regards, Alex
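
    A few non-destructive checks that can precede the (still strongly advisable) wipe and reinstall; debsums has to be installed and only verifies files shipped in packages, so treat its output as hints, not proof of a clean system:

        last -a | head -n 20                                      # recent logins and their source hosts
        find /home /tmp /var/tmp -mtime -14 -type f 2>/dev/null   # files changed in the last two weeks
        debsums -c 2>/dev/null                                    # checksum-verify installed package files

    As for how the attacker found the user: they almost certainly did not query anything; SSH brute-force scripts simply try long lists of common usernames ("alex" is common) with matching passwords.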


  • radvd is not assigning prefix

    - by Samik
    I'm currently trying to set up IPv6 address autoconfiguration with the router advertisement daemon (radvd) on a virtual machine running CentOS 6.5, but the eth0 interface is not obtaining the prefix. I obtained the ULA prefix from here.

    Contents of /etc/sysctl.conf:

        # Kernel sysctl configuration file for Red Hat Linux
        #
        # For binary values, 0 is disabled, 1 is enabled. See sysctl(8) and
        # sysctl.conf(5) for more details.

        # Controls IP packet forwarding
        net.ipv4.ip_forward = 0
        net.ipv6.conf.all.forwarding = 1

        # Controls source route verification
        net.ipv4.conf.default.rp_filter = 1

        # Do not accept source routing
        net.ipv4.conf.default.accept_source_route = 0

        # Controls the System Request debugging functionality of the kernel
        kernel.sysrq = 0

        # Controls whether core dumps will append the PID to the core filename.
        # Useful for debugging multi-threaded applications.
        kernel.core_uses_pid = 1

        # Controls the use of TCP syncookies
        net.ipv4.tcp_syncookies = 1

        # Disable netfilter on bridges.
        net.bridge.bridge-nf-call-ip6tables = 0
        net.bridge.bridge-nf-call-iptables = 0
        net.bridge.bridge-nf-call-arptables = 0

        # Controls the default maximum size of a message queue
        kernel.msgmnb = 65536

        # Controls the maximum size of a message, in bytes
        kernel.msgmax = 65536

        # Controls the maximum shared segment size, in bytes
        kernel.shmmax = 68719476736

        # Controls the maximum number of shared memory segments, in pages
        kernel.shmall = 4294967296

    Contents of /etc/radvd.conf:

        # NOTE: there is no such thing as a working "by-default" configuration file.
        # At least the prefix needs to be specified. Please consult the radvd.conf(5)
        # man page and/or /usr/share/doc/radvd-*/radvd.conf.example for help.
        #
        interface eth0
        {
            AdvSendAdvert on;
            MinRtrAdvInterval 3;
            MaxRtrAdvInterval 10;
            AdvDefaultPreference low;
            AdvHomeAgentFlag off;

            prefix fd8a:8d9d:808f:1::/64
            {
                AdvOnLink on;
                AdvAutonomous on;
                AdvRouterAddr on;
            };
        };

    Contents of /etc/sysconfig/network-scripts/ifcfg-eth0:

        DEVICE=eth0
        HWADDR=52:54:00:74:d7:46
        TYPE=Ethernet
        UUID=af5db1cb-e809-4098-be1a-5a74dbb767b1
        ONBOOT=yes
        NM_CONTROLLED=no
        BOOTPROTO=dhcp
        IPV6INIT=yes
        IPV6_AUTOCONF=yes

    I've also enabled radvd at startup through chkconfig, though I noticed that radvd starts after the interfaces are brought up. I've tried restarting the network service afterwards, but I still get only the following link-local address:

        # ip -6 addr show
        1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436
            inet6 ::1/128 scope host
               valid_lft forever preferred_lft forever
        2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qlen 1000
            inet6 fe80::5054:ff:fe74:d746/64 scope link
               valid_lft forever preferred_lft forever

    Edit: Based on the answer given by Sander Steffann, I still need clarification on some points, but I'm posting here what worked.

    Contents of /etc/sysconfig/network:

        NETWORKING=yes
        HOSTNAME=syslog-ng-server
        NETWORKING_IPV6=yes
        IPV6FORWARDING=yes

    Contents of /etc/sysconfig/network-scripts/ifcfg-eth0:

        DEVICE=eth0
        HWADDR=52:54:00:74:d7:46
        TYPE=Ethernet
        UUID=af5db1cb-e809-4098-be1a-5a74dbb767b1
        ONBOOT=yes
        NM_CONTROLLED=no
        BOOTPROTO=dhcp
        IPV6INIT=yes
        IPV6_AUTOCONF=yes
        IPV6FORWARDING=no

    Removed the following line from /etc/sysctl.conf:

        net.ipv6.conf.all.forwarding = 1

    Contents of /etc/radvd.conf are as before.
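
    To confirm whether advertisements are leaving eth0 at all, radvd ships a capture tool, and a tcpdump filter on ICMPv6 type 134 (router advertisement) works too (a sketch; assumes tcpdump is present):

        radvdump                                    # prints each RA seen on the wire
        tcpdump -ni eth0 'icmp6 and ip6[40] == 134' # RAs only

    If RAs do appear but eth0 still gains no prefix, note that a Linux interface with IPv6 forwarding enabled considers itself a router and ignores RAs by default; that is consistent with the working configuration above, which keeps forwarding global but sets IPV6FORWARDING=no on the receiving interface.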


  • Double MySQL Running on Mountain Lion

    - by Norris
    After a full restart, my Apache PHP server doesn't connect to local MySQL (connecting via 127.0.0.1, because localhost always fails for some reason). So I did this today:

        ? ~ mysqladmin shutdown -u root -p
        Enter password:
        ? ~ mysqladmin shutdown -u root -p
        Enter password:
        mysqladmin: connect to server at 'localhost' failed
        error: 'Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2)'
        Check that mysqld is running and that the socket: '/tmp/mysql.sock' exists!

    which basically means that I succeeded in shutting down MySQL. But as soon as I did, Apache/PHP successfully connects to MySQL, and my local sites work without a hiccup until the next restart. Here are a few other details (as you can tell, I installed MySQL via brew):

        ? ~ sudo ps aux | grep mysql
        N 4774 0.0 0.0 2432768 620 s000 S+ 9:53AM 0:00.00 grep mysql
        N 4772 0.0 2.6 3030168 440688 ?? S 9:51AM 0:00.29 /usr/local/Cellar/mysql/5.6.13/bin/mysqld --basedir=/usr/local/Cellar/mysql/5.6.13 --datadir=/usr/local/var/mysql --plugin-dir=/usr/local/Cellar/mysql/5.6.13/lib/plugin --bind-address=127.0.0.1 --log-error=/usr/local/var/mysql/N.local.err --pid-file=/usr/local/var/mysql/N.local.pid
        N 4686 0.0 0.0 2433432 1000 ?? S 9:51AM 0:00.01 /bin/sh /usr/local/opt/mysql/bin/mysqld_safe --bind-address=127.0.0.1
        N 4362 0.0 2.7 3120276 458728 ?? S 9:47AM 0:00.45 mysqld

        ? ~ lsof -i | grep mysql
        mysqld 4362 N 16u IPv6 0x76959e40691f9f93 0t0 TCP *:mysql (LISTEN)

    This is the weird thing:

        ? ~ killall -9 mysqld

    MySQL is dead! Apache doesn't connect. Then, when I run:

        ? ~ sudo mysqladmin shutdown -u root -p
        Enter password:

    Apache is (again) able to successfully connect to MySQL. As far as I understand, this means that I have two MySQL servers set up and both of them are trying to start up at the same time, but I don't have the slightest idea how to fix it. I've tried brew reinstalling, but that didn't help.

        ? ~ which mysqladmin
        /usr/local/bin/mysqladmin
        ? ~ whence -p mysql
        /usr/local/bin/mysql
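
    To find out which two launch paths are both starting mysqld, listing what launchd knows about is a reasonable first probe (hedged; the paths are the usual Homebrew ones):

        launchctl list | grep -i mysql
        ls ~/Library/LaunchAgents /Library/LaunchAgents /Library/LaunchDaemons 2>/dev/null | grep -i mysql

    Two plists (say, one user LaunchAgent plus a stray system LaunchDaemon) would produce exactly this pair of mysqld processes; launchctl unload -w on the unwanted one should stop the duplication.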


  • 6to4 tunnel: cannot ping6 to ipv6.google.com?

    - by quanta
    Hi folks, follow the Setup of 6to4 tunnel guide, I want to test IPv6 connectivity, but I cannot ping6 to ipv6.google.com. Details below:

        # traceroute 192.88.99.1
        traceroute to 192.88.99.1 (192.88.99.1), 30 hops max, 40 byte packets
         1 static.vdc.vn (123.30.53.1) 1.514 ms 2.622 ms 3.760 ms
         2 static.vdc.vn (123.30.63.117) 0.608 ms 0.696 ms 0.735 ms
         3 static.vdc.vn (123.30.63.101) 0.474 ms 0.477 ms 0.506 ms
         4 203.162.231.214 (203.162.231.214) 11.327 ms 11.320 ms 11.312 ms
         5 static.vdc.vn (222.255.165.34) 11.546 ms 11.684 ms 11.768 ms
         6 203.162.217.26 (203.162.217.26) 42.460 ms 42.424 ms 42.401 ms
         7 218.188.104.173 (218.188.104.173) 42.489 ms 42.462 ms 42.415 ms
         8 218.189.5.10 (218.189.5.10) 42.613 ms 218.189.5.42 (218.189.5.42) 42.273 ms 42.300 ms
         9 d1-26-224-143-118-on-nets.com (118.143.224.26) 205.752 ms d1-18-224-143-118-on-nets.com (118.143.224.18) 207.130 ms d1-14-224-143-118-on-nets.com (118.143.224.14) 206.970 ms
        10 218.189.5.150 (218.189.5.150) 207.456 ms 206.349 ms 206.941 ms
        11 * * *
        12 10gigabitethernet2-1.core1.lax1.he.net (72.52.92.121) 214.087 ms 214.426 ms 214.818 ms
        13 192.88.99.1 (192.88.99.1) 207.215 ms 199.270 ms 209.391 ms

        # ifconfig tun6to4
        tun6to4 Link encap:IPv6-in-IPv4
                inet6 addr: 2002:x:x::/16 Scope:Global
                inet6 addr: ::x.x.x.x/128 Scope:Compat
                UP RUNNING NOARP MTU:1480 Metric:1
                RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                TX packets:0 errors:11 dropped:0 overruns:0 carrier:11
                collisions:0 txqueuelen:0
                RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)

        # iptunnel
        sit0: ipv6/ip remote any local any ttl 64 nopmtudisc
        tun6to4: ipv6/ip remote any local x.x.x.x ttl 64

        # ip -6 route show
        ::/96 via :: dev tun6to4 metric 256 expires 21332777sec mtu 1480 advmss 1420 hoplimit 4294967295
        2002::/16 dev tun6to4 metric 256 expires 21332794sec mtu 1480 advmss 1420 hoplimit 4294967295
        fe80::/64 dev eth0 metric 256 expires 15674592sec mtu 1500 advmss 1440 hoplimit 4294967295
        fe80::/64 dev eth1 metric 256 expires 15674597sec mtu 1500 advmss 1440 hoplimit 4294967295
        fe80::/64 dev tun6to4 metric 256 expires 21332794sec mtu 1480 advmss 1420 hoplimit 4294967295
        default via ::192.88.99.1 dev tun6to4 metric 1 expires 21332861sec mtu 1480 advmss 1420 hoplimit 4294967295

        # ping6 -n -c 4 ipv6.google.com
        PING ipv6.google.com(2404:6800:8005::68) 56 data bytes
        From 2002:x:x:: icmp_seq=0 Destination unreachable: Address unreachable
        From 2002:x:x:: icmp_seq=1 Destination unreachable: Address unreachable
        From 2002:x:x:: icmp_seq=2 Destination unreachable: Address unreachable
        From 2002:x:x:: icmp_seq=3 Destination unreachable: Address unreachable

        --- ipv6.google.com ping statistics ---
        4 packets transmitted, 0 received, +4 errors, 100% packet loss, time 2999ms

    What is my problem? Thanks,
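
    Since 6to4 wraps IPv6 in IPv4 protocol 41, a capture on the outbound interface shows whether the relay ever answers (a sketch; assumes tcpdump and that eth0 carries the public address):

        # run while ping6 is attempted from another terminal
        tcpdump -ni eth0 'ip proto 41'

    Encapsulated packets going out with nothing coming back typically means an upstream filter on protocol 41, or a broken path to the 192.88.99.1 anycast relay; both are common failure modes for 6to4.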


  • Django + gunicorn + virtualenv + Supervisord issue

    - by Florian Le Goff
    Dear all,

    I have a strange issue with my virtualenv + gunicorn setup, only when gunicorn is launched via supervisord. I do realize that it may very well be an issue with my supervisord, and I would appreciate any feedback on a better place to ask for help...

    In a nutshell: when I run gunicorn from my user shell, inside my virtualenv, everything is working flawlessly; I'm able to access all the views of my Django project. When gunicorn is launched by supervisord at system startup, everything is OK. But if I have to kill the gunicorn_django processes, or if I perform a supervisord restart, once gunicorn_django has relaunched, every request is answered with a weird traceback:

        (...)
        File "/home/hc/prod/venv/lib/python2.6/site-packages/Django-1.2.5-py2.6.egg/django/db/__init__.py", line 77, in <module>
          connection = connections[DEFAULT_DB_ALIAS]
        File "/home/hc/prod/venv/lib/python2.6/site-packages/Django-1.2.5-py2.6.egg/django/db/utils.py", line 92, in __getitem__
          backend = load_backend(db['ENGINE'])
        File "/home/hc/prod/venv/lib/python2.6/site-packages/Django-1.2.5-py2.6.egg/django/db/utils.py", line 50, in load_backend
          raise ImproperlyConfigured(error_msg)
        TemplateSyntaxError: Caught ImproperlyConfigured while rendering: 'django.db.backends.postgresql_psycopg2' isn't an available database backend.
        Try using django.db.backends.XXX, where XXX is one of: 'dummy', 'mysql', 'oracle', 'postgresql', 'postgresql_psycopg2', 'sqlite3'
        Error was: cannot import name utils

    Full stack available here: http://pastebin.com/BJ5tNQ2N

    I'm running:

    - Ubuntu/maverick (up-to-date)
    - Python 2.6.6
    - virtualenv 1.5.1
    - gunicorn 0.12.0
    - Django 1.2.5
    - psycopg2 '2.4-beta2 (dt dec pq3 ext)'

    gunicorn configuration:

        backlog = 2048
        bind = "127.0.0.1:8000"
        pidfile = "/tmp/gunicorn-hc.pid"
        daemon = True
        debug = True
        workers = 3
        logfile = "/home/hc/prod/log/gunicorn.log"
        loglevel = "info"

    supervisord configuration:

        [program:gunicorn]
        directory=/home/hc/prod/hc
        command=/home/hc/prod/venv/bin/gunicorn_django -c /home/hc/prod/hc/gunicorn.conf.py
        user=hc
        umask=022
        autostart=True
        autorestart=True
        redirect_stderr=True

    Any advice? I've been stuck on this one for quite a while. It seems like some weird memory limit, as I'm not enforcing anything special:

        $ ulimit -a
        core file size (blocks, -c) 0
        data seg size (kbytes, -d) unlimited
        scheduling priority (-e) 20
        file size (blocks, -f) unlimited
        pending signals (-i) 16382
        max locked memory (kbytes, -l) 64
        max memory size (kbytes, -m) unlimited
        open files (-n) 1024
        pipe size (512 bytes, -p) 8
        POSIX message queues (bytes, -q) 819200
        real-time priority (-r) 0
        stack size (kbytes, -s) 8192
        cpu time (seconds, -t) unlimited
        max user processes (-u) unlimited
        virtual memory (kbytes, -v) unlimited
        file locks (-x) unlimited

    Thank you.
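
    One hedged guess worth testing: a process started by supervisord does not inherit an activated virtualenv, so a system-wide Django/psycopg2 can shadow the venv copies at import time even though the gunicorn binary itself lives in the venv. Pinning the environment in the program section makes the supervisord launch identical to the shell launch:

        [program:gunicorn]
        directory=/home/hc/prod/hc
        command=/home/hc/prod/venv/bin/gunicorn_django -c /home/hc/prod/hc/gunicorn.conf.py
        environment=PATH="/home/hc/prod/venv/bin:/usr/local/bin:/usr/bin:/bin",VIRTUAL_ENV="/home/hc/prod/venv"
        user=hc

    The "cannot import name utils" note at the bottom of the traceback is a typical symptom of Django modules being imported from two different installations.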


  • LUKS with LVM, mount is not persistent after reboot

    - by linxsaga
    I have created a logical volume and used LUKS to encrypt it. But when rebooting the server I get the error message below, so I have to enter the root password and disable the /etc/fstab entry: the mount of the LUKS partition does not persist across reboots. I have this setup on RHEL6 and am wondering what I could be missing. I want the LV to be mounted on reboot. Later I would want to replace the device name with its UUID.

    Error message on reboot:

        Give root password for maintenance
        (or type Control-D to continue):

    Here are the steps from the beginning:

        [root@rhel6 ~]# pvcreate /dev/sdb
        Physical volume "/dev/sdb" successfully created
        [root@rhel6 ~]# vgcreate vg01 /dev/sdb
        Volume group "vg01" successfully created
        [root@rhel6 ~]# lvcreate --size 500M -n lvol1 vg01
        Logical volume "lvol1" created
        [root@rhel6 ~]# lvdisplay
        --- Logical volume ---
        LV Name             /dev/vg01/lvol1
        VG Name             vg01
        LV UUID             nX9DDe-ctqG-XCgO-2wcx-ddy4-i91Y-rZ5u91
        LV Write Access     read/write
        LV Status           available
        # open              0
        LV Size             500.00 MiB
        Current LE          125
        Segments            1
        Allocation          inherit
        Read ahead sectors  auto
        - currently set to  256
        Block device        253:0

        [root@rhel6 ~]# cryptsetup luksFormat /dev/vg01/lvol1

        WARNING!
        ========
        This will overwrite data on /dev/vg01/lvol1 irrevocably.

        Are you sure? (Type uppercase yes): YES
        Enter LUKS passphrase:
        Verify passphrase:

        [root@rhel6 ~]# mkdir /house
        [root@rhel6 ~]# cryptsetup luksOpen /dev/vg01/lvol1 house
        Enter passphrase for /dev/vg01/lvol1:
        [root@rhel6 ~]# mkfs.ext4 /dev/mapper/house
        mke2fs 1.41.12 (17-May-2010)
        Filesystem label=
        OS type: Linux
        Block size=1024 (log=0)
        Fragment size=1024 (log=0)
        Stride=0 blocks, Stripe width=0 blocks
        127512 inodes, 509952 blocks
        25497 blocks (5.00%) reserved for the super user
        First data block=1
        Maximum filesystem blocks=67633152
        63 block groups
        8192 blocks per group, 8192 fragments per group
        2024 inodes per group
        Superblock backups stored on blocks:
            8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409

        Writing inode tables: done
        Creating journal (8192 blocks): done
        Writing superblocks and filesystem accounting information: done

        This filesystem will be automatically checked every 21 mounts or 180 days,
        whichever comes first. Use tune2fs -c or -i to override.

        [root@rhel6 ~]# mount -t ext4 /dev/mapper/house /house

    PS: here I have successfully mounted:

        [root@rhel6 ~]# ls /house/
        lost+found

        [root@rhel6 ~]# vim /etc/fstab    -> entry as follows
        /dev/mapper/house /house ext4 defaults 1 2

        [root@rhel6 ~]# vim /etc/crypttab -> entry as follows
        house /dev/vg01/lvol1 password

        [root@rhel6 ~]# mount -o remount /house
        [root@rhel6 ~]# ls /house/
        lost+found
        [root@rhel6 ~]# umount /house/
        [root@rhel6 ~]# mount -a          -> SUCCESSFUL AGAIN
        [root@rhel6 ~]# ls /house/
        lost+found

    Please let me know if I am missing anything here. Thanks in advance.
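
    A hedged reading of the failure: the third field of /etc/crypttab is a key-file path, so the literal word password makes cryptsetup look for a key file named password at boot; the unlock fails, the fstab mount then fails, and boot drops to maintenance mode. Asking for the passphrase on the console instead looks like this:

        # /etc/crypttab -- "none" means prompt for the passphrase at boot
        house /dev/vg01/lvol1 none

    Once the device unlocks and /dev/mapper/house exists, the existing fstab line should mount normally.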


  • Why is my router not routing?

    - by dwj
    Starting a week and a half ago, my router stopped working with my cable modem: I went to sleep with it working and woke up with it not. I swapped in another router and am still having issues; I was gone for 10 days, so now I'm back to trying to figure it out. While I was gone I left everything (cable modem, router, and computer) powered off.

    My setup:

    - Comcast Ambit cable modem (from Comcast)
    - Netgear WGR614 v4 router, replaced with a Linksys WRT54GS v1.1
    - Windows XP SP3
    - other computers, all currently unplugged

    The modem is using the firmware (ver 2.105.2001) provided by Comcast, hardware version 1.3. The Linksys router is using firmware 4.71.4 (the latest for this release of the hardware), at factory defaults. I am only using the wired connections, no wireless, and I have swapped out all of the Cat5 cable.

    If I plug my computer directly into the cable modem, I can ping by name or number; everything works perfectly. If I plug my computer into the router and the router into the modem, I cannot access anything outside of my local network. This is the exact setup I've used for the past 5 years; there were no changes in the past year.

    Now here's the interesting part: I can log into the Linksys router and get status information from it, and everything appears good. Using the diagnostics, I can run ping and traceroute to any site on the internet, and these work perfectly. From my computer, I can ping the router and the modem. However, I cannot ping anything on the internet by name or number. If I plug in another computer, I can ping it successfully. I've included two transcripts below that show these two attempts. Addresses, DNS, gateways, etc. look good. I cannot access the internet through either router. I am at a loss here. Suggestions? Help!

    Computer -> Router -> Cable Modem:

        C:\>ipconfig /renew

        Windows IP Configuration

        No operation can be performed on Bluetooth Network while it has its media disconnected.

        Ethernet adapter Local Area Connection:
           Connection-specific DNS Suffix . : hsd1.ca.comcast.net.
           IP Address. . . . . . . . . . . . : 192.168.1.100
           Subnet Mask . . . . . . . . . . . : 255.255.255.0
           Default Gateway . . . . . . . . . : 192.168.1.1

        Ethernet adapter Bluetooth Network:
           Media State . . . . . . . . . . . : Media disconnected

        C:\>ipconfig /all

        Windows IP Configuration
           Host Name . . . . . . . . . . . . : wynton
           Primary Dns Suffix . . . . . . . :
           Node Type . . . . . . . . . . . . : Unknown
           IP Routing Enabled. . . . . . . . : No
           WINS Proxy Enabled. . . . . . . . : No
           DNS Suffix Search List. . . . . . : hsd1.ca.comcast.net.

        Ethernet adapter Local Area Connection:
           Connection-specific DNS Suffix . : hsd1.ca.comcast.net.
           Description . . . . . . . . . . . : Intel(R) 82562V-2 10/100 Network Connection
           Physical Address. . . . . . . . . : 00-1D-09-9B-45-EB
           Dhcp Enabled. . . . . . . . . . . : Yes
           Autoconfiguration Enabled . . . . : Yes
           IP Address. . . . . . . . . . . . : 192.168.1.100
           Subnet Mask . . . . . . . . . . . : 255.255.255.0
           Default Gateway . . . . . . . . . : 192.168.1.1
           DHCP Server . . . . . . . . . . . : 192.168.1.1
           DNS Servers . . . . . . . . . . . : 68.87.76.178
                                               68.87.78.130
           Lease Obtained. . . . . . . . . . : Monday, March 22, 2010 10:21:55 PM
           Lease Expires . . . . . . . . . . : Tuesday, March 23, 2010 10:21:55 PM

        Ethernet adapter Bluetooth Network:
           Media State . . . . . . . . . . . : Media disconnected
           Description . . . . . . . . . . . : Bluetooth LAN Access Server Driver
           Physical Address. . . . . . . . . : 00-0A-3A-6F-68-41

        C:\>ping google.com
        Ping request could not find host google.com. Please check the name and try again.

        C:\>ping 74.125.19.104
        Pinging 74.125.19.104 with 32 bytes of data:
        Request timed out.
        Request timed out.
        Request timed out.
        Request timed out.
        Ping statistics for 74.125.19.104:
            Packets: Sent = 4, Received = 0, Lost = 4 (100% loss),

    Computer -> Cable Modem directly:

        C:\>ipconfig /renew

        Windows IP Configuration

        No operation can be performed on Bluetooth Network while it has its media disconnected.

        Ethernet adapter Local Area Connection:
           Connection-specific DNS Suffix . : hsd1.ca.comcast.net.
           IP Address. . . . . . . . . . . . : 71.204.149.195
           Subnet Mask . . . . . . . . . . . : 255.255.252.0
           Default Gateway . . . . . . . . . : 71.204.148.1

        Ethernet adapter Bluetooth Network:
           Media State . . . . . . . . . . . : Media disconnected

        C:\>ipconfig /all

        Windows IP Configuration
           Host Name . . . . . . . . . . . . : wynton
           Primary Dns Suffix . . . . . . . :
           Node Type . . . . . . . . . . . . : Unknown
           IP Routing Enabled. . . . . . . . : No
           WINS Proxy Enabled. . . . . . . . : No
           DNS Suffix Search List. . . . . . : hsd1.ca.comcast.net.

        Ethernet adapter Local Area Connection:
           Connection-specific DNS Suffix . : hsd1.ca.comcast.net.
           Description . . . . . . . . . . . : Intel(R) 82562V-2 10/100 Network Connection
           Physical Address. . . . . . . . . : 00-1D-09-9B-45-EB
           Dhcp Enabled. . . . . . . . . . . : Yes
           Autoconfiguration Enabled . . . . : Yes
           IP Address. . . . . . . . . . . . : 71.204.149.195
           Subnet Mask . . . . . . . . . . . : 255.255.252.0
           Default Gateway . . . . . . . . . : 71.204.148.1
           DHCP Server . . . . . . . . . . . : 68.87.76.10
           DNS Servers . . . . . . . . . . . : 68.87.76.178
                                               68.87.78.130
           Lease Obtained. . . . . . . . . . : Monday, March 22, 2010 10:18:50 PM
           Lease Expires . . . . . . . . . . : Monday, March 22, 2010 11:12:31 PM

        Ethernet adapter Bluetooth Network:
           Media State . . . . . . . . . . . : Media disconnected
           Description . . . . . . . . . . . : Bluetooth LAN Access Server Driver
           Physical Address. . . . . . . . . : 00-0A-3A-6F-68-41

        C:\>ping google.com
        Pinging google.com [74.125.19.99] with 32 bytes of data:
        Reply from 74.125.19.99: bytes=32 time=20ms TTL=55
        Reply from 74.125.19.99: bytes=32 time=17ms TTL=55
        Reply from 74.125.19.99: bytes=32 time=28ms TTL=55
        Reply from 74.125.19.99: bytes=32 time=18ms TTL=55
        Ping statistics for 74.125.19.99:
            Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
        Approximate round trip times in milli-seconds:
            Minimum = 17ms, Maximum = 28ms, Average = 20ms

        C:\>ping 74.125.19.104
        Pinging 74.125.19.104 with 32 bytes of data:
        Reply from 74.125.19.104: bytes=32 time=18ms TTL=55
        Reply from 74.125.19.104: bytes=32 time=18ms TTL=55
        Reply from 74.125.19.104: bytes=32 time=17ms TTL=55
        Reply from 74.125.19.104: bytes=32 time=16ms TTL=55
        Ping statistics for 74.125.19.104:
            Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
        Approximate round trip times in milli-seconds:
            Minimum = 16ms, Maximum = 18ms, Average = 17ms


  • Windows 7 x64 "upgrade" repair fails

    - by Polynomial
    I've been running into issues with Windows Update which I can't seem to fix: the hotfixes don't work, nor does the Windows Update readiness tool or the manual SP1 upgrade. I get various esoteric errors which nobody seems to have a fix for. It looks like some of the update cache is corrupt, and digital signatures seem to be broken on some packages / Windows Update components.

    Long story short, I have discovered the only option is to do a repair operation on the OS, to repair everything; it's so corrupt that only a complete replacement will fix it. According to various sources (including MSKB), one can perform a repair by running an in-place upgrade.

    I've got the Windows 7 Ultimate retail disc, which I've inserted into my machine. I ran setup.exe and went through in the following order:

    1. Install now
    2. Go online to get latest updates (I've also tried not getting updates)
    3. Wait for updates to be downloaded
    4. Select Windows 7 Ultimate (x64 architecture) and click next
    5. Accept the T&Cs, click next
    6. Click Upgrade

    At this point it spends a minute on the "checking compatibility" screen, after which I get the following error:

        The following issues are preventing Windows from upgrading. Cancel the upgrade, complete each task, and then restart the upgrade to continue.
        You can't upgrade 64-bit Windows to a 32-bit version of Windows. To upgrade, obtain a 64-bit version of the installation disc, or go online to see how to install Windows 7 and keep your files and settings.
        32-bit Windows cannot be upgraded to a 64-bit version of Windows. To upgrade, obtain a 32-bit version of the Windows installation disc.

    It also mentions a warning about potential conflicts with a storage driver and VS2010, but that doesn't seem to be the blocking issue. My currently installed version of Windows is Ultimate 64-bit (absolutely sure of this) and the disc is definitely a combined x86/x64 Ultimate retail disc. There seem to be a few people who have run into this (e.g. this question), but I've not seen any answers. I've checked the event viewer but can't spot anything related.

    Any idea how I can get this working?

    P.S: Just to pre-empt the inevitable "are you suuuuuuuuuuuuure it's x64 Ultimate?" questions:


  • nginx proxypath https redirect fails without trailing slash

    - by Thermionix
    I'm trying to set up Nginx to forward requests to several backend services using proxy_pass. Links on pages that lack trailing slashes do have https:// in front, but they get redirected to an http request with a trailing slash, which ends in connection refused; I only want these services to be available through https.

    So if a link points to https://example.com/internal/errorlogs, loading it in a browser gives "Error Code 10061: Connection refused" (it redirects to http://example.com/internal/errorlogs/). If I manually append the trailing slash, https://example.com/internal/errorlogs/ loads fine.

    I've tried various trailing forward slashes appended to the proxy path and location in proxy.conf to no effect, and have also added server_name_in_redirect off;. This happens with more than one app under nginx, and the same apps work behind an Apache reverse proxy.

    Config files:

    proxy.conf:

        location /internal {
            proxy_pass http://localhost:8081/internal;
            include proxy.inc;
        }
        .... more entries ....

    sites-enabled/main:

        server {
            listen 443;
            server_name example.com;
            server_name_in_redirect off;
            include proxy.conf;
            ssl on;
        }

    proxy.inc:

        proxy_connect_timeout 59s;
        proxy_send_timeout 600;
        proxy_read_timeout 600;
        proxy_buffer_size 64k;
        proxy_buffers 16 32k;
        proxy_pass_header Set-Cookie;
        proxy_redirect off;
        proxy_hide_header Vary;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
        proxy_set_header Accept-Encoding '';
        proxy_ignore_headers Cache-Control Expires;
        proxy_set_header Referer $http_referer;
        proxy_set_header Host $host;
        proxy_set_header Cookie $http_cookie;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Ssl on;
        proxy_set_header X-Forwarded-Proto https;

    curl output:

        $ curl -I -k https://example.com/internal/errorlogs/
        HTTP/1.1 200 OK
        Server: nginx/1.0.5
        Date: Thu, 24 Nov 2011 23:32:07 GMT
        Content-Type: text/html;charset=utf-8
        Connection: keep-alive
        Content-Length: 14327

        $ curl -I -k https://example.com/internal/errorlogs
        HTTP/1.1 301 Moved Permanently
        Server: nginx/1.0.5
        Date: Thu, 24 Nov 2011 23:32:11 GMT
        Content-Type: text/html;charset=utf-8
        Connection: keep-alive
        Content-Length: 127
        Location: http://example.com/internal/errorlogs/
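
    A hedged culprit: the 301 comes from the backend (adding the directory slash), and with proxy_redirect off nginx passes the backend's http:// Location header through untouched. One experiment is to drop proxy_redirect off from proxy.inc (or override it here) and rewrite the backend's absolute redirects back onto the https front end:

        location /internal {
            proxy_pass http://localhost:8081/internal;
            include proxy.inc;
            # replaces the "proxy_redirect off" inherited from proxy.inc
            proxy_redirect http://example.com/ https://example.com/;
        }

    Verifying with the same curl -I test should then show the 301's Location carrying https://, which the browser can actually follow.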


  • Autoloading Development or Production configs (best practices)

    - by Xeoncross
    When programming sites you usually have one set of config files for the development environment and another set for the production server (or one file with both settings). I am assuming all projects should be handled by version control like git or svn; manual file transfers (like FTP) are wrong on so many levels. How you enable/disable the correct settings (so that your system knows which ones to use) is a problem for me. Each system I work on just kind of jimmy-rigs a solution. Below are the 3 methods I know of, and I am hoping that someone can suggest a more elegant solution (a minimal sketch of method 3 follows the list).

    1) File Based

    The system loads a folder structure based on the URL requested:

        /site.com
        /site.fakeTLD
        /lib
        index.php

    For example, if the URL is http://site.com then the system loads the production config files located in the site.com folder. However, if I'm working on the site locally I visit http://site.fakeTLD to work on the local copy of the site. To set this up I edit my hosts file, add site.fakeTLD pointing to my own computer (127.0.0.1/localhost), and then create a vhost in Apache. Now I can work on the codebase locally and push to the server without any trouble. The problem is that this is susceptible to a "host" injection attack: someone requesting site.com could set the Host header to site.fakeTLD, and the system would load my development config files instead of production.

    2) Config Based

    The config file contains one section for development and one for production. The problem is that each time you push your changes to the repo you have to edit the file to specify which set of config options should be used:

        $use = 'production'; //'development';

    This leaves the repo open to human error, should one of the developers forget to enable the right setting.

    3) Filesystem Check Based

    All the development machines have an extra empty file called "development.txt" or something. Each time the system loads, it checks for this file: if found, it knows it is in development mode; if missing, it knows it is in production mode. Since the file is never added to the repo, it will never be pushed (and checked out) on the production machine. However, this just doesn't feel right, and it causes a slight slowdown, since filesystem checks are slow.
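
    For completeness, method 3 reduces to a couple of lines; the marker file is never committed, so production machines fall through to the live settings (the file names here are illustrative, not a convention):

        <?php
        // presence of an untracked marker file flags a development checkout
        $is_dev = file_exists(dirname(__FILE__) . '/development.txt');
        $config = $is_dev
            ? require dirname(__FILE__) . '/config.development.php'
            : require dirname(__FILE__) . '/config.production.php';

    A common variation on the same idea keys off an environment variable set per machine (SetEnv in the vhost), which avoids both the filesystem check and the host-header trust problem of method 1.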


  • How Do I Restrict Repository Access via WebSVN?

    - by kaybenleroll
    I have multiple subversion repositories which are served up through Apache 2.2 and WebDAV. They are all located in a central place, and I used this debian-administration.org article as the basis (though I dropped the database authentication in favour of a simple htpasswd file). Since then, I have also started using WebSVN.

    My issue is that not all users on the system should be able to access the different repositories, and the default setup of WebSVN is to allow anyone who can authenticate. According to the WebSVN documentation, the best way around this is to use subversion's path-based access system, so I looked to set that up using the AuthzSVNAccessFile directive. When I do this, though, I keep getting "403 Forbidden" messages.

    My files look like the following. I have default policy settings in a file:

        <Location /svn/>
            DAV svn
            SVNParentPath /var/lib/svn/repository
            Order deny,allow
            Deny from all
        </Location>

    Each repository gets a policy file like below:

        <Location /svn/sysadmin/>
            Include /var/lib/svn/conf/default_auth.conf
            AuthName "Repository for sysadmin"
            require user joebloggs jimsmith mickmurphy
        </Location>

    The default_auth.conf file contains this:

        SVNParentPath /var/lib/svn/repository
        AuthType basic
        AuthUserFile /var/lib/svn/conf/.dav_svn.passwd
        AuthzSVNAccessFile /var/lib/svn/conf/svnaccess.conf

    I am not fully sure why I need the second SVNParentPath in default_auth.conf, but I just added that today because I was getting error messages after adding the AuthzSVNAccessFile directive.

    With a totally permissive access file

        [/]
        joebloggs = rw

    the system worked fine (and was essentially unchanged), but as soon as I start trying to add any kind of restrictions, such as

        [sysadmin:/]
        joebloggs = rw

    instead, I get the 'Permission denied' errors again. The log file entries are:

        [Thu May 28 10:40:17 2009] [error] [client 89.100.219.180] Access denied: 'joebloggs' GET websvn:/
        [Thu May 28 10:40:20 2009] [error] [client 89.100.219.180] Access denied: 'joebloggs' GET svn:/sysadmin

    What do I need to do to get this to work? Have I configured Apache wrong, or is my understanding of the svnaccess.conf file incorrect? If I am going about this the wrong way, I have no particular attachment to my overall approach, so feel free to offer alternatives as well.

    UPDATE (20090528-1600): I attempted to implement this answer, but I still cannot get it to work properly. I know most of the configuration is correct, as I have added

        [/]
        joebloggs = rw

    at the start, and 'joebloggs' then has all the correct access. When I try to go repository-specific, though, doing something like

        [/]
        joebloggs = rw

        [sysadmin:/]
        mickmurphy = rw

    I get a permission denied error for mickmurphy (joebloggs still works), with an error similar to what I already had previously:

        [Thu May 28 10:40:20 2009] [error] [client 89.100.219.180] Access denied: 'mickmurphy' GET svn:/sysadmin

    Also, I forgot to explain previously that all my repositories are underneath /var/lib/svn/repository.

    UPDATE (20090529-1245): Still no luck getting this to work, but all the signs seem to be pointing to the issue being with path-based access control in subversion not working properly. My assumption is that I have not conf
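
    For reference, a minimal path-based policy matching the stated goal might look like this (hedged: the section names before the colon must be the repository directory names under the SVNParentPath, and "* =" makes deny-by-default explicit):

        [groups]
        sysadmins = joebloggs, jimsmith, mickmurphy

        [/]
        * =

        [sysadmin:/]
        @sysadmins = rw

    With SVNParentPath in play, an unprefixed [/] applies to every repository, which is one way a rule that "works" globally can mask a broken repository-specific section.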


  • Internal/external DNS with subdomains

    - by ScottMcGready
    I've got an internal DNS server (part of OS X Server) acting as the main DNS server for a specific (physical) site. When it can't resolve hostnames itself, it forwards requests to Google's DNS servers. Everything works well apart from a couple of issues, which I think may be related but can't get to the bottom of.

    I've got a number of intranet sites set up that people can access by going to something like:

        intranet.mydomainname.com
        selfservice.mydomainname.com

    These point to various servers in the building that host the sites. Whether internal or external (without VPN), I can access these sites just dandy. The issue comes when I want to host, say, test.mydomainname.com on an external server: it fails to resolve, because the primary zone for mydomainname.com is internal. How can I get the server to look up Google's DNS (or an external one) for that zone when a name is not in its list? I've tried everything I can think of (adding my host's nameservers, etc.) but nothing seems to work fully.

    Also, I can't access intranet sites when connected via VPN, and from what I can gather I believe this might be related to the DNS issue; I just wanted to give as much information as possible.

    Edit: The domain mydomainname.com is hosted externally and pointed at the site's public IP. From there we can forward the requests to the relevant internal server. Externally everything works; internally, though, any subdomain of mydomainname.com is served locally, and I want it to be served from Google's DNS / externally.

    DNS configuration: here is the current DNS configuration (OS X Server's DNS tab). I've blurred out the .private address as it's not really relevant; it's the server's name. The colored dots are just there to link everything together. (Screenshot omitted.)

    In an attempt to clarify, this is what I want:

        intranet.mydomain.com    -> 192.168.0.12
        selfservice.mydomain.com -> 192.168.0.13
        *.mydomain.com           -> forward to external DNS
        mydomain.com             -> forward to external DNS

    At the moment any subdomain of mydomain.com is not forwarded on. I think this is because the primary zone is mydomain.com with an NS of intranet.mydomain.com, but I could do with a little nod in the right direction.
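
    In BIND terms (OS X Server's DNS service is BIND underneath), the desired wildcard behaviour falls out of defining only narrow zones rather than a zone for the whole domain; a hedged sketch:

        // serve exactly the internal hosts...
        zone "intranet.mydomain.com"    { type master; file "db.intranet"; };
        zone "selfservice.mydomain.com" { type master; file "db.selfservice"; };
        // ...and define no zone for mydomain.com itself, so test.mydomain.com
        // and the bare domain follow the server's forwarders to external DNS

    The GUI may resist this layout, but the principle holds either way: as long as the server is authoritative for mydomain.com, it will answer (or NXDOMAIN) every subdomain itself instead of forwarding.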


  • Configuring varnish and django (apache/modwsgi)

    - by Hedde
    I am trying to work out why my application keeps hitting the database while I have Varnish set up in front of Apache. I think I am missing some vital configuration; any tips are welcome.

    This is my curl result:

        HTTP/1.1 200 OK
        Server: Apache/2.2.16 (Debian)
        Content-Language: en-us
        Vary: Accept,Accept-Encoding,Accept-Language,Cookie
        Cache-Control: s-maxage=60, no-transform, max-age=60
        Content-Type: application/json; charset=utf-8
        Date: Sat, 15 Sep 2012 08:19:17 GMT
        Connection: keep-alive

    My varnishlog:

        13 BackendClose - apache
        13 BackendOpen b apache 127.0.0.1 47665 127.0.0.1 8000
        13 TxRequest b GET
        13 TxURL b /api/v1/events/?format=json
        13 TxProtocol b HTTP/1.1
        13 TxHeader b User-Agent: curl/7.19.7 (universal-apple-darwin10.0) libcurl/7.19.7 OpenSSL/0.9.8r zlib/1.2.3
        13 TxHeader b Host: foobar.com
        13 TxHeader b Accept: */*
        13 TxHeader b X-Forwarded-For: 92.64.200.145
        13 TxHeader b X-Varnish: 979305817
        13 TxHeader b Accept-Encoding: gzip
        13 RxProtocol b HTTP/1.1
        13 RxStatus b 200
        13 RxResponse b OK
        13 RxHeader b Date: Sat, 15 Sep 2012 08:21:28 GMT
        13 RxHeader b Server: Apache/2.2.16 (Debian)
        13 RxHeader b Content-Language: en-us
        13 RxHeader b Content-Encoding: gzip
        13 RxHeader b Vary: Accept,Accept-Encoding,Accept-Language,Cookie
        13 RxHeader b Cache-Control: s-maxage=60, no-transform, max-age=60
        13 RxHeader b Content-Length: 6399
        13 RxHeader b Content-Type: application/json; charset=utf-8
        13 Fetch_Body b 4(length) cls 0 mklen 1
        13 Length b 6399
        13 BackendReuse b apache
        11 SessionOpen c 92.64.200.145 53236 :80
        11 ReqStart c 92.64.200.145 53236 979305817
        11 RxRequest c HEAD
        11 RxURL c /api/v1/events/?format=json
        11 RxProtocol c HTTP/1.1
        11 RxHeader c User-Agent: curl/7.19.7 (universal-apple-darwin10.0) libcurl/7.19.7 OpenSSL/0.9.8r zlib/1.2.3
        11 RxHeader c Host: foobar.com
        11 RxHeader c Accept: */*
        11 VCL_call c recv lookup
        11 VCL_call c hash
        11 Hash c /api/v1/events/?format=json
        11 Hash c foobar.com
        11 VCL_return c hash
        11 VCL_call c miss fetch
        11 Backend c 13 apache apache
        11 TTL c 979305817 RFC 60 -1 -1 1347697289 0 1347697288 0 60
        11 VCL_call c fetch deliver
        11 ObjProtocol c HTTP/1.1
        11 ObjResponse c OK
        11 ObjHeader c Date: Sat, 15 Sep 2012 08:21:28 GMT
        11 ObjHeader c Server: Apache/2.2.16 (Debian)
        11 ObjHeader c Content-Language: en-us
        11 ObjHeader c Content-Encoding: gzip
        11 ObjHeader c Vary: Accept,Accept-Encoding,Accept-Language,Cookie
        11 ObjHeader c Cache-Control: s-maxage=60, no-transform, max-age=60
        11 ObjHeader c Content-Type: application/json; charset=utf-8
        11 Gzip c u F - 6399 69865 80 80 51128
        11 VCL_call c deliver deliver
        11 TxProtocol c HTTP/1.1
        11 TxStatus c 200
        11 TxResponse c OK
        11 TxHeader c Server: Apache/2.2.16 (Debian)
        11 TxHeader c Content-Language: en-us
        11 TxHeader c Vary: Accept,Accept-Encoding,Accept-Language,Cookie
        11 TxHeader c Cache-Control: s-maxage=60, no-transform, max-age=60
        11 TxHeader c Content-Type: application/json; charset=utf-8
        11 TxHeader c Date: Sat, 15 Sep 2012 08:21:29 GMT
        11 TxHeader c Connection: keep-alive
        11 Length c 0
        11 ReqEnd c 979305817 1347697288.292612076 1347697289.456128597 0.000086784 1.163468122 0.000048399
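
    A hedged experiment for the repeated misses: the backend sends Vary: ... Cookie, so any cookie (session, analytics) fragments the cache per client, and requests carrying cookies are typically passed straight to the backend by the default VCL. Stripping cookies for the read-only API paths in vcl_recv makes those responses shareable:

        sub vcl_recv {
            # anonymous API reads do not depend on the session
            if (req.url ~ "^/api/") {
                unset req.http.Cookie;
            }
        }

    The curl test above carries no cookie, which is why it caches after the first fetch; real browsers almost always send one, which would explain Django still seeing most of the traffic.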

    Read the article

  • Profiles and using the local profile for a domain user

    - by Harry
    I'm having some trouble with profiles and would like to reach out for some help. I've tried to do some research to help myself along, but I'm not making much progress on my own. I've pretty much taken over the sysadmin duties for my small lab; I don't have much experience to justify it, other than being the only one with the time and dedication to go at it (the environment was in a state of disrepair). The network and domain I look after are extremely small by most standards, about 10 users at a time, but activity on the network is pretty intensive and we work with fairly large files. None of the network is online, which is nice at the moment because it's one less headache.

    On to my profile problem: I have set up roaming profiles for the users on the network. After a little research, I think I will switch to a hybrid of folder redirection and roaming profiles, as this seems to be best practice, and I don't want users waiting a long time to log on if they have a bloated profile.

    I've finally got a build working using MDT. We have Mac Pros, and it wasn't fun getting everything to play nice. I set up a reference computer, installed all the software and tools each user would need, and adjusted the settings and preferences to how we need them. I then used MDT to sysprep and capture an image of that reference computer, and with the reference image I can push builds out to the rest of the desktops in the environment.

    The issue appears when we join a computer to the domain. A user can log on and work fine, but I'd like more: when logged on with a domain account, they lose a lot of the icons I had on the reference image, as well as the desktop background and some other miscellaneous settings. I would love domain users to log on and see the icons and desktop environment exactly as I set them up on the reference computer. I'm not sure if this is possible, or if it's something simple I'm missing, but any help would be greatly appreciated!
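    From the reading I've done so far, I suspect the piece I'm missing is the CopyProfile setting: as I understand it, during sysprep it copies the customized built-in Administrator profile over the default profile, so new users (including domain users) inherit the desktop, icons, and preferences from the reference machine. A fragment of what I believe the Unattend.xml needs is below; the processorArchitecture value depends on the image (amd64 vs. x86), and I haven't verified this in my MDT task sequence yet:

        <!-- Unattend.xml fragment (specialize pass), untested sketch -->
        <settings pass="specialize">
          <component name="Microsoft-Windows-Shell-Setup"
                     processorArchitecture="amd64"
                     publicKeyToken="31bf3856ad364e35"
                     language="neutral" versionScope="nonSxS">
            <CopyProfile>true</CopyProfile>
          </component>
        </settings>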

    Read the article

  • PHP: gethostbyname() suddenly no longer resolves names to IPs when run in Apache

    - by hurikhan77
    One of our older legacy servers, which gets no further updates or reconfiguration, suddenly stopped resolving hostnames to IPs when PHP is executed within Apache. However, it still works fine when executed from the CLI. From the RSS cache's last modification time, I deduce that it stopped working around Mar 28th. To reproduce the problem, I created a script using fsockopen() and it said "connection failed (errno 2)". I further reduced the problem to a failed name resolution:

        <?php
        $addr = gethostbyname("twitter.com");
        echo "ADDR($addr)";
        ?>

    When I run this through Apache, the output is ADDR(twitter.com), which is wrong. When I run it from the CLI, the output is ADDR(aaa.bbb.ccc.ddd), with varying IP addresses, as expected.

    Nothing in the server setup has changed. CLI and the Apache module share the same php.ini. PHP is v4.4.9 with Zend Optimizer v2.5.10; Apache is v1.3.31. I know the versions are old, but since nothing has changed, "try upgrading first" is no solution: the server's feature set and versions are frozen and it will be replaced soon. Still, we need a fix.

    If I run dig through the script, it works in both environments (mod_php and CLI), but that is nothing more than an ugly hack: it would mean many edits and much testing throughout the whole script base, which is also undesirable because the PHP application on the server is frozen too and only receives security updates. It will be replaced by a complete rewrite on the new server, but as the rewrite will take some time and will successively replace parts of the legacy application, we need a fix for the resolver problem in the meantime.

    I already googled a bit, and while the problem is known, many people never found a fix. Raising memory limits did not work; restarts did not work. The resolver in mod_php just stopped working for no apparent reason. :-(
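    Until the rewrite lands, a stopgap I'm considering is wrapping name resolution in one helper so the dig hack lives in a single place rather than at every fsockopen() call site. This is an untested PHP 4-compatible sketch; resolve_host() is my own name, not an existing API, and it assumes shell_exec() is permitted:

        <?php
        // Stopgap resolver wrapper (PHP 4 compatible, untested on the legacy box)
        function resolve_host($host) {
            $addr = gethostbyname($host);
            if ($addr != $host) {
                // gethostbyname() returns its input unchanged on failure,
                // so anything else means the normal resolver worked
                return $addr;
            }
            // Fallback: the CLI resolver still works, so ask dig directly
            $out = shell_exec('dig +short ' . escapeshellarg($host) . ' A 2>/dev/null');
            if ($out) {
                $lines = explode("\n", trim($out));
                $last = $lines[count($lines) - 1]; // last line skips any CNAME chain entries
                if (preg_match('/^\d+\.\d+\.\d+\.\d+$/', $last)) {
                    return $last;
                }
            }
            return false;
        }
        ?>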

    Read the article

  • Installing ImageMagick on Mac OSX 10.6

    - by Russell C.
    I just got a new Mac and am trying to set up a local Perl development environment. I'm using MAMP but also need the ImageMagick Perl module installed in order to do some of the photo processing our scripts require. I tried installing ImageMagick manually but ran into some issues, and from reading online a lot of people reported problems going this route. The general consensus was to install it using MacPorts instead, so I went ahead and installed MacPorts. Unfortunately, MacPorts can't seem to install it successfully either. Here is the command I'm using to try to install ImageMagick:

        sudo port install p5-perlmagick

    And here are all the errors reported during the install:

        ---> Computing dependencies for p5-perlmagick
        ---> Building p5-perlmagick
        Error: Target org.macports.build returned: shell command " cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_perl_p5-perlmagick/work/PerlMagick-6.32" && /usr/bin/make -j2 all " returned error 2
        Command output:
        Magick.xs:10918: error: 'struct Methods' has no member named 'exception'
        Magick.xs:10918: error: request for member 'severity' in something not a structure or union
        Magick.xs:10918: error: 'ErrorException' undeclared (first use in this function)
        Magick.xs:10919: error: 'struct Methods' has no member named 'exception'
        Magick.xs:10920: warning: implicit declaration of function 'GetImageException'
        Magick.xs:10922: error: 'struct PackageInfo' has no member named 'image_info'
        Magick.xs:10922: error: 'struct Methods' has no member named 'adjoin'
        Magick.xs:10929: error: request for member 'severity' in something not a structure or union
        Magick.xs:10929: error: 'UndefinedException' undeclared (first use in this function)
        Magick.xs:10929: error: request for member 'severity' in something not a structure or union
        Magick.xs:10929: error: request for member 'reason' in something not a structure or union
        Magick.xs:10929: error: request for member 'severity' in something not a structure or union
        Magick.xs:10929: error: request for member 'reason' in something not a structure or union
        Magick.xs:10929: warning: pointer/integer type mismatch in conditional expression
        Magick.xs:10929: error: request for member 'description' in something not a structure or union
        Magick.xs:10929: error: request for member 'description' in something not a structure or union
        Magick.xs:10929: error: request for member 'severity' in something not a structure or union
        Magick.xs:10929: error: request for member 'description' in something not a structure or union
        Magick.xs:10929: warning: pointer/integer type mismatch in conditional expression
        Magick.xs:10929: error: request for member 'description' in something not a structure or union
        Magick.xs:10929: warning: passing argument 2 of 'Perl_sv_catpv' from incompatible pointer type
        Magick.xs:10929: warning: unused variable 'message'
        Magick.xs:10856: warning: unused variable 'filename'
        Magick.c:10784: warning: unused variable 'ref'
        Magick.c:10777: warning: unused variable 'ix'
        Magick.xs: In function 'boot_Image__Magick':
        Magick.xs:2122: warning: implicit declaration of function 'InitializeMagick'
        Magick.xs:2123: warning: implicit declaration of function 'SetWarningHandler'
        Magick.xs:2124: warning: implicit declaration of function 'SetErrorHandler'
        make: *** [Magick.o] Error 1
        Error: Status 1 encountered during processing.
        Before reporting a bug, first run the command again with the -d flag to get complete output.

    I have no idea what the problem might be or how to go about successfully installing ImageMagick. I'd appreciate any help and advice from someone out there who has done this successfully. Thanks in advance!
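    For what it's worth, the errors look like the PerlMagick 6.32 sources being compiled against headers from a different ImageMagick version (missing struct members, undeclared exception symbols), i.e. a mismatch between the Perl binding and the C library. If that reading is right, my first attempt would be syncing the ports tree and rebuilding so the two match. These are standard MacPorts commands, though I can't confirm they fix this particular failure:

        sudo port selfupdate            # refresh the ports tree (PerlMagick 6.32 is quite old)
        sudo port clean p5-perlmagick   # throw away the failed build
        sudo port install ImageMagick   # make sure the C library itself is current
        sudo port install p5-perlmagick # rebuild the Perl binding against it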

    Read the article

  • Port forwarding problem

    - by Steve
    I have a modem connected to an ADSL2 network and a router connected to the modem; the rest of the machines all connect to the router. The modem's IP is 192.168.1.1 and the router's IP is 192.168.0.1. From the modem configuration I can see that the modem thinks the router's IP is 192.168.1.2, and I can reach the router at either 192.168.0.1 or 192.168.1.2.

    Now I forward a port from the router to a private machine, and it works: if I browse to 192.168.1.2 I am redirected to the private machine, but if I use 192.168.0.1 I still get the router's configuration page. I also set up port forwarding on the modem. Since the modem sees only the router, I can only forward the port to a specific port on the router, and my thinking is that the private machine should be reachable after two rounds of forwarding, once on the modem and once on the router.

    I also have a static public IP. My goal is that when someone enters the public IP, they reach the private machine. But when I use an online port-forwarding tester, it always reports that the port is closed on the public IP. My questions:

    1. Why does my router have two IPs?
    2. Why does port forwarding appear to work from one of those IPs but not the other?
    3. I think port forwarding only applies to traffic arriving from outside, not from both outside and inside; otherwise, if I forwarded port 80 on my router/modem, I could never reach their configuration pages again because everything would be forwarded. Am I right?
    4. How can I achieve the goal described above, so that I have a dedicated server of my own that users can reach via the public IP?

    Can anyone correct any mistakes I've made? I am using a Netconn modem and a D-Link DIR-300 router. Thank you very much for any help.

    Edit: Suppose I have set the whole thing up correctly. Now I want to test my website by visiting it via the public IP, but the port forwarding doesn't work. Is that because I am inside the local network, so my traffic never goes through the forwarding rules? I've asked friends outside my local network to try, and they can see the website. What should I do so that I can test from the inside? Thank you very much.
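    If it turns out the router simply doesn't support NAT loopback (hairpinning), which I gather is common on consumer devices, one workaround for inside-the-LAN testing would be pointing the site's hostname straight at the private machine in each tester's hosts file. This assumes the site is reached by name rather than by raw IP; the address and names below are placeholders, not my real ones:

        # /etc/hosts on a LAN test machine
        # (C:\Windows\System32\drivers\etc\hosts on Windows)
        # 192.168.0.100 stands in for the private server's real LAN address
        192.168.0.100    www.example.com example.com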

    Read the article
