Search Results

Search found 10921 results on 437 pages for 'latex environment'.


  • Network is not available even when the Cisco VPN client is connected. Wrong route?

    - by javapowered
    I'm using Vodafone 3G modem. I've disabled other network devices in the system (ethernet, wifi, wimax) turned off firewall and antivirus. cisco vpn client connects successfully but I still can not access computer 192.168.147.120 (as well as any other computer from network). Any suggestions are welcome as I don't know what to do. ipconfig /all and route print commands (translated to english): Microsoft Windows [Version 6.1.7601] (C) Microsoft Corporation (Microsoft Corp.), 2009. All rights reserved. C: \ Users \ Oleg> ipconfig / all IP Configuration for Windows The name of the computer. . . . . . . . . : OlegPC The primary DNS-suffix. . . . . . : Node Type. . . . . . . . . . . . . : Hybrid IP-routing is enabled. . . . : No WINS-proxy enabled. . . . . . . : No Ethernet adapter Local Area Connection 4: DNS-suffix for this connection. . . . . : Description. . . . . . . . . . . . . : Cisco Systems VPN Adapter Physical Address. . . . . . . . . 00-05-9A-3C-78-00 DHCP is enabled. . . . . . . . . . . : No Autoconfiguration Enabled. . . . . . : Yes Local IPv6-address channel. . . : Fe80:: c073: 41b2: 852f: eb87% 26 (Preferred) IPv4-address. . . . . . . . . . . . : 10.53.127.204 (Preferred) The subnet mask. . . . . . . . . . : 255.0.0.0 Default Gateway. . . . . . . . . : IAID DHCPv6. . . . . . . . . . . : 536872346 DUID the client DHCPv6. . . . . . . 00-01-00-01-14-6F-4C-8D-60-EB-69-85-10-2D DNS-servers. . . . . . . . . . . : Fec0: 0:0: ffff:: 1% 1 fec0: 0:0: ffff:: 2% 1 fec0: 0:0: ffff:: 3% 1 NetBios over TCP / IP. . . . . . . . : Disabled Adapter mobile broadband connection through a broadband adapter mobile communications: DNS-suffix for this connection. . . . . : Description. . . . . . . . . . . . . : Vodafone Mobile Broadband Network Adapter (Huawei) Physical Address. . . . . . . . . 58-2C-80-13-92-63 DHCP is enabled. . . . . . . . . . . : No Autoconfiguration Enabled. . . . . . : Yes IPv4-address. . . . . . . . . . . . : 10.229.227.77 (Preferred) The subnet mask. . . . . . . . . . : 255.255.255.252 Default Gateway. . . . . . . . . : 10.229.227.78 DNS-servers. . . . . . . . . . . : 163.121.128.134 212.103.160.18 NetBios over TCP / IP. . . . . . . . : Disabled Tunnel adapter isatap. {737FF02E-D473-4F91-840E-2A4DD293FC12}: State of the environment. . . . . . . . : DNS Suffix. DNS-suffix for this connection. . . . . : Description. . . . . . . . . . . . . : Adapter Microsoft ISATAP # 3 Physical Address. . . . . . . . . 00-00-00-00-00-00-00-E0 DHCP is enabled. . . . . . . . . . . : No Autoconfiguration Enabled. . . . . . : Yes Tunnel adapter isatap. {EF585226-5B07-4446-A5A4-CB1B8E4B13AC}: State of the environment. . . . . . . . : DNS Suffix. DNS-suffix for this connection. . . . . : Description. . . . . . . . . . . . . : Adapter Microsoft ISATAP # 4 Physical Address. . . . . . . . . 00-00-00-00-00-00-00-E0 DHCP is enabled. . . . . . . . . . . : No Autoconfiguration Enabled. . . . . . : Yes Tunnel adapter Teredo Tunneling Pseudo-Interface: DNS-suffix for this connection. . . . . : Description. . . . . . . . . . . . . : Teredo Tunneling Pseudo-Interface Physical Address. . . . . . . . . 00-00-00-00-00-00-00-E0 DHCP is enabled. . . . . . . . . . . : No Autoconfiguration Enabled. . . . . . : Yes IPv6-address. . . . . . . . . . . . : 2001:0:4137:9 e76: ea: b77: f51a: 1cb2 (Basically d) Local IPv6-address channel. . . : Fe80:: ea: b77: f51a: 1cb2% 16 (Preferred) Default Gateway. . . . . . . . . ::: NetBios over TCP / IP. . . . . . . . 
: Disabled C: \ Users \ Oleg> route print ================================================== ========================= List of interfaces 26 ... 00 05 9a 3c 78 00 ...... Cisco Systems VPN Adapter 23 ... 58 2c 80 13 92 63 ...... Vodafone Mobile Broadband Network Adapter (Huawei) 1 ........................... Software Loopback Interface 1 19 ... 00 00 00 00 00 00 00 e0 Adapter Microsoft ISATAP # 3 20 ... 00 00 00 00 00 00 00 e0 Adapter Microsoft ISATAP # 4 16 ... 00 00 00 00 00 00 00 e0 Teredo Tunneling Pseudo-Interface ================================================== ========================= IPv4 Route Table ================================================== ========================= Active Routes: Network Destination Netmask Gateway Interface Metric 0.0.0.0 0.0.0.0 10.229.227.78 10.229.227.77 296 10.0.0.0 255.0.0.0 On-link 10.53.127.204 286 10.6.93.21 255,255,255,255 10.0.0.1 10.53.127.204 100 10.13.50.12 255,255,255,255 10.0.0.1 10.53.127.204 100 10.53.8.0 255.255.252.0 10.0.0.1 10.53.127.204 100 10.53.127.204 255.255.255.255 On-link 10.53.127.204 286 10.53.128.0 255.255.248.0 10.0.0.1 10.53.127.204 100 10.53.148.0 255,255,255,240 10.0.0.1 10.53.127.204 100 10.53.148.16 255,255,255,240 10.0.0.1 10.53.127.204 100 10.229.227.76 255.255.255.252 On-link 10.229.227.77 296 10.229.227.77 255.255.255.255 On-link 10.229.227.77 296 10.229.227.79 255.255.255.255 On-link 10.229.227.77 296 10.255.255.255 255.255.255.255 On-link 10.53.127.204 286 127.0.0.0 255.0.0.0 On-link 127.0.0.1 306 127.0.0.1 255.255.255.255 On-link 127.0.0.1 306 127.255.255.255 255.255.255.255 On-link 127.0.0.1 306 192.168.147.0 255,255,255,240 10.0.0.1 10.53.127.204 100 192.168.147.96 255,255,255,240 10.0.0.1 10.53.127.204 100 192,168,147,112 255,255,255,240 10.0.0.1 10.53.127.204 100 192,168,147,128 255,255,255,240 10.0.0.1 10.53.127.204 100 192,168,147,144 255,255,255,240 10.0.0.1 10.53.127.204 100 192,168,147,224 255,255,255,240 10.0.0.1 10.53.127.204 100 192.168.214.0 255.255.255.0 10.0.0.1 10.53.127.204 100 192.168.215.0 255.255.255.0 10.0.0.1 10.53.127.204 100 194.247.133.19 255,255,255,255 10.0.0.1 10.53.127.204 100 213,247,231,194 255,255,255,255 10.229.227.78 10.229.227.77 100 224.0.0.0 240.0.0.0 On-link 127.0.0.1 306 224.0.0.0 240.0.0.0 On-link 10.229.227.77 296 224.0.0.0 240.0.0.0 On-link 10.53.127.204 286 255.255.255.255 255.255.255.255 On-link 127.0.0.1 306 255.255.255.255 255.255.255.255 On-link 10.229.227.77 296 255.255.255.255 255.255.255.255 On-link 10.53.127.204 286 ================================================== ========================= Persistent Routes: None IPv6 Route Table ================================================== ========================= Active Routes: If Metric Network Destination Gateway 16 58:: / 0 On-link 1306:: 1 / 128 On-link 16 58 2001:: / 32 On-link 16 306 2001: 0:4137:9 e76: ea: b77: f51a: 1cb2/128 On-link 16 306 fe80:: / 64 On-link 26 286 fe80:: / 64 On-link 16 306 fe80:: ea: b77: f51a: 1cb2/128 On-link 26 286 fe80:: c073: 41b2: 852f: eb87/128 On-link 1306 ff00:: / 8 On-link 16 306 ff00:: / 8 On-link 26 286 ff00:: / 8 On-link ================================================== ========================= Persistent Routes: None C: \ Users \ Oleg>
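    A quick sanity check on the output above (a sketch, not a definitive fix): the 192.168.147.x routes all point at the gateway 10.0.0.1 through the VPN adapter (interface 26, 10.53.127.204), so the first thing worth verifying is whether that gateway actually answers over the tunnel. The host route in the last two commands is only a temporary experiment and assumes the interface index 26 shown in the interface list.

        ping 10.0.0.1
        tracert -d 192.168.147.120
        route print 192.168.147*
        rem Temporary experiment only: pin a host route for the target through the VPN adapter
        route add 192.168.147.120 mask 255.255.255.255 10.0.0.1 metric 1 if 26
        route delete 192.168.147.120

    If 10.0.0.1 never replies, the problem is more likely on the concentrator side (split-tunnel ACL or firewall) than in the local routing table.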

    Read the article

  • Home and Small-to-Medium Enterprise network manufacturer choice: Netgear, Linksys or D-Link?

    - by Kedare
    (Please don't close this post, it's a serious question, so be cool, no trolls please; I need an answer ;p) Hello, I am looking for an alternative to Cisco (too expensive for me!) for semi-professional use (at home, but with advanced features, as I'm studying IT) and in small/medium enterprises. I think I will choose between Linksys (including Cisco Small Business), Netgear and D-Link, but I've never really used these products. What I need is a manufacturer that makes "almost" every type of networking equipment (like Cisco, but cheaper). Here are my needs:
      I need almost all my products to be rackable
      I need a good warranty (Netgear's lifetime warranty rules!)
      I need a "unified" network environment
    I made a little comparison of the characteristics that interest me after hours of searching the Internet (based on results found on many websites; prices are from the French shop ldlc-pro.com):
      Hotline/support quality: Netgear: not so bad / Linksys: not so bad / D-Link: poor!
      Most common warranty: Netgear: unlimited lifetime warranty! / Linksys: limited 3-year warranty / D-Link: limited 5-year warranty (unlimited in the US, but I'm in France :(...)
      VPN protocols supported by routers in endpoint mode: Netgear: only IPsec :( / Linksys: IPsec, PPTP, L2TP / D-Link: IPsec, PPTP, L2TP
      Cheapest 8-port Gb switch: Netgear: 30€ / Linksys: 47€ / D-Link: 30€
      Cheapest 48-port managed switch with 1Gb uplink(s): Netgear: 263€ / Linksys: 630€ / D-Link: 600€
      Cheapest VPN router: Netgear: 100€ / Linksys: 80€ / D-Link: 60€
      Cheapest rackable switch: Netgear: 50€ / Linksys: 87€ / D-Link: 50€
      Cheapest rackable and managed switch: Netgear: 120€ / Linksys: 370€ / D-Link: 171€
    Netgear and D-Link are in the same price range, while Linksys is more expensive. I've searched for some other criteria (the full comparison is here, in French, with shop/source links: http://forums.jeuxonline.info/showthread.php?t=1072280) and made a final score for each manufacturer:
      Score including the IP camera sub-score: Netgear: 6.2/10 / Linksys: 7.3/10 / D-Link: 7.0/10
      Score excluding the IP camera sub-score: Netgear: 6.9/10 / Linksys: 7.0/10 / D-Link: 6.7/10
    In both cases, Linksys wins. So that is my little comparison, but because I've never really used this equipment, I need your help deciding which manufacturer to choose for both my personal and corporate use. So here are the questions: Which manufacturer do you recommend (not Cisco, except Small Business)? Why? Have you called the customer support call center of any of these manufacturers? How was it? Have you had problems or bad experiences with their equipment? Any other advice? ;) Thank you!

    Read the article

  • Apache2 CGIs crash on ODBC DB access (but run fine from shell)

    - by Martin
    Problem overview (details below): I'm having an apache2 + ruby integration problem when trying to connect to an ODBC data source. The main problem boils down to the fact that scripts that run fine from an interactive shell crash ruby on the database connect line when run as a cgi from apache2. Ruby cgi's that don't try to access the ODBC datasource work fine. And (again) ruby scripts that connect to a database with ODBC do fine when executed from the command line (or cron). This behavior is identical when I use perl instead of ruby. So, the issue seems to be with the environment provided for ruby (perl) by apache2, but I can't figure out what is wrong or what to do about it. Does anyone have any suggestions on how to get these cgi scripts to work properly? I've tried many different things to get this to work, and I'm happy to provide more detail of any aspect if that will help. Details: Mac OS X Server 10.5.8 Xserve 2 x 2.66 Dual-Core Intel Xeon (12 GB) Apache 2.2.13 ruby 1.8.6 (2008-08-11 patchlevel 287) [universal-darwin9.0] ruby-odbc 0.9997 dbd-odbc (0.2.5) dbi (0.4.3) mod_ruby 1.3.0 Perl -- 5.8.8 DBI -- 1.609 DBD::ODBC -- 1.23 odbc driver: DataDirect SequeLink v5.5 (/Library/ODBC/SequeLink.bundle/Contents/MacOS/ivslk20.dylib) odbc datasource: FileMaker Server 10 (v10.0.2.206) ) a minimal version of a script (anonymized) that will crash in apache but run successfully from a shell: #!/usr/bin/ruby require 'cgi' require 'odbc' cgi = CGI.new("html3") aConnection = ODBC::connect('DBFile', "username", 'password') aQuery = aConnection.prepare("SELECT zzz_kP_ID FROM DBTable WHERE zzz_kP_ID = 81044") aQuery.execute aRecord = aQuery.fetch_hash.inspect aQuery.drop aConnection.disconnect # aRecord = '{"zzz_kP_ID"=>81044.0}' cgi.out{ cgi.html{ cgi.body{ "<pre>Primary Key: #{aRecord}</pre>" } } } Example of running this from a shell: gamma% ./minimal.rb (offline mode: enter name=value pairs on standard input) Content-Type: text/html Content-Length: 134 <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN"><HTML><BODY><pre>Primary Key: {"zzz_kP_ID"=>81044.0}</pre></font></BODY></HTML>% gamma% ) typical crash log lines: Dec 22 14:02:38 gamma ReportCrash[79237]: Formulating crash report for process perl[79236] Dec 22 14:02:38 gamma ReportCrash[79237]: Saved crashreport to /Library/Logs/CrashReporter/perl_2009-12-22-140237_HTCF.crash using uid: 0 gid: 0, euid: 0 egid: 0 Dec 22 14:03:13 gamma ReportCrash[79256]: Formulating crash report for process perl[79253] Dec 22 14:03:13 gamma ReportCrash[79256]: Saved crashreport to /Library/Logs/CrashReporter/perl_2009-12-22-140311_HTCF.crash using uid: 0 gid: 0, euid: 0 egid: 0

    Read the article

  • Install Ubuntu 12.04 in UEFI mode on a HP Pavilion dv6-6c40ca

    - by Marlen T. B.
    I have recently (as of July 2012) bought a HP Pavilion dv6-6c40ca laptop. It came pre-installed with Windows 7 on an MBR. I installed Ubuntu 12.04 on it on a GPT partition in what I think is BIOS emulation mode. I made a BIOS-Grub partition so the install didn't fail. That is what it is for .. right? Now I want to upgrade to UEFI mode. How would I Install Ubuntu 12.04 in UEFI mode on a HP Pavilion dv6-6c40ca. Or is it impossible? My laptop, despite its new age may not be UEFI 2.0+ capable. If it isn't how can I install a software UEFI (i.e. a DUET such as the one by tianocore). Or is this too impossible? A link to my laptop's specs is: http://h10025.www1.hp.com/ewfrf/wc/document?docname=c03137924&tmp_task=prodinfoCategory&cc=ca&dlc=en&lang=en&lc=en&product=5218530 My laptop should have a UEFI given this link from HP http://h10025.www1.hp.com/ewfrf/wc/document?cc=us&lc=en&docname=c01442956#N218. And from the link I draw a quote: That means most notebooks distributed with Windows Vista, and all notebooks distributed with Windows 7, have the UEFI environment. My laptop had Windows 7 Home Premium pre-installed. OK. Following the comments so far -- NOTE: I am trying to do this on an external drive so I can see if it works. I have partitioned the drive using GParted as a GPT drive. Created a 200MB partition at the beginning of the drive with a FAT32 file system. Given the 200MB partition a label of "EFI". Set the boot flag on the 200MB partition. What should a do next to install Ubuntu 12.04? Given the link: https://help.ubuntu.com/community/UEFIBooting#Selecting_the_.28U.29EFI_Graphic_Protocol In my first read through (just to see if I will understand everything before I start) I get to step 2.3 Install GRUB2 in (U)EFI systems The first line is Boot into Linux (any live ISO) preferably in UEFI mode. Um .. how do you tell what mode your live CD is in?! And how do you change it if the mode is wrong?
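    On the last question: a simple way to tell how the live session was booted is to look for the EFI variables directory, which the kernel only exposes after a UEFI boot. This is a generic check, not specific to the HP firmware:

        # Run inside the booted live session
        [ -d /sys/firmware/efi ] && echo "Booted in UEFI mode" || echo "Booted in BIOS/CSM mode"
        dmesg | grep -i efi | head    # a UEFI boot normally mentions EFI early in the kernel log

    Switching modes is usually done from the firmware's boot menu (Esc/F9 on most HP notebooks) by picking the entry labelled "UEFI" for the USB stick or DVD rather than the plain/legacy one.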

    Read the article

  • MD3200 - 3 to 4x less throughput than MD1220. Am I missing something here?

    - by Igor Polishchuk
    I have two R710 servers with similar configurations. One, in my office, has an MD1220 attached. The other, in my hosting vendor's datacenter, has an MD3200. I'm getting significantly worse throughput from the MD3200 at my vendor's setup. I'm mostly interested in sequential writes, and I'm getting these results in bonnie++ and dd tests:
      Seq. writes on MD1220 in my office: 1.1 GB/s (bonnie++), 1.3 GB/s (dd)
      Seq. writes on MD3200 at my vendor's: 240 MB/s (bonnie++), 310 MB/s (dd)
    Unfortunately, I could not test exactly the same configurations, but the two I have should be comparable. If anything, my well-performing environment is cheaper than the poorly performing one. I expect at least similar throughput from these two setups. My vendor cannot really help me. Hopefully somebody more familiar with DAS performance can look at this and tell me whether I'm missing something here or my expectations are too high. To summarize, the question is: is it reasonable to expect about 100 MB/s of sequential write throughput per pair of drives in RAID10 on the MD3200? Is there any trick to get such performance out of the MD3200 with dual controllers, as opposed to the simple MD1220 with a single H800 adapter? More details about the configurations:
      Good one in my office: Dell R710, 2x CPU X5650 @ 2.67GHz (12 cores), 96GB DDR3, OS: RHEL 5.5, kernel 2.6.18-194.26.1.el5 x86_64, 20x300GB 2.5" SAS 10K in a single RAID10, 1MB chunk size, on MD1220 + Dell H800 I/O controller with 1GB cache in the host
      Not-so-good one at my vendor's: Dell R710, 2x CPU L5520 @ 2.27GHz (8 cores), 144GB DDR3, OS: RHEL 5.5, kernel 2.6.18-194.11.4.el5 x86_64, 20x146GB 2.5" SAS 15K in a single RAID10, 512KB chunk size, Dell MD3200, 2 I/O controllers in the array with 1GB cache each
    Additional information: I've also run the same tests on the same vendor's host, but the storage was two RAIDs of 14x146GB 15K RPM drives each in RAID 10, striped together at the OS level, on MD3000+MD1000. The performance was about 25% worse than on the MD3200 despite having more drives. When I ran similar tests on the internal storage of my vendor's host (2x146GB 15K RPM drives in RAID1, PERC 6i) I got about 128 MB/s seq. writes. Just two internal drives gave me about half of the 20 drives' throughput on the MD3200. The random I/O performance of the MD3200 setup is OK; it gives me at least 1300 IOPS. I mostly have problems with sequential I/O throughput. Thank you for looking into it. Regards, Igor
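    For reference, a hedged sketch of a sequential-write test that can be run identically on both arrays so the numbers are directly comparable (the mount point is a placeholder; the file size should exceed the controller cache):

        # Sequential write sketch: direct I/O, flushed before dd reports a rate
        dd if=/dev/zero of=/mnt/array/ddtest bs=1M count=16384 oflag=direct conv=fdatasync
        rm /mnt/array/ddtest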

    Read the article

  • Using WSUS Admin Console from outside domain

    - by Nick
    Environment: I have a workstation on our primary domain. We have a primary WSUS Server that is the upstream server of 8 different testing domains. The Primary WSUS server is not part of any domain. Routing is configured between my workstation and the Primary WSUS server. I can RDP to the Primary WSUS sever without any problem. The router is configured to forward any any between my workstation and the Primary WSUS server. This WSUS server cannot be part of a domain due to external requirements (I can't change them) on the lab I work in. The version of WSUS is WSUS 3.0 SP 2 What I want to do: I need to connect to the WSUS server with the WSUS Admin console from my local workstation. The end goal is to connect via Powershell and manage with that. I also need to take what I do here and port it to the 8 test domains so I can manage those WSUS servers. The routing is all in place so I can talk to the servers, it's just connecting to the WSUS console that is causing problems. The problem: I cannot get my workstation to connect to the WSUS Console. I get one of the following errors depending on the setup. 1st error: Cannot connect to 'WSUS'. You do not have the permissions required to access this WSUS server. To connect to the server you must be a member of the WSUS Administrators or WSUS Reporters security groups I also get the warning 7012 from the event log that says the same thing. 2nd error: Cannot connect to 'WSUS'. The server may be using another port or different Secure Sockets Layer setting. What I have tried: So far I have configured IIS for Anonymous Authentication on both the WSUS Administration and ApiRemoting30 using an account will call WSUS_User. With this in place, I get the 1st error. When I do this though, the local WSUS Console cannot be used either. Reverting back to only Windows Authentication allows the local console to work, but the remote console now give the 2nd error. I have confirmed the port, and that there is no SSL in use (which is a policy that is pushed from above, that I cannot effect). I have placed WSUS_User in the groups mentioned above, but it still does not connect. I made sure WSUS_User has full access on C:\Program Files\Update Services and C:\Program Files\Update Services\WebServices I am not very familiar with the workings of WSUS or IIS, and have gone as far as I can figure out on my own. Googling these errors all take me to the same steps about Anonymous Authentication and configuring permissions on folders. Note: I have cross-posted this to StackOverflow as well.
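    Since the stated end goal is PowerShell, here is a minimal connection sketch for once the permission problem is solved. It assumes the WSUS 3.0 administration console (and with it the Microsoft.UpdateServices.Administration assembly) is installed on the workstation; the server name and port are placeholders and must match the server's actual binding:

        # Sketch: connect to a remote WSUS server from PowerShell (no SSL, port 8530 assumed)
        [void][Reflection.Assembly]::LoadWithPartialName("Microsoft.UpdateServices.Administration")
        $wsus = [Microsoft.UpdateServices.Administration.AdminProxy]::GetUpdateServer("wsus-server", $false, 8530)
        $wsus.GetStatus()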

    Read the article

  • Apache Solr Admin on Tomcat Deployed in WebApps Directory

    - by KM01
    I am trying to get Apache Solr to work on Redhat6 and Tomcat6 (using these instructions), but get this error when browsing to the admin section, http://localhost:8080/solr-example/admin: HTTP Status 404 - missing core name in path type Status report message missing core name in path description The requested resource (missing core name in path) is not available. http://localhost:8080/solr-example loads fine, with a link to "Solr Admin." My setup is as follows: tomcat6: /etc/tomcat6 Solr: /app/solr/example I have a solr-example.xml in /etc/tomcat6/Catalina/localhost/, which reads: <?xml version="1.0" encoding="utf-8"?> <Context docBase="/app/solr/example/apache-solr-3.4.0.war" debug="0" crossContext="true"> <Environment name="solr/home" type="java.lang.String" value="/app/solr/example" override="true"/> </Context> I don't see anything in the logs (/var/log/tomcat6) ... only entires in catalina.out are regarding the starting and stopping of tomcat6. My questions are: 1.What else do I need to do to get "Solr Admin" to work under Tomcat? 2.Where are these "cores" supposed to be specified? I see an entry in /app/solr/example/solr/solr.xml ? <solr persistent="false"> adminPath: RequestHandler path to manage cores. If 'null' (or absent), cores will not be manageable via request handler <cores adminPath="/admin/cores" defaultCoreName="collection1"> <core name="collection1" instanceDir="." /> </cores> </solr> 3.How do I got about ensuring that logs are working correctly? I can't find logs that contain mention of the 404 above. Update in response to @quanta's comment: Downloaded former (apache-solr-3.4.0.tgz) dataDir was not set, now set to: <dataDir>${solr.data.dir:../solr/data}</dataDir> JAVA_OPTS: /usr/lib/jvm/java/bin/java -classpath :/usr/share/tomcat6/bin/bootstrap.jar:/usr/share/tomcat6/bin/tomcat-juli.jar:/usr/share/java/commons-daemon.jar -Dcatalina.base=/usr/share/tomcat6 -Dcatalina.home=/usr/share/tomcat6 -Djava.endorsed.dirs= -Djava.io.tmpdir=/var/cache/tomcat6/temp -Djava.util.logging.config.file=/usr/share/tomcat6/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager org.apache.catalina.startup.Bootstrap start catalina.out contains no indication of the above error
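    On questions 1 and 2, a hedged observation: in a multi-core Solr 3.x install the admin UI is served per core, so "missing core name in path" usually just means the core name (collection1 in the solr.xml above) has to appear in the URL; the welcome page's "Solr Admin" link may omit it. Assuming that core name:

        http://localhost:8080/solr-example/collection1/admin/
        http://localhost:8080/solr-example/admin/cores?action=STATUS    (core management, per the adminPath in solr.xml)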

    Read the article

  • Output php mail calls to log file

    - by Tom McQuarrie
    This question relates to the question found here: Find the php script thats sending mails Trying to do the exact same thing but can't get the log to output what I need. Not too experienced with serverfault and ideally I'd post my followup on the original question, or PM adam to see if he ever found a solution, but looks as though server fault doesn't work that way. I can post an "answer" but that's definitely not what this is. I have a script located at /usr/local/bin/sendmail-php-logged, with the following: #!/bin/sh logger -p mail.info sendmail-php: site=${HTTP_HOST}, client=${REMOTE_ADDR}, script=${SCRIPT_NAME}, filename=${SCRIPT_FILENAME}, docroot=${DOCUMENT_ROOT}, pwd=${PWD}, uid=${UID}, user=$(whoami) /usr/sbin/sendmail -t -i $* This is logging to /var/log/maillog, but as Adam mentions in his question, none of the server variables work. Output I'm getting is: Oct 4 12:16:21 fluke logger: sendmail-php: site=, client=, script=, filename=, docroot=, pwd=/var/www/html/aro_chroot/sites/arocms, uid=48, user=apache Oct 4 12:16:21 fluke logger: sendmail-php: site=, client=, script=, filename=, docroot=, pwd=/var/www/html/aro_chroot/sites/arocms, uid=48, user=apache Oct 4 12:17:03 fluke logger: sendmail-php: site=, client=, script=, filename=, docroot=, pwd=/var/www/html/aro_chroot/sites/arocms, uid=48, user=apache Oct 4 12:17:05 fluke logger: sendmail-php: site=, client=, script=, filename=, docroot=, pwd=/root, uid=0, user=root Oct 4 12:17:11 fluke logger: sendmail-php: site=, client=, script=, filename=, docroot=, pwd=/var/www/html/aro_chroot/sites/arocms, uid=48, user=apache Oct 4 12:17:14 fluke logger: sendmail-php: site=, client=, script=, filename=, docroot=, pwd=/root, uid=0, user=root Oct 4 12:17:29 fluke logger: sendmail-php: site=, client=, script=, filename=, docroot=, pwd=/root, uid=0, user=root Oct 4 12:17:41 fluke logger: sendmail-php: site=, client=, script=, filename=, docroot=, pwd=/root, uid=0, user=root User ID, current user, and pwd are all working, probably because they're globally accessible script resources, and not specific to PHP, like all the others are. I've tried using other server variables as per labradort's instructions, but no joy. 
Here's some sample tests: logger -p mail.info sendmail-php SCRIPT_NAME: ${SCRIPT_NAME} logger -p mail.info sendmail-php SCRIPT_FILENAME: ${SCRIPT_FILENAME} logger -p mail.info sendmail-php PATH_INFO: ${PATH_INFO} logger -p mail.info sendmail-php PHP_SELF: ${PHP_SELF} logger -p mail.info sendmail-php DOCUMENT_ROOT: ${DOCUMENT_ROOT} logger -p mail.info sendmail-php REMOTE_ADDR: ${REMOTE_ADDR} logger -p mail.info sendmail-php SCRIPT_NAME: $SCRIPT_NAME logger -p mail.info sendmail-php SCRIPT_FILENAME: $SCRIPT_FILENAME logger -p mail.info sendmail-php PATH_INFO: $PATH_INFO logger -p mail.info sendmail-php PHP_SELF: $PHP_SELF logger -p mail.info sendmail-php DOCUMENT_ROOT: $DOCUMENT_ROOT logger -p mail.info sendmail-php REMOTE_ADDR: $REMOTE_ADDR And the output: Oct 4 12:58:02 fluke logger: sendmail-php SCRIPT_NAME: Oct 4 12:58:02 fluke logger: sendmail-php SCRIPT_FILENAME: Oct 4 12:58:02 fluke logger: sendmail-php PATH_INFO: Oct 4 12:58:02 fluke logger: sendmail-php PHP_SELF: Oct 4 12:58:02 fluke logger: sendmail-php DOCUMENT_ROOT: Oct 4 12:58:02 fluke logger: sendmail-php REMOTE_ADDR: Oct 4 12:58:02 fluke logger: sendmail-php SCRIPT_NAME: Oct 4 12:58:02 fluke logger: sendmail-php SCRIPT_FILENAME: Oct 4 12:58:02 fluke logger: sendmail-php PATH_INFO: Oct 4 12:58:02 fluke logger: sendmail-php PHP_SELF: Oct 4 12:58:02 fluke logger: sendmail-php DOCUMENT_ROOT: Oct 4 12:58:02 fluke logger: sendmail-php REMOTE_ADDR: I'm running php 5.3.10. Unfortunately register_globals is on, for compatibility with legacy systems, but you wouldn't think that would cause the environment variables to stop working. If someone can give me some hints as to why this might not be working I'll be a very happy man :)
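    One likely explanation (an assumption, but consistent with the output above): under mod_php the HTTP_HOST, SCRIPT_NAME, etc. values exist only in PHP's $_SERVER array and are never exported as real process environment variables, so the shell started through sendmail_path cannot see them; they would only be present under a CGI/FastCGI setup. Since this is PHP 5.3.10, a hedged alternative is PHP's built-in mail logging, which records the calling script without relying on the environment at all:

        ; php.ini sketch (directives exist since PHP 5.3.0); the log path is a placeholder
        mail.add_x_header = On                 ; adds an X-PHP-Originating-Script header to each message
        mail.log = /var/log/php_mail.log       ; logs the calling file/line, recipient and headers of every mail() call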

    Read the article

  • Automatically Applying Security Updates for AWS Elastic Beanstalk

    - by Eric Anderson
    I've been a fan of Heroku since it's earliest days. But I like the fact that AWS Elastic Beanstalk gives you more control over the characteristics of the instances. One thing I love about Heroku is the fact that I can deploy an app and not worry about managing it. I am assuming Heroku is ensuring all OS security updates are timely applied. I just need to make sure my app is secure. My initial research on Beanstalk shows that although it builds and configures the instances for you, after that it moves to a more manual management process. Security updates won't automatically be applied to the instances. It seems there are two areas of concerns: New AMI releases - As new AMI releases hit it seems we would want to run the latest (presumably most secure). But my research seems to indicate you need to manually launch a new setup to see the latest AMI version and then create a new environment to use that new version. Is there a better automated way of rotating your instances into new AMI releases? In between releases there will be security updates released for packages. Seems we want to upgrade those as well. My research seems to indicate people install commands to occasionally run a yum update. But since new instances are created/destroyed based on usage it seems that the new instances would not always have the updates (i.e. the time between the instance creation and the first yum update). So occasionally you will have instances that aren't patched. And you are also going to have instances constantly patching themselves until the new AMI release is applied. My other concern is that perhaps these security updates haven't gone through Amazon's own review (like the AMI releases do) and it might break my app to automatically update them. I know Dreamhost once had a 12 hour outage because they were applying debian updates completely automatically without any review. I want to make sure the same thing doesn't happen to me. So my question is does Amazon provide a way to offer fully managed PaaS like Heroku? Or is AWS Elastic Beanstalk really more of just a install script and after that you are on your own (other than the monitoring and deployment tools they provide)?
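    As far as I know Elastic Beanstalk does not patch instances for you, but .ebextensions configuration files are applied to every instance it creates, so updates can at least be applied at instance-creation/deployment time. A hedged sketch (the file name is arbitrary; the --security flag assumes yum-plugin-security is available on the Amazon Linux AMI):

        # .ebextensions/01-security-updates.config
        packages:
          yum:
            yum-plugin-security: []
        commands:
          01_apply_security_updates:
            command: yum -y --security update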

    Read the article

  • Can't get Passwordless (SSH provided) SFTP working

    - by Shoaibi
    I have chrooted sftp setup as below. # Package generated configuration file # See the sshd_config(5) manpage for details # What ports, IPs and protocols we listen for Port 22 # Use these options to restrict which interfaces/protocols sshd will bind to #ListenAddress :: #ListenAddress 0.0.0.0 Protocol 2 # HostKeys for protocol version 2 HostKey /etc/ssh/ssh_host_rsa_key HostKey /etc/ssh/ssh_host_dsa_key #Privilege Separation is turned on for security UsePrivilegeSeparation yes # Lifetime and size of ephemeral version 1 server key KeyRegenerationInterval 3600 ServerKeyBits 768 # Logging SyslogFacility AUTH LogLevel INFO # Authentication: LoginGraceTime 120 PermitRootLogin without-password StrictModes yes AllowGroups admins clients RSAAuthentication yes PubkeyAuthentication yes #AuthorizedKeysFile %h/.ssh/authorized_keys # Don't read the user's ~/.rhosts and ~/.shosts files IgnoreRhosts yes # For this to work you will also need host keys in /etc/ssh_known_hosts RhostsRSAAuthentication no # similar for protocol version 2 HostbasedAuthentication no # Uncomment if you don't trust ~/.ssh/known_hosts for RhostsRSAAuthentication #IgnoreUserKnownHosts yes # To enable empty passwords, change to yes (NOT RECOMMENDED) PermitEmptyPasswords no # Change to yes to enable challenge-response passwords (beware issues with # some PAM modules and threads) ChallengeResponseAuthentication no # Change to no to disable tunnelled clear text passwords #PasswordAuthentication yes # Kerberos options #KerberosAuthentication no #KerberosGetAFSToken no #KerberosOrLocalPasswd yes #KerberosTicketCleanup yes # GSSAPI options #GSSAPIAuthentication no #GSSAPICleanupCredentials yes X11Forwarding yes X11DisplayOffset 10 PrintMotd no PrintLastLog yes TCPKeepAlive yes #UseLogin no #MaxStartups 10:30:60 #Banner /etc/issue.net # Allow client to pass locale environment variables AcceptEnv LANG LC_* #Subsystem sftp /usr/lib/openssh/sftp-server # Set this to 'yes' to enable PAM authentication, account processing, # and session processing. If this is enabled, PAM authentication will # be allowed through the ChallengeResponseAuthentication and # PasswordAuthentication. Depending on your PAM configuration, # PAM authentication via ChallengeResponseAuthentication may bypass # the setting of "PermitRootLogin without-password". # If you just want the PAM account and session checks to run without # PAM authentication, then enable this but set PasswordAuthentication # and ChallengeResponseAuthentication to 'no'. UsePAM yes Subsystem sftp internal-sftp Match group clients ChrootDirectory /var/chroot-home X11Forwarding no AllowTcpForwarding no ForceCommand internal-sftp a dummy user root:~# tail -n1 /etc/passwd david:x:1000:1001::/david:/bin/sh Now in this case david can sftp using say filezilla client and he is chrooted to /var/chroot-home/david/. But what if i was to setup a passwordless auth? I have tried pasting his key in /var/chroot-home/david/.ssh/authorized_keys but no use, tried ssh'ing as david to the box and it just stops at "debug1: Sending env LC_CTYPE = C" after i supply it password and there is nothing shown in auth.log, may be because it can't find the homedir. If i do "su - david" as root i see "No directory, logging in with HOME=/" which makes sense. Symlink doesn't help either. 
I have also tried with: Match group clients ChrootDirectory /var/chroot-home/%u X11Forwarding no AllowTcpForwarding no ForceCommand internal-sftp a dummy user root:~# tail -n1 /etc/passwd david:x:1000:1001::/var/chroot-home/david:/bin/sh This way if i don't change /var/chroot-home/david to root:root sshd complains about bad ownership or permission modes, and if i do, david can no longer upload/delete anything directly in his home while using sftp from filezilla.
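    One hedged explanation: public-key authentication happens before the chroot is applied, and sshd expands AuthorizedKeysFile (%h) against the home directory from /etc/passwd on the real filesystem, so with home set to /david the key file is simply never found and sshd falls back to password prompts. Below is a sketch that keeps the second variant (home under /var/chroot-home) while satisfying the root-ownership rule for the chroot itself; the paths are examples:

        # sshd_config (global section): keep key files outside the chroot and outside %h
        AuthorizedKeysFile /etc/ssh/authorized_keys/%u

        Match group clients
            ChrootDirectory /var/chroot-home/%u
            X11Forwarding no
            AllowTcpForwarding no
            ForceCommand internal-sftp

        # The chroot directory stays root-owned; a subdirectory gives david somewhere writable
        mkdir -p /var/chroot-home/david/files /etc/ssh/authorized_keys
        chown root:root /var/chroot-home/david && chmod 755 /var/chroot-home/david
        chown david:clients /var/chroot-home/david/files
        cp david.pub /etc/ssh/authorized_keys/david && chmod 644 /etc/ssh/authorized_keys/david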

    Read the article

  • Tuning Linux IP routing parameters -- secret_interval and tcp_mem

    - by Jeff Atwood
    We had a little failover problem with one of our HAProxy VMs today. When we dug into it, we found this: Jan 26 07:41:45 haproxy2 kernel: [226818.070059] __ratelimit: 10 callbacks suppressed Jan 26 07:41:45 haproxy2 kernel: [226818.070064] Out of socket memory Jan 26 07:41:47 haproxy2 kernel: [226819.560048] Out of socket memory Jan 26 07:41:49 haproxy2 kernel: [226822.030044] Out of socket memory Which, per this link, apparently has to do with low default settings for net.ipv4.tcp_mem. So we increased them by 4x from their defaults (this is Ubuntu Server, not sure if the Linux flavor matters): current values are: 45984 61312 91968 new values are: 183936 245248 367872 After that, we started seeing a bizarre error message: Jan 26 08:18:49 haproxy1 kernel: [ 2291.579726] Route hash chain too long! Jan 26 08:18:49 haproxy1 kernel: [ 2291.579732] Adjust your secret_interval! Shh.. it's a secret!! This apparently has to do with /proc/sys/net/ipv4/route/secret_interval which defaults to 600 and controls periodic flushing of the route cache The secret_interval instructs the kernel how often to blow away ALL route hash entries regardless of how new/old they are. In our environment this is generally bad. The CPU will be busy rebuilding thousands of entries per second every time the cache is cleared. However we set this to run once a day to keep memory leaks at bay (though we've never had one). While we are happy to reduce this, it seems odd to recommend dropping the entire route cache at regular intervals, rather than simply pushing old values out of the route cache faster. After some investigation, we found /proc/sys/net/ipv4/route/gc_elasticity which seems to be a better option for keeping the route table size in check: gc_elasticity can best be described as the average bucket depth the kernel will accept before it starts expiring route hash entries. This will help maintain the upper limit of active routes. We adjusted elasticity from 8 to 4, in the hopes of the route cache pruning itself more aggressively. The secret_interval does not feel correct to us. But there are a bunch of settings and it's unclear which are really the right way to go here. /proc/sys/net/ipv4/route/gc_elasticity (8) /proc/sys/net/ipv4/route/gc_interval (60) /proc/sys/net/ipv4/route/gc_min_interval (0) /proc/sys/net/ipv4/route/gc_timeout (300) /proc/sys/net/ipv4/route/secret_interval (600) /proc/sys/net/ipv4/route/gc_thresh (?) rhash_entries (kernel parameter, default unknown?) We don't want to make the Linux routing worse, so we're kind of afraid to mess with some of these settings. Can anyone advise which routing parameters are best to tune, for a high traffic HAProxy instance?
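    For completeness, a sketch of how such a change is usually applied and observed while testing; the values are the ones discussed above, not recommendations:

        # runtime change
        sysctl -w net.ipv4.route.gc_elasticity=4
        # rough view of how large the route cache currently is, against the gc threshold
        sysctl net.ipv4.route.gc_thresh
        ip route show cache | wc -l

        # persist across reboots in /etc/sysctl.conf
        net.ipv4.route.gc_elasticity = 4
        net.ipv4.route.secret_interval = 86400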

    Read the article

  • Cannot connect through SCP, but SSH connection works

    - by Joe Cabezas
    i am trying to connect to my server to transfer file using scp: $ scp -v -r -P <port> <user>@<host>:~/dir/ dir/ this is the output: OpenSSH_5.2p1, OpenSSL 0.9.8r 8 Feb 2011 debug1: Reading configuration data /Users/joe/.ssh/config debug1: Reading configuration data /etc/ssh_config debug1: Connecting to <host> [<host>] port <port>. debug1: Connection established. debug1: identity file /Users/joe/.ssh/identity type -1 debug1: identity file /Users/joe/.ssh/id_rsa type -1 debug1: identity file /Users/joe/.ssh/id_dsa type -1 ssh_exchange_identification: Connection closed by remote host but connecting via SSH works fine: $ ssh <user>@<host> -p <port> <user>@<host>'s password: <user>@<host>:~$ OK what can be wrong with this? my /etc/ssh/sshd_config file on the host is: # Package generated configuration file # See the sshd_config(5) manpage for details # What ports, IPs and protocols we listen for Port <port> # Use these options to restrict which interfaces/protocols sshd will bind to #ListenAddress :: #ListenAddress 0.0.0.0 Protocol 2 # HostKeys for protocol version 2 HostKey /etc/ssh/ssh_host_rsa_key HostKey /etc/ssh/ssh_host_dsa_key HostKey /etc/ssh/ssh_host_ecdsa_key #Privilege Separation is turned on for security UsePrivilegeSeparation yes # Lifetime and size of ephemeral version 1 server key KeyRegenerationInterval 3600 ServerKeyBits 768 # Logging SyslogFacility AUTH LogLevel INFO # Authentication: LoginGraceTime 120 PermitRootLogin yes StrictModes yes RSAAuthentication yes PubkeyAuthentication no #AuthorizedKeysFile %h/.ssh/authorized_keys # Don't read the user's ~/.rhosts and ~/.shosts files IgnoreRhosts yes # For this to work you will also need host keys in /etc/ssh_known_hosts RhostsRSAAuthentication no # similar for protocol version 2 HostbasedAuthentication no # Uncomment if you don't trust ~/.ssh/known_hosts for RhostsRSAAuthentication #IgnoreUserKnownHosts yes # To enable empty passwords, change to yes (NOT RECOMMENDED) PermitEmptyPasswords no # Change to yes to enable challenge-response passwords (beware issues with # some PAM modules and threads) ChallengeResponseAuthentication no # Change to no to disable tunnelled clear text passwords #PasswordAuthentication yes # Kerberos options #KerberosAuthentication no #KerberosGetAFSToken no #KerberosOrLocalPasswd yes #KerberosTicketCleanup yes # GSSAPI options #GSSAPIAuthentication no #GSSAPICleanupCredentials yes X11Forwarding yes X11DisplayOffset 10 PrintMotd no PrintLastLog yes TCPKeepAlive yes #UseLogin no #MaxStartups 10:30:60 #Banner /etc/issue.net # Allow client to pass locale environment variables AcceptEnv LANG LC_* Subsystem sftp /usr/lib/openssh/sftp-server # Set this to 'yes' to enable PAM authentication, account processing, # and session processing. If this is enabled, PAM authentication will # be allowed through the ChallengeResponseAuthentication and # PasswordAuthentication. Depending on your PAM configuration, # PAM authentication via ChallengeResponseAuthentication may bypass # the setting of "PermitRootLogin without-password". # If you just want the PAM account and session checks to run without # PAM authentication, then enable this but set PasswordAuthentication # and ChallengeResponseAuthentication to 'no'. UsePAM yes

    Read the article

  • How to improve this bash shell script for turning hardlinks into symlinks?

    - by MountainX
    This shell script is mostly the work of other people. It has gone through several iterations, and I have tweaked it slightly while also trying to fully understand how it works. I think I understand it now, but I don't have confidence to significantly alter it on my own and risk losing data when I run the altered version. So I would appreciate some expert guidance on how to improve this script. The changes I am seeking are: make it even more robust to any strange file names, if possible. It currently handles spaces in file names, but not newlines. I can live with that (because I try to find any file names with newlines and get rid of them). make it more intelligent about which file gets retained as the actual inode content and which file(s) become sym links. I would like to be able to choose to retain the file that is either a) the shortest path, b) the longest path or c) has the filename with the most alpha characters (which will probably be the most descriptive name). allow it to read the directories to process either from parameters passed in or from a file. optionally, write a long of all changes and/or all files not processed. Of all of these, #2 is the most important for me right now. I need to process some files with it and I need to improve the way it chooses which files to turn into symlinks. (I tried using things like the find option -depth without success.) Here's the current script: #!/bin/bash # clean up known problematic files first. ## find /home -type f -wholename '*Icon* ## *' -exec rm '{}' \; # Configure script environment # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ set -o nounset dir='/SOME/PATH/HERE/' # For each path which has multiple links # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ # (except ones containing newline) last_inode= while IFS= read -r path_info do #echo "DEBUG: path_info: '$path_info'" inode=${path_info%%:*} path=${path_info#*:} if [[ $last_inode != $inode ]]; then last_inode=$inode path_to_keep=$path else printf "ln -s\t'$path_to_keep'\t'$path'\n" rm "$path" ln -s "$path_to_keep" "$path" fi done < <( find "$dir" -type f -links +1 ! -wholename '* *' -printf '%i:%p\n' | sort --field-separator=: ) # Warn about any excluded files # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ buf=$( find "$dir" -type f -links +1 -path '* *' ) if [[ $buf != '' ]]; then echo 'Some files not processed because their paths contained newline(s):'$'\n'"$buf" fi exit 0
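    On point 2 (choosing which path keeps the inode), here is a minimal, untested sketch of one way to restructure the loop: buffer every path that shares an inode, pick a keeper by rule (shortest path here; swap in longest or most-alpha scoring as needed), then relink the rest. It reuses $dir and the same find pipeline, so the original's no-newline caveat still applies:

        #!/bin/bash
        set -o nounset
        dir='/SOME/PATH/HERE/'

        paths=()
        last_inode=

        process_group() {
            (( ${#paths[@]} < 2 )) && return
            local keeper p
            # Rule: keep the shortest path
            keeper=$(printf '%s\n' "${paths[@]}" | awk '{ print length, $0 }' | sort -n | head -n 1 | cut -d' ' -f2-)
            for p in "${paths[@]}"; do
                [[ $p == "$keeper" ]] && continue
                printf "ln -s\t'%s'\t'%s'\n" "$keeper" "$p"
                rm "$p" && ln -s "$keeper" "$p"
            done
        }

        while IFS= read -r path_info; do
            inode=${path_info%%:*}
            path=${path_info#*:}
            if [[ $inode != "$last_inode" ]]; then
                process_group
                paths=()
                last_inode=$inode
            fi
            paths+=("$path")
        done < <( find "$dir" -type f -links +1 ! -wholename '* *' -printf '%i:%p\n' | sort --field-separator=: )
        process_group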

    Read the article

  • Need to increase nginx throughput to an upstream unix socket -- linux kernel tuning?

    - by Ben Lee
    I am running an nginx server that acts as a proxy to an upstream unix socket, like this: upstream app_server { server unix:/tmp/app.sock fail_timeout=0; } server { listen ###.###.###.###; server_name whatever.server; root /web/root; try_files $uri @app; location @app { proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; proxy_set_header Host $http_host; proxy_redirect off; proxy_pass http://app_server; } } Some app server processes, in turn, pull requests off /tmp/app.sock as they become available. The particular app server in use here is Unicorn, but I don't think that's relevant to this question. The issue is, it just seems that past a certain amount of load, nginx can't get requests through the socket at a fast enough rate. It doesn't matter how many app server processes I set up, it doesn't even matter what the app is (tried it with a dummy app with just a single endpoint that returned an empty page with status 404). The bottleneck seems to be the socket, not the app. I'm getting a flood of these messages in the nginx error log: connect() to unix:/tmp/app.sock failed (11: Resource temporarily unavailable) while connecting to upstream Many requests result in status code 502, and those that don't take a long time to complete. The nginx write queue stat hovers around 1000. Anyway, I feel like I'm missing something obvious here, because this particular configuration of nginx and app server is pretty common, especially with Unicorn (it's the recommended method in fact). Are there any linux kernel options that needs to be set, or something in nginx? Any ideas about how to increase the throughput to the upstream socket? Something that I'm clearly doing wrong? Additional information on the environment: $ uname -a Linux app1 3.2.0-24-generic #39-Ubuntu SMP Mon May 21 16:52:17 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux $ ruby -v ruby 1.9.3p194 (2012-04-20 revision 35410) [x86_64-linux] $ unicorn -v unicorn v4.3.1 $ nginx -V nginx version: nginx/1.2.1 built by gcc 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) TLS SNI support enabled Current kernel tweaks: net.core.rmem_default = 65536 net.core.wmem_default = 65536 net.core.rmem_max = 16777216 net.core.wmem_max = 16777216 net.ipv4.tcp_rmem = 4096 87380 16777216 net.ipv4.tcp_wmem = 4096 65536 16777216 net.ipv4.tcp_mem = 16777216 16777216 16777216 net.ipv4.tcp_window_scaling = 1 net.ipv4.route.flush = 1 net.ipv4.tcp_no_metrics_save = 1 net.ipv4.tcp_moderate_rcvbuf = 1 net.core.somaxconn = 8192 net.netfilter.nf_conntrack_max = 131072
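    One hedged thing to check alongside the kernel settings above: connect() returning EAGAIN (11) on a unix socket is what you see when the socket's listen backlog is full, and Unicorn's default backlog is 1024, so the listener itself can be the cap even with somaxconn at 8192. The value below is only an example:

        # unicorn config sketch: raise the backlog on the socket Unicorn creates
        listen "/tmp/app.sock", :backlog => 2048

    If the 502s persist with a much larger backlog, the workers genuinely cannot drain the queue fast enough, and the remaining levers are more/faster workers or shedding load in nginx.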

    Read the article

  • Error installing scipy on Mountain Lion with Xcode 4.5.1

    - by Xster
    Environment: Mountain Lion 10.8.2, Xcode 4.5.1 command line tools, Python 2.7.3, virtualenv 1.8.2 and numpy 1.6.2 When installing scipy with pip install -e "git+https://github.com/scipy/scipy#egg=scipy-dev" on a fresh virtualenv. llvm-gcc: scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c In file included from /System/Library/Frameworks/vecLib.framework/Headers/vecLib.h:43, from /System/Library/Frameworks/Accelerate.framework/Headers/Accelerate.h:20, from scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c:2: /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:51:23: error: immintrin.h: No such file or directory In file included from /System/Library/Frameworks/vecLib.framework/Headers/vecLib.h:43, from /System/Library/Frameworks/Accelerate.framework/Headers/Accelerate.h:20, from scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c:2: /System/Library/Frameworks/vecLib.framework/Headers/vfp.h: In function ‘vceilf’: /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:53: error: incompatible types in return /System/Library/Frameworks/vecLib.framework/Headers/vfp.h: In function ‘vfloorf’: /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:54: error: incompatible types in return /System/Library/Frameworks/vecLib.framework/Headers/vfp.h: In function ‘vintf’: /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:55: error: ‘_MM_FROUND_TRUNC’ undeclared (first use in this function) /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:55: error: (Each undeclared identifier is reported only once /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:55: error: for each function it appears in.) /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:55: error: incompatible types in return /System/Library/Frameworks/vecLib.framework/Headers/vfp.h: In function ‘vnintf’: /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:56: error: ‘_MM_FROUND_NINT’ undeclared (first use in this function) /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:56: error: incompatible types in return In file included from /System/Library/Frameworks/vecLib.framework/Headers/vecLib.h:43, from /System/Library/Frameworks/Accelerate.framework/Headers/Accelerate.h:20, from scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c:2: /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:51:23: error: immintrin.h: No such file or directory In file included from /System/Library/Frameworks/vecLib.framework/Headers/vecLib.h:43, from /System/Library/Frameworks/Accelerate.framework/Headers/Accelerate.h:20, from scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c:2: /System/Library/Frameworks/vecLib.framework/Headers/vfp.h: In function ‘vceilf’: /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:53: error: incompatible types in return /System/Library/Frameworks/vecLib.framework/Headers/vfp.h: In function ‘vfloorf’: /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:54: error: incompatible types in return /System/Library/Frameworks/vecLib.framework/Headers/vfp.h: In function ‘vintf’: /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:55: error: ‘_MM_FROUND_TRUNC’ undeclared (first use in this function) /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:55: error: (Each undeclared identifier is reported only once /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:55: error: for each function it appears in.) 
/System/Library/Frameworks/vecLib.framework/Headers/vfp.h:55: error: incompatible types in return /System/Library/Frameworks/vecLib.framework/Headers/vfp.h: In function ‘vnintf’: /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:56: error: ‘_MM_FROUND_NINT’ undeclared (first use in this function) /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:56: error: incompatible types in return error: Command "/usr/bin/llvm-gcc -fno-strict-aliasing -Os -w -pipe -march=core2 -msse4 -fwrapv -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -Iscipy/sparse/linalg/eigen/arpack/ARPACK/SRC -I/Users/xiao/.virtualenv/lib/python2.7/site-packages/numpy/core/include -c scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c -o build/temp.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.o" failed with exit status 1 Is it supposed to be looking for headers from my system frameworks? Is the development version of scipy no longer good for the latest version of Mountain Lion/Xcode?
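    A hedged workaround that was common on 10.8 with the Xcode 4.5 command-line tools: immintrin.h (the AVX intrinsics header pulled in by vecLib's vfp.h) is not shipped with llvm-gcc 4.2, but clang has it, and the scipy build honours the CC/CXX environment variables. Flags and URL as in the original command:

        # sketch: build scipy with clang instead of llvm-gcc
        export CC=clang
        export CXX=clang++
        pip install -e "git+https://github.com/scipy/scipy#egg=scipy-dev"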

    Read the article

  • Flash was "not designed to function across LANs". Any workarounds?

    - by Triynko
    See: http://helpx.adobe.com/flash/kb/problems-using-flash-authoring-across.html Issue When using Adobe Flash across a local area network (LAN) and networked drives/folders, you may experience any of the following problems:" Flash crashes while performing a test movie on FLA files located on a networked drive or folder. FLA files get corrupted when opening from or saving to networked drives or folder. Flash does not reflect changes in custom class after compiling. Flash, Flash Video Encoder, or Adobe Media Encodercrashes or corrupts Flash Video (FLV) files while encoding source located on networked drives or folder. Flash Video Encoder or Adobe Media Encoder crashes or corrupts FLV files where the output folder is a networked drive or folder. Published Flash Player (SWF) files and projectors are unable to load content located on networked drives or folder. More than one instance of a SWF or Projector on client machines cannot play back FLV files located on a networked drive or folder. Reason The Adobe Flash IDE, FLV Encoder, Adobe Media Encoderand Flash Player were not designed to function across LANs. Solution Use of Flash files across local networks is not supported in any context. Published content should access data through a web server. All file sources should be opened and saved on the local system. Using Flash in such a scenario for project collaboration or content deployment is highly discouraged and may corrupt your source files. If you need to work in a collaborative environment or store source files on a server, use the project panel and/or a third-party version control system. SERIOUSLY? I cannot work on files located on a mapped network drive? How did they mess that one up? Does the Flash IDE really open the source file and wipe it clean to do the saving, rather than saving a copy first then replacing it as an atomic file system operation? How hard would it be for them make a dummy temporary file for saving then issue a MOVE command? Any workarounds for this, like something that can make a network drive as stable as a local drive, like some kind of automatic local caching and synching?

    Read the article

  • Assign fixed IP address via DHCP by DNS lookup

    - by Janoszen
    Preface I'm building a virtualization environment with Ubuntu 14.04 and LXC. I don't want to write my own template since the upgrade from 12.04 to 14.04 has shown that backwards compatibility is not guaranteed. Therefore I'm deploying my virtual machines via lxc-create, using the default Ubuntu template. The DNS for the servers is provided by Amazon Route 53, so no local DNS server is needed. I also use Puppet to configure my servers, so I want to keep the manual effort on the deployment minimal. Now, the default Ubuntu template assigns IP addresses via DHCP. Therefore, I need a local DHCP server to assign IP addresses to the nodes, so I can SSH into them and get Puppet running. Since Puppet requires a proper DNS setup, assigning temporary IP addresses is not an option, the client needs to get the right hostname and IP address from the start. Question What DHCP server do I use and how do I get it to assign the IP address based only on the host-name DHCP option by performing a DNS lookup on that very host name? What I've tried I tried to make it work using the ISC DHCP server, however, the manual clearly states: Please be aware that only the dhcp-client-identifier option and the hardware address can be used to match a host declaration, or the host-identifier option parameter for DHCPv6 servers. For example, it is not possible to match a host declaration to a host-name option. This is because the host-name option cannot be guaranteed to be unique for any given client, whereas both the hardware address and dhcp-client-identifier option are at least theoretically guaranteed to be unique to a given client. I also tried to create a class that matches the hostname like this: class "my-client-name" { match if option host-name = "my-client-name"; fixed-address my-client-name.my-domain.com; } Unfortunately the fixed-address option is not allowed in class statements. I can replace it with a 1-size pool, which works as expected: subnet 10.103.0.0 netmask 255.255.0.0 { option routers 10.103.1.1; class "my-client-name" { match if option host-name = "my-client-name"; } pool { allow members of "my-client-name"; range 10.103.1.2 10.103.1.2; } } However, this would require me to administer the IP addresses in two places (Amazon Route53 and the DHCP server), which I would prefer not to do. About security Since this is only used in the bootstrapping phase on an internal network and is then replaced by a static network configuration by Puppet, this shouldn't be an issue from a security standpoint. I am, however, aware that the virtual machine bootstraps with "ubuntu:ubuntu" credentials, which I intend to fix once this is running.

    Read the article

  • Autoloading Development or Production configs (best practices)

    - by Xeoncross
    When programming sites you usually have one set of config files for the development environment and another set for the production server (or one file with both settings). I am assuming all projects should be handled by version control like git or svn. Manual file transfers (like FTP) is wrong on so many levels. How you enable/disable the correct settings (so that your system knows which ones to use) is a problem for me. Each system I work on just kind of jimmy-rigs a solution. Below are the 3 methods I know of and I am hoping that someone can submit a more elegant solutions. 1) File Based The system loads a folder structure based on the URL requested. /site.com /site.fakeTLD /lib index.php For example, if the url is http://site.com then the system loads the production config files located in the site.com folder. However, if I'm working on the site locally I visit http://site.fakeTLD to work on the local copy of the site. To setup this I edit my hosts file and add site.fakeTLD to point to my own computer (127.0.0.1/localhost) and then create a vhost in apache. So now I can work on the codebase locally and then push to the server without any trouble. The problem is that this is susceptible to a "host" injection attack. So someone loading site.com could set the host to site.fakeTLD and then the system would load my development config files instead of production. 2) Config Based The config files contain on section for development - and one for production. The problem is that each time you go to push your changes to the repo you have to edit the file to specify which set of config options should be used. $use = 'production'; //'development'; This leaves the repo open to human error should one of the developers forget to enable the right setting. 3) File System Check Based All the development machines have an extra empty file called "development.txt" or something. Each time the system loads it checks for this file - if found then it knows it is in development mode - if missing then it knows it is in production mode. Since the file is NEVER ADDED to the repo then it will never be pushed (and checked out) on the production machine. However, this just doesn't feel right and causes a slight slow down since all filesystem checks are slow.

    Read the article

  • Issue with VMware vSphere and NFS: recurring APD state

    - by Bastian N.
    I am experiencing issues with VMWare vSphere 5.1 and NFS storage on 2 different setups, which result in an "All Path Down" state for the NFS shares. This first happened once or twice a day, but lately it occurs much more frequent, as specially when Acronis Backup jobs are running. Setup 1 (Production): 2 ESXi 5.1 hosts (Essentials Plus) + OpenFiler with NFS as storage Setup 2 (Lab): 1 ESXi 5.1 host + Ubuntu 12.04 LTS with NFS as storage Here is an example from the vmkernel.log: 2013-05-28T08:07:33.479Z cpu0:2054)StorageApdHandler: 248: APD Timer started for ident [987c2dd0-02658e1e] 2013-05-28T08:07:33.479Z cpu0:2054)StorageApdHandler: 395: Device or filesystem with identifier [987c2dd0-02658e1e] has entered the All Paths Down state. 2013-05-28T08:07:33.479Z cpu0:2054)StorageApdHandler: 846: APD Start for ident [987c2dd0-02658e1e]! 2013-05-28T08:07:37.485Z cpu0:2052)NFSLock: 610: Stop accessing fd 0x410007e4cf28 3 2013-05-28T08:07:37.485Z cpu0:2052)NFSLock: 610: Stop accessing fd 0x410007e4d0e8 3 2013-05-28T08:07:41.280Z cpu1:2049)StorageApdHandler: 277: APD Timer killed for ident [987c2dd0-02658e1e] 2013-05-28T08:07:41.280Z cpu1:2049)StorageApdHandler: 402: Device or filesystem with identifier [987c2dd0-02658e1e] has exited the All Paths Down state. 2013-05-28T08:07:41.281Z cpu1:2049)StorageApdHandler: 902: APD Exit for ident [987c2dd0-02658e1e]! 2013-05-28T08:07:52.300Z cpu1:3679)NFSLock: 570: Start accessing fd 0x410007e4d0e8 again 2013-05-28T08:07:52.300Z cpu1:3679)NFSLock: 570: Start accessing fd 0x410007e4cf28 again As long as the issue occurred once or twice a day it really wasn't a problem, but now this issue has impact on the VMs. The VMs get slow or even hang, resulting in a reset through vCenter in the production environment. I searched the web extensively and asked in forums, but till now nobody was able to help me. Based on blog posts and VMWare KB articles I tried the following NFS settings: Net.TcpipHeapSize = 32 Net.TcpipHeapMax = 128 NFS.HartbeatFrequency = 12 NFS.HartbeatMaxFailures = 10 NFS.HartbeatTimeout = 5 NFS.MaxQueueDepth = 64 Instead of NFS.MaxQueueDepth = 64 I already tried other settings like NFS.MaxQueueDepth = 32 or even NFS.MaxQueueDepth = 1. Unfortunately without any luck. It would be great if someone could help me on this issue. It is really annoying. Thanks in advance for all the help. [UPDATE] As I explained in the comment below, here is the network setup: On the production setup the NFS traffic is bound to a separate VLAN with ID 20. I am using a HP 1810 24 Port Switch. The OpenFiler system is connected to the VLAN with 4 Intel GbE NICs with dynamic LACP. The ESXis both have 4 Intel GbE NICs using 2 static LACP trunks containing 2 NICs each. One pair is connected to the regular LAN and the other one to the VLAN 20. And here is a screenshot of the vSwitch: Switch configuration: Port configuration: On the lab setup its a single Intel NIC on each side without VLAN, but with different IP subnet.
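    For reference, a sketch of how the advanced NFS options listed above can be checked and set from the ESXi 5.1 shell with esxcli, which makes it easier to keep both production hosts identical (the option name is one example; some NFS options only take effect after a host reboot):

        esxcli system settings advanced list -o /NFS/MaxQueueDepth
        esxcli system settings advanced set -o /NFS/MaxQueueDepth -i 64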

    Read the article

  • Installing ImageMagick on Mac OS X 10.6

    - by Russell C.
    I just got a new Mac and am trying to set up a local Perl development environment. I'm using MAMP but also need the ImageMagick Perl module installed in order to do some of the photo processing our scripts require. I tried installing ImageMagick manually but ran into some issues, and after reading online it seems a lot of people have had problems going this route. The general consensus was to install it using MacPorts instead, so I went ahead and installed MacPorts. Unfortunately, MacPorts can't seem to install it successfully either. Here is the command I'm using to try to install ImageMagick:

        sudo port install p5-perlmagick

    And here are all the errors reported during install:

        ---> Computing dependencies for p5-perlmagick
        ---> Building p5-perlmagick
        Error: Target org.macports.build returned: shell command " cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_perl_p5-perlmagick/work/PerlMagick-6.32" && /usr/bin/make -j2 all " returned error 2
        Command output: Magick.xs:10918: error: 'struct Methods' has no member named 'exception'
        Magick.xs:10918: error: request for member 'severity' in something not a structure or union
        Magick.xs:10918: error: 'ErrorException' undeclared (first use in this function)
        Magick.xs:10919: error: 'struct Methods' has no member named 'exception'
        Magick.xs:10920: warning: implicit declaration of function 'GetImageException'
        Magick.xs:10922: error: 'struct PackageInfo' has no member named 'image_info'
        Magick.xs:10922: error: 'struct Methods' has no member named 'adjoin'
        Magick.xs:10929: error: request for member 'severity' in something not a structure or union
        Magick.xs:10929: error: 'UndefinedException' undeclared (first use in this function)
        Magick.xs:10929: error: request for member 'severity' in something not a structure or union
        Magick.xs:10929: error: request for member 'reason' in something not a structure or union
        Magick.xs:10929: error: request for member 'severity' in something not a structure or union
        Magick.xs:10929: error: request for member 'reason' in something not a structure or union
        Magick.xs:10929: warning: pointer/integer type mismatch in conditional expression
        Magick.xs:10929: error: request for member 'description' in something not a structure or union
        Magick.xs:10929: error: request for member 'description' in something not a structure or union
        Magick.xs:10929: error: request for member 'severity' in something not a structure or union
        Magick.xs:10929: error: request for member 'description' in something not a structure or union
        Magick.xs:10929: warning: pointer/integer type mismatch in conditional expression
        Magick.xs:10929: error: request for member 'description' in something not a structure or union
        Magick.xs:10929: warning: passing argument 2 of 'Perl_sv_catpv' from incompatible pointer type
        Magick.xs:10929: warning: unused variable 'message'
        Magick.xs:10856: warning: unused variable 'filename'
        Magick.c:10784: warning: unused variable 'ref'
        Magick.c:10777: warning: unused variable 'ix'
        Magick.xs: In function 'boot_Image__Magick':
        Magick.xs:2122: warning: implicit declaration of function 'InitializeMagick'
        Magick.xs:2123: warning: implicit declaration of function 'SetWarningHandler'
        Magick.xs:2124: warning: implicit declaration of function 'SetErrorHandler'
        make: *** [Magick.o] Error 1
        Error: Status 1 encountered during processing.
        Before reporting a bug, first run the command again with the -d flag to get complete output.

    I have no idea what the problem might be or how to go about successfully installing ImageMagick. I'd appreciate any help and advice from anyone who has done this successfully. Thanks in advance!

    Read the article

  • KeePass justification

    - by Jeff Walker
    I work at a place that tries to take security seriously, but sadly, it often fails. Currently, one of the major ways it fails is password management. I personally have about 20 accounts (my personal user id on lots of machines). For shared "system" accounts, there are about 45 per environment: development, test, and production. I have access to 2 of those, so my personal total is somewhere around 115 accounts. Passwords have to be at least 15 characters with some extensive but standard complexity constraints, and have to be changed every 60 days or so (system accounts every year). They also should not be the same for different accounts, but that isn't enforced. Think DoD-type standards. There is no way to remember and keep up with all of this; it just isn't humanly possible, as far as I'm concerned. This might be a good justification for a centralized account management system, a la LDAP or Active Directory, but that is a totally different battle. Currently the solution is an Excel spreadsheet. They use Excel to put a password on it, and then most people make a copy and remove the password. This makes my stomach turn. I use KeePass for this problem and it manages all of my accounts very well. I like features like auto-typing, grouping, plugins, password generation, etc. It uses AES-256 encryption via the .NET framework, and while not FIPS compliant, it has a very good reputation. The only problem is that they are also very careful about using randomly downloaded software, so we have to justify every piece of software on our workstations. I have been told that they really don't want me to use this because of the "sensitive nature" of storing passwords. Sigh. My justification has to be "VERY VERY strong". I have been tasked with writing a justification for KeePass, but as I am lazy, I would like any input that I can get from the community. What do you recommend? Is there something out there that is better or more respected than KeePass? Are there any security experts saying interesting things on this topic? Anything will help at this point. Thanks.

    Read the article

  • PHP & MySQL on Mac OS X: Access denied for GUI user

    - by Eirik Lillebo
    Hey! This question was first posted to Stack Overflow, but as it is perhaps just as much a server issue I thought it might be worth posting it here as well. I have just installed and configured Apache, MySQL, PHP and phpMyAdmin on my Macbook in order to have a local development environment. But after I moved one of my projects over to the local server I get a weird MySQL error from one of my calls to mysql_query():

        Access denied for user '_securityagent'@'localhost' (using password: NO)

    First of all, the query I'm sending to MySQL is valid, and I've even tested it through phpMyAdmin with a perfect result. Secondly, the error only happens here, while I have at least 4 other MySQL connections and queries per page. This call to mysql_query() happens at the end of a really long function that handles data for newly created or modified articles. This is basically what it does:

    1. Collect all the data from the article form (title, content, dates, etc.)
    2. Validate the collected data
    3. Connect to the database
    4. Dynamically build the SQL query based on the validated article data
    5. Send the query to the database before closing the connection

    Pretty basic, I know. I did not recognize the username "_securityagent", so after a quick search I came across this from an article at Apple's Developer Connection talking about some random bug: "Mac OS X's security infrastructure gets around this problem by running its GUI code as a special user, "_securityagent"." Then I tried putting a var_dump() on all variables used in the mysql_connect() call, and every time it returns the correct values (where the username is not "_securityagent", of course). Thus I'm wondering if anyone has any idea why '_securityagent' is trying to connect to my database - and how I can keep this error from occurring when I call mysql_query().

    Update: Here is the exact code I'm using to connect to the database. But a little explanation must follow:

    - The connection error happens at a call to mysql_query() in function X in class_1
    - class_1 uses class_2 to connect to the database
    - class_2 reads a config file with the database connection variables (host, user, pass, db)
    - class_2 connects to the database through the following function:

        var $SYSTEM_DB_HOST = "";

        function connect_db() {
            // Reads the config file
            include('system_config.php');
            if (!($SYSTEM_DB_HOST == "")) {
                mysql_connect($SYSTEM_DB_HOST, $SYSTEM_DB_USER, $SYSTEM_DB_PASS);
                @mysql_select_db($SYSTEM_DB);
                return true;
            } else {
                return false;
            }
        }
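    A minimal sketch of how the connection handling could be made explicit, assuming the legacy mysql_* API from the question. The relevant PHP behaviour: if mysql_query() runs while no connection is open (for example after the connection has been closed, or because connect_db() silently failed), PHP attempts an implicit mysql_connect() with the php.ini defaults, and on a Mac that default connection can end up being attempted as the owner of the calling process - the most likely source of '_securityagent'. Passing the link resource around instead of relying on the default connection makes that failure visible. The variable names mirror the question; the explicit-link handling is an assumption, not the poster's actual code.

        // Return the link resource instead of true/false so callers can pass it explicitly.
        function connect_db() {
            include('system_config.php');            // defines $SYSTEM_DB_HOST, $SYSTEM_DB_USER, ...
            if ($SYSTEM_DB_HOST == "") {
                return false;
            }
            $link = mysql_connect($SYSTEM_DB_HOST, $SYSTEM_DB_USER, $SYSTEM_DB_PASS);
            if ($link === false || !mysql_select_db($SYSTEM_DB, $link)) {
                return false;                         // surface the failure instead of hiding it with @
            }
            return $link;
        }

        $db = connect_db();
        if ($db === false) {
            die('DB connection failed: ' . mysql_error());
        }
        // Always pass $db explicitly and only close it after the last query has run.
        $result = mysql_query("SELECT 1", $db);

    If the error disappears once every mysql_query() call receives an explicit, still-open link, the implicit reconnect was the culprit.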

    Read the article

  • varnish3, mod_geoip with apache2 using mod_rewrite and mod_rpaf

    - by mursalat
    I am maintaining a website with 3 different versions of the site, in 3 different languages, handled by a single system written in PHP which takes in environment variables based on the domain name being accessed. These are the three sites:

        myshop.com : English international version
        myshop.eu : European version of the site
        myshop.ru : Russian version of the site

    When myshop.com is accessed from Russia the visitor should be redirected to myshop.ru; when it is accessed from a European country the visitor should be redirected to myshop.eu; international visitors stay at myshop.com, although they can still go to a country-specific site. All these country redirections are done with the GeoIP Apache2 module, which determines the country code that is then used in a RewriteCond to trigger a RewriteRule. There are some IP exceptions that skip the rewrite, basically the developers' PCs.

    The site had been doing just fine until we decided to set up Varnish to give it a boost. It really did give a great boost, but the country-specific rewrites have become buggy. What started to happen is that a Russian visitor can go to myshop.com and won't be redirected until he clicks a random link (perhaps a link not yet cached by Varnish), at which point the user is redirected to the country-specific site. For that I set up mod_rpaf, and for exceptions to the rewrite rule (for the developers' IPs) I used this: RewriteCond %{HTTP:X-FORWARDED-FOR} !^43\.43\.43\.43. I restarted Varnish and Apache2 and it worked for a while, then it messed up again. I have spent the whole day making changes, but I have little to no clue as to what is going on; sometimes it works, sometimes it doesn't, and sometimes it half works.

    As for GeoIP, I used a PHP script to check the $_SERVER variable, and here is the general idea of the output:

        [HTTP_X_FORWARDED_FOR] => 43.43.43.44
        [HTTP_X_VARNISH] => 1705675599
        [SERVER_ADDR] => 127.0.0.1
        [SERVER_PORT] => 80
        [REMOTE_ADDR] => 43.43.43.44
        [GEOIP_ADDR] => 43.43.43.44
        [GEOIP_CONTINENT_CODE] => EU
        [GEOIP_COUNTRY_CODE] => FR
        [GEOIP_COUNTRY_NAME] => France

    Now, thanks to the "random" redirects, I hardly have a clue as to what is going on, so can you please give me some ideas as to what tools to use to debug this? I have checked the redirect logs, but they really don't show much, and varnishlog isn't helping much either - although I must admit I am no professional at Varnish. I believe the problem is that Varnish caches the URL, so the Apache redirects are not always applied: whoever visits the site first gets the redirect (or not), and other users are then served the cached response depending on where that first visitor was when the cache was last updated. Is that correct? If so, how can I solve the problem? Also, I have the option of doing the GeoIP redirects in Varnish 3 instead of Apache2 - is that the best practice? Any suggestions for debugging or fixing this would be helpful! Thanks!
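    If the root cause is indeed that Varnish caches whichever response the first visitor produced, one low-impact option is to move the country decision into PHP and mark the redirect responses uncacheable, so Varnish never stores one visitor's redirect for everyone else. The sketch below is an assumption-heavy illustration: it relies on mod_geoip populating GEOIP_COUNTRY_CODE (as the $_SERVER dump above shows), the EU country list and the developer IP are placeholders, and doing the equivalent in vcl_recv on the Varnish side would be the alternative.

        // Country redirect sketch for myshop.com, kept out of the Varnish cache.
        $country  = isset($_SERVER['GEOIP_COUNTRY_CODE']) ? $_SERVER['GEOIP_COUNTRY_CODE'] : '';
        $clientIp = isset($_SERVER['HTTP_X_FORWARDED_FOR']) ? $_SERVER['HTTP_X_FORWARDED_FOR'] : $_SERVER['REMOTE_ADDR'];

        $devIps      = array('43.43.43.43');                 // developers bypass the redirect
        $euCountries = array('FR', 'DE', 'NL', 'IT', 'ES');  // illustrative, not a complete list

        if ($_SERVER['HTTP_HOST'] === 'myshop.com' && !in_array($clientIp, $devIps, true)) {
            $target = null;
            if ($country === 'RU') {
                $target = 'http://myshop.ru' . $_SERVER['REQUEST_URI'];
            } elseif (in_array($country, $euCountries, true)) {
                $target = 'http://myshop.eu' . $_SERVER['REQUEST_URI'];
            }
            if ($target !== null) {
                header('Cache-Control: private, no-cache, no-store, must-revalidate'); // keep Varnish from caching the 302
                header('Location: ' . $target, true, 302);
                exit;
            }
        }

    The same effect can be achieved by leaving the rewrites in Apache and adding a Cache-Control: no-cache header to the redirect responses, or by hashing on the GeoIP country in vcl_hash; the key point is that a per-country response must never be cached under a country-agnostic key.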

    Read the article

  • Deploying Django App with Nginx, Apache, mod_wsgi

    - by JCWong
    I have a Django app which runs locally using the standard development environment. I now want to move it to EC2 for production. The Django documentation suggests running with Apache and mod_wsgi, and using nginx for serving static files. I am running Ubuntu 12.04 on an EC2 box. My Django app, "ddt", contains a subdirectory "apache" with ddt.wsgi:

        import os, sys
        apache_configuration = os.path.dirname(__file__)
        project = os.path.dirname(apache_configuration)
        workspace = os.path.dirname(project)
        sys.path.append(workspace)
        sys.path.append('/usr/lib/python2.7/site-packages/django/')
        sys.path.append('/home/jeffrey/www/ddt/')
        os.environ['DJANGO_SETTINGS_MODULE'] = 'ddt.settings'
        import django.core.handlers.wsgi
        application = django.core.handlers.wsgi.WSGIHandler()

    I have mod_wsgi installed from apt. My apache/httpd.conf contains:

        NameVirtualHost *:8080
        WSGIScriptAlias / /home/jeffrey/www/ddt/apache/ddt.wsgi
        WSGIPythonPath /home/jeffrey/www/ddt
        <Directory /home/jeffrey/www/ddt/apache/>
            <Files ddt.wsgi>
                Order deny,allow
                Allow from all
            </Files>
        </Directory>

    Under apache2/sites-enabled:

        <VirtualHost *:8080>
            ServerName www.mysite.com
            ServerAlias mysite.com
            <Directory /home/jeffrey/www/ddt/apache/>
                Order deny,allow
                Allow from all
            </Directory>
            LogLevel warn
            ErrorLog /home/jeffrey/www/ddt/logs/apache_error.log
            CustomLog /home/jeffrey/www/ddt/logs/apache_access.log combined
            WSGIDaemonProcess datadriventrading.com user=www-data group=www-data threads=25
            WSGIProcessGroup datadriventrading.com
            WSGIScriptAlias / /home/jeffrey/www/ddt/apache/ddt.wsgi
        </VirtualHost>

    If I am correct, these 3 files above should allow my Django app to run on port 8080. I have the following nginx/proxy.conf file:

        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        client_max_body_size 10m;
        client_body_buffer_size 128k;
        proxy_connect_timeout 90;
        proxy_send_timeout 90;
        proxy_read_timeout 90;
        proxy_buffer_size 4k;
        proxy_buffers 4 32k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;

    Under nginx/sites-enabled:

        server {
            listen 80;
            server_name www.mysite.com mysite.com;
            access_log /home/jeffrey/www/ddt/logs/nginx_access.log;
            error_log /home/jeffrey/www/ddt/logs/nginx_error.log;
            location / {
                proxy_pass http://127.0.0.1:8080;
                include /etc/nginx/proxy.conf;
            }
            location /media/ {
                root /home/jeffrey/www/ddt/;
            }
        }

    If I am correct, these two files should set up nginx to take requests on HTTP port 80 and pass them to Apache, which is running the Django app on port 8080. If I go to mysite.com, all I see is "Welcome to Nginx!" Any advice for how to debug this?

    Read the article

  • Performance of ClearCase servers on VMs?

    - by Garen
    Where I work, we need to upgrade our ClearCase servers, and it has been proposed that we move them onto a new (yet-to-be-deployed) VMware system. In the past I have not noticed significant performance problems with most applications running in VMs, but given that ClearCase "speed" (i.e. dynamic-view response times) is so latency-sensitive, I am concerned that this will not be a good idea. VMware has numerous white papers detailing performance-related issues based on network traffic patterns that reinforce my hypothesis, but nothing particularly concrete for this use case that I can see. What I can find are various forum posts online, which are somewhat dated, e.g.:

    "ClearCase clients are supported on VMWare, but not for performance issues. I would never put a production server on VM. It will work but will be slower. The more complex the slower it gets. accessing or building from a local snapshot view will be the fastest, building in a remote VM stored dynamic view using clearmake will be painful..... VMWare is best used for test environments" (via http://www.cmcrossroads.com/forums?func=view&catid=31&id=44094&limit=10&start=10)

    and:

    "VMware + ClearCase = works but SLUGGISH!!!!!! (windows)(not for production environment) My company tried to mandate that all new apps or app upgrades needed to be on/moved VMware instances. The VMware instance could not handle the demands of ClearCase. (come to find out that I was sharing a box with a database server) Will you know what else would be on that box besides ClearCase? Karl" (via http://www.cmcrossroads.com/forums?func=view&id=44094&catid=31)

    and:

    "... are still finding we can't get the performance using dynamic views to below 2.5 times that of a physical machine. Interestingly, speaking to a few people with much VMWare experience and indeed from running builds, we are finding that typically, VMWare doesn't take that much longer for most applications and about 10-20% longer has been quoted." (via http://www.cmcrossroads.com/forums?func=view&catid=31&id=44094&limit=10&start=10)

    Which brings me to the more direct question: does anyone have any more recent experience with ClearCase servers on VMware (if not any specific, relevant performance advice)?

    Read the article

< Previous Page | 389 390 391 392 393 394 395 396 397 398 399 400  | Next Page >