Search Results

Search found 21343 results on 854 pages for 'pass by reference'.

  • Excel fails to open Python-generated CSV files

    - by johnjdc
    I have many Python scripts that output CSV files. It is occasionally convenient to open these files in Excel. After installing OS X Mavericks, Excel no longer opens these files properly: Excel doesn't parse the files and it duplicates the rows of the file until it runs out of memory. Specifically, when Excel attempts to open the file, a prompt appears that reads: "File not loaded completely." Example of the code I'm using to generate the CSV files:

        import csv
        with open('csv_test.csv', 'wb') as f:
            writer = csv.writer(f)
            writer.writerow([1, 2, 3])
            writer.writerow([4, 5, 6])

    Even the simple file generated by the above code fails to load properly in Excel. However, if I open the CSV file in a text editor, copy/paste the text into Excel, parse it with text to columns, and then save as CSV from Excel, then I can reopen the CSV file in Excel without issue. Do I need to pass an additional parameter in my scripts to make Excel parse the CSV files the same way it used to? Or is there some setting I can change in OS X Mavericks or Excel? Thanks.
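
    A hedged workaround worth trying (my assumption, not something the poster confirmed): Python 2's csv module ends rows with '\r\n' by default, and that terminator has been reported to confuse some Excel builds. Passing an explicit lineterminator keeps everything else identical:

        import csv

        # Sketch: same as the code above, but with an explicit line
        # terminator in case Excel is choking on the default '\r\n'.
        with open('csv_test.csv', 'wb') as f:
            writer = csv.writer(f, lineterminator='\n')
            writer.writerow([1, 2, 3])
            writer.writerow([4, 5, 6])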

  • Firefox cannot render icons from Font Awesome webfont set

    - by ADTC
    In Firefox (Windows 7), icons and glyphs that are called from the Font Awesome package do not render properly. An example of this can be seen on the Khan Academy website: below the video, the icons are shown as boxes with hex codes in them. This means the font isn't getting downloaded by Firefox. [Screenshot omitted: the icons render correctly on Chrome (Windows 7), Safari (Mac OS X) and Stainless (Mac OS X).] I found a question on Stack Overflow that may explain why this happens -- the CSS does use single quotes to enclose the font's src location. However, I don't have any write access to Khan Academy servers, so I can't modify the actual website. I want to know if this can be fixed in Firefox, and how. I can run Greasemonkey scripts if that would help. I've already tried manually downloading the font and adding it to Windows' Fonts folder, but this does not help. For reference, the CSS that sets this font up (not processed properly by Firefox) is:

        @font-face {
            font-family: 'FontAwesome';
            src: url('./fontawesome-webfont.eot');
            src: url('./fontawesome-webfont.eot?#iefix') format('embedded-opentype'),
                 url('./fontawesome-webfont.woff') format('woff'),
                 url('./fontawesome-webfont.ttf') format('truetype'),
                 url('./fontawesome-webfont.svg#FontAwesome') format('svg');
            font-weight: normal;
            font-style: normal
        }
        [class^="icon-"]:before, [class*=" icon-"]:before {
            font-family: FontAwesome;
            font-weight: normal;
            font-style: normal;
            display: inline-block;
            text-decoration: inherit
        }
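
    Since the font is already installed locally, one hedged idea (an assumption on my part, and it may not win the CSS cascade) is to re-declare the face in Firefox's userContent.css so it resolves from the locally installed font instead of the blocked remote URL:

        /* In <profile>/chrome/userContent.css -- a sketch, assuming the
           locally installed family is named "FontAwesome". */
        @font-face {
            font-family: 'FontAwesome';
            src: local('FontAwesome');
        }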

  • What are the steps needed to set up and use security for AWS command line tools?

    - by chris
    I've been trying to set up the AWS command-line tools following Eric's most useful guide at http://alestic.com/2012/09/aws-command-line-tools. I can't seem to find a good how-to for generating the X.509 certificate and private key, and for how that relates to the various security files the guide creates. Update: I have found a couple of links that describe the steps. These steps seem to work, however I'm not sure if this is secure and the best way to do it:

    1) Create a private key:

        openssl genrsa -out my-private-key.pem 2048

    2) Create the X.509 cert (hit enter to accept all of the defaults):

        openssl req -new -x509 -key my-private-key.pem -out my-x509-cert.pem -days 365

    Then, from the IAM Dashboard, select a user and click on the "Security Credentials" tab. Click on "Manage Signing Certificates", then "Upload Signing Certificate", paste in the contents of my-x509-cert.pem, click OK, and it should be accepted. One step that is discussed, but was not required for me, was the addition and subsequent removal of a pass phrase on the private key. Should I have been prompted for one, and is my cert potentially unsafe because of this?
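
    On the pass-phrase question: a key generated as above simply has no pass phrase, which is why there was no prompt; the certificate itself is unaffected, though the key file sits unencrypted on disk. A hedged sketch of adding one (and of the removal step the guides describe):

        # Encrypt the private key with a pass phrase (prompts for it):
        openssl rsa -des3 -in my-private-key.pem -out my-private-key-enc.pem
        # Strip the pass phrase again (what the guides' removal step does):
        openssl rsa -in my-private-key-enc.pem -out my-private-key.pem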

  • setting up phpmyadmin with nginx within ubuntu 11.04

    - by Patrick
    I have nginx and php5-fpm running on Ubuntu 11.04. I have installed phpMyAdmin but I'm having trouble accessing it. I would like to access it via http://localhost/phpmyadmin. I've used all the default locations for the nginx, php5, and phpmyadmin installs. I'm being directed to use the block below by the blog guide I'm following, but I'm not sure what to change to get it to point the way I want it to:

        server {
            listen 80;
            server_name php.example.com; # <- I know I need to edit this, but not sure to what.
            access_log /var/log/nginx/localhost.access.log;
            root /usr/share/phpmyadmin;
            index index.php;

            location / {
                try_files $uri $uri/ @phpmyadmin;
            }

            location @phpmyadmin {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_param SCRIPT_FILENAME /usr/share/phpmyadmin/index.php;
                include /etc/nginx/fastcgi_params;
                fastcgi_param SCRIPT_NAME /index.php;
            }

            # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /usr/share/phpmyadmin$fastcgi_script_name;
                include fastcgi_params;
            }
        }
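
    A hedged observation (my reading of the block, not from the guide): that config serves phpMyAdmin as an entire virtual host, so with it the address would be http://php.example.com/ (or http://localhost/ if server_name were localhost), not http://localhost/phpmyadmin. To get the /phpmyadmin path on an existing localhost site, a common alternative is to symlink the app into that site's document root -- assuming the default root is /usr/share/nginx/www:

        # Expose phpMyAdmin under the existing default site:
        sudo ln -s /usr/share/phpmyadmin /usr/share/nginx/www/phpmyadmin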

  • Visual Studio 2010 won't compile/create new projects

    - by tuner
    My Visual Studio 2010 Professional with SP1 installed won't compile anymore. The error shown is: "TRACKER : error TRK0005: Failed to locate: 'CL.exe'. The system cannot find the file specified." Strangely, it is also no longer possible to create new projects -- the wizard appears but just restarts when I press Create. As I found out, the paths for Visual Studio are now built from settings in the registry, namely HKEY_CURRENT_USER\Software\Microsoft\VisualStudio. Comparing a colleague's installation with mine revealed no different settings. This is how the Property Pages / Configuration Properties / VC++ Directories look:

        Executable Directories: $(ExecutablePath)
        Include Directories:    $(IncludePath)
        Reference Directories:  $(ReferencePath)
        Library Directories:    $(LibraryPath)
        Source Directories:     $(SourcePath)
        Exclude Directories:    $(ExcludePath)

    From the Visual Studio 2010 Command Prompt, cl.exe is found. I can only guess that this behavior was caused by a reinstallation of Studio a couple of months ago (to a different folder). As we use an external build script for our main project, there is a good chance that it has been broken since then. Any hints?
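
    One hedged way to narrow this down (my suggestion, not from the poster): export the per-user Visual Studio hive on both machines and diff the files, and check from a plain (non-VS) command prompt whether cl.exe resolves at all:

        :: Export the per-user VS settings for comparison with the colleague's:
        reg export "HKCU\Software\Microsoft\VisualStudio\10.0" vs10_user.reg
        :: From a regular prompt; in the VS command prompt it already resolves:
        where cl.exe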

  • Move the uploads folder in Wordpress

    - by Victor Hurdugaci
    Currently, my WordPress uploads folder is located in \wp-content\uploads. Initially there was no structure, so all files were put directly in there. After a while, the setting was changed to upload files into \wp-content\uploads\YEAR\MONTH. Now that folder contains a mix of files (those starting with + are folders) like:

        +wp-content
        | +2010
        | | +02
        | | | File-1
        | | | File-2
        | | | ..
        | | | File-n
        | | +01
        | | | File-1
        | | | File-2
        | | | ..
        | | | File-n
        | +2009
        | | +12
        | | | File-1
        | | | File-2
        | | | ..
        | | | File-n
        | | +11
        | | | File-1
        | | | File-2
        | | | ..
        | | | File-n
        | +..
        | | | ..
        | Unstructured-file-1
        | Unstructured-file-2
        | ...
        | Unstructured-file-n

    Based on the dates of the unstructured files, I would like to move them into the structured hierarchy (based on date, move each file to \wp-content\uploads\YEAR\MONTH). Now, my questions are:

    1. Where do I write and execute a script to do the move? (I don't have full access to the server, just a cPanel and the WordPress admin page.)
    2. What must be updated so that the links in posts that reference the unstructured files point to the new location of those files?
    3. Not fully related to the previous description: is it all right to move the whole uploads folder to another location, like \uploads?

    PS: Moving the files/updating the database manually is not an option :)
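
    A hedged sketch of such a script, with assumed paths (it presumes it can be dropped into the WordPress root via cPanel's file manager, and that posts reference files by their wp-content/uploads path; back up files and database first):

        <?php
        // One-shot migration sketch: move loose files in wp-content/uploads
        // into YEAR/MONTH subfolders by modification time, and rewrite
        // references in post content and attachment GUIDs to match.
        require __DIR__ . '/wp-load.php'; // bootstrap WordPress (assumed location)
        global $wpdb;

        $uploads = WP_CONTENT_DIR . '/uploads';
        foreach (new DirectoryIterator($uploads) as $item) {
            if (!$item->isFile()) {
                continue; // skip the existing YEAR folders (and . / ..)
            }
            $bucket = date('Y/m', $item->getMTime()); // e.g. "2009/12"
            $dest   = $uploads . '/' . $bucket;
            if (!is_dir($dest)) {
                mkdir($dest, 0755, true);
            }
            $old = 'wp-content/uploads/' . $item->getFilename();
            $new = 'wp-content/uploads/' . $bucket . '/' . $item->getFilename();
            rename($item->getPathname(), $dest . '/' . $item->getFilename());
            // Point existing posts and attachment records at the new path.
            $wpdb->query($wpdb->prepare(
                "UPDATE {$wpdb->posts}
                    SET post_content = REPLACE(post_content, %s, %s),
                        guid         = REPLACE(guid, %s, %s)",
                $old, $new, $old, $new
            ));
        }

    On question 3: WordPress does have an upload-path setting that can point elsewhere, but every URL already stored in the database would need the same kind of rewrite, so moving the whole folder is the same class of job.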

  • WMI Sensors monitoring

    - by DmitrySemenov
    Our monitoring tool, Paessler, stopped monitoring its WMI Windows sensors. Paessler was updated to version 12.4.5.3165 (10/30/2012 1:44:11 PM), and the Windows sensors (against a Windows Server 2008 R2 Web Edition machine) stopped working (no changes have been made on the server that we monitor) with the message:

        Connection could not be established (80070005: Access is denied - Host: 192.168.2.10, User: Administrator, Password: **, Domain: ntlmdomain:) (code: PE015)

    However, if I go to the virtual machine used to run Paessler, the following cscript runs successfully:

        strComputer = "192.168.2.10"
        Set objSWbemLocator = CreateObject("WbemScripting.SWbemLocator")
        Set objSWbemServices = objSWbemLocator.ConnectServer _
            (strComputer, "root\cimv2", _
            "Administrator", "pass")
        Set colProcessList = objSWbemServices.ExecQuery( _
            "Select * From Win32_Processor")
        For Each objProcess in colProcessList
            Wscript.Echo "Process Name: " & objProcess.Name
        Next

    I'm getting the output:

        C:\>cscript test.vbs
        Microsoft (R) Windows Script Host Version 5.8
        Copyright (C) Microsoft Corporation. All rights reserved.

        Process Name: Intel(R) Xeon(R) CPU X5680 @ 3.33GHz
        Process Name: Intel(R) Xeon(R) CPU X5680 @ 3.33GHz

    So WMI works. a) I gave the same Administrator credentials for the device to monitor in Paessler's settings that I used in the script above. b) I restarted the Windows server (the one with the broken sensors) - but this didn't help. c) I restarted the Paessler probe service - no effect. Any ideas?

  • More than 10k connections on linux vps

    - by Sash_007
    My question: what is causing this, and how do I check? We use a URL-masking script on the website... is that causing this? Please help. This is the message from our host:

    We noticed that you are abusing our network, as you have made more than 10k connections in our node. Due to this, our node became unstable and all of our customers faced downtime because of your VPS. Please find the log details below for your reference.

        593 src=199.231.227.56 dst=58.2.236.196
        465 src=199.231.227.56 dst=192.223.243.6
        396 src=199.231.227.56 dst=58.2.238.191
        217 src=199.231.227.56 dst=58.2.236.197
        161 src=199.231.227.56 dst=20.139.83.50
        145 src=199.231.227.56 dst=192.223.163.6
        136 src=199.231.227.56 dst=125.21.230.68
        134 src=199.231.227.56 dst=125.21.230.132
        131 src=199.231.227.56 dst=20.139.67.50
        117 src=199.231.227.56 dst=110.234.29.210
        112 src=199.231.227.56 dst=65.52.0.51
        104 src=199.231.227.56 dst=202.46.23.55
        100 src=199.231.227.56 dst=202.3.120.4
        94 src=199.231.227.56 dst=117.198.39.22
        69 src=203.197.253.62 dst=199.231.227.56
        62 src=14.194.248.225 dst=199.231.227.56
        53 src=199.231.227.56 dst=192.223.136.5
        52 src=49.248.11.195 dst=199.231.227.56
        51 src=199.231.227.56 dst=117.198.38.15
        50 src=199.231.227.56 dst=192.71.175.2
        47 src=199.231.227.56 dst=61.16.189.76
        45 src=199.231.227.56 dst=122.177.222.17
        43 src=199.231.227.56 dst=115.242.89.40
        42 src=199.231.227.56 dst=103.22.237.215
        41 src=125.16.9.2 dst=199.231.227.56
        39 src=199.231.227.56 dst=117.198.35.90
        38 src=199.231.227.56 dst=203.91.201.54
        38 src=199.231.227.56 dst=14.139.241.89
        38 src=199.231.227.56 dst=111.93.85.82
        37 src=199.231.227.56 dst=65.52.0.56

    Note: the 1st column indicates the total number of connections to a particular IP. You have made more than 10k connections in total.
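
    A hedged way to check from inside the VPS (standard tools, my suggestion): count the current connections per remote address and see which processes own the sockets:

        # Connections per remote IP, busiest first:
        netstat -ntu | awk 'NR>2 {print $5}' | cut -d: -f1 | sort | uniq -c | sort -rn | head
        # Which processes hold the sockets (run as root for the -p column):
        netstat -ntup | head -n 40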

  • How to correctly configure DNS for Icelandic domains and Plesk

    - by Leonard Challis
    I have a domain registered with ISNIC (domain.is). They only let you set nameservers that pass their requirements, and I've been told this is the requirement I need to satisfy: "Nameserver must be consistently registered in DNS, i.e. its own A resource record must be available and a corresponding PTR resource record as well." I allocated two new IP addresses from my server host and at that point set their PTR records to ns0.domain.is and ns1.domain.is. After that I created two A records for that domain in Plesk, again ns0.domain.is and ns1.domain.is, with their respective IPs. Next, I went to the ISNIC page to register my nameservers along with the IP addresses I'd allocated, and this worked perfectly for both, without error. So the final job was to set the nameservers for the domain via ISNIC's control panel. However, when I try, I'm getting this error:

        Test results for "NS0.DOMAIN.IS":
        The nameserver ns1.vps123.vpsprovider.com is not consistently registered in DNS (ns1.vps123.vpsprovider.com => 1123.123.123.123 => vps123.vpsprovider.com)
        The nameserver ns0.vps123.vpsprovider.com is not consistently registered in DNS (ns0.vps123.vpsprovider.com => 1123.123.123.123 => vps123.vpsprovider.com)
        The nameserver ns0.DOMAIN.IS is missing from the NS record set for DOMAIN.IS

        Test results for "NS1.DOMAIN.IS":
        The nameserver ns1.DOMAIN.IS is missing from the NS record set for DOMAIN.IS
        The nameserver ns0.DOMAIN.IS is missing from the NS record set for DOMAIN.IS

    This is really at the limits of my DNS knowledge, I'm afraid. It feels like I'm close but maybe missing a vital part, like linking the nameservers in Plesk or something?
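
    A hedged reading of that output (my interpretation, not ISNIC's): the zone's own NS record set still lists the provider's names (ns*.vps123.vpsprovider.com) instead of ns0/ns1.domain.is, so the check fails regardless of the glue records. Each piece can be verified directly:

        # The zone must list the new names as NS records:
        dig +short NS domain.is
        # Forward and reverse must agree for each nameserver:
        dig +short A ns0.domain.is
        dig +short -x 203.0.113.10   # substitute ns0's actual address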

  • OpenVPN Keeps Crashing

    - by Frank Thornton
    My OpenVPN server log shows:

        Oct 20 21:00:44 sb1 openvpn[2082]: <MY_IP>:28523 [vpntest] Peer Connection Initiated with [AF_INET]<MY_IP>:28523
        Oct 20 21:00:44 sb1 openvpn[2082]: vpntest/<MY_IP>:28523 MULTI_sva: pool returned IPv4=10.8.0.6, IPv6=(Not enabled)
        Oct 20 21:00:44 sb1 openvpn[2082]: <MY_IP>:28522 WARNING: 'link-mtu' is used inconsistently, local='link-mtu 1576', remote='link-mtu 1376'
        Oct 20 21:00:44 sb1 openvpn[2082]: <MY_IP>:28522 WARNING: 'tun-mtu' is used inconsistently, local='tun-mtu 1532', remote='tun-mtu 1332'
        Oct 20 21:00:45 sb1 openvpn[2082]: <MY_IP>:28522 [vpntest2] Peer Connection Initiated with [AF_INET]<MY_IP>:28522
        Oct 20 21:00:45 sb1 openvpn[2082]: vpntest2/<MY_IP>:28522 MULTI_sva: pool returned IPv4=10.8.0.10, IPv6=(Not enabled)
        Oct 20 21:00:46 sb1 openvpn[2082]: vpntest/<MY_IP>:28523 send_push_reply(): safe_cap=940

    Client file:

        client
        dev tun
        proto tcp
        remote <IP> 443
        resolv-retry infinite
        nobind
        tun-mtu 1500
        tun-mtu-extra 32
        mssfix 1410
        persist-key
        persist-tun
        auth-user-pass
        comp-lzo

    Server:

        port 443   #- port
        proto tcp  #- protocol
        dev tun
        tun-mtu 1500
        tun-mtu-extra 32
        reneg-sec 0
        #mtu-disc yes
        mssfix 1410
        ca /etc/openvpn/easy-rsa/2.0/keys/ca.crt
        cert /etc/openvpn/easy-rsa/2.0/keys/server.crt
        key /etc/openvpn/easy-rsa/2.0/keys/server.key
        dh /etc/openvpn/easy-rsa/2.0/keys/dh1024.pem
        plugin /etc/openvpn/openvpn-auth-pam.so /etc/pam.d/login
        #plugin /usr/share/openvpn/plugin/lib/openvpn-auth-pam.so /etc/pam.d/login
        #- Comment this line if you are using FreeRADIUS
        #plugin /etc/openvpn/radiusplugin.so /etc/openvpn/radiusplugin.cnf
        #- Uncomment this line if you are using FreeRADIUS
        client-to-client
        client-cert-not-required
        username-as-common-name
        server 10.8.0.0 255.255.255.0
        push "redirect-gateway def1"
        push "dhcp-option DNS 8.8.8.8"
        push "dhcp-option DNS 8.8.4.4"
        keepalive 3 30
        comp-lzo
        persist-key
        persist-tun

    What is causing the VPN to keep dropping the connection and then reconnecting?
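
    A hedged observation (mine, not confirmed): the warnings show the connecting peer negotiating link-mtu 1376 / tun-mtu 1332 while both configs above declare tun-mtu 1500, so at least one client in the wild is running different MTU options, and mismatched MTUs over proto tcp are a classic cause of stalls and reconnects. A sketch of the client-side lines that would need to match the server:

        # client.conf -- values assumed from the server config above:
        tun-mtu 1500
        tun-mtu-extra 32
        mssfix 1410

    Separately, keepalive 3 30 restarts the session after only 30 seconds of silence; relaxing it (e.g. keepalive 10 120) may be worth trying as well.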

  • WWNs,WWPNs and Fibre Channel addresses

    - by user238230
    There is a lot of contradictory information on these subjects, and I don't know why. My first question is about the 64-bit WWN. One reference claims the terms WWN and WWPN are synonymous. An online source seems to refute this. They say: "A WWPN (world wide port name) is the unique identifier for a fibre channel port, where a WWN (world wide name) is the unique identifier for the node itself. A good example is a dual-port HBA: there will be two WWPNs (one for each port) and only a single WWN for the card itself." Question #1: Which is correct? I'm almost positive I read that every "port" has a WWN. My next question is about the 24-bit FC address that is dynamically allocated to a port when it is introduced to the switch. The Domain ID field is defined as "a unique number provided to each switch in the fabric." Question #2: Do Domain IDs only apply to switch ports? For example, what would the Domain ID be for an HBA? None? The same as the switch port it is connected to? Question #3: My last question is about the Name Server of a switch. A book example shows the routing of a message through the switch using the WWNs of the source and destination ports. I am assuming that the Name Server must associate the WWN and the FC address in some way in order to route the message. Correct?

  • WS2008 subst in Logon script does not "stick"

    - by Frans
    I have a terminal server environment exclusively with Windows Server 2008. My problem is that I need to "map" a drive letter to each user's Temp folder. This is due to a legacy app that requires a separate Temp folder for each user but which does not understand %temp%. So, just add "subst t: %temp%" to the logon script, right? The problem is that, even though the command runs, the subst doesn't "stick" and the user doesn't get a T: drive. Here is what I have tried. The simplest version:

        'Mapping a temp drive
        Set WinShell = WScript.CreateObject("WScript.Shell")
        WinShell.Run "subst T: %temp%", 2, True

    That didn't work, so I tried this for more debug information:

        'Mapping a temp drive
        Set WinShell = WScript.CreateObject("WScript.Shell")
        Set procEnv = WinShell.Environment("Process")
        wscript.echo(procEnv("TEMP"))
        tempDir = procEnv("TEMP")
        WinShell.Run "subst T: " & tempDir, 3, True

    This shows me the correct temp path when the user logs in - but still no T: drive. Decided to resort to brute force and put this in my login script:

        'Mapping a temp drive
        Set WinShell = WScript.CreateObject("WScript.Shell")
        WinShell.Run "\\domain\sysvol\esl.hosted\scripts\tempdir.cmd", 3, True

    where \\domain\sysvol\esl.hosted\scripts\tempdir.cmd has this content:

        echo on
        subst t: %temp%
        pause

    When I log in with the above, the command window opens up and I can see the subst command being executed correctly, with the correct path. But still no T: drive. I have tried running all of the above scripts outside of a login script and they always work perfectly - this problem only occurs when doing it from inside a login script. I found a passing reference on an MSFN forum about a similar problem when the user is already logged on to another machine - but I have this problem even without being logged on to another machine. Any suggestion on how to overcome this will be much appreciated.
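
    A hedged guess at the cause (an assumption, not verified): with UAC on Server 2008, drive mappings created under one token context are not visible in the other half of a split token, so a subst made in the logon script's context may never show up in the user's shell. The commonly cited registry workaround links the two contexts:

        :: Assumption: the UAC split-token behavior is the culprit.
        reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" ^
            /v EnableLinkedConnections /t REG_DWORD /d 1 /f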

  • iptables to block non-VPN-traffic if not through tun0

    - by dacrow
    I have a dedicated webserver running Debian 6 and some Apache, Tomcat, Asterisk and mail stuff. Now we needed to add VPN support for a special program. We installed OpenVPN and registered with a VPN provider. The connection works well and we have a virtual tun0 interface for tunneling. To achieve the goal of tunneling only a single program through the VPN, we start the program with:

        sudo -u username -g groupname command

    and added an iptables rule to mark all traffic coming from groupname:

        iptables -t mangle -A OUTPUT -m owner --gid-owner groupname -j MARK --set-mark 42

    Afterwards we tell iptables to do some SNAT and tell ip route to use a special routing table for marked packets. Problem: if the VPN fails, there is a chance that the to-be-tunneled program communicates over the normal eth0 interface. Desired solution: all marked traffic should not be allowed to go directly through eth0; it has to go through tun0 first. I tried the following commands, which didn't work:

        iptables -A OUTPUT -m owner --gid-owner groupname ! -o tun0 -j REJECT
        iptables -A OUTPUT -m owner --gid-owner groupname -o eth0 -j REJECT

    It might be that the above iptables rules didn't work due to the fact that the packets are first marked, then put into tun0, and then transmitted by eth0 while they are still marked. I don't know how to de-mark them after tun0, or how to tell iptables that all marked packets may pass eth0 if they were in tun0 before or are going to the gateway of my VPN provider. Does someone have any idea for a solution? Some config info:

        # iptables -nL -v --line-numbers -t mangle
        Chain OUTPUT (policy ACCEPT 11M packets, 9798M bytes)
        num  pkts  bytes target   prot opt in out source    destination
        1    591K  50M   MARK     all  --  *  *   0.0.0.0/0 0.0.0.0/0   owner GID match 1005 MARK set 0x2a
        2    82812 6938K CONNMARK all  --  *  *   0.0.0.0/0 0.0.0.0/0   owner GID match 1005 CONNMARK save

        # iptables -nL -v --line-numbers -t nat
        Chain POSTROUTING (policy ACCEPT 393 packets, 23908 bytes)
        num  pkts  bytes target   prot opt in out  source    destination
        1    15    1052  SNAT     all  --  *  tun0 0.0.0.0/0 0.0.0.0/0   mark match 0x2a to:VPN_IP

        # ip rule add from all fwmark 42 lookup 42
        # ip route show table 42
        default via VPN_IP dev tun0
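
    One hedged idea (an assumption about the packet flow, not tested here): packets routed into tun0 are consumed there and re-sent by the OpenVPN daemon as new, unmarked packets under its own uid, so a mark match on eth0 should only ever hit packets that escaped the tunnel:

        # Fail-safe sketch: drop marked packets about to leave eth0 directly.
        # Tunneled traffic re-emerges as the OpenVPN daemon's own (unmarked)
        # packets, so it still passes.
        iptables -A OUTPUT -o eth0 -m mark --mark 42 -j DROP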

  • Why does my DD-WRT not accept SSH connections from my laptop?

    - by Vlad Seghete
    So, here is my system: I have a 2Wire AT&T modem/router, which I use for wireless, and a Buffalo router flashed with DD-WRT, which is physically attached to the 2Wire and set in the DMZ. I set everything up on the DD-WRT so that I can connect to it using ssh, and so that it forwards ssh requests on a different port to one of the servers behind it. Now, when I am physically connected to the DD-WRT, all this works great and exactly as I want it to: I ssh into the two different ports using the WAN IP of my network, and I get where I expect to land. If, however, I am connected by wi-fi to the 2Wire, the same commands do not work. I do not get an error, simply a timeout. I have trouble understanding this, since the DD-WRT is set in the DMZ and everything should pass to it. To further complicate the problem, I tried connecting to the same IP using my phone (wireless disabled, so really from the WAN) and, surprise, it works! If I go back on the local network by enabling the wi-fi, the ssh connection times out. To make this even stranger, my WAN IP address always responds to pings in all of the above situations. What could be going on here? I know what I should do: completely disable the 2Wire as a router, use it strictly as a modem, and then use all the routing capabilities of the DD-WRT. It's what I will probably end up doing anyway, but my question remains, because I really want to know what is happening here.

  • Require and Includes not Functioning Nginx Fpm/FastCGI

    - by Vince Kronlein
    I've split up my FPM pools so that PHP runs under each individual user, and I've set the routing correctly in my vhost.conf files to pass the proper port number. But I must have something incorrect in my environment, because on this new domain I set up, require, require_once, include, and include_once do not function -- or rather, they may not be getting passed up to the interpreter to be rendered as PHP. Since I already have a WordPress install on this server that runs perfectly, I'm pretty sure the error is in my nginx server block:

        server {
            server_name www.domain.com;
            rewrite ^(.*) http://domain.com$1 permanent;
        }
        server {
            listen 80;
            server_name domain.com;
            client_max_body_size 500M;
            index index.php index.html index.htm;
            root /home/username/public_html;

            location / {
                try_files $uri $uri/ index.php;
            }

            location ~ \.php$ {
                if (!-e $request_filename) {
                    rewrite ^(.*)$ /index.php?name=$1 break;
                }
                fastcgi_pass 127.0.0.1:9002;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
            }

            location ~ /\.ht {
                deny all;
            }
        }

    The problem, I think, is that there are dynamic calls to the doc-root index file, while all calls to anything within a sub-folder should be routed as normal, i.e. NOT passed to index.php. I can't seem to find the right mix here. It should run like so:

        domain.com/cindy              (file doesn't exist)  --> index.php?name=$1
        domain.com/admin/anyfile.php  (files DO exist)      --> admin/anyfile.php?$args
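
    A hedged sketch of that routing (my reading of the requirements, untested): let try_files make the exists/doesn't-exist decision instead of the if block, and note that the fallback in try_files should be an absolute URI (/index.php), since the bare "index.php" in the block above is not a valid fallback:

        location / {
            # Fall back to the front controller only when nothing on disk matches.
            try_files $uri $uri/ /index.php?name=$uri&$args;
        }

        location ~ \.php$ {
            # Existing scripts (e.g. /admin/anyfile.php) are served as-is;
            # anything else 404s instead of looping back to index.php.
            try_files $uri =404;
            fastcgi_pass 127.0.0.1:9002;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }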

  • Computer Freezes with "Bugcheck 0" on Windows 7. How do I figure out why?

    - by George Stocker
    After about 10 minutes of running, my computer will hang, exhibiting the following symptoms:

    - Both monitors act as if there is no image being sent to them (on, but blacked out).
    - The CAPS Lock key on the keyboard will not respond.
    - The computer appears to still be running: the CPU fan is whirring.
    - When I reboot, Windows says "The previous shutdown was unexpected."

    I've enabled the "don't automatically restart" option on an error, and asked the computer to make a memory dump whenever it crashes, but it hasn't done either. The problem is that there's no bugcheck for me to go off of, so there's no way for me to determine what the cause is (I think). Here are my system specs:

    - Intel Core 2 Duo E6750
    - Gigabyte P35C-DS3R w/ 4.00 GB (DDR2 RAM)
    - Nvidia 8800 GT
    - Windows 7

    I've tried running the Windows memory checker, but the system also freezes after about 10 minutes when using that. How can I diagnose the problem with no bugcheck and no ability to run a memory checker? Update: Running Memtest86 also causes the computer to crash (it doesn't look like it makes it through a full pass - it had only been running for about 10 minutes when the PC stopped responding).

  • Blocking an IP in Webmin

    - by Dan J
    I've been checking my /var/log/secure log recently and have seen the same bot trying to brute-force its way onto my CentOS server running Webmin. I created a chain + rule in Networking > Linux Firewall: "Drop if source is 113.106.88.146". But I'm still seeing the attempted logins in the log:

        Jun 6 10:52:18 CentOS5 sshd[9711]: pam_unix(sshd:auth): check pass; user unknown
        Jun 6 10:52:18 CentOS5 sshd[9711]: pam_succeed_if(sshd:auth): error retrieving information about user larry
        Jun 6 10:52:19 CentOS5 sshd[9711]: Failed password for invalid user larry from 113.106.88.146 port 49328 ssh2

    Here are the contents of /etc/sysconfig/iptables:

        # Generated by webmin
        *filter
        :banned-ips - [0:0]
        -A INPUT -p udp -m udp --dport ftp-data -j ACCEPT
        -A INPUT -p udp -m udp --dport ftp -j ACCEPT
        -A INPUT -p udp -m udp --dport domain -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 20000 -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 10000 -j ACCEPT
        -A INPUT -p tcp -m tcp --dport https -j ACCEPT
        -A INPUT -p tcp -m tcp --dport http -j ACCEPT
        -A INPUT -p tcp -m tcp --dport imaps -j ACCEPT
        -A INPUT -p tcp -m tcp --dport imap -j ACCEPT
        -A INPUT -p tcp -m tcp --dport pop3s -j ACCEPT
        -A INPUT -p tcp -m tcp --dport pop3 -j ACCEPT
        -A INPUT -p tcp -m tcp --dport ftp-data -j ACCEPT
        -A INPUT -p tcp -m tcp --dport ftp -j ACCEPT
        -A INPUT -p tcp -m tcp --dport domain -j ACCEPT
        -A INPUT -p tcp -m tcp --dport smtp -j ACCEPT
        -A INPUT -p tcp -m tcp --dport ssh -j ACCEPT
        -A banned-ips -s 113.106.88.146 -j DROP
        COMMIT
        # Completed
        # Generated by webmin
        *mangle
        :FORWARD ACCEPT [0:0]
        :INPUT ACCEPT [0:0]
        :OUTPUT ACCEPT [0:0]
        :PREROUTING ACCEPT [0:0]
        :POSTROUTING ACCEPT [0:0]
        COMMIT
        # Completed
        # Generated by webmin
        *nat
        :OUTPUT ACCEPT [0:0]
        :PREROUTING ACCEPT [0:0]
        :POSTROUTING ACCEPT [0:0]
        COMMIT
        # Completed
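
    A hedged diagnosis from that dump (my reading): the banned-ips chain is defined and holds the DROP rule, but nothing in the INPUT chain ever jumps to it, and the "--dport ssh -j ACCEPT" rule matches first. Adding a jump at the top of INPUT should make the ban take effect:

        # Evaluate the ban chain before any of the ACCEPT rules:
        iptables -I INPUT 1 -j banned-ips
        service iptables save   # persist to /etc/sysconfig/iptables on CentOS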

  • Production deployment to EC2 with minimal downtime

    - by jensendarren
    I have a simple web application deployed on a large instance with EC2. I now want to deploy the latest code to this server, but I want to do it in a way that minimizes downtime and is as smooth as possible for the end user. Here is my plan:

    1. Fire up another large instance.
    2. Install all the software layers on that instance.
    3. Restore and attach an EBS drive to the instance.
    4. Deploy our latest production-ready code on the new instance.
    5. Run all tests (including manual testing of the application).
    6. (If tests pass) Put a "Site Under Maintenance" notice on the live site.
    7. Back up the EBS volume on the live site.
    8. Detach the EBS volume from the new server and replace it with the latest backup.
    9. Use ec2-associate-address to move the IP address to the new instance (see the sketch after this list).
    10. Sit back and wait for traffic to start flowing through the new instance.
    11. Terminate the old instance.

    Does this seem like a good strategy? Are there any tutorials or books that might cover this topic? I have already read Cloud Application Architectures by George Reese, which is an excellent book, but it does not cover deployment. Additionally, I know that there are tools that can help with this, like RightScale or enStratus, which I will use when I start using more than one instance.
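
    For step 9, a hedged sketch with the classic EC2 API tools (the instance ID and Elastic IP below are placeholders):

        # Re-point the Elastic IP at the new instance; in-flight connections
        # to the old instance drop, new ones land on the new box.
        ec2-associate-address 203.0.113.10 -i i-0123abcd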

  • Puppet - Is it possible to use a global var to pull in a template with the same name?

    - by Mike Purcell
    I'm new to Puppet. As such, I am trying to work out the best way to set up my manifests so that they make sense. Following the DRY (don't repeat yourself) principle, I am trying to load common directives in one template, then load in environment-specific directives from a file matching the environment. Basically like this:

        # nodes.pp
        node base_dev { $service_env = 'dev' }
        node 'service1.ownij.lan' inherits base_dev {
            include global_env_specific
        }

        class global_env_specific {
            include shell::bash
        }

        # modules/shell/bash.pp
        class shell::bash inherits shell {
            notify{"Service env: ${service_env}": }
            file { '/etc/profile.d/custom_test.sh':
                content => template('_global/prefix.erb', 'shell/bash/global.erb', 'shell/bash/$service_env.erb'),
                mode    => 644
            }
        }

    But every time I run puppet agent --test, Puppet complains that it can't find the shell/bash/$service_env.erb file, even though I double-checked that it exists. I know the var is accessible, because the notify statement outputs the expected value, so I suspect I am doing something which is not allowed. I know I could have a single template.erb and pass variables to the template, which would work in this case because the custom.sh file is small and doesn't change much across environments, but for more complex configs (httpd, solr, etc.) I'd prefer to access environment-specific files. I am also aware that I can specify environment-specific module paths, but I'd prefer to just handle this behavior at the template level, instead of having several closely named directories. Thanks.
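
    A hedged guess at the cause (worth checking, though not confirmed by the poster): Puppet does not interpolate variables inside single-quoted strings, so the template function is looking for a file literally named $service_env.erb. Double quotes with the ${} form should resolve it:

        # modules/shell/bash.pp -- only the content line changes:
        content => template('_global/prefix.erb',
                            'shell/bash/global.erb',
                            "shell/bash/${service_env}.erb"),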

  • Terminate child processes on ctrl-c

    - by jackweirdy
    In Tiny Core Linux, I have the following script:

        #!/bin/sh
        # ~/.X.d/freerdp.sh
        rdp(){
            while true
            do
                xfreerdp -f [IP Address]
            done
        }
        rdp &

    It's pretty simple: when X starts up and checks the .X.d directory (as is the case in Tiny Core), it finds and executes this script. The script starts up freerdp and keeps a connection open to the server by restarting it whenever it closes. As you can see from the rdp & line, the function is run in the background to allow X to continue its startup routine. The problem is that whenever I cancel X with Ctrl-Alt-Backspace, the rdp process doesn't die. I'm looking for a way to kill the process as soon as X finishes, either through: A) a script, executed on X closing, which kills the process, or B) by modifying the script to check the return value of the xfreerdp command. NB - if the solution does check the return value, it must only end if the command fails to open the X display. For that reason, if you could point me to a reference for xfreerdp return values, I'd be grateful.
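
    A hedged sketch of option B (I don't have a documented exit-code table for xfreerdp, so the test below assumes a non-zero status when the X display can't be opened -- verify that assumption before relying on it):

        #!/bin/sh
        # ~/.X.d/freerdp.sh -- sketch, not verified on Tiny Core
        rdp() {
            while true
            do
                xfreerdp -f [IP Address]
                if [ $? -ne 0 ]; then
                    break   # assumed: non-zero means no X display (or fatal error)
                fi
            done
        }
        rdp &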

  • Fix bad superblock on logical partition

    - by Chris
    I was following http://www.howtoforge.com/linux_resi...xt3_partitions and when I reboot and run:

        root@Microknoppix:/home/knoppix# fsck -n /dev/sda7
        fsck from util-linux-ng 2.17.2
        e2fsck 1.41.12 (17-May-2010)
        fsck.ext2: Superblock invalid, trying backup blocks...
        fsck.ext2: Bad magic number in super-block while trying to open /dev/sda7

        The superblock could not be read or does not describe a correct ext2
        filesystem. If the device is valid and it really contains an ext2
        filesystem (and not swap or ufs or something else), then the superblock
        is corrupt, and you might try running e2fsck with an alternate superblock:
            e2fsck -b 8193 <device>

    So I ran e2fsck with all the backup block numbers (I forget exactly what tool I used to find where the superblocks are hidden) - no dice. Then I ran TestDisk and had it look for the superblock: no results. Anyone have any ideas? fdisk -l for reference:

        root@Microknoppix:/home/knoppix# fdisk -l

        Disk /dev/sda: 320.1 GB, 320072933376 bytes
        255 heads, 63 sectors/track, 38913 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x97646c29

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1               1          64      512000   83  Linux
        Partition 1 does not end on cylinder boundary.
        /dev/sda2              64       38912   312046593    f  W95 Ext'd (LBA)
        /dev/sda5              64         326     2104320   82  Linux swap / Solaris
        /dev/sda6   *         327        2938    20972544   83  Linux
        /dev/sda7            2938       38912   288968672+  83  Linux

    To be honest, it looks like I've lost it... The next step, if that happens, is to dump the partition to an image file and hope I can find or write some software to parse through the data looking for known file headers, I think.
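
    For the record, the usual tool for listing where the backup superblocks live is mke2fs in dry-run mode (a hedged suggestion: run it with -n so it only prints and writes nothing, and note the positions depend on the block size):

        # Print, without creating anything, where the superblock backups would be:
        mke2fs -n /dev/sda7
        # Then try each reported location, e.g.:
        e2fsck -b 32768 /dev/sda7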

  • Help about pure-ftp

    - by hai
    I set up Pure-FTPd on FreeBSD behind a firewall. Pure-FTPd is configured for passive-mode FTP (port range 50400-50600), and the firewall opens ports 50400-50600 (both IN and OUT). But when I try to connect with an FTP client, the connection fails. The error status:

        Status: Connecting to 210.245.89.95:21...
        Status: Connection established, waiting for welcome message...
        Response: 220---------- Welcome to Pure-FTPd [privsep] ----------
        Response: 220-You are user number 1 of 50 allowed.
        Response: 220-Local time is now 13:20. Server port: 21.
        Response: 220-IPv6 connections are also welcome on this server.
        Response: 220 You will be disconnected after 15 minutes of inactivity.
        Command: USER bk
        Response: 331 User bk OK. Password required
        Command: PASS
        Response: 230 OK. Current directory is /
        Command: SYST
        Response: 215 UNIX Type: L8
        Command: FEAT
        Response: 211-Extensions supported:
        Response: EPRT
        Response: IDLE
        Response: MDTM
        Response: SIZE
        Response: REST STREAM
        Response: MLST type;size*;sizd*;modify*;UNIX.mode*;UNIX.uid*;UNIX.gid*;unique*;
        Response: MLSD
        Response: ESTA
        Response: PASV
        Response: EPSV
        Response: SPSV
        Response: ESTP
        Response: 211 End.
        Status: Connected
        Status: Retrieving directory listing...
        Command: PWD
        Response: 257 "/" is your current location
        Command: TYPE I
        Response: 200 TYPE is now 8-bit binary
        Command: PASV
        Response: 227 Entering Passive Mode (210,245,88,98,138,1)
        Command: MLSD
        Error: Connection timed out
        Error: Failed to retrieve directory listing
        Status: Connecting to 210.245.88.98:21...
        Status: Connection established, waiting for welcome message...

    Help me.
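
    Two hedged observations from that trace (my reading): the PASV reply advertises 210.245.88.98 on port 35329 (138*256+1), which is a different address than the 210.245.89.95 the client connected to, and a port outside the opened 50400-50600 range -- so the data connection times out. Pure-FTPd can be told explicitly what to advertise; these are usually the -P (forced public IP) and -p (passive port range) flags:

        # Sketch: advertise the public address and keep passive data
        # connections inside the range the firewall already allows.
        pure-ftpd -P 210.245.89.95 -p 50400:50600 &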

  • Single m0n0wall - Two LAN Subnets - How To Setup

    - by SnAzBaZ
    I have two LAN subnets that I need to link together: 192.168.4.0/24 and 192.168.5.0/24. There is a m0n0wall running on 192.168.4.1; its LAN connection goes out to our network switch, and its WAN port goes out to our ADSL modem. WAN is connected via PPPoE. The 192.168.4.0 subnet contains all of our office workstations. The 192.168.5.0 subnet contains development servers and test machines that need to obtain internet access and be "managed" by computers on the 192.168.4.0 subnet, but need to be on their own subnet as well. I have a DrayTek 2820N configured on 192.168.5.1, with its WAN2 port configured as 192.168.4.25 and a default gateway of 192.168.4.1. Machines on the 5.0 subnet can connect to the internet via the m0n0wall just fine. I configured a static route on the m0n0wall LAN interface: network 192.168.5.0/24, gateway 192.168.4.25. Machines on the 5.0 subnet can ping machines on the 4.0 network, but the reverse does not work. I configured a new firewall rule on the m0n0wall that allows any traffic on the LAN interface with a source IP of 192.168.4.25. The DrayTek firewall is currently configured to pass all traffic regardless. When I try to ping a machine in the 5.0 subnet from 4.0, I see this in my m0n0wall log:

        BLOCK 14:45:27.888157 LAN 192.168.4.25 192.168.4.37, type echoreply/0 ICMP

    So the reply is being sent from the 5.0 subnet but is not being allowed to reach my workstation, because the firewall is blocking it. Why is the firewall blocking it? I hope the explanation of my network is clear; please ask if you require further clarification. Thank you.

  • SeLinux blocking connection to sshd on Ubuntu 9.10

    - by Barton Chittenden
    When I try to log on to my laptop, which runs Ubuntu 9.10, the server rejects my login attempts. Checking /var/log/auth.log, I see the following:

        Feb 14 12:41:16 tiger-laptop sshd[6798]: error: ssh_selinux_getctxbyname: Failed to get default SELinux security context for tiger

    I googled for this and ran across a thread (http://www.spinics.net/lists/fedora-.../msg13049.html). Here's the part that I think relates to the problem I'm having:

        > What's wrong on my system? Why is it not possible to log in even if
        > SELinux is in permissive mode? Any suggestions?

        I'd start by trying to figure out why sshd isn't running in sshd_t
        (it seems to be running in sysadm_t). Paul.

        Yes, sshd is running in sysadm_t:

            ps axZ | grep sshd
            system_u:system_r:sysadm_t   3632 ?  Ss  0:00 /usr/sbin/sshd -o PidFile=/var/run/sshd.init.pi

            ls -Z /usr/sbin/sshd
            system_u:object_r:sshd_exec_t  /usr/sbin/sshd

        Don't know why it's not sshd_t. I didn't modify anything. It's a
        standard installation of SLES 11 with the default reference policy
        from Tresys. Maybe this code snippet from
        policy/modules/services/ssh.te is responsible for that:

            ## Allow ssh logins as sysadm_r:sysadm_t
            gen_tunable(ssh_sysadm_login, true)

        Any ideas?

        Do you have the boolean init_upstart set to on? If not, try setting
        it to on. I do not believe the ssh_sysadm_login boolean works
        currently, but I may be mistaken.

        Yeah, setting init_upstart to on did the trick! THANKS A LOT!
        Do you know why this prevents the user from logging in through ssh
        even if SELinux is set to permissive?

    OK, so the million dollar question is: where do I set "init_upstart=1"? It's not clear from the context which configuration file needs to be edited, and I'm not at all familiar with SELinux configuration.
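
    A hedged answer sketch (standard SELinux tooling, not specific to that thread): booleans aren't set by editing a config file; they're flipped with setsebool, and -P stores the change persistently in the policy:

        getsebool init_upstart        # inspect the current value
        setsebool -P init_upstart 1   # set it on, persisting across reboots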

  • How to set up a PRIVATE vimwiki on Dropbox.com

    - by Zongheng Yang
    Hi everyone, I assume those who are reading this page know what vimwiki and dropbox.com are and what they are for, so I'll go straight to my confusion. The common way of setting up a PRIVATE vimwiki on Dropbox is simply to put your vimwiki directories under the Dropbox folder (but not Dropbox/Public/, because that would be PUBLIC). Dropbox allows directly viewing HTML with a dropbox.com/* URL: for example, an index.html can be accessed by the URL https://dl-web.dropbox.com/get/Wiki/html/index.html?w=bfead71a, where a specified string, ?w=bfead71a, is appended after the file name. Hence, if inside index.html there is a reference to A.html, located in the same folder index.html is in, A.html has to be accessed using a URL like https://dl-web.dropbox.com/get/Wiki/html/A.html?w=SPECIFIED_STRING. But it is seemingly impossible to hack vimwiki to make the hrefs in the converted HTML files come out corrected in this way. Is there some approach that can resolve this problem? I hope I make myself clear. If you have any questions, please ask me for further explanation. Thank you!
