Search Results

Search found 1218 results on 49 pages for 'env'.

Page 36/49

  • SSI includes not working on Debian with Apache

    - by Mike
    I'm trying to get SSI to work on Debian running Apache, however the .shtml files are not being parsed. From a PHP file with phpinfo() I can see that the following show up in the loaded modules section:

        mod_mime_xattr
        mod_mime
        mod_mime_magic

    In /etc/apache2/mods-enabled/mime.conf I have (among other things):

        AddType text/html .shtml
        AddOutputFilter INCLUDES .shtml

    In /etc/apache2/sites-enabled/domain.com.conf (for the virtual host in question) I have:

        <Directory /home/username/public_html>
            Options +Includes
            allow from all
            AllowOverride All
        </Directory>

    and for good measure, I added the following as well:

        <Directory />
            Options +Includes
        </directory>

    In the user's .htaccess file, I tried adding:

        Options +Includes
        AddType text/html shtml
        AddHandler server-parsed shtml

    Nothing seems to work. How can I even debug this?

    Edit: Here is the output of ls /etc/apache2/mods-enabled/ in case this helps:

        actions.conf          dav_svn.load          proxy_balancer.load
        actions.load          deflate.conf          proxy.conf
        alias.conf            deflate.load          proxy_connect.load
        alias.load            dir.conf              proxy_http.load
        auth_basic.load       dir.load              proxy.load
        auth_digest.load      env.load              python.load
        authn_file.load       fcgid.conf            reqtimeout.conf
        authz_default.load    fcgid.load            reqtimeout.load
        authz_groupfile.load  mime.conf             rewrite.load
        authz_host.load       mime.load             ruby.load
        authz_user.load       mime_magic.conf       setenvif.conf
        autoindex.conf        mime_magic.load       setenvif.load
        autoindex.load        mime-xattr.load       ssl.conf
        cgi.load              negotiation.conf      ssl.load
        dav_fs.conf           negotiation.load      status.conf
        dav_fs.load           php5.conf             status.load
        dav.load              php5.load             suexec.load
        dav_svn.conf          proxy_balancer.conf
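    Worth noting while debugging: SSI parsing in Apache 2 is done by mod_include, and no include.load appears in the mods-enabled listing above. A minimal sketch of enabling it on Debian (assuming the stock apache2 tooling):

        # enable mod_include, which provides the INCLUDES output filter,
        # then reload Apache so the AddOutputFilter line takes effect
        a2enmod include
        /etc/init.d/apache2 force-reload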


  • Remote Scripted Installation of Sun/Oracle JRE

    - by chrisbunney
    I'm attempting to automate the installation of a Debian server (debian 6.0 squeeze 64bit). Part of the installation requires the Sun JRE package to be installed. This package has a licence agreement, which has to be accepted. I have a script which uses the following lines to accept and install the JRE:

        echo "sun-java6-bin shared/accepted-sun-dlj-v1-1 boolean true" | debconf-set-selections
        apt-get install -y sun-java6-jre

    This works fine when executing the script locally. However, I need to execute the script remotely using the ssh command, e.g.:

        ssh -i keyFile root@hostname './myScript'

    This doesn't work. In particular, it fails on apt-get install -y sun-java6-jre. It would seem that in spite of me setting the licence agreement to accepted, when run remotely in this manner it is ignored. Despite setting the value to true, I still get prompted to manually accept the agreement when I run this command:

        ssh -i keyFile root@hostname 'apt-get install -y sun-java6-jre'

    I suspect it is something to do with environment that is taken care of when running a proper terminal session, but have no idea what to try next to fix it. So, what do I have to do to get this command (and hence my deployment script) to run correctly when executing it remotely? Or is there an alternative way that allows me to install the JRE remotely by another means?

    Edit 0: I have compared the output of env when executed remotely via ssh and when executed via a local terminal session. The only difference between the outputs is that the local terminal session has the additional value TERM=xterm.
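    A sketch of one avenue to try, assuming the prompt comes from debconf falling back to an interactive frontend when no terminal is present:

        # force debconf's noninteractive frontend for the remote run
        ssh -i keyFile root@hostname \
            'DEBIAN_FRONTEND=noninteractive apt-get install -y sun-java6-jre'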


  • GNU screen, how to get current session name programmatically

    - by Jimm Chen
    [ This can be considered step 2 of my previous question Is it possible to change GNU screen session name after created? ] Actually, I'd like to write a script that can display the current screen session name and change the current session name. For example:

        sren armcross

    It will change the session name to armcross (ARM gcc cross compiler) and output something like:

        screen session name changed from '25278.pts-15.linux-ic37' to 'armcross'

    So, the key question now is how to get the current session name. It is needed not only to display the old session name: according to Is it possible to change GNU screen session name after created?, I have to know it (to pass to -d -r) before I can change it to something else. Can we use $STY for the current session name? No. $STY will not change after you have changed the session name to a user-defined one. However, in the command

        screen -d -r <oldsessname> -X sessionname armcross

    <oldsessname> should be the user-defined name (if one was ever defined) instead of $STY; otherwise, screen spouts the error "No screen session found." Maybe there is a verbose way: use screen -list to list all sessions (user-defined names are listed), then match the pid part from $STY against those listed sessions to find the current session's user-defined name. It should not be so verbose for such a straightforward question. Don't you think so? The -d -D and -r -R options seem to expose too much implementation detail to screen's user. It seems that to rename a session, you have to detach it, then do the rename, then reattach it. Right?

    My env: opensuse 11.3, GNU screen 4.00.03 (FAU) 23-Oct-06

    Thank you.
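    For what it's worth, a small shell sketch of the screen -list matching approach described above (untested against 4.00.03; the field layout of screen -ls output is an assumption):

        # take the pid part of $STY and find the matching entry in `screen -ls`,
        # whose first field shows the session's current (possibly renamed) name
        pid=${STY%%.*}
        screen -ls | awk -v pid="$pid" '$1 ~ ("^" pid "\\.") { print substr($1, length(pid) + 2) }'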


  • How to push changes from Test server to Live server?

    - by anonymous
    As a beginner, I finally noticed the issue with making changes to the live server I've been working on: now that I have a couple of users on it, it matters that I bring it down so often. I created an EC2 image of my live server and set up a separate instance on EC2, so now I have 2 EC2 instances, Stage and Production. I set up GitHub and push changes to stage and test my code there, and when it's all done and working, I push it to the production branch, and everything is good. There is a slight wrinkle here: I name my files config_stage.js and config_production.js, set up .gitignore on each server, and have my code read the ENV flags and load the appropriate config. Is this the correct approach? And my main question is: how do you keep track of non-code changes to the server? For example, I installed HAProxy, Stunnel, Redis, MongoDB and several other things onto the Stage server for testing, and now that it's all working and good, how do I deploy them to production? Right now, I'm just keeping track of everything I installed and copying configuration files over, which is very tedious, and I'm afraid I may have missed a step somewhere. Is there a better way to port these changes over from my test server to my live server?
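    For reference, a minimal sketch of the env-flag config pattern described above, in Node.js (file names are from the question; the NODE_ENV variable name and the 'stage' default are assumptions):

        // pick the config module based on the environment the server runs in
        var env = process.env.NODE_ENV || 'stage';
        var config = require('./config_' + env + '.js');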


  • Apache & SVN on Ubuntu - Post-commit hook fails silently, pre-commit hook “Permission Denied”

    - by 113169587962668775787
    I've been struggling for the past couple days to get post-commit email notifications working on my SVN server (running via HTTP with Apache2 on Ubuntu 9.10). SVN commits work fine, but for some reason the hooks are not being properly executed. Here are the configuration settings: users access the repo via HTTP with the apache dav_svn module (I created users/passwords via htpasswd in a dav_svn.passwd file). dav_svn.conf:

        <Location /svn/repos>
            DAV svn
            SVNPath /home/svn/repos
            AuthType Basic
            AuthName "Subversion Repository"
            AuthUserFile /etc/apache2/dav_svn.passwd
            Require valid-user
        </Location>

    I created a post-commit hook file that writes a simple message to a file in the repository root, /home/svn/repos/hooks/post-commit:

        #!/bin/sh
        REPOS="$1"
        REV="$2"
        /bin/echo 'worked' > ${REPOS}/postcommit.log

    I set the entire repository to be owned by www-data (the apache user), and assigned 755 permissions to the post-commit script. When I test the post-commit script using the www-data user in an empty environment, it works:

        sudo -u www-data env - /home/svn/repos/hooks/post-commit /home/svn/repos 7

    But when I commit on a client machine, the commit is successful, yet the post-commit script does not seem to be executed. I also tried running a simple script for the pre-commit hook, and I get an error, even with an empty pre-commit script:

        Commit failed (details follow):
        Can't create null stdout for hook '/home/svn/repos/hooks/pre-commit': Permission denied

    I did a few searches on Google for this error and I presume that this is an issue with the apache user (www-data) not having adequate permissions, specifically to open /dev/null. I also read that the reason post-commit fails silently is that it doesn't report with stdout. Anyway, I've also tried giving the apache user (www-data) ownership of the entire repository, and edited the apache virtualhost to allow operations on the server root, and I'm still getting permission denied. /etc/apache2/sites-available/primarydomain.conf:

        <Directory />
            Options FollowSymLinks
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>

    Any ideas/suggestions would be greatly appreciated! Thanks
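    A quick sketch for checking the /dev/null theory suggested by the error message (an assumption worth ruling out before touching the repo again):

        # /dev/null should be a character device writable by everyone
        ls -l /dev/null        # expect: crw-rw-rw- 1 root root 1, 3 ...
        # if it was clobbered into a regular file, recreate it as root
        rm /dev/null && mknod -m 666 /dev/null c 1 3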


  • email output of powershell script

    - by Gordon Carlisle
    I found this wonderful script that outputs the status of the current DFS backlog to the powershell console. This works great, but I need the script to email me so I can schedule it to run nightly. I have tried using the Send-MailMessage command, but can't get it to work, mainly because my powershell skills are very weak. I believe most of the issue revolves around the script using the Write-Host command. While the coloring is nice, I would much rather have it email me the results. I also need the solution to be able to specify a mail server, since the DFS servers don't have email capability. Any help or tips are welcome and appreciated. Here is the code:

        $RGroups = Get-WmiObject -Namespace "root\MicrosoftDFS" -Query "SELECT * FROM DfsrReplicationGroupConfig"
        $ComputerName=$env:ComputerName
        $Succ=0
        $Warn=0
        $Err=0
        foreach ($Group in $RGroups)
        {
            $RGFoldersWMIQ = "SELECT * FROM DfsrReplicatedFolderConfig WHERE ReplicationGroupGUID='" + $Group.ReplicationGroupGUID + "'"
            $RGFolders = Get-WmiObject -Namespace "root\MicrosoftDFS" -Query $RGFoldersWMIQ
            $RGConnectionsWMIQ = "SELECT * FROM DfsrConnectionConfig WHERE ReplicationGroupGUID='"+ $Group.ReplicationGroupGUID + "'"
            $RGConnections = Get-WmiObject -Namespace "root\MicrosoftDFS" -Query $RGConnectionsWMIQ
            foreach ($Connection in $RGConnections)
            {
                $ConnectionName = $Connection.PartnerName.Trim()
                if ($Connection.Enabled -eq $True)
                {
                    if (((New-Object System.Net.NetworkInformation.ping).send("$ConnectionName")).Status -eq "Success")
                    {
                        foreach ($Folder in $RGFolders)
                        {
                            $RGName = $Group.ReplicationGroupName
                            $RFName = $Folder.ReplicatedFolderName
                            if ($Connection.Inbound -eq $True)
                            {
                                $SendingMember = $ConnectionName
                                $ReceivingMember = $ComputerName
                                $Direction="inbound"
                            }
                            else
                            {
                                $SendingMember = $ComputerName
                                $ReceivingMember = $ConnectionName
                                $Direction="outbound"
                            }
                            $BLCommand = "dfsrdiag Backlog /RGName:'" + $RGName + "' /RFName:'" + $RFName + "' /SendingMember:" + $SendingMember + " /ReceivingMember:" + $ReceivingMember
                            $Backlog = Invoke-Expression -Command $BLCommand
                            $BackLogFilecount = 0
                            foreach ($item in $Backlog)
                            {
                                if ($item -ilike "*Backlog File count*")
                                {
                                    $BacklogFileCount = [int]$Item.Split(":")[1].Trim()
                                }
                            }
                            if ($BacklogFileCount -eq 0)
                            {
                                $Color="white"
                                $Succ=$Succ+1
                            }
                            elseif ($BacklogFilecount -lt 10)
                            {
                                $Color="yellow"
                                $Warn=$Warn+1
                            }
                            else
                            {
                                $Color="red"
                                $Err=$Err+1
                            }
                            Write-Host "$BacklogFileCount files in backlog $SendingMember->$ReceivingMember for $RGName" -fore $Color
                        } # Closing iterate through all folders
                    } # Closing If replies to ping
                } # Closing If Connection enabled
            } # Closing iteration through all connections
        } # Closing iteration through all groups
        Write-Host "$Succ successful, $Warn warnings and $Err errors from $($Succ+$Warn+$Err) replications."

    Thanks, Gordon
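    A hedged sketch of the Send-MailMessage route: build the report as text instead of writing to the console, then relay it through an explicit SMTP server (server name and addresses are placeholders to replace):

        # collect report lines in an array instead of Write-Host
        $Report = @()
        # ... inside the folder loop, in place of Write-Host:
        $Report += "$BacklogFileCount files in backlog $SendingMember->$ReceivingMember for $RGName"
        # ... after all the loops, append the summary and send:
        $Report += "$Succ successful, $Warn warnings and $Err errors from $($Succ+$Warn+$Err) replications."
        Send-MailMessage -SmtpServer "smtp.example.com" `
            -From "dfs-report@example.com" -To "you@example.com" `
            -Subject "DFS backlog report from $ComputerName" `
            -Body ($Report | Out-String)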


  • Setting up Apache 2.2 + FastCGI + SuExec + PHP-FPM on Centos 6

    - by mr1031011
    I'm trying to follow this very detailed instruction here; I simply changed from the www-data user to the apache user, and am using /var/www/hosts/sitename/public_html instead of /home/user/public_html. However, I spent the whole day trying to figure out why the php file content is displayed without being parsed correctly. I just can't seem to figure this out. Below is my current config.

    /etc/httpd/conf.d/fastcgi.conf:

        User apache
        Group apache
        LoadModule fastcgi_module modules/mod_fastcgi.so
        # dir for IPC socket files
        FastCgiIpcDir /var/run/mod_fastcgi
        # wrap all fastcgi script calls in suexec
        FastCgiWrapper On
        # global FastCgiConfig can be overridden by FastCgiServer options in vhost config
        FastCgiConfig -idle-timeout 20 -maxClassProcesses 1
        # sample PHP config
        # see /usr/share/doc/mod_fastcgi-2.4.6 for php-wrapper script
        # don't forget to disable mod_php in /etc/httpd/conf.d/php.conf!
        #
        # to enable privilege separation, add a "SuexecUserGroup" directive
        # and chown the php-wrapper script and parent directory accordingly
        # see also http://www.brandonturner.net/blog/2009/07/fastcgi_with_php_opcode_cache/
        # FastCgiServer /var/www/www-data/php5-fcgi
        #AddType application/x-httpd-php .php
        AddHandler php-fcgi .php
        Action php-fcgi /fcgi-bin/php5-fcgi
        Alias /fcgi-bin/ /var/www/www-data/
        #FastCgiExternalServer /usr/lib/cgi-bin/php5-fcgi -socket /tmp/php5-fpm.sock -pass-header Authorization
        #DirectoryIndex index.php
        # <Location /fcgi-bin/>
        #     Order Deny,Allow
        #     Deny from All
        #     Allow from env=REDIRECT_STATUS
            SetHandler fcgid-script
            Options +ExecCGI
        </Location>

    /etc/httpd/conf.d/vhost.conf:

        <VirtualHost>
            DirectoryIndex index.php index.html index.shtml index.cgi
            SuexecUserGroup www.mysite.com mygroup
            Alias /fcgi-bin/ /var/www/www-data/www.mysite.com/
            DocumentRoot /var/www/hosts/mysite.com/w/w/w/www/
            <Directory /var/www/hosts/mysite.com/w/w/w/www/>
                Options -Indexes FollowSymLinks
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
        </VirtualHost>

    PS:
    1. Also, with PHP 5.5, do I even need FPM or is it already included?
    2. I'm using mod_fastcgi, not sure if this is the problem and if I should switch to mod_fcgid? There seem to be conflicting reports on the internet considering which one is better. I have many virtual hosts running on the machine and hope to be able to provide each user with their own opcache.
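    For reference, a minimal sketch of the suexec-style php wrapper that the fastcgi.conf comments point to (path and child counts are assumptions; with FastCgiWrapper On it must be owned by the SuexecUserGroup user). As an aside, PHP-FPM has shipped in PHP core since 5.3.3, and 5.5 bundles Zend OPcache.

        #!/bin/sh
        # e.g. /var/www/www-data/www.mysite.com/php5-fcgi
        PHP_FCGI_CHILDREN=4
        export PHP_FCGI_CHILDREN
        PHP_FCGI_MAX_REQUESTS=1000
        export PHP_FCGI_MAX_REQUESTS
        exec /usr/bin/php-cgi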


  • Problems building nodejs on MacOS Snow Leopard

    - by mrwooster
    I am having trouble building nodejs on MacOS Snow Leopard. I think it might have something to do with my PATH variable not being set correctly for the developer tools location. For some reason, the Developer tools (gcc, g++, make etc) are all stored in /Developer/usr/bin. I added it to my PATH variable as follows:

        $ export PATH=$PATH:/Developer/usr/bin
        $ echo $PATH
        /opt/local/bin:/opt/local/sbin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/local/git/bin:/usr/X11/bin:/Developer/usr/bin

    When I try to configure, it complains about not finding open-ssl. OK, not a big problem, so I try with --without-ssl:

        $ ./configure --without-ssl
        Checking for program g++ or c++       : /Developer/usr/bin/g++
        Checking for program cpp              : /Developer/usr/bin/cpp
        Checking for program ar               : /usr/bin/ar
        Checking for program ranlib           : /Developer/usr/bin/ranlib
        Checking for g++                      : ok
        Checking for program gcc or cc        : /Developer/usr/bin/gcc
        Checking for gcc                      : ok
        Checking for library dl               : yes
        Checking for library util             : yes
        Checking for library rt               : not found
        --- libeio ---
        Checking for library pthread          : yes
        Checking for function pthread_create  : not found
        /Users/Guy/git_src/node/node/deps/libeio/wscript:13: error: the configuration failed (see '/Users/Guy/git_src/node/node/build/config.log')

    Anyone know how I can get round this? I am suspicious that it might be something to do with the PATH or another ENV variable, but not sure.

    Thanks
    G


  • Sshfs is not working

    - by Devrim
    Hi, when I run

        sshpass -p 'mypass' sshfs 'root'@'68.19.40.16':/ '/dir' -o StrictHostKeyChecking=no,debug

    it successfully mounts, but it runs in the foreground. When I run it without the 'debug' parameter, it doesn't mount at all. The server is ubuntu 8.04. Any ideas why?

    UPDATE: When I run the command as ROOT it does mount. It doesn't work with other users. Here is the output of an unsuccessful mount:

        $ sshpass -p 'pass' sshfs 'root'@'68.1.1.1':/ '/s6' -o StrictHostKeyChecking=no,sshfs_debug,loglevel=debug
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: Applying options for *
        debug1: Connecting to 68.1.1.1 [68.1.1.1] port 22.
        debug1: Connection established.
        debug1: identity file /var/www/vhosts/devrim.kodingen.com/.ssh/id_rsa type -1
        debug1: identity file /var/www/vhosts/devrim.kodingen.com/.ssh/id_dsa type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_5.1p1 Debian-5
        debug1: match: OpenSSH_5.1p1 Debian-5 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_4.7p1 Debian-8ubuntu1.2
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-cbc hmac-md5 none
        debug1: kex: client->server aes128-cbc hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        Warning: Permanently added '68.1.1.1' (RSA) to the list of known hosts.
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey,password
        debug1: Next authentication method: publickey
        debug1: Trying private key: /var/www/vhosts/devrim.kodingen.com/.ssh/id_rsa
        debug1: Trying private key: /var/www/vhosts/devrim.kodingen.com/.ssh/id_dsa
        debug1: Next authentication method: password
        debug1: Authentication succeeded (password).
        debug1: channel 0: new [client-session]
        debug1: Entering interactive session.
        debug1: Sending environment.
        debug1: Sending env LANG = en_GB.UTF-8
        debug1: Sending subsystem: sftp
        Server version: 3
        debug1: channel 0: free: client-session, nchannels 1
        debug1: fd 0 clearing O_NONBLOCK
        debug1: Killed by signal 1.
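    A guess worth checking, sketched below: on Ubuntu 8.04, mounting FUSE filesystems as a non-root user normally requires membership in the fuse group, which would explain root succeeding where other users fail:

        # /dev/fuse is usually root:fuse with mode 660
        ls -l /dev/fuse
        # add the affected user to the fuse group, then log in again
        sudo adduser someuser fuse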


  • SSH session closing whilst virtualenv session stays open (I think)

    - by ing0
    I've been developing some sites using Flask recently (running on debian within a virtualenv), and when I am testing I can run it on a port, let's say port 5000. So I run the script like so:

        . env/bin/activate    <- go into virtual environment
        python file.py        <- run python script

    And I will be given this message:

        Running on http://0.0.0.0:5000/

    So this all works great and I can access my site on this port fine. However... my rubbish ISP always does this thing where it resets something around 1am every morning. I have no idea what this is; everything runs like normal, but I always get disconnected from any SSH sessions open. This leaves the script running, and all I can do is call:

        lsof -i

    which will show me the process. But if I kill it and then rerun it, things get weird. The

        Running on http://0.0.0.0:5000

    message still shows, but I cannot connect to it anymore. I've tried changing the port number, and it seems the only thing that works is trying again later on or on another day. Now I'm assuming that something on my server resets in between these times, and I would like to think it was maybe that virtualenv session timing out, but I cannot find out how to do this manually. Does anyone know?
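    As an aside, a common way to keep the dev server alive independently of the SSH session (a sketch, not a diagnosis of the ISP reset itself):

        . env/bin/activate
        nohup python file.py >> flask.log 2>&1 &
        # or run it inside a multiplexer you can reattach to after a disconnect:
        # screen -S flaskdev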


  • Apache 2.2, worker mpm, mod_fcgid and PHP: Can't apply process slot

    - by mopoke
    We're having an issue on an apache server where every 15 to 20 minutes it stops serving PHP requests entirely. On occasions it will return a 503 error; other times it will recover enough to serve the page, but only after a delay of a minute or more. Static content is still served during that time. In the log file, there are errors reported along the lines of:

        [Wed Sep 28 10:45:39 2011] [warn] mod_fcgid: can't apply process slot for /xxx/ajaxfolder/ajax_features.php
        [Wed Sep 28 10:45:41 2011] [warn] mod_fcgid: can't apply process slot for /xxx/statics/poll/index.php
        [Wed Sep 28 10:45:45 2011] [warn] mod_fcgid: can't apply process slot for /xxx/index.php
        [Wed Sep 28 10:45:45 2011] [warn] mod_fcgid: can't apply process slot for /xxx/index.php

    There is RAM free and, indeed, it seems that more php processes get spawned. /server-status shows lots of threads in the "W" state as well as some FastCGI processes in "Exiting(communication error)" state. I rebuilt mod_fcgid from source as the packaged version was quite old. It's using the current stable version (2.3.6) of mod_fcgid.

    FCGI config:

        FcgidBusyScanInterval 30
        FcgidBusyTimeout 60
        FcgidIdleScanInterval 30
        FcgidIdleTimeout 45
        FcgidIOTimeout 60
        FcgidConnectTimeout 20
        FcgidMaxProcesses 100
        FcgidMaxRequestsPerProcess 500
        FcgidOutputBufferSize 1048576

    System info:

        Linux xxx.com 2.6.28-11-server #42-Ubuntu SMP Fri Apr 17 02:45:36 UTC 2009 x86_64 GNU/Linux
        DISTRIB_ID=Ubuntu
        DISTRIB_RELEASE=9.04
        DISTRIB_CODENAME=jaunty
        DISTRIB_DESCRIPTION="Ubuntu 9.04"

    Apache info:

        Server version: Apache/2.2.11 (Ubuntu)
        Server built:   Aug 16 2010 17:45:55
        Server's Module Magic Number: 20051115:21
        Server loaded:  APR 1.2.12, APR-Util 1.2.12
        Compiled using: APR 1.2.12, APR-Util 1.2.12
        Architecture:   64-bit
        Server MPM:     Worker
          threaded:     yes (fixed thread count)
          forked:       yes (variable process count)
        Server compiled with....
         -D APACHE_MPM_DIR="server/mpm/worker"
         -D APR_HAS_SENDFILE
         -D APR_HAS_MMAP
         -D APR_HAVE_IPV6 (IPv4-mapped addresses enabled)
         -D APR_USE_SYSVSEM_SERIALIZE
         -D APR_USE_PTHREAD_SERIALIZE
         -D SINGLE_LISTEN_UNSERIALIZED_ACCEPT
         -D APR_HAS_OTHER_CHILD
         -D AP_HAVE_RELIABLE_PIPED_LOGS
         -D DYNAMIC_MODULE_LIMIT=128
         -D HTTPD_ROOT=""
         -D SUEXEC_BIN="/usr/lib/apache2/suexec"
         -D DEFAULT_PIDLOG="/var/run/apache2.pid"
         -D DEFAULT_SCOREBOARD="logs/apache_runtime_status"
         -D DEFAULT_ERRORLOG="logs/error_log"
         -D AP_TYPES_CONFIG_FILE="/etc/apache2/mime.types"
         -D SERVER_CONFIG_FILE="/etc/apache2/apache2.conf"

    Apache modules loaded:

        alias.load auth_basic.load authn_file.load authz_default.load
        authz_groupfile.load authz_host.load authz_user.load autoindex.load
        cgi.load deflate.load dir.load env.load expires.load fcgid.load
        headers.load include.load mime.load negotiation.load rewrite.load
        setenvif.load ssl.load status.load suexec.load

    PHP info:

        PHP 5.2.6-3ubuntu4.6 with Suhosin-Patch 0.9.6.2 (cli) (built: Sep 16 2010 19:51:25)
        Copyright (c) 1997-2008 The PHP Group
        Zend Engine v2.2.0, Copyright (c) 1998-2008 Zend Technologies
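    One knob absent from the FCGI config above is the per-class process cap; a hedged sketch (directive names are from mod_fcgid 2.3.x, the values are guesses, not a verified fix for this symptom):

        # FcgidMaxProcesses is the global cap; each wrapper "class" is further
        # limited by FcgidMaxProcessesPerClass (default 100), so slot exhaustion
        # can hit per class long before the global limit is reached
        FcgidMinProcessesPerClass 0
        FcgidMaxProcessesPerClass 50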


  • Swap space maxing out - JVM dying

    - by travega
    I have a server running 3 WordPress instances, MySql, Apache and the Play framework 2.0 on a 64m initial & max heap. If I increase the max heap of the JVM that Play is running in, even by 16m, I see the 128m of swap space steadily fill up until the JVM dies. I notice that it is only when I am plugging away at the WordPress sites that the JVM will die. I assume this is because the JVM is not asking for memory at the time, so it gets collected. I notice that when I restart Apache I reclaim about half of my swap and RAM. So is there some way I can configure Apache to consume less memory? Also, what could be causing the swap space to get so heavily thrashed with just 16m added to the max heap size of the JVM?

    Server running: Ubuntu 12.04
    RAM: 408m
    Swap: 128m

    Apache mods:

        alias.conf alias.load auth_basic.load authn_file.load authz_default.load
        authz_groupfile.load authz_host.load authz_user.load autoindex.conf
        autoindex.load cgi.load deflate.conf deflate.load dir.conf dir.load
        env.load mime.conf mime.load negotiation.conf negotiation.load php5.conf
        php5.load proxy_ajp.load proxy_balancer.conf proxy_balancer.load
        proxy.conf proxy_connect.load proxy_ftp.conf proxy_ftp.load
        proxy_http.load proxy.load reqtimeout.conf reqtimeout.load rewrite.load
        setenvif.conf setenvif.load status.conf status.load
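    Since mod_php (php5.load above) makes every Apache child carry a PHP interpreter, the usual first lever on a box this small is capping prefork children; a hedged sketch for apache2.conf (the values are rough guesses for ~400m of RAM, not measured):

        <IfModule mpm_prefork_module>
            StartServers         2
            MinSpareServers      2
            MaxSpareServers      4
            MaxClients           10
            MaxRequestsPerChild  500
        </IfModule>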


  • upstart config to start sync daemon as non-root user

    - by Rudiger Wolf
    I am planning to use inosync to sync data from a master server to several client servers. I have created a user called rsyncuser on both master and slaves, with access permissions and passwordless ssh access from the master to the slave servers. Inosync is working when I use it from the command line as rsyncuser. Next I want this to start automatically when the server is turned on. I figured upstart is the way to get this working. I am unable to find the right upstart command to get this working. Here is my upstart conf file. The problem seems to be around running "inosync -d -c /etc/inosync/inosync_rsyncuser.py" as a given user. As you can see, I have tried a number of various options!

        description "start inosync to sync data to other CDN Servers as rsyncuser"
        console output
        #start on startup
        #stop on shutdown
        start on (net-device-up and local-filesystems)
        stop on runlevel [016]
        #start on runlevel [2345]
        #stop on runlevel [!2345]
        #kill timeout 30
        env RUN_AS_USER=rsyncuser
        expect fork
        script
            echo "Inosync updtart job seems to have started" >> /tmp/upstart.log
            # exec sudo -u rsyncuser -c "ls -la" >> /tmp/upstart.log 2>&1
            # LOGFILE=/var/log/logfile.`date +%Y-%m-%d`.log
            # exec su - $RUN_AS_USER -c "inosync -d -c /etc/inosync/inosync_rsyncuser.py" >> $LOGFILE 2>&1
            # exec su -c "ls -la" >> /tmp/upstart.log 2>&1
            # emit inosync_running
        end script
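    A sketch of one way to pin the job to rsyncuser on upstart versions that lack a setuid stanza (the log path is an assumption):

        description "inosync data sync as rsyncuser"
        start on (net-device-up and local-filesystems)
        stop on runlevel [016]
        respawn
        # pid tracking through su plus a self-daemonizing "-d" is fragile;
        # if inosync can run in the foreground, dropping -d is simpler
        exec su -s /bin/sh -c 'exec inosync -d -c /etc/inosync/inosync_rsyncuser.py' rsyncuser >> /var/log/inosync.log 2>&1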


  • Lock down Wiki access to password only but remain open to a subnet via .htaccess

    - by Treffynnon
    Basically we have a Wiki that has some sensitive information stored in it. Not the best, I know, but my predecessor set it up. I want to be able to request password access from anyone who is not on the local network subnet. Those on the local subnet should be able to proceed without entering a password. The following .htaccess does not seem to work any more, as it is letting non-local access without requiring the password:

        AuthName "Our Wiki"
        AuthType Basic
        AuthUserFile /path/to/passwd/file
        AuthGroupFile /dev/null
        Require valid-user
        Allow from 192.168
        Satisfy Any
        order deny,allow

    And I cannot work out why. The WikkaWiki it is supposed to be protecting was recently upgraded, which clobbered the .htaccess file, so I restored the above from memory/googling. Maybe I am missing an important directive? The full .htaccess is as follows:

        AuthName "Our Wiki"
        AuthType Basic
        AuthUserFile /path/to/passwd/file
        AuthGroupFile /dev/null
        Require valid-user
        Allow from 192.168
        Satisfy Any
        SetEnvIfNoCase Referer ".*($LIST_OF_ADULT_WORDS).*" BadReferrer
        order deny,allow
        deny from env=BadReferrer
        <IfModule mod_rewrite.c>
            # turn on rewrite engine
            RewriteEngine on
            RewriteBase /
            # if request is a directory, make sure it ends with a slash
            RewriteCond %{REQUEST_FILENAME} -d
            RewriteRule ^(.*/[^/]+)$ $1/
            # if not rewritten before, AND requested file is wikka.php
            # turn request into a query for a default (unspecified) page
            RewriteCond %{QUERY_STRING} !wakka=
            RewriteCond %{REQUEST_FILENAME} wikka.php
            RewriteRule ^(.*)$ wikka.php?wakka= [QSA,L]
            # if not rewritten before, AND requested file is a page name
            # turn request into a query for that page name for wikka.php
            RewriteCond %{QUERY_STRING} !wakka=
            RewriteRule ^(.*)$ wikka.php?wakka=$1 [QSA,L]
        </IfModule>
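    For comparison, a hedged Apache 2.2-style arrangement of the same intent (untested here): without a deny-by-default, the host check passes for everyone and Satisfy Any never demands the password, which matches the observed behaviour:

        AuthName "Our Wiki"
        AuthType Basic
        AuthUserFile /path/to/passwd/file
        AuthGroupFile /dev/null
        Require valid-user
        Order deny,allow
        Deny from all
        Allow from 192.168.
        Satisfy Any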


  • Monit unable to start sidekiq on Opsworks server

    - by webdevtom
    I have used AWS OpsWorks to create some servers. I have Sidekiq running as part of my Rails application. When I deploy, Sidekiq restarts nicely. I am configuring Monit to watch the pid and start and stop Sidekiq if there are any issues. However, when Monit tries to start Sidekiq, I see that the wrong Ruby looks to be used:

        Oct 17 13:52:43 daitengu sidekiq: /usr/local/lib/ruby/gems/1.9.1/gems/bundler-1.3.4/lib/bundler/definition.rb:361:in `validate_ruby!': Your Ruby version is 1.8.7, but your Gemfile specified 1.9.3 (Bundler::RubyVersionMismatch)
        Oct 17 13:52:43 daitengu sidekiq: from /usr/local/lib/ruby/gems/1.9.1/gems/bundler-1.3.4/lib/bundler.rb:116:in `setup'
        Oct 17 13:52:43 daitengu sidekiq: from /usr/local/lib/ruby/gems/1.9.1/gems/bundler-1.3.4/lib/bundler/setup.rb:17

    When I run the command from the cli, Sidekiq launches correctly:

        $> cd /srv/www/myapp/current && RAILS_ENV=production nohup /usr/local/bin/bundle exec sidekiq -C config/sidekiq.yml >> /srv/www/myapp/shared/log/sidekiq.log 2>&1 &
        $> ps -aef | grep sidekiq
        root     1236  1235  8 20:54 pts/0    00:00:50 sidekiq 2.11.0 myapp [0 of 25 busy]

    My sidekiq.monitrc file:

        check process unicorn with pidfile /srv/www/myapp/shared/pids/unicorn.pid
          start program = "/bin/bash -c 'cd /srv/www/myapp/current && /usr/local/bin/bundle exec unicorn_rails --env production --daemonize -c /srv/www/myapp/shared/config/unicorn.conf'"
          stop program = "/bin/bash -c 'kill -QUIT `cat /srv/www/myapp/shared/pids/unicorn.pid`'"
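    Monit starts programs with a minimal environment (often a bare PATH), so pinning PATH and RAILS_ENV inside the start command is the usual workaround; a sketch with paths assumed from the commands above:

        check process sidekiq with pidfile /srv/www/myapp/shared/pids/sidekiq.pid
          start program = "/bin/bash -c 'cd /srv/www/myapp/current && PATH=/usr/local/bin:/usr/bin:/bin RAILS_ENV=production nohup /usr/local/bin/bundle exec sidekiq -C config/sidekiq.yml -P /srv/www/myapp/shared/pids/sidekiq.pid >> /srv/www/myapp/shared/log/sidekiq.log 2>&1 &'"
          stop program = "/bin/bash -c 'kill -TERM `cat /srv/www/myapp/shared/pids/sidekiq.pid`'"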


  • Solving embarrassingly parallel problems using Python multiprocessing

    - by gotgenes
    How does one use multiprocessing to tackle embarrassingly parallel problems? Embarrassingly parallel problems typically consist of three basic parts:

    1. Read input data (from a file, database, tcp connection, etc.).
    2. Run calculations on the input data, where each calculation is independent of any other calculation.
    3. Write results of calculations (to a file, database, tcp connection, etc.).

    We can parallelize the program in two dimensions:

    - Part 2 can run on multiple cores, since each calculation is independent; order of processing doesn't matter.
    - Each part can run independently. Part 1 can place data on an input queue, part 2 can pull data off the input queue and put results onto an output queue, and part 3 can pull results off the output queue and write them out.

    This seems a most basic pattern in concurrent programming, but I am still lost in trying to solve it, so let's write a canonical example to illustrate how this is done using multiprocessing. Here is the example problem: given a CSV file with rows of integers as input, compute their sums. Separate the problem into three parts, which can all run in parallel:

    1. Process the input file into raw data (lists/iterables of integers)
    2. Calculate the sums of the data, in parallel
    3. Output the sums

    Below is a traditional, single-process bound Python program which solves these three tasks:

        #!/usr/bin/env python
        # -*- coding: UTF-8 -*-
        # basicsums.py
        """A program that reads integer values from a CSV file and writes out
        their sums to another CSV file.
        """

        import csv
        import optparse
        import sys

        def make_cli_parser():
            """Make the command line interface parser."""
            usage = "\n\n".join(["python %prog INPUT_CSV OUTPUT_CSV",
                    __doc__,
                    """
        ARGUMENTS:
            INPUT_CSV: an input CSV file with rows of numbers
            OUTPUT_CSV: an output file that will contain the sums\
        """])
            cli_parser = optparse.OptionParser(usage)
            return cli_parser

        def parse_input_csv(csvfile):
            """Parses the input CSV and yields tuples with the index of the row
            as the first element, and the integers of the row as the second
            element. The index is zero-index based.

            :Parameters:
            - `csvfile`: a `csv.reader` instance

            """
            for i, row in enumerate(csvfile):
                row = [int(entry) for entry in row]
                yield i, row

        def sum_rows(rows):
            """Yields a tuple with the index of each input list of integers
            as the first element, and the sum of the list of integers as the
            second element. The index is zero-index based.

            :Parameters:
            - `rows`: an iterable of tuples, with the index of the original row
              as the first element, and a list of integers as the second element

            """
            for i, row in rows:
                yield i, sum(row)

        def write_results(csvfile, results):
            """Writes a series of results to an outfile, where the first column
            is the index of the original row of data, and the second column is
            the result of the calculation. The index is zero-index based.

            :Parameters:
            - `csvfile`: a `csv.writer` instance to which to write results
            - `results`: an iterable of tuples, with the index (zero-based) of
              the original row as the first element, and the calculated result
              from that row as the second element

            """
            for result_row in results:
                csvfile.writerow(result_row)

        def main(argv):
            cli_parser = make_cli_parser()
            opts, args = cli_parser.parse_args(argv)
            if len(args) != 2:
                cli_parser.error("Please provide an input file and output file.")
            infile = open(args[0])
            in_csvfile = csv.reader(infile)
            outfile = open(args[1], 'w')
            out_csvfile = csv.writer(outfile)
            # gets an iterable of rows that's not yet evaluated
            input_rows = parse_input_csv(in_csvfile)
            # sends the rows iterable to sum_rows() for results iterable, but
            # still not evaluated
            result_rows = sum_rows(input_rows)
            # finally evaluation takes place as a chain in write_results()
            write_results(out_csvfile, result_rows)
            infile.close()
            outfile.close()

        if __name__ == '__main__':
            main(sys.argv[1:])

    Let's take this program and rewrite it to use multiprocessing to parallelize the three parts outlined above. Below is a skeleton of this new, parallelized program, that needs to be fleshed out to address the parts in the comments:

        #!/usr/bin/env python
        # -*- coding: UTF-8 -*-
        # multiproc_sums.py
        """A program that reads integer values from a CSV file and writes out
        their sums to another CSV file, using multiple processes if desired.
        """

        import csv
        import multiprocessing
        import optparse
        import sys

        NUM_PROCS = multiprocessing.cpu_count()

        def make_cli_parser():
            """Make the command line interface parser."""
            usage = "\n\n".join(["python %prog INPUT_CSV OUTPUT_CSV",
                    __doc__,
                    """
        ARGUMENTS:
            INPUT_CSV: an input CSV file with rows of numbers
            OUTPUT_CSV: an output file that will contain the sums\
        """])
            cli_parser = optparse.OptionParser(usage)
            cli_parser.add_option('-n', '--numprocs', type='int',
                    default=NUM_PROCS,
                    help="Number of processes to launch [DEFAULT: %default]")
            return cli_parser

        def main(argv):
            cli_parser = make_cli_parser()
            opts, args = cli_parser.parse_args(argv)
            if len(args) != 2:
                cli_parser.error("Please provide an input file and output file.")
            infile = open(args[0])
            in_csvfile = csv.reader(infile)
            outfile = open(args[1], 'w')
            out_csvfile = csv.writer(outfile)

            # Parse the input file and add the parsed data to a queue for
            # processing, possibly chunking to decrease communication between
            # processes.

            # Process the parsed data as soon as any (chunks) appear on the
            # queue, using as many processes as allotted by the user
            # (opts.numprocs); place results on a queue for output.
            #
            # Terminate processes when the parser stops putting data in the
            # input queue.

            # Write the results to disk as soon as they appear on the output
            # queue.

            # Ensure all child processes have terminated.

            # Clean up files.
            infile.close()
            outfile.close()

        if __name__ == '__main__':
            main(sys.argv[1:])

    These pieces of code, as well as another piece of code that can generate example CSV files for testing purposes, can be found on github. I would appreciate any insight here as to how you concurrency gurus would approach this problem. Here are some questions I had when thinking about this problem. Bonus points for addressing any/all:

    - Should I have child processes for reading in the data and placing it into the queue, or can the main process do this without blocking until all input is read?
    - Likewise, should I have a child process for writing the results out from the processed queue, or can the main process do this without having to wait for all the results?
    - Should I use a processes pool for the sum operations? If yes, what method do I call on the pool to get it to start processing the results coming into the input queue, without blocking the input and output processes, too? apply_async()? map_async()? imap()? imap_unordered()?
    - Suppose we didn't need to siphon off the input and output queues as data entered them, but could wait until all input was parsed and all results were calculated (e.g., because we know all the input and output will fit in system memory). Should we change the algorithm in any way (e.g., not run any processes concurrently with I/O)?
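    For what it's worth, a minimal sketch of one way to flesh out the skeleton, using worker processes and two queues with sentinel-based shutdown (chunking and error handling omitted; the names are my own, not from the github code):

        import multiprocessing

        STOP = 'STOP'  # sentinel marking the end of the input stream

        def worker(inq, outq):
            # pull (index, row) tuples until the sentinel arrives
            for i, row in iter(inq.get, STOP):
                outq.put((i, sum(row)))

        def run(in_csvfile, out_csvfile, numprocs):
            inq = multiprocessing.Queue()
            outq = multiprocessing.Queue()
            procs = [multiprocessing.Process(target=worker, args=(inq, outq))
                     for _ in range(numprocs)]
            for p in procs:
                p.start()
            # part 1: the main process feeds the input queue
            n = 0
            for i, row in enumerate(in_csvfile):
                inq.put((i, [int(entry) for entry in row]))
                n += 1
            for _ in procs:
                inq.put(STOP)  # one sentinel per worker
            # part 3: results arrive unordered; collect, then sort by row index
            for result_row in sorted(outq.get() for _ in range(n)):
                out_csvfile.writerow(result_row)
            for p in procs:
                p.join()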


  • TeamCity sends inadequate responses after Selenium tests

    - by Dmitriy Sukharev
    I have a TeamCity 7.0.2 on a CentOS 6.2 server without an X Server. I've installed x11-fonts*, xvfb, firefox and xauth, exported the env variable DISPLAY=localhost:1, and started xvfb. After that I could start Selenium tests using maven. The tests are executed, but there's an issue with TeamCity. Usually TeamCity starts behaving absolutely inadequately: it confuses images on the page, sends XML or strange text (ampersands and numbers) in responses, and is a bit slower. Also, tests are executed 4 times slower (1h 15m) on the server than on the tester's Windows 7-based machine (25m). It is worth noticing that the tests launch two Jetty servers for the tested application (one for the REST-services application and another for the client). In TeamCity I set the JVM command line parameters -Xms256m -Xmx1224m -XX:MaxPermSize=320m, and the Additional Maven command line parameters end with "-DMAVEN_OPTS=-Xmx1024m" (without quotes). Also, both the web-services and TeamCity use the same Oracle server (but different Oracle users). Finally, TeamCity and its build agent are on the same server. The server has only 4GB of RAM, but during testing there are 400MB of RAM and 1.2GB of swap. TeamCity and Firefox use about 65% of CPU during testing. There's no firefox process after the end of testing. My knowledge about Selenium is weak. I only know that we use the 2.20.0 version of the selenium-java maven dependency. Please help me determine why TeamCity sends wrong responses after Selenium tests. I've tried to give you all the information I have, but feel free to ask me for more.
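    Incidentally, the Xvfb bookkeeping described above (starting the server, exporting DISPLAY) can be delegated to xvfb-run; a sketch, assuming the tests are launched through maven:

        # -a picks a free display number; -s sets the virtual screen geometry
        xvfb-run -a -s '-screen 0 1280x1024x24' mvn test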


  • Syntax Error when setting up Domain Alias

    - by Poundtrader
    I'm attempting to set up a domain alias via a Pre VirtualHost Include and I'm receiving the following error:

        Error: An error occurred while running:
        /usr/local/apache/bin/httpd -DSSL -t -f /usr/local/apache/conf/httpd.conf
        Exit signal was: 0
        Exit value was: 1
        Output was:
        ---
        Syntax error on line 15 of /usr/local/apache/conf/includes/pre_virtualhost_global.conf:
        CustomLog takes two or three arguments, a file name, a custom log format string or format name, and an optional "env=" clause (see docs)
        ---

    Basically I have two domains; the main domain has an opencart installation within a directory (/buy), and I'm attempting to use the multi-store function, which allows you to administer multiple stores on multiple domains via the one opencart dashboard. My issue is that I have the opencart installations within the /buy directory, so I have been given the following code, which should allow me to use this functionality over the multiple cPanel accounts within the same VPS.

        <VirtualHost 87.117.239.29:80>
            ServerName newdomain.co.uk
            ServerAlias www.newdomain.co.uk
            Alias /buy /home/originaldomain/public_html/buy/
            DocumentRoot /home/newdom/public_html
            ServerAdmin [email protected]
            ## User newdom # Needed for Cpanel::ApacheConf
            <IfModule mod_suphp.c>
                suPHP_UserGroup newdom newdom
            </IfModule>
            <IfModule mod_ruid2.c>
                RUidGid newdom newdom
            </IfModule>
            CustomLog /usr/local/apache/domlogs/newdomain.co.uk-bytes_log “%{%s}t %I .\n%{%s}t %O .”
            CustomLog /usr/local/apache/domlogs/newdomain.co.uk combined
            ScriptAlias /cgi-bin/ /home/newdom/public_html/cgi-bin/
        </VirtualHost>

    Does anyone know how to get this code to work? Thanks,
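    One observation (hedged, but it matches the "CustomLog takes two or three arguments" complaint): the first CustomLog line uses typographic curly quotes, which Apache does not recognize as quoting, so the format string splits into many arguments. Rewritten with plain ASCII quotes:

        CustomLog /usr/local/apache/domlogs/newdomain.co.uk-bytes_log "%{%s}t %I .\n%{%s}t %O ."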


  • Running Upstart user jobs on startup

    - by dgel
    I am running Ubuntu server 11.04. I have created an Upstart user job as described here. I have the following file at /home/myuser/.init/sensors.conf:

        start on started mysql
        stop on stopping mysql
        chdir /home/myuser/mydir/project
        exec /home/myuser/mydir/env/bin/python /home/myuser/mydir/project/manage.py sensors
        respawn
        respawn limit 10 90

    As myuser I can start, stop, and reload the job fine; it works perfectly:

        $ start sensors
        sensors start/running, process 1332
        $ stop sensors
        sensors stop/waiting

    The problem is that the job is not starting automatically at boot when mysql starts. After a fresh boot, mysql is running but my sensors job is not. What's strange is that although the job doesn't begin on bootup, if I use sudo to restart mysql it does indeed start my job. The following commands are run as myuser from a fresh startup:

        $ status sensors
        sensors stop/waiting
        $ sudo restart mysql
        mysql start/running, process 1209
        $ status sensors
        sensors start/running, process 1229

    The documentation for Upstart user jobs is pretty limited. What is the correct technique to have a user job start automatically on startup of the system? I know I can just throw something in rc.local to start it, or I could move my sensors.conf to /etc/init, but I'm curious if there is a way to do it using just Upstart.


  • Bash script to run a clamscan on Ubuntu- how to use return values properly?

    - by Marius
    I'm trying to put together a simple script that will scan my home directory with clamscan and give me a warning if any viruses were found. What I have so far is:

        #! /usr/bin/env bash
        clamscan -l ~/.ClamScan/$(date +"%a%b%d") -ir /home
        RETVAL=$?
        [ $RETVAL -eq 0 ] && notify-send 'clamscan finished. No viruses found'
        [ $RETVAL -eq 1 ] && notify-send 'clamscan found a virus' && touch ~/Desktop/VirusFound
        [ $RETVAL -eq 2 ] && notify-send 'clamscan encountered errors. Check the logs' && touch ~/Desktop/ClamscanError
        find ~/.ClamScan/* -mtime +7 -exec rm {} \;

    However, I'm unsure about a couple of things: I'm always wary of using rm; as far as I can tell, the find command I've got should be deleting any log files that are more than a week old. I'm also not entirely sure how the return value testing works. I've got a manual that briefly covers bash, which says that the meaning of $? is "match one character", and I'm not entirely sure how that grabs the return value. Should I be using -eq or = for testing the return value? From what I can tell -eq tests strings and = tests numerals, but I'm not sure what the type of the return value is.
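    A short demonstration of the two points in question, sketched in bash: $? is the special parameter holding the previous command's exit status (the "match one character" meaning applies to ? only inside filename globs), and in [ ] the -eq operator compares integers while = compares strings:

        false; echo $?            # prints 1, the exit status of `false`
        [ 01 -eq 1 ] && echo yes  # integer comparison: 01 equals 1
        [ 01 = 1 ]   || echo no   # string comparison: "01" differs from "1"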


  • Ubuntu: how to get audio to work in both Spotify (under Wine) and Flash (in Firefox)?

    - by Jonik
    I'm running Spotify on Linux using Wine. Sound worked great (even though the sound test in winecfg failed!), until I installed alsa-oss package yesterday to get Flash sound working in Firefox. Now Spotify says: "There is a problem with your sound card. Spotify can't play music." So the question is, how to get the sound in Spotify working again, so that it also keeps working in Flash & Firefox? Tweak some ALSA settings? Spotify settings? Add/remove some packages? By the way, curiously, now that sound doesn't work in Spotify, winecfg's "Test Sound" does work! This is Ubuntu 8.04 (Hardy). Sound card / driver is probably an integrated AC'97. Please mention if any additional information about the system is needed! Update: I have Flash 10 installed (outside the packaging system, using $MOZ_PLUGIN_PATH env variable), but also had Flash 9 from flashplugin-nonfree package - and the earlier version was being used by Firefox! Based on what Mike Arthur said about Flash and alsa-oss, I removed the older Flash (flashplugin-nonfree package) and alsa-oss - and Flash sound still works, which is nice. But for some reason Spotify still doesn't play sound, even though things should now be like they were originally... Update 2: Got it working, all smoothly, finally.


  • Using gentoo, how does one stick -9999 ebuild to a specific svn revision?

    - by hurikhan77
    As an example, given the django-9999 ebuild, to match the developers' environment I need to check out r12120 from trunk. Installing Django manually is not an option due to package management reasons. But there is also no ebuild in portage for 1.2 beta versions. So I did the following:

        ESVN_OPTIONS="-r12120" emerge -1a django

    which installed the required revision from svn. But this is cumbersome in a way. Is there some way to define this statically per ebuild, e.g. something like:

        DJANGO_SVN_REV="12120"

    in make.conf? This would be much cleaner in my eyes, because next time I need to rebuild django for whatever reason, I need to remember: "Oh I wanted this to stick to a specific revision" and the next question will be "err, f&!#$?%, what was it again?" What's the best way to go here? Keep in mind:

    - Manually installing packages without package manager knowledge is no option
    - Working around with manual emerge variable prefixing is no option
    - Setting up a /etc/portage/package.env would be a way to go (as described here) but that seems pretty unsupported and kludgy to me and thus unpreferable
    - Modifying make.conf would be a way to go
    - Keeping the ebuild in an overlay would be an option
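    For completeness, a sketch of the package.env mechanism mentioned in the list above (the file names are conventions; dev-python/django is assumed to be the right atom):

        # /etc/portage/env/django-svn-rev.conf
        ESVN_OPTIONS="-r12120"

        # /etc/portage/package.env
        dev-python/django django-svn-rev.conf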


  • performance wise htaccess

    - by purpler
    Here's my .htaccess template; I wonder if anything could be added to increase website performance.

        # Defaults
        AddDefaultCharset UTF-8
        DefaultLanguage en-US
        ServerSignature Off
        FileETag None
        Header unset ETag
        Options -MultiViews
        #Options All -Indexes

        # Force the latest IE version or ChromeFrame
        <IfModule mod_setenvif.c>
            <IfModule mod_headers.c>
                BrowserMatch MSIE ie
                Header set X-UA-Compatible "IE=Edge,chrome=1" env=ie
            </IfModule>
        </IfModule>

        # Proxy X-UA Setup
        <IfModule mod_headers.c>
            Header append Vary User-Agent
        </IfModule>

        #Rewrites
        Options +FollowSymlinks
        RewriteEngine On
        RewriteBase /

        # Redirect to non-WWW
        RewriteCond %{HTTPS} !=on
        RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC]
        RewriteRule ^(.*)$ http://%1/$1 [R=301,L]

        # Redirect to WWW
        RewriteCond %{HTTP_HOST} ^domain.com
        RewriteRule (.*) http://www.domain.com/$1 [R=301,L]

        # Redirect index to root
        RewriteRule ^(.*)index\.(php|html)$ /$1 [R=301,L]

        # Caching
        ExpiresActive On
        ExpiresDefault A0
        Header set Cache-Control "public"

        # 1 Year Long Cache
        <FilesMatch "\.(flv|fla|ico|pdf|avi|mov|ppt|doc|mp3|wmv|wav|png|jpg|jpeg|gif|swf|js|css|ttf|eot|woff|svg|svgz)$">
            ExpiresDefault A31622400
        </FilesMatch>

        # Proxy Caching
        <FilesMatch "\.(css|js|png)$">
            ExpiresDefault A31622400
            Header set Cache-Control "private"
        </FilesMatch>

        # Protect against DOS attacks by limiting file upload size
        LimitRequestBody 10240000

        # Proper SVG serving
        AddType image/svg+xml svg svgz
        AddEncoding gzip svgz

        # GZip Compression
        <IfModule mod_deflate.c>
            <FilesMatch "\.(php|html|css|js|xml|txt|ttf|otf|eot|svg)$">
                SetOutputFilter DEFLATE
            </FilesMatch>
        </IfModule>

        # Error page
        ErrorDocument 404 /404.html

        # Deny access to sensitive files
        <FilesMatch "\.(htaccess|ini|log|psd)$">
            Order Allow,Deny
            Deny from all
        </FilesMatch>


  • Weblogic 10.0: SAMLSignedObject.verify() failed to validate signature value

    - by joshea
    I've been having this problem for a while and it's driving me nuts. I'm trying to create a client (in C# .NET 2.0) that will use SAML 1.1 to sign on to a WebLogic 10.0 server (i.e., a Single Sign-On scenario, using browser/post profile). The client is on a WinXP machine and the WebLogic server is on a RHEL 5 box. I based my client largely on code in the example here: http://www.codeproject.com/KB/aspnet/DotNetSamlPost.aspx (the source has a section for SAML 1.1). I set up WebLogic based on instructions for SAML Destination Site from here:http://www.oracle.com/technology/pub/articles/dev2arch/2006/12/sso-with-saml4.html I created a certificate using makecert that came with VS 2005. makecert -r -pe -n "CN=whatever" -b 01/01/2010 -e 01/01/2011 -sky exchange whatever.cer -sv whatever.pvk pvk2pfx.exe -pvk whatever.pvk -spc whatever.cer -pfx whatever.pfx Then I installed the .pfx to my personal certificate directory, and installed the .cer into the WebLogic SAML Identity Asserter V2. I read on another site that formatting the response to be readable (ie, adding whitespace) to the response after signing would cause this problem, so I tried various combinations of turning on/off .Indent XMLWriterSettings and turning on/off .PreserveWhiteSpace when loading the XML document, and none of it made any difference. I've printed the SignatureValue both before the message is is encoded/sent and after it arrives/gets decoded, and they are the same. So, to be clear: the Response appears to be formed, encoded, sent, and decoded fine (I see the full Response in the WebLogic logs). WebLogic finds the certificate I want it to use, verifies that a key was supplied, gets the signed info, and then fails to validate the signature. Code: public string createResponse(Dictionary<string, string> attributes){ ResponseType response = new ResponseType(); // Create Response response.ResponseID = "_" + Guid.NewGuid().ToString(); response.MajorVersion = "1"; response.MinorVersion = "1"; response.IssueInstant = System.DateTime.UtcNow; response.Recipient = "http://theWLServer/samlacs/acs"; StatusType status = new StatusType(); status.StatusCode = new StatusCodeType(); status.StatusCode.Value = new XmlQualifiedName("Success", "urn:oasis:names:tc:SAML:1.0:protocol"); response.Status = status; // Create Assertion AssertionType assertionType = CreateSaml11Assertion(attributes); response.Assertion = new AssertionType[] {assertionType}; //Serialize XmlSerializerNamespaces ns = new XmlSerializerNamespaces(); ns.Add("samlp", "urn:oasis:names:tc:SAML:1.0:protocol"); ns.Add("saml", "urn:oasis:names:tc:SAML:1.0:assertion"); XmlSerializer responseSerializer = new XmlSerializer(response.GetType()); StringWriter stringWriter = new StringWriter(); XmlWriterSettings settings = new XmlWriterSettings(); settings.OmitXmlDeclaration = true; settings.Indent = false;//I've tried both ways, for the fun of it settings.Encoding = Encoding.UTF8; XmlWriter responseWriter = XmlTextWriter.Create(stringWriter, settings); responseSerializer.Serialize(responseWriter, response, ns); responseWriter.Close(); string samlString = stringWriter.ToString(); stringWriter.Close(); // Sign the document XmlDocument doc = new XmlDocument(); doc.PreserveWhiteSpace = true; //also tried this both ways to no avail doc.LoadXml(samlString); X509Certificate2 cert = null; X509Store store = new X509Store(StoreName.My, StoreLocation.CurrentUser); store.Open(OpenFlags.ReadOnly); X509Certificate2Collection coll = 
store.Certificates.Find(X509FindType.FindBySubjectDistinguishedName, "distName", true); if (coll.Count < 1) { throw new ArgumentException("Unable to locate certificate"); } cert = coll[0]; store.Close(); //this special SignDoc just overrides a function in SignedXml so //it knows to look for ResponseID rather than ID XmlElement signature = SamlHelper.SignDoc( doc, cert, "ResponseID", response.ResponseID); doc.DocumentElement.InsertBefore(signature, doc.DocumentElement.ChildNodes[0]); // Base64Encode and URL Encode byte[] base64EncodedBytes = Encoding.UTF8.GetBytes(doc.OuterXml); string returnValue = System.Convert.ToBase64String( base64EncodedBytes); return returnValue; } private AssertionType CreateSaml11Assertion(Dictionary<string, string> attributes){ AssertionType assertion = new AssertionType(); assertion.AssertionID = "_" + Guid.NewGuid().ToString(); assertion.Issuer = "madeUpValue"; assertion.MajorVersion = "1"; assertion.MinorVersion = "1"; assertion.IssueInstant = System.DateTime.UtcNow; //Not before, not after conditions ConditionsType conditions = new ConditionsType(); conditions.NotBefore = DateTime.UtcNow; conditions.NotBeforeSpecified = true; conditions.NotOnOrAfter = DateTime.UtcNow.AddMinutes(10); conditions.NotOnOrAfterSpecified = true; //Name Identifier to be used in Saml Subject NameIdentifierType nameIdentifier = new NameIdentifierType(); nameIdentifier.NameQualifier = domain.Trim(); nameIdentifier.Value = subject.Trim(); SubjectConfirmationType subjectConfirmation = new SubjectConfirmationType(); subjectConfirmation.ConfirmationMethod = new string[] { "urn:oasis:names:tc:SAML:1.0:cm:bearer" }; // // Create some SAML subject. SubjectType samlSubject = new SubjectType(); AttributeStatementType attrStatement = new AttributeStatementType(); AuthenticationStatementType authStatement = new AuthenticationStatementType(); authStatement.AuthenticationMethod = "urn:oasis:names:tc:SAML:1.0:am:password"; authStatement.AuthenticationInstant = System.DateTime.UtcNow; samlSubject.Items = new object[] { nameIdentifier, subjectConfirmation}; attrStatement.Subject = samlSubject; authStatement.Subject = samlSubject; IPHostEntry ipEntry = Dns.GetHostEntry(System.Environment.MachineName); SubjectLocalityType subjectLocality = new SubjectLocalityType(); subjectLocality.IPAddress = ipEntry.AddressList[0].ToString(); authStatement.SubjectLocality = subjectLocality; attrStatement.Attribute = new AttributeType[attributes.Count]; int i=0; // Create SAML attributes. foreach (KeyValuePair<string, string> attribute in attributes) { AttributeType attr = new AttributeType(); attr.AttributeName = attribute.Key; attr.AttributeNamespace= domain; attr.AttributeValue = new object[] {attribute.Value}; attrStatement.Attribute[i] = attr; i++; } assertion.Conditions = conditions; assertion.Items = new StatementAbstractType[] {authStatement, attrStatement}; return assertion; } private static XmlElement SignDoc(XmlDocument doc, X509Certificate2 cert2, string referenceId, string referenceValue) { // Use our own implementation of SignedXml SamlSignedXml sig = new SamlSignedXml(doc, referenceId); // Add the key to the SignedXml xmlDocument. sig.SigningKey = cert2.PrivateKey; // Create a reference to be signed. Reference reference = new Reference(); reference.Uri= String.Empty; reference.Uri = "#" + referenceValue; // Add an enveloped transformation to the reference. 
XmlDsigEnvelopedSignatureTransform env = new XmlDsigEnvelopedSignatureTransform(); reference.AddTransform(env); // Add the reference to the SignedXml object. sig.AddReference(reference); // Add an RSAKeyValue KeyInfo (optional; helps recipient find key to validate). KeyInfo keyInfo = new KeyInfo(); keyInfo.AddClause(new KeyInfoX509Data(cert2)); sig.KeyInfo = keyInfo; // Compute the signature. sig.ComputeSignature(); // Get the XML representation of the signature and save // it to an XmlElement object. XmlElement xmlDigitalSignature = sig.GetXml(); return xmlDigitalSignature; } To open the page in my client app, string postData = String.Format("SAMLResponse={0}&APID=ap_00001&TARGET={1}", System.Web.HttpUtility.UrlEncode(builder.buildResponse("http://theWLServer/samlacs/acs",attributes)), "http://desiredURL"); webBrowser.Navigate("http://theWLServer/samlacs/acs", "_self", Encoding.UTF8.GetBytes(postData), "Content-Type: application/x-www-form-urlencoded");


  • Immediately tell which output was sent to stderr

    - by Clinton Blackmore
    When automating a task, it is sensible to test it first manually. It would be helpful, though, if any data going to stderr was immediately recognizable as such, and distinguishable from the data going to stdout, and to have all the output together so it is obvious what the sequence of events is. One last touch that would be nice is if, at program exit, it printed its return code. All of these things would aid in automating. Yes, I can echo the return code when a program finishes, and yes, I can redirect stdout and stderr; what I'd really like is some shell, script, or easy-to-use redirector that shows stdout in black, shows stderr interleaved with it in red, and prints the exit code at the end. Is there such a beast? [If it matters, I'm using Bash 3.2 on Mac OS X.]

    Update: Sorry it has been months since I've looked at this. I've come up with a simple test script:

        #!/usr/bin/env python
        import sys
        print "this is stdout"
        print >> sys.stderr, "this is stderr"
        print "this is stdout again"

    In my testing (and probably due to the way things are buffered), rse and hilite display everything from stdout and then everything from stderr. The fifo method gets the order right but appears to colourize everything following the stderr line. ind complained about my stdin and stderr lines, and then put the output from stderr last. Most of these solutions are workable, as it is not atypical for only the last output to go to stderr, but still, it'd be nice to have something that worked slightly better.
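    A rough bash sketch of such a wrapper (process substitution works in the Bash 3.2 mentioned above; the same buffering caveat applies, so interleaving is best-effort):

        #!/usr/bin/env bash
        # usage: ./errcolor.sh command [args...]
        "$@" 2> >(while IFS= read -r line; do
            printf '\033[31m%s\033[0m\n' "$line"   # stderr lines in red
        done)
        status=$?
        echo "exit code: $status"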

