Search Results

Search found 5228 results on 210 pages for 'bash alias'.

Page 179/210 | < Previous Page | 175 176 177 178 179 180 181 182 183 184 185 186  | Next Page >

  • OpenOffice Calc Macro: Run shell command and return output as result of custom function

    - by Mark
    I would like to write a custom OpenOffice function that runs a shell command and puts the result into the cell from which it was invoked. I have a basic macro working, but I can't find a way to capture the command's output.

        Function MyTest( c1 )
            MyTest = Shell("bash -c "" echo hello "" ")
        End Function

    The above always returns 0. Looking at the documentation of the Shell command, I don't think it actually returns STDOUT. How would I capture the output so that I can return it in my function? Thanks!

    Read the article

  • How do I convert RSAParameters from .NET to a .pem file so I can use it in PHP?

    - by netuser24
    Hello, I have a private and public RSA key pair generated in .NET in this format:

        string privateKey = "<RSAKeyValue>" +
            "<Modulus>...kCFhsjB4xMW49mrx5B/Ga...</Modulus>" +
            "<Exponent>...</Exponent>" +
            "<P>...7bRCrQVgVIfXdTIH3iY8x...</P>" +
            "<Q>...4SiQDhrAZADuFDTr7bRCrQVgVIfXdTIH3iY8x...</Q>" +
            "<DP>...ZADuFDTr7bRCrQVgVIfXdT...</DP>" +
            "<DQ>...4SiQDhrAZADuFDTr...</DQ>" +
            "<InverseQ>...sjB4xMW49mrx5B/Ga...</InverseQ>" +
            "<D>...SiQDhrAZADuFDTr7bRCrQVgVIf...</D>" +
            "</RSAKeyValue>";

    How can I convert this so I can use it in PHP's openssl functions to encrypt and decrypt data? I need both the public and the private key converted. Maybe with an openssl command in Linux where I can specify my own modulus, exponent and so on? Any ideas? Thanks

    Read the article

  • Linux binary built for 2.0 kernel wouldn't execute on 2.6.x kernel.

    - by lorin
    I was installing a binary Linux application on Ubuntu 9.10 x86_64. The app shipped with an old version of gzip (1.2.4), that was compiled for a much older kernel:

        $ file gzip
        gzip: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.0.0, stripped

    I wasn't able to execute this program. If I tried, this happened:

        $ ./gzip
        -bash: ./gzip: No such file or directory

    ldd was similarly unhappy with this binary:

        $ ldd gzip
            not a dynamic executable

    This isn't a showstopper for me, since my installation has a working version of gzip I can use. But I'm curious: What's the most likely source of this problem? A corrupted file? Or a binary incompatibility due to being built for a much older {kernel,libc,...}?
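
    One plausible explanation, as a diagnostic sketch rather than a confirmed answer: "No such file or directory" on an existing binary usually means the kernel cannot find the ELF interpreter the executable requests, which for a GNU/Linux 2.0-era 32-bit binary is typically the old libc5 loader. Something like the following would show what the binary asks for:

        # Show which program interpreter (dynamic loader) the old binary requests
        readelf -l ./gzip | grep interpreter
        # e.g. "[Requesting program interpreter: /lib/ld-linux.so.1]"

        # Check whether that loader actually exists on the 64-bit system
        ls -l /lib/ld-linux.so.1 /lib/ld-linux.so.2

    If the requested loader is missing, exec fails with exactly this error, and ldd reports "not a dynamic executable".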

    Read the article

  • Qt on Mac: where to find "configure"

    - by Gil
    Hi, I am very new to the Mac. I downloaded the Qt SDK for Mac, open source edition (http://get.qt.nokia.com/qtsdk/qt-sdk-mac-opensource-2010.02.dmg), and installed the package. I can run qmake, build samples and run demos, but I cannot run configure (in order to build the Qt libraries statically). It says: -bash: No such file or directory. The documentation says I should run this in the "Qt root folder", but what is this folder on the Mac? I looked for it in /usr/bin, /usr/local/Qt4.6, and /Developer/Tools/Qt. Anyway, what is "configure" on the Mac? Is it an executable or a script? Thanks a lot
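
    For context, a hedged sketch: configure is a script that ships in the Qt source package rather than in the binary SDK, so the "Qt root folder" is wherever the source tarball was unpacked (the path and version below are illustrative assumptions, not something the SDK installer creates):

        # Build static Qt libraries from the source tree, not the SDK install
        cd ~/qt-everywhere-opensource-src-4.6.2
        ./configure -static
        make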

    Read the article

  • Use matching value of a RegExp to name the output file.

    - by fx42
    I have this file "file.txt" which I want to split into many smaller ones. Each line of the file has an id field which looks like "id:1" for a line belonging to id 1. For each id in the file, I'd like to create a file named id<id>.txt and put all lines that belong to this id in that file. My brute-force bash script solution reads as follows:

        count=1
        while [ $count -lt 19945 ]
        do
            cat file.txt | grep "id:$count " >> ./sets/id$count.txt
            count=`expr $count + 1`
        done

    Now this is very inefficient, as I have to read through the file about 20,000 times. Is there a way to do the same operation with only one pass through the file? What I'm probably asking for is a way to use the value that matches a regular expression to name the associated output file.
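
    A one-pass sketch, assuming GNU awk for the three-argument match() (the id pattern and output layout mirror the script above):

        # Read file.txt once; append each line to ./sets/id<N>.txt based on its "id:N" field
        gawk 'match($0, /id:([0-9]+)/, m) {
            f = "./sets/id" m[1] ".txt"
            print >> f
            close(f)   # close after each write so thousands of ids cannot exhaust file descriptors
        }' file.txt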

    Read the article

  • What's your favorite F# use? Where does F# make life (a lot) easier (compared to C#)?

    - by luckyluke
    I've skimmed the stack and did not get the overflow, as there is probably no such question. I'm just learning F# and I am a seasoned C# and .NET dev. I am into financial apps, and currently F# helps me a lot with maths calcs like zero finding or minimum finding (although I still want a good maths library there). I see that processing multiple items (files or such) tends to be easier, but my GUIs (web, win) are still C# based. I am in a team of 5 devs and we know that the new tool is out; we are learning it after hours (to pimp ourselves up), but maybe we shouldn't bash on a door somebody already opened. So in business apps, what's the first killer piece of software you would code in F# (if you could, and knew it would be easier, faster, more testable, easier to maintain, etc.)? Business rules? Image processing? Data processing? Hope it's not too subjective. luke

    Read the article

  • Magento cache wrong read permissions?

    - by Lucasmus
    There seems to be a problem in Magento's reading of the var/cache directory. I've disabled Full Page Caching for testing. When I execute the bash command `chmod -R 777 var/cache/` before loading the page, it loads ~3 seconds quicker (the time it takes before 'mage::dispatch::routers_match' is reached in the Profiler is reduced from ~4 seconds to ~1 second). This speed-up remains for a while but is then lost until the chmod is called again. I'm guessing this has to do with write permissions somehow? The odd thing is, the cache contents are, afaik, owned by the process that is executing Magento (the web user). Does anyone have any clues about what could be the problem or what could be changed to prevent this?
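
    A small diagnostic sketch along those lines (the web-server user name below is an assumption; substitute whatever ps shows):

        # Compare the owner/permissions of the cache with the user the web server runs as
        ls -ld var/cache
        ps aux | egrep 'httpd|apache|php' | head

        # Can the web user actually write there? (www-data is a placeholder name)
        sudo -u www-data touch var/cache/perm_test && echo "web user can write" && rm var/cache/perm_test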

    Read the article

  • UIDs for service users in Mac OS X

    - by LaC
    Some third-party servers should be run under a special user for security reasons (eg, PostgreSQL is typically run by "postgres"). Of course, these service users should not show up in the Mac OS X login windows. I know how to create hidden users using dscl or dsimport, but I'm wondering what the best policy is for assigning UIDs (and matching GIDs). Apple's documentation states that UIDs from 0 to 100 are reserved (pg. 69), but OS X comes with several special users and groups outside that range. I used to use ids from 401 onwards for services, but I noticed that OS X 10.6 has started using that range for groups created by the Sharing pane in System Preferences. What is the recommended ID range to use for third-party services, then? Perhaps I should just use IDs in the 500 range, since all that is needed to hide a user in Snow Leopard is setting his password to "*"? Also, most of Apple's services have names starting with an underscore, with an alias sans underscore; eg, _sandbox and sandbox. Is there any special significance to this? Should I do the same for my services?
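
    A sketch of one common convention, with illustrative values (the name _myservice and id 300 are assumptions, not an Apple recommendation):

        # Create a hidden service account with an underscore name and an underscore-free alias
        sudo dscl . -create /Users/_myservice
        sudo dscl . -create /Users/_myservice UniqueID 300
        sudo dscl . -create /Users/_myservice PrimaryGroupID 300
        sudo dscl . -create /Users/_myservice UserShell /usr/bin/false
        sudo dscl . -create /Users/_myservice NFSHomeDirectory /var/empty
        sudo dscl . -create /Users/_myservice Password '*'
        sudo dscl . -append /Users/_myservice RecordName myservice   # alias sans underscore, mirroring Apple's own accounts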

    Read the article

  • Httpd and LDAP Authentication not working for sub-pages

    - by DavisTasar
    I just recently installed a Nagios implementation, and I'm trying to get LDAP authentication working for httpd on Red Hat. (nagios.conf for Apache config below, sanitized of course)

        ScriptAlias /nagios/cgi-bin "/usr/local/nagios/sbin"
        <Directory "/usr/local/nagios/sbin">
            #SSLRequireSSL
            Options ExecCGI
            AllowOverride none
            AuthType Basic
            AuthName "LDAP Authentication"
            AuthLDAPURL "ldap://my.domain.controller:389/OU=Users,DC=my,DC=domain,DC=controller?sAMAccountName?sub?(objectClass=user)" NONE
            AuthzLDAPAuthoritative off
            AuthLDAPBindDN "CN=NagiosAdmin,DC=my,DC=domain,DC=controller"
            AuthLDAPBindPassword "myPassword"
            require valid-user
        </Directory>

        Alias /nagios "/usr/local/nagios/share"
        <Directory /usr/local/nagios/share>
            #SSLRequireSSL
            Options None
            AllowOverride none
            AuthBasicProvider ldap
            AuthType Basic
            AuthName "LDAP Authentication"
            AuthzLDAPAuthoritative off
            AuthLDAPURL "ldap://my.domain.controller:389/OU=Users,DC=my,DC=domain,DC=controller?sAMAccountName?sub?(objectClass=user)" NONE
            AuthLDAPBindDN "CN=NagiosAdmin,DC=my,DC=domain,DC=controller"
            AuthLDAPBindPassword "myPassword"
            require valid-user
        </Directory>

    Now, the initial authentication works, so when you first hit the page you can log in just fine. However, when you go anywhere else, it prompts you for authentication, fails (asking for a re-prompt), and gives this error message:

        [Mon Oct 21 14:46:23 2013] [error] [client 172.28.9.30] access to /nagios/cgi-bin/statusmap.cgi failed, reason: verification of user id '<myuseraccount>' not configured, referer: http://<nagiosserver>/nagios/side.php

    I'm almost certain it's a simple flag or option, but I just can't find it, and I don't have a lot of experience working with Apache. Any assistance or help would be greatly appreciated.
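
    One way to narrow this down (a diagnostic sketch using the sanitized values from the question) is to confirm the bind DN, base DN, and filter outside Apache first:

        # Verify the LDAP bind and the sAMAccountName lookup with the same credentials httpd uses
        ldapsearch -x -H ldap://my.domain.controller:389 \
          -D "CN=NagiosAdmin,DC=my,DC=domain,DC=controller" -w 'myPassword' \
          -b "OU=Users,DC=my,DC=domain,DC=controller" \
          "(&(objectClass=user)(sAMAccountName=someuser))" dn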

    Read the article

  • Problem creating ODBC connection to SQL Server 2008 with Vista

    - by earlz
    Well, I'm trying to get a database schema thing working. First I tried just doing it in Linux, where I'm more comfortable, but ODBC seems to be a hack there and I couldn't get it to work. So I figured it shouldn't be too hard in Windows. OK, so I created a SQL Server Client Alias so that I can simply say windowsserver to refer to my SQL server. Then I went to the ODBC configuration in Control Panel and clicked Add in the User DSN section. I chose Native SQL Server (10) and clicked Next. Then I typed a short name and a description and gave the server name as windowsserver/SQLEXPRESS. Then I click Next, give it my user name and password, and click Next. Then, after like 2 minutes, it says "Login Timeout Expired". What can be wrong here? I know the server is configured because I have SQL Server Management Studio opened up with that server in it. I'm also just trying to connect over regular TCP/IP and my firewall is disabled.

    Read the article

  • How do you tell if a string contains another string in Unix shell scripting?

    - by Matt
    Hi all, I want to write a Unix shell script that will do various logic if there is a string inside of another string. For example, if I am in a certain folder, branch off. Could someone please tell me how to accomplish this? If possible I would like to make this not shell specific (i.e. not bash only) but if there's no other way I can make do with that.

        #!/bin/sh
        CURRENT_DIR=`pwd`
        if [ CURRENT_DIR contains "String1" ]
        then
            echo "String1 present"
        elif [ CURRENT_DIR contains "String1" ]
        then
            echo "String2 present"
        else
            echo "Else"
        fi
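
    A portable sketch of one way to do this: case performs glob matching in any POSIX shell, so no bash-only features are needed:

        #!/bin/sh
        # Branch on whether the current directory path contains a given substring
        CURRENT_DIR=`pwd`
        case "$CURRENT_DIR" in
            *String1*) echo "String1 present" ;;
            *String2*) echo "String2 present" ;;
            *)         echo "Else" ;;
        esac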

    Read the article

  • dropping user to IRB after reading from pipe

    - by aurelian
    I have this script that drops the user to an IRB session when executed. All good, but when I use *nix pipes to get the input (e.g. with cat), the IRB session ends immediately. I could reduce the script (let's call it myscript.rb) to the following:

        require 'irb'

        if $stdin.stat.size > 0
          @text = $stdin.read
        else
          @text = "nothing"
        end

        ARGV.clear
        IRB.start

    When executed like ruby myscript.rb, I end up in the IRB session (as expected). But (assuming foo.txt exists in the cwd) cat foo.txt | ruby myscript.rb will just print the IRB prompt and then the IRB session is closed (I'm being dropped back to bash). Any known workarounds or ideas? BTW: it has the same behavior on ruby 1.8.7 as well as on 1.9.2.

    Read the article

  • Git doesn't sync files until committed, even if checked out in a different branch

    - by DertWaiter
    Okay, I have git 1.7.11.1 on Windows and I have a local test repository with 2 branches. One is master, with index.php and help.php. I then create another branch called slave :) From git bash I run rm help.php and it disappears from the folder, but I don't stage anything. I switch to check out the master branch, and it is supposed to restore the file help.php because it is not modified in the master branch, isn't it? But it does not do it. When I go back to the slave branch, commit, and then switch to check out master, then help.php appears. Is that the way it is supposed to work? Why?
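
    A sketch of what is likely happening: an unstaged deletion lives in the working tree, which is carried along when you switch branches, so checkout does not silently recreate the file; it has to be restored explicitly, for example:

        git status                   # shows help.php as "deleted" (unstaged)
        git checkout -- help.php     # restore it from the current branch's HEAD
        # or take it from a specific branch:
        git checkout master -- help.php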

    Read the article

  • Is there a way to make this perl code capture stderr as well as stdout from a tcsh?

    - by mikelong
        open UNIT_TESTER, qq(tcsh -c "gpath $dir/$tsttgt; bin/rununittests"|);
        while(<UNIT_TESTER>){
            reportError($ignore{testabort},$tsttgt,"test problem detected for $tsttgt:$_ ") if /core dumped/;
            reportError($ignore{testabort},$tsttgt,"test problem detected for $tsttgt:$_ ") if /\[ FAILED \]/;
            writelog($tsttgt,$_);
        }
        close UNIT_TESTER;

    I have tried to redirect stderr to stdout using this syntax, but it didn't work:

        open UNIT_TESTER, qq(tcsh -c "gpath $dir/$tsttgt; bin/rununittests >& "|);

    I have also read the discussion in the perl FAQ, but that was in relation to bash: http://www.perl.com/doc/FAQs/FAQ/oldfaq-html/Q5.15.html
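
    One plausible fix, sketched at the shell level: since the piped open is run through /bin/sh, appending 2>&1 after the closing quote of the tcsh command merges the children's stderr into the pipe (paths and script names below are the ones from the question):

        # stderr from gpath and rununittests now flows into the same pipe as stdout
        tcsh -c "gpath $dir/$tsttgt; bin/rununittests" 2>&1 | grep -E 'core dumped|\[ FAILED \]'

    The equivalent change in the Perl code would be adding 2>&1 in the same position inside the qq(...) string.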

    Read the article

  • How do I get the username in Java (ie, who -m in Java) (or Jython 2.1)

    - by amertune
    Here's the situation. I have a jython 2.1 script in a shared account that needs to know who is calling it. In bash, I can simply use 'who -m' and it will give me the correct username. I haven't been able to find anything in java (or jython) that would give me a similar result. Even trying to call Runtime.getRuntime().exec("who -m") doesn't do anything. When I try to read the InputStream from the process returned by exec, the stream is empty.

    Read the article

  • How to validate Windows VC++ DLL on Unix systems

    - by Guildencrantz
    I have a solution, mostly C#, but with a few VC++ projects, that is pushed through our standard release process (perl and bash scripts on Unix boxes). Currently the initiative is to validate DLL and EXE versions as they pass through the process. All the versioning is set so that File Version is of the format $Id: $ (between the colon and the second dollar should be a git commit hash), and the Product Version is of the format $Hudson Build: $ (between the colon and the second dollar should be a string representing the hudson build details). Currently this system works extremely well for the C# projects because this version information is stored as plain strings within the compiled code (you can literally use the unix strings command and see the version information); the problem is that the VC++ projects do not expose this information as strings (I have used a windows system to verify that the version information is correctly being set), so I'm not sure how to extract the version on a unix system. Any suggestions for either A) Getting a string representation of the version embedded in the compiled code, or B) A utility/script which can extract this information?
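
    One possible approach on the Unix side (a sketch, assuming GNU strings; the DLL name is a placeholder): the VERSIONINFO resource in a PE file is stored as UTF-16, so plain strings misses it, but asking for 16-bit little-endian encoding usually surfaces it:

        # Extract UTF-16 strings from the PE file and look for the version markers
        strings -e l MyLibrary.dll | grep -E '\$Id:|\$Hudson Build:'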

    Read the article

  • rsync per-site configuration file?

    - by Scott
    I know how to configure a per-site entry for ssh, but is there any kind of a client configuration for rsync that allows per-site configuration options and aliases or similar shortcuts like the .ssh/config? I'm curious because I have a minimal ssh server installed on my android phone and I also have a minimal rsync tool on it as well. I'm getting tired of having to root login onto the phone and sym-link both tools to standard places the android OS looks for executables as the ssh server is bare bones and has a typical *bear multi-link binary for the basic unix commands (that does not include rsync) I end up having to include --rsync-path=/path/to/rsync/android/files/rsync every time I want to do any rsyncing of the files on my phone, but this path is always the same. I've gotten around it in the meantime with a glob approach in a shell script wrapper, but this sometimes limits the customization I can do with the rsync call. I'm just wondering if there is anything similar to the .ssh/config file where I can create an alias for my phone (e.g. 'android') where specifying rsync android:/mnt/sdcard will automatically assume --rsync-path=/blah/blah/blah --no-g --no-p --no-t etc. Tre`

    Read the article

  • SSRS 2008: programmatically add table rows

    - by davethecoder
    Above is how my report looks. The part in yellow is hidden, and is only shown when the user clicks the + icon on the [name]. The result is basically the percentage difference from the past [X] - [TERM], i.e. there is a dropdown with [weeks, months, days, hours] and a textbox for qty, so choosing qty = 4 and term = weeks will deliver a result set spread over 4 weeks based on the parent result set's date range and name ID. I wish to populate here a number of rows dependent on the value set by the user, and the data will come from a dataset. Is it possible to dynamically add more sub-rows (like on row data bound)? If my first row is ID 123 [name], is it possible to send this value [123] to a dataset so that all sub-rows are only relevant to the name with ID 123? This is my first bash at SSRS, so please no half-cut answers that just lead to more questions about the answer given :-) if this makes sense. Thanks

    Read the article

  • iptables: How to combine DNAT and SNAT to use a secondary IP address?

    - by Que_273
    There are lots of questions on here about iptables DNAT/SNAT setups, but I haven't found one that solves my current problem. I have services bound to the IP address of eth0 (e.g. 192.168.0.20) and I also have an IP address on eth0:0 (192.168.0.40) which is shared with another server. Only one server is active, so this alias interface comes and goes depending on which server is active. In order to get traffic accepted by the service, a DNAT rule is used to change the destination IP:

        iptables -t nat -A PREROUTING -d 192.168.0.40 -p udp --dport 7100 -j DNAT --to-destination 192.168.0.20

    I also wish all outbound traffic from this service to appear to come from the shared IP, so that return responses will work in the event of an active-standby failover:

        iptables -t nat -A POSTROUTING -p udp --sport 7100 -j SNAT --to-source 192.168.0.40

    My problem is that the SNAT rule is not always run. Inbound traffic causes a connection tracking entry like this:

        [root]# conntrack -L -p udp
        udp 17 170 src=192.168.0.185 dst=192.168.0.40 sport=7100 dport=7100 src=192.168.0.20 dst=192.168.0.185 sport=7100 dport=7100 [ASSURED] mark=0 secmark=0 use=2

    which means the POSTROUTING chain is not run and outbound traffic leaves with the real IP address as the source. I am thinking I can set up a NOTRACK rule in the raw table to prevent conntracking for this port number, but is there a better or more efficient way to make this work? Edit - alternative question: is there a way (in CentOS/Linux) to have an interface that can be bound to but not used, such that it can be attached to the network or detached when a shared IP address is swapped between servers?
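
    A sketch of the NOTRACK idea mentioned above, with one caveat worth noting: NAT only operates on tracked connections, so exempting these packets from conntrack also takes them out of the nat table's DNAT/SNAT processing:

        # Skip connection tracking for this UDP port in the raw table (both directions)
        iptables -t raw -A PREROUTING -p udp --dport 7100 -j NOTRACK
        iptables -t raw -A OUTPUT     -p udp --sport 7100 -j NOTRACK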

    Read the article

  • Securing phpmyadmin: non-standard port + https

    - by elect
    Trying to secure phpMyAdmin, we already did the following: Cookie Auth login; firewalled off TCP port 3306; running on a non-standard port. Now we would like to implement https... but how could it work with phpMyAdmin already running on a non-standard port? This is the apache config:

        # PHP MY ADMIN
        <VirtualHost *:$CUSTOMPORT>
            Alias /phpmyadmin /usr/share/phpmyadmin

            <Directory /usr/share/phpmyadmin>
                Options FollowSymLinks
                DirectoryIndex index.php
                <IfModule mod_php5.c>
                    AddType application/x-httpd-php .php
                    php_flag magic_quotes_gpc Off
                    php_flag track_vars On
                    php_flag register_globals Off
                    php_value include_path .
                </IfModule>
            </Directory>

            # Disallow web access to directories that don't need it
            <Directory /usr/share/phpmyadmin/libraries>
                Order Deny,Allow
                Deny from All
            </Directory>
            <Directory /usr/share/phpmyadmin/setup/lib>
                Order Deny,Allow
                Deny from All
            </Directory>

            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn

            CustomLog ${APACHE_LOG_DIR}/phpmyadmin.log combined
        </VirtualHost>

    Read the article

  • How to get the list of files in a directory in a shell script?

    - by jrharshath
    Hi, I'm trying to get the contents of a directory using a shell script. My script is:

        for entry in `ls $search_dir`; do
            echo $entry
        done

    where $search_dir is a relative path. However, $search_dir contains many files with whitespace in their names. In that case, this script does not run as expected. I know I could use for entry in *, but that would only work for my current directory. I know I can change to that directory, use for entry in *, then change back, but my particular situation prevents me from doing that. I have two relative paths $search_dir and $work_dir, and I have to work on both simultaneously, reading them, creating/deleting files in them, etc. So what do I do now? PS: I use bash.
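
    A sketch of one whitespace-safe variant: let the shell glob inside the variable path instead of parsing ls, and quote every expansion:

        # Iterate over the entries of $search_dir without word-splitting on whitespace
        for entry in "$search_dir"/*; do
            echo "$entry"
        done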

    Read the article

  • Glassfish and SSL [closed]

    - by Richard
    I'm struggling to get SSL working on Glassfish 3.1.1. I've been following tutorials like http://javadude.wordpress.com/2010/04/06/getting-started-with-glassfish-v3-and-ssl/ and SO posts like "Issues with setting up SSL on Glassfish v3". The above links are for information only; I've summarised what I've done below. As far as I can tell I'm doing everything correctly, but I'm getting this error:

        SSL configuration is invalid due to No available certificate or key corresponds to the SSL cipher suites which are enabled

    Some background on what I have done: my cert is from GoDaddy. I generated the CSR from a new keystore (keystore.jks), then imported the resulting certs back into the same keystore and set the keystore password to the same pwd as the GF master password. Then I created a new SSL listener in GF and pointed it at my keystore file (which I copied into domains/domain1/config), and set the Nickname to the alias of my cert (which is something like 'mydomain.org', i.e. the name that I get when I run keytool -list). In the ciphers section of the network listeners page, I leave the defaults in place (empty, which means all ciphers are available, I think). In domain.xml I've replaced all instances of s1as with 'mydomain.org'. This is the question: what exactly is causing the highlighted error? I'm guessing it's a mismatch between my listener config and the aliases in my keystore, or something similar, but I'm not really sure what. Thanks
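
    A quick check along those lines (a diagnostic sketch; the store password placeholder must be replaced with the actual master password): the alias the listener's Nickname points at has to be a private-key entry, not just an imported certificate:

        # List the keystore and confirm the entry type for the alias used as the Nickname
        keytool -list -v -keystore keystore.jks -storepass changeit | egrep -i 'Alias name|Entry type'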

    Read the article

  • Error 503: Can't deploy Rails 3 app with Apache + thin (Bitnami Ruby stack)

    - by Pacu
    As you'll notice, I'm a bit of a noob on Rails. Here's the thing: I have an EC2 Bitnami RubyStack AMI running. I'm trying to deploy the sample project to be sure I'm doing the right thing, but I'm not getting anywhere at all. I just get a 503 error. I'm following Bitnami's docs on thin + apache. Here are my files.

    The httpd.conf I include in the main httpd.conf:

        Alias /sample "/home/bitnami/stack/projects/sample/public"
        <Directory "/home/bitnami/stack/projects/sample/public">
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>

        ProxyPass /sample balancer://appcluster
        ProxyPassReverse /sample balancer://appcluster
        <Proxy balancer://appcluster>
            BalancerMember http://127.0.0.1:3001/sample
            BalancerMember http://127.0.0.1:3002/sample
            BalancerMember http://127.0.0.1:3003/sample
            BalancerMember http://127.0.0.1:3004/sample
        </Proxy>

    The thin.yml file:

        chdir: /opt/bitnami/projects/sample
        environment: production
        address: 127.0.0.1
        port: 3000
        timeout: 30
        log: log/thin.log
        pid: tmp/pids/thin.pid
        max_conns: 1024
        max_persistent_conns: 512
        require: []
        wait: 30
        servers: 5
        prefix: /sample
        daemonize: true

    I'm able to start and stop apache, but thin does not stop correctly. When I try to stop thin, I get this output:

        /opt/bitnami/projects/sample$ sudo thin -C config/thin.yml stop
        Stopping server on 127.0.0.1:3000 ...
        Can't stop process, no PID found in tmp/pids/thin.3000.pid
        Stopping server on 127.0.0.1:3001 ...
        Can't stop process, no PID found in tmp/pids/thin.3001.pid
        Stopping server on 127.0.0.1:3002 ...
        Can't stop process, no PID found in tmp/pids/thin.3002.pid
        Stopping server on 127.0.0.1:3003 ...
        Can't stop process, no PID found in tmp/pids/thin.3003.pid
        Stopping server on 127.0.0.1:3004 ...
        Can't stop process, no PID found in tmp/pids/thin.3004.pid

    I've tried to use nginx as well, without any luck unfortunately. Thank you for your time and help!

    Read the article

  • Make C# source run as a script?

    - by acidzombie24
    I am doing a little scripting and I find some more power would be nice, like the ability to keep trying to delete a file with a 1-second delay, AND have it be portable, since I spent some time today translating a bat script to bash. I know I can use PHP or Python, but I VERY MUCH PREFER static/compile-time checking. Is there a way to run C# code as a script? I am hoping I don't have to create a custom extension and write an app to dynamically compile and execute the script (I know I have source to compile .js somewhere...). Does anyone know of a solution?
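
    One minimal compile-and-run sketch, assuming Mono's mcs and mono are installed (the wrapper name and paths are illustrative); it keeps the compile-time checking while feeling script-like:

        #!/usr/bin/env bash
        # Usage: ./csrun.sh MyScript.cs [args...]
        src="$1"; shift
        exe="${TMPDIR:-/tmp}/$(basename "$src" .cs).exe"
        mcs -out:"$exe" "$src" && mono "$exe" "$@"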

    Read the article

  • Setting up Virtual Host in Fedora Core 15 using apache

    - by Roland
    I'm trying to set up a couple of Virtual Host files on my localhost PC running Fedora Core 15. Now I get this working, but only one Virtual Host site works, and if I type in 127.0.0.1/test/testApp.php, which is not related to the Virtual Host site, I get redirected to the Virtual Host site. Here's what I did. I created a new folder called virtualhosts in /etc/httpd/ where all my host files are stored in the format site.conf. In /etc/conf/httpd.conf I enabled NameVirtualHost *:80 and included the host files at the bottom of the config like this: Include virtualhosts/*.conf. In /etc/hosts I added the line 127.0.0.1 website. Now when I run sudo httpd -t I get Syntax OK. I restart apache and then the Virtual Host works, but as soon as I add other hosts and only use 127.0.0.1 as above it still links to the original host. Am I doing anything wrong here, or have I left something out? An example of my Virtual Host file looks like this:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot /var/www/html/website/
            ServerName website
            ServerAlias website
            ErrorLog logs/dev-error_log
            CustomLog logs/dev-access_log common
            Alias /blog /var/www/html/blog/
            <Directory /var/www/html/website/>
                Options FollowSymLinks
                Allow Override All
                Order allow,deny
                allow from all
            </Directory>
            #php_value error_reporting E_ALL & ~E_NOTICE & ~E_DEPRECATED
            php_flag display_errors On
            php_value date.timezone Europe/London
        </VirtualHost>

    Read the article
