Search Results

Search found 5564 results on 223 pages for 'scripts'.

Page 29 of 223

  • Default Controller in CodeIgniter

    - by gregavola
    I am wondering if there are any other configuration options for a default controller. For example, if I have a controller called "site" and I set the default controller in application/config/routes.php to $route['default_controller'] = "site";, then going to http://localhost brings up the index() function in the site controller. However, if I go to http://localhost/index.php/index2 to load the index2() function, I get a 404 error. If I change the URL to http://localhost/index.php/site/index2 it works fine, but I thought I had already set the default controller. Is there any way around this?
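
    A likely explanation, offered as a hedged note: default_controller only applies when the URI has no segments at all; any non-empty first segment (index2 here) is treated as a controller name, never as a method of the default controller. A minimal routes.php sketch that forwards otherwise-unmatched URIs to site instead (the catch-all is an assumption, and it will shadow every other controller you add):

        $route['default_controller'] = "site";
        // hypothetical catch-all: send any other URI to the matching
        // method of the site controller; this shadows other controllers
        $route['(:any)'] = "site/$1";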


  • problem with script

    - by lego69
    I'm working on C shell; can somebody help me find the bug? My script rank:
        #! /bin/tcsh -f
        cut -d" " -f2 ${1} | ./rankHelper
    The script rankHelper:
        #! /bin/tcsh -f
        set line = ($<)
        while (${#line} != 0)
            cat $line
            set line = ($<)
        end
    The file lines, from which the data is read:
        053-3787837 038280083
        052-3436363 012345678
        053-3232287 038280083
        054-3923898 033333333
        052-2222333 012345678
        052-1111111 012390387
    I run it using ./rank lines. Why do I receive only one number, 038280083? I thought cut would cut field 2 from all rows; I expect to see the second field of every row of lines. Thanks in advance for any help.
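
    A hedged diagnosis: in rankHelper, cat treats each word read from stdin as a filename to open rather than as text to print, so the loop is trying to cat files named 038280083 and so on; echo is almost certainly what was intended. A sketch of rankHelper with that single change:

        #! /bin/tcsh -f
        # read whitespace-split words from stdin until EOF
        set line = ($<)
        while (${#line} != 0)
            echo $line    # print the value; cat would treat it as a filename
            set line = ($<)
        end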


  • piping to variables

    - by lego69
    cut -d" " -f2 ${2} | $callsTo hello, can somebody please explain can I pipe the result of cut to variable callsTo, and how will it be stored, as the string or list?


  • .VBS scripts have stopped running... No idea why!

    - by Django Reinhardt
    We have two .vbs scripts, run by our Task Scheduler, that have suddenly stopped working for no reason we can fathom. We haven't significantly altered our system configuration in the last 24 hours, and the scripts have run without a hitch for months. According to the Task Scheduler the scripts just keep running and never stop, which is never normally the case. I stopped all running instances through the Scheduler and manually attempted to run one of the .vbs scripts. I got the following error message:
        Line: 15
        Error: The system cannot locate the resource specified.
        Code: 800C0005
        Source: msxml3.dll
    Line 15 (or 16 to be more accurate; line 15 itself is blank, but so is line 1) is:
        xml.Send
    What could have suddenly caused this? Looking in system32\ and sysWOW64\ shows that msxml3.dll exists. Anybody got any ideas? Thanks a lot!
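
    A hedged reading of that error: 800C0005 raised by msxml3.dll on Send means the HTTP request could not reach its target, so the likely culprits are a dead endpoint, DNS, proxy, or firewall change rather than a missing or broken DLL. The failing region of such a script presumably looks something like this (object name and URL are guesses):

        ' hypothetical reconstruction; the real script's names and URL differ
        Set xml = CreateObject("MSXML2.ServerXMLHTTP")
        xml.Open "GET", "http://some.internal.host/feed.xml", False
        xml.Send   ' raises 800C0005 when the URL cannot be resolved or reached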


  • How can one request a debug build from javac without changing the ant scripts?

    - by mark
    I have a complex build system involving many ant scripts, some targets of which invoke the javac task. These ant scripts do not provide a way to request a debug build from javac; neither the debug nor the debuglevel parameter of the javac task is specified. Is it still possible to instruct javac to build with debugging support without changing the build scripts themselves? The scripts are invoked from the console.
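
    One approach that needs no build-file edits, offered as a sketch rather than a guarantee: Ant chooses its compiler implementation from the build.compiler property, so a custom CompilerAdapter that switches debug on before delegating can be injected from the console. The package and class names below are hypothetical:

        package buildtools; // hypothetical package

        import org.apache.tools.ant.BuildException;
        import org.apache.tools.ant.taskdefs.compilers.Javac13;

        // behaves like Ant's modern javac adapter, but always compiles
        // as if debug="true" debuglevel="lines,vars,source" were set
        public class DebugJavacAdapter extends Javac13 {
            @Override
            public boolean execute() throws BuildException {
                getJavac().setDebug(true);
                getJavac().setDebugLevel("lines,vars,source");
                return super.execute();
            }
        }

    Compile that class, put it on Ant's classpath, and run something like ant -lib /path/to/adapter.jar -Dbuild.compiler=buildtools.DebugJavacAdapter your-target.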


  • Backup Azure Tables, schedule Azure scripts… and more

    - by Herve Roggero
    Well – months of effort are now officially over… or should I say it's just the beginning? Enzo Cloud Backup 2.0 (beta) is now officially out!!! This tool will let you do the following:
        * Backup SQL Database (and SQL Server, to a limited extent)
        * Backup Azure Tables
        * Restore SQL backups into another SQL environment
        * Restore Azure Tables in Azure Storage, or a SQL environment
        * Manage and schedule database maintenance scripts
        * Drop database schema containers (with preview) for SaaS environments
        * Receive alerts (SMTP) when operations complete or fail
    That's it at a high level… but you need to see the flexibility around these features. For example, you can select a specific backup strategy for Azure Tables, allowing faster backup operations when partition keys use GUIDs. You can also call custom stored procedures during the restore operation of Azure Tables, allowing you to transform the data along the way. You can also set a performance threshold during Azure Table backup operations to help you control possible throttling conditions in your Storage Account.
    Regarding database scripts, you can now define T-SQL scripts and schedule them for execution in a specific order. You can also tell Enzo to execute a pre and a post script during Azure Table restore operations against a SQL environment. The backup operation now supports backing up to multiple devices at the same time, so you can execute a backup request to both a local file and a blob at the same time, guaranteeing that both will contain the exact same data. And due to the level of options that are available, you can save backup definitions for later reuse. The screenshot below backs up Azure Tables to two devices (a blob and a SQL Database).
    You can also manage your database schemas for SaaS environments that use schema containers to separate customer data. This new edition allows you to see how many objects you have in each schema, back up specific schemas, and even drop all objects in a given schema. For example, the screenshot below shows that the EnzoLog database has 4 user-defined schemas, and the AFA schema has 5 tables and 1 module (stored proc, function, view…). Selecting the AFA schema and trying to delete it will prompt another screen to show which objects will be deleted.
    As you can see, Enzo Cloud Backup provides amazing capabilities that can help you safeguard your data in SQL Database and Azure Tables, and gives you advanced management functions for your Azure environment. Download a free trial today at http://www.bluesyntax.net.


  • How can I link to callback functions in Lua such that the callbacks will be updated when the scripts are reloaded?

    - by Raptormeat
    I'm implementing Lua scripting in my game using LuaBind, and one of the things I'm not clear on is the logistics of reloading the scripts live in-game. Currently, using the LuaBind C++ class luabind::object, I save references to Lua callbacks directly in the classes that use them. Then I can use luabind::call_function with that object in order to call the Lua code from the C++ code. I haven't tested this yet, but my assumption is that if I reload the scripts, all the functions will be redefined, BUT the references to the OLD functions will still exist in the form of the luabind::object held by the C++ code. I would like to be able to swap out the old for the new without manually having to manage this for every script hook in the game. How best to change this so the process works? My first thought is to not save a reference to the function directly, but maybe save the function name instead, and grab the function by name every time we want to call it. I'm looking for better ideas!
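
    The name-based idea at the end can be sketched directly with LuaBind, hedged because it trades a small per-call lookup cost for reload safety: store the function's name and resolve it through luabind::globals at call time, so a reloaded chunk's new definition is always the one invoked. The class and names below are illustrative:

        #include <luabind/luabind.hpp>
        #include <string>
        #include <utility>

        // holds a function name instead of a luabind::object, so reloading
        // the script automatically rebinds the callback
        class ScriptCallback {
        public:
            ScriptCallback(lua_State* L, std::string name)
                : L_(L), name_(std::move(name)) {}

            void fire(int arg) {
                // look the global up by name at call time; after a reload
                // this picks up the newly defined function
                luabind::object fn = luabind::globals(L_)[name_.c_str()];
                if (fn && luabind::type(fn) == LUA_TFUNCTION)
                    luabind::call_function<void>(fn, arg);
            }

        private:
            lua_State* L_;
            std::string name_;
        };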


  • Configuring httpd.conf to handle wildcard domains and multiple scripts?

    - by Steve
    I have a full-blown site like:
        http://www.example.com (uses index.php)
        http://www.example.com/scriptA.php
        http://www.example.com/scriptB.php
    I now want to have the possibility of setting up subsites like:
        http://alpha.example.com
        http://alpha.example.com/scriptA.php
        http://alpha.example.com/scriptB.php
    From http://stackoverflow.com/questions/2844004/subdomain-url-rewriting-and-web-apps/2844033#2844033 , I understand that I have to do:
        RewriteCond %{HTTP_HOST} ^([^./]+)\.example\.com$
        RewriteCond %1 !=www
        RewriteRule ^ index.php?domain=%1
    But what about the other scripts, like scriptA and scriptB? How do I tell httpd.conf to handle those properly as well? How can I tell httpd.conf to handle everything after the forward slash exactly as it does on the main site, but pass a parameter flag like ?domain=alpha? (Cross-posted at: http://stackoverflow.com/questions/11365566/configuring-httpd-conf-to-handle-wildcard-domains-and-multiple-scripts)
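
    A hedged sketch of one way to generalize the rule: capture the requested path instead of discarding it, and add the QSA flag so an existing query string survives alongside the injected domain parameter. Untested, and it assumes all scripts live at the document root:

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^([^./]+)\.example\.com$
        RewriteCond %1 !=www
        # guard so the rule does not re-fire on the rewritten request
        RewriteCond %{QUERY_STRING} !(^|&)domain=
        # pass the original path through unchanged, appending ?domain=<sub>
        RewriteRule ^/?(.*)$ /$1?domain=%1 [QSA,L]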


  • error while running ruby application at system startup in ubuntu

    - by anjo
    I am on an Ubuntu 12.04 machine. I have a script file which runs fine when entered manually in a terminal:
        gnome-terminal -e /home/precise/Desktop/cartodb/script.sh
    The content of the script file is:
        cd /home/ubuntupc/Desktop/cartodb20/
        sh /home/ubuntupc/.rvm/scripts/rvm
        bundle exec foreman start -p 3000
    What I tried to do is run this script at every system startup, so in Startup Applications I added the command:
        gnome-terminal -e /home/precise/Desktop/cartodb/script.sh
    and in the terminal's Edit - Profile Preferences - Title and Command I checked "Run command as a login shell". But this seems to be not working. When I restarted the machine I found these errors in the terminal:
        The child process exited normally with status 127.
        ERROR: RVM Ruby not used, run `rvm use ruby` first.
    Some info regarding the installed packages and the system:
        $ which ruby
        /home/ubuntupc/.rvm/rubies/ruby-1.9.2-p320/bin/ruby
        $ which rails
        /home/ubuntupc/.rvm/gems/ruby-1.9.2-p320/bin/rails
        $ which gem
        /home/ubuntupc/.rvm/rubies/ruby-1.9.2-p320/bin/gem
        $ cat ~/.bash_profile
        [[ -s "$HOME/.profile" ]] && source "$HOME/.profile" # Load the default .profile
        [[ -s "$HOME/.rvm/scripts/rvm" ]] && source "$HOME/.rvm/scripts/rvm" # Load RVM into a shell session *as a function*
        $ which -a ruby
        /home/ubuntupc/.rvm/rubies/ruby-1.9.2-p320/bin/ruby
        $ sudo update-alternatives --config ruby
        update-alternatives: error: no alternatives for ruby.
        $ sudo find / -name "rubygems" -print
        /home/ubuntupc/.rvm/rubies/ruby-1.9.2-p320/lib/ruby/site_ruby/1.9.1/rubygems
        /home/ubuntupc/.rvm/rubies/ruby-1.9.2-p320/lib/ruby/1.9.1/rubygems
        /home/ubuntupc/.rvm/src/ruby-1.9.2-p320/lib/rubygems
        /home/ubuntupc/.rvm/src/ruby-1.9.2-p320/test/rubygems
        /home/ubuntupc/.rvm/src/ruby-1.9.2-p320/test/rubygems/rubygems
        /home/ubuntupc/.rvm/src/ruby-1.9.2-p320/doc/rubygems
        /home/ubuntupc/.rvm/src/rubygems-2.2.1/lib/rubygems
        /home/ubuntupc/.rvm/src/rubygems-2.2.1/test/rubygems
        /home/ubuntupc/.rvm/src/rubygems-2.2.1/test/rubygems/rubygems
        /home/ubuntupc/.rvm/src/rvm/scripts/functions/rubygems
        /home/ubuntupc/.rvm/src/rvm/scripts/rubygems
        /home/ubuntupc/.rvm/scripts/functions/rubygems
        /home/ubuntupc/.rvm/scripts/rubygems
        /usr/lib/ruby/1.9.1/rubygems
        /usr/local/rvm/rubies/ruby-1.9.2-p320/lib/ruby/site_ruby/1.9.1/rubygems
        /usr/local/rvm/rubies/ruby-1.9.2-p320/lib/ruby/1.9.1/rubygems
        /usr/local/rvm/src/ruby-1.9.2-p320/lib/rubygems
        /usr/local/rvm/src/ruby-1.9.2-p320/test/rubygems
        /usr/local/rvm/src/ruby-1.9.2-p320/test/rubygems/rubygems
        /usr/local/rvm/src/ruby-1.9.2-p320/doc/rubygems
        /usr/local/rvm/src/rubygems-2.2.0/lib/rubygems
        /usr/local/rvm/src/rubygems-2.2.0/test/rubygems
        /usr/local/rvm/src/rubygems-2.2.0/test/rubygems/rubygems
        /usr/local/rvm/src/rvm/scripts/functions/rubygems
        /usr/local/rvm/src/rvm/scripts/rubygems
        /usr/local/rvm/scripts/functions/rubygems
        /usr/local/rvm/scripts/rubygems
    Please point out what I am missing, as I am new to Ruby applications. Thanks in advance.
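
    A hedged fix: sh /home/ubuntupc/.rvm/scripts/rvm runs RVM's loader in a child shell and then discards that environment, so bundle never sees an RVM Ruby. The loader has to be sourced into the running shell, and at boot it is safest to select the Ruby explicitly. A sketch of script.sh along those lines (the version string is inferred from the paths above):

        #!/bin/bash
        # load RVM into the current shell; `sh .../rvm` only affects a subshell
        source "$HOME/.rvm/scripts/rvm"

        cd /home/ubuntupc/Desktop/cartodb20/ || exit 1

        # select the interpreter explicitly; startup shells may not be
        # login shells even with the profile checkbox set
        rvm use ruby-1.9.2-p320

        bundle exec foreman start -p 3000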


  • java UnitilsException: Executed scripts table "PUBLIC"."DBMAINTAIN_SCRIPTS" doesn't exist yet or is invalid [closed]

    - by Philippe
    I want to use Unitils for testing my Java program, and I get the following error message: "UnitilsException: Executed scripts table "PUBLIC"."DBMAINTAIN_SCRIPTS" doesn't exist yet or is invalid". My tables are defined in a DDL file, but neither the EMPLOYEES table nor DBMAINTAIN_SCRIPTS gets created. Could you tell me whether the dataSetStructureGenerator.xsd.dirName=src/test/resources/dataset-schema property is still active in Unitils 3.3? Where is my mistake?
    My DDL file:
        SET REFERENTIAL_INTEGRITY FALSE;
        SET DATABASE COLLATION "French";
        SET SCHEMA PUBLIC;
        CREATE TABLE DBMAINTAIN_SCRIPTS (FILE_NAME VARCHAR2(150), FILE_LAST_MODIFIED_AT INTEGER, CHECKSUM VARCHAR2(50), EXECUTED_AT VARCHAR2(20), SUCCEEDED INTEGER);
        CREATE TABLE EMPLOYEES(ID IDENTITY NOT NULL,NAME VARCHAR(20),TITLE VARCHAR(20),SALARY DOUBLE,NI INTEGER NOT NULL)
    My Unitils properties file:
        # comments documenting these unitils configuration properties removed for
        # brevity. look for commenting in unitils-default.properties in the root of the
        # unitils jar if needed.
        unitils.modules=database,dbunit,easymock,inject
        unitils.module.hibernate.enabled=false
        unitils.module.spring.enabled=false
        # these placeholders are set in avaje.properties
        # manages the DbUnit configuration
        database.driverClassName=org.hsqldb.jdbcDriver
        database.url=jdbc:hsqldb:mem:unitils-example
        database.schemaNames=PUBLIC
        database.userName=SA
        database.password=
        database.dialect=hsqldb
        # unitils will construct the test database using the ddl file found in this
        # directory
        dbMaintainer.fileScriptSource.scripts.location=src/main/resources
        updateDataBaseSchema.enabled=true
        sequenceUpdater.sequencevalue.lowestacceptable=100
        dataSetStructureGenerator.xsd.dirName=src/test/resources/dataset-schema
        #dbMaintainer.autoCreateExecutedScriptsTable property to true
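
    A hedged observation: the last line of that properties file is a commented-out note, not a real key=value setting, so DbMaintain is never told to create its bookkeeping table, and the DDL script (including its own CREATE TABLE DBMAINTAIN_SCRIPTS) never runs because the check for that table fails first. Enabling the property properly may be all that is needed:

        # let Unitils/DbMaintain create the executed-scripts table on first run
        dbMaintainer.autoCreateExecutedScriptsTable=true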


  • Does it make sense to write build scripts in C++?

    - by Klaim
    I'm using CMake to generate my projects' IDE files/makefiles, but I still need to call custom "scripts" to manipulate my compiled files or even generate code. In previous projects I've been using Python and it was OK, but now I'm having serious trouble managing a lot of dependencies in the two very big projects I'm working on, so I want to minimize the dependencies everywhere. Someone suggested to me to use C++ to write my build scripts instead of adding a language dependency just for that. The projects themselves already use C++, so there are several advantages that I can see:
        - to build the whole project, only a C++ compiler and CMake would be necessary, nothing else (all the other dependencies are C or C++);
        - C++ type safety (when using modern C++) makes everything easier to get "correct";
        - it's also the language I know best, so I'm more at ease with it, even though I'm able to write some good Python code;
        - potential gain in execution speed (but I don't think it will really be perceptible).
    However, I think there might be some drawbacks, and I'm not sure of the real impact, as I haven't tried yet:
        - it might take longer to write the code (that said, I'm efficient enough in C++ to write something that works quickly, so maybe for this system it wouldn't take so long; compilation time shouldn't be a problem for this case);
        - I must assume that all the text files I'll read as input are in UTF-8; I'm not sure that can be easily checked at runtime in C++, and the language will not check it for you;
        - libraries in C++ are harder to manage than in scripting languages.
    I lack experience and foresight, so maybe I'm missing advantages and drawbacks. So the question is: does it make sense to use C++ for this? Do you have experiences to report, and do you see advantages and disadvantages that might be important?



  • Are PackageMaker installations with preinstall scripts broken on Snow Leopard?

    - by Stu Thompson
    Everything worked on 10.5, but now my PackageMaker installation project is broken. I've been fighting a problem for a few days now, and either Snow Leopard (OS X 10.6.1) has broken PackageMaker installations, or I am lacking a very, very basic tidbit of knowledge. To narrow down the problem, I've gotten to this point:
        1. Create a new PackageMaker installation
        2. Have it install a jpeg image into my home directory
        3. Define a preinstall script that does nothing:
            #/bin/sh
            exit 0
        4. Run the above... and watch it fail with the below error message, like clockwork:
        Sep 14 15:09:45 manoa installd[5620]: PackageKit: ----- Begin install -----
        Sep 14 15:09:45 manoa installd[5620]: PackageKit: request=PKInstallRequest <1 packages, destination=/>
        Sep 14 15:09:45 manoa installd[5620]: PackageKit: packages=(\n "PKLeopardPackage <file://localhost/Users/stu/Desktop/asdf.pkg>"\n)
        Sep 14 15:09:46 manoa installd[5620]: PackageKit: Extracting /Users/stu/Desktop/asdf.pkg (destination=/var/folders/Hb/HbXJFyEpFaupt5QyLN-pTk+++TI/-Tmp-/PKInstallSandbox-tmp/Root/~, uid=501)
        Sep 14 15:09:46 manoa installd[5620]: PackageKit: Executing script "./preinstall" in /private/tmp/PKInstallSandbox.cmlS2H/Scripts/test.test.5year_header.pkg.PFrHNB
        Sep 14 15:09:46 manoa installd[5620]: PackageKit: *** launch path not accessible
        Sep 14 15:09:46 manoa installd[5620]: PackageKit: Install Failed: PKG: pre-install scripts for "test.test.5year_header.pkg"\nError Domain=PKInstallErrorDomain Code=112 UserInfo=0x100149430 "An error occurred while running scripts from the package “asdf”." {\n NSFilePath = "./preinstall";\n NSLocalizedDescription = "An error occurred while running scripts from the package \U201casdf\U201d.";\n NSURL = "file://localhost/Users/stu/Desktop/asdf.pkg";\n PKInstallPackageIdentifier = "test.test.5year_header.pkg";\n}
        Sep 14 15:09:46 manoa Installer[5614]: install:didFailWithError:Error Domain=PKInstallErrorDomain Code=112 UserInfo=0x1195917c0 "An error occurred while running scripts from the package “asdf”."
        Sep 14 15:09:46 manoa Installer[5614]: Install failed: The Installer encountered an error that caused the installation to fail. Contact the software manufacturer for assistance.
        Sep 14 15:09:47 manoa Installer[5614]: IFDInstallController 144040 state = 7
        Sep 14 15:09:47 manoa Installer[5614]: Displaying 'Install Failed' UI.
        Sep 14 15:09:47 manoa Installer[5614]: 'Install Failed' UI displayed message:'The Installer encountered an error that caused the installation to fail. Contact the software manufacturer for assistance.'.
    There is no file in /private/tmp/PKInstallSandbox.cmlS2H/Scripts/test.test.5year_header.pkg.PFrHNB/, which makes me think the problem is with PackageMaker, and not me. But I'm new to the world of OS X software installation, so doubts remain. So, the question: is PackageMaker with a preinstall script broken on OS X 10.6? Or is there some requirement regarding preinstall scripts that I do not understand?
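
    A hedged diagnosis from the transcript itself: the do-nothing script begins with #/bin/sh, which is an ordinary comment rather than a shebang; without the #! magic number the kernel has no interpreter path to launch, which matches PackageKit's "launch path not accessible" complaint almost word for word. The corrected two-line script:

        #!/bin/sh
        exit 0

    If the real preinstall already has a proper shebang, the other usual suspect on 10.6 is the script losing its executable bit when packaged.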


  • PHP scripts randomly becoming really slow to respond - Database lockup?

    - by webnoob
    Hi all, I wasn't sure whether to post this here or on Stack Overflow, so apologies if it's in the wrong place. I have about 7 PHP scripts running on a CentOS VPS. Each of these scripts contacts a game server and processes its logs; based on the logs it either does some database queries or sends info back to the game server. I am having an issue where some of the scripts will randomly become REALLY slow to respond, and I don't know where to start with my debugging. Each script connects to its own database schema, but on the same MySQL server. Each script does about 4 inserts per second and twice as many select statements on its respective database. I thought a database lockup might cause the issue, but some console messages that are read from the database are sent to the game server's console without issue every 30 seconds, even when the script is slow to respond to other commands. None of the scripts are using a lot of memory or CPU power, about 0.1% each. I know this information is really vague, but I don't know Linux very well at all (in fact, top is about my limit) and I really don't know where to start debugging this. Thanks.
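
    A hedged first step for the lockup theory: ask MySQL what it is doing at the moment a script stalls. These are standard MySQL statements, run from any client with sufficient privileges (the 1-second threshold is an arbitrary starting point):

        -- list every connection and what it is waiting on; long-running
        -- statements or 'Locked' states point at contention
        SHOW FULL PROCESSLIST;

        -- table-lock counters; a climbing Table_locks_waited suggests lockups
        SHOW GLOBAL STATUS LIKE 'Table_locks%';

        -- record statements slower than 1 second for later inspection
        SET GLOBAL slow_query_log = 'ON';
        SET GLOBAL long_query_time = 1;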


  • How would one go about integrating Python into a C++ game for the use of user-made scripts

    - by Spencer Killen
    I'm quite new to game development (not the site) and I'm currently just trying to educate myself about certain things before I really begin working on a game. Anyway, I'd like to know the basic algorithm/outline of how a game would be coded efficiently with the implementation of user-coded scripts for gameplay and levels that are written in Python. Is this even possible? Would all the features of Python be available, like, say, "multi-threading"?
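
    It is possible; CPython is designed to be embedded, and an embedded interpreter is the full language, so threading works (with the usual GIL caveat that only one thread executes Python bytecode at a time). A minimal hedged sketch of the embedding side, with a hypothetical script file name; a real game would also expose its engine API to Python before running user scripts:

        #include <Python.h>   // CPython embedding API
        #include <cstdio>

        int main() {
            Py_Initialize();   // start the embedded interpreter

            // run a user-supplied gameplay script (hypothetical name)
            if (FILE* f = std::fopen("level_script.py", "r")) {
                PyRun_SimpleFile(f, "level_script.py");
                std::fclose(f);
            }

            Py_Finalize();     // shut the interpreter down
            return 0;
        }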


  • What are some good examples of Powernap scripts and its use?

    - by shootingstars
    I would like to use powernap for putting my media server into suspend mode, and I haven't been able to find any example /etc/powernap/action scripts out there, except three linked examples. Does anybody have a good script, or can anybody recommend particular techniques for its use? From the comments of the default /etc/powernap/action script:
        # You may do one of:
        # 1) Write your own custom script below and make this file executable,
        #    calling some specific action, such as:
        #      /usr/sbin/pm-suspend
        #      /usr/sbin/pm-hibernate
        #      /sbin/poweroff
        #      echo 'I am wasting electricity' | mail [email protected]
        # 2) Replace this file with an executable script or binary
        # 3) Symlink this file to some other executable script or binary
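
    For a media server, a hedged sketch of an action script: check for anything that should veto a suspend, and fall through to pm-suspend otherwise. Both guard conditions are assumptions to adapt (active logins, and established NFS connections on port 2049 as a stand-in for "someone is streaming"):

        #!/bin/sh
        # /etc/powernap/action -- run by powernapd once the box looks idle

        # guard: skip suspending while anyone is logged in
        if who | grep -q .; then
            exit 0
        fi

        # guard: skip while an NFS client still holds a connection
        if netstat -tn 2>/dev/null | grep -q ':2049 .*ESTABLISHED'; then
            exit 0
        fi

        /usr/sbin/pm-suspend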


  • Why "./" is used to run ".sh" scripts in Unix? [duplicate]

    - by user283502
    This question already has an answer here: "Why do I need to type `./` before executing a program in the current directory?" (10 answers). I executed a .sh script today with the "./" prefix, as in ./script.sh, but I am a bit confused because it also executes without the ./. Why is ./ required to run .sh scripts?
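
    A hedged explanation: the shell resolves bare command names by searching the directories listed in $PATH, and the current directory is deliberately left off $PATH on most systems, so an explicit path such as ./script.sh is required; when a bare script.sh works anyway, either "." is on that machine's $PATH or the script is installed in a $PATH directory. A quick demonstration:

        $ cat hello.sh
        #!/bin/sh
        echo hello
        $ chmod +x hello.sh
        $ hello.sh
        sh: hello.sh: command not found    # '.' is not searched via $PATH
        $ ./hello.sh                       # an explicit path skips the search
        hello
        $ PATH=$PATH:. hello.sh            # works once '.' is (unsafely) added
        hello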


  • How to use slider scripts given by websites? [closed]

    - by Payo
    There are many slider scripts and code samples being given away for free, like the parallax sliders. Everything is given: the markup, the CSS, and the JavaScript. As I am not a professional in these fields but do have some coding knowledge, how do I use these tutorials? They are not very explicit about the steps involved in implementing them on a site/blog. Is there any site that goes into depth, or would someone here be willing to help me?


  • Default shell for running scripts (w/o shebang) in macOS?

    - by Igor Spasic
    I have zsh as the default shell in macOS, and everything is working fine. zsh is installed as a brew package, I've set the default shell on my account, and the new shell is listed in /etc/shells... everything is set, like I've said. I have some shell scripts in which I use some zsh-specific commands, like print. When I execute such a script from the command line, the print command is not recognized and the script fails. The script does not have a shebang line; when I put in the shebang line for zsh, everything works and the print command runs. Since I am using only zsh, is it possible to set a default shell for running scripts, so I don't have to put a shebang line in my .zsh scripts? Or is it possible to associate the .zsh extension with execution by zsh?
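
    A hedged answer sketch: there is no system-wide "default interpreter" for shebang-less scripts; a POSIX shell that is told to run one falls back to sh, which is exactly where print disappears. What zsh does offer is suffix aliases, which bind a file extension to an interpreter for commands typed at the prompt:

        # in ~/.zshrc: run any command word ending in .zsh with zsh itself,
        # shebang line or not
        alias -s zsh=zsh

        # after that, typing a command word that ends in .zsh, such as
        #   myscript.zsh
        # executes it as: zsh myscript.zsh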


  • How the "migrations" approach makes database continuous integration possible

    - by David Atkinson
    Testing a database upgrade script as part of a continuous integration process will only work if there is an easy way to automate the generation of the upgrade scripts.
    There are two common approaches to managing upgrade scripts. The first is to maintain a set of scripts as you go along. Many SQL developers I've encountered will store these in a folder prefixed numerically to ensure they are ordered as they are intended to be run. Occasionally there is an accompanying document or a batch file that ensures that the scripts are run in the defined order. Writing these scripts during the course of development requires discipline. It's all too easy to load up the table designer and to make a change directly to the development database, rather than to save off the ALTER statement that is required when the same change is made to production. This discipline can add considerable overhead to the development process. However, come the end of the project, everything is ready for final testing and deployment.
    The second development paradigm is to not do the above. Changes are made to the development database without considering the incremental update scripts required to effect the changes. At the end of the project, the SQL developer or DBA is tasked to work out what changes have been made, and to hand-craft the upgrade scripts retrospectively. The end of the project is the wrong time to be doing this, as the pressure is mounting to ship the product. And where data deployment is involved, it is prudent not to feel rushed.
    Schema comparison tools such as SQL Compare have made this latter technique more bearable. These tools work by analyzing the before and after states of a database schema, and calculating the SQL required to transition the database. Problem solved? Not entirely. Schema comparison tools are huge time savers, but they have their limitations. There are certain changes that can be made to a database that can't be determined purely from observing the static schema states. If a column is split, how do we determine the algorithm required to copy the data into the new columns? If a NOT NULL column is added without a default, how do we populate the new field for existing records in the target? If we rename a table, how do we know we've done a rename, as we could equally have dropped a table and created a new one? All the above are examples of situations where developer intent is required to supplement the script generation engine.
    SQL Source Control 3 and SQL Compare 10 introduced a new feature, migration scripts, allowing developers to add custom scripts to replace the default script generation behavior. These scripts are committed to source control alongside the schema changes, and are associated with one or more changesets. Before this capability was introduced, any schema change that required additional developer intent would break any attempt at auto-generation of the upgrade script, rendering deployment testing as part of continuous integration useless.
    SQL Compare will now generate upgrade scripts not only using its diffing engine, but also using the knowledge supplied by developers in the guise of migration scripts. In future posts I will describe the necessary command line syntax to leverage this feature as part of an automated build process such as continuous integration.
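
    To make the "developer intent" point concrete, here is a hedged, hypothetical migration script for one of the cases named above, splitting a column, which no diff engine could derive from the before and after schema states alone (table and column names are invented):

        -- migration: split Customer.FullName into FirstName / LastName
        ALTER TABLE Customer ADD FirstName NVARCHAR(50) NULL;
        ALTER TABLE Customer ADD LastName  NVARCHAR(50) NULL;
        GO

        -- the copy rule is pure developer intent: first word to FirstName,
        -- remainder to LastName (single-word names leave LastName empty)
        UPDATE Customer
        SET FirstName = LEFT(FullName, CHARINDEX(' ', FullName + ' ') - 1),
            LastName  = LTRIM(STUFF(FullName, 1, CHARINDEX(' ', FullName + ' ') - 1, ''));
        GO

        ALTER TABLE Customer DROP COLUMN FullName;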


  • How do I use beta Perl modules from beta Perl scripts?

    - by DVK
    If my Perl code has a production code location and a "beta" code location (e.g. production Perl code is in /usr/code/scripts, BETA Perl code is in /usr/code/beta/scripts; production Perl libraries are in /usr/code/lib/perl and BETA versions of those libraries are in /usr/code/beta/lib/perl), is there an easy way for me to achieve such a setup? The exact requirements are:
        1. The code must be THE SAME in the production and BETA locations. To clarify: to promote any code (library or script) from BETA to production, the ONLY thing which needs to happen is literally issuing a cp command from the BETA to the prod location; both the file name AND the file contents must remain identical.
        2. BETA versions of scripts must call other BETA scripts, and BETA libraries (if they exist) or production libraries (if BETA libraries do not exist).
        3. The code paths must be the same between BETA and production, with the exception of the base directory (/usr/code/ vs. /usr/code/beta/).
    I will present how we solved the problem as an answer to this question, but I'd like to know if there's a better way.
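
    One hedged way to satisfy all three constraints: have each script locate its libraries relative to its own resolved location, with the production tree as a fallback, so the identical file behaves correctly from either base directory and a bare cp promotes it. Module and script names below are placeholders:

        #!/usr/bin/perl
        use strict;
        use warnings;

        # $RealBin is this script's directory with symlinks resolved:
        # /usr/code/scripts in production, /usr/code/beta/scripts in BETA
        use FindBin qw($RealBin);

        # use lib prepends, so the later line is searched first:
        use lib "/usr/code/lib/perl";     # production fallback
        use lib "$RealBin/../lib/perl";   # local (BETA-or-prod) libraries win

        use Some::Library;   # hypothetical module; a BETA copy wins if present

        # calling sibling scripts through $RealBin keeps BETA calling BETA:
        system("$RealBin/other_script.pl");   # hypothetical script name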


  • How to compile Python scripts for use in FORTRAN?

    - by Vincent Poirier
    Hello, although I found many answers and discussions about this question, I am unable to find a solution particular to my situation. Here it is: I have a main program written in FORTRAN, and I have been given a set of Python scripts that are very useful. My goal is to access these Python scripts from my main FORTRAN program. Currently, I simply call the scripts from FORTRAN as such:
        CALL SYSTEM ('python pyexample.py')
    Data is read from .dat files and written to .dat files; this is how the Python scripts and the main FORTRAN program communicate with each other. I am currently running my code on my local machine, where I have Python installed with numpy, scipy, etc.
    My problem: the code needs to run on a remote server. For strictly FORTRAN code, I compile the code locally and send the executable to the server, where it waits in a queue. However, the server does not have Python installed. The server is being used as a number-crunching station between universities and industry, and installing Python along with the necessary modules on the server is not an option. This means that my CALL SYSTEM ('python pyexample.py') strategy no longer works.
    Solution? I found some information on a couple of things in the thread http://stackoverflow.com/questions/138521/is-it-feasible-to-compile-python-to-machine-code: Shedskin, Psyco, Cython, PyPy, the CPython API. These tools seem to compile Python script to C or C++ code. Apparently not all Python features can be translated to C, and some of these projects appear to be experimental. Is it possible to compile my Python scripts with my FORTRAN code? There exists f2py, which makes FORTRAN code callable from Python, but it doesn't work the other way around. Any help would be greatly appreciated. Thank you for your time. Vincent
    PS: I'm using Python 2.6 on Ubuntu
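
    A hedged alternative to translating the scripts: freeze them instead, bundling a private Python interpreter plus numpy/scipy into one self-contained executable that the server can run without any installed Python. PyInstaller is one such freezer (this assumes the server shares the build machine's OS and architecture):

        # on the local machine, which has Python and the required modules
        pip install pyinstaller
        pyinstaller --onefile pyexample.py

        # dist/pyexample is now a standalone binary; ship it alongside the
        # FORTRAN executable and change the call to:
        #   CALL SYSTEM ('./pyexample')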

