Search Results

Search found 23404 results on 937 pages for 'script compression'.


  • How to install an init.d script in Ubuntu?

    - by suhail
    I am trying to install an init.d script to run celery for scheduling tasks. Here are the steps I followed: I copied the file celeryd into /etc/init.d/ and created a configuration file celeryd in /etc/default/. Now when I try to start it with sudo /etc/init.d/celeryd start, it throws the error sudo: /etc/init.d/celeryd: command not found. I googled how to install an init.d script and found this SO question. It says to run uname -a, and when I do I get: Linux capsonesystem8-desktop 3.2.0-43-generic-pae #68-Ubuntu SMP Wed May 15 03:55:10 UTC 2013 i686 i686 i386 GNU/Linux. It also says to use a utility like insserv to enable the init.d script, so I tried insserv /etc/init.d/celeryd, but that throws insserv: command not found. So I tried to install insserv with sudo apt-get install insserv, but it says it is already installed: insserv is already the newest version. 0 upgraded, 0 newly installed, 0 to remove and 222 not upgraded. So how do I install the init.d script? Any help will be appreciated. Update 1: when I ran sh -x /etc/init.d/celeryd start it revealed some errors; maybe that is why the service won't start. Update 2: I have cleared all the errors that sh -x /etc/init.d/celeryd start showed, but sudo /etc/init.d/celeryd start still throws the command-not-found error.

    Read the article

  • How can I make Twitter, Facebook and Reddit share buttons load last?

    - by Daniel Bingham
    I have a website with a number of pages that sport Twitter, Facebook and Reddit share buttons. They take forever to load, and until they do, the rest of the page doesn't load. So how can I make them load last? Currently they are loaded something like this:

        <div class="item"><a href="http://twitter.com/share" class="twitter-share-button" data-count="vertical" data-via="FridgeToFood" data-related="danielBingham:Recipe and update tweets from Fridge to Food.">Tweet</a><script type="text/javascript" src="http://platform.twitter.com/widgets.js"></script></div>
        <div class="item"><script src="http://connect.facebook.net/en_US/all.js#xfbml=1"></script><fb:like layout="box_count" width="40"></fb:like></div>
        <div class="item"> <script type="text/javascript">reddit_target='recipes';</script> <script type="text/javascript" src="http://reddit.com/static/button/button2.js"></script> </div>

    They sit in a div called "shareWrapper" and load to one side of the page. The buttons load wherever the script code is placed. As far as I know, I can't place the script code at the bottom of the page and move the resulting buttons after the fact. I want them to appear near the top, which right now means they stop everything below them from loading for several seconds. I tried loading them with JavaScript using jQuery's $(document).ready(), but that failed; it seems to leave the page in some sort of loading loop from which it never emerges. Are there other ways to get these to load last?
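
    A minimal sketch of one way to defer the widgets (assuming the placeholder markup above stays in place; the helper name is made up, and scripts that rely on document.write, as the Reddit button script may, cannot be deferred this way):

    ```javascript
    // Load each third-party widget script only after the page has rendered.
    // The Twitter and Facebook loaders scan the DOM for their placeholders
    // (.twitter-share-button, <fb:like>) when they run, so the buttons still
    // appear inside the "shareWrapper" markup near the top of the page.
    function loadWidget(src) {
      var s = document.createElement('script');
      s.src = src;
      s.async = true;
      document.getElementsByTagName('head')[0].appendChild(s);
    }

    window.onload = function () {
      loadWidget('http://platform.twitter.com/widgets.js');
      loadWidget('http://connect.facebook.net/en_US/all.js#xfbml=1');
    };
    ```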

    Read the article

  • iFrame ads from site to site

    - by user28327
    How can I make the iFrame ads go from site A to site B, rather than like this: <iframe src="http://www.site.com"></iframe>? Example: site A carries the ad code directly:

        <script type="text/javascript"><!-- google_ad_client = "ca-pub-3434343507"; /* site */ google_ad_slot = "343435270"; google_ad_width = 728; google_ad_height = 90; //--> </script>
        <script type="text/javascript" src="http://pagead2.googlesyndication.com/pagead/show_ads.js"></script>

    Site B should then show the same ad code, but wrapped in an iFrame:

        --iFrame--
        <script type="text/javascript"><!-- google_ad_client = "ca-pub-3434343507"; /* site */ google_ad_slot = "343435270"; google_ad_width = 728; google_ad_height = 90; //--> </script>
        <script type="text/javascript" src="http://pagead2.googlesyndication.com/pagead/show_ads.js"></script>
        --iFrame--
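
    One hedged reading of this question is that site B should display site A's ad unit inside an iframe without embedding the whole page. A sketch of the embedding mechanics only (the URLs, sizes and element ids are made up, and wrapping AdSense code in an iframe you control may conflict with AdSense program policies, so treat this purely as an illustration):

    ```javascript
    // On site B: embed a minimal ad-only page hosted on site A.
    // http://site-a.example/ad-unit.html would contain nothing but the AdSense
    // snippet from the question, sized to the ad unit (728x90).
    var frame = document.createElement('iframe');
    frame.src = 'http://site-a.example/ad-unit.html'; // hypothetical ad-only page
    frame.width = '728';       // match google_ad_width
    frame.height = '90';       // match google_ad_height
    frame.scrolling = 'no';
    frame.frameBorder = '0';
    document.getElementById('ad-slot').appendChild(frame); // assumed placeholder div on site B
    ```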

    Read the article

  • Dojo and Separate JavaScript File

    - by Bunch
    For a project I needed to use the ArcGIS API for some mapping. To use this you need Dojo, but in this case all it really comes down to is adding some require lines and an addOnLoad call to your web page. At first everything was working great: the maps rendered and the various layers populated as needed. Once it was working, I started moving the various JavaScript functions into their own files to keep everything nice and neat. Then the problems started; mainly, the map would not show up any more, which was a pretty big problem. Luckily the fix was pretty simple: just move the dojo.addOnLoad line into its own script tag. If I had dojo.addOnLoad in the same script block as the various require lines, it would not work as expected. Works:

        <script type="text/javascript" language="javascript" src="javascript/test.js" />
        <script type="text/javascript">
          dojo.require("esri.map");
          dojo.require("esri.tasks.locator");
          dojo.require("esri.tasks.query");
          dojo.require("esri.tasks.geometry");
        </script>
        <script type="text/javascript">
          dojo.addOnLoad(init);
        </script>

    Does not work:

        <script type="text/javascript" language="javascript" src="javascript/test.js" />
        <script type="text/javascript">
          dojo.require("esri.map");
          dojo.require("esri.tasks.locator");
          dojo.require("esri.tasks.query");
          dojo.require("esri.tasks.geometry");
          dojo.addOnLoad(init);
        </script>

    Technorati Tags: JavaScript, Dojo

    Read the article

  • Is sending data to a server via a script tag an outdated paradigm?

    - by KingOfHypocrites
    I inherited some old JavaScript code for a website tracker that submits data to the server using a script URL:

        var src = "http://domain.zzz/log/method?value1=x&value2=x"
        var e = document.createElement('script');
        e.src = src;

    I guess the idea was that cross-domain requests didn't have to be enabled. It was also written back in 2005, and I'm not sure how well XmlHttpRequests were supported at the time. Anyone could stick this on their website and send data to our server for logging, and it would ideally work in almost any browser with JavaScript. The main limitation is that all the server can do is send back JavaScript code, and each request has to wait for a response from the server (in the form of a generic acknowledgement JavaScript method call) to know it was received before the next one is sent. I can't find anyone doing this online, or any metrics on whether it is faster or more secure than XmlHttpRequests. I don't know if this is just an old way of doing things, or whether it's still the best way to send data to the server when you are mostly sending data one way and you need the best performance possible. So, in summary: is sending data via a script tag an outdated paradigm? Should I abandon it in favor of XmlHttpRequests?
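
    For comparison, a minimal sketch of the two approaches (only the /log/method endpoint comes from the question; everything else is illustrative). The script-tag beacon sidesteps the same-origin policy even in old browsers, while the XMLHttpRequest version needs CORS headers on the logging server but doesn't force the response to be executable JavaScript:

    ```javascript
    // Script-tag beacon, roughly what the inherited tracker does.
    function logViaScriptTag(value1, value2) {
      var s = document.createElement('script');
      s.src = 'http://domain.zzz/log/method?value1=' + encodeURIComponent(value1) +
              '&value2=' + encodeURIComponent(value2);
      document.getElementsByTagName('head')[0].appendChild(s);
    }

    // XMLHttpRequest equivalent: the server must send CORS headers
    // (Access-Control-Allow-Origin) for cross-domain callers.
    function logViaXhr(value1, value2) {
      var xhr = new XMLHttpRequest();
      xhr.open('GET', 'http://domain.zzz/log/method?value1=' + encodeURIComponent(value1) +
                      '&value2=' + encodeURIComponent(value2));
      xhr.send();
    }
    ```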

    Read the article

  • MultiSystem script won't work: "Syntax error: redirection unexpected" (it worked two days ago)

    - by user74005
    This is my first question. I use MultiSystem all of the time and have installed it on both Kubuntu and Ubuntu with no issues. I wiped my hard drive to try some new OSes. I'm now using the exact same OS (Ubuntu 12.05) I used to load my USB stick to begin with, and now I'm getting this ridiculous syntax error. I know the script is correct; I'm following the exact same steps I used to get to this point and I'm getting different results, which is very confusing. I have no clue how to begin addressing this issue, and I now get the same syntax error on Kubuntu too, which did have MultiSystem installed. I run "sh install-depot-multisystem.sh" and get "Syntax error: redirection unexpected". This worked literally two days ago. The only thing that has changed is that my face has grown some more facial hair and my head hurts from banging it against the wall over this issue. The OS is exactly the same and the script is the same, but now it won't install. I'm lost and really hoping someone can help. Append: just to add to this a bit, per https://lists.ubuntu.com/archives/ub...er/000264.html I needed to do a chmod 777 on the script. I'm still getting a syntax error on Kubuntu, but it did install successfully, so I'll mark this as resolved. Thanks anyway; I'll try to brush up on my Linux skills.

    Read the article

  • Advantages of SQL Backup Pro

    - by Grant Fritchey
    Getting backups of your databases in place is a fundamental issue for protection of the business. Yes, I said business, not data, not databases, but business. Because of a lack of good, tested backups, companies have gone completely out of business or suffered traumatic financial loss. That's just a simple fact (outlined with a few examples here). So you want to get backups right. That's a big part of why we make Red Gate SQL Backup Pro work the way it does. Yes, you could just use native backups, but you'll be missing a few advantages that we provide over and above what you get out of the box from Microsoft. Let's talk about them.

    Guidance

    If you're a hard-core DBA with 20+ years of experience on every version of SQL Server and several other data platforms besides, you may already know what you need in order to get a set of tested backups in place. But if you're not, maybe a little help would be a good thing. To set up backups for your servers, we supply a wizard that will step you through the entire process. It will also act to guide you down good paths. For example, if your databases are in Full Recovery, you should set up transaction log backups to run on a regular basis. When you choose a transaction log backup from the Backup Type, you'll see that only those databases that are in Full Recovery will be listed. This makes it very easy to be sure you have a log backup set up for all the databases you should, and for none of the databases where you won't be able to. There are other examples of guidance throughout the product. If you have the responsibility of managing backups but very little knowledge or time, we can help you out. Throughout the software you'll notice little green question marks; you can see two in the screen above and more in each of the screens in other topics below this one. Clicking on these will open a window with additional information about the topic in question, which should help to guide you through some of the tougher decisions you may have to make while setting up your backup jobs.

    Backup Copies

    As part of the wizard you can choose to make a copy of your backup on your network. This process runs as part of the Red Gate SQL Backup engine. It copies your backup to the network location you define after the backup completes, so it doesn't cause any additional blocking or resource use within the backup process. Creating a copy acts as a mechanism of protection for your backups. You can then back up that copy or do other things with it, all without affecting the original backup file. This requires either an additional backup or additional scripting to get it done within the native Microsoft backup engine.

    Offsite Storage

    Red Gate offers you the ability to immediately copy your backup to the cloud as a further, off-site protection of your backups. It's a service we provide and expose through the Backup wizard. Your backup will complete first, just like with the network backup copy, then an asynchronous process will copy that backup to cloud storage. Again, this is built right into the wizard, and even into the command line calls to SQL Backup, so it's part of a single process within your system. With native backup you would need to write additional scripts, possibly outside of T-SQL, to make this happen. Before you can use this with your backups you'll need to do a little setup, but it's built right into the product: you'll be directed to the web site for our hosted storage where you can set up an account.

    Compression

    If you have SQL Server 2008 Enterprise, or you're on SQL Server 2008 R2 or greater with a Standard or Enterprise license, then you have backup compression. It's built right in and works well. But if you need even more compression, you might want to consider Red Gate SQL Backup Pro. We offer four levels of compression within the product, so you can get a little compression faster, or sacrifice some CPU time and get even more compression. You decide. For a simple example, I backed up AdventureWorks2012 using both methods of compression. The resulting file from native was 53 MB; our file was 33 MB. That's a file that is smaller by 38%, not a small number when we start talking gigabytes. We even provide guidance here to help you determine which level of compression would be right for you and your system. So for this test, if you wanted maximum compression with minimum CPU use, you'd probably want to go with Level 2, which gets you almost as much compression as Level 3 but uses fewer resources. And that compression is still better than the native one by 10%.

    Restore Testing

    Backups are vital. But a backup is just a file until you restore it. How do you know that you can restore that backup? Of course, you'll use CHECKSUM to validate that what was read from disk during the backup process is what gets written to the backup file. You'll also use VERIFYONLY to check that the backup header and the checksums on the backup file are valid. But this doesn't do a complete test of the backup; the only complete test is a restore. So what you really need is a process that tests your backups. This is something you'll have to schedule separately from your backups, but we provide a couple of mechanisms to help you out here. First, when you create a backup schedule (all done through our wizard, which gives you as much guidance as you get when running backups), you get the option of creating a reminder to create a job to test your restores. You can enable or disable this as you choose when creating your scheduled backups. Once you're ready to schedule test restores for your databases, we have a wizard for this as well. After you choose the databases and restores you want to test, all configurable for automation, you get to decide whether you're going to restore to a specified copy or to the original database. If you're doing your tests on a new server (probably the best choice), you can just overwrite the original database if it's there. If not, you may want to create a new database each time you test your restores. Another part of validating your backups is ensuring that they can pass consistency checks, so we have DBCC built right into the process. You can even decide how you want DBCC run: which error messages to include, and whether to limit or add to the checks being run. With this you could offload some DBCC checks from your production system, so that you only run the physical checks on your production box but run the full check on this backup. That makes backup testing not just a general safety process, but a performance enhancer as well. Finally, you can delete the database once the tests pass, leave it in place, or delete it regardless of whether the tests pass. All this is automated and scheduled through the SQL Agent job on your servers. Running your databases through this process will ensure that you don't just have backups, but that you have tested backups.

    Single Point of Management

    If you have more than one server to maintain, getting backups set up can be a tedious process. But with Red Gate SQL Backup Pro you can connect to multiple servers and then manage all your databases' backups from a single location. You'll be able to see what is scheduled, what has run successfully and what has failed, all from a single interface, without having to connect to different servers.

    Log Shipping Wizard

    If you want to set up log shipping as part of a disaster recovery process, it can frequently be a pain to get configured correctly. We supply a wizard that will walk you through every step of the process, including setting up alerts so you'll know should your log shipping fail.

    Summary

    You want to get your backups right. As outlined above, Red Gate SQL Backup Pro will absolutely help you there. We supply a number of processes and functionalities above and beyond what you get with SQL Server native. Plus, with our guidance, hints and reminders, you will get your backups set up in a way that protects your business.

    Read the article

  • Calling Python with system() in R to run a Python script, emulating the Python console

    - by Yihui
    I want to pass a chunk of Python code to Python in R with something like system('python ...'), and I'm wondering if there is an easy way to emulate the Python console in this case. For example, suppose the code is "print 'hello world'"; how can I get output like this in R?

        >>> print 'hello world'
        hello world

    This only shows the output:

        > system("python -c 'print \"hello world\"'")
        hello world

    Thanks! BTW, I asked in r-help but have not got a response yet (if I do, I'll post the answer here).

    Read the article

  • How to organize SQL script files

    - by Mehper C. Palavuzlar
    We have an Oracle 10g database (a huge one) in our company, and I provide employees with data upon request. My problem is that I save almost every SQL query I write, and now my list has grown too long. I want to organize and rename these .sql files so that I can find the one I want easily. At the moment I'm using some folders named Sales Dept, Field Team, Planning Dept, Special, etc., and under those folders there are .sql files like Delivery_sales_1, Delivery_sales_2, ... Sent_sold_lostsales_endpoints, ... Sales_provinces_period, Returnrates_regions_bymonths, ... Jack_1, Steve_1, Steve_2, ... I try to name the files according to their content, but this makes the file names long and still does not completely meet my needs. Sometimes someone comes and demands a special report, and I name the file after that person, but this is also not ideal. I know duplicates and very similar files are accumulating over time, but I don't have control over them. Can you point me in the right direction for renaming all these files and folders and organizing my queries for easier and better control? TIA.

    Read the article

  • $this->headLink() includes duplicated script

    - by terrani
    Hi, just like I did before, I used the following code for my new project:

        <?=$this->headLink()->appendStylesheet('/Layouts/admin/css/button.css');?>
        <?=$this->headLink()->appendStylesheet('/Layouts/admin/css/inputText.css');?>
        <?=$this->headLink()->appendStylesheet('/Layouts/admin/css/fancyTable.class.css');?>

    When I open the website and view the source, there are duplicated CSS link tags:

        <link href="/Layouts/admin/css/button.css" media="screen" rel="stylesheet" type="text/css" >
        <link href="/Layouts/admin/css/button.css" media="screen" rel="stylesheet" type="text/css" >
        <link href="/Layouts/admin/css/inputText.css" media="screen" rel="stylesheet" type="text/css" >
        <link href="/Layouts/admin/css/button.css" media="screen" rel="stylesheet" type="text/css" >
        <link href="/Layouts/admin/css/inputText.css" media="screen" rel="stylesheet" type="text/css" >
        <link href="/Layouts/admin/css/fancyTable.class.css" media="screen" rel="stylesheet" type="text/css" >
        <link href="/Layouts/admin/css/button.css" media="screen" rel="stylesheet" type="text/css" >
        <link href="/Layouts/admin/css/inputText.css" media="screen" rel="stylesheet" type="text/css" >
        <link href="/Layouts/admin/css/fancyTable.class.css" media="screen" rel="stylesheet" type="text/css" >
        <link href="/Layouts/admin/css/divine.css" media="screen" rel="stylesheet" type="text/css" >

    What is going on with my code?

    Read the article

  • jQuery UI function errors out: Object is not a property or method

    - by Luke101
    In the following code, the autocomplete call fails with an error saying the object is not a property or method. Here is the code:

        <title><%= ViewData["pagetitle"] + " | " + config.Sitename.ToString() %></title>
        <script src="../../Scripts/jqueryui/jquery-ui-1.8.1.custom/development-bundle/ui/minified/jquery.ui.core.min.js" type="text/javascript"></script>
        <script src="../../Scripts/jqueryui/jquery-ui-1.8.1.custom/development-bundle/ui/minified/jquery.ui.core.min.js" type="text/javascript"></script>
        <script src="../../Scripts/jqueryui/jquery-ui-1.8.1.custom/development-bundle/ui/jquery.ui.widget.js" type="text/javascript"></script>
        <script src="../../Scripts/jqueryui/jquery-ui-1.8.1.custom/development-bundle/ui/jquery.ui.position.js" type="text/javascript"></script>
        <script src="../../Scripts/jqueryui/jquery-ui-1.8.1.custom/development-bundle/ui/jquery.ui.autocomplete.js" type="text/javascript"></script>
        <script language="javascript" type="text/javascript" src="/Scripts/main.js"></script>
        <script language="javascript" type="text/javascript">
          $(document).ready(function () {
            Categories();
            $('#tags1').autocomplete({ // error here
              url: '/Tag/TagAutoComplete',
              width: 320,
              max: 4,
              delay: 30,
              cacheLength: 1,
              scroll: false,
              highlight: false
            });
          });
        </script>
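
    Two things worth checking (hedged, since only the snippet above is visible): jQuery itself does not appear among the includes, and jquery.ui.core.min.js is referenced twice. Also, options such as url, max and cacheLength belong to the older standalone Autocomplete plugin; jQuery UI 1.8's widget expects a source option instead. A minimal sketch of the jQuery UI 1.8 form (the endpoint is the one from the question; minLength is an assumed value):

    ```javascript
    // jQuery UI 1.8 autocomplete: `source` can be a URL returning JSON
    // (an array of strings or of {label, value} objects), an array, or a function.
    $(document).ready(function () {
      $('#tags1').autocomplete({
        source: '/Tag/TagAutoComplete',
        minLength: 2,
        delay: 30
      });
    });
    ```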

    Read the article

  • Bash/shell script - shell output redirection inside a function

    - by Josh
        function grabSourceFile {
          cd /tmp/lmpsource
          wget $1 > $LOG
          baseName=$(basename $1)
          tar -xvf $baseName > $LOG
          cd $baseName
        }

    When I call this function, the captured output does not go to the log file. The output redirection works fine until I call the function. The $LOG variable is set at the top of the file. I tried echoing statements and they would not print. I am guessing the function captures the output itself? If so, how do I push the output to the file instead of the console? (The wget above prints to the console, while an echo inside the function does nothing.)

    Read the article

  • "Could not convert JavaScript argument arg 0" nsresult: "0x80570009 (NS_ERROR_XPC_BAD_CONVERT_JS)"

    - by Drahcir
    I am trying to make this captcha jQuery plugin work. When a certain line of code is executed, the error pops up. This is the line that causes the error:

        $(".ajax-fc-" + rand).draggable({ containment: '#ajax-fc-content' });

    What I am assuming is that there is some kind of conflict between the JavaScript references, but I can't determine what. These are the references that I am using:

        <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js"></script>
        <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jqueryui/1.7.2/jquery-ui.min.js"></script>
        <script src="js/ui.core.js"></script>
        <script src="js/ui.draggable.js"></script>
        <script src="js/ui.droppable.js"></script>
        <script src="js/effects.core.js"></script>
        <script src="js/effects.slide.js"></script>
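
    One hedged observation: jquery-ui.min.js from the Google CDN already bundles ui.core, ui.draggable and ui.droppable, so the separate js/ui.*.js files load a second (possibly different-version) copy of jQuery UI over the first, and that kind of double-loading can trigger XPCOM conversion errors like this one in Firefox. A quick console check, assuming the conflict is the cause:

    ```javascript
    // Log the loaded versions; if jQuery UI reports an unexpected version, the
    // standalone ui.*.js files are probably overwriting the CDN bundle.
    console.log('jQuery', $.fn.jquery, '- jQuery UI', $.ui && $.ui.version);

    // The usual fix is to keep either the bundled jquery-ui.min.js or the
    // individual ui.core.js / ui.draggable.js / ui.droppable.js files, not both,
    // and then retest the draggable call from the question.
    ```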

    Read the article

  • PHP form script error

    - by Alex
    Hi, I have created a rather large HTML form and I would like the data to be sent to my email address. I am using the POST method and thought my PHP was up to snuff. However, I now get the following error upon submission: Parse error: syntax error, unexpected '}' in C:\www\mo\marinecforum\send_form_application.php on line 90. I am having a hell of a time with this. Beyond the error above, I wonder if there is a better way to approach this? Here is the PHP: http://pastebin.com/MKUcgihg Many thanks, Alex

    Read the article

  • A question about ActionScript for Flex: getting unique elements from an ArrayCollection

    - by Nirmal Singh Raja Reegan
    Hi, I have an ArrayCollection as shown below:

        private var initDG:ArrayCollection = new ArrayCollection([
          {fact: "Order #2314", appName: "AA"},
          {fact: "Order #2315", appName: "BB"},
          {fact: "Order #2316", appName: "BB"},
          ...
          {fact: "Order #2320", appName: "CC"},
          {fact: "Order #2321", appName: "CC"}
        ]);

    I want to populate a ComboBox with the unique values of the "appName" field from the ArrayCollection initDG:

        <mx:ComboBox id="appCombo" dataProvider="{initDG}" labelField="appName"/>

    One method I can think of is to loop through the array objects and, for each object, check and push unique appName entries into another Array. Is there a better solution available?
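
    Language aside, the usual approach is a single pass that records each appName already seen. A sketch of the idea in plain JavaScript (the ActionScript 3 version would look almost identical; the data below just mirrors the question's sample):

    ```javascript
    // One pass over the records, keeping the first occurrence of each appName.
    var records = [
      { fact: 'Order #2314', appName: 'AA' },
      { fact: 'Order #2315', appName: 'BB' },
      { fact: 'Order #2316', appName: 'BB' },
      { fact: 'Order #2320', appName: 'CC' }
    ];

    function uniqueAppNames(items) {
      var seen = {};    // appName -> true once encountered
      var result = [];
      for (var i = 0; i < items.length; i++) {
        var name = items[i].appName;
        if (!seen[name]) {
          seen[name] = true;
          result.push(name);
        }
      }
      return result;
    }

    console.log(uniqueAppNames(records)); // ["AA", "BB", "CC"]
    ```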

    Read the article

  • Can't find AddOverloads.xsl used in Scott Hanselman's script

    - by asksuperuser
    In http://www.hanselman.com/blog/content/binary/sandcastledoc.ps1.txt the command XslTransform "$path\ProductionTransforms\AddOverloads.xsl" reflection.org uses AddOverloads.xsl, but in the latest Sandcastle ProductionTransforms directory I can only find: FixScriptSharp.xsl, MergeDuplicates.xsl, MergeHxF.xsl, ReflectionToCDocML.xsl, ReflectionToChmIndex.xsl, ReflectionToChmProject.xsl, ReflectionToManifest.xsl, TocToChmContents.xsl, TocToHxsContents.xsl, Vs2005TocToDsToc.xsl, AddFriendlyFilenames.xsl, AddGuidFilenames.xsl, AddXamlSyntaxData.xsl, ApplyPrototypeDocModel.xsl, ApplyVSDocModel.xsl, CreateHxC.xsl, CreateHxt.xsl, CreatePrototypeToc.xsl, CreateVSToc.xsl, DsManifestToManifest.xsl, DsTocToManifest.xsl, DsTocToSitemap.xsl, DsTocToToc.xsl. Which one should I use instead?

    Read the article

  • display image and script for a set period of time

    - by Ryan Max
    This is very similar to a question I asked the other day, but my page code has become significantly more complicated and I need to revisit it. I've been using the following code:

        $('#myLink').click(function() {
          $('#myImg').attr('src', 'newImg.jpg');
          setTimeout(function() {
            $('#myImg').attr('src', 'oldImg.jpg');
          }, 15000);
        });

    This replaces an image for a set period of time (15 seconds) when the link is clicked, then reverts to the original image after 15 seconds. However, I'd now like to run a snippet of JavaScript as well when the link is clicked (in addition to replacing the image), and only when the link is clicked (it's related to the 15-second image), and then have that JS code disappear after the 15 seconds as well. I'm not sure how to have jQuery send JS code into the page. Basically I just want jQuery to "echo" this code onto the page underneath the 15-second image while it is there, but I don't know how jQuery formats this "echo". Does this question make sense?
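
    A minimal sketch of one way to read the request (the ids, the appended markup and the runTemporarySnippet helper are all made up): rather than echoing raw script text, append the markup the snippet needs, call the related function directly, and remove the markup in the same setTimeout that restores the image.

    ```javascript
    $('#myLink').click(function () {
      $('#myImg').attr('src', 'newImg.jpg');

      // Add the temporary markup under the image and run the extra code now.
      $('#snippetContainer').append('<div id="tempSnippet">temporary content</div>');
      runTemporarySnippet();   // assumed helper holding the extra JavaScript

      setTimeout(function () {
        $('#myImg').attr('src', 'oldImg.jpg');
        $('#tempSnippet').remove();   // the temporary markup goes away with the image
      }, 15000);
    });
    ```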

    Read the article

  • How to send users to their profile page on selecting a username (using a JSON autosuggest script)

    - by I Like PHP
    I'm using autosuggest with Ajax and JSON. Now, when a user selects a username, I want to send the user to the link for that username. My JSON data comes back in this form:

        { query:'hel', suggestions:["hello world","hell boy ","bac to hell"], data:["2","26","34"] }

    Now what I want is for the user to go to http://userProfile.php?uid=26 on selecting a username (suppose the user selects "hell boy"). How do I do this?
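
    This response shape matches the DevBridge-style jQuery Autocomplete plugins, which expose an onSelect callback that receives the chosen suggestion together with its entry from the data array. A hedged sketch assuming such a plugin is in use (option and callback names vary between plugins, and the endpoint URL is made up):

    ```javascript
    // Redirect to the profile page when a suggestion is picked.
    $('#username').autocomplete({
      serviceUrl: '/autosuggest.php',          // assumed endpoint returning the JSON above
      onSelect: function (suggestion, data) {  // data = matching id, e.g. "26" for "hell boy"
        window.location.href = 'userProfile.php?uid=' + encodeURIComponent(data);
      }
    });
    ```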

    Read the article

  • PHP cURL script that saves images (actually captcha images)

    - by user253530
    I have a curl class called Curl. Let's presume I have this code:

        $url = 'http://www.google.com';
        $fields = array('q' => 'search term'); // maybe some other arguments, but let's keep it simple
        $curl = new Curl();
        $page = $curl->post($url, $fields);

    $page will reference some images, which curl doesn't load by default. I need to know how I can save a specific image without using curl again. Once I use $page = $curl->post(...), I need to know how I can have that image saved without using another $curl->post(_image_location_) to get the file. The reason I need this is to save a captcha image from a form. I need to access the form and get the specific image that is being loaded. If I try to access the URL of the image, it will be a different captcha image.

    Read the article

  • Vim script to compile TeX source and launch PDF only if no errors

    - by Jeet
    Hi, I am switching to Vim for my LaTeX editing environment. I would like to be able to TeX the source file from within Vim, and launch an external viewer if the compile was successful. I know about the Vim-LaTeX suite but, if possible, would prefer to avoid using it: it is pretty heavyweight, hijacks a lot of my keys, and clutters up my vimruntime with a lot of files. Here is what I have now:

        if exists('b:tex_build_mapped')
          finish
        endif
        " use maparg or mapcheck to see if key is free
        command! -buffer -nargs=* BuildTex call BuildTex(0, <f-args>)
        command! -buffer -nargs=* BuildAndViewTex call BuildTex(1, <f-args>)
        noremap <buffer> <silent> <F9> <Esc>:call BuildTex(0)<CR>
        noremap <buffer> <silent> <S-F9> <Esc>:call BuildTex(1)<CR>
        let b:tex_build_mapped = 1

        if exists('g:tex_build_loaded')
          finish
        endif
        let g:tex_build_loaded = 1

        function! BuildTex(view_results, ...)
          write
          if filereadable("Makefile")
            " If Makefile is available in current working directory, run 'make' with arguments
            echo "(using Makefile)"
            let l:cmd = "!make ".join(a:000, ' ')
            echo l:cmd
            execute l:cmd
            if a:view_results && v:shell_error == 0
              call ViewTexResults()
            endif
          else
            let b:tex_flavor = 'pdflatex'
            compiler tex
            make %
            if a:view_results && v:shell_error == 0
              call ViewTexResults()
            endif
          endif
        endfunction

        function! ViewTexResults(...)
          if a:0 == 0
            let l:target = expand("%:p:r") . ".pdf"
          else
            let l:target = a:1
          endif
          if has('mac')
            execute "! open -a Preview ".l:target
          endif
        endfunction

    The problem is that v:shell_error is not set, even if there are compile errors. Any suggestions or insight on how to detect whether a compile was successful or not would be greatly appreciated! Thanks!

    Read the article

  • HTML text input and using the input as a variable in a script (Tcl) / SQL (SQLite)

    - by Fantastic Fourier
    Hello all, I'm very VERY new at this whole web thing, and I'm just very confused in general. Basically, what I want to do is take input via text using HTML and add that input to the database table trans. It should be simple, but I am lost.

        <li>Transaction Number</li>
        <li><input type=|text| name=|tnumber| </li> // do i need to use value?
        <li>Employee Name</li>
        <li><input type=|text| name=|ename| </li>
        <li><input type=|SUBMIT| value=|Add|></li>
        ......
        ......
        sqlite3 db $::env(ROOT)/database.db
        mb eval {INSERT INTO trans VALUES ($tnumber, $ename}
        mb close

    They are both in the same file, and there are only two fields in the database to keep things simple. What I can see here is that tnumber and ename aren't declared as variables. So how do I do that, so that the text input is assigned to the respective variables?

    Read the article

  • Deploy to a remote server using scp in a NAnt script

    - by Mini
    I am trying to copy a file to a remote server using the scp task in NAnt.Contrib. I have used the following code to do that:

        <target name="QADeploy" description="gthtyb">
          <loadtasks assembly="C:\nantcontrib-0.85\bin\NAnt.Contrib.Tasks.dll" />
          <echo message="htyh"/>
          <scp file="D:\SourceTest\redist.txt" server="\\10.4.30.19" user="xxx:uuuu">
          </scp>
        </target>

    But I am getting an error: scp failed to start. The system cannot find the file specified. So I downloaded pscp.exe and modified the code as below:

        <target name="QADeploy" description="gthtyb">
          <loadtasks assembly="C:\nantcontrib-0.85\bin\NAnt.Contrib.Tasks.dll" />
          <echo message="htyh"/>
          <scp file="D:\SourceTest\redist.txt" server="\\10.4.30.19" user="xxx:uuuu" program="C:\pscp\pscp.exe">
          </scp>
        </target>

    Now I am getting the following error: [scp] ssh_init: host does not exist. External Program Failed: C:\pscp\pscp.exe. Can you please help with the best way to copy a file to a remote server using NAnt? I am using this code to deploy files to a remote server. Thanks

    Read the article
