Search Results

Search found 29619 results on 1185 pages for 'external script'.

Page 171/1185

  • Function behaviour in a shell (ksh) script

    - by footy
    Here are two different versions of a program. The first program:

        #!/usr/bin/ksh
        printmsg() {
            i=1
            print "hello function :)";
        }
        i=0;
        echo I printed `printmsg`
        printmsg
        echo $i

    Output:

        # ksh e
        I printed hello function :)
        hello function :)
        1

    And the second program:

        #!/usr/bin/ksh
        printmsg() {
            i=1
            print "hello function :)";
        }
        i=0;
        echo I printed `printmsg`
        echo $i

    Output:

        # ksh e
        I printed hello function :)
        0

    The only difference between the two programs is that printmsg is called twice in the first program, while it is called once in the second. My doubt arises here. To quote: "Be warned: Functions act almost just like external scripts... except that by default, all variables are SHARED between the same ksh process! If you change a variable name inside a function... that variable's value will still be changed after you have left the function!!" But we can clearly see in the second program's output that the value of i remains unchanged, even though we are sure that the function is called, because the echo statement gets the output of the function and prints it. So why is the output different in the two programs?
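
    A minimal demo of the likely cause (my own illustration, not from the post): command substitution with backticks runs the function in a subshell, so assignments made there never reach the parent shell, while a direct call shares its variables.

        #!/usr/bin/ksh
        setvar() { i=1; }

        i=0
        out=`setvar`                      # runs setvar in a subshell; the parent's i is untouched
        echo "after substitution: i=$i"   # prints 0
        setvar                            # runs in the current shell; i is shared
        echo "after direct call:   i=$i"  # prints 1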

  • How can I make check_nrpe wait for my remote script to finish executing?

    - by Rauffle
    I have a Python script that's being used as a plugin for NRPE. This script checks to see if a process is running on a virtual machine by doing an SSH one-liner with a "ps ax | grep process" attached. When executing the script manually, it works as expected and returns a single line of output for NRPE, as well as a status based on whether or not the process is running. When I attempt to run the command set up to execute this script (from my Nagios server), I instantly get the output "NRPE: Unable to read output"; however, when I run the script manually it takes about a second before it returns output. Other commands run just fine, so it would seem like NRPE needs to wait a second or two for output rather than instantly failing, but I've been unable to find any way of accomplishing this; any tips? Thanks. PS: The virtual machines are not accessible from anywhere other than the host machine, hence the need for the NRPE plugin to SSH from the host into the VM to check the process.
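
    Not part of the original question, but both ends of NRPE have a timeout knob, and those are what usually govern timing-related "Unable to read output" failures; a hedged sketch of the two settings (the 30/60-second values and the command name are assumptions):

        # On the Nagios server: allow the check more time (-t is check_nrpe's timeout, in seconds)
        define command {
            command_name    check_vm_process
            command_line    $USER1$/check_nrpe -H $HOSTADDRESS$ -t 30 -c check_vm_process
        }

        # On the monitored host, in nrpe.cfg: raise the plugin timeout as well
        command_timeout=60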

  • How can I kill and wait for background processes to finish in a shell script when I Ctrl+C it?

    - by slipheed
    I'm trying to set up a shell script so that it runs background processes, and when I Ctrl+C the shell script, it kills the children, then exits. The best that I've managed to come up with is this. It appears that kill 0 -INT also kills the script before the wait happens, so the shell script dies before the children complete. Any ideas on how I can make this shell script wait for the children to die after sending INT?

        #!/bin/bash
        trap 'killall' INT

        killall() {
            echo **** Shutting down... ****
            kill 0 -INT
            wait    # Why doesn't this wait??
            echo DONE
        }

        process1 &
        process2 &
        process3 &
        cat    # wait forever
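
    A hedged sketch of one common fix (my own, not from the post): have the handler ignore further signals itself before signalling the process group, so the script survives long enough to wait for its children. Note that background children of a non-interactive shell often start with SIGINT ignored, which is why this sketch falls back to TERM.

        #!/bin/bash
        cleanup() {
            echo '**** Shutting down... ****'
            trap '' INT TERM   # keep this handler alive through the signals it is about to send
            kill -TERM 0       # signal the whole process group; children may ignore INT
            wait               # now actually wait for the children to exit
            echo DONE
        }
        trap cleanup INT

        process1 &
        process2 &
        process3 &
        wait               # instead of `cat`: returns after Ctrl+C and cleanup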

  • How to make a script to automate rebooting ESXi VMs

    - by user116374
    I'm using ESXi 5.0 and I have created a lab for Metasploit. Whenever a system gets exploited I have to restart my VMs, which is too much of a headache, because I have to execute lots of exploits and I don't want to go through the same restart process again and again. Maybe it is possible through a script, but again I'm stuck: I don't know how to write a script for this process. Please tell me how to write a script that automates the restart process every 10 to 20 minutes, and how to execute that script. Please share your knowledge if you know; this script would be very useful for me. And also tell me any other way if you know one. Ty. Alen.
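
    A rough sketch of the kind of loop that could run in the ESXi shell (or over SSH) using vim-cmd; the 20-minute interval is an assumption, and power.reset is a hard reset, so treat this as illustrative only:

        #!/bin/sh
        # Reboot every registered VM, then sleep 20 minutes, forever.
        while true; do
            for vmid in $(vim-cmd vmsvc/getallvms | awk 'NR>1 {print $1}'); do
                vim-cmd vmsvc/power.reset "$vmid"   # power.reboot is gentler but needs VMware Tools
            done
            sleep 1200
        done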

  • How to add a shutdown script (not by using gpedit.msc or active directory)?

    - by Francis
    I have created a script I want to deploy on my XP workstations as a shutdown script. I know I can add my script as a shutdown script with the UI (gpedit.msc), but I want to automate the deployment of my script. My workstations are not part of a Windows domain. I will deploy with OCS Inventory. I tried to add entries to the Windows registry, but this doesn't work: I don't see what I added when I run gpedit.msc, and if I add something with gpedit.msc, it seems to overwrite what I added manually into the registry.
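
    One avenue worth testing (an assumption on my part, not something established in the question): the local policy editor records machine shutdown scripts in a scripts.ini file under the hidden GroupPolicy folder, so a deployment tool that writes that file may achieve the same result as clicking through gpedit.msc. A sketch, with a placeholder script path:

        ; %SystemRoot%\System32\GroupPolicy\Machine\Scripts\scripts.ini
        [Shutdown]
        0CmdLine=C:\Scripts\my-shutdown.cmd
        0Parameters=

    The Version value in %SystemRoot%\System32\GroupPolicy\gpt.ini may also need to be incremented before the change is picked up.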

  • How to Script a backup for each database on an MSSQL Engine?

    - by Geo
    We need to back up 40 databases inside an MS SQL Server engine. We back up each database with the following script:

        BACKUP DATABASE [dbname1] TO DISK = N'J:\SQLBACKUPS\dbname1.bak' WITH NOFORMAT, INIT,
            NAME = N'dbname1-Full Database Backup', SKIP, NOREWIND, NOUNLOAD, STATS = 10
        GO
        declare @backupSetId as int
        select @backupSetId = position from msdb..backupset
        where database_name=N'dbname1'
          and backup_set_id=(select max(backup_set_id) from msdb..backupset where database_name=N'dbname1' )
        if @backupSetId is null
        begin
            raiserror(N'Verify failed. Backup information for database ''dbname1'' not found.', 16, 1)
        end
        RESTORE VERIFYONLY FROM DISK = N'J:\SQLBACKUPS\dbname1.bak' WITH FILE = @backupSetId, NOUNLOAD, NOREWIND
        GO

    We would like to add to the script the functionality of taking each database and substituting it into the script above. Basically, a script that will create and verify each database backup from an engine. I am looking for something like this:

        For each database in database-list
            sp_backup(database)   -- this is the call to the script above
        End For

    Any ideas?
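
    A minimal sketch of one way to drive that per-database logic with a cursor over sys.databases (the backup path and the choice to skip system databases are assumptions):

        DECLARE @name sysname, @path nvarchar(260);

        DECLARE db_cursor CURSOR FOR
            SELECT name FROM sys.databases
            WHERE name NOT IN ('master', 'model', 'msdb', 'tempdb')
              AND state_desc = 'ONLINE';

        OPEN db_cursor;
        FETCH NEXT FROM db_cursor INTO @name;
        WHILE @@FETCH_STATUS = 0
        BEGIN
            SET @path = N'J:\SQLBACKUPS\' + @name + N'.bak';

            BACKUP DATABASE @name TO DISK = @path
                WITH NOFORMAT, INIT, NAME = @name, SKIP, NOREWIND, NOUNLOAD, STATS = 10;

            RESTORE VERIFYONLY FROM DISK = @path WITH NOUNLOAD, NOREWIND;

            FETCH NEXT FROM db_cursor INTO @name;
        END

        CLOSE db_cursor;
        DEALLOCATE db_cursor;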

  • My PowerShell script won't save a file when run using Task Scheduler; do I need to specify a specific argument?

    - by EGr
    I have a script that downloads a temporary Excel file, copies parts of it to a new file, and saves it to a specific location on the network. The problem I'm having is that the new file is never created/saved. If I run the script locally (through cmd.exe, PowerShell, or PowerShell ISE), it WILL save the file locally or to the network. If I try running the script via a schedule or on-demand via Task Scheduler, the temporary file is created, but the final document is never created or saved. Is there a specific argument I need to pass, or anything I could be doing wrong? This is the command I'm currently using:

        powershell.exe -file C:\path\to\my\powershell\script\thescript.ps1

    Since it calls environment variables and other variables relative to the script's position, I also set "Start in" to C:\path\to\my\powershell\script\
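
    Not from the original post, but two hedged things that often matter here: the scheduled task's arguments, and (only if the script drives Excel through COM) a missing Desktop folder for the profile the task runs under, which is a widely reported cause of "works interactively, fails under Task Scheduler":

        # Task Scheduler "Program/script":  powershell.exe
        # "Add arguments" (assumed flags):
        #   -NoProfile -ExecutionPolicy Bypass -File "C:\path\to\my\powershell\script\thescript.ps1"

        # Workaround for Excel COM automation in non-interactive sessions (an assumption,
        # relevant only if the script uses the Excel.Application COM object):
        New-Item -ItemType Directory -Force -Path 'C:\Windows\System32\config\systemprofile\Desktop'
        New-Item -ItemType Directory -Force -Path 'C:\Windows\SysWOW64\config\systemprofile\Desktop'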

  • How to make Ubuntu recognize an unknown external display (so I can adjust its resolution)?

    - by WagnerAA
    I have a Dell laptop with an external monitor attached (a Samsung SyncMaster 931c). My laptop display was recognized, and I can adjust its optimum resolution. My external display is still unknown, thus I'm stuck at a lower resolution (1024x768). I tried the "Detect Displays" button, but it didn't work; nothing happens. I recently upgraded from Ubuntu 12.04 to 12.10. Things were working before. I don't know if I can actually change this configuration, or if this is a bug. I searched for an answer here and also on Launchpad's website, but found none. I even tried to install Nvidia drivers, and just messed things up. It seems I wasn't even using nvidia before, as I guessed by looking at my additional drivers configuration. My laptop has an Intel chipset, I guess:

        $ dpkg --get-selections | grep -i -e nvidia -e intel
        intel-gpu-tools              install
        libdrm-intel1:amd64          install
        libdrm-intel1:i386           install
        nvidia-common                install
        xserver-xorg-video-intel     install

    I don't have an xorg.conf file (I think this is nvidia related, am I right?):

        $ cat /etc/X11/xorg.conf
        cat: /etc/X11/xorg.conf: No such file or directory
        $ ls -l /etc/X11/
        total 76
        drwxr-xr-x 2 root root  4096 Out 19 23:41 app-defaults
        drwxr-xr-x 2 root root  4096 Abr 25  2012 cursors
        -rw-r--r-- 1 root root    18 Abr 25  2012 default-display-manager
        drwxr-xr-x 4 root root  4096 Abr 25  2012 fonts
        -rw-r--r-- 1 root root 17394 Dez  3  2009 rgb.txt
        lrwxrwxrwx 1 root root    13 Mai  1 03:33 X -> /usr/bin/Xorg
        drwxr-xr-x 3 root root  4096 Out 19 23:41 xinit
        drwxr-xr-x 2 root root  4096 Jan 23  2012 xkb
        -rw-r--r-- 1 root root     0 Out 24 08:55 xorg.conf.nvidia-xconfig-original
        -rwxr-xr-x 1 root root   709 Abr  1  2010 Xreset
        drwxr-xr-x 2 root root  4096 Out 19 10:08 Xreset.d
        drwxr-xr-x 2 root root  4096 Out 19 10:08 Xresources
        -rwxr-xr-x 1 root root  3730 Jan 20  2012 Xsession
        drwxr-xr-x 2 root root  4096 Out 20 00:11 Xsession.d
        -rw-r--r-- 1 root root   265 Jul  1  2008 Xsession.options
        -rw-r--r-- 1 root root    13 Ago 15 06:43 XvMCConfig
        -rw-r--r-- 1 root root   601 Abr 25  2012 Xwrapper.config

    Here is some information I gathered by looking at other related posts:

        $ sudo lshw -C display; lsb_release -a; uname -a
        *-display:0
             description: VGA compatible controller
             product: Mobile 4 Series Chipset Integrated Graphics Controller
             vendor: Intel Corporation
             physical id: 2
             bus info: pci@0000:00:02.0
             version: 07
             width: 64 bits
             clock: 33MHz
             capabilities: msi pm vga_controller bus_master cap_list rom
             configuration: driver=i915 latency=0
             resources: irq:48 memory:f6800000-f6bfffff memory:d0000000-dfffffff ioport:1800(size=8)
        *-display:1 UNCLAIMED
             description: Display controller
             product: Mobile 4 Series Chipset Integrated Graphics Controller
             vendor: Intel Corporation
             physical id: 2.1
             bus info: pci@0000:00:02.1
             version: 07
             width: 64 bits
             clock: 33MHz
             capabilities: pm bus_master cap_list
             configuration: latency=0
             resources: memory:f6100000-f61fffff
        LSB Version:    core-2.0-amd64:core-2.0-noarch:core-3.0-amd64:core-3.0-noarch:core-3.1-amd64:core-3.1-noarch:core-3.2-amd64:core-3.2-noarch:core-4.0-amd64:core-4.0-noarch:cxx-3.0-amd64:cxx-3.0-noarch:cxx-3.1-amd64:cxx-3.1-noarch:cxx-3.2-amd64:cxx-3.2-noarch:cxx-4.0-amd64:cxx-4.0-noarch:desktop-3.1-amd64:desktop-3.1-noarch:desktop-3.2-amd64:desktop-3.2-noarch:desktop-4.0-amd64:desktop-4.0-noarch:graphics-2.0-amd64:graphics-2.0-noarch:graphics-3.0-amd64:graphics-3.0-noarch:graphics-3.1-amd64:graphics-3.1-noarch:graphics-3.2-amd64:graphics-3.2-noarch:graphics-4.0-amd64:graphics-4.0-noarch:printing-3.2-amd64:printing-3.2-noarch:printing-4.0-amd64:printing-4.0-noarch:qt4-3.1-amd64:qt4-3.1-noarch
        Distributor ID: Ubuntu
        Description:    Ubuntu 12.10
        Release:        12.10
        Codename:       quantal
        Linux Batcave 3.5.0-17-generic #28-Ubuntu SMP Tue Oct 9 19:31:23 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

        $ xrandr -q
        Screen 0: minimum 320 x 200, current 2304 x 800, maximum 32767 x 32767
        LVDS1 connected 1280x800+0+0 (normal left inverted right x axis y axis) 286mm x 1790mm
           1280x800       59.9*+
           1024x768       60.0
           800x600        60.3     56.2
           640x480        59.9
        VGA1 connected 1024x768+1280+32 (normal left inverted right x axis y axis) 0mm x 0mm
           1024x768       60.0*
           800x600        60.3     56.2
           848x480        60.0
           640x480        59.9
        DP1 disconnected (normal left inverted right x axis y axis)

    If there's anything else I can do, or any other information I can post here to help me configure this external display, please let me know. If this is actually a bug, I apologize (I know bugs are not allowed here), but I really wasn't sure, and I will promptly file a bug report on Launchpad if that's the case. Thanks a lot in advance. ;)
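
    A hedged sketch of the usual xrandr workaround when a monitor's EDID isn't picked up: add the missing mode by hand. The 1280x1024@60 figures below come from cvt and are an assumption about the SyncMaster 931c's native resolution; VGA1 is the output name shown in the xrandr listing above.

        # Generate a modeline for the panel's native resolution
        cvt 1280 1024 60
        # Register the modeline (copied from cvt's output) and attach it to the external output
        xrandr --newmode "1280x1024_60.00" 109.00 1280 1368 1496 1712 1024 1027 1034 1063 -hsync +vsync
        xrandr --addmode VGA1 "1280x1024_60.00"
        xrandr --output VGA1 --mode "1280x1024_60.00"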

  • Configuring correct port for Oozie (invoking PIG script) in Cloudera Hue

    - by user2985324
    I am new to the CDH4 Oozie workflow editor. While trying to invoke a Pig script from the Oozie workflow editor, I am getting the following error:

        HadoopAccessorException: E0900: Jobtracker [mymachine:8032] not allowed, not in Oozies whitelist

    It looks like Oozie is submitting the job to the YARN port (8032). I want it to submit to port 8021 (the MR JobTracker). Can someone help me identify where to set the JobTracker URL or port so that Oozie picks up the correct one (using Hue or Cloudera Manager)? Previously I tried the following, but none of it helped:

    1. Modified the /user/hue/oozie/workspaces/../workflow.xml file. However, it gets overwritten when I submit the job from the workflow editor.

    2. In Cloudera Manager, under Oozie, Configuration, Oozie Server (Advanced), in the "Oozie Server Configuration Safety Valve for oozie-site.xml" property, I set the following and restarted the Oozie service:

        <property>
            <name>oozie.service.HadoopAccessorService.nameNode.whitelist</name>
            <value>mymachine:8020</value>
        </property>
        <property>
            <name>oozie.service.HadoopAccessorService.jobTracker.whitelist</name>
            <value>mymachine:8021</value>
        </property>

    3. Tried to override the 'jobTracker' property while configuring the Pig task. This appears as follows in the workflow file; however, it doesn't take effect (or doesn't override) and still uses port 8032:

        <global>
            <configuration>
                <property>
                    <name>jobTracker</name>
                    <value>mymachine:8021</value>
                </property>
            </configuration>
        </global>

    I am using CDH4. Thanks for looking into my question.

  • jquery.ui.draggable.js and jquery.ui.widget.js conflict

    - by Daniel S
    Hello, I had a working application which uses a jQuery UI dialog. I wanted to make the dialog draggable. As far as I know, the only thing needed is the jquery.ui.draggable.js script, so I added it to the scripts I am using, but now I get the following error (as shown in the Firebug console):

        base is not a constructor

    The relevant line in jquery.ui.widget.js is:

        var basePrototype = new base();

    This is how I am adding all the scripts:

        <script type="text/javascript" src="/media/development-bundle/jquery-1.4.2.js"></script>
        <script type="text/javascript" src="/media/development-bundle/ui/jquery.ui.core.js"></script>
        <script type="text/javascript" src="/media/development-bundle/ui/jquery.ui.widget.js"></script>
        <script type="text/javascript" src="/media/development-bundle/ui/jquery.ui.draggable.js"></script>
        <script type="text/javascript" src="/media/development-bundle/ui/jquery.ui.position.js"></script>
        <script type="text/javascript" src="/media/development-bundle/ui/jquery.ui.autocomplete.js"></script>
        <script type="text/javascript" src="/media/development-bundle/ui/jquery.ui.dialog.js"></script>

    Am I doing something wrong, or is this a problem with jQuery? Thanks in advance for any help.
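
    If I remember the jQuery UI development bundle correctly (worth verifying against the copy in /media/development-bundle), draggable also depends on jquery.ui.mouse.js, so the include list would gain one line between widget and draggable:

        <script type="text/javascript" src="/media/development-bundle/ui/jquery.ui.core.js"></script>
        <script type="text/javascript" src="/media/development-bundle/ui/jquery.ui.widget.js"></script>
        <script type="text/javascript" src="/media/development-bundle/ui/jquery.ui.mouse.js"></script>
        <script type="text/javascript" src="/media/development-bundle/ui/jquery.ui.draggable.js"></script>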

  • MSBuild Script and VS2010 publish apply Web.config Transform

    - by Jason
    So, I have VS 2010 installed and am in the process of modifying my MSBuild script for our TeamCity build integration. Everything is working great, with one exception: how can I tell MSBuild that I want to apply the Web.config transform files that I've created when I publish the build? I have the following, which produces the compiled web site, but it outputs the Web.config, Web.Debug.config and Web.Release.config files (all 3) to the compiled output directory. In Studio, when I perform a publish to the file system, it does the transform and only outputs the Web.config with the appropriate changes.

        <Target Name="CompileWeb">
            <MSBuild Projects="myproj.csproj" Properties="Configuration=Release;" />
        </Target>

        <Target Name="PublishWeb" DependsOnTargets="CompileWeb">
            <MSBuild Projects="myproj.csproj"
                     Targets="ResolveReferences;_CopyWebApplication"
                     Properties="WebProjectOutputDir=$(OutputFolder)$(WebOutputFolder);
                                 OutDir=$(TempOutputFolder)$(WebOutputFolder)\;Configuration=Release;" />
        </Target>

    Any help would be great! I know this can be done by other means, but I would like to do this using the new VS 2010 way if possible.
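
    A hedged sketch of invoking the transform directly with the TransformXml task that ships with VS 2010's web publishing targets (the assembly path and the output folder here are assumptions to adapt):

        <UsingTask TaskName="TransformXml"
                   AssemblyFile="$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v10.0\Web\Microsoft.Web.Publishing.Tasks.dll" />

        <Target Name="TransformWebConfig" DependsOnTargets="PublishWeb">
            <TransformXml Source="Web.config"
                          Transform="Web.Release.config"
                          Destination="$(OutputFolder)$(WebOutputFolder)\Web.config" />
            <!-- Drop the leftover transform files from the published output -->
            <Delete Files="$(OutputFolder)$(WebOutputFolder)\Web.Debug.config;$(OutputFolder)$(WebOutputFolder)\Web.Release.config" />
        </Target>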

  • Error with perl script using module Mail::Sendmail

    - by Octopus
    I am using the code below to connect to an SMTP server where authentication is required for the connection. While executing the script I am getting an error message saying:

        Error sending mail: Connection error from mail.utiba.com on port 465 ()

    However, the connection to the mail server is working fine; I can connect to the mail server using other mail clients. Please suggest what I am doing wrong here. Are there any sendmail settings required?

        #!/usr/bin/perl
        use strict;
        use warnings;
        use Mail::Sendmail;

        print "Testing Mail::Sendmail version $Mail::Sendmail::VERSION\n";
        print "Default server: $Mail::Sendmail::mailcfg{smtp}->[0]\n";
        print "Default sender: $Mail::Sendmail::mailcfg{from}\n";

        my %mail = (
            From       => 'username',
            Bcc        => '[email protected]',
            Cc         => '[email protected]',
            Subject    => 'Test message',
            'X-Mailer' => "Mail::Sendmail version $Mail::Sendmail::VERSION",
        );

        $mail{Smtp} = 'mail.server.com:port';
        $mail{auth} = { user => 'username', password => "password", required => 1 };
        $mail{'X-custom'} = 'My custom additionnal header';
        $mail{'mESSaGE : '} = "The message key looks terrible, but works.";
        $mail{Date} = Mail::Sendmail::time_to_date( time() - 86400 );

        if (sendmail %mail) { print "Mail sent OK.\n" }
        else { print "Error sending mail: $Mail::Sendmail::error \n" }

        print "\n\$Mail::Sendmail::log says:\n", $Mail::Sendmail::log;
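
    Nothing in the question confirms it, but port 465 is normally implicit-SSL SMTP, which plain Mail::Sendmail does not speak; a hedged sketch of the same send over Net::SMTP::SSL instead (host, credentials and addresses are placeholders, and SMTP AUTH here needs Authen::SASL installed):

        #!/usr/bin/perl
        use strict;
        use warnings;
        use Net::SMTP::SSL;

        my $smtp = Net::SMTP::SSL->new('mail.server.com', Port => 465, Debug => 1)
            or die "Cannot connect: $!";
        $smtp->auth('username', 'password') or die "Auth failed: " . $smtp->message;
        $smtp->mail('sender@example.com');
        $smtp->to('recipient@example.com');
        $smtp->data();
        $smtp->datasend("To: recipient\@example.com\n");
        $smtp->datasend("Subject: Test message\n\n");
        $smtp->datasend("Sent over SMTPS on port 465.\n");
        $smtp->dataend();
        $smtp->quit;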

  • Remove All Nodes with (nodeName = "script") from a Document Fragment *before placing it in dom*

    - by Matrym
    My goal is to remove all script nodes (nodeName == "script") from a document fragment, leaving the rest of the fragment intact, before inserting the fragment into the DOM. My fragment is created by, and looks something like, this:

        range = document.createRange();
        range.selectNode(document.getElementsByTagName("body").item(0));
        documentFragment = range.cloneContents();
        sasDom.insertBefore(documentFragment, credit);
        document.body.appendChild(documentFragment);

    I got good range-walker suggestions in a separate post, but realized I asked the wrong question: I got an answer about ranges, but what I meant to ask about was a document fragment (or perhaps there's a way to set a range of the fragment? hrmmm). The walker provided was:

        function actOnElementsInRange(range, func) {
            function isContainedInRange(el, range) {
                var elRange = range.cloneRange();
                elRange.selectNode(el);
                return range.compareBoundaryPoints(Range.START_TO_START, elRange) <= 0 &&
                       range.compareBoundaryPoints(Range.END_TO_END, elRange) >= 0;
            }

            var rangeStartElement = range.startContainer;
            if (rangeStartElement.nodeType == 3) {
                rangeStartElement = rangeStartElement.parentNode;
            }
            var rangeEndElement = range.endContainer;
            if (rangeEndElement.nodeType == 3) {
                rangeEndElement = rangeEndElement.parentNode;
            }

            var isInRange = function(el) {
                return (el === rangeStartElement || el === rangeEndElement || isContainedInRange(el, range)) ?
                    NodeFilter.FILTER_ACCEPT : NodeFilter.FILTER_SKIP;
            };

            var container = range.commonAncestorContainer;
            if (container.nodeType != 1) {
                container = container.parentNode;
            }

            var walker = document.createTreeWalker(document, NodeFilter.SHOW_ELEMENT, isInRange, false);
            while (walker.nextNode()) {
                func(walker.currentNode);
            }
        }

        actOnElementsInRange(range, function(el) {
            el.removeAttribute("id");
        });

    That walker code is lifted from: http://stackoverflow.com/questions/2690122/remove-all-id-attributes-from-nodes-in-a-range-of-fragment

    PLEASE, no libraries (i.e. jQuery). I want to do this the raw way. Thanks in advance for your help.
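
    A minimal library-free sketch (my own illustration, not code from the linked answer) that walks the fragment itself and drops any script elements before the fragment is inserted into the DOM:

        function stripScripts(node) {
            // Walk backwards so removals don't disturb the indices we still have to visit.
            for (var i = node.childNodes.length - 1; i >= 0; i--) {
                var child = node.childNodes[i];
                if (child.nodeType === 1 && child.nodeName.toLowerCase() === 'script') {
                    node.removeChild(child);
                } else {
                    stripScripts(child);
                }
            }
            return node;
        }

        var documentFragment = range.cloneContents();
        stripScripts(documentFragment);
        document.body.appendChild(documentFragment);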

  • Writing an efficient cron job script utilizing Zend_Mail_Storage_Imap.

    - by fireeyedboy
    I'm new to the IMAP protocol and Zend_Mail_Storage, and I'm writing a small PHP script for a cron job that should regularly poll an IMAP account, check for new messages, and send an e-mail if new messages have arrived. As you can imagine, I want to only poll the IMAP account for relevant messages, and I only want to send a new e-mail if new messages have arrived since the last polled new message. So I thought of keeping track of the last message I polled with some unique identifier for a message. But I'm a bit uncertain about whether the methods I want to utilize for this do what I expect them to do. So my questions are:

    - Does the iterator position of Zend_Mail_Storage_Imap actually resemble some IMAP unique identifier for messages, or is it simply an internal position of Zend_Mail_Storage_Abstract? For instance, if I tell it to seek() to message 5 (which I stored from an earlier session), will it indeed seek to the appropriate message on the IMAP server, even if, for instance, messages have been deleted since the last session?
    - Would keeping track of this latest polled message id in a file suffice for a cron job that, say, polls the account every 5 or 10 minutes? Or is this too naive, and should I be using a database, for instance? Or is there maybe a much easier way to keep track of such state with Zend_Mail_Storage_Abstract?
    - Also, do I need to poll every IMAP folder, or is everything accumulated when I poll INBOX?

    If you could shed some light on any of these matters, I'd appreciate it. Thanks in advance.
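
    A hedged sketch of the unique-id route with Zend_Mail_Storage_Imap, relying on getUniqueId() from Zend Framework 1's storage API as I remember it; the state file, the credentials and the "highest UID so far" bookkeeping are my assumptions:

        <?php
        require_once 'Zend/Mail/Storage/Imap.php';

        $stateFile = '/var/tmp/last_uid.txt';
        $mail = new Zend_Mail_Storage_Imap(array(
            'host' => 'imap.example.com', 'user' => 'user', 'password' => 'secret',
        ));

        $lastUid = is_file($stateFile) ? (int) file_get_contents($stateFile) : 0;
        $maxUid  = $lastUid;
        $newMessages = 0;

        foreach ($mail as $num => $message) {
            $uid = (int) $mail->getUniqueId($num);   // stable across sessions, unlike $num
            if ($uid > $lastUid) {
                $newMessages++;
                $maxUid = max($maxUid, $uid);
            }
        }

        if ($newMessages > 0) {
            // send the notification e-mail here, then remember how far we got
            file_put_contents($stateFile, $maxUid);
        }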

  • Use the Django ORM in a standalone script (again)

    - by Rishabh Manocha
    I'm trying to use the Django ORM in some standalone screen-scraping scripts. I know this question has been asked before, but I'm unable to figure out a good solution for my particular problem. I have a Django project with defined models. What I would like to do is use these models and the ORM in my scraping script. My directory structure is something like this:

        project
            scrape              # scraping scripts
                ...
                test.py
            web
                django_project
                    settings.py
                    ...         # Django files

    I tried doing the following in project/scrape/test.py:

        print os.path.join(os.path.abspath('..'), 'web', 'django_project')
        sys.path.append(os.path.join(os.path.abspath('..'), 'web', 'django_project'))
        print sys.path
        print "-------"
        os.environ['DJANGO_SETTINGS_MODULE'] = 'django_project.settings'
        #print os.environ
        from django_project.myapp.models import MyModel
        print MyModel.objects.count()

    However, I get an ImportError when I try to run test.py:

        Traceback (most recent call last):
          File "test.py", line 12, in <module>
            from django_project.myapp.models import MyModel
        ImportError: No module named django_project.myapp.models

    One solution I found around this problem is to create a symbolic link to ../web/govcheck in the scrape folder:

        :scrape rmanocha$ ln -s ../web/govcheck ./govcheck

    With this, I can then run test.py just fine. However, this seems like a hack and, more importantly, is not very portable (I will have to create this symbolic link everywhere I run this code). So, I was wondering if anyone has any better solutions for my problem?
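
    A minimal sketch of the path fix, on the assumption that the package being imported is django_project itself, so it is the parent directory (web) that has to be on sys.path; myapp and MyModel stay as the question's placeholders:

        import os
        import sys

        # test.py lives in project/scrape, so project/web is one level up and across.
        PROJECT_ROOT = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
        sys.path.append(os.path.join(PROJECT_ROOT, 'web'))                    # makes `django_project` importable
        sys.path.append(os.path.join(PROJECT_ROOT, 'web', 'django_project'))  # makes the settings module importable

        os.environ['DJANGO_SETTINGS_MODULE'] = 'django_project.settings'

        from django_project.myapp.models import MyModel
        print MyModel.objects.count()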

  • feedparser fails during script run, but can't reproduce in interactive python console

    - by Rhubarb
    It's failing with this when I run it in Eclipse or when I run my script in IPython:

        'ascii' codec can't decode byte 0xe2 in position 32: ordinal not in range(128)

    I don't know why, but when I simply execute the feedparser.parse(url) statement using the same url, there is no error thrown. This is stumping me big time. The code is as simple as:

        try:
            d = feedparser.parse(url)
        except Exception, e:
            logging.error('Error while retrieving feed.')
            logging.error(e)
            logging.error(formatExceptionInfo(None))
            logging.error(formatExceptionInfo1())

    Here is the stack trace:

        d = feedparser.parse(url)
          File "C:\Python26\lib\site-packages\feedparser.py", line 2623, in parse
            feedparser.feed(data)
          File "C:\Python26\lib\site-packages\feedparser.py", line 1441, in feed
            sgmllib.SGMLParser.feed(self, data)
          File "C:\Python26\lib\sgmllib.py", line 104, in feed
            self.goahead(0)
          File "C:\Python26\lib\sgmllib.py", line 143, in goahead
            k = self.parse_endtag(i)
          File "C:\Python26\lib\sgmllib.py", line 320, in parse_endtag
            self.finish_endtag(tag)
          File "C:\Python26\lib\sgmllib.py", line 360, in finish_endtag
            self.unknown_endtag(tag)
          File "C:\Python26\lib\site-packages\feedparser.py", line 476, in unknown_endtag
            method()
          File "C:\Python26\lib\site-packages\feedparser.py", line 1318, in _end_content
            value = self.popContent('content')
          File "C:\Python26\lib\site-packages\feedparser.py", line 700, in popContent
            value = self.pop(tag)
          File "C:\Python26\lib\site-packages\feedparser.py", line 641, in pop
            output = _resolveRelativeURIs(output, self.baseuri, self.encoding)
          File "C:\Python26\lib\site-packages\feedparser.py", line 1594, in _resolveRelativeURIs
            p.feed(htmlSource)
          File "C:\Python26\lib\site-packages\feedparser.py", line 1441, in feed
            sgmllib.SGMLParser.feed(self, data)
          File "C:\Python26\lib\sgmllib.py", line 104, in feed
            self.goahead(0)
          File "C:\Python26\lib\sgmllib.py", line 138, in goahead
            k = self.parse_starttag(i)
          File "C:\Python26\lib\sgmllib.py", line 296, in parse_starttag
            self.finish_starttag(tag, attrs)
          File "C:\Python26\lib\sgmllib.py", line 338, in finish_starttag
            self.unknown_starttag(tag, attrs)
          File "C:\Python26\lib\site-packages\feedparser.py", line 1588, in unknown_starttag
            attrs = [(key, ((tag, key) in self.relative_uris) and self.resolveURI(value) or value) for key, value in attrs]
          File "C:\Python26\lib\site-packages\feedparser.py", line 1584, in resolveURI
            return _urljoin(self.baseuri, uri)
          File "C:\Python26\lib\site-packages\feedparser.py", line 286, in _urljoin
            return urlparse.urljoin(base, uri)
          File "C:\Python26\lib\urlparse.py", line 215, in urljoin
            params, query, fragment))
          File "C:\Python26\lib\urlparse.py", line 184, in urlunparse
            return urlunsplit((scheme, netloc, url, query, fragment))
          File "C:\Python26\lib\urlparse.py", line 192, in urlunsplit
            url = scheme + ':' + url
          File "C:\Python26\lib\encodings\cp1252.py", line 15, in decode
            return codecs.charmap_decode(input,errors,decoding_table)

  • Error Log states that I have MySQL connect error, yet script runs fine

    - by rob - not a robber
    Hello All, First, thanks for all the help I've received so far from StackOverflow. I've learned much. Once again, I'm posing a rudimentary question that I've searched on, but cannot find the exact answer to, here or on PHP.net. It's sort of like what this guy asked, but not exactly: http://stackoverflow.com/questions/288603/mysql-throwing-query-error-yet-finishing-query-just-fine-why

    So, I saw my error log ballooning up when I checked my site directory, and opened it to notice that a bunch of errors have been recorded since I wrote this new admin area. I know something is obviously awry with my scripting for the error to be thrown, but the weird thing is, the script actually runs through and pulls all the data I need without breaking. The log contains:

        PHP Warning:  mysql_query() [function.mysql-query]: Access denied for user 'someuser'@'localhost' (using password: NO) in /home/mysite/adminconsole.php on line 15

    I don't get that, because that very line is where I set up my connection... the exact same way I do it everywhere else on the site with no problem. After that error, I have these thrown at the same time:

        [09-Apr-2010 08:44:18] PHP Warning:  mysql_query() [function.mysql-query]: A link to the server could not be established in /home/mysite/adminconsole.php on line 15
        [09-Apr-2010 08:44:18] PHP Warning:  mysql_fetch_array(): supplied argument is not a valid MySQL result resource in /home/mysite/adminconsole.php on line 16

    From what I read in the other guy's thread, the problem is maybe the contents of the query? Maybe my query is malformed? Thanks so much for any guidance you can provide. -Rob
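
    Not a diagnosis from the thread, but that trio of warnings is the classic signature of mysql_query() running without a valid link, at which point PHP attempts an implicit, password-less connection; a hedged sketch that keeps the link explicit so that fallback can never happen (all names are placeholders):

        <?php
        $link = mysql_connect('localhost', 'someuser', 'the_password');
        if (!$link) {
            die('Connect failed: ' . mysql_error());
        }
        mysql_select_db('mydb', $link);

        $result = mysql_query('SELECT id, name FROM widgets', $link);  // pass $link explicitly
        if (!$result) {
            die('Query failed: ' . mysql_error($link));
        }
        while ($row = mysql_fetch_array($result)) {
            // ...
        }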

  • How to Include Multiple Javascript Files in .NET (Like they do in rails)

    - by Kyle West
    I'm jealous of the Rails guys. They can do this:

        <%= javascript_include_tag "all_min" %>

    ... and I'm stuck doing this:

        <script src="/public/javascript/jquery/jquery.js" type="text/javascript"></script>
        <script src="/public/javascript/jquery/jquery.tablesorter.js" type="text/javascript"></script>
        <script src="/public/javascript/jquery/jquery.tablehover.pack.js" type="text/javascript"></script>
        <script src="/public/javascript/jquery/jquery.validate.js" type="text/javascript"></script>
        <script src="/public/javascript/jquery/jquery.form.js" type="text/javascript"></script>
        <script src="/public/javascript/jquery/application.js" type="text/javascript"></script>

    Are there any libraries to compress, gzip and combine multiple JS files? How about CSS files?

  • Download-from-PyPI-and-install script

    - by zubin71
    Hello, I have written a script which fetches a distribution, given the URL. After downloading the distribution, it compares the md5 hashes to verify that the file has been downloaded properly. This is how I do it:

        def download(package_name, url):
            import urllib2
            downloader = urllib2.urlopen(url)
            package = downloader.read()
            package_file_path = os.path.join('/tmp', package_name)
            package_file = open(package_file_path, "w")
            package_file.write(package)
            package_file.close()

    I wonder if there is any better (more pythonic) way to do what I have done in the above code snippet. Also, once the package is downloaded, this is what is done:

        def install_package(package_name):
            if package_name.endswith('.tar'):
                import tarfile
                tarfile.open('/tmp/' + package_name)
                tarfile.extract('/tmp')
            import shlex
            import subprocess
            installation_cmd = 'python %ssetup.py install' % ('/tmp/' + package_name)
            subprocess.Popen(shlex.split(installation_cmd))

    As there are a number of imports for the install_package method, I wonder if there is a better way to do this. I'd love to have some constructive criticism and suggestions for improvement. Also, I have only implemented the install_package method for .tar files; would there be a better manner by which I could install .tar.gz and .zip files too, without having to write separate methods for each of these?
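
    A hedged sketch of one more idiomatic shape for the two steps (Python 2 to match the question; the streaming download, the archive handling and the unpacked-directory-name assumption are my suggestions, not the original code):

        import os
        import shutil
        import subprocess
        import tarfile
        import urllib2
        import zipfile

        def download(package_name, url):
            package_file_path = os.path.join('/tmp', package_name)
            response = urllib2.urlopen(url)
            with open(package_file_path, 'wb') as package_file:
                shutil.copyfileobj(response, package_file)  # stream instead of reading it all into memory
            return package_file_path

        def install_package(package_path, extract_dir='/tmp'):
            if tarfile.is_tarfile(package_path):            # covers .tar and .tar.gz
                archive = tarfile.open(package_path, 'r:*')
            elif zipfile.is_zipfile(package_path):
                archive = zipfile.ZipFile(package_path)
            else:
                raise ValueError('Unsupported archive: %s' % package_path)
            archive.extractall(extract_dir)

            # Assumes the archive unpacks into a single directory named after the file.
            base = os.path.basename(package_path)
            for suffix in ('.tar.gz', '.tar', '.zip'):
                if base.endswith(suffix):
                    base = base[:-len(suffix)]
                    break
            subprocess.call(['python', 'setup.py', 'install'], cwd=os.path.join(extract_dir, base))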

  • PHP Shopping Cart Script - When to empty cart?

    - by john
    I'm working on a shopping cart script in PHP and need some advice on how to handle the final process. Once the customer has entered items into the cart, chosen a shipping option, and then clicked the checkout button, they are redirected to a PayPal button which is dynamically generated using BMCreateButton. My question is: when is the best time to empty the customer's cart? I have set up the auto-return feature on PayPal, which I was going to use to then empty the cart, but it's not very good, as customers have to click a link in order to redirect. So should I empty it when they click the checkout button, just before the dynamic button? I can also use these settings in PHP to prevent cache/back-button issues:

        // Date in the past
        header("Expires: Mon, 26 Jul 1997 05:00:00 GMT");
        // Always modified
        header("Last-Modified: " . gmdate("D, d M Y H:i:s") . " GMT");
        // HTTP/1.1
        header("Cache-Control: no-store, no-cache, must-revalidate");
        header("Cache-Control: post-check=0, pre-check=0", false);
        // HTTP/1.0
        header("Pragma: no-cache");

    What would you guys recommend? Cheers.

  • future-proofing java version check in ant script

    - by carneades
    The script provided here gave a great way to check if Ant is using Java 6:

        <?xml version="1.0" encoding="UTF-8"?>
        <project name="project" default="default">
            <target name="default" depends="javaCheck" if="isJava6">
                <echo message="Hello, World!" />
            </target>
            <target name="javaCheck">
                <echo message="ant.java.version=${ant.java.version}" />
                <condition property="isJava6">
                    <equals arg1="${ant.java.version}" arg2="1.6" />
                </condition>
            </target>
        </project>

    However, I have good reason to think the next person holding my position may not be a Java programmer, and I want to make sure the builds don't fail because of Java 7. Is there any way to pick apart a string or otherwise ask for Java 6 or higher?
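
    A hedged sketch of one future-proof variant: rather than equating against "1.6", reject the known older versions so that 1.7 and anything newer keeps passing (Ant's <matches> regexp condition would be another route):

        <target name="javaCheck">
            <echo message="ant.java.version=${ant.java.version}" />
            <condition property="isJava6OrNewer">
                <not>
                    <or>
                        <equals arg1="${ant.java.version}" arg2="1.4" />
                        <equals arg1="${ant.java.version}" arg2="1.5" />
                    </or>
                </not>
            </condition>
        </target>

        <target name="default" depends="javaCheck" if="isJava6OrNewer">
            <echo message="Hello, World!" />
        </target>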

  • Get current PHP executable from within script?

    - by benizi
    I want to run a PHP CLI program from within PHP CLI. On some machines where this will run, both php4 and php5 are installed. If I run the outer program as php5 outer.php, I want the inner script to be run with the same PHP version. In Perl, I would use $^X to get the Perl executable. It appears there's no such variable in PHP? Right now, I'm using $_SERVER['_'], because bash (and zsh) set the environment variable $_ to the last-run program. But I'd rather not rely on a shell-specific idiom.

    UPDATE: Version differences are but one problem. If PHP isn't in PATH, for example, or isn't the first version found in PATH, the suggestions to find the version information won't help. Additionally, csh and its variants appear not to set the $_ environment variable for their processes, so the workaround isn't applicable there.

    UPDATE 2: I was using $_SERVER['_'] until I discovered that it doesn't do the right thing under xargs (which makes sense... zsh sets it to the command it ran, which is xargs, not php5, and xargs doesn't change the variable). Falling back to using:

        $version = explode('.', phpversion());
        $phpcli = "php{$version[0]}";
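
    For what it's worth, later PHP releases expose the running binary directly; a hedged sketch that prefers that constant and keeps the version-guess as the fallback (PHP_BINARY only appeared in PHP 5.4, so on the php4/php5 boxes described it would still fall through; inner.php is a placeholder):

        <?php
        if (defined('PHP_BINARY') && PHP_BINARY !== '') {
            $phpcli = PHP_BINARY;               // exact path of the interpreter running this script
        } else {
            $version = explode('.', phpversion());
            $phpcli  = 'php' . $version[0];     // e.g. "php5"; relies on PATH
        }
        passthru(escapeshellarg($phpcli) . ' inner.php');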

  • WMI Query Script as a Job

    - by Kenneth
    I have two scripts. One calls the other with a list of servers as parameters. The second script is designed to execute a WMI query. When I run it manually, it does this perfectly. When I try to run it as a job, it hangs forever and I have to remove it. For the sake of space, here is the relevant part of the calling script, ProcessServers.ps1:

        Start-Job -FilePath .\GetServerDetailsLight.ps1 -ArgumentList $sqlsrv,$destdb,$server,$instance

    GetServerDetailsLight.ps1:

        param($sqlsrv,$destdb,$server,$instance)

        $password = get-content C:\SQLPS\auth.txt | convertto-securestring
        $credentials = new-object -typename System.Management.Automation.PSCredential -argumentlist "DOMAIN\MYUSER",$password
        [System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO')

        $box_id = 0;
        if ($sqlsrv.length -eq 0)
        {
            write-output "No data passed"
            break
        }

        function getinfo {
            param(
                [string]$svr,
                [string]$inst
            )
            "Entered GetInfo with: $svr,$inst"
            $cs = get-wmiobject win32_operatingsystem -computername $svr -credential $credentials -authentication 6 -Verbose -Debug |
                select Name, Model, Manufacturer, Description, DNSHostName, Domain, DomainRole, PartOfDomain, NumberOfProcessors, SystemType, TotalPhysicalMemory, UserName, Workgroup
            write-output "WMI Results: $cs"
        }

        getinfo $server $instance
        write-output "Complete"

    Executed as a job, it will show as 'running' forever:

        PS C:\sqlps> Start-Job -FilePath .\GetServerDetailsLight.ps1 -ArgumentList DBSERVER,LOGDB,SERVER01,SERVER01

        Id     Name     State      HasMoreData   Location    Command
        --     ----     -----      -----------   --------    -------
        21     Job21    Running    True          localhost   param($sqlsrv,$destdb,...

        GAC    Version       Location
        ---    -------       --------
        True   v2.0.50727    C:\WINDOWS\assembly\GAC_MSIL\Microsoft.SqlServer.Smo\10.0.0.0__89845dcd8080cc91\Microsoft.SqlServer.Smo.dll

        getinfo MSDCHR01 MSDCHR01
        Entered GetInfo with: SERVER01,SERVER01

    The last output I ever get is the 'Entered GetInfo with: SERVER01,SERVER01'. If I run it manually, like so:

        PS C:\sqlps> .\GetServerDetailsLight.ps1 DBSERVER LOGDB SERVER01 SERVER01

    the WMI query executes just as expected. I am trying to determine why this is, or at least a useful way to trap errors from within jobs. Thanks!
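
    Not a diagnosis from the original post, but two hedged things to try: drop -Debug inside a job (it can turn debug output into an interactive prompt, and a background job has nobody to answer it), and collect the job's output and errors explicitly so a failure inside the job isn't silent:

        # Start the job, give it a bounded wait, then collect whatever it produced.
        $job = Start-Job -FilePath .\GetServerDetailsLight.ps1 `
                         -ArgumentList DBSERVER,LOGDB,SERVER01,SERVER01
        Wait-Job $job -Timeout 120 | Out-Null
        Receive-Job $job                 # streams output *and* errors raised inside the job
        if ($job.State -eq 'Running') {
            Stop-Job $job                # it hung; at least the partial output is visible above
        }
        Remove-Job $job -Force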

  • Script to copy files on CD and not on hard disk to a new directory

    - by John22
    I need to copy files from a set of CDs that have a lot of duplicate content, with each other, and with what's already on my hard disk. The file names of identical files are not the same, and are in sub-directories of different names. I want to copy non-duplicate files from the CD into a new directory on the hard disk. I don't care about the sub-directories - I will sort it out later - I just want the unique files. I can't find software to do that - see my post at SuperUser http://superuser.com/questions/129944/software-to-copy-non-duplicate-files-from-cd-dvd Someone at SuperUser suggested I write a script using GNU's "find" and the Win32 version of some checksum tools. I glanced at that, and have not done anything like that before. I'm hoping something exists that I can modify. I found a good program to delete duplicates, Duplicate Cleaner (it compares checksums), but it won't help me here, as I'd have to copy all the CDs to disk, and each is probably about 80% duplicates, and I don't have room to do that - I'd have to cycle through a few at a time copying everything, then turning around and deleting 80% of it, working the hard drive a lot. Thanks for any help.
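
    A hedged sketch of the suggested GNU find plus checksum approach (paths are placeholders; it assumes the GNU tools are available on Windows, for example via Cygwin, and that filename collisions in the output folder are acceptable for a rough first pass):

        #!/bin/sh
        HD_DIR=/c/archive          # what is already on the hard disk
        CD_DIR=/d                  # the mounted CD/DVD
        OUT_DIR=/c/new-from-cd     # unique files get copied here, flattened

        # 1. Hash everything already on disk (plus anything copied from earlier discs).
        find "$HD_DIR" "$OUT_DIR" -type f -exec md5sum {} + | awk '{print $1}' | sort -u > /tmp/known.md5

        # 2. Copy only the CD files whose hash isn't in that list yet.
        find "$CD_DIR" -type f | while read -r f; do
            sum=$(md5sum "$f" | awk '{print $1}')
            if ! grep -q "$sum" /tmp/known.md5; then
                cp "$f" "$OUT_DIR"/
                echo "$sum" >> /tmp/known.md5   # so duplicates within the same disc are skipped too
            fi
        done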

  • JQuery - Make my script work?

    - by JasonS
    Hi, I have written a script. It does work! It's just a bit rubbish. Could someone tell me where I am going wrong? Basically, I have a 5-star rating box. When the mouse hovers over the various parts of the box, I need the stars to change. Currently the stars only change when you move your mouse over and out of the element; I need the stars to change while the mouse is within the element. I think the problem is with the event, but I have tried a couple and it seems to make no difference.

        $(".rating").mouseover(function(e) {
            // Get element position
            var position = $(this).position();
            var left = position.left;

            // Get mouse position
            var mouseLeft = e.pageX;

            // Get actual
            var actLeft = mouseLeft - left;
            $(".info").html(actLeft);

            if (actLeft < 20) {
                $(this).attr('style', 'background-position:0;');
            } else if (actLeft < 40) {
                $(this).attr('style', 'background-position:0 -20px;');
            } else if (actLeft < 60) {
                $(this).attr('style', 'background-position:0 -40px;');
            } else if (actLeft < 80) {
                $(this).attr('style', 'background-position:0 -60px;');
            } else {
                $(this).attr('style', 'background-position:0 -80px;');
            }
        });
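
    A hedged sketch of the usual fix: mouseover only fires when the pointer enters the element (or one of its children), so tracking the pointer inside the box wants mousemove, and e.pageX is document-relative, which pairs with offset() rather than position():

        $(".rating").mousemove(function (e) {
            var left = $(this).offset().left;                  // same coordinate space as e.pageX
            var actLeft = e.pageX - left;
            var star = Math.min(Math.floor(actLeft / 20), 4);  // 0..4, assuming 20px per star
            $(".info").html(actLeft);
            $(this).css('background-position', '0 ' + (-20 * star) + 'px');
        });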
