Search Results

Search found 25651 results on 1027 pages for 'shell script'.


  • TechEd 2010 Day Four: Learning how to help others learn

    - by BuckWoody
    I do quite a few presentations, teach at the University of Washington, and teach other classes as well. But I'm always learning from others how to help others learn. At events like TechEd I have access to some of the best speakers around, so I try to find out what they do that works. I attended a great session by Allen White, in which he demonstrated a set of PowerShell scripts. He said that Dan Jones of the Microsoft Manageability team told him that while demonstrating a script he needed to provide some visual way to represent the process. Allen used one of the oldest visualizations around - a flowchart. It was the first time I'd seen one used to illustrate a PowerShell script, and it was very effective. I'm totally stealing the idea. All of us are teachers - we help others on our team understand what we're up to. Make notes on what you find effective when others are teaching you, and then meld that into your own way of teaching.


  • Character Jump Control

    - by Abdullah Sorathia
    I would like to know how I can control the jump movement of a character in Unity3D. Basically I am trying to program the jump in such a way that while a jump is in progress the character can still move left or right in mid-air when the corresponding keys are pressed. With my script the character correctly moves to the left when, for example, the left key is pressed, but when the right key is pressed afterwards, the character moves to the right before the movement to the left has finished. Following is the script:

        void Update () {
            if (touchingPlatform && Input.GetButtonDown("Jump")) {
                rigidbody.AddForce(jumpVelocity, ForceMode.VelocityChange);
                touchingPlatform = false;
                isJump = true;
            }
            // left & right movement
            Vector3 moveDir = Vector3.zero;
            if (Input.GetKey("right") || Input.GetKey("left")) {
                moveDir.x = Input.GetAxis("Horizontal"); // get result of A/D keys in X
                if (ShipCurrentSpeed == 0) {
                    transform.position += moveDir * 3f * Time.deltaTime;
                } else if (ShipCurrentSpeed <= 15) {
                    transform.position += moveDir * ShipCurrentSpeed * 2f * Time.deltaTime;
                } else {
                    transform.position += moveDir * 30f * Time.deltaTime;
                }
            }
        }
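    A common fix is to drive the horizontal motion through the rigidbody's velocity instead of adding to transform.position, so the most recent key press replaces the previous motion rather than competing with it. A minimal sketch under that assumption; moveSpeed is an illustrative value, not taken from the script above:

        // Overwrite horizontal velocity each physics step so a new key press
        // takes effect immediately, even in mid-air. Vertical (jump) velocity
        // is left untouched.
        public float moveSpeed = 5f; // illustrative

        void FixedUpdate () {
            float h = Input.GetAxis("Horizontal"); // -1 (left) .. +1 (right)
            Vector3 v = rigidbody.velocity;
            v.x = h * moveSpeed;      // replace, don't accumulate
            rigidbody.velocity = v;
        }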


  • PowerShell if statement not working - help!

    - by Dmart
    I have a batch file that takes a computer name as its argument and installs the Java JRE on it remotely. Now I'm trying to run this PowerShell script to repeatedly call the batch file and install Java on any system it finds without the latest version. It seems to run error-free, but the statements inside the if block never seem to run - even when the if conditional evaluates to true. Can anyone look at this script and point out what I'm possibly missing? I'm using the Quest AD cmdlets and the BSOnPosh module. Thank you.

        get-qadcomputer -sizelimit 0 -name mypc* -searchroot 'OU=MyComputers,DC=MyDomain,DC=lcl' |
            test-host -property name |
            ForEach-Object -process {
                $targnm = $_.name
                $tststr = reg query "\\$targnm\HKLM\SOFTWARE\JavaSoft\Java Runtime Environment" /v Java6FamilyVersion
                if (-not ($tststr | select-string -SimpleMatch '1.6.0_20')) {
                    $mssg = "Updating to JRE 6u20 on $targnm"
                    $mssg | Out-Host
                    Out-File -filepath c:\install_jre_log.txt -inputobject $mssg -Append
                    cmd /c \\server\apps\java\installjreremote.cmd $targnm
                } else {
                    $mssg = "JRE 6u20 found on $targnm"
                    $mssg | Out-Host
                    Out-File -filepath c:\install_jre_log.txt -inputobject $mssg -Append
                }
            }
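    One way to take reg.exe text-parsing out of the equation is to read the value through the .NET remote-registry API, so the if test compares an actual string instead of console output. A minimal sketch, assuming the Remote Registry service is running on the target; the machine name is a placeholder, while the key and value names are from the script above:

        $targnm = 'mypc01'   # placeholder
        $hive = [Microsoft.Win32.RegistryKey]::OpenRemoteBaseKey(
                    [Microsoft.Win32.RegistryHive]::LocalMachine, $targnm)
        $key = $hive.OpenSubKey('SOFTWARE\JavaSoft\Java Runtime Environment')
        $ver = if ($key) { $key.GetValue('Java6FamilyVersion') } else { $null }
        if ($ver -ne '1.6.0_20') {
            "Updating to JRE 6u20 on $targnm (found: $ver)"
        } else {
            "JRE 6u20 found on $targnm"
        }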


  • Facebook Like JavaScript related to 'Time spent downloading a page' increase in GWT?

    - by donaldthe
    Hi, I installed the JavaScript version of the Facebook Like button on my website on December 15th. Take a look at the Googlebot crawl stats for the last 90 days in Google Webmaster Central. The crawl stats come from Googlebot, which as far as I know doesn't execute JavaScript. Could the Facebook Like JavaScript code, "the XFBML version", be related to the large spike in time spent downloading a page? (By the way, the huge spike in November was caused by a mistake where every image request was getting a 301.) I'm not sure what caused the spike to drop by half somewhere in December; it may have been related to a faulty setting in web.config. I'm at a loss as to what I can do about this, or even how to tell whether this is my problem or Googlebot's crawl problem. Here is the Facebook code I am using to create the Like button. It is right after the opening body tag:

        <div id="fb-root"></div>
        <script>
          window.fbAsyncInit = function() {
            FB.init({appId: 'xxxxx', status: true, cookie: true, xfbml: true});
          };
          (function() {
            var e = document.createElement('script');
            e.async = true;
            e.src = document.location.protocol + '//connect.facebook.net/en_US/all.js';
            document.getElementById('fb-root').appendChild(e);
          }());
        </script>

    and this creates the Like box:

        <fb:like show_faces="false"></fb:like>

    If the JavaScript can't be the problem, any ideas on where to start looking would be appreciated.


  • Simple Interactive Search with jQuery and ASP.Net MVC

    - by Doug Lampe
    Google now has a feature where the search updates as you type in the search box. You can implement the same feature in your MVC site with a little jQuery and MVC Ajax. Here's how:

    1. Create a JavaScript global variable to hold the previous value of your search box.
    2. Use setTimeout to check whether the search box has changed after some interval. (Note: don't use setInterval, since we don't want to have to turn the timer off while Ajax is processing.)
    3. Submit the form if the value has changed.
    4. Set the update target to display your results.
    5. Set the on-success callback to "start" the timer again. This, along with step 2 above, makes sure that you don't send multiple requests until the initial request has finished processing.

    Here is the code (note that setTimeout takes its delay in milliseconds):

        <script type="text/javascript">
            var searchValue = $('#Search').val();
            $(function () {
                setTimeout(checkSearchChanged, 100);
            });
            function checkSearchChanged() {
                var currentValue = $('#Search').val();
                if ((currentValue) && currentValue != searchValue && currentValue != '') {
                    searchValue = $('#Search').val();
                    $('#submit').click();
                }
                else {
                    setTimeout(checkSearchChanged, 100);
                }
            }
        </script>
        <h2>Search</h2>
        <% using (Ajax.BeginForm("SearchResults", new AjaxOptions { UpdateTargetId = "searchResults", OnSuccess = "checkSearchChanged" })) { %>
            Search: <%= Html.TextBox("Search", null, new { @class = "wide" })%><input id="submit" type="submit" value="Search" />
        <% } %>
        <div id="searchResults"></div>

    That's it!


  • What's the difference between WSGI <app> and <module>?

    - by Leftium
    I followed these instructions to serve Python (Web2Py) via uWSGI. However, the web server returned the error "uWSGI Error: Python application not found" until I modified the config.xml config file from:

        <uwsgi>
            <pythonpath>/var/web2py/</pythonpath>
            <app mountpoint="/">
                <script>wsgihandler</script>
            </app>
        </uwsgi>

    to:

        <uwsgi>
            <pythonpath>/var/web2py/</pythonpath>
            <module>wsgihandler</module>
        </uwsgi>

    What's the difference between <app> and <module>? Why did <module> work, but not <app>?


  • How to disable the shared-folders password prompt on Windows & Linux

    - by user53864
    I'm using Windows XP, Vista and Server 2008 R2, plus the Ubuntu 9.10, 9.04 and 8.10 desktop editions. I have shared some folders/directories on the Windows and Ubuntu machines. Normally I'm prompted for a password the first time I access a shared folder on either kind of machine, and prompted again after the system is rebooted. Recently I created a batch script and a shell script which copy from the shared folders from Windows to Ubuntu and vice versa, but the scripts cannot get past the password prompt. I'm wondering if I can disable the password prompt for the shared folders on the Windows as well as the Ubuntu machines, so that my scripts run smoothly. I don't know whether it's possible, but I'd like to confirm by posting here, and if it is, I could use some help...! Thanks all!
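    Rather than disabling the prompts, an alternative is to let each script supply the credentials itself. A minimal sketch, assuming a CIFS mount on the Ubuntu side; the share names, user and password are placeholders:

        # /root/.smbcred (chmod 600), placeholder values:
        #   username=shareuser
        #   password=sharepass
        mount -t cifs //winbox/shared /mnt/winshare -o credentials=/root/.smbcred

        # On the Windows side, a batch script can authenticate up front:
        #   net use \\ubuntubox\shared sharepass /user:shareuser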


  • How to distinguish doc, ppt, xls files without looking at the file extension

    - by Shelby. S
    So I was wondering how you would differentiate ppt, xls and doc files from each other in Linux, regardless of extension. I tried 'file', but from the looks of it, all of the MS Office files are categorized under the same file type. Similarly, I'm having trouble with docx, xlsx and pptx files, since they're essentially all zip files containing a bunch of XML. I also tried a Python script importing the magic module, but no go. I'm trying to identify the actual file type for a sandbox analysis. For this specific purpose I need the real type in order to run the file in the sandbox VM (the Windows VM runs everything by extension). Let's say my sample file is labeled try.exe, but in reality it's just a doc file. My script will rename it as try.exe.doc, which works fine for doc files. But since Linux identifies all MS Office files as simple DOC files, there's no way to identify ppt or xls files, and as a result the sandbox won't analyze the sample correctly.
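    A minimal sketch of one way to tell the formats apart, assuming the olefile module is available for the legacy binary formats: the OOXML containers can be distinguished by the top-level directory inside the zip, and the OLE2 formats by their well-known stream names. The path and returned labels are illustrative:

        import zipfile
        import olefile  # assumed installed (pip install olefile)

        def office_type(path):
            if zipfile.is_zipfile(path):         # OOXML: docx/xlsx/pptx are zips
                names = zipfile.ZipFile(path).namelist()
                if any(n.startswith('word/') for n in names):
                    return 'docx'
                if any(n.startswith('xl/') for n in names):
                    return 'xlsx'
                if any(n.startswith('ppt/') for n in names):
                    return 'pptx'
            elif olefile.isOleFile(path):        # legacy OLE2: doc/xls/ppt
                ole = olefile.OleFileIO(path)
                if ole.exists('WordDocument'):
                    return 'doc'
                if ole.exists('Workbook') or ole.exists('Book'):
                    return 'xls'
                if ole.exists('PowerPoint Document'):
                    return 'ppt'
            return None

        print(office_type('try.exe'))  # e.g. 'doc' -> rename to try.exe.doc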


  • Virtual Server 2005 R2 kungfu

    - by AngryHacker
    Does Virtual Server 2005 R2 have a command-line interface that's versatile enough? Here is the situation. I run a Win2k VM on an old, memory-constrained machine. I allocate it 378 MB of RAM and the VM runs just fine. Once a month, inside the VM, I back up a (very large) database, compress it using 7-Zip and FTP it to the backup site (all in a script). Unfortunately the compression part takes a massive amount of RAM (far exceeding the 378 MB); it goes for the paging file and brings absolutely everything to a crawl, and it literally takes 2-3 days if left unattended. To fix this, I have to shut down the VM, temporarily give it 768 MB of RAM, and then the whole thing finishes in 20 minutes. So, is there a way to do the following automatically from the host machine in a script?

    1. Shut down the guest OS (I think I've got this part)
    2. Change the RAM allocation from 378 to 768
    3. Start the guest OS again

    Then, 1 hour later, do everything in reverse.
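    Virtual Server 2005 R2 has no rich CLI, but it does expose a scriptable COM API that can be driven from the host. A rough sketch of the idea in VBScript; treat the object, method and property names here (FindVirtualMachine, GuestOS.Shutdown, Memory, Startup, WaitForCompletion) as assumptions to verify against the Virtual Server COM documentation, and the VM name as a placeholder:

        ' Sketch only: run with cscript on the host.
        Set vs = CreateObject("VirtualServer.Application")
        Set vm = vs.FindVirtualMachine("Win2kBackup")   ' placeholder VM name

        Set task = vm.GuestOS.Shutdown()    ' clean shutdown (needs VM Additions)
        task.WaitForCompletion(600000)      ' wait up to 10 minutes

        vm.Memory = 768                     ' RAM in MB
        vm.Startup                          ' boot the guest again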


  • Upload a directory recursively to an FTP server

    - by Nicolas Raoul
    I am writing a Linux shell script to copy a local directory to a remote server (removing any existing files there). Local server: the ftp and lftp commands are available, but no ncftp or any graphical tools. Remote server: only accessible via FTP - no rsync, SSH or FXP. I am thinking about listing the local and remote files to generate an lftp script and then running it. Is there a better way? Note: uploading only modified files would be a plus, but is not required.
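    Since lftp is available, its built-in mirror command already does this, so no generated script should be needed. A minimal sketch, with host, credentials and paths as placeholders:

        # -R reverses the mirror (upload), --delete removes remote files that no
        # longer exist locally, --only-newer skips unchanged files.
        lftp -u user,password ftp.example.com \
             -e 'mirror -R --delete --only-newer /local/dir /remote/dir; quit'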


  • Why should I use MSBuild instead of Visual Studio Solution files?

    - by Sid
    We're using TeamCity for continuous integration, and it's building our releases via the solution file (.sln). I've used Makefiles in the past for various systems but never MSBuild (which I've heard is sort of a Makefiles + XML mashup). I've seen many posts on how to use MSBuild directly instead of the solution files, but I don't see a very clear answer on why to do it. So, why should we bother migrating from solution files to an MSBuild 'makefile'? We do have a couple of releases that differ by a #define (featurized builds), but for the most part everything works. The bigger concern is that we'd now have to maintain two systems when adding projects/source code. UPDATE: Can folks shed light on the lifecycle and interplay of the following three components?

    1. The Visual Studio .sln file
    2. The many project-level .csproj files (which I understand are "sub" MSBuild scripts)
    3. The custom MSBuild script

    Is it safe to say that the .sln and .csproj files are consumed/maintained as usual from within the Visual Studio IDE GUI, while the custom MSBuild script is hand-written and usually consumes the already existing individual .csproj files "as-is"? That's one way I can see to reduce overlap/duplication in maintenance... Would appreciate some light on this from other folks' operational experience.
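    For reference, a hand-written MSBuild wrapper can simply drive the existing .sln and pass a feature define down to the projects, which avoids maintaining a second definition of the build. A minimal sketch; the file name, solution name and constant are illustrative:

        <!-- build.proj (illustrative): run with  msbuild build.proj /t:Featurized -->
        <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
          <Target Name="Featurized">
            <MSBuild Projects="MySolution.sln"
                     Properties="Configuration=Release;DefineConstants=FEATURE_X" />
          </Target>
        </Project>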


  • "ssh_exchange_identification: Connection closed by remote host lost connection" when running cron job

    - by grautur
    I have a Ruby script that connects to a remote machine via ssh and executes a command. The script runs fine when I just run it in my terminal. In my crontab I have:

        1 * * * * /bin/bash -l -c 'ruby myfile.rb'

    and if I go ahead and run /bin/bash -l -c 'ruby myfile.rb' myself, everything executes fine. But when cron itself executes the job, I get an "ssh_exchange_identification: Connection closed by remote host" error. What's the cause of this? How do I fix it?
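    One way to narrow this down is to capture ssh's own diagnostics from inside the cron run, since cron's environment (PATH, HOME, no agent, no TTY) differs from an interactive login shell. A minimal sketch; the log path is arbitrary:

        # Run from the cron job (or a wrapper) and compare the log against an
        # interactive run of the same command:
        ssh -vvv user@remotehost 'true' 2>/tmp/ssh-cron-debug.log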


  • How to kill all screens that have been up longer than 3 weeks?

    - by Darkmage
    I'm creating a script, to be executed every night at 03:00, that will kill all screens that have been running longer than 3 weeks. Has anyone done anything similar that can help? If you have a script or a suggestion for a better method, please help by posting :) I was thinking maybe something like this: first dump the process list to a text file (ps -U username -ef | grep SCREEN > dump.txt), then loop through all lines of dump.txt with a regex, putting the PIDs of the processes whose STIME is 3 weeks ago or older into an array, then run a kill loop over the array.
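    A minimal sketch that skips the temp file by asking ps for the elapsed time in seconds directly. It assumes a procps version that supports the etimes output field and that the screens belong to the user running the script:

        #!/bin/bash
        # Kill SCREEN sessions older than 3 weeks (3*7*24*3600 = 1814400 s).
        limit=1814400
        for pid in $(pgrep -u "$(whoami)" SCREEN); do
            age=$(ps -o etimes= -p "$pid" | tr -d ' ')
            if [ "$age" -gt "$limit" ]; then
                kill "$pid"   # escalate to kill -9 only if this fails
            fi
        done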


  • quick check of open port

    - by shantanuo
    The following works as expected (I do not want to use nmap). I need to use nc (or another built-in CentOS command) in a shell script to check port 6379 on a remote server. I want the script to exit quickly if no response is received within 1 second, but it seems that nc waits far too long before quitting with an exit code of 1. How do I "quickly" check whether the port is listening?

        # time nc -z 1.2.3.4 1234

        real    0m21.001s
        user    0m0.000s
        sys     0m0.000s
        # echo $?
        1

        # time nc -z 1.2.3.4 6379
        Connection to 1.2.3.4 6379 port [tcp/*] succeeded!

        real    0m0.272s
        user    0m0.000s
        sys     0m0.008s
        # echo $?
        0
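    nc has a timeout flag for exactly this. A minimal sketch, assuming the stock CentOS nc honors -w as a connect timeout; the coreutils timeout wrapper (where available) is shown as a fallback:

        # -w 1: give up after 1 second; -z: just probe, send no data
        nc -z -w 1 1.2.3.4 6379 && echo open || echo closed

        # Fallback: hard-cap the whole command at 1 second
        timeout 1 nc -z 1.2.3.4 6379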


  • PHP5 giving 'failed to open stream: HTTP request failed' error when using fopen

    - by mickey
    Hello everyone. This problem seems to have been discussed everywhere on Google and here in the past, but I have yet to find a solution. A very simple fopen gives me: PHP Warning: fopen(http://www.google.ca): failed to open stream: HTTP request failed! The URL I am fetching makes no difference, because even when I fetch http://www.google.com it doesn't work. The exact same script works on a different server. The one failing is Ubuntu 10.04 with PHP 5.3.2. This is not a problem in my script; it's something different on the server, or it might be a bug in PHP. I have tried setting a user_agent in php.ini, but no success. My allow_url_fopen is set to On. If you have any ideas, feel free!
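    One way to get a more specific message than the generic stream warning is to make the same request through PHP's cURL extension, which reports the underlying failure (DNS, connect timeout, proxy, etc.). A minimal sketch, assuming the curl extension is installed:

        <?php
        // Fetch the same URL via cURL to surface the underlying error.
        $ch = curl_init('http://www.google.com/');
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 5);
        $body = curl_exec($ch);
        if ($body === false) {
            echo 'cURL error: ' . curl_error($ch) . "\n"; // e.g. DNS or connect failure
        } else {
            echo 'OK, got ' . strlen($body) . " bytes\n";
        }
        curl_close($ch);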


  • bridge to share WiFi internet connection over eth0

    - by rubo77
    This script bridges a mesh network to eth0, so that other machines connected to your device get an internet connection via DHCP:

        echo "starting bridge to share internet connection over eth0"
        ifconfig eth0 up promisc
        brctl addbr br-freifunk
        brctl addif br-freifunk bat0
        brctl addif br-freifunk eth0
        echo "internet starting, this may take some minutes due to latency..."
        echo "(use"
        echo "tail -f /var/log/syslog"
        echo "in another window for debugging)"
        echo
        echo dhclient br-freifunk
        echo "The error 'Rather than invoking init scripts through /etc/init.d...' can be ignored:"
        dhclient br-freifunk
        # without bridge: dhclient bat0

    This works fine with the Freifunk mesh network. But how can I serve internet in the normal case? I would like to connect to any open WiFi I find with my network manager or wicd (yes, with the graphical GUI) and then start a small script that creates a bridge to the wired network port of my laptop. I use Ubuntu and my network interfaces are eth0 and wlan0.
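    A managed-mode wlan0 generally can't be added to a bridge, so the usual approach for this case is NAT plus a small DHCP server on eth0 instead of a bridge. A minimal sketch; the subnet is arbitrary and dnsmasq is assumed to be installed:

        #!/bin/bash
        # Share the wlan0 uplink with whatever is plugged into eth0, via NAT.
        ifconfig eth0 192.168.77.1 netmask 255.255.255.0 up
        sysctl -w net.ipv4.ip_forward=1
        iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE
        iptables -A FORWARD -i eth0 -o wlan0 -j ACCEPT
        iptables -A FORWARD -i wlan0 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
        # Hand out addresses and DNS on eth0:
        dnsmasq --interface=eth0 --dhcp-range=192.168.77.50,192.168.77.150,12h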


  • Google Analytics Visitors drop-off for certain region of site only

    - by crmpicco
    I have an issue with the tracking on my site: I've seen a dramatic drop-off of visitors to the site from a certain region. I have four regions on my site at the moment: UK, EU, US and RoW (Rest of the World). The UK, EU and US regions are unaffected; only the RoW region suffers this drop-off. I have included a screen shot below from my GA account which shows this effect. My GA code, which is included on every page on the site, is below. I have changed the UA account number intentionally for this example. No changes have been made to the GA account or the tracking code in the live environment for some considerable time, but for some reason I am seeing the drop-off for this region only. In the code below I am not tracking page views on certain pages, as I have event tracking set up for those pages.

        <script type="text/javascript">
            var _gaq = _gaq || [];
            _gaq.push(['_setAccount', 'UA-18721873-5']);
            _gaq.push(['_setCookiePath', '/row/']);
            if (typeof(p_page) != 'undefined') {
                // do nothing if user is on above pages
                // N.B. there are a series of conditions in this if statement
                // checking that we are not on a particular page
            } else {
                _gaq.push(['_trackPageview']);
            }
        </script>


  • How should I architect a personal schedule manager that runs 24/7?

    - by Crawford Comeaux
    I've developed an ADHD management system for myself that's attempting to change multiple habits at once. I know this is counter to conventional wisdom, but I've tried the conventional approach for years and am now trying it my way. (I just wanted to say that to keep it from distracting people from the actual question.) Anyway, I'd like to write something to run on a remote server that monitors me, helps me build/avoid certain habits, etc. What this amounts to is a system that:

    - runs 24/7
    - may have multiple independent tasks to run at once
    - may have tasks that require other tasks to run first
    - lets tasks be scheduled by specific time, recurrence (i.e. "run every 5 mins"), or interval (i.e. "run from 2pm to 3pm")

    My first naive attempt at this was just a single PHP script scheduled to run every minute by cron (the language was chosen in order to use a certain library, which is no longer necessary). The logic behind when to run this or that portion of code got hairy pretty quickly. So my question is how I should approach this from here. I'm not tied to any one language, though I'm partial to Python/JavaScript. Thoughts: it could be done as a set of scripts that include a scheduling mechanism, with one script per bit of logic... but the idea just feels wrong to me. Building it as a daemon could be helpful, but I'm still unsure what to do about dozens of if-else statements for detecting the current time; see the sketch below.
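    A minimal sketch of the daemon shape in plain Python: a priority queue keyed by next-run time removes the need for time-detecting if-else chains, and recurring tasks simply re-enqueue themselves. Everything here is illustrative:

        import heapq
        import time

        # Each task: (next_run_timestamp, name, action, repeat_seconds_or_None)
        def run_scheduler(tasks):
            queue = [(t[0], i, t) for i, t in enumerate(tasks)]  # i breaks ties
            heapq.heapify(queue)
            counter = len(tasks)
            while queue:
                next_run, _, task = heapq.heappop(queue)
                time.sleep(max(0, next_run - time.time()))  # idle until due
                _, name, action, repeat = task
                action()
                if repeat is not None:  # recurring task re-enqueues itself
                    counter += 1
                    heapq.heappush(queue, (next_run + repeat, counter,
                                           (next_run + repeat, name, action, repeat)))

        # Example: nudge every 5 minutes, starting 5 seconds from now.
        run_scheduler([(time.time() + 5, 'nudge', lambda: print('nudge'), 300)])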


  • Why can't tuxboot and Ubuntu play well together?

    - by mmr
    I'm trying to get Clonezilla to run off of a USB stick, and it seems that the right way to do that is via tuxboot. Tuxboot is not compilable on Ubuntu: I used git to get it from the repository, but building it is apparently not supported, since the build script just tries to install Windows things, and qmake wants the qmake executable to be in the same directory as the sources I pulled down. Let's just say that if there's a way to do this easily, I ain't seein' it. So then I downloaded the prebuilt binary, the most recent of which is tuxboot-linux-25. Trying to run it fails because libpng12.so.0 isn't found. OK, so I installed libpng following instructions I found on the web (which Firefox seems to have already deleted from my history), then added /usr/local/lib to ldconfig via emacs (had to install that too, of course), per http://ubuntuforums.org/showthread.php?t=369848. I still get errors that libpng12.so.0 cannot be opened because 'No such file or directory'. ldconfig -p | grep libpng shows that the library is there, but it still doesn't seem to be findable. What to do next? (For the record, doing this in Windows is painless - download, click, and it's done. But I'm trying to be all linuxy and get away from Windows for this...)
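    Two things worth trying, under the assumption that this Ubuntu release still packages libpng12: install the distro package instead of a hand-built copy, and check whether the binary is 32-bit, in which case a 64-bit libpng won't satisfy it:

        # Package name assumed available on this release:
        sudo apt-get install libpng12-0
        sudo ldconfig          # rebuild the linker cache after any library change

        # If this reports a 32-bit executable on a 64-bit system, the 32-bit
        # libpng is what's actually missing:
        file tuxboot-linux-25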


  • Cannot get nscd to run. DNS cache stale as a result

    - by Phunt
    I'm trying to troubleshoot an issue on a MediaTemple server (running CentOS 5) where the DNS cache has grown stale - I think because nscd has crashed. I've tried restarting nscd:

        # service nscd restart
        Stopping nscd:      [FAILED]
        Starting nscd:      [  OK  ]

    This makes sense, since I believe nscd has crashed, so it shouldn't already be running. But when I view the status of nscd:

        # service nscd status
        nscd dead but subsys locked

    And ps -A returns no processes related to nscd (I assume because it's dead). I've edited /etc/nscd.conf and uncommented the line that defines the location of the log file; the file gets created, but nothing is ever written to it. I tried looking at the init script, but found that it's no help, since the script thinks everything is running fine - the service reports that it started up correctly. How do I 'unlock' the subsys that nscd is complaining about?
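    On CentOS 5, "dead but subsys locked" usually means a stale lock file survived the crash. A minimal sketch of clearing it by hand; the pid-file path is the usual default but worth verifying against /etc/nscd.conf:

        rm -f /var/lock/subsys/nscd      # the lock the status message refers to
        rm -f /var/run/nscd/nscd.pid     # stale pid file, if present (path assumed)
        service nscd start
        service nscd status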


  • Deploying Office 2013 via GPO

    - by NickC
    Looking at potential ways to deploy Office 2013 via GPO. The first and most obvious way is to run a startup script which calls the Office 2013 setup.exe. The problem here is what happens after it is installed: will that startup script keep re-installing the product every time the machine boots? Another potential way is to install each Office component separately using the multitude of .msi files which are present. Would that work and provide the same thing as a full install of Office? There are actually twenty-three separate .msi files. What about officemui.msi - is that a wrapper which contains calls to all of the other Office components?
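    A common pattern for the startup-script route is to make the script idempotent: check for a marker that Office is already installed and exit early. A minimal sketch; the share, .msp file and install path are placeholders (and on 64-bit Windows with 32-bit Office the check would need "%ProgramFiles(x86)%"):

        @echo off
        rem Skip the install if Office 2013 is already present.
        if exist "%ProgramFiles%\Microsoft Office\Office15\WINWORD.EXE" goto :eof

        rem /adminfile applies a customization (.msp) built with the Office
        rem Customization Tool (setup.exe /admin).
        \\server\office2013\setup.exe /adminfile \\server\office2013\custom.msp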


  • Connect through SSH and type in password automatically, without using a public key

    - by binary255
    A server allows SSH connections, but not using public-key authentication. It's not within my power to change this at the moment (due to technical difficulties, not organizational ones), but I will get on it as soon as possible! What I need now is to execute commands on the server from a script using plain old account+password authentication; that is, I need to do it in a non-interactive way. Is it possible? And how do I do it? The client which will be executing the script runs Ubuntu Server 8.04. The server runs Cygwin and OpenSSH.
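    Two common approaches, assuming either sshpass or expect can be installed on the client; the obvious trade-off is that the password ends up readable in the script:

        # Option 1: sshpass feeds the password to ssh's prompt
        sshpass -p 'secret' ssh user@server 'uname -a'

        # Option 2: a one-shot expect script does the same
        expect -c '
            spawn ssh user@server uname -a
            expect "password:"
            send "secret\r"
            expect eof
        '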


  • Unity: problem with colliding instances of the same object

    - by Kuba Sienkiewicz
    I want to check whether an object's instance is overlapping with another instance (any spawned object with any other spawned object, not necessarily the same object). I'm doing this by detecting collisions between bodies, but I have a problem: spawned objects (instances) detect collisions with everything except other spawned objects. I've checked collision layers etc. All of the spawned objects have rigidbodies and mesh colliders. Also, when I attach my script to another body and touch that body with an instanced object, the collision is detected. So the problem shows up only in collisions between spawned objects. One more piece of information: the script, rigidbody and collider are attached to a child of the main object.

        using UnityEngine;
        using System.Collections;

        public class CantPlace : MonoBehaviour {
            public bool collided = false;

            void OnTriggerEnter(Collider collider) {
                // Tint the object to show that the overlap was detected.
                Color c = this.renderer.material.color;
                c.r = 0f;
                c.g = 0f;
                c.b = 1f;
                this.renderer.material.color = c;
                Debug.Log(collider.name);
            }
        }
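    One assumption worth checking, given that the spawned objects only fail against each other: in Unity's physics, two non-convex MeshColliders never register contacts or triggers with each other. A sketch of marking the spawned colliders convex at spawn time:

        // On each spawned instance (e.g. right after Instantiate):
        MeshCollider mc = GetComponentInChildren<MeshCollider>();
        mc.convex = true;       // required for mesh-vs-mesh contact
        mc.isTrigger = true;    // OnTriggerEnter also needs a trigger collider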


  • Maintaining Two Separate Software Versions From the Same Codebase in Version Control

    - by Joseph
    Let's say that I am writing two different versions of the same software/program/app/script and storing them under version control. The first version is a free "Basic" version, while the second is a paid "Premium" version that takes the codebase of the free version and expands upon it with a few extra value-added features. Any new patches, fixes, or features need to find their way into both versions. I am currently considering using master and develop branches for the main codebase (the free version) alongside master-premium and develop-premium branches for the paid version. When a change is made to the free version and merged to the master branch (after thorough testing on develop, of course), it gets copied over to the develop-premium branch via the cherry-pick command for more testing and is then merged into master-premium. Is this the best workflow to handle this situation? Are there any potential problems, caveats, or pitfalls to be aware of? Is there a better branching strategy than what I have already come up with? Your feedback is highly appreciated! P.S. This is for a PHP script stored in Git, but the answers should apply to any language or VCS.
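    For reference, a minimal sketch of the flow described above, using the branch names from the question and a placeholder commit hash:

        # Fix lands in the free codebase first
        git checkout develop          # develop and test the fix here
        git checkout master
        git merge develop

        # Carry the same commit into the paid version
        git checkout develop-premium
        git cherry-pick <sha-of-fix>  # placeholder: hash of the fix commit
        git checkout master-premium
        git merge develop-premium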


  • How to sell logistical procedures that require less time to perform but more finesse?

    - by foampile
    I am working with a group where part of the responsibilities is managing a certain set of configuration files which, of course, have the same skeleton/structure across different environments but different values (like server, user, this setting, that setting, etc.). Pretty classic scenario... The problem is that everyone just goes and modifies the final, environment-specific files, basically repeating the work for every environment. Personally, I am offended at having to perform repeatable, mundane tasks in this day and age, when we have technologies to automate it all. So I devised a very simple procedure of abstracting the files into templates, stubbing env-specific values with parameters, and then wrote a simple Perl script that, given a template and an environment matrix with env-specific values for each param, produces the final file. This is nothing special, cutting-edge or revolutionary - I am pretty sure that 20 years ago efficient places did their CM like that. However, it requires that changes be made at the template level and then distributed across the different environments using the script, rather than by editing the final environment-specific files. This is where I am encountering resentment: they feel "comfortable" doing it their old, manual, repetitive way. Personally, I don't have a problem with them working hard rather than smart, but the problem is that when I have to build on top of someone else's changes, I have to merge their changes into my template from a specific file, which takes time and is grueling. So my question is how to go about selling my method, which makes everything so much faster, in an environment that is resistant to change and where most things have to be done at the level of the least competent team member?
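    For the curious, the template-expansion idea is tiny. A minimal sketch (not the author's actual script), assuming templates mark parameters as %%NAME%% and each environment matrix is a simple key=value file:

        #!/usr/bin/perl
        # Usage sketch: expand.pl template.conf env/prod.values > prod.conf
        use strict;
        use warnings;

        my ($template, $values) = @ARGV;

        # Load the environment matrix: one KEY=value pair per line.
        my %param;
        open my $vfh, '<', $values or die "can't read $values: $!";
        while (<$vfh>) {
            chomp;
            $param{$1} = $2 if /^(\w+)=(.*)$/;
        }

        # Substitute %%KEY%% markers; unknown markers are left as-is.
        open my $tfh, '<', $template or die "can't read $template: $!";
        while (<$tfh>) {
            s{%%(\w+)%%}{ defined $param{$1} ? $param{$1} : "%%$1%%" }ge;
            print;
        }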

