Search Results

Search found 37604 results on 1505 pages for 'build script'.


  • How to force remove a package if dpkg removal script fails?

    - by fodon
    I'm trying to remove a package where I deleted the /etc/init.d/disco-master file (in an attempt to remove the package manually). I want to remove the disco-master package. How do I do this now? This is what happens when I do sudo apt-get remove disco-master: removing disco-master ... invoke-rc.d: unknown initscript, /etc/init.d/disco-master not found. dpkg: error processing disco-master (--remove): subprocess installed pre-removal script returned error exit status 100 Errors were encountered while processing: disco-master E: Sub-process /usr/bin/dpkg returned an error code (1) When I do sudo apt-get install --reinstall disco-master I get the following: You might want to run 'apt-get -f install' to correct these: The following packages have unmet dependencies: disco-master : Depends: disco-node (= 0.4.2+nmu1) but it is not going to be installed E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution). When I do sudo apt-get -f install I get this: Unpacking disco-node (from .../disco-node_0.4.2+nmu1_amd64.deb) ... dpkg: error processing /var/cache/apt/archives/disco-node_0.4.2+nmu1_amd64.deb (--unpack): trying to overwrite '/usr/lib/disco/master/ebin/disco.app', which is also in package disco-master 0.4.1 No apport report written because MaxReports is reached already dpkg-deb: error: subprocess paste was killed by signal (Broken pipe) Errors were encountered while processing: /var/cache/apt/archives/disco-node_0.4.2+nmu1_amd64.deb E: Sub-process /usr/bin/dpkg returned an error code (1) When I run sudo apt-get remove disco-node I get the following: Package disco-node is not installed, so not removed You might want to run 'apt-get -f install' to correct these: The following packages have unmet dependencies: disco-master : Depends: disco-node (= 0.4.1) but it is not going to be installed Depends: python-disco (= 0.4.1) but 0.4.2+nmu1 is to be installed E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution). When I did sudo dpkg -P --force-all disco-master I got: Removing disco-master ... invoke-rc.d: unknown initscript, /etc/init.d/disco-master not found. dpkg: error processing disco-master (--purge): subprocess installed pre-removal script returned error exit status 100 Errors were encountered while processing: disco-master
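
    The standard escape hatch for a failing maintainer script (a sketch, assuming dpkg's usual layout under /var/lib/dpkg/info/) is to replace the broken pre-removal script with a no-op and retry the removal:

        # Back up, then neutralize the prerm script that dpkg runs on removal.
        sudo cp /var/lib/dpkg/info/disco-master.prerm /tmp/disco-master.prerm.bak
        printf '#!/bin/sh\nexit 0\n' | sudo tee /var/lib/dpkg/info/disco-master.prerm
        sudo chmod 755 /var/lib/dpkg/info/disco-master.prerm

        # With the prerm silenced, removal (or purge) can proceed.
        sudo dpkg --remove disco-master

    After that, apt-get -f install should be able to untangle the remaining disco-node/python-disco dependency conflicts.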

  • Script/tool to import series of snapshots, each being a new edition, into Git, populating source tree?

    - by Rob
    I've developed code locally and taken a fairly regular snapshot whenever I reach a significant point in development, e.g. a working build. So I have a long-ish list of about 40 folders, each folder being a snapshot, in ascending YYYYMMDD date order, e.g.: 20100523 20100614 20100721 20100722 20100809 20100901 20101001 20101003 20101104 20101119 20101203 20101218 20110102 I'm looking for a script to import each of these snapshots into Git, the end result being that the latest code is the same as the last snapshot, and the other editions are accessible and numbered as above. Some other requirements: the latest edition must not be cumulative of the previous snapshots, i.e., files that appeared in older snapshots but which don't appear in later ones (e.g. due to refactoring etc.) should not appear in the latest edition of the code. Meanwhile, there should be continuity between files that do persist between snapshots: I would like Git to know that there are previous editions of these files and not treat them as brand-new files within each edition. Some background about my aim: I need to formally revision control this work rather than keep local private snapshot copies, and I plan to release this work as open source, so version control is highly recommended. I am evaluating some of the current popular version control systems (Subversion and Git), but I definitely need a working solution in Git as well as Subversion. I'm not looking to be persuaded to use one particular tool; I need a solution for each tool I am considering. (I have posted an answer separately for each tool so separate camps of folks who have expertise in Git and Subversion will be able to give focused answers on one or the other.) The same but separate question for Subversion: Script/tool to import series of snapshots, each being a new revision, into Subversion, populating source tree?
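
    A minimal import sketch, assuming the snapshot folders live outside the repository under a placeholder $SRC path and sort correctly by name:

        #!/bin/bash
        # Turn dated snapshot folders into successive Git commits.
        # Assumes $SRC holds folders named YYYYMMDD and the current directory
        # is a freshly initialized, empty repository (git init already run).
        SRC=/path/to/snapshots        # placeholder

        for snap in "$SRC"/*/; do
            name=$(basename "$snap")
            # Empty the work tree (except .git) so files dropped between
            # snapshots also disappear from the later commits.
            find . -mindepth 1 -maxdepth 1 ! -name .git -exec rm -rf {} +
            cp -R "${snap}." .
            git add -A                # stages additions, edits and deletions
            git commit -m "Snapshot $name" \
                --date "${name:0:4}-${name:4:2}-${name:6:2}T12:00:00"
        done

    Because git add -A records deletions too, each commit is exactly one snapshot, while unchanged files keep their history automatically: Git snapshots whole trees and reconstructs continuity (and renames) from matching paths and contents at diff time.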

  • Jenkins Parameterized Trigger + Copy Artifact

    - by Josh Kelley
    I'm working on setting up Jenkins to handle our release builds. A release build consists of a Windows installer that includes some binaries that must be built on Linux. Here's what I have so far: The Windows portion and Linux portion are set up as separate Jenkins projects. The Windows project is parameterized, taking the Subversion tag to build and release. As part of its build, the Windows project triggers a build of that same Subversion tag for the Linux project (using the Parameterized Trigger plugin) then copies the artifacts from the Linux project (using the Copy Artifact plugin) to the Windows project's workspace so that they can be included in the Windows installer. Where I'm stuck: Right now, Copy Artifact is set up to copy the last successful build. It seems more robust to configure Copy Artifact to copy from the exact build that Parameterized Trigger triggered, but I'm having trouble figuring out how to make that work. There's an option for a "build selector" parameter that I think is intended to help with this, but I can't figure out how it's supposed to be set up (and blindly experimenting with different possibilities is somewhat painful when the build takes an hour or two to find success or failure). How should I set this up? How does build selector work?
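
    One commonly suggested wiring (a sketch; option labels and the exported variable name may differ between plugin versions, and linux-build is a placeholder job name): trigger the Linux job as a blocking build step, which exposes the triggered build's number, then point Copy Artifact at that exact build:

        # Windows job configuration, in order:
        #
        # Build step: "Trigger/call builds on other projects"
        #   (Parameterized Trigger plugin, with "Block until the triggered
        #    projects finish their builds" enabled)
        #     Projects to build:     linux-build
        #     Predefined parameters: SVN_TAG=$SVN_TAG
        #
        # The blocking step exports the triggered build number as an
        # environment variable, e.g. TRIGGERED_BUILD_NUMBER_linux_build.
        #
        # Build step: "Copy artifacts from another project" (Copy Artifact)
        #     Project name: linux-build
        #     Which build:  Specific build
        #     Build number: ${TRIGGERED_BUILD_NUMBER_linux_build}

    The "build selector" parameter type is aimed at the reverse flow (letting a human or an upstream job tell this job which build to copy from); for a trigger-then-copy pipeline inside one job, the exported build number is the more direct route.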

  • How can I use a script to control a VirtualBox guest?

    - by TheWickerman666
    Refer to: Launch an application in Windows from the Ubuntu desktop. I was wondering if Takkat could elaborate on the actual execution, i.e. the how-to in the script file. That would be greatly helpful. Thanks in advance. My script file InternetExplorerVM.sh looks like this, and it is executed as /path/to/InternetExplorerVM.sh "C:\Program Files\Internet Explorer\iexplore.exe": #!/bin/bash # start Internet Explorer inside of a Windows7 Ultimate VM echo "Starting 'Internet Explorer' browser inside Windows7 virtual machine" echo "" sleep 1 echo "Please be patient" VBoxManage startvm b307622e-6b5e-4e47-a427-84760cf2312b sleep 15 echo "" echo "Now starting 'Internet Explorer'" ##VBoxManage --nologo guestcontrol b307622e-6b5e-4e47-a427-84760cf2312b execute --image "$1" --username RailroadGuest --password bnsf1234 VBoxManage --nologo guestcontrol b307622e-6b5e-4e47-a427-84760cf2312b execute --image "C:\\Program/ Files\\Internet/ Explorer\\iexplore.exe" --username RailroadGuest --password bnsf1234 --wait-exit --wait-stdout echo "" echo "Saving the VM's state now" VBoxManage controlvm b307622e-6b5e-4e47-a427-84760cf2312b savestate sleep 2 #Check VM state echo "" echo "Check the VM state" VBoxManage showvminfo b307622e-6b5e-4e47-a427-84760cf2312b | grep State exit My apologies for any mistakes; this is my first time posting on Ask Ubuntu. Thanks a ton in advance. This has been very helpful. I need this for BNSF guests; their mainframe emulator works exclusively in Java-enabled Internet Explorer.
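
    For reference, a cleaned-up sketch of such a wrapper (the VM UUID, credentials and guest path are taken from the question; guestcontrol execute matches the VirtualBox 4.x command line, while newer releases renamed it to guestcontrol run):

        #!/bin/bash
        # Start a program inside a Windows guest, then save the VM state.
        VM="b307622e-6b5e-4e47-a427-84760cf2312b"
        GUEST_EXE='C:\Program Files\Internet Explorer\iexplore.exe'

        VBoxManage startvm "$VM"
        sleep 15    # crude boot wait; polling 'VBoxManage guestproperty' is more robust

        VBoxManage --nologo guestcontrol "$VM" execute \
            --image "$GUEST_EXE" \
            --username RailroadGuest --password bnsf1234 \
            --wait-exit

        VBoxManage controlvm "$VM" savestate

    Quoting the whole guest path once (single quotes on the host side) avoids the escaped-space gymnastics in the original command.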

  • How to build a good service layer in ASP.NET?

    - by Swippen
    I have looked through some questions and technologies for building a good service layer, but I have some questions regarding this that I need help with. First, some information about my requirements. We currently have a number of web applications that talk to each other in a spiderweb-looking way (all talking to each other in a confusing way via web services and database data). We want to change this so that all applications go through a service layer where we can work more with caching, encapsulate common functionality, and more. We want this layer to also have a Web API so that 3rd-party clients can consume information from the service. The problem I see is that if we build the service layer with, say, MVC4 Web API, don't we need to communicate between the applications using the Web API, meaning we have to construct URLs and consume JSON/XML? That does not sound very efficient. I assume a better method would be working with entities and WCF to communicate between the applications, but then we might lose the Web API magic? So the question is whether there is a way to consume a service layer both as a Web API (JSON/XML) and as a more backend service layer with entities. If we are forced to use 2 different service layers we might have to duplicate some functionality, and other bad things. Hope the question is clear enough; please ask if you need any more information.
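
    One common shape for this (a sketch with invented names, not a prescription): keep the logic in a plain service class that internal applications reference directly as a class library, and expose the same class to third parties through a thin Web API controller, so only external clients pay the HTTP/serialization cost:

        // Shared class library: internal apps call this in-process.
        public class ProductService
        {
            public Product GetProduct(int id)
            {
                // caching, shared business rules, data access, etc.
                return new Product { Id = id };
            }
        }

        // Thin ASP.NET Web API facade for 3rd-party JSON/XML clients.
        public class ProductsController : ApiController
        {
            private readonly ProductService _service = new ProductService();

            public Product Get(int id)
            {
                return _service.GetProduct(id);
            }
        }

        public class Product
        {
            public int Id { get; set; }
        }

    With this split there is one implementation of the functionality; the Web API layer is only glue, so nothing needs to be duplicated.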

  • Using the Promoted Builds plugin to tag a Subversion repository in Jenkins

    - by mark
    We have a task which builds based on data from 4 different SVN repositories. I want to allow QA to promote a build, so that the revisions participating in the build are tagged with the build number and some optional label. I have encountered the following problem: the promoted build may not be the most recent build. How do I know the SVN revision of each of the four repositories used during that build? I know that each build has this information in the revision.txt and build.xml files associated with the build, but how does it become available in the context of promotion? Thanks. P.S. Asked here before, but did not get a satisfying answer.
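
    A sketch of how the promotion step could recover and tag those revisions (hedged: PROMOTED_JOB_NAME and PROMOTED_NUMBER are provided by the promoted-builds plugin, but the "URL/REVISION per line" layout of revision.txt and the tag base URL are assumptions to verify against one of your builds):

        #!/bin/bash
        # Shell step inside the promotion process.
        BUILD_DIR="$JENKINS_HOME/jobs/$PROMOTED_JOB_NAME/builds/$PROMOTED_NUMBER"
        LABEL="${PROMOTION_LABEL:-build-$PROMOTED_NUMBER}"   # optional label parameter
        TAG_BASE="http://svn.example.com/tags"               # placeholder

        while IFS= read -r line; do
            url="${line%/*}"     # repository URL
            rev="${line##*/}"    # revision that build actually used
            svn copy -r "$rev" "$url" "$TAG_BASE/$LABEL/$(basename "$url")" \
                -m "Promotion: tagging $url@$rev as $LABEL"
        done < "$BUILD_DIR/revision.txt"

    Because the revisions come from the promoted build's own revision.txt, this works even when QA promotes an older, non-latest build.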

  • When uninstalling all GRUB packages in an EC2 AMI build script, how do I bypass prompting?

    - by Skaperen
    When I try to uninstall all GRUB packages from the cloud-image version of Ubuntu I use in AWS EC2, I get an interactive prompt. I need to put all of this in a script under automation, so prompts need to be avoided. I have searched for ways to bypass this, and all of the features like debconf apply to prompts for config settings which can be provided in advance. But the prompt I am getting is not a config prompt; it would be in the class of "are you sure" prompts. I tried --force-all and that did not work. The prompt refers to the files in /boot/grub, so I tried first doing "rm -fr /boot/grub" (they do all go away), and it still prompts to ask me if I want to delete them even though there is nothing to delete. The wording of the prompt is: Do you want to have all GRUB 2 files removed from /boot/grub? This will make the system unbootable unless another boot loader is installed. Remove GRUB 2 from /boot/grub? <YES> <NO> The command I am running is: dpkg --purge grub-common grub-gfxpayload-lists grub-legacy-ec2 grub-pc grub-pc-bin grub2-common How can I get past this prompt without trying to encapsulate it in something that tries to answer it (which itself has issues when there is no TTY, so I want to avoid that)?
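
    That prompt is in fact a debconf confirmation raised by grub-pc's maintainer scripts, so it can be pre-answered; a sketch (the template name grub-pc/postrm_purge_boot_grub is an assumption; confirm it with debconf-get-selections | grep grub):

        #!/bin/sh
        # Pre-answer the confirmation, then purge with no TTY interaction.
        echo 'grub-pc grub-pc/postrm_purge_boot_grub boolean true' | \
            sudo debconf-set-selections

        sudo env DEBIAN_FRONTEND=noninteractive \
            dpkg --purge grub-common grub-gfxpayload-lists grub-legacy-ec2 \
                         grub-pc grub-pc-bin grub2-common

    This would also explain why --force-all did not help: dpkg's force flags do not reach into questions asked by the package's own scripts.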

  • How can I center XHTML content with CSS?

    - by drea
    so I recently converted a website of mine from a table content format to a div content format. Table format Version: Table version of the website: here. Table version style CSS: body { width: 1020px; margin: 0 auto; background-image: url(images/bg.png); } .logo{ width:301px; height:151px; background:url(images/logo.png); text-indent:-9999px; border:none; cursor:pointer; } .logo:hover { opacity:0.9; } .signin{ width:69px; height:30px; background:url(images/signin.png); text-indent:-9999px; border:none; cursor:pointer; } .signin:hover { opacity:0.9; } .register{ width:79px; height:30px; background:url(images/register.png); text-indent:-9999px; border:none; cursor:pointer; } .register:hover { opacity:0.9; } .Contact_Us{ width:53px; height:9px; background:url(images/Contact_Us.png); text-indent:-9999px; border:none; cursor:pointer; } .Contact_Us:hover { opacity:0.9; } .Code_of_Conduct{ width:84px; height:9px; background:url(images/Code_of_Conduct.png); text-indent:-9999px; border:none; cursor:pointer; } .Code_of_Conduct:hover { opacity:0.9; } .Privacy_Policy{ width:65px; height:12px; background:url(images/Privacy_Policy.png); text-indent:-9999px; border:none; cursor:pointer; } .Privacy_Policy:hover { opacity:0.9; } .Copyright{ width:149px; height:9px; background:url(images/Copyright.png); text-indent:-9999px; border:none; cursor:pointer; } .Copyright:hover { opacity:0.9; } .slideshow{ width:301px; height:151px; background: url(slideshow.png), url(minecraft.png), url(tf2.png), url(CSS.png), url(GM.png), url(aos.png), url(CSGO.png), url(voip.png), text-indent:-9999px; border:none; cursor:pointer; } .slideshow:hover { opacity:0.9; } Table version source: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"> <html> <head profile="http://www.w3.org/2005/10/profile"> <link rel="icon" type="image/png" href="http://www.xodusen.com/resources/images/favicon.png"> <title>Welcome to XodusEN</title> <script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/jquery/1.8.0/jquery.min.js"></script> <script type="text/javascript" src="http://cloud.github.com/downloads/malsup/cycle/jquery.cycle.all.latest.js"></script> <script type="text/javascript"> $(document).ready(function() { $('.slideshow').cycle({ fx: 'fade' // choose your transition type, ex: fade, scrollUp, shuffle, etc... }); }); </script> <meta name="description" content="This is the homepage of XodusEN. Xodus Entertainment Network is a unique & friendly Gaming Community that welcomes & realises the potential, and value within any user regardless of their origin. 
" > <meta name="keywords" content="XeN, Xodus, XEN, xen, Xodus Entertainment Network, gaming, community, PC, Steam, XBL, Xbox 360, PSN, Playstation, games, Gaming, Community, XodusEN, Gaming Network, Network, TF2, Server, CS:S, Minecraft, premium, servers, Counter-Strike: Source, Website, Homepage, Minecraftia" > <meta http-equiv="Content-Type" content="text/html; charset=utf-8"> <link href="style.css" rel="stylesheet" type="text/css"> <!--[if IE]> <script type="text/javascript"> window.location = "http://www.xodusen.com/ie/"; </script> <![endif]--> </head> <body bgcolor="#d7d7d7"> <table id="Table_01" border="0" cellpadding="0" cellspacing="0"> <tr> <td colspan="18"> <img src="images/index_01.png" width="1020" height="9" alt=""></td> <td> <img src="images/spacer.gif" width="1" height="9" alt=""></td> </tr> <tr> <td colspan="11" rowspan="2"> <img src="images/index_02.png" width="826" height="252" alt=""></td> <td> <a id="signin" class="signin" href="http://s.xodusen.com/VrtqYm"> <img src="images/signin.png" width="69" height="30" border="0" alt=""></a> <td rowspan="6"> <img src="images/index_04.png" width="3" height="643" alt=""></td> <td colspan="3"> <a id="register" class="register" href="http://s.xodusen.com/WW3rpZ"> <img src="images/Register.png" width="79" height="30" border="0" alt=""></a> <td colspan="2" rowspan="6"> <img src="images/index_06.png" width="43" height="643" alt=""></td> <td> <img src="images/spacer.gif" width="1" height="30" alt=""></td> </tr> <tr> <td rowspan="5"> <img src="images/index_07.png" width="69" height="613" alt=""></td> <td colspan="3" rowspan="5"> <img src="images/index_08.png" width="79" height="613" alt=""></td> <td> <img src="images/spacer.gif" width="1" height="222" alt=""></td> </tr> <tr> <td colspan="5"> <img src="images/index_09.png" width="385" height="53" alt=""></td> <td> <img src="images/index_10.png" width="250" height="53" alt=""></td> <td colspan="5"> <img src="images/index_11.png" width="191" height="53" alt=""></td> <td> <img src="images/spacer.gif" width="1" height="53" alt=""></td> </tr> <tr> <td colspan="4" rowspan="3"> <img src="images/index_09-13.png" width="360" height="338" alt=""></td> <td colspan="3"> <a id="logo" class="logo" href="http://www.xodusen.com/community"> <img src="images/logo.png" alt=""></a> </td> <td colspan="4" rowspan="3"> <img src="images/index_11-15.png" width="165" height="338" alt=""></td> <td> <img src="images/spacer.gif" width="1" height="151" alt=""></td> </tr> <tr> <td rowspan="2"> <img src="images/index_09-16.png" width="25" height="187" alt=""></td> <td> <img src="images/index_16.png" width="250" height="46" alt=""></td> <td rowspan="2"> <img src="images/index_11-18.png" width="26" height="187" alt=""></td> <td> <img src="images/spacer.gif" width="1" height="46" alt=""></td> </tr> <tr> <td> <img src="images/index_12.png" width="250" height="141" alt=""></td> <td> <img src="images/spacer.gif" width="1" height="141" alt=""></td> </tr> <tr> <td rowspan="7"> <img src="images/index_13.png" width="27" height="548" alt=""></td> <td colspan="16" id="slideshow" class="slideshow"> <a href="http://www.xodusen.com/community"><img src="images/slideshow.png" width="960" height="305" alt=""></a> <a href="http://www.xodusen.com/mcurl"><img src="images/minecraft.png" width="960" height="305" alt=""></a> <a href="steam://connect/74.121.188.194:27015"><img src="images/tf2.png" width="960" height="305" alt=""></a> <a href="steam://connect/74.121.188.195:27015"><img src="images/CSS.png" width="960" height="305" alt=""></a> <a 
href="steam://connect/74.121.188.197:27015"><img src="images/GM.png" width="960" height="305" alt=""></a> <a href="aos://3267131722:32887"><img src="images/aos.png" width="960" height="305" alt=""></a> <a href="steam://connect/74.121.188.196:27015"><img src="images/CSGO.png" width="960" height="305" alt=""></a></td> <td rowspan="7"> <img src="images/index_15.png" width="33" height="548" alt=""></td> <td> <img src="images/spacer.gif" width="1" height="305" alt=""></td> </tr> <tr> <td colspan="16"> <img src="images/index_16-23.png" width="960" height="155" alt=""></td> <td> <img src="images/spacer.gif" width="1" height="155" alt=""></td> </tr> <tr> <td rowspan="5"> <img src="images/index_17.png" width="38" height="88" alt=""></td> <td rowspan="2"> <a id="Copyright" class="Copyright" href="http://www.xodusen.com/community"> <img src="images/Copyright.png" width="149" height="9" border="0" alt=""></a></td> <td colspan="14"> <img src="images/index_25.png" width="773" height="5" alt=""></td> <td> <img src="images/spacer.gif" width="1" height="5" alt=""></td> </tr> <tr> <td colspan="5" rowspan="4"> <img src="images/index_20.png" width="527" height="83" alt=""></td> <td rowspan="3"> <a id="Privacy_Policy" class="Privacy_Policy" href="http://s.xodusen.com/VhGEkH"> <img src="images/Privacy_Policy.png" width="65" height="12" border="0" alt=""></a></td> <td rowspan="4"> <img src="images/index_28.png" width="8" height="83" alt=""></td> <td colspan="3" rowspan="2"> <a id="Code_of_Conduct" class="Code_of_Conduct" href="http://s.xodusen.com/Tf5Gz7"> <img src="images/Code_of_Conduct.png" width="84" height="9" border="0" alt=""></a></td> <td rowspan="4"> <img src="images/index_30.png" width="6" height="83" alt=""></td> <td rowspan="2"> <a id="Contact_Us" class="Contact_Us" href="http://s.xodusen.com/T5EYsG"> <img src="images/Contact_Us.png" width="53" height="9" border="0" alt=""></a></td> <td colspan="2" rowspan="4"> <img src="images/index_26.png" width="30" height="83" alt=""></td> <td> <img src="images/spacer.gif" width="1" height="4" alt=""></td> </tr> <tr> <td rowspan="3"> <img src="images/index_27.png" width="149" height="79" alt=""></td> <td> <img src="images/spacer.gif" width="1" height="5" alt=""></td> </tr> <tr> <td colspan="3" rowspan="2"> <img src="images/index_28-35.png" width="84" height="74" alt=""></td> <td rowspan="2"> <img src="images/index_29.png" width="53" height="74" alt=""></td> <td> <img src="images/spacer.gif" width="1" height="3" alt=""></td> </tr> <tr> <td> <img src="images/index_30-37.png" width="65" height="71" alt=""></td> <td> <img src="images/spacer.gif" width="1" height="71" alt=""></td> </tr> <tr> <td> <img src="images/spacer.gif" width="27" height="1" alt=""></td> <td> <img src="images/spacer.gif" width="38" height="1" alt=""></td> <td> <img src="images/spacer.gif" width="149" height="1" alt=""></td> <td> <img src="images/spacer.gif" width="146" height="1" alt=""></td> <td> <img src="images/spacer.gif" width="25" height="1" alt=""></td> <td> <img src="images/spacer.gif" width="250" height="1" alt=""></td> <td> <img src="images/spacer.gif" width="26" height="1" alt=""></td> <td> <img src="images/spacer.gif" width="80" height="1" alt=""></td> <td> <img src="images/spacer.gif" width="65" height="1" alt=""></td> <td> <img src="images/spacer.gif" width="8" height="1" alt=""></td> <td> <img src="images/spacer.gif" width="12" height="1" alt=""></td> <td> <img src="images/spacer.gif" width="69" height="1" alt=""></td> <td> <img src="images/spacer.gif" width="3" height="1" 
alt=""></td> <td> <img src="images/spacer.gif" width="6" height="1" alt=""></td> <td> <img src="images/spacer.gif" width="53" height="1" alt=""></td> <td> <img src="images/spacer.gif" width="20" height="1" alt=""></td> <td> <img src="images/spacer.gif" width="10" height="1" alt=""></td> <td> <img src="images/spacer.gif" width="33" height="1" alt=""></td> <td></td> </tr> </table> </body> </html> Div format Version: Div version of the website: here. Div version style CSS: body { width: 1020px; margin: 0 auto; background-image: url(images/bg.png); } #Table_01 { position:absolute; left:0px; top:0px; width:1020px; height:1200px; } #index-01_ { position:absolute; left:0px; top:0px; width:1020px; height:9px; } #index-02_ { position:absolute; left:0px; top:9px; width:826px; height:305px; } #Signin_ { position:absolute; left:826px; top:9px; width:69px; height:30px; } #index-04_ { position:absolute; left:895px; top:9px; width:3px; height:643px; } #Register_ { position:absolute; left:898px; top:9px; width:79px; height:30px; } #index-06_ { position:absolute; left:977px; top:9px; width:43px; height:643px; } #index-07_ { position:absolute; left:826px; top:39px; width:69px; height:613px; } #index-08_ { position:absolute; left:898px; top:39px; width:79px; height:613px; } #index-09_ { position:absolute; left:0px; top:314px; width:360px; height:338px; } #Logo_ { position:absolute; left:360px; top:314px; width:301px; height:151px; } #index-11_ { position:absolute; left:661px; top:314px; width:165px; height:338px; } #index-12_ { position:absolute; left:360px; top:465px; width:301px; height:187px; } #index-13_ { position:absolute; left:0px; top:652px; width:27px; height:548px; } #Slideshow_ { position:absolute; left:27px; top:652px; width:960px; height:305px; } #index-15_ { position:absolute; left:987px; top:652px; width:33px; height:548px; } #index-16_ { position:absolute; left:27px; top:957px; width:960px; height:155px; } #index-17_ { position:absolute; left:27px; top:1112px; width:39px; height:88px; } #Copyright_ { position:absolute; left:66px; top:1112px; width:148px; height:13px; } #index-19_ { position:absolute; left:214px; top:1112px; width:773px; height:5px; } #index-20_ { position:absolute; left:214px; top:1117px; width:526px; height:83px; } #Privacy-Policy_ { position:absolute; left:740px; top:1117px; width:68px; height:23px; } #index-22_ { position:absolute; left:808px; top:1117px; width:6px; height:83px; } #Code-of-Conduct_ { position:absolute; left:814px; top:1117px; width:84px; height:23px; } #index-24_ { position:absolute; left:898px; top:1117px; width:2px; height:83px; } #Contact-Us_ { position:absolute; left:900px; top:1117px; width:57px; height:23px; } #index-26_ { position:absolute; left:957px; top:1117px; width:30px; height:83px; } #index-27_ { position:absolute; left:66px; top:1125px; width:148px; height:75px; } #index-28_ { position:absolute; left:740px; top:1140px; width:68px; height:60px; } #index-29_ { position:absolute; left:814px; top:1140px; width:84px; height:60px; } #index-30_ { position:absolute; left:900px; top:1140px; width:57px; height:60px; } .logo{ width:301px; height:151px; background:url(images/logo.png); text-indent:-9999px; border:none; cursor:pointer; } .logo:hover { opacity:0.9; } .signin{ width:69px; height:30px; background:url(images/signin.png); text-indent:-9999px; border:none; cursor:pointer; } .signin:hover { opacity:0.9; } .register{ width:79px; height:30px; background:url(images/register.png); text-indent:-9999px; border:none; cursor:pointer; } .register:hover 
{ opacity:0.9; } .contact_Us{ width:53px; height:9px; background:url(images/Contact_Us.png); text-indent:-9999px; border:none; cursor:pointer; } .contact_Us:hover { opacity:0.9; } .code_of_Conduct{ width:84px; height:9px; background:url(images/Code_of_Conduct.png); text-indent:-9999px; border:none; cursor:pointer; } .code_of_Conduct:hover { opacity:0.9; } .privacy_policy{ width:65px; height:12px; background:url(images/Privacy_Policy.png); text-indent:-9999px; border:none; cursor:pointer; } .privacy_policy:hover { opacity:0.9; } .copyright{ width:148px; height:13px; background:url(images/Copyright.png); text-indent:-9999px; border:none; cursor:pointer; } .copyright:hover { opacity:0.9; } .slideshow{ width:301px; height:151px; background: url(slideshow.png), url(minecraft.png), url(tf2.png), url(CSS.png), url(GM.png), url(aos.png), url(CSGO.png), url(voip.png), text-indent:-9999px; border:none; cursor:pointer; } .slideshow:hover { opacity:0.9; } Div version source: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <script type="text/javascript"> //<![CDATA[ window.__CF=window.__CF||{};window.__CF.AJS={"vig_key":{"sid":"c6d1454039dd49b1c8400bbfdf74df7a"},"trumpet":{"message":"XodusEN is undergoing background maintenance, that will provide performance & graphical improvements to our system, but will not hinder your experience across our services."},"ga_key":{"ua":"UA-35779435-1","ga_bs":"2"},"exprmntly":{"service_id":"7967"},"cdnjs":{"__h":"1","cdnjs":"MO,GF,FX,CS,JS"},"abetterbrowser":{"ie":"10"}}; //]]> </script> <script type="text/javascript"> //<![CDATA[ try{if (!window.CloudFlare) { var CloudFlare=[{verbose:0,p:0,byc:0,owlid:"cf",mirage:{responsive:0,lazy:0},oracle:0,paths:{cloudflare:"/cdn-cgi/nexp/aav=1870252173/"},atok:"d6e39f49946fcb6d690f0d10d5a963f3",zone:"xodusen.com",rocket:"a",apps:{"vig_key":{"sid":"c6d1454039dd49b1c8400bbfdf74df7a"},"trumpet":{"message":"XodusEN is undergoing background maintenance, that will provide performance & graphical improvements to our system, but will not hinder your experience across our services."},"ga_key":{"ua":"UA-35779435-1","ga_bs":"2"},"exprmntly":{"service_id":"7967"},"cdnjs":{"__h":"1","cdnjs":"MO,GF,FX,CS,JS"},"abetterbrowser":{"ie":"10"}}}];document.write('<script type="text/javascript" src="//ajax.cloudflare.com/cdn-cgi/nexp/aav=4114775854/cloudflare.min.js"><'+'\/script>')}}catch(e){}; //]]> </script> <script type="text/javascript" src="//ajax.cloudflare.com/cdn-cgi/nexp/aav=1566821048/appsh.min.js"></script><script type="text/javascript">__CF.AJS.inith();</script><link rel="icon" type="image/png" href="http://www.xodusen.com/resources/images/favicon.png"> <title>Welcome to XodusEN</title> <meta name="description" content="This is the homepage of XodusEN. Xodus Entertainment Network is a unique & friendly Gaming Community that welcomes & realises the potential, and value within any user regardless of their origin. 
"> <meta name="keywords" content="XeN, Xodus, XEN, xen, Xodus Entertainment Network, gaming, community, PC, Steam, XBL, Xbox 360, PSN, Playstation, games, Gaming, Community, XodusEN, Gaming Network, Network, TF2, Server, CS:S, Minecraft, premium, servers, Counter-Strike: Source, Website, Homepage, Minecraftia"> <meta http-equiv="Content-Type" content="text/html; charset=utf-8"/> <link href="style.css" rel="stylesheet" type="text/css"> <script type="text/rocketscript" data-rocketsrc="https://ajax.googleapis.com/ajax/libs/jquery/1.8.0/jquery.min.js"></script> <script type="text/rocketscript" data-rocketsrc="http://cloud.github.com/downloads/malsup/cycle/jquery.cycle.all.latest.js"></script> <script type="text/rocketscript"> $(document).ready(function() { $('.slideshow').cycle({ fx: 'fade' // choose your transition type, ex: fade, scrollUp, shuffle, etc... }); }); </script> <!--[if IE]> <script type="text/javascript"> window.location = "http://www.xodusen.com/ie/"; </script> <![endif]--> <script type="text/javascript"> /* <![CDATA[ */ var _gaq = _gaq || []; _gaq.push(['_setAccount', 'UA-35779435-1']); _gaq.push(['_trackPageview']); (function() { var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true; ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js'; var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s); })(); (function(b){(function(a){"__CF"in b&&"DJS"in b.__CF?b.__CF.DJS.push(a):"addEventListener"in b?b.addEventListener("load",a,!1):b.attachEvent("onload",a)})(function(){"FB"in b&&"Event"in FB&&"subscribe"in FB.Event&&(FB.Event.subscribe("edge.create",function(a){_gaq.push(["_trackSocial","facebook","like",a])}),FB.Event.subscribe("edge.remove",function(a){_gaq.push(["_trackSocial","facebook","unlike",a])}),FB.Event.subscribe("message.send",function(a){_gaq.push(["_trackSocial","facebook","send",a])}));"twttr"in b&&"events"in twttr&&"bind"in twttr.events&&twttr.events.bind("tweet",function(a){if(a){var b;if(a.target&&a.target.nodeName=="IFRAME")a:{if(a=a.target.src){a=a.split("#")[0].match(/[^?=&]+=([^&]*)?/g);b=0;for(var c;c=a[b];++b)if(c.indexOf("url")===0){b=unescape(c.split("=")[1]);break a}}b=void 0}_gaq.push(["_trackSocial","twitter","tweet",b])}})})})(window); /* ]]> */ </script> <meta name="pinterest" content="nopin"/></head> <body style="background-color:#d7d7d7;"><script type="text/javascript"> //<![CDATA[ try{(function(a){var b="http://",c="www.xodusen.com",d="/cdn-cgi/cl/",e="618e40fe1e01787d9cb9aa2f8abc52caf8a32796.gif",f=new a;f.src=[b,c,d,e].join("")})(Image)}catch(e){} //]]> </script> <div id="Table_01"> <div id="index-01_"> <img id="index_01" src="images/index_01.png" width="1020" height="9" alt=""/> </div> <div id="index-02_"> <img id="index_02" src="images/index_02.png" width="826" height="305" alt=""/> </div> <div id="Signin_"> <a href="http://s.xodusen.com/VrtqYm"> <img id="Signin" class="signin" src="images/Signin.png" width="69" height="30" border="0" alt=""/></a> </div> <div id="index-04_"> <img id="index_04" src="images/index_04.png" width="3" height="643" alt=""/> </div> <div id="Register_"> <a href="http://s.xodusen.com/WW3rpZ"> <img id="Register" class="register" src="images/Register.png" width="79" height="30" alt=""/></a> </div> <div id="index-06_"> <img id="index_06" src="images/index_06.png" width="43" height="643" alt=""/> </div> <div id="index-07_"> <img id="index_07" src="images/index_07.png" width="69" height="613" alt=""/> </div> 
<div id="index-08_"> <img id="index_08" src="images/index_08.png" width="79" height="613" alt=""/> </div> <div id="index-09_"> <img id="index_09" src="images/index_09.png" width="360" height="338" alt=""/> </div> <div id="Logo_"> <a href="http://s.xodusen.com/WW3rpZ"> <img class="logo" src="images/Logo.png" width="301" height="151" alt=""></a> </div> <div id="index-11_"> <img id="index_11" src="images/index_11.png" width="165" height="338" alt=""/> </div> <div id="index-12_"> <img id="index_12" src="images/index_12.png" width="301" height="187" alt=""/> </div> <div id="index-13_"> <img id="index_13" src="images/index_13.png" width="27" height="548" alt=""/> </div> <div id="Slideshow_" class="slideshow"> <a href="http://www.xodusen.com/community"> <img src="images/slideshow.png" width="960" height="305" alt=""></a> <a href="http://www.xodusen.com/mcurl"> <img src="images/minecraft.png" width="960" height="305" alt=""></a> <a href="steam://connect/74.121.188.194:27015"> <img src="images/tf2.png" width="960" height="305" alt=""></a> <a href="steam://connect/74.121.188.195:27015"> <img src="images/CSS.png" width="960" height="305" alt=""></a> <a href="steam://connect/74.121.188.197:27015"> <img src="images/GM.png" width="960" height="305" alt=""></a> <a href="aos://3267131722:32887"> <img src="images/aos.png" width="960" height="305" alt=""></a> <a href="steam://connect/74.121.188.196:27015"> <img src="images/CSGO.png" width="960" height="305" alt=""></a> </div> <div id="index-15_"> <img id="index_15" src="images/index_15.png" width="33" height="548" alt=""/> </div> <div id="index-16_"> <img id="index_16" src="images/index_16.png" width="960" height="155" alt=""/> </div> <div id="index-17_"> <img id="index_17" src="images/index_17.png" width="39" height="88" alt=""/> </div> <div id="Copyright_"> <a href="http://www.xodusen.com/community"> <img id="Copyright" src="images/Copyright.png" width="148" height="13" alt=""></a> </div> <div id="index-19_"> <img id="index_19" src="images/index_19.png" width="773" height="5" alt=""/> </div> <div id="index-20_"> <img id="index_20" src="images/index_20.png" width="526" height="83" alt=""/> </div> <div id="Privacy-Policy_"> <a href="http://s.xodusen.com/VhGEkH"> <img id="Privacy_Policy" src="images/Privacy_Policy.png" width="68" height="23" alt=""></a> </div> <div id="index-22_"> <img id="index_22" src="images/index_22.png" width="6" height="83" alt=""/> </div> <div id="Code-of-Conduct_"> <a href="http://s.xodusen.com/Tf5Gz7"> <img id="Code_of_Conduct" src="images/Code_of_Conduct.png" width="84" height="23" alt=""></a> </div> <div id="index-24_"> <img id="index_24" src="images/index_24.png" width="2" height="83" alt=""/> </div> <div id="Contact-Us_"> <a href="http://s.xodusen.com/T5EYsG"> <img id="Contact_Us" src="images/Contact_Us.png" width="57" height="23" alt=""></a> </div> <div id="index-26_"> <img id="index_26" src="images/index_26.png" width="30" height="83" alt=""/> </div> <div id="index-27_"> <img id="index_27" src="images/index_27.png" width="148" height="75" alt=""/> </div> <div id="index-28_"> <img id="index_28" src="images/index_28.png" width="68" height="60" alt=""/> </div> <div id="index-29_"> <img id="index_29" src="images/index_29.png" width="84" height="60" alt=""/> </div> <div id="index-30_"> <img id="index_30" src="images/index_30.png" width="57" height="60" alt=""/> </div> </div> <script type="text/javascript" src="//ajax.cloudflare.com/cdn-cgi/nexp/aav=4188748942/apps1.min.js"></script><script 
type="text/javascript">__CF.AJS.init1();</script></body> </html> My issue is, how can I achieve the same 'centered' results in the div format of the website, as the table format of the website? I have done some research to no avail, so I'd thought given the reputation of this site, that i'd post my issue here. Thank you in advance, ~ drea.

  • Lazy loading the addthis script? (or lazy loading external JS content dependent on already-fired events)

    - by Keith Bentrup
    I want to have the addthis widget available for my users, but I want to lazy load it so that my page loads as quickly as possible. However, after trying it via a script tag and then via my lazy loading method, it appears to only work via the script tag. In the obfuscated code, I see something that looks like it's dependent on the DOMContentLoaded event (at least for Firefox). Since the DOMContentLoaded event has already fired, the widget doesn't render properly. What to do? I could just use a script tag (slower)... or could I fire (in a cross-browser way) the DOMContentLoaded (or equivalent) event? I have a feeling this may not be possible b/c I believe that (like jQuery) there are multiple tests of the content-ready event, and so multiple simulated events would have to occur. Nonetheless, this is an interesting problem b/c I have seen a couple widgets now assume that you are including their stuff via static script tags. It would be nice if they wrote code that was more useful to developers concerned about speed, but until then, is there a workaround? And/or are any of my assumptions wrong? Edit: Because the 1st answer to the question seemed to miss the point of my problem, I wanted to clarify the situation. This is about a specific problem. I'm not looking for yet another lazy-load script or a check-if-some-dependencies-are-loaded script. Specifically, this problem deals with: external widgets that you do not have control over and which may or may not be obfuscated; delaying the load of the external widgets until they are needed, or at least until substantially after everything else has been loaded, including other deferred elements; and cases where, b/c of how the widget was written, existing, typical lazy-loading paradigms are precluded. While it's esoteric, I have seen it happen with a couple widgets - where the widget developers assume that you're just willing to throw in another script tag at the bottom of the page. I'm looking to save those 500-1000 ms** though, as numerous studies by Yahoo, Google, and Amazon show it to be important to your user's experience. **My testing with hammerhead and personal experience indicates that this will be my savings in this case.
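
    A workaround that is sometimes attempted (a sketch; whether AddThis responds depends on how the obfuscated code registered its listeners, and the widget URL is a placeholder): inject the script after window load, then replay a synthetic DOMContentLoaded once it has loaded:

        // Lazy-inject a widget script, then re-fire DOMContentLoaded for
        // code that assumes the event has not happened yet. Listeners that
        // test a "ready" flag rather than the event itself will not be fooled.
        function lazyLoadWidget(src) {
            var s = document.createElement('script');
            s.src = src;
            s.onload = function () {
                var evt = document.createEvent('HTMLEvents');
                evt.initEvent('DOMContentLoaded', true, true);
                document.dispatchEvent(evt);
            };
            document.getElementsByTagName('head')[0].appendChild(s);
        }

        window.onload = function () {
            lazyLoadWidget('http://example.com/addthis_widget.js'); // placeholder URL
        };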

  • Does the <script> tag position in HTML affect the performance of the webpage?

    - by Rahul Joshi
    If the script tag is above or below the body in an HTML page, does it matter for the performance of a website? And what if it is used in between, like this: <body> ..blah..blah.. <script language="JavaScript" src="JS_File_100_KiloBytes"> function f1() { .. some logic reqd. for manipulating contents in a webpage } </script> ... some text here too ... </body> Or is this better?: <script language="JavaScript" src="JS_File_100_KiloBytes"> function f1() { .. some logic reqd. for manipulating contents in a webpage } </script> <body> ..blah..blah.. ..call above functions on some events like onclick,onfocus,etc.. </body> Or this one?: <body> ..blah..blah.. ..call above functions on some events like onclick,onfocus,etc.. <script language="JavaScript" src="JS_File_100_KiloBytes"> function f1() { .. some logic reqd. for manipulating contents in a webpage } </script> </body> Needless to say, everything is again inside the <html> tag! How does it affect the performance of the webpage while loading? Does it really? Which one is best, out of these 3 or some other that you know? And one more thing: I googled a bit on this, which took me here: Best Practices for Speeding Up Your Web Site, and it suggests putting scripts at the bottom, but traditionally many people put them in the <head> tag, which is above the <body> tag. I know it's NOT a rule, but many prefer it that way. If you don't believe it, just view the source of this page! And tell me which is the better style for best performance.
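
    For completeness, the modern answer largely sidesteps the placement question (a sketch; the src is a placeholder): a script in the <head> marked defer downloads in parallel with parsing but executes, in order, only after the document is parsed, combining the early download of the head placement with the non-blocking behaviour of the bottom placement:

        <html>
          <head>
            <script src="JS_File_100_KiloBytes.js" defer></script>
          </head>
          <body>
            ..blah..blah..
            <!-- f1() can still be wired to onclick/onfocus handlers; user
                 events fire well after the deferred script has executed -->
          </body>
        </html>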

  • What steps can you take to ensure sane build environments when compiling software?

    - by Chris Adams
    Hi guys, I've been stuck with a compilation problem when building a standardised virtual machine on CentOS 5.4, and I'm in the dark here as to a) why this error is occurring, and b) how to fix it; in the hope that someone else stumbles across this problem too, I'm hoping someone can help me find the solution. I'm getting a configure: error: newly created file is older than distributed files! error when trying to compile Ruby Enterprise, as below, when I run the installer, and the solutions offered on the forums (checking the time, and touching the files to update the time associated with them) don't seem to be helping here. What steps can I take to work out the cause of this problem? [vagrant@vagrant-centos-5 ruby-enterprise-1.8.7-2009.10]$ sudo ./installer Welcome to the Ruby Enterprise Edition installer This installer will help you install Ruby Enterprise Edition 1.8.7-2009.10. Don't worry, none of your system files will be touched if you don't want them to, so there is no risk that things will screw up. You can expect this from the installation process: 1. Ruby Enterprise Edition will be compiled and optimized for speed for this system. 2. Ruby on Rails will be installed for Ruby Enterprise Edition. 3. You will learn how to tell Phusion Passenger to use Ruby Enterprise Edition instead of regular Ruby. Press Enter to continue, or Ctrl-C to abort. Checking for required software... * C compiler... found at /usr/bin/gcc * C++ compiler... found at /usr/bin/g++ * The 'make' tool... found at /usr/bin/make * Zlib development headers... found * OpenSSL development headers... found * GNU Readline development headers... found -------------------------------------------- Target directory Where would you like to install Ruby Enterprise Edition to? (All Ruby Enterprise Edition files will be put inside that directory.) [/opt/ruby-enterprise] : -------------------------------------------- Compiling and optimizing the memory allocator for Ruby Enterprise Edition In the mean time, feel free to grab a cup of coffee. ./configure --prefix=/opt/ruby-enterprise --disable-dependency-tracking checking build system type... i686-pc-linux-gnu checking host system type... i686-pc-linux-gnu checking for a BSD-compatible install... /usr/bin/install -c checking whether build environment is sane... configure: error: newly created file is older than distributed files! Check your system clock This is a virtual machine running on VirtualBox, and the time of the host and the virtual machine are identical, and up to date. I've also tried running this after updating the time with an NTP client, to no avail. I tried this after reading this post here of someone having a similar problem: [vagrant@vagrant-centos-5 ruby-enterprise-1.8.7-2009.10]$ date Tue Apr 27 08:09:05 BST 2010 The other approach I've tried is to touch the top-level files in the build folder as suggested here, but this hasn't worked either (and to be honest, I'm not sure why it would have worked anyway): [vagrant@vagrant-centos-5 ruby-enterprise-1.8.7-2009.10]$ sudo touch ruby-enterprise-1.8.7-2009.10/* I'm not sure what I can do next here - the problem seems to be the bash configure script that returns this error, error: newly created file is older than distributed files!, at line 2214: { echo "$as_me:$LINENO: checking whether build environment is sane" >&5 echo $ECHO_N "checking whether build environment is sane... 
$ECHO_C" >&6; } # Just in case sleep 1 echo timestamp > conftest.file # Do `set' in a subshell so we don't clobber the current shell's # arguments. Must try -L first in case configure is actually a # symlink; some systems play weird games with the mod time of symlinks # (eg FreeBSD returns the mod time of the symlink's containing # directory). if ( set X `ls -Lt $srcdir/configure conftest.file 2> /dev/null` if test "$*" = "X"; then # -L didn't work. set X `ls -t $srcdir/configure conftest.file` fi rm -f conftest.file if test "$*" != "X $srcdir/configure conftest.file" \ && test "$*" != "X conftest.file $srcdir/configure"; then # If neither matched, then we have a broken ls. This can happen # if, for instance, CONFIG_SHELL is bash and it inherits a # broken ls alias from the environment. This has actually # happened. Such a system could not be considered "sane". { { echo "$as_me:$LINENO: error: ls -t appears to fail. Make sure there is not a broken alias in your environment" >&5 echo "$as_me: error: ls -t appears to fail. Make sure there is not a broken alias in your environment" >&2;} { (exit 1); exit 1; }; } fi ### PROBLEM LINE #### # this line is the problem line - this is returned true, sometimes it isn't and I can't # see a pattern that that determines when this will test will pass or not. test "$2" = conftest.file ) then # Ok. : else { { echo "$as_me:$LINENO: error: newly created file is older than distributed files! Check your system clock" >&5 echo "$as_me: error: newly created file is older than distributed files! Check your system clock" >&2;} { (exit 1); exit 1; }; } fi the thing that makes this really frustrating is that this script works sometimes, when the VM has been running for an hour or so it works, but not at boot. There's nothing I see in the crontab that suggests any hourly tasks are run that might change the state of the system enough make a difference to this script working. I'm totally at a loss when it comes to debugging beyond here. What's the best approach to take here? Thanks

  • When running a shell script, how can you protect it from overwriting or truncating files?

    - by Joseph Garvin
    If, while an application is running, one of the shared libraries it uses is written to or truncated, then the application will crash. Moving the file or removing it wholesale with 'rm' will not cause a crash, because the OS (Solaris in this case, but I assume this is true on Linux and other *nix as well) is smart enough not to delete the inode associated with the file while any process has it open. I have a shell script that performs installation of shared libraries. Sometimes, it may be used to reinstall versions of shared libraries that were already installed, without an uninstall first. Because applications may be using the already installed shared libraries, it's important that the script is smart enough to rm the files or move them out of the way (e.g. to a 'deleted' folder that cron could empty at a time when we know no applications will be running) before installing the new ones, so that they're not overwritten or truncated. Unfortunately, recently an application crashed just after an install. Coincidence? It's difficult to tell. The real solution here is to switch over to a more robust installation method than an old gigantic shell script, but it'd be nice to have some extra protection until the switch is made. Is there any way to wrap a shell script to protect it from overwriting or truncating files (and ideally failing loudly), while still allowing them to be moved or rm'd? Standard UNIX file permissions won't do the trick because you can't distinguish moving/removing from overwriting/truncating. Aliases could work, but I'm not sure of the entire set of commands that would need to be aliased. I imagine something like truss/strace, except before each action it checks against a filter whether to actually do it. I don't need a perfect solution that would work even against an intentionally malicious script. Ideas I have so far: Alias cp to GNU cp (not the default since I'm on Solaris) and use the --remove-destination option. Alias install to GNU install and use the --backup option; it might be smart enough to move the existing file to the backup file name rather than making a copy, thus preserving the inode. "set -o noclobber" in ~/.bashrc so that I/O redirection won't overwrite files.
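
    The inode-preserving idea can also be made explicit inside the install script itself (a sketch with placeholder paths): write the new library under a temporary name and rename it into place. rename(2) swaps the directory entry atomically and never truncates the inode that running processes have mapped:

        #!/bin/sh
        set -o noclobber            # plain '>' redirections now fail loudly
                                    # instead of truncating existing files

        src=build/libfoo.so         # placeholder
        dst=/opt/app/lib/libfoo.so  # placeholder

        cp "$src" "$dst.tmp.$$"     # new content lands on a brand-new inode
        mv -f "$dst.tmp.$$" "$dst"  # atomic rename; the old inode lives on
                                    # until the last process using it exits

    This complements the alias ideas: rather than filtering every command that might clobber, the script only ever creates new inodes and renames them over the old names.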

  • Automating deployments with the SQL Compare command line

    - by Jonathan Hickford
    In my previous article, “Five Tips to Get Your Organisation Releasing Software Frequently” I looked at how teams can automate processes to speed up release frequency. In this post, I’m looking specifically at automating deployments using the SQL Compare command line. SQL Compare compares SQL Server schemas and deploys the differences. It works very effectively in scenarios where only one deployment target is required – source and target databases are specified, compared, and a change script is automatically generated and applied. But if multiple targets exist, and pressure to increase the frequency of releases builds, this solution quickly becomes unwieldy.   This is where SQL Compare’s command line comes into its own. I’ve put together a PowerShell script that loops through the Servers table and pulls out the server and database, these are then passed to sqlcompare.exe to be used as target parameters. In the example the source database is a scripts folder, a folder structure of scripted-out database objects used by both SQL Source Control and SQL Compare. The script can easily be adapted to use schema snapshots.     -- Create a DeploymentTargets database and a Servers table CREATE DATABASE DeploymentTargets GO USE DeploymentTargets GO CREATE TABLE [dbo].[Servers]( [id] [int] IDENTITY(1,1) NOT NULL, [serverName] [nvarchar](50) NULL, [environment] [nvarchar](50) NULL, [databaseName] [nvarchar](50) NULL, CONSTRAINT [PK_Servers] PRIMARY KEY CLUSTERED ([id] ASC) ) GO -- Now insert your target server and database details INSERT INTO dbo.Servers ( serverName , environment , databaseName) VALUES ( N'myserverinstance' , N'myenvironment1' , N'mydb1') INSERT INTO dbo.Servers ( serverName , environment , databaseName) VALUES ( N'myserverinstance' , N'myenvironment2' , N'mydb2') Here’s the PowerShell script you can adapt for yourself as well. # We're holding the server names and database names that we want to deploy to in a database table. # We need to connect to that server to read these details $serverName = "" $databaseName = "DeploymentTargets" $authentication = "Integrated Security=SSPI" #$authentication = "User Id=xxx;PWD=xxx" # If you are using database authentication instead of Windows authentication. # Path to the scripts folder we want to deploy to the databases $scriptsPath = "SimpleTalk" # Path to SQLCompare.exe $SQLComparePath = "C:\Program Files (x86)\Red Gate\SQL Compare 10\sqlcompare.exe" # Create SQL connection string, and connection $ServerConnectionString = "Data Source=$serverName;Initial Catalog=$databaseName;$authentication" $ServerConnection = new-object system.data.SqlClient.SqlConnection($ServerConnectionString); # Create a Dataset to hold the DataTable $dataSet = new-object "System.Data.DataSet" "ServerList" # Create a query $query = "SET NOCOUNT ON;" $query += "SELECT serverName, environment, databaseName " $query += "FROM dbo.Servers; " # Create a DataAdapter to populate the DataSet with the results $dataAdapter = new-object "System.Data.SqlClient.SqlDataAdapter" ($query, $ServerConnection) $dataAdapter.Fill($dataSet) | Out-Null # Close the connection $ServerConnection.Close() # Populate the DataTable $dataTable = new-object "System.Data.DataTable" "Servers" $dataTable = $dataSet.Tables[0] #For every row in the DataTable $dataTable | FOREACH-OBJECT { "Server Name: $($_.serverName)" "Database Name: $($_.databaseName)" "Environment: $($_.environment)" # Compare the scripts folder to the database and synchronize the database to match # NB. 
Have set SQL Compare to abort on medium level warnings. $arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/AbortOnWarnings:Medium") # + @("/sync" ) # Commented out the 'sync' parameter for safety, write-host $arguments & $SQLComparePath $arguments "Exit Code: $LASTEXITCODE" # Some interesting variations # Check that every database matches a folder. # For example this might be a pre-deployment step to validate everything is at the same baseline state. # Or a post deployment script to validate the deployment worked. # An exit code of 0 means the databases are identical. # # $arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/Assertidentical") # Generate a report of the difference between the folder and each database. Generate a SQL update script for each database. # For example use this after the above to generate upgrade scripts for each database # Examine the warnings and the HTML diff report to understand how the script will change objects # #$arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/ScriptFile:update_$($_.environment+"_"+$_.databaseName).sql", "/report:update_$($_.environment+"_"+$_.databaseName).html" , "/reportType:Interactive", "/showWarnings", "/include:Identical") } It’s worth noting that the above example generates the deployment scripts dynamically. This approach should be problem-free for the vast majority of changes, but it is still good practice to review and test a pre-generated deployment script prior to deployment. An alternative approach would be to pre-generate a single deployment script using SQL Compare, and run this en masse to multiple targets programmatically using sqlcmd, or using a tool like SQL Multi Script.  You can use the /ScriptFile, /report, and /showWarnings flags to generate change scripts, difference reports and any warnings.  See the commented out example in the PowerShell: #$arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/ScriptFile:update_$($_.environment+"_"+$_.databaseName).sql", "/report:update_$($_.environment+"_"+$_.databaseName).html" , "/reportType:Interactive", "/showWarnings", "/include:Identical") There is a drawback of running a pre-generated deployment script; it assumes that a given database target hasn’t drifted from its expected state. Often there are (rightly or wrongly) many individuals within an organization who have permissions to alter the production database, and changes can therefore be made outside of the prescribed development processes. The consequence is that at deployment time, the applied script has been validated against a target that no longer represents reality. The solution here would be to add a check for drift prior to running the deployment script. This is achieved by using sqlcompare.exe to compare the target against the expected schema snapshot using the /Assertidentical flag. Should this return any differences (sqlcompare.exe Exit Code 79), a drift report is outputted instead of executing the deployment script.  See the commented out example. # $arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/Assertidentical") Any checks and processes that should be undertaken prior to a manual deployment, should also be happen during an automated deployment. 
You might think about triggering backups prior to deployment – even better, automate the verification of the backup too.   You can use SQL Compare’s command line interface along with PowerShell to automate multiple actions and checks that you need in your deployment process. Automation is a practical solution where multiple targets and a higher release cadence come into play. As we know, with great power comes great responsibility – responsibility to ensure that the necessary checks are made so deployments remain trouble-free.  (The code sample supplied in this post automates the simple dynamic deployment case – if you are considering more advanced automation, e.g. the drift checks, script generation, deploying to large numbers of targets and backup/verification, please email me at [email protected] for further script samples or if you have further questions)

  • How do I install PHP with JSON and OAuth on Mac Snow Leopard?

    - by meilas
    I want to use the Dropbox API via this library: http://code.google.com/p/dropbox-php/. I installed MAMP, then I tried "sudo pecl install oauth" but I got: downloading oauth-1.0.0.tgz ... Starting to download oauth-1.0.0.tgz (42,834 bytes) ............done: 42,834 bytes 6 source files, building running: phpize Configuring for: PHP Api Version: 20090626 Zend Module Api No: 20090626 Zend Extension Api No: 220090626 building in /var/tmp/pear-build-root/oauth-1.0.0 running: /private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth/configure checking for grep that handles long lines and -e... /usr/bin/grep checking for egrep... /usr/bin/grep -E checking for a sed that does not truncate output... /opt/local/bin/gsed checking for cc... cc checking for C compiler default output file name... a.out checking whether the C compiler works... yes checking whether we are cross compiling... no checking for suffix of executables... checking for suffix of object files... o checking whether we are using the GNU C compiler... yes checking whether cc accepts -g... yes checking for cc option to accept ISO C89... none needed checking how to run the C preprocessor... cc -E checking for icc... no checking for suncc... no checking whether cc understands -c and -o together... yes checking for system library directory... lib checking if compiler supports -R... no checking if compiler supports -Wl,-rpath,... yes checking build system type... i686-apple-darwin10.4.0 checking host system type... i686-apple-darwin10.4.0 checking target system type... i686-apple-darwin10.4.0 checking for PHP prefix... /usr checking for PHP includes... -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib checking for PHP extension directory... /usr/lib/php/extensions/no-debug-non-zts-20090626 checking for PHP installed headers prefix... /usr/include/php checking if debug is enabled... no checking if zts is enabled... no checking for re2c... no configure: WARNING: You will need re2c 0.13.4 or later if you want to regenerate PHP parsers. checking for gawk... gawk checking for oauth support... yes, shared checking for cURL in default path... found in /usr checking for ld used by cc... /usr/libexec/gcc/i686-apple-darwin10/4.2.1/ld checking if the linker (/usr/libexec/gcc/i686-apple-darwin10/4.2.1/ld) is GNU ld... no checking for /usr/libexec/gcc/i686-apple-darwin10/4.2.1/ld option to reload object files... -r checking for BSD-compatible nm... /usr/bin/nm checking whether ln -s works... yes checking how to recognise dependent libraries... pass_all checking for ANSI C header files... yes checking for sys/types.h... yes checking for sys/stat.h... yes checking for stdlib.h... yes checking for string.h... yes checking for memory.h... yes checking for strings.h... yes checking for inttypes.h... yes checking for stdint.h... yes checking for unistd.h... yes checking dlfcn.h usability... yes checking dlfcn.h presence... yes checking for dlfcn.h... yes checking the maximum length of command line arguments... 196608 checking command to parse /usr/bin/nm output from cc object... rm: conftest.dSYM: is a directory rm: conftest.dSYM: is a directory rm: conftest.dSYM: is a directory rm: conftest.dSYM: is a directory ok checking for objdir... .libs checking for ar... ar checking for ranlib... ranlib checking for strip... strip rm: conftest.dSYM: is a directory rm: conftest.dSYM: is a directory checking if cc static flag works... 
rm: conftest.dSYM: is a directory yes checking if cc supports -fno-rtti -fno-exceptions... rm: conftest.dSYM: is a directory no checking for cc option to produce PIC... -fno-common checking if cc PIC flag -fno-common works... rm: conftest.dSYM: is a directory yes checking if cc supports -c -o file.o... rm: conftest.dSYM: is a directory yes checking whether the cc linker (/usr/libexec/gcc/i686-apple-darwin10/4.2.1/ld) supports shared libraries... yes checking dynamic linker characteristics... darwin10.4.0 dyld checking how to hardcode library paths into programs... immediate checking whether stripping libraries is possible... yes checking if libtool supports shared libraries... yes checking whether to build shared libraries... yes checking whether to build static libraries... no creating libtool appending configuration tag "CXX" to libtool configure: creating ./config.status config.status: creating config.h running: make /bin/sh /private/var/tmp/pear-build-root/oauth-1.0.0/libtool --mode=compile cc -I. -I/private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth -DPHP_ATOM_INC -I/private/var/tmp/pear-build-root/oauth-1.0.0/include -I/private/var/tmp/pear-build-root/oauth-1.0.0/main -I/private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib -DHAVE_CONFIG_H -g -O2 -Wall -g -c /private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth/oauth.c -o oauth.lo mkdir .libs cc -I. "-I/private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth" -DPHP_ATOM_INC -I/private/var/tmp/pear-build-root/oauth-1.0.0/include -I/private/var/tmp/pear-build-root/oauth-1.0.0/main "-I/private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth" -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib -DHAVE_CONFIG_H -g -O2 -Wall -g -c "/private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth/oauth.c" -fno-common -DPIC -o .libs/oauth.o In file included from /private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth/php_oauth.h:47, from /private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth/oauth.c:14: /usr/include/php/ext/pcre/php_pcre.h:29:18: error: pcre.h: No such file or directory In file included from /private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth/php_oauth.h:47, from /private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth/oauth.c:14: /usr/include/php/ext/pcre/php_pcre.h:37: error: expected '=', ',', ';', 'asm' or 'attribute' before '*' token /usr/include/php/ext/pcre/php_pcre.h:38: error: expected '=', ',', ';', 'asm' or 'attribute' before '*' token /usr/include/php/ext/pcre/php_pcre.h:44: error: expected specifier-qualifier-list before 'pcre' make: * [oauth.lo] Error 1 ERROR: `make' failed

    Read the article

  • How do I install PHP with JSON and OAuth on Mac Snow Leopard?

    - by meilas
    I want to use the Dropbox API via this library, http://code.google.com/p/dropbox-php/. I installed MAMP, and then I tried sudo pecl install oauth but I got the following. downloading oauth-1.0.0.tgz ... Starting to download oauth-1.0.0.tgz (42,834 bytes) ............done: 42,834 bytes 6 source files, building running: phpize Configuring for: PHP Api Version: 20090626 Zend Module Api No: 20090626 Zend Extension Api No: 220090626 building in /var/tmp/pear-build-root/oauth-1.0.0 running: /private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth/configure checking for grep that handles long lines and -e... /usr/bin/grep checking for egrep... /usr/bin/grep -E checking for a sed that does not truncate output... /opt/local/bin/gsed checking for cc... cc checking for C compiler default output file name... a.out checking whether the C compiler works... yes checking whether we are cross compiling... no checking for suffix of executables... checking for suffix of object files... o checking whether we are using the GNU C compiler... yes checking whether cc accepts -g... yes checking for cc option to accept ISO C89... none needed checking how to run the C preprocessor... cc -E checking for icc... no checking for suncc... no checking whether cc understands -c and -o together... yes checking for system library directory... lib checking if compiler supports -R... no checking if compiler supports -Wl,-rpath,... yes checking build system type... i686-apple-darwin10.4.0 checking host system type... i686-apple-darwin10.4.0 checking target system type... i686-apple-darwin10.4.0 checking for PHP prefix... /usr checking for PHP includes... -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib checking for PHP extension directory... /usr/lib/php/extensions/no-debug-non-zts-20090626 checking for PHP installed headers prefix... /usr/include/php checking if debug is enabled... no checking if zts is enabled... no checking for re2c... no configure: WARNING: You will need re2c 0.13.4 or later if you want to regenerate PHP parsers. checking for gawk... gawk checking for oauth support... yes, shared checking for cURL in default path... found in /usr checking for ld used by cc... /usr/libexec/gcc/i686-apple-darwin10/4.2.1/ld checking if the linker (/usr/libexec/gcc/i686-apple-darwin10/4.2.1/ld) is GNU ld... no checking for /usr/libexec/gcc/i686-apple-darwin10/4.2.1/ld option to reload object files... -r checking for BSD-compatible nm... /usr/bin/nm checking whether ln -s works... yes checking how to recognise dependent libraries... pass_all checking for ANSI C header files... yes checking for sys/types.h... yes checking for sys/stat.h... yes checking for stdlib.h... yes checking for string.h... yes checking for memory.h... yes checking for strings.h... yes checking for inttypes.h... yes checking for stdint.h... yes checking for unistd.h... yes checking dlfcn.h usability... yes checking dlfcn.h presence... yes checking for dlfcn.h... yes checking the maximum length of command line arguments... 196608 checking command to parse /usr/bin/nm output from cc object... rm: conftest.dSYM: is a directory rm: conftest.dSYM: is a directory rm: conftest.dSYM: is a directory rm: conftest.dSYM: is a directory ok checking for objdir... .libs checking for ar... ar checking for ranlib... ranlib checking for strip... strip rm: conftest.dSYM: is a directory rm: conftest.dSYM: is a directory checking if cc static flag works... rm: conftest.dSYM: is a directory yes checking if cc supports -fno-rtti -fno-exceptions... rm: conftest.dSYM: is a directory no checking for cc option to produce PIC... -fno-common checking if cc PIC flag -fno-common works... rm: conftest.dSYM: is a directory yes checking if cc supports -c -o file.o... rm: conftest.dSYM: is a directory yes checking whether the cc linker (/usr/libexec/gcc/i686-apple-darwin10/4.2.1/ld) supports shared libraries... yes checking dynamic linker characteristics... darwin10.4.0 dyld checking how to hardcode library paths into programs... immediate checking whether stripping libraries is possible... yes checking if libtool supports shared libraries... yes checking whether to build shared libraries... yes checking whether to build static libraries... no creating libtool appending configuration tag "CXX" to libtool configure: creating ./config.status config.status: creating config.h running: make /bin/sh /private/var/tmp/pear-build-root/oauth-1.0.0/libtool --mode=compile cc -I. -I/private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth -DPHP_ATOM_INC -I/private/var/tmp/pear-build-root/oauth-1.0.0/include -I/private/var/tmp/pear-build-root/oauth-1.0.0/main -I/private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib -DHAVE_CONFIG_H -g -O2 -Wall -g -c /private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth/oauth.c -o oauth.lo mkdir .libs cc -I. "-I/private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth" -DPHP_ATOM_INC -I/private/var/tmp/pear-build-root/oauth-1.0.0/include -I/private/var/tmp/pear-build-root/oauth-1.0.0/main "-I/private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth" -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib -DHAVE_CONFIG_H -g -O2 -Wall -g -c "/private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth/oauth.c" -fno-common -DPIC -o .libs/oauth.o In file included from /private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth/php_oauth.h:47, from /private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth/oauth.c:14: /usr/include/php/ext/pcre/php_pcre.h:29:18: error: pcre.h: No such file or directory In file included from /private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth/php_oauth.h:47, from /private/var/tmp/apache_mod_php/apache_mod_php-53~1/Build/tmp/pear/temp/oauth/oauth.c:14: /usr/include/php/ext/pcre/php_pcre.h:37: error: expected '=', ',', ';', 'asm' or '__attribute__' before '*' token /usr/include/php/ext/pcre/php_pcre.h:38: error: expected '=', ',', ';', 'asm' or '__attribute__' before '*' token /usr/include/php/ext/pcre/php_pcre.h:44: error: expected specifier-qualifier-list before 'pcre' make: *** [oauth.lo] Error 1 ERROR: `make' failed

    Read the article

  • Nginx http_mp4_module seems installed but doesn't work

    - by Tahola
    I'm trying to use the http_mp4_module on my Ubuntu server, but it doesn't seem to work at all. When I check nginx -V I get: nginx version: nginx/1.1.19 TLS SNI support enabled configure arguments: --prefix=/etc/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-log-path=/var/log/nginx/access.log --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --lock-path=/var/lock/nginx.lock --pid-path=/var/run/nginx.pid --with-debug --with-http_addition_module --with-http_dav_module --with-http_flv_module --with-http_geoip_module --with-http_gzip_static_module --with-http_image_filter_module --with-http_mp4_module --with-http_perl_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_stub_status_module --with-http_ssl_module --with-http_sub_module --with-http_xslt_module --with-ipv6 --with-sha1=/usr/include/openssl --with-md5=/usr/include/openssl --with-mail --with-mail_ssl_module --add-module=/build/buildd/nginx-1.1.19/debian/modules/nginx-auth-pam --add-module=/build/buildd/nginx-1.1.19/debian/modules/chunkin-nginx-module --add-module=/build/buildd/nginx-1.1.19/debian/modules/headers-more-nginx-module --add-module=/build/buildd/nginx-1.1.19/debian/modules/nginx-development-kit --add-module=/build/buildd/nginx-1.1.19/debian/modules/nginx-echo --add-module=/build/buildd/nginx-1.1.19/debian/modules/nginx-http-push --add-module=/build/buildd/nginx-1.1.19/debian/modules/nginx-lua --add-module=/build/buildd/nginx-1.1.19/debian/modules/nginx-upload-module --add-module=/build/buildd/nginx-1.1.19/debian/modules/nginx-upload-progress --add-module=/build/buildd/nginx-1.1.19/debian/modules/nginx-upstream-fair --add-module=/build/buildd/nginx-1.1.19/debian/modules/nginx-dav-ext-module --with-http_mp4_module and --with-http_flv_module are there. I also added to sites-available/domaine.conf location ~ .mp4$ { mp4; mp4_buffer_size 4M; mp4_max_buffer_size 10M; } location ~ .flv$ { flv; } and Nginx restarted without error. Everything seems OK, but when I check my URLs, myvideo.mp4?start=60 returns a 404 error (which I think is normal) and video.mp4?starttime=60 returns the video, but whatever the starttime number is, I get the full video from the beginning. Did I miss something?
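
    For reference, the stock http_mp4_module seeks with the start query argument (in seconds), not starttime, so ?starttime=60 being ignored is expected; the 404 on ?start=60 suggests the request may not be reaching the mp4 handler at all (the unescaped dot in location ~ .mp4$ is one suspect). A minimal probe, with a hypothetical host and path, that compares what each URL actually returns:

        # Hypothetical URL; compare status and Content-Length with and
        # without ?start= to see whether the mp4 handler is answering.
        import urllib.request, urllib.error

        base = "http://example.com/video/myvideo.mp4"
        for url in (base, base + "?start=60"):
            try:
                with urllib.request.urlopen(url) as resp:
                    print(url, resp.status, resp.headers.get("Content-Length"))
            except urllib.error.HTTPError as err:
                print(url, "->", err.code)

    A seeked response should come back noticeably smaller than the full file.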

    Read the article

  • Error installing new rails version. Failed to build gem native extension.

    - by davidcmolina
    I am trying to build my first Ruby on Rails app using the following guide (http://ruby.railstutorial.org/chapters/a-demo-app#code-demo_gemfile_sqlite_version_redux) and have run into a few obstacles. The first: receiving errors when upgrading to the latest Rails version, 3.2.8. bash-3.2$ gem install rails Building native extensions. This could take a while... ERROR: Error installing rails: ERROR: Failed to build gem native extension. /Users/davidmolina/.rvm/rubies/ruby-1.9.3-p194/bin/ruby extconf.rb creating Makefile make compiling generator.c make: /usr/bin/gcc-4.2: No such file or directory make: *** [generator.o] Error 1 Gem files will remain installed in /Users/davidmolina/.rvm/gems/ruby-1.9.3-p194/gems/json-1.7.5 for inspection. Results logged to /Users/davidmolina/.rvm/gems/ruby-1.9.3-p194/gems/json-1.7.5/ext/json/ext/generator/gem_make.out Even when trying to install from the Rails app: $ gem install rails Building native extensions. This could take a while... ERROR: Error installing rails: ERROR: Failed to build gem native extension. /Users/davidmolina/.rvm/rubies/ruby-1.9.3-p194/bin/ruby extconf.rb creating Makefile make compiling generator.c make: /usr/bin/gcc-4.2: No such file or directory make: *** [generator.o] Error 1 Gem files will remain installed in /Users/davidmolina/.rvm/gems/ruby-1.9.3-p194/gems/json-1.7.5 for inspection. Results logged to /Users/davidmolina/.rvm/gems/ruby-1.9.3-p194/gems/json-1.7.5/ext/json/ext/generator/gem_make.out When trying to bundle install: $ bundle install Could not locate Gemfile Background details: Mac OS X version 10.8.2, Ruby 1.9.3, Rails 2.3.4. I'm wondering if there is a direct one-liner, or a gem that is missing?

    Read the article

  • Can't install MailParse on cPanel server

    - by Tom
    Hi, I've got a Linux VPS running CentOS 5.5 (cPanel/WHM). I've installed MailParse via the Module Installers section in WHM, and it did install it; the end of the setup log: running: make INSTALL_ROOT="/root/tmp/pear-build-root/install-mailparse-2.1.5" install Installing shared extensions: /root/tmp/pear-build-root/install-mailparse-2.1.5/usr/lib/php/extensions/no-debug-non-zts-20090626/ running: find "/root/tmp/pear-build-root/install-mailparse-2.1.5" | xargs ls -dils 508718 4 drwxr-xr-x 3 root root 4096 Feb 6 21:08 /root/tmp/pear-build-root/install-mailparse-2.1.5 508745 4 drwxr-xr-x 3 root root 4096 Feb 6 21:08 /root/tmp/pear-build-root/install-mailparse-2.1.5/usr 508746 4 drwxr-xr-x 3 root root 4096 Feb 6 21:08 /root/tmp/pear-build-root/install-mailparse-2.1.5/usr/lib 508747 4 drwxr-xr-x 3 root root 4096 Feb 6 21:08 /root/tmp/pear-build-root/install-mailparse-2.1.5/usr/lib/php 508748 4 drwxr-xr-x 3 root root 4096 Feb 6 21:08 /root/tmp/pear-build-root/install-mailparse-2.1.5/usr/lib/php/extensions 508749 4 drwxr-xr-x 2 root root 4096 Feb 6 21:08 /root/tmp/pear-build-root/install-mailparse-2.1.5/usr/lib/php/extensions/no-debug-non-zts-20090626 508744 196 -rwxr-xr-x 1 root root 193502 Feb 6 21:08 /root/tmp/pear-build-root/install-mailparse-2.1.5/usr/lib/php/extensions/no-debug-non-zts-20090626/mailparse.so Build process completed successfully Installing '/usr/lib/php/extensions/no-debug-non-zts-20090626/mailparse.so' install ok: channel://pecl.php.net/mailparse-2.1.5 Extension mailparse enabled in php.ini The mailparse.so object is not in /usr/local/lib/php/extensions/no-debug-non-zts-20090626 Now, when I try to use mailparse functions from PHP, I get the following error: PHP Warning: PHP Startup: Unable to load dynamic library '/usr/local/lib/php/extensions/no-debug-non-zts-20090626/mailparse.so' - /usr/local/lib/php/extensions/no-debug-non-zts-20090626/mailparse.so: cannot open shared object file: No such file or directory in Unknown on line 0 What should I do?

    Read the article

  • How to download a Vim script on the command line?

    - by HaiYuan Zhang
    Whenever I want to install a new Vim script on the Linux server I'm working on, my typical workflow is the following: surf to the plugin's homepage on Vim online using Firefox, download the right version of the plugin to my laptop by clicking some highlighted link, then upload the downloaded plugin from my laptop to the Linux server using WinSCP, which is really inconvenient. I don't know what the magic behind this is: the same hyperlink that lets me download the file when I click it in a web browser ends up with nothing but an error indication when I feed it to Wget on the Linux command line. I try new Vim scripts quite often, so you can imagine my dismay at having to repeat this tedious routine every time. What are some tips that would let me download Vim scripts in a more "professional" way? Post edit: My problem is not finding a tool like Wget or cURL. The problem is quite specific: using these tools to download a Vim script. Let's take http://www.vim.org/scripts/script.php?script_id=30 as an example. That's the normal place to get the script, at least for me, but I can't find a working URL on this page that I can feed to Wget.
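
    One thing worth knowing: on vim.org the script.php?script_id= page is only the catalogue entry; the file itself is served through download_script.php, keyed by the src_id of a particular release (the number behind each link in the page's download table). Assuming a hypothetical src_id, fetching a script non-interactively looks like this:

        # Sketch: download a release from vim.org by its src_id.
        # The src_id value here is hypothetical; read the real one off
        # the download column of the script's page.
        import urllib.request

        src_id = "12345"
        url = "http://www.vim.org/scripts/download_script.php?src_id=" + src_id
        urllib.request.urlretrieve(url, "plugin.vim")

    The same URL works with Wget or cURL once you have the src_id.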

    Read the article

  • Need help trying to allow my remote PowerShell script to run on my Windows 2008 R2 server

    - by Pure.Krome
    I've got a Windows 2008 R2 server with SQL Server 2008 R2 installed. I've got a SQL Server Agent job which tries to run a PowerShell step, but it fails: Message Executed as user: FooServer\SqlServerUser. A job step received an error at line 1 in a PowerShell script. The corresponding line is '& '\\polanski\Backups\Database\7ZipFooDatabases.ps1'' Correct the script and reschedule the job. The error information returned by PowerShell is: 'File \\polanski\Backups\Database\7ZipFooDatabases.ps1 cannot be loaded. The file \\polanski\Backups\Database\7ZipFooDatabases.ps1 is not digitally signed. The script will not execute on the system. Please see "get-help about_signing" for more details.. '. Process Exit Code -1. The step failed. OK, so I run PowerShell on that server and set the execution policy to unrestricted. To check: PS C:\Users\theUser> Get-ExecutionPolicy Unrestricted PS C:\Users\theUser> Kewl :) But it still doesn't work :( OK... what happens when I try to run the PowerShell script from the command line? PS C:\Users\justin.adler> . '\\polanski\Backups\Database\7ZipMotorshoutDatabases.ps1' Security Warning Run only scripts that you trust. While scripts from the Internet can be useful, this script can potentially harm your computer. Do you want to run \\polanski\Backups\Database\7ZipFooDatabases.ps1? [D] Do not run [R] Run once [S] Suspend [?] Help (default is "D"): Er... didn't I already tell the server that ANY file can be run? Notice the file is located at \\polanski\Backups\Database\ So can someone make any suggestions?

    Read the article

  • Puppet execution of a Python script where the os.system(...) command is not working

    - by philippe
    I am trying to manage Unix users with Puppet. Puppet provides enough tools to create accounts and provision authorized_keys files, for instance, but not to set up a user's password and communicate it to the user. What I have done is a Python script which generates a random password and sends it to the user by email. The problem is that it is not possible to run the Unix passwd command interactively from Python, so I have written a bash script with the commands: echo -ne "$password\n$password\n" | passwd $user passwd -e $user Launched manually, the script works fine and the created user gets its password by email. But when Puppet launches it, only the Python script gets executed, as if the os.system('/bin/bash my_bash_script') call were ignored. No error is displayed. The user gets the password email, but the passwd commands are never run. Is there some limitation in Puppet that prevents what I described? Or how else can I set the user's password and expiration, and send the password by email? I can provide more information, but right now I don't know which details are relevant. Many thanks!
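
    One likely culprit: processes spawned by Puppet often run with a minimal environment and PATH, and the relative script path in os.system('/bin/bash my_bash_script') can fail silently because its return code is never checked. A sketch of doing the passwd step directly from the Python script instead, assuming chpasswd is available (the user and password values are placeholders):

        # Set the password and force a change at next login, without an
        # intermediate bash script; absolute paths avoid PATH surprises
        # under Puppet, and check=True raises on failure instead of
        # letting the error disappear.
        import subprocess

        def set_password(user, password):
            # chpasswd reads "user:password" pairs on stdin
            subprocess.run(["/usr/sbin/chpasswd"],
                           input="%s:%s\n" % (user, password),
                           text=True, check=True)
            # expire the password, as passwd -e did in the original script
            subprocess.run(["/usr/bin/passwd", "-e", user], check=True)

        set_password("alice", "S3cr3t!")  # placeholder values

    With check=True, a failure at least surfaces as a traceback in the Puppet log rather than vanishing.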

    Read the article

  • Looking for a PowerShell script that can pull a file from a set of PCs and FTP it

    - by DangeRuss
    I'm looking to write a script (preferably PowerShell) that will essentially copy a file from a bunch of PCs and FTP it to a server. The structure of the environment is that we have a file on multiple PCs (around 50 or so) that needs to be placed on a server. Sometimes one of the PCs may be turned off, so the script would first need to ensure the PC is up and running (maybe a ping result), then it would need to go into a directory on that PC, pull a file off of it, rename the file, place it into a source directory, then remove the original. Naming convention doesn't matter, but a date/time stamp would be easiest. Ideally, it would be best to first move all the files to a source directory to save on FTP bandwidth, but since the files will all be named the same, the files must be renamed during the move process. Move, not copy, because the directory needs to be empty so the file can be re-created the next day. Once moved to the source directory, all the files need to be FTP'd to a server for processing. After all of this, we need to know which PCs on the list did not respond so we can manually retrieve the file, so the script should output a file (txt is fine) listing the PCs that were offline; a sketch of this flow follows below. Everything is on one domain, and the script will be run from a server with admin creds. Thank you!
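
    As a sketch of the shape such a script can take (in Python here; the same structure ports directly to PowerShell with Test-Connection, Move-Item and a .NET FTP client), with every host name, share, path and credential below being a placeholder:

        # Ping each PC, move its file into a staging directory under a
        # timestamped name, upload the batch over FTP, and record the
        # PCs that did not respond.
        import os, shutil, subprocess, ftplib
        from datetime import datetime

        pcs = ["PC001", "PC002", "PC003"]          # placeholder host list
        staging = r"C:\staging"
        offline = []

        for pc in pcs:
            up = subprocess.run(["ping", "-n", "1", pc],
                                capture_output=True).returncode == 0
            if not up:
                offline.append(pc)
                continue
            src = r"\\%s\share\data.dat" % pc      # placeholder UNC path
            stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
            shutil.move(src, os.path.join(staging, "%s_%s.dat" % (pc, stamp)))

        with ftplib.FTP("ftp.example.com", "user", "pass") as ftp:  # placeholders
            for name in os.listdir(staging):
                with open(os.path.join(staging, name), "rb") as fh:
                    ftp.storbinary("STOR " + name, fh)

        with open("offline.txt", "w") as report:   # PCs to fetch manually
            report.write("\n".join(offline))

    Moving (rather than copying) leaves each PC's directory empty for the next day, and offline.txt gives the manual-retrieval list.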

    Read the article

  • Different font SIZES in a text editor, based on script (alphabet) type, i.e. per Unicode code block

    - by fred.bear
    Some non-Latin-based scripts (alphabets) have more detail in their glyphs than their Latin-based equivalents, and typically need a larger font to give the same degree of legibility (resolution-wise). Sometimes both script types need to be present in the same file. Notepad++ allows different font sizes (and colour, etc.) courtesy of syntax highlighting. This lets me display larger-fonted non-Latin-based script in a // BIG-FONT comment. Although this has been quite handy for me in some situations, it is quite limited. A word processor can handle this scenario, but I'm not interested in that. I want a nice simple(?) plain(?) text editor to do it, on a per-script-type basis, e.g. mixing Latin-1 and Devanagari (and Mandarin, and ...). Such a thing may not exist, but Notepad++ has shown that a simple(?) plain(?) text editor is capable of it. Does anyone know of such a text editor? ...Q. Why not a word processor? ...A. Because GCC and Python don't like that format! But UTF-8 is fine.

    Read the article

  • What are the ways to build a failover cluster?

    - by light
    I have a task where I need to build a failover cluster in two cases: first with servers on Red Hat Enterprise Linux 5.1, and second with SUSE Linux Enterprise 11 SP1. Both cases have a SAN. I know there are many ways to build a failover cluster, but I haven't been able to find out more, so I need the following: The ways to build it? I know only virtualization. Any good book or resource to broaden my mind? I'll be glad to hear any suggestion. Thanks! EDIT #1: Failover of servers with a business application on them. EDIT #2: It would be great to hear a summary of solutions for the SLES servers. EDIT #3: So if I understand correctly, in my cases the main ways are to use internal solutions or virtualization. So now I have additional questions: Does the manufacturer of the blades, for example HP or IBM, provide some solution? (Without virtualization) Do I need an additional server to monitor the "heartbeat" between the main and redundant servers? (Virtualization) For example, I have several physical servers with VMs. Do I need an additional server to monitor the availability of the VMs and to move VMs to another physical server if their physical server fails? Sorry for my poor English. EDIT #4: Failover of a VM or an OS on a physical server. In both cases a SAN will be used; it's not specified, but I think with a file-system image on it. I have started to think that my question is stupid and I need to rework it.

    Read the article
