Search Results

Search found 25534 results on 1022 pages for 'write powershell'.


  • Automating deployments with the SQL Compare command line

    - by Jonathan Hickford
    In my previous article, "Five Tips to Get Your Organisation Releasing Software Frequently", I looked at how teams can automate processes to speed up release frequency. In this post, I'm looking specifically at automating deployments using the SQL Compare command line. SQL Compare compares SQL Server schemas and deploys the differences. It works very effectively in scenarios where only one deployment target is required: source and target databases are specified, compared, and a change script is automatically generated and applied. But if multiple targets exist, and pressure to increase the frequency of releases builds, this solution quickly becomes unwieldy.

    This is where SQL Compare's command line comes into its own. I've put together a PowerShell script that loops through a Servers table and pulls out the server and database; these are then passed to sqlcompare.exe to be used as target parameters. In the example, the source database is a scripts folder: a folder structure of scripted-out database objects used by both SQL Source Control and SQL Compare. The script can easily be adapted to use schema snapshots.

      -- Create a DeploymentTargets database and a Servers table
      CREATE DATABASE DeploymentTargets
      GO
      USE DeploymentTargets
      GO
      CREATE TABLE [dbo].[Servers](
          [id] [int] IDENTITY(1,1) NOT NULL,
          [serverName] [nvarchar](50) NULL,
          [environment] [nvarchar](50) NULL,
          [databaseName] [nvarchar](50) NULL,
          CONSTRAINT [PK_Servers] PRIMARY KEY CLUSTERED ([id] ASC)
      )
      GO

      -- Now insert your target server and database details
      INSERT INTO dbo.Servers (serverName, environment, databaseName)
      VALUES (N'myserverinstance', N'myenvironment1', N'mydb1')
      INSERT INTO dbo.Servers (serverName, environment, databaseName)
      VALUES (N'myserverinstance', N'myenvironment2', N'mydb2')

    Here's the PowerShell script, which you can adapt for yourself as well.

      # We're holding the server names and database names that we want to deploy to in a database table.
      # We need to connect to that server to read these details.
      $serverName = ""
      $databaseName = "DeploymentTargets"
      $authentication = "Integrated Security=SSPI"
      #$authentication = "User Id=xxx;PWD=xxx" # If you are using database authentication instead of Windows authentication.

      # Path to the scripts folder we want to deploy to the databases
      $scriptsPath = "SimpleTalk"

      # Path to SQLCompare.exe
      $SQLComparePath = "C:\Program Files (x86)\Red Gate\SQL Compare 10\sqlcompare.exe"

      # Create SQL connection string, and connection
      $ServerConnectionString = "Data Source=$serverName;Initial Catalog=$databaseName;$authentication"
      $ServerConnection = new-object system.data.SqlClient.SqlConnection($ServerConnectionString)

      # Create a Dataset to hold the DataTable
      $dataSet = new-object "System.Data.DataSet" "ServerList"

      # Create a query
      $query = "SET NOCOUNT ON;"
      $query += "SELECT serverName, environment, databaseName "
      $query += "FROM dbo.Servers; "

      # Create a DataAdapter to populate the DataSet with the results
      $dataAdapter = new-object "System.Data.SqlClient.SqlDataAdapter" ($query, $ServerConnection)
      $dataAdapter.Fill($dataSet) | Out-Null

      # Close the connection
      $ServerConnection.Close()

      # Populate the DataTable
      $dataTable = new-object "System.Data.DataTable" "Servers"
      $dataTable = $dataSet.Tables[0]

      # For every row in the DataTable
      $dataTable | FOREACH-OBJECT {
          "Server Name: $($_.serverName)"
          "Database Name: $($_.databaseName)"
          "Environment: $($_.environment)"

          # Compare the scripts folder to the database and synchronize the database to match.
          # NB. Have set SQL Compare to abort on medium level warnings.
          $arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/AbortOnWarnings:Medium") # + @("/sync") # Commented out the 'sync' parameter for safety
          write-host $arguments
          & $SQLComparePath $arguments
          "Exit Code: $LASTEXITCODE"

          # Some interesting variations:

          # Check that every database matches a folder.
          # For example this might be a pre-deployment step to validate everything is at the same baseline state,
          # or a post-deployment script to validate the deployment worked.
          # An exit code of 0 means the databases are identical.
          #
          # $arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/Assertidentical")

          # Generate a report of the difference between the folder and each database, and a SQL update script for each database.
          # For example use this after the above to generate upgrade scripts for each database.
          # Examine the warnings and the HTML diff report to understand how the script will change objects.
          #
          # $arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/ScriptFile:update_$($_.environment+"_"+$_.databaseName).sql", "/report:update_$($_.environment+"_"+$_.databaseName).html", "/reportType:Interactive", "/showWarnings", "/include:Identical")
      }

    It's worth noting that the above example generates the deployment scripts dynamically. This approach should be problem-free for the vast majority of changes, but it is still good practice to review and test a pre-generated deployment script prior to deployment. An alternative approach would be to pre-generate a single deployment script using SQL Compare, and run it against multiple targets programmatically using sqlcmd, or using a tool like SQL Multi Script. You can use the /ScriptFile, /report, and /showWarnings flags to generate change scripts, difference reports and any warnings; see the second commented-out example in the PowerShell above.

    There is a drawback to running a pre-generated deployment script: it assumes that a given database target hasn't drifted from its expected state. Often there are (rightly or wrongly) many individuals within an organization who have permissions to alter the production database, and changes can therefore be made outside of the prescribed development processes. The consequence is that at deployment time, the applied script has been validated against a target that no longer represents reality. The solution here is to add a check for drift prior to running the deployment script, by using sqlcompare.exe to compare the target against the expected schema snapshot with the /Assertidentical flag. Should this return any differences (sqlcompare.exe exit code 79), a drift report is output instead of executing the deployment script; see the first commented-out example above.

    Any checks and processes that should be undertaken prior to a manual deployment should also happen during an automated deployment. You might think about triggering backups prior to deployment; even better, automate the verification of the backup too.

    You can use SQL Compare's command line interface along with PowerShell to automate the multiple actions and checks that you need in your deployment process. Automation is a practical solution where multiple targets and a higher release cadence come into play. As we know, with great power comes great responsibility: responsibility to ensure that the necessary checks are made so deployments remain trouble-free. (The code sample supplied in this post automates the simple dynamic deployment case. If you are considering more advanced automation, e.g. the drift checks, script generation, deploying to large numbers of targets and backup/verification, please email me at [email protected] for further script samples or if you have further questions.)

    Read the article

  • How to filter a character stream from an application using PowerShell?

    - by Christian
    A PowerShell question: I want to extract each line in a character stream produced by an application that matches a certain pattern, which in pseudo-code would be something like this:

      PS> <the_application_command_for_outputting_the_text_stream> | <my_filter> > output_file.txt

    In my case the application is a CM tool that outputs the change history of a source file, and the (pseudo) pattern should be something like:

      <a couple of numbers><a name><a time stamp><a line of characters>

    Cheers, Christian
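    A minimal sketch of such a filter, assuming the pattern can be expressed as a regular expression; the command name and the regex below are illustrative placeholders, not details from the question:

      # sketch: keep only lines matching a guessed pattern of the form
      # <numbers><name><timestamp><rest of line>
      $pattern = '^\d+\s+\S+\s+\d{4}-\d{2}-\d{2}.*'
      the_application_command | Where-Object { $_ -match $pattern } > output_file.txt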

    Read the article

  • How to use POWERSHELL to set MimeTypes in an IIS6 website?

    - by jacko
    I want to be able to replicate this adsutil.vbs behaviour in PowerShell:

      cscript adsutil.vbs set W3SVC/$(ProjectWebSiteIdentifier)/MimeMap ".pdf,application/pdf"

    I've gotten as far as getting the website object:

      $website = gwmi -namespace "root\MicrosoftIISv2" -class "IISWebServerSetting" -filter "ServerComment like '%$name%'"
      if (!($website -eq $NULL)) {
          #add some mimetype
      }

    and listing out the MimeMap collection:

      ([adsi]"IIS://localhost/MimeMap").MimeMap

    Anyone know how to fill in the blanks so that I can add mimetypes to an existing IIS6 website?
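    A possible approach, staying with the WMI provider: the IIS6 namespace exposes a MimeMap embedded class whose instances can be appended to a virtual directory's MimeMap array. This is an untested sketch; the '/root' path is an assumption derived from the $website object already retrieved above:

      # untested sketch: append a .pdf mapping to the site's root virtual directory
      $root = gwmi -Namespace "root\MicrosoftIISv2" -Class "IIsWebVirtualDirSetting" -Filter "Name = '$($website.Name)/root'"
      $map = ([wmiclass]"root\MicrosoftIISv2:MimeMap").CreateInstance()
      $map.Extension = ".pdf"
      $map.MimeType  = "application/pdf"
      $root.MimeMap += $map   # append to the existing list rather than replacing it
      $root.Put()             # write the change back to the metabase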

    Read the article

  • How can I stop and start individual websites in IIS using PowerShell?

    - by Joey Green
    I have multiple sites configured in IIS7 on my Windows 7 development machine to run on the same port, and usually only run one at a time depending on what I'm working on. I would like to be able to start and stop my development sites from PowerShell instead of having IIS Manager open. Does anyone have a good resource to point me in the right direction, or a script that already accomplishes this? Thanks
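    A hedged sketch of one approach, assuming the WebAdministration module that ships with IIS 7's PowerShell support is available; the site names are placeholders:

      # assumes the WebAdministration module (IIS 7 PowerShell support) is installed
      Import-Module WebAdministration
      Stop-Website  "Site A"                    # stop the site currently bound to the port
      Start-Website "Site B"                    # then start the one you want to work on
      Get-Website | Select-Object Name, State   # check which sites are running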

    Read the article

  • How to open a document that contains an AutoOpen macro with PowerShell?

    - by grom
    My current PowerShell script:

      $document = "C:\\test.doc"
      $word = new-object -comobject word.application
      $word.Visible = $false
      $word.DisplayAlerts = "wdAlertsNone"
      $word.AutomationSecurity = "msoAutomationSecurityForceDisable"
      $doc = $word.Documents.Open($document)
      $word.ActivePrinter = "\\http://ptr-server:631\pdf-printer"
      $background = $false
      $doc.PrintOut([ref]$background)
      $doc.close([ref]$false)
      $word.quit()

    But it results in an alert box: "The macros in this project are disabled. Please refer to the online help or documentation of the host application to determine how to enable macros." How can I open the document without it running the AutoOpen macro or displaying any sort of dialog prompt?
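    A workaround that is often suggested, untested here: the string assignments above may not take effect through late-bound COM, so set the numeric enum values instead, and additionally call WordBasic's DisableAutoMacros before opening. A hedged sketch:

      $word = New-Object -ComObject Word.Application
      $word.Visible = $false
      $word.DisplayAlerts = 0        # numeric value of wdAlertsNone
      $word.AutomationSecurity = 3   # numeric value of msoAutomationSecurityForceDisable
      # WordBasic is late-bound, so invoke DisableAutoMacros via reflection
      $wb = $word.WordBasic
      $wb.GetType().InvokeMember("DisableAutoMacros", "InvokeMethod", $null, $wb, 1) | Out-Null
      $doc = $word.Documents.Open("C:\test.doc")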

    Read the article

  • Why is my PowerShell multi-dimensional array being interpreted as a one-dimensional array?

    - by Jim
    I have the following code:

      function HideTemplates($File, $Templates) {
          foreach ($Template in $Templates) {
              Write-Host $Template[0] $Template[1] $Template[2]
          }
      }
      HideTemplates "test.xml" @(("one", "two", "three"))
      HideTemplates "test.xml" @(("four", "five", "six"), ("seven", "eight", "nine"))

    It prints:

      o n e
      t w o
      t h r
      four five six
      seven eight nine

    I want it to print:

      one two three
      four five six
      seven eight nine

    Am I doing something wrong in my code? Is there a way to force PowerShell to treat a multi-dimensional array with a single item differently?
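    A likely fix, for what it's worth: @( ) does not stop PowerShell from flattening a single nested array, but the unary comma operator does. A quick sketch:

      # the leading comma wraps the inner array so it survives as one element
      HideTemplates "test.xml" (,("one", "two", "three"))
      # equivalently: @(,("one", "two", "three"))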

    Read the article

  • How to pass the 'argument-line' of one PowerShell function to another?

    - by jwfearn
    I'm trying to write some PowerShell functions that do some stuff and then transparently call through to existing built-in functions. I want to pass along all the arguments untouched. I don't want to have to know any details of the arguments. I tried using 'splat' to do this with @args, but that didn't work as I expected.

    In the example below, I've written a toy function called myls which is supposed to print hello! and then call the same built-in function, Get-ChildItem, that the built-in alias ls calls, with the rest of the argument line intact. What I have so far works pretty well:

      function myls
      {
          Write-Output "hello!"
          Invoke-Expression ("Get-ChildItem " + $MyInvocation.UnboundArguments -join " ")
      }

    A correct version of myls should be able to handle being called with no arguments, with one argument, with named arguments, from a line containing multiple semicolon-delimited commands, and with variables in the arguments, including string variables containing spaces. The tests below compare myls and the built-in ls [NOTE: output elided and/or compacted to save space]:

      PS> md C:\p\d\x, C:\p\d\y, C:\p\d\"jay z"
      PS> cd C:\p\d
      PS> ls                          # no args
      PS> myls                        # pass
      PS> cd ..
      PS> ls d                        # one arg
      PS> myls d                      # pass
      PS> $a="A"; $z="Z"; $y="y"; $jz="jay z"
      PS> $a; ls d; $z                # multiple statements
      PS> $a; myls d; $z              # pass
      PS> $a; ls d -Exclude x; $z     # named args
      PS> $a; myls d -Exclude x; $z   # pass
      PS> $a; ls d -Exclude $y; $z    # variables in arg-line
      PS> $a; myls d -Exclude $y; $z  # pass
      PS> $a; ls d -Exclude $jz; $z   # variables containing spaces in arg-line
      PS> $a; myls d -Exclude $jz; $z # FAIL!

    Is there a way I can re-write myls to get the behavior I want?
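    For what it's worth, plain @args splatting (documented for PowerShell v2 and later) should handle all of the cases above, including named parameters and values containing spaces, because the arguments never make a round trip through a string. A sketch:

      # @args re-applies the original arguments, named and positional alike,
      # without flattening them through Invoke-Expression
      function myls
      {
          Write-Output "hello!"
          Get-ChildItem @args
      }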

    Read the article

  • 10 PowerShell One Liners

    - by BizTalk Visionary
    Here are a few one-liners that use NetCmdlets. Some of these I've blogged about before, some are new. Let me know if you have questions, which ones you find useful, or how you altered these to suit your own needs.

    Send email to a list of recipient addresses:

      import-csv users.csv | % { send-email -to $_.email -from [email protected] -subject "Important Email" -message "Hello World!" -server 10.0.1.1 }

    Show the access control list for a specific Exchange folder:

      get-imap -server $mymailserver -cred $mycred -folder INBOX.RESUMES -acl

    Add look and read permissions on an Exchange folder, for a list of accounts pulled from a CSV file:

      import-csv users.csv | % { set-imap -server $mymailserver -cred $mycred -acluser $_.username -folder INBOX.RESUMES -acl "lr" }

    Sync system time with an Internet time server:

      get-time -server clock.psu.edu -set

    To remotely sync the time on a set of computers:

      import-csv computers.csv | % { Invoke-Command -computerName $_.computer -cred $mycred -scriptblock { get-time -server clock.psu.edu -set } }

    Delete all emails from an Exchange folder that match certain criteria. For example, delete all emails from [email protected]:

      get-imap -server $mailserver -cred $mycred | ? { $_.FromEmail -eq "[email protected]" } | % { set-imap -server $mailserver -cred $mycred -message $_.Id -delete }

    Update Twitter status from PowerShell:

      get-http -url "http://twitter.com/statuses/update.xml" -cred $mycred -variablename status -variablevalue "Tweeting with NetCmdlets!"

    A test-path that works over FTP, FTPS (SSL), and SFTP (SSH) connections:

      get-ftp -server $remoteserver -cred $mycred -path /remote/path/to/checkfor*

    Don't forget the *. Also, to use SSL or SSH just add an -ssl or -ssh parameter.

    List disabled user accounts in Active Directory (or any other LDAP server):

      get-ldap -server $ad -cred $mycred -dn dc=yourdc -searchscope wholesubtree -search "(&(objectclass=user)(objectclass=person)(company=*)(userAccountControl:1.2.840.113556.1.4.803:=2))"

    List Active Directory groups and their members:

      get-ldap -server testman -cred $mycred -dn dc=NS2 -searchscope wholesubtree -search "(&(objectclass=group)(cn=*admin*))" | select ResultDN, member

    Display the last initialization time (e.g. last reboot time) of all discoverable SNMP agents on a network:

      import-csv computers.csv | % { get-snmp -agent $_.computer -oid sysUpTime.0 | % { ([datetime]::Now).AddSeconds(-($_.OIDValue/100)) } }

    Not mentioned here: data conversion (Yenc, QP, UUencoding, MD5, SHA1, base64, etc), DNS, News Groups (NNTP/UseNet), POP mail, RSS feeds, Amazon S3, Syslog, TFTP, TraceRoute, SNMP Traps, UDP, WebDAV, whois, Rexec/Rshell/Telnet, Zip files, sending IMs (Jabber/GoogleTalk/XMPP), sending text messages and pages, ping, and more.

    Original Source: Lance's Textbox

    Read the article

  • PowerShell Script To Find Where SharePoint 2007 Features Are Activated

    - by Brian T. Jackett
    Recently I posted a script to find where SharePoint 2010 Features Are Activated. I built the original version to use SharePoint 2010 PowerShell commandlets, as that saved me a number of steps for filtering and gathering features at each level. If there was ever demand for a 2007 version I could modify the script to handle that by using the object model instead of commandlets. Just the other week a fellow SharePoint PFE, Jason Gallicchio, had a customer asking about a version for SharePoint 2007. With a little bit of work I was able to convert the script to work against SharePoint 2007.

    Solution

    Below is the converted script that works against a SharePoint 2007 farm.

    Note: There appears to be a bug with the 2007 version that does not give accurate results against a SharePoint 2010 farm. I ran the 2007 version against a 2010 farm and got fewer results than my 2010 version of the script. Discussing with some fellow PFEs, I think the discrepancy may be due to sandboxed features, a new concept in SharePoint 2010. I have not had enough time to test or confirm. For the time being, only use the 2007 version script against SharePoint 2007 farms and the 2010 version against SharePoint 2010 farms.

    Note: This script is not optimized for medium to large farms. In my testing it took 1-3 minutes to recurse through my demo environment. This script is provided as-is with no warranty. Run this in a smaller dev / test environment first.

      function Get-SPFeatureActivated
      {
          # see full script for help info, removed for formatting
          [CmdletBinding()]
          param(
              [Parameter(position = 1, valueFromPipeline=$true)]
              [string]
              $Identity
          )#end param
          Begin
          {
              # load SharePoint assembly to access object model
              [void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint")

              # declare empty array to hold results. Will add custom member for Url to show where activated at on objects returned from Get-SPFeature.
              $results = @()

              $params = @{}
          }
          Process
          {
              if([string]::IsNullOrEmpty($Identity) -eq $false)
              {
                  $params = @{Identity = $Identity}
              }

              # create hashtable of farm features to lookup definition ids later
              $farm = [Microsoft.SharePoint.Administration.SPFarm]::Local

              # check farm features
              $results += ($farm.FeatureDefinitions |
                           Where-Object {$_.Scope -eq "Farm"} |
                           Where-Object {[string]::IsNullOrEmpty($Identity) -or ($_.DisplayName -eq $Identity)} |
                           % {Add-Member -InputObject $_ -MemberType noteproperty -Name Url -Value ([string]::Empty) -PassThru} |
                           Select-Object -Property Scope, DisplayName, Id, Url)

              # check web application features
              $contentWebAppServices = $farm.services | ? {$_.typename -like "Windows SharePoint Services Web Application"}

              foreach($webApp in $contentWebAppServices.WebApplications)
              {
                  $results += ($webApp.Features | Select-Object -ExpandProperty Definition |
                               Where-Object {[string]::IsNullOrEmpty($Identity) -or ($_.DisplayName -eq $Identity)} |
                               % {Add-Member -InputObject $_ -MemberType noteproperty -Name Url -Value $webApp.GetResponseUri(0).AbsoluteUri -PassThru} |
                               Select-Object -Property Scope, DisplayName, Id, Url)

                  # check site collection features in current web app
                  foreach($site in ($webApp.Sites))
                  {
                      $results += ($site.Features | Select-Object -ExpandProperty Definition |
                                   Where-Object {[string]::IsNullOrEmpty($Identity) -or ($_.DisplayName -eq $Identity)} |
                                   % {Add-Member -InputObject $_ -MemberType noteproperty -Name Url -Value $site.Url -PassThru} |
                                   Select-Object -Property Scope, DisplayName, Id, Url)

                      # check site features in current site collection
                      foreach($web in ($site.AllWebs))
                      {
                          $results += ($web.Features | Select-Object -ExpandProperty Definition |
                                       Where-Object {[string]::IsNullOrEmpty($Identity) -or ($_.DisplayName -eq $Identity)} |
                                       % {Add-Member -InputObject $_ -MemberType noteproperty -Name Url -Value $web.Url -PassThru} |
                                       Select-Object -Property Scope, DisplayName, Id, Url)

                          $web.Dispose()
                      }
                      $site.Dispose()
                  }
              }
          }
          End
          {
              $results
          }
      } #end Get-SPFeatureActivated

      Get-SPFeatureActivated

    Conclusion

    I have posted this script to the TechNet Script Repository (click here). As always I appreciate any feedback on scripts. If anyone is motivated to run this 2007 version script against a SharePoint 2010 farm to see if they find any differences in the number of features reported versus what they get with the 2010 version script, I'd love to hear from you.

    -Frog Out

    Read the article

  • What is a proper way to pass a parameter to Set-Alias in PowerShell?

    - by Nick Gorbikoff
    Hello. A little background: I use PowerShell on Windows XP at work, and I set up a bunch of useful shortcuts in Microsoft.PowerShell_profile.ps1 in My Documents, trying to emulate a Mac environment inspired by Ryan Bates's shortcuts. I have things like:

      Set-Alias rsc Rails-Console
      function Rails-Console { Invoke-Expression "ruby script/console" }

    which works just fine when in the command prompt I say:

      rsc   # it calls the proper command

    However this doesn't work properly:

      Set-Alias rsg Rails-Generate
      function Rails-Generate { Invoke-Expression "ruby script/generate" }

    So when I do:

      rsg model User

    which is supposed to call:

      ruby script/generate model User

    all it calls is:

      ruby script/generate   # dumping my params

    So how would I properly modify my functions to take the params I send to the functions? Thank you!!
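    One way to do this, sketched under the assumption that plain argument forwarding is enough: the alias itself is fine, but the function body has to forward its arguments, and the automatic $args variable holds whatever was typed after the alias:

      # $args carries everything passed after 'rsg', so hand it on to the command
      function Rails-Generate { ruby script/generate $args }
      Set-Alias rsg Rails-Generate

    With this, "rsg model User" reaches Ruby as "script/generate model User".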

    Read the article

  • How to write a Bash script to open two different terminals

    - by Ahmed Zain El Dein
    How to write a Bash script to open two different tabbed terminals, and write commands into both of them separately, to be executed independently? For instance: terminal number one opens Skype, terminal number two opens ... In the end, I want one more thing: can I write in the Bash script my Skype username and password, to put them into Skype when it opens in terminal one automatically, and then log in too? Thanks

    Read the article

  • How do I set the Execute Permissions for an IIS6 website with Powershell using WMI?

    - by DarkwingDuck
    In inetmgr you can set the property I desire by going to Home Directory -> Application Settings -> Execute Permissions and setting the drop-down to 'Scripts Only'. I'm trying to replicate this behavior in PowerShell. The target OS is Windows Server 2003 running IIS6. Currently I have this simple code to get the site:

      $Site = get-wmiobject -Namespace root\MicrosoftIISv2 -query ('select * from IISWebServerSetting where ServerComment="mySite"')

    There are lots of properties it might be, but nothing really leaps out. I've tried changing the setting in inetmgr and dumping the properties out before and after, but I see no differences (it could be a child property though). Any ideas? Thanks in advance.
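    A hedged guess at the missing piece: in the IIS6 metabase, 'Scripts Only' corresponds to the AccessScript and AccessExecute flags, which live on the site's root virtual directory setting rather than on IISWebServerSetting itself (which would explain why the before/after dump showed no change). An untested sketch:

      # untested sketch: set 'Scripts Only' on the site's root virtual directory
      $vdir = get-wmiobject -Namespace root\MicrosoftIISv2 -Class IIsWebVirtualDirSetting -Filter "Name = '$($Site.Name)/root'"
      $vdir.AccessScript  = $true    # allow scripts (e.g. ASP)
      $vdir.AccessExecute = $false   # disallow executables
      $vdir.Put()                    # commit the change back to the metabase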

    Read the article

  • How to call an ASP.NET WebMethod using PowerShell?

    - by Domenic
    It seems like ASP.NET WebMethods are not "web servicey" enough to work with New-WebServiceProxy. Or maybe they are, and I haven't figured out how to initialize it? So instead, I tried doing it manually, like so:

      $wc = new-object System.Net.WebClient
      $wc.Credentials = [System.Net.CredentialCache]::DefaultCredentials
      $url = "http://www.domenicdenicola.com/AboutMe/SleepLog/default.aspx/GetSpans"
      $postData = "{`"starting`":`"\/Date(1254121200000)\/`",`"ending`":`"\/Date(1270018800000)\/`"}"
      $result = $wc.UploadString($url, $postData)

    But this gives me "The remote server returned an error: (500) Internal Server Error." So I must be doing something slightly wrong. Any ideas on how to call my PageMethod from PowerShell and not get an error?
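    One likely culprit, offered as an educated guess rather than a confirmed fix: ASP.NET page methods reject requests that do not declare a JSON content type, and WebClient sends none by default. A sketch of the header fix:

      $wc = New-Object System.Net.WebClient
      $wc.Credentials = [System.Net.CredentialCache]::DefaultCredentials
      # page methods expect this content type; without it ASP.NET may return a 500
      $wc.Headers.Add("Content-Type", "application/json; charset=utf-8")
      $result = $wc.UploadString($url, $postData)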

    Read the article

  • Write and fprintf for file I/O

    - by Darryl Gove
    fprintf() does buffered I/O, whereas write() does unbuffered I/O. So once the write() completes, the data is in the file, whereas for fprintf() it may take a while for the file to get updated to reflect the output. This results in a significant performance difference: the write works at disk speed. The following is a program to test this:

      #include <fcntl.h>
      #include <unistd.h>
      #include <stdio.h>
      #include <stdlib.h>
      #include <errno.h>
      #include <sys/time.h>
      #include <sys/types.h>
      #include <sys/stat.h>

      static double s_time;

      void starttime()
      {
        s_time = 1.0 * gethrtime();
      }

      void endtime(long its)
      {
        double e_time = 1.0 * gethrtime();
        printf("Time per iteration %5.2f MB/s\n", (1.0 * its) / (e_time - s_time * 1.0) * 1000);
        s_time = 1.0 * gethrtime();
      }

      #define SIZE 10*1024*1024

      void test_write()
      {
        starttime();
        int file = open("./test.dat", O_WRONLY | O_CREAT, S_IWGRP | S_IWOTH | S_IWUSR);
        for (int i = 0; i < SIZE; i++)
        {
          write(file, "a", 1);
        }
        close(file);
        endtime(SIZE);
      }

      void test_fprintf()
      {
        starttime();
        FILE* file = fopen("./test.dat", "w");
        for (int i = 0; i < SIZE; i++)
        {
          fprintf(file, "a");
        }
        fclose(file);
        endtime(SIZE);
      }

      void test_flush()
      {
        starttime();
        FILE* file = fopen("./test.dat", "w");
        for (int i = 0; i < SIZE; i++)
        {
          fprintf(file, "a");
          fflush(file);
        }
        fclose(file);
        endtime(SIZE);
      }

      int main()
      {
        test_write();
        test_fprintf();
        test_flush();
      }

    Compiling and running, I get 0.2MB/s for write() and 6MB/s for fprintf(): a large difference. There are three tests in this example; the third test uses fprintf() and fflush(). This is equivalent to write() both in performance and in functionality. Which leads to the suggestion that fprintf() (and other buffered I/O functions) are the fastest way of writing to files, and that fflush() should be used to enforce synchronisation of the file contents.

    Read the article

  • Using Windows PowerShell 1.0 or 2.0 to evaluate the performance of executable files

    - by Andry
    Hello! I am writing a simple script in Windows PowerShell in order to evaluate the performance of executable files. The important hypothesis is the following: I have an executable file; it can be an application written in any possible language (.NET and not, Visual Prolog, C++, C, everything that can be compiled as an .exe file). I want to profile it, getting execution times. I did this:

      Function Time-It {
          Param ([string]$ProgramPath, [string]$Arguments)
          $Watch = New-Object System.Diagnostics.Stopwatch
          $NsecPerTick = (1000 * 1000 * 1000) / [System.Diagnostics.Stopwatch]::Frequency
          Write-Output "Stopwatch created! NSecPerTick = $NsecPerTick"
          $Watch.Start() # Starts the timer
          [System.Diagnostics.Process]::Start($ProgramPath, $Arguments)
          $Watch.Stop() # Stops the timer
          # Collecting timings
          $Ticks = $Watch.ElapsedTicks
          $NSecs = $Watch.ElapsedTicks * $NsecPerTick
          Write-Output "Program executed: time is: $NSecs ns ($Ticks ticks)"
      }

    This function uses a stopwatch. Well, the function accepts a program path, the stopwatch is started, the program is run, and the stopwatch is then stopped. Problem: System.Diagnostics.Process.Start is asynchronous, and the next instruction (stopping the watch) is executed before the application finishes; a new process is created... I need to stop the timer once the program ends. I thought about the Process class, thinking it held some info regarding the execution times... no luck... How to solve this?
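    A sketch of one way out: Process.Start returns a Process object, and its WaitForExit() method blocks until the program terminates, so the stopwatch can be stopped afterwards:

      $Watch.Start()
      $proc = [System.Diagnostics.Process]::Start($ProgramPath, $Arguments)
      $proc.WaitForExit()   # block until the launched program terminates
      $Watch.Stop()

    Alternatively, Measure-Command { & $ProgramPath $Arguments } times a synchronous invocation directly, at the cost of measuring the whole script block rather than the bare process.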

    Read the article

  • What is the best way to lazy load doubleclick ads that use document.write?

    - by user560585
    Ads requested via DoubleClick often get served from an ad provider network that returns JavaScript, which in turn performs document.write to place ads in the page. The use of document.write requires that the document be open, implying that the page hasn't reached document.complete. This gets in the way of deferring or lazy loading ad content. Putting such code at the page bottom is helpful, but doesn't do enough to lower the all-important "page-loaded" time. Are "friendly iframes" the best we have? Is there any other alternative, such as a clever way to override document.write?

    Read the article

  • How to make the start menu find a program based on a custom keyword?

    - by Pierre-Alain Vigeant
    I am searching for a way to type a keyword in the start menu's Search programs and files field so that it will return the application that matches the keyword. An example will better explain this: suppose that I want to start PowerShell. Currently what I can type in the search field is power, and the first item that appears is the 64-bit PowerShell shortcut. Now suppose that I'd like ps to return PowerShell as the first item of the search list. Currently, typing ps returns all files with the .ps extension, along with a control panel option about recording steps, but not the PowerShell executable itself. How can I do that?
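    One common trick, sketched here: drop a shortcut literally named ps into the Start Menu programs folder, and the search indexer will match it by name. The paths and shortcut name below are illustrative:

      # illustrative sketch: create a Start Menu shortcut named 'ps' for powershell.exe
      $shell = New-Object -ComObject WScript.Shell
      $lnk = $shell.CreateShortcut("$env:APPDATA\Microsoft\Windows\Start Menu\Programs\ps.lnk")
      $lnk.TargetPath = "$env:SystemRoot\System32\WindowsPowerShell\v1.0\powershell.exe"
      $lnk.Save()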

    Read the article

  • Safe use of Update-FormatData?

    - by Steve B
    In a custom PowerShell module, I have at the top of my module definition this code:

      Update-FormatData -AppendPath (Join-Path $psscriptroot "*.ps1xml")

    This is working fine, as all .ps1xml files are loaded. However, the module is sometimes loaded using Import-Module MyModule -Force (actually, this is in the install script of the module). In this case, the call to Update-FormatData fails with this error:

      Update-FormatData : There were errors in loading the format data file:
      Microsoft.PowerShell, c:\pathto\myfile.Types.ext.ps1xml : File skipped because it was already present from "Microsoft.PowerShell".
      At line:1 char:18
      + Update-FormatData <<<< -AppendPath "c:\pathto\myfile.Types.ext.ps1xml"
          + CategoryInfo          : InvalidOperation: (:) [Update-FormatData], RuntimeException
          + FullyQualifiedErrorId : FormatXmlUpateException,Microsoft.PowerShell.Commands.UpdateFormatDataCommand

    Is there a way to safely call this command? I know I can call Update-FormatData with no parameters, and it will update any known .ps1xml file, but this would work only if the file has already been loaded. Can I list somewhere the loaded format data files?

    Here is a bit of background: I'm building a custom module that is installed using a script. The install script looks like:

      [CmdletBinding(SupportsShouldProcess=$true,ConfirmImpact="High")]
      param()
      process {
          $target = Join-Path $PSHOME "Modules\MyModule"
          if ($pscmdlet.ShouldProcess("$target","Deploying MyModule module")) {
              if(!(Test-Path $target)) {
                  new-Item -ItemType Directory -Path $target | Out-Null
              }
              get-ChildItem -Path (Split-Path ((Get-Variable MyInvocation -Scope 0).Value).MyCommand.Path) |
                  copy-Item -Destination $target -Force
              Write-Host -ForegroundColor White @"
      The module has been installed. You can import it using :
          Import-Module MyModule
      Or you can add it in your profile ($profile)
      "@
              Write-Warning "To refresh any open PowerShell session, you should run ""Import-Module MyModule -Force"" to reload the module"
              Import-Module MyModule -Force
              Write-Warning "This session has been refreshed."
          }
      }

    MyModule defines, as its first statement, this line:

      Update-FormatData -AppendPath (Join-Path $psscriptroot "*.ps1xml")

    As I updated my $profile to always load this module, the Update-FormatData call had already run when I launched the install script. In the install script, I force-import the module, which fires the Update-FormatData call a second time, and that second call fails.
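    One defensive pattern, offered as a sketch rather than a confirmed answer: treat the "already present" error as the signal to refresh the cached format data instead of appending it again:

      # sketch: append on first load; on re-import, fall back to refreshing what is cached
      try {
          Update-FormatData -AppendPath (Join-Path $psscriptroot "*.ps1xml") -ErrorAction Stop
      }
      catch {
          # the file was already loaded in this session, so rebuild from the cached copies
          Update-FormatData
      }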

    Read the article
