Search Results

Search found 1743 results on 70 pages for 'powershell 2 0'.

  • Harnessing PowerShell's String Comparison and List-Filtering Features

    When you are first learning PowerShell, it often seems to be an 'Alice through the looking-glass' world. Just the simple process of comparing and selecting strings can seem strangely obtuse. Michael turns the looking-glass into wonderland with his wall-chart of the PowerShell string-comparison operators and syntax.
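
    The wall-chart itself isn't reproduced in this summary, but the basics it covers look like this (a quick sketch; note that PowerShell comparisons are case-insensitive unless you use the c-prefixed operators):

        'PowerShell' -eq 'powershell'     # True  (case-insensitive by default)
        'PowerShell' -ceq 'powershell'    # False (c- prefix forces case sensitivity)
        'PowerShell' -like 'Power*'       # True  (wildcard match)
        'PowerShell' -match '^Power'      # True  (regular-expression match)
        'alpha', 'beta', 'gamma' | Where-Object { $_ -like 'g*' }   # list filtering: gamma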

    Read the article

  • Extract and convert all Excel worksheets into CSV files using PowerShell

    Can PowerShell provide an easy way to export Excel as a CSV? Yes. Tim Smith demonstrates that whether you have multiple Excel files, or just multiple worksheets in Excel, PowerShell simplifies the process.
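
    The article's own scripts aren't reproduced in this summary; a minimal sketch of the usual COM-based technique (assumes Excel is installed locally, and the workbook path is a placeholder):

        $excel = New-Object -ComObject Excel.Application
        $excel.Visible = $false
        $excel.DisplayAlerts = $false                      # suppress overwrite prompts
        $workbook = $excel.Workbooks.Open('C:\data\Report.xlsx')
        foreach ($sheet in $workbook.Worksheets) {
            # 6 is the xlCSV file-format constant
            $sheet.SaveAs("C:\data\$($sheet.Name).csv", 6)
        }
        $workbook.Close($false)
        $excel.Quit()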

    Read the article

  • Active Directory Management with PowerShell in Windows Server 2008 R2

    One of the first things you notice with Windows Server 2008 R2 is that PowerShell 2.0 has become central to the admin function. There is a powerful Active Directory module for PowerShell that contains a provider and cmdlets designed to let you manage Active Directory from the command line. Versions are now also available for previous releases of Windows Server.
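
    For a quick taste of the module and provider mentioned above (a sketch; the OU and domain names are placeholders, and the RSAT Active Directory module must be installed):

        Import-Module ActiveDirectory

        # Cmdlets: query users from the command line
        Get-ADUser -Filter * -SearchBase 'OU=Staff,DC=example,DC=com' |
            Select-Object Name, SamAccountName

        # Provider: the module also mounts the directory as an AD: drive
        Set-Location AD:
        Get-ChildItem 'DC=example,DC=com'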

    Read the article

  • The PoSh DBA: Grown-Up PowerShell Functions

    Laerte Junior goes step-by-step through the process of tidying up and making more reusable an untidy collection of PowerShell routines, showing how pipelines and advanced functions can make PowerShell more effective in helping to automate many of the working DBA's chores.

    Read the article

  • Management of Windows Azure SQL Databases via PowerShell with REST APIs

    Management of Azure SQL Databases has been greatly simplified by the introduction of the Azure PowerShell module. Marcin Policht describes the principles of dealing with the Azure PowerShell module’s REST APIs directly.

    Read the article

  • SQL Server 2012 Integration Services - Using PowerShell to Configure Project Environments

    Continuing our discussion on how to leverage the capabilities of PowerShell to automate the most basic SSIS management tasks, this article will explore more complex topics by demonstrating the use of PowerShell in implementing and utilizing project environments.

    Read the article

  • Extracting data with headers using PowerShell

    This article provides a short PowerShell script to extract data, with headers, from a database.
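
    The article's script isn't reproduced in this summary; a minimal sketch of the general technique (the connection string, query, and output path are placeholders):

        $conn = New-Object System.Data.SqlClient.SqlConnection 'Server=.;Database=master;Integrated Security=True'
        $cmd = $conn.CreateCommand()
        $cmd.CommandText = 'SELECT name, database_id FROM sys.databases'
        $table = New-Object System.Data.DataTable
        $conn.Open()
        $table.Load($cmd.ExecuteReader())
        $conn.Close()
        # Export-Csv emits the column names as the header row
        $table | Select-Object name, database_id |
            Export-Csv C:\temp\databases.csv -NoTypeInformation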

    Read the article

  • The PoSh DBA: Solutions using PowerShell and SQL Server

    PowerShell is worth using when it is the quickest way to provide a solution. For the DBA, it is much more than getting information from SQL Server instances via PowerShell; it can also be run from SQL Server as part of a system that helps with administrative and monitoring tasks.

    Read the article

  • Parameters with default value not in PsBoundParameters?

    - by stej
    General code: consider this code:

        PS> function Test { param($p='default value') $PsBoundParameters }
        PS> Test 'some value'

        Key Value
        --- -----
        p   some value

        PS> Test
        # nothing

    I would expect $PsBoundParameters to contain a record for the $p variable in both cases. Is that correct behaviour?

    Question: I'd like to use splatting for a lot of functions that would work like this:

        function SomeFuncWithManyRequiredParams {
            param(
                [Parameter(Mandatory=$true)][string]$p1,
                [Parameter(Mandatory=$true)][string]$p2,
                [Parameter(Mandatory=$true)][string]$p3,
                ...other parameters
            )
            ...
        }

        function SimplifiedFuncWithDefaultValues {
            param(
                [Parameter(Mandatory=$false)][string]$p1='default for p1',
                [Parameter(Mandatory=$false)][string]$p2='default for p2',
                [Parameter(Mandatory=$false)][string]$p3='default for p3',
                ...other parameters
            )
            SomeFuncWithManyRequiredParams @PsBoundParameters
        }

    I have more functions like this and I don't want to call SomeFuncWithManyRequiredParams with all the params enumerated:

        SomeFuncWithManyRequiredParams -p1 $p1 -p2 $p2 -p3 $p3 ...

    Is it possible?
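
    For reference, a hedged sketch of a common workaround (not from the original post): the behaviour is by design, because $PsBoundParameters records only the arguments a caller explicitly passed, so defaults never appear in it. You can rebuild the hashtable before splatting:

        function SimplifiedFuncWithDefaultValues {
            param(
                [Parameter(Mandatory=$false)][string]$p1 = 'default for p1',
                [Parameter(Mandatory=$false)][string]$p2 = 'default for p2',
                [Parameter(Mandatory=$false)][string]$p3 = 'default for p3'
            )
            # Copy the explicitly bound arguments...
            $splat = @{}
            foreach ($key in $PsBoundParameters.Keys) { $splat[$key] = $PsBoundParameters[$key] }
            # ...then fill in every parameter that kept its default value
            foreach ($name in 'p1', 'p2', 'p3') {
                if (-not $splat.ContainsKey($name)) {
                    $splat[$name] = Get-Variable -Name $name -ValueOnly
                }
            }
            SomeFuncWithManyRequiredParams @splat
        }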

    Read the article

  • How to stop PowerShell from unpacking an Enumerable object?

    - by spoon16
    I'm working on a simple helper function in PowerShell that takes a couple of parameters, creates a custom Enumerable object, and outputs that object to the pipeline. The problem I am having is that PowerShell always outputs a System.Array containing the objects that are enumerated by my custom Enumerable object. How can I keep PowerShell from unpacking the Enumerable object? The code: http://gist.github.com/387768
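
    For reference, the two idioms usually offered for this (a hedged sketch; -NoEnumerate behavior has varied between PowerShell versions, so test on yours):

        function Get-WrappedList {
            $list = New-Object System.Collections.Generic.List[int]
            1..3 | ForEach-Object { $list.Add($_) }
            # The unary comma wraps $list in a one-element array; the pipeline
            # unrolls that outer array and emits $list itself as a single object.
            ,$list
        }

        function Get-ListNoEnumerate {
            $list = New-Object System.Collections.Generic.List[int]
            1..3 | ForEach-Object { $list.Add($_) }
            Write-Output -NoEnumerate $list
        }

        (Get-WrappedList).GetType().Name    # List`1 rather than Object[]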

    Read the article

  • Start-Job to call scripts from a main script

    - by Naveen
    I have three scripts; from the main (first) script I call the other two, so that both run in parallel, because running them sequentially takes too much time. Only the variables differ between the two scripts. How can I merge scripts 2 and 3 into a single script that I can call from the main script and still run in parallel?

        Id  Name            State      HasMoreData  Location   Command
        --  ----            -----      -----------  --------   -------
        1   CompareCtrlM... Completed  False        localhost  ######################...
        3   CompareCtrlM... Completed  True         localhost  ######################...

    Main script (1):

        Start-Job -Name "LoopComparectrlMasterModel" -FilePath D:\tmp\naveen\Script\CompareCtrlMasterCtrlModel.ps1
        Start-Job -Name "LoopCompareProdMasterModel" -FilePath D:\idv\CA\rcm_data\tmp\work\CompareCtrlMasterProdModel.ps1
        Wait-Job -Name "LoopComparectrlMasterModel"
        Receive-Job "LoopComparectrlMasterModel"
        Wait-Job -Name "LoopCompareProdMasterModel"
        Receive-Job "LoopCompareProdMasterModel"

    Script 2:

        for ($i = 1; $i -lt 3; $i++) {
            $jobName = "CompareCtrlMasterProdModelESS$i"
            echolog $THISSCRIPT $RCM_UPDATE_LOG_FILE $LLINFO ("Starting Ctrl Master-Prod Model comparison #" + $i + ", create SBT")
            $rc = CreateSbtFile $sbtCompareCtrlMasterProdModel[$i-1] $cfgProdModel $cfgCtrlMaster "" "" $SBT_MODE_COMPARE_CFGS_FULL $workDir
            Start-Job -Name "$jobName" -FilePath $ExecuteSbtWithRcmClientTool -ArgumentList $sbtCompareCtrlMasterProdModel[$i-1], "", $true, $false | Out-Null
            Wait-Job -Name "$jobName"
            $results = Receive-Job -Name $jobName
        }

    Script 3:

        for ($i = 1; $i -lt 3; $i++) {
            $jobName = "CompareCtrlMasterCtrlModelESS$i"
            echolog $THISSCRIPT $RCM_UPDATE_LOG_FILE $LLINFO ("Starting Ctrl Master-Ctrl Model comparison #" + $i + ", create SBT")
            $rc = CreateSbtFile $sbtCompareCtrlMasterCtrlModel[$i-1] $cfgCtrlModel $cfgCtrlMaster "" "" $SBT_MODE_COMPARE_CFGS_FULL $workDir
            Start-Job -Name "$jobName" -FilePath $ExecuteSbtWithRcmClientTool -ArgumentList $sbtCompareCtrlMasterCtrlModel[$i-1], "", $true, $false | Out-Null
            Wait-Job -Name "$jobName"
            $results = Receive-Job -Name $jobName
        }
        Write-Output $results

    Thanks a lot for the help. Regards, Naveen
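
    A hedged sketch of one way to merge the two loops (the worker filename and parameter names are hypothetical, and any values the worker needs must go through -ArgumentList, since background jobs don't inherit the caller's variables). Note also that in scripts 2 and 3 the Wait-Job inside the loop serializes the inner jobs; starting both outer jobs first and waiting on them together is what keeps the work parallel:

        # CompareWorker.ps1 (hypothetical merged worker)
        param([string]$JobPrefix, [object[]]$SbtFiles, $CfgModel, $CfgCtrlMaster, [string]$WorkDir)
        for ($i = 1; $i -lt 3; $i++) {
            $rc = CreateSbtFile $SbtFiles[$i - 1] $CfgModel $CfgCtrlMaster "" "" $SBT_MODE_COMPARE_CFGS_FULL $WorkDir
            # ...start, wait on, and receive the inner job as before...
        }

        # Main script: launch both workers first, then wait, so they truly overlap
        $prodJob = Start-Job -FilePath D:\tmp\CompareWorker.ps1 `
            -ArgumentList 'CompareCtrlMasterProdModelESS', $sbtCompareCtrlMasterProdModel, $cfgProdModel, $cfgCtrlMaster, $workDir
        $ctrlJob = Start-Job -FilePath D:\tmp\CompareWorker.ps1 `
            -ArgumentList 'CompareCtrlMasterCtrlModelESS', $sbtCompareCtrlMasterCtrlModel, $cfgCtrlModel, $cfgCtrlMaster, $workDir
        Wait-Job -Job $prodJob, $ctrlJob | Out-Null
        Receive-Job -Job $prodJob, $ctrlJob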

    Read the article

  • How to get variable output from a remote PSSession

    - by Vinith menon
    I have a script to get virtual hard disk info from VMM. I'm executing it remotely from a server, but currently I'm unable to get the variable values outside of the PSSession, on the local host. Could you please help me with achieving this?

        PS C:\Windows\system32> Enter-PSSession iscvmm02
        [iscvmm02]: PS C:\Users\su\Documents> Add-PSSnapin Microsoft.SystemCenter.VirtualMachineManager
        [iscvmm02]: PS C:\Users\su\Documents> $hide = Get-VMMServer -ComputerName "iscvmm02.corp.avanade.org"
        [iscvmm02]: PS C:\Users\su\Documents> $VM = Get-VM | where { $_.ComputerNameString -contains "idpsm02.corp.air.org" }
        [iscvmm02]: PS C:\Users\su\Documents> $harddisk = $VM.VirtualHardDisks
        [iscvmm02]: PS C:\Users\su\Documents> $h = $harddisk.length
        [iscvmm02]: PS C:\Users\su\Documents> for ($i = 0; $i -lt $h; $i++) {
        >>     New-Variable -Name "HardDiskType_$i" -Value $harddisk[$i].vhdtype
        >>     New-Variable -Name "HardDiskLocation_$i" -Value $harddisk[$i].Location
        >> }
        [iscvmm02]: PS C:\Users\su\Documents> Exit-PSSession
        PS C:\Windows\system32> $harddisktype_0
        PS C:\Windows\system32> $harddisklocation_0

    As you can see, both variables output null values; I'm unable to retain the values.
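
    For reference, the usual fix (a hedged sketch mirroring the names in the question): variables created inside an interactive Enter-PSSession exist only on the remote machine, so return the data instead, for example via Invoke-Command, whose output is serialized back into local variables:

        $disks = Invoke-Command -ComputerName iscvmm02 -ScriptBlock {
            Add-PSSnapin Microsoft.SystemCenter.VirtualMachineManager
            Get-VMMServer -ComputerName 'iscvmm02.corp.avanade.org' | Out-Null
            $vm = Get-VM | Where-Object { $_.ComputerNameString -contains 'idpsm02.corp.air.org' }
            # Emit plain objects; they come back to the caller as deserialized copies
            $vm.VirtualHardDisks | Select-Object VHDType, Location
        }
        $disks[0].VHDType      # now available locally
        $disks[0].Location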

    Read the article

  • Cannot Generate ParameterSetMetadata While Programmatically Creating A Parameter Block

    - by Steven Murawski
    I'm trying to programmatically create a parameter block for a function (along the lines of this blog post). I'm starting with a CommandMetadata object (from an existing function). I can create the ParameterMetadata object and set things like the ParameterType, the name, as well as some attributes. The problem I'm running into is that when I use the GetParamBlock method of the ProxyCommand class, none of the attributes that I set in the Attributes collection of the ParameterMetadata are generated. The problem this causes is that when GetParamBlock is called, the new parameter is not annotated with the appropriate Parameter attribute. Example:

        function test {
            [CmdletBinding()]
            param (
                [Parameter()]
                $InitialParameter
            )
            Write-Host "I don't matter."
        }

        $MetaData = New-Object System.Management.Automation.CommandMetaData (Get-Command test)
        $NewParameter = New-Object System.Management.Automation.ParameterMetadata 'NewParameter'
        $NewParameter.ParameterType = [string[]]
        $Attribute = New-Object System.Management.Automation.ParameterAttribute
        $Attribute.Position = 1
        $Attribute.Mandatory = $true
        $Attribute.ValueFromPipeline = $true
        $NewParameter.Attributes.Add($Attribute)
        $MetaData.Parameters.Add('NewParameter', $NewParameter)
        [System.Management.Automation.ProxyCommand]::GetParamBlock($MetaData)

    Read the article

  • Windows 2008 unable to execute C# PowerShell app, returning an access exception

    - by scope-creep
    Hi, does anybody know why I can't access the folder where my PowerShell scripts are on Windows Server 2008 Enterprise? When I try to create a script with TextPad, it craps out. When I try to execute a C# PowerShell app, which is stored on another Windows 2003 drive, it craps out with an access exception as well. I've set the PowerShell execution policy to Unrestricted for both normal users and admin users, with 'Run as administrator' on PowerShell, but it doesn't seem to make a difference. There must be a policy setting that denies scripts access to a directory, but where is it, and how do I set it? Any help would be appreciated. scope_creep
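
    One thing worth ruling out (my assumption, not from the original post): files copied from another machine can carry a Zone.Identifier alternate data stream marking them as remote, which can trigger access errors and execution-policy prompts even when the policy is Unrestricted. On PowerShell 3.0 or later you can check for and clear the mark; on 2.0, use the file's Properties > Unblock button or the Sysinternals streams tool. The path below is a placeholder:

        Get-ExecutionPolicy -List                            # confirm the effective policy per scope
        Get-Item D:\Scripts\MyApp.exe -Stream Zone.Identifier -ErrorAction SilentlyContinue   # PS 3.0+: is the file marked?
        Unblock-File D:\Scripts\MyApp.exe                    # PS 3.0+: remove the mark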

    Read the article

  • Mysteriously different conversion to string[] of seemingly identical input data

    - by Roman Kuzmin
    While investigating a problem I found that the cause was an unexpectedly different conversion to string[] of seemingly identical input data. Namely, in the code below the two commands both return the same two items, File1.txt and File2.txt, but conversion to string[] gives different results; see the comments. Any ideas why that is? This might be a bug. If anybody else thinks so too, I'll submit it. But it would be nice to understand what's going on and avoid traps like that.

        # *** WARNING
        # *** Make sure you do not have anything in C:\TEMP\Test
        # *** The code creates C:\TEMP\Test with File1.txt, File2.txt

        # Make C:\TEMP\Test and two test files
        $null = mkdir C:\TEMP\Test -Force
        1 | Set-Content C:\TEMP\Test\File1.txt
        1 | Set-Content C:\TEMP\Test\File2.txt

        # This gets just file names
        [string[]](Get-ChildItem C:\TEMP\Test)

        # This gets full file paths
        [string[]](Get-ChildItem C:\TEMP\Test -Include *)

        # Output:
        # File1.txt
        # File2.txt
        # C:\TEMP\Test\File1.txt
        # C:\TEMP\Test\File2.txt
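
    A hedged reading of what is probably happening (not a confirmed answer): the [string[]] cast calls each item's ToString(), and System.IO.FileInfo.ToString() returns whatever path string the object was created with, so the two pipelines can stringify differently even though they enumerate the same files. Picking the property explicitly sidesteps the trap:

        # Explicit properties are unambiguous, whatever ToString() happens to return
        (Get-ChildItem C:\TEMP\Test) | ForEach-Object { $_.Name }
        (Get-ChildItem C:\TEMP\Test -Include *) | ForEach-Object { $_.FullName }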

    Read the article

  • How to get New-PSSession in PowerShell to talk to my ICS-connected laptop for remoting

    - by Scott Bilas
    If I have my laptop on the LAN, then PowerShell remoting works fine from my workstation to the laptop. However, the LAN is wireless, so sometimes I will connect on a wire to my workstation. It has two ethernet ports, so I have the secondary port shared to the laptop using Win7's Internet Connection Sharing. (Btw, I know that avoiding ICS would solve the problem, but that's not an option right now.) So my question is: what magic registry bits or command-line options do I need to flip to get remoting to work to my laptop through ICS? Here's what happens when I try it:

        new-pssession -computername 192.168.137.161

        [192.168.137.161] Connecting to remote server failed with the following error message:
        The WinRM client cannot process the request. Default authentication may be used with an
        IP address under the following conditions: the transport is HTTPS or the destination is
        in the TrustedHosts list, and explicit credentials are provided. Use winrm.cmd to
        configure TrustedHosts. Note that computers in the TrustedHosts list might not be
        authenticated. For more information on how to set TrustedHosts run the following command:
        winrm help config. For more information, see the about_Remote_Troubleshooting Help topic.
            + CategoryInfo          : OpenError: (System.Manageme....RemoteRunspace:RemoteRunspace) [], PSRemotingTransportException
            + FullyQualifiedErrorId : PSSessionOpenFailed

    I'm having a hard time understanding the documentation for PowerShell and WinRM. I've tried messing with allowing ports in the firewall and setting TrustedHosts to * on my workstation (I don't think this is a good idea on the laptop). I have no idea where to go from here and would appreciate any help.
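
    For what it's worth, the error text itself describes the usual recipe (a sketch using the IP from the question; run elevated on the workstation): when connecting by IP address over HTTP, the client must both trust the address and supply explicit credentials.

        # On the client, trust the laptop's ICS address (use -Concatenate to append to an existing list)
        Set-Item WSMan:\localhost\Client\TrustedHosts -Value '192.168.137.161' -Force

        # Then authenticate explicitly instead of relying on Kerberos
        $cred = Get-Credential
        New-PSSession -ComputerName 192.168.137.161 -Credential $cred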

    Read the article

  • Method to integrate Powershell scripts with non-Windows workflow?

    - by Matt Simmons
    I love the smell of new machines in the morning. I'm automating a machine-creation workflow that involves several separate systems across my infrastructure, some of which involve 15-year-old Perl scripts on Solaris hosts, PXE-booting Linux systems, and PowerShell on Windows Server 2008. I can script each of the individual parts, and integrating the Linux and Unix automation is fairly straightforward, but I'm at a loss as to how to reliably tie the PowerShell scripts to the rest of the processes.

    I would prefer if the process began on a Linux host, since I imagine it will end up as a web application living on an Apache server, but if it needs to begin on Windows, I am hesitantly okay with that. I would ideally like something along the lines of psexec for Linux to run against Windows, but the answer in that direction appears to be Cygwin, and as much as I appreciate all of the hard work they put in, it has never felt right, if you know what I mean. It's great for a desktop and gives a lot of functionality, but I feel like Windows servers should be treated like Windows servers and not bastardized Unix machines (which, incidentally, is my argument against OS X servers, too, and they're actually Unix). Anyway, I don't want to go with Cygwin unless that's the last and only option.

    So I guess what I'm asking is if there is a way to execute jobs on Windows machines from Linux. Without Cygwin. I'm open to ideas and suggestions, including "Look idiot, everyone uses Cygwin, so suck it up and deal with it". Thanks in advance!
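
    Not part of the original question, but one route often suggested for this (hedged; verify the flags against your build): winexe, a native Linux tool that executes commands on Windows hosts over SMB/RPC in the style of psexec, and which can launch powershell.exe directly. The host name, credentials, and script path below are placeholders:

        winexe -U 'DOMAIN\deployuser' //winserver 'powershell.exe -ExecutionPolicy Bypass -File C:\scripts\provision.ps1'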

    Read the article

  • The tale of how the PowerShell CmdLets got installed with Azure SDK 1.4

    - by Enrique Lima
    I installed the Azure SDK 1.4 while rebuilding my laptop, then ran the installation for the Windows Azure Service Management (WASM) PowerShell CmdLets. I kicked off the installation script by locating the path to which the WASM PowerShell CmdLets were deployed and double-clicking the startHere command, which opens the WASM installation dialog. Click Next, then Next again. Notice the red X next to Azure SDK 1.3; the problem is that I have SDK 1.4. Here is the workaround: go back to the location of the deployed WASM sources, into the setup path, and from there into scripts > dependencies > check. Locate the CheckAzureSDK.ps1 file, right-click it, and choose Edit. The ps1 file checks for a specific version of the Azure SDK, in this case 1.3.11133.0038; we need it to check for version 1.4.20227.1419 instead. Save the ps1 file, go back to the open WASM install dialog, and click Rescan. This time the check should pass; click Next. A command prompt window will appear; press any key. This completes the installation; click Close.
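
    The post doesn't reproduce the contents of CheckAzureSDK.ps1; the edit amounts to swapping one version literal for the other. As a purely hypothetical sketch of that kind of dependency check (not the shipped script):

        # Hypothetical fragment in the spirit of CheckAzureSDK.ps1
        $requiredSdkVersion = [Version]'1.4.20227.1419'    # edited; the script originally looked for 1.3.11133.0038
        $installedSdkVersion = [Version]'1.4.20227.1419'   # however the script actually detects the installed SDK
        if ($installedSdkVersion -ge $requiredSdkVersion) { 'Dependency check passed' } else { 'Dependency check failed' }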

    Read the article

  • Some PowerShell goodness

    - by KyleBurns
    Ever work somewhere where processes dump files into folders to maintain an archive? Me too, and Windows Explorer hates it. Very often I find myself needing to organize these files into subfolders so that I can go after files without locking up Windows Explorer, and my answer used to be to write a program in something like C# to do the job. These programs will typically enumerate the files in a folder and move each file to a subdirectory named based on a datestamp. The last such program I wrote had to use lower-level Win32 API calls to perform the enumeration, because it appears the standard .NET calls use the same method of enumerating the directories that Windows Explorer chokes on when dealing with a large number of entries in a particular directory, so a simple task was accomplished with a lot of code.

    Of course, this little utility was just something I used to make my life easier and "not a production app", so it was in my local source folder and not source control when my hard drive died. So... I was getting ready to re-create it and thought it might be a good idea to play with PowerShell a bit - something I had been wanting to do but had not yet met a requirement to make me do it. The resulting script was amazingly succinct, and even building in the flexibility for parameterization and adding line breaks for readability kept it only about 25 lines long. Here's the code, with discussion following:

        param(
            [Parameter(
                Mandatory = $false,
                Position = 0,
                HelpMessage = "Root of the folders or share to archive.  Be sure to end with appropriate path separator"
            )]
            [String] $folderRoot = "\\fileServer\pathToFolderWithLotsOfFiles\",

            [Parameter(
                Mandatory = $false,
                Position = 1
            )]
            [int] $days = 1
        )

        dir $folderRoot | ? { (!($_.PsIsContainer)) -and ((get-date) - $_.lastwritetime).totaldays -gt $days } | % {
            [string]$year = $([string]$_.lastwritetime.year)
            [string]$month = $_.lastwritetime.month
            [string]$day = $_.lastwritetime.day
            $dir = $folderRoot + $year + "\" + $month + "\" + $day
            if (!(test-path $dir)) {
                new-item -type container $dir
            }
            Write-Output $_
            move-item $_.fullname $dir
        }

    The script starts by declaring two parameters. The first parameter holds the path to the folder that I am going to be sorting into subdirectories. The path separator is intended to be included in this argument because I didn't want to mess with determining whether the path was local or UNC and picking the right separator in code, but this could easily be improved upon using Path.Combine, since PowerShell has access to the full framework libraries. The second parameter holds a minimum age in days for files to be removed from the root folder.

    The script then pipes the dir command through a filter to include only files (by excluding containers) and, of those, only entries that meet the age requirement based on the last-modified datestamp. For each of those, the datestamp is used to construct a folder name in the format YYYY\MM\DD (if you're in an environment where even a day's worth of files needs further dividing, you could make this more granular) and the folder is created if it does not yet exist. Finally, the file is moved into the directory.

    One of the things that was really cool about using PowerShell for this task is that the new-item command is smart enough to create the entire subdirectory structure with a single call. In previous code that I have written to do this kind of thing, I would have to test the entire tree leading down to the subfolder I wanted, leading to a lot of branching code that detracted from being able to quickly look at the code and understand the job it performs. Overall, I have to say I'm really pleased with what has been done to make PowerShell powerful and useful.

    Read the article

  • Is there a PowerShell way to re-apply a restored password for the IIS IUSR account?

    - by Philippe Monnet
    On one of our IIS web servers, the IUSR account suddenly expired or got corrupted. I recovered the password from the IIS metabase (using Cscript adsutil.vbs get w3svc\anonymoususerpass after switching IsSecureProperty = False), then reset the password accordingly. Now I have to re-key that password on the Directory Security tab (for the anonymous account) of all virtual directories of all web sites on that server. Is there a way to automate this using PowerShell? (I have searched in vain so far.)
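
    Not a verified answer, but a hedged sketch of the ADSI route that the question's own adsutil.vbs usage suggests, walking the IIS 6-style metabase from PowerShell (the property and path names assume the classic metabase, and the password value is a placeholder):

        $newPassword = 'recovered-password-here'
        $w3svc = [ADSI]'IIS://localhost/W3SVC'
        foreach ($site in $w3svc.Children | Where-Object { $_.SchemaClassName -eq 'IIsWebServer' }) {
            $root = [ADSI]"$($site.Path)/Root"
            $root.Put('AnonymousUserPass', $newPassword)   # the same property adsutil reads
            $root.SetInfo()
        }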

    Read the article

  • How to get the parent node of a given child node using PowerShell?

    - by kumar
    I am new to PowerShell. I have to write a script which, given a child node's value, returns the name of its first parent node. I have the following XML. When I pass my PowerShell script the value "AAA", it should return "parent2", and when I pass "III", it should return "parent311". Can someone help me write this script?

    XML:

        <root>
          <parent>
            <parent2>
              <child>AAA</child>
            </parent2>
            <parent3>
              <child>BBB</child>
              <child>CCC</child>
              <child>DDD</child>
            </parent3>
            <parent4>
              <child>EEE</child>
            </parent4>
            <parent5>
              <child>FFF</child>
            </parent5>
          </parent>
          <parent21>
            <parent211>
              <child>GGG</child>
            </parent211>
            <parent311>
              <child>HHH</child>
              <child>III</child>
              <child>JJJ</child>
            </parent311>
            <parent411>
              <child>KKK</child>
            </parent411>
            <parent511>
              <child>LLL</child>
            </parent511>
          </parent21>
        </root>
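
    A hedged sketch of one way to do this with an XPath lookup (the file path is a placeholder):

        param([string]$ChildValue = 'AAA')
        [xml]$xml = Get-Content C:\temp\sample.xml
        # Find the first <child> element whose text matches, then report its parent's tag name
        $node = $xml.SelectSingleNode("//child[. = '$ChildValue']")
        if ($node) { $node.ParentNode.Name }    # 'parent2' for AAA, 'parent311' for III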

    Read the article

  • What's the difference between Defaults and Properties when changing cmd.exe or PowerShell's settings?

    - by mmm bacon
    I want to change the font used by both cmd.exe and PowerShell. When I right-click on the window border, I see both Defaults and Properties. What's the difference? One would think Defaults was for all sessions and Properties was for the current session; however, changes to Properties are persisted even after relaunching cmd.exe. Another problem is that changing the font in either Defaults or Properties doesn't actually change the font. This is on Windows 8.
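
    For background (known console behavior, though the font quirk itself may be a separate bug): Defaults writes the global console settings stored under HKCU\Console, which apply to any console window without an override; Properties saves an override, into the shortcut's .lnk file when the window was launched from a shortcut, or into a per-title subkey of HKCU\Console otherwise, which is why Properties changes survive relaunches.

        # Global defaults
        Get-ItemProperty HKCU:\Console | Select-Object FaceName, FontFamily, FontSize
        # Per-window overrides show up as subkeys named after the launching title/path
        Get-ChildItem HKCU:\Console | Select-Object PSChildName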

    Read the article

  • Is it possible to open an Active Directory or Exchange Management Console user dialog directly from PowerShell?

    - by Myrddin Emrys
    I'd like to be able to launch either the AD user dialog or the EMC mailbox dialog directly from a PowerShell script, opened to a specific user. The workflow goes something to the effect of "Does everything look correct on this user? Y/N", with Y continuing on and N bringing up the account to edit. There's no reason to completely duplicate the functionality of these dialogs. I don't mind requiring that EMC or ADU&C already be open before the script is run, if necessary.

    Read the article

  • Does the PowerShell cmdlet add to or replace out-of-office settings in Exchange 2007?

    - by boost
    When using PowerShell to set Out-of-Office in Exchange 2007, do multiple commands containing -StartTime and -EndTime add to some internal list that Exchange maintains, or does each successive command replace the previous one? For example, we have a staffer who is only in the office on Tuesdays and Fridays. We'd like to set up Exchange to send an Out-of-Office message to all internal senders on the days when he's not in. How is this best done?

    Read the article
