Search Results

Search found 2498 results on 100 pages for 'powershell v4 0'.

Page 41 of 100

  • SQL Server SMO connection timeout not working

    - by Uros Calakovic
    I have the following PowerShell code:

        function Get-SmoConnection {
            param([string] $serverName = "", [int] $connectionTimeout = 0)
            if ($serverName.Length -eq 0) {
                $serverConnection = New-Object `
                    Microsoft.SqlServer.Management.Common.ServerConnection
            }
            else {
                $serverConnection = New-Object `
                    Microsoft.SqlServer.Management.Common.ServerConnection($serverName)
            }
            if ($connectionTimeout -ne 0) {
                $serverConnection.ConnectTimeout = $connectionTimeout
            }
            try {
                $serverConnection.Connect()
                $serverConnection
            }
            catch [System.Management.Automation.MethodInvocationException] {
                $null
            }
        }

        $connection = Get-SmoConnection "ServerName" 2
        if ($connection -ne $null) {
            Write-Host $connection.ServerInstance
            Write-Host $connection.ConnectTimeout
        }
        else {
            Write-Host "Connection could not be established"
        }

    It seems to work, except for the part that attempts to set the SMO connection timeout. If the connection succeeds, I can verify that ServerConnection.ConnectTimeout is set to 2 (seconds), but when I supply a bogus name for the SQL Server instance, it still attempts to connect for ~15 seconds (which I believe is the default timeout value). Does anyone have experience with setting the SMO connection timeout? Thank you in advance.
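
    For diagnosis, a minimal sketch (assuming the SMO assemblies are already loaded, and using a deliberately bogus server name) that checks whether the ConnectTimeout property actually reaches the underlying connection string before Connect() is called:

        $conn = New-Object Microsoft.SqlServer.Management.Common.ServerConnection("BogusServer")
        $conn.ConnectTimeout = 2

        # If the property took effect, "Connect Timeout=2" should appear here;
        # if it does not, the 15-second default will govern the actual attempt.
        $conn.ConnectionString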

    Read the article

  • Programmatically access document properties

    - by ngm
    Is there a way I can programmatically access the document properties of a Word 2007 document? I am open to using any language for this, but ideally it would be via a PowerShell script. My overall aim is to traverse the documents somewhere on a filesystem, parse some document properties from these documents, and then collate all of these properties back together into a new Word document. (I essentially want to automatically create a document that lists all documents beneath a certain folder of the filesystem; for each document, the list would contain such things as the Title, Abstract and Author document properties, the CreateDate field, etc.)
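
    One possible route, sketched below with PowerShell and the Word COM object (the file path is a placeholder; note that BuiltInDocumentProperties is late-bound, so item access has to go through InvokeMember):

        $word = New-Object -ComObject Word.Application
        $word.Visible = $false
        $doc = $word.Documents.Open("C:\docs\example.docx")

        # Read one built-in property ("Title") via late binding
        $props = $doc.BuiltInDocumentProperties
        $prop = [System.__ComObject].InvokeMember("Item", `
            [System.Reflection.BindingFlags]::GetProperty, $null, $props, "Title")
        $title = [System.__ComObject].InvokeMember("Value", `
            [System.Reflection.BindingFlags]::GetProperty, $null, $prop, $null)
        $title

        $doc.Close($false)
        $word.Quit()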

    Read the article

  • Which tool / technology: System management for databases and dependent services

    - by Filburt
    A follow-up on this system management question: since I probably will not get much feedback on Server Fault, I'll give it a try here. My main concern is to reflect the dependencies between the databases, services and tasks/jobs I'll have to manage. Besides considering PowerShell, I even thought about using MSBuild, because it would allow for modeling dependencies and reusing configuration targets. In other words: what technology should I use to develop a flexible solution that will allow me to stop services A, B and C on machine D in the right order, and disable task E on machine F, when taking down database X?
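
    For the plain-PowerShell end of that spectrum, a rough sketch (machine, service and task names are all hypothetical) of encoding the shutdown order as data:

        # Order matters: dependents first, then what they depend on
        $steps = @(
            @{ Computer = "MachineD"; Service = "ServiceC" },
            @{ Computer = "MachineD"; Service = "ServiceB" },
            @{ Computer = "MachineD"; Service = "ServiceA" }
        )
        foreach ($step in $steps) {
            (Get-WmiObject Win32_Service -ComputerName $step.Computer `
                -Filter "Name='$($step.Service)'").StopService() | Out-Null
        }

        # Disable the dependent scheduled task on machine F
        schtasks /Change /S MachineF /TN "TaskE" /DISABLE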

    Read the article

  • Can anyone explain why the 1st example gets different results than the following 2?

    - by klumsy
    $b = (2,3)

        $myarray1 = @(,$b,$b)
        $myarray1[0].length    # this will be 1
        $myarray1[1].length

        $myarray2 = @(
            ,$b
            ,$b
        )
        $myarray2[0].length    # this will be 2
        $myarray2[1].length

        $myarray3 = @(,$b
            ,$b
        )
        $myarray3[0].length    # this will be 2
        $myarray3[1].length

    UPDATE: I think on #powershell IRC we have worked it out. Here is another example that demonstrates the danger of breaking with the comma on the following line, rather than on the top line, when listing multiple items in an array over multiple lines:

        $b = (1..20)
        $a = @( $b, $b ,$b,
            $b, $b ,$b)
        for ($i = 0; $i -lt $a.length; $i++) { $a[$i].length }
        "--------"
        $a = @( $b, $b ,$b
            ,$b, $b ,$b)
        for ($i = 0; $i -lt $a.length; $i++) { $a[$i].length }

    produces

        20
        20
        20
        20
        20
        20
        --------
        20
        20
        20
        1
        20
        20

    I'm curious how people will explain this. I think I understand it now, but would have trouble explaining it in a concise, understandable fashion, though the above example goes somewhat towards that goal.
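
    For what it's worth, a hedged explanation: PowerShell ends a statement at a newline whenever the text so far is a complete expression. A trailing comma keeps the expression open, so the list continues onto the next line; a leading comma starts a new statement, and there the comma is the unary array operator, wrapping its operand in a one-element array. A minimal illustration:

        $b = 2,3

        # One statement: unary comma binds first, so this is ((,$b), $b);
        # the first element is a one-element array containing $b.
        $one = @(,$b,$b)
        $one[0].Length    # 1

        # Two statements: each ",$b" is unrolled by the pipeline and emits $b itself.
        $two = @(
            ,$b
            ,$b
        )
        $two[0].Length    # 2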

    Read the article

  • Path with no slash after drive letter and colon - what does it point to?

    - by ya23
    I mistyped a path and instead of c:\foo.txt wrote c:foo.txt. I expected it either to fail or to resolve to c:\foo.txt, but instead it seems to resolve to foo.txt in the current user's home folder. PowerShell returns:

        PS C:\> [System.IO.Path]::GetFullPath("c:\foo.txt")
        c:\foo.txt
        PS C:\> [System.IO.Path]::GetFullPath("c:foo.txt")
        C:\Users\Administrator\foo.txt
        PS C:\> [System.IO.Path]::GetFullPath("g:foo.txt")
        G:\foo.txt

    Running explorer.exe from the command line and passing it any of the above results in C:\Users\Administrator\Documents being opened. I haven't found any documentation of this and I'm utterly confused; please explain the behaviour.
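
    A plausible reading: c:foo.txt is a drive-relative path, a DOS-era form that resolves against that drive's current directory. Note also that PowerShell's location and the .NET process-wide current directory are separate things, which may explain the home-folder result even when the prompt shows C:\. A quick check:

        # PowerShell's location:
        Get-Location

        # The current directory that .NET's GetFullPath actually resolves against:
        [Environment]::CurrentDirectory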

    Read the article

  • SMO restore of SQL database doesn't overwrite

    - by Tom H.
    I'm trying to restore a database from a backup file using SMO. If the database does not already exist then it works fine. However, if the database already exists then I get no errors, but the database is not overwritten. The "restore" process still takes just as long, so it looks like it's working and doing a restore, but in the end the database has not changed. I'm doing this in PowerShell using SMO. The code is a bit long, but I've included it below. You'll notice that I do set $restore.ReplaceDatabase = $true. Also, I use a try-catch block and report on any errors (I hope), but none are returned. Any obvious mistakes? Is it possible that I'm not reporting some error and it's being hidden from me? Thanks for any help or advice that you can give!

        function Invoke-SqlRestore {
            param(
                [string]$backup_file_name,
                [string]$server_name,
                [string]$database_name,
                [switch]$norecovery=$false
            )
            # Get a new connection to the server
            [Microsoft.SqlServer.Management.Smo.Server]$server = New-SMOconnection -server_name $server_name
            Write-Host "Starting restore to $database_name on $server_name."
            Try {
                $backup_device = New-Object("Microsoft.SqlServer.Management.Smo.BackupDeviceItem") ($backup_file_name, "File")
                # Get local paths to the Database and Log file locations
                If ($server.Settings.DefaultFile.Length -eq 0) { $database_path = $server.Information.MasterDBPath }
                Else { $database_path = $server.Settings.DefaultFile }
                If ($server.Settings.DefaultLog.Length -eq 0) { $database_log_path = $server.Information.MasterDBLogPath }
                Else { $database_log_path = $server.Settings.DefaultLog }

                # Load up the Restore object settings
                $restore = New-Object Microsoft.SqlServer.Management.Smo.Restore
                $restore.Action = 'Database'
                $restore.Database = $database_name
                $restore.ReplaceDatabase = $true
                if ($norecovery.IsPresent) { $restore.NoRecovery = $true }
                Else { $restore.Norecovery = $false }
                $restore.Devices.Add($backup_device)

                # Get information from the backup file
                $restore_details = $restore.ReadBackupHeader($server)
                $data_files = $restore.ReadFileList($server)

                # Restore all backup files
                ForEach ($data_row in $data_files) {
                    $logical_name = $data_row.LogicalName
                    $physical_name = Get-FileName -path $data_row.PhysicalName
                    $restore_data = New-Object("Microsoft.SqlServer.Management.Smo.RelocateFile")
                    $restore_data.LogicalFileName = $logical_name
                    if ($data_row.Type -eq "D") {
                        # Restore Data file
                        $restore_data.PhysicalFileName = $database_path + "\" + $physical_name
                    }
                    Else {
                        # Restore Log file
                        $restore_data.PhysicalFileName = $database_log_path + "\" + $physical_name
                    }
                    [Void]$restore.RelocateFiles.Add($restore_data)
                }

                $restore.SqlRestore($server)

                # If there are two files, assume the next is a Log
                if ($restore_details.Rows.Count -gt 1) {
                    $restore.Action = [Microsoft.SqlServer.Management.Smo.RestoreActionType]::Log
                    $restore.FileNumber = 2
                    $restore.SqlRestore($server)
                }
            }
            Catch {
                $ex = $_.Exception
                Write-Output $ex.message
                $ex = $ex.InnerException
                while ($ex.InnerException) {
                    Write-Output $ex.InnerException.message
                    $ex = $ex.InnerException
                }
                Throw $ex
            }
            Finally {
                $server.ConnectionContext.Disconnect()
            }
            Write-Host "Restore ended without any errors."
        }
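
    One way to see what the server is actually doing during that long-running "restore" is to subscribe to the Restore object's progress and completion events before calling SqlRestore; a diagnostic sketch, not a fix:

        # Report progress and any server messages that the try/catch may not surface
        $restore.add_PercentComplete({ param($sender, $e) Write-Host ("{0}% complete" -f $e.Percent) })
        $restore.add_Complete({ param($sender, $e) if ($e.Error) { Write-Host $e.Error.Message } })
        $restore.SqlRestore($server)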

    Read the article

  • Using HttpClient with the RightScale API

    - by Ameer Deen
    I'm trying to use the WCF REST Starter Kit with RightScale's login API, which seems fairly simple to use. Edit: here's a blog entry I wrote on using PowerShell to consume the API. Edit: created a generic .NET wrapper for the RightScale API - NRightAPI. It's exactly as simple as it looks while using cURL. In order to obtain a login cookie, all I need to do is:

        curl -v -c rightcookie -u username:password "https://my.rightscale.com/api/acct/accountid/login?api_version=1.0"

    And I receive the following cookie:

        HTTP/1.1 204 No Content
        Date: Fri, 25 Dec 2009 12:29:24 GMT
        Server: Mongrel 1.1.3
        Status: 204 No Content
        X-Runtime: 0.06121
        Content-Type: text/html; charset=utf-8
        Content-Length: 0
        Cache-Control: no-cache
        Added cookie _session_id="488a8d9493579b9473fbcfb94b3a7b8e5e3" for domain my.rightscale.com, path /, expire 0
        Set-Cookie: _session_id=488a8d9493579b9473fbcfb94b3a7b8e5e3; path=/; secure
        Vary: Accept-Encoding

    However, when I use the following C# code:

        HttpClient http = new HttpClient("https://my.rightscale.com/api/accountid/login?api_version=1.0");
        http.TransportSettings.UseDefaultCredentials = false;
        http.TransportSettings.MaximumAutomaticRedirections = 0;
        http.TransportSettings.Credentials = new NetworkCredential("username", "password");
        Console.WriteLine(http.Get().Content.ReadAsString());

    instead of an HTTP 204, I get a redirect:

        You are being <a href="https://my.rightscale.com/dashboard">redirected</a>

    How do I get the WCF REST Starter Kit working with the RightScale API?
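
    One thing worth checking, sketched below: NetworkCredential only answers a 401 challenge, and a redirect-to-login response suggests the API never issued one, so sending the Basic auth header preemptively may behave more like curl -u. (The header-adding API of the Starter Kit's Microsoft.Http.HttpClient is assumed here; verify against your build. Note also the /acct/ segment present in the curl URL but missing from the C# one.)

        HttpClient http = new HttpClient("https://my.rightscale.com/api/acct/accountid/login?api_version=1.0");
        string token = Convert.ToBase64String(Encoding.ASCII.GetBytes("username:password"));
        http.DefaultHeaders.Add("Authorization", "Basic " + token);
        HttpResponseMessage response = http.Get();
        Console.WriteLine(response.StatusCode);   // expecting 204, with Set-Cookie on success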

    Read the article

  • In WMI, can I use a join (or something similar) to acquire the IisWebServer object for a site, given

    - by Precipitous
    Given a server name and a physical path, I'd like to be able to hunt down the IISWebServer object and ApplicationPool. A website URL is also an acceptable input. Our technologies are IIS 6, WMI, and access via C# or PowerShell 2. I'm certain this would be easier with IIS 7 and its managed API; we don't have that yet. Here's what I can do: get a list of IIS virtual directories from IISWebVirtualDirSetting and filter (offline) for the matching physical path:

        $theVirtualDir = gwmi -Namespace "root/MicrosoftIISv2" `
            -ComputerName $servername -Authentication PacketPrivacy `
            -Class "IISWebVirtualDirSetting" `
            | Where-Object { $_.Path -like $deployLocation }

    From the virtual directory object, I can get a name (like W3SVC/40565456/root). Given this name, I can get to other goodies, such as the IIS web server object:

        gwmi -Namespace "root/MicrosoftIISv2" `
            -ComputerName $servername `
            -Authentication PacketPrivacy `
            -Query "SELECT * FROM IisWebServer WHERE Name='W3SVC/40589473'"

    The questions, restated: 1) This is a query language. Can I join or subquery so that one WMI query statement gets web servers based on IISWebVirtualDir.Path? How? 2) In solving 1, you'll have to explain how to query on the Path property. Why is this an invalid query?

        "SELECT * FROM IISWebVirtualDirSetting WHERE Path='D:\sites\globaldominator'"
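
    On question 2, a likely culprit: WQL treats the backslash as an escape character, so literal backslashes in a string need doubling. A sketch:

        gwmi -Namespace "root/MicrosoftIISv2" -ComputerName $servername `
            -Authentication PacketPrivacy `
            -Query "SELECT * FROM IISWebVirtualDirSetting WHERE Path='D:\\sites\\globaldominator'"

    On question 1, WQL has no JOIN; the closest built-in construct is an ASSOCIATORS OF query against a specific instance, so a single cross-class query keyed on Path is probably not possible.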

    Read the article

  • Can Win32_NetworkAdapterConfiguration.EnableStatic() be used to set more than one IP address?

    - by Andrew J. Brehm
    I ran into this problem in a Visual Basic program that uses WMI, but could confirm it in PowerShell. Apparently the EnableStatic() method can only be used to set one IP address, despite taking two parameters, IP address(es) and subnet mask(s), that are arrays. I.e.:

        $a = get-wmiobject win32_networkadapterconfiguration -computername myserver

    This gets me an array of all network adapters on "myserver". After selecting a specific one ($a=$a[14] in this case), I can run $a.EnableStatic(), which has this signature:

        System.Management.ManagementBaseObject EnableStatic(System.String[] IPAddress, System.String[] SubnetMask)

    I thought this implied that I could set several IP addresses like this:

        $ips = "192.168.1.42","192.168.1.43"
        $a.EnableStatic($ips, "255.255.255.0")

    But this call fails. However, this call works:

        $a.EnableStatic($ips[0], "255.255.255.0")

    It looks to me as if EnableStatic() really takes two strings rather than two arrays of strings as parameters. In Visual Basic it's more complicated and arrays must be passed, but the method appears to take into account only the first element of each array. Am I confused again or is there some logic here?
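
    One hedged guess worth testing: the two arrays may need the same element count, one subnet mask per address, so the failing call might be fixed by supplying a mask for each IP:

        $ips   = "192.168.1.42","192.168.1.43"
        $masks = "255.255.255.0","255.255.255.0"   # one mask per address
        $a.EnableStatic($ips, $masks)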

    Read the article

  • In need of a Smarter Environmental Package Configuration

    - by Jeremy Liberman
    I am trying to set up a package template in SSIS, following the Wrox Programmer to Programmer book, SQL Server 2008 Integration Services: Problem - Design - Solution. I'm really liking this book, even though it covers 2008 and we're using SQL Server 2005. I've got a working package template that uses an Indirect XML package configuration to identify what environment (local developer, dev, QA, production, etc.) the package is being run in. That locates the SQL Server package configuration for the environment. That set-up is great, except for the environment variable at the very front of it all. My team would prefer it if the package could use the same environment resource locator as all our other applications and tools, so we don't have two environment markers with essentially the same information in them. Normally we look up a registry key in HKEY_LOCAL_MACHINE, but the Registry Package Configuration type only lets you look up HKEY_CURRENT_USER registry keys. My first thought was to write a new Package Configuration Type class that extends the Registry type; after all, we'd had such luck writing our own custom log provider, and SSIS is super-extensible, right? Yet there doesn't seem to be a way to write your own Package Configuration Types. Is there still some way I can configure my SSIS SQL Server package configuration from an HKLM registry key connection string? If this is not possible, what other workarounds are available? My idea is to write a PowerShell script that will create/modify the environment variable that the package will use, by fetching the connection string from the registry. This way there are still two markers, but at least the second one is automatically maintained. Is this kind of workaround necessary? Thank you for your time.
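
    A minimal sketch of that PowerShell workaround (the key path, value name and variable name are all hypothetical, and machine-scope writes require an elevated shell):

        # Read the connection string our applications already use...
        $connStr = (Get-ItemProperty "HKLM:\SOFTWARE\MyCompany\Environment").ConfigConnectionString

        # ...and mirror it into the environment variable the SSIS template reads
        [Environment]::SetEnvironmentVariable("SSIS_CONFIG", $connStr, "Machine")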

    Read the article

  • How can I get penetration depth from Minkowski Portal Refinement / Xenocollide?

    - by Raven Dreamer
    I recently got an implementation of Minkowski Portal Refinement (MPR) successfully detecting collision. Even better, my implementation returns a good estimate (local minimum) direction for the minimum penetration depth. So I took a stab at adjusting the algorithm to return the penetration depth in an arbitrary direction, and was modestly successful: my altered method works splendidly for face-edge collision resolution! What it doesn't currently do is correctly provide the minimum penetration depth for edge-edge scenarios, such as the case on the right. What I perceive to be happening is that my current method returns the minimum penetration depth to the nearest vertex, which works fine when the collision is actually occurring on the plane of that vertex, but not when the collision happens along an edge. Is there a way I can alter my method to return the penetration depth to the point of collision, rather than the nearest vertex? Here's the method that's supposed to return the minimum penetration distance along a specific direction:

        public static Vector3 CalcMinDistance(List<Vector3> shape1, List<Vector3> shape2, Vector3 dir)
        {
            // holding variables
            Vector3 n = Vector3.zero;
            Vector3 swap = Vector3.zero;

            // v0 = center of Minkowski sum
            v0 = Vector3.zero;

            // Avoid case where centers overlap -- any direction is fine in this case
            //if (v0 == Vector3.zero) return Vector3.zero; // always pass in a valid direction

            // v1 = support in direction of origin
            n = -dir;

            // get the difference of the Minkowski sum
            Vector3 v11 = GetSupport(shape1, -n);
            Vector3 v12 = GetSupport(shape2, n);
            v1 = v12 - v11;

            // if the support point is not in the direction of the origin
            if (v1.Dot(n) <= 0)
            {
                //Debug.Log("Could find no points this direction");
                return Vector3.zero;
            }

            // v2 - support perpendicular to v1,v0
            n = v1.Cross(v0);
            if (n == Vector3.zero)
            {
                // v1 and v0 are parallel, which means
                // the direction leads directly to an endpoint
                n = v1 - v0;
                // shortest distance is just n
                //Debug.Log("2 point return");
                return n;
            }

            // get the new support point
            Vector3 v21 = GetSupport(shape1, -n);
            Vector3 v22 = GetSupport(shape2, n);
            v2 = v22 - v21;
            if (v2.Dot(n) <= 0)
            {
                // can't reach the origin in this direction, ergo, no collision
                //Debug.Log("Could not reach edge?");
                return Vector2.zero;
            }

            // Determine whether origin is on + or - side of plane (v1,v0,v2)
            // tests line segments v0v1 and v0v2
            n = (v1 - v0).Cross(v2 - v0);
            float dist = n.Dot(v0);

            // If the origin is on the - side of the plane, reverse the direction of the plane
            if (dist > 0)
            {
                // swap the winding order of v1 and v2
                swap = v1; v1 = v2; v2 = swap;
                // swap the winding order of v11 and v12
                swap = v12; v12 = v11; v11 = swap;
                // swap the winding order of v21 and v22
                swap = v22; v22 = v21; v21 = swap;
                // and swap the plane normal
                n = -n;
            }

            ///
            // Phase One: Identify a portal
            while (true)
            {
                // Obtain the support point in a direction perpendicular to the existing plane
                // Note: This point is guaranteed to lie off the plane
                Vector3 v31 = GetSupport(shape1, -n);
                Vector3 v32 = GetSupport(shape2, n);
                v3 = v32 - v31;
                if (v3.Dot(n) <= 0)
                {
                    // can't enclose the origin within our tetrahedron
                    //Debug.Log("Could not reach edge after portal?");
                    return Vector3.zero;
                }

                // If origin is outside (v1,v0,v3), then eliminate v2 and loop
                if (v1.Cross(v3).Dot(v0) < 0)
                {
                    // failed to enclose the origin, adjust points
                    v2 = v3; v21 = v31; v22 = v32;
                    n = (v1 - v0).Cross(v3 - v0);
                    continue;
                }

                // If origin is outside (v3,v0,v2), then eliminate v1 and loop
                if (v3.Cross(v2).Dot(v0) < 0)
                {
                    // failed to enclose the origin, adjust points
                    v1 = v3; v11 = v31; v12 = v32;
                    n = (v3 - v0).Cross(v2 - v0);
                    continue;
                }

                bool hit = false;

                ///
                // Phase Two: Refine the portal
                int phase2 = 0;

                // We are now inside of a wedge...
                while (phase2 < 20)
                {
                    phase2++;

                    // Compute normal of the wedge face
                    n = (v2 - v1).Cross(v3 - v1);
                    n.Normalize();

                    // Compute distance from origin to wedge face
                    float d = n.Dot(v1);

                    // If the origin is inside the wedge, we have a hit
                    if (d > 0)
                    {
                        //Debug.Log("Do plane test here");
                        float T = n.Dot(v2) / n.Dot(dir);
                        Vector3 pointInPlane = (dir * T);
                        return pointInPlane;
                    }

                    // Find the support point in the direction of the wedge face
                    Vector3 v41 = GetSupport(shape1, -n);
                    Vector3 v42 = GetSupport(shape2, n);
                    v4 = v42 - v41;

                    float delta = (v4 - v3).Dot(n);
                    float separation = -(v4.Dot(n));
                    if (delta <= kCollideEpsilon || separation >= 0)
                    {
                        //Debug.Log("Non-convergance detected");
                        //Debug.Log("Do plane test here");
                        return Vector3.zero;
                    }

                    // Compute the tetrahedron dividing face (v4,v0,v1)
                    float d1 = v4.Cross(v1).Dot(v0);
                    // Compute the tetrahedron dividing face (v4,v0,v2)
                    float d2 = v4.Cross(v2).Dot(v0);
                    // Compute the tetrahedron dividing face (v4,v0,v3)
                    float d3 = v4.Cross(v3).Dot(v0);

                    if (d1 < 0)
                    {
                        if (d2 < 0)
                        {
                            // Inside d1 & inside d2 ==> eliminate v1
                            v1 = v4; v11 = v41; v12 = v42;
                        }
                        else
                        {
                            // Inside d1 & outside d2 ==> eliminate v3
                            v3 = v4; v31 = v41; v32 = v42;
                        }
                    }
                    else
                    {
                        if (d3 < 0)
                        {
                            // Outside d1 & inside d3 ==> eliminate v2
                            v2 = v4; v21 = v41; v22 = v42;
                        }
                        else
                        {
                            // Outside d1 & outside d3 ==> eliminate v1
                            v1 = v4; v11 = v41; v12 = v42;
                        }
                    }
                }
                return Vector3.zero;
            }
        }

    Read the article

  • Employee Info Starter Kit (v4.0.0) - Visual Studio 2010 and .NET 4.0 Version is Available Now

    Employee Info Starter Kit is an ASP.NET-based web application with a deliberately simple set of user requirements: create, read, update and delete (CRUD) the employee info of a company. Based on just a single database table, it explores and solves most of the major problems in the web development architectural space. This open source starter kit extensively uses major features available in the latest Visual Studio, ASP.NET and SQL Server to make robust, scalable, secured and maintainable web applications.

    Read the article

  • How to update all the SSIS packages' Connection Managers in a BIDS project with PowerShell

    - by Luca Zavarella
    During the development of a BI solution, we all know that 80% of the time is spent in the ETL (Extract, Transform, Load) phase. If you use the BI stack provided by Microsoft SQL Server, this step is accomplished by developing n Integration Services (SSIS) packages. In general, the number of packages made in the ETL phase for a non-trivial BI solution is quite significant. An SSIS package, therefore, extracts data from a source, "hammers" :) the data and then transfers it to a specific destination. Very often the connection to the source data is the same for all packages. Using Integration Services, this results in having the same Connection Manager (perhaps with the same name) for all packages. The source data of my BI solution comes from a Helper database (HLP); so, for each package that imports this data, I have the HLP Connection Manager (the use of a Shared Data Source is not recommended, because the Connection String is hard-wired, and therefore you have to open the SSIS project and use the proper wizard to change it...). In order to change the HLP Connection String at runtime, we could use Package Configurations, or we could run our packages with DTLoggedExec by Davide Mauri (a must-have if you are developing with SQL Server 2005/2008). But my need was to change all the HLP connections in all packages within the SSIS Visual Studio project, because I had to version them through Team Foundation Server (TFS). A good scribe with a lot of patience would have changed all the connections by hand, double-clicking the HLP Connection Manager of each package and then changing the referenced server/database. Not being endowed with such virtues :) I took just a little time to write a small script in PowerShell, exploiting the fact that an SSIS package (a .dtsx file) is nothing but an XML file, and therefore can be changed quite easily. I'm not a guru of PowerShell, but I managed more or less to put together the following lines of code:

        $LeftDelimiterString = "Initial Catalog="
        $RightDelimiterString = ";Provider="
        $ToBeReplacedString = "AstarteToBeReplaced"
        $ReplacingString = "AstarteReplacing"
        $MainFolder = "C:\MySSISPackagesFolder"

        $files = Get-ChildItem "$MainFolder" *.dtsx `
            | Where-Object { !($_.PSIsContainer) }

        foreach ($file in $files) {
            (Get-Content $file.FullName) `
                | % { $_ -replace "($LeftDelimiterString)($ToBeReplacedString)($RightDelimiterString)", "`$1$ReplacingString`$3" } `
                | Set-Content $file.FullName;
        }

    The script above just opens every SSIS package (.dtsx) in the supplied folder, then for each of them goes in search of the following text:

        Initial Catalog=AstarteToBeReplaced;Provider=

    and replaces it with this:

        Initial Catalog=AstarteReplacing;Provider=

    I won't go into the details of each cmdlet used; I leave the reader to look those up. Alternatively, you can use the specific object model exposed in some .NET assemblies provided by Integration Services, or you can use the Pacman utility. Enjoy! :) P.S. Using TFS as the versioning system, before running the script I checked out the packages and, after the script executed successfully, I checked them in.
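
    One hedged caveat to the script above: a Get-Content/Set-Content round-trip can silently change the file encoding, and .dtsx files are XML. If your packages are UTF-8, pinning the encoding on the way out is safer:

        (Get-Content $file.FullName) `
            | % { $_ -replace "($LeftDelimiterString)($ToBeReplacedString)($RightDelimiterString)", "`$1$ReplacingString`$3" } `
            | Set-Content $file.FullName -Encoding UTF8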

    Read the article

  • Replacement for Vern Buerg's list.com in 64 bit Windows 7

    - by Kevin
    I would like to find a replacement for list.com, specifically for its ability to accept piped input. For example:

        p4 sync -n | list

    which accepts the output of the Perforce command and displays the results in the viewer/editor for manipulation or saving. I know that I could send the output to a file and then open the file in the viewer/editor, but I use it for temporary results. List.com doesn't work on 64-bit Windows 7.

    Read the article

  • How to suppress output in R

    - by Derek
    Hi, I would like to suppress output in R when I run my R script from the command prompt. I tried numerous options, including "--slave" and "--vanilla", but those options only lessen the amount of text output. I also tried to pipe the output to "NUL", but that didn't help. Thanks a lot, Derek
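
    A couple of hedged things to try, assuming the script is run with Rscript on Windows:

        # From PowerShell, discard what the script writes:
        Rscript myscript.R | Out-Null     # standard output only
        Rscript myscript.R *> $null       # all streams (PowerShell 3.0+)

        # From cmd.exe, redirect both stdout and stderr:
        Rscript myscript.R > NUL 2>&1

    Note that R writes messages and warnings to stderr, which is why piping stdout alone to "NUL" can appear not to help.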

    Read the article

  • Calling via adb in PowerShell

    - by Imran Nasir
    As you may know, the command for calling via adb is:

        .\adb.exe shell am start -a android.intent.action.CALL tel:"656565"

    This works well, but when I use a textbox, it takes a garbage value:

        .\adb.exe shell am start -a android.intent.action.CALL tel:$textbox1.Text

    I have tried this also, but it failed:

        $button21_Click={
            #TODO: Place custom script here
            $textbox1.Clear
            .\adb.exe shell am start -a android.intent.action.CALL tel:$textbox1.Text
        }

    Please help.
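
    A likely explanation, with a sketch: in a bare command argument, PowerShell expands $textbox1 but not the .Text property access, so the literal string ".Text" gets appended. Evaluating the property first, or using the $() subexpression operator inside a quoted string, should pass the real number:

        $number = $textbox1.Text
        .\adb.exe shell am start -a android.intent.action.CALL "tel:$number"

        # or inline:
        .\adb.exe shell am start -a android.intent.action.CALL "tel:$($textbox1.Text)"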

    Read the article

  • PS using Get-WinEvent with FilterXPath and datetime variables?

    - by Jordan W.
    I'm grabbing a handful of events from an event log in chronological order. I don't want to pipe to Where-Object; I want to use Get-WinEvent. After I get Event1, I need to get the first instance of another event that occurs some unknown amount of time after Event1, then grab Event3, which occurs sometime after Event2, etc. Basically, starting with:

        $filterXML = @'
        <QueryList>
          <Query Id="0" Path="System">
            <Select Path="System">*[System[Provider[@Name='Microsoft-Windows-Kernel-General'] and (Level=4 or Level=0) and (EventID=12)]]</Select>
          </Query>
        </QueryList>
        '@
        $event1 = (Get-WinEvent -ComputerName $PCname -MaxEvents 1 -FilterXml $filterXML).TimeCreated

    gives me the datetime of Event1. Then I want to do something like:

        Get-WinEvent -LogName "System" -MaxEvents 1 -FilterXPath "*[EventData[Data = 'Windows Management Instrumentation' and TimeCreated -gt $event1]]"

    Obviously the TimeCreated part there doesn't work, but I hope you get what I'm trying to do. Any help?
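
    A hedged sketch of one way to express the time constraint: the event log XPath dialect compares TimeCreated via its @SystemTime attribute, which is UTC in ISO 8601 form, so the captured datetime needs converting and embedding (the exact timestamp format accepted is an assumption worth verifying):

        $since = $event1.ToUniversalTime().ToString("yyyy-MM-ddTHH:mm:ss.fffZ")

        # -Oldest so the first match after the timestamp comes back, not the newest
        Get-WinEvent -LogName "System" -Oldest -MaxEvents 1 -FilterXPath `
            "*[System[TimeCreated[@SystemTime>'$since']] and EventData[Data='Windows Management Instrumentation']]"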

    Read the article

  • Can I get RAID disk status by using PS?

    - by David.Chu.ca
    I have an HP server with RAID 5. Ports 0 and 1 are used for data & OS mirroring. The software that comes with the RAID 5 setup is Intel Matrix Storage Manager, and there is a manager console (a Windows-based application) to view all the ports, including their status. Right now they are all in Normal status. I am not sure if the OS/Windows has some APIs or .NET classes to access the RAID ports and get their status. If so, how can I use PS to get the information? If not, do I have to reference the DLLs provided by Intel Matrix Storage Manager? Basically, I would like to write a PS script to get the RAID status. In case any port's disk is not normal, a message will be sent out via the Growl protocol.
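
    As a starting point, generic WMI does expose a coarse per-disk state, though true controller/port status usually needs the vendor's own provider or DLLs. A quick hedged check:

        # "OK" is the healthy value; vendor-specific RAID detail won't show up here
        Get-WmiObject Win32_DiskDrive | Select-Object Model, Status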

    Read the article

  • Start-Job Problems

    - by laerte-junior
    Hi all, why does this code not work?

        function teste {
            begin {
                function lala {
                    while ($true) {
                        "JJJJ" | Out-File c:\Testes\teste.txt -Append
                    }
                }
            }
            process {
                Start-Job -ScriptBlock { lala }
            }
        }
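
    A probable cause, with a sketch: Start-Job runs its script block in a separate background runspace, so functions defined in the calling scope (like lala) don't exist there, and the job quietly invokes an unknown command. Defining the function inside the script block sidesteps this:

        Start-Job -ScriptBlock {
            function lala {
                while ($true) { "JJJJ" | Out-File c:\Testes\teste.txt -Append }
            }
            lala
        }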

    Read the article

  • WMI Query Script as a Job

    - by Kenneth
    I have two scripts. One calls the other with a list of servers as parameters. The second is designed to execute a WMI query. When I run it manually, it does this perfectly. When I try to run it as a job, it hangs forever and I have to remove it. For the sake of space, here is the relevant part of the calling script, ProcessServers.ps1:

        Start-Job -FilePath .\GetServerDetailsLight.ps1 -ArgumentList $sqlsrv,$destdb,$server,$instance

    GetServerDetailsLight.ps1:

        param($sqlsrv,$destdb,$server,$instance)
        $password = get-content C:\SQLPS\auth.txt | convertto-securestring
        $credentials = new-object -typename System.Management.Automation.PSCredential -argumentlist "DOMAIN\MYUSER",$password
        [System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO')
        $box_id = 0;
        if ($sqlsrv.length -eq 0) {
            write-output "No data passed"
            break
        }
        function getinfo {
            param(
                [string]$svr,
                [string]$inst
            )
            "Entered GetInfo with: $svr,$inst"
            $cs = get-wmiobject win32_operatingsystem -computername $svr -credential $credentials -authentication 6 -Verbose -Debug |
                select Name, Model, Manufacturer, Description, DNSHostName, Domain, DomainRole, PartOfDomain,
                       NumberOfProcessors, SystemType, TotalPhysicalMemory, UserName, Workgroup
            write-output "WMI Results: $cs"
        }
        getinfo $server $instance
        write-output "Complete"

    Executed as a job, it will show as 'running' forever:

        PS C:\sqlps> Start-Job -FilePath .\GetServerDetailsLight.ps1 -ArgumentList DBSERVER,LOGDB,SERVER01,SERVER01

        Id   Name    State     HasMoreData   Location    Command
        --   ----    -----     -----------   --------    -------
        21   Job21   Running   True          localhost   param($sqlsrv,$destdb,...

        GAC    Version      Location
        ---    -------      --------
        True   v2.0.50727   C:\WINDOWS\assembly\GAC_MSIL\Microsoft.SqlServer.Smo\10.0.0.0__89845dcd8080cc91\Microsoft.SqlServer.Smo.dll

        getinfo MSDCHR01 MSDCHR01
        Entered GetInfo with: SERVER01,SERVER01

    The last output I ever get is 'Entered GetInfo with: SERVER01,SERVER01'. If I run it manually like so:

        PS C:\sqlps> .\GetServerDetailsLight.ps1 DBSERVER LOGDB SERVER01 SERVER01

    the WMI query executes just as expected. I am trying to determine why this is, or at least find a useful way to trap errors from within jobs. Thanks!
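
    One hedged suspect is the -Debug switch on get-wmiobject: it flips the debug preference to Inquire, which tries to prompt for confirmation, and a background job has no interactive host to answer, so the call can block indefinitely. A sketch of the same query without the interactive switches:

        $cs = get-wmiobject win32_operatingsystem -computername $svr `
              -credential $credentials -authentication 6 |
              select Name, Model, Manufacturer, DNSHostName, Domain

    For error trapping inside jobs, one workable pattern is wrapping the body in try/catch, emitting the exception text with write-output, and reading it back with Receive-Job.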

    Read the article

  • tabexpansion doesn't fall through if overridden

    - by guillermooo
    The tabexpansion function only works partially when I override it like so:

        function tabexpansion {
            param($line, $lastWord)
            if ($line -eq "hey ") {
                "you", "Joe"
            }
        }

    The custom completions work as expected, but now I only get the default autocomplete behavior for cmdlet names, not parameters. So New-TAB works fine, but New-Alias -TAB doesn't. How do I get the regular completions too after overriding tabexpansion?
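
    The usual pattern (used by tools like PowerTab) is to keep the built-in function around and fall through to it. A hedged sketch:

        # Preserve the original before redefining it
        Rename-Item function:\tabexpansion tabexpansionOriginal

        function tabexpansion {
            param($line, $lastWord)
            if ($line -eq "hey ") { "you", "Joe" }
            else { tabexpansionOriginal $line $lastWord }
        }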

    Read the article
