Search Results

Search found 9981 results on 400 pages for 'package managers'.


  • File Watcher Task

    The task will detect changes to existing files as well as new files; both actions will cause the file to be found when available. A file is available when the task can open it exclusively. This is important for files that take a long time to be written, such as large files, or those that are written slowly or delivered via a slow network link. The task can also be set to look for existing files first (added in 1.2.4.55).

    The full path of the found file is returned in up to three ways:
    - The ExecValueVariable of the task. This can be set to any String variable.
    - The OutputVariableName, when specified. This can be set to any String variable.
    - The FullPath variable within OnFileFoundEvent. This is a File Watcher Task specific event.

    Advance warning of a file having been detected, but not yet being available, is given through the OnFileWatcherEvent. This event does not always coincide with the completion of the task, as completion and the OnFileFoundEvent are delayed until the file is ready for use. This event indicates that a file has been detected, and that file will now be monitored until it becomes available. The task will only detect and report on the first file that is created or changed; any subsequent changes will be ignored.

    Task properties and their usages are documented below:

    - Filter (String): Default filter *.* will watch all files. Standard Windows wildcards and patterns can be used to restrict the files monitored.
    - FindExistingFiles (Boolean): Indicates whether the task should check for any existing files that match the path and filter criteria before starting the file watcher.
    - IncludeSubdirectories (Boolean): Indicates whether changes in subdirectories are accepted or ignored.
    - OutputVariableName (String): The name of the variable into which the full file path found will be written on completion of the task. The variable specified should be of type String.
    - Path (String): Path to watch for new files or changes to existing files. The path is a directory, not a full filename. For a specific file, enter the file name in the Filter property and the directory in the Path property.
    - PathInputType (FileWatcherTask.InputType): Three input types are supported for the path: Connection - a file connection manager of type existing folder; Direct Input - type the path directly into the UI or set the property as a literal string; Variable - the name of the variable which contains the path.
    - Timeout (Integer): Time in minutes to wait for a file. If no files are detected within the timeout period the task will fail. The default value of 0 means infinite and will not expire.
    - TimeoutAsWarning (Boolean): The default behaviour is to raise an error and fail the task on timeout. This property allows you to suppress the error on timeout; a warning event is raised instead and the task succeeds. The default value is false.

    Installation

    The task is provided as an MSI file which you can download and run to install it. This simply places the files on disk in the correct locations and also installs the assemblies in the Global Assembly Cache, as per Microsoft's recommendations. You may need to restart the SQL Server Integration Services service, as it caches information about which components are installed, and also restart any open instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages. For 2005/2008 only, you will finally have to add the task to the Visual Studio toolbox manually. Right-click the toolbox, and select Choose Items....
    Select the SSIS Control Flow Items tab, and then check the File Watcher Task in the Choose Toolbox Items window. This process is described in detail in the related FAQ entry, How do I install a task or transform component? We recommend you follow best practice and apply the current Microsoft SQL Server Service Pack to your SQL Server servers and workstations.

    Downloads

    The File Watcher Task is available for SQL Server 2005, SQL Server 2008 (including R2) and SQL Server 2012. Please choose the version that matches your SQL Server version, or install multiple versions and use them side by side if you have more than one version of SQL Server installed.
    - File Watcher Task for SQL Server 2005
    - File Watcher Task for SQL Server 2008
    - File Watcher Task for SQL Server 2012

    Version History

    SQL Server 2012
    - Version 3.0.0.16 - SQL Server 2012 release. Includes upgrade support for both 2005 and 2008 packages to 2012. (5 Jun 2012)

    SQL Server 2008
    - Version 2.0.0.14 - Fixed user interface bug. A migration problem caused the UI type editors to reference an old SQL 2005 assembly. (17 Nov 2008)
    - Version 2.0.0.7 - SQL Server 2008 release. (20 Oct 2008)

    SQL Server 2005
    - Version 1.2.6.100 - Fixed UI bug with the TimeoutAsWarning property not saving correctly. Improved expression support in the UI. File availability detection changed to use a read-only lock, allowing reduced permissions to be used. Corrected an installation issue which prevented installation on 64-bit machines with SSIS runtime-only components. (18 Mar 2007)
    - Version 1.2.5.73 - Added TimeoutAsWarning property. Gives the ability to suppress the error on timeout; a warning event is raised instead and the task succeeds. (Task Version 3) (27 Sep 2006)
    - Version 1.2.4.61 - Fixed a bug which could cause a loop condition with an unexpected exception such as incorrect file permissions. (20 Sep 2006)
    - Version 1.2.4.55 - Added FindExistingFiles property. When true the task will check for an existing file before the file watcher itself actually starts. (Task Version 2) (8 Sep 2006)
    - Version 1.2.3.39 - SQL Server 2005 RTM refresh. SP1 compatibility testing. Property type validation improved. (12 Jun 2006)
    - Version 1.2.1.0 - SQL Server 2005 IDW 16 Sept CTP. Further UI enhancements, including expression indicator. Fixed bug caused by execution within a loop: subsequent iterations detected the same file as the first iteration. Added IncludeSubdirectories property. Fixed bug where changes made in subdirectories were detected as a folder change, causing task failure. (Task Version 1) (6 Oct 2005)
    - Version 1.2.0.0 - SQL Server 2005 IDW 15 June CTP. Changes include an enhanced UI, the PathInputType property for greater flexibility with path input, the OutputVariableName property, and the new OnFileFoundEvent event. (7 Sep 2005)
    - Version 1.1.2 - Public release. (16 Nov 2004)

    Troubleshooting

    Make sure you have downloaded the version that matches your version of SQL Server. We offer separate downloads for SQL Server 2005 and SQL Server 2008. If you get an error when you try to use the task along the lines of The task with the name "File Watcher Task" and the creation name ... is not registered for use on this computer, this usually indicates that the internal cache of SSIS components needs to be updated. This cache is held by the SSIS service, so you need to restart the SQL Server Integration Services service. You can do this from the Services applet in Control Panel or Administrative Tools in Windows. You can also restart the computer if you prefer.
    You may also need to restart any current instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages. The full error message is shown below for reference:

        TITLE: Microsoft Visual Studio
        ------------------------------
        The task with the name "File Watcher Task" and the creation name
        "Konesans.Dts.Tasks.FileWatcherTask.FileWatcherTask, Konesans.Dts.Tasks.FileWatcherTask,
        Version=1.2.0.0, Culture=neutral, PublicKeyToken=b2ab4a111192992b"
        is not registered for use on this computer.
        Contact Information: File Watcher Task

    A similar error message can be shown when trying to edit the task if the Microsoft Exception Message Box is not installed. This useful component is installed as part of the SQL Server Management Studio tools, but occasionally, due to the custom options chosen during SQL Server 2005 setup, it may be absent. If you get an error like Could not load file or assembly 'Microsoft.ExceptionMessageBox'..., you can manually download and install the missing component. It is available as part of the Feature Pack for SQL Server 2005 release. The feature packs are occasionally updated by Microsoft, so you may like to check for a more recent edition, but you can find the Microsoft Exception Message Box download links here - Feature Pack for Microsoft SQL Server 2005 - April 2006. If you encounter this problem on SQL Server 2008, please check that you have installed the SQL Server client components; the component is no longer available as a separate download for SQL Server 2008, as noted in the Microsoft documentation for Deploying an Exception Message Box Application. The full error message is shown below for reference, although note that the Version will change between SQL Server 2005 and SQL Server 2008:

        TITLE: Microsoft Visual Studio
        ------------------------------
        Cannot show the editor for this task.
        ------------------------------
        ADDITIONAL INFORMATION:
        Could not load file or assembly 'Microsoft.ExceptionMessageBox, Version=9.0.242.0,
        Culture=neutral, PublicKeyToken=89845dcd8080cc91' or one of its dependencies.
        The system cannot find the file specified. (Konesans.Dts.Tasks.FileWatcherTask)

    Once installation is complete you need to manually add the task to the toolbox before you will see it and be able to add it to packages - How do I install a task or transform component? If you are still having issues then contact us, but please provide as much detail as possible about the error, as well as which version of the task you are using and details of the SSIS tools installed.

    Sample Code

    If you want to use the task programmatically, here is some sample code for creating a basic package and configuring the task. It uses a variable to supply the path to watch, and also sets a variable for the OutputVariableName. Once execution is complete it writes the file found to the console.
        /// <summary>
        /// Create a package with a File Watcher Task
        /// </summary>
        public void FileWatcherTaskBasic()
        {
            // Create the package
            Package package = new Package();
            package.Name = "FileWatcherTaskBasic";

            // Add variable for input path, the folder to look in
            package.Variables.Add("InputPath", false, "User", @"C:\Temp\");

            // Add variable for the file found, to be used on OutputVariableName property
            package.Variables.Add("FileFound", false, "User", "EMPTY");

            // Add the Task
            package.Executables.Add("Konesans.Dts.Tasks.FileWatcherTask.FileWatcherTask, " +
                "Konesans.Dts.Tasks.FileWatcherTask, Version=1.2.0.0, Culture=neutral, PublicKeyToken=b2ab4a111192992b");

            // Get the task host wrapper
            TaskHost taskHost = package.Executables[0] as TaskHost;

            // Set basic properties
            taskHost.Properties["PathInputType"].SetValue(taskHost, 1); // InputType.Variable
            taskHost.Properties["Path"].SetValue(taskHost, "User::InputPath");
            taskHost.Properties["OutputVariableName"].SetValue(taskHost, "User::FileFound");

        #if DEBUG
            // Save package to disk, DEBUG only
            new Application().SaveToXml(String.Format(@"C:\Temp\{0}.dtsx", package.Name), package, null);
        #endif

            // Display variable value before execution to check EMPTY
            Console.WriteLine("Result Variable: {0}", package.Variables["User::FileFound"].Value);

            // Execute package
            package.Execute();

            // Display variable value after execution, e.g. C:\Temp\File.txt
            Console.WriteLine("Result Variable: {0}", package.Variables["User::FileFound"].Value);

            // Perform simple check for execution errors
            if (package.Errors.Count > 0)
            {
                foreach (DtsError error in package.Errors)
                {
                    Console.WriteLine("ErrorCode      : {0}", error.ErrorCode);
                    Console.WriteLine("  SubComponent : {0}", error.SubComponent);
                    Console.WriteLine("  Description  : {0}", error.Description);
                }
            }
            else
            {
                Console.WriteLine("Success - {0}", package.Name);
            }

            // Clean-up
            package.Dispose();
        }

    (Updated installation and troubleshooting sections, and added sample code July 2009)
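    For completeness, the optional properties documented above can be set through the same TaskHost.Properties pattern used in the sample. A minimal sketch; the property names come from the table above, but the values here are purely illustrative:

        // Sketch: setting the optional watcher properties on the same taskHost.
        taskHost.Properties["Filter"].SetValue(taskHost, "*.csv");              // watch CSV files only
        taskHost.Properties["FindExistingFiles"].SetValue(taskHost, true);      // check for a file before watching
        taskHost.Properties["IncludeSubdirectories"].SetValue(taskHost, false); // ignore subdirectory changes
        taskHost.Properties["Timeout"].SetValue(taskHost, 30);                  // minutes; 0 = infinite
        taskHost.Properties["TimeoutAsWarning"].SetValue(taskHost, false);      // raise an error on timeout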


  • Cannot install icaclient due to problem with ia32-libs

    - by Marc
    I have a problem installing the package icaclient on 13.10 Saucy Salamander 64-bit. It seems that there is a problem with ia32-libs and other dependencies.

        marc@PinballWizard:~$ sudo dpkg -i Downloads/icaclient_12.1.0_amd64.deb
        [sudo] password for marc:
        Selecting previously unselected package icaclient.
        (Reading database ... 179461 files and directories currently installed.)
        Unpacking icaclient (from .../icaclient_12.1.0_amd64.deb) ...
        dpkg: dependency problems prevent configuration of icaclient:
         icaclient depends on ia32-libs; however:
          Package ia32-libs is not installed.
         icaclient depends on lib32z1; however:
          Package lib32z1 is not installed.
         icaclient depends on lib32asound2; however:
          Package lib32asound2 is not installed.
        dpkg: error processing icaclient (--install):
         dependency problems - leaving unconfigured
        Errors were encountered while processing:
         icaclient

    Hence, the usual workarounds do not seem to work. I followed the instructions here - and for the last two Ubuntu releases this was no problem. When I try to install ia32-libs I get the following issue:

        marc@PinballWizard:~$ sudo apt-get install ia32-libs
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Package ia32-libs is not available, but is referred to by another package.
        This may mean that the package is missing, has been obsoleted, or
        is only available from another source
        However the following packages replace it:
          lib32z1 lib32ncurses5 lib32bz2-1.0
        E: Package 'ia32-libs' has no installation candidate

    Is there any possibility to install icaclient? The sources.list is here.
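    One workaround pattern reported by the community for releases where ia32-libs was dropped is to repackage the .deb so it depends on the multiarch replacements instead. This is an untested sketch, not a verified fix for this exact client version:

        # Sketch: rewrite the defunct ia32-libs dependency (community workaround).
        mkdir ica_fix
        dpkg-deb -x Downloads/icaclient_12.1.0_amd64.deb ica_fix        # unpack payload
        dpkg-deb -e Downloads/icaclient_12.1.0_amd64.deb ica_fix/DEBIAN # unpack control files
        # Edit ica_fix/DEBIAN/control: in the Depends line, replace "ia32-libs"
        # with the replacements apt suggested, e.g. "lib32z1, lib32asound2".
        dpkg-deb -b ica_fix icaclient_12.1.0_amd64_fixed.deb
        sudo dpkg -i icaclient_12.1.0_amd64_fixed.deb
        sudo apt-get -f install   # pull in any remaining dependencies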


  • Required Parameters [SSIS Denali]

    - by jamiet
    SQL Server Integration Services (SSIS) in its 2005 and 2008 incarnations expects you to set property values within your package at runtime using Configurations. SSIS developers tend to have rather a lot of issues with SSIS configurations; in this blog post I am going to highlight one of those problems and how it has been alleviated in SQL Server code-named Denali.

    A configuration is a property path/value pair that exists outside of a package, typically within SQL Server or in a collection of one or more configurations in a file called a .dtsConfig file. Within the package one defines a pointer to a configuration that says to the package "When you execute, go and get a configuration value from this location", and if all goes well the package will fetch that configuration value as it starts to execute and you will see something like the following in your output log:

        Information: 0x40016041 at Package: The package is attempting to configure from the XML file "C:\Configs\MyConfig.dtsConfig".

    Unfortunately things DON'T always go well. Perhaps the .dtsConfig file is unreachable, or the name of the SQL Server holding the configuration value has been defined incorrectly - any one of a number of things can go wrong. In this circumstance you might see something like the following in your log output instead:

        Warning: 0x80012014 at Package: The configuration file "C:\Configs\MyConfig.dtsConfig" cannot be found. Check the directory and file name.

    The problem that I want to draw attention to here, though, is that your package will ignore the fact it can't find the configuration and execute anyway. This is really, really bad, because the package will not be doing what it is supposed to do and, worse, if you have not isolated your environments you might not even know about it. Can you imagine a package executing for months and all the while inserting data into the wrong server? Sounds ridiculous, but I have absolutely seen this happen, and the root cause was that no-one picked up on configuration warnings like the one above.

    Happily, in SSIS code-named Denali this problem has gone away, as configurations have been replaced with parameters. Each parameter has a property called 'Required'. Any parameter with Required=True must have a value passed to it when the package executes; any attempt to execute the package without one will result in an error. We see that error when attempting to execute using the SSMS UI, and similarly when executing using T-SQL. The error is:

        Msg 27184, Level 16, State 1, Procedure prepare_execution, Line 112
        In order to execute this package, you need to specify values for the required parameters.

    As you can see, SSIS code-named Denali has mechanisms built in to prevent the problem I described at the top of this blog post. Marking a parameter as Required means that any packages in that project cannot execute until a value for the parameter has been supplied. This is a very good thing. I am loath to make recommendations so early in the development cycle, but right now I'm thinking that all Project Parameters should have Required=True - certainly any that are used to define external locations should anyway.

    @Jamiet
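    For illustration, supplying a required parameter from T-SQL goes through the SSISDB catalog stored procedures that shipped with Denali / SQL Server 2012; the folder, project, package and parameter names below are invented for the sketch:

        DECLARE @execution_id BIGINT;
        EXEC SSISDB.catalog.create_execution
            @folder_name  = N'MyFolder',         -- hypothetical names
            @project_name = N'MyProject',
            @package_name = N'MyPackage.dtsx',
            @execution_id = @execution_id OUTPUT;

        -- Supply a value for the Required project parameter before starting;
        -- otherwise prepare_execution raises error 27184 as shown above.
        EXEC SSISDB.catalog.set_execution_parameter_value
            @execution_id    = @execution_id,
            @object_type     = 20,               -- 20 = project parameter, 30 = package parameter
            @parameter_name  = N'TargetServer',
            @parameter_value = N'PRODSQL01';

        EXEC SSISDB.catalog.start_execution @execution_id = @execution_id;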


  • I have 6 updates that won't install on Ubuntu 12.04?

    - by Taylor
    I'm an Ubuntu novice, so any help here is greatly appreciated! I'm running Ubuntu 12.04, and I have six updates that just won't install. I've tried Update Manager, sudo apt-get upgrade, and sudo apt-get update. Nothing has worked so far. Here are the details I get from Update Manager:

        installArchives() failed:
        Setting up linux-image-3.2.0-24-generic-pae (3.2.0-24.37) ...
        Running depmod.
        sh: 1: /usr/sbin/update-initramfs: not found
        Failed to create initrd image.
        dpkg: error processing linux-image-3.2.0-24-generic-pae (--configure):
         subprocess installed post-installation script returned error exit status 2
        Setting up linux-image-3.2.0-27-generic-pae (3.2.0-27.43) ...
        No apport report written because MaxReports is reached already
        Running depmod.
        sh: 1: /usr/sbin/update-initramfs: not found
        Failed to create initrd image.
        dpkg: error processing linux-image-3.2.0-27-generic-pae (--configure):
         subprocess installed post-installation script returned error exit status 2
        No apport report written because MaxReports is reached already
        Setting up linux-image-3.2.0-29-generic-pae (3.2.0-29.46) ...
        Running depmod.
        sh: 1: /usr/sbin/update-initramfs: not found
        Failed to create initrd image.
        dpkg: error processing linux-image-3.2.0-29-generic-pae (--configure):
         subprocess installed post-installation script returned error exit status 2
        No apport report written because MaxReports is reached already
        Setting up udev (175-0ubuntu9.1) ...
        udev stop/waiting
        udev start/running, process 3685
        /var/lib/dpkg/info/udev.postinst: 87: /var/lib/dpkg/info/udev.postinst: update-initramfs: not found
        dpkg: error processing udev (--configure):
         subprocess installed post-installation script returned error exit status 127
        No apport report written because MaxReports is reached already
        dpkg: dependency problems prevent configuration of xserver-xorg-core:
         xserver-xorg-core depends on udev (>= 149); however:
          Package udev is not configured yet.
        dpkg: error processing xserver-xorg-core (--configure):
         dependency problems - leaving unconfigured
        No apport report written because MaxReports is reached already
        dpkg: dependency problems prevent configuration of fglrx:
         fglrx depends on xserver-xorg-core; however:
          Package xserver-xorg-core is not configured yet.
        dpkg: error processing fglrx (--configure):
         dependency problems - leaving unconfigured
        No apport report written because MaxReports is reached already
        dpkg: dependency problems prevent configuration of fglrx-amdcccle:
         fglrx-amdcccle depends on fglrx; however:
          Package fglrx is not configured yet.
        dpkg: error processing fglrx-amdcccle (--configure):
         dependency problems - leaving unconfigured
        No apport report written because MaxReports is reached already
        dpkg: dependency problems prevent configuration of linux-image-generic-pae:
         linux-image-generic-pae depends on linux-image-3.2.0-24-generic-pae; however:
          Package linux-image-3.2.0-24-generic-pae is not configured yet.
        dpkg: error processing linux-image-generic-pae (--configure):
         dependency problems - leaving unconfigured
        No apport report written because MaxReports is reached already
        dpkg: dependency problems prevent configuration of linux-generic-pae:
         linux-generic-pae depends on linux-image-generic-pae (= 3.2.0.24.26); however:
          Package linux-image-generic-pae is not configured yet.
        dpkg: error processing linux-generic-pae (--configure):
         dependency problems - leaving unconfigured
        No apport report written because MaxReports is reached already
        dpkg: dependency problems prevent configuration of xserver-xorg-video-intel:
         xserver-xorg-video-intel depends on xorg-video-abi-11; however:
          Package xorg-video-abi-11 is not installed.
          Package xserver-xorg-core which provides xorg-video-abi-11 is not configured yet.
         xserver-xorg-video-intel depends on xserver-xorg-core (>= 2:1.10.99.901); however:
          Package xserver-xorg-core is not configured yet.
        dpkg: error processing xserver-xorg-video-intel (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of fglrx-dev:
        No apport report written because MaxReports is reached already
         fglrx-dev depends on fglrx; however:
          Package fglrx is not configured yet.
        dpkg: error processing fglrx-dev (--configure):
         dependency problems - leaving unconfigured
        No apport report written because MaxReports is reached already
        Errors were encountered while processing:
         linux-image-3.2.0-24-generic-pae
         linux-image-3.2.0-27-generic-pae
         linux-image-3.2.0-29-generic-pae
         udev
         xserver-xorg-core
         fglrx
         fglrx-amdcccle
         linux-image-generic-pae
         linux-generic-pae
         xserver-xorg-video-intel
         fglrx-dev

    Error in function:

        Setting up linux-image-3.2.0-24-generic-pae (3.2.0-24.37) ...
        Running depmod.
        sh: 1: /usr/sbin/update-initramfs: not found
        Failed to create initrd image.
        dpkg: error processing linux-image-3.2.0-24-generic-pae (--configure):
         subprocess installed post-installation script returned error exit status 2
        Setting up udev (175-0ubuntu9.1) ...
        udev stop/waiting
        udev start/running, process 3782
        /var/lib/dpkg/info/udev.postinst: 87: /var/lib/dpkg/info/udev.postinst: update-initramfs: not found
        dpkg: error processing udev (--configure):
         subprocess installed post-installation script returned error exit status 127
        [the same update-initramfs failures and dependency errors then repeat for
         linux-image-3.2.0-29/-27-generic-pae, linux-image-generic-pae, xserver-xorg-core,
         fglrx, linux-generic-pae, xserver-xorg-video-intel, fglrx-amdcccle and fglrx-dev]
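    The recurring line sh: 1: /usr/sbin/update-initramfs: not found suggests the initramfs-tools package is missing or broken, so one plausible recovery sequence (a sketch, not a verified fix for this exact machine) is:

        # /usr/sbin/update-initramfs is provided by initramfs-tools;
        # reinstall it, then let dpkg finish configuring the queued packages.
        sudo apt-get update
        sudo apt-get install --reinstall initramfs-tools
        sudo dpkg --configure -a
        sudo apt-get -f install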


  • What is meant by the terms CPU, Core, Die and Package?

    - by lovesh
    Now this might sound like too many previous questions, but I am really confused by these terms. I was trying to understand how "dual core" is different from "Core 2 Duo", and I came across some answers. For example, this answer states:

        Core 2 Duo has two cores inside a single physical package, and dual core is 2 CPUs in a package.
        2 CPUs in a die = 2 CPUs made together
        2 CPUs in a package = 2 CPUs on a small board or linked in some way

    Now, is a core different from a CPU? What I understand is that the thing that does all the heavy computation, decision making, math and other stuff (aka "processing") is called a CPU. Now what is a core? And what is a processor when somebody says he has got a Core 2 Duo? And in this context what is a package and what is a die? I still don't understand the difference between Core 2 Duo and dual core. And can somebody explain hyper-threading (simultaneous multi-threading) too, if they are super generous?


  • Cannot install Crossover

    - by tech
    I can't install CrossOver from the ".deb" package. Here is a screenshot of it. Here is what I got when I tried to install from the terminal:

        young@jianyue:~$ cd /home/young/Desktop
        young@jianyue:~/Desktop$ sudo dpkg -i crossover.deb
        Selecting previously unselected package ia32-crossover.
        (Reading database ... 127804 files and directories currently installed.)
        Unpacking ia32-crossover (from crossover.deb) ...
        dpkg: dependency problems prevent configuration of ia32-crossover:
         ia32-crossover depends on libc6-i386; however:
          Package libc6-i386 is not installed.
         ia32-crossover depends on ia32-libs | ia32-apt-get; however:
          Package ia32-libs is not installed.
          Package ia32-apt-get is not installed.
         ia32-crossover depends on lib32gcc1; however:
          Package lib32gcc1 is not installed.
         ia32-crossover depends on lib32nss-mdns; however:
          Package lib32nss-mdns is not installed.
         ia32-crossover depends on lib32z1; however:
          Package lib32z1 is not installed.
         ia32-crossover depends on python-glade2; however:
          Package python-glade2 is not installed.
         ia32-crossover depends on lib32asound2; however:
          Package lib32asound2 is not installed.
        dpkg: error processing ia32-crossover (--install):
         dependency problems - leaving unconfigured
        Processing triggers for doc-base ...
        Processing 33 changed doc-base files, 1 added doc-base file...
        Errors were encountered while processing:
         ia32-crossover


  • How can I install a package without installing some dependencies?

    - by Alex
    I'm trying to install the package LaTeXila, and the output looks like this:

        $ sudo apt-get install latexila --no-install-recommends
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following extra packages will be installed:
          latexila-data latexmk luatex tex-common texlive-base texlive-binaries
          texlive-common texlive-doc-base texlive-latex-base
        Suggested packages:
          rubber texlive-latex-extra debhelper
        Recommended packages:
          texlive texlive-latex-recommended texlive-luatex lmodern texlive-latex-base-doc
        The following NEW packages will be installed:
          latexila latexila-data latexmk luatex tex-common texlive-base texlive-binaries
          texlive-common texlive-doc-base texlive-latex-base
        0 upgraded, 10 newly installed, 0 to remove and 0 not upgraded.
        Need to get 29.3 MB of archives.
        After this operation, 74.5 MB of additional disk space will be used.
        Do you want to continue [Y/n]?

    I don't want to install the texlive packages; I've installed TeX Live manually from http://www.tug.org/texlive/. Any suggestions?
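    Since these are hard Depends rather than Recommends, --no-install-recommends will not help by itself. The usual trick for a manually installed TeX Live is a dummy package that satisfies apt; a sketch using equivs, with the dummy package name and Provides list invented for illustration:

        # Sketch: build a dummy package claiming to provide the TeX Live
        # dependencies, so apt treats them as satisfied by the manual install.
        sudo apt-get install equivs
        equivs-control texlive-local
        # Edit texlive-local: set "Package: texlive-local" and add
        #   Provides: tex-common, texlive-base, texlive-binaries,
        #             texlive-common, texlive-latex-base
        equivs-build texlive-local
        sudo dpkg -i texlive-local_*.deb
        sudo apt-get install latexila --no-install-recommends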


  • How to preseed 12.10 desktop when the ubuntu-desktop package is missing?

    - by user183394
    I have been trying to use a preseed file to do a PXE installation of a 12.10 desktop. Upon first boot, I was greeted by a terminal with a login prompt. Surprised, I checked /var/log/installer/syslog but didn't find a trace of a desktop installation. Feeling curious, I double-checked the contents of the loop-mounted ISO file, and realized that the ubuntu-desktop package that existed up to 12.04.1 is no longer available. So, the following preseed lines from the Ubuntu preseed example no longer apply:

        ################################################################################
        ### Package selection                                                        ###
        ################################################################################
        # Selected packages.
        tasksel tasksel/first multiselect ubuntu-desktop
        #tasksel tasksel/first multiselect lamp-server, print-server
        #tasksel tasksel/first multiselect kubuntu-desktop

    Given such a situation, is there something that I can specify in the preseed file to install the entire default desktop?


  • Controlling server configurations with IPS

    - by barts
    I recently received a customer question regarding how best to control which packages and which versions are used on their production Solaris 11 servers. They had considered pointing each server at its own software repository - a common initial approach. A simpler method leverages one of the dependency mechanisms we introduced with Solaris 11, but it is not immediately obvious to most people.

    Typically, most internal IT departments qualify particular versions for production use. What this customer wanted to do was ensure that their operations staff only installed internally qualified versions of Solaris on their servers. The easiest way of doing this is to leverage the 'incorporate' type of dependency in a small package defined for each server type. From the reference "Packaging and Delivering Software With the Image Packaging System in Oracle Solaris 11.1":

        The incorporate dependency specifies that if the given package is installed, it must be at the given version, to the given version accuracy. For example, if the dependent FMRI has a version of 1.4.3, then no version less than 1.4.3 or greater than or equal to 1.4.4 satisfies the dependency. Version 1.4.3.7 does satisfy this example dependency.

        The common way to use incorporate dependencies is to put many of them in the same package to define a surface in the package version space that is compatible. Packages that contain such sets of incorporate dependencies are often called incorporations. Incorporations are typically used to define sets of software packages that are built together and are not separately versioned. The incorporate dependency is heavily used in Oracle Solaris to ensure that compatible versions of software are installed together. An example incorporate dependency is:

        depend type=incorporate fmri=pkg:/driver/network/ethernet/igb@0.5.11,5.11-0.175.0.0.0.2.1

    So, to make sure only qualified versions are installed on a server, create a package that will be installed on the machines to be controlled. This package will contain an incorporate dependency on the "entire" package, which controls the various components used to build Solaris. Every time a new version of Solaris has been qualified for production use, create a new version of this package specifying the new version of "entire" that was qualified. Once this new control package is available in the repositories configured on the production server, the pkg update command will update that system to the specified version. Unless a new version of the control package is made available, pkg update will report that no updates are available, since no version of the control package can be installed that satisfies the incorporate constraint.

    Note that if desired, the same package can be used to specify which packages must be present on the system by adding either "require" or "group" dependencies; the latter permits removal of some of the packages, the former does not. More details on this can be found in either the pkg(5) man page or the previously mentioned reference document.

    This technique of using package dependencies to constrain system configuration leverages the SAT solver which is at the heart of IPS, and is basic to how we package Solaris itself.
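    Putting this together, a control-package manifest for a hypothetical machine class might look something like the sketch below; all publisher names, FMRIs and versions are invented for illustration:

        # Sketch of a control-package manifest (names/versions are examples only).
        set name=pkg.fmri value=pkg://mysite/site/control/webserver@1.0
        set name=pkg.summary value="Pins webservers to the qualified Solaris build"
        # Only the qualified build of Solaris satisfies this constraint:
        depend type=incorporate fmri=pkg:/entire@0.5.11,5.11-0.175.0.0.0.2.0
        # Optionally require packages that must be present on this machine class:
        depend type=require fmri=pkg:/web/server/apache-22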


  • Where is the bare cygwin package list located and how do I manipulate it?

    - by matnagel
    Where is the bare Cygwin package list located, and how do I manipulate it programmatically, from a shell, or with some method other than the GUI? I know the GUI (setup.exe), and I'd love to go one or more levels deeper. I can retrieve a list of selected/installed packages (http://serverfault.com/questions/83456/cygwin-package-management), but how do I write it back, or to a different machine?

    What I have in mind: when I install a new Windows system, I would like to start with my package list in text form and apply or inject it somehow into the new system. Where is it? In the registry? In a binary file? In a local database? Or has anybody done this - is there a tool, a tutorial?

    The essence of what I want is to manipulate the selected package list with something other than the GUI. It is OK for me to use the GUI for the setup process, so I could imagine manipulating the package list and then running setup.exe and just clicking through it.

    Note: I do not want to manipulate the list of already installed packages, but of packages that "should be installed". But if this is not possible, maybe there is some workaround, e.g. add an outdated version as installed and the installer will then install the new version.
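    As a partial sketch of an answer: Cygwin's setup keeps the installed-package list in a plain text file, and setup.exe accepts unattended command-line switches, so a list can be replayed on another machine. Verify the switch names against your setup.exe version:

        # The installed-package database is a text file:
        #   /etc/setup/installed.db   (i.e. C:\cygwin\etc\setup\installed.db)
        # Extract a bare package list from an existing machine (skip the header line):
        awk 'NR>1 {print $1}' /etc/setup/installed.db > packages.txt

        # Replay it on a new machine with setup.exe's unattended switches:
        #   -q = quiet/unattended mode, -P = comma-separated package list
        setup.exe -q -P "$(tr '\n' ',' < packages.txt)"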


  • How do I know when I should package my classes in Ruby?

    - by Omega
    In Ruby, I'm creating a small game development framework. Just some personal project - a very small group of friends helping. Now I am in need of handling geometric concepts. Rectangles, Circles, Polygons, Vectors, Lines, etc. So I made a class for each of these. I'm stuck deciding whether I should package such classes in a module, such as Geometry. So I'd access them like Geometry::Rectangle, or just Rectangle if I include the module. Now then, my question isn't about this specific scenario. I'd like to know, when is it suitable to package similar classes into one module in Ruby? What factors should I consider? Amount of classes? Usage frequency? Complexity?
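    For reference, the mechanics in question are just nesting the classes in a module; the trade-off is only the extra prefix versus a flat namespace. A minimal sketch using the names from the question:

        # Namespacing the geometry classes under one module:
        module Geometry
          class Rectangle
            def initialize(width, height)
              @width, @height = width, height
            end
          end
        end

        r = Geometry::Rectangle.new(3, 4)   # fully qualified access

        include Geometry                    # or pull the names into scope
        r = Rectangle.new(3, 4)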


  • Can you recommend a good tutorial on building custom package versions?

    - by Ivan
    After installing Ubuntu 11.04 I was disappointed to find it still ships Scala 2.7 (when 2.8 has long been the current branch) and Mono 2.6 (when quite some time has passed since the 2.8 release). I am not sure I could build all the packages for Mono myself, but I'd like to try making my own custom version of the Scala package (and I want my system to accept it not as a different package but as a version of the original, so that if I put it into a configured repository, the system will automatically upgrade to it from the currently installed 2.7). Can you recommend a good tutorial on this subject (Ubuntu deb package building and hacking for beginners)?
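    As a rough sketch of the workflow such tutorials cover (the version strings are illustrative): keep the package name identical and only raise the version, so apt treats your build as an upgrade of the original:

        # Sketch: rebuild the existing source package against a newer upstream tarball.
        sudo apt-get install devscripts build-essential
        apt-get source scala               # fetch the current Ubuntu source package
        cd scala-2.7.*
        uupdate ../scala-2.8.0.tgz         # graft debian/ onto the new upstream source
        cd ../scala-2.8.0
        dch -v 2.8.0-0ubuntu1~local1 "Local build of Scala 2.8"
        debuild -us -uc                    # build unsigned .deb packages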


  • Debian package libmarkdown-php, how can I use it? [closed]

    - by JamesM-SiteGen
    Hello all, I am just wondering how you use libmarkdown-php in a PHP script. By this I mean:

    - What code do I run to use the Markdown library?
    - Does it simply add one function?
    - Does it allow me to convert Markdown to HTML and vice versa?
    - Where is the documentation for this package? I can't find it. :(

    Okay, so it turns out that I had found the docs but just did not match them up; the project page did not contain any info on it being the Squeeze package libmarkdown-php. Sad to know it is not in Lenny. Thanks @palhmbus for matching them. :)
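    For what it's worth, a sketch of typical usage, assuming the package ships PHP Markdown's markdown.php under /usr/share/php (check the actual path with dpkg -L libmarkdown-php); note it converts Markdown to HTML only, not the reverse:

        <?php
        // Path and function name as shipped by PHP Markdown;
        // verify the install location with: dpkg -L libmarkdown-php
        require_once '/usr/share/php/markdown.php';

        $html = Markdown("Some **bold** text and a [link](http://example.com).");
        echo $html;   // e.g. <p>Some <strong>bold</strong> text and a <a ...>link</a>.</p>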


  • An error occurs when I try to build and install Numeric-24.2; can anybody please help me?

    - by vpras
    vprasopchai@vprasopchai-Aspire-4720Z:~/Downloads/Numeric-24.2$ python setup.py build
    WARNING: '' not a valid package name; please use only .-separated package names in setup.py
    WARNING: '' not a valid package name; please use only .-separated package names in setup.py
    running build
    running build_py
    running build_ext
    MA Version 12.2.0
    Numeric Version 24.2
    vprasopchai@vprasopchai-Aspire-4720Z:~/Downloads/Numeric-24.2$ python setup.py install
    WARNING: '' not a valid package name; please use only .-separated package names in setup.py
    WARNING: '' not a valid package name; please use only .-separated package names in setup.py
    running install
    running build
    running build_py
    running build_ext
    running install_lib
    creating /usr/local/lib/python2.7/dist-packages/Numeric
    error: could not create '/usr/local/lib/python2.7/dist-packages/Numeric': Permission denied
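    The actual failure is the Permission denied on /usr/local/lib/python2.7/dist-packages, not the package-name warnings, so running the install step with elevated privileges (or into a user prefix) should get past it:

        # Either install system-wide with sudo:
        sudo python setup.py install
        # ...or keep it per-user and avoid sudo entirely:
        python setup.py install --user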


  • Is FFmpeg missing from the official repositories in 14.04?

    - by user254877
    I tried to install ffmpeg in trusty/Ubuntu 14.04 and got the following message:

        $ sudo apt-get install ffmpeg
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Package ffmpeg is not available, but is referred to by another package.
        This may mean that the package is missing, has been obsoleted, or
        is only available from another source
        E: Package 'ffmpeg' has no installation candidate

    Why isn't the package available?
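    In Ubuntu 14.04 the ffmpeg binary was dropped from the official archive in favour of the libav fork, so the stock substitute at the time was (a sketch; avconv takes largely ffmpeg-style options):

        # ffmpeg's replacement in the trusty archive is the libav fork:
        sudo apt-get install libav-tools
        avconv -i input.mp4 output.avi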


  • Is it possible to use the Raring install image as a package repo (like the old alternate CD)?

    - by jamadagni
    I use Kubuntu and recently upgraded to Raring directly from Precise. Until Precise, I always installed the OS using the alternate CD rather than the desktop CD, because I could later mount the image and use it as an offline package repo. For instance, if I remove a package installed by the default installer and later want to install it again, I can just install it from the ISO without needing to download it again. However, since Quantal the alternate CD no longer exists, so I am not sure how to set up the installed image as a local repo. I mean, running find . -name "*.deb" inside the ISO tree after loop-mounting it only shows a few packages like libc6, gcc and such, not the full set of packages that are actually installed -- I presume they are included in pre-installed form inside casper/filesystem.squashfs. Given this situation, is it or is it not possible to use the Raring install images as offline repos? If yes, how? Thank you!


  • How can I change aif so that 'it' is accessible within the macro call without making 'it' public in the package?

    - by Sim
    If you put the aif code presented in On Lisp in a package and try to use it in another, you run into the problem that packagename:it is not external.

        (in-package :packagename)

        (defmacro aif (test-form then-form &optional else-form)
          `(let ((it ,test-form))
             (if it ,then-form ,else-form)))

    Wanted call syntax:

        (in-package :otherpackage)

        (aif (do-stuff)
             (FORMAT t "~a~%" it)
             (FORMAT t "just got nil~%"))

    How can I fix this behavior in code, without making the variable it external in the package declaration, and be able to access it just by it instead of packagename:it?
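    One workaround often given for this classic anaphoric-macro question (treat it as a sketch, not a guaranteed fix) is to intern the symbol IT in the caller's package at macro-expansion time, so each using package gets its own it:

        (defmacro aif (test-form then-form &optional else-form)
          ;; Intern IT in *package*, which at macroexpansion time is
          ;; (normally) the package the call site is being compiled in.
          (let ((it (intern "IT")))
            `(let ((,it ,test-form))
               (if ,it ,then-form ,else-form))))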


  • Sun Fire X4270 M3 SAP Enhancement Package 4 for SAP ERP 6.0 (Unicode) Two-Tier Standard Sales and Distribution (SD) Benchmark

    - by Brian
    Oracle's Sun Fire X4270 M3 server achieved 8,320 SAP SD Benchmark users running SAP enhancement package 4 for SAP ERP 6.0 with Unicode software, using Oracle Database 11g and Oracle Solaris 10.

    - The Sun Fire X4270 M3 server using Oracle Database 11g and Oracle Solaris 10 beat both the IBM Flex System x240 and the IBM System x3650 M4 server running DB2 9.7 and Windows Server 2008 R2 Enterprise Edition.
    - It beat the HP ProLiant BL460c Gen8 server using SQL Server 2008 and Windows Server 2008 R2 Enterprise Edition by 6%.
    - It beat the Cisco UCS C240 M3 server running SQL Server 2008 and Windows Server 2008 R2 Datacenter Edition by 9%.
    - It beat the Fujitsu PRIMERGY RX300 S7 server using SQL Server 2008 and Windows Server 2008 R2 Enterprise Edition by 10%.

    Performance Landscape

    SAP SD two-tier performance table, in decreasing performance order; SAP ERP 6.0 Enhancement Package 4 (Unicode) results (benchmark version from January 2009 to April 2012). All systems: 2 x Intel Xeon E5-2690 @ 2.90 GHz, 128 GB memory; SAP ERP/ECC release 2009 / 6.0 EP4 (Unicode).

        System                    | OS                        | Database            | Users | SAPS   | SAPS/Proc | Date
        Sun Fire X4270 M3         | Oracle Solaris 10         | Oracle Database 11g | 8,320 | 45,570 | 22,785    | 10-Apr-12
        IBM Flex System x240      | Windows Server 2008 R2 EE | DB2 9.7             | 7,960 | 43,520 | 21,760    | 11-Apr-12
        HP ProLiant BL460c Gen8   | Windows Server 2008 R2 EE | SQL Server 2008     | 7,865 | 42,920 | 21,460    | 29-Mar-12
        IBM System x3650 M4       | Windows Server 2008 R2 EE | DB2 9.7             | 7,855 | 42,880 | 21,440    | 06-Mar-12
        Cisco UCS C240 M3         | Windows Server 2008 R2 DE | SQL Server 2008     | 7,635 | 41,800 | 20,900    | 06-Mar-12
        Fujitsu PRIMERGY RX300 S7 | Windows Server 2008 R2 EE | SQL Server 2008     | 7,570 | 41,320 | 20,660    | 06-Mar-12

    Complete benchmark results may be found at the SAP benchmark website, http://www.sap.com/benchmark.

    Configuration and Results Summary

    Hardware configuration: Sun Fire X4270 M3, 2 x 2.90 GHz Intel Xeon E5-2690 processors, 128 GB memory, Sun StorageTek 6540 with 4 x 16 x 300 GB 15K rpm 4Gb FC-AL.

    Software configuration: Oracle Solaris 10, Oracle Database 11g, SAP enhancement package 4 for SAP ERP 6.0 (Unicode).

    Certified results (published by SAP): number of benchmark users: 8,320; average dialog response time: 0.95 seconds; throughput: fully processed order lines: 911,330; dialog steps/hour: 2,734,000; SAPS: 45,570; SAP certification: 2012014.

    Benchmark Description

    The SAP Standard Application SD (Sales and Distribution) Benchmark is a two-tier ERP business test that is indicative of full business workloads of complete order processing and invoice processing, and demonstrates the ability to run both the application and database software on a single system. The SAP Standard Application SD Benchmark represents the critical tasks performed in real-world ERP business environments. SAP is one of the premier worldwide ERP application providers, and maintains a suite of benchmark tests to demonstrate the performance of competitive systems on the various SAP products.
    See Also

    - SAP Benchmark Website
    - Sun Fire X4270 M3 Server (oracle.com OTN)
    - Oracle Solaris (oracle.com OTN)
    - Oracle Database 11g Release 2 Enterprise Edition (oracle.com OTN)

    Disclosure Statement

    Two-tier SAP Sales and Distribution (SD) standard SAP SD benchmark based on SAP enhancement package 4 for SAP ERP 6.0 (Unicode) application benchmark as of 04/11/12:

    - Sun Fire X4270 M3 (2 processors, 16 cores, 32 threads) 8,320 SAP SD users, 2 x 2.90 GHz Intel Xeon E5-2690, 128 GB memory, Oracle 11g, Solaris 10, Cert# 2012014.
    - IBM Flex System x240 (2 processors, 16 cores, 32 threads) 7,960 SAP SD users, 2 x 2.90 GHz Intel Xeon E5-2690, 128 GB memory, DB2 9.7, Windows Server 2008 R2 EE, Cert# 2012016.
    - IBM System x3650 M4 (2 processors, 16 cores, 32 threads) 7,855 SAP SD users, 2 x 2.90 GHz Intel Xeon E5-2690, 128 GB memory, DB2 9.7, Windows Server 2008 R2 EE, Cert# 2012010.
    - Cisco UCS C240 M3 (2 processors, 16 cores, 32 threads) 7,635 SAP SD users, 2 x 2.90 GHz Intel Xeon E5-2690, 128 GB memory, SQL Server 2008, Windows Server 2008 R2 DE, Cert# 2012011.
    - Fujitsu PRIMERGY RX300 S7 (2 processors, 16 cores, 32 threads) 7,570 SAP SD users, 2 x 2.90 GHz Intel Xeon E5-2690, 128 GB memory, SQL Server 2008, Windows Server 2008 R2 EE, Cert# 2012008.
    - HP ProLiant DL380p Gen8 (2 processors, 16 cores, 32 threads) 7,865 SAP SD users, 2 x 2.90 GHz Intel Xeon E5-2690, 128 GB memory, SQL Server 2008, Windows Server 2008 R2 EE, Cert# 2012012.

    SAP and R/3 are registered trademarks of SAP AG in Germany and other countries. More info: www.sap.com/benchmark


  • What strategy do you use for package naming in Java projects and why?

    - by Tim Visher
    I thought about this a while ago, and it recently resurfaced as my shop is doing its first real Java web app. As an intro, I see two main package naming strategies. (To be clear, I'm not referring to the whole 'domain.company.project' part of this; I'm talking about the package convention beneath that.) The package naming conventions that I see are as follows:

    Functional: Naming your packages according to their function architecturally rather than their identity according to the business domain. Another term for this might be naming according to 'layer'. So you'd have a *.ui package, a *.domain package and a *.orm package. Your packages are horizontal slices rather than vertical. This is much more common than logical naming; in fact, I don't believe I've ever seen or heard of a project that does the latter. This of course makes me leery (sort of like thinking that you've come up with a solution to an NP problem), as I'm not terribly smart and I assume everyone must have great reasons for doing it the way they do. On the other hand, I'm not opposed to people just missing the elephant in the room, and I've never heard an actual argument for doing package naming this way. It just seems to be the de facto standard.

    Logical: Naming your packages according to their business-domain identity and putting every class that has to do with that vertical slice of functionality into that package. I have never seen or heard of this, as I mentioned before, but it makes a ton of sense to me. I tend to approach systems vertically rather than horizontally. I want to go in and develop the order processing system, not the data access layer. Obviously, there's a good chance that I'll touch the data access layer in the development of that system, but the point is that I don't think of it that way. What this means, of course, is that when I receive a change order or want to implement some new feature, it'd be nice to not have to go fishing around in a bunch of packages in order to find all the related classes. Instead, I just look in the X package, because what I'm doing has to do with X.

    From a development standpoint, I see it as a major win to have your packages document your business domain rather than your architecture. I feel like the domain is almost always the part of the system that's harder to grok, whereas the system's architecture, especially at this point, is almost becoming mundane in its implementation. The fact that I can come to a system with this type of naming convention and instantly, from the naming of the packages, know that it deals with orders, customers, enterprises, products, etc. seems pretty darn handy.

    It also seems like this would allow you to take much better advantage of Java's access modifiers, letting you much more cleanly define interfaces into subsystems rather than into layers of the system. So if you have an orders subsystem that you want to be transparently persistent, you could in theory just never let anything else know that it's persistent: you wouldn't have to create public interfaces to its persistence classes in a DAO layer, and could instead package the DAO class in with only the classes it deals with. Obviously, if you wanted to expose this functionality, you could provide an interface for it or make it public. It just seems like you lose a lot of this by having a vertical slice of your system's features split across multiple packages.

    I suppose one disadvantage that I can see is that it does make ripping out layers a little bit more difficult.
    Instead of just deleting or renaming a package and then dropping a new one in place with an alternate technology, you have to go in and change all of the classes in all of the packages. However, I don't see this as a big deal. It may be from a lack of experience, but I have to imagine that the number of times you swap out technologies pales in comparison to the number of times you go in and edit vertical feature slices within your system.

    So I guess the question goes out to you: how do you name your packages, and why? Please understand that I don't necessarily think that I've stumbled onto the golden goose or something here. I'm pretty new to all this, with mostly academic experience. However, I can't spot the holes in my reasoning, so I'm hoping you all can, so that I can move on. Thanks in advance!
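    To make the access-modifier argument concrete, here is a minimal sketch of a vertically sliced orders package where the persistence class stays package-private; all names are invented for illustration:

        // com/example/app/orders/Order.java
        package com.example.app.orders;

        public class Order { /* fields elided */ }

        // com/example/app/orders/OrderDao.java
        package com.example.app.orders;

        // Package-private: invisible outside the orders slice, so persistence
        // stays an implementation detail of the subsystem.
        class OrderDao {
            void save(Order order) { /* ... */ }
        }

        // com/example/app/orders/OrderService.java
        package com.example.app.orders;

        // The public surface of the orders subsystem.
        public class OrderService {
            private final OrderDao dao = new OrderDao();
            public void placeOrder(Order order) { dao.save(order); }
        }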


  • Which package from CPAN should I use to send mail?

    - by Uri
    Which package from CPAN should I use to send mail? Sometimes the TIMTOWTDI approach is very tiring - for me, especially when it comes to package selection. All I want is to send email, potentially HTML email. Between Mail-Sendmail, Mail-Sender, NET-SMTP (by the way, not available in PPM), Mail-SendEasy, and the 80 or so other packages that have 'Mail' in their name - which one should I choose?

    And while on this subject: what is your general approach to choosing the "canonical" package for a job, i.e. the package that "everybody is using"? Is there any rating or popularity billboard somewhere?
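    For what it's worth, a minimal HTML-mail sketch with one commonly cited distribution, MIME::Lite (no endorsement implied; API as documented on CPAN, addresses are placeholders):

        use strict;
        use warnings;
        use MIME::Lite;

        # Minimal HTML message; send() defaults to sendmail,
        # or pass ('smtp', $host) to use an SMTP relay.
        my $msg = MIME::Lite->new(
            From    => 'sender@example.com',
            To      => 'recipient@example.com',
            Subject => 'Hello',
            Type    => 'text/html',
            Data    => '<h1>Hi</h1><p>HTML body</p>',
        );
        $msg->send;    # or: $msg->send('smtp', 'mail.example.com');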


  • How to get an external variable value in a dtsx package

    - by Rishabh
    Hi, I am executing a .dtsx package from C# and it executes fine. If I pass one variable value from the C# code, how can I get it in the .dtsx package for my OLE DB source query? Here is my C# code:

        string file = @"D:\CYNCZFuzzy\CYNCZFuzzy\Contact.dtsx";
        package = app.LoadPackage(file, null);
        Variables vars = package.Variables;
        vars["User::parentContactID"].Value = 1028203;
        pkgResults = package.Execute();
        string result = pkgResults.ToString();

    I need this 1028203 value in my OLE DB source query. Here is the query:

        select cr.MasterContactID as ParentContactID, c.ID, c.FirstName, c.MiddleName,
               c.LastName, c.ID as FieldID
        from Contact c
        inner join ContactRelation cr on cr.SlaveContactID = c.ID
        where RelationshipID = 1 AND cr.MasterContactID = ?

    What should I write in place of the ? to get the 1028203 value from the C# page? Thanks in advance...
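    For reference, the ? placeholder is normally resolved inside the package rather than from C#: in the OLE DB Source editor you keep the ?, open the Parameters dialog, and map Parameter0 to User::parentContactID (the variable the C# code sets). An alternative sketch is a property expression on the data flow's [OLE DB Source].[SqlCommand] property that builds the whole statement from the variable; the (DT_WSTR, 12) cast width below is illustrative:

        "select cr.MasterContactID as ParentContactID, c.ID, c.FirstName, c.MiddleName, c.LastName, c.ID as FieldID "
        + "from Contact c inner join ContactRelation cr on cr.SlaveContactID = c.ID "
        + "where RelationshipID = 1 AND cr.MasterContactID = "
        + (DT_WSTR, 12) @[User::parentContactID]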


  • What's the syntax to import a class in a default package in Java?

    - by Lord Torgamus
    Possible Duplicate: How to access java-classes in the default-package?

    Is it possible to import a class in Java which is in the default package? If so, what is the syntax? For example, if you have

        package foo.bar;

        public class SomeClass {
            // ...

    in one file, you can write

        package baz.fonz;

        import foo.bar.SomeClass;

        public class AnotherClass {
            SomeClass sc = new SomeClass();
            // ...

    in another file. But what if SomeClass.java does not contain a package declaration? How would you refer to SomeClass in AnotherClass?
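    For completeness: since Java 1.4 the compiler rejects imports from the unnamed (default) package, so the usual escape hatch from a named package is reflection. A sketch, where someMethod is a hypothetical method on the question's SomeClass:

        // You cannot write "import SomeClass;" from a named package,
        // but reflection can still reach the unnamed package at runtime:
        void useDefaultPackageClass() throws Exception {
            Class<?> cls = Class.forName("SomeClass");
            Object instance = cls.getDeclaredConstructor().newInstance();
            cls.getMethod("someMethod").invoke(instance);   // someMethod is hypothetical
        }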


  • Can I access an external file when testing an R package?

    - by Abe
    I am using the testthat package to test an R package that is within a larger repository. I would like to test the contents of a file outside of the R package. Can I reference a file that is located outside of an R package while testing?

    What I have tried: a reproducible example can be downloaded as MyRepo.tar.gz. My repository is called "MyRepo", and it includes an R package, "MyRpkg", and a folder full of miscellaneous scripts:

        ~/MyRepo/
        ~/MyRepo/MyRpkg
        ~/MyRepo/Scripts

    The tests in "MyRpkg" are in the /tests/ folder:

        ~/MyRepo/MyRpkg/tests/test.myscript.R

    And I want to be able to test a file in the Scripts folder:

        ~/MyRepo/Scripts/myscript.sh

    I would like to read the script to test the contents of the first line, doing something like this:

        check.script <- readLines("../../../Scripts/myscript.sh")[1]
        expect_true(grepl("echo", check.script))

    This works fine if I start from the MyRepo directory:

        cd ~/MyRepo
        R CMD check MyRpkg

    But if I move to another directory, it fails:

        cd
        R CMD check MyRepo/MyRpkg
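    One pattern that keeps R CMD check working from any directory (a sketch, not the only option) is to pass the external location through an environment variable and skip the check when it is absent; MYREPO_SCRIPTS is an invented variable name:

        # In the test file: resolve the Scripts folder from an environment
        # variable instead of a fragile relative path.
        scripts_dir <- Sys.getenv("MYREPO_SCRIPTS")
        if (nzchar(scripts_dir)) {
          check.script <- readLines(file.path(scripts_dir, "myscript.sh"))[1]
          expect_true(grepl("echo", check.script))
        }

    Then run the check as, e.g.: MYREPO_SCRIPTS=~/MyRepo/Scripts R CMD check MyRepo/MyRpkg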


  • Azure WNS to Win8 - Push Notifications for Metro Apps

    - by JoshReuben
    Background

    The Windows Azure Toolkit for Windows 8 allows you to build a Windows Azure cloud service that can send push notifications to registered Metro apps via the Windows Notification Service (WNS). Some configuration is required - you need to:

    - Register the Metro app for Windows Live application management.
    - Provide the Package SID & client secret to WNS.
    - Modify the Azure cloud app cscfg file and the Metro app package.appxmanifest file to contain matching Metro package name, SID and client secret.

    The Mechanism

    These notifications take the form of XAML Tile, Toast, Raw or Badge UI notifications. The core engine is provided via the WNS NuGet recipe, which exposes an API for constructing payloads and posting notifications to WNS. An application receives push notifications by requesting a notification channel from WNS, which returns a channel URI that the application then registers with a cloud service. In the cloud service, a WnsAccessTokenProvider authenticates with WNS by providing its credentials, the package SID and secret key, and receives in return an access token that the provider caches and can reuse for multiple notification requests. The cloud service constructs a notification request by filling out a template class that contains the information that will be sent with the notification, including text and image references. Using the channel URI of a registered client, the cloud service can then send a notification whenever it has an update for the user. The package contains the NotificationSendUtils class for submitting notifications.

    The Windows Azure Toolkit for Windows 8 (WAT) provides the PNWorker sample pair of solutions. The Azure server side contains a WebRole & a WorkerRole; the WebRole allows submission of new push notifications into an Azure queue, which the WorkerRole extracts and processes.

    Further background resources:
    - http://watwindows8.codeplex.com/ - Windows Azure Toolkit for Windows 8
    - http://watwindows8.codeplex.com/wikipage?title=Push%20Notification%20Worker%20Sample - WAT WNS sample setup
    - http://watwindows8.codeplex.com/wikipage?title=Using%20the%20Windows%208%20Cloud%20Application%20Services%20Application - using Windows 8 with Cloud Application Services

    A Bit of Configuration

    Register the Metro app for Windows Live application management. From the Publish tab of your Metro app's current manifest, copy the Package Display Name and the Publisher into the form at https://manage.dev.live.com/Build/:

    - Package name: <-- we need to change this
    - Client secret: keep this
    - Package Security Identifier (SID): keep this

    Verify the app here: https://manage.dev.live.com/Applications/Index - so this step is done. "If you wish to send push notifications in your application, provide your Package Security Identifier (SID) and client secret to WNS."

    Provide Package SID & Client Secret to WNS:
    - http://msdn.microsoft.com/en-us/library/windows/apps/hh465407.aspx - How to authenticate with WNS
    - https://appdev.microsoft.com/StorePortals/en-us/Account/Signup/PurchaseSubscription - register the app with the dashboard (needs a registration code, or register a new account & pay $170 shekels)
    - http://msdn.microsoft.com/en-us/library/windows/apps/hh868184.aspx - Registering for a Windows Store developer account
    - http://msdn.microsoft.com/en-us/library/windows/apps/hh868187.aspx - Picking a Microsoft account for the Windows Store

    The WNS NuGet Recipe

    The WNS recipe is a NuGet package that provides an API for authenticating against WNS, constructing payloads and posting notifications to WNS.
    After installing this package, a WnsRecipe assembly is added to the project references. To send notifications using WNS, first register the application at the Windows Push Notifications & Live Connect portal to obtain a Package Security Identifier (SID) and a secret key that your cloud service uses to authenticate with WNS. An application receives push notifications by requesting a notification channel from WNS, which returns a channel URI that the application then registers with a cloud service. In the cloud service, the WnsAccessTokenProvider authenticates with WNS by providing its credentials, the package SID and secret key, and receives in return an access token that the provider caches and can reuse for multiple notification requests. The cloud service constructs a notification request by filling out a template class that contains the information that will be sent with the notification, including text and image references. Using the channel URI of a registered client, the cloud service can then send a notification whenever it has an update for the user.

        var provider = new WnsAccessTokenProvider(clientId, clientSecret);
        var notification = new ToastNotification(provider)
        {
            ToastType = ToastType.ToastText02,
            Text = new List<string> { "blah" }
        };
        notification.Send(channelUri);

    The WNS recipe is instrumented to write trace information via a trace listener - configuratively, or programmatically from Application_Start():

        WnsDiagnostics.Enable();
        WnsDiagnostics.TraceSource.Listeners.Add(new DiagnosticMonitorTraceListener());
        WnsDiagnostics.TraceSource.Switch.Level = SourceLevels.Verbose;

    The WAT PNWorker Sample

    The Azure server side contains a WebRole & a WorkerRole. The WebRole allows submission of new push notifications into an Azure queue, which the WorkerRole extracts and processes.

    Overview of the Push Notification Worker Sample

    The toolkit includes a sample application based on the same solution structure as the one created by the Windows 8 Cloud Application Services project template. The sample demonstrates how to off-load the job of sending Windows push notifications using a Windows Azure worker role. You can find the source code in the Samples\PNWorker folder. This folder contains a full version of the sample application showing how to use Windows push notifications using ASP.NET Membership as the authentication mechanism. The sample contains two different solution files:

    - WATWindows.Azure.sln: This solution must be opened with Visual Studio 2010 and contains the projects related to the Windows Azure web and worker roles.
    - WATWindows.Client.sln: This solution must be opened with Visual Studio 11 and contains the Windows Metro style application project.

    Only Visual Studio 2010 supports Windows Azure cloud projects, so you currently need to use this edition to launch the server application. This will change in a future release of the Windows Azure tools when support for Visual Studio 11 is enabled.

    Important: Setting up the PNWorker Sample

    Before running the PNWorker sample, you need to register the application and configure it.

    1. Register the app: To register your application, go to the Windows Live application management site for Metro style apps at https://manage.dev.live.com/build and sign in with your Windows Live ID. In the Windows Push Notifications & Live Connect page, enter the following information:

        Package Display Name: PNWorker.Sample
        Publisher: CN=127.0.0.1, O=TESTING ONLY, OU=Windows Azure DevFabric
2. Once you register the application, make a note of the values shown in the portal for Client Secret, Package Name and Package SID.

3. Configure the app: double-click the SetupSample.cmd file located inside the Samples\PNWorker folder to launch a tool that will guide you through the process of configuring the sample. Setup runs a PowerShell script that requires administrative privileges to execute on your machine. When prompted, enter the Client Secret, Package Name, and Package Security Identifier you obtained previously and wait until the tool finishes configuring your sample.

Running the PNWorker Sample

To run this sample, you must run both the client and the server application projects.

1. Open Visual Studio 2010 as an administrator and open the WATWindows.Azure.sln solution. Set the cloud project as the start-up project of the solution. Run the app in the dev fabric to test.
2. Open Visual Studio 11 and open the WATWindows.Client.sln solution. Run the Metro client application. In the client application, click Reopen channel and send to server: the application opens the channel and registers it with the cloud application, and the Output area shows the channel URI.
3. Refresh the WebRole's Push Notifications page to see the UI list the newly registered client.
4. Send notifications to the client application by clicking the Send Notification button.

Setup

Three command files plus one PowerShell script:

SetupSample.cmd -> SetupWPNS.vbs -> SetupWPNS.cmd -> SetupWPNS.UpdateWPNSCredentialsInServiceConfiguration.ps1

This appears to set:

- PackageName - from the manifest
- Client Id / Package Security Identifier (SID) - from the registration
- Client Secret - from the registration

The following configs are modified:

- WATWindows\ServiceConfiguration.Cloud.cscfg
- WATWindows\ServiceConfiguration.Local.cscfg
- WATWindows.Client\package.appxmanifest

WatWindows.Notifications

A class library. It references the following WNS DLL: C:\WorkDev\CountdownValue\AzureToolkits\WATWindows8\Samples\PNWorker\packages\WnsRecipe.0.0.3.0\lib\net40\WnsRecipe.dll

NotificationJobRequest

A DataContract for triggering notifications:

    using System.Runtime.Serialization;
    using Microsoft.Windows.Samples.Notifications;

    [DataContract]
    [KnownType(typeof(WnsAccessTokenProvider))]
    public class NotificationJobRequest
    {
        [DataMember] public bool ProcessAsync { get; set; }
        [DataMember] public string Payload { get; set; }
        [DataMember] public string ChannelUrl { get; set; }
        [DataMember] public NotificationType NotificationType { get; set; }
        [DataMember] public IAccessTokenProvider AccessTokenProvider { get; set; }
        [DataMember] public NotificationSendOptions NotificationSendOptions { get; set; }
    }

Investigated these types:

- WnsAccessTokenProvider - a DataContract that contains the client Id and client secret
- NotificationType - an enum that can be: Tile, Toast, Badge, Raw
- IAccessTokenProvider - get or reset the access token
- NotificationSendOptions - SecondsTTL, NotificationPriority (enum), isCache, isRequestForStatus, Tag

There is also a NotificationJobSerializer class which basically wraps DataContractSerializer serialization / deserialization of NotificationJobRequest.
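The post does not reproduce NotificationJobSerializer itself, but given that it just wraps DataContractSerializer over NotificationJobRequest, a minimal sketch might look like this (method names and instance-method shape are assumptions, not confirmed by the sample source):

    using System.IO;
    using System.Runtime.Serialization;
    using System.Text;

    public class NotificationJobSerializer
    {
        private readonly DataContractSerializer serializer =
            new DataContractSerializer(typeof(NotificationJobRequest));

        // Serialize a job request to XML so it can travel in a CloudQueueMessage.
        public string Serialize(NotificationJobRequest request)
        {
            using (var stream = new MemoryStream())
            {
                this.serializer.WriteObject(stream, request);
                return Encoding.UTF8.GetString(stream.ToArray());
            }
        }

        // Deserialize the XML payload of a CloudQueueMessage back into a job request.
        public NotificationJobRequest Deserialize(string xml)
        {
            using (var stream = new MemoryStream(Encoding.UTF8.GetBytes(xml)))
            {
                return (NotificationJobRequest)this.serializer.ReadObject(stream);
            }
        }
    }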
The WNSNotificationJobProcessor class

This class wraps the NotificationSendUtils API: it periodically extracts any NotificationJobRequest objects from a CloudQueue and submits them to WNS. The ProcessJobMessageRequest method is the punchline: it deserializes a CloudQueueMessage into a NotificationJobRequest, passes its contents to NotificationSendUtils to SendAsynchronously / SendSynchronously, and then dequeues the message.

    public override void ProcessJobMessageRequest(CloudQueueMessage notificationJobMessageRequest)
    {
        Trace.WriteLine("Processing a new Notification Job Request", "Information");
        NotificationJobRequest pushNotificationJob =
            NotificationJobSerializer.Deserialize(notificationJobMessageRequest.AsString);
        if (pushNotificationJob != null)
        {
            if (pushNotificationJob.ProcessAsync)
            {
                Trace.WriteLine("Sending the notification asynchronously", "Information");
                NotificationSendUtils.SendAsynchronously(
                    new Uri(pushNotificationJob.ChannelUrl),
                    pushNotificationJob.AccessTokenProvider,
                    pushNotificationJob.Payload,
                    result => this.ProcessSendResult(pushNotificationJob, result),
                    result => this.ProcessSendResultError(pushNotificationJob, result),
                    pushNotificationJob.NotificationType,
                    pushNotificationJob.NotificationSendOptions);
            }
            else
            {
                Trace.WriteLine("Sending the notification synchronously", "Information");
                NotificationSendResult result = NotificationSendUtils.Send(
                    new Uri(pushNotificationJob.ChannelUrl),
                    pushNotificationJob.AccessTokenProvider,
                    pushNotificationJob.Payload,
                    pushNotificationJob.NotificationType,
                    pushNotificationJob.NotificationSendOptions);
                this.ProcessSendResult(pushNotificationJob, result);
            }
        }
        else
        {
            Trace.WriteLine("Could not deserialize the notification job", "Error");
        }

        this.queue.DeleteMessage(notificationJobMessageRequest);
    }

Investigation of the NotificationSendUtils class: this is the engine. It exposes Send and SendAsynchronously overloads that take the following params from the NotificationJobRequest:

- Channel Uri
- AccessTokenProvider
- Payload
- NotificationType
- NotificationSendOptions

WebRole

WebRole is a large MVC project. It references WatWindows.Notifications as well as the following WNS DLL: \AzureToolkits\WATWindows8\Samples\PNWorker\packages\WnsRecipe.0.0.3.0\lib\net40\NotificationsExtensions.dll

Controllers\PushNotificationController.cs

Notification related namespaces:

    using Notifications;
    using NotificationsExtensions;
    using NotificationsExtensions.BadgeContent;
    using NotificationsExtensions.RawContent;
    using NotificationsExtensions.TileContent;
    using NotificationsExtensions.ToastContent;
    using Windows.Samples.Notifications;

TokenProvider, initialized from the Azure RoleEnvironment:

    IAccessTokenProvider tokenProvider = new WnsAccessTokenProvider(
        RoleEnvironment.GetConfigurationSettingValue("WNSPackageSID"),
        RoleEnvironment.GetConfigurationSettingValue("WNSClientSecret"));

The SendNotification method calls the QueuePushMessage method to create and serialize a NotificationJobRequest and enqueue it in a CloudQueue:

    [HttpPost]
    public ActionResult SendNotification(
        [ModelBinder(typeof(NotificationTemplateModelBinder))] INotificationContent notification,
        string channelUrl,
        NotificationPriority priority = NotificationPriority.Normal)
    {
        var payload = notification.GetContent();
        var options = new NotificationSendOptions()
        {
            Priority = priority
        };
        var notificationType =
            notification is IBadgeNotificationContent ? NotificationType.Badge :
            notification is IRawNotificationContent ? NotificationType.Raw :
            notification is ITileNotificationContent ? NotificationType.Tile :
            NotificationType.Toast;

        this.QueuePushMessage(payload, channelUrl, notificationType, options);

        object response = new
        {
            Status = "Queued for delivery to WNS"
        };

        return this.Json(response);
    }
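QueuePushMessage itself is not shown in the post. Based on its description (create, serialize and enqueue a NotificationJobRequest), a plausible sketch could look like the following; the controller fields (tokenProvider, notificationJobSerializer, pushNotificationRequestsQueue) are assumptions on my part:

    private void QueuePushMessage(
        string payload,
        string channelUrl,
        NotificationType notificationType,
        NotificationSendOptions options)
    {
        var request = new NotificationJobRequest
        {
            Payload = payload,
            ChannelUrl = channelUrl,
            NotificationType = notificationType,
            NotificationSendOptions = options,
            AccessTokenProvider = this.tokenProvider,
            ProcessAsync = true
        };

        // Serialize the job and drop it on the Azure queue; the WorkerRole
        // polls this queue and submits the job to WNS.
        string xml = this.notificationJobSerializer.Serialize(request);
        this.pushNotificationRequestsQueue.AddMessage(new CloudQueueMessage(xml));
    }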
The GetSendTemplate method creates the cshtml partial rendering based on the notification type:

    [HttpPost]
    public ActionResult GetSendTemplate(NotificationTemplateViewModel templateOptions)
    {
        PartialViewResult result = null;
        switch (templateOptions.NotificationType)
        {
            case "Badge":
                templateOptions.BadgeGlyphValueContent = Enum.GetNames(typeof(GlyphValue));
                ViewBag.ViewData = templateOptions;
                result = PartialView("_" + templateOptions.NotificationTemplateType);
                break;
            case "Raw":
                ViewBag.ViewData = templateOptions;
                result = PartialView("_Raw");
                break;
            case "Toast":
                templateOptions.TileImages = this.blobClient.GetAllBlobsInContainer(ConfigReader.GetConfigValue("TileImagesContainer")).OrderBy(i => i.FileName).ToList();
                templateOptions.ToastAudioContent = Enum.GetNames(typeof(ToastAudioContent));
                templateOptions.Priorities = Enum.GetNames(typeof(NotificationPriority));
                ViewBag.ViewData = templateOptions;
                result = PartialView("_" + templateOptions.NotificationTemplateType);
                break;
            case "Tile":
                templateOptions.TileImages = this.blobClient.GetAllBlobsInContainer(ConfigReader.GetConfigValue("TileImagesContainer")).OrderBy(i => i.FileName).ToList();
                ViewBag.ViewData = templateOptions;
                result = PartialView("_" + templateOptions.NotificationTemplateType);
                break;
        }
        return result;
    }

Investigated these types:

- ToastAudioContent - an enum of different Win8 sound effects for toast notifications
- GlyphValue - an enum of different Win8 icons for badge notifications

Infrastructure\NotificationTemplateModelBinder.cs

WNS namespace references:

    using NotificationsExtensions.BadgeContent;
    using NotificationsExtensions.RawContent;
    using NotificationsExtensions.TileContent;
    using NotificationsExtensions.ToastContent;

Various NotificationFactory-derived types can serve as bindable models in MVC for creating INotificationContent types. Default values are also set for IWideTileNotificationContent and IToastNotificationContent.
    Type factoryType = null;
    switch (notificationType)
    {
        case "Badge":
            factoryType = typeof(BadgeContentFactory);
            break;
        case "Tile":
            factoryType = typeof(TileContentFactory);
            break;
        case "Toast":
            factoryType = typeof(ToastContentFactory);
            break;
        case "Raw":
            factoryType = typeof(RawContentFactory);
            break;
    }

Investigated these types:

- BadgeContentFactory - CreateBadgeGlyph, CreateBadgeNumeric
- TileContentFactory - many notification content creation methods, apparently one for every tile layout type
- ToastContentFactory - many notification content creation methods, apparently one for every toast layout type
- RawContentFactory - passing strings

WorkerRole

WNS namespace references:

    using Notifications;
    using Notifications.WNS;
    using Windows.Samples.Notifications;

OnStart() method: on Worker Role startup, initialize the NotificationJobSerializer, the CloudQueue, and the WNSNotificationJobProcessor:

    _notificationJobSerializer = new NotificationJobSerializer();
    _cloudQueueClient = this.account.CreateCloudQueueClient();
    _pushNotificationRequestsQueue = _cloudQueueClient.GetQueueReference(ConfigReader.GetConfigValue("RequestQueueName"));
    _processor = new WNSNotificationJobProcessor(_notificationJobSerializer, _pushNotificationRequestsQueue);

Run() method: poll the Azure Queue for NotificationJobRequest messages and process them:

    while (true)
    {
        Trace.WriteLine("Checking for Messages", "Information");
        try
        {
            Parallel.ForEach(
                this.pushNotificationRequestsQueue.GetMessages(this.batchSize),
                this.processor.ProcessJobMessageRequest);
        }
        catch (Exception e)
        {
            Trace.WriteLine(e.ToString(), "Error");
        }
        Trace.WriteLine(string.Format("Sleeping for {0} seconds", this.pollIntervalMiliseconds / 1000));
        Thread.Sleep(this.pollIntervalMiliseconds);
    }

How I learned to appreciate Win8

There is really only one application architecture for Windows 8 apps: a Metro client side and an Azure backend, and that is a good thing. With WNS, tier integration is so automated that you don't even have to leverage an HTTP push API such as SignalR. This is a pretty powerful development paradigm, and has changed the way I look at Windows 8 for RAD business apps. When I originally looked at Win8 and the WinRT API, my first opinion on Win8 dev was as follows. GOOD: WinRT, WRL, C++/CX, WinJS, XAML (and the ease of Direct3D integration). BAD: low projected market penetration, and .NET lobotomized (only 8% of .NET 4.5 classes can be used in Win8 non-desktop apps - http://bit.ly/HRuJr7). UGLY: Metro pastel tiles! Perhaps my 80s teenage years gave me a punk reactionary sense of revulsion towards the Partridge Family 70s style that the Metro UX seems to have appropriated. On second thought, though, it simplifies UI dev to a single paradigm (although UX guys will need to change careers) - you will not find an easier app dev environment. Speculation: if LightSwitch is going to support HTML5 client app generation, then it's a safe guess that vnext will support Win8 Metro XAML - a much easier port from Silverlight XAML.
Given the VS2012 LightSwitch integration as a thumbs-up from the powers that be at MS, and given that Win8 C#/XAML Metro apps tend towards a streamlined 'golden straitjacket' cookie-cutter app dev style with an Azure back-end supporting Win8 push notifications, it's easy to extrapolate that LightSwitch vnext could well be the Win8 Metro XAML to Azure RAD tool of choice! The hook is already there :) - why else leave the space next to the HTML Client box? This high level of application development abstraction will facilitate rapid cookie-cutter architecture-infrastructure frameworks for wrapping any app. This will allow me to avoid too much XAML code-monkeying around and focus on my area of interest: Technical Computing.


  • Running SSIS packages from C#

    - by Piotr Rodak
Most developers and DBAs know about two ways of deploying packages: you can deploy them to the database server and run them using a SQL Server Agent job, or you can deploy the packages to the file system and run them using the dtexec.exe utility. Both approaches have their pros and cons. However, I would like to show you that there is a third way (sort of) that is often overlooked, and it can give you capabilities the 'traditional' approaches can't. I have been working for a few years with applications that run packages from host applications implemented in .NET. As you know, SSIS provides a programming model that you can use to implement more flexible solutions. SSIS applications are usually thought to be batch oriented, with a fairly rigid architecture and processing model, and with fixed timeframes when the packages are executed to process data. It doesn't have to be the case; you don't have to limit yourself to a batch oriented architecture. I have had very good experiences with service oriented architectures processing large amounts of data. Those applications are more complex than what I would like to show here, but the principle stays the same: you can execute packages as a service, on an ad-hoc basis. You can also implement and schedule various signals - HTTP calls, file drops, time schedules, Tibco messages and others - to run the packages. You can implement an event handler that will trigger execution of SSIS when a certain event occurs in a StreamInsight stream. This post is just a small example of how you can use the API and other features to create a service that can run SSIS packages on demand. I thought it might be a good idea to implement a restful service that would listen to requests and execute appropriate actions. As it turns out, it is trivial in C#. The application is implemented as a console application for ease of debugging and running. In reality, you might want to implement the application as a Windows service. To begin, you have to reference the namespace System.ServiceModel.Web and then add a few lines of code:

    Uri baseAddress = new Uri("http://localhost:8011/");
    WebServiceHost svcHost = new WebServiceHost(typeof(PackRunner), baseAddress);

    try
    {
        svcHost.Open();

        Console.WriteLine("Service is running");
        Console.WriteLine("Press enter to stop the service.");
        Console.ReadLine();

        svcHost.Close();
    }
    catch (CommunicationException cex)
    {
        Console.WriteLine("An exception occurred: {0}", cex.Message);
        svcHost.Abort();
    }

The interesting calls are the WebServiceHost constructor, svcHost.Open() and svcHost.Close(): the constructor creates the WebServiceHost object, Open() starts listening on the defined URL, and Close() shuts the service down. As you have noticed, the WebServiceHost constructor accepts the type of an object (here: PackRunner) that will be instantiated as a singleton and subsequently used to process the requests. This is the class where you put your logic, but to tell WebServiceHost how to use it, the class must implement an interface which declares the methods to be used by the host. The interface itself must be decorated with the ServiceContract attribute.
    [ServiceContract]
    public interface IPackRunner
    {
        [OperationContract]
        [WebGet(UriTemplate = "runpack?package={name}")]
        string RunPackage1(string name);

        [OperationContract]
        [WebGet(UriTemplate = "runpackwithparams?package={name}&rows={rows}")]
        string RunPackage2(string name, int rows);
    }

Each method that is going to be used by WebServiceHost has to have the OperationContract attribute, as well as a WebGet or WebInvoke attribute. A detailed discussion of the available options is outside the scope of this post. I also recommend using more descriptive names for methods. Then, you have to provide the implementation of the interface:

    public class PackRunner : IPackRunner
    {
        ...

There are two methods defined in this class. Since the full code is attached to the post, I will show only the more interesting method, RunPackage2.

    /// <summary>
    /// Runs package and sets some of its variables.
    /// </summary>
    /// <param name="name">Name of the package</param>
    /// <param name="rows">Number of rows to export</param>
    /// <returns></returns>
    public string RunPackage2(string name, int rows)
    {
        try
        {
            string pkgLocation = ConfigurationManager.AppSettings["PackagePath"];
            pkgLocation = Path.Combine(pkgLocation, name.Replace("\"", ""));

            Console.WriteLine();
            Console.WriteLine("Calling package {0} with parameter {1}.", name, rows);

            Application app = new Application();
            Package pkg = app.LoadPackage(pkgLocation, null);

            pkg.Variables["User::ExportRows"].Value = rows;
            DTSExecResult pkgResults = pkg.Execute();
            Console.WriteLine();
            Console.WriteLine(pkgResults.ToString());
            if (pkgResults == DTSExecResult.Failure)
            {
                Console.WriteLine();
                Console.WriteLine("Errors occurred during execution of the package:");
                foreach (DtsError er in pkg.Errors)
                    Console.WriteLine("{0}: {1}", er.ErrorCode, er.Description);
                Console.WriteLine();
                return "Errors occurred during execution. Contact your support.";
            }

            Console.WriteLine();
            Console.WriteLine();
            return "OK";
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex);
            return ex.ToString();
        }
    }

The method accepts a package name and the number of rows to export. The packages are deployed to the file system; the path to the packages is configured in the application configuration file. This way, you can implement multiple services on the same machine, provided you also configure the URL for each instance appropriately. To run a package, you have to reference the Microsoft.SqlServer.Dts.Runtime namespace. This namespace is implemented in Microsoft.SQLServer.ManagedDTS.dll, which in my case was installed in the folder "C:\Program Files (x86)\Microsoft SQL Server\100\SDK\Assemblies". Once you have done that, you can create an instance of Microsoft.SqlServer.Dts.Runtime.Application, as in the snippet above. It may be a good idea to create the Application object in the constructor of the PackRunner class, to avoid the necessity of recreating it each time the service is invoked.
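Picking up that suggestion, a minimal sketch of caching the Application instance in the constructor might look like this (the field name is mine, not from the attached solution):

    public class PackRunner : IPackRunner
    {
        // Created once, when WebServiceHost instantiates the singleton,
        // and reused for every request instead of per invocation.
        private readonly Application dtsApplication;

        public PackRunner()
        {
            this.dtsApplication = new Application();
        }

        // RunPackage1/RunPackage2 would then call this.dtsApplication.LoadPackage(...)
        // rather than constructing a new Application each time.
    }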
Then an instance of Microsoft.SqlServer.Dts.Runtime.Package is created by calling LoadPackage, which in its simplest form just takes the package file name as its first parameter. Before you run the package, you can set its variables to certain values. This is a great way of configuring your packages without all the hassle of dtsConfig files. In the above code sample, the variable "User::ExportRows" is set to the value of the method's "rows" parameter. Eventually, you execute the package. The method doesn't throw exceptions; you have to test the result of the execution yourself. If the execution wasn't successful, you can examine the collection of errors exposed by the package. These are the familiar errors you often see during development and debugging of a package. If you run the package from code, you have the opportunity to persist them or log them using your favourite logging framework. The package itself is very simple; it connects to my AdventureWorks database and saves the number of rows specified in the variable "User::ExportRows" to a file. You should know that before you run the package, you can also change its connection strings, logging, events and much more (a small sketch follows at the end of this post). I attach a solution with the test service, as well as a project with two test packages. To test the service, you have to run it and wait for the message saying that the host is started. Then just type (or copy and paste) the command below into your browser:

http://localhost:8011/runpackwithparams?package=%22ExportEmployees.dtsx%22&rows=12

When everything works fine, and you have modified the package to point to your AdventureWorks database, you should see "OK" wrapped in XML. I stopped the database service to simulate an invalid connection string situation; in that case the output of the request shows the error instead, and the service console window shows more information. As you can see, implementing a service oriented ETL framework is not a very difficult task. You have the ability to configure the packages before you run them, and you can implement logging that is consistent with the rest of your system. In an application I have worked with, we also have resource monitoring and execution control: we don't allow more than a certain number of packages to run simultaneously. This ensures we don't strain the server and that we use memory and CPUs efficiently. The attached zip file contains two projects. One is the package runner; it has to be executed with administrative privileges, as it registers an HTTP namespace. The other project contains two simple packages. This is really a cool thing, you should check it out!
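As promised above, here is a small illustrative sketch of reconfiguring a loaded package before execution; the connection manager name and connection string are made up for the example:

    Application app = new Application();
    Package pkg = app.LoadPackage(pkgLocation, null);

    // Override the connection string of a named connection manager
    // before executing - no dtsConfig file needed.
    pkg.Connections["AdventureWorks"].ConnectionString =
        "Data Source=.;Initial Catalog=AdventureWorks;Integrated Security=SSPI;";

    DTSExecResult result = pkg.Execute();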

