Search Results

Search found 9078 results on 364 pages for 'package'.

  • SSIS - Connection Management Within a Loop

    - by Rob Bowman
    Hi, I have the following SSIS package: The problem is that within the Foreach loop a connection is opened and closed for each iteration. On running SQL Profiler I see a series of: Audit Login, RPC:Completed, Audit Logout. The duration for the login and the RPC that actually does the work is minimal. However, the duration for the logout is significant, running into several seconds each. This causes the job to run very slowly - taking many hours. I get the same problem whether running on a test server or on a stand-alone laptop. Could anyone please suggest how I might change the package to improve performance? Also, I have noticed that when running the package from Visual Studio, it looks as though it continues to run (the component blocks keep going amber then green) even though all the processing has actually completed and SQL Profiler has gone silent. Thanks, Rob.

    Read the article

  • Threading errors with Application.LoadComponent (key already exists)

    - by Kellls
    MSDN says that public static members of System.Windows.Application are thread safe. But when I try to run my app with multiple threads I get the following exception:
        ArgumentException: An entry with the same key already exists.
        at System.ThrowHelper.ThrowArgumentException(ExceptionResource resource)
        at System.Collections.Generic.SortedList`2.Add(TKey key, TValue value)
        at System.IO.Packaging.Package.AddIfNoPrefixCollisionDetected(ValidatedPartUri partUri, PackagePart part)
        at System.IO.Packaging.Package.GetPartHelper(Uri partUri)
        at System.IO.Packaging.Package.GetPart(Uri partUri)
        at System.Windows.Application.GetResourceOrContentPart(Uri uri)
        at System.Windows.Application.LoadComponent(Uri resourceLocator, Boolean bSkipJournaledProperties)
        at System.Windows.Application.LoadComponent(Uri resourceLocator)
    The application works fine on a single thread, and even on two or three. Once I get up past 5 threads, I get the error every time. Am I doing something wrong? What can I do to fix this?
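    For context, the scenario described above boils down to several threads calling Application.LoadComponent concurrently for the same resource. A minimal sketch of that usage follows; the pack URI, the thread count, and the class name are placeholders for illustration, not code from the original post.

        using System;
        using System.Threading;
        using System.Windows;

        class LoadComponentRepro
        {
            static void Main()
            {
                // Placeholder relative pack URI - substitute a XAML resource that exists in your assembly.
                var resourceUri = new Uri("/MyAssembly;component/Views/MyView.xaml", UriKind.Relative);

                for (int i = 0; i < 10; i++)
                {
                    var thread = new Thread(() =>
                    {
                        // Under contention, the internal package/part lookup can throw
                        // "An entry with the same key already exists." (see the stack trace above).
                        object component = Application.LoadComponent(resourceUri);
                        Console.WriteLine(component.GetType().Name);
                    });
                    thread.SetApartmentState(ApartmentState.STA); // WPF content must be created on STA threads
                    thread.Start();
                }
            }
        }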

    Read the article

  • WiX - How do I prevent ComPlusAssembly being unregistered during uninstall?

    - by Duncan Watts
    As part of my Installer, I am adding files to an existing COM+ package. I have defined a ComPlusApplication underneath my Product element, which has its name set to a property - I then have a custom action which looks up that property - all good so far. When the installer adds the files, within the same component I have added a ComPlusAssembly which adds the assembly to the ComPlusApplication I defined above - this is also working correctly. When uninstalling, however, I receive an error about being unable to find the COM+ application; this is because my custom action that looks up the name of the COM+ package is not run during uninstall. Basically, I do not want to remove my files or unregister the component package as part of the uninstallation process - how do I achieve this? I am using WiX 3.0 with VS2008. Thanks

    Read the article

  • Playing around with my Java project in Eclipse... what the crap did I just do?

    - by Daddy Warbox
    I don't even remember how, but somehow I managed to make all of my project's source files hidden in Eclipse's Package and Project Explorer panels. Go figure. 'Show Filtered Children (alt+click)' temporarily reveals the files, and only in the Package Explorer can I double-click to reopen them from this view. They go back into hiding after I select another item, though. Plus, now I'm getting other annoyances, such as all of the collapsed, non-hidden items expanding at once when I click on an item, and the entire folder tree of my project now being shown in these panels (including my .svn subversion folders... which shouldn't be any of Eclipse's business, presently). Long story short, my Package/Project Explorers just blew up on me, and I want to know how to fix this. Thanks in advance. P.S. What's a good guide I can use to learn my way around this silly contraption, anyway?

    Read the article

  • MaximumErrorCount has no effect

    - by Rob Bowman
    Hi, I'm quite new to SSIS - using the 2008 version. I have a job that uses a few data flow tasks. On the third one I'm getting a primary key violation on the last row that it needs to insert, but only sometimes! I'd like to ignore this problem for now and let the job continue. I have set the MaximumErrorCount property to 10 for the Data Flow Task, the Sequence Container and the Package, but the task still fails and this causes the package to stop. Could anyone please advise how I can get the package to ignore the error? Thanks, Rob.

    Read the article

  • What is __path__ useful for?

    - by Jason Baker
    I had never noticed the __path__ attribute that gets defined on some of my packages before today. According to the documentation: Packages support one more special attribute, __path__. This is initialized to be a list containing the name of the directory holding the package’s __init__.py before the code in that file is executed. This variable can be modified; doing so affects future searches for modules and subpackages contained in the package. While this feature is not often needed, it can be used to extend the set of modules found in a package. Could somebody explain to me what exactly this means and why I would ever want to use it?

    Read the article

  • Teamcity and Grails

    - by WaZ
    Hi there, My current requirement is: I have to package my Grails app and use TeamCity for continuous builds. The only problem is that the build agents don't have Groovy and Grails installed (don't ask why). I want to package my app with the Groovy and Grails directories and check it in to Git, so that there is no external dependency and the package has everything needed to run the app. Can anybody please help? Please let me know if you want me to rephrase my question.

    Read the article

  • Can JAXB store the class name in the XML so that the deserialization code doesn't need knowledge of the class?

    - by Andrew
    It seems the standard approach for deserializing JAXB XML is to specify the package name when creating the context. Then, JAXB looks up the class based on the root element: JAXBContext jc = JAXBContext.newInstance("com.foo"); Unmarshaller u = jc.createUnmarshaller(); Object o = u.unmarshal(new StringReader("...")); I'm looking for a more flexible approach where I don't have to specify the package name and could still deserialize any object. This would be as simple as JAXB storing the package in the XML, but I can't seem to find out how to do this. I can write the code to do it myself but that would be unpleasant. I would like JAXB to do it, if possible. BTW, I am not using schemas, just Annotations and marshal/unmarshal. Any ideas?

    Read the article

  • In Eclipse, how do I change the default modifiers in the class/type template?

    - by gustafc
    Eclipse's default template for new types (Window -> Preferences -> Code Style -> Code Templates -> New Java Files) looks like this: ${filecomment} ${package_declaration} ${typecomment} ${type_declaration} Creating a new class, it'll look something like this: package pkg; import blah.blah; public class FileName { // Class is accessible to everyone, and can be inherited } Now, I'm fervent in my belief that access should be as restricted as possible, and inheritance should be forbidden unless explicitly permitted, so I'd like to change the ${type_declaration} to declare all classes as final rather than public: package pkg; import blah.blah; final class FileName { // Class is only accessible in package, and can't be inherited } That seems easier said than done. The only thing I've found googling is a 2004 question on Eclipse's mailing list which was unanswered. So, the question in short: How can I change the default class/type modifiers in Eclipse? I'm using Eclipse Galileo (3.5) if that matters.

    Read the article

  • Log file not generated when accessing a jar with log4j support

    - by naveen garimella
    I have an x.jar which is used by a client y.jar. Both x.jar and y.jar are at the same package level. I have put log4j.xml at the same package level, but the log file is never generated. Can I know why? I have tried a few other options as well, but no luck so far: I added log4j-1.2.16.jar to the ClassPath: variable in the manifest files of both x.jar and y.jar, and put log4j.xml at the class level of y.jar, which actually calls the x.jar classes. The package structure is as follows: x.jar --manifest.mf has an entry ClassPath:log4j-1.2.16.jar y.jar --manifest.mf has an entry ClassPath:log4j-1.2.16.jar log4j-1.2.16.jar log4j.xml --has a RollingFileAppender. Can anyone suggest what I am missing?

    Read the article

  • What is the right way to implement communication between java objects?

    - by imoschak
    I'm working on an academic project which simulates a rather large queuing procedure in Java. The core of the simulator rests within one package where there exist 8 classes, each one implementing a single concept. Every class in the project follows SRP. These classes encapsulate the behavior of the simulator and inter-connect every other class in the project. The problem that has arisen is that most of these 8 classes are, as is logical, I think, tightly coupled, and each one has to have working knowledge of every other class in this package in order to be able to call methods from it when needed. The application needs only one instance of each class, so it might be better to create static fields for each class in a new class and use that to make calls - instead of preserving a reference in each class for every other class in the package (which I'm certain is incorrect) - but is this considered a correct design solution, or is there a design pattern that better suits my needs?

    Read the article

  • Background audio not working in windows 8 store / metro app

    - by roryok
    I've tried setting background audio through both a mediaElement in XAML <MediaElement x:Name="MyAudio" Source="Assets/Sound.mp3" AudioCategory="BackgroundCapableMedia" AutoPlay="False" /> And programmatically async void setUpAudio() { var package = Windows.ApplicationModel.Package.Current; var installedLocation = package.InstalledLocation; var storageFile = await installedLocation.GetFileAsync("Assets\\Sound.mp3"); if (storageFile != null) { var stream = await storageFile.OpenAsync(Windows.Storage.FileAccessMode.Read); _soundEffect = new MediaElement(); _soundEffect.AudioCategory = AudioCategory.BackgroundCapableMedia; _soundEffect.AutoPlay = false; _soundEffect.SetSource(stream, storageFile.ContentType); } } // and later... _soundEffect.Play(); But neither works for me. As soon as I minimise the app the music fades out

    Read the article

  • Packaging and distributing a Metro app?

    - by user1516781
    I want to create a package of my Metro application to distribute to my client for testing. These are the steps I followed: select Store -> Create App Package, then select "Build a package to use locally only". That's all I did, but my doubts are: 1) Where is the executable package file located or created when I package it (like the .xap in a Windows Phone application)? 2) Which particular file do I need to send across so that it can be deployed and run on another machine? 3) What requirements does the other machine need to meet in order to deploy and run my app? I assume it needs Windows 8 to run a Metro app, but apart from that, what else does the other machine need to take my Metro app, deploy it, and run it locally for testing purposes?

    Read the article

  • /usr/include/stdc-predef.h:30:26: fatal error: bits/predefs.h: No such file or directory

    - by G_T
    I'm trying to locally install a program written in C++. I have downloaded the program and am attempting to use the "make" command to compile it, as the program's instructions dictate. However, when I do, I get this error: /usr/include/stdc-predef.h:30:26: fatal error: bits/predefs.h: No such file or directory compilation terminated. Looking around on the internet, some people seem to address this problem with sudo apt-get install libc6-dev-i386. I checked whether this package was installed, and it was not. When I try to install it I get E: Unable to locate package libc6-dev-i386, and I have already run sudo apt-get update. I'm sure this is a rookie question but any help is appreciated; I'm running 13.10 32-bit.

    Read the article

  • Failed to load Cairo

    - by Ruben
    We're running an Ubuntu Server VM with OpenCPU (that's an API for R). Unfortunately we're unable to get the Cairo R package to work; the error message (from within R) is as follows: unable to load shared object '/usr/lib/R/library/grDevices/libs//cairo.so': /usr/local/lib/libgmodule-2.0.so.0: undefined symbol: g_rec_mutex_lock 2: In png() : failed to load cairo DLL We've tried purging and reinstalling cairo and libcairo, and we both tried building the Cairo R package from source as well as using a precompiled version from Michael Rutter's PPAs (all of which seems to install without errors). Unfortunately none of us are real Ubuntu natives and thus we probably did some pretty amateur debugging. Any push in the right direction would be much appreciated. For example, we couldn't figure out how to reinstall whatever libgmodule refers to.

    Read the article

  • VS 2010 SP1 and SQL CE

    - by ScottGu
    Last month we released the Beta of VS 2010 Service Pack 1 (SP1).  You can learn more about the VS 2010 SP1 Beta from Jason Zander’s two blog posts about it, and from Scott Hanselman’s blog post that covers some of the new capabilities enabled with it.   You can download and install the VS 2010 SP1 Beta here. Last week I blogged about the new Visual Studio support for IIS Express that we are adding with VS 2010 SP1. In today’s post I’m going to talk about the new VS 2010 SP1 tooling support for SQL CE, and walkthrough some of the cool scenarios it enables.  SQL CE – What is it and why should you care? SQL CE is a free, embedded, database engine that enables easy database storage. No Database Installation Required SQL CE does not require you to run a setup or install a database server in order to use it.  You can simply copy the SQL CE binaries into the \bin directory of your ASP.NET application, and then your web application can use it as a database engine.  No setup or extra security permissions are required for it to run. You do not need to have an administrator account on the machine. Just copy your web application onto any server and it will work. This is true even of medium-trust applications running in a web hosting environment. SQL CE runs in-memory within your ASP.NET application and will start-up when you first access a SQL CE database, and will automatically shutdown when your application is unloaded.  SQL CE databases are stored as files that live within the \App_Data folder of your ASP.NET Applications. Works with Existing Data APIs SQL CE 4 works with existing .NET-based data APIs, and supports a SQL Server compatible query syntax.  This means you can use existing data APIs like ADO.NET, as well as use higher-level ORMs like Entity Framework and NHibernate with SQL CE.  This enables you to use the same data programming skills and data APIs you know today. Supports Development, Testing and Production Scenarios SQL CE can be used for development scenarios, testing scenarios, and light production usage scenarios.  With the SQL CE 4 release we’ve done the engineering work to ensure that SQL CE won’t crash or deadlock when used in a multi-threaded server scenario (like ASP.NET).  This is a big change from previous releases of SQL CE – which were designed for client-only scenarios and which explicitly blocked running in web-server environments.  Starting with SQL CE 4 you can use it in a web-server as well. There are no license restrictions with SQL CE.  It is also totally free. Easy Migration to SQL Server SQL CE is an embedded database – which makes it ideal for development, testing, and light-usage scenarios.  For high-volume sites and applications you’ll probably want to migrate your database to use SQL Server Express (which is free), SQL Server or SQL Azure.  These servers enable much better scalability, more development features (including features like Stored Procedures – which aren’t supported with SQL CE), as well as more advanced data management capabilities. We’ll ship migration tools that enable you to optionally take SQL CE databases and easily upgrade them to use SQL Server Express, SQL Server, or SQL Azure.  You will not need to change your code when upgrading a SQL CE database to SQL Server or SQL Azure.  Our goal is to enable you to be able to simply change the database connection string in your web.config file and have your application just work. 
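To make that last point concrete, a connection-string swap of the kind described could look roughly like the following web.config fragment. The name "Store", the .sdf file name, and the SQL Server details are illustrative assumptions rather than values from the post; only the two provider names are the standard ADO.NET ones.

    <connectionStrings>
      <!-- Embedded SQL CE database stored as an .sdf file under \App_Data -->
      <add name="Store"
           connectionString="Data Source=|DataDirectory|\Store.sdf"
           providerName="System.Data.SqlServerCe.4.0" />
      <!-- Later, the same entry can point at SQL Server Express / SQL Server / SQL Azure instead,
           for example (illustrative values only):
           <add name="Store"
                connectionString="Data Source=.\SQLEXPRESS;Initial Catalog=Store;Integrated Security=True"
                providerName="System.Data.SqlClient" />
      -->
    </connectionStrings>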
New Tooling Support for SQL CE in VS 2010 SP1

VS 2010 SP1 includes much improved tooling support for SQL CE, and adds support for using SQL CE within ASP.NET projects for the first time.  With VS 2010 SP1 you can now:

- Create new SQL CE Databases
- Edit and Modify SQL CE Database Schema and Indexes
- Populate SQL CE Databases with Data
- Use the Entity Framework (EF) designer to create model layers against SQL CE databases
- Use EF Code First to define model layers in code, then create a SQL CE database from them, and optionally edit the DB with VS
- Deploy SQL CE databases to remote servers using Web Deploy and optionally convert them to full SQL Server databases

You can take advantage of all of the above features from within both ASP.NET Web Forms and ASP.NET MVC based projects.

Download

You can enable SQL CE tooling support within VS 2010 by first installing VS 2010 SP1 (beta). Once SP1 is installed, you’ll also then need to install the SQL CE Tools for Visual Studio download.  This is a separate download that enables the SQL CE tooling support for VS 2010 SP1.

Walkthrough of Two Scenarios

In this blog post I’m going to walk through how you can take advantage of SQL CE and VS 2010 SP1 using both an ASP.NET Web Forms and an ASP.NET MVC based application. Specifically, we’ll walk through:

- How to create a SQL CE database using VS 2010 SP1, then use the EF4 visual designers in Visual Studio to construct a model layer from it, and then display and edit the data using an ASP.NET GridView control.
- How to use an EF Code First approach to define a model layer using POCO classes and then have EF Code-First “auto-create” a SQL CE database for us based on our model classes.  We’ll then look at how we can use the new VS 2010 SP1 support for SQL CE to inspect the database that was created, populate it with data, and later make schema changes to it.  We’ll do all this within the context of an ASP.NET MVC based application.

You can follow the two walkthroughs below on your own machine by installing VS 2010 SP1 (beta) and then installing the SQL CE Tools for Visual Studio download (which is a separate download that enables SQL CE tooling support for VS 2010 SP1).

Walkthrough 1: Create a SQL CE Database, Create EF Model Classes, Edit the Data with a GridView

This first walkthrough will demonstrate how to create and define a SQL CE database within an ASP.NET Web Forms application.  We’ll then build an EF model layer for it and use that model layer to enable data editing scenarios with an <asp:GridView> control.

Step 1: Create a new ASP.NET Web Forms Project

We’ll begin by using the File->New Project menu command within Visual Studio to create a new ASP.NET Web Forms project.  We’ll use the “ASP.NET Web Application” project template option so that it has a default UI skin implemented.

Step 2: Create a SQL CE Database

Right click on the “App_Data” folder within the created project and choose the “Add->New Item” menu command: This will bring up the “Add Item” dialog box.  Select the “SQL Server Compact 4.0 Local Database” item (new in VS 2010 SP1) and name the database file to create “Store.sdf”: Note that SQL CE database files have a .sdf filename extension. Place them within the /App_Data folder of your ASP.NET application to enable easy deployment. When we clicked the “Add” button above, a Store.sdf file was added to our project.

Step 3: Adding a “Products” Table

Double-clicking the “Store.sdf” database file will open it up within the Server Explorer tab. Since it is a new database there are no tables within it: Right click on the “Tables” icon and choose the “Create Table” menu command to create a new database table.  We’ll name the new table “Products” and add 4 columns to it.  We’ll mark the first column as a primary key (and make it an identity column so that its value will automatically increment with each new row): When we click “ok” our new Products table will be created in the SQL CE database.

Step 4: Populate with Data

Once our Products table is created it will show up within the Server Explorer.  We can right-click it and choose the “Show Table Data” menu command to edit its data: Let’s add a few sample rows of data to it.

Step 5: Create an EF Model Layer

We have a SQL CE database with some data in it – let’s now create an EF Model Layer that will provide a way for us to easily query and update data within it. Let’s right-click on our project and choose the “Add->New Item” menu command.  This will bring up the “Add New Item” dialog – select the “ADO.NET Entity Data Model” item within it and name it “Store.edmx”. This will add a new Store.edmx item to our solution explorer and launch a wizard that allows us to quickly create an EF model: Select the “Generate From Database” option above and click next.  Choose to use the Store.sdf SQL CE database we just created and then click next again.  The wizard will then ask you what database objects you want to import into your model.  Let’s choose to import the “Products” table we created earlier: When we click the “Finish” button Visual Studio will open up the EF designer.  It will have a Product entity already on it that maps to the “Products” table within our SQL CE database: The VS 2010 SP1 EF designer works exactly the same with SQL CE as it does already with SQL Server and SQL Express.  The Product entity above will be persisted as a class (called “Product”) that we can programmatically work against within our ASP.NET application.

Step 6: Compile the Project

Before using your model layer you’ll need to build your project.  Do a Ctrl+Shift+B to compile the project, or use the Build->Build Solution menu command.

Step 7: Create a Page that Uses our EF Model Layer

Let’s now create a simple ASP.NET Web Form that contains a GridView control that we can use to display and edit our Products data (via the EF Model Layer we just created). Right-click on the project and choose the Add->New Item command.  Select the “Web Form from Master Page” item template, and name the page you create “Products.aspx”.  Base the master page on the “Site.Master” template that is in the root of the project. Add an <h2>Products</h2> heading to the new Page, and add an <asp:gridview> control within it: Then click the “Design” tab to switch into design-view. Select the GridView control, and then click the top-right corner to display the GridView’s “Smart Tasks” UI: Choose the “New data source…” drop down option above.  This will bring up the below dialog which allows you to pick your Data Source type: Select the “Entity” data source option – which will allow us to easily connect our GridView to the EF model layer we created earlier.  This will bring up another dialog that allows us to pick our model layer: Select the “StoreEntities” option in the dropdown – which is the EF model layer we created earlier.
Then click next – which will allow us to pick which entity within it we want to bind to: Select the “Products” entity in the above dialog – which indicates that we want to bind against the “Product” entity class we defined earlier.  Then click the “Enable automatic updates” checkbox to ensure that we can both query and update Products.  When you click “Finish” VS will wire-up an <asp:EntityDataSource> to your <asp:GridView> control: The last two steps we’ll do will be to click the “Enable Editing” checkbox on the Grid (which will cause the Grid to display an “Edit” link on each row) and (optionally) use the Auto Format dialog to pick a UI template for the Grid. Step 8: Run the Application Let’s now run our application and browse to the /Products.aspx page that contains our GridView.  When we do so we’ll see a Grid UI of the Products within our SQL CE database. Clicking the “Edit” link for any of the rows will allow us to edit their values: When we click “Update” the GridView will post back the values, persist them through our EF Model Layer, and ultimately save them within our SQL CE database. Learn More about using EF with ASP.NET Web Forms Read this tutorial series on the http://asp.net site to learn more about how to use EF with ASP.NET Web Forms.  The tutorial series uses SQL Express as the database – but the nice thing is that all of the same steps/concepts can also now also be done with SQL CE.   Walkthrough 2: Using EF Code-First with SQL CE and ASP.NET MVC 3 We used a database-first approach with the sample above – where we first created the database, and then used the EF designer to create model classes from the database.  In addition to supporting a designer-based development workflow, EF also enables a more code-centric option which we call “code first development”.  Code-First Development enables a pretty sweet development workflow.  It enables you to: Define your model objects by simply writing “plain old classes” with no base classes or visual designer required Use a “convention over configuration” approach that enables database persistence without explicitly configuring anything Optionally override the convention-based persistence and use a fluent code API to fully customize the persistence mapping Optionally auto-create a database based on the model classes you define – allowing you to start from code first I’ve done several blog posts about EF Code First in the past – I really think it is great.  The good news is that it also works very well with SQL CE. The combination of SQL CE, EF Code First, and the new VS tooling support for SQL CE, enables a pretty nice workflow.  Below is a simple example of how you can use them to build a simple ASP.NET MVC 3 application. Step 1: Create a new ASP.NET MVC 3 Project We’ll begin by using the File->New Project menu command within Visual Studio to create a new ASP.NET MVC 3 project.  We’ll use the “Internet Project” template so that it has a default UI skin implemented: Step 2: Use NuGet to Install EFCodeFirst Next we’ll use the NuGet package manager (automatically installed by ASP.NET MVC 3) to add the EFCodeFirst library to our project.  We’ll use the Package Manager command shell to do this.  Bring up the package manager console within Visual Studio by selecting the View->Other Windows->Package Manager Console menu command.  
Then type: install-package EFCodeFirst within the package manager console to download the EFCodeFirst library and have it be added to our project: When we enter the above command, the EFCodeFirst library will be downloaded and added to our application: Step 3: Build Some Model Classes Using a “code first” based development workflow, we will create our model classes first (even before we have a database).  We create these model classes by writing code. For this sample, we will right click on the “Models” folder of our project and add the below three classes to our project: The “Dinner” and “RSVP” model classes above are “plain old CLR objects” (aka POCO).  They do not need to derive from any base classes or implement any interfaces, and the properties they expose are standard .NET data-types.  No data persistence attributes or data code has been added to them.   The “NerdDinners” class derives from the DbContext class (which is supplied by EFCodeFirst) and handles the retrieval/persistence of our Dinner and RSVP instances from a database. Step 4: Listing Dinners We’ve written all of the code necessary to implement our model layer for this simple project.  Let’s now expose and implement the URL: /Dinners/Upcoming within our project.  We’ll use it to list upcoming dinners that happen in the future. We’ll do this by right-clicking on our “Controllers” folder and select the “Add->Controller” menu command.  We’ll name the Controller we want to create “DinnersController”.  We’ll then implement an “Upcoming” action method within it that lists upcoming dinners using our model layer above.  We will use a LINQ query to retrieve the data and pass it to a View to render with the code below: We’ll then right-click within our Upcoming method and choose the “Add-View” menu command to create an “Upcoming” view template that displays our dinners.  We’ll use the “empty” template option within the “Add View” dialog and write the below view template using Razor: Step 4: Configure our Project to use a SQL CE Database We have finished writing all of our code – our last step will be to configure a database connection-string to use. We will point our NerdDinners model class to a SQL CE database by adding the below <connectionString> to the web.config file at the top of our project: EF Code First uses a default convention where context classes will look for a connection-string that matches the DbContext class name.  Because we created a “NerdDinners” class earlier, we’ve also named our connectionstring “NerdDinners”.  Above we are configuring our connection-string to use SQL CE as the database, and telling it that our SQL CE database file will live within the \App_Data directory of our ASP.NET project. Step 5: Running our Application Now that we’ve built our application, let’s run it! We’ll browse to the /Dinners/Upcoming URL – doing so will display an empty list of upcoming dinners: You might ask – but where did it query to get the dinners from? We didn’t explicitly create a database?!? One of the cool features that EF Code-First supports is the ability to automatically create a database (based on the schema of our model classes) when the database we point it at doesn’t exist.  Above we configured  EF Code-First to point at a SQL CE database in the \App_Data\ directory of our project.  When we ran our application, EF Code-First saw that the SQL CE database didn’t exist and automatically created it for us. 
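The model classes and controller code referenced in Steps 3 and 4 above appeared as screenshots in the original post and did not survive in this excerpt, so the following is only a rough reconstruction consistent with the description. The property names (Title, EventDate, AttendeeEmail, and so on) and the exact LINQ filter are assumptions; the text only confirms that Dinner and RSVP are plain POCO classes, that NerdDinners derives from DbContext, and that the Upcoming action uses a LINQ query to return dinners that happen in the future.

    using System;
    using System.Collections.Generic;
    using System.Data.Entity;   // DbContext/DbSet come from the EFCodeFirst NuGet package
    using System.Linq;
    using System.Web.Mvc;

    // Plain POCO model classes - no base class, attributes, or persistence code required.
    public class Dinner
    {
        public int DinnerID { get; set; }           // property names are illustrative assumptions
        public string Title { get; set; }
        public DateTime EventDate { get; set; }
        public virtual ICollection<RSVP> RSVPs { get; set; }
    }

    public class RSVP
    {
        public int RsvpID { get; set; }
        public int DinnerID { get; set; }
        public string AttendeeEmail { get; set; }
        public virtual Dinner Dinner { get; set; }
    }

    // Handles retrieval/persistence of Dinner and RSVP instances.
    // By convention it looks for a connection string named "NerdDinners".
    public class NerdDinners : DbContext
    {
        public DbSet<Dinner> Dinners { get; set; }
        public DbSet<RSVP> RSVPs { get; set; }
    }

    // /Dinners/Upcoming - lists dinners that happen in the future.
    public class DinnersController : Controller
    {
        public ActionResult Upcoming()
        {
            using (var db = new NerdDinners())
            {
                var upcoming = db.Dinners
                                 .Where(d => d.EventDate > DateTime.Now)
                                 .OrderBy(d => d.EventDate)
                                 .ToList();
                return View(upcoming);
            }
        }
    }

The matching "NerdDinners" connection string described in the "Configure our Project to use a SQL CE Database" step simply points this context at a SQL CE .sdf file in the \App_Data directory.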
Step 6: Using VS 2010 SP1 to Explore our newly created SQL CE Database Click the “Show all Files” icon within the Solution Explorer and you’ll see the “NerdDinners.sdf” SQL CE database file that was automatically created for us by EF code-first within the \App_Data\ folder: We can optionally right-click on the file and “Include in Project" to add it to our solution: We can also double-click the file (regardless of whether it is added to the project) and VS 2010 SP1 will open it as a database we can edit within the “Server Explorer” tab of the IDE. Below is the view we get when we double-click our NerdDinners.sdf SQL CE file.  We can drill in to see the schema of the Dinners and RSVPs tables in the tree explorer.  Notice how two tables - Dinners and RSVPs – were automatically created for us within our SQL CE database.  This was done by EF Code First when we accessed the NerdDinners class by running our application above: We can right-click on a Table and use the “Show Table Data” command to enter some upcoming dinners in our database: We’ll use the built-in editor that VS 2010 SP1 supports to populate our table data below: And now when we hit “refresh” on the /Dinners/Upcoming URL within our browser we’ll see some upcoming dinners show up: Step 7: Changing our Model and Database Schema Let’s now modify the schema of our model layer and database, and walkthrough one way that the new VS 2010 SP1 Tooling support for SQL CE can make this easier.  With EF Code-First you typically start making database changes by modifying the model classes.  For example, let’s add an additional string property called “UrlLink” to our “Dinner” class.  We’ll use this to point to a link for more information about the event: Now when we re-run our project, and visit the /Dinners/Upcoming URL we’ll see an error thrown: We are seeing this error because EF Code-First automatically created our database, and by default when it does this it adds a table that helps tracks whether the schema of our database is in sync with our model classes.  EF Code-First helpfully throws an error when they become out of sync – making it easier to track down issues at development time that you might otherwise only find (via obscure errors) at runtime.  Note that if you do not want this feature you can turn it off by changing the default conventions of your DbContext class (in this case our NerdDinners class) to not track the schema version. Our model classes and database schema are out of sync in the above example – so how do we fix this?  There are two approaches you can use today: Delete the database and have EF Code First automatically re-create the database based on the new model class schema (losing the data within the existing DB) Modify the schema of the existing database to make it in sync with the model classes (keeping/migrating the data within the existing DB) There are a couple of ways you can do the second approach above.  Below I’m going to show how you can take advantage of the new VS 2010 SP1 Tooling support for SQL CE to use a database schema tool to modify our database structure.  We are also going to be supporting a “migrations” feature with EF in the future that will allow you to automate/script database schema migrations programmatically. Step 8: Modify our SQL CE Database Schema using VS 2010 SP1 The new SQL CE Tooling support within VS 2010 SP1 makes it easy to modify the schema of our existing SQL CE database.  
To do this we’ll right-click on our “Dinners” table and choose the “Edit Table Schema” command: This will bring up the below “Edit Table” dialog.  We can rename, change or delete any of the existing columns in our table, or click at the bottom of the column listing and type to add a new column.  Below I’ve added a new “UrlLink” column of type “nvarchar” (since our property is a string): When we click ok our database will be updated to have the new column and our schema will now match our model classes. Because we are manually modifying our database schema, there is one additional step we need to take to let EF Code-First know that the database schema is in sync with our model classes.  As i mentioned earlier, when a database is automatically created by EF Code-First it adds a “EdmMetadata” table to the database to track schema versions (and hash our model classes against them to detect mismatches between our model classes and the database schema): Since we are manually updating and maintaining our database schema, we don’t need this table – and can just delete it: This will leave us with just the two tables that correspond to our model classes: And now when we re-run our /Dinners/Upcoming URL it will display the dinners correctly: One last touch we could do would be to update our view to check for the new UrlLink property and render a <a> link to it if an event has one: And now when we refresh our /Dinners/Upcoming we will see hyperlinks for the events that have a UrlLink stored in the database: Summary SQL CE provides a free, embedded, database engine that you can use to easily enable database storage.  With SQL CE 4 you can now take advantage of it within ASP.NET projects and applications (both Web Forms and MVC). VS 2010 SP1 provides tooling support that enables you to easily create, edit and modify SQL CE databases – as well as use the standard EF designer against them.  This allows you to re-use your existing skills and data knowledge while taking advantage of an embedded database option.  This is useful both for small applications (where you don’t need the scalability of a full SQL Server), as well as for development and testing scenarios – where you want to be able to rapidly develop/test your application without having a full database instance.  SQL CE makes it easy to later migrate your data to a full SQL Server or SQL Azure instance if you want to – without having to change any code in your application.  All we would need to change in the above two scenarios is the <connectionString> value within the web.config file in order to have our code run against a full SQL Server.  This provides the flexibility to scale up your application starting from a small embedded database solution as needed. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Blending the Sketchflow Action

    - by GeekAgilistMercenary
    Started a new Sketchflow Prototype in Expression Blend recently and documented each of the steps.  This blog entry covers some of those steps, which are the basic elements of any prototype.  I will have more information regarding design, prototype creation, and the process of the initial phases for development in the future.  For now, I hope you enjoy this short walk through.  Also, be sure to check out my last quick entry on Sketchflow. I started off with a Sketchflow Project, just like I did in my previous entry (more specifics in that entry about how to manipulate and build out the Sketchflow Map). Once I created the project I setup the following Sketchflow Map. The CoreNavigation is a ComponentScreen setup solely for the page navigation at the top of the screen.  The XAML markup in case you want to create a Component Screen with the same design is included below. <UserControl xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" mc:Ignorable="d" xmlns:i="clr-namespace:System.Windows.Interactivity;assembly=System.Windows.Interactivity" xmlns:pb="clr-namespace:Microsoft.Expression.Prototyping.Behavior;assembly=Microsoft.Expression.Prototyping.Interactivity" x:Class="RapidPrototypeSketchScreens.CoreNavigation" d:DesignWidth="624" d:DesignHeight="49" Height="49" Width="624">   <Grid x:Name="LayoutRoot"> <TextBlock HorizontalAlignment="Stretch" Margin="307,3,0,0" Style="{StaticResource TitleCenter-Sketch}" Text="Aütøchart Scorecards" TextWrapping="Wrap"> <i:Interaction.Triggers> <i:EventTrigger EventName="MouseLeftButtonDown"> <pb:NavigateToScreenAction TargetScreen="RapidPrototypeSketchScreens.Screen_1"/> </i:EventTrigger> </i:Interaction.Triggers> </TextBlock> <Button HorizontalAlignment="Left" Margin="164,8,0,11" Style="{StaticResource Button-Sketch}" Width="144" Content="Scorecard"> <i:Interaction.Triggers> <i:EventTrigger EventName="Click"> <pb:NavigateToScreenAction TargetScreen="RapidPrototypeSketchScreens.Screen_1_2"/> </i:EventTrigger> </i:Interaction.Triggers> </Button> <Button HorizontalAlignment="Left" Margin="8,8,0,11" Style="{StaticResource Button-Sketch}" Width="152" Content="Standard Reports"> <i:Interaction.Triggers> <i:EventTrigger EventName="Click"> <pb:NavigateToScreenAction TargetScreen="RapidPrototypeSketchScreens.Screen_1_1"/> </i:EventTrigger> </i:Interaction.Triggers> </Button> </Grid> </UserControl> Now that the CoreNavigation Component Screen is done I built out each of the others.  In each of those screens I included the CoreNavigation Screen (all those little green lines in the image) as the top navigation.  In order to do that, as I created each of the pages I would hover over the CoreNavigation Object in the Sketchflow Map.  When the utilities drawer (the small menu that pops down under a node when you hover over it) shows click on the third little icon and drag it onto the page node you want a navigation screen on. Once I created all the screens I setup the navigation by opening up each screen and right clicking on the objects that needed to point to somewhere else in the prototype. Once I was done with the main page, my Home Navigation Page, it looked something like this in the Expression Blend Designer. I fleshed out each of the additional screens.  Once I was done I wanted to try out the deployment package.  
The way to deploy a Sketchflow Prototype is to merely click on File –> Package SketchFlow Project and a prompt will appear.  In the prompt enter what you want the package to be called. I like to see the files generated afterwards too, so I checked the box to see that.  When Expression Blend is done generating everything you’ll have a directory like the one shown below, with all the needed files for deployment. Now these files can be copied or moved to any location for viewing.  One can even copy them (such as via FTP) to a server location to share with others.  Once they are deployed and you run the "TestPage.html" the other features of the Sketchflow Package are available. In the image below I have tagged a few sections to show the Sketchflow Player Features.  To the top left is the navigation, which provides a clearly defined area of movement in a list.  To the center right is the actual prototype application.  I have placed lists of things and made edits.  On the left hand side is the highlight feature, which is available in the Feedback section of the lower left.  On the right hand list I underlined the Autochart with an orange marker, and marked out two list items with a red marker. In the lower left hand side in the Feedback section is also an area to type in your feedback.  This can be useful for time based feedback, when you post this somewhere and want people to provide subsequent follow up feedback. Overall lots of great features, that enable some fairly rapid prototyping with customers.  Once one is familiar with the steps and parts of this Sketchflow Prototype Capabilities it is easy to step through an application without even stopping.  It really is that easy.  So get hold of Expression Blend 3 and get ramped up on Sketchflow, it will pay off in the design phases to do so! Original Entry

    Read the article

  • Plesk 11: install Apache with SNI support

    - by Ueli
    If I try to update from standard Apache to Apache with SNI support with the Plesk installation program (example.com:8447), I get an error that I have to remove apr-util-ldap-1.4.1-1.el5.x86_64. The output is in German; translated into English it reads:
    Retrieving information about installed packages... Installation started in background
    Downloading file PSA_11.0.9/dist-rpm-CentOS-5-x86_64/build-11.0.9-cos5-x86_64.hdr.gz: 11%..20%..30%..40%..50%..60%..70%..81%..91%..100% done.
    Downloading file PSA_11.0.9/update-rpm-CentOS-5-x86_64/update-11.0.9-cos5-x86_64.hdr.gz: 10%..20%..30%..40%..50%..60%..70%..80%..90%..100% done.
    Downloading file PSA_11.0.9/thirdparty-rpm-CentOS-5-x86_64/thirdparty-11.0.9-cos5-x86_64.hdr.gz: 10%..26%..43%..77%..100% done.
    Downloading file BILLING_11.0.9/thirdparty-rpm-RedHat-all-all/thirdparty-11.0.9-rhall-all.hdr.gz: 100% done.
    Downloading file BILLING_11.0.9/update-rpm-RedHat-all-all/update-11.0.9-rhall-all.hdr.gz: 100% done.
    Downloading file SITEBUILDER_11.0.10/thirdparty-rpm-RedHat-all-all/thirdparty-11.0.10-rhall-all.hdr.gz: 100% done.
    Downloading file SITEBUILDER_11.0.10/dist-rpm-RedHat-all-all/build-11.0.10-rhall-all.hdr.gz: 10%..22%..31%..41%..51%..65%..70%..80%..90%..100% done.
    Downloading file SITEBUILDER_11.0.10/update-rpm-RedHat-all-all/update-11.0.10-rhall-all.hdr.gz: 100% done.
    Downloading file APACHE_2.2.22/thirdparty-rpm-CentOS-5-x86_64/thirdparty-2.2.22-rh5-x86_64.hdr.gz: 19%..25%..35%..83%..93%..100% done.
    Downloading file APACHE_2.2.22/update-rpm-CentOS-5-x86_64/update-2.2.22-rh5-x86_64.hdr.gz: 100% done.
    Downloading file BILLING_11.0.9/dist-rpm-RedHat-all-all/build-11.0.9-rhall-all.hdr.gz: 11%..23%..31%..41%..52%..62%..73%..83%..91%..100% done.
    Downloading file APACHE_2.2.22/dist-rpm-CentOS-5-x86_64/build-2.2.22-rh5-x86_64.hdr.gz: 36%..50%..100% done.
    Determining which packages need to be installed.
    -> Error: The installation can only continue once the package apr-util-ldap-1.4.1-1.el5.x86_64 has been removed from the system. Not all packages were installed. Please fix this problem and try to install the packages again. If you cannot fix the problem yourself, please contact technical support.
- «Error: The installation can be continued only if the package apr-util-ldap-1.4.1-1.el5.x86_64 is removed from the system» But I can't uninstall apr-util-ldap-1.4.1-1.el5.x86_64 without removing a lot of important packages: Dependencies Resolved ========================================================================================================================================= Package Arch Version Repository Size ========================================================================================================================================= Removing: apr-util-ldap x86_64 1.4.1-1.el5 installed 9.0 k Removing for dependencies: SSHTerm noarch 0.2.2-10.12012310 installed 4.9 M awstats noarch 7.0-11122114.swsoft installed 3.5 M httpd x86_64 2.2.23-3.el5 installed 3.4 M mailman x86_64 3:2.1.9-6.el5_6.1 installed 34 M mod-spdy-beta x86_64 0.9.3.3-386 installed 2.4 M mod_perl x86_64 2.0.4-6.el5 installed 6.8 M mod_python x86_64 3.2.8-3.1 installed 1.2 M mod_ssl x86_64 1:2.2.23-3.el5 installed 179 k perl-Apache-ASP x86_64 2.59-0.93298 installed 543 k php53 x86_64 5.3.3-13.el5_8 installed 3.4 M php53-sqlite2 x86_64 5.3.2-11041315 installed 366 k plesk-core x86_64 11.0.9-cos5.build110120608.16 installed 79 M plesk-l10n noarch 11.0.9-cos5.build110120827.16 installed 21 M pp-sitebuilder noarch 11.0.10-38572.12072100 installed 181 M psa x86_64 11.0.9-cos5.build110120608.16 installed 473 k psa-awstats-configurator noarch 11.0.9-cos5.build110120606.19 installed 0.0 psa-backup-manager x86_64 11.0.9-cos5.build110120608.16 installed 8.6 M psa-backup-manager-vz x86_64 11.0.0-cos5.build110120123.10 installed 1.6 k psa-fileserver x86_64 11.0.9-cos5.build110120608.16 installed 364 k psa-firewall x86_64 11.0.9-cos5.build110120608.16 installed 550 k psa-health-monitor noarch 11.0.9-cos5.build110120606.19 installed 2.3 k psa-horde noarch 3.3.13-cos5.build110120606.19 installed 20 M psa-hotfix1-9.3.0 x86_64 9.3.0-cos5.build93100518.16 installed 23 k psa-imp noarch 4.3.11-cos5.build110120606.19 installed 12 M psa-ingo noarch 1.2.6-cos5.build110120606.19 installed 5.1 M psa-kronolith noarch 2.3.6-cos5.build110120606.19 installed 6.3 M psa-libxml-proxy x86_64 2.7.8-0.301910 installed 1.2 M psa-mailman-configurator x86_64 11.0.9-cos5.build110120608.16 installed 5.5 k psa-migration-agents x86_64 11.0.9-cos5.build110120608.16 installed 169 k psa-migration-manager x86_64 11.0.9-cos5.build110120608.16 installed 1.1 M psa-mimp noarch 1.1.4-cos5.build110120418.19 installed 2.9 M psa-miva x86_64 1:5.06-cos5.build1013111101.14 installed 4.5 M psa-mnemo noarch 2.2.5-cos5.build110120606.19 installed 4.1 M psa-mod-fcgid-configurator x86_64 2.0.0-cos5.build1013111101.14 installed 0.0 psa-mod_aclr2 x86_64 12021319-9e86c2f installed 8.1 k psa-mod_fcgid x86_64 2.3.6-12050315 installed 222 k psa-mod_rpaf x86_64 0.6-12021310 installed 7.7 k psa-passwd noarch 3.1.3-cos5.build1013111101.14 installed 3.7 M psa-php53-configurator x86_64 1.6.2-cos5.build110120608.16 installed 6.4 k psa-rubyrails-configurator x86_64 1.1.6-cos5.build1013111101.14 installed 0.0 psa-spamassassin x86_64 11.0.9-cos5.build110120608.16 installed 167 k psa-turba noarch 2.3.6-cos5.build110120606.19 installed 6.1 M psa-updates noarch 11.0.9-cos5.build110120704.10 installed 0.0 psa-vhost noarch 11.0.9-cos5.build110120606.19 installed 160 k psa-vpn x86_64 11.0.9-cos5.build110120608.16 installed 1.9 M psa-watchdog x86_64 11.0.9-cos5.build110120608.16 installed 2.9 M webalizer x86_64 2.01_10-30.1 installed 259 k Transaction Summary 
========================================================================================================================================= Remove 48 Package(s) Reinstall 0 Package(s) Downgrade 0 Package(s) What should I do?

    Read the article

  • Install the Ajax Control Toolkit from NuGet

    - by Stephen Walther
    The Ajax Control Toolkit is now available from NuGet. This makes it super easy to add the latest version of the Ajax Control Toolkit to any Web Forms application. If you haven’t used NuGet yet, then you are missing out on a great tool which you can use with Visual Studio to add new features to an application. You can use NuGet with both ASP.NET MVC and ASP.NET Web Forms applications. NuGet is compatible with both Websites and Web Applications and it works with both C# and VB.NET applications. For example, I habitually use NuGet to add the latest version of ELMAH, Entity Framework, jQuery, jQuery UI, and jQuery Templates to applications that I create. To download NuGet, visit the NuGet website at: http://NuGet.org Imagine, for example, that you want to take advantage of the Ajax Control Toolkit RoundedCorners extender to create cross-browser compatible rounded corners in a Web Forms application. Follow these steps. Right click on your project in the Solution Explorer window and select the option Add Library Package Reference. In the Add Library Package Reference dialog, select the Online tab and enter AjaxControlToolkit in the search box: Click the Install button and the latest version of the Ajax Control Toolkit will be installed. Installing the Ajax Control Toolkit makes several modifications to your application. First, a reference to the Ajax Control Toolkit is added to your application. In a Web Application Project, you can see the new reference in the References folder: Installing the Ajax Control Toolkit NuGet package also updates your Web.config file. The tag prefix ajaxToolkit is registered so that you can easily use Ajax Control Toolkit controls within any page without adding a @Register directive to the page. <configuration> <system.web> <compilation debug="true" targetFramework="4.0" /> <pages> <controls> <add tagPrefix="ajaxToolkit" assembly="AjaxControlToolkit" namespace="AjaxControlToolkit" /> </controls> </pages> </system.web> </configuration> You should do a rebuild of your application by selecting the Visual Studio menu option Build, Rebuild Solution so that Visual Studio picks up on the new controls (You won’t get Intellisense for the Ajax Control Toolkit controls until you do a build). After you add the Ajax Control Toolkit to your application, you can start using any of the 40 Ajax Control Toolkit controls in your application (see http://www.asp.net/ajax/ajaxcontroltoolkit/samples/ for a reference for the controls). <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="WebForm1.aspx.cs" Inherits="WebApplication1.WebForm1" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head runat="server"> <title>Rounded Corners</title> <style type="text/css"> #pnl1 { background-color: gray; width: 200px; color:White; font: 14pt Verdana; } #pnl1_contents { padding: 10px; } </style> </head> <body> <form id="form1" runat="server"> <div> <asp:Panel ID="pnl1" runat="server"> <div id="pnl1_contents"> I have rounded corners! </div> </asp:Panel> <ajaxToolkit:ToolkitScriptManager ID="sm1" runat="server" /> <ajaxToolkit:RoundedCornersExtender TargetControlID="pnl1" runat="server" /> </div> </form> </body> </html> The page contains the following three controls: Panel – The Panel control named pnl1 contains the content which appears with rounded corners. ToolkitScriptManager – Every page which uses the Ajax Control Toolkit must contain a single ToolkitScriptManager. 
The ToolkitScriptManager loads all of the JavaScript files used by the Ajax Control Toolkit. RoundedCornersExtender – This Ajax Control Toolkit extender targets the Panel control. It makes the Panel control appear with rounded corners. You can control the “roundiness” of the corners by modifying the Radius property. Notice that you get Intellisense when typing the Ajax Control Toolkit tags. As soon as you type <ajaxToolkit, all of the available Ajax Control Toolkit controls appear: When you open the page in a browser, then the contents of the Panel appears with rounded corners. The advantage of using the RoundedCorners extender is that it is cross-browser compatible. It works great with Internet Explorer, Opera, Firefox, Chrome, and Safari even though different browsers implement rounded corners in different ways. The RoundedCorners extender even works with an ancient browser such as Internet Explorer 6. Getting the Latest Version of the Ajax Control Toolkit The Ajax Control Toolkit continues to evolve at a rapid pace. We are hard at work at fixing bugs and adding new features to the project. We plan to have a new release of the Ajax Control Toolkit each month. The easiest way to get the latest version of the Ajax Control Toolkit is to use NuGet. You can open the NuGet Add Library Package Reference dialog at any time to update the Ajax Control Toolkit to the latest version.

    Read the article

  • Why do apache2 upgrades remove and not re-install libapache2-mod-php5?

    - by nutznboltz
    We repeatedly see that when an apache2 update arrives and is installed it causes the libapache2-mod-php5 package to be removed and does not subsequently re-install it automatically. We must subsequently re-install libapache2-mod-php5 manually in order to restore functionality to our web server. Please see the following GitHub gist; it is a contiguous section of our server's dpkg.log showing the November 14, 2011 update to apache2: https://gist.github.com/1368361 it includes 2011-11-14 11:22:18 remove libapache2-mod-php5 5.3.2-1ubuntu4.10 5.3.2-1ubuntu4.10 Is this a known issue? Do other people see this too? I could not find any launchpad bug reports about it. Platform details: $ lsb_release -ds Ubuntu 10.04.3 LTS $ uname -srvm Linux 2.6.38-12-virtual #51~lucid1-Ubuntu SMP Thu Sep 29 20:27:50 UTC 2011 x86_64 $ dpkg -l | awk '/ii.*apache/ {print $2 " " $3 }' apache2 2.2.14-5ubuntu8.7 apache2-mpm-prefork 2.2.14-5ubuntu8.7 apache2-utils 2.2.14-5ubuntu8.7 apache2.2-bin 2.2.14-5ubuntu8.7 apache2.2-common 2.2.14-5ubuntu8.7 libapache2-mod-authnz-external 3.2.4-2+squeeze1build0.10.04.1 libapache2-mod-php5 5.3.2-1ubuntu4.10 Thanks. At a high level, the update process looks like: package package_name do action :upgrade case node[:platform] when 'centos', 'redhat', 'scientific' options '--disableplugin=fastestmirror' when 'ubuntu' options '-o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold"' end end But at a lower level def install_package(name, version) run_command_with_systems_locale( :command => "apt-get -q -y#{expand_options(@new_resource.options)} install #{name}=#{version}", :environment => { "DEBIAN_FRONTEND" => "noninteractive" } ) end def upgrade_package(name, version) install_package(name, version) end So Chef is using "install" to do "update". This sort of moves the question around to "how does apt-get safe-upgrade remember to re-install libapache2-mod-php5?" The exact sequence of packages that triggered this was: apache2 apache2-mpm-prefork apache2-mpm-worker apache2-utils apache2.2-bin apache2.2-common But the code is attempting to run checks to make sure the packages in that list are installed already before attempting to "upgrade" them. case node[:platform] when 'debian', 'centos', 'fedora', 'redhat', 'scientific', 'ubuntu' # first primitive way is to define the updates in the recipe # data bags will be used later %w/ apache2 apache2-mpm-prefork apache2-mpm-worker apache2-utils apache2.2-bin apache2.2-common /.each{ |package_name| Chef::Log.debug("is #{package_name} among local packages available for changes?") next unless node[:packages][:changes].keys.include?(package_name) Chef::Log.debug("is #{package_name} available for upgrade?") next unless node[:packages][:changes][package_name][:action] == 'upgrade' package package_name do action :upgrade case node[:platform] when 'centos', 'redhat', 'scientific' options '--disableplugin=fastestmirror' when 'ubuntu' options '-o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold"' end end tag('upgraded') } # after upgrading everything, run yum cache updater if tagged?('upgraded') # Remove old orphaned dependencies and kernel images and kernel headers etc. # Remove cached deb files. case node[:platform] when 'ubuntu' execute 'apt-get -y autoremove' execute 'apt-get clean' # Re-check what updates are available soon. 
when 'centos', 'fedora', 'redhat', 'scientific' node[:packages][:last_time_we_looked_at_yum] = 0 end untag('upgraded') end end But it's clear that it fails since the dpkg.log has 2011-11-14 11:22:25 install apache2-mpm-worker 2.2.14-5ubuntu8.7 on a system which does not currently have apache2-mpm-worker. I will have to discuss this with the author, thanks again.

    Read the article

  • Error caused by Dropbox in update manager

    - by Olivier Lalonde
    I am getting the following error message when the update manager runs: Apt Authentication issue Problem during package list update. The package list update failed with a authentication failure. This usually happens behind a network proxy server. Please try to click on the "Run this action now" button to correct the problem or update the list manually by running Update Manager and clicking on "Check". W: A error occurred during the signature verification. The repository is not updated and the previous index files will be used.GPG error: http://linux.dropbox.com lucid Release: The following signatures were invalid: NODATA 1 NODATA 2 W: Failed to fetch http://linux.dropbox.com/ubuntu/dists/lucid/Release W: Some index files failed to download, they have been ignored, or old ones used instead. This error started to appear recently and for no obvious reason (maybe because I created myself a private PGP key?). I'm running Dropbox v0.7.11 on Ubuntu Lucid 10.04.

    Read the article

  • Ten - oh, wait, eleven - Eleven things you should know about the ASP.NET Fall 2012 Update

    - by Jon Galloway
    Today, just a little over two months after the big ASP.NET 4.5 / ASP.NET MVC 4 / ASP.NET Web API / Visual Studio 2012 / WebMatrix 2 release, the first preview of the ASP.NET Fall 2012 Update is out. Here's what you need to know: There are no new framework bits in this release - there's no change or update to core ASP.NET, ASP.NET MVC or Web Forms features. This means that you can start using it without any updates to your server, upgrade concerns, etc. This update is really an update to the project templates and Visual Studio tooling, conceptually similar to the ASP.NET MVC 3 Tools Update. It's a relatively lightweight install. It's a 41MB download. I've installed it many times and it usually takes 5-7 minutes; it's never required a reboot. It adds some new project templates to ASP.NET MVC: Facebook Application and Single Page Application templates. It adds a lot of cool enhancements to ASP.NET Web API. It adds some tooling that makes it easy to take advantage of features like SignalR, Friendly URLs, and Windows Azure Authentication. Most of the new features are installed via NuGet packages. Since ASP.NET is open source, nightly NuGet packages are available, and the roadmap is published, most of this has really been publicly available for a while. The official name of this drop is the ASP.NET Fall 2012 Update BUILD Prerelease. Please do not attempt to say that ten times fast. While the EULA doesn't prohibit it, it WILL legally change your first name to Scott. As with all new releases, you can find out everything you need to know about the Fall Update at http://asp.net/vnext (especially the release notes!) I'm going to be showing all of this off, assisted by special guest code monkey Scott Hanselman, this Friday at BUILD: Bleeding edge ASP.NET: See what is next for MVC, Web API, SignalR and more… (and I've heard it will be livestreamed). Let's look at some of those things in more detail. No new bits ASP.NET 4.5, MVC 4 and Web API have a lot of great core features. I see the goal of this update release as making it easier to put those features to use to solve some useful scenarios by taking advantage of NuGet packages and template code. If you create a new ASP.NET MVC application using one of the new templates, you'll see that it's using the ASP.NET MVC 4 RTM NuGet package (4.0.20710.0): This means you can install and use the Fall Update without any impact on your existing projects and no worries about upgrading or compatibility. New Facebook Application Template ASP.NET MVC 4 (and ASP.NET 4.5 Web Forms) included the ability to authenticate your users via OAuth and OpenID, so you could let users log in to your site using a Facebook account. One of the new changes in the Fall Update is a new template that makes it really easy to create full Facebook applications. You could create Facebook applications in ASP.NET already, you'd just need to go through a few steps: Search around to find a good Facebook NuGet package, like the Facebook C# SDK (written by my friend Nathan Totten and some other Facebook SDK brainiacs). Read the Facebook developer documentation to figure out how to authenticate and integrate with them. Write some code, debug it and repeat until you got something working. Get started with the application you'd originally wanted to write. What this template does for you: eliminate steps 1-3. 
Erik Porter, Nathan and some other experts built out the Facebook Application template so it automatically pulls in and configures the Facebook NuGet package and makes it really easy to take advantage of it in an ASP.NET MVC application. One great example is the way you access a Facebook user's information. Take a look at the following code in a File / New / MVC / Facebook Application site. First, the Home Controller Index action: [FacebookAuthorize(Permissions = "email")] public ActionResult Index(MyAppUser user, FacebookObjectList<MyAppUserFriend> userFriends) { ViewBag.Message = "Modify this template to jump-start your Facebook application using ASP.NET MVC."; ViewBag.User = user; ViewBag.Friends = userFriends.Take(5); return View(); } First, notice that there's a FacebookAuthorize attribute which requires that the user is authenticated via Facebook and requires permissions to access their e-mail address. It binds to two things: a custom MyAppUser object and a list of friends. Let's look at the MyAppUser code: using Microsoft.AspNet.Mvc.Facebook.Attributes; using Microsoft.AspNet.Mvc.Facebook.Models; // Add any fields you want to be saved for each user and specify the field name in the JSON coming back from Facebook // https://developers.facebook.com/docs/reference/api/user/ namespace MvcApplication3.Models { public class MyAppUser : FacebookUser { public string Name { get; set; } [FacebookField(FieldName = "picture", JsonField = "picture.data.url")] public string PictureUrl { get; set; } public string Email { get; set; } } } You can add in other custom fields if you want, but you can also just bind to a FacebookUser and it will automatically pull in the available fields. You can even just bind directly to a FacebookUser and check for what's available in debug mode, which makes it really easy to explore. For more information and some walkthroughs on creating Facebook applications, see: Deploying your first Facebook App on Azure using ASP.NET MVC Facebook Template (Yao Huang Lin) Facebook Application Template Tutorial (Erik Porter) Single Page Application template Early releases of ASP.NET MVC 4 included a Single Page Application template, but it was removed for the official release. There was a lot of interest in it, but it was kind of complex, as it handled features for things like data management. The new Single Page Application template that ships with the Fall Update is more lightweight. It uses Knockout.js on the client and ASP.NET Web API on the server, and it includes a sample application that shows how they all work together. I think the real benefit of this application is that it shows a good pattern for using ASP.NET Web API and Knockout.js. For instance, it's easy to end up with a mess of JavaScript when you're building out a client-side application. This template uses three separate JavaScript files (delivered via a Bundle, of course): todoList.js - this is where the main client-side logic lives todoList.dataAccess.js - this defines how the client-side application interacts with the back-end services todoList.bindings.js - this is where you set up events and overrides for the Knockout bindings - for instance, hooking up jQuery validation and defining some client-side events This is a fun one to play with, because you can just create a new Single Page Application and hit F5. Quick, easy install (with one gotcha) One of the cool engineering changes for this release is a big update to the installer to make it more lightweight and efficient. 
I've been running nightly builds of this for a few weeks to prep for my BUILD demos, and the install has been really quick and easy to use. The install takes about 5 minutes, has never required a reboot for me, and the uninstall is just as simple. There's one gotcha, though. In this preview release, you may hit an issue that will require you to uninstall and re-install the NuGet VSIX package. The problem comes up when you create a new MVC application and see this dialog: The solution, as explained in the release notes, is to uninstall and re-install the NuGet VSIX package: Start Visual Studio 2012 as an Administrator Go to Tools->Extensions and Updates and uninstall NuGet. Close Visual Studio Navigate to the ASP.NET Fall 2012 Update installation folder: For Visual Studio 2012: Program Files\Microsoft ASP.NET\ASP.NET Web Stack\Visual Studio 2012 For Visual Studio 2012 Express for Web: Program Files\Microsoft ASP.NET\ASP.NET Web Stack\Visual Studio Express 2012 for Web Double click on the NuGet.Tools.vsix to reinstall NuGet This took me under a minute to do, and I was up and running. ASP.NET Web API Update Extravaganza! Uh, the Web API team is out of hand. They added a ton of new stuff: OData support, Tracing, and API Help Page generation. OData support Some people like OData. Some people start twitching when you mention it. If you're in the first group, this is for you. You can add a [Queryable] attribute to an API that returns an IQueryable<Whatever> and you get OData query support from your clients. Then, without any extra changes to your client or server code, your clients can send filters like this: /Suppliers?$filter=Name eq ‘Microsoft’ For more information about OData support in ASP.NET Web API, see Alex James' mega-post about it: OData support in ASP.NET Web API ASP.NET Web API Tracing Tracing makes it really easy to leverage the .NET Tracing system from within your ASP.NET Web API's. If you look at the \App_Start\WebApiConfig.cs file in new ASP.NET Web API project, you'll see a call to TraceConfig.Register(config). That calls into some code in the new \App_Start\TraceConfig.cs file: public static void Register(HttpConfiguration configuration) { if (configuration == null) { throw new ArgumentNullException("configuration"); } SystemDiagnosticsTraceWriter traceWriter = new SystemDiagnosticsTraceWriter() { MinimumLevel = TraceLevel.Info, IsVerbose = false }; configuration.Services.Replace(typeof(ITraceWriter), traceWriter); } As you can see, this is using the standard trace system, so you can extend it to any other trace listeners you'd like. 
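Before moving on to the trace output, here is a minimal sketch of the [Queryable] support described above. This is illustrative only: the Supplier type and SuppliersController are invented for this example, and the attribute is assumed to come from the ASP.NET Web API OData NuGet package covered in Alex James' post.

using System.Linq;
using System.Web.Http;

public class Supplier
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class SuppliersController : ApiController
{
    private static readonly Supplier[] Suppliers =
    {
        new Supplier { Id = 1, Name = "Microsoft" },
        new Supplier { Id = 2, Name = "Contoso" }
    };

    // [Queryable] lets clients append OData query options such as
    // /api/Suppliers?$filter=Name eq 'Microsoft' with no extra server code;
    // the attribute applies the query to the returned IQueryable.
    [Queryable]
    public IQueryable<Supplier> Get()
    {
        return Suppliers.AsQueryable();
    }
}

That covers the query side; the trace output below shows what the replaced ITraceWriter service produces at runtime.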
To see how it works with the built in diagnostics trace writer, just run the application call some API's, and look at the Visual Studio Output window: iisexpress.exe Information: 0 : Request, Method=GET, Url=http://localhost:11147/api/Values, Message='http://localhost:11147/api/Values' iisexpress.exe Information: 0 : Message='Values', Operation=DefaultHttpControllerSelector.SelectController iisexpress.exe Information: 0 : Message='WebAPI.Controllers.ValuesController', Operation=DefaultHttpControllerActivator.Create iisexpress.exe Information: 0 : Message='WebAPI.Controllers.ValuesController', Operation=HttpControllerDescriptor.CreateController iisexpress.exe Information: 0 : Message='Selected action 'Get()'', Operation=ApiControllerActionSelector.SelectAction iisexpress.exe Information: 0 : Operation=HttpActionBinding.ExecuteBindingAsync iisexpress.exe Information: 0 : Operation=QueryableAttribute.ActionExecuting iisexpress.exe Information: 0 : Message='Action returned 'System.String[]'', Operation=ReflectedHttpActionDescriptor.ExecuteAsync iisexpress.exe Information: 0 : Message='Will use same 'JsonMediaTypeFormatter' formatter', Operation=JsonMediaTypeFormatter.GetPerRequestFormatterInstance iisexpress.exe Information: 0 : Message='Selected formatter='JsonMediaTypeFormatter', content-type='application/json; charset=utf-8'', Operation=DefaultContentNegotiator.Negotiate iisexpress.exe Information: 0 : Operation=ApiControllerActionInvoker.InvokeActionAsync, Status=200 (OK) iisexpress.exe Information: 0 : Operation=QueryableAttribute.ActionExecuted, Status=200 (OK) iisexpress.exe Information: 0 : Operation=ValuesController.ExecuteAsync, Status=200 (OK) iisexpress.exe Information: 0 : Response, Status=200 (OK), Method=GET, Url=http://localhost:11147/api/Values, Message='Content-type='application/json; charset=utf-8', content-length=unknown' iisexpress.exe Information: 0 : Operation=JsonMediaTypeFormatter.WriteToStreamAsync iisexpress.exe Information: 0 : Operation=ValuesController.Dispose API Help Page When you create a new ASP.NET Web API project, you'll see an API link in the header: Clicking the API link shows generated help documentation for your ASP.NET Web API controllers: And clicking on any of those APIs shows specific information: What's great is that this information is dynamically generated, so if you add your own new APIs it will automatically show useful and up to date help. This system is also completely extensible, so you can generate documentation in other formats or customize the HTML help as much as you'd like. The Help generation code is all included in an ASP.NET MVC Area: SignalR SignalR is a really slick open source project that was started by some ASP.NET team members in their spare time to add real-time communications capabilities to ASP.NET - and .NET applications in general. It allows you to handle long running communications channels between your server and multiple connected clients using the best communications channel they can both support - websockets if available, falling back all the way to old technologies like long polling if necessary for old browsers. SignalR remains an open source project, but now it's being included in ASP.NET (also open source, hooray!). That means there's real, official ASP.NET engineering work being put into SignalR, and it's even easier to use in an ASP.NET application. Now in any ASP.NET project type, you can right-click / Add / New Item... SignalR Hub or Persistent Connection. And much more... There's quite a bit more. 
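To give a sense of how little code a hub needs, here is a minimal sketch; the class name, method name and client callback below are illustrative rather than the item template's exact output, and it assumes the Microsoft.AspNet.SignalR NuGet package is referenced.

using Microsoft.AspNet.SignalR;

public class ChatHub : Hub
{
    // Any connected client can invoke Send; the hub then broadcasts the message
    // to every client that has registered an "addMessage" callback on its proxy.
    public void Send(string name, string message)
    {
        Clients.All.addMessage(name, message);
    }
}

The dynamic Clients.All call is resolved on the client side, which is what keeps the server code this small.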
You can find more info at http://asp.net/vnext, and we'll be adding more content as fast as we can. Watch my BUILD talk to see as I demonstrate these and other features in the ASP.NET Fall 2012 Update, as well as some other even futurey-er stuff!

    Read the article

  • Building Awesome WM

    - by Dragan Chupacabrovic
    Hello, I am following these steps in order to build Awesome window manager on 10.04 I am building 3.4 while the tutorial is for 3.1 I installed all of the specified dependencies including cairo. After running cd awesome-3.4 && make I get the following missing dependencies error: Running cmake… -- cat - /bin/cat -- ln - /bin/ln -- grep - /bin/grep -- git - /usr/bin/git -- hostname - /bin/hostname -- gperf - /usr/bin/gperf -- asciidoc - /usr/bin/asciidoc -- xmlto - /usr/bin/xmlto -- gzip - /bin/gzip -- lua - /usr/bin/lua -- luadoc - /usr/bin/luadoc -- convert - /usr/bin/convert -- checking for modules 'glib-2.0;cairo;x11;pango=1.19.3;pangocairo=1.19.3;xcb-randr;xcb-xtest;xcb-xinerama;xcb-shape;xcb-event=0.3.6;xcb-aux=0.3.0;xcb-atom=0.3.0;xcb-keysyms=0.3.4;xcb-icccm=0.3.6;xcb-image=0.3.0;xcb-property=0.3.0;cairo-xcb;libstartup-notification-1.0=0.10;xproto=7.0.15;imlib2;libxdg-basedir=1.0.0' -- package 'xcb-xtest' not found -- package 'xcb-property=0.3.0' not found -- package 'libstartup-notification-1.0=0.10' not found -- package 'libxdg-basedir=1.0.0' not found CMake Error at /usr/share/cmake-2.8/Modules/FindPkgConfig.cmake:259 (message): A required package was not found Call Stack (most recent call first): /usr/share/cmake-2.8/Modules/FindPkgConfig.cmake:311 (_pkg_check_modules_internal) awesomeConfig.cmake:133 (pkg_check_modules) CMakeLists.txt:15 (include) CMake Error at awesomeConfig.cmake:157 (message): Call Stack (most recent call first): CMakeLists.txt:15 (include) -- Configuring incomplete, errors occurred! make: * [cmake] Error 1 I ran sudo apt-get install libxcb-xtest0 libxcb-property1 libxdg-basedir1 libstartup-notification0 but the problem is still there. It is probably because apt-get uses different names for these libraries. Please advise EDIT following enzotib's suggestion, I ran: sudo apt-get install libxcb-xtest0-dev libxcb-property1-dev libxdg-basedir-dev libstartup-notification0-dev and now it looks like I'm missing a library: awesome-3.4$ make Running cmake… -- cat - /bin/cat -- ln - /bin/ln -- grep - /bin/grep -- git - /usr/bin/git -- hostname - /bin/hostname -- gperf - /usr/bin/gperf -- asciidoc - /usr/bin/asciidoc -- xmlto - /usr/bin/xmlto -- gzip - /bin/gzip -- lua - /usr/bin/lua -- luadoc - /usr/bin/luadoc -- convert - /usr/bin/convert -- Configuring lib/naughty.lua -- Configuring lib/awful/tooltip.lua -- Configuring lib/awful/init.lua -- Configuring lib/awful/titlebar.lua -- Configuring lib/awful/key.lua -- Configuring lib/awful/mouse/init.lua -- Configuring lib/awful/mouse/finder.lua -- Configuring lib/awful/autofocus.lua -- Configuring lib/awful/screen.lua -- Configuring lib/awful/rules.lua -- Configuring lib/awful/widget/init.lua -- Configuring lib/awful/widget/taglist.lua -- Configuring lib/awful/widget/graph.lua -- Configuring lib/awful/widget/tasklist.lua -- Configuring lib/awful/widget/common.lua -- Configuring lib/awful/widget/prompt.lua -- Configuring lib/awful/widget/launcher.lua -- Configuring lib/awful/widget/button.lua -- Configuring lib/awful/widget/layoutbox.lua -- Configuring lib/awful/widget/layout/init.lua -- Configuring lib/awful/widget/layout/vertical.lua -- Configuring lib/awful/widget/layout/horizontal.lua -- Configuring lib/awful/widget/layout/default.lua -- Configuring lib/awful/widget/progressbar.lua -- Configuring lib/awful/widget/textclock.lua -- Configuring lib/awful/dbus.lua -- Configuring lib/awful/remote.lua -- Configuring lib/awful/client.lua -- Configuring lib/awful/prompt.lua -- Configuring lib/awful/completion.lua -- 
Configuring lib/awful/tag.lua -- Configuring lib/awful/util.lua -- Configuring lib/awful/button.lua -- Configuring lib/awful/menu.lua -- Configuring lib/awful/hooks.lua -- Configuring lib/awful/wibox.lua -- Configuring lib/awful/layout/init.lua -- Configuring lib/awful/layout/suit/init.lua -- Configuring lib/awful/layout/suit/floating.lua -- Configuring lib/awful/layout/suit/fair.lua -- Configuring lib/awful/layout/suit/spiral.lua -- Configuring lib/awful/layout/suit/magnifier.lua -- Configuring lib/awful/layout/suit/tile.lua -- Configuring lib/awful/layout/suit/max.lua -- Configuring lib/awful/placement.lua -- Configuring lib/awful/startup_notification.lua -- Configuring lib/beautiful.lua -- Configuring themes/zenburn//theme.lua -- Configuring themes/default//theme.lua -- Configuring themes/sky//theme.lua -- Configuring config.h -- Configuring awesomerc.lua -- Configuring awesome-version-internal.h -- Configuring awesome.doxygen -- Configuring done -- Generating done -- Build files have been written to: /home/druden/util/awesome-3.4/.build-vedroid-i486-linux-gnu-4.4.3 Running make Makefile… Building… [ 4%] Built target generated_sources [ 5%] Building C object CMakeFiles/awesome.dir/awesome.c.o In file included from /home/druden/util/awesome-3.4/spawn.h:25, from /home/druden/util/awesome-3.4/awesome.c:33: /home/druden/util/awesome-3.4/globalconf.h:57: error: expected specifier-qualifier-list before ‘xcb_event_handlers_t’ In file included from /home/druden/util/awesome-3.4/awesome.c:34: /home/druden/util/awesome-3.4/client.h: In function ‘client_stack’: /home/druden/util/awesome-3.4/client.h:212: error: ‘awesome_t’ has no member named ‘client_need_stack_refresh’ /home/druden/util/awesome-3.4/client.h: In function ‘client_raise’: /home/druden/util/awesome-3.4/client.h:227: error: ‘awesome_t’ has no member named ‘stack’ In file included from /home/druden/util/awesome-3.4/awesome.c:42: /home/druden/util/awesome-3.4/titlebar.h: In function ‘titlebar_update_geometry’: /home/druden/util/awesome-3.4/titlebar.h:150: error: ‘awesome_t’ has no member named ‘L’ /home/druden/util/awesome-3.4/titlebar.h:151: error: ‘awesome_t’ has no member named ‘L’ /home/druden/util/awesome-3.4/titlebar.h:152: error: ‘awesome_t’ has no member named ‘L’ In file included from /home/druden/util/awesome-3.4/awesome.c:47: /home/druden/util/awesome-3.4/common/xutil.h: In function ‘xutil_get_text_property_from_reply’: /home/druden/util/awesome-3.4/common/xutil.h:39: warning: ‘STRING’ is deprecated (declared at /usr/local/include/xcb/xcb_atom.h:83) /home/druden/util/awesome-3.4/common/xutil.h: At top level: /home/druden/util/awesome-3.4/common/xutil.h:60: error: expected ‘)’ before ‘*’ token /home/druden/util/awesome-3.4/awesome.c: In function ‘awesome_atexit’: /home/druden/util/awesome-3.4/awesome.c:65: error: ‘awesome_t’ has no member named ‘hooks’ /home/druden/util/awesome-3.4/awesome.c:66: error: ‘awesome_t’ has no member named ‘L’ /home/druden/util/awesome-3.4/awesome.c:66: error: ‘awesome_t’ has no member named ‘hooks’ /home/druden/util/awesome-3.4/awesome.c:68: error: ‘awesome_t’ has no member named ‘L’ /home/druden/util/awesome-3.4/awesome.c:73: error: ‘awesome_t’ has no member named ‘embedded’ /home/druden/util/awesome-3.4/awesome.c:76: error: ‘awesome_t’ has no member named ‘embedded’ /home/druden/util/awesome-3.4/awesome.c:77: error: ‘awesome_t’ has no member named ‘embedded’ /home/druden/util/awesome-3.4/awesome.c:89: error: ‘awesome_t’ has no member named ‘clients’ /home/druden/util/awesome-3.4/awesome.c:89: 
error: ‘awesome_t’ has no member named ‘clients’ /home/druden/util/awesome-3.4/awesome.c:89: error: ‘awesome_t’ has no member named ‘clients’ /home/druden/util/awesome-3.4/awesome.c:89: warning: type defaults to ‘int’ in declaration of ‘c’ /home/druden/util/awesome-3.4/awesome.c:89: error: ‘awesome_t’ has no member named ‘clients’ /home/druden/util/awesome-3.4/awesome.c:89: error: ‘awesome_t’ has no member named ‘clients’ /home/druden/util/awesome-3.4/awesome.c:89: error: ‘awesome_t’ has no member named ‘clients’ /home/druden/util/awesome-3.4/awesome.c:91: error: invalid type argument of ‘unary *’ (have ‘int’) /home/druden/util/awesome-3.4/awesome.c:92: error: invalid type argument of ‘unary *’ (have ‘int’) /home/druden/util/awesome-3.4/awesome.c:96: error: ‘awesome_t’ has no member named ‘L’ /home/druden/util/awesome-3.4/awesome.c: In function ‘a_xcb_check_cb’: /home/druden/util/awesome-3.4/awesome.c:223: warning: implicit declaration of function ‘xcb_event_handle’ /home/druden/util/awesome-3.4/awesome.c:223: error: ‘awesome_t’ has no member named ‘evenths’ /home/druden/util/awesome-3.4/awesome.c:230: error: ‘awesome_t’ has no member named ‘evenths’ /home/druden/util/awesome-3.4/awesome.c: In function ‘awesome_restart’: /home/druden/util/awesome-3.4/awesome.c:277: error: ‘awesome_t’ has no member named ‘argv’ /home/druden/util/awesome-3.4/awesome.c: In function ‘xerror’: /home/druden/util/awesome-3.4/awesome.c:305: error: ‘XCB_EVENT_ERROR_BAD_WINDOW’ undeclared (first use in this function) /home/druden/util/awesome-3.4/awesome.c:305: error: (Each undeclared identifier is reported only once /home/druden/util/awesome-3.4/awesome.c:305: error: for each function it appears in.) /home/druden/util/awesome-3.4/awesome.c:306: error: ‘XCB_EVENT_ERROR_BAD_MATCH’ undeclared (first use in this function) /home/druden/util/awesome-3.4/awesome.c:308: error: ‘XCB_EVENT_ERROR_BAD_VALUE’ undeclared (first use in this function) /home/druden/util/awesome-3.4/awesome.c: In function ‘main’: /home/druden/util/awesome-3.4/awesome.c:369: error: ‘awesome_t’ has no member named ‘keygrabber’ /home/druden/util/awesome-3.4/awesome.c:370: error: ‘awesome_t’ has no member named ‘mousegrabber’ /home/druden/util/awesome-3.4/awesome.c:376: error: ‘awesome_t’ has no member named ‘argv’ /home/druden/util/awesome-3.4/awesome.c:377: error: ‘awesome_t’ has no member named ‘argv’ /home/druden/util/awesome-3.4/awesome.c:381: error: ‘awesome_t’ has no member named ‘argv’ /home/druden/util/awesome-3.4/awesome.c:382: error: ‘awesome_t’ has no member named ‘argv’ /home/druden/util/awesome-3.4/awesome.c:424: error: ‘awesome_t’ has no member named ‘loop’ /home/druden/util/awesome-3.4/awesome.c:425: error: ‘awesome_t’ has no member named ‘timer’ /home/druden/util/awesome-3.4/awesome.c:425: error: ‘awesome_t’ has no member named ‘timer’ /home/druden/util/awesome-3.4/awesome.c:425: error: ‘awesome_t’ has no member named ‘timer’ /home/druden/util/awesome-3.4/awesome.c:425: error: ‘awesome_t’ has no member named ‘timer’ /home/druden/util/awesome-3.4/awesome.c:425: error: ‘awesome_t’ has no member named ‘timer’ /home/druden/util/awesome-3.4/awesome.c:425: error: ‘awesome_t’ has no member named ‘timer’ /home/druden/util/awesome-3.4/awesome.c:431: error: ‘awesome_t’ has no member named ‘loop’ /home/druden/util/awesome-3.4/awesome.c:432: error: ‘awesome_t’ has no member named ‘loop’ /home/druden/util/awesome-3.4/awesome.c:433: error: ‘awesome_t’ has no member named ‘loop’ /home/druden/util/awesome-3.4/awesome.c:434: error: ‘awesome_t’ has no 
member named ‘loop’ /home/druden/util/awesome-3.4/awesome.c:435: error: ‘awesome_t’ has no member named ‘loop’ /home/druden/util/awesome-3.4/awesome.c:436: error: ‘awesome_t’ has no member named ‘loop’ /home/druden/util/awesome-3.4/awesome.c:443: error: ‘awesome_t’ has no member named ‘default_screen’ /home/druden/util/awesome-3.4/awesome.c:450: error: ‘awesome_t’ has no member named ‘have_xtest’ /home/druden/util/awesome-3.4/awesome.c:462: error: ‘awesome_t’ has no member named ‘loop’ /home/druden/util/awesome-3.4/awesome.c:464: error: ‘awesome_t’ has no member named ‘loop’ /home/druden/util/awesome-3.4/awesome.c:465: error: ‘awesome_t’ has no member named ‘loop’ /home/druden/util/awesome-3.4/awesome.c:467: error: ‘awesome_t’ has no member named ‘loop’ /home/druden/util/awesome-3.4/awesome.c:468: error: ‘awesome_t’ has no member named ‘loop’ /home/druden/util/awesome-3.4/awesome.c:471: warning: implicit declaration of function ‘xcb_event_handlers_init’ /home/druden/util/awesome-3.4/awesome.c:471: error: ‘awesome_t’ has no member named ‘evenths’ /home/druden/util/awesome-3.4/awesome.c:472: warning: implicit declaration of function ‘xutil_error_handler_catch_all_set’ /home/druden/util/awesome-3.4/awesome.c:472: error: ‘awesome_t’ has no member named ‘evenths’ /home/druden/util/awesome-3.4/awesome.c:490: warning: implicit declaration of function ‘xcb_event_poll_for_event_loop’ /home/druden/util/awesome-3.4/awesome.c:490: error: ‘awesome_t’ has no member named ‘evenths’ /home/druden/util/awesome-3.4/awesome.c:493: error: ‘awesome_t’ has no member named ‘evenths’ /home/druden/util/awesome-3.4/awesome.c:496: error: ‘awesome_t’ has no member named ‘keysyms’ /home/druden/util/awesome-3.4/awesome.c:507: error: ‘awesome_t’ has no member named ‘colors’ /home/druden/util/awesome-3.4/awesome.c:510: error: ‘awesome_t’ has no member named ‘colors’ /home/druden/util/awesome-3.4/awesome.c:513: error: ‘awesome_t’ has no member named ‘font’ /home/druden/util/awesome-3.4/awesome.c:519: error: ‘awesome_t’ has no member named ‘keysyms’ /home/druden/util/awesome-3.4/awesome.c:519: error: ‘awesome_t’ has no member named ‘numlockmask’ /home/druden/util/awesome-3.4/awesome.c:520: error: ‘awesome_t’ has no member named ‘shiftlockmask’ /home/druden/util/awesome-3.4/awesome.c:520: error: ‘awesome_t’ has no member named ‘capslockmask’ /home/druden/util/awesome-3.4/awesome.c:521: error: ‘awesome_t’ has no member named ‘modeswitchmask’ /home/druden/util/awesome-3.4/awesome.c:563: error: ‘awesome_t’ has no member named ‘evenths’ /home/druden/util/awesome-3.4/awesome.c:572: error: ‘awesome_t’ has no member named ‘loop’ /home/druden/util/awesome-3.4/awesome.c:575: error: ‘awesome_t’ has no member named ‘loop’ /home/druden/util/awesome-3.4/awesome.c:576: error: ‘awesome_t’ has no member named ‘loop’ /home/druden/util/awesome-3.4/awesome.c:577: error: ‘awesome_t’ has no member named ‘loop’ /home/druden/util/awesome-3.4/awesome.c:578: error: ‘awesome_t’ has no member named ‘loop’ /home/druden/util/awesome-3.4/awesome.c:579: error: ‘awesome_t’ has no member named ‘loop’ /home/druden/util/awesome-3.4/awesome.c:580: error: ‘awesome_t’ has no member named ‘loop’ make[3]: * [CMakeFiles/awesome.dir/awesome.c.o] Error 1 make[2]: [CMakeFiles/awesome.dir/all] Error 2 make[1]: [all] Error 2 make: * [cmake-build] Error 2

    Read the article

  • What Every Developer Should Know About MSI Components

    - by Alois Kraus
    Hopefully nothing. But if you have to do more than simple XCopy deployment and you need to support updates, upgrades and perhaps side-by-side scenarios there is no way around MSI. You can create MSI files with a Visual Studio Setup project, which is severely limited, or you can use the Windows Installer XML (WiX) toolset. I cannot talk about WIX with my German colleagues because WIX has a very special meaning. It is funny to always use the long name when I talk about deployment possibilities. Alternatively you can buy commercial tools which help you to author MSI files, but I am not sure how good they are. Given enough pain with existing solutions you can also learn the MSI APIs and create your own packaging solution. If I were you I would either use a commercial visual tool for easy deployments or use the free WiX toolset. Once you know the WiX schema you can create well-formed WiX xml files easily with any editor. Then you can “compile” your MSI package from the wxs files. Recently I had the “pleasure” to get my hands dirty with C++ (again) and the MSI technology. Installation is a complex topic, but after several months of digging into arcane MSI issues I can safely say that there should be an easier way to install and update files than there is today. I am not alone with this statement, as John Robbins (creator of the cool tool Paraffin) states: “.. It's a brittle and scary API in Windows …”. To help other people struggling with installation issues I present the advice I (and others) found useful and what will happen if you ignore this advice. What is an MSI file? An MSI file is basically a database with tables which reference each other to control how your un/installation should work. The basic idea is that you declare via these tables what you want to install and MSI controls how to get your stuff onto or off your machine. Your “stuff” usually consists of files, registry keys, shortcuts and environment variables. Therefore the most important tables are the File, Registry, Environment and Shortcut tables, which define what will be un/installed. The key to mastering MSI is that every resource (file, registry key, …) is associated with an MSI component. The actual payload consists of compressed files in the CAB format which can either be embedded into the MSI file or reside beside the MSI file or in a subdirectory below it. To examine MSI files you need Orca, a free MSI editor provided by MS. There is also another free editor called Super Orca which supports diffs between MSI files and does not lock them. But since Orca comes with a shell extension I tend to use only Orca because it is so easy to right-click on an MSI file and open it with this tool. How Do I Install It? Double-click it. This does work for fresh installations as well as major upgrades. Updates need to be installed from the command line via msiexec /i <msi> REINSTALL=ALL REINSTALLMODE=vomus   This tells the installer to reinstall all already installed features (new features will NOT be installed). The REINSTALLMODE letters force an overwrite of the old cached package in the %WINDIR%\Installer folder. All files, shortcuts and registry keys are redeployed if they are missing or need to be replaced with a newer version. When things have gone really wrong and you want to overwrite everything unconditionally, use REINSTALLMODE=vamus. How To Enable MSI Logs? You can download an MSI from Microsoft which installs some registry keys to enable full MSI logging.
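Those registry keys are the machine-wide Windows Installer logging policy, so you can also set them yourself. Below is a minimal C# sketch (run elevated) under that assumption; the "voicewarmupx" value is the documented combination of all logging flags.

using Microsoft.Win32;

class EnableMsiLogging
{
    static void Main()
    {
        // Machine-wide installer logging policy; requires administrative rights.
        using (RegistryKey key = Registry.LocalMachine.CreateSubKey(
                   @"Software\Policies\Microsoft\Windows\Installer"))
        {
            // "voicewarmupx" switches on every logging flag, including the verbose (v)
            // and extra debugging (x) flags discussed below.
            key.SetValue("Logging", "voicewarmupx");
        }
    }
}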
The log files can be found in your %TEMP% folder and are called MSIxxxx.log. Alternatively you can add to your msiexec command line the option msiexec …. /l*vx <LogFileName> Personally I find it rather strange that * does not mean full logging. To really get all logs I need to add v and x, which is documented in the msiexec help, but I still find this behavior unintuitive. What are MSI components? The whole MSI logic is bound to the concept of MSI components. Nearly every MSI table has a Component column which binds an installable resource to a component. Below are the screenshots of the FeatureComponents and Component table of an example MSI. The Feature table basically defines the feature hierarchy. To find out what belongs to a feature you need to look at the FeatureComponents table, where for each feature the components are listed which will be installed when that feature is installed. The MSI components are defined in the Component table. This table has the component name as its first column and the component id, which is a GUID, as its second column. All resources you want to install belong to an MSI component. Therefore nearly all MSI tables have a Component_ column which contains the component name. If you look, e.g., at the File table you see that every file belongs to a component, which is true for all other tables which install resources. The Component table is the glue between all other tables which contain the resources you want to install. So far so easy. Why is MSI then so complex? Most MSI problems arise from the fact that you violated an MSI component rule in one way or another. When you install a feature the reference count for all components belonging to this feature will increase by one. If your component is installed by more than one feature it will get a higher refcount. When you uninstall a feature its refcount will drop by one. Interesting things happen if the component reference count reaches zero: then all associated resources will be deleted. That looks like a reasonable thing and it is. What makes it complex are the strange component rules you have to follow. Below are some important component rules from the Tao of the Windows Installer … Rule 16: Follow Component Rules Components are a very important part of the Installer technology. They are the means whereby the Installer manages the resources that make up your application. The SDK provides the following guidelines for creating components in your package: Never create two components that install a resource under the same name and target location. If a resource must be duplicated in multiple components, change its name or target location in each component. This rule should be applied across applications, products, product versions, and companies. Two components must not have the same key path file. This is a consequence of the previous rule. The key path value points to a particular file or folder belonging to the component that the installer uses to detect the component. If two components had the same key path file, the installer would be unable to distinguish which component is installed. Two components however may share a key path folder. Do not create a version of a component that is incompatible with all previous versions of the component. This rule should be applied across applications, products, product versions, and companies. Do not create components containing resources that will need to be installed into more than one directory on the user’s system. 
The installer installs all of the resources in a component into the same directory. It is not possible to install some resources into subdirectories. Do not include more than one COM server per component. If a component contains a COM server, this must be the key path for the component. Do not specify more than one file per component as a target for the Start menu or a Desktop shortcut. … And these rules do not even talk about component ids, update packages and upgrades, which you need to understand as well. Let's suppose you install two MSIs (MSI1 and MSI2) which have the same ComponentId but different component names. Both do install the same file. What will happen when you uninstall MSI2? Hm, the file should stay there. But the component names are different. Yes and yes. But MSI does not use the component name as the key for the refcount. Instead the ComponentId column of the Component table, which contains a GUID, is used as the identifier under which the refcount is stored. The components Comp1 and Comp2 are identical from the MSI perspective. After the installation of both MSIs the component with the Id {100000…} has a refcount of two. After uninstallation of one MSI there is still a refcount of one, which drops to zero just as expected when we uninstall the last MSI. Then the file which was the same for both MSIs is deleted. You should remember that MSI keeps a refcount across MSIs for components with the same component id. MSI manages components, not the resources you installed. The resources associated with a component are then and only then deleted when the refcount of the component reaches zero. The dependencies between features, components and resources can be described as relations, where m and k are numbers >= 1 and n can be 0. Inside an MSI the following relations are valid:

Feature    1 –> n Components
Component  1 –> m Features
Component  1 –> k Resources

These relations express that one feature can install several components and features can share components between them. Every (meaningful) component will install at least one resource, which means that its name (primary key, to stay in database speak) does occur in some other table in the Component column as a value which installs some resource. Let's make it clear with an example. We want to install with the feature MainFeature some files, a registry key and a shortcut. We can then create components Comp1..3 which are referenced by the resources defined in the corresponding tables.

Feature       Component   Registry       File        Shortcuts
MainFeature   Comp1       RegistryKey1
MainFeature   Comp2                      File.txt
MainFeature   Comp3                      File2.txt   Shortcut to File2.txt

It is illegal that the same resource is part of more than one component, since this would break the refcount mechanism. Let's illustrate this:

Feature    ComponentId   Resource    Reference Count
Feature1   {1000-…}      File1.txt   1
Feature2   {2000-…}      File1.txt   1

The installation part works well, but what happens when you uninstall Feature2? Component {20000…} gets a refcount of zero, whereupon MSI deletes all resources belonging to this component. In this case File1.txt will be deleted. But Feature1 still has another component {10000…} with a refcount of one, which means that the file was deleted too early. You just have ruined your installation. To fix it you then need to click on the Repair button under Add/Remove Programs to let MSI reinstall any missing registry keys, files or shortcuts. The vigilant reader might have noticed that there is more in the Component table. 
Besides its name and GUID it also has an installation directory, attributes and a KeyPath. The KeyPath is a reference to a file or registry key which is used to detect if the component is already installed. This becomes important when you repair or uninstall a component. To find out if the component is already installed MSI checks if the registry key or file referenced by the KeyPath property exists. When it does not exist MSI assumes the component was already uninstalled (which can lead to problems during uninstall); when it does exist MSI assumes the component is installed and all is fine. Why is this detail so important? Let's put all files into one component. The KeyPath should then be one of the files of your component, to check whether it was installed or not. When your installation becomes corrupt because a file was deleted, you cannot repair it with the Repair button under Add/Remove Programs because MSI checks the component integrity via the resource referenced by its KeyPath. As long as you did not delete the KeyPath file, MSI thinks all resources of your component are installed and never executes any repair action. You get even more trouble when you try to remove files during an upgrade (you cannot remove files during an update) from your super component which contains all files. The only way out, and therefore best practice, is to assign every resource you want to install its own component. This ensures painless updates and repairs, and it takes much less effort to remove specific files during an upgrade. In effect you get this best-practice relation:

Feature    1 –> n Components
Component  1 –> 1 Resource

MSI Component Rules Rule 1 – One component per resource Every resource you want to install (file, registry key, value, environment value, shortcut, directory, …) must get its own component which never changes between versions as long as the install location is the same. Penalty If you add more than one resource to a component you will break the repair capability of MSI because the KeyPath is used to check if the component needs repair.

MSI       ComponentId   Files
MSI 1.0   {1000}        File1-5
MSI 2.0   {2000}        File2-5

You want to remove File1 in version 2.0 of your MSI. Since you want to keep the other files you create a new component and add them there. MSI will delete all files if the component refcount of {1000} drops to zero. The files you want to keep are added to the new component {2000}. OK, that does work if your upgrade uninstalls the old MSI first. This will cause the refcount of all previously installed components to reach zero, which means that all files present in version 1.0 are deleted. But there is a faster way to perform your upgrade by first installing your new MSI and then removing the old one. If you choose this upgrade path then you will lose File1-5 after your upgrade, and not only File1 as intended by your new component design. Rule 2 – Only add, never remove resources from a component If you did follow Rule 1 you will not need Rule 2. You can add more resources to one component in a patch. That is ok. But you can never remove anything from it. There are tricky ways around that, but I do not want to encourage bad component design. Penalty Let's assume you have 2 MSI files which each install one file under the same component:

MSI1                   MSI2
{1000} – ComponentId   {1000} – ComponentId
File1.txt              File2.txt

When you install and uninstall both MSIs you will end up with an installation where either File1 or File2 will be left. Why? 
It seems that MSI does not store the resources associated with each component in its internal database. Instead Windows will simply query the MSI that is currently being uninstalled for all resources belonging to this component. Since it will find only one file and not two, it will only uninstall one file. That is the main reason why you can never remove resources from a component! Rule 3 – Never Remove A Component From an Update MSI. This is the same as if you change the GUID of a component by accident for your new update package. The resulting update package will not contain all components from the previously installed package. Penalty When you remove a component from a feature, MSI will set the feature state during the update to Advertised and log a warning message into its log file if you have enabled MSI logging. SELMGR: ComponentId '{2DCEA1BA-3E27-E222-484C-D0D66AEA4F62}' is registered to feature 'xxxxxxx, but is not present in the Component table.  Removal of components from a feature is not supported! MSI (c) (24:44) [07:53:13:436]: SELMGR: Removal of a component from a feature is not supported Advertised means that MSI treats all components of this feature as not installed. As a consequence, during uninstall nothing will be removed since it is not installed! This is not only bad because uninstall no longer works, but this feature will also not get the required patches. All other features which have followed the component versioning rules for update packages will be updated, but the one faulty feature will not. This results in very hard-to-find bugs where an update was only partially successful. Things got better with Windows Installer 4.5, but you cannot rely on nobody using an older installer. It is a good idea to add MSIENFORCEUPGRADECOMPONENTRULES=1 to your update msiexec call, which will abort the installation if you violate this rule.
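To make the component reference counting described in this article less abstract, here is a hedged P/Invoke sketch against msi.dll. MsiEnumClients is the Windows Installer API that lists every installed product holding a reference to a component GUID, i.e. the clients whose count must reach zero before the component's resources are removed. The component GUID below is a placeholder.

using System;
using System.Runtime.InteropServices;
using System.Text;

class ComponentClients
{
    // Enumerates the product codes of all products that have installed the given component.
    [DllImport("msi.dll", CharSet = CharSet.Unicode)]
    static extern uint MsiEnumClients(string szComponent, uint iProductIndex, StringBuilder lpProductBuf);

    static void Main()
    {
        // Placeholder GUID - replace with a ComponentId from your own Component table.
        const string componentId = "{00000000-0000-0000-0000-000000000000}";

        var productCode = new StringBuilder(39); // room for a GUID string plus the terminating null
        uint index = 0;

        // 0 is ERROR_SUCCESS; the call returns ERROR_NO_MORE_ITEMS (259) once the list is exhausted.
        // Every product printed here holds one reference to the component; its resources are
        // only deleted when the last of these clients is uninstalled.
        while (MsiEnumClients(componentId, index++, productCode) == 0)
        {
            Console.WriteLine("Component is referenced by product {0}", productCode);
        }
    }
}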

    Read the article

< Previous Page | 82 83 84 85 86 87 88 89 90 91 92 93  | Next Page >