
  • Windows Azure: Backup Services Release, Hyper-V Recovery Manager, VM Enhancements, Enhanced Enterprise Management Support

    - by ScottGu
    This morning we released a huge set of updates to Windows Azure. These new capabilities include:

    • Backup Services: General availability of Windows Azure Backup Services
    • Hyper-V Recovery Manager: Public preview of Windows Azure Hyper-V Recovery Manager
    • Virtual Machines: Delete attached disks, availability set warnings, SQL AlwaysOn configuration
    • Active Directory: Securely manage hundreds of SaaS applications
    • Enterprise Management: Use Active Directory to better manage Windows Azure
    • Windows Azure SDK 2.2: A massive update of our SDK + Visual Studio tooling support

    All of these improvements are now available to use immediately. Below are more details about them.

    Backup Service: General Availability Release of Windows Azure Backup

    Today we are releasing Windows Azure Backup Service as a general availability service. This release is now live in production, backed by an enterprise SLA, supported by Microsoft Support, and ready to use for production scenarios.

    Windows Azure Backup is a cloud-based backup solution for Windows Server which allows files and folders to be backed up and recovered from the cloud, providing off-site protection against data loss. The service gives IT administrators and developers the option to back up and protect critical data in an easily recoverable way from any location, with no upfront hardware cost.

    Windows Azure Backup is built on the Windows Azure platform and uses Windows Azure blob storage for storing customer data. Windows Server uses the downloadable Windows Azure Backup Agent to transfer file and folder data securely and efficiently to the Windows Azure Backup Service. Along with providing cloud backup for Windows Server, the service can also back up data from System Center Data Protection Manager and Windows Server Essentials to the cloud. All data is encrypted on-site before it is sent to the cloud, and customers retain and manage the encryption key (meaning the data is stored fully encrypted and can't be decrypted by anyone but you).

    Getting Started

    To get started with the Windows Azure Backup Service, create a new Backup Vault within the Windows Azure Management Portal. Click New->Data Services->Recovery Services->Backup Vault to do this. Once the backup vault is created, you'll be presented with a simple tutorial that will help guide you on how to register your Windows Servers with it.

    Once the servers you want to back up are registered, you can use the appropriate local management interface (such as the Microsoft Management Console snap-in, System Center Data Protection Manager Console, or Windows Server Essentials Dashboard) to configure scheduled backups and to optionally initiate recoveries. You can follow these tutorials to learn more about how to do this:

    Tutorial: Schedule Backups Using the Windows Azure Backup Agent. This tutorial helps you set up a backup schedule for your registered Windows Servers. It also explains how to use Windows PowerShell cmdlets to set up a custom backup schedule.

    Tutorial: Recover Files and Folders Using the Windows Azure Backup Agent. This tutorial helps you recover data from a backup. It also explains how to use Windows PowerShell cmdlets to do the same tasks.
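    To give a rough feel for the PowerShell route mentioned in the tutorials, here is a minimal sketch of a custom backup schedule using the agent's Online Backup (OB) cmdlets. The module name, parameter values, and folder path below are assumptions for illustration; follow the tutorials above for the authoritative steps.

        # A minimal sketch, assuming the Windows Azure Backup Agent and its
        # MSOnlineBackup PowerShell module are installed on the server.
        Import-Module MSOnlineBackup

        # Create an empty backup policy for this server
        $policy = New-OBPolicy

        # Back up every Monday/Wednesday/Friday at 21:00 (example values)
        $schedule = New-OBSchedule -DaysofWeek Monday,Wednesday,Friday -TimesofDay 21:00
        Set-OBSchedule -Policy $policy -Schedule $schedule

        # Include the folders to protect (hypothetical path)
        $files = New-OBFileSpec -FileSpec "C:\ImportantData"
        Add-OBFileSpec -Policy $policy -FileSpec $files

        # Keep recovery points for 30 days, then make the policy active
        $retention = New-OBRetentionPolicy -RetentionDays 30
        Set-OBRetentionPolicy -Policy $policy -RetentionPolicy $retention
        Set-OBPolicy -Policy $policy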
    Below are some of the key benefits the Windows Azure Backup Service provides:

    Simple configuration and management. Windows Azure Backup Service integrates with the familiar Windows Server Backup utility in Windows Server, the Data Protection Manager component in System Center, and Windows Server Essentials, to provide a seamless backup and recovery experience to a local disk or to the cloud.

    Block-level incremental backups. The Windows Azure Backup Agent performs incremental backups by tracking file and block-level changes and only transferring the changed blocks, reducing storage and bandwidth utilization. Different point-in-time versions of the backups use storage efficiently by only storing the changed blocks between these versions.

    Data compression, encryption and throttling. The Windows Azure Backup Agent ensures that data is compressed and encrypted on the server before being sent to the Windows Azure Backup Service over the network. As a result, the service only stores encrypted data in cloud storage. The encryption key is not available to the service, so the data is never decrypted in the service. Users can also set up throttling to configure how the service uses network bandwidth when backing up or restoring information.

    Data integrity verified in the cloud. In addition to secure backups, the backed-up data is automatically checked for integrity once the backup is done. Any corruption that arises during data transfer can therefore be easily identified and is fixed automatically.

    Configurable retention policies for storing data in the cloud. The Windows Azure Backup Service accepts and implements retention policies to recycle backups that exceed the desired retention range, thereby meeting business policies and managing backup costs.

    Hyper-V Recovery Manager: Now Available in Public Preview

    I'm excited to also announce the public preview of a new Windows Azure service – the Windows Azure Hyper-V Recovery Manager (HRM).

    Windows Azure Hyper-V Recovery Manager helps protect your business-critical services by coordinating the replication and recovery of System Center Virtual Machine Manager 2012 SP1 and System Center Virtual Machine Manager 2012 R2 private clouds at a secondary location. With automated protection, asynchronous ongoing replication, and orderly recovery, the Hyper-V Recovery Manager service can help you implement disaster recovery and restore important services accurately, consistently, and with minimal downtime.

    Application data in a Hyper-V Recovery Manager scenario always travels on your on-premises replication channel. Only the metadata needed for orchestration (such as names of logical clouds, virtual machines, and networks) is sent to Azure, and all traffic sent to/from Azure is encrypted.

    You can begin using Windows Azure Hyper-V Recovery Manager today by clicking New->Data Services->Recovery Services->Hyper-V Recovery Manager within the Windows Azure Management Portal. You can read more about Windows Azure Hyper-V Recovery Manager in Brad Anderson's 9-part series, Transform the datacenter. To learn more about setting up Hyper-V Recovery Manager, follow our detailed step-by-step guide.

    Virtual Machines: Delete Attached Disks, Availability Set Warnings, SQL AlwaysOn

    Today's Windows Azure release includes a number of nice updates to Windows Azure Virtual Machines.
    These improvements include:

    Ability to Delete both VM Instances + Attached Disks in One Operation

    Prior to today's release, when you deleted VMs within Windows Azure we would delete the VM instance – but not the drives attached to the VM. You had to delete these yourself from the storage account. With today's update we've added a convenience option that allows you to either retain or delete the attached disks when you delete the VM.

    We've also added the ability to delete a cloud service, its deployments, and its role instances with a single action. This can either be a cloud service that has production and staging deployments with web and worker roles, or a cloud service that contains virtual machines. To do this, simply select the cloud service within the Windows Azure Management Portal and click the "Delete" button.

    Warnings on Availability Sets with Only One Virtual Machine in Them

    One of the nice features that Windows Azure Virtual Machines supports is the concept of "availability sets". An availability set allows you to define a tier/role (e.g. web front-ends, database servers, etc.) that you can map virtual machines into – and when you do this, Windows Azure separates them across fault domains and ensures that at least one of them is always available during servicing operations. This enables you to deploy applications in a highly available way.

    One issue we've seen some customers run into is that they define an availability set, but then forget to map more than one VM into it (which defeats the purpose of having an availability set). With today's release, the Windows Azure Management Portal now displays a warning if you have only one virtual machine deployed in an availability set, to help highlight this. You can learn more about configuring the availability of your virtual machines here.

    Configuring SQL Server AlwaysOn

    SQL Server AlwaysOn is a great feature that you can use with Windows Azure to enable high availability and DR scenarios with SQL Server. Today's Windows Azure release makes it even easier to configure SQL Server AlwaysOn by enabling "Direct Server Return" endpoints to be configured and managed within the Windows Azure Management Portal. Previously, setting this up required using PowerShell to complete the endpoint configuration. Starting today you can enable it simply by checking the "Direct Server Return" checkbox. You can learn more about how to use Direct Server Return for SQL Server AlwaysOn availability groups here.
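    For reference, here is roughly what the PowerShell-only approach looked like before this portal support. This is a sketch assuming the Azure Service Management cmdlets; the cloud service name, VM name, and endpoint name are made up for the example.

        # A minimal sketch: add a Direct Server Return endpoint for a SQL Server
        # AlwaysOn availability group listener (names below are hypothetical).
        Get-AzureVM -ServiceName "mycloudservice" -Name "sqlvm1" |
            Add-AzureEndpoint -Name "AGListener" -Protocol tcp `
                -LocalPort 1433 -PublicPort 1433 -DirectServerReturn $true |
            Update-AzureVM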
    Active Directory: Application Access Enhancements

    This summer we released our initial preview of the Application Access Enhancements for Windows Azure Active Directory. This service enables you to securely implement single sign-on (SSO) against SaaS applications (including Office 365, SalesForce, Workday, Box, Google Apps, GitHub, etc.) as well as LOB applications (including ones built with the new Windows Azure AD support we shipped last week with ASP.NET and VS 2013).

    Since the initial preview we've enhanced our SAML federation capabilities, integrated our new password vaulting system, and shipped multi-factor authentication support. We've also turned on our outbound identity provisioning system and have it working with hundreds of additional SaaS applications.

    Earlier this month we published an update on dates and pricing for when the service will be released in general availability form. In that blog post we announced our intention to release the service in general availability form by the end of the year. We also announced that the following features would be available in a free tier:

    SSO to every SaaS app we integrate with – Users can single sign on to any app we are integrated with at no charge. This includes all the top SaaS apps and every app in our application gallery, whether they use federation or password vaulting.

    Application access assignment and removal – IT admins can assign access privileges for web applications to the users in their Active Directory, assuring that every employee has access to the SaaS apps they need. And when a user leaves the company or changes jobs, the admin can just as easily remove their access privileges, assuring data security and minimizing IP loss.

    User provisioning (and de-provisioning) – IT admins will be able to automatically provision users in third-party SaaS applications like Box, Salesforce.com, GoToMeeting, DropBox and others. We are working with key partners in the ecosystem to establish these connections, meaning you no longer have to continually update user records in multiple systems.

    Security and auditing reports – Security is a key priority for us. With the free version of these enhancements you'll get access to our standard set of access reports, giving you visibility into which users are using which applications, when they were using them, and where they are using them from. In addition, we'll alert you to unusual usage patterns, for instance when a user logs in from multiple locations at the same time.

    Our Application Access Panel – Users are logging in from every type of device, including Windows, iOS and Android. Not all of these devices handle authentication in the same manner, but the user doesn't care – they need to access their apps from the devices they love. Our Application Access Panel will support the ability for users to access and launch their apps from any device and anywhere.

    You can learn more about our plans for application management with Windows Azure Active Directory here. Try out the preview and start using it today.

    Enterprise Management: Use Active Directory to Better Manage Windows Azure

    Windows Azure Active Directory provides the ability to manage your organization in a directory which is hosted entirely in the cloud, or alternatively kept in sync with an on-premises Windows Server Active Directory solution (allowing you to seamlessly integrate with the directory you already have). With today's Windows Azure release we are integrating Windows Azure Active Directory even more deeply within the core Windows Azure management experience, enabling an even richer enterprise security offering. Specifically:

    1) All Windows Azure accounts now have a default Windows Azure Active Directory created for them. You can create and map any users you want into this directory, and grant these users administrative rights to manage resources in Windows Azure.

    2) You can keep this directory entirely hosted in the cloud – or optionally sync it with your on-premises Windows Server Active Directory. Both options are free. The latter approach is ideal for companies that wish to use their corporate user identities to sign in and manage Windows Azure resources. It also ensures that if an employee leaves an organization, his or her access rights to the company's Windows Azure resources are immediately revoked.
    3) The Windows Azure Service Management APIs have been updated to support using Windows Azure Active Directory credentials to sign in and perform management operations. Prior to today's release, customers had to download and use management certificates (which were not scoped to individual users) to perform management operations. We still support this management certificate approach (don't worry – nothing will stop working), but we think the new Windows Azure Active Directory authentication support enables an even easier and more secure way for customers to manage resources going forward.

    4) The Windows Azure SDK 2.2 release (which is also shipping today) includes built-in support for the new Service Management APIs that authenticate with Windows Azure Active Directory, and now allows you to create and manage Windows Azure applications and resources directly within Visual Studio using your Active Directory credentials. This, combined with updated PowerShell scripts that also support Active Directory, enables an end-to-end enterprise authentication story with Windows Azure.

    Below are some details on how all of this works:

    Subscriptions within a Directory

    As part of today's update, we have associated all existing Windows Azure accounts with a Windows Azure Active Directory (and created one for you if you don't already have one). When you log in to the Windows Azure Management Portal you'll now see the directory name in the URI of the browser. For example, in the screenshot below you can see that I have a "scottgu" directory that my subscriptions are hosted within.

    Note that you can continue to use Microsoft Accounts (formerly known as Windows Live IDs) to sign in to Windows Azure. These map just fine to a Windows Azure Active Directory – so there is no need to create new usernames that are specific to a directory if you don't want to. In the scenario above I'm actually logged in using my @hotmail.com-based Microsoft account, which is now mapped to a "scottgu" Active Directory that was created for me. By default, everything will continue to work just like it used to.

    Manage your Directory

    You can manage an Active Directory (including the one we now create for you by default) by clicking the "Active Directory" tab on the left-hand side of the portal. This lists all of the directories in your account. Clicking one for the first time displays a getting-started page that provides documentation and links for performing common tasks with it.

    You can use the built-in directory management support within the Windows Azure Management Portal to add/remove/manage users within the directory, enable multi-factor authentication, associate a custom domain (e.g. mycompanyname.com) with the directory, and/or rename the directory to whatever friendly name you want (just click the Configure tab to do this). You can also set up the directory to automatically sync with an on-premises Active Directory using the "Directory Integration" tab.

    Note that users within a directory do not, by default, have admin rights to log in to or manage Windows Azure resources. You still need to explicitly grant them co-admin permissions on a subscription for them to log in or manage resources in Windows Azure. You can do this by clicking the Settings tab on the left-hand side of the portal and then clicking the Administrators tab within it.
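    To give a feel for the updated PowerShell support mentioned above, the sketch below signs in with Active Directory credentials instead of a management certificate. This is a sketch based on the Azure PowerShell cmdlets that ship alongside SDK 2.2; the subscription name is made up.

        # Sign in interactively with a Microsoft account or an organizational
        # (Windows Azure Active Directory) account - no management certificate needed.
        Add-AzureAccount

        # List the subscriptions your directory user can manage,
        # then pick one (the subscription name below is hypothetical).
        Get-AzureSubscription
        Select-AzureSubscription -SubscriptionName "My Production Subscription"

        # Subsequent service management calls now authenticate with your AD credentials.
        Get-AzureVM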
    Sign-In Integration within Visual Studio

    If you install the new Windows Azure SDK 2.2 release, you can now connect to Windows Azure from directly inside Visual Studio without having to download any management certificates. Just right-click on the "Windows Azure" icon within the Server Explorer and choose the "Connect to Windows Azure" context menu option.

    Doing this will prompt you to enter the email address of the username you wish to sign in with (make sure this account is a user in your directory with co-admin rights on a subscription). You can use either a Microsoft account (e.g. Windows Live ID) or an Active Directory based organizational account as the email. The dialog will update with an appropriate login prompt depending on which type of email address you enter.

    Once you sign in, the Windows Azure resources that you have permission to manage show up automatically within the Visual Studio Server Explorer and are available to start using. No downloading of management certificates required – all of the authentication was handled using your Windows Azure Active Directory!

    Manage Subscriptions across Multiple Directories

    If you already have multiple directories and multiple subscriptions within your Windows Azure account, we have done our best to create a good default mapping of your subscriptions to directories as part of today's update. If you don't like the default subscription-to-directory mapping, you can click the Settings tab in the left-hand navigation of the Windows Azure Management Portal and browse to the Subscriptions tab within it.

    If you want to map a subscription to a different directory in your account, simply select the subscription from the list, and then click the "Edit Directory" button to choose which directory to map it to. Mapping a subscription to a different directory takes only seconds and will not cause any of the resources within the subscription to recycle or stop working. We've made the directory-to-subscription mapping self-service so that you always have complete control and can map things however you want.

    Filtering by Directory and Subscription

    Within the Windows Azure Management Portal you can filter resources by subscription (allowing you to show/hide different subscriptions). If you have subscriptions mapped to multiple directory tenants, there is also now a filter drop-down that allows you to filter the subscription list by directory tenant. This filter is only available if you have multiple subscriptions mapped to multiple directories within your Windows Azure account.

    Windows Azure SDK 2.2

    Today we are also releasing a major update of our Windows Azure SDK. The Windows Azure SDK 2.2 release adds some great new features including:

    • Visual Studio 2013 support
    • Integrated Windows Azure sign-in support within Visual Studio
    • Remote debugging Cloud Services with Visual Studio
    • Firewall management support within Visual Studio for SQL Databases
    • Visual Studio 2013 RTM VM images for MSDN subscribers
    • Windows Azure Management Libraries for .NET
    • Updated Windows Azure PowerShell cmdlets and ScriptCenter

    I'll post a follow-up blog shortly with more details about all of the above.
    Additional Updates

    In addition to the above enhancements, today's release also includes a number of additional improvements:

    • AutoScale: Richer time and date based scheduling support (set different rules on different dates)
    • AutoScale: Ability to scale to zero virtual machines (very useful for dev/test scenarios)
    • AutoScale: Support for time-based scheduling of Mobile Services AutoScale rules
    • Operation Logs: Auditing support for Service Bus management operations

    Today we also shipped a major update to the Windows Azure SDK – Windows Azure SDK 2.2. It has so much goodness in it that I have a whole second blog post coming shortly on it! :-)

    Summary

    Today's Windows Azure release enables a bunch of great new scenarios, and enables a much richer enterprise authentication offering.

    If you don't already have a Windows Azure account, you can sign up for a free trial and start using all of the above features today. Then visit the Windows Azure Developer Center to learn more about how to build apps with it.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu


  • AngularJS on top of ASP.NET: Moving the MVC framework out to the browser

    - by Varun Chatterji
    Heavily drawing inspiration from Ruby on Rails, MVC4's convention-over-configuration model of development soon became the Holy Grail of .NET web development. The MVC model brought with it the goodness of proper separation of concerns between business logic, data, and presentation logic. However, the MVC paradigm was still one in which server-side .NET code could be mixed with presentation code. The Razor templating engine, though cleaner than its predecessors, still encouraged and allowed you to mix .NET server-side code with presentation logic. Thus, for example, if the developer required a certain <div> tag to be shown when a particular variable ShowDiv was true in the View's model, the code could look like the following:

    Fig 1: To show a div or not. Server-side .NET code is used in the View.

    Mixing .NET code with HTML in views can soon get very messy. Wouldn't it be nice if the presentation layer (HTML) could be pure HTML? Also, in the ASP.NET MVC model, some of the business logic invariably resides in the controller. It is tempting to use an anti-pattern like the one shown above to control whether a div should be shown or not. However, best practice indicates that the controller should not be aware of the div; the ShowDiv variable in the model should not exist. A controller should ideally only do the plumbing of getting the data populated in the model and nothing else. The view (ideally pure HTML) should render the presentation layer based on the model. In this article we will see how AngularJS, a JavaScript framework by Google, can be used effectively to build web applications where:

    1. Views are pure HTML
    2. Controllers (in the server sense) are pure REST based API calls
    3. The presentation layer is loaded as needed from partial HTML-only files

    What is MVVM?

    MVVM, short for Model View ViewModel, is a new paradigm in web development. In this paradigm, the Model and View exist on the client side through JavaScript instead of being processed on the server through postbacks. These JavaScript frameworks facilitate a clear separation of the "frontend" (the data rendering logic) from the "backend", which is typically just a REST based API that loads and processes data through a resource model. The frameworks are called MVVM because a change to the Model (through JavaScript) is reflected in the View immediately, i.e. Model > View; and a change in the View (through manual input) is reflected in the Model immediately, i.e. View > Model. The following figure shows this conceptually (comments are shown in red):

    Fig 2: Demonstration of MVVM in action

    In Fig 2, two text boxes are bound to the same variable model.myInt. Changing the view manually (changing one text box through keyboard input) also changes the other text box in real time, demonstrating the View > Model property of an MVVM framework. Furthermore, clicking the button adds 1 to the value of model.myInt, thus changing the model through JavaScript. This immediately updates the view (the value in the two text boxes), demonstrating the Model > View property of an MVVM framework. Thus we see that the model in an MVVM JavaScript framework can be regarded as "the single source of truth". This is an important concept. Angular is one such MVVM framework. We shall use it to build a simple app that sends SMS messages to a particular number.
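    Since Fig 2 is an image, here is a rough reconstruction of the markup it demonstrates – a minimal sketch, assuming the partial is rendered by a controller that defines model.myInt and addOne() (as the BindingController in Fig 6 later does):

        <!-- Two text boxes bound to the same model variable: View <-> Model -->
        <input type="text" ng-model="model.myInt" />
        <input type="text" ng-model="model.myInt" />

        <!-- Changing the model through JavaScript: Model -> View -->
        <button ng-click="addOne()">Add 1 to model.myInt</button>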
    Application, Routes, Views, Controllers, Scope and Models

    Angular can be used in many ways to construct web applications. For this article, we shall focus only on building Single Page Applications (SPAs). Many of the approaches we will follow have alternatives. It is beyond the scope of this article to explain every nuance in detail, but we shall try to touch upon the basic concepts and end up with a working application that can be used to send SMS messages using Sent.ly Plus (a service that is itself built using Angular).

    Before you read on, we would like to urge you to forget what you know about Models, Views, Controllers and Routes in the ASP.NET MVC4 framework. All these words have different meanings in the Angular world. Whenever these words are used in this article, they will refer to Angular concepts and not ASP.NET MVC4 concepts.

    The following figure shows the skeleton of the root page of an SPA:

    Fig 3: The skeleton of a SPA

    The skeleton of the application is based on the Bootstrap starter template, which can be found at http://getbootstrap.com/examples/starter-template/. Apart from loading the Angular, jQuery and Bootstrap JavaScript libraries, it also loads our custom scripts:

    /app/js/controllers.js
    /app/js/app.js

    These scripts define the routes, views and controllers which we shall come to in a moment.

    Application

    Notice that the body tag (Fig 3) has an extra attribute: ng-app="smsApp". Providing this attribute "bootstraps" our single page application. It tells Angular to load a "module" called smsApp. This module is defined in /app/js/app.js:

        angular.module('smsApp', ['smsApp.controllers', function () {}])

    Fig 4: The definition of our application module

    The line shown above declares a module called smsApp. It also declares that this module "depends" on another module called "smsApp.controllers". The smsApp.controllers module will contain all the controllers for our SPA.

    Routing and Views

    Notice that in the navbar (Fig 3) we have included two hyperlinks, to "#/app" and "#/help". This is how Angular handles routing. Since the URLs start with "#", they are actually just bookmarks (and not server-side resources). However, our route definition (in /app/js/app.js) gives these URLs a special meaning within the Angular framework:

        angular.module('smsApp', ['smsApp.controllers', function () { }])
            //Configure the routes
            .config(['$routeProvider', function ($routeProvider) {
                $routeProvider.when('/binding', {
                    templateUrl: '/app/partials/bindingexample.html',
                    controller: 'BindingController'
                });
            }]);

    Fig 5: The definition of a route with an associated partial view and controller

    As we can see from the previous code sample, we are using the $routeProvider object in the configuration of our smsApp module. Notice how the code "asks for" the $routeProvider object by specifying it as a dependency in the [] braces and then defining a function that accepts it as a parameter. This is known as dependency injection. Please refer to the following link if you want to delve into this topic: http://docs.angularjs.org/guide/di

    What the above code snippet does is tell Angular that when the URL is "#/binding", it should load the HTML snippet ("partial view") found at /app/partials/bindingexample.html. Also, for this URL, Angular should load the controller called "BindingController". We have also marked the div with the class "container" (in Fig 3) with the ng-view attribute. This attribute tells Angular that views (partial HTML pages) defined in the routes will be loaded within this div.
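    Fig 3 is also an image; pulling the pieces above together, the root page skeleton looks roughly like the following sketch. The script and stylesheet paths and the exact navbar markup are assumptions based on the Bootstrap starter template; only ng-app, ng-view, the two route links, and the two custom script files come from the article.

        <!DOCTYPE html>
        <html>
        <head>
            <title>SMS App</title>
            <link rel="stylesheet" href="/Content/bootstrap.min.css" />
        </head>
        <!-- ng-app bootstraps the Angular module defined in /app/js/app.js -->
        <body ng-app="smsApp">
            <div class="navbar navbar-inverse navbar-fixed-top">
                <ul class="nav navbar-nav">
                    <li><a href="#/app">App</a></li>
                    <li><a href="#/help">Help</a></li>
                </ul>
            </div>

            <!-- Partial views from the route definitions are rendered here -->
            <div class="container" ng-view></div>

            <script src="/Scripts/jquery.min.js"></script>
            <script src="/Scripts/angular.js"></script>
            <script src="/Scripts/bootstrap.min.js"></script>
            <script src="/app/js/controllers.js"></script>
            <script src="/app/js/app.js"></script>
        </body>
        </html>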
    You can see that the Angular JavaScript framework, unlike many other frameworks, works purely by extending HTML tags and attributes. It also allows you to extend HTML with your own tags and attributes (through directives) if you so desire. You can find out more about directives at the following URL: http://www.codeproject.com/Articles/607873/Extending-HTML-with-AngularJS-Directives

    Controllers and Models

    We have seen how we define which views and controllers should be loaded for a particular route. Let us now consider how controllers are defined. Our controllers are defined in the file /app/js/controllers.js. The following snippet shows the definition of the "BindingController", which is loaded when we hit the URL http://localhost:port/index.html#/binding (as defined in the route shown in Fig 5). Remember that we declared that our application module "smsApp" depends on the "smsApp.controllers" module (see Fig 4). The code snippet below shows how the "BindingController" is defined in the module smsApp.controllers:

        angular.module('smsApp.controllers', [function () { }])
            .controller('BindingController', ['$scope', function ($scope) {
                $scope.model = {};
                $scope.model.myInt = 6;
                $scope.addOne = function () {
                    $scope.model.myInt++;
                }
            }]);

    Fig 6: The definition of a controller in the "smsApp.controllers" module

    The pieces are falling into place! Remember Fig 2? That was a partial view, loaded within the container div of the skeleton SPA shown in Fig 3. The route definition shown in Fig 5 also specified that the controller called "BindingController" (shown in Fig 6) was loaded when we loaded the URL http://localhost:22544/index.html#/binding. The button in Fig 2 was marked with the attribute ng-click="addOne()", which added 1 to the value of model.myInt. In Fig 6, we can see that this function is actually defined in the "BindingController".

    Scope

    We can see from Fig 6 that in the definition of "BindingController", we declared a dependency on $scope and then, as usual, defined a function which "asks for" $scope as per the dependency injection pattern. So what is $scope? As you might have guessed, a scope is a particular "address space" where variables and functions may be defined. This has a similar meaning to scope in a programming language like C#.

    Model: The Scope is not the Model

    It is tempting to assign variables in the scope directly. For example, we could have defined myInt as $scope.myInt = 6 in Fig 6 instead of $scope.model.myInt = 6. The reason this is a bad idea is that scope is hierarchical in Angular. If we were to define a controller within another controller (nested controllers), the inner controller would inherit the scope of the parent controller, following JavaScript prototypal inheritance. Let's say the parent controller defined a variable through $scope.myInt = 6. The child controller would inherit the scope through prototypal inheritance, which basically means that the child scope has a myInt variable that points to the parent scope's myInt variable. If we assigned the value of myInt in the parent, the child scope would see the same value, as the child scope's myInt variable points to the parent scope's myInt variable.
    However, if we were to assign the value of the myInt variable in the child scope, the link of that variable to the parent scope would be broken: the myInt variable in the child scope now points to the value 6 and not to the parent scope's myInt variable. But if we defined a variable model in the parent scope, the child scope would also have a model variable that points to the model variable in the parent scope. Updating the value of $scope.model.myInt in the parent scope changes the model variable seen by the child scope too, as the variable points to the model variable in the parent scope. Now, changing the value of $scope.model.myInt in the child scope ALSO changes the value in the parent scope. This is because the model reference in the child scope points to the model variable in the parent. We made no new assignment to the model variable in the child scope; we only changed an attribute of the model variable. Since the model variable (in the child scope) points to the model variable in the parent scope, we have successfully changed the value of myInt in the parent scope. Thus the value of $scope.model.myInt in the parent scope becomes the "single source of truth". This is a tricky concept, and it is therefore considered good practice to NOT rely on scope inheritance of bare scope variables. More info on prototypal inheritance in Angular can be found in the "JavaScript Prototypal Inheritance" section at the following URL: https://github.com/angular/angular.js/wiki/Understanding-Scopes
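    A condensed sketch of the pitfall described above, using two hypothetical nested controllers (assume ChildController's element sits inside ParentController's element via ng-controller, so its scope prototypally inherits the parent's):

        angular.module('scopeDemo', [])
            .controller('ParentController', ['$scope', function ($scope) {
                $scope.myInt = 6;            // a primitive directly on the scope
                $scope.model = { myInt: 6 }; // an object on the scope
            }])
            .controller('ChildController', ['$scope', function ($scope) {
                // Reading $scope.myInt here returns 6, via prototypal inheritance.
                // But assigning it creates a NEW property on the child scope,
                // hiding the parent's value from this controller onwards:
                $scope.myInt = 7;            // parent's myInt is still 6

                // With an object, no new assignment happens - we only change an
                // attribute of the inherited reference, so parent and child stay
                // in sync and model.myInt remains the single source of truth:
                $scope.model.myInt = 7;      // parent's model.myInt is now 7 too
            }]);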
    Building It: An AngularJS Application Using a .NET Web API Backend

    Now that we have a perspective on the basic components of an MVVM application built using Angular, let's build something useful: an application that can be used to send SMS messages to a given phone number. The following diagram describes the architecture of the application we are going to build:

    Fig 7: Broad application architecture

    We are going to add an HTML partial to our project. This partial will contain the form fields that accept the phone number and the message that needs to be sent as an SMS. It will also display all the messages that have previously been sent. All the executable code that runs on the occurrence of events (button clicks etc.) in the view resides in the controller. The controller interacts with the ASP.NET Web API through a REST based API to get the history of SMS messages, add a message, and so on. For simplicity, we will use an in-memory data structure for the purposes of creating this application. Thus, the tasks ahead of us are:

    1. Create the REST Web API with GET, PUT, POST and DELETE methods.
    2. Create the SmsView.html partial.
    3. Create the SmsController controller with methods that are called from the SmsView.html partial.
    4. Add a new route that loads the controller and the partial.

    1. Creating the REST Web API

    This is a simple task that should be quite straightforward to any .NET developer. The following listing shows our ApiController:

        public class SmsMessage
        {
            public string to { get; set; }
            public string message { get; set; }
        }

        public class SmsResource : SmsMessage
        {
            public int smsId { get; set; }
        }

        public class SmsResourceController : ApiController
        {
            public static Dictionary<int, SmsResource> messages = new Dictionary<int, SmsResource>();
            public static int currentId = 0;

            // GET api/<controller>
            public List<SmsResource> Get()
            {
                List<SmsResource> result = new List<SmsResource>();
                foreach (int key in messages.Keys)
                {
                    result.Add(messages[key]);
                }
                return result;
            }

            // GET api/<controller>/5
            public SmsResource Get(int id)
            {
                if (messages.ContainsKey(id))
                    return messages[id];
                return null;
            }

            // POST api/<controller>
            public List<SmsResource> Post([FromBody] SmsMessage value)
            {
                //Synchronize on messages so we don't have id collisions
                lock (messages)
                {
                    //Copy the posted message into a new resource (a direct cast
                    //from the posted SmsMessage would fail at runtime, since
                    //model binding creates an SmsMessage, not an SmsResource)
                    SmsResource res = new SmsResource { to = value.to, message = value.message };
                    res.smsId = currentId++;
                    messages.Add(res.smsId, res);
                    //SentlyPlusSmsSender.SendMessage(value.to, value.message);
                    return Get();
                }
            }

            // PUT api/<controller>/5
            public List<SmsResource> Put(int id, [FromBody] SmsMessage value)
            {
                //Synchronize on messages so we don't have id collisions
                lock (messages)
                {
                    if (messages.ContainsKey(id))
                    {
                        //Update the message
                        messages[id].message = value.message;
                        messages[id].to = value.to;
                    }
                    return Get();
                }
            }

            // DELETE api/<controller>/5
            public List<SmsResource> Delete(int id)
            {
                if (messages.ContainsKey(id))
                {
                    messages.Remove(id);
                }
                return Get();
            }
        }

    Once this class is defined, we should be able to access the Web API with a simple GET request from the browser: http://localhost:port/api/SmsResource

    Notice the commented line: //SentlyPlusSmsSender.SendMessage. The SentlyPlusSmsSender class is defined in the attached solution. We have shown this line as commented because we want to focus on the core Angular concepts. If you load the attached solution, this line is uncommented in the source and an actual SMS will be sent!

    By default, the API returns XML. For consumption of the API in Angular, we would like it to return JSON. To change the default to JSON, we make the following change to the WebApiConfig.cs file located in the App_Start folder:

        public static class WebApiConfig
        {
            public static void Register(HttpConfiguration config)
            {
                config.Routes.MapHttpRoute(
                    name: "DefaultApi",
                    routeTemplate: "api/{controller}/{id}",
                    defaults: new { id = RouteParameter.Optional }
                );

                var appXmlType = config.Formatters.XmlFormatter.SupportedMediaTypes
                    .FirstOrDefault(t => t.MediaType == "application/xml");
                config.Formatters.XmlFormatter.SupportedMediaTypes.Remove(appXmlType);
            }
        }

    We now have our backend REST API which we can consume from Angular!
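    With the XML formatter change in place, a GET against /api/smsresource returns the message list as JSON. Shaped by the SmsResource class above, a response with one stored message would look roughly like this (the values are made up):

        [
            { "to": "+44 7778 609466", "message": "Hello from Angular!", "smsId": 0 }
        ]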
    2. Creating the SmsView.html Partial

    This simple partial will define two fields: the destination phone number (in international format, starting with a +) and the message. These fields will be bound to model.phoneNumber and model.message. We will also add a button that we shall hook up to sendMessage() in the controller. A list of all previously sent messages (bound to model.allMessages) will also be displayed below the form input. The following code shows the partial:

        <!-- If model.errorMessage is defined, then render the error div -->
        <div class="alert alert-danger alert-dismissable" style="margin-top: 30px;"
             ng-show="model.errorMessage != undefined">
            <button type="button" class="close" data-dismiss="alert" aria-hidden="true">&times;</button>
            <strong>Error!</strong> <br /> {{ model.errorMessage }}
        </div>

        <!-- The input fields bound to the model -->
        <div class="well" style="margin-top: 30px;">
            <table style="width: 100%;">
                <tr>
                    <td style="width: 45%; text-align: center;">
                        <input type="text" placeholder="Phone number (e.g. +44 7778 609466)"
                               ng-model="model.phoneNumber" class="form-control" style="width: 90%"
                               onkeypress="return checkPhoneInput();" />
                    </td>
                    <td style="width: 45%; text-align: center;">
                        <input type="text" placeholder="Message" ng-model="model.message"
                               class="form-control" style="width: 90%" />
                    </td>
                    <td style="text-align: center;">
                        <button class="btn btn-danger" ng-click="sendMessage();"
                                ng-disabled="model.isAjaxInProgress" style="margin-right: 5px;">Send</button>
                        <img src="/Content/ajax-loader.gif" ng-show="model.isAjaxInProgress" />
                    </td>
                </tr>
            </table>
        </div>

        <!-- The past messages -->
        <div style="margin-top: 30px;">
            <!-- The following div is shown if there are no past messages -->
            <div ng-show="model.allMessages.length == 0">
                No messages have been sent yet!
            </div>
            <!-- The following div is shown if there are some past messages -->
            <div ng-show="model.allMessages.length != 0">
                <table style="width: 100%;" class="table table-striped">
                    <tr>
                        <td>Phone Number</td>
                        <td>Message</td>
                        <td></td>
                    </tr>
                    <!-- The ng-repeat directive is like the repeater control in .NET,
                         but as you can see this partial is pure HTML, which is much cleaner -->
                    <tr ng-repeat="message in model.allMessages">
                        <td>{{ message.to }}</td>
                        <td>{{ message.message }}</td>
                        <td>
                            <button class="btn btn-danger" ng-click="delete(message.smsId);"
                                    ng-disabled="model.isAjaxInProgress">Delete</button>
                        </td>
                    </tr>
                </table>
            </div>
        </div>

    The above code is commented and should be self-explanatory. Conditional rendering is achieved through the ng-show="condition" attribute on various div tags. Input fields are bound to the model, and the Send button is bound to the sendMessage() function in the controller through the ng-click="sendMessage()" attribute defined on the button tag. While AJAX calls are taking place, the controller sets model.isAjaxInProgress to true; based on this variable, buttons are disabled through the ng-disabled directive, which is added as an attribute to the buttons. The ng-repeat directive, added as an attribute to the tr tag, causes the table row to be rendered multiple times, much like an ASP.NET repeater.

    3. Creating the SmsController Controller

    The penultimate piece of our application is the controller that responds to events from our view and interacts with our MVC4 REST Web API. The following listing shows the code we need to add to /app/js/controllers.js. Note that controller definitions can be chained. Also note that this controller "asks for" the $http service. The $http service is a simple way in Angular to do AJAX. So far we have only encountered modules, controllers, views and directives in Angular; $http is a new kind of entity in Angular called a service. More information on Angular services can be found at the following URL: http://docs.angularjs.org/guide/dev_guide.services.understanding_services
        .controller('SmsController', ['$scope', '$http', function ($scope, $http) {
            //We define the model
            $scope.model = {};
            //We define the allMessages array in the model
            //that will contain all the messages sent so far
            $scope.model.allMessages = [];
            //The error message, if any
            $scope.model.errorMessage = undefined;
            //We initially load data, so set isAjaxInProgress = true
            $scope.model.isAjaxInProgress = true;

            //Load all the messages
            $http({ url: '/api/smsresource', method: "GET" }).
                success(function (data, status, headers, config) {
                    //this callback will be called asynchronously
                    //when the response is available
                    $scope.model.allMessages = data;
                    //We are done with AJAX loading
                    $scope.model.isAjaxInProgress = false;
                }).
                error(function (data, status, headers, config) {
                    //called asynchronously if an error occurs
                    //or the server returns a response with an error status
                    $scope.model.errorMessage = "Error occurred status:" + status;
                    //We are done with AJAX loading
                    $scope.model.isAjaxInProgress = false;
                });

            $scope.delete = function (id) {
                //We are making an ajax call so we set this to true
                $scope.model.isAjaxInProgress = true;
                $http({ url: '/api/smsresource/' + id, method: "DELETE" }).
                    success(function (data, status, headers, config) {
                        //this callback will be called asynchronously
                        //when the response is available
                        $scope.model.allMessages = data;
                        //We are done with AJAX loading
                        $scope.model.isAjaxInProgress = false;
                    }).
                    error(function (data, status, headers, config) {
                        //called asynchronously if an error occurs
                        //or the server returns a response with an error status
                        $scope.model.errorMessage = "Error occurred status:" + status;
                        //We are done with AJAX loading
                        $scope.model.isAjaxInProgress = false;
                    });
            }

            $scope.sendMessage = function () {
                $scope.model.errorMessage = undefined;
                var message = '';
                if ($scope.model.message != undefined)
                    message = $scope.model.message.trim();
                if ($scope.model.phoneNumber == undefined || $scope.model.phoneNumber == ''
                    || $scope.model.phoneNumber.length < 10 || $scope.model.phoneNumber[0] != '+') {
                    $scope.model.errorMessage = "You must enter a valid phone number in international format. Eg: +44 7778 609466";
                    return;
                }
                if (message.length == 0) {
                    $scope.model.errorMessage = "You must specify a message!";
                    return;
                }
                //We are making an ajax call so we set this to true
                $scope.model.isAjaxInProgress = true;
                $http({
                    url: '/api/smsresource',
                    method: "POST",
                    data: { to: $scope.model.phoneNumber, message: $scope.model.message }
                }).
                    success(function (data, status, headers, config) {
                        //this callback will be called asynchronously
                        //when the response is available
                        $scope.model.allMessages = data;
                        //We are done with AJAX loading
                        $scope.model.isAjaxInProgress = false;
                    }).
                    error(function (data, status, headers, config) {
                        //called asynchronously if an error occurs
                        //or the server returns a response with an error status
                        $scope.model.errorMessage = "Error occurred status:" + status;
                        //We are done with AJAX loading
                        $scope.model.isAjaxInProgress = false;
                    });
            }
        }]);

    We can see from the previous listing how the functions that are called from the view are defined in the controller. It should also be evident how easy it is to make AJAX calls to consume our MVC4 REST Web API. Now we are left with the final piece: we need to define a route that associates a particular path with the view and the controller we have defined.

    4. Add a New Route That Loads the Controller and the Partial

    This is the easiest part of the puzzle.
    We simply define another route in the /app/js/app.js file:

        $routeProvider.when('/sms', {
            templateUrl: '/app/partials/smsview.html',
            controller: 'SmsController'
        });

    Conclusion

    In this article we have seen how much of the server-side functionality in the MVC4 framework can be moved to the browser, delivering a snappy and fast user interface. We have seen how we can build client-side, HTML-only views that avoid the messy syntax offered by server-side Razor views, and we have built a functioning app from the ground up. The significant advantage of this approach to building web apps is that the front end can be completely platform independent. Even though we used ASP.NET to create our REST API, we could just as easily have used any other language such as Node.js, Ruby, etc. without changing a single line of our front-end code. Angular is a rich framework and we have only touched on the basic functionality required to create a SPA. For readers who wish to delve further into the Angular framework, we would recommend the following URL as a starting point: http://docs.angularjs.org/misc/started

    To get started with the code for this project:

    1. Sign up for an account at http://plus.sent.ly (free)
    2. Add your phone number
    3. Go to the "My Identities" page
    4. Note down your Sender ID, Consumer Key and Consumer Secret
    5. Download the code for this article at: https://docs.google.com/file/d/0BzjEWqSE31yoZjZlV0d0R2Y3eW8/edit?usp=sharing
    6. Change the values of Sender ID, Consumer Key and Consumer Secret in the web.config file
    7. Run the project through Visual Studio!


  • Scrum in 5 Minutes

    - by Stephen.Walther
    The goal of this blog entry is to explain the basic concepts of Scrum in less than five minutes. You learn how Scrum can help a team of developers to successfully complete a complex software project.

    Product Backlog and the Product Owner

    Imagine that you are part of a team which needs to create a new website – for example, an e-commerce website. You have an overwhelming amount of work to do. You need to build (or possibly buy) a shopping cart, install an SSL certificate, create a product catalog, create a Facebook page, and do at least a hundred other things that you have not thought of yet. According to Scrum, the first thing you should do is create a list. Place the highest priority items at the top of the list and the lower priority items lower in the list. For example, creating the shopping cart and buying the domain name might be high priority items, and creating a Facebook page might be a lower priority item. In Scrum, this list is called the Product Backlog.

    How do you prioritize the items in the Product Backlog? Different stakeholders in the project might have different priorities. Gary, your division VP, thinks that it is crucial that the e-commerce site has a mobile app. Sally, your direct manager, thinks taking advantage of new HTML5 features is much more important. Multiple people are pulling you in different directions. According to Scrum, it is important that you always designate one person, and only one person, as the Product Owner. The Product Owner is the person who decides what items should be added to the Product Backlog and the priority of the items in the Product Backlog. The Product Owner could be the customer who is paying the bills, the project manager who is responsible for delivering the project, or a customer representative. The critical point is that the Product Owner must always be a single person, and that single person has absolute authority over the Product Backlog.

    Sprints and the Sprint Backlog

    So now the developer team has a prioritized list of items and they can start work. The team starts implementing the first item in the backlog — the shopping cart — and they are making good progress. Unfortunately, however, halfway through the work of implementing the shopping cart, the Product Owner changes his mind. The Product Owner decides that it is much more important to create the product catalog before the shopping cart. With some frustration, the team switches their development efforts to focus on implementing the product catalog. However, partway through completing this work, once again the Product Owner changes his mind about the highest priority item.

    Getting work done when priorities are constantly shifting is frustrating for the developer team and results in lower productivity. At the same time, however, the Product Owner needs to have absolute authority over the priority of the items which need to get done. Scrum resolves this conflict with the concept of Sprints. In Scrum, a developer team works in Sprints. At the beginning of a Sprint, the developers and the Product Owner agree on the items from the backlog which they will complete during the Sprint. This subset of items from the Product Backlog becomes the Sprint Backlog. During the Sprint, the Product Owner is not allowed to change the items in the Sprint Backlog; in other words, the Product Owner cannot shift priorities on the developer team during the Sprint. Different teams use Sprints of different lengths, such as one-month Sprints, two-week Sprints, and one-week Sprints.
    For high-stress, time-critical projects, teams typically choose shorter Sprints, such as one-week Sprints. For more mature projects, longer one-month Sprints might be more appropriate. A team can pick whatever Sprint length makes sense for them, just as long as the team is consistent: you should pick a Sprint length and stick with it.

    Daily Scrum

    During a Sprint, the developer team needs to have meetings to coordinate their work on completing the items in the Sprint Backlog. For example, the team needs to discuss who is working on what and whether any blocking issues have been discovered. Developers hate meetings (well, sane developers hate meetings). Meetings take developers away from their work of actually implementing stuff, as opposed to talking about implementing stuff. However, a developer team which never has meetings and never coordinates their work also has problems. For example, Fred might get stuck on a programming problem for days and never reach out for help, even though Tom (who sits in the cubicle next to him) has already solved the very same problem. Or both Ted and Fred might have started working on the same item from the Sprint Backlog at the same time.

    In Scrum, these conflicting needs – limiting meetings but enabling team coordination – are resolved with the idea of the Daily Scrum. The Daily Scrum is a meeting for coordinating the work of the developer team which happens once a day. To keep the meeting short, each developer answers only the following three questions:

    1. What have you done since yesterday?
    2. What do you plan to do today?
    3. Any impediments in your way?

    During the Daily Scrum, developers are not allowed to talk about issues with their cat, do demos of their latest work, or tell heroic stories of programming problems overcome. The meeting must be kept short — typically about 15 minutes. Issues which come up during the Daily Scrum should be discussed in separate meetings which do not involve the whole developer team.

    Stories and Tasks

    Items in the Product or Sprint Backlog – such as building a shopping cart or creating a Facebook page – are often referred to as User Stories or Stories. The Stories are created by the Product Owner and should represent some business need. Unlike the Product Owner, the developer team needs to think about how a Story should be implemented. At the beginning of a Sprint, the developer team takes the Stories from the Sprint Backlog and breaks them into tasks. For example, the developer team might take the Create a Shopping Cart story and break it into the following tasks:

    • Enable users to add and remove items from the shopping cart
    • Persist the shopping cart to the database between visits
    • Redirect the user to the checkout page when the Checkout button is clicked

    During the Daily Scrum, members of the developer team volunteer to complete the tasks required to implement the next Story in the Sprint Backlog. When a developer talks about what he did yesterday or plans to do today, he should be referring to a task. Stories are owned by the Product Owner, and a story is all about business value. In contrast, tasks are owned by the developer team, and a task is all about implementation details. A story might take several days or weeks to complete; a task is something which a developer can complete in less than a day. Some teams get lazy about breaking stories into tasks.
    Neglecting to break stories into tasks can lead to "Never Ending Stories". If you don't break a story into tasks, then you can't know how much of a story has actually been completed, because you don't have a clear idea of the implementation steps required to complete it.

    Scrumboard

    During the Daily Scrum, the developer team uses a Scrumboard to coordinate their work. A Scrumboard contains a list of the stories for the current Sprint, the tasks associated with each Story, and the state of each task. The developer team uses the Scrumboard so everyone on the team can see, at a glance, what everyone is working on. As a developer works on a task, the task moves from state to state and its state is updated on the Scrumboard. Common task states are To Do, In Progress, and Done. Some teams include additional task states such as Needs Review or Needs Testing.

    Some teams use a physical Scrumboard. In that case, you use index cards to represent the stories and the tasks, and you tack the index cards onto a physical board. Using a physical Scrumboard has several disadvantages. A physical Scrumboard does not work well with a distributed team – for example, it is hard to share the same physical Scrumboard between Boston and Seattle. Also, generating reports from a physical Scrumboard is more difficult than generating reports from an online Scrumboard.

    Estimating Stories and Tasks

    Stakeholders in a project – the people investing in it – need to have an idea of how the project is progressing and when it will be completed. For example, if you are investing in creating an e-commerce site, you need to know when the site can be launched. It is not enough to just say that "the project will be done when it is done", because the stakeholders almost certainly have a limited budget to devote to the project. The people investing in the project cannot determine its business value unless they can have an estimate of how long it will take to complete.

    Developers hate to give estimates. The reason is that estimates are almost always completely made up. For example, you really don't know how long it takes to build a shopping cart until you finish building one, and at that point the estimate is no longer useful. The problem is that writing code is much more like finding a cure for cancer than building a brick wall. Building a brick wall is very straightforward: after you learn how to add one brick to a wall, you understand everything that is involved in adding a brick to a wall. There is no additional research required and no surprises. If, on the other hand, I assembled a team of scientists and asked them to find a cure for cancer and to estimate exactly how long it will take, they would have no idea. There are too many unknowns. I don't know how to cure cancer; I need to do a lot of research; so I cannot even begin to estimate how long it will take.

    So developers hate to provide estimates, but the Product Owner and other product stakeholders have a legitimate need for them. Scrum resolves this conflict by using the idea of Story Points. Different teams use different units to represent Story Points. For example, some teams use shirt sizes such as Small, Medium, Large, and X-Large. Some teams prefer to use coffee cup sizes such as Tall, Short, and Grande. Finally, some teams like to use numbers from the Fibonacci series.
These alternative units are converted into a Story Point value. Regardless of the type of unit which you use to represent Story Points, the goal is the same. Instead of attempting to estimate a Story in hours (which is doomed to failure), you use a much less fine-grained measure of work. A developer team is much more likely to be able to estimate that a Story is Small or X-Large than the exact number of hours required to complete the story. So you can think of Story Points as a compromise between the needs of the Product Owner and the developer team.

When a Sprint starts, the developer team devotes more time to thinking about the Stories in the Sprint, and the developer team breaks the Stories into Tasks. In Scrum, you estimate the work required to complete a Story by using Story Points, and you estimate the work required to complete a task by using hours. The difference between Stories and Tasks is that you don't create a task until you are just about ready to start working on it. A task is something that you should be able to complete within a day, so you have a much better chance of providing an accurate estimate of the work required to complete a task than a story.

Burndown Charts

In Scrum, you use Burndown charts to represent the remaining work on a project. You use Release Burndown charts to represent the overall remaining work for a project, and you use Sprint Burndown charts to represent the remaining work for a particular Sprint. You create a Release Burndown chart by calculating the remaining number of uncompleted Story Points for the entire Product Backlog every day. The vertical axis represents Story Points and the horizontal axis represents time.

A Sprint Burndown chart is similar to a Release Burndown chart, but it focuses on the remaining work for a particular Sprint. There are two different types of Sprint Burndown charts: you can represent the remaining work in a Sprint with either Story Points or task hours (the example chart on Wikipedia, for instance, uses hours). When a Product Backlog Story is completed, the Release Burndown chart slopes down. When a Story or task is completed, the Sprint Burndown chart slopes down.

Burndown charts do not always slope down over time, however. As new work is added to the Product Backlog, the Release Burndown chart slopes up. If new tasks are discovered during a Sprint, the Sprint Burndown chart will also slope up. The purpose of a Burndown chart is to give you a way to track team progress over time. If, halfway through a Sprint, the Sprint Burndown chart is still climbing, then you know that you are in trouble.

Team Velocity

Stakeholders in a project always want more work done faster. For example, the Product Owner for the e-commerce site wants the website to launch before tomorrow. Developers tend to be overly optimistic. Rarely do developers acknowledge the physical limitations of reality. So project stakeholders and the developer team often collude to delude themselves about how much work can be done and how quickly. Too many software projects begin in a state of optimism and end in frustration as deadlines zoom by. In Scrum, this problem is overcome by calculating a number called the Team Velocity. The Team Velocity is a measure of the average number of Story Points which a team has completed in previous Sprints.
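Since velocity is just an average, a minimal JavaScript sketch of the calculation might look like this (the sprint history numbers are hypothetical):

// Team Velocity: the average number of Story Points completed per Sprint.
// sprintHistory is a hypothetical array of completed points, oldest first.
function teamVelocity(sprintHistory) {
    if (sprintHistory.length === 0) return 0;
    var total = 0;
    for (var i = 0; i < sprintHistory.length; i++) {
        total += sprintHistory[i];
    }
    return total / sprintHistory.length;
}

// Example: the team completed 18, 22, and 20 points in its last three Sprints.
var velocity = teamVelocity([18, 22, 20]); // 20 Story Points per Sprint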
Knowing the Team Velocity is important during the Sprint Planning meeting, when the Product Owner and the developer team work together to determine the number of stories which can be completed in the next Sprint. If you know the Team Velocity, then you can avoid committing to do more work than the team has been able to accomplish in the past, and your team is much more likely to complete all of the work required for the next Sprint.

Scrum Master

There are three roles in Scrum: the Product Owner, the developer team, and the Scrum Master. I've already discussed the Product Owner. The Product Owner is the one and only person who maintains the Product Backlog and prioritizes the stories. I've also described the role of the developer team. The members of the developer team do the work of implementing the stories by breaking the stories into tasks.

The final role, which I have not discussed, is the role of the Scrum Master. The Scrum Master is responsible for ensuring that the team is following the Scrum process. For example, the Scrum Master is responsible for making sure that there is a Daily Scrum meeting and that everyone answers the standard three questions. The Scrum Master is also responsible for removing (non-technical) impediments which the team might encounter. For example, if the team cannot start work until everyone installs the latest version of Microsoft Visual Studio, then the Scrum Master has the responsibility of working with management to get the latest version of Visual Studio as quickly as possible. The Scrum Master can be a member of the developer team. Furthermore, different people can take on the role of the Scrum Master over time. The Scrum Master, however, cannot be the same person as the Product Owner.

Using SonicAgile

SonicAgile (SonicAgile.com) is an online tool which you can use to manage your projects using Scrum. You can use the SonicAgile Product Backlog to create a prioritized list of stories. You can estimate the size of the Stories using different Story Point units such as Shirt Sizes and Coffee Cup sizes. You can use SonicAgile during the Sprint Planning meeting to select the Stories that you want to complete during a particular Sprint. You can configure Sprints to be any length of time. SonicAgile calculates Team Velocity automatically and displays a warning when you add too many stories to a Sprint. In other words, it warns you when it thinks you are overcommitting in a Sprint.

SonicAgile also includes a Scrumboard which displays the list of Stories selected for a Sprint and the tasks associated with each story. You can drag tasks from one task state to another. Finally, SonicAgile enables you to generate Release Burndown and Sprint Burndown charts. You can use these charts to view the progress of your team. To learn more about SonicAgile, visit SonicAgile.com.

Summary

In this post, I described many of the basic concepts of Scrum. You learned how a Product Owner uses a Product Backlog to create a prioritized list of stories. I explained why work is completed in Sprints so the developer team can be more productive. I also explained how a developer team uses the Daily Scrum to coordinate its work. You learned how the developer team uses a Scrumboard to see, at a glance, who is working on what and the state of each task. I also discussed Burndown charts. You learned how you can use both Release and Sprint Burndown charts to track team progress in completing a project.
Finally, I described the crucial role of the Scrum Master – the person who is responsible for ensuring that the rules of Scrum are being followed. My goal was not to describe all of the concepts of Scrum. This post was intended to be an introductory overview. For a comprehensive explanation of Scrum, I recommend reading Ken Schwaber’s book Agile Project Management with Scrum: http://www.amazon.com/Agile-Project-Management-Microsoft-Professional/dp/073561993X/ref=la_B001H6ODMC_1_1?ie=UTF8&qid=1345224000&sr=1-1

    Read the article

  • Using HTML 5 SessionState to save rendered Page Content

    - by Rick Strahl
HTML 5 SessionState and LocalStorage are very useful and super easy to use to manage client side state. For building rich client side or SPA style applications it's a vital feature to be able to cache user data as well as HTML content in order to swap pages in and out of the browser's DOM. What might not be so obvious is that you can also use the sessionStorage and localStorage objects even in classic server rendered HTML applications to provide caching features between pages. These APIs have been around for a long time and are supported by most relatively modern browsers, all the way back to IE8, so you can use them safely in your Web applications.

SessionState and LocalStorage are easy

The APIs that make up sessionStorage and localStorage are very simple. Both objects feature the same API interface, which is a simple, string based key value store with getItem, setItem, removeItem, clear and key methods. The objects are also pseudo array objects and so can be iterated like an array: there's a length property and you have array indexers to set and get values with. Basic usage for storing and retrieval looks like this (using sessionStorage, but the syntax is the same for localStorage - just switch the objects):

// set
var lastAccess = new Date().getTime();
if (sessionStorage)
    sessionStorage.setItem("myapp_time", lastAccess.toString());

// retrieve in another page or on a refresh
var time = null;
if (sessionStorage)
    time = sessionStorage.getItem("myapp_time");
if (time)
    time = new Date(time * 1);
else
    time = new Date();

sessionStorage stores data that is browser session specific and that has a lifetime of the active browser session or window. Shut down the browser or tab and the storage goes away. localStorage uses the same API interface, but the data is stored permanently in the browser's storage area until deleted via code or by clearing out browser cookies (not the cache).

Space for both sessionStorage and localStorage is limited. The spec is ambiguous about this - supposedly sessionStorage should allow for unlimited size, but it appears that most WebKit browsers support only 2.5mb for either object. This means you have to be careful what you store, especially since other applications might be running on the same domain and also use the storage mechanisms. That said, 2.5mb worth of character data is quite a bit and would go a long way.

The easiest way to get a feel for how sessionStorage and localStorage work is to look at a simple example. You can check out the following example online in Plunker: http://plnkr.co/edit/0ICotzkoPjHaWa70GlRZ?p=preview. Plunker is an online HTML/JavaScript editor that lets you write and run JavaScript code and is similar to JsFiddle, but a bit cleaner to work in IMHO (thanks to John Papa for turning me on to it). The sample has two text boxes with counts that update session/local storage every time you click the related button. The counts are 'cached' in Session and Local storage. The point of these examples is that both counters survive full page reloads, and the LocalStorage counter survives a complete browser shutdown and restart. Go ahead and try it out by clicking the Reload button after updating both counters and then shutting down the browser completely and going back to the same URL (with the same browser). What you should see is that reloads leave both counters intact at the counted values, while a browser restart will leave only the local storage counter intact.
The code to deal with the SessionStorage (LocalStorage is not shown here) in the example is isolated into a couple of wrapper methods to simplify the code:

function getSessionCount() {
    var count = 0;
    if (sessionStorage) {
        count = sessionStorage.getItem("ss_count");
        count = !count ? 0 : count * 1;
    }
    $("#txtSession").val(count);
    return count;
}

function setSessionCount(count) {
    if (sessionStorage)
        sessionStorage.setItem("ss_count", count.toString());
}

These two functions essentially load and store a session counter value. The two key methods used here are:

sessionStorage.getItem(key);
sessionStorage.setItem(key, stringVal);

Note that the value given to setItem and returned by getItem has to be a string. If you pass another type you get an error. Don't let that limit you though - you can easily enough serialize data to JSON, so it's quite possible to store complex objects in a single sessionStorage value:

var user = { name: "Rick", id: "ricks", level: 8 };
sessionStorage.setItem("app_user", JSON.stringify(user));

to retrieve it:

var user = sessionStorage.getItem("app_user");
if (user)
    user = JSON.parse(user);

Simple!

If you're using the Chrome Developer Tools (F12) you can also check out the session and local storage state on the Resource tab. You can also use this tool to refresh or remove entries from storage.

What we just looked at is a purely client side implementation where a couple of counters are stored. For rich client centric AJAX applications sessionStorage and localStorage provide a very nice and simple API to store application state while the application is running. But you can also use these storage mechanisms to manage server centric HTML applications when you combine server rendering with some JavaScript to perform client side data caching. You can either store some state information and data on the client (ie. store a JSON object and carry it forth between server rendered HTML requests), or you can use it for good old HTTP based caching where some rendered HTML is saved and then restored later. Let's look at the latter with a real life example.

Why do I need Client-side Page Caching for Server Rendered HTML?

I don't know about you, but in a lot of my existing server driven applications I have lists that display a fair amount of data. Typically these lists contain links to then drill down into more specific data, either for viewing or editing. You can then click on a link and go off to a detail page that provides more concise content. So far so good. But now you're done with the detail page and need to get back to the list, so you click on a 'bread crumbs trail' or an application level 'back to list' button and… …you end up back at the top of the list - the scroll position, the current selection, in some cases even filter conditions - all gone with the wind. You've left behind the state of the list and are starting from scratch in your browsing of the list from the top. Not cool!

Sound familiar? This is a pretty common scenario with server rendered HTML content, where it's so common to display lists to drill into, only to lose state in the process of returning back to the original list. Look at just about any traditional forums application, or even StackOverFlow, to see what I mean here. Scroll down a bit to look at a post or entry, drill in, then use the bread crumbs or tab to go back… In some cases returning to the top of a list is not a big deal.
On StackOverFlow that sort of works, because content is turning around so quickly you probably want to actually look at the top posts. Not always though - if you're browsing through a list of search topics you're interested in and drill in, there's no way back to that position. Essentially, anytime you're actively browsing the items in the list, that's when state becomes important, and if it's not handled the user experience can be really disruptive.

Content Caching

If you're building client centric SPA style applications this is a fairly easy problem to solve - you tend to render the list once and then update the page content to overlay the detail content, only hiding the list temporarily until it's used again later. It's relatively easy to accomplish this simply by hiding content on the page and later making it visible again. But if you use server rendered content, hanging on to all the detail like filters, selections and scroll position is not quite as easy. Or is it???

This is where sessionStorage comes in handy. What if we just save the rendered content of a previous page, and then restore it when we return to this page based on a special flag that tells us to use the cached version? Let's see how we can do this.

A real World Use Case

Recently my local ISP asked me to help out with updating an ancient classifieds application. They had a very busy, local classifieds app that was originally an ASP classic application. The old app was - wait for it - frames based, and even though I lobbied against it, the decision was made to keep the frames based layout to allow rapid browsing of the hundreds of posts that are made on a daily basis. The primary reason they wanted this was precisely for the ability to quickly browse content item by item. While I personally hate working with Frames, I have to admit that the UI actually works well with the frames layout, as long as you're running on a large desktop screen. You can check out the frames based desktop site here: http://classifieds.gorge.net/

However, when I rebuilt the app I also added a secondary view that doesn't use frames. The main reason for this of course was mobile displays, which work horribly with frames. So there's a somewhat mobile friendly version of the interface, which ditches the frames and uses some responsive design tweaking for mobile capable operation: http://classifeds.gorge.net/mobile (or browse the base url with your browser width under 800px)

In the mobile, non-frames view the list of classifieds posts is just that - a list - and there's a separate page for drilling down into each item. And of course… originally we ran into that usability issue I mentioned earlier, where the browse, view detail, go back to the list cycle resulted in lost list state. Originally in mobile mode you scrolled through the list, found an item to look at and drilled in to display the item detail. Then you clicked back to the list and BAM - you've lost your place. Because there are so many items added on a daily basis the full list is never fully loaded; rather, there's a "Load Additional Listings" entry at the bottom. Not only did we originally lose our place when coming back to the list, but any 'additionally loaded' items were no longer there, because the list was now rendering as if it was the first page hit. The additional listings, any filters, the selection of an item - all were lost. Major Suckage!
Using Client SessionStorage to cache Server Rendered Content

To work around this problem I decided to cache the rendered page content from the list in SessionStorage. Anytime the list renders or is updated with Load Additional Listings, the page HTML is cached and stored in Session Storage. Any back links from the detail page or the login or write entry forms then point back to the list page with a back=true query string parameter. If the server side sees this parameter, it doesn't render the part of the page that is cached. Instead, the client side code retrieves the data from the sessionStorage cache and simply inserts it into the page. It sounds pretty simple, and overall the process is really easy, but there are a few gotchas that I'll discuss in a minute. But first let's look at the implementation.

Let's start with the server side, because that'll give a quick idea of the doc structure. As I mentioned, the server renders data from an ASP.NET MVC view. The URL used when returning to the list page from the display page (or a host of other pages) looks like this:

https://classifieds.gorge.net/list?back=True

The query string value is a flag that indicates whether the server should render the HTML. Here's what the top level MVC Razor view for the list page looks like:

@model MessageListViewModel
@{
    ViewBag.Title = "Classified Listing";
    bool isBack = !string.IsNullOrEmpty(Request.QueryString["back"]);
}
<form method="post" action="@Url.Action("list")">
    <div id="SizingContainer">
        @if (!isBack)
        {
            @Html.Partial("List_CommandBar_Partial", Model)
            <div id="PostItemContainer" class="scrollbox" xstyle="-webkit-overflow-scrolling: touch;">
                @Html.Partial("List_Items_Partial", Model)
                @if (Model.RequireLoadEntry)
                {
                    <div class="postitem loadpostitems" style="padding: 15px;">
                        <div id="LoadProgress" class="smallprogressright"></div>
                        <div class="control-progress">
                            Load additional listings...
                        </div>
                    </div>
                }
            </div>
        }
    </div>
</form>

As you can see, the query string triggers a conditional block that, if set, is simply not rendered. The content inside of #SizingContainer basically holds the entire page's HTML sans the headers and scripts, but including the filter options and menu at the top. In this case this makes good sense - in other situations, the fact that the menu or filter options might be dynamically updated might make you cache only the list, rather than essentially the entire page. In this particular instance all of the content works and produces the proper result, as both the list and any filter conditions in the form inputs are restored.

Ok, let's move on to the client. On the client there are two page level functions that deal with saving and restoring state. Like the counter example I showed earlier, I like to wrap the logic to save and restore values from sessionStorage into separate functions, because they are almost always used in several places.

page.saveData = function(id) {
    if (!sessionStorage)
        return;
    var data = {
        id: id,
        scroll: $("#PostItemContainer").scrollTop(),
        html: $("#SizingContainer").html()
    };
    sessionStorage.setItem("list_html", JSON.stringify(data));
};
page.restoreData = function() {
    if (!sessionStorage)
        return;
    var data = sessionStorage.getItem("list_html");
    if (!data)
        return null;
    return JSON.parse(data);
};

The data that is saved is an object which contains an id (the selected element when the user clicks) and a scroll position. These two values are used to reset the scroll position when the data is used from the cache.
Finally, the html from the #SizingContainer element is stored, which makes up the bulk of the document's HTML. In this application the HTML captured could be a substantial bit of data. If you recall, I mentioned that the server side code renders a small chunk of data initially and then gets more data if the user reads through the first 50 or so items. The rest of the items retrieved can be rather sizable. Other than the JSON deserialization overhead, that's OK. Since I'm using SessionStorage, the storage space has no immediate limits.

Next is the core logic to handle saving and restoring the page state. At first glance this would seem pretty simple, and in some cases it might be, but as the following code demonstrates there are a few gotchas to watch out for. Here's the relevant code I use to save and restore:

$( function() {
    …
    var isBack = getUrlEncodedKey("back", location.href);
    if (isBack) {
        // remove the back key from URL
        setUrlEncodedKey("back", "", location.href);

        var data = page.restoreData();  // restore from sessionStorage
        if (!data) {
            // no data - force redisplay of the server side default list
            window.location = "list";
            return;
        }
        $("#SizingContainer").html(data.html);
        var el = $(".postitem[data-id=" + data.id + "]");
        $(".postitem").removeClass("highlight");
        el.addClass("highlight");
        $("#PostItemContainer").scrollTop(data.scroll);
        setTimeout(function() {
            el.removeClass("highlight");
        }, 2500);
    }
    else if (window.noFrames)
        page.saveData(null);  // save when page loads

    $("#SizingContainer").on("click", ".postitem", function() {
        var id = $(this).attr("data-id");
        if (!id)
            return true;
        if (window.noFrames)
            page.saveData(id);
        var contentFrame = window.parent.frames["Content"];
        if (contentFrame)
            contentFrame.location.href = "show/" + id;
        else
            window.location.href = "show/" + id;
        return false;
    });
    …

The code starts out by checking for the back query string flag, which triggers restoring from the client cache. If set, the cached data structure is read from sessionStorage. It's important here to check if data was returned. If the user had back=true on the query string but there is no cached data, he likely bookmarked this page or otherwise shut down the browser and came back to this URL. In that case the server didn't render any detail and we have no cached data, so all we can do is redirect to the original default list view using window.location. If we continued, the page would render no data - so make sure to always check the cache retrieval result. Always!

If there is data, it's loaded and the data.html is restored by simply injecting the HTML back into the document's #SizingContainer element:

$("#SizingContainer").html(data.html);

It's that simple, and it's quite quick even with a fully loaded list of additional items and on a phone. The actual HTML data is stored to the cache on every page load initially, and then again when the user clicks on an element to navigate to a particular listing. The former ensures that the client cache always has something in it, and the latter updates with additional information for the selected element.

For the click handling I use a data-id attribute on the list item (.postitem) in the list and retrieve the id from that. That id is then used to navigate to the actual entry, as well as being stored in the cached data. The id is used to reset the selection by searching for the data-id value in the restored elements.
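One note: the getUrlEncodedKey() and setUrlEncodedKey() helpers used above aren't shown in the post. If you want to follow along, here is a minimal, hypothetical stand-in for the retrieval side - the real helper may well differ:

// Hypothetical stand-in for the getUrlEncodedKey() helper used above:
// pulls a single query string value out of a URL, or null if missing.
// Shown only to make the sample self-contained.
function getUrlEncodedKey(key, url) {
    var match = new RegExp("[?&]" + key + "=([^&#]*)").exec(url);
    return match ? decodeURIComponent(match[1].replace(/\+/g, " ")) : null;
}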
The overall save/restore process is pretty straightforward and it doesn't require a bunch of code, yet it yields a huge improvement in the usability of the site on mobile devices (or for anybody who uses the non-frames view).

Some things to watch out for

As easy as it conceptually seems to simply store and retrieve cached content, you have to be quite aware of what type of content you are caching. The code above is all that's specific to the cache/restore cycle and it works, but it took a few tweaks to the rest of the script code and server code to make it all work. There were a few gotchas that weren't immediately obvious. Here are a few things to pay attention to:

· Event handling logic
· Timing of manipulating DOM elements
· Inline script code
· Bookmarking the cache URL when no cache exists

Do you have inline script code in your HTML? That script code isn't going to run if you restore from cache and simply assign the HTML, or it may not run at the time you think it would normally run in the DOM rendering cycle.

JavaScript Event Hookups

The biggest issue I ran into with this approach almost immediately is that originally I had various static event handlers hooked up to various UI elements that are now cached. If you have an event handler like:

$("#btnSearch").click( function() {…});

that works fine when the page loads with server rendered HTML, but that code breaks when you load the HTML from cache. Why? Because the elements you're trying to hook those events to may not actually be there - yet. Luckily there's an easy workaround for this by using deferred events. With jQuery you can use the .on() event handler instead:

$("#SelectionContainer").on("click", "#btnSearch", function() {…});

which monitors a parent element for the events and checks for the inner selector elements to handle events on. This effectively defers to runtime event binding, so as more items are added to the document, bindings still work. For any cached content, use deferred events.

Timing of manipulating DOM Elements

Along the same lines, make sure that your DOM manipulation code follows the code that loads the cached content into the page, so that you don't manipulate DOM elements that don't exist just yet. Ideally you'll want to check for the condition to restore cached content towards the top of your script code, but that can be tricky if you have components or other logic that might not all run in a straight line.

Inline Script Code

Here's another small problem I ran into: I use a DateTime Picker widget I built a while back that relies on the jQuery date time picker. I also created a helper function that allows keyboard date navigation into it that uses JavaScript logic. Because of MVC's limited 'object model', the only way to embed widget content into the page is through inline script. This code broke when I inserted the cached HTML into the page, because the script code was not available when the component actually got injected into the page. As with the last bullet, it's a matter of timing. There's no good workaround for this - in my case I pulled out the jQuery date picker and relied on native <input type="date" /> logic instead - a better choice these days anyway, especially since this view is meant primarily to serve mobile devices, which actually support date input through the browser (unlike desktop browsers, of which only WebKit seems to support it).

Bookmarking Cached Urls

When you cache HTML content you have to make a decision whether you cache on the client and also not render that same content on the server.
In the Classifieds app I didn't render server side content, so if the user comes to the page with back=True and there is no cached content, I have to have a Plan B. Typically this happens when somebody ends up bookmarking the back URL. The easiest and safest solution for this scenario is to ALWAYS check the cache result to make sure it exists and, if not, have a safe URL to go back to - in this case the plain uncached list URL, which amounts to effectively redirecting. This seems really obvious in hindsight, but it's easy to overlook and not see a problem until much later, when it's not obvious at all why the page is not rendering anything.

Don't use <body> to replace Content

Since we're practically replacing all the HTML in the page, it may seem tempting to simply replace the HTML content of the <body> tag. Don't. The body tag usually contains key things that should stay in the page and be there when it loads - specifically script tags and elements, and possibly other embedded content. It's best to create a top level DOM element specifically as a placeholder container for your cached content and wrap it just around the actual content you want to replace. In the app above, #SizingContainer is that container.

Other Approaches

The approach I've used for this application is kind of specific to the existing server rendered application we're running, and so it's just one approach you can take with caching. However, for server rendered content caching this is a pattern I've used in a few apps to retrofit some client caching into list displays. In this application I took the path of least resistance to the existing server rendering logic. Here are a few other ways that come to mind:

Using Partial HTML Rendering via AJAX
Instead of rendering the page initially on the server, the page would load empty and the client would render the UI by retrieving the respective HTML from a Partial View and embedding it into the page. This effectively makes the initial rendering and the cached rendering logic identical and removes the server having to decide whether this request needs to be rendered or not (ie. no checking for a back=true switch). All the logic related to caching lives on the client in this case.

Using JSON Data and Client Rendering
The hardcore client option is to do the whole UI SPA style: pull data from the server and then use client rendering or databinding to pull the data down and render using templates, or client side databinding with knockout/angular et al. As with the Partial Rendering approach, the advantage is that there's no difference in the logic between pulling the data from cache or rendering from scratch, other than the initial check for the cache request. Of course, if the app is a full on SPA app, then caching may not even be required - the list could just stay in memory and be hidden and reactivated.

I'm sure there are a number of other ways this can be handled as well, especially using AJAX. AJAX rendering might simplify the logic, but it also complicates search engine optimization, since there's no content loaded initially. So there are always tradeoffs, and it's important to look at all angles before deciding on any sort of caching solution in general.
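To make the partial rendering option a bit more concrete, here is a minimal sketch of what it might look like. The "list/partial" URL is a made-up partial view action and the element IDs are borrowed from the app above - treat all of it as illustrative, not as the app's actual code:

// Hypothetical sketch of the partial rendering approach: the page always
// builds the list on the client, either from the sessionStorage cache or
// from a partial view request, so both paths share the same code.
function loadList() {
    var cached = window.sessionStorage ? sessionStorage.getItem("list_html") : null;
    if (cached) {
        $("#SizingContainer").html(cached);
        return;
    }
    // "list/partial" is a made-up partial view URL used for illustration
    $.get("list/partial", function(html) {
        $("#SizingContainer").html(html);
        if (window.sessionStorage)
            sessionStorage.setItem("list_html", html);
    });
}

With this approach the server no longer needs to know anything about back navigation; the client alone decides whether to use the cache or request the partial view.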
State of the Session

SessionState and LocalStorage are easy to use in client code and can be integrated even with server centric applications to provide nice caching features for content and data. In this post I've shown a very specific scenario of storing HTML content for the purpose of remembering list view data and state and making the browsing experience for lists a bit more friendly, especially if there's dynamically loaded content involved. If you haven't played with sessionStorage or localStorage, I encourage you to give it a try. There's a lot of cool stuff that you can do with this beyond the specific scenario I've covered here…

Resources

· Overview of localStorage (also applies to sessionStorage)
· Web Storage Compatibility
· Modernizr Test Suite

© Rick Strahl, West Wind Technologies, 2005-2013
Posted in JavaScript  HTML5  ASP.NET  MVC

    Read the article

  • CodePlex Daily Summary for Wednesday, November 23, 2011

CodePlex Daily Summary for Wednesday, November 23, 2011

Popular Releases

Visual Leak Detector for Visual C++ 2008/2010: v2.2.1: Enhancements: * strdup and _wcsdup functions support added. * Preliminary support for VS 11 added. Bugs fixed: * Low performance after upgrading from VLD v2.1. * Memory leaks with static linking fixed (disabled calloc support). * Runtime error R6002 fixed because of wrong memory dump format. * version.h fixed in installer. * Some PVS Studio warnings fixed.

NetSqlAzMan - .NET SQL Authorization Manager: 3.6.0.10: 3.6.0.10 22-Nov-2011 update: Removed PreEmptive Platform integration (PreEmptive analytics). Removed all PreEmptive attributes. Removed PreEmptive.dll assembly references from all projects. Added first support for ADAM/AD LDS. Thanks to PatBea. Work Item 9775: http://netsqlazman.codeplex.com/workitem/9775

Developer Team Article System Management: DTASM v1.3.

SharePoint 2010 FBA Pack: SharePoint 2010 FBA Pack 1.2.0: Web parts are now fully customizable via html templates (Issue #323). FBA Pack is now completely localizable using resource files. Thank you David Chen for submitting the code as well as Chinese translations of the FBA Pack! The membership request web part now gives the option of having the user enter the password and removing the captcha (Issue #447). The FBA Pack will now work in a zone that does not have FBA enabled (Another zone must have FBA enabled, and the zone must contain the me...

SharePoint 2010 Education Demo Project: Release SharePoint SP1 for Education Solutions: This release includes updates to the Content Packs for SharePoint SP1. All Content Packs have been updated to install successfully under SharePoint SP1.

SQL Monitor - tracking sql server activities: SQLMon 4.1 alpha 6: 1. Improved support for schema. 2. Added find reference when right-clicking on object list. 3. Added object rename support.

BugNET Issue Tracker: BugNET 0.9.126: First stable release of version 0.9. Upgrades from 0.8 are fully supported and upgrades to future releases will also be supported. This release is now compiled against the .NET 4.0 framework and it is a requirement. Because of this the web.config has significantly changed. After upgrading, you will need to configure the authentication settings for user registration and anonymous access again. Please see our installation / upgrade instructions for more details: http://wiki.bugnetproject.c...

Anno 2070 Assistant: v0.1.0 (STABLE): Version 0.1.0. Features: Production Chains - Eco Production Chains (Complete), Tycoon Production Chains (Disabled - Incomplete), Tech Production Chains (Disabled - Incomplete), Supply (Disabled - Incomplete), Calculator (Disabled - Incomplete). Building Layouts - Eco Building Layouts (Complete), Tycoon Building Layouts (Disabled - Incomplete), Tech Building Layouts (Disabled - Incomplete). Credits (Complete).

Free SharePoint 2010 Sites Templates: SharePoint Server 2010 Sites Templates: Here is the list of sites templates to be downloaded.

VsTortoise - a TortoiseSVN add-in for Microsoft Visual Studio: VsTortoise Build 30 Beta: Note: This release does not work with custom VsTortoise toolbars. These get removed every time you shut down Visual Studio. (#7940) Build 30 (beta). New: Support for TortoiseSVN 1.7 added (the download contains both setups, for TortoiseSVN 1.6 and 1.7). New: OpenModifiedDocumentDialog displays conflicted files now. New: OpenModifiedDocument allows grouping items by changelist now. Fix: OpenModifiedDocumentDialog caused Visual Studio 2010 to freeze sometimes. Fix: The installer didn...

nopCommerce. Open source shopping cart (ASP.NET MVC): nopcommerce 2.30: Highlight features & improvements: • Performance optimization. • Back in stock notifications. • Product special price support. • Catalog mode (based on customer role). To see the full list of fixes and changes please visit the release notes page (http://www.nopCommerce.com/releasenotes.aspx).

WPF Converters: WPF Converters V1.2.0.0: Support for enumerations, value types, and reference types in the expression converter's equality operators. The expression converter now handles DependencyProperty.UnsetValue as argument values correctly (#4062). StyleCop conformance (more or less).

Json.NET: Json.NET 4.0 Release 4: Change - JsonTextReader.Culture is now CultureInfo.InvariantCulture by default. Change - KeyValurPairConverter no longer cares about the order of the key and value properties. Change - Time zone conversions now use new TimeZoneInfo instead of TimeZone. Fix - Fixed boolean values sometimes being capitalized when converting to XML. Fix - Fixed error when deserializing ConcurrentDictionary. Fix - Fixed serializing some Uris returning the incorrect value. Fix - Fixed occasional error when...

Media Companion: MC 3.423b Weekly: Ensure .NET 4.0 Full Framework is installed (available from http://www.microsoft.com/download/en/details.aspx?id=17718). Ensure the NFO ID fix is applied when transitioning from versions prior to 3.416b. (Details here.) Replaced 'Rebuild' with 'Refresh' throughout entire code; Rebuild will now be known as Refresh. mc_com.exe has been fully updated. TV Show Resolutions... Resolved issue #206 - having to hit save twice when updating runtime manually. Shrunk cache size and lowered loading times f...

Delta Engine: Delta Engine Beta Preview v0.9.1: v0.9.1 beta release with lots of refactoring, fixes, new samples and support for iOS, Android and WP7 (you need a Marketplace account however). If you want a binary release for the games (like v0.9.0), just say so in the Forum or here and we will quickly prepare one. It is just not much different from v0.9.0, so I left it out this time. See http://DeltaEngine.net/Wiki.Roadmap for details.

ASP.net Awesome Samples (Web-Forms): 1.0 samples: Demos and Tutorials for ASP.net Awesome. VS2008 samples are in .NET 3.5, VS2010 samples are in .NET 4.0 (demos for the ASP.net Awesome jQuery Ajax Controls).

SharpMap - Geospatial Application Framework for the CLR: SharpMap-0.9-AnyCPU-Trunk-2011.11.17: This is a build of SharpMap from the 0.9 development trunk as per 2011-11-17. For most applications the AnyCPU release is the recommended one, but in case you need an x86 build, that is included too. For some data providers (GDAL/OGR, SqLite, PostGis) you need to also reference the SharpMap.Extensions assembly. For SqlServer Spatial you need to reference the SharpMap.SqlServerSpatial assembly.

AJAX Control Toolkit: November 2011 Release: AJAX Control Toolkit Release Notes - November 2011 Release, Version 51116. November 2011 release of the AJAX Control Toolkit. AJAX Control Toolkit .NET 4 - Binary – AJAX Control Toolkit for .NET 4 and sample site (Recommended). AJAX Control Toolkit .NET 3.5 - Binary – AJAX Control Toolkit for .NET 3.5 and sample site (Recommended). Notes: - The current version of the AJAX Control Toolkit is not compatible with ASP.NET 2.0. The latest version that is compatible with ASP.NET 2.0 can be found h...

Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.36: Fix for issue #16908: string literals containing ASP.NET replacement syntax fail if the ASP.NET code contains the same character as the string literal delimiter. Also, we shouldn't be changing the delimiter for those literals or combining them with other literals; the developer may have specifically chosen the delimiter used because of possible content inserted by ASP.NET code. This logic is normally off; turn it on via the -aspnet command-line flag (or the Code.Settings.AllowEmbeddedAspNetBl...

MVC Controls Toolkit: Mvc Controls Toolkit 1.5.5: Added: Now the DateRanteAttribute accepts complex expressions containing "Now" and "Today" as static minimum and maximum. Menu, MenuFor helpers capable of handling a "currently selected element"; the developer can choose between using a standard nested menu based on a standard SimpleMenuItem class or specifying an item template based on a custom class. Added also helpers to build the tree structure containing all data items the menu takes infos from. Improved the pager. Now the developer ...

New Projects

ActiveWorlds World Server Admin PowerShell SnapIn: The purpose of this PowerShell SnapIn is to provide a set of tools to administer the world server from PowerShell. It leverages the ActiveWorlds SDK .NET Wrapper to provide this functionality.

Aigu: Enter special characters like you would on your mobile phone. For instance, if you want to type 'é', you just hold down 'e' and a menu will appear. Select the desired character using the arrow keys and press 'enter'. Simple but powerful.

Are you workaholic?: Are you a workaholic? Did your doctor advise you not to stare at the computer monitor for a long time? Then this app is perfectly made for you. It runs in the background, and alerts you to take periodic rests for your eyes and body. What's more, it's open source (MS-PL).

ATDIS PoC: private

Auto Version Web Assets: The AVWA project is an HTTP Module written in C# that is designed to allow for versioning of various web assets such as .CSS and .JS files. This allows you to publish new versions of these files without having to force the server or the client browsers to expire cache.

Bachelor Thesis Algorithm Test Bed: Algorithm Test Bed for my Bachelor Thesis.

Base64: Simple application that helps convert strings and files from or to a Base64 string. You can use any encoding to convert, while a sidebar previews the decoded string for all other encodings.

BoracayExpress: BoracayExpress

C++ Framework for Test Driven Development: A testing framework for C++ written in C++.

Class2Table: Class2Table aka Entity2Table. Easy tool that allows creation of SQL tables from .Net types.

Code for Demos & Experiments: This is where I will post code from demos and presentations.

CodeMaker

ConsoleCommand: ConsoleCommand provides certain .Net commands for access from javascript console engines. Included are commands to set the text and background colors, as well as list and extract resources compiled in a .Net dll.

Converter: Character code conversion tools.

CryptoInator - self contained, self-encrypting, self-decrypting image viewer: Originally developed to encrypt and store NemID images in Denmark.

DAiBears: Something, something, bot

Delicious Notify Plugin: Lets you push a blog post straight to Delicious from Live Writer.

DeveloperFile: Compresses Javascripts using the YUI .NET project. Loops through the root folder and subfolders for files matching the debug extension and creates new files using the release extension. (File extensions must match exactly.)

DotNetNuke SharePoint File Explorer: A DotNetNuke SharePoint File Explorer.

Douban FM: WP7 Douban FM app.

Game Lib: Game Library is an open-source game library to allow focusing on the fun part of a game. It is developed in C#, but will be ported to C++ and VB.net.

Google reader notes to Delicious Export tool (WPF): Google Reader discontinued the note-in-reader features. The current Google Reader allows users to export old notes in JSON format. This app will parse the JSON file and upload it to Delicious; Delicious is a good alternative for note-in-reader.

Html Source Transmitter Control: This web control allows getting the source of a web page as it was displayed before submit, so a developer can store a view of the HTML page that was there before a server exception. It helps to reproduce bugs and can be used with other logging systems.

Ideopuzzle: A puzzle game.

ImageShack-Uploader: This project demonstrates how to upload files automatically to imageshack.us and other image hosters with C#.

Insert Acronym Tags: Lets you insert <acronym> and <abbr> tags into your blog entry more easily.

Insert Quick Link: Allows you to paste a link into the Writer window and get a window similar to the one in Writer where you can change what text is to appear, open in new window, etc.

Insert Video Plugin: Allows you to insert a video into a blog entry from a multitude of different sites.

IoCWrap: Provides a wrapper to the various IoC container implementations so that it is possible to switch to a different provider without changing any application code.

kaveepoj: sharepoint project

Kinect Quiz Engine: Fun quiz game for the Kinect.

Klaverjas: Test application for testing different new technologies in .NET (WCF, DataServices, C# stuff, Entity...etc.)

Man In The Middle: A cyberpunk themed action game with puzzle and strategy elements. Made with XNA as part of a game development course at the IT University of Copenhagen by Bo Bendtsen, Jonas Flensbak, Daniel Kromand, Jess Rahbek & Darryl Woodford.

MediaSelektor: Simple tool to select medias.

Micajah Mindtouch Deki Wiki Copier: Small C# application to move data between 2 Deki Wiki installs or, more importantly, from a wik.is account to a locally installed system.

MineFlagger: MineFlagger is a mine clearing game modeled after Microsoft's Minesweeper. In addition to standard play, MineFlagger incorporates an AI for fun and training.

myXbyqwrhjadsfasfhgf: myXbyqwrhjadsfasfhgf

natoop: natoop

Nauplius.KeyStore: Provides secure application key storage backed by SQL 2008 and Active Directory.

ObjectDB: An object database written using C# 4 and Mono.Cecil.

PaceR: PaceR is an attempt to encapsulate a lot of the common code functionality I use on different projects. Instead of recreating functionality from memory or, worse, copying from older projects, I'd like to have a central location to maintain this common code.

Parseq: Parseq is a Parser Combinator library written in C# (version 2.0).

PowerShell Network Adapter Configuration module: PowerShell Network Adapter Configuration module is a PowerShell module which provides functions for managing network adapters using WMI.

public traffic tracker: This is a university project for a .net course. We are developing a public traffic tracker application for Windows Phone 7 devices that can give information about the actual position of the nearest vehicle on a given line. The specialty is that we use only the GPS information of the users' WP7 devices, so this is a completely software solution without any hardware investment. The disadvantage is that for real operation we would need a lot of active WP7 users.

puyo: puyo

RadioTroll: Web project for Radio Troll.

Read Feed Community: Read Feed Community

Reviewer: Reviewer.dk - a Danish games and review site.

Rollout Sharepoint Solutions - ROSS: ROSS performs the following actions: delete site collection and restart services; 'Get Latest Version' from SourceSafe; rebuild solution; install all wsp solutions; create site collections; check for build and provisioning errors; send email to developers if errors occurred.

School Management: school management

SQL File Executer: This project is a class library written in C# which is used for executing *.sql files on a remote server. Simply one dll file: you include it in your web project, add a using statement at the top of your page, and pass the parameters in. The rest it will do.

Startup Manager: Startup Manager launches all startup programs at a managed rate, meaning that your computer doesn't crash every time it starts up and you can use it immediately.

stetic: ...

Test Infrastructure Guidance: The purpose of this project is to provide guidance to testers in using TFS effectively as an ALM solution. TFS is much more than a simple code repository. Used with Visual Studio it can form a powerful testing solution and remove a lot of pain in dealing with test infrastructure overhead.

Tête-à-tête: Tete-a-tete is an address book with a built-in function to send electronic mail over the Internet.

Tipeysh! - Add-in that helps you creating C/C++ header files on a single click: Do you also feel miserable when you need to create a new header file in your Visual Studio C/C++ project? Repeatedly choosing "new header file", then writing the annoying (but needed) "#ifndef" section, then writing the class name with its "private", "protected" and "public" access modifiers... too many clicks and too much typewriting! Well, there is a solution: Tipeysh! is a simple, easy to use, very handy and configurable Visual Studio Add-In, compatible with both the 2005 and 2008 versions. Once ...

UMN Dashboard Project: academic proj

UsersMOSS: UsersMOSS is a small application for browsing the sites (SPWeb), users (SPUser), and groups (SPGroup) on a MOSS server. The application uses the MOSS object model to inspect the contents of a MOSS server's objects. This application is far from professional, or even finished, but it helps me out very often. Above all, do not use it on a production server, because GC handling is not implemented, which can cause crashes...

UtilityLibrary.Win32: UtilityLibrary.Win32

UW iLearn: The iLearn activity inference platform is a suite of desktop and mobile tools for logging, modeling, and classifying sensor data for mobile devices. It was created at the University of Washington.

VsDocGen: Dynamic javascript documentation generation directly from xml comment documented source code.

Windows Live Spaces Photo Album plugin: This is going to be a plugin for Windows Live Writer that will allow you to browse a Windows Live Space Photo Album.

Windows Live Writer Plugin for Amazon Books using CueCat: This Windows Live Writer plugin is for users who use WLW and wish to use their CueCat to scan books. ItemLookups are run against Amazon via its AWS and the book image, title, author, and publisher are returned. This project was first created by Scott Hanselman on MSDN's Coding4Fun!

X7: X7 makes it easier for Win7 users to clean the system. You'll no longer have to delete useless stuff in your Win7. It's developed in bat.

xDT - Commander: Using this application, the user can assign shortcuts (short texts) for various links/URLs. These short texts will be typed into a Textbox to then launch/go to the target (similar to the "Run" program in Windows).

    Read the article

  • Error when reloading supervisord: unix:///tmp/supervisor.sock no such file

    - by Yarin
I'm running supervisord on my CentOS 6 box like so:

/usr/bin/supervisord -c /etc/supervisord.conf

and when I launch supervisorctl all process statuses are fine, but if I try to reload using supervisorctl I get:

unix:///tmp/supervisor.sock no such file

I'm using the same config file I've used successfully on other boxes, and I'm running everything as root. I can't understand what the problem is... Config file:

; Sample supervisor config file.
[unix_http_server]
file=/tmp/supervisor.sock ; (the path to the socket file)
;chmod=0700 ; socket file mode (default 0700)
;chown=nobody:nogroup ; socket file uid:gid owner
;username=user ; (default is no username (open server))
;password=123 ; (default is no password (open server))

;[inet_http_server] ; inet (TCP) server disabled by default
;port=127.0.0.1:9001 ; (ip_address:port specifier, *:port for all iface)
;username=user ; (default is no username (open server))
;password=123 ; (default is no password (open server))

[supervisord]
logfile=/tmp/supervisord.log ; (main log file;default $CWD/supervisord.log)
logfile_maxbytes=50MB ; (max main logfile bytes b4 rotation;default 50MB)
logfile_backups=10 ; (num of main logfile rotation backups;default 10)
loglevel=info ; (log level;default info; others: debug,warn,trace)
pidfile=/tmp/supervisord.pid ; (supervisord pidfile;default supervisord.pid)
nodaemon=false ; (start in foreground if true;default false)
minfds=1024 ; (min. avail startup file descriptors;default 1024)
minprocs=200 ; (min. avail process descriptors;default 200)
;umask=022 ; (process file creation umask;default 022)
;user=chrism ; (default is current user, required if root)
;identifier=supervisor ; (supervisord identifier, default is 'supervisor')
;directory=/tmp ; (default is not to cd during start)
;nocleanup=true ; (don't clean up tempfiles at start;default false)
;childlogdir=/tmp ; ('AUTO' child log dir, default $TEMP)
;environment=KEY=value ; (key value pairs to add to environment)
;strip_ansi=false ; (strip ansi escape codes in logs; def. false)

; the below section must remain in the config file for RPC
; (supervisorctl/web interface) to work, additional interfaces may be
; added by defining them in separate rpcinterface: sections
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

[supervisorctl]
serverurl=unix:///tmp/supervisor.sock ; use a unix:// URL for a unix socket
;serverurl=http://127.0.0.1:9001 ; use an http:// url to specify an inet socket
;username=chris ; should be same as http_username if set
;password=123 ; should be same as http_password if set
;prompt=mysupervisor ; cmd line prompt (default "supervisor")
;history_file=~/.sc_history ; use readline history if available

; The below sample program section shows all possible program subsection values,
; create one or more 'real' program: sections to be able to control them under
; supervisor.
;[program:foo] ;command=/bin/cat [program:embed_scheduler] command=/opt/web-apps/mywebsite/custom_process.py process_name=%(program_name)s_%(process_num)d numprocs=3 ;[program:theprogramname] ;command=/bin/cat ; the program (relative uses PATH, can take args) ;process_name=%(program_name)s ; process_name expr (default %(program_name)s) ;numprocs=1 ; number of processes copies to start (def 1) ;directory=/tmp ; directory to cwd to before exec (def no cwd) ;umask=022 ; umask for process (default None) ;priority=999 ; the relative start priority (default 999) ;autostart=true ; start at supervisord start (default: true) ;autorestart=unexpected ; whether/when to restart (default: unexpected) ;startsecs=1 ; number of secs prog must stay running (def. 1) ;startretries=3 ; max # of serial start failures (default 3) ;exitcodes=0,2 ; 'expected' exit codes for process (default 0,2) ;stopsignal=QUIT ; signal used to kill process (default TERM) ;stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10) ;killasgroup=false ; SIGKILL the UNIX process group (def false) ;user=chrism ; setuid to this UNIX account to run the program ;redirect_stderr=true ; redirect proc stderr to stdout (default false) ;stdout_logfile=/a/path ; stdout log path, NONE for none; default AUTO ;stdout_logfile_maxbytes=1MB ; max # logfile bytes b4 rotation (default 50MB) ;stdout_logfile_backups=10 ; # of stdout logfile backups (default 10) ;stdout_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0) ;stdout_events_enabled=false ; emit events on stdout writes (default false) ;stderr_logfile=/a/path ; stderr log path, NONE for none; default AUTO ;stderr_logfile_maxbytes=1MB ; max # logfile bytes b4 rotation (default 50MB) ;stderr_logfile_backups=10 ; # of stderr logfile backups (default 10) ;stderr_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0) ;stderr_events_enabled=false ; emit events on stderr writes (default false) ;environment=A=1,B=2 ; process environment additions (def no adds) ;serverurl=AUTO ; override serverurl computation (childutils) ; The below sample eventlistener section shows all possible ; eventlistener subsection values, create one or more 'real' ; eventlistener: sections to be able to handle event notifications ; sent by supervisor. ;[eventlistener:theeventlistenername] ;command=/bin/eventlistener ; the program (relative uses PATH, can take args) ;process_name=%(program_name)s ; process_name expr (default %(program_name)s) ;numprocs=1 ; number of processes copies to start (def 1) ;events=EVENT ; event notif. types to subscribe to (req'd) ;buffer_size=10 ; event buffer queue size (default 10) ;directory=/tmp ; directory to cwd to before exec (def no cwd) ;umask=022 ; umask for process (default None) ;priority=-1 ; the relative start priority (default -1) ;autostart=true ; start at supervisord start (default: true) ;autorestart=unexpected ; whether/when to restart (default: unexpected) ;startsecs=1 ; number of secs prog must stay running (def. 
1) ;startretries=3 ; max # of serial start failures (default 3) ;exitcodes=0,2 ; 'expected' exit codes for process (default 0,2) ;stopsignal=QUIT ; signal used to kill process (default TERM) ;stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10) ;killasgroup=false ; SIGKILL the UNIX process group (def false) ;user=chrism ; setuid to this UNIX account to run the program ;redirect_stderr=true ; redirect proc stderr to stdout (default false) ;stdout_logfile=/a/path ; stdout log path, NONE for none; default AUTO ;stdout_logfile_maxbytes=1MB ; max # logfile bytes b4 rotation (default 50MB) ;stdout_logfile_backups=10 ; # of stdout logfile backups (default 10) ;stdout_events_enabled=false ; emit events on stdout writes (default false) ;stderr_logfile=/a/path ; stderr log path, NONE for none; default AUTO ;stderr_logfile_maxbytes=1MB ; max # logfile bytes b4 rotation (default 50MB) ;stderr_logfile_backups ; # of stderr logfile backups (default 10) ;stderr_events_enabled=false ; emit events on stderr writes (default false) ;environment=A=1,B=2 ; process environment additions ;serverurl=AUTO ; override serverurl computation (childutils) ; The below sample group section shows all possible group values, ; create one or more 'real' group: sections to create "heterogeneous" ; process groups. ;[group:thegroupname] ;programs=progname1,progname2 ; each refers to 'x' in [program:x] definitions ;priority=999 ; the relative start priority (default 999) ; The [include] section can just contain the "files" setting. This ; setting can list multiple files (separated by whitespace or ; newlines). It can also contain wildcards. The filenames are ; interpreted as relative to this file. Included files *cannot* ; include files themselves. ;[include] ;files = relative/directory/*.ini
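
    A pattern worth ruling out on CentOS before anything else: the distribution ships a daily tmpwatch cron job that purges old files from /tmp, and it will happily delete supervisor.sock out from under a long-running supervisord; supervisorctl then reports exactly this "no such file" error even though the daemon itself is healthy. The sketch below is one way to diagnose and work around it; the /var/run/supervisor.sock path is an illustrative choice, not something from the original post:

        # is supervisord still running, and does its socket still exist?
        ps aux | grep '[s]upervisord'
        ls -l /tmp/supervisor.sock

        # make supervisorctl read the same config supervisord was started with
        supervisorctl -c /etc/supervisord.conf status

    If the socket has vanished while the daemon is alive, restarting supervisord recreates it, and moving file= in [unix_http_server] and serverurl= in [supervisorctl] to a path outside /tmp (for example /var/run/supervisor.sock) keeps tmpwatch from removing it again.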


  • Cisco 891w multiple VLAN configuration

    - by Jessica
    I'm having trouble getting my guest network up. I have VLAN 1, which contains all our network resources (servers, desktops, printers, etc.). I have the wireless configured to use VLAN 1 but authenticate with WPA2 Enterprise. The guest network I just wanted to be open, or configured with a simple WPA2 Personal password, on its own VLAN 2. I've looked at tons of documentation and it should be working, but I can't even authenticate on the guest network! I posted this on Cisco's support forum a week ago but no one has really responded, and I could really use some help. If anyone could take a look at the configurations I've posted and steer me in the right direction, I would be extremely grateful. Thank you!

    version 15.0
    service timestamps debug datetime msec
    service timestamps log datetime msec
    no service password-encryption
    !
    hostname ESI
    !
    boot-start-marker
    boot-end-marker
    !
    logging buffered 51200 warnings
    !
    aaa new-model
    !
    !
    aaa authentication login userauthen local
    aaa authorization network groupauthor local
    !
    !
    !
    !
    !
    aaa session-id common
    !
    !
    !
    clock timezone EST -5
    clock summer-time EDT recurring
    service-module wlan-ap 0 bootimage autonomous
    !
    crypto pki trustpoint TP-self-signed-3369945891
     enrollment selfsigned
     subject-name cn=IOS-Self-Signed-Certificate-3369945891
     revocation-check none
     rsakeypair TP-self-signed-3369945891
    !
    !
    crypto pki certificate chain TP-self-signed-3369945891
     certificate self-signed 01
      (cert is here)
      quit
    ip source-route
    !
    !
    ip dhcp excluded-address 192.168.1.1
    ip dhcp excluded-address 192.168.1.5
    ip dhcp excluded-address 192.168.1.2
    ip dhcp excluded-address 192.168.1.200 192.168.1.210
    ip dhcp excluded-address 192.168.1.6
    ip dhcp excluded-address 192.168.1.8
    ip dhcp excluded-address 192.168.3.1
    !
    ip dhcp pool ccp-pool
     import all
     network 192.168.1.0 255.255.255.0
     default-router 192.168.1.1
     dns-server 10.171.12.5 10.171.12.37
     lease 0 2
    !
    ip dhcp pool guest
     import all
     network 192.168.3.0 255.255.255.0
     default-router 192.168.3.1
     dns-server 10.171.12.5 10.171.12.37
    !
    !
    ip cef
    no ip domain lookup
    no ipv6 cef
    !
    !
    multilink bundle-name authenticated
    license udi pid CISCO891W-AGN-A-K9 sn FTX153085WL
    !
    !
    username ESIadmin privilege 15 secret 5 $1$g1..$JSZ0qxljZAgJJIk/anDu51
    username user1 password 0 pass
    !
    !
    !
    class-map type inspect match-any ccp-cls-insp-traffic
     match protocol cuseeme
     match protocol dns
     match protocol ftp
     match protocol h323
     match protocol https
     match protocol icmp
     match protocol imap
     match protocol pop3
     match protocol netshow
     match protocol shell
     match protocol realmedia
     match protocol rtsp
     match protocol smtp
     match protocol sql-net
     match protocol streamworks
     match protocol tftp
     match protocol vdolive
     match protocol tcp
     match protocol udp
    class-map type inspect match-all ccp-insp-traffic
     match class-map ccp-cls-insp-traffic
    class-map type inspect match-any ccp-cls-icmp-access
     match protocol icmp
    class-map type inspect match-all ccp-invalid-src
     match access-group 100
    class-map type inspect match-all ccp-icmp-access
     match class-map ccp-cls-icmp-access
    class-map type inspect match-all ccp-protocol-http
     match protocol http
    !
    !
    policy-map type inspect ccp-permit-icmpreply
     class type inspect ccp-icmp-access
      inspect
     class class-default
      pass
    policy-map type inspect ccp-inspect
     class type inspect ccp-invalid-src
      drop log
     class type inspect ccp-protocol-http
      inspect
     class type inspect ccp-insp-traffic
      inspect
     class class-default
      drop
    policy-map type inspect ccp-permit
     class class-default
      drop
    !
    zone security out-zone
    zone security in-zone
    zone-pair security ccp-zp-self-out source self destination out-zone
     service-policy type inspect ccp-permit-icmpreply
    zone-pair security ccp-zp-in-out source in-zone destination out-zone
     service-policy type inspect ccp-inspect
    zone-pair security ccp-zp-out-self source out-zone destination self
     service-policy type inspect ccp-permit
    !
    !
    crypto isakmp policy 1
     encr 3des
     authentication pre-share
     group 2
    !
    crypto isakmp client configuration group 3000client
     key 67Nif8LLmqP_
     dns 10.171.12.37 10.171.12.5
     pool dynpool
     acl 101
    !
    !
    crypto ipsec transform-set myset esp-3des esp-sha-hmac
    !
    crypto dynamic-map dynmap 10
     set transform-set myset
    !
    !
    crypto map clientmap client authentication list userauthen
    crypto map clientmap isakmp authorization list groupauthor
    crypto map clientmap client configuration address initiate
    crypto map clientmap client configuration address respond
    crypto map clientmap 10 ipsec-isakmp dynamic dynmap
    !
    !
    !
    !
    !
    interface FastEthernet0
    !
    !
    interface FastEthernet1
    !
    !
    interface FastEthernet2
    !
    !
    interface FastEthernet3
    !
    !
    interface FastEthernet4
    !
    !
    interface FastEthernet5
    !
    !
    interface FastEthernet6
    !
    !
    interface FastEthernet7
    !
    !
    interface FastEthernet8
     ip address dhcp
     ip nat outside
     ip virtual-reassembly
     duplex auto
     speed auto
    !
    !
    interface GigabitEthernet0
     description $FW_OUTSIDE$$ES_WAN$
     ip address 10...* 255.255.254.0
     ip nat outside
     ip virtual-reassembly
     zone-member security out-zone
     duplex auto
     speed auto
     crypto map clientmap
    !
    !
    interface wlan-ap0
     description Service module interface to manage the embedded AP
     ip unnumbered Vlan1
     arp timeout 0
    !
    !
    interface Wlan-GigabitEthernet0
     description Internal switch interface connecting to the embedded AP
     switchport trunk allowed vlan 1-3,1002-1005
     switchport mode trunk
    !
    !
    interface Vlan1
     description $ETH-SW-LAUNCH$$INTF-INFO-FE 1$$FW_INSIDE$
     ip address 192.168.1.1 255.255.255.0
     ip nat inside
     ip virtual-reassembly
     zone-member security in-zone
     ip tcp adjust-mss 1452
     crypto map clientmap
    !
    !
    interface Vlan2
     description guest
     ip address 192.168.3.1 255.255.255.0
     ip access-group 120 in
     ip nat inside
     ip virtual-reassembly
     zone-member security in-zone
    !
    !
    interface Async1
     no ip address
     encapsulation slip
    !
    !
    ip local pool dynpool 192.168.1.200 192.168.1.210
    ip forward-protocol nd
    ip http server
    ip http access-class 23
    ip http authentication local
    ip http secure-server
    ip http timeout-policy idle 60 life 86400 requests 10000
    !
    !
    ip dns server
    ip nat inside source list 23 interface GigabitEthernet0 overload
    ip route 0.0.0.0 0.0.0.0 10.165.0.1
    !
    access-list 23 permit 192.168.1.0 0.0.0.255
    access-list 100 remark CCP_ACL Category=128
    access-list 100 permit ip host 255.255.255.255 any
    access-list 100 permit ip 127.0.0.0 0.255.255.255 any
    access-list 100 permit ip 10.165.0.0 0.0.1.255 any
    access-list 110 permit ip 192.168.0.0 0.0.5.255 any
    access-list 120 remark ESIGuest Restriction
    no cdp run
    !
    !
    !
    !
    !
    !
    control-plane
    !
    !
    alias exec dot11radio service-module wlan-ap 0 session

    Access point:

    version 12.4
    no service pad
    service timestamps debug datetime msec
    service timestamps log datetime msec
    no service password-encryption
    !
    hostname ESIRouter
    !
    no logging console
    enable secret 5 $1$yEH5$CxI5.9ypCBa6kXrUnSuvp1
    !
    aaa new-model
    !
    !
    aaa group server radius rad_eap
     server 192.168.1.5 auth-port 1812 acct-port 1813
    !
    aaa group server radius rad_acct
     server 192.168.1.5 auth-port 1812 acct-port 1813
    !
    aaa authentication login eap_methods group rad_eap
    aaa authentication enable default line enable
    aaa authorization exec default local
    aaa authorization commands 15 default local
    aaa accounting network acct_methods start-stop group rad_acct
    !
    aaa session-id common
    clock timezone EST -5
    clock summer-time EDT recurring
    ip domain name ESI
    !
    !
    dot11 syslog
    dot11 vlan-name one vlan 1
    dot11 vlan-name two vlan 2
    !
    dot11 ssid one
     vlan 1
     authentication open eap eap_methods
     authentication network-eap eap_methods
     authentication key-management wpa version 2
     accounting rad_acct
    !
    dot11 ssid two
     vlan 2
     authentication open
     guest-mode
    !
    dot11 network-map
    !
    !
    username ESIadmin privilege 15 secret 5 $1$p02C$WVHr5yKtRtQxuFxPU8NOx.
    !
    !
    bridge irb
    !
    !
    interface Dot11Radio0
     no ip address
     no ip route-cache
     !
     encryption vlan 1 mode ciphers aes-ccm
     !
     broadcast-key vlan 1 change 30
     !
     !
     ssid one
     !
     ssid two
     !
     antenna gain 0
     station-role root
    !
    interface Dot11Radio0.1
     encapsulation dot1Q 1 native
     no ip route-cache
     bridge-group 1
     bridge-group 1 subscriber-loop-control
     bridge-group 1 block-unknown-source
     no bridge-group 1 source-learning
     no bridge-group 1 unicast-flooding
     bridge-group 1 spanning-disabled
    !
    interface Dot11Radio0.2
     encapsulation dot1Q 2
     no ip route-cache
     bridge-group 2
     bridge-group 2 subscriber-loop-control
     bridge-group 2 block-unknown-source
     no bridge-group 2 source-learning
     no bridge-group 2 unicast-flooding
     bridge-group 2 spanning-disabled
    !
    interface Dot11Radio1
     no ip address
     no ip route-cache
     shutdown
     !
     encryption vlan 1 mode ciphers aes-ccm
     !
     broadcast-key vlan 1 change 30
     !
     !
     ssid one
     !
     antenna gain 0
     dfs band 3 block
     channel dfs
     station-role root
    !
    interface Dot11Radio1.1
     encapsulation dot1Q 1 native
     no ip route-cache
     bridge-group 1
     bridge-group 1 subscriber-loop-control
     bridge-group 1 block-unknown-source
     no bridge-group 1 source-learning
     no bridge-group 1 unicast-flooding
     bridge-group 1 spanning-disabled
    !
    interface GigabitEthernet0
     description the embedded AP GigabitEthernet 0 is an internal interface connecting AP with the host router
     no ip address
     no ip route-cache
    !
    interface GigabitEthernet0.1
     encapsulation dot1Q 1 native
     no ip route-cache
     bridge-group 1
     no bridge-group 1 source-learning
     bridge-group 1 spanning-disabled
    !
    interface GigabitEthernet0.2
     encapsulation dot1Q 2
     no ip route-cache
     bridge-group 2
     no bridge-group 2 source-learning
     bridge-group 2 spanning-disabled
    !
    interface BVI1
     ip address 192.168.1.2 255.255.255.0
     no ip route-cache
    !
    ip http server
    no ip http secure-server
    ip http help-path http://www.cisco.com/warp/public/779/smbiz/prodconfig/help/eag
    access-list 10 permit 192.168.1.0 0.0.0.255
    radius-server host 192.168.1.5 auth-port 1812 acct-port 1813 key *****
    bridge 1 route ip
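
    One gap that is visible in the posted router config, separate from the association problem: NAT only translates what access-list 23 matches, and list 23 permits just 192.168.1.0/24, so even a successfully associated guest on 192.168.3.0/24 would never reach the Internet. A hedged sketch of the check and the one-line addition (verify against your own policy before applying; the association check runs on the embedded AP):

        ! on the AP: confirm whether guest clients associate to SSID "two" at all
        show dot11 associations
        ! on the router: NAT list 23 currently matches only the internal LAN;
        ! adding the guest subnet lets 192.168.3.0/24 be translated out Gi0
        access-list 23 permit 192.168.3.0 0.0.0.255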


  • Moving data files failing

    - by Miles Hayler
    Trying to migrate data from C: to D: via the SBS console is failing. The wizard starts running but drops out in the first few seconds. I'll post the full logs, but the important lines appear to be as follows:

    An exception of type 'Type: System.IO.FileNotFoundException, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' has occurred.
    Message: The system cannot find the file specified. (Exception from HRESULT: 0x80070002)
    Stack:
    at TaskScheduler.TaskSchedulerClass.GetFolder(String Path)
    at Microsoft.WindowsServerSolutions.Common.WindowsTaskScheduler..ctor(String taskPath, String taskName)
    BaseException: Microsoft.WindowsServerSolutions.Storage.Common.StorageException: GetServerBackupTaskStatus: fail to find the task --- ErrorCode:0

    I've been googling for days with no luck. I have found that mscorlib is a component of .NET, and I've discovered multiple instances of the file in %windir%, %windir%\winsxs, and %windir%\Microsoft.net. Has anyone come across (and fixed) this one before?

    ---------------------------------------------------------
    [1516] 110315.190856.1105: Storage: Initializing...C:\Program Files\Windows Small Business Server\Bin\MoveData.exe
    [1516] 110315.190856.2875: Storage: Data Store to be moved: Exchange
    [1516] 110315.190856.5305: TaskScheduler: Exception System.IO.FileNotFoundException:
    [1516] 110315.190856.5605: Exception:
    ---------------------------------------
    An exception of type 'Type: System.IO.FileNotFoundException, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' has occurred.
    Timestamp: 03/15/2011 19:08:56
    Message: The system cannot find the file specified. (Exception from HRESULT: 0x80070002)
    Stack:
    at TaskScheduler.TaskSchedulerClass.GetFolder(String Path)
    at Microsoft.WindowsServerSolutions.Common.WindowsTaskScheduler..ctor(String taskPath, String taskName)
    [1516] 110315.190856.5625: Storage: Exception Microsoft.WindowsServerSolutions.Common.WindowsTaskSchedulerException:
    [1516] 110315.190856.5635: Exception:
    ---------------------------------------
    An exception of type 'Type: Microsoft.WindowsServerSolutions.Common.WindowsTaskSchedulerException, Common, Version=6.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' has occurred.
    Timestamp: 03/15/2011 19:08:56
    Message: Failed to find the task path
    Stack:
    at Microsoft.WindowsServerSolutions.Common.WindowsTaskScheduler..ctor(String taskPath, String taskName)
    at Microsoft.WindowsServerSolutions.Storage.Common.ServerBackupUtility.GetServerBackupTaskStatus()
    ---------------------------------------
    An exception of type 'Type: System.IO.FileNotFoundException, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' has occurred.
    Timestamp: 03/15/2011 19:08:56
    Message: The system cannot find the file specified. (Exception from HRESULT: 0x80070002)
    Stack:
    at TaskScheduler.TaskSchedulerClass.GetFolder(String Path)
    at Microsoft.WindowsServerSolutions.Common.WindowsTaskScheduler..ctor(String taskPath, String taskName)
    [1516] 110315.190856.5665: Storage: Error Retrieving Server Backup Task Status: ErrorCode:0 BaseException: Microsoft.WindowsServerSolutions.Storage.Common.StorageException: GetServerBackupTaskStatus: fail to find the task ---> ErrorCode:0 BaseException: Microsoft.WindowsServerSolutions.Common.WindowsTaskSchedulerException: Failed to find the task path ---> System.IO.FileNotFoundException: The system cannot find the file specified. (Exception from HRESULT: 0x80070002)
    at TaskScheduler.TaskSchedulerClass.GetFolder(String Path)
    at Microsoft.WindowsServerSolutions.Common.WindowsTaskScheduler..ctor(String taskPath, String taskName)
    --- End of inner exception stack trace ---
    at Microsoft.WindowsServerSolutions.Common.WindowsTaskScheduler..ctor(String taskPath, String taskName)
    at Microsoft.WindowsServerSolutions.Storage.Common.ServerBackupUtility.GetServerBackupTaskStatus()
    --- End of inner exception stack trace ---
    at Microsoft.WindowsServerSolutions.Storage.Common.ServerBackupUtility.GetServerBackupTaskStatus()
    at Microsoft.WindowsServerSolutions.Storage.MoveData.Helper.get_ServerBackupTaskState()
    [1516] 110315.190857.6216: Storage: Backup Task State: Unknown
    [1516] 110315.190857.9347: Storage: Launching the Move Data Wizard!
    [1516] 110315.190857.9397: Wizard: Admin:QueryNextPage(null) = Storage.MoveDataWizard.GettingStartedPage
    [1516] 110315.190857.9417: Wizard: TOC Storage.MoveDataWizard.GettingStartedPage is on ExpectedPath
    [1516] 110315.190857.9577: Wizard: Storage.MoveDataWizard.GettingStartedPage entered
    [1516] 110315.190857.9657: Wizard: Admin:QueryNextPage(Storage.MoveDataWizard.GettingStartedPage) = Storage.MoveDataWizard.DiagnoseDataStorePage
    [1516] 110315.190857.9657: Wizard: TOC Storage.MoveDataWizard.DiagnoseDataStorePage is on ExpectedPath
    [1516] 110315.190857.9657: Wizard: Admin:QueryNextPage(Storage.MoveDataWizard.DiagnoseDataStorePage) = Storage.MoveDataWizard.NewDataStoreLocationPage
    [1516] 110315.190857.9657: Wizard: TOC Storage.MoveDataWizard.NewDataStoreLocationPage is on ExpectedPath
    [1516] 110315.190857.9657: Wizard: Admin:QueryNextPage(Storage.MoveDataWizard.NewDataStoreLocationPage) = null
    [1516] 110315.190857.9697: Wizard: ----------------------------------
    [1516] 110315.190857.9697: Wizard: The pages visted:
    [1516] 110315.190857.9697: Wizard: Current Page := [TOC Storage.MoveDataWizard.GettingStartedPage]
    [1516] 110315.190857.9697: Wizard: [TOC] : TOC Storage.MoveDataWizard.DiagnoseDataStorePage
    [1516] 110315.190857.9697: Wizard: [TOC] : TOC Storage.MoveDataWizard.NewDataStoreLocationPage
    [1516] 110315.190857.9697: Wizard: Step 1 of 3
    [1516] 110315.190907.0406: Wizard: Admin:QueryNextPage(Storage.MoveDataWizard.GettingStartedPage) = Storage.MoveDataWizard.DiagnoseDataStorePage
    [1516] 110315.190907.0416: Wizard: Storage.MoveDataWizard.GettingStartedPage exited with the button: Next
    [1516] 110315.190907.0416: WizardChainEngine Next Clicked: Going to page {0}.: Storage.MoveDataWizard.DiagnoseDataStorePage
    [1516] 110315.190907.0496: Wizard: Storage.MoveDataWizard.DiagnoseDataStorePage entered
    [1516] 110315.190907.0606: Wizard: Admin:QueryNextPage(Storage.MoveDataWizard.DiagnoseDataStorePage) = Storage.MoveDataWizard.NewDataStoreLocationPage
    [1516] 110315.190907.0606: Wizard: Admin:QueryNextPage(Storage.MoveDataWizard.NewDataStoreLocationPage) = null
    [1516] 110315.190907.0606: Wizard: ----------------------------------
    [1516] 110315.190907.0606: Wizard: The pages visted:
    [1516] 110315.190907.0606: Wizard: [TOC] visited: TOC Storage.MoveDataWizard.GettingStartedPage
    [1516] 110315.190907.0606: Wizard: Current Page := [TOC Storage.MoveDataWizard.DiagnoseDataStorePage]
    [1516] 110315.190907.0616: Wizard: [TOC] : TOC Storage.MoveDataWizard.NewDataStoreLocationPage
    [1516] 110315.190907.0616: Wizard: Step 2 of 3
    [19772] 110315.190907.0656: Storage: Starting System Diagnosis
    [19772] 110315.190907.0656: Storage: Getting Data Store Information
    [19772] 110315.190907.1086: Storage: Create the list of storage and DB directory path
    [19772] 110315.190907.1246: Messaging: Begin Microsoft.WindowsServerSolutions.Messaging.Management.MessagingTasks..ctor
    [19772] 110315.190907.1546: Messaging: Begin Microsoft.WindowsServerSolutions.Messaging.Management.MessagingTasks.Initialize
    [19772] 110315.190907.1596: Messaging: Begin Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.Initialize
    [19772] 110315.190907.1606: Messaging: Exchange install path: C:\Program Files\Microsoft\Exchange Server\bin
    [19772] 110315.190908.4157: Messaging: E12 Monad runspace created ID: Microsoft.PowerShell
    [19772] 110315.190908.4237: Messaging: Begin Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.StaticExecute
    [19772] 110315.190908.4287: Messaging: Executed management shell command: get-exchangeserver
    [19772] 110315.190910.2369: Messaging: End Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.StaticExecute
    [19772] 110315.190910.2369: Messaging: End Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.Initialize
    [19772] 110315.190910.5699: Messaging: Begin Microsoft.WindowsServerSolutions.Messaging.Management.MessagingTasks.GatherAdminInfo
    [19772] 110315.190910.5699: Messaging: Begin Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.StaticExecute
    [19772] 110315.190910.5719: Messaging: Executed management shell command: get-user -Identity "dmagroup.local\Administrator"
    [19772] 110315.190911.0870: Messaging: End Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.StaticExecute
    [19772] 110315.190911.0880: Messaging: Begin Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.StaticExecute
    [19772] 110315.190911.0880: Messaging: Executed management shell command: get-mailbox -Identity "d2ae2bf0-48a7-4ce9-9e72-bb3c765454ac"
    [19772] 110315.190911.1300: Messaging: End Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.StaticExecute
    [19772] 110315.190911.1310: Messaging: User Administrator is mail enabled and can use MessagingManagement to send mail.
    [19772] 110315.190911.1310: Messaging: Email address used for user: [email protected]
    [19772] 110315.190911.1440: Messaging: Begin Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.StaticExecute
    [19772] 110315.190911.1440: Messaging: Executed management shell command: get-group -Identity "Domain Admins"
    [19772] 110315.190911.1630: Messaging: End Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.StaticExecute
    [19772] 110315.190911.1640: Messaging: User Administrator is a member of Domain Admins and can use MessagingManagement to manage Exchange.
    [19772] 110315.190911.1640: Messaging: End Microsoft.WindowsServerSolutions.Messaging.Management.MessagingTasks.GatherAdminInfo
    [19772] 110315.190911.1640: Messaging: MessagingManagement enabled for Exchange management: True
    [19772] 110315.190911.1640: Messaging: MessagingManagement enabled for mail submission: True
    [19772] 110315.190911.1640: Messaging: End Microsoft.WindowsServerSolutions.Messaging.Management.MessagingTasks.Initialize
    [19772] 110315.190911.1640: Messaging: End Microsoft.WindowsServerSolutions.Messaging.Tasks.TaskMoveExchangeData.CreateDataStoreDriveList
    [19772] 110315.190911.1670: Messaging: Begin Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.Initialize
    [19772] 110315.190911.1670: Messaging: Begin Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.StaticExecute
    [19772] 110315.190911.1670: Messaging: Executed management shell command: get-storagegroup -Server "SERVER01"
    [19772] 110315.190911.2990: Messaging: End Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.StaticExecute
    [19772] 110315.190911.3070: Messaging: Begin Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.Initialize
    [19772] 110315.190911.3070: Messaging: Begin Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.StaticExecute
    [19772] 110315.190911.3070: Messaging: Executed management shell command: get-mailboxdatabase -Server "SERVER01"
    [19772] 110315.190911.4440: Messaging: End Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.StaticExecute
    [19772] 110315.190911.4520: Messaging: Begin Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.Initialize
    [19772] 110315.190911.4520: Messaging: Begin Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.StaticExecute
    [19772] 110315.190911.4520: Messaging: Executed management shell command: get-publicfolderdatabase -Server "SERVER01"
    [19772] 110315.190911.5240: Messaging: End Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.StaticExecute
    [19772] 110315.190911.5510: Storage: Data Store Drive/s Details:Name=C:\,Size=12675712420
    [19772] 110315.190911.5510: Storage: Data Store Size Details: Current Total Size=12675712420 Required Size=12675712420
    [19772] 110315.190911.5510: Storage: MoveData Task can move the Data Store=True
    [19772] 110315.190911.8401: Storage: An error was encountered when performing system diagnosis : ErrorCode:0 BaseException: Microsoft.WindowsServerSolutions.Storage.Common.StorageException: WMI error occurred while accessing drive ---> System.Management.ManagementException: Not found
    at System.Management.ManagementException.ThrowWithExtendedInfo(ManagementStatus errorCode)
    at System.Management.ManagementObjectCollection.ManagementObjectEnumerator.MoveNext()
    at Microsoft.WindowsServerSolutions.Storage.Common.DriveUtil.IsDriveRemovable(String drive)
    --- End of inner exception stack trace ---
    at Microsoft.WindowsServerSolutions.Storage.Common.DriveUtil.IsDriveRemovable(String drive)
    at Microsoft.WindowsServerSolutions.Storage.Common.DataStoreInfo.LoadAvailableDrives()
    at Microsoft.WindowsServerSolutions.Storage.Common.MoveDataUtil.CanMoveData(DataStoreInfo storeInfo, MoveDataError& error)
    at Microsoft.WindowsServerSolutions.Storage.MoveData.DiagnoseDataStorePagePresenter.DiagnoseDataStore(Object sender, DoWorkEventArgs args)
    [1516] 110315.190912.0331: Storage: An error occured during the execution: System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> ErrorCode:0 BaseException: Microsoft.WindowsServerSolutions.Storage.Common.StorageException: Diagnosing the Data Store failed (see the inner exception) ---> ErrorCode:0 BaseException: Microsoft.WindowsServerSolutions.Storage.Common.StorageException: WMI error occurred while accessing drive ---> System.Management.ManagementException: Not found
    at System.Management.ManagementException.ThrowWithExtendedInfo(ManagementStatus errorCode)
    at System.Management.ManagementObjectCollection.ManagementObjectEnumerator.MoveNext()
    at Microsoft.WindowsServerSolutions.Storage.Common.DriveUtil.IsDriveRemovable(String drive)
    --- End of inner exception stack trace ---
    at Microsoft.WindowsServerSolutions.Storage.Common.DriveUtil.IsDriveRemovable(String drive)
    at Microsoft.WindowsServerSolutions.Storage.Common.DataStoreInfo.LoadAvailableDrives()
    at Microsoft.WindowsServerSolutions.Storage.Common.MoveDataUtil.CanMoveData(DataStoreInfo storeInfo, MoveDataError& error)
    at Microsoft.WindowsServerSolutions.Storage.MoveData.DiagnoseDataStorePagePresenter.DiagnoseDataStore(Object sender, DoWorkEventArgs args)
    at System.ComponentModel.BackgroundWorker.WorkerThreadStart(Object argument)
    --- End of inner exception stack trace ---
    at Microsoft.WindowsServerSolutions.Storage.MoveData.DiagnoseDataStorePagePresenter.backgroundWorker_RunWorkerCompleted(Object sender, RunWorkerCompletedEventArgs e)
    --- End of inner exception stack trace ---
    at System.RuntimeMethodHandle._InvokeMethodFast(Object target, Object[] arguments, SignatureStruct& sig, MethodAttributes methodAttributes, RuntimeTypeHandle typeOwner)
    at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture, Boolean skipVisibilityChecks)
    at System.Delegate.DynamicInvokeImpl(Object[] args)
    at System.Windows.Forms.Control.InvokeMarshaledCallbackDo(ThreadMethodEntry tme)
    at System.Windows.Forms.Control.InvokeMarshaledCallbackHelper(Object obj)
    at System.Threading.ExecutionContext.runTryCode(Object userData)
    at System.Runtime.CompilerServices.RuntimeHelpers.ExecuteCodeWithGuaranteedCleanup(TryCode code, CleanupCode backoutCode, Object userData)
    at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
    at System.Windows.Forms.Control.InvokeMarshaledCallback(ThreadMethodEntry tme)
    at System.Windows.Forms.Control.InvokeMarshaledCallbacks()
    at System.Windows.Forms.Control.WndProc(Message& m)
    at System.Windows.Forms.Control.ControlNativeWindow.WndProc(Message& m)
    at System.Windows.Forms.NativeWindow.DebuggableCallback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)
    at System.Windows.Forms.UnsafeNativeMethods.DispatchMessageW(MSG& msg)
    at System.Windows.Forms.Application.ComponentManager.System.Windows.Forms.UnsafeNativeMethods.IMsoComponentManager.FPushMessageLoop(Int32 dwComponentID, Int32 reason, Int32 pvLoopData)
    at System.Windows.Forms.Application.ThreadContext.RunMessageLoopInner(Int32 reason, ApplicationContext context)
    at System.Windows.Forms.Application.ThreadContext.RunMessageLoop(Int32 reason, ApplicationContext context)
    at Microsoft.WindowsServerSolutions.Common.Wizards.Framework.WizardFrameView.Create()
    at Microsoft.WindowsServerSolutions.Common.Wizards.Framework.WizardChainEngine.Launch()
    at Microsoft.WindowsServerSolutions.Storage.MoveData.MainClass.LaunchMoveDataWizard()
    at Microsoft.WindowsServerSolutions.Storage.MoveData.MainClass.Main(String[] args)
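
    Two threads worth pulling from these logs, offered as hypotheses rather than a confirmed fix: the fatal error is a WMI ManagementException ("Not found") raised while DriveUtil.IsDriveRemovable enumerates drives, which is a classic symptom of an inconsistent WMI repository, and the earlier FileNotFoundException shows the wizard also cannot find the scheduled server-backup task. A quick diagnostic sketch from an elevated command prompt:

        rem check the WMI repository the wizard queries when enumerating drives
        winmgmt /verifyrepository
        rem if it reports inconsistencies, attempt an in-place repair
        winmgmt /salvagerepository
        rem see whether the server backup task the log cannot find actually exists
        schtasks /query /fo LIST | findstr /i "backup"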


  • ServerRoot in my lighttpd.conf

    - by michael
    Hi, I have used the following example lighttpd.conf to launch my lighttpd. Can you please tell me where my 'ServerRoot' is?

    # lighttpd configuration file
    #
    # use it as a base for lighttpd 1.0.0 and above
    #
    # $Id: lighttpd.conf,v 1.7 2004/11/03 22:26:05 weigon Exp $

    ############ Options you really have to take care of ####################

    ## modules to load
    # at least mod_access and mod_accesslog should be loaded
    # all other module should only be loaded if really neccesary
    # - saves some time
    # - saves memory
    server.modules = (
    #   "mod_rewrite",
    #   "mod_redirect",
    #   "mod_alias",
        "mod_access",
    #   "mod_trigger_b4_dl",
    #   "mod_auth",
    #   "mod_status",
    #   "mod_setenv",
        "mod_fastcgi",
    #   "mod_proxy",
    #   "mod_simple_vhost",
    #   "mod_evhost",
    #   "mod_userdir",
    #   "mod_cgi",
    #   "mod_compress",
    #   "mod_ssi",
    #   "mod_usertrack",
    #   "mod_expire",
    #   "mod_secdownload",
    #   "mod_rrdtool",
        "mod_accesslog" )

    ## A static document-root. For virtual hosting take a look at the
    ## mod_simple_vhost module.
    server.document-root = "/srv/www/htdocs/"

    ## where to send error-messages to
    server.errorlog = "/var/log/lighttpd/error.log"

    # files to check for if .../ is requested
    index-file.names = ( "index.php", "index.html", "index.htm", "default.htm" )

    ## set the event-handler (read the performance section in the manual)
    # server.event-handler = "freebsd-kqueue" # needed on OS X

    # mimetype mapping
    mimetype.assign = (
      ".pdf" => "application/pdf",
      ".sig" => "application/pgp-signature",
      ".spl" => "application/futuresplash",
      ".class" => "application/octet-stream",
      ".ps" => "application/postscript",
      ".torrent" => "application/x-bittorrent",
      ".dvi" => "application/x-dvi",
      ".gz" => "application/x-gzip",
      ".pac" => "application/x-ns-proxy-autoconfig",
      ".swf" => "application/x-shockwave-flash",
      ".tar.gz" => "application/x-tgz",
      ".tgz" => "application/x-tgz",
      ".tar" => "application/x-tar",
      ".zip" => "application/zip",
      ".mp3" => "audio/mpeg",
      ".m3u" => "audio/x-mpegurl",
      ".wma" => "audio/x-ms-wma",
      ".wax" => "audio/x-ms-wax",
      ".ogg" => "application/ogg",
      ".wav" => "audio/x-wav",
      ".gif" => "image/gif",
      ".jar" => "application/x-java-archive",
      ".jpg" => "image/jpeg",
      ".jpeg" => "image/jpeg",
      ".png" => "image/png",
      ".xbm" => "image/x-xbitmap",
      ".xpm" => "image/x-xpixmap",
      ".xwd" => "image/x-xwindowdump",
      ".css" => "text/css",
      ".html" => "text/html",
      ".htm" => "text/html",
      ".js" => "text/javascript",
      ".asc" => "text/plain",
      ".c" => "text/plain",
      ".cpp" => "text/plain",
      ".log" => "text/plain",
      ".conf" => "text/plain",
      ".text" => "text/plain",
      ".txt" => "text/plain",
      ".dtd" => "text/xml",
      ".xml" => "text/xml",
      ".mpeg" => "video/mpeg",
      ".mpg" => "video/mpeg",
      ".mov" => "video/quicktime",
      ".qt" => "video/quicktime",
      ".avi" => "video/x-msvideo",
      ".asf" => "video/x-ms-asf",
      ".asx" => "video/x-ms-asf",
      ".wmv" => "video/x-ms-wmv",
      ".bz2" => "application/x-bzip",
      ".tbz" => "application/x-bzip-compressed-tar",
      ".tar.bz2" => "application/x-bzip-compressed-tar",
      # default mime type
      "" => "application/octet-stream",
     )

    # Use the "Content-Type" extended attribute to obtain mime type if possible
    #mimetype.use-xattr = "enable"

    ## send a different Server: header
    ## be nice and keep it at lighttpd
    # server.tag = "lighttpd"

    #### accesslog module
    accesslog.filename = "/var/log/lighttpd/access.log"

    ## deny access the file-extensions
    #
    # ~ is for backupfiles from vi, emacs, joe, ...
    # .inc is often used for code includes which should in general not be part
    # of the document-root
    url.access-deny = ( "~", ".inc" )

    $HTTP["url"] =~ "\.pdf$" {
      server.range-requests = "disable"
    }

    ##
    # which extensions should not be handle via static-file transfer
    #
    # .php, .pl, .fcgi are most often handled by mod_fastcgi or mod_cgi
    static-file.exclude-extensions = ( ".php", ".pl", ".fcgi" )

    ######### Options that are good to be but not neccesary to be changed #######

    ## bind to port (default: 80)
    server.port = 9090

    ## bind to localhost (default: all interfaces)
    server.bind = "127.0.0.1"

    ## error-handler for status 404
    #server.error-handler-404 = "/error-handler.html"
    #server.error-handler-404 = "/error-handler.php"

    ## to help the rc.scripts
    #server.pid-file = "/var/run/lighttpd.pid"

    ###### virtual hosts
    ##
    ## If you want name-based virtual hosting add the next three settings and load
    ## mod_simple_vhost
    ##
    ## document-root =
    ## virtual-server-root + virtual-server-default-host + virtual-server-docroot
    ## or
    ## virtual-server-root + http-host + virtual-server-docroot
    ##
    #simple-vhost.server-root = "/srv/www/vhosts/"
    #simple-vhost.default-host = "www.example.org"
    #simple-vhost.document-root = "/htdocs/"

    ##
    ## Format: <errorfile-prefix><status-code>.html
    ## -> ..../status-404.html for 'File not found'
    #server.errorfile-prefix = "/usr/share/lighttpd/errors/status-"
    #server.errorfile-prefix = "/srv/www/errors/status-"

    ## virtual directory listings
    #dir-listing.activate = "enable"

    ## select encoding for directory listings
    #dir-listing.encoding = "utf-8"

    ## enable debugging
    #debug.log-request-header = "enable"
    #debug.log-response-header = "enable"
    #debug.log-request-handling = "enable"
    #debug.log-file-not-found = "enable"

    ### only root can use these options
    #
    # chroot() to directory (default: no chroot() )
    #server.chroot = "/"

    ## change uid to <uid> (default: don't care)
    #server.username = "wwwrun"

    ## change uid to <uid> (default: don't care)
    #server.groupname = "wwwrun"

    #### compress module
    #compress.cache-dir = "/var/cache/lighttpd/compress/"
    #compress.filetype = ("text/plain", "text/html")

    #### proxy module
    ## read proxy.txt for more info
    #proxy.server = ( ".php" =>
    #  ( "localhost" =>
    #    (
    #      "host" => "192.168.0.101",
    #      "port" => 80
    #    )
    #  )
    #)

    #### fastcgi module
    fastcgi.server = ( "/fastcgi_scripts/" =>
      ((
        "host" => "127.0.0.1",
        "port" => 1026,
        "check-local" => "disable",
        "bin-path" => "/usr/local/bin/cgi-fcgi",
        #"docroot" => "/" # remote server may use
                          # it's own docroot
      ))
    )

    ## read fastcgi.txt for more info
    ## for PHP don't forget to set cgi.fix_pathinfo = 1 in the php.ini
    #fastcgi.server = ( ".php" =>
    #  ( "localhost" =>
    #    (
    #      "socket" => "/var/run/lighttpd/php-fastcgi.socket",
    #      "bin-path" => "/usr/local/bin/php-cgi"
    #    )
    #  )
    #)

    #### CGI module
    #cgi.assign = ( ".pl" => "/usr/bin/perl",
    #               ".cgi" => "/usr/bin/perl" )
    #

    #### SSL engine
    #ssl.engine = "enable"
    #ssl.pemfile = "/etc/ssl/private/lighttpd.pem"

    #### status module
    #status.status-url = "/server-status"
    #status.config-url = "/server-config"

    #### auth module
    ## read authentication.txt for more info
    #auth.backend = "plain"
    #auth.backend.plain.userfile = "lighttpd.user"
    #auth.backend.plain.groupfile = "lighttpd.group"
    #auth.backend.ldap.hostname = "localhost"
    #auth.backend.ldap.base-dn = "dc=my-domain,dc=com"
    #auth.backend.ldap.filter = "(uid=$)"
    #auth.require = ( "/server-status" =>
    #  (
    #    "method" => "digest",
    #    "realm" => "download archiv",
    #    "require" => "user=jan"
    #  ),
    #  "/server-config" =>
    #  (
    #    "method" => "digest",
    #    "realm" => "download archiv",
    #    "require" => "valid-user"
    #  )
    #)

    #### url handling modules (rewrite, redirect, access)
    #url.rewrite = ( "^/$" => "/server-status" )
    #url.redirect = ( "^/wishlist/(.+)" => "http://www.123.org/$1" )

    #### both rewrite/redirect support back reference to regex conditional using %n
    #$HTTP["host"] =~ "^www\.(.*)" {
    #  url.redirect = ( "^/(.*)" => "http://%1/$1" )
    #}

    #
    # define a pattern for the host url finding
    # %% => % sign
    # %0 => domain name + tld
    # %1 => tld
    # %2 => domain name without tld
    # %3 => subdomain 1 name
    # %4 => subdomain 2 name
    #
    #evhost.path-pattern = "/srv/www/vhosts/%3/htdocs/"

    #### expire module
    #expire.url = ( "/buggy/" => "access 2 hours", "/asdhas/" => "access plus 1 seconds 2 minutes")

    #### ssi
    #ssi.extension = ( ".shtml" )

    #### rrdtool
    #rrdtool.binary = "/usr/bin/rrdtool"
    #rrdtool.db-name = "/var/lib/lighttpd/lighttpd.rrd"

    #### setenv
    #setenv.add-request-header = ( "TRAV_ENV" => "mysql://user@host/db" )
    #setenv.add-response-header = ( "X-Secret-Message" => "42" )

    ## for mod_trigger_b4_dl
    # trigger-before-download.gdbm-filename = "/var/lib/lighttpd/trigger.db"
    # trigger-before-download.memcache-hosts = ( "127.0.0.1:11211" )
    # trigger-before-download.trigger-url = "^/trigger/"
    # trigger-before-download.download-url = "^/download/"
    # trigger-before-download.deny-url = "http://127.0.0.1/index.html"
    # trigger-before-download.trigger-timeout = 10

    #### variable usage:
    ## variable name without "." is auto prefixed by "var." and becomes "var.bar"
    #bar = 1
    #var.mystring = "foo"

    ## integer add
    #bar += 1
    ## string concat, with integer cast as string, result: "www.foo1.com"
    #server.name = "www." + mystring + var.bar + ".com"
    ## array merge
    #index-file.names = (foo + ".php") + index-file.names
    #index-file.names += (foo + ".php")

    #### include
    #include /etc/lighttpd/lighttpd-inc.conf
    ## same as above if you run: "lighttpd -f /etc/lighttpd/lighttpd.conf"
    #include "lighttpd-inc.conf"

    #### include_shell
    #include_shell "echo var.a=1"
    ## the above is same as:
    #var.a=1

    Thank you.


  • Maven Error - Expected START_TAG or END_TAG not TEXT

    - by onepotato
    I am setting up a Spring MVC web application + Hibernate JPA + Maven from scratch using Eclipse Indigo. I am stuck on this error when doing a Maven build.

    [ERROR] BUILD ERROR
    [INFO] ------------------------------------------------------------------------
    [INFO] Error installing artifact's metadata: Error installing metadata: Error updating group repository metadata
    expected START_TAG or END_TAG not TEXT (position: TEXT seen ...<extension>war</... @13:25)
    [INFO] ------------------------------------------------------------------------

    I tried googling but can't find a solution that works for me. I even searched the whole project for the text <extension>war</ and, mysteriously, there is no text like this in my project. However, the Tomcat web.xml contains a lot of <extension> tags, although I doubt that has anything to do with this error because I never touched that web.xml. Here is my pom.xml:

    <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
      <modelVersion>4.0.0</modelVersion>
      <groupId>com.mycompany.applicationname</groupId>
      <artifactId>Application MVC</artifactId>
      <packaging>war</packaging>
      <version>0.0.1-SNAPSHOT</version>
      <name>Maven Application Webapp</name>
      <url>http://maven.apache.org</url>
      <properties>
        <spring.version>3.0.3.RELEASE</spring.version>
      </properties>
      <dependencies>
        <dependency>
          <groupId>org.springframework</groupId>
          <artifactId>spring-core</artifactId>
          <version>${spring.version}</version>
        </dependency>
        <dependency>
          <groupId>org.springframework</groupId>
          <artifactId>spring-web</artifactId>
          <version>${spring.version}</version>
        </dependency>
        <dependency>
          <groupId>org.springframework</groupId>
          <artifactId>spring-webmvc</artifactId>
          <version>${spring.version}</version>
        </dependency>
        <dependency>
          <groupId>org.springframework</groupId>
          <artifactId>spring-webmvc</artifactId>
          <version>${spring.version}</version>
        </dependency>
        <dependency>
          <groupId>org.hibernate.javax.persistence</groupId>
          <artifactId>hibernate-jpa-2.0-api</artifactId>
          <version>1.0.0.Final</version>
        </dependency>
      </dependencies>
      <build>
        <finalName>ApplicationName</finalName>
      </build>
    </project>

    As Funtik suggested, I did a build with -X. Here is the stack trace.

    [INFO] ------------------------------------------------------------------------
    [ERROR] BUILD ERROR
    [INFO] ------------------------------------------------------------------------
    [INFO] Error installing artifact's metadata: Error installing metadata: Error updating group repository metadata
    expected START_TAG or END_TAG not TEXT (position: TEXT seen ...<extension>war</... @13:25)
    [INFO] ------------------------------------------------------------------------
    [DEBUG] Trace
    org.apache.maven.lifecycle.LifecycleExecutionException: Error installing artifact's metadata: Error installing metadata: Error updating group repository metadata
    at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:583)
    at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalWithLifecycle(DefaultLifecycleExecutor.java:499)
    at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoal(DefaultLifecycleExecutor.java:478)
    at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalAndHandleFailures(DefaultLifecycleExecutor.java:330)
    at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeTaskSegments(DefaultLifecycleExecutor.java:291)
    at org.apache.maven.lifecycle.DefaultLifecycleExecutor.execute(DefaultLifecycleExecutor.java:142)
    at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:336)
    at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:129)
    at org.apache.maven.cli.MavenCli.main(MavenCli.java:287)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:592)
    at org.codehaus.classworlds.Launcher.launchEnhanced(Launcher.java:315)
    at org.codehaus.classworlds.Launcher.launch(Launcher.java:255)
    at org.codehaus.classworlds.Launcher.mainWithExitCode(Launcher.java:430)
    at org.codehaus.classworlds.Launcher.main(Launcher.java:375)
    Caused by: org.apache.maven.plugin.MojoExecutionException: Error installing artifact's metadata: Error installing metadata: Error updating group repository metadata
    at org.apache.maven.plugin.install.InstallMojo.execute(InstallMojo.java:143)
    at org.apache.maven.plugin.DefaultPluginManager.executeMojo(DefaultPluginManager.java:451)
    at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:558)
    ... 16 more
    Caused by: org.apache.maven.artifact.installer.ArtifactInstallationException: Error installing artifact's metadata: Error installing metadata: Error updating group repository metadata
    at org.apache.maven.artifact.installer.DefaultArtifactInstaller.install(DefaultArtifactInstaller.java:91)
    at org.apache.maven.plugin.install.InstallMojo.execute(InstallMojo.java:105)
    ... 18 more
    Caused by: org.apache.maven.artifact.repository.metadata.RepositoryMetadataInstallationException: Error installing metadata: Error updating group repository metadata
    at org.apache.maven.artifact.repository.metadata.DefaultRepositoryMetadataManager.install(DefaultRepositoryMetadataManager.java:463)
    at org.apache.maven.artifact.installer.DefaultArtifactInstaller.install(DefaultArtifactInstaller.java:79)
    ... 19 more
    Caused by: org.apache.maven.artifact.repository.metadata.RepositoryMetadataStoreException: Error updating group repository metadata
    at org.apache.maven.artifact.repository.metadata.AbstractRepositoryMetadata.storeInLocalRepository(AbstractRepositoryMetadata.java:76)
    at org.apache.maven.artifact.repository.metadata.DefaultRepositoryMetadataManager.install(DefaultRepositoryMetadataManager.java:459)
    ... 20 more
    Caused by: org.codehaus.plexus.util.xml.pull.XmlPullParserException: expected START_TAG or END_TAG not TEXT (position: TEXT seen ...<extension>war</... @13:25)
    at org.codehaus.plexus.util.xml.pull.MXParser.nextTag(MXParser.java:1083)
    at org.apache.maven.artifact.repository.metadata.io.xpp3.MetadataXpp3Reader.parseVersioning(MetadataXpp3Reader.java:513)
    at org.apache.maven.artifact.repository.metadata.io.xpp3.MetadataXpp3Reader.parseMetadata(MetadataXpp3Reader.java:352)
    at org.apache.maven.artifact.repository.metadata.io.xpp3.MetadataXpp3Reader.read(MetadataXpp3Reader.java:866)
    at org.apache.maven.artifact.repository.metadata.AbstractRepositoryMetadata.updateRepositoryMetadata(AbstractRepositoryMetadata.java:98)
    at org.apache.maven.artifact.repository.metadata.AbstractRepositoryMetadata.storeInLocalRepository(AbstractRepositoryMetadata.java:68)
    ... 21 more
    [INFO] ------------------------------------------------------------------------
    [INFO] Total time: 2 seconds
    [INFO] Finished at: Thu Jun 27 17:36:23 SGT 2013
    [INFO] Final Memory: 9M/16M
    [INFO] ------------------------------------------------------------------------

    web.xml:

    <?xml version="1.0" encoding="UTF-8"?>
    <web-app id="WebApp_ID" version="2.4" xmlns="http://java.sun.com/xml/ns/j2ee"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd">
      <display-name>Adjustment Tool</display-name>
      <servlet>
        <servlet-name>mvc-dispatcher</servlet-name>
        <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
        <init-param>
          <param-name>contextConfigLocation</param-name>
          <param-value>/WEB-INF/spring-mvc.xml</param-value>
        </init-param>
        <load-on-startup>1</load-on-startup>
      </servlet>
      <servlet-mapping>
        <servlet-name>mvc-dispatcher</servlet-name>
        <url-pattern>/</url-pattern>
      </servlet-mapping>
      <listener>
        <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
      </listener>
    </web-app>

    Any ideas?
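
    Reading the stack trace bottom-up, the parser dies inside AbstractRepositoryMetadata.storeInLocalRepository, i.e. while re-reading metadata already installed in the local repository. That would explain why a project-wide search for "<extension>war</" finds nothing: the offending text most likely lives in a maven-metadata-local.xml under ~/.m2, not in the source tree. A hedged cleanup sketch follows; the repository path is inferred from the pom's groupId, so treat it as an assumption (on Windows, substitute %USERPROFILE%\.m2). Incidentally, spring-webmvc is declared twice in the pom, and an artifactId containing a space ("Application MVC") is unconventional and worth renaming regardless:

        # locate and inspect the locally installed metadata Maven chokes on
        find ~/.m2/repository/com/mycompany/applicationname -name 'maven-metadata*.xml' -exec cat {} \;

        # if a file is visibly truncated or garbled, remove the group's local
        # install and let the next build regenerate it
        rm -rf ~/.m2/repository/com/mycompany/applicationname
        mvn clean install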


  • Problem with Google Web Toolkit Maven Plugin

    - by arreche
    I got an error while following the "PetClinic GWT application in less than 30 minutes" tutorial. Any idea?

    C:\Users\user\Desktop\petclinic>mvn -e gwt:run
    + Error stacktraces are turned on.
    [INFO] Scanning for projects...
    [INFO] ------------------------------------------------------------------------
    [INFO] Building petclinic
    [INFO]    task-segment: [gwt:run]
    [INFO] ------------------------------------------------------------------------
    [INFO] Preparing gwt:run
    [INFO] [aspectj:compile {execution: default}]
    [INFO] [resources:resources {execution: default-resources}]
    [WARNING] Using platform encoding (Cp1252 actually) to copy filtered resources, i.e. build is platform dependent!
    [INFO] Copying 4 resources
    [INFO] [compiler:compile {execution: default-compile}]
    [INFO] Nothing to compile - all classes are up to date
    Downloading: http://repository.springsource.com/maven/bundles/release/org/codehaus/plexus/plexus-components/1.1.6/plexus-components-1.1.6.pom
    [INFO] Unable to find resource 'org.codehaus.plexus:plexus-components:pom:1.1.6' in repository com.springsource.repository.bundles.release (http://repository.springsource.com/maven/bundles/release)
    Downloading: http://repository.springsource.com/maven/bundles/external/org/codehaus/plexus/plexus-components/1.1.6/plexus-components-1.1.6.pom
    [INFO] Unable to find resource 'org.codehaus.plexus:plexus-components:pom:1.1.6' in repository com.springsource.repository.bundles.external (http://repository.springsource.com/maven/bundles/external)
    Downloading: http://repository.springsource.com/maven/bundles/milestone/org/codehaus/plexus/plexus-components/1.1.6/plexus-components-1.1.6.pom
    [INFO] Unable to find resource 'org.codehaus.plexus:plexus-components:pom:1.1.6' in repository com.springsource.repository.bundles.milestone (http://repository.springsource.com/maven/bundles/milestone)
    Downloading: http://maven.springframework.org/milestone/org/codehaus/plexus/plexus-components/1.1.6/plexus-components-1.1.6.pom
    [INFO] Unable to find resource 'org.codehaus.plexus:plexus-components:pom:1.1.6' in repository spring-maven-milestone (http://maven.springframework.org/milestone)
    Downloading: http://google-web-toolkit.googlecode.com/svn/2.1.0.M1/gwt/maven/org/codehaus/plexus/plexus-components/1.1.6/plexus-components-1.1.6.pom
    [INFO] Unable to find resource 'org.codehaus.plexus:plexus-components:pom:1.1.6' in repository gwt-repo (http://google-web-toolkit.googlecode.com/svn/2.1.0.M1/gwt/maven)
    Downloading: http://repo1.maven.org/maven2/org/codehaus/plexus/plexus-components/1.1.6/plexus-components-1.1.6.pom
    [INFO] ------------------------------------------------------------------------
    [ERROR] BUILD ERROR
    [INFO] ------------------------------------------------------------------------
    [INFO] Error building POM (may not be this project's POM).
    Project ID: org.codehaus.plexus:plexus-components:pom:1.1.6
    Reason: Cannot find parent: org.codehaus.plexus:plexus for project: org.codehaus.plexus:plexus-components:pom:1.1.6 for project org.codehaus.plexus:plexus-components:pom:1.1.6
    [INFO] ------------------------------------------------------------------------
    [INFO] Trace
    org.apache.maven.lifecycle.LifecycleExecutionException: Unable to get dependency information: Unable to read the metadata file for artifact 'org.codehaus.plexus:plexus-compiler-api:jar': Cannot find parent: org.codehaus.plexus:plexus for project: org.codehaus.plexus:plexus-components:pom:1.1.6 for project org.codehaus.plexus:plexus-components:pom:1.1.6
      org.codehaus.plexus:plexus-compiler-api:jar:1.5.3
    from the specified remote repositories:
      com.springsource.repository.bundles.release (http://repository.springsource.com/maven/bundles/release),
      com.springsource.repository.bundles.milestone (http://repository.springsource.com/maven/bundles/milestone),
      spring-maven-snapshot (http://maven.springframework.org/snapshot),
      com.springsource.repository.bundles.external (http://repository.springsource.com/maven/bundles/external),
      spring-maven-milestone (http://maven.springframework.org/milestone),
      central (http://repo1.maven.org/maven2),
      gwt-repo (http://google-web-toolkit.googlecode.com/svn/2.1.0.M1/gwt/maven),
      codehaus.org (http://snapshots.repository.codehaus.org),
      JBoss Repo (http://repository.jboss.com/maven2)
    Path to dependency:
      1) org.codehaus.mojo:gwt-maven-plugin:maven-plugin:1.3.1.google
    at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:711)
    at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeStandaloneGoal(DefaultLifecycleExecutor.java:569)
    at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoal(DefaultLifecycleExecutor.java:539)
    at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalAndHandleFailures(DefaultLifecycleExecutor.java:387)
    at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeTaskSegments(DefaultLifecycleExecutor.java:348)
    at org.apache.maven.lifecycle.DefaultLifecycleExecutor.execute(DefaultLifecycleExecutor.java:180)
    at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:328)
    at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:138)
    at org.apache.maven.cli.MavenCli.main(MavenCli.java:362)
    at org.apache.maven.cli.compat.CompatibleMain.main(CompatibleMain.java:60)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.codehaus.classworlds.Launcher.launchEnhanced(Launcher.java:315)
    at org.codehaus.classworlds.Launcher.launch(Launcher.java:255)
    at org.codehaus.classworlds.Launcher.mainWithExitCode(Launcher.java:430)
    at org.codehaus.classworlds.Launcher.main(Launcher.java:375)
    Caused by: org.apache.maven.artifact.resolver.ArtifactResolutionException: Unable to get dependency information: Unable to read the metadata file for artifact 'org.codehaus.plexus:plexus-compiler-api:jar': Cannot find parent: org.codehaus.plexus:plexus for project: org.codehaus.plexus:plexus-components:pom:1.1.6 for project org.codehaus.plexus:plexus-components:pom:1.1.6
      org.codehaus.plexus:plexus-compiler-api:jar:1.5.3
    from the specified remote repositories:
      com.springsource.repository.bundles.release (http://repository.springsource.com/maven/bundles/release),
      com.springsource.repository.bundles.milestone (http://repository.springsource.com/maven/bundles/milestone),
      spring-maven-snapshot (http://maven.springframework.org/snapshot),
      com.springsource.repository.bundles.external (http://repository.springsource.com/maven/bundles/external),
      spring-maven-milestone (http://maven.springframework.org/milestone),
      central (http://repo1.maven.org/maven2),
      gwt-repo (http://google-web-toolkit.googlecode.com/svn/2.1.0.M1/gwt/maven),
      codehaus.org (http://snapshots.repository.codehaus.org),
      JBoss Repo (http://repository.jboss.com/maven2)
    Path to dependency:
      1) org.codehaus.mojo:gwt-maven-plugin:maven-plugin:1.3.1.google
    at org.apache.maven.artifact.resolver.DefaultArtifactCollector.recurse(DefaultArtifactCollector.java:430)
    at org.apache.maven.artifact.resolver.DefaultArtifactCollector.collect(DefaultArtifactCollector.java:74)
    at org.apache.maven.artifact.resolver.DefaultArtifactResolver.resolveTransitively(DefaultArtifactResolver.java:316)
    at org.apache.maven.artifact.resolver.DefaultArtifactResolver.resolveTransitively(DefaultArtifactResolver.java:304)
    at org.apache.maven.plugin.DefaultPluginManager.ensurePluginContainerIsComplete(DefaultPluginManager.java:835)
    at org.apache.maven.plugin.DefaultPluginManager.getConfiguredMojo(DefaultPluginManager.java:647)
    at org.apache.maven.plugin.DefaultPluginManager.executeMojo(DefaultPluginManager.java:468)
    at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:694)
    ... 17 more
    Caused by: org.apache.maven.artifact.metadata.ArtifactMetadataRetrievalException: Unable to read the metadata file for artifact 'org.codehaus.plexus:plexus-compiler-api:jar': Cannot find parent: org.codehaus.plexus:plexus for project: org.codehaus.plexus:plexus-components:pom:1.1.6 for project org.codehaus.plexus:plexus-components:pom:1.1.6
    at org.apache.maven.project.artifact.MavenMetadataSource.retrieveRelocatedProject(MavenMetadataSource.java:200)
    at org.apache.maven.project.artifact.MavenMetadataSource.retrieveRelocatedArtifact(MavenMetadataSource.java:94)
    at org.apache.maven.artifact.resolver.DefaultArtifactCollector.recurse(DefaultArtifactCollector.java:387)
    ... 24 more
    Caused by: org.apache.maven.project.ProjectBuildingException: Cannot find parent: org.codehaus.plexus:plexus for project: org.codehaus.plexus:plexus-components:pom:1.1.6 for project org.codehaus.plexus:plexus-components:pom:1.1.6
    at org.apache.maven.project.DefaultMavenProjectBuilder.assembleLineage(DefaultMavenProjectBuilder.java:1396)
    at org.apache.maven.project.DefaultMavenProjectBuilder.assembleLineage(DefaultMavenProjectBuilder.java:1407)
    at org.apache.maven.project.DefaultMavenProjectBuilder.assembleLineage(DefaultMavenProjectBuilder.java:1407)
    at org.apache.maven.project.DefaultMavenProjectBuilder.buildInternal(DefaultMavenProjectBuilder.java:823)
    at org.apache.maven.project.DefaultMavenProjectBuilder.buildFromRepository(DefaultMavenProjectBuilder.java:255)
    at org.apache.maven.project.artifact.MavenMetadataSource.retrieveRelocatedProject(MavenMetadataSource.java:163)
    ... 26 more
    Caused by: org.apache.maven.project.InvalidProjectModelException: Parse error reading POM. Reason: expected START_TAG or END_TAG not TEXT (position: TEXT seen ...<role>Developer</role>\n 6878/?\r</... @163:16) for project org.codehaus.plexus:plexus at C:\Users\user\.m2\repository\org\codehaus\plexus\plexus\1.0.8\plexus-1.0.8.pom
    at org.apache.maven.project.DefaultMavenProjectBuilder.readModel(DefaultMavenProjectBuilder.java:1610)
    at org.apache.maven.project.DefaultMavenProjectBuilder.readModel(DefaultMavenProjectBuilder.java:1571)
    at org.apache.maven.project.DefaultMavenProjectBuilder.findModelFromRepository(DefaultMavenProjectBuilder.java:562)
    at org.apache.maven.project.DefaultMavenProjectBuilder.assembleLineage(DefaultMavenProjectBuilder.java:1392)
    ... 31 more
    Caused by: org.codehaus.plexus.util.xml.pull.XmlPullParserException: expected START_TAG or END_TAG not TEXT (position: TEXT seen ...<role>Developer</role>\n 6878/?\r</... @163:16)
    at hidden.org.codehaus.plexus.util.xml.pull.MXParser.nextTag(MXParser.java:1095)
    at org.apache.maven.model.io.xpp3.MavenXpp3Reader.parseDeveloper(MavenXpp3Reader.java:1389)
    at org.apache.maven.model.io.xpp3.MavenXpp3Reader.parseModel(MavenXpp3Reader.java:1944)
    at org.apache.maven.model.io.xpp3.MavenXpp3Reader.read(MavenXpp3Reader.java:3912)
    at org.apache.maven.project.DefaultMavenProjectBuilder.readModel(DefaultMavenProjectBuilder.java:1606)
    ... 34 more
    [INFO] ------------------------------------------------------------------------
    [INFO] Total time: 11 seconds
    [INFO] Finished at: Fri May 21 20:28:23 BST 2010
    [INFO] Final Memory: 45M/205M
    [INFO] ------------------------------------------------------------------------
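
    The last "Caused by" is the actionable one: Maven is not failing to download anything new, it is failing to parse a POM already cached on disk at C:\Users\user\.m2\repository\org\codehaus\plexus\plexus\1.0.8\plexus-1.0.8.pom, and the stray "6878/?" text inside a developers block looks like a corrupted or truncated download. A common remedy, sketched here rather than guaranteed, is to delete the cached copy so Maven fetches it again:

        rem remove the corrupted cached POM identified in the stack trace
        rmdir /s /q "C:\Users\user\.m2\repository\org\codehaus\plexus\plexus\1.0.8"
        rem re-run the build; Maven will re-download plexus-1.0.8.pom
        mvn -e gwt:run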


  • half or quarter black screen in android

    - by Mike McKeown
    I have an Android activity that, when I launch it, sometimes (about 1 time in 4) draws only a quarter or half of the screen before showing it. When I change the orientation or press a button, the screen draws properly. I'm just using TextViews, Buttons and fonts - no custom drawing or anything like that. All of my initialisation code is in onCreate(). In this method I'm loading a text file of about 40 lines, and also reading a shared preference. Could this cause a delay so that the activity can't be drawn? Thanks in advance if anyone has seen anything similar. EDIT - I tried commenting out the loading of the word list, but it didn't fix the problem. Here is the activity layout: <?xml version="1.0" encoding="utf-8"?> <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android" android:id="@+id/mainRelativeLayout" android:layout_width="fill_parent" android:layout_height="fill_parent" android:background="@drawable/blue_abstract_background" > <RelativeLayout android:id="@+id/relativeScore" android:layout_width="fill_parent" android:layout_height="wrap_content" android:gravity="center_horizontal" > <TextView android:id="@+id/ScoreLabel" style="@style/yellowShadowText" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_alignParentLeft="true" android:layout_centerVertical="true" android:padding="10dip" android:text="SCORE:" android:textSize="18dp" /> <TextView android:id="@+id/ScoreText" style="@style/yellowShadowText" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_centerVertical="true" android:layout_toRightOf="@id/ScoreLabel" android:padding="5dip" android:text="0" android:textSize="18dp" /> <TextView android:id="@+id/GameTimerText" style="@style/yellowShadowText" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_alignParentRight="true" android:layout_centerVertical="true" android:layout_marginLeft="5dp" android:layout_marginRight="5dp" android:minWidth="25dp" android:text="0" android:textSize="18dp" /> <TextView android:id="@+id/GameTimerLabel" style="@style/yellowShadowText" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_centerVertical="true" android:layout_toLeftOf="@id/GameTimerText" android:padding="10dip" android:text="TIMER:" android:textSize="18dp" /> </RelativeLayout> <RelativeLayout android:id="@+id/relativeHighScore" android:layout_width="fill_parent" android:layout_height="wrap_content" android:layout_below="@id/relativeScore" > <TextView android:id="@+id/HighScoreLabel" style="@style/yellowShadowText" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_alignParentLeft="true" android:padding="10dip" android:text="HIGH SCORE:" android:textSize="16dp" /> <TextView android:id="@+id/HighScoreText" style="@style/yellowShadowText" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_toRightOf="@id/HighScoreLabel" android:padding="10dip" android:text="0" android:textSize="16dp" /> </RelativeLayout> <LinearLayout android:layout_width="fill_parent" android:layout_height="fill_parent" android:layout_alignParentLeft="true" android:layout_below="@+id/relativeHighScore" android:orientation="vertical" > <LinearLayout android:id="@+id/linearMissing" android:layout_width="fill_parent" android:layout_height="80dp" android:layout_gravity="center" android:orientation="vertical" > <View android:layout_width="match_parent" android:layout_height="2dp" 
android:background="@color/yellow_text" /> <TextView android:id="@+id/MissingWordText" style="@style/yellowShadowText" android:layout_width="wrap_content" android:layout_height="fill_parent" android:layout_gravity="center_horizontal" android:layout_marginTop="25dp" android:text="MSSNG WRD" android:textSize="22dp" android:typeface="normal" /> </LinearLayout> <View android:layout_width="match_parent" android:layout_height="2dp" android:background="@color/yellow_text" /> <TableLayout android:id="@+id/buttonsTableLayout" android:layout_width="fill_parent" android:layout_height="wrap_content" > <TableRow android:id="@+id/tableRow3" android:layout_width="fill_parent" android:layout_height="wrap_content" android:gravity="center_horizontal" android:paddingTop="5dp" > <Button android:id="@+id/aVowelButton" android:layout_width="66dp" android:layout_height="66dp" android:layout_gravity="center_horizontal" android:layout_marginBottom="10dp" android:layout_marginRight="5dp" android:background="@drawable/blue_vowel_button" android:text="@string/a" android:textColor="@color/yellow_text" android:textSize="30dp" /> <Button android:id="@+id/eVowelButton" android:layout_width="66dp" android:layout_height="66dp" android:layout_gravity="center_horizontal" android:layout_marginBottom="10dp" android:layout_marginRight="5dp" android:background="@drawable/blue_vowel_button" android:text="@string/e" android:textColor="@color/yellow_text" android:textSize="30dp" /> </TableRow> <TableRow android:id="@+id/tableRow4" android:layout_width="fill_parent" android:layout_height="wrap_content" android:gravity="center_horizontal" > <Button android:id="@+id/iVowelButton" android:layout_width="66dp" android:layout_height="66dp" android:layout_gravity="center_horizontal" android:layout_margin="5dp" android:background="@drawable/blue_vowel_buttoni" android:text="@string/i" android:textColor="@color/yellow_text" android:textSize="30dp" /> <Button android:id="@+id/oVowelButton" android:layout_width="66dp" android:layout_height="66dp" android:layout_gravity="center_horizontal" android:layout_margin="5dp" android:background="@drawable/blue_vowel_button" android:text="@string/o" android:textColor="@color/yellow_text" android:textSize="30dp" /> <Button android:id="@+id/uVowelButton" android:layout_width="66dp" android:layout_height="66dp" android:layout_gravity="center_horizontal" android:layout_margin="5dp" android:background="@drawable/blue_vowel_button" android:text="@string/u" android:textColor="@color/yellow_text" android:textSize="30dp" /> </TableRow> </TableLayout> <TextView android:id="@+id/TimeToStartText" style="@style/yellowShadowText" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_gravity="center" android:textSize="24dp" /> <Button android:id="@+id/startButton" style="@style/yellowShadowText" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_gravity="center_horizontal" android:layout_marginTop="10dp" android:background="@drawable/blue_button_menu" android:gravity="center" android:padding="10dip" android:text="START!" 
android:textSize="28dp" /> </LinearLayout> </RelativeLayout> And the OnCreate() and a few methods: public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_game); isNewWord = true; m_gameScore = 0; m_category = getCategoryFromExtras(); // loadWordList(m_category); initialiseTextViewField(); loadHighScoreAndDifficulty(); setFonts(); setButtons(); setStartButton(); enableDisableButtons(false); } private void loadHighScoreAndDifficulty() { //setting preferences SharedPreferences prefs = this.getSharedPreferences(m_category, Context.MODE_PRIVATE); int score = prefs.getInt(m_category, 0); //0 is the default value m_highScore = score; m_highScoreText.setText(String.valueOf(m_highScore)); prefs = this.getSharedPreferences(DIFFICULTY, Context.MODE_PRIVATE); m_difficulty = prefs.getString(DIFFICULTY, NRML_DIFFICULTY); } private void initialiseTextViewField() { m_startButton = (Button) findViewById(R.id.startButton); m_missingWordText = (TextView) findViewById(R.id.MissingWordText); m_gameScoreText = (TextView) findViewById(R.id.ScoreText); m_gameScoreLabel = (TextView) findViewById(R.id.ScoreLabel); m_gameTimerText = (TextView) findViewById(R.id.GameTimerText); m_gameTimerLabel = (TextView) findViewById(R.id.GameTimerLabel); m_timeToStartText = (TextView) findViewById(R.id.TimeToStartText); m_highScoreLabel = (TextView) findViewById(R.id.HighScoreLabel); m_highScoreText = (TextView) findViewById(R.id.HighScoreText); } private String getCategoryFromExtras() { String category = ""; Bundle extras = getIntent().getExtras(); if (extras != null) { category = extras.getString("category"); } return category; } private void enableDisableButtons(boolean enable) { Button button = (Button) findViewById(R.id.aVowelButton); button.setEnabled(enable); button = (Button) findViewById(R.id.eVowelButton); button.setEnabled(enable); button = (Button) findViewById(R.id.iVowelButton); button.setEnabled(enable); button = (Button) findViewById(R.id.oVowelButton); button.setEnabled(enable); button = (Button) findViewById(R.id.uVowelButton); button.setEnabled(enable); } private void setStartButton() { m_startButton.setOnClickListener(new View.OnClickListener() { public void onClick(View view) { m_gameScore = 0; updateScore(); m_startButton.setVisibility(View.GONE); m_timeToStartText.setVisibility(View.VISIBLE); startPreTimer(); } }); }

    Read the article

  • Multiple full joins in Postgres are slow

    - by blast83
    I have a program that uses the IMDB database, and I am getting very slow performance on my query. It appears that it doesn't apply my WHERE condition until after it materializes everything. I looked around for hints to use but nothing seems to work. Here is my query: SELECT * FROM name as n1 FULL JOIN aka_name ON n1.id = aka_name.person_id FULL JOIN cast_info as t2 ON n1.id = t2.person_id FULL JOIN person_info as t3 ON n1.id = t3.person_id FULL JOIN char_name as t4 ON t2.person_role_id = t4.id FULL JOIN role_type as t5 ON t2.role_id = t5.id FULL JOIN title as t6 ON t2.movie_id = t6.id FULL JOIN aka_title as t7 ON t6.id = t7.movie_id FULL JOIN complete_cast as t8 ON t6.id = t8.movie_id FULL JOIN kind_type as t9 ON t6.kind_id = t9.id FULL JOIN movie_companies as t10 ON t6.id = t10.movie_id FULL JOIN movie_info as t11 ON t6.id = t11.movie_id FULL JOIN movie_info_idx as t19 ON t6.id = t19.movie_id FULL JOIN movie_keyword as t12 ON t6.id = t12.movie_id FULL JOIN movie_link as t13 ON t6.id = t13.linked_movie_id FULL JOIN link_type as t14 ON t13.link_type_id = t14.id FULL JOIN keyword as t15 ON t12.keyword_id = t15.id FULL JOIN company_name as t16 ON t10.company_id = t16.id FULL JOIN company_type as t17 ON t10.company_type_id = t17.id FULL JOIN comp_cast_type as t18 ON t8.status_id = t18.id WHERE n1.id = 2003 Every table is related to the others via foreign-key constraints on the join columns, and all of the mentioned columns are indexed. The query plan details: "Hash Left Join (cost=5838187.01..13756845.07 rows=15579622 width=835) (actual time=146879.213..146891.861 rows=20 loops=1)" " Hash Cond: (t8.status_id = t18.id)" " -> Hash Left Join (cost=5838185.92..13542624.18 rows=15579622 width=822) (actual time=146879.199..146891.833 rows=20 loops=1)" " Hash Cond: (t10.company_type_id = t17.id)" " -> Hash Left Join (cost=5838184.83..13328403.29 rows=15579622 width=797) (actual time=146879.165..146891.781 rows=20 loops=1)" " Hash Cond: (t10.company_id = t16.id)" " -> Hash Left Join (cost=5828372.95..10061752.03 rows=15579622 width=755) (actual time=146426.483..146429.756 rows=20 loops=1)" " Hash Cond: (t12.keyword_id = t15.id)" " -> Hash Left Join (cost=5825164.23..6914088.45 rows=15579622 width=731) (actual time=146372.411..146372.529 rows=20 loops=1)" " Hash Cond: (t13.link_type_id = t14.id)" " -> Merge Left Join (cost=5825162.82..6699867.24 rows=15579622 width=715) (actual time=146372.366..146372.472 rows=20 loops=1)" " Merge Cond: (t6.id = t13.linked_movie_id)" " -> Merge Left Join (cost=5684009.29..6378956.77 rows=15579622 width=699) (actual time=144019.620..144019.711 rows=20 loops=1)" " Merge Cond: (t6.id = t12.movie_id)" " -> Merge Left Join (cost=5182403.90..5622400.75 rows=8502523 width=687) (actual time=136849.731..136849.809 rows=20 loops=1)" " Merge Cond: (t6.id = t19.movie_id)" " -> Merge Left Join (cost=4974472.00..5315778.48 rows=8502523 width=637) (actual time=134972.032..134972.099 rows=20 loops=1)" " Merge Cond: (t6.id = t11.movie_id)" " -> Merge Left Join (cost=1830064.81..2033131.89 rows=1341632 width=561) (actual time=63784.035..63784.062 rows=2 loops=1)" " Merge Cond: (t6.id = t10.movie_id)" " -> Nested Loop Left Join (cost=1417360.29..1594294.02 rows=1044480 width=521) (actual time=59279.246..59279.264 rows=1 loops=1)" " Join Filter: (t6.kind_id = t9.id)" " -> Merge Left Join (cost=1417359.22..1429787.34 rows=1044480 width=507) (actual time=59279.222..59279.224 rows=1 loops=1)" " Merge Cond: (t6.id = t8.movie_id)" " -> Merge Left Join (cost=1405731.84..1414378.65 rows=1044480 width=491) 
(actual time=59121.773..59121.775 rows=1 loops=1)" " Merge Cond: (t6.id = t7.movie_id)" " -> Sort (cost=1346206.04..1348817.24 rows=1044480 width=416) (actual time=58095.230..58095.231 rows=1 loops=1)" " Sort Key: t6.id" " Sort Method: quicksort Memory: 17kB" " -> Hash Left Join (cost=172406.29..456387.53 rows=1044480 width=416) (actual time=57969.371..58095.208 rows=1 loops=1)" " Hash Cond: (t2.movie_id = t6.id)" " -> Hash Left Join (cost=104700.38..256885.82 rows=1044480 width=358) (actual time=49981.493..50006.303 rows=1 loops=1)" " Hash Cond: (t2.role_id = t5.id)" " -> Hash Left Join (cost=104699.11..242522.95 rows=1044480 width=343) (actual time=49981.441..50006.250 rows=1 loops=1)" " Hash Cond: (t2.person_role_id = t4.id)" " -> Hash Left Join (cost=464.96..12283.95 rows=1044480 width=269) (actual time=0.071..0.087 rows=1 loops=1)" " Hash Cond: (n1.id = t3.person_id)" " -> Nested Loop Left Join (cost=0.00..49.39 rows=7680 width=160) (actual time=0.051..0.066 rows=1 loops=1)" " -> Nested Loop Left Join (cost=0.00..17.04 rows=3 width=119) (actual time=0.038..0.041 rows=1 loops=1)" " -> Index Scan using name_pkey on name n1 (cost=0.00..8.68 rows=1 width=39) (actual time=0.022..0.024 rows=1 loops=1)" " Index Cond: (id = 2003)" " -> Index Scan using aka_name_idx_person on aka_name (cost=0.00..8.34 rows=1 width=80) (actual time=0.010..0.010 rows=0 loops=1)" " Index Cond: ((aka_name.person_id = 2003) AND (n1.id = aka_name.person_id))" " -> Index Scan using cast_info_idx_pid on cast_info t2 (cost=0.00..10.77 rows=1 width=41) (actual time=0.011..0.020 rows=1 loops=1)" " Index Cond: ((t2.person_id = 2003) AND (n1.id = t2.person_id))" " -> Hash (cost=463.26..463.26 rows=136 width=109) (actual time=0.010..0.010 rows=0 loops=1)" " -> Index Scan using person_info_idx_pid on person_info t3 (cost=0.00..463.26 rows=136 width=109) (actual time=0.009..0.009 rows=0 loops=1)" " Index Cond: (person_id = 2003)" " -> Hash (cost=42697.62..42697.62 rows=2442362 width=74) (actual time=49305.872..49305.872 rows=2442362 loops=1)" " -> Seq Scan on char_name t4 (cost=0.00..42697.62 rows=2442362 width=74) (actual time=14.066..22775.087 rows=2442362 loops=1)" " -> Hash (cost=1.12..1.12 rows=12 width=15) (actual time=0.024..0.024 rows=12 loops=1)" " -> Seq Scan on role_type t5 (cost=0.00..1.12 rows=12 width=15) (actual time=0.012..0.014 rows=12 loops=1)" " -> Hash (cost=31134.07..31134.07 rows=1573507 width=58) (actual time=7841.225..7841.225 rows=1573507 loops=1)" " -> Seq Scan on title t6 (cost=0.00..31134.07 rows=1573507 width=58) (actual time=21.507..2799.443 rows=1573507 loops=1)" " -> Materialize (cost=59525.80..63203.88 rows=294246 width=75) (actual time=812.376..984.958 rows=192075 loops=1)" " -> Sort (cost=59525.80..60261.42 rows=294246 width=75) (actual time=812.363..922.452 rows=192075 loops=1)" " Sort Key: t7.movie_id" " Sort Method: external merge Disk: 24880kB" " -> Seq Scan on aka_title t7 (cost=0.00..6646.46 rows=294246 width=75) (actual time=24.652..164.822 rows=294246 loops=1)" " -> Materialize (cost=11627.38..12884.43 rows=100564 width=16) (actual time=123.819..149.086 rows=41907 loops=1)" " -> Sort (cost=11627.38..11878.79 rows=100564 width=16) (actual time=123.807..138.530 rows=41907 loops=1)" " Sort Key: t8.movie_id" " Sort Method: external merge Disk: 3136kB" " -> Seq Scan on complete_cast t8 (cost=0.00..1549.64 rows=100564 width=16) (actual time=0.013..10.744 rows=100564 loops=1)" " -> Materialize (cost=1.08..1.15 rows=7 width=14) (actual time=0.016..0.029 rows=7 loops=1)" " -> Seq Scan on 
kind_type t9 (cost=0.00..1.07 rows=7 width=14) (actual time=0.011..0.013 rows=7 loops=1)" " -> Materialize (cost=412704.52..437969.09 rows=2021166 width=40) (actual time=3420.356..4278.545 rows=1028995 loops=1)" " -> Sort (cost=412704.52..417757.43 rows=2021166 width=40) (actual time=3420.349..3953.483 rows=1028995 loops=1)" " Sort Key: t10.movie_id" " Sort Method: external merge Disk: 90960kB" " -> Seq Scan on movie_companies t10 (cost=0.00..35214.66 rows=2021166 width=40) (actual time=13.271..566.893 rows=2021166 loops=1)" " -> Materialize (cost=3144407.19..3269057.42 rows=9972019 width=76) (actual time=65485.672..70083.219 rows=5039009 loops=1)" " -> Sort (cost=3144407.19..3169337.23 rows=9972019 width=76) (actual time=65485.667..68385.550 rows=5038999 loops=1)" " Sort Key: t11.movie_id" " Sort Method: external merge Disk: 735512kB" " -> Seq Scan on movie_info t11 (cost=0.00..212815.19 rows=9972019 width=76) (actual time=15.750..15715.608 rows=9972019 loops=1)" " -> Materialize (cost=207925.01..219867.92 rows=955433 width=50) (actual time=1483.989..1785.636 rows=429401 loops=1)" " -> Sort (cost=207925.01..210313.59 rows=955433 width=50) (actual time=1483.983..1654.165 rows=429401 loops=1)" " Sort Key: t19.movie_id" " Sort Method: external merge Disk: 31720kB" " -> Seq Scan on movie_info_idx t19 (cost=0.00..15047.33 rows=955433 width=50) (actual time=7.284..221.597 rows=955433 loops=1)" " -> Materialize (cost=501605.39..537645.64 rows=2883220 width=12) (actual time=5823.040..6868.242 rows=1597396 loops=1)" " -> Sort (cost=501605.39..508813.44 rows=2883220 width=12) (actual time=5823.026..6477.517 rows=1597396 loops=1)" " Sort Key: t12.movie_id" " Sort Method: external merge Disk: 78888kB" " -> Seq Scan on movie_keyword t12 (cost=0.00..44417.20 rows=2883220 width=12) (actual time=11.672..839.498 rows=2883220 loops=1)" " -> Materialize (cost=141143.93..152995.81 rows=948150 width=16) (actual time=1916.356..2253.004 rows=478358 loops=1)" " -> Sort (cost=141143.93..143514.31 rows=948150 width=16) (actual time=1916.344..2125.698 rows=478358 loops=1)" " Sort Key: t13.linked_movie_id" " Sort Method: external merge Disk: 29632kB" " -> Seq Scan on movie_link t13 (cost=0.00..14607.50 rows=948150 width=16) (actual time=27.610..297.962 rows=948150 loops=1)" " -> Hash (cost=1.18..1.18 rows=18 width=16) (actual time=0.020..0.020 rows=18 loops=1)" " -> Seq Scan on link_type t14 (cost=0.00..1.18 rows=18 width=16) (actual time=0.010..0.012 rows=18 loops=1)" " -> Hash (cost=1537.10..1537.10 rows=91010 width=24) (actual time=54.055..54.055 rows=91010 loops=1)" " -> Seq Scan on keyword t15 (cost=0.00..1537.10 rows=91010 width=24) (actual time=0.006..14.703 rows=91010 loops=1)" " -> Hash (cost=4585.61..4585.61 rows=245461 width=42) (actual time=445.269..445.269 rows=245461 loops=1)" " -> Seq Scan on company_name t16 (cost=0.00..4585.61 rows=245461 width=42) (actual time=12.037..309.961 rows=245461 loops=1)" " -> Hash (cost=1.04..1.04 rows=4 width=25) (actual time=0.013..0.013 rows=4 loops=1)" " -> Seq Scan on company_type t17 (cost=0.00..1.04 rows=4 width=25) (actual time=0.009..0.010 rows=4 loops=1)" " -> Hash (cost=1.04..1.04 rows=4 width=13) (actual time=0.006..0.006 rows=4 loops=1)" " -> Seq Scan on comp_cast_type t18 (cost=0.00..1.04 rows=4 width=13) (actual time=0.002..0.003 rows=4 loops=1)" "Total runtime: 147055.016 ms" Is there anyway to force the name.id = 2003 before it tries to join all the tables together? 
    As you can see, the end result is only 4 tuples, so it seems the join should be fast if the planner used the available indexes after first narrowing the rows with the name clause, however complex the query is.
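    For what it's worth, one standard rewrite for this shape of query: since the WHERE n1.id = 2003 predicate discards every row in which the name side is NULL, the FULL JOINs are semantically LEFT JOINs here, and pinning name down to its single row first gives the planner far less room to go wrong. A hedged sketch of the start of the rewrite (the remaining joins, t5 through t19, follow the same pattern as the original):

        -- Sketch: resolve the single name row first, then join outward with
        -- LEFT JOINs; under "WHERE n1.id = 2003" the FULL JOINs cannot
        -- contribute any extra rows, so this is equivalent.
        SELECT *
        FROM (SELECT * FROM name WHERE id = 2003) AS n1
        LEFT JOIN aka_name           ON n1.id = aka_name.person_id
        LEFT JOIN cast_info   AS t2  ON n1.id = t2.person_id
        LEFT JOIN person_info AS t3  ON n1.id = t3.person_id
        LEFT JOIN char_name   AS t4  ON t2.person_role_id = t4.id
        -- ... and so on, exactly as in the original query ...

    With LEFT JOINs the planner can drive every join from the indexed single row instead of hashing and sorting whole tables, which is what the plan above spends its 147 seconds doing.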

    Read the article

  • Here I am sending my code for presentModalViewController

    - by aman-gupta
    Hi, please help me out. I want to switch to the first view from the third view directly. Here is my code, where I am using presentModalViewController: // // ExperimentAppDelegate.h // Experiment // // Created by Aman on 4/20/10. // Copyright __MyCompanyName__ 2010. All rights reserved. // #import <UIKit/UIKit.h> @class ExperimentViewController; @class SecondView; @class ThirdView; BOOL parentScreenNo; @interface ExperimentAppDelegate : NSObject <UIApplicationDelegate> { UIWindow *window; ExperimentViewController *viewController; SecondView *ptrSecond; ThirdView *ptrThird; } @property (nonatomic, retain) IBOutlet UIWindow *window; @property (nonatomic, retain) IBOutlet ExperimentViewController *viewController; @property (nonatomic, retain) IBOutlet SecondView *ptrSecond; @property (nonatomic, retain) IBOutlet ThirdView *ptrThird; @end ///////////////////////////////////// // // ExperimentAppDelegate.m // Experiment // // Created by Aman on 4/20/10. // Copyright __MyCompanyName__ 2010. All rights reserved. // #import "ExperimentAppDelegate.h" #import "ExperimentViewController.h" @implementation ExperimentAppDelegate @synthesize window; @synthesize viewController,ptrThird,ptrSecond; - (void)applicationDidFinishLaunching:(UIApplication *)application { parentScreenNo = NO; // Override point for customization after app launch [window addSubview:viewController.view]; [window makeKeyAndVisible]; } - (void)dealloc { [viewController release]; [window release]; [super dealloc]; } @end ///////////////////////////////////////// // // ExperimentViewController.h // Experiment // // Created by Aman on 4/20/10. // Copyright __MyCompanyName__ 2010. All rights reserved. // #import <UIKit/UIKit.h> @interface ExperimentViewController : UIViewController { } -(IBAction)second:(id)sender; @end /////////////////////////////// // // ExperimentViewController.m // Experiment // // Created by Aman on 4/20/10. // Copyright __MyCompanyName__ 2010. All rights reserved. // #import "ExperimentViewController.h" #import "ExperimentAppDelegate.h" @implementation ExperimentViewController /* // The designated initializer. Override to perform setup that is required before the view is loaded. - (id)initWithNibName:(NSString *)nibNameOrNil bundle:(NSBundle *)nibBundleOrNil { if (self = [super initWithNibName:nibNameOrNil bundle:nibBundleOrNil]) { // Custom initialization } return self; } */ -(IBAction)second:(id)sender { ExperimentAppDelegate *app = (ExperimentAppDelegate *)[[UIApplication sharedApplication]delegate]; [self presentModalViewController:app.ptrSecond animated:YES]; } /* // Implement loadView to create a view hierarchy programmatically, without using a nib. - (void)loadView { } */ /* // Implement viewDidLoad to do additional setup after loading the view, typically from a nib. - (void)viewDidLoad { [super viewDidLoad]; } */ /* // Override to allow orientations other than the default portrait orientation. - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation { // Return YES for supported orientations return (interfaceOrientation == UIInterfaceOrientationPortrait); } */ - (void)didReceiveMemoryWarning { // Releases the view if it doesn't have a superview. [super didReceiveMemoryWarning]; // Release any cached data, images, etc that aren't in use. } - (void)viewDidUnload { // Release any retained subviews of the main view. // e.g. self.myOutlet = nil; } - (void)dealloc { [super dealloc]; } @end ///////////////////////////////// // // SecondView.h // Experiment // // Created by Aman on 4/20/10. 
// Copyright 2010 __MyCompanyName__. All rights reserved. // #import <UIKit/UIKit.h> @interface SecondView : UIViewController { } -(IBAction)third:(id)sender; -(void)down; @end //////////////////////////////////////////// // // SecondView.m // Experiment // // Created by Aman on 4/20/10. // Copyright 2010 __MyCompanyName__. All rights reserved. // #import "SecondView.h" #import "ExperimentAppDelegate.h" @implementation SecondView /* // The designated initializer. Override if you create the controller programmatically and want to perform customization that is not appropriate for viewDidLoad. - (id)initWithNibName:(NSString *)nibNameOrNil bundle:(NSBundle *)nibBundleOrNil { if (self = [super initWithNibName:nibNameOrNil bundle:nibBundleOrNil]) { // Custom initialization } return self; } */ // Implement viewDidLoad to do additional setup after loading the view, typically from a nib. - (void)viewDidLoad { [super viewDidLoad]; } -(void)viewWillAppear:(BOOL)animated { if(parentScreenNo == YES) { [self performSelector:@selector(down) withObject:nil]; } [super viewWillAppear:YES]; } -(IBAction)third:(id)sender { ExperimentAppDelegate *app = (ExperimentAppDelegate *)[[UIApplication sharedApplication]delegate]; [self presentModalViewController:app.ptrThird animated:YES]; } -(void)down { [self dismissModalViewControllerAnimated:YES]; } /* // Override to allow orientations other than the default portrait orientation. - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation { // Return YES for supported orientations return (interfaceOrientation == UIInterfaceOrientationPortrait); } */ - (void)didReceiveMemoryWarning { // Releases the view if it doesn't have a superview. [super didReceiveMemoryWarning]; // Release any cached data, images, etc that aren't in use. } - (void)viewDidUnload { // Release any retained subviews of the main view. // e.g. self.myOutlet = nil; } - (void)dealloc { [super dealloc]; } @end ////////////////////////////////// // // ThirdView.h // Experiment // // Created by Aman on 4/20/10. // Copyright 2010 __MyCompanyName__. All rights reserved. // #import <UIKit/UIKit.h> @interface ThirdView : UIViewController { } -(IBAction)back:(id)sender; @end ////////////////////////////////// // // ThirdView.m // Experiment // // Created by Aman on 4/20/10. // Copyright 2010 __MyCompanyName__. All rights reserved. // #import "ThirdView.h" #import "ExperimentAppDelegate.h" @implementation ThirdView /* // The designated initializer. Override if you create the controller programmatically and want to perform customization that is not appropriate for viewDidLoad. - (id)initWithNibName:(NSString *)nibNameOrNil bundle:(NSBundle *)nibBundleOrNil { if (self = [super initWithNibName:nibNameOrNil bundle:nibBundleOrNil]) { // Custom initialization } return self; } */ /* // Implement viewDidLoad to do additional setup after loading the view, typically from a nib. - (void)viewDidLoad { [super viewDidLoad]; } */ -(IBAction)back:(id)sender { [self dismissModalViewControllerAnimated:YES]; parentScreenNo = YES; } /* // Override to allow orientations other than the default portrait orientation. - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation { // Return YES for supported orientations return (interfaceOrientation == UIInterfaceOrientationPortrait); } */ - (void)didReceiveMemoryWarning { // Releases the view if it doesn't have a superview. [super didReceiveMemoryWarning]; // Release any cached data, images, etc that aren't in use. 
    } - (void)viewDidUnload { // Release any retained subviews of the main view. // e.g. self.myOutlet = nil; } - (void)dealloc { [super dealloc]; } @end Above is my code for calling the other views. Please tell me how I can get back to my first view by using dismissModalViewController from the third screen. Please help me out.
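    For what it's worth, on this (pre-iOS 5) modal API the usual trick is to send the dismiss message to the controller that presented the first modal: dismissing a modal view controller also tears down anything stacked on top of it. A hedged sketch of back: in ThirdView.m, assuming the presentation chain shown above (root presents SecondView, SecondView presents ThirdView):

        // Hedged sketch: in the old modal API, parentViewController is the
        // presenter, so the root controller is two hops up from ThirdView.
        // Dismissing from there collapses both modal screens in one go.
        -(IBAction)back:(id)sender {
            UIViewController *root = self.parentViewController.parentViewController;
            [root dismissModalViewControllerAnimated:YES];
        }

    This would also remove the need for the parentScreenNo flag and the viewWillAppear: workaround in SecondView.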

    Read the article

  • Need help with workflow in Alfresco

    - by Scott Gartner
    Hello SO community, I haven't had any luck getting help in the Alfresco forums, and I'm hoping for more here. We are building an application based on Alfresco and jBPM and I have defined a workflow, but I have either defined it wrong or am missing something or there are bugs in Alfresco integration with jBPM and I need help figuring out which and fixing it. Here is the problem: I have an advanced workflow and I am trying to launch it from JavaScript. Here is the code I'm using to start the workflow: var nodeId = args.nodeid; var document = search.findNode("workspace://SpacesStore/" + nodeId); var workflowAction = actions.create("start-workflow"); workflowAction.parameters.workflowName = "jbpm$nmwf:MyWorkflow"; workflowAction.parameters["bpm:workflowDescription"] = "Please edit: " + document.name; workflowAction.parameters["bpm:assignees"] = [people.getPerson("admin"), people.getPerson("andyg")]; var futureDate = new Date(); futureDate.setDate(futureDate.getDate() + 7); workflowAction.parameters["bpm:workflowDueDate"] = futureDate; workflowAction.execute(document); This runs fine and e-mail sent from the start node's default transition fires just fine. However, when I go looking for the workflow in my task list it is not there, but it is in my completed task list. The default transition (the only transition) from the start node points at a task node which has four transitions. There are 8 tasks and 22 transitions in the workflow. When I use the workflow console to start the workflow and end the start task, it properly follows the default start node transition to the next task. The new task shows up in "show tasks" but does not show up in "show my tasks" (apparently because the task was marked completed for some reason, though it is not in the "end" node). The task is: task id: jbpm$111 , name: nmwf:submitInEditing , properties: 18 If I do "show transitions" it looks just as I would expect: path: jbpm$62-@ , node: In Editing , active: true task id: jbpm$111 , name: nmwf:submitInEditing, title: submitInEditing title , desc: submitInEditing description , properties: 18 transition id: Submit for Approval , title: Submit for Approval transition id: Request Copyediting Review , title: Request Copyediting Review transition id: Request Legal Review , title: Request Legal Review transition id: Request Review , title: Request Review I don't want to post the entire workflow as it's large, but here are the first two nodes: First the swimlanes: <swimlane name="initiator"></swimlane> <swimlane name="Content Providers"> <assignment actor-id="Content Providers"> <actor>#{bpm_assignees}</actor> </assignment> </swimlane> Now the nodes: <start-state name="start"> <task name="nmwf:submitTask" swimlane="initiator"/> <transition name="" to="In Editing"> <action> <runas>admin</runas> <script> /* Code to send e-mail that a new workflow was started. I get this e-mail. */ </script> </action> </transition> </start-state> <task-node name="In Editing"> <task name="nmwf:submitInEditing" swimlane="Content Providers" /> <!-- I put e-mail sending code in each of these transitions, but none are firing. 
--> <transition to="In Approval" name="Submit for Approval"></transition> <transition to="In Copyediting" name="Request Copyediting Review"></transition> <transition to="In Legal Review" name="Request Legal Review"></transition> <transition to="In Review" name="Request Review"></transition> </task-node> Here is the model for these two nodes: <type name="nmwf:submitTask"> <parent>bpm:startTask</parent> <mandatory-aspects> <aspect>bpm:assignees</aspect> </mandatory-aspects> </type> <type name="nmwf:submitInEditing"> <parent>bpm:workflowTask</parent> <mandatory-aspects> <aspect>bpm:assignees</aspect> </mandatory-aspects> </type> Here is a pseudo-log of running the workflow in the workflow console: :: deploy alfresco/extension/workflow/processdefinition.xml deployed definition id: jbpm$69 , name: jbpm$nmwf:MyWorkflow , title: nmwf:MyWorkflow , version: 28 :: var bpm:assignees* person admin,andyg set var {http://www.alfresco.org/model/bpm/1.0}assignees = [workspace://SpacesStore/73cf1b28-21aa-40ca-9dde-1cff492d0268, workspace://SpacesStore/03297e91-0b89-4db6-b764-5ada2d167424] :: var bpm:package package 1 set var {http://www.alfresco.org/model/bpm/1.0}package = workspace://SpacesStore/6e2bbbbd-b728-4403-be37-dfce55a83641 :: start bpm:assignees bpm:package started workflow id: jbpm$63 , def: nmwf:MyWorkflow path: jbpm$63-@ , node: start , active: true task id: jbpm$112 , name: nmwf:submitTask, title: submitTask title , desc: submitTask description , properties: 16 transition id: [default] , title: Task Done :: show transitions path: jbpm$63-@ , node: start , active: true task id: jbpm$112 , name: nmwf:submitTask, title: submitTask title , desc: submitTask description , properties: 17 transition id: [default] , title: Task Done :: end task jbpm$112 signal sent - path id: jbpm$63-@ path: jbpm$63-@ , node: In Editing , active: true task id: jbpm$113 , name: nmwf:submitInEditing, title: submitInEditing title , desc: submitInEditing description , properties: 17 transition id: Submit for Approval , title: Submit for Approval transition id: Request Copyediting Review , title: Request Copyediting Review transition id: Request Legal Review , title: Request Legal Review transition id: Request Review , title: Request Review :: show tasks task id: jbpm$113 , name: nmwf:submitInEditing , properties: 18 :: show my tasks admin: [there is no output here] I have been making the assumption that the bpm:assignees that I am setting before starting the workflow initially are getting passed to the first task node "In Editing". Clearly the assignees are on the task object and not on the workflow object. I added the assignees aspect to the start-state task so that it could hold them (after I had a problem; initially they were not there) and possibly they are still sitting there, but the start-state has ended before I even get control back from the web script (not that it would help if it wasn't ended, I need it to be in "In Editing" as the start-state is only used to log that the workflow was started). It has always confused me that the properties that I need to set on each task need to be requested before the task is entered (when you choose a transition you must provide the data for the next task before you can actually move to the next task as you have to validate that you have all of the required data first and then signal the transition). However, the code to start the workflow is asynchronous and therefore does not return either the started workflow or the current task (which in my case would be "In Editing"). 
So, either way you cannot set variables such as bpm:assignees and bpm:dueDate. I wonder if this is the problem with the user task list. I'm setting the assignees in the property list, but maybe those assignees are going to the start-state task and are not getting passed to the "In Editing" task? Note that this is my first jBPM workflow, so please don't assume I know what I'm doing. If you see something that looks off, it probably is and I just don't know it. Thanks in advance for any advice or help,
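    One detail that may matter here: in the stock Alfresco jBPM process definitions, a swimlane that should hold several candidate users uses Alfresco's assignment handler with <pooledactors>, whereas <assignment actor-id="Content Providers"> assigns the literal string "Content Providers" as the jBPM actor id - which would explain a task that appears under "show tasks" but never under "show my tasks" for admin. A hedged sketch of the swimlane, assuming the standard Alfresco assignment class (users would then claim the task from their pooled-task list):

        <swimlane name="Content Providers">
          <assignment class="org.alfresco.repo.workflow.jbpm.AlfrescoAssignment">
            <pooledactors>#{bpm_assignees}</pooledactors>
          </assignment>
        </swimlane>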

    Read the article

  • embedded glassfish: java.lang.NoClassDefFoundError: java/util/ServiceLoader

    - by Xinus
    I am trying to embed GlassFish inside my Java program using the embedded API. I am using Maven 2, and its pom.xml is as follows: <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>orh.highmark</groupId> <artifactId>glassfish-test1</artifactId> <version>1.0-SNAPSHOT</version> <packaging>jar</packaging> <name>glassfish-test1</name> <url>http://maven.apache.org</url> <properties> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> </properties> <dependencies> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>3.8.1</version> <scope>test</scope> </dependency> <dependency> <groupId>org.glassfish.extras</groupId> <artifactId>glassfish-embedded-all</artifactId> <version>3.1-SNAPSHOT</version> </dependency> </dependencies> <repositories> <repository> <id>maven2-repository.dev.java.net</id> <name>Java.net Repository for Maven</name> <url>http://download.java.net/maven/2/</url> <layout>default</layout> </repository> <repository> <id>glassfish-repository</id> <name>GlassFish Nexus Repository</name> <url>http://maven.glassfish.org/content/groups/glassfish</url> </repository> </repositories> <build> <plugins> <plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>exec-maven-plugin</artifactId> <version>1.1</version> <executions> <execution> <goals> <goal>java</goal> </goals> </execution> </executions> <configuration> <mainClass>orh.highmark.App</mainClass> <arguments> <argument>argument1</argument> </arguments> </configuration> </plugin> </plugins> </build> </project> Program: public class App { public static void main( String[] args ) { System.out.println( "Hello World!" ); Server.Builder builder = new Server.Builder("test"); builder.logger(true); Server server = builder.build(); } } But for some reason it always fails with java.lang.NoClassDefFoundError: java/util/ServiceLoader. Here is the output: C:\Users\sunils\glassfish-tests\glassfish-test1>mvn -e exec:java + Error stacktraces are turned on. [INFO] Scanning for projects... [INFO] ------------------------------------------------------------------------ [INFO] Building glassfish-test1 [INFO] task-segment: [exec:java] [INFO] ------------------------------------------------------------------------ [INFO] Preparing exec:java [INFO] No goals needed for project - skipping [INFO] [exec:java {execution: default-cli}] Hello World! [INFO] ------------------------------------------------------------------------ [ERROR] BUILD ERROR [INFO] ------------------------------------------------------------------------ [INFO] An exception occured while executing the Java class. null java/util/ServiceLoader [INFO] ------------------------------------------------------------------------ [INFO] Trace org.apache.maven.lifecycle.LifecycleExecutionException: An exception occured while executing the Java class. 
    null
    at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:719)
    at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeStandaloneGoal(DefaultLifecycleExecutor.java:569)
    at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoal(DefaultLifecycleExecutor.java:539)
    at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalAndHandleFailures(DefaultLifecycleExecutor.java:387)
    at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeTaskSegments(DefaultLifecycleExecutor.java:348)
    at org.apache.maven.lifecycle.DefaultLifecycleExecutor.execute(DefaultLifecycleExecutor.java:180)
    at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:328)
    at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:138)
    at org.apache.maven.cli.MavenCli.main(MavenCli.java:362)
    at org.apache.maven.cli.compat.CompatibleMain.main(CompatibleMain.java:60)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:585)
    at org.codehaus.classworlds.Launcher.launchEnhanced(Launcher.java:315)
    at org.codehaus.classworlds.Launcher.launch(Launcher.java:255)
    at org.codehaus.classworlds.Launcher.mainWithExitCode(Launcher.java:430)
    at org.codehaus.classworlds.Launcher.main(Launcher.java:375)
    Caused by: org.apache.maven.plugin.MojoExecutionException: An exception occured while executing the Java class. null
    at org.codehaus.mojo.exec.ExecJavaMojo.execute(ExecJavaMojo.java:345)
    at org.apache.maven.plugin.DefaultPluginManager.executeMojo(DefaultPluginManager.java:490)
    at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:694)
    ... 17 more
    Caused by: java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:585)
    at org.codehaus.mojo.exec.ExecJavaMojo$1.run(ExecJavaMojo.java:290)
    at java.lang.Thread.run(Thread.java:595)
    Caused by: java.lang.NoClassDefFoundError: java/util/ServiceLoader
    at org.glassfish.api.embedded.Server.getMain(Server.java:701)
    at org.glassfish.api.embedded.Server.<init>(Server.java:290)
    at org.glassfish.api.embedded.Server.<init>(Server.java:75)
    at org.glassfish.api.embedded.Server$Builder.build(Server.java:185)
    at org.glassfish.api.embedded.Server$Builder.build(Server.java:167)
    at orh.highmark.App.main(App.java:14)
    ... 6 more
    [INFO] ------------------------------------------------------------------------
    [INFO] Total time: 1 second
    [INFO] Finished at: Sat May 08 02:55:03 IST 2010
    [INFO] Final Memory: 3M/6M
    [INFO] ------------------------------------------------------------------------
    I can't tell whether the problem is with my program or with the GlassFish API. Can somebody please help me understand what is happening here and how to rectify it? Thanks for any clue.
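    The giveaway in this trace is the combination of the missing class and the JDK frame line numbers (Method.java:585, Thread.java:595), which look like a 1.5 JDK: java.util.ServiceLoader only exists from Java 6 onwards, and embedded GlassFish v3 requires Java 6. So the likely fix is simply to run Maven under a Java 6 (or later) JDK - check with mvn -version. To make the requirement explicit in the build, something like the enforcer plugin can be added to the POM; a hedged sketch:

        <!-- Hedged sketch: fail fast when the build runs on a pre-1.6 JDK. -->
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-enforcer-plugin</artifactId>
          <executions>
            <execution>
              <id>enforce-java</id>
              <goals><goal>enforce</goal></goals>
              <configuration>
                <rules>
                  <requireJavaVersion>
                    <version>[1.6,)</version>
                  </requireJavaVersion>
                </rules>
              </configuration>
            </execution>
          </executions>
        </plugin>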

    Read the article

  • Solving embarrassingly parallel problems using Python multiprocessing

    - by gotgenes
    How does one use multiprocessing to tackle embarrassingly parallel problems? Embarrassingly parallel problems typically consist of three basic parts: Read input data (from a file, database, tcp connection, etc.). Run calculations on the input data, where each calculation is independent of any other calculation. Write results of calculations (to a file, database, tcp connection, etc.). We can parallelize the program in two dimensions: Part 2 can run on multiple cores, since each calculation is independent; order of processing doesn't matter. Each part can run independently. Part 1 can place data on an input queue, part 2 can pull data off the input queue and put results onto an output queue, and part 3 can pull results off the output queue and write them out. This seems like one of the most basic patterns in concurrent programming, but I am still lost trying to solve it, so let's write a canonical example to illustrate how this is done using multiprocessing. Here is the example problem: Given a CSV file with rows of integers as input, compute their sums. Separate the problem into three parts, which can all run in parallel: Process the input file into raw data (lists/iterables of integers) Calculate the sums of the data, in parallel Output the sums Below is a traditional, single-process Python program that solves these three tasks: #!/usr/bin/env python # -*- coding: UTF-8 -*- # basicsums.py """A program that reads integer values from a CSV file and writes out their sums to another CSV file. """ import csv import optparse import sys def make_cli_parser(): """Make the command line interface parser.""" usage = "\n\n".join(["python %prog INPUT_CSV OUTPUT_CSV", __doc__, """ ARGUMENTS: INPUT_CSV: an input CSV file with rows of numbers OUTPUT_CSV: an output file that will contain the sums\ """]) cli_parser = optparse.OptionParser(usage) return cli_parser def parse_input_csv(csvfile): """Parses the input CSV and yields tuples with the index of the row as the first element, and the integers of the row as the second element. The index is zero-index based. :Parameters: - `csvfile`: a `csv.reader` instance """ for i, row in enumerate(csvfile): row = [int(entry) for entry in row] yield i, row def sum_rows(rows): """Yields a tuple with the index of each input list of integers as the first element, and the sum of the list of integers as the second element. The index is zero-index based. :Parameters: - `rows`: an iterable of tuples, with the index of the original row as the first element, and a list of integers as the second element """ for i, row in rows: yield i, sum(row) def write_results(csvfile, results): """Writes a series of results to an outfile, where the first column is the index of the original row of data, and the second column is the result of the calculation. The index is zero-index based. 
:Parameters: - `csvfile`: a `csv.writer` instance to which to write results - `results`: an iterable of tuples, with the index (zero-based) of the original row as the first element, and the calculated result from that row as the second element """ for result_row in results: csvfile.writerow(result_row) def main(argv): cli_parser = make_cli_parser() opts, args = cli_parser.parse_args(argv) if len(args) != 2: cli_parser.error("Please provide an input file and output file.") infile = open(args[0]) in_csvfile = csv.reader(infile) outfile = open(args[1], 'w') out_csvfile = csv.writer(outfile) # gets an iterable of rows that's not yet evaluated input_rows = parse_input_csv(in_csvfile) # sends the rows iterable to sum_rows() for results iterable, but # still not evaluated result_rows = sum_rows(input_rows) # finally evaluation takes place as a chain in write_results() write_results(out_csvfile, result_rows) infile.close() outfile.close() if __name__ == '__main__': main(sys.argv[1:]) Let's take this program and rewrite it to use multiprocessing to parallelize the three parts outlined above. Below is a skeleton of this new, parallelized program, that needs to be fleshed out to address the parts in the comments: #!/usr/bin/env python # -*- coding: UTF-8 -*- # multiproc_sums.py """A program that reads integer values from a CSV file and writes out their sums to another CSV file, using multiple processes if desired. """ import csv import multiprocessing import optparse import sys NUM_PROCS = multiprocessing.cpu_count() def make_cli_parser(): """Make the command line interface parser.""" usage = "\n\n".join(["python %prog INPUT_CSV OUTPUT_CSV", __doc__, """ ARGUMENTS: INPUT_CSV: an input CSV file with rows of numbers OUTPUT_CSV: an output file that will contain the sums\ """]) cli_parser = optparse.OptionParser(usage) cli_parser.add_option('-n', '--numprocs', type='int', default=NUM_PROCS, help="Number of processes to launch [DEFAULT: %default]") return cli_parser def main(argv): cli_parser = make_cli_parser() opts, args = cli_parser.parse_args(argv) if len(args) != 2: cli_parser.error("Please provide an input file and output file.") infile = open(args[0]) in_csvfile = csv.reader(infile) outfile = open(args[1], 'w') out_csvfile = csv.writer(outfile) # Parse the input file and add the parsed data to a queue for # processing, possibly chunking to decrease communication between # processes. # Process the parsed data as soon as any (chunks) appear on the # queue, using as many processes as allotted by the user # (opts.numprocs); place results on a queue for output. # # Terminate processes when the parser stops putting data in the # input queue. # Write the results to disk as soon as they appear on the output # queue. # Ensure all child processes have terminated. # Clean up files. infile.close() outfile.close() if __name__ == '__main__': main(sys.argv[1:]) These pieces of code, as well as another piece of code that can generate example CSV files for testing purposes, can be found on github. I would appreciate any insight here as to how you concurrency gurus would approach this problem. Here are some questions I had when thinking about this problem. Bonus points for addressing any/all: Should I have child processes for reading in the data and placing it into the queue, or can the main process do this without blocking until all input is read? Likewise, should I have a child process for writing the results out from the processed queue, or can the main process do this without having to wait for all the results? 
    Should I use a process pool for the sum operations? If yes, what method do I call on the pool to get it to start processing the results coming into the input queue, without blocking the input and output processes, too? apply_async()? map_async()? imap()? imap_unordered()? Suppose we didn't need to siphon off the input and output queues as data entered them, but could wait until all input was parsed and all results were calculated (e.g., because we know all the input and output will fit in system memory). Should we change the algorithm in any way (e.g., not run any processes concurrently with I/O)?
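    To make the discussion concrete, here is a minimal sketch of the queue-plumbed version - my own fleshing-out of the skeleton above, not a canonical answer. One reader feeds an input queue, numprocs workers sum rows in parallel, and the main process drains the output queue; None sentinels (one per worker) handle shutdown:

        # Hedged sketch: N workers joined to the main process by two queues.
        # Here the main process does the reading and writing itself, so only
        # part 2 is parallel.
        import multiprocessing

        def worker(in_q, out_q):
            for i, row in iter(in_q.get, None):  # loop until the None sentinel
                out_q.put((i, sum(row)))

        def parallel_sums(rows, numprocs):
            in_q, out_q = multiprocessing.Queue(), multiprocessing.Queue()
            procs = [multiprocessing.Process(target=worker, args=(in_q, out_q))
                     for _ in range(numprocs)]
            for p in procs:
                p.start()
            count = 0
            for item in enumerate(rows):         # part 1: feed the input queue
                in_q.put(item)
                count += 1
            for _ in procs:                      # one sentinel per worker
                in_q.put(None)
            results = sorted(out_q.get() for _ in range(count))  # part 3: drain
            for p in procs:
                p.join()
            return results

        if __name__ == '__main__':
            print parallel_sums([[1, 2, 3], [10, 20], [0]], 2)

    The main caveat: because the reader and writer here are both the main process, parts 1 and 3 do not overlap; for output too large to hold in memory, the writer would move into its own Process exactly as the skeleton's comments suggest.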

    Read the article

  • Why do I get Detached Entity exception when upgrading Spring Boot 1.1.4 to 1.1.5

    - by mmeany
    On updating Spring Boot from 1.1.4 to 1.1.5, a simple web application started generating detached entity exceptions. Specifically, a post-authentication interceptor that bumped the number of visits was causing the problem. A quick check of loaded dependencies showed that Spring Data had been updated from 1.6.1 to 1.6.2, and a further check of the change log shows a couple of fixed issues relating to optimistic locking, version fields and JPA. Well, I am using a version field, and it starts out as null, following the specification's recommendation not to set it. I have produced a very simple test scenario where I get detached entity exceptions if the version field starts as null or zero. If I create an entity with version 1, however, I do not get these exceptions. Is this expected behaviour, or is there still something amiss? Below is the test scenario I have for this condition. In the scenario the service layer has been annotated @Transactional. Each test case makes multiple calls to the service layer - the tests are working with detached entities, as this is the scenario I am working with in the full-blown application. The test case comprises four tests: Test 1 - versionNullCausesAnExceptionOnUpdate() In this test the version field in the detached object is null. This is how I would usually create the object prior to passing it to the service. This test fails with a detached entity exception. I would have expected this test to pass. If there is a flaw in the test then the rest of the scenario is probably moot. Test 2 - versionZeroCausesExceptionOnUpdate() In this test I have set the version to the value Long(0L). This is an edge-case test, included because I found reference to zero values being used for the version field in the Spring Data change log. This test fails with a detached entity exception. Of interest simply because the following two tests pass, leaving this as an anomaly. Test 3 - versionOneDoesNotCausesExceptionOnUpdate() In this test the version field is set to the value Long(1L). Not something I would usually do, but considering the notes in the Spring Data change log I decided to give it a go. This test passes. I would not usually set the version field, but this looks like a work-around until I figure out why the first test is failing. Test 4 - versionOneDoesNotCausesExceptionWithMultipleUpdates() Encouraged by the result of test 3, I pushed the scenario a step further and performed multiple updates on the entity that started life with a version of Long(1L). This test passes. Reinforcement that this may be a usable work-around. The entity: package com.mvmlabs.domain; import javax.persistence.Column; import javax.persistence.Entity; import javax.persistence.GeneratedValue; import javax.persistence.GenerationType; import javax.persistence.Id; import javax.persistence.Table; import javax.persistence.Version; @Entity @Table(name="user_details") public class User { @Id @GeneratedValue(strategy=GenerationType.AUTO) private Long id; @Version private Long version; @Column(nullable = false, unique = true) private String username; @Column(nullable = false) private Integer numberOfVisits; public Long getId() { return id; } public void setId(Long id) { this.id = id; } public Long getVersion() { return version; } public void setVersion(Long version) { this.version = version; } public Integer getNumberOfVisits() { return numberOfVisits == null ? 
0 : numberOfVisits; } public void setNumberOfVisits(Integer numberOfVisits) { this.numberOfVisits = numberOfVisits; } public String getUsername() { return username; } public void setUsername(String username) { this.username = username; } } The repository: package com.mvmlabs.dao; import org.springframework.data.repository.CrudRepository; import com.mvmlabs.domain.User; public interface UserDao extends CrudRepository<User, Long>{ } The service interface: package com.mvmlabs.service; import com.mvmlabs.domain.User; public interface UserService { User save(User user); User loadUser(Long id); User registerVisit(User user); } The service implementation: package com.mvmlabs.service; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.stereotype.Service; import org.springframework.transaction.annotation.Propagation; import org.springframework.transaction.annotation.Transactional; import org.springframework.transaction.support.TransactionSynchronizationManager; import com.mvmlabs.dao.UserDao; import com.mvmlabs.domain.User; @Service @Transactional(propagation=Propagation.REQUIRED, readOnly=false) public class UserServiceJpaImpl implements UserService { @Autowired private UserDao userDao; @Transactional(readOnly=true) @Override public User loadUser(Long id) { return userDao.findOne(id); } @Override public User registerVisit(User user) { user.setNumberOfVisits(user.getNumberOfVisits() + 1); return userDao.save(user); } @Override public User save(User user) { return userDao.save(user); } } The application class: package com.mvmlabs; import org.springframework.boot.SpringApplication; import org.springframework.boot.autoconfigure.EnableAutoConfiguration; import org.springframework.context.annotation.ComponentScan; import org.springframework.context.annotation.Configuration; @Configuration @ComponentScan @EnableAutoConfiguration public class Application { public static void main(String[] args) { SpringApplication.run(Application.class, args); } } The POM: <?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>com.mvmlabs</groupId> <artifactId>jpa-issue</artifactId> <version>0.0.1-SNAPSHOT</version> <packaging>jar</packaging> <name>spring-boot-jpa-issue</name> <description>JPA Issue between spring boot 1.1.4 and 1.1.5</description> <parent> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-parent</artifactId> <version>1.1.5.RELEASE</version> <relativePath /> <!-- lookup parent from repository --> </parent> <dependencies> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-data-jpa</artifactId> </dependency> <dependency> <groupId>org.hsqldb</groupId> <artifactId>hsqldb</artifactId> <scope>runtime</scope> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-test</artifactId> <scope>test</scope> </dependency> </dependencies> <properties> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <start-class>com.mvmlabs.Application</start-class> <java.version>1.7</java.version> </properties> <build> <plugins> <plugin> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-maven-plugin</artifactId> </plugin> </plugins> </build> </project> The application properties: spring.jpa.hibernate.ddl-auto: create 
spring.jpa.hibernate.naming_strategy: org.hibernate.cfg.ImprovedNamingStrategy spring.jpa.database: HSQL spring.jpa.show-sql: true spring.datasource.url=jdbc:hsqldb:file:./target/testdb spring.datasource.username=sa spring.datasource.password= spring.datasource.driverClassName=org.hsqldb.jdbcDriver The test case: package com.mvmlabs; import org.junit.Assert; import org.junit.Test; import org.junit.runner.RunWith; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.boot.test.SpringApplicationConfiguration; import org.springframework.test.context.junit4.SpringJUnit4ClassRunner; import com.mvmlabs.domain.User; import com.mvmlabs.service.UserService; @RunWith(SpringJUnit4ClassRunner.class) @SpringApplicationConfiguration(classes = Application.class) public class ApplicationTests { @Autowired UserService userService; @Test public void versionNullCausesAnExceptionOnUpdate() throws Exception { User user = new User(); user.setUsername("Version Null"); user.setNumberOfVisits(0); user.setVersion(null); user = userService.save(user); user = userService.registerVisit(user); Assert.assertEquals(new Integer(1), user.getNumberOfVisits()); Assert.assertEquals(new Long(1L), user.getVersion()); } @Test public void versionZeroCausesExceptionOnUpdate() throws Exception { User user = new User(); user.setUsername("Version Zero"); user.setNumberOfVisits(0); user.setVersion(0L); user = userService.save(user); user = userService.registerVisit(user); Assert.assertEquals(new Integer(1), user.getNumberOfVisits()); Assert.assertEquals(new Long(1L), user.getVersion()); } @Test public void versionOneDoesNotCausesExceptionOnUpdate() throws Exception { User user = new User(); user.setUsername("Version One"); user.setNumberOfVisits(0); user.setVersion(1L); user = userService.save(user); user = userService.registerVisit(user); Assert.assertEquals(new Integer(1), user.getNumberOfVisits()); Assert.assertEquals(new Long(2L), user.getVersion()); } @Test public void versionOneDoesNotCausesExceptionWithMultipleUpdates() throws Exception { User user = new User(); user.setUsername("Version One Multiple"); user.setNumberOfVisits(0); user.setVersion(1L); user = userService.save(user); user = userService.registerVisit(user); user = userService.registerVisit(user); user = userService.registerVisit(user); Assert.assertEquals(new Integer(3), user.getNumberOfVisits()); Assert.assertEquals(new Long(4L), user.getVersion()); } } The first two tests fail with detached entity exception. The last two tests pass as expected. Now change Spring Boot version to 1.1.4 and rerun, all tests pass. Are my expectations wrong? Edit: This code saved to GitHub at https://github.com/mmeany/spring-boot-detached-entity-issue
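    A follow-up observation rather than a definitive answer: Spring Data JPA decides between EntityManager.persist() and merge() by inspecting the entity's @Version field (a null wrapper-typed version is treated as "new"), and the 1.6.2 change log entries mentioned above touch exactly that logic. One way to take the heuristic out of play entirely is to state newness explicitly by implementing Persistable; a hedged sketch against the entity above:

        // Hedged sketch: make the "is this entity new?" decision explicit
        // instead of relying on the @Version-based heuristic that changed
        // between Spring Data releases.
        import org.springframework.data.domain.Persistable;

        @Entity
        @Table(name = "user_details")
        public class User implements Persistable<Long> {

            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            private Long id;

            @Version
            private Long version;

            @Override
            public Long getId() { return id; }

            @Override
            public boolean isNew() { return id == null; }  // new iff no id assigned yet

            // ... remaining fields and accessors as in the original entity ...
        }

    With isNew() defined this way, save() calls persist() only for entities that have never been assigned an id, regardless of what the version field holds.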

    Read the article

  • Dealing with external processes

    - by Jesse Aldridge
    I've been working on a GUI app that needs to manage external processes. Working with external processes leads to a lot of issues that can make a programmer's life difficult, and I feel like maintenance on this app is taking an unacceptably long time. I've been trying to list the things that make working with external processes difficult so that I can come up with ways of mitigating the pain. This kind of turned into a rant, which I thought I'd post here in order to get some feedback and to provide some guidance to anybody thinking about sailing into these very murky waters. Here's what I've got so far:

    1. Output from the child can get mixed up with output from the parent. This can make both outputs misleading and hard to read, and it can be hard to tell what came from where. It becomes harder to figure out what's going on when things are asynchronous. Here's a contrived example:

        import textwrap, os, time
        from subprocess import Popen

        test_path = 'test_file.py'
        with open(test_path, 'w') as file:
            file.write(textwrap.dedent('''
                import time
                for i in range(3):
                    print 'Hello %i' % i
                    time.sleep(1)'''))

        proc = Popen('python -B "%s"' % test_path)
        for i in range(3):
            print 'Hello %i' % i
            time.sleep(1)
        os.remove(test_path)

    I guess I could have the child process write its output to a file. But it can be annoying to have to open up a file every time I want to see the result of a print statement. If I have the code for the child process, I could add a label, something like print 'child: Hello %i', but it can be annoying to do that for every print, and it adds some noise to the output. And of course I can't do it if I don't have access to the code. I could manually manage the process output, but then you open up a huge can of worms with threads and polling and stuff like that. A simple solution is to treat processes like synchronous functions, that is, no further code executes until the process completes. In other words, make the process block. But that doesn't work if you're building a GUI app. Which brings me to the next problem...

    2. Blocking processes cause the GUI to become unresponsive.

        import textwrap, sys, os
        from subprocess import Popen
        from PyQt4.QtGui import *
        from PyQt4.QtCore import *

        test_path = 'test_file.py'
        with open(test_path, 'w') as file:
            file.write(textwrap.dedent('''
                import time
                for i in range(3):
                    print 'Hello %i' % i
                    time.sleep(1)'''))

        app = QApplication(sys.argv)
        button = QPushButton('Launch process')

        def launch_proc():
            # Can't move the window until the process completes
            proc = Popen('python -B "%s"' % test_path)
            proc.communicate()

        button.connect(button, SIGNAL('clicked()'), launch_proc)
        button.show()
        app.exec_()
        os.remove(test_path)

    Qt provides a process wrapper of its own called QProcess which can help with this. You can connect functions to signals to capture output relatively easily, and this is what I'm currently using. But I'm finding that all these signals behave suspiciously like goto statements and can lead to spaghetti code. I think I want to get sort-of blocking behavior by having the 'finished' signal from QProcess call a function containing all the code that comes after the process call. I think that should work, but I'm still a bit fuzzy on the details...

    3. Stack traces get interrupted when you go from the child process back to the parent process. If a normal function screws up, you get a nice complete stack trace with filenames and line numbers. If a subprocess screws up, you'll be lucky if you get any output at all.
    You end up having to do a lot more detective work every time something goes wrong. Speaking of which...

    4. Output has a way of disappearing when dealing with external processes. For example, if you run something via the Windows 'cmd' command, the console will pop up, execute the code, and then disappear before you have a chance to see the output. You have to pass the /k flag to make it stick around, and similar issues seem to crop up all the time. I suppose both problems 3 and 4 have the same root cause: no exception handling. Exception handling is meant to be used with functions; it doesn't work with processes. Maybe there's some way to get something like exception handling for processes? I guess that's what stderr is for? But dealing with two different streams can be annoying in itself. Maybe I should look into this more...

    5. Processes can hang and stick around in the background without you realizing it. So you end up yelling at your computer because it's going so slow, until you finally bring up your task manager and see 30 instances of the same process hanging out in the background. Also, hanging background processes can interfere with other instances of the process in various fun ways, such as causing permission errors by holding a handle to a file or something like that. It seems like an easy solution would be to have the parent process kill the child process on exit if the child didn't close itself. But if the parent process crashes, cleanup code might not get called and the child can be left hanging. Also, if the parent waits for the child to complete, and the child is stuck in an infinite loop or something, you can end up with two hanging processes. This problem can tie in to problem 2 for extra fun, causing your GUI to stop responding entirely and forcing you to kill everything with the task manager.

    6. F***ing quotes. Parameters often need to be passed to processes. This is a headache in itself, especially if you're dealing with file paths, say 'C:/My Documents/whatever/'. If you don't have quotes, the string will often be split at the space and interpreted as two arguments. If you need nested quotes you can use ' and ", but if you need more than two layers of quotes, you have to do some nasty escaping, for example: "cmd /k 'python \'path 1\' \'path 2\''". A good solution to this problem is passing parameters as a list rather than as a single string; subprocess allows you to do this.

    7. You can't easily return data from a subprocess. You can use stdout, of course, but what if you want to throw a print in there for debugging purposes? That's going to screw up the parent if it's expecting output formatted a certain way. In functions you can print one string and return another, and everything works just fine.

    8. Obscure command-line flags and a crappy terminal-based help system. These are problems I often run into when using OS-level apps. Like the /k flag I mentioned for holding a cmd window open: whose idea was that? Unix apps don't tend to be much friendlier in this regard. Hopefully you can use Google or Stack Overflow to find the answer you need, but if not, you've got a lot of boring reading and frustrating trial and error to do.

    9. External factors. This one's kind of fuzzy, but when you leave the relatively sheltered harbor of your own scripts to deal with external processes, you find yourself having to deal with the "outside world" to a much greater extent, and that's a scary place. All sorts of things can go wrong. Just to give a random example: the cwd in which a process is run can modify its behavior.
There are probably other issues, but those are the ones I've written down so far. Any other snags you'd like to add? Any suggestions for dealing with these problems?
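    A minimal sketch (Python 2 to match the snippets above, standard library only; launch() and the 'child-out'/'child-err' labels are illustrative names) combining two of the mitigations discussed: arguments passed as a list so quoting never comes up, and child output labeled from reader threads so it can't be mistaken for the parent's:

        import threading
        from subprocess import Popen, PIPE

        def launch(args):
            # args is a list, e.g. ['python', '-B', 'path with spaces.py'],
            # so no manual quoting or escaping is needed.
            proc = Popen(args, stdout=PIPE, stderr=PIPE)

            def pump(stream, label):
                # Prefix every line so parent and child output stay distinguishable.
                for line in iter(stream.readline, ''):
                    print '%s: %s' % (label, line.rstrip())
                stream.close()

            # Daemon reader threads keep a GUI main loop responsive while the
            # child runs, instead of blocking on proc.communicate().
            for stream, label in ((proc.stdout, 'child-out'), (proc.stderr, 'child-err')):
                thread = threading.Thread(target=pump, args=(stream, label))
                thread.daemon = True
                thread.start()
            return proc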

    Read the article

  • getView only shows flagged backgrounds for initially drawn views; it does not show the flagged background when scrolling to view more items in the list

    - by Leoa
    I am trying to create a ListView that receives a flagged list of items to indicate a status to the user. I have been able to create the flag display by using a yellow background (see the image at the bottom). In theory, the flagged list can have many flagged items in it. However, in my app only the first three flagged backgrounds are shown; I believe this is because they are initially drawn to the screen. The flagged backgrounds that are not drawn to the screen initially do not show. I'd like to know how to get the remaining flags to show in the list. ListView recycling: the backgrounds in the ListView are being recycled in getView(). This recycling goes from position 0 to position 9. I have flags that need to match at positions 13, 14 and so on; those positions are not being displayed. listView.getCheckedItemPositions() for multiple selections: this method will not work in my case because the user will not select the flags; the flags are coming from the server. setNotifyOnChange() and/or public virtual void SetNotifyOnChange (bool notifyOnChange): I'm not adding new data to the list, so I don't see how this method would work for my program. Does this method communicate to getView when it is recycling data? I was unable to find an answer to this in my research. public void registerDataSetObserver: this may be overkill for my problem, but is it possible to have an observer object that keeps track of all the positions in my items list and in my flag list, no matter whether the view is recycled, and match them on the screen? Code: package com.convention.notification.app; import java.util.Iterator; import java.util.ArrayList; import java.util.List; import android.app.Activity; import android.content.Context; import android.graphics.Color; import android.os.Bundle; import android.text.Html; import android.util.Log; import android.view.LayoutInflater; import android.view.View; import android.view.ViewGroup; import android.view.ViewParent; import android.widget.AdapterView; import android.widget.ArrayAdapter; import android.widget.TextView; public class NewsRowAdapter extends ArrayAdapter<Item> { private Activity activity; private List<Item> items; private Item objBean; private int row; private List<Integer> disable; View view ; int disableView; public NewsRowAdapter(Activity act, int resource, List<Item> arrayList, List<Integer> disableList) { super(act, resource, arrayList); this.activity = act; this.row = resource; this.items = arrayList; this.disable=disableList; System.out.println("results of delete list a:"+disable.toString()); } public int getCount() { return items.size(); } public Item getItem(int position) { return items.get(position); } public long getItemId(int position) { return position; } @Override public int getItemViewType(int position) { for(int k =0;k < disable.size();k++){ if(position==disable.get(k)){ //System.out.println( "is "+position+" value of disable "+disable.get(k)); disableView=disable.get(k); //AdapterView.getItemAtPosition(position); } } return position; } @Override public View getView(final int position, View convertView, ViewGroup parent) { View view = convertView; ViewHolder holder; if (view == null) { LayoutInflater inflater = (LayoutInflater) activity.getSystemService(Context.LAYOUT_INFLATER_SERVICE); view = inflater.inflate(row, null); getItemViewType(position); long id=getItemId(position); if(position==disableView){ view.setBackgroundColor(Color.YELLOW); System.out.println(" background set to yellow at position "+position +" disableView is at "+disableView); }else{
view.setBackgroundColor(Color.WHITE); System.out.println(" background set to white at position "+position +" disableView is at "+disableView); } //ViewHolder is a custom class that gets TextViews by name: tvName, tvCity, tvBDate, tvGender, tvAge; holder = new ViewHolder(); /* setTag Sets the tag associated with this view. A tag can be used to * mark a view in its hierarchy and does not have to be unique * within the hierarchy. Tags can also be used to store data within * a view without resorting to another data structure. */ view.setTag(holder); } else { //the Object stored in this view as a tag holder = (ViewHolder) view.getTag(); } if ((items == null) || ((position + 1) > items.size())) return view; objBean = items.get(position); holder.tv_event_name = (TextView) view.findViewById(R.id.tv_event_name); holder.tv_event_date = (TextView) view.findViewById(R.id.tv_event_date); holder.tv_event_start = (TextView) view.findViewById(R.id.tv_event_start); holder.tv_event_end = (TextView) view.findViewById(R.id.tv_event_end); holder.tv_event_location = (TextView) view.findViewById(R.id.tv_event_location); if (holder.tv_event_name != null && null != objBean.getName() && objBean.getName().trim().length() > 0) { holder.tv_event_name.setText(Html.fromHtml(objBean.getName())); } if (holder.tv_event_date != null && null != objBean.getDate() && objBean.getDate().trim().length() > 0) { holder.tv_event_date.setText(Html.fromHtml(objBean.getDate())); } if (holder.tv_event_start != null && null != objBean.getStartTime() && objBean.getStartTime().trim().length() > 0) { holder.tv_event_start.setText(Html.fromHtml(objBean.getStartTime())); } if (holder.tv_event_end != null && null != objBean.getEndTime() && objBean.getEndTime().trim().length() > 0) { holder.tv_event_end.setText(Html.fromHtml(objBean.getEndTime())); } if (holder.tv_event_location != null && null != objBean.getLocation () && objBean.getLocation ().trim().length() > 0) { holder.tv_event_location.setText(Html.fromHtml(objBean.getLocation ())); } return view; } public class ViewHolder { public TextView tv_event_name, tv_event_date, tv_event_start, tv_event_end, tv_event_location /*tv_event_delete_flag*/; } } Logcat: 06-12 20:54:12.058: I/System.out(493): item disalbed is at postion :0 06-12 20:54:12.058: I/System.out(493): item disalbed is at postion :4 06-12 20:54:12.069: I/System.out(493): item disalbed is at postion :5 06-12 20:54:12.069: I/System.out(493): item disalbed is at postion :13 06-12 20:54:12.069: I/System.out(493): item disalbed is at postion :14 06-12 20:54:12.069: I/System.out(493): item disalbed is at postion :17 06-12 20:54:12.069: I/System.out(493): results of delete list :[0, 4, 5, 13, 14, 17] 06-12 20:54:12.069: I/System.out(493): results of delete list a:[0, 4, 5, 13, 14, 17] 06-12 20:54:12.069: I/System.out(493): set adapaer to list view called; 06-12 20:54:12.128: I/System.out(493): background set to yellow at position 0 disableView is at 0 06-12 20:54:12.628: I/System.out(493): background set to white at position 1 disableView is at 0 06-12 20:54:12.678: I/System.out(493): background set to white at position 2 disableView is at 0 06-12 20:54:12.708: I/System.out(493): background set to white at position 3 disableView is at 0 06-12 20:54:12.738: I/System.out(493): background set to yellow at position 4 disableView is at 4 06-12 20:54:12.778: I/System.out(493): background set to yellow at position 5 disableView is at 5 06-12 20:54:12.808: I/System.out(493): background set to white at position 6 disableView is at 5 06-12 
    20:54:12.838: I/System.out(493): background set to white at position 7 disableView is at 5 This is a link to my first question from a day ago: Change Background on a specific row based on a condition in Custom Adapter. I appreciate your help!
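    The pattern in the logcat follows from the adapter itself: the background is only assigned inside the if (view == null) branch, and disableView is a single int, so recycled rows keep whatever colour their reused view last had. A minimal sketch (assuming the same disable list of flagged positions from the original adapter) of a getView() that survives recycling by choosing the background on every call:

        @Override
        public View getView(int position, View convertView, ViewGroup parent) {
            View view = convertView;
            if (view == null) {
                LayoutInflater inflater = (LayoutInflater) activity
                        .getSystemService(Context.LAYOUT_INFLATER_SERVICE);
                view = inflater.inflate(row, parent, false);
            }
            // Runs for recycled rows too, so positions 13, 14 and 17 get
            // flagged once they scroll into view, and a reused yellow row
            // is reset to white when it shows an unflagged item.
            if (disable.contains(position)) {
                view.setBackgroundColor(Color.YELLOW);
            } else {
                view.setBackgroundColor(Color.WHITE);
            }
            // ... bind the row's TextViews here exactly as in the original ...
            return view;
        }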

    Read the article

  • twitter4j code doesn't work on ICS and Jelly Bean - help me

    - by swapnil adsure
    Hey guys, I am using Twitter4J to post a tweet on Twitter. I changed the code according to your suggestions and did some Google searching. The problem is that when I try to shift from the main activity to the twitter activity, it shows a force close. The main activity is "MainActivity" and the twitter activity is "twiti_backup". I think the problem is in the manifest file, but I don't know what it is. public class twiti_backup extends Activity { private static final String TAG = "Blundell.TweetToTwitterActivity"; private static final String PREF_ACCESS_TOKEN = ""; private static final String PREF_ACCESS_TOKEN_SECRET = ""; private static final String CONSUMER_KEY = ""; private static final String CONSUMER_SECRET = ""; private static final String CALLBACK_URL = "android:///"; private SharedPreferences mPrefs; private Twitter mTwitter; private RequestToken mReqToken; private Button mLoginButton; private Button mTweetButton; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); Log.i(TAG, "Loading TweetToTwitterActivity"); setContentView(R.layout.twite); mPrefs = getSharedPreferences("twitterPrefs", MODE_PRIVATE); mTwitter = new TwitterFactory().getInstance(); mTwitter.setOAuthConsumer(CONSUMER_KEY, CONSUMER_SECRET); mLoginButton = (Button) findViewById(R.id.login_button); mTweetButton = (Button) findViewById(R.id.tweet_button); } public void buttonLogin(View v) { Log.i(TAG, "Login Pressed"); if (mPrefs.contains(PREF_ACCESS_TOKEN)) { Log.i(TAG, "Repeat User"); loginAuthorisedUser(); } else { Log.i(TAG, "New User"); loginNewUser(); } } public void buttonTweet(View v) { Log.i(TAG, "Tweet Pressed"); tweetMessage(); } private void loginNewUser() { try { Log.i(TAG, "Request App Authentication"); mReqToken = mTwitter.getOAuthRequestToken(CALLBACK_URL); Log.i(TAG, "Starting Webview to login to twitter"); WebView twitterSite = new WebView(this); twitterSite.loadUrl(mReqToken.getAuthenticationURL()); setContentView(twitterSite); } catch (TwitterException e) { Toast.makeText(this, "Twitter Login error, try again later", Toast.LENGTH_SHORT).show(); } } private void loginAuthorisedUser() { String token = mPrefs.getString(PREF_ACCESS_TOKEN, null); String secret = mPrefs.getString(PREF_ACCESS_TOKEN_SECRET, null); // Create the twitter access token from the credentials we got previously AccessToken at = new AccessToken(token, secret); mTwitter.setOAuthAccessToken(at); Toast.makeText(this, "Welcome back", Toast.LENGTH_SHORT).show(); enableTweetButton(); } @Override protected void onNewIntent(Intent intent) { super.onNewIntent(intent); Log.i(TAG, "New Intent Arrived"); dealWithTwitterResponse(intent); } @Override protected void onResume() { super.onResume(); Log.i(TAG, "Arrived at onResume"); } private void dealWithTwitterResponse(Intent intent) { Uri uri = intent.getData(); if (uri != null && uri.toString().startsWith(CALLBACK_URL)) { // If the user has just logged in String oauthVerifier = uri.getQueryParameter("oauth_verifier"); authoriseNewUser(oauthVerifier); } } private void authoriseNewUser(String oauthVerifier) { try { AccessToken at = mTwitter.getOAuthAccessToken(mReqToken, oauthVerifier); mTwitter.setOAuthAccessToken(at); saveAccessToken(at); // Set the content view back after we changed to a webview setContentView(R.layout.twite); enableTweetButton(); } catch (TwitterException e) { Toast.makeText(this, "Twitter auth error x01, try again later", Toast.LENGTH_SHORT).show(); } } private void enableTweetButton() { Log.i(TAG, "User logged in - allowing to tweet");
mLoginButton.setEnabled(false); mTweetButton.setEnabled(true); } private void tweetMessage() { try { mTwitter.updateStatus("Test - Tweeting with @Blundell_apps #AndroidDev Tutorial using #Twitter4j http://blog.blundell-apps.com/sending-a-tweet/"); Toast.makeText(this, "Tweet Successful!", Toast.LENGTH_SHORT).show(); } catch (TwitterException e) { Toast.makeText(this, "Tweet error, try again later", Toast.LENGTH_SHORT).show(); } } private void saveAccessToken(AccessToken at) { String token = at.getToken(); String secret = at.getTokenSecret(); Editor editor = mPrefs.edit(); editor.putString(PREF_ACCESS_TOKEN, token); editor.putString(PREF_ACCESS_TOKEN_SECRET, secret); editor.commit(); } } And here is Manifest <application android:icon="@drawable/ic_launcher" android:label="@string/app_name" android:theme="@style/AppTheme" > <activity android:name=".MainActivity" android:label="@string/title_activity_main" android:launchMode="singleInstance" android:configChanges="orientation|screenSize"> <intent-filter> <action android:name="android.intent.action.MAIN" /> <category android:name="android.intent.category.LAUNCHER" /> </intent-filter> </activity> <activity android:name=".twiti_backup" android:launchMode="singleInstance"> <action android:name="android.intent.action.VIEW" /> <category android:name="android.intent.category.DEFAULT" /> <category android:name="android.intent.category.BROWSABLE" /> <data android:scheme="android" android:host="callback_main" /> </activity> <activity android:name=".MyTwite"/> <activity android:name=".mp3" /> <activity android:name=".myfbapp" /> </application> Here is Log cat when i try to launch twiti_backup from main activity W/dalvikvm(16357): threadid=1: thread exiting with uncaught exception (group=0x4001d5a0) E/AndroidRuntime(16357): FATAL EXCEPTION: main E/AndroidRuntime(16357): java.lang.VerifyError: com.example.uitest.twiti_backup E/AndroidRuntime(16357): at java.lang.Class.newInstanceImpl(Native Method) E/AndroidRuntime(16357): at java.lang.Class.newInstance(Class.java:1409) E/AndroidRuntime(16357): at android.app.Instrumentation.newActivity(Instrumentation.java:1040) E/AndroidRuntime(16357): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:1735) E/AndroidRuntime(16357): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:1842) E/AndroidRuntime(16357): at android.app.ActivityThread.access$1500(ActivityThread.java:132) E/AndroidRuntime(16357): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1038) E/AndroidRuntime(16357): at android.os.Handler.dispatchMessage(Handler.java:99) E/AndroidRuntime(16357): at android.os.Looper.loop(Looper.java:143) E/AndroidRuntime(16357): at android.app.ActivityThread.main(ActivityThread.java:4263) E/AndroidRuntime(16357): at java.lang.reflect.Method.invokeNative(Native Method) E/AndroidRuntime(16357): at java.lang.reflect.Method.invoke(Method.java:507) E/AndroidRuntime(16357): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:839) E/AndroidRuntime(16357): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:597) E/AndroidRuntime(16357): at dalvik.system.NativeStart.main(Native Method)
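    Two things are worth checking here, offered as likely causes rather than certain ones. First, a java.lang.VerifyError on the activity class is typically thrown when a class it references is missing from the APK at runtime, so confirm the twitter4j jar sits in the project's libs/ folder (or is exported on the build path) rather than only on the compile classpath. Second, in the manifest the action, category and data elements for twiti_backup sit directly under <activity>; Android only honours them inside an <intent-filter>, roughly like this sketch, which keeps the original scheme and host (the host should also match the CALLBACK_URL used in the code):

        <activity android:name=".twiti_backup" android:launchMode="singleInstance">
            <intent-filter>
                <action android:name="android.intent.action.VIEW" />
                <category android:name="android.intent.category.DEFAULT" />
                <category android:name="android.intent.category.BROWSABLE" />
                <data android:scheme="android" android:host="callback_main" />
            </intent-filter>
        </activity>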

    Read the article

  • need prefuse graph edges to look like arrows

    - by merve
    Hello, I did my homework and searched Google for a sample, and looked for a topic answered before on Stack Overflow, but nothing turned up. My problem is that my edges are drawn as plain lines and do not look like arrows. Here is what I do, hoping to get forward arrows pointing from source to target: LabelRenderer nameLabel = new LabelRenderer("name"); nameLabel.setRoundedCorner(8, 8); DefaultRendererFactory rendererFactory = new DefaultRendererFactory(nameLabel); EdgeRenderer edgeRenderer; edgeRenderer = new EdgeRenderer(prefuse.Constants.EDGE_TYPE_LINE, prefuse.Constants.EDGE_ARROW_FORWARD); rendererFactory.setDefaultEdgeRenderer(edgeRenderer); vis.setRendererFactory(rendererFactory); Here is what I do for the colour of the edges, hoping they will not be transparent: int[] palette = new int[]{ColorLib.rgb(255, 180, 180), ColorLib.rgb(190, 190, 255)}; DataColorAction fill = new DataColorAction("socialnet.nodes", "gender", Constants.NOMINAL, VisualItem.FILLCOLOR, palette); ColorAction text = new ColorAction("socialnet.nodes", VisualItem.TEXTCOLOR, ColorLib.gray(0)); ColorAction edges = new ColorAction("socialnet.edges", VisualItem.STROKECOLOR, ColorLib.gray(200)); ColorAction arrow = new ColorAction("socialnet.edges", VisualItem.FILLCOLOR, ColorLib.gray(200)); ActionList colour = new ActionList(); colour.add(fill); colour.add(text); colour.add(edges); colour.add(arrow); vis.putAction("colour", colour); So I wonder, where am I wrong? Why do my edges not appear as arrows? Thanks for any ideas. For more detail, here is all of the code: /* * To change this template, choose Tools | Templates * and open the template in the editor. */ package prefusedeneme; import javax.swing.JFrame; import prefuse.data.*; import prefuse.data.io.*; import prefuse.Display; import prefuse.Visualization; import prefuse.render.*; import prefuse.util.*; import prefuse.action.assignment.*; import prefuse.Constants; import prefuse.visual.*; import prefuse.action.*; import prefuse.activity.*; import prefuse.action.layout.graph.*; import prefuse.controls.*; import prefuse.data.expression.Predicate; import prefuse.data.expression.parser.ExpressionParser; public class SocialNetworkVis { public static void main(String argv[]) { // 1. Load the data Graph graph = null; /* graph will contain the core data */ try { graph = new GraphMLReader().readGraph("socialnet.xml"); /* load the data from an XML file */ } catch (DataIOException e) { e.printStackTrace(); System.err.println("Error loading graph. Exiting..."); System.exit(1); } // 2. prepare the visualization Visualization vis = new Visualization(); /* vis is the main object that will run the visualization */ vis.add("socialnet", graph); /* add our data to the visualization */ // 3. setup the renderers and the render factory // labels for name LabelRenderer nameLabel = new LabelRenderer("name"); nameLabel.setRoundedCorner(8, 8); /* nameLabel describes how to draw the data elements labeled as "name" */ // create the render factory DefaultRendererFactory rendererFactory = new DefaultRendererFactory(nameLabel); EdgeRenderer edgeRenderer; edgeRenderer = new EdgeRenderer(prefuse.Constants.EDGE_TYPE_LINE, prefuse.Constants.EDGE_ARROW_FORWARD); rendererFactory.setDefaultEdgeRenderer(edgeRenderer); vis.setRendererFactory(rendererFactory); // 4.
process the actions // colour palette for nominal data type int[] palette = new int[]{ColorLib.rgb(255, 180, 180), ColorLib.rgb(190, 190, 255)}; /* ColorLib.rgb converts the colour values to integers */ // map data to colours in the palette DataColorAction fill = new DataColorAction("socialnet.nodes", "gender", Constants.NOMINAL, VisualItem.FILLCOLOR, palette); /* fill describes what colour to draw the graph based on a portion of the data */ // node text ColorAction text = new ColorAction("socialnet.nodes", VisualItem.TEXTCOLOR, ColorLib.gray(0)); /* text describes what colour to draw the text */ // edge ColorAction edges = new ColorAction("socialnet.edges", VisualItem.STROKECOLOR, ColorLib.gray(200)); ColorAction arrow = new ColorAction("socialnet.edges", VisualItem.FILLCOLOR, ColorLib.gray(200)); /* edge describes what colour to draw the edges */ // combine the colour assignments into an action list ActionList colour = new ActionList(); colour.add(fill); colour.add(text); colour.add(edges); colour.add(arrow); vis.putAction("colour", colour); /* add the colour actions to the visualization */ // create a separate action list for the layout ActionList layout = new ActionList(Activity.INFINITY); layout.add(new ForceDirectedLayout("socialnet")); /* use a force-directed graph layout with default parameters */ layout.add(new RepaintAction()); /* repaint after each movement of the graph nodes */ vis.putAction("layout", layout); /* add the laout actions to the visualization */ // 5. add interactive controls for visualization Display display = new Display(vis); display.setSize(700, 700); display.pan(350, 350); // pan to the middle display.addControlListener(new DragControl()); /* allow items to be dragged around */ display.addControlListener(new PanControl()); /* allow the display to be panned (moved left/right, up/down) (left-drag)*/ display.addControlListener(new ZoomControl()); /* allow the display to be zoomed (right-drag) */ // 6. launch the visualizer in a JFrame JFrame frame = new JFrame("prefuse tutorial: socialnet"); /* frame is the main window */ frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); frame.add(display); /* add the display (which holds the visualization) to the window */ frame.pack(); frame.setVisible(true); /* start the visualization working */ vis.run("colour"); vis.run("layout"); } }
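    One detail the code above does not control: prefuse's EdgeRenderer only draws arrowheads on edges of a directed graph, so if the GraphML file does not declare <graph edgedefault="directed">, the loaded graph is undirected and EDGE_ARROW_FORWARD is silently ignored no matter how the renderer and colours are configured. The simplest fix is to set edgedefault="directed" in socialnet.xml; alternatively, a minimal sketch (file name as in the original) of forcing directedness after loading, by wrapping the same node and edge tables in a directed Graph:

        // Load as before, then rebuild the same tables as a directed graph.
        Graph loaded = new GraphMLReader().readGraph("socialnet.xml");
        Graph directed = new Graph(loaded.getNodeTable(),
                                   loaded.getEdgeTable(),
                                   true); // directed = true
        vis.add("socialnet", directed);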

    Read the article

  • Create Downloadable CSV File from PHP Script

    - by Aphex22
    How would I create a formatted version of the following PHP script as a downloadable CSV file from the code below (1.0) At the moment the fputcsv function is currently dumping the unparsed PHP/HTML code into a CSV file. This is incorrect. The downloaded CSV file should contain the columns and rows generated from the code at (1.0) as shown in the image link below. I've tried using the following code at the top of the PHP file: // output headers so that the file is downloaded rather than displayed header('Content-Type: text/csv; charset=utf-8'); header('Content-Disposition: attachment; filename=amazon.csv'); // create a file pointer connected to the output stream $output = fopen('php://output', 'w'); $mysql_hostname = ""; $mysql_user = ""; $mysql_password = ""; $mysql_database = ""; $bd = mysql_connect($mysql_hostname, $mysql_user, $mysql_password) or die("Could not connect database"); mysql_select_db($mysql_database, $bd) or die("Could not select database"); $sql = "select * from product WHERE on_amazon = 'on' AND active = 'on'"; $result = mysql_query($sql) or die ( mysql_error() ); // loop over the rows, outputting them while ($sql_result = mysql_fetch_assoc($sql)) fputcsv($output, $sql_result); 1.0 The start of the code outputs the column headings for the CSV file: // set headers echo " item_sku, external_product_id, external_product_id_type, item_name, brand_name, manufacturer, product_description, feed_product_type, update_delete, part_number, model, standard_price, list_price, currency, quantity, product_tax_code, product_site_launch_date, merchant_release_date, restock_date ... <br>"; And then follows PHP script for the column values // load all stock while ($line = mysql_fetch_assoc($result) ) { ?> <?php $size_suffix = array ("",'_chain','_con_b','_con_c'); $arrayLength = count ($size_suffix); for($y=0;$y<$arrayLength;$y++) { //Possible size array to loop through when checking quantity $con_size = array (36,365,37,375,38,385,39,395,40,405,41,415,42,425,43,435,44,445,45,455,46,465,47,475,48,485); $arrlength=count($con_size); for($x=0;$x<$arrlength;$x++) { // check if size is available if($line['quantity_c_size_'.$con_size[$x].$size_suffix[$y]] > 0 ) { ?> <!-- item sku --> <?=$line['product_id']?>, <!-- external product id --> <?=$line['code_size_'.$con_size[$x].'']?>, <? // external product id type $barcode = $line['code_size_'.$con_size[$x]]; $trim_barcode = trim($barcode); $count = strlen($trim_barcode); if ($count == 12) { echo "UPC"; } if ($count == 13) { echo "EAN"; } elseif ($count < 12) { echo " "; } ?>, <!-- item name --> <?=$line['title']?>, <? // brand_name $brand = $line['jys_brand']; echo ucfirst($brand); ?>, <? 
// manufacturer $brand = $line['jys_brand']; echo ucfirst($brand); ?>, <!-- product description --> <?=preg_replace('/[^\da-z]/i', ' ', $line['amazon_desc']) ?>, <!-- feed product type --> Shoes, , , , <!-- standard price --> <?=$line['price']?>, , <!-- currency --> GBP, <!-- quantity --> <?=$line['quantity_size_'.$con_size[$x].$size_suffix[$y]]?>, , <!-- product site launch date --> <?=$line['added_y']?>-<?=$line['added_m']?>-<?=$line['added_d']?>, <!-- merchat release date --> <?=$line['added_y']?>-<?=$line['added_m']?>-<?=$line['added_d']?>, , , , , <!-- item package quantity --> 1, , , , , <!-- fulfillment latency --> 2, <!-- max aggregate ship quantity --> 1, , , , , , , , , , , , , , , , , , , , , , , , , , , , , , <!-- main image url, url1, url2, url3 --> http://www.getashoe.co.uk/full/<?=$line['product_id']?>_1.jpg, http://www.getashoe.co.uk/full/<?=$line['product_id']?>_2.jpg, http://www.getashoe.co.uk/full/<?=$line['product_id']?>_3.jpg, http://www.getashoe.co.uk/full/<?=$line['product_id']?>_4.jpg, , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , <!-- heel height --> <?=$line['heel']?>, , , , , , , , , , , <!-- colour name --> <?=$line['colour']?>, <!-- colour map --> <? $colour = preg_replace('/[()]/i', ' ', $line['colour']); if (preg_match( '/[\/].*/i', $colour)) { echo 'Multicolour'; } if (preg_match( '/off.*/i', $colour)) { echo 'Off-White'; } elseif( preg_match( '/white.*/i', $colour)) { echo 'White'; } elseif( preg_match( '/moro.*/i', $colour)) { echo 'Brown'; } elseif( preg_match( '/morado.*/i', $colour)) { echo 'Purple'; } elseif( preg_match( '/cream.*/i', $colour)) { echo 'Off-White'; } elseif( preg_match( '/pewter.*/i', $colour)) { echo 'Silver'; } elseif( preg_match( '/yellow.*/i', $colour)) { echo 'Yellow'; } elseif( preg_match( '/camel.*/i', $colour)) { echo 'Beige'; } elseif( preg_match( '/navy.*/i', $colour)) { echo 'Blue'; } elseif( preg_match( '/tan.*/i', $colour)) { echo 'Brown'; } elseif( preg_match( '/rainbow.*/i', $colour)) { echo 'Multicolour'; } elseif( preg_match( '/orange.*/i', $colour)) { echo 'Orange'; } elseif( preg_match( '/leopard.*/i', $colour)) { echo 'Multicolour'; } elseif( preg_match( '/red.*/i', $colour)) { echo 'Red'; } elseif( preg_match( '/pink.*/i', $colour)) { echo 'Pink'; } elseif( preg_match( '/purple.*/i', $colour)) { echo 'Purple'; } elseif( preg_match( '/blue.*/i', $colour)) { echo 'Blue'; } elseif( preg_match( '/green.*/i', $colour)) { echo 'Green'; } elseif( preg_match( '/brown.*/i', $colour)) { echo 'Brown'; } elseif( preg_match( '/grey.*/i', $colour)) { echo 'Grey'; } elseif( preg_match( '/black.*/i', $colour)) { echo 'Black'; } elseif( preg_match( '/gold.*/i', $colour)) { echo 'Gold'; } elseif( preg_match( '/silver.*/i', $colour)) { echo 'Silver'; } elseif( preg_match( '/multi.*/i', $colour)) { echo 'Multicolour'; } elseif( preg_match( '/beige.*/i', $colour)) { echo 'Beige'; } elseif( preg_match( '/nude.*/i', $colour)) { echo 'Beige'; } ?>, <!-- size name --> <? echo $con_size[$x];?>, <!-- size map --> <? 
if ($con_size[$x] == 36) { echo "3 UK"; } elseif ($con_size[$x] == 37 ) { echo "4 UK"; } elseif ($con_size[$x] == 38) { echo "5 UK"; } elseif ($con_size[$x] == 39 ) { echo "6 UK"; } elseif ($con_size[$x] == 40 ) { echo "7 UK"; } elseif ($con_size[$x] == 41) { echo "8 UK"; } elseif ($con_size[$x] == 42) { echo "9 UK"; } elseif ($con_size[$x] == 43) { echo "10 UK"; } elseif ($con_size[$x] == 44 ) { echo "11 UK"; } elseif ($con_size[$x] == 45 ) { echo "12 UK"; } elseif ($con_size[$x] == 46 ) { echo "13 UK"; } elseif ($con_size[$x] == 47 ) { echo "14 UK"; } elseif ($con_size[$x] == 48 ) { echo "15 UK"; } elseif ($con_size[$x] == 365) { echo "3.5 UK"; } elseif ($con_size[$x] == 375 ) { echo "4.5 UK"; } elseif ($con_size[$x] == 385) { echo "5.5 UK"; } elseif ($con_size[$x] == 395 ) { echo "6.5 UK"; } elseif ($con_size[$x] == 405 ) { echo "7.5 UK"; } elseif ($con_size[$x] == 415) { echo "8.5 UK"; } elseif ($con_size[$x] == 425) { echo "9.5 UK"; } elseif ($con_size[$x] == 435) { echo "10.5 UK"; } elseif ($con_size[$x] == 445 ) { echo "11.5 UK"; } elseif ($con_size[$x] == 455 ) { echo "12.5 UK"; } elseif ($con_size[$x] == 465 ) { echo "13.5 UK"; } elseif ($con_size[$x] == 475 ) { echo "14.5 UK"; } elseif ($con_size[$x] == 485 ) { echo "15.5 UK"; } ?>, <br> <? // finish checking if size is available } } } ?> I've included an image of how the CSV file should appear. https://i.imgur.com/ZU3IFer.png Any help would be great.
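    Two separate problems feed into the broken download, and both can be addressed with the pattern sketched below (a simplified sketch, not the full column set; $result is the resource from the original mysql_query call, and code_size_36 stands in for the real size columns). First, the loop at the top fetches from $sql, the query string, instead of $result, so nothing valid is ever written. Second, fputcsv expects each row as a plain array of values; echoing markup and comma-separated fragments bypasses its quoting entirely. Building each row as an array first keeps the output a well-formed CSV:

        <?php
        // Send the download headers before any other output.
        header('Content-Type: text/csv; charset=utf-8');
        header('Content-Disposition: attachment; filename=amazon.csv');

        $output = fopen('php://output', 'w');

        // Header row once, as an array rather than one echoed string.
        fputcsv($output, array('item_sku', 'external_product_id',
            'external_product_id_type', 'item_name', 'standard_price'));

        while ($line = mysql_fetch_assoc($result)) {   // note: $result, not $sql
            $barcode = trim($line['code_size_36']);    // hypothetical size column
            $row = array(
                $line['product_id'],
                $barcode,
                strlen($barcode) == 13 ? 'EAN' : 'UPC',
                $line['title'],
                $line['price'],
            );
            fputcsv($output, $row);                    // fputcsv handles the quoting
        }
        fclose($output);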

    Read the article

  • Android - doInBackground() error in AsyncTask

    - by AimanB
    What my app basically does is capture a photo or import one from the gallery; when the Upload button is pressed, the image is uploaded to a localhost server. Before I implemented AsyncTask into the process, it didn't have any problem uploading whatsoever. Now that I've put in AsyncTask, everything goes wrong, and I don't know which part I am doing wrong. This is what logcat shows when I try to upload an image file: 10-28 17:23:25.989: E/AndroidRuntime(3356): FATAL EXCEPTION: AsyncTask #5 10-28 17:23:25.989: E/AndroidRuntime(3356): java.lang.RuntimeException: An error occured while executing doInBackground() 10-28 17:23:25.989: E/AndroidRuntime(3356): at android.os.AsyncTask$3.done(AsyncTask.java:299) 10-28 17:23:25.989: E/AndroidRuntime(3356): at java.util.concurrent.FutureTask.finishCompletion(FutureTask.java:352) 10-28 17:23:25.989: E/AndroidRuntime(3356): at java.util.concurrent.FutureTask.setException(FutureTask.java:219) 10-28 17:23:25.989: E/AndroidRuntime(3356): at java.util.concurrent.FutureTask.run(FutureTask.java:239) 10-28 17:23:25.989: E/AndroidRuntime(3356): at android.os.AsyncTask$SerialExecutor$1.run(AsyncTask.java:230) 10-28 17:23:25.989: E/AndroidRuntime(3356): at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1080) 10-28 17:23:25.989: E/AndroidRuntime(3356): at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:573) 10-28 17:23:25.989: E/AndroidRuntime(3356): at java.lang.Thread.run(Thread.java:856) 10-28 17:23:25.989: E/AndroidRuntime(3356): Caused by: java.lang.RuntimeException: Can't create handler inside thread that has not called Looper.prepare() 10-28 17:23:25.989: E/AndroidRuntime(3356): at android.os.Handler.<init>(Handler.java:197) 10-28 17:23:25.989: E/AndroidRuntime(3356): at android.os.Handler.<init>(Handler.java:111) 10-28 17:23:25.989: E/AndroidRuntime(3356): at android.widget.Toast$TN.<init>(Toast.java:324) 10-28 17:23:25.989: E/AndroidRuntime(3356): at android.widget.Toast.<init>(Toast.java:91) 10-28 17:23:25.989: E/AndroidRuntime(3356): at android.widget.Toast.makeText(Toast.java:238) 10-28 17:23:25.989: E/AndroidRuntime(3356): at com.aiman.webshopper.UploadImageActivity$1execMultiPostAsync.doInBackground(UploadImageActivity.java:268) 10-28 17:23:25.989: E/AndroidRuntime(3356): at com.aiman.webshopper.UploadImageActivity$1execMultiPostAsync.doInBackground(UploadImageActivity.java:1) 10-28 17:23:25.989: E/AndroidRuntime(3356): at android.os.AsyncTask$2.call(AsyncTask.java:287) 10-28 17:23:25.989: E/AndroidRuntime(3356): at java.util.concurrent.FutureTask.run(FutureTask.java:234) This is my code for the Upload activity: public class UploadImageActivity extends Activity implements OnItemSelectedListener { InputStream inputStream; private ImageView imageView; String the_string_response; private static final int SELECT_PICTURE = 0; private static final int CAMERA_REQUEST = 1888; private static final String SERVER_UPLOAD_URI = "...myserver.php"; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_upload_image); imageView = (ImageView) findViewById(R.id.imgUpload); } public void capturePhoto(View view) { Intent cameraIntent = new Intent( android.provider.MediaStore.ACTION_IMAGE_CAPTURE); File f = new File(android.os.Environment.getExternalStorageDirectory(), "temp.jpg"); cameraIntent.putExtra(MediaStore.EXTRA_OUTPUT, Uri.fromFile(f)); startActivityForResult(cameraIntent, CAMERA_REQUEST); } public void
pickPhoto(View view) { // TODO: launch the photo picker Intent intent = new Intent(); intent.setType("image/*"); intent.setAction(Intent.ACTION_GET_CONTENT); startActivityForResult(Intent.createChooser(intent, "Select Picture"), SELECT_PICTURE); } @Override protected void onActivityResult(int requestCode, int resultCode, Intent data) { super.onActivityResult(requestCode, resultCode, data); if (requestCode == CAMERA_REQUEST && resultCode == RESULT_OK) { File f = new File(Environment.getExternalStorageDirectory() .toString()); for (File temp : f.listFiles()) { if (temp.getName().equals("temp.jpg")) { f = temp; break; } } try { BitmapFactory.Options bitmapOptions = new BitmapFactory.Options(); Bitmap bitmap = BitmapFactory.decodeFile(f.getAbsolutePath(), bitmapOptions); imageView.setImageBitmap(bitmap); String path = android.os.Environment .getExternalStorageDirectory() + File.separator + "Phoenix" + File.separator + "default"; f.delete(); OutputStream outFile = null; File file = new File(path, String.valueOf(System .currentTimeMillis()) + ".jpg"); try { outFile = new FileOutputStream(file); bitmap.compress(Bitmap.CompressFormat.JPEG, 85, outFile); outFile.flush(); outFile.close(); } catch (FileNotFoundException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } catch (Exception e) { e.printStackTrace(); } } catch (Exception e) { e.printStackTrace(); } } if (requestCode == SELECT_PICTURE && resultCode == RESULT_OK) { Bitmap bitmap = getPath(data.getData()); imageView.setImageBitmap(bitmap); } } private Bitmap getPath(Uri uri) { String[] projection = { MediaStore.Images.Media.DATA }; Cursor cursor = getContentResolver().query(uri, projection, null, null, null); int column_index = cursor.getColumnIndexOrThrow(projection[0]); cursor.moveToFirst(); String filePath = cursor.getString(column_index); cursor.close(); // Convert file path into bitmap image using below line. Bitmap bitmap = BitmapFactory.decodeFile(filePath); return bitmap; } public void uploadPhoto(View view) { try { executeMultipartPost(); } catch (Exception e) { e.printStackTrace(); } } public void executeMultipartPost() throws Exception { class execMultiPostAsync extends AsyncTask<String, Void, String>{ @Override protected String doInBackground(String... params){ // Choose image here BitmapDrawable drawable = (BitmapDrawable) imageView.getDrawable(); Bitmap bitmap = drawable.getBitmap(); ByteArrayOutputStream stream = new ByteArrayOutputStream(); bitmap.compress(Bitmap.CompressFormat.JPEG, 50, stream); // compress to // which // format // you want. 
byte[] byte_arr = stream.toByteArray(); String image_str = Base64.encodeBytes(byte_arr); ArrayList<NameValuePair> nameValuePairs = new ArrayList<NameValuePair>(); nameValuePairs.add(new BasicNameValuePair("image", image_str)); try { HttpClient httpclient = new DefaultHttpClient(); /* * HttpPost(parameter): Server URI */ HttpPost httppost = new HttpPost(SERVER_UPLOAD_URI); httppost.setEntity(new UrlEncodedFormEntity(nameValuePairs)); HttpResponse response = httpclient.execute(httppost); the_string_response = convertResponseToString(response); } catch (Exception e) { Toast.makeText(UploadImageActivity.this, "ERROR " + e.getMessage(), Toast.LENGTH_LONG).show(); System.out.println("Error in http connection " + e.toString()); } return the_string_response; } @Override protected void onPostExecute(String result) { super.onPostExecute(result); Toast.makeText(UploadImageActivity.this, "Response " + result, Toast.LENGTH_LONG) .show(); } public String convertResponseToString(HttpResponse response) throws IllegalStateException, IOException { String res = ""; StringBuffer buffer = new StringBuffer(); inputStream = response.getEntity().getContent(); int contentLength = (int) response.getEntity().getContentLength(); // getting // content // lengt Toast.makeText(UploadImageActivity.this, "contentLength : " + contentLength, Toast.LENGTH_LONG).show(); if (contentLength < 0) { } else { byte[] data = new byte[512]; int len = 0; try { while (-1 != (len = inputStream.read(data))) { buffer.append(new String(data, 0, len)); // converting to // string and // appending to // stringbuffer } } catch (IOException e) { e.printStackTrace(); } try { inputStream.close(); // closing the stream } catch (IOException e) { e.printStackTrace(); } res = buffer.toString(); // converting stringbuffer to string Toast.makeText(UploadImageActivity.this, "Result : " + res, Toast.LENGTH_LONG).show(); // System.out.println("Response => " + // EntityUtils.toString(response.getEntity())); } return res; } } execMultiPostAsync exec = new execMultiPostAsync(); exec.execute(); } } Can someone please check if I put the AsyncTask task correctly in this activity? I think I've made a mistake somewhere.
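    The stack trace pins this down: the Toast calls inside doInBackground() (and inside convertResponseToString(), which runs on the same worker thread) are what raise "Can't create handler inside thread that has not called Looper.prepare()". A minimal sketch of the usual restructuring: keep doInBackground() UI-free, carry any exception into a field, and show every Toast from onPostExecute(), which always runs on the UI thread:

        class ExecMultiPostAsync extends AsyncTask<String, Void, String> {

            private Exception error; // carried from the worker thread to the UI thread

            @Override
            protected String doInBackground(String... params) {
                try {
                    // ... encode the bitmap and execute the HttpPost as before ...
                    return the_string_response;
                } catch (Exception e) {
                    error = e; // no Toast here: this thread has no Looper
                    return null;
                }
            }

            @Override
            protected void onPostExecute(String result) {
                if (error != null) {
                    Toast.makeText(UploadImageActivity.this,
                            "ERROR " + error.getMessage(), Toast.LENGTH_LONG).show();
                } else {
                    Toast.makeText(UploadImageActivity.this,
                            "Response " + result, Toast.LENGTH_LONG).show();
                }
            }
        }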

    Read the article

  • SQLAuthority News – TechEd India – April 12-14, 2010 Bangalore – An Unforgettable Experience – An Op

    - by pinaldave
    TechEd India was one of the largest Technology events in India led by Microsoft. This event was attended by more than 3,000 technology enthusiasts, making it one of the most well-organized events of the year. Though I attempted to attend almost all the technology events here, I have not seen any bigger or better event in Indian subcontinents other than this. There are 21 Technical Tracks at Tech·Ed India 2010 that span more than 745 learning opportunities. I was fortunate enough to be a part of this whole event as a speaker and a delegate, as well. TechEd India Speaker Badge and A Token of Lifetime Hotel Selection I presented three different sessions at TechEd India and was also a part of panel discussion. (The details of the sessions are given at the end of this blog post.) Due to extensive traveling, I stay away from my family occasionally. For this reason, I took my wife – Nupur and daughter Shaivi (8 months old) to the event along with me. We stayed at the same hotel where the event was organized so as to maximize my time bonding with my family and to have more time in networking with technology community, at the same time. The hotel Lalit Ashok is the largest and most luxurious venue one can find in Bangalore, located in the middle of the city. The cost of the hotel was a bit pricey, but looking at all the advantages, I had decided to ask for a booking there. Hotel Lalit Ashok Nupur Dave and Shaivi Dave Arrival Day – DAY 0 – April 11, 2010 I reached the event a day earlier, and that was one wise decision for I was able to relax a bit and go over my presentation for the next day’s course. I am a kind of person who likes to get everything ready ahead of time. I was also able to enjoy a pleasant evening with several Microsoft employees and my family friends. I even checked out the location where I would be doing presentations the next day. I was fortunate enough to meet Bijoy Singhal from Microsoft who helped me out with a few of the logistics issues that occured the day before. I was not aware of the fact that the very next day he was going to be “The Man” of the TechEd 2010 event. Vinod Kumar from Microsoft was really very kind as he talked to me regarding my subsequent session. He gave me some suggestions which were really helpful that I was able to incorporate them during my presentation. Finally, I was able to meet Abhishek Kant from Microsoft; his valuable suggestions and unlimited passion have inspired many people like me to work with the Community. Pradipta from Microsoft was also around, being extremely busy with logistics; however, in those busy times, he did find some good spare time to have a chat with me and the other Community leaders. I also met Harish Ranganathan and Sachin Rathi, both from Microsoft. It was so interesting to listen to both of them talking about SharePoint. I just have no words to express my overwhelmed spirit because of all these passionate young guys - Pradipta,Vinod, Bijoy, Harish, Sachin and Ahishek (of course!). Map of TechEd India 2010 Event Day 1 – April 12, 2010 From morning until night time, today was truly a very busy day for me. I had two presentations and one panel discussion for the day. Needless to say, I had a few meetings to attend as well. The day started with a keynote from S. Somaseger where he announced the launch of Visual Studio 2010. The keynote area was really eye-catching because of the very large, bigger-than- life uniform screen. This was truly one to show. 
The title music of the keynote was very interesting and it featured Bijoy Singhal as the model. It was interesting to talk to him afterwards, when we laughed at jokes together about his modeling assignment. TechEd India Keynote Opening Featuring Bijoy TechEd India 2010 Keynote – S. Somasegar Time: 11:15pm – 11:45pm Session 1: True Lies of SQL Server – SQL Myth Buster Following the excellent keynote, I had my very first session on the subject of SQL Server Myth Buster. At first, I was a bit nervous as right after the keynote, for this was my very first session and during my presentation I saw lots of Microsoft Product Team members. Well, it really went well and I had a really good discussion with attendees of the session. I felt that a well begin was half-done and my confidence was regained. Right after the session, I met a few of my Community friends and had meaningful discussions with them on many subjects. The abstract of the session is as follows: In this 30-minute demo session, I am going to briefly demonstrate few SQL Server Myths and their resolutions as I back them up with some demo. This demo presentation is a must-attend for all developers and administrators who would come to the event. This is going to be a very quick yet fun session. Pinal Presenting session at TechEd India 2010 Time: 1:00 PM – 2:00 PM Lunch with Somasegar After the session I went to see my daughter, and then I headed right away to the lunch with S. Somasegar – the keynote speaker and senior vice president of the Developer Division at Microsoft. I really thank to Abhishek who made it possible for us. Because of his efforts, all the MVPs had the opportunity to meet such a legendary person and had to talk with them on Microsoft Technology. Though Somasegar is currently holding such a high position in Microsoft, he is very polite and a real gentleman, and how I wish that everybody in industry is like him. Believe me, if you spread love and kindness, then that is what you will receive back. As soon as lunch time was over, I ran to the session hall as my second presentation was about to start. Time: 2:30pm – 3:30pm Session 2: Master Data Services in Microsoft SQL Server 2008 R2 Business Intelligence is a subject which was widely talked about at TechEd. Everybody was interested in this subject, and I did not excuse myself from this great concept as well. I consider myself fortunate as I was presenting on the subject of Master Data Services at TechEd. When I had initially learned this subject, I had a bit of confusion about the usage of this tool. Later on, I decided that I would tackle about how we all developers and DBAs are not able to understand something so simple such as this, and even worst, creating confusion about the technology. During system designing, it is very important to have a reference material or master lookup tables. Well, I talked about the same subject and presented the session keeping that as my center talk. The session went very well and I received lots of interesting questions. I got many compliments for talking about this subject on the real-life scenario. I really thank Rushabh Mehta (CEO, Solid Quality Mentors India) for his supportive suggestions that helped me prepare the slide deck, as well as the subject. Pinal Presenting session at TechEd India 2010 The abstract of the session is as follows: SQL Server Master Data Services will ship with SQL Server 2008 R2 and will improve Microsoft’s platform appeal. 
This session provides an in-depth demonstration of MDS features and highlights important usage scenarios. Master Data Services enables consistent decision-making process by allowing you to create, manage and propagate changes from a single master view of your business entities. Also, MDS – Master Data-hub which is a vital component, helps ensure the consistency of reporting across systems and deliver faster and more accurate results across the enterprise. We will talk about establishing the basis for a centralized approach to defining, deploying, and managing master data in the enterprise. Pinal Presenting session at TechEd India 2010 The day was still not over for me. I had ran into several friends but we were not able keep our enthusiasm under control about all the rumors saying that SQL Server 2008 R2 was about to be launched tomorrow in the keynote. I then ran to my third and final technical event for the day- a panel discussion with the top technologies of India. Time: 5:00pm – 6:00pm Panel Discussion: Harness the power of Web – SEO and Technical Blogging As I have delivered two technical sessions by this time, I was a bit tired but  not less enthusiastic when I had to talk about Blog and Technology. We discussed many different topics there. I told them that the most important aspect for any blog is its content. We discussed in depth the issues with plagiarism and how to avoid it. Another topic of discussion was how we technology bloggers can create awareness in the Community about what the right kind of blogging is and what morally and technically wrong acts are. A couple of questions were raised about what type of liberty a person can have in terms of writing blogs. Well, it was generically agreed that a blog is mainly a representation of our ideas and thoughts; it should not be governed by external entities. As long as one is writing what they really want to say, but not providing incorrect information or not practicing plagiarism, a blogger should be allowed to express himself. This panel discussion was supposed to be over in an hour, but the interest of the participants was remarkable and so it was extended for 30 minutes more. Finally, we decided to bring to a close the discussion and agreed that we will continue the topic next year. TechEd India Panel Discussion on Web, Technology and SEO Surprisingly, the day was just beginning after doing all of these. By this time, I have almost met all the MVP who arrived at the event, as well as many Microsoft employees. There were lots of Community folks present, too. I decided that I would go to meet several friends from the Community and continue to communicate with me on SQLAuthority.com. I also met Abhishek Baxi and had a good talk with him regarding Win Mobile and Twitter. He also took a very quick video of me wherein I spoke in my mother’s tongue, Gujarati. It was funny that I talked in Gujarati almost all the day, but when I was talking in the interview I could not find the right Gujarati words to speak. I think we all think in English when we think about Technology, so as to address universality. After meeting them, I headed towards the Speakers’ Dinner. Time: 8:00 PM – onwards Speakers Dinner The Speakers’ dinner was indeed a wonderful opportunity for all the speakers to get together and relax. We talked so many different things, from XBOX to Hindi Movies, and from SQL to Samosas. I just could not express how much fun I had. After a long evening, when I returned tmy room and met Shaivi, I just felt instantly relaxed. 
Kids are really gifts from God. Today was a really long but exciting day. So many things happened in just one day: Visual Studio Lanch, lunch with Somasegar, 2 technical sessions, 1 panel discussion, community leaders meeting, speakers dinner and, last but not leas,t playing with my child! A perfect day! Day 2 – April 13, 2010 Today started with a bang with the excellent keynote by Kamal Hathi who launched SQL Server 2008 R2 in India and demonstrated the power of PowerPivot to all of us. 101 Million Rows in Excel brought lots of applause from the audience. Kamal Hathi Presenting Keynote at TechEd India 2010 The day was a bit easier one for me. I had no sessions today and no events planned. I had a few meetings planned for the second day of the event. I sat in the speaker’s lounge for half a day and met many people there. I attended nearly 9 different meetings today. The subjects of the meetings were very different. Here is a list of the topics of the Community-related meetings: SQL PASS and its involvement in India and subcontinents How to start community blogging Forums and developing aptitude towards technology Ahmedabad/Gandhinagar User Groups and their developments SharePoint and SQL Business Meeting – a client meeting Business Meeting – a potential performance tuning project Business Meeting – Solid Quality Mentors (SolidQ) And family friends Pinal Dave at TechEd India The day passed by so quickly during this meeting. In the evening, I headed to Partners Expo with friends and checked out few of the booths. I really wanted to talk about some of the products, but due to the freebies there was so much crowd that I finally decided to just take the contact details of the partner. I will now start sending them with my queries and, hopefully, I will have my questions answered. Nupur and Shaivi had also one meeting to attend; it was with our family friend Vijay Raj. Vijay is also a person who loves Technology and loves it more than anybody. I see him growing and learning every day, but still remaining as a ‘human’. I believe that if someone acquires as much knowledge as him, that person will become either a computer or cyborg. Here, Vijay is still a kind gentleman and is able to stay as our close family friend. Shaivi was really happy to play with Uncle Vijay. Pinal Dave and Vijay Raj Renuka Prasad, a Microsoft MVP, impressed me with his passion and knowledge of SQL. Every time he gives me credit for his success, I believe that he is very humble. He has way more certifications than me and has worked many more years with SQL compared to me. He is an excellent photographer as well. Most of the photos in this blog post have been taken by him. I told him if ever he wants to do a part time job, he can do the photography very well. Pinal Dave and Renuka Prasad I also met L Srividya from Microsoft, whom I was looking forward to meet. She is a bundle of knowledge that everyone would surely learn a lot from her. I was able to get a few minutes from her and well, I felt confident. She enlightened me with SQL Server BI concepts, domain management and SQL Server security and few other interesting details. I also had a wonderful time talking about SharePoint with fellow Solid Quality Mentor Joy Rathnayake. He is very passionate about SharePoint but when you talk .NET and SQL with him, he is still overwhelmingly knowledgeable. In fact, while talking to him, I figured out that the recent training he delivered was on SQL Server 2008 R2. 
I joked with him that it hurts my ego that he is now more popular in SQL training and consulting than I am. I am sure all of you will agree that working with good people is a gift from God. I am fortunate enough to work with the best of the best industry experts. It was a great pleasure to hang out with my Community friends – Ahswin Kini, HimaBindu Vejella, Vasudev G, Suprotim Agrawal, Dhananjay, Vikram Pendse, Mahesh Dhola, Mahesh Mitkari, Manu Zacharia, Shobhan, Hardik Shah, Ashish Mohta, Manan, Subodh Sohani and Sanjay Shetty (of course!). (Please let me know if I met you at the event and forgot to list your name here.)

Time: 8:00 PM – onwards
Community Leaders Dinner

After lots of meetings, I headed towards the Community Leaders dinner and met almost all the folks I had met in the morning. The discussion was much the same, but the really good thing was that we were enjoying it. The food was really good. Nupur was invited to the event, but Shaivi was not. When Nupur tried to enter, she was stopped because Shaivi did not have a pass for the dinner. Nupur explained that Shaivi is only 8 months old, does not eat outside food, and cannot stay by herself at this age, but the doorkeeper insisted that without the entry details Shaivi could not go in, though Nupur could. Nupur called me on the phone and asked me to help her out. By the time I got outside, the organizer of the event had reached the door and happily approved Shaivi to join the party. Once inside, Shaivi had lots of fun meeting so many people.

Shaivi Dave and Abhishek Kant

Dean Guida (Infragistics President and CEO) and Pinal Dave (SQLAuthority.com)

Day 3 – April 14, 2010

Though it was the last day, I was very much excited, as I was about to present my very favorite session. Query Optimization and Performance Tuning is my domain expertise, and I make my living by consulting and training on it. Today's session was on the same subject and, as an additional twist, the subject of Spatial Databases was added. I have always been intrigued by Spatial Databases and have enjoyed learning about them; however, I had never thought much about Spatial Indexing before it was decided that I would do this session. I really thank Solid Quality Mentor Dr. Greg Low for his assistance in helping me prepare the slide deck and review the content.

Furthermore, today was really what I call my 'learning day'. So far I had not attended any session at TechEd, and I felt a bit down about that. Everybody spends their valuable time and money to learn something new and exciting at TechEd, and here it was already the last day of the event and I had not attended a single session. I did have a plan for the day, though, and I attended two technical sessions before my own session on spatial databases. I attended 2 sessions by Vinod Kumar. Vinod is a natural storyteller, and there was no doubt that his sessions would be jam-packed; people attended his sessions simply because Vinod was the speaker. He has never once disappointed an audience; he is truly a good speaker and knows his stuff very well. I personally do not think anyone in India can be compared to him when it comes to SQL.

Time: 12:30pm – 1:30pm
SQL Server Query Optimization, Execution and Debugging Query Performance

I really had a fun time attending this session. Vinod made it very interactive. The entire audience really got into the presentation and started participating in the event.
Vinod presented a small Query Tuning problem that any developer would have encountered, and solved it with the audience's help in such a fashion that every developer felt he or she had already resolved it. For one question, I was the only one ready with an answer, and Vinod told me in a light tone that I was not allowed to answer it! The audience found it very amusing. There was a huge crowd around Vinod after the session.

Vinod – A master storyteller!

Time: 3:45pm – 4:45pm
Data Recovery / Consistency with CheckDB

This session was much heavier than the earlier one, and I must say it is the favorite session I have EVER attended in India. At this TechEd I attended only two sessions, but over my career I have attended numerous technical sessions, not only in India but all over the world. This session took my breath away. One by one, Vinod took different databases and started to corrupt them in different ways; each database had its own unique way of getting corrupted. Once that was done, Vinod brought out DBCC CHECKDB and demonstrated how it can solve the problem, finally fixing all the databases with this single tool. I do have good knowledge of this subject, but let me honestly admit that I learned a lot from this session. I enjoyed and cheered during this session along with the other attendees. I had the total satisfaction that, just like everyone else, I took advantage of the event and learned something. I am now TECHnically EDucated.

Pinal Dave and Vinod Kumar
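For readers who want a taste of what that demo flow looks like, here is a minimal sketch, assuming a disposable test database (the name CorruptTestDB is hypothetical, and a repair of this kind should never be the first choice when a clean backup exists):

-- Sketch only: run against a throwaway test database, never production.
-- Step 1: Check the logical and physical integrity of all objects.
DBCC CHECKDB ('CorruptTestDB') WITH NO_INFOMSGS, ALL_ERRORMSGS;

-- Step 2: A repair requires exclusive access to the database.
ALTER DATABASE CorruptTestDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

-- Step 3: Last-resort repair; as the option name itself warns,
-- deallocating damaged pages can lose data.
DBCC CHECKDB ('CorruptTestDB', REPAIR_ALLOW_DATA_LOSS);

-- Step 4: Return the database to normal multi-user access.
ALTER DATABASE CorruptTestDB SET MULTI_USER;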
After two very interactive and informative SQL sessions from Vinod Kumar, the next turn was mine, presenting on Spatial Databases and Indexing. I once again got nervous, but Vinod told me to stay natural and do my presentation. Well, once I got a huge stage with a total of four projectors and a large crowd, I felt better.

Time: 5:00pm – 6:00pm
Session 3: Developing with SQL Server Spatial and Deep Dive into Spatial Indexing

Pinal Presenting session at TechEd India 2010

I kicked off this session with Michael J Swart's beautiful spatial image. This session was the last one of the day but, to my surprise, I had more than 200 attendees. Slowly, rain was starting outside, and I was worried that the hall would not be full; despite this, there was not a single seat available within the first five minutes of the session. Thanks to all of you for attending my presentation. I demonstrated a map of the world (and of India) and quickly explained what the Geography and Geometry data types in a Spatial Database are. The session also carried an interesting story of indexing and comparison: how different traditional indexes are from spatial indexes.

Pinal Presenting session at TechEd India 2010

Due to the heavy rain during this event, the power went off for about 22 minutes (just an accident – nobody's fault). During these minutes, there was no audio, no video and no light. I continued to address the mass of 200+ people without any audio device or PowerPoint. I must thank the audience, because not a single person left the session. They all stayed in their places, and some moved closer to listen to me properly. I noticed that the curiosity and eagerness to learn new things was at its peak, even though this was the very last session of TechEd. Everybody wanted to get the maximum knowledge out of the whole event. I was touched by the support of the audience. They listened and participated in my session even without any kind of technology (no ppt, no mike, no AC, nothing). During these 22 minutes, I completed my theory verbally.

Pinal Presenting session at TechEd India 2010

After a while, we got the projector back online, and we continued with some exciting demos. Many thanks to the Microsoft people who worked energetically in the background to get backup power to the projector. I had a very interesting demo wherein I overlaid Bangalore and Hyderabad on the India map and found the aerial distance between them. After finding the aerial distance, we browsed online and saw that SQL Server's result matched the factual aerial distance between these two cities. There was huge applause from the crowd at the fact that SQL Server takes the curvature of the earth into account and finds precise distances from the details given. While finding the distance, I demonstrated a few examples of indexes, showing how one can use them for such distance queries and how they can improve the performance of similar queries. I also demonstrated a few examples showing for which data type an index is most useful. We finished the demos with a bit more on the internals.

Pinal Presenting session at TechEd India 2010

Despite all the issues, I was mostly satisfied with my presentation. I think it was the best session I have ever presented at any conference. There was no help from technology for a while, but I still got lots of appreciation at the end. When we ended the session, the applause from the audience was so loud that, for a moment, the rain was not audible. I was truly moved by the dedication of these technology enthusiasts.

Pinal Dave After Presenting session at TechEd India 2010

The abstract of the session is as follows: Microsoft SQL Server 2008 delivers new spatial data types that enable you to consume, use, and extend location-based data through spatial-enabled applications. Attend this session to learn how to use the spatial functionality in the next version of SQL Server to build and optimize spatial queries. This session outlines how to use the new geography data type to store geodetic spatial data and perform operations on it, use the new geometry data type to store planar spatial data and perform operations on it, take advantage of the new spatial indexes for high-performance queries, use the new spatial results tab to quickly and easily view spatial query results directly from within Management Studio, and extend spatial data capabilities by building or integrating location-enabled applications through support for spatial standards and specifications, and much more.
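For the curious, here is a minimal sketch of the distance demo's core idea (the coordinates are approximate and the variable names are mine, not necessarily those used on stage):

-- Sketch only: approximate coordinates for the two cities.
-- geography::Point(latitude, longitude, SRID); SRID 4326 is the
-- standard WGS 84 geodetic coordinate system.
DECLARE @Bangalore geography = geography::Point(12.97, 77.59, 4326);
DECLARE @Hyderabad geography = geography::Point(17.38, 78.48, 4326);

-- STDistance() returns meters measured along the earth's surface,
-- which is why the result matches the published aerial distance.
SELECT @Bangalore.STDistance(@Hyderabad) / 1000.0 AS DistanceInKm;

On a table holding many such geography points, a spatial index (created with CREATE SPATIAL INDEX) is what keeps proximity queries of this kind fast, which was the indexing half of the session.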
Time: 8:00 PM – onwards
Dinner by Sponsors

After the lively sessions of the day, there was another dinner party, courtesy of one of the sponsors of TechEd. All the MVPs and several Community leaders were present at the dinner. I would like to express my gratitude to Abhishek Kant for organizing this wonderful event for us. It was a blast and really relaxing from all angles. We all stayed there for a long time and talked about our sweet and unforgettable memories of the event.

Pinal Dave and Bijoy Singhal

It was really one wonderful event. Even after writing this much, I have no words to express how much I enjoyed TechEd. The truth is that I have shared with you only 1% of all the activities I did at the event. There were so many people I met who are not mentioned here, although I wanted to write their names here, too. Anyway, I learned so many things, and even now I am not able to get over all the fun I had at this event.

Pinal Dave at TechEd India 2010

The Next Days – April 15, 2010 – till today

I am still not able to get my mind off the whole experience I had at TechEd India 2010. It was like a whole Microsoft family working together to celebrate a happy occasion. TechEd India – Truly An Unforgettable Experience!

Reference: Pinal Dave (http://blog.SQLAuthority.com)

Filed under: About Me, MVP, Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, SQLAuthority News, SQLServer, T SQL, Technology
Tagged: TechEd, TechEdIn
