Search Results

Search found 27946 results on 1118 pages for 'output buffer empty'.


  • MySQL Replication Over SSH - Last_IO_Errno: 2003 - error connecting to master

    - by Dom
    I have MySQL MASTER/SLAVE replication working on two test boxes (CentOS 6.4 / MySQL 5.5.32) over the LAN. Securing the connection over SSH causes connection problems from the SLAVE machine (sample of show slave status \G output):
        Last_IO_Errno: 2003
        Last_IO_Error: error connecting to master '[email protected]:3305' - retry-time: 60
    I have granted the replication user the relevant privileges on the master server with both 127.0.0.1 and the network IP. I have forwarded the port from slave to master over SSH:
        ssh -f 192.168.0.128 -L 3305:192.168.0.128:3306 -N
    I can connect to the master MySQL from the slave with:
        mysql -urep -ppassword -h127.0.0.1 -P3305
    The master server setup would seem fine, as it works without a tunnel, and the tunnel seems fine, as I can connect to MySQL between the two. Change Master statement:
        CHANGE MASTER TO MASTER_HOST='127.0.0.1', MASTER_PORT=3305, MASTER_USER='rep', MASTER_PASSWORD='password';
    Note: I know there are reasons to use SSL instead of SSH, but I have reasons why SSH is a better choice for my setup.

    Read the article

  • "no MPM loaded", but I'm not even using mpm

    - by ezuk
    Running Apache2 on Ubuntu Precise64 in Vagrant. When I try to start it, it says:
        vagrant@precise64:/etc/apache2$ /etc/init.d/apache2 start
        * Starting web server apache2
        * The apache2 configtest failed.
        Output of config test was:
        AH00534: apache2: Configuration error: No MPM loaded.
        Action 'configtest' failed.
        The Apache error log may have more information.
    But the thing is, my /etc/apache2/apache2.conf file doesn't call for MPM anywhere! I would paste it here but it would make for a huge post... I tried looking up the error log, but I can't find that anywhere, either. Help?

    Read the article

  • ASP.NET MVC 3 - New Features

    - by imran_ku07
    Introduction:
    ASP.NET MVC 3 has just been released by the ASP.NET MVC team. It includes new features, changes, improvements and bug fixes. In this article I will show you the new features of ASP.NET MVC 3 and help you get started using them. Full details of the announcement are available at Announcing release of ASP.NET MVC 3, IIS Express, SQL CE 4, Web Farm Framework, Orchard, WebMatrix.

    Description:

    New Razor View Engine:
    The Razor view engine is one of the coolest new features in ASP.NET MVC 3. Razor speeds things up just a little bit more: it is much smaller and lighter in size, and it is very easy to learn. You could say 'write less, do more'. You can get started and learn more about Razor at Introducing “Razor” – a new view engine for ASP.NET.

    Granular Request Validation:
    Another big new feature in ASP.NET MVC 3 is granular request validation. The default request validator throws an exception when it sees < followed by an exclamation mark (like <!), < followed by the letters a through z (like <s), or & followed by a pound sign (like &#123) as part of the query string, posted form, headers or cookie collection. In previous versions of ASP.NET MVC you could control request validation using ValidateInputAttribute. In ASP.NET MVC 3 you can control request validation at the model level by annotating your model properties with a new attribute called AllowHtmlAttribute (see the sketch after this section). For details see Granular Request Validation in ASP.NET MVC 3.
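    To make the request-validation point concrete, here is a minimal sketch (not from the original article); the class and property names are made up for illustration, only the [AllowHtml] attribute itself is the MVC 3 feature being shown.

        using System.ComponentModel.DataAnnotations;
        using System.Web.Mvc;

        // Hypothetical model: only the Body property may contain markup;
        // all other fields are still checked by ASP.NET request validation.
        public class BlogPostModel
        {
            public int Id { get; set; }

            [Required]
            public string Title { get; set; }

            // [AllowHtml] (System.Web.Mvc) disables request validation
            // for this one property instead of for the whole action.
            [AllowHtml]
            public string Body { get; set; }
        }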
    Sessionless Controller Support:
    Sessionless controllers are another great new feature in ASP.NET MVC 3. With sessionless controllers you can easily control the session behavior of each controller. For example, you can mark your HomeController's session state as Disabled or ReadOnly, allowing concurrent request execution for a single user. For details see Concurrent Requests In ASP.NET MVC and HowTo: Sessionless Controller in MVC3 – what & why?.

    Unobtrusive Ajax and Unobtrusive Client Side Validation:
    Another cool new feature in ASP.NET MVC 3 is support for unobtrusive Ajax and unobtrusive client-side validation. This feature allows separation of responsibilities within your web application by separating your HTML from your script. For details see Unobtrusive Ajax in ASP.NET MVC 3 and Unobtrusive Client Validation in ASP.NET MVC 3.

    Dependency Resolver:
    The dependency resolver is another great feature of ASP.NET MVC 3. It allows you to register a dependency resolver that will be used by the framework. With this approach your application does not become tightly coupled, and dependencies are injected at run time. For details see ASP.NET MVC 3 Service Location.

    New Helper Methods:
    ASP.NET MVC 3 includes some helper methods from the ASP.NET Web Pages technology that cover common functionality. These helper methods include Chart, Crypto, WebGrid, WebImage and WebMail. For details of these helper methods please see the ASP.NET MVC 3 Release Notes. For using other ASP.NET Web Pages helper methods see Using ASP.NET Web Pages Helpers in ASP.NET MVC.

    Child Action Output Caching:
    ASP.NET MVC 3 also includes a feature called child action output caching. This allows you to cache only a portion of the response when you are using Html.RenderAction or Html.Action. The cache can be varied by action name, action method signature and action method parameter values. For details see this.

    RemoteAttribute:
    ASP.NET MVC 3 allows you to validate a form field by making a remote server call through Ajax. This makes it very easy to perform remote validation on the client side and quickly give feedback to the user. For details see How to: Implement Remote Validation in ASP.NET MVC.

    CompareAttribute:
    ASP.NET MVC 3 includes a new validation attribute called CompareAttribute. CompareAttribute allows you to compare the values of two different properties of a model (see the sketch after this section). For details see CompareAttribute in ASP.NET MVC 3.
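    As an illustration of the two validation attributes above, here is a minimal sketch (not from the original article); the model, controller and action names are hypothetical.

        using System.ComponentModel.DataAnnotations;
        using System.Web.Mvc;

        // Hypothetical registration model using the MVC 3 validation attributes.
        public class RegisterModel
        {
            // [Remote] calls AccountController.IsUserNameAvailable via Ajax
            // to validate the field on the client.
            [Required]
            [Remote("IsUserNameAvailable", "Account")]
            public string UserName { get; set; }

            [Required]
            [DataType(DataType.Password)]
            public string Password { get; set; }

            // [Compare] verifies this value matches the Password property.
            [DataType(DataType.Password)]
            [Compare("Password", ErrorMessage = "The passwords do not match.")]
            public string ConfirmPassword { get; set; }
        }

        // Hypothetical action that the RemoteAttribute posts to; it returns JSON.
        public class AccountController : Controller
        {
            public JsonResult IsUserNameAvailable(string userName)
            {
                bool available = userName != "admin"; // placeholder check
                return Json(available, JsonRequestBehavior.AllowGet);
            }
        }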
    Miscellaneous New Features:
    - ASP.NET MVC 2 includes FormValueProvider, QueryStringValueProvider, RouteDataValueProvider and HttpFileCollectionValueProvider. ASP.NET MVC 3 adds two additional value providers, ChildActionValueProvider and JsonValueProvider (JsonValueProvider does not physically exist as a separate class). ChildActionValueProvider is used when you issue a child request using the Html.Action and/or Html.RenderAction methods, so that the explicit parameter values you pass to Html.Action and/or Html.RenderAction always take precedence over other value providers. JsonValueProvider is used to model bind JSON data. For details see Sending JSON to an ASP.NET MVC Action Method Argument.
    - In ASP.NET MVC 3, a new property named FileExtensions has been added to the VirtualPathProviderViewEngine class. This property is used when looking up a view by path (not by name), so that only views with a file extension contained in the list specified by this property are considered. For details see VirtualPathProviderViewEngine.FileExtensions Property.
    - The ASP.NET MVC 3 installation package also includes the NuGet Package Manager, which is automatically installed when you install ASP.NET MVC 3. NuGet makes it easy to install and update open source libraries and tools in Visual Studio. See this for details.
    - In ASP.NET MVC 2, client-side validation does not trigger for overridden model properties. For example, if you have a model that contains some overridden properties, client-side validation does not trigger for them in ASP.NET MVC 2, but it does work for overridden properties in ASP.NET MVC 3.
    - Client-side validation is not supported for the StringLengthAttribute.MinimumLength property in ASP.NET MVC 2. In ASP.NET MVC 3, client-side validation works for StringLengthAttribute.MinimumLength.
    - ASP.NET MVC 3 includes new action results like HttpUnauthorizedResult, HttpNotFoundResult and HttpStatusCodeResult.
    - ASP.NET MVC 3 includes some new overloads of the LabelFor and LabelForModel methods. For details see LabelExtensions.LabelForModel and LabelExtensions.LabelFor.
    - In ASP.NET MVC 3, IControllerFactory includes a new method, GetControllerSessionBehavior, which is used to get a controller's session behavior. For details see IControllerFactory.GetControllerSessionBehavior Method.
    - In ASP.NET MVC 3, the Controller class includes a new property, ViewBag, which is of type dynamic. This property allows you to access the ViewData dictionary using the C# 4.0 dynamic features. For details see ControllerBase.ViewBag Property.
    - ModelMetadata includes a property, AdditionalValues, which is of type Dictionary. In ASP.NET MVC 3 you can populate this property using AdditionalMetadataAttribute. For details see AdditionalMetadataAttribute Class.
    - In ASP.NET MVC 3 you can also use MvcScaffolding to scaffold your views and controllers. For details see Scaffold your ASP.NET MVC 3 project with the MvcScaffolding package.
    - If you want to convert your application from ASP.NET MVC 2 to ASP.NET MVC 3, there is an excellent tool that automatically converts ASP.NET MVC 2 applications to ASP.NET MVC 3. For details see MVC 3 Project Upgrade Tool.
    - DisplayAttribute is not supported in ASP.NET MVC 2, but in ASP.NET MVC 3 DisplayAttribute works properly.
    - ASP.NET MVC 3 also supports model-level validation via the new IValidatableObject interface.
    - ASP.NET MVC 3 includes a new helper method, Html.Raw, which allows you to display unencoded HTML.

    Summary:
    In this article I showed you the new features of ASP.NET MVC 3. This should help you a lot when you start using ASP.NET MVC 3. I have also provided links where you can find further details. Hopefully you will enjoy this article too.

    Read the article

  • the size of apt-get update lists is too big

    - by dumb906
    I ran a clean install of Ubuntu 12.04 and so far everything has been working well. I especially commend the Ubuntu team for this release. I only noticed that the size of the repository update is now about ~13MB. Normally it is about this size the first time you run apt-get update after a clean install, and then ~23kB - 1300kB for subsequent updates. The output from apt-get update is the same as I get for previous versions of Ubuntu (it's pretty normal). It's a bit too long, but look at an example output I got from running apt-get update:
    Ign http://archive.canonical.com precise InRelease Ign http://dl.google.com stable InRelease Ign http://dl.google.com stable InRelease Ign http://ppa.launchpad.net precise InRelease Ign http://ppa.launchpad.net precise InRelease Ign http://ppa.launchpad.net precise InRelease Hit http://download.virtualbox.org precise InRelease Ign http://security.ubuntu.com precise-security InRelease Ign http://linux.dropbox.com precise InRelease Ign http://extras.ubuntu.com precise InRelease Ign http://download.skype.com stable InRelease Hit http://archive.canonical.com precise Release.gpg Get:1 http://dl.google.com stable Release.gpg [198 B] Ign http://ppa.launchpad.net precise InRelease Ign http://ppa.launchpad.net precise InRelease Ign http://ppa.launchpad.net precise InRelease Ign http://ppa.launchpad.net precise InRelease Ign http://ppa.launchpad.net precise InRelease Ign http://ppa.launchpad.net precise InRelease Ign http://ppa.launchpad.net precise InRelease Ign http://ppa.launchpad.net precise InRelease Ign http://ppa.launchpad.net oneiric InRelease Ign http://ppa.launchpad.net precise InRelease Get:2 http://security.ubuntu.com precise-security Release.gpg [198 B] Get:3 http://extras.ubuntu.com precise Release.gpg [72 B] Hit http://download.virtualbox.org precise/contrib i386 Packages Ign http://download.skype.com stable Release.gpg Hit http://linux.dropbox.com precise Release.gpg Ign http://us.archive.ubuntu.com precise InRelease Ign http://us.archive.ubuntu.com precise-updates InRelease Ign http://us.archive.ubuntu.com precise-backports InRelease Hit http://archive.canonical.com precise Release Get:4 http://dl.google.com stable Release.gpg [198 B] Ign http://ppa.launchpad.net oneiric InRelease Ign http://ppa.launchpad.net oneiric InRelease Ign http://ppa.launchpad.net precise InRelease Ign http://ppa.launchpad.net precise InRelease Ign http://ppa.launchpad.net precise InRelease Hit http://ppa.launchpad.net precise Release.gpg Hit http://ppa.launchpad.net precise Release.gpg Get:5 http://security.ubuntu.com precise-security Release [49.6 kB] Hit http://extras.ubuntu.com precise Release Ign http://download.skype.com stable Release Ign http://download.virtualbox.org precise/contrib TranslationIndex Get:6 http://us.archive.ubuntu.com precise Release.gpg [198 B] Hit http://archive.canonical.com precise/partner i386 Packages Hit http://linux.dropbox.com precise Release Get:7 http://ppa.launchpad.net precise Release.gpg [316 B] Hit http://ppa.launchpad.net precise Release.gpg Hit http://ppa.launchpad.net precise Release.gpg Hit http://extras.ubuntu.com precise/main Sources Get:8 http://ppa.launchpad.net precise Release.gpg [316 B] Hit http://ppa.launchpad.net precise Release.gpg Hit http://ppa.launchpad.net precise Release.gpg Hit http://ppa.launchpad.net precise Release.gpg Hit http://ppa.launchpad.net precise Release.gpg Get:9 http://us.archive.ubuntu.com precise-updates Release.gpg [198 B] Ign http://archive.canonical.com precise/partner TranslationIndex Ign
http://download.skype.com stable/non-free i386 Packages/DiffIndex Get:10 http://dl.google.com stable Release [1,347 B] Hit http://linux.dropbox.com precise/main i386 Packages Hit http://ppa.launchpad.net precise Release.gpg Hit http://ppa.launchpad.net oneiric Release.gpg Hit http://extras.ubuntu.com precise/main i386 Packages Ign http://extras.ubuntu.com precise/main TranslationIndex Hit http://ppa.launchpad.net precise Release.gpg Hit http://ppa.launchpad.net oneiric Release.gpg Hit http://ppa.launchpad.net oneiric Release.gpg Hit http://ppa.launchpad.net precise Release.gpg Hit http://ppa.launchpad.net precise Release.gpg Get:11 http://us.archive.ubuntu.com precise-backports Release.gpg [198 B] Ign http://download.skype.com stable/non-free TranslationIndex Get:12 http://dl.google.com stable Release [1,347 B] Hit http://ppa.launchpad.net precise Release.gpg Hit http://ppa.launchpad.net precise Release Hit http://ppa.launchpad.net precise Release Ign http://linux.dropbox.com precise/main TranslationIndex Hit http://ppa.launchpad.net precise Release Ign http://ppa.launchpad.net precise Release Hit http://ppa.launchpad.net precise Release Hit http://ppa.launchpad.net precise Release Get:13 http://ppa.launchpad.net precise Release [11.9 kB] Get:14 http://us.archive.ubuntu.com precise Release [49.6 kB] Hit http://download.skype.com stable/non-free i386 Packages Get:15 http://dl.google.com stable/main i386 Packages [1,268 B] Ign http://dl.google.com stable/main TranslationIndex Hit http://ppa.launchpad.net precise Release Hit http://ppa.launchpad.net precise Release Hit http://ppa.launchpad.net precise Release Hit http://ppa.launchpad.net precise Release Hit http://ppa.launchpad.net precise Release Hit http://ppa.launchpad.net oneiric Release Hit http://ppa.launchpad.net precise Release Hit http://ppa.launchpad.net oneiric Release Get:16 http://security.ubuntu.com precise-security/main Sources [7,089 B] Hit http://ppa.launchpad.net oneiric Release Get:17 http://dl.google.com stable/main i386 Packages [769 B] Ign http://dl.google.com stable/main TranslationIndex Hit http://ppa.launchpad.net precise Release Hit http://ppa.launchpad.net precise Release Hit http://ppa.launchpad.net precise Release Hit http://ppa.launchpad.net precise/main Sources Hit http://ppa.launchpad.net precise/main i386 Packages Get:18 http://security.ubuntu.com precise-security/restricted Sources [14 B] Get:19 http://security.ubuntu.com precise-security/universe Sources [3,653 B] Get:20 http://security.ubuntu.com precise-security/multiverse Sources [696 B] Get:21 http://security.ubuntu.com precise-security/main i386 Packages [32.9 kB] Ign http://ppa.launchpad.net precise/main TranslationIndex Hit http://ppa.launchpad.net precise/main Sources Hit http://ppa.launchpad.net precise/main i386 Packages Ign http://ppa.launchpad.net precise/main TranslationIndex Get:22 http://us.archive.ubuntu.com precise-updates Release [49.6 kB] Ign http://ppa.launchpad.net precise/main Sources/DiffIndex Ign http://ppa.launchpad.net precise/main i386 Packages/DiffIndex Ign http://ppa.launchpad.net precise/main TranslationIndex Hit http://ppa.launchpad.net precise/main Sources Hit http://ppa.launchpad.net precise/main i386 Packages Get:23 http://security.ubuntu.com precise-security/restricted i386 Packages [14 B] Get:24 http://security.ubuntu.com precise-security/universe i386 Packages [8,594 B] Get:25 http://security.ubuntu.com precise-security/multiverse i386 Packages [1,393 B] Hit http://security.ubuntu.com precise-security/main 
TranslationIndex Hit http://security.ubuntu.com precise-security/multiverse TranslationIndex Hit http://security.ubuntu.com precise-security/restricted TranslationIndex Hit http://security.ubuntu.com precise-security/universe TranslationIndex Ign http://ppa.launchpad.net precise/main TranslationIndex Get:26 http://us.archive.ubuntu.com precise-backports Release [49.6 kB] Hit http://ppa.launchpad.net precise/main Sources Hit http://ppa.launchpad.net precise/main i386 Packages Ign http://ppa.launchpad.net precise/main TranslationIndex Get:27 http://ppa.launchpad.net precise/main i386 Packages [1,276 B] Ign http://ppa.launchpad.net precise/main TranslationIndex Hit http://ppa.launchpad.net precise/main Sources Hit http://ppa.launchpad.net precise/main i386 Packages Ign http://ppa.launchpad.net precise/main TranslationIndex Hit http://ppa.launchpad.net precise/main Sources Get:28 http://us.archive.ubuntu.com precise/main Sources [934 kB] Hit http://ppa.launchpad.net precise/main i386 Packages Ign http://ppa.launchpad.net precise/main TranslationIndex Hit http://ppa.launchpad.net precise/main Sources Hit http://ppa.launchpad.net precise/main i386 Packages Ign http://ppa.launchpad.net precise/main TranslationIndex Hit http://ppa.launchpad.net precise/main i386 Packages Hit http://security.ubuntu.com precise-security/main Translation-en Hit http://security.ubuntu.com precise-security/multiverse Translation-en Hit http://security.ubuntu.com precise-security/restricted Translation-en Ign http://ppa.launchpad.net precise/main TranslationIndex Hit http://ppa.launchpad.net precise/main i386 Packages Ign http://ppa.launchpad.net precise/main TranslationIndex Hit http://ppa.launchpad.net oneiric/main Sources Hit http://ppa.launchpad.net oneiric/main i386 Packages Ign http://ppa.launchpad.net oneiric/main TranslationIndex Hit http://ppa.launchpad.net precise/main i386 Packages Ign http://ppa.launchpad.net precise/main TranslationIndex Hit http://ppa.launchpad.net oneiric/main Sources Hit http://security.ubuntu.com precise-security/universe Translation-en Ign http://archive.canonical.com precise/partner Translation-en_US Hit http://ppa.launchpad.net oneiric/main i386 Packages Ign http://ppa.launchpad.net oneiric/main TranslationIndex Hit http://ppa.launchpad.net oneiric/main Sources Hit http://ppa.launchpad.net oneiric/main i386 Packages Ign http://ppa.launchpad.net oneiric/main TranslationIndex Hit http://ppa.launchpad.net precise/main Sources Ign http://extras.ubuntu.com precise/main Translation-en_US Ign http://download.virtualbox.org precise/contrib Translation-en_US Ign http://archive.canonical.com precise/partner Translation-en Hit http://ppa.launchpad.net precise/main i386 Packages Ign http://ppa.launchpad.net precise/main TranslationIndex Hit http://ppa.launchpad.net precise/main Sources Hit http://ppa.launchpad.net precise/main i386 Packages Ign http://ppa.launchpad.net precise/main TranslationIndex Ign http://extras.ubuntu.com precise/main Translation-en Ign http://download.virtualbox.org precise/contrib Translation-en Hit http://ppa.launchpad.net precise/main Sources Hit http://ppa.launchpad.net precise/main i386 Packages Ign http://ppa.launchpad.net precise/main TranslationIndex Hit http://ppa.launchpad.net precise/main Sources Ign http://linux.dropbox.com precise/main Translation-en_US Hit http://ppa.launchpad.net precise/main i386 Packages Ign http://download.skype.com stable/non-free Translation-en_US Ign http://linux.dropbox.com precise/main Translation-en Ign http://download.skype.com 
stable/non-free Translation-en Ign http://dl.google.com stable/main Translation-en_US Ign http://dl.google.com stable/main Translation-en Ign http://dl.google.com stable/main Translation-en_US Get:29 http://us.archive.ubuntu.com precise/restricted Sources [5,470 B] Get:30 http://us.archive.ubuntu.com precise/universe Sources [5,019 kB] Ign http://dl.google.com stable/main Translation-en Get:31 http://us.archive.ubuntu.com precise/multiverse Sources [155 kB] Get:32 http://us.archive.ubuntu.com precise/main i386 Packages [1,274 kB] Get:33 http://us.archive.ubuntu.com precise/restricted i386 Packages [8,431 B] Get:34 http://us.archive.ubuntu.com precise/universe i386 Packages [4,796 kB] Ign http://ppa.launchpad.net precise/main Translation-en_US Ign http://ppa.launchpad.net precise/main Translation-en Ign http://ppa.launchpad.net precise/main Translation-en_US Ign http://ppa.launchpad.net precise/main Translation-en Ign http://ppa.launchpad.net precise/main Translation-en_US Ign http://ppa.launchpad.net precise/main Translation-en Ign http://ppa.launchpad.net precise/main Translation-en_US Ign http://ppa.launchpad.net precise/main Translation-en Ign http://ppa.launchpad.net precise/main Translation-en_US Ign http://ppa.launchpad.net precise/main Translation-en Ign http://ppa.launchpad.net precise/main Translation-en_US Ign http://ppa.launchpad.net precise/main Translation-en Ign http://ppa.launchpad.net precise/main Translation-en_US Ign http://ppa.launchpad.net precise/main Translation-en Ign http://ppa.launchpad.net precise/main Translation-en_US Ign http://ppa.launchpad.net precise/main Translation-en Ign http://ppa.launchpad.net precise/main Translation-en_US Ign http://ppa.launchpad.net precise/main Translation-en Ign http://ppa.launchpad.net precise/main Translation-en_US Ign http://ppa.launchpad.net precise/main Translation-en Ign http://ppa.launchpad.net precise/main Translation-en_US Ign http://ppa.launchpad.net precise/main Translation-en Ign http://ppa.launchpad.net oneiric/main Translation-en_US Ign http://ppa.launchpad.net oneiric/main Translation-en Ign http://ppa.launchpad.net precise/main Translation-en_US Ign http://ppa.launchpad.net precise/main Translation-en Ign http://ppa.launchpad.net oneiric/main Translation-en_US Ign http://ppa.launchpad.net oneiric/main Translation-en Ign http://ppa.launchpad.net oneiric/main Translation-en_US Ign http://ppa.launchpad.net oneiric/main Translation-en Ign http://ppa.launchpad.net precise/main Translation-en_US Ign http://ppa.launchpad.net precise/main Translation-en Ign http://ppa.launchpad.net precise/main Translation-en_US Ign http://ppa.launchpad.net precise/main Translation-en Ign http://ppa.launchpad.net precise/main Translation-en_US Ign http://ppa.launchpad.net precise/main Translation-en Get:35 http://us.archive.ubuntu.com precise/multiverse i386 Packages [121 kB] Hit http://us.archive.ubuntu.com precise/main TranslationIndex Hit http://us.archive.ubuntu.com precise/multiverse TranslationIndex Hit http://us.archive.ubuntu.com precise/restricted TranslationIndex Hit http://us.archive.ubuntu.com precise/universe TranslationIndex Get:36 http://us.archive.ubuntu.com precise-updates/main Sources [31.2 kB] Get:37 http://us.archive.ubuntu.com precise-updates/restricted Sources [765 B] Get:38 http://us.archive.ubuntu.com precise-updates/universe Sources [10.1 kB] Get:39 http://us.archive.ubuntu.com precise-updates/multiverse Sources [696 B] Get:40 http://us.archive.ubuntu.com precise-updates/main i386 Packages [96.5 kB] Get:41 
http://us.archive.ubuntu.com precise-updates/restricted i386 Packages [770 B] Get:42 http://us.archive.ubuntu.com precise-updates/universe i386 Packages [27.7 kB] Get:43 http://us.archive.ubuntu.com precise-updates/multiverse i386 Packages [1,393 B] Hit http://us.archive.ubuntu.com precise-updates/main TranslationIndex Hit http://us.archive.ubuntu.com precise-updates/multiverse TranslationIndex Hit http://us.archive.ubuntu.com precise-updates/restricted TranslationIndex Hit http://us.archive.ubuntu.com precise-updates/universe TranslationIndex Get:44 http://us.archive.ubuntu.com precise-backports/main Sources [700 B] Get:45 http://us.archive.ubuntu.com precise-backports/restricted Sources [14 B] Get:46 http://us.archive.ubuntu.com precise-backports/universe Sources [1,680 B] Get:47 http://us.archive.ubuntu.com precise-backports/multiverse Sources [14 B] Get:48 http://us.archive.ubuntu.com precise-backports/main i386 Packages [559 B] Get:49 http://us.archive.ubuntu.com precise-backports/restricted i386 Packages [14 B] Get:50 http://us.archive.ubuntu.com precise-backports/universe i386 Packages [1,391 B] Get:51 http://us.archive.ubuntu.com precise-backports/multiverse i386 Packages [14 B] Hit http://us.archive.ubuntu.com precise-backports/main TranslationIndex Hit http://us.archive.ubuntu.com precise-backports/multiverse TranslationIndex Hit http://us.archive.ubuntu.com precise-backports/restricted TranslationIndex Hit http://us.archive.ubuntu.com precise-backports/universe TranslationIndex Hit http://us.archive.ubuntu.com precise/main Translation-en Hit http://us.archive.ubuntu.com precise/multiverse Translation-en Hit http://us.archive.ubuntu.com precise/restricted Translation-en Hit http://us.archive.ubuntu.com precise/universe Translation-en Hit http://us.archive.ubuntu.com precise-updates/main Translation-en Hit http://us.archive.ubuntu.com precise-updates/multiverse Translation-en Hit http://us.archive.ubuntu.com precise-updates/restricted Translation-en Hit http://us.archive.ubuntu.com precise-updates/universe Translation-en Hit http://us.archive.ubuntu.com precise-backports/main Translation-en Hit http://us.archive.ubuntu.com precise-backports/multiverse Translation-en Hit http://us.archive.ubuntu.com precise-backports/restricted Translation-en Hit http://us.archive.ubuntu.com precise-backports/universe Translation-en Fetched 12.8 MB in 1min 33s (137 kB/s) Is this a new feature in 12.04? Or, if it is unintended, is there a way I can fix this? Thanks.

    Read the article

  • Creating Key Flex Field (KFF) Bean in OAF

    - by Manoj Madhusoodanan
    This blog describes how to create a KFF in an OAF page. Here I am going to demonstrate with the standard Job KFF. I have created a new structure in the Job KFF which I am going to use in my custom OAF page (please see the pic below). In the newly created structure I have created the following segments; you can see the value sets also. In the custom page I have created an item with the following properties. In the Segment List property you specify which segments are going to be used in the KFF. The syntax for defining it is: KFF Code|Segment1|Segment2|Segment3|Segment4|Segment N. In the table, just create a field to hold the code combination (please click here to see the table script). The code combination will go into the JOB_ID field of the table. In the processRequest method of the page controller, add the following code snippet to attach the KFF structure and the CCID column to the KFF bean. Deploy the files belonging to this solution to the server. Following is the output of the above solution. If you click on JobId you can see the KFF window. The Code Combination Id has been created in the table.

    Read the article

  • Web site not responding

    - by Subhransu
    I have website working fine before. But now its not able to connect to the server(I believe that is the problem). But its strange that the message not able to connect to the server is not coming and its keep connecting... for infinite time. Here is the screenshot. Here are some of the useful details about the status of the server. Application starts when server wakes up are: cd /etc/init.d/ Application server running in my server : Traceroute: UPDATE: ps aux USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND root 1 0.0 0.0 19204 744 ? Ss Aug07 0:01 /sbin/init root 2 0.0 0.0 0 0 ? S Aug07 0:00 [kthreadd] root 3 0.0 0.0 0 0 ? S Aug07 0:00 [migration/0] root 4 0.0 0.0 0 0 ? S Aug07 7:15 [ksoftirqd/0] root 5 0.0 0.0 0 0 ? S Aug07 0:00 [migration/0] root 6 0.0 0.0 0 0 ? S Aug07 0:00 [watchdog/0] root 7 0.0 0.0 0 0 ? S Aug07 0:05 [events/0] root 8 0.0 0.0 0 0 ? S Aug07 0:00 [cpuset] root 9 0.0 0.0 0 0 ? S Aug07 0:00 [khelper] root 10 0.0 0.0 0 0 ? S Aug07 0:00 [netns] root 11 0.0 0.0 0 0 ? S Aug07 0:00 [async/mgr] root 12 0.0 0.0 0 0 ? S Aug07 0:00 [pm] root 13 0.0 0.0 0 0 ? S Aug07 0:00 [sync_supers] root 14 0.0 0.0 0 0 ? S Aug07 0:00 [bdi-default] root 15 0.0 0.0 0 0 ? S Aug07 0:00 [kintegrityd/0] root 16 0.0 0.0 0 0 ? S Aug07 0:24 [kblockd/0] root 17 0.0 0.0 0 0 ? S Aug07 0:00 [kacpid] root 18 0.0 0.0 0 0 ? S Aug07 0:00 [kacpi_notify] root 19 0.0 0.0 0 0 ? S Aug07 0:00 [kacpi_hotplug] root 20 0.0 0.0 0 0 ? S Aug07 0:00 [ata/0] root 21 0.0 0.0 0 0 ? S Aug07 0:00 [ata_aux] root 22 0.0 0.0 0 0 ? S Aug07 0:00 [ksuspend_usbd] root 23 0.0 0.0 0 0 ? S Aug07 0:00 [khubd] root 24 0.0 0.0 0 0 ? S Aug07 0:00 [kseriod] root 25 0.0 0.0 0 0 ? S Aug07 0:00 [md/0] root 26 0.0 0.0 0 0 ? S Aug07 0:00 [md_misc/0] root 27 0.0 0.0 0 0 ? S Aug07 0:00 [khungtaskd] root 28 0.0 0.0 0 0 ? S Aug07 0:19 [kswapd0] root 29 0.0 0.0 0 0 ? SN Aug07 0:00 [ksmd] root 30 0.0 0.0 0 0 ? SN Aug07 1:36 [khugepaged] root 31 0.0 0.0 0 0 ? S Aug07 0:00 [aio/0] root 32 0.0 0.0 0 0 ? S Aug07 0:00 [crypto/0] root 37 0.0 0.0 0 0 ? S Aug07 0:00 [kthrotld/0] root 38 0.0 0.0 0 0 ? S Aug07 0:00 [pciehpd] root 40 0.0 0.0 0 0 ? S Aug07 0:00 [kpsmoused] root 41 0.0 0.0 0 0 ? S Aug07 0:00 [usbhid_resumer] root 71 0.0 0.0 0 0 ? S Aug07 0:00 [kstriped] root 203 0.0 0.0 0 0 ? S Aug07 0:00 [scsi_eh_0] root 206 0.0 0.0 0 0 ? S Aug07 0:00 [scsi_eh_1] root 213 0.0 0.0 0 0 ? S Aug07 0:00 [mpt_poll_0] root 214 0.0 0.0 0 0 ? S Aug07 0:00 [mpt/0] root 215 0.0 0.0 0 0 ? S Aug07 0:00 [scsi_eh_2] root 317 0.0 0.0 0 0 ? S Aug07 0:00 [kdmflush] root 319 0.0 0.0 0 0 ? S Aug07 0:00 [kdmflush] root 338 0.0 0.0 0 0 ? S Aug07 4:30 [jbd2/dm-0-8] root 339 0.0 0.0 0 0 ? S Aug07 0:00 [ext4-dio-unwrit] root 411 0.0 0.0 11060 224 ? S<s Aug07 0:00 /sbin/udevd -d root 591 0.0 0.0 0 0 ? S Aug07 0:00 [vmmemctl] root 732 0.0 0.0 0 0 ? S Aug07 0:00 [jbd2/sda1-8] root 733 0.0 0.0 0 0 ? S Aug07 0:00 [ext4-dio-unwrit] root 770 0.0 0.0 0 0 ? S Aug07 0:00 [kauditd] root 907 0.0 0.0 0 0 ? S Aug07 0:02 [flush-253:0] root 963 0.0 0.0 93180 528 ? S<sl Aug07 0:00 auditd root 979 0.0 0.0 248680 1132 ? Sl Aug07 0:04 /sbin/rsyslogd -i /var/run/syslogd.pid -c 4 dbus 991 0.0 0.0 31740 348 ? Ssl Aug07 0:00 dbus-daemon --system root 1023 0.0 0.0 64032 456 ? Ss Aug07 0:01 /usr/sbin/sshd root 1031 0.0 0.0 22076 592 ? Ss Aug07 0:00 xinetd -stayalive -pidfile /var/run/xinetd.pid root 1107 0.0 0.0 78652 744 ? Ss Aug07 0:01 /usr/libexec/postfix/master postfix 1116 0.0 0.0 78904 852 ? S Aug07 0:00 qmgr -l -t fifo -u qpidd 1129 0.0 0.0 234596 1488 ? 
Ssl Aug07 1:54 /usr/sbin/qpidd --data-dir /var/lib/qpidd --daemon root 1181 0.0 0.0 117176 532 ? Ss Aug07 0:04 crond root 1217 0.0 0.0 108152 412 ? S Aug07 0:00 /bin/sh /usr/bin/mysqld_safe --datadir=/var/lib/mysql --socket=/var/lib/mysql/m mysql 1306 0.0 1.8 792636 72640 ? Sl Aug07 6:51 /usr/libexec/mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --log- root 1334 0.0 0.1 739156 5520 ? Ssl Aug07 0:34 /usr/sbin/shibd -p /var/run/shibboleth/shibd.pid -f -w 30 root 1355 0.0 0.0 4048 272 tty2 Ss+ Aug07 0:00 /sbin/mingetty /dev/tty2 root 1357 0.0 0.0 4048 272 tty3 Ss+ Aug07 0:00 /sbin/mingetty /dev/tty3 root 1360 0.0 0.0 12336 264 ? S< Aug07 0:00 /sbin/udevd -d root 1361 0.0 0.0 12336 240 ? S< Aug07 0:00 /sbin/udevd -d root 1362 0.0 0.0 4048 272 tty4 Ss+ Aug07 0:00 /sbin/mingetty /dev/tty4 root 1364 0.0 0.0 4048 272 tty5 Ss+ Aug07 0:00 /sbin/mingetty /dev/tty5 root 1366 0.0 0.0 4048 272 tty6 Ss+ Aug07 0:00 /sbin/mingetty /dev/tty6 root 1394 0.0 0.0 574892 436 ? Sl Aug07 0:00 /usr/sbin/console-kit-daemon --no-daemon root 1495 0.0 0.0 4048 264 tty1 Ss+ Aug07 0:00 /sbin/mingetty /dev/tty1 root 7665 0.0 0.1 296304 6244 ? Ss Aug16 2:33 /usr/sbin/httpd apache 10298 0.0 0.2 457756 10472 ? Sl Sep07 3:35 /usr/sbin/httpd apache 11684 0.0 0.5 465352 20708 ? Sl Sep12 0:02 /usr/sbin/httpd apache 14570 0.0 0.7 475592 30628 ? Sl Sep12 0:02 /usr/sbin/httpd apache 14877 0.0 0.5 467868 22696 ? Sl Sep12 0:01 /usr/sbin/httpd apache 15128 0.0 0.4 464628 19096 ? Sl Sep12 0:01 /usr/sbin/httpd apache 15151 0.0 0.4 464624 18980 ? Sl Sep12 0:01 /usr/sbin/httpd apache 15169 0.0 0.6 470268 24636 ? Sl Sep12 0:01 /usr/sbin/httpd apache 15238 0.0 0.4 464628 19108 ? Sl Sep12 0:01 /usr/sbin/httpd apache 15266 0.0 0.4 464624 18920 ? Sl Sep12 0:02 /usr/sbin/httpd apache 15312 0.0 0.4 464624 18724 ? Sl Sep12 0:01 /usr/sbin/httpd apache 15427 0.0 0.6 470268 24644 ? Sl Sep12 0:00 /usr/sbin/httpd apache 15814 0.0 0.4 464884 19296 ? Sl 00:14 0:01 /usr/sbin/httpd apache 15830 0.0 0.4 464628 19028 ? Sl 00:24 0:00 /usr/sbin/httpd apache 15859 0.0 0.7 475524 30320 ? Sl 00:31 0:00 /usr/sbin/httpd apache 15897 0.0 0.6 471876 26056 ? Sl 00:42 0:00 /usr/sbin/httpd apache 15926 0.0 0.4 464884 18936 ? Sl 00:46 0:01 /usr/sbin/httpd apache 15970 0.0 0.6 470268 24216 ? Sl 00:57 0:00 /usr/sbin/httpd apache 16010 0.0 0.4 464884 18912 ? Sl 01:04 0:00 /usr/sbin/httpd apache 16023 0.0 0.3 457756 12300 ? Sl 01:05 0:02 /usr/sbin/httpd apache 16176 0.0 0.4 464624 18568 ? Sl 02:01 0:01 /usr/sbin/httpd apache 16213 0.0 0.4 464624 18900 ? Sl 02:22 0:01 /usr/sbin/httpd apache 16240 0.0 0.4 464884 18828 ? Sl 02:35 0:00 /usr/sbin/httpd root 16313 0.0 0.0 19372 968 ? Ss 03:01 0:00 /usr/sbin/anacron -s apache 16361 0.0 0.4 464624 18572 ? Sl 03:17 0:00 /usr/sbin/httpd apache 16364 0.0 0.4 464884 19284 ? Sl 03:19 0:01 /usr/sbin/httpd root 16421 0.0 0.0 9180 1300 ? SN 03:37 0:00 /bin/bash /usr/bin/run-parts /etc/cron.daily root 16426 0.0 0.0 9312 1404 ? SN 03:37 0:00 /bin/bash /etc/cron.daily/backupdb root 16427 0.0 0.0 9064 820 ? SN 03:37 0:00 awk -v progname /etc/cron.daily/backupdb progname {????? print progname ":\n" root 16434 0.0 0.0 50776 2420 ? SN 03:37 0:00 mysqldump --opt --quote-names -u root -px xxx inamiriziv_dokeos_user personal_a root 16435 0.0 0.0 4280 536 ? SN 03:37 0:00 gzip --rsyncable apache 16484 0.0 0.2 457584 11432 ? Sl 03:55 0:04 /usr/sbin/httpd apache 16492 0.0 0.4 464884 19320 ? Sl 03:58 0:02 /usr/sbin/httpd apache 16496 0.0 0.4 464624 18704 ? Sl 04:00 0:02 /usr/sbin/httpd apache 16529 0.0 0.6 470268 24608 ? 
Sl 04:06 0:02 /usr/sbin/httpd apache 16533 0.0 0.4 464624 18532 ? Sl 04:10 0:00 /usr/sbin/httpd apache 16536 0.0 0.4 464884 18908 ? Sl 04:10 0:00 /usr/sbin/httpd apache 16556 0.0 0.4 464884 18924 ? Sl 04:18 0:02 /usr/sbin/httpd apache 16563 0.0 0.3 457756 12384 ? Sl 04:19 0:07 /usr/sbin/httpd apache 16598 0.0 0.3 457756 12344 ? Sl 04:28 0:02 /usr/sbin/httpd apache 16633 0.0 0.4 464624 18492 ? Sl 04:41 0:00 /usr/sbin/httpd apache 16637 0.0 0.6 470268 24300 ? Sl 04:41 0:02 /usr/sbin/httpd apache 16654 0.0 0.3 457756 12296 ? Sl 04:47 0:02 /usr/sbin/httpd apache 16665 0.0 0.6 470268 24308 ? Sl 04:50 0:03 /usr/sbin/httpd apache 16738 0.0 0.6 470268 24312 ? Sl 05:10 0:02 /usr/sbin/httpd apache 17388 0.0 0.2 457584 11440 ? Sl 08:56 0:01 /usr/sbin/httpd apache 17391 0.0 0.3 457756 12296 ? Sl 08:57 0:00 /usr/sbin/httpd apache 17397 0.0 0.3 457756 12312 ? Sl 08:59 0:00 /usr/sbin/httpd apache 17401 0.0 0.3 457756 12284 ? Sl 09:00 0:00 /usr/sbin/httpd apache 17420 0.0 0.2 457584 11436 ? Sl 09:04 0:01 /usr/sbin/httpd apache 17426 0.0 0.3 457756 12324 ? Sl 09:07 0:01 /usr/sbin/httpd apache 17431 0.0 0.3 457756 12276 ? Sl 09:08 0:03 /usr/sbin/httpd apache 17434 0.0 0.3 457756 12308 ? Sl 09:08 0:00 /usr/sbin/httpd apache 17437 0.0 0.2 457584 11440 ? Sl 09:09 0:01 /usr/sbin/httpd apache 17442 0.0 0.2 457584 11436 ? Sl 09:10 0:01 /usr/sbin/httpd apache 17445 0.0 0.3 457756 12328 ? Sl 09:11 0:01 /usr/sbin/httpd apache 17449 0.0 0.3 457756 12292 ? Sl 09:12 0:01 /usr/sbin/httpd apache 17454 0.0 0.2 457584 11444 ? Sl 09:15 0:01 /usr/sbin/httpd apache 17457 0.0 0.2 457584 11436 ? Sl 09:15 0:01 /usr/sbin/httpd apache 17461 0.0 0.3 457756 12304 ? Sl 09:16 0:01 /usr/sbin/httpd apache 17465 0.0 0.2 457584 11444 ? Sl 09:18 0:01 /usr/sbin/httpd apache 17468 0.0 0.2 457584 11436 ? Sl 09:18 0:01 /usr/sbin/httpd apache 17473 0.0 0.4 464884 18940 ? Sl 09:19 0:00 /usr/sbin/httpd apache 17476 0.0 0.4 464628 18736 ? Sl 09:20 0:00 /usr/sbin/httpd apache 17479 0.0 0.2 457584 11440 ? Sl 09:20 0:01 /usr/sbin/httpd apache 17483 0.0 0.2 457584 11416 ? Sl 09:21 0:00 /usr/sbin/httpd apache 17486 0.0 0.3 457756 12296 ? Sl 09:21 0:01 /usr/sbin/httpd apache 17489 0.0 0.4 464884 18928 ? Sl 09:21 0:00 /usr/sbin/httpd apache 17492 0.0 0.2 457584 11260 ? Sl 09:22 0:00 /usr/sbin/httpd apache 17496 0.0 0.3 457756 12372 ? Sl 09:22 0:01 /usr/sbin/httpd apache 17500 0.0 0.2 457584 11428 ? Sl 09:23 0:00 /usr/sbin/httpd apache 17504 0.0 0.2 457584 11432 ? Sl 09:25 0:00 /usr/sbin/httpd apache 17509 0.0 0.3 457756 12336 ? Sl 09:27 0:01 /usr/sbin/httpd apache 17513 0.0 0.2 457584 11436 ? Sl 09:29 0:01 /usr/sbin/httpd apache 17517 0.0 0.2 457584 11448 ? Sl 09:31 0:00 /usr/sbin/httpd apache 17520 0.0 0.3 457584 12128 ? Sl 09:32 0:00 /usr/sbin/httpd apache 17525 0.0 0.4 464884 18960 ? Sl 09:34 0:00 /usr/sbin/httpd apache 17529 0.0 0.2 457584 11420 ? Sl 09:36 0:00 /usr/sbin/httpd apache 17533 0.0 0.2 457584 11436 ? Sl 09:38 0:00 /usr/sbin/httpd apache 17537 0.0 0.2 457584 11436 ? Sl 09:38 0:00 /usr/sbin/httpd apache 17542 0.0 0.4 464884 18840 ? Sl 09:40 0:00 /usr/sbin/httpd apache 17546 0.0 0.3 457756 12320 ? Sl 09:41 0:00 /usr/sbin/httpd apache 17550 0.0 0.2 457584 11440 ? Sl 09:42 0:00 /usr/sbin/httpd apache 17554 0.0 0.2 457584 11436 ? Sl 09:43 0:00 /usr/sbin/httpd apache 17557 0.0 0.2 457584 11436 ? Sl 09:44 0:00 /usr/sbin/httpd apache 17560 0.0 0.2 457584 11428 ? Sl 09:44 0:01 /usr/sbin/httpd apache 17568 0.0 0.4 464884 18824 ? Sl 09:48 0:00 /usr/sbin/httpd apache 17572 0.0 0.2 457584 11428 ? 
Sl 09:48 0:00 /usr/sbin/httpd apache 17575 0.0 0.2 457584 11428 ? Sl 09:48 0:01 /usr/sbin/httpd apache 17583 0.0 0.2 457584 11432 ? Sl 09:50 0:00 /usr/sbin/httpd apache 17586 0.0 0.3 457756 12264 ? Sl 09:50 0:00 /usr/sbin/httpd apache 17589 0.0 0.2 457584 11420 ? Sl 09:51 0:00 /usr/sbin/httpd apache 17597 0.0 0.2 457584 11420 ? Sl 09:53 0:02 /usr/sbin/httpd apache 17600 0.0 0.3 457756 12376 ? Sl 09:54 0:00 /usr/sbin/httpd apache 17604 0.0 0.2 457584 11436 ? Sl 09:55 0:00 /usr/sbin/httpd apache 17610 0.0 0.2 457584 11420 ? Sl 09:59 0:00 /usr/sbin/httpd apache 17615 0.0 0.2 457584 11424 ? Sl 10:00 0:00 /usr/sbin/httpd apache 17618 0.0 0.4 464884 19288 ? Sl 10:00 0:00 /usr/sbin/httpd apache 17635 0.0 0.2 457584 11416 ? Sl 10:01 0:00 /usr/sbin/httpd apache 17639 0.0 0.2 457584 11440 ? Sl 10:02 0:00 /usr/sbin/httpd apache 17643 0.0 0.2 457584 11448 ? Sl 10:03 0:00 /usr/sbin/httpd apache 17648 0.0 0.4 464884 18868 ? Sl 10:06 0:00 /usr/sbin/httpd apache 17651 0.0 0.2 457584 11416 ? Sl 10:07 0:00 /usr/sbin/httpd apache 17655 0.0 0.3 457756 12268 ? Sl 10:08 0:01 /usr/sbin/httpd apache 17658 0.0 0.2 457584 11440 ? Sl 10:08 0:00 /usr/sbin/httpd apache 17663 0.0 0.3 457756 12292 ? Sl 10:11 0:00 /usr/sbin/httpd apache 17666 0.0 0.2 457584 11432 ? Sl 10:11 0:00 /usr/sbin/httpd apache 17672 0.0 0.2 457584 11428 ? Sl 10:14 0:00 /usr/sbin/httpd apache 17676 0.0 0.2 457584 11424 ? Sl 10:16 0:00 /usr/sbin/httpd apache 17680 0.0 0.4 464884 18884 ? Sl 10:16 0:00 /usr/sbin/httpd apache 17683 0.0 0.2 457584 11420 ? Sl 10:19 0:00 /usr/sbin/httpd apache 17689 0.0 0.2 457584 11424 ? Sl 10:23 0:00 /usr/sbin/httpd apache 17692 0.0 0.2 457584 11428 ? Sl 10:23 0:00 /usr/sbin/httpd apache 17696 0.0 0.3 457584 11980 ? Sl 10:25 0:00 /usr/sbin/httpd apache 17699 0.0 0.2 457584 11436 ? Sl 10:25 0:00 /usr/sbin/httpd apache 17704 0.0 0.2 457584 11232 ? Sl 10:27 0:00 /usr/sbin/httpd apache 17711 0.0 0.2 457584 11412 ? Sl 10:30 0:01 /usr/sbin/httpd postfix 17714 0.0 0.0 78732 3216 ? S 10:30 0:00 pickup -l -t fifo -u apache 17715 0.0 0.2 457584 11436 ? Sl 10:30 0:00 /usr/sbin/httpd apache 17718 0.0 0.2 457584 11428 ? Sl 10:31 0:00 /usr/sbin/httpd apache 17726 0.0 0.2 457584 11420 ? Sl 10:36 0:00 /usr/sbin/httpd apache 17731 0.0 0.2 457584 11168 ? Sl 10:37 0:00 /usr/sbin/httpd apache 17734 0.0 0.4 464884 18796 ? Sl 10:37 0:00 /usr/sbin/httpd apache 17743 0.0 0.2 457584 11220 ? Sl 10:43 0:00 /usr/sbin/httpd apache 17746 0.0 0.2 457584 11172 ? Sl 10:44 0:00 /usr/sbin/httpd apache 17750 0.0 0.3 457756 12288 ? Sl 10:44 0:00 /usr/sbin/httpd apache 17753 0.0 0.2 457584 11220 ? Sl 10:45 0:00 /usr/sbin/httpd apache 17756 0.0 0.2 457584 11424 ? Sl 10:46 0:00 /usr/sbin/httpd apache 17763 0.0 0.3 457756 12204 ? Sl 10:51 0:00 /usr/sbin/httpd apache 17766 0.0 0.2 457584 11428 ? Sl 10:51 0:00 /usr/sbin/httpd apache 17771 0.0 0.2 457584 11180 ? Sl 10:54 0:00 /usr/sbin/httpd apache 17774 0.0 0.2 457584 11416 ? Sl 10:54 0:00 /usr/sbin/httpd apache 17779 0.0 0.2 457584 11428 ? Sl 10:58 0:00 /usr/sbin/httpd apache 17784 0.0 0.2 457584 11380 ? Sl 11:00 0:00 /usr/sbin/httpd apache 17805 0.0 0.2 457584 11380 ? Sl 11:05 0:00 /usr/sbin/httpd apache 17818 0.0 0.2 457584 11156 ? Sl 11:11 0:00 /usr/sbin/httpd apache 17823 0.0 0.2 457584 11416 ? Sl 11:12 0:00 /usr/sbin/httpd apache 17827 0.0 0.2 457584 11412 ? Sl 11:13 0:00 /usr/sbin/httpd apache 17831 0.0 0.2 457584 11132 ? Sl 11:13 0:00 /usr/sbin/httpd root 17835 0.0 0.0 97780 3792 ? S 11:14 0:00 sshd: smaity [priv] smaity 17839 0.0 0.0 97780 1748 ? 
S 11:15 0:00 sshd: smaity@pts/0 smaity 17840 0.0 0.0 108288 1928 pts/0 Ss 11:15 0:00 -bash apache 17858 0.0 0.4 464884 18856 ? Sl 11:16 0:00 /usr/sbin/httpd apache 17862 0.0 0.3 457584 11904 ? Sl 11:17 0:00 /usr/sbin/httpd apache 17866 0.0 0.2 457584 11212 ? Sl 11:19 0:00 /usr/sbin/httpd apache 17871 0.0 0.2 457584 11144 ? Sl 11:20 0:00 /usr/sbin/httpd apache 17875 0.0 0.2 457584 11416 ? Sl 11:23 0:00 /usr/sbin/httpd apache 17880 0.0 0.2 457584 11408 ? Sl 11:23 0:00 /usr/sbin/httpd apache 17883 0.0 0.2 457584 11412 ? Sl 11:24 0:00 /usr/sbin/httpd apache 17888 0.0 0.2 457584 11412 ? Sl 11:25 0:00 /usr/sbin/httpd apache 17891 0.0 0.2 457584 11140 ? Sl 11:26 0:00 /usr/sbin/httpd apache 17899 0.0 0.2 457584 10984 ? Sl 11:32 0:00 /usr/sbin/httpd apache 17902 0.0 0.2 457584 11680 ? Sl 11:33 0:00 /usr/sbin/httpd apache 17906 0.0 0.2 457584 10980 ? Sl 11:33 0:00 /usr/sbin/httpd Output of wget http://mydomain.com/ --2012-09-13 13:35:17-- http://mydomain.com/ Resolving mydomain.com... 127.0.0.1 Connecting to mydomain.com|127.0.0.1|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 45 [text/html] Saving to: “index.html” 0% [ ] 0 --.-K/s in 0s Cannot write to “index.html” (No space left on device). UPDATE3: output of df -h Filesystem Size Used Avail Use% Mounted on /dev/mapper/vg_inamivm-lv_root 18G 17G 0 100% / tmpfs 1.9G 0 1.9G 0% /dev/shm /dev/sda1 485M 71M 389M 16% /boot output of wget -O /dev/null http://127.0.0.1/ --2012-09-13 13:47:49-- http://127.0.0.1/ Connecting to 127.0.0.1:80... connected. HTTP request sent, awaiting response... 200 OK Length: 45 [text/html] Saving to: “/dev/null” 100%[======================================================================================================>] 45 --.-K/s in 0s 2012-09-13 13:47:54 (8.57 MB/s) - “/dev/null” saved [45/45]

    Read the article

  • iptables is not allowing me to contact my dns nameservers

    - by user1272737
    I have the following iptables rules:
        Chain INPUT (policy DROP)
        target prot opt source destination
        ACCEPT tcp -- anywhere anywhere tcp dpt:ssh
        ACCEPT tcp -- anywhere anywhere tcp dpt:http
        ACCEPT tcp -- anywhere anywhere tcp dpt:https
        ACCEPT tcp -- localhost.localdomain anywhere tcp dpt:mysql
        ACCEPT tcp -- anywhere anywhere tcp dpt:14443
        ACCEPT tcp -- anywhere anywhere tcp dpt:ftp
        ACCEPT tcp -- anywhere anywhere tcp dpt:ftp-data
        ACCEPT tcp -- anywhere anywhere tcp dpt:xxxxxxx
        Chain FORWARD (policy ACCEPT)
        target prot opt source destination
        Chain OUTPUT (policy ACCEPT)
        target prot opt source destination
    When I turn off iptables I am able to use wget and all other commands. When these rules are enabled I cannot connect to any address. Any idea why this would be?

    Read the article

  • Why does ssh hang after "debug1: loaded 3 keys"

    - by James Moore
    Trying to log in to an Amazon EC2 instance running Ubuntu 10.04.1. I can log in just fine, no issues. A different user, coming from a different network, just gets this:
        OpenSSH_4.3p2, OpenSSL 0.9.8e-fips-rhel5 01 Jul 2008
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: Applying options for *
        debug1: Connecting to xxxx [xxxx] port 80.
        debug1: Connection established.
        debug1: identity file /.ssh/identity type -1
        debug1: identity file /.ssh/id_rsa type -1
        debug1: identity file /.ssh/id_dsa type -1
        debug1: loaded 3 keys
    And then it hangs. We've tried running sshd on port 22 and port 80. I'm guessing that it's not a firewall problem since the verbose output reports that the connection is established. I don't see anything in /var/log/auth.log when the failing user connects. I do see entries when I log in successfully.

    Read the article

  • VS 2010 SP1 and SQL CE

    - by ScottGu
    Last month we released the Beta of VS 2010 Service Pack 1 (SP1). You can learn more about the VS 2010 SP1 Beta from Jason Zander’s two blog posts about it, and from Scott Hanselman’s blog post that covers some of the new capabilities enabled with it. You can download and install the VS 2010 SP1 Beta here. Last week I blogged about the new Visual Studio support for IIS Express that we are adding with VS 2010 SP1. In today’s post I’m going to talk about the new VS 2010 SP1 tooling support for SQL CE, and walk through some of the cool scenarios it enables.

    SQL CE – What is it and why should you care?
    SQL CE is a free, embedded, database engine that enables easy database storage.

    No Database Installation Required
    SQL CE does not require you to run a setup or install a database server in order to use it. You can simply copy the SQL CE binaries into the \bin directory of your ASP.NET application, and then your web application can use it as a database engine. No setup or extra security permissions are required for it to run. You do not need to have an administrator account on the machine. Just copy your web application onto any server and it will work. This is true even of medium-trust applications running in a web hosting environment. SQL CE runs in-memory within your ASP.NET application and will start up when you first access a SQL CE database, and will automatically shut down when your application is unloaded. SQL CE databases are stored as files that live within the \App_Data folder of your ASP.NET applications.

    Works with Existing Data APIs
    SQL CE 4 works with existing .NET-based data APIs, and supports a SQL Server compatible query syntax. This means you can use existing data APIs like ADO.NET, as well as use higher-level ORMs like Entity Framework and NHibernate with SQL CE. This enables you to use the same data programming skills and data APIs you know today.

    Supports Development, Testing and Production Scenarios
    SQL CE can be used for development scenarios, testing scenarios, and light production usage scenarios. With the SQL CE 4 release we’ve done the engineering work to ensure that SQL CE won’t crash or deadlock when used in a multi-threaded server scenario (like ASP.NET). This is a big change from previous releases of SQL CE – which were designed for client-only scenarios and which explicitly blocked running in web-server environments. Starting with SQL CE 4 you can use it in a web server as well. There are no license restrictions with SQL CE. It is also totally free.

    Easy Migration to SQL Server
    SQL CE is an embedded database – which makes it ideal for development, testing, and light-usage scenarios. For high-volume sites and applications you’ll probably want to migrate your database to use SQL Server Express (which is free), SQL Server or SQL Azure. These servers enable much better scalability, more development features (including features like Stored Procedures – which aren’t supported with SQL CE), as well as more advanced data management capabilities. We’ll ship migration tools that enable you to optionally take SQL CE databases and easily upgrade them to use SQL Server Express, SQL Server, or SQL Azure. You will not need to change your code when upgrading a SQL CE database to SQL Server or SQL Azure. Our goal is to enable you to be able to simply change the database connection string in your web.config file and have your application just work.
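    To make the “works with existing data APIs” point above concrete, here is a minimal sketch (not from the original post) of querying a SQL CE file through the standard ADO.NET provider in System.Data.SqlServerCe. The Store.sdf file and Products table anticipate the walkthrough below; the class and method names are mine.

        using System.Data.SqlServerCe;

        public static class StoreQueries
        {
            // Counts the rows in a hypothetical Products table using plain ADO.NET.
            // |DataDirectory| resolves to App_Data when running inside an ASP.NET app.
            public static int CountProducts()
            {
                const string connectionString = @"Data Source=|DataDirectory|\Store.sdf";

                using (var connection = new SqlCeConnection(connectionString))
                using (var command = new SqlCeCommand("SELECT COUNT(*) FROM Products", connection))
                {
                    connection.Open();
                    return (int)command.ExecuteScalar();
                }
            }
        }

    The same code would keep working against SQL Server Express or SQL Server by swapping the provider and connection string, which is the migration story described above.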
    New Tooling Support for SQL CE in VS 2010 SP1
    VS 2010 SP1 includes much improved tooling support for SQL CE, and adds support for using SQL CE within ASP.NET projects for the first time. With VS 2010 SP1 you can now:
    - Create new SQL CE databases
    - Edit and modify SQL CE database schema and indexes
    - Populate SQL CE databases with data
    - Use the Entity Framework (EF) designer to create model layers against SQL CE databases
    - Use EF Code First to define model layers in code, then create a SQL CE database from them, and optionally edit the DB with VS
    - Deploy SQL CE databases to remote servers using Web Deploy and optionally convert them to full SQL Server databases
    You can take advantage of all of the above features from within both ASP.NET Web Forms and ASP.NET MVC based projects.

    Download
    You can enable SQL CE tooling support within VS 2010 by first installing VS 2010 SP1 (beta). Once SP1 is installed, you’ll also then need to install the SQL CE Tools for Visual Studio download. This is a separate download that enables the SQL CE tooling support for VS 2010 SP1.

    Walkthrough of Two Scenarios
    In this blog post I’m going to walk through how you can take advantage of SQL CE and VS 2010 SP1 using both an ASP.NET Web Forms and an ASP.NET MVC based application. Specifically, we’ll walk through:
    - How to create a SQL CE database using VS 2010 SP1, then use the EF4 visual designers in Visual Studio to construct a model layer from it, and then display and edit the data using an ASP.NET GridView control.
    - How to use an EF Code First approach to define a model layer using POCO classes and then have EF Code-First “auto-create” a SQL CE database for us based on our model classes. We’ll then look at how we can use the new VS 2010 SP1 support for SQL CE to inspect the database that was created, populate it with data, and later make schema changes to it. We’ll do all this within the context of an ASP.NET MVC based application.
    You can follow the two walkthroughs below on your own machine by installing VS 2010 SP1 (beta) and then installing the SQL CE Tools for Visual Studio download (which is a separate download that enables SQL CE tooling support for VS 2010 SP1).

    Walkthrough 1: Create a SQL CE Database, Create EF Model Classes, Edit the Data with a GridView
    This first walkthrough will demonstrate how to create and define a SQL CE database within an ASP.NET Web Forms application. We’ll then build an EF model layer for it and use that model layer to enable data editing scenarios with an <asp:GridView> control.

    Step 1: Create a new ASP.NET Web Forms Project
    We’ll begin by using the File->New Project menu command within Visual Studio to create a new ASP.NET Web Forms project. We’ll use the “ASP.NET Web Application” project template option so that it has a default UI skin implemented:

    Step 2: Create a SQL CE Database
    Right click on the “App_Data” folder within the created project and choose the “Add->New Item” menu command: This will bring up the “Add Item” dialog box. Select the “SQL Server Compact 4.0 Local Database” item (new in VS 2010 SP1) and name the database file to create “Store.sdf”: Note that SQL CE database files have a .sdf filename extension. Place them within the /App_Data folder of your ASP.NET application to enable easy deployment. When we clicked the “Add” button above a Store.sdf file was added to our project:
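    The walkthrough uses the Visual Studio designer for Steps 2 and 3. Purely as an illustration of the same work done in code, here is a sketch (not from the post) that creates the .sdf file and a Products table with the SQL CE ADO.NET classes; the column names are illustrative, since the post defines them in a designer screenshot, and SqlCeEngine.CreateDatabase assumes the file does not already exist.

        using System.Data.SqlServerCe;

        public static class StoreSetup
        {
            // Code equivalent (a sketch) of creating Store.sdf and its Products table.
            public static void CreateStoreDatabase()
            {
                const string connectionString = @"Data Source=|DataDirectory|\Store.sdf";

                // SqlCeEngine creates the .sdf file itself.
                using (var engine = new SqlCeEngine(connectionString))
                {
                    engine.CreateDatabase();
                }

                using (var connection = new SqlCeConnection(connectionString))
                using (var command = connection.CreateCommand())
                {
                    connection.Open();
                    command.CommandText =
                        @"CREATE TABLE Products (
                              ProductId   int IDENTITY(1,1) PRIMARY KEY,
                              Name        nvarchar(100) NOT NULL,
                              Description nvarchar(400),
                              UnitPrice   money)";
                    command.ExecuteNonQuery();
                }
            }
        }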
    Step 3: Adding a “Products” Table
    Double-clicking the “Store.sdf” database file will open it up within the Server Explorer tab. Since it is a new database there are no tables within it: Right click on the “Tables” icon and choose the “Create Table” menu command to create a new database table. We’ll name the new table “Products” and add 4 columns to it. We’ll mark the first column as a primary key (and make it an identity column so that its value will automatically increment with each new row): When we click “ok” our new Products table will be created in the SQL CE database.

    Step 4: Populate with Data
    Once our Products table is created it will show up within the Server Explorer. We can right-click it and choose the “Show Table Data” menu command to edit its data: Let’s add a few sample rows of data to it:

    Step 5: Create an EF Model Layer
    We have a SQL CE database with some data in it – let’s now create an EF Model Layer that will provide a way for us to easily query and update data within it. Let’s right-click on our project and choose the “Add->New Item” menu command. This will bring up the “Add New Item” dialog – select the “ADO.NET Entity Data Model” item within it and name it “Store.edmx”. This will add a new Store.edmx item to our solution explorer and launch a wizard that allows us to quickly create an EF model: Select the “Generate From Database” option above and click next. Choose to use the Store.sdf SQL CE database we just created and then click next again. The wizard will then ask you what database objects you want to import into your model. Let’s choose to import the “Products” table we created earlier: When we click the “Finish” button Visual Studio will open up the EF designer. It will have a Product entity already on it that maps to the “Products” table within our SQL CE database: The VS 2010 SP1 EF designer works exactly the same with SQL CE as it does already with SQL Server and SQL Express. The Product entity above will be persisted as a class (called “Product”) that we can programmatically work against within our ASP.NET application.

    Step 6: Compile the Project
    Before using your model layer you’ll need to build your project. Do a Ctrl+Shift+B to compile the project, or use the Build->Build Solution menu command.
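    Once the project compiles, the generated model layer can also be used directly from code, not just from the data-bound controls used in the next step. A minimal sketch (not from the post), assuming the wizard-generated context is named StoreEntities with a Products entity set, as the next step shows:

        using System.Linq;

        public static class ProductQueries
        {
            // Queries the SQL CE database through the generated EF4 context.
            public static int CountProductsWithEf()
            {
                using (var db = new StoreEntities())
                {
                    // Only the entity set is relied on here; the Product property
                    // names are whatever was defined for the table in Step 3.
                    return db.Products.Count();
                }
            }
        }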
    Step 7: Create a Page that Uses our EF Model Layer
    Let’s now create a simple ASP.NET Web Form that contains a GridView control that we can use to display and edit our Products data (via the EF Model Layer we just created). Right-click on the project and choose the Add->New Item command. Select the “Web Form from Master Page” item template, and name the page you create “Products.aspx”. Base the master page on the “Site.Master” template that is in the root of the project. Add an <h2>Products</h2> heading to the new page, and add an <asp:gridview> control within it: Then click the “Design” tab to switch into design-view. Select the GridView control, and then click the top-right corner to display the GridView’s “Smart Tasks” UI: Choose the “New data source…” drop down option above. This will bring up the below dialog which allows you to pick your Data Source type: Select the “Entity” data source option – which will allow us to easily connect our GridView to the EF model layer we created earlier. This will bring up another dialog that allows us to pick our model layer: Select the “StoreEntities” option in the dropdown – which is the EF model layer we created earlier. Then click next – which will allow us to pick which entity within it we want to bind to: Select the “Products” entity in the above dialog – which indicates that we want to bind against the “Product” entity class we defined earlier. Then click the “Enable automatic updates” checkbox to ensure that we can both query and update Products. When you click “Finish” VS will wire up an <asp:EntityDataSource> to your <asp:GridView> control: The last two steps we’ll do will be to click the “Enable Editing” checkbox on the Grid (which will cause the Grid to display an “Edit” link on each row) and (optionally) use the Auto Format dialog to pick a UI template for the Grid.

    Step 8: Run the Application
    Let’s now run our application and browse to the /Products.aspx page that contains our GridView. When we do so we’ll see a Grid UI of the Products within our SQL CE database. Clicking the “Edit” link for any of the rows will allow us to edit their values: When we click “Update” the GridView will post back the values, persist them through our EF Model Layer, and ultimately save them within our SQL CE database.

    Learn More about using EF with ASP.NET Web Forms
    Read this tutorial series on the http://asp.net site to learn more about how to use EF with ASP.NET Web Forms. The tutorial series uses SQL Express as the database – but the nice thing is that all of the same steps/concepts can also now be done with SQL CE.
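    For completeness, here is a sketch (not from the post) of writing a row through the same model layer instead of editing it in the GridView. The Product property names (Name, UnitPrice) are hypothetical, since the post only shows the table columns in a designer screenshot.

        public static class ProductUpdates
        {
            // Inserts a row via the generated EF4 context and saves it to Store.sdf.
            public static void AddSampleProduct()
            {
                using (var db = new StoreEntities())
                {
                    var product = new Product
                    {
                        Name = "Sample product",  // hypothetical column
                        UnitPrice = 9.99m         // hypothetical column
                    };

                    db.Products.AddObject(product);
                    db.SaveChanges();
                }
            }
        }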
Then type: install-package EFCodeFirst within the package manager console to download the EFCodeFirst library and have it added to our project. When we enter the above command, the EFCodeFirst library will be downloaded and added to our application.

Step 3: Build Some Model Classes

Using a “code first” based development workflow, we will create our model classes first (even before we have a database).  We create these model classes by writing code. For this sample, we will right-click on the “Models” folder of our project and add three classes to our project: Dinner, RSVP and NerdDinners. The “Dinner” and “RSVP” model classes are “plain old CLR objects” (aka POCO).  They do not need to derive from any base classes or implement any interfaces, and the properties they expose are standard .NET data-types.  No data persistence attributes or data code has been added to them.  The “NerdDinners” class derives from the DbContext class (which is supplied by EFCodeFirst) and handles the retrieval/persistence of our Dinner and RSVP instances from a database.

Step 4: Listing Dinners

We’ve written all of the code necessary to implement our model layer for this simple project.  Let’s now expose and implement the URL /Dinners/Upcoming within our project.  We’ll use it to list upcoming dinners that happen in the future. We’ll do this by right-clicking on our “Controllers” folder and selecting the “Add->Controller” menu command.  We’ll name the Controller we want to create “DinnersController”.  We’ll then implement an “Upcoming” action method within it that lists upcoming dinners using our model layer above.  We will use a LINQ query to retrieve the data and pass it to a View to render. We’ll then right-click within our Upcoming method and choose the “Add->View” menu command to create an “Upcoming” view template that displays our dinners.  We’ll use the “empty” template option within the “Add View” dialog and write the view template using Razor.

Step 5: Configure our Project to use a SQL CE Database

We have finished writing all of our code – our last step will be to configure a database connection-string to use. We will point our NerdDinners model class to a SQL CE database by adding a <connectionString> entry to the web.config file at the top of our project. EF Code First uses a default convention where context classes will look for a connection-string that matches the DbContext class name.  Because we created a “NerdDinners” class earlier, we’ve also named our connection string “NerdDinners”.  We are configuring our connection-string to use SQL CE as the database, and telling it that our SQL CE database file will live within the \App_Data directory of our ASP.NET project.

Step 6: Running our Application

Now that we’ve built our application, let’s run it! We’ll browse to the /Dinners/Upcoming URL – doing so will display an empty list of upcoming dinners. You might ask – but where did it query to get the dinners from? We didn’t explicitly create a database?!? One of the cool features that EF Code-First supports is the ability to automatically create a database (based on the schema of our model classes) when the database we point it at doesn’t exist.  Above we configured EF Code-First to point at a SQL CE database in the \App_Data\ directory of our project.  When we ran our application, EF Code-First saw that the SQL CE database didn’t exist and automatically created it for us.
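The three model classes, the Upcoming action and the <connectionString> entry appear only as screenshots in the original post and are not reproduced above. As a rough, hedged sketch – property names such as Title and EventDate are assumptions, while the class names, the DbContext base class and the LINQ-based Upcoming query are described in the text – they might look roughly like this:

// Sketch of the model layer described above (EF Code First era API – DbContext/DbSet).
// using System; using System.Collections.Generic; using System.Data.Entity;
public class Dinner
{
    public int DinnerID { get; set; }
    public string Title { get; set; }        // assumed property name
    public DateTime EventDate { get; set; }  // assumed property name
    public virtual ICollection<RSVP> RSVPs { get; set; }
}

public class RSVP
{
    public int RsvpID { get; set; }
    public int DinnerID { get; set; }
    public string AttendeeEmail { get; set; } // assumed property name
}

public class NerdDinners : DbContext
{
    public DbSet<Dinner> Dinners { get; set; }
    public DbSet<RSVP> RSVPs { get; set; }
}

// Sketch of DinnersController.Upcoming – a LINQ query for dinners in the future.
// using System.Linq; using System.Web.Mvc;
public class DinnersController : Controller
{
    public ActionResult Upcoming()
    {
        var nerdDinners = new NerdDinners();
        var upcoming = nerdDinners.Dinners
                                  .Where(d => d.EventDate > DateTime.Now)
                                  .ToList();
        return View(upcoming);
    }
}

The matching connection string would simply be named “NerdDinners”, use the SQL CE provider, and point at a database file underneath \App_Data (for example |DataDirectory|\NerdDinners.sdf) – treat the exact attribute values as assumptions, since the post shows them only as an image.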
Step 7: Using VS 2010 SP1 to Explore our newly created SQL CE Database

Click the “Show all Files” icon within the Solution Explorer and you’ll see the “NerdDinners.sdf” SQL CE database file that was automatically created for us by EF code-first within the \App_Data\ folder. We can optionally right-click on the file and “Include in Project” to add it to our solution. We can also double-click the file (regardless of whether it is added to the project) and VS 2010 SP1 will open it as a database we can edit within the “Server Explorer” tab of the IDE. When we double-click our NerdDinners.sdf SQL CE file we can drill in to see the schema of the Dinners and RSVPs tables in the tree explorer.  Notice how two tables – Dinners and RSVPs – were automatically created for us within our SQL CE database.  This was done by EF Code First when we accessed the NerdDinners class by running our application above. We can right-click on a Table and use the “Show Table Data” command to enter some upcoming dinners in our database, using the built-in editor that VS 2010 SP1 supports to populate our table data. And now when we hit “refresh” on the /Dinners/Upcoming URL within our browser we’ll see some upcoming dinners show up.

Step 8: Changing our Model and Database Schema

Let’s now modify the schema of our model layer and database, and walk through one way that the new VS 2010 SP1 Tooling support for SQL CE can make this easier.  With EF Code-First you typically start making database changes by modifying the model classes.  For example, let’s add an additional string property called “UrlLink” to our “Dinner” class.  We’ll use this to point to a link for more information about the event. Now when we re-run our project and visit the /Dinners/Upcoming URL we’ll see an error thrown. We are seeing this error because EF Code-First automatically created our database, and by default when it does this it adds a table that helps track whether the schema of our database is in sync with our model classes.  EF Code-First helpfully throws an error when they become out of sync – making it easier to track down issues at development time that you might otherwise only find (via obscure errors) at runtime.  Note that if you do not want this feature you can turn it off by changing the default conventions of your DbContext class (in this case our NerdDinners class) to not track the schema version. Our model classes and database schema are out of sync in the above example – so how do we fix this?  There are two approaches you can use today:

1. Delete the database and have EF Code First automatically re-create the database based on the new model class schema (losing the data within the existing DB)
2. Modify the schema of the existing database to make it in sync with the model classes (keeping/migrating the data within the existing DB)

There are a couple of ways you can do the second approach.  Below I’m going to show how you can take advantage of the new VS 2010 SP1 Tooling support for SQL CE to use a database schema tool to modify our database structure.  We are also going to be supporting a “migrations” feature with EF in the future that will allow you to automate/script database schema migrations programmatically.
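For reference, the model change described above is shown only as an image in the original post; it is just one additional property on the Dinner class, roughly along these lines (the non-UrlLink property names are still the assumptions carried over from the earlier sketch):

// Dinner class from the earlier sketch, with the new UrlLink property the post describes
public class Dinner
{
    public int DinnerID { get; set; }
    public string Title { get; set; }        // assumed property name
    public DateTime EventDate { get; set; }  // assumed property name
    public string UrlLink { get; set; }      // new property added in this step
    public virtual ICollection<RSVP> RSVPs { get; set; }
}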
Step 9: Modify our SQL CE Database Schema using VS 2010 SP1

The new SQL CE Tooling support within VS 2010 SP1 makes it easy to modify the schema of our existing SQL CE database. To do this we’ll right-click on our “Dinners” table and choose the “Edit Table Schema” command. This will bring up the “Edit Table” dialog.  We can rename, change or delete any of the existing columns in our table, or click at the bottom of the column listing and type to add a new column.  Here I’ve added a new “UrlLink” column of type “nvarchar” (since our property is a string). When we click OK our database will be updated to have the new column and our schema will now match our model classes. Because we are manually modifying our database schema, there is one additional step we need to take to let EF Code-First know that the database schema is in sync with our model classes.  As I mentioned earlier, when a database is automatically created by EF Code-First it adds an “EdmMetadata” table to the database to track schema versions (and hash our model classes against them to detect mismatches between our model classes and the database schema). Since we are manually updating and maintaining our database schema, we don’t need this table – and can just delete it. This will leave us with just the two tables that correspond to our model classes. And now when we re-run our /Dinners/Upcoming URL it will display the dinners correctly. One last touch we could do would be to update our view to check for the new UrlLink property and render an <a> link to it if an event has one. And now when we refresh our /Dinners/Upcoming we will see hyperlinks for the events that have a UrlLink stored in the database.

Summary

SQL CE provides a free, embedded database engine that you can use to easily enable database storage.  With SQL CE 4 you can now take advantage of it within ASP.NET projects and applications (both Web Forms and MVC). VS 2010 SP1 provides tooling support that enables you to easily create, edit and modify SQL CE databases – as well as use the standard EF designer against them.  This allows you to re-use your existing skills and data knowledge while taking advantage of an embedded database option.  This is useful both for small applications (where you don’t need the scalability of a full SQL Server) and for development and testing scenarios – where you want to be able to rapidly develop/test your application without having a full database instance.  SQL CE makes it easy to later migrate your data to a full SQL Server or SQL Azure instance if you want to – without having to change any code in your application.  All we would need to change in the above two scenarios is the <connectionString> value within the web.config file in order to have our code run against a full SQL Server.  This provides the flexibility to scale up your application starting from a small embedded database solution as needed.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • least privilege account for WinRM remote calls on Windows 2008 Server

    - by aldrin
    ServerFault Windows experts: please consider the following use case:
    1. I have two Windows 2008 Server SP2 boxes – let’s call them SOURCE and CLIENT.
    2. On SOURCE: I create a new user called 'normal'. Just a plain user – no special privileges.
    3. On CLIENT: I run the following from a command prompt: winrm get wmi/root/cimv2/Win32_UTCTime -r:SOURCE -u:normal -p:NormalPassword
    4. I get an output containing WSManFault: Message = Access is denied.
    5. On CLIENT: I repeat step 3 with the administrator identity, i.e. winrm get wmi/root/cimv2/Win32_UTCTime -r:SOURCE -u:Administrator -p:AdminPassword
    6. I get the current UTC time at SOURCE.
    The question is: what are the least privileges I need to assign to the user 'normal' to ensure that step 3 behaves like step 5? In other words, what's the least privilege to enable WinRM access for a non-Admin account?

    Read the article

  • How do I install a driver for an Atheros AR9285?

    - by Fernando
    How to install the driver for Atheros AR9285 in Ubuntu 11.10. Still no package for 11.10 according to this https://help.ubuntu.com/community/WifiDocs/Device/Atheros/AR9285 Here is the output of the commands marc@fer-VPCYA1V9E:~$ sudo lshw -class network *-network DISABLED description: Wireless interface product: AR9285 Wireless Network Adapter (PCI-Express) vendor: Atheros Communications Inc. physical id: 0 bus info: pci@0000:02:00.0 logical name: wlan0 version: 01 serial: 4c:0f:6e:d6:65:cc width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless configuration: broadcast=yes driver=ath9k driverversion=3.0.0-12-generic firmware=N/A latency=0 link=no multicast=yes wireless=IEEE 802.11bgn resources: irq:16 memory:d3400000-d340ffff *-network description: Ethernet interface product: AR8131 Gigabit Ethernet vendor: Atheros Communications physical id: 0 bus info: pci@0000:03:00.0 logical name: eth0 version: c0 serial: 54:42:49:a2:1f:bc capacity: 1Gbit/s width: 64 bits clock: 33MHz capabilities: pm msi pciexpress vpd bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=atl1c driverversion=1.0.1.0-NAPI firmware=N/A latency=0 link=no multicast=yes port=twisted pair resources: irq:43 memory:d2400000-d243ffff ioport:1000(size=128) And the second command marc@fer-VPCYA1V9E:~$ rfkill list 0: sony-wifi: Wireless LAN Soft blocked: no Hard blocked: no 1: sony-bluetooth: Bluetooth Soft blocked: no Hard blocked: no 2: phy0: Wireless LAN Soft blocked: no Hard blocked: no 3: hci0: Bluetooth Soft blocked: no Hard blocked: no 4: acer-wireless: Wireless LAN Soft blocked: yes Hard blocked: no Is there a way to make it work?

    Read the article

  • Zimbra server status showing red in control panel

    - by Debianuser
    I have been having a weird problem with Zimbra (7.1.4_GA_2555.DEBIAN5) lately: on the (web) control panel the status keeps changing to red every few days. When this happens the output of zmcontrol status still shows everything running: antispam Running antivirus Running imapproxy Running ldap Running logger Running mailbox Running memcached Running mta Running snmp Running spell Running stats Running zmconfigd Running Everything runs fine except automated mail forwarding from one account to another (which is critical for us). I have been through the Zimbra forums and the following ALWAYS fixes the issue: su - zimbra -c "zmprov mcf zimbraLogHostname mail.mydomain.com" /opt/zimbra/libexec/zmsyslogsetup /etc/init.d/rsyslog restart su - zimbra -c "zmcontrol restart" After I run the above commands, the status on the control panel turns green and mail forwarding starts to work again, BUT only for a few days. Other than the above, everything works fine including Server statistics. Anyone seen this issue before?

    Read the article

  • Does a cable / adaptor for my needs exist? (photos)

    - by Evan
    I recently bought a UPS; however, the only outlets take this sort of connector: But the other end of the cables is like this (right): The thing is, I really need to connect it to a power bar, so I need a cable that has one end like in the first photo, and the second end like this (right side): I've searched around for quite a while, and asked at a few shops, but cannot seem to find a cable that will fulfill this purpose. This seems crazy to me, as I would think that a UPS wouldn't only be able to be attached to devices that take the relatively obscure output seen on the right side of the second image I posted. Do I need to buy another UPS that has the proper connectors?? Please help!!

    Read the article

  • SSL Certificate error: verify error:num=20:unable to get local issuer certificate

    - by Brian
    I've been trying to get an SSL connection to an LDAPS server (Active Directory) to work, but keep having problems. I tried using this: openssl s_client -connect the.server.edu:3269 With the following result: verify error:num=20:unable to get local issuer certificate I thought, OK, well, the server's an old production server, a few years old. Maybe the CA isn't present. I then pulled the certificate from the output into a pem file and tried: openssl s_client -CAfile mycert.pem -connect the.server.edu:3269 And that didn't work either. What am I missing? Shouldn't that ALWAYS work?

    Read the article

  • Rendering ASP.NET Script References into the Html Header

    - by Rick Strahl
    One thing that I’ve come to appreciate in ASP.NET control development that uses JavaScript is the ability to have more control over script and script include placement than ASP.NET provides natively. Specifically, in ASP.NET you can use either the ClientScriptManager or ScriptManager to embed scripts and script references into pages via code. This works reasonably well, but the script references that get generated are placed into the HTML body and there’s very little operational control over the placement of scripts. If you have multiple controls, or several of the same control, that need to place the same scripts onto the page it’s not difficult to end up with scripts that render in the wrong order and stop working correctly. This is especially critical if you load script libraries with dependencies, either via resources or even if you are rendering references to CDN resources. Natively ASP.NET provides a host of methods that help with embedding scripts into the page via either Page.ClientScript or the ASP.NET ScriptManager control (both with slightly different syntax):

RegisterClientScriptBlock – Renders a script block at the top of the HTML body and should be used for embedding callable functions/classes.

RegisterStartupScript – Renders a script block just prior to the </form> tag and should be used for embedding code that should execute when the page is first loaded. Not recommended – use jQuery.ready() or equivalent load-time routines.

RegisterClientScriptInclude – Embeds a reference to a script from a url into the page.

RegisterClientScriptResource – Embeds a reference to a script from a resource file, generating a long resource file string.

All 4 of these methods render their <script> tags into the HTML body. The script blocks give you a little bit of control by having a ‘top’ and ‘bottom’ of the document location which gives you some flexibility over script placement and precedence. Script includes and resource urls unfortunately do not even get that much control – references are simply rendered into the page in the order of declaration. The ASP.NET ScriptManager control facilitates this task a little bit with the ability to specify scripts in code and the ability to programmatically check what scripts have already been registered, but it doesn’t provide any more control over the script rendering process itself. Further, the ScriptManager is a bear to deal with generically because generic code has to always check and see if it is actually present. Some time ago I posted a ClientScriptProxy class that helps with managing the latter process of sending script references either to ClientScript or ScriptManager if it’s available. Since I last posted about this there have been a number of improvements in this API, one of which is the ability to control placement of scripts and script includes in the page which I think is rather important and a missing feature in the ASP.NET native functionality.
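For context, the stock calls being described look roughly like the snippet below – both variants end up emitting their <script> tags into the body of the page rather than into the header (the keys and urls here are made up for illustration):

// Stock ASP.NET script registration from a page or control – renders into the HTML body
protected override void OnPreRender(EventArgs e)
{
    base.OnPreRender(e);

    // ClientScriptManager version
    Page.ClientScript.RegisterClientScriptInclude(typeof(Page), "jquery",
        ResolveUrl("~/scripts/jquery.min.js"));

    // ScriptManager version (requires a ScriptManager control on the page)
    ScriptManager.RegisterClientScriptBlock(this, typeof(Page), "hello",
        "function helloWorld() { alert('hello'); }", true);
}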
Handling ScriptRenderModes One of the big enhancements that I’ve come to rely on is the ability of the various script rendering functions described above to support rendering in multiple locations: /// <summary> /// Determines how scripts are included into the page /// </summary> public enum ScriptRenderModes { /// <summary> /// Inherits the setting from the control or from the ClientScript.DefaultScriptRenderMode /// </summary> Inherit, /// Renders the script include at the location of the control /// </summary> Inline, /// <summary> /// Renders the script include into the bottom of the header of the page /// </summary> Header, /// <summary> /// Renders the script include into the top of the header of the page /// </summary> HeaderTop, /// <summary> /// Uses ClientScript or ScriptManager to embed the script include to /// provide standard ASP.NET style rendering in the HTML body. /// </summary> Script, /// <summary> /// Renders script at the bottom of the page before the last Page.Controls /// literal control. Note this may result in unexpected behavior /// if /body and /html are not the last thing in the markup page. /// </summary> BottomOfPage } This enum is then applied to the various Register functions to allow more control over where scripts actually show up. Why is this useful? For me I often render scripts out of control resources and these scripts often include things like a JavaScript Library (jquery) and a few plug-ins. The order in which these can be loaded is critical so that jQuery.js always loads before any plug-in for example. Typically I end up with a general script layout like this: Core Libraries- HeaderTop Plug-ins: Header ScriptBlocks: Header or Script depending on other dependencies There’s also an option to render scripts and CSS at the very bottom of the page before the last Page control on the page which can be useful for speeding up page load when lots of scripts are loaded. The API syntax of the ClientScriptProxy methods is closely compatible with ScriptManager’s using static methods and control references to gain access to the page and embedding scripts. 
For example, to render some script into the current page in the header: // Create script block in header ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources), "hello_function", "function helloWorld() { alert('hello'); }", true, ScriptRenderModes.Header); // Same again - shouldn't be rendered because it's the same id ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources), "hello_function", "function helloWorld() { alert('hello'); }", true, ScriptRenderModes.Header); // Create a second script block in header ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources), "hello_function2", "function helloWorld2() { alert('hello2'); }", true, ScriptRenderModes.Header); // This just calls ClientScript and renders into bottom of document ClientScriptProxy.Current.RegisterStartupScript(this,typeof(ControlResources), "call_hello", "helloWorld();helloWorld2();", true); which generates: <html xmlns="http://www.w3.org/1999/xhtml" > <head><title> </title> <script type="text/javascript"> function helloWorld() { alert('hello'); } </script> <script type="text/javascript"> function helloWorld2() { alert('hello2'); } </script> </head> <body> … <script type="text/javascript"> //<![CDATA[ helloWorld();helloWorld2();//]]> </script> </form> </body> </html> Note that the scripts are generated into the header rather than the body except for the last script block which is the call to RegisterStartupScript. In general I wouldn’t recommend using RegisterStartupScript – ever. It’s a much better practice to use a script base load event to handle ‘startup’ code that should fire when the page first loads. So instead of the code above I’d actually recommend doing: ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources), "call_hello", "$().ready( function() { alert('hello2'); });", true, ScriptRenderModes.Header); assuming you’re using jQuery on the page. For script includes from a Url the following demonstrates how to embed scripts into the header. 
This example injects a jQuery and jQuery.UI script reference from the Google CDN then checks each with a script block to ensure that it has loaded and if not loads it from a server local location: // load jquery from CDN ClientScriptProxy.Current.RegisterClientScriptInclude(this, typeof(ControlResources), "http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js", ScriptRenderModes.HeaderTop); // check if jquery loaded - if it didn't we're not online string scriptCheck = @"if (typeof jQuery != 'object') document.write(unescape(""%3Cscript src='{0}' type='text/javascript'%3E%3C/script%3E""));"; string jQueryUrl = ClientScriptProxy.Current.GetWebResourceUrl(this, typeof(ControlResources), ControlResources.JQUERY_SCRIPT_RESOURCE); ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources), "jquery_register", string.Format(scriptCheck,jQueryUrl),true, ScriptRenderModes.HeaderTop); // Load jquery-ui from cdn ClientScriptProxy.Current.RegisterClientScriptInclude(this, typeof(ControlResources), "http://ajax.googleapis.com/ajax/libs/jqueryui/1.7.2/jquery-ui.min.js", ScriptRenderModes.Header); // check if we need to load from local string jQueryUiUrl = ResolveUrl("~/scripts/jquery-ui-custom.min.js"); ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources), "jqueryui_register", string.Format(scriptCheck, jQueryUiUrl), true, ScriptRenderModes.Header); // Create script block in header ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources), "hello_function", "$().ready( function() { alert('hello'); });", true, ScriptRenderModes.Header); which in turn generates this HTML: <html xmlns="http://www.w3.org/1999/xhtml" > <head> <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js" type="text/javascript"></script> <script type="text/javascript"> if (typeof jQuery != 'object') document.write(unescape("%3Cscript src='/WestWindWebToolkitWeb/WebResource.axd?d=DIykvYhJ_oXCr-TA_dr35i4AayJoV1mgnQAQGPaZsoPM2LCdvoD3cIsRRitHKlKJfV5K_jQvylK7tsqO3lQIFw2&t=633979863959332352' type='text/javascript'%3E%3C/script%3E")); </script> <title> </title> <script src="http://ajax.googleapis.com/ajax/libs/jqueryui/1.7.2/jquery-ui.min.js" type="text/javascript"></script> <script type="text/javascript"> if (typeof jQuery != 'object') document.write(unescape("%3Cscript src='/WestWindWebToolkitWeb/scripts/jquery-ui-custom.min.js' type='text/javascript'%3E%3C/script%3E")); </script> <script type="text/javascript"> $().ready(function() { alert('hello'); }); </script> </head> <body> …</body> </html> As you can see there’s a bit more control in this process as you can inject both script includes and script blocks into the document at the top or bottom of the header, plus if necessary at the usual body locations. This is quite useful especially if you create custom server controls that interoperate with script and have certain dependencies. The above is a good example of a useful switchable routine where you can switch where scripts load from by default – the above pulls from Google CDN but a configuration switch may automatically switch to pull from the local development copies if your doing development for example. How does it work? As mentioned the ClientScriptProxy object mimicks many of the ScriptManager script related methods and so provides close API compatibility with it although it contains many additional overloads that enhance functionality. 
It does however work against ScriptManager if it’s available on the page, or Page.ClientScript if it’s not so it provides a single unified frontend to script access. There are however many overloads of the original SM methods like the above to provide additional functionality. The implementation of script header rendering is pretty straight forward – as long as a server header (ie. it has to have runat=”server” set) is available. Otherwise these routines fall back to using the default document level insertions of ScriptManager/ClientScript. Given that there is a server header it’s relatively easy to generate the script tags and code and append them to the header either at the top or bottom. I suspect Microsoft didn’t provide header rendering functionality precisely because a runat=”server” header is not required by ASP.NET so behavior would be slightly unpredictable. That’s not really a problem for a custom implementation however. Here’s the RegisterClientScriptBlock implementation that takes a ScriptRenderModes parameter to allow header rendering: /// <summary> /// Renders client script block with the option of rendering the script block in /// the Html header /// /// For this to work Header must be defined as runat="server" /// </summary> /// <param name="control">any control that instance typically page</param> /// <param name="type">Type that identifies this rendering</param> /// <param name="key">unique script block id</param> /// <param name="script">The script code to render</param> /// <param name="addScriptTags">Ignored for header rendering used for all other insertions</param> /// <param name="renderMode">Where the block is rendered</param> public void RegisterClientScriptBlock(Control control, Type type, string key, string script, bool addScriptTags, ScriptRenderModes renderMode) { if (renderMode == ScriptRenderModes.Inherit) renderMode = DefaultScriptRenderMode; if (control.Page.Header == null || renderMode != ScriptRenderModes.HeaderTop && renderMode != ScriptRenderModes.Header && renderMode != ScriptRenderModes.BottomOfPage) { RegisterClientScriptBlock(control, type, key, script, addScriptTags); return; } // No dupes - ref script include only once const string identifier = "scriptblock_"; if (HttpContext.Current.Items.Contains(identifier + key)) return; HttpContext.Current.Items.Add(identifier + key, string.Empty); StringBuilder sb = new StringBuilder(); // Embed in header sb.AppendLine("\r\n<script type=\"text/javascript\">"); sb.AppendLine(script); sb.AppendLine("</script>"); int? index = HttpContext.Current.Items["__ScriptResourceIndex"] as int?; if (index == null) index = 0; if (renderMode == ScriptRenderModes.HeaderTop) { control.Page.Header.Controls.AddAt(index.Value, new LiteralControl(sb.ToString())); index++; } else if(renderMode == ScriptRenderModes.Header) control.Page.Header.Controls.Add(new LiteralControl(sb.ToString())); else if (renderMode == ScriptRenderModes.BottomOfPage) control.Page.Controls.AddAt(control.Page.Controls.Count-1,new LiteralControl(sb.ToString())); HttpContext.Current.Items["__ScriptResourceIndex"] = index; } Note that the routine has to keep track of items inserted by id so that if the same item is added again with the same key it won’t generate two script entries. Additionally the code has to keep track of how many insertions have been made at the top of the document so that entries are added in the proper order. 
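As a small usage sketch, a custom server control could call the header-aware overload above from its OnPreRender – the control name and script below are made up for illustration, but the call mirrors the examples shown earlier:

// Hypothetical custom control pushing its startup script into the page header
public class MyWidget : Control
{
    protected override void OnPreRender(EventArgs e)
    {
        base.OnPreRender(e);

        ClientScriptProxy.Current.RegisterClientScriptBlock(
            this, typeof(MyWidget), "mywidget_init",
            "$().ready( function() { alert('widget ready'); });",
            true, ScriptRenderModes.Header);
    }
}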
The RegisterScriptInclude method is similar but there’s some additional logic in here to deal with script file references and ClientScriptProxy’s (optional) custom resource handler that provides script compression /// <summary> /// Registers a client script reference into the page with the option to specify /// the script location in the page /// </summary> /// <param name="control">Any control instance - typically page</param> /// <param name="type">Type that acts as qualifier (uniqueness)</param> /// <param name="url">the Url to the script resource</param> /// <param name="ScriptRenderModes">Determines where the script is rendered</param> public void RegisterClientScriptInclude(Control control, Type type, string url, ScriptRenderModes renderMode) { const string STR_ScriptResourceIndex = "__ScriptResourceIndex"; if (string.IsNullOrEmpty(url)) return; if (renderMode == ScriptRenderModes.Inherit) renderMode = DefaultScriptRenderMode; // Extract just the script filename string fileId = null; // Check resource IDs and try to match to mapped file resources // Used to allow scripts not to be loaded more than once whether // embedded manually (script tag) or via resources with ClientScriptProxy if (url.Contains(".axd?r=")) { string res = HttpUtility.UrlDecode( StringUtils.ExtractString(url, "?r=", "&", false, true) ); foreach (ScriptResourceAlias item in ScriptResourceAliases) { if (item.Resource == res) { fileId = item.Alias + ".js"; break; } } if (fileId == null) fileId = url.ToLower(); } else fileId = Path.GetFileName(url).ToLower(); // No dupes - ref script include only once const string identifier = "script_"; if (HttpContext.Current.Items.Contains( identifier + fileId ) ) return; HttpContext.Current.Items.Add(identifier + fileId, string.Empty); // just use script manager or ClientScriptManager if (control.Page.Header == null || renderMode == ScriptRenderModes.Script || renderMode == ScriptRenderModes.Inline) { RegisterClientScriptInclude(control, type,url, url); return; } // Retrieve script index in header int? index = HttpContext.Current.Items[STR_ScriptResourceIndex] as int?; if (index == null) index = 0; StringBuilder sb = new StringBuilder(256); url = WebUtils.ResolveUrl(url); // Embed in header sb.AppendLine("\r\n<script src=\"" + url + "\" type=\"text/javascript\"></script>"); if (renderMode == ScriptRenderModes.HeaderTop) { control.Page.Header.Controls.AddAt(index.Value, new LiteralControl(sb.ToString())); index++; } else if (renderMode == ScriptRenderModes.Header) control.Page.Header.Controls.Add(new LiteralControl(sb.ToString())); else if (renderMode == ScriptRenderModes.BottomOfPage) control.Page.Controls.AddAt(control.Page.Controls.Count-1, new LiteralControl(sb.ToString())); HttpContext.Current.Items[STR_ScriptResourceIndex] = index; } There’s a little more code here that deals with cleaning up the passed in Url and also some custom handling of script resources that run through the ScriptCompressionModule – any script resources loaded in this fashion are automatically cached based on the resource id. Raw urls extract just the filename from the URL and cache based on that. All of this to avoid doubling up of scripts if called multiple times by multiple instances of the same control for example or several controls that all load the same resources/includes. 
Finally RegisterClientScriptResource utilizes the previous method to wrap the WebResourceUrl as well as some custom functionality for the resource compression module: /// <summary> /// Returns a WebResource or ScriptResource URL for script resources that are to be /// embedded as script includes. /// </summary> /// <param name="control">Any control</param> /// <param name="type">A type in assembly where resources are located</param> /// <param name="resourceName">Name of the resource to load</param> /// <param name="renderMode">Determines where in the document the link is rendered</param> public void RegisterClientScriptResource(Control control, Type type, string resourceName, ScriptRenderModes renderMode) { string resourceUrl = GetClientScriptResourceUrl(control, type, resourceName); RegisterClientScriptInclude(control, type, resourceUrl, renderMode); } /// <summary> /// Works like GetWebResourceUrl but can be used with javascript resources /// to allow using of resource compression (if the module is loaded). /// </summary> /// <param name="control"></param> /// <param name="type"></param> /// <param name="resourceName"></param> /// <returns></returns> public string GetClientScriptResourceUrl(Control control, Type type, string resourceName) { #if IncludeScriptCompressionModuleSupport // If wwScriptCompression Module through Web.config is loaded use it to compress // script resources by using wcSC.axd Url the module intercepts if (ScriptCompressionModule.ScriptCompressionModuleActive) { string url = "~/wwSC.axd?r=" + HttpUtility.UrlEncode(resourceName); if (type.Assembly != GetType().Assembly) url += "&t=" + HttpUtility.UrlEncode(type.FullName); return WebUtils.ResolveUrl(url); } #endif return control.Page.ClientScript.GetWebResourceUrl(type, resourceName); } This code merely retrieves the resource URL and then simply calls back to RegisterClientScriptInclude with the URL to be embedded which means there’s nothing specific to deal with other than the custom compression module logic which is nice and easy. What else is there in ClientScriptProxy? ClientscriptProxy also provides a few other useful services beyond what I’ve already covered here: Transparent ScriptManager and ClientScript calls ClientScriptProxy includes a host of routines that help figure out whether a script manager is available or not and all functions in this class call the appropriate object – ScriptManager or ClientScript – that is available in the current page to ensure that scripts get embedded into pages properly. This is especially useful for control development where controls have no control over the scripting environment in place on the page. RegisterCssLink and RegisterCssResource Much like the script embedding functions these two methods allow embedding of CSS links. CSS links are appended to the header or to a form declared with runat=”server”. LoadControlScript Is a high level resource loading routine that can be used to easily switch between different script linking modes. It supports loading from a WebResource, a url or not loading anything at all. This is very useful if you build controls that deal with specification of resource urls/ids in a standard way. Check out the full Code You can check out the full code to the ClientScriptProxyClass here: ClientScriptProxy.cs ClientScriptProxy Documentation (class reference) Note that the ClientScriptProxy has a few dependencies in the West Wind Web Toolkit of which it is part of. 
ControlResources holds a few standard constants and script resource links and the ScriptCompressionModule which is referenced in a few of the script inclusion methods. There’s also another useful ScriptContainer companion control to the ClientScriptProxy that allows scripts to be placed onto the page’s markup including the ability to specify the script location and script minification options. You can find all the dependencies in the West Wind Web Toolkit repository: West Wind Web Toolkit Repository | West Wind Web Toolkit Home Page. © Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET, JavaScript

    Read the article

  • [Visual Studio Extension Of The Day] Test Scribe for Visual Studio Ultimate 2010 and Test Professional 2010

    - by Hosam Kamel
    Test Scribe is a documentation power tool designed to construct documents directly from TFS test plan and test run artifacts, for the purpose of discussion, reporting and so on.

    Known Issues/Limitations
    - Customizing the generated report by changing the template, adding comments, including attachments etc. is not supported.
    - While opening a test plan summary document in Office 2007, if you get the warning: “The file Test Plan Summary cannot be opened because there are problems with the contents” (with Details: ‘The file is corrupt and cannot be opened’), click ‘OK’. Then, click ‘Yes’ to recover the contents of the document. This will then open the document in Office 2007. The same problem is not found in Office 2010.
    - Generated documents are stored by default in the “My documents” folder. The output path of the generated report cannot be modified.
    - Exporting Word documents for individual test suites or test cases in a test plan is not supported.

    Download it from the Visual Studio Extension Manager. Originally posted at "Hosam Kamel | Developer & Platform Evangelist" http://blogs.msdn.com/hkamel

    Read the article

  • Customize Team Build 2010 – Part 11: Speed up opening my build process template

    In the series the following parts have been published: Part 1: Introduction, Part 2: Add arguments and variables, Part 3: Use more complex arguments, Part 4: Create your own activity, Part 5: Increase AssemblyVersion, Part 6: Use custom type for an argument, Part 7: How is the custom assembly found, Part 8: Send information to the build log, Part 9: Impersonate activities (run under other credentials), Part 10: Include Version Number in the Build Number, Part 11: Speed up opening my build process template, Part 12: How to debug my custom activities, Part 13: Get control over the Build Output, Part 14: Execute a PowerShell script, Part 15: Fail a build based on the exit code of a console application.

    When you open the build process template, it takes 15–30 seconds until it opens. When you are in the process of creating your custom build process template, this can be very frustrating. Thanks to Ed Blankenship who has found a little trick to speed up the opening of the template. It now only takes a few seconds. Create a file called empty.xaml and place the following text in it:

    <Activity xmlns="http://schemas.microsoft.com/netfx/2009/xaml/activities"> </Activity>

    Open this file in Visual Studio. In the toolbox panel, add a new tab called “Team Foundation Build Activities”.  Note that it is important to get the tab name correct because if it is not correct then the activities will be reloaded. Inside the new tab, right-click and select “Choose Items”. Click the Browse button and load the file C:\Windows\Microsoft.NET\assembly\GAC_MSIL\Microsoft.TeamFoundation.Build.Workflow\v4.0_10.0.0.0__b03f5f7f11d50a3a\Microsoft.TeamFoundation.Build.Workflow.dll, then click OK to add the toolbox items to the tab. Create another new tab called “Team Foundation LabManagement Activities”. Inside the new tab, right-click and select “Choose Items”. Click the Browse button and load the file C:\Windows\Microsoft.NET\assembly\GAC_MSIL\Microsoft.TeamFoundation.Lab.Workflow.Activities\v4.0_10.0.0.0__b03f5f7f11d50a3a\Microsoft.TeamFoundation.Lab.Workflow.Activities.dll, then click OK to add the toolbox items to the tab.

    You can download the full solution at BuildProcess.zip. It will include the sources of every part and will continue to evolve.

    Read the article

  • Using jQuery to POST Form Data to an ASP.NET ASMX AJAX Web Service

    - by Rick Strahl
    The other day I got a question about how to call an ASP.NET ASMX Web Service or PageMethods with the POST data from a Web Form (or any HTML form for that matter). The idea is that you should be able to call an endpoint URL, send it regular urlencoded POST data and then use Request.Form[] to retrieve the posted data as needed. My first reaction was that you can’t do it, because ASP.NET ASMX AJAX services (as well as Page Methods and WCF REST AJAX Services) require that the content POSTed to the server is posted as JSON and sent with an application/json or application/x-javascript content type. IOW, you can’t directly call an ASP.NET AJAX service with regular urlencoded data. Note that there are other ways to accomplish this. You can use ASP.NET MVC and a custom route, an HTTP Handler or separate ASPX page, or even a WCF REST service that’s configured to use non-JSON inputs. However if you want to use an ASP.NET AJAX service (or Page Methods) with a little bit of setup work it’s actually quite easy to capture all the form variables on the client and ship them up to the server. The basic steps needed to make this happen are: Capture form variables into an array on the client with jQuery’s .serializeArray() function Use $.ajax() or my ServiceProxy class to make an AJAX call to the server to send this array On the server create a custom type that matches the .serializeArray() name/value structure Create extension methods on NameValue[] to easily extract form variables Create a [WebMethod] that accepts this name/value type as an array (NameValue[]) This seems like a lot of work but realize that steps 3 and 4 are a one time setup step that can be reused in your entire site or multiple applications. Let’s look at a short example that looks like this as a base form of fields to ship to the server: The HTML for this form looks something like this: <div id="divMessage" class="errordisplay" style="display: none"> </div> <div> <div class="label">Name:</div> <div><asp:TextBox runat="server" ID="txtName" /></div> </div> <div> <div class="label">Company:</div> <div><asp:TextBox runat="server" ID="txtCompany"/></div> </div> <div> <div class="label" ></div> <div> <asp:DropDownList runat="server" ID="lstAttending"> <asp:ListItem Text="Attending" Value="Attending"/> <asp:ListItem Text="Not Attending" Value="NotAttending" /> <asp:ListItem Text="Maybe Attending" Value="MaybeAttending" /> <asp:ListItem Text="Not Sure Yet" Value="NotSureYet" /> </asp:DropDownList> </div> </div> <div> <div class="label">Special Needs:<br /> <small>(check all that apply)</small></div> <div> <asp:ListBox runat="server" ID="lstSpecialNeeds" SelectionMode="Multiple"> <asp:ListItem Text="Vegitarian" Value="Vegitarian" /> <asp:ListItem Text="Vegan" Value="Vegan" /> <asp:ListItem Text="Kosher" Value="Kosher" /> <asp:ListItem Text="Special Access" Value="SpecialAccess" /> <asp:ListItem Text="No Binder" Value="NoBinder" /> </asp:ListBox> </div> </div> <div> <div class="label"></div> <div> <asp:CheckBox ID="chkAdditionalGuests" Text="Additional Guests" runat="server" /> </div> </div> <hr /> <input type="button" id="btnSubmit" value="Send Registration" /> The form includes a few different kinds of form fields including a multi-selection listbox to demonstrate retrieving multiple values. Setting up the Server Side [WebMethod] The [WebMethod] on the server we’re going to call is going to be very simple and just capture the content of these values and echo then back as a formatted HTML string. 
Obviously this is overly simplistic but it serves to demonstrate the simple point of capturing the POST data on the server in an AJAX callback. public class PageMethodsService : System.Web.Services.WebService { [WebMethod] public string SendRegistration(NameValue[] formVars) { StringBuilder sb = new StringBuilder(); sb.AppendFormat("Thank you {0}, <br/><br/>", HttpUtility.HtmlEncode(formVars.Form("txtName"))); sb.AppendLine("You've entered the following: <hr/>"); foreach (NameValue nv in formVars) { // strip out ASP.NET form vars like _ViewState/_EventValidation if (!nv.name.StartsWith("__")) { if (nv.name.StartsWith("txt") || nv.name.StartsWith("lst") || nv.name.StartsWith("chk")) sb.Append(nv.name.Substring(3)); else sb.Append(nv.name); sb.AppendLine(": " + HttpUtility.HtmlEncode(nv.value) + "<br/>"); } } sb.AppendLine("<hr/>"); string[] needs = formVars.FormMultiple("lstSpecialNeeds"); if (needs == null) sb.AppendLine("No Special Needs"); else { sb.AppendLine("Special Needs: <br/>"); foreach (string need in needs) { sb.AppendLine("&nbsp;&nbsp;" + need + "<br/>"); } } return sb.ToString(); } } The key feature of this method is that it receives a custom type called NameValue[] which is an array of NameValue objects that map the structure that the jQuery .serializeArray() function generates. There are two custom types involved in this: The actual NameValue type and a NameValueExtensions class that defines a couple of extension methods for the NameValue[] array type to allow for single (.Form()) and multiple (.FormMultiple()) value retrieval by name. The NameValue class is as simple as this and simply maps the structure of the array elements of .serializeArray(): public class NameValue { public string name { get; set; } public string value { get; set; } } The extension method class defines the .Form() and .FormMultiple() methods to allow easy retrieval of form variables from the returned array: /// <summary> /// Simple NameValue class that maps name and value /// properties that can be used with jQuery's /// $.serializeArray() function and JSON requests /// </summary> public static class NameValueExtensionMethods { /// <summary> /// Retrieves a single form variable from the list of /// form variables stored /// </summary> /// <param name="formVars"></param> /// <param name="name">formvar to retrieve</param> /// <returns>value or string.Empty if not found</returns> public static string Form(this NameValue[] formVars, string name) { var matches = formVars.Where(nv => nv.name.ToLower() == name.ToLower()).FirstOrDefault(); if (matches != null) return matches.value; return string.Empty; } /// <summary> /// Retrieves multiple selection form variables from the list of /// form variables stored. 
/// </summary> /// <param name="formVars"></param> /// <param name="name">The name of the form var to retrieve</param> /// <returns>values as string[] or null if no match is found</returns> public static string[] FormMultiple(this NameValue[] formVars, string name) { var matches = formVars.Where(nv => nv.name.ToLower() == name.ToLower()).Select(nv => nv.value).ToArray(); if (matches.Length == 0) return null; return matches; } } Using these extension methods it’s easy to retrieve individual values from the array: string name = formVars.Form("txtName"); or multiple values: string[] needs = formVars.FormMultiple("lstSpecialNeeds"); if (needs != null) { // do something with matches } Using these functions in the SendRegistration method it’s easy to retrieve a few form variables directly (txtName and the multiple selections of lstSpecialNeeds) or to iterate over the whole list of values. Of course this is an overly simple example – in typical app you’d probably want to validate the input data and save it to the database and then return some sort of confirmation or possibly an updated data list back to the client. Since this is a full AJAX service callback realize that you don’t have to return simple string values – you can return any of the supported result types (which are most serializable types) including complex hierarchical objects and arrays that make sense to your client code. POSTing Form Variables from the Client to the AJAX Service To call the AJAX service method on the client is straight forward and requires only use of little native jQuery plus JSON serialization functionality. To start add jQuery and the json2.js library to your page: <script src="Scripts/jquery.min.js" type="text/javascript"></script> <script src="Scripts/json2.js" type="text/javascript"></script> json2.js can be found here (be sure to remove the first line from the file): http://www.json.org/json2.js It’s required to handle JSON serialization for those browsers that don’t support it natively. With those script references in the document let’s hookup the button click handler and call the service: $(document).ready(function () { $("#btnSubmit").click(sendRegistration); }); function sendRegistration() { var arForm = $("#form1").serializeArray(); $.ajax({ url: "PageMethodsService.asmx/SendRegistration", type: "POST", contentType: "application/json", data: JSON.stringify({ formVars: arForm }), dataType: "json", success: function (result) { var jEl = $("#divMessage"); jEl.html(result.d).fadeIn(1000); setTimeout(function () { jEl.fadeOut(1000) }, 5000); }, error: function (xhr, status) { alert("An error occurred: " + status); } }); } The key feature in this code is the $("#form1").serializeArray();  call which serializes all the form fields of form1 into an array. Each form var is represented as an object with a name/value property. This array is then serialized into JSON with: JSON.stringify({ formVars: arForm }) The format for the parameter list in AJAX service calls is an object with one property for each parameter of the method. In this case its a single parameter called formVars and we’re assigning the array of form variables to it. The URL to call on the server is the name of the Service (or ASPX Page for Page Methods) plus the name of the method to call. On return the success callback receives the result from the AJAX callback which in this case is the formatted string which is simply assigned to an element in the form and displayed. 
Remember the result type is whatever the method returns – it doesn’t have to be a string. Note that ASP.NET AJAX and WCF REST return JSON data as a wrapped object so the result has a ‘d’ property that holds the actual response: jEl.html(result.d).fadeIn(1000); Slightly simpler: Using ServiceProxy.js If you want things slightly cleaner you can use the ServiceProxy.js class I’ve mentioned here before. The ServiceProxy class handles a few things for calling ASP.NET and WCF services more cleanly: Automatic JSON encoding Automatic fix up of ‘d’ wrapper property Automatic Date conversion on the client Simplified error handling Reusable and abstracted To add the service proxy add: <script src="Scripts/ServiceProxy.js" type="text/javascript"></script> and then change the code to this slightly simpler version: <script type="text/javascript"> proxy = new ServiceProxy("PageMethodsService.asmx/"); $(document).ready(function () { $("#btnSubmit").click(sendRegistration); }); function sendRegistration() { var arForm = $("#form1").serializeArray(); proxy.invoke("SendRegistration", { formVars: arForm }, function (result) { var jEl = $("#divMessage"); jEl.html(result).fadeIn(1000); setTimeout(function () { jEl.fadeOut(1000) }, 5000); }, function (error) { alert(error.message); } ); } The code is not very different but it makes the call as simple as specifying the method to call, the parameters to pass and the actions to take on success and error. No more remembering which content type and data types to use and manually serializing to JSON. This code also removes the “d” property processing in the response and provides more consistent error handling in that the call always returns an error object regardless of a server error or a communication error unlike the native $.ajax() call. Either approach works and both are pretty easy. The ServiceProxy really pays off if you use lots of service calls and especially if you need to deal with date values returned from the server  on the client. Summary Making Web Service calls and getting POST data to the server is not always the best option – ASP.NET and WCF AJAX services are meant to work with data in objects. However, in some situations it’s simply easier to POST all the captured form data to the server instead of mapping all properties from the input fields to some sort of message object first. For this approach the above POST mechanism is useful as it puts the parsing of the data on the server and leaves the client code lean and mean. It’s even easy to build a custom model binder on the server that can map the array values to properties on an object generically with some relatively simple Reflection code and without having to manually map form vars to properties and do string conversions. Keep in mind though that other approaches also abound. ASP.NET MVC makes it pretty easy to create custom routes to data and the built in model binder makes it very easy to deal with inbound form POST data in its original urlencoded format. The West Wind West Wind Web Toolkit also includes functionality for AJAX callbacks using plain POST values. All that’s needed is a Method parameter to query/form value to specify the method to be called on the server. After that the content type is completely optional and up to the consumer. It’d be nice if the ASP.NET AJAX Service and WCF AJAX Services weren’t so tightly bound to the content type so that you could more easily create open access service endpoints that can take advantage of urlencoded data that is everywhere in existing pages. 
It would make it much easier to create basic REST endpoints without complicated service configuration. Ah, one can dream! In the meantime I hope this article has given you some ideas on how you can transfer POST data from the client to the server using JSON – it might be useful in other scenarios beyond ASP.NET AJAX services as well.

Additional Resources

ServiceProxy.js – A small JavaScript library that wraps $.ajax() to call ASP.NET AJAX and WCF AJAX Services. Includes date parsing extensions to the JSON object, a global dataFilter for processing dates on all jQuery JSON requests, provides cleanup for the .NET wrapped message format and handles errors in a consistent fashion.

Making jQuery Calls to WCF/ASMX with a ServiceProxy Client – More information on calling ASMX and WCF AJAX services with jQuery and some more background on ServiceProxy.js. Note the implementation has slightly changed since the article was written.

ww.jquery.js – The West Wind Web Toolkit also includes ServiceProxy.js in the West Wind jQuery extension library. This version is slightly different and includes embedded json encoding/decoding based on json2.js.

© Rick Strahl, West Wind Technologies, 2005-2010. Posted in jQuery, ASP.NET, AJAX

    Read the article

  • Redhat Software RAID 1 not syncing

    - by hamstar
    Hey guys, I set up a software RAID 1 on a Redhat server; everything went sweet and it synced the first time. The other day the RAID failed over for some reason, and the disks hadn't been syncing since that first time, so it went back to 2 weeks ago when we did the first sync. We got the system back up running off the master only. However, what would cause the software RAID to not sync? I used mdadm to set up the RAID. Any ideas? EDIT: Sorry, I don't have the output from /proc/mdstat before the RAID failed over; it is now running on only the master... I can put the slave back in no problems, but I was wondering how to make it sync all the time instead of only when I add it.

    Read the article

  • Using JCA Adapter with OSB 11.1.1.3

    - by James Taylor
    In OSB 10g, to use the JCA adapters you were required to use JDeveloper to create the necessary WSDLs, XSDs etc. using the associated adapter wizard. These files were imported into Oracle Workshop (Eclipse) and used to create the business service as you would any other web service. In 11g, unfortunately, JDeveloper is still required. The process has changed slightly as described below. As an example I have used the JCA DB adapter.
    1. Start JDeveloper 11.1.1.3.
    2. Create a new SOA Application.
    3. Create a new SOA Project and call it DBAdapters.
    4. Choose the Empty Composite Template.
    5. Drag a Database Adapter Component to the External References panel on the composite and provide a service name.
    6. Create a new database connection, or use an existing one.
    7. Take note of the JNDI Name, e.g. eis/DB/MyConnection. This will be used to configure the DB connection in the WebLogic Console.
    8. In my example I use a stored procedure, but you can use whatever operation you require. Please refer to the following link for other options: User's Guide for Technology Adapters.
    9. Select a schema and stored procedure.
    10. Once the procedure has been selected, accept the defaults and finish.
    11. Start up your OEPE version of Eclipse.
    12. Create a new Oracle Service Bus Configuration Project (you can use an existing project if you have one).
    13. Create a new Oracle Service Bus Project in the configuration project created above.
    14. Instead of importing the WSDL and XSD files you import the jca file created in JDeveloper. In Eclipse right-click the Oracle Service Bus Project and select Import –> Import.
    15. Choose File System.
    16. Browse to the directory where JDeveloper stores its project.
    17. Select the jca, wsdl, and xsd files based on the service you created in step 5. Also check the ‘Create selected folders only’ radio button.
    18. When you import you may have a little red x indicating the files are invalid. This is due to the location of the files. Open the invalid files and fix the path in relation to where you store your files in the OSB project.
    19. Once you have the files all valid, right-click the jca file and select Oracle Service Bus –> Generate Service. This will create a new Business Service.
    20. In the WebLogic Console configure the JNDI name defined in step 7.
    21. You can now deploy your project and test.

    Read the article

  • Reverse-engineer SharePoint fields, content types and list instance—Part2

    - by ybbest
Reverse-engineer SharePoint fields, content types and list instance—Part1 | Reverse-engineer SharePoint fields, content types and list instance—Part2. In part 1 of this series, I demonstrated how to use VS2010 to reverse-engineer SharePoint fields, content types and list instances. In part 2, I will demonstrate how to do the same using CKS:Dev. CKS:Dev extends the Visual Studio 2010 SharePoint project system with advanced templates and tools. Using these extensions you will be able to find relevant information from your SharePoint environments without leaving Visual Studio, you will be more productive while developing SharePoint components, and you will have greater deployment capabilities on your local SharePoint installation. You can download the complete solution here.
1. First, download and install the appropriate CKS:Dev release from CodePlex: the SharePoint Foundation 2010 version if you are using SharePoint Foundation 2010, or the SharePoint Server 2010 version if you are using SharePoint Server 2010.
2. After installation, restart Visual Studio and create an empty SharePoint project.
3. Go to View → Server Explorer.
4. Add a SharePoint web application connection to the Server Explorer.
5. After adding the connection, you can browse the contents of the web application.
6. Go to Site Columns → YBBEST (a custom group of your own choice), right-click the YBBEST folder and click Import Site Columns.
7. Go to Content Types → YBBEST (a custom group of your own choice), right-click the YBBEST folder and click Import Content Types.
8. After the import completes, you can find the fields and content types in the SharePoint project below. Of course, you will need to make some modifications to your current project to make it work.
9. Next, create list instances using the list instance item template in Visual Studio.
10. Finally, create the lookup columns using feature receivers, and the final project will look like this. You can download the complete solution here.

    Read the article

  • Messed up installation of mysql-server - cannot complete installation or deinstallation

    - by Christian Engel
apt-get got stuck while installing mysql-server. I don't know why, but it just stopped working and never continued, and I had to reboot the machine in the middle of the setup process. Now, if I try to install or purge the mysql-server package, apt-get tries to configure mysql-server first (it tells me it's not installed before that) and cancels with an error message: Sub-process /usr/bin/dpkg returned an error code (1). apt-get also tells me that two packages have not been successfully installed or removed. This is the complete console output:
    christian@devbox:~$ sudo apt-get install mysql-server
    [sudo] password for christian:
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    mysql-server is already the newest version.
    0 upgraded, 0 newly installed, 0 to remove and 17 not upgraded.
    2 not fully installed or removed.
    After this operation, 0 B of additional disk space will be used.
    Do you want to continue [Y/n]? y
    Setting up mysql-server-5.5 (5.5.32-0ubuntu7) ...
    start: Job failed to start
    invoke-rc.d: initscript mysql, action "start" failed.
    dpkg: error processing mysql-server-5.5 (--configure):
     subprocess installed post-installation script returned error exit status 1
    dpkg: dependency problems prevent configuration of mysql-server:
     mysql-server depends on mysql-server-5.5; however:
      Package mysql-server-5.5 is not configured yet.
    dpkg: error processing mysql-server (--configure):
     dependency problems - leaving unconfigured
    No apport report written because the error message indicates it's a followup error from a previous failure.
    Errors were encountered while processing:
     mysql-server-5.5
     mysql-server
    E: Sub-process /usr/bin/dpkg returned an error code (1)
    christian@devbox:~$
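Not a guaranteed fix, but the usual way out of a half-configured package on Ubuntu is to let dpkg finish the interrupted configuration, read mysqld's own log to see why the init job fails, and only purge and reinstall as a last resort; a rough sketch (back up /var/lib/mysql first if the data matters):

    # Let dpkg retry the interrupted configuration step
    sudo dpkg --configure -a
    sudo apt-get install -f

    # If "start: Job failed to start" persists, the real reason is usually in the logs
    sudo tail -n 50 /var/log/mysql/error.log
    sudo tail -n 50 /var/log/upstart/mysql.log

    # Last resort: purge the broken packages and their config, then reinstall
    # (this removes /etc/mysql; /var/lib/mysql keeps the databases, so back it up first)
    sudo apt-get purge mysql-server mysql-server-5.5
    sudo apt-get install mysql-server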

    Read the article

  • WinSat command line closes too fast

    - by Rob Cowell
I'm trying to do some analysis under Windows 7 as to why I can't get a Windows Experience Index (WEI) rating due to disk issues. To this end, I'm trying to run winsat from the command line with winsat disk -seq -read -drive c and winsat disk -ran -write -n 2, but the command window closes too quickly to be able to read the results. I've tried opening a separate cmd window to run it in, but it still insists on launching its own window, which closes straight away. Any idea how I can see the output?
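In case it helps, the usual workaround (assuming the window is closing because winsat is self-elevating into its own console) is to run it from an already-elevated prompt or capture the output to a file; a rough sketch:

    :: Open a Command Prompt with "Run as administrator" and run winsat inside it,
    :: so it does not spawn a separate elevated console that closes on completion
    winsat disk -seq -read -drive c
    winsat disk -ran -write -n 2

    :: Or capture the results so they can be read afterwards
    winsat disk -seq -read -drive c > C:\winsat-seq-read.txt 2>&1

    :: The full formal assessment also stores its XML results for later inspection
    winsat formal
    dir %windir%\Performance\WinSAT\DataStore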

    Read the article

  • Turn Photos and Home Videos into Movies with Windows Live Movie Maker

    - by DigitalGeekery
Are you looking for an easy way to take your digital photos and videos and turn them into a movie or slideshow? Today we’ll take a detailed look at how to use Windows Live Movie Maker.

Installation
Windows Live Movie Maker comes bundled as part of the Windows Live Essentials suite (link below). However, you don’t have to install any of the programs you may not want. Take notice of the You’re almost done screen. Before clicking Continue, be sure to uncheck the boxes to set your search provider and homepage.

Adding Pictures and Videos
Open Windows Live Movie Maker. You can add videos or photos by simply dragging and dropping them onto the storyboard area. You can also click on the storyboard area or on the Add videos and photos button on the Home tab to browse for videos and photos. Windows Live Movie Maker supports most video, image, and audio file types. Select your files and click Open to add them to Windows Live Movie Maker. By default WLMM doesn’t allow you to add files from network locations, so check out our article on how to add network support to Windows Live Movie Maker if the files you want to add are on a network drive.

Layout
All of your added clips will appear in the storyboard area on the right, while the currently selected clip will appear in the preview window on the left. You can adjust the size of the two areas by clicking and dragging the dividing line in the middle. Make the clips on the storyboard bigger or smaller by clicking on the thumbnail size icon. The slider at the lower right adjusts the zoom time scale.

Previewing your Movie
At any time, you can play back your movie and preview how it will look in the Preview window by pressing the space bar, or by pushing the play button under the preview window. You can also manually move the preview bar slider across the storyboard to view the clips as the video progresses.

Adjusting Clips on the Storyboard
You can click and drag clips on the storyboard to change the order in which the photos and videos appear.

Adding Music
Nothing brings a movie to life quite like music. Selecting Add music will add your music to the beginning of the movie. Select Add music at the current point to include it in the movie at the current location of your preview bar slider, then browse for your music clip. WLMM supports many common audio formats such as WAV, MP3, M4A, WMA, AIFF, and ASF. The music clip will appear above the video and photo clips on the storyboard. You can change the location of music clips by clicking and dragging them to a different location on the storyboard.

Add Titles, Captions, and Credits
To add a Title screen to your movie, click the Title button on the Home tab. Type your title directly into the text box on the preview screen. The title will be placed at the location of the preview slider on the storyboard; however, you can change the location by clicking and dragging the title to other areas of the storyboard. On the Format tab there are a handful of text settings. You can change the font, color, size, alignment, and transparency. The Adjust group allows you to change the background color, edit the text, and set the length of time the Title will appear in the movie. The Effects group on the Format tab allows you to select an effect for your title screen. By hovering your cursor over each option, you will get a live preview of how each effect will appear in the preview window. Click to apply any of the effects.
For captions, select where you want your caption to appear with the preview slider on the storyboard, then click the Caption button on the Home tab. Just like the title, you type your caption directly into the text box on the preview screen, and you can make any adjustments by using the Font and Paragraph, Adjust, and Effects groups above. Credits are done the same way as titles and captions, except they are automatically placed at the end of the movie.

Transitions
Go to the Animations tab on the ribbon to apply transitions. Select a clip from the storyboard and hover over one of the transitions to see it in the preview window. Click on the transition to apply it to the clip. You can apply transitions to clips separately, or hold down the Ctrl button while clicking to select multiple clips to which to apply the same transition. Pan and zoom effects are also located on the Animations tab, but can be applied to photos only. Like transitions, you can apply them individually to a clip or hold down the Ctrl button while clicking to select multiple clips to which to apply the same pan and zoom effect. Once applied, you can adjust the duration of the transitions and pan and zoom effects. You can also click the dropdown for additional transitions or effects.

Visual Effects
Similar to pan and zoom effects and transitions, you can apply a variety of visual effects to individual or multiple clips.

Editing Video and Music
Note: this does not actually edit the original video you imported into your Windows Live Movie Maker project, only how it appears in your WLMM project. There are some very basic editing tools located on the Home tab. The Rotate left and Rotate right buttons will adjust any clip that may be oriented incorrectly. The Fit to music button will automatically adjust the duration of the photos (if you have any in your project) to fit the length of the music in your movie. Audio mix allows you to change the volume level. You can also do some slightly more advanced editing from the Edit tab. Select the video clip on the storyboard and click the Trim tool to edit or remove portions of a video clip. Next, click and drag the sliders in the preview window to select the area you wish to keep: the area outside the sliders is trimmed from the movie, and the area inside is the section that is kept. You can also adjust the Start and End points manually on the ribbon. When you are finished, click Save trim. You can also split your video clips. Move the preview slider to the location in the video clip where you’d like to split it, and select Split. Your video will be split into separate sections. Now you can apply different effects or move them to different locations on the storyboard.

Editing Music Clips
Select the music clip on the storyboard and then the Options tab on the ribbon. You can adjust the music volume by moving the slider right and left. You can also choose to have your music clip fade in or out at the beginning and end of your movie: from the Fade in and Fade out dropdowns, select None, Slow, Medium, or Fast. To adjust the sound of your audio clips, click on the Edit tab, select the Video volume button, and adjust the slider. Move it all the way to the left to mute any background noise in your video clips.

AutoMovie
As you have seen, Windows Live Movie Maker allows you to add effects, transitions, titles, and more.
If you don’t want to do any of that stuff yourself, AutoMovie will automatically add a title, credits, cross fade transitions between items, and pan and zoom effects to photos, and fit your project to the music. Just select the AutoMovie button on the Home tab. You can go from zero to movie in literally a couple of minutes.

Uploading to YouTube
You can share your video on YouTube directly from Windows Live Movie Maker. Click on the YouTube icon in the Sharing group on the Home tab. You’ll be prompted for your YouTube username and password. Fill in the details about your movie and click Publish. The movie will be converted to WMV before being uploaded to YouTube. As soon as the YouTube conversion is complete, your new movie is live and ready to be viewed.

Saving your Movie as a Video File
Select the icon at the top left, then select Save movie. As you hover your mouse over each of the options, you will see the output display size, aspect ratio, and estimated file size per minute of video. All of these settings will output your movie as a WMV file (unfortunately, the only option is to save a movie as a WMV file); the only difference is how they are encoded, based on preset common settings. The Burn to DVD option also outputs a WMV file, but then opens Windows DVD Maker and walks you through the process of creating and burning a DVD. If you choose the Burn to DVD option, close this window when the WMV file conversion is complete and Windows DVD Maker will prompt you to begin. When your movie is finished, it’s time to relax and enjoy.

Conclusion
Windows Live Movie Maker makes it easy for the average person to quickly churn out nice-looking movies and slideshows from their own pictures and videos. However, long-time users of previous editions (formerly called Windows Movie Maker) will likely be disappointed by some features missing in Windows Live Movie Maker that existed in earlier editions. If you are looking for details on burning your new project to DVD, check out our article on how to create and author DVDs with Windows DVD Maker.
Download Windows Live Movie Maker

    Read the article

  • IIS 7.5 Custom HTTP Response Headers Not Working

    - by Craig
I'm trying to set up custom HTTP response headers on a new install of IIS 7.5 on Windows Server 2008 R2 Standard and they are not working. Default headers work fine (X-Powered-By, etc.). Modifying default header values works (i.e. changing X-Powered-By to ASP.NET2). Modifying default header names causes the header to stop being output (i.e. changing X-Powered-By to X-Powered-By2). The site in question is a test site with a single HTML page. Custom headers also don't work on an ASP.NET 2.0 site. I've tried setting the headers at the global level and at the site level, to no effect.
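For comparison, a custom header on IIS 7.5 is normally registered under system.webServer/httpProtocol/customHeaders, either in the site's web.config or with appcmd; the sketch below uses a placeholder site name and header, so adjust to your environment:

    :: Placeholder site and header names - substitute your own
    %windir%\system32\inetsrv\appcmd.exe set config "Default Web Site" ^
        -section:system.webServer/httpProtocol /+"customHeaders.[name='X-Test-Header',value='hello']"

    :: List what IIS thinks the effective custom headers are for the site
    %windir%\system32\inetsrv\appcmd.exe list config "Default Web Site" -section:system.webServer/httpProtocol

Inspecting the site's web.config afterwards also shows whether a <clear /> or <remove> element higher up the configuration hierarchy is stripping the custom entries before the response goes out.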

    Read the article

< Previous Page | 640 641 642 643 644 645 646 647 648 649 650 651  | Next Page >