Search Results

Search found 2109 results on 85 pages for 'it depends'.


  • Error installing mysql, any way to fix it?

    - by user156355
    I was installing MySQL with apt-get install mysql-server (as I always do). Before that I had run apt-get update (I am using Debian 6). The install fails with the output below, which seems to be a pretty common problem, but I've followed all the usual steps and nothing has worked. I've tried apt-get install -f, apt-get remove mysql-server (plus mysql-common and mysql-server-5.1), and apt-get purge on every package followed by a fresh install, but nothing changed. I also tried dpkg-reconfigure mysql-server-5.1 and apt-get install --reinstall mysql-server (all run as root). Still, nothing worked. Any ideas?

        130130 10:11:48 InnoDB: Shutdown completed; log sequence number 0 44233
        Starting MySQL database server: mysqld . . . . . . . . . . . . . . failed!
        invoke-rc.d: initscript mysql, action "start" failed.
        dpkg: error processing mysql-server-5.1 (--configure):
         subprocess installed post-installation script returned error exit status 1
        configured to not write apport reports
        dpkg: dependency problems prevent configuration of mysql-server:
         mysql-server depends on mysql-server-5.1; however:
          Package mysql-server-5.1 is not configured yet.
        dpkg: error processing mysql-server (--configure):
         dependency problems - leaving unconfigured
        configured to not write apport reports
        Errors were encountered while processing:
         mysql-server-5.1
         mysql-server
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    When I tried dpkg-reconfigure mysql-server-5.1:

        /usr/sbin/dpkg-reconfigure: mysql-server-5.1 is broken or not fully installed

    The "start" case in /etc/init.d/mysql is:

        'start')
            sanity_checks;
            # Start daemon
            log_daemon_msg "Starting MySQL database server" "mysqld"
            if mysqld_status check_alive nowarn; then
                log_progress_msg "already running"
                log_end_msg 0
            else
                # Could be removed during boot
                test -e /var/run/mysqld || install -m 755 -o mysql -g root -d /var/run/mysqld
                # Start MySQL!
                /usr/bin/mysqld_safe > /dev/null 2>&1 &
                # 6s was reported in #352070 to be too few when using ndbcluster
                for i in 1 2 3 4 5 6 7 8 9 10 11 12 13 14; do
                    sleep 1
                    if mysqld_status check_alive nowarn ; then break; fi
                    log_progress_msg "."
                done
                if mysqld_status check_alive warn; then
                    log_end_msg 0
                    # Now start mysqlcheck or whatever the admin wants.
                    output=$(/etc/mysql/debian-start)
                    [ -n "$output" ] && log_action_msg "$output"
                else
                    log_end_msg 1
                    log_failure_msg "Please take a look at the syslog"
                fi
            fi

    When I run mysql force-reload:

        Reloading MySQL database server: mysqld
        /usr/bin/mysqladmin: connect to server at 'localhost' failed
        error: 'Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)'
        Check that mysqld is running and that the socket: '/var/run/mysqld/mysqld.sock' exists!
        root@americandougnuts:/etc/init.d#


  • Debian Lenny to Debian Squeeze upgrade problems

    - by Roland Soós
    Hi! Yesterday I made a dist-upgrade on my Debian Lenny server. I thought it will be easy as an usual upgrade, but it's not. I got a lot of problem after the update: # apt-get upgrade Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these. The following packages have unmet dependencies: linux-image-2.6-amd64 : Depends: linux-image-2.6.32-5-amd64 but it is not installed E: Unmet dependencies. Try using -f. Then I tried the suggestion: # apt-get -f install Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following packages were automatically installed and are no longer required: libio-compress-base-perl libatk1.0-0 libts-0.0-0 libmime-types-perl libc-client2007b libgtk2.0-common libxfixes3 libgsf-1-common hicolor-icon-theme libfile-remove-perl libxcomposite1 libltdl3-dev libneon27 libmd5-perl libwmf0.2-7 libilmbase6 libatk1.0-data djvulibre-desktop libdirectfb-1.0-0 fam libxinerama1 libcroco3 libopenexr6 libgsf-1-114 libmail-box-perl libdjvulibre21 openssl-blacklist librsvg2-2 libio-compress-zlib-perl libsysfs2 libbeecrypt6 libxdamage1 libobject-realize-later-perl libuser-identity-perl libgtk2.0-bin libxi6 libxcursor1 portmap libxrandr2 libgtk2.0-0 Use 'apt-get autoremove' to remove them. The following extra packages will be installed: linux-image-2.6.32-5-amd64 Suggested packages: linux-doc-2.6.32 The following NEW packages will be installed: linux-image-2.6.32-5-amd64 0 upgraded, 1 newly installed, 0 to remove and 121 not upgraded. 98 not fully installed or removed. Need to get 0 B/28.6 MB of archives. After this operation, 103 MB of additional disk space will be used. Do you want to continue [Y/n]? y perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "hu_HU.UTF-8" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: Nincs ilyen f?jl vagy k?nyvt?r Preconfiguring packages ... (Reading database ... 37915 files and directories currently installed.) Unpacking linux-image-2.6.32-5-amd64 (from .../linux-image-2.6.32-5-amd64_2.6.32-30_amd64.deb) ... locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: Nincs ilyen f?jl vagy k?nyvt?r dpkg: error processing /var/cache/apt/archives/linux-image-2.6.32-5-amd64_2.6.32-30_amd64.deb (--unpack): failed in write on buffer copy for backend dpkg-deb during `./lib/modules/2.6.32-5-amd64/kernel/sound/pci/hda/snd-hda-codec-realtek.ko': No space left on device configured to not write apport reports dpkg-deb: subprocess paste killed by signal (Broken pipe) locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: Nincs ilyen f?jl vagy k?nyvt?r Running postrm hook script /sbin/update-grub. Searching for GRUB installation directory ... found: /boot/grub Searching for default file ... found: /boot/grub/default Testing for an existing GRUB menu.lst file ... found: /boot/grub/menu.lst Searching for splash image ... none found, skipping ... Found kernel: /boot/vmlinuz-2.6.26-2-amd64 Updating /boot/grub/menu.lst ... done Examining /etc/kernel/postrm.d . 
        run-parts: executing /etc/kernel/postrm.d/initramfs-tools 2.6.32-5-amd64 /boot/vmlinuz-2.6.32-5-amd64
        Errors were encountered while processing:
         /var/cache/apt/archives/linux-image-2.6.32-5-amd64_2.6.32-30_amd64.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)

        # dpkg-reconfigure locales
        perl: warning: Setting locale failed.
        perl: warning: Please check that your locale settings:
                LANGUAGE = (unset),
                LC_ALL = (unset),
                LANG = "hu_HU.UTF-8"
            are supported and installed on your system.
        perl: warning: Falling back to the standard locale ("C").
        locale: Cannot set LC_CTYPE to default locale: No such file or directory
        locale: Cannot set LC_ALL to default locale: Nincs ilyen fájl vagy könyvtár
        /usr/sbin/dpkg-reconfigure: locales is broken or not fully installed

    Then I got stuck. Do you have any idea how I could solve this?


  • Windows 7 / Ubuntu Dual-Boot GRUB Problem

    - by Tek
    I'd like to say ahead of time that I'm running a RAID-0 setup. First of all, I'm glad Ubuntu 9.10 installed flawlessly and detected my RAID-0 setup just fine. The issue I'm having now is that I already had Windows 7 installed and made a small 12GB partition for Linux/swap. I grabbed EasyBCD 2.0 to edit the Windows 7 bootloader and configured it to dual-boot GRUB2, because before that it didn't even show an option for Ubuntu. The bootloader points to a file EasyBCD created in the Windows directory called "C:\NST\AutoNeoGrub0.mbr", which is what I'm guessing GRUB is booting from. After that I got the option for booting Ubuntu.

    The problem is that it drops me at the GRUB prompt (probably because it's pointing to \NST\AutoNeoGrub0.mbr?). At first I didn't know what to do, but after some research I found I have to type GRUB commands to manually boot into Ubuntu, for example:

        grub> root (hd0,4)
        grub> kernel /boot/vmlinuz-2.6... root=/dev/disk/by-uuid/24624-2424...
        grub> initrd boot/initrd.img-2.6...
        grub> boot

    After all that Ubuntu boots just fine, but how do I fix it permanently? Do I need to edit the bootloader manually (since EasyBCD "autoconfigures")? Some insight on this would rock! Also, it sucks to type the actual UUID since it's REALLY long. I tried getting the name of the drive via fdisk -l, but since it's RAID 0 I'm guessing I can't do that. How can I get a shorter name for the drive, like /dev/sda, /dev/sdb, etc.?

    I've also tried to update to the latest GRUB and I got this:

        Creating config file /etc/default/grub with new version
        Generating core.img
        error: cannot seek `/dev/sdc'
        error: cannot seek `/dev/sdc'
        grub-probe: error: no mapping exists for `nvidia_dbedfcca5'
        Auto-detection of a filesystem module failed.
        Please specify the module with the option `--modules' explicitly.
        dpkg: error processing grub-pc (--configure):
         subprocess installed post-installation script returned error exit status 1
        dpkg: dependency problems prevent configuration of grub2:
         grub2 depends on grub-pc; however:
          Package grub-pc is not configured yet.
        dpkg: error processing grub2 (--configure):
         dependency problems - leaving unconfigured
        No apport report written because the error message indicates its a followup error from a previous failure.
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    I've also tried:

        b@dnb:~$ sudo update-grub
        error: cannot seek `/dev/sdc'
        error: cannot seek `/dev/sdc'
        Generating grub.cfg ...
        Found linux image: /boot/vmlinuz-2.6.31-14-generic
        Found initrd image: /boot/initrd.img-2.6.31-14-generic
        error: cannot seek `/dev/sdc'
        grub-probe: error: no mapping exists for `nvidia_dbedfcca5'
        error: cannot seek `/dev/sdc'
        grub-probe: error: no mapping exists for `nvidia_dbedfcca5'
        Found memtest86+ image: /boot/memtest86+.bin
        Found Windows 7 (loader) on /dev/mapper/nvidia_dbedfcca1
        error: cannot seek `/dev/sdc'
        grub-probe: error: no mapping exists for `nvidia_dbedfcca1'
        done

    To no avail. Any idea what I can do to fix this mess? :(

    Edit: This is my disk configuration.

        b@dnb:~$ sudo df -l
        Filesystem                   1K-blocks    Used Available Use% Mounted on
        /dev/mapper/nvidia_dbedfcca5  12302232 2744788   8932520  24% /
        udev                           1030288     268   1030020   1% /dev
        none                           1030288     964   1029324   1% /dev/shm
        none                           1030288      92   1030196   1% /var/run
        none                           1030288       0   1030288   0% /var/lock
        none                           1030288       0   1030288   0% /lib/init/rw
        /dev/sr0                        706532  706532         0 100% /media/cdrom0

    Note: /dev/mapper/nvidia_dbedfcca5 is my Linux boot partition.


  • Open-source and client-side IRC client

    - by user125197
    I administer a website with an embedded IRC web client. At the moment it uses a Java client (the usual PJIRC setup), modified to be SSL-compatible and to fit whatever design I want, with users' wishes and concerns implemented (I'll happily share the SSL-enabled PJIRC if you want it - no problem. The login script will get improved IE6 support this weekend, meaning it writes the object tag if an old IE is detected; the rest runs perfectly well). Still, the better user experience would be a client that only requires the user to enable JavaScript. So, before I go off rewriting and customizing a complete Java IRC solution, I'd like to ask other people - after half a year of research. The requirements are:

    a) Free hosting; requiring cgi-bin is not acceptable. Many hosts do support CGI, but don't allow IRC access, so for CGI:IRC you seem to be limited to x10-hosting.
    b) The solution must be open source (not free as in free beer, but free as in free speech - I want to be able to read every line of the code).
    c) Hosting cannot be limited to Windows hosting. Free operating systems (anything with Linux, BSD, whatever - you get the idea) must be possible.
    d) No server-side technology. I'm limited to things that run purely client-side. (Well, a Java applet does...)
    e) The solution must support SSL.
    f) The site must be able to move: register with a new free host, upload everything via FileZilla, and it simply runs. Nothing server-specific needed.
    g) Old browsers must be supported.

    From all my research, there are two ways: a) a Java applet, or b) HTML5 WebSockets (the JavaScript library I found is unusable because it depends on ActiveX - which means Windows hosting). Option b) means entering very unstable territory; I worked with HTML5 for months, and, well... HTML5 is also not suitable for older browsers, and old browser support is an absolutely necessary requirement. PHP, by the way, is a server-side solution, so not a single line of PHP. My current idea is a Java applet, plus a fallback that is again embedded and proprietary but only requires JavaScript (wsirc or Mibbit are great here; I prefer wsirc... though the fonts are plainly too small even when zooming, and worse, I cannot read the code). So my question is: with free hosting, no installation on the server, plain upload - do you see an open-source, completely client-side, SSL-compatible way to use or write a client? This wouldn't be my first programming project, and I'm not afraid of the code. If there is a way and I have to write it, even from scratch, I will. From all I know, there sadly is none. But maybe some experienced admin has an idea? Best wishes, yetanotheruser. Sorry my English isn't perfect - it's enough for programming at least.


  • dpkg: error processing /var/cache/apt/archives/python2.6-minimal_2.6.6-5ubuntu1_i386.deb (--unpack)

    - by udo
    I had an issue (Question 199582) which was resolved. Unfortunately I am stuck at this point now. Running root@X100e:/var/cache/apt/archives# apt-get dist-upgrade Reading package lists... Done Building dependency tree Reading state information... Done Calculating upgrade... Done The following NEW packages will be installed: file libexpat1 libmagic1 libreadline6 libsqlite3-0 mime-support python python-minimal python2.6 python2.6-minimal readline-common 0 upgraded, 11 newly installed, 0 to remove and 0 not upgraded. Need to get 0B/5,204kB of archives. After this operation, 19.7MB of additional disk space will be used. Do you want to continue [Y/n]? Y (Reading database ... 6108 files and directories currently installed.) Unpacking python2.6-minimal (from .../python2.6-minimal_2.6.6-5ubuntu1_i386.deb) ... new installation of python2.6-minimal; /usr/lib/python2.6/site-packages is a directory which is expected a symlink to /usr/local/lib/python2.6/dist-packages. please find the package shipping files in /usr/lib/python2.6/site-packages and file a bug report to ship these in /usr/lib/python2.6/dist-packages instead aborting installation of python2.6-minimal dpkg: error processing /var/cache/apt/archives/python2.6-minimal_2.6.6-5ubuntu1_i386.deb (--unpack): subprocess new pre-installation script returned error exit status 1 Errors were encountered while processing: /var/cache/apt/archives/python2.6-minimal_2.6.6-5ubuntu1_i386.deb E: Sub-process /usr/bin/dpkg returned an error code (1) results in above error. Running root@X100e:/var/cache/apt/archives# dpkg -i python2.6-minimal_2.6.6-5ubuntu1_i386.deb (Reading database ... 6108 files and directories currently installed.) Unpacking python2.6-minimal (from python2.6-minimal_2.6.6-5ubuntu1_i386.deb) ... new installation of python2.6-minimal; /usr/lib/python2.6/site-packages is a directory which is expected a symlink to /usr/local/lib/python2.6/dist-packages. please find the package shipping files in /usr/lib/python2.6/site-packages and file a bug report to ship these in /usr/lib/python2.6/dist-packages instead aborting installation of python2.6-minimal dpkg: error processing python2.6-minimal_2.6.6-5ubuntu1_i386.deb (--install): subprocess new pre-installation script returned error exit status 1 Errors were encountered while processing: python2.6-minimal_2.6.6-5ubuntu1_i386.deb results in above error. Running root@X100e:/var/cache/apt/archives# dpkg -i --force-depends python2.6-minimal_2.6.6-5ubuntu1_i386.deb (Reading database ... 6108 files and directories currently installed.) Unpacking python2.6-minimal (from python2.6-minimal_2.6.6-5ubuntu1_i386.deb) ... new installation of python2.6-minimal; /usr/lib/python2.6/site-packages is a directory which is expected a symlink to /usr/local/lib/python2.6/dist-packages. please find the package shipping files in /usr/lib/python2.6/site-packages and file a bug report to ship these in /usr/lib/python2.6/dist-packages instead aborting installation of python2.6-minimal dpkg: error processing python2.6-minimal_2.6.6-5ubuntu1_i386.deb (--install): subprocess new pre-installation script returned error exit status 1 Errors were encountered while processing: python2.6-minimal_2.6.6-5ubuntu1_i386.deb is not able to fix this. Any clues how to fix this?


  • ASP.NET MVC 3 Hosting :: New Features in ASP.NET MVC 3

    - by mbridge
    Razor View Engine The Razor view engine is a new view engine option for ASP.NET MVC that supports the Razor templating syntax. The Razor syntax is a streamlined approach to HTML templating designed with the goal of being a code driven minimalist templating approach that builds on existing C#, VB.NET and HTML knowledge. The result of this approach is that Razor views are very lean and do not contain unnecessary constructs that get in the way of you and your code. ASP.NET MVC 3 Preview 1 only supports C# Razor views which use the .cshtml file extension. VB.NET support will be enabled in later releases of ASP.NET MVC 3. For more information and examples, see Introducing “Razor” – a new view engine for ASP.NET on Scott Guthrie’s blog. Dynamic View and ViewModel Properties A new dynamic View property is available in views, which provides access to the ViewData object using a simpler syntax. For example, imagine two items are added to the ViewData dictionary in the Index controller action using code like the following: public ActionResult Index() {          ViewData["Title"] = "The Title";          ViewData["Message"] = "Hello World!"; } Those properties can be accessed in the Index view using code like this: <h2>View.Title</h2> <p>View.Message</p> There is also a new dynamic ViewModel property in the Controller class that lets you add items to the ViewData dictionary using a simpler syntax. Using the previous controller example, the two values added to the ViewData dictionary can be rewritten using the following code: public ActionResult Index() {     ViewModel.Title = "The Title";     ViewModel.Message = "Hello World!"; } “Add View” Dialog Box Supports Multiple View Engines The Add View dialog box in Visual Studio includes extensibility hooks that allow it to support multiple view engines, as shown in the following figure: Service Location and Dependency Injection Support ASP.NET MVC 3 introduces improved support for applying Dependency Injection (DI) via Inversion of Control (IoC) containers. ASP.NET MVC 3 Preview 1 provides the following hooks for locating services and injecting dependencies: - Creating controller factories. - Creating controllers and setting dependencies. - Setting dependencies on view pages for both the Web Form view engine and the Razor view engine (for types that derive from ViewPage, ViewUserControl, ViewMasterPage, WebViewPage). - Setting dependencies on action filters. Using a Dependency Injection container is not required in order for ASP.NET MVC 3 to function properly. Global Filters ASP.NET MVC 3 allows you to register filters that apply globally to all controller action methods. Adding a filter to the global filters collection ensures that the filter runs for all controller requests. To register an action filter globally, you can make the following call in the Application_Start method in the Global.asax file: GlobalFilters.Filters.Add(new MyActionFilter()); The source of global action filters is abstracted by the new IFilterProvider interface, which can be registered manually or by using Dependency Injection. This allows you to provide your own source of action filters and choose at run time whether to apply a filter to an action in a particular request. New JsonValueProviderFactory Class The new JsonValueProviderFactory class allows action methods to receive JSON-encoded data and model-bind it to an action-method parameter. This is useful in scenarios such as client templating. 
Client templates enable you to format and display a single data item or set of data items by using a fragment of HTML. ASP.NET MVC 3 lets you connect client templates easily with an action method that both returns and receives JSON data. Support for .NET Framework 4 Validation Attributes and IvalidatableObject The ValidationAttribute class was improved in the .NET Framework 4 to enable richer support for validation. When you write a custom validation attribute, you can use a new IsValid overload that provides a ValidationContext instance. This instance provides information about the current validation context, such as what object is being validated. This change enables scenarios such as validating the current value based on another property of the model. The following example shows a sample custom attribute that ensures that the value of PropertyOne is always larger than the value of PropertyTwo: public class CompareValidationAttribute : ValidationAttribute {     protected override ValidationResult IsValid(object value,              ValidationContext validationContext) {         var model = validationContext.ObjectInstance as SomeModel;         if (model.PropertyOne > model.PropertyTwo) {            return ValidationResult.Success;         }         return new ValidationResult("PropertyOne must be larger than PropertyTwo");     } } Validation in ASP.NET MVC also supports the .NET Framework 4 IValidatableObject interface. This interface allows your model to perform model-level validation, as in the following example: public class SomeModel : IValidatableObject {     public int PropertyOne { get; set; }     public int PropertyTwo { get; set; }     public IEnumerable<ValidationResult> Validate(ValidationContext validationContext) {         if (PropertyOne <= PropertyTwo) {            yield return new ValidationResult(                "PropertyOne must be larger than PropertyTwo");         }     } } New IClientValidatable Interface The new IClientValidatable interface allows the validation framework to discover at run time whether a validator has support for client validation. This interface is designed to be independent of the underlying implementation; therefore, where you implement the interface depends on the validation framework in use. For example, for the default data annotations-based validator, the interface would be applied on the validation attribute. Support for .NET Framework 4 Metadata Attributes ASP.NET MVC 3 now supports .NET Framework 4 metadata attributes such as DisplayAttribute. New IMetadataAware Interface The new IMetadataAware interface allows you to write attributes that simplify how you can contribute to the ModelMetadata creation process. Before this interface was available, you needed to write a custom metadata provider in order to have an attribute provide extra metadata. This interface is consumed by the AssociatedMetadataProvider class, so support for the IMetadataAware interface is automatically inherited by all classes that derive from that class (notably, the DataAnnotationsModelMetadataProvider class). New Action Result Types In ASP.NET MVC 3, the Controller class includes two new action result types and corresponding helper methods. HttpNotFoundResult Action The new HttpNotFoundResult action result is used to indicate that a resource requested by the current URL was not found. The status code is 404. This class derives from HttpStatusCodeResult. 
The Controller class includes an HttpNotFound method that returns an instance of this action result type, as shown in the following example: public ActionResult List(int id) {     if (id < 0) {                 return HttpNotFound();     }     return View(); } HttpStatusCodeResult Action The new HttpStatusCodeResult action result is used to set the response status code and description. Permanent Redirect The HttpRedirectResult class has a new Boolean Permanent property that is used to indicate whether a permanent redirect should occur. A permanent redirect uses the HTTP 301 status code. Corresponding to this change, the Controller class now has several methods for performing permanent redirects: - RedirectPermanent - RedirectToRoutePermanent - RedirectToActionPermanent These methods return an instance of HttpRedirectResult with the Permanent property set to true. Breaking Changes The order of execution for exception filters has changed for exception filters that have the same Order value. In ASP.NET MVC 2 and earlier, exception filters on the controller with the same Order as those on an action method were executed before the exception filters on the action method. This would typically be the case when exception filters were applied without a specified order Order value. In MVC 3, this order has been reversed in order to allow the most specific exception handler to execute first. As in earlier versions, if the Order property is explicitly specified, the filters are run in the specified order. Known Issues When you are editing a Razor view (CSHTML file), the Go To Controller menu item in Visual Studio will not be available, and there are no code snippets.
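    To make the JSON binding and status-code features described above concrete, here is a small self-contained sketch. It is not part of the original article: the ReviewViewModel type, ReviewController class, and Save action are hypothetical names, and the snippet only assumes the MVC 3 default that JsonValueProviderFactory is registered out of the box.

        using System.Web.Mvc;

        public class ReviewViewModel
        {
            public string Author { get; set; }
            public string Comment { get; set; }
            public int Rating { get; set; }
        }

        public class ReviewController : Controller
        {
            [HttpPost]
            public ActionResult Save(ReviewViewModel review)
            {
                // A request body such as {"Author":"Jane","Comment":"Nice","Rating":4}
                // sent with Content-Type: application/json is bound to 'review' by the
                // regular model binder, because JsonValueProviderFactory supplies the
                // values; no manual deserialization is needed in the action.
                if (review == null || !ModelState.IsValid)
                {
                    // HttpStatusCodeResult sets the response status code and description.
                    return new HttpStatusCodeResult(400, "Invalid review data");
                }

                return Json(new { success = true });
            }
        }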


  • WiX 3 Tutorial: Generating file/directory fragments with Heat.exe

    - by Mladen Prajdic
    In previous posts I’ve shown you our SuperForm test application solution structure and how the main wxs and wxi include file look like. In this post I’ll show you how to automate inclusion of files to install into your build process. For our SuperForm application we have a single exe to install. But in the real world we have 10s or 100s of different files from dll’s to resource files like pictures. It all depends on what kind of application you’re building. Writing a directory structure for so many files by hand is out of the question. What we need is an automated way to create this structure. Enter Heat.exe. Heat is a command line utility to harvest a file, directory, Visual Studio project, IIS website or performance counters. You might ask what harvesting means? Harvesting is converting a source (file, directory, …) into a component structure saved in a WiX fragment (a wxs) file. There are 2 options you can use: Create a static wxs fragment with Heat and include it in your project. The pro of this is that you can add or remove components by hand. The con is that you have to do the pro part by hand. Automation always beats manual labor. Run heat command line utility in a pre-build event of your WiX project. I prefer this way. By always recreating the whole fragment you don’t have to worry about missing any new files you add. The con of this is that you’ll include files that you otherwise might not want to. There is no perfect solution so pick one and deal with it. I prefer using the second way. A neat way of overcoming the con of the second option is to have a post-build event on your main application project (SuperForm.MainApp in our case) to copy the files needed to be installed in a special location and have the Heat.exe read them from there. I haven’t set this up for this tutorial and I’m simply including all files from the default SuperForm.MainApp \bin directory. Remember how we created a System Environment variable called SuperFormFilesDir? This is where we’ll use it for the first time. The command line text that you have to put into the pre-build event of your WiX project looks like this: "$(WIX)bin\heat.exe" dir "$(SuperFormFilesDir)" -cg SuperFormFiles -gg -scom -sreg -sfrag -srd -dr INSTALLLOCATION -var env.SuperFormFilesDir -out "$(ProjectDir)Fragments\FilesFragment.wxs" After you install WiX you’ll get the WIX environment variable. In the pre/post-build events environment variables are referenced like this: $(WIX). By using this you don’t have to think about the installation path of the WiX. Remember: for 32 bit applications Program files folder is named differently between 32 and 64 bit systems. $(ProjectDir) is obviously the path to your project and is a Visual Studio built in variable. You can view all Heat.exe options by running it without parameters but I’ll explain some that stick out the most. dir "$(SuperFormFilesDir)": tell Heat to harvest the whole directory at the set location. That is the location we’ve set in our System Environment variable. –cg SuperFormFiles: the name of the Component group that will be created. This name is included in out Feature tag as is seen in the previous post. -dr INSTALLLOCATION: the directory reference this fragment will fall under. You can see the top level directory structure in the previous post. -var env.SuperFormFilesDir: the name of the variable that will replace the SourceDir text that would otherwise appear in the fragment file. 
-out "$(ProjectDir)Fragments\FilesFragment.wxs": the full path and name under which the fragment file will be saved. If you have source control you have to include the FilesFragment.wxs into your project but remove its source control binding. The auto generated FilesFragment.wxs for our test app looks like this: <?xml version="1.0" encoding="utf-8"?><Wix xmlns="http://schemas.microsoft.com/wix/2006/wi"> <Fragment> <ComponentGroup Id="SuperFormFiles"> <ComponentRef Id="cmp5BB40DB822CAA7C5295227894A07502E" /> <ComponentRef Id="cmpCFD331F5E0E471FC42A1334A1098E144" /> <ComponentRef Id="cmp4614DD03D8974B7C1FC39E7B82F19574" /> <ComponentRef Id="cmpDF166522884E2454382277128BD866EC" /> </ComponentGroup> </Fragment> <Fragment> <DirectoryRef Id="INSTALLLOCATION"> <Component Id="cmp5BB40DB822CAA7C5295227894A07502E" Guid="{117E3352-2F0C-4E19-AD96-03D354751B8D}"> <File Id="filDCA561ABF8964292B6BC0D0726E8EFAD" KeyPath="yes" Source="$(env.SuperFormFilesDir)\SuperForm.MainApp.exe" /> </Component> <Component Id="cmpCFD331F5E0E471FC42A1334A1098E144" Guid="{369A2347-97DD-45CA-A4D1-62BB706EA329}"> <File Id="filA9BE65B2AB60F3CE41105364EDE33D27" KeyPath="yes" Source="$(env.SuperFormFilesDir)\SuperForm.MainApp.pdb" /> </Component> <Component Id="cmp4614DD03D8974B7C1FC39E7B82F19574" Guid="{3443EBE2-168F-4380-BC41-26D71A0DB1C7}"> <File Id="fil5102E75B91F3DAFA6F70DA57F4C126ED" KeyPath="yes" Source="$(env.SuperFormFilesDir)\SuperForm.MainApp.vshost.exe" /> </Component> <Component Id="cmpDF166522884E2454382277128BD866EC" Guid="{0C0F3D18-56EB-41FE-B0BD-FD2C131572DB}"> <File Id="filF7CA5083B4997E1DEC435554423E675C" KeyPath="yes" Source="$(env.SuperFormFilesDir)\SuperForm.MainApp.vshost.exe.manifest" /> </Component> </DirectoryRef> </Fragment></Wix> The $(env.SuperFormFilesDir) will be replaced at build time with the directory where the files to be installed are located. There is nothing too complicated about this. In the end it turns out that this sort of automation is great! There are a few other ways that Heat.exe can compose the wxs file but this is the one I prefer. It just seems the clearest. Play with its options to see what can it do. It’s one awesome little tool.   WiX 3 tutorial by Mladen Prajdic navigation WiX 3 Tutorial: Solution/Project structure and Dev resources WiX 3 Tutorial: Understanding main wxs and wxi file WiX 3 Tutorial: Generating file/directory fragments with Heat.exe


  • Parallelism in .NET – Part 15, Making Tasks Run: The TaskScheduler

    - by Reed
    In my introduction to the Task class, I specifically made mention that the Task class does not directly provide it’s own execution.  In addition, I made a strong point that the Task class itself is not directly related to threads or multithreading.  Rather, the Task class is used to implement our decomposition of tasks.  Once we’ve implemented our tasks, we need to execute them.  In the Task Parallel Library, the execution of Tasks is handled via an instance of the TaskScheduler class. The TaskScheduler class is an abstract class which provides a single function: it schedules the tasks and executes them within an appropriate context.  This class is the class which actually runs individual Task instances.  The .NET Framework provides two (internal) implementations of the TaskScheduler class. Since a Task, based on our decomposition, should be a self-contained piece of code, parallel execution makes sense when executing tasks.  The default implementation of the TaskScheduler class, and the one most often used, is based on the ThreadPool.  This can be retrieved via the TaskScheduler.Default property, and is, by default, what is used when we just start a Task instance with Task.Start(). Normally, when a Task is started by the default TaskScheduler, the task will be treated as a single work item, and run on a ThreadPool thread.  This pools tasks, and provides Task instances all of the advantages of the ThreadPool, including thread pooling for reduced resource usage, and an upper cap on the number of work items.  In addition, .NET 4 brings us a much improved thread pool, providing work stealing and reduced locking within the thread pool queues.  By using the default TaskScheduler, our Tasks are run asynchronously on the ThreadPool. There is one notable exception to my above statements when using the default TaskScheduler.  If a Task is created with the TaskCreationOptions set to TaskCreationOptions.LongRunning, the default TaskScheduler will generate a new thread for that Task, at least in the current implementation.  This is useful for Tasks which will persist for most of the lifetime of your application, since it prevents your Task from starving the ThreadPool of one of it’s work threads. The Task Parallel Library provides one other implementation of the TaskScheduler class.  In addition to providing a way to schedule tasks on the ThreadPool, the framework allows you to create a TaskScheduler which works within a specified SynchronizationContext.  This scheduler can be retrieved within a thread that provides a valid SynchronizationContext by calling the TaskScheduler.FromCurrentSynchronizationContext() method. This implementation of TaskScheduler is intended for use with user interface development.  Windows Forms and Windows Presentation Foundation both require any access to user interface controls to occur on the same thread that created the control.  For example, if you want to set the text within a Windows Forms TextBox, and you’re working on a background thread, that UI call must be marshaled back onto the UI thread.  The most common way this is handled depends on the framework being used.  In Windows Forms, Control.Invoke or Control.BeginInvoke is most often used.  In WPF, the equivelent calls are Dispatcher.Invoke or Dispatcher.BeginInvoke. As an example, say we’re working on a background thread, and we want to update a TextBlock in our user interface with a status label.  The code would typically look something like: // Within background thread work... 
        string status = GetUpdatedStatus();
        Dispatcher.BeginInvoke(DispatcherPriority.Normal,
            new Action(() => { statusLabel.Text = status; }));
        // Continue on in background method

    This works fine, but forces your method to take a dependency on WPF or Windows Forms. There is an alternative option, however. Both Windows Forms and WPF, when initialized, set up a SynchronizationContext in their thread, which is available on the UI thread via the SynchronizationContext.Current property. This context is used by classes such as BackgroundWorker to marshal calls back onto the UI thread in a framework-agnostic manner. The Task Parallel Library provides the same functionality via the TaskScheduler.FromCurrentSynchronizationContext() method. When setting up our Tasks, as long as we're working on the UI thread, we can construct a TaskScheduler via:

        TaskScheduler uiScheduler = TaskScheduler.FromCurrentSynchronizationContext();

    We can then use this scheduler on any thread to marshal data back onto the UI thread. For example, our code above can be rewritten as:

        string status = GetUpdatedStatus();
        (new Task(() => { statusLabel.Text = status; })).Start(uiScheduler);
        // Continue on in background method

    This is nice since it allows us to write code that isn't tied to Windows Forms or WPF, but is still fully functional with those technologies. I'll discuss even more uses for the SynchronizationContext-based TaskScheduler when I demonstrate task continuations, but even without continuations, this is a very useful construct. In addition to the two implementations provided by the Task Parallel Library, it is possible to implement your own TaskScheduler. The ParallelExtensionsExtras project within the Samples for Parallel Programming provides nine sample TaskScheduler implementations. These include schedulers which restrict the maximum number of concurrent tasks, run tasks on a single-threaded apartment thread, use a new thread per task, and more.
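    As a quick recap of the scheduling options discussed above, here is a minimal console sketch (not from the original article) showing a task on the default ThreadPool-based scheduler, a task created with the LongRunning hint, and a task started on an explicitly supplied scheduler. Only standard Task Parallel Library calls are used.

        using System;
        using System.Threading;
        using System.Threading.Tasks;

        class SchedulerDemo
        {
            static void Main()
            {
                // Default scheduler: the task is queued as a work item on the ThreadPool.
                Task pooled = Task.Factory.StartNew(() => Console.WriteLine("pooled work"));

                // LongRunning hints the default scheduler to give the task its own thread
                // instead of tying up a ThreadPool worker for its whole lifetime.
                Task longRunning = Task.Factory.StartNew(
                    () => Console.WriteLine("long-running work"),
                    TaskCreationOptions.LongRunning);

                // A TaskScheduler can also be passed explicitly; TaskScheduler.Default is
                // used here, but a scheduler from FromCurrentSynchronizationContext() or a
                // custom one (such as the ParallelExtensionsExtras samples) could be
                // supplied instead.
                Task scheduled = Task.Factory.StartNew(
                    () => Console.WriteLine("explicitly scheduled work"),
                    CancellationToken.None,
                    TaskCreationOptions.None,
                    TaskScheduler.Default);

                Task.WaitAll(pooled, longRunning, scheduled);
            }
        }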


  • Security Trimmed Cross Site Collection Navigation

    - by Sahil Malik
    Ad:: SharePoint 2007 Training in .NET 3.5 technologies (more information). This article will serve as documentation of a fully functional codeplex project that I just created. This project will give you a WebPart that will give you security trimmed navigation across site collections. The first question is, why create such a project? In every single SharePoint project you will do, one question you will always be faced with is, what should the boundaries of sites be, and what should the boundaries of site collections be? There is no good or bad answer to this, because it really really depends on your needs. There are some factors in play here. Site Collections will allow you to scale, as a Site collection is the smallest entity you can put inside a content database Site collections will allow you to offer different levels of SLAs, because you put a site collection on a separate content database, and put that database on a separate server. Site collections are a security boundary – and they can be moved around at will without affecting other site collections. Site collections are also a branding boundary. They are also a feature deployment boundary, so you can have two site collections on the same web application with completely different nature of services. But site collections break navigation, i.e. a site collection at “/”, and a site collection at “/sites/mySiteCollection”, are completely independent of each other. If you have access to both, the navigation of / won’t show you a link to /sites/mySiteCollection. Some people refer to this as a huge issue in SharePoint. Luckily, some workarounds exist. A long time ago, I had blogged about “Implementing Consistent Navigation across Site Collections”. That approach was a no-code solution, it worked – it gave you a consistent navigation across site collections. But, it didn’t work in a security trimmed fashion! i.e., if I don’t have access to Site Collection ‘X’, it would still show me a link to ‘X’. Well this project gets around that issue. Simply deploy this project, and it’ll give you a WebPart. You can use that WebPart as either a webpart or as a server control dropped via SharePoint designer, and it will give you Security Trimmed Cross Site Collection Navigation. The code has been written for SP2010, but it will work in SP2007 with the help of http://spwcfsupport.codeplex.com . What do I need to do to make it work? I’m glad you asked! Simple! Deploy the .wsp (which you can download here). This will give you a site collection feature called “Winsmarts Cross Site Collection Navigation” as shown below. Go ahead and activate it, and this will give you a WebPart called “Winsmarts Navigation Web Part” as shown below: Just drop this WebPart on your page, and it will show you all site collections that the currently logged in user has access to. Really it’s that easy! This is shown as below - In the above example, I have two site collections that I created at /sites/SiteCollection1 and /sites/SiteCollection2. The navigation shows the titles. You see some extraneous crap as well, you might want to clean that – I’ll talk about that in a minute. What? You’re running into problems? If the problem you’re running into is that you are prompted to login three times, and then it shows a blank webpart that says “Loading your applications ..” and then craps out!, then most probably you’re using a different authentication scheme. Behind the scenes I use a custom WCF service to perform this job. 
OOTB, I’ve set it to work with NTLM, but if you need to make it work alternate authentications such as forms based auth, or client side certs, you will need to edit the %14%\ISAPI\Winsmarts.CrossSCNav\web.config file, specifically, this section - 1: <bindings> 2: <webHttpBinding> 3: <binding name="customWebHttpBinding"> 4: <security mode="TransportCredentialOnly"> 5: <transport clientCredentialType="Ntlm"/> 6: </security> 7: </binding> 8: </webHttpBinding> 9: </bindings> For Kerberos, change the “clientCredentialType” to “Windows” For Forms auth, remove that transport line For client certs – well that’s a bit more involved, but it’s just web.config changes – hit a good book on WCF or hire me for a billion trillion $. But fair warning, I might be too busy to help immediately. If you’re running into a different problem, please leave a comment below, but the code is pretty rock solid, so .. hmm .. check what you’re doing! BTW, I don’t  make any guarantee/warranty on this – if this code makes you sterile, unpopular, bad hairstyle, anything else, that is your problem! But, there are some known issues - I wrote this as a concept – you can easily extend it to be more flexible. Example, hierarchical nav, or, horizontal nav, jazzy effects with jquery or silverlight– all those are possible very very easily. This webpart is not smart enough to co-exist with another instance of itself on the same page. I can easily extend it to do so, which I will do in my spare(!?) time! Okay good! But that’s not all! As you can see, just dropping the WebPart may show you many extraneous site collections, or maybe you want to restrict which site collections are shown, or exclude a certain site collection to be shown from the navigation. To support that, I created a property on the WebPart called “UrlMatchPattern”, which is a regex expression you specify to trim the results :). So, just edit the WebPart, and specify a string property of “http://sp2010/sites/” as shown below. Note that you can put in whatever regex expression you want! So go crazy, I don’t care! And this gives you a cleaner look.   w00t! Enjoy! Comment on the article ....
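    For readers who want to see the underlying idea in code, the following is a rough server-side sketch of security-trimmed site collection enumeration. It is not the actual code of the codeplex project described above (which does this work behind a WCF service); the class and method names are made up, and only standard SharePoint server object model calls are used. The urlMatchPattern parameter plays the same role as the WebPart's UrlMatchPattern property.

        using System;
        using System.Collections.Generic;
        using System.Text.RegularExpressions;
        using Microsoft.SharePoint;
        using Microsoft.SharePoint.Administration;

        public static class TrimmedNavigation
        {
            // Returns title/URL pairs for the site collections of a web application
            // that the current user is allowed to view, optionally filtered by a
            // regex such as "http://sp2010/sites/".
            public static IList<KeyValuePair<string, string>> GetLinks(
                Uri webApplicationUrl, string urlMatchPattern)
            {
                var links = new List<KeyValuePair<string, string>>();
                SPWebApplication webApp = SPWebApplication.Lookup(webApplicationUrl);

                foreach (SPSite site in webApp.Sites)
                {
                    using (site)
                    {
                        if (!string.IsNullOrEmpty(urlMatchPattern) &&
                            !Regex.IsMatch(site.Url, urlMatchPattern))
                        {
                            continue;   // filtered out by the UrlMatchPattern-style regex
                        }

                        try
                        {
                            using (SPWeb rootWeb = site.OpenWeb())
                            {
                                // DoesUserHavePermissions is evaluated for the current user,
                                // which is what provides the security trimming.
                                if (rootWeb.DoesUserHavePermissions(SPBasePermissions.ViewPages))
                                {
                                    links.Add(new KeyValuePair<string, string>(rootWeb.Title, rootWeb.Url));
                                }
                            }
                        }
                        catch (UnauthorizedAccessException)
                        {
                            // The user cannot open this site collection at all; skip it.
                        }
                    }
                }

                return links;
            }
        }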


  • SQL SERVER – How to Recover SQL Database Data Deleted by Accident

    - by Pinal Dave
    In Repair a SQL Server database using a transaction log explorer, I showed how to use ApexSQL Log, a SQL Server transaction log viewer, to recover a SQL Server database after a disaster. In this blog, I’ll show you how to use another SQL Server disaster recovery tool from ApexSQL in a situation when data is accidentally deleted. You can download ApexSQL Recover here, install, and play along. With a good SQL Server disaster recovery strategy, data recovery is not a problem. You have a reliable full database backup with valid data, a full database backup and subsequent differential database backups, or a full database backup and a chain of transaction log backups. But not all situations are ideal. Here we’ll address some sub-optimal scenarios, where you can still successfully recover data. If you have only a full database backup This is the least optimal SQL Server disaster recovery strategy, as it doesn’t ensure minimal data loss. For example, data was deleted on Wednesday. Your last full database backup was created on Sunday, three days before the records were deleted. By using the full database backup created on Sunday, you will be able to recover SQL database records that existed in the table on Sunday. If there were any records inserted into the table on Monday or Tuesday, they will be lost forever. The same goes for records modified in this period. This method will not bring back modified records, only the old records that existed on Sunday. If you restore this full database backup, all your changes (intentional and accidental) will be lost and the database will be reverted to the state it had on Sunday. What you have to do is compare the records that were in the table on Sunday to the records on Wednesday, create a synchronization script, and execute it against the Wednesday database. If you have a full database backup followed by differential database backups Let’s say the situation is the same as in the example above, only you create a differential database backup every night. Use the full database backup created on Sunday, and the last differential database backup (created on Tuesday). In this scenario, you will lose only the data inserted and updated after the differential backup created on Tuesday. If you have a full database backup and a chain of transaction log backups This is the SQL Server disaster recovery strategy that provides minimal data loss. With a full chain of transaction logs, you can recover the SQL database to an exact point in time. To provide optimal results, you have to know exactly when the records were deleted, because restoring to a later point will not bring back the records. This method requires restoring the full database backup first. If you have any differential log backup created after the last full database backup, restore the most recent one. Then, restore transaction log backups, one by one, it the order they were created starting with the first created after the restored differential database backup. Now, the table will be in the state before the records were deleted. You have to identify the deleted records, script them and run the script against the original database. Although this method is reliable, it is time-consuming and requires a lot of space on disk. How to easily recover deleted records? The following solution enables you to recover SQL database records even if you have no full or differential database backups and no transaction log backups. 
To understand how ApexSQL Recover works, I’ll explain what happens when table data is deleted. Table data is stored in data pages. When you delete table records, they are not immediately deleted from the data pages, but marked to be overwritten by new records. Such records are not shown as existing anymore, but ApexSQL Recover can read them and create undo script for them. How long will deleted records stay in the MDF file? It depends on many factors, as time passes it’s less likely that the records will not be overwritten. The more transactions occur after the deletion, the more chances the records will be overwritten and permanently lost. Therefore, it’s recommended to create a copy of the database MDF and LDF files immediately (if you cannot take your database offline until the issue is solved) and run ApexSQL Recover on them. Note that a full database backup will not help here, as the records marked for overwriting are not included in the backup. First, I’ll delete some records from the Person.EmailAddress table in the AdventureWorks database.   I can delete these records in SQL Server Management Studio, or execute a script such as DELETE FROM Person.EmailAddress WHERE BusinessEntityID BETWEEN 70 AND 80 Then, I’ll start ApexSQL Recover and select From DELETE operation in the Recovery tab.   In the Select the database to recover step, first select the SQL Server instance. If it’s not shown in the drop-down list, click the Server icon right to the Server drop-down list and browse for the SQL Server instance, or type the instance name manually. Specify the authentication type and select the database in the Database drop-down list.   In the next step, you’re prompted to add additional data sources. As this can be a tricky step, especially for new users, ApexSQL Recover offers help via the Help me decide option.   The Help me decide option guides you through a series of questions about the database transaction log and advises what files to add. If you know that you have no transaction log backups or detached transaction logs, or the online transaction log file has been truncated after the data was deleted, select No additional transaction logs are available. If you know that you have transaction log backups that contain the delete transactions you want to recover, click Add transaction logs. The online transaction log is listed and selected automatically.   Click Add if to add transaction log backups. It would be best if you have a full transaction log chain, as explained above. The next step for this option is to specify the time range.   Selecting a small time range for the time of deletion will create the recovery script just for the accidentally deleted records. A wide time range might script the records deleted on purpose, and you don’t want that. If needed, you can check the script generated and manually remove such records. After that, for all data sources options, the next step is to select the tables. Be careful here, if you deleted some data from other tables on purpose, and don’t want to recover them, don’t select all tables, as ApexSQL Recover will create the INSERT script for them too.   The next step offers two options: to create a recovery script that will insert the deleted records back into the Person.EmailAddress table, or to create a new database, create the Person.EmailAddress table in it, and insert the deleted records. I’ll select the first one.   The recovery process is completed and 11 records are found and scripted, as expected.   
To see the script, click View script. ApexSQL Recover has its own script editor, where you can review, modify, and execute the recovery script. The insert into statements look like: INSERT INTO Person.EmailAddress( BusinessEntityID, EmailAddressID, EmailAddress, rowguid, ModifiedDate) VALUES( 70, 70, N'[email protected]' COLLATE SQL_Latin1_General_CP1_CI_AS, 'd62c5b4e-c91f-403f-b630-7b7e0fda70ce', '20030109 00:00:00.000' ); To execute the script, click Execute in the menu.   If you want to check whether the records are really back, execute SELECT * FROM Person.EmailAddress WHERE BusinessEntityID BETWEEN 70 AND 80 As shown, ApexSQL Recover recovers SQL database data after accidental deletes even without the database backup that contains the deleted data and relevant transaction log backups. ApexSQL Recover reads the deleted data from the database data file, so this method can be used even for databases in the Simple recovery model. Besides recovering SQL database records from a DELETE statement, ApexSQL Recover can help when the records are lost due to a DROP TABLE, or TRUNCATE statement, as well as repair a corrupted MDF file that cannot be attached to as SQL Server instance. You can find more information about how to recover SQL database lost data and repair a SQL Server database on ApexSQL Solution center. There are solutions for various situations when data needs to be recovered. Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: PostADay, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Server, SQL Tips and Tricks, T SQL


  • Make your CHM Help Files show HTML5 and CSS3 content

    - by Rick Strahl
    The HTML Help 1.0 specification aka CHM files, is pretty old. In fact, it's practically ancient as it was introduced in 1997 when Internet Explorer 4 was introduced. Html Help 1.0 is basically a completely HTML based Help system that uses a Help Viewer that internally uses Internet Explorer to render the HTML Help content. Because of its use of the Internet Explorer shell for rendering there were many security issues in the past, which resulted in locking down of the Web Browser control in Windows and also the Help Engine which caused some unfortunate side effects. Even so, CHM continues to be a popular help format because it is very easy to produce content for it, using plain HTML and because it works with many Windows application platforms out of the box. While there have been various attempts to replace CHM help files CHM files still seem to be a popular choice for many applications to display their help systems. The biggest alternative these days is no system based help at all, but links to online documentation. For Windows apps though it's still very common to see CHM help files and there are still a ton of CHM help out there and lots of tools (including our own West Wind Html Help Builder) that produce output for CHM files as well as Web output. Image is Everything and you ain't got it! One problem with the CHM engine is that it's stuck with an ancient Internet Explorer version for rendering. For example if you have help content that uses HTML5 or CSS3 content you might have an HTML Help topic like the following shown here in a full Web Browser instance of Internet Explorer: The page clearly uses some CSS3 features like rounded corners and box shadows that are rendered using plain CSS 3 features. Note that I used Internet Explorer on purpose here to demonstrate that IE9 on Windows 7 can properly render this content using some of the new features of CSS, but the same is true for all other recent versions of the major browsers (FireFox 3.1+, Safari 4.5+, WebKit 9+ etc.). Unfortunately if you take this nice and simple CSS3 content and run it through the HTML Help compiler to produce a CHM file the resulting output on the same machine looks a bit less flashy: All the CSS3 styling is gone and although the page display and functionality still works, but all the extra styling features are gone. This even though I am running this on a Windows 7 machine that has IE9 that should be able to render these CSS features. Bummer. Web Browser Control - perpetually stuck in IE 7 Mode The problem is the Web Browser/Shell Components in Windows. This component is and has been part of Windows for as long as Internet Explorer has been around, but the Web Browser control hasn't kept up with the latest versions of IE. In a nutshell the control is stuck in IE7 rendering mode for engine compatibility reasons by default. However, there is at least one way to fix this explicitly using Registry keys on a per application basis. The key point from that blog article is that you can override the IE rendering engine for a particular executable by setting one (or more) registry flags that tell the Windows Shell which version of the Internet Explorer rendering engine to load. An application that wishes to use a more recent version of Internet Explorer can then register itself during installation for the specific IE version desired and from then on the application will use that version of the Web Browser component. 
If the application is older than the specified version it falls back to the default version (IE 7 rendering). Forcing CHM files to display with IE9 (or later) Rendering Knowing that we can force the IE usage for a given process it's also possible to affect the CHM rendering by setting same keys on the executable that's hosting the CHM file. What that executable file is depends on the type of application as there are a number of ways that can launch the help engine. hh.exeThe standalone Windows CHM Help Viewer that launches when you launch a CHM from Windows Explorer. You can manually add hh.exe to the registry keys. YourApplication.exeIf you're using .NET or any tool that internally uses the hhControl ActiveX control to launch help content your application is your host. You should add your application's exe to the registry during application startup. foxhhelp9.exeIf you're building a FoxPro application that uses the built-in help features, foxhhelp9.exe is used to actually host the help controls. Make sure to add this executable to the registry. What to set You can configure the Internet Explorer version used for an application in the registry by specifying the executable file name and a value that specifies the IE version desired. There are two different sets of keys for 32 bit and 64 bit applications. 32 bit only or 64 bit: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer\MAIN\FeatureControl\FEATURE_BROWSER_EMULATION Value Key: hh.exe 32 bit on 64 bit machine: HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Internet Explorer\MAIN\FeatureControl\FEATURE_BROWSER_EMULATION Value Key: hh.exe Note that it's best to always set both values ideally when you install your application so it works regardless of which platform you run on. The value specified is a DWORD value and the interesting values are decimal 9000 for IE9 rendering mode depending on !DOCTYPE settings or 9999 for IE 9 standards mode always. You can use the same logic for 8000 and 8888 for IE8 and the final value of 7000 for IE7 (one has to wonder what they're going todo for version 10 to perpetuate that pattern). I think 9000 is the value you'd most likely want to use. 9000 means that IE9 will be used for rendering but unless the right doctypes are used (XHTML and HTML5 specifically) IE will still fall back into quirks mode as needed. This should allow existing pages to continue to use the fallback engine while new pages that have the proper HTML doctype set can take advantage of the newest features. Here's an example of how I set the registry keys in my Tarma Installmate registry configuration: Note that I set all three values both under the Software and Wow6432Node keys so that this works regardless of where these EXEs are launched from. Even though all apps are 32 bit apps, the 64 bit (the default one shown selected) key is often used. So, now once I've set the registry key for hh.exe I can now launch my CHM help file from Explorer and see the following CSS3 IE9 rendered display: Summary It sucks that we have to go through all these hoops to get what should be natural behavior for an application to support the latest features available on a system. But it shouldn't be a surprise - the Windows Help team (if there even is such a thing) has not been known for forward looking technologies. It's a pretty big hassle that we have to resort to setting registry keys in order to get the Web Browser control and the internal CHM engine to render itself properly but at least it's possible to make it work after all. 
Using this technique it's possible to ship an application with a help file and allow your CHM help to display with richer CSS markup and correct rendering using the stricter and more consistent XHTML or HTML5 doctypes. If you provide both Web help and in-application help (and why not if you're building from a single source) you now can side step the issue of your customers asking: Why does my help file look so much shittier than the online help… No more!

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in HTML5  Help  Html Help Builder  Internet Explorer  Windows

    Read the article

  • Maintaining shared service in ASP.NET MVC Application

    - by kazimanzurrashid
    Depending on the application sometimes we have to maintain some shared service throughout our application. Let’s say you are developing a multi-blog supported blog engine where both the controller and view must know the currently visiting blog, it’s setting , user information and url generation service. In this post, I will show you how you can handle this kind of case in most convenient way. First, let see the most basic way, we can create our PostController in the following way: public class PostController : Controller { public PostController(dependencies...) { } public ActionResult Index(string blogName, int? page) { BlogInfo blog = blogSerivce.FindByName(blogName); if (blog == null) { return new NotFoundResult(); } IEnumerable<PostInfo> posts = postService.FindPublished(blog.Id, PagingCalculator.StartIndex(page, blog.PostPerPage), blog.PostPerPage); int count = postService.GetPublishedCount(blog.Id); UserInfo user = null; if (HttpContext.User.Identity.IsAuthenticated) { user = userService.FindByName(HttpContext.User.Identity.Name); } return View(new IndexViewModel(urlResolver, user, blog, posts, count, page)); } public ActionResult Archive(string blogName, int? page, ArchiveDate archiveDate) { BlogInfo blog = blogSerivce.FindByName(blogName); if (blog == null) { return new NotFoundResult(); } IEnumerable<PostInfo> posts = postService.FindArchived(blog.Id, archiveDate, PagingCalculator.StartIndex(page, blog.PostPerPage), blog.PostPerPage); int count = postService.GetArchivedCount(blog.Id, archiveDate); UserInfo user = null; if (HttpContext.User.Identity.IsAuthenticated) { user = userService.FindByName(HttpContext.User.Identity.Name); } return View(new ArchiveViewModel(urlResolver, user, blog, posts, count, page, achiveDate)); } public ActionResult Tag(string blogName, string tagSlug, int? page) { BlogInfo blog = blogSerivce.FindByName(blogName); if (blog == null) { return new NotFoundResult(); } TagInfo tag = tagService.FindBySlug(blog.Id, tagSlug); if (tag == null) { return new NotFoundResult(); } IEnumerable<PostInfo> posts = postService.FindPublishedByTag(blog.Id, tag.Id, PagingCalculator.StartIndex(page, blog.PostPerPage), blog.PostPerPage); int count = postService.GetPublishedCountByTag(tag.Id); UserInfo user = null; if (HttpContext.User.Identity.IsAuthenticated) { user = userService.FindByName(HttpContext.User.Identity.Name); } return View(new TagViewModel(urlResolver, user, blog, posts, count, page, tag)); } } As you can see the above code heavily depends upon the current blog and the blog retrieval code is duplicated in all of the action methods, once the blog is retrieved the same blog is passed in the view model. Other than the blog the view also needs the current user and url resolver to render it properly. One way to remove the duplicate blog retrieval code is to create a custom model binder which converts the blog from a blog name and use the blog a parameter in the action methods instead of the string blog name, but it only helps the first half in the above scenario, the action methods still have to pass the blog, user and url resolver etc in the view model. 
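For reference, the custom model binder alternative mentioned in the previous paragraph is not shown in the post; a hedged sketch of what it could look like (the IBlogService dependency and the "blogName" value key are assumptions carried over from the code above) is:

public class BlogInfoModelBinder : IModelBinder
{
    private readonly IBlogService blogService;

    public BlogInfoModelBinder(IBlogService blogService)
    {
        this.blogService = blogService;
    }

    public object BindModel(ControllerContext controllerContext, ModelBindingContext bindingContext)
    {
        // Turn the blogName route/query value into a BlogInfo so actions can take BlogInfo directly.
        ValueProviderResult result = bindingContext.ValueProvider.GetValue("blogName");

        return (result == null) ? null : blogService.FindByName((string)result.ConvertTo(typeof(string)));
    }
}

It would be registered at startup with ModelBinders.Binders.Add(typeof(BlogInfo), new BlogInfoModelBinder(blogService)); but, as noted above, this only removes the duplicated lookup and still leaves the view model plumbing in place, which is why the action filter approach described next is preferable.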
Now lets try to improve the the above code, first lets create a new class which would contain the shared services, lets name it as BlogContext: public class BlogContext { public BlogInfo Blog { get; set; } public UserInfo User { get; set; } public IUrlResolver UrlResolver { get; set; } } Next, we will create an interface, IContextAwareService: public interface IContextAwareService { BlogContext Context { get; set; } } The idea is, whoever needs these shared services needs to implement this interface, in our case both the controller and the view model, now we will create an action filter which will be responsible for populating the context: public class PopulateBlogContextAttribute : FilterAttribute, IActionFilter { private static string blogNameRouteParameter = "blogName"; private readonly IBlogService blogService; private readonly IUserService userService; private readonly BlogContext context; public PopulateBlogContextAttribute(IBlogService blogService, IUserService userService, IUrlResolver urlResolver) { Invariant.IsNotNull(blogService, "blogService"); Invariant.IsNotNull(userService, "userService"); Invariant.IsNotNull(urlResolver, "urlResolver"); this.blogService = blogService; this.userService = userService; context = new BlogContext { UrlResolver = urlResolver }; } public static string BlogNameRouteParameter { [DebuggerStepThrough] get { return blogNameRouteParameter; } [DebuggerStepThrough] set { blogNameRouteParameter = value; } } public void OnActionExecuting(ActionExecutingContext filterContext) { string blogName = (string) filterContext.Controller.ValueProvider.GetValue(BlogNameRouteParameter).ConvertTo(typeof(string), Culture.Current); if (!string.IsNullOrWhiteSpace(blogName)) { context.Blog = blogService.FindByName(blogName); } if (context.Blog == null) { filterContext.Result = new NotFoundResult(); return; } if (filterContext.HttpContext.User.Identity.IsAuthenticated) { context.User = userService.FindByName(filterContext.HttpContext.User.Identity.Name); } IContextAwareService controller = filterContext.Controller as IContextAwareService; if (controller != null) { controller.Context = context; } } public void OnActionExecuted(ActionExecutedContext filterContext) { Invariant.IsNotNull(filterContext, "filterContext"); if ((filterContext.Exception == null) || filterContext.ExceptionHandled) { IContextAwareService model = filterContext.Controller.ViewData.Model as IContextAwareService; if (model != null) { model.Context = context; } } } } As you can see we are populating the context in the OnActionExecuting, which executes just before the controllers action methods executes, so by the time our action methods executes the context is already populated, next we are are assigning the same context in the view model in OnActionExecuted method which executes just after we set the  model and return the view in our action methods. Now, lets change the view models so that it implements this interface: public class IndexViewModel : IContextAwareService { // More Codes } public class ArchiveViewModel : IContextAwareService { // More Codes } public class TagViewModel : IContextAwareService { // More Codes } and the controller: public class PostController : Controller, IContextAwareService { public PostController(dependencies...) { } public BlogContext Context { get; set; } public ActionResult Index(int? 
page) { IEnumerable<PostInfo> posts = postService.FindPublished(Context.Blog.Id, PagingCalculator.StartIndex(page, Context.Blog.PostPerPage), Context.Blog.PostPerPage); int count = postService.GetPublishedCount(Context.Blog.Id); return View(new IndexViewModel(posts, count, page)); } public ActionResult Archive(int? page, ArchiveDate archiveDate) { IEnumerable<PostInfo> posts = postService.FindArchived(Context.Blog.Id, archiveDate, PagingCalculator.StartIndex(page, Context.Blog.PostPerPage), Context.Blog.PostPerPage); int count = postService.GetArchivedCount(Context.Blog.Id, archiveDate); return View(new ArchiveViewModel(posts, count, page, achiveDate)); } public ActionResult Tag(string blogName, string tagSlug, int? page) { TagInfo tag = tagService.FindBySlug(Context.Blog.Id, tagSlug); if (tag == null) { return new NotFoundResult(); } IEnumerable<PostInfo> posts = postService.FindPublishedByTag(Context.Blog.Id, tag.Id, PagingCalculator.StartIndex(page, Context.Blog.PostPerPage), Context.Blog.PostPerPage); int count = postService.GetPublishedCountByTag(tag.Id); return View(new TagViewModel(posts, count, page, tag)); } } Now, the last thing where we have to glue everything, I will be using the AspNetMvcExtensibility to register the action filter (as there is no better way to inject the dependencies in action filters). public class RegisterFilters : RegisterFiltersBase { private static readonly Type controllerType = typeof(Controller); private static readonly Type contextAwareType = typeof(IContextAwareService); protected override void Register(IFilterRegistry registry) { TypeCatalog controllers = new TypeCatalogBuilder() .Add(GetType().Assembly) .Include(type => controllerType.IsAssignableFrom(type) && contextAwareType.IsAssignableFrom(type)); registry.Register<PopulateBlogContextAttribute>(controllers); } } Thoughts and Comments?

    Read the article

  • Cloud MBaaS : The Next Big Thing in Enterprise Mobility

    - by shiju
In this blog post, I will take a look at Cloud Mobile Backend as a Service (MBaaS) and how we can leverage a cloud-based Mobile Backend as a Service for building enterprise mobile apps. Today, mobile apps are incredibly significant in both the consumer and enterprise space, and the demand for mobile apps in day-to-day business keeps increasing. An enterprise can’t survive in business without a proper mobility strategy. A better mobility strategy and faster delivery of your mobile apps will give your business and IT strategy extra mileage. So organizations and mobile developers are looking at different strategies for meeting this demand and adopting different development approaches for their mobile apps. Some developers are adopting hybrid mobile app development platforms to deliver their products on multiple platforms with fast time-to-market. Others are adopting a Mobile Enterprise Application Platform (MEAP) such as Kony for their enterprise mobile apps, for fast time-to-market and better business integration.

The Challenges of Enterprise Mobility

The real challenge of enterprise mobile apps is not creating the front-end environment or developing the front-end for multiple platforms. The most important part of enterprise mobile apps is exposing your enterprise data to mobile devices; the real pain is that your business data might be residing in a lot of different systems, including legacy systems, ERP systems etc., and these systems will be deployed with a lot of security restrictions. Exposing your data from on-premises servers is not an easy thing for most business organizations. Many organizations spend too much time on their front-end development strategy, but they really lack a strategy for the back-end, for exposing the business data to mobile apps. So building a REST services layer and mobile back-end services on top of legacy systems and existing middleware systems is the key part of most enterprise mobile apps, where multiple mobile platforms can easily consume these REST services and other mobile back-end services. For some mobile apps we can’t predict the user base, especially for products where the customer base can grow at any time. And for today’s mobile apps faster time-to-market is critical, so spending too much time on a mobile app’s scalability will not be worth it. The real power of the Cloud is agility and on-demand scalability, where we can scale our applications up and down very easily. It would be great if we could bring the power of the Cloud to mobile apps. So using the Cloud for mobile apps is a natural fit: we can use the Cloud as the storage for mobile apps and as the hosting mechanism for mobile back-end services, and enjoy the full power of the Cloud with a greater level of on-demand scalability and operational agility. So a cloud-based Mobile Backend as a Service is a great choice for building enterprise mobile apps, where enterprises can enjoy massive scalability for their mobile apps, provided by public cloud vendors such as Microsoft Windows Azure.

Mobile Backend as a Service (MBaaS)

We have discussed the key challenges of enterprise mobile apps and how we can leverage the Cloud for hosting mobile backend services. MBaaS is a set of cloud-based, server-side mobile services for multiple mobile platforms and the HTML5 platform, which can be used as a backend for your mobile apps with the scalability power of the Cloud. 
The list below captures the key features of a typical MBaaS platform:

Cloud-based storage for your application data.
Automatic REST API services on the application data, for CRUD operations.
Native push notification services with massive scalability.
User management services for authenticating users.
User authentication via social accounts such as Facebook, Google, Microsoft, and Twitter.
Scheduler services for periodically sending data to mobile devices.
Native SDKs for multiple mobile platforms such as Windows Phone and Windows Store, Android, Apple iOS, and HTML5, for easily accessing the mobile services from mobile apps, with better security.

Typically, an MBaaS platform will provide native SDKs for multiple mobile platforms so that we can easily consume the server-side mobile services. MBaaS-based REST APIs can be used for integrating with enterprise backend systems. We can use the same mobile services for multiple platforms so that we can reuse the application logic across multiple mobile platforms. Public cloud vendors are building mobile services on top of their PaaS offerings. Windows Azure Mobile Services is a great MBaaS offering that leverages the Windows Azure cloud platform’s PaaS capabilities. The hybrid mobile development platform Titanium provides its own MBaaS services. LoopBack is a new MBaaS service provided by the Node.js consulting firm StrongLoop, which can be hosted on multiple cloud platforms and also on on-premises servers.

The Challenges of MBaaS Solutions

If you are building your mobile apps against new data storage, it will be very easy, since there are no integration challenges you have to face. But in most use cases you have to extract application data that is stored on on-premises servers, which might sit behind VPNs and firewalls. So exposing this data to your MBaaS solution with proper security would be a big challenge. The capability of your MBaaS vendor is very important, as you have to interact with your legacy systems for many enterprise mobile apps. So you should be very careful about choosing an MBaaS vendor. At the same time, you should have a proper strategy for mobilizing application data stored in on-premises legacy systems, where your solution architecture and strategy are more important than platforms and tools.

Windows Azure Mobile Services

Windows Azure Mobile Services is an MBaaS offering from the Windows Azure cloud platform. IMHO, Microsoft Windows Azure is the best PaaS platform in the Cloud space. Windows Azure Mobile Services extends the PaaS capabilities of Windows Azure to mobile devices and can be used as a cloud backend for your mobile apps, providing global availability and reach. Windows Azure Mobile Services provides storage services, user management with social network integration, push notification services and scheduler services, and provides native SDKs for all major mobile platforms and HTML5. In Windows Azure Mobile Services you can write server-side scripts in Node.js, where you can enjoy the full power of Node.js including the use of NPM modules for your server-side scripts. In the previous section we discussed some challenges of MBaaS solutions. You can leverage the Windows Azure cloud platform for solving many challenges regarding enterprise mobility, and the entire Windows Azure platform can play a key role as the backend for your mobile apps. 
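To make the "automatic REST API" feature concrete, here is a small hedged C# sketch of a client calling such an endpoint; the service URL, the tables/orders route and the JSON payload are invented for illustration and are not tied to Windows Azure Mobile Services or any specific vendor:

using System;
using System.Globalization;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

public class MobileBackendClient
{
    private static readonly HttpClient http = new HttpClient
    {
        BaseAddress = new Uri("https://your-mbaas-service.example.com/") // hypothetical endpoint
    };

    // Inserts a record through the backend's auto-generated REST API (POST = create).
    public static async Task CreateOrderAsync(string customer, decimal amount)
    {
        string json = "{ \"customer\": \"" + customer + "\", \"amount\": " +
                      amount.ToString(CultureInfo.InvariantCulture) + " }";

        var content = new StringContent(json, Encoding.UTF8, "application/json");
        HttpResponseMessage response = await http.PostAsync("tables/orders", content);
        response.EnsureSuccessStatusCode();
    }
}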
With Windows Azure you can easily connect to your on-premises systems, which is a key thing for mobile backend solutions. Another key point is that Windows Azure provides better integration with services like Active Directory, which makes Windows Azure the de facto platform for enterprise mobility for enterprises who have been leveraging the Microsoft ecosystem for their application and IT infrastructure. Windows Azure Mobile Services is heading toward its next evolution, where you can expect some exciting features in the near future. One area where Windows Azure Mobile Services definitely needs improvement is the default storage mechanism, which currently depends on SQL Server. IMHO, developers should be able to choose among multiple default storage options when creating a new mobile service instance. Let’s say there should be different storage providers, such as a SQL Server storage provider and a Table storage provider, and developers should be able to pick their choice of storage provider when creating a new mobile services project. I have used Windows Azure and Windows Azure Mobile Services as the backend for production mobile apps, where they performed very well.

MBaaS Over MEAP

Recently, many larger enterprises have adopted a Mobile Enterprise Application Platform (MEAP) for their mobile apps. I haven’t worked on any production MEAP solution, but I hear that developers are really struggling with MEAP in different ways. The learning curve for a proprietary MEAP platform is very high. I am completely against using a large proprietary ecosystem for mobile apps. For enterprise mobile apps, I highly recommend using native iOS/Android/Windows Phone or HTML5 for the front-end with a cloud-hosted MBaaS solution as the middleware. An MBaaS service can be consumed from multiple mobile apps, with REST APIs used to integrate with enterprise backend systems. Enterprise mobility should start with exposing REST APIs on the enterprise backend systems, and these REST APIs can be hosted in the Cloud, where we can enjoy the power of the Cloud for our services. If you have REST APIs for your enterprise data, then you can easily build mobile frontends for multiple platforms. You can follow me on Twitter @shijucv

    Read the article

  • OFM 11g: OAM SSO for Forms and ADF Faces

    - by olaf.heimburger
    In my blog entry OFM 11g: Implementing OAM SSO with Forms we set the foundation for providing a complete Single Sign-On solution based on Oracle Access Manager (OAM). This foundation should now be used to combine Forms 11g and ADF Faces 11g applications with a transparent login. The Beginning Before we start, lets re-consider the requirements to achieve the ultimate goal. These are:- Access to the Forms 11g Application must be authenticated by OAM (protected). Access to the ADF Faces 11g Application must be authenticated by OAM (protected). Switching from one application to the other should not result in a re-authentication (aka single sign-on). User identity should be availble to the application without any extra work in the application code. All these are the common requirements for a single sign-on solution. The challenge here is that Forms relies on Oracle AS SSO (OSSO or "the old SSO") while ADF Faces is quite open and can be protected by Oracle AS SSO and Oracle Access Manager SSO (OAM SSO or "the modern SSO"). Both application types can use their own login mechanism. The Forms 11g Application To demonstrate the SSO functionality, we use the standard Forms test (/forms/frmservlet?form=test.fmx). Although this shows nothing specific in the Forms application, it is good enough to demonstrate that it is protected. The ADF Faces 11g Application With ADF 11g you can develop quite a number of useful Faces based applications. Among many features, it comes with the ADF Security feature that provides you with functionality to protect your pages, regions, and even TaskFlows from un-authenticated usage in a declarative way.To demonstrate that functionality a sample application with different access levels plus a login dialog is used. This application comes with a publc page that has protected content (a button). Once you are authenticated for the application, the protected content and some personalisation (the users name) is shown. Protecting Forms 11g As already explained in the OFM 11g: Implementing OAM SSO with Forms, the easiest way to protect a Forms application is to configure it as a OSSO partner application, setup mod_osso, test it, migrate OSSO to OAM SSO with the Upgrade Agent, reconfigure mod_osso, and you are done.Sort of. By default the OAM is configured to run in co-exist mode. This means that a user has to re-authenticate to the Forms application when logged into an OAM SSO application before. To avoid this, you must disable the co-exist mode, for example by using WLST and issue the disableCoexistMode on the OAM server. Protecting ADF Faces 11g To protect an ADF Faces 11g application we have to consider two scenarios: Use a HTTPD server in front of WLS Use WLS without a HTTPD server Both scenarios have their pro's and cons' and we won't get into details and just describe how to configure both. Scenario 1: HTTPD Server with WLS In this scenario we have to setup the environment in some steps:- Configure a WebGate at OAMThis configuration can be done through the OAM console or by a script. No matter which way you choose, the WebGate configuration files will be created for you. Install the OAM WebGate into an HTTPD serverThe type of webgate you need to install depends on you HTTPD server. With Oracle HTTP Server 11g you can use the latest OAM 11g WebGate. With other HTTPD servers you must resort to OAM 10g WebGates. A OAM 11g WebGate can use the pre-created configuration files supplied during the WebGate configuration at OAM. 
An OAM 10g WebGate asks for the specific configuration and verifies it during installation. Configure the WLS plugin to forward the requests to WLSAgain, depending on your HTTPD Server you have different plugins to forward requests to WLS. With OHS 11g you can use the pre-installed mod_wl_ohs plugin. Its configuration is quite simple and straightforward. Configure an OAM SSPI Provider as a IdentityAsserter in WLS to retrieve the user identifierThis configuration is quite important as it retrieves the user identifier for the next step. If you have a SOA Suite installation within your OFM_HOME, the necessary software is already installed and you only need to setup your Security Realm within WLS.You can do this by pointing your browser to the WLS Console, log in as administrator, select the Security Realm (usually myrealm), and select Providers. We add the OAMIdentityAsserter as the first SSPI Provider. It is important that the Control Flag is set to SUFFICIENT. Every other configuration can be left as is, no changes are necessary here. Configure an OAM Identity Provider to get the real user identityIn OFM 11g: Implementing OAM SSO with Forms we have configured an OID as Identity Store. To get the user identity we need to configure the same OID as an SSPI Provider for WLS. This will retrieve the real user information from OID and creates the JAAS Subject and Principals to be used by any application within WLS.Again, you can do this by pointing your browser to the WLS Console, log in as administrator, select the Security Realm (usually myrealm), and select Providers. Now add the OIDAuthenticator as the second SSPI Provider. It is important that the Control Flag is set to OPTIONAL. After we saved this setup, we need to configure this provider by setting the Provider Specific details to access OID. Scenario 2: WLS only This scenario is a bit easier but requires more work in the WLS setup:- Configure a WebGate at OAMThis configuration can be done through the OAM console or by a script. No matter which way you choose, the WebGate configuration files will be created for you. Configure the OAM SSPI Provider as IdentityAuthenticator to authenticate and set the user identifierWhen using the OAM SSPI Provider as OAMAuthenticator we create it with the Control Flag as SUFFICIENT. Afte saving it, the Provider Specific settings must be configured to allow the OAM SSPI Provider to connect to the OAM Server. Configure an OAM Identity Provider to get the real user identity providerAgain, you can do this by pointing your browser to the WLS Console, log in as administrator, select the Security Realm (usually myrealm), and select Providers. Now add the OIDAuthenticator as the second SSPI Provider. It is important that the Control Flag is set to OPTIONAL. After we saved this setup, we need to configure this provider by setting the Provider Specific details to access OID. Configure ADF 11g Application for OAM Actually, there are no changes to be made within the ADF application. We only need to add the value CLIENT_CERT to the <auth-mode> tag in the <login-config> tag in the web.xml file. Testing To test the configuration, simply point your browser to one of both appliction URLs. OAM should kick in and redirect you to the OAM Login page. After you have entered the correct credentials, access to the URLs is granted and you will see the application. Enjoy!

    Read the article

  • 202 blog articles

    - by mprove
    All my blog articles under blogs.oracle.com since August 2005: 202 blog articles Apr 2012 blogs.oracle.com design patch Mar 2012 Interaction 12 - Critique Mar 2012 Typing. Clicking. Dancing. Feb 2012 Desktop Mobility in Hospitals with Oracle VDI /video Feb 2012 Interaction 12 in Dublin - Highlights of Day 3 Feb 2012 Interaction 12 in Dublin - Highlights of Day 2 Feb 2012 Interaction 12 in Dublin - Highlights of Day 1 Feb 2012 Shit Interaction Designers Say Feb 2012 Tips'n'Tricks for WebCenter #3: How to display custom page titles in Spaces Jan 2012 Tips'n'Tricks for WebCenter #2: How to create an Admin menu in Spaces and save a lot of time Jan 2012 Tips'n'Tricks for WebCenter #1: How to apply custom resources in Spaces Jan 2012 Merry XMas and a Happy 2012! Dec 2011 One Year Oracle SocialChat - The Movie Nov 2011 Frank Ludolph's Last Working Day Nov 2011 Hans Rosling at TED Oct 2011 200 Countries x 200 Years Oct 2011 Blog Aggregation for Desktop Virtualization Oct 2011 Oracle VDI at OOW 2011 Sep 2011 Design for Conversations & Conversations for Design Sep 2011 All Oracle UX Blogs Aug 2011 Farewell Loriot Aug 2011 Oracle VDI 3.3 Overview Aug 2011 Sutherland's Closing Remarks at HyperKult Aug 2011 Surface and Subface Aug 2011 Back to Childhood in UI Design Jul 2011 The Art of Engineering and The Engineering of Art Jul 2011 Oracle VDI Seminar - June-30 Jun 2011 SGD White Paper May 2011 TEDxHamburg Live Feed May 2011 Oracle VDI in 3 Minutes May 2011 Space Ship Earth 2011 May 2011 blog moving times Apr 2011 Frozen tag cloud Apr 2011 Oracle: Hardware Software Complete in 1953 Apr 2011 Interaction Design with Wireframes Apr 2011 A guide to closing down a project Feb 2011 Oracle VDI 3.2.2 Jan 2011 free VDI charts Jan 2011 Sun Founders Panel 2006 Dec 2010 Sutherland on Leadership Dec 2010 SocialChat: Efficiency of E20 Dec 2010 ALWAYS ON Desktop Virtualization Nov 2010 12,000 Desktops at JavaOne Nov 2010 SocialChat on Sharing Best Practices Oct 2010 Globe of Visitors Oct 2010 SocialChat about the Next Big Thing Oct 2010 Oracle VDI UX Story - Wireframes Oct 2010 What's a PC anyway? Oct 2010 SocialChat on Getting Things Done Oct 2010 SocialChat on Infoglut Oct 2010 IT Twenty Twenty Oct 2010 Desktop Virtualization Webcasts from OOW Oct 2010 Oracle VDI 3.2 Overview Sep 2010 Blog Usability Top 7 Sep 2010 100 and counting Aug 2010 Oracle'izing the VDI Blogs Aug 2010 SocialChat on Apple Aug 2010 SocialChat on Video Conferencing Aug 2010 Oracle VDI 3.2 - Features and Screenshots Aug 2010 SocialChat: Don't stop making waves Aug 2010 SocialChat: Giving Back to the Community Aug 2010 SocialChat on Learning in Meetings Aug 2010 iPAD's Natural User Interface Jul 2010 Last day for Sun Microsystems GmbH Jun 2010 SirValUse Celebration Snippets Jun 2010 10 years SirValUse - Happy Birthday! Jun 2010 Wim on Virtualization May 2010 New Home for Oracle VDI Apr 2010 Renaissance Slide Sorter Comments Apr 2010 Unboxing Sun Ray 3 Plus Apr 2010 Desktop Virtualisierung mit Sun VDI 3.1 Apr 2010 Blog Relaunch Mar 2010 Social Messaging Slides from CeBIT Mar 2010 Social Messaging Talk at CeBIT Feb 2010 Welcome Oracle Jan 2010 My last presentation at Sun Jan 2010 Ivan Sutherland on Leadership Jan 2010 Learning French with Sun VDI Jan 2010 Learning Danish with Sun Ray Jan 2010 VDI workshop in Nieuwegein Jan 2010 Happy New Year 2010 Jan 2010 On Creating Slides Dec 2009 Best VDI Ever Nov 2009 How to store the Big Bang Nov 2009 Social Enterprise Tools. Beipiel Sun. 
Nov 2009 Nov-19 Nov 2009 PDF and ODF links on your blog Nov 2009 Q&A on VDI and MySQL Cluster Nov 2009 Zürich next week: Swiss Intranet Summit 09 Nov 2009 Designing for a Sustainable World - World Usabiltiy Day, Nov-12 Nov 2009 How to export a desktop from VDI 3 Nov 2009 Virtualisation Roadshow in the UK Nov 2009 Project Wonderland at EDUCAUSE 09 Nov 2009 VDI Roadshow in Dublin, Nov-26, 2009 Nov 2009 Sun VDI at EDUCAUSE 09 Nov 2009 Sun VDI 3.1 Architecture and New Features Oct 2009 VDI 3.1 is Early-Access Sep 2009 Virtualization for MySQL on VMware Sep 2009 Silpion & 13. Stock Sommerparty Sep 2009 Sun Ray and VMware View 3.1.1 2009-08-31 New Set of Sun Ray Status Icons 2009-08-25 Virtualizing the VDI Core? 2009-08-23 World Usability Day Hamburg 2009 - CfP 2009-07-16 Rising Sun 2009-07-15 featuring twittermeme 2009-06-19 ISC09 Student Party on June-20 /Hamburg 2009-06-18 Before and behind the curtain of JavaOne 2009-06-09 20k desktops at JavaOne 2009-06-01 sweet microblogging 2009-05-25 VDI 3 - Why you need 3 VDI hosts and what you can do about that? 2009-05-21 IA Konferenz 2009 2009-05-20 Sun VDI 3 UX Story - Power of the Web 2009-05-06 Planet of Sun and Oracle User Experience Design 2009-04-22 Sun VDI 3 UX Story - User Research 2009-04-08 Sun VDI 3 UX Story - Concept Workshops 2009-04-06 Localized documentation for Sun Ray Connector for VMware View Manager 1.1 2009-04-03 Sun VDI 3 Press Release 2009-03-25 Sun VDI 3 launches today! 2009-03-25 Sun Ray Connector for VMware View Manager 1.1 Update 2009-03-11 desktop virtualization wiki relaunch 2009-03-06 VDI 3 at CeBIT hall 6, booth E36 2009-03-02 Keyboard layout problems with Sun Ray Connector for VMware VDM 2009-02-23 wikis.sun.com tips & tricks 2009-02-23 Sun VDI 3 is in Early Access 2009-02-09 VirtualCenter unable to decrypt passwords 2009-02-02 Sun & VMware Desktop Training 2009-01-30 VDI at next09? 2009-01-16 Sun VDI: How to use virtual machines with multiple network adapters 2009-01-07 Sun Ray and VMware View 2009-01-07 Hamburg World Usability Day 2008 - Webcasts 2009-01-06 Sun Ray Connector for VMware VDM slides 2008-12-15 mother of all demos 2008-12-08 Build your own Thumper 2008-12-03 Troubleshooting Sun Ray Connector for VMware VDM 2008-12-02 My Roller Tag Cloud 2008-11-28 Sun Ray Connector: SSL connection to VDM 2008-11-25 Setting up SSL and Sun Ray Connector for VMware VDM 2008-11-13 Inspiration for Today and Tomorrow 2008-10-23 Sun Ray Connector for VMware VDM released 2008-10-14 From Sketchpad to ILoveSketch 2008-10-09 Desktop Virtualization on Xing 2008-10-06 User Experience Forum on Xing 2008-10-06 Sun Ray Connector for VMware VDM certified 2008-09-17 Virtual Clouds over Las Vegas 2008-09-14 Bill Verplank sketches metaphors 2008-09-04 End of Early Access - Sun Ray Connector for VMware 2008-08-27 Early Access: Sun Ray Connector for VMware Virtual Desktop Manager 2008-08-12 Sun Virtual Desktop Connector - Insides on Recycling Part 2 2008-07-20 Sun Virtual Desktop Connector - Insides on Recycling Part 3 2008-07-20 Sun Virtual Desktop Connector - Insides on Recycling 2008-07-20 lost in wiki space 2008-07-07 Evolution of the Desktop 2008-06-17 Virtual Desktop Webcast 2008-06-16 Woodstock 2008-06-16 What's a Desktop PC anyway? 2008-06-09 Virtual-T-Box 2008-06-05 Virtualization Glossary 2008-05-06 Five User Experience Principles 2008-04-25 Virtualization News Feed 2008-04-21 Acetylcholinesterase - Second Season 2008-04-18 Acetylcholinesterase - End of Signal 2007-12-31 Produkt-Management ist... 
2007-10-22 Usability Verbände, Verteiler und Netzwerke. 2007-10-02 The Meaning is the Message 2007-09-28 Visualization Methods 2007-09-10 Inhouse und Open Source Projekte – Usability verankern und Synergien nutzen 2007-09-03 Der Schwabe Darth Vader entdeckt das Virale Marketing 2007-08-29 Dick Hardt 3.0 on Identity 2.0 2007-08-27 quality of written text depends on the tool 2007-07-27 podcasts for reboot9 2007-06-04 It is the user's itch that need to be scratched 2007-05-25 A duel at reboot9 2007-05-14 Taxonomien und Folksonomien - Tagging als neues HCI-Element 2007-05-10 Dueling Interaction Models of Personal-Computing and Web-Computing 2007-03-01 22.März: Weizenbaum. Rebel at Work. /Filmpremiere Hamburg 2007-02-25 Bruce Sterling at UbiComp 2006 /webcast 2006-11-12 FSOSS 2006 /webcasts 2006-11-10 Highway 101 2006-11-09 User Experience Roundtable Hamburg: EuroGEL 2006 2006-11-08 Douglas Adams' Hyperland (BBC 1990) 2006-10-08 Taxonomien und Folksonomien – Tagging als neues HCI-Element 2006-09-13 Usability im Unternehmen 2006-09-13 Doug does HyperScope 2006-08-26 TED Talks and TechTalks 2006-08-21 Kai Krause über seine Freundschaft zu Douglas Adams 2006-07-20 Rebel At Work: Film Portrait on Weizenbaum 2006-07-04 Gabriele Fischer, mp3 2006-06-07 Dick Hardt at ETech 06 2006-06-05 Weinberger: From Control to Conversation 2006-04-16 Eye Tracking at User Experience Roundtable Hamburg 2006-04-14 dropping knowledge 2006-04-09 GEL 2005 2006-03-13 slide photos of reboot7 2006-03-04 Dick Hardt on Identity 2.0 2006-02-28 User Experience Newsletter #13: Versioning 2006-02-03 Ester Dyson on Choice and Happyness 2006-02-02 Requirements-Engineering im Spannungsfeld von Individual- und Produktsoftware 2006-01-15 User Experience Newsletter #12: Intuition Quiz 2005-11-30 User Experience und Requirements-Engineering für Software-Projekte 2005-10-31 Ivan Sutherland on "Research and Fun" 2005-10-18 Ars Electronica / Mensch und Computer 2005 2005-09-14 60 Jahre nach Memex: Über die Unvereinbarkeit von Desktop- und Web-Paradigma 2005-08-31 reboot 7 2005-06-30

    Read the article

  • Run Your Tests With Any NUnit Version

    - by Alois Kraus
    I always thought that the NUnit test runners and the test assemblies need to reference the same NUnit.Framework version. I wanted to be able to run my test assemblies with the newest GUI runner (currently 2.5.3). Ok so all I need to do is to reference both NUnit versions the newest one and the official for the current project. There is a nice article form Kent Bogart online how to reference the same assembly multiple times with different versions. The magic works by referencing one NUnit assembly with an alias which does prefix all types inside it. Then I could decorate my tests with the TestFixture and Test attribute from both NUnit versions and everything worked fine except that this was ugly. After playing a little bit around to make it simpler I found that I did not need to reference both NUnit.Framework assemblies. The test runners do not require the TestFixture and Test attribute in their specific version. That is really neat since the test runners are instructed by attributes what to do in a declarative way there is really no need to tie the runners to a specific version. At its core NUnit has this little method hidden to find matching TestFixtures and Tests   public bool CanBuildFrom(Type type) {     if (!(!type.IsAbstract || type.IsSealed))     {         return false;     }     return (((Reflect.HasAttribute(type,           "NUnit.Framework.TestFixtureAttribute", true) ||               Reflect.HasMethodWithAttribute(type, "NUnit.Framework.TestAttribute"       , true)) ||               Reflect.HasMethodWithAttribute(type, "NUnit.Framework.TestCaseAttribute"   , true)) ||               Reflect.HasMethodWithAttribute(type, "NUnit.Framework.TheoryAttribute"     , true)); } That is versioning and backwards compatibility at its best. I tell NUnit what to do by decorating my tests classes with NUnit Attributes and the runner executes my intent without the need to bind me to a specific version. The contract between NUnit versions is actually a bit more complex (think of AssertExceptions) but this is also handled nicely by using not the concrete type but simply to check for the catched exception type by string. What can we learn from this? Versioning can be easy if the contract is small and the users of your library use it in a declarative way (Attributes). Everything beyond it will force you to reference several versions of the same assembly with all its consequences. Type equality is lost between versions so none of your casts will work. That means that you cannot simply use IBigInterface in two versions. You will need a wrapper to call the correct versioned one. To get out of this mess you can use one (and only one) version agnostic driver to encapsulate your business logic from the concrete versions. This is of course more work but as NUnit shows it can be easy. Simplicity is therefore not a nice thing to have but also requirement number one if you intend to make things more complex in version two and want to support any version (older and newer). Any interaction model above easy will not be maintainable. There are different approached to versioning. Below are my own personal observations how versioning works within the  .NET Framwork and NUnit.   Versioning Models 1. Bug Fixing and New Isolated Features When you only need to fix bugs there is no need to break anything. This is especially true when you have a big API surface. 
Microsoft did this with the .NET Framework 3.0 which did leave the CLR as is but delivered new assemblies for the features WPF, WCF and Windows Workflow Foundations. Their basic model was that the .NET 2.0 assemblies were declared as red assemblies which must not change (well mostly but each change was carefully reviewed to minimize the risk of breaking changes as much as possible) whereas the new green assemblies of .NET 3,3.5 did not have such obligations since they did implement new unrelated features which did not have any impact on the red assemblies. This is versioning strategy aimed at maximum compatibility and the delivery of new unrelated features. If you have a big API surface you should strive hard to do the same or you will break your customers code with every release. 2. New Breaking Features There are times when really new things need to be added to an existing product. The .NET Framework 4.0 did change the CLR in many ways which caused subtle different behavior although the API´s remained largely unchanged. Sometimes it is possible to simply recompile an application to make it work (e.g. changed method signature void Func() –> bool Func()) but behavioral changes need much more thought and cannot be automated. To minimize the impact .NET 2.0,3.0,3.5 applications will not automatically use the .NET 4.0 runtime when installed but they will keep using the “old” one. What is interesting is that a side by side execution model of both CLR versions (2 and 4) within one process is possible. Key to success was total isolation. You will have 2 GCs, 2 JIT compilers, 2 finalizer threads within one process. The two .NET runtimes cannot talk  (except via the usual IPC mechanisms) to each other. Both runtimes share nothing and run independently within the same process. This enables Explorer plugins written for the CLR 2.0 to work even when a CLR 4 plugin is already running inside the Explorer process. The price for isolation is an increased memory footprint because everything is loaded and running two times.   3. New Non Breaking Features It really depends where you break things. NUnit has evolved and many different Assert, Expect… methods have been added. These changes are all localized in the NUnit.Framework assembly which can be easily extended. As long as the test execution contract (TestFixture, Test, AssertException) remains stable it is possible to write test executors which can run tests written for NUnit 10 because the execution contract has not changed. It is possible to write software which executes other components in a version independent way but this is only feasible if the interaction model is relatively simple.   Versioning software is hard and it looks like it will remain hard since you suddenly work in a severely constrained environment when you try to innovate and to keep everything backwards compatible at the same time. These are contradicting goals and do not play well together. The easiest way out of this is to carefully watch what your customers are doing with your software. Minimizing the impact is much easier when you do not need to guess how many people will be broken when this or that is removed.
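As a side note, the "reference the same assembly twice with an alias" trick mentioned at the top of the post relies on C#'s extern alias feature. A hedged sketch, assuming the second nunit.framework reference has been given the alias NUnitNew in the project's reference properties, would look like this (and, as the post concludes, the double decoration turns out to be unnecessary):

extern alias NUnitNew; // the newer nunit.framework.dll reference carries this alias

[NUnit.Framework.TestFixture]               // attribute type from the default reference
[NUnitNew::NUnit.Framework.TestFixture]     // same attribute name, but from the aliased assembly
public class CalculatorTests
{
    [NUnit.Framework.Test]
    [NUnitNew::NUnit.Framework.Test]
    public void Adds_Two_Numbers()
    {
        NUnit.Framework.Assert.AreEqual(4, 2 + 2);
    }
}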

    Read the article

  • Creating A SharePoint Parent/Child List Relationship&ndash; SharePoint 2010 Edition

    - by Mark Rackley
    Hey blog readers… It has been almost 2 years since I posted my most read blog on creating a Parent/Child list relationship in SharePoint 2007: Creating a SharePoint List Parent / Child Relationship - Out of the Box And then a year ago I improved on my method and redid the blog post… still for SharePoint 2007: Creating a SharePoint List Parent/Child Relationship – VIDEO REMIX Since then many of you have been asking me how to get this to work in SharePoint 2010, and frankly I have just not had time to look into it. I wish I could have jumped into this sooner, but have just recently began to look at it. Well.. after all this time I have actually come up with two solutions that work, neither of them are as clean as I’d like them to be, but I wanted to get something in your hands that you can start using today. Hopefully in the coming weeks and months I’ll be able to improve upon this further and give you guys some better options. For the most part, the process is identical to the 2007 process, but you have probably found out that the list view web parts in 2010 behave differently, and getting the Parent ID to your new child form can be a pain in the rear (at least that’s what I’ve discovered). Anyway, like I said, I have found a couple of solutions that work. If you know of a better one, please let us know as it bugs me that this not as eloquent as my 2007 implementation. Getting on the same page First thing I’d recommend is recreating this blog: Creating a SharePoint List Parent/Child Relationship – VIDEO REMIX in SharePoint 2010… There are some vague differences, but it’s basically the same…  Here’s a quick video of me doing this in SP 2010: Creating Lists necessary for this blog post Now that you have the lists created, lets set up the New Time form to use a QueryString variable to populate the Parent ID field: Creating parameters in Child’s new item form to set parent ID Did I talk fast enough through both of those videos? Hopefully by now that stuff is old hat to you, but I wanted to make sure everyone could get on the same page.  Okay… let’s get started. Solution 1 – XSLTListView with Javascript This solution is the more elegant of the two, however it does require the use of a little javascript.  The other solution does not use javascript, but it also doesn’t use the pretty new SP 2010 pop-ups.  I’ll let you decide which you like better. The basic steps of this solution are: Inserted a Related Item View Insert a ContentEditorWebPart Insert script in ContentEditorWebPart that pulls the ID from the Query string and calls the method to insert a new item on the child entry form Hide the toolbar from data view to remove “add new item” link. Again, you don’t HAVE to use a CEWP, you could just put the javascript directly in the page using SPD.  
Anyway, here is how I did it: Using Related Item View / JavaScript Here’s the JavaScript I used in my Content Editor Web Part: <script type="text/javascript"> function NewTime() { // Get the Query String values and split them out into the vals array var vals = new Object(); var qs = location.search.substring(1, location.search.length); var args = qs.split("&"); for (var i=0; i < args.length; i++) { var nameVal = args[i].split("="); var temp = unescape(nameVal[1]).split('+'); nameVal[1] = temp.join(' '); vals[nameVal[0]] = nameVal[1]; } var issueID = vals["ID"]; //use this to bring up the pretty pop up NewItem2(event,"http://sp2010dev:1234/Lists/Time/NewForm.aspx?IssueID=" + issueID); //use this to open a new window //window.location="http://sp2010dev:1234/Lists/Time/NewForm.aspx?IssueID=" + issueID; } </script> Solution 2 – DataFormWebPart and exact same 2007 Process This solution is a little more of a hack, but it also MUCH more close to the process we did in SP 2007. So, if you don’t mind not having the pretty pop-up and prefer the comforts of what you are used to, you can give this one a try.  The basics steps are: Insert a DataFormWebPart instead of the List Data View Create a Parameter on DataFormWebPart to store “ID” Query String Variable Filter DataFormWebPart using Parameter Insert a link at bottom of DataForm Web part that points to the Child’s new item form and passes in the Parent Id using the Parameter. See.. like I told you, exact same process as in 2007 (except using the DataFormWeb Part). The DataFormWebPart also requires a lot more work to make it look “pretty” but it’s just table rows and cells, and can be configured pretty painlessly.  Here is that video: Using DataForm Web Part One quick update… if you change the link in this solution from: <tr> <td><a href="http://sp2010dev:1234/Lists/Time/NewForm.aspx?IssueID={$IssueIDParam}">Click here to create new item...</a> </td> </tr> to: <tr> <td> <a href="javascript:NewItem2(event,'http://sp2010dev:1234/Lists/Time/NewForm.aspx?IssueID={$IssueIDParam}');">Click here to create new item...</a> </td> </tr> It will open up in the pretty pop up and act the same as solution one… So… both Solutions will now behave the same to the end user. Just depends on which you want to implement. That’s all for now… Remember in both solutions when you have them working, you can make the “IssueID” invisible to users by using the “ms-hidden” class (it’s my previous blog post on the subject up there). That’s basically all there is to it! No pithy or witty closing this time… I am sorry it took me so long to dive into this and I hope your questions are answered. As I become more polished myself I will try to come up with a cleaner solution that will make everyone happy… As always, thanks for taking the time to stop by.

    Read the article

  • CodePlex Daily Summary for Sunday, May 02, 2010

    CodePlex Daily Summary for Sunday, May 02, 2010New ProjectsAdventureWorks in Access: AdventureWorks database in Access format. Data has been ported in Access starting from Adventure Works database for SQL Server 2008.amplifi: This project is still under construction. We will add more information here as soon as it is available.ASP.NET MVC Bug Tracker: Bug Track written in C# ASP.NET MVC 2BigDecimal: BigDecimal is an attempt to create a number class that can have large precision. It is developed in vb.net (.net 4).CBM-Command: Coming soon....Chuyou: ChuyouCMinus: A C Minus Compiler!Complex and advanced mathematical functions: Mathematics toolkit is a Class Library Project which help Programmers to Calculate Mathematics Functions easily.Confuser: Confuser is a obfuscator for .NET. It is developed in C# and using Mono.Cecil for assembly manipulation.easypos: Micro punto de venta que permite ventas express de ropa, que se acopla fácil y transaparente con el ERP Click OneElmech Address Book: Web based Address Book for maintaining details of your business clients. This project targets Suppliers - Traders - Manufacturers - users. Applicat...Feed Viewer: Feed Viewer is able to synchronize subscribed feed and red news among all computers you are using. It understands both RSS and Atom format. It can ...Google URL Shortener, C#: Implementation in C# of generating short URLs by Goo.gl service (Google URL Shortener)MARS - Medical Assistant Record System: MARS - Medical Assistant Record SystemRx Contrib: Rx Contrib is a library which contain extensions for the Rx frameworkSimple Service Administration Tool: A simple tool to start/stop/restart a service of a WinNT based system. The tool is placed in the task bar as a notify icon, so the specified servic...Vis3D: Visual 3D controls for Silverlight.VisContent: XML content controls for ASP.NET.Windows Phone 7 database: This project implements a Isolated Storage (IsolatedStorage) based database for Windows Phone 7. The database consists of table object, each one s...New Releases$log$ / Keyword Substitution / Expansion Check-In Policy (TFS - LogSubstPol): LogSubstPol_v1.2010.0.4 (VS2010): LogSubstPol is a TFS check-in policy which insertes the check-in comments and other keywords into your source code, so you can keep track of the ch...Bojinx: Bojinx Core V4.5.1: The following new features were added: You can now use either BojinxMXMLContext or ContextModule to configure your application or module context. ...CBM-Command: Initial Public Demonstration: Initial public demonstration version. Can browse attached drives and display directory of any attached drive. A common question is "How does it w...Confuser: Confuser v1.0: It is the Confuser v1.0 that used to confuse the reverse-engineers :)Font Family Name Retrieval: 2nd Release: Added New MKV Font Extractor application to showcase the library. MKV Font Extractor depends on MKVToolnix to be installed before it will work. R...Google URL Shortener, C#: Goo.gl-CS v1 Beta: Extract the ZIP file to any location. Two files have to be in the same folder!HouseFly controls: HouseFly controls alpha 0.9.6.1: HouseFly controls release 0.9.6.1 alphaIsWiX: IsWiX 1.0.261.0: Build 1.0.261.0 - built against Fireworks 1.0.264.0. 
Adds support for VS2010 Integration to support WiX 3.5 beta releases.Managed Extensibility Framework (MEF) Contrib: MefContrib 0.9.2.0: Added conventions based catalog (read more at http://www.thecodejunkie.com/2010/03/bringing-convention-based-registration.html) MEF + Unity integ...MARS - Medical Assistant Record System: license: licenseNSIS Autorun: NSIS Autorun 0.1.5: This release includes source code, executable binary, files and example materials.PHP.net: Release 0.0.0.1: This is the first release of PHP.Net. The features available in this release are: new File Save File Save As Open File In the rar file is th...Rx Contrib: V1: Rx Contrib is ongoing effort for community additions for Rx. Current features are: ReactiveQueue: ISubject that does not loose values if there are ...Silverlight 4.0 Popup Menu: Context Menu for Silverlight 4 v1.0: - Added a margin for icon display. - Added the PopupMenuItem class which is a derivative of the DockPanel. - Find* methods can now drill down the v...Silverlight 4.0 Popup Menu: Context Menu for Silverlight 4 v1.1 Beta: - Added a margin for icon display. - Added the PopupMenuItem class which is a derivative of the DockPanel. - Added a AddSeperator method. - The Fin...Simple Service Administration Tool: SSATool 0.1.3: New Simple Service Administration Tool Version 0.1.3 compiled with Visual Studio .NET 2010.sMAPedit: sMAPedit v0.7a + Map-Pack: Required Additional Map-Pack Added: height setting by color picker (shift+leftclick)sMAPedit: sMAPedit v0.7b: Fixed: force a gargabe collection update to prevent pictureBox's memory leaksqwarea: Sqwarea 0.0.228.0 (alpha): This release corrects a critical bug in ConnexityNotifier service. We strongly recommend you to upgrade to this version. Known bugs : if you open...StackOverflow Desktop Client in C# and WPF: StackOverflow Client 0.1: Source code for the sample.TortoiseHg: TortoiseHg 1.0.2: This is a bug fix release, we recommend all users upgrade to 1.0.2VCC: Latest build, v2.1.30501.0: Automatic drop of latest buildVidCoder: 0.4.0: Changes: Added ability to queue up multiple video files or titles at once. These queued jobs will use the currently selected encoding settings. Mul...WabbitStudio Z80 Software Tools: Wabbitemu 32-bit Test Release: Wabbitemu Visual Studio build for testing purposesWindows Phone 7 database: Initial Release v1.0: This project implements a Isolated Storage (IsolatedStorage) based database for Windows Phone 7. The usage of this software is very simple. You cre...YouTubeEmbeddedVideo WebControl for ASP.NET: VideoControls version 1: This zip file contains the VideoControls.dll, version 1.Most Popular ProjectsRawrWBFS ManagerAJAX Control Toolkitpatterns & practices – Enterprise LibraryMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)iTuner - The iTunes CompanionASP.NETDotNetNuke® Community EditionMost Active Projectspatterns & practices – Enterprise LibraryRawrIonics Isapi Rewrite FilterHydroServer - CUAHSI Hydrologic Information System Serverpatterns & practices: Azure Security GuidanceTinyProjectNB_Store - Free DotNetNuke Ecommerce Catalog ModuleBlogEngine.NETDambach Linear Algebra FrameworkFacebook Developer Toolkit

    Read the article

  • Creating and maintaining Orchard translations

    - by Bertrand Le Roy
Many volunteers have already stepped up to provide translations for Orchard. There are many challenges to overcome with translating such a project. Orchard is a very modular CMS, so the translation mechanism needs to account for the core as well as first and third party modules and themes. Another issue is that every new version of Orchard or of a module changes some localizable strings and adds new ones as others enter obsolescence. In order to address those problems, I've built a small Orchard module that automates some of the most complex tasks that maintaining a translation implies. In this post, I'll walk you through the operations I had to do to update the French translation for Orchard 1.0. In order to make sure you translate all the first party modules, I would recommend that you start from a full source code enlistment. The reason is that I'll show how you can extract the default en-US translation from any source code enlistment. That enables you to create a translation that is even more up-to-date than what is currently on the site. Alternatively, you could start by downloading the current en-US translation. If you decide to do so, just skip the relevant paragraphs. First, let's install the Orchard Translation Manager. I'm starting from a vanilla clone of the latest in the code repository. After you've setup the site, go into the dashboard and click on Gallery. Locate the Orchard Translation Manager in the list of modules and click "Install". Once the module is installed, you need to enable its one feature by going into Configuration/Features and clicking "Enable" next to Vandelay.TranslationManager. We're done with the setup that we need in order to start our translation work. We'll now switch to the command-line and to our favorite text editor. Open a command-line on the Orchard web site folder. I found the easiest way to do this is to do a SHIFT+right-click on the Orchard.Web folder in Windows Explorer and to click "Open command window here". Type bin\orchard to enter the Orchard command-line environment. If you do a "help commands" you should see four commands in the list that came from the module we just installed: extract default translation, install translation, package translation and sync translation. First, we're going to generate the default translation. Note that it is possible to generate that default translation for a specific list of modules and themes by using the /Extensions: switch, which should facilitate the translation of third party extensions, but in this tutorial we're going to generate it for the whole of the Orchard source code. extract default translation /Output:\temp This should have created an Orchard.en-us.po.zip file in the temp directory. Extract that archive into an orchard.po folder under \temp. The next step depends on whether you have an existing translation that you want to update or not. 
If you do have an existing translation, just extract it into the same \temp\orchard.po directory. That should result in a file structure where you have the default en-US translation alongside your own. If you don't have an existing translation, just continue; the commands will be the same. We are now going to synchronize those translations (or generate the stub for a new one if you didn't start from an existing translation). sync translation /Input:\temp\orchard.po /Culture:fr-FR After this command (where you should of course substitute fr-FR with the culture you're working on), we now have updated files that contain a few useful flags. Open each of the .po files under the culture you are working on (there should be around 36) with your favorite text editor. For all the strings that are still valid in the latest version, nothing changes and you don't need to do anything. For all the strings that disappeared from the default culture, the old translation will still be there but they will be prefixed with the following comment: # Obsolete translation Conveniently, all the obsolete strings will be grouped at the end of the file. You can select all those and delete them. For all the new strings, you will see the following comment: # Untranslated string This is where the hard work begins. You'll need to translate each of those new strings by entering the translation between the quotes in: msgstr "" Don't introduce hard carriage returns in the strings, just stay on one line (your text editor should do some reasonable wrapping so this shouldn't be a big deal). Once you're done with a file, save it. Make sure, and this is very important, that your text editor is saving using the UTF-8 encoding. In Notepad, that setting can be found in the file saving dialog by doing a "Save As" rather than a plain "Save". When all the po files have been edited, you are ready to package the translation for submission (a.k.a. sending e-mail to the localization mailing list). package translation /Culture:fr-FR /Input:\temp\orchard.po /Output:\temp You should now see an Orchard.fr-FR.po.zip file in temp that is ready to be submitted. That is, once you've tested it, which can be done by deploying it into the site: install translation \temp\orchard.fr-fr.po.zip Once this is done you can go into the dashboard under Configuration/Settings and click on "Add or remove supported cultures for the site". Choose your culture and click "Add". You can go back to settings and set the default culture. Save. You may now take a tour of the application and verify that everything works as expected. And that's it, really. Creating a translation for Orchard is a matter of a few hours. If you don't see a translation for your culture, please consider creating it.
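To make the msgstr editing step above concrete, here is a small sketch of what a single entry in one of the .po files might look like before and after translation. The string and its French rendering are invented for illustration; the real entries also carry context comments generated by the extract and sync commands.

    # Untranslated string
    msgid "Manage comments"
    msgstr ""

    # The same entry once the translation has been filled in:
    msgid "Manage comments"
    msgstr "Gérer les commentaires"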

    Read the article

  • MySQL – Scalability on Amazon RDS: Scale out to multiple RDS instances

    - by Pinal Dave
    Today, I’d like to discuss getting better MySQL scalability on Amazon RDS. The question of the day: “What can you do when a MySQL database needs to scale write-intensive workloads beyond the capabilities of the largest available machine on Amazon RDS?” Let’s take a look. In a typical EC2/RDS set-up, users connect to app servers from their mobile devices and tablets, computers, browsers, etc.  Then app servers connect to an RDS instance (web/cloud services) and in some cases they might leverage some read-only replicas.   Figure 1. A typical RDS instance is a single-instance database, with read replicas.  This is not very good at handling high write-based throughput. As your application becomes more popular you can expect an increasing number of users, more transactions, and more accumulated data.  User interactions can become more challenging as the application adds more sophisticated capabilities. The result of all this positive activity: your MySQL database will inevitably begin to experience scalability pressures. What can you do? Broadly speaking, there are four options available to improve MySQL scalability on RDS. 1. Larger RDS Instances – If you’re not already using the maximum available RDS instance, you can always scale up – to larger hardware.  Bigger CPUs, more compute power, more memory et cetera. But the largest available RDS instance is still limited.  And they get expensive. “High-Memory Quadruple Extra Large DB Instance”: 68 GB of memory 26 ECUs (8 virtual cores with 3.25 ECUs each) 64-bit platform High I/O Capacity Provisioned IOPS Optimized: 1000Mbps 2. Provisioned IOPs – You can get provisioned IOPs and higher throughput on the I/O level. However, there is a hard limit with a maximum instance size and maximum number of provisioned IOPs you can buy from Amazon and you simply cannot scale beyond these hardware specifications. 3. Leverage Read Replicas – If your application permits, you can leverage read replicas to offload some reads from the master databases. But there are a limited number of replicas you can utilize and Amazon generally requires some modifications to your existing application. And read-replicas don’t help with write-intensive applications. 4. Multiple Database Instances – Amazon offers a fourth option: “You can implement partitioning,thereby spreading your data across multiple database Instances” (Link) However, Amazon does not offer any guidance or facilities to help you with this. “Multiple database instances” is not an RDS feature.  And Amazon doesn’t explain how to implement this idea. In fact, when asked, this is the response on an Amazon forum: Q: Is there any documents that describe the partition DB across multiple RDS? I need to use DB with more 1TB but exist a limitation during the create process, but I read in the any FAQ that you need to partition database, but I don’t find any documents that describe it. A: “DB partitioning/sharding is not an official feature of Amazon RDS or MySQL, but a technique to scale out database by using multiple database instances. The appropriate way to split data depends on the characteristics of the application or data set. Therefore, there is no concrete and specific guidance.” So now what? The answer is to scale out with ScaleBase. Amazon RDS with ScaleBase: What you get – MySQL Scalability! ScaleBase is specifically designed to scale out a single MySQL RDS instance into multiple MySQL instances. Critically, this is accomplished with no changes to your application code.  
Your application continues to “see” one database.   ScaleBase does all the work of managing and enforcing an optimized data distribution policy to create multiple MySQL instances. With ScaleBase, data distribution, transactions, concurrency control, and two-phase commit are all 100% transparent and 100% ACID-compliant, so applications, services and tooling continue to interact with your distributed RDS as if it were a single MySQL instance. The result: now you can cost-effectively leverage multiple MySQL RDS instances to scale out write-intensive workloads to an unlimited number of users, transactions, and data. Amazon RDS with ScaleBase: What you keep – Everything! And how does this change your Amazon environment? 1. Keep your application, unchanged – There is no change to your application development life-cycle at all.  You still use your existing development tools, frameworks and libraries.  Application quality assurance and testing cycles stay the same. And, critically, you stay with an ACID-compliant MySQL environment. 2. Keep your RDS value-added services – The value-added services that you rely on are all still available. Amazon will continue to handle database maintenance and updates for you. You can still leverage High Availability via Multi-AZ.  And, if it benefits your application throughput, you can still use read replicas. 3. Keep your RDS administration – Finally, the RDS monitoring and provisioning tools you rely on still work as they did before. With your one large MySQL instance now split into multiple instances, you can actually use less expensive, smaller available RDS hardware and continue to see better database performance. Conclusion Amazon RDS is a tremendous service, but it doesn’t offer solutions to scale beyond a single MySQL instance. Larger RDS instances get more expensive.  And when you max-out on the available hardware, you’re stuck.  Amazon recommends scaling out your single instance into multiple instances for transaction-intensive apps, but offers no services or guidance to help you. This is where ScaleBase comes in to save the day. It gives you a simple and effective way to create multiple MySQL RDS instances, while removing all the complexities typically caused by “DIY” sharding, and with no changes to your applications. With ScaleBase you continue to leverage the AWS/RDS ecosystem: commodity hardware and value-added services like read replicas, Multi-AZ, maintenance/updates and administration with monitoring tools and provisioning. SCALEBASE ON AMAZON If you’re curious to try ScaleBase on Amazon, it can be found here – Download NOW. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: MySQL, PostADay, SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Fragmented Log files could be slowing down your database

    - by Fatherjack
    Something that is sometimes forgotten by a lot of DBAs is the fact that database log files get fragmented in the same way that you get fragmentation in a data file. The cause is very different but the effect is the same – too much effort reading and writing data. Data files get fragmented as data is changed through normal system activity; INSERTs, UPDATEs and DELETEs cause fragmentation, and most experienced DBAs are monitoring their indexes for fragmentation and dealing with it accordingly. However, you don’t hear about so many working on their log files. How can a log file get fragmented? I’m glad you asked. When you create a database there are at least two files created on the disk storage; an mdf for the data and an ldf for the log file (you can also have ndf files for extra data storage but that’s off topic for now). It is wholly possible to have more than one log file but in most cases there is little point in creating more than one as the log file is written to in a ‘wrap-around’ method (more on that later). When a log file is created at the time that a database is created, the file is actually subdivided into a number of virtual log files (VLFs). The number and size of these VLFs depends on the size chosen for the log file. VLFs are also created in the space added to a log file when a log file growth event takes place. Do you have your log files set to auto grow? Then you have potentially been introducing many VLFs into your log file. Let’s see how many VLFs we have in a brand new database. USE master GO CREATE DATABASE VLF_Test ON ( NAME = VLF_Test, FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test.mdf', SIZE = 100, MAXSIZE = 500, FILEGROWTH = 50 ) LOG ON ( NAME = VLF_Test_Log, FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf', SIZE = 5MB, MAXSIZE = 250MB, FILEGROWTH = 5MB ); go USE VLF_Test go DBCC LOGINFO; The results of this are, firstly, a new database created with the specified file sizes, and then the DBCC LOGINFO results are returned to the script editor. The DBCC LOGINFO results have plenty of interesting information in them, but let’s first note that there are 4 rows of information; this relates to the fact that 4 VLFs have been created in the log file. The values in the FileSize column are the sizes of each VLF in bytes; you will see that the last one to be created is slightly larger than the others. So, a 5MB log file has 4 VLFs of roughly 1.25 MB. Let’s alter the CREATE DATABASE script to create a log file that’s a bit bigger and see what happens. Alter the code above so that the log file details are replaced by LOG ON ( NAME = VLF_Test_Log, FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf', SIZE = 1GB, MAXSIZE = 25GB, FILEGROWTH = 1GB ); With a bigger log file specified we get more VLFs. What if we make it bigger again? LOG ON ( NAME = VLF_Test_Log, FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf', SIZE = 5GB, MAXSIZE = 250GB, FILEGROWTH = 5GB ); This time we see more VLFs are created within our log file. We now have our 5GB log file comprised of 16 files of 320MB each. In fact these sizes fall into all the ranges that control the VLF creation criteria – what a coincidence! The rules that are followed when a log file is created or has its size increased are pretty basic. 
If the file growth is lower than 64MB then 4 VLFs are created. If the growth is between 64MB and 1GB then 8 VLFs are created. If the growth is greater than 1GB then 16 VLFs are created. Now the potential for chaos comes if the default values and settings for log file growth are used. By default a database gets a 1MB log file with unlimited growth in steps of 10%. The database we just created is 6 MB; let’s add some data and see what happens. USE vlf_test go -- we need somewhere to put the data so, a table is in order IF OBJECT_ID('A_Table') IS NOT NULL DROP TABLE A_Table go CREATE TABLE A_Table ( Col_A int IDENTITY, Col_B CHAR(8000) ) GO -- Let's check the state of the log file -- 4 VLFs found EXECUTE ('DBCC LOGINFO'); go -- We can go ahead and insert some data and then check the state of the log file again INSERT A_Table (col_b) SELECT TOP 500 REPLICATE('a',2000) FROM sys.columns AS sc, sys.columns AS sc2 GO -- insert 500 rows and we get 22 VLFs EXECUTE ('DBCC LOGINFO'); go -- Let's insert more rows INSERT A_Table (col_b) SELECT TOP 2000 REPLICATE('a',2000) FROM sys.columns AS sc, sys.columns AS sc2 GO 10 -- insert 2000 rows, in 10 batches, and we suddenly have 107 VLFs EXECUTE ('DBCC LOGINFO'); Well, that escalated quickly! Our log file is split, internally, into 107 fragments after a few thousand inserts. The same happens with any logged transactions; I just chose to illustrate this with INSERTs. Having too many VLFs can cause performance degradation at times of database start up, log backup and log restore operations, so it’s well worth keeping a check on this property. How do we prevent excessive VLF creation? Creating the database with larger files and also with larger growth steps, and actively choosing to grow your databases rather than leaving it to the Auto Grow event, can make sure that the growths are made with a size that is optimal. How do we resolve a situation of a database with too many VLFs? This process needs to be done when the database is under little or no stress so that you don’t affect system users. The steps are: BACKUP LOG YourDBName TO YourBackupDestinationOfChoice Shrink the log file to its smallest possible size DBCC SHRINKFILE(FileNameOfTLogHere, TRUNCATEONLY) * Re-size the log file to the size you want it to be, taking into account your expected needs for the coming months or year. ALTER DATABASE YourDBName MODIFY FILE ( NAME = FileNameOfTLogHere, SIZE = TheSizeYouWantItToBeIn_MB) * – If you don’t know the file name of your log file then run sp_helpfile while you are connected to the database that you want to work on and you will get the details you need. The resize step can take quite a while. This is already detailed far better than I can explain it by Kimberly Tripp in her blog post 8-Steps-to-better-Transaction-Log-throughput.aspx. The result of this will be a log file with a VLF count according to the bullet list above. Knowing when VLFs are being created By complete coincidence, while I have been writing this blog (it’s been quite some time from its inception to going live) Jonathan Kehayias from SQLSkills.com has written a great article on how to track database file growth using Event Notifications and Service Broker. I strongly recommend taking a look at it as this is going to catch any sneaky auto grows that take place and let you know about them right away. 
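Pulling the remediation steps described above together, a consolidated script might look like the sketch below. The database name, logical log file name, backup destination and target size are all placeholders you would replace with your own values (run sp_helpfile first if you are unsure of the logical file name).

    -- 1. Back up the log so the inactive portion can be cleared.
    BACKUP LOG YourDBName TO DISK = N'D:\Backups\YourDBName_log.trn';
    GO
    USE YourDBName;
    GO
    -- 2. Shrink the log file down to its smallest possible size.
    DBCC SHRINKFILE (YourDBName_log, TRUNCATEONLY);
    GO
    -- 3. Re-grow the log in one step to a size that covers your expected needs,
    --    so that only a sensible number of VLFs is created.
    ALTER DATABASE YourDBName
    MODIFY FILE (NAME = YourDBName_log, SIZE = 8192MB);
    GO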
Hassle-free monitoring of VLFs If you are lucky or wise enough to be using SQL Monitor or another monitoring tool that lets you write your own custom metrics then you can keep an eye on this very easily. There is a custom metric for VLFs (written by Stuart Ainsworth) already on the site, and there are some others there that are very useful, so take a moment or two to look around while you are there. Resources MSDN – http://msdn.microsoft.com/en-us/library/ms179355(v=sql.105).aspx Kimberly Tripp from SQLSkills.com – http://www.sqlskills.com/BLOGS/KIMBERLY/post/8-Steps-to-better-Transaction-Log-throughput.aspx Thomas LaRock at Simple-Talk.com – http://www.simple-talk.com/sql/database-administration/monitoring-sql-server-virtual-log-file-fragmentation/ Disclosure I am a Friend of Red Gate. This means that I am more than likely to say good things about Red Gate DBA and Developer tools. No matter how awesome I make them sound, take the time to compare them with other products before you contact the Red Gate sales team to make your order.

    Read the article

  • OpenCV install problems on Studio 12.04 - broken dependencies

    - by Will
    I'm trying to follow the Ubuntu OpenCV documentation at OpenCV. The provided script has a line which executed for some time, taking away more packages than I expected (such as ubuntu-studio video); sudo apt-get -qq remove ffmpeg x264 libx264-dev When the script gets to the line below, it bombs; sudo apt-get -qq install libopencv-dev build-essential checkinstall cmake pkg-config yasm libtiff4-dev libjpeg-dev libjasper-dev libavcodec-dev libavformat-dev libswscale-dev libdc1394-22-dev libxine-dev libgstreamer0.10-dev libgstreamer-plugins-base0.10-dev libv4l-dev python-dev python-numpy libtbb-dev libqt4-dev libgtk2.0-dev libfaac-dev libmp3lame-dev libopencore-amrnb-dev libopencore-amrwb-dev libtheora-dev libvorbis-dev libxvidcore-dev x264 v4l-utils ffmpeg The error msg is; E: Unable to correct problems, you have held broken packages. I've since run Update-Manager, run sudo apt-get updates, rebooted, tried the above script line manually, and still no change. I've just run sudo apt-get install -f and nothing seemed to change. It did mention that some packages were no longer needed and could be removed by apt-get autoremove, so I ran that. It removed a number of packages, so I reran the install command above. Still same problem of held broken packages. I just ran sudo apt-get -u dist-upgrade Part of the response was; The following packages have been kept back: gstreamer0.10-ffmpeg I'm not sure what that means. I do know that it shows up in my Update-Manager and cannot be checked I then ran sudo dpkg --configure -a and then reran sudo apt-get -f install and the package was still not upgraded, though there was this very interesting comment; Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation: The following packages have unmet dependencies: gstreamer0.10-ffmpeg : Depends: libavcodec53 (< 5:0) but it is not going to be installed or libavcodec-extra-53 (< 5:0) but 5:0.7.2-1ubuntu1+codecs1~oneiric2 is to be installed E: Unable to correct problems, you have held broken packages. Then I ran sudo apt-get -u dist-upgrade It showed I had one held package, so I ran; sudo apt-get -o Debug::pkgProblemResolver=yes dist-upgrade It also exited without upgrading the package, so I ran; sudo apt-get remove --dry-run gstreamer0.10-ffmpeg:i386 And it gave me; *The following packages will be REMOVED: arista gstreamer0.10-ffmpeg 0 upgraded, 0 newly installed, 2 to remove and 0 not upgraded. Remv arista [0.9.7-3ubuntu1] Remv gstreamer0.10-ffmpeg [0.10.12-1ubuntu1]* But when I reran sudo apt-get -u dist-upgrade It showed the package was still there. *The following packages have been kept back: gstreamer0.10-ffmpeg 0 upgraded, 0 newly installed, 0 to remove and 1 not upgraded.* Update: Just went into Synaptic PM and completely removed gstreamer0.10-ffmpeg Reran sudo apt-get -u dist-upgrade And was told; 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. However, when I ran the original apt-get to install opencv (first code at the top of this question), it still gave me the same broken package errors. 
So I tried $ cat /etc/apt/sources.list # # deb cdrom:[Ubuntu-Studio 11.10 _Oneiric Ocelot_ - Release i386 (20111011.1)]/ oneiric main multiverse restricted universe # deb cdrom:[Ubuntu-Studio 11.10 _Oneiric Ocelot_ - Release i386 (20111011.1)]/ oneiric main multiverse restricted universe # See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to # newer versions of the distribution. deb http://us.archive.ubuntu.com/ubuntu/ precise main restricted deb-src http://us.archive.ubuntu.com/ubuntu/ precise main restricted ## Major bug fix updates produced after the final release of the ## distribution. deb http://us.archive.ubuntu.com/ubuntu/ precise-updates main restricted deb-src http://us.archive.ubuntu.com/ubuntu/ precise-updates main restricted ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu ## team. Also, please note that software in universe WILL NOT receive any ## review or updates from the Ubuntu security team. deb http://us.archive.ubuntu.com/ubuntu/ precise universe deb-src http://us.archive.ubuntu.com/ubuntu/ precise universe deb http://us.archive.ubuntu.com/ubuntu/ precise-updates universe deb-src http://us.archive.ubuntu.com/ubuntu/ precise-updates universe ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu ## team, and may not be under a free licence. Please satisfy yourself as to ## your rights to use the software. Also, please note that software in ## multiverse WILL NOT receive any review or updates from the Ubuntu ## security team. deb http://us.archive.ubuntu.com/ubuntu/ precise multiverse deb-src http://us.archive.ubuntu.com/ubuntu/ precise multiverse deb http://us.archive.ubuntu.com/ubuntu/ precise-updates multiverse deb-src http://us.archive.ubuntu.com/ubuntu/ precise-updates multiverse ## N.B. software from this repository may not have been tested as ## extensively as that contained in the main release, although it includes ## newer versions of some applications which may provide useful features. ## Also, please note that software in backports WILL NOT receive any review ## or updates from the Ubuntu security team. deb http://security.ubuntu.com/ubuntu precise-security main restricted deb-src http://security.ubuntu.com/ubuntu precise-security main restricted deb http://security.ubuntu.com/ubuntu precise-security universe deb-src http://security.ubuntu.com/ubuntu precise-security universe deb http://security.ubuntu.com/ubuntu precise-security multiverse deb-src http://security.ubuntu.com/ubuntu precise-security multiverse ## Uncomment the following two lines to add software from Canonical's ## 'partner' repository. ## This software is not part of Ubuntu, but is offered by Canonical and the ## respective vendors as a service to Ubuntu users. deb http://archive.canonical.com/ubuntu precise partner # deb-src http://archive.canonical.com/ubuntu oneiric partner ## Uncomment the following two lines to add software from Ubuntu's ## 'extras' repository. ## This software is not part of Ubuntu, but is offered by third-party ## developers who want to ship their latest software. # deb http://extras.ubuntu.com/ubuntu oneiric main # deb-src http://extras.ubuntu.com/ubuntu oneiric main # deb http://download.opensuse.org/repositories/home:/popinet/xUbuntu_11.04 ./ # disabled on upgrade to precise and then; $ cat /etc/apt/sources.list.d/* But I don't have enough reputation to post the results here (it says I need at least 10 reputation points to post more than 2 links), so I don't know how to provide the requested feedback. 
Then tried; $ sudo apt-get check [sudo] password for <abcd>: Reading package lists... Done Building dependency tree Reading state information... Done However, no resolution of the problem yet. What else do I need to do? Will an upgrade to Ubuntu Studio 13.xx solve this problem (or compound it)?

    Read the article

  • Networking in VirtualBox

    - by Fat Bloke
    Networking in VirtualBox is extremely powerful, but can also be a bit daunting, so here's a quick overview of the different ways you can setup networking in VirtualBox, with a few pointers as to which configurations should be used and when. VirtualBox allows you to configure up to 8 virtual NICs (Network Interface Controllers) for each guest vm (although only 4 are exposed in the GUI) and for each of these NICs you can configure: Which virtualized NIC-type is exposed to the Guest. Examples include: Intel PRO/1000 MT Server (82545EM),  AMD PCNet FAST III (Am79C973, the default) or  a Paravirtualized network adapter (virtio-net). How the NIC operates with respect to your Host's physical networking. The main modes are: Network Address Translation (NAT) Bridged networking Internal networking Host-only networking NAT with Port-forwarding The choice of NIC-type comes down to whether the guest has drivers for that NIC.  VirtualBox, suggests a NIC based on the guest OS-type that you specify during creation of the vm, and you rarely need to modify this. But the choice of networking mode depends on how you want to use your vm (client or server) and whether you want other machines on your network to see it. So let's look at each mode in a bit more detail... Network Address Translation (NAT) This is the default mode for new vm's and works great in most situations when the Guest is a "client" type of vm. (i.e. most network connections are outbound). Here's how it works: When the guest OS boots,  it typically uses DHCP to get an IP address. VirtualBox will field this DHCP request and tell the guest OS its assigned IP address and the gateway address for routing outbound connections. In this mode, every vm is assigned the same IP address (10.0.2.15) because each vm thinks they are on their own isolated network. And when they send their traffic via the gateway (10.0.2.2) VirtualBox rewrites the packets to make them appear as though they originated from the Host, rather than the Guest (running inside the Host). This means that the Guest will work even as the Host moves from network to network (e.g. laptop moving between locations), and from wireless to wired connections too. However, how does another computer initiate a connection into a Guest?  e.g. connecting to a web server running in the Guest. This is not (normally) possible using NAT mode as there is no route into the Guest OS. So for vm's running servers we need a different networking mode.... Bridged Networking Bridged Networking is used when you want your vm to be a full network citizen, i.e. to be an equal to your host machine on the network. In this mode, a virtual NIC is "bridged" to a physical NIC on your host, like this: The effect of this is that each VM has access to the physical network in the same way as your host. It can access any service on the network such as external DHCP services, name lookup services, and routing information just as the host does. Logically, the network looks like this: The downside of this mode is that if you run many vm's you can quickly run out of IP addresses or your network administrator gets fed up with you asking for statically assigned IP addresses. Secondly, if your host has multiple physical NICs (e.g. Wireless and Wired) you must reconfigure the bridge when your host jumps networks.  Hmm, so what if you want to run servers in vm's but don't want to involve your network administrator? Maybe one of the next 2 modes is for you... 
Internal Networking When you configure one or more vm's to sit on an Internal network, VirtualBox ensures that all traffic on that network stays within the host and is only visible to vm's on that virtual network. Configuration looks like this: The internal network ( in this example "intnet" ) is a totally isolated network and so is very "quiet". This is good for testing when you need a separate, clean network, and you can create sophisticated internal networks with vm's that provide their own services to the internal network. (e.g. Active Directory, DHCP, etc). Note that not even the Host is a member of the internal network, but this mode allows vm's to function even when the Host is not connected to a network (e.g. on a plane). Note that in this mode, VirtualBox provides no "convenience" services such as DHCP, so your machines must be statically configured or one of the vm's needs to provide a DHCP/Name service. Multiple internal networks are possible and you can configure vm's to have multiple NICs to sit across internal and other network modes and thereby provide routes if needed. But all this sounds tricky. What if you want an Internal Network that the host participates on with VirtualBox providing IP addresses to the Guests? Ah, then for this, you might want to consider Host-only Networking... Host-only Networking Host-only Networking is like Internal Networking in that you indicate which network the Guest sits on, in this case, "vboxnet0": All vm's sitting on this "vboxnet0" network will see each other, and additionally, the host can see these vm's too. However, other external machines cannot see Guests on this network, hence the name "Host-only". Logically, the network looks like this: This looks very similar to Internal Networking but the host is now on "vboxnet0" and can provide DHCP services. To configure how a Host-only network behaves, look in the VirtualBox Manager...Preferences...Network dialog: Port-Forwarding with NAT Networking Now you may think that we've provided enough modes here to handle every eventuality but here's just one more... What if you cart around a mobile-demo or dev environment on, say, a laptop and you have one or more vm's that you need other machines to connect into? And you are continually hopping onto different (customer?) networks. In this scenario: NAT - won't work because external machines need to connect in. Bridged - possibly an option, but does your customer want you eating IP addresses and can your software cope with changing networks? Internal - we need the vm(s) to be visible on the network, so this is no good. Host-only - same problem as above, we want external machines to connect in to the vm's. Enter Port-forwarding to save the day! Configure your vm's to use NAT networking; Add Port Forwarding rules; External machines connect to "host":"port number" and connections are forwarded by VirtualBox to the guest:port number specified. For example, if your vm runs a web server on port 80, you could set up rules like this:  ...which reads: "any connections on port 8080 on the Host will be forwarded onto this vm's port 80".  This provides a mobile demo system which won't need re-configuring every time you open your laptop lid. Summary VirtualBox has a very powerful set of options allowing you to set up almost any configuration your heart desires. For more information, check out the VirtualBox User Manual on Virtual Networking. -FB 
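As a footnote to the port-forwarding section above, the same configuration can also be scripted with the VBoxManage command-line tool rather than the GUI. The VM name, rule name and port numbers below are made up for illustration; check the VirtualBox User Manual for the full rule syntax.

    # Put the VM's first NIC into NAT mode (the VM must be powered off):
    VBoxManage modifyvm "WebDemoVM" --nic1 nat
    # Forward host port 8080 to guest port 80; the rule format is name,protocol,hostip,hostport,guestip,guestport:
    VBoxManage modifyvm "WebDemoVM" --natpf1 "guestweb,tcp,,8080,,80"
    # Remove the rule again when it is no longer needed:
    VBoxManage modifyvm "WebDemoVM" --natpf1 delete "guestweb"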

    Read the article

  • ubuntu 12.04 python problem or?

    - by Trki
    Hi i am trying to fix this for a long time but without success. When i open my zsh terminal i get this error: (terminal is working but error appear) Welcome to the world of tomorrow! virtualenvwrapper_run_hook:12: permission denied: virtualenvwrapper.sh: There was a problem running the initialization hooks. If Python could not import the module virtualenvwrapper.hook_loader, check that virtualenv has been installed for VIRTUALENVWRAPPER_PYTHON= and that PATH is set properly. I tried few things but... dont know how to solve it. Somehow during looking for a search i found i should post here an output of: ? sudo dpkg --configure -a Setting up python-pip (1.0-1build1) ... /var/lib/dpkg/info/python-pip.postinst: 6: /var/lib/dpkg/info/python-pip.postinst: pycompile: not found dpkg: error processing python-pip (--configure): subprocess installed post-installation script returned error exit status 127 Setting up libc-dev-bin (2.15-0ubuntu10.5) ... Setting up gnome-control-center-data (1:3.4.2-0ubuntu0.13) ... Setting up linux-libc-dev (3.2.0-56.86) ... Setting up python-virtualenv (1.7.1.2-1) ... /var/lib/dpkg/info/python-virtualenv.postinst: 6: /var/lib/dpkg/info/python-virtualenv.postinst: pycompile: not found dpkg: error processing python-virtualenv (--configure): subprocess installed post-installation script returned error exit status 127 Setting up libglib2.0-0 (2.32.4-0ubuntu1) ... Setting up libglib2.0-0:i386 (2.32.4-0ubuntu1) ... Setting up gimp (2.6.12-1ubuntu1.2) ... /var/lib/dpkg/info/gimp.postinst: 11: /var/lib/dpkg/info/gimp.postinst: pycompile: not found dpkg: error processing gimp (--configure): subprocess installed post-installation script returned error exit status 127 Setting up libpolkit-gobject-1-0 (0.104-1ubuntu1.1) ... Setting up libgnome-control-center1 (1:3.4.2-0ubuntu0.13) ... Setting up libnm-util2 (0.9.4.0-0ubuntu4.3) ... Setting up libc6-dev (2.15-0ubuntu10.5) ... Setting up libpulse-mainloop-glib0 (1:1.1-0ubuntu15.4) ... dpkg: dependency problems prevent configuration of virtualenvwrapper: virtualenvwrapper depends on python-virtualenv; however: Package python-virtualenv is not configured yet. dpkg: error processing virtualenvwrapper (--configure): dependency problems - leaving unconfigured Setting up libpolkit-agent-1-0 (0.104-1ubuntu1.1) ... Setting up libupower-glib1 (0.9.15-3git1ubuntu0.1) ... Setting up libaccountsservice0 (0.6.15-2ubuntu9.6.1) ... Setting up libpolkit-backend-1-0 (0.104-1ubuntu1.1) ... Setting up libglib2.0-bin (2.32.4-0ubuntu1) ... Setting up libnm-glib4 (0.9.4.0-0ubuntu4.3) ... Setting up policykit-1 (0.104-1ubuntu1.1) ... Setting up gnome-settings-daemon (3.4.2-0ubuntu0.6.4) ... Setting up accountsservice (0.6.15-2ubuntu9.6.1) ... dpkg: error processing ubuntu-system-service (--configure): Package is in a very bad inconsistent state - you should reinstall it before attempting configuration. Processing triggers for libc-bin ... ldconfig deferred processing now taking place Errors were encountered while processing: python-pip python-virtualenv gimp virtualenvwrapper ubuntu-system-service Also: ? 
python --version zsh: command not found: python Part of my ~/.zshrc # python virtual env wrapper if [ -f ~/.local/bin/virtualenvwrapper.sh ]; then export WORKON_HOME=~/.virtualenvs source ~/.local/bin/virtualenvwrapper.sh plugins=("${plugins[@]}" virtualenvwrapper) fi # pythonbrew [[ -s ~/.pythonbrew/etc/bashrc ]] && source ~/.pythonbrew/etc/bashrc Part os zsh -xv # # Invoke the initialization functions # virtualenvwrapper_initialize +/home/trki/.local/bin/virtualenvwrapper.sh:1179> virtualenvwrapper_initialize +virtualenvwrapper_initialize:1> virtualenvwrapper_derive_workon_home +virtualenvwrapper_derive_workon_home:1> typeset 'workon_home_dir=/home/trki/.virtualenvs' +virtualenvwrapper_derive_workon_home:5> [ /home/trki/.virtualenvs '=' '' ']' +virtualenvwrapper_derive_workon_home:12> echo /home/trki/.virtualenvs +virtualenvwrapper_derive_workon_home:12> unset GREP_OPTIONS +virtualenvwrapper_derive_workon_home:12> grep '^[^/~]' +virtualenvwrapper_derive_workon_home:21> echo /home/trki/.virtualenvs +virtualenvwrapper_derive_workon_home:21> unset GREP_OPTIONS +virtualenvwrapper_derive_workon_home:21> egrep '([\$~]|//)' +virtualenvwrapper_derive_workon_home:30> echo /home/trki/.virtualenvs +virtualenvwrapper_derive_workon_home:31> return 0 +virtualenvwrapper_initialize:1> export 'WORKON_HOME=/home/trki/.virtualenvs' +virtualenvwrapper_initialize:3> virtualenvwrapper_verify_workon_home -q +virtualenvwrapper_verify_workon_home:1> RC=0 +virtualenvwrapper_verify_workon_home:2> [ ! -d /home/trki/.virtualenvs/ ']' +virtualenvwrapper_verify_workon_home:11> return 0 +virtualenvwrapper_initialize:6> [ /home/trki/.virtualenvs '=' '' ']' +virtualenvwrapper_initialize:11> virtualenvwrapper_run_hook initialize +virtualenvwrapper_run_hook:1> typeset hook_script +virtualenvwrapper_run_hook:2> typeset result +virtualenvwrapper_run_hook:4> hook_script=+virtualenvwrapper_run_hook:4> virtualenvwrapper_tempfile initialize-hook +virtualenvwrapper_tempfile:2> typeset 'suffix=initialize-hook' +virtualenvwrapper_tempfile:3> typeset file +virtualenvwrapper_tempfile:5> file=+virtualenvwrapper_tempfile:5> virtualenvwrapper_mktemp -t virtualenvwrapper-initialize-hook-XXXXXXXXXX +virtualenvwrapper_mktemp:1> mktemp -t virtualenvwrapper-initialize-hook-XXXXXXXXXX +virtualenvwrapper_tempfile:5> file=/tmp/virtualenvwrapper-initialize-hook-OhY86PXmo7 +virtualenvwrapper_tempfile:6> [ 0 -ne 0 ']' +virtualenvwrapper_tempfile:6> [ -z /tmp/virtualenvwrapper-initialize-hook-OhY86PXmo7 ']' +virtualenvwrapper_tempfile:6> [ ! -f /tmp/virtualenvwrapper-initialize-hook-OhY86PXmo7 ']' +virtualenvwrapper_tempfile:11> echo /tmp/virtualenvwrapper-initialize-hook-OhY86PXmo7 +virtualenvwrapper_tempfile:12> return 0 +virtualenvwrapper_run_hook:4> hook_script=/tmp/virtualenvwrapper-initialize-hook-OhY86PXmo7 +virtualenvwrapper_run_hook:11> cd /home/trki/.virtualenvs +cd:1> [[ x/home/trki/.virtualenvs == x... ]] +cd:3> [[ x/home/trki/.virtualenvs == x.... ]] +cd:5> [[ x/home/trki/.virtualenvs == x..... ]] +cd:7> [[ x/home/trki/.virtualenvs == x...... 
]] +cd:9> [ -d /home/trki/.autoenv ']' +cd:13> cd /home/trki/.virtualenvs +virtualenvwrapper_run_hook:12> '' -m virtualenvwrapper.hook_loader --script /tmp/virtualenvwrapper-initialize-hook-OhY86PXmo7 initialize virtualenvwrapper_run_hook:12: permission denied: +virtualenvwrapper_run_hook:15> result=126 +virtualenvwrapper_run_hook:17> [ 126 -eq 0 ']' +virtualenvwrapper_run_hook:27> [ initialize '=' initialize ']' +virtualenvwrapper_run_hook:29> cat - virtualenvwrapper.sh: There was a problem running the initialization hooks. If Python could not import the module virtualenvwrapper.hook_loader, check that virtualenv has been installed for VIRTUALENVWRAPPER_PYTHON= and that PATH is set properly. +virtualenvwrapper_run_hook:38> rm -f /tmp/virtualenvwrapper-initialize-hook-OhY86PXmo7 +virtualenvwrapper_run_hook:39> return 126 +virtualenvwrapper_initialize:13> virtualenvwrapper_setup_tab_completion +virtualenvwrapper_setup_tab_completion:1> [ -n '' ']' +virtualenvwrapper_setup_tab_completion:20> [ -n 4.3.17 ']' +virtualenvwrapper_setup_tab_completion:30> compctl -K _virtualenvs workon rmvirtualenv cpvirtualenv showvirtualenv +virtualenvwrapper_setup_tab_completion:31> compctl -K _cdvirtualenv_complete cdvirtualenv +virtualenvwrapper_setup_tab_completion:32> compctl -K _cdsitepackages_complete cdsitepackages +virtualenvwrapper_initialize:15> return 0 +/home/trki/.zshrc:17> plugins=( git python django symfony2 zsh-syntax-highlighting composer history-substring-search virtualenvwrapper ) # pythonbrew [[ -s ~/.pythonbrew/etc/bashrc ]] && source ~/.pythonbrew/etc/bashrc +/home/trki/.zshrc:21> [[ -s /home/trki/.pythonbrew/etc/bashrc ]] Also when i try to open ubuntu software center absolutly nothing happens. No idea what to do now.

    Read the article

  • 6 Ways to Free Up Hard Drive Space Used by Windows System Files

    - by Chris Hoffman
    We’ve previously covered the standard ways to free up space on Windows. But if you have a small solid-state drive and really want more space, there are geekier ways to reclaim hard drive space. Not all of these tips are recommended — in fact, if you have more than enough hard drive space, following these tips may actually be a bad idea. There’s a tradeoff to changing all of these settings. Erase Windows Update Uninstall Files Windows allows you to uninstall patches you install from Windows Update. This is helpful if an update ever causes a problem — but how often do you need to uninstall an update, anyway? And will you really ever need to uninstall updates you’ve installed several years ago? These uninstall files are probably just wasting space on your hard drive. A recent update released for Windows 7 allows you to erase Windows Update files from the Windows Disk Cleanup tool. Open Disk Cleanup, click Clean up system files, check the Windows Update Cleanup option, and click OK. If you don’t see this option, run Windows Update and install the available updates. Remove the Recovery Partition Windows computers generally come with recovery partitions that allow you to reset your computer back to its factory default state without juggling discs. The recovery partition allows you to reinstall Windows or use the Refresh and Reset your PC features. These partitions take up a lot of space as they need to contain a complete system image. On Microsoft’s Surface Pro, the recovery partition takes up about 8-10 GB. On other computers, it may be even larger as it needs to contain all the bloatware the manufacturer included. Windows 8 makes it easy to copy the recovery partition to removable media and remove it from your hard drive. If you do this, you’ll need to insert the removable media whenever you want to refresh or reset your PC. On older Windows 7 computers, you could delete the recovery partition using a partition manager — but ensure you have recovery media ready if you ever need to install Windows. If you prefer to install Windows from scratch instead of using your manufacturer’s recovery partition, you can just insert a standard Windows disc if you ever want to reinstall Windows. Disable the Hibernation File Windows creates a hidden hibernation file at C:\hiberfil.sys. Whenever you hibernate the computer, Windows saves the contents of your RAM to the hibernation file and shuts down the computer. When it boots up again, it reads the contents of the file into memory and restores your computer to the state it was in. As this file needs to contain much of the contents of your RAM, it’s 75% of the size of your installed RAM. If you have 12 GB of memory, that means this file takes about 9 GB of space. On a laptop, you probably don’t want to disable hibernation. However, if you have a desktop with a small solid-state drive, you may want to disable hibernation to recover the space. When you disable hibernation, Windows will delete the hibernation file. You can’t move this file off the system drive, as it needs to be on C:\ so Windows can read it at boot. Note that this file and the paging file are marked as “protected operating system files” and aren’t visible by default. Shrink the Paging File The Windows paging file, also known as the page file, is a file Windows uses if your computer’s available RAM ever fills up. Windows will then “page out” data to disk, ensuring there’s always available memory for applications — even if there isn’t enough physical RAM. 
The paging file is located at C:\pagefile.sys by default. You can shrink it or disable it if you’re really crunched for space, but we don’t recommend disabling it as that can cause problems if your computer ever needs some paging space. On our computer with 12 GB of RAM, the paging file takes up 12 GB of hard drive space by default. If you have a lot of RAM, you can certainly decrease the size — we’d probably be fine with 2 GB or even less. However, this depends on the programs you use and how much memory they require. The paging file can also be moved to another drive — for example, you could move it from a small SSD to a slower, larger hard drive. It will be slower if Windows ever needs to use the paging file, but it won’t use important SSD space. Configure System Restore Windows seems to use about 10 GB of hard drive space for “System Protection” by default. This space is used for System Restore snapshots, allowing you to restore previous versions of system files if you ever run into a system problem. If you need to free up space, you could reduce the amount of space allocated to system restore or even disable it entirely. Of course, if you disable it entirely, you’ll be unable to use system restore if you ever need it. You’d have to reinstall Windows, perform a Refresh or Reset, or fix any problems manually. Tweak Your Windows Installer Disc Want to really start stripping down Windows, ripping out components that are installed by default? You can do this with a tool designed for modifying Windows installer discs, such as WinReducer for Windows 8 or RT Se7en Lite for Windows 7. These tools allow you to create a customized installation disc, slipstreaming in updates and configuring default options. You can also use them to remove components from the Windows disc, shrinking the size of the resulting Windows installation. This isn’t recommended as you could cause problems with your Windows installation by removing important features. But it’s certainly an option if you want to make Windows as tiny as possible. Most Windows users can benefit from removing Windows Update uninstallation files, so it’s good to see that Microsoft finally gave Windows 7 users the ability to quickly and easily erase these files. However, if you have more than enough hard drive space, you should probably leave well enough alone and let Windows manage the rest of these settings on its own. Image Credit: Yutaka Tsutano on Flickr     
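If you prefer the command line, a couple of the tweaks above can also be made from an elevated Command Prompt. These are illustrative examples only: re-enable hibernation with "powercfg /hibernate on" if you change your mind, and leave the paging file alone unless you know you have RAM to spare.

    :: Disable hibernation and delete C:\hiberfil.sys:
    powercfg /hibernate off
    :: Show the current paging file and how much of it is in use:
    wmic pagefile list /format:list
    :: Launch Disk Cleanup; click "Clean up system files" there to reach the Windows Update Cleanup option:
    cleanmgr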

    Read the article

< Previous Page | 73 74 75 76 77 78 79 80 81 82 83 84  | Next Page >