Search Results

Search found 27581 results on 1104 pages for 'execute command'.


  • Creating Item Templates as Visual Studio 2010 Extensions

    - by maziar
    This blog post briefly introduces creating an item template as a Visual Studio 2010 extension.

    Problem specification

    Assume you are writing a framework for data-oriented applications and you decide to keep all of your application messages in a SQL Server database table. After creating the table, you create a class in your framework that retrieves a message for a given string key:

        var message = FrameworkMessages.Get("ChangesSavedSuccess");

    Everyone would say this code is error prone, because message keys are not strongly typed, and a typo might create errors in the application that are not caught by tests. So we look for a way to make it strongly typed, i.e. create a class that we can use like this:

        var message = Messages.ChangesSavedSuccess;

    In the Messages class the code looks like this:

        public string ChangesSavedSuccess
        {
            get { return FrameworkMessages.Get("ChangesSavedSuccess"); }
        }

    Clearly, we are not going to write the Messages class by hand; we need a code generator for it.
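    The FrameworkMessages class itself is never shown in this post; purely for context, a minimal sketch of how such a lookup class might be implemented is below (the [Message] table and its [Key]/[Text] columns are assumptions, loosely consistent with the T4 template shown later):

        using System.Data.SqlClient;

        public static class FrameworkMessages
        {
            // Assumption: in a real framework the connection string would come
            // from the application's configuration rather than a constant.
            private const string ConnectionString =
                "server=.; Database=MyDatabase; Integrated Security=True";

            // Looks up a message text by its string key.
            public static string Get(string key)
            {
                using (var connection = new SqlConnection(ConnectionString))
                {
                    connection.Open();
                    var command = connection.CreateCommand();
                    command.CommandText = "SELECT [Text] FROM [Message] WHERE [Key] = @key";
                    command.Parameters.AddWithValue("@key", key);
                    return (string)command.ExecuteScalar();
                }
            }
        }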
    Now assume that the application(s) that intend to use our framework contain multiple sub-systems, so each sub-system needs its own strongly typed message class that calls the FrameworkMessages.Get method. We would therefore like to package our code generator as an item template, so that each developer can simply add the item to a project with no further work required.

    Solution

    We create a T4 text template that generates our strongly typed class from the database, then wrap it in a Visual Studio item template and publish it.

    What are T4 templates?

    You might already be familiar with T4 templates; if so, you can skip this section. A T4 text template is a Visual Studio file type (.tt) that generates output text. The file is a mixture of text blocks and code logic (in C# or VB). For example, you can generate HTML files, C# classes, resource files and so on with a T4 template.

    Syntax highlighting

    By default, Visual Studio 2010 provides no syntax highlighting or auto-completion for T4 templates. There is, however, a fine Visual Studio extension named 'Visual T4' that can be downloaded free from the Visual Studio Gallery. This tool offers IntelliSense, syntax coloring, validation, transformation preview and more for T4 templates.

    How item templates work in Visual Studio

    Visual Studio extensions allow us to add functionality to Visual Studio. In our case we need to create a .vsix file that adds a template to the Visual Studio item templates. Item templates are zip files containing the template file and a metadata file with the .vstemplate extension. The .vstemplate file is an XML file that provides information about the template. A .vsix file is also a zip file (renamed to .vsix) that is opened by the Visual Studio Extension Installer. (Re-installing a .vsix file requires the old one to be uninstalled first via Tools > Extension Manager.) Installing a .vsix requires Visual Studio to be closed and re-opened to take effect. The extension installer will find the item template's zip file and copy it to Visual Studio's item templates folder. You can find the other Visual Studio templates in [<VS Install Path>\Common7\IDE\ItemTemplates] and you can edit them, but be very careful with the VS default templates.

    How can I create a VSIX file?

    1. Visual Studio SDK

    Depending on your Visual Studio version, you need to download the Microsoft Visual Studio SDK. Note that if you have VS 2010 SP1, you will need to download and install the VS 2010 SP1 SDK; the VS 2010 SDK will not install (unless you change the registry value that indicates your service pack number). Here is the link for the VS 2010 SP1 SDK. After downloading, run it and follow the wizard to complete the installation.

    2. Create the file you want to turn into an item template

    Create a project (or use an existing one), add your file, and edit it until it works the way you want. Back to our own problem: we need to create a T4 (.tt) template. In your VS project: Add > New Item > General > Text Template. Type a file name, e.g. Messages.tt, and press Add. Then create the T4 template content (I do not intend to cover T4 syntax in this post, so I will just show the code, which should be clear enough to follow):

        <#@ template debug="false" hostspecific="true" language="C#" #>
        <#@ output extension=".cs" #>
        <#@ Assembly Name="System.Data" #>
        <#@ Import Namespace="System.Data.SqlClient" #>
        <#@ Import Namespace="System.Text" #>
        <#@ Import Namespace="System.IO" #>
        <#
            var connectionString = "server=Maziar-PC; Database=MyDatabase; Integrated Security=True";
            var systemName = "Sys1";
            var builder = new StringBuilder();
            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();
                var command = connection.CreateCommand();
                command.CommandText = string.Format("SELECT [Key] FROM [Message] WHERE System = '{0}'", systemName);
                var reader = command.ExecuteReader();
                while (reader.Read())
                {
                    builder.AppendFormat("        public static string {0} {{ get {{ return FrameworkMessages.Get(\"{0}\"); }} }}\r\n", reader[0]);
                }
            }
        #>
        namespace <#= System.Runtime.Remoting.Messaging.CallContext.LogicalGetData("NamespaceHint") #>
        {
            public static class <#= Path.GetFileNameWithoutExtension(Host.TemplateFile) #>
            {
        <#= builder.ToString() #>
            }
        }

    As you can see, the T4 template connects to a database, reads the message keys and generates a class. Here is the output:

        namespace MyProject.MyFolder
        {
            public static class Messages
            {
                public static string ChangesSavedSuccess { get { return FrameworkMessages.Get("ChangesSavedSuccess"); } }
                public static string ErrorSavingChanges { get { return FrameworkMessages.Get("ErrorSavingChanges"); } }
            }
        }

    The output looks fine, but there is one problem: the connectionString and systemName are hard-coded. So how can I create a flexible item template? One feature of Visual Studio item templates is that you can attach a designer wizard to an item template, so I can collect the connection information and system name there. Now let's go on to creating the .vsix file.

    3. Create the template

    In Visual Studio click File > Export Template and a wizard will show up. In the first step click Item Template and, in the combo box, select the project that contains Messages.tt. Click Next. In the second step, select Messages.tt from the list. Click Next. In the third step you choose references; this template needs System and System.Data, so choose them. Click Next. Enter the template name and description and, if you like, add an .ico file as the icon plus a preview image. Uncheck 'Automatically import the template into Visual Studio'. Copy the output location to the clipboard. Click Finish.

    4. Create the VSIX project

    In VS, click File > New > Project > Extensibility > VSIX Project. Type a name (e.g. FrameworkMessages), location, etc. The project will include a .vsixmanifest file.
    Fill in fields like Author, Product Name, Description, etc. In the Content section, click Add Content, choose Item Template as the content type, and choose File as the source. Remember the template file path you copied to the clipboard? Paste it into the File field and click OK.

    5. Build the VSIX project

    That's it: build the project and go to the output directory. You will see a .vsix file, which you can run now. After restarting VS, if you click on a project > Add > New Item, you will see your item in the list and you can add it. When you add this item to a project that lacks references to System and System.Data, they will be added automatically. But the problem mentioned in step 2 is now visible.

    6. Create a design wizard for your item template

    Create a project, e.g. a Windows Application named 'Framework.Messages.Design', but change its output type to Class Library. Add references to Microsoft.VisualStudio.TemplateWizardInterface and EnvDTE. Add a class named MessageDesigner to the project and implement the IWizard interface in it. This is what you should write:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using Microsoft.VisualStudio.TemplateWizard;
        using EnvDTE;

        namespace Framework.Messages.Design
        {
            class MessageDesigner : IWizard
            {
                private bool CanAddProjectItem;

                public void RunStarted(object automationObject, Dictionary<string, string> replacementsDictionary, WizardRunKind runKind, object[] customParams)
                {
                    // Prompt the user for the connection string and system name in a
                    // Windows form (ShowDialog) here; try to provide a good interface.
                    // If the user clicks Cancel on your form, return without adding anything.
                    string connectionString = "connection;string"; // Set value from the form
                    string systemName = "system;name";             // Set value from the form
                    CanAddProjectItem = true;
                    replacementsDictionary.Add("$connectionString$", connectionString);
                    replacementsDictionary.Add("$systemName$", systemName);
                }

                public bool ShouldAddProjectItem(string filePath)
                {
                    return CanAddProjectItem;
                }

                public void BeforeOpeningFile(ProjectItem projectItem) { }
                public void ProjectFinishedGenerating(Project project) { }
                public void ProjectItemFinishedGenerating(ProjectItem projectItem) { }
                public void RunFinished() { }
            }
        }

    Before your code runs, replacementsDictionary contains the list of default template parameters; the code above adds two more. Now build this project and copy the output assembly to the [<VS Install Path>\Common7\IDE] folder. Your designer is ready.

    The template that you created earlier is already added to your VSIX project. In Windows Explorer open your template zip file (extract it somewhere) and open the .vstemplate file. First of all, remove

        <ProjectItem SubType="Code" TargetFileName="$fileinputname$.cs" ReplaceParameters="true">Messages.cs</ProjectItem>

    because the .cs file is not intended to be part of the template; it will be generated. Change the value of ReplaceParameters for your .tt file to true, to enable parameter replacement in that file. Then, right after the </TemplateContent> end element, add:

        <WizardExtension>
          <Assembly>Framework.Messages.Design</Assembly>
          <FullClassName>Framework.Messages.Design.MessageDesigner</FullClassName>
        </WizardExtension>
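    After these edits, the full .vstemplate might look roughly like the following sketch (the name, description and exact elements will vary with the choices you made in the Export Template wizard):

        <VSTemplate Version="3.0.0" Type="Item"
            xmlns="http://schemas.microsoft.com/developer/vstemplate/2005">
          <TemplateData>
            <Name>Messages</Name>
            <Description>Generates a strongly typed messages class from the database</Description>
            <ProjectType>CSharp</ProjectType>
            <DefaultName>Messages.tt</DefaultName>
          </TemplateData>
          <TemplateContent>
            <References>
              <Reference><Assembly>System</Assembly></Reference>
              <Reference><Assembly>System.Data</Assembly></Reference>
            </References>
            <!-- ReplaceParameters is true so $connectionString$ and $systemName$ get substituted -->
            <ProjectItem SubType="Code" TargetFileName="$fileinputname$.tt"
                ReplaceParameters="true">Messages.tt</ProjectItem>
          </TemplateContent>
          <WizardExtension>
            <Assembly>Framework.Messages.Design</Assembly>
            <FullClassName>Framework.Messages.Design.MessageDesigner</FullClassName>
          </WizardExtension>
        </VSTemplate>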
    One other thing you should do is edit your .tt file (and remove the generated .cs file from the zip). Lines 8 and 9 of your .tt file should become:

        var connectionString = "$connectionString$";
        var systemName = "$systemName$";

    These parameters will be replaced when the item is added to a project. Save the contents back to a zip file with the same file name and replace the original file.

    Now build your VSIX project again, uninstall your extension, and close VS. Run the new .vsix file, open VS, and add an item of type Messages to your project. Bingo: your wizard form shows up. Fill in the fields and click OK, and the values are replaced in the .tt file that gets added.

    That's it. I tried hard to keep this post brief; I hope it was not too long.

    Cheers, Maziar

    Read the article

  • How to safely reboot via First Boot script

    - by unixman
    With the cost and performance benefits of the SPARC T4 and SPARC T5 systems undeniably validated, the banking sector is actively moving to Solaris 11. I was recently asked to help a banking customer of ours look at migrating some of their Solaris 10 logic over to Solaris 11. While we've introduced a number of holistic improvements in Solaris 11, in terms of how we ease long-term software lifecycle management, it is important to appreciate that customers may not be able to move all of their Solaris 10 scripts and procedures at once; there are years of scripts that reflect fine-tuned requirements of proprietary banking software that gets layered on top of the operating system.

    One of these requirements is to go through a cycle of reboots after the system is installed, in order to ensure appropriate software dependencies and various configuration files are in place. While Solaris 10 introduced a facility that helps here, namely SMF, many of our customers simply haven't yet taken the time to take advantage of it, proceeding instead with logic that, while functional, does not take full advantage of the niceties bundled into Solaris 11 at no extra cost.

    When looking at Solaris 11, we recognize that one of the vehicles that bridges the gap between delivering the operating system image payload and installing the customized banking software is the notion of a First Boot script. I showed a working example of this at one of the Oracle OpenWorld sessions a few years ago; we've since improved our documentation and introduced sections where this is described in better detail.

    If you're looking at this for the first time and have not worked with IPS and SMF previously, the tasks might seem daunting. There is a set of technologies involved that are jointly engineered to make the process reliable, predictable and extensible. As you go down the path of writing your first boot script, you will need to wrap it in an SMF service, which in turn is packaged into an IPS package. The IPS package then needs to be placed onto your IPS repository, in order to subsequently be made available to all of your AI (Automated Install) clients (i.e. the systems onto which you're installing Solaris and your software).

    With this blog post I wanted to create a single place that outlines the entire process (simplistically), and provide a hint of how the good old "at" command can make the requirement of forcing an initial reboot easy. The syntax and command references here are based on a version of Solaris 11 that has been updated since its initial release in 2011 (i.e. I am writing this on Solaris 11.1).

    Assuming you've built an AI server (see this How To article for an example), you might be asking yourself: "OK, I've got some logic that I need executed AFTER Solaris is deployed, and I need my own little script to make that happen. How do I go about hooking that script into the Solaris 11 AI framework?" You might start with Chapter 13 of the "Installing Oracle Solaris 11.1 Systems" guide, which talks about "Running a Custom Script During First Boot". As you do, you'll be confronted with a command that might be unfamiliar to you if you're new to Solaris 11, like our dear new friend: svcbundle. svcbundle is an aid to creating manifests and profiles. It is awesome, but don't let its awesomeness overwhelm you.
    (See this How To article by my colleague Glynn Foster for a nice working example.) In order to get your script's logic integrated into the Solaris 11 deployment process, you need to wrap your (shell) script in two manifests: an SMF service manifest and an IPS package manifest. ...and if you're new to XML, well then -- buckle up.

    We have some examples of small first boot scripts shown here, as templates to build upon. The structure of the script, particularly in leveraging SMF interfaces, is key; I won't go into that here as it is covered nicely in the doc link above.

    Let's say your script ends up looking like this (by the way: if things appear to be cut off in your browser, just select them, copy and paste into your editor; the source gets captured even though the browser may not render it "correctly" - ah, computers):

        #!/bin/sh

        # Load SMF shell support definitions
        . /lib/svc/share/smf_include.sh

        # If nothing to do, exit with temporary disable
        completed=`svcprop -p config/completed site/first-boot-script-svc:default`
        [ "${completed}" = "true" ] && \
            smf_method_exit $SMF_EXIT_TEMP_DISABLE completed "Configuration completed"

        # Obtain the active BE name from beadm: the active BE on reboot has an R in
        # the third column of 'beadm list' output. Its name is in column one.
        bename=`beadm list -Hd | nawk -F ';' '$3 ~ /R/ {print $1}'`
        beadm create ${bename}.orig
        echo "Original boot environment saved as ${bename}.orig"

        # ---- Place your one-time configuration tasks here ----
        # For example, if you have to pull some files from your own pre-existing system:
        /usr/bin/wget -P /var/tmp/ $PULL_DOWN_ADDITIONAL_SCRIPTS_FROM_A_CORPORATE_SYSTEM
        /usr/bin/chmod 755 /var/tmp/$SCRIPTS_THAT_GOT_PULLED_DOWN_IN_STEP_ABOVE
        # Clearly the above 2 lines represent logic that you'd have to customize to fit
        # your needs. Perhaps additional things you may want to do here might be of use,
        # like (gasp!) configuring the ssh server for root login and X11 forwarding
        # (for testing), and the like...
        #
        # Oh, and by the way: after we're done executing all of our proprietary scripts
        # we need to reboot the system, in accordance with our operational software
        # requirements, to ensure all layered bits get initialized properly and pull in
        # their own modules and components in the right sequence.
        #
        # We set a "time bomb" reboot that takes place upon completion of this script.
        # We already know that *this* script depends on the multi-user-server SMF
        # milestone, so it should be safe for us to schedule a reboot for 5 minutes
        # from now. The "at" job gets scheduled in the queue while our little script
        # continues through the rest of its logic.
        /usr/bin/at now + 5 minutes <<REBOOT
        /usr/bin/sync
        /usr/sbin/reboot
        REBOOT

        # ---- End of your customizations ----

        # Record that this script's work is done
        svccfg -s site/first-boot-script-svc:default setprop config/completed = true
        svcadm refresh site/first-boot-script-svc:default
        smf_method_exit $SMF_EXIT_TEMP_DISABLE method_completed "Configuration completed"

    ...and you're happy with it and are ready to move on. Where do you go and what do you do? The next step is creating the IPS package for your script. Since running your script's logic constitutes a service, you need to create a service manifest. This is described in the middle of Chapter 13, in "Creating an IPS package for the script and service".
    Assuming the name of your shell script is first-boot-script.sh, you could end up doing the following:

        $ cd some_working_directory_for_this_project
        $ mkdir -p proto/lib/svc/manifest/site
        $ mkdir -p proto/opt/site
        $ cp first-boot-script.sh proto/opt/site

    Then you would create the service manifest file like so:

        $ svcbundle -s service-name=site/first-boot-script-svc \
            -s start-method=/opt/site/first-boot-script.sh \
            -s instance-property=config:completed:boolean:false -o \
            first-boot-script-svc-manifest.xml

    ...as described here, and place it into the directory hierarchy above. But before you place it into the directory, make sure to inspect the manifest and adjust the service dependencies appropriately. That is to say, you want to specify which milestone should be reached before your service runs. There is a <dependency> section that looks like this before you modify it:

        <dependency restart_on="none" type="service" name="multi_user_dependency" grouping="require_all">
            <service_fmri value="svc:/milestone/multi-user"/>
        </dependency>

    So if you'd like your service to run AFTER the multi-user-server milestone has been reached (i.e. later, as multi-user-server has more dependencies than multi-user, and our intent to reboot the system could have significant ramifications if done prematurely), you would modify that section to read:

        <dependency restart_on="none" type="service" name="multi_user_server_dependency" grouping="require_all">
            <service_fmri value="svc:/milestone/multi-user-server"/>
        </dependency>

    Save the file and validate it:

        $ svccfg validate first-boot-script-svc-manifest.xml

    Assuming no errors are returned, copy the file into the directory hierarchy:

        $ cp first-boot-script-svc-manifest.xml proto/lib/svc/manifest/site

    Now that we've created the service manifest (.xml), create the package manifest (.p5m) file, named first-boot-script.p5m, and populate it as follows:

        set name=pkg.fmri value=first-boot-script@1.0,5.11-0
        set name=pkg.summary value="AI first-boot script"
        set name=pkg.description value="Script that runs at first boot after AI installation"
        set name=info.classification value=\
            "org.opensolaris.category.2008:System/Administration and Configuration"
        file lib/svc/manifest/site/first-boot-script-svc-manifest.xml \
            path=lib/svc/manifest/site/first-boot-script-svc-manifest.xml owner=root \
            group=sys mode=0444
        dir path=opt/site owner=root group=sys mode=0755
        file opt/site/first-boot-script.sh path=opt/site/first-boot-script.sh \
            owner=root group=sys mode=0555

    Now we are going to publish this package into an IPS repository. If you don't have one yet, don't worry; you have two choices: you can either publish this package into your mirror of the Oracle Solaris IPS repo, or create your own customized repo. The best practice is to create your own customized repo, leaving your mirror of the Oracle Solaris IPS repo untouched. From this point you again have two choices: you can create a repo that is accessible to your clients via HTTP or via NFS. Since HTTP is how the default Solaris repo is accessed, we'll go with HTTP for your own IPS repo. This nice and comprehensive How To by Albert White describes how to create multiple internal IPS repos for Solaris 11; we'll zero in on the basic elements for our needs here.

    We'll create the IPS repo directory structure hanging off a separate ZFS file system, and tie it into an instance of pkg.depotd.
    We do this because we want our IPS repo to be accessible to our AI clients through HTTP, and the pkg.depotd SMF service bundled in Solaris 11 can help us do this. We proceed as follows:

        # zfs create rpool/export/MyIPSrepo
        # pkgrepo create /export/MyIPSrepo
        # svccfg -s pkg/server add MyIPSrepo
        # svccfg -s pkg/server:MyIPSrepo addpg pkg application
        # svccfg -s pkg/server:MyIPSrepo setprop pkg/port=10081
        # svccfg -s pkg/server:MyIPSrepo setprop pkg/inst_root=/export/MyIPSrepo
        # svccfg -s pkg/server:MyIPSrepo addpg general framework
        # svccfg -s pkg/server:MyIPSrepo addpropvalue general/complete astring: MyIPSrepo
        # svccfg -s pkg/server:MyIPSrepo addpropvalue general/enabled boolean: true
        # svccfg -s pkg/server:MyIPSrepo setprop pkg/readonly=true
        # svccfg -s pkg/server:MyIPSrepo setprop pkg/proxy_base = astring: http://your_internal_websrvr/MyIPSrepo
        # svccfg -s pkg/server:MyIPSrepo setprop pkg/threads = 200
        # svcadm refresh application/pkg/server:MyIPSrepo
        # svcadm enable application/pkg/server:MyIPSrepo

    Now that the IPS repo is created, we need to publish our package into it:

        # pkgsend publish -d ./proto -s /export/MyIPSrepo first-boot-script.p5m

    If you find yourself making changes to your script, remember to up-rev the version in the .p5m file (which is your IPS package manifest) and re-publish the IPS package.

    Next, go to your AI install server (which might be the same machine) and modify the AI manifest to include a reference to your newly created package. We do that by listing an additional publisher, which would look like this (replacing the IP address and port with your own, from the "svccfg" commands above):

        <publisher name="firstboot">
            <origin name="http://192.168.1.222:10081"/>
        </publisher>

    Further down, in the <software_data action="install"> section, add:

        <name>pkg:/first-boot-script</name>

    Make sure to update your Automated Install service with the new AI manifest via the installadm update-manifest command. Don't forget to boot your client from the network to watch the entire process unfold and your script get tested. Once the system makes the initial reboot, the first boot script will be executed, and whatever logic you've specified in it should be executed too, followed by a nice reboot. When the system comes up, your service should stay in a disabled state, as specified by the tail end of your SMF script; this is normal and should be left as is, since it provides an auditing trail for you.

    Because the reboot is quite a significant action for the system, you may want to add logic to the script that places, and then checks for the presence of, certain lock files, in order to avoid rebooting unnecessarily. Alternatively, you may want to remove the SMF service entirely if you're unsure about someone accidentally enabling it, even though its only role in life is to run once upon the system's first boot.

    That is how I spent a good chunk of my pre-Halloween time this week; hope yours was just as SPARCkly^H^H^H^H fun!
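    To close the loop, a few quick checks along these lines can confirm each stage (a sketch; adjust the repo path, host address and FMRI to your environment):

        # pkgrepo list -s /export/MyIPSrepo first-boot-script          (package is in the repo)
        # pkg list -g http://192.168.1.222:10081 first-boot-script     (repo reachable from a client)
        # svcs -l site/first-boot-script-svc:default                   (after first boot: disabled)
        # svcprop -p config/completed site/first-boot-script-svc:default   (should print true)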

    Read the article

  • CodePlex Daily Summary for Thursday, November 22, 2012

    Popular Releases

    StackBuilder: StackBuilder 1.0.11.0: +Added Box/Case analysis...

    VidCoder: 1.4.7 Beta: Added view modes to the Preview window. Now you can see the image in 1:1 or in "Corners" mode to show a close-up of cropping results. Added the ability to set a custom completion sound. Gave the encoding settings command bar a more distinctive background color and extended it to the whole width of the window. Added the preview button to the command bar. Rearranged UI in Video tab and added back the section headers. Added the "Most" choice for the advanced x264 analysis option. Updat...

    ServiceMon - Extensible Real-time, Service Monitoring Utility for Windows: ServiceMon Release 1.2.0.58: Auto-uploaded from build server

    ImapX 2: ImapX 2.0.0.6: An updated release of the ImapX 2 library, containing many bugfixes for both the library and the sample application.

    WiX Toolset: WiX v3.7 RC: WiX v3.7 RC (3.7.1119.0) provides feature-complete Bundle update and reference tracking plus several bug fixes. For more information see Rob's blog post about the release: http://robmensching.com/blog/posts/2012/11/20/WiX-v3.7-Release-Candidate-available

    Picturethrill: Version 2.11.20.0: Fixed up Bing image provider on Windows 8

    Excel AddIn to reset the last worksheet cell: XSFormatCleaner.xla: Modified the commandbar code to use CommandBar IDs instead of English names.

    Json.NET: Json.NET 4.5 Release 11: New feature - Added ITraceWriter, MemoryTraceWriter, DiagnosticsTraceWriter. New feature - Added StringEscapeHandling with options to escape HTML and non-ASCII characters. New feature - Added non-generic JToken.ToObject methods. New feature - Deserialize ISet<T> properties as HashSet<T>. New feature - Added implicit conversions for Uri, TimeSpan, Guid. New feature - Missing byte, char, Guid, TimeSpan and Uri explicit conversion operators added to JToken. New feature - Special case...

    EntitiesToDTOs - Entity Framework DTO Generator: EntitiesToDTOs.v3.0: DTOs and Assemblers can be generated inside project folders! Choose the types you want to generate! Support for Visual Studio 2012! Support for the new Entity Framework EDMX (format used by VS2012)! Support for Enum Types! Optional automatic check for updates! Added the following methods to Assemblers: IEnumerable<DTO>.ToEntities() : ICollection<Entity>, IEnumerable<Entity>.ToDTOs() : ICollection<DTO>. Indicate class identifier for DTOs and Assemblers! Cleaner Assemblers code....

    mojoPortal: 2.3.9.4: See release notes on mojoportal.com: http://www.mojoportal.com/mojoportal-2394-released Note that we have separate deployment packages for .NET 3.5 and .NET 4.0, but we recommend you use .NET 4; we will probably drop support for .NET 3.5 once .NET 4.5 is available. The deployment package downloads on this page are pre-compiled and ready for production deployment; they contain no C# source code and are not intended for use in Visual Studio. To download the source code see getting the lates...

    DotNetNuke® Store: 03.01.07: What's New in this release? IMPORTANT: this version requires DotNetNuke 04.06.02 or higher! DO NOT REPORT BUGS HERE IN THE ISSUE TRACKER, INSTEAD USE THE DotNetNuke Store Forum! Bugs corrected: - Replaced some hard-coded references to the default address provider classes by the corresponding interfaces, to allow the creation of another address provider with a different name. New Features: - Added the 'pickup' delivery option at checkout. - Added the 'no delivery' option in the Store Admin ...

    Bundle Transformer - a modular extension for ASP.NET Web Optimization Framework: Bundle Transformer 1.6.10: Version: 1.6.10, Published: 11/18/2012. Now almost all of the Bundle Transformer's assemblies are signed (except BundleTransformer.Yui.dll); In BundleTransformer.SassAndScss the SassAndCoffee.Ruby library was replaced by my own implementation of the Sass and SCSS compiler (based on code of the SassAndCoffee.Ruby library version 2.0.2.0); In BundleTransformer.CoffeeScript added support of CoffeeScript version 1.4.0-3; In BundleTransformer.TypeScript added support of TypeScript version 0....

    ExtJS based ASP.NET 2.0 Controls: FineUI v3.2.0

    BugNET Issue Tracker: BugNET 1.2: Please read our release notes for BugNET 1.2: http://blog.bugnetproject.com/bugnet-1-2-has-been-released Please do not post questions as reviews. Questions should be posted in the Discussions tab, where they will usually get promptly responded to. If you post a question as a review, you will pollute the rating, and you won't get an answer.

    Paint.NET PSD Plugin: 2.2.0: Changes: Layer group visibility is now applied to all layers within the group. This greatly improves the visual fidelity of complex PSD files that have hidden layer groups. Layer group names are prefixed so that users can get an indication of the layer group hierarchy. (Paint.NET has a flat list of layers, so the hierarchy is flattened out on load.) The progress bar now reports status when saving PSD files, instead of showing an indeterminate rolling bar. Performance improvement of 1...

    CRM 2011 Visual Ribbon Editor: Visual Ribbon Editor (1.3.1116.7): [IMPROVED] Detailed error message descriptions for FaultException [FIX] Fixed bug in rule CrmOfflineAccessStateRule which had incorrect State attribute name [FIX] Fixed bug in rule EntityPropertyRule which was missing PropertyValue attribute [FIX] Current connection information was not displayed in status bar while refreshing list of entities

    Super Metroid Randomizer: Super Metroid Randomizer v5: v5 - Added command line functionality for automation purposes. - Implemented Krankdud's change to randomize the Etecoon's item. NOTE: this version will not accept seeds from a previous version; the seed format has changed by necessity. v4 - Started putting version numbers at the top of the form. - Added a warning when suitless Maridia is required in a parsed seed. v3 - Changed seed to only generate filename-legal characters. Using old seeds will still work exactly the same. - Files can now be saved...

    Caliburn Micro: WPF, Silverlight, WP7 and WinRT/Metro made easy.: Caliburn.Micro v1.4: Changes: This version includes many bug fixes across all platforms, improvements to NuGet support and...the biggest news of all...full support for both WinRT and WP8. Download Contents: Debug and Release Assemblies, Samples, Readme.txt, License.txt. Packages Available on NuGet: Caliburn.Micro – The full framework compiled into an assembly. Caliburn.Micro.Start - Includes Caliburn.Micro plus a starting bootstrapper, view model and view. Caliburn.Micro.Container – The Caliburn.Micro invers...

    DirectX Tool Kit: November 15, 2012: Added support for WIC2 when available on Windows 8 and Windows 7 with KB 2670838. Cleaned up warning level 4 warnings.

    DotNetNuke® Community Edition CMS: 06.02.05: Major Highlights: Updated the system so that it supports nested folders in the App_Code folder. Updated the Global Error Handling so that when errors within the global.asax handler happen, they are caught and shown in a page displaying the original HTTP error code. Fixed issue that stopped users from specifying Link URLs that open in a new window. Security Fixes: Fixed issue in the Member Directory module that could show members to non-authenticated users. Fixed issue in the Lists modul...

    New Projects

    1122case1325: It is a codeplex project
    1122case1327: Never be so greedy
    Accommodation Portal: This is a skeleton web site for holiday home owners who wish to rent their holiday accommodations to visitors from around the world.
    Analog Clock: This project contains an analog clock made in WinForms.
    Android Socket Plus
    Bancosol: PFC
    Big Data Twitter Demo: This demo analyzes tweets in real-time, even including a dashboard. The tweets are also archived in Azure DB/Blob and Hadoop, where Excel can be used for BI!
    Cloud For Science: This project serves as issue tracker for C4S components, and is used by participants of the C4S project.
    cwt: cwt
    Diary Application for Elementary Student: A simple diary application made for elementary students.
    Dice Dreams: A dice game in which the player must reach 1,000,000 points to win, starting with 100,000 points.
    Dynamic Entity Framework Filtering: Generates LINQ to Entity Framework queries by examining the EF data model, thus allowing code reduction for queries to commonly requested entities.
    EazyErp: Enterprise Resource Planning System of Vf Asia
    GoPlay: Tooke - add a project description here...
    HMyBlog: myblog
    Humanitarian Toolbox: This project is the publication site for bits built by the Humanitarian Toolbox (http://humanitariantoolbox.net/).
    Impulse Media Player: Impulse Media Player is the ultimate media player for Windows
    Inspiration.Web: A simple (but entertaining) ASP.NET MVC (C#) project to suggest random code names for projects. Intended audience: people who need a name (any name) to get started with their projects. Application written during Webcamp Singapore (4th Jun 2010 - 5th Jun 2010).
    iPictureUploader: Simple tool to upload images onto the web and share via blogs/forums/etc.
    lugionline: test project
    Mosaic Snake 3D: A clone of the popular Snake game for Windows 8. It contains a simple 3D engine based on SharpDX and is completely written in C# and Xaml.
    Outage Display: A simple web-based outage display
    PrestaShop free Electronic Brown Shop Template - ModuleBazaar: Check out the latest and best featured PrestaShop templates, modules, Magento extensions, Opencart extensions and clone scripts from ModuleBazaar.
    Rackspace Cloud Files Manager: A simple Rackspace Cloud Files manager for static websites
    Roll The Dice: Roll The Dice is a simple game developed by Erika Enggar Savitri and Queen Anugerah Aguslia, who are currently studying Information Systems at Ma Chung University.
    Russian IDM: Russian IDM is a free IdM solution
    shootout: Comparing the speed of different languages and the constructs available within those languages
    smart messaging connector: Sending fax from a PrintDocument or a file. Sending broadcast fax. Managing (Pause/Resume/Restart) the current fax job. Managing configuration of the fax server.
    Teen Diary: Teen Diary Software - FREEWARE SOFTWARE. - SIMPLE, LIGHTWEIGHT, PORTABLE, COMPATIBLE. - made with the .NET FRAMEWORK language with XML data.
    Tekapo: Tekapo is a wizard-style application that will help you manage your digital photos. Most digital cameras store images as JPEG files. Information about the camera and how the camera took the photo is placed into the JPEG file along with the image data. One of the pieces of information stored is the date and time the photo was taken. Tekapo uses the picture-taken date to organise the photos.
    The Byte Kitchen's Open Sources: This project is related to The Byte Kitchen Blog (at thebytekitchen.com). It typically deals with Windows 8 apps, DirectX, and the Kinect for Windows.
    TheDiary: daily-self journal
    WCFsample: WCFsample
    XNA Game Editor: This project will have familiar features like the Unity Engine Editor.

    Read the article

  • 5 Best Practices - Laying the Foundation for WebCenter Projects

    - by Kellsey Ruppel
    Today’s guest post comes from Oracle WebCenter expert John Brunswick. John specializes in enterprise portal and content management solutions, actively contributes to the enterprise software business community, and has authored a series of articles about optimal business involvement in portal, business process management and SOA development, examining ways of helping organizations move away from monolithic application development. We’re happy to have John join us today!

    Maximizing success with Oracle WebCenter portal requires a strategic understanding of Oracle WebCenter capabilities. The following best practices enable the creation of portal solutions with minimal resource overhead, while offering the greatest flexibility for progressive elaboration. They are inherently project agnostic, enabling a strong foundation for future growth and an expedient return on your investment in the platform. If you are able to embrace even a few of these practices, you will materially improve your deployment capability with WebCenter.

    1. Segment Duties Around the 3 Cs - Content, Collaboration and Contextual Data

    "Agility" is one of the most common business benefits touted by modern web platforms. It sounds good - who doesn't want to be agile, right? Yet exactly how IT organizations go about supplying agility to their business counterparts often lacks definition, hamstrung by ambiguity. Ultimately, businesses want to benefit from reduced development time to deliver a solution to a particular constituent, augmented by as much self-service as possible to develop and manage the solution directly - all done in the absence of direct IT involvement.

    With Oracle WebCenter's depth in the areas of content management, its palette of native collaborative services, enterprise mashup capability and delegated administration, it is very possible to execute on this business vision at a technical level. To realize the benefits of the platform's depth we can think of Oracle WebCenter's segmentation of duties along the lines of the 3 Cs - Content, Collaboration and Contextual Data - all three of which can have their foundations developed by IT, then provisioned to the business on a per-role basis.

    Content – Oracle WebCenter benefits from an extremely mature content repository. Workflow, audit, notification, Office integration and document conversion capabilities (HTML & PDF) make this a haven for business users to take control of content within external and internal portals, custom applications and web sites. When deploying WebCenter portal, take time to think of areas in which IT can provide the "harness" for content to reside, then allow the business to manage any content items within the site, using the content foundation to ensure compliance with business rules and process. This frees IT to work on more mission-critical challenges and allows the business to respond in short order to emerging market needs.

    Collaboration – Native collaborative services and WebCenter Spaces are a perfect match for business users who are looking to enable document sharing, discussions and social networking. The ability to deploy the services is granular and based on roles scoped to given areas of the system - much like the first C, "content". This enables business analysts to design the roles required, and IT to provision them with peace of mind that users leveraging the collaborative services are only able to do so in explicitly designated areas of a site.
    Bottom line: the business will not need to wait for IT, but cannot go outside of the scope that has been defined based on their roles.

    Contextual Data – Collaborative capabilities are most powerful when included within the context of business data. The ability to supply business users with decision-shaping data that they can include in various parts of a portal or portals, just as they would with content items, is one of the most powerful aspects of Oracle WebCenter. Imagine a discussion about new store selection for a retail chain that re-purposes existing information about various potential locations from business intelligence services and/or custom backend systems - presenting it directly in the context of the discussion. If some data sources already exist in your enterprise, take a look at how they can be made into discrete offerings within the portal, then scoped to given business user roles for inclusion within collaborative activities.

    2. Think Generically, Execute Specifically

    Constructs. Anyone who has spent much time around me knows that I am obsessed with this word. Why? Because constructs offer immense power - more than APIs, web services or other technical capabilities. Constructs offer organizations the ability to leverage a platform's native characteristics to deliver substantial business functionality - without writing code. This concept becomes more powerful as an organization's understanding of the platform's concepts deepens over time. Let's take a look at an example of where an Oracle WebCenter construct can substantially reduce the time needed to get a subscription-based site out the door and into the hands of the end consumer.

    Imagine a site that allows members to subscribe to specific disciplines, in order to access information and application data around each discipline. A Space is a collection of secured pages within Oracle WebCenter. Spaces are not only secured; content stored within a Space is also scoped automatically to that Space. Taking this a step further, Oracle WebCenter's Activity Stream surfaces events, discussions and other activities that are scoped to the given user on the basis of their Space affiliations. So for a portal that lets users "subscribe" to information around various disciplines, Spaces can be used out of the box to achieve this capability, without any APIs or low-level technical work.

    3. Make Governance Work for You

    Imagine driving down the street without the painted lines on the road. The rules of the road are so ingrained in our minds that we often do not think about the process, but seemingly mundane lane markers are critical enablers. Lane markers allow us to travel at speeds that would be impossible if not for the agreed-upon direction of flow. Additionally, and more importantly, they allow people to act autonomously - going where they please at any given time. The return on the investment in mobility is high enough for people to buy into globally agreed-upon governance processes.

    In Oracle WebCenter we can use enablers similar to lane markers. Our goal should be to enable the flow of information and provide end users with the ability to arrive at business solutions as needed, not on the basis of cumbersome processes that cannot meet the business needs in a timely fashion. How do we do this?
    Just as with the segmentation of duties, Oracle WebCenter technologies offer the opportunity to compartmentalize various business initiatives from each other within the system, thanks to the constructs and security available in the platform. For instance, when a WebCenter Space is created, any content added within that Space will by default be secured to that particular Space, and will inherit the metadata associated with a folder created for the Space. Oracle WebCenter Content uses metadata to support a broad range of rich ECM functionality, and can automatically impart retention, workflow and other policies on the basis of what has been defaulted for that Space. Depending on your business needs, this paradigm also extends to sub-sections of a Space, offering some interesting possibilities for automated content management.

    An example might be press releases within a particular area of an extranet that require a five-year retention period and need to be reviewed by marketing and legal before release. The underlying content system will transparently take care of this process on the basis of the above rules, providing peace of mind over unstructured data - which could otherwise become overwhelming.

    4. Make Your First Project Your Second

    Imagine if Michael Phelps were competing in a swimming championship, but was told right before his race that he had to use a brand new stroke. There is no doubt that Michael is an outstanding swimmer, but chances are he would like some time to get acquainted with the new stroke. New technologies should not be treated any differently. Before jumping into the deep end, it helps to take time to get to know the new approach - even though you may have been swimming thousands of times before.

    To quickly get a handle on Oracle WebCenter capabilities it can be helpful to deploy a sandbox for the team, used to share project documents, discussions and announcements in an effort to help the actual deployment get under way, while increasing everyone’s knowledge of the platform and functionality that may be helpful down the road. Oracle Technology Network has made a pre-configured virtual machine available for download that can be a great starting point for this exercise.

    5. Get to Know the Community

    If you are reading this blog post, you have most certainly faced a software decision or challenge that was solved on the basis of a small piece of missing critical information - which took substantial research to discover. Chances are also good that somewhere, someone had already come across this information and would have been excited to share it. There is no denying the power of passionate, connected users sharing key tips around technology. The Oracle WebCenter brand has a rich heritage that includes industry-leading technology and practitioners, and with the new Oracle WebCenter brand, opportunities to connect with these experts have become easier:

    Oracle WebCenter Blog
    Oracle Social Enterprise LinkedIn WebCenter Group
    Oracle WebCenter Twitter
    Oracle WebCenter Facebook
    Oracle User Groups

    Additionally, there are various Oracle WebCenter-related blogs by an excellent group of services partners.

    Read the article

  • Benchmarking MySQL Replication with Multi-Threaded Slaves

    - by Mat Keep
    The objective of this benchmark is to measure the performance improvement achieved when enabling the Multi-Threaded Slave enhancement delivered as part of MySQL 5.6. As the results demonstrate, Multi-Threaded Slaves deliver 5x higher replication performance based on a configuration with 10 databases/schemas. For real-world deployments, higher replication performance directly translates to:

    - Improved consistency of reads from slaves (i.e. reduced risk of reading "stale" data)
    - Reduced risk of data loss should the master fail before replicating all events in its binary log (binlog)

    The multi-threaded slave splits processing between worker threads based on schema, allowing updates to be applied in parallel rather than sequentially. This benefits workloads that isolate application data using databases - e.g. multi-tenant systems deployed in cloud environments.

    Multi-Threaded Slaves are just one of many enhancements to replication previewed as part of the MySQL 5.6 Development Release, which include:

    - Global Transaction Identifiers, coupled with MySQL utilities for automatic failover/switchover and slave promotion
    - Crash Safe Slaves and Binlog
    - Optimized Row Based Replication
    - Replication Event Checksums
    - Time Delayed Replication

    These and many more are discussed in the "MySQL 5.6 Replication: Enabling the Next Generation of Web & Cloud Services" Developer Zone article.

    Back to the benchmark - details are as follows.

    Environment

    The test environment consisted of two Linux servers: one running the replication master and one running the replication slave. Only the slave was involved in the actual measurements, and it was based on the following configuration:

    - Hardware: Oracle Sun Fire X4170 M2 Server
    - CPU: 2 sockets, 6 cores with hyper-threading, 2930 MHz
    - OS: 64-bit Oracle Enterprise Linux 6.1
    - Memory: 48 GB

    Test Procedure

    Initial Setup:

    Two MySQL servers were started on two different hosts, configured as replication master and slave. 10 sysbench schemas were created, each with a single table:

        CREATE TABLE `sbtest` (
           `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
           `k` int(10) unsigned NOT NULL DEFAULT '0',
           `c` char(120) NOT NULL DEFAULT '',
           `pad` char(60) NOT NULL DEFAULT '',
           PRIMARY KEY (`id`),
           KEY `k` (`k`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1

    10,000 rows were inserted in each of the 10 tables, for a total of 100,000 rows. When the inserts had replicated to the slave, the slave threads were stopped. The slave data directory was copied to a backup location and the slave threads' position in the master binlog was noted.

    10 sysbench clients, each configured with 10 threads, were spawned at the same time to generate a random load against each of the 10 schemas on the master.
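    The post does not show the sysbench invocation itself; each client may have been launched roughly as follows (a sketch using classic sysbench 0.4-era options - the host, user and flag details are assumptions):

        $ sysbench --test=oltp --mysql-host=master-host --mysql-user=bench \
              --mysql-db=sbtest1 --oltp-table-size=10000 \
              --oltp-test-mode=nontrx --oltp-nontrx-mode=update_key \
              --num-threads=10 --max-requests=10000 run

    One such instance would be started per schema (sbtest1 through sbtest10), all at the same time.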
    Each sysbench client executed 10,000 "update key" statements:

        UPDATE sbtest SET k=k+1 WHERE id = <random row>

    In total, this generated 100,000 update statements to later replicate during the test itself.

    Test Methodology:

    The number of slave workers to test with was configured using:

        SET GLOBAL slave_parallel_workers=<workers>

    Then the slave IO thread was started, and the test waited for all the update queries to be copied over to the relay log on the slave. The benchmark clock was started and then the slave SQL thread was started. The test waited for the slave SQL thread to finish executing the 100k update queries, doing "select master_pos_wait()". When master_pos_wait() returned, the benchmark clock was stopped and the duration calculated. The calculated duration from the benchmark clock should be close to the time it took the SQL thread to execute the 100,000 update queries. The 100k queries divided by this duration gave the benchmark metric, reported as Queries Per Second (QPS).

    Test Reset:

    The test-reset cycle was implemented as follows:

    - the slave was stopped
    - the slave data directory was replaced with the previous backup
    - the slave was restarted with the slave threads' replication pointer repositioned to the point before the update queries in the binlog

    The test could then be repeated with an identical set of queries but a different number of slave worker threads, enabling a fair comparison. The test-reset cycle was repeated 3 times for each worker count from 0 to 24, and the QPS metric was calculated and averaged for each worker count.

    MySQL Configuration

    The relevant configuration settings used for MySQL are as follows:

        binlog-format=STATEMENT
        relay-log-info-repository=TABLE
        master-info-repository=TABLE

    As described in the test procedure, the slave_parallel_workers setting was modified as part of the test logic. The consequence of changing this setting is:

    0 worker threads:
       - current (i.e. single-threaded) sequential mode
       - 1 x IO thread and 1 x SQL thread
       - the SQL thread both reads and executes the events

    1 worker thread:
       - sequential mode
       - 1 x IO thread, 1 x coordinator SQL thread and 1 x worker thread
       - the coordinator reads each event and hands it to the worker, who executes it

    2+ worker threads:
       - parallel execution
       - 1 x IO thread, 1 x coordinator SQL thread and 2+ worker threads
       - the coordinator reads events and hands them to the workers, who execute them
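    Pulling the methodology together, the measured phase amounts to the following statements (a sketch; the binlog file name and position are placeholders for the coordinates noted during the initial setup):

        SET GLOBAL slave_parallel_workers = 10;   -- worker count for this pass
        START SLAVE IO_THREAD;                    -- copy all pending events to the relay log
        -- benchmark clock starts here
        START SLAVE SQL_THREAD;                   -- begin applying the 100k updates
        SELECT MASTER_POS_WAIT('master-bin.000002', 120);  -- returns when the slave catches up
        -- benchmark clock stops when MASTER_POS_WAIT() returns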
    Results

    Figure 1 below shows that Multi-Threaded Slaves deliver ~5x higher replication performance when configured with 10 worker threads, with the load evenly distributed across our 10 schemas. This result is compared with the current replication implementation, which is based on a single SQL thread only (i.e. zero worker threads).

    Figure 1: 5x Higher Performance with Multi-Threaded Slaves

    The following figure shows more detailed results, with QPS sampled and reported as the worker threads are incremented. The raw numbers behind this graph are reported in the Appendix section of this post.

    Figure 2: Detailed Results

    As the results above show, the configuration does not scale noticeably from 5 to 9 worker threads. When configured with 10 worker threads, however, scalability increases significantly. The conclusion, therefore, is that it is desirable to configure the same number of worker threads as schemas. Other conclusions from the results:

    - Running with 1 worker compared to zero workers just introduces overhead, without the benefit of parallel execution.
    - As expected, having more workers than schemas adds no visible benefit.

    Aside from what is shown in the results above, testing also demonstrated that the following settings had a very positive effect on slave performance:

        relay-log-info-repository=TABLE
        master-info-repository=TABLE

    For 5+ workers, it was up to 2.3 times as fast to run with TABLE compared to FILE.

    Conclusion

    As the results demonstrate, Multi-Threaded Slaves deliver significant performance increases to MySQL replication when handling multiple schemas. This, and the other replication enhancements introduced in MySQL 5.6, are fully available for you to download and evaluate now from the MySQL Developer site (select the Development Release tab). You can learn more about MySQL 5.6 from the documentation. Please don’t hesitate to comment on this or other replication blogs with feedback and questions.

    Appendix – Detailed Results

    Read the article

  • Behavior Driven Development (BDD) and DevExpress XAF

    - by Patrick Liekhus
    So in my previous posts I showed you how I used EDMX to quickly build my business objects with XPO and XAF. But how do you test whether your business objects are actually doing what you want, and verify that your business logic is correct? Well, I was reading my monthly MSDN Magazine late last year and came across an article about using SpecFlow and WatiN to build BDD tests. So why not use these same techniques to write SpecFlow-style scripts and have them generate EasyTest scripts for use with XAF? Let me outline and show a few things below. I plan on releasing this code in a short while; I just wanted to preview what I was thinking.

    Before we begin…

    First, if you have not read the article in MSDN, here is the link to the article where I found my inspiration. It covers the overview of BDD vs. TDD, how to write some of the SpecFlow syntax, and how to use the "Steps" logic to create your own tests.

    Second, if you have not heard of EasyTest from DevExpress, I strongly recommend you review it here. It basically takes the power of XAF and the beauty of your application and allows you to create text-based files to execute automated commands within your application.

    Why would we do this? Because, as you will see below, the Cucumber syntax is easier for business analysts to interpret and digest the business rules from. You can find most of the information you will need on Cucumber syntax within The Secret Ninja Cucumber Scrolls, located here. The basics of the syntax are Given X, When Y, Then Z. For example: Given I am at the login screen, When I enter my login credentials, Then I expect to see the home screen. Pretty easy syntax to follow.

    Finally, we will need to download and install SpecFlow. You can find it on their website here. Once you have it installed, let's write our first test.

    Let's get started…

    So, where to start? Create a new testing project within your solution. I typically name this with a naming convention similar to that used by XAF: my project name plus .FunctionalTests (i.e. AlbumManager.FunctionalTests). Remove the basic test that is created for you; we will not use the default test but rather create our own SpecFlow "feature" files. Add a new item to your project and select the SpecFlow Feature file under C#. Name your feature files, as you do your class files, after the tests they perform.

    Now you can crack open your new feature file and write the actual test. Make sure to have your Ninja Scrolls from above, as they provide valuable guidance on how to write your test syntax. In the test below you can see how I defined the documentation in the Feature section; this is strictly for readability and does not affect the test. The next section is the Scenario Outline, which is considered a test template. You can see the brackets <> around the fields that will be filled in for each test. So in the example below: Given I am starting a new test, And the application is open means I want a new EasyTest file and the Windows application generated by XAF to be open. Next, When I am at the Albums screen tells XAF to navigate to the Albums list view. And I click the New:Album button tells XAF to click the New button on the list grid. And I enter the following information tells XAF which fields to complete with the mapped values. And I click the Save and Close button causes the record to be saved and the detail form to be closed. Then I verify results tests the input data against what is visible in the grid, to ensure that your record was created.
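    The feature file itself appears only as a screenshot in the original post; reconstructed from the walkthrough above, it would read roughly like this (the Album fields and sample values are assumptions):

        Feature: Album Management
            In order to manage my album catalog
            As an Album Manager user
            I want to create new albums

        Scenario Outline: Create a new Album
            Given I am starting a new test
            And the application is open
            When I am at the Albums screen
            And I click the New:Album button
            And I enter the following information
                | Field  | Value    |
                | Name   | <Name>   |
                | Artist | <Artist> |
            And I click the Save and Close button
            Then I verify results

        Scenarios:
            | Name       | Artist      |
            | Abbey Road | The Beatles |
            | The Wall   | Pink Floyd  |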
Then I verify results tests the input data against what is visible in the grid to ensure that your record was created. The Scenarios section gives each test a unique name and then fills in the values for each test.  This way you can use the same test to make multiple passes with different data. Almost there.  Now we must save the feature file and the BDD tests will be written using standard unit test syntax.  This is all handled for you by SpecFlow so just save the file.  What you will see in your Test List Editor is a unit test for each of the above scenarios you just built. You can now use standard unit testing frameworks to execute the test as you desire.  As you would expect then, these BDD SpecFlow tests can be automated into your build process to ensure that your business requirements are satisfied each and every time. How does it work? What we have done is to intercept the testing logic at runtime to interpret the SpecFlow syntax into EasyTest syntax.  This is the basic StepDefinitions that we are working on now.  We expect to put these on CodePlex within the next few days.  You can always override and make your own rules as you see fit for your project.  Follow the MSDN magazine above to start your own.  You can see part of our implementation below. As you can gather from the MSDN article and the code sample below, we have created our own common rules to build the above syntax. The code implementation for these rules basically saves your information from the feature file into an EasyTest file format.  It then executes the EasyTest file and parses the XML results of the test.  If the test succeeds the test is passed.  If the test fails, the EasyTest failure message is logged and the screen shot (as captured by EasyTest) is saved for your review. Again we are working on getting this code ready for mass consumption, but at this time it is not ready.  We will post another message when it is ready with all details about usage and setup. Thanks
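For orientation, here is a short sketch of what such a feature file can look like.  The screen, button and field names below are hypothetical placeholders (your step wording must match the step definitions you bind), and standard Gherkin labels the data table "Examples" where this post calls it the Scenarios section:

Feature: Album Management
    In order to keep the catalog current
    As an album manager
    I want to create albums from the Albums screen

Scenario Outline: Create a new album
    Given I am starting a new test
    And the application is open
    When I am at the Albums screen
    And I click the New:Album button
    And I enter the following information
        | Field  | Value    |
        | Name   | <Name>   |
        | Artist | <Artist> |
    And I click the Save and Close button
    Then I verify results

Examples:
    | Name         | Artist      |
    | Abbey Road   | The Beatles |
    | Kind of Blue | Miles Davis |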

    Read the article

  • CodePlex Daily Summary for Saturday, July 07, 2012

CodePlex Daily Summary for Saturday, July 07, 2012

Popular Releases

- HigLabo: HigLabo_20120706 - Breaking changes: HigLabo.Mail now requires a reference to HigLabo.Net; ProtocolType is renamed to HttpProtocolType in the HigLabo.Net project; AsyncCallErrorEventArgs is renamed to AsyncHttpCallErrorEventArgs; command classes in Pop3/Smtp that may not be used were deleted. Other changes: added the (incomplete) HigLabo.Net.Ftp project and a SocketClient that can easily communicate with a server via a Socket object.
- ecBlog: ecBlog 0.2 - ecBlog alpha release.
- TaskScheduler ASP.NET: Release 3 - 1.2.0.0 - Only the library was altered. New TaskScheduler properties: UseBackgroundThreads (use a separate thread for each task), StoreThreadsInPool (store the threads performing tasks in the pool), OnStopSchedulerAutoCancelThreads (abort threads when the scheduler is stopped; false leaves running threads alone), AutoDeletedExecutedTasks (allows the manager to delete a task afte...)
- DotNetNuke Persian Packages: 6.2.0 - Persian release notes; the original non-Latin text was lost to encoding.
- xUnit.net Contrib: xunitcontrib-resharper 0.6 (RS 7.0, 6.1.1) - A test runner plugin for ReSharper 7.0 (EAP build 82) and 6.1, targeting all versions of xUnit.net (see the xUnit.net project to download xUnit.net itself). Copies of the plugin supporting previous ReSharper versions can be downloaded from this release. The plan is to support the latest revisions of the last two paid-for major versions of ReSharper (namely 7.0 and 6.1). All builds work against ALL VERSIONS...
- Umbraco CMS: Umbraco 4.8.0 Beta - What's new: uComponents in the core (Multi-Node Tree Picker, Multiple Textstring, Slider and XPath Lists), easier built-in Lucene searching, IFile providers for easier file handling, updated 3rd-party libraries, applications/trees moved out of the database, SQL Azure support, various bug fixes. A great place to start is the Getting Started Guide: http://umbraco.codeplex.com/Project/Download/FileDownload.aspx?DownloadId=197051 Make sure to...
- CODE Framework: 4.0.20704.0 - See the CODE Framework (.NET) Change Log for changes in this version.
- All-In-One Code Framework (Chinese edition): 2012-07-04 - http://download.codeplex.com/Project/Download/FileDownload.aspx?ProjectName=1codechs&DownloadId=216140 - Ten new Microsoft OneCode samples: four Windows base samples (CSCheckOSBitness, VBCheckOSBitness, CSCheckOSVersion, VBCheckOSVersion), two XML samples (CSXPath, VBXPath) and four ASP.NET samples (CSASPNETDataPager, VBASPNET...). The remaining notes were in Chinese and were lost to encoding. Earlier samples: http://i3.codeplex.com/Project/Download/FileDownload.aspx?ProjectName=1code&DownloadId=128165
- xUnit.net - Unit testing framework for C# and .NET (a successor to NUnit): xUnit.net 1.9.1 - Build #1600. Important note for ReSharper users: ReSharper support has been moved to the xUnit.net Contrib project. Important note for TestDriven.net users: if you are having issues running xUnit.net tests in TestDriven.net, especially on 64-bit Windows, we strongly recommend you upgrade to TD.NET version 3.0 or later. Important note for VS2012 users: the VS2012 runner is in the Visual Studio Gallery now, and should be installed via Tools | Extension Manager from insi...
- MVC Controls Toolkit: Mvc Controls Toolkit 2.2.0 - Modified all MVC4-related features to conform with the MVC4 RC. All items controls now accept any IEnumerable<T> (before, most controls accepted only List<T>). Added a retrievalManager class that automatically retrieves data from a data source whenever it catches events triggered by filtering, sorting and paging controls, and a move method on the updatesManager to move a child object from one father to another; the move operation can be undone like the insert, update and delete operatio...
- IronPython: 2.7.3 - The final release of IronPython 2.7.3, including everything from IronPython 54498, 62475 and 74478 as well. Like all IronPython 2.7-series releases, .NET 4 is required to install it, and installing this release replaces any existing IronPython 2.7-series installation. The incompatibility with IronRuby has been resolved, and they can once again be installed side-by-side. The biggest improvements in IronPython 2.7.3 are: the...
- BlackJumboDog: Ver5.6.6 - 2012.07.03. Japanese release notes (mentioning an ftp:// LIST fix); the original non-Latin text was lost to encoding.
- Mini SQL Query: Mini SQL Query (v1.0.68.441) - Just a bug-fix release for when the connections try to refresh after an edit. Make sure you read the Quickstart for an introduction.
- Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.58 - Fix for issue #18296: provide an "ALL" value to the -ignore switch to ignore all error and warning messages. Fix for issue #18293: if encountering EOF before a function declaration or expression is properly closed, throw an appropriate error and don't crash. The variable-renaming algorithm was adjusted to be very specific when renaming variables with the same number of references, so a single source file ends up with the same minified names on different platforms. Adds the ability to specify kno...
- LogExpert: 1.4 build 4566 - This release for the 1.4 version line contains various fixes made some time ago; until now these fixes were only available in the 1.5 alpha versions. It also contains a fix for issue 710. Column finder (press F8 to show). Terminal server issues: multiple sessions with the same user should work now. Settings export/import is available via the Settings dialog (still incomplete, e.g. tab colors are not saved; maybe I'll change the file format one day). No command-line support yet (for importin...)
- CommonLibrary.NET: CommonLibrary.NET 0.9.8.5 - Final Release - A collection of very reusable code and components in C# 4.0, ranging from ActiveRecord, CSV, command-line parsing, configuration, holiday calendars, logging and authentication to much more. CommonLibrary.NET 0.9.8 contains a scripting language called FluentScript; release notes for FluentScript are located at http://fluentscript.codeplex.com/wikipage?action=Edit&title=Release%20Notes&referringTitle=Documentation - FluentScript 0.9.8.5 Final Release. Application: FluentScript Versio...
- SharePoint 2010 Metro UI: SharePoint 2010 Metro UI8 - Please review the documentation link for how to install. Installation takes some basic knowledge of how to upload and edit SharePoint artifact files. Please view the Discussions tab for ongoing FAQs.
- nopCommerce. Open source shopping cart (ASP.NET MVC): nopCommerce 2.60 - Highlighted features and improvements: significant performance optimization, AJAX for adding products to the cart, a new flyout mini shopping cart, autocomplete suggestions for product searching, full-text support, EU cookie law support. For the full list of fixes and changes, see the release notes page (http://www.nopCommerce.com/releasenotes.aspx).
- THE NVL Maker: The NVL Maker Ver 3.51 - Downloads: http://download.codeplex.com/Download?ProjectName=nvlmaker&DownloadId=371510 plus mirrors at http://115.com/file/beoef05k#THE-NVL-Maker-ver3.51-sim.7z and http://www.mediafire.com/file/6tqdwj9jr6eb9qj/THENVLMakerver3.51tra.7z - The 3.51 beta changelog was in Chinese and was lost to encoding.
- (Project name in Chinese, lost to encoding): 2.0.3 - Four numbered change items, including bug fixes; the details were lost to encoding.

New Projects

- Code Bits: Set of useful code blocks that can be included in your code. Includes NuGet support.
- Critr: A personal project that takes formatted Excel show logs, parses them and uploads them to a small local database for analytics.
- kb.net: An open-source knowledge base based on SQL Server Express 2012 and .NET 4.0.
- LyncServerExtension: The goal of this project is to add boss/secretary delegation functionality to Microsoft Lync Server 2010.
- MVC Web Api 4 Flot: MVC4 Web API service layers for the Flot project on http://code.google.com/p/flot. For now, only the GET method is implemented.
- ostests: testif is web and mobile assessment software. Create interactive tests easily and share them with your colleagues, employees and friends.
- Pegasus Attack: A simple shmup-style game in the style of Truxton. Basic features: multiple levels (defined in text documents that just store enemy locations), basic enemies with basic AI (hard-coded or from a text document), various bullet types, title screen / help screen / control window / in-game game states, two playable characters (Rainbow Dash and Fluttershy), basic effects (explosion animation), items (powerups, guns, ...).
- proLearningEnglish: Uses RDF to build software for learning English. Users are teachers and pupils in grade 6.
- Pusher .Net Client: A .NET client for Pusher (http://www.pusher.com) allowing .NET clients such as WinForms and console applications to receive websocket messages.
- RadEditor Lite for AJAX: Modified from the open-source Telerik free tool RadEditor Lite for MOSS 2010.
- RconLibrary: Battlefield 3 RCON communication library.
- SharePoint Notes: Simple visual webpart to show list items as notes. Easy to modify, and not really complex.
- Software Manager: A software package that will help with distribution and licensing of programs developed in VB.NET or C#.
- StoreFramework: A test framework about Code First and POCO.
- TwitterRt - Tweet from Windows Metro Apps: Adds the ability to tweet from your Metro-style (WinRT) application. Binaries at nuget.org/packages/TwitterRt; discussion at w8isms.blogspot.com.
- YucadagBlog

    Read the article

  • Watching a variable for changes without polling.

    - by milkfilk
I'm using a framework called Processing which is basically a Java applet. It has the ability to do key events because Applet can. You can also roll your own callbacks of sorts into the parent. I'm not doing that right now and maybe that's the solution. For now, I'm looking for a more POJO solution. So I wrote some examples to illustrate my question. Please ignore using key events on the command line (console). Certainly this would be a very clean solution but it's not possible on the command line and my actual app isn't a command line app. In fact, a key event would be a good solution for me but I'm trying to understand events and polling beyond just keyboard specific problems. Both these examples flip a boolean. When the boolean flips, I want to fire something once. I could wrap the boolean in an Object so if the Object changes, I could fire an event too. I just don't want to poll with an if() statement unnecessarily.

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

/*
 * Example of checking a variable for changes.
 * Uses dumb if() and polls continuously.
 */
public class NotAvoidingPolling {
    public static void main(String[] args) {
        boolean typedA = false;
        String input = "";
        System.out.println("Type 'a' please.");
        while (true) {
            InputStreamReader isr = new InputStreamReader(System.in);
            BufferedReader br = new BufferedReader(isr);
            try {
                input = br.readLine();
            } catch (IOException ioException) {
                System.out.println("IO Error.");
                System.exit(1);
            }
            // contrived state change logic
            if (input.equals("a")) {
                typedA = true;
            } else {
                typedA = false;
            }
            // problem: this is polling.
            if (typedA) System.out.println("Typed 'a'.");
        }
    }
}

Running this outputs:

Type 'a' please.
a
Typed 'a'.

On some forums people suggested using an Observer. And although this decouples the event handler from the class being observed, I still have an if() on a forever loop.

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.Observable;
import java.util.Observer;

/*
 * Example of checking a variable for changes.
 * This uses an observer to decouple the handler feedback
 * out of the main() but still is polling.
 */
public class ObserverStillPolling {
    boolean typedA = false;

    public static void main(String[] args) {
        // this
        ObserverStillPolling o = new ObserverStillPolling();
        final MyEvent myEvent = new MyEvent(o);
        final MyHandler myHandler = new MyHandler();
        myEvent.addObserver(myHandler); // subscribe

        // watch for event forever
        Thread thread = new Thread(myEvent);
        thread.start();

        System.out.println("Type 'a' please.");
        String input = "";
        while (true) {
            InputStreamReader isr = new InputStreamReader(System.in);
            BufferedReader br = new BufferedReader(isr);
            try {
                input = br.readLine();
            } catch (IOException ioException) {
                System.out.println("IO Error.");
                System.exit(1);
            }
            // contrived state change logic
            // but it's decoupled now because there's no handler here.
            if (input.equals("a")) {
                o.typedA = true;
            }
        }
    }
}

class MyEvent extends Observable implements Runnable {
    // boolean typedA;
    ObserverStillPolling o;

    public MyEvent(ObserverStillPolling o) {
        this.o = o;
    }

    public void run() {
        // watch the main forever
        while (true) {
            // event fire
            if (this.o.typedA) {
                setChanged();
                // in reality, you'd pass something more useful
                notifyObservers("You just typed 'a'.");
                // reset
                this.o.typedA = false;
            }
        }
    }
}

class MyHandler implements Observer {
    public void update(Observable obj, Object arg) {
        // handle event
        if (arg instanceof String) {
            System.out.println("We received:" + (String) arg);
        }
    }
}

Running this outputs:

Type 'a' please.
a
We received:You just typed 'a'.

I'd be ok if the if() was a NOOP on the CPU. But it's really comparing every pass. I see real CPU load. This is as bad as polling. I can maybe throttle it back with a sleep or compare the elapsed time since last update but this is not event driven. It's just less polling. So how can I do this smarter? How can I watch a POJO for changes without polling? In C# there seems to be something interesting called properties. I'm not a C# guy so maybe this isn't as magical as I think.

private void SendPropertyChanging(string property) {
    if (this.PropertyChanging != null) {
        this.PropertyChanging(this, new PropertyChangingEventArgs(property));
    }
}

    Read the article

  • Enabling DNS for IPv6 infrastructure

After successful automatic distribution of IPv6 address information via DHCPv6 in your local network, it might be time to start offering some more services. Usually we use host names to communicate with other machines instead of their bare IPv6 addresses. In the following paragraphs we are going to enable our own DNS name server with IPv6 address resolving.

This is the third article in a series on IPv6 configuration:
- Configure IPv6 on your Linux system
- DHCPv6: Provide IPv6 information in your local network
- Enabling DNS for IPv6 infrastructure
- Accessing your web server via IPv6

Piece of advice: This is based on my findings on the internet while reading other people's helpful articles and going through a couple of man pages on my local system.

What's your name and your IPv6 address?

$ sudo service bind9 status
 * bind9 is running

If the service is not recognised, you have to install it first on your system. This is done very easily and quickly, like so:

$ sudo apt-get install bind9

Once again, there is no specialised package for IPv6; the regular application is good to go. But of course it is necessary to enable IPv6 binding in the options. Let's fire up a text editor and modify the configuration file.

$ sudo nano /etc/bind/named.conf.options

acl iosnet {
        127.0.0.1;
        192.168.1.0/24;
        ::1/128;
        2001:db8:bad:a55::/64;
};
listen-on { iosnet; };
listen-on-v6 { any; };
allow-query { iosnet; };
allow-transfer { iosnet; };

The most important directive is listen-on-v6. This will enable named to bind to the IPv6 addresses specified on your system. The easiest is to specify any as the value, and named will bind to all available IPv6 addresses during start. More details and explanations are found in the man pages of named.conf. Save the file and restart the named service. As usual, check your log files and correct your configuration in case of any logged error messages. Using the netstat command you can validate whether the service is running and which IP and IPv6 addresses it is bound to, like so:

$ sudo service bind9 restart
$ sudo netstat -lnptu | grep "named\W*$"
tcp        0      0 192.168.1.2:53        0.0.0.0:*               LISTEN      1734/named
tcp        0      0 127.0.0.1:53          0.0.0.0:*               LISTEN      1734/named
tcp6       0      0 :::53                 :::*                    LISTEN      1734/named
udp        0      0 192.168.1.2:53        0.0.0.0:*                           1734/named
udp        0      0 127.0.0.1:53          0.0.0.0:*                           1734/named
udp6       0      0 :::53                 :::*                                1734/named

Sweet! Okay, now it's about time to resolve host names and their assigned IPv6 addresses using our own DNS name server.

$ host -t aaaa www.6bone.net 2001:db8:bad:a55::2
Using domain server:
Name: 2001:db8:bad:a55::2
Address: 2001:db8:bad:a55::2#53
Aliases:

www.6bone.net is an alias for 6bone.net.
6bone.net has IPv6 address 2001:5c0:1000:10::2

Alright, our newly configured BIND named is fully operational. Eventually you might be more familiar with the dig command. Here is the same kind of IPv6 host name resolution, but it provides more details about that particular host as well as the domain in general:

$ dig @2001:db8:bad:a55::2 www.6bone.net. AAAA

More details on the Berkeley Internet Name Domain (BIND) daemon and IPv6 are available in Chapter 22.1 of Peter Bieringer's HOWTO on IPv6.

Setting up your own DNS zone

Now that we have an operational named in place, it's about time to implement and configure our own host names and IPv6 address resolving. The general approach is to create your own zone database below the bind folder and to add AAAA records for your hosts. In order to achieve this, we have to define the zone first in the configuration file named.conf.local.

$ sudo nano /etc/bind/named.conf.local

//
// Do any local configuration here
//
zone "ios.mu" {
        type master;
        file "/etc/bind/zones/db.ios.mu";
};

Here we specify the location of our zone database file. Next, we are going to create it and add our host names, our IP and our IPv6 addresses.

$ sudo nano /etc/bind/zones/db.ios.mu

$ORIGIN .
$TTL 259200     ; 3 days
ios.mu                  IN SOA  ios.mu. hostmaster.ios.mu. (
                                2014031101 ; serial
                                28800      ; refresh (8 hours)
                                7200       ; retry (2 hours)
                                604800     ; expire (1 week)
                                86400      ; minimum (1 day)
                                )
                        NS      server.ios.mu.
$ORIGIN ios.mu.
server                  A       192.168.1.2
server                  AAAA    2001:db8:bad:a55::2
client1                 A       192.168.1.3
client1                 AAAA    2001:db8:bad:a55::3
client2                 A       192.168.1.4
client2                 AAAA    2001:db8:bad:a55::4

With a couple of machines in place, it's time to reload that new configuration. Note: each time you change your zone databases, you have to modify the serial information too. named loads the plain-text zone definitions and converts them into an internal, indexed binary format to improve lookup performance. If you forget to change your serial, named will not use the new records from the text file but the indexed ones, or you have to flush the index and force a reload of the zone. This can be done easily by either restarting named:

$ sudo service bind9 restart

or by reloading the configuration file using the name server control utility, rndc:

$ sudo rndc reconfig

Check your log files for any error messages and whether the new zone database has been accepted. Next, we are going to resolve a host name, trying to get its IPv6 address, like so:

$ host -t aaaa server.ios.mu. 2001:db8:bad:a55::2
Using domain server:
Name: 2001:db8:bad:a55::2
Address: 2001:db8:bad:a55::2#53
Aliases:

server.ios.mu has IPv6 address 2001:db8:bad:a55::2

Looks good. Alternatively, you could have just pinged the system using the ping6 command instead of the regular ping:

$ ping6 server
PING server(2001:db8:bad:a55::2) 56 data bytes
64 bytes from 2001:db8:bad:a55::2: icmp_seq=1 ttl=64 time=0.615 ms
64 bytes from 2001:db8:bad:a55::2: icmp_seq=2 ttl=64 time=0.407 ms
^C
--- ios1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.407/0.511/0.615/0.104 ms

That also looks promising to me. How about your configuration? Next, it might be interesting to extend the range of available services on the network. One essential service would be to have web sites at hand.

    Read the article

  • C - Segmentation fault!

    - by FILIaS
It seems at least weird to me... The program runs normally, but after I call the enter() function for the 4th time, there is a segmentation fault! I would appreciate any help. With the following function enter() I want to add the user commands' data to a list. [Some part of the code is already posted on another question of mine, but I think I should post it again... as it's a different problem I'm facing now.]

/* struct for all the data that the user enters on file */
typedef struct catalog {
    char short_name[50];
    char surname[50];
    signed int amount;
    char description[1000];
    struct catalog *next;
} catalog, *catalogPointer;

catalogPointer current;
catalogPointer head = NULL;

void enter(void) // user command: i <name> <surname> <amount> <description>
{
    int n, j = 2, k = 0;
    char temp[1500];
    char *short_name, *surname, *description;
    signed int amount;
    char *params = strchr(command, ' ') + 1; // strchr returns a pointer to the 1st space on the command. U want a pointer to the char right after that space.
    strcpy(temp, params); // params is saved as temp.
    char *curToken = strtok(temp, " "); // strtok cuts 'temp' into strings between the spaces and saves them to 'curToken'
    printf("temp is:%s \n", temp);
    printf("\nWhat you entered for saving:\n");
    for (n = 0; curToken; ++n) // until curToken ends:
    {
        if (curToken) {
            short_name = malloc(strlen(curToken) + 1);
            strncpy(short_name, curToken, sizeof (short_name));
        }
        printf("Short Name: %s \n", short_name);
        curToken = strtok(NULL, " ");
        if (curToken) {
            surname = malloc(strlen(curToken) + 1);
            strncpy(surname, curToken, sizeof (surname));
        }
        printf("SurName: %s \n", surname);
        curToken = strtok(NULL, " ");
        if (curToken) {
            // int * amount = malloc(sizeof (signed int *));
            char *chk;
            amount = (int) strtol(curToken, &chk, 10);
            if (!isspace(*chk) && *chk != 0)
                fprintf(stderr, "Warning: expected integer value for amount, received %s instead\n", curToken);
        }
        printf("Amount: %d \n", amount);
        curToken = strtok(NULL, "\0");
        if (curToken) {
            description = malloc(strlen(curToken) + 1);
            strncpy(description, curToken, sizeof (description));
        }
        printf("Description: %s \n", description);
        break;
    }
    if (findEntryExists(head, surname, short_name) != NULL) // call function in order to see if entry exists already on the catalog
        printf("\nAn entry for <%s %s> is already in the catalog!\nNew entry not entered.\n", short_name, surname);
    else {
        printf("\nTry to entry <%s %s %d %s> in the catalog list!\n", short_name, surname, amount, description);
        newEntry(&head, short_name, surname, amount, description);
        printf("\n**Entry done!**\n");
    }
    // Maintain the list in alphabetical order by surname.
}

catalogPointer findEntryExists(catalogPointer head, char num[], char first[])
{
    catalogPointer p = head;
    while (p != NULL && strcmp(p->surname, num) != 0 && strcmp(p->short_name, first) != 0) {
        p = p->next;
    }
    return p;
}

catalogPointer newEntry(catalog **headRef, char short_name[], char surname[], signed int amount, char description[])
{
    catalogPointer newNode = (catalogPointer) malloc(sizeof(catalog));
    catalogPointer first;
    catalogPointer second;
    catalogPointer tmp;
    first = head;
    second = NULL;
    strcpy(newNode->short_name, short_name);
    strcpy(newNode->surname, surname);
    newNode->amount = amount;
    strcpy(newNode->description, description);
    while (first != NULL) {
        if (strcmp(surname, first->surname) > 0)
            second = first;
        else if (strcmp(surname, first->surname) == 0) {
            if (strcmp(short_name, first->short_name) > 0)
                second = first;
        }
        first = first->next;
    }
    if (second == NULL) {
        newNode->next = head;
        head = newNode;
    }
    else // SEGMENTATION APPEARS WHEN IT GETS HERE!
    {
        tmp = second->next;
        newNode->next = tmp;
        first->next = newNode;
    }
}

UPDATE: The segfault appears only when it gets into the 'else' branch of the InsertSort() function. I observed that the segmentation fault appears when I try to put in the list names that come after the existing ones. For example, if the list contains:
[Name:b Surname:b Amount:6 Description:b]
[Name:c Surname:c Amount:5 Description:c]
[Name:d Surname:d Amount:4 Description:d]
[Name:e Surname:e Amount:3 Description:e]
[Name:g Surname:g Amount:2 Description:g]
[Name:x Surname:x Amount:1 Description:x]
and I enter "x z 77 gege", there is a segmentation fault, but if I enter "x a 77 gege" it continues normally....
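For readers hitting the same crash, a likely cause from reading the code above (an educated guess, not a verified fix): when the while loop in newEntry() finishes, first is always NULL, so the else branch's first->next = newNode dereferences a null pointer. That matches the UPDATE: only entries sorting after the existing ones take the else branch. A minimal sketch of a corrected else branch:

    else
    {
        /* 'first' is NULL after the while loop, so first->next crashes;
           link the new node through 'second' instead. */
        newNode->next = second->next;
        second->next = newNode;
    }

Two smaller issues also stand out: strncpy(short_name, curToken, sizeof(short_name)) copies only sizeof(char *) bytes rather than the whole token (use strlen(curToken) + 1, which is what was malloc'ed), and newEntry() is declared to return catalogPointer but returns nothing.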

    Read the article

  • parallel_for_each from amp.h – part 1

    - by Daniel Moth
This post assumes that you've read my other C++ AMP posts on index<N> and extent<N>, as well as the one about the restrict modifier. It also assumes you are familiar with C++ lambdas (if not, follow my links to C++ documentation).

Basic structure and parameters

Now we are ready for part 1 of the description of the new overload for the concurrency::parallel_for_each function. The basic new parallel_for_each method signature returns void and accepts two parameters: a grid<N> (think of it as an alias to extent) and a restrict(direct3d) lambda, whose signature is such that it returns void and accepts an index of the same rank as the grid. So it looks something like this (with generous returns for more palatable formatting), assuming we are dealing with a 2-dimensional space:

// some_code_A
parallel_for_each(
    g, // g is of type grid<2>
    [ ](index<2> idx) restrict(direct3d)
    {
        // kernel code
    }
);
// some_code_B

The parallel_for_each will execute the body of the lambda (which must have the restrict modifier) on the GPU. We also call the lambda body the "kernel". The kernel will be executed multiple times, once per scheduled GPU thread. The only difference in each execution is the value of the index object (a.k.a. the GPU thread ID in this context) that gets passed to your kernel code. The number of GPU threads (and the values of each index) is determined by the grid object you pass, as described next.

You know that grid is simply a wrapper on extent. In this context, one way to think about it is that the extent generates a number of index objects. So for the example above, if your grid was set up by some_code_A as follows:

extent<2> e(2,3);
grid<2> g(e);

...then given that e.size()==6, e[0]==2, and e[1]==3, the six index<2> objects it generates (and hence the values that your lambda would receive) are: (0,0) (1,0) (0,1) (1,1) (0,2) (1,2). So what the above means is that the lambda body with the algorithm that you wrote will get executed 6 times, and the index<2> object you receive each time will have one of the values just listed above (of course, each one will only appear once, the order is indeterminate, and they are likely to call your code at the same exact time). Obviously, in real GPU programming you'd typically be scheduling thousands if not millions of threads, not just 6. If you've been following along you should be thinking: "that is all fine and makes sense, but what can I do in the kernel since I passed nothing else meaningful to it, and it is not returning any values out to me?"

Passing data in and out

It is a good question, and in data parallel algorithms indeed you typically want to pass some data in, perform some operation, and then typically return some results out. The way you pass data into the kernel is by capturing variables in the lambda (again, if you are not familiar with them, follow the links about C++ lambdas), and the way you use data after the kernel is done executing is simply by using those same variables. In the example above, the lambda was written in a fairly useless way with an empty capture list: [ ](index<2> idx) restrict(direct3d), where the empty square brackets mean that no variables were captured. If instead I write it like this: [&](index<2> idx) restrict(direct3d), then all variables in the some_code_A region are made available to the lambda by reference, but as soon as I try to use any of those variables in the lambda, I will receive a compiler error.
This has to do with one of the direct3d restrictions, where only one type can be captured by reference: objects of the new concurrency::array class that I'll introduce in the next post (suffice for now to think of it as a container of data). If I write the lambda line like this: [=](index<2> idx) restrict(direct3d), all variables in the some_code_A region are made available to the lambda by value. This works for some types (e.g. an integer), but not for all, as per the restrictions for direct3d. In particular, no useful data classes work except for one new type we introduce with C++ AMP: objects of the new concurrency::array_view class, which I'll introduce in the post after next. Also note that if you capture some variable by value, you could use it as input to your algorithm, but you wouldn't be able to observe changes to it after the parallel_for_each call (e.g. in the some_code_B region, since it was passed by value) – the exception to this rule is the array_view since (as we'll see in a future post) it is a wrapper for data, not a container. Finally, for completeness, you can write your lambda e.g. like this: [av, &ar](index<2> idx) restrict(direct3d), where av is a variable of type array_view and ar is a variable of type array – the point being you can be very specific about which variables you capture and how.

So it looks like, from a large-data perspective, you can only capture array and array_view objects in the lambda (that is how you pass data to your kernel) and then use the many threads that call your code (each with a unique index) to perform some operation. You can also capture some limited types by value, as input only. When the last thread completes execution of your lambda, the data in the array_view or array is ready to be used in the some_code_B region. We'll talk more about all this in future posts…

(a)synchronous

Please note that the parallel_for_each executes as if synchronous to the calling code, but in reality it is asynchronous. I.e. once the parallel_for_each call is made and the kernel has been passed to the runtime, the some_code_B region continues to execute immediately on the CPU thread, while in parallel the kernel is executed by the GPU threads. However, if you try to access the (array or array_view) data that you captured in the lambda in the some_code_B region, your code will block until the results become available. Hence the correct statement: the parallel_for_each is as-if synchronous in terms of visible side-effects, but asynchronous in reality.

That's all for now; we'll revisit the parallel_for_each description once we properly introduce array and array_view – coming next. Comments about this post by Daniel Moth welcome at the original blog.
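To make the capture rules concrete, here is a minimal sketch in the preview syntax used in this post. Note that array_view is only properly introduced in the next posts and the exact preview API may differ, so treat this as an illustration rather than canonical code:

#include <amp.h>
#include <vector>
using namespace concurrency;

void double_all(std::vector<int>& v)
{
    extent<1> e((int)v.size());
    grid<1> g(e);                 // one GPU thread per element
    array_view<int, 1> av(e, v);  // array_view wraps the CPU-side data
    parallel_for_each(g, [=](index<1> idx) restrict(direct3d)
    {
        av[idx] *= 2;             // each thread doubles one element
    });
    // reading av (or v through it) here blocks until the kernel finishes
}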

    Read the article

  • Can't create admin user on Heroku

    - by Nick5a1
I am new to Rails and I have gone through Kevin Skoglund's Ruby on Rails 3 Essential Training course on Lynda.com. Through the course you set up a simple CMS, which I did. It doesn't cover Git or deployment, but I've pushed my simple CMS to GitHub (https://github.com/nick5a1/Simple_CMS) and deployed to Heroku (http://nkarrasch.herokuapp.com/). In order to deploy to Heroku I followed the Heroku setup guide (https://devcenter.heroku.com/articles/rails3) and switched my database from MySQL to PostgreSQL. As instructed, I changed gem 'mysql2' to gem 'sqlite3' in my Gemfile and ran bundle install before pushing. I then ran heroku run rake db:migrate.

I'm running into 2 problems. When I try to log in (http://nkarrasch.herokuapp.com/access) I get an error "We're sorry, but something went wrong". I should instead be getting a flash message with invalid username/password combination. This is what I'm getting on my test environment on my local machine. Secondly, when I log into the Heroku console to create an admin user, when I try to save that user I get the following error:

irb(main):004:0> user.save
   (1.2ms)  BEGIN
  AdminUser Exists (1.9ms)  SELECT 1 AS one FROM "admin_users" WHERE "admin_users"."username" = 'Nick5a1' LIMIT 1
   (1.7ms)  ROLLBACK
=> false

Any advice on how to troubleshoot would be greatly appreciated :). Thanks very much, Nick

EDIT: Here are my Heroku logs:

2012-06-27T20:36:44+00:00 heroku[slugc]: Slug compilation started
2012-06-27T20:37:34+00:00 heroku[api]: Add shared-database:5mb add-on by [email protected]
2012-06-27T20:37:34+00:00 heroku[api]: Release v2 created by [email protected]
2012-06-27T20:37:34+00:00 heroku[api]: Add RAILS_ENV, LANG, PATH, RACK_ENV, GEM_PATH config by [email protected]
2012-06-27T20:37:34+00:00 heroku[api]: Release v3 created by [email protected]
2012-06-27T20:37:34+00:00 heroku[api]: Release v4 created by [email protected]
2012-06-27T20:37:34+00:00 heroku[api]: Deploy 1d82839 by [email protected]
2012-06-27T20:37:35+00:00 heroku[slugc]: Slug compilation finished
2012-06-27T20:37:36+00:00 heroku[web.1]: Starting process with command `bundle exec rails server -p 45450`
2012-06-27T20:37:40+00:00 app[web.1]: DEPRECATION WARNING: You have Rails 2.3-style plugins in vendor/plugins! Support for these plugins will be removed in Rails 4.0. Move them out and bundle them in your Gemfile, or fold them in to your app as lib/myplugin/* and config/initializers/myplugin.rb. See the release notes for more on this: http://weblog.rubyonrails.org/2012/1/4/rails-3-2-0-rc2-has-been-released. (called from <top (required)> at /app/config/environment.rb:5)
2012-06-27T20:37:40+00:00 app[web.1]: DEPRECATION WARNING: You have Rails 2.3-style plugins in vendor/plugins! Support for these plugins will be removed in Rails 4.0. Move them out and bundle them in your Gemfile, or fold them in to your app as lib/myplugin/* and config/initializers/myplugin.rb. See the release notes for more on this: http://weblog.rubyonrails.org/2012/1/4/rails-3-2-0-rc2-has-been-released. (called from <top (required)> at /app/config/environment.rb:5)
2012-06-27T20:37:40+00:00 app[web.1]: DEPRECATION WARNING: You have Rails 2.3-style plugins in vendor/plugins! Support for these plugins will be removed in Rails 4.0. Move them out and bundle them in your Gemfile, or fold them in to your app as lib/myplugin/* and config/initializers/myplugin.rb. See the release notes for more on this: http://weblog.rubyonrails.org/2012/1/4/rails-3-2-0-rc2-has-been-released. (called from <top (required)> at /app/config/environment.rb:5)
2012-06-27T20:37:44+00:00 app[web.1]: => Rails 3.2.6 application starting in production on http://0.0.0.0:45450
2012-06-27T20:37:44+00:00 app[web.1]: => Call with -d to detach
2012-06-27T20:37:44+00:00 app[web.1]: => Booting WEBrick
2012-06-27T20:37:44+00:00 app[web.1]: Connecting to database specified by DATABASE_URL
2012-06-27T20:37:44+00:00 app[web.1]: => Ctrl-C to shutdown server
2012-06-27T20:37:44+00:00 app[web.1]: [2012-06-27 20:37:44] INFO WEBrick 1.3.1
2012-06-27T20:37:44+00:00 app[web.1]: [2012-06-27 20:37:44] INFO ruby 1.9.2 (2011-07-09) [x86_64-linux]
2012-06-27T20:37:44+00:00 app[web.1]: [2012-06-27 20:37:44] INFO WEBrick::HTTPServer#start: pid=2 port=45450
2012-06-27T20:37:45+00:00 heroku[web.1]: State changed from starting to up
2012-06-27T20:39:44+00:00 heroku[run.1]: Awaiting client
2012-06-27T20:39:44+00:00 heroku[run.1]: Starting process with command `bundle exec rake db:migrate`
2012-06-27T20:39:44+00:00 heroku[run.1]: State changed from starting to up
2012-06-27T20:39:51+00:00 heroku[run.1]: Process exited with status 0
2012-06-27T20:39:51+00:00 heroku[run.1]: State changed from up to complete
2012-06-27T20:41:05+00:00 heroku[run.1]: Awaiting client
2012-06-27T20:41:05+00:00 heroku[run.1]: Starting process with command `bundle exec rails console`
2012-06-27T20:41:05+00:00 heroku[run.1]: State changed from starting to up
2012-06-27T20:46:09+00:00 heroku[run.1]: Process exited with status 0
2012-06-27T20:46:09+00:00 heroku[run.1]: State changed from up to complete
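One general ActiveRecord debugging step that applies here (a pointer for finding the cause, not a confirmed fix): a ROLLBACK with save returning false means a validation failed, and the model will tell you which one. In the same Heroku console:

irb(main):005:0> user.errors.full_messages

will print the validation messages (for example, a missing password confirmation on the AdminUser) that blocked the save.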

    Read the article

  • OpenSSL in C++ email client - server closes connection with TLSv1 Alert message

    - by mice
My app connects to an IMAP email server. One client configured his server to reject SSLv2 certificates, and now my app fails to connect to that server. All other email clients connect to it successfully. My app uses OpenSSL. I debugged by creating a minimal OpenSSL client and attempting to connect to the server. Below is the code which connects to the mail server (using Windows sockets, but the same problem occurs with Unix sockets). The server sends its initial IMAP greeting message, but after the client sends the 1st command, the server closes the connection. In Wireshark, I see that after the command is sent, the server returns TLSv1 message 21 (Encrypted Alert) and the connection is gone. I'm looking for the proper setup of OpenSSL for this connection to succeed. Thanks

#include <stdio.h>
#include <memory.h>
#include <errno.h>
#include <sys/types.h>
#include <winsock2.h>
#include <openssl/crypto.h>
#include <openssl/x509.h>
#include <openssl/pem.h>
#include <openssl/ssl.h>
#include <openssl/err.h>

#define CHK_NULL(x) if((x)==NULL) exit(1)
#define CHK_ERR(err,s) if((err)==-1) { perror(s); exit(1); }
#define CHK_SSL(err) if((err)==-1) { ERR_print_errors_fp(stderr); exit(2); }

SSL *ssl;
char buf[4096];

void write(const char *s){
    int err = SSL_write(ssl, s, strlen(s));
    printf("> %s\n", s);
    CHK_SSL(err);
}

void read(){
    int n = SSL_read(ssl, buf, sizeof(buf) - 1);
    CHK_SSL(n);
    if(n==0){
        printf("Finished\n");
        exit(1);
    }
    buf[n] = 0;
    printf("%s\n", buf);
}

void main(){
    int err=0;
    SSLeay_add_ssl_algorithms();
    SSL_METHOD *meth = SSLv23_client_method();
    SSL_load_error_strings();
    SSL_CTX *ctx = SSL_CTX_new(meth);
    CHK_NULL(ctx);

    WSADATA data;
    WSAStartup(0x202, &data);
    int sd = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    CHK_ERR(sd, "socket");
    struct sockaddr_in sa;
    memset(&sa, 0, sizeof(sa));
    sa.sin_family = AF_INET;
    sa.sin_addr.s_addr = inet_addr("195.137.27.14");
    sa.sin_port = htons(993);
    err = connect(sd,(struct sockaddr*) &sa, sizeof(sa));
    CHK_ERR(err, "connect");

    /* ----------------------------------------------- */
    /* Now we have TCP connection. Start SSL negotiation. */
    ssl = SSL_new(ctx);
    CHK_NULL(ssl);
    SSL_set_fd(ssl, sd);
    err = SSL_connect(ssl);
    CHK_SSL(err);

    // Following two steps are optional and not required for data exchange to be successful.
    /*
    printf("SSL connection using %s\n", SSL_get_cipher(ssl));
    X509 *server_cert = SSL_get_peer_certificate(ssl);
    CHK_NULL(server_cert);
    printf("Server certificate:\n");
    char *str = X509_NAME_oneline(X509_get_subject_name(server_cert),0,0);
    CHK_NULL(str);
    printf(" subject: %s\n", str);
    OPENSSL_free(str);
    str = X509_NAME_oneline(X509_get_issuer_name (server_cert),0,0);
    CHK_NULL(str);
    printf(" issuer: %s\n", str);
    OPENSSL_free(str);
    // We could do all sorts of certificate verification stuff here before deallocating the certificate.
    X509_free(server_cert);
    */

    printf("\n\n");
    read();                    // get initial IMAP greeting
    write("1 CAPABILITY\r\n"); // send 1st command
    read();                    // get reply to cmd; server closes connection here
    write("2 LOGIN a b\r\n");
    read();

    SSL_shutdown(ssl);
    closesocket(sd);
    SSL_free(ssl);
    SSL_CTX_free(ctx);
}
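One configuration worth trying first (an assumption to verify, not a confirmed fix): SSLv23_client_method() begins with an SSLv2-compatible handshake, and servers hardened against SSLv2 are sometimes configured to drop such clients. Disabling SSLv2 on the context keeps version negotiation but never offers SSLv2:

    SSL_CTX *ctx = SSL_CTX_new(SSLv23_client_method());
    CHK_NULL(ctx);
    SSL_CTX_set_options(ctx, SSL_OP_NO_SSLv2); /* never offer SSLv2 */

    /* or pin the protocol outright: SSL_CTX_new(TLSv1_client_method()); */

SSL_CTX_set_options and SSL_OP_NO_SSLv2 are standard OpenSSL API; if the alert still arrives after the first command, the server is objecting to something other than the protocol version.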

    Read the article

  • C++ Program performs better when piped

    - by ET1 Nerd
I haven't done any programming in a decade. I wanted to get back into it, so I made this little pointless program as practice. The easiest way to describe what it does is with the output of my --help code block:

./prng_bench --help
./prng_bench: usage: ./prng_bench $N $B [$T]
This program will generate an N digit base(B) random number until all N digits are the same.
Once a repeating N digit base(B) number is found, the following statistics are displayed:
  -Decimal value of all N digits.
  -Time & number of tries taken to randomly find.
Optionally, this process is repeated T times. When running multiple repetitions, averages
for all N digit base(B) numbers are displayed at the end, as well as total time and total tries.

My "problem" is that when the problem is "easy", say a 3 digit base 10 number, and I have it do a large number of passes, the "total time" is less when piped to grep. I.e.: command ; command | grep took:

./prng_bench 3 10 999999 ; ./prng_bench 3 10 999999 | grep took
....
Pass# 999999: All 3 base(10) digits = 3 base(10). Time: 0.00005 secs. Tries: 23
It took 191.86701 secs & 99947208 tries to find 999999 repeating 3 digit base(10) numbers.
An average of 0.00019 secs & 99 tries was needed to find each one.
It took 159.32355 secs & 99947208 tries to find 999999 repeating 3 digit base(10) numbers.

If I run the same command many times w/o grep, the time is always VERY close. I'm using srand(1234) for now, to test. The code between my calls to clock_gettime() for start and stop does not involve any stream manipulation, which would obviously affect time. I realize this is an exercise in futility, but I'd like to know why it behaves this way. Below is the heart of the program. Here's a link to the full source on DB if anybody wants to compile and test: https://www.dropbox.com/s/6olqnnjf3unkm2m/prng_bench.cpp (clock_gettime() requires -lrt).

for (int pass_num=1; pass_num<=passes; pass_num++) { //Executes $passes # of times.
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &temp_time); //get time
    start_time = timetodouble(temp_time); //convert time to double, store as start_time
    for(i=1, tries=0; i!=0; tries++) { //loops until 'comparison for' fully completes. counts reps as 'tries'.
        for (i=0; i<Ndigits; i++)       //Move forward through array.
            results[i]=(rand()%base);   //assign random num of base to element (digit).
        /*for (i=0; i<Ndigits; i++)     //---Debug Lines---------------
            std::cout<<" "<<results[i]; //---a LOT of output.----------
        std::cout << "\n";              //---Comment/decoment to disable/enable.*/
        for (i=Ndigits-1; i>0 && results[i]==results[0]; i--); //Move through array, != element breaks & i!=0, new digits drawn.
    } //If all are equal i will be 0, nested for condition satisfied.
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &temp_time); //get time
    draw_time = (timetodouble(temp_time) - start_time); //convert time to dbl, subtract start_time, set draw_time to diff.
    total_time += draw_time; //add time for this pass to total.
    total_tries += tries;    //add tries for this pass to total.
    /*Formated output for each pass:
      Pass# ---: All -- base(--) digits = -- base(10) Time: ----.---- secs. Tries: ----- (LINE) */
    std::cout<<"Pass# "<<std::setw(width_pass)<<pass_num<<": All "<<Ndigits<<" base("<<base<<") digits = "
             <<std::setw(width_base)<<results[0]<<" base(10). Time: "<<std::setw(width_time)<<draw_time
             <<" secs. Tries: "<<tries<<"\n";
}
if(passes==1)
    return 0; //No need for totals and averages of 1 pass.
/* It took ----.---- secs & ------ tries to find --- repeating -- digit base(--) numbers. (LINE)
   An average of ---.---- secs & ---- tries was needed to find each one. (LINE)(LINE) */
std::cout<<"It took "<<total_time<<" secs & "<<total_tries<<" tries to find "
         <<passes<<" repeating "<<Ndigits<<" digit base("<<base<<") numbers.\n"
         <<"An average of "<<total_time/passes<<" secs & "<<total_tries/passes
         <<" tries was needed to find each one. \n\n";
return 0;
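A cheap experiment here (a diagnostic suggestion, not a confirmed diagnosis) is to discard the per-pass output entirely and compare:

./prng_bench 3 10 999999 > /dev/null

If that run matches the grep-piped time, the gap comes from the terminal rendering a million "Pass# ..." lines rather than from the code between the clock_gettime() calls; grep swallows most of that output in much the same way.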

    Read the article

  • ffmpeg: Could not find codec parameters for stream 0 (Video: h264) unspecified size

    - by dempap
I am trying to convert a video from .raw to .mp4. For this reason I downloaded, built and installed both x264 and ffmpeg. However, the command:

ffmpeg -f h264 -i output.raw -vcodec copy output.mp4

fails with the error in the title ("Could not find codec parameters for stream 0 (Video: h264): unspecified size"; the original post showed a screenshot). Is there any way to fix this? Other commands I also ran:

1. root@beagleboard:/# v4l2-ctl --list-formats
   ioctl: VIDIOC_ENUM_FMT
       Index       : 0
       Type        : Video Capture
       Pixel Format: 'YUYV'
       Name        : YUV 4:2:2 (YUYV)

       Index       : 1
       Type        : Video Capture
       Pixel Format: 'MJPG' (compressed)
       Name        : MJPEG

2. root@beagleboard:/dev# v4l2-ctl --set-fmt-video=pixelformat=0
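One avenue worth checking (an assumption based on the v4l2-ctl output above, not a verified fix): the camera only advertises YUYV and MJPEG, so output.raw may not contain an H.264 stream at all. If it holds raw YUYV frames, telling ffmpeg the format and frame size explicitly would look like the following, where the frame size is a placeholder to match your actual capture settings:

ffmpeg -f rawvideo -pix_fmt yuyv422 -s 640x480 -i output.raw output.mp4

If the file really is raw H.264, adding a frame-rate hint (e.g. -r 30) before -i sometimes gets the demuxer past the "unspecified size" probe failure.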

    Read the article

  • Overcoming the 1024 character limit with setx

    - by Madhur Ahuja
I am trying to set environment variables using the setx command, as follows:

setx PATH "f:\common tools\git\bin;f:\common tools\python\app;f:\common tools\python\app\scripts;f:\common tools\ruby\bin;f:\masm32\bin;F:\Borland\BCC55\Bin;%PATH%"

However, I get the following error if the value is more than 1024 characters long:

WARNING: The data being saved is truncated to 1024 characters.
SUCCESS: Specified value was saved.

Some of the paths at the end are then missing from the variable, truncated at the 1024-character limit as the warning suggests.
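One workaround to consider (hedged: this writes the user-level value directly and, unlike setx, does not expand %PATH% for you, so the existing value has to be spliced in explicitly): the .NET environment API has no 1024-character truncation, so a PowerShell one-liner such as

[Environment]::SetEnvironmentVariable("PATH", "f:\common tools\git\bin;...;" + [Environment]::GetEnvironmentVariable("PATH", "User"), "User")

stores the full string. The "..." stands for the remaining directories from the setx command above.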

    Read the article

  • Converting .wav (CCITT A-Law format) to .mp3 using LAME

    - by iar
I would like to convert WAV files to MP3 using the LAME encoder (lame.exe). The WAV files are recorded with the following specifications:

Bit rate: 64 kbps
Audio sample size: 8 bit
Channels: 1 (mono)
Audio sample rate: 8 kHz
Audio format: CCITT A-Law

If I try to convert such a WAV file using LAME, I get the following error message:

Unsupported data format: 0x0006

Could anyone provide me with a command line for lame.exe that will convert these kinds of WAV files?
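One approach that may help (a sketch, not verified against these exact files): lame reads PCM WAV input, and 0x0006 is the WAV format tag for A-law, so decoding to PCM first with another tool and then encoding should work. For example:

ffmpeg -i input.wav -acodec pcm_s16le temp.wav
lame temp.wav output.mp3

SoX can perform the same decode step if ffmpeg isn't available.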

    Read the article

  • net use - System error 1920 has occurred

    - by Martin
On Windows Server 2008 R2, when I run the following command I get error 1920. I've tried pretty much everything I am aware of and I can't figure out what causes the error. When I try to map the same network share using the UI and the same credentials, everything works fine.

net use * \\EAAA-12345\C$\ /USER:\\EAAA-12345\Administrator Passw0rd /PERSISTENT:NO

Anybody know how to get rid of the 1920 error?
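One detail worth double-checking (an observation about net use syntax, not a confirmed fix): /USER: expects domain\username or computer\username without leading backslashes, the password is a positional argument, and the share path normally carries no trailing backslash. The intended command may therefore have been:

net use * \\EAAA-12345\C$ Passw0rd /USER:EAAA-12345\Administrator /PERSISTENT:NO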

    Read the article

  • Set filename character encoding in Putty's PSFTP

    - by lacton
I am using PuTTY's command-line utility psftp.exe to transfer files between a UTF-8-configured Linux server and an MS Windows PC. File names containing non-ASCII characters (e.g., Japanese kana) are corrupted when using the 'ls' or 'get' commands of the psftp utility. I tried creating a saved session in putty.exe with the translation set to UTF-8, and using that saved session from psftp.exe (i.e., open saved_session_with_UTF8_translation), but the filename characters were still corrupted. How can I configure psftp.exe so that it uses the right charset for the file names?

    Read the article

  • How to run VisualSVN Server on port 443 running IIS on same server?

    - by Metro Smurf
Server 2008 R2 SP1, VisualSVN Server 2.1.6. The IIS server has about 10 sites. One of them uses https over port 443 with the following bindings:

http  x.x.x.39:80  site.com
http  x.x.x.39:80  www.site.com
https x.x.x.39:443

VisualSVN Server properties:

server name: svn.SomeSite.com
server port: 443
server binding: x.x.x.40

No sites on IIS are listening on x.x.x.40. When starting up VisualSVN Server, the following errors are thrown:

make_sock: could not bind to address x.x.x.40:443 (OS 10013) An attempt was made to access a socket in a way forbidden by its access permissions.
no listening sockets available, shutting down

When I stop site.com in IIS, VisualSVN Server starts up without a problem. When I bind VisualSVN Server to port 8443 and start site.com, VisualSVN Server starts without a problem. My goal is to be able to access VisualSVN Server at a normal URL, i.e. one that doesn't use a port number in the address: https://svn.site.com vs https://svn.site.com:8443. What needs to be configured to allow VisualSVN Server to run on port 443 with IIS running on the same server?

Edit / Answer

The answer provided by Ivan pointed me in the right direction. For anyone else running into this, here is a bit more information. Even though my IIS had no bindings set to the IP address I am using for VisualSVN, IIS will still take the IP address hostage unless IIS is explicitly told which IP addresses to listen on. There is no GUI in Win Server 2k8 to configure the IP addresses IIS listens on; by default, IIS listens on all IP addresses assigned to the server. The following will configure IIS to listen only on the IP addresses you want:

1. Open a command prompt.
2. Enter: netsh
3. Enter: http
4. Enter: show iplisten -- this will show a table of the IP addresses IIS is listening on. By default, the table will be empty (I guess this means IIS listens to all IPs).
5. For each IP address IIS should listen on, enter: add iplisten ipaddress=x.x.x.x
6. Enter: show iplisten -- you should now see all the IP addresses added to the listening table.
7. Exit, and then reset IIS.

Each of these commands can also be run directly, i.e. netsh http show iplisten.

If you need to delete an IP address from the listening table:

1. Open a command prompt.
2. Enter: netsh
3. Enter: http
4. Enter: delete iplisten ipaddress=x.x.x.x
5. Exit, and then reset IIS.

    Read the article

  • megacli forceWB

    - by Pascal den Bekker
We are using the following RAID controller: LSI Logic / Symbios Logic MegaRAID SAS 2008. There is no BBU on board; is there a way to force the write-back (WB) cache? I tried the following:

/opt/MegaRAID/MegaCli/MegaCli64 -LDSetProp CachedBadBBU -L0 -a0

which fails with:

Failed to set Write Cache OK if bad BBU on Adapter 0, VD 0.
FW error description: The requested command has invalid arguments.

Can anyone help? Cheers, Pascal
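One thing that may be worth trying (hedged: option names vary across MegaCli releases, so check MegaCli64 -h on your build): the write-back policy itself is set with WB, and some builds expose a ForcedWB property that keeps write-back enabled regardless of BBU state:

/opt/MegaRAID/MegaCli/MegaCli64 -LDSetProp WB -L0 -a0
/opt/MegaRAID/MegaCli/MegaCli64 -LDSetProp ForcedWB -Immediate -L0 -a0

Keep in mind that running write-back without a BBU means cached writes are lost on power failure, so this trades safety for speed.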

    Read the article

  • Unable to enable FTP access of Mac OSX Snow Leopard

    - by Xetius
When I try to enable the FTP service in the preferences (File Sharing > Options > Share Files and Folders Using FTP), the check box enables and then disables again. The console gives me the message:

16/04/2010 12:14:20 com.apple.coreservicesd[51] sh: launchctl: command not found

This indicates to me that it can't find the launchctl executable. launchctl is present in the folder /bin, and /bin is set in the PATH variable for the sh and bash shells and also in the ~/.MacOS/environment.plist. How can I fix this so that the preferences pane can find launchctl and I can enable the FTP service?
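If the checkbox keeps reverting, one workaround on Snow Leopard (hedged: this bypasses the Sharing preferences entirely and assumes the stock launchd job location) is to load the FTP daemon directly from a terminal:

sudo launchctl load -w /System/Library/LaunchDaemons/ftp.plist

and the matching "sudo launchctl unload -w ..." turns it off again.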

    Read the article

< Previous Page | 482 483 484 485 486 487 488 489 490 491 492 493  | Next Page >