Search Results

Search found 1399 results on 56 pages for 'naming'.


  • Guidance: How to lay out your files for an Ideal Solution

    - by Martin Hinshelwood
    Creating a solution and keeping it maintainable over time is an art, not a science. I like being pedantic and having a place for everything, no matter how small. For setting up the Areas to run multiple projects under one solution, see my post on When should I use Areas in TFS instead of Team Projects, and for an explanation of branching see Guidance: A Branching strategy for Scrum Teams.

    Update 17th May 2010 – We are currently trialling running a single Sprint branch to improve our history.

    Whenever I set up a new Team Project I implement the basic version control structure. I put “readme.txt” files in the folder structure explaining the different levels, and a solution file called “[Client].[Product].sln” located at “$/[Client]/[Product]/DEV/Main” within version control. Developers should add any projects they create to that solution, named in the format “[Client].[Product].[ProductArea].[Assembly]”, and they will be picked up and built automatically once you set up Automated Builds using Team Foundation Build. All test projects need to use MSTest to get proper IDE and Team Foundation Build integration out of the box, and should be named for the assembly they test, with a naming convention of “[Client].[Product].[ProductArea].[Assembly].Tests”.

    Here is a description of the folder layout; this content should be replicated in readme files under version control in the relevant locations so that even developers new to the project can see how to do it.

    Figure: The Team Project level - at this level there should be a folder for each of the products you are building, if you are using Areas correctly in TFS 2010. Try very hard to avoid spaces, as these names always end up in a URL eventually; e.g. "Code Auditor" should be "CodeAuditor".

    Figure: Product level - at this level there should be only 3 folders (DEV, RELEASE and SAFE), all of which should be in capitals. These folders represent the three stages of your application production line. Each of them may contain multiple branches, but this format leaves all of your branches at the same level.

    Figure: The DEV folder is where all of the development branches reside. The DEV folder will contain the "Main" branch and all feature branches, if they are being used. The DEV designation specifies that the code in every branch under this folder has not been released or made ready for release. Feature branches MUST merge (Forward Integrate) from Main and stabilise prior to merging (Reverse Integration) back down into Main and being decommissioned.

    Figure: In the feature branching scenario only merges are allowed onto Main; no development can be done there. Once we have a mature product it is important that new features being developed in parallel are kept separate. This would most likely be used if we had more than one Scrum team working on a single product.

    Figure: When we are ready to do a release of our software we create a release branch that is then stabilised prior to deployment. This protects the serviceability of our released code, allowing developers to fix bugs and re-release an existing version.

    Figure: All bugs found in a release are fixed on the release branch, and a new deployment is created. After the deployment is created, the bug fixes are then merged (Reverse Integration) into the Main branch. We do this to separate our development from our production-ready code.

    Figure: SAFE or RTM is a read-only record of what you actually released. Labels are not immutable, so they are useless in this circumstance. When we have completed stabilisation of the release branch and we are ready to deploy to production, we create a read-only copy of the code for reference. In some cases this could be a regulatory concern, but in most cases it protects the company building the product from legal entanglements based on what you did or did not release.

    Figure: This allows us to reference any particular version of our application that was ever shipped.

    In addition, I am an advocate of having a single solution with all the project folders directly under the “Trunk”/“Main” folder, using the full name for the project folders.

    Figure: The ideal solution. If you must have multiple solutions because you need to use more than one version of Visual Studio, name the solutions “[Client].[Product][VSVersion].sln” and have them reside in the same folder as the other solution. This makes automated builds easier and improves the discoverability of your code and its dependencies.

    Send me your feedback!

    Technorati Tags: VS ALM,VSTS Developing,VS 2010,VS 2008,TFS 2010,TFS 2008,TFBS
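    To make the structure concrete, here is a sketch of the resulting version control tree. The client, product, and assembly names are hypothetical, purely for illustration:

        $/Fabrikam
            CodeAuditor
                DEV
                    Main
                        Fabrikam.CodeAuditor.sln
                        readme.txt
                        Fabrikam.CodeAuditor.Scanner.Engine
                        Fabrikam.CodeAuditor.Scanner.Engine.Tests
                    FeatureX          (feature branch, if used)
                RELEASE
                    Release1
                SAFE
                    Release1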

    Read the article

  • Adventures in Lab Management Configuration: Part 2 of 3

    - by Enrique Lima
    The first post was the high-level overview. Now it is time for the details of what was done to the existing CMMI project, based on CMMI v4.2.

    The first step was to go into Visual Studio, then to the Team Project Collection Settings, and then to the Process Template Manager. Once there, it was a matter of selecting the appropriate template (MSF for CMMI Process Improvement v5.0) and downloading it to a location I could reference later (for example C:\Templates). Then on to using the steps from the guidance post. Since I was using an x64 deployment, I will refer to the tool path as <toolpath>; the actual path to reference in a 64-bit environment is “C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE”.

    As I mentioned in the previous post, make sure to first perform a backup of the Configuration, Collection and Warehouse DBs. If you did not apply any changes to the names, you will find those as tfs_Configuration, tfs_DefaultCollection and tfs_Warehouse.

    Now, the work needed with the witadmin tool. That includes uploading the structures that differ from v4.2 to v5.0. There is likely going to be an issue with the naming of some fields. For example, TFS 2010 likes something along the lines of “Area ID”, whereas TFS 2008 would have had it as “AreaID”. This will need to be corrected. Some posts will have you go through this after the errors pop up; I would recommend doing it prior to executing the importwitd step.

    witadmin listfields /collection:<path to collection> > c:\ListFields.txt

    Review the following fields: for AreaID, check the Name property, and if it states “AreaID” you will need to rename it to “Area ID”. ExternalLinkCount, RelatedLinkCount, HyperLinkCount, AttachedFileCount and IterationID are the other fields to check. To correct the issue, execute the following:

    witadmin changefield /collection:<path to collection> /n:"System.ExternalLinkCount" /name:"External Link Count"

    Repeat for Area ID, Related Link Count, Hyperlink Count, Attached File Count and Iteration ID. Once this is done, proceed with the commands below.

    witadmin importwitd /collection:<path to collection> /p:<project> /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\WorkItem Tracking\TypeDefinitions\TestCase.xml"
    witadmin importwitd /collection:<path to collection> /p:<project> /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\WorkItem Tracking\TypeDefinitions\SharedStep.xml"
    witadmin importcategories /collection:<path to collection> /p:<project> /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\WorkItem Tracking\categories.xml"

    Modifications to the Bug definition: the first step is to export the existing definition.

    witadmin exportwitd /collection:<path to collection> /p:<project> /n:bug /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\MyBug.xml"

    Make modifications to the recently exported MyBug.xml file. Details for the modification are here: http://msdn.microsoft.com/en-us/library/ff452591.aspx#ModifyTask. Once the changes are done, proceed with the import command:

    witadmin importwitd /collection:<path to collection> /p:<project> /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\MyBug.xml"

    Repeat the process for the Scenario or Requirement type definition:

    witadmin exportwitd /collection:<path to collection> /p:<project> /n:requirement /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\MyRequirement.xml"

    Make modifications to the recently exported MyRequirement.xml file. Details for the modification are here: http://msdn.microsoft.com/en-us/library/ff452591.aspx#ModifyTask. Once the changes are done, proceed with the import command:

    witadmin importwitd /collection:<path to collection> /p:<project> /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\MyRequirement.xml"

    Provide the Bug Field Mapping definition, after creating the file as specified here: http://msdn.microsoft.com/en-us/library/ff452591.aspx#TCMBugFieldMapping

    tcm bugfieldmapping /import /mappingfile:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\bugfieldmappings.xml" /collection:<path to collection> /teamproject:<project name>
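    Spelling out the “repeat” step above, the full rename sequence would look like the following sketch. The System.* reference names for the remaining fields are assumptions based on the standard TFS system field names, so verify them against your c:\ListFields.txt output first:

    witadmin changefield /collection:<path to collection> /n:"System.AreaId" /name:"Area ID"
    witadmin changefield /collection:<path to collection> /n:"System.RelatedLinkCount" /name:"Related Link Count"
    witadmin changefield /collection:<path to collection> /n:"System.HyperLinkCount" /name:"Hyperlink Count"
    witadmin changefield /collection:<path to collection> /n:"System.AttachedFileCount" /name:"Attached File Count"
    witadmin changefield /collection:<path to collection> /n:"System.IterationId" /name:"Iteration ID"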

    Read the article

  • MySQL Connector/Net 6.6.3 Beta 2 has been released

    - by fernando
    MySQL Connector/Net 6.6.3, a new version of the all-managed .NET driver for MySQL, has been released. This is the second of two beta releases intended to introduce users to the new features in the release. This release is feature complete and should be stable enough for users to understand the new features and how we expect them to work. As is the case with all non-GA releases, it should not be used in any production environment. It is appropriate for use with MySQL server versions 5.0-5.6.

    It is now available in source and binary form from http://dev.mysql.com/downloads/connector/net/#downloads and mirror sites (note that not all mirror sites may be up to date at this point; if you can't find this version on some mirror, please try again later or choose another download site).

    The 6.6 version of MySQL Connector/Net brings the following new features:
      • Stored routine debugging
      • Entity Framework 4.3 Code First support
      • Pluggable authentication (now third parties can plug new authentication mechanisms into the driver)
      • Full Visual Studio 2012 support: everything from Server Explorer to IntelliSense and the Stored Routine debugger

    Stored Procedure Debugging
    -------------------------------------------
    We are very excited to introduce stored procedure debugging into our Visual Studio integration. It works in a very intuitive manner: simply click 'Debug Routine' from Server Explorer. You can debug stored routines, functions, and triggers.

    This release contains fixes specific to the debugger as well as fixes in other areas of Connector/NET:
      • Added feature to define initial values for InOut stored procedure arguments.
      • Debugger: fixed Visual Studio locked connection after debugging a routine.
      • Fix for bug "Cannot Create an Entity with a Key of Type String" (MySQL bug #65289, Oracle bug #14540202).
      • Fix for bug "CacheServerProperties can cause 'Packet too large' error" (MySQL bug #66578, Oracle bug #14593547).
      • Fix for handling unnamed parameters in MySqlCommand. This fix allows MySqlCommand to handle parameters without requiring naming (e.g. INSERT INTO Test (id,name) VALUES (?, ?)) (MySQL bug #66060, Oracle bug #14499549).
      • Fixed end-of-line issue when debugging a routine.
      • Added validation to avoid overwriting a routine backup file when it hasn't changed.
      • Fixed inheritance in Entity Framework Code First scenarios (MySQL bug #63920, Oracle bug #13582335).
      • Fixed "Trying to customize column precision in Code First does not work" (MySQL bug #65001, Oracle bug #14469048).
      • Fixed bug "ASP.NET Membership database fails on MySQL database UTF32" (MySQL bug #65144, Oracle bug #14495292).
      • Fix for MySqlCommand.LastInsertedId holding only 32-bit values (MySQL bug #65452, Oracle bug #14171960).
      • Fixed "Decimal type should have digits at right of decimal point"; the default is now 2, and the user's changes in the EDM designer are recognized (MySQL bug #65127, Oracle bug #14474342).
      • Fix for NullReferenceException when saving an uninitialized row in Entity Framework (MySQL bug #66066, Oracle bug #14479715).
      • Fix for error when calling RoleProvider.RemoveUserFromRole(), which caused an exception due to a wrong table being used (MySQL bug #65805, Oracle bug #14405338).
      • Fix for "Memory Leak on MySql.Data.MySqlClient.MySqlCommand": too many MemoryStream instances created (MySQL bug #65696, Oracle bug #14468204).
      • Added ANTLR attribution notice (Oracle bug #14379162).
      • Fix for debugger failing when a routine contains an if-elseif-else.
      • Also, the programming interface for authentication plugins has been redefined.

    Some limitations remain, due to the current debugger architecture:
      • Some MySQL functions cannot be debugged currently (get_lock, release_lock, begin, commit, rollback, set transaction level).
      • Only one debug session may be active on a given server.

    The debugger is feature complete at this point. We look forward to your feedback.

    Documentation
    -------------------------------------
    You can view current Connector/Net documentation at http://dev.mysql.com/doc/refman/5.5/en/connector-net.html. You can find our team blog at http://blogs.oracle.com/MySQLOnWindows. You can also post questions on our forums at http://forums.mysql.com/.

    Enjoy, and thanks for the support!
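    As a quick illustration of the unnamed-parameter fix above, here is a minimal C# sketch using the Connector/Net ADO.NET API. The connection string and table are placeholders for illustration; the named form shown is the long-standing usage, and per the bug report the 6.6.3 fix additionally accepts the unnamed '?' positional form:

    using MySql.Data.MySqlClient;

    class ParameterDemo
    {
        static void Main()
        {
            // Placeholder connection string, purely for illustration.
            using (var conn = new MySqlConnection("server=localhost;database=test;uid=user;pwd=secret"))
            {
                conn.Open();

                // Long-standing named-parameter form:
                var cmd = new MySqlCommand("INSERT INTO Test (id, name) VALUES (@id, @name)", conn);
                cmd.Parameters.AddWithValue("@id", 1);
                cmd.Parameters.AddWithValue("@name", "first");
                cmd.ExecuteNonQuery();

                // The 6.6.3 fix also accepts the unnamed positional form from the
                // bug report, e.g. "INSERT INTO Test (id, name) VALUES (?, ?)".
            }
        }
    }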

    Read the article

  • Extending Blend for Visual Studio 2013

    - by Chris Skardon
    Originally posted on: http://geekswithblogs.net/cskardon/archive/2013/11/01/extending-blend-for-visual-studio-2013.aspx

    So, I got a comment yesterday on my post about Extending Blend 4 and Blend for Visual Studio 2012, asking if I knew how to get it working for Blend for Visual Studio 2013. My initial thought was: just change the location you get the Blend DLLs from, from Visual Studio 11.0 to 12.0, and you're all set. So I went to do that, only to discover that the DLLs I normally reference don't exist. So… I made the presumption that the actual process of using MEF etc. is still the same. I was wrong.

    The route to discovery required DotPeek and opening a few of Blend's DLLs. Browsing through the Blend install directory (./Microsoft Visual Studio 12.0/Blend/), I noticed the .addin files. So I decided to peek into the SketchFlow DLL, then promptly remembered SketchFlow is quite a big thing, and hunting through there is not ideal. Luckily there is another DLL using an .addin file, ‘Microsoft.Expression.Importers.Host’, so we'll go for that instead. We can see it's still using the ‘IPackage’ formula, but where is that sucker? Well, we just press F12 on the ‘IPackage’ bit and DotPeek takes us there, with a very handy comment at the top:

    // Type: Microsoft.Expression.Framework.IPackage
    // Assembly: Microsoft.Expression.Framework, Version=12.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a
    // MVID: E092EA54-4941-463C-BD74-283FD36478E2
    // Assembly location: C:\Program Files (x86)\Microsoft Visual Studio 12.0\Blend\Microsoft.Expression.Framework.dll

    Now we know where the IPackage interface is defined, so let's just try writing a control. Last time I did a separate DLL for the control; this time I'm not, but it still works if you want to do it that way. Let's build a control!

    STEP 1
    Create a new WPF application. Naming doesn't matter any more! I have gone with ‘Hello2013’ (see what I did there?)

    STEP 2
    Delete: App.Config, App.xaml, MainWindow.xaml. We won't be needing them.

    STEP 3
    Change your application to be a Class Library instead. (You might also want to delete the ‘vshost’ stuff in your output directory now, as it only exists for hosting the WPF app and just causes clutter.)

    STEP 4
    Add a reference to ‘Microsoft.Expression.Framework.dll’ (which you can find in ‘C:\Program Files\Microsoft Visual Studio 12.0\Blend’ – that's Program Files (x86) if you're on an x64 machine!).

    STEP 5
    Add a User Control. I'm going with ‘Hello2013Control’, and following on from last time, it's just a TextBlock in a Grid:

    <UserControl x:Class="Hello2013.Hello2013Control"
                 xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                 xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                 xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
                 xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
                 mc:Ignorable="d"
                 d:DesignHeight="300" d:DesignWidth="300">
        <Grid>
            <TextBlock>Hello Blend for VS 2013</TextBlock>
        </Grid>
    </UserControl>

    STEP 6
    Add a class to load the package – I've called it – yes, you guessed it – Hello2013Package, which looks like this:

    namespace Hello2013
    {
        using Microsoft.Expression.Framework;
        using Microsoft.Expression.Framework.UserInterface;

        public class Hello2013Package : IPackage
        {
            private Hello2013Control _hello2013Control;
            private IWindowService _windowService;

            public void Load(IServices services)
            {
                _windowService = services.GetService<IWindowService>();
                Initialize();
            }

            private void Initialize()
            {
                _hello2013Control = new Hello2013Control();
                if (_windowService.PaletteRegistry["HelloPanel"] == null)
                    _windowService.RegisterPalette("HelloPanel", _hello2013Control, "Hello Window");
            }

            public void Unload() { }
        }
    }

    You might note that, compared to the 2012 version, we're no longer [Exporting(typeof(IPackage))]. The file you create in STEP 7 covers this for us.

    STEP 7
    Add a new file called ‘<PROJECT_OUTPUT_NAME>.addin’ – in reality you can call it anything and it'll still be read in just fine; it's just nicer if it all matches up, so I have ‘Hello2013.addin’. Content-wise, we need to have:

    <?xml version="1.0" encoding="utf-8"?>
    <AddIn AssemblyFile="Hello2013.dll" />

    obviously replacing ‘Hello2013.dll’ with whatever your DLL is called.

    STEP 8
    Set the ‘addin’ file to be copied to the output directory.

    STEP 9
    Build!

    STEP 10
    Go to your output directory (./bin/debug), copy the 3 files (Hello2013.dll, Hello2013.pdb, Hello2013.addin), and paste them into the ‘Addins’ folder in your Blend directory (C:\Program Files\Microsoft Visual Studio 12.0\Blend\Addins).

    STEP 11
    Start Blend for Visual Studio 2013.

    STEP 12
    Go to the ‘Window’ menu and select ‘Hello Window’.

    STEP 13
    Marvel at your new control!

    Feel free to email me / comment with any problems!

    Read the article

  • Master Page: Dynamically Adding Rows in ASP Table on Button Click event

    - by Vincent Maverick Durano
    In my previous post here, I wrote an example that demonstrates how to generate table rows dynamically in an ASP.NET Table control on the click of a Button control. Based on some comments on my previous example and in the forums, people wanted to implement it within a master page. Unfortunately the code in my previous example doesn't work in a master page, for the following main reasons:

    1. The Table is dynamically added within the Form tag, and so the TextBox controls will not be generated correctly in the page.
    2. The data will not be retained across postbacks, because the SetPreviousData() method is looking for the Table element within the Page and not within the MasterPage.
    3. The Request.Form key value must be set correctly, since all controls within the master page are prefixed with the naming container ID to prevent duplicate IDs in the final rendered HTML. For example, the TextBox control with the ID TextBoxRow will be rendered with the ID ctl00$MainBody$TextBoxRow.

    In order for the previous example to work within a master page we will have to correct those three main reasons above, and this post will guide you through correcting them. Suppose we have this content page declaration below:

    <asp:Content ID="Content1" ContentPlaceHolderID="MainHead" Runat="Server">
    </asp:Content>
    <asp:Content ID="Content2" ContentPlaceHolderID="MainBody" Runat="Server">
        <asp:PlaceHolder ID="PlaceHolder1" runat="server">
            <asp:Button ID="BTNAdd" runat="server" Text="Add New Row" OnClick="BTNAdd_Click" />
        </asp:PlaceHolder>
    </asp:Content>

    As you will notice, I've added a PlaceHolder control within the MainBody ContentPlaceHolder. This is because we are going to generate the Table in the PlaceHolder instead of generating it within the Form element. Now that issue #1 is corrected, let's proceed to the code-behind part.

    Here are the full code blocks below:

    using System;
    using System.Web.UI;
    using System.Web.UI.WebControls;

    public partial class DynamicControlDemo : System.Web.UI.Page
    {
        private int numOfRows = 1;

        protected void Page_Load(object sender, EventArgs e)
        {
            // Generate the rows on initial load
            if (!Page.IsPostBack)
            {
                GenerateTable(numOfRows);
            }
        }

        protected void BTNAdd_Click(object sender, EventArgs e)
        {
            if (ViewState["RowsCount"] != null)
            {
                numOfRows = Convert.ToInt32(ViewState["RowsCount"].ToString());
                GenerateTable(numOfRows);
            }
        }

        private void SetPreviousData(int rowsCount, int colsCount)
        {
            Table table = (Table)this.Page.Master.FindControl("MainBody").FindControl("Table1"); // ****
            if (table != null)
            {
                for (int i = 0; i < rowsCount; i++)
                {
                    for (int j = 0; j < colsCount; j++)
                    {
                        // Extract the dynamic controls from the Table
                        TextBox tb = (TextBox)table.Rows[i].Cells[j].FindControl("TextBoxRow_" + i + "Col_" + j);
                        // Use the Request object to get the previous data of the dynamic TextBox
                        tb.Text = Request.Form["ctl00$MainBody$TextBoxRow_" + i + "Col_" + j]; // ****
                    }
                }
            }
        }

        private void GenerateTable(int rowsCount)
        {
            // Create the Table and add it to the page
            Table table = new Table();
            table.ID = "Table1";
            PlaceHolder1.Controls.Add(table); // ****

            // The number of columns to be generated
            const int colsCount = 3; // You can change the value of 3 based on your requirements

            // Now iterate through the table and add your controls
            for (int i = 0; i < rowsCount; i++)
            {
                TableRow row = new TableRow();
                for (int j = 0; j < colsCount; j++)
                {
                    TableCell cell = new TableCell();
                    TextBox tb = new TextBox();
                    // Set a unique ID for each TextBox added
                    tb.ID = "TextBoxRow_" + i + "Col_" + j;
                    // Add the control to the TableCell
                    cell.Controls.Add(tb);
                    // Add the TableCell to the TableRow
                    row.Cells.Add(cell);
                }
                // And finally, add the TableRow to the Table
                table.Rows.Add(row);
            }

            // Set previous data on postbacks
            SetPreviousData(rowsCount, colsCount);

            // Store the current rows count in ViewState
            rowsCount++;
            ViewState["RowsCount"] = rowsCount;
        }
    }

    As you can see, the code is pretty much similar to the previous example except for the lines marked with // **** above. That's it! I hope someone finds this post useful!

    Technorati Tags: Dynamic Controls,ASP.NET,C#,Master Page

    Read the article

  • Computer Networks UNISA - Chap 15 – Network Management

    - by MarkPearl
    After reading this section you should be able to:

      • Understand network management and the importance of documentation, baseline measurements, policies, and regulations to assess and maintain a network's health.
      • Manage a network's performance using SNMP-based network management software, system and event logs, and traffic-shaping techniques.
      • Identify the reasons for and elements of an asset management system.
      • Plan and follow regular hardware and software maintenance routines.

    Fundamentals of Network Management

    Network management refers to the assessment, monitoring, and maintenance of all aspects of a network, including checking for hardware faults, ensuring high QoS, maintaining records of network assets, etc. The scope of network management differs depending on the size and requirements of the network. All subtopics of network management share the goals of enhancing efficiency and performance while preventing costly downtime or loss.

    Documentation

    The way documentation is stored may vary, but to adequately manage a network one should at least record the following:

      • Physical topology (types of LAN and WAN topologies – ring, star, hybrid)
      • Access method (does it use Ethernet 802.3, token ring, etc.)
      • Protocols
      • Devices (switches, routers, etc.)
      • Operating systems
      • Applications
      • Configurations (which version of operating system, and config files for server/client software)

    Baseline Measurements

    A baseline is a report of the network's current state of operation. Baseline measurements might include the utilization rate for your network backbone, the number of users logged on per day, etc. Baseline measurements allow you to compare future performance increases or decreases caused by network changes or events with past network performance. Obtaining baseline measurements is the only way to know for certain whether a pattern of usage has changed, or whether a network upgrade has made a difference. There are various tools available for measuring baseline performance on a network.

    Policies, Procedures, and Regulations

    Following rules helps limit chaos, confusion, and possibly downtime. The following policies, procedures, and regulations make for sound network management:

      • Media installations and management (includes designing the physical layout of cable, etc.)
      • Network addressing policies (includes choosing and applying an addressing scheme)
      • Resource sharing and naming conventions (includes rules for logon IDs)
      • Security-related policies
      • Troubleshooting procedures
      • Backup and disaster recovery procedures

    In addition to internal policies, a network manager must consider external regulatory rules.

    Fault and Performance Management

    After documenting every aspect of your network and following policies and best practices, you are ready to assess your network's status on an ongoing basis. This process includes both performance management and fault management.

    Network Management Software

    To accomplish both fault and performance management, organizations often use enterprise-wide network management software. There are various software packages that do this; each collects data from multiple networked devices at regular intervals, in a process called polling. Each managed device runs a network management agent. So as not to affect the performance of a device while collecting information, agents do not demand significant processing resources. The definitions of managed devices and the data they provide are collected in a MIB (Management Information Base).

    Agents communicate information about managed devices via any of several application layer protocols. On modern networks most agents use SNMP, which is part of the TCP/IP suite and typically runs over UDP on port 161. Because of their flexibility and sophistication, network management applications are a challenge to configure and fine-tune. One needs to be careful to collect only relevant information and not cause performance issues (i.e. polling a device every 5 seconds can be a problem with thousands of devices). MRTG (Multi Router Traffic Grapher) is a simple command line utility that uses SNMP to poll devices and collects data in a log file. MRTG can be used with Windows, UNIX and Linux.

    System and Event Logs

    Virtually every condition recognized by an operating system can be recorded. This is typically done using event logs. In Windows there is a GUI event log viewer. Similar information is recorded in UNIX and Linux in a system log. Much of the information collected in event logs and syslog files does not point to a problem, even if it is marked with a warning, so it is important to filter your logs appropriately to reduce the noise.

    Traffic Shaping

    When a network must handle high volumes of traffic, users benefit from a performance management technique called traffic shaping. Traffic shaping involves manipulating certain characteristics of packets, data streams, or connections to manage the type and amount of traffic traversing a network or interface at any moment. Its goals are to assure timely delivery of the most important traffic while offering the best possible performance for all users. Several types of traffic prioritization exist, including prioritizing traffic according to any of the following characteristics:

      • Protocol
      • IP address
      • User group
      • DiffServ
      • VLAN tag in a Data Link layer frame
      • Service or application

    Caching

    In addition to traffic shaping, a network or host might use caching to improve performance. Caching is the local storage of frequently needed files that would otherwise be obtained from an external source. By keeping files close to the requester, caching allows the user to access those files quickly. The most common type of caching is Web caching, in which Web pages are stored locally. To an ISP, caching is much more than just a convenience. It prevents a significant volume of WAN traffic, thus improving performance and saving money.

    Asset Management

    Another key component in managing networks is identifying and tracking hardware. This is called asset management. The first step in asset management is to take an inventory of each node on the network. You will also want to keep records of every piece of software purchased by your organization. Asset management simplifies maintaining and upgrading the network, chiefly because you know what the system includes. In addition, asset management provides network administrators with information about the costs and benefits of certain types of hardware or software.

    Change Management

    Networks are always in a state of flux, with changes including:

      • Software changes and patches
      • Client upgrades
      • Shared application upgrades
      • NOS upgrades
      • Hardware and physical plant changes
      • Cabling upgrades
      • Backbone upgrades

    For a detailed explanation of each of these, read the textbook (Page 750 – 761).
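    Tying the System and Event Logs discussion above to code: here is a minimal C# sketch of filtering a Windows event log down to errors, using the standard System.Diagnostics.EventLog class. The log name and the error-only filter are illustrative assumptions:

    using System;
    using System.Diagnostics;

    class EventLogFilter
    {
        static void Main()
        {
            // Open the local Application log (Windows-only API).
            var log = new EventLog("Application");

            // Surface only error entries, skipping the informational and
            // warning noise the chapter warns about.
            foreach (EventLogEntry entry in log.Entries)
            {
                if (entry.EntryType == EventLogEntryType.Error)
                {
                    Console.WriteLine("{0}  {1}: {2}",
                        entry.TimeGenerated, entry.Source, entry.Message);
                }
            }
        }
    }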

    Read the article

  • Second Day of Data Integration Track at OpenWorld 2012

    - by Doug Reid
    Our second day at OpenWorld, and the Data Integration team was very active with customer meetings, product updates, product demonstrations, sessions, plus much more. If the volume of traffic by our demo pods is any indicator, this is a record year for attendance at OpenWorld. The DIS team has had a tremendous number of people stop by our demo pods to learn about the latest product releases or to speak to one of our product managers.

    For Oracle GoldenGate, there has been a great deal of interest in Integrated Capture and the Oracle GoldenGate Monitor plug-in for Enterprise Manager. Our customer panels this year have been very well attended, and on Tuesday we held the “Real World Operational Reporting with Oracle GoldenGate Customer Panel”. On the panel this year we had Michael Wells from Raymond James, Joy Mathew and Venki Govindarajan from Comcast, and Serkan Karatas from Turk Telekom. Our panelists have a great mix of experiences and all are passionate about using Oracle Data Integration products to solve very complex use cases. Each panelist was given ten minutes to give an overview of their use of our products, followed by a barrage of questions from the audience.

    Michael Wells spoke about using Oracle GoldenGate for heterogeneous real-time replication from HP (Tandem) NonStop to SQL Server, and emphasized the need for standard naming conventions when customers configure GoldenGate, as the practice is immensely helpful when debugging a problem. Joy Mathew and Venki Govindarajan from Comcast described how they have used GoldenGate for over a decade, and their experiences of using the product for replicating data from HP NonStop to Teradata. Serkan Karatas from Turk Telekom dove into using Oracle GoldenGate and the value of archiving data in extremely large databases, which in Turk Telekom's case resulted in a one-month ROI for the entire project. Thanks again to our panelists and audience participants for making the session interactive and informative.

    For Wednesday we have a number of sessions available to attendees, plus two hands-on labs, which I have listed below. If you are unable to attend our hands-on lab for Oracle GoldenGate Veridata, it is available online at youtube.com.

    Sessions

    11:45 AM - 12:45 PM
      Best Practices for High Availability with Oracle GoldenGate on Oracle Exadata - Moscone South - 102
    1:15 PM - 2:15 PM
      Customer Perspectives: Oracle Data Integrator - Marriott Marquis - Golden Gate C3
      Oracle GoldenGate Case Study: Real-Time Operational Reporting Deployment at Oracle - Moscone West - 2003
      Data Preparation and Ongoing Governance with the Oracle Enterprise Data Quality Platform - Moscone West - 3000
    3:30 PM - 4:30 PM
      Best Practices for Conflict Detection and Resolution in Oracle GoldenGate for Active/Active - Moscone West - 3000
    5:00 PM - 6:00 PM
      Tuning and Troubleshooting Oracle GoldenGate on Oracle Database - Moscone South - 102

    Hands-on Labs

    10:15 AM - 11:15 AM
      Introduction to Oracle GoldenGate Veridata - Marriott Marquis - Salon 1/2
    11:45 AM - 12:45 PM
      Oracle Data Integrator and Oracle SOA Suite: Hands-on Lab - Marriott Marquis - Salon 1/2

    If you are at OpenWorld please join us in these sessions. For a full review of the data integration track at OpenWorld please see our Focus-On document.

    Read the article

  • What is Happening vs. What is Interesting

    - by Geertjan
    Devoxx 2011 was yet another confirmation that all development everywhere is either on the web or on mobile phones. Whether you looked at the conference schedule or attended sessions or talked to speakers at any point at all, it was very clear that no development whatsoever is done anymore on the desktop. In fact, that's something Tim Bray himself told me to my face at the speakers dinner. No new developments of any kind are happening on the desktop. Everyone who is currently on the desktop is working overtime to move all of their applications to the web. They're probably also creating a small subset of their application on an Android tablet, with an even smaller subset on their Android phone.

    Then you scratch that monolithic surface and find some interesting results. Without naming any names, I asked one of these prominent "ah, forget about the desktop" people at the Devoxx speakers dinner (and I have a witness): "Yes, the desktop is dead, but what about air traffic control, stock trading, oil analysis, risk management applications? In fact, what about any back office application that needs to be usable across all operating systems? Here there is no concern whatsoever with 100% accessibility, which is, after all, the only thing that the web has over the desktop (except when there's a network failure, of course, or when you find yourself in the 3/4 of the world where there are bandwidth problems). There are 1000's of hidden applications out there that have processing requirements, security requirements, and the requirement that they'll be available even when the network is down or even completely unavailable. Isn't that a valid use case, and aren't there 1000's of applications that fall into this so-called niche category? Are you not, in fact, confusing consumer applications, which are increasingly web-based and mobile-based, with high-end corporate applications, which typically need to do massive processing, of one kind or another, for which the web and mobile worlds are completely unsuited?"

    And you will not believe what the reply to the above question was. (Again, I have a witness to this discussion.) But here it is: "Yes. But those applications are not interesting. I do not want to spend any of my time or work in any way on those applications. They are boring."

    I'm sad to say that the leaders of the software development community, including those in the Java world, either share the above opinion or are led by it. Because they find something that is not new to be boring, they move on to what is interesting and start talking like the supposedly-boring developments don't even exist. (Kind of like a rapper pretending classical music doesn't exist.)

    Time and time again I find myself giving Java desktop development courses (at companies, i.e., not hobbyists or students, but companies, i.e., the places where dollars are earned), where developers say to me: "The course you're giving about creating cross-platform, loosely coupled, and highly cohesive applications is really useful to us. Why do we never find information about this topic at conferences? Why can we never attend a session at a conference where the story about pluggable cross-platform Java is told? Why do we get the impression that we are uncool because we're not on the web and because we're not on a mobile phone, while the reason for that is because we're creating $1,000,000 simulation software which has nothing to gain from being on the web or on the mobile phone?"

    And then I say: "Because nobody knows you exist.
Because you're not submitting abstracts to conferences about your very interesting use cases. And because conferences tend to focus on what is new, which tends to be web related (especially HTML 5) or mobile related (especially Android). Because you're not taking the responsibility on yourself to tell the real stories about the real applications being developed all the time and every day. Because you yourself think your work is boring, while in fact it is fascinating. Because desktop developers are working from 9 to 5 on the desktop, in secure environments, such as banks and defense, where you can't spend time, nor have the interest in, blogging your latest tip or trick, as opposed to web developers, who tend to spend a lot of time on the web anyway and are therefore much more inclined to create buzz about the kind of work they're doing." So, next time you look at a conference program and wonder why there's no stories about large desktop development projects in the program, here's the short answer: "No one is going to put those items on the program until you start submitting those kinds of sessions. And until you start blogging. Until you start creating the buzz that the web developers have been creating around their work for the past 10 years or so. And, yes, indeed, programmers get the conference they deserve." And what about Tim Bray? Ask yourself, as Google's lead web technology evangelist, how many desktop developers do you think he talks to and, more generally, what his frame of reference is and what, clearly, he considers to be most interesting.

    Read the article

  • SOA, Governance, and Drugs

    Why is IT governance important in service oriented architecture (SOA)? IT governance provides a framework for making appropriate decisions based on company guidelines and accepted standards. This framework also outlines each stakeholder's responsibilities and authority when making important architectural or design decisions. Furthermore, this framework of governance defines parameters and constraints that are used to give context and perspective when making decisions. The use of governance as it applies to SOA ensures that specific design principles and patterns are used when developing and maintaining services. When governance is consistently applied to systems, the following benefits are achieved, according to Anne Thomas Manes (2010):

      • Governance makes sure that services conform to standard interface patterns and common data modeling practices, and promotes the incorporation of existing system functionality by building on top of other available services across a system.
      • Governance defines development standards based on proven design principles and patterns that promote reuse and composition.
      • Governance provides developers a set of proven design principles, standards, and practices that promote the reduction of system-based component dependencies. When these guidelines are followed, individual components are easier to maintain.

    For me personally, I am a fan of IT governance, and feel that it is a valuable part of any corporate IT department. However, how it is implemented can really affect the value of using IT governance. Companies need to find a way to ensure that governance does not become extreme in its policies and procedures. I know that I would really dislike working under a completely totalitarian or laissez-faire version of governance. Developers need to be able to be creative in their designs, and too much governance can really impede the design process and prevent the optimal design from being developed. On the other hand, with no governance enforced, no standards will be followed and accepted design patterns will be ignored. I have personally had to spend a lot of time working in this particular scenario, and I have found that the concept of code reuse and composition is almost nonexistent. Because of this, too much time and money is wasted redeveloping aspects of an application that already exist within the system as a whole.

    I think moving forward we will see a staggered form of IT governance, regardless of whether it is for SOA or IT in general. Depending on the size of a company and the size of its IT department, I can see IT governance as a layered approach, in which the top layer is defined by enterprise architects who focus on abstract concepts pertaining to high-level design, general guidelines, acceptable best practices, and recommended design patterns. The next layer is defined by solution architects or department managers who further expand on the abstracted guidelines defined by the enterprise architects. This layer contains further definitions as to when various design patterns, coding standards, and best practices are to be applied, based on the context of the solutions being developed by the department. The final layer is defined by the system designer or a solutions architect assigned to a project, in that they define which design patterns will be used in a solution, naming conventions, and how a system will function based on the best practices defined by the previous layers.

    This layered approach allows IT departments to be flexible, in that system designers have creative leeway in designing solutions to meet the needs of the business, but must operate within the confines of the abstracted IT governance guidelines. A real-world example of this can be seen in the United States as it pertains to governance of the people, in that the US government defines rules and regulations in the abstract, and the state governments take these guidelines and apply them based on the will of the people in each individual state. Furthermore, the county or city governments are the ones that actually enforce these rules, based on how they are interpreted by the local community. To further define my example, the United States government rules that marijuana is illegal. Each individual state has the option to interpret this regulation as it wishes: the state of Florida determines that all uses of the drug are illegal, but the state of California legally allows the use of marijuana for medicinal purposes only. Based on these accepted practices, each local government enforces these rules, in that a police officer will arrest anyone in the state of Florida for having this drug on them as they walk down the street, but in California a person with a medical prescription for the drug will not get arrested.

    REFERENCES
    Thomas Manes, Anne. (2010). Understanding SOA Governance: http://www.soamag.com/I40/0610-2.php

    Read the article

  • Migrating R Scripts from Development to Production

    - by Mark Hornick
    “How do I move my R scripts stored in one database instance to another? I have my development/test system and want to migrate to production.”

    Users of Oracle R Enterprise Embedded R Execution will often store their R scripts in the R Script Repository in Oracle Database, especially when using the ORE SQL API. From previous blog posts, you may recall that Embedded R Execution enables running R scripts managed by Oracle Database using both R and SQL interfaces. In ORE 1.3.1, the SQL API requires scripts to be stored in the database and referenced by name in SQL queries. The SQL API enables seamless integration with database-based applications and ease of production deployment.

    Loading R scripts in the repository

    Before talking about migration, we'll first introduce how users store R scripts in Oracle Database. Users can add R scripts to the repository in R using the function ore.scriptCreate, or in SQL using the function sys.rqScriptCreate. For the sample R script

    id <- 1:10
    plot(1:100, rnorm(100), pch=21, bg="red", cex=2)
    data.frame(id=id, val=id/100)

    users wrap this in a function and store it in the R Script Repository with a name. In R, this looks like:

    ore.scriptCreate("RandomRedDots", function() {
      id <- 1:10
      plot(1:100, rnorm(100), pch=21, bg="red", cex=2)
      data.frame(id=id, val=id/100)
    })

    In SQL, this looks like:

    begin
      sys.rqScriptCreate('RandomRedDots',
        'function() {
           id <- 1:10
           plot(1:100, rnorm(100), pch=21, bg="red", cex=2)
           data.frame(id=id, val=id/100)
         }');
    end;
    /

    The R function ore.scriptDrop and SQL function sys.rqScriptDrop can be used to drop these scripts as well. Note that the system will give an error if the script name already exists.

    Accessing R scripts once they've been loaded

    If you're not using a source code control system, it is possible for your R scripts to be misplaced or files to be modified, making what is stored in Oracle Database the only or best copy of your R code. If you've loaded your R scripts to the database, it is straightforward to access these scripts from the database table SYS.RQ_SCRIPTS. For example:

    select * from sys.rq_scripts where name='myScriptName';

    From R, scripts in the repository can be loaded into the R client engine using a function similar to the following:

    ore.scriptLoad <- function(name) {
      query <- paste("select script from sys.rq_scripts where name='", name, "'", sep="")
      str.f <- OREbase:::.ore.dbGetQuery(query)
      assign(name, eval(parse(text = str.f)), pos=1)
    }
    ore.scriptLoad("myFunctionName")

    This function is also useful if you want to load an existing R script from the repository into another R script in the repository – think modular coding style. Just include this function in the body of the other function and load the named script.

    Migrating R scripts from one database instance to another

    To move a set of functions from one system to another, the following script loads the functions from one R script repository into the client R engine, then connects to the target database and creates the scripts there with the same names.

    scriptNames <- OREbase:::.ore.dbGetQuery(
      "select name from sys.rq_scripts where name not like 'RQG$%' and name not like 'RQ$%'")$NAME

    for (s in scriptNames) {
      cat(s, "\n")
      ore.scriptLoad(s)
    }

    ore.disconnect()
    ore.connect("rquser", "orcl", "localhost", "rquser")

    for (s in scriptNames) {
      cat(s, "\n")
      ore.scriptDrop(s)
      ore.scriptCreate(s, get(s))
    }

    Best Practice

    When naming R scripts, keep in mind that the name can be up to 128 characters. As such, consider organizing scripts in a directory-structure manner. For example, if an organization has multiple groups or applications sharing the same database and there are multiple components, use “/” to facilitate the function organization:

    ore.scriptCreate("/org1/app1/component1/myFunction1", myFunction1)
    ore.scriptCreate("/org1/app1/component1/myFunction2", myFunction2)
    ore.scriptCreate("/org1/app2/component2/myFunction2", myFunction2)
    ore.scriptCreate("/org2/app2/component1/myFunction3", myFunction3)
    ore.scriptCreate("/org3/app2/component1/myFunction4", myFunction4)

    Users can then query for all functions using the path prefix when looking up functions.
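    For example, reusing the sys.rq_scripts query pattern shown above, a prefix lookup is just a LIKE filter (the /org1/app1 prefix here is illustrative):

    # List all scripts registered under the /org1/app1 prefix.
    OREbase:::.ore.dbGetQuery(
      "select name from sys.rq_scripts where name like '/org1/app1/%'")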

    Read the article

  • Part 4 of 4: Tips/Tricks for Silverlight Developers.

    - by mbcrump
    Part 1 | Part 2 | Part 3 | Part 4

    I wanted to create a series of blog posts that gets right to the point and is aimed specifically at Silverlight developers. The most important things I want this series to answer are: What is it? Why do I care? How do I do it? I hope that you enjoy this series. Let's get started:

    Tip/Trick #16)
    What is it? Find out version information about Silverlight and which WebKit it is using by going to http://issilverlightinstalled.com/scriptverify/.
    Why do I care? I've had users where it's just easier to give them a site and say "copy/paste the line that says User Agent" in order to troubleshoot a Silverlight problem. I've also been debugging my own Silverlight applications and needed an easy way to determine if the plugin is disabled or not.
    How do I do it: Simply navigate to http://issilverlightinstalled.com/scriptverify/ and hit the Verify button. Example screenshots are located below:

    Results from Chrome 7

    Results from Internet Explorer 8 (With Silverlight Disabled)

    Tip/Trick #17)
    What is it? Use lambdas whenever you can.
    Why do I care? It is my personal opinion that code is easier to read using lambdas, once you get past the syntax.
    How do I do it: For example, you may write code like the following:

    void MainPage_Loaded(object sender, RoutedEventArgs e)
    {
        // Check and see if we have a newer .XAP file on the server
        Application.Current.CheckAndDownloadUpdateAsync();
        Application.Current.CheckAndDownloadUpdateCompleted +=
            new CheckAndDownloadUpdateCompletedEventHandler(Current_CheckAndDownloadUpdateCompleted);
    }

    void Current_CheckAndDownloadUpdateCompleted(object sender, CheckAndDownloadUpdateCompletedEventArgs e)
    {
        if (e.UpdateAvailable)
        {
            MessageBox.Show(
                "An update has been installed. To see the updates please exit and restart the application");
        }
    }

    To me this style forces me to look for the other method to see what the code is actually doing. The style located below is much easier to read, in my opinion, and does exactly the same thing.

    void MainPage_Loaded(object sender, RoutedEventArgs e)
    {
        // Check and see if we have a newer .XAP file on the server
        Application.Current.CheckAndDownloadUpdateAsync();
        Application.Current.CheckAndDownloadUpdateCompleted += (s, args) =>
        {
            if (args.UpdateAvailable)
            {
                MessageBox.Show(
                    "An update has been installed. To see the updates please exit and restart the application");
            }
        };
    }

    Tip/Trick #18)
    What is it? Prevent development web service references from breaking when Visual Studio auto-generates a new port number.
    Why do I care? We have all been there: we are developing a Silverlight application and all of a sudden our development web services break. We check and find out that the local port number that Visual Studio assigned has changed, and now we need to update all of our service references. We need a way to stop this.
    How do I do it: This can actually be prevented with just a few mouse clicks. Right-click on your web project and go to Properties. Click the Web tab. You just need to click the radio button and specify a port number. Now you won't be bothered with that anymore.

    Tip/Trick #19)
    What is it? You can disable the close button on a ChildWindow.
    Why do I care? I wouldn't blog about it if I hadn't seen it: devs trying to override keystrokes to prevent users from closing a ChildWindow.
    How do I do it: A property exists on ChildWindow called “HasCloseButton”; you simply change that to false and your close button is gone, as in the sketch below.
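    A minimal sketch for Tip #19 (HasCloseButton is the real ChildWindow property being demonstrated; the window class name is hypothetical):

    // Show a ChildWindow without its close button so the user must
    // respond through the buttons you provide.
    var prompt = new MyPromptWindow();   // MyPromptWindow: a hypothetical ChildWindow subclass
    prompt.HasCloseButton = false;
    prompt.Show();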
    You can delete the “Cancel” button and add some logic to the OK button if you want the user to respond before proceeding.

    Tip/Trick #20)
    What is it? Clean up your XAML.
    Why do I care? By removing unneeded namespaces, not naming all of your controls, and getting rid of designer markup, you can improve code quality and readability.
    How do I do it: (This is a 3-in-one tip)

    1) Remove unused designer markup. Have you ever wondered what the following code snippet does?

    xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
    xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
    mc:Ignorable="d" d:DesignWidth="640" d:DesignHeight="480"

    This code is telling the designer to do something special with this page in “design mode” – specifically, the width and the height of the page. When it's running in the browser this information is not used; it is actually ignored by the XAML parser. In other words, if you don't need it then delete it.

    2) If you are not using a namespace then remove it. In the code sample below, I am using ReSharper, which tells me the ones that I'm not using via the grayed-out lines below. If you don't have ReSharper you can look in your XAML and manually remove the unneeded namespaces.

    3) Don't name a control unless you actually need to refer to it in procedural code. If you name a control you will take a slight performance hit that is totally unnecessary if it's not being referenced.

    <TextBlock Height="23" Text="TextBlock" />

    That is the end of the series. I hope that you enjoyed it, and please check out Part 1 | Part 2 | Part 3 if you're hungry for more.

    Read the article

  • ComboBox Data Binding

    - by Geertjan
    Let's create a databound combobox, leveraging MVC in a desktop application. The result will be a combobox, provided by the NetBeans ChoiceView, that displays data retrieved from a database:

    What follows is not much different from the NetBeans Platform CRUD Application Tutorial, and you're advised to consult that document if anything that follows isn't clear enough. One interesting thing about the instructions that follow is that they show that you're able to create an application where each element of the MVC architecture is located within a separate module:

    Start by creating a new NetBeans Platform application named "MyApplication".

    Model

    We're going to start by generating JPA entity classes from a database connection. In the New Project wizard, choose "Java Class Library". Click Next. Name the Java Class Library "MyEntities". Click Finish. Right-click the MyEntities project, choose New, and then select "Entity Classes from Database". Work through the wizard, selecting the tables of interest from your database, and naming the package "entities". Click Finish. Now a JPA entity is created for each of the selected tables. In the Project Properties dialog of the project, choose "Copy Dependent Libraries" in the Packaging panel. Build the project. In your project's "dist" folder (visible in the Files window), you'll now see a JAR, together with a "lib" folder that contains the JARs you'll need. In your NetBeans Platform application, create a module named "MyModel", with code name base "org.my.model". Right-click the project, choose Properties, and in the "Libraries" panel, click the Add Dependency button in the Wrapped JARs subtab to add all the JARs from the previous step to the module. Also include "derby-client.jar", or the equivalent driver for your database connection, in the module.

    Controler

    In your NetBeans Platform application, create a module named "MyControler", with code name base "org.my.controler". Right-click the module's Libraries node, in the Projects window, and add a dependency on "Explorer & Property Sheet API". In the MyControler module, create a class with this content:

    package org.my.controler;

    import org.openide.explorer.ExplorerManager;

    public class MyUtils {

        static ExplorerManager controler;

        public static ExplorerManager getControler() {
            if (controler == null) {
                controler = new ExplorerManager();
            }
            return controler;
        }

    }

    View

    In your NetBeans Platform application, create a module named "MyView", with code name base "org.my.view". Create a new Window Component, in "explorer" view, for example, let it open on startup, with class name prefix "MyView". Add dependencies on the Nodes API and on the Explorer & Property Sheet API. Also add dependencies on the "MyModel" module and the "MyControler" module. Before doing so, in the "MyModel" module, make the "entities" package and the "javax.persistence" packages public (in the Libraries panel of the Project Properties dialog), and make the one package that you have in the "MyControler" module public too.
Define the top part of the MyViewTopComponent as follows: public final class MyViewTopComponent extends TopComponent implements ExplorerManager.Provider { ExplorerManager controler = MyUtils.getControler(); public MyViewTopComponent() { initComponents(); setName(Bundle.CTL_MyViewTopComponent()); setToolTipText(Bundle.HINT_MyViewTopComponent()); setLayout(new BoxLayout(this, BoxLayout.PAGE_AXIS)); controler.setRootContext(new AbstractNode(Children.create(new ChildFactory<Customer>() { @Override protected boolean createKeys(List list) { EntityManager entityManager = Persistence. createEntityManagerFactory("MyEntitiesPU").createEntityManager(); Query query = entityManager.createNamedQuery("Customer.findAll"); list.addAll(query.getResultList()); return true; } @Override protected Node createNodeForKey(Customer key) { Node customerNode = new AbstractNode(Children.LEAF, Lookups.singleton(key)); customerNode.setDisplayName(key.getName()); return customerNode; } }, true))); controler.addPropertyChangeListener(new PropertyChangeListener() { @Override public void propertyChange(PropertyChangeEvent evt) { Customer selectedCustomer = controler.getSelectedNodes()[0].getLookup().lookup(Customer.class); StatusDisplayer.getDefault().setStatusText(selectedCustomer.getName()); } }); JPanel row1 = new JPanel(new FlowLayout(FlowLayout.LEADING)); row1.add(new JLabel("Customers: ")); row1.add(new ChoiceView()); add(row1); } @Override public ExplorerManager getExplorerManager() { return controler; } ... ... ... Now run the application and you'll see the same as the image with which this blog entry started.

    Read the article

  • VB.Net Dynamically Load DLL

    - by hermiod
    I am trying to write some code that will allow me to dynamically load DLLs into my application, depending on an application setting. The idea is that the database to be accessed is set in the application settings, and then this loads the appropriate DLL and assigns it to an instance of an interface for my application to access. This is my code at the moment: Dim SQLDataSource As ICRDataLayer Dim ass As Assembly = Assembly. _ LoadFrom("M:\MyProgs\WebService\DynamicAssemblyLoading\SQLServer\bin\Debug\SQLServer.dll") Dim obj As Object = ass.CreateInstance(GetType(ICRDataLayer).ToString, True) SQLDataSource = DirectCast(obj, ICRDataLayer) MsgBox(SQLDataSource.ModuleName & vbNewLine & SQLDataSource.ModuleDescription) I have my interface (ICRDataLayer) and SQLServer.dll contains an implementation of this interface. I just want to load the assembly and assign it to the SQLDataSource object. The above code just doesn't work. There are no exceptions thrown; even the MsgBox doesn't appear. I would've expected at least the message box appearing with nothing in it, but even this doesn't happen! Is there a way to determine if the loaded assembly implements a specific interface? I tried the below, but this also doesn't seem to do anything! For Each loadedType As Type In ass.GetTypes If GetType(ICRDataLayer).IsAssignableFrom(loadedType) Then Dim obj1 As Object = ass.CreateInstance(GetType(ICRDataLayer).ToString, True) SQLDataSource = DirectCast(obj1, ICRDataLayer) End If Next EDIT: New code from Vlad's examples: Module CRDataLayerFactory Sub New() End Sub ' class name is a contract, ' should be the same for all plugins Private Function Create() As ICRDataLayer Return New SQLServer() End Function End Module Above is the Module that goes in each DLL, converted from Vlad's C# example. Below is my code to bring in the DLL: Dim SQLDataSource As ICRDataLayer Dim ass As Assembly = Assembly. _ LoadFrom("M:\MyProgs\WebService\DynamicAssemblyLoading\SQLServer\bin\Debug\SQLServer.dll") Dim factory As Object = ass.CreateInstance("CRDataLayerFactory", True) Dim t As Type = factory.GetType Dim method As MethodInfo = t.GetMethod("Create") Dim obj As Object = method.Invoke(factory, Nothing) SQLDataSource = DirectCast(obj, ICRDataLayer) EDIT: Implementation based on Paul Kohler's code: Dim file As String For Each file In Directory.GetFiles(baseDir, searchPattern, SearchOption.TopDirectoryOnly) Dim assemblyType As System.Type For Each assemblyType In Assembly.LoadFrom(file).GetTypes Dim s As System.Type() = assemblyType.GetInterfaces For Each ty As System.Type In s If ty.Name.Contains("ICRDataLayer") Then MsgBox(ty.Name) plugin = DirectCast(Activator.CreateInstance(assemblyType), ICRDataLayer) MessageBox.Show(plugin.ModuleName) End If Next Next Next I get the following error with this code: Unable to cast object of type 'SQLServer.CRDataSource.SQLServer' to type 'DynamicAssemblyLoading.ICRDataLayer'. The actual DLL is in a different project called SQLServer, in the same solution as my implementation code. CRDataSource is a namespace and SQLServer is the actual class name in the DLL. The SQLServer class implements ICRDataLayer, so I don't understand why it wouldn't be able to cast it. Is the naming significant here? I wouldn't have thought it would be.
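    For what it's worth, that last InvalidCastException is the classic symptom of the interface assembly being loaded twice: once by the host and once from a copy sitting next to SQLServer.dll, so the two ICRDataLayer types have different identities even though the names match. Below is a hedged sketch of a loader that works when the interface lives in exactly one shared assembly (set Copy Local = False on the plugin's reference so no duplicate lands in its bin folder):

    Imports System.Reflection

    Module PluginLoader
        ' Minimal sketch, assuming ICRDataLayer is defined in exactly one
        ' assembly that both the host and the plugin reference.
        Function LoadPlugin(Of T)(path As String) As T
            Dim asm As Assembly = Assembly.LoadFrom(path)
            For Each candidate As Type In asm.GetTypes()
                If GetType(T).IsAssignableFrom(candidate) AndAlso Not candidate.IsAbstract Then
                    Return DirectCast(Activator.CreateInstance(candidate), T)
                End If
            Next
            Return Nothing
        End Function
    End Module

    Usage would then be something like SQLDataSource = PluginLoader.LoadPlugin(Of ICRDataLayer)(pathToDll), where pathToDll comes from the application setting.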

    Read the article

  • Why is ListBoxFor not selecting items, but ListBox is?

    - by Roger Rogers
    I have the following code in my view: <%= Html.ListBoxFor(c => c.Project.Categories, new MultiSelectList(Model.Categories, "Id", "Name", new List<int> { 1, 2 }))%> <%= Html.ListBox("MultiSelectList", new MultiSelectList(Model.Categories, "Id", "Name", new List<int> { 1, 2 }))%> The only difference is that the first helper is strongly typed (ListBoxFor), and it fails to show the selected items (1, 2), even though the items appear in the list, etc. The simpler ListBox is working as expected. I'm obviously missing something here. I can use the second approach, but this is really bugging me and I'd like to figure it out. For reference, my model is: public class ProjectEditModel { public Project Project { get; set; } public IEnumerable<Project> Projects { get; set; } public IEnumerable<Client> Clients { get; set; } public IEnumerable<Category> Categories { get; set; } public IEnumerable<Tag> Tags { get; set; } public ProjectSlide SelectedSlide { get; set; } } Update I just changed the ListBox name to Project.Categories (matching my model) and it now FAILS to select the item. <%= Html.ListBox("Project.Categories", new MultiSelectList(Model.Categories, "Id", "Name", new List<int> { 1, 2 }))%> I'm obviously not understanding the magic that is happening here. Update 2 OK, this is purely naming; for example, this works... <%= Html.ListBox("Project_Tags", new MultiSelectList(Model.Tags, "Id", "Name", Model.Project.Tags.Select(t => t.Id)))%> ...because the field name is Project_Tags, not Project.Tags; in fact, anything other than Tags or Project.Tags will work. I don't get why this would cause a problem (other than that it matches the entity name), and I'm not good enough at this to be able to dig in and find out.
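    A hedged explanation: when the field name matches a model property, the helper looks up that property's current value (here the Project.Categories entity collection) to decide what is selected, and that lookup wins over the selected items baked into the MultiSelectList; since Category objects never equal plain ids, nothing gets selected. The usual workaround is to bind the list box to a simple collection of ids instead of the entity collection. A sketch with hypothetical names:

    public class ProjectEditModel
    {
        // Drive selection from plain ids rather than Category entities.
        public IList<int> SelectedCategoryIds { get; set; }
        public IEnumerable<Category> Categories { get; set; }
    }

    <%= Html.ListBoxFor(m => m.SelectedCategoryIds,
            new MultiSelectList(Model.Categories, "Id", "Name", Model.SelectedCategoryIds)) %>

    Because the field name no longer collides with an entity property, the selected ids survive the round trip.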

    Read the article

  • Problem with NHibernate and iSeries DB2

    - by chrisjlong
    OK, so I have an AS400/iSeries running V5R4. I have an application that was using classic NHibernate to connect and do some basic CRUD. Now I have pulled that app (which sat for 2 years) off the shelf of TFS and onto a new PC and cannot seem to get it running. Here is my NHibernate config: <hibernate-configuration xmlns="urn:nhibernate-configuration-2.2"> <session-factory> <property name="connection.provider"> NHibernate.Connection.DriverConnectionProvider </property> <property name="dialect"> NHibernate.Dialect.DB2400Dialect </property> <property name="proxyfactory.factory_class">NHibernate.ByteCode.LinFu.ProxyFactoryFactory, NHibernate.ByteCode.LinFu</property> <property name="connection.connection_string"> DataSource=207.206.106.19; Database=AS400; userID=XXXXXX; Password=XXXXXXX; LibraryList=FMSFILTST,BEFFILT,HRDBFT,HRCSTFT,J20##X2DEV,GLCUSTDEV,OSL@@F3DEV; Naming=System; Initial Catalog=*SYSBAS; </property> <property name="use_outer_join">true</property> <property name="query.substitutions"> true 1, false 0, yes 'Y', no 'N' </property> <property name="show_sql">false</property> <mapping assembly="BusinessLogic" /> </session-factory> </hibernate-configuration> I have all the proper DLLs included (NHibernate, Castle, Iesi, antlr3, log4net, etc.). I also have this line in my web.config: <runtime> <assemblyBinding> <qualifyAssembly partialName="IBM.Data.DB2.iSeries" fullName="IBM.Data.DB2.iSeries,Version=10.0.0.0,PublicKeyToken=9CDB2EBFB1F93A26,Culture=neutral"/> </assemblyBinding> </runtime> Yet I am still getting the following error as soon as I call NHibernate.Cfg.Configuration().Configure().BuildSessionFactory().OpenSession(); The error is as follows: Unable to cast object of type 'IBM.Data.DB2.iSeries.iDB2Connection' to type 'System.Data.Common.DbCommand' I am dying to get some help with this. Any assistance is appreciated. Thanks!
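    One thing worth checking (an assumption on my part, since the full stack trace isn't shown): the configuration sets a dialect but never sets connection.driver_class, and a missing or mismatched driver can surface as exactly this kind of invalid cast inside the ADO.NET plumbing. NHibernate ships a driver for the iSeries provider, so a line like this may help:

    <property name="connection.driver_class">NHibernate.Driver.DB2400Driver</property>

    If the driver is already being picked up, the next suspect would be a version mismatch between the IBM.Data.DB2.iSeries assembly redirected in web.config and the one actually installed on the new PC.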

    Read the article

  • PHP SimpleXML recursive function to list children and attributes

    - by Phill Pafford
    I need some help on the SimpleXML calls for a recursive function that lists the elements' names and attributes. I'm making an XML config file system, but each script will have its own config file as well as a new naming convention. So what I need is an easy way to map out all the elements that have attributes. Like in example 1, I need a simple way to call all the processes, but I don't know how to do this without hard-coding the element's name in the function call. Is there a way to recursively call a function to match a child element name? I did see the xpath functionality, but I don't see how to use this for attributes. Any ideas? Also, does the XML in the examples look correct? Can I structure my XML like this? Example 1: <application> <processes> <process id="123" name="run batch A" /> <process id="122" name="run batch B" /> <process id="129" name="run batch C" /> </processes> <connections> <databases> <database usr="test" pss="test" hst="test" dbn="test" /> </databases> <shells> <ssh usr="test" pss="test" hst="test-2" /> <ssh usr="test" pss="test" hst="test-1" /> </shells> </connections> </application> Example 2: <config> <queues> <queue id="1" name="test" /> <queue id="2" name="production" /> <queue id="3" name="error" /> </queues> </config> Pseudo code: // Would return matching process id getProcess($process_id) { return the process attributes as array that are in the XML } // Would return matching DBN (database name) getDatabase($database_name) { return the database attributes as array that are in the XML } // Would return matching SSH Host getSSHHost($ssh_host) { return the ssh attributes as array that are in the XML } // Would return matching SSH User getSSHUser($ssh_user) { return the ssh attributes as array that are in the XML } // Would return matching Queue getQueue($queue_id) { return the queue attributes as array that are in the XML } EDIT: Can I pass two params? On the first method you have suggested @Gordon public function findProcessById($id, $name) { $attr = false; $el = $this->xml->xpath("//process[@id='$id']"); // How do I also filter by the name? if($el && count($el) === 1) { $attr = (array) $el[0]->attributes(); $attr = $attr['@attributes']; } return $attr; }
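    On the two-parameter question: xpath predicates can test several attributes at once, combined with 'and', and the element name itself can also become a parameter. A short sketch, reusing the names from example 1:

    // Match on both attributes in a single query.
    public function findProcess($id, $name) {
        $el = $this->xml->xpath("//process[@id='$id' and @name='$name']");
        if ($el && count($el) === 1) {
            $attr = (array) $el[0]->attributes();
            return $attr['@attributes'];
        }
        return false;
    }

    And a generic recursive walker that lists every element together with its attributes, so nothing needs hard-coding:

    function listChildren(SimpleXMLElement $node, $depth = 0) {
        $attrs = array();
        foreach ($node->attributes() as $key => $value) {
            $attrs[] = $key . '="' . $value . '"';
        }
        echo str_repeat('  ', $depth) . $node->getName()
            . ($attrs ? ' [' . implode(', ', $attrs) . ']' : '') . "\n";
        foreach ($node->children() as $child) {
            listChildren($child, $depth + 1);
        }
    }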

    Read the article

  • Log4r: logger inheritance, YAML configuration, alternatives?

    - by devlearn
    Hello, I'm pretty new to Ruby environments and I was looking for a nice logging framework to use in my Ruby and Rails applications. In my previous experiences I have successfully used log4j and log4p (the Perl port) and was expecting the same level of usability (and maturity) with log4r. However, I must say that there are a number of things that are not clear at all in the log4r framework. 1 Logger inheritance Logger inheritance does not seem to be managed at all! If I declare a logger named 'myapp' and then try to get a logger named 'myapp::engine', the lookup will end with a NameError. I would expect the framework to fall back along the naming scheme and use the 'myapp' logger. Q1: Of course I can work around this and manage the names by myself with a lookup method, but is there a cleaner way to do this without any extra coding? 2 YAML configuration The second thing that confuses me is the YAML configuration. On the log4r site there is literally no information about this system; the doc links forward to missing pages, so all the info I can find is contained in the examples directory of the gem. I was pretty confused by the fact that the YAML configuration must contain the pre_config section and that I need to define my own levels. If I remove the pre_config section, or replace all the "custom" levels by the standard ones (debug, info, warn, fatal), the example will throw the following error: log4r/yamlconfigurator.rb:68:in `decode_yaml': Log level must be in 0..7 (ArgumentError) So there seems to be no way of using a simple file where we only declare the loggers and appenders for the framework. Q2: I really think that I must have missed something and that there must be a way of providing a simple YAML conf file. Do you have any examples of such a usage? 3 Variable substitution in XML files Q3: The YAML configuration system seems to provide such a feature, however I was unable to find a similar feature with XML files. Any ideas? 4 Alternatives? I must say that I'm very disappointed by the feature level and the maturity of log4r compared to log4j and the other log4j ports. I came to this framework with a solid background in logging APIs in other languages and find myself working around all kinds of things just to get 'basic things' running in a "real world application". By that I mean a complex application composed of several gems, console/scripting apps, and a Rails web front end, where the configuration must be shared and where we make intensive use of namespaces and inheritance. I've run several searches trying to find something more suitable or mature, but did not find anything similar. Q4: Do you guys know any (serious) alternatives to the log4r framework that could be used in an enterprise-class app? Thanks for reading all of this! I'd really appreciate any pointers. Kind regards,
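    On Q1, until the framework does this for you, a small helper can emulate log4j-style name inheritance by walking the name up to the nearest defined ancestor. A sketch, relying on Log4r::Logger[] returning nil for undefined names (unlike Logger.get, which raises):

    require 'log4r'

    def lookup_logger(name)
      parts = name.split('::')
      until parts.empty?
        logger = Log4r::Logger[parts.join('::')]
        return logger if logger
        parts.pop
      end
      Log4r::Logger.root  # nothing matched, fall back to the root logger
    end

    lookup_logger('myapp::engine')  # resolves to the 'myapp' logger if only that one is defined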

    Read the article

  • Which functions in the C standard library commonly encourage bad practice?

    - by Ninefingers
    Hello all, this is inspired by this question and the comments on one particular answer in it, through which I learnt that strncpy is not a very safe string-handling function in C and that it pads with zeros until it reaches n, something I was unaware of. Specifically, to quote R.. strncpy does not null-terminate, and does null-pad the whole remainder of the destination buffer, which is a huge waste of time. You can work around the former by adding your own null padding, but not the latter. It was never intended for use as a "safe string handling" function, but for working with fixed-size fields in Unix directory tables and database files. snprintf(dest, n, "%s", src) is the only correct "safe strcpy" in standard C, but it's likely to be a lot slower. By the way, truncation in itself can be a major bug and in some cases might lead to privilege elevation or DoS, so throwing "safe" string functions that truncate their output at a problem is not a way to make it "safe" or "secure". Instead, you should ensure that the destination buffer is the right size and simply use strcpy (or better yet, memcpy if you already know the source string length). And from Jonathan Leffler Note that strncat() is even more confusing in its interface than strncpy() - what exactly is that length argument, again? It isn't what you'd expect based on what you supply strncpy() etc - so it is more error prone even than strncpy(). For copying strings around, I'm increasingly of the opinion that there is a strong argument that you only need memmove() because you always know all the sizes ahead of time and make sure there's enough space ahead of time. Use memmove() in preference to any of strcpy(), strcat(), strncpy(), strncat(), memcpy(). So, I'm clearly a little rusty on the C standard library. Therefore, I'd like to pose the question: what C standard library functions are used inappropriately, or in ways that may cause or lead to security problems, code defects or inefficiencies? In the interests of objectivity, I have a number of criteria for an answer: Please, if you can, cite design reasons behind the function in question, i.e. its intended purpose. Please highlight the misuse to which the code is currently put. Please state why that misuse may lead towards a problem. I know that should be obvious, but it prevents soft answers. Please avoid: debates over naming conventions of functions (except where this unequivocally causes confusion); "I prefer x over y" - preference is OK, we all have them, but I'm interested in actual unexpected side effects and how to guard against them. As this is likely to be considered subjective and has no definite answer, I'm flagging it for community wiki straight away. I am also working as per C99.
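    As a starting point, here is a small sketch of the snprintf-based "safe strcpy" R.. describes, written so truncation is reported instead of silently ignored:

    #include <stdio.h>

    /* Copy src into dst (dstsz bytes). Returns 0 on success, -1 on
     * truncation; dst is always null-terminated whenever dstsz > 0. */
    static int safe_copy(char *dst, size_t dstsz, const char *src)
    {
        int n = snprintf(dst, dstsz, "%s", src);
        return (n >= 0 && (size_t)n < dstsz) ? 0 : -1;
    }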

    Read the article

  • Gateway Page Between ASP and an ASP.NET Page

    - by ajdams
    I'll admit, I am pretty new to ASP.NET programming and I have been asked to take all our gateway pages (written in classic ASP) and make one universal gateway page to the few C# .NET applications we have (that I wrote). I tried searching here and on the web and couldn't find much of anything describing a great way to do this, and figured I was either not searching properly or not properly naming what I am trying to do. I decided to take one of the main gateway pages we had in classic ASP and use that as a base for my new gateway. Without boring you with a bunch of code, I will summarize my gateway in steps and then take advice/critique from there. EDIT: Basically what I am trying to do is go from a classic ASP page to an ASP.NET page and back again. EDIT2: If my question is still unclear, I am asking whether what I have is on the right track and whether anyone has suggestions as to how this could be better. It can be as generic as need be; I'm not looking for a specific off-the-shelf code answer. My gateway page: In the first part of the page I grab session variables and determine if the user is leaving or returning through the gateway: Code (in VB): uid = Request.QueryString("GUID") If uid = "" Then direction = "Leaving" End If ' Gather available user information. userid = Session("lnglrnregid") bankid = Session("strBankid") ' Return location. floor = Request.QueryString("Floor") ' The option chosen will determine if the user continues as SSL or not. ' If they are currently SSL, they should remain if chosen. option1 = Application(bankid & "Option1") If MID(option1, 6, 1) = "1" Then sslHttps = "s" End If Next I enter the uid into a database table (SQL Server 2005) as a uniqueidentifier field called GUID. I omitted the stored procedure call. Lastly, I use the direction variable to determine if the user is leaving or returning and do redirects from there to the different areas of the site. Code (in VB again): If direction = "Leaving" Then Select Case floor Case "sscat", "ssassign" ' A SkillSoft course Response.Redirect("Some site here") Case "lrcat", "lrassign" ' A LawRoom course Response.Redirect("Some site here") Case Else ' Some other SCORM course like MindLeaders or a custom upload. Response.Redirect("Some site here") End Select Session.Abandon Else ' The only other direction is "Returning" ..... That's about it so far. Like I said, I'm not an expert, so any suggestions would be greatly appreciated!
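    One note that may save debugging time (stated as an assumption, since only the ASP side is shown): classic ASP and ASP.NET cannot share Session state, which is exactly why the GUID handoff plus a database table is the standard pattern here. The receiving ASP.NET page would then do something like this sketch, with hypothetical helper names:

    // Read the handoff key and rehydrate state from the shared table.
    string guid = Request.QueryString["GUID"];
    if (!string.IsNullOrEmpty(guid))
    {
        GatewayState state = GatewayStore.Load(new Guid(guid)); // hypothetical helper
        Session["lnglrnregid"] = state.UserId;
        Session["strBankid"] = state.BankId;
    }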

    Read the article

  • JBoss 7.1 started hanging after 6 months of deployment

    - by PVR
    My application has been live for 6 months. The application is hosted on a JBoss 7.1 server. For the last few days I have been seeing the problem of the JBoss server hanging. Even when I restart JBoss, it does not come back up; I need to restart the server machine itself. Can anyone please let me know what could be causing these problems, along with workable resolutions or any suggestions? Kindly don't downvote the question, as I am facing a lot of problems due to this hanging issue. Also, for information: the application is based on Java, GWT and Hibernate 3. Please find the standalone.xml file below, in case it helps. <extensions> <extension module="org.jboss.as.clustering.infinispan"/> <extension module="org.jboss.as.configadmin"/> <extension module="org.jboss.as.connector"/> <extension module="org.jboss.as.deployment-scanner"/> <extension module="org.jboss.as.ee"/> <extension module="org.jboss.as.ejb3"/> <extension module="org.jboss.as.jaxrs"/> <extension module="org.jboss.as.jdr"/> <extension module="org.jboss.as.jmx"/> <extension module="org.jboss.as.jpa"/> <extension module="org.jboss.as.logging"/> <extension module="org.jboss.as.mail"/> <extension module="org.jboss.as.naming"/> <extension module="org.jboss.as.osgi"/> <extension module="org.jboss.as.pojo"/> <extension module="org.jboss.as.remoting"/> <extension module="org.jboss.as.sar"/> <extension module="org.jboss.as.security"/> <extension module="org.jboss.as.threads"/> <extension module="org.jboss.as.transactions"/> <extension module="org.jboss.as.web"/> <extension module="org.jboss.as.webservices"/> <extension module="org.jboss.as.weld"/> </extensions> <system-properties> <property name="org.apache.coyote.http11.Http11Protocol.COMPRESSION" value="on"/> <property name="org.apache.coyote.http11.Http11Protocol.COMPRESSION_MIME_TYPES" value="text/javascript,text/css,text/html,text/xml,text/json"/> </system-properties> <management> <security-realms> <security-realm name="ManagementRealm"> <authentication> <properties path="mgmt-users.properties" relative-to="jboss.server.config.dir"/> </authentication> </security-realm> <security-realm name="ApplicationRealm"> <authentication> <properties path="application-users.properties" relative-to="jboss.server.config.dir"/> </authentication> </security-realm> </security-realms> <management-interfaces> <native-interface security-realm="ManagementRealm"> <socket-binding native="management-native"/> </native-interface> <http-interface security-realm="ManagementRealm"> <socket-binding http="management-http"/> </http-interface> </management-interfaces> </management> <profile> <subsystem xmlns="urn:jboss:domain:logging:1.1"> <console-handler name="CONSOLE"> <level name="INFO"/> <formatter> <pattern-formatter pattern="%d{HH:mm:ss,SSS} %-5p [%c] (%t) %s%E%n"/> </formatter> </console-handler> <periodic-rotating-file-handler name="FILE"> <formatter> <pattern-formatter pattern="%d{HH:mm:ss,SSS} %-5p [%c] (%t) %s%E%n"/> </formatter> <file relative-to="jboss.server.log.dir" path="server.log"/> <suffix value=".yyyy-MM-dd"/> <append value="true"/> </periodic-rotating-file-handler> <logger category="com.arjuna"> <level name="WARN"/> </logger> <logger category="org.apache.tomcat.util.modeler"> <level name="WARN"/> </logger> <logger category="sun.rmi"> <level name="WARN"/> </logger> <logger category="jacorb"> <level name="WARN"/> </logger> <logger category="jacorb.config"> <level name="ERROR"/> </logger> <root-logger> <level name="INFO"/> <handlers> <handler name="CONSOLE"/> <handler name="FILE"/> </handlers> 
</root-logger> </subsystem> <subsystem xmlns="urn:jboss:domain:configadmin:1.0"/> <subsystem xmlns="urn:jboss:domain:datasources:1.0"> <datasources> <datasource jndi-name="java:jboss/datasources/ExampleDS" pool-name="ExampleDS" enabled="true" use-java-context="true"> <connection-url>jdbc:h2:mem:test;DB_CLOSE_DELAY=-1</connection-url> <driver>h2</driver> <security> <user-name>sa</user-name> <password>sa</password> </security> </datasource> <drivers> <driver name="h2" module="com.h2database.h2"> <xa-datasource-class>org.h2.jdbcx.JdbcDataSource</xa-datasource-class> </driver> </drivers> </datasources> </subsystem> <subsystem xmlns="urn:jboss:domain:deployment-scanner:1.1"> <deployment-scanner path="deployments" relative-to="jboss.server.base.dir" scan-interval="5000"/> </subsystem> <subsystem xmlns="urn:jboss:domain:ee:1.0"/> <subsystem xmlns="urn:jboss:domain:ejb3:1.2"> <session-bean> <stateless> <bean-instance-pool-ref pool-name="slsb-strict-max-pool"/> </stateless> <stateful default-access-timeout="5000" cache-ref="simple"/> <singleton default-access-timeout="5000"/> </session-bean> <pools> <bean-instance-pools> <strict-max-pool name="slsb-strict-max-pool" max-pool-size="20" instance-acquisition-timeout="5" instance-acquisition-timeout-unit="MINUTES"/> <strict-max-pool name="mdb-strict-max-pool" max-pool-size="20" instance-acquisition-timeout="5" instance-acquisition-timeout-unit="MINUTES"/> </bean-instance-pools> </pools> <caches> <cache name="simple" aliases="NoPassivationCache"/> <cache name="passivating" passivation-store-ref="file" aliases="SimpleStatefulCache"/> </caches> <passivation-stores> <file-passivation-store name="file"/> </passivation-stores> <async thread-pool-name="default"/> <timer-service thread-pool-name="default"> <data-store path="timer-service-data" relative-to="jboss.server.data.dir"/> </timer-service> <remote connector-ref="remoting-connector" thread-pool-name="default"/> <thread-pools> <thread-pool name="default"> <max-threads count="10"/> <keepalive-time time="100" unit="milliseconds"/> </thread-pool> </thread-pools> </subsystem> <subsystem xmlns="urn:jboss:domain:infinispan:1.2" default-cache-container="hibernate"> <cache-container name="hibernate" default-cache="local-query"> <local-cache name="entity"> <transaction mode="NON_XA"/> <eviction strategy="LRU" max-entries="10000"/> <expiration max-idle="100000"/> </local-cache> <local-cache name="local-query"> <transaction mode="NONE"/> <eviction strategy="LRU" max-entries="10000"/> <expiration max-idle="100000"/> </local-cache> <local-cache name="timestamps"> <transaction mode="NONE"/> <eviction strategy="NONE"/> </local-cache> </cache-container> </subsystem> <subsystem xmlns="urn:jboss:domain:jaxrs:1.0"/> <subsystem xmlns="urn:jboss:domain:jca:1.1"> <archive-validation enabled="true" fail-on-error="true" fail-on-warn="false"/> <bean-validation enabled="true"/> <default-workmanager> <short-running-threads> <core-threads count="50"/> <queue-length count="50"/> <max-threads count="50"/> <keepalive-time time="10" unit="seconds"/> </short-running-threads> <long-running-threads> <core-threads count="50"/> <queue-length count="50"/> <max-threads count="50"/> <keepalive-time time="100" unit="seconds"/> </long-running-threads> </default-workmanager> <cached-connection-manager/> </subsystem> <subsystem xmlns="urn:jboss:domain:jdr:1.0"/> <subsystem xmlns="urn:jboss:domain:jmx:1.1"> <show-model value="true"/> <remoting-connector/> </subsystem> <subsystem xmlns="urn:jboss:domain:jpa:1.0"> <jpa default-datasource=""/> 
</subsystem> <subsystem xmlns="urn:jboss:domain:mail:1.0"> <mail-session jndi-name="java:jboss/mail/Default"> <smtp-server outbound-socket-binding-ref="mail-smtp"/> </mail-session> </subsystem> <subsystem xmlns="urn:jboss:domain:naming:1.1"/> <subsystem xmlns="urn:jboss:domain:osgi:1.2" activation="lazy"> <properties> <property name="org.osgi.framework.startlevel.beginning"> 1 </property> </properties> <capabilities> <capability name="javax.servlet.api:v25"/> <capability name="javax.transaction.api"/> <capability name="org.apache.felix.log" startlevel="1"/> <capability name="org.jboss.osgi.logging" startlevel="1"/> <capability name="org.apache.felix.configadmin" startlevel="1"/> <capability name="org.jboss.as.osgi.configadmin" startlevel="1"/> </capabilities> </subsystem> <subsystem xmlns="urn:jboss:domain:pojo:1.0"/> <subsystem xmlns="urn:jboss:domain:remoting:1.1"> <connector name="remoting-connector" socket-binding="remoting" security-realm="ApplicationRealm"/> </subsystem> <subsystem xmlns="urn:jboss:domain:resource-adapters:1.0"/> <subsystem xmlns="urn:jboss:domain:sar:1.0"/> <subsystem xmlns="urn:jboss:domain:security:1.1"> <security-domains> <security-domain name="other" cache-type="default"> <authentication> <login-module code="Remoting" flag="optional"> <module-option name="password-stacking" value="useFirstPass"/> </login-module> <login-module code="RealmUsersRoles" flag="required"> <module-option name="usersProperties" value="${jboss.server.config.dir}/application-users.properties"/> <module-option name="rolesProperties" value="${jboss.server.config.dir}/application-roles.properties"/> <module-option name="realm" value="ApplicationRealm"/> <module-option name="password-stacking" value="useFirstPass"/> </login-module> </authentication> </security-domain> <security-domain name="jboss-web-policy" cache-type="default"> <authorization> <policy-module code="Delegating" flag="required"/> </authorization> </security-domain> <security-domain name="jboss-ejb-policy" cache-type="default"> <authorization> <policy-module code="Delegating" flag="required"/> </authorization> </security-domain> </security-domains> </subsystem> <subsystem xmlns="urn:jboss:domain:threads:1.1"/> <subsystem xmlns="urn:jboss:domain:transactions:1.1"> <core-environment> <process-id> <uuid/> </process-id> </core-environment> <recovery-environment socket-binding="txn-recovery-environment" status-socket-binding="txn-status-manager"/> <coordinator-environment default-timeout="300"/> </subsystem> <subsystem xmlns="urn:jboss:domain:web:1.1" default-virtual-server="default-host" native="false"> <connector name="http" protocol="HTTP/1.1" scheme="http" socket-binding="http"/> <virtual-server name="default-host" enable-welcome-root="false"> <alias name="localhost"/> <alias name="nextenders.com"/> </virtual-server> </subsystem> <subsystem xmlns="urn:jboss:domain:webservices:1.1"> <modify-wsdl-address>true</modify-wsdl-address> <wsdl-host>${jboss.bind.address:127.0.0.1}</wsdl-host> <endpoint-config name="Standard-Endpoint-Config"/> <endpoint-config name="Recording-Endpoint-Config"> <pre-handler-chain name="recording-handlers" protocol-bindings="##SOAP11_HTTP ##SOAP11_HTTP_MTOM ##SOAP12_HTTP ##SOAP12_HTTP_MTOM"> <handler name="RecordingHandler" class="org.jboss.ws.common.invocation.RecordingServerHandler"/> </pre-handler-chain> </endpoint-config> </subsystem> <subsystem xmlns="urn:jboss:domain:weld:1.0"/> </profile> <interfaces> <interface name="management"> <inet-address value="${jboss.bind.address.management:127.0.0.1}"/> 
</interface> <interface name="public"> <inet-address value="${jboss.bind.address:127.0.0.1}"/> </interface> <interface name="unsecure"> <inet-address value="${jboss.bind.address.unsecure:127.0.0.1}"/> </interface> </interfaces> <socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:0}"> <socket-binding name="management-native" interface="management" port="${jboss.management.native.port:9999}"/> <socket-binding name="management-http" interface="management" port="${jboss.management.http.port:9990}"/> <socket-binding name="management-https" interface="management" port="${jboss.management.https.port:9443}"/> <socket-binding name="ajp" port="8009"/> <socket-binding name="http" port="80"/> <socket-binding name="https" port="443"/> <socket-binding name="osgi-http" interface="management" port="8090"/> <socket-binding name="remoting" port="4447"/> <socket-binding name="txn-recovery-environment" port="4712"/> <socket-binding name="txn-status-manager" port="4713"/> <outbound-socket-binding name="mail-smtp"> <remote-destination host="localhost" port="25"/> </outbound-socket-binding> </socket-binding-group>
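Nothing in the configuration obviously explains a hard hang, so the next step is usually to capture evidence from the JVM while it is hung, before restarting: a thread dump will distinguish a deadlock, an exhausted connection/thread pool, or a garbage-collection death spiral. These are standard JDK tools, not JBoss-specific:

# Thread dump: look for BLOCKED threads, or every worker parked waiting on a pool.
jstack -l <jboss-pid> > /tmp/jboss-threads.txt

# GC snapshot: full-GC counts climbing with little memory reclaimed points at heap sizing.
jstat -gcutil <jboss-pid> 1000 10

If jstack itself hangs, that plus the machine needing a reboot may also point at OS-level resource exhaustion (file handles, sockets), which ulimit and netstat can help confirm.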

    Read the article

  • How can this C and PHP programmer learn Ruby and Rails?

    - by Winston
    I come from a C, PHP and bash background; they were easy to learn because they all share the same C-style structure, which I could associate with what I already knew. Then 2 years ago I learned Python, and I learned it quite well; Python was easier for me to learn than Ruby. Since last year I have been trying to learn Ruby, then Rails, and I admit that until now I still couldn't get it. The irony is that they are branded as easy to learn, but for a seasoned programmer like me, I just couldn't associate them with what I learned before. I have 2 books on both Ruby and Rails, and when I'm reading them nothing is absorbed into my mind, and I'm close to giving up... In Ruby, I'm having a hard time grasping the concept of blocks, and why there are @variables that can be accessed by other functions, and what $variable and :variable do. And in Rails, why are there functions like this_is_another_function_that_does_this? Is that just a naming convention, or are such methods auto-generated, as in thisvariable_can_do_this_function? I'm still puzzled about where all those magic concepts and things come from. And now, after a year of trying and absorbing, still no progress... Edit: To summarize: How can I learn about blocks, and how do they relate to concepts from PHP/C? Variables: what does it mean when a variable is prefixed with @, $ or :? "Magic concepts", such as Rails declarations of records: what happens behind the scenes when I write has_one X? OK so, bear with me in my confusion; at least I'm honest with myself, and it's over a year now since I first tried to learn Ruby, and I'm not getting younger... So, I learned this in bash/C/PHP: solve_problem($problem) { if [ -e $problem == "trivial" ]; then write_solution(); else breakdown_problem_into_N_subproblems; define_relationship_between_subproblems; for i in $( command $each_subproblem ); do solve_problem $i done fi } write_solution(problem) { some_solution=$(command <parameters> "input" | command); command | command $some_solution > output_solved_problem_to_file } breakdown_problem_into_N_subproblems($problems) { for i in $problems; do command $i | command > i_can_output_a_file_right_away done } define_relationship_between_subproblems($problems) { if [ -e $problem == "relationship" ]; then relationship=$(command; command | command; command;) elif [ -e $problem == "another_relationship" ]; then relationship=$(command; command | command; command;) fi } In C/PHP it is something like this: solve_problem(problem) { if (problem == trivial) write_solution; else { breakdown_problem_into_N_subproblems; define_relationship_between_subproblems; for (each_subproblem) solve_problems(subproblem); } } And now I just can't connect the dots with Ruby: |b| { blocks }, @variables, :variables, and variables_with_this_things.
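    To map those markers onto the C/PHP mental model: @foo is an instance variable (per-object state, visible to every method of that object, like $this->foo in PHP), $foo is a global, :foo is a symbol (an immutable interned name, typically used as a lightweight constant or hash key), and { |x| ... } or do ... end is a block: an anonymous callback the method can invoke with yield. Long snake_case names like find_by_name are partly plain naming convention and partly methods Rails generates at runtime. A rough Ruby sketch of the solve_problem pseudocode above:

    class Solver
      def initialize(problem)
        @problem = problem              # @problem: state shared by all methods below
      end

      def solve
        return write_solution if @problem == :trivial             # :trivial is a symbol
        breakdown(@problem).each { |sub| Solver.new(sub).solve }  # |sub| ... is a block
      end

      def breakdown(problem)
        []   # hypothetical decomposition; would return an array of subproblems
      end

      def write_solution
        puts "solved #{@problem}"
      end
    end

    Solver.new(:trivial).solve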

    Read the article

  • PyGTK: dynamic label wrapping

    - by detly
    It's a known bug/issue that a label in GTK will not dynamically resize when the parent changes. It's one of those really annoying small details, and I want to hack around it if possible. I followed the approach at 16 software, but as per the disclaimer you cannot then resize it smaller. So I attempted a trick mentioned in one of the comments (the set_size_request call in the signal callback), but this results in some sort of infinite loop (try it and see). Does anyone have any other ideas? (You can't block the signal just for the duration of the call, since as the print statements seem to indicate, the problem starts after the function is left.) The code is below. You can see what I mean if you run it and try to resize the window larger and then smaller. (If you want to see the original problem, comment out the line after "Connect to the size-allocate signal", run it, and resize the window bigger.) The Glade file ("example.glade"): <?xml version="1.0"?> <glade-interface> <!-- interface-requires gtk+ 2.16 --> <!-- interface-naming-policy project-wide --> <widget class="GtkWindow" id="window1"> <property name="visible">True</property> <signal name="destroy" handler="on_destroy"/> <child> <widget class="GtkLabel" id="label1"> <property name="visible">True</property> <property name="label" translatable="yes">In publishing and graphic design, lorem ipsum[p][1][2] is the name given to commonly used placeholder text (filler text) to demonstrate the graphic elements of a document or visual presentation, such as font, typography, and layout. The lorem ipsum text, which is typically a nonsensical list of semi-Latin words, is a hacked version of a Latin text by Cicero, with words/letters omitted and others inserted, but not proper Latin[1][2] (see below: History and discovery). The closest English translation would be "pain itself" (dolorem = pain, grief, misery, suffering; ipsum = itself).</property> <property name="wrap">True</property> </widget> </child> </widget> </glade-interface> The Python code: #!/usr/bin/python import pygtk import gobject import gtk.glade def wrapped_label_hack(gtklabel, allocation): print "In wrapped_label_hack" gtklabel.set_size_request(allocation.width, -1) # If you uncomment this, we get INFINITE LOOPING! # gtklabel.set_size_request(-1, -1) print "Leaving wrapped_label_hack" class ExampleGTK: def __init__(self, filename): self.tree = gtk.glade.XML(filename, "window1", "Example") self.id = "window1" self.tree.signal_autoconnect(self) # Connect to the size-allocate signal self.get_widget("label1").connect("size-allocate", wrapped_label_hack) def on_destroy(self, widget): self.close() def get_widget(self, id): return self.tree.get_widget(id) def close(self): window = self.get_widget(self.id) if window is not None: window.destroy() gtk.main_quit() if __name__ == "__main__": window = ExampleGTK("example.glade") gtk.main()
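    For what it's worth, one way to stop the loop in wrapped_label_hack is to make the callback idempotent: only issue a new size request when the width actually changed, so allocate -> request -> allocate converges instead of ping-ponging. A sketch against the code above:

    def wrapped_label_hack(gtklabel, allocation):
        # Only re-request on a real width change; unconditionally requesting
        # (or resetting to -1) is what feeds the infinite allocate loop.
        if gtklabel.get_size_request()[0] != allocation.width:
            gtklabel.set_size_request(allocation.width, -1)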

    Read the article

  • WPF DataTemplates with VS2010 designer support + reusable - would you do it that way?

    - by Christian
    OK, I am currently tidying up all my old stuff. I ran into the issue of "code-only DataTemplates", which are really a pain in the ass. You can't see anything, they are really hard to design, and I want to improve my project. So I had the idea to use the following solution. The main benefits are: you have designer support for your data template; you can easily include sample data; the file naming is consistent and easy to remember; the preview does not require an additional XAML wrapper (even with code-only controls). I will try to explain and illustrate my solution using a few pictures. I am interested in feedback, especially if you can imagine a better way to do it, and, of course, if you see any maintenance or performance issues. OK, let's start with a simple PreviewObject. I want to have some data in it, so I create a subclass which will automatically fill in some dummy data. Then I add a list to the control and name this list. Afterwards I add a DataTemplate; this is the sole reason for the whole control (to be able to see and edit the DataTemplate in place): Now I use this control to get my DataTemplate, to use it in other places. To make this easier, I added some code in the code-behind, see here: Now I want a control to show me a list of PreviewItems, so I created a "code-only" control which creates an instance of my service (or gets one using DI in the real world) and fills its list box with it: To view the result of this work, I added this control inside the same-named XAML; this is basically only to be able to see the final result: What I do not like about this solution: the need to create the last control in "code only". So I tried something different while writing this post. The following two screenshots illustrate the approach. I am creating an instance of the service inside the DataContext, and I am using bindings to supply the ItemsSource and the ItemTemplate. The reason for the strange "static property" is refactoring support. If I hardcode the path in the designer (e.g. using "Path = PreviewHistory") and I refactor the names (which happens quite often in the early design phase), I screw up my controls without realizing it. Does anyone have a better idea for this? I am using ReSharper, btw. Thanks for any input, and sorry for the image overkill; it's just easier to explain that way. Chris
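    One hedged alternative for the code-only consumer: let the preview control expose its designer-edited template through a typed property, so every consumer goes through the compiler rather than a binding string (the names below are hypothetical):

    public partial class PreviewHistoryControl : UserControl
    {
        // A rename here is a compile error instead of a silently broken binding.
        public DataTemplate HistoryItemTemplate
        {
            get { return (DataTemplate)Resources["HistoryItemTemplate"]; }
        }
    }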

    Read the article

  • Any way to override how a <choice> element is bound by xsd.exe

    - by code4life
    I have the following elements in my schema: <xs:schema attributeFormDefault="unqualified" elementFormDefault="qualified" xmlns:xs="http://www.w3.org/2001/XMLSchema"> <xs:complexType name="optimizeModelBase"> <xs:attribute name="name" type="xs:string"/> </xs:complexType> <xs:complexType name="riskModel"> <xs:complexContent> <xs:extension base="optimizeModelBase"> <xs:attribute name="type" type="xs:string" use="required"/> </xs:extension> </xs:complexContent> </xs:complexType> <xs:complexType name="fullCovariance"> <xs:complexContent> <xs:extension base="optimizeModelBase"> <xs:attribute name="fromDate" type="xs:date" use="required"/> <xs:attribute name="toDate" type="xs:date" use="required"/> <xs:attribute name="windowSize" type="xs:int" use="required"/> </xs:extension> </xs:complexContent> </xs:complexType> In my main schema body, I use a choice element to specify a 1-of situation: <xs:choice id="RiskModelParameter"> <xs:element name="RiskModel" type="riskModel"/> <xs:element name="FullCovariance" type="fullCovariance"/> </xs:choice> When I run xsd.exe, the resulting code is: [System.Xml.Serialization.XmlElementAttribute("FullCovariance", typeof(fullCovariance))] [System.Xml.Serialization.XmlElementAttribute("RiskModel", typeof(riskModel))] public optimizeModelBase Item { get { return this.itemField; } set { this.itemField = value; } } The issue is that the choice element's id attribute is being ignored, and xsd.exe is arbitrarily naming the property "Item". I have to admit, it's not a big issue, but it's starting to annoy me. What makes this extra annoying is that if I have additional elements at the same level, xsd.exe binds them as "Item1", "Item2", etc. Does anyone know if it's possible to not have xsd.exe name my choice elements as "Item", and instead be able to put in my own property names?
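    As far as I know, xsd.exe has no switch to honour the id attribute when naming the property. One hedged workaround: the generated types are partial classes, so a hand-written partial can at least expose a readable alias over the generated member (the class name below is hypothetical, standing in for whatever xsd.exe emitted for the parent element):

    public partial class MyRequest
    {
        // Not serialized itself; just a typed alias for the generated Item member.
        [System.Xml.Serialization.XmlIgnore]
        public optimizeModelBase RiskModelParameter
        {
            get { return this.Item; }
            set { this.Item = value; }
        }
    }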

    Read the article

  • Release Process Improvements

    - by wallismark
    The process of creating a new build and releasing it to production is a critical step in the SDLC, but it is often left as an afterthought and varies greatly from one company to the next. I'm hoping people will share improvements they have made to this process in their organisation so we can all take steps to 'reduce the pain'. So the question is: name one painful/time-consuming part of your release process and what you did to improve it. My example: at a previous employer all developers made database changes on one common development database. When it came to release time, we used Redgate's SQL Compare to generate a huge script from the differences between the Dev and QA databases. This works reasonably well, but the problems with this approach are: ALL changes in the Dev database are included, some of which may still be 'works in progress'. Sometimes developers made conflicting changes (that were not noticed until the release was in production). It was a time-consuming and manual process to create and validate the script (by validate I mean try to weed out issues like problems 1 and 2). When there were problems with the script (e.g. the running order, such as creating a record which relies on a foreign-key record that is in the script but not yet run), it took time to 'tweak' it so it ran smoothly. It's not an ideal scenario for Continuous Integration. So the solution was: Enforce a policy that all changes to the database must be scripted. A naming convention was important for ensuring the correct running order of the scripts. Create/use a tool to run the scripts at release time. Developers had their own copy of the database to develop against (so there was no more 'stepping on each other's toes'). The next release after we started this process was much faster with fewer problems; indeed, the only problems found were due to people 'breaking the rules', e.g. not creating a script. Once the issues with releasing to QA were fixed, the release to production was very smooth. We applied a few other changes (like introducing CI), but this was the most significant; overall we reduced release time from around 3 hours down to a max of 10-15 minutes.
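    As a sketch of what such a convention and runner might look like (tool and variable names are illustrative): zero-padded sequence numbers make alphabetical order the execution order, and applying a release becomes a loop.

    # scripts/0001_create_customer_table.sql
    # scripts/0002_add_customer_email.sql
    # scripts/0003_backfill_customer_email.sql
    for f in scripts/*.sql; do
        sqlcmd -S "$SERVER" -d "$DB" -i "$f" || exit 1   # stop on the first failure
    done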

    Read the article

< Previous Page | 46 47 48 49 50 51 52 53 54 55 56  | Next Page >