Search Results

Search found 893 results on 36 pages for 'convention over configura'.

Page 26/36

  • building a hash lookup table during `git filter-branch` or `git-rebase`

    - by intuited
    I've been using the SHA1 hashes of my commits as references in documentation, etc. I've realized that if I need to rewrite those commits, I'll need to create a lookup table that maps the hashes of the original repo to the hashes of the filtered repo. Since these are effectively UUIDs, a simple lookup table would do. I think that it's relatively straightforward to write a script to do this during a filter-branch run; that's not really my question, though if there are some gotchas that make it complicated, I'd certainly like to hear about them. I'm really wondering if there are any tools that provide this functionality, or if there is some sort of convention on where to keep the lookup table/what to call it? I'd prefer not to do things in a completely idiosyncratic way.
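    For illustration only (not an established convention): `git filter-branch` backs up the pre-rewrite refs under refs/original/, so if the filter neither drops nor adds commits you can pair the two histories commit by commit after the run. A minimal Python sketch, where the branch names and the output file name are placeholders:

        # Sketch: build an old-SHA -> new-SHA table after a filter-branch run.
        # Assumes the rewrite preserved the number and order of commits
        # (true for content-only filters that do not prune anything).
        import subprocess

        def rev_list(ref):
            out = subprocess.check_output(
                ["git", "rev-list", "--reverse", ref], text=True)
            return out.split()

        old = rev_list("refs/original/refs/heads/master")  # pre-rewrite history
        new = rev_list("refs/heads/master")                # rewritten history

        assert len(old) == len(new), "filter changed the commit count"

        with open("sha-map.txt", "w") as f:
            for o, n in zip(old, new):
                f.write(f"{o} {n}\n")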

    Read the article

  • name of class that manipulates the entities

    - by cyberguest
    Hi, I have a general question regarding naming conventions. If I separate the data and the operations into two separate classes, one holds the data elements (the entity) and the other manipulates the entity class. What do we usually call the class that manipulates the entity class? (The entity I am referring to has nothing to do with any kind of entity framework.) Manager? Controller? Operator? Manipulator? Thanks in advance.
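    For illustration only (the names here are hypothetical), the most common convention is to keep the entity as a plain data class and pair it with a Service (or Manager) class that operates on it:

        // Entity: data only.
        public class Invoice
        {
            public int Id { get; set; }
            public decimal Amount { get; set; }
        }

        // The class that manipulates the entity. "InvoiceService" (or
        // "InvoiceManager") is the usual name; "InvoiceRepository" if it
        // only handles persistence.
        public class InvoiceService
        {
            public void Approve(Invoice invoice)
            {
                // business logic that operates on the entity goes here
            }
        }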

    Read the article

  • ExpressJS: What is the difference between app.local and res.local?

    - by aeyang
    I'm trying to learn Express and in my app I have middleware that passes the session object from the Request object to my Response object so that I can access it in my views: app.use((req, res, next) -> res.locals.session = req.session next() ) But app.locals is available to the views as well, right? So is it the same if I do app.locals.session = req.session? Is there a convention for the types of things app.locals and res.locals are used for? I was also confused about the difference between res.render() and res.redirect(). When should each be used? Thanks for reading. Any help related to Express is appreciated!
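    A minimal JavaScript sketch of the usual split (the names are made up, and it assumes session middleware is configured): app.locals holds values that are the same for every request, while res.locals is rebuilt per request, typically in middleware:

        var express = require('express');
        var app = express();

        // app.locals: application-wide values, set once, visible to all views.
        app.locals.siteTitle = 'My App';

        // res.locals: per-request values, recomputed for every request.
        app.use(function (req, res, next) {
            res.locals.session = req.session;   // requires session middleware
            next();
        });

        app.get('/', function (req, res) {
            // res.render() builds a view and sends HTML in this response;
            // res.redirect() tells the browser to issue a new request elsewhere.
            res.render('index');
        });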

    Read the article

  • Fixing the Model Binding issue of ASP.NET MVC 4 and ASP.NET Web API

    - by imran_ku07
    Introduction: Yesterday, when I was checking the ASP.NET forums, I found an important issue/bug in ASP.NET MVC 4 and ASP.NET Web API. The issue is present in the System.Web.PrefixContainer class, which is used by both the ASP.NET MVC and ASP.NET Web API assemblies. The details of this issue are available in this thread. This bug can be a breaking change for you if you upgraded your application to ASP.NET MVC 4 and your application's model properties use the convention described in the above thread. So, I have created a package which fixes this issue in both ASP.NET MVC and ASP.NET Web API. In this article, I will show you how to use this package. Description: Create or open an ASP.NET MVC 4 project and install the ImranB.ModelBindingFix NuGet package. Then, add this using statement to your Global.asax.cs file: using ImranB.ModelBindingFix; Then, just add this line in the Application_Start method: Fixer.FixModelBindingIssue(); // to fix only MVC, call Fixer.FixMvcModelBindingIssue(); to fix only Web API, call Fixer.FixWebApiModelBindingIssue(). This line will fix the model binding issue. If you are using Html.Action or Html.RenderAction then you should use Html.FixedAction or Html.FixedRenderAction instead to avoid this bug (make sure to reference the ImranB.ModelBindingFix.SystemWebMvc namespace). If you are using the FormDataCollection.ReadAs extension method then you should use FormDataCollection.FixedReadAs instead to avoid this bug (make sure to reference the ImranB.ModelBindingFix.SystemWebHttp namespace). The source code of this package is available on GitHub. Summary: There is a small but important issue/bug in ASP.NET MVC 4. In this article, I showed you how to fix this issue/bug by using a package. Hopefully you will enjoy this article too.
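    A minimal sketch of that wiring in Global.asax.cs (the Fixer method names are the ones given in the post; the rest of Application_Start is just a placeholder comment):

        using ImranB.ModelBindingFix;

        public class MvcApplication : System.Web.HttpApplication
        {
            protected void Application_Start()
            {
                // ... the usual area, filter and route registration from the
                // MVC 4 project template goes here ...

                // Fixes the System.Web.PrefixContainer binding issue in both MVC
                // and Web API; the post also mentions FixMvcModelBindingIssue()
                // and FixWebApiModelBindingIssue() to patch only one side.
                Fixer.FixModelBindingIssue();
            }
        }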

    Read the article

  • Hyperlinked, externalized source code documentation

    - by Dave Jarvis
    Why do we still embed natural language descriptions of source code (i.e., the reason why a line of code was written) within the source code, rather than as a separate document? Given the expansive real estate afforded to modern development environments (high-resolution monitors, dual monitors, etc.), an IDE could provide semi-lock-step panels wherein source code is visually separated from -- but intrinsically linked to -- its corresponding comments. For example, developers could write source code comments in a hyper-linked markup language (linking to additional software requirements), which would simultaneously prevent documentation from cluttering the source code. What shortcomings would inhibit such a software development mechanism? A mock-up to help clarify the question: when the cursor is at a particular line in the source code (shown with a blue background, above), the documentation that corresponds to the line at the cursor is highlighted (i.e., distinguished from the other details). As noted in the question, the documentation would stay in lock-step with the source code as the cursor jumps through the source code. A hot-key could switch between "documentation mode" and "development mode". Potential advantages include:
    - More source code and more documentation on the screen(s) at once
    - Ability to edit documentation independently of source code (regardless of language?)
    - Write documentation and source code in parallel without merge conflicts
    - Real-time hyperlinked documentation with superior text formatting
    - Quasi-real-time machine translation into different natural languages
    - Every line of code can be clearly linked to a task, business requirement, etc.
    - Documentation could automatically timestamp when each line of code was written (metrics)
    - Dynamic inclusion of architecture diagrams, images to explain relations, etc.
    - Single-source documentation (e.g., tag code snippets for user manual inclusion)
    Note: the documentation window could be collapsed, and the workflow for viewing or comparing source files would not be affected. How the implementation happens is a detail; the documentation could be kept at the end of the source file, split into two files by convention (filename.c, filename.c.doc), or fully database-driven. By hyperlinked documentation, I mean linking to external sources (such as StackOverflow or Wikipedia), internal documents (i.e., a wiki on a subdomain that could cross-reference business requirements documentation) and other source files (similar to JavaDocs). Related thread: What's with the aversion to documentation in the industry?

    Read the article

  • Developing web apps using ASP.NET MVC 3, Razor and EF Code First - Part 1

    - by shiju
    In this post, I will demonstrate web application development using ASP.NET MVC 3, Razor and EF Code First. This post will also cover Dependency Injection using Unity 2.0 and a generic Repository and Unit of Work for EF Code First. The following frameworks will be used for this step-by-step tutorial: ASP.NET MVC 3, EF Code First CTP 5 and Unity 2.0. Define Domain Model: Let's create the domain model for our simple web application. Category class: public class Category { public int CategoryId { get; set; } [Required(ErrorMessage = "Name Required")] [StringLength(25, ErrorMessage = "Must be less than 25 characters")] public string Name { get; set; } public string Description { get; set; } public virtual ICollection<Expense> Expenses { get; set; } } Expense class: public class Expense { public int ExpenseId { get; set; } public string Transaction { get; set; } public DateTime Date { get; set; } public double Amount { get; set; } public int CategoryId { get; set; } public virtual Category Category { get; set; } } We have two domain entities - Category and Expense. A single category contains a list of expense transactions and every expense transaction should have a Category. In this post, we will be focusing on CRUD operations for the entity Category and will be working on the Expense entity with a View Model object in a later post. The source code for this application will be refactored over time. The above entities are very simple POCO (Plain Old CLR Object) classes, and the entity Category is decorated with validation attributes from the System.ComponentModel.DataAnnotations namespace. Now we want to use these entities to define model objects for Entity Framework 4. Using the Code First approach of Entity Framework, we can first define the entities by simply writing POCO classes without any coupling to any API or database library. This approach lets you focus on the domain model, which enables Domain-Driven Development for applications. EF Code First support is currently provided by a separate API that runs on top of Entity Framework 4; EF Code First had reached CTP 5 when I was writing this article. Creating Context Class for Entity Framework: We have created our domain model; now let's create a class in order to work with Entity Framework Code First. For this, you have to download EF Code First CTP 5 and add a reference to the assembly EntityFramework.dll. You can also use NuGet to download and add a reference to EF Code First. public class MyFinanceContext : DbContext { public MyFinanceContext() : base("MyFinance") { } public DbSet<Category> Categories { get; set; } public DbSet<Expense> Expenses { get; set; } } The above class MyFinanceContext is derived from DbContext, which can connect your model classes to a database. The MyFinanceContext class maps our Category and Expense classes to the database tables Categories and Expenses using DbSet<TEntity>, where TEntity is any POCO class. When we run the application for the first time, it will automatically create the database. EF Code First looks for a connection string in web.config or app.config that has the same name as the DbContext class. If it does not find any connection string matching that convention, it will automatically create a database in the local SQL Express instance by default, and the name of the database will be the same as the DbContext class. You can also define the name of the database in the constructor of the DbContext class.
Unlike NHibernate, we don't have to use any XML-based mapping files or a fluent interface for mapping between our model and the database. The model classes of Code First work on the basis of conventions, and we can also use a fluent API to refine our model. The convention for the primary key is 'Id' or '<class name>Id'. If primary key properties are detected with type 'int', 'long' or 'short', they will automatically be registered as identity columns in the database by default. Primary key detection is not case sensitive. We can define our model classes with validation attributes from the System.ComponentModel.DataAnnotations namespace, and the validation rules are automatically enforced when a model object is updated or saved. Generic Repository for EF Code First: We have created the model classes and the DbContext class. Now we have to create a generic repository for data persistence with EF Code First. If you don't know about the repository pattern, check out Martin Fowler's article on Repository. Let's create a generic repository to work with the DbContext and DbSet generics. public interface IRepository<T> where T : class { void Add(T entity); void Delete(T entity); T GetById(long Id); IEnumerable<T> All(); } RepositoryBase - generic repository class: public abstract class RepositoryBase<T> where T : class { private MyFinanceContext database; private readonly IDbSet<T> dbset; protected RepositoryBase(IDatabaseFactory databaseFactory) { DatabaseFactory = databaseFactory; dbset = Database.Set<T>(); } protected IDatabaseFactory DatabaseFactory { get; private set; } protected MyFinanceContext Database { get { return database ?? (database = DatabaseFactory.Get()); } } public virtual void Add(T entity) { dbset.Add(entity); } public virtual void Delete(T entity) { dbset.Remove(entity); } public virtual T GetById(long id) { return dbset.Find(id); } public virtual IEnumerable<T> All() { return dbset.ToList(); } } DatabaseFactory class: public class DatabaseFactory : Disposable, IDatabaseFactory { private MyFinanceContext database; public MyFinanceContext Get() { return database ?? (database = new MyFinanceContext()); } protected override void DisposeCore() { if (database != null) database.Dispose(); } } Unit of Work: If you are new to the Unit of Work pattern, check out Fowler's article on Unit of Work. According to Martin Fowler, the Unit of Work pattern "maintains a list of objects affected by a business transaction and coordinates the writing out of changes and the resolution of concurrency problems." Let's create a class for handling the Unit of Work: public interface IUnitOfWork { void Commit(); } UnitOfWork class: public class UnitOfWork : IUnitOfWork { private readonly IDatabaseFactory databaseFactory; private MyFinanceContext dataContext; public UnitOfWork(IDatabaseFactory databaseFactory) { this.databaseFactory = databaseFactory; } protected MyFinanceContext DataContext { get { return dataContext ?? (dataContext = databaseFactory.Get()); } } public void Commit() { DataContext.Commit(); } } The Commit method of the UnitOfWork calls the Commit method of the MyFinanceContext class, which in turn executes the SaveChanges method of the DbContext class.
Repository class for Category: In this post, we will be focusing on persistence for the Category entity and will work on the other entities in a later post. Let's create a repository for handling CRUD operations for Category by deriving from the generic RepositoryBase<T>. public class CategoryRepository : RepositoryBase<Category>, ICategoryRepository { public CategoryRepository(IDatabaseFactory databaseFactory) : base(databaseFactory) { } } public interface ICategoryRepository : IRepository<Category> { } If we need additional methods beyond the generic repository for Category, we can define them in the CategoryRepository. Dependency Injection using Unity 2.0: If you are new to Inversion of Control / Dependency Injection or Unity, please have a look at my articles at http://weblogs.asp.net/shijuvarghese/archive/tags/IoC/default.aspx. I want to create a custom lifetime manager for Unity that stores instances in the current HttpContext. public class HttpContextLifetimeManager<T> : LifetimeManager, IDisposable { public override object GetValue() { return HttpContext.Current.Items[typeof(T).AssemblyQualifiedName]; } public override void RemoveValue() { HttpContext.Current.Items.Remove(typeof(T).AssemblyQualifiedName); } public override void SetValue(object newValue) { HttpContext.Current.Items[typeof(T).AssemblyQualifiedName] = newValue; } public void Dispose() { RemoveValue(); } } Let's create a controller factory for Unity in the ASP.NET MVC 3 application. public class UnityControllerFactory : DefaultControllerFactory { IUnityContainer container; public UnityControllerFactory(IUnityContainer container) { this.container = container; } protected override IController GetControllerInstance(RequestContext reqContext, Type controllerType) { IController controller; if (controllerType == null) throw new HttpException(404, String.Format("The controller for path '{0}' could not be found or it does not implement IController.", reqContext.HttpContext.Request.Path)); if (!typeof(IController).IsAssignableFrom(controllerType)) throw new ArgumentException(string.Format("Type requested is not a controller: {0}", controllerType.Name), "controllerType"); try { controller = container.Resolve(controllerType) as IController; } catch (Exception ex) { throw new InvalidOperationException(String.Format("Error resolving controller {0}", controllerType.Name), ex); } return controller; } } Configure contract and concrete types in Unity: Let's configure our contract and concrete types in Unity for resolving our dependencies.
private void ConfigureUnity() { //Create UnityContainer IUnityContainer container = new UnityContainer() .RegisterType<IDatabaseFactory, DatabaseFactory>(new HttpContextLifetimeManager<IDatabaseFactory>()) .RegisterType<IUnitOfWork, UnitOfWork>(new HttpContextLifetimeManager<IUnitOfWork>()) .RegisterType<ICategoryRepository, CategoryRepository>(new HttpContextLifetimeManager<ICategoryRepository>()); //Set container for Controller Factory ControllerBuilder.Current.SetControllerFactory(new UnityControllerFactory(container)); } In the above ConfigureUnity method, we are registering our types with the Unity container using the custom lifetime manager HttpContextLifetimeManager. Let's call the ConfigureUnity method in Global.asax.cs to set the controller factory for Unity and configure the types with Unity. protected void Application_Start() { AreaRegistration.RegisterAllAreas(); RegisterGlobalFilters(GlobalFilters.Filters); RegisterRoutes(RouteTable.Routes); ConfigureUnity(); } Developing the web application using ASP.NET MVC 3: We have created the domain model for our web application and have also created repositories and configured dependencies with the Unity container. Now we have to create controller classes and views for doing CRUD operations against the Category entity. Let's create the controller class for Category. Category Controller: public class CategoryController : Controller { private readonly ICategoryRepository categoryRepository; private readonly IUnitOfWork unitOfWork; public CategoryController(ICategoryRepository categoryRepository, IUnitOfWork unitOfWork) { this.categoryRepository = categoryRepository; this.unitOfWork = unitOfWork; } public ActionResult Index() { var categories = categoryRepository.All(); return View(categories); } [HttpGet] public ActionResult Edit(int id) { var category = categoryRepository.GetById(id); return View(category); } [HttpPost] public ActionResult Edit(int id, FormCollection collection) { var category = categoryRepository.GetById(id); if (TryUpdateModel(category)) { unitOfWork.Commit(); return RedirectToAction("Index"); } else return View(category); } [HttpGet] public ActionResult Create() { var category = new Category(); return View(category); } [HttpPost] public ActionResult Create(Category category) { if (!ModelState.IsValid) { return View("Create", category); } categoryRepository.Add(category); unitOfWork.Commit(); return RedirectToAction("Index"); } [HttpPost] public ActionResult Delete(int id) { var category = categoryRepository.GetById(id); categoryRepository.Delete(category); unitOfWork.Commit(); var categories = categoryRepository.All(); return PartialView("CategoryList", categories); } } Creating Views in Razor: Now we are going to create views in Razor for our ASP.NET MVC 3 application. Let's create a partial view CategoryList.cshtml for listing category information and providing links for the Edit and Delete operations.
CategoryList.cshtml: @using MyFinance.Helpers; @using MyFinance.Domain; @model IEnumerable<Category> <table> <tr> <th>Actions</th> <th>Name</th> <th>Description</th> </tr> @foreach (var item in Model) { <tr> <td> @Html.ActionLink("Edit", "Edit", new { id = item.CategoryId }) @Ajax.ActionLink("Delete", "Delete", new { id = item.CategoryId }, new AjaxOptions { Confirm = "Delete Expense?", HttpMethod = "Post", UpdateTargetId = "divCategoryList" }) </td> <td> @item.Name </td> <td> @item.Description </td> </tr> } </table> <p> @Html.ActionLink("Create New", "Create") </p> The delete link provides Ajax functionality using Ajax.ActionLink. This sends an Ajax request to the Delete action method in the CategoryController class. The Delete action method returns the partial view CategoryList after deleting the record. We are using the CategoryList view for the Ajax functionality and also in the Index view for displaying the list of category information. Let's create the Index view using the partial view CategoryList. Index.cshtml: @model IEnumerable<MyFinance.Domain.Category> @{ ViewBag.Title = "Index"; } <h2>Category List</h2> <script src="@Url.Content("~/Scripts/jquery.unobtrusive-ajax.min.js")" type="text/javascript"></script> <div id="divCategoryList"> @Html.Partial("CategoryList", Model) </div> We can call partial views using the Html.Partial helper method. Now we are going to create view pages for the insert and update functionality for the Category. Both view pages share a common user interface for entering the category information, so I want to create an EditorTemplate for the Category information. We have to create the EditorTemplate with the same name as the entity object so that we can refer to it in view pages using @Html.EditorFor(model => model). So let's create a template with the name Category. Let's create the view page for inserting Category information: @model MyFinance.Domain.Category @{ ViewBag.Title = "Save"; } <h2>Create</h2> <script src="@Url.Content("~/Scripts/jquery.validate.min.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/jquery.validate.unobtrusive.min.js")" type="text/javascript"></script> @using (Html.BeginForm()) { @Html.ValidationSummary(true) <fieldset> <legend>Category</legend> @Html.EditorFor(model => model) <p> <input type="submit" value="Create" /> </p> </fieldset> } <div> @Html.ActionLink("Back to List", "Index") </div> ViewStart file: In Razor views, we can add a file named _viewstart.cshtml in the Views directory, and it will be shared among all the views within the Views directory. The code below in _viewstart.cshtml sets the Layout page for every view in the Views folder: @{ Layout = "~/Views/Shared/_Layout.cshtml"; } Source Code: You can download the source code from http://efmvc.codeplex.com/. The source will be refactored over time. Summary: In this post, we have created a simple web application using ASP.NET MVC 3 and EF Code First. We have discussed technologies and practices such as ASP.NET MVC 3, Razor, EF Code First, Unity 2, the generic Repository and the Unit of Work. In later posts, I will modify the application and discuss more topics.
Stay tuned to my blog for more posts on step-by-step application building.

    Read the article

  • Upcoming UPK Events

    - by kathryn.lustenberger(at)oracle.com
    February 15th: UPK: Follow Panduit's Lead and Leverage Oracle's User Productivity Kit To Achieve Your Goals - Join us for a live webcast to learn how Oracle's User Productivity Kit can help you meet and exceed your goals. The webcast will feature Jim Boss, from the Panduit Corporation, who will share how Oracle's User Productivity Kit was used with both Oracle and non-Oracle applications to help Panduit meet its goals. Date: February 15th, 2011 at 12:00 PST / 3:00 EST Evite: http://www.oracle.com/us/dm/65630-naod10046029mpp005c010-se-300908.html March 2nd: Synaptis teams with Oracle to deliver a UPK customer success story - Webinar Offering The Value of UPK (Customer Success Story): How to leverage the value of UPK to streamline processes and maximize end user adoption for a global implementation. Join us to learn how the power of UPK can be leveraged to train end users globally in a successful and cost-effective manner. A valued Oracle UPK customer will share experiences, successes, challenges, and strategies. The webinar will also include a question and answer session to give the attendees an opportunity to interact directly with the Oracle UPK customer, Synaptis, and the Oracle UPK Team. Date: March 2, 2011 Time: 11:00am - 12:00pm EST Register for this webinar March 27 - 30th: The Alliance 2011 conference is an annual event for all higher education, government, and public sector users of Oracle applications. The Alliance conference is organized and managed by the Higher Education User Group (www.heug.org). This is the 14th annual event for the HEUG. This is your opportunity to join with over 3200 other Higher Education, Federal, State and Local Government users to network, learn and share in our amazing combined experiences. The Alliance conference team is hard at work, putting together the best conference ever for 2011 - so don't delay, make your plans now to be part of Alliance 2011! When: Sunday, March 27th, 2011 - Wednesday, March 30, 2011 Where: The Colorado Convention Center (Denver, Colorado) Registration for Alliance 2011 is Now Open! UPK will be represented at this event offering: Pre-Conference Training: Learn the Basics of Oracle User Productivity Kit (UPK); Taking Your UPKs to a Whole New Level, Advanced Use of UPK; Demo Pod Staff; Sessions: Oracle User Productivity Kit: Creating Value throughout the Project Lifecycle; Beyond Basic UPK -- User Tracking and SmartHelp; Leveraging Oracle and User Productivity Kit (UPK) to Develop a Comprehensive Training Program; Oracle User Productivity Kit Strategy and Roadmap -- Key to User Adoption. April 10 - 14th: Registration for COLLABORATE 11 has begun - Don't miss the most comprehensive, user-driven conference devoted to Oracle applications and technology. Collaborate with a global network of more than 5,000 peers and experts to share real-world experiences, solve your challenges and gain insights to validate your technology plans. Read below to discover which group to register with for the best value. UPK will be represented at this event offering: Demo Pod Staff; Sessions: Oracle User Productivity Kit: Creating Value throughout the Project Lifecycle; Centralize all Project Team assets, AND, Deploy Fully Measurable Training with UPK Pro; Oracle User Productivity Kit Strategy and Roadmap - Key to User Adoption. Registration is Now Open!

    Read the article

  • Deploying an ADF Secure Application using WLS Console

    - by juan.ruiz
    Last week I worked on a requirement from a customer that wanted to understand how to deploy to WLS an application with ADF Security without using JDeveloper. The main question was what steps were needed in order to set up Enterprise Roles, Security Policies and Application Credentials. In this entry I will explain the steps taken using JDeveloper 11.1.1.2.0. Requirements: Instead of building a sample application from scratch, we can use Andrejus's sample application that contains all the security pieces that we need. Open and migrate the project. Also make sure you adjust the database settings accordingly. Creating the EAR file: Review the security settings of the application by going into the Application -> Secure menu and see that there are two enterprise roles as well as the ADF Policies enforcing security on the main page. Make sure the Application Module uses the Data Source instead of a JDBC URL for its connection type; also take note of the data source name - in my case I have: java:comp/env/jdbc/HrDS. To facilitate access to this application once we deploy it, go to your ViewController project properties, select the Java EE Application category and give a meaningful name to the context root as well as to the Application Name. Go to the ADFSecurityWL Application properties -> Deployment and create a new EAR deployment profile. Uncheck the Auto generate and Synchronize weblogic-jdbc.xml Descriptors During Deployment option. Deploy the application as an EAR file. Deploying the Application to WLS using the WLS Console: On the WLS console create a JNDI data source. This is the part of the whole exercise that I found trickiest, given that the name should match the AM's data source name; the naming convention that worked for me was jdbc.HrDS. Now, deploy the application manually by selecting Deployments -> Install, look for the EAR and follow the default steps. If this is the first time you deploy the application, once the deployment finishes you will be asked to Activate Changes on the domain; these changes insert all the security policies and application roles into the WLS instance. Creating Roles and User Groups for the Application: To finish the after-deployment setup, we need to create the groups that are the equivalent of the Enterprise Roles of ADF Security. For our sample we have two Enterprise Roles: employeesApplication and managersApplication. After that, we create the application users and assign them to their respective groups. Now we can run the application and test the security constraints.

    Read the article

  • INETA Community Leadership Summit

    - by Scott Spradlin
    INETA Community Leadership Summit will be taking place on Sunday June 6th at 1PM at Tech·Ed North America in New Orleans. INETA is hosting a free Community Leadership Summit in New Orleans at the Ernest N. Morial Convention Center on Sunday June 6th at 1:00 PM prior to the start of Tech·Ed 2010. The summit is open to Community Leaders from the area, as well as those attending Tech·Ed from across the country and around the world. It is an excellent opportunity for exchanging information and ideas. If you are a user group leader, or are involved in the leadership, planning, promotion, or day-to-day operations of a user group community, this event is for YOU! The summit is an open forum to share ideas, discuss common challenges, and gain from the experience of other leaders. INETA Community Leadership summits are part of an ongoing effort by INETA to create, improve and share resources designed to strengthen individual user groups and the community. This meeting will be the perfect opportunity to meet leaders from other groups, benefit from their success stories, and expand your network of contacts.   Quick FAQs Who can attend? Any leader or volunteer of any INETA User Group. Do I need to be attending Tech·Ed? No, you do NOT need to purchase a pass for Tech·Ed to attend the Leadership Summit. What does it cost to attend? There is NO cost to attend summit, but the knowledge that will be available about User Groups will be priceless. I want to help out, who do I contact? Send an email to [email protected] if you are interested. I want to attend, where do I register? We are putting together a registration link now, it will be published in a future newsletter and on the website. What will the format of the summit be? The summit will be like our Birds of a Feather Sessions but focused on User Group topics. Moderators will be armed with some broad topics to kick off the conversation, however the real value of these sessions is getting the chance to learn from each other. What topics will be covered? We are thinking of focusing on 4 areas: Running a User Group, Effective Content and Presenters, User Group Promotion and Developing Partnerships. However the agenda is yours! If there is a topic you want to see covered, or a topic that you would like to lead then email  [email protected]. Technorati Tags: conference

    Read the article

  • My Body Summary template for Orchard

    - by Bertrand Le Roy
    By default, when Orchard displays a content item such as a blog post in a list, it uses a very basic summary template that removes all markup and then extracts the first 200 characters. Removing the markup has the unfortunate effect of removing all styles and images, in particular the image I like to add to the beginning of my posts. Fortunately, overriding templates in Orchard is a piece of cake. Here is the Common.Body.Summary.cshtml file that I drop into the Views/Parts folder of pretty much all Orchard themes I build: @{ Orchard.ContentManagement.ContentItem contentItem = Model.ContentPart.ContentItem; var bodyHtml = Model.Html.ToString(); var more = bodyHtml.IndexOf("<!--more-->"); if (more != -1) { bodyHtml = bodyHtml.Substring(0, more); } else { var firstP = bodyHtml.IndexOf("<p>"); var firstSlashP = bodyHtml.IndexOf("</p>"); if (firstP >=0 && firstSlashP > firstP) { bodyHtml = bodyHtml.Substring(firstP, firstSlashP + 4 - firstP); } } var body = new HtmlString(bodyHtml); } <p>@body</p> <p>@Html.ItemDisplayLink(T("Read more...").ToString(), contentItem)</p> This template does not remove any tags, but instead looks for an HTML comment delimiting the end of the post's intro: <!--more--> This is the same convention that is being used in WordPress, and it's easy to add from the source view in TinyMCE or Live Writer. If such a comment is not found, the template will extract the first paragraph (delimited by <p> and </p> tags) as the summary. And if it finds neither, it will use the whole post. The template also adds a localizable link to the full post.

    Read the article

  • Going Inside the Store

    - by David Dorf
    Location was the first "killer-tech" for smartphones, and innovators have found several ways to use it. For retail, apps exist to find nearby stores, provide coupons, and give directions to the front door. But once you enter the store, location-finding ceases to work. That's because your location is usually found by finding GPS satellites in the sky, and the store's roof blocks the signal. But it won't take technology long to solve that problem. The first problem to solve is a lack of indoor maps. Navteq and others provide very accurate maps of the outdoors, enabling navigation for cars and pedestrians. Micello is building a business creating digital maps of indoor locations like malls, convention centers, and office buildings. They have over 500 live maps, including maps of IKEA stores. They claim it took them only four hours to create a map of the Stanford Shopping Center in Palo Alto with its 1.4 million square feet and 140 retail stores. And within stores, retailers are producing more accurate plan-o-grams. I'm always impressed watching demos of our space planning from AVT. It uses CAD software to allow you to walk the virtual store and see products on the shelves. The second problem is being able to determine location inside the store so it can be overlaid on the map. There are several goals for this endeavor. Your smartphone might direct you straight to particular products, it might summon a sales associate to your location for immediate assistance, and it might send you coupons based on the aisle you're viewing. Companies like Nearbuy, ZuluTime, and Skyhook are working to master indoor location using a combination of GPS signals, WiFi, and cell tower positioning to calculate a location. (Skyhook calls this WPS, as depicted in the chart.) Today they can usually hit 10 meters accuracy, but that number is improving all the time. When it gets inside 3 meters, some of the goals mentioned earlier will be within easy reach. I for one can't wait until the time my iPhone leads me directly to the sprinkler heads in Lowes and Home Depot.

    Read the article

  • XNA extending an existing Content type

    - by Maarten
    We are doing a game in XNA that reacts to music. We need to do some offline processing of the music data and therefore we need a custom type containing the Song and some additional data: // Project AudioGameLibrary namespace AudioGameLibrary { public class GameTrack { public Song Song; public string Extra; } } We've added a Content Pipeline extension: // Project GameTrackProcessor namespace GameTrackProcessor { [ContentSerializerRuntimeType("AudioGameLibrary.GameTrack, AudioGameLibrary")] public class GameTrackContent { public SongContent SongContent; public string Extra; } [ContentProcessor(DisplayName = "GameTrack Processor")] public class GameTrackProcessor : ContentProcessor<AudioContent, GameTrackContent> { public GameTrackProcessor(){} public override GameTrackContent Process(AudioContent input, ContentProcessorContext context) { return new GameTrackContent() { SongContent = new SongProcessor().Process(input, context), Extra = "Some extra data" // Here we can do our processing on 'input' }; } } } Both the Library and the Pipeline extension are added to the Game Solution and references are also added. When trying to use this extension to load "gametrack.mp3" we run into problems however: // Project AudioGame protected override void LoadContent() { AudioGameLibrary.GameTrack gameTrack = Content.Load<AudioGameLibrary.GameTrack>("gametrack"); MediaPlayer.Play(gameTrack.Song); } The error message: Error loading "gametrack". File contains Microsoft.Xna.Framework.Media.Song but trying to load as AudioGameLibrary.GameTrack. AudioGame contains references to both AudioGameLibrary and GameTrackProcessor. Are we maybe missing other references? EDIT Selecting the correct content processor helped, it loads the audio file correctly. However, when I try to process some data, e.g: public override GameTrackContent Process(AudioContent input, ContentProcessorContext context) { int count = input.Data.Count; // With this commented out it works fine return new GameTrackContent() { SongContent = new SongProcessor().Process(input, context) }; } It crashes with the following error: Managed Debugging Assistant 'PInvokeStackImbalance' has detected a problem in 'C:\Users\Maarten\Documents\Visual Studio 2010\Projects\AudioGame\DebugPipeline\bin\Debug\DebugPipeline.exe'. Additional Information: A call to PInvoke function 'Microsoft.Xna.Framework.Content.Pipeline!Microsoft.Xna.Framework.Content.Pipeline.UnsafeNativeMethods+AudioHelper::OpenAudioFile' has unbalanced the stack. This is likely because the managed PInvoke signature does not match the unmanaged target signature. Check that the calling convention and parameters of the PInvoke signature match the target unmanaged signature. Information from logger right before crash: Using "BuildContent" task from assembly "Microsoft.Xna.Framework.Content.Pipel ine, Version=4.0.0.0, Culture=neutral, PublicKeyToken=842cf8be1de50553". Task "BuildContent" Building gametrack.mp3 -> bin\x86\Debug\Content\gametrack.xnb Rebuilding because asset is new Importing gametrack.mp3 with Microsoft.Xna.Framework.Content.Pipeline.Mp3Imp orter Im experiencing exactly this: http://forums.create.msdn.com/forums/t/75996.aspx

    Read the article

  • How to implement an email unsubscribe system for a site with many kinds of emails?

    - by Mike Liu
    I'm working on a website that features many different types of emails. Users have accounts, and when logged in they have access to a settings page that they can use to customize what types of emails they receive. However, I'd like to also give users an easy way to unsubscribe directly in the emails they receive. I've looked into List-Unsubscribe headers as well as creating some type of one-click link that would unsubscribe a user from that type of email without requiring login or further action. The latter would probably require me to break convention and make changes to the database in response to a GET on the link. However, am I incorrect in thinking that either of these would require me to generate and permanently store a unique identifier in my database for every email I ever send, really complicating email delivery? Without that, I'm not sure how I would be able to uniquely identify a user and a type of email in order to change their email preferences, and this identifier would need to be stored forever as a user could have an email sitting in their inbox for a long time before they decide to act on it. Alternatively, I was considering having a no-login page for managing email preferences. In contrast to the above, where I would need one of these identifiers for each email, this would only need one identifier per user, with no generation or other action required on sending an email. All of these raise security issues, and they could potentially be used by people to tamper with others' email preferences. This could be mitigated somewhat by ensuring that the identifier is really difficult to guess. For the once-per-user identifier approach, I was considering generating the identifier by passing a user's ID through some type of encryption algorithm; is this a sound approach? For the per-email identifiers, perhaps I could use a user's ID appended to the time. However, even this would not eliminate the problem entirely, as this would really just be security through obscurity, and anyone with the URL could tamper, and in the end the main defense would have to be that most people aren't so bored as to tamper with other people's email preferences. Are there any other alternatives I've missed, or issues or solutions with these that anyone can provide insight on? What are best practices in this area?
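    One common alternative (not from the post, just an illustration) is to make the link token self-verifying rather than stored: sign the user id and the email type with a server-side secret (an HMAC), so nothing has to be generated or kept per email and the token cannot be forged without the key. A minimal Python sketch; the secret, URL and field layout are hypothetical:

        # Sketch: stateless one-click unsubscribe tokens signed with HMAC-SHA256.
        import hmac, hashlib

        SECRET_KEY = b"server-side-secret"  # hypothetical; keep out of source control

        def make_token(user_id, email_type):
            payload = f"{user_id}:{email_type}".encode()
            sig = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
            return f"{user_id}:{email_type}:{sig}"

        def verify_token(token):
            user_id, email_type, sig = token.rsplit(":", 2)
            expected = hmac.new(SECRET_KEY, f"{user_id}:{email_type}".encode(),
                                hashlib.sha256).hexdigest()
            # constant-time comparison to avoid timing attacks
            return (user_id, email_type) if hmac.compare_digest(sig, expected) else None

        # e.g. https://example.com/unsubscribe?token=42:newsletter:ab12...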

    Read the article

  • Get to Know a Candidate (12 of 25): Andre Barnett&ndash;Reform Party

    - by Brian Lanham
    DISCLAIMER: This is not a post about “Romney” or “Obama”. This is not a post for whom I am voting. Information sourced for Wikipedia. Barnett is an American politician and entrepreneur. He is the founder of the information technology (IT) company WiseDome Inc.  Barnett was born in Zanesville, Ohio in 1976. He attended Austin Peay State University and Western Governors University.  A former member of the United States Armed Forces, Barnett served in Sarajevo before being wounded in a helicopter accident.  Following his military service, Barnett became a fitness model in New York. In 2001, he founded WiseDome Incorporated, an IT company that provides information technology and data recovery services. Reform Party of the United States of America (RPUSA), generally known as the Reform Party USA or the Reform Party, is a political party in the United States, founded in 1995 by Ross Perot. Perot said Americans were disillusioned with the state of politics—as being corrupt and unable to deal with vital issues—and desired a viable alternative to the Republican and Democratic Parties. The party has nominated different candidates over the years, such as founder Ross Perot, Pat Buchanan, and Ralph Nader. The party's most significant victory came when Jesse Ventura was elected governor of Minnesota in 1998. Since then, the party has been torn by infighting and disagreements, which it seeks to overcome. The Reform Party platform includes the following: * Maintaining a balanced budget, ensured by passing a Balanced Budget Amendment and changing budgeting practices, and paying down the federal debt * Campaign finance reform, including strict limits on campaign contributions and the outlawing of the Political action committee * Enforcement of existing immigration laws and opposition to illegal immigration * Opposition to free trade agreements like the North American Free Trade Agreement and Central America Free Trade Agreement, and a call for withdrawal from the World Trade Organization * Term limits on U.S. Representatives and Senators * Direct election of the United States President by popular vote * Federal elections held on weekends A noticeable absence from the Reform Party platform has been social issues, including abortion and gay rights. Reform Party representatives had long stated beliefs that their party could bring together people from both sides of these issues, which they consider divisive, to address what they considered to be more vital concerns as expressed in their platform. The idea was to form a large coalition of moderates; that intention was overridden in 2001 by the Buchanan takeover which rewrote the RPUSA Constitution to specifically include platform planks opposed to any form of abortion. The Buchananists, in turn, were overridden by the 2002 Convention which specifically reverted the Constitution to its 1996 version and the party's original stated goals. Barnett has Ballot Access in: FL Learn more about Andre Barnett and Reform Party on Wikipedia.

    Read the article

  • Is ASP.NET MVC completely (and exclusively) based on conventions?

    - by Mike Valeriano
    --TL;DR Is there a "Hello World!" ASP.NET MVC tutorial out there that doesn't rely on conventions and "stock" projects? Is it even possible to take advantage of the technology without reusing the default file structure, and start from a single "hello_world.asp" file or something (like in PHP)? Am I completely mistaken and I should be looking somewhere else, maybe this? I'm interested in the MVC framework, not Web Forms --Background I've played a bit with PHP in the past, just for fun, and now I'm back to it since web development became relevant for me once again. I'm no professional, but I try to gain as much knowledge and control over the technology I'm working with as possible. I'm using Visual Studio 2012 for C# - my "desktop" language of choice - and since I got the Professional Edition from Dreamspark, the Web Development Tools are available, including ASP.NET MVC 4. I won't touch Web Forms, but the MVC Framework got my attention because the MVC pattern is something I can really relate to, since it provides the control I want but... not quite. Learning PHP was easy - and right from the start I could just create a "hello_world.php" file and just do something like this for immediate results: <!-- file: hello_world.php --> <?php echo "Hello World!"; ?> But I couldn't find a single ASP.NET (MVC) tutorial out there (I'll be sure to buy one of the upcoming MVC 4 books, only a month away or so) that would start like that. They all start with a sample project, building up knowledge from the basics and heavily using conventions as they go along. Which is fine, I suppose, but it's not the best way for me to learn things. Even the "Empty" project template for a new ASP.NET MVC 4 Application in VS2012 is not empty at all: several files and folders are created for you - much like a new C# desktop application project, but with C# I can in fact start from scratch, creating the project structure myself. That is not the case with PHP: I can choose from a plethora of different MVC frameworks, I can just create my own framework, or I can just skip frameworks altogether and toss random PHP along with my HTML into a single file and make it work. I understand the framework needs to establish some rules, but what if I just want to create a single page website with some C# logic behind it? Do I really need to create a whole bloat of files and folders for the sake of convention? Also, please understand that I haven't gotten far on any of those tutorials mainly because of this reason, but, if that's the only way to do it, I'll go for it using one of the books I've mentioned before. This is my first contact with ASP.NET but from the few comparisons I've read, I believe I should stay the hell away from Web Forms. Thank you. (Please forgive the broken English - it is not my primary language.)

    Read the article

  • What source code organization approach helps improve modularity and API/Implementation separation?

    - by Berin Loritsch
    Few languages are as restrictive as Java with file naming standards and project structure. In that language, the file name must match the public class declared in the file, and the file must live in a directory structure matching the class package. I have mixed feelings about that approach. While I never have to guess where a file lives, there's still a lot of empty directories and artificial constraints. There's several languages that define everything about a class in one file, at least by convention. C#, Python (I think), Ruby, Erlang, etc. The commonality in most these languages is that they are object oriented, although that statement can probably be rebuffed (there is one non-OO language in the list already). Finally, there's quite a few languages mostly in the C family that have a separate header and implementation file. For C I think this makes sense, because it is one of the few ways to separate the API interface from implementations. With C it seems that feature is used to promote modularity. Yet, with C++ the way header and implementation files are split seems rather forced. You don't get the same clean API separation that you do with C, and you are forced to include some private details in the header you would rather keep only in the implementation. There's quite a few languages that have a concept that overlaps with interfaces like Java, C#, Go, etc. Some languages use what feels like a hack to provide the same concept like C# using pure virtual abstract classes. Still others don't really have an interface concept and rely on "duck" typing--for example Ruby. Ruby has modules, but those are more along the lines of mixing in behaviors to a class than they are for defining how to interact with a class. In OO terms, interfaces are a powerful way to provide separation between an API client and an API implementation. So to hurry up and ask the question, from a personal experience point of view: Does separation of header and implementation help you write more modular code, or does it get in the way? (it helps to specify the language you are referring to) Does the strict file name to class name scheme of Java help maintainability, or is it unnecessary structure for structure's sake? What would you propose to promote good API/Implementation separation and project maintenance, how would you prefer to do it?

    Read the article

  • Your interesting code tricks/ conventions? [closed]

    - by Paul
    What interesting conventions, rules, tricks do you use in your code? Preferably some that are not so popular so that the rest of us would find them as novelties. :) Here's some of mine... Input and output parameters This applies to C++ and other languages that have both references and pointers. This is the convention: input parameters are always passed by value or const reference; output parameters are always passed by pointer. This way I'm able to see at a glance, directly from the function call, what parameters might get modified by the function: Inspiration: Old C code int a = 6, b = 7, sum = 0; calculateSum(a, b, &sum); Ordering of headers My typical source file begins like this (see code below). The reason I put the matching header first is because, in case that header is not self-sufficient (I forgot to include some necessary library, or forgot to forward declare some type or function), a compiler error will occur. // Matching header #include "example.h" // Standard libraries #include <string> ... Setter functions Sometimes I find that I need to set multiple properties of an object all at once (like when I just constructed it and I need to initialize it). To reduce the amount of typing and, in some cases, improve readability, I decided to make my setters chainable: Inspiration: Builder pattern class Employee { public: Employee& name(const std::string& name); Employee& salary(double salary); private: std::string name_; double salary_; }; Employee bob; bob.name("William Smith").salary(500.00); Maybe in this particular case it could have been just as well done in the constructor. But for Real WorldTM applications, classes would have lots more fields that should be set to appropriate values and it becomes unmaintainable to do it in the constructor. So what about you? What personal tips and tricks would you like to share?

    Read the article

  • OBIEE version 11.1.1.7.131017 has been released

    - by inowodwo
    The Business Intelligence Enterprise Edition 11.1.1.7.131017 patch set has been released, and is available to download from My Oracle Support (https://support.oracle.com). Per the patch readme: This patch set is available for all customers who are using Oracle Business Intelligence Enterprise Edition 11.1.1.7.0 and 11.1.1.7.1. It is also available for Exalytics customers who have applied the Exalytics PS3 patch. Patch 17530796 - OBIEE BUNDLE PATCH 11.1.1.7.131017 (Patch) is comprised of the following patches, which are not available separately:
    - Patch 16913445 - Patch 11.1.1.7.131017 (1 of 8) Oracle Business Intelligence Installer (BIINST)
    - Patch 17463314 - Patch 11.1.1.7.131017 (2 of 8) Oracle Business Intelligence Publisher (BIP)
    - Patch 17300417 - Patch 11.1.1.7.131017 (3 of 8) Enterprise Performance Management Components Installed from BI Installer 11.1.1.7.0 (BIFNDNEPM)
    - Patch 17463395 - Patch 11.1.1.7.131017 (4 of 8) Oracle Business Intelligence Server (BIS)
    - Patch 17463376 - Patch 11.1.1.7.131017 (5 of 8) Oracle Business Intelligence Presentation Services (BIPS)
    - Patch 17300045 - Patch 11.1.1.7.131017 (6 of 8) Oracle Business Intelligence Presentation Services (BIPS)
    - Patch 16997936 - Patch 11.1.1.7.131017 (7 of 8) Oracle Business Intelligence Presentation Services (BIPS)
    - Patch 17463403 - Patch 11.1.1.7.131017 (8 of 8) Oracle Business Intelligence Platform Client Installers and MapViewer
    You must also download Patch 16569379 - Dynamic Monitoring Service patch. The instructions to apply the bundle patch are given in the patch readme, along with some important notes if you are upgrading from 11.1.1.6.x versions. The new functionality in this patch includes:
    - Support for Microsoft Internet Explorer 10
    - Support for Oracle BI Mobile App Designer
    - Support for improved exporting functionality into Microsoft Excel
    For more information please refer to the document OBIEE 11g 11.1.1.7.131017 is Available for Oracle Business Intelligence Enterprise Edition and Exalytics (Doc ID 1595219.1). In addition we strongly recommend you review this document: OBIEE Suite Bundle Patches (Doc ID 1591422.1), which explains the new naming convention, the strategy behind bundle patches and other interesting facts about OBIEE patching. Please take some time to review it.

    Read the article

  • JUDCon 2013 Trip Report

    - by reza_rahman
    JUDCon (JBoss Users and Developers Conference) 2013 was held in historic Boston on June 9-11 at the Hynes Convention Center. JUDCon is the largest get-together for the JBoss community; it has gone global in recent years but has its roots in Boston. The JBoss folks graciously accepted a Java EE 7 talk from me and actually referenced my talk in their own sessions. I am proud to say this is my third time speaking at JUDCon/the Red Hat Summit over the years (this was the first time on behalf of Oracle). I had great company with many of the rock stars of the JBoss ecosystem speaking, such as Lincoln Baxter, Jay Balunas, Gavin King, Mark Proctor, Andrew Lee Rubinger, Emmanuel Bernard and Pete Muir. Notably missing from JUDCon were Bill Burke, Burr Sutter, Aslak Knutsen and Dan Allen. Topics included Java EE, Forge, Arquillian, AeroGear, OpenShift, WildFly, Errai/GWT, NoSQL, Drools, jBPM, OpenJDK, Apache Camel and JBoss Tools/Eclipse. My session titled "JavaEE.Next(): Java EE 7, 8, and Beyond" went very well and it was a full house. This is our main talk covering the changes in JMS 2, the Java API for WebSocket (JSR 356), the Java API for JSON Processing (JSON-P), JAX-RS 2, JPA 2.1, JTA 1.2, JSF 2.2, Java Batch, Bean Validation 1.1, Java EE Concurrency and the rest of the APIs in Java EE 7. I also briefly talked about the possibilities for Java EE 8. The slides for the talk are here: JavaEE.Next(): Java EE 7, 8, and Beyond from reza_rahman. Besides presenting my talk, it was great to catch up with the JBoss gang and attend a few interesting sessions. On Sunday night I went to one of my favorite hangouts in Boston - the exalted Middle East Club as Rolling Stone refers to it (another cool spot in an otherwise pretty boring town is "the Church"). As contradictory as it might sound to the uninitiated, the Middle East Club is possibly the best place in Boston to simultaneously get great Middle Eastern (primarily Lebanese) food and great underground metal. For folks with a bit more exposure, this is probably not contradictory at all given bands like Acrassicauda and documentaries like Heavy Metal in Baghdad. Luckily for me they were featuring a few local Thrash metal bands from the greater Boston area. It wasn't too bad considering it was primarily amateur twenty-something guys (although I'm not sure I'm a qualified critic any more since I all but stopped playing at about that age). It's great Boston has the Middle East as an incubator to keep the rock, metal, folk, jazz, blues and indie scene alive. I definitely enjoyed JUDCon/Boston and hope to be part of the conference again next year.

    Read the article

  • Play or Lift: which one is more explicit?

    - by Andrea
    I am going to investigate web development with Scala, and the choice is between learning Lift or Play: probably I will not have enough time to try both, at least at first. Now, many comparisons between the two are available on the internet, but I would like to know how they compare with respect to being explicit and involving less magic. Let me explain what I mean by example. I have used, to various degrees, CakePHP, symfony2, Django and Grails. I feel a very clear distinction between Django and symfony2, which are very explicit about what you are doing, and Grails and CakePHP, which try their best to guess what you are trying to achieve and often feel "magical". Let me give some examples comparing Django and Grails. In Django, views are functions that take a request as input and return a response. You can explicitly instantiate an HttpResponse and populate its body with a string, or you can use shortcut functions to leverage the template system. In any case the return value from your view always has the same type. In contrast, the render method from Grails is highly polymorphic. You can throw a context at it and it will try to render a template which is found by convention using that context. Or you can pass it a pair of a template path and a context and that will work too. Or a string. Or XML. Grails tries hard to make sense of whatever you return from your controller. In the Django ORM, each model class has a static attribute representing the manager for that class. That manager exposes a fluent interface to build querysets. In Grails, you can have similar functionality by composing detached criteria. Still, the most common way to query objects seems to be the use of runtime-generated methods like FindUserByEmailNotNull or FindPostByDateGreaterThan. I will not go further, but my point is that in Django-like frameworks you have control over the whole flow of the request/response process, while in Grails-like ones I feel I only have to fill in the blanks and the framework will manage the rest of the flow for me. This is not to criticize Grails or CakePHP; which style you prefer is mainly a matter of taste. In fact, I happen to like some aspects of Grails, but I feel more comfortable with a framework which does less for me. Back to the point of the question: which one among Play and Lift is more explicit about what you do, and which one tries to simplify more what you have to do with a layer of "magic"?
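    To make the Django side of that comparison concrete, here is a minimal sketch of the "explicit" style described above. It assumes a Django project with a hypothetical Post model; the app name, model, fields and template path are illustrative, not taken from the question.

        # Minimal sketch of an explicit, Django-style view: request in, HttpResponse out.
        # The Post model and its fields are hypothetical.
        from django.http import HttpResponse
        from django.shortcuts import render

        from myapp.models import Post  # hypothetical app and model


        def recent_posts(request, since):
            # Querysets are built explicitly through the model's manager.
            posts = Post.objects.filter(date__gt=since).order_by("-date")

            # Build the response by hand...
            return HttpResponse(", ".join(p.title for p in posts), content_type="text/plain")

            # ...or use the render shortcut; either way the view returns an HttpResponse,
            # rather than relying on the framework to guess from a polymorphic return value.
            # return render(request, "posts/recent.html", {"posts": posts})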

    Read the article

  • Reporting on common code smells : A POC

    - by Dave Ballantyne
    Over the past few blog entries, I’ve been looking at parsing TSQL scripts in a variety of ways for a variety of tasks. In my last entry, ‘How to prevent ‘Select *’: The elegant way’, I looked at parsing SQL to report upon uses of SELECT *. The obvious question leading on from this is, “Great, what about other code smells?” Well, using the language service parser to do that was turning out to be a bit of a hard job; sure, I was getting tokens but no real context. I wasn't even being told when the end of a statement had been reached. One of the other parsing options available from Microsoft is exposed in the assembly ‘Microsoft.SqlServer.TransactSql.ScriptDom’, which is, I believe, installed with the client development tools with SQL Server. It is much more feature-rich than the original parser I had used and breaks a TSQL script into intuitive classes for analysis. So, what sort of smells can I now find using it? Well, for an opening gambit, quite a nice little list:
        Use of NOLOCK
        Setting of READ UNCOMMITTED
        Use of SELECT *
        Insert without column references
        Explicit datatype conversion on Sargs
        Cross-server selects
        Non-use of the two-part naming convention
        Table and query hint usage
        Changes in SET options
        Use of single-line comments
        Use of ordinal column positions in the ORDER BY clause
    Now, let's not argue the point that “it depends” whether some of these are smells, but as an academic exercise it is quite interesting. The code is available from this link: https://www.dropbox.com/s/rfk32sou4fzl2cw/TSQLDomTest.zip. All the usual disclaimers apply to this code; I cannot be held responsible for anything ranging from mild annoyance through to universe destruction due to the use of this code or examples. The zip file contains a PowerShell script and my test cases. The assembly used requires .NET 4 to run, which means that you will need PowerShell 3 (though I'm running through PowerGUI and all works OK). The code searches for all .sql files in the folder hierarchy of the working path (you can override this if you want by simply changing the $Folder variable) and processes each in turn for the smells. Feedback is not great at the moment: all it does is output to an XML file (Smells.xml) the offset position and a description of the smell found. Right now, I am interested in your feedback. What do you think? Is this (or should it be) more than an academic exercise? Can tooling such as this be used as some form of code quality measure? Does it work? Do you have a case listed above which is not being reported? Do you have a case that you would love to be reported? Let me know, please: mailto: [email protected]. Thanks
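    As a rough, parser-free illustration of the same idea (not the ScriptDom-based analysis the post describes), the sketch below walks a folder of .sql files and flags two of the simpler smells with regular expressions, writing offsets and descriptions to an XML file. The folder name, patterns and output file are assumptions for the sketch.

        # Naive, regex-based smell scan over .sql files; a simplified stand-in for
        # the ScriptDom-based approach above, useful only as a rough illustration.
        import re
        import xml.etree.ElementTree as ET
        from pathlib import Path

        SMELLS = {
            "Use of NOLOCK": re.compile(r"\bNOLOCK\b", re.IGNORECASE),
            "Use of SELECT *": re.compile(r"\bSELECT\s+\*", re.IGNORECASE),
        }

        root = ET.Element("Smells")
        for sql_file in Path("scripts").rglob("*.sql"):   # assumed working folder
            text = sql_file.read_text(errors="ignore")
            for description, pattern in SMELLS.items():
                for match in pattern.finditer(text):
                    ET.SubElement(root, "Smell", file=str(sql_file),
                                  offset=str(match.start()), description=description)

        ET.ElementTree(root).write("Smells.xml")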

    Read the article

  • BI&EPM in Focus November 2013

    - by Mike.Hallett(at)Oracle-BI&EPM
    IBM is Embracing Oracle Exalytics: The Velocity of Thought and Action (link)
    Customers
        Ambulance Victoria, Australia, uses analytics and modelling to serve the expanding needs of a growing population (link)
        Cablemás Selects Oracle to Speed Customer Data Insights (link)
        National Instruments Introduces New Business Intelligence Solutions—Runs Reports up to 30x Faster, and Expands Customer Insight (link)
        FLSmidth Ensures Precise, Transparent Financial Reporting at All Business Levels, Reduces Financial Consolidation Time by up to 40% (link)
    Enterprise Performance Management
        Partner Edgewater Ranzal Webinar Series:
            Mitigate Your Biggest EPM Project Risk - Thursday, 21st November - Register here: 4.00 GMT
            Capital Planning in the Energy Industry - Tuesday, 26th November - Register here: 4.00 GMT
            Driving Value in the Retail Industry Using Hyperion Strategic Finance (HSF) - Tuesday, 10th December - Register here: 7.00 GMT
        Dec 11, Look Smarter Selling Hyperion Profitability & Cost Management (HPCM) Webcast (link)
        EPM System Infrastructure Tips & Tricks
        Support: November EPM Patch Set Updates released
            Business Analytics Monthly Index - October 2013
            Hyperion Smart View Assistance with OBIEE 11.1.1.7
            Hyperion Disclosure Management 11.1.2.3.330 PSU 17444967 [Doc ID 1592645.1]
            Hyperion Financial Close Management (FCM) 11.1.2.3.100 PSU 16989110 [Doc ID 1592644.1]
    Business Intelligence
        BI-Apps Whitepaper: Packaged Analytic Applications: Accelerating Time and Value, by Wayne Eckerson (link)
        BI Apps Blog: A Closer Look at Oracle Price Analytics (link)
        Blog: Taking Your Business Scorecard Golfing (link)
        Blog: Practical Uses of Business Scorecards, from Company-Wide to Process Specific (link)
        Nov 19, Big Data at Work Series: How Delphi Harnesses Big Data to Improve Warranty Response & Customer Satisfaction (link)
        Rittman Mead Blog: Oracle BI Apps 11.1.1.7.1 – GoldenGate Integration
        Support: OBIEE Suite Bundle Patches (understand OBIEE naming convention) [Doc ID 1591422.1]
        Support Blog: Java update alert: Essbase Administration Services (EAS) 11.1.2.3 (link)
        Support Blog: OBIEE 11.1.1.7.131017 now available (link)

    Read the article

  • How do you automate a SharePoint 2010 deployment?

    - by Enrique Lima
    In the last couple of months, SharePoint traffic (consulting, training and speaking) has picked up, and with it the requests for deployments. There are good, great, bad and really bad things around this, but that is for another topic. However, part of the good and great has been the fact that organizations want to do a proof-of-concept deployment (even when WSS or MOSS has been deployed). We can go through a session (Microsoft has the SDPS concept, SharePoint Deployment Planning Services) of discovering what the customer wants to achieve from their investment in the platform and then proceed to model the solution that would fit their needs. But it should not stop there. The next step should be a POC (as many have requested) to test it out. Now, on to the meat of this post. How do I deploy? While it is a good process to watch and see all of it take place, not many have the time to sit through that, even more so when that has been part of the description of deploying the platform in the sessions mentioned above. I will, though, break it into a deployment for development purposes and a deployment of a farm: two tools (or scripts) for those two different types of deployment. First, let me address the development environment. Around the last week of October, Chris Johnson (SharePoint Product Team) announced a SharePoint Easy Setup for Developers. The kit itself will assist you in installing SharePoint Server (in standalone mode), the tools that go around Visual Studio, Expression Studio and the Office 2010 tools. Here is the link to Chris’ post: http://blogs.msdn.com/b/cjohnson/archive/2010/10/28/announcing-sharepoint-easy-setup-for-developers.aspx The other scenario is the use of a script to assist you through the deployment of a farm. Now, this is not to override planning; it should highlight the need for planning even more. How? By having your service accounts, the structure of the sites and the scale of your deployment planned. Enter AutoSPInstaller. This is a CodePlex project, and the intent behind it is not only to automate the installation but to give some meaning to, and get some sense out of, what goes on during a SharePoint deployment. How? Take, for example, the creation of the databases: when we do the initial OOB deployment using the wizard, more often than not we leave the names as they are. How is that a "bad thing"? Let's make it a better practice to rename those databases, and have them take on a name that is not "GUID-ized". Having a better naming convention will not hurt; on the contrary, it will allow for consistency. Here is the link to AutoSPInstaller's site on CodePlex: http://autospinstaller.codeplex.com/

    Read the article

  • Collaborative work (small team) - Best practices

    - by LEM01
    I'm currently working in a very small team of programmers (2-3) and I'm looking for advice/best practices on how to organise our work. We're all working on the same application using PHP. Today we're all kind of working in our own way. Today's situation:
        A list of items that have to be worked on by each dev, one per week. What has to be done is defined at a high functional level (e.g. build the search engine for this product...).
        Commit/merge our individual branches (git) every week before the next meeting.
        No real dev rules, no code review.
        No tests written (ouch).
    Problems faced:
        Code quality issues: discovering someone else's code is sometimes tough (inline, variable/function/class names, spaces, comments...).
        Changes in already existing classes (impact on someone else's work).
        Responsibility of each dev unclear: after getting someone else's code and discovering something messy, should I make the change? Should he make the change? How to plan those things...
    What I'm looking for: Basically I'm looking into structuring the way we develop things in order to avoid frustration and improve overall quality.
        How do we define coding standards (naming conventions, code rules...)?
        Do you use any validation script to make sure code is valid before committing? (see the sketch below)
        Do you think that defining an architect role in the team is needed? Someone who would actually define what has to be developed during the next phase, by defining interfaces or class descriptions that have to be written. (Does it make sense in such a small team?)
    Today we're losing time understanding what others did or tried to do, and we're also losing time in discussions like "You should have done it that way! Why is this class doing that and not that? Shouldn't we have an embedded class rather than this set of data?". I'm looking into a work process, maybe with more defined responsibilities, in order to improve our performance. If you have experience, advice, best practices or anything to share that we could benefit from, it will be much appreciated! Thanks a lot for your time!
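    On the validation-script question above, here is a minimal sketch of a pre-commit hook: it rejects the commit if any staged .php file fails the php -l syntax check. It assumes git and the php CLI are on the PATH and that the script is saved as an executable .git/hooks/pre-commit; everything here is illustrative rather than a prescribed setup.

        #!/usr/bin/env python3
        # Minimal pre-commit sketch: block the commit if any staged .php file
        # fails `php -l` (syntax check). Assumes git and the php CLI are available.
        import subprocess
        import sys

        # Files staged for this commit (added, copied or modified).
        staged = subprocess.run(
            ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
            capture_output=True, text=True, check=True,
        ).stdout.splitlines()

        failed = False
        for path in staged:
            if not path.endswith(".php"):
                continue
            result = subprocess.run(["php", "-l", path], capture_output=True, text=True)
            if result.returncode != 0:
                print(result.stdout + result.stderr, file=sys.stderr)
                failed = True

        if failed:
            print("Commit aborted: fix the syntax errors above first.", file=sys.stderr)
            sys.exit(1)

    A hook like this will not settle style debates on its own, but combined with an agreed coding standard (for example a PSR-style rule set enforced by a formatter) it catches the mechanical issues before they ever reach a human reviewer.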

    Read the article

  • Down to the Wire - Yet More Solaris Things to See at OpenWorld (and JavaOne!)

    - by Larry Wake
    San Francisco is bracing for the annual invasion. The airport's jammed, the tweets are flying, and the numbers are crazy: more than 50,000 attendees and 2,500+ sessions, taking over Moscone Convention Center, two streets, Union Square, and seemingly every hotel in town (98,000 hotel room nights). So yeah, it's busy. And it's not just OpenWorld--we've also got JavaOne, MySQL Connect, and four other sub-events going on as well. Speaking of JavaOne, you can find Solaris-related activity there, too -- I've highlighted one hands-on lab below. Here's a last pre-event roundup of activities for consideration; enjoy the show(s)! (Remember, Schedule Builder is your friend; use it with the session numbers below to register.)
    Monday, October 1st:
        3:15 PM - General Session: Accelerate Your Business with the Oracle Hardware Advantage (GEN9691, Moscone North Hall D). John Fowler, head of Oracle's Systems organization, will talk about Oracle hardware technology and how it's co-engineered with other key technologies, including Oracle Solaris.
    Tuesday, October 2nd:
        10:15 AM - Building an IaaS Platform with SPARC, Oracle Solaris 11, and Oracle VM Server for SPARC (CON4431, Moscone South 270). Get the bird's-eye lowdown (whatever that means) on how U.S. Cellular built its Infrastructure as a Service (IaaS) cloud delivery platform with Oracle's SPARC T4 servers, Oracle Solaris 11, Oracle Solaris Cluster 4, and Oracle VM Server for SPARC. The session covers the high-level design, business case made, implementation details, and lessons learned.
        11:45 AM - Oracle Solaris 11 Panel: Insights and Directions from Oracle Solaris Core Engineering (CON8790, Moscone South 252). This has been one of the livelier Solaris-related sessions in years past (and I'm not saying that just because I get to moderate it this year). A panel of core engineers responsible for a wide range of key Solaris technologies will talk about some of the interesting work they've been doing -- but mostly we keep time open for the panel to take questions from attendees, because that's the fun part.
    Wednesday, October 3rd:
        10:00 AM - Tracing Your Java Application Tuning on Oracle Solaris with DTrace (HOL10214, Hilton San Francisco, Franciscan A/B/C/D). This JavaOne hands-on lab will show how to use the DTrace framework to dynamically trace your Java applications on Oracle Solaris and uncover new tuning opportunities.
    Thursday, October 4th:
        12:45 PM - Oracle Solaris 11: Optimized for Oracle Database, Oracle WebLogic Server, and Java (CON8800, Moscone South 252). Explore how Oracle Solaris 11 has been built to be the best platform for the cloud and enterprise applications, with built-in optimizations to improve performance and deliver unique functionality with Oracle Database, Oracle WebLogic Server, and Java.

    Read the article

< Previous Page | 22 23 24 25 26 27 28 29 30 31 32 33  | Next Page >