Search Results

Search found 11226 results on 450 pages for 'reverse thinking'.


  • Organising levels / rooms in a MUD-style text based world

    - by Polynomial
    I'm thinking of writing a small text-based adventure game, but I'm not particularly sure how I should design the world from a technical standpoint. My first thought is to do it in XML, designed something like the following. Apologies for the huge pile of XML, but I felt it important to fully explain what I'm doing.

    <level>
      <start>
        <!-- start in kitchen with empty inventory -->
        <room>Kitchen</room>
        <inventory></inventory>
      </start>
      <rooms>
        <room>
          <name>Kitchen</name>
          <description>A small kitchen that looks like it hasn't been used in a while. It has a table in the middle, and there are some cupboards. There is a door to the north, which leads to the garden.</description>
          <!-- IDs of the objects the room contains -->
          <objects>
            <object>Cupboards</object>
            <object>Knife</object>
            <object>Batteries</object>
          </objects>
        </room>
        <room>
          <name>Garden</name>
          <description>The garden is wild and full of prickly bushes. To the north there is a path, which leads into the trees. To the south there is a house.</description>
          <objects>
          </objects>
        </room>
        <room>
          <name>Woods</name>
          <description>The woods are quite dark, with little light bleeding in from the garden. It is eerily quiet.</description>
          <objects>
            <object>Trees01</object>
          </objects>
        </room>
      </rooms>
      <doors>
        <!-- a door isn't necessarily a door. each door has a type, i.e. "There is a <type> leading to..."
             from and to are references to the rooms that this door joins.
             direction specifies the direction (N,S,E,W,Up,Down) from <from> to <to> -->
        <door>
          <type>door</type>
          <direction>N</direction>
          <from>Kitchen</from>
          <to>Garden</to>
        </door>
        <door>
          <type>path</type>
          <direction>N</direction>
          <from>Garden</from>
          <to>Woods</to>
        </door>
      </doors>
      <variables>
        <!-- variables set by actions -->
        <variable name="cupboard_open">0</variable>
      </variables>
      <objects>
        <!-- definitions for objects -->
        <object>
          <name>Trees01</name>
          <displayName>Trees</displayName>
          <actions>
            <!-- any actions not defined will show the default failure message -->
            <action>
              <command>EXAMINE</command>
              <message>The trees are tall and thick. There aren't any low branches, so it'd be difficult to climb them.</message>
            </action>
          </actions>
        </object>
        <object>
          <name>Cupboards</name>
          <displayName>Cupboards</displayName>
          <actions>
            <action>
              <!-- requirements make the command only work when they are met -->
              <requirements>
                <!-- equivalent of "if(cupboard_open == 1)" -->
                <require operation="equal" value="1">cupboard_open</require>
              </requirements>
              <command>EXAMINE</command>
              <!-- failMessage is the message displayed when the requirements aren't met -->
              <failMessage>The cupboard is closed.</failMessage>
              <message>The cupboard contains some batteries.</message>
            </action>
            <action>
              <requirements>
                <require operation="equal" value="0">cupboard_open</require>
              </requirements>
              <command>OPEN</command>
              <failMessage>The cupboard is already open.</failMessage>
              <message>You open the cupboard. It contains some batteries.</message>
              <!-- assigns is a list of operations performed on variables when the action succeeds -->
              <assigns>
                <assign operation="set" value="1">cupboard_open</assign>
              </assigns>
            </action>
            <action>
              <requirements>
                <require operation="equal" value="1">cupboard_open</require>
              </requirements>
              <command>CLOSE</command>
              <failMessage>The cupboard is already closed.</failMessage>
              <message>You close the cupboard.</message>
              <assigns>
                <assign operation="set" value="0">cupboard_open</assign>
              </assigns>
            </action>
          </actions>
        </object>
        <object>
          <name>Batteries</name>
          <displayName>Batteries</displayName>
          <!-- by setting inventory to non-zero, we can put it in our bag -->
          <inventory>1</inventory>
          <actions>
            <action>
              <requirements>
                <require operation="equal" value="1">cupboard_open</require>
              </requirements>
              <command>GET</command>
              <!-- failMessage isn't required here, it'll just show the usual "You can't see any <blank>." message -->
              <message>You picked up the batteries.</message>
            </action>
          </actions>
        </object>
      </objects>
    </level>

    Obviously there'd need to be more to it than this. Interaction with people and enemies, as well as death and completion, are necessary additions. Since the XML is quite difficult to work with, I'd probably create some sort of world editor. I'd like to know if this method has any pitfalls, and if there's a "better" or more standard way of doing it.
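
    To make the format concrete, here is a minimal sketch of how such a file could be loaded and validated in C# (the Room and WorldLoader types are invented for this example, not part of the question). One nice property of keeping the world in data like this is that the loader can fail fast on errors a hand-edited file makes easy, such as doors that reference rooms which were never defined:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Xml.Linq;

    // Hypothetical in-memory representation of a room.
    class Room
    {
        public string Name;
        public string Description;
        public List<string> Objects = new List<string>();
        public Dictionary<string, string> Exits = new Dictionary<string, string>(); // direction -> room name
    }

    class WorldLoader
    {
        public static Dictionary<string, Room> Load(string path)
        {
            var level = XDocument.Load(path).Root;

            // Index rooms by name so doors can be resolved to references.
            var rooms = level.Element("rooms").Elements("room")
                .Select(r => new Room
                {
                    Name = (string)r.Element("name"),
                    Description = (string)r.Element("description"),
                    Objects = r.Element("objects").Elements("object")
                               .Select(o => (string)o).ToList()
                })
                .ToDictionary(r => r.Name);

            // Wire up doors, rejecting dangling references.
            foreach (var d in level.Element("doors").Elements("door"))
            {
                string from = (string)d.Element("from");
                string to = (string)d.Element("to");
                string dir = (string)d.Element("direction");
                if (!rooms.ContainsKey(from) || !rooms.ContainsKey(to))
                    throw new InvalidOperationException(
                        "Door references undefined room: " + from + " -> " + to);
                rooms[from].Exits[dir] = to;
            }
            return rooms;
        }
    }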

    Read the article

  • Managed Service Architectures Part I

    - by barryoreilly
    Instead of thinking about service oriented architecture, a concept that is continually defined, redefined, abused and mistreated, perhaps it is time to drop the acronym and consider what we actually need to get the job done.

    'Pure' SOA involves the modeling of an organisation's processes, the so-called 'Top Down' approach, followed by the implementation of these processes as services. Another approach, more commonly seen in the wild, is the bottom-up approach. This usually involves services that simply start popping up in the organization, and SOA in this case is often just an attempt to rein in these services. Such projects, although described as SOA projects for a variety of reasons, clearly have little relation to process-driven architecture. Much has been written about these two approaches, with many deciding that a hybrid of both methods is needed to succeed with SOA.

    These hybrid methods are a sensible compromise, but one gets the feeling that there is too much focus on 'Succeeding with SOA'. Organisations that focus too much on bottom-up development, or that waste too much time and money on top-down approaches that don't produce results, are often recommended to attempt an 'agile' (Erl) or 'middle-out' (Microsoft) approach in order to succeed with SOA. The problem with recommending this approach is that, in most cases, succeeding with SOA isn't the aim of the project. If a project is started with the simple aim of 'Succeeding with SOA' then the reasons for the project's existence probably need to be questioned.

    There are a number of things we can be sure of:

      • An organisation will have a number of disparate IT systems
      • Some of these systems will have redundant data and functionality
      • Integration will give considerable ROI
      • Integration will already be under way
      • Services will already exist in the organisation
      • These services will be inconsistent in their implementation and in their governance

    So there are three goals here:

      1. Alignment between the business and IT
      2. Integration of disparate systems
      3. Management of services

    2 and 3 are going to happen; in fact they must happen if any degree of return is expected from the IT department. Ignoring 1 is considered a typical mistake in SOA implementations, as it ignores the business implications. However, the business implication of this approach is the money saved through more efficient IT processes. 2 and 3 are ongoing, and they will continue happening even if a large project to produce a SOA metamodel is started. The result will then be an unstructured cackle of services, and a metamodel that is already going out of date. So we get stuck in and rebuild our services so that they match the metamodel, with all the far-reaching consequences that this has on our current LOB systems. Let's imagine that this actually works (how often do we rip and replace working software because it doesn't fit a certain pattern? Never; that's the point of integration): we will now be working with a metamodel that is out of date, and most likely incomplete if the organisation is large.

    Accepting that an object can have more than one model over time, with perhaps more than one model being valid at any given time, will help us realise the limitations of the top-down model. It is entirely normal, and perhaps necessary, for an organisation to be able to view an entity from different perspectives.
    So, instead of trying to constantly force these goals into a straight line, why not let them happen in parallel, and manage the changes in each layer?

    If company A has chosen to model their business processes and create a business architecture, there will be a reason behind this. Often the aim is to make the business more flexible and able to cope with change, through alignment between the business and the IT department. If company B's IT department recognizes the problem of wild services springing up everywhere, and decides to do something about it by designing a platform and processes for the introduction of services, is this not a valid approach?

    With the hybrid approach, it is recommended that company A begin deploying services as quickly as possible, based on models that are clearly incomplete and which will therefore change rapidly and often in the near future. Natural business evolution also means the models are guaranteed to change in the not-so-near future. To 'Succeed with SOA', company B needs to go back to the drawing board and start modeling processes and objects. So, in effect, we are telling business analysts to start developing code based on a model they are unsure of, and telling programmers to ignore the obvious and growing problems in their IT department and start drawing lines and boxes.

    Could the problem be that there are two different problem domains? And that the whole concept of SOA, as it is described by clever salespeople today, creates an example of the oft-dreaded 'tight coupling' between these two domains? Could it be that we have taken two large problem areas and bundled the solutions together in order to create a magic bullet? And then convinced ourselves that the bullet actually exists?

    Company A wants to have a closer relationship between the business and its IT department, in order to become a more flexible organization. Company B wants to decrease the maintenance costs of its IT infrastructure. If both companies focus on succeeding with SOA, then they aren't focusing on their actual goals. If company A starts building services from incomplete models, without a game plan, they will end up in the same situation as company B, with wild services. If company B focuses on modeling, they could easily end up with the same problems as company A. Now we have two companies, each of which a short while ago had one problem, and now has two. This has happened because of a focus on 'Succeeding with SOA' rather than on solving the problem at hand.

    This is not to suggest that the two problem domains are unrelated; a strategy that encompasses both will obviously be good for the organization. But only if the organization realizes this and can develop such a strategy. This strategy cannot be bought in a box.

    Anyone who has worked with SOA for a while will be used to analyzing the solutions to a problem and judging the solution's level of coupling. If we have two applications that each perform separate functions, but need to communicate with each other, we create an integration layer between them, perhaps with a service, but we do all we can to reduce the dependency between the two systems. Using the same approach, we can separate the modeling (business architecture) and the service hosting (technical architecture). The business architecture describes the processes and business objects in the business domain. The technical architecture describes the hosting, management and implementation of services.
    The glue that binds these together, the integration layer in our analogy, is the service contract, where the operations map the processes to their technical implementation, and the messages map business concepts to software objects in the implementation. If we reduce the coupling between these layers, we should be able to allow developers to develop services, and business analysts to develop models, without changes rippling through from one side to the other. This would allow company A to carry on modeling, and company B to develop a service platform, each achieving their intended goal, without necessarily creating the problems seen in pure top-down or bottom-up approaches. Company B could then at a later date map their service infrastructure to a unified model, and company A could carry on modeling, insulating deployed services from changes in the ongoing modeling.

    How do we do this? The concept of service virtualization has been around for a while, and is instantly realizable in Microsoft's Managed Services Engine. Here we can create a layer of virtual services, which represent the business analyst's view, presenting uniform contracts to the outside world. These services can then transform and route messages to the actual service implementations. I like to think of the virtual services with their beautifully modeled interfaces as 'SOA services', and the implementations as simple integration 'adapter' services providing an interface to a technical implementation. The Managed Services Engine also provides policy-based control over services, regardless of where they are deployed, simplifying the handling of security, logging, exception handling etc.

    This solves a big problem. The pressure to deliver services quickly is always there in projects. It is very important to quickly show value when implementing service architectures. There is also pressure to deliver quality, and you can't easily do both at the same time. This approach allows quick delivery, with quality increasing over time, allowing modeling and service development to occur in parallel and independently of each other. The link between business modeling and service implementation is not one that is obvious to many organizations, and requires a certain maturity to realize and drive forward. It is also completely possible that a company can benefit from one without the other; even if this approach is frowned upon today, there are many companies doing so and seeing ROI.

    Of course there are disadvantages to this, the biggest one being the transformations necessary between the virtual interfaces and the service implementations. Bad choices when developing the services in the implementation layer could mean that it is impossible to map the modeled processes to the implementation without redeveloping the service. In many cases the architect will not have a choice here anyway, as proprietary systems are often delivered with pre-developed services. The alternative is to wait until the model is finished and then build the services according to the model. However, if that approach worked we wouldn't be having this discussion! And even when it does work, natural business evolution means that the two concepts (model and implementation) will immediately start to drift away from each other, so coupling them tightly together, forever bound to the model that only applied at the time of the modeling work, will not really achieve a great deal. Architecture is all about trade-offs, and here a choice has to be made: between something that will initially be of low quality but will work, and something that may well be impossible to achieve in most situations.

    In conclusion, top-down is a natural approach for business analysts, and bottom-up is a natural approach for developers. Instead of trying to force something on both that neither wants, and which has not shown itself to be successful, why not let them get on with their jobs, and let an enterprise architect coordinate the processes?
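
    To make the "contract as glue" idea concrete, here is a minimal C# sketch. All names are invented for illustration, and it uses plain WCF-style contracts rather than the Managed Services Engine itself: a business-facing ("virtual") contract whose adapter implementation simply maps onto an existing technical system, so the virtual layer can route here today and somewhere else tomorrow without the business contract changing.

    using System.ServiceModel;

    namespace Example.Contracts
    {
        // Business-facing ("virtual") contract: operations and messages are
        // named after the business model, not after any particular system.
        [ServiceContract]
        public interface ICustomerService
        {
            [OperationContract]
            CustomerSummary GetCustomer(string customerNumber);
        }

        public class CustomerSummary
        {
            public string CustomerNumber { get; set; }
            public string Name { get; set; }
        }

        // Stub standing in for a proprietary system's client library (hypothetical).
        public class CrmAccount
        {
            public string AccountNo { get; set; }
            public string DisplayName { get; set; }
        }

        public class LegacyCrmClient
        {
            public CrmAccount FetchAccount(string accountNo)
            {
                return new CrmAccount { AccountNo = accountNo, DisplayName = "ACME Ltd" };
            }
        }

        // Adapter ("integration") service: maps the business contract onto a
        // technical implementation. Only this class knows about the legacy system.
        public class CrmCustomerAdapter : ICustomerService
        {
            private readonly LegacyCrmClient crm = new LegacyCrmClient();

            public CustomerSummary GetCustomer(string customerNumber)
            {
                var record = crm.FetchAccount(customerNumber);
                return new CustomerSummary
                {
                    CustomerNumber = record.AccountNo,
                    Name = record.DisplayName
                };
            }
        }
    }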

    Read the article

  • CodePlex Daily Summary for Tuesday, August 12, 2014

    CodePlex Daily Summary for Tuesday, August 12, 2014

    Popular Releases

      • AD4 Application Designer for flow based .NET applications: AD4.AppDesigner.23.26: AD4.Iteration.23.26 (Advanced Rendering Features). DesignAttribute to format wire caption of pin (Custom Position): RaiseAlarmFlow of AlarmClockSample.10 extended to test the new attribute. RenderWiresCaptions improved to handle the LabelPosition parameter. Next tutorial finished: Synchronizer Pattern (Version V6) (AD4.AlarmClockSample.V6.Sourcecode). ToDo: some tutorials are unfinished but coming soon ... Refactoring (I'm not satisfied with the caption rendering steps). Note: The gluing code of...
      • EWSEditor: EwsEditor 1.10 Release: Export and import of items as a full-fidelity stream works - without proxy classes! I used raw EWS POSTs. Turned off word wrap for the EWS request field in EWS POST windows. Several windows with scrolling text boxes were limiting content to 32k; I removed this restriction. Split server timezone info off to a separate menu item from the timezone info window, so that the timezone info window can be used without logging into a mailbox. Lots of updates to the TimeZone window. UserAgen...
      • Python Tools for Visual Studio: 2.1 RC: Release notes for PTVS 2.1 RC. We're pleased to announce the release candidate for Python Tools for Visual Studio 2.1. Python Tools for Visual Studio (PTVS) is an open-source plug-in for Visual Studio which supports programming with the Python language. PTVS supports a broad range of features including CPython/IronPython, editing, IntelliSense, interactive debugging, profiling, Microsoft Azure, IPython, and cross-platform debugging support. PTVS 2.1 RC is available for: Visual Studio Expre...
      • Aspose for Apache POI: Missing Features of Apache POI SS - v 1.2: The release contains the missing features in the Apache POI SS SDK in comparison with Aspose.Cells. What's new? New examples: Create Pivot Charts, Detect Merged Cells, Sort Data, Printing Workbooks. Feedback and suggestions: many more examples are available at Aspose Docs. Raise your queries and suggest more examples via Aspose Forums or via this social coding site.
      • Touchmote: Touchmote 1.0 beta 13: Changes: less GPU usage; works together with other Xbox 360 controls; bug fixes.
      • Public Key Infrastructure PowerShell module: PowerShell PKI Module v3.0: Important: I would like to hear more about what you are thinking about the project. I appreciate that you like it (2000 downloads over the past 6 months), but maybe you have something to say? What do you dislike in the module? Maybe you would love to see some new functionality? Tell us what you think! Installation guide: use the default installation path to install this module for the current user only. To install this module for all users, enable the "Install for all users" check-box in the installation UI ...
      • Modern UI for WPF: Modern UI 1.0.6: The ModernUI assembly, including a demo app demonstrating the various features of Modern UI for WPF. BREAKING CHANGE: LinkGroup.GroupName renamed to GroupKey. NEW FEATURES: improved rendering on high-DPI screens, including support for per-monitor DPI awareness available in Windows 8.1 (see also Per-monitor DPI awareness); new ModernProgressRing control with 8 built-in styles; new LinkCommands.NavigateLink routed command; new Visual Studio project templates 'Modern UI WPF App' and 'Modern UI W...
      • ClosedXML - The easy way to OpenXML: ClosedXML 0.74.0: Multiple thread-safe improvements, including AdjustToContents, XLHelper, XLColor_Static, IntergerExtensions.ToStringLookup. An exception is now thrown when saving a workbook with no sheets, instead of creating a corrupt workbook. Fix for hyperlinks with non-ASCII characters. Added basic workbook protection. Fix for error thrown when a spreadsheet contained comments and images. Fix to Trim function. Fix for invalid operation exception thrown when the formula functions MAX, MIN, and AVG referenc...
      • SEToolbox: SEToolbox 01.042.019 Release 1: Added RadioAntenna broadcast name to ship name detail. Added two additional columns for asteroid material generation for Asteroid Fields. Added Mass and Block number columns to main display. Added ellipsis to some columns on main display to reduce name confusion. Added correct SE version number in file when saving. Re-added reattaching Motor when drag/dropping or importing ships (KeenSH have added RotorEntityId back in after removing it months ago). Added option to export and r...
      • jQuery List DragSort: jQuery List DragSort 0.5.2: Fixed scrollContainer, removing deprecated use of $.browser, so should now work with the latest version of jQuery. Added the ability to return false in dragEnd to revert the sort order. Project changes: added NuGet package for dragsort (https://www.nuget.org/packages/dragsort); converted repository from SVN to Mercurial.
      • Braintree Client Library: Braintree 2.32.0: Allow credit card verification options to be passed outside of the nonce for PaymentMethod.create. Allow billingaddress parameters and billingaddress_id to be passed outside of the nonce for PaymentMethod.create. Add Subscriptions to PayPal accounts. Add PaymentMethod.update. Add failonduplicatepaymentmethod option to PaymentMethod.create. Add support for dispute webhooks.
      • The Mario Kart 8 App: V1.0.2.1: First CodePlex release. WINDOWS INSTALLER ONLY.
      • Aspose Java for Docx4j: Aspose.Words vs Docx4j - v 1.0: The release contains the code comparison for features in the Docx4j SDK and Aspose.Words. What's new? New examples: Accessing Document Properties, Add Bookmarks, Convert to Formats, Delete Bookmarks, Working with Comments. Feedback and suggestions: many more examples are available at Aspose Docs. Raise your queries and suggest more examples via Aspose Forums or via this social coding site.
      • File System Security PowerShell Module: NTFSSecurity 2.4.1: Add-Access and Remove-Access now take multiple accounts.
      • YourSqlDba: YourSqlDba 5.2.1: This version improves the alert message that comes a while after script installation to check for a newer version; it now says to get it from YourSqlDba.CodePlex.com. If you don't want to update now, just re-run the script from your installed version. To get info on the actual version running, just run the stored procedure YourSqlDba.install.PrintVersionInfo. You can go to source code / history and click on changeset 72957 to see the changes in the script.
      • Manipulator: Manipulator: manipulator.
      • XNB filetype plugin for Paint.NET: Paint.NET XNB plugin v0.4.0.0: CHANGELOG: Reverted old incomplete changes. Updated library for compatibility with Paint.NET 4. Updated project to .NET 4.5. Updated version to 0.4.0.0. INSTALLATION INSTRUCTIONS: extract the ZIP file to your Paint.NET\FileTypes folder.
      • EdiFabric: Release 4.1: Changed MessageContext.
      • Wix# (WixSharp) - managed interface for WiX: Release 1.0.0.0: Release 1.0.0.0. Custom UI, Custom MSI Dialog, Custom CLR Dialog, External UI.
      • Math.NET Numerics: Math.NET Numerics v3.2.0: Linear Algebra: Vector.Map2 (map2 in F#), storage-optimized. Linear Algebra: fix RemoveColumn/Row early index bound check (was not strict enough). Statistics: Entropy ~Jeff Mastry. Interpolation: use Array.BinarySearch instead of local implementation ~Candy Chiu. Resources: fix a corrupted exception message string. Portable Build: support .Net 4.0 as well by using profile 328 instead of 344. .Net 3.5: F# extensions now support .Net 3.5 as well. .Net 3.5: NuGet package now contains pro...

    New Projects

      • All Pony Radio - The next generation: This is a third-party mobile application for accessing the Ponyville Live! radio streams. That's all I can think of right now. More later!
      • AngularGo (SPA Project Template): AngularGo is a Visual Studio 2013 web project template for creating SPA web projects by integrating ASP.NET MVC + WebApi and the AngularJS framework.
      • BasketballRoster app: The BasketballRoster app from the Head First C# book.
      • Code Bank: Code Bank.
      • DeepSearch: Tool to search for text in multiple files; also searches inside archives recursively.
      • IIS AutoDeploy Tool: Tool that automates the deployment of IIS sites on machines that are inaccessible to MSBUILD, TFS etc. Includes web.config diff, dependency deploy and much more.
      • IsaachomeEn: Information about my website.
      • MWO User Code: User code for working with data for the game MechWarrior: Online.
      • NFC First Steps: Work in progress!
      • OpenWebERP.NET: Open Web ERP project created for users to use web-based open source ERP.
      • Perrypheral Framework : M-V-VM ++: General-purpose C# and WPF / M-V-VM class libraries. IOC inside!
      • Sem.BrickPi: C# library to interact with the BrickPi hardware module from Mono.
      • Sharepoint geolocation field: SharePoint geolocation field.
      • SSRS Deployment Center: The SSRS Deployment Center was created to ease SSRS report and data source deployments.
      • Xaml Development Project's Repository: A collection of all the source code from the various courses at xamldevelopment.blogspot.com.

    Read the article

  • Make Your Menu Item Highlighted

    - by Shaun
    When I was working on the TalentOn project (a promotion on MSDN China) I was asked to implement functionality that highlights the top menu item when the page currently being viewed belongs to that section. I think this is a common scenario in web application development.

    Simple Example

    When thinking about a solution for highlighted menu items, the biggest problem is how to define the sections (menu items) and the pages that belong to them, rather than how to render the highlight. With the ASP.NET MVC framework we can use the controller - action infrastructure to achieve this. Normally each controller has a related menu item on the master page. That menu item should be highlighted if any of the views under its controller are being shown. Some specific menu items should be highlighted only when a particular action is invoked, for example the home page, the about page, etc. The check rule can be specified on demand. For example, I can define that the LogOn and Register actions of the Account controller should make the Account menu item highlighted, while ChangePassword should make the Profile menu item highlighted.

    I'm going to use an HtmlHelper extension to render the highlight-able menu item. The key point is that I need to pass in a predicate that checks whether the current view belongs to this menu item, which determines whether the item should be highlighted or not. Hence I need a delegate as a parameter. The simplest code would be like this:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Web;
    using System.Web.Mvc;
    using System.Web.Mvc.Html;

    namespace ShaunXu.Blogs.HighlighMenuItem
    {
        public static class HighlightMenuItemHelper
        {
            public static MvcHtmlString HighlightMenuItem(this HtmlHelper helper,
                string text, string controllerName, string actionName, object routeData, object htmlAttributes,
                string highlightText, object highlightHtmlAttributes,
                Func<HtmlHelper, bool> highlightPredicate)
            {
                var shouldHighlight = highlightPredicate.Invoke(helper);
                if (shouldHighlight)
                {
                    return helper.ActionLink(
                        string.IsNullOrWhiteSpace(highlightText) ? text : highlightText,
                        actionName, controllerName, routeData,
                        highlightHtmlAttributes == null ? htmlAttributes : highlightHtmlAttributes);
                }
                else
                {
                    return helper.ActionLink(text, actionName, controllerName, routeData, htmlAttributes);
                }
            }
        }
    }

    There are 3 groups of parameters: the first group is the same as the built-in ActionLink method parameters. It has the link text, controller name and action name, etc. passed in so that I can render a valid link for the menu item. The second group focuses on the highlighted link text and HTML attributes; I use them to render the highlighted menu item. The third group, which contains one parameter, is a predicate that tells me whether this menu item should be highlighted or not, based on the user's definition.

    And then I changed the master page of the sample MVC application. I let the Home and About menus be highlighted only when the Index and About actions are invoked, and I added a new menu named Account which should be highlighted for all actions/views under the Account controller. So my master page would be like this:
1: <div id="menucontainer"> 2:  3: <ul id="menu"> 4: <li><% 1: : Html.HighlightMenuItem( 2: "Home", "Home", "Index", null, null, 3: "[Home]", null, 4: helper => helper.ViewContext.RouteData.Values["controller"].ToString() == "Home" 5: && helper.ViewContext.RouteData.Values["action"].ToString() == "Index")%></li> 5:  6: <li><% 1: : Html.HighlightMenuItem( 2: "About", "Home", "About", null, null, 3: "[About]", null, 4: helper => helper.ViewContext.RouteData.Values["controller"].ToString() == "Home" 5: && helper.ViewContext.RouteData.Values["action"].ToString() == "About")%></li> 7:  8: <li><% 1: : Html.HighlightMenuItem( 2: "Account", "Account", "LogOn", null, null, 3: "[Account]", null, 4: helper => helper.ViewContext.RouteData.Values["controller"].ToString() == "Account")%></li> 9: 10: </ul> 11:  12: </div> Note: You need to add the import section for the namespace “ShaunXu.Blogs.HighlighMenuItem” to make the extension method I created below available. So let’s see the result. When the home page was shown the Home menu was highlighted since at this moment it was controller = Home and action = Index. And if I clicked the About menu you can see it turned highlighted as now the action was About. And if I navigated to the register page the Account menu was highlighted since it should be like that when any actions under the Account controller was invoked.   Fluently Language Till now it’s a fully example for the highlight menu item but not perfect yet. Since the most common scenario would be: highlighted when the action invoked, or highlighted when any action was invoked under this controller, we can created 2 shortcut method so for them so that normally the developer will be no need to specify the delegation. Another place we can improve would be, to make the method more user-friendly, or I should say developer-friendly. As you can see when we want to add a highlight menu item we need to specify 8 parameters and we need to remember what they mean. In fact we can make the method more “fluently” so that the developer can have the hints when using it by the Visual Studio IntelliSense. Below is the full code for it. 
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Web;
    using System.Web.Mvc;
    using System.Web.Mvc.Html;

    namespace Ethos.Xrm.HR
    {
        #region Helper

        public static class HighlightActionMenuHelper
        {
            public static IHighlightActionMenuProviderAfterCreated HighlightActionMenu(this HtmlHelper helper)
            {
                return new HighlightActionMenuProvider(helper);
            }
        }

        #endregion

        #region Interfaces

        public interface IHighlightActionMenuProviderAfterCreated
        {
            IHighlightActionMenuProviderAfterOn On(string actionName, string controllerName);
        }

        public interface IHighlightActionMenuProviderAfterOn
        {
            IHighlightActionMenuProviderAfterWith With(string text, object routeData, object htmlAttributes);
        }

        public interface IHighlightActionMenuProviderAfterWith
        {
            IHighlightActionMenuProviderAfterHighlightWhen HighlightWhen(Func<HtmlHelper, bool> predicate);
            IHighlightActionMenuProviderAfterHighlightWhen HighlightWhenControllerMatch();
            IHighlightActionMenuProviderAfterHighlightWhen HighlightWhenControllerAndActionMatch();
        }

        public interface IHighlightActionMenuProviderAfterHighlightWhen
        {
            IHighlightActionMenuProviderAfterApplyHighlightStyle ApplyHighlighStyle(object highlightHtmlAttributes, string highlightText);
            IHighlightActionMenuProviderAfterApplyHighlightStyle ApplyHighlighStyle(object highlightHtmlAttributes);
            IHighlightActionMenuProviderAfterApplyHighlightStyle ApplyHighlighStyle(string cssClass, string highlightText);
            IHighlightActionMenuProviderAfterApplyHighlightStyle ApplyHighlighStyle(string cssClass);
        }

        public interface IHighlightActionMenuProviderAfterApplyHighlightStyle
        {
            MvcHtmlString ToActionLink();
        }

        #endregion

        public class HighlightActionMenuProvider :
            IHighlightActionMenuProviderAfterCreated,
            IHighlightActionMenuProviderAfterOn, IHighlightActionMenuProviderAfterWith,
            IHighlightActionMenuProviderAfterHighlightWhen, IHighlightActionMenuProviderAfterApplyHighlightStyle
        {
            private HtmlHelper _helper;

            private string _controllerName;
            private string _actionName;
            private string _text;
            private object _routeData;
            private object _htmlAttributes;

            private Func<HtmlHelper, bool> _highlightPredicate;

            private string _highlightText;
            private object _highlightHtmlAttributes;

            public HighlightActionMenuProvider(HtmlHelper helper)
            {
                _helper = helper;
            }

            public IHighlightActionMenuProviderAfterOn On(string actionName, string controllerName)
            {
                _actionName = actionName;
                _controllerName = controllerName;
                return this;
            }

            public IHighlightActionMenuProviderAfterWith With(string text, object routeData, object htmlAttributes)
            {
                _text = text;
                _routeData = routeData;
                _htmlAttributes = htmlAttributes;
                return this;
            }

            public IHighlightActionMenuProviderAfterHighlightWhen HighlightWhen(Func<HtmlHelper, bool> predicate)
            {
                _highlightPredicate = predicate;
                return this;
            }

            public IHighlightActionMenuProviderAfterHighlightWhen HighlightWhenControllerMatch()
            {
                return HighlightWhen((helper) =>
                {
                    return helper.ViewContext.RouteData.Values["controller"].ToString().ToLower() == _controllerName.ToLower();
                });
            }

            public IHighlightActionMenuProviderAfterHighlightWhen HighlightWhenControllerAndActionMatch()
            {
                return HighlightWhen((helper) =>
                {
                    return helper.ViewContext.RouteData.Values["controller"].ToString().ToLower() == _controllerName.ToLower() &&
                           helper.ViewContext.RouteData.Values["action"].ToString().ToLower() == _actionName.ToLower();
                });
            }

            public IHighlightActionMenuProviderAfterApplyHighlightStyle ApplyHighlighStyle(object highlightHtmlAttributes, string highlightText)
            {
                _highlightText = highlightText;
                _highlightHtmlAttributes = highlightHtmlAttributes;
                return this;
            }

            public IHighlightActionMenuProviderAfterApplyHighlightStyle ApplyHighlighStyle(object highlightHtmlAttributes)
            {
                return ApplyHighlighStyle(highlightHtmlAttributes, _text);
            }

            public IHighlightActionMenuProviderAfterApplyHighlightStyle ApplyHighlighStyle(string cssClass, string highlightText)
            {
                return ApplyHighlighStyle(new { @class = cssClass }, highlightText);
            }

            public IHighlightActionMenuProviderAfterApplyHighlightStyle ApplyHighlighStyle(string cssClass)
            {
                return ApplyHighlighStyle(new { @class = cssClass }, _text);
            }

            public MvcHtmlString ToActionLink()
            {
                if (_highlightPredicate.Invoke(_helper))
                {
                    // should be highlighted
                    return _helper.ActionLink(_highlightText, _actionName, _controllerName, _routeData, _highlightHtmlAttributes);
                }
                else
                {
                    // should not be highlighted
                    return _helper.ActionLink(_text, _actionName, _controllerName, _routeData, _htmlAttributes);
                }
            }
        }
    }

    So on the master page, when I need a highlighted menu item I can "tell" the helper how it should behave, just like this:

    <li>
        <%: Html.HighlightActionMenu()
                .On("Index", "Home")
                .With(SiteMasterStrings.Home, null, null)
                .HighlightWhenControllerMatch()
                .ApplyHighlighStyle(new { style = "background:url(../../Content/Images/topmenu_bg.gif) repeat-x;text-decoration:none;color:#feffff;" })
                .ToActionLink() %>
    </li>

    While I'm typing the code, IntelliSense advises me that I need a highlight action menu, on the Index action of the Home controller, with "Home" as its link text and no additional route data or HTML attributes; it should be highlighted when the controller is "Home", and if it's highlighted the style should be like this; finally, render it as an action link. This is something we call a "fluent language". If you have been using Moq you will know this style is very developer-friendly, self-documenting and easy to read.

    Summary

    In this post I demonstrated how to implement a highlighted menu item in ASP.NET MVC by using its controller - action infrastructure. We can see that ASP.NET MVC helps us organize our web applications better. I also said a little more about the "fluent language" style and showed how it can make our code better and easier to use.

    Hope this helps,
    Shaun

    All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
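
    As promised above, here is a minimal sketch of the two shortcut methods for the common cases, built on top of the first HighlightMenuItem helper, so the caller never writes a predicate by hand. The method names are my own, invented for illustration:

    using System;
    using System.Web.Mvc;

    namespace ShaunXu.Blogs.HighlighMenuItem
    {
        public static class HighlightMenuItemShortcuts
        {
            // Highlight only when both the controller and the action of the current view match.
            public static MvcHtmlString HighlightMenuItemForAction(this HtmlHelper helper,
                string text, string controllerName, string actionName, string highlightText)
            {
                return helper.HighlightMenuItem(text, controllerName, actionName, null, null,
                    highlightText, null,
                    h => h.ViewContext.RouteData.Values["controller"].ToString() == controllerName
                      && h.ViewContext.RouteData.Values["action"].ToString() == actionName);
            }

            // Highlight when any action of the given controller is invoked.
            public static MvcHtmlString HighlightMenuItemForController(this HtmlHelper helper,
                string text, string controllerName, string actionName, string highlightText)
            {
                return helper.HighlightMenuItem(text, controllerName, actionName, null, null,
                    highlightText, null,
                    h => h.ViewContext.RouteData.Values["controller"].ToString() == controllerName);
            }
        }
    }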

    Read the article

  • OK - What now? How do we become a Social Business?

    - by Michael Snow
    We hope that those of you who attended yesterday's Webcast with Brian Solis enjoyed Brian's discussion with Christian Finn for our last Webcast of the season in the Oracle Social Business Thought Leaders Series. For those of you who may have missed the webcast or were stuck at a company holiday party, you'll be glad to hear that the webcast will be available On-Demand starting later today (12/14/12). And any of you who'd like to listen to a quick but informative podcast with Brian can listen to that here. Some of you may still be left with questions about how to get from point A to point B, and even more confused than when you started thinking about this new world of Digital Darwinism. The post below, grabbed from an abundance of great thought-leadership prose on Brian's blog, may help you frame the path you need to start walking, sooner rather than later, to stay off the endangered species list. As you explore your path forward, please keep Oracle in mind: we offer a wide range of solutions to help your organization optimize the engagement for your customers, employees and partners.

    The Path from a Social Brand to a Social Business
    Brian Solis
    Originally posted May 2, 2012

    I've been a long-time supporter of MediaTemple's (MT) Residence program along with Gary Vaynerchuk, Neil Patel, and many others whom I respect. I wanted to share my "7 questions to answer to become a social business" with you here. Social media is pervasive and is becoming the new normal in corporate marketing. Brands who get this right are starting to build their own media networks rich with customer connections numbering in the millions. Right now, Coca-Cola has over 34 million fans on Facebook, but they're hardly alone. Disney follows just behind with 29 million fans, Starbucks boasts 25 million, and Oreo, Red Bull, and Converse play host to over 20 million fans. If we were to look at other networks such as Twitter and YouTube, we would see a recurring theme. People are connecting en masse with the businesses they support, and new media represents the ability to cultivate consumer relationships in ways not possible with traditional earned or paid media. Sounds great, right? This might sound abrupt, but the truth is that we're hardly realizing the potential of what lies before us. Everything begins with understanding not just how other brands are marketing themselves in social media, but also seeing what they're not doing and envisioning what's possible. We're already approaching the first of many crossroads that new media will present. Do we take the path of a social brand or that of a social business? What's the difference? A social brand is just that, a business that is remodeling or retrofitting its existing marketing practices to new media.
    A social business is something altogether different, as it embraces introspection and extrospection to reevaluate internal and external processes, systems, and opportunities, to transform into a living, breathing entity that adapts to market conditions and opportunities. It's a tough decision to make right now, especially at a time when all we read about is how much success many businesses are finding without having to answer this very question. With all of the newfound success in social networks, the truth is that we're only just beginning to learn what's possible, and that's where you come in. When compared to the investment in time and resources across the board, social media represents only a small part of the mix. But with your help, that's all about to change. The CMO Survey, an organization that disseminates the opinions of top marketers in order to predict the future of markets, recently published a report that gave credence to the fact that social media is taking off. One of the most profound takeaways from the report was this gem: "The 'like button' [in Facebook] packs more customer-acquisition punch than other demand-generating activities." With insights like this, it's easy to see why the race to social is becoming heated. The report also highlighted exactly where social fits in the marketing mix today, and as you can see, despite all of the hype, it's not a dominant focus yet. As of August 2011, the percentage of overall marketing budgets dedicated to social media hovered at around 7%. However, in 2012 the investment in social media will climb to 10%. And, in five years, social media is expected to represent almost 18% of the total marketing budget. Think about that for a moment. In 2016, social media will only represent 18%? Cue the sound of a record scratching here. With businesses finding success in social networks, why are businesses failing to realize the true opportunity brought forth by the ability to listen to, connect with, and engage with customers? While there's value in earning views, driving traffic, and building connections through the 3F's (friends, fans and followers), success isn't defined simply by what really amounts to low-hanging fruit. The truth is that businesses cannot measure what it is they don't know to value. As a result, innovation in new engagement initiatives is stifled because we're applying dated or inflexible frameworks to new paradigms. Social media isn't owned by marketing, but instead by the entire organization. This changes everything and makes your role so much more important. It's up to you to learn how to think outside of the proverbial social media box, to see what others don't: the ability to improve customer experiences through the evolution of a social brand into a social business. Doing so will translate customer insights from what they do and don't share in social networks into better products, services, and processes. See, customers want something more from their favorite businesses than creative campaigns, viral content, and everyday dialogue in social networks. Customers want to be heard and they want to know that you're listening. How businesses use social media must remind them that they're more than just an audience, a consumer, or a conduit to "trigger" a desired social effect. Herein lies both the challenge and opportunity of social media. It's bigger than marketing. It's also bigger than customer service.
It’s about building relationships with customers that improve experiences and more importantly, teaches businesses how to re-imagine products and internal processes to better adapt to potential crises and seize new opportunities. When it comes down to it, Twitter, Facebook, Youtube, Foursquare, are all channels for listening, learning, and engaging. It’s what you do within each channel that builds a community around your brand. And, at the end of the day, the value of the community you build counts for everything. It’s important to understand that we cannot assume that these networks simply exist for people to lineup for our marketing messages or promotional campaigns. Nor can we assume that they’re reeling in anticipation for simple dialogue. They want value. They want recognition. They want access to exclusive information and offers. They need direction, answers and resolution. What we’re talking about here is the multidimensional makeup of consumers and how a one-sided approach to social media forces the needs for social media to expand beyond traditional marketing to socialize the various departments, lines of business, and functions to engage based on the nature of the situation or opportunity. In the same CMO study, it was revealed that marketers believe that social media has a long way to go toward integrating into the overall company strategy. On a scale of 1-7, with one being “not integrated at all” and seven being “very integrated,” 22% chose “one.” Critical functions such as service, HR, sales, R&D, product marketing and development, IR, CSR, etc. are either not engaged or are operating social media within a silo disconnected from other efforts or possibilities. The problem is that customers don’t view a company by silo, instead they see one company, one brand, and their experience in social media forms an impression that eventually contributes to their view of your brand. The first step here is to understand business priorities and objectives to assess how social media can be additive in achieving these goals. Additionally, surveying the landscape to determine other areas of interest as its specifically related to your business. • Are customers seeking help or direction? • Who are your most valuable customers and what are they sharing? • How can you use social media to acquire and retain customers? - What ideas are circulating and how can you harness user generated activity and content to innovate or adapt to better meet the needs of customers? - How can you broaden a single customer view to recognize the varying needs of customers and how your organization can organize around each circumstance? - What insights exist based on how consumers are interacting with one another? How can this intelligence inform marketing, service, products and other important business initiatives? - How can your business extend their current efforts to deliver better customer experiences and in turn more effectively unit internal collaboration and communication? Customer demands far exceed the capabilities of the marketing department. While creating a social brand is a necessary endeavor, building a social business is an investment in customer relevance now and over time. Beyond relevance, a social business fosters a culture of change that unites employees and customers and sets a foundation for meaningful and beneficial relationships. Innovation, communication, and creativity are the natural byproducts of engagement and transformation. As a social brand, we are competing for the moment. 
    As a social business, we are competing for the future in all that we do today.

    Read the article

  • SharePoint 2010 Single Page Apps without a Master Page

    - by David Jacobus
    Originally posted on: http://geekswithblogs.net/djacobus/archive/2014/06/06/156827.aspx

    Well, maybe a stretch, but I am inclined to believe it is so. I have been using the JavaScript Client Object Model (JCSOM) for about 6 months now, and I believe it can do about 80% of my job quickly, without much fanfare. When building sites in SharePoint, no one wants the OOTB list views, etc. They want a custom look and feel! In previous engagements I used to think that this would mean some custom server code, or at least a data-form web part. Since coming on board in my current engagement, I have been forced (because we don't own the hosting site) to come up with innovative ways to customize the UI of SharePoint. We can push content via sandbox solutions and use JCSOM from within SharePoint Designer to do almost all customizations. I have been using the following methodology to accomplish this:

    1. Create an HTML file, which links CSS and JavaScript files.
    2. Create an ASPX web part page, include a Content Editor Web Part, and link it to the HTML page created above.

    So basically, once that was done, I could copy, paste, and rename those 4 items (CSS, JS, HTML, ASPX) and, using MVVM, just change the Model, View, and View-Model in the JavaScript file. In about 5 minutes I could create a completely new web part with SharePoint data. Styling would take a little longer. Some issues that would crop up:

    1. Multiples of these web parts would not work well together on the same page (context).
    2. To separate the web parts and contexts, I would create a separate page for each web part and link them into a tabs layout via a Page Viewer web part or iframe. Easy to do and not a problem, but a big page-load problem, as these web part pages, even with a minimal master, had huge footprints (master page and web part zones).

    I kept thinking of my experience with SharePoint 2013 and apps! The JavaScript was loaded from within the app; why can't we do that in 2010 and skip the master page and web part zones? I thought at first to just link to sp.js, but that didn't work, so I searched the web and found a link which did not work at all in my environment but helped me create a solution that would work (kudos to Will):

    <!DOCTYPE html>
    <%@ Page %>
    <%@ Register Tagprefix="SharePoint" Namespace="Microsoft.SharePoint.WebControls" Assembly="Microsoft.SharePoint, Version=14.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %>
    <%@ Import Namespace="Microsoft.SharePoint" %>
    <html>
    <head>
        <link rel="Stylesheet" type="text/css" href="../CSS/smoothness/jquery-ui-1.10.4.custom.min.css">
    </head>
    <body>
    <form runat="server">
        <!-- the following 5 js files are required to use CSOM -->
        <script src="/_layouts/1033/init.js" type="text/javascript"></script>
        <script src="/_layouts/MicrosoftAjax.js" type="text/javascript"></script>
        <script src="/_layouts/sp.core.js" type="text/javascript"></script>
        <script src="/_layouts/sp.runtime.js" type="text/javascript"></script>
        <script src="/_layouts/sp.js" type="text/javascript"></script>
        <!-- include your app code -->
        <script src="../scripts/jquery-1.9.1.js" type="text/javascript"></script>
        <script src="../Scripts/jquery-ui-1.10.3.custom.min.js" type="text/javascript"></script>
        <script src="../scripts/App.js" type="text/javascript"></script>
        <div ID="Wrapper">
        </div>
        <SharePoint:FormDigest ID="FormDigest1" runat="server"></SharePoint:FormDigest>
    </form>
    </body>
    </html>

    Notice that I have the scripts loaded within the body!
    I discovered this by accident in trying to get Will's solution to work; it made this work just like normal JCSOM from the master page. I am sure there are other ways to do this, but I am a full-time developer, so I'll let someone else investigate the alternatives. I have an example page showing an Announcements list as a Booklet, which is a jQuery plug-in. Here is the page source; notice the footprint is light:

    <!DOCTYPE html>
    <html>
    <head>
        <link rel="Stylesheet" type="text/css" href="../CSS/smoothness/jquery-ui-1.10.4.custom.min.css">
        <link href="../CSS/jquery.booklet.latest.css" type="text/css" rel="stylesheet" media="screen, projection, tv" />
        <link href="../CSS/bookletannouncement.css" type="text/css" rel="stylesheet" media="screen, projection, tv" />
    </head>
    <body>
    <form name="ctl00" method="post" action="BookletAnnouncements2.aspx" onsubmit="javascript:return WebForm_OnSubmit();" id="ctl00">
        <div>
            <input type="hidden" name="__REQUESTDIGEST" id="__REQUESTDIGEST" value="0x3384922A8349572E3D76DC68A3F7A0984CEC14CB9669817CCA584681B54417F7FDD579F940335DCEC95CFFAC78ADDD60420F7AA82F60A8BC1BB4B9B9A57F9309,06 Jun 2014 14:13:27 -0000" />
            <input type="hidden" name="__VIEWSTATE" id="__VIEWSTATE" value="/wEPDwUBMGRk20t+bh/NWY1sZwphwb24pIxjUbo=" />
        </div>
        <script type="text/javascript">
        //<![CDATA[
        var g_presenceEnabled = true;
        var _fV4UI = true;
        var _spPageContextInfo = {
            webServerRelativeUrl: "\u002fsites\u002fDemo50\u002fTeamSite",
            webLanguage: 1033,
            currentLanguage: 1033,
            webUIVersion: 4,
            pageListId: "{ee707b5f-e246-4246-9e55-8db11d09a8cb}",
            pageItemId: 167,
            userId: 1,
            alertsEnabled: false,
            siteServerRelativeUrl: "\u002fsites\u002fdemo50",
            allowSilverlightPrompt: 'True'
        };
        //]]>
        </script>
        <script type="text/javascript" src="/_layouts/1033/init.js?rev=lEi61hsCxcBAfvfQNZA%2FsQ%3D%3D"></script>
        <script type="text/javascript">
        //<![CDATA[
        function WebForm_OnSubmit() {
            UpdateFormDigest('\u002fsites\u002fDemo50\u002fTeamSite', 1440000);
            return true;
        }
        //]]>
        </script>
        <!-- the following 5 js files are required to use CSOM -->
        <script src="/_layouts/1033/init.js"></script>
        <script src="/_layouts/MicrosoftAjax.js"></script>
        <script src="/_layouts/sp.core.js"></script>
        <script src="/_layouts/sp.runtime.js"></script>
        <script src="/_layouts/sp.js"></script>
        <!-- include your app code -->
        <script src="../scripts/jquery-1.9.1.js"></script>
        <script src="../Scripts/jquery-ui-1.10.3.custom.min.js" type="text/javascript"></script>
        <script src="../Scripts/jquery.easing.1.3.js"></script>
        <script src="../Scripts/jquery.booklet.latest.min.js"></script>
        <script src="../scripts/Announcementsbooklet.js"></script>
        <div ID="Accord">
        </div>
        <script type="text/javascript">
        //<![CDATA[
        var _spFormDigestRefreshInterval = 1440000;
        //]]>
        </script>
    </form>
    </body>
    </html>

    Here is the source to make the booklet work:

    ExecuteOrDelayUntilScriptLoaded(retrieveListItems, "sp.js");

    var context;
    var collListItem;
    var web;
    var listRootFolder;
    var oList;

    // retrieve the list items from the host web
    function retrieveListItems() {
        context = SP.ClientContext.get_current();
        web = context.get_web();
        oList = context.get_web().get_lists().getByTitle('Announcements');
        var camlQuery = new SP.CamlQuery();
        camlQuery.set_viewXml('<View><RowLimit>10</RowLimit></View>');
        collListItem = oList.getItems(camlQuery);
        listRootFolder = oList.get_rootFolder();
        context.load(listRootFolder);
        context.load(web);
        context.load(collListItem);
        context.executeQueryAsync(onQuerySucceeded, onQueryFailed);
    }

    // Model object
    var Dev = function (id, title, body, expires, url) {
        var self = this;
        self.ID = id;
        self.Title = title;
        self.Body = body;
        self.Expires = expires;
        self.Url = url;
    }

    // View model
    var DevVM = new ListViewModel();

    function ListViewModel() {
        var self = this;
        self.items = new Array();
    }

    function onQuerySucceeded(sender, args) {
        var listItemEnumerator = collListItem.getEnumerator();
        while (listItemEnumerator.moveNext()) {
            var oListItem = listItemEnumerator.get_current();
            var javaDate = oListItem.get_item('Expires');
            var fmtExpires = javaDate.format('dd MMM yyyy');
            var url = "";
            var goodUrl = oListItem.get_item('Url');
            if (goodUrl == null) {
                url = web.get_serverRelativeUrl() + "/Lists/Announcements/EditForm.aspx?ID=" + oListItem.get_item('ID');
            } else {
                url = web.get_serverRelativeUrl() + oListItem.get_item('Url');
            }
            DevVM.items.push(new Dev(oListItem.get_item('ID'), oListItem.get_item('Title'), oListItem.get_item('Body'), fmtExpires, url));
        }
        $.each(DevVM.items, function (index) {
            $("#Accord").append(createAccordNode(DevVM.items[index].Title, DevVM.items[index].Body, " Expires: " + DevVM.items[index].Expires, DevVM.items[index].Url));
        });
        $("#Accord").booklet();
    }

    function createAccordNode(title, body, expires, url) {
        return (
            $("<div><h3>" + title + "</h3><p><span class='titlespan'><a href='" + url + "'>" + title + "</a></span><span class='dicussionspan'>" + body + "</span><span class='expiresspan'>" + expires + "</span></p></div>")
        );
    }

    function onQueryFailed(sender, args) {
        alert('Request failed. ' + args.get_message() + '\n' + args.get_stackTrace());
    }

    The idea behind this post is that this could be used to:

    1. Create landing pages that are very un-SharePoint-like!
    2. Make lightweight pages that could be used in a Page Viewer web part or iframe.
    3. Utilize Deep Zoom Composer and Sea-Dragon/Silverlight.

    I will be using this for much of my development work!

    Read the article

  • Toon shader with Texture. Can this be optimized?

    - by Alex
    I am quite new to OpenGL. After long trial and error I have managed to integrate NeHe's cel-shading rendering with my model loaders, and have them drawn using the toon shade and outline AND their original texture at the same time. The result is actually a very nice cel-shading effect of the model texture, but it is halving the speed of the program; it's very slow even with just 3 models on screen... Since the result was kind of hacked together, I am thinking that maybe I am performing some extra steps or extra rendering tasks that are not needed and are slowing down the game? Something unnecessary that maybe you guys could spot? Both the MD2 and 3DS loaders have an initToon() function called upon creation to load the shader:

    bool initToon() {
        int i;                                           // Looping Variable ( NEW )
        char Line[255];                                  // Storage For 255 Characters ( NEW )
        float shaderData[32][3];                         // Storage For The 96 Shader Values ( NEW )
        FILE *In = fopen ("Shader.txt", "r");            // Open The Shader File ( NEW )

        if (In)                                          // Check To See If The File Opened ( NEW )
        {
            for (i = 0; i < 32; i++)                     // Loop Though The 32 Greyscale Values ( NEW )
            {
                if (feof (In))                           // Check For The End Of The File ( NEW )
                    break;
                fgets (Line, 255, In);                   // Get The Current Line ( NEW )
                shaderData[i][0] = shaderData[i][1] = shaderData[i][2] = float(atof (Line)); // Copy Over The Value ( NEW )
            }
            fclose (In);                                 // Close The File ( NEW )
        }
        else
            return false;                                // It Went Horribly Horribly Wrong ( NEW )

        glGenTextures (1, &shaderTexture[0]);            // Get A Free Texture ID ( NEW )
        glBindTexture (GL_TEXTURE_1D, shaderTexture[0]); // Bind This Texture. From Now On It Will Be 1D ( NEW )

        // For Crying Out Loud Don't Let OpenGL Use Bi/Trilinear Filtering! ( NEW )
        glTexParameteri (GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexParameteri (GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);

        glTexImage1D (GL_TEXTURE_1D, 0, GL_RGB, 32, 0, GL_RGB, GL_FLOAT, shaderData); // Upload ( NEW )
        return true;
    }

    This is the drawing code for the animated MD2 model:

    void MD2Model::drawToon()
    {
        float outlineWidth = 3.0f;                       // Width Of The Lines ( NEW )
        float outlineColor[3] = { 0.0f, 0.0f, 0.0f };    // Color Of The Lines ( NEW )

        // ORIGINAL PART OF THE FUNCTION
        // Figure out the two frames between which we are interpolating
        int frameIndex1 = (int)(time * (endFrame - startFrame + 1)) + startFrame;
        if (frameIndex1 > endFrame) {
            frameIndex1 = startFrame;
        }
        int frameIndex2;
        if (frameIndex1 < endFrame) {
            frameIndex2 = frameIndex1 + 1;
        }
        else {
            frameIndex2 = startFrame;
        }
        MD2Frame* frame1 = frames + frameIndex1;
        MD2Frame* frame2 = frames + frameIndex2;

        // Figure out the fraction that we are between the two frames
        float frac = (time - (float)(frameIndex1 - startFrame) / (float)(endFrame - startFrame + 1)) * (endFrame - startFrame + 1);

        // I ADDED THESE FROM NEHE'S TUTORIAL FOR FIRST PASS (TOON SHADE)
        glHint (GL_LINE_SMOOTH_HINT, GL_NICEST);         // Use The Good Calculations ( NEW )
        glEnable (GL_LINE_SMOOTH);                       // Cel-Shading Code //
        glEnable (GL_TEXTURE_1D);                        // Enable 1D Texturing ( NEW )
        glBindTexture (GL_TEXTURE_1D, shaderTexture[0]); // Bind Our Texture ( NEW )
        glColor3f (1.0f, 1.0f, 1.0f);                    // Set The Color Of The Model ( NEW )

        // ORIGINAL DRAWING CODE
        // Draw the model as an interpolation between the two frames
        glBegin(GL_TRIANGLES);
        for (int i = 0; i < numTriangles; i++) {
            MD2Triangle* triangle = triangles + i;
            for (int j = 0; j < 3; j++) {
                MD2Vertex* v1 = frame1->vertices + triangle->vertices[j];
                MD2Vertex* v2 = frame2->vertices + triangle->vertices[j];
                Vec3f pos = v1->pos * (1 - frac) + v2->pos * frac;
                Vec3f normal = v1->normal * (1 - frac) + v2->normal * frac;
                if (normal[0] == 0 && normal[1] == 0 && normal[2] == 0) {
                    normal = Vec3f(0, 0, 1);
                }
                glNormal3f(normal[0], normal[1], normal[2]);
                MD2TexCoord* texCoord = texCoords + triangle->texCoords[j];
                glTexCoord2f(texCoord->texCoordX, texCoord->texCoordY);
                glVertex3f(pos[0], pos[1], pos[2]);
            }
        }
        glEnd();

        // ADDED THESE FROM NEHE'S FOR SECOND PASS (OUTLINE)
        glDisable (GL_TEXTURE_1D);                       // Disable 1D Textures ( NEW )
        glEnable (GL_BLEND);                             // Enable Blending ( NEW )
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); // Set The Blend Mode ( NEW )
        glPolygonMode (GL_BACK, GL_LINE);                // Draw Backfacing Polygons As Wireframes ( NEW )
        glLineWidth (outlineWidth);                      // Set The Line Width ( NEW )
        glCullFace (GL_FRONT);                           // Don't Draw Any Front-Facing Polygons ( NEW )
        glDepthFunc (GL_LEQUAL);                         // Change The Depth Mode ( NEW )
        glColor3fv (&outlineColor[0]);                   // Set The Outline Color ( NEW )

        // HERE I AM PARSING THE VERTICES AGAIN (NOT IN THE ORIGINAL FUNCTION) FOR THE OUTLINE AS PER NEHE'S TUT
        glBegin (GL_TRIANGLES);                          // Tell OpenGL What We Want To Draw
        for (int i = 0; i < numTriangles; i++) {
            MD2Triangle* triangle = triangles + i;
            for (int j = 0; j < 3; j++) {
                MD2Vertex* v1 = frame1->vertices + triangle->vertices[j];
                MD2Vertex* v2 = frame2->vertices + triangle->vertices[j];
                Vec3f pos = v1->pos * (1 - frac) + v2->pos * frac;
                Vec3f normal = v1->normal * (1 - frac) + v2->normal * frac;
                if (normal[0] == 0 && normal[1] == 0 && normal[2] == 0) {
                    normal = Vec3f(0, 0, 1);
                }
                glNormal3f(normal[0], normal[1], normal[2]);
                MD2TexCoord* texCoord = texCoords + triangle->texCoords[j];
                glTexCoord2f(texCoord->texCoordX, texCoord->texCoordY);
                glVertex3f(pos[0], pos[1], pos[2]);
            }
        }
        glEnd ();                                        // Tell OpenGL We've Finished

        glDepthFunc (GL_LESS);                           // Reset The Depth-Testing Mode ( NEW )
        glCullFace (GL_BACK);                            // Reset The Face To Be Culled ( NEW )
        glPolygonMode (GL_BACK, GL_FILL);                // Reset Back-Facing Polygon Drawing Mode ( NEW )
        glDisable (GL_BLEND);
    }

    Whereas this is the drawToon function in the 3DS loader:

    void Model_3DS::drawToon()
    {
        float outlineWidth = 3.0f;                       // Width Of The Lines ( NEW )
        float outlineColor[3] = { 0.0f, 0.0f, 0.0f };    // Color Of The Lines ( NEW )

        // ORIGINAL CODE
        if (visible) {
            glPushMatrix();

            // Move the model
            glTranslatef(pos.x, pos.y, pos.z);

            // Rotate the model
            glRotatef(rot.x, 1.0f, 0.0f, 0.0f);
            glRotatef(rot.y, 0.0f, 1.0f, 0.0f);
            glRotatef(rot.z, 0.0f, 0.0f, 1.0f);

            glScalef(scale, scale, scale);

            // Loop through the objects
            for (int i = 0; i < numObjects; i++) {
                // Enable texture coordinates, normals, and vertices arrays
                if (Objects[i].textured)
                    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
                if (lit)
                    glEnableClientState(GL_NORMAL_ARRAY);
                glEnableClientState(GL_VERTEX_ARRAY);

                // Point them to the objects arrays
                if (Objects[i].textured)
                    glTexCoordPointer(2, GL_FLOAT, 0, Objects[i].TexCoords);
                if (lit)
                    glNormalPointer(GL_FLOAT, 0, Objects[i].Normals);
                glVertexPointer(3, GL_FLOAT, 0, Objects[i].Vertexes);

                // Loop through the faces as sorted by material and draw them
                for (int j = 0; j < Objects[i].numMatFaces; j++) {
                    // Use the material's texture
                    Materials[Objects[i].MatFaces[j].MatIndex].tex.Use();

                    // AFTER THE TEXTURE IS APPLIED I INSERT THE TOON FUNCTIONS HERE (FIRST PASS)
                    glHint (GL_LINE_SMOOTH_HINT, GL_NICEST); // Use The Good Calculations ( NEW )
                    glEnable (GL_LINE_SMOOTH);               // Cel-Shading Code //
                    glEnable (GL_TEXTURE_1D);                // Enable 1D Texturing ( NEW )
                    glBindTexture (GL_TEXTURE_1D, shaderTexture[0]); // Bind Our Texture ( NEW )
                    glColor3f (1.0f, 1.0f, 1.0f);            // Set The Color Of The Model ( NEW
) glPushMatrix(); // Move the model glTranslatef(Objects[i].pos.x, Objects[i].pos.y, Objects[i].pos.z); // Rotate the model glRotatef(Objects[i].rot.z, 0.0f, 0.0f, 1.0f); glRotatef(Objects[i].rot.y, 0.0f, 1.0f, 0.0f); glRotatef(Objects[i].rot.x, 1.0f, 0.0f, 0.0f); // Draw the faces using an index to the vertex array glDrawElements(GL_TRIANGLES, Objects[i].MatFaces[j].numSubFaces, GL_UNSIGNED_SHORT, Objects[i].MatFaces[j].subFaces); glPopMatrix(); } glDisable (GL_TEXTURE_1D); // Disable 1D Textures ( NEW ) // THIS IS AN ADDED SECOND PASS AT THE VERTICES FOR THE OUTLINE glEnable (GL_BLEND); // Enable Blending ( NEW ) glBlendFunc(GL_SRC_ALPHA,GL_ONE_MINUS_SRC_ALPHA); // Set The Blend Mode ( NEW ) glPolygonMode (GL_BACK, GL_LINE); // Draw Backfacing Polygons As Wireframes ( NEW ) glLineWidth (outlineWidth); // Set The Line Width ( NEW ) glCullFace (GL_FRONT); // Don't Draw Any Front-Facing Polygons ( NEW ) glDepthFunc (GL_LEQUAL); // Change The Depth Mode ( NEW ) glColor3fv (&outlineColor[0]); // Set The Outline Color ( NEW ) for (int j = 0; j < Objects[i].numMatFaces; j ++) { glPushMatrix(); // Move the model glTranslatef(Objects[i].pos.x, Objects[i].pos.y, Objects[i].pos.z); // Rotate the model glRotatef(Objects[i].rot.z, 0.0f, 0.0f, 1.0f); glRotatef(Objects[i].rot.y, 0.0f, 1.0f, 0.0f); glRotatef(Objects[i].rot.x, 1.0f, 0.0f, 0.0f); // Draw the faces using an index to the vertex array glDrawElements(GL_TRIANGLES, Objects[i].MatFaces[j].numSubFaces, GL_UNSIGNED_SHORT, Objects[i].MatFaces[j].subFaces); glPopMatrix(); } glDepthFunc (GL_LESS); // Reset The Depth-Testing Mode ( NEW ) glCullFace (GL_BACK); // Reset The Face To Be Culled ( NEW ) glPolygonMode (GL_BACK, GL_FILL); // Reset Back-Facing Polygon Drawing Mode ( NEW ) glDisable (GL_BLEND); glPopMatrix(); } Finally this is the tex.Use() function that loads a BMP texture and somehow gets blended perfectly with the Toon shading void GLTexture::Use() { glEnable(GL_TEXTURE_2D); // Enable texture mapping glBindTexture(GL_TEXTURE_2D, texture[0]); // Bind the texture as the current one }
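A likely culprit, for anyone hitting the same wall: both passes resubmit every vertex through immediate-mode glBegin/glEnd, and the MD2 path even redoes the frame interpolation for the outline pass. A minimal sketch of one way out – interpolate once into client-side arrays, then issue both passes with glDrawArrays – reusing the member names from the MD2 loader above (the scratch buffers are illustrative, not part of the original code):

    #include <vector>

    // Scratch buffers; in real code these would be class members reused every frame.
    std::vector<float> verts, norms, uvs;
    verts.reserve( numTriangles * 9 );
    norms.reserve( numTriangles * 9 );
    uvs.reserve( numTriangles * 6 );

    // Interpolate the animation exactly once per frame.
    for( int i = 0; i < numTriangles; i++ ) {
        MD2Triangle* triangle = triangles + i;
        for( int j = 0; j < 3; j++ ) {
            MD2Vertex* v1 = frame1->vertices + triangle->vertices[j];
            MD2Vertex* v2 = frame2->vertices + triangle->vertices[j];
            Vec3f pos = v1->pos * (1 - frac) + v2->pos * frac;
            Vec3f normal = v1->normal * (1 - frac) + v2->normal * frac;
            if( normal[0] == 0 && normal[1] == 0 && normal[2] == 0 )
                normal = Vec3f( 0, 0, 1 );          // same degenerate-normal fix as above
            verts.insert( verts.end(), { pos[0], pos[1], pos[2] } );
            norms.insert( norms.end(), { normal[0], normal[1], normal[2] } );
            MD2TexCoord* tc = texCoords + triangle->texCoords[j];
            uvs.insert( uvs.end(), { tc->texCoordX, tc->texCoordY } );
        }
    }

    glEnableClientState( GL_VERTEX_ARRAY );
    glEnableClientState( GL_NORMAL_ARRAY );
    glEnableClientState( GL_TEXTURE_COORD_ARRAY );
    glVertexPointer( 3, GL_FLOAT, 0, verts.data() );
    glNormalPointer( GL_FLOAT, 0, norms.data() );
    glTexCoordPointer( 2, GL_FLOAT, 0, uvs.data() );

    // Pass 1: toon shade (same 1D-texture state setup as in drawToon above).
    glDrawArrays( GL_TRIANGLES, 0, numTriangles * 3 );
    // Pass 2: outline (same blend/cull/polygon-mode changes as above).
    glDrawArrays( GL_TRIANGLES, 0, numTriangles * 3 );

That turns tens of thousands of glVertex3f calls per frame into two draw calls over the same data. The 3DS path has a related problem: the toon state setup (glHint, glEnable, glBindTexture) sits inside the per-material loop and can be hoisted out, since it never changes between materials.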

    Read the article

  • Masters vs. PhD - long [closed]

    - by Sterling
    I'm 21 years old and a first-year master's computer science student. Whether or not to continue with my PhD has been plaguing me for the past few months. I can't stop thinking about it and am extremely torn on the issue. I have read http://www.cs.unc.edu/~azuma/hitch4.html and many, many other masters vs. PhD articles on the web. Unfortunately, I have not yet come to a conclusion. I was hoping that I could post my ideas about the issue on here in the hope of 1) getting some extra insight on the issue and 2) making sure that I am correct in my assumptions. Hopefully people who have experience in the respective fields can tell me if I am wrong, so I don't make my decision based on false ideas. Okay, to get this topic out of the way - money. Money isn't the most important thing to me, but it is still important. It's always been a goal of mine to make 6 figures, but I realize that will probably take me a long time with either path. According to most online salary-calculating sites, the average starting salary for a software engineer is ~60-70k. The PhD program here is 5 years, so that's about 300k I am missing out on by not going into the workforce with a masters. I have only ever had ~1k at one time in my life, so 300k is something I can't even really accurately imagine. I know that I wouldn't have it all at once, obviously, but just knowing I would be earning that much is kinda crazy to me. I feel like I would be living quite comfortably by the time I'm 30 years old (but risk being too content too soon). I would definitely love to have at least a few years of my 20s to spend with that kind of money before I have a family to spend it all on. I haven't grown up very financially stable, so it would be so nice to just spend some money…get a nice car, buy a new guitar or two, eat some good food, and just be financially comfortable. I have always felt like I deserved to make good money in my life, even as a kid growing up, and I just want to have it be a reality. I know that either path I take will make good money by the time I'm ~40-45 years old, but I guess I'm just sick of not making money and am getting impatient about it. However, a big idea pushing me towards a PhD is that I feel the masters path would give me a feeling of selling out if I have the capability to solve real questions in the computer science world. (pretty straightforward - not much to elaborate on, but this is a big deal) Now onto other aspects of the decision. I originally got into computer science because of programming. I started in high school and knew very soon that it was what I wanted to do for a career. I feel like getting a masters and being a software engineer in the industry gives me much more time to program in my career. In research, I feel like I would spend more time reading, writing, and trying to get grant money than I would coding. A guy I work with in the lab just recently published a paper. He showed it to me and I was shocked by it. The first two pages were littered with equations and formulas. Then the next page or so was followed by more equations and formulas that he derived from the previous ones. That was his work - breaking down and creating all of these formulas for robotic arm movement. And whenever I read computer science papers, they all seem to follow this pattern. I always pictured myself coding all day long…not proving equations and things of that nature. I know that's only one part of computer science research, but that part bores me.
A couple of cons on each side. PhD - I don't really enjoy writing, or feel like I'm that great at technical writing. Whenever I'm in groups to make something, I'm always the one who does the large majority of the work and then gives it to my team members to write up a report. Presenting is different, though - I don't mind presenting at all, as long as I have a good grasp on what I am presenting. But writing papers seems like such a chore to me. And because of this, the "publish or perish" phrase really turns me off from research. Another bad thing - I feel like if I were doing research, most of it would be done alone. I work best in small groups. I like to have at least one person to bounce ideas off of when I am brainstorming. The idea of being part of some small elite group that builds things sounds ideal to me. So being able to work in small groups for the majority of my career is a definite plus. I don't feel like I can get this doing research. Masters - I read a lot online that most people come in as engineers and eventually move into management positions. As of now, I don't see myself wanting to be a part of management. Let's say my company wanted to make some new product or system - I would get much more pride, enjoyment, and overall satisfaction from saying "I made this" rather than "I managed a group of people that made this." I want to be a big part of the development process. I want to make things. I think it would be great to be more specialized than other people. I would rather know everything about something than something about everything. I have always been that way - I was a great pitcher during my baseball years but not so good at everything else, great at certain classes in school but not so good at others, etc. To think that my career would be the same way sounds okay to me. Getting a PhD would point me in this direction. It would be great to be someone that people look towards and come to ask for help, because of being such an important contributor to a very specific field, such as artificial neural networks or robotic haptic perception. From what I gather about the software industry, though, being specialized can be a very bad thing because of the speed of new technology. When it comes to being employed, I have pretty conservative views. I don't want to change companies every 5 years. Maybe this is something everyone wishes, but I would love to just be an important person in one company for 10+ (maybe 20-25+ if I'm lucky!) years, if the working conditions were acceptable. I feel like that is more possible as a PhD, though, being a professor or researcher. The more I read about people in the software industry, the more it seems like most software engineers bounce from company to company at a rapid pace. Some even work like a hired gun from project to project, which is NOT what I want AT ALL. But finding a place to make great and important software would be great, if that actually happens in the real world. I'm a very competitive person. I thrive on competition. I don't really know why, but I have always been that way, even as a kid growing up. Competition always gave me a reason to practice that little extra every night, always push my limits, etc. It seems to me like there is no competition in the research world. It seems like everyone is very relaxed as long as research is being conducted. The only competition is if someone is researching the same thing as you, and it's whoever can finish and publish first (but everyone seems careful to check for that circumstance).
The only noticeable competition to me is just with yourself and your own discipline. I like the idea that in the industry there is real competition between companies to put out the best product or be put out of business. I feel like this would constantly be pushing me to be better at what I do. One thing that is really pushing me towards a PhD is the lifetime of the things you make. I feel like if you make something truly innovative in the industry…just some really great new application or system…there is a shelf life of about 5-10 years before someone just does it faster and more efficiently. But with research work, you could create an idea or algorithm that lasts for decades. For instance, the A* search algorithm was described in 1968 and is still widely used today. That is amazing to me. In the words of Palahniuk, "The goal isn't to live forever, it's to create something that will." Over anything, I just want to do something that matters. I want my work to help and progress society. Seriously, if I'm stuck programming GUIs for the next 40 years…I might shoot myself in the face. But then again, I hate the idea that less than 1% of the population will come into contact with my work, and even fewer will understand its importance. So if anything I have said is false, then please inform me. If you think I come off as more of a masters or a PhD person, inform me. If you want to give me some extra insight or add on to any point I made, please do. Thank you so much to anyone for any help.

    Read the article

  • When someone deletes a shared data source in SSRS

    - by Rob Farley
    SQL Server Reporting Services plays nicely. You can have things in the catalogue that get shared. You can have Reports that have Links, Datasets that can be used across different reports, and Data Sources that can be used in a variety of ways too. So if you find that someone has deleted a shared data source, you potentially have a bit of a horror story going on. And this works for this month’s T-SQL Tuesday theme, hosted by Nick Haslam, who wants to hear about horror stories. I don’t write about LobsterPot client horror stories, so I’m writing about a situation that a fellow MVP friend asked me about recently instead. The best thing to do is to grab a recent backup of the ReportServer database, restore it somewhere, and figure out what’s changed. But of course, this isn’t always possible. And it’s much nicer to help someone with this kind of thing, rather than to be trying to fix it yourself when you’ve just deleted the wrong data source. Unfortunately, it lets you delete data sources, without trying to scream that the data source is shared across over 400 reports in over 100 folders, as was the case for my friend’s colleague. So, suddenly there’s a big problem – lots of reports are failing, and the time to turn it around is small. You probably know which data source has been deleted, but getting the shared data source back isn’t the hard part (that’s just a connection string really). The nasty bit is all the re-mapping, to get those 400 reports working again. I know from exploring this kind of stuff in the past that the ReportServer database (using its default name) has a table called dbo.Catalog to represent the catalogue, and that Reports are stored here. However, the information about what data sources these deployed reports are configured to use is stored in a different table, dbo.DataSource. You could be forgiven for thinking that shared data sources would live in this table, but they don’t – they’re catalogue items just like the reports. Let’s have a look at the structure of these two tables (although if you’re reading this because you have a disaster, feel free to skim past). Frustratingly, there doesn’t seem to be a Books Online page for this information, sorry about that. I’m also not going to look at all the columns, just ones that I find interesting enough to mention, and that are related to the problem at hand. These fields are consistent all the way through to SQL Server 2012 – there doesn’t seem to have been any changes here for quite a while. dbo.Catalog The Primary Key is ItemID. It’s a uniqueidentifier. I’m not going to comment any more on that. A minor nice point about using GUIDs in unfamiliar databases is that you can more easily figure out what’s what. But foreign keys are for that too… Path, Name and ParentID tell you where in the folder structure the item lives. Path isn’t actually required – you could’ve done recursive queries to get there. But as that would be quite painful, I’m more than happy for the Path column to be there. Path contains the Name as well, incidentally. Type tells you what kind of item it is. Some examples are 1 for a folder and 2 a report. 4 is linked reports, 5 is a data source, 6 is a report model. I forget the others for now (but feel free to put a comment giving the full list if you know it). Content is an image field, remembering that image doesn’t necessarily store images – these days we’d rather use varbinary(max), but even in SQL Server 2012, this field is still image. 
It stores the actual item definition in binary form, whether it's actually an image, a report, whatever. LinkSourceID is used for Linked Reports, and has a self-referencing foreign key (allowing NULL, of course) back to ItemID. Parameter is an ntext field containing XML for the parameters of the report. Not sure why this couldn't be a separate table, but I guess that's just the way it goes. This field gets changed when the default parameters get changed in Report Manager. There is nothing in dbo.Catalog that describes the actual data sources that the report uses. The default data sources would be part of the Content field, as they are defined in the RDL, but when you deploy reports, you typically choose to NOT replace the data sources. Anyway, they're not in this table. Maybe it was already considered a bit wide to throw in another ntext field, I'm not sure. They're in dbo.DataSource instead. dbo.DataSource The Primary key is DSID. Yes, it's a uniqueidentifier... ItemID is a foreign key reference back to dbo.Catalog. Fields such as ConnectionString, Prompt, UserName and Password do what they say on the tin, storing information about how to connect to the particular source in question. Link is a uniqueidentifier, which refers back to dbo.Catalog. This is used when a data source within a report refers back to a shared data source, rather than embedding the connection information itself. You'd think this should be enforced by foreign key, but it's not. It does allow NULLs though. Flags is an int, and I'll come back to it. When a Data Source gets deleted out of dbo.Catalog, you might assume that it would be disallowed if there are references to it from dbo.DataSource. Well, you'd be wrong. And not because of the lack of a foreign key either. Deleting anything from the catalogue is done by calling a stored procedure called dbo.DeleteObject. You can look at the definition in there – it feels very much like the kind of Delete stored procedures that many people write, the kind of thing that means they don't need to worry about allowing cascading deletes with foreign keys – because the stored procedure does the lot. Except that it doesn't quite do that. If it deleted everything on a cascading delete, we'd've lost all the data sources as configured in dbo.DataSource, and that would be bad. This is fine if the ItemID from dbo.DataSource hooks in – if the report is being deleted. But if a shared data source is being deleted, you don't want to lose the existence of the data source from the report. So it sets it to NULL, and it marks it as invalid. We see this code in that stored procedure.

UPDATE [DataSource]
SET [Flags] = [Flags] & 0x7FFFFFFD, -- broken link
    [Link] = NULL
FROM [Catalog] AS C
INNER JOIN [DataSource] AS DS ON C.[ItemID] = DS.[Link]
WHERE (C.Path = @Path OR C.Path LIKE @Prefix ESCAPE '*')

Unfortunately there's no semi-colon on the end (but I'd rather they fix the ntext and image types first), and don't get me started about using the table name in the UPDATE clause (it should use the alias DS). But there is a nice comment about what's going on with the Flags field. What I'd LIKE it to do would be to set the connection information to a report-embedded copy of the connection information that's in the shared data source, the one that's about to be deleted. I understand that this would cause someone to lose the benefit of having the data sources configured in a central point, but I'd say that's probably still slightly better than LOSING THE INFORMATION COMPLETELY.
Sorry, rant over. I should log a Connect item – I'll put that on my todo list. So it sets the Link field to NULL, and marks the Flags to tell you they're broken. So this is your clue to fixing it. A bitwise AND with 0x7FFFFFFD is basically stripping out the '2' bit from a number. So numbers like 2, 3, 6, 7, 10, 11, etc, whose binary representation ends in either 11 or 10, get turned into 0, 1, 4, 5, 8, 9, etc. We can test for it using a WHERE clause that matches the SET clause we've just used. I'd also recommend checking for Link being NULL and also having no ConnectionString. And join back to dbo.Catalog to get the path (including the name) of the broken reports – in case you get a surprise from a different data source that was broken in the past.

SELECT c.Path, ds.Name
FROM dbo.[DataSource] AS ds
JOIN dbo.[Catalog] AS c ON c.ItemID = ds.ItemID
WHERE ds.[Flags] = ds.[Flags] & 0x7FFFFFFD
AND ds.[Link] IS NULL
AND ds.[ConnectionString] IS NULL;

When I just ran this on my own machine, having deleted a data source to check my code, I noticed a Report Model in the list as well – so if you had thought it was just going to be reports that were broken, you'd be forgetting something. So to fix those reports, get your new data source created in the catalogue, and then find its ItemID by querying Catalog, using Path and Name to find it. And then use this value to fix them up. To fix the Flags field, just add 2. I prefer to use bitwise OR, which should do the same. Use the OUTPUT clause to get a copy of the DSIDs of the ones you're changing, just in case you need to revert something later after testing (doing it all in a transaction won't help, because you'll just lock out the table, stopping you from testing anything).

UPDATE ds
SET [Flags] = [Flags] | 2,
    [Link] = '3AE31CBA-BDB4-4FD1-94F4-580B7FAB939D' /*Insert your own GUID*/
OUTPUT deleted.Name, deleted.DSID, deleted.ItemID, deleted.Flags
FROM dbo.[DataSource] AS ds
JOIN dbo.[Catalog] AS c ON c.ItemID = ds.ItemID
WHERE ds.[Flags] = ds.[Flags] & 0x7FFFFFFD
AND ds.[Link] IS NULL
AND ds.[ConnectionString] IS NULL;

But please be careful. Your mileage may vary. And there's no reason why 400-odd broken reports need to be quite the nightmare that they could be. Really, it should be less than five minutes. @rob_farley
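One step the post leaves to the reader is the lookup itself. A sketch of that step, with a placeholder name and path rather than anything from Rob's environment:

SELECT c.ItemID, c.Path
FROM dbo.[Catalog] AS c
WHERE c.[Type] = 5                               -- 5 = data source, per the Type list above
AND c.[Name] = 'MySharedDataSource'              -- placeholder: your data source name
AND c.[Path] = '/Data Sources/MySharedDataSource'; -- placeholder: its full path

The ItemID this returns is the GUID to paste into the UPDATE above in place of the example value.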

    Read the article

  • CodePlex Daily Summary for Wednesday, August 13, 2014

    CodePlex Daily Summary for Wednesday, August 13, 2014Popular ReleasesLozzi's SharePoint 2013 Scripts: Lozzi.Fields (without Site Col Admin): This file is the same as the primary download, however I've removed the override allowing Site Collection Administrators access regardless of groups. This applies to the disableWithAllowance and hideWithAllowance functions.bitboxx bbcontact: 01.00.00: Release Notes bitboxx bbcontact 01.00.00bbcontact 01.00.00 will work for any DNN version 6.1.0 and up. - Initial releaseAD4 Application Designer for flow based .NET applications: AD4.AppDesigner.23.27: AD4.Iteration.23.27(Advanced Rendering Features) Refacturing: RenderStepPinsCaptions simplified by extending FlowChartStepPinDecoratorExtensions RenderFlowChartPinsCaptions simplified by FlowChartFlowPinDecoratorExtensions RenderWiresCaptions simplified by FlowChartWireDecoratorExtensions Design of tutorial samples updated Next tutorial finished: ThreadAsynchronizer Pattern (Version V7) ToDo: Some tutorials are unfinished but coming soon ... Note: The gluing code of the AD4.AppDesig...OooPlayer: 1.1: Added: Support for speex, TAK and OptimFrog files Added: An option to not to load cover art Added: Smaller package size Fixed: Unable to drag&drop audio files to playlist Updated: FLAC, WacPack and Opus playback libraries Updated: ID3v1 and ID3v2 tag librariesEWSEditor: EwsEditor 1.10 Release: • Export and import of items as a full fidelity steam works - without proxy classes! - I used raw EWS POSTs. • Turned off word wrap for EWS request field in EWS POST windows. • Several windows with scrolling texts boxes were limiting content to 32k - I removed this restriction. • Split server timezone info off to separate menu item from the timezone info windows so that the timezone info window could be used without logging into a mailbox. • Lots of updates to the TimeZone window. • UserAgen...Python Tools for Visual Studio: 2.1 RC: Release notes for PTVS 2.1 RC We’re pleased to announce the release candidate for Python Tools for Visual Studio 2.1. Python Tools for Visual Studio (PTVS) is an open-source plug-in for Visual Studio which supports programming with the Python language. PTVS supports a broad range of features including CPython/IronPython, editing, IntelliSense, interactive debugging, profiling, Microsoft Azure, IPython, and cross-platform debugging support. PTVS 2.1 RC is available for: Visual Studio Expre...Aspose for Apache POI: Missing Features of Apache POI SS - v 1.2: Release contain the Missing Features in Apache POI SS SDK in comparison with Aspose.Cells What's New ? Following Examples: Create Pivot Charts Detect Merged Cells Sort Data Printing Workbooks Feedback and Suggestions Many more examples are available at Aspose Docs. 
Raise your queries and suggest more examples via Aspose Forums or via this social coding site.AngularGo (SPA Project Template): AngularGo.VS2013.vsix: First ReleaseDaphne 2014 - application for playing Czech Draughts: Daphne 2014 verze 0.9.0.21: Daphne 2014 verze 0.9.0.21MFCBDAINF: MFCBDAINF: Added recognition of TBS, Hauppauge, DVBWorld and FireDTV proprietary GUID'sFluffy: Fluffy 0.3.35.4: Change log: Text editorSKGL - Serial Key Generating Library: SKGL Extension Methods 4 (1.0.5.1): This library contains methods for: Time change check (make sure the time has not been changed on the client computer) Key Validation (this will use http://serialkeymanager.com/ to validate keys against the database) Key Activation (this will, depending on the settings, activate a key with a specific machine code) Key Activation Trial (allows you to update a key if it is a trial key) Get Machine Code (calculates a machine code given any hash function) Get Eight Byte Hash (returns an...Touchmote: Touchmote 1.0 beta 13: Changes Less GPU usage Works together with other Xbox 360 controls Bug fixesPublic Key Infrastructure PowerShell module: PowerShell PKI Module v3.0: Important: I would like to hear more about what you are thinking about the project? I appreciate that you like it (2000 downloads over past 6 months), but may be you have to say something? What do you dislike in the module? Maybe you would love to see some new functionality? Tell, what you think! Installation guide:Use default installation path to install this module for current user only. To install this module for all users — enable "Install for all users" check-box in installation UI ...Modern UI for WPF: Modern UI 1.0.6: The ModernUI assembly including a demo app demonstrating the various features of Modern UI for WPF. BREAKING CHANGE LinkGroup.GroupName renamed to GroupKey NEW FEATURES Improved rendering on high DPI screens, including support for per-monitor DPI awareness available in Windows 8.1 (see also Per-monitor DPI awareness) New ModernProgressRing control with 8 builtin styles New LinkCommands.NavigateLink routed command New Visual Studio project templates 'Modern UI WPF App' and 'Modern UI W...Roll20 Custom Power Card Macro Generator: R20CPCMG 0.2.0.0 Public Beta: This is the beta release for version 0.2.0.0. Its still very much a work in progress, but I'd rather get this out now before another lapse in updates so we can solicit feedback from the community. The two main updates for this version is that you can now import macros and you can customize the tag buttons and group them by game system. The Import Macro function turns the raw text, like the following, into something you can easily edit inside the program. !power --name|Whirling Assault --us...Utility Database: UtilityDB.2.0: Release Notes: Version 2.0 This is the second release of the UtilityDB, it builds on top of and includes the Version 1.8 of code. This release focus on performance metrics in particular Disk I/O. The deployment scripts have been rewritten to utilize transactions to insure completeness of script execution. This project releases the source code as a SQL Server 2012 project file. 
The intended way to deliver the scripts to the database is through the execution of the @BuildScript.sql in the ...ClosedXML - The easy way to OpenXML: ClosedXML 0.74.0: Multiple thread safe improvements including AdjustToContents XLHelper XLColor_Static IntergerExtensions.ToStringLookup Exception now thrown when saving a workbook with no sheets, instead of creating a corrupt workbook Fix for hyperlinks with non-ASCII Characters Added basic workbook protection Fix for error thrown, when a spreadsheet contained comments and images Fix to Trim function Fix Invalid operation Exception thrown when the formula functions MAX, MIN, and AVG referenc...SEToolbox: SEToolbox 01.042.019 Release 1: Added RadioAntenna broadcast name to ship name detail. Added two additional columns for Asteroid material generation for Asteroid Fields. Added Mass and Block number columns to main display. Added Ellipsis to some columns on main display to reduce name confusion. Added correct SE version number in file when saving. Re-added in reattaching Motor when drag/dropping or importing ships (KeenSH have added RotorEntityId back in after removing it months ago). Added option to export and r...jQuery List DragSort: jQuery List DragSort 0.5.2: Fixed scrollContainer removing deprecated use of $.browser so should now work with latest version of jQuery. Added the ability to return false in dragEnd to revert sort order Project changes Added nuget package for dragsort https://www.nuget.org/packages/dragsort Converted repository from SVN to MercurialNew Projectsangle.works: Sample AngularJS projectArtezio spTree for SharePoint 2013: spTree is a jQuery plugin to display SharePoint websites and lists in a tree view. It uses jsTree plugin to display data, expand and complement its settings.DigitalProject: no project ,no project ,no project ,no project ,no project ,no project ,no project ,no project ,no project ,no project ,no project ,no project ,no project ,no popenelecmedicrec: its an open source emrPetriFlow: A new solution for workflow using Petri NetReflexive .Net: Stream transformation over durable and transient channelsrunner-prototype: Small game prototype for self-practicing, likely no use at all for anyone else.Spizzi PowerShell Module: This project provides different PowerShell CmdLet's combined into one module to extend the built-in PowerShell Modules.Testill: Test Web Fabricator: A highly composable web fabricwingate log parser: wingate log parser

    Read the article

  • Processing Kinect v2 Color Streams in Parallel

    - by Chris Gardner
    Originally posted on: http://geekswithblogs.net/freestylecoding/archive/2014/08/20/processing-kinect-v2-color-streams-in-parallel.aspx

Processing Kinect v2 Color Streams in Parallel

I've really been enjoying being a part of the Kinect for Windows Developer's Preview. The new hardware has some really impressive capabilities. However, with great power comes great system specs. Unfortunately, my little laptop that could is not 100% up to the task; I've had to get a little creative. The most disappointing thing I've run into is that I can't always cleanly display the color camera stream in managed code. I managed to strip the code down to what I believe is the bare minimum:

    using( ColorFrame _ColorFrame = e.FrameReference.AcquireFrame() ) {
        if( null == _ColorFrame )
            return;

        BitmapToDisplay.Lock();
        _ColorFrame.CopyConvertedFrameDataToIntPtr(
            BitmapToDisplay.BackBuffer,
            Convert.ToUInt32( BitmapToDisplay.BackBufferStride * BitmapToDisplay.PixelHeight ),
            ColorImageFormat.Bgra );
        BitmapToDisplay.AddDirtyRect(
            new Int32Rect( 0, 0, _ColorFrame.FrameDescription.Width, _ColorFrame.FrameDescription.Height ) );
        BitmapToDisplay.Unlock();
    }

With this snippet, I'm placing the converted Bgra32 color stream directly on the BackBuffer of the WriteableBitmap. This gives me pretty smooth playback, but I still get the occasional freeze for half a second. After a bit of profiling, I discovered there were a few problems. The first problem is the size of the buffer, along with the conversion done on the buffer. At this time, the raw image format of the data from the Kinect is Yuy2. This is great for direct video processing. It would be ideal if I had a WriteableVideo object in WPF. However, this is not the case. Further digging led me to the real problem. It appears that the SDK is converting the input serially. Let's think about this for a second. The color camera is a 1080p camera. As we should all know, this gives us a native resolution of 1920 x 1080. This produces 2,073,600 pixels. Yuy2 uses 4 bytes per 2 pixels, for a buffer size of 4,147,200 bytes. Bgra32 uses 4 bytes per pixel, for a buffer size of 8,294,400 bytes. The SDK appears to be doing this on one thread. I started wondering if I could do this better myself. I mean, I have 8 cores in my system. Why can't I use them all? The first problem is converting a Yuy2 frame into a Bgra32 frame. It is NOT trivial. I spent a day of research on just how to do this. In the end, I didn't even produce the best algorithm possible, but it did work. After I managed to get that to work, I knew my next step was to get the conversion operation off the UI Thread. This was a simple process of throwing the work into a Task. Of course, this meant I had to marshal the final write to the WriteableBitmap back to the UI thread. Finally, I needed to vectorize the operation so I could run it safely in parallel. This was, mercifully, not quite as hard as I thought it would be. I had my loop return an index to a pair of pixels. From there, I had to tell the loop to do everything for this pair of pixels. If you're wondering why I did it for pairs of pixels, look back above at the specification for the Yuy2 format. I won't go into full detail on why each 4 bytes contains 2 pixels of information, but rest assured that there is a reason why the format is described in that way. The first working attempt at this algorithm successfully turned my poor laptop into a space heater. I very quickly brought and maintained all 8 cores up to about 97% usage.
That's when I remembered that obscure option in the Task Parallel Library where you could limit the amount of parallelism used. After a little trial and error, I discovered 4 parallel tasks was enough for most cases. This yielded the following code:

    private byte ClipToByte( int p_ValueToClip ) {
        return Convert.ToByte( ( p_ValueToClip < byte.MinValue ) ? byte.MinValue :
            ( ( p_ValueToClip > byte.MaxValue ) ? byte.MaxValue : p_ValueToClip ) );
    }

    private void ColorFrameArrived( object sender, ColorFrameArrivedEventArgs e ) {
        if( null == e.FrameReference )
            return;

        // If you do not dispose of the frame, you never get another one...
        using( ColorFrame _ColorFrame = e.FrameReference.AcquireFrame() ) {
            if( null == _ColorFrame )
                return;

            byte[] _InputImage = new byte[_ColorFrame.FrameDescription.LengthInPixels * _ColorFrame.FrameDescription.BytesPerPixel];
            byte[] _OutputImage = new byte[BitmapToDisplay.BackBufferStride * BitmapToDisplay.PixelHeight];
            _ColorFrame.CopyRawFrameDataToArray( _InputImage );

            Task.Factory.StartNew( () => {
                ParallelOptions _ParallelOptions = new ParallelOptions();
                _ParallelOptions.MaxDegreeOfParallelism = 4;

                Parallel.For( 0, Sensor.ColorFrameSource.FrameDescription.LengthInPixels / 2, _ParallelOptions, ( _Index ) => {
                    // See http://msdn.microsoft.com/en-us/library/windows/desktop/dd206750(v=vs.85).aspx
                    int _Y0 = _InputImage[( _Index << 2 ) + 0] - 16;
                    int _U  = _InputImage[( _Index << 2 ) + 1] - 128;
                    int _Y1 = _InputImage[( _Index << 2 ) + 2] - 16;
                    int _V  = _InputImage[( _Index << 2 ) + 3] - 128;

                    byte _R = ClipToByte( ( 298 * _Y0 + 409 * _V + 128 ) >> 8 );
                    byte _G = ClipToByte( ( 298 * _Y0 - 100 * _U - 208 * _V + 128 ) >> 8 );
                    byte _B = ClipToByte( ( 298 * _Y0 + 516 * _U + 128 ) >> 8 );

                    _OutputImage[( _Index << 3 ) + 0] = _B;
                    _OutputImage[( _Index << 3 ) + 1] = _G;
                    _OutputImage[( _Index << 3 ) + 2] = _R;
                    _OutputImage[( _Index << 3 ) + 3] = 0xFF; // A

                    _R = ClipToByte( ( 298 * _Y1 + 409 * _V + 128 ) >> 8 );
                    _G = ClipToByte( ( 298 * _Y1 - 100 * _U - 208 * _V + 128 ) >> 8 );
                    _B = ClipToByte( ( 298 * _Y1 + 516 * _U + 128 ) >> 8 );

                    _OutputImage[( _Index << 3 ) + 4] = _B;
                    _OutputImage[( _Index << 3 ) + 5] = _G;
                    _OutputImage[( _Index << 3 ) + 6] = _R;
                    _OutputImage[( _Index << 3 ) + 7] = 0xFF;
                } );

                Application.Current.Dispatcher.Invoke( () => {
                    BitmapToDisplay.WritePixels(
                        new Int32Rect( 0, 0, Sensor.ColorFrameSource.FrameDescription.Width, Sensor.ColorFrameSource.FrameDescription.Height ),
                        _OutputImage, BitmapToDisplay.BackBufferStride, 0 );
                } );
            } );
        }
    }

This seemed to yield the results I wanted, but there was still the occasional stutter. This led to what I realized was the second problem. There is a race condition between the UI Thread and me locking the WriteableBitmap so I can write the next frame. Again, I'm writing approximately 8MB to the back buffer. Then, I started thinking I could cheat. The Kinect is running at 30 frames per second. The WPF UI Thread runs at 60 frames per second. This made me not feel bad about exploiting the Composition Thread. I moved the bulk of the code from the FrameArrived handler into CompositionTarget.Rendering. Once I was in there, I polled for a frame, and rendered it if it existed. Since, in theory, I'm only killing the Composition Thread every other hit, I decided I was ok with this for cases where silky smooth video performance REALLY mattered. This code looked like this:

    private byte ClipToByte( int p_ValueToClip ) {
        return Convert.ToByte( ( p_ValueToClip < byte.MinValue ) ? byte.MinValue :
            ( ( p_ValueToClip > byte.MaxValue ) ? byte.MaxValue : p_ValueToClip ) );
    }

    void CompositionTarget_Rendering( object sender, EventArgs e ) {
        using( ColorFrame _ColorFrame = FrameReader.AcquireLatestFrame() ) {
            if( null == _ColorFrame )
                return;

            byte[] _InputImage = new byte[_ColorFrame.FrameDescription.LengthInPixels * _ColorFrame.FrameDescription.BytesPerPixel];
            byte[] _OutputImage = new byte[BitmapToDisplay.BackBufferStride * BitmapToDisplay.PixelHeight];
            _ColorFrame.CopyRawFrameDataToArray( _InputImage );

            ParallelOptions _ParallelOptions = new ParallelOptions();
            _ParallelOptions.MaxDegreeOfParallelism = 4;

            Parallel.For( 0, Sensor.ColorFrameSource.FrameDescription.LengthInPixels / 2, _ParallelOptions, ( _Index ) => {
                // See http://msdn.microsoft.com/en-us/library/windows/desktop/dd206750(v=vs.85).aspx
                int _Y0 = _InputImage[( _Index << 2 ) + 0] - 16;
                int _U  = _InputImage[( _Index << 2 ) + 1] - 128;
                int _Y1 = _InputImage[( _Index << 2 ) + 2] - 16;
                int _V  = _InputImage[( _Index << 2 ) + 3] - 128;

                byte _R = ClipToByte( ( 298 * _Y0 + 409 * _V + 128 ) >> 8 );
                byte _G = ClipToByte( ( 298 * _Y0 - 100 * _U - 208 * _V + 128 ) >> 8 );
                byte _B = ClipToByte( ( 298 * _Y0 + 516 * _U + 128 ) >> 8 );

                _OutputImage[( _Index << 3 ) + 0] = _B;
                _OutputImage[( _Index << 3 ) + 1] = _G;
                _OutputImage[( _Index << 3 ) + 2] = _R;
                _OutputImage[( _Index << 3 ) + 3] = 0xFF; // A

                _R = ClipToByte( ( 298 * _Y1 + 409 * _V + 128 ) >> 8 );
                _G = ClipToByte( ( 298 * _Y1 - 100 * _U - 208 * _V + 128 ) >> 8 );
                _B = ClipToByte( ( 298 * _Y1 + 516 * _U + 128 ) >> 8 );

                _OutputImage[( _Index << 3 ) + 4] = _B;
                _OutputImage[( _Index << 3 ) + 5] = _G;
                _OutputImage[( _Index << 3 ) + 6] = _R;
                _OutputImage[( _Index << 3 ) + 7] = 0xFF;
            } );

            BitmapToDisplay.WritePixels(
                new Int32Rect( 0, 0, Sensor.ColorFrameSource.FrameDescription.Width, Sensor.ColorFrameSource.FrameDescription.Height ),
                _OutputImage, BitmapToDisplay.BackBufferStride, 0 );
        }
    }
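One detail the post never shows is the hookup itself; presumably (an assumption here, not code from the original) the handler is subscribed once at startup:

    // Hypothetical wiring, assuming the FrameReader/BitmapToDisplay fields used above.
    public MainWindow() {
        InitializeComponent();
        // CompositionTarget.Rendering fires once per WPF render pass (roughly 60 Hz),
        // so the handler above ends up polling the 30 Hz Kinect stream on about
        // every other tick, as the post describes.
        CompositionTarget.Rendering += CompositionTarget_Rendering;
    }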

    Read the article

  • Stumbling Through: Visual Studio 2010 (Part IV)

    So finally we get to the fun part: the fruits of all of our middle-tier/back-end labors – the classes generated to interface with an XML data source that the previous posts were about – can now be presented quickly and easily to an end user. I think. We'll see. We'll be using a WPF window to display all of our various MFL information that we've collected in the two XML files, and we'll provide a means of adding, updating and deleting each of these entities using as little code as possible. Additionally, I would like to dig into the performance of this solution, as well as the flexibility of it if we were to modify the underlying XML schema. So first things first, let's create a WPF project and include our XML data in a data folder within. On the main window, we'll drag out the following controls:
- A combo box to contain all of the teams
- A list box to show the players of the selected team, along with add/delete player buttons
- A text box tied to the selected player's name, with a save button to save any changes made to the player name
- A combo box of all the available positions, tied to the currently selected player's position
- A data grid tied to the statistics of the currently selected player, with add/delete statistic buttons
This monstrosity of a form and its associated project will look like this (don't forget to reference the DataFoundation project from the Presentation project): To get to the visual data binding, as we learned in a previous post, you have to first make sure the project containing your bindable classes is compiled. Do so, and then open the Data Sources pane to add a reference to the Teams and Positions classes in the DataFoundation project. Why only Team and Position? Well, we will get to Players from Teams, and Statistics from Players, so there is no need to make an interface for them, as we'll see in a second. As for Positions, we'll need a way to bind the dropdown to ALL positions – they don't appear underneath any of the other classes – so we need to reference it directly. After adding these guys, expand every node in your Data Sources pane and see how the Team node allows you to drill into Players and then Statistics. This is why there was no need to bring in a reference to those classes for the UI we are designing. Now for the seriously hard work of binding all of our controls to the correct data sources. Drag the following items from the Data Sources pane to the specified control on the window design canvas:
- Team.Name > Teams combo box
- Team.Players.Name > Players list box
- Team.Players.Name > Player name text box
- Team.Players.Statistics > Statistics data grid
- Position.Name > Positions combo box
That is it! Really? Well, no, not really – there is one caveat here, in that the Positions combo box is not bound to the selected player's position. To tie them together, we will apply a binding to the position combo box's SelectedValue, pointing it at the current player's PositionId value. That should do the trick – now all we need to worry about is loading the actual data. Sadly, it appears as if we will need to drop to code in order to invoke our IO methods to load all teams and positions. At least Visual Studio kindly created the stubs for us to do so; ultimately, the code should look like this: Note the weirdness with the InitializeDataFiles call – that is my current means of telling an IO where to load the data for each of the entities. I haven't thought of a more intuitive way than that yet, but do note that all data is loaded from Teams.xml except for positions, which are loaded from Lookups.xml.
I think that may be all we need to do to at least load all of the data, so let's run it and see: Yay! All of our glorious data is being displayed! Er, wait, what's up with the position dropdown? Why is it red? Let's select the RB and see if everything updates: Crap, the position didn't update to reflect the selected player, but everything else did. Where did we go wrong in binding the position to the selected player? Thinking about it a bit and comparing it to how traditional data binding works, I realize that we never set the value member (or some similar property) to tell the control to join the Id of the source (positions) to the position Id of the player. I don't see a similar property to that on the combo box control, but I do see a property named SelectedValuePath that might be it, so I set it to Id and run the app again: Hey, all right! No red box around the positions combo box. Unfortunately, selecting the RB does not update the dropdown to point to Runningback. Hmmm. Now what could it be? Maybe the problem is that we are loading teams before we are loading positions, so when it binds position Id, all of the positions aren't loaded yet. I went to the code behind and switched things so position loads first, and no dice. Same result when I run. Why? WHY? Ok, ok, calm down, take a deep breath. Get something with caffeine or sugar (preferably both) and think rationally. Ok, gigantic chocolate chip cookie and a mountain dew chaser have never let me down in the past, so don't fail me now! Ah ha! Of course! I didn't even have to finish the mountain dew and I think I've got it: Data Context. By default, when setting up the selected value binding for the dropdown, the data context was list_team. I don't even know what the heck list_team is; we want it to be bound to our team players view source resource instead, like this: Running it now and selecting the various players: Done and done. Everything read and bound – thank you, caffeine and sugar! Oh, and thank you, Visual Studio 2010. Let's wire up some of those buttons now. There has got to be a better way to do this, but it works for now. What the add player button does is add a new player object to the currently selected team. Unfortunately, I couldn't get the new object to automatically show up in the players list (something about not using an observable collection – gotta look into this), so I just save the change immediately and reload the screen. Terrible, but it works. Let's go after something easier: the save button. By default, as we type in new text for the player's name, it is showing up in the list box as updated. Cool! Why couldn't my add new player logic do that? Anyway, the save button should be as simple as invoking MFL.IO.Save for the selected player, like this:

    MFL.IO.Save((MFL.Player)lbTeamPlayers.SelectedItem, true);

Surprisingly, that worked on the first try. Let's see if we get as lucky with the Delete player button:

    MFL.IO.Delete((MFL.Player)lbTeamPlayers.SelectedItem);
    Refresh();

Note the use of the Refresh method again – I can't seem to figure out why updates to the underlying data source are immediately reflected, but adds and deletes are not. That is a problem for another day, and again my hunch is that I should be binding to something more complex than IEnumerable (like an observable collection). Now that an example of the basic CRUD methods is wired up, I want to quickly investigate the performance of this beast.
I'm going to make a special button to add 30 teams, each with 50 players and 10 seasons worth of stats. If my math is right, that will end up with 15,000 rows of data – a pretty hefty amount for an XML file. The save of all this new data took a little over a minute, but that is acceptable, because we wouldn't typically be saving batches of 15k records, and the resulting XML file size is a little over a megabyte. Not huge, but big enough to see some read performance numbers – or so I thought. It reads this file and renders the first team in under a second. That is unbelievable, but we are lazy loading and the file really wasn't that big. I will increase it to 50 teams with 100 players and 20 seasons each – 100,000 rows. It took a year and a half to save all of that data, and resulted in an 8 megabyte file. Seriously, if you are loading XML files this large, get a freaking database! Despite this, it STILL takes under a second to load and render the first team, which is interesting, mostly because I thought that it was loading that entire 8 MB XML file behind the scenes. I have to say that I am quite impressed with the performance of the LINQ to XML approach, particularly since I took no efforts to optimize any of this code and was fairly new to the concept from the start. There might be some merit to this little project after all. Look out, SQL Server and Oracle – use XML files instead! Next up, I am going to completely pull the rug out from under the UI and change a number of entities in our model. How well will the code be regenerated? How much effort will be required to tie things back together in the UI?
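For what it's worth, the observable-collection hunch is right as far as general WPF binding goes. A minimal sketch (with hypothetical MFL-like names, not the article's actual generated classes) of the change that would make adds and deletes appear without the manual Refresh():

    using System.Collections.ObjectModel;

    // A sketch, not the generated MFL code: exposing children as an
    // ObservableCollection<T> raises collection-change notifications, so a
    // bound ListBox updates on Add/Remove without a save-and-reload.
    public class Player
    {
        public string Name { get; set; }
    }

    public class Team
    {
        public Team()
        {
            Players = new ObservableCollection<Player>();
        }

        public string Name { get; set; }
        public ObservableCollection<Player> Players { get; private set; }
    }

WPF watches for INotifyCollectionChanged on the bound source; a plain IEnumerable never raises it, which is exactly why in-place edits showed up (the item objects are shared) while adds and deletes did not.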

    Read the article

  • .htaccess not working (mod_rewrite)

    - by Mike Curry
    Edit: I am pretty sure my .htaccess file is NOT being executed, and the problem is NOT with my rewrite rules. I have not had any luck getting my .htaccess with mod_rewrite working. Basically, all I am trying to do is remove 'www' from "http://www.site.com" and "https://www.site.com". If there is anything I am missing (conf files, etc.), let me know and I will update this. I just can't see what's wrong here... I am using a 1&1 VPS III virtual private server... anyone ever have this issue? I am using Ubuntu 8.04 Server LTS. Here is my .htaccess file (located @ /var/www/site/trunk/html/):

Options +FollowSymLinks
RewriteEngine on
RewriteCond %{HTTP_HOST} ^www\.(.*) [NC]
RewriteRule (.*) //%1/$1 [L,R=301]

My mod_rewrite is enabled: the auto-generated symlink is there in mods-available, and /usr/lib/apache2/modules/ contains mod_rewrite.so.

root@s15348441:/etc/apache2/mods-available# more rewrite.load
LoadModule rewrite_module /usr/lib/apache2/modules/mod_rewrite.so

root@s15348441:/var/log# apache2ctl -t -D DUMP_MODULES
Loaded Modules:
 core_module (static)
 log_config_module (static)
 logio_module (static)
 mpm_prefork_module (static)
 http_module (static)
 so_module (static)
 alias_module (shared)
 auth_basic_module (shared)
 authn_file_module (shared)
 authz_default_module (shared)
 authz_groupfile_module (shared)
 authz_host_module (shared)
 authz_user_module (shared)
 autoindex_module (shared)
 cgi_module (shared)
 dir_module (shared)
 env_module (shared)
 mime_module (shared)
 negotiation_module (shared)
 php5_module (shared)
 rewrite_module (shared)
 setenvif_module (shared)
 ssl_module (shared)
 status_module (shared)
Syntax OK

My apache config files: apache2.conf

#
# Based upon the NCSA server configuration files originally by Rob McCool.
#
# This is the main Apache server configuration file. It contains the
# configuration directives that give the server its instructions.
# See http://httpd.apache.org/docs/2.2/ for detailed information about
# the directives.
#
# Do NOT simply read the instructions in here without understanding
# what they do. They're here only as hints or reminders. If you are unsure
# consult the online docs. You have been warned.
#
# The configuration directives are grouped into three basic sections:
# 1. Directives that control the operation of the Apache server process as a
# whole (the 'global environment').
# 2. Directives that define the parameters of the 'main' or 'default' server,
# which responds to requests that aren't handled by a virtual host.
# These directives also provide default values for the settings
# of all virtual hosts.
# 3. Settings for virtual hosts, which allow Web requests to be sent to
# different IP addresses or hostnames and have them handled by the
# same Apache server process.
#
# Configuration and logfile names: If the filenames you specify for many
# of the server's control files begin with "/" (or "drive:/" for Win32), the
# server will use that explicit path. If the filenames do *not* begin
# with "/", the value of ServerRoot is prepended -- so "/var/log/apache2/foo.log"
# with ServerRoot set to "" will be interpreted by the
# server as "//var/log/apache2/foo.log".
#
### Section 1: Global Environment
#
# The directives in this section affect the overall operation of Apache,
# such as the number of concurrent requests it can handle or where it
# can find its configuration files.
#
#
# ServerRoot: The top of the directory tree under which the server's
# configuration, error, and log files are kept.
#
# NOTE!
If you intend to place this on an NFS (or otherwise network) # mounted filesystem then please read the LockFile documentation (available # at <URL:http://httpd.apache.org/docs-2.1/mod/mpm_common.html#lockfile>); # you will save yourself a lot of trouble. # # Do NOT add a slash at the end of the directory path. # ServerRoot "/etc/apache2" # # The accept serialization lock file MUST BE STORED ON A LOCAL DISK. # #<IfModule !mpm_winnt.c> #<IfModule !mpm_netware.c> LockFile /var/lock/apache2/accept.lock #</IfModule> #</IfModule> # # PidFile: The file in which the server should record its process # identification number when it starts. # This needs to be set in /etc/apache2/envvars # PidFile ${APACHE_PID_FILE} # # Timeout: The number of seconds before receives and sends time out. # Timeout 300 # # KeepAlive: Whether or not to allow persistent connections (more than # one request per connection). Set to "Off" to deactivate. # KeepAlive On # # MaxKeepAliveRequests: The maximum number of requests to allow # during a persistent connection. Set to 0 to allow an unlimited amount. # We recommend you leave this number high, for maximum performance. # MaxKeepAliveRequests 100 # # KeepAliveTimeout: Number of seconds to wait for the next request from the # same client on the same connection. # KeepAliveTimeout 15 ## ## Server-Pool Size Regulation (MPM specific) ## # prefork MPM # StartServers: number of server processes to start # MinSpareServers: minimum number of server processes which are kept spare # MaxSpareServers: maximum number of server processes which are kept spare # MaxClients: maximum number of server processes allowed to start # MaxRequestsPerChild: maximum number of requests a server process serves <IfModule mpm_prefork_module> StartServers 5 MinSpareServers 5 MaxSpareServers 10 MaxClients 150 MaxRequestsPerChild 0 </IfModule> # worker MPM # StartServers: initial number of server processes to start # MaxClients: maximum number of simultaneous client connections # MinSpareThreads: minimum number of worker threads which are kept spare # MaxSpareThreads: maximum number of worker threads which are kept spare # ThreadsPerChild: constant number of worker threads in each server process # MaxRequestsPerChild: maximum number of requests a server process serves <IfModule mpm_worker_module> StartServers 2 MaxClients 150 MinSpareThreads 25 MaxSpareThreads 75 ThreadsPerChild 25 MaxRequestsPerChild 0 </IfModule> # These need to be set in /etc/apache2/envvars User ${APACHE_RUN_USER} Group ${APACHE_RUN_GROUP} # # AccessFileName: The name of the file to look for in each directory # for additional configuration directives. See also the AllowOverride # directive. # AccessFileName .htaccess # # The following lines prevent .htaccess and .htpasswd files from being # viewed by Web clients. # <Files ~ "^\.ht"> Order allow,deny Deny from all </Files> # # DefaultType is the default MIME type the server will use for a document # if it cannot otherwise determine one, such as from filename extensions. # If your server contains mostly text or HTML documents, "text/plain" is # a good value. If most of your content is binary, such as applications # or images, you may want to use "application/octet-stream" instead to # keep browsers from trying to display binary files as though they are # text. # DefaultType text/plain # # HostnameLookups: Log the names of clients or just their IP addresses # e.g., www.apache.org (on) or 204.62.129.132 (off). 
        HostnameLookups Off

        ErrorLog /var/log/apache2/error.log
        LogLevel warn

        # Include module configuration:
        Include /etc/apache2/mods-enabled/*.load
        Include /etc/apache2/mods-enabled/*.conf

        # Include all the user configurations:
        Include /etc/apache2/httpd.conf

        # Include ports listing:
        Include /etc/apache2/ports.conf

        LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
        LogFormat "%h %l %u %t \"%r\" %>s %b" common
        LogFormat "%{Referer}i -> %U" referer
        LogFormat "%{User-agent}i" agent

        ServerTokens Full
        ServerSignature On
        # Include generic snippets of statements:
        Include /etc/apache2/conf.d/

        # Include the virtual host configurations:
        Include /etc/apache2/sites-enabled/

    My default config file for www on Apache:

        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerAdmin [email protected]
            #SSLEnable
            #SSLVerifyClient none
            #SSLCertificateFile /usr/local/ssl/crt/public.crt
            #SSLCertificateKeyFile /usr/local/ssl/private/private.key
            DocumentRoot /var/www/site/trunk/html
            <Directory />
                Options FollowSymLinks
                AllowOverride all
            </Directory>
            <Directory /var/www/site/trunk/html>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride all
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog /var/log/apache2/error.log
            LogLevel warn
            CustomLog /var/log/apache2/access.log combined
            ServerSignature On
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>

    My ssl config file:

        NameVirtualHost *:443
        <VirtualHost *:443>
            ServerAdmin [email protected]
            #SSLEnable
            #SSLVerifyClient none
            #SSLCertificateFile /usr/local/ssl/crt/public.crt
            #SSLCertificateKeyFile /usr/local/ssl/private/private.key
            DocumentRoot /var/www/site/trunk/html
            <Directory />
                Options FollowSymLinks
                AllowOverride all
            </Directory>
            <Directory /var/www/site/trunk/html>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride all
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog /var/log/apache2/error.log
            LogLevel warn
            SSLEngine On
            SSLCertificateFile /usr/local/ssl/crt/public.crt
            SSLCertificateKeyFile /usr/local/ssl/private/private.key
            CustomLog /var/log/apache2/access.log combined
            ServerSignature On
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>

    My /etc/apache2/httpd.conf is blank. The directory /etc/apache2/conf.d contains only one file, charset:

        # Read the documentation before enabling AddDefaultCharset.
        # In general, it is only a good idea if you know that all your files
        # have this encoding. It will override any encoding given in the files
        # in meta http-equiv or xml encoding tags.
        #AddDefaultCharset UTF-8

    My Apache error.log:

        [Wed Jun 03 00:12:31 2009] [error] [client 216.168.43.234] client sent HTTP/1.1 request without hostname (see RFC2616 section 14.23): /w00tw00t.at.ISC.SANS.DFind:)
        [Wed Jun 03 05:03:51 2009] [error] [client 99.247.237.46] File does not exist: /var/www/site/trunk/html/favicon.ico
        [Wed Jun 03 05:03:54 2009] [error] [client 99.247.237.46] File does not exist: /var/www/site/trunk/html/favicon.ico
        [Wed Jun 03 05:13:48 2009] [error] [client 99.247.237.46] File does not exist: /var/www/site/trunk/html/favicon.ico
        [Wed Jun 03 05:13:51 2009] [error] [client 99.247.237.46] File does not exist: /var/www/site/trunk/html/favicon.ico
        [Wed Jun 03 05:13:54 2009] [error] [client 99.247.237.46] File does not exist: /var/www/site/trunk/html/favicon.ico
        [Wed Jun 03 05:13:57 2009] [error] [client 99.247.237.46] File does not exist: /var/www/site/trunk/html/favicon.ico
        [Wed Jun 03 05:17:28 2009] [error] [client 99.247.237.46] File does not exist: /var/www/site/trunk/html/favicon.ico
        [Wed Jun 03 05:26:23 2009] [notice] caught SIGWINCH, shutting down gracefully
        [Wed Jun 03 05:26:34 2009] [notice] Apache/2.2.8 (Ubuntu) PHP/5.2.4-2ubuntu5.6 with Suhosin-Patch mod_ssl/2.2.8 OpenSSL/0.9.8g configured -- resuming normal operations
        [Wed Jun 03 06:03:41 2009] [notice] caught SIGWINCH, shutting down gracefully
        [Wed Jun 03 06:03:51 2009] [notice] Apache/2.2.8 (Ubuntu) PHP/5.2.4-2ubuntu5.6 with Suhosin-Patch mod_ssl/2.2.8 OpenSSL/0.9.8g configured -- resuming normal operations
        [Wed Jun 03 06:25:07 2009] [notice] caught SIGWINCH, shutting down gracefully
        [Wed Jun 03 06:25:17 2009] [notice] Apache/2.2.8 (Ubuntu) PHP/5.2.4-2ubuntu5.6 with Suhosin-Patch mod_ssl/2.2.8 OpenSSL/0.9.8g configured -- resuming normal operations
        [Wed Jun 03 12:09:25 2009] [error] [client 61.139.105.163] File does not exist: /var/www/site/trunk/html/fastenv
        [Wed Jun 03 15:04:42 2009] [notice] Graceful restart requested, doing restart
        [Wed Jun 03 15:04:43 2009] [notice] Apache/2.2.8 (Ubuntu) PHP/5.2.4-2ubuntu5.6 with Suhosin-Patch mod_ssl/2.2.8 OpenSSL/0.9.8g configured -- resuming normal operations
        [Wed Jun 03 15:29:51 2009] [error] [client 99.247.237.46] File does not exist: /var/www/site/trunk/html/favicon.ico
        [Wed Jun 03 15:29:54 2009] [error] [client 99.247.237.46] File does not exist: /var/www/site/trunk/html/favicon.ico
        [Wed Jun 03 15:30:32 2009] [error] [client 99.247.237.46] File does not exist: /var/www/site/trunk/html/favicon.ico
        [Wed Jun 03 15:45:54 2009] [notice] caught SIGWINCH, shutting down gracefully
        [Wed Jun 03 15:46:05 2009] [notice] Apache/2.2.8 (Ubuntu) PHP/5.2.4-2ubuntu5.6 with Suhosin-Patch mod_ssl/2.2.8 OpenSSL/0.9.8g configured -- resuming normal operations
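    One way to take .htaccess out of the equation entirely, since AllowOverride all is already set in the vhost, is to put the redirect in the virtual host itself. A minimal sketch, with site.com standing in for the real domain:

        <VirtualHost *:80>
            ServerName site.com
            ServerAlias www.site.com
            # Strip the www prefix; %1 is the host captured by the RewriteCond.
            RewriteEngine On
            RewriteCond %{HTTP_HOST} ^www\.(.*)$ [NC]
            RewriteRule ^/?(.*)$ http://%1/$1 [L,R=301]
            # ... rest of the vhost as above ...
        </VirtualHost>

    If the redirect fires from the vhost but not from .htaccess, that confirms the per-directory file is never being read, for example because a different vhost is answering the request.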

    Read the article

  • Should one replace the usage of addJSONData of jqGrid with the usage of setGridParam() and trigger('reloadGrid')?

    - by Oleg
    Hi everybody who uses jqGrid! I am new on stackoverflow.com, and it seems to me that many of the people who use stackoverflow.com are not only people who have a problem which must be quickly solved. A lot of people read stackoverflow.com to look at well-known things from the other side. Sometimes, perhaps, the reason is self-training (to stay in good form) by solving other people's problems. My question is for all these guys who do not just want their own problem solved.

    I recently wrote an answer to the question "jqGrid display default 'loading' message when updating a table / on custom update". While writing the answer I thought: why does he use the addJSONData() function to refresh the data in the grid instead of changing the URL with setGridParam() and reloading the jqGrid data with trigger('reloadGrid')? At the beginning I wanted to recommend using 'reloadGrid', but after thinking about it I understood that I am not quite sure what the best way is. At least I can't explain in two sentences why I prefer the second way. So I decided that it could be an interesting subject for a discussion.

    So, to be exact: we have a typical situation. We have a web page with at least one jqGrid and some other controls, like combo-boxes (selects), checkboxes etc., which give the user the possibility to change the scope of information displayed in a jqGrid. Typically we define an event handler like jQuery("#selector").change(myRefresh).keyup(myKeyRefresh), and we need to reload the jqGrid contents based on the user's choices. After reading and analyzing the information from the additional user input, we can refresh the jqGrid contents in at least two ways:

    1. Make the call to $.ajax() manually, and then inside the success or complete handler of the $.ajax call use jQuery.parseJSON() (or eval), and then call the addJSONData function of jqGrid. I found a lot of examples on stackoverflow.com which use addJSONData.

    2. Update the url of the jqGrid based on the user's input, reset the current page number to 1, and optionally change the caption of the grid. All of this can be done with the setGridParam(), and optionally setCaption(), jqGrid methods. At the end one calls the trigger('reloadGrid') method of the grid. To construct the url, by the way, I mostly use the jQuery.param function to be sure that all url parameters are packed correctly with respect to encodeURIComponent.

    I want us to discuss the advantages and disadvantages of both ways. I currently use the second way, so I will start with its advantages. One could say to me: "I call an existing Web Service, convert the received data to the jqGrid format and call addJSONData. This is the reason why I use the addJSONData method!" OK, I choose another way: jqGrid can call the Web Service directly and fill the results into the grid. There are a lot of jqGrid options which allow you to customize this process. First of all, one can delete or rename any standard parameter sent to the server with the prmNames option of jqGrid, or add any additional parameters with the postData option (see http://www.trirand.com/jqgridwiki/doku.php?id=wiki:options). One can modify all constructed parameters immediately before jqGrid makes the corresponding $.ajax request by defining a serializeGridData() function (one more jqGrid option). More than that, one can change every $.ajax parameter by setting the ajaxGridOptions option of jqGrid. I use ajaxGridOptions: {contentType: "application/json"}, for example, as a general setting in $.jgrid.defaults. The ajaxGridOptions option is very powerful.
    With ajaxGridOptions one can redefine any parameter of the $.ajax request sent by jqGrid, like the error, complete and beforeSend events. I see it as potentially interesting to define the dataFilter event, to be able to make any modification to the raw data returned from the server. One more argument for the trigger('reloadGrid') way is the blocking of the jqGrid during ajax request processing. Mostly I use the parameter loadui: 'block' to block the jqGrid while the JSON request is sent to the server. With the jQuery blockUI plugin http://malsup.com/jquery/block/ one can block more of the web page than the grid only. To do this one can call jQuery('#main').block({ message: '<h1>Die Daten werden vom Server geladen...</h1>' }); before calling the trigger('reloadGrid') method, and jQuery('#main').unblock() inside the loadComplete and loadError functions. The loadui option could be set to 'disable' in this case. So I don't see why the addJSONData() function should be used. Can somebody who uses the addJSONData() function explain its advantages to me? Should one replace the usage of addJSONData in jqGrid with setGridParam() and trigger('reloadGrid')? I am open to the discussion.
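    For concreteness, a minimal sketch of the second way, assuming a grid with id "list"; the url and filter parameter here are made up for the example:

        jQuery("#selector").change(function () {
            // Repoint the grid at a new URL, reset to page 1, then reload it.
            jQuery("#list").jqGrid("setGridParam", {
                url: "griddata.json?" + jQuery.param({ filter: jQuery(this).val() }),
                page: 1
            }).trigger("reloadGrid");
        });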

    Read the article

  • Android "java.lang.NoClassDefFoundError" exception

    - by wpbnewbie
    Hello, I have an Android web service client application. I am trying to use the standard Java WS library support. I have stripped the application down to the minimum, as shown below, to try to isolate the issue. Here is the application:

        package fau.edu.cse;

        import android.app.Activity;
        import android.os.Bundle;
        import android.widget.TextView;

        public class ClassMap extends Activity {
            TextView displayObject;

            @Override
            public void onCreate(Bundle savedInstanceState) {
                // Build screen display string
                String screenString = "Program Started\n\n";

                // Set up the display
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main);
                displayObject = (TextView) findViewById(R.id.TextView01);
                screenString = screenString + "Inflate Display\n\n";

                try {
                    // Set up SOAP service
                    TempConvertSoap service = new TempConvert().getTempConvertSoap();

                    // Successful SOAP object build
                    screenString = screenString + "SOAP Object Correctly Built\n\n";

                    // Display response
                    displayObject.setText(screenString);
                } catch (Throwable e) {
                    e.printStackTrace();
                    displayObject.setText(screenString + "Try Error...\n" + e.toString());
                }
            }
        }

    The classes TempConvert and TempConvertSoap are in the package fau.edu.cse. I have included the Java SE javax libraries in the Java build path. When the Android application tries to create the "service" object I get a java.lang.NoClassDefFoundError exception. The two classes TempConvertSoap and TempConvert were generated by wsimport. I am also using several libraries from javax.jws.* and javax.xml.ws.*. Of course the application compiles without error and loads correctly. I know the application is running because my try/catch routine successfully catches the error and prints it out. Here is what logcat says (notice that it cannot find TempConvert):

        06-12 22:58:39.340: WARN/dalvikvm(200): Unable to resolve superclass of Lfau/edu/cse/TempConvert; (53)
        06-12 22:58:39.340: WARN/dalvikvm(200): Link of class 'Lfau/edu/cse/TempConvert;' failed
        06-12 22:58:39.340: ERROR/dalvikvm(200): Could not find class 'fau.edu.cse.TempConvert', referenced from method fau.edu.cse.ClassMap.onCreate
        06-12 22:58:39.340: WARN/dalvikvm(200): VFY: unable to resolve new-instance 21 (Lfau/edu/cse/TempConvert;) in Lfau/edu/cse/ClassMap;
        06-12 22:58:39.340: DEBUG/dalvikvm(200): VFY: replacing opcode 0x22 at 0x0027
        06-12 22:58:39.340: DEBUG/dalvikvm(200): Making a copy of Lfau/edu/cse/ClassMap;.onCreate code (252 bytes)
        06-12 22:58:39.490: DEBUG/dalvikvm(30): GC freed 2 objects / 48 bytes in 273ms
        06-12 22:58:39.530: DEBUG/ddm-heap(119): Got feature list request
        06-12 22:58:39.620: WARN/Resources(200): Converting to string: TypedValue{t=0x12/d=0x0 a=2 r=0x7f050000}
        06-12 22:58:39.620: WARN/System.err(200): java.lang.NoClassDefFoundError: fau.edu.cse.TempConvert
        06-12 22:58:39.830: WARN/System.err(200): at fau.edu.cse.ClassMap.onCreate(ClassMap.java:26)
        06-12 22:58:39.830: WARN/System.err(200): at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1047)
        06-12 22:58:39.830: WARN/System.err(200): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2459)
        06-12 22:58:39.830: WARN/System.err(200): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2512)
        06-12 22:58:39.830: WARN/System.err(200): at android.app.ActivityThread.access$2200(ActivityThread.java:119)
        06-12 22:58:39.880: WARN/System.err(200): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1863)
        06-12 22:58:39.880: WARN/System.err(200): at android.os.Handler.dispatchMessage(Handler.java:99)
        06-12 22:58:39.880: WARN/System.err(200): at android.os.Looper.loop(Looper.java:123)
        06-12 22:58:39.880: WARN/System.err(200): at android.app.ActivityThread.main(ActivityThread.java:4363)
        06-12 22:58:39.880: WARN/System.err(200): at java.lang.reflect.Method.invokeNative(Native Method)
        06-12 22:58:39.880: WARN/System.err(200): at java.lang.reflect.Method.invoke(Method.java:521)
        06-12 22:58:39.880: WARN/System.err(200): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:860)
        06-12 22:58:39.880: WARN/System.err(200): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:618)
        06-12 22:58:39.880: WARN/System.err(200): at dalvik.system.NativeStart.main(Native Method)
        ...bla...bla...bla

    It would be great if someone just had an answer; however, I am mainly looking for debug strategies. I have taken this same application and created a standard Java client application, and it works fine, of course with all of the Android stuff taken out. What would be a good debug strategy? What methods and techniques would you recommend I use to isolate the problem? I am thinking that there is some sort of Dalvik VM incompatibility that is causing the TempConvert class not to load. TempConvert is an interface class that references a lot of very tricky web service attributes. Any help with debug strategies would be gladly appreciated. Thanks for the help, Steve
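    For what it's worth, the "Unable to resolve superclass" line suggests that the superclass of the wsimport-generated TempConvert (javax.xml.ws.Service) is simply not present at runtime: the JAX-WS stack belongs to desktop Java, not to Android's Dalvik runtime, so such classes compile against the added jars but fail to link on the device. A common workaround is a lightweight SOAP library such as ksoap2-android; below is a minimal sketch, where the namespace, method name and endpoint of the TempConvert service are assumptions to be checked against the real WSDL:

        import org.ksoap2.SoapEnvelope;
        import org.ksoap2.serialization.SoapObject;
        import org.ksoap2.serialization.SoapSerializationEnvelope;
        import org.ksoap2.transport.HttpTransportSE;

        public final class TempConvertClient {
            public static String celsiusToFahrenheit(String celsius) throws Exception {
                // Assumed service details; adjust to the real WSDL.
                final String ns = "http://www.w3schools.com/webservices/";
                final String method = "CelsiusToFahrenheit";
                final String url = "http://www.w3schools.com/webservices/tempconvert.asmx";

                SoapObject request = new SoapObject(ns, method);
                request.addProperty("Celsius", celsius);

                SoapSerializationEnvelope envelope = new SoapSerializationEnvelope(SoapEnvelope.VER11);
                envelope.dotNet = true; // .asmx services expect this flag
                envelope.setOutputSoapObject(request);

                new HttpTransportSE(url).call(ns + method, envelope);
                return envelope.getResponse().toString();
            }
        }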

    Read the article

  • Help with Design for Vacation Tracking System (C#/.NET/Access/WebServices/SOA/Excel) [closed]

    - by Aaronaught
    I have been tasked with developing a system for tracking our company's paid time off (vacation, sick days, etc.). At the moment we are using an Excel spreadsheet on a shared network drive, and it works pretty well, but we are concerned that we won't be able to "trust" employees forever, and sometimes we run into locking issues when two people try to open the spreadsheet at once. So we are trying to build something a little more robust. I would like some input on this design in terms of maintainability, scalability, extensibility, etc. It's a pretty simple workflow we need to represent right now. [workflow diagram] I started with a basic MS Access schema like this:

        Employees (EmpID int, EmpName varchar(50), AllowedDays int)
        Vacations (VacationID int, EmpID int, BeginDate datetime, EndDate datetime)

    But we don't want to spend a lot of time building a schema and database like this and then have to change it later, so I think I am going to go with something that will be easier to expand through configuration. Right now the vacation table has this schema:

        Vacations (VacationID int, PropName varchar(50), PropValue varchar(50))

    And the table will be populated with data like this:

        VacationID | PropName    | PropValue
        -----------+-------------+------------------
        1          | EmpID       | 4
        1          | EmpName     | James Jones
        1          | Reason      | Vacation
        1          | BeginDate   | 2/24/2010
        1          | EndDate     | 2/30/2010
        1          | Destination | Spectate Swamp
        2          | ...         | ...

    I think this is a pretty good, extensible design; we can easily add new properties to the vacation, like the destination or maybe approval status, etc. I wasn't too sure how to go about managing the database of valid properties. I thought of putting them in a separate PropNames table, but it gets complicated to manage all the different data types, and people say that you shouldn't put CLR type names into a SQL database, so I decided to use XML instead. Here is the schema:

        <VacationProperties>
          <PropertyNames>EmpID,EmpName,Reason,BeginDate,EndDate,Destination</PropertyNames>
          <PropertyTypes>System.Int32,System.String,System.String,System.DateTime,System.DateTime,System.String</PropertyTypes>
          <PropertiesRequired>true,true,false,true,true,false</PropertiesRequired>
        </VacationProperties>

    I might need more fields than that; I'm not completely sure.
    I'm parsing the XML like this (I would like some feedback on the parsing code):

        string xml = File.ReadAllText("properties.xml");
        Match m = Regex.Match(xml, "<(PropertyNames)>(.*?)</PropertyNames>");
        string[] pn = m.Value.Split(',');
        // do the same for PropertyTypes, PropertiesRequired

    Then I use the following code to persist configuration changes to the database:

        string sql = "DROP TABLE VacationProperties";
        sql = sql + " CREATE TABLE VacationProperties ";
        sql = sql + "(PropertyName varchar(100), PropertyType varchar(100), ";
        sql = sql + "IsRequired varchar(100))";
        for (int i = 0; i < pn.Length; i++)
        {
            sql = sql + " INSERT VacationProperties VALUES (" + pn[i] + "," + pt[i] + "," + pv[i] + ")";
        }
        // GlobalConnection is a singleton
        new SqlCommand(sql, GlobalConnection.Instance).ExecuteReader();

    So far so good, but after a few days of this I realized that a lot of it was just a more specific kind of generic workflow which could be further abstracted, and instead of writing all of this boilerplate plumbing code I could just come up with a workflow, plug it into a workflow engine like Windows Workflow Foundation, and have the users configure it. In order to support routing these configurations through the workflow system, it seemed natural to implement generic XML Web Services for this instead of just using an XML file as above. I've used this code to implement the Web Services:

        public class VacationConfigurationService : WebService
        {
            [WebMethod]
            public void UpdateConfiguration(string xml)
            {
                // Above code goes here
            }
        }

    Which was pretty easy, although I'm still working on a way to validate that XML against some kind of schema, as there's no error-checking yet. I also created a few different services for other operations, like VacationSubmissionService, VacationReportService, VacationDataService, VacationAuthenticationService, etc. The whole Service Oriented Architecture looks like this: [SOA diagram]

    And because the workflow itself might change, I have been working on a way to integrate the WF workflow system with MS Visio, which everybody at the office already knows how to use, so they could make changes pretty easily. We have a diagram that looks like the following (it's kind of hard to read, but the main items are Activities, Authenticators, Validators, Transformers, Processors, and Data Connections; they're all analogous to the services in the SOA diagram above). [Visio diagram]

    The requirements for this system are as follows (note: I don't control these, they were given to me by management):

    - The main workflow must interface with the Excel spreadsheet, probably through VBA macros (to ease the transition to the new system).
    - Alerts should integrate with MS Outlook, Lotus Notes, and SMS (text messages). We also want to interface it with the company voice mail system, but that is not a "hard" requirement.
    - Performance requirements: must handle 250,000 transactions per second; should be able to handle up to 20,000 employees (right now we have 3); 99.99% uptime ("four nines") expected.
    - Must be secure against outside hacking, but users cannot be required to enter a username/password.
    - Platforms: must support Windows XP/Vista/7, Linux, iPhone, Blackberry, DOS 2.0, VAX, IRIX, PDP-11, Apple IIc.
    - Time to complete: 6 to 8 weeks.

    My questions are:

    1. Is this a good design for the system so far? Am I using all of the recommended best practices for these technologies?
    2. How do I integrate the Visio diagram above with Windows Workflow Foundation to call the ConfigurationService and persist workflow changes?
    3. Am I missing any important components?
    4. Will this be extensible enough to support any scenario via end-user configuration?
    5. Will the system scale to the above performance requirements? Will we need any expensive hardware to run it?
    6. Are there any "gotchas" I should know about with respect to cross-platform compatibility? For example, would it be difficult to convert this to an iPhone app?
    7. How long would you expect this to take? (We've dedicated 1 week for testing, so I'm thinking maybe 5 weeks?)

    Many thanks for your advice, Aaron
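    Since feedback on the parsing code was requested: the Regex approach is fragile (it breaks on attributes, whitespace or comments), and a proper XML API is no more code. A minimal sketch with LINQ to XML, assuming the <VacationProperties> layout shown above:

        using System;
        using System.Linq;
        using System.Xml.Linq;

        class VacationPropertiesReader
        {
            static void Main()
            {
                // Read each comma-separated list out of properties.xml.
                XDocument doc = XDocument.Load("properties.xml");
                string[] names = ((string)doc.Root.Element("PropertyNames")).Split(',');
                string[] types = ((string)doc.Root.Element("PropertyTypes")).Split(',');
                bool[] required = ((string)doc.Root.Element("PropertiesRequired"))
                    .Split(',').Select(bool.Parse).ToArray();

                Console.WriteLine("{0} properties, {1} required", names.Length, required.Count(r => r));
            }
        }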

    Read the article

  • SQL Server 2005: Improving performance for thousands of insert requests. logout-login time = 120ms.

    - by Rad
    Can somebody shed some light on how SQL Server 2005 deals with many requests issued by a client using ADO.NET 2.0? Below is the shortened output of SQL Trace. I can see that connection pooling is working (I believe there is only one connection being pooled). What is not clear to me is why we have so many sp_reset_connection calls, i.e. a series of Audit Login, SQL:BatchStarting, RPC:Starting and Audit Logout for each iteration of the for loop below. I can see constant switching between the tempdb and master databases, which leads me to conclude that we lose the context when the next connection is fetched from the pool based on the ConnectionString argument. I can see that every 15ms I get 100-200 login/logout pairs per second (reported at the same time by Profiler), and then after 15ms I again get a series of 100-200 logins/logouts per second. I need clarification on how this might affect much more complex insert queries in a production environment. I use Enterprise Library 2006; the code is compiled with VS 2005, and it is a console application that parses a flat file with tens of thousands of rows, grouping parent-child rows. It runs on an application server and runs two stored procedures on a remote SQL Server 2005: one inserts a parent record and retrieves the Identity value, and using that value it calls the second stored procedure 1, 2 or multiple times (sometimes several thousand), inserting child records. The child table has close to 10 million records, with 5-10 indexes, some of them covering non-clustered. There is a pretty complex insert trigger that copies each inserted detail record to an archive table. All in all I only get 7 inserts per second, which means it can take 2-4 hours for 50 thousand records. When I run Profiler on the test server (which is almost equivalent to the production server), I can see about 120ms between the Audit Logout and Audit Login trace entries, which would almost give me the chance to insert about 8 records. So my question is whether there is some way to improve the inserting of records, since the company loads 100 thousand records and does daily planning, and has an SLA to fulfil client requests coming in as flat-file orders; some big files of 10 thousand rows have to be processed (imported) quickly. 4 hours to import 60 thousand records should be reduced to 30 minutes. I was thinking of using the BatchSize property of a DataAdapter to send multiple stored procedure calls, SQL bulk inserts to batch multiple inserts from a DataReader or DataTable, or SSIS fast load. But I don't know how to properly analyze re-indexing and statistics population, and maybe that has to take some time to finish. What is worse is that the company uses the biggest table for reporting and other online processing, and the indexes cannot be dropped. I manage transactions manually by setting a field to a value and doing a transactional update, changing that value to a new value that other applications use to find committed rows. Please advise how to approach this problem. For now I am trying to use staging tables with minimal logging, in a separate database and with no indexes, and I will try to do batched (massive) parent-child inserts. I believe the production DB uses the simple recovery model, but it could be full recovery. If the DB user being used by my .NET console application has the bulkadmin role, does that mean its bulk inserts are minimally logged? I understand that when a table has a clustered index and many non-clustered indexes, inserts are still logged for each row. Connection pooling is working, but with many logins/logouts. Why?
        for (int i = 1; i <= 10000; i++)
        {
            using (SqlConnection conn = new SqlConnection("server=(local);database=master;integrated security=sspi;"))
            {
                conn.Open();
                using (SqlCommand cmd = conn.CreateCommand())
                {
                    cmd.CommandText = "use tempdb";
                    cmd.ExecuteNonQuery();
                }
            }
        }

    SQL Server Profiler trace:

        Audit Login                           master  2010-01-13 23:18:45.337  1 - Nonpooled
        SQL:BatchStarting  use tempdb         master  2010-01-13 23:18:45.337
        RPC:Starting       exec sp_reset_conn tempdb  2010-01-13 23:18:45.337
        Audit Logout                          tempdb  2010-01-13 23:18:45.337  2 - Pooled
        Audit Login  -- network protocol      master  2010-01-13 23:18:45.383  2 - Pooled
        SQL:BatchStarting  use tempdb         master  2010-01-13 23:18:45.383
        RPC:Starting       exec sp_reset_conn tempdb  2010-01-13 23:18:45.383
        Audit Logout                          tempdb  2010-01-13 23:18:45.383  2 - Pooled
        Audit Login  -- network protocol      master  2010-01-13 23:18:45.383  2 - Pooled
        SQL:BatchStarting  use tempdb         master  2010-01-13 23:18:45.383
        RPC:Starting       exec sp_reset_conn tempdb  2010-01-13 23:18:45.383
        Audit Logout                          tempdb  2010-01-13 23:18:45.383  2 - Pooled
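    Since SqlBulkCopy was mentioned as an option: a minimal sketch of pushing the child rows into a staging table in one bulk operation instead of one RPC per row (the staging table name and the DataTable layout are hypothetical):

        using System.Data;
        using System.Data.SqlClient;

        static void BulkLoadDetails(DataTable details, string connectionString)
        {
            using (SqlConnection conn = new SqlConnection(connectionString))
            {
                conn.Open();
                using (SqlBulkCopy bulk = new SqlBulkCopy(conn))
                {
                    bulk.DestinationTableName = "dbo.OrderDetailStaging"; // hypothetical staging table
                    bulk.BatchSize = 5000;      // rows per round trip
                    bulk.BulkCopyTimeout = 0;   // no timeout for a large load
                    bulk.WriteToServer(details);
                }
            }
        }

    A set-based INSERT ... SELECT from the staging table into the real table then pays the index and trigger cost per batch rather than per row.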

    Read the article

  • Calling AuditQuerySystemPolicy() (advapi32.dll) from C# returns "The parameter is incorrect"

    - by JCCyC
    The sequence is as follows:

    1. Open a policy handle with LsaOpenPolicy() (not shown).
    2. Call LsaQueryInformationPolicy() to get the number of categories.
    3. For each category: call AuditLookupCategoryGuidFromCategoryId() to turn the enum value into a GUID; call AuditEnumerateSubCategories() to get a list of the GUIDs of all subcategories; call AuditQuerySystemPolicy() to get the audit policies for the subcategories.

    All of these work and return expected, sensible values except the last: calling AuditQuerySystemPolicy() gets me a "The parameter is incorrect" error. I'm thinking there must be some subtle unmarshaling problem. I'm probably misinterpreting what exactly AuditEnumerateSubCategories() returns, but I'm stumped. You'll see (commented out) that I tried to dereference the return pointer from AuditEnumerateSubCategories() as a pointer; doing or not doing that gives the same result. Code:

        #region LSA types
        public enum POLICY_INFORMATION_CLASS
        {
            PolicyAuditLogInformation = 1,
            PolicyAuditEventsInformation,
            PolicyPrimaryDomainInformation,
            PolicyPdAccountInformation,
            PolicyAccountDomainInformation,
            PolicyLsaServerRoleInformation,
            PolicyReplicaSourceInformation,
            PolicyDefaultQuotaInformation,
            PolicyModificationInformation,
            PolicyAuditFullSetInformation,
            PolicyAuditFullQueryInformation,
            PolicyDnsDomainInformation
        }

        public enum POLICY_AUDIT_EVENT_TYPE
        {
            AuditCategorySystem,
            AuditCategoryLogon,
            AuditCategoryObjectAccess,
            AuditCategoryPrivilegeUse,
            AuditCategoryDetailedTracking,
            AuditCategoryPolicyChange,
            AuditCategoryAccountManagement,
            AuditCategoryDirectoryServiceAccess,
            AuditCategoryAccountLogon
        }

        [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
        public struct POLICY_AUDIT_EVENTS_INFO
        {
            public bool AuditingMode;
            public IntPtr EventAuditingOptions;
            public UInt32 MaximumAuditEventCount;
        }

        [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
        public struct GUID
        {
            public UInt32 Data1;
            public UInt16 Data2;
            public UInt16 Data3;
            public Byte Data4a;
            public Byte Data4b;
            public Byte Data4c;
            public Byte Data4d;
            public Byte Data4e;
            public Byte Data4f;
            public Byte Data4g;
            public Byte Data4h;

            public override string ToString()
            {
                return Data1.ToString("x8") + "-" + Data2.ToString("x4") + "-" + Data3.ToString("x4") + "-" +
                       Data4a.ToString("x2") + Data4b.ToString("x2") + "-" +
                       Data4c.ToString("x2") + Data4d.ToString("x2") + Data4e.ToString("x2") +
                       Data4f.ToString("x2") + Data4g.ToString("x2") + Data4h.ToString("x2");
            }
        }
        #endregion

        #region LSA Imports
        [DllImport("kernel32.dll")]
        extern static int GetLastError();

        [DllImport("advapi32.dll", CharSet = CharSet.Unicode, PreserveSig = true)]
        public static extern UInt32 LsaNtStatusToWinError(long Status);

        [DllImport("advapi32.dll", CharSet = CharSet.Unicode, PreserveSig = true)]
        public static extern long LsaOpenPolicy(
            ref LSA_UNICODE_STRING SystemName,
            ref LSA_OBJECT_ATTRIBUTES ObjectAttributes,
            Int32 DesiredAccess,
            out IntPtr PolicyHandle);

        [DllImport("advapi32.dll", CharSet = CharSet.Unicode, PreserveSig = true)]
        public static extern long LsaClose(IntPtr PolicyHandle);

        [DllImport("advapi32.dll", CharSet = CharSet.Unicode, PreserveSig = true)]
        public static extern long LsaFreeMemory(IntPtr Buffer);

        [DllImport("advapi32.dll", CharSet = CharSet.Unicode, PreserveSig = true)]
        public static extern void AuditFree(IntPtr Buffer);

        [DllImport("advapi32.dll", SetLastError = true, PreserveSig = true)]
        public static extern long LsaQueryInformationPolicy(
            IntPtr PolicyHandle,
            POLICY_INFORMATION_CLASS InformationClass,
            out IntPtr Buffer);
[DllImport("advapi32.dll", SetLastError = true, PreserveSig = true)] public static extern bool AuditLookupCategoryGuidFromCategoryId( POLICY_AUDIT_EVENT_TYPE AuditCategoryId, IntPtr pAuditCategoryGuid); [DllImport("advapi32.dll", SetLastError = true, PreserveSig = true)] public static extern bool AuditEnumerateSubCategories( IntPtr pAuditCategoryGuid, bool bRetrieveAllSubCategories, out IntPtr ppAuditSubCategoriesArray, out ulong pCountReturned); [DllImport("advapi32.dll", SetLastError = true, PreserveSig = true)] public static extern bool AuditQuerySystemPolicy( IntPtr pSubCategoryGuids, ulong PolicyCount, out IntPtr ppAuditPolicy); #endregion Dictionary<string, UInt32> retList = new Dictionary<string, UInt32>(); long lretVal; uint retVal; IntPtr pAuditEventsInfo; lretVal = LsaQueryInformationPolicy(policyHandle, POLICY_INFORMATION_CLASS.PolicyAuditEventsInformation, out pAuditEventsInfo); retVal = LsaNtStatusToWinError(lretVal); if (retVal != 0) { LsaClose(policyHandle); throw new System.ComponentModel.Win32Exception((int)retVal); } POLICY_AUDIT_EVENTS_INFO myAuditEventsInfo = new POLICY_AUDIT_EVENTS_INFO(); myAuditEventsInfo = (POLICY_AUDIT_EVENTS_INFO)Marshal.PtrToStructure(pAuditEventsInfo, myAuditEventsInfo.GetType()); IntPtr subCats = IntPtr.Zero; ulong nSubCats = 0; for (int audCat = 0; audCat < myAuditEventsInfo.MaximumAuditEventCount; audCat++) { GUID audCatGuid = new GUID(); if (!AuditLookupCategoryGuidFromCategoryId((POLICY_AUDIT_EVENT_TYPE)audCat, new IntPtr(&audCatGuid))) { int causingError = GetLastError(); LsaFreeMemory(pAuditEventsInfo); LsaClose(policyHandle); throw new System.ComponentModel.Win32Exception(causingError); } if (!AuditEnumerateSubCategories(new IntPtr(&audCatGuid), true, out subCats, out nSubCats)) { int causingError = GetLastError(); LsaFreeMemory(pAuditEventsInfo); LsaClose(policyHandle); throw new System.ComponentModel.Win32Exception(causingError); } // Dereference the first pointer-to-pointer to point to the first subcategory // subCats = (IntPtr)Marshal.PtrToStructure(subCats, subCats.GetType()); if (nSubCats > 0) { IntPtr audPolicies = IntPtr.Zero; if (!AuditQuerySystemPolicy(subCats, nSubCats, out audPolicies)) { int causingError = GetLastError(); if (subCats != IntPtr.Zero) AuditFree(subCats); LsaFreeMemory(pAuditEventsInfo); LsaClose(policyHandle); throw new System.ComponentModel.Win32Exception(causingError); } AUDIT_POLICY_INFORMATION myAudPol = new AUDIT_POLICY_INFORMATION(); for (ulong audSubCat = 0; audSubCat < nSubCats; audSubCat++) { // Process audPolicies[audSubCat], turn GUIDs into names, fill retList. 
                    // http://msdn.microsoft.com/en-us/library/aa373931%28VS.85%29.aspx
                    // http://msdn.microsoft.com/en-us/library/bb648638%28VS.85%29.aspx
                    IntPtr itemAddr = IntPtr.Zero;
                    IntPtr itemAddrAddr = new IntPtr(audPolicies.ToInt64() + (long)(audSubCat * (ulong)Marshal.SizeOf(itemAddr)));
                    itemAddr = (IntPtr)Marshal.PtrToStructure(itemAddrAddr, itemAddr.GetType());
                    myAudPol = (AUDIT_POLICY_INFORMATION)Marshal.PtrToStructure(itemAddr, myAudPol.GetType());
                    retList[myAudPol.AuditSubCategoryGuid.ToString()] = myAudPol.AuditingInformation;
                }

                if (audPolicies != IntPtr.Zero) AuditFree(audPolicies);
            }

            if (subCats != IntPtr.Zero) AuditFree(subCats);
            subCats = IntPtr.Zero;
            nSubCats = 0;
        }

        lretVal = LsaFreeMemory(pAuditEventsInfo);
        retVal = LsaNtStatusToWinError(lretVal);
        if (retVal != 0) throw new System.ComponentModel.Win32Exception((int)retVal);

        lretVal = LsaClose(policyHandle);
        retVal = LsaNtStatusToWinError(lretVal);
        if (retVal != 0) throw new System.ComponentModel.Win32Exception((int)retVal);
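    One thing worth ruling out first: the native counts here are ULONG, which is 32 bits, while the declarations above marshal them as C# ulong (64 bits), and these advapi32 functions return a 1-byte BOOLEAN rather than a 4-byte BOOL. Mismatched widths like this are a classic source of ERROR_INVALID_PARAMETER. A sketch of the two signatures with those widths corrected:

        [DllImport("advapi32.dll", SetLastError = true)]
        [return: MarshalAs(UnmanagedType.I1)]           // native BOOLEAN is 1 byte
        public static extern bool AuditEnumerateSubCategories(
            IntPtr pAuditCategoryGuid,
            [MarshalAs(UnmanagedType.I1)] bool bRetrieveAllSubCategories,
            out IntPtr ppAuditSubCategoriesArray,
            out uint pCountReturned);                   // PULONG -> uint, not ulong

        [DllImport("advapi32.dll", SetLastError = true)]
        [return: MarshalAs(UnmanagedType.I1)]
        public static extern bool AuditQuerySystemPolicy(
            IntPtr pSubCategoryGuids,
            uint PolicyCount,                           // ULONG -> uint, not ulong
            out IntPtr ppAuditPolicy);

    It is also safer to read the failure code with Marshal.GetLastWin32Error() after a SetLastError = true import than to P/Invoke kernel32's GetLastError directly, since the runtime can overwrite the thread's last-error value between the two calls.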

    Read the article

  • WPF Custom Buttons below ListBox Items

    - by Ryan
    WPF experts: I am trying to add buttons below my custom ListBox and also have the scroll bar go to the bottom of the control. Only the items should move, not the buttons. I was hoping for some guidance on the best way to achieve this. I was thinking the ItemsPanelTemplate needed to be modified, but was not certain. Thanks. My code is below:

        <!-- List Item Selected -->
        <LinearGradientBrush x:Key="GotFocusStyle" EndPoint="0.5,1" StartPoint="0.5,0">
            <LinearGradientBrush.GradientStops>
                <GradientStop Color="Black" Offset="0.501"/>
                <GradientStop Color="#FF091F34"/>
                <GradientStop Color="#FF002F5C" Offset="0.5"/>
            </LinearGradientBrush.GradientStops>
        </LinearGradientBrush>

        <!-- List Item Hover -->
        <LinearGradientBrush x:Key="MouseOverFocusStyle" StartPoint="0,0" EndPoint="0,1">
            <LinearGradientBrush.GradientStops>
                <GradientStop Color="#FF013B73" Offset="0.501"/>
                <GradientStop Color="#FF091F34"/>
                <GradientStop Color="#FF014A8F" Offset="0.5"/>
                <GradientStop Color="#FF003363" Offset="1"/>
            </LinearGradientBrush.GradientStops>
        </LinearGradientBrush>

        <!-- List Item Selected -->
        <LinearGradientBrush x:Key="LostFocusStyle" EndPoint="0.5,1" StartPoint="0.5,0">
            <LinearGradientBrush.RelativeTransform>
                <TransformGroup>
                    <ScaleTransform CenterX="0.5" CenterY="0.5"/>
                    <SkewTransform CenterX="0.5" CenterY="0.5"/>
                    <RotateTransform CenterX="0.5" CenterY="0.5"/>
                    <TranslateTransform/>
                </TransformGroup>
            </LinearGradientBrush.RelativeTransform>
            <GradientStop Color="#FF091F34" Offset="1"/>
            <GradientStop Color="#FF002F5C" Offset="0.4"/>
        </LinearGradientBrush>

        <!-- List Item Highlight -->
        <SolidColorBrush x:Key="ListItemHighlight" Color="#FFE38E27" />

        <!-- List Item UnHighlight -->
        <SolidColorBrush x:Key="ListItemUnHighlight" Color="#FF6FB8FD" />

        <Style TargetType="ListBoxItem">
            <EventSetter Event="GotFocus" Handler="ListItem_GotFocus"></EventSetter>
            <EventSetter Event="LostFocus" Handler="ListItem_LostFocus"></EventSetter>
        </Style>

        <DataTemplate x:Key="CustomListData" DataType="{x:Type ListBoxItem}">
            <Border BorderBrush="Black" BorderThickness="1" Margin="-2,0,0,-1">
                <Grid>
                    <Grid.ColumnDefinitions>
                        <ColumnDefinition Width="{Binding RelativeSource={RelativeSource FindAncestor, AncestorType={x:Type ListBoxItem}}, Path=ActualWidth}" />
                    </Grid.ColumnDefinitions>
                    <Label VerticalContentAlignment="Center" BorderThickness="0" BorderBrush="Transparent"
                           Foreground="{StaticResource ListItemUnHighlight}" FontSize="24" Tag="{Binding .}"
                           Grid.Column="0" MinHeight="55" Cursor="Hand" FontFamily="Arial"
                           FocusVisualStyle="{x:Null}" KeyboardNavigation.TabNavigation="None"
                           Background="{StaticResource LostFocusStyle}" MouseMove="ListItem_MouseOver">
                        <Label.ContextMenu>
                            <ContextMenu Name="editMenu">
                                <MenuItem Header="Edit"/>
                            </ContextMenu>
                        </Label.ContextMenu>
                        <TextBlock Text="{Binding .}" Margin="15,0,40,0" TextWrapping="Wrap"></TextBlock>
                    </Label>
                    <Image Tag="{Binding .}" Source="{Binding}" Margin="260,0,0,0" Grid.Column="1" Stretch="None"
                           Width="16" Height="22" HorizontalAlignment="Center" VerticalAlignment="Center" />
                </Grid>
            </Border>
        </DataTemplate>
        </Window.Resources>

        <Window.DataContext>
            <ObjectDataProvider ObjectType="{x:Type local:ImageLoader}" MethodName="LoadImages" />
        </Window.DataContext>

        <ListBox ItemsSource="{Binding}" Width="320" Background="#FF021422" BorderBrush="#FF1C4B79">
            <ListBox.Resources>
                <SolidColorBrush x:Key="{x:Static SystemColors.HighlightBrushKey}">Transparent</SolidColorBrush>
                <Style TargetType="{x:Type ListBox}">
                    <Setter Property="ScrollViewer.HorizontalScrollBarVisibility" Value="Disabled" />
Property="ItemTemplate" Value="{StaticResource CustomListData }" /> </Style> </ListBox.Resources> </ListBox>

    Read the article

  • What does a Software Developer actually do?

    - by chobo2
    Hi, I am graduating from my Computer Science degree in a few weeks!! I have started to look for my first job. For the last couple of years I have gotten really into web programming (ASP.NET). My first choice would be a junior ASP.NET MVC developer position, but I don't think any companies in my area use MVC yet, or if they do, they are not hiring. So my second choice would be a junior ASP.NET WebForms developer. My other choices after that would be forms applications and mobile applications using .NET and C#. As you can see, I am looking for something with .NET. I have spent the last couple of years doing .NET projects for school and in my free time; I love the language, and it would pain me right now to switch to something like PHP. So now I have found a posting in my area for an Entry Software Developer. I like that they are using .NET and that it is an entry-level job (I have never worked in this industry and never had more than something like a tutoring job, so I won't go for intermediate jobs). The posting:

    Are you looking for an exciting challenge within a dynamic, people-oriented culture where you can launch your technical career? Company Name Inc. is a technology consulting company, located in Canada, that designs, develops, and delivers real-time interactive applications accessed via the Internet, as well as back-end tools to support these applications. Company Name provides a combination of out-of-the-box and customized solutions to an expanding list of partners and customers.

    POSITION SUMMARY: As a member of our team, the successful candidate will be responsible for helping us increase the quality and stability of our software systems by working jointly and directly with both the Software Development teams and the QA Team. The primary mission of this role will be to substantially enhance our test automation suite. The incumbent will design and program automated tests (unit, integration, system, stress and load) in Visual Studio using C#, and will develop sound processes that help us identify and resolve defects as early as possible. The successful incumbent will help us improve and enhance system functionality, reliability, performance and scalability. This role is specifically designed for an eager, bright, new graduate who is looking for a stepping stone into a software engineering role. We promote from within and invite new graduates to apply for this important position, which may lead to new opportunities. We also offer a generous professional development plan to help you on your way. You will be a key part of a team of experts that is responsible for improving the quality of our software by:

    • Designing, writing, and executing test plans and programmatic tests in Visual Studio using C# and NUnit for functional testing of our code, new features, regression, and performance test procedures.
    • Working with the engineers to design and build the stress and load testing framework, which emulates tens and even hundreds of thousands of concurrent users via a distributed network interfacing with our Load Testing Lab.
    • Interfacing with both the Development Team and the QA Team to ensure risks are identified and managed.
    • Mentoring and leading the QA Team in programmatic test automation technologies and tools.

    MUST HAVE SKILLS / QUALIFICATIONS:
    • Diploma or higher degree in Computer Science, or equivalent formal training.
    • Fundamental C# programming skills.
    • Knowledge of Internet technologies and Microsoft Windows platforms.
    • Knowledge of PC hardware.
    • Excellent communication skills (both oral and written).
    • Self-starter who takes initiative, requires minimal supervision, and can handle multiple simultaneous tasks.
    • Detail-oriented, able to concentrate, and work quickly.
    • Proven diagnostic, analytical, and problem-solving skills.

    NICE TO HAVE SKILLS:
    • Exposure to Visual Studio Team System or Visual Studio Test Edition.
    • Exposure to C# using NUnit.
    • Exposure to NUnit, HTTPUnit, and other automation tool suites.
    • Exposure to Performance/Stress/Load Testing.
    • Good understanding of relational databases (MS SQL Server).
    • Familiar with video and online multi-player games.

    As part of our team you will have the opportunity to work with a supportive team of experts, drive your own success, and ride the wave as we continually expand our team of experts. If you are interested in this opportunity, please send your resume to [email protected] with "Entry Level Software Developer" in the subject line.

    So that is the posting. To me it sounds like a QA job. I don't have anything against QA jobs, but a lot of them seem to be just clicking buttons and running scripts. Is this what a typical software developer does? I am so on the fence about applying for this job. On one side, I am not sure how much programming I would be doing. I want to be programming at least half the time, otherwise my skills will never improve, since I will never be programming in teams and so on. At the same time, I have no experience in the industry, so on the other side I am thinking: just go for it, and then maybe a year later try to get a full programming job (provided that I got this one). Yet if I am not programming in this job, then that experience will not help me with the next job I look for, and I will be back at square one.

    Read the article

  • Using NHibernate Criteria API to select a specific set of data together with a count

    - by mfloryan
    I have the following domain set up for persistence with NHibernate. I am using PaperConfiguration as the root aggregate. I want to select all PaperConfiguration objects for a given Tier and AcademicYearConfiguration. This works really well, as in the following example (perhaps there is a better way of doing this, though):

        ICriteria criteria = session.CreateCriteria<PaperConfiguration>()
            .Add(Restrictions.Eq("AcademicYearConfiguration", configuration))
            .CreateCriteria("Paper")
                .CreateCriteria("Unit")
                    .CreateCriteria("Tier")
                        .Add(Restrictions.Eq("Id", tier.Id));
        return criteria.List<PaperConfiguration>();

    But I also need to know how many ReferenceMaterials there are for each PaperConfiguration, and I would like to get that in the same call. Avoid HQL; I already have an HQL solution for it. I know this is what projections are for, and this question suggests an idea, but I can't get it to work. I have a PaperConfigurationView that has a ReferenceMaterialCount instead of IList<ReferenceMaterial> ReferenceMaterials, and I was thinking along the lines of:

        ICriteria criteria = session.CreateCriteria<PaperConfiguration>()
            .Add(Restrictions.Eq("AcademicYearConfiguration", configuration))
            .CreateCriteria("Paper")
                .CreateCriteria("Unit")
                    .CreateCriteria("Tier")
                        .Add(Restrictions.Eq("Id", tier.Id))
            .SetProjection(Projections.ProjectionList()
                .Add(Projections.Property("IsSelected"), "IsSelected")
                .Add(Projections.Property("Paper"), "Paper")
                // and so on for all relevant properties
                .Add(Projections.Count("ReferenceMaterials"), "ReferenceMaterialCount"))
            .SetResultTransformer(Transformers.AliasToBean<PaperConfigurationView>());
        return criteria.List<PaperConfigurationView>();

    Unfortunately this does not work. What am I doing wrong? The following simplified query:

        ICriteria criteria = session.CreateCriteria<PaperConfiguration>()
            .CreateCriteria("ReferenceMaterials")
            .SetProjection(Projections.ProjectionList()
                .Add(Projections.Property("Id"), "Id")
                .Add(Projections.Count("ReferenceMaterials"), "ReferenceMaterialCount"))
            .SetResultTransformer(Transformers.AliasToBean<PaperConfigurationView>());
        return criteria.List<PaperConfigurationView>();

    creates this rather unexpected SQL:

        SELECT this_.Id as y0_, count(this_.Id) as y1_
        FROM Domain.PaperConfiguration this_
            inner join Domain.ReferenceMaterial referencem1_ on this_.Id = referencem1_.PaperConfigurationId

    The above query fails with an ADO.NET error, as it obviously is not correct SQL: it is missing a GROUP BY, and the count should be count(referencem1_.Id) rather than count(this_.Id).
    NHibernate mappings:

        <class name="PaperConfiguration" table="PaperConfiguration">
          <id name="Id" type="Int32">
            <column name="Id" sql-type="int" not-null="true" unique="true" index="PK_PaperConfiguration"/>
            <generator class="native" />
          </id>
          <!-- IPersistent -->
          <version name="VersionLock" />
          <!-- IAuditable -->
          <property name="WhenCreated" type="DateTime" />
          <property name="CreatedBy" type="String" length="50" />
          <property name="WhenChanged" type="DateTime" />
          <property name="ChangedBy" type="String" length="50" />
          <property name="IsEmeEnabled" type="boolean" not-null="true" />
          <property name="IsSelected" type="boolean" not-null="true" />
          <many-to-one name="Paper" column="PaperId" class="Paper" not-null="true" access="field.camelcase"/>
          <many-to-one name="AcademicYearConfiguration" column="AcademicYearConfigurationId" class="AcademicYearConfiguration" not-null="true" access="field.camelcase"/>
          <bag name="ReferenceMaterials" generic="true" cascade="delete" lazy="true" inverse="true">
            <key column="PaperConfigurationId" not-null="true" />
            <one-to-many class="ReferenceMaterial" />
          </bag>
        </class>

        <class name="ReferenceMaterial" table="ReferenceMaterial">
          <id name="Id" type="Int32">
            <column name="Id" sql-type="int" not-null="true" unique="true" index="PK_ReferenceMaterial"/>
            <generator class="native" />
          </id>
          <!-- IPersistent -->
          <version name="VersionLock" />
          <!-- IAuditable -->
          <property name="WhenCreated" type="DateTime" />
          <property name="CreatedBy" type="String" length="50" />
          <property name="WhenChanged" type="DateTime" />
          <property name="ChangedBy" type="String" length="50" />
          <property name="Name" type="String" not-null="true" />
          <property name="ContentFile" type="String" not-null="false" />
          <property name="Position" type="int" not-null="false" />
          <property name="CommentaryName" type="String" not-null="false" />
          <property name="CommentarySubjectTask" type="String" not-null="false" />
          <property name="CommentaryPointScore" type="String" not-null="false" />
          <property name="CommentaryContentFile" type="String" not-null="false" />
          <many-to-one name="PaperConfiguration" column="PaperConfigurationId" class="PaperConfiguration" not-null="true"/>
        </class>
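    For reference, one way this kind of per-parent child count can be expressed in a single Criteria call is to join the collection under an alias and group on the root entity's properties, so that the count applies to the joined child rows; every non-aggregated projection then needs to be a GroupProperty. A sketch under those assumptions (the aliases and the view's property names are illustrative):

        // Assumes: using NHibernate.Criterion; using NHibernate.SqlCommand; using NHibernate.Transform;
        IList<PaperConfigurationView> views = session.CreateCriteria<PaperConfiguration>("pc")
            .Add(Restrictions.Eq("pc.AcademicYearConfiguration", configuration))
            .CreateAlias("pc.ReferenceMaterials", "rm", JoinType.LeftOuterJoin)
            .SetProjection(Projections.ProjectionList()
                .Add(Projections.GroupProperty("pc.Id"), "Id")
                .Add(Projections.GroupProperty("pc.IsSelected"), "IsSelected")
                .Add(Projections.Count("rm.Id"), "ReferenceMaterialCount"))
            .SetResultTransformer(Transformers.AliasToBean<PaperConfigurationView>())
            .List<PaperConfigurationView>();

    The left outer join matters here: an inner join would silently drop configurations that have no reference materials at all.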


  • "Randomly" occurring errors...

    - by ClarkeyBoy
    Hi, my website has a setup whereby, when the application starts, a module called SiteContent is "created". This runs a clear-up function which deletes any irrelevant data from the database, in case any has been left in there by previously run functions. The module has instances of manager classes - namely RangeManager, CollectionManager and DesignManager. There are others, but I will just use these as an example. Each manager class contains an array of items - of type Range, Collection or Design, whichever is relevant. Data for each item is then read into an instance of Range, Collection or Design. I know this is basically duplicating data - not very efficient - but it's my final year project at the moment, so I can always change it to use Linq or something similar later, when I am not pressured by the one-month deadline.

    I have a form which, on clicking the Save button, saves data by calling SiteContent.RangeManager.Create(vars) or SiteContent.RangeManager.Update(Range As Range, vars) (or the equivalent for the other manager classes, whichever one happens to be relevant). These functions call a stored procedure to insert into or update the relevant table. The classes Range, Collection and Design all have attributes such as Name, Description, Display and several others. When the Create or Update function is called, the manager loops through all the other items to check whether an item with the same name already exists. The Update function ensures that it does not compare the item being updated to itself. A custom exception (ItemAlreadyExistsException) is thrown if another item with the same name is found.

    For some weird reason, if I go into a Range, Collection or Design in edit mode, change something and try to update it, it occasionally doesn't update the item. By occasionally I mean every 3-4 page loads, sometimes more. I see absolutely no pattern in when or why it occurs. I have a try-catch statement which catches ItemAlreadyExistsException and outputs "An item with this name already exists" when caught. Occasionally it will output this; other times it will not. Does anyone have any idea why this could happen? Maybe a mistake which someone has made and solved before?

    I used to have regular expressions in place that the names were compared to - I believe it was [a-zA-Z]{1, 100} (between 1 and 100 lower- or upper-case characters). For some reason the customer I am developing the site for used to get errors saying the name was not in the correct format, yet he could try the same text 5 minutes later and it would work fine. I am thinking this could well be the same problem, since both problems occur at random. Many thanks in advance. Regards, Richard Clarke

    Edit: After much time spent narrowing down the code, I have decided to wait for my brother, who has been a programmer for at least 8 years more than I have, to come down over Easter and have a look at it. If he can't solve it then I will zip the files up and put them somewhere for people to access and have a go at. I narrowed it down literally to the minimum number of files possible, and it still occurs - it seems to be about every 10th time. Having said that, I force the manager classes to refresh every 10 page loads or 5 minutes (whichever is sooner), and I may look into this, as it could be causing the problem. Basically each manager contains an array of an object, populated using data from the database. The Update function takes an instance of the item and the new values to be set for the object.
    If it happens to be a page load where the array is reset (i.e. the data is loaded freshly from the database), then the object instance with the same ID won't be the same instance as the one being passed in. This explains why it throws an ItemAlreadyExistsException now and then; it all makes sense the more I think about it. If I were to pass in the ID of the object to be altered, rather than the object itself, then it should work perfectly. I will answer the question if I solve it.
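    A minimal sketch of the ID-based check the poster arrives at - the original looks like VB.NET, but this is in C# for brevity, and the names are hypothetical since the actual manager code isn't shown. The duplicate test excludes the edited item by key rather than by reference equality, so a freshly reloaded array can no longer make the item collide with its own reloaded copy:

        using System;
        using System.Collections.Generic;

        public class ItemAlreadyExistsException : Exception
        {
            public ItemAlreadyExistsException(string name)
                : base("An item with this name already exists: " + name) { }
        }

        public class Range
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public class RangeManager
        {
            private readonly List<Range> items = new List<Range>();

            // Update by ID: even if the items list was just reloaded from the
            // database, the item being edited is skipped by key, not by object
            // reference, so it never matches its own fresh copy.
            public void Update(int id, string newName)
            {
                foreach (Range other in items)
                {
                    if (other.Id != id &&
                        string.Equals(other.Name, newName, StringComparison.OrdinalIgnoreCase))
                    {
                        throw new ItemAlreadyExistsException(newName);
                    }
                }
                // ... call the stored procedure to persist the new values ...
            }
        }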


  • Help: Graph contest problem: maybe a modified Dijkstra or an alternative algorithm

    - by newba
    Hi you all, I'm trying to do this contest exercise about graphs:

    XPTO is an intrepid adventurer (a little too temerarious for his own good) who boasts about exploring every corner of the universe, no matter how inhospitable. In fact, he doesn't visit the planets where people can easily live; he prefers those where only a madman would go with a very good reason (several millions of credits, for instance). His latest exploit is trying to survive in Proxima III. The problem is that Proxima III suffers from storms of highly corrosive acids that destroy everything, including spacesuits that were especially designed to withstand corrosion. Our intrepid explorer was caught in a rectangular area in the middle of one of these storms. Fortunately, he has an instrument that is capable of measuring the exact concentration of acid in each sector and how much damage it does to his spacesuit. Now, he only needs to find out if he can escape the storm.

    Problem: The problem consists of finding an escape route that will allow XPTO to escape the noxious storm. You are given the initial energy of the spacesuit, the size of the rectangular area and the damage that the spacesuit will suffer while standing in each sector. Your task is to find the exit sector, the number of steps necessary to reach it and the amount of energy his suit will have when he leaves the rectangular area. The escape route chosen should be the safest one (i.e. the one where his spacesuit will be the least damaged). Notice that Rodericus will perish if the energy of his suit reaches zero. In case there is more than one possible solution, choose the one that uses the least number of steps. If there are at least two sectors with the same number of steps, (X1, Y1) and (X2, Y2), then choose the first if X1 < X2, or if X1 = X2 and Y1 < Y2.

    Constraints:
    0 < E ≤ 30000 - the suit's starting energy
    0 ≤ W ≤ 500 - the rectangle's width
    0 ≤ H ≤ 500 - the rectangle's height
    0 < X < W - the starting X position
    0 < Y < H - the starting Y position
    0 ≤ D ≤ 10000 - the damage sustained in each sector

    Input: The first number given is the number of test cases. Each case will consist of a line with the integers E, X and Y. The following line will have the integers W and H. The following lines will hold the matrix containing the damage D the spacesuit will suffer whilst in the corresponding sector. Notice that, as is often the case for computer geeks, (1,1) corresponds to the upper left corner.

    Output: If there is a solution, the output will be the remaining energy, the exit sector's X and Y coordinates and the number of steps of the route that will lead Rodericus to safety. In case there is no solution, the phrase "Goodbye cruel world!" will be written.

    Sample Input:
    3
    40 3 3
    7 8
    12 11 12 11 3 12 12
    12 11 11 12 2 1 13
    11 11 12 2 13 2 14
    10 11 13 3 2 1 12
    10 11 13 13 11 12 13
    12 12 11 13 11 13 12
    13 12 12 11 11 11 11
    13 13 10 10 13 11 12
    8 3 4
    7 6
    4 3 3 2 2 3 2
    2 5 2 2 2 3 3
    2 1 2 2 3 2 2
    4 3 3 2 2 4 1
    3 1 4 3 2 3 1
    2 2 3 3 0 3 4
    10 3 4
    7 6
    3 3 1 2 2 1 0
    2 2 2 4 2 2 5
    2 2 1 3 0 2 2
    2 2 1 3 3 4 2
    3 4 4 3 1 1 3
    1 2 2 4 2 2 1

    Sample Output:
    12 5 1 8
    Goodbye cruel world!
    5 1 4 2

    Basically, I think we have to do a modified Dijkstra in which the distance between nodes is the suit's energy (which we subtract instead of summing, as we would with normal distances), and the steps are the steps made along the path. The position with the best (Energy, num_Steps) pair is our "way out". Important: XPTO obviously can't move diagonally, so we have to rule those moves out.
    I have many ideas, but I'm having real trouble implementing them... Could someone please help me think this through, with some code or at least ideas? Am I totally wrong?
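    One way to make that concrete - a C# sketch of one reading of the problem, not a verified solution: a Dijkstra-style search whose priority is (highest remaining energy, fewest steps, smallest X, smallest Y), expanding the four orthogonal neighbours and treating any border sector as a potential exit. It assumes the starting sector's damage has already been paid, and uses .NET 6's PriorityQueue:

        using System;
        using System.Collections.Generic;

        static class StormEscape
        {
            // damage is H rows by W columns; (x, y) is 1-based with (1,1) top-left.
            public static string Solve(int startEnergy, int startX, int startY, int[,] damage)
            {
                int h = damage.GetLength(0), w = damage.GetLength(1);
                var best = new int[h, w];   // highest energy seen at each sector so far

                // priorities compare lexicographically: max energy first (hence the
                // negation), then fewest steps, then smallest x, then smallest y
                var pq = new PriorityQueue<(int x, int y, int e, int steps),
                                           (int, int, int, int)>();
                pq.Enqueue((startX, startY, startEnergy, 0),
                           (-startEnergy, 0, startX, startY));

                while (pq.Count > 0)
                {
                    var (x, y, e, steps) = pq.Dequeue();
                    if (e <= best[y - 1, x - 1]) continue;   // stale queue entry
                    best[y - 1, x - 1] = e;

                    // from a border sector one more step leaves the storm; this
                    // reads "exit sector" as that border sector itself
                    if (x == 1 || x == w || y == 1 || y == h)
                        return $"{e} {x} {y} {steps}";

                    // non-border cells always have all four neighbours in bounds
                    foreach (var (dx, dy) in new[] { (1, 0), (-1, 0), (0, 1), (0, -1) })
                    {
                        int nx = x + dx, ny = y + dy;
                        int ne = e - damage[ny - 1, nx - 1];  // pay damage on entry
                        if (ne > 0 && ne > best[ny - 1, nx - 1])
                            pq.Enqueue((nx, ny, ne, steps + 1), (-ne, steps + 1, nx, ny));
                    }
                }
                return "Goodbye cruel world!";
            }
        }

    The greedy extraction is justified the same way as in Dijkstra: damage is non-negative, so energy only ever decreases along a path. Whether the start sector's damage is paid up front, and whether leaving the grid costs one extra step, are assumptions the sample cases would have to confirm.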


  • Generic Http Module

    - by MartinF
    The problem: I am trying to make a generic HTTP module in ASP.NET C# for handling roles defined by an enum, which I want to be able to change via a generic parameter. This would make it possible to use the generic module with any kind of enum defined for each project. The module hooks into the Authenticate event of the FormsAuthenticationModule and is called on each request to the website. The module exposes public events which could be handled in global.asax. But I can't seem to figure out how to make the generic HTTP module work like a non-generic module. There are three main problems:

    1. I can't register the generic HTTP module in web.config like any other module, as I can't specify the generic parameter - or is that possible somehow? The way to solve that, as far as I can figure out, is to create a non-generic HTTP module that initialises the generic module (with the generic parameter defined in a custom section for the module in web.config). But that introduces the next problem.

    2. I can't find out how to make the public events exposed by the generic module available to hook into from global.asax, as you would normally do with a non-generic module by just creating a public method named ModuleClassName_PublicEventName. The Init() method of an HTTP module gets a reference to the HttpApplication object created by global.asax. I don't know whether it would be possible, with reflection, to search for such methods and, if they are defined in global.asax (the HttpApplication subclass), hook them up to the correct event handlers - or whether any methods on the HttpApplication object could be used.

    3. How would I store, and later retrieve, a reference to the generic module created in the non-generic module? I can get the non-generic module with HttpContext.Current.ApplicationInstance.Modules.Get("TheModule"), but is there any way I can store a reference to the generic module in the non-generic module (I can't figure out how that would be possible), or store it somewhere else so I can always get at it? If I can get a reference to the generic module from global.asax etc., the events mentioned in point 2 can be wired up manually.

    Thoughts and other possible solutions: Instead of registering the module in web.config, it could be initialised manually by overriding the Init method of the HttpApplication and calling the Init method on the module. But that introduces some new problems: the module will no longer be added to the Modules collection, so I will need to store a reference somewhere else. This could be done with a property in global.asax and by implementing an interface, or by creating a generic abstract base type inheriting from HttpApplication that global.asax could inherit from. In the generic abstract base type I could also override the Init method. It will still not automatically hook up methods in global.asax to events in the generic module; but if it is possible, with reflection, to search for defined methods on the subtype of HttpApplication, it could be done automatically that way. Alternatively, I can wire the methods in global.asax to the events in the generic module manually, either in the Init method or anywhere else, once I have a reference to the generic module. It doesn't really need to implement the IHttpModule interface if I choose to initialise the generic module manually; I could just as well move all the code to the abstract base type inheriting from HttpApplication. (A sketch of the wrapper idea follows below.)

    I would prefer to register the module simply by defining it in web.config, as that would be the easiest and most natural solution. It would also be great if it could be kept as an HttpModule rather than having to define an abstract base type inheriting from HttpApplication; otherwise it will be more tied down and not as loose and pluggable as I wanted it to be (but maybe that is not possible). Another alternative would be to make it all static; as far as I can figure out, I would then have to make sure that a handler can only be added once to the public static events, so that a reference isn't added each time a new instance of global.asax is created. I simply can't work out which is the best solution - I have been messing around with this, and thinking about it, for days. Maybe there is an option that I haven't thought of? Hope someone out there can help me.
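    For what it's worth, a sketch of the wrapper idea from problem 1, with hypothetical names throughout (the real code would presumably read the enum type from a custom config section rather than appSettings). A non-generic shim is registered in web.config; it closes the generic type by reflection and keeps a static reference so global.asax can reach it and wire events manually:

        using System;
        using System.Configuration;
        using System.Web;

        // The generic module, parameterised by the project's roles enum.
        public class RoleModule<TRole> : IHttpModule where TRole : struct
        {
            public event EventHandler Authenticated;   // for global.asax to hook

            public void Init(HttpApplication app)
            {
                app.AuthenticateRequest += (s, e) =>
                    Authenticated?.Invoke(this, EventArgs.Empty);
            }

            public void Dispose() { }
        }

        // Non-generic shim that web.config can register. It builds the closed
        // generic module from the enum type named in configuration.
        public class RoleModuleShim : IHttpModule
        {
            // static so global.asax can find the inner module and wire its events
            public static IHttpModule Inner { get; private set; }

            public void Init(HttpApplication app)
            {
                if (Inner == null)
                {
                    string enumTypeName = ConfigurationManager.AppSettings["RoleEnumType"];
                    Type enumType = Type.GetType(enumTypeName, throwOnError: true);
                    Type closed = typeof(RoleModule<>).MakeGenericType(enumType);
                    Inner = (IHttpModule)Activator.CreateInstance(closed);
                }
                Inner.Init(app);   // Init runs once per HttpApplication instance
            }

            public void Dispose() { }
        }

    global.asax would then subscribe manually, e.g. ((RoleModule<MyRoles>)RoleModuleShim.Inner).Authenticated += ..., guarded so the handler is only added once - ASP.NET creates several instances of the global.asax class, which is exactly the double-subscription worry raised above.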

