Search Results

Search found 10091 results on 404 pages for 'third person shooter'.


  • Box2D Joints in entity components system

    - by Johnmph
    I'm looking for a way to represent Box2D joints in an entity component system. Here are the options I've found:

    1) Store the joints inside the Box2D/Body component as parameters: each component holds a joint array keyed by a joint ID, and the other body's component carries the same ID. For example:

        Entity1 - Box2D/Body component { Body => (body parameters), Joints => { Joint1 => (joint parameters), other joints... } } // Joint ID = Joint1
        Entity2 - Box2D/Body component { Body => (body parameters), Joints => { Joint1 => (joint parameters), other joints... } } // Same joint ID as in Entity1

    This solution has three problems. First, the implementation: we must manage the joint IDs ourselves to create the joints and to know which bodies they connect. Second, the joint parameters: should they be read from Entity1 or Entity2? If both entities carry the same parameters there is no conflict, but what if they differ? Third, we cannot limit a joint to exactly two bodies, which is mandatory since a joint can only link two bodies. Nothing prevents creating more than two entities whose body components share the same joint ID; in that case, how do we know which two bodies to join, and what do we do with the others?

    2) The same as solution 1, but keyed by entity ID instead of joint ID:

        Entity1 - Box2D/Body component { Body => (body parameters), Joints => { Entity2 => (joint parameters), other joints... } }
        Entity2 - Box2D/Body component { Body => (body parameters), Joints => { Entity1 => (joint parameters), other joints... } }

    This fixes the first problem of solution 1, but the other two problems remain.

    3) A Box2D/Joint component shared between the entities whose bodies are to be joined:

        Entity1 - Box2D/Body component { Body => (body parameters) }
                - Box2D/Joint component { Joint => (joint parameters) } // Shared, same as in Entity2
        Entity2 - Box2D/Body component { Body => (body parameters) }
                - Box2D/Joint component { Joint => (joint parameters) } // Shared, same as in Entity1

    This solution has two problems. The first is the same two-body limit problem as in solutions 1 and 2: nothing prevents giving more than two entities a body component plus the shared joint component; in that case, how do we know which two bodies to join, and what do we do with the others? The second is that we can have only one joint per body, because an entity component system allows only one component of a given type per entity, so we cannot put two Joint components in the same entity.

    4) A Box2D/Joint component inserted in the entity containing the first body, holding the ID of the entity that contains the second body:

        Entity1 - Box2D/Body component { Body => (body parameters) }
                - Box2D/Joint component { Entity2 => (joint parameters) } // Entity2 is the ID of the entity containing the other body; the first body is in this entity
        Entity2 - Box2D/Body component { Body => (body parameters) }

    This has exactly the same problems as solution 3; the only difference is that each body can now participate in two different joints instead of one (one joint component in each of two entities, each referencing the other).

    5) A Box2D/Joint component that takes as parameters the IDs of the two entities containing the bodies to join. The component can live in any entity (see the sketch at the end of this post):

        Entity1 - Box2D/Body component { Body => (body parameters) }
        Entity2 - Box2D/Body component { Body => (body parameters) }
        Entity3 - Box2D/Joint component { Joint => (Body1 => Entity1, Body2 => Entity2, other joint parameters) } // This component can be in any entity; that doesn't matter

    This fixes the body-limit problem: a joint references exactly two bodies, which is correct. And we are not limited in the number of joints per body, because we can always create another Box2D/Joint component referencing Entity1 and Entity2 and put it in a new entity. The remaining problem: what happens if the Body1 or Body2 parameter of the joint component is changed at runtime? We need extra code to synchronize those parameter changes with the real joint object.

    6) Like solution 3, but improved: we still share the same joint component between the entities whose bodies are joined, but we create a new entity to link each body component with the joint component:

        Entity1 - Box2D/Body component { Body => (body parameters) } // Shared, same as in Entity3
        Entity2 - Box2D/Body component { Body => (body parameters) } // Shared, same as in Entity4
        Entity3 - Box2D/Body component { Body => (body parameters) } // Shared, same as in Entity1
                - Box2D/Joint component { Joint => (joint parameters) } // Shared, same as in Entity4
        Entity4 - Box2D/Body component { Body => (body parameters) } // Shared, same as in Entity2
                - Box2D/Joint component { Joint => (joint parameters) } // Shared, same as in Entity3

    This fixes the second problem of solution 3: we can create an Entity5 holding the shared body component of Entity1 together with another joint component, so we are no longer limited in the number of joints per body. But the first problem of solution 3 remains: we cannot limit the number of entities holding the shared joint component. To resolve that, we could add a way to limit how many times a component may be shared; for the Joint component, the limit would be 2, because a joint can only join two bodies.

    This last solution would be perfect, because no synchronization code is needed (unlike solution 5): the entity component system notifies us when components and entities are added to or removed from the system. But there is a design problem: how do we find out easily and quickly which bodies the joint operates between? There is no easy way to find an entity from a component instance.

    My question is: which solution is the best? Are there other, better solutions? Sorry for the long text.
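    To make solution 5 concrete, here is a minimal C# sketch of a joint component that references exactly two entities, with a system that resolves the bodies when the joint is added. All type names are hypothetical and not tied to any particular ECS framework or Box2D binding:

        using System;
        using System.Collections.Generic;

        public sealed class JointComponent
        {
            // Exactly two entities per joint, enforced at construction time.
            public readonly int EntityA;
            public readonly int EntityB;

            public JointComponent(int entityA, int entityB)
            {
                if (entityA == entityB)
                    throw new ArgumentException("A joint must connect two distinct entities.");
                EntityA = entityA;
                EntityB = entityB;
            }
        }

        public sealed class JointSystem
        {
            // Maps an entity ID to its (placeholder) Box2D body.
            private readonly Dictionary<int, object> bodies = new Dictionary<int, object>();

            public void RegisterBody(int entity, object body)
            {
                bodies[entity] = body;
            }

            // Called by the ECS when a JointComponent is added anywhere in the world.
            public void OnJointAdded(JointComponent joint)
            {
                object bodyA, bodyB;
                if (!bodies.TryGetValue(joint.EntityA, out bodyA) ||
                    !bodies.TryGetValue(joint.EntityB, out bodyB))
                    throw new InvalidOperationException("Both entities need a body component.");

                // The real Box2D call would go here, e.g. world.CreateJoint(bodyA, bodyB, ...).
            }
        }

    The two-body limit is enforced by the component's constructor, and because EntityA/EntityB are read-only, the runtime synchronization concern of solution 5 goes away; if they were mutable, the system would also need an OnJointChanged notification.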

    Read the article

  • Mocking the Unmockable: Using Microsoft Moles with Gallio

    - by Thomas Weller
    The usual open-source mocking frameworks (e.g. Moq or Rhino.Mocks) can mock only interfaces and virtual methods. In contrast, Microsoft's Moles framework can 'mock' virtually anything: it uses runtime instrumentation to inject callbacks into the MSIL bodies of the moled methods. It can therefore detour any .NET method, including non-virtual and static methods in sealed types. This can be extremely helpful when dealing with code that calls into the .NET framework, third-party components, legacy code, etc. Some useful collected resources (links to the website, documentation material and some videos) can be found in my toolbox on Delicious under this link: http://delicious.com/thomasweller/toolbox+moles

    A Gallio extension for Moles

    Originally, Moles is part of Microsoft's Pex framework and thus integrates best with Visual Studio Unit Tests (MSTest). However, the Moles sample download contains some additional assemblies to also support other unit test frameworks. They provide a Moled attribute to ease the usage of mole types with the respective framework (extensions for NUnit, xUnit.net and MbUnit v2 are included with the samples). As there is no such extension for the Gallio platform, I wrote the few required lines myself – the resulting Gallio.Moles.dll is included with the sample download. With this little assembly in place, it is possible to use Moles with Gallio like this:

        [Test, Moled]
        public void SomeTest()
        {
            ...

    What you can do with it

    Moles can be very helpful if you need to 'mock' something other than a virtual or interface-implementing method. This might be the case when dealing with a third-party component or legacy code, or if you want to 'mock' the .NET framework itself. Generally, you need to announce each moled type that you want to use in a test with the MoledType attribute at assembly level. For example:

        [assembly: MoledType(typeof(System.IO.File))]

    Below are some typical use cases for Moles. For a more detailed overview (incl. naming conventions and instructions on how to create the required moles assemblies), please refer to the reference material above.

    Detouring the .NET framework

    Imagine that you want to test a method similar to the one below, which internally calls some framework method:

        public void ReadFileContent(string fileName)
        {
            this.FileContent = System.IO.File.ReadAllText(fileName);
        }

    Using a mole, you would replace the call to the File.ReadAllText(string) method with a runtime delegate like so:

        [Test, Moled]
        [Description("This 'mocks' the System.IO.File class with a custom delegate.")]
        public void ReadFileContentWithMoles()
        {
            // arrange ('mock' the file system with a delegate)
            System.IO.Moles.MFile.ReadAllTextString = (fname => fname == FileName ? FileContent : "WrongFileName");

            // act
            var testTarget = new TestTarget.TestTarget();
            testTarget.ReadFileContent(FileName);

            // assert
            Assert.AreEqual(FileContent, testTarget.FileContent);
        }

    Detouring static methods and/or classes

    A static method like the one below…

        public static string StaticMethod(int x, int y)
        {
            return string.Format("{0}{1}", x, y);
        }

    … can be 'mocked' with the following:

        [Test, Moled]
        public void StaticMethodWithMoles()
        {
            MStaticClass.StaticMethodInt32Int32 = ((x, y) => "uups");

            var result = StaticClass.StaticMethod(1, 2);

            Assert.AreEqual("uups", result);
        }

    Detouring constructors

    You can do this delegate thing even with a class' constructor.
    The syntax for this is not all too intuitive, because you have to set up the internal state of the mole, but generally it works like a charm. For example, to replace this constructor…

        public class ClassWithCtor
        {
            public int Value { get; private set; }

            public ClassWithCtor(int someValue)
            {
                this.Value = someValue;
            }
        }

    … you would do the following:

        [Test, Moled]
        public void ConstructorTestWithMoles()
        {
            MClassWithCtor.ConstructorInt32 =
                ((@class, @value) => new MClassWithCtor(@class) { ValueGet = () => 99 });

            var classWithCtor = new ClassWithCtor(3);

            Assert.AreEqual(99, classWithCtor.Value);
        }

    Detouring abstract base classes

    You can also use this approach to 'mock' the abstract base class of a class that you call in your test. Assume that you have something like this:

        public abstract class AbstractBaseClass
        {
            public virtual string SaySomething()
            {
                return "Hello from base.";
            }
        }

        public class ChildClass : AbstractBaseClass
        {
            public override string SaySomething()
            {
                return string.Format(
                    "Hello from child. Base says: '{0}'",
                    base.SaySomething());
            }
        }

    Then you would set up the child's underlying base class like this:

        [Test, Moled]
        public void AbstractBaseClassTestWithMoles()
        {
            ChildClass child = new ChildClass();
            new MAbstractBaseClass(child)
                {
                    SaySomething = () => "Leave me alone!"
                }
                .InstanceBehavior = MoleBehaviors.Fallthrough;

            var hello = child.SaySomething();

            Assert.AreEqual("Hello from child. Base says: 'Leave me alone!'", hello);
        }

    Setting the mole's behavior to MoleBehaviors.Fallthrough causes the 'original' method to be called if a respective delegate is not provided explicitly – here it causes the ChildClass override of the SaySomething() method to be called. There are some more scenarios where the Moles framework can be of much help (e.g. it is also possible to detour interface implementations like IEnumerable<T> and such). One other possibility that comes to my mind (because I'm currently dealing with it) is to replace calls from repository classes to the ADO.NET Entity Framework O/R mapper with delegates, to isolate the repository classes from the underlying database – which otherwise would not be possible.

    Usage

    Since Moles relies on runtime instrumentation, mole types must be run under the Pex profiler. This works from inside Visual Studio only if you write your tests with MSTest (Visual Studio Unit Test). While other unit test frameworks can generally be used with Moles, they require the respective tests to be run from the command line, executed through the moles.runner.exe tool. A typical test execution is similar to this:

        moles.runner.exe <mytests.dll> /runner:<myframework.console.exe> /args:/<myargs>

    So moled tests can be run through tools like NCover or a scripting tool like MSBuild (which makes them easy to run in a Continuous Integration environment), but they are somewhat unhandy to run in the usual TDD workflow (which I described in some detail here). To make this a bit more fluent, I wrote a ReSharper live template to generate the respective command line for a test (it is also included in the sample download – moled_cmd.xml). This is just a quick-and-dirty 'solution'. Maybe it makes sense to write an extra Gallio adapter plugin (similar to the many others that are already provided) and include it with the Gallio download package, if there's sufficient demand for it.
    As of now, the only way to run tests with the Moles framework from within Visual Studio is to use them with MSTest. From the command line, anything with a managed console runner can be used (provided that the appropriate extension is in place). A typical Gallio/Moles command line (as generated by the mentioned R# template) looks like this:

        "%ProgramFiles%\Microsoft Moles\bin\moles.runner.exe" /runner:"%ProgramFiles%\Gallio\bin\Gallio.Echo.exe" "Gallio.Moles.Demo.dll" /args:/r:IsolatedAppDomain /args:/filter:"ExactType:TestFixture and Member:ReadFileContentWithMoles"

    Note: When using the command line with Echo (Gallio's console runner), be sure to always include the IsolatedAppDomain option, otherwise the tests won't use the instrumentation callbacks!

    License issues

    As I already said, the free mocking frameworks can mock only interfaces and virtual methods. If you want to mock other things, you need the Typemock Isolator tool, which comes with license costs (although these 'costs' are ridiculously low compared to the value that such a tool can bring to a software project, spending money often is a considerable gateway hurdle in real life...). The Moles framework also is not totally free, but comes with the same license conditions as the (closely related) Pex framework: it is free for academic/non-commercial use only; using it in a 'real' software project requires an MSDN subscription (from VS2010 Professional on).

    The demo solution

    The sample solution (VS 2008) can be downloaded from here. It contains the Gallio.Moles.dll which provides the Moled attribute described here, the above-mentioned R# template (moled_cmd.xml), and a test fixture containing the use case scenarios described above. To run it, you need the Gallio framework (download) and Microsoft Moles (download) installed in the default locations.
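    As a closing aside, here is what the detour idea looks like when hand-rolled, without any instrumentation. This is not Moles code, just a plain C# seam (all names hypothetical) that you would have to design into the code yourself. That is exactly the boilerplate that Moles' MSIL rewriting makes unnecessary:

        using System;
        using System.IO;

        public static class FileSeam
        {
            // Tests may assign this delegate; null means "call the real framework method".
            public static Func<string, string> ReadAllTextOverride;

            public static string ReadAllText(string path)
            {
                return ReadAllTextOverride != null
                    ? ReadAllTextOverride(path)
                    : File.ReadAllText(path);
            }
        }

    A test would set FileSeam.ReadAllTextOverride = p => "canned content"; and reset it afterwards. The crucial difference: with Moles, the production code keeps calling System.IO.File.ReadAllText directly, and the detour is injected at runtime instead.

    Happy testing…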

    Read the article

  • Asserting with JustMock

    - by mehfuzh
    In this post, I will dig a bit deeper into Mock.Assert. This is a continuation of the previous post and covers the ways you can assert your mock expectations. I have used another traditional sample of Talisker, with a warehouse [collaborator] and an order class [SUT] that calls upon the warehouse to check the stock and fill it with items. Our sample warehouse interface and order class look like this:

        public interface IWarehouse
        {
            bool HasInventory(string productName, int quantity);
            void Remove(string productName, int quantity);
        }

        public class Order
        {
            public string ProductName { get; private set; }
            public int Quantity { get; private set; }
            public bool IsFilled { get; private set; }

            public Order(string productName, int quantity)
            {
                this.ProductName = productName;
                this.Quantity = quantity;
            }

            public void Fill(IWarehouse warehouse)
            {
                if (warehouse.HasInventory(ProductName, Quantity))
                {
                    warehouse.Remove(ProductName, Quantity);
                    IsFilled = true;
                }
            }
        }

    Our first example deals with mock object assertion [my take], i.e. the assert-all scenario. This acts only on the setups that have the "MustBeCalled" flag associated. To be more specific, let's first consider the following test code:

        var order = new Order(TALISKER, 0);
        var wareHouse = Mock.Create<IWarehouse>();

        Mock.Arrange(() => wareHouse.HasInventory(Arg.Any<string>(), 0)).Returns(true).MustBeCalled();
        Mock.Arrange(() => wareHouse.Remove(Arg.Any<string>(), 0)).Throws(new InvalidOperationException()).MustBeCalled();
        Mock.Arrange(() => wareHouse.Remove(Arg.Any<string>(), 100)).Throws(new InvalidOperationException());

        // exercise
        Assert.Throws<InvalidOperationException>(() => order.Fill(wareHouse));
        // it will assert the first and second setups.
        Mock.Assert(wareHouse);

    Here, we have created the order object and the mock of IWarehouse, and then set up the HasInventory and Remove calls of IWarehouse with my expectations, which are called internally by order.Fill. Both of these setups are marked as "MustBeCalled". There is one additional IWarehouse.Remove setup that is never matched and is not marked. When order.Fill executes, the first and second setups are invoked internally while the third one is left un-invoked. Mock.Assert passes successfully, as both of the required setups are called as expected. But if we had marked the third one with "MustBeCalled" as well, the assert would fail with a proper exception. You can also see that I used the same call for two different setups; this feature is called sequential mocking and will be covered later on.

    Moving forward, let's say we don't want this must-be-called flag, but instead want to assert a specific call with a lambda. For that, consider the following code:

        // setup - data
        var order = new Order(TALISKER, 50);
        var wareHouse = Mock.Create<IWarehouse>();

        Mock.Arrange(() => wareHouse.HasInventory(TALISKER, 50)).Returns(true);

        // exercise
        order.Fill(wareHouse);

        // verify state
        Assert.True(order.IsFilled);
        // verify interaction
        Mock.Assert(() => wareHouse.HasInventory(TALISKER, 50));

    This snippet shows the case of a successful order. I haven't used "MustBeCalled"; rather, I used a lambda to assert the specific call that I made, which is more suitable for cases where we know exactly how the user code will behave. But this raises a question: how do we assert a mock call when we don't know which item the user code may request?
    In that case, we can combine matchers with our assert calls, just as we do for arrange:

        // setup - data
        var order = new Order(TALISKER, 50);
        var wareHouse = Mock.Create<IWarehouse>();

        Mock.Arrange(() => wareHouse.HasInventory(TALISKER, Arg.Matches<int>(x => x <= 50))).Returns(true);

        // exercise
        order.Fill(wareHouse);

        // verify state
        Assert.True(order.IsFilled);

        // verify interaction
        Mock.Assert(() => wareHouse.HasInventory(Arg.Any<string>(), Arg.Matches<int>(x => x <= 50)));

    Here, I have asserted a mock call for which I don't know the item name, but I do know that the number of items the user will request is at most 50. This kind of expression-based assertion is now possible with JustMock. We can extend this sample to properties as well, which will be covered shortly [in other posts].

    In addition to simple assertion, we can also use filters to limit how many times a call must have occurred, or whether it occurred at all. For example, the first test code above has one setup that is never invoked. For that, the following assert call is always valid:

        Mock.Assert(() => wareHouse.Remove(Arg.Any<string>(), 100), Occurs.Never());

    Or, for wareHouse.HasInventory, we can do the following:

        Mock.Assert(() => wareHouse.HasInventory(Arg.Any<string>(), 0), Occurs.Once());

    Or, to be more specific, it's even better with:

        Mock.Assert(() => wareHouse.HasInventory(Arg.Any<string>(), 0), Occurs.Exactly(1));

    There are other filters that you can apply here using AtMost, AtLeast and AtLeastOnce, but I leave those to the reader (a short sketch follows below). You can try the above sample, which is provided in the examples shipped with JustMock. Please do check it out, and feel free to ping me about any issues.
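    For completeness, here is a minimal sketch of those remaining occurrence filters, reusing the same wareHouse mock from above (assuming the Occurs helpers behave as their names suggest):

        // at least one call with quantity 0 must have occurred
        Mock.Assert(() => wareHouse.HasInventory(Arg.Any<string>(), 0), Occurs.AtLeastOnce());

        // the same condition, expressed with an explicit lower bound
        Mock.Assert(() => wareHouse.HasInventory(Arg.Any<string>(), 0), Occurs.AtLeast(1));

        // and an upper bound: no more than two such calls are allowed
        Mock.Assert(() => wareHouse.HasInventory(Arg.Any<string>(), 0), Occurs.AtMost(2));

    Enjoy!!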

    Read the article

  • Creating and maintaining Orchard translations

    - by Bertrand Le Roy
    Many volunteers have already stepped up to provide translations for Orchard. There are many challenges to overcome when translating such a project. Orchard is a very modular CMS, so the translation mechanism needs to account for the core as well as first and third party modules and themes. Another issue is that every new version of Orchard or of a module changes some localizable strings and adds new ones as others become obsolete. In order to address those problems, I've built a small Orchard module that automates some of the most complex tasks that maintaining a translation implies. In this post, I'll walk you through the operations I had to do to update the French translation for Orchard 1.0.

    In order to make sure you translate all the first party modules, I recommend that you start from a full source code enlistment. The reason is that I'll show how you can extract the default en-US translation from any source code enlistment, which enables you to create a translation that is even more up-to-date than what is currently on the site. Alternatively, you could start by downloading the current en-US translation. If you decide to do so, just skip the relevant paragraphs.

    First, let's install the Orchard Translation Manager. I'm starting from a vanilla clone of the latest in the code repository. After you've set up the site, go into the dashboard and click on Gallery. Locate the Orchard Translation Manager in the list of modules and click "Install". Once the module is installed, you need to enable its one feature by going into Configuration/Features and clicking "Enable" next to Vandelay.TranslationManager.

    We're done with the setup that we need in order to start our translation work. We'll now switch to the command line and to our favorite text editor. Open a command line on the Orchard web site folder. I found the easiest way to do this is to SHIFT+right-click on the Orchard.Web folder in Windows Explorer and click "Open command window here". Type bin\orchard to enter the Orchard command-line environment. If you do a "help commands" you should see four commands in the list that came from the module we just installed: extract default translation, install translation, package translation and sync translation.

    First, we're going to generate the default translation. Note that it is possible to generate that default translation for a specific list of modules and themes by using the /Extensions: switch, which should facilitate the translation of third party extensions, but in this tutorial we're going to generate it for the whole of the Orchard source code.

        extract default translation /Output:\temp

    This should have created an Orchard.en-us.po.zip file in the temp directory. Extract that archive into an orchard.po folder under \temp. The next step depends on whether you have an existing translation that you want to update or not.
    If you do have an existing translation, just extract it into the same \temp\orchard.po directory. That should result in a file structure where you have the default en-US translation alongside your own. If you don't have an existing translation, just continue; the commands will be the same. We are now going to synchronize those translations (or generate the stub for a new one if you didn't start from an existing translation):

        sync translation /Input:\temp\orchard.po /Culture:fr-FR

    After this command (where you should of course substitute fr-FR with the culture you're working on), we have updated files that contain a few useful flags. Open each of the .po files under the culture you are working on (there should be around 36) with your favorite text editor.

    For all the strings that are still valid in the latest version, nothing changes and you don't need to do anything. For all the strings that disappeared from the default culture, the old translation will still be there, but prefixed with the following comment:

        # Obsolete translation

    Conveniently, all the obsolete strings are grouped at the end of the file, so you can select them all and delete them. For all the new strings, you will see the following comment:

        # Untranslated string

    This is where the hard work begins. You'll need to translate each of those new strings by entering the translation between the quotes in msgstr "" (see the example entry near the end of this post). Don't introduce hard carriage returns in the strings; just stay on one line (your text editor should do some reasonable wrapping, so this shouldn't be a big deal). Once you're done with a file, save it. Make sure, and this is very important, that your text editor saves using the UTF-8 encoding. In Notepad, that setting can be found in the file saving dialog by doing a "Save As" rather than a plain "Save".

    When all the .po files have been edited, you are ready to package the translation for submission (a.k.a. sending e-mail to the localization mailing list):

        package translation /Culture:fr-FR /Input:\temp\orchard.po /Output:\temp

    You should now see an Orchard.fr-FR.po.zip file in \temp that is ready to be submitted. That is, once you've tested it, which can be done by deploying it into the site:

        install translation \temp\orchard.fr-fr.po.zip

    Once this is done, you can go into the dashboard under Configuration/Settings and click on "Add or remove supported cultures for the site". Choose your culture and click "Add". You can then go back to settings, set the default culture, and save. You may now take a tour of the application and verify that everything works as expected.
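    For reference, here is roughly what a single entry in a freshly synchronized .po file looks like: the flag comment comes from the sync command, and you fill in the msgstr line. The path and strings here are hypothetical, and the exact set of comment lines can vary between Orchard versions:

        # Untranslated string
        #: ~/Themes/TheThemeMachine/Views/User.cshtml
        msgid "Sign In"
        msgstr "Se connecter"

    And that's it really. Creating a translation for Orchard is a matter of a few hours. If you don't see a translation for your culture, please consider creating it.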

    Read the article

  • How to deal with a poor team leader and a test manager from hell? [closed]

    - by Google
    Let me begin by explaining my situation and giving a little context. My company has around 15 developers, but we're split across two different areas: a fresh product team and an old product team. The old product team does mostly bug fixes/maintenance and a feature here and there. The fresh product has never been released and is new from the ground up. I am on the fresh product team, which consists of three developers: myself, another developer, and a senior developer who is also our team leader. Our roles are as follows:

    Myself: building the administration client, as well as build/release work
    Other dev: building the primary client
    Team lead: building the server

    In addition to the dev team, we interact with the test manager often. By "we" I mean mostly me, since I do the build work and give him the builds to test.

    Trial 1: The other developer on my team and I have both tried to talk to our manager about our team leader. About two weeks before release, we went into his office and had a closed-door meeting before our team leader got to work. We expressed our concerns about the product, its release date, and our team leader, and explained that the team leader had a "rosy" image of the product's state. Our manager seemed to listen to what we said and thanked us for taking the initiative to speak with him about it. He got us an extra two weeks before release.

    The situation with the leader didn't change. In fact, it got a little worse. While we were using the two weeks to fix issues, he was slacking off quite a bit. Just to name a few things: he installed Windows 8 on his dev machine during this time (claimed his machine was broken), he wrote a plugin for our office messenger that turned messages into speech, and one time when I went into his office he was making a 3D model in Blender (for "fun"). He felt the product was "pretty good" and ready for release.

    During this time I dealt with the test manager on a daily basis. Every bug or issue that popped up, he would pretty much attack me personally (regardless of which component the bug was in). The test manager would often push his "views" of what needed to be done with the product. He virtually ordered me to change text on our installer and to add features to the installer and administration client. I tried to express that his suggestions were valid ideas, but that it was too close to release to make those kinds of changes; to make matters worse, our technical writer had already finished the documentation, so such a change would affect not only the dev team but the technical writer and marketing as well. I said I wasn't going to make those changes without the consent of marketing, the technical writer, and my manager. He pretty much said that I don't care about the product and don't do my job.

    I would like to take a moment to say I take my job seriously and do my best. I am the kind of person who gets to work 30-40 minutes early and usually leaves 30 minutes later than everyone else. Saying I don't care or don't do my job is just insulting. His attacks on me grew from day to day. Every bug that popped up, he would usually comment on in some manner that jabbed me and the other developer: "Oh that bug! Yeah, that should have been fixed by now, figures! If someone would do their job!" and other similar comments. Keep in mind that 8 out of 10 bugs were in the server and had nothing to do with me and the other developer. That didn't seem to matter.

    On one occasion things got pretty bad and we almost got into a yelling match, so I decided to stop talking to him altogether and carried out all communication through office e-mail (with my manager cc'd). He never attacked me via e-mail. He still attempted to get aggressive with me in person, but I completely ignore him, and my only response to any question is "Ask my team leader" or "Ask a product manager." The product launched after our two-week extension.

    Trial 2: The day after the product launch, our team leader went on vacation (thanks...). At this time we got a lot of questions from tech support: major issues with the product. All of these issues were bugs marked "resolved" by our lovely team leader (a typical situation that often popped up). This is where we currently are. The other developer has been with the company for about three years (I've been there only five months) and told me he was going to speak with our manager alone, hoping a one-on-one would get our concerns across a little better. He spoke with the manager and directly addressed all of our concerns regarding the team leader and the test manager giving us (mostly me) hell. Our manager basically said he understood how hard we work and that he has noticed it, no doubt about it, and that he had spoken with the test manager about his temper. Regarding the team leader, he didn't say a whole lot. He suggested we sit down with the team leader and address our concerns (isn't that the manager's job?). We're still waiting to see if anything has changed, but we doubt it.

    What can we do next?

    1) Talk to the team leader. This may stress the relationship and make work awkward. I admit the team leader is generally a nice guy; he is just a horrible leader, and working closely with him is painful. I still don't believe bringing this directly to the team leader would help, and it may negatively impact the situation.
    2) I could quit. Other than this situation, the job is pretty fantastic. I really like my other coworkers, and we have quite a bit of freedom.
    3) I could take the situation with the team leader to one of the owners. I would then be throwing my manager under the bus.
    4) I could take the situation with the test manager to HR.

    Any suggestions? Comments?

    Read the article

  • PeopleSoft Upgrades, Fusion, & BI for Leading European PeopleSoft Applications Customers

    - by Mark Rosenberg
    With so many industry-leading services firms around the globe managing their businesses with PeopleSoft, it's always an adventure setting up times and meetings for us to keep in touch with them, especially those outside of North America who often do not get to join us at Oracle OpenWorld. Fortunately, during the first two weeks of May, Nigel Woodland (Oracle's Service Industries Director for the EMEA region) and I successfully blocked off our calendars to visit seven different customers spanning four countries in Western Europe. We met executives and leaders at four Staffing industry firms, two Professional Services firms that engage in consulting and auditing, and a Financial Services firm. As we shared the latest information regarding product capabilities and plans, we also gained valuable insight into the hot technology topics facing these businesses. What we heard was both informative and inspiring, and I suspect other Oracle PeopleSoft applications customers can benefit from one or more of the following observations from our trip.

    Great IT Plans Get Executed When You Respect the Users

    Each of our visits followed roughly the same pattern. After introductions, Nigel outlined Oracle's product and technology strategy, including a discussion of how we at Oracle invest in each layer of the "technology stack" to provide customers with unprecedented business management capabilities and choice. Then, I provided the specifics of the PeopleSoft product line's investment strategy, detailing the dramatic number of rich usability and functionality enhancements added to release 9.1 since its general availability in 2009 and the game-changing capabilities slated for 9.2.

    What was most exciting about each of these discussions was that shortly after my talking about what customers can do with release 9.1 right now to drive up user productivity and satisfaction, I saw the wheels turning in the minds of our audiences. Business analyst and end user-configurable tools and technologies, such as WorkCenters and the Related Action Framework, that provide the ability to tailor a "central command center" to the exact needs of each recruiter, biller, and every other role in the organization were exactly what each of our customers had been looking for. Every one of our audiences agreed that these tools, which demonstrate a respect for the user, would finally help IT pole vault over the wall of resistance that users had often raised in the past. With these new user-focused capabilities, IT is positioned to definitively partner with the business, instead of dragging the business along, to unlock the value of their investment in PeopleSoft.

    This topic of respecting the user emerged during our very first visit, which was at Vital Services Group at their Head Office "The Mill" in Manchester, England. (If you are a student of architecture and are ever in Manchester, you should stop in to see this amazingly renovated old mill building.) I had just finished explaining our PeopleSoft 9.2 roadmap, and Mike Code, PeopleSoft Systems Manager for this innovative staffing company, said, "Mark, the new features you've shown us in 9.1/9.2 are very relevant to our business. As we forge ahead with the 9.1 upgrade, the ability to configure a targeted user interface with WorkCenters, Related Actions, Pivot Grids, and Alerts will enable us to satisfy the business that this upgrade is for them and will deliver tangible benefits. In fact, you've highlighted that we need to start talking to the business to keep up the momentum to start reviewing the 9.2 upgrade after we get to 9.1, because as much as 9.1 and PeopleTools 8.52 offer, what you've shown us for 9.2 is what we've envisioned was ultimately possible with our investment in PeopleSoft applications."

    We also received valuable feedback about our investment for the Staffing industry when we visited with Hans Wanders, CIO of Randstad (the second largest Staffing company in the world) in the Netherlands. After our visit, Hans noted, "It was very interesting to see how the PeopleSoft applications have developed. I was truly impressed by many of the new developments." Hans and Mike, sincere thanks for the validation that our team's hard work and dedication to "respecting the users" is worth the effort!

    Co-existence of PeopleSoft and Fusion Applications Just Makes Sense

    As a "product person," one of the most rewarding things about visiting customers is that they actually want to talk to me. Sometimes, they want to discuss a product area that we need to enhance; other times, they are interested in learning how to extract more value from their applications; and still others, they want to tell me how they are using the applications to drive real value for the business. During this trip, I was very pleased to hear that several of our customers not only thought the co-existence of Fusion applications alongside PeopleSoft applications made sense in theory, but also that they were aggressively looking at how to deploy one or more Fusion applications alongside their PeopleSoft HCM and FSCM applications. The most common deployment plan in the works by three of the organizations is to upgrade to PeopleSoft 9.1 or 9.2, and then adopt one of the new Fusion HCM applications, such as Fusion Performance Management or the full suite of Fusion Talent Management. For example, during an applications upgrade planning discussion with the staffing company Hays plc., Mark Thomas, who is Hays' UK IT Director, commented, "We are very excited about where we can go with the latest versions of the PeopleSoft applications in conjunction with Fusion Talent Management." Needless to say, this news was very encouraging, because it reiterated that our applications investment strategy makes good business sense for our customers.

    Next Generation Business Intelligence Is the Key to the Future

    The third, and perhaps most exciting, lesson I learned during this journey is that our audiences already know that the latest generation of Business Intelligence technologies will be the "secret sauce" for organizations to transform business in radical ways. While a number of the organizations we visited on the trip have deployed or are deploying Oracle Business Intelligence Enterprise Edition and the associated analytics applications to provide dashboards of easy-to-understand, user-configurable metrics that help optimize business performance according to current operating procedures, what's most exciting to them is being able to use Business Intelligence to change the way an organization does business, grows revenue, and makes a profit. In particular, several executives we met asked whether we can help them minimize the need to have perfectly structured data and at the same time generate analytics that improve order-fulfillment decision-making.

    To them, the path to future growth lies in having the ability to analyze unstructured data rapidly and intuitively, leveraging technology's ability to detect patterns that a human cannot reasonably be expected to see. For illustrative purposes, here is a good example of a business problem where analyzing a combination of structured and unstructured data can produce better results. If you have a resource manager trying to decide which person would be the best fit for an assignment in terms of ensuring (a) client satisfaction, (b) the individual's satisfaction with the work, (c) least travel distance, and (d) highest margin, you traditionally compare resource qualifications to assignment needs, calculate margins on past work with the client, and measure distances. To perform these comparisons, the organization needs profiles set up, people ranked against profiles, margin targets set up, margins measured, distances set up, distances measured, and more. As you can imagine, this requires organizations to plan and implement data setup, capture, and quality management initiatives to ensure that dependable information is available to support resourcing analysis and decisions. In the fast-paced, tight-budget world in which most organizations operate today, the effort and discipline required to maintain high-quality, structured data like those described in the above example are certainly not desirable and in some cases are not feasible.

    You can imagine how intrigued our audiences were when I informed them that we are ready to help them analyze volumes of unstructured data, detect trends, and produce recommendations. Our discussions delved into examples of how the firms could leverage Oracle's Secure Enterprise Search and Endeca technologies to keyword-search against, compare, and learn from unstructured resource and assignment data. We also considered examples of how they could employ Oracle Real-Time Decisions to generate statistically significant recommendations based on similar resourcing scenarios that have produced the desired satisfaction and profit margin results.

    Although I had almost no time for sightseeing during this trip to Europe, I have to say that it may have been one of the most energizing and engaging trips of my career. Showing these dedicated customers how they can give every user a uniquely tailored set of tools and address business problems in ways that have to date been impossible made the journey across the Atlantic more than worth it. If any of these three topics intrigue you, I'd recommend you contact your Oracle applications representative to arrange for more detailed discussions with the appropriate members of our organization.

    Read the article

  • Java FAQ: Everything you need to know

    - by Bruno.Borges
    I frequently receive e-mails from customers with questions such as "when will the next version of Java be released?", "when will Java expire?" or "what changes are coming in the next version?". So I decided to write an FAQ here answering these and many other questions. This post will be kept up to date, so if you have a question, send it to me on Twitter at @brunoborges.

    What is the difference between the Oracle JDK and the OpenJDK?
    The OpenJDK project serves as the open-source reference implementation of Java Standard Edition. Companies such as Oracle, IBM and Azul Systems support and invest in the OpenJDK project to keep evolving the Java platform. The Oracle JDK is based on the OpenJDK, but ships additional tools such as Mission Control, and its virtual machine has some advanced features, for example the Flight Recorder. Up to version 6, Oracle offered two virtual machines: JRockit (BEA) and HotSpot (Sun). As of version 7, Oracle unified the virtual machines, bringing the advanced features of JRockit into the HotSpot VM. See also the OpenJDK FAQ.

    Where can I get beta Early Access binaries of JDK 7, JDK 8 and JDK 9 for testing?
    Within OpenJDK, there is a dedicated project for each Java version. In these projects you can find beta Early Access binaries as well as the source code.

        JDK 6 - http://jdk6.java.net/
        JDK 7 - http://jdk7.java.net/
        JDK 8 - http://jdk8.java.net/
        JDK 9 - http://jdk9.java.net/

    When does Oracle support for Java SE 6, 7 and 8 end?
    Only products and versions with an official release are supported by Oracle (for example, there is no support for beta binaries of JDK 7, JDK 8 or JDK 9). There are two categories of dates that Java users should be aware of:

        EOPU - End of Public Updates: the point at which Oracle no longer makes updates publicly available.
        Oracle Support: Oracle's support policy for products, including Oracle Java SE.

    Oracle Java SE is a product, and therefore its support periods are governed by the Oracle Lifetime Support Policy. Consult that document for up-to-date, version-specific dates. Oracle Java SE 6 has already reached EOPU (End of Public Updates) and is now maintained and updated only for customers with a commercial support contract. For more information, see the Oracle Java SE Support page. The most important thing is to be aware of the EOPU dates for Oracle's Java SE versions: see the Oracle Java SE Support Roadmap page and look for the table named Java SE Public Updates, where you will find the date on which a given Java version reaches EOPU.

    How does Java versioning work?
    In 2013, Oracle announced a new versioning scheme for Java, to make it easy to identify whether a release is a CPU or an LFR, and also to ease the planning and development of fixes and features for future versions.

        CPU - Critical Patch Update: updates containing security fixes. The version number is a multiple of 5, or that multiple plus 1 to keep the number odd. Examples: 7u45, 7u51, 7u55.
        LFR - Limited Feature Release: updates containing functional fixes, performance improvements and new features. Versions are even multiples of 20, ending in 0. Examples: 7u40, 7u60, 8u20.

    When is the next Java SE security update (CPU)?
    CPU releases are controlled and pre-scheduled by Oracle and apply to all products, including Oracle Java SE. These releases happen every 3 months, always on the Tuesday closest to the 17th of January, April, July and October. See the Critical Patch Updates, Security Alerts and Third Party Bulletin page for upcoming dates. If you are interested, you can receive these bulletins directly by e-mail by subscribing to the Oracle Security Bulletin.

    When is the next Java SE feature update (LFR)?
    Oracle reserves the right not to announce these dates, as it does for all of its products. However, it is possible to follow the development of the next version on the OpenJDK project sites. The next version of JDK 7 will be update 60, and beta Early Access binaries are already available for testing. The next version of JDK 8 will be update 20, and beta Early Access binaries are already available for testing.

    Where can I see the changes and fixes in the next version of Java?
    Oracle publishes a changelog for each beta Early Access binary released on the Java.net portal:

        JDK 7 update 60 changelogs
        JDK 8 update 20 changelogs

    When will the Java on my machine (or my users' machines) expire?
    Knowing the Java versioning scheme and the frequency of CPU releases, you can determine when a given Java update will expire. In any case, with each new update Oracle states when that update will expire, directly in the version's release notes. For example, the release notes of Oracle Java SE 7 update 55 say the following in the JRE Expiration Date section:

        The JRE expires whenever a new release with security vulnerability fixes becomes available. Critical patch updates, which contain security vulnerability fixes, are announced one year in advance on Critical Patch Updates, Security Alerts and Third Party Bulletin. This JRE (version 7u55) will expire with the release of the next critical patch update scheduled for July 15, 2014. For systems unable to reach the Oracle Servers, a secondary mechanism expires this JRE (version 7u55) on August 15, 2014. After either condition is met (new release becoming available or expiration date reached), the JRE will provide additional warnings and reminders to users to update to the newer version. For more information, see JRE Expiration Date.

    In other words, version 7u55 will expire with the launch of the next CPU release, pre-scheduled for July 15, 2014. And if the user's computer cannot communicate with Oracle's servers, this version will forcibly expire on August 15, 2014 (through a mechanism built into 7u55). Users are not required to update to LFR versions, so even after the 7u60 release, the current 7u55 version will not expire. See also the release notes of Oracle Java SE 8 update 5.

    I found a bug. How can I report bugs or problems in Java SE to Oracle?
    Whenever possible, test with the beta binaries before the final version is released. If you find any problem with these beta binaries, please describe it in the JDK Project Feedback forum. If you find a problem in a final version of Java, use the Bug Report form. Important: bugs reported through these systems are not considered Support, so there is no response SLA. Oracle reserves the right to keep a bug public or private, and also to inform the user or not about the progress of its resolution.

    I have a question that wasn't answered here. What should I do?
    If you have a question that was not answered here, send it to bruno.borges_at_oracle.com and, if it is relevant, I will try to answer it in this article. For other questions, contact me on Twitter at @brunoborges.

    Read the article

  • CodePlex Daily Summary for Thursday, April 15, 2010

    New Projects

    Application Logging Repository (ALR): The ALR is a light-weight logging framework that allows applications to log events and exceptions to a central repository.
    Arkane.FileProperties.DSS: Arkane.FileProperties.Dss is a library for parsing the file header of a .DSS file (as used by Olympus digital dictaphone systems) to obtain time, v...
    B in conTrol project: This project enables controlling log-in and locking your workstation automatically, identifying you by Bluetooth.
    DarkBook: DarkBook is a personal library project.
    Direct2D for Microsoft .Net: Direct2D, DirectWrite and Windows Imaging wrappers for .Net. This library allows to access Direct2D, DirectWrite and Windows Imaging Windows API f...
    DJ Ware: DJ Ware is an extensible music player with plugin support and innovative features to organize and explore music files. It is developed with C#, WPF...
    gpsMe: gpsMe is a Windows Mobile 6.x mapping solution allowing to place the user on a personnalized map. The screen requirements are VGA or WVGA but, you ...
    jErrorLog: jErrorLog is an error logging component for use in DotNet 2.0 or later applications. It can log error messages to any of the following: database, e...
    KEMET_API: Java library (open source). This library is a help to study Egyptian hieroglyphs.
    Meadow: A web site project for a Swedish floorball team called Slackers. Home page built with ASP.NET 2.0, ASP.NET AJAX and SQL Server 2005.
    Mustang Math: Mustang Math makes it easier for young children to practice basic math facts on the computer. No keyboard or mouse required - just say the answer!...
    Net.Formats.oEmbed: oEmbed format implementation in c#. oEmbed is a format for allowing an embedded representation of a URL on third party sites. The simple API allows...
    Normlize O/R Mapper: Open source O/RM tool that participates with traditional inheritance object models as well as Hibernate/nHibernate style class shells. As I have t...
    N-Twill Twitter Client for VB.NET: A Twitter client project built with the TwitterVB2 library, written in VB.NET 2008.
    SIQM: Spatial Information Quality Management Toolset
    TIMETABLEASY Web: Under development
    TweetSharp: TweetSharp is a complete .NET library for micro-blogging platforms that allows you to write short and sweet expressions that fly to Twitter, Yammer...
    UISandbox: UISandbox is a sample C# source code showing how to deal with plugins requiring sandbox, when those plugins must interact with WPF application inte...
    WinForm SharePoint Web Part Manager: The SharePoint Web Part Manager is a WinForm tool using the SharePoint object model that enables developers and power users to add, update, delete,...
    WoW Character Viewer: View your World of Warcraft character (or anyone else's character), using this application. Written using Visual Basic Express 2008, then ported t...
    Xrns2XMod: Xrns2XMod converts from Renoise format (xrns) to mod or xm, which are more compatible formats playable from xmplay or vlc.

    New Releases

    Arkane.FileProperties.DSS: 1.0 stable release: Executables and merge module for 1.0. (See documentation.)
    Bluetooth Radar: Version 2.0: Add IrDA reference for Bluetooth sending using Obex. Add project icon. Add Bluetooth detection mode (auto-close application if there is no blueto...
    BUtil: BUtil 5.0 Alpha: Backup tasks adding.... in progress
    Chronos WPF: Chronos v1.0 RC 1: Chronos v1.0 RC 1. Development will be feature frozen after this release; only bug fixes will be allowed. Updated nRoute assembly to v0.4 (http:...
    clipShow: Version 2.5: Release that addresses the canonical syntax issues in search discovered by Tschachim (thanks again!). Also, the play list and play all menu items s...
    DarkBook: DarkBook alpha: Hi, here comes the alpha version of DarkBook. It has all the functions already but is still in development. I hope it's helpful for you, at least it...
    DirectQ: Release 1.8.3a: Improvements to 1.8.2, which will shortly be removed. This replaces the original 1.8.3 release from earlier today, which had some late-breaking ...
    Effect Custom Tool for Visual Studio: Effect Custom Tool v1.1: Effect Custom Tool for Visual Studio is a Visual Studio 2008 extension that helps you generate C# classes from effect (*.fx) files for use with XNA...
    Folder Bookmarks: Folder Bookmarks 1.4.3: This is the latest version of Folder Bookmarks (1.4.3), with general improvements. It has an installer - it will create a directory 'CPascoe' in My...
    gpsMe: gpsMe v0.3: Required hardware: Windows Mobile 6, .Net Compact Framework 3.5, integrated GPS device, VGA or WVGA screen (normally works on others)
    IST435: Lab 4 - Enterprise Level CMS with DotNetNuke: This is the "starter kit" that you must base your Lab 4 on. This lab must be completed in class.
    Mouse Jiggler: MouseJiggle-1.1: 1.1 release of Mouse Jiggler, now with x64 compatibility and the ability to start jiggling on run with the --jiggle or -j command-line switch.
    Mustang Math: MustangMath.exe: This is a quick and dirty "0.1" prototype to demonstrate the speech recognition idea. It starts asking you questions automatically on launch and k...
    MvcContrib: a Codeplex Foundation project: 2.0.36.0 for MVC2 (RTW): Please see the change log for a complete list of changes. MVC BootCamp. Description of the releases: MvcContrib.Release.zip, MvcContrib.dll, MvcC...
    Nito.LINQ: Beta (v0.3): New features for this release: several new supported platforms (see below); PDBs that are source-indexed to the appropriate CodePlex changeset. ...
    OpenIdPortableArea: 0.1.0.2 OpenIdPortableArea: OpenIdPortableArea.Release: DotNetOpenAuth.dll, DotNetOpenAuth.xml, MvcContrib.dll, MvcContrib.xml, OpenIdPortableArea.dll, OpenIdPortableAre...
    PokeIn Comet Ajax Library: PokeIn Sample with Library v0.2: New version of the PokeIn library with sample (v0.2). There are new features in this release and no bugs detected yet.
    Project Tru Tiên: Elements-test V1-fix (v2): The EL test fixed following the V1 fix; call this the V2 fix of the EL test. In this fix, EL also receives quest fixes; quests adjusted correctly t...
    Rule 18 - Love your clipboard: Rule 18: This is the third public beta for the first version of Rule 18. This version has been updated to support Visual Studio 2010 RTM and .NET 4.0 RTM. ...
    SevenZipSharp: SevenZipSharp 0.62: Added: extraction from SFX archives. Now it is possible to unrar RAR self-extractors, unzip ZIP self-extractors, etc. Extraction from DOC, XLS, (...
    SharePoint Labs: SPLab3001A-FRA-Level200: This SharePoint lab will teach the persistence object layer that SharePoint uses to centrally store configuration data and o...
    TTXPathNavigator: TTXPathNavigator for VS2010: Version for Visual Studio 2010
    turing machine simulator: SDS: SDS document
    VecDraw: VecDraw_0.2.25: Alpha release for test purposes
    WinForm SharePoint Web Part Manager: Beta 1: First release of the WinForm SharePoint Web Part Manager tool
    Xrns2XMod: Xrns2XMod 0.5.1: Mod and XM conversion format - no sample data conversion at the moment
    Zip Solution: ZipSolution 5.3: Features: 1. Added WaitMsec for Visual Studio support with getting access to files in post-build events; 2. Added ShowTextInToolbars to app.config ...

    Most Popular Projects

    Rawr
    WBFS Manager
    AJAX Control Toolkit
    Microsoft SQL Server Product Samples: Database
    Silverlight Toolkit
    Windows Presentation Foundation (WPF)
    ASP.NET
    Microsoft SQL Server Community & Samples
    PHPExcel
    patterns & practices – Enterprise Library

    Most Active Projects

    Rawr
    patterns & practices – Enterprise Library
    GMap.NET - Great Maps for Windows Forms & Presentation
    Farseer Physics Engine
    Ionics Isapi Rewrite Filter
    NB_Store - Free DotNetNuke Ecommerce Catalog Module
    BlogEngine.NET
    jQuery Library for SharePoint Web Services
    DotRas
    Facebook Developer Toolkit

    Read the article

  • Clarity is important, both in question and in answer.

    - by gerrylowry
    clarity is important ... i'm often reminded of the Clouseau movie in which Peter Sellers as Chief Inspector Clouseau asks a hotel clerk "Does your dog bite?" ... the clerk answers "no" ... after Clouseau has been bitten by the dog, he looks at the hotel clerk, who says "That's not my dog". Clarity is important, both in question and in answer.

i've been a member of forums.asp.net since 2008 ... like many of my peers at forums.asp.net, i've answered my fair share of questions. FWIW, the purpose of this, my first web log post to http://weblogs.asp.net/gerrylowry, is to help new members ask better questions and in turn get better answers.

TIMTOWTDI = there is more than one way to do it

imho, the best way to ask a question in any forum, or even person to person, is to first formulate your question and then try to answer it yourself.

Things to consider when asking (the more complete your question, the more likely you'll get the answer you require):

-- have you searched Google and/or your favourite search engine(s) before posting your question to forums.asp.net? examples:
site:msdn.microsoft.com entity framework 5.0 c# -- http://lmgtfy.com/?q=site%3Amsdn.microsoft.com+entity+framework+5.0+c%23
site:forums.asp.net MVC tutorial c# -- http://lmgtfy.com/?q=site%3Aforums.asp.net+MVC+tutorial+c%23

-- are you asking your question in the correct forum? look at the forums' descriptions at http://forums.asp.net/; examples:
Getting Started: If you have a general ASP.NET question on a topic that's not covered by one of the other more specific forums - ask it here.
MVC: Discussions regarding ASP.NET Model-View-Controller (MVC).
C#: Questions about using C# for ASP.NET development.
Note: if your question pertains more to c# than to MVC, choosing the C# forum is likely to be more appropriate.

-- is your post subject clear and concise, yet not too vague? compare these three subjects (all three had something to do with GridView):
(1) please help
(2) gridview
(3) How to show newline in GridView

-- have you clearly explained your scenario?
compare: "my leg hurts" with "when i walk too much, my right knee hurts in the knee joint"
compare: "my code does not work" with "when i enter a date as 2012-11-8, i get a FormatException"

-- have you checked your spelling, your grammar, and your English? for better or worse, English is the language of forums.asp.net ... many of the currently 170,000++ forums.asp.net members are not native speakers of English; that's okay ... however, there are times when choosing the more appropriate words will likely get one a better answer; fortunately, there are web tools to help you formulate your question, for example, http://translate.google.com/.

-- have you provided relevant information about your environment? here are a few examples ... feel free to include other items in your question ... rule of thumb: if you think a given detail is relevant, it likely is
-- what technology are you using? ASP.NET MVC 4, ASP.NET MVC 3, WebForms, ...
-- what version of Visual Studio are you using? vs2012 (ultimate, professional, express), vs2010, vs2008 ...
-- are you hosting your own website? are you using a shared hosting service?
-- are you experiencing difficulties in just one browser? more than one browser?
-- what browser version(s) are you using? ie8? ie9? ...
-- what is your operating system? win8, win7, vista, XP, server 2008 R2 ...
-- what is your database? SQL Server 2008 R2, ss2005, MySQL, Oracle, ...
-- what is your web server? iis 7.5, iis 6, ...

-- have you provided enough information for someone to be able to answer your question? Here's an actual example from an O.P. that i hope is self-explanatory: "I'm trying to make a simple calculator when i write the code in windows application it worked when i tried it in web application it doesn't work and there are no errors what should i do ??!!"

-- have you included unnecessary information? more than once, i've seen the O.P. (original post, original poster) include many extra lines of code that were not relevant to the actual question; the more unnecessary code you include, the less likely your volunteer peers will be motivated to donate their time to help you.

-- have you asked the question that you want answered? "Does this dog bite?"

-- are your expectations reasonable?
-- generally, persons who are going to answer your questions are your peers ... they are unpaid volunteers ...
-- are you looking for help with your homework, work assignment, or hobby? or are you expecting someone else to do your work for you?
-- do you expect a complete solution, or are you simply looking for guidance and direction?
-- you are likely to get more help by first making a reasonable effort to help yourself

Clarity is important, both in question and in answer.

if you are answering someone else's question, please remember that clear answers are just as important as clear questions; would you understand your own answer?

Things to consider when answering:
-- have you tested your code example? if you have, say so; if you've not tested your code example, also say so
-- imho, it's okay to guess as long as you clearly state that you're guessing ... sometimes a wrong guess can still help the O.P. find her/his way to the right answer
-- meanness does not contribute to being helpful; sometimes one may become frustrated with the O.P. and/or others participating in a thread; if that happens to you, be kind regardless; speaking from my own experience, at least once i've allowed myself to be frustrated into writing something inappropriate that i've regretted later ... being a meany does not feel good ... being kind and helpful feels fantastic!

Tip: before asking your question, read more than a few existing questions and answers to get a sense of how your peers ask and answer questions.

Gerry

P.S.: try to avoid necroposting and piggy backing. necroposting is adding to an old post, especially one that was resolved months ago. piggy backing is adding your own question to someone else's thread.

    Read the article

  • Product Development Investment: A Measure of Vendor Performance

    - by Jim Mcglothlin
    The relationship between a large, complex organization and its key suppliers of information technology is normally more than just "strategic". Expectations about the duration of the relationship typically exceed 20 years. Enterprise applications and technology infrastructure are not expected to be changed out like petunias. So how would you rate the due diligence processes as performed in Higher Education when selecting critical, transformational information technology? My observation: I see a lot of effort put into elaborate demonstration of basic software functionality. I see a lot of attention paid to the cost element of technology acquisition, including the contracted cost of implementation consulting services. But the factor that receives only cursory analysis and due diligence is long-term performance--the ability of a vendor to grow, expand, and develop, and bring its customers along with it.

So what should you look for in a long-term IT supplier? Oracle has a public track record for product development. The annual investment has been on a run rate of almost $3 billion in organic product development. Oracle's well-publicized acquisitions and mergers have been supplemental to its R&D. This is important for Higher Education.

Another meaningful way to evaluate a company is to look at the tangible track record of enhancement. Consider the Oracle-PeopleSoft enterprise business platform since it was acquired by Oracle 6 years ago; each product or technology enhancement below is described by its customer or user impact:

- Service Oriented Architecture (SOA): 300+ new web services delivered in versions 9.0 & 9.1 provide flexibility, so that customers can integrate PeopleSoft with other applications. Campus Solutions has added Admissions and Constituent Web Services.
- Constituent Relationship Management: PeopleSoft CRM 9.1 for Higher Education introduced new process flows for student recruiting and retention to support "Student Success" initiatives. A 360 view of the constituent is now delivered, and the concept of a single-stop Student Services Center is now in CRM 9.1 with tight integration to PeopleSoft Campus Solutions.
- Human Capital Management: Contract Pay for Education, with flexibility for configuration and calculation, has been extended in HCM 9.1. New chartfield integration among Project Costing - Time & Labor - Payroll to serve the labor distribution requirements for Grants / Sponsored Research.
- Talent Management: PeopleSoft 9.0 and 9.1 feature an integrated talent management approach centered on definitions in "Profile Manager", with all new usability improvements. Internal and external candidate pools, and the entire recruitment process, are driven by delivered configurable selection and on-boarding processes. Interview scheduling and online job offers are newly delivered processes.
- Performance Management: PeopleSoft HCM ePerformance 9.1 will include significant new functionality designed to help organizations more effectively align business objectives with employee goals. Using an Organization Chart view, your business goals can flow down to become tangible objectives per employee.
- Succession Planning / Workforce Development: New in HCM 9.0, enhanced in 9.1, is a planning capability for regular or unusual (major organizational change) succession of internal or external candidates. PeopleSoft supports employee-based career planning, which ultimately increases the integrity of the succession planning process (identify their career needs, plans, preferences, and interests).
- Dashboards / Oracle Business Intelligence Application Suite: Oracle Human Resources Analytics provides the workforce information foundation that integrates data from HR functional areas and Finance. Oracle Human Resources Analytics delivers 9 dashboards and over 200 reports. Provide your HR professionals and front-line managers the tools to analyze workforce staffing, retention, and productivity, to better source high-quality applicants, and to reduce absence costs.
- Multi-year Planning and Commitment Control: External funding sources, especially Grants, require a multi-year encumbrance business process. PeopleSoft HCM 9.1 adds multi-year funding and commitment control, including budget checking. The newly designed Real Time Budget Checking will provide the customer with an updated snapshot of their budget and encumbrances at any given time.
- Position Budgeting with Hyperion: Hyperion Planning world-class products now include delivered integration to PeopleSoft HCM. Position Budgeting is available in the new Public Sector Planning module of Hyperion.
- Web 2.0 features for the latest in usability: PeopleSoft 9.1 features a contemporary internet user experience: partial-page refreshing, drag and drop pagelets, new menu structure, navigation pagelets, modal popup message windows, favorites & recently used links, type-ahead, drag and drop grid columns, pop-out grids.
- Portal Workspaces: Enterprise 2.0 for your collaborative web communities, using new content management, along with wikis, blogs, and discussion forums in PeopleSoft Portal 9.1.
- PeopleTools enhanced by Oracle Fusion Middleware: Standards-based tools have been added to the PeopleTools application infrastructure: BI (XML) Publisher, Java tools. Certified for use with PeopleSoft: Oracle Business Intelligence (OBIEE), Oracle Enterprise Manager, Oracle WebLogic Server, Oracle SOA Suite.
- Hosting for PeopleSoft applications: A solid new deployment option: Oracle On Demand remote hosting center for high scalability, security, and continuity of operations.
- Business Process Outsourcing (BPO) for HCM / Payroll functions: Partnership with AT&T provides hosting of the HR/Payroll application along with payroll business process operations, and subscription-based service fees (SaaS). AT&T BPO full service includes pay sheet processing, bank and 3rd party file transfer, payroll tax handling, etc.
- Continuous Delivery Model: Feature Packs provide faster time-to-benefit; new features become available in PeopleSoft 9.1 (or Campus Solutions 9.0) without the need to perform an upgrade.
- Golden person data model across all campus applications: Oracle Higher Education Constituent Hub provides synchronization and data governance of person data across any application, e.g. HR/Payroll, Student Information System, Housing, Emergency Contact, LMS, CRM.

Oracle's aggressive enhancement plans within the "Applications Unlimited" program continue, as new functionality is under development for a new version of a PeopleSoft release planned for 2012. Meanwhile, new capabilities are planned on an annual basis in Feature Packs. PeopleSoft just delivered the HCM 2010 Feature Pack and another is planned for 2011. In February we plan to have over 100 customers from our Customer Advisory Boards at our PeopleSoft Development Center in California to review designs for all of these releases.

For those of you near New York City: the investment and progressive development story described above is the subject of an Oracle road show event on February 9, 2011. Charting Your Course with Oracle Applications is a global event series designed to help business and IT executives assess the impact of new inflection points on their business and applications roadmap: changing workforces, shifting customer and constituent bases, and increased volatility. Learn how innovations ranging from new deployment models like cloud computing to the introduction of social applications and smart devices are delivering results across all areas of business and industry.

THIS DOCUMENT IS FOR INFORMATIONAL PURPOSES ONLY AND MAY NOT BE INCORPORATED INTO A CONTRACT OR AGREEMENT.

    Read the article

  • How To Remove People and Objects From Photographs In Photoshop

    - by Eric Z Goodnight
    You might think that it's a complicated process to remove objects from photographs. But really, Photoshop makes it quite simple, even when removing all traces of a person from digital photographs. Read on to see just how easy it is.

Photoshop was originally created to be an image editing program, and it excels at it. With hardly any Photoshop experience, any beginner can begin removing objects or people from their photos. Have some friends that photobombed an otherwise great pic? Tell them to say their farewells, because here's how to get rid of them with Photoshop!

Tools for Removing Objects

Removing an object is not really "magical" work. Your goal is basically to cover up the information you don't want in an image with information you do want. In this sample image, we want to remove the cigar-smoking man and leave the geisha. Here are a couple of the tools that can be useful when attempting this kind of task.

Clone Stamp and Pattern Stamp Tool: Samples parts of your image from your background, and allows you to paint into your image with your mouse or stylus.

Eraser and Brush Tools: Paint flat colors and shapes, and erase cloned layers of image information. Basic, down and dirty photo editing tools.

Pen, Quick Selection, Lasso, and Crop tools: Select, isolate, and remove parts of your image with these selection tools. All useful in their own way. Some, like the Pen tool, are nightmarishly tough on beginners.

Remove a Person with the Clone Stamp Tool (Video)

The video above uses the Clone Stamp tool to sample and paint with the background texture. It's a simple tool to use, although it can be confusing, possibly counter-intuitive. Here are some pointers, in addition to the video above.

Press the S shortcut key to choose the Clone Stamp tool from the Tools Panel. Always create a copy of your background layer before doing heavy edits by right-clicking on the background in your Layers Panel and selecting "Duplicate." Hold Alt with the Clone Tool selected, and click anywhere in your image to sample that area. When you're sampling an area, your cursor is "Aligned" with your sample area. When you paint, your sample area moves. You can turn the "Aligned" setting off by clicking its checkbox in the Options Panel at the top of your screen if you want. Change your brush size and hardness as shown in the video by right-clicking in your image. Use your lasso to copy and paste pieces of your image in order to cover up any parts that seem appropriate.

Photoshop Magic with the "Content-Aware Fill"

One of the hallmark features of CS5 is the "Content-Aware Fill." Content-Aware Fill can be an excellent shortcut to removing objects and even people in Photoshop, but it is somewhat limited, and can get confused. Here's a basic rundown on how it works.

Select an object using your Lasso tool, shortcut key L. The Lasso works fine, as this selection can be rough. Navigate to Edit > Fill, and select "Content-Aware," as illustrated above, from the pull-down menu. It's surprisingly simple. After some processing, Photoshop has done the work of removing the object for you. It takes a few moments, and it is not perfect, so be prepared to touch it up with some copy-paste, or some Clone Stamp action.

Content-Aware Fill Has Its Limits

Keep in mind that Content-Aware Fill is meant to be used with other techniques in mind. It doesn't always perform perfectly, but can give you a great starting point. Take this image, for instance. It is actually plausible to hide this figure and make this image look like he was never there at all. With a selection made with the Lasso tool, navigate to Edit > Fill and select "Content-Aware" again. The result is surprisingly good, but as you can see, worthy of some touch up. With a result like this one, you'll have to get your hands dirty with copy-paste to create believable lines in the background. With many photographs, Content-Aware Fill will simply get confused and give you results you won't be happy with.

Additional Touch Up for Bad Background Textures with the Pattern Stamp Tool

For the perfectionist, cleaning up the lumpy-looking textures that the Clone Stamp can leave is fairly simple using the Pattern Stamp Tool.

Sample a piece of your image with your Marquee Tool, shortcut key M. Navigate to Edit > Define Pattern to create a new pattern from your selection. Click OK to continue. Click and hold down on the Clone Stamp tool in your Tools Panel until you can select the Pattern Stamp Tool. Pick your new pattern from the Options at the top of your screen, in the Options Panel. Then simply right-click in your image in order to pick as soft a brush as possible to paint with. Paint into your image until your background is as smooth as you want it to be, making your painted-out object more and more invisible. If you get lines from your repeated texture, experiment with turning the "Aligned" option on and off and paint over them.

In addition to this, simple use of the Crop Tool, shortcut C, can recompose an image, making it look as if it never had another object in it at all. Combine these techniques to find a method that works best for your images.

Have questions or comments concerning Graphics, Photos, Filetypes, or Photoshop? Send your questions to [email protected], and they may be featured in a future How-To Geek Graphics article.

Image Credits: Geisha Kyoto Gion by Todd Laracuenta via Wikipedia, used under Creative Commons. Moai Rano raraku by Aurbina, in Public Domain. Chris Young visits Wrigley by TonyTheTiger, via Wikipedia, used under Creative Commons.

    Read the article

  • How to Easily Put a Windows PC into Kiosk Mode With Assigned Access

    - by Chris Hoffman
    Windows 8.1's Assigned Access feature allows you to easily lock a Windows PC to a single application, such as a web browser. This feature makes it easy for anyone to configure Windows 8.1 devices as point-of-sale or other kiosk systems. In the past, setting up a Windows PC in kiosk mode involved much more work, requiring the use of third-party software, group policy, or Linux distributions designed around kiosk mode. Assigned Access is available on Windows 8.1 RT, Windows 8.1 Professional, and Windows 8.1 Enterprise. The standard edition of Windows 8.1 doesn't support Assigned Access.

Create a User Account for Assigned Access

Rather than turn your entire computer into a locked-down kiosk system, Assigned Access allows you to create a separate user account that can only launch a single app, such as a web browser. To set this up, you must be logged into Windows as a user with administrator permissions.

First, open the PC settings app: swipe in from the right or press Windows Key + C to open the charms bar, tap Settings, and tap Change PC settings. In the PC settings app, select Accounts and select Other accounts. Use the Add an account button to create a new Windows account. Select the "Sign in without a Microsoft account" option and select Local account to create a local user account. You could also create a Microsoft account, but you may not want to do this if you just want a locked-down account with only browser access. If you need to install apps from the Windows Store to use in Assigned Access mode, you'll have to set up a Microsoft account instead of a local account. A local account will still allow you access to the preinstalled apps, such as Internet Explorer.

You may want to create a user account with a blank password. This would make it simple for anyone to access kiosk mode, even if the system becomes locked or needs to be rebooted. The account will be created as a standard user account with limited permissions. Leave it as a standard user account; don't make it an administrator account.

Set Up Assigned Access

Once you've created an account, you'll first need to sign into it. If you don't, you'll see a "This account has no apps" message when trying to enable Assigned Access. Go back to the welcome screen, log in to the new account you created, and allow Windows to go through the first-time account setup process. If you want to use a non-default app in kiosk mode, install it while logged in as that user account.

Once you're done, log out of the other account, log back in as your administrator account, and go back to the Other accounts screen. Click the Set up an account for assigned access option to continue. Select the user account you created and select the app you want to limit the account to. For a web-based kiosk, this can be a web browser such as the Modern version of Internet Explorer. Businesses can also create their own Modern apps and set them to run in kiosk mode in this way. Note that Microsoft's documentation says "web browsers are not good choices for assigned access" because they require more permissions than average Modern (or "Windows Store") apps. However, if you want to provide a kiosk for web-browsing, using Assigned Access is a much better option than using Guest Mode and offering up a full Windows desktop.

When you're done, restart your PC and log in as the Assigned Access account. Windows will automatically open the app you chose and won't allow a user to leave that app. Standard Windows 8 features like the charms bar, app switcher, and Start screen won't appear. Pressing the Windows key once will do nothing. To sign out of Assigned Access mode, press the Windows key five times, quickly, while signed in. You'll be sent back to the standard login screen. The account will actually still be logged in and the app will remain running; this method just "locks" the screen and allows another user to log in.

Automatically Log Into Assigned Access

Whenever your Windows device boots, you can log into the Assigned Access account and turn it into a kiosk system. While this isn't ideal for all kiosk systems, you may want the device to automatically launch the specific app when it boots without requiring any login process. To do so, you'll just need to have Windows automatically log into the Assigned Access account when it boots. This option is hidden and not available in the standard Control Panel. You'll need to use the hidden netplwiz Control Panel tool to set up automatic login on boot. If you didn't create a password for the user account, leave the Password field empty while configuring this.

Security Considerations

If you're using this feature to turn a Windows 8.1 system into a kiosk and leaving it open to the public, remember to consider security. Anyone could come up to the system, press the Windows key five times, and try to log into your standard administrator user account. Ensure the administrator user account has a strong password so people won't be able to get past the kiosk system's limitations and tamper with the system.

Even Windows 8's detractors have to admit that it's an ideal system for a touch-screen kiosk device, running either a browser or another specific application. Assigned Access finally makes this easy to set up on Windows systems in the real world: no IT experience, third-party software, or Linux distributions necessary.

    Read the article

  • Silverlight for Everyone!!

    - by subodhnpushpak
    Someone asked me to compare Silverlight / HTML development. I realized that the question can be answered in many ways. Below is a high-level comparison between an HTML / JavaScript client and a Silverlight client, and why Silverlight was chosen over an HTML / JavaScript client (based on the type of users and the major functionality provided):

1. For end users

- Browser compatibility: Silverlight is a plug-in and requires installation first. However, it provides a consistent look and feel across all browsers. For HTML / DHTML, there is a need to tweak JavaScript for each supported browser. In fact, tags like <span> and <div> work differently on different browsers / versions. So HTML works on most systems but requires a lot of coding effort to adhere to all standards / browsers / versions.
- Out-of-browser support: No support in HTML. Third-party tools like Google Gears offer some functionality, but there are lots of issues around platform and accessibility. Silverlight has out-of-the-box support for running out of the browser, and provides features like drag and drop onto the application surface.
- Cut, copy and paste: HTML is displayed in the browser, which in turn provides facilities for cut, copy and paste. Silverlight (especially 4) provides rich features for cut-copy-paste, along with full control over what can be cut, copied and pasted by end users, and advanced features like visual tree printing.
- Rich user experience: HTML can provide some rich experience through JavaScript libraries like jQuery. However, extensive use of JavaScript, combined with the various browser versions and the JavaScript each supports, makes the solution cumbersome. Silverlight is meant for the RIA experience.
- User data storage on the client: In HTML only a small amount of data can be stored, and only in cookies. In Silverlight large data may be stored, and in a secure way. This improves response time.
- Post back: In HTML / JavaScript the post back can be avoided by use of AJAX. Extensive use of AJAX can be a bottleneck, as the browser stack is used for the calls, and both look-and-feel and data travel over the network. In Silverlight everything runs on the client side. Calls are made to the server ONLY for data, which also reduces network traffic in the long run.

2. For Developers

- Coding effort: HTML / JavaScript can take a considerable amount of code if the features (requirements) are rich. For AJAX-like interfaces, knowledge of third-party kits like DOJO / Yahoo UI / jQuery is required, which have a steep learning curve. The ASP.NET coding world revolves mostly around <table> tags for alignment, whereas most popular tools produce <div> tags, which requires lots of tweaking. AJAX calls can be a performance bottleneck if the calls are many. In Silverlight, coding is in C#, which is managed code. XAML is also very intuitive, and Blend can be used to provide the look and feel. Event handling is much cleaner than in JavaScript. Silverlight provides for many clean patterns like MVVM and composable applications. Each call to the server is asynchronous in Silverlight, AJAX is built in, threading can be done on the client side itself for better responsiveness, etc.
- Debugging: Debugging in HTML / JavaScript is difficult. As JavaScript is interpreted, there is NO compile-time error handling. Debugging in Silverlight is very helpful: as it is compiled, it provides rich features for both compile-time and run-time error handling.
- Multi-targeting browsers: HTML / JavaScript has different rendering behaviours in different browsers and versions, so JavaScript has to be written to smooth over the differences in browser behaviour. Silverlight works exactly the same in all browsers, and works in almost all popular browsers.
- Multi-targeting desktop: No support in HTML / JavaScript. Silverlight is very close to WPF, so both platforms may be targeted easily while maintaining the same source code.
- Rich toolkit: HTML / JavaScript has a limited toolkit of controls. Silverlight provides a rich set of controls including graphs, audio, video, layout, etc.

3. For Architects

- Design Patterns: Silverlight provides for patterns like MVVM (MVC) and a rich (fat) client architecture. This makes the "separation of concerns" very clear: the client (Silverlight) does what it is expected to do, and the server does what is expected of it. In the HTML / JavaScript world, most of the processing is done on the server side.
- Extensibility: Silverlight provides a great deal of extensibility, as custom controls may be made. Extensibility is NOT restricted by the browser but by the plug-in Silverlight runs in. HTML / JavaScript works in a certain way, and extensibility is generally done on the server side rather than the client end; the client side is restricted by the limitations of the browser.
- Performance: Silverlight provides localized storage which may be used for cached data, reducing response time. As processing can be done on the client side itself, there is no need for server round trips, which decreases turnaround time. The look and feel of the application is downloaded ONLY initially; afterwards ONLY data is fetched from the server.
- Security: Silverlight is compiled code downloaded as a .XAP; compared to HTML / JavaScript, it provides a more secure, sandboxed approach. Cross-scripting is inherently prohibited in Silverlight by default. If proper guidelines are followed, Silverlight provides a much more robust security mechanism than the HTML / JavaScript world. For example, finding a server address in obfuscated JavaScript is easier than in a compressed, compiled, obfuscated Silverlight .XAP file.

Some of these features (like offline and canvas support) will be available in HTML5. However, the timelines are not encouraging at all. According to Ian Hickson, editor of the HTML5 specification, the specification is expected to reach the W3C Candidate Recommendation stage during 2012, and W3C Recommendation in the year 2022 or later. See http://en.wikipedia.org/wiki/HTML5 for details.

The above is MY opinion. I will love to hear yours; do let me know via comments. Technorati Tags: Silverlight

    Read the article

  • Using JQuery tabs in an HTML 5 page

    - by nikolaosk
    In this post I will show you how to create a simple tabbed interface using jQuery, HTML 5 and CSS. Make sure you have downloaded the latest version of jQuery (minified version) from http://jquery.com/download. Please find here all my posts regarding jQuery. Also have a look at my posts regarding HTML 5.

In order to be absolutely clear, this is not (and could not be) a detailed tutorial on HTML 5. There are other great resources for that. Navigate to the excellent interactive tutorials of W3School. Another excellent resource is HTML 5 Doctor. Two very nice sites that show you what features and specifications are implemented by various browsers and their versions are http://caniuse.com/ and http://html5test.com/. At this time Chrome seems to support most of the HTML 5 specifications. Another excellent way to find out if the browser supports HTML 5 and CSS 3 features is to use the lightweight JavaScript library Modernizr.

In this hands-on example I will be using Expression Web 4.0. This application is not a free application. You can use any HTML editor you like. You can use Visual Studio 2012 Express edition. You can download it here.

Let me move on to the actual example. This is the sample HTML 5 page; note that the tab links simply point to the in-page fragments #first-tab, #second-tab and #third-tab:

<!DOCTYPE html>
<html lang="en">
  <head>
    <title>Liverpool Legends</title>
    <meta http-equiv="Content-Type" content="text/html;charset=utf-8" >
    <link rel="stylesheet" type="text/css" href="style.css">
    <script type="text/javascript" src="jquery-1.8.2.min.js"></script>
    <script type="text/javascript" src="tabs.js"></script>
  </head>
  <body>
    <header>
      <h1>Liverpool Legends</h1>
    </header>
    <section id="tabs">
      <ul>
        <li><a href="#first-tab">Defenders</a></li>
        <li><a href="#second-tab">Midfielders</a></li>
        <li><a href="#third-tab">Strikers</a></li>
      </ul>
      <div id="first-tab">
        <h3>Liverpool Defenders</h3>
        <p>The best defenders that played for Liverpool are Jamie Carragher, Sami Hyypia, Ron Yeats and Alan Hansen.</p>
      </div>
      <div id="second-tab">
        <h3>Liverpool Midfielders</h3>
        <p>The best midfielders that played for Liverpool are Kenny Dalglish, John Barnes, Ian Callaghan, Steven Gerrard and Jan Molby.</p>
      </div>
      <div id="third-tab">
        <h3>Liverpool Strikers</h3>
        <p>The best strikers that played for Liverpool are Ian Rush, Roger Hunt, Robbie Fowler and Fernando Torres.</p>
      </div>
    </section>
    <footer>
      <p>All Rights Reserved</p>
    </footer>
  </body>
</html>

This is very simple HTML markup. I have styled this markup using CSS. The contents of the style.css file follow:

* {
    margin: 0;
    padding: 0;
}
header {
    font-family: Tahoma;
    font-size: 1.3em;
    color: #505050;
    text-align: center;
}
#tabs {
    font-size: 0.9em;
    margin: 20px 0;
}
#tabs ul {
    float: left;
    background: #777;
    width: 260px;
    padding-top: 24px;
}
#tabs li {
    margin-left: 8px;
    list-style: none;
}
* html #tabs li {
    display: inline;
}
#tabs li, #tabs li a {
    float: left;
}
#tabs ul li.active {
    border-top: 2px red solid;
    background: #15ADFF;
}
#tabs ul li.active a {
    color: #333333;
}
#tabs div {
    background: #15ADFF;
    clear: both;
    padding: 15px;
    min-height: 200px;
}
#tabs div h3 {
    margin-bottom: 12px;
}
#tabs div p {
    line-height: 26px;
}
#tabs ul li a {
    text-decoration: none;
    padding: 8px;
    color: #0b2f20;
    font-weight: bold;
}
footer {
    background-color: #999;
    width: 100%;
    text-align: center;
    font-size: 1.1em;
    color: #002233;
}

There are some CSS rules that style the various elements in the HTML 5 file. These are straight-forward rules. The jQuery code lives inside the tabs.js file:

$(document).ready(function() {
    $('#tabs div').hide();
    $('#tabs div:first').show();
    $('#tabs ul li:first').addClass('active');

    $('#tabs ul li a').click(function() {
        $('#tabs ul li').removeClass('active');
        $(this).parent().addClass('active');
        var currentTab = $(this).attr('href');
        $('#tabs div').hide();
        $(currentTab).show();
        return false;
    });
});

I am using some of the most commonly used jQuery functions, like hide, show, addClass and removeClass. I hide and show the tabs when a tab becomes the active tab. When I view my page I get the expected tabbed result. Hope it helps!!!!!

    Read the article

  • Summit Time!

    - by Ajarn Mark Caldwell
    Boy, how time flies!  I can hardly believe that the 2011 PASS Summit is just one week away.  Maybe it snuck up on me because it's a few weeks earlier than last year.  Whatever the cause, I am really looking forward to next week.  The PASS Summit is the largest SQL Server conference in the world, with a fantastic networking opportunity thrown in for no additional charge.  Here are a few thoughts to help you maximize the week.

Networking

As Karen Lopez (blog | @DataChick) mentioned in her presentation for the Professional Development Virtual Chapter just a couple of weeks ago, "Don't wait until you need a new job to start networking."  You should always be working on your professional network.  Some people, especially technical-minded people, get confused by the term networking.  The first image that used to pop into my head was the image of some guy standing, awkwardly, off to the side of a cocktail party, trying to shmooze those around him.  That's not what I'm talking about.  If you're good at that sort of thing, and you can strike up a conversation with some stranger and learn all about them in 5 minutes, and walk away with your next business deal all but approved by the lawyers, then congratulations.  But if you're not, and most of us are not, I have two suggestions for you.  First, register for Don Gabor's 2-hour session on Tuesday at the Summit called Networking to Build Business Contacts.  Don is a master at small talk, and at teaching others, and in just those two short hours will help you with important tips about breaking the ice, remembering names, and smooth transitions into and out of conversations.  Then go put that great training to work right away at the Tuesday night Welcome Reception and meet some new people; which is really my second suggestion... just meet a few new people.  You see, "networking" is about meeting new people and being friendly without trying to "work it" to get something out of the relationship at this point.  In fact, Don will tell you that a better way to build the connection with someone is to look for some way that you can help them, not how they can help you.

There are a ton of opportunities as long as you follow this one key point: Don't stay in your hotel!  At the least, get out and go to the free events such as the Tuesday night Welcome Reception, the Wednesday night Exhibitor Reception, and the Thursday night Community Appreciation Party.  All three of these are perfect opportunities to meet other professionals with a similar job or interest as you, and you never know how that may help you out in the future.  Maybe you just meet someone to say HI to at breakfast the next day instead of eating alone.  Or maybe you cross paths several times throughout the Summit and compare notes on different sessions you attended.  And you just might make new friends that you look forward to seeing year after year at the Summit.  Who knows, it might even turn out that you have some specific experience that will help out that other person a few months from now when they run into the same challenge that you just overcame, or vice-versa.  But the point is, if you don't get out and meet people, you'll never have the chance for anything else to happen in the future.

One more tip for shy attendees of the Summit: if you can't bring yourself to strike up conversation with strangers at these events, then at the least, after you sit through a good session that helps you out, go up to the speaker, introduce yourself, and thank them for taking the time and effort to put together their presentation.  Ideally, when you do this, tell them WHY it was beneficial to you (e.g. "Now I have a new idea of how to tackle a problem back at the office.")  I know you think the speakers are all full of confidence and are always receiving a ton of accolades and applause, but you're wrong.  Most of them will be very happy to hear first-hand that all the work they put into getting ready for their presentation is paying off for somebody.

Training

With over 170 technical sessions at the Summit, training is what it's all about, and the training is fantastic!  Of course there are the big-name trainers like Paul Randall, Kimberly Tripp, Kalen Delaney, Itzik Ben-Gan and several others, but I am always impressed by the quality of the training put on by so many other "regular" members of the SQL Server community.  It is amazing how you don't have to be a published author or otherwise recognized as an "expert" in an area in order to make a big impact on others just by sharing your personal experience and lessons learned.  I would rather hear the story of, and lessons learned from, "some guy or gal" who has actually been through an issue and came out the other side, than I would a trained professor who is speaking just from theory or an intellectual understanding of a topic.

In addition to the three full days of regular sessions, there are also two days of pre-conference intensive training available.  There is an extra cost to this, but it is a fantastic opportunity.  Think about it: you're already coming to this area for training, so why not extend your stay a little bit and get some in-depth training on a particular topic or two?  I did this for the first time last year.  I attended one day of extra training and it was well worth the time and money.  One of the best reasons for it is that I am extremely busy at home with my regular job and family, and it was hard to carve out the time to learn about the topic on my own.  It worked out so well last year that I am doubling up and doing two days of "pre-cons" this year.

And then there are the DVDs.  I think these are another great option.  I used the online schedule builder to get ready and have an idea of which sessions I want to attend and when they are (much better than trying to figure this out at the last minute every day).  But the problem that I have run into (seems this happens every year) is that nearly every session block has two different sessions that I would like to attend.  And some of them have three!  ACK!  That won't work!  What is a guy supposed to do?  Well, one option is to purchase the DVDs, which are recordings of the audio and projected images from each session, so you can continue to attend sessions long after the Summit is officially over.  Yes, many (possibly all) of these also get posted online and attendees can access those for no extra charge, but those are not necessarily all available as quickly as the DVD recordings are, and the DVDs are often more convenient than downloading, especially if you want to share the training with someone who was not able to attend in person.

Remember, I don't make any money or get any other benefit if you buy the DVDs or from anything else that I have recommended here.  These are just my own thoughts, trying to help out based on my experiences from the 8 or so Summits I have attended.  There is nothing like the Summit.  It is an awesome experience, fantastic training, and a whole lot of fun, which is just compounded if you'll take advantage of the first part of this article and make some new friends along the way.

    Read the article

  • Routes on a sphere surface - Find geodesic?

    - by CaNNaDaRk
    I'm working with some friends on a browser-based game where people can move on a 2D map. It's been almost 7 years and still people play this game, so we are thinking of a way to give them something new. Until now the game map was a limited plane and people could move from (0, 0) to (MAX_X, MAX_Y) in quantized X and Y increments (just imagine it as a big chessboard). We believe it's time to give it another dimension so, just a couple of weeks ago, we began to wonder how the game could look with other mappings:

- Unlimited plane with continuous movement: this could be a step forward but still I'm not convinced.
- Toroidal world (continuous or quantized movement): sincerely, I worked with a torus before but this time I want something more...
- Spherical world with continuous movement: this would be great!

What we want

Users' browsers are given a list of coordinates like (latitude, longitude) for each object on the spherical surface map; browsers must then show this on the user's screen, rendering the objects inside a web element (canvas maybe? this is not a problem). When people click on the plane we convert the (mouseX, mouseY) to (lat, lng) and send it to the server, which has to compute a route between the current user's position and the clicked point.

What we have

We began writing a Java library with many useful maths to work with rotation matrices, quaternions, Euler angles, translations, etc. We put it all together and created a program that generates sphere points, renders them and shows them to the user inside a JPanel. We managed to catch clicks and translate them to spherical coords, and to provide some other useful features like view rotation, scale, translation etc. What we have now is like a little (very little indeed) engine that simulates client and server interaction. The client side shows points on the screen and catches other interactions; the server side renders the view and does other calculus, like interpolating the route between the current position and the clicked point.

Where is the problem?

Obviously we want to have the shortest path to interpolate between the two route points. We use quaternions to interpolate between two points on the surface of the sphere, and this seemed to work fine until I noticed that we weren't getting the shortest path on the sphere surface. We thought the problem was that the route is calculated as the sum of two rotations about the X and Y axes, so we changed the way we calculate the destination quaternion: we get a third angle (the first is latitude, the second is longitude, the third is the rotation about the vector which points toward our current position), which we called orientation. Now that we have the "orientation" angle, we rotate the Z axis and then use the resulting vector as the rotation axis for the destination quaternion (you can see the rotation axis in grey).

What we got is the correct route (you can see it lies on a great circle), but we get this ONLY if the starting route point is at latitude, longitude (0, 0), which means the starting vector is (sphereRadius, 0, 0). With the previous version (image 1) we don't get a good result even when the starting point is (0, 0), so I think we're moving towards a solution, but the procedure we follow to get this route is a little "strange", maybe? In the following image you get a view of the problem we get when the starting point is not (0, 0): as you can see, the starting point is not the (sphereRadius, 0, 0) vector, and the destination point (which is correctly drawn!) is not on the route. The magenta point (the one which lies on the route) is the route's ending point rotated about the center of the sphere by (-startLatitude, 0, -startLongitude). This means that if I calculate a rotation matrix and apply it to every point on the route, maybe I'll get the real route, but I start to think that there's a better way to do this. Maybe I should try to get the plane through the center of the sphere and the route points, intersect it with the sphere, and get the geodesic? But how?

Sorry for being way too verbose and maybe for incorrect English, but this thing is blowing my mind!

EDIT: This code version is related to the first image:

public void setRouteStart(double lat, double lng) {
    EulerAngles tmp = new EulerAngles(Math.toRadians(lat), 0, -Math.toRadians(lng));
    // set route start
    qtStart.setInertialToObject(tmp);
    // do other stuff like drawing start point...
}

public void impostaDestinazione(double lat, double lng) {
    EulerAngles tmp = new EulerAngles(Math.toRadians(lat), 0, -Math.toRadians(lng));
    qtEnd.setInertialToObject(tmp);
    // do other stuff like drawing dest point...
}

public V3D interpolate(double totalTime, double t) {
    double _t = t / totalTime;
    Quaternion q = Quaternion.Slerp(qtStart, qtEnd, _t);
    RotationMatrix.inertialQuatToIObject(q);
    V3D p = matInt.inertialToObject(V3D.Xaxis.scale(sphereRadius));
    // other stuff, like drawing point...
    return p;
}

// mostly taken from a book!
public static Quaternion Slerp(Quaternion q0, Quaternion q1, double t) {
    double cosO = q0.dot(q1);
    double q1w = q1.w;
    double q1x = q1.x;
    double q1y = q1.y;
    double q1z = q1.z;
    if (cosO < 0.0f) {
        q1w = -q1w;
        q1x = -q1x;
        q1y = -q1y;
        q1z = -q1z;
        cosO = -cosO;
    }
    double sinO = Math.sqrt(1.0f - cosO * cosO);
    double O = Math.atan2(sinO, cosO);
    double oneOverSinO = 1.0f / sinO;
    double k0 = Math.sin((1.0f - t) * O) * oneOverSinO;
    double k1 = Math.sin(t * O) * oneOverSinO;
    // Interpolate
    return new Quaternion(
        k0 * q0.w + k1 * q1w,
        k0 * q0.x + k1 * q1x,
        k0 * q0.y + k1 * q1y,
        k0 * q0.z + k1 * q1z
    );
}

A little dump of what I get (again, check image 1):

Route info:
Sphere radius and center: 200,000, (0.0, 0.0, 0.0)
Route start: lat 0,000 °, lng 0,000 ° @v: (200,000, 0,000, 0,000), |v| = 200,000
Route end: lat 30,000 °, lng 30,000 ° @v: (150,000, 86,603, 100,000), |v| = 200,000
Qt dump: (w, x, y, z), rot. angle°, (x, y, z) rot. axis
Qt start: (1,000, 0,000, -0,000, 0,000); 0,000 °; (1,000, 0,000, 0,000)
Qt end: (0,933, 0,067, -0,250, 0,250); 42,181 °; (0,186, -0,695, 0,695)

Route start: lat 30,000 °, lng 10,000 ° @v: (170,574, 30,077, 100,000), |v| = 200,000
Route end: lat 80,000 °, lng -50,000 ° @v: (22,324, -26,604, 196,962), |v| = 200,000
Qt dump: (w, x, y, z), rot. angle°, (x, y, z) rot. axis
Qt start: (0,962, 0,023, -0,258, 0,084); 31,586 °; (0,083, -0,947, 0,309)
Qt end: (0,694, -0,272, -0,583, -0,324); 92,062 °; (-0,377, -0,809, -0,450)
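If it helps, the geodesic can also be computed without quaternions at all: the great circle through the two route points lies exactly in the plane through the sphere's center spanned by their position vectors, and interpolating along it only needs a dot product, a cross product and the standard slerp weights applied directly to the vectors. Below is a minimal, self-contained Java sketch of that idea; it uses plain double[] vectors instead of the V3D / Quaternion classes above, the class and method names are invented for illustration, and it has not been tested against the poster's engine:

// Great-circle (geodesic) interpolation between two points on a sphere.
public final class GreatCircle {

    static double dot(double[] a, double[] b) {
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
    }

    static double[] cross(double[] a, double[] b) {
        return new double[] {
            a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]
        };
    }

    static double norm(double[] a) {
        return Math.sqrt(dot(a, a));
    }

    // Point at parameter t (0..1) on the shortest great-circle arc from p0 to p1.
    // p0 and p1 must have the same length (the sphere radius).
    public static double[] slerp(double[] p0, double[] p1, double t) {
        // Central angle between the points; atan2 is numerically safer than acos.
        double omega = Math.atan2(norm(cross(p0, p1)), dot(p0, p1));
        if (omega < 1e-12) {
            return p0.clone(); // coincident points: nothing to interpolate
        }
        // Note: antipodal points (omega ~ pi) have no unique shortest arc
        // and are not handled by this sketch.
        double k0 = Math.sin((1.0 - t) * omega) / Math.sin(omega);
        double k1 = Math.sin(t * omega) / Math.sin(omega);
        return new double[] {
            k0 * p0[0] + k1 * p1[0],
            k0 * p0[1] + k1 * p1[1],
            k0 * p0[2] + k1 * p1[2]
        };
    }

    // Same convention as the dump above: lat/lng (0, 0) maps to (r, 0, 0).
    static double[] latLngToVector(double latDeg, double lngDeg, double r) {
        double lat = Math.toRadians(latDeg);
        double lng = Math.toRadians(lngDeg);
        return new double[] {
            r * Math.cos(lat) * Math.cos(lng),
            r * Math.cos(lat) * Math.sin(lng),
            r * Math.sin(lat)
        };
    }

    public static void main(String[] args) {
        double r = 200.0;
        double[] start = latLngToVector(30, 10, r);  // (170.574, 30.077, 100.000)
        double[] end   = latLngToVector(80, -50, r); // (22.324, -26.604, 196.962)
        for (int i = 0; i <= 10; i++) {
            double[] p = slerp(start, end, i / 10.0);
            System.out.printf("t=%.1f -> (%.3f, %.3f, %.3f)%n", i / 10.0, p[0], p[1], p[2]);
        }
    }
}

Every point this returns stays on the sphere and lies on the single great circle through the two endpoints, which is exactly the plane-through-the-center intersection asked about at the end of the question, with no dependence on the starting point being at (0, 0).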

    Read the article

  • Oracle Tutor: Top 10 to Implement Sustainable Policies and Procedures

    - by emily.chorba(at)oracle.com
    Overview

Your organization (executives, managers, and employees) understands the value of having written business process documents (process maps, procedures, instructions, reference documents, and form abstracts). Policies and procedures should be documented because they help to reduce the range of individual decisions and encourage management by exception: the manager only needs to give special attention to unusual problems not covered by a specific policy or procedure. As more and more procedures are written to cover recurring situations, managers will begin to make decisions which will be consistent from one functional area to the next.

Companies should take a project management approach when implementing an environment for a sustainable documentation program and do the following:

1. Identify an Executive Champion
2. Put together a winning team
3. Assign ownership
4. Centralize publishing
5. Establish the document maintenance process up front
6. Document critical activities only
7. Document actual practice
8. Minimize documentation
9. Support continuous improvement
10. Keep it simple

1. Identify an Executive Champion

Appoint a top-down driver. Select one key individual to be a mentor for the procedure planning team. The individual should be a senior manager, such as your company president, CIO, CFO, or the vice-president of quality, manufacturing, or engineering. Written policies and procedures can be important supportive aids when they are known to express the thinking of the chief executive officer and/or the president and to have his or her full support.

2. Put Together a Winning Team

Choose a strong Project Management Leader and staff the procedure planning team with management members from cross-functional groups. Make sure team members have the responsibility - and the authority - to make things happen. The winning team should consist of the Documentation Project Manager, Document Owners (one for each functional area), a Document Controller, and Document Specialists (as needed). The Tutor Implementation Guide has complete job descriptions for these roles.

3. Assign Ownership

It is virtually impossible to keep process documentation simple and meaningful if employees who are far removed from the activity itself create it. It is impossible to keep documentation up-to-date when responsibility for the document is not clearly understood. Key to the Tutor methodology, therefore, is the concept of ownership. Each document has a single owner, who is responsible for ensuring that the document is necessary and that it reflects actual practice. The owner must be a person who is knowledgeable about the activity and who has the authority to build consensus among the persons who participate in the activity, as well as the authority to define or change the way an activity is performed. The owner must be an advocate of the performers and negotiate, not dictate, practices. In the Tutor environment, a document's owner is the only person with the authority to approve an update to that document.

4. Centralize Publishing

Although it is tempting (especially in a networked environment and with document management software solutions) to decentralize the control of all documents -- with each owner updating and distributing his own -- Tutor promotes centralized publishing by assigning the Document Administrator (gatekeeper) to manage the updates and distribution of the procedures library.

5. Establish a Document Maintenance Process Up Front (and stick to it)

Everyone in your organization should know they are invited to suggest changes to procedures and should understand exactly what steps to take to do so. Tutor provides a set of procedures to help your company set up a healthy document control system. There are many document management products available to automate some of the document change and maintenance steps. Depending on the size of your organization, a simple document management system can reduce the effort it takes to track and distribute document changes and updates. Whether your company decides to store the written policies and procedures on a file server or in a database, the essential tasks for maintaining documents are the same, though some tasks are automated.

6. Document Critical Activities Only

The best way to keep your documentation simple is to reduce the number of process documents to a bare minimum and to include in those documents only as much detail as is absolutely necessary. The first step to reducing process documentation is to document only those activities that are deemed critical. Not all activities require documentation. In fact, some critical activities cannot and should not be standardized. Others may be sufficiently documented with an instruction or a checklist and may not require a procedure. A document should only be created when it enhances the performance of the employee performing the activity. If it does not help the employee, then there is no reason to maintain the document. Activities that represent little risk (such as project status), activities that cannot be defined in terms of specific tasks (such as product research), and activities that can be performed in a variety of ways (such as advertising) often do not require documentation. Sometimes, an activity will evolve to the point where documentation is necessary. For example, an activity performed by a single employee may be straightforward and uncomplicated -- that is, until the activity is performed by multiple employees. Sometimes, it is the interaction between co-workers that necessitates documentation; sometimes, it is the complexity or the diversity of the activity.

7. Document Actual Practices

The only reason to maintain process documentation is to enhance the performance of the employee performing the activity. And documentation can only enhance performance if it reflects reality -- that is, current best practice. Documentation that reflects an unattainable ideal or outdated practices will end up on the shelf, unused and forgotten. Documenting actual practice means (1) auditing the activity to understand how the work is really performed, (2) identifying best practices with employees who are involved in the activity, (3) building consensus so that everyone agrees on a common method, and (4) recording that consensus.

8. Minimize Documentation

One way to keep it simple is to document at the highest level possible. That is, include in your documents only as much detail as is absolutely necessary. When writing a document, you should ask yourself: what is the purpose of this document? That is, what problem will it solve? By focusing on this question, you can target the critical information.

- What questions are the end users likely to have?
- What level of detail is required?
- Is any of this information extraneous to the document's purpose?

Short, concise documents are user friendly and they are easier to keep up to date.

9. Support Continuous Improvement

Employees who perform an activity are often in the best position to identify improvements to the process. In other words, continuous improvement is a natural byproduct of the work itself -- but only if the improvements are communicated to all employees who are involved in the process, and only if there is consensus among those employees. Traditionally, process documentation has been used to dictate performance, to limit employees' actions. In the Tutor environment, process documents are used to communicate improvements identified by employees. How does this work? The Tutor methodology requires a process document to reflect actual practice, so the owner of a document must routinely audit its content -- does the document match what the employees are doing? If it doesn't, the owner has the responsibility to evaluate the process, to build consensus among the employees, to identify "best practices," and to communicate these improvements via a document update. Continuous improvement can also be an outgrowth of corrective action -- but only if the solutions to problems are communicated effectively. The goal should be to solve a problem once and only once, which means not only identifying the solution, but ensuring that the solution becomes part of the process. The Tutor system provides the method through which improvements and solutions are documented and communicated to all affected employees in a cost-effective, timely manner; it ensures that improvements are not lost or confined to a single employee.

10. Keep It Simple

Process documents don't have to be complex and unfriendly. In fact, the simpler the format and organization, the more likely the documents will be used. And the simpler the method of maintenance, the more likely the documents will be kept up-to-date. Keep it simple by:

- Minimizing the skills and training required
- Following the established Tutor document format and layout
- Avoiding technology just for technology's sake

No other rule has as major an impact on the success of your internal documentation as: keep it simple.

Learn More

For more information about Tutor, visit Oracle.com or the Tutor Blog. Post your questions at the Tutor Forum.

Emily Chorba
Principal Product Manager, Oracle Tutor & BPM

    Read the article

  • Small adventure game

    - by Nick Rosencrantz
I'm making a small adventure game where the player can walk through dungeons and meet scary characters. The whole thing is 20 Java classes, and I'm making this a standalone frame, although it could very well be an applet; I don't want to make another applet, since I might want to recode this in C/C++ if the game or game engine turns out to be a success. The engine is the most interesting part of the game: it controls players and computer-controlled characters such as Zombies, Reptile Warriors, Trolls, Necromancers, and other Persons. These persons can sleep or walk around in the game and also pick up and move things. I didn't add many things, so I suppose the next step is to add things that can get used, now that I have already added many different types of walking persons. What do you think I should add and do with things in the game? The things I have so far are:

package adventure;

/**
 * The data type for things. Subclasses will be created that take part in the story.
 */
public class Thing {
    /**
     * The name of the Thing.
     */
    public String name;

    /**
     * @param name The name of the Thing.
     */
    Thing(String name) {
        this.name = name;
    }
}

public class Scroll extends Thing {
    Scroll(String name) {
        super(name);
    }
}

class Key extends Thing {
    Key(String name) {
        super(name);
    }
}

The key is the way to win the game, if you figure out that you should give it to a certain person, and the scroll can protect you from necromancers and trolls. If I make this game more Dungeons and Dragons-inspired, do you think it will be any good? Any other ideas that you think I could use here? The thread which steps time forward and wakes up persons is called Simulation. Do you think I could do something more advanced with this class?

package adventure;

class Simulation extends Thread {
    // The game's own PriorityQueue class (enqueue/dequeue by time),
    // not java.util.PriorityQueue.
    private PriorityQueue Eventqueue;

    Simulation() {
        Eventqueue = new PriorityQueue();
        start();
    }

    public void wakeMeAfter(Wakeable SleepingObject, double time) {
        Eventqueue.enqueue(SleepingObject, System.currentTimeMillis() + time);
    }

    public void run() {
        while (true) {
            try {
                sleep(5); // sleep 5 ms between checks (original comment said "sleep for half a second")
                if (Eventqueue.getFirstTime() <= System.currentTimeMillis()) {
                    ((Wakeable) Eventqueue.getFirst()).wakeup();
                    Eventqueue.dequeue();
                }
            } catch (InterruptedException e) {
            }
        }
    }
}

And here is the class that makes up the actual world:

package adventure;

import java.awt.*;
import java.net.URL;

/**
 * Subclass of World that builds up the Dungeon World.
 */
public class DungeonWorld extends World {
    /**
     * @param a Reference to the adventure game.
     */
    public DungeonWorld(Adventure a) {
        super(a);

        // Create all places
        createPlace("Himlen");
        createPlace("Stairs3");
        createPlace("IPLab");
        createPlace("Dungeon3");
        createPlace("Stairs5");
        createPlace("C2M2");
        createPlace("SANS");
        createPlace("Macsal");
        createPlace("Stairs4");
        createPlace("Dungeon2");
        createPlace("Datorsalen");
        createPlace("Dungeon"); //, "Ljushallen.gif" );
        createPlace("Cola-automaten", "ColaAutomat.gif");
        createPlace("Stairs2");
        createPlace("Fable1");
        createPlace("Dungeon1");
        createPlace("Kulverten");

        // Create all connections between places
        connect("Stairs3", "Stairs5", "Down", "Up");
        connect("Dungeon3", "SANS", "Down", "Up");
        connect("Dungeon3", "IPLab", "West", "East");
        connect("IPLab", "Stairs3", "West", "East");
        connect("Stairs5", "Stairs4", "Down", "Up");
        connect("Macsal", "Stairs5", "South", "North"); // was "Norr" (typo)
        connect("C2M2", "Stairs5", "West", "East");
        connect("SANS", "C2M2", "West", "East");
        connect("Stairs4", "Dungeon", "Down", "Up");
        connect("Datorsalen", "Stairs4", "South", "North"); // was "Noth" (typo)
        connect("Dungeon2", "Stairs4", "West", "East");
        connect("Dungeon", "Stairs2", "Down", "Up");
        connect("Dungeon", "Cola-automaten", "South", "North");
        connect("Stairs2", "Kulverten", "Down", "Up");
        connect("Stairs2", "Fable1", "East", "West");
        connect("Fable1", "Dungeon1", "South", "North");

        // Add things
        // --- Add new things here ---
        getPlace("Cola-automaten").addThing(new CocaCola("Ljummen cola"));
        getPlace("Cola-automaten").addThing(new CocaCola("Avslagen Cola"));
        getPlace("Cola-automaten").addThing(new CocaCola("Iskall Cola"));
        getPlace("Cola-automaten").addThing(new CocaCola("Cola Light"));
        getPlace("Cola-automaten").addThing(new CocaCola("Cuba Cola"));
        getPlace("Stairs4").addThing(new Scroll("Scroll"));
        getPlace("Dungeon3").addThing(new Key("Key"));

        Simulation sim = new Simulation();

        // Load images to be used as appearance-parameter for persons
        Image studAppearance = owner.loadPicture("Person.gif");
        Image asseAppearance = owner.loadPicture("Asse.gif");
        Image trollAppearance = owner.loadPicture("Loke.gif");
        Image necromancerAppearance = owner.loadPicture("Necromancer.gif");
        Image skeletonAppearance = owner.loadPicture("Reptilewarrior.gif"); // note: these two
        Image reptileAppearance = owner.loadPicture("Skeleton.gif");        // image files look swapped
        Image zombieAppearance = owner.loadPicture("Zombie.gif");

        // --- Add new persons here ---
        new WalkingPerson(sim, this, "Peter", studAppearance);
        new WalkingPerson(sim, this, "Zombie", zombieAppearance);
        new WalkingPerson(sim, this, "Zombie", zombieAppearance);
        new WalkingPerson(sim, this, "Skeleton", skeletonAppearance);
        new WalkingPerson(sim, this, "John", studAppearance);
        new WalkingPerson(sim, this, "Skeleton", skeletonAppearance);
        new WalkingPerson(sim, this, "Skeleton", skeletonAppearance);
        new WalkingPerson(sim, this, "Skeleton", skeletonAppearance);
        new WalkingPerson(sim, this, "Sean", studAppearance);
        new WalkingPerson(sim, this, "Reptile", reptileAppearance);
        new LabAssistant(sim, this, "Kate", asseAppearance);
        new LabAssistant(sim, this, "Jenna", asseAppearance);
        new Troll(sim, this, "Troll", trollAppearance);
        new Necromancer(sim, this, "Necromancer", necromancerAppearance);
    }

    /**
     * The place where persons are placed by default.
     *
     * @return The default place.
     */
    public Place defaultPlace() {
        return getPlace("Datorsalen");
    }

    private void connect(String p1, String p2, String door1, String door2) {
        Place place1 = getPlace(p1);
        Place place2 = getPlace(p2);
        place1.addExit(door1, place2);
        place2.addExit(door2, place1);
    }
}

Thanks
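One way to act on the question of what to do with things: give things behaviour through an interface the engine can test for. The sketch below is a suggestion only; Usable and ProtectiveScroll are invented names, and Person.setProtected is a hypothetical method that would have to be added to the game's Person class.

package adventure;

/** Things that can be used on a person. Hypothetical addition, not in the game yet. */
interface Usable {
    /** Apply this thing's effect to the person using it. */
    void useOn(Person user);
}

/** A scroll that actually does something when used. */
class ProtectiveScroll extends Thing implements Usable {
    ProtectiveScroll(String name) {
        super(name);
    }

    public void useOn(Person user) {
        // Hypothetical API: mark the person as protected from necromancers and trolls.
        user.setProtected(true);
        name = name + " (used)"; // a used scroll could lose its power
    }
}

The engine's action code can then offer a "use" command only when thing instanceof Usable holds, so the Thing base class and all existing things stay untouched. As for doing something more advanced with Simulation: java.util.concurrent.ScheduledExecutorService could replace the 5 ms polling loop, since it schedules wake-ups at requested times without busy-waiting.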

    Read the article

  • Web Site Performance and Assembly Versioning

    - by capgpilk
I originally wanted to write this post in one, but there is quite a large amount of information which can be broken down into different areas, so I am going to publish it in three posts.
Minification and Concatenation of JavaScript and CSS Files – this post
Versioning Combined Files Using Subversion – published shortly
Versioning Combined Files Using Mercurial – published shortly

Website Performance
There are many ways to improve web site performance; two areas are reducing the amount of data that is served up from the web server and reducing the number of files that are requested. Here I will outline the process of minifying and concatenating your JavaScript and CSS files automatically at build time of your Visual Studio web site/application. To edit the project file in Visual Studio, you need to first unload it by right-clicking the project in Solution Explorer. I prefer to do this in a third-party tool such as Notepad++ and save it there, forcing VS to reload it each time I make a change, as the whole process in Visual Studio can be a bit tedious. Once you have the project file open, you will notice that it is an MSBuild project file. I am going to use a fantastic utility from Microsoft called Ajax Minifier. This tool minifies both JavaScript and CSS.

1. Import the tasks for AjaxMin, choosing the location you installed to. I keep all third-party utilities in a Tools directory within my solution structure and source control. This way I know I can get the entire solution from source control without worrying about what other tools I need to get the project to build locally.

<Import Project="..\Tools\MicrosoftAjaxMinifier\AjaxMin.tasks" />

2. Now create ItemGroups for all your JS and CSS files like this, separating out your non-minified files and minified files. This can go in the AfterBuild container.

<Target Name="AfterBuild">

  <!-- Javascript files that need minimizing -->
  <ItemGroup>
    <JSMin Include="Scripts\jqModal.js" />
    <JSMin Include="Scripts\jquery.jcarousel.js" />
    <JSMin Include="Scripts\shadowbox.js" />
  </ItemGroup>
  <!-- CSS files that need minimizing -->
  <ItemGroup>
    <CSSMin Include="Content\Site.css" />
    <CSSMin Include="Content\themes\base\jquery-ui.css" />
    <CSSMin Include="Content\shadowbox.css" />
  </ItemGroup>

  <!-- Javascript files to combine -->
  <ItemGroup>
    <JSCat Include="Scripts\jqModal.min.js" />
    <JSCat Include="Scripts\jquery.jcarousel.min.js" />
    <JSCat Include="Scripts\shadowbox.min.js" />
  </ItemGroup>
  <!-- CSS files to combine -->
  <ItemGroup>
    <CSSCat Include="Content\Site.min.css" />
    <CSSCat Include="Content\themes\base\jquery-ui.min.css" />
    <CSSCat Include="Content\shadowbox.min.css" />
  </ItemGroup>

3. Call AjaxMin to do the crunching.

  <Message Text="Minimizing JS and CSS Files..." Importance="High" />
  <AjaxMin JsSourceFiles="@(JSMin)" JsSourceExtensionPattern="\.js$"
           JsTargetExtension=".min.js" JsEvalTreatment="MakeImmediateSafe"
           CssSourceFiles="@(CSSMin)" CssSourceExtensionPattern="\.css$"
           CssTargetExtension=".min.css" />

This will create the *.min.css and *.min.js files in the same directory as the original files.

4. Now concatenate the minified files into one for JavaScript and another for CSS. Here we write out the files with a default file name. In later posts I will cover versioning these files the same as your project assembly, again to help performance.

  <Message Text="Concat JS Files..." Importance="High" />
  <ReadLinesFromFile File="%(JSCat.Identity)">
    <Output TaskParameter="Lines" ItemName="JSLinesSite" />
  </ReadLinesFromFile>
  <WriteLinesToFile File="Scripts\site-script.combined.min.js" Lines="@(JSLinesSite)"
                    Overwrite="true" />
  <Message Text="Concat CSS Files..." Importance="High" />
  <ReadLinesFromFile File="%(CSSCat.Identity)">
    <Output TaskParameter="Lines" ItemName="CSSLinesSite" />
  </ReadLinesFromFile>
  <WriteLinesToFile File="Content\site-style.combined.min.css" Lines="@(CSSLinesSite)"
                    Overwrite="true" />
</Target>

5. Save the project file; if you have Visual Studio open, it will ask you to reload the project. You can now run a build and these minified and combined files will be created automatically.

6. Finally, reference these minified combined files in your web page. In the next two posts I will cover versioning these files to match your assembly.

    Read the article

  • Network communications mechanisms for SQL Server

    - by Akshay Deep Lamba
Problem
I am trying to understand how SQL Server communicates on the network, because I'm having to tell my networking team what ports to open up on the firewall for an edge web server to communicate back to the SQL Server on the inside. What do I need to know?

Solution
In order to understand what needs to be opened where, let's first talk briefly about the two main protocols that are in common use today:
TCP - Transmission Control Protocol
UDP - User Datagram Protocol
Both are part of the TCP/IP suite of protocols. We'll start with TCP.

TCP
TCP is the main protocol by which clients communicate with SQL Server. Actually, it is more correct to say that clients and SQL Server use Tabular Data Stream (TDS), but TDS actually sits on top of TCP, and when we're talking about Windows and firewalls and other networking devices, that's the protocol that rules and controls are built around. So we'll just speak in terms of TCP. TCP is a connection-oriented protocol. What that means is that the two systems negotiate the connection and both agree to it. Think of it like a phone call. While one person initiates the phone call, the other person has to agree to take it, and both people can end the phone call at any time. TCP is the same way. Both systems have to agree to the communications, but either side can end it at any time. In addition, there is functionality built into TCP to ensure that all communications can be disassembled and reassembled as necessary, so they can pass over various network devices and be put together again properly in the right order. It also has mechanisms to handle and retransmit lost communications. Because of this functionality, TCP is the protocol used by many different network applications. The way the applications all can share is through the use of ports. When a service, like SQL Server, comes up on a system, it must listen on a port. For a default SQL Server instance, the default port is 1433. Clients connect to the port via the TCP protocol, the connection is negotiated and agreed to, and then the two sides can transfer information as needed until either side decides to end the communication. In actuality, both sides will have a port to use for the communications, but since the client's port is typically determined semi-randomly, when we're talking about firewalls and the like, typically we're interested in the port the server or service is using.

UDP
UDP, unlike TCP, is not connection oriented. A "client" can send a UDP communication to anyone it wants. There's nothing in place to negotiate a communications connection, and there's nothing in the protocol itself to coordinate order of communications or anything like that. If that's needed, it's got to be handled by the application or by a protocol built on top of UDP being used by the application. If you think of TCP as a phone call, think of UDP as a postcard. I can put a postcard in the mail to anyone I want, and so long as it is addressed properly and has a stamp on it, the postal service will pick it up. Now, what happens to it afterwards is not guaranteed. There's no mechanism for retransmission of lost communications. It's great for short communications that don't necessarily need an acknowledgement. Because multiple network applications could be communicating via UDP, it uses ports, just like TCP. The SQL Browser or the SQL Server Listener Service uses UDP.

Network Communications - Talking to SQL Server
When an instance of SQL Server is set up, what TCP port it listens on depends. A default instance will be set up to listen on port 1433. A named instance will be set to a random port chosen during installation. In addition, a named instance will be configured to allow it to change that port dynamically. What this means is that when a named instance starts up, if it finds something already using the port it normally uses, it'll pick a new port. If you have a named instance, and you have connections coming across a firewall, you're going to want to use SQL Server Configuration Manager to set a static port. This will allow the networking and security folks to configure their devices for maximum protection. While you can change the network port for a default instance of SQL Server, most people don't.

Network Communications - Finding a SQL Server
When just the name is specified for a client to connect to SQL Server, for instance, MySQLServer, this is an attempt to connect to the default instance. In this case the client will automatically attempt to communicate to port 1433 on MySQLServer. If you've switched the port for the default instance, you'll need to tell the client the proper port, usually by specifying the following syntax in the connection string: <server>,<port>. For instance, if you moved SQL Server to listen on 14330, you'd use MySQLServer,14330 instead of just MySQLServer. However, because a named instance sets up its port dynamically by default, the client never knows at the outset what port it should talk to. That's what the SQL Browser or the SQL Server Listener Service (SQL Server 2000) is for. In this case, the client sends a communication via the UDP protocol to port 1434. It asks, "Where is the named instance?" So if I was running a named instance called SQL2008R2, it would be asking the SQL Browser, "Hey, how do I talk to MySQLServer\SQL2008R2?" The SQL Browser would then send back a communication from UDP port 1434 to the client, telling the client how to talk to the named instance. Of course, you can skip all of this if you set that named instance's port statically. Then you can use the <server>,<port> mechanism to connect and the client won't try to talk to the SQL Browser service. It'll simply try to make the connection. So, for instance, if the SQL2008R2 instance was listening on port 20080, specifying MySQLServer,20080 would attempt a connection to the named instance.

Network Communications - Named Pipes
Named Pipes is an older network library communications mechanism and it's generally not used any longer. It shouldn't be used across a firewall. However, if for some reason you need to connect to SQL Server with it, this protocol also sits on top of TCP. Named Pipes is actually used by the operating system and it has its own mechanism within the protocol to determine where to route communications. As far as network communications is concerned, it listens on TCP port 445. This is true whether we're talking about a default or named instance of SQL Server.

The Summary Table
To put all this together, here is what you need to know:

Type of Communication                                 | Protocol | Default Port
Finding a SQL Server or SQL Server named instance     | UDP      | 1434
Communicating with a default instance of SQL Server   | TCP      | 1433
Communicating with a named instance of SQL Server     | TCP      | * determined dynamically at start up
Communicating with SQL Server via Named Pipes         | TCP      | 445
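Given the post's goal of telling the networking team what to open, it can help to verify the rules from the edge server itself. Below is a quick probe sketch in Java; the host name and ports are placeholders, and the UDP part follows my reading of Microsoft's SQL Server Resolution Protocol (MC-SQLR), where 0x03 (CLNT_UCAST_EX) asks the SQL Browser to enumerate instances; treat those protocol details as an assumption to verify against the spec.

import java.io.IOException;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.Socket;

public class SqlPortProbe {
    public static void main(String[] args) {
        String host = "MySQLServer"; // placeholder server name
        int tcpPort = 1433;          // default-instance port, or your static port

        // 1. TCP probe: is the SQL Server port reachable through the firewall?
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, tcpPort), 3000);
            System.out.println("TCP " + tcpPort + ": reachable");
        } catch (IOException e) {
            System.out.println("TCP " + tcpPort + ": blocked or closed (" + e.getMessage() + ")");
        }

        // 2. UDP probe of the SQL Browser on port 1434.
        try (DatagramSocket ds = new DatagramSocket()) {
            ds.setSoTimeout(3000);
            byte[] ping = {0x03}; // CLNT_UCAST_EX per MC-SQLR (assumption -- verify)
            ds.send(new DatagramPacket(ping, ping.length, InetAddress.getByName(host), 1434));
            byte[] buf = new byte[4096];
            DatagramPacket resp = new DatagramPacket(buf, buf.length);
            ds.receive(resp);
            // SVR_RESP: a small header followed by a ';'-delimited instance list.
            System.out.println(new String(resp.getData(), 3, resp.getLength() - 3));
        } catch (IOException e) {
            System.out.println("UDP 1434: blocked, or no SQL Browser (" + e.getMessage() + ")");
        }
    }
}

If the TCP probe fails from the edge server but works inside the network, the firewall rule (or the instance's dynamic port) is the likely culprit.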

    Read the article

  • Rapid Evolution of Society & Technology

    - by Michael Snow
We caught up with Brian Solis on the phone the other day and Christie Flanagan had a chance to chat with him and learn a bit more about him and some of the concepts he'll be addressing in our Social Business Thought Leaders Webcast on Thursday 12/13/12. «--- Interview with Brian Solis Be sure and register for this week's webcast ---» ------------------- Guest post by Brian Solis. Reposted (Borrowed) from his posting of May 24, 2012 Dear [insert business name], what’s your promise? - Brian Solis You say you want to get closer to customers, but your actions are different than your words. You say you want to “surprise and delight” customers, but your product development teams are too busy building against a roadmap without consideration of the 5th P of marketing…people. Your employees are your number one asset, however the infrastructure of the organization has turned once optimistic and ambitious intrapreneurs into complacent cogs or worse, your greatest detractors. You question the adoption of disruptive technology by your internal champions yet you’ve not tried to find the value for yourself. You’re a change agent and you truly wish to bring about change, but you’ve not invested time or resources to answer “why” in your endeavors to become a connected or social business. If we are to truly change, we must find purpose. We must uncover the essence of our business and the value it delivers to traditional and connected consumers. We must rethink the spirit of today’s embrace and clearly articulate how transformation is going to improve customer and employee experiences and relationships now and over time. Without doing so, any attempts at evolution will be thwarted by reality. In an era of Digital Darwinism, no business is too big to fail or too small to succeed. These are undisciplined times which require alternative approaches to recognize and pursue new opportunities. But everything begins with acknowledging the 360 view of the world that you see today is actually a filtered view of managed and efficient convenience. Today, many organizations that were once inspired by innovation and engagement have fallen into a process of marketing, operationalizing, managing, and optimizing. That might have worked for the better part of the last century, but for the next 10 years and beyond, new vision, leadership and supporting business models will be written to move businesses from rigid frameworks to adaptive and agile entities. I believe that today’s executives will undergo a great test; a test of character, vision, intention, and universal leadership. It starts with a simple, but essential question…what is your promise? Notice, I didn’t ask about your brand promise. Nor did I ask for you to cite your mission and vision statements. 
This is much more than value propositions or manufactured marketing language designed to hook audiences and stakeholders. I asked for your promise to me as your consumer, stakeholder, and partner. This isn’t about B2B or B2C, but instead, people to people, person to person. It is this promise that will breathe new life into an organization that, on the outside, could be misdiagnosed as catatonic by those who are disrupting your markets. A promise, for example, is meant to inspire. It creates alignment. It serves as the foundation for your vision, mission, and all business strategies, and it must come from the top to mean anything. For without it, we cannot genuinely voice what it is we stand for or stand behind. Think for a moment about the definition of community. It’s easy to confuse a workplace or a market where everyone simply shares common characteristics. However, a community in this day and age is much more than belonging to something; it’s about doing something together that makes belonging matter. The next few years will force a divide where companies are separated by intention as measured by actions and words. But, becoming a social business is not enough. Becoming more authentic and transparent doesn’t serve as a mantra for a renaissance. A promise is the ink that inscribes the spirit of the relationship between you and me. A promise serves as the words that influence change from within and change beyond the halls of our business. It is the foundation for a renewed embrace, one that must then find its way to every aspect of the organization. It’s the difference between a social business and an adaptive business. While an adaptive business can also be social, it is the culture of the organization that strives to not just use technology to extend current philosophies or processes into new domains, but instead give rise to a new culture where striving for relevance is among its goals. The tools and networks simply become enablers of a greater mission. You are reading this because you believe in something more than what you’re doing today. While you fight for change within your organization, remember to aim for a higher purpose. Organizations that strive for innovation, imagination, and relevance will outperform those that do not. Part of your job is to lead a missionary push that unites the groundswell with a top down cascade. Change will only happen because you and other internal champions see what others can’t and will do what others won’t. It takes resolve. It takes the ability to translate new opportunities into business value. And, it takes courage. “This is a very noisy world, so we have to be very clear what we want them to know about us” - Steve Jobs ----------------------------------------------------------------- So -- where do you begin to evaluate the kind of experience you are delivering for your customers, partners, and employees? Take a look at this White Paper: Creating a Successful and Meaningful Customer Experience on the Web and then have a cup of coffee while you listen to the sage advice of Guy Kawasaki in a short video below. An interview with Guy Kawasaki on Maximizing Social Media Channels 

    Read the article

  • Clarity is important, both in question and in answer.

    - by gerrylowry
clarity is important ... i'm often reminded of the Clouseau movie in which Peter Sellers as Chief Inspector Clouseau asks a hotel clerk "Does your dog bite?" ... the clerk answers "no" ... after Clouseau has been bitten by the dog, he looks at the hotel clerk who says "That's not my dog". Clarity is important, both in question and in answer. i've been a member of forums.asp.net since 2008 ... like many of my peers at forums.asp.net, i've answered my fair share of questions. FWIW, the purpose of this, my first web log post to http://weblogs.asp.net/gerrylowry, is to help new members ask better questions and in turn get better answers. TIMTOWTDI = there is more than one way to do it. imho, the best way to ask a question in any forum, or even person to person, is to first formulate your question and then ask yourself to answer your own question.
Things to consider when asking (the more complete your question, the more likely you'll get the answer you require):
-- have you searched Google and/or your favourite search engine(s) before posting your question to forums.asp.net? examples:
site:msdn.microsoft.com entity framework 5.0 c#
http://lmgtfy.com/?q=site%3Amsdn.microsoft.com+entity+framework+5.0+c%23
site:forums.asp.net MVC tutorial c#
http://lmgtfy.com/?q=site%3Aforums.asp.net+MVC+tutorial+c%23
-- are you asking your question in the correct forum? look at the forums' descriptions at http://forums.asp.net/; examples:
Getting Started: If you have a general ASP.NET question on a topic that's not covered by one of the other more specific forums - ask it here.
MVC: Discussions regarding ASP.NET Model-View-Controller (MVC)
C#: Questions about using C# for ASP.NET development
Note: if your question pertains more to c# than to MVC, choosing the C# forum is likely to be more appropriate.
-- is your post subject clear and concise, yet not too vague? compare these three subjects (all three had something to do with GridView): (1) please help (2) gridview (3) How to show newline in GridView
-- have you clearly explained your scenario? compare: "my leg hurts" with "when i walk too much, my right knee hurts in the knee joint"; compare: "my code does not work" with "when i enter a date as 2012-11-8, i get a FormatException"
-- have you checked your spelling, your grammar, and your English? for better or worse, English is the language of forums.asp.net ... many of the currently 170,000++ forums.asp.net members are not native speakers of English; that's okay ... however, there are times when choosing the more appropriate words will likely get one a better answer; fortunately, there are web tools to help you formulate your question, for example, http://translate.google.com/.
-- have you provided relevant information about your environment? here are a few examples ... feel free to include other items in your question ... rule of thumb: if you think a given detail is relevant, it likely is
-- what technology are you using? ASP.NET MVC 4, ASP.NET MVC 3, WebForms, ...
-- what version of Visual Studio are you using? vs2012 (ultimate, professional, express), vs2010, vs2008 ...
-- are you hosting your own website? are you using a shared hosting service?
-- are you experiencing difficulties in just one browser? more than one browser?
-- what browser version(s) are you using? ie8? ie9? ...
-- what is your operating system? win8, win7, vista, XP, server 2008 R2 ...
-- what is your database? SQL Server 2008 R2, ss2005, MySQL, Oracle, ...
-- what is your web server?
iis 7.5, iis 6, .... -- have you provided enough information for someone to be able to answer your question? Here's an actual example from an O.P. that i hope is self-explanatory: I'm trying to make a simple calculator when i write the code in windows application it worked when i tried it in web application it doesn't work and there are no errors what should i do ??!! -- have you included unnecessary information? more than once, i've seen the O.P. (original post, original poster) include many extra lines of code that were not relevant to the actual question; the more unnecessary code that you include, the less likely your volunteer peers will be motivated to donate their time to help you. -- have you asked the question that you want answered? "Does this dog bite?" -- are your expectations reasonable? -- generally, persons who are going to answer your questions are your peers ... they are unpaid volunteers ... -- are you looking for help with your homework, work assignment, or hobby? or, are you expecting someone else to do your work for you?  -- do you expect a complete solution or are you simply looking for guidance and direction? -- you are likely to get more help by first making a reasonable effort to help yourself first Clarity is important, both in question and in answer. if you are answering someone else's question, please remember that clear answers are just as important as clear questions; would you understand your own answer? Things to consider when answering: -- have you tested your code example?  if you have, say so; if you've not tested your code example, also say so -- imho, it's okay to guess as long as you clearly state that you're guessing ... sometimes a wrong guess can still help the O.P. find her/his way to the right answer -- meanness does not contribute to being helpful; sometimes one may become frustrated with the O.P. and/or others participating in a thread, if that happens to you, be kind regardless; speaking from my own experience, at least once i've allowed myself to be frustrated into writing something inappropriate that i've regretted later ... being a meany does not feel good ... being kind and helpful feels fantastic! Tip:  before asking your question, read more than a few existing questions and answers to get a sense of how your peers ask and answer questions. Gerry P.S.:  try to avoid necroposting and piggy backing. necroposting is adding to an old post, especially one that was resolved months ago. piggy backing is adding your own question to someone else's thread.

    Read the article

  • Agile Like Jazz

    - by Jeff Certain
(I’ve been sitting on this for a week or so now, thinking that it needed to be tightened up a bit to make it less rambling. Since that’s clearly not going to happen, reader beware!) I had the privilege of spending around 90 minutes last night sitting and listening to Sonny Rollins play a concert at the Disney Center in LA. If you don’t know who Sonny Rollins is, I don’t know how to explain the experience; if you know who he is, I don’t need to. Suffice it to say that he has been recording professionally for over 50 years, and helped create an entire genre of music. A true master by any definition. One of the most intriguing aspects of a concert like this, however, is watching the master step aside and let the rest of the musicians play. Not just play their parts, but really play… letting them take over the spotlight, to strut their stuff, to soak up enthusiastic applause from the crowd. Maybe a lot of it has to do with the fact that Sonny Rollins has been doing this for more than a half-century. Maybe it has something to do with a kind of patience you learn when you’re on the far side of 80 – and the man can still blow a mean sax for 90 minutes without stopping! Maybe it has to do with the fact that he was out there for the love of the music and the love of the show, not because he had anything to prove to anyone and, I like to think, not for the money. Perhaps it had more to do with the fact that, when you’re at that level of mastery, the other musicians are going to be good. Really good. Whatever the reasons, there was an incredible freedom on that stage – the ability to improvise, for each musician to showcase their own specialization and skills, and then come back to the common theme, back to being on the same page, as it were. All this took place in the same venue that is home to the L.A. Phil. Somehow, I can’t ever see the same kind of free-wheeling improvisation happening in that context. And, since I’m a geek, I started thinking about agility. Rollins has put together a quintet that reflects his own particular style and past. No upright bass or piano for Rollins – drums, bongos, electric guitar and bass guitar along with his sax. It’s not about the mix of instruments. Other trios, quartets, and sextets use different mixes of instruments. New Orleans jazz tends towards trombones instead of sax; some prefer cornet or trumpet. But no matter what the choice of instruments, size matters. Team sizes are something I’ve been thinking about for a while. We’re on a quest to rethink how our teams are organized. They just feel too big, too unwieldy. In fact, they really don’t feel like teams at all. Most of the time, they feel more like collections of people who happen to report to the same manager. I attribute this to a couple of factors. One is over-specialization; we have a tendency to have people work in silos. Although the teams are product-focused, within them our developers are both generalists and specialists. On the one hand, we expect them to be able to build an entire vertical slice of the application; on the other hand, each developer tends to be responsible for a particular vertical slice. As a result, developers often work on their own piece of the puzzle, in isolation. This sort of feels like working on a jigsaw in a group – each person taking a set of colors and piecing them together to reveal a portion of the overall picture. But what happens when you go to meld all those pieces together? Inevitably, you have some sections that are too big to move easily. 
These sections end up falling apart under their own weight as you try to move them. Not only that, but there are other challenges – figuring out where that section fits, and how to tie it into the rest of the puzzle. Often, this is when you find a few pieces need to be added – these pieces are “glue,” if you will. The other issue that arises is due to the overhead of maintaining communications in a team. My mother, who worked in IT for around 30 years, once told me that 20% per team member is a good rule of thumb for maintaining communication. While this is a rule of thumb, it seems to imply that any team over about 6 people is going to become less agile simply because of the communications burden. Teams of ten or twelve seem like they fall into the philharmonic organizational model. Complicated pieces of music requiring dozens of players to all be on the same page require a much different model than the jazz quintet. There’s much less room for improvisation, originality or freedom. (There are probably orchestral musicians who will take exception to this characterization; I’m calling it like I see it from the cheap seats.) And, there’s one guy up front who is running the show, whose job is to keep all of those dozens of players on the same page, to facilitate communications. Somehow, the orchestral model doesn’t feel much like a self-organizing team, either. The first violin may be the best violinist in the orchestra, but they don’t get to perform free-wheeling solos. I’ve never heard of an orchestra getting together for a jam session. But I have heard of teams that organize their work based on the developers available, rather than organizing the developers based on the work required. I have heard of teams where desired functionality is deferred – or worse yet, schedules are missed – because one critical person doesn’t have any bandwidth available. I’ve heard of teams where people simply don’t have the big picture, because there is too much communication overhead for everyone to be aware of everything that is happening on a project. I once heard Paul Rayner say something to the effect of “you have a process that is perfectly designed to give you exactly the results you have.” Given a choice, I want a process that’s much more like jazz than orchestral music. I want a process that doesn’t burden me with lots of forms and checkboxes and stuff. Give me the simplest, most lightweight process that will work – and a smaller team of the best developers I can find. This seems like the kind of process that will get the kind of result I want to be part of.

    Read the article

  • Merge replication stopping without errors in SQL 2008 R2

    - by Rob Farley
A non-SQL MVP friend of mine, who also happens to be a client, asked me for some help again last week. I was planning on writing this up even before Rob Volk (@sql_r) listed his T-SQL Tuesday topic for this month. Earlier in the year, I (well, LobsterPot Solutions, although I’d been the person mostly involved) had helped out with a merge replication problem. The Merge Agent on the subscriber was just stopping every time, shortly after it started, with no errors anywhere – not in the Windows Event Log, not in the SQL Agent logs, not anywhere. We’d managed to get the system working again, but didn’t have a good explanation of what had happened, and last week, the problem occurred again. I asked him about writing up the experience in a blog post, largely because of the red herrings that we encountered. It was an interesting experience for me, also because I didn’t end up touching my computer the whole time – just tapping on my phone via Twitter and Live Msgr. You see, the thing with replication is that a useful troubleshooting option is to reinitialise the thing. We’d done that last time, and it had started to work again – eventually. I say eventually, because the link being used between the sites is relatively slow, and it took a long while for the initialisation to finish. Meanwhile, we’d been doing some investigation into what the problem could be, and were suitably pleased when the problem disappeared. So I got a message saying that a replication problem had occurred again. Reinitialising wasn’t going to be an option this time. In this scenario, the subscriber having the problem happened to be in a different domain to the publisher. The other subscribers (within the domain) were fine; just this one in a different domain had the problem. Part of the problem seemed to be a log file that wasn’t being backed up properly. They’d been trying to back up to a backup device that had a corruption, and the log file was growing. Turned out, this wasn’t related to the problem, but of course, any time you’re troubleshooting and you see something untoward, you wonder. Having got past that problem, my next thought was that perhaps there was a problem with the account being used. But the other subscribers were using the same account, without any problems. The client pointed out that it was almost exactly six months since the last failure (later shown to be a complete red herring). It sounded like something might’ve expired. Checking through certificates and trusts showed no sign of anything, and besides, there wasn’t a problem running a command-prompt window using the account in question, from the subscriber box. ...except that when he ran the sqlcmd -E -S servername command I recommended, it failed with a Named Pipes error. I’ve seen problems with firewalls rejecting connections via Named Pipes but letting TCP/IP through, so I got him to look into SQL Configuration Manager to see what kind of connection was being preferred... Everything seemed fine. And strangely, he could connect via Management Studio. Turned out, he had a typo in the servername of the sqlcmd command. That particular red herring must’ve been reflected in his cheeks as he told me. During the time, I also pinged a friend of mine to find out who I should ask, and Ted Kruger (@onpnt)’s name came up. Ted (and thanks again, Ted – really) reconfirmed some of my thoughts around the idea of an account expiring, and also suggested bumping up the logging to level 4 (2 is Verbose, 4 is undocumented ridiculousness). 
I’d just told the client to push the logging up to level 2, but the log file wasn’t appearing. Checking permissions showed that the user did have permission on the folder, but still no file was appearing. Then it was noticed that the user had been switched earlier as part of the troubleshooting, and switching it back to the real user caused the log file to appear. Still no errors. A lot more information being pushed out, but still no errors. Ted suggested making sure the FQDNs were okay from both ends, in case the servers were unable to talk to each other. DNS problems can lead to hassles which can stop replication from working. No luck there either – it was all working fine. Another server started to report a problem as well. These two boxes were both SQL 2008 R2 (SP1), while the others, still working, were SQL 2005. Around this time, the client tried an idea that I’d shown him a few years ago – using a Profiler trace to see what was being called on the servers. It turned out that the last call being made on the publisher was sp_MSenumschemachange. A quick interwebs search on that showed a problem that exists in SQL Server 2008 R2, when stored procedures have more than 4000 characters. Running that stored procedure (with the same parameters) manually on SQL 2005 listed three stored procedures, the first of which did indeed have more than 4000 characters. Still no error though, and the problem as listed at http://support.microsoft.com/kb/2539378 describes an error that should occur in the Event log. However, this problem is the type of thing that is fixed by a reinitialisation (because it doesn’t need to send the procedure change across as a transaction). And a look in the change history of the long stored procs (you all keep them, right?), showed that the problem from six months earlier could well have been down to this too. Applying SP2 (with sufficient paranoia about backups and how to get back out again if necessary) fixed the problem. The stored proc changes went through immediately after the service pack was applied, and it’s been running happily since. The funny thing is that I didn’t solve the problem. He had put the Profiler trace on the server, and had done the search that found a forum post pointing at this particular problem. I’d asked Ted too, and although he’d given some useful information, nothing that he’d come up with had actually been the solution either. Sometimes, asking for help is the most useful thing you can do. Often though, you don’t end up getting the help from the person you asked – the sounding board is actually what you need. @rob_farley
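One footnote worth automating: the check for stored procedures over 4000 characters can be scripted, so the same symptom is quicker to spot next time. Below is a hedged sketch using Microsoft's JDBC driver (Java, to keep one language across this page); the query against sys.sql_modules is standard SQL Server metadata, but the connection string is a placeholder, and this lists all long modules rather than filtering to just the replicated articles.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class LongProcFinder {
    public static void main(String[] args) throws SQLException {
        // Placeholder connection details -- adjust server, database, and auth.
        String url = "jdbc:sqlserver://MySQLServer:1433;databaseName=PublishedDb;"
                   + "user=probe;password=secret";
        String sql = "SELECT OBJECT_NAME(object_id) AS proc_name, LEN(definition) AS def_len "
                   + "FROM sys.sql_modules WHERE LEN(definition) > 4000 ORDER BY def_len DESC";

        try (Connection con = DriverManager.getConnection(url);
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(sql)) {
            while (rs.next()) {
                // Candidates for the KB2539378 merge-replication problem.
                System.out.printf("%s (%d chars)%n", rs.getString("proc_name"), rs.getLong("def_len"));
            }
        }
    }
}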

    Read the article

  • DRY and SRP

    - by Timothy Klenke
Originally posted on: http://geekswithblogs.net/TimothyK/archive/2014/06/11/dry-and-srp.aspx
Kent Beck’s XP Simplicity Rules (aka Four Rules of Simple Design) are a prioritized list of rules that, when applied to your code, generally yield a great design. As you’ll see from the above link, the list has evolved slightly over time. I find today they are usually listed as:
1. All Tests Pass
2. Don’t Repeat Yourself (DRY)
3. Express Intent
4. Minimalistic
These are prioritized. If your code doesn’t work (rule 1) then everything else is forfeit. Go back to rule one and get the code working before worrying about anything else.
Over the years the community has debated whether the priority of rules 2 and 3 should be reversed. Some say a little duplication in the code is OK as long as it helps express intent. I’ve debated it myself. This recent post got me thinking about this again, hence this post.
I don’t think it is fair to compare “Expressing Intent” against “DRY”. This is a comparison of apples to oranges. “Expressing Intent” is a principle of code quality. “Repeating Yourself” is a code smell. A code smell is merely an indicator that there might be something wrong with the code. It takes further investigation to determine if a violation of an underlying principle of code quality has actually occurred.
For example, “using nouns for method names”, “using verbs for property names”, or “using Booleans for parameters” are all code smells that indicate that code probably isn’t doing a good job at expressing intent. They are usually very good indicators. But what principle is the code smell of Duplication pointing to, and how good of an indicator is it?
Duplication in the code base is bad for a couple of reasons. If you need to make a change that needs to be made in a number of locations, it is difficult to know if you have caught all of them. This can lead to bugs if/when one of those locations is overlooked. By refactoring the code to remove all duplication, you are left with only one place to change, thereby eliminating this problem.
With most projects, the code becomes the single source of truth for the project. If a production code base is inconsistent with a five-year-old requirements or design document, the production code that people are currently living with is usually declared the current reality (or truth). Requirement or design documents at this age in a project life cycle are usually of little value.
Although comparing production code to external documentation is usually straightforward, duplication within the code base muddles this declaration of truth. When code is duplicated, small discrepancies will creep in between the two copies over time. The question then becomes: which copy is correct? As different factions debate how the software should work, trust in the software and the team behind it erodes.
The code smell of Duplication points to a violation of the “Single Source of Truth” principle. Let me define that as:
A stakeholder’s requirement for a software change should never cause more than one class to change.
Violation of the Single Source of Truth principle will always result in duplication in the code. However, the inverse is not always true. Duplication in the code does not necessarily indicate that there is a violation of the Single Source of Truth principle.
To illustrate this, let’s look at a retail system where the system will (1) send a transaction to a bank and (2) print a receipt for the customer.
Although these are two separate features of the system, they are closely related. The reason for printing the receipt is usually to provide an audit trail back to the bank transaction. Both features use the same data: amount charged, account number, transaction date, customer name, retail store name, and etcetera. Because both features use much of the same data, there is likely to be a lot of duplication between them. This duplication can be removed by making both features use the same data access layer.
Then come the divergent requirements. The receipt stakeholder wants a change so that the account number has the last few digits masked out to protect the customer’s privacy. That can be solved with a small IF statement whilst still eliminating all duplication in the system. Then the bank wants to take a picture of the customer as well as capture their signature and/or PIN number for enhanced security. Then the receipt owner wants to pull data from a completely different system to report the customer’s loyalty program point total.
After a while you realize that the two stakeholders have somewhat similar, but ultimately different responsibilities. They have their own reasons for pulling the data access layer in different directions. Then it dawns on you, the Single Responsibility Principle:
There should never be more than one reason for a class to change.
In this example we have two stakeholders giving two separate reasons for the data access class to change. It is a clear violation of the Single Responsibility Principle. That’s a problem because it can often lead to the project owner pitting the two stakeholders against each other in a vain attempt to get them to work out a mutual single source of truth. But that doesn’t exist. There are two completely valid truths that the developers need to support. How is this to be supported while honouring the Single Responsibility Principle? The solution is to duplicate the data access layer and let each stakeholder control their own copy.
The Single Source of Truth and Single Responsibility Principles are very closely related. SST tells you when to remove duplication; SRP tells you when to introduce it. They may seem to be fighting each other, but really they are not. The key is to clearly identify the different responsibilities (or sources of truth) over a system. Sometimes there is a single person with that responsibility; other times there are many. This can be especially difficult if the same person has dual responsibilities. They might not even realize they are wearing multiple hats.
In my opinion, Single Source of Truth should be listed as the second rule of simple design, with Express Intent at number three. Investigation of the DRY code smell should yield the proper application of SST, without violating SRP. When necessary, leave duplication in the system and let the class names express the different people that are responsible for controlling them. Knowing all the people with responsibilities over a system is the higher priority, because you’ll need to know this before you can express it. Although it may be a code smell when there is duplication in the code, it does not necessarily mean that the coder has chosen to be expressive over DRY or that the code is bad.
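To make the deliberate duplication concrete, here is a minimal sketch in Java of the retail example after the split; every class and field name here is illustrative, invented for this sketch rather than taken from any real system.

// After the stakeholders' requirements diverge, each responsibility owns its
// own copy of the transaction data. The remaining duplication is deliberate.

/** Owned by the bank-transaction stakeholder: one reason to change. */
class BankTransactionRecord {
    final String accountNumber; // the bank needs the full account number
    final double amount;        // double for brevity; real money handling would use BigDecimal

    BankTransactionRecord(String accountNumber, double amount) {
        this.accountNumber = accountNumber;
        this.amount = amount;
    }
}

/** Owned by the receipt stakeholder: a different reason to change. */
class ReceiptRecord {
    final String maskedAccountNumber; // privacy: last few digits masked out
    final double amount;              // duplicated on purpose (SRP over DRY)
    final int loyaltyPoints;          // pulled from a different system

    ReceiptRecord(String accountNumber, double amount, int loyaltyPoints) {
        // Mask the last four digits, per the receipt stakeholder's requirement.
        this.maskedAccountNumber = accountNumber.length() > 4
                ? accountNumber.substring(0, accountNumber.length() - 4) + "****"
                : "****";
        this.amount = amount;
        this.loyaltyPoints = loyaltyPoints;
    }
}

Each record now changes for exactly one stakeholder’s reasons; the duplication of the shared fields is what SRP introduced and what SST no longer demands be removed, because the two copies answer to two different sources of truth.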

    Read the article

< Previous Page | 178 179 180 181 182 183 184 185 186 187 188 189  | Next Page >