Search Results

Search found 5543 results on 222 pages for 'legacy terms'.

Page 62/222

  • Why is VB so popular?

    - by aaaidan
    To me, Visual Basic seems clumsy, ugly, error-prone, and difficult to read. I'll let others explain why. While VB.net has clearly been a huge leap forward for the language in terms of features, I still don't understand why anyone would choose to code in VB over, say, C#. However, I still see that (what seems to be) the vast majority of commercial web apps from "MS shops" are built in VB. I could stand corrected on this, but VB still seems more popular than it deserves to be. Can anyone help answer any (or all) of these questions: Am I missing something with VB? Is it easier to learn, or "friendlier", than C#? Are there features I don't know about? Why is VB/VB.net so frequently used today, especially in web projects?

    Read the article

  • Form, function and complexity in rule processing

    - by Charles Young
    Tim Bass posted on ‘Orwellian Event Processing’. I was involved in a heated exchange in the comments, and he has more recently published a post entitled ‘Disadvantages of Rule-Based Systems (Part 1)’. Whatever the rights and wrongs of our exchange, it clearly failed to generate any agreement or understanding of our different positions. I don't particularly want to promote further argument of that kind, but I do want to take the opportunity of offering a different perspective on rule-processing and an explanation of my comments. For me, the ‘red rag’ lay in Tim’s claim that “...rules alone are highly inefficient for most classes of (not simple) problems” and a later paragraph that appears to equate the simplicity of form (‘IF-THEN-ELSE’) with simplicity of function.   It is not the first time Tim has expressed these views and not the first time I have responded to his assertions.   Indeed, Tim has a long history of commenting on the subject of complex event processing (CEP) and, less often, rule processing in ‘robust’ terms, often asserting that very many other people’s opinions on this subject are mistaken.   In turn, I am of the opinion that, certainly in terms of rule processing, which is an area in which I have a specific interest and knowledge, he is often mistaken. There is no simple answer to the fundamental question ‘what is a rule?’ We use the word in a very fluid fashion in English. Likewise, the term ‘rule processing’, as used widely in IT, is equally difficult to define simplistically. The best way to envisage the term is as a ‘centre of gravity’ within a wider domain. That domain contains many other ‘centres of gravity’, including CEP, statistical analytics, neural networks, natural language processing and so much more. Whole communities tend to gravitate towards and build themselves around some of these centres. The term 'rule processing' is associated with many different technology types, various software products, different architectural patterns, the functional capability of many applications and services, etc. There is considerable variation amongst these different technologies, techniques and products. Very broadly, a common theme is their ability to manage certain types of processing and problem solving through declarative, or semi-declarative, statements of propositional logic bound to action-based consequences. It is generally important to be able to decouple these statements from other parts of an overall system or architecture so that they can be managed and deployed independently.  As a centre of gravity, ‘rule processing’ is no island. It exists in the context of a domain of discourse that is, itself, highly interconnected and continuous.   Rule processing does not, for example, exist in splendid isolation to natural language processing.   On the contrary, an on-going theme of rule processing is to find better ways to express rules in natural language and map these to executable forms.   Rule processing does not exist in splendid isolation to CEP.   On the contrary, an event processing agent can reasonably be considered as a rule engine (a theme in ‘Power of Events’ by David Luckham).   Rule processing does not live in splendid isolation to statistical approaches such as Bayesian analytics. On the contrary, rule processing and statistical analytics are highly synergistic.   Rule processing does not even live in splendid isolation to neural networks. 
For example, significant research has centred on finding ways to translate trained nets into explicit rule sets in order to support forms of validation and facilitate insight into the knowledge stored in those nets. What about simplicity of form? Many rule processing technologies do indeed use a very simple form (‘If...Then’, ‘When...Do’, etc.). However, it is a fundamental mistake to equate simplicity of form with simplicity of function. It is absolutely mistaken to suggest that simplicity of form is a barrier to the efficient handling of complexity. There are countless real-world examples which serve to disprove that notion. Indeed, simplicity of form is often the key to handling complexity. Does rule processing offer a ‘one size fits all’? No, of course not. No serious commentator suggests it does. Does the design and management of large knowledge bases, expressed as rules, become difficult? Yes, it can do, but that is true of any large knowledge base, regardless of the form in which knowledge is expressed. The measure of complexity is not a function of rule set size or rule form. It tends to be correlated more strongly with the size of the ‘problem space’ (‘search space’), which is something quite different. Analysis of the problem space and the algorithms we use to search through that space are, of course, the very things we use to derive objective measures of the complexity of a given problem. This is basic computer science and common practice. Sailing a Dreadnought through the sea of information technology and lobbing shells at some of the islands we encounter along the way does no one any good. Building bridges and causeways between islands so that the inhabitants can collaborate in open discourse offers hope of real progress.

    Read the article

  • Install SharePoint 2013 on a two server farm

    - by sreejukg
    When SharePoint 2010 was released, I published an article on how to install SharePoint on a two-server farm. You can find that article at the link below. http://weblogs.asp.net/sreejukg/archive/2010/09/28/install-sharepoint-2010-in-a-farm-environment.aspx Now it is time for SharePoint 2013. SharePoint 2013 brings lots of improvements to the topologies, but it still supports a two-server architecture. Note that the two-server architecture is meant for small implementations with limited service applications. Refer to the link below to understand more about the SharePoint architecture. http://technet.microsoft.com/en-us/sharepoint/fp123594.aspx A two-tier farm consists of a database server and a web/application server. In this article I am going to explain how to install SharePoint in a two-server farm. I prepared two servers, both joined to a domain (SP2013Domain); on one server I installed SQL Server 2012 (server name: SP2013_DB), and I am going to install SharePoint 2013 on the second server (server name: SP2013). The following domain accounts are created for the installation (user account - purpose - server roles required):
- SQLService - the SQL Server service account; used as the service account for SQL Server. Requires a domain user account or local account.
- spSetup - the account used to run SharePoint Setup and the SharePoint Products and Configuration Wizard. Requires a domain user account that is a member of the Administrators group on each server on which Setup is run (in our case SP2013), a SQL Server login on the computer running SQL Server, and membership of the Server admin SQL Server security role.
- spDataaccess - used to configure and manage the server farm, as the application pool identity for the Central Administration website, and for the Microsoft SharePoint Foundation Workflow Timer Service. Requires a domain user account (other permissions will be granted to this account automatically).
The above is the minimum list of accounts needed for a SharePoint 2013 installation. You will also need additional accounts for services, application pool identities for web applications, and so on. Refer to the service account requirements for SharePoint at the link below. http://technet.microsoft.com/en-us/library/cc263445.aspx To install SharePoint 2013, log in to the server using the setup account (spSetup) and run the setup from the installation media. First you need to install the prerequisites. During the installation process the server may restart several times; the installation wizard will guide you through it. In the next step you need to agree to the terms and conditions as usual. Once you click Next, the installation starts immediately and the wizard shows its progress. During the installation you may be prompted to restart the server; just click the Finish button so that the system restarts. Once all the prerequisites are installed, you will get a success message. Click Finish to close the dialog. Now run the setup from the media again, and this time choose to install SharePoint Server. In the next screen, enter the product key and click Continue. Agree to the terms and conditions for SharePoint 2013 and click Continue. Choose the file location as per your policies and click the Install Now button. You will see the installation progress and, once it completes, the installation completed dialog. Make sure you select the option to run the Products and Configuration Wizard and click Close.
From the start screen, click Next to start the configuration wizard. You will receive a warning telling you that some services will be stopped during the installation. Select the "Create a new server farm" radio button and click Next. In the next step, enter the configuration database settings: the database server details and the database access account. You need to specify the farm account (spDataaccess); the wizard will grant additional privileges to the account as needed. In the next step you specify the passphrase. Note it down, as you will need it if you add an additional server to the farm. Next, enter the Central Administration website port and security settings; you can choose a port or just keep the one suggested by the wizard. Click Next and you will see a summary of what you have selected. Verify the selected settings; if you want to change any of them, click Back and change them, or click Continue to start the configuration. The configuration may take some time and you can view its progress. In case of any error you will get a log file; fix the error and start the configuration wizard again. Once the configuration is successful you will see the success message. Just click Finish. Now you can browse the Central Administration website. It is good to check the Health Analyzer to review whether there are any errors or warnings; no warnings or errors indicates a good installation. The two-server architecture is the minimal configuration for production environments. Small firms with fewer employees can implement SharePoint 2013 using this topology and, as the workload increases, they can add more servers to the farm without rebuilding everything.

    Read the article

  • Microsoft Press Weekend Deal 26/May/2012 - Microsoft® Manual of Style, 4th Edition

    - by TATWORTH
    At http://shop.oreilly.com/product/0790145305770.do?code=MSDEAL, Microsoft Press are offering the Microsoft® Manual of Style, 4th Edition as a PDF for 50% off using the MSDEAL code. "Maximize the impact and precision of your message! Now in its fourth edition, the Microsoft Manual of Style provides essential guidance to content creators, journalists, technical writers, editors, and everyone else who writes about computer technology. Direct from the Editorial Style Board at Microsoft—you get a comprehensive glossary of both general technology terms and those specific to Microsoft; clear, concise usage and style guidelines with helpful examples and alternatives; guidance on grammar, tone, and voice; and best practices for writing content for the web, optimizing for accessibility, and communicating to a worldwide audience. Fully updated and optimized for ease of use, the Microsoft Manual of Style is designed to help you communicate clearly, consistently, and accurately about technical topics—across a range of audiences and media." There is a sample chapter for free download at the above link.

    Read the article

  • Creating the “game space” for a 2D game

    - by alJaree
    How does one set up the game space for a game so that obstacles can be spawned? One example I am wondering about is Doodle Jump. Tile maps are limited in size and would need to change often if the user jumps a lot, so how could this be done in a way other than tile maps? How, or with what, is the notion of a game world created in which these spawned ledges/obstacles are placed as the user progresses through the stage? What is actually moving as the user jumps from ledge to ledge, and what are the ledges based on in terms of the game world/space? What data structure or representation could the game use to reference and manage the spawning of these obstacles/ledges?
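    One common approach (a hypothetical sketch, not from the original question; all names are made up) is to keep ledges in world coordinates, let a camera offset follow the player upward, and spawn new ledges just above the visible area while discarding the ones that have scrolled off below. Nothing "moves" except the camera offset, and a simple queue is enough to manage the ledges:

        using System;
        using System.Collections.Generic;

        // Hypothetical sketch: an endless vertical world for a Doodle Jump style game.
        public class VerticalWorld
        {
            private readonly Queue<float> ledgeYPositions = new Queue<float>(); // world-space Y of each ledge, lowest first
            private readonly Random random = new Random();
            private float highestSpawnedY;           // world Y of the topmost ledge spawned so far
            private const float ViewHeight = 800f;   // visible height in world units
            private const float MaxGap = 120f;       // largest vertical gap the player can jump

            public float CameraY { get; private set; } // bottom of the visible window, in world space

            public void Update(float playerY)
            {
                // The "movement" is just the camera following the player upward.
                CameraY = Math.Max(CameraY, playerY - ViewHeight / 2f);

                // Keep the area above the view populated with ledges.
                while (highestSpawnedY < CameraY + ViewHeight * 1.5f)
                {
                    highestSpawnedY += (float)random.NextDouble() * MaxGap;
                    ledgeYPositions.Enqueue(highestSpawnedY);
                }

                // Drop ledges that have scrolled well below the view; they can never be reached again.
                while (ledgeYPositions.Count > 0 && ledgeYPositions.Peek() < CameraY - ViewHeight)
                    ledgeYPositions.Dequeue();
            }

            // Convert a ledge's world Y into a screen Y for rendering.
            public float ToScreenY(float worldY)
            {
                return worldY - CameraY;
            }
        }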

    Read the article

  • Change Comes from Within

    - by John K. Hines
    I am in the midst of witnessing a variety of teams moving away from Scrum. Some of them are doing things like replacing Scrum terms with more commonly understood terminology. Mainly they have gone back to using industry standard terms and more traditional processes like the RAPID decision making process. For example: Scrum Master becomes Project Lead. Scrum Team becomes Project Team. Product Owner becomes Stakeholders. I'm actually quite sad to see this happening, but I understand that Scrum is a radical change for most organizations. Teams are slowly but surely moving away from Scrum to a process that non-software engineers can understand and follow. Some could never secure the education or personnel (like a Product Owner) to get the whole team engaged. And many people with decision-making authority do not see the value in Scrum besides task planning and tracking. You see, Scrum cannot be mandated. No one can force a team to be Agile, collaborate, continuously improve, and self-reflect. Agile adoptions must start from a position of mutual trust and willingness to change. And most software teams aren't like that. Here is my personal epiphany from over a year of attempting to promote Agile on a small development team: The desire to embrace Agile methodologies must come from each and every member of the team. If this desire does not exist - if the team is satisfied with its current process, if the team is not motivated to improve, or if the team is afraid of change - the actual demonstration of all the benefits prescribed by Agile and Scrum will take years. I've read some blog posts lately that criticise Scrum for demanding "Big Change Up Front." One's opinion of software methodologies boils down to one's perspective. If you see modern software development as successful, you will advocate for small, incremental changes to how it is done. If you see it as broken, you'll be much more motivated to take risks and try something different. So my question to you is this - is modern software development healthy or in need of dramatic improvement? I can tell you from personal experience that any project that requires exploration, planning, development, stabilisation, and deployment is hard. Trying to make that process better with only a slightly modified approach is a mistake. You will become completely dependent upon the skillset of your team (the only variable you can change). But the difficulty of planned work isn't one of skill. It isn't until you solve the fundamental challenges of communication, collaboration, quality, and efficiency that skill even comes into play. So I advocate for Big Change Up Front. And I advocate for it to happen often until those involved can say, from experience, that it is no longer needed. I hope every engineer has the opportunity to see the benefits of Agile and Scrum on a highly functional team. I'll close with more key learnings that can help with a Scrum adoption: Your leaders must understand Scrum. They must understand software development, its inherent difficulties, and how Scrum helps. If you attempt to adopt Scrum before the understanding is there, your leaders will apply traditional solutions to your problems - often creating more problems. Success should be measured by quality, not revenue. Namely, the value of software to an organization is the revenue it generates minus ongoing support costs. You should identify quality-based metrics that show the effect Agile techniques have on your software. Motivation is everything. 
I finally understand why so many Agile advocates say that if you are not on a team using Agile, you should leave and find one. Scrum and especially Agile encompass many elegant solutions to a wide variety of problems. If you are working on a team that has not encountered these problems, the team may never see the value in the solutions. Having said all that, I'm not giving up on Agile or Scrum. I am convinced it is a better approach to software development. But reality is saying that its adoption is not straightforward and is highly subject to disruption. Unless, that is, everyone really, really wants it.

    Read the article

  • Simple Java networking game engine [on hold]

    - by Florian Peschka
    I want to create a simple Java networking game and am looking for a networking engine that eases the use of sockets, etc. I have already read some questions here and on the internet about Java networking for games, but many of them were over 10 years old or not really answered. I have no idea whatsoever what exactly I need to send in terms of messages, but I figure simple strings or integers will be enough for my purposes. It's basically a peer-to-peer game, so I don't need a centralized server structure. Messaging doesn't have to have an extremely low ping, yet of course all players need to have a synchronized view. What possibilities do I have here?

    Read the article

  • Review quality of code

    - by magol
    I have been asked to review the quality of two code bases. I've never done anything like that and need advice on how to perform the review and report on it. Background: there are two providers of code, one in VB and one in C (ISO 9899:1999 (C99)). These two programs do not work well together, and of course the two suppliers blame each other. I will therefore, as an independent person, review both code bases at a comprehensive level to assess their quality and find out where the problem most likely lies. I will not try to find specific defects, but simply review the quality and how easy the code is to manage and understand. Edit: I have not yet received much information about what the problem consists of. I've just been told that I will examine the code in terms of quality, not much more. I do not know the background to why they took this decision.

    Read the article

  • adding slugs to the URLs afterwards

    - by altuure
    We have had a website for the last 5 months and we did not use slugs on bottom-level elements, so URLs looked like /apps/webmasters/badges/1100. Would it make sense to add the name to the URL at this point and redirect to the new ones? I am interested in covering more search terms and increasing page ranks: /apps/webmasters/badges/1100 - redirected to and served at /apps/webmasters/badges/1100-supporter. Or should I keep the old URLs as they are and create new URLs with slugs? I would also appreciate some advice on URLs already shared on Facebook or Twitter in those cases. Thanks in advance...
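    If you do add slugs, the usual advice is to keep the old ID-only URLs working but have them issue a permanent (301) redirect to the canonical slugged URL, so existing links and URLs already shared on Facebook or Twitter keep their value. A hypothetical C# sketch (names made up, framework left out) of the slug and canonical-URL generation:

        using System;
        using System.Text.RegularExpressions;

        // Hypothetical sketch of slug generation for badge URLs.
        public static class BadgeUrls
        {
            public static string Slugify(string name)
            {
                string slug = name.Trim().ToLowerInvariant();
                slug = Regex.Replace(slug, "[^a-z0-9]+", "-"); // collapse anything non-alphanumeric to '-'
                return slug.Trim('-');
            }

            // e.g. BuildCanonicalUrl(1100, "Supporter") -> "/apps/webmasters/badges/1100-supporter"
            public static string BuildCanonicalUrl(int id, string name)
            {
                return string.Format("/apps/webmasters/badges/{0}-{1}", id, Slugify(name));
            }
        }

        // In whatever handles /apps/webmasters/badges/{id} requests (framework specific):
        //   if the requested path differs from BadgeUrls.BuildCanonicalUrl(id, badge.Name),
        //   respond with HTTP 301 Moved Permanently and the canonical URL in the Location header.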

    Read the article

  • Chris Brook-Carter at the Oracle Retail Week Awards VIP Reception

    - by user801960
    The Oracle VIP Reception at the Oracle Retail Week Awards last week saw retail luminaries from around the UK and Europe gather to have a drink and celebrate the successes of retail in the last year. Guests included Lord Harris of Peckham, Tesco's Philip Clarke, Vanessa Gold from Ann Summers, former Retail Week editor Tim Danaher, Richard Pennycook from Morrisons and Ian Cheshire from Kingfisher Group. The new Retail Week editor-in-chief, Chris Brook-Carter, attended and took the time to speak to the guests about the value of the Oracle Retail Week Awards to the industry and to thank Oracle for its dedication to supporting the industry. Chris said: "I'd like to say a real heartfelt thanks to our partner this evening: Oracle. I had the privilege of being at the judging day and I got to meet Sarah and the team and I was struck by not only the passion that they have for the whole awards system and everything that means in terms of rewarding excellence within the retail industry but also their commitment to retail in general, and it's that sort of relationship that marks out retail as such a fantastic sector to be involved in." Chris's speech can be watched in full below:

    Read the article

  • Hadron Collider – Can it unveil the hidden secrets of the universe?

    - by samsudeen
    Scientists at the European Centre for Nuclear Research (CERN) today successfully simulated the Big Bang by producing the world's first high-energy particle collision. This was achieved by colliding two protons with a total energy of around seven trillion electron volts, sending sub-particles out in every direction. The experiment was conducted successfully at CERN, about 100 metres below the Franco-Swiss border. It is said to be the biggest experiment in terms of both investment (around $7 billion) and scientific importance. It will lead to a new era of science and could change theories about the origin of the universe. You can find more videos about the experiment at the LHC Videos page.

    Read the article

  • Deck from London UG 20110616 - Building a Reporting Brick capable of 1.2GBytes/sec and 80K IOs/sec for less than £2K

    - by tonyrogerson
    The Reporting Brick concept is not really anything new; it starts the walk toward bringing the work Jim Gray, Tom Barclay et al. did on CyberBricks up to date in terms of current kit. A reporting brick is simply a box built from commodity kit utilising commodity SSDs, namely the OCZ IBIS drives, to gain extremely high levels of performance for a fraction of the cost required for typical server and SAN installs today. I'll write this up over the next few months as I work further on the concept; for now the attached deck summarises some of the ideas around it. The deck was presented at last night's London SQL Server User Group, and I will be presenting it again in Edinburgh on the 29th June and at other locations later in the year. Deck: Commodity Kit.pptx

    Read the article

  • Panduit Delivers on the Digital Business Promise

    - by Kellsey Ruppel
    How a 60-Year-Old Company Transformed into a Modern Digital Business. Connecting with audiences through a robust online experience across multiple channels and devices is a nonnegotiable requirement in today's digital world. Companies need a digital platform that helps them create, manage, and integrate processes, content, analytics, and more. Panduit, a company founded nearly 60 years ago, needed to simplify and modernize its enterprise application and infrastructure to position itself for long-term growth. Learn how it transformed into a digital business using Oracle WebCenter and Oracle Business Process Management. Join this webcast for an in-depth look at how these Oracle technologies helped Panduit: increase self-service activity on their portal by 75%; improve the number and quality of sales leads through increased customer interactions and registration over the web and mobile; and create multichannel self-service interactions and content-enabled business processes. Register now for this webcast. Presented by: Andy Kershaw, Senior Director, Oracle WebCenter, Oracle BPM and Oracle Social Network Product Management, Oracle; Vidya Iyer, IT Delivery Manager, Panduit; Patrick Garcia, IT Solutions Architect, Panduit.

    Read the article

  • MacBook Pro Compatibility, Multitouch, and General Experience

    - by jondavidjohn
    I am an ex-Ubuntu user who decided to go to OS X, mainly because I was going to be working in an OS X shop and felt I needed a more mainstream OS to run production-level software packages like Adobe's. Six months in, I am more than happy with my MacBook Pro purchase. The physical build quality alone warrants the premium price tag, but I am now looking at my day-to-day demands and realize that I really do not use any software that prevents me from going back to Ubuntu. My questions now are, in terms of the 2010 MacBook Pro: how is the hardware compatibility? Do the trackpad multitouch gestures work with 10.10? Are they oversensitive? And for anyone who has a relatively new MacBook Pro running Ubuntu, how is the general experience coming from an OS X environment?

    Read the article

  • Mocking the Unmockable: Using Microsoft Moles with Gallio

    - by Thomas Weller
    Usual opensource mocking frameworks (like e.g. Moq or Rhino.Mocks) can mock only interfaces and virtual methods. In contrary to that, Microsoft’s Moles framework can ‘mock’ virtually anything, in that it uses runtime instrumentation to inject callbacks in the method MSIL bodies of the moled methods. Therefore, it is possible to detour any .NET method, including non-virtual/static methods in sealed types. This can be extremely helpful when dealing e.g. with code that calls into the .NET framework, some third-party or legacy stuff etc… Some useful collected resources (links to website, documentation material and some videos) can be found in my toolbox on Delicious under this link: http://delicious.com/thomasweller/toolbox+moles A Gallio extension for Moles Originally, Moles is a part of Microsoft’s Pex framework and thus integrates best with Visual Studio Unit Tests (MSTest). However, the Moles sample download contains some additional assemblies to also support other unit test frameworks. They provide a Moled attribute to ease the usage of mole types with the respective framework (there are extensions for NUnit, xUnit.net and MbUnit v2 included with the samples). As there is no such extension for the Gallio platform, I did the few required lines myself – the resulting Gallio.Moles.dll is included with the sample download. With this little assembly in place, it is possible to use Moles with Gallio like that: [Test, Moled] public void SomeTest() {     ... What you can do with it Moles can be very helpful, if you need to ‘mock’ something other than a virtual or interface-implementing method. This might be the case when dealing with some third-party component, legacy code, or if you want to ‘mock’ the .NET framework itself. Generally, you need to announce each moled type that you want to use in a test with the MoledType attribute on assembly level. For example: [assembly: MoledType(typeof(System.IO.File))] Below are some typical use cases for Moles. For a more detailed overview (incl. naming conventions and an instruction on how to create the required moles assemblies), please refer to the reference material above.  Detouring the .NET framework Imagine that you want to test a method similar to the one below, which internally calls some framework method:   public void ReadFileContent(string fileName) {     this.FileContent = System.IO.File.ReadAllText(fileName); } Using a mole, you would replace the call to the File.ReadAllText(string) method with a runtime delegate like so: [Test, Moled] [Description("This 'mocks' the System.IO.File class with a custom delegate.")] public void ReadFileContentWithMoles() {     // arrange ('mock' the FileSystem with a delegate)     System.IO.Moles.MFile.ReadAllTextString = (fname => fname == FileName ? FileContent : "WrongFileName");       // act     var testTarget = new TestTarget.TestTarget();     testTarget.ReadFileContent(FileName);       // assert     Assert.AreEqual(FileContent, testTarget.FileContent); } Detouring static methods and/or classes A static method like the below… public static string StaticMethod(int x, int y) {     return string.Format("{0}{1}", x, y); } … can be ‘mocked’ with the following: [Test, Moled] public void StaticMethodWithMoles() {     MStaticClass.StaticMethodInt32Int32 = ((x, y) => "uups");       var result = StaticClass.StaticMethod(1, 2);       Assert.AreEqual("uups", result); } Detouring constructors You can do this delegate thing even with a class’ constructor. 
The syntax for this is not all  too intuitive, because you have to setup the internal state of the mole, but generally it works like a charm. For example, to replace this c’tor… public class ClassWithCtor {     public int Value { get; private set; }       public ClassWithCtor(int someValue)     {         this.Value = someValue;     } } … you would do the following: [Test, Moled] public void ConstructorTestWithMoles() {     MClassWithCtor.ConstructorInt32 =            ((@class, @value) => new MClassWithCtor(@class) {ValueGet = () => 99});       var classWithCtor = new ClassWithCtor(3);       Assert.AreEqual(99, classWithCtor.Value); } Detouring abstract base classes You can also use this approach to ‘mock’ abstract base classes of a class that you call in your test. Assumed that you have something like that: public abstract class AbstractBaseClass {     public virtual string SaySomething()     {         return "Hello from base.";     } }      public class ChildClass : AbstractBaseClass {     public override string SaySomething()     {         return string.Format(             "Hello from child. Base says: '{0}'",             base.SaySomething());     } } Then you would set up the child’s underlying base class like this: [Test, Moled] public void AbstractBaseClassTestWithMoles() {     ChildClass child = new ChildClass();     new MAbstractBaseClass(child)         {                 SaySomething = () => "Leave me alone!"         }         .InstanceBehavior = MoleBehaviors.Fallthrough;       var hello = child.SaySomething();       Assert.AreEqual("Hello from child. Base says: 'Leave me alone!'", hello); } Setting the moles behavior to a value of  MoleBehaviors.Fallthrough causes the ‘original’ method to be called if a respective delegate is not provided explicitly – here it causes the ChildClass’ override of the SaySomething() method to be called. There are some more possible scenarios, where the Moles framework could be of much help (e.g. it’s also possible to detour interface implementations like IEnumerable<T> and such…). One other possibility that comes to my mind (because I’m currently dealing with that), is to replace calls from repository classes to the ADO.NET Entity Framework O/R mapper with delegates to isolate the repository classes from the underlying database, which otherwise would not be possible… Usage Since Moles relies on runtime instrumentation, mole types must be run under the Pex profiler. This only works from inside Visual Studio if you write your tests with MSTest (Visual Studio Unit Test). While other unit test frameworks generally can be used with Moles, they require the respective tests to be run via command line, executed through the moles.runner.exe tool. A typical test execution would be similar to this: moles.runner.exe <mytests.dll> /runner:<myframework.console.exe> /args:/<myargs> So, the moled test can be run through tools like NCover or a scripting tool like MSBuild (which makes them easy to run in a Continuous Integration environment), but they are somewhat unhandy to run in the usual TDD workflow (which I described in some detail here). To make this a bit more fluent, I wrote a ReSharper live template to generate the respective command line for the test (it is also included in the sample download – moled_cmd.xml). - This is just a quick-and-dirty ‘solution’. Maybe it makes sense to write an extra Gallio adapter plugin (similar to the many others that are already provided) and include it with the Gallio download package, if  there’s sufficient demand for it. 
As of now, the only way to run tests with the Moles framework from within Visual Studio is by using them with MSTest. From the command line, anything with a managed console runner can be used (provided that the appropriate extension is in place)… A typical Gallio/Moles command line (as generated by the mentioned R#-template) looks like this: "%ProgramFiles%\Microsoft Moles\bin\moles.runner.exe" /runner:"%ProgramFiles%\Gallio\bin\Gallio.Echo.exe" "Gallio.Moles.Demo.dll" /args:/r:IsolatedAppDomain /args:/filter:"ExactType:TestFixture and Member:ReadFileContentWithMoles" -- Note: When using the command line with Echo (Gallio’s console runner), be sure to always include the IsolatedAppDomain option, otherwise the tests won’t use the instrumentation callbacks! -- License issues As I already said, the free mocking frameworks can mock only interfaces and virtual methods. If you want to mock other things, you need the Typemock Isolator tool for that, which comes with license costs (although these ‘costs’ are ridiculously low compared to the value that such a tool can bring to a software project, spending money often is a considerable gateway hurdle in real life...). The Moles framework also is not totally free, but comes with the same license conditions as the (closely related) Pex framework: it is free for academic/non-commercial use only; using it in a ‘real’ software project requires an MSDN Subscription (from VS2010pro on). The demo solution The sample solution (VS 2008) can be downloaded from here. It contains the Gallio.Moles.dll which provides the Moled attribute described here, the above-mentioned R#-template (moled_cmd.xml) and a test fixture containing the use case scenarios described above. To run it, you need the Gallio framework (download) and Microsoft Moles (download) to be installed in the default locations. Happy testing…

    Read the article

  • keyword stuffing in SEO

    - by Andrej
    I have a web shop, and on some of the pages certain keywords are used a bit more than on others. For example, "hp toner" is used pretty much in the description of the product, in the alt tag, in the brand, and so on, and if I have, let's say, 100 of these products on the "HP page", that means "hp toner" is going to show up at least 200 times more than some other random word... but the keyword stuffing is not intentional here; it's just that the quantity of the product is bigger, and so is the frequency of the word that describes it. Is that considered keyword stuffing in SEO terms?

    Read the article

  • Dual Control / Four Eyes Principle

    - by Ralf
    I have the requirement to implement some kind of Dual Control or Four-Eyes-Principle, meaning that every change of an object done by user A has to be checked by user B. A trivial example would be a publishing system where an author writes an article and another has to proofread it before it is published. I am a little bit surprised that you find nearly nothing about it on the net. No patterns, no libraries (besides cibet), no workflow solutions etc. Is this requirement really so uncommon? Or am I searching for the wrong terms? I am not looking for a specific solution. More for a pattern or best practice approach.
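    In the absence of a ready-made library, the pattern itself is small enough to sketch directly. The following is a minimal, hypothetical C# illustration (all names invented here): user A's change is held in a pending state and can only be applied after a different user B approves it.

        using System;

        // Hypothetical sketch of the dual-control / four-eyes pattern.
        public class PendingChange<T>
        {
            public string ProposedBy { get; private set; }
            public T NewValue { get; private set; }
            public bool IsApproved { get; private set; }
            public string ApprovedBy { get; private set; }

            public PendingChange(string proposedBy, T newValue)
            {
                ProposedBy = proposedBy;
                NewValue = newValue;
            }

            public void Approve(string approver)
            {
                if (string.Equals(approver, ProposedBy, StringComparison.OrdinalIgnoreCase))
                    throw new InvalidOperationException("Four-eyes principle: the author cannot approve their own change.");
                IsApproved = true;
                ApprovedBy = approver;
            }

            // The target is only updated once a second person has approved the change.
            public void ApplyTo(Action<T> commit)
            {
                if (!IsApproved)
                    throw new InvalidOperationException("Change has not been approved yet.");
                commit(NewValue);
            }
        }

        // Usage, e.g. for the publishing example:
        //   var change = new PendingChange<string>("authorA", "revised article text");
        //   change.Approve("editorB");
        //   change.ApplyTo(text => article.Body = text);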

    Read the article

  • The Incremental Architect's Napkin – #3 – Make Evolvability inevitable

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/06/04/the-incremental-architectacutes-napkin-ndash-3-ndash-make-evolvability-inevitable.aspx The easier something is to measure, the more likely it will be produced. Deviations between what is and what should be can be readily detected. That's what automated acceptance tests are for. That's what sprint reviews in Scrum are for. It's no small wonder our software looks like it looks. It has all the traits whose conformance with requirements can easily be measured. And it's lacking traits which cannot easily be measured. Evolvability (or Changeability) is such a trait. If an operation is correct, if an operation is fast enough, that can be checked very easily. But whether Evolvability is high or low cannot be checked by taking a measure or two. Evolvability might correlate with certain traits, e.g. number of lines of code (LOC) per function, Cyclomatic Complexity, or test coverage. But there is no threshold value signalling "evolvability too low"; also, Evolvability is hardly tangible for the customer. Nevertheless Evolvability is of great importance - at least in the long run. You can get away without much of it for a short time. Eventually, though, it's needed like any other requirement. Or even more: because without Evolvability no other requirement can be implemented. Evolvability is the foundation on which all else is built. Such fundamental importance is in stark contrast with its immeasurability. To compensate for this, Evolvability must be put at the very center of software development. It must become the hub around which everything else revolves. Since we cannot measure Evolvability, though, we cannot start watching it more. Instead we need to establish practices to keep it high (enough) at all times. Chefs have known that for a long time. That's why everybody in a restaurant kitchen is constantly seeing to cleanliness. Hygiene is important, as is having clean tools at standardized locations. Only then can the health of the patrons be guaranteed and production efficiency kept constantly high. Still, a kitchen's level of cleanliness is easier to measure than software Evolvability. That's why important practices like reviews, pair programming, or TDD are not enough, I guess. What we need to keep Evolvability in focus and high is… to continually evolve. Change must not be something to avoid but to embrace. To me that means the whole change cycle from requirement analysis to delivery needs to be gone through more often. Scrum's sprints of 4, 2, even 1 week are too long. Kanban's flow of user stories is too unreliable; it takes as long as it takes. Instead we should fix the cycle time at 2 days max. I call that Spinning. No increment must take longer than from this morning until tomorrow evening to finish. Then it should be acceptance checked by the customer (or his/her representative, e.g. a Product Owner). For me there are several reasons for such a fixed and short cycle time for each increment:
Clear expectations: Absolute estimates ("This will take X days to complete.") are near impossible in software development, as explained previously. Too much unplanned research and engineering work lurks in every feature. And then there are pervasive interruptions of work by peers and management. However, the smaller the scope the better our absolute estimates become. That's because we understand better what the requirements really are and what the solution should look like. But maybe more importantly, the shorter the timespan the more we can control how we use our time. So much can happen over the course of a week and longer timespans. But if push comes to shove I can block out all distractions and interruptions for a day or possibly two. That's why I believe we can give rough absolute estimates on three levels: Noon, Tonight, Tomorrow. Think of a meeting with a Product Owner at 8:30 in the morning. If she asks you how long it will take you to implement a user story or bug fix, you can say, "It'll be fixed by noon", or you can say, "I can manage to implement it by tonight before I leave", or you can say, "You'll get it by tomorrow night at the latest." Yes, I believe all else would be naive. If you're not confident you can get something done by tomorrow night (some 34h from now) you just cannot reliably commit to any timeframe. That means you should not promise anything; you should not even start working on the issue. So when estimating use these four categories: Noon, Tonight, Tomorrow, NoClue - with NoClue meaning the requirement needs to be broken down further so each aspect can be assigned to one of the first three categories. If you like absolute estimates, here you go. But don't do deep estimates. Don't estimate dozens of issues; don't think ahead ("Issue A is a Tonight, then B will be a Tomorrow, after that it's C as a Noon, finally D is a Tonight - that's what I'll do this week."). Just estimate so Work-in-Progress (WIP) is 1 for everybody - plus a small number of buffer issues. To be blunt: yes, this makes promises impossible as to what a team will deliver in terms of scope at a certain date in the future. But it will give a Product Owner a clear picture of what to pull for acceptance feedback tonight and tomorrow.
Trust through reliability: Our trade is lacking trust. Customers don't trust software companies/departments much. Managers don't trust developers much. I find that perfectly understandable in the light of what we're trying to accomplish: delivering software in the face of uncertainty by the means of material goods production. Customers as well as managers still expect software development to be close to the production of houses or cars. But that's a fundamental misunderstanding. Software development is development. It's basically research. As software developers we're constantly executing experiments to find out what really provides value to users. We don't know what they need; we just have mediated hypotheses. That's why we cannot reliably deliver on preposterous demands. So trust is out of the window in no time. If we switch to delivering in short cycles, though, we can regain trust. Because estimates - explicit or implicit - of up to 32 hours at most can be satisfied. I'd say: reliability over scope. It's more important to reliably deliver what was promised than to cover a lot of requirement area. So when in doubt promise less - but deliver without delay. Deliver on scope (Functionality and Quality); but also deliver on Evolvability, i.e. on inner quality according to accepted principles. Always. Trust will be the reward. Less complexity of communication will follow. More goodwill buffer will follow. So don't wait for some Kanban board to show you that flow can be improved by scheduling smaller stories. You don't need to learn that the hard way. Just start with small batches in the three sizes.
Fast feedback: What has been finished can be checked for acceptance. Why wait for a sprint of several weeks to end? Why let the mental model of the issue and its solution dissipate? If you get final feedback after one or two weeks, you hardly remember what you did and why you did it. Reasoning becomes hard. But more importantly, you probably are not in the mood anymore to go back to something you deemed done a long time ago. It's boring, it's frustrating to open up that mental box again. Learning is harder the longer it takes from event to feedback. Effort can be wasted between event (finishing an issue) and feedback, because other work might go in the wrong direction based on false premises. Checking finished issues for acceptance is the most important task of a Product Owner. It's even more important than planning new issues. Because as long as work started is not released (accepted) it's potential waste. So before starting new work, better make sure work already done has value. By putting the emphasis on acceptance rather than planning, true pull is established. As long as planning and starting work is more important, it's a push process. Accept a Noon issue on the same day before leaving. Accept a Tonight issue before leaving today or first thing tomorrow morning. Accept a Tomorrow issue tomorrow night before leaving or early the day after tomorrow. After acceptance the developer(s) can start working on the next issue.
Flexibility: As if reliability/trust and fast feedback for less waste weren't enough economic incentive, there is flexibility. After each issue the Product Owner can change course. If on Monday morning feature slices A, B, C, D, E were important and A, B, C were scheduled for acceptance by Monday evening and Tuesday evening, the Product Owner can change her mind at any time. Maybe after A got accepted she asks for continuation with D. But maybe, just maybe, she has gotten a completely different idea by then. Maybe she wants work to continue on F. And after B it's neither D nor E, but G. And after G it's D. With Spinning, every 32 hours at the latest priorities can be changed. And nothing is lost. Because what got accepted is of value. It provides an incremental value to the customer/user. Or it provides internal value to the Product Owner as increased knowledge/decreased uncertainty. I find such reactivity over commitment economically very beneficial. Why commit a team to some workload for several weeks? It's unnecessary at best, and inflexible and wasteful at worst. If we cannot promise delivery of a certain scope on a certain date - which is what customers/management usually want - we can at least provide them with unprecedented flexibility in the face of high uncertainty. Where the path is not clear, cannot be clear, make small steps so you're able to change your course at any time.
Premature completion: Customers/management are used to premeditating budgets. They want to know exactly how much to pay for a certain amount of requirements. That's understandable. But it does not match the nature of software development. We should know that by now. Maybe there's somewhere in the world some team who can consistently deliver on scope, quality, and time, and budget. Great! Congratulations! I, however, haven't seen such a team yet. Which does not mean it's impossible, but I think it's nothing I can recommend to strive for. Rather I'd say: don't try this at home. It might hurt you one way or the other. However, what we can do is allow customers/management to stop work on features at any moment. With Spinning, every 32 hours a feature can be declared as finished - even though it might not be completed according to its initial definition. I think progress over completion is an important offer software development can make. Why think in terms of completion beyond a promise for the next 32 hours? Isn't it more important to constantly move forward? Step by step. We're not running sprints, we're not running marathons, not even ultra-marathons. We're in the sport of running forever. That makes it futile to stare at the finishing line. The very concept of a burn-down chart is misleading (in most cases). Whoever can only think in terms of completed requirements shuts out the chance for saving money. The requirements for a feature are mostly uncertain. So how does a Product Owner know in the first place how much is needed? Maybe more than specified is needed - which gets uncovered step by step with each finished increment. Maybe less than specified is needed. After each 4–32 hour increment the Product Owner can do an experiment (or invite users to an experiment) to see if a particular trait of the software system is already good enough. And if so, she can switch her attention to a different aspect. In the end, requirements A, B, C could then be finished just 70%, 80%, and 50%. What the heck? It's good enough - for now. 33% money saved. Wouldn't that be splendid? Isn't that a stunning argument for any budget-sensitive customer? You can save money and still get what you need?
Pull on practices: So far, in addition to more trust, more flexibility, and less money spent, Spinning has led to "doing less", which also means less code, which of course means higher Evolvability per se. Last but not least, though, I think Spinning's short acceptance cycles have one more effect. They exert pull-power on all sorts of practices known for increasing Evolvability. If, for example, you believe high automated test coverage helps Evolvability by lowering the fear of inadvertent damage to a code base, why isn't 90% of the developer community practicing automated tests consistently? I think the answer is simple: because they can do without. Somehow they manage to do enough manual checks before their rare releases/acceptance checks to ensure good enough correctness - at least in the short term. The same goes for other practices like component orientation, continuous build/integration, code reviews etc. None of that is compelling, urgent, imperative. Something else always seems more important. So Evolvability principles and practices fall through the cracks most of the time - until a project hits a wall. Then everybody becomes desperate; but by then (re)gaining Evolvability has become a very, very difficult and tedious undertaking. Sometimes up to the point where the existence of a project/company is in danger. With Spinning that's different. If you're practicing Spinning you cannot avoid all those practices. With Spinning you very quickly realize you cannot deliver reliably even on your 32 hour promises. Spinning thus pulls on developers to adopt principles and practices for Evolvability. They will start actively looking for ways to keep their delivery rate high. And if not, management will soon tell them to do that. Because first the Product Owner and then management will notice an increasing difficulty in delivering value within 32 hours. There, finally, emerges a way to measure Evolvability: the more frequently developers tell the Product Owner there is no way to deliver anything worthy of feedback until tomorrow night, the poorer Evolvability is. Don't count the "WTF!", count the "No way!" utterances.
In closing: For sustainable software development we need to put Evolvability first. Functionality and Quality must not rule software development but be implemented within a framework ensuring (enough) Evolvability. Since Evolvability cannot be measured easily, I think we need to put software development "under pressure". Software needs to be changed more often, in smaller increments, each increment being relevant to the customer/user in some way. That does not mean each increment is worthy of shipment; it's sufficient to gain further insight from it. Increments primarily serve the reduction of uncertainty, not sales. Sales even needs to be decoupled from this incremental progress. No more promises to sales. No more delivery au point. Rather, sales should look at a stream of accepted increments (or incremental releases) and scoop from that whatever they find valuable. Sales and marketing need to realize they should work on what's there, not what might be possible in the future. But I digress… In my view a Spinning cycle - which is not easy to reach, which requires practice - is the core practice to compensate for the immeasurability of Evolvability. From start to finish of each issue in 32 hours max - that's the challenge we need to accept if we're serious about increasing Evolvability. Fortunately, higher Evolvability is not the only outcome of Spinning. Customers/management will like the increased flexibility and "getting more bang for the buck".

    Read the article

  • BDD IS Different to TDD

    - by Liam McLennan
    One of this morning's sessions at Alt.NET 2010 discussed BDD. Charlie Pool expressed the opinion, which I have heard many times, that BDD is just a description of TDD done properly. For me, the core principles of BDD are: expressing behaviour in terms that show the value to the system actors, and expressing behaviours/scenarios in a format that clearly separates the context, the action and the observations. If we go back to Kent Beck's TDD book, neither of these elements is mentioned as being core to TDD. BDD is an evolution of TDD. It is a specialisation of TDD, but it is not the same as TDD. Discussing BDD, and building specialised tools for BDD, is valuable even though the difference between BDD and TDD is subtle. Further, the existence of BDD does not mean that TDD is obsolete or invalidated.
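    To make the second principle concrete, here is a hypothetical NUnit-style scenario (names invented for illustration): the context (Given), the action (When) and the observations (Then) are visibly separated, and the test name describes value to an actor rather than an implementation detail.

        using NUnit.Framework;

        // Hypothetical example of a BDD-style scenario layout.
        [TestFixture]
        public class WithdrawingCashSpec
        {
            [Test]
            public void Account_holder_can_withdraw_cash_when_funds_are_available()
            {
                // Given an account with sufficient funds
                var account = new Account(100m);

                // When the holder withdraws part of the balance
                account.Withdraw(30m);

                // Then the balance is reduced accordingly
                Assert.AreEqual(70m, account.Balance);
            }
        }

        // Minimal supporting type so the example is self-contained.
        public class Account
        {
            public decimal Balance { get; private set; }
            public Account(decimal initialBalance) { Balance = initialBalance; }
            public void Withdraw(decimal amount) { Balance -= amount; }
        }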

    Read the article

  • Measuring Usability with Common Industry Format (CIF) Usability Tests

    - by Applications User Experience
    Sean Rice, Manager, Applications User Experience A User-centered Research and Design Process The Oracle Fusion Applications user experience was five years in the making. The development of this suite included an extensive and comprehensive user experience design process: ethnographic research, low-fidelity workflow prototyping, high fidelity user interface (UI) prototyping, iterative formative usability testing, development feedback and iteration, and sales and customer evaluation throughout the design cycle. However, this process does not stop when our products are released. We conduct summative usability testing using the ISO 25062 Common Industry Format (CIF) for usability test reports as an organizational framework. CIF tests allow us to measure the overall usability of our released products.  These studies provide benchmarks that allow for comparisons of a specific product release against previous versions of our product and against other products in the marketplace. What Is a CIF Usability Test? CIF refers to the internationally standardized method for reporting usability test findings used by the software industry. The CIF is based on a formal, lab-based test that is used to benchmark the usability of a product in terms of human performance and subjective data. The CIF was developed and is endorsed by more than 375 software customer and vendor organizations led by the National Institute for Standards and Technology (NIST), a US government entity. NIST sponsored the CIF through the American National Standards Institute (ANSI) and International Organization for Standardization (ISO) standards-making processes. Oracle played a key role in developing the CIF. The CIF report format and metrics are consistent with the ISO 9241-11 definition of usability: “The extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use.” Our goal in conducting CIF tests is to measure performance and satisfaction of a representative sample of users on a set of core tasks and to help predict how usable a product will be with the larger population of customers. Why Do We Perform CIF Testing? The overarching purpose of the CIF for usability test reports is to promote incorporation of usability as part of the procurement decision-making process for interactive products. CIF provides a common format for vendors to report the methods and results of usability tests to customer organizations, and enables customers to compare the usability of our software to that of other suppliers. CIF also enables us to compare our current software with previous versions of our software. CIF Testing for Fusion Applications Oracle Fusion Applications comprises more than 100 modules in seven different product families. These modules encompass more than 400 task flows and 400 user roles. Due to resource constraints, we cannot perform comprehensive CIF testing across the entire product suite. Therefore, we had to develop meaningful inclusion criteria and work with other stakeholders across the applications development organization to prioritize product areas for testing. Ultimately, we want to test the product areas for which customers might be most interested in seeing CIF data. We also want to build credibility with customers; we need to be able to make the case to current and prospective customers that the product areas tested are representative of the product suite as a whole. 
Our goal is to test the top use cases for each product. The primary activity in the scoping process was to work with the individual product teams to identify the key products and business process task flows in each product to test. We prioritized these products and flows through a series of negotiations among the user experience managers, product strategy, and product management directors for each of the primary product families within the Oracle Fusion Applications suite (Human Capital Management, Supply Chain Management, Customer Relationship Management, Financials, Projects, and Procurement). The end result of the scoping exercise was a list of 47 proposed CIF tests for the Fusion Applications product suite.  Figure 1. A participant completes tasks during a usability test in Oracle’s Usability Labs Fusion Supplier Portal CIF Test The first Fusion CIF test was completed on the Supplier Portal application in July of 2011.  Fusion Supplier Portal is part of an integrated suite of Procurement applications that helps supplier companies manage orders, schedules, shipments, invoices, negotiations and payments. The user roles targeted for the usability study were Supplier Account Receivables Specialists and Supplier Sales Representatives, including both experienced and inexperienced users across a wide demographic range.  The test specifically focused on the following functionality and features: Manage payments – view payments Manage invoices – view invoice status and create invoices Manage account information – create new contact, review bank account information Manage agreements – find and view agreement, upload agreement lines, confirm status of agreement lines upload Manage purchase orders (PO) – view history of PO, request change to PO, find orders Manage negotiations – respond to request for a quote, check the status of a negotiation response These product areas were selected to represent the most important subset of features and functionality of the flow, in terms of frequency and criticality of use by customers. A total of 20 users participated in the usability study. The results of the Supplier Portal evaluation were favorable and exceeded our expectations. Figure 2. Fusion Supplier Portal Next Studies We plan to conduct two Fusion CIF usability studies per product family over the next nine months. The next product to be tested will be Self-service Procurement. End users are currently being recruited to participate in this usability study, and the test sessions are scheduled to begin during the last week of November.
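    For readers unfamiliar with the CIF format, the report ultimately rests on a few summary numbers per task: effectiveness (task completion rate), efficiency (time on task) and satisfaction (post-task or post-test ratings). The sketch below is purely illustrative - the fields and figures are hypothetical, not data from the study described above.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Hypothetical sketch of CIF-style summary metrics for one task.
        public class TaskResult
        {
            public bool Completed;            // did the participant finish the task successfully?
            public double TimeSeconds;        // time on task
            public double SatisfactionScore;  // e.g. a 1-7 post-task rating
        }

        public static class CifSummary
        {
            // Assumes at least one participant completed the task.
            public static void Report(string taskName, IList<TaskResult> results)
            {
                double completionRate = 100.0 * results.Count(r => r.Completed) / results.Count;
                double meanTime = results.Where(r => r.Completed).Average(r => r.TimeSeconds);
                double meanSatisfaction = results.Average(r => r.SatisfactionScore);

                Console.WriteLine("{0}: {1:F0}% completed, mean time {2:F0}s, mean satisfaction {3:F1}",
                    taskName, completionRate, meanTime, meanSatisfaction);
            }
        }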

    Read the article

  • Need help choosing between Grails and Yii Framework

    - by user530207
    I recently started developing in PHP with the Yii Framework. I then came across the Grails Framework and I'm pretty impressed by the sites built with it; bigger companies seem to use Grails for their web development, whereas when looking at Yii, not many big companies are using it. I'm just starting out with Yii and I don't want to turn back halfway through learning it, so I hope someone can give me a comparison of the two in terms of power. Does Grails make things much easier and benefit me in the long run? I only have a C++ background for now. It boils down to this: I want a powerful framework which will serve me for a very long time, and looking at the number of big companies using Grails, I feel discouraged from taking the Yii path. Thank you! Some sites built with Grails: http://video.sky.com/ http://espn.go.com/ http://www.atlassian.com/ http://www.linkedin.com/

    Read the article

  • Best Upper Bound & Best Lower Bound of an Algorithm

    - by Nayefc
    I am studying for a final exam and came across a question I had on an earlier test. The question asks us to find the minimum value in an unsorted array of integers. We must provide the best upper bound and the best lower bound we can for the problem in the worst case. First, in such an example the best upper and lower bounds are the same (hence, we can talk in terms of Big-Theta). In the worst case we would have to go through the whole list, as the minimum value could be at the end of the list. Therefore, the answer is Big-Theta(n). Is this a correct and good explanation?
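    In code, the upper bound is just the single linear scan below. The matching lower bound is an argument rather than code: any algorithm that skips even one element can be wrong, because the skipped element might have been the minimum, so every correct algorithm must inspect all n elements - Omega(n), and therefore Theta(n) overall. (A hypothetical C# illustration.)

        using System;

        public static class ArraySearch
        {
            // Finds the minimum with exactly n-1 comparisons: the O(n) upper bound.
            public static int FindMin(int[] values)
            {
                if (values == null || values.Length == 0)
                    throw new ArgumentException("Array must contain at least one element.");

                int min = values[0];
                for (int i = 1; i < values.Length; i++)
                {
                    if (values[i] < min)
                        min = values[i];
                }
                return min;
            }
        }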

    Read the article

  • ADF Faces now in Eclipse

    - by shay.shmeltzer
    The new version of Oracle Enterprise Pack for Eclipse was just released, and one of the key new features it offers is integration of Oracle ADF Faces development into Eclipse. If you are serious about developing with JSF, you probably know by now that ADF Faces is the richest set of components out there, both in terms of the number of components and the functionality they offer. The components offer a lot of Ajax functionality out of the box, and the framework also offers windowing, drag and drop, push, a JavaScript API, skinning and much more. OEPE makes it simple to build with ADF Faces and test run your application. Here is a basic tutorial that will get you all set up to use this combination. Once you do that, you can then do this:

    Read the article

  • Microsoft Technical Computing

    In the past I have described the team I belong to here at Microsoft (Parallel Computing Platform) in terms of contributing to Visual Studio and related products, e.g. .NET Framework. To be more precise, our team is part of the Technical Computing group, which is still part of the Developer Division. This was officially announced externally earlier this month in an exec email (from Bob Muglia, the president of STB, to which DevDiv belongs). Here is an extract: "As we build the Technical...

    Read the article

  • Quick ways to boost performance and scalability of ASP.NET, WCF and Desktop Clients

    - by oazabir
    There are some simple configuration changes that you can make to machine.config and IIS to give your web applications a significant performance boost. These are simple, harmless changes but they make a lot of difference in terms of scalability. By tweaking the system.net settings, you can increase the number of parallel calls that can be made from the services hosted on your servers as well as on desktop computers, and thus increase scalability. By changing the WCF throttling config you can increase the number of simultaneous calls WCF can accept and thus make the most of your hardware. By changing the ASP.NET process model, you can increase the number of concurrent requests that can be served by your website. And finally, by turning on IIS caching and dynamic compression, you can dramatically increase page download speed in browsers and the overall responsiveness of your applications. Read the CodeProject article for more details. http://www.codeproject.com/KB/webservices/quickwins.aspx Please vote for me if you find the article useful.
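    For context, two of these tweaks also have programmatic counterparts, sketched below with placeholder numbers (tune them for your own hardware; the article itself applies the same settings declaratively in machine.config / web.config):

        using System;
        using System.Net;
        using System.ServiceModel;
        using System.ServiceModel.Description;

        public static class ThroughputTuning
        {
            public static void RaiseOutboundConnectionLimit()
            {
                // Counterpart of <system.net><connectionManagement> maxconnection:
                // allows more parallel outgoing calls per remote host from this process.
                ServicePointManager.DefaultConnectionLimit = 100;
            }

            public static void ApplyWcfThrottling(ServiceHost host)
            {
                // Counterpart of the <serviceThrottling> behavior: raise the number of
                // simultaneous calls, sessions and instances the service will accept.
                var throttle = host.Description.Behaviors.Find<ServiceThrottlingBehavior>();
                if (throttle == null)
                {
                    throttle = new ServiceThrottlingBehavior();
                    host.Description.Behaviors.Add(throttle);
                }
                throttle.MaxConcurrentCalls = 128;
                throttle.MaxConcurrentSessions = 128;
                throttle.MaxConcurrentInstances = 128;
            }
        }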

    Read the article
