Search Results

Search found 1378 results on 56 pages for 'martin ks'.

Page 9 of 56

  • Upgrading Visual Studio 2010

    - by Martin Hinshelwood
    I have been running Visual Studio 2010 as my main development studio on my development computer since the RC was released. I need to upgrade that to the RTM, but first I need to remove it. Microsoft have done a lot of work to make this easy, and it works. It's as easy as uninstalling from the control panel. I have had many previous versions of Visual Studio 2010 on this same computer with no need to rebuild to remove all the bits.

    Figure: Run the uninstall from the control panel to remove Visual Studio 2010 RC
    Figure: The uninstall removes everything for you.
    Figure: A green tick means everything went OK.

    If you get a red cross, try installing the RTM anyway; it should warn you about what was not uninstalled properly, and you can remove it manually.

    Once you have the VS2010 RC uninstalled, installing should be a breeze. The install for 2010 is much faster than 2008's, which could take all day, and then some, on slower computers. This one takes around 20 minutes even on my small laptop. I always do a full install as, although I have to do C#, I sometimes get to use a proper programming language: VB.NET. Seriously, there is nothing worse than trying to open a project when the other developer has used something you don't have. It's not their fault. It's yours! Save yourself the angst and install fully; it's only 5.9 GB.

    Figure: I always select all of the options.

    Now go forth and develop! Preferably in VB.NET…

    Technorati Tags: Visual Studio,VS2010,VS 2010

    Read the article

  • Are project managers useful in Scrum?

    - by Martin Wickman
    There are three roles defined in Scrum: Team, Product Owner and Scrum Master. There is no project manager; instead, the project manager's job is spread across the three roles. For instance:

    The Scrum Master: Responsible for the process. Removes impediments.
    The Product Owner: Manages and prioritizes the list of work to be done to maximize ROI. Represents all interested parties (customers, stakeholders).
    The Team: Self-manages its work by estimating and distributing it among themselves. Responsible for meeting its own commitments.

    So in Scrum, there is no longer a single person responsible for project success. There is no command-and-control structure in place. That seems to baffle a lot of people, especially those not used to agile methods and, of course, PMs. I'm really interested in this and in what your experiences are, as I think this is one of the things that can make or break a Scrum implementation. Do you agree with Scrum that a project manager is not needed? Do you think such a role is still required? Why?

    Read the article

  • When there's no TCO, when to worry about blowing the stack?

    - by Cedric Martin
    Every single time there's a discussion about a new programming language targeting the JVM, there are inevitably people saying things like: "The JVM doesn't support tail-call optimization, so I predict lots of exploding stacks." There are thousands of variations on that theme. Now I know that some languages, like Clojure for example, have a special recur construct that you can use. What I don't understand is: how serious is the lack of tail-call optimization? When should I worry about it? My main source of confusion probably comes from the fact that Java is one of the most successful languages ever, and quite a few of the JVM languages seem to be doing fairly well. How is that possible if the lack of TCO is really of any concern?
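    As a minimal Java sketch of my own (not part of the original question), this is what the concern looks like in practice: a tail call still consumes a stack frame on the JVM, which is exactly what Clojure's recur, or an explicit loop, avoids.

        public class StackDemo {
            // Tail-recursive sum: the JVM performs no tail-call optimization,
            // so each call still adds a stack frame and a large n overflows the stack.
            static long sumRecursive(long n, long acc) {
                if (n == 0) return acc;
                return sumRecursive(n - 1, acc + n); // a tail call, but not optimized away
            }

            // The usual workaround: rewrite the tail recursion as a loop,
            // which is essentially what Clojure's recur does for you.
            static long sumIterative(long n) {
                long acc = 0;
                while (n > 0) { acc += n; n--; }
                return acc;
            }

            public static void main(String[] args) {
                System.out.println(sumIterative(1000000));    // fine
                System.out.println(sumRecursive(1000000, 0)); // StackOverflowError with default stack sizes
            }
        }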

    Read the article

  • SSW Scrum Rule: Do you know to use clear task descriptions?

    - by Martin Hinshelwood
    When you create tasks in Scrum you are doing this within a time box, and you tend to add only the information you need to remember what the task is. And the entire Team was at the meeting and was involved in the discussions around the task, so why do you need more? Once you have accepted a task you should then add as much information as possible so that anyone can pick up that task; what if your numbers come up? Would you be in to work the next day?

    Figure: What if your numbers come up in the lottery?

    What if the Team runs a syndicate and all your numbers come up? The point is that anything can happen, and you need to protect the integrity of the project, the company and the Customer. Add as much information to the task as you think is necessary for anyone to work on it. If you need to add rich text and images, you can do this by attaching an email to the task.

    Figure: Bad example - there is not enough information for a non-team member to complete this task
    Figure: Julie provided a lot more information, and another team member should be able to pick this up.

    This has been published as "Do you know to ensure that relevant emails are attached to tasks" in our Rules to Better Scrum using TFS.

    Technorati Tags: Scrum,SSW Rules,TFS 2010

    Read the article

  • Free training at Northwest Cadence

    - by Martin Hinshelwood
    Even though I have only been at Northwest Cadence for a short time I have already done so much. What I really wanted to do was let you guys know about a bunch of FREE training that NWC offers. These sessions are at a fantastic time for the UK, as 9am PST (Seattle time) is around 5pm GMT. It's a fantastic way to finish off your Fridays, and with the lack of love for developers in the UK set to continue, I would love some of you guys to get some from the US instead. There are really two offerings. The first is something called Coffee Talks, which take you through an hour's worth of detail in a specific category.

    Coffee Talks

    These coffee talks have some superb topics and you can get excellent interaction with the presenter, as they are kind of informal.

    Date     | Day     | Time (PST)       | Topic                                                                                | Register
    01/04/11 | Tuesday | 8:30AM - 9:30AM  | Real World Business and Technical Benefits of ALM with TFS 2010                      | 150656
    01/28/11 | Friday  | 9:00AM - 10:00AM | The Full Testing Experience: Professional Quality Assurance with Visual Studio 2010  | 152810
    02/11/11 | Friday  | 9:00AM - 10:00AM | Visual Source Safe to Team Foundation Server                                         | 152844
    02/25/11 | Friday  | 2:00PM - 3:00PM  | The Full Testing Experience: Professional Quality Assurance with Visual Studio 2010  | 152816
    03/11/11 | Friday  | 9:00AM - 10:00AM | Lab Manager: The Ultimate "No More No Repro" Tool                                    | 152809
    03/25/11 | Friday  | 9:00AM - 10:00AM | The Full Testing Experience: Professional Quality Assurance with Visual Studio 2010  | 152838
    04/08/11 | Friday  | 9:00AM - 10:00AM | Visual Source Safe to Team Foundation Server                                         | 152846
    04/22/11 | Friday  | 9:00AM - 10:00AM | The Full Testing Experience: Professional Quality Assurance with Visual Studio 2010  | 152839
    05/06/11 | Friday  | 2:00PM - 3:00PM  | Real World Business and Technical Benefits of ALM with TFS 2010                      | 150657
    05/20/11 | Friday  | 9:00AM - 10:00AM | The Full Testing Experience: Professional Quality Assurance with Visual Studio 2010  | 152842
    06/03/11 | Friday  | 9:00AM - 10:00AM | Visual Source Safe to Team Foundation Server                                         | 152847
    06/17/11 | Friday  | 9:00AM - 10:00AM | The Full Testing Experience: Professional Quality Assurance with Visual Studio 2010  | 152843

    ALM Training Engagement Program

    Microsoft has released a new program to bring free Visual Studio 2010 training sessions to select customers, covering Microsoft Visual Studio products and how Application Lifecycle Management (ALM) solutions can help drive greater business impact. For more details on this program, please see the process chart below. To get started, send an email to us. This training is paid for by Microsoft and you would need to commit to 4 sessions in order to be accepted into the program. So these have more hoops to jump through to get them, but the content is much more formal and centres around adoption.

    Read the article

  • CUDA vs OpenCL - opinions

    - by Martin Beckett
    I'm interested in people's opinions of CUDA vs OpenCL following NVIDIA's CUDA 4 release. I had originally gone with OpenCL, since cross-platform, open standards are a good thing(tm). I assumed NVIDIA would fall into line as they had done with OpenGL. But having talked to some NVIDIA people, they (naturally) claim that they will concentrate on CUDA, and that OpenCL is hampered by having committees and having to please everyone - like OpenGL. And with the new tools and libs in CUDA it's hard to argue with that. I'm in a fairly technical market, so I can require the users to have particular HW.

    Read the article

  • Privacy Protection in Oracle IRM 11g

    - by martin.abrahams
    Another innovation in Oracle IRM 11g is an in-built privacy policy challenge. By design, one of the many things that Oracle IRM does, of course, is collect audit information about how and where sealed documents are being used - user names, machine identifiers and so on. Many customers consider that this has privacy implications that the user should be invited to accept as a condition of service use - for the protection of both the user and the service from avoidable controversy. So, in 11g IRM, when a new user connects to a server for the first time, they can expect to see the following privacy policy dialog. The dialog provides a configurable URL that the customer can use to publish the privacy policy for their IRM service. The policy might clarify what data is being collected and stored, what use that data might be put to, and so on, as required by the service owner's legal advisers. In previous releases, you could construct an equivalent capability, and some customers did, but this innovation makes it much easier to do - you simply write a privacy policy and publish it as a web page, for which the dialog automatically provides a link. This is another example of how Oracle IRM anticipates not just the security requirements of a customer, but also the broader requirements of service provisioning.

    Read the article

  • Adventures in Scrum: Lesson 1 – The failed Sprint

    - by Martin Hinshelwood
    I recently had a conversation with a product owner who wanted to have the Scrum team broken up into smaller units so that less time was wasted on the Scrum Ceremonies! Their complaint was around the need in Scrum to have the entire “Team” (7±2) involved in the sizing of the work during the “Sprint Planning Meeting”. The standard flippant answer of all Scrum professionals, “Well, that's not Scrum”, does not get you any brownie points in these situations. The response could be “Well, we are not doing Scrum then”, which in turn leads to “We are doing Scrum… but we have split the Scrum team into units of 2/3 so that they can concentrate on a specific area of work”. While this may work, it is not Scrum and should not be called so… it is just a form of Agile. Don’t get me wrong at this stage: there is nothing wrong with Agile, just don’t call it Scrum.

    The reason the Product Owner wants to do this is that, in effect, through a number of miscommunications and failings in our implementation of Scrum, there was NO unit of potentially shippable software at the end of the first Sprint. It does not matter to them that most Scrum teams will fail the first Sprint, even those that are high-performing teams. Remember, it is the Product Owner's money!

    We should NOT break up Scrum teams into smaller units for the purpose of having fewer people tied up in the Scrum Ceremonies.

    "The amount of backlog the Team selects is solely up to the Team… Only the Team can assess what it can accomplish over the upcoming Sprint." - Scrum Guide, Scrum.org

    The entire team must accept the work, and in order to understand what they can accept they must be free to size it as a team. This both encourages common understanding and increases visibility of why team members think a task is of a particular size. This has the benefit of increasing the knowledge of the entire team in the problem domain.

    "A new Team often first realizes that it will either sink or swim as a Team, not individually, in this meeting. The Team realizes that it must rely on itself. As it realizes this, it starts to self-organize to take on the characteristics and behaviour of a real Team." - Scrum Guide, Scrum.org

    This paragraph goes to the why of having the whole team at the meeting: the goal of Scrum is to produce a unit of potentially shippable software at the end of every Sprint. In order to achieve this we need high-performing teams, and this is what Scrum as a framework has been optimised to produce. I think that our Product Owner is understandably upset over losing two weeks' work and is losing sight of the end goal of Scrum in the failures of the moment. As the man spending the money, I completely understand his perspective, and I think that we should not have started Scrum on an internal project, but should have selected a customer that is open to the ideas and complications of Scrum.

    So, what should we NOT have done on our first Scrum project?

    - Should not have had 3 interns as the only on-site resource - This led to bad practices, as the experienced guys were not there helping and correcting as they usually would.
    - Should not have had the only experienced guys offsite - With both of the experienced technical guys in completely different time zones it was difficult to get time for questions. Helping the guys on site was just plain impossible.
    - Should not have used a part-time ScrumMaster - Although the ScrumMaster attended all of the Ceremonies, because they are only in 2 full days of the week it is difficult for the team to raise impediments as they go.
    - Should not have used a proxy product owner - This was probably the worst decision that was made, mainly because the proxy product owner did not have the same vision as the product owner. While Scrum does not explicitly reject the idea of a Proxy Product Owner, I do not think it works very well in practice. The “single wringable neck” needs to hold both the Money and the Vision, as well as attending the required meetings.

    I will be bringing all of these things up at the Sprint Retrospective and we will learn from our mistakes and move on. Do, Inspect, then Adapt…

    Technorati Tags: Scrum,Sprint Planning,Sprint Retrospective,Scrum.org,Scrum Guide,Scrum Ceremonies,ScrumMaster,Product Owner

    Need Help? Professional Scrum Developer Training: SSW has six Professional Scrum Developer Trainers who specialise in training your developers in implementing Scrum with Microsoft's Visual Studio ALM tools.

    Read the article

  • FMW Workshops Program, June 2010

    - by [email protected]
    FMW WORKSHOPS PROGRAM - June 2010

    Enterprise 2.0
    Workshop                                             | Date     | Location
    Web Content and Portal Management (UCM + WC Suite)   | 08/06/10 | Madrid
    Digitization (IPM)                                    | 09/06/10 | Madrid

    Service Oriented Architecture (SOA)
    Workshop                                             | Date     | Location
    Business Process Automation with Oracle BPM           | 01/06/10 | Madrid
    Oracle WebLogic                                       | 17/06/10 | Madrid

    Register:

    Read the article

  • Extensible Metadata in Oracle IRM 11g

    - by martin.abrahams
    Another significant change in Oracle IRM 11g is that we now use XML to create the tamperproof header for each sealed document. This article explains what this means, and what benefit it offers. So, every sealed file has a metadata header that contains information about the document - its classification, its format, the user who sealed it, the name and URL of the IRM Server, and much more. The IRM Desktop and other IRM applications use this information to formulate the request for rights, as well as to enhance the user experience by exposing some of the metadata in the user interface. For example, in Windows Explorer you can see some metadata exposed as properties of a sealed file and in the mouse-over tooltip. The following image shows 10g and 11g metadata side by side. As you can see, the 11g metadata is written as XML, as opposed to the simple delimited text format used in 10g.

    So why does this matter? The key benefit of using XML is that it creates the opportunity for sealing applications to use custom metadata. This in turn creates the opportunity for custom classification models to be defined and enforced. Out of the box, the solution uses the context classification model, in which two particular pieces of metadata form the basis of rights evaluation - the context name and the document's item code. But a custom sealing application could use some other model entirely, enabling rights decisions to be evaluated on some other basis. The integration with Oracle Beehive is a great example of this. When a user adds a document to a Beehive workspace, that document can be automatically sealed with metadata that represents the Beehive security model rather than the context model. As a consequence, IRM can enforce the Beehive security model precisely, and all rights configuration can actually be managed through the Beehive UI rather than the IRM UI. In this scenario, IRM simply supports the Beehive application, seamlessly extending Beehive security to all copies of workspace documents without any additional administration.

    Finally, I mentioned that the metadata header is tamperproof. This is obviously to stop a rogue user modifying the metadata with a view to gaining unauthorised access - reclassifying a board document to a less sensitive classification, for example. To prevent this, the header is digitally signed and can only be manipulated by a suitably authorised sealing application.

    Read the article

  • Guidance: How to lay out your files for an Ideal Solution

    - by Martin Hinshelwood
    Creating a solution and having it maintainable over time is an art and not a science. I like being pedantic and having a place for everything, no matter how small. For setting up the Areas to run multiple projects under one solution see my post on When should I use Areas in TFS instead of Team Projects, and for an explanation of branching see Guidance: A Branching strategy for Scrum Teams.

    Update 17th May 2010 - We are currently trialling running a single Sprint branch to improve our history.

    Whenever I set up a new Team Project I implement the basic version control structure. I put “readme.txt” files in the folder structure explaining the different levels, and a solution file called “[Client].[Product].sln” located at “$/[Client]/[Product]/DEV/Main” within version control. Developers should add any projects they need to create to that solution in the format “[Client].[Product].[ProductArea].[Assembly]”, and they will automatically be picked up and built when you set up Automated Builds using Team Foundation Build. All test projects need to be done using MSTest to get proper IDE and Team Foundation Build integration out of the box, and be named after the assembly they are testing, with a naming convention of “[Client].[Product].[ProductArea].[Assembly].Tests”.

    Here is a description of the folder layout; this content should be replicated in readme files under version control in the relevant locations so that even developers new to the project can see how to do it.

    Figure: The Team Project level - at this level there should be a folder for each of the products that you are building, if you are using Areas correctly in TFS 2010. You should try very hard to avoid spaces, as these things always end up in a URL eventually, e.g. "Code Auditor" should be "CodeAuditor".

    Figure: Product level - at this level there should be only 3 folders (DEV, RELEASE and SAFE), all of which should be in capitals. These folders represent the three stages of your application production line. Each of them may contain multiple branches, but this format leaves all of your branches at the same level.

    Figure: The DEV folder is where all of the Development branches reside. The DEV folder will contain the "Main" branch and all feature branches, if they are being used. The DEV designation specifies that all code in every branch under this folder has not been released or made ready for release. Feature branches MUST merge (Forward Integrate) from Main and stabilise prior to merging (Reverse Integration) back down into Main and being decommissioned.

    Figure: In the feature branching scenario only merges are allowed onto Main; no development can be done there. Once we have a mature product it is important that new features being developed in parallel are kept separate. This would most likely be used if we had more than one Scrum team working on a single product.

    Figure: When we are ready to do a release of our software we will create a release branch that is then stabilised prior to deployment. This protects the serviceability of our released code, allowing developers to fix bugs and re-release an existing version.

    Figure: All bugs found on a release are fixed on the release. All bugs found in a release are fixed on the release and a new deployment is created. After the deployment is created the bug fixes are then merged (Reverse Integration) into the Main branch. We do this so that we separate our development from our production-ready code.

    Figure: SAFE or RTM is a read-only record of what you actually released. Labels are not immutable, so they are useless in this circumstance. When we have completed stabilisation of the release branch and we are ready to deploy to production, we create a read-only copy of the code for reference. In some cases this could be a regulatory concern, but in most cases it protects the company building the product from legal entanglements based on what you did or did not release.

    Figure: This allows us to reference any particular version of our application that was ever shipped.

    In addition, I am an advocate of having a single solution with all the Project folders directly under the “Trunk”/”Main” folder and using the full name for the project folders.

    Figure: The ideal solution

    If you must have multiple solutions, because you need to use more than one version of Visual Studio, name the solutions “[Client].[Product][VSVersion].sln” and have them reside in the same folder as the other solution. This makes it easier for Automated Builds and improves the discoverability of your code and its dependencies. Send me your feedback!

    Technorati Tags: VS ALM,VSTS Developing,VS 2010,VS 2008,TFS 2010,TFS 2008,TFBS

    Read the article

  • Access Control and Accessibility in Oracle IRM 11g

    - by martin.abrahams
    A recurring theme you'll find throughout this blog is that IRM needs to balance security with usability and manageability. One of the innovations in Oracle IRM 11g typifies this, as we have introduced a new right that may be included in any role - Accessibility. When creating or modifying a role, you simply select Accessibility along with Open, Print, Edit or whatever rights you want to include in the role. You might, for example, have parallel roles of Reader and Reader with Accessibility and Contributor and Contributor with Accessibility. The effect of the Accessibility right is to relax some of the protection of content in use such that selected users can use accessibility tools. For example, a user with the Accessibility right would be able to use the screen magnification tool, which IRM would ordinarily prevent because it involves screen capture. This new right makes it easy for you to apply security to documents yet, subject to suitable approval processes, cater for the fact that a subset of users might be disproportionately inconvenienced by some of the normal usage constraints. Rather than make those users put up with the restrictions, or perhaps exempt them from using sealed documents altogether, this new right allows you to accommodate them in a controlled manner, and to balance security with corporate accessibility goals.

    Read the article

  • Hide email address with JavaScript

    - by Martin Aleksander
    I read somewhere that hiding an email address behind JavaScript code could reduce spam bots harvesting the email address.

        <script language="javascript" type="text/javascript">
        var a = "Red";
        var t = "no";
        var doc = document;
        var b = "ITpro";
        var ad = a;
        ad += "@";
        ad += b;
        ad += ".";
        ad += t;
        var mt = "ma";
        mt += "il";
        mt += "to";
        var text = "";
        if (text == null || text.length == 0) text = ad;
        doc.write("<"+"a hr"+"ef=\""+mt+":"+ad+"\">"+text+"</"+"a>");
        </script>

    This will not display the actual email address in the source code of the page, but it will display and work like a normal link for human users. Is there any point in doing this? Will it reduce spam bots, or is it just nonsense that might slow down the performance of the page because of the JavaScript?

    Read the article

  • Unable to install Apache 2.2.22 in Ubuntu 12.04

    - by Martin Betz
    I am not able to install Apache 2.2 on Ubuntu 12.04. Here is a snippet from my console log (sorry, it is in German):

        apache2 : Hängt ab von: apache2-mpm-worker (= 2.2.22-1ubuntu1) soll aber nicht installiert werden oder
                  apache2-mpm-prefork (= 2.2.22-1ubuntu1) aber 2.2.22-1ubuntu1.1 soll installiert werden ...

    Roughly translated: apache2 depends on apache2-mpm-worker (= 2.2.22-1ubuntu1), but that is not going to be installed, or on apache2-mpm-prefork (= 2.2.22-1ubuntu1), but 2.2.22-1ubuntu1.1 is to be installed. I have no idea how to tackle this issue.

    Read the article

  • Is shipping a Clojure desktop app realistic?

    - by Cedric Martin
    I'm currently shipping a desktop Java application. It is a plain old Java 5 / Swing app, and so far everything has worked nicely. Java 5 was targeted because some users were on OS X versions / computers that will never have Java 6 (we may lift this limitation soon, switch to a newer Java, and simply abandon the users stuck with Java 5). I'm quickly getting up to speed with Clojure, but I haven't really done lots of Clojure-to-Java and Java-to-Clojure yet, and I was wondering if it is realistic to ship a Clojure desktop application instead of a Java application? The application I'm shipping is currently about 12 MB with all the .jars, so adding Clojure doesn't seem to be too much of an issue. My plan would be to have Clojure call Java APIs: my application is already divided into several independent jars. If I understand correctly, calling Clojure from Java is harder than calling Java code from Clojure, which is why I'd basically rewrite all the UI (part of the UI, mixing Swing components and self-made BufferedImages, needs to be rewritten anyway due to the rise of retina displays) and do all the 'wiring' from Clojure. So that's the problem I'm facing: is it realistic to ship a Clojure desktop app? (It certainly doesn't seem to be very widespread, but then shipping plain Java desktop apps ain't that common either, and I'm doing it anyway.) Technically, what would need to be done? (compared to shipping a Java app)
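    On the Java-to-Clojure direction the question mentions: newer Clojure releases ship a small public Java API (clojure.java.api.Clojure), and a minimal sketch of calling into Clojure from Java looks like the following. The namespace myapp.ui and the function start-ui! are hypothetical placeholders, and the sketch assumes the Clojure jar and your .clj sources are on the classpath.

        import clojure.java.api.Clojure;
        import clojure.lang.IFn;

        public class ClojureFromJava {
            public static void main(String[] args) {
                // Load the (hypothetical) Clojure namespace that contains the UI wiring.
                IFn require = Clojure.var("clojure.core", "require");
                require.invoke(Clojure.read("myapp.ui"));

                // Look up a Clojure function by namespace and name, then call it like any Java object.
                IFn startUi = Clojure.var("myapp.ui", "start-ui!");
                startUi.invoke("My Application");
            }
        }

    Going the other way (Clojure calling Java/Swing) is plain interop and needs no extra plumbing, which is why wiring the app from the Clojure side, as the question proposes, is the usual approach.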

    Read the article

  • Allocating Entities within an Entity System

    - by miguel.martin
    I'm quite unsure how I should allocate/represent my entities within my entity system. I have various options, but most of them seem to have cons associated with them. In all cases entities are represented by an ID (integer), and possibly have a wrapper class associated with them. This wrapper class has methods to add/remove components to/from the entity. Before I mention the options, here is the basic structure of my entity system:

    Entity - An object that describes an object within the game
    Component - Used to store data for the entity
    System - Contains entities with specific components; used to update entities with specific components
    World - Contains entities and systems for the entity system; can create/destroy entities and have systems added/removed from/to it

    Here are the options I have thought of:

    Option 1: Do not store the Entity wrapper classes, and just store the next ID/deleted IDs. In other words, entities will be returned by value, like so:

        Entity entity = world.createEntity();

    This is much like entityx, except I see some flaws in this design.
    Cons:
    - There can be duplicate entity wrapper classes (as the copy-ctor has to be implemented, and systems need to contain entities)
    - If an Entity is destroyed, the duplicate entity wrapper classes will not have an updated value

    Option 2: Store the entity wrapper classes within an object pool, i.e. entities will be returned by pointer/reference, like so:

        Entity& e = world.createEntity();

    Cons:
    - If there are duplicate entities, then when an entity is destroyed, the same entity object may be re-used to allocate another entity.

    Option 3: Use raw IDs, and forget about the wrapper entity classes. The downfall to this, I think, is the syntax that will be required for it. I'm thinking about doing this as it seems the simplest and easiest to implement, but I'm quite unsure about it because of the syntax. i.e. To add a component with this design, it would look like:

        Entity e = world.createEntity();
        world.addComponent<Position>(e, 0, 3);

    As opposed to this:

        Entity e = world.createEntity();
        e.addComponent<Position>(0, 3);

    Cons:
    - Syntax
    - Duplicate IDs
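    For what it's worth, here is a minimal sketch of the Option 3 approach (raw IDs with the world owning all component storage). It is written in Java rather than C++ purely for illustration, and every name in it is invented:

        import java.util.HashMap;
        import java.util.Map;

        // Option 3 sketch: entities are plain ints, components live in per-type maps keyed by entity ID.
        class World {
            private int nextId = 0;
            private final Map<Class<?>, Map<Integer, Object>> components = new HashMap<>();

            int createEntity() {
                return nextId++; // an entity is nothing more than this number
            }

            <T> void addComponent(int entity, T component) {
                components.computeIfAbsent(component.getClass(), k -> new HashMap<>())
                          .put(entity, component);
            }

            <T> T getComponent(int entity, Class<T> type) {
                Map<Integer, Object> byEntity = components.get(type);
                return byEntity == null ? null : type.cast(byEntity.get(entity));
            }

            void destroyEntity(int entity) {
                // "Destroying" just means removing the ID from every component store.
                for (Map<Integer, Object> byEntity : components.values()) {
                    byEntity.remove(entity);
                }
            }
        }

        class Position {
            final int x, y;
            Position(int x, int y) { this.x = x; this.y = y; }
        }

        public class EntityDemo {
            public static void main(String[] args) {
                World world = new World();
                int e = world.createEntity();
                world.addComponent(e, new Position(0, 3)); // the clunkier world-centric syntax
                System.out.println(world.getComponent(e, Position.class).x);
            }
        }

    The syntax cost is visible in the last lines: every operation goes through the world rather than through an entity object, which is exactly the trade-off described above.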

    Read the article

  • PHP may be executing as a "privileged" group and user, which could be a serious security vulnerability

    - by Martin
    I ran some security tests on an Ubuntu 12.04 server, and I got these warnings:

        PHP may be executing as a "privileged" group, which could be a serious security vulnerability.
        PHP may be executing as a "privileged" user, which could be a serious security vulnerability.

    In /etc/apache2/envvars, I have this:

        export APACHE_RUN_USER=www-data
        export APACHE_RUN_GROUP=www-data

    And all files in /var/www have this user/group: www-data:www-data. Am I setting this up correctly? What should I do to fix this problem?

    Read the article

  • Parallelism implies concurrency but not the other way round, right?

    - by Cedric Martin
    I often read that parallelism and concurrency are different things. Very often the answerers/commenters go as far as writing that they're two entirely different things. Yet in my view they're related, but I'd like some clarification on that. For example, if I'm on a multi-core CPU and manage to divide the computation into x smaller computations (say using fork/join), each running in its own thread, I'll have a program that is both doing parallel computation (because supposedly at any point in time several threads are going to run on several cores) and being concurrent, right? While if I'm simply using, say, Java and dealing with UI events and repaints on the Event Dispatch Thread plus running the only thread I created myself, I'll have a program that is concurrent (EDT + GC thread + my main thread, etc.) but not parallel. I'd like to know if I'm getting this right and if parallelism (on a "single but multi-core" system) always implies concurrency or not. Also, are multi-threaded programs running on multi-core CPUs, but where the different threads are doing totally different computations, considered to be using "parallelism"?
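    As a minimal Java sketch of my own (not part of the original question), the fork/join case from the first example looks like this; the summation really does run on several cores at once, so it is parallel and, by the same token, concurrent:

        import java.util.concurrent.ForkJoinPool;
        import java.util.concurrent.RecursiveTask;

        // Parallelism: a fork/join task that splits a summation across worker threads/cores.
        class SumTask extends RecursiveTask<Long> {
            private final long[] data;
            private final int from, to;

            SumTask(long[] data, int from, int to) {
                this.data = data; this.from = from; this.to = to;
            }

            @Override
            protected Long compute() {
                if (to - from <= 10000) {            // small enough: sum sequentially
                    long s = 0;
                    for (int i = from; i < to; i++) s += data[i];
                    return s;
                }
                int mid = (from + to) / 2;
                SumTask left = new SumTask(data, from, mid);
                SumTask right = new SumTask(data, mid, to);
                left.fork();                          // runs on another worker thread
                return right.compute() + left.join();
            }
        }

        public class ParallelismDemo {
            public static void main(String[] args) {
                long[] data = new long[1000000];
                for (int i = 0; i < data.length; i++) data[i] = i;
                long sum = new ForkJoinPool().invoke(new SumTask(data, 0, data.length));
                System.out.println(sum);
                // By contrast, a Swing app that only ever interleaves work between the
                // Event Dispatch Thread and one background thread is concurrent, but on a
                // single core it is never parallel: the threads simply take turns.
            }
        }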

    Read the article

  • Can the Clojure set and map syntax be added to other Lisp dialects?

    - by Cedric Martin
    In addition to creating lists using parentheses, Clojure allows you to create vectors using [ ], maps using { } and sets using #{ }. Lisp is always said to be a very extensible language in which you can easily create DSLs, etc. But is Lisp so extensible that you can take any Lisp dialect and relatively easily add support for Clojure's vectors, maps and sets (which are all functions in Clojure)? I'm not necessarily asking about cons or similar actually working on these functions: what I'd like to know is whether the other dialects could be modified so that the source code would look like Clojure's source code (that is, using matching [ ], { } and #{ } in addition to ( )). Note that if it cannot be done this is not a criticism of Lisp: what I'd like to know is, technically, what would need to be done, or what cannot be done, if one were to add such a thing.

    Read the article

  • How does one set up a MIDI keyboard

    - by Martin Owens -doctormo-
    I would like to set up my keyboard via my MidiSport 2x2; I've plugged everything in and even installed the midisport-firmware package, which was not automatically installed for some reason. The goal is to have the computer produce a piano sound when keys on the keyboard are hit. If you can make this work without JACK, that would be good too. Step-by-step instructions please; the less complexity the better.

    Read the article

  • Do you want to be an ALM Consultant?

    - by Martin Hinshelwood
    Northwest Cadence is looking for our next great consultant! At Northwest Cadence, we have created a work environment that emphasizes excellence, integrity, and out-of-the-box thinking. Our customers have high expectations (rightfully so) and we wouldn't have it any other way! Northwest Cadence has some of the most exciting customers I have ever worked with, and even though I have only been here just over a month I have already:

    - Provided training/consulting for 3 government departments
    - Created and taught courseware for delivering Scrum to teams within a high-profile multinational company
    - Started presenting Microsoft's ALM Engagement Program

    So if you are interested in helping companies build better software more efficiently, then enquire at [email protected]

    Application Lifecycle Management (ALM) Consultant

    An ALM Consultant with a minimum of 8 years of relevant experience with Application Lifecycle Management, Visual Studio (including Visual Studio Team System) and software design is needed. The candidate must provide thought leadership on best practices for enterprise architecture, understand the Microsoft technology solution stack, and have a thorough understanding of enterprise application integration. The ALM Practice Lead will play a central role in designing and implementing the overall ALM Practice strategy, including creating, updating, and delivering ALM courseware and consultancy engagements. This person will also provide project support, deliverables, and quality solutions on Visual Studio Team System that exceed client expectations. Engagements will vary and will involve providing expert training, consulting, mentoring, formulating technical strategies and policies, and acting as a "trusted advisor" to customers and internal teams. A sound sense of business and technical strategy is required. Strong interpersonal skills as well as solid strategic thinking are key. The ideal candidate will be capable of envisioning the solution based on the early client requirements, communicating the vision to both technical and business stakeholders, leading teams through implementation, as well as training, mentoring, and hands-on software development. The ideal candidate will demonstrate successful use of both agile and formal software development methods, enterprise application patterns, and effective leadership on prior projects.

    Job Requirements

    - Minimum Education: Bachelor's Degree (computer science, engineering, or math preferred).
    - Locale / Travel: The Practice Lead position requires an estimated 50% travel, most of which will be in the Continental US (a valid national Passport must be maintained). This is a full-time position and will be based in the Kirkland office.
    - Preferred Education: Master's Degree in Information Technology or Software Engineering; Premium Microsoft Certifications on .NET (MCSD) or MCPD or relevant experience; Microsoft Certified Trainer (MCT) or relevant experience.
    - Minimum Experience and Skills: 7+ years' experience with business information systems integration or custom business application design and development in a professional technology consulting, corporate MIS or software development environment.

    Essential Duties & Responsibilities

    - Provide training, consulting, and mentoring to organizations on topics that include Visual Studio Team System and ALM.
    - Create content, including labs and demonstrations, to be delivered as training classes by Northwest Cadence employees.
    - Lead development teams through the complete ALM and/or Visual Studio Team System solution.
    - Be able to communicate in detail how a solution will integrate into the larger technical problem space for large, complex enterprises.
    - Define technical solution requirements.
    - Provide guidance to the customer and project team with respect to technical feasibility, complexity, and level of effort required to deliver a custom solution.
    - Ensure that the solution is designed, developed and deployed in accordance with the agreed-upon development work plan.
    - Create and deliver weekly status reports of training and/or consulting progress.

    Engagement Responsibilities

    - Have a strong desire to provide thought leadership related to technology and to help grow the business.
    - Work effectively and professionally with employees at all levels of a customer's organization.
    - Have strong verbal and written communication skills.
    - Have effective presentation, organizational and planning skills.
    - Have effective interpersonal skills and the ability to work in a team environment.

    Enquire at [email protected]

    Read the article

  • Is there a language between C and C++?

    - by Robert Martin
    I really like the simple and transparent nature of C: when I write C code I feel unencumbered by "leaky abstractions" and can almost always make a shrewd guess as to the assembly I'm producing. I also like the simple, familiar syntax of C. However, C doesn't have the simple, helpful doodads that C++ offers, like classes, simplified non-C-string string handling, etc. I know that it's all possible to implement in C using jump tables and the like, but that's a bit wordy at times, and not very type-safe for various reasons. I'm not a fan of the over-emphasis on objects in C++, though, and I'm gun-shy of the 'new' operator and the like. C++ seems to have just a few too many hiccups to, for instance, be used as a system programming language. Does there exist a language that sits between C and C++ on the scale of widgets and doodads? Disclaimer: I mean this as a purely factual question. I do not intend to anger you because I don't share your view that C{,++} is good enough to do whatever I'm planning.

    Read the article

  • Has anyone else read "Programming video games for the Evil Genius"

    - by Martin
    I bought this book called "Programming Video Games for the Evil Genius" by Ian Cinnamon. If there is anyone who has read or is familiar with this book, I am wondering if they think it is worth reading. I am interested in making video games. I have already taken intro courses in C++, Java and Python and got through okay. I've been going through this book for about a month now (SLOWLY). All I have to do is type the code exactly as in the book, BUT a lot of the code is not clearly explained. I do some research online, but I usually still have some trouble answering my questions. Then I found Stack Overflow. It's been a ton of help. Right now I am trying to make a racing game right out of this book, and I got to a point where the author left a bunch of errors in his code. One of the members of this website fixed it up for me, but added some stuff that I'm having trouble understanding. I spend more time trying to figure out the author's errors and fix them, or get someone to help me fix them, than I actually do learning code. I REALLY want to learn how to do this and I am ready and willing to put in the time, but I'm not sure if my time would be better spent learning from a different source. Are there any veterans out there who are familiar with this book and think it's worth it / not worth it? Should I try to move on to another book? Any advice for a fresh start for someone who wants to learn some video game programming?

    Read the article

  • IRM and Consumerization

    - by martin.abrahams
    As the season of rampant consumerism draws to its official close on Twelfth Night, it seems a fitting time to discuss consumerization - whereby technologies from the consumer market, such as Android devices and the iPad, are adopted by business organizations. I expect many of you will have received a shiny new mobile gadget for Christmas - and will be expecting to use it for work as well as leisure in 2011. In my case, I'm just getting to grips with my first Android phone. This trend developed so much during 2010 that a number of my customers have officially changed their stance on consumer devices - accepting consumerization as something to embrace rather than resist. Clearly, consumerization has significant implications for information control, as corporate data is distributed to consumer devices whether the organization is aware of it or not. I daresay that some DLP solutions can limit distribution to some extent, but this creates a conflict between accepting consumerization and frustrating it.

    So what does Oracle IRM have to offer the consumerized enterprise? First and foremost, consumerization does not automatically represent great additional risk - if an enterprise seals its sensitive information. Sealed files are encrypted, and that fundamental protection is not affected by copying files to consumer devices. A device might be lost or stolen, and the user might not think to report the loss of a personally owned device, but the data and the enterprise that owns it are protected. Indeed, the consumerization trend is another strong reason for enterprises to deploy IRM - to protect against this expansion of channels by which data might be accidentally exposed. It also enables encryption requirements to be met even though the enterprise does not own the device and cannot enforce device encryption.

    Moving on to the usage of sealed content on such devices, some of our customers are using virtual desktop solutions such that, in truth, the sealed content is being opened and used on a PC in the normal way, and the user is simply using their device for display purposes. This has several advantages:

    - The sensitive documents are not actually on the devices, so device loss and theft are even less of a worry.
    - The enterprise has another layer of control over how and where content is used, as access to the virtual solution involves another layer of authentication and authorization - defence in depth.
    - It is a generic solution, which means the enterprise does not need to actively support the ever-expanding variety of consumer devices - the enterprise just manages some virtual access to traditional systems using something like Citrix or Remote Desktop services.
    - It is a tried and tested way of accessing sealed documents. People have been using Oracle IRM in conjunction with Citrix and Remote Desktop for several years.

    For some scenarios, we also have the "IRM wrapper" option that provides a simple app for sealing and unsealing content on a range of operating systems. We are busy working on other ways to support the explosion of consumer devices, but this blog is not a proper forum for talking about them at this time. If you are an Oracle IRM customer, we will be pleased to discuss our plans and your requirements with you directly on request. You can be sure that the blog will cover the new capabilities as soon as possible.

    Read the article

  • How Visual Studio 2010 and Team Foundation Server enable Compliance

    - by Martin Hinshelwood
    One of the things that makes Team Foundation Server (TFS) the most powerful Application Lifecycle Management (ALM) platform is the traceability it provides to those that use it. This traceability is crucial to enabling many companies to adhere to the Compliance regulations to which they are bound (e.g. CFR 21 Part 11 or Sarbanes-Oxley). From something as simple as relating Tasks to Check-ins, or being able to see the top 10 files in your codebase that are causing the most Bugs, to identifying which Bugs and Requirements are in which Release - all that information, and more, is available in TFS. Although all of this traceability is available within TFS, you do need to understand that it is not for free. Well… I say that, but if you are using TFS properly you will have this information with no additional work except for firing up the reporting. Using Visual Studio ALM and Team Foundation Server you can relate every line of code changed all the way up to Requirements and back down through Test Cases to the Test Results.

    Figure: The only thing missing is Build

    In order to build the relationship model below we need to examine how each of the relationships gets there. Each member of your team, from programmer to tester and Business Analyst to Business, has their role to play to knit this together.

    Figure: The relationships required to make this work can get a little confusing

    If Build is added to this, to relate Work Items to Builds, and with knowledge of which builds are in which environments, you can easily identify what is contained within a Release.

    Figure: How are things progressing

    Along with the ability to produce the progress and trend reports, the traceability that is built into TFS can be used to fulfil most audit requirements out of the box, and augmented to fulfil the rest. In order to understand the relationships, let's look at each of the important Artifacts and how they are associated with each other…

    Requirements - The root of all knowledge

    Requirements are the thing that the business cares about delivering. These could be derived as User Stories or Business Requirements Documents (BRDs), but they should be what the Business asks for. Requirements can be related to many of the Artifacts in TFS, so let's look at the model:

    Figure: If the centre of the world was a requirement

    We can track which releases Requirements were scheduled in, but this can change over time as more details come to light.

    Figure: Who edited the Requirement and when

    There is also the ability to query Work Items based on the history of changes that were made to them. This is particularly important with Requirements. It might not be enough to say which Requirements were completed in a given release, but also to know which Requirements were ever assigned to a particular release.

    Figure: Some magic required, but result still achieved

    As an augmentation to this, it is also possible to run a query that shows results from the past, just as if we had a time machine. You can take any Query in the system and add an "asof" clause at the end to query historical data in the operational store for TFS.

        select <fields> from WorkItems [where <condition>] [order by <fields>] [asof <date>]

    Figure: Work Item Query Language (WIQL) format

    In order to achieve this you do need to save the query as a *.wiql file to your local computer and edit it in Notepad, but once imported into TFS you can run it any time you want.
    Figure: Saving Queries locally can be useful

    All of these audit features are available throughout the Work Item Tracking (WIT) system within TFS.

    Tasks - Where the real work gets done

    Tasks are the workhorse of the development team, but they are only as useful as Excel if you do not relate them properly to other Artifacts.

    Figure: The Task Work Item Type has its own relationships

    Requirements should be broken down into Tasks that the development team work from to build what is required by the business. This may be done by a small dedicated group or by everyone that will be working on the software team, but however it happens, all of the Tasks created should be a Child of a Requirement Work Item Type.

    Figure: Tasks are related to the Requirement

    Tasks should be used to track the day-to-day activities of the team working to complete the software, and as such they should be kept simple and short, lest developers think they are more trouble than they are worth.

    Figure: The Task Work Item Type has a narrower purpose

    Although the Task Work Item Type describes the work that will be done, the actual development work involves making changes to files that are under Source Control. These changes are bundled together in a single atomic unit called a Changeset, which is committed to TFS in a single operation. During this operation developers can associate Work Items with the Changeset.

    Figure: Tasks are associated with Changesets

    Changesets - Who wrote this crap

    Changesets themselves are just an inventory of the changes that were made to a number of files to complete a Task.

    Figure: Changesets are linked by Tasks and Builds
    Figure: Changesets tell us what happened to the files in Version Control

    Although comments can be changed after the fact, the inventory and Work Item associations are permanent, which allows us to audit all the way down to the individual change level.

    Figure: On Check-in you can resolve a Task, which automatically associates it

    Because of this we can view the history on any file within the system and see how many changes have been made and what Changesets they belong to.

    Figure: Changes are tracked at the File level

    What would be even more powerful would be if we could view these changes superimposed over the top of the lines of code. Some people call this a blame tool, because it is commonly used to find out which of the developers introduced a bug, but it can also be used as another method of auditing changes to the system.

    Figure: Annotate shows the lines

    The Annotate functionality allows us to visualise the relationship between the individual lines of code and the Changesets. In addition to this you can create a Label and apply it to a version of your version control. The problem with Labels is that they can be changed after they have been created with no traceability. This makes them practically useless for any sort of compliance audit. So what do you use?

    Branches - And why we need them

    Branches are a really powerful tool for development and release management, but they are most important for audits.

    Figure: One way to Audit releases

    The R1.0 branch can be created from the Label that the Build creates on the R1 line when a Release build was created. It can be created as soon as the Build has been signed off for release. However, it is still possible that someone changed the Label between this time and its creation. Another, better method is to explicitly link the Build output to the Build.
    Builds - Let's tie some more of this together

    Builds are the glue that helps us enable the next level of traceability by tying everything together.

    Figure: The dashed pieces are not out of the box, but can be enabled

    When the Build is called and starts, it looks at what it has been asked to build and determines what code it is going to get and build.

    Figure: The folder identifies what changes are included in the build

    The Build sets a Label on the Source with the same name as the Build, but the Build itself also includes the latest Changeset ID that it will be building. At the end of the Build, the Build Agent identifies the new Changesets it is building by looking at the Check-ins that have occurred since the last Build.

    Figure: What changes have been made since the last successful Build

    It will then use that information to identify the Work Items that are associated with all of those Changesets. The Changesets are associated with the Build, and the "Integrated In" field of those Work Items is changed.

    Figure: Find all of the Work Items to associate with

    The "Integrated In" field of all of the Work Items identified by the Build Agent as being integrated into the completed Build is updated to reflect the Build number that successfully integrated that change.

    Figure: Now we know which Work Items were completed in a build

    We can now link a single line of code changed all the way back through the Task that initiated the action to the Requirement that started the whole thing, and back down to the Build that contains the finished Requirement. But how do we know whether that Requirement has been fully tested or even meets the original Requirements?

    Test Cases - How we know we are done

    The only way we can know whether a Requirement has been completed to the required specification is to test that Requirement. In TFS there is a Work Item type called a Test Case. Test Cases enable two scenarios. The first scenario is the ability to track and validate Acceptance Criteria in the form of a Test Case. If you agree with the Business a set of goals that must be met for a Requirement to be accepted by them, it both makes it difficult for them to reject a Requirement when it passes all of the tests and provides a level of traceability and validation for audit that a feature has been built and tested to order.

    Figure: You can have many Acceptance Criteria for a single Requirement

    It is crucial for this to work that someone from the Business signs off on the Test Case moving from the "Design" to "Ready" states. The second is the ability to associate an MSTest test with the Test Case, thereby tracking the automated test. This is useful in the circumstance when you want to track a test and the test results of a Unit Test designed to prove the existence of a Bug and then guard against its re-occurrence.

    Figure: Associating a Test Case with an automated Test

    Although it is possible, it may not make sense to track the execution of every Unit Test in your system; there are, however, many Integration and Regression tests that may be automated that it would make sense to track in this way.

    Bug - Let's not have regressions

    In order to know whether a Bug in the application has been fixed, and to make sure that it does not reoccur, it needs to be tracked.

    Figure: Bugs are the centre of their own world

    If the fix to a Bug is big enough to require that it is broken down into Tasks, then it is probably a Requirement. You can associate a check-in with a Bug and have it tracked against a Build.
    You would also have one or more Test Cases to prove the fix for the Bug.

    Figure: Bugs have many associations

    This allows you to track Bugs/Defects in your system effectively and report on them.

    Change Request - I am not a feature

    In the CMMI Process template, Change Requests can also be easily tracked through the system. In some cases it can be very important to track Change Requests separately, as an Auditor may want to know what was changed and who authorised it. Again, and similar to Bugs, if the Change Request is big enough that it requires being broken down into Tasks, it is in reality a new feature and should be tracked as a Requirement.

    Figure: Make sure your Change Requests only affect Requirements and do not rewrite them

    Conclusion

    Visual Studio 2010 and Team Foundation Server together provide an exceptional Application Lifecycle Management platform that can help your team comply with even the harshest of Compliance requirements while still enabling them to be Agile. Most Audits are heavy on required documentation, but most of that information is captured for you as long as you do it right. You don't even need every team member to understand it all, as each of the Artifacts is relevant to a different type of team member:

    - Business Analysts manage Requirements and Change Requests
    - Programmers manage Tasks and check in against Change Requests and Bugs
    - Testers manage Bugs and Test Cases
    - Build Masters manage Builds

    Although there is some crossover, there are still roles or "hats" that are worn. Do you think this is all achievable? Have I missed anything that you think should be there?

    Read the article
