Search Results

Search found 1348 results on 54 pages for 'martin hilton'.


  • Can the Clojure set and maps syntax be added to other Lisp dialects?

    - by Cedric Martin
    In addition to creating lists using parentheses, Clojure lets you create vectors using [ ], maps using { } and sets using #{ }. Lisp is always said to be a very extensible language in which you can easily create DSLs, etc. But is Lisp so extensible that you could take any Lisp dialect and relatively easily add support for Clojure's vectors, maps and sets (which are all functions in Clojure)? I'm not necessarily asking about cons or similar actually working on these structures: what I'd like to know is whether other dialects could be modified so that the source code would look like Clojure's source code (that is: using matching [ ], { } and #{ } in addition to ( )). Note that if it cannot be done, this is not a criticism of Lisp: what I'd like to know is, technically, what would have to be done, or what cannot be done, if one were to add such a thing.
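    As a rough illustration of what adding such syntax involves in a Lisp with reader macros, here is a minimal Common Lisp sketch (my own example, not from the question) that makes [ ... ] read as a vector; map and set literals would follow the same pattern with their own delimiters:

        ;; Make ] behave like ) so READ-DELIMITED-LIST can stop on it.
        (set-macro-character #\] (get-macro-character #\)))

        ;; Make [ read everything up to the matching ] and build a vector.
        (set-macro-character #\[
          (lambda (stream char)
            (declare (ignore char))
            (coerce (read-delimited-list #\] stream t) 'vector)))

        ;; After this, [1 2 3] reads as the vector #(1 2 3).

    Dialects without a programmable reader would need their reader or parser changed instead, which is roughly where "relatively easily" stops applying.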

    Read the article

  • How does one set up a MIDI keyboard?

    - by Martin Owens -doctormo-
    I would like to set up my keyboard via my MIDISport 2x2. I've plugged everything in and even installed the midisport-firmware package, which was not automatically installed for some reason. The goal is to have the computer produce a piano sound when keys on the keyboard are hit. If you can make this work without JACK, that would be good too. Step-by-step instructions, please - the less complexity the better.

    Read the article

  • Do you want to be an ALM Consultant?

    - by Martin Hinshelwood
    Northwest Cadence is looking for our next great consultant! At Northwest Cadence, we have created a work environment that emphasizes excellence, integrity, and out-of-the-box thinking. Our customers have high expectations (rightfully so) and we wouldn't have it any other way! Northwest Cadence has some of the most exciting customers I have ever worked with, and even though I have only been here just over a month I have already:
    · Provided training/consulting for 3 government departments
    · Created and taught courseware for delivering Scrum to teams within a high-profile multinational company
    · Started presenting Microsoft's ALM Engagement Program
    So if you are interested in helping companies build better software more efficiently, enquire at [email protected]
    Application Lifecycle Management (ALM) Consultant
    An ALM Consultant with a minimum of 8 years of relevant experience with Application Lifecycle Management, Visual Studio (including Visual Studio Team System) and software design is needed. Must provide thought leadership on best practices for enterprise architecture, understand the Microsoft technology solution stack, and have a thorough understanding of enterprise application integration. The ALM Practice Lead will play a central role in designing and implementing the overall ALM Practice strategy, including creating, updating, and delivering ALM courseware and consultancy engagements. This person will also provide project support, deliverables, and quality solutions on Visual Studio Team System that exceed client expectations. Engagements will vary and will involve providing expert training, consulting, mentoring, formulating technical strategies and policies, and acting as a "trusted advisor" to customers and internal teams. A sound sense of business and technical strategy is required. Strong interpersonal skills as well as solid strategic thinking are key. The ideal candidate will be capable of envisioning the solution based on the early client requirements, communicating the vision to both technical and business stakeholders, and leading teams through implementation, as well as training, mentoring, and hands-on software development. The ideal candidate will demonstrate successful use of both agile and formal software development methods, enterprise application patterns, and effective leadership on prior projects.
    Job Requirements
    Minimum Education: Bachelor's Degree (computer science, engineering, or math preferred).
    Locale / Travel: The Practice Lead position requires an estimated 50% travel, most of which will be in the Continental US (a valid national Passport must be maintained). This is a full-time position and will be based in the Kirkland office.
    Preferred Education: Master's Degree in Information Technology or Software Engineering; premium Microsoft certifications on .NET (MCSD) or MCPD, or relevant experience; Microsoft Certified Trainer (MCT) or relevant experience.
    Minimum Experience and Skills: 7+ years of experience with business information systems integration or custom business application design and development in a professional technology consulting, corporate MIS or software development environment.
    Essential Duties & Responsibilities:
    · Provide training, consulting, and mentoring to organizations on topics that include Visual Studio Team System and ALM.
    · Create content, including labs and demonstrations, to be delivered as training classes by Northwest Cadence employees.
    · Lead development teams through the complete ALM and/or Visual Studio Team System solution.
    · Be able to communicate in detail how a solution will integrate into the larger technical problem space for large, complex enterprises.
    · Define technical solution requirements.
    · Provide guidance to the customer and project team with respect to technical feasibility, complexity, and level of effort required to deliver a custom solution.
    · Ensure that the solution is designed, developed and deployed in accordance with the agreed-upon development work plan.
    · Create and deliver weekly status reports of training and/or consulting progress.
    Engagement Responsibilities:
    · Have a strong desire to provide thought leadership related to technology and to help grow the business.
    · Work effectively and professionally with employees at all levels of a customer's organization.
    · Have strong verbal and written communication skills.
    · Have effective presentation, organizational and planning skills.
    · Have effective interpersonal skills and the ability to work in a team environment.
    Enquire at [email protected]

    Read the article

  • IRM and Consumerization

    - by martin.abrahams
    As the season of rampant consumerism draws to its official close on 12th Night, it seems a fitting time to discuss consumerization - whereby technologies from the consumer market, such as Android devices and the iPad, are adopted by business organizations. I expect many of you will have received a shiny new mobile gadget for Christmas - and will be expecting to use it for work as well as leisure in 2011. In my case, I'm just getting to grips with my first Android phone. This trend developed so much during 2010 that a number of my customers have officially changed their stance on consumer devices - accepting consumerization as something to embrace rather than resist. Clearly, consumerization has significant implications for information control, as corporate data is distributed to consumer devices whether the organization is aware of it or not. I daresay that some DLP solutions can limit distribution to some extent, but this creates a conflict between accepting consumerization and frustrating it. So what does Oracle IRM have to offer the consumerized enterprise? First and foremost, consumerization does not automatically represent great additional risk - if an enterprise seals its sensitive information. Sealed files are encrypted, and that fundamental protection is not affected by copying files to consumer devices. A device might be lost or stolen, and the user might not think to report the loss of a personally owned device, but the data and the enterprise that owns it are protected. Indeed, the consumerization trend is another strong reason for enterprises to deploy IRM - to protect against this expansion of channels by which data might be accidentally exposed. It also enables encryption requirements to be met even though the enterprise does not own the device and cannot enforce device encryption. Moving on to the usage of sealed content on such devices, some of our customers are using virtual desktop solutions such that, in truth, the sealed content is being opened and used on a PC in the normal way, and the user is simply using their device for display purposes. This has several advantages:
    · The sensitive documents are not actually on the devices, so device loss and theft are even less of a worry.
    · The enterprise has another layer of control over how and where content is used, as access to the virtual solution involves another layer of authentication and authorization - defence in depth.
    · It is a generic solution, which means the enterprise does not need to actively support the ever-expanding variety of consumer devices - the enterprise just manages some virtual access to traditional systems using something like Citrix or Remote Desktop services.
    · It is a tried and tested way of accessing sealed documents. People have been using Oracle IRM in conjunction with Citrix and Remote Desktop for several years.
    For some scenarios, we also have the "IRM wrapper" option that provides a simple app for sealing and unsealing content on a range of operating systems. We are busy working on other ways to support the explosion of consumer devices, but this blog is not the proper forum for talking about them at this time. If you are an Oracle IRM customer, we will be pleased to discuss our plans and your requirements with you directly on request. You can be sure that the blog will cover the new capabilities as soon as possible.

    Read the article

  • What's a good 2D animation program for Linux (an alternative to, e.g., Flash CS)?

    - by Martin Zeltin
    I don't mean the Flash player here - I'm talking about the Flash authoring program that I can make animations with, like Adobe Flash CS (formerly known as Macromedia Flash). Is there a program on Linux that I can make animations with? I want to make a movie like Animator vs. Animation. I used Easy GIF Animator on Windows; it was a bit harder than Flash, but I'm on Linux now and I'd like to know what it has to offer. Worst case scenario, what GIF animators are there on Linux? :) Thanks!

    Read the article

  • Has anyone else read "Programming Video Games for the Evil Genius"?

    - by Martin
    I bought this book called "Programming Video Games for the Evil Genius" by Ian Cinnamon. If there is anyone who has read or is familiar with this book, I am wondering if they think it is worth reading. I am interested in making video games. I have already taken intro courses in C++, Java and Python and got through okay. I've been going through this book for about a month now (SLOWLY). All I have to do is type the code exactly as in the book, BUT a lot of the code is not clearly explained. I do some research online, but I usually still have some trouble answering my questions. Then I found Stack Overflow. It's been a ton of help. Right now I am trying to make a racing game right out of this book, and I got to a point where the author left a bunch of errors in his code. One of the members of this website fixed it up for me, but added some stuff that I'm having trouble understanding. I spend more time trying to figure out the author's errors and fix them, or get someone to help me fix them, than I actually do learning code. I REALLY want to learn how to do this and I am ready and willing to put in the time, but I'm not sure if my time would be better spent learning from a different source. Are there any veterans out there who are familiar with this book and think it's worth it/not worth it? Should I move on to another book? Any advice for a fresh start for someone who wants to learn some video game programming?

    Read the article

  • Will bing bot index pages with invalid SSL certificates?

    - by Martin
    Bingbot and Yahoo Slurp do not support SNI (Server Name Indication) when using SSL. Ignoring other workarounds (multi-domain certificates, non-SSL content, etc.), will Bingbot index pages that have an invalid SSL certificate, e.g. one issued for example.net but used on example.com? If possible please provide an example from Yahoo or Bing. I have found websites in Bing that use self-signed certificates and are indexed correctly, but what about invalid certificates?

    Read the article

  • Configuring UCM content cache invalidation for a custom portal application

    - by Martin Deh
    Recently, I blogged about enabling the UCM content cache invalidator for Spaces (found here). This can be enabled for a WebCenter custom portal application as well. The much-overlooked setting is done through the Content Repository connection definition in the JDeveloper Application Resources section. Enabling the cache invalidator "sweeper" can be invaluable where UCM content is being updated from within the UCM console rather than from within the portal.

    Read the article

  • IRM Item Codes – what are they for?

    - by martin.abrahams
    A number of colleagues have been asking about IRM item codes recently – what are they for, when are they useful, how can you control them to meet some customer requirements? This is quite a big topic, but this article provides a few answers. An item code is part of the metadata of every sealed document – unless you define a custom metadata model. The item code is defined when a file is sealed, and usually defaults to a timestamp/filename combination. This time/name combo tends to make item codes unique for each new document, but actually item codes are not necessarily unique, as will become clear shortly. In most scenarios, item codes are not relevant to the evaluation of a user's rights - the context name is the critical piece of metadata, as a user typically has a role that grants access to an entire classification of information regardless of item code. This is key to the simplicity and manageability of the Oracle IRM solution. Item codes are occasionally exposed to users in the UI, but most users probably never notice and never care. Nevertheless, here is one example of where you can see an item code – when you hover the mouse pointer over a sealed file. As you see, the item code for this freshly created file combines a timestamp with the file name.
    But what are item codes for? The first benefit of item codes is that they enable you to manage exceptions to the policy defined for a context. Thus, I might have access to all oracle – internal files - except for 2011_03_11 13:33:29 Board Minutes.sdocx. This simple mechanism enables Oracle IRM to provide file-by-file control where appropriate, whilst offering the scalability and manageability of classification-based control for the majority of users and content. You really don't want to be managing each file individually, but never say never. Item codes can also be used for the opposite effect – to include a file in a user's rights when their role would ordinarily deny access. So, you can assign a role that allows access only to specified item codes. For example, my role might say that I have access to precisely one file – the one shown above.
    So how are item codes set? In the vast majority of scenarios, item codes are set automatically as part of the sealing process. The sealing API uses the timestamp and filename as shown, and the user need not even realise that this has happened. This automatically creates item codes that are for all practical purposes unique - and that are also intelligible to users who might want to refer to them when viewing or assigning rights in the management UI. It is also possible for suitably authorised users and applications to set the item code manually or programmatically if required.
    Setting the item code manually using the IRM Desktop
    The manual process is a simple extension of the sealing task. An authorised user can select the Advanced… sealing option, and will see a dialog that offers the option to specify the item code. To see this option, the user's role needs the Set Item Code right – you don't want most users to give any thought at all to item codes, so by default the option is hidden.
    Setting the item code programmatically
    A more common scenario is that an application controls the item code programmatically. For example, a document management system that seals documents as part of a workflow might set the item code to match the document's unique identifier in its repository. This offers the option to tie IRM rights evaluation directly to the security model defined in the document management system. Again, the sealing application needs to be authorised to Set Item Code.
    The Payslip Scenario
    To give a concrete example of how item codes might be used in a real-world scenario, consider a Human Resources workflow such as payslips. The goal might be to allow the HR team to have access to all payslips, but each employee to have access only to their own payslips. To enable this, you might have an IRM classification called Payslips. The HR team have a role in the normal way that allows access to all payslips. However, each employee would have an Item Reader role that only allows them to access files that have a particular item code – and that item code might match the employee's payroll number. So, employee number 123123123 would have access to items with that code. This shows why item codes are not necessarily unique – you can deliberately set the same code on many files for ease of administration. The employees might have the right to unseal or print their payslip, so the solution acts as a secure delivery mechanism that allows payslips to be distributed via corporate email without any fear that they might be accessed by IT administrators, or forwarded accidentally to anyone other than the intended recipient. All that remains is to ensure that as each user's payslip is sealed, it is assigned the correct item code – something that is easily managed by a simple IRM sealing application. Each month, an employee's payslip is sealed with the same item code, so you do not need to keep amending the list of items that the user has access to – they have access to all documents that carry their employee code.
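    To make the payslip scenario concrete, here is a small Python sketch of the sealing-application side. It is purely illustrative - the seal_file helper and its parameters are hypothetical placeholders, not the real Oracle IRM sealing API - but it shows the one decision that matters: reuse the payroll number as the item code every month.

        # Hypothetical illustration only: seal_file is NOT the actual Oracle IRM API.
        def seal_file(path, context, item_code):
            """Stand-in for a call that seals a file into a context with an explicit item code."""
            print(f"sealing {path} into context '{context}' with item code {item_code}")

        def seal_monthly_payslips(payslips):
            # payslips: iterable of (employee_payroll_number, path_to_payslip)
            for payroll_number, path in payslips:
                # Reusing the payroll number as the item code each month means the
                # employee's Item Reader role never needs to be amended.
                seal_file(path, context="Payslips", item_code=str(payroll_number))

        seal_monthly_payslips([("123123123", "payslips/2011-03/123123123.sdocx")])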

    Read the article

  • Is there a language between C and C++?

    - by Robert Martin
    I really like the simple and transparent nature of C: when I write C code I feel unencumbered by "leaky abstractions" and can almost always make a shrewd guess as to the assembly I'm producing. I also like the simple, familiar syntax of C. However, C doesn't have the simple, helpful doodads that C++ offers, like classes, simplified (non-C-string) string handling, etc. I know that it's all possible to implement in C using jump tables and the like, but that's a bit wordy at times, and not very type-safe for various reasons. I'm not a fan of the over-emphasis on objects in C++, though, and I'm gun-shy of the 'new' operator and the like. C++ seems to have just a few too many hiccups to, for instance, be used as a systems programming language. Does there exist a language that sits between C and C++ on the scale of widgets and doodads? Disclaimer: I mean this as a purely factual question. I do not intend to anger you because I don't share your view that C{,++} is good enough to do whatever I'm planning.

    Read the article

  • How Visual Studio 2010 and Team Foundation Server enable Compliance

    - by Martin Hinshelwood
    One of the things that makes Team Foundation Server (TFS) the most powerful Application Lifecycle Management (ALM) platform is the traceability it provides to those that use it. This traceability is crucial in enabling many companies to adhere to the compliance regulations to which they are bound (e.g. CFR 21 Part 11 or Sarbanes–Oxley). From something as simple as relating Tasks to check-ins, or being able to see the top 10 files in your codebase that are causing the most Bugs, to identifying which Bugs and Requirements are in which Release - all that information is available, and more, in TFS. Although all of this traceability is available within TFS, you do need to understand that it is not for free. Well… I say that, but if you are using TFS properly you will have this information with no additional work except for firing up the reporting. Using Visual Studio ALM and Team Foundation Server you can relate every line of code changed all the way up to Requirements and back down through Test Cases to the Test Results.
    Figure: The only thing missing is Build
    In order to build the relationship model below we need to examine how each of the relationships gets there. Each member of your team, from programmer to tester and Business Analyst to Business, has their role to play to knit this together.
    Figure: The relationships required to make this work can get a little confusing
    If Build is added to this, to relate Work Items to Builds, and with knowledge of which builds are in which environments, you can easily identify what is contained within a Release.
    Figure: How are things progressing
    Along with the ability to produce the progress and trend reports, the traceability that is built into TFS can be used to fulfil most audit requirements out of the box, and augmented to fulfil the rest. In order to understand the relationships, let's look at each of the important Artifacts and how they are associated with each other…
    Requirements – The root of all knowledge
    Requirements are the thing that the business cares about delivering. These could be derived as User Stories or Business Requirements Documents (BRDs), but they should be what the Business asks for. Requirements can be related to many of the Artifacts in TFS, so let's look at the model:
    Figure: If the centre of the world was a requirement
    We can track which releases Requirements were scheduled in, but this can change over time as more details come to light.
    Figure: Who edited the Requirement and when
    There is also the ability to query Work Items based on the history of changes that were made to them. This is particularly important with Requirements. It might not be enough to say what Requirements were completed in a given release, but also to know which Requirements were ever assigned to a particular release.
    Figure: Some magic required, but result still achieved
    As an augmentation to this it is also possible to run a query that shows results from the past, just as if we had a time machine. You can take any query in the system and add an "ASOF" clause at the end to query historical data in the operational store for TFS.
    select <fields> from WorkItems [where <condition>] [order by <fields>] [asof <date>]
    Figure: Work Item Query Language (WIQL) format
    In order to achieve this you do need to save the query as a *.wiql file to your local computer and edit it in Notepad, but once imported into TFS you can run it any time you want.
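    For example, a query along these lines (the project name and iteration path are purely illustrative, not from the post) would show which Requirements sat under a given release iteration as the data stood at the end of March:

        SELECT [System.Id], [System.Title], [System.State]
        FROM WorkItems
        WHERE [System.WorkItemType] = 'Requirement'
          AND [System.IterationPath] UNDER 'MyProject\Release 1'
        ORDER BY [System.Id]
        ASOF '2010-03-31'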
    Figure: Saving Queries locally can be useful
    All of these audit features are available throughout the Work Item Tracking (WIT) system within TFS.
    Tasks – Where the real work gets done
    Tasks are the work horse of the development team, but they are only as useful as Excel if you do not relate them properly to other Artifacts.
    Figure: The Task Work Item Type has its own relationships
    Requirements should be broken down into Tasks that the development team work from to build what is required by the business. This may be done by a small dedicated group or by everyone that will be working on the software team, but however it happens, all of the Tasks created should be a Child of a Requirement Work Item Type.
    Figure: Tasks are related to the Requirement
    Tasks should be used to track the day-to-day activities of the team working to complete the software, and as such they should be kept simple and short lest developers think they are more trouble than they are worth.
    Figure: The Task Work Item Type has a narrower purpose
    Although the Task Work Item Type describes the work that will be done, the actual development work involves making changes to files that are under Source Control. These changes are bundled together in a single atomic unit called a Changeset, which is committed to TFS in a single operation. During this operation developers can associate Work Items with the Changeset.
    Figure: Tasks are associated with Changesets
    Changesets – Who wrote this crap
    Changesets themselves are just an inventory of the changes that were made to a number of files to complete a Task.
    Figure: Changesets are linked by Tasks and Builds
    Figure: Changesets tell us what happened to the files in Version Control
    Although comments can be changed after the fact, the inventory and Work Item associations are permanent, which allows us to audit all the way down to the individual change level.
    Figure: On check-in you can resolve a Task, which automatically associates it
    Because of this we can view the history of any file within the system and see how many changes have been made and what Changesets they belong to.
    Figure: Changes are tracked at the File level
    What would be even more powerful would be if we could view these changes superimposed over the top of the lines of code. Some people call this a blame tool because it is commonly used to find out which of the developers introduced a bug, but it can also be used as another method of auditing changes to the system.
    Figure: Annotate shows the lines
    The Annotate functionality allows us to visualise the relationship between the individual lines of code and the Changesets. In addition to this you can create a Label and apply it to a version of your version control. The problem with Labels is that they can be changed after they have been created, with no traceability. This makes them practically useless for any sort of compliance audit. So what do you use?
    Branches – And why we need them
    Branches are a really powerful tool for development and release management, but they are most important for audits.
    Figure: One way to Audit releases
    The R1.0 branch can be created from the Label that the Build creates on the R1 line when a Release build was created. It can be created as soon as the Build has been signed off for release. However, it is still possible that someone changed the Label between this time and its creation. A better method can be to explicitly link the Build output to the Build.
    Builds – Let's tie some more of this together
    Builds are the glue that helps us enable the next level of traceability by tying everything together.
    Figure: The dashed pieces are not out of the box but can be enabled
    When the Build is called and starts, it looks at what it has been asked to build and determines what code it is going to get and build.
    Figure: The folder identifies what changes are included in the build
    The Build sets a Label on the Source with the same name as the Build, but the Build itself also includes the latest Changeset ID that it will be building. At the end of the Build, the Build Agent identifies the new Changesets it is building by looking at the check-ins that have occurred since the last Build.
    Figure: What changes have been made since the last successful Build
    It will then use that information to identify the Work Items that are associated with all of those Changesets. The Changesets are associated with the Build, and the "Integrated In" field of those Work Items is changed.
    Figure: Find all of the Work Items to associate with
    The "Integrated In" field of all of the Work Items identified by the Build Agent as being integrated into the completed Build is updated to reflect the Build number that successfully integrated that change.
    Figure: Now we know which Work Items were completed in a build
    Now we can link a single line of code changed all the way back through the Task that initiated the action to the Requirement that started the whole thing, and back down to the Build that contains the finished Requirement. But how do we know whether that Requirement has been fully tested or even meets the original Requirements?
    Test Cases – How we know we are done
    The only way we can know whether a Requirement has been completed to the required specification is to test that Requirement. In TFS there is a Work Item type called a Test Case. Test Cases enable two scenarios. The first scenario is the ability to track and validate Acceptance Criteria in the form of a Test Case. If you agree with the Business on a set of goals that must be met for a Requirement to be accepted by them, it becomes difficult for them to reject a Requirement when it passes all of the tests, and it also provides a level of traceability and validation for audit that a feature has been built and tested to order.
    Figure: You can have many Acceptance Criteria for a single Requirement
    It is crucial for this to work that someone from the Business signs off on the Test Case moving from the "Design" to "Ready" states. The second is the ability to associate an MS Test test with the Test Case, thereby tracking the automated test. This is useful in the circumstance where you want to track a test and the test results of a Unit Test designed to prove the existence, and then the non-recurrence, of a Bug.
    Figure: Associating a Test Case with an automated Test
    Although it is possible, it may not make sense to track the execution of every Unit Test in your system; there are, however, many Integration and Regression tests that may be automated that it would make sense to track in this way.
    Bug – Let's not have regressions
    In order to know whether a Bug in the application has been fixed, and to make sure that it does not reoccur, it needs to be tracked.
    Figure: Bugs are the centre of their own world
    If the fix to a Bug is big enough to require that it is broken down into Tasks then it is probably a Requirement. You can associate a check-in with a Bug and have it tracked against a Build. You would also have one or more Test Cases to prove the fix for the Bug.
    Figure: Bugs have many associations
    This allows you to track Bugs / Defects in your system effectively and report on them.
    Change Request – I am not a feature
    In the CMMI process template, Change Requests can also be easily tracked through the system. In some cases it can be very important to track Change Requests separately, as an auditor may want to know what was changed and who authorised it. Again, and similar to Bugs, if the Change Request is big enough that it would need to be broken down into Tasks, it is in reality a new feature and should be tracked as a Requirement.
    Figure: Make sure your Change Requests only affect Requirements and do not rewrite them
    Conclusion
    Visual Studio 2010 and Team Foundation Server together provide an exceptional Application Lifecycle Management platform that can help your team comply with even the harshest of compliance requirements while still enabling them to be Agile. Most audits are heavy on required documentation, but most of that information is captured for you as long as you do it right. You don't even need every team member to understand it all, as each of the Artifacts is relevant to a different type of team member.
    · Business Analysts manage Requirements and Change Requests
    · Programmers manage Tasks and check in against Change Requests and Bugs
    · Testers manage Bugs and Test Cases
    · Build Masters manage Builds
    Although there is some crossover, there are still roles or "hats" that are worn. Do you think this is all achievable? Have I missed anything that you think should be there?

    Read the article

  • Should GeeksWithBlogs move to the Wordpress Platform?

    - by Martin Hinshelwood
    Geekswithblogs was my first ever blog, and my first post was on 22nd June 2006. Since then very little functionality has been added. This is not a complaint, but rather an observation that it is very hard to keep up with all of the blogging capabilities that people want. My point would be: "Why bother!" Vote now for a migration of the awesome Geekswithblogs content from SubText to Wordpress. Having been a long-time user of GWB, I have been worried of late by my envy of other blogging platforms. I made a number of requests around 10 months ago for things that almost all blogging platforms provide, but which are not available on GWB.
    · Support other comment frameworks – The current comment system is so antiquated that it does not even have common filters like Facebook, Twitter or Google login to help prevent spam.
    · Tags are not listed in the RSS – This can prevent you from getting the Google juice that you deserve.
    · 301 redirects to a single URL – If you use a custom URL then all your posts are split between both the GWB URL and the custom one. This is VERY bad for SEO.
    I realise that it is difficult to find time to add all of the features that all the uber geeks on this site want, so why bother… let's move to the most popular and most modded platform available and allow everyone to add whatever widgets they like.
    Figure: Vote for Geekswithblogs moving to Wordpress
    Why would I want this?
    · There are over 13k plugins available
    · Easy augmentation model
    · Full mobile support
    · Regular releases
    · lots more…
    This could be a turning point in the legendary history of Geeks With Blogs - be part of it… What can I do to make this happen? I need your help to make this happen:
    · Vote for it
    · Discuss it

    Read the article

  • Allow JMX connection on JVM 1.6.x

    - by Martin Müller
    While trying to monitor a JVM on a remote system using VisualVM, the activation of JMX gave me some challenges. Dr. Google and my employer's documentation quickly revealed some -D opts needed for JMX, but strangely it only worked for a Solaris 10 system (my setup: a MacOS laptop monitoring SPARC Solaris based JVMs). On S11 with the same opts I saw that "my" JVM was listening on port 3000 (which I chose for JMX), but VisualVM was not able to get a connection. Finally I found out that at least my S11 installation needed an explicit setting of the RMI host name. This is what finally worked:
        -Dcom.sun.management.jmxremote=true \
        -Dcom.sun.management.jmxremote.ssl=false \
        -Dcom.sun.management.jmxremote.authenticate=false \
        -Dcom.sun.management.jmxremote.port=3000 \
        -Djava.rmi.server.hostname=s11name.us.oracle.com \
    Maybe this post saves someone else the time I spent on research.
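    For reference, with options like those above the remote connection in VisualVM or JConsole would typically be added either as host:port or via the full JMX service URL (using, purely as an example, the host name and port from this post):

        s11name.us.oracle.com:3000
        service:jmx:rmi:///jndi/rmi://s11name.us.oracle.com:3000/jmxrmi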

    Read the article

  • What tasks should be explicitly mentioned in a job reference? [closed]

    - by Martin
    Glossary
    A job reference (see also the German version) is a letter from the (former) employer that states what the employee did, and how well he did it. There are oh so weird rules here on how to phrase things therein, but that is not what this question is about.
    Question
    I hope this can even be generally answered, but even if it is country/region specific, I think there is enough international know-how on this site to get useful answers for different regions. I was wondering how detailed the tasks a programmer/developer did should be spelled out in a job reference. (After all, they can be spelled out in all detail in a CV when applying for a new job.) So how much detail is usual for a job reference?
    Example
    Developed Windows applications in C++
    or
    Developed Windows Desktop Applications using C++ with MS Visual Studio 2005 and MFC, utilising Boost 1.47 and specific library xyz, focusing on subsystem abc for numerical calculations of ... etc.
    What makes more sense?

    Read the article

  • Smarty: Configurable Comments and Code Templates

    - by Martin Fousek
    Hello, today we would like to show you a few improvements we have prepared in the PHP Smarty framework support for NetBeans 7.3. So let's talk about the adjustable toggle comment action and code templates.
    Configurable Comments
    As some of you requested, we implemented a toggle comment action with adjustable behavior. In NetBeans 7.3 you can choose in Options between "Smarty comments everywhere" and "Language sensitive comments" in Smarty templates.
    Toggle comment, language sensitive:
    Toggle comment as Smarty comment everywhere:
    Code Templates
    In NetBeans 7.3 we will provide by default many code templates inside Smarty templates or directly inside Smarty tags. Code templates should be available for all built-in or custom functions and modifiers of Smarty 3.x. Besides that, you should be able to define additional custom templates easily in Options -> Editor -> Code Templates for "Smarty Templates" or directly for "Smarty Markup" (which means code templates inside a Smarty tag). You can also take advantage of selection templates, which are able to wrap your code in a chosen Smarty tag. That's all for today. As always, please test it and report all the issues or enhancements you find in NetBeans Bugzilla (component php, subcomponent Smarty).
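    As a quick illustration of the difference (my own example, not from the post): a language-sensitive toggle comments out HTML markup with an HTML comment, whereas "Smarty comment everywhere" always uses Smarty's own comment syntax, which the template engine strips before the output is sent to the browser:

        <!-- language sensitive: an HTML comment, still visible in the rendered page source -->
        {* Smarty comment: removed by the template engine, never sent to the browser *}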

    Read the article

  • Partial recalculation of visibility on a 2D uniform grid

    - by Martin Källman
    Problem
    Imagine that we have a 2D uniform grid of dimensions N x N. For this grid we have also pre-computed a visibility look-up table, e.g. with DDA, which answers the boolean query "is cell X visible from cell Y?". The look-up table is a complete graph K_N of the cells V in the grid, with each edge E being a binary value denoting the visibility between its vertices.
    Question
    If any given cell has its visibility modified, is it possible to extract the subset E_delta of edges which must have their visibility recomputed due to the change, so as to avoid a full recomputation for the entire grid? (Which is N(N-1)/2 or N^2 depending on the implementation.)
    Update
    If it is not possible to solve this in closed form, then maintaining a separate mapping of each cell and every cell pair whose line intersects said cell might also be an option. This obviously consumes more memory, but the data is static. The increased memory requirement could be reduced by introducing a hierarchy, subdividing the grid into smaller parts, and by doing so the above mapping can be reused for each sub-grid. This would come at a cost in terms of increased computation relative to the number of subdivisions; it would also require a resumable ray-casting algorithm.
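    The mapping idea from the update can be sketched directly. The Python below (a rough illustration under the question's assumptions, with a deliberately naive line traversal standing in for a proper DDA/supercover walk) precomputes, for every cell, the set of cell pairs whose sight line passes through it; when a cell's blocking state changes, only those pairs - the E_delta of the question - are re-tested:

        from itertools import combinations

        def cells_on_line(a, b):
            # Naive stand-in for a proper DDA/supercover traversal:
            # sample the segment densely and collect every cell it touches.
            (x0, y0), (x1, y1) = a, b
            steps = 2 * max(abs(x1 - x0), abs(y1 - y0), 1)
            return {(round(x0 + (x1 - x0) * t / steps),
                     round(y0 + (y1 - y0) * t / steps)) for t in range(steps + 1)}

        def visible(a, b, blocked):
            # A pair is visible if no blocking cell (other than its endpoints) lies on the line.
            return not any(c in blocked and c not in (a, b) for c in cells_on_line(a, b))

        def build_tables(n, blocked):
            cells = [(x, y) for x in range(n) for y in range(n)]
            vis = {}                              # unordered cell pair -> bool
            through = {c: set() for c in cells}   # cell -> pairs whose line crosses it
            for a, b in combinations(cells, 2):
                vis[(a, b)] = visible(a, b, blocked)
                for c in cells_on_line(a, b):
                    through[c].add((a, b))
            return vis, through

        def update_cell(cell, now_blocked, blocked, vis, through):
            # Only the pairs whose sight line passes through `cell` can change (E_delta).
            if now_blocked:
                blocked.add(cell)
            else:
                blocked.discard(cell)
            for a, b in through[cell]:
                vis[(a, b)] = visible(a, b, blocked)

        blocked = set()
        vis, through = build_tables(8, blocked)
        update_cell((3, 3), True, blocked, vis, through)   # re-tests only the affected pairs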

    Read the article

  • Do you know the minimum builds to create on any branch?

    - by Martin Hinshelwood
    You should always have three builds on your team project. These should be set up and tested using an empty solution before you write any code at all.
    Figure: Three builds named in the format [TeamProject].[AreaPath]_[Branch].[Gate|CI|Nightly] for every branch.
    These builds should use the same XAML build workflow; however, you may set them up to run a different set of tests depending on the time it takes to run a full build.
    · Gate – Only needs to run the smallest set of tests, but should run most if not all of the Unit Tests. This is run before developers are allowed to check in.
    · CI – This should run all Unit Tests and all of the automated UI tests. It is run after a developer check-in.
    · Nightly – The Nightly build should run all of the Unit Tests, all of the automated UI tests and all of the Load and Performance tests. The nightly build is time consuming and will run but once a night. Packaging of your Product for testing the next day may be done at this stage as well.
    Figure: You can control what tests are run and what data is collected while they are running.
    Note: We do not run all the tests every time because of the time-consuming nature of running some tests, but ALL tests should be run overnight.
    Note: If you had a really large project with thousands of tests, including long-running Load tests, you may need to add a Weekly build to the mix.
    Figure: Bad example, you can't tell what these builds do if they are in a larger list
    Figure: Good example, you know exactly what project, branch and type of build these are for.
    Technorati Tags: SSW,SSW Rules,VS2010,VS ALM,Team Build 2010,Team Build

    Read the article

  • E.T. Phone "Home" - Hey I've discovered a leak..!

    - by Martin Deh
    Being a member of the WebCenter A-Team, we are often asked to performance-tune a WebCenter custom portal application or a WebCenter Spaces deployment. Most of the time, the process is pretty much the same. For example, we often use tools like HttpWatch and Firebug to monitor the application, and then perform load tests using JMeter or Selenium. In addition, there is the fine-tuning of the different performance-based tuning parameters that are outlined in the documentation and in blogs that have been written by my fellow A-Teamers (click on the "performance" tag in this ATEAM blog). When performing a load test where the outcome shows a significant reduction in the system's resources (memory), one of the causes that plays a role in memory "leakage" is the implementation of the navigation menu UI. OOTB in both JDeveloper and WebCenter Spaces, there are sample (page) templates that include a "default" navigation menu. In WebCenter Spaces, this is through the SpacesNavigationModel taskflow region, and in a custom portal (i.e. pageTemplate_globe.jspx) the menu UI is constructed using standard ADF components. These sample menu UIs basically enable the underlying navigation model to visualize itself to some extent. However, due to certain limitations of these sample menu implementations (i.e. deeper sub-levels of navigation items, look-and-feel, etc.), many customers have developed their own custom navigation menus using a combination of HTML, CSS and jQuery. While this is supported somewhat by the framework, it is important to know what some of the best practices are for ensuring that the navigation menu does not leak. In addition, in this blog I will point out a leak (BUG) that is in the sample templates.
    OK, E.T., the suspense is killing me - what is this leak?
    Note: for those who don't know, info on E.T. can be found here.
    In both of the included templates, the example given for handling the navigation back to the "Home" page will essentially provide a nice little memory leak every time the link is clicked. Let's take a look at a simple example, which uses the default template in Spaces. The outlined section below is the "link", which is used to enable a user to navigate back quickly to the Group Space Home page. When you (mouse) hover over the link, the browser displays the target URL. From looking initially at the proposed URL, this is the intended destination. Note: "home" in this case is the navigation model reference (id), which enables the display of the "pretty URL". Next, notice the current URL, which is displayed in the browser. Remember that PortalSiteHome = home. The other highlighted item, adf.ctrl-state, is very important to the framework. This item is basically a persistent query parameter, which is used by the (ADF) framework to manage the current session and page instance. Without this parameter present, among other things, the browser back-button navigation will fail. In this example, the value for this parameter is currently 95K25i7dd_4. Next, through the navigation menu item, I will click on the Page2 link. Inspecting the URL again, I can see that it reports that the navigation was indeed successful and the adf.ctrl-state is also in the URL. For those wondering why the URL displays Page3.jspx instead of Page2.jspx: basically, the (file) naming convention for pages created at runtime in Spaces starts at Page1 and then increments as you create additional pages. The name of the actual link (i.e. Page2) is the page "title" attribute. So the moral of the story is that, unlike design-time created pages, for runtime-created pages the name of the file will almost never match the name that appears in the link.
    Next is to click on the quick link for navigating back to the Home page. Quick investigation shows that the navigation was indeed successful. In the browser's URL there is a home (pretty URL) reference, and there is also a reference to the adf.ctrl-state parameter. So what's the issue? Can you remember what the value was for the adf.ctrl-state? The current value is 95k25i7dd_149. However, the previous value was 95k25i7dd_4. Here is what happened. Remember when (mouse) hovering over the link produced the following target URL: http://localhost:8888/webcenter/spaces/NavigationTest/home This is great for the browser, as this URL will navigate to the intended target. However, what is missing is the adf.ctrl-state parameter. Since this parameter was not present upon navigation "within" the framework, the ADF framework produced another adf.ctrl-state (object). The previous adf.ctrl-state is basically orphaned while continuing to be alive in memory. Note: the auto-creation of the adf.ctrl-state does happen initially when you invoke the Spaces application for the first time. The following is the line of code which produced the issue: <af:goLink destination="#{boilerBean.globalLogoURIInSpace} ... Here the boilerBean is responsible for returning the "string" URL, which in this case is /spaces/NavigationTest/home. Unfortunately, again, what is missing is adf.ctrl-state. Note: there is more than one instance of the goLink in the sample templates.
    So E.T., how can I correct this?
    There are two simple fixes. For the goLink's destination, use the navigation model to return the actual "node" value, then use the goLinkPrettyUrl method to add the current adf.ctrl-state: <af:goLink destination="#{navigationContext.defaultNavigationModel.node['home'].goLinkPrettyUrl}" ... /> Note: the node value is the [navigation model id]. Using a goLink does solve the main issue. However, since the link basically does a redirect, some browsers like IE will produce a somewhat significant "flash". In a Spaces application, this may be an annoyance to the users. Another way to solve the leakage problem, and also remove the flash between navigations, is to use an af:commandLink. For example, here is the code example for this scenario: <af:commandLink id="pt_cl2asf" actionListener="#{navigationContext.processAction}" action="pprnav">    <f:attribute name="node" value="#{navigationContext.defaultNavigationModel.node['home']}"/> </af:commandLink> Here, the navigation node to where home is located is delivered by way of the attribute to the commandLink. The actual navigation is performed by processAction, which needs the "node" value.
    E.T., OK, you solved the OOTB sample BUG - what about my custom navigation code?
    I have seen many implementations of creating a navigation menu through custom code. In addition, there are some blog sites that also give detailed examples. The majority of these implementations are very similar. The code usually involves using standard HTML tags (i.e. DIVs, UL, LI, etc.) and either CSS or JavaScript (jQuery) to produce the flyout/drop-down effect. The navigation links in these cases are standard <a href... > tags. Although this type of approach is not fully accepted by the ADF community, it does work. The important thing to note here is that the <a> tag value must use the goLinkPrettyUrl method of constructing the target URL. For example: <a href="${contextRoot}${menu.goLinkPrettyUrl}"> The main reason why this type of approach is popular is that links created this way (as with af:goLinks) make the pages crawlable by search engines. CommandLinks are currently not search friendly. However, in the case of a Spaces instance this may be acceptable. So in this use case, af:commandLinks would replace the <a> (or goLink) tags. The af:commandLink example code given above is still valid. One last important item: if you choose to use af:commandLinks, special attention must be given to the scenario in which JavaScript has been used to produce the flyout effect in the custom menu UI. In many cases that I have seen, the commandLink can only be invoked once, since there is a conflict between the custom JavaScript and the ADF framework's own scripting to control the view. The recommendation here would be to use a pure CSS approach to achieve the dropdown effects.
    One very important thing to note: due to another BUG, the WebCenter environment must be patched to BP3 (patch p14076906). Otherwise the leak is still present even when using the goLinkPrettyUrl method. Thanks E.T.! Now I can phone home and not worry about my application running out of resources due to my custom navigation!

    Read the article

  • Backup, clean install and restore

    - by Tony Martin
    Due to the route I came into Ubuntu by, I now have 12.10 upgraded from 12.04 on an NTFS file system. I've invested a lot of time getting everything as I want it, viz. packages installed, settings, etc. I wonder if there is an easy method of:
    1. Backing up the whole system, so that I can
    2. Do a squeaky clean install of 12.10 on ext4 with bells and whistles, then
    3. Restore my backup so that my system behaves as it did before the reinstall?
    Sorry if this seems terribly obvious, but I don't want to find all the problems when it's too late. TIA.
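    One common starting point (a rough sketch only, not a complete answer - it assumes a Debian/Ubuntu system, the backup paths are illustrative, and PPAs and manually installed software are not captured) is to save the package selections and your home directory, reinstall, and then restore both:

        # before the reinstall (backup destination path is just an example)
        dpkg --get-selections > packages.list
        tar czf /media/backup/home-backup.tar.gz -C /home $USER

        # after the clean 12.10 install on ext4
        sudo dpkg --set-selections < packages.list
        sudo apt-get -y dselect-upgrade
        tar xzf /media/backup/home-backup.tar.gz -C /home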

    Read the article

  • No Cost 1-Click Remarketer Level Training

    - by martin.morganti(at)oracle.com
    The Remarketer level has proven to be a great success as a way of enabling Remarketers to jump-start a resale business with Oracle. As part of the Knowledge Zone for the 1-Click products we have some no-cost training available - the Oracle 1-Click Technology Products Guided Learning Path - which explains the program and how to position Oracle products. We have been working to increase the training that is available for Remarketers, and I am pleased to let you know that we have recently added more no-cost training. The training path that we have released is the Oracle Database 11g 1-Click Technology Sales Guided Learning Path. This set of courses provides more detail on Oracle Database 11g and will help you to better uncover and exploit opportunities to sell it as part of your solutions. So if you are interested in a No Fees, No Barriers, No Excuses way to resell Oracle 1-Click products, look at the Remarketer page and take the free 1-Click Guided Learning Paths in the Training section to kick-start your activity.

    Read the article

  • Caveat utilitor - Can I run two versions of Microsoft Project side-by-side?

    - by Martin Hinshelwood
    A number of our customers have asked if there are any problems in installing and running multiple versions of Microsoft Project on a single client. Although this is a case of caveat utilitor (let the user beware), as long as the user understands and accepts the issues that can occur then they can do this. Although Microsoft provide the ability to leave old versions of Office products (except Outlook) on your client when you are installing a new version of the product, they certainly do not endorse doing so.
    Figure: For Project you can choose to keep the old stuff
    That being the case, I would have preferred that they put a "(NOT RECOMMENDED)" after the options to impart that knowledge to the rest of us, but they did not. The default and recommended behaviour is for the newer version's installer to remove the older versions. Of course this does not apply in reverse. There are no forward compatibility packs for Office. There are a number of negative behaviours (or bugs) that can occur in this configuration:
    There is only one MS Project
    In Windows a file extension can only be associated with a single program. In this case, MPP files can be associated with only one version of winproj.exe. The executables are in different folders, so if a user double-clicks a Project file on the desktop, in file explorer, or in an Outlook email, Windows will launch the winproj.exe associated with MPP and then load the MPP file. There are problems associated with this situation and in some cases workarounds. The user double-clicks on a Project 2010 file, Project 2007 launches but is unable to open the file because it is a newer version. The workaround is for the user to launch Project 2010 from the Start menu and then open the file. If the file is attached to an email they will need to first drag the file to the desktop.
    All your linked MS Project files need to be of the same version
    There are a number of problems that occur when people use Microsoft's Object Linking and Embedding (OLE) technology. The three common uses of OLE are:
    · inserted projects, where a Master project contains sub-projects and each sub-project resides in its own MPP file
    · shared resource pools, where multiple MPP files share a common resource pool kept in a single MPP file
    · cross-project links, where a task or milestone in one MPP file has a predecessor/successor relationship with a task or milestone in a different MPP file
    What I've seen happen before is that if you are running a version of Project that is not associated with the MPP extension and then try to activate an OLE link, Project tries to launch the other version of Project. Things start getting very confused, since different MPP files are being controlled by different versions of Project running at the same time. I haven't tried this in a while so I can't give you exact symptoms, but I suspect that if Project 2010 is involved the symptoms will be different than in a Project 2003/2007 scenario. I've noticed that Project 2010 gives different error messages for the exact same problem when it occurs in Project 2003 or 2007. -Anonymous
    The recommendation would be either not to use this feature if you have to have multiple versions of Project installed, or to use only a single version of Project. You may get unexpected negative behaviours if you are using shared resource pools, even when you are not running multiple versions, as I have found that they can get broken very easily. If you need these things then it is probably best to use Project Server, as it was created to solve many of these specific issues.
    Note: I would not even allow multiple people to access a network copy of a Project file, because of the way Windows locks files in write mode. This can cause write-locks that get so bad a server restart is required; I've seen users' files get write-locked to the point where the only resolution is to reboot the server.
    Changing the default version to run for an extension
    So what if you want to change the default association from Project 2007 to Project 2010?
    Figure: "Control Panel | Folder Options | Change the file associated with a file extension"
    Windows normally only lists the last version installed for a particular extension. You can select a specific version by selecting the program you want to change and clicking "Change program… | Browse…" and then selecting the .exe you want to use on the file system.
    Figure: You will need to select the exact version of "winproj.exe" that you want to run
    Conclusion
    Although it is possible to run multiple versions of Project on one system, in the main it does not really make sense.

    Read the article

  • More Collateral to drive Remarketer Activity

    - by martin.morganti(at)oracle.com
    Over the last few months we have added a range of marketing materials to the Remarketer Level pages, including an extensive set of Marketing Kits to support the Solution Kits previously available. As part of the continuing program, we have just added some additional training for Remarketers at no cost. In addition to the Oracle 1-Click Technology Products Guided Learning Path, which explains the program and how to position the products, we have added the Oracle Database 11g 1-Click Technology Sales Guided Learning Path. This learning path allows Remarketers to get access to more detailed training on Oracle Database 11g and so develop their understanding of the opportunities they could be benefiting from today by reselling the 1-Click products. This is in direct response to requests to make more training available to Remarketers, so take the opportunity to let your Remarketer customers and prospects know about this additional training and how they can jump-start a resale business with Oracle today, with No Fees, No Barriers and No Excuses.

    Read the article

  • Do you know that every user story should have an owner?

    - by Martin Hinshelwood
    When you are building complicated software and working with customers, it is always nice for them to have some idea of who to speak to about a particular story during a sprint. In order to achieve this, one of the Team takes responsibility for "looking after" a story. They will collect all of the "Done" emails and make sure that everyone follows the Done criteria identified by the team, as well as answering any Product Owner queries.
    Figure: Bad example, the Product Owner is not sure who to speak to.
    Figure: Good example, the Product Owner can now see who he should speak to and developers know where to send "Done" emails.
    Technorati Tags: SSW,Scrum,SSW Rules,Rules to better Scrum with TFS

    Read the article

  • Positioning a sprite in XNA: Use ClientBounds or BackBuffer?

    - by Martin Andersson
    I'm reading a book called "Learning XNA 4.0" written by Aaron Reed. Throughout most of the chapters, whenever he calculates the position of a sprite to use in his call to SpriteBatch.Draw, he uses Window.ClientBounds.Width and Window.ClientBounds.Height. But then all of a sudden, on page 108, he uses PresentationParameters.BackBufferWidth and PresentationParameters.BackBufferHeight instead. I think I understand what the back buffer and the client bounds are and the difference between those two (or perhaps not?). But I'm mighty confused about when I should use one or the other when it comes to positioning sprites. The author for the most part uses the client bounds, both for checking whether a moving sprite is off the screen and for finding a spawn point for new sprites. However, he seems to make two exceptions from this pattern in his book. The first time is when he wants some animated sprites to "move in" and cross the screen from one side to another (page 108 as mentioned). The second and last time is when he positions a texture to work as a button in the lower right corner of a Windows Phone 7 screen (page 379). Anyone got an idea? I shall provide some context if it is of any help. Here's how he usually calls SpriteBatch.Draw (a code example from where he positions a sprite in the middle of the screen [page 35]):
        spriteBatch.Draw(texture, new Vector2(
            (Window.ClientBounds.Width / 2) - (texture.Width / 2),
            (Window.ClientBounds.Height / 2) - (texture.Height / 2)),
            null, Color.White, 0, Vector2.Zero, 1, SpriteEffects.None, 0);
    And here is the first case of four possible in a switch statement that will set the position of soon-to-be-spawned moving sprites; this position will later be used in the SpriteBatch.Draw call (page 108):
        // Randomly choose which side of the screen to place the enemy,
        // then randomly create a position along that side of the screen
        // and randomly choose a speed for the enemy
        switch (((Game1)Game).rnd.Next(4))
        {
            case 0: // LEFT to RIGHT
                position = new Vector2(
                    -frameSize.X,
                    ((Game1)Game).rnd.Next(0,
                        Game.GraphicsDevice.PresentationParameters.BackBufferHeight
                        - frameSize.Y));
                speed = new Vector2(((Game1)Game).rnd.Next(
                    enemyMinSpeed, enemyMaxSpeed), 0);
                break;

    Read the article

  • Ubuntu One and iPad

    - by Martin G Miller
    I have installed the Ubuntu One app on my iPad 3 running iOS 6. It runs and asked for access to my Pictures folder on the iPad, which I granted. Now it just displays the Ubuntu One splash screen but does not seem to be doing anything else. I can't figure out how to get it to see my regular Ubuntu One account, which I have had for the last few years and use regularly between the various Ubuntu computers I have.

    Read the article
