Search Results

Search found 2733 results on 110 pages for 'asset tracking'.

Page 44 of 110

  • Week in Geek: U.S. and Israel Caught Operating as Partners in Cyberwarfare Scandal

    - by Asian Angel
    Our first edition of WIG for June is filled with news link goodness covering topics such as no more Start Menu hacks in the Windows 8 Release Preview, how Microsoft has upset advertisers with its IE10 ‘Do Not Track’ policy, the FTC’s investigation of Facebook’s purchase of Instagram, and more.

    Read the article

  • What is the way to understand someone else's giant uncommented spaghetti code? [closed]

    - by Anisha Kaul
    Possible Duplicate: I’ve inherited 200K lines of spaghetti code — what now? I have recently been handed a giant multithreaded program with no comments and have been asked to understand what it does, and then to improve it (if possible). Are there techniques that should be followed when we need to understand someone else's code? Or do we start straight away from the first function call and keep tracking the subsequent function calls? C++ (with multithreading) on Linux

    Read the article

  • Testing Workflows – Test-After

    - by Timothy Klenke
    Originally posted on: http://geekswithblogs.net/TimothyK/archive/2014/05/30/testing-workflows-ndash-test-after.aspx

    In this post I'm going to outline a few common methods that can be used to increase the coverage of your test suite. This won't be yet another post on why you should be doing testing; there are plenty of those types of posts already out there. Assuming you know you should be testing, then comes the problem of how you actually fit that into your day job. When the opportunity to automate testing comes, do you take it, or do you even recognize it?

    There are a lot of ways (workflows) to go about creating automated tests, just like there are many workflows to writing a program. When writing a program you can take a top-down approach, where you write the main skeleton of the algorithm and call out to dummy stub functions, or a bottom-up approach, where the low-level functionality is fully implemented before it is quickly wired together at the end. Both approaches are perfectly valid under certain contexts. Each approach you are skilled at applying is another tool in your tool belt, and the more vectors of attack you have on a problem, the better. So here is a short, incomplete list of some of the workflows that can be applied to increasing the amount of automation in your testing and the level of quality in general. Think of each workflow as an opportunity that is available for you to take.

    Test workflows basically fall into two categories: test-first or test-after. Test-first is the best approach. However, this post isn't about the one and only best approach. I want to focus more on the lesser-known, less ideal approaches that still provide an opportunity for adding tests. In this post I'll enumerate some test-after workflows. In my next post I'll cover test-first.

    Bug Reporting

    When someone calls you up or forwards you an email with a vague description of a bug, it's usually standard procedure to create or verify a reproduction plan for the bug via manual testing and log that in a bug tracking system. This can be problematic. Reproduction plans, when written down, might skip a step that seemed obvious to the tester at the time, or they might be missing some crucial environment setting. Instead of data entry into a bug tracking system, try opening up the test project and adding a failing unit test to prove the bug (see the sketch below). The test project guarantees that all aspects of the environment are set up properly and no steps are missing. The language in the test project is much more precise than the English that goes into a bug tracking system. This workflow can easily be extended to enhancement requests as well as bug reporting.

    Exploratory Testing

    Exploratory testing comes in when you aren't sure how the system will behave in a new scenario. The scenario wasn't planned for in the initial system requirements and there isn't an existing test for it. By definition the system behaviour is "undefined". So write a new unit test to define that behaviour. Add assertions to the test to confirm your assumptions. The new test becomes part of the living system specification that is kept up to date with the test suite.

    Examples

    This workflow is especially good when developing APIs. When your production API is finally done, then comes the job of writing documentation on how to consume it. Good documentation will also include code examples. Don't let these code examples merely exist in some accompanying manual; implement them in a test suite.
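    To make the Bug Reporting workflow concrete, here is a minimal sketch of a bug logged as a failing test. The original post's context is .NET; this sketch uses plain Python unittest, and the bug number, function, and values are invented for illustration:

        import unittest

        def compute_invoice_total(line_items):
            """Stand-in for the production code under test (hypothetical).
            The bug: each line item is rounded before summing."""
            return sum(round(x, 2) for x in line_items)

        class Bug1234RoundingTest(unittest.TestCase):
            """Repro for (hypothetical) bug #1234: invoice totals drift
            because rounding happens per line item, not once at the end."""

            def test_three_items_summing_to_one_dollar(self):
                # Fails on the buggy build and passes once fixed; the setup
                # and the steps live in code rather than in a ticket.
                self.assertAlmostEqual(
                    compute_invoice_total([0.333, 0.333, 0.334]), 1.00, places=2)

        if __name__ == "__main__":
            unittest.main()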
    Example tests and documentation do not have to be created after the production API is complete. It is best to write the example code (tests) as you go, just before the production code.

    Smoke Tests

    Every system has a typical use case. This represents the basic, core functionality of the system. If this fails after an upgrade, the end users will be hosed, and they will be scratching their heads as to how it could be possible that an update got released with this core functionality broken. The tests for this core functionality are referred to as "smoke tests". It is a good idea to have them automated and run with each build in order to avoid extreme embarrassment and angry customers.

    Coverage Analysis

    Code coverage analysis is a tool that reports how much of the production code base is exercised by the test suite. In Visual Studio this can be found under the Test main menu item. The tool will report a total number for the code coverage, which can be anywhere between 0 and 100%. Coverage analysis shouldn't be used strictly for numbers reporting. Companies shouldn't set minimum coverage targets that mandate that all projects must have at least 80% or 100% test coverage. These arbitrary requirements just invite gaming of the coverage analysis, which makes the numbers useless.

    The analysis tool will break down the coverage by the various classes and methods in projects. Instead of focusing on the total number, drill down into this view and see which classes have high or low coverage. If you are surprised by a low number on a class, this is an opportunity to add tests. When drilling through the classes there will generally be two types of reaction to a surprisingly low test coverage number. The first reaction type is a recognition that there is low-hanging fruit to be picked: there may be some classes or methods that aren't being tested which easily could be. The other reaction type is "OMG": this is where you find a critical piece of code that isn't under test. In both cases, go and add the missing tests.

    Test Refactoring

    The general theme of this post up to this point has been how to add more and more tests to a test suite. I'll step back from that a bit and remind you that every line of code is a liability. Each line of code has to be read and maintained, which costs money. This is true regardless of whether the code is production code or test code. Remember that the primary goal of the test suite is that it be easy to read, so that people can easily determine the specifications of the system. Make sure that adding more and more tests doesn't interfere with this primary goal.

    Perform code reviews on the test suite as often as on production code. Hold the test code up to the same high readability standards as the production code. If the tests are hard to read, then change them. Look to remove duplication: duplicate setup code shared between two or more test methods can be moved to a shared function (see the sketch below). Entire test methods can be removed if it is found that the scenario they test is covered by other tests. It's OK to delete a test that isn't pulling its own weight anymore.

    Remember to only start refactoring when all the tests are green. Don't refactor the tests and the production code at the same time. An automated test suite can be thought of as a double-entry bookkeeping system: the unchanging, passing production code serves as the tests for the test suite while refactoring the tests.
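    As a tiny illustration of that last point about duplication (a sketch with invented names, again in Python rather than .NET):

        import unittest

        class Account:
            """Minimal stand-in for a production class (hypothetical)."""
            def __init__(self, balance=0):
                self.balance = balance

            def deposit(self, amount):
                self.balance += amount

            def withdraw(self, amount):
                self.balance -= amount

        class AccountTests(unittest.TestCase):
            def setUp(self):
                # Shared arrange step, extracted from the individual tests
                # so each test body states only what is unique to it.
                self.account = Account(balance=100)

            def test_deposit_increases_balance(self):
                self.account.deposit(50)
                self.assertEqual(self.account.balance, 150)

            def test_withdraw_decreases_balance(self):
                self.account.withdraw(30)
                self.assertEqual(self.account.balance, 70)

        if __name__ == "__main__":
            unittest.main()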
    As with all refactoring, it is best to fit this into your regular work rather than asking for time later to get it done. Fit this into the standard red-green-refactor cycle. The refactor step applies not only to production code but also to the tests, though not at the same time. Perhaps the cycle should be called red-green-refactor production-refactor tests (not quite as catchy). That about covers most of the test-after workflows I can think of. In my next post I'll get into test-first workflows.

    Read the article

  • Developing a Cost Model for Cloud Applications

    - by BuckWoody
    Note: please pay attention to the date of this post. As much as I attempt to make the information below accurate, the nature of distributed computing means that components, units and pricing will change over time. The definitive costs for Microsoft Windows Azure and SQL Azure are located here, and are more accurate than anything you will see in this post: http://www.microsoft.com/windowsazure/offers/

    When writing software that is run on a Platform-as-a-Service (PaaS) offering like Windows Azure / SQL Azure, one of the questions you must answer is how much the system will cost. I will not discuss the comparisons between on-premise costs (which are nigh impossible to calculate accurately) and cloud costs, but instead focus on creating a general model for estimating costs for a given application. You should be aware that there are (at this writing) two billing mechanisms for Windows and SQL Azure: "pay-as-you-go" or consumption, and "subscription" or commitment. Conceptually, you can consider the former a pay-as-you-go cell phone plan, where you pay by the unit used (at a slightly higher rate), and the latter a standard cell phone plan, where you commit to a contract and thus pay lower rates. In this post I'll stick with the pay-as-you-go mechanism for simplicity, which should be the maximum cost you would pay. From there you may be able to get a lower cost with the other mechanism. In any case, the model you create should hold.

    Developing a good cost model is essential. As a developer or architect, you'll most certainly be asked how much something will cost, and you need a reliable way to estimate that. Businesses and organizations are used to paying for servers, software licenses, and other infrastructure as an up-front cost, and for power, the people to manage the systems, and so on as an ongoing (and sometimes not factored) cost. When presented with a new paradigm like distributed computing, they may not understand the true cost/value proposition, and that's where the architect and developer can guide the conversation to a choice based on the features of the application versus the true costs.

    The two big buckets of use types for these applications are customer-based and steady-state. In the customer-based use type, each successful use of the program results in a sale or income for your organization. Perhaps you've written an application that provides the spot price of foo, and your customer pays for the use of that application. In that case, once you've estimated your cost for a successful traversal of the application, you can build that into the price you charge the user. It's a standard restaurant model, where the price of the meal is determined by the cost of making it, plus any profit you can make. In the second use type, the application will be used by a more-or-less constant number of processes or users, and no direct revenue is attached to the system. A typical example is a customer-tracking system used by the employees within your company. In this case, the cost model is often created "in reverse", meaning that you pilot the application, monitor the use (and costs), and that cost is held steady. This is where the comparison with an on-premise system becomes necessary, even though it is more difficult to estimate those on-premise true costs. For instance, do you know exactly how much the air conditioning costs, or the cost of having a team of system administrators?
    This may sound trivial, but that, along with the insurance for the building, the wiring, and every other part of the system, is in fact a cost to the business.

    There are three primary methods that I've been successful with in estimating the cost. None are perfect; all are demand-driven. The general process is to lay out a matrix of components, units, and cost per unit, and then multiply that by the usage of the system, based on which components you use in the program. That sounds a bit simplistic, but using those metrics in a calculation becomes more detailed. In all of the methods that follow, you need to know your application. The components for a PaaS include computing instances, storage, transactions, bandwidth and, in the case of SQL Azure, database size. In most cases, architects start with the first model and progress through the other methods to gain accuracy.

    Simple Estimation

    The simplest way to calculate costs is to architect the application (even in UML or on paper, no coding involved) and then estimate which of the components you'll use and how much of each will be used. Microsoft provides two tools to do this. One is a simple slider application located here: http://www.microsoft.com/windowsazure/pricing-calculator/  The other is a tool you download to create a "Return on Investment" (ROI) spreadsheet, which has the advantage of leading you through various questions to estimate what you plan to use, located here: https://roianalyst.alinean.com/msft/AutoLogin.do?d=176318219048082115  You can also just create a spreadsheet yourself, with columns such as: Program Element, Azure Component, Unit of Measure, Cost Per Unit, Estimated Use of Component, Total Cost Per Component, Cumulative Cost (a code sketch of this matrix follows below). Of course, the consideration with this model is that it is difficult to predict a system that is not running or hasn't even been developed. Which brings us to the next model type.

    Measure and Project

    A more accurate model is to actually write the code for the application, using the Software Development Kit (SDK), which can run entirely disconnected from Azure. The code should be instrumented to estimate the use of the application components, logging to a local file on the development system. A series of unit and integration tests should be run, which will create load on the test system. You can use standard development concepts to track this usage, and even use Windows Performance Monitor counters. The best place to start with this method is to use the Windows Azure Diagnostics subsystem in your code, which you can read more about here: http://blogs.msdn.com/b/sumitm/archive/2009/11/18/introducing-windows-azure-diagnostics.aspx This set of APIs greatly simplifies tracking the application, and in fact you can use this information for more than just a cost model. After you have the tracking logs, you can plug the numbers into any of the tools above, which should give a representative cost or, in some cases, a unit cost. The consideration with this model is that the SDK fabric is not a one-to-one comparison with performance on the actual Windows Azure fabric. Those differences are usually small, but they do need to be considered. Also, you may not be able to accurately predict the load on the system, which might lead to an architectural change, which changes the model. This leads us to the next, most accurate method for a cost model.
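    As a toy illustration of that components/units/cost-per-unit matrix (a Python sketch; every component name and price below is a placeholder rather than a real Azure rate; the pricing links above have the definitive numbers):

        # Demand-driven cost model: unit price x estimated usage per component.
        # Prices are illustrative placeholders, NOT actual Azure rates.
        PRICE_PER_UNIT = {
            "compute_hours": 0.12,      # per instance-hour (placeholder)
            "storage_gb_month": 0.15,   # per GB-month (placeholder)
            "transactions_10k": 0.01,   # per 10,000 transactions (placeholder)
            "bandwidth_gb": 0.10,       # per GB transferred (placeholder)
            "sql_db_gb_month": 9.99,    # per database GB-month (placeholder)
        }

        def monthly_cost(estimated_usage):
            """Multiply each component's estimated usage by its unit price."""
            return sum(PRICE_PER_UNIT[c] * units
                       for c, units in estimated_usage.items())

        # Example: a small two-instance app, plus a derived "per click" unit
        # cost for the customer-based use type described above.
        usage = {"compute_hours": 2 * 730, "storage_gb_month": 50,
                 "transactions_10k": 120, "bandwidth_gb": 40}
        total = monthly_cost(usage)
        clicks = 200_000  # estimated successful traversals per month
        print(f"monthly: ${total:,.2f}  per click: ${total / clicks:.5f}")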
    Sample and Estimate

    Once the application is deployed, you will get a bill each month from Microsoft for your Azure usage. The bill is quite detailed, and you can export the data from it to do analysis and, using standard statistical and other predictive math such as regression, project what the costs will be in the future. I normally advise that the architect also extrapolate a unit cost from those metrics. This is the information that should be reported back to the executives that pay the bills: the past cost, future projected costs, and unit cost "per click" or "per transaction", as your case warrants. The challenge here is in the model itself: statistical methods are not foolproof, and the size of the sample is key (in this case I recommend the entire population, not a smaller sample).

    References and Tools

    Articles:
    http://blogs.msdn.com/b/patrick_butler_monterde/archive/2010/02/10/windows-azure-billing-overview.aspx
    http://technet.microsoft.com/en-us/magazine/gg213848.aspx
    http://blog.codingoutloud.com/2011/06/05/azure-faq-how-much-will-it-cost-me-to-run-my-application-on-windows-azure/
    http://blogs.msdn.com/b/johnalioto/archive/2010/08/25/10054193.aspx
    http://geekswithblogs.net/iupdateable/archive/2010/02/08/qampa-how-can-i-calculate-the-tco-and-roi-when.aspx

    Other Tools:
    http://cloud-assessment.com/
    http://communities.quest.com/community/cloud_tools

    Read the article

  • The Best How-To Geek Articles for June 2012

    - by Asian Angel
    This past month we covered topics such as why you only have to wipe a disk once to erase it, what RSS is and how you can benefit from using it, how websites are tracking you online, and more. Join us as we look back at the best articles for June.

    Read the article

  • At The ATM: The Challenge of Tiny Buttons [Video]

    - by Jason Fitzpatrick
    If you’ve ever mashed the wrong buttons on an electronic device because your fingers are just too big, you’ll appreciate the situation this cheerful but massive-fingered fellow gets into. Courtesy of Rikke Asbjoern, who created it while interning at Cartoon Network, the video is sure to hit home with those of us who fumble keypads and buttons wherever we go. [via Neatorama]

    Read the article

  • Google I/O 2010 - Google Analytics APIs: End to end

    Google APIs 201. Speaker: Nick Mihailovski. Google Analytics measures the performance of your website. Learn advanced techniques on how to use our tracking, processing and data export APIs as we walk you through an example of creating a most-visited-pages web element for your website. For all I/O 2010 sessions, please go to code.google.com (Video: 55:42)

    Read the article

  • Friday Fun: Mad Virus

    - by Asian Angel
    In this week’s game, infection of all cell-kind is the ultimate goal as you lead your virus army to victory. Will you succeed in infecting everything in your path, or will you be stopped just short of total domination?

    Read the article

  • Exploring your database schema with SQL

    In the second part of Phil's series of articles on finding stuff (such as objects, scripts, entities, metadata) in SQL Server, he offers some scripts that should be handy for the developer faced with tracking down problem areas and potential weaknesses in a database.

    Read the article

  • What resources are there for facial recognition?

    - by Zintinio
    I'm interested in learning the theory behind facial recognition software so that I can hopefully implement it in the future. Not just face tracking, but being able to recognize individuals. What papers, books, libraries, or source code are available so that I can learn more about the subject? I have found libface, which seems to use eigenfaces for recognition. If there are any practitioners out there, please share any information that you can.
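    Since libface's eigenfaces come up: the core of that approach is a principal-component analysis of flattened face images, followed by nearest-neighbor matching in the reduced space. A minimal numpy sketch (the image data here is a random stand-in, not a real face set):

        import numpy as np

        rng = np.random.default_rng(0)

        # Stand-in data: 20 "face images" of 32x32 pixels, flattened to rows.
        # In practice these would be aligned grayscale face crops.
        faces = rng.random((20, 32 * 32))
        labels = np.repeat(np.arange(10), 2)   # two images per person

        # Eigenfaces are the principal components of the mean-centered images.
        mean_face = faces.mean(axis=0)
        centered = faces - mean_face
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        eigenfaces = vt[:15]                   # keep the top 15 components

        def project(image):
            """Coordinates of an image in eigenface space."""
            return eigenfaces @ (image - mean_face)

        gallery = centered @ eigenfaces.T      # projections of all known faces

        def recognize(image):
            """Label of the nearest known face in eigenface space."""
            distances = np.linalg.norm(gallery - project(image), axis=1)
            return int(labels[np.argmin(distances)])

        # A slightly noisy copy of image 7 should match person 7's label.
        probe = faces[7] + 0.01 * rng.standard_normal(32 * 32)
        assert recognize(probe) == labels[7]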

    Read the article

  • Resources for using TFS for Agile Project Development?

    - by Amy P
    Our company just installed TFS for us to start using for project development processes and source control. They want us to start using it to manage our projects as well. We have a small team, no current bug or task tracking software, and only 2 of the 3 developers have experience with any formal methodology. What books, websites, and/or other information can you recommend for us to use to get started?

    Read the article

  • Sustainability Activities at Oracle OpenWorld

    - by Evelyn Neumayr
    Close to 50,000 participants will come to San Francisco for the Oracle OpenWorld and JavaOne events, held September 30-October 4, 2012 at Moscone Center. Oracle is very conscious of the impact that these events have on the environment and, as part of its ongoing commitment to sustainability, has developed a sustainable event program, now in its fifth year, that aims to maximize positive benefits and minimize negative impacts in a variety of ways. Click here for more details.

    At the Oracle OpenWorld conference there will be many sessions, and even a hands-on lab, that discuss the sustainability solutions Oracle provides for our customers. I wanted to highlight a few of those sessions here, so if you will be at Oracle OpenWorld you can make sure to attend them. One of the most compelling sessions promises to be our “Eco-Enterprise Innovation Awards and the Business Case for Sustainability” session on Wednesday, October 3 from 10:15 a.m. to 11:15 a.m. in Moscone West 3005. Oracle Chairman of the Board Jeff Henley, Chief Sustainability Officer Jon Chorley, and other Oracle executives will honor select customers with Oracle's Eco-Enterprise Innovation award. This award recognizes customers, and their respective partners, who rely on Oracle products to support their green business practices in order to reduce their environmental impact while improving business efficiencies and reducing costs.

    Another interesting session is “Tracking, Reporting, and Reducing Environmental Impact with Oracle Solutions”, which occurs on Monday, October 1 from 4:45 p.m. to 5:45 p.m. in Moscone West Room 2022. This session covers Oracle’s overall sustainability strategy as well as Oracle Environmental Accounting and Reporting (EA&R), which leverages Oracle ERP and BI solutions for accurate, efficient tracking of energy, emissions, and other environmental data. If you want more details, make sure to visit the hands-on lab titled “Oracle Environmental Accounting & Reporting for Integrated Sustainability Reporting”. This hour-long lab will take place on Tuesday, October 2 at 5:00 p.m. in the Marriott Marquis Hotel, Nob Hill CD. Here you can learn how to use Oracle EA&R to collect sustainability-related data in an efficient and reliable manner as part of existing business processes in Oracle E-Business Suite or JD Edwards EnterpriseOne. Register for this hands-on lab here.

    Read the article

  • A Look at Exceptions in .NET Applications

    Memory dumps are a wonderful way of finding out what caused an exception in a managed .NET application, particularly if it is happening in a production application. It is when tracking exceptions in applications where you can't use Visual Studio that the techniques of using cdb and sos.dll come into their own. They may not be skills that you need to use regularly, but at some point they will be invaluable. Edward supplies sample memory dumps and gives you a simple introduction.

    Read the article

  • Easy Access a Cornerstone to Fusion Applications HCM User Experience

    - by Jay Richey, HCM Product Marketing
    With Fusion Applications, Oracle fundamentally changes a fragmented, frustrating work situation. Users of Human Capital Management (HCM) software often must bounce around between applications, searching diligently for the right information about employees. They may spend a lot of their time tracking down the data they need to complete a task. Fusion offers a completely different user experience. Read more...

    Read the article

  • In an Entity-Component-System Engine, How do I deal with groups of dependent entities?

    - by John Daniels
    After going over a few game design patterns, I have settled on Entity-Component-System (ES System) for my game engine. I've been reading articles (mainly T=Machine) and reviewing some source code, and I think I have enough to get started. There is just one basic idea I am struggling with: how do I deal with groups of entities that are dependent on each other?

    Let me use an example. Assume I am making a standard overhead shooter (think Jamestown) and I want to construct a "boss entity" with multiple distinct but connected parts. The breakdown might look something like this:

    Ship body: Movement, Rendering
    Cannon: Position (locked relative to the ship body), Tracking/Firing at hero, Taking damage until disabled
    Core: Position (locked relative to the ship body), Tracking/Firing at hero, Taking damage until disabled, Disabling (er... destroying) all other entities in the ship group

    My goal is something that can be identified (and manipulated) as a distinct game element without having to rewrite subsystems from the ground up every time I want to build a new aggregate element. So how do I implement this kind of design in an ES System? Do I implement some kind of parent-child entity relationship (entities can have children)? This seems to contradict the methodology that entities are just empty containers, and makes it feel more OOP. Do I implement them as separate entities with some kind of connecting component (BossComponent) and a related system (BossSubSystem)? I can't help but think that this will be hard to implement, since how components communicate seems to be a big bear trap. Do I implement them as one entity with a collection of components (ShipComponent, CannonComponents, CoreComponent)? This one seems to veer away from the ES System intent (components here seem too much like heavyweight entities), but I'm new to this, so I figured I would put it out there. Or do I implement them as something else I haven't mentioned? (A sketch of the first option appears below.)

    I know that this can be implemented very easily in OOP, but my choosing ES over OOP is a choice I will stick with. If I need to break with pure ES theory to implement this design I will (it's not like I haven't had to compromise pure design before), but I would prefer to do that for performance reasons rather than start with a bad design. For extra credit, think of the same design, but where each of the "boss entities" is actually connected to a larger "BigBoss entity" made of a main body, a main core, and 3 "boss entities". This would let me see a solution for at least 3 levels (grandparent-parent-child), which should be more than enough for me. Links to articles or example code would be appreciated. Thanks for your time.
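    A minimal sketch of the first option, with the parent-child link expressed as just another component so entities stay plain ids (a toy Python ECS; all names here, such as ChildOf and destruction_system, are invented for illustration):

        import itertools

        # Toy ECS: an entity is just an id; components live in per-type stores.
        _ids = itertools.count()
        stores: dict[type, dict[int, object]] = {}

        def create_entity() -> int:
            return next(_ids)

        def add(entity: int, component: object) -> None:
            stores.setdefault(type(component), {})[entity] = component

        class Health:
            def __init__(self, hp: int):
                self.hp = hp

        class ChildOf:
            """Parent-child link kept as ordinary data, not an OOP hierarchy."""
            def __init__(self, parent: int):
                self.parent = parent

        def destruction_system() -> None:
            """Destroy dead entities and cascade to their children.

            Because ChildOf is plain data, grandparent-parent-child chains
            (the BigBoss case) are handled by the same loop."""
            dead = {e for e, h in stores.get(Health, {}).items() if h.hp <= 0}
            while True:
                orphans = {e for e, c in stores.get(ChildOf, {}).items()
                           if c.parent in dead and e not in dead}
                if not orphans:
                    break
                dead |= orphans
            for store in stores.values():
                for e in dead:
                    store.pop(e, None)

        # Usage: destroying the ship body takes the cannon down with it.
        body = create_entity(); add(body, Health(100))
        cannon = create_entity(); add(cannon, Health(20)); add(cannon, ChildOf(body))
        stores[Health][body].hp = 0
        destruction_system()
        assert cannon not in stores.get(Health, {})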

    Read the article

  • SQL Server 2008 R2 Cumulative Update 8 now available

    - by Greg Low
    CU8 is now available for SQL Server 2008 R2. You will find it here: http://support.microsoft.com/kb/2534352/en-us It includes the following fixes:

    VSTS bug 726734, KB article 2522893 (http://support.microsoft.com/kb/2522893/): FIX: A backup operation on a SQL Server 2008 or SQL Server 2008 R2 database fails if you enable change tracking on this database.
    VSTS bug 730658, KB article 2525665 (http://support.microsoft.com/kb/2525665/): FIX: SQL Server 2008 BIDS stops responding when you stop debugging...(read more)

    Read the article

  • How easy is it to alter a browser fingerprint?

    - by JFig
    I am researching this question for a possible paper. Given the exploitation of user identities for risk management and market tracking, how easy is it to alter a browser enough to throw off fingerprinting techniques? My current sources are the EFF Panopticlick project (https://www.eff.org/deeplinks/2010/01/primer-information-theory-and-privacy) and Peter Eckersley's follow-up presentation at Def Con 18 (http://privacy-pc.com/articles/how-safe-is-your-browser-peter-ackersley-on-personally-identifiable-information-basics.html).

    Read the article

  • Free and Innovative Website Analytics Tools

    Many people think of free web analytics in terms of Google Analytics; this is understandable, given what a big name Google is. Unknown to many, there are numerous web analytics tools out there that are free and very innovative. These web statistics tools measure everything from real-time visitor tracking to search engine traffic and user behavior.

    Read the article

  • Best method to do A/B testing across two subdomains

    - by Lior
    I want to do an A/B test of an entire site for a new design and UX, with only slight changes in content (it's a big brand site that has good Google rankings for many generic keywords). My idea of implementation is doing a 302 redirect to the new version (placing it on the www1 subdomain) and allowing only user agents of known browsers to pass (see the sketch below). The test version will have a disallow-all robots.txt. Will Google treat this favorably, or do I have to use Google Website Optimizer (which will give me tracking headaches)?
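    A sketch of the user-agent gating idea, using only the Python standard library (the browser list and test hostname are placeholders; this illustrates the mechanism, not a recommendation on how Google will treat it):

        from http.server import BaseHTTPRequestHandler, HTTPServer

        KNOWN_BROWSERS = ("Chrome", "Firefox", "Safari", "MSIE", "Opera")
        TEST_HOST = "http://www1.example.com"  # new-design variant under test

        class ABRedirectHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                ua = self.headers.get("User-Agent", "")
                # 302 known browsers to the test variant; crawlers and
                # unknown agents keep receiving the original page.
                if any(b in ua for b in KNOWN_BROWSERS) and "bot" not in ua.lower():
                    self.send_response(302)
                    self.send_header("Location", TEST_HOST + self.path)
                    self.end_headers()
                else:
                    self.send_response(200)
                    self.send_header("Content-Type", "text/html")
                    self.end_headers()
                    self.wfile.write(b"<html><body>original design</body></html>")

        if __name__ == "__main__":
            HTTPServer(("localhost", 8000), ABRedirectHandler).serve_forever()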

    Read the article

  • From the Tips Box: Life after Babel Fish, Hidden Features in iOS apps, and Finding Clean Beaches with a Smartphone

    - by Jason Fitzpatrick
    Once a week we round up some of the great reader tips that come pouring in and share them with everyone. This week we’re looking at Bing’s absorption of Babel Fish, hidden features in iOS apps, and how to find a clean beach with your smartphone.

    Read the article

  • JDK bug migration milestone: JIRA now the system of record

    - by darcy
    I'm pleased to announce that the OpenJDK bug database migration project has reached a significant milestone: the JDK has switched from the legacy Sun "bugtraq" system to a new internal JIRA instance as the system of record for our bug tracking. This completes the initial phase of the previously described plan of getting OpenJDK onto an externally visible and writable bug tracker. The identities contained in the current system include recognized OpenJDK contributors.

    The bug migration effort to date has been sizable in multiple dimensions. There are around 140,000 distinct issues imported into the JDK project of the JIRA instance, nearly 165,000 if backport issues to track multiple-release information are included. Separately, the Code Tools OpenJDK project has its own JIRA project populated with several thousand existing bugs. Once the OpenJDK JIRA instance is externalized, approved OpenJDK projects will be able to request the creation of a JIRA project for issue tracking.

    There are many differences between the schema used to model bugs in the legacy bug system and the schema for the new JIRA projects. We've favored simplifications to the existing system where possible and, after much discussion, we've settled on five main states for the OpenJDK JIRA projects: New, Open, In progress, Resolved, and Closed. The Open and In-progress states can have a substate Understanding field set to track whether the issue has its "Cause Known" or "Fix understood". In the Closed state, a Verification field can indicate whether a fix has been verified, unverified, or has failed.

    At the moment, there will be very little externally visible difference between JIRA for OpenJDK and the legacy system it replaces. One difference is that bug numbers for newly filed issues in the JIRA JDK project will be 8000000 and above. If you are working with JDK Hg repositories, update any local copies of jcheck to the latest version, which recognizes this expanded bug range (see the sketch below). (The bug numbers of existing issues have been preserved on the import into JIRA.) Relatively soon, we plan for the pages published on bugs.sun.com to be generated from information in JIRA rather than in the legacy system. When this occurs, there will be some differences in the page display, and the terminology used will be revised to reflect JIRA usage, such as referring to the "component/subcomponent" of an issue rather than its "category". The exact timing of this transition will be announced when it is known. We don't currently have a firm timeline for externalization of the JIRA system. Updates will be provided as they become available. However, that is unlikely to happen before JavaOne next week!
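    As a toy illustration of that expanded bug-number range (a sketch only; the regex and comment format are assumptions modeled on JDK changeset conventions, not jcheck's actual implementation):

        import re

        # JDK changeset comments begin with "<bugid>: <synopsis>" lines.
        BUG_LINE = re.compile(r"^(\d{7}): \S.*$")

        def valid_bug_id(bug_id: int) -> bool:
            """Accept legacy 7-digit bugtraq ids and the new JIRA range (8000000+)."""
            return 1000000 <= bug_id <= 9999999

        def check_comment(first_line: str) -> bool:
            m = BUG_LINE.match(first_line)
            return bool(m) and valid_bug_id(int(m.group(1)))

        print(check_comment("8001234: fix typo in javadoc"))   # True: new JIRA range
        print(check_comment("6543210: legacy bugtraq issue"))  # True: preserved id
        print(check_comment("123: not a JDK bug id"))          # False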

    Read the article

  • Google I/O 2012 - Protecting your User Experience While Integrating 3rd-party Code

    Speaker: Patrick Meenan. The amount of 3rd-party content included on websites is exploding (social sharing buttons, user tracking, advertising, code libraries, etc.). Learn tips and techniques for how best to integrate it into your sites without risking a slower user experience or even your sites becoming unavailable. For all I/O 2012 sessions, go to developers.google.com (Video: 48:04)

    Read the article

  • The best Webmarketing courses and tutorials: new update with twelve new tutorials

    Hello everyone, A major update has been made to the Cours Webmarketing page. Twelve tutorials have been added (25 in total) and two new categories have been created: E-mailing and Eye tracking. Thanks to all the contributors to this page, who keep enriching developpez's resources. Feel free to propose your own contributions or to post…

    Read the article
