Search Results

Search found 12083 results on 484 pages for 'general purpose'.


  • XNA 2D line-of-sight check

    - by bionicOnion
    I'm working on a top-down shooter in XNA, and I need to implement line-of-sight checking. I've come up with a solution that seems to work, but I get the nagging feeling that it won't be efficient enough to do every frame for multiple calls (the game already hiccups slightly at about 10 calls per frame). The code is below, but my general plan was to create a series of rectangles with a width and height of zero to act as points along the sight line, and then check to see if any of these rectangles intersects a ClutterObject (an interface I defined for things like walls or other obstacles) after first screening for any that can't possibly be in the line of sight (i.e. behind the viewer) or are too far away (a concession I made for efficiency). public static bool LOSCheck(Vector2 pos1, Vector2 pos2) { Vector2 currentPos = pos1; Vector2 perMove = (pos2 - pos1); perMove.Normalize(); HashSet<ClutterObject> clutter = new HashSet<ClutterObject>(); foreach (Room r in map.GetRooms()) { if (r != null) { foreach (ClutterObject c in r.GetClutter()) { if (c != null &&!(c.GetRectangle().X * perMove.X < 0) && !(c.GetRectangle().Y * perMove.Y < 0)) { Vector2 cVector = new Vector2(c.GetRectangle().X, c.GetRectangle().Y); if ((cVector - pos1).Length() < 1500) clutter.Add(c); } } } } while (currentPos != pos2 && ((currentPos - pos1).Length() < 1500)) { Rectangle position = new Rectangle((int)currentPos.X, (int)currentPos.Y, 0, 0); foreach (ClutterObject c in clutter) { if (position.Intersects(c.GetRectangle())) return false; } currentPos += perMove; } return true; } I'm sure that there's a better way to do this (or at least a way to make this method more efficient), but I'm not too used to XNA yet, so I figured it couldn't hurt to bring it here. At the very least, is there an efficient to determine which objects may be in front of the viewer with greater precision than the rather broad 90 degree window I've given myself?
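
    A common alternative to stepping along the ray with zero-size rectangles is to test the sight line directly against each obstacle's rectangle with a segment-vs-AABB (slab) test, which removes the per-pixel loop entirely. The sketch below only illustrates that idea against plain XNA Rectangle values; it is not based on the poster's Room/ClutterObject types, and the broad-phase distance culling is left out.

    ```csharp
    using System;
    using System.Collections.Generic;
    using Microsoft.Xna.Framework;

    public static class LineOfSight
    {
        // True if the segment from a to b touches none of the blocking rectangles.
        public static bool HasLineOfSight(Vector2 a, Vector2 b, IEnumerable<Rectangle> blockers)
        {
            foreach (Rectangle r in blockers)
            {
                if (SegmentIntersectsRect(a, b, r))
                    return false;
            }
            return true;
        }

        // Slab test: clip the segment's parameter range [0,1] against the X and Y extents.
        private static bool SegmentIntersectsRect(Vector2 a, Vector2 b, Rectangle r)
        {
            float tMin = 0f, tMax = 1f;
            Vector2 d = b - a;

            return ClipAxis(a.X, d.X, r.Left, r.Right, ref tMin, ref tMax)
                && ClipAxis(a.Y, d.Y, r.Top, r.Bottom, ref tMin, ref tMax);
        }

        private static bool ClipAxis(float start, float delta, float min, float max,
                                     ref float tMin, ref float tMax)
        {
            if (Math.Abs(delta) < 1e-6f)
                return start >= min && start <= max;   // segment parallel to this axis

            float t1 = (min - start) / delta;
            float t2 = (max - start) / delta;
            if (t1 > t2) { float tmp = t1; t1 = t2; t2 = tmp; }

            tMin = Math.Max(tMin, t1);
            tMax = Math.Min(tMax, t2);
            return tMin <= tMax;
        }
    }
    ```

    With this approach each line-of-sight query costs one cheap test per nearby obstacle, so the 1500-unit distance cull is still worth keeping but the inner while loop disappears.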

    Read the article

  • Brendan Gregg's "Systems Performance: Enterprise and the Cloud"

    - by user12608550
    Long ago, the prerequisite UNIX performance book was Adrian Cockcroft's 1994 classic, Sun Performance and Tuning: Sparc & Solaris, later updated in 1998 as Java and the Internet. As Solaris evolved to include the invaluable DTrace observability features, new essential performance references have been published, such as Solaris Performance and Tools: DTrace and MDB Techniques for Solaris 10 and OpenSolaris (2006)  by McDougal, Mauro, and Gregg, and DTrace: Dynamic Tracing in Oracle Solaris, Mac OS X and FreeBSD (2011), also by Mauro and Gregg. Much has occurred in Solaris Land since those books appeared, notably Oracle's acquisition of Sun Microsystems in 2010 and the demise of the OpenSolaris community. But operating system technologies have continued to improve markedly in recent years, driven by stunning advances in multicore processor architecture, virtualization, and the massive scalability requirements of cloud computing. A new performance reference was needed, and I eagerly waited for something that thoroughly covered modern, distributed computing performance issues from the ground up. Well, there's a new classic now, authored yet again by Brendan Gregg, former Solaris kernel engineer at Sun and now Lead Performance Engineer at Joyent. Systems Performance: Enterprise and the Cloud is a modern, very comprehensive guide to general system performance principles and practices, as well as a highly detailed reference for specific UNIX and Linux observability tools used to examine and diagnose operating system behaviour.  It provides thorough definitions of terms, explains performance diagnostic Best Practices and "Worst Practices" (called "anti-methods"), and covers key observability tools including DTrace, SystemTap, and all the traditional UNIX utilities like vmstat, ps, iostat, and many others. The book focuses on operating system performance principles and expands on these with respect to Linux (Ubuntu, Fedora, and CentOS are cited), and to Solaris and its derivatives [1]; it is not directed at any one OS so it is extremely useful as a broad performance reference. The author goes beyond the intricacies of performance analysis and shows how to interpret and visualize statistical information gathered from the observability tools.  It's often difficult to extract understanding from voluminous rows of text output, and techniques are provided to assist with summarizing, visualizing, and interpreting the performance data. Gregg includes myriad useful references from the system performance literature, including a "Who's Who" of contributors to this great body of diagnostic tools and methods. This outstanding book should be required reading for UNIX and Linux system administrators as well as anyone charged with diagnosing OS performance issues.  Moreover, the book can easily serve as a textbook for a graduate level course in operating systems [2]. [1] Solaris 11, of course, and Joyent's SmartOS (developed from OpenSolaris) [2] Gregg has taught system performance seminars for many years; I have also taught such courses...this book would be perfect for the OS component of an advanced CS curriculum.

    Read the article

  • FREE SPECIALIZATION EXAMS 2013

    - by agallego
    Get your Oracle Specialist Certificate for FREE, 27 and 28 November 2013. You can now take the implementation exams for the Oracle specializations and become a specialist. You may take any of the implementation exams from the following list: Oracle Fusion Customer Relationship Management 11g Sales Certified Implementation Specialist (1Z0-456) Oracle Fusion Financials 11g General Ledger Implementation Specialist (1Z0-508) Oracle Fusion Financials 11g Accounts Payable Implementation Specialist (1Z0-507) Oracle Fusion Financials 11g Accounts Receivable Implementation Specialist (1Z0-506) Oracle Fusion Human Capital Management 11g Human Resources Certified Implementation Specialist (1Z0-584) Oracle Fusion Human Capital Management 11g Talent Management Certified Implementation Specialist (1Z0-585) Oracle ATG Web Commerce 10 Implementation Developer Certified Implementation Specialist (1Z0-510) Oracle Hyperion Planning 11 Essentials (1Z0-533) Oracle Business Intelligence Applications 7.9.6 for ERP Essentials (1Z0-525) Oracle Hyperion Financial Management 11 Essentials (1Z0-532) Oracle Business Intelligence Applications 7.9.6 for CRM Essentials (1Z0-524) Oracle Hyperion Data Relationship Management 11.1.2 Essentials (1Z1-588) Oracle Business Intelligence Foundation Suite 11g Essentials (1Z0-591) Oracle Essbase 11 Essentials (1Z0-531) SPARC T4-Based Server Installation Essentials (1Z0-597) Oracle Solaris 11 Installation and Configuration Essentials (1Z0-580) Sun ZFS Storage Appliance Certified Implementation Specialist Exalogic Elastic Cloud X2-2 Certified Implementation Specialist Oracle Exadata 11g Essentials (1Z0-536) Oracle VM 3 for x86 Essentials (1Z0-590) Oracle Linux Fundamentals (1Z0-402) Oracle Linux System Administration (1Z0-403) Oracle Service-Oriented Architecture Certified Implementation Specialist (1Z0-451) Oracle Unified Business Process Management Suite 11g Certified Implementation Specialist (1Z0-560) Oracle WebLogic Server 12c Certified Implementation Specialist (1Z0-599) Oracle WebCenter Portal 11g Essentials (1Z0-541) Oracle WebCenter Content 11g Essentials (1Z0-542) Oracle Application Development Framework 11g Certified Implementation Specialist Oracle Identity Analytics 11g Certified Implementation Specialist (1Z0-545) Oracle Enterprise Manager 12c Essentials (1Z0-457) You can find information about each exam at its link. To prepare for an exam, follow the Study Guide you will find on that exam's page. Requirements: be an Oracle Gold, Platinum or Diamond Partner and have an Oracle Pearson Vue account. When?: 27 and 28 November at 9:00, 12:00 and 16:00. Where?: Core Networks, C.E. Parque Norte, Edificio Olmo, Planta 1, Serrano Galvache 56 | 28033, Madrid. To register: create an account at Pearson Vue (www.pearsonvue.com/oracle), then register here. For more information about the Specialization program, click here. Don't miss this opportunity and register today. For any questions, contact [email protected]. Ana María Gallego, Partner Enablement Manager, Spain and Portugal

    Read the article

  • How to Format a USB Drive in Ubuntu Using GParted

    - by Trevor Bekolay
    If a USB hard drive or flash drive is not properly formatted, then it will not show up in the Ubuntu Places menu, making it hard to interact with. We'll show you how to format a USB drive using the tool GParted. Note: Formatting a USB drive will destroy any data currently stored on it. If you think that your USB drive is already properly formatted, but Ubuntu just isn't picking it up, try unplugging it and plugging it back in to a different USB slot, or restarting your machine with the device plugged in on start-up. Open a terminal by clicking on Applications in the top-left of the screen, then Accessories > Terminal. GParted should be installed by default, but we'll make sure it's installed by entering the following command in the terminal: sudo apt-get install gparted To open GParted, enter the following command in the terminal: sudo gparted Find your USB drive in the drop-down box at the top right of the GParted window. The drive should be unallocated – if it has a valid partition on it, then you may be looking at the wrong drive. Note: Make sure you're on the correct drive, as making changes on the wrong hard drive with GParted can delete all data on a hard drive! Assuming you're on the right drive, right-click on the unallocated grey block and click New. In the window that pops up, change the File System to fat32 for USB Flash Drives, NTFS for USB Hard Drives that will be used in Windows, or ext3/ext4 for USB Hard Drives that will be used exclusively in Linux. Add a label if you'd like, and then click Add. Click the green checkmark and then the Apply button to apply the changes. GParted will now format your drive. If you're formatting a large USB Hard Drive, this can take some time. Once the process is done, you can close GParted, and the drive will now show up in the Places menu. Clicking on the drive will mount it and open it in a File Browser window. It will also add a shortcut to the drive on the Desktop by default. Your USB drive is now ready to store your files!

    Read the article

  • The importance of Unit Testing in BI

    - by Davide Mauri
    One of the main steps in the process we internally use to develop a BI solution is the implementation of Unit Tests of your BI data. As you may already know, I've created a simple (for now) tool that leverages NUnit to allow us to quickly create Unit Tests without having to resort to using Visual Studio Database Professional: http://queryunit.codeplex.com/ Once you have a tool like this one, you can also start to make sure that your BI solution (DWH and CUBE) is not only structurally sound (I mean, the cube or the report gets processed correctly), but you can also check that the logical integrity of your business rules is enforced. For example, let's say that the customer tells you that they will never create an invoice for a specific product-line in 2010 since that product-line is dismissed and will never be sold again. OK, we know that in theory this is true, but the effectiveness of this business rule depends on people not making mistakes while inserting new orders/invoices and on the ERP implementing a check for this business logic. Unfortunately these last two hypotheses are not always true, so you may find yourself really having some invoices for a product line that doesn't exist anymore. Maybe this kind of situation will be solved in future using Master Data Management but, meanwhile, how can you give an idea of the data quality to your customers? How can you check that the logical integrity of the analytical data you produce is exactly what you expect? Well, Unit Testing of a DWH or a CUBE can be a solution. Once you have defined your test suite, by writing SQL and MDX queries that check that your data is what you expect it to be, if you use NUnit (and QueryUnit does), you can then use a tool like NUnit2Report to create a nice HTML report that can be shipped via email to give information about data quality. In addition to that, since NUnit produces an XML file as a result, you can also import it into a SQL Server database and then monitor the quality of data over time. I'll be speaking about this approach (and more in general about how to "engineer" a BI solution) at the next European SQL PASS: Adaptive BI Best Practices http://www.sqlpass.org/summit/eu2010/Agenda/ProgramSessions/AdaptiveBIBestPratices.aspx I'll enjoy discussing all of this with you, so see you there! And remember: "if it ain't tested, it's broken!" (Sorry, I don't remember who said that in the first place :-))
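
    QueryUnit wraps this pattern for you, so the snippet below is not its API; it is only a hand-rolled illustration of what such a data test looks like in plain NUnit with ADO.NET, using a made-up connection string and made-up table names for the "no 2010 invoices on a dismissed product line" rule described above.

    ```csharp
    using System.Data.SqlClient;
    using NUnit.Framework;

    [TestFixture]
    public class DwhBusinessRuleTests
    {
        // Hypothetical connection string; point it at your own DWH.
        private const string ConnectionString =
            @"Data Source=.;Initial Catalog=MyDwh;Integrated Security=True";

        [Test]
        public void DismissedProductLine_HasNoInvoicesIn2010()
        {
            // Hypothetical schema: count the invoices that violate the business rule.
            const string sql = @"SELECT COUNT(*)
                                 FROM FactInvoice f
                                 JOIN DimProduct p ON p.ProductKey = f.ProductKey
                                 WHERE p.ProductLine = 'DISMISSED-LINE'
                                   AND f.InvoiceYear = 2010";

            using (var conn = new SqlConnection(ConnectionString))
            using (var cmd = new SqlCommand(sql, conn))
            {
                conn.Open();
                int violations = (int)cmd.ExecuteScalar();
                Assert.AreEqual(0, violations,
                    "Found invoices for a product line that should never be sold in 2010.");
            }
        }
    }
    ```

    The same idea extends to MDX checks against the cube: run the query, compare the returned value with the expected one, and let NUnit (and therefore NUnit2Report) handle the reporting.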

    Read the article

  • Silverlight Cream for November 26, 2011 -- #1175

    - by Dave Campbell
    In this Issue: Michael Washington, Manas Patnaik, Jeff Blankenburg, Doug Mair, Jon Galloway, Richard Bartholomew, Peter Bromberg, Joel Reyes, Zeben Chen, Navneet Gupta, and Cathy Sullivan. Above the Fold: Silverlight: "Using ASP.NET PageMethods With Silverlight" Peter Bromberg WP7: "Leveraging Background Services and Agents in Windows Phone 7 (Mango)" Jon Galloway Metro/WinRT/Windows8: "Debugging Contracts using Windows Simulator" Cathy Sullivan LightSwitch: "LightSwitch: It Is About The Money (It Is Always About The Money)" Michael Washington Shoutouts: Michael Palermo's latest Desert Mountain Developers is up Michael Washington's latest Visual Studio #LightSwitch Daily is up From SilverlightCream.com:LightSwitch: It Is About The Money (It Is Always About The Money)Michael Washington has a very nice post up about LightSwitch apps in general and his opinion about the future use... based on what he and I have been up to, I tend to agree on all counts!Accessing Controls from DataGrid ColumnHeader – SilverlightManas Patnaik's latest post is about using the VisualTreeHelper class to iterate through the visual tree to find the controls you need ... including sample code31 Days of Mango | Day #18: Using Sample DataJeff Blankenburg's Day 18 in his 31-Day Mango quest is on Sample Data using Expression Blend, and he begins with great links to his other Blend posts followed by a nice sample data tutorial and source31 Days of Mango | Day #19: Tilt EffectsDoug Mair returns to the reigns of Jeff's 31-Days series with number 19 which is all about Tilt Effects ... as seen in the Phone application when you select a user... Doug shows how to add this effect to your appLeveraging Background Services and Agents in Windows Phone 7 (Mango)Jon Galloway has a WP7 post up discussing Background Services and how they all fit together... he's got a great diagram of that as an overview then really nice discussion of each followed up by his slides from DevConnections, and codeNetflix on Windows 8This one isn't C#/XAML, but Richard Bartholomew has a Netflix on Windows 8 app running that bears noticeUsing ASP.NET PageMethods With SilverlightPeter Bromberg has a post up demonstrating calling PageMethods from a Silverlight app using the ScriptManager controlAWESOME Windows Phone Power ToolJoel Reyes announced the release of a full-featured tool for side-loading apps to your WP7 device... available at codeplexMicrosoft Windows Simulator Rotation and Resolution EmulationZeben Chen discusses the Windows 8 Simulator a bit deeper with this code-laden post showing how to look at roation and orientation-aware apps and resolution.First look at Windows SimulatorNavneet Gupta has a great into post to using the simulator in VS2011 for Windows 8 apps. Four things you really need this for: Touch Emulation, Rotation, Different target resolutions, and ContractsDebugging Contracts using Windows SimulatorCathy Sullivan shows how to debug W8 Contracts in VS2011... why you ask? because when you hit one in the debugger, the target app disappears.. but enter the simulator... check it outStay in the 'Light!Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCreamJoin me @ SilverlightCream | Phoenix Silverlight User GroupTechnorati Tags:Silverlight    Silverlight 3    Silverlight 4    Windows PhoneMIX10

    Read the article

  • Canonicalization of single, small pages like reviews or product categories [SEO]

    - by Valorized
    In general I pretty much like the idea of canonicalization. And in most cases, Google explains possible procedures in a clear way. For example: If I have duplicates because of parameters (eg: &sort=desc) it's clear to use the canonical for the site, provided via the canonical link element within the head-tag. However I'm wondering how to handle "small - not to say thin-content - sites". What's my definition of a small site? An example: On one of my main sites, we use a directory-based url-structure. Let's see: example.com/ (root) example.com/category-abc/ example.com/category-abc/produkt-xy/ Moreover we provide one page that includes all products: example.com/all-categories/ (lists all products the same way as in the categories) In case of reviews, we use a similar structure: example.com/reviews/product-xy/ shows all reviews for one certain product example.com/reviews/product-xy/abc-your-product-is-great/ shows one certain review example.com/reviews/ shows all reviews for all products (latest first) Let's make it even more complicated: On every product page, there are the latest 2 reviews at the end of the page. So you see, a lot of potential duplicates. Q1: Should I create canonicals for a: example.com/category-abc/ to example.com/all-categories/ b: example.com/reviews/product-xy/abc-your-product-is-great/ to example.com/reviews/product-xy/ or to example.com/reviews/ or none of them? Q2: Can I link the collection of categories (all-categories/) and the collection of all reviews (reviews/ and reviews/product-xy/) to the single category and the single review, respectively? Example: example.com/reviews/ includes - let's say - 100 reviews. Can I somehow use a markup that tells search engines: "Hey, wait, you are now looking at a collection of 100 reviews - do not index this collection, you should rather prefer indexing every single review as a single page!". In HTML it might be something like this (which - of course - does not work, it's only to show you what I mean): <div class="review" rel="canonical" href="http://example.com/reviews/product-xz/abc-your-product-is-great/">HERE GOES THE REVIEW</div> Reason: I don't think it is a great user experience if the user searches for "your product is great" and lands on example.com/reviews/ instead of example.com/reviews/product-xy/abc-your-product-is-great/. On the first page, he will have to search and might stop because of frustration. The second result, however, might lead to a conversion. The same applies for categories. If the user is searching for category-Z, he might land on the all-categories page and he has to scroll down to the (last) category to find what he searched for (Z). So what's best practice? What should I do? Thank you for your help!

    Read the article

  • SQL Contests – Solution – Identify the Database Celebrity

    - by Pinal Dave
    Last week we were running contest Identify the Database Celebrity and we had received a fantastic response to the contest. Thank you to the kind folks at NuoDB as they had offered two USD 100 Amazon Gift Cards to the winners of the contest. We had also additional contest that users have to download and install NuoDB and identified the sample database. You can read about the contest over here. Here is the answer to the questions which we had asked earlier in the contest. Part 1: Identify Database Celebrity Personality 1 – Edgar Frank “Ted” Codd (August 19, 1923 – April 18, 2003) was an English computer scientist who, while working for IBM, invented the relational model for database management, the theoretical basis for relational databases. He made other valuable contributions to computer science, but the relational model, a very influential general theory of data management, remains his most mentioned achievement. (Wki) Personality 2 – James Nicholas “Jim” Gray (born January 12, 1944; lost at sea January 28, 2007; declared deceased May 16, 2012) was an American computer scientist who received the Turing Award in 1998 “for seminal contributions to database and transaction processing research and technical leadership in system implementation.” (Wiki) Personality 3 – Jim Starkey (born January 6, 1949 in Illinois) is a database architect responsible for developing InterBase, the first relational database to support multi-versioning, the blob column type, type event alerts, arrays and triggers. Starkey is the founder of several companies, including the web application development and database tool company Netfrastructure and NuoDB. (Wiki) Part 2: Identify NuoDB Samples Database Names In this part of the contest one has to Download NuoDB and install the sample database Hockey. Hockey is sample database and contains few tables. Users have to install sample database and inform the name of the sample databases. Here is the valid answer. HOCKEY PLAYERS SCORING TEAM Once again, it was indeed fun to run this contest. I have received great feedback about it and lots of people wants me to run similar contest in future. I promise to run similar interesting contests in the near future. Winners Within next two days, we will let winners send emails. Winners will have to confirm their email address and NuoDB team will send them directly Amazon Cards. Once again it was indeed fun to run this contest. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • What's New in Oracle's EPM System?

    - by jmorourke
    Oracle’s EPM System R11.1.2.2  is now generally available to customers and partners on the download center.  Although the release number doesn’t sound significant, this is a major release of Oracle’s Hyperion EPM Suite with new modules as well as significant enhancements across the suite.  This release was announced back on April 4th as part of Oracle’s Business Analytics Strategy launch, so analytics is a key aspect of the release.  But the three biggest pieces of news in this release are Oracle Hyperion Planning support for the Exalytics In-Memory Machine, the new Project Financial Planning Application and the new Account Reconciliations Manager module. The Oracle Exalytics In-Memory Machine was announced back in October 2011, at Oracle OpenWorld.  It’s the latest installment from Oracle in a line of engineered systems that combine Oracle Sun hardware, with Oracle database and application technologies – in solutions that are designed to provide high scalability and performance for specific tasks.  Exalytics is the first engineered system specifically designed for high performance analytics.  Running in-memory versions of Oracle Essbase, as well as the Oracle TimesTen database and Oracle BI tools, Exalytics provides speed of thought response times for complex analytic processes with advanced visualizations.  Early adopter customers have achieved 5X to 100X faster interactivity and 6X to 10X faster planning cycles.  Hyperion Planning running with Oracle Exalytics will support enterprise-wide planning, budgeting and forecasting with more detailed data, with hundreds to thousands of users across an organization getting speed of thought performance. The new Hyperion Project Financial Planning application delivered with EPM 11.1.2.2 is also great news for Oracle customers.  This application follows on the heels of other special-purpose planning applications that Oracle has delivered for Workforce and Capital Asset planning.  It allows Project Managers to identify project-related expenses and revenues, plan and propose new projects, and track results over time. Finance Managers can evaluate and compare different projects, manage the funding process, monitor and report the actual financial results and impacts of projects and project portfolios. This new application is applicable to capital projects, contract projects and indirect projects like IT and HR projects across all industries.  This application is a great complement to existing Project Management applications, and helps bridge the gap between these applications, and the financial planning and budgeting process. Account reconciliations has to be one of the biggest bottlenecks and risks in the financial close and reporting process, and many organizations rely on spreadsheets and manual processes to perform this critical process.  To help address this problem, Oracle developed an Account Reconciliation Manager module that is being delivered as part of Oracle Hyperion Financial Close Management.   This module helps automate and streamline account reconciliations and eliminates the chances for errors, omissions and fraud.  But unlike standalone account reconciliation packages, it’s integrated with the rest of the Oracle Hyperion Financial Close suite, and can integrate balances from any source system.  
This can help alleviate a major bottleneck in the financial close process, increase accuracy and reduce risk, and can complement existing investments in Hyperion Financial Management, as well as Oracle and non-Oracle transaction processing systems. Other enhancements in this release include an enhanced Web 2.0 interface for Hyperion Planning and Hyperion Financial Management (HFM), configurable dimensionality in HFM, new Predictive Planning feature in Hyperion Planning, new Detailed Profitability feature in Hyperion Profitability and Cost Management, new Smart View interface for Hyperion Strategic Finance, and integration of the Hyperion applications with JD Edwards Financials. For more information about Oracle EPM System R11.1.2.2 check out the links below: Press Release:  http://www.oracle.com/us/corporate/press/1575775 Product Information on O.com:  http://www.oracle.com/us/solutions/business-analytics/overview/index.html Product Information on OTN:  http://www.oracle.com/technetwork/middleware/epm/downloads/index.html Webcast Replay:  http://www.oracle.com/us/go/index.html?Src=7317510&Act=65&pcode=WWMK11054701MPP046 Please contact me if you have any questions or need additional information – [email protected]

    Read the article

  • Advice on designing a robust program to handle a large library of meta-information & programs

    - by Sam Bryant
    So this might be overly vague, but here it is anyway. I'm not really looking for a specific answer, but rather general design principles or direction towards resources that deal with problems like this. It's one of my first large-scale applications, and I would like to do it right. Brief Explanation: My basic problem is that I have to write an application that handles a large library of meta-data, can easily modify the meta-data on-the-fly, is robust with respect to crashing, and is very efficient. (Sorta like the design parameters of iTunes, although sometimes iTunes performs more poorly than I would like.) If you don't want to read the details, you can skip the rest. Long Explanation: Specifically I am writing a program that creates a library of image files and meta-data about these files. There is a list of tags that may or may not apply to each image. The program needs to be able to add new images, new tags, assign tags to images, and detect duplicate images, all while operating. The program contains an image Viewer which has tagging operations. The idea is that if a given image A is viewed while the library has tags T1, T2, and T3, then that image will have boolean flags for each of those tags (depending on whether the user tagged that image while it was open in the Viewer). However, prior to being viewed in the Viewer, image A would have no value for tags T1, T2, and T3. Instead it would have a "dirty" flag indicating that it is unknown whether or not A has these tags. The program can introduce new tags at any time (which would automatically set all images to "dirty" with respect to this new tag). This program must be fast. It must be easily able to pull up a list of images with or without a certain tag as well as images which are "dirty" with respect to a tag. It has to be crash-safe, in that if it suddenly crashes, all of the tagging information done in that session is not lost (though perhaps it's okay to lose some of it). Finally, it has to work with a lot of images (10,000). I am a fairly experienced programmer, but I have never tried to write a program with such demanding needs and I have never worked with databases. With respect to the meta-data storage, there seem to be a few design choices: Choice 1: Individual meta-data vs centralized meta-data. Individual Meta-Data: have a separate meta-data file for each image. This way, as soon as you change the meta-data for an image, it can be written to the hard disk, without having to rewrite the information for all of the other images. Centralized Meta-Data: Have a single file to hold the meta-data for every file. This would probably require meta-data writes in intervals as opposed to after every change. The benefit here is that you could keep a centralized list of all images with a given tag, etc., making the task of pulling up all images with a given tag very efficient
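
    One simple way to represent the per-tag "dirty" state described above is a nullable boolean per tag, where null means "not yet evaluated in the Viewer". The sketch below is just that idea in C# with invented names; persistence, indexing, and crash-safety are deliberately left out.

    ```csharp
    using System.Collections.Generic;

    public class ImageRecord
    {
        public string FilePath { get; set; }

        // Tag name -> true (has tag), false (does not have it), null (dirty/unknown).
        private readonly Dictionary<string, bool?> _tags = new Dictionary<string, bool?>();

        // Call when a new tag is introduced to the library: every image starts dirty for it.
        public void RegisterTag(string tag)
        {
            if (!_tags.ContainsKey(tag))
                _tags[tag] = null;
        }

        // Call from the Viewer once the user has decided whether the tag applies.
        public void SetTag(string tag, bool hasTag)
        {
            _tags[tag] = hasTag;
        }

        public bool IsDirty(string tag)
        {
            bool? value;
            return !_tags.TryGetValue(tag, out value) || value == null;
        }

        public bool HasTag(string tag)
        {
            bool? value;
            return _tags.TryGetValue(tag, out value) && value == true;
        }
    }
    ```

    Whether these records end up in per-image sidecar files or in a single embedded database (SQLite is a common middle ground), the tri-state flag is what lets you query "all images dirty with respect to tag T" efficiently.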

    Read the article

  • Performing an upgrade from TFS 2008 to TFS 2010

    - by Enrique Lima
    I recently had to go through the process of migrating a TFS 2008 SP1 to a TFS 2010 environment. I will go into the details of the tasks that I went through, but first I want to explain why I define it as a migration and not an upgrade. When this environment was setup, based on support and limitations for TFS 2008, we used a 32 bit platform for the TFS Application Tier and Build Servers.  The Data Tier, since we were installing SP1 for TFS 2008, was done as a 64 bit installation.  We knew at that point that TFS 2010 was in the picture so that served as further motivation to make that a 64bit install of SQL Server.  The SQL Server at that point was a single instance (Default) installation too.  We had a pretty good strategy in place for backups of the databases supporting the environment (and this made the migration so much smoother), so we were pretty familiar with the databases and the purpose they serve. I am sure many of you that have gone through a TFS 2008 installation have encountered challenges and trials.  And likely even more so if you, like me, needed to configure your deployment for SSL.  So, frankly I was a little concerned about the process of migrating.  They say practice makes perfect, and this environment I worked on is in some way my brain child, so I was not ready nor willing for this to be a failure or something that would impact my client’s work. Prior to going through the migration process, we did the install of the environment.  The Data Tier was the same, with a new Named instance in place to host the 2010 install.  The Application Tier was in place too, and we did the DefaultCollection configuration to test and validate all components were in place as they should. Anyway, on to the tasks for the migration (thanks to Martin Hinselwood for his very thorough documentation): Close access to TFS 2008, you want to make sure all code is checked in and ready to go.  We stated a difference of 8 hours between code lock and the start of migration to give time for any unexpected delay.  How do we close access?  Stop IIS. Backup your databases.  Which ones? TfsActivityLogging TfsBuild TfsIntegration TfsVersionControl TfsWorkItemTracking TfsWorkItemTrackingAttachments Restore the databases to the new Named Instance (make sure you keep the same names) Now comes the fun part! The actual import/migration of the databases.  A couple of things happen here. The TfsIntegration database will be scanned, the other databases will be checked to validate they exist.  Those databases will go through a process of data being extracted and transferred to the TfsVersionControl database to then be renamed to Tfs_<Collection>. You will be using a tool called tfsconfig and the option import. This tool is located in the TFS 2010 installation path (C:\Program Files\Microsoft Team Foundation Server 2010\Tools),  the command to use is as follows:    tfsconfig import /sqlinstance:<instance> /collectionName:<name> /confirmed Where <instance> is going to be the SQL Server instance where you restored the databases to.  <name> is the name you will give the collection. And to explain /confirmed, well this means you have done a backup of the databases, why?  well remember you are going to merge the databases you restored when you execute the tfsconfig import command. The process will go through about 200 tasks, once it completes go to Team Foundation Server Administration Console and validate your imported databases and contents. 
We’ll keep this manageable, so the next post is about how to complete that implementation with the SSL configuration.

    Read the article

  • JWT Token Security with Fusion Sales Cloud

    - by asantaga
    When integrating Sales Cloud with a 3rd party application you often need to pass the user's identity to the 3rd party application so that: The 3rd party application knows who the user is The 3rd party application needs to be able to do WebService callbacks to Sales Cloud as that user. Until recently, without using SAML, this wasn't easily possible and one workaround was to pass the username, potentially even the password, from Sales Cloud to the 3rd party application using URL parameters. With Oracle Fusion R8 we now have a proper solution and that is called "JWT Token support". This is based on the industry JSON Web Token standard; for more information see here. JWT works by allowing the user to generate a token (valid for a short period of time) for a specific application. This token is then passed to the 3rd party application as a GET parameter. The 3rd party application can then call into Sales Cloud and use this token for all webservice calls; the calls will be executed as the user who generated the token in the first place, or they can call a special HR WebService (UserService-findSelfUserDetails()) with the token and Fusion will respond with the user's details. Some more details: The following will go through the scenario where you want to embed a 3rd party application within a WebContent frame (iFrame) within the opportunity screen. 1. Define your application using the topology manager in setup and maintenance. See this documentation link on topology manager. 2. From within your groovy script which defines the iFrame you wish to embed, write some code which looks like this: def thirdpartyapplicationurl = oracle.topologyManager.client.deployedInfo.DeployedInfoProvider.getEndPoint("My3rdPartyApplication") def crmkey = (new oracle.apps.fnd.applcore.common.SecuredTokenBean().getTrustToken()) def url = thirdpartyapplicationurl + "param1=" + OptyId + "&jwt=" + crmkey return (url) This snippet generates a URL which contains: The hostname/endpoint of the 3rd party application Two parameters: the opportunityId stored in parameter "param1" and the JWT token stored in parameter "jwt" 3. From your 3rd party application you now have two options: Execute a webservice call by first setting the header parameter "Authentication" to the JWT token. The webservice call will be executed against Fusion Applications "as" the user who executes the process. To find out "who you are", set the header parameter "Authentication" and execute the special webservice call findSelfUserDetails(), in the UserDetailsService. For more information: Oracle Sales Cloud Documentation, specific chapter on JWT Token OTN samples, specifically the Rich UI With JWT Token Sample Oracle Fusion Applications General Documentation
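
    On the receiving side, the 3rd party application reads the "jwt" query parameter and sends the token back on its calls to Sales Cloud. The fragment below is only a sketch of that flow in C#: the endpoint URL is a placeholder, and the "Authentication" header name simply follows the description above, so check the documentation in the links listed above for the exact service and header to use in your environment.

    ```csharp
    using System.Net.Http;
    using System.Threading.Tasks;

    public static class FusionCallback
    {
        public static async Task<string> CallBackAsCurrentUserAsync(string jwtFromQueryString)
        {
            // Placeholder endpoint; use the real Sales Cloud service URL for your pod.
            const string serviceUrl = "https://your-salescloud-host/example/userService";

            using (var client = new HttpClient())
            {
                // Attach the token exactly as it arrived in the "jwt" GET parameter.
                client.DefaultRequestHeaders.Add("Authentication", jwtFromQueryString);

                HttpResponseMessage response = await client.GetAsync(serviceUrl);
                response.EnsureSuccessStatusCode();
                return await response.Content.ReadAsStringAsync();
            }
        }
    }
    ```

    In practice the call would go through whatever web-service proxy you generate for the UserService described above; the only point of the sketch is where the token goes.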

    Read the article

  • Convert Chrome Bookmark Toolbar Folders to Icons

    - by Asian Angel
    So you have your regular bookmarks reduced to icons but what about the folders? With our little hack and a few minutes of your time you can turn those folders into icons too. Condensing the Folders Reducing bookmark folders to icons is a little more tricky than regular bookmarks but not hard to do. Right click on the folder and select "Rename…". The folder's name should already be highlighted/selected as shown here. Delete the text…notice that the "OK Button" has become unusable for the moment. Now what you will need to do is: Hold down the "Alt Key" Type in "0160" (without the quotes) using the numbers keypad on the right side of your keyboard Release the "Alt Key" after you have finished typing in the number above Once you have released the "Alt Key" you will notice two things…the "cursor" has moved further into the text area and you can now click on the "OK Button" again. There is our folder after editing. And it works just as well as before but without taking up so much room. Here is how our "iconized" folder looks next to our bookmarks. Perfect! What if you want to reduce multiple folders to icons? Perform the same exact steps shown above for each folder and pack your "Bookmarks Toolbar" full of folder goodness! As seen here the folders will have a little more space between them in comparison with singular bookmarks due to the "blank name" for each folder. For those who may be curious this is what your bookmarks will look like in the "Bookmark Manager Page". Note: If you export your bookmarks all bookmarks contained in multiple blank name folders will be combined into a single folder. Conclusion With just a little bit of work you can pack a lot of goodness into your "Bookmarks Toolbar". No more wasted space…

    Read the article

  • SQL – Business Intelligence: Derive Data or Information?

    - by Pinal Dave
    We all know the value of information in our lives. Whether it's a personal decision or a business-initiated one, people need it. But the question is: who is to make the distinction between data and information? We all come across a whole lot of data daily, that may be significant or not. We filter what's required and forget about the rest. Information is filtered and distilled data. Filtering and distillation can also alter its actual meaning and natural state. Therefore, in this blog we discover some ways to ensure that we're using business intelligence derived from the right information for making critical management decisions. Four key questions managers must ask themselves before making a decision: 1. Am I working with data or information? 2. What is its context? 3. How recent is it? 4. How was it derived or what is the source? The first question is probably the most important. You must know what you're dealing with here. If you see use of adjectives and conclusions drawn, it's information. Not raw data. Your very next concern must be whether it is disguised to present a particular viewpoint or perspective. It makes a lot of difference if you take a decision based on someone's propaganda to distort real facts. Therefore, the context and the intentions of the distillation process must be clear to you. The next consideration is whether the data is recent enough to hold any value. Since it has a very short shelf life, you must ensure that its context and value are not lost over time. The last and the most important consideration is how it was derived in the first place. The observer effect is what calls the shots here. The source can change the context to a great extent if the collection methodology and purpose are not clear. Gathering intelligence for decision making requires users to be keen observers and not take the information provided at face value alone. These probing questions will allow you to make sure that you're working with clean and accurate data devoid of any influence or manipulation. Only then can you be sure of deriving true business intelligence for your organization. BI technology is also a great way to ensure accuracy of reports. SQL BI Platform provides advanced tools and techniques for all your BI needs and concerns. Koenig Solutions offers this course along with a host of other Business Intelligence and IT courses on all the latest technologies available in the market today. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Mexico leading in Business Transformation Strategies:

    - by [email protected]
    By John Burke Group Vice President Oracle Applications Business Unit     I recently completed a business tour in Mexico, and was surprised by both the economic vibrancy of the country and the thought leadership expressed by many of the customers I met.  An example of the economic vibrancy of the country: across the street from my hotel was the local Bentley dealership, Coach Store, Yves Saint Laurent and of course a Starbucks.  I only made it to Starbucks.  Both the Coach Store and YSL had a line of folks waiting to get in... As for thought leadership, there were several illustrations only on the first day. I had the opportunity to meet with a branch of the Mexican Federal Government. Their questions were not about clerical task automation, far from it! We discussed citizen on-line access to fees and services - for example looking up the duty on an international goods shipment, or tracking that my taxes have been received, or the status of my request for a certain service.  Eligibility, policies and status.  Having an integrated rules or policy automation system that would allow businesses and citizens to access accurate information and ensure the proper collection of fees and payment for 3rd party provided services.    Then in the afternoon, I met with the owner of a roofing company (note: most roofs in Mexico are flat and made of cement).  This CEO started discussing how he wanted to transform his business from a cement products company to a service company and market 5-10-15 year service contracts which would guarantee the structural integrity of the roof and of course that the roof would remain waterproof.  Although his products were guaranteed, they required an annual inspection and most home owners never schedule that inspection until it is too late and water damage has occurred.  These emergency calls reduce his margin and reduce customer satisfaction.  This lead to a discussion of business models in general and why long term differentiation can only come from service, not just for the music or news industries, but also for roofing companies!    I completely agreed with the transformational concepts described in both meetings and quickly understood why there is a Bentley dealership near my hotel.    

    Read the article

  • Why does Chrome video performance substantially degrade after waking from suspend in 10.10?

    - by Grant Heaslip
    Note: For some more details, some of which may not be true given what I've figured out, see this post. When I first boot my computer, video performance (both native H.264 HTML5 in YouTube and Vimeo, and in Flash) in Chrome is perfectly reasonable. CPU usage stays low, everything works correctly, and the video is silky-smooth. But for whatever reason, if I suspend my computer then wake it up, video performance plummets. Full screen HTML5 video is choppy at best, and full-screen Flash video basically brings my computer to its knees (I'm talking less than a frame a second, and a 5 second lead time to leave full-screen after hitting Esc). Restarting Chrome doesn't fix this; I need to completely restart my machine before performance goes back to normal. Video performance in other applications, such as Movie Player, doesn't seem to be affected at all by the suspend cycle; it's only Chrome. I'm using a Lenovo X201, with an Intel GMA HD graphics chipset, and Intel components all around (I don't need any proprietary drivers). This didn't happen in 10.04, and I haven't changed anything that I think would have caused this to happen. It's possible that a Chrome release could have caused this, but it seems less likely than a regression between 10.04 and 10.10. Any ideas? EDIT: In response to Georg's comment, logging in and out doesn't fix it. Restarting Compiz or switching to Metacity (at least by using "compiz/metacity --replace & disown" - am I doing it right?) doesn't help (actually, it seemed to help somewhat with Flash once, but I haven't been able to reproduce this). I'm not sure about GDM - when I use "sudo restart gdm" I get kicked back to the Linux shell (?), which I have no idea how to get out of. Also, I want to make very clear that this isn't just a case of Flash sucking (it does, but that's beside the point). I'm seeing the same general problem with HTML5 videos, and Flash is performing better on my Nexus One than it does on my Core i5 laptop. There's something screwy going on with Chrome and/or 10.10.

    Read the article

  • C#/.NET Little Wonders: The Generic Func Delegates

    - by James Michael Hare
    Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past little wonders posts can be found here. Back in one of my three original “Little Wonders” Trilogy of posts, I had listed generic delegates as one of the Little Wonders of .NET.  Later, someone posted a comment saying that they would love more detail on the generic delegates and their uses, since my original entry just scratched the surface of them. Last week, I began our look at some of the handy generic delegates built into .NET with a description of delegates in general, and the Action family of delegates.  For this week, I’ll launch into a look at the Func family of generic delegates and how they can be used to support generic, reusable algorithms and classes. Quick Delegate Recap Delegates are similar to function pointers in C++ in that they allow you to store a reference to a method.  They can store references to either static or instance methods, and can actually be used to chain several methods together in one delegate. Delegates are very type-safe and can be satisfied with any standard method, anonymous method, or lambda expression.  They can also be null as well (refers to no method), so care should be taken to make sure that the delegate is not null before you invoke it. Delegates are defined using the keyword delegate, where the delegate’s type name is placed where you would typically place the method name: 1: // This delegate matches any method that takes string, returns nothing 2: public delegate void Log(string message); This delegate defines a delegate type named Log that can be used to store references to any method(s) that satisfies its signature (whether instance, static, lambda expression, etc.). Delegate instances then can be assigned zero (null) or more methods using the operator = which replaces the existing delegate chain, or by using the operator += which adds a method to the end of a delegate chain: 1: // creates a delegate instance named currentLogger defaulted to Console.Out.WriteLine (instance method) 2: Log currentLogger = Console.Out.WriteLine; 3:  4: // invokes the delegate, which writes to the console out 5: currentLogger("Hi Standard Out!"); 6:  7: // append a delegate to Console.Error.WriteLine to go to std error 8: currentLogger += Console.Error.WriteLine; 9:  10: // invokes the delegate chain and writes message to std out and std err 11: currentLogger("Hi Standard Out and Error!"); While delegates give us a lot of power, it can be cumbersome to re-create fairly standard delegate definitions repeatedly; for this purpose the generic delegates were introduced in various stages in .NET.  These support various method types with particular signatures. Note: a caveat with generic delegates is that while they can support multiple parameters, they do not match methods that contain ref or out parameters. If you want a delegate to represent methods that take ref or out parameters, you will need to create a custom delegate. We’ve got the Func… delegates Just like its cousin, the Action delegate family, the Func delegate family gives us a lot of power to use generic delegates to make classes and algorithms more generic.  Using them keeps us from having to define a new delegate type when we need to make a class or algorithm generic. Remember that the point of the Action delegate family was to be able to perform an “action” on an item, with no return results.  
    Thus Action delegates can be used to represent most methods that take 0 to 16 arguments but return void.  The Func delegate family was introduced in .NET 3.5 with the advent of LINQ, and gives us the power to define a function that can be called on 0 to 16 arguments and returns a result.  Thus, the main difference between Action and Func, from a delegate perspective, is that Actions return nothing, but Funcs return a result. The Func family of delegates has signatures as follows: Func<TResult> – matches a method that takes no arguments, and returns value of type TResult. Func<T, TResult> – matches a method that takes an argument of type T, and returns value of type TResult. Func<T1, T2, TResult> – matches a method that takes arguments of type T1 and T2, and returns value of type TResult. Func<T1, T2, …, TResult> – and so on up to 16 arguments, and returns value of type TResult. These are handy because they let you quickly specify that a method or class you design will perform a function to produce a result, as long as the method you supply meets the signature. For example, let’s say you were designing a generic aggregator, and you wanted to allow the user to define how the values will be aggregated into the result (i.e. Sum, Min, Max, etc…).  To do this, we would ask the user of our class to pass in a method that would take the current total, the next value, and produce a new total.  A class like this could look like: 1: public sealed class Aggregator<TValue, TResult> 2: { 3: // holds method that takes previous result, combines with next value, creates new result 4: private Func<TResult, TValue, TResult> _aggregationMethod; 5:  6: // gets or sets the current result of aggregation 7: public TResult Result { get; private set; } 8:  9: // construct the aggregator given the method to use to aggregate values 10: public Aggregator(Func<TResult, TValue, TResult> aggregationMethod = null) 11: { 12: if (aggregationMethod == null) throw new ArgumentNullException("aggregationMethod"); 13:  14: _aggregationMethod = aggregationMethod; 15: } 16:  17: // method to add next value 18: public void Aggregate(TValue nextValue) 19: { 20: // performs the aggregation method function on the current result and next and sets to current result 21: Result = _aggregationMethod(Result, nextValue); 22: } 23: } Of course, LINQ already has an Aggregate extension method, but that works on a sequence of IEnumerable<T>, whereas this is designed to work more with aggregating single results over time (such as keeping track of a max response time for a service). 
We could then use this generic aggregator to find the sum of a series of values over time, or the max of a series of values over time (among other things): 1: // creates an aggregator that adds the next to the total to sum the values 2: var sumAggregator = new Aggregator<int, int>((total, next) => total + next); 3:  4: // creates an aggregator (using static method) that returns the max of previous result and next 5: var maxAggregator = new Aggregator<int, int>(Math.Max); So, if we were timing the response time of a web method every time it was called, we could pass that response time to both of these aggregators to get an idea of the total time spent in that web method, and the max time spent in any one call to the web method: 1: // total will be 13 and max 13 2: int responseTime = 13; 3: sumAggregator.Aggregate(responseTime); 4: maxAggregator.Aggregate(responseTime); 5:  6: // total will be 20 and max still 13 7: responseTime = 7; 8: sumAggregator.Aggregate(responseTime); 9: maxAggregator.Aggregate(responseTime); 10:  11: // total will be 40 and max now 20 12: responseTime = 20; 13: sumAggregator.Aggregate(responseTime); 14: maxAggregator.Aggregate(responseTime); The Func delegate family is useful for making generic algorithms and classes, and in particular allows the caller of the method or user of the class to specify a function to be performed in order to generate a result. What is the result of a Func delegate chain? If you remember, we said earlier that you can assign multiple methods to a delegate by using the += operator to chain them.  So how does this affect delegates such as Func that return a value, when applied to something like the code below? 1: Func<int, int, int> combo = null; 2:  3: // What if we wanted to aggregate the sum and max together? 4: combo += (total, next) => total + next; 5: combo += Math.Max; 6:  7: // what is the result? 8: var comboAggregator = new Aggregator<int, int>(combo); Well, in .NET if you chain multiple methods in a delegate, they will all get invoked, but the result of the delegate is the result of the last method invoked in the chain.  Thus, this aggregator would always result in the Math.Max() result.  The other chained method (the sum) gets executed first, but it’s result is thrown away: 1: // result is 13 2: int responseTime = 13; 3: comboAggregator.Aggregate(responseTime); 4:  5: // result is still 13 6: responseTime = 7; 7: comboAggregator.Aggregate(responseTime); 8:  9: // result is now 20 10: responseTime = 20; 11: comboAggregator.Aggregate(responseTime); So remember, you can chain multiple Func (or other delegates that return values) together, but if you do so you will only get the last executed result. Func delegates and co-variance/contra-variance in .NET 4.0 Just like the Action delegate, as of .NET 4.0, the Func delegate family is contra-variant on its arguments.  In addition, it is co-variant on its return type.  To support this, in .NET 4.0 the signatures of the Func delegates changed to: Func<out TResult> – matches a method that takes no arguments, and returns value of type TResult (or a more derived type). Func<in T, out TResult> – matches a method that takes an argument of type T (or a less derived type), and returns value of type TResult(or a more derived type). Func<in T1, in T2, out TResult> – matches a method that takes arguments of type T1 and T2 (or less derived types), and returns value of type TResult (or a more derived type). 
Func delegates and co-variance/contra-variance in .NET 4.0

Just like the Action delegate, as of .NET 4.0 the Func delegate family is contra-variant on its arguments.  In addition, it is co-variant on its return type.  To support this, in .NET 4.0 the signatures of the Func delegates changed to:

Func<out TResult> – matches a method that takes no arguments, and returns a value of type TResult (or a more derived type).
Func<in T, out TResult> – matches a method that takes an argument of type T (or a less derived type), and returns a value of type TResult (or a more derived type).
Func<in T1, in T2, out TResult> – matches a method that takes arguments of type T1 and T2 (or less derived types), and returns a value of type TResult (or a more derived type).
Func<in T1, in T2, …, out TResult> – and so on up to 16 arguments, returning a value of type TResult (or a more derived type).

Notice the addition of the in and out keywords before each of the generic type placeholders.  As we saw last week, the in keyword is used to specify that a generic type can be contra-variant -- it can match the given type or a type that is less derived.  However, the out keyword is used to specify that a generic type can be co-variant -- it can match the given type or a type that is more derived.

On the contra-variance side, if you say you need a function that will accept a string, you can just as easily give it a function that accepts an object.  In other words, if you say "give me a function that will process dogs", I could pass you a method that will process any animal, because all dogs are animals.  On the co-variance side, if you say you need a function that returns an object, you can just as easily pass it a function that returns a string, because any string returned from the given method can be accepted by a delegate expecting an object result, since string is more derived.  Once again, in other words, if you say "give me a method that creates an animal", I can pass you a method that will create a dog, because all dogs are animals.

It really all makes sense: you can pass a more specific thing to a less specific parameter, and you can return a more specific thing as a less specific result.  In other words, pay attention to the direction the item travels (parameters go in, results come out).  Keeping that in mind, you can always pass more specific things in and return more specific things out.

For example, in the code below we have a method that takes a Func<object> to generate an object, but we can pass it a Func<string> because the return type of object can obviously accept a return value of string as well:

// since Func<object> is co-variant, this will accept Func<string>, etc...
public static string Sequence(int count, Func<object> generator)
{
    var builder = new StringBuilder();

    for (int i = 0; i < count; i++)
    {
        object value = generator();
        builder.Append(value);
    }

    return builder.ToString();
}

Even though the method above takes a Func<object>, we can pass a Func<string> because the TResult type placeholder is co-variant and accepts types that are more derived as well:

// delegate that's typed to return string.
Func<string> stringGenerator = () => DateTime.Now.ToString();

// This will work in .NET 4.0, but not in previous versions
Sequence(100, stringGenerator);

Previous versions of .NET implemented some forms of co-variance and contra-variance before, but .NET 4.0 goes one step further and allows you to pass or assign a Func<A, BResult> to a Func<Y, ZResult> as long as A is less derived than (or the same as) Y, and BResult is more derived than (or the same as) ZResult.
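To illustrate the contra-variant (argument) side with code as well, here is a small sketch of my own; the TotalScore helper and its names are hypothetical and not from the original article:

using System.Collections.Generic;

// hypothetical helper that scores each string with the supplied function
public static int TotalScore(IEnumerable<string> items, Func<string, int> scorer)
{
    int total = 0;
    foreach (var item in items)
    {
        total += scorer(item);
    }
    return total;
}

// a function typed to take object, which is less derived than string
Func<object, int> hashScorer = o => o.GetHashCode();

// works in .NET 4.0: contra-variance lets a Func<object, int> stand in for a Func<string, int>
var score = TotalScore(new[] { "a", "bb" }, hashScorer);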
Sidebar: The Func and the Predicate

A method that takes one argument and returns a bool is generally thought of as a predicate.  Predicates are used to examine an item and determine whether that item satisfies a particular condition.  Predicates are typically unary, but you may also have binary and other predicates as well.

Predicates are often used to filter results, such as in the LINQ Where() extension method:

var numbers = new[] { 1, 2, 4, 13, 8, 10, 27 };

// call Where() using a predicate which determines if the number is even
var evens = numbers.Where(num => num % 2 == 0);

As of .NET 3.5, predicates are typically represented as Func<T, bool> where T is the type of the item to examine.  Prior to .NET 3.5, there was a Predicate<T> type that tended to be used (which we'll discuss next week) and is still supported, but most developers recommend using Func<T, bool> now, as it prevents confusion in overload sets that accept unary predicates, binary predicates, and so on:

// this seems more confusing as an overload set, because of Predicate vs Func
public static void SomeMethod(Predicate<int> unaryPredicate) { }
public static void SomeMethod(Func<int, int, bool> binaryPredicate) { }

// this seems more consistent as an overload set, since it just uses Func
public static void SomeMethod(Func<int, bool> unaryPredicate) { }
public static void SomeMethod(Func<int, int, bool> binaryPredicate) { }

Also, even though Predicate<T> and Func<T, bool> match the same signatures, they are separate types!  Thus you cannot assign a Predicate<T> instance to a Func<T, bool> instance and vice versa:

// the same method, lambda expression, etc. can be assigned to both
Predicate<int> isEven = i => (i % 2) == 0;
Func<int, bool> alsoIsEven = i => (i % 2) == 0;

// but the delegate instances cannot be directly assigned, strongly typed!
// ERROR: cannot convert type...
isEven = alsoIsEven;

// however, you can assign by wrapping in a new instance:
isEven = new Predicate<int>(alsoIsEven);
alsoIsEven = new Func<int, bool>(isEven);

So, the general advice that seems to come from most developers is that Predicate<T> is still supported, but we should use Func<T, bool> for consistency in .NET 3.5 and above.
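Since unary predicates are just Func<T, bool>, they also compose nicely. As an illustrative sketch of my own (the PredicateExtensions class below is hypothetical, not from the article), you can build simple And/Or combinators and feed the result straight to Where():

public static class PredicateExtensions
{
    // returns a predicate that is true only when both predicates are true
    public static Func<T, bool> And<T>(this Func<T, bool> first, Func<T, bool> second)
    {
        return item => first(item) && second(item);
    }

    // returns a predicate that is true when either predicate is true
    public static Func<T, bool> Or<T>(this Func<T, bool> first, Func<T, bool> second)
    {
        return item => first(item) || second(item);
    }
}

// usage: combine two predicates and hand the result to Where()
Func<int, bool> isEvenNumber = i => (i % 2) == 0;
Func<int, bool> isSmallNumber = i => i < 10;
var smallEvens = numbers.Where(isEvenNumber.And(isSmallNumber));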
Sidebar: Func as a Generator for Unit Testing

One area of difficulty in unit testing can be unit testing code that is based on the time of day.  We'd still want to unit test our code to make sure the logic is accurate, but we don't want the results of our unit tests to be dependent on the time they are run.

One way (of many) around this is to create an internal generator that will produce the "current" time of day.  This would default to returning the result of DateTime.Now (or some other method), but we could inject specific times for our unit testing.  Generators are typically methods that return (generate) a value for use in a class/method.

For example, say we are creating a CacheItem<T> class that represents an item in the cache, and we want to make sure the item shows as expired if the age is more than 30 seconds.  Such a class could look like:

// responsible for maintaining an item of type T in the cache
public sealed class CacheItem<T>
{
    // helper method that returns the current time
    private static Func<DateTime> _timeGenerator = () => DateTime.Now;

    // allows internal access to the time generator
    internal static Func<DateTime> TimeGenerator
    {
        get { return _timeGenerator; }
        set { _timeGenerator = value; }
    }

    // time the item was cached
    public DateTime CachedTime { get; private set; }

    // the item cached
    public T Value { get; private set; }

    // item is expired if older than 30 seconds
    public bool IsExpired
    {
        get { return _timeGenerator() - CachedTime > TimeSpan.FromSeconds(30.0); }
    }

    // creates the new cached item, setting cached time to the "current" time
    public CacheItem(T value)
    {
        Value = value;
        CachedTime = _timeGenerator();
    }
}

Then, we can use this construct to unit test our CacheItem<T> without any time dependencies:

var baseTime = DateTime.Now;

// start with the current time stored above (so it doesn't drift)
CacheItem<int>.TimeGenerator = () => baseTime;

var target = new CacheItem<int>(13);

// now add 15 seconds, should still be non-expired
CacheItem<int>.TimeGenerator = () => baseTime.AddSeconds(15);

Assert.IsFalse(target.IsExpired);

// now add 31 seconds, should now be expired
CacheItem<int>.TimeGenerator = () => baseTime.AddSeconds(31);

Assert.IsTrue(target.IsExpired);

Now we can unit test for 1 second before, 1 second after, 1 millisecond before, 1 day after, etc.  Func delegates can be a handy tool for this type of value generation to support more testable code.

Summary

Generic delegates give us a lot of power to make truly generic algorithms and classes.  The Func family of delegates is a great way to specify functions that calculate a result based on 0 to 16 arguments.  Stay tuned in the weeks that follow for other generic delegates in the .NET Framework!

Technorati Tags: .NET, C#, CSharp, Little Wonders, Generics, Func, Delegates

    Read the article

  • Oracle Ebusiness Suite 12.1.3 Oracle VM templates

    - by wcoekaer
    Steven Chan just published a great blog entry about the release of a new set of Oracle VM templates for Oracle E-Business Suite 12.1.3. You can find the blog post here. Templates are available for: E-Business Suite 12.1.3 Vision (64-bit), E-Business Suite 12.1.3 Production (32-bit), and E-Business Suite 12.x Sparse Middle Tiers (32-bit and 64-bit). Thanks Steven! Why does this stuff matter? Well, in general, virtualization (or cloud) solutions provide an easy way to create Virtual Machines, whether it's through a "cloud API" or just a virtualization API. But all you end up with, in the end, is still just a Virtual Machine, maybe with an OS pre-installed and pre-configured. So you have the flexibility of moving VMs around and providing a VM, but what about the actual applications (anything more than a very basic app)? The application administrator still has to install and configure the OS for that application, then install the application, its patches, and its basic configuration before the application user can go in. Building gold images for complex software stacks that are not owned by the users/admins is always very difficult. With our templates, we provide a number of things: Oracle Linux pre-installed and pre-configured with the minimum required packages for that application to run (so it's secure). Oracle Linux can be distributed and used for free or with a support subscription. There is no trial license, no registration key, and no alpha or community version versus enterprise version. You get what we provide in our engineered systems, what we provide support for, without change. Supported out of the box. No virtual trial appliances, no prototypes, no POCs. What you download is production ready without change. The applications are installed by the developers of the application: the database team builds database templates, and the applications engineering team builds application templates. The first-boot configuration scripts ask for basic information such as hostname, IP address, and user passwords, and then go off and set everything up correctly. All tested together - application, operating system, and hypervisor - not 3 (or more) products from 3 (or more) different companies.

    Read the article

  • Log Debug Messages without Debug Serial on Shipped Device

    - by Kate Moss' Open Space
    Debug messages are an ancient but useful way of resolving problems. The message is redirected to Platform Builder if KITL is enabled; otherwise it goes to the default debug port, usually a serial port on most platforms, but it really depends on how OEMWriteDebugString and OEMWriteDebugByte are implemented. For many reasons we may not want to have a debug serial port - for example, we don't have enough spare serial ports, or it can affect performance. So some BSP designers decide to dump the messages to other media: a log file, shared memory, or any solution that suits the need. In CE 5.0 and earlier, the OAL and the kernel are linked into one binary; in other words, you can use whatever function exists in the kernel, such as SC_CreateFileW to access the filesystem from the OAL, even though this is strongly discouraged. But since the OAL became a standalone executable in CE 6.0, we can no longer use this back door; all we have is the interface exported in NKGlobal, which provides just enough for the OAL and no more. Accessing the filesystem, or using sync objects to communicate with other drivers or applications, is not even an option. It sounds like the kernel has locked itself up. Of course, the OAL is in kernel space, so you can still hack into the kernel however you want, but once again that is not only a dirty solution but also a fragile one. So isn't there an elegant solution? Let's see how a debug message gets printed out. In private\winceos\COREOS\nk\kernel\printf.c, OutputDebugStringW is the function that pumps out the messages; most of the code is for error handling and serialization, but what is really interesting is the following code piece:

    if (g_cInterruptsOff) {
        OEMWriteDebugString ((unsigned short *)str);
    } else {
        g_pNKGlobal->pfnWriteDebugString ((unsigned short *)str);
    }
    CELOG_OutputDebugString(dwActvProcId, dwCurThId, str);

    It outputs the message to the default debug output (redirected to KITL when available) or to the OAL when needed, but note the highlighted part: it also invokes CELOG_OutputDebugString. Follow the thread to private\winceos\COREOS\nk\logger\CeLogInstrumentation.c, and you will see this function dumps whatever it is given to CELOG. So whatever the debug message is, we always get a clone of it in CELOG. Generally speaking, all of the debug messages are already logged to CELOG, so what you need to do is run celogflush.exe with the CELZONE_DEBUG zone and then view the data using the Readlog tool. Here is some information about these tools: CELOG - http://msdn.microsoft.com/en-us/library/ee479818.aspx READLOG - http://msdn.microsoft.com/en-us/library/ee481220.aspx Also, for advanced readers, I encourage you to dig into private\winceos\COREOS\nk\celog\celogdll, the source of CELOG.DLL, and use it as a starting point to create a more lightweight debug message logger for your own device!

    Read the article

  • Virtual Box - How to open a .VDI Virtual Machine

    - by [email protected]
    TUESDAY, APRIL 27, 2010 - How to open a .VDI Virtual Machine. Sometimes someone shares a virtual machine with the .VDI extension with us, and we may wonder how to open it and what to open it with. Well, the answer is: it is a VirtualBox virtual machine. If you have not downloaded VirtualBox yet, you can do so easily; just follow this post: http://listeningoracle.blogspot.com/2010/04/que-es-virtualbox.html or http://oracleoforacle.wordpress.com/2010/04/14/ques-es-virtualbox/ OK, now with VirtualBox installed, open it and proceed with the following:
    1. Open the Virtual Media Manager.
    2. Click on Actions -> Add and select the .VDI file. Click "Ok".
    3. Now we can register the new virtual machine: click New, and click Next.
    4. Write down a name for the virtual machine and proceed to select an operating system and version (in this case it is Linux - Oracle Enterprise Linux or RedHat). Click Next.
    5. Select the base memory amount for the virtual machine (minimum 1280 MB in our case). Click Next.
    6. Select the disk 11GR2_OEL5_32GB.vdi that was added in the Virtual Media Manager in step 2. Don't forget to leave "Boot Hard Disk (Primary Master)" selected, given it is the only disk assigned to the virtual machine. Click Next.
    7. Click Finish.
    8. This step is important: once done, click on the Settings button.
    9. On the General option, click the Advanced settings. Here you must change the default directory used to save your snapshots; my recommendation is to set it to the same directory where the .VDI file is. Otherwise you can end up with the same virtual machine and its snapshots in different paths.
    10. Now click on System and proceed to assign the correct memory (if you did not before). Note: enable "Enable IO APIC" if you are planning to assign more than one CPU to the virtual machine. Define the processors for the virtual machine; if your processor is dual core, choose 2.
    11. Select the video memory amount you want to assign to the virtual machine.
    12. Associate more storage disks to the virtual machine if you have more VDI files (not our case). The disk must be selected as IDE Primary Master.
    13. You can verify the other options, but with these changes you will be able to start the VM. Note: sometimes the VM owner may share some instructions; if so, follow those instructions.
    14. Finally, start the virtual machine (click Start).

    Read the article

  • Book Review (Book 11) - Applied Architecture Patterns on the Microsoft Platform

    - by BuckWoody
    This is a continuation of the books I challenged myself to read to help my career - one a month, for a year. You can read my first book review here, and the entire list is here. The book I chose for April 2012 was: Applied Architecture Patterns on the Microsoft Platform. I was traveling at the end of last month so I’m a bit late posting this review here. Why I chose this book: I actually know a few of the authors on this book, so when they told me about it I wanted to check it out. The premise of the book is exactly as it states in the title - to learn how to solve a problem using products from Microsoft. What I learned: I liked the book - a lot. They've arranged the content in a "Solution Decision Framework" that presents a few elements to help you identify a need, propose alternate solutions to solve it, and then explain the rationale for the choice. But the payoff is that the authors then walk through the solution they implement and what they ran into doing it. I really liked this approach. It's not a huge book, but one I've referred to again since I've read it. It's fairly comprehensive, and includes server-oriented products, not things like Microsoft Office or other client-side tools. In fact, I would LOVE to have a work like this for Open Source and other vendors as well - it would make for a great library for a Systems Architect. This one is unashamedly aimed at Microsoft products, and even if I didn't work here, I'd be fine with that. As I said, it would be interesting to see some books on other platforms like this, but I haven't run across something that presents other systems in quite this way. And that brings up an interesting point - this book is aimed at folks who create solutions within an organization. It's not aimed at Administrators, DBAs, Developers or the like, although I think all of those audiences could benefit from reading it. The solutions are made up, and not to a huge level of depth - nor should they be. It's a great exercise in thinking these kinds of things through in a structured way. The information is a bit dated, especially for Windows and SQL Azure. While the general concepts hold, the cloud platform from Microsoft is evolving so quickly that any printed book finds it hard to keep up with the improvements. I do have one quibble with the text - the chapters are a bit uneven. This is always a danger with multiple authors, but it shows up in a couple of chapters. I winced at one of the chapters that tried to take a more conversational, humorous style. This kind of academic work doesn't lend itself to that style. I recommend you get the book - and use it. I hope they keep it updated - I'll be a frequent customer. :)

    Read the article

  • 13.04 Logitech bluetooth speaker adapter pairing but no mixer output

    - by user1455622
    I had to change [General] Enable = Socket in /etc/bluetooth/audio.conf to get it to pair. But now that they are I don't get an output in pavucontrol. D: [pulseaudio] bluetooth-util.c: Registering /MediaEndpoint/HFPAG on adapter /org/bluez/3855/hci0. D: [pulseaudio] bluetooth-util.c: Registering /MediaEndpoint/HFPHS on adapter /org/bluez/3855/hci0. D: [pulseaudio] bluetooth-util.c: Registering /MediaEndpoint/A2DPSource on adapter /org/bluez/3855/hci0. D: [pulseaudio] bluetooth-util.c: Registering /MediaEndpoint/A2DPSink on adapter /org/bluez/3855/hci0. E: [pulseaudio] bluetooth-util.c: org.bluez.Media.RegisterEndpoint() failed: org.bluez.Error.AlreadyExists: Already Exists E: [pulseaudio] bluetooth-util.c: org.bluez.Media.RegisterEndpoint() failed: org.bluez.Error.AlreadyExists: Already Exists E: [pulseaudio] bluetooth-util.c: org.bluez.Media.RegisterEndpoint() failed: org.bluez.Error.AlreadyExists: Already Exists E: [pulseaudio] bluetooth-util.c: org.bluez.Media.RegisterEndpoint() failed: org.bluez.Error.AlreadyExists: Already Exists D: [pulseaudio] bluetooth-util.c: dbus: property 'State' changed to value 'disconnected' D: [pulseaudio] bluetooth-util.c: dbus: property 'State' changed to value 'disconnected' D: [pulseaudio] bluetooth-util.c: dbus: property 'State' changed to value 'disconnected' D: [pulseaudio] bluetooth-util.c: dbus: property 'State' changed to value 'disconnected' D: [pulseaudio] bluetooth-util.c: dbus: property 'State' changed to value 'connected' D: [pulseaudio] bluetooth-util.c: dbus: property 'State' changed to value 'connected' D: [pulseaudio] bluetooth-util.c: Unknown Bluetooth minor device class 0 D: [pulseaudio] module-card-restore.c: Not restoring profile for card bluez_card.C8_84_47_15_B7_34, because already set. I: [pulseaudio] module-card-restore.c: Restoring port latency offsets for card bluez_card.C8_84_47_15_B7_34. I: [pulseaudio] card.c: Created 2 "bluez_card.C8_84_47_15_B7_34" W: [pulseaudio] module-bluetooth-device.c: Profile has no transport D: [pulseaudio] core-subscribe.c: Dropped redundant event due to change event. I: [pulseaudio] card.c: Changed profile of card 2 "bluez_card.C8_84_47_15_B7_34" to off I: [pulseaudio] module.c: Loaded "module-bluetooth-device" (index: #22; argument: "address=C8:84:47:15:B7:34 profile=a2dp"). I: [alsa-source] alsa-source.c: Scheduling delay of 10,06ms, you might want to investigate this to improve latency... I: [alsa-source] ratelimit.c: 5 events suppressed I: [alsa-source] alsa-source.c: Overrun! I: [alsa-source] alsa-source.c: Increasing minimal latency to 2,00 ms D: [alsa-source] alsa-source.c: latency set to 20,00ms D: [alsa-source] alsa-source.c: hwbuf_unused=62008 D: [alsa-source] alsa-source.c: setting avail_min=442 What can I do to get it working? Regards,

    Read the article

  • Debugging .NET code called from X++ code in AX 2012

    - by ssmantha
    A very intriguing issue came to me: debugging .NET code called from X++ code in AX 2012. This was indeed a challenge to nail down. Luckily, the tools and a few concepts helped me achieve the task. Here it goes... We need seamless debugging back and forth between the AX debugger and Visual Studio. To enable this, we first need to check whether the dll to be debugged is present in the GAC; if so, we may need to uninstall it from there because of the order of preference in which .NET loads assemblies. Assemblies are loaded from the GAC first, and only then does the runtime check for public and private assemblies. Since the assembly in the GAC is always compiled with runtime optimizations, it is difficult to debug. We need to unhook this assembly from the GAC and then move on, relying on the .NET assembly loading patterns.
    Step 1: Remove the target assembly to debug from the GAC. Before that, stop all the AOS servers and close all instances of programs which rely on the AOT, e.g. all clients and even Visual Studio.
    Step 2: Build your sample code which is present in the AOT in debug mode and get the dll file along with the PDB files.
    Step 3: Place these files in the Server\..\Bin and Client\bin directories of the AX installation.
    Step 4: Configure Visual Studio.
    Step 4.1: Configure debugging options. In Visual Studio go to Debug -> Options and Settings -> Debug node -> General sub node and disable "Enable Just My Code (managed)".
    Step 4.2: Specify the symbol loading directory options. Specify the locations of the Client bin and Server bin directories of the installation; remember to specify the correct instance of the Server bin directory corresponding to your AOS.
    Step 4.3: Configure the project for debugging.
    Step 5: Ready to go - place your breakpoints in X++ and in .NET wherever necessary before this process... Run the Visual Studio project and it will invoke the AX client with your breakpoint hitting X++ code, and when you do a step-in using F11 the Visual Studio debugger will become active; from there onwards you will be able to debug the complete flow.
    Seamless debugging across debuggers is a really good and mostly underutilized feature; by using it we get improved troubleshooting and save a lot of time. Stay tuned for more in Advanced Debugging...
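    One complementary trick worth mentioning (my own addition, not part of the original post): in a debug build of the managed assembly you can prompt for a debugger the first time X++ calls into it, using the standard System.Diagnostics.Debugger class. The class and method below are purely hypothetical placeholders:

    using System.Diagnostics;

    public static class AxInteropHelper
    {
        // hypothetical entry point that X++ would call
        public static int ComputeSomething(int input)
        {
            // if no debugger is attached yet, offer to attach one (useful in debug builds only)
            if (!Debugger.IsAttached)
            {
                Debugger.Launch();
            }

            Debugger.Break();   // optional: stop right here once a debugger is attached
            return input * 2;
        }
    }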

    Read the article

  • SQL SERVER – BACKUPIO, BACKUPBUFFER – Wait Type – Day 14 of 28

    - by pinaldave
    Backup is the most important task for any database admin. Your data is at risk if you are not performing database backups. Honestly, I have seen many DBAs who know how to take backups but do not know how to restore them. (Sigh!) In this blog post we are going to discuss one of my real experiences with one of my clients – BACKUPIO. When I started to deal with it, I really had no idea how to fix the issue. However, after fixing it at two places, I think I know why this is happening, but at the same time I am not sure the fix is the best solution. The reality is that the fix is not a solution but a workaround (which is not optimal, but gets things done). From Book On-Line: BACKUPIO Occurs when a backup task is waiting for data, or is waiting for a buffer in which to store data. This type is not typical, except when a task is waiting for a tape mount. BACKUPBUFFER Occurs when a backup task is waiting for data, or is waiting for a buffer in which to store data. This type is not typical, except when a task is waiting for a tape mount. BACKUPIO and BACKUPBUFFER Explanation: These wait stats will occur when you are taking a backup to tape or any other extremely slow backup system. Reducing BACKUPIO and BACKUPBUFFER waits: In my recent consultancy, backup to tape was very slow, probably because the tape system was very old. When I explained the reason for this wait type during the consultancy, the owners immediately decided to replace the tape drive with an alternate system. They had a small SAN enclosure sitting unused on the side, which they decided to re-purpose. After a week, I received an email from their DBA saying that the wait stats had reduced drastically. At another location, my client was using a third party tool (please don't ask me the name of the tool) to take backups. This tool compresses the backup while taking it. I have had a very good experience with this tool almost all the time, except for this one experience. When I tried to take a backup using the native SQL Server compressed backup, there was a very small value on this wait type and the backup was much faster. However, when I attempted it with the third party backup tool, this value was very high again and the backup was taking much more time. The third party tool had many other features, but the client was not using them. We ended up using the native SQL Server compressed backup and it worked very well. If I see this wait type high again in a future consultancy, I will try to understand it in much more detail and probably come to a solid solution. Read all the posts in the Wait Types and Queue series. Note: The information presented here is from my experience and there is no way that I claim it to be accurate. I suggest reading Book OnLine for further clarification. All the discussion of Wait Stats in this blog is generic and varies from system to system. It is recommended that you test this on a development server before implementing it on a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

    Read the article

  • Great Expectations - Fusion HCM Highlights at OOW

    - by Kathryn Perry
    A guest post by Lisa Conley, Principal Product Strategy Manager, Fusion HCM, Oracle Applications Development.
    Oracle Open World is just around the corner! There's always so much to see and do and learn at the conference, so I want to share some of the 'don't miss' Fusion HCM highlights with you. (Use this tool to search by session number to get a full description.) For starters, we have several customers who will be sharing their Fusion HCM implementation stories. We'll kick off these presentations with a customer panel at 12:15 on Monday in Moscone West 2005 (CON9420). You'll hear from Zillow, the Gerson Lehrman Group, UBS, and ConAgra about their experiences with our products. Oracle partners MarketSphere (CON8581) and eVerge (CON3800) have implemented Fusion HCM themselves and will talk about how they'll use their experiences to help customers with their implementations (both are in Moscone West 2006). Beth Correa, CEO of Official Payroll Advisor, will highlight her favorite things about Oracle Fusion HCM Payroll on Tuesday at 11:45 in Moscone West 2006 (CON6691). And you'll get to hear from customers again when they speak with Steve Miranda in his Oracle Applications: Strategic Directions and Recommendations session on Tuesday at 1:15 in Moscone West 2002/2004 (CON11434). To bring it all together for you, we've listed all your Fusion HCM opportunities to learn and interact in this Focus On Document. I am really looking forward to the sessions on Human Capital Management in the Cloud. The Oracle Cloud combines the multiple product offerings into a single environment that leverages a common technology infrastructure, enabling users to focus on their business - not the business of managing environments. On Tuesday at 10:15 in Moscone West 2002/2004, there is a General Session entitled the Future of Oracle HCM -- Strategy and Roadmap (GEN9505). This will touch on all product lines. Fusion HCM will be highlighted in Gretchen Alarcon's Oracle HCM: Overview, Strategy, Customer Experiences, and Roadmap session on Monday at 12:15 in Moscone West 2005 (CON9410). Also on Tuesday at 1:15 in Moscone West 2006 there is a session focused on Talent Management and how you can try out these new products, co-existing with your current product set (CON9430). This is important in that you can test the waters before diving in. ConAgra will be sharing their experience in this session as well. And of course, if you want to have a personal demonstration, please come by the Oracle DEMOgrounds in West Exhibition Hall Level 1 or the Oracle Cloud Services Lounge at Moscone West Level 3, where our Oracle HCM Cloud Services experts will be ready to answer your questions. I hope you have a wonderful week in San Francisco.
    Lisa Conley
    Principal Product Strategy Manager, Fusion HCM
    Applications Development
    Oracle Corporation

    Read the article

< Previous Page | 336 337 338 339 340 341 342 343 344 345 346 347  | Next Page >