Search Results


  • Database Developer - October 2013 issue: Download Database 12c and related products

    - by Javier Puerta
    The October issue of the Database Application Developer newsletter is now available. The focus of this issue is on downloads of Database 12c and related products. (Full newsletter here)

    Get Ready to Download, Deploy and Develop for Oracle Database 12c

    This month we're focused on downloads. We've rounded up the top developer releases (both early adopter and beta releases) and the articles that will help you do more with Oracle 12c. See the technical content that will help you get started. If you're ready... away we go! — Laura Ramsey, Database and Developer Community, Oracle Technology Network Team

    FEATURED DOWNLOADS

    Download: Oracle Database 12c. According to Tom Kyte, the Oracle 12c release has some of the biggest enhancements to the core database since version 6. Check it out for yourself.

    Download: Oracle SQL Developer 4.0 Early Adopter 2 is here. Oracle SQL Developer is a free IDE that simplifies the development and management of Oracle Database. It is a complete end-to-end development platform for your PL/SQL applications that features a worksheet for running queries and scripts, a DBA console for managing the database, a reports interface, a complete data modeling solution, and a migration platform for moving your third-party databases to Oracle. If you are interested in checking out this new early adopter version, Oracle SQL Developer 4.0 EA is the place to go.

    Download: Oracle 12c Multitenant Self-Provisioning Application (beta). The beta is here. The Multitenant Self-Provisioning Application is an easy and productive way for DBAs and developers to get familiar with powerful PDB features, including create, clone, plug and unplug. There is no better time to start playing with PDBs. Oracle 12c Multitenant Self-Provisioning Application.

    Download: New! Updates to the Oracle Data Integration portfolio. Oracle GoldenGate 12c and Oracle Data Integrator 12c are now available: from real-time data integration, transactional change data capture, data replication, and transformations to high-volume, high-performance batch loads and event-driven, trickle-feed integration processes. Go here for all the details and links to downloads... and congratulations, Data Integration Team!

    Download: Oracle VM Templates for Oracle 12c. Features include support for Single Instance, Oracle Restart and Oracle RAC, and support for all current Oracle Database 11.2 versions as well as Oracle 12c, on Oracle Linux 5 Update 9 and Oracle Linux 6 Update 4. The Oracle 12c templates allow end-to-end automation for Flex Cluster, Flex ASM and PDBs. See how the Deploycluster tool was updated to support Single Instance and the new Oracle 12c features. Oracle VM Templates for Oracle Database.

    Download: Oracle SQL Developer Data Modeler 4.0 EA 3. If you're looking for a data modeling and database design tool that provides an environment for capturing, modeling, managing and exploiting metadata, it's time to check out Oracle SQL Developer Data Modeler. Oracle SQL Developer Data Modeler 4.0 EA V3 is here.

    Read the article

  • New Podcast Available: Product Value Chain Management: How Oracle is Taking the Lead on Next Gen Enterprise PLM

    - by Terri Hiskey
    A new podcast on how Oracle is taking the lead in enterprise PLM with our Product Value Chain solution is now available. In case you're not yet familiar with the concept of the Product Value Chain, it's an integrated business model powered by Oracle that offers executives the ability to collectively leverage Agile PLM, Product Data Hub, Enterprise Data Quality, AutoVue Enterprise Visualization, and other industry-leading Oracle applications for incremental value. In this quick, 10-minute podcast, you'll hear John Kelley, VP PLM Product Strategy, and Terri Hiskey, Director, PLM Product Marketing, discuss Oracle's vision for next-generation enterprise PLM: the Product Value Chain. http://feedproxy.google.com/~r/OracleAppcast/~3/jxAED7ugMEc/11525926_Enterprise_PLM_040612.mp3

    Read the article

  • django access to parent

    - by SledgehammerPL
    The model:

        class Product(models.Model):
            name = models.CharField(max_length=128)
            (...)
            def __unicode__(self):
                return self.name

        class Receipt(models.Model):
            name = models.CharField(max_length=128)
            (...)
            components = models.ManyToManyField(Product, through='ReceiptComponent')
            def __unicode__(self):
                return self.name

        class ReceiptComponent(models.Model):
            product = models.ForeignKey(Product)
            receipt = models.ForeignKey(Receipt)
            quantity = models.FloatField(max_length=9)
            unit = models.ForeignKey(Unit)
            def __unicode__(self):
                return unicode(self.quantity != 0 and self.quantity or '') + ' ' + unicode(self.unit) + ' ' + self.product.genitive

    And now I'd like to get a list of the most often used products:

        ReceiptComponent.objects.values('product').annotate(Count('product')).order_by('-product__count')

    Example result:

        [{'product': 3, 'product__count': 5}, {'product': 6, 'product__count': 4}, {'product': 5, 'product__count': 3}, {'product': 7, 'product__count': 2}, {'product': 1, 'product__count': 2}, {'product': 11, 'product__count': 1}, {'product': 8, 'product__count': 1}, {'product': 4, 'product__count': 1}, {'product': 9, 'product__count': 1}]

    It's almost what I need, but I'd prefer to get Product objects rather than raw product id values, because I'd like to use this in views.py for generating the list.
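
    One way to get Product objects directly is to run the aggregation from the Product side instead. This is a minimal sketch of that approach (a suggestion, not the asker's code; "num_uses" is a hypothetical name, and "receiptcomponent" assumes Django's default reverse-relation name for the ReceiptComponent.product foreign key):

        from django.db.models import Count

        # Annotate each Product with the number of ReceiptComponent rows that
        # reference it, then sort by that count, most-used first. Each element
        # is a real Product instance, usable directly in views.py.
        popular_products = (Product.objects
                            .annotate(num_uses=Count('receiptcomponent'))
                            .order_by('-num_uses'))

        for product in popular_products:
            print product.name, product.num_uses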

    Read the article

  • New Enhancements for InnoDB Memcached

    - by Calvin Sun
    In MySQL 5.6 we continued our development on InnoDB Memcached and completed a few widely desired features that make InnoDB Memcached competitive in more scenarios. Notably, these are: 1) support for multiple table mappings; 2) a background thread to auto-commit long-running transactions; 3) enhancements in binlog performance. Let's go over each of these features one by one, and in the last section we will go over a couple of internally performed performance tests.

    Support for multiple table mappings

    In our earlier release, all InnoDB Memcached operations were mapped to a single InnoDB table. In real life, users might want to use the InnoDB Memcached feature on different tables. Being able to access different tables at run time, with different mappings for different connections, is thus a very desirable feature, and in this GA release users can do both. We will discuss the key concepts and key steps in using this feature.

    1) The "mapping name" in the "get" and "set" commands. To allow InnoDB Memcached to map to a new table, the user (DBA) is still required to "pre-register" the table(s) in the InnoDB Memcached "containers" table (there is a security consideration behind this requirement). If you would like to know more about the "containers" table, please refer to my earlier blogs on blogs.innodb.com. Once registered, InnoDB Memcached will be able to look up such tables when they are referred to. Each registered table has a unique "registration name" (or mapping_name) corresponding to the "name" field in the "containers" table. To access these tables, the user includes the "registration name" in their get or set commands, in the form "get @@new_mapping_name.key"; the prefix "@@" is required to signal a mapped-table change. The key and the "mapping name" are separated by a configurable delimiter; by default it is ".". So the syntax is:

        get [@@mapping_name.]key_name
        set [@@mapping_name.]key_name

    or

        get @@mapping_name
        set @@mapping_name

    Here is an example. Let's set up three tables in the "containers" table. The first is a map to the InnoDB table "test/demo_test" with mapping name "setup_1":

        INSERT INTO containers VALUES ("setup_1", "test", "demo_test", "c1", "c2", "c3", "c4", "c5", "PRIMARY");

    Similarly, we set up table mappings for table "test/new_demo" with name "setup_2", and for table "my_database/my_demo" with name "setup_3":

        INSERT INTO containers VALUES ("setup_2", "test", "new_demo", "c1", "c2", "c3", "c4", "c5", "secondary_index_x");
        INSERT INTO containers VALUES ("setup_3", "my_database", "my_demo", "c1", "c2", "c3", "c4", "c5", "idx");

    To switch to table "my_database/my_demo" and get the value corresponding to "key_a", the user issues:

        get @@setup_3.key_a

    (this also outputs the value corresponding to key "key_a"), or simply:

        get @@setup_3

    Once this is done, the connection switches to the "my_database/my_demo" table until another table-mapping switch is requested, so it can continue issuing regular commands like:

        get key_b
        set key_c 0 0 7

    These DMLs will all be directed to the "my_database/my_demo" table. This also implies that different connections can have different bindings (to different tables).

    2) The delimiter. For the delimiter "." that separates the "mapping name" and the key value, we also added a configuration option in the "config_options" system table with the name "table_map_delimiter":

        INSERT INTO config_options VALUES ("table_map_delimiter", ".");

    So if the user wants a different delimiter, they can change it in the config_options table.

    3) The default mapping. Once we have multiple table mappings, there must always be a "default" mapping. We decided that if a mapping named "default" exists, it is chosen as the default mapping; otherwise, the first row of the containers table is chosen as the default. Please note that user tables can be repeated in the "containers" table (for example, if the user wants to access different columns of the table in different settings), as long as they use different mapping/configuration names in the first column, which is enforced by a unique index.

    4) The bind command. In addition, we extended the protocol with a bind command, whose usage is fairly straightforward. To switch to the "setup_3" mapping above, you simply issue:

        bind setup_3

    This switches the connection's InnoDB table to "my_database/my_demo".

    In summary, with this feature you can now access different tables from different sessions, and even within a single connection you can query different tables.

    Background thread to auto-commit long-running transactions

    This feature is related to the "batch" concept we discussed in earlier blogs. The "batch" feature lets us batch read and write operations and commit them only after a certain number of calls; the batch size is controlled by the configuration parameters "daemon_memcached_w_batch_size" and "daemon_memcached_r_batch_size". This can significantly boost performance, but it also comes with some disadvantages: for example, you will not be able to view "uncommitted" operations from the SQL end unless you set the transaction isolation level to READ UNCOMMITTED, and certain row locks will be held for extended periods of time, which can reduce concurrency. To deal with this, we introduced a background thread that auto-commits transactions if they have been idle for a certain amount of time (the default is 5 seconds). The background thread wakes up every second, loops through every "connection" opened by Memcached, and checks for idle transactions. If a transaction has been idle longer than the limit and is not being used, the thread commits it. The limit is configurable by changing "innodb_api_bk_commit_interval"; its default value is 5 seconds, the minimum is 1 second, and the maximum is 1073741824 seconds. With the help of this background thread, you no longer need to worry about long-running uncommitted transactions when setting daemon_memcached_w_batch_size and daemon_memcached_r_batch_size to a large number. This also reduces the number of locks held due to long-running transactions, further increasing concurrency.

    Enhancements in binlog performance

    As you may know, binlog operations are not performed by the InnoDB storage engine; rather, they are handled in the MySQL layer. To support binlog operations through InnoDB Memcached, we have to artificially create some MySQL constructs in order to access the binlog handler APIs. In the previous lab release, for simplicity, we opened and destroyed these MySQL constructs (such as THD) for each operation. This required us to set the "batch" size to 1 whenever the binlog was on, no matter what "daemon_memcached_w_batch_size" and "daemon_memcached_r_batch_size" were configured to. This put a big restriction on our ability to scale, and there was also quite a bit of overhead in creating and destroying these constructs that bogged performance down. With this release, we made the necessary changes to keep the MySQL constructs for as long as they are valid for a particular connection, so there are no longer repeated, redundant open and close (table) calls. Now, even with the binlog option enabled (innodb_api_enable_binlog), we can still batch transactions with daemon_memcached_w_batch_size and daemon_memcached_r_batch_size, and thus scale write/read performance. Although there is still overhead that keeps InnoDB Memcached from performing as fast as it does with the binlog turned off, it is much better than the previous release, and we are continuing to optimize this area to improve performance as much as possible.

    Performance study

    Amerandra of our System QA team conducted some performance studies comparing queries through the InnoDB Memcached connection and through the plain SQL end, with some interesting results. The tests were conducted on a "Linux 2.6.32-300.7.1.el6uek.x86_64 ix86 (64)" machine with 16 GB of memory, Intel Xeon 2.0 GHz X86_64 CPUs (2 CPUs, 4 cores each), and 2 RAID disks (1027 GB and 733.9 GB). The results are described in the following tables.

    Table 1: Performance comparison on Set operations

        Connections | 5.6.7-RC Memcached plugin, Set (QPS), memcached-threads=8*** | 5.6.7-RC* Set (QPS)** | X faster
        ------------|---------------------------------------------------------------|-----------------------|---------
        8           | 30,000                                                        | 5,600                 | 5.36
        32          | 59,000                                                        | 13,000                | 4.54
        128         | 68,000                                                        | 8,000                 | 8.50
        512         | 63,000                                                        | 6,800                 | 9.23

    * mysql-5.6.7-rc-linux2.6-x86_64
    ** The "set" operation as implemented in InnoDB Memcached involves a couple of DMLs: it first queries the table to see whether the "key" exists; if it does not, the new key/value pair is inserted, and if it does, the "value" field of the matching row (by key) is updated. So when used in the query above, it is effectively a precompiled stored procedure, and the query just executes that procedure.
    *** Added "--daemon_memcached_option=-t8" (the default is 4 threads).

    So we can see that with this "set" query, InnoDB Memcached runs 4.5 to 9 times faster than the MySQL server.

    Table 2: Performance comparison on Get operations

        Connections | 5.6.7-RC Memcached plugin, Get (QPS), memcached-threads=8 | 5.6.7-RC* Get (QPS) | X faster
        ------------|------------------------------------------------------------|---------------------|---------
        8           | 42,000                                                     | 27,000              | 1.56
        32          | 101,000                                                    | 55,000              | 1.83
        128         | 117,000                                                    | 52,000              | 2.25
        512         | 109,000                                                    | 52,000              | 2.10

    With the "get" query (the select query), Memcached performs 1.5 to 2 times faster than normal SQL.

    Summary

    In summary, we added several much-desired features to InnoDB Memcached in this release, allowing users to operate on different tables through the Memcached interface. We now also provide a background commit thread to commit long-running idle transactions, which lets users configure large batch writes/reads without worrying about large numbers of row locks being held or not being able to see (uncommitted) data. We also greatly enhanced performance when the binlog is enabled. We will continue making efforts in both performance enhancement and functionality to make InnoDB Memcached a good demo case for our InnoDB APIs. Jimmy Yang, September 29, 2012
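
    For illustration, here is a minimal client-side sketch of the mapping syntax using the python-memcached library (the library choice, host/port, and key names are assumptions for the example, not part of the release notes):

        import memcache

        # Connect to the memcached port exposed by the InnoDB Memcached plugin
        # (11211 is the memcached default; adjust to your configuration).
        mc = memcache.Client(['127.0.0.1:11211'])

        # Prefixing the key with @@mapping_name. switches this connection to
        # the table registered as "setup_3" in the containers table.
        value = mc.get('@@setup_3.key_a')

        # Subsequent commands on the same connection stay bound to that table.
        mc.set('key_c', 'new_val')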

    Read the article

  • The next next C++ [closed]

    - by Roger Pate
    It's entirely too early to speculate on what C++ will be like after C++0x, but idle hands make for wild predictions. What features would you find useful, and why? Is there anything in another language that would fit nicely into the state of C++ after 0x? What should be considered for the next TC and TR? (Mostly the TR, as the TC would depend more on what actually becomes the next standard.) Export was removed, rather than merely deprecated, in 0x (it remains a keyword). What other features carry so much baggage that they are more harmful than helpful?

    On the ISO standards process: I'm not involved in the C++ committee, and the process is unfortunately also a mystery to most programmers using C++. A few things are worth keeping in mind: There will be 10 years between standards, barring extremely exceptional circumstances. The standard can get "bug fixes" in the form of a Technical Corrigendum; this happened to C++98 with TC1, named C++03, which fixed "simple" issues such as making the explicit guarantee that std::vector stores items contiguously, as was always intended. The committee can also issue reports which can add to the language; this happened to C++98/03 with TR1 in 2005, which introduced the std::tr1 namespace.

    Read the article

  • How to Integrate Dropbox with Pages, Keynote, and Numbers on iPad

    - by The Geek
    The iWork apps are some of the best apps on the iPad, and each shows just how powerful a touchscreen device can be at the most basic of computing functions. In fact, there's not much to dislike about the iWork apps, except for one thing: importing and exporting files. You can open documents from email attachments, download them from websites, or import them from other apps like Dropbox. Once you've opened your file in Pages, Keynote, or Numbers on iPad, though, you can only send it via email, upload it to a WebDAV server or Apple's iDisk service, or wait to sync it with iTunes on your computer. Most other iOS office apps don't offer nearly as many features as the iWork apps, but they do offer deep integration with Dropbox, which makes it easy to view and edit your documents no matter where you are. Dropbox is the most popular file sync and sharing solution, and makes it absolutely painless to share folders with anyone around the world and keep your computers in sync. That is, the computers and applications that integrate with Dropbox. However, you don't need to give up on using Dropbox with the iWork apps on iPad. Today we're going to look at how you can enable WebDAV compatibility on your Dropbox account to let Pages integrate nearly the whole way with Dropbox. It's not a perfect solution, but it's much better than the default setup. So let's get started.

    Read the article

  • If the bug is 5+ years old, then is it a feature?

    - by Job
    Allow me to add details: I work at an institutional place with many coders, testers, QA analysts, product owners, etc., and here is something that bugs me: we have been able to sell crappy (albeit pretty functional) software for over a decade. It has many features and the product is competitive, but there are some serious bugs out there, as well as thousands of "paper cuts" - little annoyances that clients need to get used to. It pains me to look at some of these things, because I firmly believe that if computers do not help to make our lives easier, then we should not use them. I have confidence in my colleagues - they are smart, able, and can improve things when the focus is on doing that. But it can be difficult to file bugs against some old functionality without seeing them closed or forgotten. "It worked like that for eons" is a typical answer. Also, when QA does regression testing, they tend to look for anything that is different as much as for anything that does not seem right. So a fix to an old problem can be written up as a bug, because "it has been like that since before my time". The young coder in me thinks: rewrite this freaking thing! As someone who has had the opportunity to be close to sales and clients, I want to give this approach the benefit of the doubt. I am interested in your opinion/experience as well. Please try to consider risk, cost-to-benefit, and other non-technical factors.

    Read the article

  • 8 Backup Tools Explained for Windows 7 and 8

    - by Chris Hoffman
    Backups on Windows can be confusing. Whether you're using Windows 7 or 8, you have quite a few integrated backup tools to think about. Windows 8 made quite a few changes, too. You can also use third-party backup software, whether you want to back up to an external drive or back up your files to online storage. We won't cover third-party tools here — just the ones built into Windows.

    Backup and Restore on Windows 7

    Windows 7 has its own Backup and Restore feature that lets you create backups manually or on a schedule. You'll find it under Backup and Restore in the Control Panel. The original version of Windows 8 still contained this tool, and named it Windows 7 File Recovery. This allowed former Windows 7 users to restore files from those old Windows 7 backups or keep using the familiar backup tool for a little while. Windows 7 File Recovery was removed in Windows 8.1.

    System Restore

    System Restore on both Windows 7 and 8 functions as a sort of automatic system backup feature. It creates backup copies of important system and program files on a schedule or when you perform certain tasks, such as installing a hardware driver. If system files become corrupted or your computer's software becomes unstable, you can use System Restore to restore your system and program files from a System Restore point. This isn't a way to back up your personal files. It's more of a troubleshooting feature that uses backups to restore your system to its previous working state.

    Previous Versions on Windows 7

    Windows 7's Previous Versions feature allows you to restore older versions of files — or deleted files. These files can come from backups created with Windows 7's Backup and Restore feature, but they can also come from System Restore points. When Windows 7 creates a System Restore point, it will sometimes contain your personal files. Previous Versions allows you to extract these personal files from restore points. This only applies to Windows 7. On Windows 8, System Restore won't create backup copies of your personal files. The Previous Versions feature was removed on Windows 8.

    File History

    Windows 8 replaced Windows 7's backup tools with File History, although this feature isn't enabled by default. File History is designed to be a simple, easy way to create backups of your data files on an external drive or network location. File History replaces both Windows 7's Backup and Previous Versions features. Windows System Restore won't create copies of personal files on Windows 8. This means you can't actually recover older versions of files until you enable File History yourself — it isn't enabled by default.

    System Image Backups

    Windows also allows you to create system image backups. These are backup images of your entire operating system, including your system files, installed programs, and personal files. This feature was included in both Windows 7 and Windows 8, but it was hidden in the preview versions of Windows 8.1. After many user complaints, it was restored and is still available in the final version of Windows 8.1 — click System Image Backup on the File History Control Panel.

    Storage Space Mirroring

    Windows 8's Storage Spaces feature allows you to set up RAID-like features in software. For example, you can use Storage Spaces to set up two hard disks of the same size in a mirroring configuration. They'll appear as a single drive in Windows. When you write to this virtual drive, the files will be saved to both physical drives. If one drive fails, your files will still be available on the other drive. This isn't a good long-term backup solution, but it is a way of ensuring you won't lose important files if a single drive fails.

    Microsoft Account Settings Backup

    Windows 8 and 8.1 allow you to back up a variety of system settings — including personalization, desktop, and input settings. If you're signing in with a Microsoft account, OneDrive settings backup is enabled automatically. This feature can be controlled under OneDrive > Sync settings in the PC settings app. This feature only backs up a few settings. It's really more of a way to sync settings between devices.

    OneDrive Cloud Storage

    Microsoft hasn't been talking much about File History since Windows 8 was released. That's because they want people to use OneDrive instead. OneDrive — formerly known as SkyDrive — was added to the Windows desktop in Windows 8.1. Save your files here and they'll be stored online, tied to your Microsoft account. You can then sign in on any other computer, smartphone, tablet, or even via the web and access your files. Microsoft wants typical PC users "backing up" their files with OneDrive so they'll be available on any device.

    You don't have to worry about all these features. Just choose a backup strategy to ensure your files are safe if your computer's hard disk fails you. Whether it's an integrated backup tool or a third-party backup application, be sure to back up your files.
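
    As an aside, a system image backup can also be started from the command line with the built-in wbadmin tool. A minimal sketch (the target drive letter E: is an assumption; run it from an elevated Command Prompt):

        REM Back up the critical (operating system) volumes to drive E:
        REM (E: is an assumed target; point it at your own backup drive).
        wbadmin start backup -backupTarget:E: -allCritical -quiet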

    Read the article

  • 5 Android Keyboard Replacements to Help You Type Faster

    - by Chris Hoffman
    Android allows developers to replace its keyboard with their own keyboard apps. This has led to experimentation and great new features, like the gesture-typing feature that's made its way into Android's official keyboard after proving itself in third-party keyboards. This sort of customization isn't possible on Apple's iOS or even Microsoft's modern Windows environments. Installing a third-party keyboard is easy — install it from Google Play, launch it like any other app, and it will explain how to enable it.

    Google Keyboard

    Google Keyboard is Android's official keyboard, as seen on Google's Nexus devices. However, there's a good chance your Android smartphone or tablet comes with a keyboard designed by its manufacturer instead. You can install the Google Keyboard from Google Play, even if your device doesn't come with it. This keyboard offers a wide variety of features, including a built-in gesture-typing feature, as popularized by Swype. It also offers prediction, including full next-word prediction based on your previous word, and includes voice recognition that works offline on modern versions of Android. Google's keyboard may not offer the most accurate swiping feature or the best autocorrection, but it's a great keyboard that feels like it belongs in Android.

    SwiftKey

    SwiftKey costs $4, although you can try it free for one month. In spite of its price, many people who rarely buy apps have been sold on SwiftKey. It offers amazing auto-correction and word-prediction features. Just mash away on your touch-screen keyboard, typing as fast as possible, and SwiftKey will notice your mistakes and type what you actually meant to type. SwiftKey also now has built-in support for gesture-typing via SwiftKey Flow, so you get a lot of flexibility. At $4, SwiftKey may seem a bit pricey, but give the month-long trial a try. A great keyboard makes all the typing you do everywhere on your phone better. SwiftKey is an amazing keyboard if you tap-to-type rather than swipe-to-type.

    Swype

    While other keyboards have copied Swype's swipe-to-type feature, none have completely matched its accuracy. Swype has been designing a gesture-typing keyboard for longer than anyone else, and its gesture feature still seems more accurate than its competitors' gesture support. If you use gesture-typing all the time, you'll probably want to use Swype. Swype can now be installed directly from Google Play without the old, tedious process of registering a beta account and sideloading the Swype app. Swype offers a month-long free trial, and the full version is available for $1 afterwards.

    Minuum

    Minuum is a crowdfunded keyboard that is currently still in beta and only supports English. We include it here because it's so interesting — it's a great example of the kind of creativity and experimentation that happens when you allow developers to experiment with their own forms of keyboard. Minuum uses a tiny, minimal keyboard that frees up your screen space, so your touch-screen keyboard doesn't hog your device's screen. Rather than displaying a full keyboard on your screen, Minuum displays a single row of letters. Each letter is small and may be difficult to hit, but that doesn't matter — Minuum's smart autocorrection algorithms interpret what you intended to type rather than typing the exact letters you press. Just swipe to the right to type a space and accept Minuum's suggestion. At $4 for a beta version with no trial, Minuum may seem a bit pricey. But it's a great example of the flexibility Android allows. If there's a problem with this keyboard, it's that it's a bit late — in an age of 5″ smartphones with 1080p screens, full-size keyboards no longer feel as cramped.

    MessagEase

    MessagEase is another example of a new take on text input. Thankfully, this keyboard is available for free. MessagEase presents all letters in a nine-button grid. To type a common letter, you'd tap the button. To type an uncommon letter, you'd tap the button, hold down, and swipe in the appropriate direction. This gives you large buttons that can work well as touch targets, especially when typing with one hand. Like any other unique twist on a traditional keyboard, you'd have to give it a few minutes to get used to where the letters are and the new way it works. After giving it some practice, you may find this is a faster way to type on a touch-screen — especially with one hand, as the targets are so large.

    Google Play is full of replacement keyboards for Android phones and tablets. Keyboards are just another type of app that you can swap in. Leave a comment if you've found another great keyboard that you prefer using. Image Credit: Cheon Fong Liew on Flickr

    Read the article

  • Restful Services, oData, and Rest Sharp

    - by jkrebsbach
    After a great presentation by Jason Sheehan at MDC about RestSharp, I decided to implement it. RestSharp is a .Net framework for consuming restful data sources via either Json or XML. My first step was to put together a restful data source for RestSharp to consume. Staying entirely within .Net, I decided to use Microsoft's oData implementation, built on System.Data.Services. Natively, these support Json or atom+pub XML (XML with a few bells and whistles added on). There are three main steps for creating an oData data source:

    1) Override CreateDSPMetaData. This is where the metadata is returned. The metadata defines the structure of the data to return: the relationships between data objects, along with what properties the objects expose. The metadata can and should be cached somehow so that the structure is not rebuilt with every data request.

    2) Override CreateDataSource. The context contains the data the data source will publish. This method is the conduit which populates the metadata objects to be returned to the requestor.

    3) Implement static InitializeService. At this point we can set up security, along with properties of the web service (versioning, etc.).

    Here is a web service which publishes stock prices for various Products (stocks) in various Categories:

        namespace RestService
        {
            public class RestServiceImpl : DSPDataService<DSPContext>
            {
                private static DSPContext _context;
                private static DSPMetadata _metadata;

                /// <summary>
                /// Populate traversable data source
                /// </summary>
                protected override DSPContext CreateDataSource()
                {
                    if (_context == null)
                    {
                        _context = new DSPContext();

                        Category utilities = new Category(0);
                        utilities.Name = "Electric";
                        Category financials = new Category(1);
                        financials.Name = "Financial";

                        IList products = _context.GetResourceSetEntities("Products");

                        Product electric = new Product(0, utilities);
                        electric.Name = "ABC Electric";
                        electric.Description = "Electric Utility";
                        electric.Price = 3.5;
                        products.Add(electric);

                        Product water = new Product(1, utilities);
                        water.Name = "XYZ Water";
                        water.Description = "Water Utility";
                        water.Price = 2.4;
                        products.Add(water);

                        Product banks = new Product(2, financials);
                        banks.Name = "FatCat Bank";
                        banks.Description = "A bank that's almost too big";
                        banks.Price = 19.9; // This will never get to the client
                        products.Add(banks);

                        IList categories = _context.GetResourceSetEntities("Categories");
                        categories.Add(utilities);
                        categories.Add(financials);

                        utilities.Products.Add(electric);
                        utilities.Products.Add(water);
                        financials.Products.Add(banks);
                    }
                    return _context;
                }

                /// <summary>
                /// Setup rules describing published data structure - relationships
                /// between data, key field, other searchable fields, etc.
                /// </summary>
                protected override DSPMetadata CreateDSPMetadata()
                {
                    if (_metadata == null)
                    {
                        _metadata = new DSPMetadata("DemoService", "DataServiceProviderDemo");

                        // Define entity type product
                        ResourceType product = _metadata.AddEntityType(typeof(Product), "Product");
                        _metadata.AddKeyProperty(product, "ProductID");

                        // Only add properties we wish to share with end users
                        _metadata.AddPrimitiveProperty(product, "Name");
                        _metadata.AddPrimitiveProperty(product, "Description");

                        EntityPropertyMappingAttribute att = new EntityPropertyMappingAttribute("Name",
                            SyndicationItemProperty.Title, SyndicationTextContentKind.Plaintext, true);
                        product.AddEntityPropertyMappingAttribute(att);

                        att = new EntityPropertyMappingAttribute("Description",
                            SyndicationItemProperty.Summary, SyndicationTextContentKind.Plaintext, true);
                        product.AddEntityPropertyMappingAttribute(att);

                        // Define products as a set of product entities
                        ResourceSet products = _metadata.AddResourceSet("Products", product);

                        // Define entity type category
                        ResourceType category = _metadata.AddEntityType(typeof(Category), "Category");
                        _metadata.AddKeyProperty(category, "CategoryID");
                        _metadata.AddPrimitiveProperty(category, "Name");
                        _metadata.AddPrimitiveProperty(category, "Description");

                        // Define categories as a set of category entities
                        ResourceSet categories = _metadata.AddResourceSet("Categories", category);

                        att = new EntityPropertyMappingAttribute("Name",
                            SyndicationItemProperty.Title, SyndicationTextContentKind.Plaintext, true);
                        category.AddEntityPropertyMappingAttribute(att);

                        att = new EntityPropertyMappingAttribute("Description",
                            SyndicationItemProperty.Summary, SyndicationTextContentKind.Plaintext, true);
                        category.AddEntityPropertyMappingAttribute(att);

                        // A product has a category, a category has products
                        _metadata.AddResourceReferenceProperty(product, "Category", categories);
                        _metadata.AddResourceSetReferenceProperty(category, "Products", products);
                    }
                    return _metadata;
                }

                /// <summary>
                /// Based on the requesting user, can set up permissions to Read, Write, etc.
                /// </summary>
                public static void InitializeService(DataServiceConfiguration config)
                {
                    config.SetEntitySetAccessRule("*", EntitySetRights.All);
                    config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
                    config.DataServiceBehavior.AcceptProjectionRequests = true;
                }
            }
        }

    The objects prefixed with DSP come from the samples on the oData site: http://www.odata.org/developers. The products and categories objects are POCO business objects with no special modifiers. Three main options are available for defining the metadata of data sources in .Net:

    1) Generate an Entity Data Model (potentially directly from a SQL Server database). This requires the least amount of manual interaction and uses the edmx WYSIWYG editor to generate a data model. It can be directly tied to the SQL Server database and generated from the database if you want a data access layer tightly coupled with your database.

    2) Object model decorations. If you already have a POCO data layer, you can decorate your objects with attributes to statically inform the compiler how the objects are related. The disadvantage is that there are now tags strewn about your business layer that need to be updated as the business rules change.

    3) Programmatically construct the metadata object. This is the approach illustrated above in CreateDSPMetadata. It puts all relationship information into one central programmatic location, where business rules are constructed as the DSPMetadata response object is returned.

    Once you have your service up and running: RestSharp is designed for XML / Json, along with the native Microsoft library. There are currently some differences between how Jason made RestSharp expect XML and how atom+pub works, so I found better results currently with the Json implementation; modifying the RestSharp XML parser to make an atom+pub parser is fairly trivial though, so use whichever implementation works best for you. I put together a sample console app which calls the RestServiceImpl.svc service defined above (and assumes it to be running on port 2000). I used both RestSharp as a client and the default Microsoft oData client tools:

        namespace RestConsole
        {
            class Program
            {
                private static DataServiceContext _ctx;

                private enum DemoType
                {
                    Xml,
                    Json
                }

                static void Main(string[] args)
                {
                    // Microsoft implementation
                    _ctx = new DataServiceContext(new System.Uri("http://localhost:2000/RestServiceImpl.svc"));
                    var msProducts = RunQuery<Product>("Products").ToList();
                    var msCategory = RunQuery<Category>("/Products(0)/Category").AsEnumerable().Single();
                    var msFilteredProducts = RunQuery<Product>("/Products?$filter=length(Name) ge 4").ToList();

                    // RestSharp implementation
                    DemoType demoType = DemoType.Json;
                    var client = new RestClient("http://localhost:2000/RestServiceImpl.svc");
                    client.ClearHandlers(); // Remove all available handlers

                    // Set up handler depending on what situation dictates
                    if (demoType == DemoType.Json)
                        client.AddHandler("application/json", new RestSharp.Deserializers.JsonDeserializer());
                    else if (demoType == DemoType.Xml)
                    {
                        client.AddHandler("application/atom+xml", new RestSharp.Deserializers.XmlDeserializer());
                    }

                    var request = new RestRequest();
                    if (demoType == DemoType.Json)
                        request.RootElement = "d"; // service root element for json
                    else if (demoType == DemoType.Xml)
                    {
                        request.XmlNamespace = "http://www.w3.org/2005/Atom";
                    }

                    // Return all products
                    request.Resource = "/Products?$orderby=Name";
                    RestResponse<List<Product>> productsResp = client.Execute<List<Product>>(request);
                    List<Product> products = productsResp.Data;

                    // Find category for product with ProductID = 1
                    request.Resource = string.Format("/Products(1)/Category");
                    RestResponse<Category> categoryResp = client.Execute<Category>(request);
                    Category category = categoryResp.Data;

                    // Specialized queries
                    request.Resource = string.Format("/Products?$filter=ProductID eq {0}", 1);
                    RestResponse<Product> productResp = client.Execute<Product>(request);
                    Product product = productResp.Data;

                    request.Resource = string.Format("/Products?$filter=Name eq '{0}'", "XYZ Water");
                    productResp = client.Execute<Product>(request);
                    product = productResp.Data;
                }

                private static IEnumerable<TElement> RunQuery<TElement>(string queryUri)
                {
                    try
                    {
                        return _ctx.Execute<TElement>(new Uri(queryUri, UriKind.Relative));
                    }
                    catch (Exception ex)
                    {
                        throw ex;
                    }
                }
            }
        }

    Feel free to step through the code a few times, and attach a debugger to the service as well, to see how and where the context and metadata objects are constructed and returned. Pay special attention to the response object returned by the oData service. There are several properties of the RestRequest that can be used to help troubleshoot when the structure of the response is not exactly what you would expect.

    Read the article

  • Metro: Understanding Observables

    - by Stephen.Walther
    The goal of this blog entry is to describe how the Observer Pattern is implemented in the WinJS library. You learn how to create observable objects which trigger notifications automatically when their properties are changed. Observables enable you to keep your user interface and your application data in sync. For example, by taking advantage of observables, you can update your user interface automatically whenever the properties of a product change. Observables are the foundation of declarative binding in the WinJS library. The WinJS library is not the first JavaScript library to include support for observables. For example, both the KnockoutJS library and the Microsoft Ajax Library (now part of the Ajax Control Toolkit) support observables.

    Creating an Observable

    Imagine that I have created a product object like this:

        var product = {
            name: "Milk",
            description: "Something to drink",
            price: 12.33
        };

    Nothing very exciting about this product. It has three properties named name, description, and price. Now, imagine that I want to be notified automatically whenever any of these properties are changed. In that case, I can create an observable product from my product object like this:

        var observableProduct = WinJS.Binding.as(product);

    This line of code creates a new JavaScript object named observableProduct from the existing JavaScript object named product. This new object also has a name, description, and price property. However, unlike the properties of the original product object, the properties of the observable product object trigger notifications when they are changed. Each of the properties of the new observable product object has been changed into an accessor property which has both a getter and a setter. For example, the observable product's price property looks something like this:

        price: {
            get: function () {
                return this.getProperty("price");
            },
            set: function (value) {
                this.setProperty("price", value);
            }
        }

    When you read the price property, the getProperty() method is called, and when you set the price property, the setProperty() method is called. The getProperty() and setProperty() methods are methods of the observable product object. The observable product object supports the following methods and properties:

    · addProperty(name, value) – Adds a new property to an observable and notifies any listeners.
    · backingData – An object which represents the value of each property.
    · bind(name, action) – Enables you to execute a function when a property changes.
    · getProperty(name) – Returns the value of a property using the string name of the property.
    · notify(name, newValue, oldValue) – A private method which executes each function in the _listeners array.
    · removeProperty(name) – Removes a property and notifies any listeners.
    · setProperty(name, value) – Updates a property and notifies any listeners.
    · unbind(name, action) – Enables you to stop executing a function in response to a property change.
    · updateProperty(name, value) – Updates a property and notifies any listeners.

    So when you create an observable, you get a new object with the same properties as the existing object. However, when you modify the properties of an observable object, any listeners of the observable can be notified automatically that the value of a particular property has changed. Imagine that you change the value of the price property like this:

        observableProduct.price = 2.99;

    In that case, the following sequence of events is triggered:

    1. The price setter calls the setProperty("price", 2.99) method.
    2. The setProperty() method updates the value of the backingData.price property and calls the notify() method.
    3. The notify() method executes each function in the collection of listeners associated with the price property.

    Creating Observable Listeners

    If you want to be notified when a property of an observable object is changed, then you need to register a listener. You register a listener by using the bind() method like this:

        (function () {
            "use strict";

            var app = WinJS.Application;

            app.onactivated = function (eventObject) {
                if (eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) {

                    // Simple product object
                    var product = {
                        name: "Milk",
                        description: "Something to drink",
                        price: 12.33
                    };

                    // Create observable product
                    var observableProduct = WinJS.Binding.as(product);

                    // Execute a function when price is changed
                    observableProduct.bind("price", function (newValue) {
                        console.log(newValue);
                    });

                    // Change the price
                    observableProduct.price = 2.99;
                }
            };

            app.start();
        })();

    In the code above, the bind() method is used to associate the price property with a function. When the price property is changed, the function logs the new value of the price property to the Visual Studio JavaScript Console. The price property is associated with the function using the following line of code:

        // Execute a function when price is changed
        observableProduct.bind("price", function (newValue) {
            console.log(newValue);
        });

    Coalescing Notifications

    If you make multiple changes to a property – one change immediately following another – then separate notifications won't be sent. Instead, any listeners are notified only once: the notifications are coalesced into a single notification. For example, in the following code, the product price property is updated three times. However, only one message is written to the JavaScript Console; only the last value assigned to the price property is written:

        // Simple product object
        var product = {
            name: "Milk",
            description: "Something to drink",
            price: 12.33
        };

        // Create observable product
        var observableProduct = WinJS.Binding.as(product);

        // Execute a function when price is changed
        observableProduct.bind("price", function (newValue) {
            console.log(newValue);
        });

        // Change the price
        observableProduct.price = 3.99;
        observableProduct.price = 2.99;
        observableProduct.price = 1.99;

    Only the last value assigned to price, the value 1.99, appears in the console. If there is a time delay between changes to a property, then the changes result in different notifications. For example, the following code updates the price property every second:

        // Add 1 to price every second
        window.setInterval(function () {
            observableProduct.price += 1;
        }, 1000);

    In this case, separate notification messages are logged to the JavaScript Console window. If you need to prevent multiple notifications from being coalesced into one, you can take advantage of promises. I discussed WinJS promises in a previous blog entry: http://stephenwalther.com/blog/archive/2012/02/22/windows-web-applications-promises.aspx Because the updateProperty() method returns a promise, you can create a separate notification for each change in a property by using the following code:

        // Change the price
        observableProduct.updateProperty("price", 3.99)
            .then(function () {
                observableProduct.updateProperty("price", 2.99)
                    .then(function () {
                        observableProduct.updateProperty("price", 1.99);
                    });
            });

    In this case, even though the price is immediately changed from 3.99 to 2.99 to 1.99, separate notifications for each new value of the price property are sent.

    Bypassing Notifications

    Normally, if a property of an observable object has listeners and you change the property, the listeners are notified. However, there are certain situations in which you might want to bypass notification. In other words, you might need to change a property value silently, without triggering any functions registered for notification. If you want to change a property without triggering notifications, you should change it through the backingData property. The following code illustrates how you can change the price property silently:

        // Change the price silently
        observableProduct.backingData.price = 5.99;
        console.log(observableProduct.price); // Writes 5.99

    The price is changed to the value 5.99 by changing the value of backingData.price. Because the observableProduct.price property is not set directly, any listeners associated with the price property are not notified. When you change the value of a property by using the backingData property, the change happens synchronously. However, when you change the value of an observable property directly, the change is always made asynchronously.

    Summary

    The goal of this blog entry was to describe observables. In particular, we discussed how to create observables from existing JavaScript objects and bind functions to observable properties. You also learned how notifications are coalesced (and ways to prevent this coalescing). Finally, we discussed how you can use the backingData property to update an observable property without triggering notifications. In the next blog entry, we'll see how observables are used with declarative binding to display the values of properties in an HTML document.

    Read the article

  • In Scrum, should you split up the backlog in a functional backlog and a technical backlog or not?

    - by Patrick
    In our Scrum teams we use a backlog, which mostly contains functional topics but sometimes also contains technical topics. The advantage of having one backlog is that it becomes easy to choose the topics for the next sprint, but I have some questions: First, to me it seems more logical to have a separate technical backlog, where developers themselves can add purely technical items, like: we could improve performance in this method, this class lacks some technical documentation, etc. With one backlog, all developers always have to go through the product owner to have their topics added, which seems like additional, unnecessary work for the product owner. Second, if you have a product owner that only focuses on the purely functional items, the purely technical items (like missing technical documentation, code that erodes and should be refactored, classes that always give problems during debugging because they don't have a stable foundation and should be refactored, etc.) always end up at the end of the list because "they don't serve the customer directly". By having a separate technical backlog, and time reserved in every sprint for these purely technical items, we can improve the applications functionally, but also keep them healthy inside. What is the best approach? One backlog or two?

    Read the article

  • How to start a Software Company

    - by MeshMan
    I've always been interested in how software companies come to be. I find it extremely difficult once you're tied down with a car, a house, life, etc. Funding is always the biggest concern. To make this a bit more specific, I see two types: those offering a product/service, and those offering consultancy. One thing that bugs me about the product/service kind is that we all know how exhausting burning the candle at both ends is. Coding for 8-10 hours during the day and then coding on your own stuff in the evenings doesn't last long. No matter how passionate you are about your idea, simply put, coding day and night is a recipe for burnout. Is this a defeatist attitude, though? Can it be balanced? The consultancy kind isn't as tricky, in my honest opinion. I think once you have spent years and years in the industry building up relationships, contacts from contracting or moving around, and of course being involved in the community, then landing your first project as a consultant is surely easier than the product/service kind. I'd imagine friends could then join you as you take on bigger company projects, like an Agile implementation or TDD training, and off you go gaining bigger things. Please specify which company type you're answering about if you can't contribute to both. I'd like to hear everyone's experiences or ideas on any level for software company start-ups.

    Read the article

  • How to avoid Cartesian product in an INNER JOIN query?

    - by flhe
    I have 6 tables; let's call them a, b, c, d, e, f. Now I want to search all the columns (except the ID columns) of all tables for a certain word, let's say 'Joe'. What I did was build INNER JOINs over all the tables and then use LIKE to search the columns:

        INNER JOIN ... ON
        INNER JOIN ... ON
        ...etc.
        WHERE a.firstname ~* 'Joe'
           OR a.lastname ~* 'Joe'
           OR b.favorite_food ~* 'Joe'
           OR c.job ~* 'Joe'
        ...etc.

    The results are correct; I get all the columns I was looking for. But I also get a kind of Cartesian product: I get 2 or more rows with almost the same results. How can I avoid this? I want to have each row only once, since the results should appear in a web search.
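
    A sketch of the two usual fixes (a suggestion; the join keys like b.a_id are hypothetical, since the real schema isn't shown): either collapse the duplicated rows with SELECT DISTINCT, or test the related tables with EXISTS so the one-to-many joins never multiply the rows in the first place:

        -- Option 1: keep the joins, collapse the duplicates
        SELECT DISTINCT a.id, a.firstname, a.lastname
        FROM a
        JOIN b ON b.a_id = a.id
        JOIN c ON c.a_id = a.id
        WHERE a.firstname ~* 'Joe'
           OR a.lastname ~* 'Joe'
           OR b.favorite_food ~* 'Joe'
           OR c.job ~* 'Joe';

        -- Option 2: avoid the row multiplication entirely
        SELECT a.id, a.firstname, a.lastname
        FROM a
        WHERE a.firstname ~* 'Joe'
           OR a.lastname ~* 'Joe'
           OR EXISTS (SELECT 1 FROM b WHERE b.a_id = a.id AND b.favorite_food ~* 'Joe')
           OR EXISTS (SELECT 1 FROM c WHERE c.a_id = a.id AND c.job ~* 'Joe');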

    Read the article

  • Magento: Configurable product options not showing on a view page?

    - by Relja
    I have a multi-store Magento system, and strange things happen when I try to view a configurable product on my main store: the option select lists don't show up at all! That is the case for 95% of the products on the main store, but on the other stores it works fine?! I can't see what I'm doing wrong. All my products are configurable, all have simple products with options attached to them, all are set to be visible on all stores (the WebsiteIds attribute), all are enabled on all stores, and all simple products are in stock and have some stock quantity set. I think if I had done something wrong it would be like that on all stores, not just the main one. I'm totally clueless, please help. I've attached a couple of images to show the difference. http://img51.imageshack.us/img51/3224/59155765.jpg http://img196.imageshack.us/img196/8145/98963713.jpg
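
    Not a diagnosis, but one quick way to verify from the database that the main store's configurables really have usable children. A sketch against the standard Magento 1.x tables (the product ID 12345 is hypothetical; adjust to a real one):

        -- List a configurable product's child (simple) products and their
        -- stock status; options only render for children that are enabled,
        -- in stock, and assigned to the store's website.
        SELECT l.parent_id,
               p.entity_id AS child_id,
               p.sku,
               s.is_in_stock,
               s.qty
        FROM catalog_product_super_link AS l
        JOIN catalog_product_entity AS p ON p.entity_id = l.product_id
        JOIN cataloginventory_stock_item AS s ON s.product_id = p.entity_id
        WHERE l.parent_id = 12345;  -- hypothetical configurable product ID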

    Read the article

  • You Don’t Need to Install a Task Manager: How to Manage Running Apps on Android

    - by Chris Hoffman
    Google Play is full of task managers for Android. These utilities can show you apps running in the background, kill running apps, and otherwise manage your apps — but you don’t need to install any third-party software to do this. We’ll show you how to quickly and easily kill and manage your running apps using only the software included with your Android phone. Third-party task managers are unnecessary and many include harmful features, like task killers.    

    Read the article

  • Comparing Standard Editions of SQL Server

    - by RickHeiges
    Recently, I've been speaking with customers about upgrading SQL Server. Some customers have a lot of Standard Edition SQL Server 2005 / 2008 / 2008 R2 instances in their organization, and they want to see what features they would gain by upgrading to SQL Server 2012. Last week, I sent out some tweets to the #sqlhelp hashtag to see if someone had already put together a document or blog post comparing the Standard Editions. I was unable to discover anything out there that really focuses just on Standard...(read more)

    Read the article

  • How to Use Quick Toggles on Your Android Phone

    - by Chris Hoffman
    One of the big new features in Apple’s iOS 7 is Control Center, which allows you to quickly access and toggle common settings from anywhere. However, Android phones have had quick toggles for a long time. Android now has its own built-in quick toggles, while popular manufacturer-customized interfaces like Samsung’s TouchWiz have their own quick toggles, which work differently. You can also add custom quick toggles in different places.

    Read the article

  • Are non Turing-complete languages considered programming languages at all?

    - by user1598390
    Reading a recent question: Is it actually possible to have a 'useful' programming language that isn't Turing complete?, I've come to wonder whether non-Turing-complete programming languages are considered programming languages at all. Turing completeness requires, roughly, conditional branching plus iteration or recursion without a fixed bound and unbounded storage; merely having variables and for/while loops is not the whole story. Is a language that lacks the unbounded constructs still considered a programming language?
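    For a concrete feel of where the line sits, here is an illustrative sketch (Python used purely as notation; none of this is from the question). Languages restricted to counted loops, such as Hofstadter's BlooP or SQL without recursive CTEs, can still compute plenty of useful functions; what they forbid is unbounded search, which is exactly what costs them Turing completeness while guaranteeing that every program terminates:

        def bounded_power(base: int, exp: int) -> int:
            # Expressible with counted loops alone: the iteration count is
            # fixed before the loop starts, so termination is guaranteed.
            result = 1
            for _ in range(exp):
                result *= base
            return result

        def unbounded_search(predicate) -> int:
            # Needs a while-loop with no a-priori bound; a total
            # (non-Turing-complete) language cannot express this.
            n = 0
            while not predicate(n):
                n += 1
            return n

        print(bounded_power(2, 10))                    # 1024
        print(unbounded_search(lambda n: n * n > 50))  # 8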

    Read the article

  • Specifying and applying broad changes to a program

    - by Victor Nicollet
    How do you handle incomplete feature requests, when the ones asking for the feature cannot possibly write a complete request? Consider an imaginary situation. You are a tech lead working on a piece of software that revolves around managing profiles (maybe they're contacts in a CRM-type application, or employees in an HR application), with many operations being directly or indirectly performed on those profiles: edit fields, add comments, attach documents, send e-mail... The higher-ups decide that a lock functionality should be added, whereby a profile can be locked to prevent anyone else from doing any operations on it until it's unlocked; this feature would be used by security agents to prevent anyone from touching a profile pending a security audit. Obviously, such a feature interacts with many other existing features related to profiles. For example:
    - Can one add a comment to a locked profile?
    - Can one see e-mails that were sent by the system to the owner of a locked profile?
    - Can one see who recently edited a locked profile?
    - If an e-mail was in the process of being sent when the lock happened, is the sending canceled, delayed or performed as if nothing happened?
    - If I just changed a profile and click the "cancel" link on the confirmation, does the lock prevent the cancel or does it still go through?
    - In all of these cases, how do I tell the user that a lock is in place?
    Depending on the software, there could be hundreds of such interactions, and each interaction requires a decision: is the lock going to apply, and if it does, how will it be displayed to the user? The higher-ups asking for the feature probably only see a small fraction of these, so you will have a lot of questions coming up while you work on the feature. How would you and your team handle this?
    - Would you expect the higher-ups to come up with a complete description of all cases where the lock should apply (and how), and treat all other cases as if the lock did not exist?
    - Would you try to determine all potential interactions based on existing specifications and code, list them, and ask the higher-ups to make a decision on all those where the answer is not obvious?
    - Would you just start working and ask questions as they come up?
    - Would you try to change their minds and settle on a more easily described feature with similar effects?
    The information about existing features is, as I understand it, in the code. How do you bridge the gap between the decision-makers and that information they cannot access?

    Read the article

  • What common programming problems are best solved by using prototypes and closures?

    - by vemv
    As much as I understand both concepts, I can't see how I can take advantage of JavaScript's closures and prototypes aside from using them to create instantiable and/or encapsulated class-like blocks (which seems more of a workaround than an asset to me). Other JS features, such as functions-as-values or logical evaluation of non-booleans, are much easier to fall in love with... What common programming problems are best solved by using prototypal inheritance and closures?
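    One hedged illustration of the closure half (sketched in Python for consistency with the other examples here; the idiom is the same in JavaScript): a closure gives you private, per-instance state without defining a class, which is a common practical answer to "what are closures actually for":

        def make_rate_limiter(max_calls: int):
            # 'calls' is captured by the returned function and invisible to
            # callers: encapsulated state without a class.
            calls = 0
            def allow() -> bool:
                nonlocal calls
                calls += 1
                return calls <= max_calls
            return allow

        limiter = make_rate_limiter(2)
        print(limiter(), limiter(), limiter())  # True True False

    Prototypes answer a different problem: sharing behavior across many instances by delegation instead of per-instance copying, much as Python instances delegate attribute lookups to their class.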

    Read the article

  • Problem in using xpath with xslt having distinct values

    - by AB
    I have XML like:

        <items>
          <item>
            <products>
              <product>laptop</product>
              <product>charger</product>
              <product>Cam</product>
            </products>
          </item>
          <item>
            <products>
              <product>laptop</product>
              <product>headphones</product>
              <product>Photoframe</product>
            </products>
          </item>
          <item>
            <products>
              <product>laptop</product>
              <product>charger</product>
              <product>Battery</product>
            </products>
          </item>
        </items>

    and I am applying this XSLT to it (I can't change the XPath in the parameter, as it comes from somewhere else):

        <xsl:param name="xparameter"
                   select="items/item[products/product='laptop' and products/product='charger']"/>
        <xsl:template match="/">
          <xsl:for-each select="$xparameter">
            <xsl:for-each select="products/product[not(. = preceding::product)]">
              <xsl:sort select="."/>
              <xsl:value-of select="."/>,
            </xsl:for-each>
          </xsl:for-each>
        </xsl:template>

    I want the output to be: laptop charger Cam Battery. But I'm not getting the result I expect: the distinct-values part works fine on its own, yet something goes wrong when I add the 'and' clause to the parameter.
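    A minimal, runnable reproduction harness (my code, not the asker's; it assumes lxml is installed, which drives libxslt). The comment marks the usual suspect in this pattern: preceding::product walks every earlier product in document order, including products inside items that the $xparameter predicate excluded, so on larger documents the de-duplication can be computed against the wrong node set:

        from lxml import etree

        xml = etree.XML("""
        <items>
          <item><products><product>laptop</product><product>charger</product>
                <product>Cam</product></products></item>
          <item><products><product>laptop</product><product>headphones</product>
                <product>Photoframe</product></products></item>
          <item><products><product>laptop</product><product>charger</product>
                <product>Battery</product></products></item>
        </items>
        """)

        xslt = etree.XML("""
        <xsl:stylesheet version="1.0"
                        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <xsl:output method="text"/>
          <xsl:param name="xparameter"
                     select="items/item[products/product='laptop'
                             and products/product='charger']"/>
          <xsl:template match="/">
            <xsl:for-each select="$xparameter">
              <!-- preceding:: sees ALL earlier products in the document,
                   even those inside items the param filtered out -->
              <xsl:for-each select="products/product[not(. = preceding::product)]">
                <xsl:sort select="."/>
                <xsl:value-of select="."/>
                <xsl:text> </xsl:text>
              </xsl:for-each>
            </xsl:for-each>
          </xsl:template>
        </xsl:stylesheet>
        """)

        print(str(etree.XSLT(xslt)(xml)))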

    Read the article
