Search Results

Search found 15485 results on 620 pages for 'product updates'.


  • Silverlight 4 ComboBox - Binding to Nullable data (tried TargetNullValue but not working as expected)

    - by Laurence
    (Please note - I am a Silverlight beginner and am looking for the simplest solution here, e.g. one that doesn't involve writing/installing a replacement for the ComboBox control!) This is an issue with a Silverlight 4 application that uses the View Model (MVVM) approach. I have a simple form for editing a "Product" object. Product has a CategoryID property which is nullable (int?). A ComboBox is used to view and set the CategoryID - it is bound to an ObservableCollection of Categories. Product also has a number of non-nullable properties bound to TextBoxes.

    I want the user to see "N/A" in the ComboBox for a product with no category, and to use this "N/A" option to set CategoryID to null. So, I manually added a Category object with CategoryID=0 and CategoryName="N/A" to the collection; then I set TargetNullValue=0 in the SelectedValue binding of the ComboBox. My thinking was: when the ComboBox SelectedValue was bound to a null CategoryID it would substitute zero, and therefore select the "N/A" option. When editing a Product with a non-null CategoryID, everything works. However, when a null CategoryID is encountered, two problems occur:

    1. No option is selected in the ComboBox (it's blank).
    2. The ComboBox binding seems broken from this point onwards - any Product I subsequently edit (incl. ones with a non-null CategoryID) has nothing selected in the ComboBox (it's still populated with all categories - just no selected item).

    I've seen reports of problem #2 (here, here) but I was under the impression that #1 should have worked. What am I missing to get the "N/A" option to be selected? XAML for the ComboBox:

      <ComboBox x:Name="cboCategory"
                ItemsSource="{Binding colCategories, Mode=OneWay}"
                SelectedValuePath="CategoryID"
                DisplayMemberPath="CategoryName"
                SelectedValue="{Binding CurrentProduct.CategoryID, Mode=TwoWay, TargetNullValue=0}"
                Height="24" Width="344"></ComboBox>
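
    One workaround often suggested for this class of nullable-binding problem is to drop TargetNullValue and map null to the sentinel value in an explicit value converter instead. A minimal sketch, assuming the post's CategoryID=0 "N/A" item (the converter name is mine, not the poster's code):

      using System;
      using System.Globalization;
      using System.Windows.Data;

      public class NullToSentinelConverter : IValueConverter
      {
          // Model -> ComboBox: a null CategoryID selects the CategoryID=0 ("N/A") item
          public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
          {
              return value ?? 0;
          }

          // ComboBox -> model: selecting "N/A" (0) writes null back to CategoryID
          public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
          {
              if (value is int && (int)value == 0) return null;
              return value;
          }
      }

    It would be wired up as SelectedValue="{Binding CurrentProduct.CategoryID, Mode=TwoWay, Converter={StaticResource NullToSentinel}}" with the converter declared as a resource.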

    Read the article

  • LINQ to SQL: Reusable expression for property?

    - by coenvdwel
    Pardon me for being unable to phrase the title more exactly. Basically, I have three LINQ objects linked to tables. One is Product, the other is Company and the last is a mapping table Mapping to store which Company sells which Products and by which ID this Company refers to this Product. I am now retrieving a list of products as follows:

      var options = new DataLoadOptions();
      options.LoadWith<Product>(p => p.Mappings);
      context.LoadOptions = options;

      var products = (
          from p in context.Products
          select new
          {
              ProductID = p.ProductID,
              //BackendProductID = p.BackendProductID,
              BackendProductID = (p.Mappings.Count == 0) ? "None"
                  : (p.Mappings.Count > 1) ? "Multiple"
                  : p.Mappings.First().BackendProductID,
              Description = p.Description
          }).ToList();

    This does a single query retrieving the information I want. But I want to be able to move the logic behind the BackendProductID into the LINQ object so I can use the commented line instead of the annoyingly nested ternary operator statements, for neatness and re-usability. So I added the following property to the Product object:

      public string BackendProductID
      {
          get
          {
              if (Mappings.Count == 0) return "None";
              if (Mappings.Count > 1) return "Multiple";
              return Mappings.First().BackendProductID;
          }
      }

    The list is still the same, but it now does a query for every single Product to get its BackendProductID. The code is neater and re-usable, but the performance now is terrible. What I need is some kind of Expression or Delegate, but I couldn't get my head around writing one. It always ended up querying for every single product, still. Any help would be appreciated!
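
    One direction often suggested for this problem is to keep the logic as an expression tree, so LINQ to SQL can still translate it into the single query, rather than as a compiled property getter that runs per row. A rough sketch, not the poster's code:

      using System;
      using System.Linq;
      using System.Linq.Expressions;

      public static class ProductProjections
      {
          // The projection kept as data (an expression tree), so the
          // provider can translate it to SQL instead of invoking it per row.
          public static readonly Expression<Func<Product, string>> BackendProductID =
              p => p.Mappings.Count == 0 ? "None"
                 : p.Mappings.Count > 1 ? "Multiple"
                 : p.Mappings.First().BackendProductID;
      }

      // Usage sketch - still a single translated query:
      // var ids = context.Products.Select(ProductProjections.BackendProductID).ToList();

    Composing such an expression into a larger anonymous-type projection typically needs an expression-composition helper such as LinqKit's AsExpandable/Invoke.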

    Read the article

  • iPhone - Problem with in-app purchases

    - by Satyam svv
    I've created an iPhone app with in-app purchase. Now I'm in the testing phase. I created a provisioning profile for com.satyam.testapp. In iTunes Connect I created the application and uploaded the images, screenshots, description etc. I also created two IDs for in-app purchase: one is com.satyam.testapp.book1 and the other one is com.satyam.testapp.book5. I created a test account as well for verifying my in-app purchases. Using com.satyam.testapp I created a developer test profile and am using the same in my developed application. I logged out of the iTunes App Store account on my iPhone. Now I started running my application on my iPhone. It's saying that there are no items to purchase. But it's not even asking me for credentials where I'd enter the test account username and password... How do I debug it? Here's my delegate:

      - (void)productsRequest:(SKProductsRequest *)request didReceiveResponse:(SKProductsResponse *)response
      {
          NSArray *myProduct = [[NSArray alloc] initWithArray:response.products];
          for (int i = 0; i < [myProduct count]; i++)
          {
              SKProduct *product = [myProduct objectAtIndex:i];
              NSLog(@"Name: %@ - Price: %f", [product localizedTitle], [[product price] doubleValue]);
              NSLog(@"Product identifier: %@", [product productIdentifier]);
          }
          for (NSString *invalidProduct in response.invalidProductIdentifiers)
              NSLog(@"Problem in iTunes Connect configuration for product: %@", invalidProduct);

          [request autorelease];
          [myProduct release];
      }

    Read the article

  • jQuery each loop - using variables

    - by Sam
    I have a list of products. Each product has a title and a review link. Currently the titles link directly to the individual product page, and the review links go elsewhere. I'd like to use a jQuery each loop to cycle through each li, take the href from the title (the first link), and apply it to the review link (the second link), so they both point to the product page. Simplified code would be as follows:

      <ul>
        <li><a href="product1.html">Product 1</a><a href="review1.html">Review 1</a></li>
        <li><a href="product2.html">Product 2</a><a href="review2.html">Review 2</a></li>
        <li><a href="product3.html">Product 3</a><a href="review3.html">Review 3</a></li>
      </ul>

    I thought it would be something like the following:

      $("li").each(function(){
          var link = $("a:eq(0)").attr('href');
          $("a:eq(1)").attr("href", link);
      });

    But it always uses the same variable "link". Can someone help me out?

    Read the article

  • Flex 3 - Remove image flickering

    - by BS_C3
    Hello Community! I have an application with different components that are accessible through a viewstack in the main application. The main application looks like this:

      <Application>
        <Viewstack>
          <myComponent1/>
          <myComponent2/>
          <myComponent3/>
          . . .
        </Viewstack>
      </Application>

    In myComponent1, I have a horizontalList where the user can select a product. In myComponent2, I have two containers inside the component: a left container with a larger image of the product selected in myComponent1, and a right container with all the characteristics of the product. Both containers have an embedded background image. When I select a product in myComponent1, the application displays myComponent2. When the component is displayed, I first see the page without the large image of the product; then both containers flicker and the product image is displayed. How can I avoid this flickering? It's really annoying _< Thanks in advance for your help =) Regards. BS_C3

    Read the article

  • Do I have to implement Add/Delete methods in my NHibernate entities?

    - by Lisa
    This is a sample from the Fluent NHibernate website. Compared to the Entity Framework, I have Add methods in my POCO in this code sample using NHibernate. With the EF I did context.Add or context.AddObject etc... the context had the methods to put one entity into the other entity's collection! Do I really have to implement Add/Delete/Update methods (I do not mean the real database CRUD operations!) in an NHibernate entity?

      public class Store
      {
          public virtual int Id { get; private set; }
          public virtual string Name { get; set; }
          public virtual IList<Product> Products { get; set; }
          public virtual IList<Employee> Staff { get; set; }

          public Store()
          {
              Products = new List<Product>();
              Staff = new List<Employee>();
          }

          public virtual void AddProduct(Product product)
          {
              product.StoresStockedIn.Add(this);
              Products.Add(product);
          }

          public virtual void AddEmployee(Employee employee)
          {
              employee.Store = this;
              Staff.Add(employee);
          }
      }
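
    For context - my note, not from the Fluent NHibernate sample: NHibernate itself does not require these Add methods; they are plain domain-model helpers that keep both ends of the bidirectional association in sync, which raw collection access does not. A small usage sketch (the Product instance is hypothetical, since its definition isn't shown):

      var store = new Store { Name = "Main St" };
      var beer = new Product();   // hypothetical Product instance

      store.Products.Add(beer);   // one-sided: beer.StoresStockedIn never learns about store
      store.AddProduct(beer);     // helper keeps both collections consistent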

    Read the article

  • Repository Pattern Standardization of methods

    - by Nix
    All, I am trying to find out the correct definition of the repository pattern. My original understanding was this (extremely dumbed down):

    1. Separate your business objects from your data objects.
    2. Standardize access methods in the data access layer.

    I have really seen two different implementations.

    Implementation 1:

      public interface IRepository<T>
      {
          List<T> GetAll();
          void Create(T p);
          void Update(T p);
      }

      public interface IProductRepository : IRepository<Product>
      {
          // Extension methods if needed
          List<Product> GetProductsByCustomerID();
      }

    Implementation 2:

      public interface IProductRepository
      {
          List<Product> GetAllProducts();
          void CreateProduct(Product p);
          void UpdateProduct(Product p);
          List<Product> GetProductsByCustomerID();
      }

    Notice the first is a generic Get/Update/GetAll, etc.; the second is more of what I would define as "DAO"-like. Both share an abstraction over your data entities, which I like, but I can do the same with a simple DAO. However, the second piece - standardized access operations - I see value in: if you implemented this enterprise-wide, people would easily know the set of access methods for your repository. Am I wrong to assume that the standardization of access to data is an integral piece of this pattern? Rhino has a good article on implementation 1, and of course MS has a vague definition, and an example of implementation 2 is here.
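
    One way to reconcile the two shapes - my sketch, not from the post or the linked articles - is to let the generic interface carry the standardized operations and keep entity-specific queries on the derived interface; a concrete class then satisfies both (the in-memory list and the CustomerID filter are stand-ins for a real data source and a real query):

      using System.Collections.Generic;
      using System.Linq;

      public class ProductRepository : IProductRepository
      {
          // Stand-in for a real data source (ORM session, DbContext, etc.)
          private readonly List<Product> _store = new List<Product>();

          // Standardized operations from IRepository<Product>
          public List<Product> GetAll() { return _store.ToList(); }
          public void Create(Product p) { _store.Add(p); }
          public void Update(Product p) { /* flush changes through the ORM here */ }

          // Entity-specific query from IProductRepository
          public List<Product> GetProductsByCustomerID()
          {
              // Hypothetical filter - the post doesn't show Product's members
              return _store.Where(p => p.CustomerID > 0).ToList();
          }
      }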

    Read the article

  • JDO architecture: One to many relationship and cascading deleting

    - by user361897
    I'm new to object-oriented database designs and I'm trying to understand how I should be structuring my classes in JDO for Google App Engine, particularly one-to-many relationships. Let's say I'm building a structure for a department store where there are many departments, and each department has many products. So I'd want to have a class called Department, with a variable that is a list of a Product class.

      @PersistenceCapable
      public class Department {
          @PrimaryKey
          @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
          private String deptID;

          @Persistent
          private String departmentName;

          @Persistent
          private List<Product> products;
      }

      @PersistenceCapable
      public class Product {
          @PrimaryKey
          @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
          private String productID;

          @Persistent
          private String productName;
      }

    But one Product can be in more than one Department (like a battery could be in electronics and household supplies). So the next question is, how do I avoid duplicating data in the OOD world and have only one copy of product data across numerous departments? And the next question is, let's say I delete a particular product - how does each of the departments know it was deleted?

    Read the article

  • RIA Service/OData... "Requests that attempt to access a single element using key values from a result set are not supported"

    - by user327911
    I've recently started working up a sample project to play with an OData feed coming from a RIA service. I am able to view the feed and the metadata via any web browser; however, if I try to perform certain query operations on the feed I receive "unsupported" exceptions.

    The sample OData feed exposes a ProductSet at http://localhost:50880/Services/Rebirth-Web-Services-ProductService.svc/OData/ProductSet/, with entries such as http://localhost:50880/Services/Rebirth-Web-Services-ProductService.svc/OData/ProductSet(guid'b0a2b170-c6df-441f-ae2a-74dd19901128') ("Product 0", "Type 1", "Active").

    Sample service:

      [EnableClientAccess()]
      public class ProductService : DomainService
      {
          [Query(IsDefault = true)]
          public IQueryable<Product> GetProducts()
          {
              IList<Product> products = new List<Product>();
              for (int i = 0; i < 90; i++)
              {
                  Product product = new Product
                  {
                      Id = Guid.NewGuid(),
                      Name = "Product " + i.ToString(),
                      ProductType = i < 30 ? "Type 1" : ((i > 30 && i < 60) ? "Type 2" : "Type 3"),
                      Status = i % 2 == 0 ? "Active" : "NotActive"
                  };
                  products.Add(product);
              }
              return products.AsQueryable();
          }
      }

    If I provide the URL http://localhost:50880/Services/Rebirth-Web-Services-ProductService.svc/OData/ProductSet(guid'b0a2b170-c6df-441f-ae2a-74dd19901128') to my web browser I receive the following: "Requests that attempt to access a single element using key values from a result set are not supported." I'm new to RIA and OData. Could this be something as simple as my web browsers not supporting this type of querying on the result set, or something else? Thanks ahead! Corey

    Read the article

  • Reading HTML header info of files via JS

    - by Morten Repsdorph Husfeldt
    I have a product list that is generated in ASP. I have product descriptions for each product in an HTML file. Each HTML file is named <product.id>.html. Each HTML file is only 1-3 KB. Within the HTML file are <title> and <meta name="description" content="..." />. I want to access these in an efficient way so that I can output them as e.g.:

      document.write(<product.id>.html.title);
      document.write(<product.id>.html.description);

    I have a working solution for the individual products, where I use the description file - but I hope to find a more efficient / simpler approach. Preferably, I want to avoid having 30+ hidden iframes - Google might think that I am trying to tamper with search results and blacklist my page. Current code:

      <script type="text/javascript">
      document.getElementById('produkt').onload = function(){
          var d = window.frames[frame].document;
          document.getElementById('pfoto').title = d.title ? d.title : ' ';
          document.getElementById('pfoto').alt = d.getElementsByName('description')[0] ? d.getElementsByName('description')[0].getAttribute('content', 0) : ' ';
          var keywords = d.getElementsByName('keywords')[0] ? d.getElementsByName('keywords')[0].getAttribute('content', 0) : ' ';
      };
      </script>

    Read the article

  • Serializing a class containing a custom class

    - by Netfangled
    I want to serialize an object as XML that contains other custom classes. From what I understand (I've been reading MSDN and SO mostly), the XmlSerializer doesn't take this into account. This is the line that's confusing me:

      "XML serialization serializes only the public fields and property values of an object into an XML stream. XML serialization does not include type information. For example, if you have a Book object that exists in the Library namespace, there is no guarantee that it will be deserialized into an object of the same type."

    Taken from MSDN, here. For example, I want to serialize an object of type Order, but it contains a list of Products, and each one contains an object of type Category:

      class Order
      {
          List<Product> products;
      }

      class Product
      {
          Category type;
      }

      class Category
      {
          string name;
          string description;
      }

    And I want my Order object to be serialized like so:

      <Order>
        <Product>
          <Category Name="">
            <Description></Description>
          </Category>
        </Product>
        <Product>
          <Category Name="">
            <Description></Description>
          </Category>
        </Product>
      </Order>

    Does the XmlSerializer already do this? If not, is there another class that does, or do I have to define the serialization process myself?
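
    For reference - my sketch, not from the post: XmlSerializer does recurse into nested public members, and the shape above can be approximated with serialization attributes, assuming the fields are exposed as public properties (private fields are skipped):

      using System;
      using System.Collections.Generic;
      using System.Xml.Serialization;

      public class Order
      {
          [XmlElement("Product")]   // one <Product> element per item, no wrapper element
          public List<Product> Products { get; set; }
      }

      public class Product
      {
          [XmlElement("Category")]
          public Category Type { get; set; }
      }

      public class Category
      {
          [XmlAttribute("Name")]
          public string Name { get; set; }

          [XmlElement("Description")]
          public string Description { get; set; }
      }

      // Usage sketch:
      // new XmlSerializer(typeof(Order)).Serialize(Console.Out, order);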

    Read the article

  • Advanced Query with Join

    - by user1462589
    I'm trying to convert a product table that contains all the details of the product into separate tables in SQL. I've got everything done except for duplicated descriptor details. The problem I am having is that all the products have size/color/style/other values that many other products share. I want to have only one size or color descriptor for all the items and reuse its "ID" across all the products - which I believe makes it a parent key to the Product ID's foreign key. The only problem is that every descriptor would have multiple foreign keys assigned to it. So I was thinking, on the fly, to just skip figuring out a parent foreign key for each descriptor and instead check whether that descriptor exists, and if it does, use its key for the descriptor.

    Data table:

      PI | Colo | Sz | OTHER
      1  | Blue | 5  | Vintage
      2  | Blue | 6  | Vintage
      3  | Blac | 5  | Simple
      4  | Blac | 6  | Simple

    Its destination table is this:

      DI | Description
      1  | Blue
      2  | Blac
      3  | 5
      4  | 6
      6  | Vintage
      7  | Simple

      Select Data.Table
      Unique.Data.Table.Colo
      Unique.Data.Table.Sz
      Unique.Data.Table.Other

    Then the dual part of the question: after we create all the descriptors, how do we do a new query and assign the product ID to the descriptors?

      PI | DI
      1  | 1
      1  | 3
      1  | 4
      2  | 1
      2  | 3
      2  | 4

    By figuring out how to do this I should be able to duplicate this pattern for all 300+ columns in the product. Some of these fields are 60+ characters large, so it's going to save a ton of space. Do I use an array?

    Read the article

  • Fidelity Investments Life Insurance Executive Weighs in on Policy Administration Modernization

    - by helen.pitts(at)oracle.com
    James Klauer, vice president of Client Services Technology at Fidelity Investments Life Insurance, weighs in on the rationale and challenges associated with policy administration system replacement in this month's digital issue of Insurance Networking News.

    In "The Policy Administration Replacement Quandary," Klauer shared the primary business benefit that can be realized by adopting a modern policy administration system - a timely topic given that recent industry analyst surveys indicate policy administration replacement and modernization will continue to be a top priority for insurers this year.

    "Modern policy administration systems are more flexible than systems of the past," Klauer says in the article. "This has allowed us to shorten our delivery time for new products and product changes. We have also had a greater ability to integrate with other systems and to deliver process efficiencies." Klauer goes on to advise that insurers ensure they have a solid understanding of the requirements when replacing their legacy policy administration system. "If you can afford the time, take the opportunity to re-engineer your business processes. We were able to drastically change our death processes, introducing automation and error-proofing." Click here to read more of Klauer's insights and recommendations for best practices in the publication's "Ask & Answered" column.

    You also can learn more about the benefits of an adaptive, rules-driven approach to policy administration and how to mitigate risks associated with system replacement by attending the free Oracle Insurance Virtual Summit: Fueling the Adaptive Insurance Enterprise, 10:00 a.m. - 6:00 p.m. EST, Wednesday, January 26. Insurance Networking News and Oracle Insurance have teamed up to bring you this first-of-its-kind event. This year's theme, "Fueling the Adaptive Insurance Enterprise," will focus on bringing you information about exciting new technology concepts, which can help your company react more quickly to new market opportunities and, ultimately, grow the business. Visit virtual booths and chat online with Oracle product specialists, network with other insurers, learn about exciting new product announcements, win prizes, and much more - all without leaving your office. Be sure to attend the on-demand session, "Adapt, Transform and Grow: Accelerate Speed to Market with Adaptive Insurance Policy Administration," hosted by Kate Fowler, product strategy director for Oracle Insurance Policy Administration for Life and Annuity. Register Now!

    Helen Pitts is senior product marketing manager for Oracle Insurance's life and annuities solutions.

    Read the article

  • Book review: Microsoft System Center Enterprise Suite Unleashed

    - by BuckWoody
    I know, I know - what's a database guy doing reading a book on System Center? Well, I need it from time to time. System Center is actually a collection of about seven different products that you can use to manage and monitor your software and hardware, from drive space through Microsoft Office, UNIX systems, and yes, SQL Server. It's that last part I care about the most, and so I've dealt with Data Protection Manager and System Center Operations Manager (I call it SCOM) in SQL Server. But I wasn't familiar with the rest of the suite, nor was I as familiar as I needed to be with the "Essentials" release - a separate product that groups together the main features of System Center into a single offering for smaller organizations. These companies usually run with a smaller IT shop, so they sometimes opt for this product to help them monitor everything, including SQL Server.

    So I picked up "Microsoft System Center Enterprise Suite Unleashed" by Chris Amaris and a cast of others. I don't normally like to get a technical book by multiple authors - I just find that most of the time it's quite jarring to switch from author to author - but I think this group did pretty well here. The first chapter on introducing System Center has helped me talk with others about what the product does, and which pieces fit well together with SQL Server. The writing is well done, and I didn't find a jump from author to author as I went along. The information is sequential, meaning that they lead you from install to configuration and then use. It's very much a concepts-and-how-to book, and a big one at that - over 950 pages of learning! It was a pretty quick read, though, since I skipped the installation parts and there are lots of screenshots.

    While I'm not sure you'd be an expert on the product when you finish reading this book, I would say you're more than halfway there. It suits someone who learns best through examples, since it has a lot of step-by-step walkthroughs. I recommend that you take a look if you have to interact with this product, or if you are a smaller shop and you're the primary IT resource. The last few chapters deal with System Center Essentials, and honestly that was the best part of the book for me.

    Read the article

  • Getting from a user-story to code while using TDD (scrum)

    - by Ittai
    I'm getting into Scrum and TDD and I think I have some confusion which I'd like to get your feedback about. Let's assume I have a user story in my backlog; in order for me to start developing it as part of TDD I need to have requirements, right so far? Is it true to say that the product manager and the QA should be responsible for taking the user story and breaking it down into acceptance tests? I think the above is true since the acceptance tests need to be formal, so they can be used as tests, but also human-readable, so that the product manager can approve that they are the requirements, right? Is it also true that I later take these acceptance tests and use them as my requirements, i.e. they are a set of use cases which I implement (through TDD)? I hope I'm not making too much of a mess, but that's the current flow I have in mind right now.

    Update: I think my initial intentions were unclear so I'll try to rephrase. I want to know more details about the Scrum flow of turning a user story into code while using TDD. The starting point is obvious: a user surfaces a need (or the user's representative, such as the product manager), which is a short 1-2 line description in the known format, and that is added to the product backlog. When there is a sprint planning meeting, user stories are taken from the backlog and assigned to developers. In order for a developer to write code they need requirements (especially in TDD, since the requirements are what the tests are derived from). When, by whom and into which format are the requirements compiled? What I had in mind was that the product manager and QA define the requirements via acceptance tests (I'm thinking of automated ones using FitNesse or the sort, but that's not the core, I think) which help to serve two purposes at the same time:

    1. They define "done" properly.
    2. They give a developer something to derive tests from.

    I wasn't sure when these should be written (if before the sprint in which the story is picked, that might be a waste, since additional information will arrive or the story won't be picked; if during the iteration, the developer might get stuck waiting for them...)

    Read the article

  • Great Expectations - Fusion HCM Highlights at OOW

    - by Kathryn Perry
    A guest post by Lisa Conley, Principal Product Strategy Manager, Fusion HCM, Oracle Applications Development

    Oracle Open World is just around the corner! There's always so much to see and do and learn at the conference, so I want to share some of the "don't miss" Fusion HCM highlights with you. (Use this tool to search by session number to get a full description.) For starters, we have several customers who will be sharing their Fusion HCM implementation stories. We'll kick off these presentations with a customer panel at 12:15 on Monday in Moscone West 2005 (CON9420). You'll hear from Zillow, the Gerson Lehrman Group, UBS, and ConAgra about their experiences with our products. Oracle partners MarketSphere (CON8581) and eVerge (CON3800) have implemented Fusion HCM themselves and will talk about how they'll use their experiences to help customers with their implementations (both are in Moscone West 2006). Beth Correa, CEO of Official Payroll Advisor, will highlight her favorite things about Oracle Fusion HCM Payroll on Tuesday at 11:45 in Moscone West 2006 (CON6691). And you'll get to hear from customers again when they speak with Steve Miranda in his Oracle Applications: Strategic Directions and Recommendations session on Tuesday at 1:15 in Moscone West 2002/2004 (CON11434). To bring it all together for you, we've listed all your Fusion HCM opportunities to learn and interact in this Focus On document.

    I am really looking forward to the sessions on Human Capital Management in the Cloud. The Oracle Cloud combines the multiple product offerings into a single environment that leverages a common technology infrastructure, enabling users to focus on their business - not the business of managing environments. On Tuesday at 10:15 in Moscone West 2002/2004, there is a general session entitled The Future of Oracle HCM -- Strategy and Roadmap (GEN9505). This will touch on all product lines. Fusion HCM will be highlighted in Gretchen Alarcon's Oracle HCM: Overview, Strategy, Customer Experiences, and Roadmap session on Monday at 12:15 in Moscone West 2005 (CON9410). Also on Tuesday at 1:15 in Moscone West 2006 is a session focused on Talent Management and how you can try out these new products, coexisting with your current product set (CON9430). This is important in that you can test the waters before diving in. ConAgra will be sharing their experience in this session as well.

    And of course, if you want a personal demonstration, please come by the Oracle DEMOgrounds in West Exhibition Hall Level 1 or the Oracle Cloud Services Lounge at Moscone West Level 3, where our Oracle HCM Cloud Services experts will be ready to answer your questions. I hope you have a wonderful week in San Francisco.

    Lisa Conley
    Principal Product Strategy Manager, Fusion HCM
    Applications Development
    Oracle Corporation

    Read the article

  • Networking Guidelines

    - by ACShorten
    One of the things I have noticed in my years in IT is the changes in networking. In the past, networking was pretty simple, with the host name and name resolution (via DNS) being straightforward. Some sites still use this simple networking setup. These days, more complex name resolution, proxies, firewalls, demarcation and virtualization can make networking more complex. This can cause issues when installing products with built-in networking that can frustrate even seasoned veterans. I have put together a few basic guidelines to hopefully help along with product installation and getting a product to operate in a somewhat complex network setup.

    All the components of the product (including the infrastructure) need to communicate via a network (even if it is within a local machine/host):

    - Ensure any host names referred to within configuration files are accessible via your networking setup. This may mean defining the hosts to the machines, to the DNS for name resolution, and even to your firewall to allow machines to communicate within your network.
    - Make sure the ports used for any of the infrastructure are accessible (even through your firewall) and are unique within the host. Port duplication can cause the product to fail on startup as the port is already in use.
    - If there are still issues, consider using localhost as your host name. I have used this in so many situations that I tend to use it now as a default anytime I install anything myself. Most Oracle products suggest using localhost when using dynamic hosts or dynamic IP addresses, and this is no different for the Oracle Utilities Application Framework. If you do use localhost, then installing a loopback adapter for the operating system is recommended to force networking to a minimum. Usually localhost resolves to 127.0.0.1.
    - When using multiple network connections, especially in a virtualized environment, ensure the host and ports used are relevant for the network cards you have set up. One of the common issues is finding that the product is using a virtualized network card, only to find that it is not set up for correct networking.
    - If you are using the batch component, do not forget to ensure that the multicast protocol is enabled on your host and that the multicast address and port number specified are valid and accessible from all machines in the batch cluster (if clustering is used). The same advice applies if you are using unicast, where each host/port combination should be accessible.

    Hopefully these basic networking recommendations will help minimize any networking issues you might encounter.
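
    As a quick illustration of the port-accessibility checks above - my sketch, not from the post, with placeholder host and port values to be replaced by the ones in your configuration files:

      using System;
      using System.Net.Sockets;

      class PortCheck
      {
          static void Main()
          {
              // Hypothetical host/port; substitute the values from your
              // product's configuration files.
              string host = "localhost";
              int port = 6500;

              try
              {
                  using (var client = new TcpClient())
                  {
                      client.Connect(host, port); // throws if unreachable or blocked
                      Console.WriteLine("{0}:{1} is reachable", host, port);
                  }
              }
              catch (SocketException ex)
              {
                  Console.WriteLine("{0}:{1} not reachable: {2}", host, port, ex.Message);
              }
          }
      }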

    Read the article

  • The Case for Complimentary Software Copies

    - by GGBlogger
    As the Geriatric Geek you can understand that I've been writing and studying for over 60 years. That means that I've seen insane changes in the computer software industry. I've made the joke that I get a new college education every 6 months or so. Of course that's an exaggeration, but it doesn't make the feeling go away. I have a long-standing and strong relationship with Microsoft, so I'm armed with virtually every tool they make. It also means that I have access to tons of training material. But here's the rub...

    Last year I started a definitive read of Professional Visual Basic 2008. The purpose was to fill in holes in my understanding of various things. I'm currently on page 1119 of some 1400 pages. During this sojourn I've decided that the future is web related, which is to say that the future of "thick client" applications running as Windows applications is likely to slowly disappear. To that end I've taken a side trip or two into the world of ASP (including XML), Silverlight and cloud development. After carefully avoiding (that's tongue in cheek) XML for years, I finally had to bite the bullet, so to speak, and start learning XML in earnest. The most recent result of that was trial downloads of Altova's MissionKit 2010 for Software Architects and Liquid Technologies' Liquid XML Studio Developer Edition. These are both beautiful products and I want to learn them and write about them. Now comes the rub...

    While 30-day evaluations are generous in allowing casual users to assess these technologies for purchase, they are NOT long enough to allow an author to evaluate, learn and ultimately write about them. Even if I devoted the full 30 days to learning, using and writing about, say, Altova's suite, I wouldn't have enough time. Liquid XML may be a little easier to learn (one product as opposed to 8). Add to that the fact that I frequently get sidetracked to add to my kit and it really blows out. It can be extremely frustrating when I've devoted hours to a project and suddenly discover that to complete it I will need to either purchase a license or abandon the project. Since my livelihood does not depend on the product, I end up abandoning the project and moving on.

    So to the folks from whom I request complimentary copies: I guarantee that if I convert your product to doing paid development work I will purchase a license, but as long as I am using your product to study for the purpose of writing samples, teaching its use or otherwise promoting your product to other paying customers, I ask that you give me a license so that I can do that without facing the dreaded expiration of a 30-day trial.

    Read the article

  • USB Flash Drive not Detected on 12.10 x64

    - by Falguni Roy
    My MediaTek USB flash drive is not getting detected. The output of lsusb:

      falguni@falguni-M61PME-S2P:~$ lsusb
      Bus 002 Device 002: ID 0e8d:0003 MediaTek Inc.
      Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
      Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub

    and the output of usb-devices:

      falguni@falguni-M61PME-S2P:~$ usb-devices
      T: Bus=01 Lev=00 Prnt=00 Port=00 Cnt=00 Dev#= 1 Spd=480 MxCh=10
      D: Ver= 2.00 Cls=09(hub ) Sub=00 Prot=00 MxPS=64 #Cfgs= 1
      P: Vendor=1d6b ProdID=0002 Rev=03.05
      S: Manufacturer=Linux 3.5.0-18-generic ehci_hcd
      S: Product=EHCI Host Controller
      S: SerialNumber=0000:00:02.1
      C: #Ifs= 1 Cfg#= 1 Atr=e0 MxPwr=0mA
      I: If#= 0 Alt= 0 #EPs= 1 Cls=09(hub ) Sub=00 Prot=00 Driver=hub

      T: Bus=02 Lev=00 Prnt=00 Port=00 Cnt=00 Dev#= 1 Spd=12 MxCh=10
      D: Ver= 1.10 Cls=09(hub ) Sub=00 Prot=00 MxPS=64 #Cfgs= 1
      P: Vendor=1d6b ProdID=0001 Rev=03.05
      S: Manufacturer=Linux 3.5.0-18-generic ohci_hcd
      S: Product=OHCI Host Controller
      S: SerialNumber=0000:00:02.0
      C: #Ifs= 1 Cfg#= 1 Atr=e0 MxPwr=0mA
      I: If#= 0 Alt= 0 #EPs= 1 Cls=09(hub ) Sub=00 Prot=00 Driver=hub

    But in 12.04, the output of usb-devices was:

      falguni@falguni-M61PME-S2P:~$ usb-devices
      T: Bus=01 Lev=00 Prnt=00 Port=00 Cnt=00 Dev#= 1 Spd=480 MxCh=10
      D: Ver= 2.00 Cls=09(hub ) Sub=00 Prot=00 MxPS=64 #Cfgs= 1
      P: Vendor=1d6b ProdID=0002 Rev=03.05
      S: Manufacturer=Linux 3.5.0-18-generic ehci_hcd
      S: Product=EHCI Host Controller
      S: SerialNumber=0000:00:02.1
      C: #Ifs= 1 Cfg#= 1 Atr=e0 MxPwr=0mA
      I: If#= 0 Alt= 0 #EPs= 1 Cls=09(hub ) Sub=00 Prot=00 Driver=hub

      T: Bus=02 Lev=00 Prnt=00 Port=00 Cnt=00 Dev#= 1 Spd=12 MxCh=10
      D: Ver= 1.10 Cls=09(hub ) Sub=00 Prot=00 MxPS=64 #Cfgs= 1
      P: Vendor=1d6b ProdID=0001 Rev=03.05
      S: Manufacturer=Linux 3.5.0-18-generic ohci_hcd
      S: Product=OHCI Host Controller
      S: SerialNumber=0000:00:02.0
      C: #Ifs= 1 Cfg#= 1 Atr=e0 MxPwr=0mA
      I: If#= 0 Alt= 0 #EPs= 1 Cls=09(hub ) Sub=00 Prot=00 Driver=hub

      T: Bus=02 Lev=01 Prnt=01 Port=04 Cnt=01 Dev#= 2 Spd=12 MxCh= 0
      D: Ver= 2.00 Cls=02(commc) Sub=00 Prot=00 MxPS=64 #Cfgs= 1
      P: Vendor=0e8d ProdID=0003 Rev=02.00
      S: Manufacturer=MediaTek Inc
      S: Product=MT6235
      C: #Ifs= 2 Cfg#= 1 Atr=80 MxPwr=500mA
      I: If#= 0 Alt= 0 #EPs= 2 Cls=0a(data ) Sub=00 Prot=00 Driver=cdc_acm
      I: If#= 1 Alt= 0 #EPs= 1 Cls=02(commc) Sub=02 Prot=01 Driver=cdc_acm

    It was working fine in 12.04. Now after upgrading to 12.10 the problem started. Where is the problem and how do I solve it?

    Read the article

  • Where Are You on the Visualization Maturity Curve?

    - by Celine Beck
    The old phrase “A picture is worth a thousand words” is as true now as ever. Providing the right users with access to the right product data, at the right time, can provide significant benefits to a business. This is especially evident with increasing technical and product complexities, elongated supply chains, and growing pressure to bring innovative products to market faster. With this in mind, it is easy to understand why visualization is an integral part of any successful product lifecycle management (PLM) strategy.

    At a bare minimum, knowledge workers use multiple individual documents of different formats and structure, and leverage visualization solutions to access information; but the real value of visualization can be fully reaped when it is connected to enterprise applications like PLM and tied to the appropriate business context. The picture below illustrates this visualization maturity curve and the transformational effect that visualization can have on PLM processes and performance, as we presented during the last Oracle Open World (check out the post about AutoVue Key Highlights from Oracle Open World 2012 for more information).

    Organizations are likely to see greater positive impact on business performance when visualization is connected to enterprise systems, allowing access to information coming from multiple sources, such as PLM, supply chain management (SCM) and enterprise resource planning (ERP). This allows organizations to reach higher levels of collaboration and optimize decision-making capacity, as users can benefit from in-context access to visual information. For instance, within a PLM system, a design engineer can access a product assembly and review digital annotations added by other users specific to the engineering change request he is reviewing, rather than all historical annotations.

    The last stage on the curve is what we call augmented business visualization (ABV). ABV is an innovative framework which lets structured data (from Oracle's Agile PLM for instance) interact with unstructured data (documents, designs, 3D models, etc.). With this new level of integration, information coming from multiple sources can be presented in a highly visual fashion; color displays can be used in order to identify parts with specific characteristics (for example pending quality issues), and you can take actions directly from within the context of documents and designs, maximizing user productivity.

    Those who had the chance to attend our PLM session during Oracle Open World already got a sneak peek of our latest augmented business visualization for Oracle's Agile PLM. The solution generated a lot of wows. Stephen Porter, CEO at Zero Wait State, indicated in a post entitled "The PLM State: the Manhattan Project - Oracle's Next Big Secret Weapon" that "this kind of synergy between visualization and PLM could qualify as a powerful weapon differentiating Agile PLM from other solutions."

    If you are interested in learning more about ABV for Oracle's Agile PLM and hearing real examples of the use of visualization at all stages of the visualization maturity curve, don't miss our Visual Decision Making to Optimize New Product Development and Introduction session during the Oracle Value Chain Summit (Feb. 4-6, 2013, San Francisco). We look forward to seeing you there!

    Read the article

  • Problem upgrading 11.04

    - by Krazy_Kaos
    I've been trying to upgrade my Ubuntu 11.04 desktop computer, but when I click on the upgrade button I get an error. I've tried to change my repositories, but it changes nothing in the error (on the "setting new software channel" step). Can someone point me in the right direction? This is my sources.list:

      # deb http://ppa.launchpad.net/ailurus/ppa/ubuntu karmic main # disabled on upgrade to karmic
      # deb-src http://ppa.launchpad.net/ailurus/ppa/ubuntu karmic main # disabled on upgrade to karmic
      # deb cdrom:[Ubuntu 9.04 _Jaunty Jackalope_ - Release i386 (20090421.3)]/ jaunty main restricted

      # See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to
      # newer versions of the distribution.
      deb http://us.archive.ubuntu.com/ubuntu/ natty main restricted multiverse universe

      ## Major bug fix updates produced after the final release of the
      ## distribution.
      deb http://us.archive.ubuntu.com/ubuntu/ natty-updates main restricted multiverse universe

      ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
      ## team. Also, please note that software in universe WILL NOT receive any
      ## review or updates from the Ubuntu security team.
      ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
      ## team, and may not be under a free licence. Please satisfy yourself as to
      ## your rights to use the software. Also, please note that software in
      ## multiverse WILL NOT receive any review or updates from the Ubuntu
      ## security team.

      ## Uncomment the following two lines to add software from the 'backports'
      ## repository.
      ## N.B. software from this repository may not have been tested as
      ## extensively as that contained in the main release, although it includes
      ## newer versions of some applications which may provide useful features.
      ## Also, please note that software in backports WILL NOT receive any review
      ## or updates from the Ubuntu security team.
      deb-src http://pt.archive.ubuntu.com/ubuntu/ jaunty-backports main restricted universe multiverse

      ## Uncomment the following two lines to add software from Canonical's
      ## 'partner' repository.
      ## This software is not part of Ubuntu, but is offered by Canonical and the
      ## respective vendors as a service to Ubuntu users.
      deb http://archive.canonical.com/ubuntu natty partner
      deb-src http://archive.canonical.com/ubuntu natty partner

      deb http://us.archive.ubuntu.com/ubuntu/ natty-security main restricted multiverse universe
      deb http://us.archive.ubuntu.com/ubuntu/ natty-proposed restricted main multiverse universe

      # deb http://deb.torproject.org/torproject.org karmic main # disabled on upgrade to maverick
      # deb-src http://deb.torproject.org/torproject.org karmic main # disabled on upgrade to maverick
      deb http://extras.ubuntu.com/ubuntu natty main

    Read the article

  • POST and PUT requests – is it just the convention?

    - by bckpwrld
    I've read quite a few articles on the difference between POST and PUT and on when the two should be used. But there are still a few things confusing me (hopefully the questions will make some sense):

    1) We should use PUT to create resources when we want clients to specify the URI of the newly created resources, and we should use POST to create resources when we let the service generate the URI of the newly created resources.

       a) Is it just by convention that a POST create request doesn't contain a URI for the newly created resource, or can a POST create request actually not contain the URI of the newly created resource?

       b) PUT has idempotent semantics and thus can be safely used for absolute updates (i.e. we send the entire state of the resource to the server), but not for relative updates (i.e. we send just the changes to the resource state), since that would violate its semantics. But I assume it's still possible for PUT to send relative updates to the server; it's just that in that case the PUT update won't be idempotent?

    2) I've read somewhere that we should "use POST to append a resource to a collection identified by a service-generated URI".

       a) What exactly does that mean? That if URIs for the resources were generated by a server (thus the resources were created via POST), then ALL subsequent resources should also be created via POST? Thus, in such a situation no resource should be created via PUT?

       b) If my assumption under a) is correct, could you elaborate on why we shouldn't create some resources via POST and some via PUT (assuming the server already contains a collection of resources created via POST)?

    REPLY:

    1) Please correct me if I'm wrong, but from your post and from the link you've posted, it seems:

       a) The Request-URI in POST is interpreted by the server as the URI of the service. Thus, it could just as easily be interpreted as the URI of a newly created resource, if the service code was written to recognize the Request-URI as such.

       b) Similarly, PUT is able to send relative updates; it's just that service code is usually written such that it will complain if PUT updates are relative.

    2) "Usually, create has fallen into the POST camp, because of the idea of 'appending to a collection.' It's become the way to append a resource to a list of resources." I don't quite understand the reasoning behind the idea of "appending to a collection" and why this idea prefers POST for create. Namely, if we create 10 resources via PUT, then the server will contain a collection of 10 resources, and if we then create another resource, then the server will append this resource to that collection (which will now contain 11 resources)?!

    Uh, this is kinda confusing. Thank you
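
    To make the distinction in 1) concrete, a small sketch - my illustration with made-up URIs, not from the post: PUT targets the resource's own URI chosen by the client, while POST targets the collection and lets the server pick the new URI:

      using System;
      using System.Net.Http;
      using System.Text;
      using System.Threading.Tasks;

      class CreateExamples
      {
          static async Task Main()
          {
              var http = new HttpClient();

              // PUT: the client chooses the URI (/products/42); repeating
              // the request leaves the server in the same state (idempotent).
              await http.PutAsync("http://example.com/products/42",
                  new StringContent("{\"name\":\"widget\"}", Encoding.UTF8, "application/json"));

              // POST: the client targets the collection; the server picks the
              // new URI and conventionally returns it in the Location header.
              var response = await http.PostAsync("http://example.com/products",
                  new StringContent("{\"name\":\"widget\"}", Encoding.UTF8, "application/json"));
              Console.WriteLine(response.Headers.Location);
          }
      }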

    Read the article

  • File Folder copy

    - by Dario Dias
    Below is the VBScript code. If the file/s or folder exist I get the scripting error "File already exists". How do I fix that? How do I create the folder only if it does not exist, and copy only files that are new or do not exist in the source path? How do I insert the username (Point 1) after "Welcome" and at (Point 3) instead of "user cancelled"? Can the buttons be changed to Copy/Update/Cancel instead of Yes/No/Cancel (Point 2)? The code:

      Set objFSO = CreateObject("Scripting.FileSystemObject")
      Set wshShell = WScript.CreateObject("WScript.Shell")
      strUserName = wshShell.ExpandEnvironmentStrings("%USERNAME%")

      Message = " Welcome to the AVG Update Module" & vbCR                '1*
      Message = Message & " *****************************" & vbCR & vbCR
      Message = Message & " Click Yes to Copy Definition Files" & vbCR & vbCR
      Message = Message & " OR " & vbCR & vbCR
      Message = Message & " Click No to Update Definition Files." & vbCR & vbCR
      Message = Message & " Click Cancel (ESC) to Exit." & vbCR & vbCR

      X = MsgBox(Message, vbYesNoCancel, "AVG Update Module")             '2*

      'Yes Selected Script
      If X = 6 then
          objFSO.FolderExists("E:\Updates")
          if TRUE then objFSO.CreateFolder ("E:\Updates")
          objFSO.CopyFile "c:\Docume~1\alluse~1\applic~1\avg8\update\download\*.*", "E:\Updates\", OverwriteFiles
          MsgBox "Files Copied Succesfully.", vbInformation, "Copy Success"
      End If

      'No Selected Script
      If X = 7 then
          objFSO.FolderExists("Updates")
          if TRUE then objFSO.CreateFolder("Updates")
          objFSO.CopyFile "E:\Updates\*.*", "Updates", OverwriteFiles
          Message = "Files Updated Successfully." & vbCR & vbCR
          Message = Message & "Click OK to Launch AVG GUI." & vbCR & vbCR
          Message = Message & "Click Cancel (ESC) to Exit." & vbCR & vbCR
          Y = MsgBox(Message, vbOKCancel, "Update Success")
          If Y = 1 then
              Set WshShell = CreateObject("WScript.Shell")
              WshShell.Run chr(34) & "C:\Progra~1\avg\avg8\avgui.exe" & Chr(34), 0
              Set WshShell = Nothing
          End if
          If Y = 3 then WScript.Quit
      End IF

      'Cancel Selection Script
      If X = 2 then
          MsgBox "No Files have been Copied/Updated.", vbExclamation, "User Cancelled"   '3*
      End if

    Read the article

  • How to write this Linq SQL as a Dynamic Query (using strings)?

    - by Dr. Zim
    Skip to the "specific question" as needed. Some background:

    The scenario: I have a set of products with a "drill down" filter (Query Object) populated with DDLs. Each progressive DDL selection will further limit the product list as well as what options are left for the DDLs. For example, selecting a hammer out of tools limits the Product Sizes to only show hammer sizes.

    Current setup: I created a query object, sent it to a repository, and fed each option to a SQL "table-valued function", where null values represent "get all products". I consider this a good effort, but far from DDD acceptable. I want to avoid any "programming" in SQL, hopefully doing everything with a repository. Comments on this topic would be appreciated.

    Specific question: How would I rewrite this query as a Dynamic Query? A link to something like 101 LINQ Examples would be fantastic, but with a Dynamic Query scope. I really want to pass to this method the field in quotes "" for which I want a list of options and how many products have that option.

      from p in db.Products
      group p by p.ProductSize into g
      select new Category { PropertyType = g.Key, Count = g.Count() }

    Each DDL option will have "The selection (21)" where the (21) is the quantity of products that have that attribute. Upon selecting an option, all other remaining DDLs will update with the remaining options and counts.

    Edit: Additional notes:

      .OrderBy("it.City")                      // "it" refers to the entire record
      .GroupBy("City", "new(City)")            // This produces a unique list of City
      .Select("it.Count()")                    // This gives a list of counts... getting closer
      .Select("key")                           // Selects a list of unique City
      .Select("new (key, count() as string)")  // +1 to me LOL. key is a row of group
      .GroupBy("new (City, Manufacturer)", "City")  // New = list of fields to group by
      .GroupBy("City", "new (Manufacturer, Size)")  // Second parameter is a projection

      Product
          .Where("ProductType == @0", "Maps")
          .GroupBy("new(City)", "new ( null as string)")  // Projection not available later?
          .Select("new (key.City, it.count() as string)") // GroupBy new makes key an object

      Product
          .Where("ProductType == @0", "Maps")
          .GroupBy("new(City)", "new ( null as string)")  // Projection not available later?
          .Select("new (key.City, it as object)")         // the it object is the result of GroupBy

      var a = Product
          .Where("ProductType == @0", "Maps")
          .GroupBy("@0", "it", "City")           // This fails to group Product at all
          .Select("new ( Key, it as Product )"); // "it" is property cast though

    What I have learned so far is that LINQPad is fantastic, but I am still looking for an answer. Eventually, completely random research like this will prevail I guess. LOL.
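
    For comparison, here is how the grouped count can be written with the System.Linq.Dynamic sample library ("Dynamic Query") - a sketch under the assumption that the field name arrives as a string; this is my code, not the poster's eventual solution:

      using System.Linq;
      using System.Linq.Dynamic; // the "Dynamic Query" sample library

      public static class OptionCounts
      {
          // field is e.g. "ProductSize"; after GroupBy, "it" is the group
          // and "Key" is the grouped value.
          public static IQueryable Get(IQueryable<Product> products, string field)
          {
              return products
                  .GroupBy(field, "it")
                  .Select("new (Key as PropertyType, it.Count() as Count)");
          }
      }

      // Usage sketch:
      // var sizes = OptionCounts.Get(db.Products, "ProductSize");

    Note the trade-off: the Select produces a dynamically generated class with PropertyType and Count members rather than the strongly typed Category, which is the usual cost of string-based Dynamic LINQ.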

    Read the article

  • Display multiple new windows

    - by Ricardo Deano
    Afternoon all. I have the following scenario: I have a search page whereby a client searches for a product from a drop-down list, and upon clicking a button, a gridview is produced displaying the spec. What I would like is the functionality for the user to make their selection and have a new window pop up with the spec.

    So I have a simple code-behind for the search page:

      protected void Button1_Click(object sender, EventArgs e)
      {
          Session["Product"] = DropDownList1.SelectedValue;
          string strScript = "window.open('GridViewPage.aspx', 'Key', 'height=500,width=800,toolbar=no,menubar=no,scrollbars=yes,resizable=yes,titlebar=no');";
          ScriptManager.RegisterStartupScript(this, typeof(string), "", strScript, true);
      }

    And a gridview page that presents the data based upon the session value created in the search page:

      <asp:GridView ID="GridView2" runat="server" AutoGenerateColumns="False" DataSourceID="LinqDataSource1">
          <Columns>
              <asp:BoundField DataField="Product" HeaderText="MemberID" SortExpression="MemberID" />
              <asp:BoundField DataField="Spec" HeaderText="Spec" SortExpression="Spec" />
          </Columns>
      </asp:GridView>
      <asp:LinqDataSource ID="LinqDataSource1" runat="server"
          ContextTypeName="GridViewInNewWindow.ProductDataContext" EntityTypeName=""
          TableName="tblProducts" Where="Product == @Product">
          <WhereParameters>
              <asp:SessionParameter Name="Product" SessionField="Product" Type="String" />
          </WhereParameters>
      </asp:LinqDataSource>

    Now upon first iteration, this does the job... gridview presented in new window... hurrah! I.e. a user searches for egg, and the spec for an egg is presented in a new window. However, what I would like to happen is that the user can make multiple searches, so a number of new windows are opened. I.e. a user searches for egg once, and the spec is returned in a new window; they then wish to see the spec for a chicken, so they use the search page to find said chicken, click the button, and another new window is shown displaying the chicken's specs. Does anyone know how I can achieve this? Apologies if this is simple stuff, I am just finding my feet.
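
    A detail worth noting when reading the snippet above: window.open reuses an existing window when the second argument ('Key') is the same name on every call. A common workaround - my sketch, not the poster's code - is to generate a fresh window name per click:

      protected void Button1_Click(object sender, EventArgs e)
      {
          Session["Product"] = DropDownList1.SelectedValue;

          // Hypothetical fix: a unique window name per click, so each
          // search opens its own window instead of reusing 'Key'.
          string windowName = "Spec_" + Guid.NewGuid().ToString("N");
          string strScript = "window.open('GridViewPage.aspx', '" + windowName +
              "', 'height=500,width=800,toolbar=no,menubar=no,scrollbars=yes,resizable=yes,titlebar=no');";
          ScriptManager.RegisterStartupScript(this, typeof(string), "openSpec", strScript, true);
      }

    Note that because the selected product travels via session state, each window only picks up the value at load time; passing the product on the query string instead would avoid relying on the shared session.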

    Read the article
