Search Results

Search found 2076 results on 84 pages for 'keyword'.


  • October in Review

    - by Richard Bingham
    With OpenWorld over, October was time to get back to serious work for everyone, including the Fusion Applications Developer Relations team. Don't forget the OpenWorld content, including presentation downloads, is still available for a limited period of time, so be sure to grab anything you found useful or take another scan for anything you might have missed. Of all the announcements, the continued evolution of the Oracle Cloud services for extending and integrating with Fusion Applications is increasing in popularity, and certainly the Cloud Marketplace is something we're becoming involved in. More details to follow.

    Fusion Concepts. Last week Vik from our team started the new "Fusion Concepts" series of articles, providing those new to Fusion Applications with an explanation of the architectural basics, with the aim of reducing the learning curve and laying the platform for more efficient and effective development. The series began with an insightful first post on the different schemas that exist in the Fusion Applications database. Look out for upcoming posts on multi-lingual entities, profile options, look-ups and more.

    New Learning Resources. Our YouTube channel continued to expand with more 'how to' videos on using Page Composer, extending the Simplified UI (aka FUSE), and integrating BI reports and analytics. The Oracle Learning Library is also now well established as a central resource for knowledge, with thousands of tutorials, videos, and documents. Of particular note are the great new extensibility-related videos added by the CRM Product Management team, including more on the ever-expanding capabilities of Application Composer. To see some examples of these, search using the keyword 'customization' or the product 'Sales Cloud'. Finally on learning resources, as Oliver mentioned, the Oracle Press book on Fusion Application Customization and Extensibility is now available for pre-order on Amazon (due out 1st Jan).

    Out And About. October also saw us attend the annual Apps Conference held by the UK Oracle User Group in London. Interestingly, there was an Applications Transformation stream of sessions and content that included Fusion Applications alongside all the latest in the Oracle Applications evolution, as always focused around the three tenets of social, mobile, and cloud. Read more in Richard's post-event write-up. Other teams around Oracle have also been busy. Angelo from the Platform Technical Services group has done quite a bit of work using web services with Fusion SaaS and has published many interesting findings on his blog; it's definitely recommended reading if you are working on any related integration projects. The middleware-for-applications group has built a new tool called "AppAdvantage", offering an online assessment of your use of Fusion Middleware technologies with Oracle Applications. As the popularity of integrating cloud applications with on-premises systems continues to grow, leveraging existing middleware technologies (and licenses) to support the integration solution is likely to be of paramount importance. Similarly, the "Build Enterprise Application Extensions with Ease" section of the related webpage has AppsUX director Killian Evers speaking about customization using the composer tools. Both are useful resources for those just getting started with a move to Fusion Applications.
    The Oracle A-Team, specialists in middleware technical architecture, always publish superb content via their 'chronicles' site, now with a substantial amount specifically related to Fusion Applications. Click on the Fusion Applications menu at the top right of their homepage to see more. Of particular note last month was an article on customizing the timeout pop-up message shown to inactive users, providing design-time insight and easy-to-follow steps. Finally, if you're looking at using Oracle Middleware and Cloud to tailor and extend your applications, you may also be interested in the new blog post on the roadmap for Oracle SOA and the latest on-demand Cloud Development webcast.

    Read the article

  • How views are changing in future versions of SQL

    - by Rob Farley
    April is here, and this weekend, SQL v11.0 (previously known as Denali, now known as SQL Server 2012) reaches general availability. And so I thought I'd share some news about what's coming next. I didn't hear this at the MVP Summit earlier this year (where there was lots of NDA information given, but I didn't go), so I think I'm free to share it. I've written before about CTEs being query-scoped views. Well, the actual story goes a bit further, and will continue to develop in future versions. A CTE is like a "temporary temporary view", scoped to a single query. Because globally-scoped temporary objects use a two-hash naming style, and session-scoped (or 'local') temporary objects use a one-hash naming style, this query-scoped temporary object uses a cunning zero-hash naming style. We see this implied in Books Online in the CREATE TABLE page, but as we know, temporary views are not yet supported in SQL Server. However, in a breakaway from ANSI-SQL, Microsoft is moving towards consistency with their naming. We know that a CTE is a "common table expression" – this is proving to be more strategic than you may have appreciated. Within the Microsoft product group, the term "Table Expression" is far more widely used than just for CTEs. Anything that can be used in a FROM clause is referred to as a Table Expression, so long as it doesn't actually store data (which would make it a Table, rather than a Table Expression). You can see this is not just restricted to the product group by doing an internet search for how the term is used without 'common'. In the past, Books Online has referred to a view as a "virtual table" (but notice that there is no SQL 2012 version of this page). However, it was generally decided that "virtual table" was a poor name because it wasn't completely accurate, and it's typically accepted that mixing virtualisation and SQL is frowned upon. The page I linked to says "or stored query", which is slightly better, but when the SQL 2012 version of that page is actually published, the line will be changed to read: "A view is a stored table expression (STE)". This change will be the first of many. During the SQL 2012 R2 release, the keyword VIEW will become deprecated (this will be SQL v11 SP1.5). Three versions later, in SQL 14.5, you will need to be in compatibility mode 140 to allow "CREATE VIEW" to work. Also consistent with Microsoft's deprecation policy, the execution of any query that refers to an object created as a view (rather than the new "CREATE STE") will cause a Deprecation Event to fire. This will all be in preparation for the introduction of Single-Column Table Expressions (to be introduced in SQL 17.3 SP6), which will finally shut up those people waiting for a decent implementation of Inline Scalar Functions. And of course, CTEs are "Common" because the Table Expression definition needs to be repeated over and over throughout a stored procedure. ...or so I think I heard at some point. Oh, and congratulations to all the new MVPs on this April 1st. @rob_farley
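
    As a quick illustration of the naming styles the post is riffing on, here is a minimal T-SQL sketch (mine, not Rob's – the table and CTE names are made up):

    -- Session-scoped ('local') temporary object: one hash
    CREATE TABLE #LocalTemp (Id INT);

    -- Globally-scoped temporary object: two hashes
    CREATE TABLE ##GlobalTemp (Id INT);

    -- Query-scoped "temporary temporary view" (a CTE): zero hashes,
    -- and it only exists for the single statement that follows it
    WITH Numbers AS (
        SELECT 1 AS n UNION ALL SELECT 2
    )
    SELECT n FROM Numbers;

    DROP TABLE #LocalTemp;
    DROP TABLE ##GlobalTemp;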

    Read the article

  • SQL SERVER – Curious Case of Disappearing Rows – ON UPDATE CASCADE and ON DELETE CASCADE – T-SQL Example – Part 2 of 2

    - by pinaldave
    Yesterday I wrote a real world story of how a friend thought they had an issue with intrusion or a virus, whereas the issue was really in the code. I strongly suggest you read my earlier blog post Curious Case of Disappearing Rows – ON UPDATE CASCADE and ON DELETE CASCADE – Part 1 of 2 before continuing this blog post, as this is the second part of that first blog post. Let me reproduce the simple scenario in T-SQL.

    Building Sample Data

    USE [TestDB]
    GO
    -- Creating Table Products
    CREATE TABLE [dbo].[Products](
        [ProductID] [int] NOT NULL,
        [ProductDesc] [varchar](50) NOT NULL,
        CONSTRAINT [PK_Products] PRIMARY KEY CLUSTERED ([ProductID] ASC)
    ) ON [PRIMARY]
    GO
    -- Creating Table ProductDetails
    CREATE TABLE [dbo].[ProductDetails](
        [ProductDetailID] [int] NOT NULL,
        [ProductID] [int] NOT NULL,
        [Total] [int] NOT NULL,
        CONSTRAINT [PK_ProductDetails] PRIMARY KEY CLUSTERED ([ProductDetailID] ASC)
    ) ON [PRIMARY]
    GO
    ALTER TABLE [dbo].[ProductDetails] WITH CHECK
    ADD CONSTRAINT [FK_ProductDetails_Products] FOREIGN KEY([ProductID])
    REFERENCES [dbo].[Products] ([ProductID])
    ON UPDATE CASCADE
    ON DELETE CASCADE
    GO
    -- Insert Data into Table
    USE TestDB
    GO
    INSERT INTO Products (ProductID, ProductDesc)
    SELECT 1, 'Bike' UNION ALL
    SELECT 2, 'Car' UNION ALL
    SELECT 3, 'Books'
    GO
    INSERT INTO ProductDetails ([ProductDetailID],[ProductID],[Total])
    SELECT 1, 1, 200 UNION ALL
    SELECT 2, 1, 100 UNION ALL
    SELECT 3, 1, 111 UNION ALL
    SELECT 4, 2, 200 UNION ALL
    SELECT 5, 3, 100 UNION ALL
    SELECT 6, 3, 100 UNION ALL
    SELECT 7, 3, 200
    GO

    Select Data from Tables

    -- Selecting Data
    SELECT * FROM Products
    SELECT * FROM ProductDetails
    GO

    Delete Data from Products Table

    -- Deleting Data
    DELETE FROM Products
    WHERE ProductID = 1
    GO

    Select Data from Tables Again

    -- Selecting Data
    SELECT * FROM Products
    SELECT * FROM ProductDetails
    GO

    Clean up Data

    -- Clean up
    DROP TABLE ProductDetails
    DROP TABLE Products
    GO

    My friend was confused because no DELETE was ever fired against the ProductDetails table, yet a delete was happening there. The reason is the foreign key created between the Products and ProductDetails tables with the keywords ON DELETE CASCADE. When ON DELETE CASCADE is specified, deleting data from Table A also deletes any rows that reference it in another table via the foreign key.

    Workaround 1: Design Changes – 3 Tables
    Change the design to have more than two tables. Create one ProductMaster table with all the products. It should historically store the complete products list; no products should ever be removed from it. Add another table called CurrentProduct, containing only the products which should be visible in the product catalogue. A third table should be called ProductHistory. There should be no use of the CASCADE keyword among them.

    Workaround 2: Design Changes – Column IsVisible
    You can keep the same two tables: 1) Products and 2) ProductDetails. Add a column with the BIT datatype and name it IsVisible. Now change your application code to display the catalogue based on this column (see the sketch at the end of this post). There should be no need to delete anything.

    Workaround 3: Bad Advice (bad advice begins here)
    I call this bad advice because that is exactly what it is. You should make the necessary design changes, not use poor workarounds which can further damage system and database integrity. Here are the examples. 1) Do not delete the data – well, this is not a real solution, but it can buy time to implement the design changes.
    2) Do not have ON DELETE CASCADE – in this case, you will have entries in ProductDetails with no corresponding ProductID, and later on there will be lots of confusion. 3) Duplicate the data – you can move all the data of the Products table into the ProductDetails table and repeat it on each row, then remove the CASCADE code. This will let you delete the Products table rows without any issue. There are so many things wrong with this suggestion that I will not even start here. (bad advice ends here)

    Well, did I miss anything? Please help me with your suggestions. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
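
    For what it's worth, here is a minimal T-SQL sketch of Workaround 2, assuming the same sample tables built above (the column name and default are my choices, not from the original post):

    -- Add the visibility flag to the Products table; existing rows get the default
    ALTER TABLE Products ADD IsVisible BIT NOT NULL DEFAULT 1;
    GO
    -- "Delete" a product by hiding it instead of removing the row
    UPDATE Products SET IsVisible = 0 WHERE ProductID = 1;
    GO
    -- The catalogue query shows only visible products, so no rows ever disappear
    SELECT p.ProductID, p.ProductDesc, d.Total
    FROM Products p
    INNER JOIN ProductDetails d ON d.ProductID = p.ProductID
    WHERE p.IsVisible = 1;
    GO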

    Read the article

  • Consumer Oriented Search In Oracle Endeca Information Discovery – Part 1

    - by Bob Zurek
    Information Discovery, a core capability of Oracle Endeca Information Discovery, enables business users to rapidly search, discover and navigate through a wide variety of big data, including structured, unstructured and semi-structured data. One of the key capabilities, among many, that differentiates our solution from others in the Information Discovery market is our deep support for search across this growing amount of varied big data. Our method and approach is very different from the classic simple keyword search found in many information discovery solutions. In this first part of a series on the topic of search, I will walk you through many of the key capabilities that go beyond the simple search box that you might experience in products where search was clearly an afterthought or an attempt to catch up to our core capabilities in this area. Let's explore.

    The core data management solution of Oracle Endeca Information Discovery is the Endeca Server, a hybrid search-analytical database that is highly scalable and column-oriented in nature. We will talk in more technical detail about the capabilities of the Endeca Server in future blog posts, as this post is intended to give you a feel for the deep search capabilities that are an integral part of the Endeca Server. The Endeca Server provides best-of-breed search features as well as a new class of features that are the first to be designed around the requirement to bridge structured, semi-structured and unstructured big data. Some of the key search features include type-aheads, automatic alphanumeric spelling corrections, positional search, Booleans, wildcarding, natural language, and category search and query classification dialogs. This is just a subset of the advanced search capabilities found in Oracle Endeca Information Discovery. Search is an important feature that makes it possible for business users to explore the diverse data sets the Endeca Server can hold at any one time. The search capabilities in the Endeca Server differ from other Information Discovery products with simple "search boxes" in the following ways:

    The Endeca Server Supports Exploratory Search. Enterprise data frequently requires the user to explore content through an ad hoc dialog, with guidance that helps them succeed. This has implications for how to design search features. Traditional search doesn't assume a dialog, and so it uses relevance ranking to get its best guess to the top of the results list. It calculates many relevance factors for each query, like word frequency, distance, and meaning, and then reduces those many factors to a single score based on a proprietary "black box" formula. But how can a business user, searching, act on the information that a document is, say, only 38.1% relevant? In contrast, exploratory search gives users the opportunity to clarify what is relevant to them through refinements and summaries. This approach has received consumer endorsement through popular ecommerce sites, where guided navigation across a broad range of products has helped consumers better discover choices that meet their sometimes undetermined requirements. This same model exists in Oracle Endeca Information Discovery. In fact, the Endeca Server powers many of the most popular e-commerce sites in the world.

    The Endeca Server Supports Cascading Relevance. Traditional approaches to search reduce many relevance weights to a single score. This means that if a result with a good title match gets a similar score to one with an exact phrase match, they'll appear next to each other in a list. But a user can't deduce from their scores why each got its ranking, even though that information could be valuable. Oracle Endeca Information Discovery takes a different approach. The Endeca Server stratifies results by a primary relevance strategy, and then breaks ties within a stratum by ordering them with a secondary strategy, and so on. Application managers get the explicit means to compose these strategies based on their knowledge of their own domain. This approach gives both business users and managers a deterministic way to set and understand relevance. Now that you have an understanding of two of the core search capabilities in Oracle Endeca Information Discovery, our next blog post on this topic will discuss more advanced features, including set search and second-order relevance, as well as an understanding of faceted search mechanisms that include queries and filters.

    Read the article

  • Find Knowledge Quickly

    - by Get Proactive Customer Adoption Team
    Get to relevant knowledge on the Oracle products you use in a few quick steps! Customers tell us that the volume of search results returned can make it difficult to find the information they need, especially when similar Oracle products exist. These simple tips show you how to filter, browse, search, and refine your results to get relevant answers faster.

    Filter first: PowerView is your best friend. PowerView is an often-ignored feature of My Oracle Support that enables you to control the information displayed on the Dashboard, the Knowledge tab and regions, and the Service Request tab, based on one or more parameters. You can define a PowerView to limit information based on product, product line, support ID, platform, hostname, system name and others. Using PowerView allows you to restrict your search results to the filters you have set, and the product list when selecting your products in Search & Browse and when creating service requests. The PowerView menu is at the top of My Oracle Support, near the title. You turn PowerView on by clicking the PowerView is Off button. When PowerView is on and filters are active, clicking the button again toggles PowerView off. Click the arrow to the right to create new filters, edit filters, remove a filter, or choose from the list of previously created filters. You can create a PowerView in 3 simple steps!
    1. Turn PowerView on and select New from the PowerView menu.
    2. Select your filter from the Select Filter Type dropdown list and make selections from the other two menus. Hint: while there are many filter options, selecting your product line or your list of products will provide you with an effective filter. Click the plus sign (+) to add more filters; click the minus sign (-) to remove a filter.
    3. Click Create to save and activate the filter(s).
    You'll notice that PowerView is On displays along with the active filters. For more information about the PowerView capabilities, click the Learn more about PowerView… menu item or view a short video.

    Browse & Refine: Access the Best Match Fast For Your Product and Task. In the Knowledge Browse region of the Knowledge or Dashboard tabs, pick your product, pick your task, and select a version, if applicable. A best match document – a collection of knowledge articles and resources specific to your selections – may display, offering you a one-stop shop. The best match document, called an "information center," is an aggregate of dynamically updated links to information pertinent to the product, task, and version (if applicable) you chose. These documents are refreshed every 24 hours to ensure that you have the most current information at your fingertips. Note: not all products have information centers. If no information center appears as a best match, click Search to see a list of search results. From the information center, you can access topics from a product overview to security information, as shown in the left menu.

    Just want to search? That's easy too! Again, pick your product, pick your task, select a version if applicable, enter a keyword term, and click Search. Hint: in this example, you'll notice that PowerView is on and set to PeopleSoft Enterprise. When PowerView is on and you select a product from the Knowledge Base product list, the listed products are limited to the active PowerView filter. (Products you've previously picked are also listed at the top of the dropdown list.) Your search results are displayed based on the parameters you entered. It's that simple!
    Related Information: My Oracle Support - User Resource Center [ID 873313.1] | My Oracle Support Community | For more tips on using My Oracle Support, check out these short video training modules: My Oracle Support Speed Video Training [ID 603505.1]

    Read the article

  • News From EAP Testing

    - by Fatherjack
    There is a phrase that goes something like "Watch the pennies and the pounds/dollars will take care of themselves", meaning that if you pay attention to the small things then the larger things are going to fare well too. I am lucky enough to be a Friend of Red Gate, and once in a while I get told about new features in their tools and have a test copy of the software to trial. I got one of those emails a week or so ago, and I have been exploring the SQL Prompt 6 EAP since then. One really useful feature of long standing in SQL Prompt is the idea of a code snippet that is automatically pasted into the SSMS editor when you type a few key letters. For example, I can type "ssf" and then press the Tab key and the text is expanded to SELECT * FROM. There are lots of these combinations and it is possible to create your own really easily. To create your own, you use the Snippet Manager interface to define the shortcut letters and the code that you want to have put in their place. Let's look at an example. Say I am writing a blog about something and want to have the demo code create a temporary table. It might look like this; The first time you run the code everything is fine, a lovely set of dates fills the results grid, but run it a second time and this happens. Yep, we didn't destroy the temporary table, so the CREATE statement fails when it finds the table already exists. No matter, I have a snippet created that takes care of this. Nothing too technical here, but you will see that in the Code section there is $CURSOR$. This isn't a T-SQL keyword but a marker for SQL Prompt to place the cursor in that position when the code is pasted into the SSMS editor. I just place my cursor above the CREATE statement and type "ifobj" – the shortcut for my code to DROP the temporary table – which has been defined in the Snippet Manager as below. This means I am right away ready to type the name of the offending table. Pretty neat, and it's been very useful in saving me lots of time over many years.
    The news for SQL Prompt 6 is that Red Gate have added a new snippet command of $PASTE$. Let's alter our snippet to the following and try it out. Once again, we will type "ifobj" in the SSMS editor, but first of all, highlight the name of the table #TestTable and copy it to your clipboard. Now type "ifobj" and press Tab… Wherever the string $PASTE$ is placed in the snippet, the contents of your clipboard are merged into the pasted T-SQL. This means I don't need to type the table name into the code snippet; it's already there and I am seeing a fully functioning piece of T-SQL ready to run. This means it is even easier to write T-SQL quickly and consistently. Attention to detail like this from Red Gate means that their developer tools stay on track to keep winning awards year after year and help take the hard work out of writing neat, accurate T-SQL. If you want to try out SQL Prompt, all the details are at http://www.red-gate.com/products/sql-development/sql-prompt/.
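
    For readers who haven't seen it, here is a rough T-SQL sketch of what an "ifobj"-style snippet body could look like – this is my guess at the idea, not the exact snippet from the post:

    -- Drop the temporary table if it already exists; the copied table name is
    -- merged in wherever $PASTE$ appears (the new SQL Prompt 6 EAP command)
    IF OBJECT_ID('tempdb..$PASTE$') IS NOT NULL
        DROP TABLE $PASTE$;

    With #TestTable on the clipboard, typing the shortcut and pressing Tab would then expand to:

    IF OBJECT_ID('tempdb..#TestTable') IS NOT NULL
        DROP TABLE #TestTable;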

    Read the article

  • Why googling by keycaptcha gives results on reCAPTCHA? [closed]

    - by vgv8
    EDIT: I'd like to change this title to: How to STOP Google's manipulation of the Google search results presented to the general public? I google frequently, and more and more often, when searching for one software product, I am instead given results for Google's own products. For example, if I google the keyword keycaptcha for the "Past 24 hours" (after clicking "Show search tools" -- "Past 24 hours" in the left sidebar of the browser), the Google search results show only results on reCAPTCHA. Image uploaded later. Though, if I confine keycaptcha in quotes, the results are "correct" (well, kind of, since they are still distorted in comparison with other search engines). I checked this over a few months, from different domains at different ISPs, on different operating systems and from a dozen browsers. The results are the same. Why is it, and how can it possibly be corrected? My related posts: "How Gmail spam filter works?" IP addresses blacklisting. Update: It is impossible for me to directly use google.com, as I am always redirected to google.ru (from google.com) by Google's "auto-detect location" convenience based on my IP address. Google's help says that it is impossible to switch off location auto-detection because it is a very helpful feature. There is a work-around: use google.com/ncr to get google.com (does anybody know what that means?) to prevent redirection from google.com. But all results are exactly the same. OK, I can search by quoted "keycaptcha"; I am already accustomed to these Google quirks. But the question arises: why the heck burn time promoting someone's product if GOOGLE shows its own interests/brands (reCAPTCHA) instead of other product brands, and what can be done about it? The general user will not understand that he was cheated and will just pick up the first (wrong) results. Update 2: Note that this googling behaviour is independent of whether I am logged in to (or logged out of) a Google account, of which account, of browser (I tried Opera, Chrome, FireFox, IE of different versions, Safari), of OS, and even of domain; there are many such cases, but I targeted one concrete, restricted example specifically to prevent wandering between unrelated details and peculiarities. @Michael, first, it is not true, and this text contains 2 links to real and significant results. I also wrote that this is just one concrete example from many, based on many months' experience. These distortions happen upon clicking on: Past 24 hours, Past week, Past month, Past year, and with many other keywords, occasions/configurations of searches, etc. Second, the absence of results is itself a result, and there is no point in sneakily substituting it with another, unsolicited one. That is the definition of spam and scam. Third, the question is not about workarounds like how to write search queries or use other search engines. The question is how to straighten out Google's results in order to stop disorienting the general public. Update: I could not understand: can nobody reproduce the behaviour I described (i.e. when I click the "Past 24 hours" link in Google search while searching for keycaptcha, the results presented are only on reCAPTCHA)? Update: And for the "Past week":

    Read the article

  • Consumer Oriented Search In Oracle Endeca Information Discovery - Part 2

    - by Bob Zurek
    As discussed in my last blog posting on this topic, Information Discovery, a core capability of the Oracle Endeca Information Discovery solution, enables businesses to search, discover and navigate through a wide variety of big data, including structured, unstructured and semi-structured data. With search as a core advanced capability of our product, it is important to understand some of the key differences and capabilities in the underlying data store of Oracle Endeca Information Discovery, and that is our Endeca Server. In the last post on this subject, we talked about Exploratory Search capabilities along with support for cascading relevance. Additional search capabilities in the Endeca Server, which differentiate it from the simple keyword-based "search boxes" in other Information Discovery products, also include:

    The Endeca Server Supports Set Search. The Endeca Server is organized around set retrieval, which means that it looks at groups of results (all the documents that match a search), as well as the relationship of each individual result to the set. Other approaches only compute the relevance of a document by comparing the document to the search query – not by comparing the document to all the others. For example, a search for "U.S." in another approach might match the title of a document and get a high ranking. But what if it were a collection of government documents in which "U.S." appeared in many titles, making that clue less meaningful? A set analysis would reveal this and be used to adjust relevance accordingly.

    The Endeca Server Supports Second-Order Relevance. Unlike simple search interfaces in traditional BI tools, which provide limited relevance ranking, such as a list of results based on keyword matching, Endeca enables users to determine the most salient terms to divide up the results. Determining this second-order relevance is the key to providing effective guidance.

    Support for Queries and Filters. Search is the most common query type, but hardly the complete story, and users need to express a wide range of queries. Oracle Endeca Information Discovery also includes navigation, interactive visualizations, analytics, range filters, geospatial filters, and other query types that are more commonly associated with BI tools. Unlike other approaches, these queries operate across structured, semi-structured and unstructured content stored in the Endeca Server. Furthermore, this set is easily extensible because the core engine allows for pluggable features to be added. Like a search engine, queries are answered with a results list, ranked to put the most likely matches first. Unlike "black box" relevance solutions, which generalize one strategy for everyone, we believe that optimal relevance strategies vary across domains. Therefore, the product provides line-of-business owners with a set of relevance modules that let them tune the best results based on their content. The Endeca Server query result sets are summarized, which gives users guidance on how to refine and explore further. Summaries include Guided Navigation® (a form of faceted search), maps, charts, graphs, tag clouds, concept clusters, and clarification dialogs. Users don't explicitly ask for these summaries; Oracle Endeca Information Discovery analytic applications provide the right ones, based on configurable controls and rules. For example, the analytic application might guide a procurement agent filtering for in-stock parts by visualizing the results on a map and calculating their average fulfillment time.
    Furthermore, the user can interact with summaries and filters without resorting to writing complex SQL queries; the user can simply click to add filters. Within Oracle Endeca Information Discovery, all parts of the summaries are clickable and searchable. We are living in a search-driven society where business users really seem to enjoy entering information into a search box. We do this every day as consumers, and therefore we have gotten used to looking for that box. However, the key to getting the right results is to guide the user in a way that provides additional discovery, beyond what they may have anticipated. This is why these important and advanced features of search inside the Endeca Server have been so important. They have helped to guide our great customers to success.

    Read the article

  • New Version 3.1 Endeca Information Discovery Now Available

    - by Mike.Hallett(at)Oracle-BI&EPM
    Business User Self-Service Data Mash-up Analysis and Discovery integrated with OBI11g and Hadoop. Oracle Endeca Information Discovery 3.1 (OEID) is a major release that incorporates significant new self-service discovery capabilities for business users, including agile data mashup, extended support for unstructured analytics, and an even tighter integration with Oracle BI.
    · Self-Service Data Mashup and Discovery Dashboards: business users can combine information from multiple sources, including their own uploaded spreadsheets, to conduct analysis on the complete set. Creating discovery dashboards has been made even easier by intuitive drag-and-drop layouts and wizard-based configuration. Business users can now build new discovery applications in minutes, without depending on IT.
    · Enhanced Integration with Oracle BI: OEID 3.1 enhances its native integration with Oracle Business Intelligence Foundation. Business users can now incorporate information from trusted BI warehouses, leveraging dimensions and attributes defined in Oracle's Common Enterprise Information Model, but evolve them based on the varying day-to-day demands and requirements that they personally manage.
    · Deep Unstructured Analysis: business users can gain new insights from a wide variety of enterprise and public sources, helping companies to build an actionable Big Data strategy. With OEID's long-standing differentiation in correlating unstructured information with structured data, business users can now perform their own text mining to identify hidden concepts, without having to request support from IT. They can augment these insights with best-in-class keyword search and pattern matching, all in the context of rich, interactive visualizations and analytic summaries.
    · Enterprise-Class Self-Service Discovery: OEID 3.1 enables IT to provide a powerful self-service platform to the business as part of a broader Business Analytics strategy, preserving the value of existing investments in data quality, governance, and security. Business users can take advantage of IT-curated information to drive discovery across high volumes and varieties of data, and share insights with colleagues at a moment's notice.
    · Harvest Content from the Web with the Endeca Web Acquisition Toolkit: Oracle now provides best-of-breed data access to website content through the Oracle Endeca Web Acquisition Toolkit. This provides an agile, graphical interface for developers to rapidly access and integrate any information exposed through a web front-end. Organizations can now cost-effectively include content from consumer sites, industry forums, government or supplier portals, cloud applications, and myriad other web sources as part of their overall strategy for data discovery and unstructured analytics.
For more information: OEID 3.1 OTN Software and Documentation Download | Endeca available for download on Software Delivery Cloud (eDelivery) | New OEID 3.1 Videos on YouTube | Oracle.com Endeca Site

    Read the article

  • C# Dev Challenge Part 1 of n – Beginner Edition

    - by mbcrump
    I developed this challenge to test one's knowledge of C Sharp. I am planning on creating several challenges with different skill sets, so don't get mad if this challenge doesn't really challenge you... I noticed that most people like short quizzes, so this one only contains 5 questions. All of the challenges are clear and concise about what I am asking you to do. No smoke and mirrors here, meaning that none of the code has syntax errors. The purpose of this exercise is to test several OOP concepts and see how much of the C# language you really know.

    Question #1 – Let's start off easy… Will the following code snippet compile successfully?
    What does this question test? – Can this compile without a namespace? Do you have to have an entry point of "static void Main()"?

    class Test
    {
        static int Main()
        {
            System.Console.WriteLine("Developer Challenge");
            return 0;
        }
    }

    Answer (select text in box below): Yes, it will compile successfully.

    Question #2 – What is the value of the Console.WriteLine statements?
    What does this question test? – Do I understand reference types/value types? If a variable is declared with the @ symbol and it's not a reserved keyword, does the application compile successfully?

    using System;

    internal struct MyStruct
    {
        public int Value;
    }

    internal class MyClass
    {
        public int Value;
    }

    class Test
    {
        static void Main()
        {
            MyStruct @struct1 = new MyStruct();
            MyStruct @struct2 = @struct1;
            @struct2.Value = 100;

            MyClass @ref1 = new MyClass();
            MyClass @ref2 = @ref1;
            @ref2.Value = 100;

            Console.WriteLine("Value Type: {0} {1}", @struct1.Value, @struct2.Value);
            Console.WriteLine("Reference Type: {0} {1}", @ref1.Value, @ref2.Value);
        }
    }

    Answer (select text in box below):
    Value Type: 0 100
    Reference Type: 100 100

    Question #3 – What is the value of the Console.WriteLine statements?
    What does this question test? – Can 2 objects reference the same point in memory?

    using System;

    class Test
    {
        static void Main()
        {
            string s1 = "Testing2";
            string t1 = s1;
            Console.WriteLine(s1 == t1);
            Console.WriteLine((object)s1 == (object)t1);
        }
    }

    Answer (select text in box below):
    True
    True

    Question #4 – What is the value of the Console.WriteLine statements?
    What does this question test? – How does the "Stack" work – LIFO or FIFO?

    using System;
    using System.Collections;

    class Test
    {
        static void Main()
        {
            Stack a = new Stack(5);
            a.Push("1");
            a.Push("2");
            a.Push("3");
            a.Push("4");
            a.Push("5");
            foreach (var o in a)
            {
                Console.WriteLine(o);
            }
        }
    }

    Answer (select text in box below):
    5
    4
    3
    2
    1

    Question #5 – What is the value of the Console.WriteLine statements?
    What does this question test? – Array and general looping knowledge.

    using System;

    namespace ConsoleApplication5
    {
        class Program
        {
            static void Main(string[] args)
            {
                int[] J_LIST = new int[5] { 1, 2, 3, 4, 5 };
                int K = 10;
                int L = 5;
                foreach (var J in J_LIST)
                {
                    K = K - J;
                    L = K + 2 * J;
                    Console.WriteLine("J = {0, 5} K = {1, 5} L = {2, 5}", J, K, L);
                }
                Console.ReadLine();
            }
        }
    }

    Answer (select text in box below):
    J = 1 K = 9 L = 11
    J = 2 K = 7 L = 11
    J = 3 K = 4 L = 10
    J = 4 K = 0 L = 8
    J = 5 K = -5 L = 5

    Stay tuned for more challenges!

    Read the article

  • Multiple full joins in Postgres is slow

    - by blast83
    I have a program to use the IMDB database and am having very slow performance on my query. It appears that it doesn't use my WHERE condition until after it materializes everything. I looked around for hints to use, but nothing seems to work. Here is my query:

    SELECT *
    FROM name as n1
    FULL JOIN aka_name ON n1.id = aka_name.person_id
    FULL JOIN cast_info as t2 ON n1.id = t2.person_id
    FULL JOIN person_info as t3 ON n1.id = t3.person_id
    FULL JOIN char_name as t4 ON t2.person_role_id = t4.id
    FULL JOIN role_type as t5 ON t2.role_id = t5.id
    FULL JOIN title as t6 ON t2.movie_id = t6.id
    FULL JOIN aka_title as t7 ON t6.id = t7.movie_id
    FULL JOIN complete_cast as t8 ON t6.id = t8.movie_id
    FULL JOIN kind_type as t9 ON t6.kind_id = t9.id
    FULL JOIN movie_companies as t10 ON t6.id = t10.movie_id
    FULL JOIN movie_info as t11 ON t6.id = t11.movie_id
    FULL JOIN movie_info_idx as t19 ON t6.id = t19.movie_id
    FULL JOIN movie_keyword as t12 ON t6.id = t12.movie_id
    FULL JOIN movie_link as t13 ON t6.id = t13.linked_movie_id
    FULL JOIN link_type as t14 ON t13.link_type_id = t14.id
    FULL JOIN keyword as t15 ON t12.keyword_id = t15.id
    FULL JOIN company_name as t16 ON t10.company_id = t16.id
    FULL JOIN company_type as t17 ON t10.company_type_id = t17.id
    FULL JOIN comp_cast_type as t18 ON t8.status_id = t18.id
    WHERE n1.id = 2003

    Every table is related to the others via foreign-key constraints, and there are indexes for all the mentioned columns. The query plan details:

    "Hash Left Join (cost=5838187.01..13756845.07 rows=15579622 width=835) (actual time=146879.213..146891.861 rows=20 loops=1)"
    " Hash Cond: (t8.status_id = t18.id)"
    " -> Hash Left Join (cost=5838185.92..13542624.18 rows=15579622 width=822) (actual time=146879.199..146891.833 rows=20 loops=1)"
    " Hash Cond: (t10.company_type_id = t17.id)"
    " -> Hash Left Join (cost=5838184.83..13328403.29 rows=15579622 width=797) (actual time=146879.165..146891.781 rows=20 loops=1)"
    " Hash Cond: (t10.company_id = t16.id)"
    " -> Hash Left Join (cost=5828372.95..10061752.03 rows=15579622 width=755) (actual time=146426.483..146429.756 rows=20 loops=1)"
    " Hash Cond: (t12.keyword_id = t15.id)"
    " -> Hash Left Join (cost=5825164.23..6914088.45 rows=15579622 width=731) (actual time=146372.411..146372.529 rows=20 loops=1)"
    " Hash Cond: (t13.link_type_id = t14.id)"
    " -> Merge Left Join (cost=5825162.82..6699867.24 rows=15579622 width=715) (actual time=146372.366..146372.472 rows=20 loops=1)"
    " Merge Cond: (t6.id = t13.linked_movie_id)"
    " -> Merge Left Join (cost=5684009.29..6378956.77 rows=15579622 width=699) (actual time=144019.620..144019.711 rows=20 loops=1)"
    " Merge Cond: (t6.id = t12.movie_id)"
    " -> Merge Left Join (cost=5182403.90..5622400.75 rows=8502523 width=687) (actual time=136849.731..136849.809 rows=20 loops=1)"
    " Merge Cond: (t6.id = t19.movie_id)"
    " -> Merge Left Join (cost=4974472.00..5315778.48 rows=8502523 width=637) (actual time=134972.032..134972.099 rows=20 loops=1)"
    " Merge Cond: (t6.id = t11.movie_id)"
    " -> Merge Left Join (cost=1830064.81..2033131.89 rows=1341632 width=561) (actual time=63784.035..63784.062 rows=2 loops=1)"
    " Merge Cond: (t6.id = t10.movie_id)"
    " -> Nested Loop Left Join (cost=1417360.29..1594294.02 rows=1044480 width=521) (actual time=59279.246..59279.264 rows=1 loops=1)"
    " Join Filter: (t6.kind_id = t9.id)"
    " -> Merge Left Join (cost=1417359.22..1429787.34 rows=1044480 width=507) (actual time=59279.222..59279.224 rows=1 loops=1)"
    " Merge Cond: (t6.id = t8.movie_id)"
    " -> Merge Left Join (cost=1405731.84..1414378.65 rows=1044480 width=491) (actual time=59121.773..59121.775 rows=1 loops=1)"
    " Merge Cond: (t6.id = t7.movie_id)"
    " -> Sort (cost=1346206.04..1348817.24 rows=1044480 width=416) (actual time=58095.230..58095.231 rows=1 loops=1)"
    " Sort Key: t6.id"
    " Sort Method: quicksort Memory: 17kB"
    " -> Hash Left Join (cost=172406.29..456387.53 rows=1044480 width=416) (actual time=57969.371..58095.208 rows=1 loops=1)"
    " Hash Cond: (t2.movie_id = t6.id)"
    " -> Hash Left Join (cost=104700.38..256885.82 rows=1044480 width=358) (actual time=49981.493..50006.303 rows=1 loops=1)"
    " Hash Cond: (t2.role_id = t5.id)"
    " -> Hash Left Join (cost=104699.11..242522.95 rows=1044480 width=343) (actual time=49981.441..50006.250 rows=1 loops=1)"
    " Hash Cond: (t2.person_role_id = t4.id)"
    " -> Hash Left Join (cost=464.96..12283.95 rows=1044480 width=269) (actual time=0.071..0.087 rows=1 loops=1)"
    " Hash Cond: (n1.id = t3.person_id)"
    " -> Nested Loop Left Join (cost=0.00..49.39 rows=7680 width=160) (actual time=0.051..0.066 rows=1 loops=1)"
    " -> Nested Loop Left Join (cost=0.00..17.04 rows=3 width=119) (actual time=0.038..0.041 rows=1 loops=1)"
    " -> Index Scan using name_pkey on name n1 (cost=0.00..8.68 rows=1 width=39) (actual time=0.022..0.024 rows=1 loops=1)"
    " Index Cond: (id = 2003)"
    " -> Index Scan using aka_name_idx_person on aka_name (cost=0.00..8.34 rows=1 width=80) (actual time=0.010..0.010 rows=0 loops=1)"
    " Index Cond: ((aka_name.person_id = 2003) AND (n1.id = aka_name.person_id))"
    " -> Index Scan using cast_info_idx_pid on cast_info t2 (cost=0.00..10.77 rows=1 width=41) (actual time=0.011..0.020 rows=1 loops=1)"
    " Index Cond: ((t2.person_id = 2003) AND (n1.id = t2.person_id))"
    " -> Hash (cost=463.26..463.26 rows=136 width=109) (actual time=0.010..0.010 rows=0 loops=1)"
    " -> Index Scan using person_info_idx_pid on person_info t3 (cost=0.00..463.26 rows=136 width=109) (actual time=0.009..0.009 rows=0 loops=1)"
    " Index Cond: (person_id = 2003)"
    " -> Hash (cost=42697.62..42697.62 rows=2442362 width=74) (actual time=49305.872..49305.872 rows=2442362 loops=1)"
    " -> Seq Scan on char_name t4 (cost=0.00..42697.62 rows=2442362 width=74) (actual time=14.066..22775.087 rows=2442362 loops=1)"
    " -> Hash (cost=1.12..1.12 rows=12 width=15) (actual time=0.024..0.024 rows=12 loops=1)"
    " -> Seq Scan on role_type t5 (cost=0.00..1.12 rows=12 width=15) (actual time=0.012..0.014 rows=12 loops=1)"
    " -> Hash (cost=31134.07..31134.07 rows=1573507 width=58) (actual time=7841.225..7841.225 rows=1573507 loops=1)"
    " -> Seq Scan on title t6 (cost=0.00..31134.07 rows=1573507 width=58) (actual time=21.507..2799.443 rows=1573507 loops=1)"
    " -> Materialize (cost=59525.80..63203.88 rows=294246 width=75) (actual time=812.376..984.958 rows=192075 loops=1)"
    " -> Sort (cost=59525.80..60261.42 rows=294246 width=75) (actual time=812.363..922.452 rows=192075 loops=1)"
    " Sort Key: t7.movie_id"
    " Sort Method: external merge Disk: 24880kB"
    " -> Seq Scan on aka_title t7 (cost=0.00..6646.46 rows=294246 width=75) (actual time=24.652..164.822 rows=294246 loops=1)"
    " -> Materialize (cost=11627.38..12884.43 rows=100564 width=16) (actual time=123.819..149.086 rows=41907 loops=1)"
    " -> Sort (cost=11627.38..11878.79 rows=100564 width=16) (actual time=123.807..138.530 rows=41907 loops=1)"
    " Sort Key: t8.movie_id"
    " Sort Method: external merge Disk: 3136kB"
    " -> Seq Scan on complete_cast t8 (cost=0.00..1549.64 rows=100564 width=16) (actual time=0.013..10.744 rows=100564 loops=1)"
    " -> Materialize (cost=1.08..1.15 rows=7 width=14) (actual time=0.016..0.029 rows=7 loops=1)"
    " -> Seq Scan on kind_type t9 (cost=0.00..1.07 rows=7 width=14) (actual time=0.011..0.013 rows=7 loops=1)"
    " -> Materialize (cost=412704.52..437969.09 rows=2021166 width=40) (actual time=3420.356..4278.545 rows=1028995 loops=1)"
    " -> Sort (cost=412704.52..417757.43 rows=2021166 width=40) (actual time=3420.349..3953.483 rows=1028995 loops=1)"
    " Sort Key: t10.movie_id"
    " Sort Method: external merge Disk: 90960kB"
    " -> Seq Scan on movie_companies t10 (cost=0.00..35214.66 rows=2021166 width=40) (actual time=13.271..566.893 rows=2021166 loops=1)"
    " -> Materialize (cost=3144407.19..3269057.42 rows=9972019 width=76) (actual time=65485.672..70083.219 rows=5039009 loops=1)"
    " -> Sort (cost=3144407.19..3169337.23 rows=9972019 width=76) (actual time=65485.667..68385.550 rows=5038999 loops=1)"
    " Sort Key: t11.movie_id"
    " Sort Method: external merge Disk: 735512kB"
    " -> Seq Scan on movie_info t11 (cost=0.00..212815.19 rows=9972019 width=76) (actual time=15.750..15715.608 rows=9972019 loops=1)"
    " -> Materialize (cost=207925.01..219867.92 rows=955433 width=50) (actual time=1483.989..1785.636 rows=429401 loops=1)"
    " -> Sort (cost=207925.01..210313.59 rows=955433 width=50) (actual time=1483.983..1654.165 rows=429401 loops=1)"
    " Sort Key: t19.movie_id"
    " Sort Method: external merge Disk: 31720kB"
    " -> Seq Scan on movie_info_idx t19 (cost=0.00..15047.33 rows=955433 width=50) (actual time=7.284..221.597 rows=955433 loops=1)"
    " -> Materialize (cost=501605.39..537645.64 rows=2883220 width=12) (actual time=5823.040..6868.242 rows=1597396 loops=1)"
    " -> Sort (cost=501605.39..508813.44 rows=2883220 width=12) (actual time=5823.026..6477.517 rows=1597396 loops=1)"
    " Sort Key: t12.movie_id"
    " Sort Method: external merge Disk: 78888kB"
    " -> Seq Scan on movie_keyword t12 (cost=0.00..44417.20 rows=2883220 width=12) (actual time=11.672..839.498 rows=2883220 loops=1)"
    " -> Materialize (cost=141143.93..152995.81 rows=948150 width=16) (actual time=1916.356..2253.004 rows=478358 loops=1)"
    " -> Sort (cost=141143.93..143514.31 rows=948150 width=16) (actual time=1916.344..2125.698 rows=478358 loops=1)"
    " Sort Key: t13.linked_movie_id"
    " Sort Method: external merge Disk: 29632kB"
    " -> Seq Scan on movie_link t13 (cost=0.00..14607.50 rows=948150 width=16) (actual time=27.610..297.962 rows=948150 loops=1)"
    " -> Hash (cost=1.18..1.18 rows=18 width=16) (actual time=0.020..0.020 rows=18 loops=1)"
    " -> Seq Scan on link_type t14 (cost=0.00..1.18 rows=18 width=16) (actual time=0.010..0.012 rows=18 loops=1)"
    " -> Hash (cost=1537.10..1537.10 rows=91010 width=24) (actual time=54.055..54.055 rows=91010 loops=1)"
    " -> Seq Scan on keyword t15 (cost=0.00..1537.10 rows=91010 width=24) (actual time=0.006..14.703 rows=91010 loops=1)"
    " -> Hash (cost=4585.61..4585.61 rows=245461 width=42) (actual time=445.269..445.269 rows=245461 loops=1)"
    " -> Seq Scan on company_name t16 (cost=0.00..4585.61 rows=245461 width=42) (actual time=12.037..309.961 rows=245461 loops=1)"
    " -> Hash (cost=1.04..1.04 rows=4 width=25) (actual time=0.013..0.013 rows=4 loops=1)"
    " -> Seq Scan on company_type t17 (cost=0.00..1.04 rows=4 width=25) (actual time=0.009..0.010 rows=4 loops=1)"
    " -> Hash (cost=1.04..1.04 rows=4 width=13) (actual time=0.006..0.006 rows=4 loops=1)"
    " -> Seq Scan on comp_cast_type t18 (cost=0.00..1.04 rows=4 width=13) (actual time=0.002..0.003 rows=4 loops=1)"
    "Total runtime: 147055.016 ms"

    Is there any way to force name.id = 2003 before it tries to join all the tables together?
    As you can see, the end result is 4 tuples, but it seems like this should be a fast join: once the name clause limits things down, the available indexes could be used, although the query is very complex.
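
    A hedged sketch of the kind of rewrite the question is hinting at (not from the original thread): since WHERE n1.id = 2003 discards every row in which the name side is NULL, the FULL JOINs on n1 behave like LEFT JOINs here, so one option is to say so explicitly and let the planner start from the single name row:

    -- Filter name first, then join outward; the remaining tables follow the
    -- same pattern as the original query. Semantics should broadly match,
    -- because the WHERE clause already eliminated NULL-extended name rows.
    SELECT *
    FROM (SELECT * FROM name WHERE id = 2003) AS n1
    LEFT JOIN aka_name          ON n1.id = aka_name.person_id
    LEFT JOIN cast_info   AS t2 ON n1.id = t2.person_id
    LEFT JOIN person_info AS t3 ON n1.id = t3.person_id
    LEFT JOIN char_name   AS t4 ON t2.person_role_id = t4.id
    LEFT JOIN title       AS t6 ON t2.movie_id = t6.id;
    -- ...and so on for the other joins in the original query.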

    Read the article

  • Populating Models from other Models in Django?

    - by JT
    This is somewhat related to the question posed in this earlier question, but I'm trying to do this with an abstract base class. For the purposes of this example, let's use these models:

    class Comic(models.Model):
        name = models.CharField(max_length=20)
        desc = models.CharField(max_length=100)
        volume = models.IntegerField()
        # ... <50 other things that make up a Comic>

        class Meta:
            abstract = True

    class InkedComic(Comic):
        lines = models.IntegerField()

    class ColoredComic(Comic):
        colored = models.BooleanField(default=False)

    In the view, let's say we get a reference to an InkedComic id, since the tracer, err I mean, inker is done drawing the lines and it's time to add color. Once the view has added all the color, we want to save a ColoredComic to the db. Obviously we could do

    inked = InkedComic.objects.get(pk=ink_id)
    colored = ColoredComic()
    colored.name = inked.name

    etc., etc. But really it'd be nice to do:

    colored = ColoredComic(inked_comic=inked)
    colored.colored = True
    colored.save()

    I tried to do

    class ColoredComic(Comic):
        colored = models.BooleanField(default=False)

        def __init__(self, inked_comic=False, *args, **kwargs):
            super(ColoredComic, self).__init__(*args, **kwargs)
            if inked_comic:
                self.__dict__.update(inked_comic.__dict__)
                self.__dict__.update({'id': None})  # Remove pk field value

    but it turns out the ColoredComic.objects.get(pk=1) call sticks the pk into the inked_comic keyword (and actually results in an "int does not have a __dict__" exception), which is obviously not intended. My brain is fried at this point. Am I missing something obvious, or is there a better way to do this?

    Read the article

  • Improving long-polling Ajax performance

    - by Bears will eat you
    I'm writing a webapp (Firefox-compatible only) which uses long polling (via jQuery's ajax abilities) to send more-or-less constant updates from the server to the client. I'm concerned about the effects of leaving this running for long periods of time, say, all day or overnight. The basic code skeleton is this:

    function processResults(xml) {
        // do stuff with the xml from the server
    }

    function fetch() {
        setTimeout(function () {
            $.ajax({
                type: 'GET',
                url: 'foo/bar/baz',
                dataType: 'xml',
                success: function (xml) {
                    processResults(xml);
                    fetch();
                },
                error: function (xhr, type, exception) {
                    if (xhr.status === 0) {
                        console.log('XMLHttpRequest cancelled');
                    } else {
                        console.debug(xhr);
                        fetch();
                    }
                }
            });
        }, 500);
    }

    (The half-second "sleep" is so that the client doesn't hammer the server if the updates are coming back to the client quickly – which they usually are.) After leaving this running overnight, it tends to make Firefox crawl. I'd been thinking that this could be partially caused by a large stack depth, since I've basically written an infinitely recursive function. However, if I use Firebug and throw a breakpoint into fetch, it looks like this is not the case. The stack that Firebug shows me is only about 4 or 5 frames deep, even after an hour. One of the solutions I'm considering is changing my recursive function to an iterative one, but I can't figure out how I would insert the delay in between Ajax requests without spinning. I've looked at the JS 1.7 "yield" keyword but I can't quite wrap my head around it, to figure out if it's what I need here. Is the best solution just to do a hard refresh on the page periodically, say, once every hour? Is there a better/leaner long-polling design pattern that won't put a hurt on the browser even after running for 8 or 12 hours? Or should I just skip the long polling altogether and use a different "constant update" pattern, since I usually know how frequently the server will have a response for me?

    Read the article

  • Android Client : Web service - what's the correct SOAP_ACTION, METHOD_NAME, NAMESPACE, URL I should

    - by Hubert
    if I want to use the following Web service (help.be is just an example, let's say it does exist): http://www.help.be/webservice/webservice_help.php (it's written in PHP=client's choice, not .NET) with the following WSDL : <?xml version="1.0" encoding="UTF-8"?> <definitions xmlns="http://schemas.xmlsoap.org/wsdl/" name="webservice_help" targetNamespace="http://www.help.be/webservice/webservice_help.php" xmlns:tns="http://www.help.be/webservice/webservice_help.php" xmlns:impl="http://www.help.be/webservice/webservice_help.php" xmlns:xsd1="http://www.help.be/webservice/webservice_help.php" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/" xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/" xmlns:soapenc="http://schemas.xmlsoap.org/soap/encoding/"> <portType name="webservice_helpPortType"> <operation name="webservice_help"> <input message="tns:Webservice_helpRequest"/> </operation> <operation name="getLocation" parameterOrder="input"> <input message="tns:GetLocationRequest"/> <output message="tns:GetLocationResponse"/> </operation> <operation name="getStationDetail" parameterOrder="input"> <input message="tns:GetStationDetailRequest"/> <output message="tns:GetStationDetailResponse"/> </operation> <operation name="getStationList" parameterOrder="input"> <input message="tns:GetStationListRequest"/> <output message="tns:GetStationListResponse"/> </operation> </portType> <binding name="webservice_helpBinding" type="tns:webservice_helpPortType"> <soap:binding style="rpc" transport="http://schemas.xmlsoap.org/soap/http"/> <operation name="webservice_help"> <soap:operation soapAction="urn:webservice_help#webservice_helpServer#webservice_help"/> <input> <soap:body use="encoded" namespace="http://www.help.be/webservice/webservice_help.php" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/> </input> </operation> <operation name="getLocation"> <soap:operation soapAction="urn:webservice_help#webservice_helpServer#getLocation"/> <input> <soap:body parts="input" use="encoded" namespace="http://www.help.be/webservice/webservice_help.php" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/> </input> <output> <soap:body parts="return" use="encoded" namespace="http://www.help.be/webservice/webservice_help.php" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/> </output> </operation> <operation name="getStationDetail"> <soap:operation soapAction="urn:webservice_help#webservice_helpServer#getStationDetail"/> <input> <soap:body parts="input" use="encoded" namespace="http://www.help.be/webservice/webservice_help.php" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/> </input> <output> <soap:body parts="return" use="encoded" namespace="http://www.help.be/webservice/webservice_help.php" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/> </output> </operation> <operation name="getStationList"> <soap:operation soapAction="urn:webservice_help#webservice_helpServer#getStationList"/> <input> <soap:body parts="input" use="encoded" namespace="http://www.help.be/webservice/webservice_help.php" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/> </input> <output> <soap:body parts="return" use="encoded" namespace="http://www.help.be/webservice/webservice_help.php" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/> </output> </operation> </binding> <message name="Webservice_helpRequest"/> <message name="GetLocationRequest"> <part name="input" type="xsd:array"/> </message> <message name="GetLocationResponse"> <part 
name="return" type="xsd:array"/> </message> <message name="GetStationDetailRequest"> <part name="input" type="xsd:array"/> </message> <message name="GetStationDetailResponse"> <part name="return" type="xsd:string"/> </message> <message name="GetStationListRequest"> <part name="input" type="xsd:array"/> </message> <message name="GetStationListResponse"> <part name="return" type="xsd:string"/> </message> <service name="webservice_helpService"> <port name="webservice_helpPort" binding="tns:webservice_helpBinding"> <soap:address location="http://www.help.be/webservice/webservice_help.php"/> </port> </service> </definitions> What is the correct SOAP_ACTION, METHOD_NAME, NAMESPACE, URL I should use below ? I've tried with this : public class Main extends Activity { /** Called when the activity is first created. */ private static final String SOAP_ACTION_GETLOCATION = "getLocation"; private static final String METHOD_NAME_GETLOCATION = "getLocation"; private static final String NAMESPACE = "http://www.help.be/webservice/"; private static final String URL = "http://www.help.be/webservice/webservice_help.php"; TextView tv; @SuppressWarnings("unchecked") @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); tv = (TextView)findViewById(R.id.TextView01); // -------------------------------------------------------------------------------------- SoapObject request_location = new SoapObject(NAMESPACE, METHOD_NAME_GETLOCATION); request_location.addProperty("login", "login"); // -> string required request_location.addProperty("password", "password"); // -> string required request_location.addProperty("serial", "serial"); // -> string required request_location.addProperty("language", "fr"); // -> string required (available « fr,nl,uk,de ») request_location.addProperty("keyword", "Braine"); // -> string required // -------------------------------------------------------------------------------------- SoapSerializationEnvelope soapEnvelope = new SoapSerializationEnvelope(SoapEnvelope.VER11); //soapEnvelope.dotNet = true; // don't forget it for .NET WebServices ! soapEnvelope.setOutputSoapObject(request_location); AndroidHttpTransport aht = new AndroidHttpTransport(URL); try { aht.call(SOAP_ACTION_GETLOCATION, soapEnvelope); // Get the SAOP Envelope back and then extract the body SoapObject resultsRequestSOAP = (SoapObject) soapEnvelope.bodyIn; Vector XXXX = (Vector) resultsRequestSOAP.getProperty("GetLocationResponse"); int vector_size = XXXX.size(); Log.i("Hub", "testat="+vector_size); tv.setText("OK"); } catch(Exception E) { tv.setText("ERROR:" + E.getClass().getName() + ": " + E.getMessage()); Log.i("Hub", "Exception E"); Log.i("Hub", "E.getClass().getName()="+E.getClass().getName()); Log.i("Hub", "E.getMessage()="+E.getMessage()); } // -------------------------------------------------------------------------------------- } } I'm not sure of the SOAP_ACTION, METHOD_NAME, NAMESPACE, URL I have to use? because soapAction is pointing to a URN instead of a traditional URL and it's PHP and not .NET ... also, I'm not sure if I have to use request_location.addProperty("login", "login"); of request_location.addAttribute("login", "login"); ? = <message name="GetLocationRequest"> <part name="input" type="xsd:array"/> What would you say ? Txs for your help. H. 
EDIT : Here is some code working in PHP - I simply want to have the same but in Android/JAVA : <?php ini_set("soap.wsdl_cache_enabled", "0"); // disabling WSDL cache $request['login'] = 'login'; $request['password'] = 'password'; $request['serial'] = 'serial'; $request['language'] = 'fr'; $client= new SoapClient("http://www.test.be/webservice/webservice_test.wsdl"); print_r( $client->__getFunctions()); ?><hr><h1>getLocation</h1> <h2>Input:</h2> <? $request['keyword'] = 'Bruxelles'; print_r($request); ?><h2>Result</h2><? $result = $client->getLocation($request); print_r($result); ?>
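    One reading of the WSDL above, as a sketch rather than a verified answer (all four values are lifted straight from the document; a URN is a perfectly legal SOAPAction string and should be passed verbatim):

        private static final String NAMESPACE =
            "http://www.help.be/webservice/webservice_help.php";      // the targetNamespace
        private static final String URL =
            "http://www.help.be/webservice/webservice_help.php";      // the soap:address location
        private static final String SOAP_ACTION_GETLOCATION =
            "urn:webservice_help#webservice_helpServer#getLocation";  // the soapAction attribute, verbatim
        private static final String METHOD_NAME_GETLOCATION = "getLocation"; // the operation name

    As for addProperty versus addAttribute: in kSOAP2, addProperty builds child elements of the request (the operation's parameters), while addAttribute (where the library exposes it) writes XML attributes, so addProperty looks like the right call for this rpc/encoded binding.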

    Read the article

  • Getting Facebook Posts Permalink from Facebook Graph API Search

    - by Alexia
    I want to use the Facebook Graph API to search the public status updates concerning a keyword. For example, this works great: http://graph.facebook.com/search?q=obama&type=post It shows me all the posts with the word "obama" in them. If the post is a picture, it actually returns a field called "link", which is the permalink to the picture on the actual Facebook website, in the user's profile. That is exactly what I want, but for pictures. But if the post in question is just a status update, i.e. just text, all it returns is three fields: message, created_time, and updated_time. How do I view this actual status update on www.facebook.com? I realize I can view it on graph.facebook.com in JSON format, but I want to be able to show the permalink to the status update, or post. The final result I would like to retrieve might look something like this: http://www.facebook.com/[user id]/posts/[post id] With the [user id] and [post id] fields swapped out with the actual IDs, obviously. TIA!
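    A sketch of the commonly used workaround (the composite-id format is an observed Graph API convention, not a documented guarantee, and the function name is made up):

        // Post ids returned by /search typically look like "<userid>_<postid>".
        function permalinkFromPost(post) {
            var parts = post.id.split('_');
            if (parts.length < 2) return null; // not a composite id
            return 'http://www.facebook.com/' + parts[0] + '/posts/' + parts[1];
        }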

    Read the article

  • NHibernate, Automapping and Chained Abstract Classes

    - by Mr Snuffle
    I'm having some trouble using NHibernate, automapping, and a class structure with a chain of abstract classes. It's something akin to this:

        public abstract class AbstractClassA {}
        public abstract class AbstractClassB : AbstractClassA {}
        public class ClassA : AbstractClassB {}

    When I attempt to build these mappings, I receive the following error:

        FluentNHibernate.Cfg.FluentConfigurationException was unhandled
        Message: An invalid or incomplete configuration was used while creating a SessionFactory.
        Check PotentialReasons collection, and InnerException for more detail.
        Database was not configured through Database method.

    However, if I remove the abstract keyword from AbstractClassB, everything works fine. The problem only occurs when I have more than one abstract class in the class hierarchy. I've manually configured the automapping to include both AbstractClassA and AbstractClassB using the following binding class:

        public class BindItemBases : IManualBinding
        {
            public void Bind(FluentNHibernate.Automapping.AutoPersistenceModel model)
            {
                model.IncludeBase<AbstractClassA>();
                model.IncludeBase<AbstractClassB>();
            }
        }

    I've had to do a bit of hackery to get around this, but there must be a better way to get it working. Surely NHibernate supports something like this; I just haven't figured out how to configure it right. Cheers, James
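    Incidentally, the exception text itself ("Database was not configured through Database method") points at the Fluently.Configure call rather than the class hierarchy. A minimal sketch of a configuration that passes something to Database() while registering both abstract bases (SQLite is used purely as an example provider, and is an assumption, not taken from the question):

        var sessionFactory = Fluently.Configure()
            .Database(SQLiteConfiguration.Standard.InMemory()) // any configured database will do
            .Mappings(m => m.AutoMappings.Add(
                AutoMap.AssemblyOf<ClassA>()
                    .IncludeBase<AbstractClassA>()
                    .IncludeBase<AbstractClassB>()))
            .BuildSessionFactory();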

    Read the article

  • Drupal image management

    - by vian
    Please suggest how I should approach these requirements. What ready-to-use solutions (modules) are best suited to achieve something like this? What I need is an image library of searchable, tagged images that are already resized when we publish them. If an author searches the library and the image he needs isn't there, he can upload one and have it added to the index. The important thing is that images in the library can be sorted into three categories (news images, top-story images, and feature images) so that, over time, we don't end up with hundreds of images crammed into one folder, making browsing a pain, and so that nobody searches for a keyword to find an image for a news item, picks one, and only then sees it's 1600 x 1200. Also, I need something that will assemble thumbnail galleries easily. I don't want to have to go to the image library, get a URL, go back, paste it in, etc. I should be able to pick, say, 8 images and say "create gallery". How this objective is achieved is flexible, but I am looking for a shortcut to get around assembling screenshot galleries by hand.

    Read the article

  • Customizing Syntax Highlighting in Vim

    - by sixtyfootersdude
    Hey, I have defined a few custom file types for Vim, like this, in my vimrc:

        au BufWinEnter,BufRead,BufNewFile *.jak set filetype=jak

    And then in jak.vim:

        syn match arrows /<-/
        syn match arrows /->/
        syn match arrows /=>/
        syn match arrows /<=/
        highlight arrows ctermfg=brown
        ...

    This works: the formatting applies to any file opened with the extension .jak. My question is how I can keep all the current formatting for a file type but add functionality. For example, I would like to add this for .vim files:

        syn keyword yellow yellow
        highlight yellow ctermfg=yellow
        ...

    (so that I can see how my terminal interprets different colors before choosing them). I have created ~/.vim/syntax/vim.vim (a file containing only the above) and put this into my vimrc:

        au BufWinEnter,BufRead,BufNewFile *.vim set filetype=vim

    This has no effect: the word yellow is not colored yellow. I have also tried putting my vim.vim file into ~/.vim/after/syntax/vim.vim, as suggested here. That is the approach I would like to take; it seems clean and easily maintainable.
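    For what it's worth, a minimal sketch of the after-directory approach (the group name is illustrative; the stock syntax file already defines many groups, so a distinct name avoids collisions):

        " ~/.vim/after/syntax/vim.vim is sourced after the stock vim.vim,
        " so no extra autocmd is needed: .vim files already get filetype=vim.
        syn keyword myColorTest yellow
        highlight myColorTest ctermfg=yellow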

    Read the article

  • Still confused about JavaScript's 'this'.

    - by Nick Lowman
    I've been reading through quite a few articles on the 'this' keyword when using JavaScript objects, and I'm still somewhat confused. I'm quite happy writing object-oriented JavaScript, and I get around the 'this' issue by referring to the full object path, but I don't like the fact that I still find 'this' confusing. I found a good answer here which helped me, but I'm still not 100% sure. So, on to the example. The following script is linked from test.html with <script src="js/test.js"></script>:

        if (!nick) {
            var nick = {};
        }

        nick.lowman = function(){
            var helloA = 'Hello A';
            console.log('1.', this, this.helloA);
            var init = function(){
                var helloB = 'Hello B';
                console.log('2.', this, this.helloB);
            }
            return {
                init: init
            }
        }();

        nick.lowman.init();

    What I kind of expected to see was:

        1. Object {} nick.lowman, 'Hello A'
        2. Object {} init, 'Hello B'

    But what I get is this:

        1. Window test.html, undefined
        2. Object {} init, undefined

    I think I understand some of what's happening there, but I wouldn't mind if someone out there explained it to me. Also, I'm not entirely sure why the first console.log is being called at all. If I remove the call to the init function (//nick.lowman.init()), Firebug still outputs "1. Window test.html, undefined". Why is that? Why does nick.lowman() get called by the window object when the HTML page loads? Many thanks
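    The heart of it, in a sketch with illustrative names: the trailing () makes the outer function an immediately-invoked expression, so it runs as a plain function call (with 'this' bound to window in non-strict code) the moment the script loads:

        var obj = {};
        obj.prop = function() {
            console.log(this === window); // true: invoked as a plain function
            return { init: function() { console.log(this); } };
        }(); // <- invoked right here, which is why the first log fires on page load

        obj.prop.init(); // inside init, 'this' is the returned object (obj.prop)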

    Read the article

  • Can I select 0 columns in SQL Server?

    - by Woody Zenfell III
    I am hoping this question fares a little better than the similar "Create a table without columns". Yes, I am asking about something that will strike most as pointlessly academic. It is easy to produce a SELECT result with 0 rows (but with columns), e.g.:

        SELECT a = 1 WHERE 1 = 0

    Is it possible to produce a SELECT result with 0 columns (but with rows)? E.g. something like:

        SELECT NO COLUMNS FROM Foo -- (This is not valid T-SQL.)

    I came across this because I wanted to insert several rows without specifying any column data for any of them, e.g. (SQL Server 2005):

        CREATE TABLE Bar (id INT NOT NULL IDENTITY PRIMARY KEY)

        INSERT INTO Bar SELECT NO COLUMNS FROM Foo
        -- Invalid column name 'NO'.
        -- An explicit value for the identity column in table 'Bar' can only be
        -- specified when a column list is used and IDENTITY_INSERT is ON.

    One can insert a single row without specifying any column data, e.g. INSERT INTO Foo DEFAULT VALUES. One can query for a count of rows (without retrieving actual column data from the table), e.g. SELECT COUNT(*) FROM Foo. (But that result set, of course, has a column.) I tried things like:

        INSERT INTO Bar () SELECT * FROM Foo
        -- Parameters supplied for object 'Bar' which is not a function.
        -- If the parameters are intended as a table hint, a WITH keyword is required.

    and:

        INSERT INTO Bar DEFAULT VALUES SELECT * FROM Foo
        -- which is a standalone INSERT statement followed by a standalone SELECT statement.

    I can do what I need to do a different way, but the apparent lack of consistency in support for degenerate cases surprises me. I read through the relevant sections of BOL and didn't see anything, and I was surprised to come up with nothing via Google either.
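    For the record, a sketch of one workaround, assuming the goal is one default identity row in Bar per row in Foo (T-SQL does not appear to offer a zero-column SELECT, so the identity values are generated explicitly instead):

        SET IDENTITY_INSERT Bar ON;

        INSERT INTO Bar (id)
        SELECT COALESCE((SELECT MAX(id) FROM Bar), 0)
             + ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) -- arbitrary row order
        FROM Foo;

        SET IDENTITY_INSERT Bar OFF;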

    Read the article

  • Typedef and Struct in C and H files

    - by Leif Andersen
    I've been using the following code to create various structs while only giving people outside of the C file a pointer to them. (Yes, I know that they could potentially mess around with it, so it's not entirely like the private keyword in Java, but that's okay with me.) Anyway, I looked at it today, and I'm really surprised that it actually works. Can anyone explain why? In my C file, I create my struct, but don't give it a name in the typedef namespace:

        struct LABall {
            int x;
            int y;
            int radius;
            Vector velocity;
        };

    And in the H file, I put this:

        typedef struct LABall* LABall;

    I am obviously using #import "LABall.h" in the C file, but I am NOT using #import "LABall.c" in the header file, as that would defeat the whole purpose of a separate header file. So why am I able to create a pointer to the LABall struct in the H file when I haven't actually included its definition? Does it have something to do with the struct namespace working across files, even when one file is in no way linked to another? Thank you.
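    A sketch of what the compiler sees (file names and function names are hypothetical): C happily declares pointers to incomplete types, and struct tags live in their own namespace, separate from typedef names, so the header never needs the layout at all.

        /* laball.h: "struct LABall" is an incomplete type here; the compiler
         * only needs its name to deal in pointers, never its size or layout. */
        typedef struct LABall* LABall;
        LABall LABall_new(int x, int y, int radius);
        int    LABall_x(LABall ball);

        /* laball.c: the one place the layout is completed. Every translation
         * unit that only includes the header still links against this one. */
        struct LABall { int x; int y; int radius; };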

    Read the article

  • Can I avoid explicit 'self'?

    - by bguiz
    I am fairly new to Python and trying to pick it up, so please forgive if this question has a very obvious answer! I have been learning Python by following some pygame tutorials. Therein I found extensive use of the keyword self, and coming from a primarily Java background, I find that I keep forgetting to type self. For example, instead of self.rect.centerx I would type rect.centerx, because, to me, rect is already a member variable of the class. The Java parallel I can think of for this situation is having to prefix all references to member variables with this. Am I stuck prefixing all member variables with self, or is there a way to declare them that would allow me to avoid having to do so? Even if what I am suggesting isn't "pythonic", I'd still like to know if it is possible. I have taken a look at these related SO questions, but they don't quite answer what I am after:

        Python - why use “self” in a class?
        Why do you need explicitly have the “self” argument into a Python method?
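    For what it's worth, a quick sketch of the usual answer: no, self can't be declared away, but a short local alias inside a long method is legal (class and attribute names below are made up for illustration):

        class Ball:
            def __init__(self):
                self.centerx = 0

            def update(self):
                # A local alias cuts the typing, at some cost in clarity.
                s = self
                s.centerx += 1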

    Read the article

  • How to sort LINQ results by similarity/equality

    - by aNui
    I want to do a search for musical instruments, each of which has the information Name, Category, and Origin, as I asked in my post. But now I want to sort/group the results by similarity/equality to the keyword. For example, if I have the list { Drum, Grand Piano, Guitar, Guitarrón, Harp, Piano }, sorted by name, and I query "p", the result should be { Piano, Grand Piano, Harp }, but it shows Harp first because of the source list's sequence. And if I add { Grand Piano } to the list and query "piano", the result should be like { Piano, Grand Piano }; if I query "guitar", it should be { Guitar, Guitarrón }. Here's my code:

        static IEnumerable<MInstrument> InstrumentsSearch(
            IEnumerable<MInstrument> InstrumentsList,
            string query,
            MInstrument.Category[] SelectedCategories,
            MInstrument.Origin[] SelectedOrigins)
        {
            var result = InstrumentsList
                .Where(item => SelectedCategories.Contains(item.category))
                .Where(item => SelectedOrigins.Contains(item.origin))
                .Where(item =>
                    {
                        if ((" " + item.Name.ToLower()).Contains(" " + query.ToLower())
                            || item.Name.IndexOf(query) != -1)
                        {
                            return true;
                        }
                        return false;
                    })
                .Take(30);

            return result.ToList<MInstrument>();
        }

    Or the result may be like my old self-invented algorithm, which I called "by order of occurrence"; that would be just OK to me. A further thing to do is that I need to search the Name, Category, or Origin, such that if I type "Italy" it should find Piano (or something else from Italy), and if I type "string" it should find Guitar. Is there any way to do those things? Please tell me. Thanks in advance.
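    A sketch of one way to get the "Piano before Harp" ordering, assuming the MInstrument.Name property above: sort by where the match occurs, so prefix and word-start matches come first, then break ties alphabetically.

        // Every item in result already matches, so IndexOf is never -1 here.
        var ordered = result
            .OrderBy(item => item.Name.IndexOf(query, StringComparison.OrdinalIgnoreCase))
            .ThenBy(item => item.Name)
            .Take(30);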

    Read the article

  • JavaScript: using 'this' in a global object

    - by Marco Demaio
    What does the 'this' keyword refer to when used in a global object? Let's say, for instance, we have:

        var SomeGlobalObject =
        {
            rendered: true,
            show: function()
            {
                /* I should use 'SomeGlobalObject.rendered' below, otherwise it
                   won't work when called from event scope. But it works when
                   called from timer scope!! How can this be? */
                if(this.rendered)
                    alert("hello");
            }
        }

    Now if we call the following in an inline script in the HTML page, everything works fine:

        SomeGlobalObject.show();
        window.setTimeout("SomeGlobalObject.show()", 1000);

    But if we do something like the following, we get an error, because this.rendered is undefined when called from the event scope:

        AppendEvent(window, 'load', SomeGlobalObject.show);

    Do you know why this happens? Could you explain whether there is a smarter way to do this, without having to rewrite "SomeGlobalObject.someProperty" every time inside the SomeGlobalObject code? Thanks! AppendEvent is just a simple cross-browser function to append an event; its code is below, but it does not matter for answering the questions above.

        function AppendEvent(html_element, event_name, event_function)
        {
            if(html_element.attachEvent) // IE
                return html_element.attachEvent("on" + event_name, event_function);
            else if(html_element.addEventListener) // FF
                html_element.addEventListener(event_name, event_function, false);
        }
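    A sketch of the usual workaround: 'this' is bound at call time, not at definition time, so passing a wrapper keeps it pointing at the object no matter how the handler is invoked.

        // Inside show(), 'this' is SomeGlobalObject again, because the
        // wrapper calls it as a method of the object.
        AppendEvent(window, 'load', function() { SomeGlobalObject.show(); });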

    Read the article

  • Indentation control while developing a small Python-like language

    - by sap
    Hello, I'm developing a small Python-like language using flex and byacc (for lexing and parsing) and C++, but I have a few questions regarding scope control. Just like Python, it uses whitespace (or tabs) for indentation. Not only that, but I want to implement indexed breaking: for instance, typing "break 2" inside a while loop that's inside another while loop would break not only from the inner loop but from the outer one as well (hence the number 2 after break), and so on. Example:

        while 1
            while 1
                break 2
            end
        end
        # after break 2 it would jump right here

    But since I don't have an "anti-tab" character to check when a scope ends (unlike C, for example, where I would just use the '}' character), I was wondering if this method would be best: I would define a global variable, say int tabIndex, in my yacc file, which I would access in my lex file using extern. Every time I find a tab character in my lex file I would increment that variable by 1. When parsing in my yacc file, if I find a "break" keyword I would decrement tabIndex by the amount typed after it, and if I reach EOF after compiling with tabIndex != 0, I would output a compilation error.

    Now the problem is: what's the best way to see if the indentation was reduced? Should I read \b (backspace) characters from lex and then decrement the tabIndex variable (when the user doesn't use break)? Or is there another method to achieve this? Also, just one more small question: I want every executable to have its starting point in a function called start(). Should I hardcode this into my yacc file?

    Sorry for the long question; any help is greatly appreciated. Also, if someone could provide a yacc file for Python as a guideline, that would be nice (I tried looking on Google and had no luck). Thanks in advance.
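    For context, a sketch of the approach most Python-style lexers take instead of counting tabs globally (all names here are illustrative): keep a stack of indentation widths and emit synthetic INDENT/DEDENT tokens as the leading whitespace grows or shrinks. Backspace characters never appear in source text, so they cannot signal a closing scope.

        enum { TOK_NONE, TOK_INDENT, TOK_DEDENT };

        static int indent_stack[64] = {0}; /* widths of enclosing blocks */
        static int indent_top = 0;
        static int pending_dedents = 0;    /* one line may close several blocks */

        /* Call repeatedly at the start of each line, with the width of its
         * leading whitespace, until it returns TOK_NONE. */
        static int indent_token(int width)
        {
            if (pending_dedents > 0) { pending_dedents--; return TOK_DEDENT; }
            if (width > indent_stack[indent_top]) {
                indent_stack[++indent_top] = width;
                return TOK_INDENT;
            }
            while (indent_top > 0 && width < indent_stack[indent_top]) {
                indent_top--;        /* a real lexer also rejects widths that   */
                pending_dedents++;   /* match no enclosing level (bad indent)   */
            }
            if (pending_dedents > 0) { pending_dedents--; return TOK_DEDENT; }
            return TOK_NONE;         /* same level: no token */
        }

    With this in place, "break 2" becomes a purely grammatical matter: the parser just checks that the number after break does not exceed the current loop-nesting depth, with no global tab counter needed.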

    Read the article

< Previous Page | 66 67 68 69 70 71 72 73 74 75 76 77  | Next Page >