Search Results

Search found 3183 results on 128 pages for 'david lazar'.

Page 11/128 | < Previous Page | 7 8 9 10 11 12 13 14 15 16 17 18  | Next Page >

  • Distinctly LINQ – Getting a Distinct List of Objects

    - by David Totzke
    Let’s say that you have a list of objects that contains duplicate items and you want to extract a subset of distinct items. This is pretty straightforward in the trivial case where duplicate objects are considered the same, such as in the following example:

        List<int> ages = new List<int> { 21, 46, 46, 55, 17, 21, 55, 55 };
        IEnumerable<int> distinctAges = ages.Distinct();
        Console.WriteLine("Distinct ages:");
        foreach (int age in distinctAges)
        {
            Console.WriteLine(age);
        }
        /* This code produces the following output:
           Distinct ages: 21 46 55 17 */

    What if you are working with reference types instead? Imagine a list of search results where items in the results, while unique in and of themselves, also point to a parent. We’d like to be able to select a bunch of items in the list but then see only a distinct list of parents. Distinct isn’t going to help us much on its own, as all of the items are distinct already. Perhaps we can create a class with just the information we are interested in, like the Id and Name of the parents.

        public class SelectedItem
        {
            public int ItemID { get; set; }
            public string DisplayName { get; set; }
        }

    We can then use LINQ to populate a list containing objects with just the information we are interested in, and then get rid of the duplicates:

        IEnumerable<SelectedItem> list =
            (from item in ResultView.SelectedRows.OfType<Contract.ReceiptSelectResults>()
             select new SelectedItem { ItemID = item.ParentId, DisplayName = item.ParentName })
            .Distinct();

    Most of you will have guessed that this didn’t work. Even though some of our objects are now duplicates, because we are working with reference types it doesn’t matter that their properties are the same; they’re still considered unique. What we need is a way to define equality for the Distinct() extension method.

    IEqualityComparer<T>

    Looking at the Distinct method, we see that there is an overload that accepts an IEqualityComparer<T>. We can simply create a class that implements this interface, which allows us to define equality for our SelectedItem class:

        public class SelectedItemComparer : IEqualityComparer<SelectedItem>
        {
            public bool Equals(SelectedItem abc, SelectedItem def)
            {
                return abc.ItemID == def.ItemID && abc.DisplayName == def.DisplayName;
            }

            public int GetHashCode(SelectedItem obj)
            {
                string code = obj.DisplayName + obj.ItemID.ToString();
                return code.GetHashCode();
            }
        }

    In the Equals method we simply do whatever comparisons are necessary to determine equality and then return true or false. Take note of the implementation of the GetHashCode method: GetHashCode must return the same value for two different objects if our Equals method says they are equal. Get this wrong and your comparer won’t work; even though the Equals method returns true, mismatched hash codes will cause the comparison to fail. For our example, we simply build a string from the properties of the object and then call GetHashCode() on that.

    Now all we have to do is pass an instance of our IEqualityComparer<T> to Distinct and all will be well:

        IEnumerable<SelectedItem> list =
            (from item in ResultView.SelectedRows.OfType<Contract.ReceiptSelectResults>()
             select new SelectedItem { ItemID = item.dahfkp, DisplayName = item.document_code })
            .Distinct(new SelectedItemComparer());

    Enjoy.

    Dave
    Just because I can…
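    A side note: C# anonymous types come with compiler-generated, value-based Equals and GetHashCode, so projecting into an anonymous type sidesteps the comparer entirely. A minimal sketch, assuming the same ParentId/ParentName properties as above:

        var distinctParents = ResultView.SelectedRows
            .OfType<Contract.ReceiptSelectResults>()
            .Select(item => new { item.ParentId, item.ParentName })  // anonymous type: value equality
            .Distinct();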

    Read the article

  • How Hard Can It Be?

    - by David Totzke
    I mean seriously. Let’s imagine for a moment that by some stroke of luck or genius or cosmic accident you come to be the owner of sex.com. You’d think you had won the lottery. That would be like having a license to print money. I mean really. Sex is the most searched term on the entire Internet. Even without any SEO you’d think that your site would show up on the first page of results on Google. You would think that; and you’d be wrong. At least in the case of the current owners of that domain name, anyway. The details can be found here, but suffice it to say that Escom LLC has managed to fuck it up. They’ve been forced into bankruptcy by their creditors. Something doesn’t smell quite right with the whole thing. Some guy named Mike Mann (please God, don’t let it be this Mike Mann) is an investor in all three creditors. WTF? Seriously. How hard can it be?

    Dave
    Just because I can…

    Read the article

  • Using Oracle ADF Data Visualization Tools (DVT) Line Graphs to Display Weather Information

    - by Christian David Straub
    Overview

    A guest post by Jeanne Waldman.

    I have a simple JDeveloper Fusion application that retrieves weather data. I wanted to compare the week's temperatures of different locations in a graph. I decided to check out the dvt:lineGraph component, and it took me a few minutes to add it to my jspx page and supply it with data.

    Drag and drop the dvt:lineGraph onto your page

    I opened my .jspx page in design mode. In the Component Palette, I selected ADF Data Visualization. Then I dragged 'Line' onto my page. A dialog popped up giving me options for the type of line graph. I chose the default. A lineGraph displayed with some default data.

    Hook up your weather data

    Now I wanted to hook up my own data. I browsed the tagdoc, and I found the tabularData attribute.

    Attribute: tabularData
    Type: java.util.List
    TagDoc: Specifies a list of data that the graph uses to create a grid and populate itself. The List consists of a three-member Object array for each data value to be passed to the graph. The members of each array must be organized as follows:
    - The first member (index 0) is the column label, in the grid, of the data value. This is generally a String. If the graph has a time axis, then this should be a Java Date. Column labels typically identify groups in the graph.
    - The second member (index 1) is the row label, in the grid, of the data value. This is generally a String. Row labels appear as series labels in the graph (usually in the legend).
    - The third member (index 2) is the data value, which is usually a Double.

    For my application, the first member is the column label of the data value; this would be the day of the week. The second member is the row label of the data value; this would be the location name. The third member is the data value, usually a Double; this would be the temperature. I already had all this information; I just needed to put it in a List with a three-member Object array for each data value.

        /**
         * This is used for the lineGraph to show the data for each location.
         */
        public List<Object[]> getTabularData()
        {
            List<Object[]> tabularData = new ArrayList<Object[]>();
            List<WeatherForecast> weatherForecastList = getWeatherForecastList();
            // loop through the list and build up the tabular data. Then cache it.
            for (WeatherForecast wf : weatherForecastList)
            {
                List<ForecastDay> forecastDayList = wf.getForecastDayList();
                String location = wf.getLocation();
                for (ForecastDay fday : forecastDayList)
                {
                    String day = fday.getPrettyDate();
                    String highTemp = fday.getHighF();
                    tabularData.add(new Object[]{day, location, Double.valueOf(highTemp)});
                }
            }
            return tabularData;
        }

    Now I bound the lineGraph to this method by setting tabularData to #{weatherForAllLocationsBean.tabularData}. weatherForAllLocationsBean is my bean that is defined in faces-config.xml.

    Adding a barGraph

    In about 30 seconds, I added a barGraph with the same data. I dragged and dropped a bar graph onto the page and used the same tabularData as I did in the line graph.

    Conclusion

    I was very happy with how fast it was to hook up my weather data to these graphs. They look great, and they have built-in functionality. For instance, I can hide/show a location by clicking on the name of the location in the legend.
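    For reference, the binding described above boils down to something like the following tag in the .jspx page (a sketch; the id and any other attributes the drag-and-drop generates are illustrative assumptions):

        <dvt:lineGraph id="lineGraph1"
                       tabularData="#{weatherForAllLocationsBean.tabularData}"/>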

    Read the article

  • push email / email server tutorial

    - by David A
    Does anyone happen to know the current status of push email in the Linux world? From my searching so far I have seen Z-push http://www.ifusio.com/blog/setup-your-own-push-mail-server-with-z-push-on-debian-linux and https://peterkieser.com/2011/03/25/androids-k-9-mail-battery-life-and-dovecots-push-imap/ Are there other solutions? Does anyone have any experience with these? They're somewhat different, in that Z-push seems to work in conjunction with an existing IMAP server. Some time ago I did manage to compile and build Dovecot 2 (since only Dovecot 1 was available in the Ubuntu repos at the time). It would have been a real fluke because I had no idea what I was doing, but it seemed to work well with my mobile phone; that said, I can't say for sure that it was pushing, but it seemed like it. Anyway, I'm here again and looking to set up a mail server. I'm hoping to do a better job this time around, with virtual users and such. Without installing ispconfig3 (or something similar), does anyone have any recent email server tutorials (that cover all aspects: MTA, MDA, ...) that can supply push email on an Ubuntu 12.04 server? (I'm probably of slightly above newb status, but not far) Thanks a bunch
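    For context on the Dovecot side: the "push" that K-9 Mail uses is plain IMAP IDLE, which Dovecot 2 enables by default. A minimal dovecot.conf sketch (the values here are illustrative assumptions, not a vetted mail-server config):

        # illustrative sketch only - not a complete mail server configuration
        protocols = imap
        mail_location = maildir:~/Maildir       # assumes Maildir under each user's home
        # IDLE is on by default; this controls how often idling clients are pinged
        imap_idle_notify_interval = 2 mins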

    Read the article

  • How can I use GPRename's regex feature to reinsert the matched-group into the 'replace'?

    - by David Thomas
    I've been using GPRename to batch-rename files; this is rather more efficient than individually correcting each file, but still seems to be less efficient than it might be, primarily because either I don't understand the regex syntax used, or the regex implementation is incomplete. Given a list of files of the following form:

        (01) - title of file1.avi
        (02) - title of file2.avi
        (03) - title of file3.avi

    I attempted to use the 'replace' (with the regex option selected, the case-sensitive option deselected):

        (\(\d{2}\))

    The preview then shows (given that I've specified no 'replace with' option as yet):

        title of file1.avi
        title of file2.avi
        title of file3.avi

    Which is great: clearly the regex is identifying the correct group (the (01)). Now, what I was hoping to do (using the JavaScript syntax) in the 'replace with' option is use:

        $1

    (I also tried using '$1', \1 and '\1'.) This was just to check that I could access the matched group, and it seems I can't; the matched group is, as I suppose might be expected, replaced with the literal replacement string. So, my question: is it possible to match a particular group of characters, in this case the numbers within the brackets, and then insert those into the replacement string? Therefore:

        (01) title of file1.avi
        (02) title of file2.avi
        (03) title of file3.avi

    Becomes:

        01 title of file1.avi
        02 title of file2.avi
        03 title of file3.avi

    I absolutely suspect the former, personally.
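    For what it's worth, the desired rename is expressible with standard backreference syntax; a sketch in Java illustrating the mechanism (not GPRename's own dialect):

        public class RenameSketch {
            public static void main(String[] args) {
                String[] names = {
                    "(01) - title of file1.avi",
                    "(02) - title of file2.avi",
                    "(03) - title of file3.avi"
                };
                for (String name : names) {
                    // capture the two digits, drop the parentheses and the " - " separator
                    String fixed = name.replaceAll("^\\((\\d{2})\\) - ", "$1 ");
                    System.out.println(fixed);  // e.g. "01 title of file1.avi"
                }
            }
        }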

    Read the article

  • Fast Fashion Freshness

    - by David Dorf
    Fashion retailers such as H&M, Zara, and Wet Seal have perfected the fast fashion retailing model. The concept requires no replenishment in order to maintain assortment freshness and to create a sense of urgency for the consumer to purchase now. However, maintaining assortment freshness results in high product turnover, making markdown optimization a necessity. Wet Seal, for instance, needed to move from ad-hoc markdowns and dealing with surplus inventory to handling markdowns methodically across 8,000 SKUs with only a 12-15 week lifecycle (from DC receipt to exit). By optimizing and automating markdowns, Wet Seal is reaching its goal of assortment freshness, which in turn increases sales. If you're interested in learning more, register for a free webinar on May 13th featuring Daniel Ryu, Vice President of Planning and Allocation at Wet Seal. He'll be discussing how the fast fashion retailer maintains its goal of assortment freshness.

    Read the article

  • Post-Purchase Social Media

    - by David Dorf
    When you make a particularly good purchase, the natural tendency is to share the experience with friends. You show them your cool new toy or garment, then explain how you discovered such a great deal, all the while implying you are the world's most savvy shopper. My wife does it with clothes, housewares, and books, and I do it with whiz-bang techie stuff. Post-purchase euphoria or buyer's remorse accompanies most purchases beyond day-to-day needs. So now let's add social media to the mix. Haul videos are a YouTube phenomenon where a shopper describes their latest haul on video. Blair Fowler, aka juicystar07, is an excellent example. She and her older sister's haul videos have been viewed 75,000,000 times, at times causing particular items to sell out after being showcased. If you're not already on this bandwagon, check out Blair's haul video from her trip to Forever21. There are a couple of good articles on this trend from ABC's GMA, Slate, and NPR. Some retailers are already sending free products to these fashionistas in the hopes they'll be reviewed on camera. For those less willing to exert themselves, there's Blippy, a service that automatically tweets your purchases. Similar to Twitter, your purchases are tweeted so your friends can see what you've purchased and your network can make comments. In the example to the right, co-founder Philip Kaplan purchased a gift for his wife from the store Does Your Mother Know, proving the point that the need for privacy is overblown. Blippy has partnerships with selected merchants like Apple, Amazon, and Netflix, and can also get purchases from the credit cards you've registered. When you register, you can configure whether to automatically tweet each purchase or approve them first. No sense in broadcasting my need for Rogaine, right? This is a good thing for retailers, as it helps spread the word about purchases and gives other people ideas. Rick just bought an ooma from Amazon. What the heck is ooma? Oh, it's like Vonage but with no monthly bills. I'm there.

    Read the article

  • How is the Linux repository administered?

    - by David
    I am amazed by the Linux project and I would like to learn how they administer the code, given the huge number of developers. I found the Linux repository on GitHub, but I do not understand how it is administered. For example, take the following commit: https://github.com/torvalds/linux/commit/31fd84b95eb211d5db460a1dda85e004800a7b52 Notice that one developer authored the change while Torvalds committed it. How is this possible? I thought that it was only possible to have either pull or push rights, but here it seems like there is an approval stage. I should mention that the specific problem I am trying to solve is that we use pull requests on our repo. The problem we are facing is that while a pull request is waiting to get merged, it is often broken by a commit. This leads to seemingly never-ending work to adapt the fork in order to make the pull request merge smoothly. Does Linux solve this by giving lots of people push rights? (At least there are currently just three pull requests but hundreds of commits per day.)
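    For background, Git itself records separate author and committer fields on every commit, which is what the GitHub page is surfacing. A quick way to see this with standard git commands (the patch file name below is hypothetical):

        # show author vs. committer for the last few commits
        git log -3 --format='author:    %an <%ae>%ncommitter: %cn <%ce>%n'

        # applying a mailed patch keeps the original author,
        # while the maintainer doing the apply becomes the committer
        git am 0001-some-fix.patch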

    Read the article

  • Using Completed User Stories to Estimate Future User Stories

    - by David Kaczynski
    In Scrum/Agile, the complexity of a user story can be estimated in story points. After completing some user stories, a programmer or team of programmers can use those experiences to better estimate how much time it might take to complete a future user story. Is there a methodology for breaking down the complexity of user stories into quantifiable attributes? For example, User Story X requires a rich, new view in the GUI, but User Story X can perform most of its functionality using existing business logic on the server. On a scale of 1 to 10, User Story X has a complexity of 7 on the client and a complexity of 2 on the server. After User Story X is completed, someone asks how long it would take to complete User Story Y, which has a complexity of 3 on the client and 6 on the server. Looking at how long it took to complete User Story X, we can make an educated estimate of how long it might take to complete User Story Y. I can imagine some other details: The complexity of one attribute (such as complexity of the client) could have sub-attributes, such as number of steps in a sequence, function points, etc. Several other attributes could be considered as well, such as the programmer's familiarity with the system or the number of components/interfaces involved. These attributes could be accumulated into some sort of user story checklist. To reiterate: is there an existing methodology for decomposing the complexity of a user story into the complexity of attributes/sub-attributes, or is using completed user stories as indicators in estimating future user stories more of an informal process?
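    To make the arithmetic concrete: with two completed stories you can solve for a per-point rate for each attribute and project those rates onto a new story. A sketch with made-up numbers (the second completed story and all hours are hypothetical):

        public class StoryEstimate {
            public static void main(String[] args) {
                // completed stories: {client complexity, server complexity, actual hours}
                double[] storyX = {7, 2, 40};  // hypothetical actuals
                double[] storyZ = {2, 5, 25};

                // solve clientRate*client + serverRate*server = hours (Cramer's rule)
                double det = storyX[0] * storyZ[1] - storyZ[0] * storyX[1];
                double clientRate = (storyX[2] * storyZ[1] - storyZ[2] * storyX[1]) / det;
                double serverRate = (storyX[0] * storyZ[2] - storyZ[0] * storyX[2]) / det;

                // project onto User Story Y: client 3, server 6
                double estimateY = 3 * clientRate + 6 * serverRate;
                System.out.printf("client %.2f h/pt, server %.2f h/pt -> Y ~ %.1f hours%n",
                                  clientRate, serverRate, estimateY);
            }
        }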

    Read the article

  • Ubuntu Installation Help for an IBM R31 Thinkpad

    - by David Taylor
    I recently acquired an old IBM R31 Thinkpad, and I figured I'd install Lubuntu on it. I've followed the quick steps for USB installation on the help wiki page, but I can't seem to get it to boot from my formatted flash drive. I've checked the boot priority in the BIOS, but the option to boot from USB doesn't even seem to be there; the only bootable options are legacy and USB floppy drives. The CD drive is shot, so I can't install from there either. Do I have any other options for installation without having to pay for a floppy drive or a replacement CD drive? The wiki page mentions something about installation from within Windows. Would it be possible to remove Windows using this option, or would it just create a partition? Thanks

    Read the article

  • ODI 12c - Parallel Table Load

    - by David Allan
    In this post we will look at the ODI 12c capability of parallel table load, from the aspect of the mapping developer and the knowledge module developer - two quite different viewpoints. This is about parallel table loading, which isn't to be confused with loading multiple targets per se. It supports the ability for ODI mappings to be executed concurrently, especially if there is an overlap of the datastores that they access, so any temporary resources created may be uniquely constructed by ODI. Temporary objects can be anything, basically - common examples are staging tables, indexes, views, directories - anything in the ETL to help the data integration flow do its job. In ODI 11g users found a few workarounds (such as changing the technology prefixes - see here) to build unique temporary names, but it was more of a challenge in error cases. ODI 12c mappings by default operate exactly as they did in ODI 11g with respect to these temporary names (this is also true for upgraded interfaces and scenarios), but they can be configured to support the uniqueness capabilities. We will look at this feature from two aspects: that of a mapping developer and that of a developer (of procedures or KMs).

    1. Firstly as a Mapping Developer.....

    1.1 Control when uniqueness is enabled
    A new property is available to set unique name generation on/off. When unique names have been enabled for a mapping, all temporary names used by the collection and integration objects will be generated using unique names. This property is presented as a check-box in the Property Inspector for a deployment specification.

    1.2 Handle cleanup after successful execution
    Provided that all temporary objects that are created have a corresponding drop statement, all of the temporary objects should be removed during a successful execution. This should be the case with the KMs developed by Oracle.

    1.3 Handle cleanup after unsuccessful execution
    If an execution failed in ODI 11g, temporary tables would have been left around and cleaned up in the subsequent run. In ODI 12c, KM tasks can now have a cleanup-type task which is executed even after a failure in the main tasks. These cleanup tasks will be executed even on failure if the property 'Remove Temporary Objects on Error' is set. If the agent were to crash and not be able to execute this task, there is an ODI tool (OdiRemoveTemporaryObjects here) you can invoke to clean up the tables - it supports date ranges and the like.

    That's all there is to it from the aspect of the mapping developer - it's much, much simpler and more straightforward. You can now execute the same mapping concurrently, or execute many mappings using the same resource concurrently, without worrying about conflict.

    2. Secondly as a Procedure or KM Developer.....

    In the ODI Operator the executed code shows the actual name that is generated - you can also see the runtime code prior to execution (introduced in 11.1.1.7). For example, with the code type 'Pre-executed Code' selected you can see the code about to be processed, and you can also see the executed code (which is the default view). References to the collection (C$) and integration (I$) names will be automatically made unique by using the odiRef APIs - these objects will have unique names whenever concurrency has been enabled for a particular mapping deployment specification. It's also possible to use name uniqueness functions in procedures and your own KMs.

    2.1 New uniqueness tags
    You can also make your own temporary objects have unique names by explicitly including either %UNIQUE_STEP_TAG or %UNIQUE_SESSION_TAG in the name passed to calls to the odiRef APIs. Such names will always include the unique tag, regardless of the concurrency setting. To illustrate, let's look at the getObjectName() method. At <% expansion time, this API will append %UNIQUE_STEP_TAG to the object name for collection and integration tables. The name parameter passed to this API may contain %UNIQUE_STEP_TAG or %UNIQUE_SESSION_TAG. This API always generates to the <? version of getObjectName(). At execution time this API will replace the unique tag macros with a string that is unique to the current execution scope. The returned name will conform to the name-length restriction for the target technology, and its pattern for the unique tag. Any necessary truncation will be performed against the initial name for the object and any other fixed text that may have been specified. Examples:

        <?=odiRef.getObjectName("L", "%COL_PRFEMP%UNIQUE_STEP_TAG", "D")?>
        SCOTT.C$_EABH7QI1BR1EQI3M76PG9SIMBQQ

        <?=odiRef.getObjectName("L", "EMP%UNIQUE_STEP_TAG_AE", "D")?>
        SCOTT.EMPAO96Q2JEKO0FTHQP77TMSAIOSR_

    Methods which have this kind of support include getFrom, getTableName, getTable, getObjectShortName and getTemporaryIndex. There are also APIs for retrieving this tag info; the getInfo API has been extended with the following properties (the UNIQUE* properties can also be used in ODI procedures):

    UNIQUE_STEP_TAG - Returns the unique value for the current step scope, e.g. 5rvmd8hOIy7OU2o1FhsF61. Note that this will be a different value for each loop-iteration when the step is in a loop.
    UNIQUE_SESSION_TAG - Returns the unique value for the current session scope, e.g. 6N38vXLrgjwUwT5MseHHY9
    IS_CONCURRENT - Returns info about the current mapping; returns 0 or 1 (only in % phase)
    GUID_SRC_SET - Returns the UUID for the current source set/execution unit (only in % phase)

    The getPop API has been extended with the IS_CONCURRENT property, which returns info about a mapping; it returns 0 or 1.

    2.2 Additional APIs
    Some new APIs are provided, including getFormattedName, which allows KM developers to construct a name from fixed text or ODI symbols that can optionally be truncated to a max length and use a specific encoding for the unique tag. It has the syntax:

        getFormattedName(String pName[, String pTechnologyCode])

    This API is available at both the % and the ? phase. The format string can contain the ODI prefixes that are available for getObjectName(), e.g. %INT_PRF, %COL_PRF, %ERR_PRF, %IDX_PRF, along with %UNIQUE_STEP_TAG or %UNIQUE_SESSION_TAG. The latter tags will be expanded into a unique string according to the specified technology. Calls to this API within the same execution context are guaranteed to return the same unique name, provided that the same parameters are passed to the call. e.g.

        <%=odiRef.getFormattedName("%COL_PRFMY_TABLE%UNIQUE_STEP_TAG_AE", "ORACLE")%>
        <?=odiRef.getFormattedName("%COL_PRFMY_TABLE%UNIQUE_STEP_TAG_AE", "ORACLE")?>
        C$_MY_TAB7wDiBe80vBog1auacS1xB_AE

        <?=odiRef.getFormattedName("%COL_PRFMY_TABLE%UNIQUE_STEP_TAG.log", "FILE")?>
        C2_MY_TAB7wDiBe80vBog1auacS1xB.log

    2.3 Name length generation
    As part of name generation, the length of the generated name will be compared with the maximum length for the target technology, and truncation may need to be applied. When a unique tag is included in the generated string, it is important that uniqueness is not compromised by truncation of the unique tag. When a unique tag is NOT part of the generated name, the name will be truncated by removing characters from the end - this is the existing 11g algorithm. When a unique tag is included, the algorithm will first truncate the <postfix> and, if necessary, the <prefix>. It is recommended that users ensure there is sufficient uniqueness in the <prefix> section to guarantee uniqueness of the final resultant name.

    SUMMARY
    To summarize, ODI 12c makes it much simpler to utilize mappings in concurrent cases, and provides APIs to help develop procedures or custom knowledge modules in such a way that they can be used in highly concurrent, parallel scenarios.

    Read the article

  • The Disloyalty Card

    - by David Dorf
    Let's take a break from technology for a second; please indulge me. (That's for you, Erick.) A few months back, James Hoffmann reported that Gwilym Davies, the 2009 World Barista Champion, had implemented a rather unique idea for his cafe: the disloyalty card. His card lists eight nearby cafes in London that the cardholder must visit and try a coffee. After sampling all eight and collecting the required stamps, Gwilym provides a free coffee from his shop. His idea sends customers to his competitors. What does this say about Gwilym? First, it tells me he's confident in his ability to make a mean cup of java. Second, it tells me he's truly passionate about his trade. But was this a sound business endeavor? Obviously the risk is that one of his loyal customers might just find a better product at a competitor and not return. But the goal isn't really to strengthen his customer base -- it's to strengthen the market, which will in turn provide more customers over the long run. This idea seems great for frequently purchased products like restaurants, bars, bakeries, music, and of course, cafes. It's probably not a good idea for high-priced merchandise or infrequently purchased items like shoes, electronics, and housewares. Nevertheless, it's a great example of thinking in reverse. Try this: instead of telling your staff how you want customers treated, list out the ways you don't want customers treated. Why should you limit people's imagination and freedom to engage customers? Instead, give them guidelines to avoid the bad behavior, and leave them open to be creative with the positive behavior. Instead of asking the question, "how can we get more people into our stores?" try asking the inverse: "why aren't people visiting our stores?" Innovation doesn't only come from asking "why?" Often it comes from asking "why not?"

    Read the article

  • Measuring Social Media Efforts

    - by David Dorf
    So you're on the bandwagon: you've created a Facebook page, you're tweeting every day, and maybe you've even got a YouTube channel. Now what? After you put any program in place, you need to measure, set new goals, then execute, and this is no different. But how does one measure social media efforts? First, I guess we need some goals. Typical ones might be to acquire customers, engage them, then convert them. That translates to:

    - Increase Facebook fans and Twitter followers
    - Increase comments/postings and retweets
    - Increase redemption of offers via Facebook and Twitter

    Counting fans and followers is easy, and tracking the redemption of coupons isn't that hard either, but measuring engagement is a tough one. How do you know whether your fans are reading your posts, and whether your posts have any meaning to them? For Facebook, the fan page administrator has access to analytics called Facebook Insights. There you can check weekly metrics such as total fans, new fans, lost fans, demographics of fans, number of postings, number of clicks, etc. Not nearly as comprehensive as Google Analytics, but well on its way. For Twitter, getting information is a little tougher. Again, it's easy to track followers, and you can use tools like TweetMeme to encourage and track retweets. An interesting website called WeFollow tries to measure influence for certain topics. For example, the top three influencers for the topic "retail" are retailweek, retailwire, and retailerdaily. Other notables are #10 BestBuy, #11 GapOfficial, #12 JeffPR, and #17 OracleRetail. I assume influence is calculated based on number of followers, number of retweets, frequency of tweets, and perhaps depth of dialogs. If you want to get serious about monitoring and measuring social marketing efforts, you'd be wise to invest in a strong tool. Several are listed on this wiki, including big ones like Radian6, Nielsen, Omniture, and Buzzient. Buzzient might be particularly interesting because it's integrated with Oracle CRM OnDemand -- see the demo. As always, I'm interested in hearing how others approach goal setting and monitoring of social media efforts, so feel free to post comments.

    Read the article

  • ODI and OBIEE 11g Integration

    - by David Allan
    Here we will see some of the connectivity options to OBIEE 11g using the JDBC driver. You'll see, based upon some connection properties, how the physical or presentation layers can be utilized. In the integrator's guide for OBIEE 11g you will find a brief statement indicating that there actually is a JDBC driver for OBIEE. In OBIEE 11g it's now possible to connect directly to the physical layer; Venkat has an informative post here on this topic. In ODI 11g the Oracle BI technology is shipped with the product, along with KMs for reverse engineering and for using OBIEE models as a data source.

    When you install OBIEE 11g, a lightweight demonstration application is preinstalled in the server; when you open this in the BI Administration tool you see the regular 3-panel view within the administration tool. To interrogate this system via JDBC (just like ODI does using the KMs) we need a couple of things: the JDBC driver from OBIEE 11g, a Java client program, and the credentials.

    In my Java client program I want to connect to the OBIEE system; when I connect I can interrogate what the JDBC driver presents for the metadata. The metadata projected via the JDBC connection's DatabaseMetaData changes depending on whether the property NQ_SESSION.SELECTPHYSICAL is set when the Java client connects. Let's use the sample app to illustrate. I have a Java client program here that will print out the tables in the DatabaseMetaData; it will also output the catalog and schema. For example, if I execute without any special JDBC properties, as follows:

        java -classpath .;%BIHOMEDIR%\clients\bijdbc.jar meta_jdbc oracle.bi.jdbc.AnaJdbcDriver jdbc:oraclebi://localhost:9703/ weblogic mypass

    then I get the following returned, representing the presentation layer (the sample I used is XML and has no schema):

        Catalog             | Schema | Table
        Sample Sales Lite   | null   | Base Facts
        Sample Sales Lite   | null   | Calculated Facts
        …                   |        |
        Sample Targets Lite | null   | Base Facts
        …                   |        |

    Now if I execute with the only difference being the JDBC property NQ_SESSION.SELECTPHYSICAL with the value Yes:

        java -classpath .;%BIHOMEDIR%\clients\bijdbc.jar meta_jdbc oracle.bi.jdbc.AnaJdbcDriver jdbc:oraclebi://localhost:9703/ weblogic mypass NQ_SESSION.SELECTPHYSICAL=Yes

    then I see a different set of values, representing the physical layer in OBIEE:

        Catalog               | Schema | Table
        Sample App Lite Data  | null   | D01 Time Day Grain
        Sample App Lite Data  | null   | F10 Revenue Facts (Order grain)
        …                     |        |
        System DB (Update me) |        |
        …                     |        |

    If this were a database system such as Oracle, the catalog value would be the OBIEE database name and the schema would be the Oracle database schema. Other systems which have a real catalog structure, such as SQL Server, would use their catalog value. It's this 'Catalog' and 'Schema' value that is important when integrating OBIEE with ODI.

    For the demonstration application in OBIEE 11g, the illustration shows how the information from OBIEE is related via the JDBC driver through to ODI. In the XML example above, within ODI's physical schema definition on the right, we leave the schema blank since the XML data source has no schema. When I did this at first, I left the default value that ODI places in the Schema field, which was '<Undefined>', but this string is actually used in the RKM, so it ended up not finding any tables in this schema! Entering an empty string resolved this. Below we see a regular Oracle database example that has the database, schema, and physical table structure, and how this is defined in ODI.

    Remember back to the physical versus presentation layer usage when we passed the special property? To do this in ODI, the data server has a panel for properties where you can define key/value pairs. So if you want to select physical objects from the OBIEE server, you must set this property. An additional change in ODI 11g is the OBIEE connection pool support; this has been implemented via a 'Connection Pool' flex field for the Oracle BI data server. Here you set the connection pool name from the OBIEE system that you specifically want to use; this is used by the Oracle BI to Oracle (DBLINK) LKM, so if you are using that KM you must set this flex field. Hopefully a useful insight into some of the mechanics of how this all hangs together.
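    As a footnote, a minimal version of the kind of client described above might look like this. A sketch: the driver class, URL, and credentials are the ones given in the post, the property handling is an assumption, and bijdbc.jar must be on the classpath:

        import java.sql.*;
        import java.util.Properties;

        public class MetaJdbc {
            public static void main(String[] args) throws Exception {
                Properties props = new Properties();
                props.put("user", "weblogic");
                props.put("password", "mypass");
                // uncomment to interrogate the physical layer instead of presentation
                // props.put("NQ_SESSION.SELECTPHYSICAL", "Yes");
                Class.forName("oracle.bi.jdbc.AnaJdbcDriver");
                try (Connection c = DriverManager.getConnection("jdbc:oraclebi://localhost:9703/", props);
                     ResultSet rs = c.getMetaData().getTables(null, null, "%", null)) {
                    while (rs.next()) {
                        System.out.println(rs.getString("TABLE_CAT") + "\t"
                                + rs.getString("TABLE_SCHEM") + "\t"
                                + rs.getString("TABLE_NAME"));
                    }
                }
            }
        }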

    Read the article

  • Bump the Bill

    - by David Dorf
    I'm writing this from 3,400 feet in the air, somewhere between Chicago and Austin. GoGo In-flight strikes again. Is there anywhere I can't get a WiFi connection? While listening to Deacon Blues by Steely Dan and skimming the news, I just came across an interesting article on mobile payments. Remember when I wrote about the iPhone Bump application and its possible use in retail? Well, it looks like PayPal updated their mobile payments application to include the bump technology. Now it's possible to transfer money between individuals by bumping iPhones. According to the WSJ, PayPal did 24 million transactions in 2008 and 140 million in 2009 on mobile phones. As the technology gets easier to use, that number is bound to increase. Alternatives to PayPal include Google Checkout, Amazon Payments, wireless carriers ("put it on my phone bill"), smart cards (using your phone's SIM card), and iTunes. That last one comes courtesy of a story Joe Skorupa wrote on mobile payments. It looks like Apple allows iPhone apps to take micro-payments via iTunes accounts, so there may come a time when it's possible to use your iPhone to make a purchase in a retail store and have your credit card charged via your iTunes account. There are still some improvements in usability to be made before using a phone will be easier than swiping a credit card, but it's already better than fussing with cash.

    Read the article

  • More Mobile Payments

    - by David Dorf
    In the previous post I discussed Bump payments from PayPal, but that's not the only innovative way to make purchases using your phone. Verizon recently announced a partnership with Danal that allows shoppers to charge online purchases to their Verizon bill. For e-commerce sites that accept this type of payment, it's a two-step process. At checkout, the shopper enters their mobile number and billing zip code. Then an SMS message is sent to the mobile phone containing a one-time code that must be entered on the e-commerce site. This two-factor authentication seems pretty secure, and no pre-registration or credit card is necessary. There's a $25-a-month maximum, but I bet the limit gets raised as Verizon gets more comfortable with the security. Merchants are charged a fee similar to credit card fees. Another example of mobile payments is offered by BlingNation. Customers attach a small NFC sticker to their phones that allows them to "tap" the POS device to make a payment. The NFC chip is connected to their checking account, so the transaction is treated as a debit payment. Text messages are sent to the mobile phone to confirm the payments, so shoppers can easily verify their purchases. BlingNation is working with banks like Adirondack Trust Company and The State Bank of La Junta in Colorado. Heck, you can even send money to inmates in the Arkansas prison system using your mobile phone, now that the state of Arkansas supports payments via their mobile website. Everyone is getting into the act now.

    Read the article

  • Tween Animation Cannot Start

    - by David Dimalanta
    Does anyone have any idea why my tween code didn't run or work? I already added the tween engine to the library folder under the LibGDX project folder and "Order and Export"-ed it under Java Build Path in the Properties menu. My first two classes ran and worked correctly, but my third class didn't work. Here's the sequence:

    1. First class is the first screen. Fade animation works on the company's logo.
    2. Second class is the second screen. Fade animation for the loading screen works.
    3. Third class is the third screen. After the second screen, it now calls for the third screen. The animation stopped or won't run; I want the black screen to fade out at the start, when the menu appears.

    Can you check if I did it right? The comment lines carry the explanation.

        //-----[ Animation Setup ]-----
        Tween.registerAccessor(Sprite.class, new Tween_Animation()); // --> Tween_Animation.java

        Tween_Manager = new TweenManager(); // --> I initialized the TweenManager and it seems okay.

        cb_start = new TweenCallback() // --> I'll use this when I choose START and the menu will fade in black.
        {
            @Override
            public void onEvent(int arg0, BaseTween<?> arg1)
            {
                goTo();
            }
        };

        Tween // --> This is where I focused the problem.
            .to(black_Sprite, Tween_Animation.ALPHA, 3f)
            .target(1)
            .ease(TweenEquations.easeInQuad)
            .repeatYoyo(200, 2.5f) // --> I set the repeat to 200 times when I noticed that the animation won't work!
            .start(Tween_Manager);
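    One thing the snippet never shows is the manager being stepped each frame; a tween that is started but never updated will appear to do nothing. A minimal sketch of the missing piece, assuming a standard LibGDX Screen (the batch field name is hypothetical):

        // inside the Screen's render() method -- the tween engine only advances when told to
        @Override
        public void render(float delta) {
            Gdx.gl.glClearColor(0, 0, 0, 1);
            Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);

            Tween_Manager.update(delta);   // step all active tweens by the frame time

            batch.begin();
            black_Sprite.draw(batch);      // the sprite whose alpha the tween drives
            batch.end();
        }

    Note too that .target(1) drives the alpha toward fully opaque; fading the black overlay out would be .target(0).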

    Read the article

  • OWB 11gR2 – Flexible and extensible

    - by David Allan
    The Oracle data integration extensibility capabilities are something I love; nothing is more frustrating than a tool or platform that is very constraining. I think extensibility and flexibility are invaluable capabilities in the data integration arena. I liked Uli Bethke's posting on some extensibility capabilities with ODI (see Nesting ODI Substitution Method Calls here); he has some useful guidance on making customizations to existing KMs, and it's nice to learn by example. I thought I'd illustrate the same capabilities with ODI's partner OWB for the OWB community. There is a whole new world of potential. The LKM/IKM/CKM/JKMs are the primary templates that are supported (plus the Oracle Target code template), so there is a lot of potential for customizing and extending the product in this release. Enough waffle... Diving in at the deep end from Uli's post: in OWB 11gR2 the table operator has a number of additional properties that let you annotate the column usage with ODI-like properties, such as the slowly changing usage, or for your own user-defined purpose as in Uli's post. Below you see that for the target table SALES_TARGET we can use the UD5 property; when the mapping is assigned a code template (knowledge module) modified with Uli's change, we can do custom things such as creating indices. The code template used by the mapping has the additional step, which is basically the code from Uli's posting used directly; the ODI 10g substitution references are also supported from within OWB's runtime. Now, to see whether this does what we expect before we execute it, we can check out the generated code, similar to how traditional mapping generation and preview work; you do this by clicking on the 'Inspect Code' button on the execution unit's code template assignment. This creates another tab with the prefix 'Code - <mapping name>' where the generated code is put; scrolling down, we find the last step with the indices being created. Looks good, so we are ready to deploy and execute. After executing the mapping we can then use the 'Audit Information' panel (select the mapping in the designer tree and click on View/Audit Information); this gives us a view of the execution where we can drill into the tasks that were executed and inspect both the template and the generated code that was executed, along with any potential errors. Reflecting back on earlier versions of OWB, these were the kinds of features that were always highly desirable: getting under the hood of the code generation and tweaking bits and pieces. Fun and powerful stuff! We can step it up a bit here and explore some further ideas. The example below is a daisy-chained set of execution units where the intermediate table is a target of one unit and the source for another. We want that table to be a global temporary table, so we can tweak the templates. Back in the copy of SQL Control Append (used for demo purposes), we modify the create-target-table step to make the table a global temporary table, with the option of on commit preserve rows. You can get a feel for some of the customizations and changes possible, providing great flexibility and extensibility for the data integration tools.

    Read the article

  • Oracle Database 11gR2 Now Also on Windows

    - by david.krch
    At the end of last week, Oracle Database 11g Release 2 for Windows was made available on OTN - both 32-bit and 64-bit. It joins the previously available versions for Linux, Solaris (both SPARC and x86), AIX, and HP-UX. As usual, the installation files for all of these platforms can be downloaded from OTN.

    Read the article

  • Step Aside Google

    - by David Dorf
    Step aside, Google. While search will always be a huge part of the web, I can see a day in the not-too-distant future where search takes a backseat to the social graph. Links between pages will give way to relationships between people, including context like location. What does this mean for retail? It means your e-commerce strategy will slowly transition to an f-commerce strategy. Remember when a large portion of the online population was held captive inside the walls of AOL? All the commercials listed an AOL keyword, not a web address, because that's where the majority of people surfed. Now people are spending a huge amount of time on Facebook (despite Betty White's proclamation that it's a big waste of time). According to Facebook, users spend 500 billion minutes per month on the site. Selling products where consumers are spending their time makes sense. The power of Like and Share is the most effective approach to marketing. More and more stores are popping up on Facebook, and soon they will be the front-end to e-commerce systems. As sites adopt the Facebook Open Graph API, users will have a harder time distinguishing the open web from their Facebook experience, including shopping. Of course e-commerce sites won't go away, but a large portion of their traffic will emanate from Facebook, and in some cases Facebook will act as the front-end for the web store. Ignore Facebook Open Graph at your peril. In a Mashable article, Mitchell Harper made several predictions about how e-commerce will change based on Facebook. His five points are not far-fetched at all, so we need to watch this space carefully.

    Read the article

  • Faster Applications Without Changing the Queries - Part 1 - The Impact of Frequent Commits

    - by david.krch
    I remember sitting at the Oracle Develop conference a few years ago, watching my somewhat better-known colleague Mark Townsend present a very simple demo: how one and the same operation can be performed either very slowly or very quickly. It wasn't about classic SQL tuning, but about calling those queries efficiently. The individual techniques themselves are probably familiar to everyone. What struck me most was how large the difference in performance of the same operation, performed by the same simple SQL statement, can be. I keep thinking that if more people saw that difference, they might more often push themselves to write the one or two extra lines of code that make the code an order of magnitude more efficient and faster. I tried to build a similar example, and from it came a trio of articles that were gradually published on the site Databázový svet. For some reason, the last part never made it to publication. Since then I have several times needed to point someone to the whole trio of articles, so I decided to republish them on this blog. I will show the problem and the solution in Java, but these are in fact techniques that apply no matter where you call the database from. The task is simple: insert 100,000 records into a simple table. We will gradually show that a few simple steps take us from the basic, slowest solution to one that is 80x faster.
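    To anticipate the punchline with a sketch (the JDBC URL, credentials, and table are made-up illustrations, not from the original articles): batching the inserts and committing once at the end, instead of after every row, is exactly the kind of one-or-two-line change that yields order-of-magnitude differences:

        import java.sql.*;

        public class InsertDemo {
            public static void main(String[] args) throws Exception {
                try (Connection conn = DriverManager.getConnection(
                        "jdbc:oracle:thin:@//localhost:1521/orcl", "scott", "tiger")) {
                    conn.setAutoCommit(false);  // the slow variant commits every single row
                    try (PreparedStatement ps = conn.prepareStatement(
                            "INSERT INTO demo_tab (id, val) VALUES (?, ?)")) {
                        for (int i = 0; i < 100_000; i++) {
                            ps.setInt(1, i);
                            ps.setString(2, "row " + i);
                            ps.addBatch();
                            if (i % 1000 == 999) {
                                ps.executeBatch();  // ship rows to the database in bulk
                            }
                        }
                        ps.executeBatch();          // flush the remainder
                    }
                    conn.commit();                  // one commit instead of 100,000
                }
            }
        }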

    Read the article

  • ODI 11g – Insight to the SDK

    - by David Allan
    This post is a useful index into the ODI SDK that cross-references the type names from the user interface with the SDK class, and also the finder for how to get a handle on the object or objects. The volume of content in the SDK might seem a little ominous (there is a lot there), but there is a general pattern to the SDK that I will describe here. I will also illustrate some basic CRUD operations so you can see how the SDK usage pattern works. The examples are written in Groovy; you can simply run them from the Groovy console in ODI 11.1.1.6.

    Entry to the Platform

        Object      | Finder                                    | SDK
        odiInstance | odiInstance (groovy variable for console) | OdiInstance

    Topology Objects

        Object                             | Finder                            | SDK
        Technology                         | IOdiTechnologyFinder              | OdiTechnology
        Context                            | IOdiContextFinder                 | OdiContext
        Logical Schema                     | IOdiLogicalSchemaFinder           | OdiLogicalSchema
        Data Server                        | IOdiDataServerFinder              | OdiDataServer
        Physical Schema                    | IOdiPhysicalSchemaFinder          | OdiPhysicalSchema
        Logical Schema to Physical Mapping | IOdiContextualSchemaMappingFinder | OdiContextualSchemaMapping
        Logical Agent                      | IOdiLogicalAgentFinder            | OdiLogicalAgent
        Physical Agent                     | IOdiPhysicalAgentFinder           | OdiPhysicalAgent
        Logical Agent to Physical Mapping  | IOdiContextualAgentMappingFinder  | OdiContextualAgentMapping
        Master Repository                  | IOdiMasterRepositoryInfoFinder    | OdiMasterRepositoryInfo
        Work Repository                    | IOdiWorkRepositoryInfoFinder      | OdiWorkRepositoryInfo

    Project Objects

        Object        | Finder                  | SDK
        Project       | IOdiProjectFinder       | OdiProject
        Folder        | IOdiFolderFinder        | OdiFolder
        Interface     | IOdiInterfaceFinder     | OdiInterface
        Package       | IOdiPackageFinder       | OdiPackage
        Procedure     | IOdiUserProcedureFinder | OdiUserProcedure
        User Function | IOdiUserFunctionFinder  | OdiUserFunction
        Variable      | IOdiVariableFinder      | OdiVariable
        Sequence      | IOdiSequenceFinder      | OdiSequence
        KM            | IOdiKMFinder            | OdiKM

    Load Plans and Scenarios

        Object                        | Finder                   | SDK
        Load Plan                     | IOdiLoadPlanFinder       | OdiLoadPlan
        Load Plan and Scenario Folder | IOdiScenarioFolderFinder | OdiScenarioFolder

    Model Objects

        Object    | Finder              | SDK
        Model     | IOdiModelFinder     | OdiModel
        Sub Model | IOdiSubModel        | OdiSubModel
        DataStore | IOdiDataStoreFinder | OdiDataStore
        Column    | IOdiColumnFinder    | OdiColumn
        Key       | IOdiKeyFinder       | OdiKey
        Condition | IOdiConditionFinder | OdiCondition

    Operator Objects

        Object         | Finder                  | SDK
        Session Folder | IOdiSessionFolderFinder | OdiSessionFolder
        Session        | IOdiSessionFinder       | OdiSession
        Schedule       |                         | OdiSchedule

    How to Create an Object?

    Here is a simple example to create a project; it uses IOdiEntityManager.persist to persist the object.

        import oracle.odi.domain.project.OdiProject;
        import oracle.odi.core.persistence.transaction.support.DefaultTransactionDefinition;

        txnDef = new DefaultTransactionDefinition();
        tm = odiInstance.getTransactionManager()
        txnStatus = tm.getTransaction(txnDef)
        project = new OdiProject("Project For Demo", "PROJECT_DEMO")
        odiInstance.getTransactionalEntityManager().persist(project)
        tm.commit(txnStatus)

    How to Update an Object?

    This update example uses the methods on the OdiProject object to change the name of the project created above; it is then persisted.

        import oracle.odi.domain.project.OdiProject;
        import oracle.odi.domain.project.finder.IOdiProjectFinder;
        import oracle.odi.core.persistence.transaction.support.DefaultTransactionDefinition;

        txnDef = new DefaultTransactionDefinition();
        tm = odiInstance.getTransactionManager()
        txnStatus = tm.getTransaction(txnDef)
        prjFinder = (IOdiProjectFinder)odiInstance.getTransactionalEntityManager().getFinder(OdiProject.class);
        project = prjFinder.findByCode("PROJECT_DEMO");
        project.setName("A Demo Project");
        odiInstance.getTransactionalEntityManager().persist(project)
        tm.commit(txnStatus)

    How to Delete an Object?

    Here is a simple example to delete all of the sessions; it uses IOdiEntityManager.remove to delete each object.

        import oracle.odi.domain.runtime.session.finder.IOdiSessionFinder;
        import oracle.odi.domain.runtime.session.OdiSession;
        import oracle.odi.core.persistence.transaction.support.DefaultTransactionDefinition;

        txnDef = new DefaultTransactionDefinition();
        tm = odiInstance.getTransactionManager()
        txnStatus = tm.getTransaction(txnDef)
        sessFinder = (IOdiSessionFinder)odiInstance.getTransactionalEntityManager().getFinder(OdiSession.class);
        sessc = sessFinder.findAll();
        sessItr = sessc.iterator()
        while (sessItr.hasNext()) {
            sess = (OdiSession) sessItr.next()
            odiInstance.getTransactionalEntityManager().remove(sess)
        }
        tm.commit(txnStatus)

    This isn't an all-encompassing summary of the SDK, but it covers a lot of the content to give you a good handle on the objects and how they work. For details of how specific complex objects are created via the SDK, it's best to look at postings such as the interface builder posting here. Have fun, happy coding!

    Read the article

  • Not Playing Nice Together

    - by David Douglass
    One of the things I’ve noticed is that two industry trends are not playing nice together, those trends being multi-core CPUs and massive hard drives. It’s not a problem if you keep your cores busy with compute-intensive work, but for software developers the beauty of multi-core CPUs (along with gobs of RAM and a 64-bit OS) is virtualization. But when you have only one hard drive (who needs another when it holds 2 TB of data?) you wind up with a serious hard drive bottleneck. A solid state drive would definitely help, and might even be a complete solution, but the cost is ridiculous. Two TB of solid state storage will set you back around $7,000! A spinning 2 TB drive is only $150. I see a couple of solutions for this. One is the mainframe concept of near and far storage: put the stuff that will be heavily accessed on a solid state drive and the rest on a spinning drive. Another solution is multiple spinning drives. Instead of a single 2 TB drive, get four 500 GB drives. In total, the four 500 GB drives will cost about $100 more than the single 2 TB drive. You’ll need to be smart about which drive you place things on so that the load is spread evenly. Another option, for better performance, would be four 10,000 RPM 300 GB drives, but that would cost about $800 more than the single 2 TB drive and would deliver only 1.2 TB of space. All pricing based on Microcenter as of March 14, 2010.

    Read the article

  • Selling Solutions, Not Products

    - by David Dorf
    When I think about next-generation retailers, the names that come to mind are Apple, Whole Foods, Lulu Lemon, and IKEA.  They may not be the biggest retailers, but they are certainly growing fast. Success is never defined by just one dimension, and these retailers execute well across many dimensions, but the one that stands out for me is customer experience.  These stores feel...approachable...part of the community...local.  Customers are not intimidated to ask questions, and staff seem to go out of their way to help. What makes these retailers stand out in the industry?  These retailers aren't selling products -- they're selling solutions.  Think about that.  You think you're going to the Apple store to buy a phone, but you're actually buying a communications solution that handles much, much more.  If you carry an iPhone, your life has changed.  The way you do things is different.  The impacts go well beyond a simple phone. Solutions start with a problem, which is why these retailers greet customers with "what brought you in today?" or "can I answer any questions for you?"  Good retailers establish a relationship, even if it lasts only a few minutes. You don't walk into Whole Foods looking for cans of soup.  You are looking for meals: healthy snacks, interesting lunches, exotic dinners.  It's a learning experience where you might discover solutions to problems you didn't know you had.  Mention what foods you like, and you'll get a list of similar items you had not considered.  I didn't know I needed a closet organizer until I visited IKEA and learned about all the options.  They were able to customize the solution to meet my needs, and now I'm much more organized. One of the differences between selling products and selling solutions is training.  Visit any of these retailers' sites and you'll see a long list of in-store events for the benefit of customers.  You can buy exercise clothing from Lulu Lemon and also learn new yoga techniques, meet like-minded people, and branch off to other fitness regimes via their ambassadors.  You can visit the Genius Bar at Apple, eat lunch at IKEA, and learn to cook at Whole Foods. These retailers are making an investment in a relationship with their customers.  They are showing loyalty to their customers before asking for it back.  In the long run, this strategic approach will outlive any scan-and-bag mentality.

    Read the article

  • Debugging Tips for Skinning

    - by Christian David Straub
    Another guest post by Jeanne Waldman.

    If you are developing a skin for your Fusion Application in JDeveloper, you should know these tips:

    1. Firebug is your friend
    2. Uncompress the css style classes
    3. CHECK_FILE_MODIFICATION so that you see your skinning changes right away
    4. View the generated CSS file

    1. Firebug is your friend

    Install Firebug (http://getfirebug.com/layout) into Firefox and use it to view your rendered jspx page in the browser. You can select the HTML dom nodes on your page and see the css styles applied to each dom node.

    2. Uncompress the css style classes

    By default the styleclasses that are rendered are compressed. You may see style classes like class="x10" and class="x2e", but in your skin css file you have styleclasses like af|inputText::content or af|panelBox::header. It is easier to develop and debug a skin with Firebug if you see the uncompressed styleclasses. To do this:

    a. open web.xml
    b. add

        <context-param>
          <param-name>org.apache.myfaces.trinidad.DISABLE_CONTENT_COMPRESSION</param-name>
          <param-value>true</param-value>
        </context-param>

    c. save
    d. restart the server and re-run your page.

    3. CHECK_FILE_MODIFICATION so that you see your skinning changes right away

    For performance's sake, the ADF Faces framework does not check whether your skin .css file has changed on every render. But this is exactly what you want to happen when you are developing or debugging a skin: you want your changes to get noticed right away, without restarting the server. To do this:

    a. open web.xml
    b. add

        <context-param>
          <description>If this parameter is true, there will be an automatic check of the modification date of your JSPs, and saved state will be discarded when JSPs change. It will also automatically check if your skinning css files have changed without you having to restart the server. This makes development easier, but adds overhead. For this reason this parameter should be set to false when your application is deployed.</description>
          <param-name>org.apache.myfaces.trinidad.CHECK_FILE_MODIFICATION</param-name>
          <param-value>true</param-value>
        </context-param>

    c. save
    d. restart the server and re-run your page.
    e. from then on, you can change your skin's .css file, save it, refresh your page, and you should see the changes right away.

    4. View the generated CSS file

    There are different ways to view the generated CSS file, which is your skin's css file merged with all the skins it extends, processed, generated to the filesystem, and linked to your generated html page. One way is to view it with Firebug. The problem with this approach is that you might see something a little different from the actual css file, because Firebug may do some extra processing. I like to view the generated css file by: Right click on your page in the browser

    Read the article
