Search Results

Search found 14737 results on 590 pages for 'dynamic tables'.


  • ArchBeat Link-o-Rama for 2012-06-05

    - by Bob Rhubart
    Why is enterprise software often so complicated? | Rajesh Raheja (rraheja.wordpress.com)
    Rajesh Raheja shares "a few examples of requirements that lead to creation of complex platform infrastructures that up the complexity of enterprise software."

    Educause Top-Ten IT Issues - the most change in a decade or more | Cole Clark (blogs.oracle.com)
    Cole Clark discusses why "higher education IT must change in order to fully realize the potential for transforming the institution, and therefore its people must learn new skills, understand and accept new ways of solving problems, and not be tied down by past practices or institutional inertia."

    Oracle VM RAC template - what it took | Wim Coekaerts (blogs.oracle.com)
    Wim Coekaerts shares an example that shows how easy it is to deploy a complete Oracle RAC cluster with Oracle VM.

    Oracle Cloud and Oracle Platinum Services Announcements (oracle.com)
    Featuring Larry Ellison and Mark Hurd. Wednesday, June 06, 2012, 1:00 p.m. PT - 2:30 p.m. PT.

    Creating an Oracle Endeca Information Discovery 2.3 Application Part 1: Scoping and Design | Mark Rittman (www.rittmanmead.com)
    Oracle ACE Director Mark Rittman launches a new series that dives into "the various stages in building a simple Oracle Endeca Information Discovery application, using the recent Endeca Information Discovery 2.3 release."

    Introducing Decision Tables in the SOA Suite 11g | Lucas Jellema (technology.amis.nl)
    Oracle ACE Director Lucas Jellema demonstrates how "the decision table can be put to good use to implement the business logic behind the classical game of Rock, Paper and Scissors."

    Application integration: reorganise, recycle, repurpose | Andrew Clarke (radiofreetooting.blogspot.com)
    "Integration is a topic which is in everybody's bailiwick," says Oracle ACE Andrew Clarke. "The business people want to get the best value from their existing IT investments. The architects need to understand the interfaces between the silos and across the layers. The developers have to implement it."

    Using XA Transactions in Coherence-based Applications | Jonathan Purdy (blogs.oracle.com)
    Purdy shares "a few common approaches when integrating Coherence into applications via the use of an application server's transaction manager."

    Thought for the Day
    "The difficulty lies, not in the new ideas, but in escaping from the old ones..." — John Maynard Keynes (June 5, 1883 - April 4, 1946)
    Source: Quotations Page

    Read the article

  • Microsoft BUILD 2013 – Day 1 Summary

    - by Tim Murphy
    Originally posted on: http://geekswithblogs.net/tmurphy/archive/2013/06/27/microsoft-build-2013ndashday-1-summary.aspx I’m happy to be at BUILD this week, mainly because my flights finally got me here late on Tuesday. My biggest complaints so far are the flights and the hotel. It seems that almost every flight into San Francisco was delayed by multiple hours. The Sequester so lovingly forced on America by Congress means that the airport was short of controllers. That, along with poor weather and airport construction, meant most people were 2-3 hours late arriving. Add on top of that the fact that the hotel I picked during registration is absolutely horrid. It looks like something out of a ghost hunters show and smells like it too. I think if Microsoft is going to select a hotel they need to make sure that it is adequate. Rant over! So what happened the first day? Steve Ballmer started off the keynote along with Julie Larson-Green and a cast of others. We finally found out that there were around six thousand people attending BUILD and that the focus this year would be Windows 8, Windows Phone 8 and Azure. The rest of the keynote I am going to cover in a separate post. You can’t have a Microsoft conference without some fun. This year they have a hunt for pins that represent different gestures in Windows 8. I got all of mine. Now they just need to pull my name. The sessions I attended were really good. They covered live tiles, what’s new in XAML and building Windows Phone UIs, presented by Kraig Brockschmidt, Tim Heuer and Shawn Oster respectively. These will also be covered in separate posts. The exhibit area was interesting, but somewhat disappointing. I think TechEd 2012 was better organized and better staffed by the vendors. The Microsoft teams’ booths also seemed in need of some organization and staffing. Overall it was a really fun day, capped off by all six thousand attendees standing in line to get their Acer 8” tablets and Surface Pros. What a day! Stay tuned for follow-up posts. del.icio.us Tags: BUILD 2013, Windows 8.1, Windows Phone, XAML, Keynote

    Read the article

  • Using HBase or Cassandra for a token server

    - by crippy
    I've been trying to figure out how to use HBase/Cassandra for a token system we're re-implementing. I could probably squeeze quite a lot more out of MySQL, but it just seems we have come to clinging on to the wrong tool for the task because we know it well; eventually we will hit a wall (as happened to us in other areas). Naturally I started looking into possible NoSQL solutions. The prominent ones (at least in terms of buzz) are HBase and Cassandra. The story is more or less like this:
    - A user can send a gift to other users.
    - Each gift has a list of recipients, or is public, in which case it is limited by number or expiration date.
    - For each gift sent we generate a token that uniquely identifies that gift.
    - For each gift we track the list of potential recipients and their current status relating to that gift (accepted, declined etc).
    - A user can request to see all his currently pending gifts.
    - A user can request a list of users he has sent a gift to today (used to limit the number of gifts sent).
    - We require the ability to "dump" or "ignore" expired gifts (x-day-old gifts are considered expired).
    There are some other requirements, but I believe the above covers the essentials. How would I go about modeling that using HBase or Cassandra? Well, the wall was performance. A few tens of millions of records per day over 2 tables, kept for 2 weeks (I wish I could have kept it for longer, but there was no way). The response times kept getting slower and slower until eventually we had to start cutting down the number of days we kept data. Caching helps here, but it's not an ideal solution since a big part of the ops are updates. Also, as I hinted in my original post, we use MySQL extensively. We know exactly what it can and can't do, both in naive implementations, followed by native partitioning, and finally by horizontally sharding our dataset at the application level to reside on multiple DB nodes. It can be done, but that's not really what I'm trying to get from this. I asked a very specific question about designing a solution using a NoSQL solution, since it's very hard to find examples of such designs out there. Brainlag, I'm not trying to come off as rude. I actually appreciate it a lot that you are the only one who even bothered to respond, but I see it over and over again: people ask questions and others assume they have no idea what they're talking about and give an irrelevant answer. Ignore RDBMS please. The question is about NoSQL.
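
    One possible starting point (an illustration added here, not from the original thread) is to model each read path above as a wide row keyed by user or token, so every query in the list becomes a single-row lookup. The sketch below is plain C# standing in for the row layouts; none of these types are an HBase or Cassandra API, and all names are invented.

        using System;
        using System.Collections.Generic;

        // Hypothetical wide-row layouts for an HBase/Cassandra-style store.
        // Each class models one row: a key plus a set of columns.

        // Row keyed by gift token: who was invited and where they stand.
        class GiftRow
        {
            public string GiftToken;          // row key (the generated token)
            public string SenderId;
            public DateTime ExpiresAt;
            // one column per recipient: userId -> "pending" | "accepted" | "declined"
            public Dictionary<string, string> RecipientStatus =
                new Dictionary<string, string>();
        }

        // Row keyed by "pending:{userId}": answers "show my pending gifts"
        // with one lookup; columns sorted by date so expired tokens can be
        // dropped ("dumped") from the tail.
        class PendingGiftsRow
        {
            public string UserId;
            public SortedDictionary<DateTime, string> TokensByDate =
                new SortedDictionary<DateTime, string>();
        }

        // Row keyed by "sent:{userId}:{yyyyMMdd}": answers the daily
        // send-limit check; in Cassandra the whole row can carry a TTL.
        class SentTodayRow
        {
            public string UserId;
            public DateTime Day;
            public List<string> RecipientIds = new List<string>();
        }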

    Read the article

  • Design pattern to handle queries using multiple models

    - by coderkane
    I am presented with a dilemma while trying to re-design the class structure for my PHP/MySQL application to make it more elegant and conform to the SOLID principles. The problem goes like this: Let us assume there is an abstract class called person, which has certain properties to define a generic person, such as name, age, date of birth etc. Two classes, student and teacher, implement this abstract class and add their own unique properties to it. I have designed all three classes to include all the operational logic (details of which are not relevant in the context of the question). Now, I need to create views/reports/data grids which contain details from multiple classes. For example, say, a list of all students doing projects in Chemistry mentored by a teacher whose name is the parameter to the query. This is just one example of a view; there are many different views in the application, which use data from 3-4 tables, and each of them has multiple input parameters. Considering this particular example, I have written the relevant query using JOIN and the results are as expected and proper. Now here is the dilemma: keeping in mind the single responsibility principle, where should I keep this query? It does not belong to either the student class, or the teacher class, or any other class currently present.
    a) Should I create a new class, say a dataView class, design it as an MVC pattern, and keep the query there? What about the other views? How do they fit in this architecture?
    b) Should I not keep the query in code at all, and make it a DB view?
    c) Am I completely wrong in my approach? If so, what is the right approach?
    My considerations are as follows:
    a) It should be easy to add new views later on if a requirement comes up, without having to copy-paste-modify code.
    b) I would like to make it as loosely coupled as possible, so that if minor DB structure changes happen, it does not break.
    I did Google searches on report design and OOP report generators, but all the results seem to focus on the visual design of the report rather than fetching the data. I have already taken care of the visual aspect of the report using MVC with HTML templates. I am sure this is a very fundamental problem with a known solution, but I am somehow not able to find it (maybe I am searching with the wrong keywords). Edit 1: Modified the title to make it more relevant. Edit 2: The accepted answer got me thinking in the right direction and helped me identify my design flaws, which eventually led me to find this question and the solution on Stack Overflow, which gave me the detailed answer to clear the confusion.
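
    As an editorial illustration of option (a): a common move is a dedicated query class per report (a "read model"), so neither student nor teacher owns the JOIN. The sketch below uses C# and ADO.NET for neutrality; the table, column, and class names are invented, and the same shape translates directly to PHP/PDO.

        using System.Collections.Generic;
        using System.Data;

        // Flat row shaped for the view; deliberately not a domain entity.
        class StudentProjectRow
        {
            public string StudentName;
            public string ProjectTitle;
            public string TeacherName;
        }

        // One class, one responsibility: this report's query and nothing else.
        class StudentsByMentorQuery
        {
            private readonly IDbConnection _connection;

            public StudentsByMentorQuery(IDbConnection connection)
            {
                _connection = connection;
            }

            public IList<StudentProjectRow> Execute(string subject, string teacherName)
            {
                using (IDbCommand cmd = _connection.CreateCommand())
                {
                    // The JOIN lives here (or in a DB view this selects from),
                    // so a schema change touches only this class.
                    cmd.CommandText =
                        "SELECT s.name, p.title, t.name " +
                        "FROM student s " +
                        "JOIN project p ON p.student_id = s.id " +
                        "JOIN teacher t ON t.id = p.mentor_id " +
                        "WHERE p.subject = @subject AND t.name = @teacher";
                    AddParam(cmd, "@subject", subject);
                    AddParam(cmd, "@teacher", teacherName);

                    var rows = new List<StudentProjectRow>();
                    using (IDataReader reader = cmd.ExecuteReader())
                        while (reader.Read())
                            rows.Add(new StudentProjectRow
                            {
                                StudentName = reader.GetString(0),
                                ProjectTitle = reader.GetString(1),
                                TeacherName = reader.GetString(2)
                            });
                    return rows;
                }
            }

            private static void AddParam(IDbCommand cmd, string name, string value)
            {
                IDbDataParameter p = cmd.CreateParameter();
                p.ParameterName = name;
                p.Value = value;
                cmd.Parameters.Add(p);
            }
        }

    Each new report then becomes a new small class, which addresses consideration (a) without copy-paste-modify.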

    Read the article

  • Agile Documentation

    - by Nick Harrison
    We all know that one of the premises of the Agile Manifesto is to value Working Software over Comprehensive Documentation. This is a wonderful idea and it takes a tremendous burden off of project implementations. I have seen as many projects fail because of the maintenance weight of the project documentation as for any other reason. But this goal, as important as it is, may not always be practical. Sometimes the client will simply insist on tedious documentation despite the arguments against it. This may be to calm a nervous client. This may be to satisfy an audit / compliance requirement. This may be a none-too-subtle attempt at sabotaging the project. Ok, it is probably not an all-out attempt to sabotage the project, but it will probably feel that way. So what can we do to keep to the spirit of the Agile Manifesto but still meet the needs of the client wanting the documentation? This is a good question that I have been puzzling over lately! I hope to explore some possible answers more fully here. A common theme that my solutions are likely to follow is the same theme that I often follow when simplifying complex business logic: make it table driven! My thought is that the sought-after documentation could be a report or reports out of a metadata repository. Reports are much easier to maintain than hand-written documentation. Here are a few additional advantages that we can explore over time:
    - Reports will take advantage of the fact that different people have different needs and different format requirements.
    - Reports and the supporting metadata are more easily validated, and the validation can be automated.
    - If the application itself uses this metadata, then there never has to be a question as to whether or not the metadata is up to date. It is up to date or the application would not work.
    - In many cases we should be able to automatically gather most of the metadata that we need using reflection, system tables, etc. (see the sketch after this list).
    I think that this will lower the total cost of ownership for the documentation and may provide something useful beyond having a pretty document to look at. What are your thoughts?
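
    As a concrete illustration of that last point (my sketch, not from the original post), a few lines of reflection can walk an assembly and emit a first-cut data dictionary of the application's public types, which a report can then format differently per audience:

        using System;
        using System.Linq;
        using System.Reflection;

        // Minimal sketch: harvest type metadata from the running application
        // itself, so this slice of the "documentation" can never go stale.
        static class MetadataReport
        {
            static void Main()
            {
                Assembly assembly = Assembly.GetExecutingAssembly();

                foreach (Type type in assembly.GetTypes().Where(t => t.IsPublic))
                {
                    Console.WriteLine(type.FullName);
                    foreach (PropertyInfo prop in
                             type.GetProperties(BindingFlags.Public | BindingFlags.Instance))
                    {
                        Console.WriteLine("    {0} : {1}", prop.Name, prop.PropertyType.Name);
                    }
                }
            }
        }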

    Read the article

  • Q&A: Drive Online Engagement with Intuitive Portals and Websites

    - by kellsey.ruppel
    We had a great webcast yesterday and wanted to recap the questions that were asked throughout.

    Can ECM distribute content to 3rd party sites?
    ECM, which is now called WebCenter Content, can distribute content to 3rd party sites via several means, including SSXA (Site Studio for External Applications).

    Will you be able to provide more information on these means and SSXA?
    If you have an existing JSP application, you can add the SSXA libraries to the IDE where your application was built (JDeveloper, for example). You can then drop some code into your 3rd party site/application that can both create and pull dynamically contributable content out of the Content Server for inclusion in your pages. If the 3rd party site is not a JSP application, there is also the option of leveraging two Site Studio (not SSXA) specific custom WebCenter Content services to pull Site Studio XML content into a page. More information on SSXA can be found here: http://docs.oracle.com/cd/E17904_01/doc.1111/e13650/toc.htm

    Is there another way than a "gadget" to integrate applications (like a loan simulator) in WebCenter Sites?
    There are some other ways, such as leveraging the Pagelet Producer, which is a core component of WebCenter Portal. Oracle WebCenter Portal's Pagelet Producer (previously known as Oracle WebCenter Ensemble) provides a collection of useful tools and features that facilitate dynamic pagelet development. A pagelet is a reusable user interface component. Any HTML fragment can be a pagelet, but pagelet developers can also write pagelets that are parameterized and configurable, that dynamically interact with other pagelets, and that respond to user input. Pagelets are similar to portlets, but while portlets were designed specifically for portals, pagelets can be run on any web page, including within a portal or other web application. Pagelets can be used to expose platform-specific portlets in other web environments. More on the Pagelet Producer can be found here: http://docs.oracle.com/cd/E23943_01/webcenter.1111/e10148/jpsdg_pagelet.htm#CHDIAEHG

    Can you describe the mechanism available to achieve the context transfer of content?
    The primary goal of context transfer is to provide a uniform experience to customers as they transition from one channel to another. For instance, the use case discussed in the webcast was a customer moving from the .com marketing website to the self-service site where the customer wants to manage his account information. If WebCenter Sites was able to identify and segment the customer into a specific category where the customer is a potential target for some promotions, the same promotions should be targeted to the customer when he is in the self-service site, which is managed by WebCenter Portal. The context transfer can be achieved by calling out to the WebCenter Sites Engage Server APIs, which will identify the segment that the customer has been bucketed into. Then, through REST APIs, WebCenter Portal can request from WebCenter Sites the specific content that needs to be targeted to a customer in the identified segment. While this integration can be achieved through custom integration today, Oracle is looking into productizing this integration in future releases.

    How can context be transferred from WebCenter Sites (marketing site) to WebCenter Portal (online services)?
    The WebCenter Portal Personalization server can call into the WebCenter Sites Engage Server to identify the segment for the user and then, through REST APIs, request the specific content that needs to be surfaced in the Portal.

    Still have questions? Leave them in the comments section! And you can catch a replay of the webcast here.

    Read the article


  • Call for Papers for both Devoxx UK and France now open!

    - by Yolande
    The two conferences are taking place the last week of March 2013, with London on March 26th and 27th and Paris on March 28th and 29th. Oracle fully supports Devoxx UK and Devoxx France as a European Platinum Partner. Submit proposals and participate in both conferences, since they are a two-hour train ride away from one another. The Devoxx conferences are designed "for developers by developers." The conference committees are looking for speakers who are passionate developers unafraid to share their knowledge of Java, mobile, web and beyond. The sessions are about frameworks, tools and development, with in-depth conference sessions, short practical quickies, and bird-of-a-feather discussions. Those different formats allow speakers to choose the best way to present their topics, and the preferred format can be indicated during the submission process. Devoxx has proven its success under Stephan Janssen, organizer of Devoxx in Belgium for the past 11 years. Devoxx has been the biggest Java conference in Europe for many years. To organize these local conferences, Stephan has enrolled the top community leaders in the UK and France. Ben Evans and Martijn Verburg are the leaders of the London Java User Group (JUG) and are also known internationally for starting the Adopt-a-JSR program. Antonio Goncalves is the leader of the Paris JUG. He organized last year's Devoxx France, which was a big success, drawing twice the attendance first expected. The organizers made sure to add local character to the conferences. "The community energy has to feel right," said Ben Evans, and for that he picked an "old Victoria hall" for the venue. These leaders are part of very dynamic Java communities in France and the UK. France has 22 JUGs; the Paris JUG alone has 2,000 members. The UK has over 50,000 developers working in London and its surroundings; many of them are Java developers working in the financial industry. The conference fee is kept as low as possible to encourage those developers to attend. Devoxx promises to be crowded and sold out in advance. Make sure to submit your talks to both Devoxx UK and Devoxx France before January 31st, 2013.

    Read the article

  • How do you handle objects that need custom behavior, and need to exist as an entity in the database?

    - by Scott Whitlock
    For a simple example, assume your application sends out notifications to users when various events happen. So in the database I might have the following tables:

        TABLE Event
            EventId     uniqueidentifier
            EventName   varchar

        TABLE User
            UserId      uniqueidentifier
            Name        varchar

        TABLE EventSubscription
            EventUserId
            EventId
            UserId

    The events themselves are generated by the program. So there are hard-coded points in the application where an event instance is generated, and it needs to notify all the subscribed users. So the application itself doesn't edit the Event table, except during initial installation, and during an update where a new Event might be created. At some point, when an event is generated, the application needs to look up the Event and get a list of Users. What's the best way to link the event in the source code to the event in the database?
    Option 1: Store the EventName in the program as a fixed constant, and look it up by name.
    Option 2: Store the EventId in the program as a static Guid, and look it up by ID.
    Extra credit: In other similar circumstances I may want to include custom behavior with the event type. That is, I'll want subclasses of my Event entity class with different behaviors, and when I look up an event, I want it to return an instance of my subclass. For instance:

        class Event
        {
            public Guid Id { get; }
            public string EventName { get; }
            public ReadOnlyCollection<EventSubscription> EventSubscriptions { get; }

            public void NotifySubscribers()
            {
                foreach(var eventSubscription in EventSubscriptions)
                {
                    eventSubscription.Notify();
                }
                this.OnSubscribersNotified();
            }

            public virtual void OnSubscribersNotified() {}
        }

        class WakingEvent : Event
        {
            private readonly IWaker waker;

            public WakingEvent(IWaker waker)
            {
                if(waker == null) throw new ArgumentNullException("waker");
                this.waker = waker;
            }

            public override void OnSubscribersNotified()
            {
                this.waker.Wake();
                base.OnSubscribersNotified();
            }
        }

    So, that means I need to map WakingEvent to whatever key I'm using to look it up in the database. Let's say that's the EventId. Where do I store this relationship? Does it go in the event repository class? Should the WakingEvent declare its own ID in a static member or method? ...and then, is this all backwards? If all events have a subclass, then instead of retrieving events by ID, should I be asking my repository for the WakingEvent, like this:

        public T GetEvent<T>() where T : Event
        {
            ... // what goes here? ...
        }

    I can't be the first one to tackle this. What's the best practice?
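
    One way to answer that closing question (an added sketch, not an accepted answer): keep the type-to-id mapping in a single registry owned by the repository, so neither the database nor the event subclasses know about each other. Everything below is illustrative; the GUID value is made up, and the parameterless construction is a simplification of the IWaker-style constructor injection above (in practice a factory or DI container would build the instance).

        using System;
        using System.Collections.Generic;

        abstract class Event
        {
            public Guid Id { get; set; }
        }

        // Custom behavior (e.g. OnSubscribersNotified) lives on the subclass.
        class WakingEvent : Event { }

        class EventRepository
        {
            // One registry, seeded with the same stable ids the installer
            // wrote to the Event table. Subclasses stay persistence-ignorant.
            private static readonly Dictionary<Type, Guid> EventIds =
                new Dictionary<Type, Guid>
                {
                    { typeof(WakingEvent), new Guid("6f2ab23a-4b86-42a1-bb3e-1a77ad0c5b1f") },
                };

            public T GetEvent<T>() where T : Event, new()
            {
                var evt = new T { Id = EventIds[typeof(T)] };
                // ...load the row and its EventSubscriptions by evt.Id here...
                return evt;
            }
        }

        class Demo
        {
            static void Main()
            {
                var waking = new EventRepository().GetEvent<WakingEvent>();
                Console.WriteLine(waking.Id); // resolved without magic strings
            }
        }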

    Read the article

  • Welcome To The Nashorn Blog

    - by jlaskey
    Welcome to all. Time to break the ice and instantiate The Nashorn Blog. I hope to contribute routinely, but we are very busy at this point, preparing for the next development milestone and, of course, getting ready for open source. So if there are long gaps between postings, please forgive. We're just coming back from JavaOne and are stoked by the positive response to all the Nashorn sessions. It was great for the team to have the front-and-centre slide from Georges Saab early in the keynote. It seems we have support coming from all directions. Most of the session videos are posted. Check out the links.

    Nashorn: Optimizing JavaScript and Dynamic Language Execution on the JVM. Unfortunately, Marcus, the code generation juggernaut, got saddled with the first session of the first day. Still, he had a decent turnout. The talk focused on issues relating to optimizations we did to get good performance from the JVM. Much yet to be done, but looking good.

    Nashorn: JavaScript on the JVM. This was the main talk about Nashorn. I delivered the little-bit-of-this-and-a-little-bit-of-that session, with an overview, a follow-up on the open source announcement, a run through a few of the Nashorn features and some demos. The room was SRO, about 250. High points: Sam Pullara, from Twitter, came forward to describe how painless it was to get Mustache.js up and running (20x over Rhino), and John Ceccarelli, from NetBeans, came forward to describe how Nashorn has become an integral part of NetBeans. A healthy Q & A at the end was very encouraging.

    Meet the Nashorn JavaScript Team. Michel, Attila, Marcus and myself hosted a Q & A. There was only a handful of people in the room (we assume it was because of a conflicting session ;-) ). Most of the questions centred around Node.jar, which leads me to believe Nashorn + Node.jar is what holds the most interest. Akhil, Mr. Node.jar, sitting in the audience, fielded the Node.jar questions.

    Nashorn, Node, and Java Persistence. Doug Clarke, Akhil and myself discussed the title topics, followed by a lengthy Q & A (security had to hustle us out). 80 or so in the room. Lots of questions about Node.jar. It was great to see Doug's use of Nashorn + JPA. Nashorn in action, with such elegance and grace.

    Putting the Metaobject Protocol to Work: Nashorn's Java Bindings. Attila discussed how he applied Dynalink to Nashorn. Good turnout for this session as well. I have a feeling that once people discover and embrace this hidden gem, great things will happen for all languages running on the JVM.

    Finally, there were quite a few JavaOne sessions that focused on non-Java languages and their impact on the JVM. I've always believed that one's tool belt should carry a variety of programming languages, not just for domain/task applicability, but also to enhance your thinking and approaches to problem solving. For the most part, future blog entries will focus on 'how to' in Nashorn, but if you have any suggestions for topics you want discussed, please drop a line. Cheers.

    Read the article

  • .Net Application & Database Modularity/Reuse

    - by Martaver
    I'm looking for some guidance on how to architect an app with regard to modularity, separation of concerns and re-usability. I'm working on an application (ASP.NET, C#) that has distinctly generic chunks of functionality that I'd love to be able to lift out, all layers, into re-usable components. This means the module handles the database schema, data access, API, everything, so that the next time I want to use it I can just register the module and hook into it. Developing modules of re-usable functionality is a no-brainer, but what is really confusing me is what to do when it comes to handling a core re-usable database schema that serves the module's functionality. In an ideal world, I would register a module and it would ensure that the associated database schema exists in the DB. I would code on the assumption that the tables exist, calling the module's functionality through the DLL, agnostic of the database layer. Kind of like Enterprise Library's Caching/Logging Application Block, which can create a DB schema in the target DB to use as a data store. My question is: what do you think is the best way to achieve this, firstly in terms of design architecture, and secondly in terms of solution structure? What patterns/frameworks do you know of that exist and support this kind of thing?
    My thoughts so far: I mostly use Entity Framework and SQL Server DB projects. I thought about a 'black box' approach to modules of functionality. I could use a code-first approach in EF4 and use the ObjectContext to create a database when the module is initialized. However, this means that all of the entities my module encapsulates would be disconnected from the rest of the application, because they belonged to an abstracted ObjectContext. Further, creating appropriate indexes and references between domain entities and the module's entities would be impossible to do practically. I've thought of adopting Enterprise Library and creating my own Application Blocks. I'm not sure how this would play nice with Entity Framework (if at all), though. I like the idea of building on proven patterns and practices to encapsulate established, reusable functionality. I also thought of abandoning Entity Framework for the module and just creating a separate DB schema for the module, with its own set of stored procedures and ADO.Net, then deploying the script at run-time if interrogation shows that it doesn't exist. But once again, for development outside of the module, I would want to use Entity Framework, and I would have to use the module separately, disconnected from the domain ObjectContext. Has anyone had experience developing these sorts of full-stack modules? What advice can you offer? Am I biting off more than I can chew?
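
    For the "register a module and it ensures its schema exists" idea, here is one hedged sketch. It assumes the DbContext API introduced after EF4's ObjectContext (EF 4.1+ code-first); the module and entity names are invented for illustration.

        using System.Data.Entity;   // EF 4.1+ code-first

        // Contract every pluggable module implements; the host calls
        // Initialize() once at registration time.
        interface IModule
        {
            void Initialize();
        }

        // The module's own persistence island: entities + context together.
        class CacheEntry
        {
            public int Id { get; set; }
            public string Key { get; set; }
            public string Payload { get; set; }
        }

        class CachingModuleContext : DbContext
        {
            public DbSet<CacheEntry> Entries { get; set; }
        }

        class CachingModule : IModule
        {
            public void Initialize()
            {
                using (var ctx = new CachingModuleContext())
                {
                    // Creates the module's tables on first run; no-op afterwards.
                    ctx.Database.CreateIfNotExists();
                }
            }
        }

    The trade-off the question already identifies still stands: the module's tables live behind its own context, so cross-module foreign keys and indexes remain awkward.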

    Read the article

  • The battle between Java vs. C#

    The battle between Java and C# has been a big debate amongst the development community over the last few years. Both languages have specific pros and cons based on the needs of a particular project. In general, both languages utilize a similar coding syntax that is based on C++, and offer developers similar functionality. This being said, the communities supporting each of these languages are very different. The divide amongst the communities is much like the political divide in America, where the Java community would represent the Democrats and the .NET community would represent the Republicans. The Democratic Party is a proponent of the working class and the general population. Currently, Java is deeply entrenched in the open source community and is distributed freely to anyone who has an interest in using it. Open source communities rely on developers to keep them alive by constantly contributing code to make applications better; essentially, code is developed by the community. This is in stark contrast to the C# community, which is typically a pay-to-play community, meaning that you must pay for code that you want to use because it is developed as products to be marketed and sold for a profit. This ties back to my reference to the Republicans because they typically represent the needs of business and personal responsibility. This is emphasized by the belief that code is a commodity and that it can be sold for a profit, which is in direct conflict with the laissez-faire beliefs of the open source community. Beyond the general differences between Java and C#, they also target two different environments. Java is developed to be environment independent and only requires that users have a Java virtual machine running in order for the Java code to execute. C#, on the other hand, typically targets any system running a Windows operating system with the appropriate version of the .NET Framework installed. However, recently there has been a push by a segment of the open source community, based around the Mono project, that lets C# code run on other, non-Windows operating systems. In addition, another feature of C# is that it compiles into an intermediate language, and this is what is executed when the program runs. Because C# is reduced down to an intermediate language, called the Common Intermediate Language (CIL), which is executed by the Common Language Runtime (CLR), it can be combined with other languages that also compile to CIL, like Visual Basic (VB) .NET and F#. The interaction between multiple languages in the .NET Framework enables projects to utilize existing code bases regardless of the actual syntax, because they can all be compiled to CIL and executed as one codebase. As a software engineer I personally feel that it is really important to learn as many languages as you can, or at least be open to learning as many languages as you can, because no one language will work in every situation. In some cases Java may be a better choice for a project, and in others C#. It really depends on the requirements of a project and the time constraints. In addition, I feel it is really important to concentrate on understanding the logic of programming and being able to translate business requirements into technical requirements. If you can understand both programming logic and business requirements, then deciding which language to use is basically just choosing what syntax to write for a given business problem or need. In regard to code refactoring and dynamic languages, it really does not matter.
    Eventually all projects will be refactored or decommissioned to allow for progress. This is the way of life in the software development industry. The language of a project should not be chosen based on the fact that the project will eventually be refactored, because they all will be.

    Read the article

  • Essbase BSO Data Fragmentation

    - by Ann Donahue
    Essbase BSO Data Fragmentation
    Data fragmentation naturally occurs in Essbase Block Storage (BSO) databases where there are a lot of end-user data updates, incremental data loads, many lock-and-send operations, and/or many calculations executed. If an Essbase database starts to experience performance slow-downs, this is an indication that there may be too much fragmentation. See Chapter 54, Improving Essbase Performance, in the Essbase DBA Guide for more details on measuring and eliminating fragmentation: http://docs.oracle.com/cd/E17236_01/epm.1112/esb_dbag/daprcset.html
    Fragmentation is likely to occur in the following situations:
    - Read/write databases in which users are constantly updating data
    - Databases that execute calculations around the clock
    - Databases that frequently update and recalculate dense members
    - Data loads that are poorly designed
    - Databases that contain a significant number of Dynamic Calc and Store members
    - Databases that use an isolation level of uncommitted access with commit block set to zero
    There are two types of data block fragmentation:
    - Free space tracking, which is measured using the Average Fragmentation Quotient statistic.
    - Block order on disk, which is measured using the Average Cluster Ratio statistic.
    Average Fragmentation Quotient: This ratio measures free space in a given database. As you update and calculate data, empty spaces occur when a block can no longer fit in its original space and will either be appended at the end of the file or fit into another empty space that is large enough. These empty spaces take up space in the .PAG files. The higher the number, the more empty spaces you have and, therefore, the bigger the .PAG file and the longer it takes to traverse the .PAG file to get to a particular record. An Average Fragmentation Quotient value of 3.174765 means the database is 3% fragmented with free space.
    Average Cluster Ratio: This statistic describes the order in which the blocks actually exist in the database. An Average Cluster Ratio of 1 means all the blocks are ordered in the correct sequence, in the order of the outline. As you load and calculate data blocks, the sequence can start to fall out of order, because when you write to a block it may not be possible to place it back in the exact same spot in the database where it existed before. The lower this number, the more out of order the blocks become, and the more it affects performance. An Average Cluster Ratio value of 1 means no fragmentation; any value lower than 1 (e.g. 0.01032828) means the data blocks are getting further out of order relative to the outline order.
    Eliminating data block fragmentation: Both types of data block fragmentation can be removed by doing a dense restructure or an export/clear/import of the data. There are two types of dense restructure:
    1. Implicit restructures. An implicit dense restructure happens when outline changes are made using the EAS Outline Editor or Dimension Build. Essbase restructures create new .PAG files, restructuring the data blocks in the .PAG files. When Essbase restructures the data blocks, it regenerates the index automatically so that index entries point to the new data blocks. Empty blocks are NOT removed with implicit restructures.
    2. Explicit restructures. An explicit dense restructure happens when a database restructure is initiated manually. An explicit dense restructure is a full restructure, comprising a dense restructure as outlined above plus the removal of empty blocks.
    Empty blocks vs. fragmentation: The existence of empty blocks is not considered fragmentation. Empty blocks can be created through calc scripts or formulas. An empty block adds to the existing database block count and is included in the block counts of the database properties. There are no statistics for empty blocks. The only way to determine whether empty blocks exist in an Essbase database is to record your current block count, export the entire database, clear the database, then import the exported data. If the block count decreased, the difference is the number of empty blocks that had existed in the database.

    Read the article

  • New Sample Demonstrating the Traversing of Tree Bindings

    - by Duncan Mills
    A technique that I seem to use a fair amount, particularly in the construction of dynamic UIs, is the use of an ADF Tree Binding to encode a multi-level master-detail relationship, which is then expressed in the UI in some kind of looping form - usually a series of nested af:iterators, rather than the conventional tree or treetable. This technique exploits two features of the tree binding. First, a tree binding can return both a collectionModel and a treeModel; the collectionModel can be used directly by an iterator. Second, the "rows" returned by the collectionModel themselves contain an attribute called .children. This attribute in turn gives access to a collection of all the children of that node, which can also be iterated over. Putting this together, you can represent the data encoded into a tree binding in all sorts of ways. As an example, I've put together a very simple sample based on the HR schema and uploaded it to the ADF Sample project. It produces this UI (screenshot in the original post). The important code is shown here for a Region -> Country -> Location hierarchy:

        <af:iterator id="i1" value="#{bindings.AllRegions.collectionModel}" var="rgn">
          <af:showDetailHeader text="#{rgn.RegionName}" disclosed="true" id="sdh1">
            <af:iterator id="i2" value="#{rgn.children}" var="cnty">
              <af:showDetailHeader text="#{cnty.CountryName}" disclosed="true" id="sdh2">
                <af:iterator id="i3" value="#{cnty.children}" var="loc">
                  <af:panelList id="pl1">
                    <af:outputText value="#{loc.City}" id="ot3"/>
                  </af:panelList>
                </af:iterator>
              </af:showDetailHeader>
            </af:iterator>
          </af:showDetailHeader>
        </af:iterator>

    You can download the entire sample from here:

    Read the article

  • Libgdx 2D Game, Random generated World of random size, how to get mouse coordinates?

    - by Solom
    I'm a noob and English is not my mother tongue, so please bear with me! I'm generating a map for a sidescroller out of a 2D array. That is, the array holds different values and I create blocks based on those values. Now, my problem is to match mouse coordinates on screen with the actual block the mouse is pointing at.

        public class GameScreen implements Screen {
            private static final int WIDTH = 100;
            private static final int HEIGHT = 70;
            private OrthographicCamera camera;
            private Rectangle glViewport;
            private SpriteBatch spriteBatch;
            private Map map;
            private Block block;
            ...

            @Override
            public void show() {
                camera = new OrthographicCamera(WIDTH, HEIGHT);
                camera.position.set(WIDTH/2, HEIGHT/2, 0);
                glViewport = new Rectangle(0, 0, WIDTH, HEIGHT);
                map = new Map(16384, 256);
                map.printTileMap(); // Debugging only
                spriteBatch = new SpriteBatch();
            }

            @Override
            public void render(float delta) {
                // Clear previous frame
                Gdx.gl.glClearColor(1, 1, 1, 1);
                Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
                GL30 gl = Gdx.graphics.getGL30();
                // gl.glViewport((int) glViewport.x, (int) glViewport.y, (int) glViewport.width, (int) glViewport.height);
                spriteBatch.setProjectionMatrix(camera.combined);
                camera.update();
                spriteBatch.begin();
                // Draw Map
                this.drawMap();
                // spriteBatch.flush();
                spriteBatch.end();
            }

            private void drawMap() {
                for(int a = 0; a < map.getHeight(); a++) {
                    // Bounds check (y)
                    if(camera.position.y + camera.viewportHeight < a) // || camera.position.y - camera.viewportHeight > a)
                        break;
                    for(int b = 0; b < map.getWidth(); b++) {
                        // Bounds check (x)
                        if(camera.position.x + camera.viewportWidth < b) // || camera.position.x > b)
                            break;
                        // Dynamic rendering via BlockManager
                        int id = map.getTileMap()[a][b];
                        Block block = BlockManager.map.get(id);
                        if(block != null) // Check if Air
                        {
                            block.setPosition(b, a);
                            spriteBatch.draw(block.getTexture(), b, a, 1, 1);
                        }
                    }
                }
            }
        }

    As you can see, I don't use the viewport anywhere. Not sure if I need it somewhere down the road. So, the map is 16384 blocks wide. One block is 16 pixels in size. One of my naive approaches was this:

        if(Gdx.input.isButtonPressed(Input.Buttons.LEFT)) {
            Vector3 mousePos = new Vector3();
            mousePos.set(Gdx.input.getX(), Gdx.input.getY(), 0);
            camera.unproject(mousePos);
            System.out.println(Math.round(mousePos.x)); // *16); // Debugging
            // TODO: round
            // map.getTileMap()[mousePos.x][mousePos.y] = 2; // Draw at mouse position
        }

    I confused myself somewhere down the road, I fear. What I want to do is update the "block" (or rather the information in the Map/2D array) so that in the next render() there is another block - basically drawing on the spriteBatch. So if anyone could point me in the right direction, that would be highly appreciated. Thanks!
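
    Since the question is really about the screen-to-tile conversion, here is the coordinate math spelled out (an added sketch, not from the thread). It is written in plain C# to stay self-contained; in libgdx terms it is exactly camera.unproject(mousePos) followed by flooring, because each block above is drawn one world unit in size at integer world coordinates.

        using System;

        // Sketch of what unproject does for a centered orthographic 2D camera,
        // followed by flooring to tile indices. Names are illustrative.
        class TilePicker
        {
            // Camera state in world units, mirroring the 100x70 viewport above.
            const float ViewportW = 100f, ViewportH = 70f;
            const float CamX = ViewportW / 2, CamY = ViewportH / 2;

            static (int tileX, int tileY) PickTile(int screenX, int screenY,
                                                   int screenW, int screenH)
            {
                // Screen origin is top-left with y pointing down; world origin
                // is bottom-left, so the y axis must be flipped.
                float worldX = CamX - ViewportW / 2 + screenX * (ViewportW / screenW);
                float worldY = CamY - ViewportH / 2 + (screenH - screenY) * (ViewportH / screenH);

                // Blocks are drawn at integer world coords with size 1, so the
                // floored world position is the index into the tile array.
                return ((int)Math.Floor(worldX), (int)Math.Floor(worldY));
            }

            static void Main()
            {
                var (tx, ty) = PickTile(640, 360, 1280, 720);
                Console.WriteLine($"tile = [{ty}][{tx}]"); // note [row][col], matching getTileMap()[a][b]
            }
        }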

    Read the article

  • Best Practices for serializing/persisting String Object Dictionary entities

    - by Mark Heath
    I'm noticing a trend towards using a dictionary of string to object (or sometimes string to string) instead of strongly typed objects. For example, the new Katana project makes heavy use of IDictionary<string,object>. This approach avoids the need to continually update your entity classes/DTOs and the database tables that persist them with new properties. It also avoids the need to create new derived entity types to support new types of entity, since the dictionary is flexible enough to store any arbitrary properties. Here's a contrived example:

        class StorageDevice
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        class NetworkShare : StorageDevice
        {
            public string Path { get; set; }
            public string LoginName { get; set; }
            public string Password { get; set; }
        }

        class CloudStorage : StorageDevice
        {
            public string ServerUri { get; set; }
            public string ContainerName { get; set; }
            public int PortNumber { get; set; }
            public Guid ApiKey { get; set; }
        }

    versus:

        class StorageDevice
        {
            public IDictionary<string, object> Properties { get; set; }
        }

    Basically I'm on the lookout for any talks, books or articles on this approach, so I can pick up on any best practices / difficulties to avoid. Here are my main questions:
    1. Does this approach have a name? (The only thing I've heard used so far is "self-describing objects.")
    2. What are the best practices for persisting these dictionaries in a relational database, especially the challenge of deserializing them successfully in strongly typed languages like C#?
    3. Does it change anything if some of the objects in the dictionary are themselves lists of strongly typed entities?
    4. Should a second dictionary be used if you want to temporarily store objects that are not to be persisted/serialized across a network, or should you use some kind of namespacing on the keys to indicate this?
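
    On the persistence question, one common approach (an assumption added here, not from the post) is to serialize the dictionary into a single text column and hydrate it back, e.g. with Json.NET:

        using System.Collections.Generic;
        using Newtonsoft.Json;   // Json.NET, an assumed dependency

        class StorageDevice
        {
            public int Id { get; set; }
            public IDictionary<string, object> Properties { get; set; }
        }

        class StorageDeviceSerializer
        {
            // Persist Properties into one text column alongside Id...
            public string ToJson(StorageDevice device)
            {
                return JsonConvert.SerializeObject(device.Properties);
            }

            // ...and hydrate it back. Note the deserialization caveat the
            // question hints at: without type hints, numbers come back as
            // long/double and nested objects as JObject, so strongly typed
            // readers need a convention (or Json.NET's TypeNameHandling).
            public IDictionary<string, object> FromJson(string json)
            {
                return JsonConvert.DeserializeObject<Dictionary<string, object>>(json);
            }
        }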

    Read the article

  • PL/SQL to delete invalid data from token Strings

    - by Jie Chen
    A previous article describes how to delete duplicated values from a token string in bulk mode. This one extends it and shows a way to delete invalid data.

    Scenario: Suppose we have page_two and manufacturers tables in the database, with this DDL:

        SQL> desc page_two;
        Name                  NULL?    TYPE
        --------------------- -------- ------------------------
        MULTILIST04                    VARCHAR2(765)

        SQL> desc manufacturers;
        Name                  NULL?    TYPE
        --------------------- -------- ------
        ID                    NOT NULL NUMBER
        NAME                           VARCHAR

    In table page_two, column multilist04 stores a token string split by commas. Each token represents a valid ID in the manufacturers table. My expectation is to delete invalid tokens from page_two.multilist04 that have no matching id in manufacturers.id. For example, in the value

        ,6295728,33,6295729,6295730,6295731,22,

    the tokens 33 and 22 are invalid data, because there is no ID equal to 33 or 22 in the manufacturers table. So I need to delete 33 and 22.

        SQL> col rowid format a20;
        SQL> col multilist04 format a50;
        SQL> select rowid, multilist04 from page_two;

        ROWID                MULTILIST04
        -------------------- --------------------------------------------------
        AAB+UrADfAAAAhUAAI   ,6295728,6295729,6295730,6295731,
        AAB+UrADfAAAAhUAAJ   ,1111,6295728,6295729,6295730,6295731,
        AAB+UrADfAAAAhUAAK   ,6295728,111,6295729,6295730,6295731,
        AAB+UrADfAAAAhUAAL   ,6295728,6295729,6295730,6295731,22,
        AAB+UrADfAAAAhUAAM   ,6295728,33,6295729,6295730,6295731,22,

        SQL> select id, encode_name from manufacturers where id in (1111,11,22,33);

        No rows selected

    Solution: As there is no built-in SPLIT function in PL/SQL, I program it myself. I code an intermediate Split function that returns the token value between the current delimiter and the next one. The main program then gets each multilist04 value from page_two via a cursor, uses the Split function to extract each token into the singValue variable, and checks whether it exists in manufacturers.id. If it is not found, it sets fixFlag to 1, marking the token as pending deletion.
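
    The actual PL/SQL body is elided in this excerpt. As a language-neutral illustration of the algorithm it describes (split on commas, validate each token against the id set, rebuild the string), here is the same cleanup logic sketched in C#; the in-memory HashSet stands in for the lookup against manufacturers.id.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class TokenCleaner
        {
            // Rebuilds a ",a,b,c," style token string, dropping tokens whose
            // id has no match in the valid set.
            public static string RemoveInvalid(string multilist, ISet<int> validIds)
            {
                var kept = multilist
                    .Split(new[] { ',' }, StringSplitOptions.RemoveEmptyEntries)
                    .Where(tok => int.TryParse(tok.Trim(), out var id) && validIds.Contains(id));
                return "," + string.Join(",", kept) + ",";
            }

            static void Main()
            {
                var valid = new HashSet<int> { 6295728, 6295729, 6295730, 6295731 };
                Console.WriteLine(RemoveInvalid(",6295728,33,6295729,6295730,6295731,22,", valid));
                // prints: ,6295728,6295729,6295730,6295731,
            }
        }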

    Read the article

  • Parameterized StreamInsight Queries

    - by Roman Schindlauer
    The changes in our APIs enable a set of scenarios that were either not possible before or could only be achieved through workarounds. One such use case that people ask about frequently is the ability to parameterize a query and instantiate it with different values instead of re-deploying the entire statement. I'll demonstrate how to do this in StreamInsight 2.1 and combine it with a method of using subjects for dynamic query composition in a mini-series of (at least) two blog articles. Let's start with something really simple: I want to deploy a windowed aggregate to a StreamInsight server and later use it with different window sizes. The LINQ statement for such an aggregate is very straightforward and familiar:

        var result = from win in stream.TumblingWindow(TimeSpan.FromSeconds(5))
                     select win.Avg(e => e.Value);

    Obviously, we had to use an existing input stream object as well as a concrete TimeSpan value. If we want to be able to re-use this construct, we can define it as an IQStreamable:

        var avg = myApp
            .DefineStreamable((IQStreamable<SourcePayload> s, TimeSpan w) =>
                from win in s.TumblingWindow(w)
                select win.Avg(e => e.Value));

    The DefineStreamable API lets us define a function, in our case from an IQStreamable (the input stream) and a TimeSpan (the window length) to an IQStreamable (the result). We can then use it like a function, with the input stream and the window length as parameters:

        var result = avg(stream, TimeSpan.FromSeconds(5));

    Nice, but you might ask: what does this save me, apart from writing my own extension method? Well, in addition to defining the IQStreamable function, you can actually deploy it to the server, to make it re-usable by another process! When we deploy an artifact in V2.1, we give it a name:

        var avg = myApp
            .DefineStreamable((IQStreamable<SourcePayload> s, TimeSpan w) =>
                from win in s.TumblingWindow(w)
                select win.Avg(e => e.Value))
            .Deploy("AverageQuery");

    When connected to the same server, we can now use that name to retrieve the IQStreamable and use it with our own parameters:

        var averageQuery = myApp
            .GetStreamable<IQStreamable<SourcePayload>, TimeSpan, double>("AverageQuery");
        var result = averageQuery(stream, TimeSpan.FromSeconds(5));

    Convenient, isn't it? Keep in mind that, even though the function "AverageQuery" is deployed to the server, its logic will still be instantiated into each process when the process is created. The advantage here is being able to deploy that function, so another client who wants to use it doesn't need to ask the author for the code or assembly, but just needs to know the name of the deployed entity. A few words on the function signature of GetStreamable: the last type parameter (here: double) is the payload type of the result, not the result stream's type itself. The returned object is a function from IQStreamable<SourcePayload> and TimeSpan to IQStreamable<double>. In the next article we will integrate this usage of IQStreamables with subjects in StreamInsight, so stay tuned! Regards, The StreamInsight Team

    Read the article

  • WebCenter Content Web Search Performance: Do you really need that folder path info?

    - by Nicolas Montoya
    End-users want content at their fingertips, at the speed of thought if possible, and when running search operations in the WebCenter Content web interface, every second (or fraction of a second) of improvement matters. While doing some trace analysis with systemdatabase tracing on a customer environment, we came across SQL queries that were being triggered unnecessarily! These were related to determining the folder path for every entry in the search result set. However, this folder path was not even being used as part of the information displayed in the user interface. Why was the folder path information being collected when it was not even displayed in the UI? We found that the configuration parameter 'FolderPathInSearchResults' was set to 'true' under Administration > Admin Server > General Configuration > Additional Configuration Variables. When executing a quicksearch by keyword, we were getting 100 out of 2280 entries in the first page of the result set. With the 'FolderPathInSearchResults' configuration parameter set to 'true', the following queries appear in the systemdatabase tracing: 100 executions of a query on the FolderFiles table, one for each of the documents displayed in the first page:

        >systemdatabase/6  12.13 11:17:48.188  IdcServer-199  1.45 ms.
        SELECT * FROM FolderFiles WHERE dDocName='SLC02VGVUSORAC140641' AND fLinkRank=0
        [Executed. Returned row(s): true]

    and 382 executions of a query on the folders tables - most of the documents that match the keyword criteria are at a folder depth level of three or four:

        >systemdatabase/6  12.13 11:17:48.114  IdcServer-199  2.57 ms.
        SELECT FolderFolders.*,FolderMetaDefaults.*
        FROM FolderFolders,FolderMetaDefaults
        WHERE FolderFolders.fFolderGUID=FolderMetaDefaults.fFolderGUID(+)
        AND((FolderFolders.fFolderGUID = '1EB8E527E19B09ED3FE82EE310AEA13A'))
        [Executed. Returned row(s): true]

    By setting the 'FolderPathInSearchResults' configuration parameter to 'false', the above queries were no longer reported in the Server Output System Audit Information. Now, let's consider a practical scenario:
    - Search result page size: 100
    - Average folder depth per document in the search result set: 5
    The number of folder-path-related queries will be 100 + 5*100 = 600. If each query takes slightly over 3 ms, you would have about 2,000 ms (2 seconds) spent in server time just to get this information. The overall performance impact goes beyond server execution time, as this information also needs to travel from the server to the browser. If the documents are nested further into the folder hierarchy, additional hundreds of queries may be executed. If the folder path is not being displayed in the end-user interface profile, your system may be better off with the 'FolderPathInSearchResults' configuration parameter disabled.

    Read the article

  • PHP-FPM stops responding and dies [migrated]

    - by user12361
    I'm running Drupal 6 with Nginx 1.5.1 and PHP-FPM (PHP 5.3.26) on a 1GB single-core VPS with 3GB of swap space on SSD storage. I just switched from shared hosting to this unmanaged VPS because my site was getting too heavy, so I'm still learning the ropes. I have moderately high traffic; I don't really monitor it closely, but Google AdSense usually records close to 30K page views/day. I usually have 50 to 80 authenticated users logged in and a few hundred more anonymous users hitting the Boost static HTML cache at any given moment. The problem I'm having is that PHP-FPM frequently stops responding, resulting in Nginx 502 or 504 errors. I swear I have read every page on the internet about this issue, which seems fairly common, and I've tried endless combinations of configurations, and I can't find a good solution. After restarting Nginx and PHP-FPM, the site runs really fast for a while, and then without warning it simply stops responding. I get a white screen while the browser waits on the server, and after about 30 seconds to a minute it throws an Nginx 502 or 504 error. Sometimes it runs well for 2 minutes, sometimes 5 minutes, sometimes 5 hours, but it always ends up hanging. When I find the server in this state, there is still plenty of free memory (500MB or more) and no major CPU usage, the control and worker PHP-FPM processes are still present, and the server is still pingable and usable via SSH. A reload of PHP-FPM via the init script revives it again. The hangups don't seem to correspond to the amount of traffic, because I observed this behavior consistently when I was testing this configuration on a development VPS with no traffic at all. I've been constantly tweaking the settings, but I can't definitively eliminate the problem. I set Nginx workers to just 1. In the PHP-FPM config I have tried all three of the process managers. "dynamic" is definitely the least reliable, consistently hanging up after only a few minutes. "static" has also been unreliable and unpredictable. The least buggy has been "ondemand", but even that is failing me, sometimes after as much as 12 to 24 hours. And I can't leave the server unattended, because PHP-FPM dies and never comes back on its own. I tried adjusting the pm.max_children value from as low as 3 to as high as 50; it doesn't make a lot of difference, but I currently have it at 10. Same thing for the spare-servers values. I have also set pm.max_requests anywhere from 30 to unlimited, and it doesn't seem to make a difference. According to the logs, the PHP-FPM processes are not exiting with SIGSEGV or SIGBUS, but rather with SIGTERM. I get a lot of lines like:

        WARNING: [pool www] child 3739, script '/var/www/drupal6/index.php' (request: "GET /index.php") execution timed out (38.739494 sec), terminating

    and:

        WARNING: [pool www] child 3738 exited on signal 15 (SIGTERM) after 50.004380 seconds from start

    I actually found several articles that recommend doing a graceful reload of PHP-FPM via cron every few minutes or hours to circumvent this issue. So that's what I did: "/etc/init.d/php-fpm reload" every 5 minutes. So far, it's keeping the lights on, but it feels like a dreadful hack. Is PHP-FPM really that unreliable? Is there anything else I can do? Thanks a lot!
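
    For reference (an added note, not part of the original question): the settings being discussed live in the PHP-FPM pool configuration, typically /etc/php-fpm.d/www.conf. A sketch of the combination described above, with values taken from the post where given and the rest as placeholders to tune:

        ; /etc/php-fpm.d/www.conf -- illustrative only
        [www]
        pm = ondemand                   ; the least buggy of the three managers, per the post
        pm.max_children = 10            ; values from 3 to 50 were tried
        pm.process_idle_timeout = 10s   ; ondemand only: reap idle workers
        pm.max_requests = 500           ; recycle workers periodically (0 = unlimited)
        ; the ~38s "execution timed out ... terminating" warnings quoted above
        ; are what request_terminate_timeout produces when a request runs long:
        request_terminate_timeout = 40s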

    Read the article

  • Cream of the Crop

    - by KemButller
    JD Edwards has been working hard to ensure that you shouldn't have to work so hard! Yet there are still JD Edwards customers that may not be up to speed on all the new and improved tools and utilities we have delivered, all designed to make your life easier. So today, I want to share what I consider to be the cream of the crop ... those items that every customer should know about and leverage to make ERP life just a little bit (or A LOT) easier! These are my top picks, the cream of a very good crop! Explore and enjoy, and gain some of your time back to do with as you please.

· www.runjde.com - It's where to go when you need to know! www.runjde.com provides comprehensive Resource Kits (guides) organized by user type. The guides provide brief descriptions of the wide array of resources available to the JD Edwards ecosystem, with links to each of those resources.

· My Oracle Support (MOS) Information Centers - This link takes you to an index designed to provide simple and quick navigation to the available EnterpriseOne Information Centers. The index links to EnterpriseOne application-specific Information Centers, EnterpriseOne Tools and Technology Information Centers, the EnterpriseOne Performance Information Center, and the EnterpriseOne 9.1 and 9.0 Information Centers. Information Centers give Oracle the ability to aggregate content for a given focus area and present it in categories for easy browsing. They offer a variety of focused, dynamic content organized around one or more of the following tasks: Overview, Use, Troubleshooting, Patching and Maintenance, Install and Configure, Upgrade, Optimize Performance, Security, and Certify.

· JD Edwards Newsletters - Be in the know by reading the Global Customer Support product newsletters. They are PACKED with news and information covering a wide range of topics. A must-read if you want to know what's happening in the JD Edwards universe! Read the latest EnterpriseOne newsletter, read the latest World newsletter, and learn how to receive notification when a new newsletter edition is published.

· Oracle Learning Library (OLL) - The Oracle Learning Library is the place to go for easy access to JD Edwards application and tools training. For a comprehensive view of the training available for a specific product or functional area, explore the Knowledge Paths. For net-change (new feature) training, explore the TOI sessions (TOI stands for Transfer Of Information). Tip: be sure to experiment with the search filters!

· www.upgradejde.com - The site designed to help customers and partners with the process of upgrading JD Edwards. The site is a wealth of information, tools, and resources designed to assist in the evaluation, planning, and execution steps of an upgrade. Of note is the wildly successful upgrade strategy known as "The Art of the Possible", wherein JD Edwards and many of our partners hold free workshops to teach customers how to conduct upgrades in 100 days or less. Equally important, on www.upgradejde.com customers can gain visibility into planned enhancements using the Product and Technology Feature Catalogs. The catalogs are great for creating customer-specific reports about the net change between older releases and current or planned releases. Examples of other key resources on www.upgradejde.com are the product database changes between releases, extensibility guides (formerly known as programmer's guides), whitepapers, ROI calculators, and much more!

    Read the article

  • What books would I recommend?

    - by user12277104
    One of my mentees (I have three right now) said he had some time on his hands this Summer and was looking for good UX books to read ... I sigh heavily, because there is no shortage of good UX books to read. My bookshelves have titles by well-read authors like Nielsen, Norman, Tufte, Dumas, Krug, Gladwell, Pink, Csikszentmihalyi, and Roam. I have titles by lesser-known authors, many of whom I call friends, and many others whom I'll likely never meet. I have books on Excel pivot tables, typography, mental models, culture, accessibility, surveys, checklists, prototyping, Agile, Java, sketching, project management, HTML, negotiation, statistics, user research methods, six sigma, usability guidelines, dashboards, the effects of aging on cognition, UI design, and learning styles, among others ... many others. So I feel the need to qualify any book recommendations with "it depends ...", because it depends on who I'm talking to and what they are looking for. It's probably best that I also mention that the views expressed in this blog are mine, and may not necessarily reflect the views of Oracle. There. I'm glad I got that off my chest.

For that mentee, who will be graduating with his MS HFID + MBA from Bentley in the Fall, I'll recommend this book: Universal Principles of Design. This is a great book, which in its first edition held "100 ways to enhance usability, influence perception, increase appeal, make better design decisions, and teach through design." Granted, the second edition expanded that number to 125, but when I first found this book, I felt like I'd discovered the Grail. Its research-based principles are all laid out in 2 pages each, with lots of pictures and good references. A must-have for the new grad.

Do I have recommendations for a book that will teach you how to conduct a usability test? Yes, three of them. To communicate what we do to management? Yes. To create personas? Yep, two or three. Help you with UX in an Agile environment? You bet, I've got two I'd recommend. Create an excellent presentation? Uh huh. Get buy-in from your team? Of course. There are a plethora of excellent UX books out there. But which ones I recommend ... well ... it depends.

    Read the article

  • Why can't I install Microsoft Office 2007 in Ubuntu 11.04?

    - by DK new
    I am very new to Ubuntu and only just getting the hang of it, and my questions might sound stupid, especially because I am a learner in terms of techie things as well. Because of the nature of my work, where everyone uses stupid Windows and Microsoft, I need access to MS Office 2007/2010, as documents with too many tables or images open all haywire in LibreOffice (which has otherwise been great!). I have been reading up about installing MS Office through WINE/PlayOnLinux, but have been unsuccessful so far.

I downloaded an MS Office 2007 package from Pirate Bay, which I extracted into a folder. I tried numerous different ways to install it through WINE and PlayOnLinux, but will discuss the one that seems to be getting me somewhere: http://www.webupd8.org/2011/01/how-to-install-microsoft-office-2007-in.html

Initially, when I clicked on the install button of MS Office, I got a message saying "The install location you selected does not have 1558MB free space. Free up space from the selected install location or choose a different install location". The install location in this case said "C:\Program Files\Microsoft Office", which confused me, as I don't have drives named C, Z, etc. I went to configure WINE and, under the drives tab, created a drive named A with the path /media/cd025f16-433b-4a90-abb6-bb7a025d0450/. The space thing is also confusing, as I have at least 450GB of unused space on my computer.

Anyway, when I selected the A drive for installation, the installation starts, but soon I get the following error message: "Office cannot find Office.en-us\OfficeLR.Cab. Browse to a valid installation source". The part after "Office" has said different things ("OfficeLR.Cab" this time) on every attempt I have made. When I select the Office.en-us sub-folder, or any other folder within the folder where MS Office 2007 is saved, it says "invalid source"!

I have been trying to get this sorted for 15 hrs now (addictive!) and have learnt loads of things in the process, but have not managed to crack it. It might be something stupidly simple that I am not aware of that is stopping it. I would really appreciate some help! Thanks a lot. Also, I am still getting used to the language, so I might have many questions. I am using Ubuntu 11.04 (tag 11.04).

Also, I think I don't have Windows: when my friend installed Ubuntu on my new laptop, which had Windows 7, he was trying to keep Windows in a separate partition, but something happened and Windows was not there! Looking forward to some support! Again, thanks a lot.
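For readers following along, a minimal sketch of the command-line equivalent of the steps described above (the prefix and installer paths are illustrative assumptions; using a dedicated WINEPREFIX keeps the Office install isolated from other Windows programs):

# create and configure a dedicated prefix (drive mappings live under the Drives tab)
WINEPREFIX=~/.wine-office winecfg

# run the Office 2007 installer from the extracted folder
WINEPREFIX=~/.wine-office wine ~/office2007/setup.exe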

    Read the article

  • What HTML and CSS markup is best for SEO for a list of questions (like on Stack Exchange sites)

    - by Oleg9
    On StackOverflow, a question block (in the question list on the index page and so on) is represented by the following HTML code:

<div class="question-summary narrow tagged-interesting" id="question-summary-19832613">
  <div onclick="window.location.href='/questions/19832613/how-to-display-only-transit-routesfor-trains-in-google-maps-api'" class="cp">
    <div class="votes">
      <div class="mini-counts">0</div>
      <div>votes</div>
    </div>
    <div class="status unanswered">
      <div class="mini-counts">0</div>
      <div>answers</div>
    </div>
    <div class="views">
      <div class="mini-counts">3</div>
      <div>views</div>
    </div>
  </div>
  <div class="summary">
    <h3>...</h3>
    <div class="tags t-javascript t-google-maps t-google t-google-maps-api-3">
    </div>
    <div class="started">
      <a href="/questions/19832613/how-to-display-only-transit-routesfor-trains-in-google-maps-api" class="started-link"><span title="2013-11-07 09:52:29Z" class="relativetime">1 min ago</span></a>
      <a href="/users/1309392/shirish">Shirish</a>
      <span class="reputation-score" title="reputation score " dir="ltr">189</span>
    </div>
  </div>
</div>

It uses float positioning. My question is: would using CSS-styled tables be a better choice? (It is a table, isn't it?) Or does it just depend on what you prefer to use, with no effect on the technical side (search engines or the like)? The background information (such as the number of views, votes, etc.) comes first in the code, and I know that search engines place a limit on how much of each page they view. So would it be better to place the divs in order of importance and then position them on the page using CSS methods (like negative margins and absolute positioning)? Or isn't that important in this instance?
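As a point of comparison, here is a minimal sketch of the CSS-table alternative the question asks about, reusing the class names from the snippet above (untested and purely illustrative; it changes the layout method only, not the markup or its source order):

.question-summary { display: table; width: 100%; }
.question-summary .cp,
.question-summary .summary { display: table-cell; vertical-align: top; }
.question-summary .cp { width: 20%; }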

    Read the article

  • "The connection has timed out" - Please help!

    - by gon
    I recently installed a fresh Ubuntu 12.04 LTS on a desktop, and the installation itself was successful (other than a 'grub rescue' issue that I encountered but fixed), but this connection problem is really giving me a headache.

Symptoms:
1. When I open the Firefox browser and try to connect to a website, it just hangs for a while saying "Connecting..." but eventually loads an error page, "The connection has timed out".
2. It's not a browser problem (and I tried setting the ipv6 thing to "true" at about:config), because running "sudo apt-get install [some-random-package]" at the terminal fails too ("E: Unable to locate package [package]"). All other operations that need internet access are not working.
3. I certainly see a wired network (called "eth1") in the Network Manager, and it says "Connection Established" after disconnecting and then connecting again.

I have tried almost everything that could be found from Google search results; still no luck. Their problems slightly differ from mine, or the solutions just don't work. By the way, it didn't have internet access when installing Ubuntu 12.04 (I ignored the message that I need internet to install Ubuntu). Could this be a problem? I'm sorry, I don't remember if the internet worked or not on the previous version of Ubuntu. :( I would really appreciate your help... I don't even know what more to do if this fails too. Thanks!!

Thanks for your comment. Here is the result of ifconfig:

eth0      Link encap:Ethernet  HWaddr 78:ac:c0:3d:b2:b9
          inet addr:10.10.65.185  Bcast:10.10.65.255  Mask:255.255.255.0
          inet6 addr: fe80::7aac:c0ff:fe3d:b2b9/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:3907 errors:0 dropped:0 overruns:0 frame:0
          TX packets:771 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:393118 (393.1 KB)  TX bytes:73472 (73.4 KB)
          Interrupt:16

eth1      Link encap:Ethernet  HWaddr 78:ac:c0:3d:b2:b8
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
          Interrupt:17

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:4 errors:0 dropped:0 overruns:0 frame:0
          TX packets:4 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:204 (204.0 B)  TX bytes:204 (204.0 B)

route -n:

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.10.65.1      0.0.0.0         UG    0      0        0 eth0
10.10.65.0      0.0.0.0         255.255.255.0   U     1      0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1000   0        0 eth0

/etc/resolv.conf:

# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 8.8.8.8
nameserver 8.8.4.4
nameserver 10.81.1.8
nameserver 10.1.2.10
nameserver 127.0.0.1
search yamatake.local

/etc/network/interfaces:

auto lo
iface lo inet loopback
#auto eth0
#iface eth0 inet dhcp
#auto eth1
#iface eth1 inet dhcp

And I'll also include the result of 'sudo lshw -C network' in case it might help:

  *-network
       description: Ethernet interface
       product: NetXtreme BCM5764M Gigabit Ethernet PCIe
       vendor: Broadcom Corporation
       physical id: 0
       bus info: pci@0000:02:00.0
       logical name: eth0
       version: 10
       serial: 78:ac:c0:3d:b2:b9
       size: 100Mbit/s
       capacity: 1Gbit/s
       width: 64 bits
       clock: 33MHz
       capabilities: pm vpd msi pciexpress bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
       configuration: autonegotiation=on broadcast=yes driver=tg3 driverversion=3.121 duplex=full firmware=5764m-v3.35 ip=10.10.65.185 latency=0 link=yes multicast=yes port=twisted pair speed=100Mbit/s
       resources: irq:93 memory:fc000000-fc00ffff
  *-network
       description: Ethernet interface
       product: NetXtreme BCM5764M Gigabit Ethernet PCIe
       vendor: Broadcom Corporation
       physical id: 0
       bus info: pci@0000:01:00.0
       logical name: eth1
       version: 10
       serial: 78:ac:c0:3d:b2:b8
       size: 100Mbit/s
       capacity: 1Gbit/s
       width: 64 bits
       clock: 33MHz
       capabilities: pm vpd msi pciexpress bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
       configuration: autonegotiation=on broadcast=yes driver=tg3 driverversion=3.121 duplex=full firmware=5764m-v3.35 latency=0 link=no multicast=yes port=twisted pair speed=100Mbit/s
       resources: irq:94 memory:fb000000-fb00ffff
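As a side note for readers, here is a minimal sketch of the same interfaces file with the classic ifupdown DHCP stanza re-enabled for eth0 (an assumption for illustration only; whether ifupdown or Network Manager should own the interface depends on the setup, which the question leaves open):

# /etc/network/interfaces -- illustrative; the dhcp stanzas above are commented out
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp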

    Read the article

< Previous Page | 466 467 468 469 470 471 472 473 474 475 476 477  | Next Page >