Search Results

Search found 9035 results on 362 pages for 'common misunderstandings'.

  • Source of (programmer) inefficiency

    - by Daniel
    I am interested in gaining better insight into the possible causes of personal inefficiency as programmers (and only in programming) due, simply, to our own errors (because we are human, well, almost all of us). I am not interested in how productive we are or in how many adjustments the customer asks for when the work is done, but in where and how each of us spends part of our time on unproductive tasks where there is no one to blame but ourselves. Excluding ego-feeding and/or self-gratification, what I am trying to establish (for all of us) is:

    - what the common issues eating our time are;
    - insight into the reasons for those issues;
    - simple ways for each of us, personally (not delegating actions to others or to our organizations), to correct our own problems.

    Please do not think in academic terms, but aim at the opportunity to compare our daily experiences and understand what our personal deficiencies are and how we try to fix them. If you are interested in responding to this post, please:

    - extend the list if you see something important (or obvious) missing;
    - honestly highlight or name your own first issue, telling how you try to address and solve it by acting on yourself, and yourself only, in a sort of continuous quality improvement.

    My criterion for accepting an answer: the best solution (in feasibility and utility) for fixing one (or more) of the problems on the list. Of course, selecting an error is not a vote on our skills: maybe we are hyper-professional programmers who lose only ten minutes a year, or maybe we are terribly inefficient and lose a couple of days a week; the reasons for the inefficiency could well be the same, just on a different scale.

    A possible list:

    - Plain errors in names (variables, functions).
    - Inability to see the obvious in your code.
    - Misreading.
    - Lack of concentration.
    - Trying to use a technology you have not mastered.
    - Errors with data types.
    - Time required to understand your own previous code or documentation.
    - Trying to do more than requested because you enjoy it.
    - Using solutions more complicated than required because you enjoy it.
    - Plain logical errors.
    - Errors due to your own faults in communication.
    - Distraction.

    My first personal issue: "Trying to use a technology you have not mastered." I have to use several technologies daily, and I often need to spend significant time correcting code because my assumptions were plainly wrong. The reason: production needs put on high pressure and make it difficult to find time to learn. I try to address this by reading technical books, as many as I can, even though this itself consumes a lot of time.

  • JWT Token Security with Fusion Sales Cloud

    - by asantaga
    When integrating Sales Cloud with a 3rd-party application, you often need to pass the user's identity to the 3rd-party application so that:

    - the 3rd-party application knows who the user is, and
    - the 3rd-party application can make web service callbacks to Sales Cloud as that user.

    Until recently, without using SAML this wasn't easily possible, and one workaround was to pass the username, potentially even the password, from Sales Cloud to the 3rd-party application using URL parameters. With Oracle Fusion R8 we now have a proper solution, called "JWT Token support". This is based on the industry-standard JSON Web Token; for more information see here.

    JWT works by allowing the user to generate a token (valid for a short period of time) for a specific application. This token is then passed to the 3rd-party application as a GET parameter. The 3rd-party application can then call into Sales Cloud using this token for all web service calls, and the calls will be executed as the user who generated the token in the first place; or it can call a special HR web service (UserService: findSelfUserDetails()) with the token, and Fusion will respond with the user's details.

    Some more details

    The following walks through the scenario where you want to embed a 3rd-party application within a WebContent frame (iFrame) in the opportunity screen.

    1. Define your application using the topology manager in Setup and Maintenance. See this documentation link on topology manager.

    2. From within the groovy script which defines the iFrame you wish to embed, write some code which looks like this:

    def thirdpartyapplicationurl = oracle.topologyManager.client.deployedInfo.DeployedInfoProvider.getEndPoint("My3rdPartyApplication")
    def crmkey = (new oracle.apps.fnd.applcore.common.SecuredTokenBean().getTrustToken())
    def url = thirdpartyapplicationurl + "param1=" + OptyId + "&jwt=" + crmkey
    return (url)

    This snippet generates a URL which contains:

    - the hostname/endpoint of the 3rd-party application
    - two parameters: the opportunity ID, stored in parameter "param1", and the JWT token, stored in parameter "jwt"

    3. From your 3rd-party application you now have two options:

    - Execute a web service call, first setting the header parameter "Authentication" to the JWT token. The web service call will be executed against Fusion Applications as the user who generated the token.
    - To find out "who you are", set the same "Authentication" header parameter and execute the special web service call findSelfUserDetails() in the UserDetailsService.

    For more information: the Oracle Sales Cloud documentation (specific chapter on JWT tokens), the OTN samples (specifically the Rich UI With JWT Token sample), and the general Oracle Fusion Applications documentation.
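    To make step 3 concrete, here is a minimal sketch of the 3rd-party side, written in C# purely for illustration. The endpoint URL is a placeholder, and the header name "Authentication" is taken from the description above; verify both against the Sales Cloud documentation for your release:

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    // Sketch of the 3rd-party callback: present the JWT (received as the
    // "jwt" GET parameter) when calling back into Sales Cloud.
    class JwtCallback
    {
        static async Task Main(string[] args)
        {
            string jwt = args[0]; // token received as the "jwt" URL parameter

            using var client = new HttpClient();
            // The post names the header "Authentication"; check the exact
            // header name and token format against the official docs.
            client.DefaultRequestHeaders.Add("Authentication", jwt);

            // Placeholder for the findSelfUserDetails() call described above;
            // in practice this is a SOAP request to the UserDetailsService
            // endpoint of your own Sales Cloud instance.
            var response = await client.GetAsync(
                "https://yourpod.crm.example.com/UserDetailsService");
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }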

  • Economic modelling - Resources for valuing goods

    - by Rushyo
    tl;dr: What economics/computer science books would you suggest for learning about the economic valuation of goods and simulations thereof?

    I'm looking to create an economic model for a game based on procedurally created goods. Every natural resource and produced good would be procedurally generated, with certain goods assigned certain uses. Fakesium might be used for the production of Weapon A and produced in Fakesium factories which use Dilithium and Widgets as reagents, where Widgets are themselves the product of Foo and Bar.

    The problem is not creating the resources and their various production utilities, but getting the game's AI empires and merchants to correctly value the goods according to their scarcity, utility and production costs. I need to create a simulation of goods which allows the various game factions to assign a common value denominator (credits) to each resource, depending on how much it's worth to that empire. I see the simulation being something like: "I have a high requirement for Weapon A. Since I don't have much Fakesium, which is needed for Weapon A, I must have a high demand for Fakesium. If I can acquire Fakesium, devalue it. If not, increase its value, and also increase demand for Dilithium and Widgets too."

    This is very naive, because it may be much, much cheaper for the empire to simply purchase Dilithium and Widgets directly rather than purchasing Fakesium, for example. Another wrinkle: two different resources might allow the creation of Weapon A (Fakesium and Lieron), so we'd need to consider that. I've been scratching my head over the problem and it keeps growing. By the time the player joins the world, I'd expect enough iterations of this process to have occurred that prices would have largely normalised, and it would then trigger only rarely, to compensate for major changes (e.g. if the player blows up the world's only Foo mine!).

    Could anyone suggest resources (books, largely) which outline this style of modelling, preferably in the context of simulations? Since this problem would rarely occur outside fantasy worlds, I figured this is probably the most likely place to find people who have encountered similar problems, and I'm sure there are people who know of good places for game developers to start looking at less specific economic theory too. Additionally, does anyone know of any developers with blogs whose games or research applications perform similar modelling?
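    For what it's worth, the iterative valuation loop described above has a standard name in economics: tâtonnement (iterative price adjustment toward general equilibrium); the game-facing literature tends to live under "agent-based computational economics". A minimal sketch of that loop, with all names and constants invented for illustration:

    using System;
    using System.Collections.Generic;

    // Naive price adjustment (tatonnement): each tick, goods in excess demand
    // get more expensive and goods in surplus get cheaper.
    class Market
    {
        public Dictionary<string, double> Prices = new Dictionary<string, double>
        {
            ["Fakesium"]  = 10.0,
            ["Dilithium"] = 4.0,
            ["Widgets"]   = 2.0
        };

        // 'demand' and 'supply' would come from the factions' production plans.
        public void Adjust(Dictionary<string, double> demand,
                           Dictionary<string, double> supply,
                           double rate = 0.1)
        {
            foreach (var good in new List<string>(Prices.Keys))
            {
                double excess = demand[good] - supply[good];
                // Scarce goods appreciate; abundant goods depreciate.
                Prices[good] *= 1.0 + rate * excess / Math.Max(supply[good], 1.0);
                Prices[good] = Math.Max(Prices[good], 0.01); // keep a price floor
            }
        }
    }

    Run Adjust once per game tick; after many pre-player iterations, prices settle near equilibrium, and a shock such as losing the only Foo mine simply shows up as a supply drop that the same rule re-prices on subsequent ticks.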

  • Unable to add users to Microsoft Dynamics CRM 4.0 after database restore

    - by Wes Weeks
    I was working with a client in our multi-tenant CRM environment who was doing a database migration into CRM. As part of the process, a backup of their Organization_MSCRM database was taken just prior to starting the migration, in case it needed to be restored and run a second time. In this case it did, so I restored the database and let the client know he should be good to go.

    A few hours later I received a call that they were unable to add some new users: the users would appear as available when using the add-multiple-users wizard, but anyone added would not actually be added to CRM. It also came up that these users had been added to CRM initially AFTER the database backup had been taken. I turned on tracing and tried to add the users through both the single-user form and the multiple-user interface and was unable to do so. The error message in the logs wasn't much help:

    Unexpected error adding user [email protected]: Microsoft.Crm.CrmException: INVALID_WRPC_TOKEN: Validate WRPC Token: WRPCTokenState=Invalid, TOKEN_EXPIRY=4320, IGNORE_TOKEN=False

    Searching on Google or Bing didn't offer any assistance. Apparently it's not a very common problem, or no one has been able to resolve it. I did some searching in the MSCRM_CONFIG database and found that there are several user tables there, and after getting my head around the structure I found that there were entries for users that were not part of the restored DB. It seems that new users are added to both Organization_MSCRM and MSCRM_CONFIG, and after the restore these were out of sync.

    I needed to remove the extra entries in order to fix this. Restoring the MSCRM_CONFIG database was not an option, as other clients could have been adding users at this point, and a restore would risk breaking their instances of CRM. Long story short, I was finally able to generate a script to remove the bad entries, and when I tried to add users again I was successful. In case someone else out there finds themselves in a similar situation, here is the script I used to delete the bad entries:

    DECLARE @UsersToDelete TABLE
    (
      UserId uniqueidentifier
    )

    INSERT INTO @UsersToDelete (UserId)
    SELECT UserId FROM [MSCRM_CONFIG].[dbo].[SystemUserOrganizations]
    WHERE CrmUserId NOT IN (SELECT SystemUserId FROM Organization_MSCRM.dbo.SystemUserBase)
    AND OrganizationId = '00000000-643F-E011-0000-0050568572A1' -- Id from the Organization table for this instance

    DELETE FROM [MSCRM_CONFIG].[dbo].[SystemUserAuthentication]
    WHERE UserId IN (SELECT UserId FROM @UsersToDelete)

    DELETE FROM [MSCRM_CONFIG].[dbo].[SystemUserOrganizations]
    WHERE UserId IN (SELECT UserId FROM @UsersToDelete)

    DELETE FROM [MSCRM_CONFIG].[dbo].[SystemUser]
    WHERE Id IN (SELECT UserId FROM @UsersToDelete)

  • Please help me give this principle a name

    - by Brent Arias
    As a designer, I like providing interfaces that cater to a power/simplicity balance. For example, I think the LINQ designers followed that principle because they offered both dot-notation and query-notation. The first is more powerful, but the second is easier to read and follow. If you disagree with my assessment of LINQ, please try to see my point anyway; LINQ was just an example, my post is not about LINQ.

    I call this principle "dial-able power". But I'd like to know what other people call it. Certainly some will say "KISS" is the common term. But I see KISS as a superset, or a "consumerism" practice. Using LINQ as my example again: in my view, a team of programmers who always try to use query notation over dot-notation are practicing KISS. Thus the LINQ designers practiced "dial-able power", whereas the LINQ consumers practice KISS. The two make beautiful music together.

    I'll give another example. Imagine a C# logging tool that has two signatures allowing two uses:

    void Write(string message);
    void Write(Func<string> messageCallback);

    The purpose of the two signatures is to fulfill these needs:

    // Every-day "simple" usage, nothing special.
    myLogger.Write("Something Happened" + error.ToString());

    // This is performance critical; do not call ToString() if logging is disabled.
    myLogger.Write(() => "Something Happened" + error.ToString());

    Having these overloads represents "dial-able power", because the consumer has the choice of a simple interface or a powerful one. A KISS-loving consumer will use the simpler signature most of the time, and will allow the "busy"-looking signature when the power is needed. This also helps self-documentation, because usage of the powerful signature tells the reader that the code is performance critical. If the logger had only the powerful signature, there would be no "dial-able power". So this comes full circle. I'm happy to keep my own "dial-able power" coinage if none yet exists, but I can't help thinking I'm missing an obvious designation for this practice.

    p.s. Another example that is related, but is not the same as "dial-able power", is Scott Meyers' principle "make interfaces easy to use correctly, and hard to use incorrectly."
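    For concreteness, here is a minimal sketch (the Logger class and its IsEnabled flag are invented for illustration) of how such a pair of overloads is typically implemented; the point of the "powerful" signature is that the callback runs only when logging is enabled, so an expensive ToString() is skipped otherwise:

    using System;

    class Logger
    {
        public bool IsEnabled { get; set; }

        // Simple signature: the message string is always built by the caller.
        public void Write(string message)
        {
            if (IsEnabled) Console.WriteLine(message);
        }

        // Powerful signature: the message is built only if logging is enabled,
        // so the cost of formatting is avoided when the logger is off.
        public void Write(Func<string> messageCallback)
        {
            if (IsEnabled) Console.WriteLine(messageCallback());
        }
    }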

  • links for 2011-01-06

    - by Bob Rhubart
    Coming to your town: Oracle Enterprise Cloud Summit
    During these full-day events, cloud experts will share real-world best practices, reference architectures, detailed customer case studies, and more. Events are scheduled in cities around the world. (tags: oracle otn cloud event)

    Webcast: Security and Compliance for Private Cloud Consolidation
    Roxana Bradescu, Senior Director for Oracle Database Security Products, discusses Oracle Database Security solutions to securely consolidate data and meet compliance requirements within private cloud computing environments. Thursday, January 13, 2011. 10am PST | 1pm EST (tags: oracle cloud security)

    Answering Questions about Mobile Devices | The AppsLab
    "How do the numbers of Android and iOS users compare? How often are people switching? Where are all these BlackBerry and Nokia users? Do they plan to jump to Android or iOS? What about webOS? Is it relevant?" Some answers in this AppsLab survey. (tags: oracle otn enterprise2.0 mobilecomputing iphone blackberry android)

    Webcast: Achieve 24/7 Cloud Availability Without Expensive Redundancy
    Ashish Ray and Matthew Baier discuss Oracle's Maximum Availability Architecture and Oracle Database 11g. (tags: oracle cloud highavailability webcast)

    Converting a PV vm back into an HVM vm (Wim Coekaerts Blog)
    "I wanted to convert one of my VMs that was based on a paravirt kernel into a vm that just boots as a regular hardware virt VM with a standard x86-64 kernel... It took me a little while to figure out the fastest way, so now that I have it pretty much down I wanted to share the steps." - Wim Coekaerts (tags: oracle otn virtualization oraclevm)

    @OTN_Garage: Resources for VirtualBox 4.0
    Rick "@OTN_Garage" Ramsey shares links to several resources for those with a VirtualBox jones. (tags: oracle otn virtualization virtualbox)

    'Federal Service Bus' Helps Belgian Government Speak a Common Language - SOA in Action Blog
    "The first SOA-enabled application was developed in less than two months and was fully operational in approximately 10 weeks. In addition, new FSB modules are reusable for other Belgian e-government applications, saving both time and taxpayer dollars." - Joe McKendrick (tags: soa oracle)

    Show Notes: Architects in the Cloud (ArchBeat Podcast)
    The complete 4-part interview with Stephen G. Bennett and Archie Reed, the authors of "Silver Clouds, Dark Linings: A Concise Guide to Cloud Computing," is now available. (tags: oracle otn cloud podcast archbeat)

  • World Record Oracle E-Business Consolidated Workload on SPARC T4-2

    - by Brian
    Oracle set a world record for the Oracle E-Business Suite Standard Medium multiple-online-module benchmark using Oracle's SPARC T4-2 and SPARC T4-4 servers, which ran the application and database tiers respectively. Oracle's SPARC T4 servers demonstrate performance leadership and world-record results on the Oracle E-Business Suite Applications R12 OLTP benchmark by publishing the first result using multiple concurrent online application modules, with Oracle Database 11g Release 2 running on Oracle Solaris.

    This result shows that a multi-tier configuration of SPARC T4 servers running the Oracle E-Business Suite R12.1.2 application and Oracle Database 11g Release 2 is capable of supporting 4,100 online users with outstanding response times, executing a mix of complex transactions consolidating four Oracle E-Business modules (iProcurement, Order Management, Customer Service and HR Self-Service).

    The SPARC T4-2 server in the application tier ran at about 65% utilization, and the SPARC T4-4 server in the database tier at about 30%, providing significant headroom for additional Oracle E-Business Suite R12.1.2 processing modules, more online users, and future growth. The Oracle E-Business Suite applications were run in Oracle Solaris Containers on SPARC T4 servers, which provide a consolidation platform for multiple E-Business instances.

    Performance Landscape

    Multiple Online Modules (Self-Service, Order Management, iProcurement, Customer Service), Medium Configuration:

        System       Users    Average Response Time    90th Percentile Response Time
        SPARC T4-2   4,100    2.08 sec                 2.52 sec

    Configuration Summary

    Application Tier Configuration:
    - 1 x SPARC T4-2 server
    - 2 x SPARC T4 processors, 2.85 GHz
    - 256 GB memory
    - 3 x 300 GB internal disks
    - Oracle Solaris 10
    - Oracle E-Business Suite 12.1.2

    Database Tier Configuration:
    - 1 x SPARC T4-4 server
    - 4 x SPARC T4 processors, 3.0 GHz
    - 256 GB memory
    - 2 x 300 GB internal disks
    - Oracle Solaris 10
    - Oracle Solaris Containers
    - Oracle Database 11g Release 2

    Storage Configuration:
    - 1 x Sun Storage F5100 Flash Array (80 x 24 GB flash modules)

    Benchmark Description

    The Oracle R12 E-Business Suite Standard Benchmark combines online transaction execution by simulated users with multiple online concurrent modules to model a typical scenario for a global enterprise. The online component exercises the common UI flows most frequently used by a majority of customers. This benchmark utilized four concurrent flows of OLTP transactions (Order to Cash, iProcurement, Customer Service and HR Self-Service) and measured the response times. The selected flows model simultaneous business activities, inclusive of managing customers, services, products and employees.

    See Also
    - Oracle R12 E-Business Suite Standard Benchmark Results
    - Oracle R12 E-Business Suite Standard Benchmark Overview
    - Oracle R12 E-Business Benchmark Description
    - E-Business Suite Applications R12 (R12.1.2) Online Benchmark - Using Oracle Database 11g on Oracle's SPARC T4-2 and SPARC T4-4 Servers (oracle.com)
    - SPARC T4-2 Server (oracle.com, OTN)
    - SPARC T4-4 Server (oracle.com, OTN)
    - Oracle E-Business Suite (oracle.com, OTN)
    - Oracle Solaris (oracle.com, OTN)
    - Oracle Database 11g Release 2 Enterprise Edition (oracle.com, OTN)

    Disclosure Statement

    Oracle E-Business Suite R12 medium multiple-online module benchmark: SPARC T4-2 (SPARC T4, 2.85 GHz, 2 chips, 16 cores, 128 threads, 256 GB memory) and SPARC T4-4 (SPARC T4, 3.0 GHz, 4 chips, 32 cores, 256 threads, 256 GB memory); average response time 2.08 sec, 90th percentile response time 2.52 sec; Oracle Solaris 10, Oracle Solaris Containers, Oracle E-Business Suite 12.1.2, Oracle Database 11g Release 2. Results as of 9/30/2012.

  • Tips On Using The Service Contracts Import Program

    - by LuciaC
    Prior to release 12.1 there was no supported way to import contracts into the EBS Service Contracts application: no public APIs nor contract load programs were provided. From release 12.1 onwards, the Service Contracts Import Program is provided to load service contracts into the application.

    The Service Contracts Import functionality is explained in How to Use the Service Contracts Import Program - Scope and Limitations (Doc ID 1057242.1). This note includes an attached document which explains the program architecture, shows the Entity Relationship Diagram, and details the interface table definitions.

    The Import program takes data from the interface tables listed below and populates the contracts schema tables:

    - OKS_USAGE_COUNTERS_INTERFACE
    - OKS_SALES_CREDITS_INTERFACE
    - OKS_NOTES_INTERFACE
    - OKS_LINES_INTERFACE
    - OKS_HEADERS_INTERFACE
    - OKS_COVERED_LEVELS_INTERFACE

    These interface tables must be loaded via a custom load program. The Service Contracts Import concurrent request is then submitted to create contracts from this legacy data. The parameters to run the Import program are:

        Parameter          Description
        Mode               Validate Only, or Import
        Batch Number       Batch_Id (unique id populated into the OKS_HEADERS_INTERFACE table)
        Number of Workers  Number of workers required (these are spawned as separate sub-requests)
        Commit Size        Number of successfully processed contracts committed to the database

    The program spawns sub-requests for the import worker(s) and for the Service Contracts Import Report. The data is validated prior to import into the contracts tables, and errors are reported in the Service Contracts Import Report program output file (Import Execution Report). Troubleshooting tips are provided in R12.1 - Common Service Contract Import Errors (Doc ID 762545.1); this document lists some, but not all, import errors, and will be updated over time. Additional help is given in Debugging Tip for Service Contracts Import Errors (Doc ID 971426.1).

    After you successfully import contracts, you can purge the records from the interface tables by running the Service Contracts Import Purge concurrent program. Note that there is no supported way to mass delete data from the contracts schema tables once they are populated, so data loaded by the Import program must be fully tested and verified before the program is run to load data into a production system.

    A Service Contracts Import Test program has been provided which takes an existing contract in the application and loads the interface tables using the data from that contract. This can be used as an example for guidance on how to load the interface tables. The Test program functionality is explained in How to Use the Service Contracts Test Import Program Provided in Release 12.1 (Doc ID 761209.1). Note that the Test program has some limitations which do not apply to the full Import program, and it is not a supported program; it is simply a testing tool.

  • Impulsioned jumping

    - by Mutoh
    There's one thing that has been puzzling me: how to implement a 'faux-impulse' jump in a platformer. If you don't know what I'm talking about, think of the jumps of Mario, Kirby, and Quote from Cave Story. What do they have in common? The height of your jump is determined by how long you keep the jump button pressed. These characters' 'impulses' are built not before their jump, as in actual physics, but while in mid-air: you can lift your finger midway to the max height and the ascent will stop (even with some deceleration between the release and the full stop), which is why you can simply tap for a hop and hold for a long jump. What mesmerizes me is how they keep their trajectories as arcs.

    My current implementation works as follows: while the jump button is pressed, gravity is turned off and the avatar's Y coordinate is decremented by the constant value of gravity. For example, if things fall at Z units per tick, it rises Z units per tick. Once the button is released or the limit is reached, the avatar decelerates by an amount that makes it cover X units before its speed reaches 0; once it does, it accelerates until its speed matches gravity, again covering X units.

    This implementation, however, makes jumps too diagonal, and unless the avatar's horizontal speed is faster than the gravity (which would make it way too fast in my current project: it moves at about 4 pixels per tick while gravity is 10 pixels per tick, at a framerate of 40 FPS), it also makes jumps more vertical than horizontal. Those familiar with platformers will notice that a character's arced jump almost always lets it jump further even when it isn't as fast as the game's gravity, and when it doesn't, if not played right, it proves very counter-intuitive. I know this because I can attest that my implementation is very annoying.

    Has anyone ever attempted similar mechanics, and maybe even succeeded? I'd like to know what's behind this kind of platformer jumping. If you haven't had any experience with this beforehand and want to give it a go, then please don't try to correct or enhance my implementation above, unless I was on the right track: try to build your solution from scratch. I don't care if you use gravity, physics or whatnot; as long as it shows how these pseudo-impulses work, it does the job. Also, I'd like the presentation to avoid language-specific code: as much as I'm using the XNA framework for my project and wouldn't mind C# stuff, I don't have much patience for reading others' code, and I'm certain game developers using other languages would be interested in what we achieve here, so don't mind sticking to pseudo-code. Thank you beforehand.
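    One common answer, sketched here in C#-flavored pseudo-code (the constants are invented, so treat the numbers as placeholders): keep gravity on at all times, give the avatar an upward velocity once at takeoff, and cut the remaining upward velocity when the button is released. Height then depends on how long the button was held, and the trajectory stays an arc because the vertical motion is an ordinary parabola that horizontal speed never interferes with:

    // Variable-height jump: gravity is never switched off; releasing the
    // button early cuts upward velocity, so a tap gives a hop and a hold
    // gives a full jump, while the path remains a parabolic arc.
    class Player
    {
        const float Gravity = 0.5f;       // units per tick^2 (invented)
        const float JumpSpeed = -10f;     // takeoff velocity (up is negative Y)
        const float ReleaseFactor = 0.4f; // upward speed kept after release

        float y, velocityY;
        bool onGround = true;

        public void Update(bool jumpPressed, bool jumpReleasedThisTick)
        {
            if (jumpPressed && onGround)
            {
                velocityY = JumpSpeed;    // the "impulse" happens once, at takeoff
                onGround = false;
            }

            // Cutting velocity on release is what ties jump height to how
            // long the button was held, with natural deceleration afterwards.
            if (jumpReleasedThisTick && velocityY < 0)
                velocityY *= ReleaseFactor;

            velocityY += Gravity;         // gravity always applies
            y += velocityY;

            if (y >= 0) { y = 0; velocityY = 0; onGround = true; } // ground at y = 0
        }
    }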

  • xsltproc killed, out of memory

    - by David Parks
    I'm trying to split up a 13 GB XML file into small ~50 MB XML files with the XSLT stylesheet below. But this process kills xsltproc after I see it taking up over 1.7 GB of memory (that's the total on the system). Is there any way to deal with huge XML files with xsltproc? Can I change my stylesheet? Or should I use a different processor? Or am I just S.O.L.?

    <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0"
        xmlns:exsl="http://exslt.org/common"
        extension-element-prefixes="exsl"
        xmlns:fn="http://www.w3.org/2005/xpath-functions">
      <xsl:output method="xml" indent="yes"/>
      <xsl:strip-space elements="*"/>
      <xsl:param name="block-size" select="75000"/>

      <xsl:template match="/">
        <xsl:copy>
          <xsl:apply-templates select="mysqldump/database/table_data/row[position() mod $block-size = 1]"/>
        </xsl:copy>
      </xsl:template>

      <xsl:template match="row">
        <exsl:document href="chunk-{position()}.xml">
          <add>
            <xsl:for-each select=". | following-sibling::row[position() &lt; $block-size]">
              <doc>
                <xsl:for-each select="field">
                  <field>
                    <xsl:attribute name="name"><xsl:value-of select="./@name"/></xsl:attribute>
                    <xsl:value-of select="."/>
                  </field>
                  <xsl:text>&#xa;</xsl:text>
                </xsl:for-each>
              </doc>
            </xsl:for-each>
          </add>
        </exsl:document>
      </xsl:template>
    </xsl:stylesheet>
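    On the "different processor" question: xsltproc (libxslt) is a non-streaming XSLT 1.0 processor, so it has to build the entire 13 GB document tree in memory. One alternative is to split the file with a streaming API and apply the per-row transformation to the chunks afterwards. A minimal sketch of that idea (file names invented; it assumes the mysqldump/database/table_data/row layout from the stylesheet above):

    using System.Xml;

    // Stream <row> elements out of a huge XML dump into fixed-size chunk
    // files without ever loading the whole document into memory.
    class XmlSplitter
    {
        static void Main()
        {
            const int blockSize = 75000;   // rows per chunk, as in the stylesheet
            int row = 0, chunk = 0;
            XmlWriter writer = null;

            using var reader = XmlReader.Create("dump.xml");
            while (reader.ReadToFollowing("row"))
            {
                if (row % blockSize == 0)
                {
                    writer?.WriteEndElement();  // close </add> of previous chunk
                    writer?.Close();
                    writer = XmlWriter.Create($"chunk-{++chunk}.xml",
                                              new XmlWriterSettings { Indent = true });
                    writer.WriteStartElement("add");
                }
                // Copy this <row> subtree verbatim into the current chunk;
                // the field-to-doc rewrite could be applied here per row.
                writer.WriteNode(reader.ReadSubtree(), false);
                row++;
            }
            writer?.WriteEndElement();
            writer?.Close();
        }
    }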

  • Oracle Database Optimizer Internals: Query Transformation and Access Path Analysis

    - by Yusuke.Yamamoto
    The optimizer is the component of an RDBMS that decides how each SQL statement will actually be executed, and questions about its behavior come up constantly. Oracle Database optimizes a statement in two broad phases, described below. [Most of the original Japanese prose is garbled in this copy; the outline, transformation names, and SQL examples below are what could be recovered.]

    1. Query Transformation

    Query Transformation rewrites the submitted SQL into an equivalent but more efficient form before execution. It covers Predicate Transformation as well as Common Sub-expression Elimination (CSE), Order-By Elimination (OBYE), Outer Join Elimination (OJE), Simple View Merging (SVM), Predicate Move Around (PM), Complex View Merging (CVM), Sub-query Unnesting (SU), Join Predicate Push Down (JPPD), OR Expansion, and Star Transformation (ST).

    One example of Predicate Transformation is Transitive Predicate Generation. Consider this query, which lists the employees in department 10:

    select e.ename, d.loc
    from emp e, dept d
    where e.deptno = d.deptno
    and e.deptno = 10;

    As written, only emp is filtered on deptno = 10, so joining the 10 matching emp rows against the 20 dept rows could mean examining 10 x 20 = 200 row combinations. Transitive Predicate Generation derives the equivalent filter on dept:

    select e.ename, d.loc
    from emp e, dept d
    where e.deptno = d.deptno
    and e.deptno = 10
    and d.deptno = 10;
        ^^^^^^^^^^^

    Because dept.deptno is unique, the join now examines only 10 x 1 = 10 combinations, roughly 1/20 of the work. (Note: with dept reduced to a single row, it can also serve as the driving (outer) table of a nested loop join, with emp as the probe (inner) table, again giving 1 x 10 = 10 combinations.)

    2. Access Path Analysis

    After Query Transformation, Access Path Analysis takes the transformed SQL and chooses the access path for each table (for example, full table scan or ROWID/index access), the join method (nested loop join, hash join, or sort/merge join), and the join order. Within Oracle Database, the component that performs Query Transformation is called the Logical Optimizer, and the component that performs Access Path Analysis is called the Physical Optimizer.

    [The closing portion of the post, an author introduction from the Sustaining Engineering team and links to related Oracle articles, is garbled beyond recovery in this copy.]

  • Great Expectations - Fusion HCM Highlights at OOW

    - by Kathryn Perry
    A guest post by Lisa Conley, Principal Product Strategy Manager, Fusion HCM, Oracle Applications Development

    Oracle OpenWorld is just around the corner! There's always so much to see and do and learn at the conference, so I want to share some of the 'don't miss' Fusion HCM highlights with you. (Use this tool to search by session number to get a full description.)

    For starters, we have several customers who will be sharing their Fusion HCM implementation stories. We'll kick off these presentations with a customer panel at 12:15 on Monday in Moscone West 2005 (CON9420). You'll hear from Zillow, the Gerson Lehrman Group, UBS, and ConAgra about their experiences with our products. Oracle partners MarketSphere (CON8581) and eVerge (CON3800) have implemented Fusion HCM themselves and will talk about how they'll use their experiences to help customers with their implementations (both are in Moscone West 2006). Beth Correa, CEO of Official Payroll Advisor, will highlight her favorite things about Oracle Fusion HCM Payroll on Tuesday at 11:45 in Moscone West 2006 (CON6691). And you'll get to hear from customers again when they speak with Steve Miranda in his session Oracle Applications: Strategic Directions and Recommendations on Tuesday at 1:15 in Moscone West 2002/2004 (CON11434). To bring it all together for you, we've listed all your Fusion HCM opportunities to learn and interact in this Focus On document.

    I am really looking forward to the sessions on Human Capital Management in the Cloud. The Oracle Cloud combines the multiple product offerings into a single environment that leverages a common technology infrastructure, enabling users to focus on their business, not the business of managing environments. On Tuesday at 10:15 in Moscone West 2002/2004 there is a general session entitled The Future of Oracle HCM -- Strategy and Roadmap (GEN9505), which will touch on all product lines. Fusion HCM will be highlighted in Gretchen Alarcon's session Oracle HCM: Overview, Strategy, Customer Experiences, and Roadmap on Monday at 12:15 in Moscone West 2005 (CON9410). Also, on Tuesday at 1:15 in Moscone West 2006, there is a session focused on Talent Management and how you can try out these new products, co-existing with your current product set (CON9430). This is important in that you can test the waters before diving in. ConAgra will be sharing their experience in this session as well.

    And of course, if you want a personal demonstration, please come by the Oracle DEMOgrounds in West Exhibition Hall Level 1, or the Oracle Cloud Services Lounge at Moscone West Level 3, where our Oracle HCM Cloud Services experts will be ready to answer your questions. I hope you have a wonderful week in San Francisco.

    Lisa Conley
    Principal Product Strategy Manager, Fusion HCM
    Applications Development
    Oracle Corporation

  • SQL SERVER – BI Quiz Hint – Performance Tuning Cubes – Hints

    - by pinaldave
    I earlier wrote about the SQL BI Quiz over here and here. The details of the quiz are as follows: Working with huge data volumes is very common in data warehousing. It is necessary to create cubes on the data to make it meaningful and consumable. There are cases where retrieving the data from a cube takes a lot of time. Let us assume that your cube has been returning data very quickly, and suddenly one day it starts returning data very slowly. What three things will you do to diagnose this? After diagnosing, what will you do to resolve the performance issue? Participate in my question over here.

    I asked BI expert Jason Thomas to help with a few hints for blog readers. He is one of the leading SSAS experts and writes about a complicated subject in simple words. If queries were executing properly before but now take a long time to return the data, it means that there has been a change in the environment in which they are running. Some possible changes are listed below:

    1) Data factors: Compare the data size then and now. An increase in data can result in different execution times. Poorly written queries, as well as poor design, will not start showing issues until the data grows. How do you find this out? (Answer: SQL Server Profiler and Perfmon counters can be used for identifying the issues and performance tuning the MDX queries.)

    2) Internal factors: Is some slow MDX query, or are multiple MDX queries, running at the same time which were not running when you tested before? Is there any locking happening due to proactive caching or processing operations? Are the measure group caches being cleared by processing operations? (Answer: Again, Profiler and Perfmon counters will help in finding this out. Load testing can be done using the AS Performance Workbench (http://asperfwb.codeplex.com/) by running multiple queries at once.)

    3) External factors: Is some other application competing for the same resources?

    HINT: Read "Identifying and Resolving MDX Query Performance Bottlenecks in SQL Server 2005 Analysis Services" (http://sqlcat.com/whitepapers/archive/2007/12/16/identifying-and-resolving-mdx-query-performance-bottlenecks-in-sql-server-2005-analysis-services.aspx).

    Well, these are great tips. Now win big prizes by participating in my question over here.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)
    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

  • Java Certification Exams and Their Move to Pearson VUE

    - by Harold Green
    You may be aware that Oracle recently migrated all Sun-branded certification exams from Prometric to Pearson VUE. Below are answers to some frequently asked questions that we've been getting recently:

    What changes to the exams should I be aware of?
    Only minor changes were made to the exams during the transition to Pearson VUE:

    - Renumbering of all exams to the Oracle exam numbering structure (i.e. 1Z0...).
    - Most exam score reports were enhanced to provide more detailed feedback. Score reports now list every exam objective for which a question (or questions) was answered incorrectly. The previous format provided only section-level performance feedback.
    - For three Java exams, some lengthy (time-consuming) questions were removed and replaced with shorter (less time-consuming) questions. This was done in order to shorten the required exam time (to 150 minutes).
    - Some interactive question types were removed from several Java and Solaris exams (including "matching" and "drag-and-drop" questions).
    - The passing scores (for the exams that were revised) were statistically adjusted to make them equal to their prior passing scores, thus ensuring that the exams maintained the same level of difficulty as before.

    The exam objectives and the exam questions themselves did not change. Candidates should study the same material and objectives.

    Are there also new testing practices I should be aware of?
    Oracle follows a common industry practice of placing occasional unscored questions on our certification exams. Candidates will not know which questions are unscored. At the time of this blog post, only one of the migrated exams (1Z0-898) contains unscored questions.

    I started the Master certification path through Prometric, and now I need to complete the requirements through Pearson VUE. Where can I get guidance on this process?
    Visit our Vendor Transition FAQs to find comprehensive instructions. Oracle has created several specific paths to accommodate candidates who were at varying stages of completion of their Master path when the transition occurred. Make sure to follow the specific path designed for your case, as you will need to know which exam number to select in order to submit/re-submit your requirements.

    QUICK LINKS
    - Oracle Certification Blog Post: Java, Oracle Solaris, MySQL and Other Former Sun Certification Exams Now Being Delivered At Pearson VUE
    - Oracle Certification Website: Vendor Transition Announcement

  • Altering a Column Which has a Default Constraint

    - by Dinesh Asanka
    Setting up a default column is a common task for developers. But are we naming those default constraints explicitly? In the table creation below, the column Sys_DateTime will be given the default value getdate():

    CREATE TABLE SampleTable
    (ID int identity(1,1),
     Sys_DateTime Datetime DEFAULT getdate()
    )

    We can check the relevant information in the system catalogs with the following query:

    SELECT sc.name ColumnName,
           dc.name DefaultName,
           dc.definition,
           OBJECT_NAME(dc.parent_object_id) TableName,
           dc.is_system_named
    FROM sys.default_constraints dc
    INNER JOIN sys.columns sc
      ON dc.parent_object_id = sc.object_id
     AND dc.parent_column_id = sc.column_id

    Most of the returned columns are self-explanatory. The last column, is_system_named, identifies whether the default's name was generated by the system. As you know, in the above case, since we didn't provide a name for the default, the system generates one for us. The problem with these names is that they can differ from environment to environment. For example, if I create this table in a different database, the default name could be DF__SampleTab__Sys_D__7E6CC920.

    Now let us create another default and explicitly name it:

    CREATE TABLE SampleTable2
    (ID int identity(1,1),
     Sys_DateTime Datetime
    )

    ALTER TABLE SampleTable2
    ADD CONSTRAINT DF_sys_DateTime_Getdate DEFAULT (getdate()) FOR Sys_DateTime

    If we run the previous query again, the last created default has 0 for is_system_named.

    Now let us say I want to change the data type of the Sys_DateTime column to something else:

    ALTER TABLE SampleTable2 ALTER COLUMN Sys_DateTime Date

    This will generate the below error:

    Msg 5074, Level 16, State 1, Line 1
    The object 'DF_sys_DateTime_Getdate' is dependent on column 'Sys_DateTime'.
    Msg 4922, Level 16, State 9, Line 1
    ALTER TABLE ALTER COLUMN Sys_DateTime failed because one or more objects access this column.

    This means you need to drop the default constraint before altering the column:

    ALTER TABLE [dbo].[SampleTable2] DROP CONSTRAINT [DF_sys_DateTime_Getdate]
    ALTER TABLE SampleTable2 ALTER COLUMN Sys_DateTime Date
    ALTER TABLE [dbo].[SampleTable2] ADD CONSTRAINT [DF_sys_DateTime_Getdate] DEFAULT (getdate()) FOR [Sys_DateTime]

    If you have a system-named default constraint, whose name can differ from environment to environment, you cannot drop it by name as before. Instead, you can use the below code template:

    DECLARE @defaultname VARCHAR(255)
    DECLARE @executesql VARCHAR(1000)

    SELECT @defaultname = dc.name
    FROM sys.default_constraints dc
    INNER JOIN sys.columns sc
      ON dc.parent_object_id = sc.object_id
     AND dc.parent_column_id = sc.column_id
    WHERE OBJECT_NAME(dc.parent_object_id) = 'SampleTable'
      AND sc.name = 'Sys_DateTime'

    SET @executesql = 'ALTER TABLE SampleTable DROP CONSTRAINT ' + @defaultname
    EXEC (@executesql)

    ALTER TABLE SampleTable ALTER COLUMN Sys_DateTime Date
    ALTER TABLE [dbo].[SampleTable] ADD DEFAULT (getdate()) FOR [Sys_DateTime]

  • SQLAuthority News – Presenting at Great Indian Developer Summit 2012 – SQL Server Misconception and Resolutions

    - by pinaldave
    Earlier, during TechEd 2012, I presented a session on SQL Server Misconceptions and Resolutions. It was a pleasure to present this session with Vinod Kumar during the event. The Great Indian Developer Summit is around the corner, and I will be presenting there once again on the same topic. We had an excellent response during the last event; the hall was completely full, and there were plenty of people who could not get into the session because there was no place left to sit or stand. Well, here is another chance for all who missed the presentation.

    New Additions
    During the last session we were a two-presenter tag team, and the session was built to suit two speakers on one stage. This time I am the only presenter, so I decided to present the session in a very different way. I will still assume there are two presenters. One of the presenters will be me, of course, and the second person will be YOU! Yes, you read that right: you will be presenting this session with me. If you wonder how, well, you will have to attend the session to figure it out.

    Talking Points
    We will be talking about the following topics in the session, which we will relate to SQL Server:
    - Moon landing
    - Napoléon Bonaparte
    - Wall of China
    - Bollywood
    - ...and of course, SQL Server itself.

    I promise that this 45-minute presentation will be one of the highlights of the event for you.

    Goodies
    I can only promise 20 goodies as of the moment. I might bring more when you meet me there.

    Session Details
    Title: SQL Server Misconceptions and Resolution - A Practical Perspective (Add to Calendar)
    Abstract: "The earth is flat"! An ancient common misconception, which has been proven incorrect as we progressed in modern times. In this session, we will see various prevailing database misconceptions and their resolutions, with the aid of demos. In this unique session, the audience will be a part of the conversation and resolution.
    Date and Time: April 17, 2012, 16:55 to 17:40
    Location: J. N. Tata Auditorium, National Science Symposium Complex (NSSC), Bangalore, India

    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, SQLAuthority News, T SQL, Technology

  • SQL SERVER – How to easily work with Database Diagrams

    - by Pinal Dave
    Databases are very widely used in the modern world. Regardless of its complexity, each database requires in-depth design. To practice along, please download dbForge Studio now.

    The right methodology for designing a database is based on the foundations of data normalization, according to which we should first define the database's key elements: entities. Afterwards, the attributes of entities and the relations between them are determined. There is a strong opinion that the process of database design should start with a pencil and a blank sheet of paper. This might look old-fashioned nowadays, because SQL Server provides much wider functionality for designing databases: Database Diagrams.

    When using SSMS to work with Database Diagrams, I realized two things. On the one hand, visualization of a schema allows designing a database more efficiently; on the other hand, when it came to creating a big schema, some difficulties occurred when designing with SSMS. Alternatives were not long in coming, and dbForge Studio for SQL Server is one of them. Its functions offer more advantages for working with Database Diagrams. For example, unlike SSMS, dbForge Studio supports dragging and dropping several tables at once from the Database Explorer. This is my opinion, but personally I find this option very useful. Another great thing is that a diagram can be saved both as a graphic file and as a special XML file, which, given an identical environment, can easily be opened on another server to continue the work.

    While working with dbForge Studio, it turned out that it offers a wide set of elements to operate with on the diagram. Noteworthy among such elements are containers, which allow aggregating diagram objects into thematic groups. Moreover, you can even place an image directly on the diagram if the schema design is based on a standard template.

    Each of the development environments has a different approach to storing a diagram (for example, SSMS stores them on the server side, whereas dbForge Studio stores them in a local file). I haven't yet found a way to convert existing diagrams from SSMS to dbForge Studio; however, I hope Devart's developers will implement this feature in one of the following releases.

    All in all, editing Database Diagrams with dbForge Studio was a nice experience and sped up common database design tasks. Download dbForge Studio now.

    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL

  • Developing add-ins for multiple versions of Office

    - by Pranav
    Do you want to develop an add-in targeting multiple versions of Office? Do you have basic questions like "Is it possible?" and "How do I do it?"? Then you've come to the right place.

    A few months back, I got a requirement to develop add-ins for Outlook 2003 and Outlook 2007. The functionality for both versions is the same. A doubt struck me: when the functionality is the same, why would I develop two add-ins separately? Why don't I make a single build for both versions of Office? So I started searching for techniques to develop add-ins which work in both (2003 and 2007), and I read many articles written by VSTO experts in their blogs, the official VSTO blog, MSDN, forums, and what not.

    Misha says: Theoretically, you can develop an add-in for multiple versions of Microsoft Office by catering to the lowest common denominator. This means that if you use an Excel 2003 add-in template in Visual Studio 2008, you would be able to develop and debug this with Excel 2007. However, if you try this, you may meet these error messages: "You cannot debug or run this project, because the required version of the Microsoft Office application is not installed.", followed by "Unable to start debugging."

    You can develop an Office 2003 add-in on a system where Office 2007 is installed. The following procedure demonstrates how to update your Visual Studio debugging options to use Microsoft Outlook 2007 to debug an add-in targeting Microsoft Outlook 2003:

    1. On the Project menu, click ProjectName Properties.
    2. Click the Debug tab.
    3. In the Start Action pane, click the "Start external program" radio button.
    4. Click the file browser button and navigate to %ProgramFiles%\Microsoft Office\Office12.
    5. Choose Outlook.exe and click Open.
    6. Press F5 to debug your add-in.

    For more details, go through this article on Misha Shneerson's blog. There are also some tips and tricks to follow, and things to take care of while developing add-ins targeting multiple versions of Office, in Andrew's blog. Have a look at that too; you might find it interesting and useful: http://blogs.msdn.com/andreww/archive/2007/06/15/can-you-build-one-add-in-for-multiple-versions-of-office.aspx

    Here is an MSDN article on Running Solutions in Different Versions of Microsoft Office: http://msdn.microsoft.com/en-us/library/bb772080.aspx

    Hope this helps!
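    Not from the articles above, but one related pattern worth noting: when a single add-in binary must serve several Outlook versions, you can branch on the host's major version at runtime and avoid object-model members that the older version lacks. A minimal sketch (the class and method names are invented for illustration, and it assumes a reference to the Outlook interop assembly):

    using Outlook = Microsoft.Office.Interop.Outlook;

    // Sketch: detect the Outlook major version at runtime so one add-in
    // binary can skip object-model members that Outlook 2003 does not have.
    static class HostVersion
    {
        // Application.Version looks like "11.0.x" (2003) or "12.0.x" (2007).
        public static int Major(Outlook.Application app)
            => int.Parse(app.Version.Split('.')[0]);

        public static void UseNewFeatureIfAvailable(Outlook.Application app)
        {
            if (Major(app) >= 12)
            {
                // Safe to call Outlook 2007-only members here.
            }
            else
            {
                // Fall back to functionality common to both versions.
            }
        }
    }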

  • WinMo's Demise: Notifying Next of "Kin"

    - by andrewbrust
    This past Monday, April 12th, Visual Studio 2010 was launched. And on that same day, Microsoft also launched a new line of mobile phone handsets, called Kin. The two product launches are actually connected, but only by what they do not have in common, and what they commonly lack.

    On the former point: VS 2010 had released to manufacturing a couple weeks prior to its launch. The Kin phones, meanwhile, are not yet available. We don't even know what they will cost. (And I think cost will be a major factor in Kin's success... I told ChannelWeb's Yara Souza so in this article).

    What do the two products both lack? Simple: Windows Mobile 6.x. For example, Kin seems to be based on the same platform as Windows Phone 7 (albeit a subset). And VS 2010 does not support .NET Compact Framework development, which means no .NET development support for WinMo 6.x and earlier.

    So I guess April 12th marks Windows Phone "clean slate day." If you want to develop for the old phone platform, you will need to use the old version of Visual Studio (i.e. 2008). Luckily VS 2010 and 2008 can be installed side-by-side. But I doubt that's much consolation to developers who still target WinMo 6.5 and earlier.

    Remember, WinMo isn't just about the phone. There are all sorts of non-telephony mobile devices, including ruggedized Pocket PC-style instruments, bar code readers and shop-floor-deployed units that don't run Windows Phone 7 and couldn't, even if they wanted to.

    Where will developers in these markets go? I would guess some will stick with WinMo 6.x and earlier, until Windows Phone 7 can handle their workloads, assuming that does indeed happen. Others will likely go to Google's Android platform.

    For OEMs and developers who need a customizable mobile software stack, Android is turning out to be out-WinMo-ing WinMo. As I wrote in this post, Google took Microsoft's model (minus the licensing fees) and combined it with a modern SmartPhone feature set (rather than a late 90s/early oughts PDA paradigm), to great success. You might say Google embraced and extended. You might also say Microsoft shunned and withdrew.

  • Where would my different development rhythm be suitable for the work?

    - by DarenW
    Over the years I have worked on many projects. Some were successful and a great benefit to the company; others were total failures that ended with me getting fired or otherwise leaving. What is the difference? Naturally I prefer the former and wish to avoid the latter, so I'm pondering the issue.

    The key seems to be that my personal approach differs from the norm. I write code first, letting it be all spaghetti and chaos, using whatever tools "fit my hand" that I'm fluent in. I try to organize it, then give up and start over with a better design. I go through cycles, from thinking-design to coding-testing. This may seem the same as any other development process, Agile or whatever, cycling between design and coding, but there does seem to be a subtle difference: the methods (ideally) followed by most teams go design, code; design, code; ... while I'm going code, design; code, design; (if that makes any sense). Music analogy: some types of music have a strong downbeat while others have prominent syncopation.

    In practice, I just can't think in terms of UML, specifications and so on, but grok things only by attempting to code, debug and refactor ad hoc. I need the grounding provided by coding in order to think constructively, and then to offer any opinions, advice or solutions to the team and get real work done. In positions where I can initially hack up cowboy code without constraints on tool or language choices, I easily gain a "feel" for the data, requirements, etc., and eventually do good work. In formalized positions where paperwork and pure "design" come first and coding comes only later (even for small proof-of-concept projects), I am lost at sea and drown.

    Therefore, I'd like to know how to either 1) change my rhythm to match the more formalized, methodology-oriented team way of doing things, or 2) find positions at organizations where my sense of development rhythm is perfect for the work. It's probably unrealistic for a person to change their fundamental approach to things, so option 2) is preferred. So where can I find such positions? How common is my approach, and where is it seen as viable but different, rather than dismissed as undisciplined cowboy coding?

  • Good Scoop: The PeopleSoft/IBM Backstory

    - by [email protected]
    Sometimes you're searching for something online and you find an unrelated, bonus nugget. Last week I stumbled across an interesting blog post from Chris Heller of a PeopleSoft consulting shop in San Ramon, CA called Grey Sparling. I don't know these guys. But Chris, who apparently used to work on the PeopleTools team, wrote a great article on a pre-acquisition, would-be deal between IBM and PeopleSoft that would have standardized PeopleSoft on IBM technology. The behind-the-scenes perspective is interesting, as is his commentary on the challenges that the company and PeopleSoft customers would have encountered if the deal had gone through:

    - "No common ownership. It's hard enough to get large groups of people to work together when they work for the same company, but with two separate companies it is much, much harder. Even within Oracle, progress on Fusion applications was slow until Thomas Kurian took over Fusion applications in addition to Fusion middleware."

    - "No customer buy-in. PeopleSoft customers weren't asking for a conversion to WebSphere, so the fact that doing that could have helped PeopleSoft stay independent wouldn't have meant much to them, especially since the cost of moving to whatever a "PeopleSoft built on WebSphere" would have been significant."

    - "No executive buy-in. This is related to the previous point, but it's worth calling out separately. If Oracle had walked away and the deal with IBM had gone through, and PeopleSoft customers got put through the wringer as part of the WebSphere move, all of the PeopleSoft project teams would be put in the awkward position of explaining to their management why these additional costs and headaches were happening. Essentially they would need to "sell" the partnership internally to their own management team. That's not a fun conversation to have."

    I'm not surprised that something like this was in the works. But I did find the inside scoop and Heller's perspective on the challenges particularly interesting, especially the advantages of aligning development of applications and infrastructure under one roof. Here's a link to the whole blog entry.

  • Understanding Data Science: Recent Studies

    - by Joe Lamantia
    If you want a deeper understanding of data science than Drew Conway's popular Venn diagram model offers, or than Josh Wills' tongue-in-cheek characterization ("Data Scientist (n.): Person who is better at statistics than any software engineer and better at software engineering than any statistician."), two relatively recent studies are worth reading.

    'Analyzing the Analyzers,' an O'Reilly e-book by Harlan Harris, Sean Patrick Murphy, and Marck Vaisman, suggests four distinct types of data scientists -- effectively personas, in a design sense -- based on analysis of self-identified skills among practitioners. The scenario format dramatizes the different personas, making what could be a dry statistical readout of survey data more engaging. The survey-only nature of the data, the restriction of scope to just skills, and the suggested models of skill profiles make this feel like the sort of exercise that data scientists undertake as an everyday task: collecting data, analyzing it using a mix of statistical techniques, and sharing the model that emerges from the data-mining exercise. That's not an indictment, simply an observation about the consistent feel of the effort as a product of data scientists, about data science.

    And the paper 'Enterprise Data Analysis and Visualization: An Interview Study' by researchers Sean Kandel, Andreas Paepcke, Joseph Hellerstein, and Jeffery Heer considers data science within the larger context of industrial data analysis, examining analytical workflows, skills, and the challenges common to enterprise analysis efforts, and identifying three archetypes of data scientist. As an interview-based study, the data the researchers collected is richer, and there's correspondingly greater depth in the synthesis. The scope of the study included a broader set of roles than data scientist (enterprise analysts) and involved questions of workflow and organizational context for analytical efforts in general. I'd suggest this is useful as a primer on analytical work and workers in enterprise settings for those who need a baseline understanding; it also offers some genuinely interesting nuggets for those already familiar with discovery work.

    We've undertaken a considerable amount of research into discovery, analytical work/ers, and data science over the past three years -- part of our programmatic approach to laying a foundation for product strategy and highlighting innovation opportunities -- and both studies complement and confirm much of the direct research into data science that we conducted. There were a few important differences in our findings, which I'll share and discuss in upcoming posts.

    Read the article

  • Torchlight Black Screen and doesn't show up

    - by Lelouch Reyiz
    When I open the game in full-screen mode I get a black screen that covers the whole screen; in windowed mode the black area covers the middle of the screen. Here is a video: https://copy.com/fvrGw7QIJ8Z0

    Terminal output:

        alperen@alperen-Inspiron-N5010 /usr/local/games/Torchlight $ ./Torchlight.bin.x86_64
        Creating resource group General
        Creating resource group Internal
        Creating resource group Autodetect
        SceneManagerFactory for type 'DefaultSceneManager' registered.
        Registering ResourceManager for type Material
        Registering ResourceManager for type Mesh
        Registering ResourceManager for type Skeleton
        MovableObjectFactory for type 'ParticleSystem' registered.
        OverlayElementFactory for type Panel registered.
        OverlayElementFactory for type BorderPanel registered.
        OverlayElementFactory for type TextArea registered.
        Registering ResourceManager for type Font
        ArchiveFactory for archive type FileSystem registered.
        ArchiveFactory for archive type Zip registered.
        FreeImage version: 3.13.1
        This program uses FreeImage, a free, open source image library supporting all common bitmap formats. See http://freeimage.sourceforge.net for details
        Supported formats: bmp,ico,jpg,jif,jpeg,jpe,jng,koa,iff,lbm,mng,pbm,pbm,pcd,pcx,pgm,pgm,png,ppm,ppm,ras,tga,targa,tif,tiff,wap,wbmp,wbm,psd,cut,xbm,xpm,gif,hdr,g3,sgi,exr,j2k,j2c,jp2,pfm,pct,pict,pic,bay,bmq,cr2,crw,cs1,dc2,dcr,dng,erf,fff,hdr,k25,kdc,mdc,mos,mrw,nef,orf,pef,pxn,raf,raw,rdc,sr2,srf,arw,3fr,cine,ia,kc2,mef,nrw,qtk,rw2,sti,drf,dsc,ptx,cap,iiq,rwz
        DDS codec registering
        Registering ResourceManager for type HighLevelGpuProgram
        Registering ResourceManager for type Compositor
        MovableObjectFactory for type 'Entity' registered.
        MovableObjectFactory for type 'Light' registered.
        MovableObjectFactory for type 'BillboardSet' registered.
        MovableObjectFactory for type 'ManualObject' registered.
        MovableObjectFactory for type 'BillboardChain' registered.
        MovableObjectFactory for type 'RibbonTrail' registered.
        Loading library lib64/OGRE/RenderSystem_GL
        Installing plugin: GL RenderSystem
        OpenGL Rendering Subsystem created.
        Plugin successfully installed
        Loading library lib64/OGRE/Plugin_ParticleFX
        Installing plugin: ParticleFX
        Particle Emitter Type 'Point' registered
        Particle Emitter Type 'Box' registered
        Particle Emitter Type 'Ellipsoid' registered
        Particle Emitter Type 'Cylinder' registered
        Particle Emitter Type 'Ring' registered
        Particle Emitter Type 'HollowEllipsoid' registered
        Particle Affector Type 'LinearForce' registered
        Particle Affector Type 'ColourFader' registered
        Particle Affector Type 'ColourFader2' registered
        Particle Affector Type 'ColourImage' registered
        Particle Affector Type 'ColourInterpolator' registered
        Particle Affector Type 'Scaler' registered
        Particle Affector Type 'Rotator' registered
        Particle Affector Type 'DirectionRandomiser' registered
        Particle Affector Type 'DeflectorPlane' registered
        Plugin successfully installed
        Loading library lib64/OGRE/Plugin_OctreeSceneManager
        Installing plugin: Octree & Terrain Scene Manager
        Plugin successfully installed
        *-*-* OGRE Initialising
        *-*-* Version 1.6.5 (Shoggoth)
        terminate called after throwing an instance of 'std::out_of_range'
          what():  basic_string::substr
        Error: signal: 6
        ./Torchlight.bin.x86_64(_ZN10LinuxUtils13crash_handlerEi+0x25)[0x17eb6f5]
        /lib/x86_64-linux-gnu/libc.so.6(+0x37000)[0x7fc647877000]
        /lib/x86_64-linux-gnu/libc.so.6(gsignal+0x39)[0x7fc647876f89]
        /lib/x86_64-linux-gnu/libc.so.6(abort+0x148)[0x7fc64787a398]
        /usr/lib/x86_64-linux-gnu/libstdc++.so.6(_ZN9__gnu_cxx27__verbose_terminate_handlerEv+0x155)[0x7fc6481826b5]
        /usr/lib/x86_64-linux-gnu/libstdc++.so.6(+0x5e836)[0x7fc648180836]
        /usr/lib/x86_64-linux-gnu/libstdc++.so.6(+0x5e863)[0x7fc648180863]
        /usr/lib/x86_64-linux-gnu/libstdc++.so.6(+0x5eaa2)[0x7fc648180aa2]
        /usr/lib/x86_64-linux-gnu/libstdc++.so.6(_ZSt20__throw_out_of_rangePKc+0x67)[0x7fc6481d25d7]
        /usr/lib/x86_64-linux-gnu/libstdc++.so.6(+0xbe3d3)[0x7fc6481e03d3]
        ./Torchlight.bin.x86_64(_ZN11CFileSystem21buildMassiveDataGroupEv+0x453)[0x1617805]
        ./Torchlight.bin.x86_64(_ZN11CFileSystemC1Eb+0x14be)[0x16145ae]
        ./Torchlight.bin.x86_64(_ZN22CMasterResourceManagerC1EP9CSettings+0x41a)[0xfe1d0a]
        ./Torchlight.bin.x86_64(_ZN5CGame5setupEb+0x79a)[0x73ceaa]
        ./Torchlight.bin.x86_64(_ZN5CGame5beginEPv+0x28d)[0x73b839]
        ./Torchlight.bin.x86_64(main+0x649)[0x146dbe4]
        /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5)[0x7fc647861ec5]
        ./Torchlight.bin.x86_64[0x739ca9]
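    The informative part of the trace is the last section: the demangled frames show CFileSystem::buildMassiveDataGroup() ending in std::string::substr, which threw a std::out_of_range that nothing caught, so the runtime aborted the process (signal 6 is SIGABRT). A minimal C++ sketch of that failure mode follows; the string contents and the offset 42 are invented for illustration, since Torchlight's actual parsing code isn't public:

        #include <iostream>
        #include <stdexcept>
        #include <string>

        int main() {
            // Stand-in for a path or manifest entry the game parses at startup;
            // the contents and the offset below are invented for illustration.
            std::string entry = "short";

            try {
                // std::string::substr(pos) throws std::out_of_range when
                // pos > size() -- the exact exception named in the trace.
                std::cout << entry.substr(42) << '\n';
            } catch (const std::out_of_range& e) {
                // Torchlight has no handler like this, so std::terminate runs
                // instead and the process aborts with signal 6 (SIGABRT).
                std::cerr << "out_of_range: " << e.what() << '\n';
            }

            // A bounds check avoids the throw entirely:
            std::string tail = entry.size() > 42 ? entry.substr(42) : std::string();
            std::cout << "guarded tail: '" << tail << "'\n";
            return 0;
        }

    If that reading is right, the crash happens while the game indexes its data files, which would point at something unexpected in the install or data directory layout rather than at graphics settings; that is an inference from the trace, not a confirmed fix.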

    Read the article

  • Hello From South Florida

    - by Sam Abraham
    Fellow Blog Readers:

    I figured I'd use my first blog post on GeeksWithBlogs to introduce myself. I recently relocated from Long Island, NY to South Florida, where I joined a local company as a Software Engineer specializing in technologies such as C#, ASP.Net 3.5, WCF, Silverlight, SQL Server 2008 and LINQ, to name a few. I am an MCP and MCTS in ASP.Net 3.5, looking to get my .Net 4.0 certification soon.

    Having been in the industry for a few years, I figured I would share my take on the importance of being involved in (or at least attending) local user groups. I am a firm believer that besides using a certain technology, the best way to expand one's knowledge is by sharing it with others and being just as open to learning from others as you are willing to share what you know. In my opinion, an important factor that makes a good developer stand out is his/her ability to keep abreast of the latest and greatest, even in areas outside his/her direct expertise. Additionally, having spoken to various recruiters, I can say that user group attendees are always looked upon favorably as genuinely interested in their field and willing to take the initiative to expand their knowledge, which gives candidates good leverage when competing for positions.

    I believe I am very blessed to be in an area with a very strong and vibrant developer community. I found in the local .Net community leadership a genuine interest in constantly extending the opportunity to all developers to get more involved, and in encouraging those willing to take that initiative to achieve their goals: speak at meetings, volunteer at events, or write and publish articles/blogs about the latest and greatest technologies. With Vishal Shukla (site director for the West Palm Beach .Net User Group) traveling overseas, I have been extended the opportunity to come on board as site coordinator for FladotNet's WPB .Net User Group along with Venkata Subramanian, an opportunity which I gratefully accepted. Being involved in running a .Net user group will surely help me personally and professionally, but my real hope is to use this opportunity to help deliver the ultimate common goal: spread the word about new .Net technologies, help everybody get more involved, and simply have fun learning new things.

    With my introduction out of the way, in the next few days I will be posting some notes on an upcoming talk I will be giving about MVC2 and VS2010 in mid-April.

    Environment.Exit(0);

    --Sam

    Read the article

  • Configurable Objects - Introduction

    - by Anthony Shorten
    One of the interesting facilities in the framework is the Configurable Object functionality (also known as Task Optimization and as Cool Tools). The idea is that any implementation can create its own views of the base product objects and services and implement functionality against those new views.

    For example, in Oracle Utilities Customer Care and Billing there is a Person object. That object is used to store and manage information about individuals as well as companies. In the base product you would use the Person Maintenance screen and fill in some parts of the screen when you wanted to register or maintain an individual, and other parts when you wanted to register or maintain a company. This can be somewhat confusing to some customers. Using Configurable Objects, this can be simplified. A business object can be created that is a view of any object. For example, you could create a Human business object to cover the aspects of the Person object pertaining to an individual, and a Company business object to cover the aspects unique to a company. Even the tag names (i.e. field names) in the object can be changed to terms the implementation is more familiar with.

    A business object can also restructure the underlying object. For example, a common identifier for an individual in the USA is the Social Security number; in the product this value is a Person Identifier (as identifiers vary by country). In the new Human object you can remap the Person Identifier as a Social Security number.

    To define a business object you use a schema editor built into the browser user interface, together with a mapping language. An example of the language is an extract of the schema for the Human business object, which contains mapping tags as well as formatting and other tags (see the sketch at the end of this post). This information can be built manually or using a wizard which generates the base structure for you to alter, and it is all stored as metadata when saved.

    Once a business object is built it can be used as a basis for code or for other business objects (inheritance is supported), called by a screen (called a UI Map), or even exposed as a Web Service. And this is just a start: you can also create views of base services called Business Services, Service Scripts used for non-object or complex object processing (as well as other things), UI Maps used for screens, and Data Areas to reuse definitions across multiple objects. Configurable Objects are powerful and I have only really touched on them here. Over the next few months I hope to add lots more entries about them.
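    The schema extract itself did not survive in this excerpt, so the following is a rough, hypothetical sketch of the tag-based style being described; every element name and field name below is invented for illustration, not taken from the product:

        <schema>
          <!-- Hypothetical extract: presents the base Person object as a Human object. -->
          <name mapField="ENTITY_NAME" required="true"/>
          <!-- Remaps the generic Person Identifier as a Social Security number. -->
          <socialSecurityNumber mapField="ID_NBR">
            <idType default="SSN"/>
          </socialSecurityNumber>
          <dateOfBirth mapField="BIRTH_DT" fmt="date"/>
        </schema>

    The point is less the exact syntax than the shape of the approach: each element renames and remaps a base field, defaults and formatting are declared inline, and the whole definition is stored as metadata rather than code.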

    Read the article

< Previous Page | 237 238 239 240 241 242 243 244 245 246 247 248  | Next Page >