Search Results

Search found 837 results on 34 pages for 'structured'.

Page 13/34 | < Previous Page | 9 10 11 12 13 14 15 16 17 18 19 20  | Next Page >

  • Using Oracle Receivables Diagnostics: How To Run, Read & Use To Troubleshoot

    - by Robert Story
    Upcoming Webcast
    Title: Using Oracle Receivables Diagnostics: How To Run, Read & Use To Troubleshoot
    Date: March 31, 2010
    Time: 10:00 am EDT
    Product Family: Receivables Community

    Summary: This one-hour session is recommended for functional users who want to take a more active role in the generation of Diagnostics in Oracle Receivables. This session will provide an overview of how diagnostics are structured and give some tips on how to read/analyze the output, as well as some simple troubleshooting tips. Topics will include: a review of Diagnostic Catalogs in Release 11i, 12.0.x and 12.1.1; how to run some of the more popular Receivables Diagnostics; how to read and analyze diagnostic data; and examining the log viewer. A short, live demonstration (only if applicable) and a question and answer period will be included. Click here to register for this session.

    The above webcast is a service of the E-Business Suite Communities in My Oracle Support. For more information on other webcasts, please reference the Oracle Advisor Webcast Schedule. Click here to visit the E-Business Communities in My Oracle Support. Note that all links require access to My Oracle Support.

    Read the article

  • Data Quality Through Data Governance

    Data Quality Governance
    Data quality is very important to every organization; bad data costs an organization time, money, and resources that could be saved if the proper governance were put in place.

    Data Governance Program Criteria:
    - Support from executive management and all business units
    - Data Stewardship Program
    - Cross-functional team of data stewards
    - Data Governance Committee
    - Quality structured data

    It should go without saying, but any successful project in today's business world must get buy-in from executive management and all stakeholders involved with the project. If management does not fully support a project because they do not see it as being in their or the company's best interest, they will remove funding, resources and the time allocated to work on it. In essence they can render a project dead until it is officially killed by the business. In addition, buy-in from stakeholders is very important, because stakeholders who do not support a project can cause delays and increased spending of time, money and resources.

    Data Stewardship programs are administered by a data steward manager whose primary focus is to support, train and manage a cross-functional team of data stewards. The data stewards are pulled from various departments and act to ensure that all systems work toward the organization's goals. Typically, data stewards are subject matter experts who act as mediators between their respective departments and IT.

    Data Quality Procedures
    Data Governance Committees are composed of data stewards, upper management, IT leadership and various subject matter experts, depending on the company. The primary goal of this committee is to define strategic goals, coordinate activities, set data standards and offer data guidelines for the business.

    Data Quality Policies
    In 1997, Claudia Imhoff defined a data steward's responsibilities as approving business naming standards, developing consistent data definitions, determining data aliases, developing standard calculations and derivations, documenting the business rules of the corporation, monitoring the quality of the data in the data warehouse, defining security requirements, and so forth. She further explains that data stewards are responsible for creating and enforcing policies on issues including, but not limited to:
    - Resolving data integration issues
    - Determining data security
    - Documenting data definitions, calculations, summarizations, etc.
    - Maintaining/updating business rules
    - Analyzing and improving data quality

    Read the article

  • SQL SERVER – Get Free Books While Learning SQL Server 2012 Error Handling

    - by pinaldave
    Fans of this blog are aware that I have recently released my new books SQL Server Functions and SQL Server 2012 Queries. The books are available in the market in a limited edition, but you can get them for free on Wednesday, Nov 14, 2012. Not only are they free, but you can learn SQL Server 2012 error handling as well. My book's co-author Rick Morelan is presenting a webinar tomorrow on SQL Server 2012 Error Handling. Here is a brief abstract of the webinar: People are often shocked when they see the demo in this talk where the first statement fails and all other statements still commit. For example, did you know that BEGIN TRAN…COMMIT TRAN is not enough to make everything work together? These mistakes can still happen to you in SQL Server 2012 if you are not aware of the options. Rick Morelan, creator of Joes2Pros, will teach you how to predict the Error Action and control it with and without structured error handling. Register for the webinar now to learn: how to predict the Error Action and control it; the nuances between successful and failing SQL statements; and essential SQL Server 2012 configuration options. Register for the webinar and be present during the session. My co-author will announce a winner (maybe more than one) during the session. If you are present during the session, you are eligible to win the book. The webinar is scheduled at two different times to accommodate various time zones: 1) 10am ET/7am PT and 2) 1pm ET/11am PT. Each webinar will have its own winner, so you can increase your chances by attending both. Do not miss this opportunity and register for the webinar right now. The recordings of the webinar may not be available. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Joes 2 Pros, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, SQLServer, T SQL, Technology

    Read the article

  • What would you do to improve the working of a small Development team?

    - by Omar Kooheji
    My company is having a reshuffle and I'm applying for my boss' job as he's moved up the ladder. The new role would give me a chance to move our development team into the 21st century, and I'd like to make sure that: I can provide sensible suggestions in the interview to get the job, so I can fix the team; and, if I get the job, I can actually enact some changes that improve the lives of the developers and their output. I want to know what I can suggest to improve the way we work, because I think it's a mess, but every time I've suggested a change it's been shot down because any time spent implementing the change would be time not spent developing software. Here is the state of play at the moment:
    - My team consists of 3-4 developers (mainly Java, but I do some .NET work).
    - Each member of the team usually works on 2-3 projects at a time.
    - We are each responsible for the entire life cycle of a project, from design to testing.
    - Usually only one person works on a project (although we have the odd project with more than one person working on it).
    - Projects tend to be bespoke to a single customer, or are heavily reliant on a particular customer environment.
    - We have 2-3 "products" which we evolve to meet customer requirements.
    - We use SVN for source control.
    - We don't do continuous integration (I'd like to start).
    - We use a really basic bug tracker for internal issue tracking (I'd like to move to an issue/task management system).
    - Any changes that bring a sudden dip in revenue generation will probably be rejected; the company isn't structured for development.
    Most of the rest of the technical team's jobs can be broken down to "install this piece of hardware, configure that piece of hardware", and once a job is done it's done and you never have to look at it again. This mentality has crept into the development team because it's part of the company culture.

    Read the article

  • Designing a Content-Based ETL Process with .NET and SFDC

    - by Patrick
    As my firm makes the transition to using SFDC as our main operational system, we've spun together a couple of SFDC portals where we can post customer-specific documents to be viewed at will. As such, we've had the need for pseudo-ETL applications to be implemented that are able to extract metadata from the documents our analysts generate internally (most are industry-standard PDFs, XML, or MS Office formats) and place them in networked "queue" folders. From there, our applications scoop up the queued documents and upload them to the appropriate SFDC CRM Content Library along with some select pieces of metadata. I've mostly used DbAmp to broker communication with SFDC (DbAmp is a Linked Server provider that allows you to use SQL conventions to interact with your SFDC Org data). I've been able to create [console] applications in C# that work pretty well, and they're usually structured something like this:

        static void Main()
        {
            // Load parameters from app.config.

            // Get documents from queue.
            var files = someInterface.GetFiles(someFilterOrRegexPattern);

            foreach (var file in files)
            {
                // Extract metadata from the file.
                // Validate some attributes of the file; add any validation errors to an
                // in-memory structure (e.g. List<ValidationErrors>).
                if (isValid)
                {
                    var fileData = File.ReadAllBytes(file);
                    // Upload using some wrapper for an ORM or DAL.
                    someInterface.Upload(fileData, meta.Param1, meta.Param2, ...);
                }
                else
                {
                    // Bounce the file.
                }
            }

            // Report any validation errors (via message bus or SMTP or some such).
        }

    And that's pretty much it. Most of the time I wrap all these operations in a "Worker" class that takes the needed interfaces as constructor parameters. This approach has worked reasonably well, but I just get this feeling in my gut that there's something awful about it and would love some feedback. Is writing an ETL process as a C# console app a bad idea? I'm also wondering if there are some design patterns that would be useful in this scenario that I'm clearly overlooking. Thanks in advance!
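
    For illustration, a minimal, hypothetical sketch of that "Worker" arrangement might look like the following. The interface and type names (IDocumentQueue, IContentUploader, DocumentMetadata) are invented for illustration only and are not taken from the actual application:

        using System.Collections.Generic;
        using System.IO;

        public interface IDocumentQueue
        {
            IEnumerable<string> GetFiles(string filterPattern);
        }

        public interface IContentUploader
        {
            void Upload(byte[] fileData, DocumentMetadata meta);
        }

        public class DocumentMetadata
        {
            // Whatever attributes get extracted from the document.
            public string Title { get; set; }
        }

        public class EtlWorker
        {
            private readonly IDocumentQueue _queue;
            private readonly IContentUploader _uploader;

            public EtlWorker(IDocumentQueue queue, IContentUploader uploader)
            {
                _queue = queue;
                _uploader = uploader;
            }

            public void Run(string filterPattern)
            {
                foreach (var file in _queue.GetFiles(filterPattern))
                {
                    // Extraction, validation and error reporting would go here,
                    // exactly as in the Main() outline above.
                    var meta = new DocumentMetadata { Title = Path.GetFileName(file) };
                    _uploader.Upload(File.ReadAllBytes(file), meta);
                }
            }
        }

    Because the queue and uploader arrive through the constructor, the console host stays thin and both steps can be swapped or mocked in tests; arguably that factoring matters more than whether the host is a console app, a Windows service, or a scheduled task.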

    Read the article

  • Why Oracle Data Integrator for Big Data?

    - by Mala Narasimharajan
    Big Data is everywhere these days – but what exactly is it? It’s data that comes from a multitude of sources – not only structured data, but unstructured data as well. The sheer volume of data is mind-boggling – here are a few examples of big data: climate information collected from sensors, social media information, digital pictures, log files, online video files, medical records or online transaction records. These are just a few examples of what constitutes big data. Embedded in big data is tremendous value, and being able to manipulate, load, transform and analyze big data is key to enhancing productivity and competitiveness. The value of big data lies in its propensity for greater in-depth analysis and data segmentation, in turn giving companies detailed information on product performance, customer preferences and inventory. Furthermore, by being able to store and create more data in digital form, “big data can unlock significant value by making information transparent and usable at much higher frequency." (McKinsey Global Institute, May 2011) Oracle's flagship product for bulk data movement and transformation, Oracle Data Integrator, is a critical component of Oracle’s Big Data strategy. ODI provides automation, bulk loading, and validation and transformation capabilities for Big Data while minimizing the complexities of using Hadoop. Specifically, the advantages of ODI in a Big Data scenario are due to pre-built Knowledge Modules that drive processing in Hadoop and leverage the graphical UI to load and unload data from Hadoop, perform data validations and create mapping expressions for transformations. The Knowledge Modules provide a key jump-start and eliminate a significant amount of Hadoop development. Using Oracle Data Integrator together with Oracle Big Data Connectors, you can not only simplify the complexities of mapping, accessing, and loading big data (via NoSQL or HDFS) but also correlate it with your enterprise data – this correlation may require integrating across heterogeneous and standards-based environments, connecting to Oracle Exadata, or sourcing via a big data platform such as Oracle Big Data Appliance. To learn more about Oracle Data Integration and Big Data, download our resource kit to see the latest in whitepapers, webinars, downloads, and more, or go to our website at www.oracle.com/bigdata

    Read the article

  • Why do SEO-based code tips not appear to affect ranking?

    - by Ben
    I've been researching various methods for SEO where pages have precise titles, keywords are highlighted with h tags, and the page ticks the many boxes stated for good SEO mark-up. However, when looking at some top-ranked sites on Google for key terms, they have terrible SEO-based mark-up: really long page titles, no tags, limited appearance of keywords in the text, and so on. SEO analysis services rate them lower than other sites, yet these sites rank really high. Even with a low number of back-links they rank highly, so I don't understand how these sites earn the position when they appear inferior to those below them which have better mark-up and links. I don't want to cause trouble by mentioning sites or keywords, but looking in Google at 'executive search', it makes no sense why the roughly 5th-placed site should rank so highly, especially with all the added .swfs. The same applies for the top of 'Japan Executive Search'. My main point is that these sites seem not to have all the important structural rules stated in SEO page-rating applications and general suggested best practice, nor do they show a large number of back-links. It makes me feel like there is no point bothering to write decent mark-up if it really doesn't matter. Can anyone explain how sites with such mark-up and low back-links can outrank well-written and structured sites with greater linkage? Sorry if this is a fuzzy question; I want to avoid singling out any sites as examples, but it really has me perplexed that sites which appear to ignore the suggested best practices rank so well.

    Read the article

  • How best to deal with the frustration that you encounter at the beginning of learning to code [closed]

    - by coderboy
    I am right now a newbie on the job, learning to code in Cocoa. In the beginning I decided that I would try to understand everything I was doing. But right now I just feel like a clueless wizard chanting some spells; it's all just a matter of googling the right incantation. Frequently getting stuck and having to google for answers is proving to be a major demotivator for me. I know that this will get better over time, but I still feel that somewhere, somehow, I'm just approaching things the wrong way. I sit there stumped and then finally just look at sample code from Apple and I go "Wow! This is so logical and well structured!" But just reading it is not going to get me to that level. So I would like to know: how do you approach learning something new? Do you read the whole documentation first, or do you read sample code, or is it maybe just about writing lots of small programs first?

    Read the article

  • Short Look at Frends Helium 2.0 Beta

    - by mipsen
    Pekka from Frends gave me the opportunity to have a look at the beta version of their Helium 2.0. For all of you who don't know the tool: Helium is a web application that collects management data from BizTalk which you usually have to collect tediously yourself, like performance data (throttling, throughput (such as completed orchestrations/hour), other performance counters) and data about the state of BTS applications, and presents the data in clearly structured diagrams and overviews which often even allow drill-down. Installing Helium 2 was quite easy. It comes as an msi file which creates the web application on IIS. Additionally, a Windows service is deployed which acts as an agent for sending alert e-mails and collecting data. What I missed during installation was a link to the created web app at the end, but the link can be found under Program Files/Frends... On the start page Helium shows two sections: an overview of the BTS applications (running? suspended messages?) and basic performance data. You can drill down further into the BTS applications to see ReceiveLocations, Orchestrations and SendPorts. And then a very nice feature can be activated: you can set a monitor on each of the ports and/or orchestrations and have an e-mail sent when a threshold of executions/day or hour is not met. I think this is a great idea. The following screenshot shows the configuration of this option. Conclusion: Helium is a useful monitoring tool for BTS operations that might save a lot of time collecting data, writing a tool yourself, or documenting for the operations staff where to find the data.
    Pros:
    - Simple installation
    - The most important data for BTS operations in one place
    - Monitor for alerts if throughput is not met
    - Nice web UI
    - Reasonable price
    Cons:
    - Additional performance counters cannot be added
    I am not sure when the final version is to be shipped, but you will be able to see that on Frends' homepage soon, I guess... A trial version is available here

    Read the article

  • Empirical evidence for choice of programming paradigm to address a problem

    - by Graham Lee
    The C2 wiki has a discussion of Empirical Evidence for Object-Oriented Programming that basically concludes there is none beyond appeal to authority. This was last edited in 2008. Discussion here seems to bear this out: questions on whether OO is outdated, when functional programming is a bad choice and the advantages and disadvantages of AOP are all answered with contributors' opinions without reliance on evidence. Of course, opinions of established and reputed practitioners are welcome and valuable things to have, but they're more plausible when they're consistent with experimental data. Does this evidence exist? Is evidence-based software engineering a thing? Specifically, if I have a particular problem P that I want to solve by writing software, does there exist a body of knowledge, studies and research that would let me see how the outcome of solving problems like P has depended on the choice of programming paradigm? I know that which paradigm comes out as "the right answer" can depend on what metrics a particular study pays attention to, on what conditions the study holds constant or varies, and doubtless on other factors too. That doesn't affect my desire to find this information and critically appraise it. It becomes clear that some people think I'm looking for a "turn the crank" solution - some sausage machine into which I put information about my problem and out of which comes a word like "functional" or "structured". This is not my intention. What I'm looking for is research into how - with a lot of caveats and assumptions that I'm not going into here but good literature on the matter would - certain properties of software vary depending on the problem and the choice of paradigm. In other words: some people say "OO gives better flexibility" or "functional programs have fewer bugs" - (part of) what I'm asking for is the evidence of this. The rest is asking for evidence against this, or the assumptions under which these statements are true, or evidence showing that these considerations aren't important. There are plenty of opinions on why one paradigm is better than another; is there anything objective behind any of these?

    Read the article

  • Programming in academic environment vs industry environment [closed]

    - by user200340
    Possible Duplicate: Differences between programming in school vs programming in industry? This is a general discussion about programming in an industry environment. The background story is that my colleague sent me a very interesting article called "10 Things Entrepreneurs Don’t Learn in College." The first point in that post is about the author's experience of programming in an academic environment vs an industry environment. After finishing a 4-year Computer Science degree course, I am currently working in an academic environment as a developer, mainly writing Java, J2EE and Javascript code. I know there are differences between academic programming and industry programming, but I was shocked after reading that post. I am trying to avoid this happening to me, or to others, in the future. Can anyone from industry give some general advice about how to program in industry? For example: What exactly happens when a task is received? What is the flow from the beginning to the end? What are the main differences between programming in industry and academia? Is it more structured? Are more frameworks used? It would be great if some code examples could be given. Thanks.

    Read the article

  • Oracle Enterprise Data Quality Adds Global Address Verification Capabilities for Greater Accuracy and Broader Location Coverage

    - by Mala Narasimharajan
    Data quality has many flavors to it. Product, customer – you name the data domain and there's data quality associated with it. Address verification and data quality are a little different, in that there is a tremendous amount of variation as well as nuance attached to them. Specifically, what makes address verification challenging is that, more often than not, addresses are incomplete, riddled with misspellings, have incorrect postal codes assigned to locations, or contain non-address items. Almost all data has locations, and accurate locations power a wealth of business processes: Customer Relationship Management, data quality, delivery of materials, goods or services, fraud detection, insurance risk assessment, data analytics, store and territory planning, and much more. Oracle Address Verification Server provides location-based services as well as deeper parsing and analysis capabilities for Oracle Enterprise Data Quality. Pre-integrated with the EDQ platform, Oracle Address Verification Server provides robust parsing, validation and specialized location information for over 240 countries – all populated countries on Earth. Oracle Enterprise Data Quality (EDQ) is a data quality platform, dedicated to addressing the distinct challenges of customer and product data quality; it performs advanced data profiling to identify and measure poor-quality data and identify rule requirements, as well as semantic and pattern-based recognition to accurately parse and standardize data that is poorly structured. EDQ is integrated with Oracle Master Data Management, including Oracle Customer Hub and Oracle Product Hub, as well as Oracle Data Integrator Enterprise Edition and Oracle CRM. Address Verification Server provides key address verification services for Oracle CRM and Oracle Customer Hub. In addition, Address Verification Server provides greater accuracy when handling address data due to its expanded sources and extensible knowledge repository, solid parsing across locales and countries, and adept handling of extraneous data in address fields. For more information on Oracle Address Verification Server visit: http://bit.ly/GMUE4H and http://bit.ly/GWf7U6

    Read the article

  • Why do some programmers think there is a contrast between theory and practice?

    - by Giorgio
    Comparing software engineering with civil engineering, I was surprised to observe a different way of thinking: any civil engineer knows that if you want to build a small hut in the garden you can just get the materials and go build it, whereas if you want to build a 10-storey house you need to do quite some maths to be sure that it won't fall apart. In contrast, speaking with some programmers or reading blogs or forums, I often find a widespread opinion that can be formulated more or less as follows: theory and formal methods are for mathematicians / scientists, while programming is more about getting things done. What is normally implied here is that programming is something very practical and that even though formal methods, mathematics, algorithm theory, clean / coherent programming languages, etc., may be interesting topics, they are often not needed if all one wants is to get things done. According to my experience, I would say that while you do not need much theory to put together a 100-line script (the hut), in order to develop a complex application (the 10-storey building) you need a structured design, well-defined methods, a good programming language, good text books where you can look up algorithms, etc. So IMO (the right amount of) theory is one of the tools for getting things done. So my question is: why do some programmers think that there is a contrast between theory (formal methods) and practice (getting things done)? Is software engineering (building software) perceived by many as easy compared to, say, civil engineering (building houses)? Or are these two disciplines really different (apart from mission-critical software, software failure is much more acceptable than building failure)?

    Read the article

  • Managing flash animations for a game

    - by LoveMeSomeCode
    Ok, I've been writing C# for a while, but I'm new to ActionScript, so this is a question about best practices. We're developing a simple match game, where the user selects tiles and tries to match various numbers - sort of like memory - and when the match is made we want a series of animations to take place, and when they're done, remove the tile and add a new one. So basically it's:
    1. User clicks the MC
    2. Animation 1 on the MC starts
    3. Animation 1 ends
    4. Remove the MC from the stage
    5. Add a new MC
    6. Start the animation on the new MC
    The problem I run into is that I don't want to make the same timeline motion tween on each and every tile, when the animation is all the same. It's just the picture in the tile that's different. The other method I've come up with is to just apply the tweens in code on the main stage. Then I attach an event handler for MOTION_FINISH, and in that handler I trigger the next animation and listen for that to finish, etc. This works too, but not only do I have to do all the tweening in code, I have a separate event handler for each stage of the animation. So is there a more structured way of chaining these animations together?

    Read the article

  • Why does Bing webmaster tool's SEO analyzer complain about multiple <h1> tags?

    - by Mathew Foscarini
    I used the Bing webmaster tool's SEO analyzer on my website, and it reported: "There are multiple <h1> tags on the page." It recommends that there should only be one <h1> tag on the page. The page is a listing of blog posts for a category, so each blog entry is structured like this:

        <article>
          <header><h1><a>...</a></h1></header>
          <p>summary...</p>
        </article>
        <article>
          <header><h1><a>...</a></h1></header>
          <p>summary...</p>
        </article>
        <article>
          <header><h1><a>...</a></h1></header>
          <p>summary...</p>
        </article>
        <article>
          <header><h1><a>...</a></h1></header>
          <p>summary...</p>
        </article>

    How is this invalid? I thought this was the correct way to describe a post in HTML5.

    Read the article

  • How to manage focus for a small set of simple widgets

    - by Christoph
    I'm developing a set of simple widgets for a small (128x128) display. For example, I'd like to have a main screen with an overlay menu which I can use to toggle the visibility of main screen elements. Each option would be an icon with a box around it while it is selected. Button (left, up, right, down, enter) events should be given to the widget that has "focus". Focus is a simple thing to understand when using a GUI, but I'm having trouble implementing this. Can you suggest a simple concept for managing focus and input events? I have these simple ideas:
    - Only one widget can have focus, so I need a single pointer to that widget.
    - When this widget gets some kind of "cycle" input (as in "highlight the next item in this list"), the focus is given to a different widget.
    - A widget must have a way of telling the application which widget the focus is given to next.
    - If a widget cannot give a "next focus" hint, the application must be able to figure out where the focus should go.
    Widgets are currently structured like this:
    - A widget can have a parent, which is passed to the constructor.
    - Widgets are created statically, as I want to avoid dynamic memory allocation (I only have 16kB of RAM and I'd like to have control over that).
    - Widgets have siblings, implemented as an intrusive linked list (they have a next member).
    - A parent has a pointer to the head of its list of children.
    - Input events are arguments to the widget's buttonEvent method, which can accept or ignore the event. If it ignores the event, it can pass the event to its parent.
    My questions: How can I manage focus for these widgets? Am I making this too complicated?
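
    One way to picture the "single focus pointer plus bubbling to the parent" idea is the sketch below. It is written in C# purely for brevity (the poster's environment is C++ with statically allocated widgets), and every name in it is illustrative rather than taken from any actual code:

        public enum Button { Left, Up, Right, Down, Enter }

        public abstract class Widget
        {
            public Widget Parent;   // set by the container that owns this widget
            public Widget Next;     // intrusive sibling link, as described above

            // Return true if the event was consumed; otherwise it bubbles up.
            public virtual bool HandleButton(Button b) { return false; }
        }

        public class FocusManager
        {
            public Widget Focused { get; private set; }

            public void SetFocus(Widget w) { Focused = w; }

            public void Dispatch(Button b)
            {
                // Give the event to the focused widget first, then walk up the
                // parent chain until somebody consumes it.
                for (var w = Focused; w != null; w = w.Parent)
                {
                    if (w.HandleButton(b))
                        return;
                }
            }
        }

    A widget that consumes a "cycle" input can then call SetFocus with whichever sibling (via its Next link) or child it wants to hand focus to, and the application can fall back to a default order when the widget has no preference.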

    Read the article

  • How to pick a great working team?

    - by Javierfdr
    I've just finished my master's and I'm starting to dig into the working world, i.e. learning how programming teams and technology companies work in the real world. I'm starting to design the idea of my own service or product based on free software, and I will require a well-coupled, enthusiastic and fluid team to build the idea. My problem is that I'm not sure which would be the best skills to ask for in a programming team of 4-5 members. I have many friends and acquaintances with whom I've worked during my studies. Most of those I have in mind are very capable and smart people, with a good logic and programming base, although some of them have characteristics that I believe could influence the group negatively: lack of communication, fear of debating ideas, unwillingness to give in when debating, lack of structured programming practices (testing, good commenting, up-front design and analysis). Some of them have these negative characteristics, but most of them have a lot of enthusiasm, good working skills (from an individual point of view), and the ability to see the whole picture. The question is: how do I pick the best team for a large-scale project with a lot of programming? Which of these negative traits do you think are just too influential? Which can be softened with good leadership? Which good skills are to be expected? And any other opinion about the social and programming skills of a programming team is welcome.

    Read the article

  • Turn-based JRPG battle system architecture resources

    - by BenoitRen
    For the past few months I've been busy programming a 2D JRPG (Japanese-style RPG) in C++ using the SDL library. The exploration mode is more or less done. Now I'm tackling the battle mode. I have been unable to find any resources about how a classic turn-based JRPG battle system is structured; all I find are discussions about damage formulas. I've tried googling, searching gamedev.net's message board, and crawling through C++-related questions here on Stack Exchange. I've also tried reading the source code of existing open-source RPGs, but without a guide of some sort it's like trying to find a needle in a haystack. I'm not looking for a set of rules like D&D or anything similar. I'm talking purely about code and object structure design. A battle system asks the player for input using menus. Next the battle turn is executed as the heroes and the enemies execute their actions. Can anyone point me in the right direction? Thanks in advance.
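
    To make the "collect commands, then execute the turn" description concrete, here is one common shape for such a system, offered as a rough sketch rather than a description of any particular engine. It is written in C# for brevity (the poster is using C++ and SDL) and all names are invented:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public class Combatant
        {
            public string Name;
            public int Hp;
            public int Speed;
            public bool IsHero;
        }

        // One queued command: who acts, on whom, and what the action does.
        public class BattleAction
        {
            public Combatant Actor;
            public Combatant Target;
            public Action<Combatant, Combatant> Execute;
        }

        public enum BattlePhase { SelectCommands, ResolveTurn, Victory, Defeat }

        public class Battle
        {
            private readonly List<Combatant> _combatants;
            private readonly List<BattleAction> _queue = new List<BattleAction>();

            public BattlePhase Phase { get; private set; }

            public Battle(List<Combatant> combatants)
            {
                _combatants = combatants;
                Phase = BattlePhase.SelectCommands;
            }

            // The menu/UI layer calls this once for every living combatant.
            public void QueueAction(BattleAction action) { _queue.Add(action); }

            public void ResolveTurn()
            {
                Phase = BattlePhase.ResolveTurn;

                // Classic JRPG ordering: fastest combatants act first.
                foreach (var action in _queue.OrderByDescending(a => a.Actor.Speed))
                {
                    if (action.Actor.Hp <= 0) continue;           // dead actors skip their action
                    action.Execute(action.Actor, action.Target);  // the damage formula lives here
                }
                _queue.Clear();

                if (_combatants.Where(c => !c.IsHero).All(c => c.Hp <= 0)) Phase = BattlePhase.Victory;
                else if (_combatants.Where(c => c.IsHero).All(c => c.Hp <= 0)) Phase = BattlePhase.Defeat;
                else Phase = BattlePhase.SelectCommands;
            }
        }

    The point of the split is that the menus only fill the action queue, while the resolution step is pure data, which keeps the damage formulas, battle animations, and the AI that picks enemy commands separable from each other.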

    Read the article

  • Introducing a (new) test method to a team

    - by Jon List
    A couple of months ago I was hired in a new job (I'm fresh out of my Masters in software engineering). The company mainly consists of ERP consultants, but I was hired in their fairly small web department (6 developers); our main task is ERP/ecom integration (ERP-integrated web shops). The department is growing, and recently my manager asked me to start thinking about introducing tests to the team. I love a challenge, but frankly I'm a bit scared (I'm the least experienced member of the team). Currently the method of testing is clicking around in the web shop and asking the customer if the products are there, if they look okay, and if orders are posted correctly to the ERP. We are getting a lot of support cases on previous projects, where a customer or a customer's customer has run into errors, which, I suppose, is why my manager wants more structured testing. Off the top of my head, I thought of some (obvious?) improvements, like looking at the requirement specification, having an issue tracker, enabling team members to register their time on a "tests" line on the budget, and circulating tasks amongst members of the team. But as I see it we have three main challenges:
    1. General website testing (Javascript, C#, ASP.NET and CMS integration tests)
    2. (Live) ERP integration testing (customers rarely want to pay for test environments)
    3. Adopting a method in the team
    I like the responsibility, but I am afraid that I'm in a little bit over my head. I expect that my manager expects me to set up some kind of workshop for the team where I present some techniques and ideas and where we (the team) can find some solutions together. What I learned in school was mostly unit testing and program verification, not so much testing across multiple systems and applications. What I'm looking for here is references/advice/pointers/anecdotes; anything that might help me to get smarter and to improve the current method of my team. Thanks!! (TL;DR: read the bold parts)

    Read the article

  • Do ORMs enable the creation of rich domain models?

    - by Augusto
    After using Hibernate on most of my projects for about 8 years, I've landed at a company that discourages its use and wants applications to interact with the DB only through stored procedures. After doing this for a couple of weeks, I haven't been able to create a rich domain model of the application I'm starting to build, and the application just looks like a (horrible) transaction script. Some of the issues I've found are:
    - Cannot navigate the object graph, as the stored procedures just load the minimum amount of data, which means that sometimes we have similar objects with different fields. One example: we have a stored procedure to retrieve all the data for a customer, and another to retrieve account information plus a few fields from the customer.
    - Lots of the logic ends up in helper classes, so the code becomes more structured (with entities used as old C structs).
    - More boring scaffolding code, as there's no framework that extracts result sets from a stored procedure and puts them in an entity.
    My questions are: Has anyone been in a similar situation and didn't agree with the stored procedure approach? What did you do? Is there an actual benefit to using stored procedures, apart from the silly point of "no one can issue a drop table"? Is there a way to create a rich domain using stored procedures? I know that there's the possibility of using AOP to inject DAOs/Repositories into entities to be able to navigate the object graph, but I don't like this option as it's very close to voodoo.
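
    To make the "boring scaffolding" point concrete, below is a minimal sketch of what each stored-procedure call tends to look like when the mapping is written by hand with ADO.NET. The procedure name, parameter and column names are invented for illustration and are not from the poster's system:

        using System.Data;
        using System.Data.SqlClient;

        public class Customer
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public static class CustomerDao
        {
            // Hand-rolled mapping from a stored procedure's result set to an entity.
            // Every procedure (and every partial "customer" shape it returns) needs
            // another method like this, kept in sync with the procedure by hand.
            public static Customer GetCustomer(string connectionString, int customerId)
            {
                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand("dbo.Customer_Get", conn))
                {
                    cmd.CommandType = CommandType.StoredProcedure;
                    cmd.Parameters.AddWithValue("@CustomerId", customerId);

                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        if (!reader.Read())
                            return null;

                        return new Customer
                        {
                            Id = reader.GetInt32(reader.GetOrdinal("CustomerId")),
                            Name = reader.GetString(reader.GetOrdinal("Name"))
                        };
                    }
                }
            }
        }

    Multiplied across dozens of procedures, and across the "similar objects with different fields" mentioned above, this is exactly the code an ORM, or at least a lightweight result-set mapper, would otherwise take care of.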

    Read the article

  • Oracle Unveils AutoVue Release 20.1

    - by prasenjit.niyogi(at)oracle.com
    We are extremely pleased to announce the availability of Oracle's AutoVue Release 20.1. AutoVue 20.1 is the latest major release of the family of Enterprise Visualization solutions from Oracle. Highlights of the release include:
    - Unparalleled new format support and enhancements for 3D CAD, 2D CAD, ECAD and PDF documents
    - New capabilities that support end-to-end design-to-manufacture processes in the Electronics & High Tech space, allowing manufacturing engineers to perform accurate manufacturability reviews through better support for variants, overlays and polarity
    - Significant printing enhancements, such as printing of markup notes, support for Excel file print settings, and printing in grayscale, which serve to optimize paper-based business processes
    - Powerful integration enablement capabilities to extend visualization into existing enterprise architectures and systems, including AutoVue Hotspots that enable visual navigation and action by linking visual data to structured enterprise data, and new AutoVue Document Print Services (DPS) to enrich enterprise applications with format- and platform-agnostic printing of any document type
    - Improvements for cost-effective AutoVue deployment and administration, including support for virtualization

    Release 20.1 Webcast - Attend the webcast on April 13th at 12:00 pm EST to discover what is new and exciting in the latest release. Encourage your customers, prospects, and partners to attend.
    Title: Oracle Unveils AutoVue Release 20.1
    Channel: Oracle AutoVue Channel
    Register Here: http://www.brighttalk.com/webcast/26282

    To discover more about the latest release, and to find out what customers and partners are saying about the value of this offering, check out the What's New in AutoVue 20.1 Datasheet. You can also learn all about the latest format support in the AutoVue 20.1 Format Support Sheet. We look forward to seeing you at the webcast. If you have any questions feel free to ask, and we will answer them in this forum. Enjoy AutoVue 20.1!

    Read the article

  • links for 2011-02-07

    - by Bob Rhubart
    Creating JAXWS Service in WebLogic Workshop (Middleware Magic): Jay SenSharma shares "a simple demo which explains how we can create a Complex JAXWS WebService using WebLogic Workshop." (tags: WebLogic jaxws middleware)
    Wentari: Re-Learning PeopleSoft: "If I truly want to be an enterprise architect, what better way than to have hands on knowledge about all the Oracle offerings outside of my specialization in Siebel and OBIEE." -- Peter Yeung (tags: oracle otn businessintelligence obiee siebel)
    Andrejus Baranovskis's Blog: CreateWithParams Operation for Oracle ADF BC 11g: Oracle ACE Director Andrejus Baranovski illustrates how you can apply a CreateWithParams operation in two easy steps. (tags: oracle otn oracleace soa)
    APEX plugins contributed to the APEX community by AMIS developers (AMIS Technology blog): The APEX 4.0 plugin framework "allows for more organized, better structured development with lots more reuse potential," according to Oracle ACE Director Lucas Jellema. (tags: oracle otn oracleace apex)
    Oracle BI EE 11g and Oracle ADF - Part 2 - Real Time reporting using View Objects: Venkatakrishnan J looks into "another reporting innovation (by use of the common ADF Framework), i.e. real time reporting using BI EE by directly reversing metadata from a transactional application." (tags: oracle otn oracleace businessintelligence obiee adf)
    On-demand Webcast: Java in the Smart Grid (The Java Source): Learn more about the Smart Grid and the role that Java is poised to play in this important initiative. (tags: oracle otn java smartgrid)

    Read the article

  • How do you structure your shared code so that it is "re-findable" for new developers?

    - by awmckinley
    I started working at my current job about 8 months ago, and it's been one of the best experiences I've had as a young programmer. It's a small company, and both my co-developers are brilliant guys. One of the practices that they have both been encouraging is lots of code reuse. Our code base is mainly C#, and we're using a centralized revision control system. The way the repository is currently structured, there is a single folder in which all shared class libraries are placed (along with unit tests for each library), and our revision control system allows for sharing or linking those libraries out to other projects. What I'm trying to understand at this point is how the current structure of the folder can be made more conducive to finding those libraries again. I've talked to the other developers about this, and they agree that it's gotten a little messy. I find that I am sometimes "reinventing the wheel" because I didn't realize that there was an existing piece of code that solved a particular problem. The issue is complicated further by the fact that we're sharing some code between ASP.NET MVC2, WinForms, and Windows CE projects, and sharing code between applications built against multiple versions of .NET. How do other people approach this? Is the answer in naming the libraries in a certain way, or is it preferable to invest in some code-search software? Is the answer in doc comments? Should we be sharing libraries at all, or should we simply branch the class libraries for re-use? Thanks for any and all help!

    Read the article

  • How to organize a larger project with several sub-projects and their dependencies?

    - by RoToRa
    As a software developer, until now I've mostly worked on projects that were quite "monolithic", with hardly any dependencies on other projects, without build automation (no Make, Ant, Maven, etc.) and kept in a simple version control system (mostly Subversion) with just a few easily managed version branches. Now, together with some friends, I'm planning a project that is intended to run on multiple platforms (mostly mobile: Android, iOS, Kindle, Windows, etc.), and thus written in several languages and on different development platforms. This will lead to many dependencies: projects sharing the same resources (e.g. images), or projects dependent on each other (e.g. a core Java library project used by the Android and other Java-based implementations). So what I need is some basic information on how to answer questions such as: How should the VCS be structured? Would a client-server or a decentralized VCS be better? How do I decide which build automation system(s) to use? Since this is quite an open question, I guess for now it would be great if you could point me to any books or web resources that you can recommend for this topic.

    Read the article
