Search Results

Search found 439 results on 18 pages for 'accuracy'.


  • Are there any font rendering libraries for games development that support hinting?

    - by Richard Fabian
    I've used AngelCode's bitmap font generator quite a bit, and though it's very good, I wondered if there would be a way of using the hinting information to produce a more readable result: using hinting to vary stroke thickness based on size/pixel coverage. I imagine any solution would have to use the distance field technique presented in the Valve paper on smoothing fonts while maintaining or reducing asset size (http://www.gamedev.net/community/forums/topic.asp?topic_id=494612), but I haven't found any demos of it being used with hinting information turned on or included in the field gradients in any way. Another way of looking at this is whether there are any font bitmap generators that will output mipmaps that still maintain their readability in the face of pixel size. I think the lower mip levels would try to guarantee fill and space where necessary to maintain readability/topology over maintaining style/form (the point of hinting). In response to "Is there a reason you can't just render the size you want": the problem lies in the fact that font rasterisers currently don't render in 3D, and hinting information would matter by different amounts because pixel density differs along different axes, and even varies along the length of a string as size reduces over distance. For example, I only want horizontal hinting in a texture that is viewed from the side, and only really want vertical hinting in a font that is viewed from below or above. This isn't meant to be a renderer that tries to reproduce a perfect outline as accurately as possible, since hinting distorts the reality of the font; instead it is meant to be a rendering solution for fairly static scenes, but scenes that have 3D transformed and warped text layout. In this case legibility is important, more important than the accuracy of representation of the polygon shape.
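
    A minimal sketch (not part of the question) of the distance-field construction from the Valve paper, assuming NumPy and SciPy are available; a hinting-aware variant would additionally bias the field around stems and counters, which is left out here:

        import numpy as np
        from scipy.ndimage import distance_transform_edt

        def signed_distance_field(glyph, spread=4.0):
            # glyph: 2D boolean array, True inside the glyph outline.
            inside = distance_transform_edt(glyph)    # distance to nearest outside pixel
            outside = distance_transform_edt(~glyph)  # distance to nearest inside pixel
            sdf = inside - outside                    # positive inside, negative outside
            # Normalise to 0..1 around the edge so the field survives heavy downscaling.
            return np.clip(sdf / (2.0 * spread) + 0.5, 0.0, 1.0)

        # Example: a crude filled square standing in for a rasterised glyph.
        glyph = np.zeros((64, 64), dtype=bool)
        glyph[16:48, 16:48] = True
        field = signed_distance_field(glyph)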

    Read the article

  • Transferring a single wordpress site from shared account to vps

    - by N e w B e e
    I got a hosting account at Hostgator. It was a shared account and I had three websites on it. Now I have upgraded my account to a VPS at Hostgator and I want to transfer only ONE website, say mydom.net, to the new VPS. This website includes a WordPress installation as well as other custom pages and setup. Can somebody please guide me on how to transfer the site to my new account quickly, accurately, and in a way that keeps my website working? What should I do about WordPress? Will simply copying it work? (I don't think so.) If not, how can I move it? I need a guideline, and I am asking the question in the hope that many others will also learn from it, just as I am learning. Thanks to all. I wasn't sure of the right place to ask this question; sorry if I got something wrong or asked it in the wrong place.
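
    One common way to move a single WordPress site is to copy the files, dump and re-import the database, and then point wp-config.php at the new database. A rough sketch of those steps, assuming SSH access to both accounts; the hostnames, paths and database name below are placeholders, not actual Hostgator values:

        import subprocess

        OLD_HOST = "user@shared.example.com"   # placeholder for the old shared account
        SITE_DIR = "public_html/mydom.net/"    # placeholder path to the site's files
        DB_NAME = "wp_mydom"                   # placeholder database name

        # 1. Copy the WordPress files (core, themes, plugins, uploads, custom pages) to the VPS.
        subprocess.run(["rsync", "-az", f"{OLD_HOST}:{SITE_DIR}",
                        "/home/vpsuser/public_html/mydom.net/"], check=True)

        # 2. Dump the database on the shared account and load it into the VPS database.
        subprocess.run(f"ssh {OLD_HOST} 'mysqldump {DB_NAME}' > {DB_NAME}.sql", shell=True, check=True)
        subprocess.run(f"mysql {DB_NAME} < {DB_NAME}.sql", shell=True, check=True)

        # 3. Edit wp-config.php on the VPS so DB_NAME, DB_USER, DB_PASSWORD and DB_HOST match
        #    the new database; if the URL changes, also update siteurl/home in wp_options.

    Copying the files alone is indeed not enough: the database holds the posts, pages and settings, so both pieces have to move together.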

    Read the article

  • Oracle E-Business Suite (WebADI) integration with Oracle Open Office

    - by Harald Behnke
    Another highlight of the new Oracle Open Office Release 3.3 enterprise features is the Oracle E-Business Suite Release 12.1 (WebADI) integration. The WebADI integration in Oracle Open Office for Windows allows you to bring your Oracle E-Business Suite data into an Oracle Open Office Calc spreadsheet, where familiar data entry and modeling techniques can be used to complete your E-Business Suite tasks. You can create formatted spreadsheets on your desktop that allow you to download, view, edit, and create Oracle E-Business Suite data. Use data entry shortcuts (such as copying and pasting or dragging and dropping ranges of cells), or Calc's Open Document Format (ODF V1.2) compliant spreadsheet formulas, to calculate amounts and save time. You can combine speed and accuracy by invoking lists of values for fields within the spreadsheet. After editing the spreadsheet, you can use WebADI's validation functionality to validate the data before uploading it to the Oracle E-Business Suite. Validation messages are returned to the spreadsheet, allowing you to identify and correct invalid data. A video with a hands-on demonstration of the Oracle E-Business Suite integration is available. Read more about the Oracle Open Office enterprise features.

    Read the article

  • Python or Ruby for freelance?

    - by Sophia
    Hello, I'm Sophia. I have an interest in self-learning either Python, or Ruby. The primary reason for my interest is to make my life more stable by having freelance work = $. It seems that programming offers a way for me to escape my condition of poverty (I'm on the edge of homelessness right now) while at the same time making it possible for me to go to uni. I intend on being a math/philosophy major. I have messed with Python a little bit in the past, but it didn't click super well. The people who say I should choose Python say as much because it is considered a good first language/teaching language, and that it is general-purpose. The people who say I should choose Ruby point out that I'm a very right-brained thinker, and having multiple ways to do something will make it much easier for me to write good code. So, basically, I'm starting this thread as a dialog with people who know more than I do, as an attempt to make the decision. :-) I've thought about asking this in stackoverflow, but they're much more strict about closing threads than here, and I'm sort of worried my thread will be closed. :/ TL;DR Python or Ruby for freelance work opportunities ($) as a first language? Additional question (if anyone cares to answer): I have a personal feeling that if I devote myself to learning, I'd be worth hiring for a project in about 8 weeks of work. I base this on a conservative estimate of my intellectual capacities, as well as possessing motivation to improve my life. Is my estimate necessarily inaccurate? random tidbit: I'm in Portland, OR I'll answer questions that are asked of me, if I can help the accuracy and insight contained within the dialog.

    Read the article

  • Monitor-Specific Color Profile Results In Yellow Grayscale Images

    - by Zian Choy
    I recently purchased a new Acer S201hl monitor. Many lay reviews compliment its color accuracy with people noting only a bit of a blueish tinge. After a little time, Windows found, and I installed, the Acer drivers via Microsoft Update. During the installation process, the software installed an ICC profile for the monitor from Acer. I recently noticed that when I view photos using Windows Live Photo Gallery, the colors are wrong. For example, grayscale document scans appear with yellow backgrounds instead of white backgrounds. This happens with both my external monitors and my ThinkPad's built-in screen. When I removed the monitor-specific profile from the list of profiles associated with a monitor (for example, removing the Acer profile from the Acer monitor), the problem went away for that screen. I checked with Microsoft KB939395 and though it says "an incorrect color profile [...] is used for the monitor," the profile associated with my ThinkPad's screen seemed to be correct, based on its name.

    Read the article

  • Planning for the Recovery

    - by john.orourke(at)oracle.com
    As we plan for 2011, there are many positive signs in the global economy, but also some lingering issues. Planning is no longer about extrapolating past performance and adjusting for growth. It is now about constantly testing the temperature of the water, formulating scenarios, assessing risk and assigning probabilities. So how does one plan for recovery and improve forecast accuracy in such a volatile environment? Here are some suggestions from a recent article I wrote, which was published in the December Financial Planning & Analysis (FP&A) newsletter from the AFP (Association for Financial Professionals):

    - Increase the frequency of forecasting
    - Get more line managers involved in the planning and forecasting process
    - Re-consider what's being measured - i.e. key financial and operational metrics
    - Incorporate risk and probability into forecasts
    - Reduce reliance on spreadsheets - leverage packaged EPM applications

    To learn more about these best practices, check out the FP&A section of the AFP website and register to receive the FP&A newsletter. AFP recently launched a new topic area focused on the FP&A function and items of interest to this group of finance professionals. In addition to the FP&A quarterly newsletter, AFP will be publishing articles, running webinars and will have an FP&A track in its annual conference, which is in Boston next November. Brian Kalish, AFP's Finance Lead, is hoping this initiative creates a valuable networking and information-sharing resource for FP&A professionals. Here's a link to the FP&A page on the AFP web site: http://www.afponline.org/pub/res/topics/topics_fpa.html If you register on the site you can access and subscribe to the FP&A newsletter and other resources. Best of luck in your planning for 2011 and beyond!

    Read the article

  • Oracle MDM at the MDM Summit in San Francisco

    - by David Butler
    Oracle is sponsoring the Product MDM track at this year’s MDM & Data Governance San Francisco Summit. Sachin Patel, Director of Product Strategy, Product Hub Applications, at Oracle will present the keynote: Product Master Data Management for Today’s Enterprise. Here’s the abstract: Today businesses struggle to boost operational efficiency and meet new product launch deadlines due to poor and cumbersome administrative processes. One of the primary reasons enterprises are unable to achieve cohesion is due to various domain silos and fragmented product data. This adversely affects business performance including, but not limited to, excess inventories, under-leveraged procurement spend, downstream invoicing or order errors and lost sales opportunities. In this session, you will learn the key elements and business processes that are required for you to master an enterprise product record. Additionally you will gain insights into how to improve the accuracy of your data and deliver reliable and consistent product information across your enterprise. This provides a high level of confidence that business managers can achieve their goals. In this session, you will understand how adopting a Master Data Management strategy for product information can help your enterprise change course towards a more profitable, competitive and successful business. Cisco Systems will join Sachin and cover their experiences, lessons learned and best practices. If you are in the Bay Area and interested in mastering your product data for the benefit of multiple applications, business processes and analytical systems, please join us at the Hyatt, Fisherman’s Wharf this Thursday, June 30th.

    Read the article

  • Camera field of view: 3D projections & trigonometry

    - by Thomas O
    Okay, here goes. I have a camera at (Xc, Yc, Zc). The Xc and Yc coordinates are latitude/longitude, and the Zc coordinate is an altitude in metres. I have a point at (Xp, Yp, Zp) and a field of view on the camera (Th1, Th2), where Th1 is the horizontal FOV and Th2 is the vertical FOV. Given this information, I'd like to: (1) test if the point is visible (i.e. in the camera's FOV), and (2) project the point as the camera would see it. I've figured out already that the camera's horizontal view at any given distance is tan(Th1) * distance, but I don't know how to test whether the point is visible. Accuracy is not critical. I would prefer a simple solution over a complicated solution, if it works well enough. The computations will be performed by a small microcontroller, which isn't very fast at things like trig functions. P.S. this is not homework, I'm doing this for some game development. It will be integrated with the real world, hence the latitude/longitude/altitude. It involves flying real RC planes through virtual hoops (or chasing virtual targets), so I have to project the positions of these hoops on a display.
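
    Not from the question, but a minimal sketch of one way to do the visibility test and projection, assuming the point has already been converted from latitude/longitude into metres relative to the camera, the camera looks along its local +y axis with no roll, and Th1/Th2 are full field-of-view angles:

        import math

        def project(px, py, pz, cx, cy, cz, fov_h, fov_v):
            # Offsets from camera to point (camera assumed to look along +y).
            dx, dy, dz = px - cx, py - cy, pz - cz
            if dy <= 0:
                return None                    # behind the camera
            ang_h = math.atan2(dx, dy)         # horizontal angle off the view axis
            ang_v = math.atan2(dz, dy)         # vertical angle off the view axis
            if abs(ang_h) > fov_h / 2 or abs(ang_v) > fov_v / 2:
                return None                    # outside the field of view
            # Normalised screen position in [-1, 1] on each axis.
            return ang_h / (fov_h / 2), ang_v / (fov_v / 2)

        # Point 50 m ahead and slightly right/up, with a 60 x 45 degree FOV.
        print(project(5, 50, 2, 0, 0, 0, math.radians(60), math.radians(45)))

    If the two atan2 calls are too slow for the microcontroller, the visibility test alone can be done without trig at runtime by comparing dx and dz against dy multiplied by precomputed tan(Th1/2) and tan(Th2/2) constants.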

    Read the article

  • How can I use iteration to lead targets?

    - by e100
    In my 2D game, I have stationary AI turrets firing constant speed bullets at moving targets. So far I have used a quadratic solver technique to calculate where the turret should aim in advance of the target, which works well (see Algorithm to shoot at a target in a 3d game, Predicting enemy position in order to have an object lead its target). But it occurs to me that an iterative technique might be more realistic (e.g. it should fire even when there is no exact solution), efficient and tunable - for example one could change the number of iterations to improve accuracy. I thought I could calculate the current range and thus an initial (inaccurate) bullet flight time to target, then work out where the target would actually be by that time, then recalculate a more accurate range, then recalculate flight time, etc etc. I think I am missing something obvious to do with the time term, but my aimpoint calculation does not currently converge after the significant initial correction in the first iteration:

        import math

        def aimpoint(iters, target_x, target_y, target_vel_x, target_vel_y, bullet_speed):
            aimpoint_x = target_x
            aimpoint_y = target_y
            range = math.sqrt(aimpoint_x**2 + aimpoint_y**2)
            time_to_target = range / bullet_speed
            time_delta = time_to_target
            n = 0
            while n <= iters:
                print "iteration:", n, "target:", "(", aimpoint_x, aimpoint_y, ")", "time_delta:", time_delta
                aimpoint_x += target_vel_x * time_delta
                aimpoint_y += target_vel_y * time_delta
                range = math.sqrt(aimpoint_x**2 + aimpoint_y**2)
                new_time_to_target = range / bullet_speed
                time_delta = new_time_to_target - time_to_target
                n += 1

        aimpoint(iters=5, target_x=0, target_y=100, target_vel_x=1, target_vel_y=0, bullet_speed=100)
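
    For comparison, a sketch (not from the question) of the usual fixed-point form of this iteration: each pass recomputes the aim point from the target's current position using the full estimated flight time, rather than accumulating time deltas, which is where the posted version loses the time term:

        import math

        def aimpoint(iters, target_x, target_y, target_vel_x, target_vel_y, bullet_speed):
            time_to_target = math.sqrt(target_x**2 + target_y**2) / bullet_speed
            for n in range(iters):
                # Where the target will be after the current flight-time estimate.
                aimpoint_x = target_x + target_vel_x * time_to_target
                aimpoint_y = target_y + target_vel_y * time_to_target
                # New flight time to that point; feeds the next pass.
                time_to_target = math.sqrt(aimpoint_x**2 + aimpoint_y**2) / bullet_speed
                print("iteration:", n, "aimpoint:", (aimpoint_x, aimpoint_y), "time:", time_to_target)
            return aimpoint_x, aimpoint_y

        aimpoint(iters=5, target_x=0, target_y=100, target_vel_x=1, target_vel_y=0, bullet_speed=100)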

    Read the article

  • Combined Likelihood Models

    - by Lukas Vermeer
    In a series of posts on this blog we have already described a flexible approach to recording events, a technique to create analytical models for reporting, a method that uses the same principles to generate extremely powerful facet based predictions and a waterfall strategy that can be used to blend multiple (possibly facet based) models for increased accuracy. This latest, and also last, addition to this sequence of increasing modeling complexity will illustrate an advanced approach to amalgamate models, taking us to a whole new level of predictive modeling and analytical insights: combination models predicting likelihoods using multiple child models. The method described here is far from trivial. We therefore would not recommend you apply these techniques in an initial implementation of Oracle Real-Time Decisions. In most cases, basic RTD models or the approaches described before will provide more than enough predictive accuracy and analytical insight. The following is intended as an example of how more advanced models could be constructed if implementation results warrant the increased implementation and design effort. Keep implemented statistics simple!

    Combining likelihoods

    Because facet based predictions are based on metadata attributes of the choices selected, it is possible to generate such predictions for more than one attribute of a choice. We can predict the likelihood of acceptance for a particular product based on the product category (e.g. ‘toys’), as well as based on the color of the product (e.g. ‘pink’). Of course, these two predictions may be completely different (the customer may well prefer toys, but dislike pink products) and we will have to somehow combine these two separate predictions to determine an overall likelihood of acceptance for the choice. Perhaps the simplest way to combine multiple predicted likelihoods into one is to calculate the average (or perhaps maximum or minimum) likelihood. However, this would completely ignore the fact that some facets may have a far more pronounced effect on the overall likelihood than others (e.g. customers may consider the product category more important than its color). We could opt for calculating some sort of weighted average, but this would require us to specify up front the relative importance of the different facets involved. This approach would also be unresponsive to changing consumer behavior in these preferences (e.g. product price bracket may become more important to consumers as a result of economic shifts). Preferably, we would want Oracle Real-Time Decisions to learn, act upon and tell us about the correlations between the different facet models and the overall likelihood of acceptance. This additional level of predictive modeling, where a single supermodel (no pun intended) combines the output of several (facet based) models into a single prediction, is what we call a combined likelihood model.

    Facet Based Scores

    As an example, we have implemented three different facet based models (as described earlier) in a simple RTD inline service. These models will allow us to generate predictions for likelihood of acceptance for each product based on three different metadata fields: Category, Price Bracket and Product Color. We will use an Analytical Scores entity to store these different scores so we can easily pass them between different functions.
    A simple function, creatively named Compute Analytical Scores, will compute for each choice the different facet scores and return an Analytical Scores entity that is stored on the choice itself. For each score, a choice attribute referring to this entity is also added, to be returned to the client to facilitate testing.

    One Offer To Predict Them All

    In order to combine the different facet based predictions into one single likelihood for each product, we will need a supermodel which can predict the likelihood of acceptance based on the outcomes of the facet models. This model will not need to consider any of the attributes of the session, because they are already represented in the outcomes of the underlying facet models. For the same reason, the supermodel will not need to learn separately for each product, because the specific combination of facets for this product is also already represented in the output of the underlying models. In other words, instead of learning how session attributes influence acceptance of a particular product, we will learn how the outcomes of facet based models for a particular product influence acceptance at a higher level. We will therefore be using a single All Offers choice to represent all offers in our combined likelihood predictions. This choice has no attribute values configured, no scores and not a single eligibility rule; nor is it ever intended to be returned to a client. The All Offers choice is to be used exclusively by the Combined Likelihood Acceptance model to predict the likelihood of acceptance for all choices, based solely on the output of the facet based models defined earlier.

    The Switcheroo

    In Oracle Real-Time Decisions, models can only learn based on attributes stored on the session. Therefore, just before generating a combined prediction for a given choice, we will temporarily copy the facet based scores—stored on the choice earlier as an Analytical Scores entity—to the session. The code for the Predict Combined Likelihood Event function is outlined below.

        // set session attribute to contain facet based scores.
        // (this is the only input for the combined model)
        session().setAnalyticalScores(choice.getAnalyticalScores());
        // predict likelihood of acceptance for All Offers choice.
        CombinedLikelihoodChoice c = CombinedLikelihood.getChoice("AllOffers");
        Double la = CombinedLikelihoodAcceptance.getChoiceEventLikelihoods(c, "Accepted");
        // clear session attribute of facet based scores.
        session().setAnalyticalScores(null);
        // return likelihood.
        return la;

    This sleight of hand will allow the Combined Likelihood Acceptance model to predict the likelihood of acceptance for the All Offers choice using these choice-specific scores. After the prediction is made, we will clear the Analytical Scores session attribute to ensure it does not pollute any of the other (facet) models. To guarantee our combined likelihood model will learn based on the facet based scores—and is not distracted by the other session attributes—we will configure the model to exclude any other inputs, save for the instance of the Analytical Scores session attribute, on the model attributes tab.

    Recording Events

    In order for the combined likelihood model to learn correctly, we must ensure that the Analytical Scores session attribute is set correctly at the moment RTD records any events related to a particular choice. We apply essentially the same switching technique as before in a Record Combined Likelihood Event function.
        // set session attribute to contain facet based scores
        // (this is the only input for the combined model).
        session().setAnalyticalScores(choice.getAnalyticalScores());
        // record input event against All Offers choice.
        CombinedLikelihood.getChoice("AllOffers").recordEvent(event);
        // force learn at this moment using the Internal Dock entry point.
        Application.getPredictor().learn(InternalLearn.modelArray, session(), session(), Application.currentTimeMillis());
        // clear session attribute of facet based scores.
        session().setAnalyticalScores(null);

    In this example, Internal Learn is a special informant configured as the learn location for the combined likelihood model. The informant itself has no particular configuration and does nothing in itself; it is used only to force the model to learn at the exact instant we have set the Analytical Scores session attribute to the correct values.

    Reporting Results

    After running a few thousand (artificially skewed) simulated sessions on our ILS, the Decision Center reporting shows some interesting results. In this case, these results reflect perfectly the bias we ourselves had introduced in our tests. In practice, we would obviously use a wider range of customer attributes and expect to see some more unexpected outcomes. The facetted model for categories has clearly picked up on the fact that our simulated youngsters have little interest in purchasing the one red-hot vehicle our ILS had on offer. Also, it would seem that customer age is an excellent predictor for the acceptance of pink products. Looking at the key drivers for the All Offers choice we can see the relative importance of the different facets to the prediction of overall likelihood. The comparative importance of the category facet for overall prediction might, in part, be explained by the clear preference of younger customers for toys over other product types, as evident from the report on the predictiveness of customer age for offer category acceptance.

    Conclusion

    Oracle Real-Time Decisions' flexible decisioning framework allows for the construction of exceptionally elaborate prediction models that facilitate powerful targeting, but nonetheless provide insightful reporting. Although few customers will have a direct need for such a sophisticated solution architecture, it is encouraging to see that this lies within the realm of the possible with RTD, and with limited configuration and customization required. There are obviously numerous other ways in which the predictive and reporting capabilities of Oracle Real-Time Decisions can be expanded upon to tailor to individual customers' needs. We will not be able to elaborate on them all on this blog; and finding the right approach for any given problem is often more difficult than implementing the solution. Nevertheless, we hope that these last few posts have given you enough of an understanding of the power of the RTD framework and its models, so that you can take some of these ideas and improve upon your own strategy. As always, if you have any questions about the above—or any Oracle Real-Time Decisions design challenges you might face—please do not hesitate to contact us; via the comments below, social media or directly at Oracle. We are completely multi-channel and would be more than glad to help. :-)
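
    Outside of RTD, the same idea can be sketched generically: treat the per-facet likelihoods as the only inputs to a small meta-model that learns how much weight each facet deserves. A toy illustration using scikit-learn (an assumption for the example, not part of the RTD implementation described above):

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        # Each row holds the facet-based likelihoods for one presented offer:
        # [category score, price-bracket score, colour score]; y = accepted or not.
        facet_scores = np.array([
            [0.80, 0.40, 0.10],
            [0.75, 0.55, 0.20],
            [0.20, 0.60, 0.90],
            [0.15, 0.50, 0.85],
        ])
        accepted = np.array([1, 1, 0, 0])

        # The "supermodel": learns the relative importance of each facet,
        # rather than learning directly from raw session attributes.
        combiner = LogisticRegression().fit(facet_scores, accepted)

        # Combined likelihood of acceptance for a new offer's facet scores.
        print(combiner.predict_proba([[0.70, 0.45, 0.30]])[0, 1])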

    Read the article

  • Need Help With Conflicting Customer Support Goals?

    - by Tom Floodeen
    It seems that at every OPS review, Customer Support Executives are asked to improve the customer KPIs while also improving gross margin. This is a tough road for even experienced leaders. You need to reduce your agents' research time while increasing their answer accuracy. You want to spend less time training them while growing the number of products and systems being used. You have to deal with increasing service volumes, but at the same time you need to focus on creating appropriate service insight. After all, to be a great support center you not only have to be good at answering questions, you also need to be good at preventing them. Five Key Benefits of Knowledge Management in Customer Service will help you start down the path of meeting these, and other, objectives. With Oracle Knowledge Products, fully integrated with Oracle’s CRM solutions, you can handle increased service demand while driving your costs down. And you can do both while positively impacting the satisfaction and loyalty of your customers. Take advantage of Oracle to not only provide you with a great integrated tool suite, but also with the vision to drive you down the path of success.

    Read the article

  • Best ways to collect location-based user input

    - by user359650
    I'm working on a website where users will be able to register and provide information about their location. In order to prevent users from inputting incorrect data, we don't want users to provide free-text information but instead choose from predefined values as much as possible. We believe there are 2 ways of providing those values: use an API to an external service provider, or create your own local database.

    APIs. Some resources:
    - https://developers.facebook.com/docs/reference/ads-api/get-autocomplete-data/
    - http://developer.yahoo.com/geo/geoplanet/
    Pros:
    - accuracy and completeness of data.
    - no maintenance related to updates of the data, as this is taken care of by the API provider.
    - easier/faster to get started (no need to create a local database, just implement the API).
    Cons:
    - degradation of performance when there are availability issues with the external API.
    - outages due to changes to the external API (until your code is updated to reflect those changes).
    - lock-in with the external provider.

    Local database. Some resources:
    - http://developer.yahoo.com/geo/geoplanet/data/
    - http://www.maxmind.com/app/geolitecity
    - http://download.geonames.org/export/dump/
    Pros:
    - no external dependency: improved stability and performance.
    Cons:
    - more work to get started (you need to create the database and the code to interact with it).
    - risks of inaccurate/incomplete data, either initially or over time.
    - more maintenance work to keep the database up to date.

    Assuming the depth of information requested from users is as follows:
    - country: interested in the value. Also used to narrow down the list of regions.
    - region (state in the US, county in the UK...): not interested in the value itself, only used to narrow down the list of cities.
    - city: interested in the value (which can be used to work out the related region should we need regional statistics).
    - address: interested in the value, although OPTIONAL.

    Which option (API or local database) would you choose? What tips would you give for the implementation? What other resources can you share?
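
    Whichever source is chosen, the cascading behaviour described above is straightforward: the country selection filters the regions, and the region selection filters the cities. A minimal sketch with an in-memory structure (the data below is illustrative, not taken from any of the linked dumps):

        # country -> region -> list of cities; in practice this would be loaded
        # from the GeoNames/MaxMind dump or fetched from the chosen API.
        PLACES = {
            "US": {"Oregon": ["Portland", "Salem"], "California": ["San Francisco"]},
            "GB": {"Greater London": ["London"], "Kent": ["Canterbury"]},
        }

        def regions(country):
            return sorted(PLACES.get(country, {}))

        def cities(country, region):
            return sorted(PLACES.get(country, {}).get(region, []))

        print(regions("US"))            # populate the region dropdown
        print(cities("US", "Oregon"))   # populate the city dropdown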

    Read the article

  • Oracle Supply Chain at Pella Showcase, April 24-25, 2012

    - by Stephen Slade
    Nothing promotes a product like a great customer testimony! For nearly a decade, Pella has been holding these 'open houses', or Showcases as they are called, to illustrate the use of Oracle products in their operations. Building custom windows and doors is not an easy task. With about a trillion combinations of unique sizes, colors and features available, getting a complex multi-unit custom order wrong is easy to do. I've been to a few of these Showcases and each time have been impressed by the precision, best practices and lean disciplines enacted at Pella. Operations representatives and users at Pella demonstrate the way in which they use Oracle Supply Chain products to deliver fulfillment excellence. Orders are all custom made and delivered in about a week. Factory tours are conducted and visitors have a chance to see Oracle in operation on the shop floor, driving informational flow and order accuracy in the 99+% range. It's a must-see for anyone considering expansion of their supply chain footprint. The event is April 24-25 in Pella, Iowa, outside Des Moines. This year, there is a separate track for CIOs and executives. Register at 1.800.820.5592 - ask for event 10281.

    Read the article

  • How do I account for changed or forgotten tasks in an estimate?

    - by Andrew
    To handle task-level estimates and time reporting, I have been using (roughly) the technique that Steve McConnell describes in Chapter 10 of Software Estimation. Specifically, when the time comes for me to create task-level estimates (right before coding begins on a project), I determine the tasks at a fairly granular level so that, whenever possible, I have no tasks with a single-point, 50%-confidence estimate greater than four hours. That way, the task estimation process helps with constructing the software while helping me not to forget tasks during estimation. I come up with a range of hours possible for each task also, and using the statistical calculations that McConnell describes along with my historical accuracy data, I can generate estimates at other confidence levels when desired. I feel like this method has been working fairly well for me. We are required to put tasks and their estimates into TFS for tracking, so I use the estimates at the percentage of confidence I am told to use. I am unsure, however, what to do when I do forget a task, or I end up needing to do work that does not neatly fall within one of the tasks I estimated. Of course, trying to avoid this situation is best, but how do I account for forgotten/changed tasks? I want to have the best historical data I can to help me with future estimates, but right now, I basically am just calculating whether I made the 50%-confidence estimate and whether I made it inside the ranged estimate. I'll be happy to clarify what I'm asking if needed -- let me know what is unclear.
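
    For the aggregation step, one common approach - in the spirit of, though not necessarily identical to, McConnell's formulas - is to sum the per-task means and variances and then read other confidence levels off a normal distribution:

        import math

        # (best_case, worst_case) hours per task; the 50% estimate is taken as the midpoint.
        tasks = [(2, 4), (1, 3), (3, 8), (2, 5)]

        mean = sum((lo + hi) / 2 for lo, hi in tasks)
        # Treat each range as roughly +/- 2 standard deviations wide.
        variance = sum(((hi - lo) / 4) ** 2 for lo, hi in tasks)
        sigma = math.sqrt(variance)

        # z-values for one-sided confidence levels.
        for conf, z in [(0.50, 0.0), (0.75, 0.67), (0.90, 1.28)]:
            print(f"{int(conf * 100)}% estimate: {mean + z * sigma:.1f} hours")

    Forgotten or changed tasks can then be logged as new line items with their own ranges, so the historical record distinguishes estimation error on known tasks from omissions.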

    Read the article

  • The effects of Agile Programming can alter the five desirable properties of modeling tools and techniques

    The effects of Agile Programming can alter the five desirable properties of modeling tools and techniques as documented by Pfleeger. The agile methodology does promote human understanding and communication through the use of short, iterative software development life cycles, which force stakeholders to review the project and adjust it for any requirement changes. Because of these consistent evaluations of a project and its requirements, processes are continually refined, upgraded, and compared against other alternatives to ensure the best design is delivered to the client. Due to the short, repetitive development cycles, increased time is devoted to process management, because requirements and designs could be constantly changing. This requires additional forecasting, monitoring, and planning for each iteration. Because things can change so rapidly, automated guidance in performing processes must be updated for each iteration, since the environment and the available reusable processes could change. In addition, the original guidance and suggestions for the project also need to be updated to account for these changes. In essence, automation of process execution is supported by the agile methodology because during every iteration all processes must be tested and evaluated to ensure process integrity and compliance with the customer's requirements. I do not think the agile approach diminishes modeling; in fact, I think it increases modeling, because before the start of every development cycle the models must be checked for accuracy against the changed requirements. So in essence, the time not spent on detailed modeling up front is spent again, iteration by iteration, as the project progresses.

    Read the article

  • Free MPEG cutting/cropping/editing tool

    - by Stijn Sanders
    I'm looking for a free tool to cut MPEGs that are created by recording from a USB cable receiver. There are a number of applications that do this, and some are free, but most of them decode the video and audio and pass them into a codec again to write the final output. I already have the MPEGs encoded the way I like, so what I need is a tool that can strip the beginning and end of the recording, and optionally the commercial breaks, by rearranging the data within the file, not re-encoding it. Right now I'm using some old software that came with the cable receiver, but it actually re-encodes the video when exporting. I've tried VirtualDub but had major trouble getting the audio and video streams to stay in sync (and this was also re-encoding). I found this, which comes pretty close, but it doesn't allow much accuracy in selecting cut positions.
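
    For what it's worth, cutting an already-encoded MPEG without re-encoding can be done with a stream copy, for example via ffmpeg's -c copy; a hedged sketch, assuming ffmpeg is installed and keeping in mind that cuts land on the nearest keyframe rather than an exact frame:

        import subprocess

        def cut(src, dst, start, end):
            # -c copy copies the existing audio/video packets instead of re-encoding them.
            subprocess.run(
                ["ffmpeg", "-i", src, "-ss", start, "-to", end, "-c", "copy", dst],
                check=True,
            )

        cut("recording.mpg", "trimmed.mpg", "00:01:30", "00:46:00")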

    Read the article

  • SQL SERVER – Caption the Cartoon Contest – Last 2 Days

    - by pinaldave
    A developer's life is very interesting: we often want to start the day early at the job so we can go home early. However, that day never comes, as the life of the developer is always about working late hours. If the developer gets to the office early, there is a good chance that his co-workers will come late. Additionally, I am confident that there will always be something urgent for developers or DBAs to solve right at the time they are ready to go home. This is the life of the developers! Here is the interesting story of a DBA who was about to go home. He had to take his girlfriend to a movie and dinner in 30 minutes. However, his manager asked him to fix the performance-related issues with their production server. In a normal case, he would have had only two choices: a) job or b) girlfriend. Well, our superhero DBA decided to use efficient tools and improve the performance of the production server in merely 30 minutes. When he was done, his manager was absolutely surprised by the efficiency and accuracy of his work. He asked him the following question - Here is the contest - you need to guess what the answer of our superhero DBA was. If you guess the answer correctly you may win a Star Wars R2-D2 Inflatable Remote Controlled device. Additionally, if you download DB Optimizer before Dec 8, 2012, you will be eligible for a USD 25 Amazon Gift Card (there are a total of 10 such awards). Please do not leave comments in this thread - to participate in the contest, please leave a comment on the original contest page. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Is there an open source clone of a game in the Total War Series?

    - by sinekonata
    I loved Shogun: Total War's gameplay, and later spent weeks re-enacting historical wars and battles with Europa Barbarorum. It's a mod for Rome: Total War that focuses on historical accuracy in the peoples, units, sounds, visuals - everything from macro mechanics to the actual battles (e.g. a lot more missiles). Since that time I have kind of turned my back on Windows because it sucks, and use Linux because Mac sucks even worse. So, as I miss that game (Europa Barbarorum) and consider it the most realistic RTS to date, I'd like to know if there are any free and open source alternatives to it, because ever since I've been on Linux I have become addicted to FOSS, and I have also turned my back on paying (even Kickstarters) for closed-source, pay-to-play games. I have found a clone/alternative for every one of the best games, like Minecraft, CSS, Natural Selection, TA/SupCom etc. It's kind of the last one I need. The Spring engine is amazing, for example - is there another open source project of the sort in current development? Or would Spring itself be enough (it certainly looks capable) to make it? Thanks in advance guys...

    Read the article

  • Floating point undesirable in highly critical code?

    - by Kirt Undercoffer
    Question 11 in the Software Quality section of "IEEE Computer Society Real-World Software Engineering Problems" (Naveda, Seidman) lists floating-point computation as undesirable because "the accuracy of the computations cannot be guaranteed". This is in the context of computing acceleration for an emergency braking system for a high speed train. This thinking seems to invoke possible errors in small differences between measurements of a moving object, but small differences at slow speeds aren't a problem (or shouldn't be), and small differences between two measurements at high speed are irrelevant - so can there really be a problem with small roundoff errors during deceleration for an emergency braking system? This problem has been observed with airplane braking systems, resulting in hydroplaning, but could it actually happen in the context of a high speed train? The concern about floating-point errors seems not to be well founded in this context. Any insight? The floating point is used for acceleration, so perhaps the concern is inching over a speed limit? But floating point should be just fine if they use a double in whatever implementation language. The actual problem in the text states: During the inspection of the code for the emergency braking system of a new high speed train (a highly critical, real-time application), the review team identifies several characteristics of the code. Which of these characteristics are generally viewed as undesirable?

    - The code contains three recursive functions (well, that one is obvious).
    - The computation of acceleration uses floating point arithmetic. All other computations use integer arithmetic.
    - The code contains one linked list that uses dynamic memory allocation (second obvious problem).
    - All inputs are checked to determine that they are within expected bounds before they are used.
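
    The underlying worry is easy to demonstrate: repeated binary floating-point arithmetic drifts slightly, while the same computation in scaled integers stays exact. A small illustration (not from the text):

        # Decelerate from 300 m/s by 0.01 m/s per tick for 10,000 ticks; the exact answer is 200 m/s.
        fp_speed = 300.0            # metres/second in binary floating point
        int_speed_cms = 30_000      # the same speed held as integer centimetres/second

        for _ in range(10_000):
            fp_speed -= 0.01        # 0.01 has no exact binary representation
            int_speed_cms -= 1      # exact

        print(fp_speed)             # something very close to, but not exactly, 200.0
        print(int_speed_cms / 100)  # exactly 200.0

    The drift is also tiny, which is the counterpoint raised above: with doubles it sits many orders of magnitude below anything an emergency braking controller could act on.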

    Read the article

  • Is the Zune HD's audio card better or worse than the iPod touch's?

    - by MatthewThepc
    Firstly, if this is the wrong site to ask this question I apologize, but I didn't see one for "music players" on the Stack Exchange website :) After reading a few people online say that music playing from a Zune HD sounds better to them than on an iPod touch, I was wondering whether there's any truth to that. From what I can tell, the Zune HD uses a Wolfson Microelectronics WM8352, while the first-generation iPod touch (which the Zune HD was competing with) used a Wolfson Microelectronics WM8758BG, and newer models use the Cirrus Logic CS4398 and CS42L61. Which ones are better (to make the question less subjective, let's say in terms of quality, range, and accuracy of output)? Admittedly, I have almost no idea how everything compares and works together, but it would seem to me, just by looking at the version numbers, that the iPod has been better since its launch. Is there anything else that affects sound quality? Thanks!

    Read the article

  • How to Assure an Effective Data Model

    As a general rule, in my opinion, the effectiveness of a data model is directly related to the accuracy and complexity of a project’s requirements. For example, there is no need to work on very detailed data models when the details surrounding a specific data model have not been defined or even clarified. Developing data models when the clarity of project requirements is limited tends to introduce design issues, because the details needed to create an effective data model are not yet known. One way to avoid this is to create data models that correspond to the complexity of the existing project requirements, so that when requirements are updated, new data models can be created based on any new discoveries about the requirements at a finer-grained level. This allows data models to be composed of general entities initially, when a project’s requirements are very vague, and then refined as new and more substantial requirements are defined or redefined. This promotes communication amongst all stakeholders within a project as they go through the process of defining and finalizing project requirements.

    In addition, here are some general tips that can be applied to projects in regards to data modeling:

    - Initially model all data generally, and slowly refactor the data model as new requirements and business constraints are applied to the project.
    - Ensure that data modelers have the proper tools and training they need to design a data model accurately.
    - Create a common location for all project documents so that everyone will be able to review a project’s data models along with any other project documentation.
    - All data models should follow a clear naming schema that tells readers the intended purpose of the data and how it is going to be applied within the project.

    Read the article

  • Business Forces: SOA Adoption

    The only constant in today’s business environment is change. Businesses that continuously foresee change and adapt quickly will gain market share and increased growth. In our ever-growing global business environment, change is driven by data - by how it is collected, maintained, verified and distributed. Companies today are made and broken over data. Would anyone still use Google if it did not have one of the most accurate search indexes on the internet? No, because its value is in its data and the quality of that data. Due to the increasing focus on data, companies have been adopting new methodologies for gaining more control over their data while attempting to reduce the costs of maintaining it. In addition, companies are also trying to reduce the time it takes to analyze data in regards to various market forces, to foresee changes prior to them actually occurring.

    Benefits of Adopting SOA

    - Services can be maintained separately from other services and applications, so that a change in one service will only affect itself and its client services or applications.
    - The advent of services allows system functionality to be distributed across a network or multiple networks.
    - The cost of maintaining business functionality is much higher in standard application development than with SOA, because one service can be maintained and shared with other applications instead of multiple instances of the same business functionality being duplicated, hard-coded, in several applications.
    - When multiple applications use a single service for a specific business function, all of the data being processed will be consistent in terms of quality and accuracy across those applications.

    Disadvantages of Adopting SOA

    - Increased initial costs and timelines are associated with SOA, because services need to be created and applications need to be modified to call them.
    - In order for an SOA project to be successful, the project must obtain company and management support in order to gain the proper exposure, funding, and attention. If SOA is new to a company, it must also fund the proper training in order for the project to be designed and implemented correctly.

    References: Tews, R. (2007). Beyond IT: Exploring the Business Value of SOA. SOA Magazine, Issue XI.

    Read the article

  • EDQ Technical Enablement for OPN (Prague - June 17-19)

    - by milomir.vojvodic
    Oracle Enterprise Data Quality (EDQ) Technical Enablement and Partner Training: Trusted Data for Your Enterprise Applications

    Oracle Enterprise Data Quality helps organizations achieve maximum value from their business-critical applications by delivering fit-for-purpose data. These products also enable individuals and collaborative teams to quickly and easily identify and resolve any problems in underlying data. With Oracle Enterprise Data Quality, customers can identify new opportunities, improve operational efficiency, and more efficiently comply with industry or governmental regulation. Oracle Enterprise Data Quality is designed to serve as a very channel-friendly platform for OPN. This means that pre-built extensions, components and even complete business solutions can readily be built and shared. This allows our customers/partners to be highly efficient in how they deploy custom business solutions, but also allows our partners to develop specialized components, domain knowledge and even complete business solutions.

    Training is suitable for:
    · Database administrators
    · Architects
    · Technical staff

    Objectives of the training - after completing this course, participants should:
    · Have an understanding of the core functionality of EDQ across profiling, auditing, transforming, parsing and matching data
    · Be able to describe some of the key capabilities and benefits delivered by EDQ
    · Be able to create and run standalone EDQ processes and jobs
    · Be ready to start working with data from customers and (with practice) be able to demonstrate EDQ to customers

    Agenda

    17th June - Fundamentals For Demoing (Profile, Audit, Transform and More): Profiling; Auditing; Transforming; Writing and exporting data; Jobs and scheduling; Publishing, packaging and copying EDQ processes; Introduction to the Customer Data Extension Pack; Realtime Processing via Web Services; The Server Console; Run Profiles; Data Interfaces; Sampling; Publishing metrics to the Dashboard; Users and security

    18th June - Matching: Matching overview; Basic matching configuration; Matching rule hierarchies; Clustering; Merging; Reviewing possible matches; Outputting Match Data; Case study

    19th June - Address Verification: Address Verification Overview; Configuration; Accuracy Flags. Parsing: Parsing Overview; Phrase profiling; Tailoring a CDEP Parser; Base Tokenization; Classification; Reclassification; Selection; Resolution

    Register Here. Don’t miss this FREE event. Space is limited.

    Oracle University, V Parku 2294/4, 148 00 Praha 4
    17.6. – 19.6. 2014, 09:00 – 17:30

    Read the article
