Search Results

Search found 2565 results on 103 pages for 'reduce'.

Page 53/103 | < Previous Page | 49 50 51 52 53 54 55 56 57 58 59 60  | Next Page >

  • Becoming a Certified Information Professional

    - by Lance Shaw
    Yesterday, we participated with AIIM in a webinar about the Certified Information Professional (CIP) program that they are now offering. The interest level in the program is very high, as evidenced by the high turnout at the event. You might be asking yourself, why does the Oracle WebCenter team care about an AIIM certification program? Well, we sponsored this program because we consistently find that the more educated our customers and prospects are, the more value they are going to get out of the technology we provide. As an ECM vendor, we provide plenty of WebCenter product training and certifications to help you make the most of WebCenter technology. While these are essential and valuable, technologists who also have an operational command of the business, and of the various impacts that the flow of information can have, are even more valuable to an organization. Thinking about the management of content and information and its effect on business process can have wide-ranging benefits, not only to your company but to your personal bottom line. And let's be honest, a customer who is looking holistically at how content is managed is going to see more opportunities to leverage that content and, in many cases, this will motivate the purchase of additional product licenses. Now if you are regretting the fact that you missed the webinar yesterday, never fear! It is now available for playback and you can view it at your convenience by visiting the AIIM website. We hope you find it informative and that you can personally profit from being able to showcase your certification as an Information Professional. Additionally, we hope it will help you identify additional opportunities to leverage Oracle WebCenter in order to further reduce your operational costs and drive your business forward.

    Read the article

  • Customer retention - why most companies have it wrong

    - by Michel Adar
    At least in the US market, it is quite common for service companies to offer an initially discounted price to new customers. While this may attract new customers and poach customers from competitors, it is my argument that it is a bad strategy for the company. This strategy gives an incentive to change companies and a disincentive to stay with the company. From the point of view of the customer, after 6 months of being a customer the company rewards that loyalty by raising the price. A better strategy would be to reward customers for staying with the company. For example, by lowering the cost by 5% every year (a compound discount, so it never actually reaches zero). This is a very rational thing for the company to do. Acquiring new customers and setting up their service is expensive, and new customers also tend to use more of the common resources like customer service channels. It is probably true for most companies that the cost of providing service to a customer of 10 years is lower than providing the same service in the first year of a customer's tenure. It is only logical to pass these savings on to the customer. From the customer's point of view, the competition would have to offer something very attractive, whether in terms of price or service, in order for the customer to switch. Such a policy would give an advantage to the first mover, but would probably force the competitors to follow suit. Overall, I would expect that this would reduce the mobility in the market, increase loyalty, increase the investment of companies in loyal customers and, ultimately, increase competition for providing a better service. Competitors may even try to break the scheme by offering customers the porting of their tenure, but that would not work that well because it would disenchant existing customers and would be costly, assuming that it is costlier to serve a customer through installation and the first year. What do you think? Is this better than using "save offers" to retain flip-floppers?
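
    To make the compounding concrete, here is a minimal sketch in Python of how a 5% annual loyalty discount behaves; the $100 starting price is purely illustrative.

        # Hypothetical illustration of a 5% compounding loyalty discount.
        # The starting price is made up; only the shape of the curve matters.
        base_price = 100.0
        discount = 0.05

        price = base_price
        for year in range(1, 11):
            price *= (1 - discount)   # each year's price is 95% of the previous year's
            print(f"year {year:2d}: {price:6.2f}")

        # The price approaches zero asymptotically but never reaches it,
        # which is the point of using a compound (multiplicative) discount.

    After ten years the price has only fallen to roughly 60% of the original, so the reward stays meaningful for the company while still compounding in the customer's favour.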

    Read the article

  • Algorithm to measure how "diffused" 5,000 pennies are in an economy?

    - by makerofthings7
    Please allow me to use this example/metaphor to describe an algorithm I need.
    Objects: There are 5 thousand pennies. There are 50 cups. There is a tracking history (a passport "stamp", etc.) associated with each penny as it moves between cups.
    Definition: I'll define a "highly diffused" penny as one that passes through many cups. A "poorly diffused" penny is one that just passes back and forth between two cups.
    Question: How can I objectively measure the diffusion of a penny in terms of: the number of moves the penny has gone through; the number of cups the penny has been in; a unit of time (day, week, month)?
    Why am I doing this? I want to detect if a cup is hoarding pennies.
    Resistance from bad actors: Since hoarding is bad, the "bad cup" may simply solicit a partner and just move pennies between the two of them. This will reduce the amount of time a coin isn't in transit, and would skew hoarding detection. A solution might be to detect if a cup (or set of cups) are common "partners" with each other, though I'm not sure how to think through this problem.
    Broad applicability: Any assistance would be helpful, since I would think that this algorithm is common to economics, the study of migration patterns (of animals, or of citizens of a country), and other naturally occurring phenomena... and probably exists as a term or concept I'm unfamiliar with.
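
    A minimal sketch of one possible scoring, in Python; the data layout (a list of (timestamp, cup) stamps per penny) and the particular ratios are illustrative assumptions, not a standard metric.

        from collections import Counter

        def diffusion_score(stamps, period_days=30.0):
            """Score how 'diffused' a penny is from its passport stamps.

            stamps: list of (timestamp_in_days, cup_id), ordered by time.
            Returns moves, distinct cups, moves per period, and the share of
            moves concentrated in the single most common pair of cups
            (a high share hints at two cups just ping-ponging the penny).
            """
            if len(stamps) < 2:
                return {"moves": 0, "cups": len({cup for _, cup in stamps}),
                        "moves_per_period": 0.0, "top_pair_share": 0.0}

            moves = len(stamps) - 1
            cups = len({cup for _, cup in stamps})
            elapsed = max(stamps[-1][0] - stamps[0][0], 1e-9)
            moves_per_period = moves / (elapsed / period_days)

            # Count each unordered pair of consecutive cups to spot ping-ponging partners.
            pairs = Counter(frozenset((a[1], b[1])) for a, b in zip(stamps, stamps[1:]))
            top_pair_share = max(pairs.values()) / moves

            return {"moves": moves, "cups": cups,
                    "moves_per_period": moves_per_period,
                    "top_pair_share": top_pair_share}

    Under these assumptions, a penny stamped through ten different cups in a month scores high on cups and moves_per_period, while one that ping-pongs between two partner cups shows a top_pair_share near 1.0 even if its raw move count looks healthy - one possible starting point for detecting colluding cups.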

    Read the article

  • Stylecop 4.7.37.0 has been released

    - by TATWORTH
    Stylecop 4.7.37.0 has been released at http://stylecop.codeplex.com/releases/view/79972
    The release notes follow:
    - Add docs for new SA1650 spelling rule.
    - Fix for 7395. Don't remove parenthesis around await expressions.
    - Insert a returns element into docs within a see element.
    - Update our tools folder StyleCop dll's.
    - Fix for 7392. Insert generic type docs for return types correctly.
    - Fix for 7393. Allow documentation elements with attributes to end the string and still be valid.
    - Make sure the MSBuild Task logs the warning id and type of exception. Unless the description field holds all this info VS cannot show the text in the Error List.
    - Load custom dictionaries for multiple cultures. For a culture like en-GB we load CustomDictionary.xml, then look for CustomDictionary.en-GB.xml and then CustomDictionary.en.xml.
    - Update standard shipping dictionaries.
    - Element documentation spelling fixes.
    - Reduce the standard dictionary.
    - Update our own devbuild StyleCop checks.
    - Don't check spelling of xml documentation attributes or anything inside <c> or <code> elements.
    - Update Styling. Styling update.
    - Add timestamps for all the dependent files into the StyleCopResults.cache.
    - Add a FileSystemWatcher to all custom dictionary files.
    - Write out the full violation into the StyleCopResults.cache.
    - Change a rule's description text.
    - Styling fixes.
    - Styling fixes.
    - NEW RULE: Check Spelling Of Element Documentation. Fix over 2000 spelling errors in our source code. Update the VS addin to show the rule violation in more detail. Add spelling checker to the deployment.
    - Set our own Culture to en-US.
    - Documentation spelling fixes.
    - First draft of the documentation spelling checker.
    - Fix for 7325. Don't throw 1126 in goto statements.
    - Fix for 7090. Add TargetsDir to registry during install.
    - Fix for 7060. Sort usings after moving them inside namespace.
    - Fix FxCop issues.
    - Fix for 7389. Detect CpuCount on Unix/MAC.
    - Fix for 6788. Allow opening curly brackets for scope. Added new tests.
    - Updating constants.
    - Fix for 7167. Show version number of StyleCop in VS Help window.
    - Only output StyleCop excluded files if there are any.

    Read the article

  • Business Intelligence (BI) Defined

    CIO.com defines Business Intelligence (BI) as a generic reference to a collection of applications that are used to analyze raw organizational data. Typical BI activities include data mining, online analytical processing, querying and reporting. They further explain that the primary reason a company would utilize BI is to make its data more accessible. The more accessible data is to the users, the faster they can identify ways to reduce business costs, discover new business opportunities, and react quickly to adjust prices based on current supply and demand. One area in which a hospital system could use BI derived from a data warehouse can be seen in the Emergency Room (ER), in regard to the number of doctors and nurses working during a full moon at each ER location. To determine this, BI needs to identify a trend in the number of patients seen on a full moon; furthermore, it needs to determine the optimal number of staff members working during a full moon by finding the employee-to-patient ratio that meets standard patient wait times while also being the most cost-effective for the hospital. This will allow the hospital system to estimate the number of potential patients it will have on the next full moon and adjust staff schedules accordingly, ensuring that patient care is not affected in any way by the influx (or lack of influx) of patients during this time, while also ensuring that only the minimum number of employees are working so that the hospital is still making a profit. Another area where a hospital system could use BI data concerns the orders placed with drug and medical supply companies. BI could identify trends in the prescriptions given to patients; this information could be used for ordering new supplies and forecasting the amount of medicine each hospital needs to keep on site at a given time. For example, a hospital might want to stock up on the materials needed to set bones in a cast prior to the summer, because its BI indicates that a majority of broken bones occur during the summer when children are out of school and have more free time.

    Read the article

  • 12.10 upgrade broke brightness keys [closed]

    - by Chris Morgan
    I have been running Ubuntu (64-bit) on my HP 6710b laptop (Core 2 Duo with integrated graphics) for several years, and the backlight brightness keys have always worked. Since I upgraded to Ubuntu 12.10 earlier today, those keys do not work any more. The secondary function keys:
    - Fn+F3: sleep; still works (and considerably faster than ever before!)
    - Fn+F8: battery info; still works
    - Fn+F9: reduce brightness; stopped working in 12.10
    - Fn+F10: increase brightness; stopped working in 12.10
    It may also be worthwhile mentioning that X does not appear to be receiving the brightness events at all, or at least not sending them out further. (I detected this with a key logger I wrote for a uni project, which uses X's Record extension; it is informed of the sleep and battery info keystrokes, but doesn't receive the brightness ones at all.) In the meantime, I know that I can use the Brightness & Lock settings screen to alter the brightness. (Wow! I can suddenly make my backlight darker than I could before—I can go right down to turning the backlight off, something I couldn't do before... but this model has a fairly dim screen, so I don't expect to use that much, if ever.) How can I get the brightness keys working again? This question is probably strongly related to "I can't control my Brightness in HP Compaq 6710s".

    Read the article

  • It could be worse....

    - by Darryl Gove
    As "guest" pointed out, in my file I/O test I didn't open the file with O_SYNC, so in fact the time was spent in OS code rather than in disk I/O. It's a straightforward change to add O_SYNC to the open() call, but it's also useful to reduce the iteration count - since the cost per write is much higher: ... #define SIZE 1024 void test_write() { starttime(); int file = open("./test.dat",O_WRONLY|O_CREAT|O_SYNC,S_IWGRP|S_IWOTH|S_IWUSR); ... Running this gave the following results: Time per iteration 0.000065606310 MB/s Time per iteration 2.709711563906 MB/s Time per iteration 0.178590114758 MB/s Yup, disk I/O is way slower than the original I/O calls. However, it's not a very fair comparison since disks get written in large blocks of data and we're deliberately sending a single byte. A fairer result would be to look at the I/O operations per second; which is about 65 - pretty much what I'd expect for this system. It's also interesting to examine at the profiles for the two cases. When the write() was trapping into the OS the profile indicated that all the time was being spent in system. When the data was being written to disk, the time got attributed to sleep. This gives us an indication how to interpret profiles from apps doing I/O. It's the sleep time that indicates disk activity.

    Read the article

  • Regulation of the software industry

    - by Flexo
    Every few years someone proposes tighter regulation for the software industry. This IEEE article has been getting some attention lately on the subject. "If software engineers who write programs for systems that expose the public to physical or financial risk knew they would be tested on their competence, the thinking goes, it would reduce the flaws and failures in code—and maybe save a few lives in the bargain." I'm skeptical about the value and merit of this. To my mind it looks like a land grab by those that proposed it. The quote that clinches that for me is: "The exam will test for basic knowledge, not mastery of subject matter", because the big failures (e.g. THERAC-25) seem to be complex, subtle issues that "basic knowledge" would never be sufficient to prevent. Ignoring any local issues (such as existing protections of the title Engineer in some jurisdictions): The aims are noble - avoid the quacks/charlatans1 and make that distinction more obvious to those that buy their software. Can tighter regulation of the software industry ever achieve its original goal? 1 Exactly as regulation of the medical profession was intended to do.

    Read the article

  • Scenes from OpenWorld Day One

    - by Larry Wake
    Sunday's the day that everything comes together, but there's always that last minute scramble. Here are a few peeks at what everyone's doing, and may still be doing far into the night. This is the team putting the final touches on the Hands-On Lab room for  HOL10201, "Reduce Risk with Oracle Solaris Access Control to Restrain Users and Isolate Applications". This should be a great learning experience--plus it's a chance to meet up with some of the top Solaris security people, including Glenn Faden and Darren Moffat. And here's the OTN Garage's own Rick Ramsey, working feverishly to help set up the Oracle Solaris Systems Pavilion. (Moscone South, Booth 733). Several of our featured partners will be demonstrating solutions running on Oracle Solaris systems -- plus, we'll be serving espresso, to help you power through the week. Another panorama shot, courtesy of iOS 6 -- come for the maps, stay for the photos.... Moscone South is also home once again this year to the systems and storage DEMOgrounds. Plenty to learn and see; you might even catch a glimpse of me there on Tuesday afternoon.

    Read the article

  • When mapping the surface of a sphere with tiles, how might you deal with polar distortion?

    - by clweeks
    It's easy to deal with the way locations interact on a clean Cartesian grid. It's just vanilla math. And you can kind of ignore the geometry of the sphere's surface for a bunch of it if you want to just truncate the poles or something. But I keep coming up with ideas for games where the polar space matters. Geo-coded ARGs and global roguelikes and stuff. I want square(ish?) locations -- reasonably representable by square tiles of the same size across the globe, anyway. This has to be a solved problem, right? What are the solutions? ETA: At the equator -- and assuming that your square locations are reasonably small -- it's close enough to true that you can get away with having one square in the rows north and south of the most equatorial row. And you could probably get away with that by just hand-waving the difference up to like 45 degrees or so. But eventually, you need to have fewer squares in a pole-ward circumferential row. If I reduce the length of the row by one and offset the squares by 1/2 then they're just like hexes and it's relatively easy to do the coding to keep track of the connections. But as you get pole-ward, it gets more and more extreme. Projecting the surface of the world onto the surface of a cube is tempting. But I figured there must be more elegant solutions already in use. If I did the cube thing (not dissecting it further through geodesy), are there any pros and cons related to placing the pole at the center of a face or at the vertex of three sides?
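
    As one illustration of the row-shrinking idea (not a full solution to the polar problem), here is a hedged Python sketch that sizes each circumferential row in proportion to the cosine of its latitude, so tiles stay roughly square until very close to the poles; the row layout and tile counts are assumptions.

        import math

        def tiles_per_row(num_rows=90, tiles_at_equator=360):
            """Rough tile counts per latitude row, shrinking toward the pole.

            Rows run from the equator toward one pole; each row gets a tile
            count proportional to cos(latitude), clamped to at least 1 so the
            pole-most row still exists.
            """
            counts = []
            for row in range(num_rows):
                lat_deg = (row + 0.5) * (90.0 / num_rows)   # row's central latitude
                scale = math.cos(math.radians(lat_deg))      # circumference shrinks with cos(lat)
                counts.append(max(1, round(tiles_at_equator * scale)))
            return counts

        # e.g. tiles_per_row()[0] is near 360, while the last few rows collapse to a handful of tiles.

    The awkward part this sketch exposes is exactly the one described above: adjacent rows no longer share a 1:1 tile boundary, so the connection bookkeeping between rows is where the real design work goes.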

    Read the article

  • Raspberry Pi Now Shipping with 512MB RAM; Still Only $35

    - by Jason Fitzpatrick
    Fans of the tiny Raspberry Pi will be pleased to hear the new version of their Model B board now ships with 512MB of RAM (up from the previous 256MB). The best part about the upgrade? The price point stays at $35 a board. From the official Raspberry Pi blog: One of the most common suggestions we’ve heard since launch is that we should produce a more expensive “Model C” version of Raspberry Pi with extra RAM. This would be useful for people who want to use the Pi as a general-purpose computer, with multiple large applications running concurrently, and would enable some interesting embedded use cases (particularly using Java) which are slightly too heavyweight to fit comfortably in 256MB. The downside of this suggestion for us is that we’re very attached to $35 as our highest price point. With this in mind, we’re pleased to announce that from today all Model B Raspberry Pis will ship with 512MB of RAM as standard. If you have an outstanding order with either distributor, you will receive the upgraded device in place of the 256MB version you ordered. Units should start arriving in customers’ hands today, and we will be making a firmware upgrade available in the next couple of days to enable access to the additional memory. We’re excited to get our hands on a new board and try out Raspbmc with that extra RAM.

    Read the article

  • Can Dungeons & Dragons Make You More Successful? [Video]

    - by Jason Fitzpatrick
    Dungeons & Dragons gets a bit of a bad rap in popular culture, but in this video treatise from Idea Channel, they propose that Dungeons & Dragons wires players for success. There are some deeply ingrained stereotypes about Dungeons & Dragons, and those stereotypes usually begin and end with people shouting “NERD!!!” But the reality of the D&D universe is a whole lot more complex. Rather than being an escape from reality, D&D is actually a way to enhance some important real life skillz! It’s a chance to learn problem solving, visualization, interaction, organization, people management… the list could go on and on. Plus, there are some very famous non-nerds who have declared an affinity for D&D, so best stop criticizing and join in if you want to be a successful at the game of life. While we’re trying not to let our love of all things gaming cloud our judgement, we’re finding it difficult to disagree with the premise that open-ended play fosters creative and adaptive thinking. Can Dungeons & Dragons Make You A Confident & Successful Person? [via Boing Boing] HTG Explains: What is the Windows Page File and Should You Disable It? How To Get a Better Wireless Signal and Reduce Wireless Network Interference How To Troubleshoot Internet Connection Problems

    Read the article

  • How can Agile methodologies be adapted to High Volume processing system development?

    - by luckyluke
    I am developing high volume processing systems: mathematical models that calculate various parameters based on millions of records, calculate derived fields over millions of records, process huge files containing transactions, etc. I am well aware of unit testing methodologies, and if my code is in C# I have no problem unit testing it. The problem is that I often have code in T-SQL, C# code that is a SQL stored assembly, SSIS workflows with a good amount of logic (and outcomes, etc.), or some SAS process. What is the approach you use when developing such systems? I usually develop several tests as stored procedures in a dedicated schema (TEST), then automatically run them overnight and check the results. But this is only for T-SQL, and continuous integration IS hard. The bigger problem is with testing SSIS packages. How do you test them? What is your preferred approach for stubbing data into tables (especially if you need a lot of data initialization)? I have some approaches derived over the years, but maybe I am just not reading enough articles. So, banking, telecom and risk developers out there: how do you test your mission-critical apps that process millions of records at day end, month end, etc.? What frameworks do you use? How do you validate that your SSIS package is correct (as you develop it)? How do you achieve continuous integration in such an environment (personally, I never got there)? How do you test your map-reduce jobs, for example (I do not use Hadoop, but this is quite similar)? I hope this is not too open-ended a question. luke
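
    For the T-SQL side specifically, one common pattern is to drive the database from ordinary unit-test code: stub known rows into the input tables, execute the stored procedure or model, and assert on the derived fields. Below is a minimal, hedged sketch using pytest and pyodbc; the connection string, tables, procedure and expected values are all hypothetical placeholders, not part of any real system.

        import pyodbc
        import pytest

        # Hypothetical connection string for a dedicated TEST database.
        CONN_STR = "DRIVER={ODBC Driver 17 for SQL Server};SERVER=testbox;DATABASE=TEST;Trusted_Connection=yes"

        @pytest.fixture
        def db():
            conn = pyodbc.connect(CONN_STR)
            yield conn
            conn.rollback()   # throw away the stubbed data after each test
            conn.close()

        def test_interest_calculation(db):
            cur = db.cursor()
            # Stub a small, known data set into the input table (hypothetical schema).
            cur.execute("DELETE FROM TEST.LoanInput")
            cur.executemany(
                "INSERT INTO TEST.LoanInput (LoanId, Principal, RatePct) VALUES (?, ?, ?)",
                [(1, 1000.00, 5.0), (2, 2000.00, 2.5)],
            )
            # Run the stored procedure under test (hypothetical name).
            cur.execute("EXEC TEST.CalculateMonthlyInterest")
            # Assert on the derived field it should have produced.
            rows = cur.execute(
                "SELECT LoanId, MonthlyInterest FROM TEST.LoanOutput ORDER BY LoanId"
            ).fetchall()
            assert [(r.LoanId, float(r.MonthlyInterest)) for r in rows] == [(1, 4.17), (2, 4.17)]

    The same shape can in principle wrap an SSIS or SAS step if the test can launch it (for example via subprocess and dtexec), though keeping such runs fast enough for continuous integration is usually the hard part.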

    Read the article

  • FILESTREAM in SQL Server 2008 R2

    - by CatherineRussell
    Much data is unstructured, such as text documents, images, and videos. This unstructured data is often stored outside the database, separate from its structured data. This separation can cause data management complexities. Or, if the data is associated with structured storage, the file streaming capabilities and performance can be limited. FILESTREAM integrates the SQL Server Database Engine with an NTFS file system by storing varbinary(max) binary large object (BLOB) data as files on the file system. Transact-SQL statements can insert, update, query, search, and back up FILESTREAM data. Win32 file system interfaces provide streaming access to the data. FILESTREAM uses the NT system cache for caching file data. This helps reduce any effect that FILESTREAM data might have on Database Engine performance. The SQL Server buffer pool is not used; therefore, this memory is available for query processing. FILESTREAM data is not encrypted even when transparent data encryption is enabled. To read more, go to: http://technet.microsoft.com/en-us/library/bb933993.aspx

    Read the article

  • Sustainability at Oracle OpenWorld

    - by Oracle OpenWorld Blog Team
    By Evelyn Neumayr
    Leading businesses - not to mention individuals - recognize that environmental responsibility is good business. Well-thought-out and well-structured environmental practices deliver a triple benefit: to people, profits, and our planet. IT, as a central part of most organizations' business strategies, plays a pivotal role in developing environmental initiatives. Any Oracle OpenWorld attendee interested in learning how to use Oracle products to reduce both their organization’s environmental footprint and their costs should attend one of the many sustainability sessions being held at the conference. If you can only attend one sustainability-focused session, this is the one not to miss, where you can learn about innovative sustainability practices from customers on the leading edge:
    Eco-Enterprise Innovation Awards and the Business Case for Sustainability
    Wednesday, October 3, Moscone West 3005
    10:15 - 11:15 a.m.
    If you can attend several sessions that have a sustainability focus, look here to find the listing of sessions that drill down into a specific product, where the discussion will focus on how that product can help achieve sustainability while improving enterprise operational efficiencies. Regardless of size and scope, all efforts are worthwhile. To learn more, go to the Sustainability Matters blog.

    Read the article

  • The MDM Journey: From the Customer Perspective

    - by Mala Narasimharajan
    Master Data Management is about more than just a single version of the truth or providing a 360-degree view of the customer. It spans multiple domains, ranging from customers to suppliers to products and beyond. MDM is pivotal to providing a solid customer experience - one that results in repeat business, continued loyalty and, last but not least, high customer satisfaction. Customer experience is not only defined as accurate information about the customer for the enterprise, but also as presenting the customer with the right information about products, orders, product availability, etc. Let's take a look at a couple of customer use cases with Oracle MDM. Below is a picture from a recent customer panel. Oracle MDM is a key platform for increasing upsell/cross-sell opportunities, improving the targeting of customers, uncovering new sales opportunities, reducing inaccuracies in mailing marketing materials to prospects, and tapping into the full value of a customer across business units more accurately. A leading investment and private bank leverages Oracle MDM to do a better job of identifying clients and their levels of investment, as well as to consistently manage them through a series of areas such as credit, risk, new accounts, etc. Ultimately, they are looking to understand client investments and touchpoints across the company's offerings. Another use case for Oracle MDM is with a major financial and insurance services company with clients worldwide, looking to resolve customer data inaccuracies and client information stored differently across multiple systems. For more information on Oracle Master Data Management, click here.

    Read the article

  • Compressing/compacting messages over websocket on Node.js

    - by icelava
    We have a websocket implementation (Node.js/Sock.js) that exchanges data as JSON strings. As our use cases grow, so has the size of the data transmitted across the wire. The websocket protocol does not natively offer any compression feature, so in order to reduce the size of our messages we'd have to manually do something about the serialisation. There appear to be a variety of LZW implementations in Javascript, some of which confuse me regarding their suitability for in-browser use only versus transmission across the wire, due to my lack of understanding of low-level encodings. More importantly, all of them seem to incur a noticeable performance drag when Javascript is the engine doing the compression/decompression work, which is not desirable for mobile devices. Looking instead at other forms of compact serialisation: MessagePack does not appear to have any active support in Javascript itself; BSON does not have a Javascript implementation; and an alternative BISON project that I tested does not deserialise everything back to its original values (large numbers), and it does not look like any further development will happen either. What are some other options others have explored for Node.js?
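
    For a rough sense of what manual compression of a JSON message buys, here is a small Python sketch using the standard zlib module (Node.js ships its own built-in zlib module based on the same DEFLATE algorithm); the payload is made up, and this says nothing about the client-side CPU cost the question worries about.

        import json
        import zlib

        # Hypothetical message payload.
        message = {"type": "update", "rows": [{"id": i, "value": i * 1.5} for i in range(500)]}

        raw = json.dumps(message).encode("utf-8")
        packed = zlib.compress(raw, level=6)   # DEFLATE-compressed bytes to send over the wire

        print(f"raw: {len(raw)} bytes, compressed: {len(packed)} bytes "
              f"({len(packed) / len(raw):.0%} of original)")

        # The receiving side reverses the two steps.
        restored = json.loads(zlib.decompress(packed).decode("utf-8"))
        assert restored == message

    Repetitive JSON like this typically compresses very well, which is why compressing the serialised string is often weighed against switching to a binary serialisation format in the first place.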

    Read the article

  • Where to look for challenging jobs with a relaxed atmosphere?

    - by RBTree
    I'm a dev at one of the big-name tech companies. I like the job for many reasons: I do interesting work on a cool product I solve challenging problems and use a lot of high-level skills (quantitative, creative, writing, presenting) It pays well The problem is that I feel I need a more relaxed atmosphere (shorter hours, less performance pressure, and more flexibility), in order to free up time for other pursuits and reduce stress. The ideal would be a job that's around 30-35 hours a week, where there is flexibility to work more or less in a given week. Can anyone suggest where to look for a job like this, where I wouldn't have to sacrifice too much on the above points? (Obviously I would have to sacrifice pay.) My employer does not generally offer part-time employment. The closest thing I can think of is when I did summer internships at my university's CS department. The work was very intellectually challenging, but if I needed to go home a couple hours early or get flexibility on a due date, nobody batted an eyelash. However, I'd like to find out if there are alternatives to academia since from what I've seen the pay there is a gigantic drop from what I'm currently making. I've done freelance development before, but I do like that as an employee of a large company I have a lot of things taken care of for me (e.g. benefits and guaranteed stable employment).

    Read the article

  • Oracle E-Business Suite is Helping to Save Lives at the National Marrow Donor Program

    - by Di Seghposs
    To improve the management of its life-saving operations, the National Marrow Donor Program recently modernized its financial and procurement operations by upgrading to Oracle E-Business Suite 12.1.   As the global leader in bone marrow and umbilical cord blood transplants, the NMDP manages a complex ecosystem of donor, patient, hospital, and biological data. “Maintaining accurate data and having an efficient matching process is essential, particularly as our global database of bone marrow patients grows and donor lists expand,” says Bruce Schmaltz, director of finance/controller. “We rely on the Oracle E-Business Suite to ensure our procurement and financial management processes meet the highest standards, enabling our growing non-profit to work swiftly and efficiently to help improve and save lives.” As the non-profit organization and its registry grew larger, NMDP needed a modern platform to store and integrate its financial information and complicated procurement process. It selected Oracle E-Business Suite for its ability to fit seamlessly into NMDP’s enterprise architecture. NMDP initially implemented Oracle E-Business Suite release 12 by leveraging Oracle Business Accelerators, which are rapid implementation tools and templates that help reduce implementation time and costs. With Oracle Financial Management and Oracle Procurement, NMDP has streamlined back-office processes and integrated its procure-to-pay business processes by leveraging industry leading accounts payable, accounts receivable, and general ledger modules. NMDP is currently rolling out Oracle Hyperion Performance Management applications and plans to implement Oracle Order Management and Oracle Advanced Pricing by the end of 2012. Read more details about NMDP’s modernization efforts.  For more updates on Oracle Financial Management Solutions, view our November 2012 Oracle Information InDepth Financial Management newsletter. Subscribe Now. 

    Read the article

  • Collision checking problem on a Tiled map

    - by nosferat
    I'm working on a pacman styled dungeon crawler, using the free oryx sprites. I've created the map using Tiled, separating the floor, walls and treasure into three different layers. After importing the map into libGDX, it renders fine. I also added the player character; for now it just moves in one direction, the player cannot control it yet. I wanted to add collision, and I was planning to do this by checking whether the player's new position is on a wall tile. Therefore, as you can see in the following code snippet, I get the tile type of the appropriate tile, and if it is not zero (since on that layer there is nothing except the wall tile) it is a collision and the player cannot move further:

        final Vector2 newPos = charController.move(warrior.getX(), warrior.getY());
        if(!collided(newPos)) {
            warrior.setPosition(newPos.x, newPos.y);
            warrior.flip(charController.flipX(), charController.flipY());
        }
        [..]
        private boolean collided(Vector2 newPos) {
            int row = (int) Math.floor((newPos.x / 32));
            int col = (int) Math.floor((newPos.y / 32));
            int tileType = tiledMap.layers.get(1).tiles[row][col];
            if (tileType == 0) {
                return false;
            }
            return true;
        }

    The character only moves one tile with this code. If I reduce the col value by two, it moves two more tiles. I think the problem will be around indexing, but I'm totally confused because the zero of libGDX's coordinate system is in the bottom left corner of the screen, and I don't know whether the tiles array's indexing is similar or not. The size of the map is 19x21 tiles and looks like the following (the starting position of the player is marked in blue).

    Read the article

  • Dealing with inflexible programmers.

    - by Singleton
    Sometimes programmers who work on a project for a long time get inflexible, and it becomes difficult to reason with them. Even if we do manage to convince them, they can be unlikely to implement our suggestions. For instance, I recently joined a project where the build & release process is too complicated and has unnecessary roadblocks. I suggested that we get rid of some of the development overhead (like filling in a few spreadsheets) just by integrating the defect management and version control tools (both are IBM Rational tools, so integration can be a very easy one-off effort). Also, if we use tools like Maven & Ant (the project involves Java and some COTS products), build & release can be simplified, which should reduce manual errors & intervention. I managed to convince others and I'm ready to put in the effort to develop a proof of concept. But the 'senior' developer is not willing, possibly because the current process makes him more valuable. How do we handle this situation without developing friction in the team?

    Read the article

  • Android - Efficient way to draw tiles in OpenGL ES

    - by Maecky
    Hi, I am trying to write efficient code to render a tile-based map on Android. For each tile I load the corresponding bitmap (just one time) and then create the corresponding tiles. I have designed a class to do this:

        public class VertexQuad {
            private float[] mCoordArr;
            private float[] mColArr;
            private float[] mTexCoordArr;
            private int mTextureName;
            private static short mCounter = 0;
            private short mIndex;

    As you can see, each tile has its x,y location, a color array, texture coordinates and a texture name. Now, I want to render all my created tiles. To reduce the OpenGL API calls (I read somewhere that the state changes are costly, and therefore I want to keep them to a minimum), I first want to hand ALL the coordinate arrays, color arrays and texture coordinates over to OpenGL. After that I run two for loops. The first one iterates over the textures and binds the texture. The second for loop iterates over all tiles and puts all tiles with the corresponding texture into an index buffer. After the second for loop has finished, I call gl.gl_drawElements() with the corresponding index buffer, to draw all tiles with the associated texture. For the next texture I do the same again. Now I run into some problems: allocating and filling the FloatBuffers at the start of each rendering cycle takes a lot of time. I just ran a test where I wanted to put 400 coordinates into a FloatBuffer, which took about 200ms. My questions now are: Is there a better way of handling the coordinate and color structures? How is this correctly done? This is obviously not the optimal way. ;) thanks in advance, regards Markus

    Read the article

  • Dealing with the node.js callback pyramid

    - by thecoop
    I've just started using node, and one thing I've quickly noticed is how quickly callbacks can build up to a silly level of indentation:

        doStuff(arg1, arg2, function(err, result) {
            doMoreStuff(arg3, arg4, function(err, result) {
                doEvenMoreStuff(arg5, arg6, function(err, result) {
                    omgHowDidIGetHere();
                });
            });
        });

    The official style guide says to put each callback in a separate function, but that seems overly restrictive on the use of closures, and making a single object declared in the top level available several layers down, as the object has to be passed through all the intermediate callbacks. Is it ok to use function scope to help here? Put all the callback functions that need access to a global-ish object inside a function that declares that object, so it goes into a closure?

        function topLevelFunction(globalishObject, callback) {
            function doMoreStuffImpl(err, result) {
                doMoreStuff(arg5, arg6, function(err, result) {
                    callback(null, globalishObject);
                });
            }
            doStuff(arg1, arg2, doMoreStuffImpl);
        }

    and so on for several more layers... Or are there frameworks etc to help reduce the levels of indentation without declaring a named function for every single callback? How do you deal with the callback pyramid?

    Read the article

  • Part 4: Development Standards or How to share

    - by volker.eckardt(at)oracle.com
    Although we usually introduce the custom development part in EBS projects as "a small piece only" that "we will avoid as much as possible", the development effort can be enormous and should therefore be well addressed by project standards. Any additional solution, software tool or product shall influence the custom development rules (by adding, removing or replacing sections). It is very common in EBS projects to create a so-called "MD.030 Development Standards" document and put everything related to development conventions into it. This document gets approval and will be shared among all developers. Later, additional sections have to be added, and usually the development lead is responsible for doing this. However, sometimes the development techniques used are not documented properly, and therefore the development solutions deviate from each other, or from the initially agreed standards. My advice would be the following: keep the MD.030 as a base document, and add a Wiki on top. The "Development Wiki" covers the following:
    - Collect input from every developer without updating the MD.030 directly
    - Collect additional topics that might need further specification
    - Allow a discussion about such topics by reviewing/updating the wiki directly
    - Add decisions or open questions right into it
    In one of my own projects we used this "Developer Wiki" quite extensively, and my experience is very positive. We had different sections in it, good cross references, but also additional material like code templates, links to external web pages, etc. By using this wiki, the development standards became "owned" by the right group of people, the developers. They recognized that information sharing can improve the overall development quality, but will also reduce the workload on individuals. Finally, the wiki was much more accurate and helpful for the daily development work than our initial MD.030, and we all decided to retire the document completely. Summary: Information sharing in the development area is very important! The usual "MD.030 Development Standards" is a good starting point, but should be combined with a "Development Wiki", allowing everyone to address and discuss necessary improvements. A well-structured Wiki can replace the document in some sections completely. Side Note: The corresponding task in Oracle OUM (Oracle Unified Method) is DS.050 'Determine Design and Build Standards'. Volker

    Read the article

  • Using a Vertex Buffer and DrawUserIndexedPrimitives?

    - by MattMcg
    Let's say I have a large but static world and only a single moving object in that world. To increase performance I wish to use a vertex and index buffer for the static part of the world. I set them up and they work fine; however, if I throw in another draw call to DrawUserIndexedPrimitives (to draw my one single moving object) after the call to DrawIndexedPrimitives, it will error out saying a valid vertex buffer must be set. I can only assume the DrawUserIndexedPrimitives call destroyed/replaced the vertex buffer I set. In order to get around this I must call device.SetVertexBuffer(vertexBuffer) every frame. Something tells me that isn't correct, as that kind of defeats the point of a buffer? To shed some light, the large vertex buffer is the final merged mesh of many repeated cubes (think Minecraft), which I manually create to reduce the number of vertices/indices needed (for example, two connected cubes become one cuboid, with the connecting faces cut out) and also the number of matrix translations (as it would suck to do one per cube). The moving objects would be other items in the world which are dynamic and not fixed to the block grid, so things like the NPCs who move constantly. How do I go about handling the large static world while also allowing objects to move about freely?

    Read the article
