Search Results

Search found 2210 results on 89 pages for 'techniques'.

Page 74 of 89

  • Part 6: Extensions vs. Modifications

    - by volker.eckardt(at)oracle.com
    Customizations = Extensions + Modifications. In EBS terminology, a customization is either an extension or a modification. An extension means that you mainly create your own code from scratch; you may utilize existing views, packages and Java classes, but your code is unique. A modification is quite different, because here you take existing code and change or enhance certain areas to achieve a slightly different behavior. What matters is that it makes no difference whether you place your code in the same place or somewhere else – it is still a modification. It is also not relevant whether you leave the original code enabled or not! Why? Here is the answer: if the original code you have taken as your base gets patched, you need to copy the source again and apply all your changes once more. If you don't do that, you may get different results or write different data compared to the standard – and that carries a high risk. Here are some guidelines for reducing that risk:
    - Invest a bit longer when searching for objects to select data from, and choose a view rather than a table. If Oracle development changes the underlying tables, the view will be more stable and is therefore the better choice.
    - Prefer public APIs over internal APIs. Same reasoning as before: although the internal structure might change, the public API is more stable.
    - Use personalization and substitution rather than modification. Spend more time checking whether the requirement can be covered with such techniques.
    - Build a project code library, so that colleagues do not create similar functionality multiple times. Otherwise you have to review lots of similar code to determine the need for correction.
    - Use the technique of "flagged files". Flagged files are a way to mark a standard deployment file. If you run the patch analysis (within Application Manager), the analysis result will list flagged standard files that are about to be patched. If you maintain a cross reference to your own CEMLIs, you can easily determine which CEMLIs have to be reviewed.
    - Implement a code review process, using team-internal or external reviewers. If you set up a team-internal process, your team members will come up with suggestions for improving code quality by themselves.
    - Review heavy customizations regularly, to identify options for reducing complexity – say, every six months. You need not spend days on such a review, but a high-level cross-check of whether the customization can be reduced is suggested.
    - De-install customizations which are no longer required, and define a process for this. Add a section to the technical documentation describing how to uninstall and what the possible implications are.
    - Maintain a cross reference between CEMLIs, EBS modules and business processes. Keep this list up to date, and share it!
    By following these guidelines you can improve product stability. Although we might not be able to avoid modifications completely, we can give much better advice to developers and to our test team. Summary: extensions and modifications have to be handled differently during their lifecycle. Modifications carry a much higher risk and should therefore be reviewed more frequently. Good cross references allow you to give clear advice for the testing activities.

    Read the article

  • SQL Server Optimizer Malfunction?

    - by Tony Davis
    There was a sharp intake of breath from the audience when Adam Machanic declared the SQL Server optimizer to be essentially "stuck in 1997". It was during his fascinating "Query Tuning Mastery: Manhandling Parallelism" session at the recent PASS SQL Summit. Paraphrasing somewhat, Adam (blog | @AdamMachanic) offered a convincing argument that the optimizer often delivers flawed plans based on assumptions that are no longer valid with today’s hardware. In 1997, when Microsoft engineers re-designed the database engine for SQL Server 7.0, SQL Server got its initial implementation of a cost-based optimizer. Up to SQL Server 2000, the developer often had to deploy a steady stream of hints in SQL statements to combat the occasionally wilful plan choices made by the optimizer. However, with each successive release, the optimizer has evolved and improved in its decision-making. It is still prone to the occasional stumble when we tackle difficult problems, join large numbers of tables, perform complex aggregations, and so on, but for most of us, most of the time, the optimizer purrs along efficiently in the background. Adam, however, challenged further any assumption that the current optimizer is competent at providing the most efficient plans for our more complex analytical queries, and in particular of offering up correctly parallelized plans. He painted a picture of a present where complex analytical queries have become ever more prevalent; where disk IO is ever faster so that reads from disk come into buffer cache faster than ever; where the improving RAM-to-data ratio means that we have a better chance of finding our data in cache. Most importantly, we have more CPUs at our disposal than ever before. To get these queries to perform, we not only need to have the right indexes, but also to be able to split the data up into subsets and spread its processing evenly across all these available CPUs. Improvements such as support for ColumnStore indexes are taking things in the right direction, but, unfortunately, deficiencies in the current Optimizer mean that SQL Server is yet to be able to exploit properly all those extra CPUs. Adam’s contention was that the current optimizer uses essentially the same costing model for many of its core operations as it did back in the days of SQL Server 7, based on assumptions that are no longer valid. One example he gave was a "slow disk" bias that may have been valid back in 1997 but certainly is not on modern disk systems. Essentially, the optimizer assesses the relative cost of serial versus parallel plans based on the assumption that there is no IO cost benefit from parallelization, only CPU. It assumes that a single request will saturate the IO channel, and so a query would not run any faster if we parallelized IO because the disk system simply wouldn’t be able to handle the extra pressure. As such, the optimizer often decides that a serial plan is lower cost, often in cases where a parallel plan would improve performance dramatically. It was challenging and thought provoking stuff, as were his techniques for driving parallelism through query logic based on subsets of rows that define the "grain" of the query. I highly recommend you catch the session if you missed it. I’m interested to hear though, when and how often people feel the force of the optimizer’s shortcomings. Barring mistakes, such as stale statistics, how often do you feel the Optimizer fails to find the plan you think it should, and what are the most common causes? 
Is it fighting to induce it toward parallelism? Combating unexpected plans, arising from table partitioning? Something altogether more prosaic? Cheers, Tony.

    Read the article

  • Retrofit Certification

    - by Bill Evjen
    Impact of Regulations on Cabin Systems Installation – John Courtright, Structural Integrity Engineering
    There is "heightened" FAA attention to technical issues related to IFE and Wi-Fi systems installations, driven by the Aging Aircraft Safety Rule – EWIS and Damage Tolerance Analysis. The challenge: maximize flight safety while minimizing costs, through issue papers and testing, testing, testing. Airworthiness Directives (ADs) shape the design of many IFE systems and all antenna systems. The goal is safety AND cost-effective maintenance intervals and inspection techniques.
    The STC Process, Briefly Stated
    - Type Certifications (TC) and Supplemental Type Certifications (STC).
    - The STC process is driven by a Project Specific Certification Plan (PSCP), managed by the FAA Aircraft Certification Office (ACO), and covers the type of project (electrical/mechanical systems or structural), the specific type of aircraft being modified, the schedule, and the design and installation location.
    What does the STC Plan (PSCP) cover?
    - System description – what does the system do?
    - System qualification – are the components qualified?
    - Certification requirements – what FARs are applicable?
    - Installation detail – what is being modified?
    - Prototype installation – what is new?
    - Functional Hazard Assessment (FHA) – is it safe?
    - EZAP-EWIS requirements – any aging aircraft issues?
    - Certification data – how is compliance achieved?
    - Delegation and FAA involvement – who is doing the work?
    - Proposed certification schedule – when is the installation?
    - Certification documentation – what the FAA expects to see.
    Cabin Systems Certification Concerns
    In addition to meeting the requirements of DO-160, cabin system certification needs to address power management. Generally, IFE and Wi-Fi systems are classified as "non-essential equipment" from a certification viewpoint and are connected to "non-essential" power buses; it must be possible to shed IFE and Wi-Fi systems in a smoke/fire event or other electrical emergency (FAA Policy 00-111-160). The FAA has become more relaxed about Wi-Fi testing: it used to require around 150 seats with laptops running Wi-Fi, but this is now down to around 50. Aging aircraft concerns – electrical and structural – are addressed by issue papers covering technical concerns such as "Structural Certification Criteria for Large Antenna Installations" and antenna "Vibration/Buffeting Compliance Criteria".
    DO-160: Environmental Test Procedures
    DO-160, "Environmental Conditions and Test Procedures for Airborne Equipment", issued by RTCA, provides guidance to equipment manufacturers on testing requirements: temperature (–40C to +55C), vibration and shock, contaminant susceptibility (fluids and dust), and electro-magnetic interference. Cabin systems are generally classified as "non-essential". Swissair 111 crashed (in part) due to non-standard wiring practices.
    EWIS Design Implications
    Installation design must take EWIS requirements into account. This generally means: aircraft surveys are needed to identify proper wire routing; existing wiring diagrams must be verified as correct; primary/secondary/tertiary bus locations must be identified; proper separation of wire bundles must be verified; and the required separation from the fuel quantity indicator system (FQIS) must be maintained to prevent fuel tank ignition. An Enhanced Zonal Analysis Procedure (EZAP) must be performed. EZAP was developed by the Aging Transport Systems Rulemaking Advisory Committee (ATSRAC) and is the method for analyzing airplane zones with an emphasis on evaluating wiring systems and the existence of combustibles in the cabin.
    Certification Considerations for Wi-Fi Systems
    Electrical: all existing DO-160 testing is required, along with issue papers and onboard EMI testing – is there any interference with aircraft systems when multiple Wi-Fi users are logged on? Vibration/buffeting compliance criteria: what is the effect of the antenna on aircraft flight characteristics? Structural certification criteria: what are the stress loads on the aircraft at the antenna location, and what is the impact on maintenance inspection criteria for the airline? A damage tolerance analysis is required. The goal is to minimize maintenance inspection intervals.

    Read the article

  • Mastering snow and Java development at jDays in Gothenburg

    - by JavaCecilia
    Last weekend, I took the train from Stockholm to Gothenburg to attend and present at the new Java developer conference jDays. It was professionally arranged in the Swedish exhibition hall close to the amusement park Liseberg, and we got a great deal out of the top-level presenters and hallway discussions.
    Understanding and Improving Your Java Process. Our main purpose was to spread information on the JVM and our monitoring tools for Java processes, so I held a crash course in the most important terms and concepts for anyone who wants to affect the performance of their Java process – from the beginning, the JVM specification, through to interpretation of heap usage graphs. For correct analysis, you also need to understand something about process memory: you need space for the Java heap (-Xms for initial size and -Xmx for max heap size), but the process memory also contains the thread stacks (sized by -Xss), JVM-internal data structures used for keeping track of Java objects on the heap, method compilation/optimization, native libraries, etc. If you get long pause times, make sure to monitor your application and look at the allocation rate and the frequency of pause times. My colleague Klara Ward then held a presentation on the Java Mission Control product, the profiling and diagnostics tools suite for HotSpot, coming soon. The room was packed and the talk much appreciated; Klara demonstrated four different scenarios, e.g. how to diagnose and fix latencies due to lock contention for logging. My German colleague, OpenJDK ambassador Dalibor Topic, travelled to Sweden to do the second keynote on "Make the Future Java". He let us in on the coming features and roadmaps of Java, now delivering major versions on a two-year schedule (Java 7 in 2011, Java 8 in 2013, etc.), and also on where to download early versions of 8, to report problems early on.
    Software Development in Teams. Being a scout leader, I'm drilled in different team-building and workshop techniques for creating strong groups – of course I had to attend Henrik Berglund's session on building successful teams. He spoke about the importance of clear goals, autonomy and agreed processes. Thomas Sundberg ended the conference by doing live remote pair programming with Alex in Romania, with concrete tips for people wanting to try it out (for local collaboration, remember to wash and change clothes).
    Memory Master Keynote. The conference keynote was delivered by the Swedish memory master Mattias Ribbing, showing off by remembering the order of a deck of cards he'd seen once. He made it interactive by getting the audience to learn a memory-mastering technique for remembering ten ordered things by heart, asking us to shout out the order backwards – and we made it! I desperately need this; I bought the book and will get back on the subject.
    Continuous Delivery. The most impressive presenter was Axel Fontaine on Continuous Delivery: very well prepared slides with key images for his message, and he moved about the stage like a rock star. The topic is of course highly interesting – how to create an infrastructure enabling immediate feedback to developers and the ability to release your product several times per day. Tomek Kaczanowski delivered a funny and useful presentation on good and bad tests, providing comic relief with poorly written tests and useful rules of thumb for how to rewrite them.
    To conclude, we had a great time and hope to see you at jDays next year :)
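    For reference, the heap and stack flags mentioned above are passed straight on the java command line; the jar name and sizes below are purely illustrative, not recommendations from the talk:

        java -Xms512m -Xmx2g -Xss1m -jar myapp.jar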

    Read the article

  • Rotating text using CSS

    - by Renso
    Goal: rotating text using CSS only.
    How: surprisingly, IE supports this feature rather well. You could use property filters in IE, but since these are only supported in IE browsers, I would not recommend it. CSS3, still in draft state, has a "writing-mode" property for doing this. It has been part of IE's browser engine since IE5.5, and now that it is part of the CSS3 draft specification it would be the best way to implement this going forward. Other browsers – Firefox 3.5+, Safari, Chrome, Opera 11 and IE9 – implement rotation differently, by utilizing the transform property. Without using third-party JavaScript or proprietary CSS properties, we can use the CSS3 "writing-mode" property, supported from IE5.5 up to IE8, the latter adding additional formatting options through -ms extensions.

        <style type="text/css">
            .rightToLeft {
                writing-mode: tb-rl;
            }
        </style>
        <p class="rightToLeft">This is my text</p>

    This will rotate the text 90 degrees, flowing from the right to the left. Here are all the options:
    - lr-tb – default value; left to right, top to bottom
    - rl-tb – right to left, top to bottom
    - tb-rl – vertically; top to bottom, right to left
    - bt-rl – vertically; bottom to top, right to left
    - tb-lr – available in IE8+ as -ms-writing-mode; top to bottom, left to right
    - bt-lr – bottom to top, left to right
    - lr-bt – left to right, bottom to top
    What about Firefox, Safari, etc.? The following technique is needed for the other browsers – Firefox, Safari, Google Chrome, Opera and IE9 – which require their proprietary vendor prefixes (-moz-, -webkit-, -o- and -ms-) on the transform property:

        -webkit-transform: rotate(90deg);
        -moz-transform: rotate(90deg);
        -ms-transform: rotate(90deg);
        -o-transform: rotate(90deg);
        transform: rotate(90deg);

    Read the article

  • We have our standards, and we need them

    - by Tony Davis
    The presenter suddenly broke off. He was midway through his section on how to apply to the relational database the Continuous Delivery techniques that allowed for rapid-fire rounds of development and refactoring, while always retaining a "production-ready" state. He sighed deeply and then launched into an astonishing diatribe against Database Administrators, much of his frustration directed toward Oracle DBAs in particular.
    In broad strokes, he painted the picture of a brave new deployment philosophy being frustratingly shackled by the relational database, and especially by the attitudes of the guardians of these databases. DBAs, he said, shunned change and "still favored tools I'd have been embarrassed to use in the '80s". DBAs, Oracle DBAs especially, were more attached to their vendor than to their employer, since the former was the primary source of their career longevity and spectacular remuneration. He contended that someone could produce the best IDE or tool in the world for Oracle DBAs and yet none of them would give a stuff, unless it happened to come from the "mother ship".
    I sat blinking in astonishment at the speaker's vehemence, and glanced around nervously. Nobody in the audience disagreed, and a few nodded in assent. Although the primary target of the outburst was the Oracle DBA, it made me wonder. Are we who work with SQL Server database professionals, or merely SQL Server fanbois? Do DBAs, in general, have an image problem? Is it a good career move to be seen to be holding onto a particular product by the whites of our knuckles, to the exclusion of all else?
    If we seek a broad, open-minded knowledge of our chosen technology, the database, and are blessed with merely mortal powers of learning, then we like standards. Vendors of RDBMSs generally don't conform to standards by instinct, but by customer demand. Microsoft has made great strides to adopt the international SQL standards where possible, thanks to considerable lobbying by the community; the implementation of window functions is a great example. There is still work to do, though. SQL Server, for example, has an unusable version of the Information Schema. One cast-iron rule of any RDBMS is that we must be able to query the metadata using the same language that we use to query the data, i.e. SQL, and we do this by running queries against the INFORMATION_SCHEMA views. Developers who've attempted to apply a standard query that works on MySQL, or some other database, but doesn't produce the expected results on SQL Server, are advised to shun the standards-based approach in favor of the vendor-specific one, using the catalog views. The argument behind this is sound and well documented, and of course we all use those catalog views out of necessity. And yet, as database professionals committed to supporting the best databases for the business, whatever they are now and in the future, surely our heart should sink somewhat when we advocate a vendor-specific approach to a developer struggling with something as simple as writing a guard clause – and when we read notes in the Microsoft documentation informing us that we shouldn't rely on INFORMATION_SCHEMA to reliably identify the schema of an object, in SQL Server!
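    To make the guard-clause point concrete, here is a rough sketch of the two styles (the table name is hypothetical): the first uses the standard INFORMATION_SCHEMA views, the second the SQL Server-specific catalog approach.

        -- Standards-based guard clause: portable in principle across RDBMSs
        IF NOT EXISTS (SELECT 1 FROM INFORMATION_SCHEMA.TABLES
                       WHERE TABLE_SCHEMA = 'dbo' AND TABLE_NAME = 'AuditLog')
            CREATE TABLE dbo.AuditLog (Id INT NOT NULL, LoggedAt DATETIME NOT NULL);

        -- Vendor-specific guard clause: the approach usually recommended for SQL Server
        IF OBJECT_ID(N'dbo.AuditLog', N'U') IS NULL
            CREATE TABLE dbo.AuditLog (Id INT NOT NULL, LoggedAt DATETIME NOT NULL);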

    Read the article

  • Best approach to depth streaming via existing codec

    - by Kevin
    I'm working on a development system (and game) intended for games set mostly in static third-person views. We produce our scenery by CG and photographic techniques. Our background art is rendered off-line by a production-grade renderer. To allow the runtime imagery to properly interact with the background art, I wrote a program to convert from depth output by Mental Ray into a texture, and a pixel shader to draw a quad such that the Z data comes from the texture. This technique is working out very well, but now we've decided that some of the camera angle changes between scenes should be animated. The animation itself is straightforward to produce from our CG models. We intend to encode it to some HD video codec such as H.264. The problem is that in order to maintain our runtime imagery on the screen, the depth buffer will need to be loaded for each video frame. Due to the bandwidth, the video's depth data will need to be compressed efficiently. I've looked into methods for performing temporal compression of depth info and found an interesting research paper here: http://web4.cs.ucl.ac.uk/staff/j.kautz/publications/depth-streaming.pdf The method establishes a mapping between 16-bit depth values and YCbCr values. The mapping is tuned to the properties of existing video codecs in order to maximize precision of the decoded depths after the YCbCr has undergone video compression. It allows an existing, unmodified video codec to be used on the backend. I'm looking at how to pull this off with the least possible work. (This design change was unplanned.) Our game engine itself is native C++, presently for Win32 and DirectX, although we've worked hard to keep platform dependence segregated because we intend other ports. We don't have motion video facilities in the engine yet but will ultimately need that anyway for cinematics. I was planning on using some off-the-shelf motion video solution we can plug into our engine, and haven't chosen one yet. This new added requirement makes selecting one harder since, among other things, we'll now need to bypass colourspace conversion on one of the streams, and also will need to be playing two streams simultaneously in lockstep, on top of in some cases audio on one of them (for the cinematics). I'm also wondering if it's possible (or even useful) to do the conversion from YCbCr to depth in a pixel shader, or if it's better to just do it in CPU and separately load the resulting depth values into a locked tex. The conversion unfortunately does involve branching logic per-pixel. (There are more naive mappings that don't need branching, but they produce inferior results.) It could be reduced to a table lookup but the table would be 32MB. Programming is second-nature to me but I'm not that experienced with pix shaders and have zero knowledge of off-the-shelf video solutions. I'd therefore be interested in advice from others who may have dealt more with depth streaming, pixel shaders, and/or off-the-shelf codecs, regarding how feasible the proposed application is and what off-the-shelf video systems out there would best get along with this usage case.

    Read the article

  • Omni-directional light shadow mapping with cubemaps in WebGL

    - by Winged
    First of all I must say that I have read a lot of posts describing the usage of cubemaps, but I'm still confused about how to use them. My goal is to achieve simple omni-directional (point) light shading in my WebGL application. I know that there are more techniques (like using two hemispheres or camera-space shadow mapping) which are way more efficient, but for educational purposes cubemaps are my primary goal. Until now, I have adapted a simple shadow mapping which works with spotlights (with one exception: I don't know how to cut off the glitchy part beyond the reach of a single shadow map texture – see the "glitchy shadow mapping" screenshot). So for now, this is how I understand the usage of cubemaps in shadow mapping:
    1. Set up a framebuffer (in the case of cubemaps, 6 framebuffers – 6 instead of 1 because every call to framebufferTexture2D slows down execution, which is nicely described here) and a cubemap texture. Also, in WebGL depth components are not well supported, so I need to render to RGBA first.

        this.texture = gl.createTexture();
        gl.bindTexture(gl.TEXTURE_CUBE_MAP, this.texture);
        gl.texParameteri(gl.TEXTURE_CUBE_MAP, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
        gl.texParameteri(gl.TEXTURE_CUBE_MAP, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
        for (var face = 0; face < 6; face++)
            gl.texImage2D(gl.TEXTURE_CUBE_MAP_POSITIVE_X + face, 0, gl.RGBA, this.size, this.size, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
        gl.bindTexture(gl.TEXTURE_CUBE_MAP, null);

        this.framebuffer = [];
        for (face = 0; face < 6; face++) {
            this.framebuffer[face] = gl.createFramebuffer();
            gl.bindFramebuffer(gl.FRAMEBUFFER, this.framebuffer[face]);
            gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_CUBE_MAP_POSITIVE_X + face, this.texture, 0);
            gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, gl.RENDERBUFFER, this.depthbuffer);
            var e = gl.checkFramebufferStatus(gl.FRAMEBUFFER); // Check for errors
            if (e !== gl.FRAMEBUFFER_COMPLETE)
                throw "Cubemap framebuffer object is incomplete: " + e.toString();
        }

    2. Set up the light and the camera (I'm not sure whether I should store all 6 view matrices and send them to the shaders later, or whether there is a way to do it with just one view matrix).
    3. Render the scene 6 times from the light's position, each time in another direction (X, -X, Y, -Y, Z, -Z):

        for (var face = 0; face < 6; face++) {
            gl.bindFramebuffer(gl.FRAMEBUFFER, shadow.buffer.framebuffer[face]);
            gl.viewport(0, 0, shadow.buffer.size, shadow.buffer.size);
            gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
            camera.lookAt( light.position.add( cubeMapDirections[face] ) );
            scene.draw(shadow.program);
        }

    4. In a second pass, calculate the projection of the current vertex using the light's projection and view matrix. I don't know whether I should calculate 6 of them, because of the 6 faces of the cubemap. ScaleMatrix pushes the projected vertex into the 0.0 - 1.0 range.

        vDepthPosition = ScaleMatrix * uPMatrixFromLight * uVMatrixFromLight * vWorldVertex;

    5. In the fragment shader, calculate the distance between the current vertex and the light position and check whether it is deeper than the depth information read from the shadow map rendered earlier. I know how to do this with a 2D texture, but I have no idea how I should use a cubemap texture here. I have read that texture lookups into cubemaps are performed with a direction vector instead of a UV coordinate. What vector should I use? Just a normalized vector pointing to the current vertex?
    For now, my code for this part looks like this (not working yet):

        float shadow = 1.0;
        vec3 depth = vDepthPosition.xyz / vDepthPosition.w;
        depth.z = length(vWorldVertex.xyz - uLightPosition) * linearDepthConstant;
        float shadowDepth = unpack(textureCube(uDepthMapSampler, vWorldVertex.xyz));
        if (depth.z > shadowDepth)
            shadow = 0.5;

    Could you give me some hints or examples (preferably in WebGL code) of how I should build it?
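    For what it's worth, a minimal sketch of the lookup direction this setup usually implies (assuming each cube face stores the distance from the light, as in the render pass above): the cubemap is sampled with the world-space vector from the light to the fragment, not with the vertex position itself. The small bias term is an assumption added here to reduce self-shadowing acne; it is not part of the original code.

        vec3 lightToFrag = vWorldVertex.xyz - uLightPosition;                   // direction picks the cube face
        float storedDepth = unpack(textureCube(uDepthMapSampler, lightToFrag)); // distance written in the shadow pass
        float fragDepth = length(lightToFrag) * linearDepthConstant;            // same scaling as the shadow pass
        float bias = 0.003;                                                     // hypothetical value
        float shadow = (fragDepth - bias > storedDepth) ? 0.5 : 1.0;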

    Read the article

  • Separation of project responsibilities in new project

    - by dreza
    We have very recently started a new project (MVC 3.0) and some of our early discussion has been around how the work and development will be split amongst the team members, to ensure we get the least amount of overlap of work and so make it a bit easier for each developer to get on and do their work. The project is expected to take about 6 months to 1 year (although not all developers are likely to stay on it and some might filter off towards the end). Our team is going to be small, so this will help out a bit I believe. The team will essentially consist of:
    - 3 x developers (one slightly more experienced, who will be the lead)
    - 1 x project manager / product owner / tester
    - an external company responsible for doing our design work
    General project/development decisions so far have included:
    - Develop in an Agile way using SCRUM techniques (we are still very much learning this approach as a company)
    - Use MVVM architecture
    - Use Ninject and DI where possible
    - Attempt to use TDD as much as possible to drive development
    - Keep our controllers as skinny as possible
    - Keep our views as simple as possible
    During our discussions two approaches have been broached as to how to separate the workload given the objectives outlined above.
    OPTION 1: A framework separation where each person is responsible for conceptual areas, with overlap and discussion primarily in the integration areas. The integration areas would be the responsibility of both developers as required.
        View prototypes (Graphic designer)
            - Mockups
        Views (Razor and view helpers etc.) & JavaScript (Developer 1)
            - View models (integration point)
        Controllers and application logic (Developer 2)
            - Models (integration point)
        Domain model and persistence (Developer 3)
    PROS:
    - Integration points are quite clear, so developers can work without dependencies on others fairly easily.
    - Code practices such as naming conventions and style are more easily kept consistent, as primarily only one developer will be handling an area.
    CONS:
    - Completion of an entire feature becomes a bit grey, as no single person is responsible for an entire feature (story?).
    - A person might not have a full appreciation of all areas of the project, so coverage of knowledge might be lacking if that person suddenly left.
    OPTION 2: A more task-orientated approach where each person is responsible for the completion of an entire task, from view to controller to model.
    PROS:
    - A person is responsible for one entire feature, so its "complete" state can be clearly defined.
    - Code overlap into different areas will occur, so each individual has good coverage over the entire application.
    CONS:
    - Overlap of development will occur in all the modules, and developers can develop/extend without a true understanding of what the original code owner was intending. This could potentially lead more easily to code bloat?
    - Following a convention might be harder as developers are adding to all areas of the project.
    - If a developer sets up a way of doing things, would it be harder to get the other developers to follow that convention, or even build on it (or even discuss it?). Dunno.
    - Bugs could more easily be introduced into areas not thought about by the developer.
    - It's easier to carry a team member, in so far as one member just hacks code together to complete a task whilst another takes time to build a foundation that could be used by others and so help make future tasks easier, i.e. starts building a framework?
    QUESTION: As it might appear, I'm more in favor of option 1; however, I'm interested to see how others might have approached this, or what is the standard, best or preferred way of undertaking a project. Or indeed any different approach to handling this?

    Read the article

  • D'Arcy's Book Club - The New Strategic Selling

    - by D'Arcy Lussier
    The New Strategic Selling
    Miller and Heiman
    Amazon.ca Amazon.com Chapters
    Everybody is a salesman. Every day, without knowing it, we sell something to someone. Now, the typical vision people think of when they hear the word "sales" is the sleazy used car salesperson who does whatever they can to get you to buy the clunker on their lot. But selling is not an action tied to money and products. Selling is about convincing people to see your point of view and act on it. If you want your company to cover a trip to a conference, you may have to sell the idea to your boss. If you want to buy that new big screen TV, you have to sell the idea to your significant other. If you want to go on a weekend fishing trip with the boys you might be called in to help sell the idea to your buddy's wife. We all sell, but we don't all sell very well.
    So enter The New Strategic Selling, a book based on the sales course put on by the Miller-Heiman group. In fact, this isn't really a "new" strategy to selling, as it's been around for a number of years. But the concepts they present, the ideas about selling, are still very radical compared with what most of us have experienced. Gone is the high-pressure, win-at-all-costs, Glengarry Glen Ross style of sales… instead the book presents a framework to switch to need-based selling. It's the idea that instead of going in raving about a product or service, you build a relationship where the buyer expresses what their needs are and your response is to present a solution that best fits that need. Instead of focussing on the amount of money you can squeeze out of a client, you focus on whether everyone wins, that they receive win-results from the engagement, and that repeat business is developed over time, delivering value over and over again.
    The great thing about the book is that what it teaches – things like how to identify different buying influencers, how to prepare for meetings, techniques to solicit information about what the buyer is really thinking/feeling – is entirely applicable in *any* situation where you need to sell to someone… and remember: selling is convincing people to see your point of view and act on it. So that new big screen TV you want to buy but need to convince your wife on? This book can help you. That training opportunity you want your company to send you on? This book can help you. The upgrade to your community park that you want to lobby the local civic authorities for? This book can help you.
    The book is a bit wordy. I found that the length could have been reduced and the points still have gotten across. That's really the only knock that I have though; the insight that it provides is so worthwhile that having to chew through extra words is well worth it. You definitely don't have to be a professional salesperson to benefit from this book.
    Rating: 4/5

    Read the article

  • What Counts for a DBA: Skill

    - by drsql
    “Practice makes perfect:” right? Well, not exactly. The reality of it all is that this saying is an untrustworthy aphorism. I discovered this in my “younger” days when I was a passionate tennis player, practicing and playing 20+ hours a week. No matter what my passion level was, without some serious coaching (and perhaps a change in dietary habits), my skill level was never going to rise to a level where I could make any money at the sport that involved something other than selling tennis balls at a sporting goods store. My game may have improved with all that practice but I had too many bad practices to overcome. Practice by itself merely reinforces what we know and what we can figure out naturally. The truth is actually closer to the expression used by Vince Lombardi: “Perfect practice makes perfect.” So how do you get to become skilled as a DBA if practice alone isn’t sufficient? Hit the Internet and start searching for SQL training and you can find 100 different sites. There are also hundreds of blogs, magazines, books, conferences both onsite and virtual. But then how do you know who is good? Unfortunately often the worst guide can be to find out the experience level of the writer. Some of the best DBAs are frighteningly young, and some got their start back when databases were stored on stacks of paper with little holes in it. As a programmer, is it really so hard to understand normalization? Set based theory? Query optimization? Indexing and performance tuning? The biggest barrier often is previous knowledge, particularly programming skills cultivated before you get started with SQL. In the world of technology, it is pretty rare that a fresh programmer will gravitate to database programming. Database programming is very unsexy work, because without a UI all you have are a bunch of text strings that you could never impress anyone with. Newbies spend most of their time building UIs or apps with procedural code in C# or VB scoring obvious interesting wins. Making matters worse is that SQL programming requires mastery of a much different toolset than most any mainstream programming skill. Instead of controlling everything yourself, most of the really difficult work is done by the internals of the engine (written by other non-relational programmers…we just can’t get away from them.) So is there a golden road to achieving a high skill level? Sadly, with tennis, I am pretty sure I’ll never discover it. However, with programming it seems to boil down to practice in applying the appropriate techniques for whatever type of programming you are doing. Can a C# programmer build a great database? As long as they don’t treat SQL like C#, absolutely. Same goes for a DBA writing C# code. None of this stuff is rocket science, as long as you learn to understand that different types of programming require different skill sets and you as a programmer must recognize the difference between one of the procedural languages and SQL and treat them differently. Skill comes from practicing doing things the right way and making “right” a habit.

    Read the article

  • Integrating Amazon S3 in Java via NetBeans IDE

    - by Geertjan
    To continue from yesterday, let's set up a scenario that enables us to make use of this drag/drop service in NetBeans IDE. The above service is applicable to Amazon S3, an Amazon storage provider that is typically used to store large binary files. In Amazon S3, every object stored is contained in a bucket. Buckets partition the namespace of objects stored in Amazon S3. More on buckets here. Let's use the tools in NetBeans IDE to create a Java application that accesses our Amazon S3 buckets.
    Create a Java application named "AmazonBuckets" with a main class named "AmazonBuckets". Open the main class and then drag the above service into the main method of the class. Now, NetBeans IDE will create all the other classes and the properties file that you see in the screenshot below. The first thing to do is to open the properties file above and enter the access key and secret:

        access_key=SOMETHING
        secret=SOMETHINGELSE

    Now you're all set up. Make sure to, of course, actually have some buckets available. Then rewrite the Java class to parse the XML that is returned via the generated code:

        package amazonbuckets;

        import java.io.ByteArrayInputStream;
        import java.io.IOException;
        import javax.xml.parsers.DocumentBuilder;
        import javax.xml.parsers.DocumentBuilderFactory;
        import javax.xml.parsers.ParserConfigurationException;
        import org.netbeans.saas.amazon.AmazonS3Service;
        import org.netbeans.saas.RestResponse;
        import org.w3c.dom.DOMException;
        import org.w3c.dom.Document;
        import org.w3c.dom.Node;
        import org.w3c.dom.NodeList;
        import org.xml.sax.InputSource;
        import org.xml.sax.SAXException;

        public class AmazonBuckets {
            public static void main(String[] args) {
                try {
                    RestResponse result = AmazonS3Service.getBuckets();
                    String dataAsString = result.getDataAsString();
                    DocumentBuilderFactory dbFactory = DocumentBuilderFactory.newInstance();
                    DocumentBuilder dBuilder = dbFactory.newDocumentBuilder();
                    Document doc = dBuilder.parse(
                            new InputSource(new ByteArrayInputStream(dataAsString.getBytes("utf-8"))));
                    NodeList bucketList = doc.getElementsByTagName("Bucket");
                    for (int i = 0; i < bucketList.getLength(); i++) {
                        Node node = bucketList.item(i);
                        System.out.println("Bucket Name: " + node.getFirstChild().getTextContent());
                    }
                } catch (IOException | ParserConfigurationException | SAXException | DOMException ex) {
                }
            }
        }

    That's all. This is simpler to set up than the scenario described yesterday. Also notice that there are other Amazon S3 services you can interact with from your Java code, again after generating a heap of code by drag/drop into a Java source file. I tried the above, e.g., I created a new Amazon S3 bucket after dragging "createBucket", adding my credentials in the properties file, and then running the code that had been created. I.e., without adding a single line of code I was able to programmatically create new buckets. The above outlines a handy set of tools and techniques to use if you want to let your users store and access data in Amazon S3 buckets directly from the application you've created for them.

    Read the article

  • Data Mining Email with Thunderbird

    - by user554629
    Oracle has many formal, searchable locations: Service Requests, BugIDs, Technical Documents. These contain the results of an investigation for a customer crash situation; they're created after the intense work of resolution is over, and typically contain the "root cause" of the failure... but not the methods for identifying that cause. Email is still the standby for interacting with quickly formed groups of specialists focusing on a particular incident: customer BI, network and system specialists; Oracle Tech Support, Development, Consultants; OEM Database, OS technical support. It is a chaotic, time-oriented set of configuration, call stacks, changes, and techniques to discover and repair the failure. I needed to organize that information into something cohesive to prepare the blog entry on Teradata. My corporate email client of choice is Thunderbird.
    My original (flawed) search technique:
    - R-click on Inbox in the Thunderbird left pane, and choose Search Messages.
    - Subject: [ teradata ]
    - Results: a new window titled "Search Messages", a single pane of selected messages with column headings Subject, From, Date, Location, and no preview window for messages.
    There are 673 email entries in the result (too many). R-click the icon just above the vertical scroll bar on the right, check [x] Tags, then click on the Tags header to sort by "Important". View the contents of a message by double-clicking; it opens in the Thunderbird main window in a new tab. Not what I was looking for – close the tab and try again. There has to be a better way (and there is). I need to be more productive, eliminating duplicate-chained messages, for example. Even the tag "Important" that was added during the investigation phase is "not so much" for my current task.
    In the "Search Messages" window, click [ Save as Search Folder ]. [ teradata ] appears as a new folder in my Inbox. Focus on that folder and the results appear as a list of messages like every other folder in the Inbox: only the results of the search are shown, a preview window is now available for each message, and Sort, Select message, Cursor Down navigates quickly through the messages.
    But wait, there's more... Click Find (Ctrl-F) and enter a search term for the message body, like [ LIBPATH ]. The search is "sticky" – each message you cycle through will focus (and highlight) the LIBPATH search term.
    And still more... Reset the "Important" tag on a message: press "1" and the tag is removed; press "4" and a new tag, "ToDo", is applied. After applying all of the tags, sort by Tag for a new message order.
    Adjust the search criteria: R-click on the [ teradata ] search folder, choose Properties, and add additional criteria to narrow the search. Some of the information I was looking for did not contain "teradata" in the subject line, so add: + Body [ contains ] [ Best Practices ].
    That's it. A much more efficient search. Thank you, Thunderbird.

    Read the article

  • Concurrent Affairs

    - by Tony Davis
    I once wrote an editorial, multi-core mania, on the conundrum of ever-increasing numbers of processor cores, but without the concurrent programming techniques to get anywhere near exploiting their performance potential. I came to the controversial conclusion that, while the problem loomed for all procedural languages, it was not a big issue for the vast majority of programmers. Two years later, I still think most programmers don't concern themselves overly with this issue, but I do think that's a bigger problem than I originally implied.
    Firstly, is the performance boost from writing code that can fully exploit all available cores worth the cost of the additional programming complexity? Right now, with quad-core processors that, at best, can make our programs four times faster, the answer is still no for many applications. But what happens in a few years, as the number of cores grows to 100 or even 1000? At this point, it becomes very hard to ignore the potential gains from exploiting concurrency. Possibly, I was optimistic to assume that, by the time we have 100-core processors, and most applications really needed to exploit them, some technology would be around to allow us to do so with relative ease. The ideal solution would be one that allows programmers to forget about the problem, in much the same way that garbage collection removed the need to worry too much about memory allocation. From all I can find on the topic, though, there is only a remote likelihood that we'll ever have a compiler that takes a program written in a single-threaded style and "auto-magically" converts it into an efficient, correct, multi-threaded program.
    At the same time, it seems clear that what is currently the most common solution, multi-threaded programming with shared memory, is unsustainable. As soon as a piece of state can be changed by a different thread of execution, the potential number of execution paths through your program grows exponentially with the number of threads. If you have two threads, each executing n instructions, then there are at least 2^n possible "interleavings" of those instructions. Of course, many of those interleavings will have identical behavior, but several won't. Not only does this make understanding how a program works an order of magnitude harder, but it will also result in irreproducible, non-deterministic bugs. And of course, the problem will be many times worse when you have a hundred or a thousand threads.
    So what is the answer? All of the possible alternatives require a change in the way we write programs and, currently, seem to be plagued by performance issues. Software transactional memory (STM) applies the ideas of database transactions, and optimistic concurrency control, to memory. However, working out how to break down your program into sufficiently small transactions, so as to avoid contention issues, isn't easy. Another approach is concurrency with actors, where instead of having threads share memory, each thread runs in complete isolation, and communicates with others by passing messages. It simplifies concurrent programs but still has performance issues, if the threads need to operate on the same large piece of data.
    There are doubtless other possible solutions that I haven't mentioned, and I would love to know to what extent you, as a developer, are considering the problem of multi-core concurrency, what solution you currently favor, and why. Cheers, Tony.
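    As a concrete illustration of the shared-state interleaving problem described above (a hypothetical C# fragment, not taken from the editorial): two threads incrementing an unsynchronised counter interleave their read-modify-write steps and routinely lose updates.

        using System;
        using System.Threading;

        class LostUpdateDemo
        {
            static int counter = 0;

            static void Main()
            {
                // counter++ is not atomic: it is a read, an add and a write,
                // and the two threads' steps can interleave between them.
                var t1 = new Thread(() => { for (int i = 0; i < 100000; i++) counter++; });
                var t2 = new Thread(() => { for (int i = 0; i < 100000; i++) counter++; });
                t1.Start(); t2.Start();
                t1.Join(); t2.Join();
                Console.WriteLine(counter);   // frequently prints less than 200000
            }
        }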

    Read the article

  • Detect blocked popup in Chrome

    - by Andrew
    I am aware of javascript techniques to detect whether a popup is blocked in other browsers (as described in the answer to this question). Here's the basic test: var newWin = window.open(url); if(!newWin || newWin.closed || typeof newWin.closed=='undefined') { //POPUP BLOCKED } But this does not work in Chrome. The "POPUP BLOCKED" section is never reached when the popup is blocked. Of course, the test is working to an extent since Chrome doesn't actually block the popup, but opens it in a tiny minimized window at the lower right corner which lists "blocked" popups. What I would like to do is be able to tell if the popup was blocked by Chrome's popup blocker. I try to avoid browser sniffing in favor of feature detection. Is there a way to do this without browser sniffing? Edit: I have now tried making use of newWin.outerHeight, newWin.left, and other similar properties to accomplish this. Google Chrome returns all position and height values as 0 when the popup is blocked. Unfortunately, it also returns the same values even if the popup is actually opened for an unknown amount of time. After some magical period (a couple of seconds in my testing), the location and size information is returned as the correct values. In other words, I'm still no closer to figuring this out. Any help would be appreciated.
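    For what it's worth, the workaround usually suggested for this (a hedged sketch, not a guaranteed fix – the 250 ms delay and the innerHeight test are assumptions) is to run the classic test immediately and then re-test after a short delay, once Chrome has finished initialising the window:

        function openPopup(url, onBlocked) {
            var newWin = window.open(url);
            if (!newWin || newWin.closed || typeof newWin.closed == 'undefined') {
                onBlocked();                      // classic case: most other browsers
                return newWin;
            }
            setTimeout(function () {
                // Chrome reports 0 for size/position on a blocked popup even though
                // window.open returned an object; re-check once it has had time to settle.
                if (newWin.innerHeight > 0 && !newWin.closed) {
                    return;                       // appears to be genuinely open
                }
                onBlocked();
            }, 250);
            return newWin;
        }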

    Read the article

  • android.intent.action.SCREEN_ON doesn't work as a receiver intent filter

    - by Jim Blackler
    I'm trying to get a BroadcastReceiver invoked when the screen is turned on. In my AndroidManifest.xml I have specified:

        <receiver android:name="IntentReceiver">
            <intent-filter>
                <action android:name="android.intent.action.SCREEN_ON"></action>
            </intent-filter>
        </receiver>

    However, it seems the receiver is never invoked (breakpoints don't fire, log statements are ignored). I've swapped out SCREEN_ON for BOOT_COMPLETED as a test, and that does get invoked. This is in a 1.6 (SDK level 4) project. A Google Code Search revealed this; I downloaded the project and synced it, converted it to work with the latest tools, but it too is not able to intercept that event. http://www.google.com/codesearch/p?hl=en#_8L9bayv7qE/trunk/phxandroid-intent-query/AndroidManifest.xml&q=android.intent.action.SCREEN_ON
    Is this perhaps no longer supported? Previously I have been able to intercept this event successfully with a call to Context.registerReceiver(), like so:

        registerReceiver(new BroadcastReceiver() {
            @Override
            public void onReceive(Context context, Intent intent) {
                // ...
            }
        }, new IntentFilter(Intent.ACTION_SCREEN_ON));

    However, this was performed by a long-living Service. Following sage advice from CommonsWare, I have elected to try to remove the long-living Service and use different techniques. But I still need to detect the screen off and on events.

    Read the article

  • Managing Dependency Hell with WiX and C#

    - by Tom the Junglist
    We are on the eve of product launch, and at the last minute I am being bombarded with crash reports that appear to be related to our installer, which is a WiX3 project with separate outputs for x86 and x64 builds. These have been an ongoing problem that I always thought was fixed, only to find out they were still lurking. The product itself is a collection of binaries that communicate with each other via .NET remoting, including a Windows Service and a small COM component that is loaded as an addon in another app. The service runs as SYSTEM, the COM piece runs in a low-rights context, while the other pieces run in normal user contexts. Other pieces include a third-party COM object library DLL and a shared DLL with the .NET remoting interfaces.
    I've observed flat-out weird behavior with MSI, particularly on version upgrades. Between MS' anal strong-name implementation (specifically, the exact version check before loading a given assembly), a documented WiX/MSI bug that sees critical files erased on upgrades (essentially, if a file in the upgrade MSI has the same version number as the existing install, that file is deleted), and having to work around Wow64 virtualization (an x86 MSI can only write to registry/HD locations via Wow64, yet x64 MSIs cannot run on x86 computers...), I am about ready to trash the whole thing and port it over to a different install system.
    What I am looking for is tips, tricks, techniques, or suggestions on how to properly do things so that I am not fighting with Windows Installer's twisted sense of logic. I am tired of fighting with WiX/MSI/Windows Installer. All it needs to do is place files and registry keys where I tell it to, upgrade them when appropriate, and not delete anything until the user uninstalls. Instead, dependencies are deleted willy-nilly, bringing up a whole bunch of uncatchable exceptions (you can't wrap a try{} block around function declarations) and GPF'ing the whole app.
    I am particularly interested in 'best practices' and examples regarding shared and dependency DLLs, and any tips on making sure that if a file needs to go to the GAC, it actually goes to the GAC and stays there until it is appropriate to remove it. Thanks! Tom
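    One commonly cited mitigation for the "same-version file deleted on upgrade" behavior is to schedule RemoveExistingProducts early, so the old product is removed before the new files are laid down. A minimal sketch, assuming WiX 3.5 or later and that an early major-upgrade schedule is acceptable for this product (it trades away rollback of the old version):

        <!-- Inside <Product>: sequence RemoveExistingProducts right after InstallValidate -->
        <MajorUpgrade Schedule="afterInstallValidate"
                      DowngradeErrorMessage="A newer version of [ProductName] is already installed." />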

    Read the article

  • Applying Test Driven Development to a tightly coupled architecture

    - by Chris D
    Hi all, I've recently been studying TDD, attended a conference and have dabbled in a few tests, and already I'm 100% sold – I absolutely love TDD. As a result I've raised this with my seniors and they are prepared to give it a chance, so they have tasked me with coming up with a way to implement TDD in the development of our enterprise product.
    The problem is that our system has evolved since the days of VB6 to .NET, and implements a lot of legacy technology and some far-from-best-practice development techniques, i.e. a lot of business logic in the ASP.NET code-behind and client script. The largest problem, however, is how our classes are tightly coupled with database access; properties, methods, constructors – most have some database access in some form or another. We use an in-house data access code generator tool that creates sqlDataAdapters that give us all the database access we could ever want, which helps us develop extremely quickly; however, classes in our business layer are very tightly coupled to this data layer – we aren't even close to implementing some form of repository design.
    This and the issues above have created me all sorts of problems. I have tried to develop some unit tests for some existing classes I've already written, but the tests take a lot longer to run since database access is required; not to mention that since we use the MS Enterprise Caching framework I am forced to fake an HttpContext for my tests to run successfully, which isn't practical. Also, I can't see how to use TDD to drive the design of any new classes I write, since they have to be so tightly coupled to the database... help!
    Because of the architecture of the system it appears I can't implement TDD without some real hack which in my eyes just defeats the aim of TDD and the huge benefits that come with it. Does anyone have any suggestions how I could implement TDD with the constraints I'm bound to? Or do I need to push the repository design pattern down my seniors' throats and tell them we either change our architecture/development methodology or forget about TDD altogether? :) Thanks
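    For illustration only, a minimal sketch of the kind of seam a repository interface would introduce (all names here are hypothetical, not from the product described above): the business class takes the interface in its constructor, so a unit test can hand it an in-memory fake instead of the generated sqlDataAdapter-backed code.

        public class Customer
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public interface ICustomerRepository
        {
            Customer GetById(int id);
            void Save(Customer customer);
        }

        public class CustomerService
        {
            private readonly ICustomerRepository _repository;

            public CustomerService(ICustomerRepository repository)
            {
                _repository = repository;   // injected by the caller; a test passes a fake
            }

            public void Rename(int id, string newName)
            {
                var customer = _repository.GetById(id);   // no database hit in a unit test
                customer.Name = newName;
                _repository.Save(customer);
            }
        }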

    Read the article

  • Write-only collections in MongoDB

    - by rcoder
    I'm currently using MongoDB to record application logs, and while I'm quite happy with both the performance and with being able to dump arbitrary structured data into log records, I'm troubled by the mutability of log records once stored. In a traditional database, I would structure the grants for my log tables such that the application user had INSERT and SELECT privileges, but not UPDATE or DELETE. Similarly, in CouchDB, I could write a update validator function that rejected all attempts to modify an existing document. However, I've been unable to find a way to restrict operations on a MongoDB database or collection beyond the three access levels (no access, read-only, "god mode") documented in the security topic on the MongoDB wiki. Has anyone else deployed MongoDB as a document store in a setting where immutability (or at least change tracking) for documents was a requirement? What tricks or techniques did you use to ensure that poorly-written or malicious application code could not modify or destroy existing log records? Do I need to wrap my MongoDB logging in a service layer that enforces the write-only policy, or can I use some combination of configuration, query hacking, and replication to ensure a consistent, audit-able record is maintained?
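    Absent finer-grained permissions in the server itself, the usual fallback is the service-layer wrapper mentioned at the end of the question. A rough sketch of what that could look like (hypothetical class, assuming a pymongo 3.x Collection object):

        class AppendOnlyLog:
            """Exposes only insert and read operations on a MongoDB collection."""

            def __init__(self, collection):
                self._collection = collection   # a pymongo Collection

            def append(self, record):
                # insert_one is the only write operation ever issued
                return self._collection.insert_one(record).inserted_id

            def find(self, query=None):
                return self._collection.find(query or {})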

    Read the article

  • How does Silverlight Image Clipping work?

    - by TreeUK
    I've got a very large image which I'd like to use for sprite techniques (à la CSS image sprites). I've got the code below:

        <Image x:Name="testImage" Width="24" Height="12" Source="../Resources/Images/sprites.png">
            <Image.Clip>
                <RectangleGeometry Rect="258,10632,24,12" />
            </Image.Clip>
        </Image>

    This clips the source image to 24x12 at the relative position of 258, 10632 in the source image. The problem is that I want the cropped image to show at 0,0 in testImage, whereas it shows at 258, 10632. It's using the geometry as a cutting guide but also as a layout guide. Anyone have any idea how this should be done, if at all?
    Conclusion: there seems to be no good way of doing this at present; Graeme's solution seems to be the closest to achieving this with Silverlight 2.0. That said, if anyone knows of a better way of doing this, please reply with an answer.
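    One approach often used for CSS-style sprites in Silverlight (a hedged sketch, not necessarily the "Graeme's solution" referred to above) is to keep the clip as-is but translate the image by the negative of the clip origin inside a fixed-size container, so the clipped region lands at 0,0:

        <!-- The 24x12 Canvas acts as the sprite "viewport"; the offsets mirror the Rect origin. -->
        <Canvas Width="24" Height="12">
            <Image Source="../Resources/Images/sprites.png" Canvas.Left="-258" Canvas.Top="-10632">
                <Image.Clip>
                    <RectangleGeometry Rect="258,10632,24,12" />
                </Image.Clip>
            </Image>
        </Canvas>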

    Read the article

  • SWIG: C++ to C#, pointer to pointer marshalling.

    - by CaRT
    I have some legacy code I want to port to C#. I cannot modify the C++ code; I just have to make do with what I'm given. So, the situation: I'm using SWIG, and I came across this function:

        void MarshalMe(int iNum, FooClass** ioFooClassArray);

    If I ran SWIG over this as-is, it wouldn't know what to do with the array, so it would create a SWIGTYPE_p_p_FooClass. Fair enough! The C# code for this would look like:

        void MarshalMe(int iNum, SWIGTYPE_p_p_FooClass ioFooClassArray); // Not ideal!

    There are some techniques for marshalling this kind of code correctly, so I tried a few of them:

        %typemap(ctype)   FooClass** "FooClass**"
        %typemap(cstype)  FooClass** "FooClass[]"
        %typemap(imtype, inattributes="[In, Out, MarshalAs(UnmanagedType.LPArray)]") FooClass** "FooClass[]"
        %typemap(csin)    FooClass** "$csinput"
        %typemap(in)      FooClass** "$1 = $input;"
        %typemap(freearg) FooClass** ""
        %typemap(argout)  FooClass** ""

    This effectively creates a nicer signature:

        void MarshalMe(int iNum, FooClass[] ioFooClassArray); // Looks good! Would it work?

    However, when I try to run it, I get the following error:

        {"Exception of type 'System.ExecutionEngineException' was thrown."}

    Any ideas about the actual typemap?

    Read the article

  • jQuery UI Tabs customization for many tabs

    - by Tauren
    I'd like to implement a tab bar using jQuery UI Tabs that has been customised to work like the one on HootSuite. Try this on HootSuite to see what I mean:

    1. Log in to your hootsuite.com account
    2. Click the + symbol to the right of your tabs
    3. Add tabs named "MMMMMMMMMMMMMMMMM" until a "More..." tab appears

    HootSuite includes the following features, all of which I would like to have:

    1. Fit as many tabs as possible onto the screen. Users with larger screens would see more tabs.
    2. If they run out of space, a "More..." tab would appear with a drop-down list.
    3. Clicking on the More tab would drop down a list of additional tabs.
    4. Tabs can be dragged and rearranged. Tabs in the More drop-down list can be dragged to the tab bar.
    5. Delete tabs with a small X next to the tab name.
    6. Add tabs with a + icon to the right of the last tab.

    I already have a tab bar working that does 4 and 6. I found a Paging Tab Plugin which is pretty cool, but it works differently. Does anyone know of plugins or techniques that would help me accomplish the above? My thought is not to make the More tab a real tab, but just an object that looks like one. I'd add logic to the tabs.add method to calculate whether another tab can fit; if it can't, I would add the details of the tab to my separate "More" list. There'd be a fair amount of effort to get this all working, so if there are any plugins that would help, I'd love to hear about them.

    Read the article

  • Embedding IronPython in a WinForms app and interrupting execution

    - by namenlos
    BACKGROUND: I've successfully embedded IronPython in my WinForms apps using techniques like the one described here: http://blog.peterlesliemorris.com/archive/2010/05/19/embedding-ironpython-into-a-c-application.aspx In the context of the embedding, my users may write loops, etc. I'm using IronPython 2.6 (the builds for both .NET 2.0 and .NET 4.0). MY PROBLEM: Sometimes the users will need to interrupt the execution of their code; in other words, they need something like the ability to hit CTRL-C to halt execution, as when running Python or IronPython from the command line. I want to add a button to the WinForm that, when pressed, halts the execution, but I'm not sure how to do this. MY QUESTION: How can I make it so that pressing a "stop" button will actually halt the execution of the user-entered IronPython code? NOTES: I don't want to simply throw away that "session" - I still want the user to be able to interact with the session and access any results that were available before it was halted. I am assuming I will need to execute this on a separate thread; any guidance or sample code on doing this correctly would be appreciated.
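
    For what it's worth, one possible approach is cooperative interruption via a trace hook: run the user's code on a worker thread, check a stop flag on every traced line, and raise to unwind the script while keeping its scope alive for inspection. The sketch below is plain Python with invented names (ScriptRunner and friends); an IronPython host would apply the same idea through its own hosting, threading and tracing facilities rather than this exact code.

        import sys
        import threading
        import time


        class ScriptRunner:
            """Runs user-supplied code on a worker thread and supports cooperative interruption."""

            def __init__(self):
                self._stop_requested = threading.Event()
                self.scope = {}  # survives an interrupt, so earlier results stay inspectable

            def _trace(self, frame, event, arg):
                # Called for every traced line of the user's code; raising here unwinds it.
                if self._stop_requested.is_set():
                    raise KeyboardInterrupt("stopped by user")
                return self._trace

            def run(self, source):
                self._stop_requested.clear()

                def worker():
                    sys.settrace(self._trace)  # the trace hook applies to this thread only
                    try:
                        exec(source, self.scope)
                    except KeyboardInterrupt:
                        self.scope["__interrupted__"] = True
                    finally:
                        sys.settrace(None)

                self.thread = threading.Thread(target=worker, daemon=True)
                self.thread.start()

            def stop(self):
                # This is what the "stop" button in the host application would call.
                self._stop_requested.set()


        runner = ScriptRunner()
        runner.run("total = 0\nwhile True:\n    total += 1\n")
        time.sleep(0.2)   # let the user code run for a moment
        runner.stop()
        runner.thread.join()
        print(runner.scope.get("total"), runner.scope.get("__interrupted__"))

    Line-level tracing adds noticeable overhead, which is the usual trade-off of this technique compared with aborting the thread outright.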

    Read the article

  • Looking for a good world map generation algorithm

    - by FalconNL
    I'm working on a Civilization-like game and I'm looking for a good algorithm for generating Earth-like world maps. I've experimented with a few alternatives, but haven't hit on a real winner yet. One option is to generate a heightmap using Perlin noise and add water at a level such that about 30% of the world is land. While Perlin noise (or similar fractal-based techniques) is frequently used for terrain and is reasonably realistic, it doesn't offer much in the way of control over the number, size and position of the resulting continents, which I'd like to have from a gameplay perspective. See http://farm3.static.flickr.com/2792/4462870263_ff26c40365_o.jpg for an example (sorry, can't post pictures yet). A second option is to start with a randomly positioned one-tile seed (I'm working on a grid of tiles), determine the desired size for the continent, and each turn add a tile that is horizontally or vertically adjacent to the existing continent until you've reached the desired size. Repeat for the other continents. This technique is part of the algorithm used in Civilization 4. The problem is that after placing the first few continents, it's possible to pick a starting location that's surrounded by other continents and thus won't fit the new one. Also, it has a tendency to spawn continents too close together, resulting in something that looks more like a river than continents. See http://farm5.static.flickr.com/4069/4462870383_46e86b155c_o.jpg for an example. Does anyone happen to know a good algorithm for generating realistic continents on a grid-based map while keeping control over their number and relative sizes?
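
    For reference, here is a minimal Python sketch of the seed-and-grow option described above. It is illustrative only: it does not solve the crowding problem the question raises, although the minimum-distance check on seed placement below is one common mitigation, and all sizes and parameters are made up.

        import random

        WATER, LAND = ".", "#"


        def generate(width, height, continent_sizes, min_seed_distance=8, seed=42):
            """Grow each continent from a random seed tile by repeatedly adding an adjacent water tile."""
            rng = random.Random(seed)
            grid = [[WATER] * width for _ in range(height)]
            seeds = []

            for size in continent_sizes:
                # Pick a seed reasonably far from existing seeds so continents are less likely to merge.
                for _ in range(1000):
                    x, y = rng.randrange(width), rng.randrange(height)
                    if all(abs(x - sx) + abs(y - sy) >= min_seed_distance for sx, sy in seeds):
                        break
                seeds.append((x, y))

                continent = [(x, y)]
                grid[y][x] = LAND
                attempts = 0
                while len(continent) < size and attempts < size * 50:
                    attempts += 1
                    cx, cy = rng.choice(continent)
                    nx, ny = rng.choice([(cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)])
                    if 0 <= nx < width and 0 <= ny < height and grid[ny][nx] == WATER:
                        continent.append((nx, ny))
                        grid[ny][nx] = LAND
            return grid


        for row in generate(60, 20, continent_sizes=[120, 90, 60]):
            print("".join(row))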

    Read the article

  • Why is the compiler not selecting my function-template overload in the following example?

    - by Steve Guidi
    Given the following function templates:

        #include <vector>
        #include <utility>

        struct Base { };
        struct Derived : Base { };

        // #1
        template <typename T1, typename T2>
        void f(const T1& a, const T2& b) { }

        // #2
        template <typename T1, typename T2>
        void f(const std::vector<std::pair<T1, T2> >& v, Base* p) { }

    why does the following code always invoke overload #1 instead of overload #2?

        int main()
        {
            std::vector<std::pair<int, int> > v;
            Derived derived;
            f(100, 200);     // clearly calls overload #1
            f(v, &derived);  // always calls overload #1
        }

    Given that the second argument of f is a pointer to a type derived from Base, I was hoping that the compiler would choose overload #2, as it is a better match than the generic types in overload #1. Are there any techniques I could use to rewrite these functions so that the user can write code as displayed in main (i.e., leveraging compiler deduction of argument types)?

    Read the article

< Previous Page | 70 71 72 73 74 75 76 77 78 79 80 81  | Next Page >