Search Results

Search found 1611 results on 65 pages for 'technique'.


  • What triggered the popularity of lambda functions in modern mainstream programming languages?

    - by Giorgio
    In the last few years, anonymous functions (AKA lambda functions) have become a very popular language construct, and almost every major/mainstream programming language has introduced them or plans to introduce them in an upcoming revision of the standard. Yet anonymous functions are a very old and very well-known concept in mathematics and computer science (invented by the mathematician Alonzo Church around 1936, and used by the Lisp programming language since 1958; see e.g. here).

    So why didn't today's mainstream programming languages (many of which originated 15 to 20 years ago) support lambda functions from the very beginning, only introducing them later? And what triggered the massive adoption of anonymous functions in the last few years? Is there some specific event, new requirement or programming technique that started this phenomenon?

    IMPORTANT NOTE: The focus of this question is the introduction of anonymous functions in modern, mainstream (and therefore, maybe with a few exceptions, non-functional) languages. Also, note that anonymous functions (blocks) are present in Smalltalk, which is not a functional language, and that normal named functions have been present even in procedural languages like C and Pascal for a long time. Please do not overgeneralize your answers by speaking about "the adoption of the functional paradigm and its benefits", because this is not the topic of the question.

    Read the article

  • Does TDD's "Obvious Implementation" mean code first, test after?

    - by natasky
    My friend and I are relatively new to TDD and have a dispute about the "Obvious Implementation" technique (from "TDD By Example" by Kent Beck). My friend says it means that if the implementation is obvious, you should go ahead and write it - before any test for that new behavior. And indeed the book says: "How do you implement simple operations? Just implement them." Also: "Sometimes you are sure you know how to implement an operation. Go ahead."

    I think what the author means is that you should test first, and then "just implement" it - as opposed to "Fake It ('Till You Make It)" and other techniques, which require smaller steps in the implementation stage. Also, after these quotes the author talks about getting "red bars" (failing tests) when doing "Obvious Implementation" - how can you get a red bar without a test? Yet I couldn't find any quote from the book saying "obvious" still means test first.

    What do you think? Should we test first or after when the implementation is "obvious" (according to TDD, of course)? Do you know a book or blog post saying just that?
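
    To make the test-first reading concrete, here is a minimal sketch (C# with NUnit; the Money class and the amounts are hypothetical, not taken from the book): the test is written and run first, giving the red bar, and only then is the "obvious" implementation typed in as a single step rather than faked or triangulated.

      using NUnit.Framework;

      [TestFixture]
      public class MoneyTests
      {
          [Test]
          public void Adding_two_amounts_sums_them()
          {
              // Step 1: write the test and run it -- red bar, since Money
              // (or its Add method) does not exist yet.
              var result = new Money(3).Add(new Money(4));
              Assert.AreEqual(7, result.Amount);
          }
      }

      public class Money
      {
          public int Amount { get; }
          public Money(int amount) { Amount = amount; }

          // Step 2: the implementation is obvious, so type it in as one step
          // (no faking, no triangulation) and run the test again -- green bar.
          public Money Add(Money other) => new Money(Amount + other.Amount);
      }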

    Read the article

  • How to use Pixel Bender (pbj) in ActionScript3 on large Vectors to make fast calculations?

    - by Arthur Wulf White
    Remember my old question: 2d game view camera zoom, rotation & offset using 'Filter' / 'Shader' processing? I figured I could use a Pixel Bender shader to do the computation for any large group of elements in a game to save on processing time. At least it's a theory worth checking. I also read this question: Pass large array to pixel shader, which I'm guessing is about accomplishing the same thing in a different language. I read this tutorial: http://unitzeroone.com/blog/2009/03/18/flash-10-massive-amounts-of-3d-particles-with-alchemy-source-included/

    I am attempting to do some tests. Here is some of the code:

      private const SIZE : int = Math.pow(10, 5);
      private var testVectorNum : Vector.<Number>;

      private function testShader():void
      {
          shader.data.ab.value = [1.0, 8.0];
          shader.data.src.input = testVectorNum;
          shader.data.src.width = SIZE/400;
          shader.data.src.height = 100;
          shaderJob = new ShaderJob(shader, testVectorNum, SIZE / 4, 1);
          var time : int = getTimer(), i : int = 0;
          shaderJob.start(true);
          trace("TEST1 : ", getTimer() - time);
      }

    The problem is that I keep getting an error saying: [Fault] exception, information=Error: Error #1000: The system is out of memory.

    Update: I managed to partially work around the problem by converting the vector into BitmapData (using this technique I still get a speed boost of 3x using Pixel Bender):

      private function testShader():void
      {
          shader.data.ab.value = [1.0, 8.0];
          var time : int = getTimer(), i : int = 0;
          testBitmapData.setVector(testBitmapData.rect, testVectorInt);
          shader.data.src.input = testBitmapData;
          shaderJob = new ShaderJob(shader, testBitmapData);
          shaderJob.start(true);
          testVectorInt = testBitmapData.getVector(testBitmapData.rect);
          trace("TEST1 : ", getTimer() - time);
      }

    Read the article

  • Light mask map and camera for static lights in XNA Platformer

    - by JiminyCricket
    Using the example for some basic light maps found here: http://blog.josack.com/2011/07/xna-2d-dynamic-lighting.html, I've managed to create a lightmap texture using individual lightmaps and display it over a 2D tiled world, as in the Platformer example. I'm using the very basic 2D camera example as found here: http://www.david-amador.com/2009/10/xna-camera-2d-with-zoom-and-rotation/, and the problem is that the lightmap texture scrolls with the player sprite. This looks pretty good and would be excellent for lighting the player sprite as it moves, but I also want to be able to place static lights (or some initial position for the lights) that do not move with the player or camera. When I turn off the camera or give it a static position, it works as a series of static lights, so I believe the problem is caused by the camera transformation matrix following the player around.

    I'm using RenderTarget2Ds: one for the main game screen after all the backgrounds and tiles are rendered, and one for the "lightmap", which consists of a black background and a bunch of lighting textures merged with it using additive blending. For now, I'm doing all of this in PlatformerGame.cs, where the camera transformation and position are set and the level.Draw() call is made. I can't figure out how to separate the drawing of the lightmap from the camera following the player. I was thinking it would be better to render the shadows and lighting directly in the drawing of the level itself, but I'm not sure how to do that either, because this technique requires RenderTarget2Ds and calling SpriteBatch.Begin()/End().
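
    One way to keep static lights pinned to the world (a sketch under assumptions, not code from either linked post - the class, field and parameter names here are hypothetical) is to draw the light textures into the lightmap render target using the same camera transform as the world pass, and only draw the final full-screen composite of the two render targets without a transform:

      using Microsoft.Xna.Framework;
      using Microsoft.Xna.Framework.Graphics;

      public class LightMapRenderer
      {
          // Multiplies the lightmap over the scene (src * dst), as in the
          // lightmap tutorial's composite step.
          static readonly BlendState Multiply = new BlendState
          {
              ColorSourceBlend = Blend.DestinationColor,
              ColorDestinationBlend = Blend.Zero,
              AlphaSourceBlend = Blend.DestinationAlpha,
              AlphaDestinationBlend = Blend.Zero
          };

          public void Draw(GraphicsDevice device, SpriteBatch spriteBatch,
              RenderTarget2D sceneTarget, RenderTarget2D lightMapTarget,
              Matrix cameraTransform, Texture2D lightTexture,
              Vector2[] staticLightWorldPositions)
          {
              device.SetRenderTarget(lightMapTarget);
              device.Clear(Color.Black); // unlit areas stay dark

              // Same transform as the world pass, so each light is pinned to
              // a world position instead of scrolling with the player.
              spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.Additive,
                  null, null, null, null, cameraTransform);
              foreach (Vector2 worldPos in staticLightWorldPositions)
                  spriteBatch.Draw(lightTexture, worldPos, Color.White);
              spriteBatch.End();

              device.SetRenderTarget(null);

              // Composite in screen space: no transform on the full-screen quads.
              spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.Opaque);
              spriteBatch.Draw(sceneTarget, Vector2.Zero, Color.White);
              spriteBatch.End();

              spriteBatch.Begin(SpriteSortMode.Deferred, Multiply);
              spriteBatch.Draw(lightMapTarget, Vector2.Zero, Color.White);
              spriteBatch.End();
          }
      }

    The key point is that the world pass and the lights pass share cameraTransform, while the composite pass uses none, so a light placed at a fixed world position stays there no matter where the camera goes.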

    Read the article

  • Verification vs validation again, does testing belong to verification? If so, which?

    - by user970696
    I have asked this before and created a lot of controversy, so I tried to collect some data and am asking a similar question again. E.g. V&V where all testing is only validation: http://www.buzzle.com/editorials/4-5-2005-68117.asp

    According to ISO 12207, testing is done in validation:

    • Prepare Test Requirements, Cases and Specifications
    • Conduct the Tests

    In verification, it mentions: "The code implements proper event sequence, consistent interfaces, correct data and control flow, completeness, appropriate allocation timing and sizing budgets, and error definition, isolation, and recovery," and "The software components and units of each software item have been completely and correctly integrated into the software item." I am not sure how to verify this without testing, but testing is not listed there as a technique.

    From IEEE:

    Verification: The process of evaluating software to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase. [IEEE-STD-610]

    Validation: The process of evaluating software during or at the end of the development process to determine whether it satisfies specified requirements. [IEEE-STD-610]

    At the end of the development process? That would mean UAT. So the question is: which testing (unit, integration, system, UAT) is considered verification, and which validation? I do not understand why some say dynamic verification is testing, while others say testing is only validation.

    An example: I am testing an application. The system requirements say there are two fields with a maximum length of 64 characters, and a Save button. The use case says: the user fills in the first and last name and saves. When checking the fields and the presence of the Save button, I would say it's verification. When I follow the use case, it's validation. So it's both together, done on the system as a whole.

    Read the article

  • rel="nofollow" SEO impact

    - by Torez
    I saw a technique used where there was a block with three parts:

    1. Image (wrapped in an anchor tag)
    2. Heading (anchor tag with heading text)
    3. Paragraph (regular p tag with synopsis content)

    e.g.

      <li class="block">
        <a rel="nofollow" class="thumb" href="#"><img src="images/placeholder_service_thumbnail.jpg" alt="" /></a>
        <a class="h3" href="#">Good SEO Heading</a>
        <p>Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas. Vestibulum tortor quam, feugiat vitae, ultricies eget, tempor sit amet, ante. Donec eu...</p>
      </li>

    There was a rel="nofollow" on the anchor tag wrapping the image. So the idea is that the user still has the ability to click the image and go to the details page, but the image link passes no ranking value; the heading-text link is the only one that counts for that specific page.

    Q: Is this the correct approach? Does this even do anything? What is the best practice?

    Read the article

  • Parallelize incremental processing in Tabular #ssas #tabular

    - by Marco Russo (SQLBI)
    I recently ran into a problem trying to improve the parallelism of Tabular processing. As you know, multiple tables can be processed in parallel, whereas the processing of several partitions within the same table cannot be parallelized. When you perform an incremental update by adding only new rows to an existing table, what you really do is add rows to a partition, so adding rows to many tables means adding rows to several partitions. The particular condition you have in this case is that every partition in which you add rows belongs to a different table.

    Adding rows implies using the ProcessAdd command; its QueryBinding parameter specifies a SQL query to read the new rows, otherwise the original query specified for the partition will be used, and that could generate duplicated data if you don't have dynamic behavior on the SQL side.

    If you create the required XMLA code manually, you will find that the QueryBinding node that should be part of the ProcessAdd command has to be moved out of ProcessAdd if you are using a Batch command with more than one Process command (which is the reason why you want to use a single batch: to run multiple process operations in parallel!). If you use AMO (Analysis Management Objects) you will find that this combination is not supported; even though you don't get a syntax error compiling the code, you might obtain this error at execution time:

    "The syntax for the 'Process' command is incorrect. The 'Bindings' keyword cannot appear under a 'Process' command if the 'Process' command is a part of a 'Batch' command and there are more than one 'Process' commands in the 'Batch' or the 'Batch' command contains any out of line related information. In this case, the 'Bindings' keyword should be a part of the 'Batch' command only."

    If this is happening to you, the best solution I've found is manipulating the XMLA code generated by AMO, moving the Bindings nodes into the right place. A more detailed description of the issue, and the code required to send a correct XMLA batch to Analysis Services, is available in my article Parallelize ProcessAdd with AMO. By the way, the same technique (and code) can also be used if you have the same problem in a Multidimensional model.

    Read the article

  • Dealing with curly brace soup

    - by Cyborgx37
    I've programmed in both C# and VB.NET for years, but primarily in VB. I'm making a career shift toward C# and, overall, I like C# better. One issue I'm having, though, is curly brace soup. In VB, each structure keyword has a matching close keyword, for example:

      Namespace ...
          Class ...
              Function ...
                  For ...
                      Using ...
                          If ...
                              ...
                          End If
                          If ...
                              ...
                          End If
                      End Using
                  Next
              End Function
          End Class
      End Namespace

    The same code written in C# ends up very hard to read:

      namespace ...
      {
          class ...
          {
              function ...
              {
                  for ...
                  {
                      using ...
                      {
                          if ...
                          {
                              ...
                          }
                          if ...
                          {
                              ...
                          }
                      }
                  } // wait... what level is this?
              }
          }
      }

    Being so used to VB, I'm wondering if there's a technique employed by c-style programmers to improve readability and to ensure that your code ends up in the correct "block". The above example is relatively easy to read, but sometimes at the end of a piece of code I'll have 8 or more levels of curly braces, requiring me to scroll up several pages to figure out which brace ends the block I'm interested in.
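
    One common coping convention (an illustration, not from the question; all names here are hypothetical) is to comment the closing brace of any block that runs long, which works alongside the brace-matching and code-folding features most C# IDEs provide:

      using System.Collections.Generic;

      namespace Acme.Billing
      {
          class InvoiceProcessor
          {
              readonly List<string> invoices = new List<string>();

              void Process()
              {
                  foreach (string invoice in invoices)
                  {
                      if (invoice.Length > 0)
                      {
                          // ...
                      } // if
                  } // foreach invoice
              } // Process()
          } // class InvoiceProcessor
      } // namespace Acme.Billing

    The deeper cure is usually to keep nesting shallow by extracting methods, at which point the brace comments become largely unnecessary.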

    Read the article

  • Does immutability entirely eliminate the need for locks in multi-processor programming?

    - by GlenPeterson
    Part 1

    Clearly immutability minimizes the need for locks in multi-processor programming, but does it eliminate that need, or are there instances where immutability alone is not enough? It seems to me that you can only defer processing and encapsulate state for so long before most programs have to actually DO something. If a program performs actions on multiple processors, something needs to collect and aggregate the results. All this involves multi-process communication before, after, and possibly during some transformations. The start and end states of the machines are different. Can this always be done with no locks, just by throwing out each object and creating a new one instead of changing the original (a crude view of immutability)? What cases still require locking?

    I'm interested in both the theoretical/academic answer and the practical/real-world answer. I know a lot of functional programmers like to talk about "no side effects", but in the "real world" everything has a side effect. Every processor cycle takes time, electricity and machine resources away from other processes. So I understand that there may be more than one perspective to answer this question from. If immutability is safe, given certain bounds or assumptions, I want to know exactly where the borders of the "safety zone" are. Some examples of possible boundaries:

    • I/O
    • Exceptions/errors
    • Interfaces with programs written in other languages
    • Interfaces with other machines (physical, virtual, or theoretical)

    Special thanks to @JimmaHoffa for his comment which started this question!

    Part 2

    Multi-processor programming is often used as an optimization technique - to make some code run faster. When is it faster to use locks vs. immutable objects? Given the limits set out in Amdahl's Law, when can you achieve better overall performance (with or without the garbage collector taken into account) with immutable objects vs. locking mutable ones?

    Summary

    I'm combining these two questions into one to try to get at where the bounding box is for immutability as a solution to threading problems.
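
    To make one boundary concrete, here is a small C# sketch (it uses the System.Collections.Immutable package; the scenario is illustrative, not from the question). Even when the data structure itself is immutable, two threads publishing a new version of a shared reference still race, so the swap itself needs a lock or an atomic compare-and-swap:

      using System.Collections.Immutable;
      using System.Threading;

      public static class SharedScores
      {
          // The list itself is immutable; only this one reference ever changes.
          private static ImmutableList<int> scores = ImmutableList<int>.Empty;

          public static void Add(int score)
          {
              // Without this retry loop (or a lock), two threads could both read
              // the same old list, each build a new one, and one update would be
              // silently lost -- immutability alone does not coordinate the swap.
              while (true)
              {
                  ImmutableList<int> current = scores;
                  ImmutableList<int> updated = current.Add(score);
                  if (Interlocked.CompareExchange(ref scores, updated, current) == current)
                      return;
              }
          }
      }

    So immutability removes the locks around reading and around building the new state, but some synchronization primitive still has to guard the single mutable cell where the "current" state is published.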

    Read the article

  • Modelling work-flow plus interaction with a database - quick and accessible options

    - by cjmUK
    I'm wanting to model a (proposed) manufacturing line, with specific emphasis on interaction with a traceability database. That is, various process engineers have already mapped the manufacturing process - I'm only interested in the various stations along the line that have to talk to the DB. The intended audience is a mixture of project managers, engineers and IT people; the purpose is to identify:

    • points at which the line interacts with the DB (perhaps going so far as indicating the stored procedures called at each point, perhaps even which parameters are passed)
    • the communication source (PC/handheld device/PLC)
    • the communication medium (wireless/fibre/copper)
    • control flow (if a leak test fails, the unit is diverted to a repair station)

    Basically, the model will be used to focus different groups on outstanding tasks; for example, I'm interested in the DB and any front-end app needed, process engineers need to be thinking about the workflow and liaising with the PLC suppliers, and the other IT guys need to make sure we have the hardware and comms in place. Obviously I could just improvise in Visio, but I was wondering if there is a particular modelling technique that might particularly suit my needs or my audience. I'm thinking of a visual model with supporting documentation (as little as possible, as much as is necessary). Clearly, I don't want something that will take me ages to (effectively) learn, nor one that will alienate non-technical members of the project team. So far I've had brief looks at BPMN, EPC diagrams and standard flow diagrams... and I've forgotten most of what I used to know about UML... And I'm not against picking and mixing... as long as it is quick, clear and effective.

    Conclusion: In the end, I opted for a quasi-workflow/dataflow diagram. I mapped out the parts of the manufacturing process that interact with the traceability DB and indicated, in a significantly simplified form, the data flows and DB activity. Alongside that, I have a supporting document which outlines each process, the data being transacted for each process (a 'data dictionary' of sorts) and details of the hardware and connectivity required. I can't decide whether it is a product of genius or a crime against established software development practices, but I do think it will hit the mark for this particular audience.

    Read the article

  • Wanted: Java Code Brainteasers

    - by Tori Wieldt
    The Jan/Feb Java Magazine will go out next week. It's full of great Java stories, interviews and technical articles. It also includes a Fix This section; the idea of this section is to challenge a Java developer's coding skills. It's a multiple-choice brainteaser that includes code and possible answers. The answer is provided in the next issue. For an example, check out Fix This in the Java Magazine premier issue.

    We are looking for community submissions to Fix This. Do you have a good code brainteaser? Remember, you want to tease your fellow devs, not stump them completely! If you have a submission, here's what you do:

    1. State the problem, including a short summary of the tool/technique, in about 75 words.
    2. Send us the code snippet, with a short set-up so readers know what they are looking at (such as, "Consider the following piece of code to have database access within a Servlet.")
    3. Provide four multiple-choice answers to the question, "What's the fix?"
    4. Give us the answer, along with a brief explanation of why.
    5. Tell us who you are (name, occupation, etc.)
    6. Email the above to JAVAMAG_US at ORACLE.COM with "Fix This Submission" in the title.

    Deadlines for Fix This for the next two issues of Java Magazine are Dec. 12th and Jan. 15th. Bring It!

    Read the article

  • What .NET objects should I use to create a cookie based session in MVC?

    - by makerofthings7
    I'm writing a custom password reset application that uses a validation technique that doesn't fit cleanly with the ASP.NET Membership Provider's challenge questions. Namely, I need to invoke a workflow and collect information from the end user (backup phone number, email address) after the user logs in using a custom form. The only way I know to create a cookie-based session (without too much "innovation" on my part) is to use WIF.

    What other standard objects can I use with ASP.NET MVC to create an authenticated session that works with non-Windows user stores? Ideally I can store "role" or claim information in the session object, such as "admin", "departmentXadmin", "normalUser", or "restrictedUser".

    The workflow would look like this:

    1. User logs in with username and password
    2. If the username and password are correct, a (stateless) cookie-based session is created
    3. The user gets redirected to an HTML form that allows them to enter their backup phone number (for SMS dual factor), or validate it if already set
    4. The user can then change their password using the form provided

    The "forgot password" flow would look like this:

    1. User requests an OTP code to be sent to their phone
    2. User logs in using username and OTP
    3. If the OTP is valid and not expired, a cookie-based session is created and the user is redirected to a form that allows password reset
    4. Show the password reset form, and process the results
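
    One standard, non-WIF option (a sketch of the classic ASP.NET forms-authentication route; the class name and role strings below are illustrative, not from the question) is FormsAuthentication, which issues an encrypted, stateless cookie and can carry role/claim data in the ticket's UserData field:

      using System;
      using System.Web;
      using System.Web.Security;

      public static class CustomAuth
      {
          // Issues a cookie-based session for a user validated against
          // a non-Windows store (custom form, OTP, etc.).
          public static void SignIn(HttpResponse response, string userName)
          {
              var ticket = new FormsAuthenticationTicket(
                  1,                               // ticket version
                  userName,
                  DateTime.Now,                    // issue time
                  DateTime.Now.AddMinutes(30),     // expiration
                  false,                           // not persistent
                  "normalUser,departmentXadmin");  // roles/claims in UserData

              string encrypted = FormsAuthentication.Encrypt(ticket);
              response.Cookies.Add(
                  new HttpCookie(FormsAuthentication.FormsCookieName, encrypted));
          }
      }

    On subsequent requests, FormsAuthentication.Decrypt() on the cookie value recovers the ticket, and the UserData string can be parsed to build a custom IPrincipal carrying the user's roles.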

    Read the article

  • Shooting Print Quality Pictures with a Camera Phone [Video]

    - by Jason Fitzpatrick
    Camera phones get a lot of grief for being underpowered, but in this video tutorial the crew at SLR Lounge shows how basic technique and a good eye overcome all. Last year, over at the photo blog FStoppers, they put together a video showing off how you could use the iPhone as a fashion camera - essentially arguing that the camera isn't as important as the photographer. A lot of people said "Well yeah, but you had professional models and thousands of dollars in lighting equipment!" in reaction to the video. In turn, the crew at SLR Lounge decided to make their own video, shot using only an iPhone camera and two reflectors (as well as an attractive but informal model). It of course helps to have some sidekicks to hold up your reflectors, but the point still stands: modern camera phones are perfectly capable of good photos.

    The SLR Lounge iPhone Photo Shoot - A Follow Up Tribute to The FStoppers [YouTube]

    Read the article

  • Developing wheel reinventing tendencies into a skill as opposed to reluctantly learning wheel-finding skills? [duplicate]

    - by Korey Hinton
    This question already has an answer here: Is reinventing the wheel really all that bad? (20 answers)

    I am more of a high-level wheel reinventor. I definitely prefer to make use of existing API features built into a language and popular third-party frameworks that I know can solve the problem, however when I have a particular problem that I feel capable of solving within a reasonable time I am very reluctant to find someone else's solution. Here are a few reasons why I reinvent:

    • It takes time to learn a new API
    • API restrictions might exist that I don't know about
    • Avoiding re-work of unfamiliar code

    I am conflicted between doing what I know and shifting to a new technique I don't feel comfortable with. On one hand I feel like following my instincts and getting really good at solving problems, especially ones that I would never challenge myself with if all I did was try to find answers. And on the other hand I feel like I might be missing out on important skills like saving time by finding the right framework and expanding my knowledge by learning how to use a new framework.

    I guess my question comes down to this: my current attitude is to stick to the built-in API and APIs I know well* and to not spend my time searching github for a solution to a problem I know I can solve myself within a reasonable amount of time. Is that a reasonable balance for a successful programmer?

    *Obviously I will still look around for new frameworks that save time and solve/simplify difficult problems.

    Read the article

  • Client-Server MMOG & data structures sync when joining / playing

    - by plang
    After reading a few articles on MMOG architecture, there is still one point on which I cannot find much information: how you keep server data in sync on the client, both when you join and while you play. A pretty vague question, I agree. Let me refine it:

    Let's say we have an MMOG virtual world subdivided into geographical cells. A player in a cell is mostly interested in what happens in the cell itself and all the surrounding cells, not more. When joining the game for the first time, the only thing we can do is send some sort of "database dump" of the interesting cells to the client. While playing, I guess it would be very inefficient to do the same thing regularly. I imagine the best thing to do is to send "deltas" to the client, which would allow keeping the local database in sync.

    Now let's say the player moves and arrives in another cell. The set of surrounding cells changes, and for every new cell the player subscribes to, the same technique as used when joining the game has to be used: some sort of "database dump".

    This mechanic of joining/moving in a cell-based MMOG virtual world interests me, and I was wondering if there are tried and tested techniques in this domain. Thanks!
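
    For what it's worth, the snapshot-plus-deltas scheme described here is the standard approach; below is a minimal sketch of the message shapes (C#; all names hypothetical) to make the join/move mechanic concrete:

      using System.Collections.Generic;

      // Sent once when a client subscribes to a cell: the full "database dump".
      public class CellSnapshot
      {
          public int CellId;
          public int Version; // server tick this state reflects
          public Dictionary<long, EntityState> Entities = new Dictionary<long, EntityState>();
      }

      // Sent periodically afterwards: only what changed since BaseVersion.
      public class CellDelta
      {
          public int CellId;
          public int BaseVersion; // the client must already hold this version
          public int NewVersion;
          public List<EntityState> AddedOrUpdated = new List<EntityState>();
          public List<long> Removed = new List<long>();
      }

      public class EntityState
      {
          public long EntityId;
          public float X, Y;
          public int Health, Score;
      }

    When the player crosses a cell boundary, the client drops the state of cells that fell out of range and receives a fresh CellSnapshot for each newly subscribed cell; per client, the server only has to remember the last acknowledged version of each subscribed cell to know which delta to send next.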

    Read the article

  • Methodology To Determine Cause Of User Specific Error

    - by user3163629
    We have software that, for certain clients, fails to download a file. The software is developed in Python and compiled into a Windows executable. The cause of the error is still unknown, but we have established that the client has an active internet connection. We suspect the cause is the client's network setup. The error cannot be replicated in house.

    What technique or methodology should be applied to this kind of client-specific error that cannot be replicated in house? The end goal is to determine the cause of the error so we can move on to the solution. For example:

    • Remote debugging: produce a debug version of the software and ask the client to send back a debug output file. This involves a lot of time (back-and-forth communication) and requires the client to act in a timely manner to be successful.
    • In-house debugging: visit the client and determine their network setup, etc. Possibly develop a series of test scripts beforehand to run on the client's computer under the same network.

    Are there other methodologies and techniques I am not aware of?

    Read the article

  • Simplest way to render image over top of another with another image used as mask in OpenGL?

    - by Adam Naylor
    The effect I'm looking for is to have a single large background image that is always visible (at full alpha), and then show a second image (what I call a light map or specular map) partially over the top, based on a third image (which is effectively a mask). The effect is similar to this effect, except that instead of simply darkening or lightening the background image using the third image, it needs to mask the second without affecting the first at all. The third image is the only one that moves, therefore hard-baking the third image's alpha into the second image isn't an option. If my explanation isn't clear, I'll provide visual examples when I have more time.

    I'd prefer not to go down a shader route, as I haven't taught myself this area yet, so unless I have to I'd rather try to achieve this with simple alpha blending. Update: happy to use a shader approach after all. Cheers.

    Additional: these third images are obviously light sources being cast onto the first image, showing the specular information from the second image to simulate light 'shining' off the objects in the first image. The solution I implement will need to allow two light sources to potentially overlap, so my current thought is that the alpha values of the two light images will need to be combined (added?) to produce a final image which masks the second image. Don't worry about things like coloured lights; for this technique the lights are all considered white.

    Read the article

  • Trouble with SAT style vector projection in C#/XNA

    - by ssb
    Simply put, I'm having a hard time working out how to work with XNA's Vector2 types while maintaining spatial considerations. I'm working with the separating axis theorem and trying to project vectors onto an arbitrary axis to check whether those projections overlap, but between the severe lack of XNA-specific help online and the pseudocode everywhere that omits key parts of the algorithm, googling has been little help. I'm aware of HOW to project a vector, but the way I know of doing it involves the two vectors starting from the same point. Particularly here: http://www.metanetsoftware.com/technique/tutorialA.html

    So let's say I have a simple rectangle, and I store each of its corners in a list of Vector2s. How would I go about projecting that onto an arbitrary axis? The crux of my problem is that taking the dot product of, say, a Vector2 of (1, 0) and a Vector2 of (50, 50) won't get me the dot product I'm looking for... or will it? Because that (50, 50) won't be the vector of the polygon's vertex, but whatever XNA calculates. It's getting the calculation from the right starting point that's throwing me off. I'm sorry if this is unclear, but my brain is fried from trying to think about this. I need a better understanding of how XNA treats Vector2s as actual vectors and not just as random points.
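
    For reference, here is a minimal sketch of the projection step (XNA-style C#; the helper class is hypothetical, not from the question). The dot product of a vertex's position with a unit axis IS the scalar projection: every position vector is implicitly measured from the origin, so no common starting point is needed:

      using Microsoft.Xna.Framework;

      public static class SatHelper
      {
          // Projects a polygon's vertices onto an axis and returns the
          // [min, max] interval of the scalar projections.
          public static void Project(Vector2[] vertices, Vector2 axis,
                                     out float min, out float max)
          {
              axis.Normalize(); // projections are only comparable on a unit axis

              min = max = Vector2.Dot(vertices[0], axis);
              for (int i = 1; i < vertices.Length; i++)
              {
                  float p = Vector2.Dot(vertices[i], axis);
                  if (p < min) min = p;
                  else if (p > max) max = p;
              }
          }

          // Two convex shapes overlap on this axis iff their intervals overlap;
          // SAT requires this to hold for every candidate axis.
          public static bool Overlap(float minA, float maxA, float minB, float maxB)
          {
              return maxA >= minB && maxB >= minA;
          }
      }

    So Vector2.Dot(new Vector2(50, 50), new Vector2(1, 0)) = 50 is exactly the projection of that corner onto the x-axis; collect the min and max over all corners of each shape, and compare the two intervals per axis.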

    Read the article

  • Database Migration Scripts: Getting from place A to place B

    - by Phil Factor
    We'll be looking at a typical database 'migration' script which uses an unusual technique to migrate existing 'de-normalised' data into a more correct form.

    So, the book-distribution business that uses the PUBS database has gradually grown organically, and has slipped into 'de-normalisation' habits. What's this? A new column with a list of tags or 'types' assigned to books. Because books aren't really in just one category, someone has 'cured' the mismatch between the database and the business requirements. This is fine, but it is now proving difficult for their new website that allows searches by tags. Any request for a history book really has to look in the entire list of associated tags rather than the 'Type' field that only keeps the primary tag. We have other problems. The TypeList column has duplicates in there which will be affecting the reporting, and there is the danger of mis-spellings getting in there. The reporting system can't be persuaded to do reports based on the tags, and the database developers are complaining about the unCoddly things going on in their database. In your version of PUBS, this extra column doesn't exist, so we've added it and put in 10,000 titles using SQL Data Generator.

    So how do we refactor this database?

      IF OBJECT_ID('TagName') IS NULL OR OBJECT_ID('TagTitle') IS NULL
      BEGIN
        /* firstly, we create a table of all the tags */
        CREATE TABLE TagName
          (TagName_ID INT IDENTITY(1,1) PRIMARY KEY,
           Tag VARCHAR(20) NOT NULL UNIQUE)

        /* ...and we insert into it all the tags from the list
           (remembering to take out any leading spaces) */
        INSERT INTO TagName (Tag)
          SELECT DISTINCT LTRIM(x.y.value('.', 'Varchar(80)')) AS [Tag]
          FROM (SELECT Title_ID,
                       CONVERT(XML, '<list><i>' + REPLACE(TypeList, ',', '</i><i>') + '</i></list>')
                       AS XMLkeywords
                FROM dbo.titles) g
          CROSS APPLY XMLkeywords.nodes('/list/i/text()') AS x ( y )

        /* we can then use this table to provide a table that relates tags to articles */
        CREATE TABLE TagTitle
          (TagTitle_ID INT IDENTITY(1, 1),
           [title_id] [dbo].[tid] NOT NULL REFERENCES titles (Title_ID),
           TagName_ID INT NOT NULL REFERENCES TagName (Tagname_ID)
           CONSTRAINT [PK_TagTitle]
             PRIMARY KEY CLUSTERED ([title_id] ASC, TagName_ID)
             ON [PRIMARY])

        CREATE NONCLUSTERED INDEX idxTagName_ID ON TagTitle (TagName_ID)
          INCLUDE (TagTitle_ID, title_id)

        /* ...and it is easy to fill this with the tags for each title ... */
        INSERT INTO TagTitle (Title_ID, TagName_ID)
          SELECT DISTINCT Title_ID, TagName_ID
          FROM (SELECT Title_ID,
                       CONVERT(XML, '<list><i>' + REPLACE(TypeList, ',', '</i><i>') + '</i></list>')
                       AS XMLkeywords
                FROM dbo.titles) g
          CROSS APPLY XMLkeywords.nodes('/list/i/text()') AS x ( y )
          INNER JOIN TagName ON TagName.Tag = LTRIM(x.y.value('.', 'Varchar(80)'))
      END

    That's all there was to it. Now we can select all titles that have the military tag, just to try things out:

      SELECT Title FROM titles
        INNER JOIN TagTitle ON titles.title_ID = TagTitle.Title_ID
        INNER JOIN Tagname ON Tagname.TagName_ID = TagTitle.TagName_ID
        WHERE tagname.tag = 'Military'

    ...and see the top ten most popular tags for titles:

      SELECT Tag, COUNT(*) FROM titles
        INNER JOIN TagTitle ON titles.title_ID = TagTitle.Title_ID
        INNER JOIN Tagname ON Tagname.TagName_ID = TagTitle.TagName_ID
        GROUP BY Tag ORDER BY COUNT(*) DESC

    ...and if you still want your list of tags for each title, then here they are:

      SELECT title_ID, title, STUFF(
        (SELECT ',' + tagname.tag FROM titles thisTitle
           INNER JOIN TagTitle ON titles.title_ID = TagTitle.Title_ID
           INNER JOIN Tagname ON Tagname.TagName_ID = TagTitle.TagName_ID
         WHERE ThisTitle.title_id = titles.title_ID
         FOR XML PATH(''), TYPE).value('.', 'varchar(max)'), 1, 1, '')
      FROM titles
      ORDER BY title_ID

    So we've refactored our PUBS database without pain. We've even put in a check to prevent it being re-run once the new tables are created. [Diagram: the new tag relationship]

    We've done both the DDL to create the tables and their associated components, and the DML to put the data in them. I could have also included the script to remove the de-normalised TypeList column, but I'd do a whole lot of tests first before doing that. Yes, I've left out the assertion tests too, which should check the edge cases and make sure the result is what you'd expect. One thing I can't quite figure out is how to deal with an ordered list using this simple XML-based technique. We can ensure that, if we have to produce a list of tags, we can get the primary 'type' to be first in the list, but what if the entire order is significant? Thank goodness it isn't in this case. If it were, we might have to revisit a string-splitter function that returns the ordinal position of each component in the sequence.

    You'll see immediately that we can create a synchronisation script for deployment from a comparison tool such as SQL Compare, to change the schema (DDL). On the other hand, no tool could do the DML to stuff the data into the new table, since there is no way that any tool can work out where the data should go. We used some pretty hairy code to deal with a slightly untypical problem. We would have to do this migration by hand, and it has to go into source control as a batch. If most of your database changes are to be deployed by an automated process, then there must be a way of overriding this part of the data synchronisation process: taking the part of the script that fills the tables, checking that the tables have not already been filled, and executing it as part of the transaction. Of course, you might prefer the approach I've taken with this script of creating the tables in the same batch as the data conversion process, and then using the presence of the tables to prevent the script from being re-run.

    The problem with scripting a refactoring change to a database is that it has to work both ways. If we install the new system and then have to roll back the changes, several books may have been added, or had their tags changed, in the meantime. Yes, you have to script any rollback! These scripts have to be mercilessly tested, and put in source control, just in case of the rollback of a deployment after it has been in place for any length of time. I've shown you how to do this with the part of the script...

      /* the list of tags for each title */
      SELECT title_ID, title, STUFF(
        (SELECT ',' + tagname.tag FROM titles thisTitle
           INNER JOIN TagTitle ON titles.title_ID = TagTitle.Title_ID
           INNER JOIN Tagname ON Tagname.TagName_ID = TagTitle.TagName_ID
         WHERE ThisTitle.title_id = titles.title_ID
         FOR XML PATH(''), TYPE).value('.', 'varchar(max)'), 1, 1, '')
      FROM titles
      ORDER BY title_ID

    ...which would be turned into an UPDATE … FROM script:

      UPDATE titles SET typelist = ThisTagList
      FROM
        (SELECT title_ID, title, STUFF(
           (SELECT ',' + tagname.tag FROM titles thisTitle
              INNER JOIN TagTitle ON titles.title_ID = TagTitle.Title_ID
              INNER JOIN Tagname ON Tagname.TagName_ID = TagTitle.TagName_ID
            WHERE ThisTitle.title_id = titles.title_ID
            ORDER BY CASE WHEN tagname.tag = titles.[type] THEN 1 ELSE 0 END DESC
            FOR XML PATH(''), TYPE).value('.', 'varchar(max)'), 1, 1, '') AS ThisTagList
         FROM titles) f
      INNER JOIN Titles ON f.title_ID = Titles.title_ID

    You'll notice that it isn't quite a round trip, because the tags end up in a different order, though we've managed to make sure that the primary tag is the first one, as it was originally. So, we've improved the database for the poor book distributors using PUBS. It is not a major deal, but you've got to be prepared to provide a migration script that will go both forwards and backwards. Ideally, database refactoring scripts should be able to go from any version to any other. Schema synchronisation scripts can do this pretty easily, but no data synchronisation script can deal with serious refactoring jobs without the developers being able to specify how to deal with cases like this.

    Read the article

  • Passing Parameters to an ADF Page through the URL - Part 2.

    - by shay.shmeltzer
    I showed before how to pass a parameter on the URL when invoking a taskflow (where the taskflow starts with a method call and then a page). However, in some simpler scenarios you don't actually need a full-blown taskflow. Instead you can use page-level parameters defined for your page in the adfc-config.xml file. So below is a demo of this technique. I'm also taking advantage of this video to show the concept of a view-object-level service method and how to invoke it from your page.

    P.S. You might wonder - why not just reference #{param.amount} as the value set for the method parameter? Why do I need to copy it into a viewScope parameter? The advantage of placing the value in the viewScope is that it is available even after the page has gone through several submits. For example, if you switch the "partialSubmit" property of the "Next" button to false in the above example, the minute you press the button to go to the next department the param.amount value is gone. However, the viewScope value is still there as long as you stay on this page.

    Read the article

  • How to deal with a valuable person going in all directions?

    - by JVerstry
    I am working with someone producing user content to be included in a software application. He is not a coder, but rather an expert in his field, sharing his knowledge. His contribution, taken piece by piece, is great, but he goes in all directions and has trouble producing work sequentially. He works on 25 pieces of content at the same time, and as soon as he reads something 'interesting', he wants to rewrite some of his stuff to improve its quality. He does not converge naturally. He collects tons of information and produces some valuable stuff, but in a completely unstructured way.

    We addressed this issue with him some time ago, and in order to try to solve it, we created a document with the 100 items he had to fill in. The problem is, it does not seem to work very well. How do you deal with such people and collect their information? I was thinking about a new technique: ask him to send his pieces, out of order, little by little, as soon as they are ready; keep a list of what remains to be done; and show him that list to give him direction.

    This situation is stressing the hell out of me. If his output were not good, I would not be trying so hard to make this work. If you have experience to share, it is welcome.

    Read the article

  • Parameterized Django models

    - by mgibsonbr
    In principle, a single Django application can be reused in two or more projects, providing functionality relevant to both. That implies that the same database structure (tables and relations) will be re-created identically in different databases, and most times this is not a problem (assuming the projects/databases are unrelated - for instance, when someone downloads a complete app to use in their own projects). Sometimes, however, the models must be "tweaked" a little to better fit the problem's needs. This can be accomplished by forking the app, but I wondered if there wouldn't be a better option in cases where the app designer can anticipate the most common customizations.

    For instance, if I have a model that could relate to another as one-to-one or one-to-many, I could specify the unique property as a parameter that can be set in the project's settings:

      class This(models.Model):
          other = models.ForeignKey(Other, unique=settings.OTHER_TO_THIS)

    Or if a model can relate to many others, I could create an intermediate table for each of them (thus enforcing referential integrity) instead of using generic FKs:

      for related in settings.MODELS_RELATED_TO_OTHER:
          model_name = '%s_Other' % related
          globals()[model_name] = type(model_name, (models.Model,), {
              me: models.ForeignKey(find_model_class(related)),
              other: models.ForeignKey(Other),
              # Some other properties all intersection tables must have
          })

    Etc. Let me stress that I'm not proposing to change the models at runtime, nor anything like that; once the parameters are defined and syncdb is called for the first time, those parameters are not to be changed again (unless you're doing a schema migration).

    Is this a good design? Are there better ways to accomplish the same thing, or maybe drawbacks I couldn't anticipate? This technique is meant to be used sparingly (only on apps meant to be reused in wildly different contexts, and only when a specific need for customization can be detected while the app's model is being designed).

    Read the article

  • Making particle bounce off a line with friction

    - by Dlaor
    So I'm making a game and I need a particle to bounce off a line. I've got this so far:

      public static Vector2f Reflect(this Vector2f vec, Vector2f axis) // vec is velocity
      {
          Vector2f result = vec - 2f * axis * axis.Dot(vec);
          return result;
      }

    Which works fine, but then I decided I wanted to be able to change the bounciness and friction of the bounce. I got bounciness down...

      // Bounciness goes from 0 to 1, 0 being not bouncy and 1 being perfectly bouncy
      public static Vector2f Reflect(this Vector2f vec, Vector2f axis, float bounciness)
      {
          var reflect = (1 + bounciness); // 2f
          Vector2f result = vec - reflect * axis * axis.Dot(vec);
          return result;
      }

    But when I tried to add friction, everything went to hell and back...

      // Does not work at all!
      public static Vector2f Reflect(this Vector2f vec, Vector2f axis, float bounciness, float friction)
      {
          var reflect = (1 + bounciness); // 2f
          Vector2f subtract = reflect * axis * axis.Dot(vec);
          Vector2f subtract2 = axis * axis.Dot(vec);
          Vector2f result = vec - subtract;
          result -= axis.PerpendicularLeft() * subtract2.Length() * friction;
          return result;
      }

    Any physics guys willing to help me out with this? (If you're not sure what I mean by the friction of a bounce, see this: http://www.metanetsoftware.com/technique/diagrams/A-1_particle_collision.swf)
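
    For the record, here is one standard way to get both effects (a sketch, not from the question; it assumes 'normal' is the unit-length surface normal, and reuses the question's Vector2f type and Dot() extension): split the velocity into normal and tangential components, flip and scale the normal part by the bounciness, and damp the tangential part by the friction:

      public static class BounceExtensions
      {
          // Assumes the question's Vector2f type and its Dot() extension,
          // and that 'normal' is normalized.
          public static Vector2f ReflectWithFriction(this Vector2f vec, Vector2f normal,
                                                     float bounciness, float friction)
          {
              // Component of the velocity along the surface normal...
              Vector2f normalPart = normal * normal.Dot(vec);
              // ...and the component sliding along the surface.
              Vector2f tangentPart = vec - normalPart;

              // Bounciness flips and scales the normal component;
              // friction damps the tangential one.
              return tangentPart * (1f - friction) - normalPart * bounciness;
          }
      }

    With bounciness = 1 and friction = 0, this reduces to vec - 2 * (vec · normal) * normal, i.e. the original Reflect.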

    Read the article

  • Hash Algorithm Randomness Visualization

    - by clstroud
    I'm curious if anyone here has any idea how the images were generated as shown in this response: Which hashing algorithm is best for uniqueness and speed? Ian posted a very well-received response, but I can't seem to understand how he went about making the images. I hate to make a new question dedicated to this, but I can't find any means to ask him more directly. On the other hand, perhaps someone has an alternative perspective.

    The best I can personally come up with would be something almost like a bar graph, which would illustrate how evenly the buckets of the hash table are being filled. I have a working Cocoa program that does this, but it can't generate anything like what he showed there. So the question is twofold, I suppose:

    A) How does one truly interpret the data he shows? Is it more than "less whitespace = better"?

    B) How does one generate such an image based on some set of inputs, a hash, and an index?

    Perhaps I'm misunderstanding entirely, but I really would like to know more about this particular visualization technique. Or maybe I'm misapplying this to hash tables rather than just hashes in general, but in that case I don't know how it would be "bounded" for the image.
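
    One plausible way to produce such an image (an assumption about the linked answer's method, not a confirmed description of it) is to hash a large set of keys and plot each hash as a pixel, taking the x and y coordinates from different bit ranges of the hash value; a good hash fills the square with even noise, while a poor one shows visible structure. A C# sketch:

      using System.Drawing; // System.Drawing.Common package on modern .NET

      public static class HashPlot
      {
          // Plots each key's hash into a 256x256 bitmap. FNV-1a and the
          // bit-slicing scheme below are illustrative choices.
          public static Bitmap Plot(string[] keys)
          {
              var bmp = new Bitmap(256, 256);
              foreach (string key in keys)
              {
                  uint h = Fnv1a(key);
                  int x = (int)(h & 0xFF);        // low 8 bits -> x
                  int y = (int)((h >> 8) & 0xFF); // next 8 bits -> y
                  bmp.SetPixel(x, y, Color.Black);
              }
              return bmp;
          }

          private static uint Fnv1a(string s)
          {
              uint hash = 2166136261; // FNV offset basis
              foreach (char c in s)
              {
                  hash ^= c;
                  hash *= 16777619; // FNV prime
              }
              return hash;
          }
      }

    Under this reading, "less whitespace" does mean better: clumps and stripes reveal correlations between input patterns and output bits, while uniform coverage means the buckets of a table indexed by those bits would fill evenly.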

    Read the article

  • Pure functional programming and game state

    - by Fu86
    Is there a common technique to handle state (in general) in a functional programming language? There are solutions in every (functional) programming language to handle global state, but I want to avoid that as far as I can. In a pure functional style, all state is passed as function parameters. So I would need to pass the whole game state (a gigantic hashmap with the world, players, positions, score, assets, enemies, ...) as a parameter to every function that wants to manipulate the world on a given input or trigger. The function itself picks the relevant information from the game-state blob, does something with it, manipulates the game state, and returns the new game state. But this looks like a poor man's solution to the problem. If I pass the whole game state to every function, there is no benefit for me over global variables or the imperative approach.

    Alternatively, I could pass just the relevant information into each function and return the actions to be taken for the given input, and have one single function apply all the actions to the game state. But most functions need a lot of "relevant" information: move() needs the object position, the velocity, the map for collision, the positions of all enemies, the current health, ... So this approach does not seem to work either.

    So my question is: how do I handle the massive amount of state in a functional programming language - especially for game development?
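
    For concreteness, here is a small sketch of the "world in, world out" shape described above (C# records standing in for immutable values; all names hypothetical). The usual refinement is exactly the split the question reaches for: each helper receives only the slice of state it needs, and one pure step function does the plumbing between the old and new world:

      public record Position(float X, float Y);
      public record Player(Position Pos, int Health, int Score);
      public record World(Player Player /*, map, enemies, assets, ... */);

      public static class Game
      {
          // Needs only the slice of state relevant to movement, not the whole world.
          private static Position Move(Position p, float dx, float dy) =>
              new Position(p.X + dx, p.Y + dy);

          // One pure function plumbs the slices: old world + input -> new world.
          public static World Step(World world, float dx, float dy)
          {
              Player p = world.Player;
              return world with { Player = p with { Pos = Move(p.Pos, dx, dy) } };
          }
      }

    Functional languages package this plumbing in various ways (lenses, the State monad, Clojure-style update-in), but the shape is the same: only the step function ever sees the whole state, and everything below it stays narrow and testable.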

    Read the article
