Search Results

Search found 5245 results on 210 pages for 'confused or too stupid'.


  • Odd company release cycle: Go Distributed Source Control?

    - by MrLane
    Sorry about this long post, but I think it is worth it! I have just started with a small .NET shop that operates quite a bit differently to other places that I have worked. Unlike any of my previous positions, the software written here is targeted at multiple customers and not every customer gets the latest release of the software at the same time. As such, there is no "current production version." When a customer does get an update, they also get all of the features added to the software since their last update, which could be a long time ago. The software is highly configurable and features can be turned on and off: so-called "feature toggles." Release cycles are very tight here; in fact, they are not on a schedule: when a feature is complete the software is deployed to the relevant customer. The team only last year moved from Visual Source Safe to Team Foundation Server. The problem is they still use TFS as if it were VSS and enforce checkout locks on a single code branch. Whenever a bug fix gets put out into the field (even for a single customer) they simply build whatever is in TFS, test that the bug was fixed, and deploy to the customer! (Coming from a pharma and medical devices software background myself, this is unbelievable!) The result is that half-baked dev code gets put into production without even being tested. Bugs are always slipping into release builds, but often a customer who just got a build will not see these bugs if they don't use the feature the bug is in. The director knows this is a problem as the company is starting to grow all of a sudden with some big clients coming on board and more smaller ones. I have been asked to look at source control options in order to eliminate deploying of buggy or unfinished code but not to sacrifice the somewhat asynchronous nature of the team's releases. I have used VSS, TFS, SVN and Bazaar in my career, but TFS is where most of my experience has been. Previously, most teams I have worked with used a two- or three-branch solution of Dev-Test-Prod, where for a month developers work directly in Dev and then changes are merged to Test then Prod, or promoted "when it's done" rather than on a fixed cycle. Automated builds were used, using either Cruise Control or Team Build. In my previous job Bazaar was used sitting on top of SVN: devs worked in their own small feature branches then pushed their changes to SVN (which was tied into TeamCity). This was nice in that it was easy to isolate changes and share them with other people's branches. With both of these models there was a central dev and prod (and sometimes test) branch through which code was pushed (and labels were used to mark builds in prod from which releases were made...and these were made into branches for bug fixes to releases and merged back to dev). This doesn't really suit the way of working here, however: there is no order to when various features will be released; they get pushed when they are complete. With this requirement the "continuous integration" approach as I see it breaks down. To get a new feature out with continuous integration it has to be pushed via dev-test-prod and that will capture any unfinished work in dev. I am thinking that to overcome this we should go down a heavily feature-branched model with NO dev-test-prod branches; rather, the source should exist as a series of feature branches which, when development work is complete, are locked, tested, fixed, locked, tested and then released.
Other feature branches can grab changes from other branches when they need/want, so eventually all changes get absorbed into everyone else's. This very much fits the pure Bazaar model from what I experienced at my last job. As flexible as this sounds, it just seems odd not to have a dev trunk or prod branch somewhere, and I am worried about branches forking never to re-integrate, or small late changes made that never get pulled across to other branches, and developers complaining about merge disasters... What are people's thoughts on this? A second, final question: I am somewhat confused about the exact definition of distributed source control: some people seem to suggest it is about just not having a central repository like TFS or SVN, some say it is about being disconnected (SVN is 90% disconnected and TFS has a perfectly functional offline mode) and others say it is about Feature Branching and ease of merging between branches with no parent-child relationship (TFS also has baseless merging!). Perhaps this is a second question!

    Read the article

  • How to drill down with jQuery?

    - by Timothy Reed
    I'm new to jQuery so sorry if this sounds stupid, but I'm having trouble drilling down to other elements. Particularly, I want to fade in the .menu li a:hover class with jQuery. .menu { padding:0; margin:0; list-style:none; } .menu li { float:left; margin-left:1px; } .menu li a { display:block; height:44px; line-height:40px; padding:0 5px; float:right; color:#fff; text-decoration:none; font-family:"Palatino Linotype", "Book Antiqua", Palatino, serif; font-size:12px; font-weight:bold; } .menu li a b { text-transform:uppercase; } .menu li a:hover { color:#E4FFC5; background: url(../images/arrow.png) no-repeat center bottom; } .current { background: url(../images/arrow.png) no-repeat center bottom; font-size:16px; font-weight:bold; } .spacer p { display:block; height:44px; line-height:40px; padding:0 5px; float:right; color:#fff; text-decoration:none; font-family:"Palatino Linotype", "Book Antiqua", Palatino, serif; font-size:12px; font-weight:bold; } <ul class="menu"> <li class="current"><a href="index.html">Home</a></li> <li class="spacer"> <p>|</p> </li> <li><a href="#">Mission &amp; Values </a></li> <li class="spacer"> <p>|</p> </li> <li><a href="#">Caregivers</a></li> <li class="spacer"> <p>|</p> </li> <li><a href="#">Special Programs </a></li> <li class="spacer"> <p>|</p> </li> <li><a href="#">Enployment</a></li> <li class="spacer"> <p>|</p> </li> <li><a href="#">Contact</a></li> </ul> <script type="text/javascript"> $(function() { $('a').mouseover(function() { $('.logo').animate ({opacity:'0.6'}, 'normal'); }); $('a').mouseout (function() { $('.logo').animate ({opacity:'1'}, 'normal'); $('.menu li a:hover').fadeIn ('slow'); }); }); </script>

    Read the article

  • Best way to store a large amount of game objects and update the ones onscreen

    - by user3002473
    Good afternoon guys! I'm a young beginner game developer working on my first large-scale game project and I've run into a situation where I'm not quite sure what the best solution may be (if there is a single solution). The question may be vague (if anyone can think of a better title after having read the question, please edit it) or broad, but I'm not quite sure what to do and I thought it would help just to discuss the problem with people more educated in the field. Before we get started, here are some of the questions I've looked at for help in the past: Best way to keep track of game objects Elegant way to simulate large amounts of entities within a game world What is the most efficient container to store dynamic game objects in? I've also read articles about different data structures commonly used in games to store game objects such as this one about slot maps, but none of them are really what I'm looking for. Also, if it helps at all, I'm using Python 3 to design the game. It has to be Python 3; if I could, I would use C++ or UnityScript or something else, but I'm restricted to having to use Python 3. My game will be a form of side-scrolling shooter game. In said game the player will traverse large rooms with large amounts of enemies and other game objects to update (think some of the larger areas in Cave Story or Iji). The player obviously can't see the entire room all at once, so there is a viewport that follows the player around and renders only a selection of the room and the game objects that it contains. This is not a foreign concept. The part that's getting me confused has to do with how certain game objects are updated. Some of them are to be updated constantly, regardless of whether or not they can be seen. Other objects, however, are only to be updated when they are onscreen (for example, an enemy would only be updated to react to the player when it is onscreen or when it is in a certain range of the screen). Another problem is that game objects have to be easily referable by other game objects; something that happens in the player's update() method may affect another object in the world. Collision detection in games is always a serious problem. I need a way of containing the game objects such that it minimizes the number of cases when testing for collisions against one another. The final problem is that of creating and destroying game objects. I think this problem is pretty self-explanatory. To store the game objects, then, I've considered a number of different methods. The original method I had was to simply store all the objects in a hash table by an id. This method was simple, and decently fast as it allows all the objects to be looked up in O(1) complexity, and also allows them to be deleted fairly easily. Hash collisions would not be a major problem: I wasn't originally planning on using computer-generated ids to store the game objects; I was going to rely on them all using ids given to them by the game designer (such names would be strings like 'Player' or 'EnemyWeapon4'), and even if I did use computer-generated ids, if I used a decent hashing algorithm then the chances of collisions would be around 1 in 4 billion. The problem with using a hash table, however, is that it is inefficient for checking which objects are in range of the viewport.
Considering the fact that certain game objects move (as well as the viewport itself), the only solution I could think of in order to only update objects that are in the viewport would be to iterate through every object in the hash table and check if it is in the viewport or not, updating only the ones that are in the valid area. This would be incredibly slow in scenarios where the number of game objects exceeds 500, or even 200. The second solution was to store everything in a 2-d list. The world is partitioned up into cells (a tilemap essentially), where each cell or tile is the same size and is square. Each cell would contain a list of the game objects that are currently occupying it (each game object would be inserted into a cell depending on the center of the object's collision mask). A 2-d list would allow me to take the top-left and bottom-right corners of the viewport and easily grab a rectangular area of the grid containing only the cells containing entities that are in valid range to be updated. This method also solves the problem of collision detection; when I take an entity I can find the cell that it is currently in, then check only against entities in its cell and the 8 cells around it. One problem with this system, however, is that it prohibits easy lookup of game objects. One solution I had would be to simultaneously keep a hash table that would contain all the positions of the objects in the 2-d list indexed by the id of said object. The major problem with a 2-d list is that it would need to be rebuilt every single game frame (along with the hash table of object positions), which may be a serious detriment to game speed. Both systems have ups and downs and seem to solve some of each other's problems; however, using them both together doesn't seem like the best solution either. If anyone has any thoughts, ideas, suggestions, comments, opinions or solutions on new data structures or better implementations of the existing data structures I have in mind, please post; any and all criticism and help is welcome. Thanks in advance! EDIT: Please don't close the question because it has a bad title; I'm just bad with names!
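
    For illustration, here is a minimal Python 3 sketch that combines the two ideas above: an id-to-object table for O(1) lookup plus a sparse grid keyed by cell coordinates for viewport and collision queries. All of the names are made up for the example; the point is that the grid does not need to be rebuilt every frame if it is only touched when an object crosses a cell boundary:

        from dataclasses import dataclass

        @dataclass
        class GameObject:
            id: str
            x: float
            y: float

        class World:
            def __init__(self, cell_size):
                self.cell_size = cell_size
                self.by_id = {}   # id -> object, O(1) lookup by designer-given id
                self.grid = {}    # (cx, cy) -> set of ids occupying that cell

            def _cell(self, x, y):
                return (int(x // self.cell_size), int(y // self.cell_size))

            def add(self, obj):
                self.by_id[obj.id] = obj
                self.grid.setdefault(self._cell(obj.x, obj.y), set()).add(obj.id)

            def move(self, obj, new_x, new_y):
                old, new = self._cell(obj.x, obj.y), self._cell(new_x, new_y)
                if old != new:  # only touch the grid when a cell boundary is crossed
                    self.grid[old].discard(obj.id)
                    if not self.grid[old]:
                        del self.grid[old]
                    self.grid.setdefault(new, set()).add(obj.id)
                obj.x, obj.y = new_x, new_y

            def in_rect(self, left, top, right, bottom):
                # yield every object whose cell overlaps the viewport rectangle
                x0, y0 = self._cell(left, top)
                x1, y1 = self._cell(right, bottom)
                for cx in range(x0, x1 + 1):
                    for cy in range(y0, y1 + 1):
                        for oid in self.grid.get((cx, cy), ()):
                            yield self.by_id[oid]

    Collision candidates fall out of the same structure: look up an object's cell and test only against the objects in that cell and the eight neighbouring cells.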

    Read the article

  • Actionscript 3.0 - Enemies do not move right in my platformer game

    - by Christian Basar
    I am making a side-scrolling platformer game in Flash (Actionscript 3.0). I have made lots of progress lately, but I have come across a new problem. I will give some background first. My game level's terrain (or 'floor') is referenced by a MovieClip variable called 'floor.' My desire is to have the Player and enemy characters walk along the terrain. I have gotten the Player character to move on the terrain just fine; he walks up/down hills and falls whenever there is no ground beneath him. Here is the code I created to allow the Player to follow the terrain correctly. Much more code is used to control the Player, but only this code deals with the Player character's following of the terrain and gravity. // If the Player's not on the ground (not touching the 'floor' MovieClip)... if (!onGround) { // Disable ducking downKeyPressed = false; // Increase the Player's 'y' position by his 'y' velocity player.y += playerYVel; } // Increase the 'playerYVel' variable so that the Player will fall // progressively faster down the screen. This code technically // runs "all the time" but in reality it only affects the player // when he's off the ground. playerYVel += gravity; // Give the Player a terminal velocity of 15 px/frame if (playerYVel > 15) { playerYVel = 15; } // If the Player has not hit the 'floor,' increase his falling //speed if (! floor.hitTestPoint(player.x, player.y, true)) { player.y += playerYVel; // The Player is not on the ground when he's not touching it onGround = false; } Since getting this code to work for the Player, I have created a 'SkullDemon' class, which is one of the planned enemies for my game. I want the 'SkullDemon' objects to move along the terrain like the Player does. With lots of great help, I have already coded the EventListeners, etc. necessary for the 'SkullDemons' to move. Unfortunately, I am having trouble getting them to move along the terrain. In fact, they do not touch the terrain at all; they move along the top of the boundary of the 'floor' MovieClip! I had a simple text diagram showing what I mean, but unfortunately Stackoverflow does not format it correctly. I hope my problem is clear from my description. Strangely enough, my code for the Player's movement and the 'SkullDemon's' movement is almost exactly the same, yet the 'SkullDemons' do not move like the Player does. Here is my code for the SkullDemon movement: // Move all of the Skull Demons using this method protected function moveSkullDemons():void { // Go through the whole 'skullDemonContainer' for (var skullDi:int = 0; skullDi < skullDemonContainer.numChildren; skullDi++) { // Set the SkullDemon 'instance' variable to equal the current SkullDemon skullDIns = SkullDemon(skullDemonContainer.getChildAt(skullDi)); // For now, just move the Skull Demons left at 5 units per second skullDIns.x -= 5; // If the Skull Demon has not hit the 'floor,' increase his falling //speed if (! floor.hitTestPoint(skullDIns.x, skullDIns.y, true)) { // Increase the Skull Demon's 'y' position by his 'y' velocity skullDIns.y += skullDIns.sdYVel; // The Skull Demon is not on the ground when he's not touching it skullDIns.sdOnGround = false; } // Increase the 'sdYVel' variable so that the Skull Demon will fall // progressively faster down the screen. This code technically // runs "all the time" but in reality it only affects the Skull Demon // when he's off the ground. if (! 
skullDIns.sdOnGround) { skullDIns.sdYVel += skullDIns.sdGravity; // Give the Skull Demon a terminal velocity of 15 px/frame if (skullDIns.sdYVel > 15) { skullDIns.sdYVel = 15; } } // What happens when the Skull Demon lands on the ground after a fall? // The Skull Demon is only on the ground ('onGround == true') when // the ground is touching the Skull Demon MovieClip's origin point, // which is at the Skull Demon's bottom centre for (var i:int = 0; i < 10; i++) { // The Skull Demon is only on the ground ('onGround == true') when // the ground is touching the Skull Demon MovieClip's origin point, // which is at the Skull Demon's bottom centre if (floor.hitTestPoint(skullDIns.x, skullDIns.y, true)) { skullDIns.y = skullDIns.y; // Set the Skull Demon's y-axis speed to 0 skullDIns.sdYVel = 0; // The Skull Demon is on the ground again skullDIns.sdOnGround = true; } } } } // End of 'moveSkullDemons()' function It is almost like the 'SkullDemons' are interacting with the 'floor' MovieClip using the hitTestObject() function, and not the hitTestPoint() function which is what I want, and which works for the Player character. I am confused about this problem and would appreciate any help you could give me. Thanks!

    Read the article

  • What is the best way to test using grails using IDEA?

    - by egervari
    I am seriously having a very non-pleasant time testing using Grails. I will describe my experience, and I'd like to know if there's a better way. The first problem I have with testing is that Grails doesn't give immediate feedback to the developer when .save() fails inside of an integration test. So let's say you have a domain class with 12 fields, and 1 of them is violating a constraint and you don't know it when you create the instance... it just doesn't save. Naturally, the test code afterward is going to fail. This is most troublesome because the thingy under test is probably fine... and the real risk and pain is the setup code for the test itself. So, I've tried to develop the habit of using .save(failOnError: true) to avoid this problem, but that's not something that can be easily enforced by everyone working on the project... and it's kind of bloaty. It'd be nice to turn this on for code that is running as part of a unit test automatically. Integration Tests run slow. I cannot understand how 1 integration test that saves 1 object takes 15-20 seconds to run. With some careful test planning, I've been able to get 1000 tests talking to an actual database and doing dbunit dumps after every test to happen in about the same time! This is dumb. It is hard to run all the unit tests and not integration tests in IDEA. Integration tests are a massive pain. Idea actually shows a GREEN BAR when integration tests fail. The output given by grails indicates that something failed, but it doesn't say what it was. It says to look in the test reports... which forces the developer to launch up their file system to hunt the stupid html file down. What a pain. Then once you got the html file and click to the failing test, it'll tell you a line number. Since these reports are not in the IDE, you can't just click the stack trace to go to that line of code... you gotta go back and find it yourself. ARGGH!@!@! Maybe people put up with this, but I refuse. Testing should not be this painful. It should be fast and painless, or people won't do it. Please help. What is the solution? Rails instead of Grails? Something else entirely? I love the Grails framework, but they never demo their testing for a reason. They have a snazzy framework, but the testing is painful. After having used Scala for the last 1.5 months, and being totally spoiled by ScalaTest... I can't go back to this.

    Read the article

  • Flash Media Server Streaming: Content Protection

    - by dbemerlin
    Hi, I have to implement Flash streaming for the relaunch of our video-on-demand system, but either because I haven't worked with Flash-related systems before or because I'm too stupid, I cannot get the system to work as it has to. We need: per-file & user access control with checks on a WebService every minute (if the lease time ran out mid-stream, the stream is cancelled); RTMP streaming; dynamic bandwidth checking; video playback with Flowplayer (existing license). I've got the streaming and bandwidth check working; I just can't seem to get the access control working. I have no idea how I know which file is played back or how I can play back a file depending on a key the user has entered. Server-Side Code (main.asc): application.onAppStart = function() { trace("Starting application"); this.payload = new Array(); for (var i=0; i < 1200; i++) { this.payload[i] = Math.random(); //16K approx } } application.onConnect = function( p_client, p_autoSenseBW ) { p_client.writeAccess = ""; trace("client at : " + p_client.uri); trace("client from : " + p_client.referrer); trace("client page: " + p_client.pageUrl); // try to get something from the query string: works var i = 0; for (i = 0; i < p_client.uri.length; ++i) { if (p_client.uri[i] == '?') { ++i; break; } } var loadVars = new LoadVars(); loadVars.decode(p_client.uri.substr(i)); trace(loadVars.toString()); trace(loadVars['foo']); // And accept the connection this.acceptConnection(p_client); trace("accepted!"); //this.rejectConnection(p_client); // A connection from Flash 8 & 9 FLV Playback component based client // requires the following code. if (p_autoSenseBW) { p_client.checkBandwidth(); } else { p_client.call("onBWDone"); } trace("Done connecting"); } application.onDisconnect = function(client) { trace("client disconnecting!"); } Client.prototype.getStreamLength = function(p_streamName) { trace("getStreamLength:" + p_streamName); return Stream.length(p_streamName); } Client.prototype.checkBandwidth = function() { application.calculateClientBw(this); } application.calculateClientBw = function(p_client) { /* lots of lines copied from an adobe sample, appear to work */ } Client-Side Code: <head> <script type="text/javascript" src="flowplayer-3.1.4.min.js"></script> </head> <body> <a class="rtmp" href="rtmp://xx.xx.xx.xx/vod_project/test_flv.flv" style="display: block; width: 520px; height: 330px" id="player"> </a> <script> $f( "player", "flowplayer-3.1.5.swf", { clip: { provider: 'rtmp', autoPlay: false, url: 'test_flv' }, plugins: { rtmp: { url: 'flowplayer.rtmp-3.1.3.swf', netConnectionUrl: 'rtmp://xx.xx.xx.xx/vod_project?foo=bar' } } } ); </script> </body> My first idea was to get a key from the query string, ask the web service which file and user that key is for, and play the file, but I can't seem to find out how to play a file from the server side. My second idea was to let Flowplayer play a file, pass the key as a query string and, if the filename and key don't match, reject the connection, but I can't seem to find out which file it's currently playing. The only remaining idea I have is to create a list of all files the user is allowed to open and set allowReadAccess (or however it is called) to allow those files, but that would be clumsy due to the current infrastructure. Any hints? Thanks.

    Read the article

  • Software development is (mostly) a trade, and what to do about it

    - by Jeff
    (This is another cross-post from my personal blog. I don’t even remember when I first started to write it, but I feel like my opinion is well enough baked to share.) I've been sitting on this for a long time, particularly as my opinion has changed dramatically over the last few years. That I've encountered more crappy code than maintainable, quality code in my career as a software developer only reinforces what I'm about to say. Software development is just a trade for most, and not a huge academic endeavor. For those of you with computer science degrees readying your pitchforks and collecting your algorithm interview questions, let me explain. This is not an assault on your way of life, and if you've been around, you know I'm right about the quality problem. You also know the HR problem is very real, or we wouldn't be paying top dollar for mediocre developers and importing people from all over the world to fill the jobs we can't fill. I'm going to try and outline what I see as some of the problems, and hopefully offer my views on how to address them. The recruiting problem I think a lot of companies are doing it wrong. Over the years, I've had two kinds of interview experiences. The first, and right, kind of experience involves talking about real life achievements, followed by some variation on white boarding in pseudo-code, drafting some basic system architecture, or even sitting down at a comprooder and pecking out some basic code to tackle a real problem. I can honestly say that I've had a job offer for every interview like this, save for one, because the task was to debug something and they didn't like me asking where to look ("everyone else in the company died in a plane crash"). The other interview experience, the wrong one, involves the classic torture test designed to make the candidate feel stupid and do things they never have, and never will do in their job. First they will question you about obscure academic material you've never seen, or don't care to remember. Then they'll ask you to white board some ridiculous algorithm involving prime numbers or some kind of string manipulation no one would ever do. In fact, if you had to do something like this, you'd Google for a solution instead of waste time on a solved problem. Some will tell you that the academic gauntlet interview is useful to see how people respond to pressure, how they engage in complex logic, etc. That might be true, unless of course you have someone who brushed up on the solutions to the silly puzzles, and they're playing you. But here's the real reason why the second experience is wrong: You're evaluating for things that aren't the job. These might have been useful tactics when you had to hire people to write machine language or C++, but in a world dominated by managed code in C#, or Java, people aren't managing memory or trying to be smarter than the compilers. They're using well known design patterns and techniques to deliver software. More to the point, these puzzle gauntlets don't evaluate things that really matter. They don't get into code design, issues of loose coupling and testability, knowledge of the basics around HTTP, or anything else that relates to building supportable and maintainable software. The first situation, involving real life problems, gives you an immediate idea of how the candidate will work out. One of my favorite experiences as an interviewee was with a guy who literally brought his work from that day and asked me how to deal with his problem. 
I had to demonstrate how I would design a class, make sure the unit testing coverage was solid, etc. I worked at that company for two years. So stop looking for algorithm puzzle crunchers, because a guy who can crush a Fibonacci sequence might also be a guy who writes a class with 5,000 lines of untestable code. Fashion your interview process on ways to reveal a developer who can write supportable and maintainable code. I would even go so far as to let them use the Google. If they want to cut-and-paste code, pass on them, but if they're looking for context or straight class references, hire them, because they're going to be life-long learners. The contractor problem I doubt anyone has ever worked in a place where contractors weren't used. The use of contractors seems like an obvious way to control costs. You can hire someone for just as long as you need them and then let them go. You can even give them the work that no one else wants to do. In practice, most places I've worked have retained and budgeted for the contractor year-round, meaning that the $90+ per hour they're paying (of which half goes to the person) would have been better spent on a full-time person with a $100k salary and benefits. But it's not even the cost that is an issue. It's the quality of work delivered. The accountability of a contractor is totally transient. They only need to deliver for as long as you keep them around, and chances are they'll never again touch the code. There's no incentive for them to get things right, there's little incentive to understand your system or learn anything. At the risk of making an unfair generalization, craftsmanship doesn't matter to most contractors. The education problem I don't know what they teach in college CS courses. I've believed for most of my adult life that a college degree was an essential part of being successful. Of course I would hold that bias, since I did it, and have the paper to show for it in a box somewhere in the basement. My first clue that maybe this wasn't a fully qualified opinion comes from the fact that I double-majored in journalism and radio/TV, not computer science. Eventually I worked with people who skipped college entirely, many of them at Microsoft. Then I worked with people who had a masters degree who sucked at writing code, next to the high school diploma types that rock it every day. I still think there's a lot to be said for the social development of someone who has the on-campus experience, but for software developers, college might not matter. As I mentioned before, most of us are not writing compilers, and we never will. It's actually surprising to find how many people are self-taught in the art of software development, and that should reveal some interesting truths about how we learn. The first truth is that we learn largely out of necessity. There's something that we want to achieve, so we do what I call just-in-time learning to meet those goals. We acquire knowledge when we need it. So what about the gaps in our knowledge? That's where the most valuable education occurs, via our mentors. They're the people we work next to and the people who write blogs. They are critical to our professional development. They don't need to be an encyclopedia of jargon, but they understand the craft. Even at this stage of my career, I probably can't tell you what SOLID stands for, but you can bet that I practice the principles behind that acronym every day. That comes from experience, augmented by my peers. 
I'm hell bent on passing that experience to others. Process issues If you're a manager type and don't do much in the way of writing code these days (shame on you for not messing around at least), then your job is to isolate your tradespeople from nonsense, while bringing your business into the realm of modern software development. That doesn't mean you slap up a white board with sticky notes and start calling yourself agile, it means getting all of your stakeholders to understand that frequent delivery of quality software is the best way to deal with change and evolving expectations. It also means that you have to play technical overlord to make sure the education and quality issues are dealt with. That's why I make the crack about sticky notes, because without the right technique being practiced among your code monkeys, you're just a guy with sticky notes. You're asking your business to accept frequent and iterative delivery, now make sure that the folks writing the code can handle the same thing. This means unit testing, the right instrumentation, integration tests, automated builds and deployments... all of the stuff that makes it easy to see when change breaks stuff. The prognosis I strongly believe that education is the most important part of what we do. I'm encouraged by things like The Starter League, and it's the kind of thing I'd love to see more of. I would go as far as to say I'd love to start something like this internally at an existing company. Most of all though, I can't emphasize enough how important it is that we mentor each other and share our knowledge. If you have people on your staff who don't want to learn, fire them. Seriously, get rid of them. A few months working with someone really good, who understands the craftsmanship required to build supportable and maintainable code, will change that person forever and increase their value immeasurably.

    Read the article

  • Monorail - Form submission using GET instead of POST

    - by Septih
    Hello, I'm writing some additions to a Castle MonoRail based site involving an Add and an Edit form. The add form works fine and uses POST but the edit form uses GET. The only major difference I can see is that the edit method is called with the Id of the object being edited in the query string. When the submit button is pressed on the edit form, the only argument passed is this object Id again. Here is the code for the edit form: <form action="edit.ashx" method="post"> <h3>Coupon Description</h3> <textarea name="comments" width="200">$comments</textarea> <br/><br/> <h3>Coupon Actions</h3> <br/> <div>Give Stories</div> <ul class="checklist" style="overflow:auto;height:144px;width:100%"> #foreach ($story in $stories.Values) <li> <label> #set ($associated = "") #foreach($storyId in $storyIds) #if($story.Id == $storyId) #set($associated = " checked='true'") #end #end <input type="checkbox" name="chk_${story.Id}" id="chk_${story.Id}" value="true" class="checkbox" $associated/> $story.Name </label> </li> #end </ul> <br/><br/> <div>Give Credit Amount</div> <input type="text" name="credit" value="$credit" /> <br/><br/> <h3>Coupon Uses</h3> <input type="checkbox" name="multi" #if($multi) checked="true" #end /> Multi-Use Coupon?<br/><br/> <b>OR</b> <br/> <br/> Number of Uses per Coupon: <input type="text" name="uses" value="$uses" /> <br/> <input type="submit" name="Save" /> </form> The differences between this and the add form is the velocity stuff to do with association and the values of the inputs being from the PropertyBag. The Method dealing with this on the controller starts like this: public void Edit(int id) { Coupon coupon = Coupon.GetRepository(User.Site.Id).FindById(id).Value; if(coupon == null) { RedirectToReferrer(); return; } IFutureQueryOfList<Story> stories = Story.GetRepository(User.Site.Id).OnlyReturnEnabled().FindAll("Name", true); if (Params["Save"] == null) { ... } } It reliably gets called but a breakpoint on the Params["Save"] lets me see that the HttpMethod is "GET" and the only arguments passed (In the Form and the Request) are the object Id and additional HTTP headers. At the end of the day I'm not that familiar with MonoRail and this may be a stupid mistake on my behalf, but I would very much appreciate being made a fool out of if it fixes the problem! :) Thanks

    Read the article

  • Most efficient way to implement delta time

    - by Starkers
    Here's one way to implement delta time: /// init /// var duration = 5000, currentTime = Date.now(); // and create cube, scene, camera etc. ////// function animate() { /// determine delta /// var now = Date.now(), deltat = now - currentTime, currentTime = now, scalar = deltat / duration, angle = (Math.PI * 2) * scalar; ////// /// animate /// cube.rotation.y += angle; ////// /// update /// requestAnimationFrame(render); ////// } Could someone confirm I know how it works? Here is what I think is going on: Firstly, we set duration at 5000, which is how long the loop will take to complete in an ideal world. With a computer that is slow/busy, let's say the animation loop takes twice as long as it should, so 10000: When this happens, the scalar is set to 2.0: scalar = deltat / duration scalar = 10000 / 5000 scalar = 2.0 We now multiply all animation by twice as much: angle = (Math.PI * 2) * scalar; angle = (Math.PI * 2) * 2.0; angle = (Math.PI * 4) // which is 2 rotations When we do this, the cube rotation will appear to 'jump', but this is good because the animation remains real-time. With a computer that is going too quickly, let's say the animation loop takes half as long as it should, so 2500: When this happens, the scalar is set to 0.5: scalar = deltat / duration scalar = 2500 / 5000 scalar = 0.5 We now multiply all animation by a half: angle = (Math.PI * 2) * scalar; angle = (Math.PI * 2) * 0.5; angle = (Math.PI * 1) // which is half a rotation When we do this, the cube won't jump at all, and the animation remains real time, and doesn't speed up. However, would I be right in thinking this doesn't alter how hard the computer is working? I mean it still goes through the loop as fast as it can, and it still has to render the whole scene, just with different smaller angles! So this is a bad way to implement delta time, right? Now let's pretend the computer is taking exactly as long as it should, so 5000: When this happens, the scalar is set to 1.0: angle = (Math.PI * 2) * scalar; angle = (Math.PI * 2) * 1; angle = (Math.PI * 2) // which is 1 rotation When we do this, everything is multiplied by 1, so nothing is changed. We'd get the same result if we weren't using delta time at all! My questions are as follows: Most importantly, have I got the right end of the stick here? How do we know to set the duration to 5000? Or can it be any number? I'm a bit vague about the "computer going too quickly". Is there a way to loop less often rather than reduce the animation steps? Seems like a better idea. Using this method, do all of our animations need to be multiplied by the scalar? Do we have to hunt down every last one and multiply it? Is this the best way to implement delta time? I think not, due to the fact the computer can go nuts and all we do is divide each animation step and because we need to hunt down every step and multiply it by the scalar. Not a very nice DSL, as it were. So what is the best way to implement delta time? Below is one way that I do not really get but may be a better way to implement delta time. Could someone explain please? // Globals INV_MAX_FPS = 1 / 60; frameDelta = 0; clock = new THREE.Clock(); // In the animation loop (the requestAnimationFrame callback)… frameDelta += clock.getDelta(); // API: "Get the seconds passed since the last call to this method." while (frameDelta >= INV_MAX_FPS) { update(INV_MAX_FPS); // calculate physics frameDelta -= INV_MAX_FPS; } How I think this works: Firstly we set INV_MAX_FPS to 0.01666666666. How we will use this number does not jump out at me.
We then initialize a frameDelta which stores how long the last loop took to run. Come the first loop, frameDelta is not greater than INV_MAX_FPS so the loop is not run (0 < 0.01666666666). So nothing happens. Now I really don't know what would cause this to happen, but let's pretend that the loop we just went through took 2 seconds to complete: We set frameDelta to 2: frameDelta += clock.getDelta(); frameDelta += 2.00 Now we run an animation thanks to update(0.01666666666). Again, what is the relevance of 0.01666666666?? And then we take away 0.01666666666 from the frameDelta: frameDelta -= INV_MAX_FPS; frameDelta = frameDelta - INV_MAX_FPS; frameDelta = 2 - 0.01666666666 frameDelta = 1.98333333334 So let's go into the second loop. Let's say it took 2 (? Why not 2? Or 12? I am a bit confused): frameDelta += clock.getDelta(); frameDelta = frameDelta + clock.getDelta(); frameDelta = 1.98333333334 + 2 frameDelta = 3.98333333334 This time we enter the while loop because 3.98333333334 >= 0.01666666666 We run update We take away 0.01666666666 from frameDelta again: frameDelta -= INV_MAX_FPS; frameDelta = frameDelta - INV_MAX_FPS; frameDelta = 3.98333333334 - 0.01666666666 frameDelta = 3.96666666668 Now let's pretend the loop is super quick and runs in just 0.1 seconds and continues to do this. (Because the computer isn't busy any more). Basically, the update function will be run, and every loop we take away 0.01666666666 from the frameDelta until the frameDelta is less than 0.01666666666. And then nothing happens until the computer runs slowly again? Could someone shed some light please? Does the update() update the scalar or something like that and we still have to multiply everything by the scalar like in the first example?
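
    For reference, the snippet above is the common fixed-timestep accumulator pattern; here is a minimal sketch of the same idea in Python (update and render are placeholders). INV_MAX_FPS, about 0.0167, is simply the length of one physics step in seconds; the inner while loop spends the banked real time in fixed-size slices, so a slow frame runs update() several times in a row and a fast frame may run it zero times until enough time has accumulated:

        import time

        STEP = 1.0 / 60.0          # one fixed physics step (the INV_MAX_FPS above)
        accumulator = 0.0          # real time not yet consumed by physics
        previous = time.perf_counter()

        def update(dt):
            pass                   # placeholder: advance the simulation by exactly dt seconds

        def render():
            pass                   # placeholder: draw the current state

        for frame in range(600):   # stand-in for the requestAnimationFrame loop
            now = time.perf_counter()
            accumulator += now - previous   # like frameDelta += clock.getDelta()
            previous = now
            while accumulator >= STEP:      # slow frame: several steps; fast frame: maybe none
                update(STEP)
                accumulator -= STEP
            render()

    Nothing here rescales the animation by a scalar; every update() call advances the simulation by exactly the same fixed amount, which is what keeps the physics deterministic regardless of frame rate.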

    Read the article

  • How is the gimbal locked problem solved using accumulative matrix transformations

    - by Luke San Antonio
    I am reading the online "Learning Modern 3D Graphics Programming" book by Jason L. McKesson As of now, I am up to the gimbal lock problem and how to solve it using quaternions. However right here, at the Quaternions page. Part of the problem is that we are trying to store an orientation as a series of 3 accumulated axial rotations. Orientations are orientations, not rotations. And orientations are certainly not a series of rotations. So we need to treat the orientation of the ship as an orientation, as a specific quantity. I guess this is the first spot I start to get confused, the reason is because I don't see the dramatic difference between orientations and rotations. I also don't understand why an orientation cannot be represented by a series of rotations... Also: The first thought towards this end would be to keep the orientation as a matrix. When the time comes to modify the orientation, we simply apply a transformation to this matrix, storing the result as the new current orientation. This means that every yaw, pitch, and roll applied to the current orientation will be relative to that current orientation. Which is precisely what we need. If the user applies a positive yaw, you want that yaw to rotate them relative to where they are current pointing, not relative to some fixed coordinate system. The concept, I understand, however I don't understand how if accumulating matrix transformations is a solution to this problem, how the code given in the previous page isn't just that. Here's the code: void display() { glClearColor(0.0f, 0.0f, 0.0f, 0.0f); glClearDepth(1.0f); glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glutil::MatrixStack currMatrix; currMatrix.Translate(glm::vec3(0.0f, 0.0f, -200.0f)); currMatrix.RotateX(g_angles.fAngleX); DrawGimbal(currMatrix, GIMBAL_X_AXIS, glm::vec4(0.4f, 0.4f, 1.0f, 1.0f)); currMatrix.RotateY(g_angles.fAngleY); DrawGimbal(currMatrix, GIMBAL_Y_AXIS, glm::vec4(0.0f, 1.0f, 0.0f, 1.0f)); currMatrix.RotateZ(g_angles.fAngleZ); DrawGimbal(currMatrix, GIMBAL_Z_AXIS, glm::vec4(1.0f, 0.3f, 0.3f, 1.0f)); glUseProgram(theProgram); currMatrix.Scale(3.0, 3.0, 3.0); currMatrix.RotateX(-90); //Set the base color for this object. glUniform4f(baseColorUnif, 1.0, 1.0, 1.0, 1.0); glUniformMatrix4fv(modelToCameraMatrixUnif, 1, GL_FALSE, glm::value_ptr(currMatrix.Top())); g_pObject->Render("tint"); glUseProgram(0); glutSwapBuffers(); } To my understanding, isn't what he is doing (modifying a matrix on a stack) considered accumulating matrices, since the author combined all the individual rotation transformations into one matrix which is being stored on the top of the stack. My understanding of a matrix is that they are used to take a point which is relative to an origin (let's say... the model), and make it relative to another origin (the camera). I'm pretty sure this is a safe definition, however I feel like there is something missing which is blocking me from understanding this gimbal lock problem. One thing that doesn't make sense to me is: If a matrix determines the difference relative between two "spaces," how come a rotation around the Y axis for, let's say, roll, doesn't put the point in "roll space" which can then be transformed once again in relation to this roll... In other words shouldn't any further transformations to this point be in relation to this new "roll space" and therefore not have the rotation be relative to the previous "model space" which is causing the gimbal lock. That's why gimbal lock occurs right? 
It's because we are rotating the object around set X, Y, and Z axes rather than rotating the object around its own relative axes. Or am I wrong? Since apparently the code I linked isn't an accumulation of matrix transformations, can you please give an example of a solution using this method? So in summary: What is the difference between a rotation and an orientation? Why is the code linked in not an example of accumulation of matrix transformations? What is the real, specific purpose of a matrix, if I had it wrong? How could a solution to the gimbal lock problem be implemented using accumulation of matrix transformations? Also, as a bonus: Why are the transformations after the rotation still relative to "model space?" Another bonus: Am I wrong in the assumption that after a transformation, further transformations will occur relative to the current? Also, if it wasn't implied, I am using OpenGL, GLSL, C++, and GLM, so examples and explanations in terms of these are greatly appreciated, if not necessary. The more detail the better! Thanks in advance...
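
    For a concrete picture of the distinction being asked about, here is a small sketch (in Python rather than C++/GLM, purely for brevity; the helper names are invented for the example). It contrasts rebuilding the matrix from accumulated fixed-axis angles every frame, which is what the gimbal code does, with storing the orientation matrix itself and applying each frame's small rotation to it:

        import math

        def rot_x(a):
            c, s = math.cos(a), math.sin(a)
            return [[1, 0, 0], [0, c, -s], [0, s, c]]

        def rot_y(a):
            c, s = math.cos(a), math.sin(a)
            return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

        def matmul(a, b):
            return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
                    for i in range(3)]

        # Euler-angle style: rebuild the whole matrix from accumulated fixed-axis
        # angles every frame. The yaw is always about the world Y axis, no matter
        # how the object is currently pitched; this is the gimbal hierarchy.
        pitch, yaw = 0.5, 0.0
        euler_orientation = matmul(rot_y(yaw), rot_x(pitch))

        # Accumulation style: keep the orientation matrix as the stored quantity
        # and apply each frame's small increment to it. Post-multiplying applies
        # the increment about the object's own current (local) axes.
        orientation = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]  # identity

        def apply_local_yaw(orientation, d_yaw):
            return matmul(orientation, rot_y(d_yaw))

        orientation = apply_local_yaw(orientation, 0.01)   # one frame's worth of yaw

    A matrix accumulated this way can drift numerically over many frames (one common reason quaternions are preferred), but it already removes the fixed X-then-Y-then-Z hierarchy that produces gimbal lock.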

    Read the article

  • What does the `forall` keyword in Haskell/GHC do?

    - by JUST MY correct OPINION
    I've been banging my head on this one for (quite literally) years now. I'm beginning to kinda/sorta understand how the forall keyword is used in so-called "existential types" like this: data ShowBox = forall s. Show s => SB s (This despite the confusingly-worded explanations of it in the fragments found all around the web.) This is only a subset, however, of how forall is used and I simply cannot wrap my mind around its use in things like this: runST :: forall a. (forall s. ST s a) -> a Or explaining why these are different: foo :: (forall a. a -> a) -> (Char,Bool) bar :: forall a. ((a -> a) -> (Char, Bool)) Or the whole RankNTypes stuff that breaks my brain when "explained" in a way that makes me want to do that Samuel L. Jackson thing from Pulp Fiction. (Don't follow that link if you're easily offended by strong language.) The problem, really, is that I'm a dullard. I can't fathom the chicken scratchings (some call them "formulae") of the elite mathematicians that created this language, seeing as my university years are over two decades behind me and I never actually had to put what I learnt into use in practice. I also tend to prefer clear, jargon-free English rather than the kinds of language which are normal in academic environments. Most of the explanations I attempt to read on this (the ones I can find through search engines) have these problems: They're incomplete. They explain one part of the use of this keyword (like "existential types") which makes me feel happy until I read code that uses it in a completely different way (like runST, foo and bar above). They're densely packed with assumptions that I've read the latest in whatever branch of discrete math, category theory or abstract algebra is popular this week. (If I never read the words "consult the paper whatever for details of implementation" again, it will be too soon.) They're written in ways that frequently turn even simple concepts into tortuously twisted and fractured grammar and semantics. (I suspect that the last two items are the biggest problem. I wouldn't know, though, since I'm too much a dullard to comprehend them.) It's been asked why Haskell never really caught on in industry. I suspect, in my own humble, unintelligent way, that my experience in figuring out one stupid little keyword -- a keyword that is increasingly ubiquitous in the libraries being written these days -- is also part of the answer to that question. It's hard for a language to catch on when even its individual keywords cause years-long quests to comprehend. Years-long quests which end in failure. So... On to the actual question. Can anybody completely explain the forall keyword in clear, plain English (or, if it exists somewhere, point to such a clear explanation which I've missed) that doesn't assume I'm a mathematician steeped in the jargon?

    Read the article

  • Efficient representation of Hierarchies in Hibernate.

    - by Alison G
    I'm having some trouble representing an object hierarchy in Hibernate. I've searched around, and haven't managed to find any examples doing this or similar - you have my apologies if this is a common question. I have two types which I'd like to persist using Hibernate: Groups and Items. * Groups are identified uniquely by a combination of their name and their parent. * The groups are arranged in a number of trees, such that every Group has zero or one parent Group. * Each Item can be a member of zero or more Groups. Ideally, I'd like a bi-directional relationship allowing me to get: * all Groups that an Item is a member of * all Items that are a member of a particular Group or its descendants. I also need to be able to traverse the Group tree from the top in order to display it on the UI. The basic object structure would ideally look like this: class Group { ... /** @return all items in this group and its descendants */ Set<Item> getAllItems() { ... } /** @return all direct children of this group */ Set<Group> getChildren() { ... } ... } class Item { ... /** @return all groups that this Item is a direct member of */ Set<Group> getGroups() { ... } ... } Originally, I had just made a simple bi-directional many-to-many relationship between Items and Groups, such that fetching all items in a group hierarchy required recursion down the tree, and fetching groups for an Item was a simple getter, i.e.: class Group { ... private Set<Item> items; private Set<Group> children; ... /** @return all items in this group and its descendants */ Set<Item> getAllItems() { Set<Item> allItems = new HashSet<Item>(); allItems.addAll(this.items); for(Group child : this.getChildren()) { allItems.addAll(child.getAllItems()); } return allItems; } /** @return all direct children of this group */ Set<Group> getChildren() { return this.children; } ... } class Item { ... private Set<Group> groups; /** @return all groups that this Item is a direct member of */ Set<Group> getGroups() { return this.groups; } ... } However, this resulted in multiple database requests to fetch the Items in a Group with many descendants, or for retrieving the entire Group tree to display in the UI. This seems very inefficient, especially with deeper, larger group trees. Is there a better or standard way of representing this relationship in Hibernate? Am I doing anything obviously wrong or stupid? My only other thought so far was this: Replace the group's id, parent and name fields with a unique "path" String which specifies the whole ancestry of a group, e.g.: /rootGroup /rootGroup/aChild /rootGroup/aChild/aGrandChild The join table between Groups and Items would then contain group_path and item_id. This immediately solves the two issues I was suffering previously: 1. The entire group hierarchy can be fetched from the database in a single query and reconstructed in-memory. 2. To retrieve all Items in a group or its descendants, we can select from group_item where group_path='N' or group_path like 'N/%' However, this seems to defeat the point of using Hibernate. All thoughts welcome!
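
    For what it's worth, here is a small sketch of the materialized-path idea described above, in plain Python rather than Hibernate, just to show why a single query is enough: the tree can be rebuilt in memory from the path strings, and the descendant test from the suggested SQL is a simple prefix match.

        def build_tree(paths):
            """Rebuild the Group hierarchy from the path strings returned by one query."""
            root = {}
            for path in paths:
                node = root
                for name in path.strip("/").split("/"):
                    node = node.setdefault(name, {})
            return root

        def is_self_or_descendant(path, ancestor):
            """Matches the SQL idea: group_path = :n OR group_path LIKE :n || '/%'."""
            return path == ancestor or path.startswith(ancestor + "/")

        rows = ["/rootGroup", "/rootGroup/aChild", "/rootGroup/aChild/aGrandChild"]
        print(build_tree(rows))
        # {'rootGroup': {'aChild': {'aGrandChild': {}}}}
        print(is_self_or_descendant("/rootGroup/aChild/aGrandChild", "/rootGroup/aChild"))
        # True

    The usual trade-off with this scheme is that renaming or moving a group means rewriting the paths of its entire subtree.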

    Read the article

  • NDepend Evaluation: Part 3

    - by Anthony Trudeau
    NDepend is a Visual Studio add-in designed for intense code analysis with the goal of high code quality. NDepend uses a number of metrics and aggregates the data in pleasing static and active visual reports. My evaluation of NDepend will be broken up into several different parts. In the first part of the evaluation I looked at installing the add-in.  And in the last part I went over my first impressions including an overview of the features.  In this installment I provide a little more detail on a few of the features that I really like. Dependency Matrix The dependency matrix is one of the rich visual components provided with NDepend.  At a glance it lets you know where you have coupling problems including cycles.  It does this with number indicating the weight of the dependency and a color-coding that indicates the nature of the dependency. Green and blue cells are direct dependencies (with the difference being whether the relationship is from row-to-column or column-to-row).  Black cells are the ones that you really want to know about.  These indicate that you have a cycle.  That is, type A refers to type B and type B also refers to Type A. But, that’s not the end of the story.  A handy pop-up appears when you hover over the cell in question.  It explains the color, the dependency, and provides several interesting links that will teach you more than you want to know about the dependency. You can double-click the problem cells to explode the dependency.  That will show the dependencies on a method-by-method basis allowing you to more easily target and fix the problem.  When you’re done you can click the back button on the toolbar. Dependency Graph The dependency graph is another component provided.  It’s complementary to the dependency matrix, but it isn’t as easy to identify dependency issues using the window. On a positive note, it does provide more information than the matrix. My biggest issue with the dependency graph is determining what is shown.  This was not readily obvious.  I ended up using the navigation buttons to get an acceptable view.  I would have liked to choose what I see. Once you see the types you want you can get a decent idea of coupling strength based on the width of the dependency lines.  Double-arrowed lines are problematic and are shown in red.  The size of the boxes will be related to the metric being displayed.  This is controlled using the Box Size drop-down in the toolbar.  Personally, I don’t find the size of the box to be helpful, so I change it to Constant Font. One nice thing about the display is that you can see the entire path of dependencies when you hover over a type.  This is done by color-coding the dependencies and dependants.  It would be nice if selecting the box for the type would lock the highlighting in place. I did find a perhaps unintended work-around to the color-coding.  You can lock the color-coding in by hovering over the type, right-clicking, and then clicking on the canvas area to clear the pop-up menu.  You can then do whatever with it including saving it to an image file with the color-coding. CQL NDepend uses a code query language (CQL) to work with your code just like it was a database.  CQL cannot be confused with the robustness of T-SQL or even LINQ, but it represents an impressive attempt at providing an expressive way to enumerate and interrogate your code. There are two main windows you’ll use when working with CQL.  
The CQL Query Explorer allows you to define what queries (rules) are run as part of a report – I immediately unselected rules that I don’t want in my results.  The CQL Query Edit window is where you can view or author your own rules.  The explorer window is pretty self-explanatory, so I won’t mention it further other than to say that any queries you author will appear in the custom group. Authoring your own queries is really hard to screw-up.  The Intellisense-like pop-ups tell you what you can do while making composition easy.  I was able to create a query within two minutes of playing with the editor.  My query warns if any types that are interfaces don’t start with an “I”. WARN IF Count > 0 IN SELECT TYPES WHERE IsInterface AND !NameLike “I” The results from the CQL Query Edit window are immediate. That fact makes it useful for ad hoc querying.  It’s worth mentioning two things that could make the experience smoother.  First, out of habit from using Visual Studio I expect to be able to scroll and press Tab to select an item in the list (like Intellisense).  You have to press Enter when you scroll to the item you want.  Second, the commands are case-sensitive.  I don’t see a really good reason to enforce that. CQL has a lot of potential not just in enforcing code quality, but also enforcing architectural constraints that your enterprise has defined. Up Next My next update will be the final part of the evaluation.  I will summarize my experience and provide my conclusions on the NDepend add-in. ** View Part 1 of the Evaluation ** ** View Part 2 of the Evaluation ** Disclaimer: Patrick Smacchia contacted me about reviewing NDepend. I received a free license in return for sharing my experiences and talking about the capabilities of the add-in on this site. There is no expectation of a positive review elicited from the author of NDepend.

    Read the article

  • ERROR: Failed to build gem native extension (mysql2 on rails 3.2.3)

    - by Ryan Arneson
    I'm trying to install the mysql2 gem with Rails 3.2.3 and it's failing: ? bundle install Fetching gem metadata from https://rubygems.org/......... Using rake (0.9.2.2) Using i18n (0.6.0) Using multi_json (1.2.0) Using activesupport (3.2.3) Using builder (3.0.0) Using activemodel (3.2.3) Using erubis (2.7.0) Using journey (1.0.3) Using rack (1.4.1) Using rack-cache (1.2) Using rack-test (0.6.1) Using hike (1.2.1) Using tilt (1.3.3) Using sprockets (2.1.2) Using actionpack (3.2.3) Using mime-types (1.18) Using polyglot (0.3.3) Using treetop (1.4.10) Using mail (2.4.4) Using actionmailer (3.2.3) Using arel (3.0.2) Using tzinfo (0.3.32) Using activerecord (3.2.3) Using activeresource (3.2.3) Using bundler (1.1.3) Using coffee-script-source (1.2.0) Using execjs (1.3.0) Using coffee-script (2.2.0) Using rack-ssl (1.3.2) Using json (1.6.6) Using rdoc (3.12) Using thor (0.14.6) Using railties (3.2.3) Using coffee-rails (3.2.2) Using jquery-rails (2.0.2) Installing mysql2 (0.3.11) with native extensions Gem::Installer::ExtensionBuildError: ERROR: Failed to build gem native extension. /Users/rarneson/.rvm/rubies/ruby-1.9.3-p125/bin/ruby extconf.rb checking for rb_thread_blocking_region()... yes checking for rb_wait_for_single_fd()... yes checking for mysql_query() in -lmysqlclient... no checking for main() in -lm... yes checking for mysql_query() in -lmysqlclient... no checking for main() in -lz... yes checking for mysql_query() in -lmysqlclient... no checking for main() in -lsocket... no checking for mysql_query() in -lmysqlclient... no checking for main() in -lnsl... no checking for mysql_query() in -lmysqlclient... no checking for main() in -lmygcc... no checking for mysql_query() in -lmysqlclient... no *** extconf.rb failed *** Could not create Makefile due to some reason, probably lack of necessary libraries and/or headers. Check the mkmf.log file for more details. You may need configuration options. Provided configuration options: --with-opt-dir --with-opt-include --without-opt-include=${opt-dir}/include --with-opt-lib --without-opt-lib=${opt-dir}/lib --with-make-prog --without-make-prog --srcdir=. --curdir --ruby=/Users/rarneson/.rvm/rubies/ruby-1.9.3-p125/bin/ruby --with-mysql-config --without-mysql-config --with-mysql-dir --without-mysql-dir --with-mysql-include --without-mysql-include=${mysql-dir}/include --with-mysql-lib --without-mysql-lib=${mysql-dir}/lib --with-mysqlclientlib --without-mysqlclientlib --with-mlib --without-mlib --with-mysqlclientlib --without-mysqlclientlib --with-zlib --without-zlib --with-mysqlclientlib --without-mysqlclientlib --with-socketlib --without-socketlib --with-mysqlclientlib --without-mysqlclientlib --with-nsllib --without-nsllib --with-mysqlclientlib --without-mysqlclientlib --with-mygcclib --without-mygcclib --with-mysqlclientlib --without-mysqlclientlib Gem files will remain installed in /Users/rarneson/.rvm/gems/ruby-1.9.3-p125/gems/mysql2-0.3.11 for inspection. Results logged to /Users/rarneson/.rvm/gems/ruby-1.9.3-p125/gems/mysql2-0.3.11/ext/mysql2/gem_make.out An error occured while installing mysql2 (0.3.11), and Bundler cannot continue. Make sure that `gem install mysql2 -v '0.3.11'` succeeds before bundling. I'm running bundle install and this is in my Gemfile: gem 'mysql2', '~> 0.3.11' I've currently got MySQL running through MAMP. I'm not sure if this is a bad idea and I should run a vanilla MySQl but it seems my current problem is just getting the gem installed. 
I've seen quite a few of these problems here on Stack Overflow, but they all seem a bit different or have really complicated solutions. Is there something I'm missing? Something simple? Something stupid? I can provide additional info from the gem_make.out file if necessary. I've read that some people use SQLite for dev and test and then MySQL in prod, but that sounds like a pretty horrible idea.
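A common cause when MySQL comes from MAMP is that the gem's extconf.rb cannot find the client library and headers on its own. A minimal sketch of one way to point it at them, assuming MAMP's default install location (adjust the path to wherever your mysql_config actually lives):

    # point the gem at MAMP's bundled MySQL client
    gem install mysql2 -v '0.3.11' -- --with-mysql-config=/Applications/MAMP/Library/bin/mysql_config

    # or let Bundler remember the flag so plain `bundle install` works
    bundle config build.mysql2 --with-mysql-config=/Applications/MAMP/Library/bin/mysql_config
    bundle install

The --with-mysql-config option is one of the configuration options already listed in the error output above, so nothing extra is needed beyond the path; the MAMP location itself is the only assumption here.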

    Read the article

  • SQL SERVER – Backing Up and Recovering the Tail End of a Transaction Log – Notes from the Field #042

    - by Pinal Dave
    [Notes from Pinal]: The biggest challenge people face is not taking a backup; the biggest challenge is restoring a backup successfully. I have seen many examples where users failed to restore their database because they made a mistake while taking the backup and were not aware of it. The tail log backup was such an issue in earlier versions of SQL Server, but in the latest version the Microsoft team has addressed the confusion with additional information right on the backup and restore screens. Now that this information is there, a few more people are confused because they have no clue what it means: previously they did not see this as an issue, and now they are encountering the tail log as something new to learn. Linchpin People are database coaches and wellness experts for a data driven world. In this 42nd episode of the Notes from the Field series, database expert Tim Radney (partner at Linchpin People) explains, in very simple words, backing up and recovering the tail end of a transaction log. Many times when restoring a database over an existing database, SQL Server will warn you about needing to take a tail-of-the-log backup. This might be your reminder that you have to choose to overwrite the database, or it could be your reminder that you are about to write over and lose any transactions since the last transaction log backup. You might be asking yourself, “What is the tail end of the transaction log?” The tail end of the transaction log is simply any committed transactions that have occurred since the last transaction log backup. This is a very crucial part of a recovery strategy if you are lucky enough to be able to capture this part of the log. Most organizations have chosen to accept some amount of data loss. You might be shaking your head at this statement; however, if your organization is taking transaction log backups every 15 minutes, then your potential data loss is up to 15 minutes. Depending on the extent of the issue causing you to perform a restore, you may or may not have access to the transaction log (LDF) to be able to back up those vital transactions. For example, if the storage array or disk that holds your transaction log file becomes corrupt or damaged, then you wouldn’t be able to recover the tail end of the log. If you do have access to the physical log file, then you can still back up the tail end of the log. In 2013 I presented a session at the PASS Summit called “The Ultimate Tail Log Backup and Restore” and have been invited back this year to present it again. During this session I demonstrate how you can back up the tail end of the log even after the data file becomes corrupt. In my demonstration I set my database offline and then delete the data file (MDF). The database can’t become more corrupt than that. I attempt to bring the database back online, which changes its state to RECOVERY PENDING, and then back up the tail end of the log. I can do this by specifying WITH NO_TRUNCATE. Using NO_TRUNCATE is equivalent to specifying both COPY_ONLY and CONTINUE_AFTER_ERROR; as its name says, it does not try to truncate the log. This is a great demo, but how could I back up the tail end of the log if the failure destroys my entire SQL Server instance and all I had was the LDF file? During my demonstration I also show that I can attach the log file to a database on another instance and then back up the tail end of the log.
If I am performing proper backups, then my most recent full, differential, and log backups should be on a server other than the one that crashed. I am able to achieve this task by creating a new database with the same name as the failed database. I then set the database offline, delete my data file, and overwrite the log with my good log file. I attempt to bring the database back online and then back up the log with NO_TRUNCATE, just like in the first example. I encourage each of you to view my blog post and watch the video demonstration of how to perform these tasks. I really hope that none of you ever have to perform this in production; however, it is a really good idea to know how to do it, just in case. It really isn’t a matter of “IF” you will have to perform a restore of a production system but more a matter of “WHEN”. Being able to recover the tail end of the log in these severe cases could be the difference between having to notify all your business customers of data loss and not. If you want me to take a look at your server and its settings, or if your server is facing any issue, we can Fix Your SQL Server. Note: Tim has also written an excellent book on SQL backup and recovery, a must-have for everyone. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
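For reference, the tail-log backup described above boils down to a single BACKUP LOG statement. A minimal sketch; the database name and backup path are placeholders, not taken from Tim's demo:

    -- Back up the tail of the log even though the data file (MDF) is gone.
    -- Per the article, NO_TRUNCATE behaves like COPY_ONLY plus CONTINUE_AFTER_ERROR.
    BACKUP LOG [YourDatabase]
    TO DISK = N'D:\Backups\YourDatabase_tail.trn'
    WITH NO_TRUNCATE, INIT;

Once the instance is rebuilt (or on another server), that .trn file is restored last, after the full and differential backups, to bring the database back to the point of failure.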

    Read the article

  • "Imprinting" as a language feature?

    - by MKO
    Idea: I had this idea for a language feature that I think would be useful. Does anyone know of a language that implements something like this? The idea is that besides inheritance a class can also use something called "imprinting" (for lack of a better term). A class can imprint one or several (non-abstract) classes. When a class imprints another class it gets all its properties and all its methods. It's like the class storing an instance of the imprinted class and redirecting its methods/properties to it. A class that imprints another class therefore by definition also implements all its interfaces and its abstract class. So what's the point? Well, inheritance and polymorphism are hard to get right. Often composition gives far more flexibility. Multiple inheritance offers a slew of different problems without much benefit (IMO). I often write adapter classes (in C#) by implementing some interface and passing along the actual methods/properties to an encapsulated object. The downside to that approach is that if the interface changes, the class breaks. You also have to put in a lot of code that does nothing but pass things along to the encapsulated object. A classic example is a class that implements IEnumerable or IList and contains an internal class it uses. With this technique things would be much easier. Example (C#): [imprint List<Person> as peopleList] public class People : PersonBase { public void SomeMethod() { DoSomething(this.Count); //Count is from List } } //Now People can be treated as a List<Person> People people = new People(); foreach(Person person in people) { ... } peopleList is an alias/variable name (of your choice) used internally to alias the instance, but it can be skipped if not needed. One thing that's useful is to override an imprinted method; that could be achieved with the ordinary override syntax: public override void Add(Person person) { DoSomething(); personList.Add(person); } Note that the above is functionally equivalent (and could be rewritten by the compiler) to: public class People : PersonBase , IList<Person> { private List<Person> personList = new List<Person>(); public override void Add(object obj) { this.personList.Add(obj) } public override int IndexOf(object obj) { return personList.IndexOf(obj) } //etc etc for each signature in the interface } Only if IList changes will your class break. IList won't change, but an interface that you, someone on your team, or a third party has designed might just change. Also, this saves you writing a whole lot of code for some interfaces/abstract classes. Caveats: There are a couple of gotchas. First, syntax must be added to call the imprinted classes' constructors from the imprinting class constructor. Also, what happens if a class imprints two classes which have the same method? In that case the compiler would detect it and force the class to define an override of that method (where you could choose whether to call either imprinted class or both). So what do you think, would it be useful, any caveats? It seems it would be pretty straightforward to implement something like that in the C# language, but I might be missing something :) Sidenote - Why is this different from multiple inheritance? OK, so some people have asked about this: why is this different from multiple inheritance, and why not use multiple inheritance? In C# methods are either virtual or not. Say that we have ClassB, which inherits from ClassA. ClassA has the methods MethodA and MethodB. ClassB overrides MethodA but not MethodB.
Now say that MethodB has a call to MethodA. If MethodA is virtual, it will call the implementation that ClassB has; if not, it will use the base class ClassA's MethodA, and you'll end up wondering why your class doesn't work as it should. With the terminology so far you might already be confused. So what happens if ClassB inherits both from ClassA and from another ClassC? I bet both programmers and compilers will be scratching their heads. The benefit of this approach, IMO, is that the imprinted classes are totally encapsulated and need not be designed with multiple inheritance in mind. You can basically imprint anything.

    Read the article

  • How to get the <body> element from HTML which one has as a string

    - by Oleg
    Hello! I have a stupid problem. An jQuery.ajax request return me a full HTML text as a string. I receive such response in an case of error on the server. The server give me an error description which I want to place inside of the corresponding place of my current page. So now the question: I have a string contains full HTML document (which is not an XML!!! see <hr> element inside). I need to have for example only BODY part as a jQuery object. Then I could append it to the corresponding part of my page. Here is an example of the string which I need to parse: <html> <head> <title>The resource cannot be found.</title> <style> body {font-family:"Verdana";font-weight:normal;font-size: .7em;color:black;} p {font-family:"Verdana";font-weight:normal;color:black;margin-top: -5px} // ... </style> </head> <body bgcolor="white"> <span><H1>Server Error in '/' Application.<hr width=100% size=1 color=silver></H1> <h2> <i>The resource cannot be found.</i> </h2></span> <font face="Arial, Helvetica, Geneva, SunSans-Regular, sans-serif "> <b> Description: </b>HTTP 404. The resource you are looking for ...bla bla.... <br><br> <b> Requested URL: </b>/ImportBPImagesInfos/Repository.svc/GetFullProfilimageSw<br><br> <hr width=100% size=1 color=silver> <b>Version Information:</b>&nbsp;Microsoft .NET Framework Version:4.0.30319; ASP.NET Version:4.0.30319.1 </font> </body> </html> <!-- [HttpException]: A public action method &#39;.... at System.Web.Mvc.Controller.HandleUnknownAction(String actionName) at System.Web.Mvc.Controller.ExecuteCore() at System.Web.Mvc.ControllerBase.Execute(RequestContext requestContext) at System.Web.Mvc.ControllerBase.System.Web.Mvc.IController.Execute(RequestContext requestContext) at System.Web.Mvc.MvcHandler.<>c__DisplayClass8.<BeginProcessRequest>b__4() at System.Web.Mvc.Async.AsyncResultWrapper.<>c__DisplayClass1.<MakeVoidDelegate>b__0() at System.Web.Mvc.Async.AsyncResultWrapper.<>c__DisplayClass8`1.<BeginSynchronous>b__7(IAsyncResult _) at System.Web.Mvc.Async.AsyncResultWrapper.WrappedAsyncResult`1.End() at System.Web.Mvc.Async.AsyncResultWrapper.End[TResult](IAsyncResult asyncResult, Object tag) at System.Web.Mvc.Async.AsyncResultWrapper.End(IAsyncResult asyncResult, Object tag) at System.Web.Mvc.MvcHandler.EndProcessRequest(IAsyncResult asyncResult) at System.Web.Mvc.MvcHandler.System.Web.IHttpAsyncHandler.EndProcessRequest(IAsyncResult result) at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) -->
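One workaround is to cut the body content out of the string before handing it to jQuery, since jQuery drops the html/head/body wrapper when it parses a full document. A minimal sketch, assuming the response text is in a variable called html; the container id is illustrative, not from the question:

    // keep only what sits between <body ...> and </body>, then wrap it
    var bodyHtml = html
        .replace(/^[\s\S]*<body[^>]*>/i, '')
        .replace(/<\/body>[\s\S]*$/i, '');
    var $body = $('<div>').html(bodyHtml);      // jQuery object holding the body content
    $('#error-area').append($body.contents());  // place it in the current page

The regular expressions are deliberately crude; they only assume the document has a single <body> element, which holds for the ASP.NET error page shown above (including the HTML comment that follows </html>).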

    Read the article

  • Beginner question: ArrayList can't seem to get it right! Pls help

    - by elementz
    I have been staring at this code all day now, but just can't get it right. ATM I am just pushing codeblocks around without being able to concentrate anymore, with the due time being within almost an hour... So you guys are my last resort here. I am supposed to create a few random balls on a canvas, those balls being stored within an ArrayList (I hope an ArrayList is suitable here: the alternative options to choose from were HashSet and HashMap). Now, whatever I do, I get the differently colored balls at the top of my canvas, but they just get stuck there and refuse to move at all. Apart from that I now get a ConcurrentModificationException, when running the code: java.util.ConcurrentModificationException at java.util.AbstractList$Itr.checkForComodification(AbstractList.java:372) at java.util.AbstractList$Itr.next(AbstractList.java:343) at BallDemo.bounce(BallDemo.java:109) Reading up on that exception, I found out that one can make sure ArrayList is accessed in a threadsafe manner by somehow synchronizing access. But since I have remember fellow students doing without synchronizing, my guess is, that it would actually be the wrong path to go. Maybe you guys could help me get this to work, I at least need those stupid balls to move ;-) /** * Simulate random bouncing balls */ public void bounce(int count) { int ground = 400; // position of the ground line System.out.println(count); myCanvas.setVisible(true); // draw the ground myCanvas.drawLine(50, ground, 550, ground); // Create an ArrayList of type BouncingBalls ArrayList<BouncingBall>balls = new ArrayList<BouncingBall>(); for (int i = 0; i < count; i++){ Random numGen = new Random(); // Creating a random color. Color col = new Color(numGen.nextInt(256), numGen.nextInt(256), numGen.nextInt(256)); // creating a random x-coordinate for the balls int ballXpos = numGen.nextInt(550); BouncingBall bBall = new BouncingBall(ballXpos, 80, 20, col, ground, myCanvas); // adding balls to the ArrayList balls.add(bBall); bBall.draw(); boolean finished = false; } for (BouncingBall bBall : balls){ bBall.move(); } } This would be the original unmodified method we got from our teacher, which only creates two balls: /** * Simulate two bouncing balls */ public void bounce() { int ground = 400; // position of the ground line myCanvas.setVisible(true); myCanvas.drawLine(50, ground, 550, ground); // draw the ground // crate and show the balls BouncingBall ball = new BouncingBall(50, 50, 16, Color.blue, ground, myCanvas); ball.draw(); BouncingBall ball2 = new BouncingBall(70, 80, 20, Color.red, ground, myCanvas); ball2.draw(); // make them bounce boolean finished = false; while(!finished) { myCanvas.wait(50); // small delay ball.move(); ball2.move(); // stop once ball has travelled a certain distance on x axis if(ball.getXPosition() >= 550 && ball2.getXPosition() >= 550) { finished = true; } } ball.erase(); ball2.erase(); } }
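One way to get the balls moving without touching the list from two places at once is to build the list completely first and then animate everything in a single loop, the same shape as the teacher's two-ball version. A minimal sketch, reusing the BouncingBall and Canvas methods from the course project:

    // animate every ball until all of them have crossed x = 550
    boolean finished = false;
    while (!finished) {
        myCanvas.wait(50);                  // small delay between frames
        finished = true;
        for (BouncingBall ball : balls) {   // safe: nothing is added or removed here
            ball.move();
            if (ball.getXPosition() < 550) {
                finished = false;           // at least one ball is still in flight
            }
        }
    }
    for (BouncingBall ball : balls) {
        ball.erase();
    }

The ConcurrentModificationException only appears when the list is structurally modified (add/remove) while an iterator over the same list is still in use, so filling the list before the animation loop starts is usually enough to make it go away.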

    Read the article

  • ADF Reusable Artefacts

    - by Arda Eralp
    Primary reusable ADF Business Component: Entity Objects (EOs) View Objects (VOs) Application Modules (AMs) Framework Extensions Classes Primary reusable ADF Controller: Bounded Task Flows (BTFs) Task Flow Templates Primary reusable ADF Faces: Page Templates Skins Declarative Components Utility Classes Certain components will often be used more than once. Whether the reuse happens within the same application, or across different applications, it is often advantageous to package these reusable components into a library that can be shared between different developers, across different teams, and even across departments within an organization. In the world of Java object-oriented programming, reusing classes and objects is just standard procedure. With the introduction of the model-view-controller (MVC) architecture, applications can be further modularized into separate model, view, and controller layers. By separating the data (model and business services layers) from the presentation (view and controller layers), you ensure that changes to any one layer do not affect the integrity of the other layers. You can change business logic without having to change the UI, or redesign the web pages or front end without having to recode domain logic. Oracle ADF and JDeveloper support the MVC design pattern. When you create an application in JDeveloper, you can choose many application templates that automatically set up data model and user interface projects. Because the different MVC layers are decoupled from each other, development can proceed on different projects in parallel and with a certain amount of independence. ADF Library further extends this modularity of design by providing a convenient and practical way to create, deploy, and reuse high-level components. When you first design your application, you design it with component reusability in mind. If you created components that can be reused, you can package them into JAR files and add them to a reusable component repository. If you need a component, you may look into the repository for those components and then add them into your project or application. For example, you can create an application module for a domain and package it to be used as the data model project in several different applications. Or, if your application will be consuming components, you may be able to load a page template component from a repository of ADF Library JARs to create common look and feel pages. Then you can put your page flow together by stringing together several task flow components pulled from the library. An ADF Library JAR contains ADF components and does not, and cannot, contain other JARs. It should not be confused with the JDeveloper library, Java EE library, or Oracle WebLogic shared library. Reusable Component Description Data Control Any data control can be packaged into an ADF Library JAR. Some of the data controls supported by Oracle ADF include application modules, Enterprise JavaBeans, web services, URL services, JavaBeans, and placeholder data controls. Application Module When you are using ADF Business Components and you generate an application module, an associated application module data control is also generated. When you package an application module data control, you also package up the ADF Business Components associated with that application module. The relevant entity objects, view objects, and associations will be a part of the ADF Library JAR and available for reuse. 
Business Components Business components are the entity objects, view objects, and associations used in the ADF Business Components data model project. You can package business components by themselves or together with an application module. Task Flows & Task Flow Templates Task flows can be packaged into an ADF Library JAR for reuse. If you drop a bounded task flow that uses page fragments, JDeveloper adds a region to the page and binds it to the dropped task flow. ADF bounded task flows built using pages can be dropped onto pages. The drop will create a link to call the bounded task flow. A task flow call activity and control flow will automatically be added to the task flow, with the view activity referencing the page. If there is more than one existing task flow with a view activity referencing the page, it will prompt you to select the one to automatically add a task flow call activity and control flow. If an ADF task flow template was created in the same project as the task flow, the ADF task flow template will be included in the ADF Library JAR and will be reusable. Page Templates You can package a page template and its artifacts into an ADF Library JAR. If the template uses image files and they are included in a directory within your project, these files will also be available for the template during reuse. Declarative Components You can create declarative components and package them for reuse. The tag libraries associated with the component will be included and loaded into the consuming project. You can also package up projects that have several different reusable components if you expect that more than one component will be consumed. For example, you can create a project that has both an application module and a bounded task flow. When this ADF Library JAR file is consumed, the application will have both the application module and the task flow available for use. You can package multiple components into one JAR file, or you can package a single component into a JAR file. Oracle ADF and JDeveloper give you the option and flexibility to create reusable components that best suit you and your organization. You create a reusable component by using JDeveloper to package and deploy the project that contains the components into a ADF Library JAR file. You use the components by adding that JAR to the consuming project. At design time, the JAR is added to the consuming project's class path and so is available for reuse. At runtime, the reused component runs from the JAR file by reference.

    Read the article

  • Imitating Mouse click - point with known coordinates on a fusion table layer - google maps

    - by Yavor Tashev
    I have been making a script using a fusion table's layer in google maps. I am using geocoder and get the coordinates of a point that I need. I put a script that changes the style of a polygon from the fusion table when you click on it, using the google.maps.event.addListener(layer, "click", function(e) {}); I would like to use the same function that I call in the case of a click on the layer, but this time with a click with the coordinates that I got. I have tried google.maps.event.trigger(map, 'click', {latLng: new google.maps.LatLng(42.701487,26.772308)}); As well as the example here Google Fusion Table Double Click (dblClick) I have tried changing map with layer... I am sorry if my question is quite stupid, but I have tried many options. P.S. I have seen many post about getting the info from the table, but I do not need that. I want to change the style of the KML element in the selected row, so I do not see it happening by a query. Here is the model of my script: function initialize() { geocoder = new google.maps.Geocoder(); map = new google.maps.Map(document.getElementById("map_canvas"),myOptions); layer = new google.maps.FusionTablesLayer({ suppressInfoWindows:true, map : map, query : { select: '??????????????', from: '12ZoroPjIfBR4J-XwM6Rex7LmfhzCDJc9_vyG5SM' } }); google.maps.event.addListener(layer, "click", function(e) { SmeniStilRaionni(layer,e); marker.setMap(null); }); } function SmeniStilRaionni(layer,e) { ... } function showAddress(address) { geocoder.geocode( { 'address': address}, function(results, status) { if (status == google.maps.GeocoderStatus.OK) { var point = results[0].geometry.location; //IMITATE THE CLICK } }); } In response to geocodezip This way you hide all the other elements... I do not wish that. It is like if I want to change the border of the selected element. And I do not wish for a new layer. In the function that I use now I push the style of the options of the layer and then set the option. I use the e from google.maps.event.addListener(layer, "click", function(e)); by inserting e.row['Name'].value inside the where rule. I would like to ask you if there is any info on the e variable in google.maps.event.addListener(layer, "click", function(e)); I found out how to get the results I wanted: For my query after I get the point I use this: var queryText ="SELECT '??????? ???','??????? ???','?????????? ???','??????????????' FROM "+FusionTableID+" WHERE ST_INTERSECTS(\'??????????????\', CIRCLE(LATLNG(" + point.toUrlValue(6) + "),0.5));"; queryText = encodeURIComponent(queryText); document.getElementById("vij query").innerHTML = queryText; var query = new google.visualization.Query('http://www.google.com/fusiontables/gvizdata?tq=' + queryText); And then I get these results: var rsyd = response.getDataTable().getValue(0,0); var osyd = response.getDataTable().getValue(0,1); var apsyd = response.getDataTable().getValue(0,2); And then, I use the following: where: "'??????? ???' = '"+rsyd+"'", Which is the same as: where: "'??????? ???' = '"+e.row['??????? ???'].value+"'", in the click function. This is a working solution for my problem. But still, I cannot find a way to Imitate a Mouse click.

    Read the article

  • Foolishness Check: PHP Class finds Class file but not Class in the file.

    - by Daniel Bingham
    I'm at a loss here. I've defined an abstract superclass in one file and a subclass in another. I have required the super-classes file and the stack trace reports to find an include it. However, it then returns an error when it hits the 'extends' line: Fatal error: Class 'HTMLBuilder' not found in View/Markup/HTML/HTML4.01/HTML4_01Builder.php on line 7. I had this working with another class tree that uses factories a moment ago. I just added the builder layer in between the factories and the consumer. The factory layer looked almost exactly the same in terms of includes and dependencies. So that makes me think I must have done something silly that's causes the HTMLBuilder.php file to not be included correctly or interpreted correctly or some such. Here's the full stack trace (paths slightly altered): # Time Memory Function Location 1 0.0001 53904 {main}( ) ../index.php:0 2 0.0002 67600 require_once( 'View/Page.php' ) ../index.php:3 3 0.0003 75444 require_once( 'View/Sections/SectionFactory.php' ) ../Page.php:4 4 0.0003 81152 require_once( 'View/Sections/HTML/HTMLSectionFactory.php' ) ../SectionFactory.php:3 5 0.0004 92108 require_once( 'View/Sections/HTML/HTMLTitlebarSection.php' ) ../HTMLSectionFactory.php:5 6 0.0005 99716 require_once( 'View/Markup/HTML/HTMLBuilder.php' ) ../HTMLTitlebarSection.php:3 7 0.0005 103580 require_once( 'View/Markup/MarkupBuilder.php' ) ../HTMLBuilder.php:3 8 0.0006 124120 require_once( 'View/Markup/HTML/HTML4.01/HTML4_01Builder.php' ) ../MarkupBuilder.php:3 Here's the code in question: Parent class (View/Markup/HTML/HTMLBuilder.php): <?php require_once('View/Markup/MarkupBuilder.php'); abstract class HTMLBuilder extends MarkupBuilder { public abstract function getLink($text, $href); public abstract function getImage($src, $alt); public abstract function getDivision($id, array $classes=NULL, array $children=NULL); public abstract function getParagraph($text, array $classes=NULL, $id=NULL); } ?> Child Class, (View/Markup/HTML/HTML4.01/HTML4_01Builder.php): <?php require_once('HTML4_01Factory.php'); require_once('View/Markup/HTML/HTMLBuilder.php'); class HTML4_01Builder extends HTMLBuilder { private $factory; public function __construct() { $this->factory = new HTML4_01Factory(); } public function getLink($href, $text) { $link = $this->factory->getA(); $link->addAttribute('href', $href); $link->addChild($this->factory->getText($text)); return $link; } public function getImage($src, $alt) { $image = $this->factory->getImg(); $image->addAttribute('src', $src); $image->addAttribute('alt', $alt); return $image; } public function getDivision($id, array $classes=NULL, array $children=NULL) { $div = $this->factory->getDiv(); $div->setID($id); if(!empty($classes)) { $div->addClasses($classes); } if(!empty($children)) { $div->addChildren($children); } return $div; } public function getParagraph($text, array $classes=NULL, $id=NULL) { $p = $this->factory->getP(); $p->addChild($this->factory->getText($text)); if(!empty($classes)) { $p->addClasses($classes); } if(!empty($id)) { $p->setID($id); } return $p; } } ?> I would appreciate any and all ideas. I'm at a complete loss here as to what is going wrong. I'm sure it's something stupid I just can't see...
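Reading the stack trace above, the includes look circular: HTMLBuilder.php requires MarkupBuilder.php on line 3, and MarkupBuilder.php in turn requires HTML4_01Builder.php on its line 3, so the subclass is parsed before execution ever reaches the abstract class HTMLBuilder declaration further down in HTMLBuilder.php. A minimal sketch of one way to break the cycle (everything except the requires is abbreviated, and the exact layout is an assumption):

    <?php
    // View/Markup/MarkupBuilder.php -- do not require concrete subclasses here
    abstract class MarkupBuilder
    {
        // ... shared builder contract
    }
    ?>

    <?php
    // Whoever actually needs a concrete builder pulls it in directly.
    require_once('View/Markup/HTML/HTML4.01/HTML4_01Builder.php');
    $builder = new HTML4_01Builder();
    ?>

Registering an spl_autoload_register() callback is the other common way out: each class file is loaded lazily the first time its name is used, so the ordering problem disappears.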

    Read the article

  • Recompiling an old fortran 2/4\66 program that was compiled for os\2 need it to run in dos

    - by Mike Hansen
    I am helping an old scientist with some problems and have 1 program that he found and modified about 20 yrs. ago, and runs fine as a 32 bit os\2 executable but i need it to run under dos! I am not a programmer but a good hardware & software man, so I'am pretty stupid about this problem, but here go's I have downloaded 6 different compilers watcom77,silverfrost ftn95,gfortran,2 versions of g77 and f80. Watcom says it is to old of program,find older compiler,silverfrost opens it,debugs, etc. but is changing all the subroutines from "real" to "complex" and vice-vesa,and the g77's seem to install perfectly (library links and etc.) but wont even compile the test.f programs.My problem is 1; to recompile "as is" or "upgrade" the code? PROGRAM xconvlv INTEGER N,N2,M PARAMETER (N=2048,N2=2048,M=128) INTEGER i,isign REAL data(n),respns(m),resp(n),ans(n2),t3(n),DUMMY OPEN(UNIT=1, FILE='C:\QKBAS20\FDATA1.DAT') DO 1 i=1,N READ(1,*) T3(i), data(i), DUMMY continue CLOSE(UNIT-1) do 12 i=1,N respns(i)=data(i) resp(i)=respns(i) continue isign=-1 call convlv(data,N,resp,M,isign,ans) OPEN(UNIT=1,FILE='C:\QKBAS20\FDATA9.DAT') DO 14 i=1,N WRITE(1,*) T3(i), ans(i) continue END SUBROUTINE CONVLV(data,n,respns,m,isign,ans) INTEGER isign,m,n,NMAX REAL data(n),respns(n) COMPLEX ans(n) PARAMETER (NMAX=4096) * uses realft, twofft INTEGER i,no2 COMPLEX fft (NMAX) do 11 i=1, (m-1)/2 respns(n+1-i)=respns(m+1-i) continue do 12 i=(m+3)/2,n-(m-1)/2 respns(i)=0.0 continue call twofft (data,respns,fft,ans,n) no2=n/2 do 13 i=1,no2+1 if (isign.eq.1) then ans(i)=fft(i)*ans(i)/no2 else if (isign.eq.-1) then if (abs(ans(i)) .eq.0.0) pause ans(i)=fft(i)/ans(i)/no2 else pause 'no meaning for isign in convlv' endif continue ans(1)=cmplx(real (ans(1)),real (ans(no2+1))) call realft(ans,n,-1) return END SUBROUTINE realft(data,n,isign) INTEGER isign,n REAL data(n) * uses four1 INTEGER i,i1,i2,i3,i4,n2p3 REAL c1,c2,hli,hir,h2i,h2r,wis,wrs DOUBLE PRECISION theta,wi,wpi,wpr,wr,wtemp theta=3.141592653589793d0/dble(n/2) cl=0.5 if (isign.eq.1) then c2=-0.5 call four1(data,n/2,+1) else c2=0.5 theta=-theta endif (etc.,etc., etc.) SUBROUTINE twofft(data,data2,fft1,fft2,n) INTEGER n REAL data1(n,data2(n) COMPLEX fft1(n), fft2(n) * uses four1 INTEGER j,n2 COMPLEX h1,h2,c1,c2 c1=cmplx(0.5,0.0) c2=cmplx(0.0,-0.5) do 11 j=1,n fft1(j)=cmplx(data1(j),data2(j) continue call four1 (fft1,n,1) fft2(1)=cmplx(aimag(fft1(1)),0.0) fft1(1)=cmplx(real(fft1(1)),0.0) n2=n+2 do 12 j=2,n/2+1 h1=c1*(fft1(j)+conjg(fft1(n2-j))) h2=c2*(fft1(j)-conjg(fft1(n2-j))) fft1(j)=h1 fft1(n2-j)=conjg(h1) fft2(j)=h2 fft2(n2-j)=conjg(h2) continue return END SUBROUTINE four1(data,nn,isign) INTEGER isign,nn REAL data(2*nn) INTEGER i,istep,j,m,mmax,n REAL tempi,tempr DOUBLE PRECISION theta, wi,wpi,wpr,wr,wtemp n=2*nn j=1 do 11 i=1,n,2 if(j.gt.i)then tempr=data(j) tempi=data(j+1) (etc.,etc.,etc.,) continue mmax=istep goto 2 endif return END There are 4 subroutines with this that are about 3 pages of code and whould be much easier to e-mail to someone if their able to help me with this.My e-mail is [email protected] , or if someone could tell me where to get a "working" compiler that could recompile this? THANK-YOU, THANK-YOU,and THANK-YOU for any help with this! The errors Iam getting are; 1.In a call to CONVLV from another procedure,the first argument was of a type REAL(kind=1), it is now a COMPLEX(kind=1) 2.In a call to REALFT from another procedure, ... 
COMPLEX(kind=1) it is now a REAL(kind=1) 3.In a call to TWOFFT from...COMPLEX(kind-1) it is now a REAL(kind=1) 4.In a previous call to FOUR1, the first argument was of a type REAL(kind=1) it is now a COMPLEX(kind=1).

    Read the article

  • Running code when all threads are finished processing.

    - by rich97
    Quick note: Java and Android noob here, I'm open to you telling me I'm stupid (as long as you tell me why.) I have an android application which requires me start multiple threads originating from various classes and only advance to the next activity once all threads have done their job. I also want to add a "failsafe" timeout in case one the the threads takes too long (HTTP request taking too long or something.) I searched Stack Overflow and found a post saying that I should create a class to keep a running total of open threads and then use a timer to poll for when all the threads are completed. I think I've created a working class to do this for me, it's untested as of yet but has no errors showing in eclipse. Is this a correct implementation? Are there any APIs that I should be made aware of (such as classes in the Java or Android APIs that could be used in place of the abstract classes at the bottom of the class?) package com.dmp.geofix.libs; import java.util.ArrayList; import java.util.Iterator; import java.util.Timer; import java.util.TimerTask; public class ThreadMonitor { private Timer timer = null; private TimerTask timerTask = null; private OnSuccess onSuccess = null; private OnError onError = null; private static ArrayList<Thread> threads; private final int POLL_OPEN_THREADS = 100; private final int TIMEOUT = 10000; public ThreadMonitor() { timerTask = new PollThreadsTask(); } public ThreadMonitor(OnSuccess s) { timerTask = new PollThreadsTask(); onSuccess = s; } public ThreadMonitor(OnError e) { timerTask = new PollThreadsTask(); onError = e; } public ThreadMonitor(OnSuccess s, OnError e) { timerTask = new PollThreadsTask(); onSuccess = s; onError = e; } public void start() { Iterator<Thread> i = threads.iterator(); while (i.hasNext()) { i.next().start(); } timer = new Timer(); timer.schedule(timerTask, 0, POLL_OPEN_THREADS); } public void finish() { Iterator<Thread> i = threads.iterator(); while (i.hasNext()) { i.next().interrupt(); } threads.clear(); timer.cancel(); } public void addThread(Thread t) { threads.add(t); } public void removeThread(Thread t) { threads.remove(t); t.interrupt(); } class PollThreadsTask extends TimerTask { private int timeElapsed = 0; @Override public void run() { timeElapsed += POLL_OPEN_THREADS; if (timeElapsed <= TIMEOUT) { if (threads.isEmpty() == false) { if (onSuccess != null) { onSuccess.run(); } } } else { if (onError != null) { onError.run(); } finish(); } } } public abstract class OnSuccess { public abstract void run(); } public abstract class OnError { public abstract void run(); } }
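Two things worth noting about the polling approach: the success branch above fires while threads are still in the list (the isEmpty() check looks inverted), and the JDK already ships a primitive for exactly this pattern. A minimal sketch using CountDownLatch; class and variable names are illustrative, not from the original code:

    import java.util.List;
    import java.util.concurrent.CountDownLatch;
    import java.util.concurrent.TimeUnit;

    public class WaitForWorkers {
        // Starts one thread per task and returns true only if every task
        // finished before the timeout, false if the failsafe kicked in.
        public static boolean runAll(List<Runnable> tasks, long timeoutMs)
                throws InterruptedException {
            final CountDownLatch latch = new CountDownLatch(tasks.size());
            for (final Runnable task : tasks) {
                new Thread(new Runnable() {
                    public void run() {
                        try {
                            task.run();        // e.g. one HTTP request
                        } finally {
                            latch.countDown(); // always signal, even after an exception
                        }
                    }
                }).start();
            }
            return latch.await(timeoutMs, TimeUnit.MILLISECONDS);
        }
    }

On Android the await() call would sit on a background thread, with the switch to the next activity posted back to the UI thread once it returns true.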

    Read the article

  • mysql_fetch_array() problem

    - by Marty
    So I have 3 DB tables that are all identical in every way (data is different) except the name of the table. I did this so I could use one piece of code with a switch like so: function disp_bestof($atts) { extract(shortcode_atts(array( 'topic' => '' ), $atts)); $connect = mysql_connect("localhost","foo","bar"); if (!$connect) { die('Could not connect: ' . mysql_error()); } switch ($topic) { case "attorneys": $bestof_query = "SELECT * FROM attorneys p JOIN (awards a, categories c, awardLevels l) ON (a.id = p.id AND c.id = a.category AND l.id = a.level) ORDER BY a.category, a.level ASC"; $category_query = "SELECT * FROM categories"; $db = mysql_select_db('roanoke_BestOf_TopAttorneys'); $query = mysql_query($bestof_query); $categoryQuery = mysql_query($category_query); break; case "physicians": $bestof_query = "SELECT * FROM physicians p JOIN (awards a, categories c, awardLevels l) ON (a.id = p.id AND c.id = a.category AND l.id = a.level) ORDER BY a.category, a.level ASC"; $category_query = "SELECT * FROM categories"; $db = mysql_select_db('roanoke_BestOf_TopDocs'); $query = mysql_query($bestof_query); $categoryQuery = mysql_query($category_query); break; case "dining": $bestof_query = "SELECT * FROM restaurants p JOIN (awards a, categories c, awardLevels l) ON (a.id = p.id AND c.id = a.category AND l.id = a.level) ORDER BY a.category, a.level ASC"; $category_query = "SELECT * FROM categories"; $db = mysql_select_db('roanoke_BestOf_DiningAwards'); $query = mysql_query($bestof_query); $categoryQuery = mysql_query($category_query); break; default: $bestof_query = "switch on $best did not match required case(s)"; break; } $category = ''; while( $result = mysql_fetch_array($query) ) { if( $result['category'] != $category ) { $category = $result['category']; //echo "<div class\"category\">"; $bestof_content .= "<h2>".$category."</h2>\n"; //echo "<ul>"; Now, this whole thing works PERFECT for the first two cases, but the third one "dining" breaks with this error: Warning: mysql_fetch_assoc(): supplied argument is not a valid MySQL result resource ... on line 78 Line 78 is the while() at the bottom. I have checked and double checked and can't figure what the problem is. Here's the DB structure for 'restaurants': CREATE TABLE `restaurants` ( `id` int(10) NOT NULL auto_increment, `restaurant` varchar(255) default NULL, `address1` varchar(255) default NULL, `address2` varchar(255) default NULL, `city` varchar(255) default NULL, `state` varchar(255) default NULL, `zip` double default NULL, `phone` double default NULL, `URI` varchar(255) default NULL, `neighborhood` varchar(255) default NULL, PRIMARY KEY (`id`) ) ENGINE=MyISAM AUTO_INCREMENT=249 DEFAULT CHARSET=utf8 Does anyone see what I'm doing wrong here? I'm passing "dining" to the function and as I said before, the first two cases in the switch work fine. I'm sure it's something stupid...
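When mysql_fetch_array() complains about an invalid result resource, it almost always means the query itself returned FALSE; checking mysql_error() right after mysql_query() will usually name the missing table or column in that third database. A minimal debugging sketch mirroring the dining case (the die() is just for diagnosis, not production code):

    <?php
    // assumes $bestof_query was built exactly as in the switch above
    $db = mysql_select_db('roanoke_BestOf_DiningAwards');
    $query = mysql_query($bestof_query);
    if ($query === false) {
        die('Query failed: ' . mysql_error());   // e.g. unknown table or column
    }
    while ($result = mysql_fetch_array($query)) {
        // ... build the markup as before
    }
    ?>

The same check on the other two databases, where the code works, makes it easy to diff what the dining schema is missing.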

    Read the article

  • www.domain redirecting to google?

    - by aayush
    Note: A while back i had no place to host my domain, then via namecheap i set it to forward my domain to google I bought webhosting again today and everything was working fine. I set up vhosts and set up www.domain as the server alias. Both worked. Then i tried to set up a alternate subdomain test.domain, but failed (I did it by creating a alternate vhost right below the current one) as it kept redirecting to google. As a test, i replaced the www with test in serveralias, it still redirected to google but now even www redirects to google. I am using cloudflare, and i am really confused how to go about this. I tried listing www as a cname and as a A record, still redirecting to google. I tried checking via proxies e.t.c, its universal and hence not a problem of my PC. Please help, i am really distressed by this. I am running a ubuntu 13.10 x32 stack with LAMP. Here is what my domain.com.conf file looks like <VirtualHost *:80> # The ServerName directive sets the request scheme, hostname and port that # the server uses to identify itself. This is used when creating # redirection URLs. In the context of virtual hosts, the ServerName # specifies what hostname must appear in the request's Host: header to # match this virtual host. For the default virtual host (this file) this # value is not decisive as it is used as a last resort host regardless. # However, you must set it for any further virtual host explicitly. ServerName domain.com ServerAlias www.domain.com ServerAdmin webmaster@localhost DocumentRoot /var/www/domain.com/public_html # Available loglevels: trace8, ..., trace1, debug, info, notice, warn, # error, crit, alert, emerg. # It is also possible to configure the loglevel for particular # modules, e.g. #LogLevel info ssl:warn ErrorLog ${APACHE_LOG_DIR}/error.log CustomLog ${APACHE_LOG_DIR}/access.log combined # For most configuration files from conf-available/, which are # enabled or disabled at a global level, it is possible to # include a line for only one particular virtual host. For example the # following line enables the CGI configuration for this host only # after it has been globally disabled with "a2disconf". #Include conf-available/serve-cgi-bin.conf </VirtualHost> There is a valid index.php file at the end of the documentroot aswell. The website in question is aayushagra.com Edit: On cloudflare i tried removing the www entirely, and it still sent me to google Edit: Zone file ;; Domain: aayushagra.com ;; Exported: 2013-11-03 07:37:52 ;; ;; This file is intended for use for informational and archival ;; purposes ONLY and MUST be edited before use on a production ;; DNS server. In particular, you must: ;; -- update the SOA record with the correct authoritative name server ;; -- update the SOA record with the contact e-mail address information ;; -- update the NS record(s) with the authoritative name servers for this domain. ;; ;; For further information, please consult the BIND documentation ;; located on the following website: ;; ;; http://www.isc.org/ ;; ;; And RFC 1035: ;; ;; http://www.ietf.org/rfc/rfc1035.txt ;; ;; Please note that we do NOT offer technical support for any use ;; of this zone data, the BIND name server, or any other third-party ;; DNS software. ;; ;; Use at your own risk. ;; $ORIGIN aayushagra.com. @ 3600 IN SOA aayushagra.com. root.aayushagra.com. ( 2013110301 ; serial 7200 ; refresh 3600 ; retry 86400 ; expire 3600) ; minimum ;; MX Records aayushagra.com. 300 IN MX aayushagra.com. ;; CNAME Records direct.aayushagra.com. 300 IN CNAME aayushagra.com. 
;; A Records (IPv4 addresses)
www.aayushagra.com.    300 IN A 146.185.140.31
aayushagra.com.        300 IN A 146.185.140.31
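For the test subdomain, each extra hostname needs its own ServerName in a separate vhost (a ServerAlias on the main vhost only answers requests that already resolve to the server). A minimal sketch; the paths mirror the post but are assumptions:

    <VirtualHost *:80>
        ServerName test.aayushagra.com
        DocumentRoot /var/www/test.aayushagra.com/public_html
    </VirtualHost>

It is also worth double-checking that the old Namecheap "forward to Google" URL redirect has actually been removed and that the Cloudflare record for test/www points at 146.185.140.31, since a lingering forwarding rule (or its cache) would keep sending every hostname to Google regardless of what Apache does.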

    Read the article

< Previous Page | 192 193 194 195 196 197 198 199 200 201 202 203  | Next Page >