Search Results

Search found 988 results on 40 pages for 'branching and merging'.


  • What are some good questions (and good/bad answers) to ask at an interview to gauge the competency of the company/team?

    - by Wayne M
    I'm already familiar with the Joel Test, but it's been my experience that some of the questions there have the answers "massaged" to make the company seem better than it is. I've had several jobs in the past that, for instance, claimed they had a QA process and did unit testing, when what they really meant was "the programmers test the app, with the debugger and via trial and error"; they said they used SVN but they just lumped everything into one giant repository and had no concept of branching/merging or anything more complicated than updating and committing; they said they could build in one step, and what they really meant was that it's "one step" to copy dozens of files by hand from the programmer's PC to the live server.

    How do you go about properly gauging a company's environment to make sure that it's a well-evolved company and not one stuck doing things a certain way because they've done it for years and are ignorant of change? You can almost never ask to see their source code, so you're stuck trying to figure out whether the interviewer's answer is accurate or BS to make the company seem good. Besides the Joel Test, what are some other good questions to get a proper feel for a company, and more importantly, what are some good and bad answers that could indicate a good or bad company? I mean something like this (take it at face value, please; it's all I could think of at short notice):

    Question: How does the software team apply the SOLID principles and Inversion of Control to their code?

    Good Answer: We adhere to SOLID wherever possible; we use TDD so it kind of forces us to write abstract, testable code. We use Ninject for our IoC container because it's fairly easy to configure - it was that or StructureMap, but I find Ninject a bit more intuitive, and who doesn't like ninjas? You're not a pirate, are you?

    Bad Answer: Our code is pretty secure, yeah. And what's this Inversion of Control thing? I've never heard of it before.

    You see what I did there? The "good" answer uses facts to back it up and has a bit of "in crowd" humor; the bad answer shows complete ignorance of the question - not necessarily a bad thing if you are interviewing for a manager/director position, but a terrible answer and a huge red flag if you're interviewing as a developer and talking to a senior developer or manager! My biggest problem at the moment is being able to take a generic response and gauge whether it's the good or the bad answer; more often than not it's the bad kind, and I find myself frustrated almost from day one at the new job. I suppose I could name-drop if I ask about specific things (e.g. "Do you write unit tests?" and if the answer is yes, ask whether they use NUnit, MbUnit or something else; if they mention data access, ask whether they use a clean ORM like NHibernate or something more coupled like EF or Linq), but is there another way, short of resolving to actually call the interviewer out on things (which will almost certainly result in not getting the job, but if they are skirting the question it's probably not a job I want anyway)?

    Read the article

  • Subterranean IL: Exception handling 2

    - by Simon Cooper
    Control flow in and around exception handlers is tightly controlled, due to the various ways the handler blocks can be executed. To start off with, I'll describe what SEH does when an exception is thrown.

    Handling exceptions

    When an exception is thrown, the CLR stops program execution at the throw statement and searches up the call stack looking for an appropriate handler; catch clauses are analyzed, and filter blocks are executed (I'll be looking at filter blocks in a later post). Then, when an appropriate catch or filter handler is found, the stack is unwound to that handler, executing successive finally and fault handlers in their own stack contexts along the way, and program execution continues at the start of the catch handler. Because catch, fault, finally and filter blocks can be executed essentially out of the blue by the SEH mechanism, without any reference to preceding instructions, you can't use arbitrary branches in and out of exception handler blocks. Instead, you need to use specific instructions for control flow out of handler blocks: leave, endfinally/endfault, and endfilter.

    Exception handler control flow

    try blocks: You cannot branch into or out of a try block or its handler using normal control flow instructions. The only way of entering a try block is by either falling through from preceding instructions, or by branching to the first instruction in the block. Once you are inside a try block, you can only leave it by throwing an exception or using the leave <label> instruction to jump to somewhere outside the block and its handler. The leave instruction signals the CLR to execute any finally handlers around the block. Most importantly, you cannot fall out of the block, and you cannot use a ret to return from the containing method (unlike in C#); you have to use leave to branch to a ret elsewhere in the method. As a side effect, leave empties the stack.

    catch blocks: The only way of entering a catch block is if it is run by the SEH. At the start of the block execution, the thrown exception will be the only thing on the stack. The only way of leaving a catch block is to use throw, rethrow, or leave, in a similar way to try blocks. However, one thing you can do is use a leave to branch back to an arbitrary place in the handler's try block! In other words, you can do this:

        .try {
            // ...
            newobj instance void [mscorlib]System.Exception::.ctor()
            throw
        MidTry:
            // ...
            leave.s RestOfMethod
        }
        catch [mscorlib]System.Exception {
            // ...
            leave.s MidTry
        }
        RestOfMethod:
            // ...

    As far as I know, this mechanism is not exposed in C# or VB.

    finally/fault blocks: The only way of entering a finally or fault block is via the SEH, either as the result of a leave instruction in the corresponding try block, or as part of handling an exception. The only way to leave a finally or fault block is to use endfinally or endfault (both compile to the same binary representation), which continues execution after the finally/fault block or, if the block was executed as part of handling an exception, signals that the SEH can continue walking the stack.

    filter blocks: I'll be covering filters in a separate blog post. They're quite different to the others, and have their own special semantics.

    Phew! Complicated stuff, but it's important to know if you're writing or outputting exception handlers in IL. Dealing with the C# compiler is probably best saved for the next post.

    Read the article

  • Some PowerShell goodness

    - by KyleBurns
    Ever work somewhere where processes dump files into folders to maintain an archive? Me too, and Windows Explorer hates it. Very often I find myself needing to organize these files into subfolders so that I can go after files without locking up Windows Explorer, and my answer used to be to write a program in something like C# to do the job. These programs will typically enumerate the files in a folder and move each file to a subdirectory named based on a datestamp. The last such program I wrote had to use lower-level Win32 API calls to perform the enumeration, because it appears the standard .Net calls make use of the same method of enumerating the directories that Windows Explorer chokes on when dealing with a large number of entries in a particular directory, so a simple task was accomplished with a lot of code.

    Of course, this little utility was just something I used to make my life easier and "not a production app", so it was in my local source folder and not source control when my hard drive died. So... I was getting ready to re-create it and thought it might be a good idea to play with PowerShell a bit - something I had been wanting to do but had not yet met a requirement to make me do it. The resulting script was amazingly succinct, and even with the flexibility for parameterization and line breaks added for readability it was only about 25 lines long. Here's the code, with discussion following:

        param(
            [Parameter(
                Mandatory = $false,
                Position = 0,
                HelpMessage = "Root of the folders or share to archive.  Be sure to end with appropriate path separator"
            )]
            [String] $folderRoot="\\fileServer\pathToFolderWithLotsOfFiles\",

            [Parameter(
                Mandatory = $false,
                Position = 1
            )]
            [int] $days = 1
        )

        dir $folderRoot|?{(!($_.PsIsContainer)) -and ((get-date) - $_.lastwritetime).totaldays -gt $days }|%{
            [string]$year=$([string]$_.lastwritetime.year)
            [string]$month=$_.lastwritetime.month
            [string]$day=$_.lastwritetime.day
            $dir=$folderRoot+$year+"\"+$month+"\"+$day
            if(!(test-path $dir)){
                new-item -type container $dir
            }
            Write-output $_
            move-item $_.fullname $dir
        }

    The script starts by declaring two parameters. The first parameter holds the path to the folder that I am going to be sorting into subdirectories. The path separator is intended to be included in this argument because I didn't want to mess with determining whether this was local or UNC and picking the right separator in code, but this could be easily improved upon using Path.Combine, since PowerShell has access to the full framework libraries. The second parameter holds a minimum age in days for files to be removed from the root folder.

    The script then pipes the dir command through a query to include only files (by excluding containers) and, of those, only entries that meet the age requirement based on the last-modified datestamp. For each of those, the datestamp is used to construct a folder name in the format YYYY\MM\DD (if you're in an environment where even a day's worth of files needs to be further divided, you could make this more granular) and the folder is created if it does not yet exist. Finally, the file is moved into the directory.

    One of the things that was really cool about using PowerShell for this task is that the new-item command is smart enough to create the entire subdirectory structure with a single call.
In previous code that I have written to do this kind of thing, I would have to test the entire tree leading down to the subfolder I want, leading to a lot of branching code that detracted from being able to quickly look at the code and understand the job it performs. Overall, I have to say I'm really pleased with what has been done making PowerShell powerful and useful.

    Read the article

  • Procedural level generation for a platformer game (tile-based) using player physics

    - by Notbad
    I have been searching for information about how to build a 2D world generator (tile-based) for a platformer game I am developing. The levels should look like dungeons with a ceiling and a floor, and they will have a high probability of being made of just horizontal rooms, but sometimes they can have exits to a top/down room. Here is an example of what I would like to achieve; I'm referring only to the caves part. I know level design won't be that great when generated, but I think it is possible to have something good enough for people to enjoy the procedural maps (note: Super Metroid spoiler!): http://www.snesmaps.com/maps/SuperMetroid/SuperMetroidMapNorfair.html

    Well, after spending some time thinking about this I have some ideas to create the maps that I would like to share with you:

    1) I have read about cellular automata and I would like to use them to carve the rooms, but instead of carving just one tile at a time I would like to carve full columns of tiles. Of course this carving system will have some restrictions, like how many tiles must be left for the ceiling and the floor, etc. This way I could get much cleaner rooms than using the usual automaton.

    2) I want some branching in the rooms. It will have a small probability of happening, but I definitely want it. Thinking about carving, I came to the conclusion that I could use some sort of path-creation algorithm that the carving system would follow to create a path through the rooms. This could be more noticeable if we make the carving system carve columns with the height of a corridor or with the height of a wide room (this will be added to the system as a parameter). This way, at some point I could spawn a new automaton beside the main one to create branches. This new automaton should play side by side with the first one to create dead ends and islands (both paths created by the automata meet at some point or lead to the same room).

    It would be too long to explain here all the tests I have done, etc., so I will just try to summarize the problems to see if anyone could bring some light to solve them (I don't mind sharing my successes, but I think they aren't too relevant):

    1) Zone reachability: How can I make sure that the player will be able to reach all the zones I created (mainly when branches happen or vertical rooms are created)? When branches are created I have to make sure that there will be a way to get onto the newly created branch; I mean a bifurcation that the player could follow (the player will follow the main path or jump to a platform to get onto the other way). On the other hand, if an island is created by the meeting of both branches, I need to make sure the player will be able to get onto the island too.

    2) When a branch is created and corridors are generated for each branch, how can I make them merge or repel to create an island, or just make them separate corridors?

    3) When I create a branch and an island is created because both corridors merge at some point or they lead to the same room, is there any way to detect this and randomize where to create the platforms needed to get onto the island? These platforms could be created at the start of the island or at the end.

    I guess part of the problem could be solved using some sort of graph following the created paths, but I'm a bit lost in this sea of procedural content creation :). I don't expect a solution to the problem, but some information to get me moving forward again. Thanks in advance.
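
    To make idea 1) and problem 1) concrete, here is a minimal Python sketch (not from the question; all names and numbers are invented): it carves a cave column by column with a drifting opening, then flood-fills the open tiles to check that everything is connected. A real reachability test would also have to respect jump height, as the comment notes.

        # Hypothetical sketch: carve a cave column by column, then flood-fill to
        # verify every open tile is connected to the start column.
        # Names (WIDTH, MIN_CEILING, ...) are illustrative, not from the question.
        import random
        from collections import deque

        WIDTH, HEIGHT = 60, 20
        MIN_CEILING, MIN_FLOOR = 2, 2      # tiles that must stay solid at top/bottom
        MIN_GAP, MAX_GAP = 3, 6            # corridor height vs. wide-room height

        def carve_level(seed=None):
            rng = random.Random(seed)
            solid = [[True] * WIDTH for _ in range(HEIGHT)]
            top = HEIGHT // 2 - 2
            for x in range(WIDTH):
                gap = rng.randint(MIN_GAP, MAX_GAP)
                # drift the opening up or down a little each column
                top += rng.choice((-1, 0, 1))
                top = max(MIN_CEILING, min(top, HEIGHT - MIN_FLOOR - gap))
                for y in range(top, top + gap):
                    solid[y][x] = False
            return solid

        def reachable(solid, start):
            """BFS over open tiles (4-connected); ignores jump physics, which
            would need a separate check against the player's jump height."""
            seen, queue = {start}, deque([start])
            while queue:
                x, y = queue.popleft()
                for nx, ny in ((x+1, y), (x-1, y), (x, y+1), (x, y-1)):
                    if (0 <= nx < WIDTH and 0 <= ny < HEIGHT
                            and not solid[ny][nx] and (nx, ny) not in seen):
                        seen.add((nx, ny))
                        queue.append((nx, ny))
            return seen

        level = carve_level(seed=42)
        start = next((x, y) for y in range(HEIGHT) for x in range(WIDTH)
                     if x == 0 and not level[y][x])
        open_tiles = {(x, y) for y in range(HEIGHT) for x in range(WIDTH) if not level[y][x]}
        print("all open tiles reachable:", reachable(level, start) == open_tiles)

    Spawning a second carver for a branch would just mean running another drifting opening from a chosen column and flood-filling again afterwards to detect islands.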

    Read the article

  • Structuring projects & dependencies of large winforms applications in C#

    - by Benjol
    UPDATE: This is one of my most-visited questions, and yet I still haven't really found a satisfactory solution for my project. One idea I read in an answer to another question is to create a tool which can build solutions 'on the fly' for projects that you pick from a list. I have yet to try that though. How do you structure a very large application? Multiple smallish projects/assemblies in one big solution? A few big projects? One solution per project? And how do you manage dependencies in the case where you don't have one solution. Note: I'm looking for advice based on experience, not answers you found on Google (I can do that myself). I'm currently working on an application which has upward of 80 dlls, each in its own solution. Managing the dependencies is almost a full time job. There is a custom in-house 'source control' with added functionality for copying dependency dlls all over the place. Seems like a sub-optimum solution to me, but is there a better way? Working on a solution with 80 projects would be pretty rough in practice, I fear. (Context: winforms, not web) EDIT: (If you think this is a different question, leave me a comment) It seems to me that there are interdependencies between: Project/Solution structure for an application Folder/File structure Branch structure for source control (if you use branching) But I have great difficulty separating these out to consider them individually, if that is even possible. I have asked another related question here.

    Read the article

  • using paperclip with secure and non-secure files

    - by crankharder
    First off, we have this namespaced/STI'd structure for our different types of 'Media':

        Media < Ar::Base
        Media::Local < Media
        Media::Local::Image < Media::Local
        Media::Local::Csv < Media::Local
        etc...

    This is excellent, since a user can have many media, and how we display each piece of media is based on the class name and a corresponding partial. But what if we have some Csv's that need to be secure? That is, they can't reside inside of public. I really hate the idea of branching Media again and doing something like this:

        Media::Secure < Media
        Media::Secure::Image < Media::Secure
        Media::NotSecure < Media
        Media::NotSecure::Image < Media::NotSecure

    ...where Secure and NotSecure would have different params passed to has_attached_file. Now there are two classes that represent Image, and it makes my view/helper system that much more complicated - not to mention it feels very obtuse. What I would really like to do is be able to change where certain Paperclip::Attachment objects get saved before they get saved (e.g. anything uploaded through foo_secure_action) - but I can't seem to make this work. Paperclip::Attachment has an @options hash with :path and :url, but changing those before it is saved doesn't have an effect on where it actually gets set. Even if this is possible, I'm not sure if it would have further consequences... I'm open to alternative ideas for structuring this data, but for the moment I like the idea of using STI for this situation.

    Read the article

  • SCM for Xcode?

    - by Gregor
    I am developing an application for the Mac as a small team (me + another person) effort. We are located in different cities and have started to see the need for solid source control management. Neither of us has any experience with this, and both of us are relatively new to Cocoa/Obj-C/Xcode (but do have C knowledge). Does anyone have any recommendations as to which SCM system to choose? I understand that a lot of people are using Subversion, which is also supported in Xcode 3.1. Does anyone have experience with using Subversion through Xcode? Or is it a better option to choose a standalone GUI alternative, such as Versions? Grateful for any input on this. Gregor Tomasevic, Sweden

    Update/personal experiences: Since this post, we have tried Versions and Cornerstone (both of which are SVN GUI clients), as well as Xcode's built-in support for SVN. We were not particularly pleased with Versions, which seemed to have some problems with committing unversioned files/build files. The built-in SVN support in Xcode works quite well, although it probably has limitations that we have still not run into. Cornerstone is both simple to use and powerful, and does not seem to suffer from the problems we encountered with Versions. So far, we have just tried committing, updating the repo, checking out latest/previous versions of our files, and worked some with file comparison. It might be a whole different ball game once you start working extensively with branching, an area in which we have been told both these GUI clients might have some weaknesses. For what it's worth (and with only days of evaluation), Cornerstone seems to be a somewhat better alternative, although for simpler SCM, Xcode works well too. Thanks for all the comments.

    Read the article

  • Has anyone used Rational Team Concert (RTC)?

    - by FryGuy
    The company I work for is currently evaluating replacements for SourceSafe, and for various reasons, I think RTC will be chosen. I'm a little scared that we're going to end up with a solution that isn't the best for us in our situation. I've tried researching a little bit about what it is, but all I have been able to find is marketing material - nothing about how it actually works (any of the paradigms it uses, etc).

    Our team is around 8 developers and 2 QA people on a single project (and 4-5 more people who would be using it for their independent project). It seems like RTC is targeted at larger teams, but our team is relatively small. Does anyone have experience using RTC in a smaller team?

    The project that would be using it is a .NET/WPF application, so we would be using primarily Visual Studio. Is the Visual Studio integration any good, or are we stuck having to have Eclipse open on top of Visual Studio?

    Personally, I have been using Bazaar as my personal source control (and checking out of/into SourceSafe from a branch), as well as on personal projects. Does RTC incorporate features of "third generation" version control systems, such as first-class branching/merging and changesets rather than file changes, and good visualization of where changes come from? Also, what are the general pros and cons for it?

    Read the article

  • Will fixed-point arithmetic be worth my trouble?

    - by Thomas
    I'm working on a fluid dynamics Navier-Stokes solver that should run in real time. Hence, performance is important. Right now, I'm looking at a number of tight loops that each account for a significant fraction of the execution time: there is no single bottleneck. Most of these loops do some floating-point arithmetic, but there's a lot of branching in between. The floating-point operations are mostly limited to additions, subtractions, multiplications, divisions and comparisons. All this is done using 32-bit floats. My target platform is x86 with at least SSE1 instructions. (I've verified in the assembler output that the compiler indeed generates SSE instructions.) Most of the floating-point values that I'm working with have a reasonably small upper bound, and precision for near-zero values isn't very important. So the thought occurred to me: maybe switching to fixed-point arithmetic could speed things up? I know the only way to be really sure is to measure it, that might take days, so I'd like to know the odds of success beforehand. Fixed-point was all the rage back in the days of Doom, but I'm not sure where it stands anno 2010. Considering how much silicon is nowadays pumped into floating-point performance, is there a chance that fixed-point arithmetic will still give me a significant speed boost? Does anyone have any real-world experience that may apply to my situation?

    Read the article

  • Design to distribute work when generating task oriented input for legacy dos application?

    - by TheDeeno
    I'm attempting to automate a really old DOS application. I've decided the best way to do this is via input redirection. The legacy app (menu driven) has many tasks within tasks with branching logic. In order to easily understand and reuse the input for these tasks, I'd like to break them into bite-sized pieces. Since I'll need to start a fresh app on each run, repeating a context to consume a piece might be messy. I'd like to create an object model that:

        allows me to concentrate on the task at hand
        allows me to reuse common tasks from different start points
        prevents me from calling a task from the wrong start point

    To be more explicit, given I have the following task hierarchy:

        START
          A
            A1
              A1a
              A1b
            A2
              A2a
          B
            B1
              B1a

    I'd like an object model that lets me generate an input file for task "A1b" by using building blocks like:

        START -> do_A, do_A1, do_A1b

    but prevents me from:

        START -> do_A1  // because I'm assuming a different call chain from above

    This will help me write "do_A1b" because I can always assume the same starting context, and will simplify writing "do_A1a" because it has THE SAME starting context. What patterns will help me out here? I'm using Ruby at the moment, so if dynamic language features can help, I'm game.
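
    A hypothetical sketch of one way to get those guarantees, in Python for brevity (the question uses Ruby, where the same shape works): each menu level is its own small class, and the only way to obtain an A1 context is through do_A1() on an A context, so a script can't start from the wrong place. The keystrokes and class names are invented.

        class Script:
            def __init__(self):
                self.keys = []          # accumulated keystrokes for input redirection

            def send(self, *keystrokes):
                self.keys.extend(keystrokes)

        class Start:
            def __init__(self):
                self.script = Script()

            def do_A(self):
                self.script.send("A\r")     # illustrative keystrokes, not the real app's
                return AContext(self.script)

        class AContext:
            def __init__(self, script):
                self.script = script

            def do_A1(self):
                self.script.send("1\r")
                return A1Context(self.script)

        class A1Context:
            def __init__(self, script):
                self.script = script

            def do_A1b(self):
                self.script.send("b\r")
                return self.script

        # Usage: the only legal path to A1b is START -> do_A -> do_A1 -> do_A1b.
        script = Start().do_A().do_A1().do_A1b()
        print("".join(script.keys))   # write this to a file and redirect it into the app

    Because do_A1 only exists on AContext, the "wrong start point" case simply doesn't compile into a valid call chain, and each do_X1x method can assume the context its parent established.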

    Read the article

  • Hg: How to do a rebase like git's rebase

    - by jpswain09
    Hey guys,

    In Git I can do this:

    1. Start working on a new feature:

        $ git co -b newfeature-123   # (a local feature development branch)
        # do a few commits (M, N, O)

        master          A---B---C
                                 \
        newfeature-123            M---N---O

    2. Pull new changes from upstream master:

        $ git pull
        # (master updated with ff-commits)

        master          A---B---C---D---E---F
                                 \
        newfeature-123            M---N---O

    3. Rebase off master so that my new feature can be developed against the latest upstream changes:

        # (from newfeature-123)
        $ git rebase master

        master          A---B---C---D---E---F
                                             \
        newfeature-123                        M---N---O

    I want to know how to do the same thing in Mercurial, and I've scoured the web for an answer, but the best I could find was this: http://www.selenic.com/pipermail/mercurial/2007-June/013393.html

    That link provides 2 examples:

    1. I'll admit that this (replacing the revisions from the example with those from my own example):

        hg up -C F
        hg branch -f newfeature-123
        hg transplant -a -b newfeature-123

    is not too bad, except that it leaves behind the pre-rebase M-N-O as an unmerged head and creates 3 new commits M', N', O' that represent them branching off the updated mainline. Basically the problem is that I end up with this:

        master          A---B---C---D---E---F
                                 \           \
        newfeature-123            \           M'---N'---O'
                                   \
        newfeature-123              M---N---O

    This is not good because it leaves behind local, unwanted commits that should be dropped.

    2. The other option from the same link is:

        hg qimport -r M:O
        hg qpop -a
        hg up F
        hg branch newfeature-123
        hg qpush -a
        hg qdel -r qbase:qtip

    and this does result in the desired graph:

        master          A---B---C---D---E---F
                                             \
        newfeature-123                        M---N---O

    but these commands (all 6 of them!) seem so much more complicated than

        $ git rebase master

    I want to know if this is the only equivalent in Hg, or if there is some other way available that is simple like Git. Thanks!!

    Jamie

    Read the article

  • Any book on designing and implementing a CRPG engine?

    - by Fabzter
    Hi! First, let me tell you, I am not really interested in making my own RPG engine (at least not in the near future, hehe), but I do feel like I want to understand the internals of how an RPG engine works. Why? Well, because I like to read about programming and design, it keeps me motivated and excited, and because I know I will learn a lot - even though I have been programming for some years now, I never stop considering myself ignorant... there are simply SO many things involved in a game engine (especially RPG ones, like branching storylines, and items and economics!) that I'm eager to know. I've been searching (and thus, finding) lots of info online, but it is never focused on what I'm interested in (most of it talks about the mathematics and the implementation of AI algorithms, which I know quite well), which is the design of the overall structure, patterns, the scripting engine, the decision engine... damn, so many things I can't even imagine, since I've never done any game programming. I hope you now have an idea of how I feel, and how I want to learn for the sake of learning, and why I would like you to tell me if you know of any books touching the topics that interest me the most.

    Read the article

  • Split large repo into multiple subrepos and preserve history (Mercurial)

    - by Andrew
    We have a large base of code that contains several shared projects, solution files, etc. in one directory in SVN. We're migrating to Mercurial. I would like to take this opportunity to reorganize our code into several repositories to make cloning for branching have less overhead. I've already successfully converted our repo from SVN to Mercurial while preserving history. My question: how do I break all the different projects into separate repositories while preserving their history?

    Here is an example of what our single repository (OurPlatform) currently looks like:

        /OurPlatform
        ---- Core
        ---- Core.Tests
        ---- Database
        ---- Database.Tests
        ---- CMS
        ---- CMS.Tests
        ---- Product1.Domain
        ---- Product1.Stresstester
        ---- Product1.Web
        ---- Product1.Web.Tests
        ---- Product2.Domain
        ---- Product2.Stresstester
        ---- Product2.Web
        ---- Product2.Web.Tests
        ==== Product1.sln
        ==== Product2.sln

    All of those are folders containing VS projects, except for the solution files. Product1.sln and Product2.sln both reference all of the other projects. Ideally, I'd like to take each of those folders and turn them into separate Hg repos, and also add new repos for each project (they would act as parent repos). Then, if someone was going to work on Product1, they would clone the Product1 repo, which contained Product1.sln and subrepo references to ReferenceAssemblies, Core, Core.Tests, Database, Database.Tests, CMS, and CMS.Tests.

    So, it's easy to do this by just hg init'ing in the project directories. But can it be done while preserving history? Or is there a better way to arrange this?

    Read the article

  • Starting with versioning mysql schemata without overkill. Good solutions?

    - by tharkun
    I've arrived at the point where I realise that I must start versioning my database schemata and changes. I consequently read the existing posts on SO about that topic, but I'm not sure how to proceed. I'm basically a one-man company, and not long ago I didn't even use version control for my code. I'm on a Windows environment, using Aptana (IDE) and SVN (with Tortoise). I work on PHP/MySQL projects. What's an efficient and sufficient (no overkill) way to version my database schemata? I do have a freelancer or two on some projects, but I don't expect a lot of branching and merging going on. So basically I would like to keep track of schemata concurrent with my code revisions.

    [edit] Momentary solution: for the moment I decided I will just make a schema dump, plus one with the necessary initial data, whenever I'm going to commit a tag (stable version). That seems to be just enough for me at the current stage. [/edit]

    [edit2] Plus I'm now also using a third file called increments.sql, where I put all the changes with dates, etc. to make it easy to trace the change history in one file. From time to time I integrate the changes into the two other files and empty increments.sql. [/edit2]
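
    Purely as an illustration of the increments.sql idea taken one small step further (not part of the original question), here is a Python sketch of a no-overkill migration runner: keep each change as a numbered file and apply the ones that haven't run yet. The file layout, database name, and the applied.txt ledger are invented, and it assumes the mysql command-line client is on the PATH.

        import pathlib
        import subprocess

        MIGRATIONS_DIR = pathlib.Path("db/increments")   # 001_add_users.sql, 002_..., etc.
        APPLIED_LEDGER = pathlib.Path("db/applied.txt")  # names of files already run
        DATABASE = "myproject"

        def applied():
            return set(APPLIED_LEDGER.read_text().split()) if APPLIED_LEDGER.exists() else set()

        def apply_pending():
            done = applied()
            for sql_file in sorted(MIGRATIONS_DIR.glob("*.sql")):
                if sql_file.name in done:
                    continue
                print("applying", sql_file.name)
                with sql_file.open() as f:
                    # equivalent to: mysql myproject < db/increments/NNN_xxx.sql
                    subprocess.run(["mysql", DATABASE], stdin=f, check=True)
                with APPLIED_LEDGER.open("a") as ledger:
                    ledger.write(sql_file.name + "\n")

        if __name__ == "__main__":
            apply_pending()

    The increment files and the ledger live in SVN next to the code, so the schema state travels with each tagged revision.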

    Read the article

  • What are practical guidelines for evaluating a language's "Turing Completeness"?

    - by AShelly
    I've read "what-is-turing-complete" and the wikipedia page, but I'm less interested in a formal proof than in the practical implications of being Turing Complete. What I'm actually trying to decide is if the toy language I've just designed could be used as a general-purpose language. I know I can prove it is if I can write a Turing machine with it. But I don't want to go through that exercise until I'm fairly certain of success. Is there a minimum set of features without which Turing Completeness is impossible? Is there a set of features which virtually guarantees completeness? (My guess is that conditional branching and a readable/writeable memory store will get me most of the way there) EDIT: I think I've gone off on a tangent by saying "Turing Complete". I'm trying to guess with reasonable confidence that a newly invented language with a certain feature set (or alternately, a VM with a certain instruction set) would be able to compute anything worth computing. I know proving you can building a Turing machine with it is one way, but not the only way. What I was hoping for was a set of guidelines like: "if it can do X,Y,and Z, it can probably do anything".

    Read the article

  • How is a relative JMP (x86) implemented in an Assembler?

    - by Pindatjuh
    While building my assembler for the x86 platform I encountered some problems with encoding the JMP instruction:

        enc      inst        size in bytes
        EB cb    JMP rel8    2
        E9 cw    JMP rel16   4  (because of the 0x66 16-bit prefix)
        E9 cd    JMP rel32   5
        ...

    (from my favourite x86 instruction website, http://siyobik.info/index.php?module=x86&id=147)

    All are relative jumps, where the size of each encoding (operation + operand) is in the third column. Now my original (and thus faulty) design reserved the maximum space (5 bytes) for each instruction. The operand is not yet known, because it's a jump to a yet unknown location. So I've implemented a "rewrite" mechanism that rewrites the operands in the correct location in memory once the location of the jump is known, and fills the rest with NOPs. This is a somewhat serious concern in tight loops.

    Now my problem is with the following situation:

        b:  XXX
        c:  JMP a
        e:  XXX
            ...
            XXX
        d:  JMP b
        a:  XXX

    (where XXX is any instruction, depending on the to-be-assembled program)

    The problem is that I want the smallest possible encoding for a JMP instruction (and no NOP filling). I have to know the size of the instruction at c before I can calculate the relative distance between a and b for the operand at d. The same applies for the JMP at c: it needs to know the size of d before it can calculate the relative distance between e and a. How do existing assemblers implement this, or how would you implement this?

    This is what I am thinking, which solves the problem: first encode all the instructions to opcodes between the JMP and its target, and if this region contains a variable-sized opcode, use the maximum size, i.e. 5 for JMP. Then, in some conditions, the JMP is oversized (because it may fit in a smaller encoding), so another pass will search for oversized JMPs, shrink them, and move all following instructions up; absolute branching instructions (i.e. external CALLs) are set after this pass is completed. I wonder, perhaps this is an over-engineered solution, which is why I ask this question.
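
    For illustration only (not the asker's code), a Python sketch of the usual fixed-point approach, often called branch relaxation: start every jump in its short form, then repeatedly widen any jump whose displacement no longer fits in rel8, until nothing changes. The toy instruction model and names are made up.

        SHORT, NEAR = 2, 5           # JMP rel8 = 2 bytes, JMP rel32 = 5 bytes

        def relax(instrs):
            """instrs: list of dicts like {'label': 'c', 'jmp_to': 'a'} or {'size': 1}."""
            # every jump starts optimistic (short form)
            for ins in instrs:
                if "jmp_to" in ins:
                    ins["size"] = SHORT
            changed = True
            while changed:
                changed = False
                # recompute offsets from the current sizes
                offsets, pos = {}, 0
                for ins in instrs:
                    offsets[id(ins)] = pos
                    if "label" in ins:
                        offsets[ins["label"]] = pos
                    pos += ins["size"]
                # widen any short jump whose rel8 displacement overflows
                for ins in instrs:
                    if "jmp_to" in ins and ins["size"] == SHORT:
                        disp = offsets[ins["jmp_to"]] - (offsets[id(ins)] + SHORT)
                        if not -128 <= disp <= 127:
                            ins["size"] = NEAR
                            changed = True
            return instrs

        # toy program: c jumps forward to a, d jumps backward to b, with filler in between
        prog = ([{"label": "b", "size": 1}, {"label": "c", "jmp_to": "a"}]
                + [{"size": 1} for _ in range(200)]          # 200 one-byte instructions
                + [{"label": "d", "jmp_to": "b"}, {"label": "a", "size": 1}])
        relax(prog)
        print([ins["size"] for ins in prog if "jmp_to" in ins])   # both widen to 5 (NEAR)

    Because jumps only ever grow, the loop terminates, and starting from the short form tends to give a smaller result than starting from the long form and shrinking.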

    Read the article

  • How should a programmer go about getting started with Flash/Flex/ActionScript?

    - by Graphics Noob
    What is the shortest path from zero (i.e. no Flash-related development software on my computer, or information about where to obtain it or get started) to running a "hello world" ActionScript? I'm hoping for an answer that gives step-by-step instructions about exactly what software is needed to get started, an example of some "hello world" code, and instructions for compiling and running the code. I've spent more time than I think should be necessary researching this question and not found much information. Hopefully this question will be found by programmers like me who want to get started with Flash/Flex/ActionScript (after my morning of researching I still don't even know what terminology to use, so I'll just throw it all out there). The ActionScript tutorials I've found are focused on programming concepts, i.e. logic, branching, OOP, etc., and some even have code examples to download, but not a single one I've found explains how to compile and run the code. They all seem to assume you have an IDE standing by but no knowledge of programming - exactly the opposite of the position I'm in. Here are the most related SO questions I've found:

        http://stackoverflow.com/questions/59083/what-is-adobe-flex-is-it-just-flash-ii
        http://stackoverflow.com/questions/554899/getting-started-with-flex-3-ui-actionscript-programming
        http://stackoverflow.com/questions/2123105/how-to-learn-flex

    Read the article

  • How do I create a new project in TFS from an existing project (breaking history)?

    - by Lindsay
    My team is taking over a project from a previous team. We use a different TFS server than the original team, and we are also not interested in keeping the history of the project because we are accepting the latest version of the code as the beginning of our history with the project. Branching is not an option since we want to start our history from the current version of the code. We just want a fresh project with the existing code. I have not been able to create the new project from the old code successfully. I keep getting an error: "Source control cannot add the solution: Solution would span multiple workspaces".

    My process for attempting the new project creation:

    1. Create a workspace for the previous team's version of the code.
    2. Get the latest version of that code into the local mapped workspace directory.
    3. Open the solution.
    4. Unbind all projects and the solution.
    5. Close the solution.
    6. Create a workspace for the new version of the code on our TFS server.
    7. Copy the unbound code on my local box to the new local workspace mapped folder.
    8. Open the solution from the new directory.
    9. "Add to source control" from the new solution.

    Then I get the error. I have tried removing the TFS security files out of the code directories in the unbound version, and tried changing source control instead of adding to source control (but it just binds back to the original instead of letting me bind to the new). Is there any other way to do this besides recreating the solution/projects and adding back all the files and references? It doesn't seem like it should be this difficult... Any advice much appreciated!

    Read the article

  • Mercurial central server file discrepancy (using 'diff to local')

    - by David Montgomery
    Newbie alert! OK, I have a working central Mercurial repository that I've been working with for several weeks. Everything has been great until I hit a really bizarre problem: my central server doesn't seem to be synced to itself? I only have one file that seems to be out of sync right now, but I really need to know how this happened to prevent it from happening in the future. Scenario:

    1) Created the Mercurial repository on the server using an existing project directory. The directory contained the file 'mypage.aspx'.
    2) On my workstation, I cloned the central repository.
    3) I made an edit to mypage.aspx.
    4) hg commit, then hg push from my workstation to the central server.
    5) Now if I look at mypage.aspx in the server's repository using TortoiseHg's repository explorer, I see the change history for mypage.aspx -- an initial check-in and one edit. However, when I select 'Diff to local', it shows the current version on the server's disk is the original version, not the edited version!

    I have not experimented with branching at all yet, so I'm sure I'm not getting a branch problem. 'hg status' on the server or client returns no pending changes. If I create a clone of the server's repository to a new location, I see the same change history as I would expect, but the file on disk doesn't contain my edit. So, to recap:

        Central repository   = original file, but shows change in revision history (bad)
        Local repository 'A' = updated file, shows change in revision history (good)
        Local repository 'B' = original file, but shows change in revision history (bad)

    Help please! Thanks, David

    Read the article

  • Determining what action an NPC will take, when it is partially random but influenced by preferences?

    - by lala
    I want to make characters in a game perform actions that are partially random but also influenced by preferences. For instance, if a character feels angry they have a higher chance of yelling than telling a joke. So I'm thinking about how to determine which action the character will take. Here are the ideas that have come to me.

    Solution #1: Iterate over every possible action. For each action do a random roll, then add the preference value to that random number. The action with the highest value is the one the character takes.

    Solution #2: Assign a range of numbers to an action, with more likely actions having a wider range. So, if the random roll returns anywhere from 1-5, the character will tell a joke. If it returns 6-75, they will yell. And so on.

    Solution #3: Group all the actions and make a branching tree. Will they take a friendly action or a hostile action? The random roll (with preference values added) says hostile. Will they make a physical attack or verbal? The random roll says verbal. Keep going down the line until you reach the action.

    Solution #1 is the simplest, but hardly efficient. I think Solution #3 is a little more complicated, but isn't it more efficient? Does anyone have any more insight into this particular problem? Is #3 the best solution? Is there a better solution?
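
    As an illustration of Solution #2 (not from the question), here is a minimal Python sketch where each action gets a weight from the character's mood; random.choices effectively does the "assign a wider range to likelier actions" step for you. The mood table and numbers are invented.

        import random

        def pick_action(weights, rng=random):
            """weights: dict of action -> non-negative weight."""
            actions = list(weights)
            return rng.choices(actions, weights=[weights[a] for a in actions], k=1)[0]

        angry_npc = {"yell": 70, "tell_joke": 5, "sulk": 25}
        print(pick_action(angry_npc))

        # Rough check of the distribution: yelling should dominate.
        counts = {a: 0 for a in angry_npc}
        for _ in range(10_000):
            counts[pick_action(angry_npc)] += 1
        print(counts)

    Solution #3 is essentially this same weighted pick applied recursively at each node of the tree, which mostly pays off when there are many actions or the preferences naturally group into categories.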

    Read the article

  • Web development scheme for staging and production servers using Git Push

    - by ServAce85
    I am using git to manage a dynamic website (PHP + MySQL), and I want to send my files from my localhost to my staging and production servers in the most efficient and hassle-free way. I am currently convinced that the best way for me to approach this problem is to use this git branching model to organize my local git repo. From there, I will use the release branches to push to my staging server for testing. Once I am happy that the release code works on the staging server, I can then merge with my master branch and push that to my production server.

    Pushing to Staging Server: As noted in many introductory git posts, I could run into problems pushing into a non-bare repo, so, as suggested in this response, I plan to push the release branch to a bare repo on the server and have a post-receive hook that clones the bare repo to a non-bare repo that also acts as the web-hosted directory.

    Pushing to Production Server: Here's my newest source of confusion... In the response that I cited above, it made me curious as to why @Paul states that it's a completely different story when pushing to a live, development server. I guess I don't see the problem. Would it be safe and hassle-free to follow the same steps as above, but for the master branch? Where are the potential pitfalls?

    Config Files: With respect to configuration files that are unique to each environment (.htaccess, config.php, etc.), it seems simplest to .gitignore each of those files in their respective repos on their respective servers. Can you see anything immediately wrong with this? Better solutions?

    Accessing Data: Finally, as I initially stated, the site uses MySQL databases to store data. How would you suggest I access that data (for testing purposes) from the staging server and localhost?

    I realize that I may have asked way too many questions for a single post, but since they're all related to the best way to set up this development scheme, I thought it was necessary.
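
    As a sketch only: a hypothetical post-receive hook for the bare staging repo, written in Python (any executable works). Instead of cloning, it checks the pushed release branch out into the web directory, which is a common variant of the same idea; all paths and the branch name are assumptions. The script goes in hooks/post-receive inside the bare repository and must be executable.

        #!/usr/bin/env python3
        import subprocess
        import sys

        GIT_DIR = "/srv/git/site.git"        # the bare repo that receives the push
        WORK_TREE = "/var/www/staging"       # the directory the web server serves
        DEPLOY_REF = "refs/heads/release"

        def main():
            # git feeds one "<old-sha> <new-sha> <refname>" line per updated ref on stdin
            for line in sys.stdin:
                old, new, refname = line.split()
                if refname != DEPLOY_REF:
                    continue
                subprocess.run(
                    ["git", "--git-dir", GIT_DIR, "--work-tree", WORK_TREE,
                     "checkout", "-f", "release"],
                    check=True,
                )
                print("deployed {} to {}".format(new[:7], WORK_TREE))

        if __name__ == "__main__":
            main()

    Since the per-environment config files are untracked on the server, a forced checkout of tracked files leaves them in place, which fits the .gitignore approach above.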

    Read the article

  • What is an elegant way to set up a leiningen project that requires different dependencies based on the build platform?

    - by Savanni D'Gerinel
    In order to do some multi-platform GUI development, I have just switched from GTK + Clojure (because it looks like the Java bindings for GTK never got ported to Windows) to SWT + Clojure. So far, so good, in that I have gotten an uberjar built for Linux. The catch, though, is that I want to build an uberjar for Windows, and I am trying to figure out a clean way to manage the project.clj file. At first, I thought I would set the classpath to point to the SWT libraries and then build the uberjar. This would require that I set a classpath to the SWT libraries before running the jar, but I would likely need a launcher script anyway. However, leiningen seems to ignore the classpath in this instance, because it always reports that

    Currently, project.clj looks like this for me:

        (defproject alyra.mana-punk/character "1.0.0-SNAPSHOT"
          :description "FIXME: write"
          :dependencies [[org.clojure/clojure "1.2.0"]
                         [org.clojure/clojure-contrib "1.2.0"]
                         [org.eclipse/swt-gtk-linux-x86 "3.5.2"]]
          :main alyra.mana-punk.character.core)

    The relevant line is the org.eclipse/swt-gtk-linux-x86 line. If I want to make an uberjar for Windows, I have to depend on org.eclipse/swt-win32-win32-x86, and another one for x86-64, and so on and so forth. My current solution is to simply create a separate branch for each build environment, with a different project.clj in each. This seems kinda like using a semi to deliver a single gallon of milk, but I am using bazaar for version control, so branching and repeated integrations are easy. Maybe the better way is to have a project.linux.clj, project.win32.clj, etc., but I do not see any way to tell leiningen which project descriptor to use. What are other (preferably more elegant) ways to set up such an environment?

    Read the article

  • Best source control system for maintaining different versions

    - by dalecooper
    Hi all! We need to be able to simultaneously maintain a set of different versions of our system. I assume this is best done using branching. We currently use TFS2008 for source control, work items and automatic builds. What is the best version control solution for this task? Our organization is in the process of migrating to TFS2010. Will TFS2010 give us the functionality we need to easily manage a series of branches per system version? We need to be able to keep each version isolated from the others, so that we can do test deployment for each version. Our dev team consists of 5 .net developers and two Flash developers. I have heard a lot of talk about Git. Should we consider using Git instead of TFS for source control? Is it possible to use TFS2010 together with Git? Does anyone have similar setups that work nicely? Any suggestions are appreciated! Thanks, Kjetil.

    Read the article

  • Aggregate path counts using HierarchyID

    - by austincav
    Business problem: understand process fallout using analytics data. Here is what we have done so far:

    1. Build a dictionary table with every possible process step.
    2. Find each process "start".
    3. Find the last step for each start.
    4. Join the dictionary table to the last step to find the path to the final step.

    In the final report output we end up with a list of paths for each start to each final step:

        User    Fallout Step HierarchyID.ToString()
        A       1/1/1
        B       1/1/1/1/1
        C       1/1/1/1
        D       1/1/1
        E       1/1

    What this means is that five users (A-E) started the process. Assume only user B finished; the other four did not. Since this is a simple example (without branching), we want the output to look as follows:

        Step    Unique Users
        1       5
        2       5
        3       4
        4       2
        5       1

    The easiest solution I could think of is to take each hierarchyID.ToString(), parse that out into a set of subpaths, JOIN back to the dictionary table, and output using GROUP BY. Given the volume of data, I'd like to use the built-in HierarchyID functions, e.g. IsAncestorOf. Any ideas or thoughts how I could write this? Maybe a recursive CTE?
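
    Not part of the question, but to make the "parse each ToString() into subpaths" idea concrete, here is a small Python sketch with the sample data above; in T-SQL the same shape can be written as a recursive CTE that repeatedly applies GetAncestor(1) and then groups by the resulting ancestor node. All variable names are illustrative.

        from collections import defaultdict

        final_paths = {"A": "1/1/1", "B": "1/1/1/1/1", "C": "1/1/1/1", "D": "1/1/1", "E": "1/1"}

        users_per_step = defaultdict(set)
        for user, path in final_paths.items():
            parts = path.strip("/").split("/")
            for depth in range(1, len(parts) + 1):
                prefix = "/".join(parts[:depth])          # e.g. '1', '1/1', '1/1/1', ...
                users_per_step[depth].add(user)           # or key by prefix once paths branch

        for step in sorted(users_per_step):
            print(step, len(users_per_step[step]))
        # prints 1 5, 2 5, 3 4, 4 2, 5 1 -- matching the desired report

    With no branching, the depth alone is enough as the grouping key; once paths branch, grouping by the prefix (the ancestor path itself) gives per-path counts instead.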

    Read the article

  • R: Are there any alternatives to loops for subsetting from an optimization standpoint?

    - by Adam
    A recurring analysis paradigm I encounter in my research is the need to subset based on all the different group id values, performing statistical analysis on each group in turn, and putting the results in an output matrix for further processing/summarizing. How I typically do this in R is something like the following:

        data.mat <- read.csv("...")
        groupids <- unique(data.mat$ID)  # Assume there are then 100 unique groups

        results <- matrix(rep("NA",300), ncol=3, nrow=100)

        for(i in 1:100) {
          tempmat <- subset(data.mat, ID==groupids[i])

          # Run various stats on tempmat (correlations, regressions, etc), checking to
          # make sure this specific group doesn't have NAs in the variables I'm using,
          # and assign results to x, y, and z, for example.

          results[i,1] <- x
          results[i,2] <- y
          results[i,3] <- z
        }

    This ends up working for me, but depending on the size of the data and the number of groups I'm working with, this can take up to three days. Besides branching out into parallel processing, is there any "trick" for making something like this run faster? For instance, converting the loops into something else (something like an apply call with a function containing the stats I want to run inside the loop), or eliminating the need to actually assign the subset of data to a variable?

    Read the article
