Search Results

Search found 23949 results on 958 pages for 'test test'.

  • How to deal with a fellow programmer who delegates her tasks, and a boss who offers no support [closed]

    - by Rudy
    I have a problem with a fellow programmer. We are currently working together on a small project that has to ship every two weeks. She tends to ask for help with every issue she faces, whether it's a compile error, an algorithm problem, or even a sync/merge issue she caused herself. She doesn't bother to check Google or try to work anything out on her own; I can be asked to help her 5-10 times a day. Her husband calls her 4-6 times every day, and most of the code she delivers is actually incorrect.

    Today she blamed me for shipping the wrong deliverable. On delivery day she went home after lunch without telling the PM or the rest of the team, and the code she had committed did not work at all; it wasn't even tested. I had no choice but to roll back and clean up her code just to get the product running. I have warned her about her defective code for almost three iterations now. She said that when she isn't around, I should test her module for her. I snapped, yelled that I am not her slave, and reported it directly to my boss.

    However, my boss is not someone who manages for, or cares about, software quality. What matters most to him is delivery of the product, tested or not. He will even ask us to deliver something that QA hasn't touched to the client, on the next day. Most of our suggestions go unheeded, and he even asked me to apologize to her because I snapped.

    I am tired of the whole situation; this kind of thing keeps repeating. I have enough savings to survive for six months, and the idea of resigning keeps haunting me. There is nothing left to learn in my current job, and I have worked in better environments than this. What should I do about the situation?

  • Android Array Lag?

    - by Mike
    I am making a platform game for Android. It is a tile-based game; I have added bullets, enemies with AI, and a number of tile types. I created a simple map with no enemies, and everything ran smoothly until I shot a few hundred bullets at random. After that, the FPS dropped. To test whether the bullets were the problem, I made another simple map with just a single tile to stand on and left it alone for a while. Minutes later I played around with it a bit to check the FPS, and it hadn't changed. Then I reloaded the same map and shot a lot of bullets. Minutes later, the FPS was visibly lower, even after the number of live bullets was back to zero.

    Points to note:

    - The programmed FPS is 30.
    - Tested on a Samsung Galaxy Y and a Samsung Galaxy W.
    - Any tile, enemy, or bullet that is off screen is not drawn, to prevent lag.
    - Bullets collide with tiles (if they don't collide within 450 frames, they are removed from the array).
    - I used List<Bullet> bullets = new ArrayList<Bullet>();
    - I add bullets with bullets.add(new Bullet(x, y, params...));
    - I remove them with for(...){ if(...){ bullets.remove(i); } }

    Code for drawing the bullets:

        private void drawBullets(Canvas canvas) {
            for (int i = 0; i < bullets.size(); i++) {
                Bullet b = bullets.get(i);
                b.update(canvas); // updates physics
                if (b.t > blm) { // if the bullet is past its expiry
                    bullets.remove(i);
                    i--;
                } else if (svx(b.x) > 0 && svx(b.x) < width
                        && svy(b.y) > 0 && svy(b.y) < height) { // if the bullet is not off screen
                    b.draw(canvas); // draw the bullet
                }
            }
        }

    I tried searching for solutions and references, but I had no luck. I'm guessing the lag has something to do with the array and the Bullet objects I've loaded, but I'm not sure. Someone please help! Thanks in advance! :)
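
    One likely culprit, given that the frame rate stays low even after the bullet count returns to zero, is garbage-collection pressure: allocating a new Bullet for every shot, and dropping removed ones on the floor, forces the GC to run during gameplay. A minimal object-pool sketch is below; the no-arg Bullet constructor and the reset(...) method are hypothetical stand-ins for whatever the game actually uses:

        import java.util.ArrayDeque;
        import java.util.Deque;

        // Recycles dead Bullet instances instead of allocating new ones,
        // so firing bullets no longer churns the garbage collector.
        public class BulletPool {
            private final Deque<Bullet> free = new ArrayDeque<Bullet>();

            // Take a bullet from the pool, or allocate one if the pool is empty.
            public Bullet obtain(float x, float y) {
                Bullet b = free.isEmpty() ? new Bullet() : free.pop();
                b.reset(x, y); // hypothetical: re-initialize position, velocity, expiry timer
                return b;
            }

            // Hand a bullet back when it expires or leaves the screen.
            public void release(Bullet b) {
                free.push(b);
            }
        }

    In drawBullets, the removal branch would call release(b) just before bullets.remove(i); the list shrinks exactly as before, but the objects are reused instead of becoming garbage.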

  • SDL - Getting a single keypress event instead of a keystate?

    - by MrKatSwordfish
    Right now I'm working on a simple SDL project, but I've hit an issue trying to use a single keypress event to skip past a splash screen. There are four start-up splash screens that I would like to skip with a single press of any key. My issue is that, as of now, if I hold down a key, it skips through every splash screen to the very last one immediately.

    The splash screens are stored as an array of SDL surfaces, all loaded when the state is initialized. A variable called currentSplashImage controls which element of the array is rendered on the screen. I've set it up so that whenever there's an SDL_KEYDOWN event, it triggers a single increment of the currentSplashImage variable. So I'm really not sure why my code isn't working correctly. For some reason, when I hold down a button, it seems to treat the held button as a new key press event on every tick through the code. Does anyone know how I can go about fixing this issue?

    [Here's a snippet of code that I've been using...]

        void SplashScreenState::handleEvents() {
            SDL_PollEvent( &localEvent );
            if ( localEvent.type == SDL_KEYDOWN ) {
                if ( currentSplashImage < 3 && currentSplashImage >= 0 ) {
                    currentSplashImage++;
                }
            } else if ( localEvent.type == SDL_QUIT ) {
                smgaEngine.setRunning(false);
            }
        }

    I should also mention that the SDL_Event 'localEvent' is part of the GameState parent class, while this event-handling code is part of a SplashScreenState subclass. If anyone knows why this is happening, or if there is any way to improve my code, it'd be helpful to me! :D I'm still a very new programmer, trying to learn.

    UPDATE: I added a std::cout line to confirm that the code runs multiple times for a single KEYDOWN event. I also tried disabling key repeat with SDL_EnableKeyRepeat, but it didn't fix the issue.

        void SplashScreenState::handleEvents() {
            SDL_PollEvent( &localEvent );
            if ( localEvent.type == SDL_KEYDOWN ) {
                if ( currentSplashImage < 3 && currentSplashImage >= 0 ) {
                    currentSplashImage++;
                    std::cout << "KEYDOWN.."; // <---- test cout line
                }
            } else if ( localEvent.type == SDL_QUIT ) {
                smgaEngine.setRunning(false);
            }
        }

    This prints out "KEYDOWN..KEYDOWN..KEYDOWN.." in the cout stream when a button is held.

  • How to create a JMS durable subscriber in WebLogic Server?

    - by lmestre
    WebLogic Server provides a set of examples that are very helpful for getting started with WebLogic Server. Here you can check how to install the examples:

        http://docs.oracle.com/cd/E23943_01/doc.1111/e14142/prepare.htm

    After you have installed the examples, you can find the example you want to review, in this case TopicReceive, here:

        wlserver_10.3/samples/server/examples/src/examples/jms/topic

    To review the details of the specific example, you can open:

        wlserver_10.3/samples/server/examples/src/examples/jms/topic/instructions.html

    To create a durable subscriber, you can just set the client ID and invoke createDurableSubscriber instead of calling createSubscriber, i.e.:

        tconFactory = (TopicConnectionFactory)
            PortableRemoteObject.narrow(ctx.lookup(JMS_FACTORY),
                                        TopicConnectionFactory.class);
        tcon = tconFactory.createTopicConnection();
        // Set the client ID for this durable subscriber
        tcon.setClientID("GT2");
        tsession = tcon.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);
        topic = (Topic)
            PortableRemoteObject.narrow(ctx.lookup(topicName),
                                        Topic.class);
        // Create the durable subscription
        tsubscriber = tsession.createDurableSubscriber(topic, "Test");
        tsubscriber.setMessageListener(this);
        tcon.start();

    Enjoy! You can read more about this here:

        http://docs.oracle.com/cd/E23943_01/web.1111/e13727/advpubsub.htm#CHDEBABC
        http://docs.oracle.com/cd/E23943_01/web.1111/e13727/manage_apps.htm#i1097671
        http://docs.oracle.com/cd/E23943_01/apirefs.1111/e13943/WebLogic.Messaging.ISession.CreateDurableSubscriber_overload_2.html
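
    One side note, not part of the original example: a durable subscription outlives its subscriber, so the server keeps storing messages for it until it is explicitly removed. A minimal teardown sketch using the standard JMS API:

        // Close the active subscriber first; JMS does not allow unsubscribing
        // while a subscriber for the durable subscription is still open.
        tsubscriber.close();
        // Remove the durable subscription by the name it was created with.
        tsession.unsubscribe("Test");
        tcon.close();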

  • Play Framework Plugin for NetBeans IDE

    - by Geertjan
    The start of minimal support for the Play Framework in NetBeans IDE 7.3 Beta would constitute (1) recognizing Play projects, (2) an action to run a Play project, and (3) classpath support. Well, most of that I've created already; for example, in the screenshots you can see logical views in the Projects window for Play projects (i.e., I can open all the samples that come with the Play distribution). Right-clicking a Play project lets you run it and, if the embedded browser is selected in the Options window, you can see the result in the IDE. Make a change to your code and refresh the browser, which immediately shows you your changes.

    What needs to be done, among other things:

    - A wizard for creating new Play projects, i.e., it would use the Play command line to create the application and then open it in the IDE.
    - Integration of everything available on the Play command line.
    - Maybe the logical view, i.e., what is shown in the Projects window, should be changed. Right now, only the folders "app" and "test" are shown there, with everything else accessible in the Files window, as can be seen in the screenshot above.
    - More work on the classpath, i.e., I've hardcoded a few things just to get things to work correctly.
    - An Options window extension to register the Play executable, instead of the current hardcoded solution.
    - Scala integrations, i.e., investigate if/how the NetBeans Scala plugin is helpful and, if not, create different/additional solutions. E.g., the HTML templates are partly in Scala, so Scala support needs to be embedded into HTML.
    - Hyperlinking in the "routes" file, as well as special support for the "application.conf" file.

    Anyone interested, especially if you're a Play fan (a "playboy"?), in joining me in working on this NetBeans plugin? I'll be uploading the sources to a java.net repository soon. It will be here, once it has been made publicly accessible: http://java.net/projects/nbplay/sources/nbplay

    A kind of cool detail is that the NetBeans plugin is based on Maven, which means that you could use any Maven-supporting IDE to work on this plugin.

  • Testing and Validation – You Really Do Have The Time

    - by BuckWoody
    One of the great advantages of my role as a Technical Specialist here at Microsoft is that I get to work with so many great clients. I get to see their environments, how they use them, and the way they work with SQL Server. I've been a data professional myself for many years. Over that time I've worked with many database platforms and lots of client applications, and written a lot of code in many industries. For a while I was also a consultant, so I got to see how other shops did things as well. But because I now focus on a "set" base of clients (over 500 professionals in over 150 companies), I get to see them over a longer period of time. Many of them help me understand how they use the product in their projects, and I even attend some regular DBA meetings. I see the ways the product succeeds, and I see when it fails.

    Something that has really impacted my way of thinking is the level of importance any given shop is able to place on testing and validation. I've always been a big proponent of setting up a test system and following a very disciplined regimen to make sure any new project will work in production, and then taking the lessons learned into production as standards. I know, I know: there's never enough time to do things right like this. Yet the shops I see that do it produce the same amount of work as the shops that don't. They just make the time to do the testing and validation and create a standard that they will follow in production. And what I've found (surprise, surprise) is that they have fewer production problems. OK, that might seem obvious, but I've actually tracked it, and the places that do the testing and follow best practices really do save stress, time, and trouble through that effort.

    We all think that's a good idea, but we just "don't have time". OK, but from what I'm seeing, you can gain time if you spend a little up front. You may find that you're already spending the same amount of time that you would spend on the testing; you're just doing it later, at night, under the gun. Food for thought.

  • while(true) and loop-breaking - anti-pattern?

    - by KeithS
    Consider the following code:

        public void doSomething(int input) {
            while(true) {
                TransformInSomeWay(input);
                if(ProcessingComplete(input))
                    break;
                DoSomethingElseTo(input);
            }
        }

    Assume that this process involves a finite but input-dependent number of steps; the loop is designed to terminate on its own as a result of the algorithm, and is not designed to run indefinitely (until cancelled by an outside event). Because the test to see if the loop should end is in the middle of a logical set of steps, the while loop itself currently doesn't check anything meaningful; the check is instead performed at the "proper" place within the conceptual algorithm.

    I was told that this is bad code, because it is more bug-prone due to the ending condition not being checked by the loop structure. It's more difficult to figure out how you'd exit the loop, and it could invite bugs, as the breaking condition might be bypassed or omitted accidentally given future changes.

    Now, the code could be structured as follows:

        public void doSomething(int input) {
            TransformInSomeWay(input);
            while(!ProcessingComplete(input)) {
                DoSomethingElseTo(input);
                TransformInSomeWay(input);
            }
        }

    However, this duplicates a call to a method in code, violating DRY; if TransformInSomeWay were later replaced with some other method, both calls would have to be found and changed (and the fact that there are two may be less obvious in a more complex piece of code).

    You could also write it like:

        public void doSomething(int input) {
            var complete = false;
            while(!complete) {
                TransformInSomeWay(input);
                complete = ProcessingComplete(input);
                if(!complete) {
                    DoSomethingElseTo(input);
                }
            }
        }

    ...but now you have a variable whose only purpose is to shift the condition-checking to the loop structure, and it also has to be checked multiple times to provide the same behavior as the original logic.

    For my part, I say that given the algorithm this code implements in the real world, the original code is the most readable. If you were going through it yourself, this is the way you'd think about it, and so it would be intuitive to people familiar with the algorithm.

    So, which is "better"? Is it better to give the responsibility of condition checking to the while loop by structuring the logic around the loop? Or is it better to structure the logic in a "natural" way as indicated by requirements or a conceptual description of the algorithm, even though that may mean bypassing the loop's built-in capabilities?

  • Working the Chart Percentages

    - by Tim Dexter
    Charting in BIP is such fun - well, sometimes it is. Not so much today, at least not for Ron in San Diego. He needed a horizontal bar chart showing values plotted for various test areas, with value labels at the end of the bars. Simple enough, right? The wrinkle: they were percentage values, so he needed to see '56%', not '56'! Still, it should be simple enough, but the percentage formatting requires your values to be in decimal format, i.e., 0.56, not 56.0 (56.0 gets formatted as 5600%). OK, so either pull the values out as decimals or use the div function to divide the values in the chart by 100, e.g.:

        <xsl:value-of select="myval div 100" />

    Now I can use the following in the chart XML to format the percentages as I need them:

        <Graph ... >
          ...
          <MarkerText visible="true">
            <Y1ViewFormat>
              <ViewFormat numberType="NUMTYPE_PERCENT" decimalDigit="0"
                          numberTypeUsed="true" leadingZeroUsed="true"
                          decimalDigitUsed="true"/>
            </Y1ViewFormat>
          </MarkerText>
          ...
        </Graph>

    That gets me the values shown the way I want, but the automatic axis formatting still labels the axis from 0 to 1. I now need to go in and add the formatting for the axis too:

        <Graph ...>
          ...
          <Y1Axis axisMinAutoScaled="false" axisMinValue="0.0"
                  axisMaxAutoScaled="false" axisMaxValue="1.0"
                  majorTickStepAutomatic="true">
            <ViewFormat numberType="NUMTYPE_PERCENT" decimalDigit="0"
                        scaleFactor="SCALEFACTOR_NONE" numberTypeUsed="true"
                        leadingZeroUsed="true" decimalDigitUsed="true"
                        scaleFactorUsed="true"/>
          </Y1Axis>

    Now I have a chart that shows the percentage values and formats the axis scale correctly for me too. You can of course mess with the attributes above to get more decimal points on your labels, etc. Happy charting!

  • Do OO, TDD, and Refactoring to Smaller Functions affect Speed of Code?

    - by Dennis
    In the Computer Science field, I have noticed a notable shift in thinking when it comes to programming. The advice as it stands now is:

    - write smaller, more testable code
    - refactor existing code into smaller and smaller chunks of code until most of your methods/functions are just a few lines long
    - write functions that only do one thing (which makes them smaller again)

    This is a change compared to the "old" or "bad" code practices, where you had methods spanning 2,500 lines and big classes doing everything.

    My question is this: when it all comes down to machine code, to 1s and 0s, to assembly instructions, should I be at all concerned that my class-separated code, with its variety of small-to-tiny functions, generates too much extra overhead? While I am not exactly familiar with how OO code and function calls are handled in assembly in the end, I do have some idea. I assume that each extra function call, object call, or include call (in some languages) generates an extra set of instructions, thereby increasing the code's volume and adding various overhead without adding actual "useful" code. I also imagine that good optimizations can be applied to the assembly before it actually runs on the hardware, but that optimization can only do so much. Hence, my question: how much overhead (in space and speed) does well-separated code (split up across hundreds of files, classes, and methods) actually introduce, compared to having "one big method that contains everything", due to this overhead?

    UPDATE for clarity: I am assuming that adding more and more functions and more and more objects and classes to the code will result in more and more parameter passing between smaller code pieces. It has been said somewhere (quote TBD) that up to 70% of all code is made up of the assembly MOV instruction: loading CPU registers with the proper variables, not doing the actual computation. In my case, you load up the CPU's time with PUSH/POP instructions to provide linkage and parameter passing between various pieces of code. The smaller you make your pieces of code, the more "linkage" overhead is required. I am concerned that this linkage adds to software bloat and slow-down, and I am wondering whether I should be concerned about this, and how much, if at all, because current and future generations of programmers who are building software for the next century will have to live with, and consume, software built using these practices.

    UPDATE: Multiple files. I am writing new code now that is slowly replacing old code. In particular, I've noted that one of the old classes was a ~3,000-line file (as mentioned earlier). Now it is becoming a set of 15-20 files located across various directories, including test files and not including the PHP framework I am using to bind some things together. More files are coming as well. When it comes to disk I/O, loading multiple files is slower than loading one large file. Of course not all files are loaded at once; they are loaded as needed, and disk- and memory-caching options exist. Yet I still believe that loading multiple files takes more processing than loading a single file into memory. I am adding that to my concern.
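
    A rough way to see where this overhead goes in practice: modern JIT compilers inline small, hot methods, so the decomposed version below typically ends up as the same machine code as the monolithic one. This is an illustrative sketch, not a benchmark (use a harness like JMH for real measurements), and the method names are invented for the example:

        public class InliningDemo {

            // Monolithic version: all the work happens in one method.
            static long sumOfSquaresInline(int n) {
                long total = 0;
                for (int i = 0; i < n; i++) {
                    total += (long) i * i; // the "work", done in place
                }
                return total;
            }

            // Decomposed version: the work is factored into a tiny helper.
            static long square(long i) {
                return i * i;
            }

            static long sumOfSquaresFactored(int n) {
                long total = 0;
                for (int i = 0; i < n; i++) {
                    total += square(i); // small method call on the hot path
                }
                return total;
            }

            public static void main(String[] args) {
                System.out.println(sumOfSquaresInline(1_000_000));
                System.out.println(sumOfSquaresFactored(1_000_000));
            }
        }

    On HotSpot, for example, small methods are inlined once the loop gets hot (the -XX:MaxInlineSize and -XX:FreqInlineSize flags control the thresholds), so the call-and-return linkage of square() disappears entirely. The cost of many small source files is likewise mostly a compile-time or load-time cost, not a steady-state runtime one.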

  • Is it dangerous for me to give some of my Model classes Control-like methods?

    - by Pureferret
    In my personal project I have tried to stick to MVC, but I've also been made aware that sticking to MVC too tightly can be a bad thing, as it makes writing awkward and forces the flow of the program in odd ways (i.e., some simple functions could be performed by something that normally wouldn't do them, avoiding MVC-related overhead). So I'm beginning to feel justified in this compromise: I have some 'manager' classes that 'own' data and have some ways to manipulate it; as such, I think they count as both part of the model and part of the control, and to me this feels more natural than keeping them separate.

    For instance, one of my managers is the PlayerCharacterManager, which has these methods:

        void buySkill(PlayerCharacter playerCharacter, Skill skill);
        void changeName();
        void changeRole();
        void restatCharacter();
        void addCharacterToGame();
        void createNewCharacter();
        PlayerCharacter getPlayerCharacter();
        List<PlayerCharacter> getPlayersCharacter(Player player);
        List<PlayerCharacter> getAllCharacters();

    I hope the method names are transparent enough that they don't all need explaining. I've called it a manager because it helps manage all of the PlayerCharacter 'model' objects the code creates, and it creates and keeps a map of them. I may also have it store other information in the future. I plan to have another two similar classes for this sort of control, but I will orchestrate when and how this happens, and what to do with the returned data, via a pure controller class. Splitting control up between informed managers and the controller, as opposed to operating through a controller alone, seems like it will simplify my code and make it flow better.

    My question is: is this a dangerous choice, in terms of making the code harder to follow/test/fix? Is this an established pattern, good, bad, or neutral? I couldn't find anything similar except the idea of Actors, but that's not quite what I'm trying to do.

    Edit: Perhaps an example is needed. I'm using the controller to update the view and access the data, so when I click the 'Add new character to a player' button, it calls methods in the controller that then tell the PlayerCharacterManager class to create a new character instance, call the PlayerManager class to add that new character to the player-character map, add this information to the database, and tell the view to update any GUIs affected. That is the sort of 'control sequence' I'm hoping to create with these manager classes. A sketch of this sequence follows below.
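
    A minimal sketch of that control sequence in Java; the view interface, the PlayerManager method names, and a createNewCharacter() that returns the new instance (the list above declares it void) are hypothetical fillers where the post doesn't pin them down:

        // The controller stays thin: it orchestrates managers and the view,
        // while the managers own the model objects and how they change.
        public class GameController {
            private final PlayerCharacterManager characterManager;
            private final PlayerManager playerManager;
            private final CharacterView view; // hypothetical view interface

            public GameController(PlayerCharacterManager characterManager,
                                  PlayerManager playerManager,
                                  CharacterView view) {
                this.characterManager = characterManager;
                this.playerManager = playerManager;
                this.view = view;
            }

            // Invoked by the 'Add new character to a player' button.
            public void onAddCharacter(Player player) {
                PlayerCharacter character = characterManager.createNewCharacter();
                playerManager.addCharacterToPlayer(player, character); // hypothetical
                view.refreshCharacterList(player);                     // hypothetical
            }
        }

    The testability worry then reduces to whether the managers stay honest model owners: the controller can be tested with stub managers, and each manager can be tested without any view at all.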

  • git workflow for separating commits

    - by gman
    Best practice with git (or any VCS, for that matter) is supposed to be to have each commit make the smallest change possible. But that doesn't match how I work at all.

    For example, I recently needed to add some code that checked whether the version of a plugin to my system matched the versions the system supports, and if not, to print a warning that the plugin probably requires a newer version of the system. While writing that code I decided I wanted the warnings to be colorized. I already had code that colorized error messages, so I edited that code. That code was in the startup module of one entry point to the system. The plugin-checking code was in another path that didn't use that entry point, so I moved the colorization code into a separate module so both entry points could use it. On top of that, in order to test that my plugin-checking code works, I needed to edit UI/UX code to make sure it tells the user "You need to upgrade." When all was said and done, I had edited 10 files, changed dependencies, made the 2 entry points both dependent on the colorization code, etc., etc. Being lazy, I'd probably just git add . && git commit -a the whole thing. Spending 10-15 minutes trying to manipulate all those changes into 3 to 6 smaller commits seems frustrating, which brings up the question:

    Are there workflows that work for you, or that make this process easier?

    I don't think I can somehow magically always modify things in the perfect order, since I don't know that order until after I start modifying and seeing what comes up. I know I can use git add --interactive, etc., but it seems, at least for me, kind of hard to know whether I'm grabbing exactly the correct changes so that each commit will actually work. Also, since the changes are sitting in the current directory, it doesn't seem like it would be easy to run tests on each commit to make sure it's going to work, short of stashing all the changes. And then, if I were to stash and run the tests, and I'd missed a few lines or accidentally added a few too many, I have no idea how I'd easily recover from that (as in, either grab the missing lines from the stash and then put the rest back, or take the few extra lines I shouldn't have grabbed and shove them into the stash for the next commit).

    Thoughts? Suggestions?

    PS: I hope this is an appropriate question. The help says development methodologies and processes.
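
    The git-stash manual documents a recipe for exactly this situation ("testing partial commits"): stage one logical change, stash everything else so the working tree matches the prospective commit, run the tests, commit, then pop the remainder back. As a sketch, with the test command standing in for whatever runs the project's suite:

        # Stage just the hunks belonging to one logical change.
        git add -p

        # Hide everything that is not staged; the working tree now
        # contains exactly what the commit will contain.
        git stash --keep-index

        # Run the tests against exactly this commit's contents.
        make test        # placeholder for the project's test runner

        git commit -m "One small, self-contained change"

        # Bring the remaining changes back and repeat for the next commit.
        git stash pop

    If the tests reveal that a hunk was missed, pop the stash, stage the missing hunk, and stash again, rather than hand-editing patches.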

  • Is it reasonable to insist on reproducing every defect before diagnosing and fixing it?

    - by amphibient
    I work for a software product company. We have large enterprise customers who implement our product, and we provide support to them. For example, if there is a defect, we provide patches, etc. In other words, it is a fairly typical setup.

    Recently, a ticket was issued and assigned to me regarding an exception that a customer found in a log file, one that has to do with concurrent database access in a clustered implementation of our product. So the specific configuration of this customer may well be critical to the occurrence of this bug. All we got from the customer was their log file.

    The approach I proposed to my team was to attempt to reproduce the bug in a configuration similar to that of the customer and get a comparable log. However, they disagree with my approach, saying that I should not need to reproduce the bug (as that is overly time-consuming and would require simulating a server cluster on VMs), and that I should simply "follow the code" to see where the thread- and/or transaction-unsafe code is, and make the change working off a simple local development environment, which is not a cluster implementation like the environment the bug originated in.

    To me, working from an abstract blueprint (program code) rather than a concrete, tangible, visible manifestation (a runtime reproduction) seems like a difficult working environment (for a person of normal cognitive abilities and attention span), so I wanted to ask a general question: is it reasonable to insist on reproducing every defect and debugging it before diagnosing and fixing it? Or: if I am a senior developer, should I be able to read (multithreaded) code and create a mental picture of what it does in all use-case scenarios, rather than needing to run the application, test different use-case scenarios hands-on, and step through the code line by line? Or am I a poor developer for demanding that kind of work environment? Is debugging for sissies?

    In my opinion, any fix submitted in response to an incident ticket should be tested in an environment simulated to be as close to the original as possible. How else can you know that it will really remedy the issue? It is like releasing a new model of a vehicle without crash-testing it with a dummy to demonstrate that the air bags indeed work.

    Last but not least, if you agree with me: how should I talk with my team to convince them that my approach is reasonable, conservative, and more bulletproof?

  • How do you feel about being asked to code during an interview?

    - by Mystere Man
    I have seen a lot of comments about good interview questions and puzzles that potential developers should be required to solve during the interview process. I have personally had several interviews in which the interviewer asked me to write some piece of code or solve a problem on the spot, and I have always performed very poorly in these "tests". The reason is simple: as a developer who spends my days talking to computers, I find I have to prepare myself and "switch gears" to be in "interview mode". I prepare myself to make a good impression. When I'm programming, I'm very focused and totally different from when I'm being "interpersonal". I just can't get into "the zone" when I'm also having to be a charming and witty potential employee.

    I feel that by asking a developer to prove his skills during an interview, all you're doing is finding out whether they can code under pressure, at the drop of a hat. It has almost no ability to determine how they would perform in a "real-life" development situation. Maybe, if you're looking for someone who can code and chat at the same time, I can see how that would be beneficial. But I think you overlook potential candidates who simply do not perform well in such an artificial environment. While I appreciate that a potential employer wants to see what I can do, I don't think an interview is the place for such a test. I mean, suppose a job for an over-the-road trucker required that you drive while being interviewed. How does that really end well?

    So I'm curious what others think about such situations. Have you failed interviews because you were not in the right frame of mind? Have you failed to make a good interpersonal impression because you were too distracted trying to solve the problem? If you're a hiring manager, or someone who gives interviews, do you even think about such things? Is it really important that someone perform well in an interview?

    EDIT: To clarify, I'm not against testing applicants. My concern is about testing during an interview. See also: "What are the pros and cons for the employer of code questions during an interview?", which looks at this from the interviewer's point of view.

  • When a co-worker asks you to teach him what you know, do you share the information or keep it to yourself? [closed]

    - by Chuck
    I am the only developer/DBA in a small IT department. There is another guy who can do it, but he's more of a backup, as he spends his time working on IT support stuff.

    Anyway, we have a new hire, and I've been training him on the IT support side of things. It seems like he is eager to learn and be productive, but nobody is going out of their way to show him anything. He's been asking me to teach him database design, SQL, etc. For some reason, the boss has him working with me and is also sending him to the meetings that I go to, yet he hasn't said outright that I have to teach him anything. Meanwhile, the boss insists on doing a lot of the support work himself (i.e., he hoards information and doesn't delegate to anyone).

    I'm a little bit on the fence. First, the new guy doesn't yet have a strong foundation in the IT support functions, which is where we really need help at this time. Second, I paid thousands of dollars for classes and spent many hours learning this stuff. Is it my responsibility to teach others skills that I had to learn on my own? Others here really aren't quick to share information, so I'm not sure that I should be either, in this environment. I do know that if I get him involved and started on projects, then I'd be responsible for his mistakes; I had to take the heat when the other guy made mistakes. On the other hand, the guy wants to learn something, is motivated, and I don't want to stop him. We've had our share of slackers in the group, and it's nice to have someone who is willing to work, for a change.

    So what would you do? Would you teach him the skills that you spent all of that time learning? Set him up with a test database on his PC and recommend some books for him? Encourage him to get a strong foundation in IT support first and ask again later? We haven't had a new hire in years, let alone one who is interested in what I do, so this is new to me.

  • From 20,663 issues to 1 issue - style-copping C5.Tests

    - by TATWORTH
    Originally posted on: http://geekswithblogs.net/TATWORTH/archive/2014/05/28/from-20663-issues-to-1-issuendashstyle-copping-c5.tests.aspx

    I recently became interested in the potential of the C5 Collections solution from http://www.itu.dk/research/c5/. However, I was dismayed at the state of the code in the unit test project, so I set about fixing the 20,663 issues detected by StyleCop. The tools I used were the latest versions of:

    - My 64-bit development PC running Windows 8 Update with 8GB RAM
    - Visual Studio 2013 Ultimate with SP2
    - ReSharper
    - GhostDoc Pro

    My first attempt had to be abandoned due to a collision of class names which broke one of the unit tests. So, being aware of this duplication of class names, I started again and planned to prepend the class names with the namespace name. In some cases I additionally prepended the name of the C5 collection item being tested.

    So what was the condition of the code at the start? Besides the sprawl of C# code not written to StyleCop standard, there was:

    1) Many classes placed within one physical file.
    2) Namespaces within namespaces that did not follow the project structure.
    3) As already mentioned, duplication of class names across namespaces.
    4) A sprawling copyright notice that had to be preserved.
    5) Project sub-folders that were all lower case instead of initial-letter capitalised.

    The first step was to add a StyleCop heading, plus the original heading contained within a region, to every file. The next step was to run GhostDoc Pro using its "Document File" option on every file, without letting it replace the headers I had added. This brought the number of issues down to 18,192. I then went through each file, collapsing each class and prepending names as appropriate. At each step, I saved the changes to my local Git. The next step was to move each class to its own file and style-cop each file. ReSharper provides a very useful feature for doing this, which also fixes missing "this." prefixes and moves using statements inside the namespace. Some classes required minimal work, whereas others required extensive work to reach the StyleCop standard. The unit tests were run at each split and when each class was completed.

    When all was done, one issue remained, which I will need to submit to the StyleCop team for their advice (and possibly a fix to StyleCop). The updated solution has been made available at https://c5stylecopped.codeplex.com/releases/view/122785.

  • Physics Engine [Collision Response, 2-dimensional] experts, help!! My stack is unstable!

    - by Register Sole
    Previously, I struggled with the sequential impulse-based method I developed. Thanks to jedediah referring me to this paper, I managed to rebuild the code and implement the simultaneous impulse-based method with a Projected Gauss-Seidel (PGS) iterative solver, as described by Erin Catto (mentioned in the references of the paper as [Catt05]).

    So here's how it currently stands:

    - The simulation handles 2-dimensional rotating convex polygons.
    - Detection uses the separating-axis test, with a SKIN, meaning the closest points between two polygons are found and checked to see whether their distance is less than SKIN.
    - To resolve collisions, the simultaneous impulse-based method is used, solved with the iterative PGS solver as in Erin Catto's paper.
    - Error correction is implemented using Baumgarte stabilization (you can refer to either paper for this): J V = (beta/dt) * overlap, where J is the Jacobian of the constraints, V the matrix containing the velocities of the bodies, beta an error-correction parameter that should be < 1, dt the time step taken by the engine, and overlap the true overlap between the bodies (so SKIN is ignored).

    However, it is still less stable than I expected. I tried to stack hexagons (or squares; it doesn't really matter), and even with only 4 to 5 of them, they hardly stand still! Also note that I am not looking for a sleeping scheme. But I would settle for an explicit scheme to handle resting contacts. That said, I would be more than happy if you have a way of treating it generally (as continuous collision, instead of explicitly as a special state).

    Ideas I have: adding a damping term (proportional to velocity) to the Baumgarte term. Is this a good idea in general? If not, I don't want to waste my time trying to tune the parameter hoping it magically works.

    Ideas I have tried: using simultaneous position-based error correction as described in section 5.3.2 of the paper; it turned out to be worse than the current scheme.

    If you want to know the parameters I used:

    - Hexagons, side 50 (pixels)
    - gravity 2400 (pixels/sec^2)
    - time step 1/60 (sec)
    - beta 0.1
    - restitution 0 to 0.2
    - coefficient of friction 0.2
    - PGS iterations 10
    - initial separation 10 (pixels)
    - mass 1 (the unit is irrelevant for now, since I modify velocity directly <- impulse method)
    - inertia 1/1000

    Thanks in advance! I really appreciate any help from you guys!! :)
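
    One detail worth checking against Catto's formulation (an observation drawn from Box2D's implementation, not from the original post): production solvers only feed back the penetration beyond a small allowed "slop", i.e. in the notation above

        bias velocity = (beta / dt) * max(0, overlap - slop)

    with slop a small tolerance (Box2D uses 0.005 m at its scale; at this simulation's pixel scale that would be on the order of a pixel, to be tuned). Without the slop term, the Baumgarte bias keeps injecting velocity into contacts that are merely resting, which shows up as exactly the kind of jitter that makes a 4-5 body stack wander apart.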

  • Using extension methods to decrease the surface area of a C# interface

    - by brian_ritchie
    An interface defines a contract to be implemented by one or more classes. One of the keys to a well-designed interface is defining a very specific range of functionality. The profile of the interface should be limited to a single purpose and should have the minimum methods required to implement this functionality. Keeping the interface tight will keep those implementing the interface from getting lazy and not implementing it properly. I've seen too many overly broad interfaces that aren't fully implemented by developers. Instead, they just throw a NotImplementedException for the methods they didn't implement.

    One way to help with this issue is to use extension methods to move overloaded method definitions outside of the interface. Consider the following example:

        public interface IFileTransfer
        {
            void SendFile(Stream stream, Uri destination);
        }

        public static class IFileTransferExtension
        {
            public static void SendFile(this IFileTransfer transfer,
                string Filename, Uri destination)
            {
                using (var fs = File.OpenRead(Filename))
                {
                    transfer.SendFile(fs, destination);
                }
            }
        }

        public static class TestIFileTransfer
        {
            static void Main()
            {
                IFileTransfer transfer = new FTPFileTransfer("user", "pass");
                transfer.SendFile(filename, new Uri("ftp://ftp.test.com"));
            }
        }

    In this example, you may have a number of overloads that use different mechanisms for specifying the source file. The great part is that you don't need to implement these methods on each of your derived classes. This gives you a better interface and better code reuse.
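
    For comparison (an addition, not from the original post): the same "keep the interface minimal, hang the conveniences off it externally" idea is available in Java through static helper overloads, since Java has no extension methods; the names below are illustrative:

        import java.io.FileInputStream;
        import java.io.IOException;
        import java.io.InputStream;
        import java.net.URI;

        // The interface stays minimal: one method, one purpose.
        interface FileTransfer {
            void sendFile(InputStream stream, URI destination) throws IOException;
        }

        // Convenience overloads live outside the interface, so implementors
        // only ever have one method to get right.
        final class FileTransfers {
            private FileTransfers() {}

            static void sendFile(FileTransfer transfer, String filename, URI destination)
                    throws IOException {
                try (InputStream in = new FileInputStream(filename)) {
                    transfer.sendFile(in, destination);
                }
            }
        }

    The call site reads FileTransfers.sendFile(transfer, name, dest) rather than the C# transfer.SendFile(name, dest), but the interface's surface area shrinks in exactly the same way.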

  • Normal Redundancy (Double Mirroring) Option Available

    - by TammyBednar
    The Oracle Database Appliance 2.4 patch was released last week and gives you the option of ASM normal redundancy (double mirroring) during the initial deployment of the Database Appliance. The default deployment of the Oracle Database Appliance is high redundancy for the +DATA and +RECO disk groups. While there is 12TB of raw shared storage available, the Database Backup Location and Disk Group Redundancy govern how much usable storage is presented after the initial deployment is completed.

    The Database Backup Location options are Local or External. When the Local backup option is selected, 60% of the available shared storage is allocated to the Fast Recovery Area, which contains database backups and archive logs. The External backup option allocates 20% of the available shared storage to the Fast Recovery Area.

    So, let's look at an example of High Redundancy with External Backups:

    - Disk Group Redundancy - High --> triple mirroring provides ~4TB of available storage
    - Database Backup Location - External --> 20% of available shared storage allocated to +RECO
    - +DATA = 3.2TB of usable storage, +RECO = 0.8TB of usable storage

    What about Normal Redundancy with External Backups?

    - Disk Group Redundancy - Normal --> double mirroring provides ~6TB of available storage
    - Database Backup Location - External --> 20% of available shared storage allocated to +RECO
    - +DATA = 4.8TB of usable storage, +RECO = 1.2TB of usable storage

    As a best practice, we recommend using normal redundancy for your test and/or development Oracle Database Appliances and high redundancy for production.
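
    Applying the same arithmetic to the Local backup option (a worked extrapolation from the 60% figure above, not numbers quoted from Oracle):

        Normal redundancy: 12TB / 2 = ~6TB available
            Local --> +RECO = 60% = 3.6TB, +DATA = 40% = 2.4TB
        High redundancy:   12TB / 3 = ~4TB available
            Local --> +RECO = 60% = 2.4TB, +DATA = 40% = 1.6TB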

  • Wi-Fi triangulation using an Android smartphone

    - by user1887020
    How can I make an application for Wi-Fi triangulation on the Android platform? This project will be implemented inside a building. No GPS is needed; I am using Wi-Fi only, and doing triangulation to get the user's current position inside the building. I have a minimum of 3 access points to implement it with. But how do I start coding this in Android and integrate the triangulation into the Android code? I have the algorithm to do it, but is there any chance that I can get it done? This project is actually meant to replace the floor directory board with a smartphone floor directory, so that users can find their way to a room, for example to the lab.

        public class Triangulation {
            public Triangulation() {
                int dist_1, dist_2, dist_3;   // the measured distances
                int x1, x2, x3;               // x coordinates of the access points
                int y1, y2, y3;               // y coordinates of the access points
                int final_dist1, final_dist2; // right-hand sides after elimination

                dist_1 = 1;
                dist_2 = 2;
                dist_3 = 3;

                x1 = 5; // test inputs
                x2 = 2;
                x3 = 4;
                y1 = 2;
                y2 = 2;
                y3 = 5;

                final_dist1 = ((dist_1 * dist_1) - (dist_2 * dist_2) - (x1 * x1) + (x2 * x2)
                        - (y1 * y1) + (y2 * y2)) / 2;
                final_dist2 = ((dist_2 * dist_2) - (dist_3 * dist_3) - (x2 * x2) + (x3 * x3)
                        - (y2 * y2) + (y3 * y3)) / 2;

                int initial_a1 = x1 - x2;
                int initial_a2 = x2 - x3;
                int initial_b1 = y1 - y2;
                int initial_b2 = y2 - y3;

                // ----------------------- STEP 1 -----------------------
                int a1 = initial_a1 / initial_a1;
                int a2 = initial_a2 / initial_a1;
                int b1 = initial_b1 / initial_a1;
                int b2 = initial_b2 / initial_a1;
                final_dist1 /= initial_a1;
                final_dist2 /= initial_a1;

                // ----------------------- STEP 2 -----------------------
                a2 = a2 - a2;
                final_dist2 = -(initial_a2) * final_dist1 + final_dist2;

                // ----------------------- STEP 3 -----------------------
                a2 /= b2;
                final_dist2 = final_dist2 / b2;
                b2 /= b2;

                // ----------------------- STEP 4 -----------------------
                b1 = b1 - b1;
                final_dist1 = -(initial_b1) * final_dist2 + final_dist1;
            }
        }
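
    For context, here is one reading of what the elimination steps in that snippet are doing (the derivation is standard trilateration, not an explanation from the original poster). With distances d1, d2, d3 from access points at (x1, y1), (x2, y2), (x3, y3), each measurement gives a circle (x - xi)^2 + (y - yi)^2 = di^2. Subtracting the second circle's equation from the first, and the third from the second, cancels the x^2 + y^2 terms and leaves a 2x2 linear system in the unknown position (x, y):

        (x1 - x2)*x + (y1 - y2)*y = (d2^2 - d1^2 + x1^2 - x2^2 + y1^2 - y2^2) / 2
        (x2 - x3)*x + (y2 - y3)*y = (d3^2 - d2^2 + x2^2 - x3^2 + y2^2 - y3^2) / 2

    which the constructor then solves by Gaussian elimination. Two things are worth double-checking against this form: the signs of the right-hand sides in the snippet (the d1^2 - d2^2 ordering there appears to be the negative of the system above), and the use of int arithmetic, which discards precision at every division; double would be safer for distances estimated from Wi-Fi signal strength.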

  • Agile team with no dedicated Tester members. Insane or efficient?

    - by MetaFight
    I'm a software developer. I've been thinking a lot about the efficiency of the software testers I've worked with so far in my career. In fact, I've been thinking a lot about the software tester role in general, and I have reached a potentially contentious conclusion: non-developer software testers are less efficient at software testing than developers.

    Now, before everyone gets upset, hear me out. This isn't mere opinion. Software testing and software development both require a lot of skills in common:

    - Problem solving
    - Thinking about corner cases
    - Analytical skills
    - The ability to define clear and concise step-by-step scenarios

    What developers have in addition to this is the ability to automate their tests. Yes, I know non-dev testers can automate their tests too, but that often becomes a test-maintenance issue. Because automating UI tests is essentially programming, non-dev members encounter all the same difficulties software developers encounter: copy-pasta, lack of code reusability/maintainability, etc.

    So, I was wondering: why not replace all non-dev roles with developer roles? Developers have the skills required to perform software testing tasks, and they have the skills to automate tests and keep them maintainable. Would the following work?

    Hire a bunch of developers and split them into 2 roles:

    - Software developers
    - Software developers doing testing (some manual, mostly automated by writing integration tests, unit tests, etc.)
    - Software developers doing application support (I've removed this, as it is probably a separate question altogether)

    And, in our case, since we're doing agile development, rotate the roles every sprint or two. Also, if at all possible, try to have people spend their developer stints and testing stints on different projects. Ideally, you would want to reduce the turnover rate per rotation. So maybe you could have 2 groups and make sure the rotation cycles of the groups are staggered. For example, if each rotation were two sprints long, the two groups would have their rotations 1 sprint apart. That way, there's only a 50% turnover rate per sprint.

    Am I crazy, or could this work? (Obviously, a key component of this working is that all devs want to be in these roles. Let's assume I'm starting a new company and I can hire these ideal people.)

    Edit: I've removed the phrase "QA", as apparently we were using it incorrectly where I work.

  • Programming tourism

    - by Andrew_B
    I'm going on vacation to Paris, France for 10 days. Actually, it's my girlfriend's wish to go there, but I'm not very interested in visiting, sightseeing, etc. Recently, I came up with the idea of trying something like programming tourism. :) I'd like to do something related to programming in a startup-like company. I do not want a salary or any kind of compensation. I want an overview of the process, the social aspects, the environment, and "what it feels like" to develop software in another country.

    I'm from Russia. I've been a software developer since 2003. I prefer C# 4, but I'm ready to use anything Turing-complete. I have some MS certifications and am familiar with all the .NETs since 1.1. Currently I'm finishing a PhD in CS. I'm interested in multidimensional indexing, and I can turn any piece of data and code into an OLAP system. :) But that would take too much time.

    What can I do? I have no more than one week, and I want a totally complete project in a short amount of time:

    - Implement some features in a well-tested project
    - Do a code review
    - Debug memory, performance and concurrency issues
    - Do unit testing

    So, about the questions:

    - Is it legal? I'm ready to sign an NDA if necessary. I'll have a tourist visa.
    - Is it possible? I'm quite sure that bureaucratic companies with lots of HRs and PMs will not allow such experiments, but small companies can afford it. I'm ready to guarantee support for my code after leaving home. :)

    P.S. I still haven't started learning French. :) I hope it won't take too much time. :)
    P.P.S. Yes, it's girlfriend-approved.

    What's in it for me? It's fun. It's fun to see new systems and the people who created them. It's fun to complete meaningful things. Quickly.

    What's in it for them? A feature, a debugging session, a review, or a test. If my short-term colleagues like this style of working, I can invite them to make the same trip to my company. :) I think in Russia it's even more exciting. :)

  • IndexOutOfRangeException on World.Step after enabling/disabling a Farseer physics body?

    - by WilHall
    Earlier, I posted a question asking how to swap fixtures on the fly in a 2D side-scroller using the Farseer Physics Engine. The ultimate goal is for the player's physical body to change when the player is in different states (i.e., standing, walking, jumping, etc.). After reading this answer, I changed my approach to the following:

    - Create a physical body for each state when the player is loaded
    - Save those bodies and their corresponding states in parallel lists
    - Swap those physical bodies out when the player state changes (which causes an exception; see below)

    The following is my function to change states and swap physical bodies:

        new protected void SetState(object nState)
        {
            // If mBody == null, the player is being loaded for the first time
            if (mBody == null)
            {
                mBody = mBodies[mStates.IndexOf(nState)];
                mBody.Enabled = true;
            }
            else
            {
                // Get the body for the given state
                Body nBody = mBodies[mStates.IndexOf(nState)];
                // Enable the new body
                nBody.Enabled = true;
                // Disable the current body
                mBody.Enabled = false;
                // Copy the current body's attributes to the new one
                nBody.SetTransform(mBody.Position, mBody.Rotation);
                nBody.LinearVelocity = mBody.LinearVelocity;
                nBody.AngularVelocity = mBody.AngularVelocity;
                mBody = nBody;
            }
            base.SetState(nState);
        }

    Using the above method causes an IndexOutOfRangeException when calling World.Step:

        mWorld.Step(Math.Min((float)nGameTime.ElapsedGameTime.TotalSeconds, (1f / 30f)));

    I found that the problem is related to changing the .Enabled setting on a body. I tried the above function without setting .Enabled, and no error was thrown. Turning on the debug views, I saw that the bodies were updating positions/rotations/etc. properly when the state changed, but since they were all enabled, they were just colliding wildly with each other. Does enabling/disabling a body remove it from the world's body list, which then causes the error because the list is shorter than expected?

    Update: For such a straightforward issue, I feel this question has not received enough attention. Has anyone else experienced this? Would anyone try a quick test case? I know this issue can be sidestepped, i.e., by not disabling a body during the simulation, but it seems strange that this issue would exist in the first place, especially when I see no mention of it in the documentation for Farseer or Box2D. I can't find any cases of the issue online where things are more or less kosher, like in my case. Any leads on this would be helpful.

  • How are software projects 'typically' managed/deployed

    - by rguilbault
    My company is evaluating adopting off-the-shelf ALM products to aid our development lifecycle; we currently use our own homegrown solutions to manage requirements gathering, specification documentation, testing, etc.

    One of the issues I am having is that we have what we call a pipeline, which consists of particular stops:

        [Source] - [QC] - [Production]

    At the first stop, the developer works out a solution to some requested change and performs individual testing. When that process is complete (and peer review has been performed), our ALM system physically moves the affected programs from the [Source] runtime environment to the [QC] runtime environment. You can think of this as analogous to moving some web pages from the 'test' server to the 'live' server, where QC personnel can bang on the system and complain that the developer got it all wrong. ;-) Once QC signs off that the changes are working, the system again moves the code along to the next stage, where additional testing is performed, etc.

    I have been searching the internet for a few days trying to find out how this process is accomplished elsewhere. I have read a bit about builds, automated testing, various ALM products, etc., but nowhere does any of it state how builds interact with initial change requests, what the triggers are, how dependencies are managed, how the various forms of testing are accommodated (e.g., unit testing, integration testing, regression testing), etc. Can anyone point me to any resources, or attempt to explain (generically) how a change could/should be tracked and moved through the development lifecycle? I'd be very appreciative.

    To keep things consistent, let's say that we have a project called Calculator, to which we want to add support for the basic trigonometric functions: sine, cosine and tangent. I'm open to reorganizing the company however we need to in order to accomplish due-diligence testing, and we can suppose that any tools are available for use (if that helps to illustrate the process).

    To start things off, I think I understand this much:

    - we document the requirements, e.g.: support sine, cosine and tangent functions
    - we create some type of change request/work order to assign to programming
    - coding takes place, commits are made to version control
    - peer review commences
    - the programmer marks the work order as completed?

    ... now what? How does QC do their thing? Would they perform testing before closing the 'work order'?

  • PMDB Block Size Choice

    - by Brian Diehl
    Choosing a block size for the P6 PMDB database is not a difficult task. In fact, taking the default of 8k is going to be just fine. Block size is one of those things that is always hotly debated. Everyone has their personal preference and can cite plenty of good reasons for their choice. To add to the confusion, Oracle supports multiple block sizes within the same instance. So how to decide, and what is the justification?

    Like most OLTP systems, Oracle Primavera P6 has a wide variety of data. A typical table's average row size may be less than 50 bytes or upwards of 500 bytes. There are also several tables with BLOB types, but the LOB data tends not to be very large. It is likely that no single block size would be perfect for every table. So how to choose?

    My preference is for the 8k (8192-byte) block size. It is a good compromise: not too small for the wider rows, yet not too big for the thin rows. It is also important to remember that database blocks are the smallest unit of change and caching. I prefer to have more individual "working units" in my database. For an instance with 4GB of buffer cache, an 8k block size will provide 524,288 blocks of cache.

    The following SQL*Plus script returns the average, median, min, and max rows per block:

        column "AVG(CNT)" format 999.99
        set verify off
        select avg(cnt), median(cnt), min(cnt), max(cnt), count(*)
        from (
            select dbms_rowid.ROWID_RELATIVE_FNO(rowid)
                 , dbms_rowid.ROWID_BLOCK_NUMBER(rowid)
                 , count(*) cnt
            from &tab
            group by dbms_rowid.ROWID_RELATIVE_FNO(rowid)
                   , dbms_rowid.ROWID_BLOCK_NUMBER(rowid)
        )

    Running this for the TASK table, I get the following result on a database with an 8k block size. Each activity, on average, has about 19 rows per block.

        Enter value for tab: task

        AVG(CNT) MEDIAN(CNT)   MIN(CNT)   MAX(CNT)   COUNT(*)
        -------- ----------- ---------- ---------- ----------
           18.72          19          3         28     415917

    I recommend an 8k block size for the P6 transactional database. All of our internal performance and scalability tests are done with this block size. This does not mean that other block sizes will not work. Instead, like many other parameters, this is the safest choice.
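
    The cache arithmetic above is easy to verify, and extending it shows why a smaller block means more independent "working units" (the 16k line is an extrapolation, not from the post):

        4GB buffer cache = 4 * 1024 * 1024KB = 4,194,304KB
        4,194,304KB / 8KB per block  = 524,288 cached blocks
        4,194,304KB / 16KB per block = 262,144 cached blocks (half as many units)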

  • What is the standard term for my role?

    - by sigil
    I'm doing work that involves writing code and managing developers in a "special projects" division of a large company. I'd like to define my role better and figure out whether there's an industry-standard term for what I do, so that it will be easier for me to research best practices and work on a career path.

    What I do all day:

    - A macro that connects an Excel sheet to an Access database is acting funny; I get called in to figure out what's happening and debug it.
    - Someone needs data extracted from a bunch of files on SharePoint. I figure out a client-side solution, because I'm not authorized to do anything server-side, and getting IT to do anything would take several months and need a business case.
    - A manager wants a new data-entry tool for their team. I interview the manager and team members to work out the functional requirements, then design, develop, and test the application.
    - Someone needs a VBA script to crunch some data for a presentation that's due in two hours. I drop everything I'm doing to hack out a quick script and run the analysis, without much in the way of testing.
    - A developer has been hired to build a database for one of the teams, since I'm working on too many different things and don't have time to take the project on in the timeframe required. I direct his work and push him to meet certain deadlines, interview stakeholders to get more information that will help him figure out how to build the necessary forms, and modify the functional requirements of the database to fit the timeframe.
    - Someone wants to load a set of data into a GIS system and set up ongoing refreshes and reporting of this data set. I facilitate the conversation between the GIS developers and the owners of the data set, and design a demo application as proof of concept.

    It's kind of an "all-purpose programming and IT management" position, but it's not officially IT, because the company has an actual IT department with a rigorously defined system for submitting requests, developing code, and managing projects. What I do, I guess, is more of a handyman job, where stuff falls to me because I'm the geekiest one in the room. Is there a standard term in the software world for what I do?
