Search Results

Search found 3184 results on 128 pages for 'beginning'.


  • Which skills would you expect and appreciate in a Junior Software Engineer?

    - by Bartzilla
    Hi StackOverflow community! I would like to receive some advice from all of you. I know there are superb programmers here, with outstanding careers, people working for amazing and important companies in the industry, so I am very excited to read the replies I could get. I recently finished my MSc in Software Engineering, and I am about to start my professional career in two weeks. My role will be as a Junior Developer for a company which develops e-commerce software using Java and related technologies (among them Spring and Hibernate). To be honest, I am really excited about what is coming, especially because I really want to develop my career as a Java developer, and I am also very interested in gaining experience in the e-commerce field. Additionally, this is going to be my first work experience as a professional developer, so I really want to do my best from the very beginning. I know many of you probably have manager roles or are team leaders, so basically I would like to know which skills and abilities (soft and technical) you would value and appreciate in a new professional (Junior Developer) who could be part of your team, and which skills I should focus on to achieve a successful career as a Software Engineer. Of course there are many things everybody should expect, like good technical knowledge of the technologies you are going to use and so on, but I would like to hear your opinions. I will really appreciate advice from experienced developers and hearing perspectives other than mine. Thanks in advance!

    Read the article

  • The MSc gray zone: How to deal with the "too inexperienced in engineering/too under-qualified for research" situation?

    - by Hunter2
    Last year I got an MSc degree in CS. At the beginning of the MSc course, I was keen on moving on with research and going for a PhD. However, as the months passed, I started to feel the urge to write software that people would, well, actually use. The programming bug had bitten me, again. So, I decided that before deciding on getting a PhD degree, I would spend some time in the "real world", working as a software developer. Sadly, most companies here in Brazil are "services" companies that seem to be stuck in the 80s when it comes to software development. I have to fend off pushy managers, less-than-competent coworkers and outrageous software requirements (why does everyone seem to need a 50k Oracle license and a behemoth WebSphere AS for their CRUD applications?) on a daily basis, and even though I still love software development, the situation is starting to touch a nerve. And, mind you, I'm already lucky for getting a job at a place that isn't a plain software sweatshop. Sure, there are better places around here, or I could always try my luck abroad, but then I hit the proverbial brick wall: "Sorry, you're too inexperienced as a developer and too under-qualified as a researcher." I've already heard this, and variations of it, multiple times. Research position recruiters look for die-hard, publication-ridden rockstar PhDs, while development position recruiters look for die-hard, experience-ridden rockstar programmers. To most, my MSc degree seems like a minor bump on my CV (and an outright waste of time to some). Applying for positions abroad is even harder, since the employer would have to deal with the hassle of a visa process, which I understand is, sometimes, too much. Now I'm feeling I've reached a dead end. I'm certain that development (and not research) is my thing, so should I just dismiss my MSc (or play it as a "trump card") and play the "big fish in a small pond" role while I gather some experience and contribute to some open-source projects as a plus? Is there a better way to handle this?

    Read the article

  • Should I use mod_wsgi embedded mode if I have full control of Apache?

    - by mgibsonbr
    I'm managing a bunch of sites and applications on a shared hosting plan, using Django via mod_wsgi. I had planned to use daemon mode from the beginning (to avoid restart problems), but ended up purchasing a plan that allows me to run a dedicated Apache instance. I kept using daemon mode for convenience, but I'm afraid it's consuming more server resources than it should (I have a different project for each site, each with its own process and process group), so I'm considering switching to embedded mode. Would that be a sensible thing to do? I'd still be able to restart Apache anytime I need to, and I wouldn't need so many child processes and sockets (so I hope the resource usage would decrease). But I'm unsure whether doing so would make it more difficult to manage those sites (if I need to update one, I have to restart all) or whether the applications would no longer be properly isolated from one another. Are these problems really significant (or only a minor nuisance), and are there other drawbacks I couldn't foresee? I'm looking for advice on any aspect of this setup - maintainability, performance, security etc. Tips for improving the current setup are also welcome (I know how to correctly configure a basic mod_wsgi setup, but I'm clueless about sensible values for threads, processes etc.).
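    A hedged middle-ground sketch (vhost names and paths below are placeholders, not the asker's actual setup): rather than abandoning daemon mode, it can often be slimmed down by giving each low-traffic site a single-process daemon group, which keeps the sites isolated and individually reloadable (touching a site's WSGI script file restarts only that group's processes):

        # Hypothetical fragments for two sites; tune processes/threads to traffic
        WSGIDaemonProcess site1 processes=1 threads=5 maximum-requests=1000
        WSGIScriptAlias /site1 /srv/site1/wsgi.py process-group=site1

        WSGIDaemonProcess site2 processes=1 threads=5 maximum-requests=1000
        WSGIScriptAlias /site2 /srv/site2/wsgi.py process-group=site2

    Embedded mode would save the daemon sockets, but it ties every code update to a full Apache restart and runs all sites inside Apache's shared child processes, which is exactly the isolation trade-off raised above.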

    Read the article

  • Choppy USB mice on just one of USB ports

    - by user20532
    I've got a Lenovo B560 laptop with the latest, properly updated Kubuntu on it (11.04 Natty, kernel 2.6.38-8-generic). It has three USB 2.0 ports onboard. I usually plug a mouse into one of them (I've got 3 different mice - in the office, at home and for when I'm on the go). Sometimes, usually after the laptop wakes from sleep, the mouse still works but cursor movements are choppy, as if the processor were extremely loaded (it's usually not). I found that if I re-plug the mouse cord into another USB port, it works just fine. If I plug it back into the problematic port, it is still choppy and remains choppy until the next boot. Of course I want my mice to always work fine. The problem is: I cannot reproduce this behavior reliably; it happens sporadically but regularly. I use different USB ports (the problem has happened on each of them), I use different mice (each has failed me this way at least once), and I cannot generally find what exactly is going wrong or why plugging into a different port fixes the mouse instantly. So I'd like to hear at least some clues about where to look and what to try to identify my problem. A bit of an update: while writing this post, I had the issue once again. I have just replugged the mouse back into the problematic port and it is not recognized at all. On the other port it works smoothly.

    Read the article

  • Application toolkits like Qt versus traditional game/multimedia libraries like SFML

    - by Aaron
    I currently intend to use SFML for my next game project. I'll need a substantial GUI though (RPG/strategy-type), so I'll either have to implement my own or try to find an appropriate third-party library, which seems to boil down to CEGUI, libRocket, and GWEN. At the same time, I do not anticipate doing that many advanced graphical effects. My game will be 2D and primarily sprite-based, with some sprite animations. I've recently discovered that Qt applications can have their appearance styled so that they don't have to look like plain OS apps. Given that, I am beginning to consider Qt a valid alternative to SFML. I wouldn't have to implement the GUI functionality I'd need, and I may not be taking advantage of SFML's lower-level access anyway. The only drawbacks I can think of immediately are the learning curve for Qt and figuring out how to fit game logic inside such a framework after getting used to the input/update/render loop of traditional game libraries. When would an application toolkit like Qt be more appropriate for a game than a traditional game or multimedia library like SFML?
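    As a rough illustration of the loop question above, here is a minimal sketch assuming PySide6 (Qt for Python); everything beyond Qt's own API (the GameWidget class, the moving rectangle) is made up for the example. A QTimer plays the role of the update tick, and rendering moves into paintEvent:

        import sys
        from PySide6.QtCore import QTimer
        from PySide6.QtGui import QPainter
        from PySide6.QtWidgets import QApplication, QWidget

        class GameWidget(QWidget):
            def __init__(self):
                super().__init__()
                self.x = 0
                self.timer = QTimer(self)
                self.timer.timeout.connect(self.tick)  # the "update" half of the loop
                self.timer.start(16)                   # roughly 60 ticks per second

            def tick(self):
                self.x = (self.x + 2) % max(1, self.width())  # one step of game logic
                self.update()                                 # schedule the "render" half

            def paintEvent(self, event):
                painter = QPainter(self)
                painter.drawRect(self.x, 40, 32, 32)  # stand-in for sprite drawing

        if __name__ == "__main__":
            app = QApplication(sys.argv)
            w = GameWidget()
            w.resize(320, 240)
            w.show()
            sys.exit(app.exec())

    The inversion of control is the main adjustment: instead of owning a while-loop, the game subscribes to ticks and Qt owns the event loop.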

    Read the article

  • Design in an agile process

    - by ying
    Recently I had an interview with the dev team at a company. The team uses agile + TDD. The code exercise implements a video rental store which generates a statement to calculate the total rental fee for each type of video (new release, children's, etc.) for a customer. The existing code uses objects like:
    - a Statement that generates the statement and calculates fees, where a big switch statement uses an enum to determine how to calculate the rental fee
    - a Customer that holds a list of rentals
    - a Movie base class with a derived class for each type of movie (NEW, CHILDREN, ACTION, etc.)
    The code originally doesn't compile, as the owner was assumed to have been hit by a bus. So here is what I did: outlined an improvement over the object model to give each class better responsibilities, and used the strategy pattern to replace the switch statement and wire the strategies in via configuration (see the sketch below). But the team says it's a waste of time because there is no requirement for it; the UAT test suite works and is the only guideline that goes into architecture decisions. The underlying story is just to get the pricing feature out, and it doesn't say anything about how to do it. So the discussion focused on why time should be spent refactoring the switch statement. In my understanding, an agile methodology doesn't mean zero design up front, and such code smells should be avoided from the beginning. Also, no unit/UAT test suite will detect such a code smell; otherwise Sonar and FindBugs wouldn't exist. Here I want to ask: is there such a thing as agile design in the agile methodology, just like agile documentation? How do you define agile design up front? How do you know when enough is enough? In my understanding, a ballpark architecture and the data contracts among components should be defined before/when starting the project, but not the details. Am I right? Can anyone explain what the team is really looking for in this kind of setup - is it a design aspect or an agile aspect? And how do you implement the minimum viable product concept in an agile process in a real-world project? Must an MVP really be something you're embarrassed by?
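    For concreteness, a minimal sketch of that strategy refactoring, in Python rather than the exercise's Java; the class names and price rules are illustrative (loosely following the classic rental-store example), not the interview's actual code:

        class RegularPrice:
            def fee(self, days):
                return 2.0 + max(0, days - 2) * 1.5

        class NewReleasePrice:
            def fee(self, days):
                return days * 3.0

        class ChildrensPrice:
            def fee(self, days):
                return 1.5 + max(0, days - 3) * 1.5

        # The enum-driven switch collapses into a lookup table.
        PRICING = {
            "REGULAR": RegularPrice(),
            "NEW_RELEASE": NewReleasePrice(),
            "CHILDREN": ChildrensPrice(),
        }

        def rental_fee(movie_type, days):
            return PRICING[movie_type].fee(days)

    A new rental type then becomes one new entry in the table instead of another case in the switch, which is the maintainability argument the refactoring rests on.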

    Read the article

  • Are your personal insecurities screwing up your internal communications?

    - by Lucy Boyes
    I do some internal comms as part of my job. Quite a lot of it involves talking to people about stuff. I’m spending the next couple of weeks talking to lots of people about internal comms itself, because we haven’t done a lot of audience/user feedback gathering, and it turns out that if you talk to people about how they feel and what they think, you get some pretty interesting insights (and an idea of what to do next that isn’t just based on guesswork and generalising from self). Three things keep coming up from talking to people about what we suck at in terms of internal comms. And, as far as I can tell, they’re all examples where personal insecurity on the part of the person doing the communicating makes the experience much worse for the people on the receiving end.

    1. Spending time telling people how you’re going to do something, not what you’re doing and why

    Imagine you’ve got to give an update to a lot of people who don’t work in your area or department but do have an interest in what you’re doing (either because they want to know because they’re curious, or because they need to know because it’s going to affect their work too). You don’t want to look bad at your job. You want to make them think you’ve got it covered – ideally because you do*. And you want to reassure them that there’s lots of exciting work going on in your area to make [insert thing of choice] happen to [insert thing of choice] so that [insert group of people] will be happy. That’s great! You’re doing a good job and you want to tell people about it. This is good comms stuff right here.

    However, you’re slightly afraid you might secretly be stupid or lazy or incompetent. And you’re exponentially more afraid that the people you’re talking to might think you’re stupid or lazy or incompetent. Or pointless. Or not-adding-value. Or whatever the thing that’s the worst possible thing to be in your company is. So you open by mentioning all the stuff you’re going to do, spending five minutes or so making sure that everyone knows that you’re DOING lots of STUFF. And then you talk for the rest of the time about HOW you’re going to do the stuff, because that way everyone will know that you’ve thought about this really hard and done tons of planning and had lots of great ideas about process and that you’ve got this one down. That’s the stuff you’ve got to say, right? To prove you’re not fundamentally worthless as a human being? Well, maybe. But probably not.

    See, the people who need to know how you’re going to do the stuff are the people doing the stuff. And those are the people in your area who you’ve (hopefully-please-for-the-love-of-everything-holy) already talked to in depth about how you’re going to do the thing (because else how could they help do it?). They are the only people who need to know the how**. It’s the difference between strategy and tactics. The people outside of your bubble of stuff-doing need to know the strategy – what it is that you’re doing, why, where you’re going with it, etc. The people on the ground with you need the strategy and the tactics, because else they won’t know how to do the stuff. But the outside people don’t really need the tactics at all.

    Don’t bother with the how unless your audience needs it. They probably don’t. It might make you feel better about yourself, but it’s much more likely that Bob and Jane are thinking about how long this meeting has gone on for already than how personally impressive and definitely-not-an-idiot you are for knowing how you’re going to do some work.
    Feeling marginally better about yourself (but, let’s face it, still insecure as heck) is not worth the cost, which in this case is the alienation of your audience.

    2. Talking for too long about stuff

    This is kinda the same problem as the previous problem, only much less specific, and I’ve more or less covered why it’s bad already. Basic motivation: to make people think you’re not an idiot. What you do: talk for a very long time about what you’re doing so as to make it sound like you know what you’re doing and lots about it. What your audience wants: the shortest meaningful update.

    Some of this is a kill-your-darlings problem – the stuff you’re doing that seems really nifty to you seems really nifty to you, and thus you want to share it with everyone to show that you’re a smart person who thinks up nifty things to do. The downside to this is that it’s mostly only interesting to you – if other people don’t need to know, they likely also don’t care. Think about how you feel when someone is talking a lot to you about a lot of stuff that they’re doing which is at best tangentially interesting and/or relevant. You’re probably not thinking that they’re really smart and clearly know what they’re doing (unless they’re talking a lot and being really engaging about it, which is not the same as talking a lot). You’re probably thinking about something totally unrelated to the thing they’re talking about. Or the fact that you’re bored. You might even – and this is the opposite of what they’re hoping to achieve by talking a lot about stuff – be thinking they’re kind of an idiot.

    There’s another huge advantage to paring down what you’re trying to say to the barest possible points – it clarifies your thinking. The lightning talk format, like other formats which limit the time and/or number of slides you have to say a thing, is really good for doing this. It’s incredibly likely that your audience in this case (the people who need to know some things about your thing but not all the things about your thing) will get everything they need to know from five minutes of you talking about it, especially if trying to condense ALL THE THINGS into a five-minute talk has helped you get clear in your own mind what you’re doing, what you’re trying to say about what you’re doing and why you’re doing it. The bonus of this is that by being clear in your thoughts and in what you say, and in not taking up lots of people’s time to tell them stuff they don’t really need to know, you actually come across as much, much smarter than the person who talks for half an hour or more about things that are semi-relevant at best.

    3. Waiting until you’ve got every detail sorted before announcing a big change to the people affected by it

    This is the worst crime on the list. It’s also human nature. Announcing uncertainty – that something important is going to happen (big reorganisation, product getting canned, etc.) but you’re not quite sure what or when or how yet – is scary. There are risks to it. Uncertainty makes people anxious. It might even paralyse them. You can’t run a business while you’re figuring out what to do if you’ve paralysed everyone with fear over what the future might bring. And you’re scared that they might think you’re not the right person to be in charge of [thing] if you don’t even know what you’re doing with it. Best not to say anything until you know exactly what’s going to happen and you can reassure them all, right? Nope.
    The people who are going to be affected by whatever it is that you don’t quite know all the details of yet aren’t stupid***. You wouldn’t have hired them if they were. They know something’s up because you’ve got your guilty face on and you keep pulling people into meeting rooms and looking vaguely worried. Here’s the deal: it’s a lot less stressful for everyone (including you) if you’re up front from the beginning. We took this approach during a recent company-wide reorganisation and got really positive feedback. People would much, much rather be told that something is going to happen but you’re not entirely sure what it is yet than have you wait until it’s all fixed up and then fait accompli the heck out of them. They will tell you this themselves if you ask them.

    And here’s why: by waiting until you know exactly what’s going on to communicate, you remove any agency that the people that the thing is going to happen to might otherwise have had. I know you’re scared that they might get scared – and that’s natural and kind of admirable – but it’s also patronising and infantilising. Ask someone whether they’d rather work on a project which has an openly uncertain future from the beginning, or one where everything’s great until it gets shut down with no forewarning, and very few people are going to tell you they’d prefer the latter.

    Uncertainty is humanising. It’s you admitting that you don’t have all the answers, which is great, because no one does. It allows you to be consultative – you can actually ask other people what they think and how they feel and what they’d like to do and what they think you should do, and they’ll thank you for it and feel listened to and respected as people and colleagues. Which is a really good reason to start talking to them about what’s going on as soon as you know something’s going on yourself.

    All of the above assumes you actually care about talking to the people who work with you and for you, and that you’d like to do the right thing by them. If that’s not the case, you can cheerfully disregard the advice here, but if it is, you might want to think about the ways above – and the inevitable countless other ways – that making internal communication about you and not about your audience could actually be doing the people you’re trying to communicate with a huge disservice.

    So take a deep breath and talk. For five minutes or so. About the important things. Not the other things. As soon as you possibly can. And you’ll be fine.

    * Of course you do. You’re good at your job. Don’t worry.

    ** This might not always be true, but it is most of the time. Other people who need to know the how will either be people you’ve already identified as needing-to-know (and thus part of the same set as the people in your area you’ve already discussed this with), or else they’ll ask you. But don’t bring this stuff up unless someone asks for it, because most of the people in the audience really don’t care and you’re wasting their time.

    *** I mean, they might be. But let’s give them the benefit of the doubt and assume they’re not.

    Read the article

  • Thoughts on type aliases/synonyms?

    - by Rei Miyasaka
    I'm going to try my best to frame this question in a way that doesn't result in a language war or a list, because I think there could be a good, technical answer to this question. Different languages support type aliases to varying degrees. C# allows type aliases to be declared at the beginning of each code file, and they're valid only throughout that file. Languages like ML/Haskell use type aliases probably as much as they use type definitions. C/C++ is sort of a Wild West, with typedef and #define often being used seemingly interchangeably to alias types. The upsides of type aliasing don't invoke too much dispute: It makes it convenient to define composite types that are described naturally by the language, e.g. type Coordinate = float * float or type String = [Char]. Long names can be shortened: using DSBA = System.Diagnostics.DebuggerStepBoundaryAttribute. In languages like ML or Haskell, where function parameters often don't have names, type aliases provide a semblance of self-documentation. The downside is a bit more iffy: aliases can proliferate, making it difficult to read and understand code or to learn a platform. The Win32 API is a good example, with its DWORD = int and its HINSTANCE = HANDLE = void* and its LPHANDLE = HANDLE FAR* and such. In all of these cases it hardly makes any sense to distinguish between a HANDLE and a void pointer, or a DWORD and an integer, etc. Setting aside the philosophical debate of whether a king should give complete freedom to their subjects and let them be responsible for themselves, or should intervene in all of their questionable actions, could there be a happy medium that would allow the benefits of type aliasing while mitigating the risk of its abuse? As an example, the issue of long names can be solved by good autocomplete features. Visual Studio 2010, for instance, will allow you to type DSBA in order to have IntelliSense resolve System.Diagnostics.DebuggerStepBoundaryAttribute. Could there be other features that would provide the other benefits of type aliasing more safely?
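    One existing middle ground is worth sketching (Python's typing module here, as an assumed example rather than anything from the post): a plain alias stays fully interchangeable with its target, while NewType creates a nominally distinct type, so the Win32-style HANDLE-versus-void* blur becomes opt-in instead of the default:

        from typing import NewType

        Coordinate = tuple[float, float]  # plain alias: interchangeable with its target

        UserId = NewType("UserId", int)   # distinct to the type checker, free at runtime
        OrderId = NewType("OrderId", int)

        def cancel(order: OrderId) -> None:
            print("cancelling", order)

        cancel(OrderId(7))     # fine
        # cancel(UserId(7))    # a type checker rejects this: UserId is not OrderId
        # cancel(7)            # rejected too: a bare int is not an OrderId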

    Read the article

  • Best Architecture for ASP.NET WebForms Application

    - by stack man
    I have written an ASP.NET WebForms portal for a client. The project has kind of evolved rather than being properly planned and structured from the beginning. Consequently, all the code is mashed together within the same project and without any layers. The client is now happy with the functionality, so I would like to refactor the code such that I will be confident about releasing the project. As there seem to be many differing ways to design the architecture, I would like some opinions about the best approach to take.
    FUNCTIONALITY: The portal allows administrators to configure HTML templates. Other associated "partners" can display these templates by adding IFrame code to their site. Within these templates, customers can register and purchase products. An API has been implemented using WCF, allowing external companies to interface with the system as well. An Admin section allows administrators to configure various functionality and view reports for each partner. The system sends out invoices and email notifications to customers.
    CURRENT ARCHITECTURE: It is currently using EF4 to read/write to the database. The EF objects are used directly within the aspx files. This has facilitated rapid development while I have been writing the site, but it is probably unacceptable to keep it like that, as it tightly couples the db with the UI. Specific business logic has been added to partial classes of the EF objects.
    QUESTIONS: The goal of refactoring will be to make the site scalable, easily maintainable and secure. 1) What kind of architecture would be best for this? Please describe what should be in each layer, and whether I should use DTOs / POCO / the Active Record pattern etc. 2) Is there a robust way to auto-generate DTOs / BOs so that any future enhancements will be simple to implement despite the extra layers? 3) Would it be beneficial to convert the project from WebForms to MVC?
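    On question 1, a language-neutral sketch of the usual layering (Python for brevity rather than C#; every name here is illustrative): the UI layer sees only DTOs and a repository interface, so the ORM entities - EF4 in this case - stay behind the repository boundary:

        from dataclasses import dataclass
        from typing import Protocol

        @dataclass
        class ProductDto:
            """What the UI layer is allowed to see - no ORM entity leaks out."""
            id: int
            name: str
            price: float

        class ProductRepository(Protocol):
            def get(self, product_id: int) -> ProductDto: ...
            def save(self, product: ProductDto) -> None: ...

        class InMemoryProductRepository:
            """Stand-in for an EF-backed repository: same interface, no database."""
            def __init__(self) -> None:
                self._rows = {}

            def get(self, product_id: int) -> ProductDto:
                return self._rows[product_id]

            def save(self, product: ProductDto) -> None:
                self._rows[product.id] = product

        # Pages depend only on the interface, so the backing store can be
        # swapped (or mocked in tests) without touching UI code.
        repo: ProductRepository = InMemoryProductRepository()
        repo.save(ProductDto(id=1, name="Widget", price=9.99))
        print(repo.get(1).name)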

    Read the article

  • Printer share keeps asking for a password and I can't authenticate from any machine. Why?

    - by tenshimsm
    The Ubuntu 12.04 printer share keeps asking for a password, and I can't authenticate from any machine. Why? We've installed it on two machines (to act as printer servers) and we get the same problem. It doesn't matter what we do, change or install. We can't figure out why the printer share asks for a password even when using any of the users that are registered on the server. What is wrong with Precise? I want it to work without a password, but it is not even working WITH one! I gave up! The samba version that comes with Precise is insufferable! I tried various settings that didn't work. I should've used Mint from the beginning. [Edit] My printer config (bear in mind that samba is 3.6.3 in Ubuntu 12.04):

        load printers = yes

        [printers]
        comment = All Printers
        browseable = yes
        path = /var/spool/samba
        printable = yes
        guest ok = yes
        readonly = yes
        create mask = 0700

        [print$]
        comment = Printer Drivers
        path = /var/lib/samba/printers
        browseable = yes
        readonly = yes
        guest ok = yes
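    A hedged guess at the missing piece, since the shares already have guest ok = yes: samba only falls back to guest access when failed logins are mapped to the guest account, and the default (map to guest = Never) produces exactly the endless password prompt described. These [global] lines are an assumption about the fix, not part of the asker's posted config:

        [global]
        security = user
        map to guest = Bad User   # unknown users become guests instead of being re-prompted
        guest account = nobody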

    Read the article

  • How to manage many mobile device users on the server side?

    - by Rami
    I built a social Android application in which users can see other users around them by GPS location. At the beginning things went well, as I had a low number of users, but now that I have an increasing number of users (about 1500, +100 every day) a major problem in my design has been revealed. In my Google App Engine servlet I have a static HashMap that holds all the user profile objects, currently 1500, and this number will increase as more users register. Why am I doing it? Every user that requests the users around him compares his GPS position with the other users' and checks whether they are within his 10 km radius. This happens every five minutes on average. Consequently, I can't fetch the users from the db every time, because the GAE read/write operation quota would tear me apart. The problem with this design? As the number of users increases, the HashMap turns to null every 4-6 hours; I think this interval is getting shorter, but I'm not sure. I'm fixing this by reloading the users from the db every time I detect that it has become null, but this causes a 30-second denial of service for my users, so I'm looking for a better solution. I'm guessing that it happens because of the size of the HashMap. Am I right? I have been advised to use a spatial database, but that means I can't work with GAE any more, and it means I'd need to build my big server all over again and lose my existing DB. Is there something I can do with the existing tools? Thanks.
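    One direction that stays inside GAE (sketched in Python for brevity, although the original is a Java servlet; CELL_DEG and all names are illustrative, and a real version would use memcache's compare-and-set to handle races): bucket users into coarse grid cells in memcache, so a proximity query reads only the cells around the requester and, unlike a static HashMap, survives instance restarts:

        from google.appengine.api import memcache

        CELL_DEG = 0.1  # ~11 km of latitude per cell; longitude cells shrink toward the poles

        def cell_key(lat, lng):
            return "cell:%d:%d" % (lat // CELL_DEG, lng // CELL_DEG)

        def report_location(user_id, lat, lng):
            key = cell_key(lat, lng)
            members = memcache.get(key) or set()
            members.add(user_id)
            memcache.set(key, members, time=600)  # stale cells expire after 10 minutes

        def nearby_user_ids(lat, lng):
            ids = set()
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    key = "cell:%d:%d" % (lat // CELL_DEG + di, lng // CELL_DEG + dj)
                    ids |= memcache.get(key) or set()
            return ids

    The 3x3 cell scan bounds each request to nine memcache reads regardless of how many users register, and a final exact-distance filter over the returned candidates keeps the 10 km semantics.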

    Read the article

  • Thick client migration to a web-based application

    - by user1151597
    This question is about application design and the technology I should consider during a migration. The scenario: I have a C#.NET WinForms application which communicates with a device. One of the main features of this application is monitoring cyclic data (at a 200 ms rate) sent from the device to the application. The request to start the cyclic data is sent only once at the beginning, and then the application keeps receiving data from the device until it sends a stop request. Now this same application needs to be deployed over the web on an intranet. The application is composed of a business logic layer and a communication layer which talks to the device through UDP ports. I am trying to find a solution which will allow me to have a single instance of the application on the server, so that the device thinks it is connected as usual, and then manage the clients from the business logic layer. I want to reuse the code of the business layer and the communication layer as much as possible. Please let me know whether web services/WCF/etc. are what I should consider when designing the migration. Thanks in advance.
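    A language-neutral sketch of that single-instance pattern (Python for brevity, not the C#/WCF stack in question; DeviceBridge and every other name is illustrative): one process owns the UDP link, and each incoming 200 ms sample is fanned out to a per-client queue that the web tier drains, so the device still sees a single connected application:

        import socket
        import threading
        import queue

        class DeviceBridge:
            def __init__(self, host="0.0.0.0", port=5005):
                self._clients = {}            # client_id -> Queue of raw samples
                self._lock = threading.Lock()
                self._sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
                self._sock.bind((host, port))
                threading.Thread(target=self._pump, daemon=True).start()

            def _pump(self):
                while True:
                    data, _ = self._sock.recvfrom(2048)  # cyclic frame every ~200 ms
                    with self._lock:
                        for q in self._clients.values():
                            q.put(data)

            def subscribe(self, client_id):
                with self._lock:
                    return self._clients.setdefault(client_id, queue.Queue())

            def unsubscribe(self, client_id):
                with self._lock:
                    self._clients.pop(client_id, None)

    Whether the web tier then uses WCF duplex callbacks, push, or plain polling, the existing business and communication layers can sit unchanged behind an interface like this.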

    Read the article

  • Leaving the field of programming. What are the options?

    - by hal10001
    A lot of graduates ask about getting into this field, but I know there are times when I (as well as many others) think about leaving, too. My issue is that I love solving problems and the act of creating something that people enjoy using, and that is what keeps bringing me back. Lately, though, programming has become less about the act of creation and solving problems, and more about being "a monkey at a keyboard". Can you offer any advice with regard to: What fields would offer equivalent problem-solving challenges consistently? How you would go about doing the research, or considering the career change? Basically anything else you think would be helpful in this situation. EDIT: I guess I should clarify and say that I've been in the field about 10 years, and I have had my fair share of working environments. At the place where I am now, and even at the previous two jobs, the people I've worked with have been great. I've been very lucky in that respect. I'm beginning to wonder if the next step for me has little to do with actual programming and more to do with business analysis or strategic consulting. I would hate to get too far onto the business side of things though, as I like being around tech folks more.

    Read the article

  • Can too much abstraction be bad?

    - by m3th0dman
    As programmers, I feel that our goal is to provide good abstractions on the given domain model and business logic. But where should this abstraction stop? How do you make the trade-off between abstraction with all its benefits (flexibility, ease of change, etc.) and ease of understanding the code with all its benefits? I believe I tend to write overly abstracted code, and I don't know how good that is; I often tend to write it like some kind of micro-framework, which consists of two parts:
    - Micro-modules, which are hooked up in the micro-framework: these modules are easy to understand, develop and maintain as single units. This code basically represents the code that actually does the functional work described in the requirements.
    - Connecting code: here, I believe, lies the problem. This code tends to be complicated because it is sometimes very abstracted and hard to understand at first; this arises from the fact that it is pure abstraction, the real work and business logic being performed in the code described in part 1. For this reason, this code is not expected to change once tested.
    Is this a good approach to programming - having the changing code very fragmented into many modules and very easy to understand, and the non-changing code very complex from the abstraction point of view? Should all the code be uniformly complex (that is, the code in part 1 more complex and interlinked, and the code in part 2 simpler), so that anybody looking through it can understand it in a reasonable amount of time, but change is expensive? Or is the solution presented above good, where the "changing code" is very easy to understand, debug and change, and the "linking code" is kind of difficult? Note: this is not about code readability! Both the code in part 1 and in part 2 is readable, but the code in part 2 comes with more complex abstractions while the code in part 1 comes with simple abstractions.

    Read the article

  • Organization standards for large programs

    - by Chronicide
    I'm the only software developer at the company where I work. I was hired straight out of college, and I've been working here for several years. When I started, everyone was managing their own data as they saw fit (lots of filing cabinets). Until recently, I had only been tasked with small standalone projects to help with simple workflows. At the beginning of the year I was asked to make a replacement for their HR software. I used SQL Server, Entity Framework and WPF, along with MVVM and the Repository/Unit of Work patterns. It was a huge hit. I was very happy with how it went, and it was a very solid program. As such, my employer asked me to expand this program into a corporate dashboard that tracks all of their various corporate data domains (people, salary, vehicles/assets, statistics, etc.). I use integrated authentication, and thanks to the initial HR build, I can map users to people in positions, so I know who is who when they open the program, and I can show each person a customized dashboard given their work functions. My concern is that I've never worked on such a large project. I'm planning, meeting with end users, developing, documenting, testing and deploying it on my own. I'm partway through the second addition, and I'm seeing that my code is getting disorganized. It's still programmed well; I'm just struggling with the organization of namespaces, classes and the database model. Are there any good guidelines to follow that will help me keep everything straight? As I have it now, I have folders for Data, Repositories/Unit of Work, Views, View Models, XAML Resources and miscellaneous utilities. Should I make parent folders for each data domain? Should I make separate EF models per domain instead of the one I have for the entire database? Are there any standards out there for organizing large programs that span multiple data domains? I would appreciate any suggestions.
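    On the parent-folder question, one common convention is feature-first (domain-first) organization; a hypothetical layout, not any official standard:

        Dashboard/
            People/              <- one folder per data domain
                Views/
                ViewModels/
                Repositories/
            Assets/
                Views/
                ViewModels/
                Repositories/
            Common/              <- shared XAML resources, converters, utilities
            Data/                <- EF context; one model can serve until a domain clearly needs its own

    Navigating by feature tends to scale better as domains are added, at the cost of repeating the Views/ViewModels skeleton inside each domain.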

    Read the article

  • Groovy JUnit test support

    - by Martin Janicek
    Good news everyone! I've implemented support for Groovy JUnit tests, which basically means you can finally use Groovy in the area where it is so highly productive! You can create a new Groovy JUnit test via New File/Groovy/Groovy JUnit test, and it should behave in the same way as for Java tests. This means that if there is no JUnit setup in your project yet, you can choose between the JUnit 3 and JUnit 4 templates, and the project settings will be changed with respect to your choice (in the case of Maven-based projects, the correct dependencies and plugins are added to the pom.xml, and in the case of Ant-based projects, the JUnit dependency is configured). Or, if the project is already configured, the correct template will be used. After that the test skeleton is created, and you can write your own code and of course run the tests together with the Java ones. Some of you were asking for this feature, and of course I don't expect it to be perfect from the beginning, so I would be really glad to see some constructive feedback about what could be improved and/or redesigned ;] At the end I have to say that the feature is not yet active for Ant-based Java EE projects (I'm aware of it, and it will be fixed for the NetBeans 7.3 final - actually it will be done in a few days/weeks, I just want you to know). But it's already complete in all types of Maven-based projects and also for Ant-based J2SE projects. And as always, a daily build where you can try the feature can be downloaded right here, so don't hesitate to try it!

    Read the article

  • Legacy Code Retreat Questions

    - by MarkPearl
    I recently heard of the concept of a Legacy Code Retreat. Since I have attended and helped facilitate some normal Code Retreats, I thought it might be interesting to try a Legacy Code Retreat, but I have a few questions on how a legacy CR differs from a normal one. If anyone has attended a Legacy CR and has some suggestions on how best to host these events, please leave a comment on what has worked for you in the past, or if you have any answers to my questions below... Should you restrict the languages that people can do the sessions in? In the normal CRs I have been involved in, people have attended and coded in their programming language of choice. A normal CR lends itself to this because each session starts with no code. With a legacy CR, each session seems to start with an existing code base. Is there some sort of limitation on the languages that people can work in during the sessions? If not, how do you give them a base to start from? What happens at the beginning of each session? In the normal CRs that I have attended, each session would have a constraint set on it - i.e. no if statements used, no primitives, etc. With a legacy CR it seems more like patterns for refactoring are learned. Does the facilitator explain the pattern to be used before the session starts, or are participants just given a code base to start from and an objective to achieve?

    Read the article

  • What is the best way for an experienced developer to work on a WordPress blog

    - by nanothief
    I'm beginning to work on my first WordPress blog; however, I've noticed most tutorials just have you make modifications (such as theme changes or installing plugins) on the production site. This worries me for a few reasons:
    - No backups
    - No version control
    - If you make a mistake, your production site is affected
    - Developing remotely is slower than local development, especially when tweaking css files.
    I understand why WordPress works like this - it allows people with no development experience to manage their WordPress installation (or the one provided by their service provider). It also allows you to work on the WordPress installation without having ssh access to the server. However, as I am comfortable working with tools like git and ssh, and am using a virtual server for the blog, this isn't very important to me. So I was wondering what techniques experienced developers use when working on a WordPress blog. For example:
    - Do you develop locally, then push the changes to the live site? How do you do this?
    - How do you manage database changes and backups?
    - What do you store under version control (if anything)?
    - If a plugin changes the database, do you somehow track the changes it makes in version control, so you can roll back the changes done by the plugin if you need to?
    Or maybe I'm just overcomplicating everything, if working on the production site isn't as risky as I am thinking it would be. I would appreciate any answers either way.

    Read the article

  • Huge spike in traffic tracked by Google Analytics from Safari browsers

    - by Petra Barus
    My site urbanindo.com recently experienced a huge spike in traffic as tracked by Google Analytics. The spike sometimes shows up on several of the same pages. This is odd, because I rarely experienced that much traffic before. Some pages can have hundreds of visitors at the same time, but when I read the webserver log, those pages show up in only one or two entries, not hundreds like GA showed. The only similarity between the entries is that they use similar browser agents:

        Mozilla/5.0 (iPad; CPU OS 5_1_1 like Mac OS X) AppleWebKit/534.46 (KHTML, like Gecko) Version/5.1 Mobile/9B206 Safari/7534.48.3
        Mozilla/5.0 (iPhone; CPU iPhone OS 5_1_1 like Mac OS X) AppleWebKit/534.46 (KHTML, like Gecko) Version/5.1 Mobile/9B206 Safari/7534.48.3

    When I opened Google Analytics' Audience > Technology > Browsers & OS report and plotted the chart by browser, I saw that the huge spike came from Safari. These huge spikes started to happen at the beginning of this August, which happens to be when I started using multiple webservers behind a load balancer (although I'm pretty sure those two are not relevant). Is there something wrong with my GA configuration?

    Read the article

  • TechEd North America 2012-Day 4 #msTechEd #teched

    - by Marco Russo (SQLBI)
    I didn't have time yesterday to write a blog post before the day began, so I'm catching up now that I'm already back in Europe. I spent many hours at the Microsoft booth meeting people who asked questions about Tabular, Power View and PowerPivot. I have to say that I'm really glad so many people have started using PowerPivot, and glad of the large interest in Tabular. During my BISM: Multidimensional vs. Tabular session, Alberto helped me with the demo (we joked that he was the demo monkey), and the feedback received was good. I tried to compare the strengths and weaknesses of the two modeling options without spending time describing the areas in which they are similar. During the preceding days I had many discussions about scenarios based on snapshots, and I added a few slides to the presentation in order to cover this area, which I had thought was marginal but seems to be a very common one. To learn more, watch the presentation when it becomes available, or wait for an article I'll write in the future describing these patterns. In less than 10 days I'll be at TechEd Europe, and I'm really looking forward to meeting other Tabular developers and PowerPivot users there!

    Read the article

  • How to decide if I should take a profit sharing offer or insist on hard cash

    - by Icode4food
    I am currently the sole developer at the very beginning stages of what I think could be a successful startup. My question is simply how to go about determining whether I am interested in accepting a permanent stake in the enterprise or whether I should insist on hard cash. This startup is seeking venture capital, and it looks likely that they will get something; I don't know how much. I know very little about the business plan or financial operations at all. The founders have experience with other startup-type endeavours that have been successful. From a business perspective I have a reasonable amount of confidence in them. However, I have never met them face to face. They have offered me a partnership in the project, but I'm fairly confident that I don't want to do that. I have done some work for them and they really like me, so I'm not too afraid of losing the position altogether. I am a young developer with no experience of this sort of thing, and I need someone with experience to point me in the right direction. How do I begin to evaluate how much stock, or what percentage of the profit, I should ask for? Any suggestions at all would be appreciated! P.S. Any suggestions for better tags?

    Read the article

  • So now Google has said no to old browsers - when can the rest of us follow suit?

    - by Richard
    Google recently announced that they will no longer support older browsers from August 1st: http://www.bbc.co.uk/news/technology-13639875 http://gmailblog.blogspot.com/2011/06/our-plans-to-support-modern-browsers.html "For this reason, soon Google Apps will only support modern browsers. Beginning August 1st, we'll support the current and prior major release of Chrome, Firefox, Internet Explorer and Safari on a rolling basis. Each time a new version is released, we'll begin supporting the update and stop supporting the third-oldest version." There is nothing worse than looking at the patchwork of code needed to support older browsers. If we could all move towards a standards-only web (I'm looking at you, IE9) then surely we could spend more time programming good web apps and less time trying to make them run equally on terrible, non-standards-compliant older browsers. So when can the rest of us expect to be able to tell our clients that we no longer support older browsers? Because it seems that large corporates will continue to run older browsers, and even if Google Chrome Frame can be installed without admin privileges (it's coming soon, currently in beta), we can't expect all users to be motivated to do this. I appreciate any thoughts.

    Read the article

  • How do I draw a dotted or dashed line?

    - by Gagege
    I'm trying to draw a dashed or dotted line by placing individual segments (dashes) along a path and then separating them. The only algorithm I could come up with gave me a dash length that varied with the angle of the line. Like this:

        private function createDashedLine(fromX:Float, fromY:Float, toX:Float, toY:Float):Sprite
        {
            var line = new Sprite();
            var currentX = fromX;
            var currentY = fromY;
            var addX = (toX - fromX) * 0.0075;
            var addY = (toY - fromY) * 0.0075;
            line.graphics.lineStyle(1, 0xFFFFFF);
            var count = 0;
            // while line is not complete
            while (!lineAtDestination(fromX, fromY, toX, toY, currentX, currentY))
            {
                // move line draw cursor to beginning of next dash
                line.graphics.moveTo(currentX, currentY);
                // if dash is even
                if (count % 2 == 0)
                {
                    // draw the dash
                    line.graphics.lineTo(currentX + addX, currentY + addY);
                }
                // add next dash's length to current cursor position
                currentX += addX;
                currentY += addY;
                count++;
            }
            return line;
        }

    This just happens to be written in Haxe, but the solution should be language-neutral. What I would like is for the dash length to be the same no matter what angle the line is at. As is, the code just adds a fixed fraction (0.0075) of the line's x and y extent each step, so if the line is at a 45-degree angle you get pretty much a solid line, while if the line is at something shallow like 85 degrees you get a nice-looking dashed line. So the dash length is variable, and I don't want that. How would I make a function that I can pass a "dash length" into and get that length of dash, no matter what the angle is? If you need to completely disregard my code, be my guest. I'm sure there's a better solution.
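    For reference, a sketch of the usual fix (in Python for neutrality, since the original is Haxe; dash_points and its signature are made up for the example): normalize the direction vector once, then step along it in units of the requested dash length, so the dash size no longer depends on the line's angle or total length:

        import math

        def dash_points(from_x, from_y, to_x, to_y, dash_length):
            """Yield (start, end) point pairs for each dash along the line."""
            dx, dy = to_x - from_x, to_y - from_y
            dist = math.hypot(dx, dy)
            if dist == 0:
                return
            ux, uy = dx / dist, dy / dist          # unit direction vector
            t = 0.0
            while t < dist:
                end = min(t + dash_length, dist)   # clip the final dash
                yield ((from_x + ux * t, from_y + uy * t),
                       (from_x + ux * end, from_y + uy * end))
                t += 2 * dash_length               # dash plus an equal-sized gap

    Each yielded pair maps directly onto a moveTo/lineTo call, and the gap size can be made a separate parameter for dotted versus dashed styles.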

    Read the article

  • Problems dualbooting Ubuntu because of UEFI

    - by Koffeehaus
    I have an X-series Asus laptop which I bought about a month ago. I want to dual-boot Ubuntu and Windows. I can easily access the LiveUSB with UEFI both enabled and disabled. I heard that there were problems with UEFI, so I disabled it. After I installed the system I couldn't access it; it just boots straight into Windows. Another unusual thing, which never happened to me before, was that the partition editor wanted me to create a BIOS reserved area, which I did, but not at the beginning of the table. Any ideas how to access the Ubuntu partition? As far as I can guess, Windows and Ubuntu both have to use the same boot type, either Legacy or EFI, and that is not the case with what I have now. So, if I reinstall Ubuntu in UEFI mode to match my Windows installation, will I then be able to boot into it? I have a constraint: my laptop doesn't have a CD-ROM drive, so I cannot reinstall Windows, nor can I move the Windows recovery partition around. This is the boot-repair report: http://paste.ubuntu.com/1354254/

    Read the article

  • Ubuntu won't start - blank screen with flashing white cursor

    - by loomy
    My laptop is dual-booted with Windows 8 on one partition and Ubuntu 12.04.3 on the other. I've searched for my issue already, but nothing I've found so far has solved the problem. Since last week, when I try to boot Ubuntu from GRUB, I am taken to a purple screen (as usual), but then to a black screen with a blinking white _ cursor. I've tried leaving it, but nothing else happens. When I hold Shift and edit the GRUB entry to change 'quiet splash' to 'text', the black screen instead asks for my login and password. When I put them in, it tells me the date of my last logon, and then waits for further commands. Being very, very new to Ubuntu, I have no idea at all what to try at this point. I tried to launch FailsafeX, but while that was starting, it said "unable to run server /usr/bin/X: No such file or directory", then shortly after returned to the recovery mode menu. Pressing Ctrl+Alt+Delete goes through an Ubuntu loading screen, and the laptop then restarts. Any suggestions will be very appreciated, and apologies if this is a common issue that has been answered a million times before.

    Read the article
