Search Results

Search found 8250 results on 330 pages for 'dunn less'.


  • Isometric Camera trouble - can't rotate or move correctly

    - by Deukalion
    I'm trying to create a 3D editor, but I've been having some trouble with the camera and understanding each component. I've created two cameras that work OK, but now I'm trying to implement an isometric camera in XNA, without success on the rotation and movement of the camera. All I get working is zoom. (Cube with x=3f, y=3f, z=1f in center.)

    This is the constructor for my IsometricCamera3D (inherits from ICamera, with methods for rotation, movement and zoom, and properties for the World/View/Projection matrices):

        public IsometricCamera3D(GraphicsDevice device, float startClip = -1000f, float endClip = 1000f)
        {
            matrix_projection = Matrix.CreateOrthographic(device.Viewport.Width, device.Viewport.Height, startClip, endClip);
            rotation = Vector3.Zero;
            matrix_view = Matrix.CreateScale(zoom) *
                          Matrix.CreateRotationY(MathHelper.ToRadians(45 + 180)) *
                          Matrix.CreateRotationX(MathHelper.ToRadians(30)) *
                          Matrix.CreateRotationZ(MathHelper.ToRadians(120)) *
                          Matrix.CreateTranslation(rotation.X, rotation.Y, rotation.Z);
        }

    The problem is that when I rotate it, all that happens is that the cube gets more or less shiny, and nothing else happens. What is wrong, and how should I create my view matrix to move and rotate it correctly? Rotate, Move and Zoom look like MethodName(Vector3 rotation/movement) and Zoom(float value); each just increases the corresponding value, then calls an update that recreates the view matrix according to the code in the constructor.

    Currently, in my editor, I use middle button + mouse movement to rotate the camera, but it's not working like the other cameras. In my default camera I use the world matrix to move, but I guess that's not the best way to go, which is why I'm trying this.
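    For comparison, a minimal sketch of one common way to build such a camera, assuming the standard XNA 4.0 API: derive the view matrix from Matrix.CreateLookAt, so panning moves a target point while a fixed offset direction preserves the isometric angle, and zoom scales the orthographic volume rather than the view matrix. The class and member names below are illustrative, not from the post:

        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        public class SimpleIsometricCamera
        {
            // Hypothetical fields: pan by moving 'target'; the fixed offset
            // direction gives the isometric angle; 'zoom' scales the view volume.
            private Vector3 target = Vector3.Zero;
            private float zoom = 1f;
            private static readonly Vector3 offset =
                Vector3.Normalize(new Vector3(1f, 1f, 1f));

            public Matrix View
            {
                // Sit at a fixed offset from the target and look back at it;
                // translating 'target' now pans the camera as expected.
                get { return Matrix.CreateLookAt(target + offset * 100f, target, Vector3.Up); }
            }

            public Matrix GetProjection(GraphicsDevice device)
            {
                // Shrinking the orthographic volume zooms in, growing it zooms out.
                return Matrix.CreateOrthographic(device.Viewport.Width / zoom,
                                                 device.Viewport.Height / zoom,
                                                 -1000f, 1000f);
            }

            public void Move(Vector3 delta) { target += delta; }
            public void Zoom(float amount) { zoom = MathHelper.Max(0.1f, zoom + amount); }
        }

    Rotation around the up axis can then be added by rotating the offset vector, e.g. offset = Vector3.Transform(offset, Matrix.CreateRotationY(angle)).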


  • Offshoring: does it ever work?

    - by DanSingerman
    I know there has been a fair amount of discussion on here about outsourcing/offshoring, and the general opinion seems to be that at best it is difficult, and at worst it fails.

    I have direct experience of offshoring myself; a previous company where I was a dev manager wanted to send some development offshore, and we ran a pilot scheme to see how well it would work. Of course it was a complete failure, although it is not completely clear to me whether this was down to the offshore devs being less talented, the process, or other factors (no doubt it was really a combination).

    I can see as a business how offshoring looks attractive (much lower day rate), but as far as I can see, the only way it could possibly work is if you do exceptionally detailed design up front, with incredibly detailed specifications; and by the time you have invested in producing that, you have probably spent nearly as much as if you had written the actual code locally (which I think is an instance of No Silver Bullet).

    So, what I want to know is: does anyone here have any experience of offshoring actually working? Especially, are there any success stories of it working in a semi-agile way? I know there are developers here from all over the world; has anyone worked on an offshore project they consider successful?


  • Project Jigsaw: On the next train

    - by Mark Reinhold
    I recently proposed to defer Project Jigsaw from Java 8 to Java 9. Feedback on the proposal was about evenly divided as to whether Java 8 should be delayed for Jigsaw, Jigsaw should be deferred to Java 9, or some other, usually less-realistic, option should be taken. The ultimate decision rested, of course, with the Java SE 8 (JSR 337) Expert Group. After due consideration, a strong majority of the EG agreed to my proposal.

    In light of this decision we can still make progress in Java 8 toward the convergence of the higher-end Java ME Platforms with Java SE. I previously suggested that we consider defining a small number of Profiles which would allow compact configurations of the SE Platform to be built and deployed. JEP 161 lays out a specific initial proposal for such Profiles.

    There is also much useful work to be done in Java 8 toward the fully-modular platform in Java 9. Alan Bateman has submitted JEP 162, which proposes some changes in Java 8 to smooth the eventual transition to modules, to provide new tools to help developers prepare for modularity, and to deprecate and then, in Java 9, actually remove certain API elements that are a significant impediment to modularization.

    Thanks to everyone who responded to the proposal with comments and questions. As I wrote initially, deferring Jigsaw to a Java 9 release in 2015 is by no means a pleasant decision. It does, however, still appear to be the best available option, and it is now the plan of record.


  • Ubuntu 14.04 stalling. Problem with LightDM. Plymouth (and logging out) switches over to a black screen w/ white cursor

    - by Kage
    If it's a duplicate, sorry - I couldn't find anything that fits my issue, much less anything on 14.04. I changed a few things recently: switched to the Numix theme (from the PPA), installed lm-sensors and psensor (ran all the I/O probes), Ubuntu Tweak, Pinta, and, well, Team Fortress 2 on Steam. :P

    The system will get to the Plymouth 'ubuntu' screen, load and load, all dots filled, then switch over to LightDM - but wait! No LightDM. :I Just a blank screen with that white cursor. I can't switch out to tty1-6; not sure if Ctrl-Alt-F1 is disabled in 14.04 or if it's literally just locked down. If I need to change any files, I have access to the filesystem from my Windows 8 partition. That's it. :/

    I'm pretty familiar with Linux, especially Ubuntu, but I think I'm still at the point where I know just enough to break things and not always how to fix 'em. Any help would be greatly appreciated. Thanks!

    UPDATE: I was just able to get into my desktop briefly. I booted Ubuntu. When the black screen froze, I hit Ctrl-Alt-Del. When it started switching off, I hit Ctrl-Alt-Backspace. It rebooted. I plugged in the second monitor I had been using before the issue ever came about. Plymouth displayed on both. LightDM came up, displayed on both (it used to show only the ubuntu logo on the unfocused monitor, though). I logged in just fine. I even ran some pending software updates. I logged out of the desktop, though, and LightDM refused to show again. xP


  • My first blog post…

    - by steveh99999
    I’ve been meaning to start a blog for a while now (OK, for several years...). Finally, here it begins.

    First post: something really simple, but a wise man once told me the best way to improve SQL Server performance. Store less data. That's it... that's all there is to it.

    Over the years, I've seen the following:

    - A 200 GB database which held 3 days of data. Once business requirements changed, we were able to hold only 1 day's data in this database.
    - A table developed by DBAs to hold application table cardinality information - that information was collected at 2-hour intervals every day for 7 years! After 7 years the DBA space-info table had become the largest table in the database - 60 million rows! It was a simple change to remove a lot of the historical intra-day data and change the schedule to run only once per evening. Suddenly that table held 6 million rows instead of 60 million...
    - Lots of backup and restore history held in msdb. See this post by Brent Ozar for more details on this issue.

    Imagine how much faster the backups, DBCC checks and reindexes ran when the above 3 changes were implemented.

    How often do you review your big databases and tables to see if you’re actually holding only data that is really required by the business?


  • Brief pause after keypress

    - by user36324
    After I press and hold a key, it goes forward once, then pauses for a second or less, then goes forward on forever. My problem is the brief pause; I can't locate the issue. Thanks for your help.

        while (game) {
            while (SDL_PollEvent(&e)) {
                mainChar.manageEvents(e);
            }
            background.renderChar();
            mainChar.renderChar();
            SDL_RenderPresent(ren);
        }

        void Character::manageEvents(SDL_Event event)
        {
            switch (event.type) {
            case SDL_KEYDOWN:
                KEYS[event.key.keysym.sym] = true;
                printf("true");
                handleInput();   // movement only happens when a key event arrives
                break;
            case SDL_KEYUP:
                KEYS[event.key.keysym.sym] = false;
                printf("false");
                break;
            default:
                break;
            }
        }

        void Character::handleInput()
        {
            if (KEYS[SDLK_a]) { dst.x--; }
            if (KEYS[SDLK_d]) { dst.x++; }
            if (KEYS[SDLK_w]) { dst.y++; }
            if (KEYS[SDLK_s]) { dst.y--; }
        }


  • Cannot access Windows share

    - by Wrigley
    I have an accounting program specifically tailored to the IT industry; it's called Fincon (exe-based). It basically works on a client-to-server directory-based system. The server is currently running on a Windows 7 machine with an NTFS partition. I have installed Wine and have the shared Windows directory mounted with what I assume is the correct command for such, being:

        mount -tsmbfs //servername/sharedir /mnt/fincon -0 usename=username,password=password

    I can see the shared dir, although it does take a bit long to access it on the Ubuntu machine via the mapped directory, but it's instant via normal network browsing. I have also set up the mapped directory as my D: drive in Wine and have pointed fincon.ini to read the server field from D: directly.

    Here's my issue: it seems that for some odd reason I can't write to the mapped directory from Ubuntu, yet I can with my Windows machines. The permissions are set correctly on the Windows 7 share, and I really dunno what I'm missing. I'm quite a Linux noob; I just switched yesterday.

    Thanks guys, any help would be quite appreciated, as I'm pulling out my hair here and really want to migrate my work PCs to a Linux OS, since it gives fewer issues than Windows ever does.


  • Create a system image in Windows 8

    - by Greg Low
    One of the things that I've just come to accept is that the designers of Windows 8 and I think very differently. It'll take a long time to convince me that shutting down the computer is a "setting". Even after using Windows 8 for quite a while now, I still find that I struggle nearly every day, just trying to do things that I previously knew how to do. That's just not a good thing.

    Today I decided to create a system image as I hadn't made one lately. I started in Control Panel looking for backup options. That yielded nothing except programs that wanted to "Save backup copies of my files with file history". I thought "oh well, let's just try the new search options". I hit the Windows key and typed "Backup". No, nothing came up there either. I searched again all over the Control Panel options to no avail. So it was time to hit Google again. Once again, clearly lots of people used to know how to do this and have been trying to work out where this option went.

    The first trick is that there are a bunch of Control Panel options that don't appear in the Control Panel. In the address bar at the top, if you click on Control Panel, you'll find there is an option that says "All Control Panel Options". That is curious, given that's where I thought I was when I opened Control Panel. No hint is given on that screen that there are a bunch of hidden options. None the less, I then checked out "all" the options.

    The option that you need to create a system image in Windows 8 turns out to be the "Windows 7 File Recovery" option that appears in this extended list. Why does it say "Windows 7" when it's for "Windows 8" as well and I'm running "Windows 8"? Why do I have to choose an option that says "File Recovery" to create a system image backup? <sigh> But at least I've recorded it here for the next time I forget where to find it.


  • Should I expect my peers to read or practice on a regular basis? [closed]

    - by Joshua Smith
    I've been debating asking this question for some time. Based on several of the comments I read in this question, I decided I had to ask.

    This feels like I'm stating the obvious, but I believe that regular reading (of books, blogs, StackOverflow, whatever) and/or practice is required just to stay current (let alone excel) in whichever stack you use to pay the bills, not to mention playing with things outside your comfort zone to learn new ways of doing things. Yet I virtually never see this from many of my peers. Even when I go out of my way to point out useful (and almost always free) learning material, I quite often get a sense of total apathy from those I'm speaking to. I'd even go so far as to say that if someone doesn't try to improve (or at least stay current), they'll atrophy as technology advances and actually become less useful to the company.

    I don't expect people to spend hours a day studying or practicing. I have two young kids, and hours of practice simply aren't feasible. Still, I find some time; perhaps on the train, at lunch, in bed for a few minutes, whatever. I'm willing to believe this is arrogance or naivete on my part, but I'd like to hear what the community has to say.

    So here's my question: should I expect (and encourage) the same from my peers, or just keep my mouth shut and do my own thing?


  • Login takes very long, annoying repaints once a minute when logged in: How to troubleshoot?

    - by user946850
    I am suffering from a strange problem with my GNOME Shell in Ubuntu 12.10. The login takes very long (> 30 sec), with a blank screen. In Google Chrome and Thunderbird (and perhaps in other applications), the main window freezes and is repainted at periodic intervals of less than one minute. The freeze takes several seconds, and it seems that the font and appearance of, e.g., tabs and buttons briefly change. Attempting to enable the second monitor shows an error message related to XRANDR.

    Everything seems to have started three days ago, after I had to force-shutdown the machine while it was hibernating due to low power. (It was hibernating for quite a while and didn't want to stop.) Silly me.

    I have tried the following measures, to no avail:

    - Checked all package file md5 hashes using debsums
    - Reinstalled all packages using a variant of dpkg --get-selections \* | xargs apt-get install --reinstall
    - Temporarily moved configuration directories such as .gconf, .config and .gnome2 to another location
    - Created a new user account

    When I choose "Ubuntu" during login, the problems disappear. I am sort of frustrated that reinstalling all packages didn't fix the issue. How do I troubleshoot this GNOME Shell (?) problem, short of reinstalling the system? (Or did anyone see this kind of behavior on their machine?)


  • How do you proactively guard against errors of omission?

    - by Gabriel
    I'll preface this with: I don't know if anyone else who's been programming as long as I have actually has this problem, but at the very least, the answer might help someone with less XP. I just stared at this code for 5 minutes, thinking I was losing my mind because it didn't work:

        var usedNames = new HashSet<string>();
        Func<string, string> l = (s) =>
        {
            for (int i = 0; ; i++)
            {
                var next = (s + i).TrimEnd('0');
                if (!usedNames.Contains(next))
                {
                    return next;
                }
            }
        };

    Finally I noticed I forgot to add the used name to the hash set. Similarly, I've spent minutes upon minutes over omitting context.SaveChanges(). I think I get so distracted by the details that I'm thinking about that some really small details become invisible to me - it's almost at the level of a mental block. Are there tactics to prevent this?

    Update: a side effect of asking this was fixing the error it would have for i > 9 (thanks!):

        var usedNames = new HashSet<string>();
        Func<string, string> name = (s) =>
        {
            string result = s;
            if (usedNames.Contains(s))
                for (int i = 1; ; result = s + i++)
                    if (!usedNames.Contains(result))
                        break;
            usedNames.Add(result);
            return result;
        };
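    One possible tactic, sketched here as a hedged example: fold the easy-to-forget bookkeeping into the API itself, so that reserving a name and recording it are a single operation that cannot be split. UniqueNamer is a hypothetical helper, not code from the question:

        using System.Collections.Generic;

        public class UniqueNamer
        {
            private readonly HashSet<string> used = new HashSet<string>();

            // Returns 's', or the first free 's1', 's2', ... and records the
            // result itself, so the caller cannot forget the Add() step.
            public string Next(string s)
            {
                string result = s;
                for (int i = 1; used.Contains(result); i++)
                {
                    result = s + i;
                }
                used.Add(result);
                return result;
            }
        }

    The same idea applies to the SaveChanges() case: wrap the mutation and the commit in one method, so the forgettable step no longer exists at the call site.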


  • Importing an existing project into Git

    - by Andy
    Background

    During the course of developing our site (ASP.NET), we discovered that our existing source control (SourceGear Vault) wasn't working for us, so we decided to migrate to Git. The translation has been less than smooth, though. Our site is broken up into three environments: DEV, QA, and PROD. For the most part, DEV and the source control repo have been in sync with each other. There is one branch in the repo; if a page was going to be moved up to QA, then the file was moved manually, and the same with stuff that was ready for PROD. So our current QA and PROD environments do not correspond to any particular commit in the master branch. Clarification: the QA and PROD branches are not currently, nor have they ever been, in source control.

    The Question

    How do I move QA and PROD into Git? Should I forget about the history we've maintained up to this point and start over with a new repo? I could start with everything on PROD, then make a branch and pull in everything from QA, and then make another branch off of that with DEV. That way, not only will the branches reflect the differences in the environments, they'll be in the right order chronologically, with the newest commits in the DEV branch.

    What I've tried so far

    I thought about creating a QA branch off of the current master and using robocopy to make the working folder look like the current QA environment. This doesn't work because the new commit from QA will remove new files from DEV, and that will remove them when we merge up. I suspect there will be similar problems if I started QA at an earlier (though not exact) commit from DEV.


  • Entity Object Extension in Oracle Application R12

    - by Manoj Madhusoodanan
    In this blog I will explain how to perform Entity Object (EO) extension. As a prerequisite, please read my previous blog. I am doing this exercise based on a PL/SQL EO.

    The following attributes are part of FndUserEO. Here I will add a validation to the UserName attribute: "Length should be > 5". The following steps need to be performed:

    1) Download all files of "Entity Object Based on PL/SQL" to JDEV_USER_HOME/myprojects and JDEV_USER_HOME/myclasses. If you want to see the content of the source java file, decompile it and save it in JDEV_USER_HOME/myprojects.

    2) Create a new Entity Object, XXFndUserEO, as follows. Include all attributes of the parent EO.

    3) Add the validation code snippet to XXFndUserEOImpl.java as follows.

    4) Create the substitution as follows.

    5) Migrate the files to $JAVA_TOP:

        xxcustom.oracle.apps.fnd.user.schema.server.XXFndUserEOImpl.java
        xxcustom.oracle.apps.fnd.user.schema.server.XXFndUserEO.xml

    6) Migrate the substitution.

    7) Bounce the server.

    8) Verify the substitution has been applied properly. Access the Create User page and create a user. You can see the validation message if the user name length is less than 5. Give the user name as XXCUST4 and verify the table. The FND_USER has been created successfully.


  • Oracle MDM Panel at OOW 12: Best practices, Lessons Learned and More...

    - by Mala Narasimharajan
    By Narayana Machiraju

    We are less than two weeks out from the start of Oracle Open World 2012. The MDM team has built up a solid line-up of product and customer sessions for you to attend this year, in addition to the hands-on labs and numerous demonstration pods in Moscone West. This year we will be hosting a customer panel session dedicated to Oracle Customer Hub at Oracle Open World. An esteemed panel of Oracle Customer Hub customers in different industries - Credit Suisse, Allianz and Elsevier - will provide insight into the journey of Customer MDM, right from building a business case and MDM vision to establishing and sustaining governance, implementation strategies and realizing the benefits. You will also hear about implementation challenges, phasing strategies and lessons learned from real-life experiences.

    If you are already implementing Customer MDM or evaluating the benefits of MDM and you would like to hear directly from our customers, then I highly recommend you attend this session:

    Customer MDM Panel: Discussion and Q&A on Implementation Best Practices, Data Quality, Data Governance and ROI
    Wednesday, October 3rd, 5:00 PM - 6:00 PM
    Westin Market Street Hotel - Metropolitan 1

    The MDM track at Oracle Open World covers a variety of topics related to MDM. In addition to the product management team presenting product updates and roadmap, we have several customer panels, conference sessions and customer round-table sessions featuring a lot of marquee customers. You can see an overview of the MDM sessions here. We hope to see you at Open World, and stay in touch via our future blogs.


  • FP for simulation and modelling

    - by heaptobesquare
    I'm about to start a simulation/modelling project. I already know that OOP is used for this kind of project. However, studying Haskell made me consider using the FP paradigm for modelling a system of components. Let me elaborate.

    Let's say I have a component of type A, characterised by a set of data (a parameter like temperature or pressure, a PDE and some boundary conditions, etc.) and a component of type B, characterised by a different set of data (a different or the same parameter, a different PDE and boundary conditions). Let's also assume that the functions/methods that are going to be applied on each component are the same (a Galerkin method, for example).

    If I were to use an OOP approach, I would create two objects that would encapsulate each type's data, the methods for solving the PDE (inheritance would be used here for code reuse) and the solution to the PDE. On the other hand, if I were to use an FP approach, each component would be broken down into data parts and the functions that would act upon the data in order to get the solution for the PDE. This approach seems simpler to me, assuming that linear operations on data would be trivial and that the parameters are constant. But what if the parameters are not constant (for example, temperature increases suddenly and therefore cannot be immutable)? In OOP, the object's (mutable) state can be used. I know that Haskell has monads for that.

    To conclude: would implementing the FP approach actually be simpler, less time-consuming and easier to manage (adding a different type of component, or a new method to solve the PDE) compared to the OOP one? I come from a C++/Fortran background, plus I'm not a professional programmer, so correct me on anything that I've got wrong.


  • Dealing with numerous, simultaneous sounds in Unity

    - by luxchar
    I've written a custom class that creates a fixed number of audio sources. When a new sound is played, it goes through the class, which creates a queue of sounds that will be played during that frame. The sounds that are closer to the camera are given preference. If new sounds arrive in the next frame, I have a complex set of rules that determines how to replace the old ones. Ideally, "big" or "important" sounds should not be replaced by small ones.

    Sound replacement is necessary since the game can be fast-paced at times, and should try to play new sounds by replacing old ones. Otherwise, there can be "silent" moments when an old sound is about to stop playing and isn't replaced right away by a new sound. The drawback of replacing old sounds right away is that there is a harsh transition from the old sound clip to the new one.

    But I wonder if I could just remove that management logic altogether, and create audio sources on the fly for new sounds. I could give "important" sounds more priority (closer to 0 in the corresponding property) as opposed to less important ones, and let Unity take care of culling out sound effects that exceed the channel limit. The only drawback is that it requires many heap allocations. I wonder what strategy people use here?
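    A hedged sketch of that on-the-fly alternative, assuming Unity's standard AudioSource API: spawn a throwaway source per sound, map the game's own importance score onto Unity's priority field (where 0 is most important), and let the engine cull voices over the channel limit. PlayClipAt and the importance scale are illustrative assumptions, not project code:

        using UnityEngine;

        public static class OneShotAudio
        {
            // Hypothetical helper: one temporary AudioSource per sound.
            // 'importance' (0..255, higher = more important) is the game's own
            // score; Unity's priority runs the other way, 0 = most important.
            public static void PlayClipAt(AudioClip clip, Vector3 position, int importance)
            {
                GameObject go = new GameObject("OneShotAudio");
                go.transform.position = position;

                AudioSource source = go.AddComponent<AudioSource>();
                source.clip = clip;
                source.priority = Mathf.Clamp(255 - importance, 0, 255);
                source.spatialBlend = 1f;   // fully 3D, so distance attenuates it
                source.Play();

                // Clean up the temporary object once the clip has finished.
                Object.Destroy(go, clip.length);
            }
        }

    The heap-allocation drawback can be bounded by pooling the GameObjects instead of destroying them, which keeps this approach while avoiding per-sound allocations.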


  • Best way to calculate unit deaths in browser game combat?

    - by MikeCruz13
    My browser game's combat system is written and mechanically functioning well. It's written in PHP and uses a SQL database. I'm happy with the unit balance in relation to one another. I am, however, a little worried about how I'm calculating unit deaths when one player attacks another, because the deaths seem to pile up a little fast for my taste.

    In this system, a battle doesn't just trigger, calculate a winner, and end. Instead, it is allowed to go for several rounds (say, one round every 15 minutes) until one side passes a threshold of being too strong for the other player, and players may send reinforcements between rounds. Each round, units pair up and attack each other. Essentially what I do is calculate the damage:

    AP = Attack Points
    HP = Hit Points

    damage = unit AP * quantity * random factors * other factors (such as attrition)

    I take that and divide by the defending unit's HP to find the number of defending units killed. So, for example (simplified to take out some factors), if I have 500 attackers with 50 AP vs. 1000 defenders with 100 HP, that's 500 * 50 = 25000 damage / 100 HP = 250 deaths.

    I wonder if that last step could be handled better, to reduce the deaths piling up. Some ideas (see the sketch below):

    - I just change all the units to have more HP?
    - I make sure to set the attacking unit's AP to a max of the defender's HP, to make sure each attacker only kills one unit. (Is that fair if I have fewer huge units vs. many small units?)
    - I spread the damage around more by including the defending unit's quantity more? I.e., in that scenario some are dead and some are at 50% damage. (How would I track this every round?)
    - Other, better mathematical approaches?
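    For illustration, a hedged sketch (in C#, though the game itself is PHP) of the damage step with two of those mitigations applied: each attacker's AP is capped at the defender's HP, and leftover damage is carried between rounds as fractional wounds instead of rounding into extra deaths. All names are hypothetical:

        using System;

        public class CombatRound
        {
            // Returns deaths this round; 'carryDamage' persists between rounds
            // so partial damage wounds units instead of inflating the body count.
            public static int ResolveRound(int attackers, int attackPoints,
                                           ref int defenders, int defenderHp,
                                           ref int carryDamage)
            {
                // Cap each attacker at one kill: at most defenderHp damage each.
                int perAttacker = Math.Min(attackPoints, defenderHp);
                long totalDamage = (long)attackers * perAttacker + carryDamage;

                int deaths = (int)Math.Min(totalDamage / defenderHp, (long)defenders);
                carryDamage = (int)(totalDamage % defenderHp);   // spread the remainder
                defenders -= deaths;
                return deaths;
            }

            public static void Main()
            {
                // The example above: 500 attackers, 50 AP vs 1000 defenders, 100 HP.
                int defenders = 1000, carry = 0;
                int deaths = ResolveRound(500, 50, ref defenders, 100, ref carry);
                Console.WriteLine(deaths + " deaths, " + carry + " damage carried");
                // Prints "250 deaths, 0 damage carried", matching 25000 / 100.
            }
        }

    The carried remainder only matters once random factors make totalDamage a non-multiple of HP; persisting it per defending stack is what lets "some are at 50% damage" survive between rounds.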


  • Deploying InfoPath forms – idiosyncrasies

    - by PointsToShare
    Well, I have written a sophisticated PowerShell script to expedite the deployment of InfoPath forms (.XSN files). Along the way, by way of trial and error (mostly error and error), I discovered a few little things. Here they are.

    • Regardless of how the install command is run - PowerShell or the GUI in Central Admin - SharePoint wraps the XSN inside a solution (WSP), then installs and deploys the solution.
    • The solution is named by concatenating "form-" with the first 16 characters (or fewer, if the file name is shorter than 16) of the file name, plus the required .wsp at the end. So if the form name was MyInfopathForm.xsn, the solution name will be form-MyInfopathForm.wsp, but for WithdrawalOfRequestsForRefund.xsn it will be named form-WithdrawalOfRequ.wsp.
    • It only gets worse! Had there already been a solution file with the same name, Microsoft appends a three-digit number to the name, like MyInfopathForm-123.wsp. Remember, a digit is a finger - I suspect a middle finger - so when you deploy the same form time and again (many versions of it, or, as it was in my case, testing a script), you'll end up with many such digit (middle finger) appended solutions, all un-deployed except the last one. This is not a bug. It's a feature!

    Well, there are ways around it. When working by hand, remove the solution from the solution store before deploying the form again. In the script I do the same thing.

    And finally, an important caveat: make sure that all your form names are unique in the first 16 characters. If you also have a form with the name WithdrawalOfRequestForRelief.xsn, you're in trouble!

    That's all folks!
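    The naming rule is easy to pin down in code; a quick illustration (C#, with a hypothetical helper name) that reproduces the two examples above:

        using System;
        using System.IO;

        public class WspNaming
        {
            // "form-" + first 16 chars of the file name (or fewer) + ".wsp"
            public static string SolutionName(string xsnFileName)
            {
                string stem = Path.GetFileNameWithoutExtension(xsnFileName);
                string prefix = stem.Substring(0, Math.Min(16, stem.Length));
                return "form-" + prefix + ".wsp";
            }

            public static void Main()
            {
                Console.WriteLine(SolutionName("MyInfopathForm.xsn"));
                // form-MyInfopathForm.wsp
                Console.WriteLine(SolutionName("WithdrawalOfRequestsForRefund.xsn"));
                // form-WithdrawalOfRequ.wsp
            }
        }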


  • Is wisdom of what happens 'behind the scenes' (in the compiler, external DLLs, etc.) important?

    - by I_Question_Things_Deeply
    I have been a computer fanatic for almost a decade now. I've always loved and wondered how computers work, from the purest, lowest hardware level to the very smallest pixel on the screen, and all the software around that.

    That seems to be my problem, though. As I try to write code (I'm pretty fluent at C++), I always sit for enormous amounts of time in front of a text editor, wondering how every line, statement, datum, function, etc. will correspond to every Assembly and machine instruction performed to do absolutely everything necessary for the kernel to allocate memory to run my compiled program, along with all of the other hardware being used as well.

    For example, I would write:

        cout << "Before memory changed" << endl;

    and run the debugger to get the Assembly for this, and then try to reverse-disassemble the Assembly to machine code based on my ISA, and then research every .dll, library file, linked library, linking process, linker source code of the program, the makefile, the steps my kernel takes in processing this compilation, the hardware's part aside from the processor (e.g. video card, sound card, chipset, cache latency, byte-sized registers, calling-convention use, DDR3 RAM and disk drive, filesystem functioning, and so many other things).

    Am I going about programming wrong? I mean, I feel I should know everything that goes on underneath English syntax in a computer program. But the problem is that the more I research every little thing, the less I actually accomplish at all. I can never finish anything because of this mentality, yet I feel compelled to know everything... What should I do?


  • Are specific types still necessary?

    - by MKO
    One thing that occurred to me the other day: are specific types still necessary, or a legacy that is holding us back? What I mean is: do we really need short, int, long, bigint, etc.?

    I understand the reasoning: variables/objects are kept in memory, memory needs to be allocated, and therefore we need to know how big a variable can be. But really, shouldn't a modern programming language be able to handle "adaptive types"? I.e., if something is only ever allocated in the shortint range, it uses fewer bytes, and if something is suddenly assigned a very big number, the memory is allocated accordingly for that particular instance. Floats, reals and doubles are a bit trickier, since the type depends on what precision you need. Strings should, however, be able to take up less memory in many instances (in .NET) where mostly ASCII is used, but strings always take up double the memory because of Unicode encoding.

    One argument for specific types might be that they're part of the specification; for example, a variable should not be able to be bigger than a certain value, so we set it to shortint. But why not have type constraints instead? It would be much more flexible and powerful to be able to set permissible ranges and values on variables (and properties).

    I realize the immense problem in revamping the type architecture, since it's so tightly integrated with the underlying hardware, and things like serialization might become tricky indeed. But from a programming perspective, it should be great, no?
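    For what it's worth, .NET already ships one such adaptive type; a minimal sketch, assuming a reference to .NET 4's System.Numerics assembly: BigInteger grows its storage with the value, trading fixed size for allocation cost, which is exactly the trade-off in question:

        using System;
        using System.Numerics;

        public class AdaptiveIntDemo
        {
            public static void Main()
            {
                BigInteger n = 1;
                for (int i = 1; i <= 100; i++)
                {
                    n *= i;   // 100! overflows every fixed-width integer type
                }

                // Storage grew on demand to fit the value.
                Console.WriteLine(n.ToByteArray().Length + " bytes");
                Console.WriteLine(n);
            }
        }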


  • Why have we got so many Linux distributions? [closed]

    - by nebukadnezzar
    Pointed to from an answer to another question, I came across this graphic, and I'm shocked at how many Linux distributions currently exist. However, it seems that most of these distributions are forks of already popular distributions with minimal changes, usually limited to themes, wallpapers and buttons. It would still seem easier to create a sub-distribution with the required changes, such as Xubuntu with XFCE4, Kubuntu with KDE4, Fluxbuntu with Fluxbox, etc.

    In my mind there are a number of problems with having so many distributions - perhaps less security/stability due to smaller groups of developers, and also the confusingly vast range of choice for newcomers to Linux. Some reasons that developers might decide to fork are:

    - Specializing on a particular topic (a work-related topic, e.g. for a hospital, etc.)
    - An exceptional architecture that requires a special set of software
    - Use of non-FOSS, proprietary technology, and such

    So what other reasons are there that have caused so many people to decide to create their own distributions? What are the thought processes that have led to this? And are these "valid" reasons - do we need so many distributions? If you can back your experiences up with references, that would be great.


  • How to avoid "DO YOU HAZ TEH CODEZ" situations?

    - by volothamp
    I have a strange situation at work, where a colleague of mine often asks me and other co-workers for working code. I would like to help him, but this constant request for trivial snippets interrupts my thoughts and sometimes makes it hard to concentrate. Plus, I have the impression (...) that these requests are generated by lack of competence more than by laziness. In fact, he often asks things pretending to know the answer, since when I solve the problem he usually says things like "Sure" or "Yes, that's what I thought", giving me the impression that my answer isn't worth much.

    How can I solve this embarrassing situation? Should I point out his lack of knowledge more explicitly in front of other colleagues (by saying things like "do it yourself if you can, please"), or continue giving him what he wants? I think that he should aggregate all his questions into one, so that I can give him a portion of my time and he can work all by himself on his things.

    There is no hierarchy in the team; I must say we both have a similar seniority of five years, more or less. For the same reason, I believe I cannot report to management, since trivial questions are often ignored. I discussed it with two other members, and they agree with me; in fact, he often asks things cycling through colleagues.


  • Binding in the view or the controller?

    - by da_b0uncer
    I've seen two different approaches to MVC on the web. One, as in ExtJS, is to bind the callbacks to the view via the controller: find every element in the view and add the functionality there. The other, as in angular.js (and server-side in the Lift framework), is to bind in the views and just write the functionality in the controller. Which is better and cleaner?

    The ExtJS approach has dumb views and all the logic in the controller, which seems clean to me. I had problems with global IDs for GUI elements, or relative navigation to GUI elements, in this approach: when I changed the view, the controller couldn't find the buttons anymore, or I had multiple instances of one button with the same ID in a single application because of the global ID. But I solved this with IDs that are only global within a view and can appear in the application multiple times. So I could mess with the (dumb) view's layout and design, and the functionality wouldn't break.

    The angular.js approach, with the bindings in the view, doesn't have the problem with global IDs. Also, the person who changes the view layout has to know the IDs anyway, so the controller can put the data in the right spot. So if I write <a ng-click="doThis()" /> for angular.js and implement doThis(), or write <a lid="buttonwhichdoesthis" /> for ExtJS, find the element by its local ID and add doThis() as a handler on the controller side, the two don't seem so different. The only thing is, the second one has one more layer of indirection, which seems cleaner; the first one seems to cost less effort.


  • Layers - Logical separation vs physical

    - by P.Brian.Mackey
    Some programmers recommend logical separation of layers over physical. For example, given a DL (data layer), this means we create a DL namespace, not a DL assembly. Benefits include:

    - Faster compilation time
    - Simpler deployment
    - Faster startup time for your program
    - Fewer assemblies to reference

    I'm on a small team of 5 devs. We have over 50 assemblies to maintain. IMO this ratio is far from ideal. I prefer an extreme-programming approach: if 100 assemblies are easier to maintain than 10,000, then 1 assembly must be easier than 100. Given technical limits, we should strive for < 5 assemblies. New assemblies are created out of technical need, not layer requirements.

    Developers are worried for a few reasons.

    A. People like to work in their own environment so they don't step on each other's toes.
    B. Microsoft tends to create new assemblies; e.g. ASP.NET has its own DLL, and so does WinForms, etc.
    C. Devs view this drive for a common assembly as a threat. Some team members have a tendency to change the common layer without regard for how it will impact dependencies.

    My personal view: I see A as silos, aka cowboy programming, and suggest we implement branching to create isolation. On C: first, that is a human problem and we shouldn't create technical workarounds for human behavior. Second, my goal is not to put everything in common; rather, I want partitions to be made in namespaces, not assemblies. Having a shared assembly doesn't make everything common.

    I want the community to chime in and tell me if I've gone off my rocker. Is a drive for a single assembly, or my viewpoint in general, illogical or otherwise a bad idea?


  • Spring Cleaning

    - by Tim Dexter
    I recently got a shiny new laptop; moving my shiz from old to new was not the nightmare it used to be. I have gotten into the habit of using a second hard drive in the media bay where the CDROM normally sits. That drive contains my life's work with BIP. I can pull it out and plug it into another machine very easily.

    I have been sorting through some old directories and files, archiving some, sharing others with colleagues. For instance, a little dated, but if you were looking for a list of Publisher reports available in EBS R12.1, here it is. I'm trying to track down a more recent R12 instance and will re-post the document.

    I also found another gem; it's a little out there in terms of usefulness, but I'm sharing it none the less. You can embed, or locally or remotely reference, SVG graphics (in XML format) and bring the images into BIP outputs. Template and sample data here.

    There's also a nice set of templates showing page-number control and page suppression - they will need some explanation, so I'll save them for another post. The list goes on, but I'll save the rest for later. Back to the clean-up!

