Search Results

Search found 9494 results on 380 pages for 'least squares'.


  • Is it common to only pay developers for the time they said a project would take?

    - by BAM
    I work at a small startup (<10 people), and I was recently assigned (along with one other developer) to a relatively small project. The project involved moving an existing iOS app to Android. The client told us they had built the app for iOS in 300 man-hours. Not knowing at the time that this figure was completely false, we naively and optimistically assumed that if they could build the app from scratch in that amount of time, we could easily "port" it in a similar amount of time. Therefore, we drafted up a fixed-price contract based on 350 man-hours, with a 5 week deadline. (We are well aware now of how big of a mistake this was... Never let the client tell you how long it's going to take!) Anyway, by week 4 we had already surpassed our 350 hours, and we estimated that there were at least 2 more weeks left on the project. We were told to continue working, but that the company could not afford to pay out on overdue projects anymore. I thought this just meant "be more careful about estimates in the future". However a few weeks later, the company president informed us that we would not be getting paid for any time past 350 man-hours. We argued over the issue for almost an hour. He claimed, however, that this is standard practice for many organizations, and that I was unreasonable for making a big deal out of it. So is this really a common thing, or am I justified in being upset about it? Thanks in advance for any advice!

    Read the article

  • Problem with installing Nvidia display drivers on Ubuntu 13.10

    - by Pascal
    Hello everyone and thank you for taking a look at this topic! I'm currently trying out Ubuntu 13.10, but I keep hitting a wall when it comes to installing a driver. I've tried: sudo apt-get install nvidia-current. This resulted in an unbootable system: the screen just stayed black and the cursor displayed as an 'X'. After that I had to re-install Ubuntu. The computer I'm using is an Acer Aspire V3 with a built-in Nvidia GeForce GT 630M and also an Intel HD Graphics chipset (not sure if chipset is the right word here). "lspci | grep VGA" output:
        pascal@pascal-Aspire-V3-571G:~$ lspci | grep VGA
        00:02.0 VGA compatible controller: Intel Corporation 3rd Gen Core processor Graphics Controller (rev 09)
        01:00.0 VGA compatible controller: NVIDIA Corporation GF108M [GeForce GT 630M] (rev a1)
    I've searched a bit here and there and found out that it would be wise to mention that this laptop is (or so I think) using Nvidia Optimus; not sure if it adds anything to the subject, but I'll mention it just to be sure. Now to the questions: Q1: What causes this, and how can I fix it? Q2: What additional information could I provide to help you help me?

    Read the article

  • Apply a BSU patch manually in Off-line Mode

    - by adejuanc
    BSU Patch Installation - Offline Mode
    To apply a patch or patches in offline mode, you will need at least the patch-catalog.xml file and one patch jar file.
    Note: to apply a patch in offline mode, refer to section 8, "Using the Command-Line Interface", of the Using the Command-Line Interface guide, available here - http://download.oracle.com/docs/cd/E14571_01/doc.1111/e14143/toc.htm
    Steps:
    1. Make sure you have the following files inside the folder WLS_ORACLE_HOME/utils/bsu/cache_dir:
       - patch-catalog.xml
       - XXXX.jar
       Note that XXXX is the patch ID; it is just used here as a reference name.
    2. Apply the patch by running the following commands:
       > cd WLS_ORACLE_HOME/utils/bsu/
       > ./bsu.sh -prod_dir=WLS_ORACLE_HOME/wlserver_10.3 -patchlist=<Patch_Id> -verbose -install
       For example:
       > ./bsu.sh -prod_dir=/Oracle/Middleware/wlserver_10.3 -patchlist=XXXX -verbose -install
       After this the patch should be applied successfully.
    3. You can view the report of applied patches by running the following command:
       > ./bsu.sh -report -patch_id_mask=<Patch_Id>
       For example:
       > ./bsu.sh -report -patch_id_mask=XXXX
    4. Restart the server and watch the standard output logs to verify the installation.
    5. If you have more patches to apply, go back to step 1.
    6. Repeat the process on each server instance, including the Admin Server.

    Read the article

  • How can I refresh/reinstall/clear/set-to-default my bootup process?

    - by Tchalvak
    I'm currently having a problem with my bootup process that is getting progressively worse as time goes on. While booting, the machine does a few minutes of hard-drive reading. During that, instead of showing a boot splash screen, it shows various dashes and dots, as if the video output isn't being recognized properly. What is displayed actually has colors similar to the splash screen (purple); it is simply garbled. It then does a few more minutes of hard-drive reads, and if I leave it long enough, sometimes it boots into the desktop (and auto-logs-in). Sometimes, unfortunately, it just hangs on that garbled screen and reads from the hard drive forever. Notably, I've also stopped being able to access grub during bootup (perhaps it is just not displayed correctly by the video output; hard to tell). This symptom has grown worse over the course of various Ubuntu upgrades, so I suspect that the upgrade process is leaving behind cruft. So, is there a safe way for me to "refresh" the boot system so that it is clean, new, fast, and reliable? For example, to test out a cleanly configured boot, make sure that it works (try before I buy), and then apply it to the system to eliminate as much of this problem as possible? Edit: Here is the requested bootchart: http://imgur.com/9jocF

    Read the article

  • My Obligatory iPad Post

    - by mark.wilcox
    I've had my iPad for about a week now, so I thought I'd write down some thoughts based on my initial experiences. Here are my initial take-aways:
    1 - Netflix On Demand - I'm a movie junkie. I'm now more apt to just start a movie as background sound for my workday (I telecommute - so except for the occasional bark from my dog, it's awfully quiet here if I don't have something going).
    2 - The email client is really nice, and I'm as fast or faster typing when I have the wireless keyboard engaged. Even with the onscreen keyboard I'm already close to 75% of desktop speed.
    3 - The battery life is incredible - I think this is the first case where a mobile device actually under-promised on battery.
    4 - It has totally killed the notion of using a normal PC for my wife and mother-in-law - neither of whom had wanted an iPhone/iPod Touch or really any Apple device until they got to play with my iPad. The concept of instant on, easy to hold, and touch-based navigation has them hooked. Heck, it has me hooked.
    My ultimate goal is to be able to have it at least replace the need to take my netbook with me on the road. I haven't had a chance to complete my testing on that front yet - between work, my wife traveling (for a change), and now my wife home sick, I haven't had time to just play with it. But so far my only regret: that I haven't already bought two more for everyone else in my family who wants to use mine. Posted via email from Virtual Identity Dialogue

    Read the article

  • How safe is it to rely on third-party Python libs in a production product?

    - by skyler
    I'm new to Python and come from the write-everything-yourself world of PHP (at least this is how I always approached it). I'm using Flask, WTForms, Jinja2, and I've just discovered Flask-Login, which I want to use. My question is about the reliability of using third-party libraries for core functionality in a project that is planned to be around for several years. I've installed these libraries (via pip) into a virtualenv environment. What happens if these libraries stop being distributed? Should I back up these libraries (are they eggs)? Can I store these libraries in my project itself, instead of relying on pip to install them in a virtualenv? And should I store them separately? I'm worried that I'll rely on a library for core functionality, and then one day I'll download an incompatible version through pip, or the author or maintainer will stop distributing it and it'll no longer be available. How can I protect against this, and ensure that any third-party libraries I use in my projects will always be available as they are now?

    Read the article

  • Should we test all our methods?

    - by Zenzen
    So today I had a talk with my teammate about unit testing. The whole thing started when he asked me "hey, where are the tests for that class, I see only one?". The whole class was a manager (or a service if you prefer to call it that), and almost all the methods were simply delegating to a DAO, so they looked like this:
        SomeClass getSomething(parameters) {
            return myDao.findSomethingBySomething(parameters);
        }
    A kind of boilerplate with no logic (or at least I do not consider such simple delegation to be logic), but a useful boilerplate in most cases (layer separation etc.). And we had a rather lengthy discussion about whether or not I should unit test it (I think it is worth mentioning that I did fully unit test the DAO). His main arguments were that it was not TDD (obviously), that someone might want to see the test to check what this method does (I do not know how it could be more obvious), and that in the future someone might want to change the implementation and add new (or more like "any") logic to it (in which case I guess someone should simply test that logic). This made me think, though. Should we strive for the highest test coverage percentage? Or is it simply art for art's sake? I simply do not see any reason behind testing things like: getters and setters (unless they actually have some logic in them), "boilerplate" code. Obviously a test for such a method (with mocks) would take me less than a minute to write, but I guess that is still time wasted, and it makes every CI run a little longer. Are there any rational (not "flammable") reasons why one should test every single line of code (or as many as one can)?
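
    For what it's worth, a test for that kind of delegating method really is tiny. Below is a minimal, self-contained sketch in Java using JUnit 4 and a hand-rolled fake DAO; the Manager/Dao/SomeClass types are stand-ins mirroring the pseudocode above, not classes from any real codebase.
        import static org.junit.Assert.assertEquals;
        import static org.junit.Assert.assertSame;
        import org.junit.Test;

        public class DelegationTest {

            // stand-in types mirroring the pseudocode in the question; not real project classes
            static class SomeClass {}

            interface MyDao {
                SomeClass findSomethingBySomething(String parameters);
            }

            static class SomeManager {
                private final MyDao myDao;
                SomeManager(MyDao myDao) { this.myDao = myDao; }
                SomeClass getSomething(String parameters) {
                    return myDao.findSomethingBySomething(parameters);
                }
            }

            @Test
            public void getSomethingDelegatesToTheDao() {
                SomeClass expected = new SomeClass();
                StringBuilder seenParameters = new StringBuilder();
                // fake DAO: records the parameters it was called with and returns a canned object
                MyDao fakeDao = parameters -> {
                    seenParameters.append(parameters);
                    return expected;
                };

                SomeClass actual = new SomeManager(fakeDao).getSomething("someKey");

                assertSame(expected, actual);                       // returns whatever the DAO returned
                assertEquals("someKey", seenParameters.toString()); // and forwards the parameters unchanged
            }
        }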

    Read the article

  • Enforcing Constraints Upon Data Documents of Various Formats

    - by Christopher Berman
    This seems like the sort of problem that must have been solved elegantly long ago, but I haven't the foggiest idea how to google it and find it. Suppose you're maintaining a large legacy system, which has a large collection of data (tens of GB) in various formats, including XML and two different internal configuration formats. Suppose further that there are abstract rules governing the values these files may or may not contain. EXAMPLE: File A defines the raw, mathematical data pertaining to the aerodynamics of a car, for consumption by the physics component of the system. File B contains certain values from File A in an easily accessible XML hierarchy, for consumption by a different component of the system. There exists, therefore, an abstract rule (or constraint) that the values in File B must match the values in File A. This is probably the simplest constraint that can be specified, but in practice the constraints between files can become very complicated indeed. What is the best method for managing these constraints between files of arbitrary formats, short of migrating it all over to an RDBMS (which simply isn't feasible for the foreseeable future)? Has this problem been solved already? To be more specific, I would expect the solution to at least produce notifications of violated constraints; the solution need not resolve them.
    Sample file structures:
    File A (JeepWrangler2011.emv):
        MODEL JeepWrangler2011 {
            EsotericMathValueX 11.1
            EsotericMathValueY 22.2
            EsotericMathValueZ 33.3
        }
    File B (JeepWrangler2011.xml):
        <model name="JeepWrangler2011">
            <!--These values must correspond to File A's EsotericMathValues-->
            <modelExtent x="11.1" y="22.2" z="33.3"/>
            [...]
        </model>
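
    As an illustration of the "notify on violated constraints" idea, here is a minimal sketch in Java that checks exactly the sample constraint above: it pulls the EsotericMathValue entries out of the .emv file with a regular expression, reads the modelExtent attributes from the XML file with the standard DOM parser, and reports any mismatch. The file names, the attribute-to-value mapping, and the tolerance are assumptions based only on the sample shown here.
        import java.io.File;
        import java.nio.file.Files;
        import java.nio.file.Paths;
        import java.util.HashMap;
        import java.util.Map;
        import java.util.regex.Matcher;
        import java.util.regex.Pattern;
        import javax.xml.parsers.DocumentBuilderFactory;
        import org.w3c.dom.Element;

        public class ExtentConstraintCheck {
            public static void main(String[] args) throws Exception {
                // File A (assumed name): "EsotericMathValueX 11.1" -> key "X", value 11.1
                Map<String, Double> emv = new HashMap<>();
                String fileA = new String(Files.readAllBytes(Paths.get("JeepWrangler2011.emv")));
                Matcher m = Pattern.compile("EsotericMathValue([XYZ])\\s+([-0-9.]+)").matcher(fileA);
                while (m.find()) {
                    emv.put(m.group(1), Double.parseDouble(m.group(2)));
                }

                // File B (assumed name): the x/y/z attributes of the <modelExtent> element
                Element extent = (Element) DocumentBuilderFactory.newInstance()
                        .newDocumentBuilder()
                        .parse(new File("JeepWrangler2011.xml"))
                        .getElementsByTagName("modelExtent")
                        .item(0);

                for (String axis : new String[] {"X", "Y", "Z"}) {
                    double a = emv.get(axis);
                    double b = Double.parseDouble(extent.getAttribute(axis.toLowerCase()));
                    if (Math.abs(a - b) > 1e-9) {   // tolerance is an arbitrary choice for the sketch
                        System.out.println("Constraint violated for " + axis + ": " + a + " (File A) vs " + b + " (File B)");
                    }
                }
            }
        }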

    Read the article

  • How do you price your work?

    - by Dr.Kameleon
    Well, let me explain : This has really been an issue for me, for such a long time. And what is worse - since coding is something I simply ADORE (I would definitely do it, even if there was no payment involved whatsoever..) - is that I always end up feeling somewhat awkward... Anyway... So, here's the deal : You start working on a project, you may have something in your mind, and even if you're lucky enough and the client needs no "cost estimates" beforehand, sooner or later you'll face the ultimate dilemma of pricing your own work. So, how do YOU do it? By estimating the time you put into it? (obviously, this is not exact, 'coz perhaps a more capable coder will need much less time for the very same thing than a not-so-competent coder + even the very same coder may not "perform" equally at all times) By the Lines of code you've written? (obviously, this is not a measure either : a 10-line script that does exactly the same with a 1000-line script is, at least for me, "better") By taking into account the level of complexity of the project and, perhaps, how specialised the subject is? By taking into account other factors? (e.g. the value of the project for your customer)

    Read the article

  • share distribution question

    - by facebook-100000781341887
    Hi, I just developed a Facebook game (mafia-like), but the graphics I made are not good, because they were made by referencing some existing photos, tracing them with AI, and coloring them. Therefore, I invited a friend to join me; he is a graphic designer and owns a company with his friend (I know both of them). For the share, I expected at least 70% for me and at most 30% for them (both of them want to join). They gave me a counter-offer: 60% for me and 40% for them. Of course, I feel their counter-offer is unacceptable, because they would only build the graphics part-time, while all the other work, such as coding, web hosting, etc., is what I do full-time. Their reason for being worth 40% is that they will make good graphics, they can provide an advertising channel (in a local magazine), etc. Actually, I don't think the game needs advertisement in a local magazine, because the game is not targeted at a local audience. Please give me some comments on this issue (is the share fair? what is the importance of the game's graphics, and are they worth more than 30%?), or can anyone share their experience with this? Thanks in advance.

    Read the article

  • How to sell logistical procedures that require less time to perform but more finesse?

    - by foampile
    I am working with a group where part of the responsibilities is managing a certain set of configuration files which, of course, have the same skeleton/structure across different environments but different values (like server, user, this setting, that setting, etc.). Pretty classic scenario... The problem is that everyone just goes and modifies the final, environment-specific files and basically repeats the work for every environment. Personally, I am offended to have to perform repeatable, mundane tasks in this day and age when we have technologies to automate it all. So I devised a very simple procedure of abstracting the files into templates, stubbing env-specific values with parameters, and then wrote a simple Perl script that, given a template and an environment matrix with env-specific values for each param, produces the final file. This is nothing special, cutting-edge, or revolutionary - I am pretty sure that 20 years ago efficient places did their CM like that. However, it requires that changes are made at the template level and then distributed across the different environments using the script, not by making changes in the final environment-specific files. This is where I am encountering resentment: they feel "comfortable" doing it their old way of manual, repeated labor. Personally, I don't have a problem with them working hard rather than smart, but the problem is that when I have to build on top of someone else's changes, I have to merge their changes into my template from a specific file, which takes time and is grueling. So my question is how to go about selling my method, which makes things so much faster, in an environment that is resistant to change and where most things have to be done at the level of the least competent team member?
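
    For reference, the core of the approach is very little code. The sketch below is a Java illustration of the same idea (the script described above was Perl); the ${NAME} placeholder syntax, the file names, and the parameter names are all made up for the example.
        import java.nio.file.Files;
        import java.nio.file.Paths;
        import java.util.HashMap;
        import java.util.Map;

        public class ConfigRenderer {
            // replace every ${NAME} placeholder in the template with this environment's value
            static String render(String template, Map<String, String> envValues) {
                String out = template;
                for (Map.Entry<String, String> e : envValues.entrySet()) {
                    out = out.replace("${" + e.getKey() + "}", e.getValue());
                }
                return out;
            }

            public static void main(String[] args) throws Exception {
                // template file name and parameter names are hypothetical
                String template = new String(Files.readAllBytes(Paths.get("app.conf.template")));

                // one column of the "environment matrix": the values specific to the QA environment
                Map<String, String> qa = new HashMap<>();
                qa.put("SERVER", "qa-server01");
                qa.put("USER", "qa_user");

                Files.write(Paths.get("app.conf.qa"), render(template, qa).getBytes());
            }
        }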

    Read the article

  • How to identify process that's sending error messages to terminal?

    - by kjo
    The following error message occasionally appears in my terminal: Failed to open VDPAU backend libvdpau_nvidia.so: cannot open shared object file: No such file or directory ...which is pretty annoying. I have searched online for solutions to this error, without success. Is there any way to at least identify the process responsible for sending these error messages to my terminal? EDIT: Let me clarify that, as far as I can tell, these error messages appear "out of the blue". In fact, they appear asynchronously with respect to my interactions with the terminal (more often than not I see them for the first time when I return to a terminal window that has been unattended for some time). I'm sure there's a definite, deterministic cause for these messages, but it is not one that I can readily identify. In short, I have not noticed any pattern or regularity to their occurrence. In particular, in my case their occurrence has nothing to do with running mplayer, or any other video playback program. (Please see my earlier post about it here.) For one thing, the machine in question is a work machine, and I rarely watch any videos on it. In the very few instances in which I have watched a video on this machine I used VLC, not mplayer, and these errors never appeared on those rare occasions.

    Read the article

  • Why does my VertexDeclaration apparently not contain Position0?

    - by Phil
    I'm trying to get my code from calling each individual draw call down to using at least a VertexBuffer, and preferably an IndexBuffer, but now that I'm attempting to test my code, I'm getting the error:
    The current vertex declaration does not include all the elements required by the current vertex shader. Position0 is missing.
    Which makes absolutely no sense to me, as my VertexDeclaration is:
        public readonly static VertexDeclaration VertexDeclaration = new VertexDeclaration(
            new VertexElement(0, VertexElementFormat.Vector3, VertexElementUsage.Position, 0),
            new VertexElement(sizeof(float) * 3, VertexElementFormat.Color, VertexElementUsage.Color, 0),
            new VertexElement(sizeof(float) * 3 + 4, VertexElementFormat.Vector3, VertexElementUsage.Normal, 0)
        );
    Which clearly contains the information. I am attempting to draw with the following lines:
        VertexBuffer vb = new VertexBuffer(GraphicsDevice, VertexPositionColorNormal.VertexDeclaration, c.VertexList.Count, BufferUsage.WriteOnly);
        IndexBuffer ib = new IndexBuffer(GraphicsDevice, typeof(int), c.IndexList.Count, BufferUsage.WriteOnly);
        vb.SetData<VertexPositionColorNormal>(c.VertexList.ToArray());
        ib.SetData<int>(c.IndexList.ToArray());
        GraphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, vb.VertexCount, 0, c.IndexList.Count / 3);
    Where c is a Chunk class containing an 8x8x8 array of boxes. Full code is available at https://github.com/mrbaggins/Box/tree/ProperMeshing/box/box. Relevant locations are Chunk.cs (contains the VertexDeclaration) and Game1.cs (Draw() is in lines 230-250). Not much else of relevance to this problem anywhere else. Note that large commented sections are from an old version of the drawing code.

    Read the article

  • Why is CS never a topic of conversation of the layman? [closed]

    - by hydroparadise
    Granted, every profession has its technicalities. If you are an MD, you had better know the anatomy of the human body, and if you are an astronomer, you had better know your calculus. Yet you don't have to know these more advanced topics to know that smoking might give you lung cancer because of carcinogens, or that the moon revolves around the earth because of gravity (thank you, Discovery Channel). There's a sort of common knowledge (at least in more developed countries) of these more advanced topics. With that said, why are things like recursive descent parsing, BNF, or Turing machines hardly ever mentioned outside 3000- or 4000-level classes in a university setting, or between colleagues? Even back in my days before college, in my pursuit of knowledge about how computers work, these very important topics (IMHO) never seemed to see the light of day. Many different sources and sites go into "What is a processor?" or "What is RAM?", or "What is an OS?". You might get lucky and discover something about programming languages and how they play a role in how applications are created, but nothing about the tools for creating the language itself. To extend this idea, Dennis Ritchie died shortly after Steve Jobs, yet Dennis Ritchie got very little press compared to Steve Jobs. So, the heart of my question: does the public in general not care to hear about computer science topics that make the technology in their lives work, or does the computer science community not lend itself to the general public to close the knowledge gap? Am I wrong to think the general public has the same thirst for knowledge about how things work as I do? Please consider the question carefully before answering or voting to close.

    Read the article

  • Mercurial says "nothing changed", but it did. Sometimes my software is too clever.

    - by user12608033
    It seems I have found a "bug" in Mercurial. It takes a shortcut when checking for differences in tracked files. If the file's size and modification time are unchanged, it assumes its contents are unchanged:
        $ hg init .
        $ cp -p .sccs2hg/2005-06-05_00\:00\:00\,nicstat.c nicstat.c
        $ ls -ogE nicstat.c
        -rw-r--r-- 1 14722 2012-08-24 11:22:48.819451726 -0700 nicstat.c
        $ hg add nicstat.c
        $ hg commit -m "added nicstat.c"
        $ cp -p .sccs2hg/2005-07-02_00\:00\:00\,nicstat.c nicstat.c
        $ ls -ogE nicstat.c
        -rw-r--r-- 1 14722 2012-08-24 11:22:48.819451726 -0700 nicstat.c
        $ hg diff
        $ hg commit
        nothing changed
        $ touch nicstat.c
        $ hg diff
        diff -r b49cf59d431d nicstat.c
        --- a/nicstat.c Fri Aug 24 11:21:27 2012 -0700
        +++ b/nicstat.c Fri Aug 24 11:22:50 2012 -0700
        @@ -2,7 +2,7 @@
          * nicstat - print network traffic, Kb/s read and written. Solaris 8+.
          * "netstat -i" only gives a packet count, this program gives Kbytes.
          *
        - * 05-Jun-2005, ver 0.81 (check for new versions, http://www.brendangregg.com)
        + * 02-Jul-2005, ver 0.90 (check for new versions, http://www.brendangregg.com)
          * [...]
    Now, before you agree or disagree with me on whether this is a bug, I will also say that I believe it is a feature. Yes, I feel it is an acceptable shortcut because in "real" situations an edit to a file will change the modification time by at least one second (the resolution that hg diff or hg commit is looking for). The benefit of the shortcut is greatly improved performance of operations like "hg diff" and "hg status", particularly where your repository contains a lot of files. Why did I have no change in modification time? Well, my source file was generated by a script that I have written to convert SCCS change history to Mercurial commits. If my script can generate two revisions of a file within a second, and the files are the same size, then I run afoul of this shortcut. Solution - I will just change my script to apply the modification time from the SCCS history to the file prior to commit. A "touch -t " will do that easily.
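
    For illustration only (this is not Mercurial's actual code), the shortcut described above boils down to a check like the following: if the recorded size and modification time still match, the file is assumed clean and its contents are never compared.
        import java.io.File;

        public class DirtyCheck {
            // illustration of the shortcut, not Mercurial's actual implementation
            static boolean looksUnchanged(File f, long recordedSize, long recordedMtimeMillis) {
                return f.length() == recordedSize && f.lastModified() == recordedMtimeMillis;
            }

            public static void main(String[] args) {
                File f = new File("nicstat.c");
                // stand-ins for the size and timestamp recorded at the previous commit
                long recordedSize = 14722;
                long recordedMtimeMillis = f.lastModified();
                System.out.println(looksUnchanged(f, recordedSize, recordedMtimeMillis)
                        ? "assumed clean, contents not compared"
                        : "size or mtime differ, contents will be compared");
            }
        }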

    Read the article

  • Understanding IDAT chunk of PNG file format

    - by DRapp
    From the sample image below, I have a border in yellow for display purposes only. The actual .png file is a simple black/white image, 3 pixels by 3 pixels. I was originally thinking of trying a 2x2, but that would not help in trying to interpret a low/high vs high/low drawing stream. At least this way, I would have two black and one white from the top, or one white and two black from the bottom. So I read the chunks of data, get to the IDAT chunk, decode that (zlib) and come up with 12 bytes as follows: 00 20 00 40 00 80. So, my question: how does the above get broken down into the 3x3 black and white sample? Also, it is saved in palette format and properly recognizes the bit depth of 1 and a color palette of 2. Palette entry 0 is RGBA all zeros; palette entry 1 has RGBA of 255, 255, 255, 0. I'll eventually get into the other depth formats later; I just wanted to start with what I would expect to be the easiest. Part II: any guidance on handling the other depth formats would also help, especially anything special to be considered regarding the alpha channel (which I am already looking for in the palette) that might trip me up.
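
    For the bit-depth-1 case, the decompressed IDAT data is a sequence of scanlines, each starting with a filter-type byte followed by the pixel bits packed most-significant-bit first. So 00 20 00 40 00 80 reads as three rows of (filter 0x00, one packed byte). The sketch below is a minimal Java illustration under those assumptions (3x3, bit depth 1, filter type 0 on every row); it is not a general PNG decoder.
        public class IdatRows {
            public static void main(String[] args) {
                int width = 3, height = 3;
                // the bytes from the question, after zlib inflation
                byte[] raw = {0x00, 0x20, 0x00, 0x40, 0x00, (byte) 0x80};
                int bytesPerRow = 1 + (width + 7) / 8;   // 1 filter byte + ceil(width/8) packed bytes
                for (int y = 0; y < height; y++) {
                    int filter = raw[y * bytesPerRow] & 0xFF;      // 0 = "None", no unfiltering needed
                    int packed = raw[y * bytesPerRow + 1] & 0xFF;  // the 3 pixels live in the top 3 bits
                    StringBuilder row = new StringBuilder();
                    for (int x = 0; x < width; x++) {
                        int paletteIndex = (packed >> (7 - x)) & 1; // 1 bit per pixel, MSB first
                        row.append(paletteIndex);
                    }
                    System.out.println("row " + y + " (filter " + filter + "): palette indices " + row);
                }
            }
        }
    This prints palette indices 001, 010, 100 for the three rows; each index then picks the corresponding RGBA entry from the two-entry palette described above.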

    Read the article

  • Finding the shortest path through a digraph that visits all nodes

    - by Boluc Papuccuoglu
    I am trying to find the shortest possible path that visits every node of a graph (a node may be visited more than once, and the solution may pick any node as the starting node). The graph is directed, meaning that being able to travel from node A to node B does not mean one can travel from node B to node A. All distances between nodes are equal. I was able to code a brute-force search that found a path of only 27 nodes when I had 27 nodes and each node had a connection to 1 or 2 other nodes. However, the actual problem that I am trying to solve consists of 256 nodes, with each node connecting to either 3 or 4 other nodes. The brute-force algorithm that solved the 27-node graph can produce a 415-node solution (not optimal) within a few seconds, but using the processing power I have at my disposal it takes about 6 hours to arrive at a 402-node solution. What approach should I use to arrive at a solution that I can be certain is the optimal one? For example, use an optimizer algorithm to shorten a non-optimal solution? Or somehow adapt a brute-force search that discards paths that are not optimal? EDIT: (Copying a comment to an answer here to better clarify the question) To clarify, I am not saying that there is a Hamiltonian path and I need to find it; I am trying to find the shortest path in the 256-node graph that visits each node AT LEAST once. With the 27-node run, I was able to find a Hamiltonian path, which assured me that it was an optimal solution. I want to find a solution for the 256-node graph which is the shortest.
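
    One standard way to make "shortest walk visiting every node at least once" tractable for small graphs is to first compute all-pairs shortest path lengths (plain BFS, since every edge has length 1) and then run a Held-Karp style dynamic program over subsets on that metric closure; revisits are then handled implicitly by the shortest-path distances. The sketch below is a minimal Java illustration of that idea; the DP table is O(n^2 * 2^n), so it is feasible for something like the 27-node case but not for 256 nodes, where heuristics or branch-and-bound would be needed. The tiny graph in main() is made up.
        import java.util.ArrayDeque;
        import java.util.Arrays;
        import java.util.Deque;
        import java.util.List;

        public class ShortestCoveringWalk {

            // unit-weight directed graph: BFS from every node gives all-pairs shortest distances
            static int[][] allPairsBfs(List<List<Integer>> adj) {
                int n = adj.size();
                int inf = Integer.MAX_VALUE / 4;
                int[][] dist = new int[n][n];
                for (int[] row : dist) Arrays.fill(row, inf);
                for (int s = 0; s < n; s++) {
                    dist[s][s] = 0;
                    Deque<Integer> queue = new ArrayDeque<>();
                    queue.add(s);
                    while (!queue.isEmpty()) {
                        int u = queue.poll();
                        for (int v : adj.get(u)) {
                            if (dist[s][v] > dist[s][u] + 1) {
                                dist[s][v] = dist[s][u] + 1;
                                queue.add(v);
                            }
                        }
                    }
                }
                return dist;
            }

            // Held-Karp over the metric closure: dp[mask][j] = length of the shortest walk that
            // has visited exactly the nodes in 'mask' and currently ends at node j
            static int shortestWalkVisitingAll(List<List<Integer>> adj) {
                int n = adj.size();
                int inf = Integer.MAX_VALUE / 4;
                int[][] d = allPairsBfs(adj);
                int[][] dp = new int[1 << n][n];
                for (int[] row : dp) Arrays.fill(row, inf);
                for (int j = 0; j < n; j++) dp[1 << j][j] = 0;   // any node may be the start
                for (int mask = 1; mask < (1 << n); mask++) {
                    for (int j = 0; j < n; j++) {
                        if ((mask & (1 << j)) == 0 || dp[mask][j] >= inf) continue;
                        for (int k = 0; k < n; k++) {
                            if ((mask & (1 << k)) != 0 || d[j][k] >= inf) continue;
                            int next = mask | (1 << k);
                            dp[next][k] = Math.min(dp[next][k], dp[mask][j] + d[j][k]);
                        }
                    }
                }
                int best = inf;
                for (int j = 0; j < n; j++) best = Math.min(best, dp[(1 << n) - 1][j]);
                return best;   // number of edges in the shortest covering walk
            }

            public static void main(String[] args) {
                // made-up directed graph: 0->1, 1->2, 1->3, 2->0, 3->1
                List<List<Integer>> adj = Arrays.asList(
                        Arrays.asList(1), Arrays.asList(2, 3), Arrays.asList(0), Arrays.asList(1));
                System.out.println(shortestWalkVisitingAll(adj));   // prints 3 (e.g. 2 -> 0 -> 1 -> 3)
            }
        }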

    Read the article

  • How can I stop a process from moving to the background?

    - by Alex
    I have a machine running Ubuntu server version 12.04.3 LTS. On it, I'm attempting to run a node.js server that needs to stay up and running at all times. I'm running into an issue, however, where periodically I see this happen: [1]+ Stopped sudo node server.js When this happens, I have to manually bring it back with fg, which works fine, at least until it stops again. As far as I can tell, it isn't functioning properly while stopped, since I get no log files in those windows of time. So my question is this: Is there a way to prevent it from being stopped like that? I'm running it in a tmux window, if that changes anything. Also, to address the question before it gets asked: I'm running it as sudo due to some ecryptfs issues I've been having. I was originally running it in my home directory, but when it was left alive for too long things would get out of sync and the file writes it has to do would just stop working. To mitigate that, I moved it out of my home directory, but its new location requires me to use sudo permissions for everything to work correctly. Hopefully that isn't related to the whole background task thing. (sudo and tmux tags included in case one or both turn out to actually be relevant to the solution.)

    Read the article

  • Is there really anything to gain with complex design? [duplicate]

    - by SB2055
    I've been working for a consulting firm for some time, with clients of various sizes, and I've seen web applications ranging in complexity from really simple (MVC, Service Layer, EF, DB) to really complex (MVC, UoW, DI/IoC, Repository, Service, UI Tests, Unit Tests, Integration Tests). But on both ends of the spectrum, the quality requirements are about the same. In simple projects, new devs/consultants can hop on, make changes, and contribute immediately, without having to wade through 6 layers of abstraction to understand what's going on, or risking misunderstanding some complex abstraction and costing the client down the line. In all cases, there was never a need to actually make code swappable or reusable, and the tests were never actually maintained past the first iteration, because requirements changed, it was too time-consuming, deadlines, business pressure, etc. So if, in the end, testing and interfaces aren't used, rapid development (read: cost savings) is a priority, and the project's requirements will be changing a lot while in development, would it be wrong to recommend a super-simple architecture, even to solve a complex problem, for an enterprise client? Is it complexity that defines enterprise solutions, or is it reliability, the number of concurrent users, ease of maintenance, or all of the above? I know this is a very vague question, and any answer wouldn't apply to all cases, but I'm interested in hearing from devs/consultants that have been in the business for a while and have worked with these varying degrees of complexity, to hear whether the cool-but-expensive abstractions are worth the overall cost, at least while the project is in development.

    Read the article

  • Will an online degree get you a job that requires "CS or equivalent 4-year degree"? [on hold]

    - by qel
    I'm a nerdy slacker type who didn't get my life together till I was 30. I've had a real job for a couple years doing C#/SQL. I've gotten several raises, but I'm making less than most developers, and the atmosphere is ... not positive. Looking for a new job, I think my applications get thrown out because I don't have a degree. And I want to finish a Bachelor's just to feel like less of a loser. I have a lot of college credits from 1996-2003 and a low GPA, so I don't know if that's worth much. An online degree looks like a good option, but I just don't know what I should be looking at for online schools because they all look like fake degrees. If they had programs equivalent to a real Comp Sci degree, I don't think they would have weird sounding names like they do. University of Phoenix has a B.S./Information Technology-Software Engineering. DeVry has a B.S./Computer Engineering Technology program. But that's not CS, and most other things I see have even more fake-sounding names. Are these useless degrees? Some people say DeVry and UoP are acceptable, some people say they're a joke. I have enough experience now, though, that maybe all I'm missing is being able to check the box that I have a 4-year degree. Harvard Extension seems like a real degree, even if it isn't a real Harvard degree, but I'd have to live there at least 3 months, which kinda defeats the purpose of an online degree fitting around work.

    Read the article

  • What did Rich Hickey mean when he said, "All that specificity [of interfaces/classes/types] kills your reuse!"

    - by GlenPeterson
    In Rich Hickey's thought-provoking goto conference keynote "The Value of Values", at 29 minutes, he's talking about the overhead of a language like Java and makes a statement like, "All those interfaces kill your reuse." What does he mean? Is that true? In my search for answers, I have run across:
    - The Principle of Least Knowledge, AKA the Law of Demeter, which encourages airtight API interfaces. Wikipedia also lists some disadvantages.
    - Kevlin Henney's Imperial Clothing Crisis, which argues that use, not reuse, is the appropriate goal.
    - Jack Diederich's "Stop Writing Classes" talk, which argues against over-engineering in general.
    Clearly, anything written badly enough will be useless. But how would the interface of a well-written API prevent that code from being used? There are examples throughout history of something made for one purpose being used more for something else. But in the software world, if you use something for a purpose it wasn't intended for, it usually breaks. I'm looking for one good example of a good interface preventing a legitimate but unintended use of some code. Does that exist? I can't picture it.

    Read the article

  • Ubuntu won't load, freezes on purple screen

    - by kara
    The last time I restarted my computer, I could not get Ubuntu to load; the screen would either go black or hang at the purple screen indefinitely. I have had some graphics problems in the past, but had put 'nomodeset' after 'quiet splash' in the grub command line, which at least let Ubuntu load. That doesn't work now, and it doesn't work if I remove it either. I looked up some answers, such as this one: Purple start screen - no splash screen. However, when I enter the root shell in recovery mode from grub, I always get errors when I run those commands, and it won't let me modify the files. Also, if I run in recovery mode and then choose 'resume normal boot', it will continue, but instead of getting the usual interface, I get a black screen that asks for my username and password. I enter these, and it tells me I'm in Ubuntu 12.04, but I'm still on a black screen with text. It also informs me that there are updates to install. When I use the command 'sudo apt-get update', it starts to retrieve the information, but then the screen goes blank after a couple of seconds and I can't do anything anymore. Any ideas?

    Read the article

  • Inactive JSRs looking for Spec Leads

    - by heathervc
    You may have noticed that some JSRs have a classification of "Inactive" on their JSR page. The introduction of this term in 2009 was part of an effort to enable and encourage more transparency into the development of JSRs. You can read more about Inactive JSRs here and also in the JCP FAQ. The following JSR proposals have been Inactive since at least 2009. If you are a JCP Member and are interested in taking over the Specification Lead role for one of these JSRs, please contact the PMO at [email protected] on or before 23 April 2012. With that message, please include the following:
    - the subject line "Spec Lead for JSR ###," where '###' is the JSR number
    - which JCP Member you represent
    - why you wish to take over the Specification Lead role
    Here is the current list of Inactive JSRs for which Members can request to become Specification Leads:
    - JSR 122, JAIN JCAT
    - JSR 161, JAIN ENUM API Specification
    - JSR 182, JPay - Payment API for the Java Platform
    - JSR 210, OSS Service Quality Management API
    - JSR 241, The Groovy Programming Language
    - JSR 251, Pricing API
    - JSR 278, Resource Management API for Java ME
    - JSR 304, Mobile Telephony API v2
    - JSR 305, Annotations for Software Defect Detection
    - JSR 320, Services Framework

    Read the article

  • String Conversion to Char for Java Game

    - by Jen
    Is there someone who could help me achieve the following points on this problem? I can't seem to get it. I tried using toCharArray and Scanner to achieve this, but it doesn't work, nor do I know how to make these things possible for my word game. :(
    · Get a popular name of a person, place, verse, saying or event from the user. This may have a single word or multiple words in it.
    · Create a copy of this string in an array where each letter is replaced with a hyphen (-) and each space is replaced with an underscore (_). Symbols and numbers remain shown.
    · The program then asks for a letter from the user. If the letter is in the inputted string, then it should be shown in the array at the same position it appears in the string. Meaning, the letter replaces the hyphen (-) at the correct position of the array.
    · The program again prompts for a letter from the user and replaces the hyphen (-) in the array if the letter exists in the inputted string. This is done repeatedly until each hyphen (-) is replaced with the correct letter.
    · If the user inputs an invalid letter, that is, a letter that does not exist in the inputted string, then the program should inform the user. If this happens 3 times while there is still at least one hyphen in the array, then the program should inform the user that he lost the game, showing him the whole correct string.
    · If the user completes the game, meaning all hyphens have been replaced with the correct letters, then the program should congratulate the user for a job well done.
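
    A minimal sketch of that game loop in Java is below, using Scanner for input and a char array for the masked copy, since those are the tools mentioned above. Class and variable names are my own, and input validation is kept to a bare minimum.
        import java.util.Scanner;

        public class GuessThePhrase {
            public static void main(String[] args) {
                Scanner in = new Scanner(System.in);
                System.out.print("Enter a popular name, place, verse, saying or event: ");
                String phrase = in.nextLine();

                // masked copy: letters -> '-', spaces -> '_', digits and symbols stay visible
                char[] board = new char[phrase.length()];
                for (int i = 0; i < phrase.length(); i++) {
                    char c = phrase.charAt(i);
                    if (Character.isLetter(c)) board[i] = '-';
                    else if (c == ' ') board[i] = '_';
                    else board[i] = c;
                }

                int misses = 0;
                while (misses < 3 && new String(board).indexOf('-') >= 0) {
                    System.out.println(new String(board));
                    System.out.print("Guess a letter: ");
                    String line = in.nextLine().trim();
                    if (line.isEmpty()) continue;
                    char guess = line.charAt(0);

                    boolean found = false;
                    for (int i = 0; i < phrase.length(); i++) {
                        if (board[i] == '-'
                                && Character.toLowerCase(phrase.charAt(i)) == Character.toLowerCase(guess)) {
                            board[i] = phrase.charAt(i);   // reveal the letter at its position
                            found = true;
                        }
                    }
                    if (!found) {
                        misses++;
                        System.out.println("That letter is not in the text (" + misses + " of 3 misses).");
                    }
                }

                if (new String(board).indexOf('-') < 0) {
                    System.out.println("Well done! The text was: " + phrase);
                } else {
                    System.out.println("You lost. The text was: " + phrase);
                }
            }
        }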

    Read the article

  • Approach to retrieve files from server

    - by Aerus
    I'm in the process of making a Java application with a corresponding update application. At any given time the user may want to update the application, and the updater will ask for a list of files of the latest release. Based on this list, the updater can determine which files need to be downloaded to complete the update. I now have 2 approaches to solve this, but I would like to know which approach will put the least stress on my application and server:
    1. I could send a list of the files I want to download to my server, and the server zips them and simply returns this compressed file to the application.
    2. The updater sends a request for each separate file to the server, which simply returns that file.
    The application will be used mainly in Belgium and The Netherlands, and connections/bandwidth tend to be pretty decent here. The average size of a single file should be around 100 Kb and at most 1 Mb. I expect an update to have anywhere between 10 and 50 new files. I expect at most 100 persons/day to update the application, i.e. in the week when a new version is released. I hope this is enough information to sketch my problem, and any advice is welcome. If there is another common way to tackle this, I'd be glad to hear it.
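
    For what it's worth, the first approach (one zipped response for the requested file list) needs very little client code in Java. The sketch below assumes a hypothetical endpoint that takes the requested file names and streams back a zip archive; the URL, the request format, and the target directory are made up, and there is no error handling or integrity checking.
        import java.io.IOException;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.StandardCopyOption;
        import java.util.List;
        import java.util.zip.ZipEntry;
        import java.util.zip.ZipInputStream;

        public class UpdateDownloader {
            // Ask the (hypothetical) server for one zip containing all requested files
            // and unpack it into the installation directory.
            public static void downloadAndUnpack(List<String> files, Path targetDir) throws IOException {
                URL url = new URL("https://example.com/update?files=" + String.join(",", files));
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                try (ZipInputStream zip = new ZipInputStream(conn.getInputStream())) {
                    ZipEntry entry;
                    while ((entry = zip.getNextEntry()) != null) {
                        Path out = targetDir.resolve(entry.getName());
                        Files.createDirectories(out.getParent());
                        Files.copy(zip, out, StandardCopyOption.REPLACE_EXISTING);
                        zip.closeEntry();
                    }
                } finally {
                    conn.disconnect();
                }
            }
        }
    With at most a few dozen files of roughly 100 Kb each, either approach is light; the zipped response mainly saves per-file connection overhead at the cost of a little CPU on the server.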

    Read the article
