Search Results

Search found 4841 results on 194 pages for 'poor programmer'.


  • vSphere 4 - how can I cancel a file copy in progress?

    - by DrStalker
    Setup: VMware vSphere 4, SAN storage with multiple datastores, no vCenter. I shut down a virtual machine and, using the datastore browser, did a copy/paste to copy the VM to a new datastore with additional space. The file copy performance was very poor, and due to time constraints I decided to cancel the copy task. However, the copy task showing in the vSphere client cannot be cancelled; the cancel option is disabled. Currently I am not able to start the machine in its original location as the disk files are locked for the copy. How can I abort the copy? I tried deleting the target directory but this did not abort the copy task.

    Read the article

  • Interactive Data Language, IDL: Does anybody care?

    - by Alex
    Anyone use a language called Interactive Data Language, IDL? It is popular with scientists. I think it is a poor language because it is proprietary (every terminal running it needs an expensive license) and it has minimal support (try searching Stack Overflow for IDL, the language, right now). I am trying to convince my colleagues to stop using it and learn C/C++/Python/Fortran/Java/Ruby. Does anybody know about or even care about IDL enough to have opinions on it? What do you think of it? Should I tell my colleagues to stop wasting their time on it now? How can I convince them?

    Edit: People are getting the impression that I don't know or use IDL. Also, I said IDL has minimal support, which is true in one sense, so I must clarify that the scientific libraries are indeed large. I use IDL all the time, but this is exactly the problem: I am only using IDL because my colleagues use it. There is a file format IDL uses, the .sav, which can only be opened in IDL, so I must use IDL to work with this data and transfer it back to colleagues, but I know I would be more efficient in another language. This is like someone sending you a Microsoft Word file as an email attachment: if you don't understand how wrong that is, then you probably write too many words, not enough code, and you bought Microsoft Word.

    Edit: As an alternative to IDL, Python is popular. Here is a list of the pros and cons of IDL (and of Python) from AstroBetter:

    Pros of IDL
      - Mature; many numerical and astronomical libraries available
      - Wide astronomical user base
      - Numerical aspect well integrated with the language itself
      - Many local users with deep experience
      - Faster for small arrays
      - Easier installation
      - Good, unified documentation
      - Standard GUI run/debug tool (IDLDE)
      - Single widget system (no angst about which to choose or learn)
      - SAVE/RESTORE capability
      - Use of keyword arguments as flags more convenient

    Cons of IDL
      - Narrow applicability, not well suited to general programming
      - Slower for large arrays
      - Array functionality less powerful
      - Table support poor
      - Limited ability to extend using C or Fortran; such extensions are hard to distribute and support
      - Expensive; sometimes a problem collaborating with others who don't have or can't afford licenses
      - Closed source (only RSI can fix bugs)
      - Very awkward to integrate with IRAF tasks
      - Memory management more awkward
      - Single widget system (useless if working within another framework)
      - Plotting: awkward support for symbols and math text; many font systems, portability issues (v5.1 alleviates somewhat); not as flexible or as extensible; plot windows not intrinsically interactive (e.g., pan & zoom)

    Pros of Python
      - Very general and powerful programming language, yet easy to learn
      - Strong, but optional, object-oriented programming support
      - Very large user and developer community; very extensive and broad library base
      - Very extensible with C, C++, or Fortran; portable distribution mechanisms available
      - Free; non-restrictive license; open source
      - Becoming the standard scripting language for astronomy
      - Easy to use with IRAF tasks
      - Basis of STScI application efforts
      - More general array capabilities
      - Faster for large arrays; better support for memory mapping
      - Many books and online documentation resources available (for the language and its libraries)
      - Better support for table structures
      - Plotting framework (matplotlib) more extensible and general
      - Better font support and portability (only one way to do it, too)
      - Usable within many windowing frameworks (GTK, Tk, WX, Qt…)
      - Standard plotting functionality independent of framework used; plots are embeddable within other GUIs
      - More powerful image handling (multiple simultaneous LUTs, optional resampling/rescaling, alpha blending, etc.)
      - Support for many widget systems
      - Strong local influence over capabilities being developed for Python

    Cons of Python
      - More items to install separately
      - Not as well accepted in the astronomical community (but support clearly growing)
      - Scientific libraries not as mature: documentation not as complete or as unified; not as deep in astronomical libraries and utilities
      - Not all IDL numerical library functions have corresponding functionality in Python
      - Some numeric constructs not quite as consistent with the language (or slightly less convenient than IDL)
      - Array indexing convention “backwards”
      - Small array performance slower
      - No standard GUI run/debug tool
      - Support for many widget systems (angst regarding which to choose)
      - Currently lacks a function equivalent to SAVE/RESTORE in IDL
      - matplotlib does not yet have equivalents for all IDL 2-D plotting capability (e.g., surface plots)
      - Use of keyword arguments as flags less convenient
      - Plotting: comparatively immature, still much development going on; missing some plot types (e.g., surface); 3-D capability requires VTK (though matplotlib has some basic 3-D capability)
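    One side note on the .sav point above: the lock-in is not absolute, since SciPy can read IDL SAVE files from Python. A minimal sketch, assuming SciPy is installed; the filename here is made up:

        import scipy.io

        # Load an IDL SAVE (.sav) file into a plain Python dict keyed by the
        # (lower-cased) IDL variable names; values come back as NumPy arrays/scalars.
        data = scipy.io.readsav("observations.sav", python_dict=True, verbose=False)

        for name, value in data.items():
            print(name, type(value), getattr(value, "shape", None))

    Anything readsav cannot handle still needs IDL itself, so the collaboration problem does not disappear entirely.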

    Read the article

  • How to use JQuery to set the value of 2 html form select elements depending on the value of another

    - by Chris Stevenson
    My JavaScript and jQuery skills are poor at best. I have the following three elements in a form: <select name="event_time_start_hours"> <option value="blank" disabled="disabled">Hours</option> <option value="blank" disabled="disabled">&nbsp;</option> <option value="01">1</option> <option value="02">2</option> <option value="03">3</option> <option value="04">4</option> <option value="05">5</option> <option value="06">6</option> <option value="07">7</option> <option value="08">8</option> <option value="09">9</option> <option value="10">10</option> <option value="11">11</option> <option value="12">12</option> <option value="midnight">Midnight</option> <option value="midday">Midday</option> </select> <select name="event_time_start_minutes"> <option value="blank" disabled="disabled">Minutes</option> <option value="blank" disabled="disabled">&nbsp;</option> <option value="00">00</option> <option value="15">15</option> <option value="30">30</option> <option value="45">45</option> </select> <select name="event_time_start_ampm"> <option value="blank" disabled="disabled">AM / PM</option> <option value="blank" disabled="disabled">&nbsp;</option> <option value="am">AM</option> <option value="pm">PM</option> </select> Quite simply, when either 'midnight' or 'midday' is selected in "event_time_start_hours", I want the values of "event_time_start_minutes" and "event_time_start_ampm" to change to "00" and "am" respectively. My VERY poor piece of JavaScript says this so far: $(document).ready(function() { $('#event_time_start_hours').change(function() { if($('#event_time_start_hours').val('midnight')) { $('#event_time_start_minutes').val('00'); } }); }); ... and whilst I'm not terribly surprised it doesn't work, I'm at a loss as to what to do next. I want to do this purely for visual reasons for the user as when the form submits I ignore the "minutes" and "am/pm". I'm trying to decide whether it would be best to change the selected values, change the selected values and then disable the element, or hide them altogether. However, without any success in getting anything to happen at all I haven't been able to try the different approaches to see what feels right. I've ruled out the obvious things like a duplicate element ID or simply not linking to jQuery. Thank you.

    Read the article

  • Random Complete System Unresponsiveness Running Mathematical Functions

    - by Computer Guru
    I have a program that loads a file (anywhere from 10MB to 5GB) a chunk at a time (ReadFile), and for each chunk performs a set of mathematical operations (basically it calculates the hash). After calculating the hash, it stores info about the chunk in an STL map (basically <chunkID, hash>) and then writes the chunk itself to another file (WriteFile). That's all it does. This program will cause certain PCs to choke and die. The mouse begins to stutter, the task manager takes two minutes to show, Ctrl+Alt+Del is unresponsive, running programs are slow... the works. I've done literally everything I can think of to optimize the program, and have triple-checked all objects. What I've done:
      - Tried different (less intensive) hashing algorithms.
      - Switched all allocations to nedmalloc instead of the default new operator.
      - Switched from std::map to unordered_set, found the performance to still be abysmal, so I switched again to Google's dense_hash_map.
      - Converted all objects to store pointers to objects instead of the objects themselves.
      - Cached all read and write operations: instead of reading a 16k chunk of the file and performing the math on it, I read 4MB into a buffer and read 16k chunks from there instead. Same for all write operations - they are coalesced into 4MB blocks before being written to disk.
      - Ran extensive profiling with Visual Studio 2010, AMD Code Analyst, and perfmon.
      - Set the thread priority to THREAD_MODE_BACKGROUND_BEGIN.
      - Set the thread priority to THREAD_PRIORITY_IDLE.
      - Added a Sleep(100) call after every loop.
    Even after all this, the application still results in a system-wide hang on certain machines under certain circumstances. Perfmon and Process Explorer show minimal CPU usage (with the sleep), no constant reads/writes from disk, few hard page faults (and only ~30k page faults in the lifetime of the application on a 5GB input file), little virtual memory (never more than 150MB), no leaked handles, no memory leaks. The machines I've tested it on run Windows XP through Windows 7, x86 and x64 versions included. None has less than 2GB RAM, though the problem is always exacerbated under lower memory conditions. I'm at a loss as to what to do next. I don't know what's causing it - I'm torn between CPU and memory as the culprit: CPU because without the sleep and under different thread priorities the system performance changes noticeably; memory because there's a huge difference in how often the issue occurs when using unordered_set vs Google's dense_hash_map. What's really weird? Obviously, the NT kernel design is supposed to prevent this sort of behavior from ever occurring (a user-mode application driving the system to this sort of extremely poor performance!?)... but when I compile the code and run it on OS X or Linux (it's fairly standard C++ throughout) it performs excellently, even on poor machines with little RAM and weaker CPUs. What am I supposed to do next? How do I know what the hell it is that Windows is doing behind the scenes that's killing system performance, when all the indicators are that the application itself isn't doing anything extreme? Any advice would be most welcome.
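    For readers skimming: the core loop being described (read a chunk, hash it, record <chunkID, hash>, write the chunk out) is roughly the following. This is a minimal Python sketch of the algorithm as described, not the poster's actual C++ code; the chunk size, hash choice, and names are placeholders:

        import hashlib

        CHUNK_SIZE = 4 * 1024 * 1024  # 4 MB buffered I/O, as in the description

        def hash_and_copy(src_path, dst_path):
            """Hash each chunk of src, copy it to dst, return {chunk_id: hex digest}."""
            chunk_hashes = {}
            with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
                chunk_id = 0
                while True:
                    chunk = src.read(CHUNK_SIZE)
                    if not chunk:
                        break
                    chunk_hashes[chunk_id] = hashlib.sha1(chunk).hexdigest()
                    dst.write(chunk)
                    chunk_id += 1
            return chunk_hashes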

    Read the article

  • What popular "best practices" are not always best, and why?

    - by SnOrfus
    "Best practices" are everywhere in our industry. A Google search on "coding best practices" turns up nearly 1.5 million results. The idea seems to bring comfort to many; just follow the instructions, and everything will turn out fine. When I read about a best practice - for example, I just read through several in Clean Code recently - I get nervous. Does this mean that I should always use this practice? Are there conditions attached? Are there situations where it might not be a good practice? How can I know for sure until I've learned more about the problem? Several of the practices mentioned in Clean Code did not sit right with me, but I'm honestly not sure if that's because they're potentially bad, or if that's just my personal bias talking. I do know that many prominent people in the tech industry seem to think that there are no best practices, so at least my nagging doubts place me in good company. The number of best practices I've read about are simply too numerous to list here or ask individual questions about, so I would like to phrase this as a general question: Which coding practices that are popularly labeled as "best practices" can be sub-optimal or even harmful under certain circumstances? What are those circumstances and why do they make the practice a poor one? I would prefer to hear about specific examples and experiences.

    Read the article

  • How do I align my partition table properly?

    - by Jorge Castro
    I am in the process of building my first RAID5 array. I've used mdadm to create the following set up: root@bondigas:~# mdadm --detail /dev/md1 /dev/md1: Version : 00.90 Creation Time : Wed Oct 20 20:00:41 2010 Raid Level : raid5 Array Size : 5860543488 (5589.05 GiB 6001.20 GB) Used Dev Size : 1953514496 (1863.02 GiB 2000.40 GB) Raid Devices : 4 Total Devices : 4 Preferred Minor : 1 Persistence : Superblock is persistent Update Time : Wed Oct 20 20:13:48 2010 State : clean, degraded, recovering Active Devices : 3 Working Devices : 4 Failed Devices : 0 Spare Devices : 1 Layout : left-symmetric Chunk Size : 64K Rebuild Status : 1% complete UUID : f6dc829e:aa29b476:edd1ef19:85032322 (local to host bondigas) Events : 0.12 Number Major Minor RaidDevice State 0 8 16 0 active sync /dev/sdb 1 8 32 1 active sync /dev/sdc 2 8 48 2 active sync /dev/sdd 4 8 64 3 spare rebuilding /dev/sde While that's going I decided to format the beast with the following command: root@bondigas:~# mkfs.ext4 /dev/md1p1 mke2fs 1.41.11 (14-Mar-2010) /dev/md1p1 alignment is offset by 63488 bytes. This may result in very poor performance, (re)-partitioning suggested. Filesystem label= OS type: Linux Block size=4096 (log=2) Fragment size=4096 (log=2) Stride=16 blocks, Stripe width=48 blocks 97853440 inodes, 391394047 blocks 19569702 blocks (5.00%) reserved for the super user First data block=0 Maximum filesystem blocks=0 11945 block groups 32768 blocks per group, 32768 fragments per group 8192 inodes per group Superblock backups stored on blocks: 32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968, 102400000, 214990848 Writing inode tables: ^C 27/11945 root@bondigas:~# ^C I am unsure what to do about "/dev/md1p1 alignment is offset by 63488 bytes." and how to properly partition the disks to match so I can format it properly.
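    As I read it, the mke2fs warning means the partition inside /dev/md1 does not start on a RAID chunk boundary, so filesystem blocks straddle chunks, which hurts RAID5 write performance. A rough sketch of the arithmetic behind the message (the 64 KiB chunk size comes from the mdadm output above; the helper and the sector-2048 example are just illustrative):

        CHUNK = 64 * 1024  # mdadm reports "Chunk Size : 64K"

        def misalignment(start_bytes, unit=CHUNK):
            """How far an offset is from the previous unit boundary (0 means aligned)."""
            return start_bytes % unit

        print(misalignment(63488))       # 63488 -> not chunk-aligned, hence the mke2fs warning
        print(misalignment(2048 * 512))  # 0 -> a partition starting at sector 2048 (1 MiB) is chunk-aligned

    The usual ways around this are either to recreate the partition so it starts on an aligned boundary (modern partitioning tools default to 1 MiB), or to skip partitioning entirely and run mkfs.ext4 directly on /dev/md1, so there is no partition offset at all.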

    Read the article

  • Listing common SQL Code Smells.

    - by Phil Factor
    Once you’ve done a number of SQL code-reviews, you’ll recognise those signs in the code that suggest all might not be well. These 'Code Smells' are coding styles that don't directly cause a bug, but are indicators that all is not well with the code. Kent Beck and Massimo Arnoldi seem to have coined the phrase in the "OnceAndOnlyOnce" page of www.C2.com, where Kent also said that code "wants to be simple". Bad Smells in Code was an essay by Kent Beck and Martin Fowler, published as Chapter 3 of the book 'Refactoring: Improving the Design of Existing Code' (ISBN 978-0201485677). Although there are generic code smells, SQL has its own particular coding habits that will alert the programmer to the need to re-factor what has been written. See Exploring Smelly Code and Code Deodorants for Code Smells by Nick Harrison for a grounding in Code Smells in C#. I've always been tempted by the idea of automating a preliminary code-review for SQL. It would be so useful to trawl through code and pick up the various problems, much like the classic 'Lint' did for C, and how the Code Metrics plug-in for .NET Reflector by Jonathan 'Peli' de Halleux is used for finding Code Smells in .NET code. The problem is that few of the standard procedural code smells are relevant to SQL, and we need an agreed list of code smells. Merrill Aldrich made a grand start last year in his blog Top 10 T-SQL Code Smells. However, I'd like to make a start by discovering whether there is a general opinion amongst database developers about what the most important SQL smells are.

    One can be a bit defensive about code smells. I will cheerfully write very long stored procedures, even though they are frowned on. I'll use dynamic SQL occasionally. You can only use them as an aid for your own judgment and it is fine to 'sign them off' as being appropriate in particular circumstances. Also, whole classes of 'code smells' may be irrelevant for a particular database. The use of proprietary SQL, for example, is only a 'code smell' if there is a chance that the database will have to be ported to another RDBMS. The use of dynamic SQL is a risk only with certain security models. As the saying goes, a code smell is a hint of possible bad practice to a pragmatist, but a sure sign of bad practice to a purist. Plamen Ratchev's wonderful article Ten Common SQL Programming Mistakes lists some of these 'code smells' along with out-and-out mistakes, but there are more. The use of nested transactions, for example, isn't entirely incorrect, even though the database engine ignores all but the outermost: but it does flag up the possibility that the programmer thinks that nested transactions are supported.

    If anything requires some sort of general agreement, the definition of code smells is one. I'm therefore going to make this blog 'dynamic', in that, if anyone twitters a suggestion with a #SQLCodeSmells tag (or sends me a twitter) I'll update the list here. If you add a comment to the blog with a suggestion of what should be added or removed, I'll do my best to oblige. In other words, I'll try to keep this blog up to date. The name against each 'smell' is the name of the person who twittered me, commented about it, or who has written about the 'smell'; it does not imply that they were the first ever to think of the smell!

      - Use of deprecated syntax such as *= (Dave Howard)
      - Denormalisation that requires the shredding of the contents of columns (Merrill Aldrich)
      - Contrived interfaces
      - Use of deprecated datatypes such as TEXT/NTEXT (Dave Howard)
      - Datatype mis-matches in predicates that rely on implicit conversion (Plamen Ratchev)
      - Using correlated subqueries instead of a join (Dave Levy / Plamen Ratchev)
      - The use of hints in queries, especially NOLOCK (Dave Howard / Mike Reigler)
      - Few or no comments
      - Use of functions in a WHERE clause (Anil Das)
      - Overuse of scalar UDFs (Dave Howard, Plamen Ratchev)
      - Excessive 'overloading' of routines
      - The use of Exec xp_cmdShell (Merrill Aldrich)
      - Excessive use of brackets (Dave Levy)
      - Lack of the use of a semicolon to terminate statements
      - Use of non-SARGable functions on indexed columns in predicates (Plamen Ratchev)
      - Duplicated code, or strikingly similar code
      - Misuse of SELECT * (Plamen Ratchev)
      - Overuse of cursors (everyone; special mention to Dave Levy and Adrian Hills)
      - Overuse of CLR routines when not necessary (Sam Stange)
      - Same column name in different tables with different datatypes (Ian Stirk)
      - Use of 'broken' functions such as 'ISNUMERIC' without additional checks
      - Excessive use of the WHILE loop (Merrill Aldrich)
      - INSERT ... EXEC (Merrill Aldrich)
      - The use of stored procedures where a view is sufficient (Merrill Aldrich)
      - Not using two-part object names (Merrill Aldrich)
      - Using INSERT INTO without specifying the columns and their order (Merrill Aldrich)
      - Full outer joins even when they are not needed (Plamen Ratchev)
      - Huge stored procedures (hundreds/thousands of lines)
      - Stored procedures that can produce different columns, or order of columns, in their results, depending on the inputs
      - Code that is never used
      - Complex and nested conditionals
      - WHILE (not done) loops without an error exit
      - Variable name same as the datatype
      - Vague identifiers
      - Storing complex data or lists in a character map, bitmap or XML field
      - User procedures with sp_ prefix (Aaron Bertrand)
      - Views that reference views that reference views that reference views (Aaron Bertrand)
      - Inappropriate use of sql_variant (Neil Hambly)
      - Errors with identity scope using SCOPE_IDENTITY, @@IDENTITY or IDENT_CURRENT (Neil Hambly, Aaron Bertrand)
      - Schemas that involve multiple dated copies of the same table instead of partitions (Matt Whitfield - Atlantis UK)
      - Scalar UDFs that do data lookups (poor man's join) (Matt Whitfield - Atlantis UK)
      - Code that allows SQL injection (Mladen Prajdic)
      - Tables without clustered indexes (Matt Whitfield - Atlantis UK)
      - Use of "SELECT DISTINCT" to mask a join problem (Nick Harrison)
      - Multiple stored procedures with nearly identical implementation (Nick Harrison)
      - Excessive column aliasing, which may point to a problem or could be a mapping implementation (Nick Harrison)
      - Joining "too many" tables in a query (Nick Harrison)
      - Stored procedures returning more than one record set (Nick Harrison)
      - A NOT LIKE condition (Nick Harrison)
      - Excessive "OR" conditions (Nick Harrison)
      - sp_OACreate or anything related to it (Bill Fellows)
      - Prefixing names with tbl_, vw_, fn_, and usp_ ('tibbling') (Jeremiah Peschka)
      - Aliases that go a, b, c, d, e... (Dave Levy / Diane McNurlan)
      - Overweight queries (e.g. 4 inner joins, 8 left joins, 4 derived tables, 10 subqueries, 8 clustered GUIDs, 2 UDFs, 6 case statements = 1 query) (Robert L Davis)
      - ORDER BY 3,2 (Dave Levy)
      - Multi-statement table functions which are then filtered 'Sel * from Udf() where Udf.Col = Something' (Dave Ballantyne)
      - Running a SQL 2008 system in SQL 2000 compatibility mode (John Stafford)

    Read the article

  • Where to find other versions of Opera browser as deb packages?

    - by cipricus
    I used Opera mainly for the Unite feature, now to be abandoned. It is missing in v. 12. Some say its features will re-emerge in future extensions etc. Until then, Unite is still accessible in v. 11. Where do I get the v.11 deb? P.S. In fact it seems that Opera Unite (at least in its older form) is dying while I am editing this question. Access to Opera Unite applications from within Opera Unite is poor or absent. This issue is obscure to me for now (31.08.2012) because yesterday I installed v12 on Windows (with Opera Unite and basic applications - file sharing and media player - already installed) and it is still working (the server is working). The v12 Ubuntu version came today without Unite, and after installing v11 (which has Unite) I could not get the applications (file sharing, etc). But they are still available here, and after downloading these files, which have the .ua extension, they can be installed by opening them with Opera (v.11). But as Opera Unite is no longer supported, it is possible that the server that provides the file sharing etc. will soon be inaccessible. Even if that is the case, the question should maybe not be closed, as it has a general usefulness independent of the Unite issue.

    Read the article

  • Choice of open source license for some components, closed source for others

    - by Peter Serwylo
    G'day, I am working on a set of multiplayer games, where different games play against each other (e.g. you play a Tetris clone, I play an Asteroids clone, but we are both competing against each other). All the games would be based on the same underlying framework written specifically for this project. I am struggling to comprehend how I would license this so that: The underlying framework is open source, so other people can create new games based on it. Some games built on the framework are open source Other games are closed source The goal is to have two bundles on something like the Android market: One free and open source package which has a collection of games Another "premium" (although I dislike that word) paid package which has a different collection of games. Usually I am fond of permissive licenses such as MIT/BSD, however I would prefer something more in the vein of the GPL for this. This is because for software such as the snes-9x SNES emulator, which is a great piece of software, there is a ton of poor quality versions being sold, whereas it would be preferable if there was just one authoritative version which was always kept up to date, and distributed for free. If the underlying framework was GPL'd, would I be able to build closed source games on top of it? Thanks for your input.

    Read the article

  • More Retro Games

    - by Matt Christian
    Last week I made 2 stops to my local game stores and spent a load of cash on a bunch of new retro games for my collection. Here are the recent additions:
      - NES: Mega Man 2, The Adventures of Bayou Billy, Ducktales, Metal Gear, Super Mario Bros / Duck Hunt, Firestorm, Dragon's Lair, Bartman Meets Radioactive Man
      - N64: Superman 64, Zelda: Ocarina of Time (in original box, box is in poor condition)
      - Atari: Superman, Adventure, Donkey Kong, Raiders of the Lost Ark
      - Dreamcast: Memory card with view screen, Space Channel 5
      - Genesis (all in case): Jurassic Park, Sonic Spinball, Sonic the Hedgehog 3 (missing manual), Spiderman (also called Spiderman vs. The Kingpin)
      - GameGear: Bart vs The Space Mutants
    Quite a large haul given it was all purchased in 2 days. I got Metal Gear for a great deal, though, and almost considered buying their other copy simply to resell for more, but I decided against it to let another lucky soul find it. I may need to run over there again because I think they had TMNT 2 (NES) for around $6 and it usually sells for more than that. I could have sworn I grabbed it and bought it, but my receipt tells me differently. I also found my copy of Super Mario 3 and added that to my collection. Unfortunately one of the corners of the label has begun to peel up pretty badly, which sucks, although it's still a good item for the collection. In other retro news, this weekend was Easter, and while at my grandparents' the cousins wanted to play on their NES, which was not working. Me being the retro NES nerd I am, I grabbed a screwdriver, some Windex, a few toothpicks, and a few cotton swabs and had it up and running in under an hour (that includes eating dinner!). The NES now holds the games tighter, has a better connection, and works almost instantly. I should do THAT for a living!

    Read the article

  • API Message Localization

    - by Jesse Taber
    In my post, “Keep Localizable Strings Close To Your Users” I talked about the internationalization and localization difficulties that can arise when you sprinkle static localizable strings throughout the different logical layers of an application. The main point of that post is that you should have your localizable strings reside as close to the user-facing modules of your application as possible. For example, if you’re developing an ASP .NET web forms application, all of the localizable strings should be kept in .resx files that are associated with the .aspx views of the application. In this post I want to talk about how this same concept can be applied when designing and developing APIs.

    An API Facilitates Machine-to-Machine Interaction

    You can typically think about a web, desktop, or mobile application as a collection of “views” or “screens” through which users interact with the underlying logic and data. The application can be designed based on the assumption that there will be a human being on the other end of the screen working the controls. You are designing a machine-to-person interaction and the application should be built in a way that facilitates the user’s clear understanding of what is going on. Dates should be formatted in a way that the user will be familiar with, messages should be presented in the user’s preferred language, etc. When building an API, however, there are no screens and you can’t make assumptions about who or what is on the other end of each call. An API is, by definition, a machine-to-machine interaction. A machine-to-machine interaction should be built in a way that facilitates a clear and unambiguous understanding of what is going on. Dates and numbers should be formatted in predictable and standard ways (e.g. ISO 8601 dates) and messages should be presented in machine-parseable formats. For example, consider an API for a time tracking system that exposes a resource for creating a new time entry. The JSON for creating a new time entry for a user might look like:

        { "userId": 4532, "startDateUtc": "2012-10-22T14:01:54.98432Z", "endDateUtc": "2012-10-22T11:34:45.29321Z" }

    Note how the parameters for start and end date are both expressed as ISO 8601 compliant dates in UTC. Using a date format like this in our API leaves little room for ambiguity. It’s also important to note that using ISO 8601 dates is a much, much saner thing than the \/Date(<milliseconds since epoch>)\/ nonsense that is sometimes used in JSON serialization. Probably the most important thing to note about the JSON snippet above is the fact that the end date comes before the start date! The API should recognize that and disallow the time entry from being created, returning an error to the caller. You might be inclined to send a response that looks something like this:

        { "errors": [ {"message" : "The end date must come after the start date"}] }

    While this may seem like an appropriate thing to do, there are a few problems with this approach: What if there is a user somewhere on the other end of the API call that doesn’t speak English? What if the message provided here won’t fit properly within the UI of the application that made the API call? What if the verbiage of the message isn’t consistent with the rest of the application that made the API call? What if there is no user directly on the other end of the API call (e.g. this is a batch job uploading time entries once per night unattended)? The API knows nothing about the context from which the call was made.
    There are steps you could take to give the API some context (e.g. allow the caller to send along a language code indicating the language that the end user speaks), but that will only get you so far. As the designer of the API you could make some assumptions about how the API will be called, but if we start making assumptions we could very easily make the wrong assumptions. In this situation it’s best to make no assumptions and simply design the API in such a way that the caller has the responsibility to convey error messages in a manner that is appropriate for the context in which the error was raised. You could work around some of these problems by allowing callers to add metadata to each request describing the context from which the call is being made (e.g. accepting a ‘locale’ parameter denoting the desired language), but that will add needless clutter and complexity. It’s better to keep the API simple and push those context-specific concerns down to the caller whenever possible. For our very simple time entry example, this can be done by simply changing our error message response to look like this:

        { "errors": [ {"code": 100}] }

    By changing our error from exposing a string to a numeric code that is easily parseable by another application, we’ve placed all of the responsibility for conveying the actual meaning of the error message on the caller. It’s best to have the caller be responsible for conveying this meaning because the caller understands the context much better than the API does. Now the caller can see error code 100, know that it means that the end date submitted falls before the start date, and take appropriate action. Now all of the problems listed out above are non-issues because the caller can simply translate the error code of ‘100’ into the proper action and message for the current context. The numeric code representation of the error is a much better way to facilitate the machine-to-machine interaction that the API is meant to facilitate.

    An API Does Have Human Users

    While APIs should be built for machine-to-machine interaction, people still need to wire these interactions together. As a programmer building a client application that will consume the time entry API I would find it frustrating to have to go dig through the API documentation every time I encounter a new error code (assuming the documentation exists and is accurate). The numeric error code approach hurts the discoverability of the API and makes it painful to integrate with. We can help ease this pain by merging our two approaches:

        { "errors": [ {"code": 100, "message" : "The end date must come after the start date"}] }

    Now we have an easily parseable numeric error code for the machine-to-machine interaction that the API is meant to facilitate and a human-readable message for programmers working with the API. The human-readable message here is not intended to be viewed by end-users of the API and as such is not really a “localizable string” in my opinion. We could opt to expose a locale parameter for all API methods and store translations for all error messages, but that’s a lot of extra effort and overhead that doesn’t add a lot of real value to the API. I might be a bit of an “ugly American”, but I think it’s probably fine to have the API return English messages when the target for those messages is a programmer.
    When resources are limited (which they always are), I’d argue that you’re better off hard-coding these messages in English and putting more effort into building more useful features, improving security, tweaking performance, etc.
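    To make the split of responsibilities concrete, here is a minimal sketch of the caller-side half of the scheme described above: the client keeps its own catalogue of translations keyed by the API's numeric error codes. It is written in Python rather than the post's .NET setting, and the code numbers, translations, and helper names are all invented for illustration:

        import json

        # Client-side catalogue: error code -> message per locale the *client* supports.
        MESSAGES = {
            100: {"en": "The end date must come after the start date.",
                  "fr": "La date de fin doit être postérieure à la date de début."},
        }

        def messages_for(response_body, locale="en"):
            """Turn an API error payload like {"errors": [{"code": 100, ...}]} into
            user-facing text appropriate for this client's context."""
            errors = json.loads(response_body).get("errors", [])
            return [MESSAGES.get(e["code"], {}).get(locale, f"Unknown error {e['code']}")
                    for e in errors]

        body = '{"errors": [{"code": 100, "message": "The end date must come after the start date"}]}'
        print(messages_for(body, "fr"))

    The API stays simple and machine-friendly, while the part of the system that actually knows the user's locale and UI constraints decides how to word things.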

    Read the article

  • How is IntelliJ better than Eclipse?

    - by NickC
    I know there have been questions like What is your favorite editor/IDE?, but none of them have answered this question: Why spend the money on IntelliJ when Eclipse is free? I'm personally a big IntelliJ fan, but I haven't really tried Eclipse. I've used IntelliJ for projects that were Java, JSP, HTML/CSS, Javascript, PHP, and Actionscript, and the latest version, 9, has been excellent for all of them. Many coworkers in the past have told me that they believe Eclipse to be "pretty much the same" as IntelliJ, but, to counter that point, I've occasionally sat behind a developer using Eclipse who's seemed comparably inefficient (to accomplish roughly the same task), and I haven't experienced this with IntelliJ. They may be on par feature-by-feature but features can be ruined by a poor user experience, and I wonder if it's possible that IntelliJ is easier to pick up and discover time-saving features. For users who are already familiar with Eclipse, on top of the real cost of IntelliJ, there is also the cost of time spent learning the new app. Eclipse gets a lot of users who simply don't want to spend $250 on an IDE. If IntelliJ really could help my team be more productive, how could I sell it to them? For those users who've tried both, I'd be very interested in specific pros or cons either way.

    Read the article

  • Scene Graph for Deferred Rendering Engine

    - by Roy T.
    As a learning exercise I've written a deferred rendering engine. Now I'd like to add a scene graph to this engine but I'm a bit puzzled about how to do this. In a normal (forward rendering) engine I would just add all items (all implementing IDrawable and IUpdateAble) to my scene graph, then travel the scene graph breadth-first and call Draw() everywhere. However, in a deferred rendering engine I have to separate draw calls. First I have to draw the geometry, then the shadow casters and then the lights (all to different render targets), before I combine them all. So in this case I can't just travel over the scene graph and just call Draw(). The way I see it, I either have to travel over the entire scene graph 3 times, checking what kind of object it is that has to be drawn, or I have to create 3 separate scene graphs that are somehow connected to each other. Both of these seem like poor solutions; I'd like to handle scene objects more transparently. One other solution I've thought of was traveling through the scene graph as normal and adding items to 3 separate lists, separating geometry, shadow casters and lights, and then iterating these lists to draw the correct stuff. Is this better, and is it wise to repopulate 3 lists every frame?
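    For what it's worth, the third option (one traversal that buckets nodes into per-pass lists) tends to look something like the sketch below. This is a Python-flavoured sketch rather than the engine's actual C#/C++ types; the class names and the pass callbacks are invented, and rebuilding three lists of references each frame is normally cheap compared with the draw calls themselves:

        class Node:
            def __init__(self, children=()):
                self.children = list(children)

        class Geometry(Node): pass        # drawn in the G-buffer pass
        class ShadowCaster(Node): pass    # drawn into shadow maps
        class Light(Node): pass           # drawn in the lighting pass

        def collect(node, geometry, shadow_casters, lights):
            """One scene-graph walk that buckets nodes by the pass they belong to."""
            if isinstance(node, Geometry):
                geometry.append(node)
            if isinstance(node, ShadowCaster):
                shadow_casters.append(node)
            if isinstance(node, Light):
                lights.append(node)
            for child in node.children:
                collect(child, geometry, shadow_casters, lights)

        def render(root, draw_gbuffer, draw_shadow_maps, draw_lights, combine):
            geometry, shadow_casters, lights = [], [], []
            collect(root, geometry, shadow_casters, lights)   # single traversal per frame
            draw_gbuffer(geometry)            # geometry -> G-buffer render targets
            draw_shadow_maps(shadow_casters)  # shadow casters -> shadow maps
            draw_lights(lights)               # light pass reads G-buffer + shadow maps
            combine()                         # final full-screen composition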

    Read the article

  • InvalidProgramException Running Unit Test (Bug Closed)

    - by Anthony Trudeau
    In a previous post I reported an InvalidProgramException that occurs in a certain circumstance with unit tests involving accessors on a private generic method.  It turns out that Bug #635093 reported through Microsoft Connect will not be fixed. The reason cited is that private accessors have been discontinued.  And why have private accessors been discontinued?  They don't have time is the reason listed in the blog post titled "Generation of Private Accessors (Publicize) and Code Generation for Visual Studio 2010". In my opinion, it's a piss poor decision to discontinue support for a feature that they're still using within automatically generated unit tests against private classes and methods.  But, I think what is worse is the lack of guidance cited in the aforementioned blog post.  Their advice?  Use the PrivateObject to help, but develop your own framework. At the end of the day what Microsoft is saying is, "I know you spent a lot of money for this product.  I know that you don't have time to develop a framework to deal with this.  We don't have time and that is all that's important."

    Read the article

  • Windows Azure Learning Plan - Application Fabric

    - by BuckWoody
    This is one in a series of posts on a Windows Azure Learning Plan. You can find the main post here. This one deals with the Application Fabric for Windows Azure. It serves three main purposes - Access Control, Caching, and as a Service Bus.   Overview and Training Overview and general  information about the Azure Application Fabric, - what it is, how it works, and where you can learn more. General Introduction and Overview http://msdn.microsoft.com/en-us/library/ee922714.aspx Access Control Service Overview http://msdn.microsoft.com/en-us/magazine/gg490345.aspx Microsoft Documentation http://msdn.microsoft.com/en-gb/windowsazure/netservices.aspx Learning and Examples Sources for online and other Azure Appllications Fabric training Application Fabric SDK http://www.microsoft.com/downloads/en/details.aspx?FamilyID=39856a03-1490-4283-908f-c8bf0bfad8a5&displaylang=en Application Fabric Caching Service Primer http://blogs.msdn.com/b/appfabriccat/archive/2010/11/29/azure-appfabric-caching-service-soup-to-nuts-primer.aspx?wa=wsignin1.0 Hands-On Lab: Building Windows Azure Applications with the Caching Service http://www.wadewegner.com/2010/11/hands-on-lab-building-windows-azure-applications-with-the-caching-service/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+WadeWegner+%28Wade+Wegner+-+Technical%29 Architecture  Azure Application Fabric Internals and Architectures for Scale Out and other use-cases. Azure Application Fabric Architecture Guide http://blogs.msdn.com/b/yasserabdelkader/archive/2010/09/12/release-of-windows-server-appfabric-architecture-guide.aspx Windows Azure AppFabric Service Bus - A Deep Dive (Video) http://www.msteched.com/2010/Europe/ASI410 Access Control Service (ACS) High Level Architecture http://blogs.msdn.com/b/alikl/archive/2010/09/28/azure-appfabric-access-control-service-acs-v-2-0-high-level-architecture-web-application-scenario.aspx Applications  and Programming Programming Patterns and Architectures for SQL Azure systems. Various Examples from PDC 2010 on using Azure Application as a Service Bus http://tinyurl.com/2dcnt8o Creating a Distributed Cache using the Application Fabric http://blog.structuretoobig.com/post/2010/08/31/Creating-a-Poor-Mane28099s-Distributed-Cache-in-Azure.aspx  Azure Application Fabric Java SDK http://jdotnetservices.com/

    Read the article

  • Which ecommerce framework is fast and easy to customize?

    - by Diego
    I'm working on a project where I have to put online an ecommerce system which will require a good amount of custom features. I'm therefore looking for a framework which makes customization easy enough (from an experienced developer's perspective, I mean). Language should be PHP, and time is a constraint; I don't have months to learn. Additionally, the ecommerce site will have to handle around 200,000 products from day one, which will increase over time, hence performance is also important. So far I have examined the following: Magento - Complicated and, as far as I could read, slow when the database contains many products. It's also resource intensive, and we can't afford a dedicated VPS from the beginning. OpenCart - Rough at best, documentation is extremely poor. Also, it's "free" to start, but each feature is implemented via 3rd party commercial modules. OSCommerce - Buggy, inefficient, outdated. ZenCart - Derived from OSCommerce, doesn't seem much better. Prestashop - It looks like it has many incompatibilities. Also, most of its modules are commercial, which increases the cost. In short, I'm still quite undecided, as none of the above seems to satisfy the requirements. I'm open to evaluating closed source frameworks too, if they are any better, but my knowledge about them is limited, therefore I'll welcome any suggestions. Thanks for all replies.

    Read the article

  • How to improve battery life on Samsung 13.3” Series 7 Ultra (NP730U3E-S01AU)?

    - by beam022
    Recently I bought a Samsung Series 7 Ultra ultrabook and decided to change the OS from the originally installed Windows 8 to Ubuntu 14.04 LTS. However, it's difficult not to notice a great decrease in battery life: on the pre-installed Windows 8 the battery would last for about 6 hours, while on Ubuntu it's almost empty after 2 hours of the same kind of work (wi-fi, web, VLC, Spotify, IntelliJ IDEA). I'm not here to say that Ubuntu's battery performance is worse than Windows's, but to ask for suggestions on how to improve the situation (2 hours of work is pretty poor battery life). Can you recommend some sources, applications or tips/tricks that would improve battery life on my ultrabook? I really like the Ubuntu experience, but this makes my machine much less reliable. I suspect that the graphics card might be one of the issues here. Let me give you the tech specs of the ultrabook: Processor: Intel® Core™ i5 Processor 3337U (1.80GHz, 3MB L3 Cache) Chipset: Intel HM76 Graphics: AMD Radeon™ HD 8570M Graphics with 1GB gDDR3 Graphics Memory (PowerExpress) and Intel(R) HD Graphics 4000 Display: 13.3" SuperBright+ 350nit FHD LED Display (1920 x 1080), Anti-Reflective Memory: 10GB DDR3 System Memory at 1,600MHz Hard-drive: 128GB Solid-state Drive More information is here, on the official page. If it's helpful to provide additional info, I'm happy to do it, just let me know what you need. Thank you.

    Read the article

  • Oracle MDM at the MDM Summit in San Francisco

    - by David Butler
    Oracle is sponsoring the Product MDM track at this year’s MDM & Data Governance San Francisco Summit. Sachin Patel, Director of Product Strategy, Product Hub Applications, at Oracle will present the keynote: Product Master Data Management for Today’s Enterprise. Here’s the abstract: Today businesses struggle to boost operational efficiency and meet new product launch deadlines due to poor and cumbersome administrative processes. One of the primary reasons enterprises are unable to achieve cohesion is due to various domain silos and fragmented product data. This adversely affects business performance including, but not limited to, excess inventories, under-leveraged procurement spend, downstream invoicing or order errors and lost sales opportunities. In this session, you will learn the key elements and business processes that are required for you to master an enterprise product record. Additionally you will gain insights into how to improve the accuracy of your data and deliver reliable and consistent product information across your enterprise. This provides a high level of confidence that business managers can achieve their goals. In this session, you will understand how adopting a Master Data Management strategy for product information can help your enterprise change course towards a more profitable, competitive and successful business. Cisco Systems will join Sachin and cover their experiences, lessons learned and best practices. If you are in the Bay Area and interested in mastering your product data for the benefit of multiple applications, business processes and analytical systems, please join us at the Hyatt, Fisherman’s Wharf this Thursday, June 30th.

    Read the article

  • ubuntu 12.10 not updating

    - by gunjan parashar
    I have upgraded to Ubuntu 12.10 from Ubuntu 12.04; after that it is not updating. Software Updater gives the following error: W:Failed to fetch http://archive.canonical.com/ubuntu/dists/precise/Release.gpg Unable to connect to 10.4.42.15:8080: W:Failed to fetch http://extras.ubuntu.com/ubuntu/dists/quantal/Release.gpg Unable to connect to 10.4.42.15:8080: W:Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/quantal/Release.gpg Unable to connect to 10.4.42.15:8080: W:Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/quantal-updates/Release.gpg Unable to connect to 10.4.42.15:8080: W:Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/quantal-backports/Release.gpg Unable to connect to 10.4.42.15:8080: W:Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/quantal-security/Release.gpg Unable to connect to 10.4.42.15:8080: W:Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/quantal-proposed/Release.gpg Unable to connect to 10.4.42.15:8080: W:Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/quantal/restricted/source/Sources Unable to connect to 10.4.42.15:8080: W:Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/quantal/main/source/Sources Unable to connect to 10.4.42.15:8080: W:Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/quantal/multiverse/source/Sources Unable to connect to 10.4.42.15:8080: W:Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/quantal/universe/source/Sources Unable to connect to 10.4.42.15:8080: : W:Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/quantal-proposed/universe/i18n/Translation-en Unable to connect to 10.4.42.15:8080: E:Some index files failed to download. They have been ignored, or old ones used instead. Along with this, I am not able to install anything from the Software Center; it just asks me to use this source, and after that it just keeps on querying software sources and nothing happens. Please help me out; 12.10 has become a great problem for me, and forgive my poor English.

    Read the article

  • Table Variables: an empirical approach.

    - by Phil Factor
    It isn’t entirely a pleasant experience to publish an article only to have it described on Twitter as ‘Horrible’, and to have it criticized on the MVP forum. When this happened to me in the aftermath of publishing my article on Temporary tables recently, I was taken aback, because these critics were experts whose views I respect. What was my crime? It was, I think, to suggest that, despite the obvious quirks, it was best to use Table Variables as a first choice, and to use local Temporary Tables if you hit problems due to these quirks, or if you were doing complex joins using a large number of rows. What are these quirks? Well, table variables have advantages if they are used sensibly, but this requires some awareness by the developer about the potential hazards and how to avoid them. You can be hit by a badly-performing join involving a table variable. Table Variables are a compromise, and this compromise doesn’t always work out well. Explicit indexes aren’t allowed on Table Variables, so one cannot use covering indexes or non-unique indexes. The query optimizer has to make assumptions about the data rather than using column distribution statistics when a table variable is involved in a join, because there aren’t any column-based distribution statistics on a table variable. It assumes a reasonably even distribution of data, and is likely to have little idea of the number of rows in the table variables that are involved in queries. However complex the heuristics that are used might be in determining the best way of executing a SQL query, and they most certainly are, the Query Optimizer is likely to fail occasionally with table variables, under certain circumstances, and produce a Query Execution Plan that is frightful. The experienced developer or DBA will be on the lookout for this sort of problem. In this blog, I’ll be expanding on some of the tests I used when writing my article to illustrate the quirks, and include a subsequent example supplied by Kevin Boles. A simplified example. We’ll start out by illustrating a simple example that shows some of these characteristics. We’ll create two tables filled with random numbers and then see how many matches we get between the two tables. We’ll forget indexes altogether for this example, and use heaps. We’ll try the same Join with two table variables, two table variables with OPTION (RECOMPILE) in the JOIN clause, and with two temporary tables. It is all a bit jerky because of the granularity of the timing that isn’t actually happening at the millisecond level (I used DATETIME). However, you’ll see that the table variable is outperforming the local temporary table up to 10,000 rows. Actually, even without a use of the OPTION (RECOMPILE) hint, it is doing well. What happens when your table size increases? The table variable is, from around 30,000 rows, locked into a very bad execution plan unless you use OPTION (RECOMPILE) to provide the Query Analyser with a decent estimation of the size of the table. However, if it has the OPTION (RECOMPILE), then it is smokin’. Well, up to 120,000 rows, at least. It is performing better than a Temporary table, and in a good linear fashion. What about mixed table joins, where you are joining a temporary table to a table variable? You’d probably expect that the query analyzer would throw up its hands and produce a bad execution plan as if it were a table variable. After all, it knows nothing about the statistics in one of the tables so how could it do any better? 
    Well, it behaves as if it were doing a recompile. And an explicit recompile adds no value at all (we just go up to 45,000 rows since we know the bigger picture now). Now, if you were new to this, you might be tempted to start drawing conclusions. Beware! We’re dealing with a very complex beast: the Query Optimizer. It can come up with surprises. What if we change the query very slightly to insert the results into a Table Variable? We change nothing else and just measure the execution time of the statement as before. Suddenly, the table variable isn’t looking so much better, even taking into account the time involved in doing the table insert. OK, if you haven’t used OPTION (RECOMPILE) then you’re toast. Otherwise, there isn’t much in it between the Table variable and the temporary table. The table variable is faster up to 8000 rows and then not much in it up to 100,000 rows. Past the 8000 row mark, we’ve lost the advantage of the table variable’s speed. Any general rule you may be formulating has just gone for a walk. What we can conclude from this experiment is that if you join two table variables, and can’t use constraints, you’re going to need that OPTION (RECOMPILE) hint.

    Count Dracula and the Horror Join

    These tables of integers provide a rather unreal example, so let’s try a rather different example, and get stuck into some implicit indexing, by using constraints. What unusual words are contained in the book ‘Dracula’ by Bram Stoker? Here we get a table of all the common words in the English language (60,387 of them) and put them in a table. We put them in a Table Variable with the word as a primary key, a Table Variable heap and a Table Variable with a primary key. We then take all the distinct words used in the book ‘Dracula’ (7,558 of them). We then create a table variable and insert into it all those uncommon words that are in ‘Dracula’, i.e. all the words in Dracula that aren’t matched in the list of common words. To do this we use a left outer join, where the right-hand value is null. The results show a huge variation, between the sublime and the gorblimey.
      - If both tables contain a primary key on the columns we join on, and both are Table Variables, it took 33 ms.
      - If one table contains a primary key, and the other is a heap, and both are Table Variables, it took 46 ms.
      - If both Table Variables use a unique constraint, the query takes 36 ms.
      - If neither table contains a primary key and both are Table Variables, it took 116,383 ms. Yes, nearly two minutes!
      - If both tables contain a primary key, one is a Table Variable and the other is a temporary table, it took 113 ms.
      - If one table contains a primary key, and both are temporary tables, it took 56 ms.
      - If both tables are temporary tables and both have primary keys, it took 46 ms.
    Here we see table variables which are joined on their primary key again enjoying a slight performance advantage over temporary tables. Where both tables are table variables and both are heaps, the query suddenly takes nearly two minutes! So what if you have two heaps and you use OPTION (RECOMPILE)? If you take the rogue query and add the hint, then suddenly the query drops its time down to 76 ms. If you add unique indexes, then you've done even better, down to half that time. Here are the text execution plans.

    So where have we got to? Without drilling down into the minutiae of the execution plans we can begin to create a hypothesis.
    If you are using table variables, and your tables are relatively small, they are faster than temporary tables, but as the number of rows increases you need to do one of two things: either you need to have a primary key on the column you are using to join on, or else you need to use OPTION (RECOMPILE). If you try to execute a query that is a join, and both tables are table variable heaps, you are asking for trouble (well, slow queries) unless you give the query the hint once the number of rows has risen past a point (30,000 in our first example, but this varies considerably according to context).

    Kevin’s Skew

    In describing the table size, I used the term ‘relatively small’. Kevin Boles produced an interesting case where a single-row table variable produces a very poor execution plan when joined to a very, very skewed table. In the original, pasted into my article as a comment, a column consisted of 100,000 rows in which the key column was one number (1). To this were added eight rows with sequential numbers up to 9. When this was joined to a single-row Table Variable with a key of 2 it produced a bad plan. This problem is unlikely to occur in real usage, and the Query Optimiser team probably never set up a test for it. Actually, the skew can be slightly less extreme than Kevin made it. The following test showed that once the table had 54 sequential rows, it adopted exactly the same execution plan as for the temporary table and then all was well. Undeniably, real data does occasionally cause problems to the performance of joins in Table Variables due to the extreme skew of the distribution. We've all experienced Perfectly Poisonous Table Variables in real live data. As in Kevin’s example, indexes merely make matters worse, and the OPTION (RECOMPILE) trick does nothing to help. In this case, there is no option but to use a temporary table. However, one has to note that once the slight de-skew had taken place, the plans were identical across a huge range.

    Conclusions

    Where you need to hold intermediate results as part of a process, Table Variables offer a good alternative to temporary tables when used wisely. They can perform faster than a temporary table when the number of rows is not great. For some processing with huge tables, they can perform well when only a clustered index is required, and when the nature of the processing makes an index seek very effective. Table Variables are scoped to the batch or procedure and are unlikely to hang about in TempDB when they are no longer required. They require no explicit cleanup. Where the number of rows in the table is moderate, you can even use them in joins as ‘heaps’, unindexed. Beware, however, since, as the number of rows increases, joins on Table Variable heaps can easily become saddled with very poor execution plans, and this must be cured either by adding constraints (UNIQUE or PRIMARY KEY) or by adding the OPTION (RECOMPILE) hint if this is impossible. Occasionally, the way that the data is distributed prevents the efficient use of Table Variables, and this will require using a temporary table instead. Table Variables require some awareness by the developer about the potential hazards and how to avoid them. If you are not prepared to do any performance monitoring of your code or fine-tuning, and just want to pummel out stuff that ‘just runs’ without considering namby-pamby stuff such as indexes, then stick to Temporary tables. 
If you are likely to slosh about large numbers of rows in temporary tables without considering the niceties of processing just what is required and no more, then temporary tables provide a safer and less fragile means-to-an-end for you.
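
As promised above, here is a minimal T-SQL sketch of the anti-join pattern being tested. The table and column names are invented and the code that loads the word lists is omitted, so treat it as a hedged illustration of the two remedies discussed (a key on the join column, or OPTION (RECOMPILE) when no key is possible), not the original test harness.

DECLARE @CommonWords TABLE
    (Word VARCHAR(40) NOT NULL PRIMARY KEY);   -- keyed join column: gives the optimiser something to work with

DECLARE @DraculaWords TABLE
    (Word VARCHAR(40) NOT NULL);               -- a heap: fine only while the row count stays small

-- ...load the 60,387 common words and the 7,558 distinct words from the book here...

SELECT d.Word                                  -- words in the book with no match in the common list
FROM @DraculaWords AS d
    LEFT OUTER JOIN @CommonWords AS c
        ON c.Word = d.Word
WHERE c.Word IS NULL
OPTION (RECOMPILE);                            -- the fallback when you cannot key the join column

With a primary key or unique constraint on both join columns, the hint is unnecessary for this query; without any key, the hint is what stops the heap-to-heap join drifting towards the two-minute timings measured above.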

    Read the article

  • Recent resources on Entity Framework 4

    - by Eric Nelson
    I just posted on the bits you need to install to explore all the features of Entity Framework 4 with the Visual Studio 2010 RC. I’ve also had a quick look (March 12th 2010) to see what new resources are out there on EF4. They appear a little thin on the ground – but there are some gems. The following all caught my attention:
    - Julie Lerman has published 2 how-to videos on EF4 on pluralsight.com. You need to create a free guest pass to watch them.
    - Getting Started with Entity Framework 4.0 – session given at Cairo CodeCamp 2010. This includes ppt and demos.
    - Entity Framework 4 providers – read through the comments.
    - What’s new with Entity Framework in Visual Studio 2010 RC.
    - Extending the design surface of EF4 using the Extension Starter Kit.
    - Persistence Ignorance and EF4 on geekSpeak on channel 9 (poor audio IMHO – I gave up).
    - First of a series of posts on EF4.
    - How to stop your dba having a heart attack with EF4, from Simon Sabin in the UK. This includes ppt and demos.
    And the biggy: you no longer have to depend on SQL Profiler to keep an eye on the generated SQL. There is now a commercial profiler for Entity Framework. I am yet to try it – but I listened to a .NET rocks podcast which made it sound great. It is “hidden” in a session on DSLs in Boo –> Oren Eini on creating DSLs in Boo. This is a much richer experience than you would get from SQL Profiler – matching the SQL to the .NET code.
    And finally a momentous #fail to … drum roll… the Visual Studio 2010 and .NET Framework 4 Training Kit Feb release. This just contains one ppt on EF4 – and it is not even a good one. Real shame.
    P.S. I will update the 101 EF4 Resources with the above … but post devweek in case I find some more goodies. Related Links: 101 EF4 Resources

    Read the article

  • Is Joel Test really a good gauging tool?

    - by henry
    I just learned about the Joel Test. I have been a computer programmer for 22 years, but somehow never heard about it before. I consider my best job so far to be a small investment-managing company with 30 employees and only 3 people in the IT department. I am no longer with them, but I worked there for 5 years – my longest streak with any given company. To my surprise, they scored extremely poorly on the Joel Test. The only two questions I would answer “yes” to are #4 (Do you have a bug database?) and #9 (Do you use the best tools money can buy?). Everything else is either “sometimes” or a straight “no”. Here is what I liked about the company, however:
    a) Good pay. They bragged about it to my face and I bragged about it to their face, so it was almost like a family environment.
    b) I always knew the big picture. When writing code to solve a particular problem there was no ambiguity about the business nature of that problem. Even though we did not always have written specs, we could ask business users a question at any time, often yelling it across the floor. I could even talk to executives any time I felt like it: no appointment necessary.
    c) Immediate feedback. Once we implemented a solution and made business users happy, they immediately let us know; we (the programmers) became the heroes of the moment.
    d) No red tape. I could always buy any tools I deemed necessary, and design solutions the way my professional judgment dictated.
    e) Flexibility. If I had a mid-day dental appointment near my house rather than near the office, I would send an email to the company: "FYI: I work from home today". As long as one of the 3 IT guys was on the floor (to help traders in case their monitors went dark), they did not care where the other 2 were.
    So the question becomes: how valuable is the Joel Test? Why bother with it?

    Read the article

  • What is the best way to work with large databases in Java depending on context?

    - by Singletony
    Hi guys. We are trying to figure out the best practice for working with very large DBs in Java. What we do is a kind of BI, i.e. analyzing very large DBs and using them to create intermediate DBs that represent intelligent knowledge of the original ones. We are currently using JDBC and just performing queries using a ResultSet. As more and more data is created, we are wondering whether more appropriate ways exist for parsing and manipulating these large DBs:
    - We need to support 'chunk' manipulation and not an entire DB at once (e.g. LIMIT in JDBC has very poor performance).
    - We do not need to be constantly connected, since we are just pulling results and creating new tables of our own.
    - We want to understand JDBC alternatives, with respect to advantages and disadvantages.
    Whether you think JDBC is the way to go or not, what are the best practices to follow depending on context (e.g. for large DBs queried in chunks)? If my question is not clear, I will gladly elaborate! THANK YOU SO MUCH!
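    One technique that often answers the 'chunks without deep LIMIT' requirement is keyset pagination: each query asks only for rows beyond the last key already processed, so the database never has to skip over earlier rows. The sketch below is an illustration under assumed names (a hypothetical source_rows table with an integer key column id, and a database that supports LIMIT, as the question's mention of LIMIT suggests); from JDBC it would be run repeatedly through a PreparedStatement, binding the largest id seen so far to the placeholder.

    -- Hypothetical schema: source_rows(id BIGINT PRIMARY KEY, payload ...)
    -- Fetch one chunk at a time; bind the largest id from the previous chunk (0 for the first call)
    SELECT id, payload
    FROM source_rows
    WHERE id > ?            -- last id already processed
    ORDER BY id
    LIMIT 10000;            -- chunk size: tune to the memory you can spare

    Because the filter is on the key itself, each chunk is an index range scan whose cost stays roughly constant as you work through the table, which is what makes it practical for building intermediate tables from very large sources.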

    Read the article

  • How do web-developers do web-design when freelancing?

    - by Gerald Blizz
    So I recently got my first job as a junior web developer. My company creates small/medium sites for a wide variety of customers: auto businesses, wedding agencies, some sauna websites, etc. – hope you get my point. They don't do big serious stuff like banking systems; it's mostly small/medium-sized websites for startups and medium-sized businesses. My main skills are PHP/MySQL; I also know HTML and a bit of CSS/JS/AJAX. I know that a good web developer must know some backend language (like PHP/Ruby/Python) AND the HTML+CSS+JS+AJAX+jQuery combo. However, I have always wondered about something. In my company we have a web designer, and in other serious organisations I often see the same setup: web developers who create the business logic and web designers who create the design. As far as I know, after the designers paint the design of a website they hand it to the developers, either as a PSD or already sliced, and the developers put it together with the logic – but the design is NOT created by the developers. Such separation seems very good for a full-time job, but I wonder: how do freelance web developers do websites? Do most of them just pay freelance designers to create the design for them? Or do some people do both? The reason I ask is that I plan to start some freelancing in my free time once I get good at web development. But I don't want to create websites with great business logic but poor design, and neither do I want to let someone else create the design for me. I like web development very much and I am doing quite well; I like design as well, even though I am a bit lost as to how to study it and get better at it. But I am scared that going in both directions won't let me become an expert – it seems like two totally different jobs, and getting really good at both seems very hard. But I really want to do both. What should I do? Thank you!

    Read the article

  • SO-overflow induced passivity - how to cope?

    - by Ruben
    After not really working on my pet project for a while, I discovered Stack Overflow and, upon perusing it more intensely, I was quite amazed. I'm a bit of a perfectionist, so when I found eye-openers here highlighting many of the mistakes I had made, I first wanted to fix everything. However, it's a pet project for a reason: I'm self-taught and I'm studying psychology, so programming skills can never become priority one (though they often help, even in this field). The issues that stuck out were:
    - numerous security issues (e.g. CSRF prevention and bcrypt eluded me)
    - not object-oriented (at least the PHP part; the JS part mostly is)
    - no PHP framework used, so many of my DIY takes on commonly-tackled components (auth, ...) are either bad or inefficient
    - really poor MySQL usage (no prepared statements, the old mysql extension, heard about setting proper indices two days ago)
    - using MooTools even though jQuery seems to be fashionable, so there's probably always going to be better integration with services I'd like to use (like Google Visualization)
    So, my SO-induced frenzy turned into passivity. I can't do it all (soon) in the rather small amount of spare time I can spend working on my project. I can leave some of the issues be in good conscience (speed stuff: an unfinished and unpublished project will never become popular, right?). There is no clear conscience without good security, though, and if I don't use a framework for auth and other complex stuff, I'll regret having to do it myself. One obvious answer would probably be going open source, but I think the project would need to become more impressive before others would commit to it. I can't afford to employ someone either. I do think the project deserves to be worked on, though. How should I tackle it anyway? What's the best practice for little-practice people?

    Read the article
