Search Results

Search found 15866 results on 635 pages for 'css practice'.


  • What is a generic term for name/identifier? (as opposed to label)

    - by d3vid
    I need to refer to a number of things that have both an identifier value (used in code and configuration) and a human-readable label. These things include: database columns, dropdown items, subapplications, and objects stored in a dictionary. I want two unambiguous terms: one to refer to the identifier/value/key, and one to refer to the label. As you can see, I'm pretty settled on the latter :) For the former, identifier seems best (not everything is strictly a key, and value and name could refer to the label; although identifier usually refers only to a variable name), but I would prefer to follow an established practice if there is one. Is there an established term for this? (Please provide a source.) If not, are there any examples of a choice from a significant source (Java APIs, MSDN, a big FLOSS project)? (I wasn't sure if this should be posted here or to English Language & Usage. I thought this was the more appropriate expert audience. Happy to migrate if not.)
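
    To make the two terms concrete, here is a minimal sketch in Java of the value/label pairing the question is about; the class and field names are purely hypothetical, not an established convention:

        // Hypothetical names, only to illustrate the identifier vs. label split.
        public class Choice {
            private final String id;     // machine-facing identifier, used in code and configuration
            private final String label;  // human-readable label, shown to the user

            public Choice(String id, String label) {
                this.id = id;
                this.label = label;
            }

            public String getId()    { return id; }
            public String getLabel() { return label; }

            public static void main(String[] args) {
                Choice country = new Choice("DE", "Germany");
                System.out.println(country.getId() + " -> " + country.getLabel());
            }
        }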

    Read the article

  • .Net search engine architecture and technology choice

    - by shrivb
    I am in the process of designing a search engine for an ASP.NET site. The site currently uses Microsoft Indexing Server (MIS) to index and search content ranging from simple text files to MS Office documents to PDFs. MIS is also used to crawl file servers and, in tandem with Index Server Companion, to crawl content from external sites. I intend to replace MIS with the indexer/crawler I am trying to build. Since my platform is completely on the Microsoft stack, I can't afford to have a Java application server; thus Solr, and effectively SolrNet, is ruled out. With this as the context, I have a couple of questions.
    1. Technology choice: I did my initial investigation and looked at Lucene.Net. There seem to be two issues with using it. First, it can't crawl external content, and there doesn't seem to be a direct port of Nutch to .NET. Second, since it is just an indexer, it can't parse various document types; the parsing is left to the developer. So what would be the best technology choice on the .NET platform to achieve indexing and crawling? Are there any .NET open source libraries available for document parsing?
    2. Architectural pattern: Is there any general architectural pattern or best practice that should be followed in designing such a search engine?
    Thanks in advance.
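
    For orientation, indexing one parsed document with Lucene looks roughly like the sketch below. It is written against the Java Lucene API (Lucene.Net is a port of the same API, so the shape carries over); the index path, field names, and sample text are assumptions, and extracting text from PDFs/Office files is exactly the part Lucene leaves to your own code or a parsing library.

        import java.nio.file.Paths;
        import org.apache.lucene.analysis.standard.StandardAnalyzer;
        import org.apache.lucene.document.Document;
        import org.apache.lucene.document.Field;
        import org.apache.lucene.document.StringField;
        import org.apache.lucene.document.TextField;
        import org.apache.lucene.index.IndexWriter;
        import org.apache.lucene.index.IndexWriterConfig;
        import org.apache.lucene.store.FSDirectory;

        public class Indexer {
            public static void main(String[] args) throws Exception {
                // Open (or create) an index directory on disk.
                try (IndexWriter writer = new IndexWriter(
                        FSDirectory.open(Paths.get("search-index")),
                        new IndexWriterConfig(new StandardAnalyzer()))) {
                    Document doc = new Document();
                    // The crawler and document parser supply these strings.
                    doc.add(new StringField("url", "http://example.com/page", Field.Store.YES));
                    doc.add(new TextField("body", "plain text extracted from the document", Field.Store.NO));
                    writer.addDocument(doc);
                }
            }
        }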

    Read the article

  • Is the output of Eclipse's incremental Java compiler used in production? Or is it simply to support Eclipse's features?

    - by Doug T.
    I'm new to Java and Eclipse. One of my most recent discoveries was that Eclipse ships with its own Java compiler (ecj) for doing incremental builds. By default, Eclipse outputs incrementally built class files to the projRoot/bin folder. I've also noticed that many projects come with Ant files that build the project using the JDK compiler installed on the system for production builds. Coming from a Windows/Visual Studio world where Visual Studio invokes the compiler for both production and debugging, I'm used to the IDE having a more intimate relationship with the command-line compiler; I'm used to the project being the make file. So my mental model is a little off. Is what's produced by Eclipse ever used in production? Or is it typically only used to support Eclipse's features (i.e. its IntelliSense-like code completion, incremental building, etc.)? Is it typical that for the final "release" build of a project, Ant, Maven, or another tool is used to do the full build from the command line? Mostly I'm looking for the general convention in the Eclipse/Java community. I realize that there may be some outliers out there who DO use ecj in production, but is this generally frowned upon? Or is this normal/accepted practice?

    Read the article

  • Drive project success & financial performance with business critical Enterprise Project Portfolio Management

    - by Sylvie MacKenzie, PMP
    Oracle Primavera invites you to the first in a series of three webcasts linking Enterprise Project Portfolio Management with enhanced operational performance and better financial results. Few organizations fully understand the impact projects have on their business, yet consistently delivering successful projects is vital to the financial success of an asset-intensive organization. Enterprise Project Portfolio Management (EPPM) is not a new concept, yet for many organizations it is not considered "business critical". Webcast 1: Plan – Aligning project selection and prioritization with corporate objectives. This webcast will look at two key questions: Are you aligning portfolio decisions with strategic objectives? How do you effectively measure the success of your portfolio decisions? Hear from Accenture, who'll present a compelling case for why asset-intensive organizations should consider EPPM as business critical. They'll explore: how technology is being used to enhance project delivery; how collaboration enhances delivery performance; and the major challenges associated with the planning phase of a project. Next, hear from Geoff Roberts, Industry Strategist from Oracle Primavera. With over 30 years' experience in project management/project controls in the construction, utilities and oil & gas sectors, Geoff will investigate how EPPM is a best practice and can support an organization through project selection and prioritization, ensuring that decisions are aligned with corporate objectives. Don't miss out, register today!

    Read the article

  • SQL SERVER – Get Schema Name from Object ID using OBJECT_SCHEMA_NAME

    - by pinaldave
    Sometimes a simple solution has an even simpler alternative, but we often do not practice it because we do not see the value in it or find it useful. Well, today's blog post is also about something which I have seen not practiced much in code: we are so comfortable with the alternative usage that we do not feel like switching how we query the data. I was going over forums and I noticed that in one place a user had used the following code to get the schema name from an object ID.

        USE AdventureWorks2012
        GO
        SELECT s.name AS SchemaName, t.name AS TableName, s.schema_id, t.OBJECT_ID
        FROM sys.tables t
        INNER JOIN sys.schemas s ON s.schema_id = t.schema_id
        WHERE t.name = OBJECT_NAME(46623209)
        GO

    Before I continue, let me say I do not see anything wrong with this script. It is just fine and one of the ways to get the schema name from an object ID. However, I have been using the function OBJECT_SCHEMA_NAME to get the schema name. If I had to write the same code from the beginning, I would have written it as follows.

        SELECT OBJECT_SCHEMA_NAME(46623209) AS SchemaName, t.name AS TableName, t.schema_id, t.OBJECT_ID
        FROM sys.tables t
        WHERE t.name = OBJECT_NAME(46623209)
        GO

    Now, both of the above queries give you the exact same result. If you remove the WHERE condition, they will give you information about all the tables in the database. Now the question is which one is better – honestly, it is not about one being better than the other. Use the one which you prefer to use. I prefer the second one as it requires less typing. Let me ask the same question of you – which method do you use to get the schema name, and why? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL System Table, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • WordPress is now nicely supported on SQL Server (and SQL Azure for that matter)

    - by Eric Nelson
    WordPress is enormously popular for blogs and full websites thanks to an awesome ecosystem which has built up around it, the (relative) simplicity of getting it up and running, plus the flexibility to "bend it" in all sorts of directions. When I say bend, check out the following, which are all WordPress sites: my "back up blog" http://iupdateable.wordpress.com/, my group's "odd site" :) http://ubelly.com, and my favourite "cheap games" site http://www.frugalgaming.co.uk/. WordPress users typically run their sites on Linux and MySQL, although PHP (the language in which WordPress is written) can be happily run on Windows. Both are fine technologies in their own right, but I (and probably a fair few others) would love to use WordPress with the technologies I know best (aka Windows, IIS and SQL Server). However, that has actually proven rather tricky to get working in practice – until now. Earlier last month OmniTI released a patch for WordPress which provides SQL Server and SQL Azure support. In parallel with that, some fine folks inside Microsoft have also created http://wordpress.visitmix.com, which contains information about running WordPress on the Microsoft platform with a particular focus on SQL Server and SQL Azure. Top stuff! To run WordPress with SQL Server: download and install the WordPress on SQL Server distro/patch. And then you will quite likely need to migrate: check out how to migrate to Windows and SQL Server by Zach Owens, who is moving his blog to Windows and SQL Server. Enjoy! Related links: Running PHP on IIS on Windows http://php.iis.net/. If PHP is not your thing, then the following blog engines are .NET based: BlogEngine http://www.dotnetblogengine.net/, DasBlog http://www.dasblog.info/, Subtext http://subtextproject.com/ (which happens to power http://geekswithblogs.net where my main blog is http://geekswithblogs.net/iupdateable).

    Read the article

  • At what point would you drop some of your principles of software development for the sake of more money?

    - by MeshMan
    I'd like to throw this question out there to see where the middle ground is. I'm going to admit that in the last 12 months I picked up TDD and a lot of the Agile values in software development. I was so overwhelmed with how much better my development of software became that I would never drop them out of principle. Until... I was offered a contracting role that doubled my take-home pay for the year. The company I joined didn't follow any specific methodology, the team hadn't heard of anything like code smells, SOLID, etc., and I certainly wasn't going to get away with spending time doing TDD if the team had never even seen unit testing in practice. Am I a sell-out? No, not completely... Code will always be written "cleanly" (as per Uncle Bob's teachings) and the principles of SOLID will always be applied to the code that I write as they are needed. Testing was dropped for me though; the company couldn't afford to have such an unknown handed to the team, who quite frankly, even if I did create test frameworks, would never use or maintain them correctly. Using that as an example, at what point would you say a developer should never drop his craftsmanship principles for the sake of money or other personal benefits? I understand that this can be a very personal opinion, depending on how concerned one is with one's own needs, business needs, the sake of craftsmanship, etc. But consider, for example, that testing could be dropped if the company decided it would rather have a test team than understand unit testing in programming – would that be something you could forgive yourself for, like I did? So given that there is something you would drop, there usually should be an equal cost in the business that makes up for what you drop – hopefully, unless of course you are pretty much out for lining your own pockets and not community/social collaborating ;). Double your money and go back to RAD? Or walk on, look for someone doing Agile, and never look back...

    Read the article

  • How to achieve a loosely coupled REST API but with a defined and well understood contract?

    - by BestPractices
    I am new to REST and am struggling to understand how one would properly design a REST system that both allows for loose coupling and at the same time allows a consumer of a REST API to understand the API. If, in my client code, I issue a GET request for a resource and get back XML, how do I know what to do with that XML? E.g. if it contains <fname>John</fname><lname>Smith</lname>, how do I know that these refer to the concepts of "first name" and "last name"? Is it up to the person writing the REST API to define somewhere in documentation what each of the XML fields means? What if the producer of the API wants to change the implementation to use <firstname> instead of <fname>? How do they do this and notify their consumers that this change occurred? Or do the consumers just encounter the error, then look at the payload and figure out on their own that it changed? I've read in REST in Practice that using a WADL tool to create a client implementation based on the WADL (and hide the fact that you're doing a distributed call) is an "anti-pattern". But I was planning to do this – at least then I would have a statically typed API call that, if it changed, I would know about at compile time and not at run time. Why is it a bad thing to generate client code based on a WADL? And how do I know what to do with the links returned in the response of a POST to a REST API? What defines this contract and gives true meaning to what each link will do? Please help! I don't understand how to go from statically-typed or even SOAP/RPC to REST!
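
    To make the coupling worry concrete, here is a deliberately naive consumer of that hypothetical <fname>/<lname> payload, sketched with the JAX-RS 2.0 client API (the URL and field names come from the question, not a real service). The out-of-band knowledge the question asks about lives in the hard-coded element name:

        import javax.ws.rs.client.Client;
        import javax.ws.rs.client.ClientBuilder;

        public class PersonClient {
            public static void main(String[] args) {
                Client client = ClientBuilder.newClient();
                String xml = client.target("http://example.com/people/42")
                                   .request("application/xml")
                                   .get(String.class);
                // The client only "knows" that <fname> means first name because a human
                // read the documentation; rename the element to <firstname> on the server
                // and this extraction silently stops producing anything useful.
                String first = xml.replaceAll("(?s).*<fname>(.*?)</fname>.*", "$1");
                System.out.println("first name: " + first);
                client.close();
            }
        }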

    Read the article

  • My father is a doctor. He is insisting on writing a database to store non-critical patient information, with no programming background

    - by Dominic Bou-Samra
    So, my father is currently in the process of "hacking" together a database using FileMaker Pro, a GUI-based database tool, for his small (4 doctor) practice. The database will be used to help ease the burden of reporting from medical machines, streamlining quite a clumsy process. He's got no programming background and seems to be doing everything in his power to not learn things correctly. He's got duplicate data types, no database-enforced relationships (foreign/primary key constraints) and a dozen other issues. He's doing it all by hand via the GUI tool, using YouTube videos. My issue is that, whilst I want him to succeed 100%, I don't think it's appropriate for him to be handling these types of decisions. How do I convince him that, without some sort of education in these topics, a hacked-together solution is a bad idea? He can be quite stubborn, and I think he sees these types of jobs as "child's play". How should I approach this? Is it even that bad an idea – or am I correct in thinking he should hire a proper DBA/developer to handle this so that it doesn't become a maintenance nightmare? NB: I am a developer consultant of 4 years and I've seen my share of painful customer implementations.

    Read the article

  • Session serialization in JavaEE environment

    - by Ionut
    Please consider the following scenario: we are working on a Java EE project for which scalability is starting to become an issue. Up until now we were able to scale up, but this is no longer an option, therefore we need to consider scaling out and preparing the app for a clustered environment. Our main concern right now is serializing the user sessions. Sadly, we did not consider the issue from the beginning, and we are encountering the following exception: java.io.WriteAbortedException: writing aborted; java.io.NotSerializableException: org.apache.catalina.session.StandardSessionFacade. I did some research, and this exception is thrown because there are objects stored on the session which do not implement the Serializable interface. Considering that, all over the app, there are quite a few custom objects stored on the session without implementing this interface, it would require a lot of tedious work and dedication to fix all these class declarations. We will fix all these declarations, but the main concern is that, in the future, there may be a developer who will add a non-Serializable object to the session and break session serialization and replication over multiple nodes. As a quick overview of the project, we are developing using a home-grown framework based on Struts 1 with the Servlet 3.0 API. This means that at this point we are using the standard session.getAttribute() and session.setAttribute() to work with the session, and the session handling is scattered all over the code base. Besides updating the classes of the objects stored on the session and making sure that they implement the Serializable interface, what other precautions should we take in order to ensure a reliable session replication capability at the application layer? I know it is a little bit late to consider this, but what would be the best practice in this case? Furthermore, are there any other issues we should consider regarding this transition? Thank you in advance!
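
    One precaution that follows directly from the "a developer may add a non-Serializable object later" worry is a session attribute listener that fails fast when it happens. The sketch below uses the Servlet 3.0 API mentioned in the post; whether to throw or merely log is a project decision, and the class name is made up:

        import java.io.Serializable;
        import javax.servlet.annotation.WebListener;
        import javax.servlet.http.HttpSessionAttributeListener;
        import javax.servlet.http.HttpSessionBindingEvent;

        // Flags non-Serializable session attributes as soon as they are set,
        // so the problem surfaces in development instead of during replication.
        @WebListener
        public class SerializableSessionGuard implements HttpSessionAttributeListener {

            @Override
            public void attributeAdded(HttpSessionBindingEvent event) {
                check(event.getName(), event.getValue());
            }

            @Override
            public void attributeReplaced(HttpSessionBindingEvent event) {
                // getValue() holds the old value here; fetch the new one from the session.
                check(event.getName(), event.getSession().getAttribute(event.getName()));
            }

            @Override
            public void attributeRemoved(HttpSessionBindingEvent event) {
                // nothing to verify on removal
            }

            private void check(String name, Object value) {
                if (value != null && !(value instanceof Serializable)) {
                    throw new IllegalArgumentException("Session attribute '" + name
                        + "' of type " + value.getClass().getName() + " is not Serializable");
                }
            }
        }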

    Read the article

  • What's the best structure for a repository?

    - by jpmelos
    I've looked into many open source software repositories, and I've found some common elements and some things people do differently from one another. For example, every repository has a README file, an INSTALL file, a COPYING file and stuff like that. Other things differ: some projects, like git, have their source code at the root level, while others have the source code in a src/ folder, and others, like the Linux kernel, have the source code spread across different folders at the root level that divide the code by area. Some have their tests in a t/ folder, others in a tests/ folder, or named otherwise. Some have files about submitting patches and who the maintainers are, and those might be inside some Documentation/ folder or at the root level. Are there recommendations? A best practice? For example: personally, I don't like the code at the root level, git-fashion. It looks messy and confuses anyone trying to start as a contributor (especially because they have some code inside folders, and scripts at the root level as well – it's really messy). If I were to start a project of my own and wanted to get it right from the start, are there recommendations? Best practices? How can I make a clean and clear structure? Thank you!
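
    There is no single standard, but a layout along these lines shows up in a lot of projects; the names below are common conventions, not requirements:

        project/
            README        what the project is and how to get started
            INSTALL       build and installation steps
            COPYING       license text
            src/          source code, split into subfolders by area as it grows
            tests/        test suite
            docs/         documentation, including how to submit patches and who maintains what
            tools/        helper scripts, kept out of the root so the top level stays tidy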

    Read the article

  • OOP for unit testing: The good, the bad and the ugly

    - by Jeff
    I have recently read Miško Hevery's PDF guide to writing testable code, in which it's stated that you should limit class instantiation in your constructors. I understand that this is what you should do, because it allows you to easily mock the objects that are passed as parameters to your class. But when it comes to writing actual code, I often end up with things like this (the example is in PHP using Zend Framework, but I think it's self-explanatory):

        class Some_class {
            private $_data;
            private $_options;
            private $_locale;

            public function __construct($data, $options = null) {
                $this->_data = $data;
                if ($options != null) {
                    $this->_options = $options;
                }
                $this->_init();
            }

            private function _init() {
                if (isset($this->_options['locale'])) {
                    $locale = $this->_options['locale'];
                    if ($locale instanceof Zend_Locale) {
                        $this->_locale = $locale;
                    } elseif (Zend_Locale::isLocale($locale)) {
                        $this->_locale = new Zend_Locale($locale);
                    } else {
                        $this->_locale = new Zend_Locale();
                    }
                }
            }
        }

    According to my understanding of Miško Hevery's guide, I shouldn't instantiate the Zend_Locale in my class but push it through the constructor (which can be done through the options array in my example). I am wondering what would be the best practice to get the most flexibility for unit testing this code, and also if I want to move away from Zend Framework. Thanks in advance

    Read the article

  • Tracking feature requests for small-scale components

    - by DXM
    I'm curious how other development teams (especially those that work in moderate to large development groups) track "future" features/wishlist functionality for internally developed frameworks or components. I know the standard advice is that a development team should find one good tool for tracking bugs/features and use that for everything, and I agree with that if the future requests are for the product itself. In my company we have an engineering department, which is broken up into multiple groups, and within each there can be one to several agile teams. The bug tracking product we use has been "a leader since 1997" (their UI/usability seems to also be evaluated against that year even today), but my agile team, or even my group, doesn't really control what is being used by the whole department. What we are looking to track is not necessarily product features but expansion/nice-to-have functionality for internal components that go into our product. To name a few examples: a framework/utility library on top of CppUnit which our developers share; a low-level IPC communications framework; a common development SDK that I and several other team leads started to help share some common code/tools at the department-wide level (this SDK is released as an internal "product" to each of the groups). Is the standard practice to use the one bug tracking tool? Or would it make more sense to set up something more localized specifically for our needs and maintain it ourselves? It's also unclear how management will feel if developers start performing "IT" roles of maintaining software and servers. At the same time, right now, we use Excel files, an internal wiki and MS OneNote for this kind of stuff, and that just doesn't feel right. (I'm afraid to ask for actual software recommendations, since that might make this question more localized or something. Also, developers need this way more than management, so it would be nice to find something either free or no more than the cost of a happy hour.)

    Read the article

  • C++11 Tidbits: access control under SFINAE conditions

    - by Paolo Carlini
    Lately I have been spending quite a bit of time on the SFINAE ("Substitution failure is not an error") features of C++, fixing and tweaking various bits of the GCC implementation. An important missing piece was the implementation of the resolution of DR 1170 which, in a nutshell, mandates that access checking is done as part of the substitution process. Consider:

        class C { typedef int type; };

        template <class T, class = typename T::type>
        auto f(int) -> char;

        template <class>
        auto f(...) -> char (&)[2];

        static_assert (sizeof(f<C>(0)) == 2, "Ouch");

    According to the resolution, the static_assert should not fire, and the snippet should compile successfully. The reason is that the first f overload must be removed from the candidate set because C::type is private to C. On the other hand, before the resolution of DR 1170, the expected behavior was for the first overload to remain in the candidate set, win over the second one, and eventually lead to an access control error (*). GCC mainline (which will be 4.8) finally implements the DR, thus benefiting the many modern programming techniques heavily exploiting SFINAE, among which is certainly the GNU C++ runtime library itself, which relies on it for the internals of <type_traits> and in several other places. Note that the resolution of the DR is active even in C++98 mode, not just in C++11 mode, because it turned out that the traditional behavior, as implemented in GCC, wasn't fully consistent in all the possible circumstances. (*) In practice, GCC didn't really implement this; the static_assert triggered instead.

    Read the article

  • Oracle Enterprise Manager 12c Anniversary at Open World General Session and Twitter Chat using #em12c on October 2nd

    - by Anand Akela
    As most of you will remember, Oracle Enterprise Manager 12c was announced last year at OpenWorld. We are celebrating the first anniversary of Oracle Enterprise Manager 12c next week at OpenWorld. During the last year, Oracle customers have seen the benefits of federated self-service access to complete application stacks, elastic scalability, automated metering, and charge-back from the capabilities of Oracle Enterprise Manager 12c. In this session you will learn how customers are leveraging Oracle Enterprise Manager 12c to build and operate their enterprise cloud. You will also hear about Oracle's IT management strategy and some new capabilities inside the Oracle Enterprise Manager product family. In this anniversary general session of Oracle Enterprise Manager 12c, you will also watch an interactive role play (similar to what some of you may have seen at the "Zero to Cloud" sessions at the Oracle Cloud Builder Summit) depicting a fictional company in the throes of deploying a private cloud. Watch as the CIO and his key cloud architects battle with misconceptions about enterprise cloud computing, and watch how Oracle Enterprise Manager helps them address the key challenges of planning, deploying and managing an enterprise private cloud. The session will be led by Sushil Kumar, Vice President, Product Strategy and Business Development, Oracle Enterprise Manager. Jeff Budge, Director, Global Oracle Technology Practice, CSC Consulting, Inc., will join Sushil for the general session as well. Following the general session, Sushil Kumar (Twitter user name @sxkumar) will join us for a Twitter chat on Tuesday from 1:00 PM to 2:00 PM. Sushil will answer any follow-up questions from the general session or any question related to Oracle Enterprise Manager and Oracle Private Cloud. You can participate in the chat using the hash tag #em12c on Twitter.com or by going to tweetchat.com/room/em12c (needs Twitter credentials to participate). You can pre-submit your questions for Sushil using any of the social media channels mentioned below. Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter

    Read the article

  • Best arguments for/against introducing ORM technology into a company's dev process

    - by james
    I have started using ORM technology in the last few years. My first exposure was NHibernate; I then moved on to Linq to Sql and Entity Framework. The issue I have, however, is that there are some organisations where I have found strong opposition to introducing ORM tools. They usually have a number of reasons: they have a lot of built-up SQL skills in the team, and are worried about the underlying SQL that ORMs generate; they have DBAs who like to be able to see the SQL an app uses so that they can review it for best practice; and they are worried about performance (some people have "heard" that ORMs aren't as performant but have no real proof themselves – there may well be some truth in this! :). So, I'm looking for the best or most convincing arguments that you have put forward FOR the use of ORM tools. Equally, I would be interested in the arguments against, too. Note: this is NOT a discussion over which ORM I should use.
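
    On the "DBAs like to see the SQL" point, most mainstream ORMs can be made to show exactly what they send to the database. A minimal sketch using Java JPA/Hibernate properties (NHibernate exposes a similar show_sql setting; the persistence unit name "app" is made up):

        import java.util.HashMap;
        import java.util.Map;
        import javax.persistence.EntityManagerFactory;
        import javax.persistence.Persistence;

        public class OrmSqlVisibility {
            public static void main(String[] args) {
                Map<String, String> props = new HashMap<>();
                props.put("hibernate.show_sql", "true");    // echo every generated SQL statement
                props.put("hibernate.format_sql", "true");  // pretty-print it for review
                EntityManagerFactory emf = Persistence.createEntityManagerFactory("app", props);
                emf.close();
            }
        }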

    Read the article

  • Best practices for using namespaces in C++.

    - by Dima
    I read Uncle Bob's Clean Code a few months ago, and it has had a profound impact on the way I write code. Even if it seemed like he was repeating things that every programmer should know, putting them all together and putting them into practice does result in much cleaner code. In particular, I found breaking up large functions into many tiny functions, and breaking up large classes into many tiny classes, to be incredibly useful. Now for the question. The book's examples are all in Java, while I have been working in C++ for the past several years. How would the ideas in Clean Code extend to the use of namespaces, which do not exist in Java? (Yes, I know about Java packages, but they are not really the same.) Does it make sense to apply the idea of creating many tiny entities, each with a clearly defined responsibility, to namespaces? Should a small group of related classes always be wrapped in a namespace? Is this the way to manage the complexity of having lots of tiny classes, or would the cost of managing lots of namespaces be prohibitive?

    Read the article

  • How to create and administer multi-architecture PPAs?

    - by maxschlepzig
    I have a program that needs to be recompiled for every Ubuntu version. Currently I am packaging it using Ubuntu's PPA just for the current distribution. Eventually, I will have to provide packages for the previous Ubuntu version as well, and I am not sure how to accomplish this. How does the Ubuntu PPA build server work – does it just look at the distribution field in the most current changelog entry (in the debian/changelog file) to determine for which distribution the package should be built? The Debian specification allows multiple distributions in the distribution field, but this does not seem to help me. Some Ubuntu documents talk about encoding the distribution name into the version number (in the debian/changelog file), but how does this work in practice? A new version of the program becomes available – then what? Do I add a new changelog entry for each distribution, and does the PPA build server automatically build new packages for each distribution after I dput them up? Or does the PPA build server just look at the first changelog entry?
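
    For illustration, the "encode the distribution in the version number" convention usually means one changelog entry (and one source upload) per target series, with a ~series suffix keeping the versions distinct; the package name, maintainer and date below are made up:

        myprogram (1.2-0ubuntu1~precise1) precise; urgency=low

          * New upstream release, rebuilt for precise.

         -- Jane Doe <jane@example.com>  Mon, 01 Apr 2013 10:00:00 +0200

    The build server takes the target series from the distribution field of the topmost entry, which is why the usual routine is a separate entry and dput per series.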

    Read the article

  • How to handle multiple effect files in XNA

    - by Adam 'Pi' Burch
    So I'm using ModelMesh and its built-in Effects collection to draw a mesh with some shaders I'm playing with. I have a simple GUI that lets me change these parameters to my heart's desire. My question is: how do I handle shaders that have unique parameters? For example, I want a 'shiny' parameter that affects shaders with Phong-type specular components, but for an environment-mapping shader such a parameter doesn't make a lot of sense. How I have it right now is that every time I call the ModelMesh's Draw() function, I set all the Effect parameters like so:

        foreach (ModelMesh m in model.Meshes)
        {
            // Slightly change the way the world matrix is calculated when using the bunny object,
            // since it is not quite centered in object space.
            if (isDrawBunny == true)
            {
                world = boneTransforms[m.ParentBone.Index] * Matrix.CreateScale(scale) * rotation * Matrix.CreateTranslation(position + bunnyPositionTransform);
            }
            else // If not rendering the bunny, draw normally.
            {
                world = boneTransforms[m.ParentBone.Index] * Matrix.CreateScale(scale) * rotation * Matrix.CreateTranslation(position);
            }

            foreach (Effect e in m.Effects)
            {
                Matrix ViewProjection = camera.ViewMatrix * camera.ProjectionMatrix;
                e.Parameters["ViewProjection"].SetValue(ViewProjection);
                e.Parameters["World"].SetValue(world);
                e.Parameters["diffuseLightPosition"].SetValue(lightPositionW);
                e.Parameters["CameraPosition"].SetValue(camera.Position);
                e.Parameters["LightColor"].SetValue(lightColor);
                e.Parameters["MaterialColor"].SetValue(materialColor);
                e.Parameters["shininess"].SetValue(shininess);
                // e.Parameters["normal"]
            }
            m.Draw();
        }

    Note the prescience of the example! The solutions I've thought of involve preloading all the shaders and updating the unique parameters as needed. So my question is: is there a best practice I'm missing here? Is there a way to pull the parameters a given Effect needs from that Effect? Thank you all for your time!

    Read the article

  • What is the precise definition of programming paradigm?

    - by Kazark
    Wikipedia defines programming paradigm thus: "a fundamental style of computer programming", which is echoed in the descriptive text of the paradigms tag on this site. I find this a disappointing definition. Anyone who knows the words programming and paradigm could do about that well without knowing anything else about it. There are many styles of computer programming at many levels of abstraction; within any given programming paradigm, multiple styles are possible. For example, Bob Martin says in Clean Code (13): "Consider this book a description of the Object Mentor School of Clean Code. The techniques and teachings within are the way that we practice our art. We are willing to claim that if you follow these teachings, you will enjoy the benefits that we have enjoyed, and you will learn to write code that is clean and professional. But don't make the mistake of thinking that we are somehow "right" in any absolute sense." Thus Bob Martin is not claiming to have the correct style of object-oriented programming, even though he, if anyone, might have some claim to doing so. But even within his school of programming, we might have different styles of formatting the code (K&R, etc.). There are many styles of programming at many levels. So how can we define programming paradigm rigorously, to distinguish it from other categories of programming styles? "Fundamental" is somewhat helpful, but not specific. How can we define the phrase in a way that will communicate more than the separate meanings of each of the two words – in other words, how can we define it in a way that will provide additional meaning for someone who speaks English but isn't familiar with a variety of paradigms?

    Read the article

  • Library Organization in .NET

    - by Greg Ros
    I've written a .NET bitwise operations library as part of my projects (stuff ranging from "get most significant set bit" to some more complicated bitwise transformations) and I mean to release it as free software. I'm a bit confused about a design aspect of the library, though. Many of the methods/transformations in the library come in different endianness variants. A simple example is a getBitAt method that regards index 0 as the least significant bit, or the most significant bit, depending on the version used. In practice, I've found that using separate functions for different endianness results in much more comprehensible and reusable code than assuming all operations are little-endian or something. I'm really stumped regarding how best to package the library. Should methods that have LE and BE versions take an enum parameter in their signature, e.g. Endianness.Little, Endianness.Big? Or should I have different static classes with identically named methods, such as MSB.GetBit and LSB.GetBit? On a much wider note, is there a standard I could use in cases like this? Some guide? Is my design issue trivial? I have a perfectionist bent, and I sometimes get stuck on tricky design issues like this... Note: I've sort of realized I'm using endianness somewhat colloquially to refer to the order/place value of digital component parts (be they bits, bytes, or words) in a larger whole, in any setting. I'm not talking about machine-level endianness or serial transmission endianness, just about place-value semantics in general. So there isn't a context of targeting different machines/transmission techniques or something.
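
    The two API shapes being weighed look roughly like the sketch below. It is in Java rather than .NET, and every name in it is hypothetical; it only illustrates the enum-parameter option against the two-classes option for a 64-bit value:

        // Option 1: one method, endianness passed as an enum.
        enum Endianness { LITTLE, BIG }

        final class Bits {
            static boolean getBit(long value, int index, Endianness e) {
                int shift = (e == Endianness.LITTLE) ? index : 63 - index;
                return ((value >>> shift) & 1L) != 0;
            }
        }

        // Option 2: two holders with identically named methods, so call sites read Lsb.getBit(...) / Msb.getBit(...).
        final class Lsb {
            static boolean getBit(long value, int index) { return ((value >>> index) & 1L) != 0; }
        }

        final class Msb {
            static boolean getBit(long value, int index) { return ((value >>> (63 - index)) & 1L) != 0; }
        }

        public class EndiannessApiSketch {
            public static void main(String[] args) {
                long v = 0x8000000000000001L;
                System.out.println(Bits.getBit(v, 0, Endianness.LITTLE)); // true: lowest bit is set
                System.out.println(Msb.getBit(v, 0));                     // true: highest bit is set
            }
        }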

    Read the article

  • Complimentary Refills

    - by onefloridacoder
    My son and I were out to dinner, and right after we sat down he combed the menu to locate the soda selection. Then he looked up at me and said, "Looks like we get free refills here, sweet!" While we were sitting there I was thinking about where that statement came from, and I remembered one time when he was helping to figure out the tip and saw that we were charged for six sodas, but there were only four of us at the table. I would say that's when this started for eateries he's not familiar with. I was talking to a friend of mine this week when this thought came to me: why can't we manage expectations like my son does – find out before the order is placed. Find out what's expected first, then use the other bits of guidance to move forward. But how many times have we all paid way too much for something we thought was free on a project – me, plenty. This quote is going up in my work space, next to one I picked up at Corey Haines' Software Craftsmanship talk at Open Agile Romania – "Work != Practice". So if anyone else has gotten burnt, maybe check the menu; it will be in the area where the customer picks two from the list of "Price, Quality, or Speed". Refills will be listed just beneath that.

    Read the article

  • Should a link validator report 302 redirects as broken links?

    - by Kevin Vermeer
    A while ago, sparkfun.com changed their URL structure from /commerce/product_info.php?products_id=9266 to /products/9266. This is nice, right? We don't need to know that it is (or was) a PHP page, and commerce, product_info, and products_id all tell us that we're looking at some products. The latter form seems like a great improvement. However, the change would have broken existing links, so, nicely, they stuck in 302 redirects. Visit http://www.sparkfun.com/commerce/product_info.php?products_id=9266 and your browser will issue

        GET /commerce/product_info.php?products_id=9266 HTTP/1.1

    to which Sparkfun's servers reply

        HTTP/1.1 302 Found
        Location: http://www.sparkfun.com/products/9266

    This 302 redirect is caught by Stack Exchange's link validator as a broken link. It's not broken; it works just fine. Here, try it: http://www.sparkfun.com/commerce/product_info.php?products_id=9266 I understand that a 302 redirect is intended to be a temporary redirect, while a 301 should be used for permanent changes per RFC 2616. That said, Wikipedia and common practice use it as a redirect. Who is in error in this situation? Is this an error in Sparkfun's redirect implementation or in Stack Exchange's URL validator?

    Read the article

  • Prevent oversteering catastrophe in racing games

    - by jdm
    When playing GTA III on Android, I noticed something that has been annoying me in almost every racing game I've played (except maybe Mario Kart): driving straight ahead is easy, but curves are really hard. When I switch lanes or pass somebody, the car starts swiveling back and forth, and any attempt to correct it makes it only worse. The only thing I can do is hit the brakes. I think this is some kind of oversteering. What makes it so irritating is that it never happens to me in real life (thank god :-)), so 90% of the games with vehicles in them feel unreal to me (despite probably having really good physics engines). I've talked to a couple of people about this, and it seems either you 'get' racing games, or you don't. With a lot of practice, I did manage to get semi-good at some games (e.g. from the Need for Speed series), by driving very cautiously, braking a lot (and usually getting a cramp in my fingers). What can you do as a game developer to prevent the oversteering resonance catastrophe and make driving feel right (for a casual racing game that doesn't strive for 100% realistic physics)? I also wonder what exactly games like Super Mario Kart do differently so that they don't have so much oversteering. I guess one problem is that if you play with a keyboard or a touchscreen (but not wheels and pedals), you only have digital input: gas pressed or not, steering left/right or not, and it's much harder to steer appropriately for a given speed. The other thing is that you probably don't have a good sense of speed, and drive much faster than you would (safely) in reality. From the top of my head, one solution might be to vary the steering response with speed.
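
    The last idea – varying the steering response with speed – can be sketched in a few lines. This is only an illustration in Java with made-up constants, not a recipe from any particular engine: a digital left/right input is scaled down as speed rises, so small corrections at high speed stay small instead of kicking off the back-and-forth swiveling described above.

        public class SteeringHelper {
            static final double MAX_STEER  = 0.60;  // max steering angle in radians at low speed
            static final double LOW_SPEED  = 10.0;  // m/s: full steering authority at or below this
            static final double HIGH_SPEED = 50.0;  // m/s: authority is heavily reduced by this speed
            static final double MIN_FACTOR = 0.15;  // fraction of MAX_STEER still available at HIGH_SPEED

            // input is the digital steering command in [-1, +1]; speed is the current speed in m/s.
            static double steeringAngle(double input, double speed) {
                double t = (speed - LOW_SPEED) / (HIGH_SPEED - LOW_SPEED);
                t = Math.max(0.0, Math.min(1.0, t));            // clamp to [0, 1]
                double factor = 1.0 - t * (1.0 - MIN_FACTOR);   // linear falloff from 1.0 down to MIN_FACTOR
                return input * MAX_STEER * factor;
            }

            public static void main(String[] args) {
                System.out.println(steeringAngle(1.0, 5.0));    // full lock at parking-lot speed
                System.out.println(steeringAngle(1.0, 40.0));   // much gentler at highway speed
            }
        }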

    Read the article

  • Rule of thumb for cost vs. savings for code re-use

    - by Styler
    Is it a good rule of thumb to always write code with the intent of re-using it somewhere down the road? Or, depending on the size of the component you are writing, is it better practice to design it for re-use only when it makes sense with regard to the time spent on it? What is a good rule of thumb for spending extra time on analysis and design of project components that have "some probability" of being needed later down the road for other things that may or may not need this part? For example, if I have the need for project X to do things A and B: A definitely needs to be written for re-use because it just makes sense to do so. B is very project-specific at the moment, and I can hack it all together in a couple of days to finish the project on time and give everyone kudos for being a great team, etc. Or we could say, let's spend a whole friggin' 2 weeks figuring out what project Y/Z might need this thing for and spend a load of extra time on part B because someday we might need to use it on project Y/Z (where the savings will be realized). I'd imagine a perfect-world situation would be a nicely crafted combination of project-specific vs. re-use-architected components, given the project. However, some code shops might feel it would be a great idea to write everything with the intention of using it at some point down the road.

    Read the article
