Search Results

Search found 3826 results on 154 pages for 'graph theory'.


  • A Community Cure for a String Splitting Headache

    - by Tony Davis
    A heartwarming tale of dogged perseverance and Community collaboration to solve some SQL Server string-related headaches. Michael J Swart posted a blog this week that had me smiling in recognition and agreement, describing how an inquisitive Developer or DBA deals with a problem. It's a three-step process, starting with discomfort and anxiety; a feeling that one doesn't know as much about one's chosen specialized subject as previously thought. It progresses through a phase of intense research and learning until finally one achieves breakthrough, blessed relief and renewed optimism. In this case, the discomfort was provoked by the mystery of massively high CPU when searching Unicode strings in SQL Server. Michael explored the problem via Stack Overflow, Google and Twitter #sqlhelp, finally leading to resolution and a blog post that shared what he learned. Perfect; except that sometimes you have to be prepared to share what you've learned so far, while still mired in the phase of nagging discomfort. A good recent example of this can be found on our own blogs. Despite being a loud advocate of the lightning fast T-SQL-based string splitting techniques, honed to near perfection over many years by Jeff Moden and others, Phil Factor retained a dogged conviction that, in theory, shredding element-based XML using XQuery ought to be even more efficient for splitting a string to create a table. After some careful testing, he found instead that the XML way performed and scaled miserably by comparison. Somewhat subdued, and with a nagging feeling that perhaps he was still missing "something", he posted his findings. What happened next was a joy to behold; the community jumped in to suggest subtle changes in approach, using an attribute-based rather than element-based XML list, and tweaking the XQuery shredding. The result was performance and scalability that surpassed all other techniques. I asked Phil how quickly he would have arrived at the real breakthrough on his own. His candid answer was "never". Both are great examples of the power of Community learning, and the latter in particular illustrates the importance of being brave enough to parade one's ignorance. Perhaps Jeff Moden will accept the string-splitting gauntlet one more time. To quote the great man: you've just got to love this community! If you've an interesting tale to tell about being helped to a significant breakthrough for a problem by the community, I'd love to hear about it. Cheers, Tony.

    Read the article

  • Easy Profiling Point Insertion

    - by Geertjan
    One really excellent feature of NetBeans IDE is its Profiler. What's especially cool is that you can analyze code fragments, that is, you can right-click in a Java file and then choose Profiling | Insert Profiling Point. When you do that, you're able to analyze code fragments, i.e., from one statement to another statement, e.g., how long a particular piece of code takes to execute: https://netbeans.org/kb/docs/java/profiler-profilingpoints.html However, right-clicking a Java file and then going all the way down a longish list of menu items, to find "Profiling", and then "Insert Profiling Point" is a lot less easy than right-clicking in the sidebar (known as the glyphgutter) and then setting a profiling point in exactly the same way as a breakpoint: That's much easier and more intuitive and makes it far more likely that I'll use the Profiler at all. Once profiling points have been set then, as always, another menu item is added for managing the profiling point: To achieve this, I added the following to the "layer.xml" file: <folder name="Editors"> <folder name="AnnotationTypes"> <file name="profiler.xml" url="profiler.xml"/> <folder name="ProfilerActions"> <file name="org-netbeans-modules-profiler-ppoints-ui-InsertProfilingPointAction.shadow"> <attr name="originalFile" stringvalue="Actions/Profile/org-netbeans-modules-profiler-ppoints-ui-InsertProfilingPointAction.instance"/> <attr name="position" intvalue="300"/> </file> </folder> </folder> </folder> Notice that a "profiler.xml" file is referred to in the above, in the same location as where the "layer.xml" file is found. Here is the content: <!DOCTYPE type PUBLIC '-//NetBeans//DTD annotation type 1.1//EN' 'http://www.netbeans.org/dtds/annotation-type-1_1.dtd'> <type name='editor-profiler' description_key='HINT_PROFILER' localizing_bundle='org.netbeans.eppi.Bundle' visible='true' type='line' actions='ProfilerActions' severity='ok' browseable='false'/> Only disadvantage is that this registers the profiling point insertion in the glyphgutter for all file types. But that's true for the debugger too, i.e., there's no MIME type specific glyphgutter, instead, it is shared by all MIME types. Little bit confusing that the profiler point insertion can now, in theory, be set for all MIME types, but that's also true for the debugger, even though it doesn't apply to all MIME types. That probably explains why the profiling point insertion can only be done, officially, from the right-click popup menu of Java files, i.e., the developers wanted to avoid confusion and make it available to Java files only. However, I think that, since I'm already aware that I can't set the Java debugger in an HTML file, I'm also aware that the Java profiler can't be set that way as well. If you find this useful too, you can download and install the NBM from here: http://plugins.netbeans.org/plugin/55002

    Read the article

  • Why is wearing jeans considered unprofessional?

    - by Gopinath
    When I started my career 9 years ago I used to wear casual wear to the office – jeans & t-shirts all 5 days. The environment at the workplace during those days encouraged me to be casual, and many of my colleagues used to come in jeans. We had just started our careers, and in those days it was perfectly fine to be casual. As I moved up the ladder, I started feeling the discomfort of wearing jeans at work. During client visits, senior managers' meetings and consultations I was the odd man in the crowd, as the rest of them were in formals. In order to be one among the professionals I was forced to change my dressing style and start wearing formals. But the question "Why is wearing jeans to the workplace considered unprofessional?" has lingered in my mind till today. I got the answer to my question from a discussion thread on Quora: When they were invented, jeans were associated with blue-collar work. They were meant to get muddy and gross and take lots of abuse without falling apart, even if you wore the same pair every day. The people who bought them were the ones whose lives required durable clothing. And another commenter says… A professional image is critical to cementing business relationships, and part of that is, for right or wrong, how you dress. Jeans are typically associated with "kicking back", relaxation, leisure, informality,  and even a slightly rebellious flavor. The style and condition of the jeans are a consideration, as we often wear jeans into advanced states of being worn down, with tearing, etc., which we generally do not do with other clothing items. I agree with this theory even though it may be centuries old. If you want to look like a professional and be treated like a professional, it's better to dress up in formals. These days I make a point to be in formals at the workplace. Not everyone is Steve Jobs, able to get away with jeans & a turtleneck, right? CC Image credit flickr/exey

    Read the article

  • Business Strategy - Google Case Study

    Business strategy, as defined by SMBTN.com, is a term used in business planning that implies a careful selection and application of resources to obtain a competitive advantage in anticipation of future events or trends. In more general terms, business strategy is positioning a company so that it has the greatest competitive advantage over others in the markets and industries in which it participates. This process involves making corporate decisions regarding which markets to provide goods and services in, pricing, acceptable quality levels, and how to interact with others in the marketplace. The primary objective of business strategy is to create and increase value for all of a company's shareholders and stakeholders through the creation of customer value. According to InformationWeek.com, Google has a distinctive technology advantage over its competitors like Microsoft, eBay, Amazon and Yahoo. Google utilizes custom high-performance systems which are cost efficient because they can scale to extreme workloads. This hardware allows for a huge cost advantage over its competitors. In addition, InformationWeek.com interviewed Stephen Arnold, who stated that Google's programmers are 50%-100% more productive compared to programmers working for their competitors.  He based this theory on Google's competitors having to spend up to four times as much just to keep up. In addition to Google's technological advantage, they have also developed a decentralized management schema where employees report directly to multiple managers and team project leaders. This allows the responsibility of the technology department to be shared amongst multiple senior-level engineers and removes the need for a singular department head to oversee the activities of the department.  This is a departure from the standard management style. Typically a department head like a CIO or CTO would oversee the department's global initiatives and business functionality.  This would then be passed down and administered through middle management and implemented by programmers, business analysts, network administrators and database administrators. It goes without saying that an IT professional's responsibilities would be directed by Google's technological advantage and management strategy, simply because they work within the department, and would have to design, develop, and support the high-performance systems and would have to report to multiple managers and project leaders on a regular basis. Since Google was established and driven by new and emerging technology, all other departments would be directly impacted by the technology department.  In fact, they would have to cater to the technology department since it is a huge driving force in the success of Google. Reference: http://www.smbtn.com/smallbusinessdictionary/#b http://www.informationweek.com/news/software/linux/showArticle.jhtml?articleID=192300292&pgno=1&queryText=&isPrev=

    Read the article

  • Action delegate in C#

    - by Jalpesh P. Vadgama
    In the last few posts I have written lots of things about delegates, and this post is also part of that series. In this post we are going to learn about Action delegates in C#.  Following is a list of posts related to delegates. Delegates in C#. Multicast Delegates in C#. Func Delegates in C#. Action Delegates in C#: As per MSDN, Action delegates are used to pass a method as a parameter without explicitly declaring a custom delegate. Action delegates are used to encapsulate a method that does not have a return value. C# 4.0 Action delegates have the following variants; an Action delegate can take up to 16 parameters. Action – takes no parameters and does not return any value. Action(T) Action(T1,T2) Action(T1,T2,T3) Action(T1,T2,T3,T4) Action(T1,T2,T3,T4,T5) Action(T1,T2,T3,T4,T5,T6) Action(T1,T2,T3,T4,T5,T6,T7) Action(T1,T2,T3,T4,T5,T6,T7,T8) Action(T1,T2,T3,T4,T5,T6,T7,T8,T9) Action(T1,T2,T3,T4,T5,T6,T7,T8,T9,T10) Action(T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11) Action(T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,T12) Action(T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,T12,T13) Action(T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,T12,T13,T14) Action(T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,T12,T13,T14,T15) Action(T1,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,T12,T13,T14,T15,T16) Sounds interesting!!… Enough theory, now it's time to implement real code. Following is the code for that. using System; using System.Collections.Generic; namespace DelegateExample { class Program { static void Main(string[] args) { Action<String> Print = p => Console.WriteLine(p); Action<String,String> PrintAnother = (p1,p2)=> Console.WriteLine(string.Format("{0} {1}",p1,p2)); Print("Hello"); PrintAnother("Hello","World"); } } } In the above code you can see that I have created two Action delegates, Print and PrintAnother. Print has one string parameter and prints it, while PrintAnother has two string parameters and prints both strings via Console.WriteLine. Now it's time to run the example, and following is the output as expected. That's it. Hope you liked it. Stay tuned for more updates!!
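
    As a quick aside (my own minimal sketch, not from the original post), the parameterless Action and a multi-parameter variant look like this in practice:

        using System;

        class ActionVariants
        {
            static void Main()
            {
                // Action with no parameters: simply encapsulates a void method.
                Action sayHello = () => Console.WriteLine("Hello!");

                // Action with three parameters of different types.
                Action<string, int, bool> report =
                    (name, count, done) =>
                        Console.WriteLine("{0}: {1} items, finished = {2}", name, count, done);

                sayHello();
                report("Import", 42, true);
            }
        }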

    Read the article

  • Rewriting code under BSD license

    - by Frank
    I am currently studying OpenGL with the OpenGL SuperBible, 5th edition. I've found some C++ code distributed with the book (see also on google code) that is interesting to me. That code is under the New BSD License. I am writing my software in C# with the SharpGL wrapper and I'd like to know the following things: Can I rewrite that C++ to C#? edit: I'm interested in using such things as GLBatch, GLShaderManager and some other things from GLTools. The problem is that the library is in C++, but I use C#. How do I have to mark my source code if I put it somewhere like my github account? What should the disclaimer be? The original disclaimer looks like: /* GLShaderManager.h Copyright (c) 2009, Richard S. Wright Jr. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. Neither the name of Richard S. Wright Jr. nor the names of other contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ Edit: Should my copyright look something like this after rewriting? Copyright (c) 2014, My Name Copyright (c) 2009, Richard S. Wright Jr. All rights reserved. Redistribution...................

    Read the article

  • Adjusting the Score on Oracle Text search results

    - by Kyle Hatlestad
    When you sort the results of a search by Score using OracleTextSearch as the search engine in WebCenter Content, the results coming back are based on the relevancy of the document.  In theory, the more relevant the search term is to the document, the higher ranked Score it should receive.  But in practice, the relevancy score can seem somewhat of a mystery.  It's not entirely clear how it ranks the importance of some documents over others based on the search term.  And often times, once a word appears a certain number of times within a document, the Score simply maxes out at 100 and the top results can be difficult to discern from one another.  Take for example the search for 'vacation' on this set of documents: Out of 7 results, 6 of them have a Score of '100' which means they are basically ranked the same.  This doesn't make the sort by Score very meaningful.   Besides sorting by relevance, you can also tell Oracle Text to sort by occurrence.  In that case, it is much more predictable how the results would be ranked, and in many cases it provides a more meaningful sorting of results than relevance. To change this takes a small component change to the SearchOperatorMap resource.  By default, the query used for full-text searching looks like: <td>(ORACLETEXTSEARCH)fullText</td> <td>DEFINESCORE((%V), RELEVANCE * .1)</td> <td>text</td> Overriding this resource and changing it to: <td>(ORACLETEXTSEARCH)fullText</td> <td>DEFINESCORE((%V), OCCURRENCE * .01)</td>  <td>text</td> will force it to now use occurrence (note the change in scale to .01 as well).  So running the same search and sort options as the example above, the results come out quite a bit differently: In this case, there is a clear understanding of how the items rank.   And generally, if the search term appears 3 times more in one document than in another, it's got a better chance of being a document I'm interested in.  You may or may not feel the relevance ranking is better than the search term occurrence, but this provides the opportunity to try an alternate method that might work better for your results.  A pre-built component is available for download here. There is one caveat in using this method.  The occurrence ranking also maxes out at 100, so if a search term is in the document more than that, the Score result will stay at 100.

    Read the article

  • Finding work as a college student

    - by lightburst
    I'm a 3rd year CS student. I'm currently really, really, bored and tired of cheap school programming(I go to a fairly respectable(albeit not top) university in my country, but, really, it's not MIT) so I've been thinking about getting a part-time dev job for a long while now. I'm not some hotshot programmer or anything, but "Add/Delete XYZ class objects to list" or "Do this web feature/pattern in PHP" does get old after a few semesters. I also only learned(heard?) of programming when I entered college, so the duration of me being in contact with any code is short. I can't really apply as an intern as I have not accumulated the necessary credits yet to do that so I was thinking of selling myself as a part-time dev. I still need to go to school, and don't want to subject myself to living two lives. Plus, I think I'd have better chances considering my lack of things to write on the resume. The only language I know at heart is C (I've written several pointer-oriented stuff on my freshman year, which is apparently pretty leet(for some reason), or so Joel says. That sort of boosted my morale a bit) but I've worked with several other languages only for the sake of course work such as C#/Java/PHP/ASM. My only user-worthy project was a recursive quicksort simulator I wrote for class using GTK+ that used a textbox to output the progress. I also have not taken the compiler theory class yet, or my thesis. All that being said, I wonder if any legitimate software company(big or small) would hire somebody like me considering all that. If there are companies that do accept anybody like me, would I be doing programming or maybe just be a tester or something? Would anybody hire me as a dev at all? I know I don't have much(not even a degree) but what I lack in experience, I compensate with interest? I am less interested in websites and 'management' software(no offense or anything. also, most of the places here ONLY have those), and more into 'process driven'(I'm not sure how to call it) and system software. I have my eyes on a startup that sells basically an extension of Google Drive, but I feel like I'm too 'risky' for them. Oh, I'm also 19 if that means anything. We're not K-12 so kids go to college earlier than in the US.

    Read the article

  • The value of money

    - by ambreesh
    A dictionary definition of money is "any circulating medium of exchange, including coins, paper money, and demand deposits". If you ask an economist for a definition of money, you will be introduced to terms like M1, M2, M3, all of which denote tangible assets - currency, and anything that is liquid enough to be used as currency; checks, stamps and now mobile minutes being examples. The macroeconomic theory of money is fascinating - the effect of money supply on exchange rates and interest rates, the concept of the "money multiplier" (if I deposit $10 into a bank, the bank will likely loan $8 of it to someone else, who will then give it to someone else in exchange for goods and services, who will then likely deposit it again, which will result in the bank loaning it again and so on - making that $10 of money supply worth a lot more ($10+$8+$x+...)).  But all this depends on money supply - in other words, money that is printed by the mint. The Treasury Department spends a lot of time figuring out how much money to print; there is a lot being written about QE2 nowadays, which is intended to increase the money supply. Money is used to purchase goods and services, and yes, it is saved too, but that is so one can purchase goods and services later. Completely unrelated, there is a sea change occurring in the web world, dominated by, I believe, Facebook. With 500M active users and growing, FB has the ability to introduce a "money supply" which is completely unrelated to today's "money". Using today's money, an FB user can buy a certain number of FB$s, and then use the FB$s within FB to purchase goods and services - with the money multiplier kicking in. I remember talking with a colleague about this a few years ago; the true way to monetize the web is to introduce an alternative system to the existing one, and FB has the ability to do just that. There is enough momentum, enough mass for FB to start to monetize its user base. And completely screw up the economists at the Treasury, not to mention disintermediating the banks completely. The only other ubiquitous asset is mobile minutes. People exchanging mobile minutes for tangible goods and services happens today; the big difference, however, is the demographic. While Safaricom offers this ability in Kenya today, FB has the 15-40 year old middle-class user as their user. And the next generation is growing up with FB as a standard channel for communicating with their peers. Virtual flowers when going in for the kill? If your target is an avid FB user, why not? It certainly is a lot more green - no pun intended!

    Read the article

  • Music Notation Editor - Refactoring view creation logic elsewhere

    - by Cyril Silverman
    Let me preface by saying that knowing some elementary music theory and music notation may be helpful in grasping the problem at hand. I'm currently building a Music Notation and Tablature Editor (in Javascript). But I've come to a point where the core parts of the program are more or less there. All functionality I plan to add at this point will really build off the foundation that I've created. As a result, I want to refactor to really solidify my code. I'm using an API called VexFlow to render notation. Basically I pass the parts of the editor's state to VexFlow to build the graphical representation of the score. Here is a rough and stripped down UML diagram showing you the outline of my program: In essence, a Part has many Measures, which have many Notes, which have many NoteItems (yes, this is semantically weird, as a chord is represented as a Note with multiple NoteItems, individual pitches or fret positions). All of the relationships are bi-directional. There are a few problems with my design because my Measure class contains the majority of the entire application view logic. The class holds the data about all VexFlow objects (the graphical representation of the score). It contains the graphical Staff object and the graphical notes. (Shouldn't these be placed somewhere else in the program?) While VexFlowFactory deals with actual creation (and some processing) of most of the VexFlow objects, Measure still "directs" the creation of all the objects and what order they are supposed to be created in for both the VexFlowStaff and VexFlowNotes. I'm not looking for a specific answer as you'd need a much deeper understanding of my code. Just a general direction to go in. Here's a thought I had: create MeasureView/NoteView/PartView classes that contain the basic VexFlow objects for each class in addition to any extra logic for their creation? But where would these views be contained? Do I create a ScoreView that is a parallel graphical representation of everything? So that ScoreView.render() would cascade down PartView and call render for each PartView and cascade down into each MeasureView, etc. Again, I just have no idea what direction to go in. The more I think about it, the more ways to go seem to pop into my head. I tried to be as concise and simplistic as possible while still getting my problem across. Please feel free to ask me any questions if anything is unclear. It's quite a struggle trying to dumb down a complicated problem to its core parts.

    Read the article

  • Diving into Scala with Cay Horstmann

    - by Janice J. Heiss
    A new interview with Java Champion Cay Horstmann, now up on otn/java, titled  "Diving into Scala: A Conversation with Java Champion Cay Horstmann," explores Horstmann's ideas about Scala as reflected in his much lauded new book,  Scala for the Impatient.  None other than Martin Odersky, the inventor of Scala, called it "a joy to read" and the "best introduction to Scala". Odersky was so enthused by the book that he asked Horstmann if the first section could be made available as a free download on the Typesafe Website, something Horstmann graciously assented to. Horstmann acknowledges that some aspects of Scala are very complex, but he encourages developers to simply stay away from those parts of the language. He points to several ways Java developers can benefit from Scala: "For example," he says, " you can write classes with less boilerplate, file and XML handling is more concise, and you can replace tedious loops over collections with more elegant constructs. Typically, programmers at this level report that they write about half the number of lines of code in Scala that they would in Java, and that's nothing to sneeze at. Another entry point can be if you want to use a Scala-based framework such as Akka or Play; you can use these with Java, but the Scala API is more enjoyable. " Horstmann observes that developers can do fine with Scala without grasping the theory behind it. He argues that most of us learn best through examples and not through trying to comprehend abstract theories. He also believes that Scala is the most attractive choice for developers who want to move beyond Java and C++.  When asked about other choices, he comments: "Clojure is pretty nice, but I found its Lisp syntax a bit off-putting, and it seems very focused on software transactional memory, which isn't all that useful to me. And it's not statically typed. I wanted to like Groovy, but it really bothers me that the semantics seems under-defined and in flux. And it's not statically typed. Yes, there is Groovy++, but that's in even sketchier shape. There are a couple of contenders such as Kotlin and Ceylon, but so far they aren't real. So, if you want to do work with a statically typed language on the JVM that exists today, Scala is simply the pragmatic choice. It's a good thing that it's such a nice choice." Learn more about Scala by going to the interview here.

    Read the article

  • Token based Authentication and Claims for Restful Services

    - by Your DisplayName here!
    WIF as it exists today is optimized for web applications (passive/WS-Federation) and SOAP based services (active/WS-Trust). While there is limited support for WCF WebServiceHost based services (for standard credential types like Windows and Basic), there is no ready-to-use plumbing for RESTful services that do authentication based on tokens. This is not an oversight from the WIF team, but the REST services security world is currently rapidly changing – and that's by design. There are a number of intermediate solutions, emerging protocols and token types, as well as some already deprecated ones. So it didn't make sense to bake that into the core feature set of WIF. But after all, the F in WIF stands for Foundation. So just like the WIF APIs integrate tokens and claims into other hosts, this is also (easily) possible with RESTful services. Here's how. HTTP Services and Authentication Unlike SOAP services, in the REST world there is no (over)specified security framework like WS-Security. Instead standard HTTP means are used to transmit credentials and SSL is used to secure the transport and data in transit. In most cases the HTTP Authorization header is used to transmit the security token (this can be as simple as a username/password up to issued tokens of some sort). The Authorization header consists of the actual credential (consider this opaque from a transport perspective) as well as a scheme. The scheme is some string that gives the service a hint what type of credential was used (e.g. Basic for basic authentication credentials). HTTP also includes a way to advertise the right credential type back to the client; for this the WWW-Authenticate response header is used. So for token based authentication, the service would simply need to read the incoming Authorization header, extract the token, parse and validate it. After the token has been validated, you also typically want some sort of client identity representation based on the incoming token. This is regardless of how, technology-wise, the actual service was built. In ASP.NET (MVC) you could use an HttpModule or an ActionFilter. In (today's) WCF, you would use the ServiceAuthorizationManager infrastructure. The nice thing about using WCF's native extensibility points is that you get self-hosting for free. This is where WIF comes into play. WIF has ready-to-use infrastructure built in that just needs to be plugged into the corresponding hosting environment: Representation of identity based on claims. This is a very natural way of translating a security token (and again I mean this in the widest sense – it could also be a username/password) into something our applications can work with. Infrastructure to convert tokens into claims (called security token handlers) Claims transformation Claims-based authorization So much for the theory. In the next post I will show you how to implement that for WCF – including full source code and samples. (Wanna learn more about federation, WIF, claims, tokens etc.? Click here.)
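
    To make the WCF route a little more concrete, here is a minimal sketch of a ServiceAuthorizationManager that reads the Authorization header. This is my own illustration, not the full sample promised in the next post; ValidateToken is a hypothetical placeholder for whatever WIF security token handler you would actually call, and the .NET 4.5 System.Security.Claims types are used for brevity:

        using System;
        using System.Net;
        using System.Security.Claims;
        using System.ServiceModel;
        using System.ServiceModel.Channels;
        using System.Threading;

        public class TokenAuthorizationManager : ServiceAuthorizationManager
        {
            protected override bool CheckAccessCore(OperationContext operationContext)
            {
                // Grab the raw HTTP request from the WCF message properties.
                var request = (HttpRequestMessageProperty)operationContext
                    .RequestContext.RequestMessage.Properties[HttpRequestMessageProperty.Name];

                string header = request.Headers[HttpRequestHeader.Authorization];
                if (string.IsNullOrEmpty(header))
                    return false; // no credential sent - reject (or challenge via WWW-Authenticate)

                // The header is "<scheme> <credential>", e.g. "Bearer <token>".
                int space = header.IndexOf(' ');
                if (space < 0) return false;
                string token = header.Substring(space + 1);

                // Hypothetical helper: parse/validate the token (e.g. via a WIF
                // security token handler) and return the resulting claims.
                ClaimsPrincipal principal = ValidateToken(token);
                if (principal == null) return false;

                // Make the claims-based identity available to the service code.
                Thread.CurrentPrincipal = principal;
                return true;
            }

            private ClaimsPrincipal ValidateToken(string token)
            {
                // Placeholder: real code would do token validation and
                // claims transformation here.
                throw new NotImplementedException();
            }
        }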

    Read the article

  • What are some good realistic programming related movies (docu-dramas, documentaries, accurate fiction, etc)?

    - by EpsilonVector
    A while ago I asked this question and the result was this. Following the response I got in the meta question I'm re-asking the question with new guidelines to focus it on the direction I wanted it to have originally. ================================================================== The guidelines are as follows: by "programming related" I mean movies from which we can learn about stuff like the development process, or the history of software/computers, or programming culture. In other words, they must be grounded in the industry. No tangential stuff. Good entries satisfy as many of the following criteria as possible: Teach you about the history of the industry, or the development process, or teach you about important industry related topics (software patents for example) Are based on real life events, companies, people, practices, and they are the main focus of the movie After watching them, you feel like you understand or know something about the programmers' world that you didn't before (or you can see how someone could have such a response). You can point to it and say "this faithfully represents the industry/programmer culture at some point in time". This might be something you would show laymen to explain to them what "your people" are like and what it is that you do. Examples of good entries include: Pirates of Silicon Valley- the story of how Microsoft and Apple started the industry. Revolution OS- The story of Linux's rise to fame, and a pretty good cover of the Free Software/Open Source world. Aardvark'd: 12 Weeks with Geeks- development process. Examples of bad entries: Movies whose sole relevance is that they can be appreciated by programmers. The point of this question is not to be "what are some good movies" with "for a programmer" appended to it. Just because the writers got a few computer jokes right doesn't in itself make it about the industry. Movies where there's a computer related element, but which are not about the industry. For example, 24 (the TV series). It's a product of the information age but it isn't actually about it. Another example is movies where there's a really cool programmer character, but which are overall about something completely different. Likewise, The Big Bang Theory is not about physics, even though they have a cool physicist as a character. Science fiction, even if it draws ideas from computers. For example, the Matrix trilogy. Movies that you can't point to and say: "this is a faithful representation of our world (at some point in time)". If you can't do that then it doesn't mirror the industry. Keep it to one entry per answer so that the voting can sort the entries out.

    Read the article

  • Purple Cows, Copernicus, and Shampoo – Lessons in Customer Experience

    - by Christina McKeon
    What makes a great customer experience? And, why should you or your organization care? These are the questions that set the stage for the Oracle Customer Experience Summit, which kicked off yesterday in San Francisco. Day 1: The first day was filled with demos and insights from customer experience experts and Oracle customers sharing what it takes to deliver great customer experiences. Author Seth Godin delivered an entertaining presentation that included an in-depth exploration of the always-connected, always-sharing experience revolution that we are witnessing and yes, talked about the purple cow. It turns out that customer experience is your way to be the purple cow. Before everyone headed out to see Pearl Jam and Kings of Leon at the Oracle customer appreciation event, the day wrapped up with a discussion around building a customer-centric culture. Where do you start? Whom does it involve? What are some pitfalls to avoid? Day 2: The second day addressed the details behind all the questions brought up at the end of Day 1. Before you start on a customer experience initiative, Paul Hagen noted that you must understand you will forge a path similar to Copernicus. You will be proposing ideas and approaches that challenge current thinking in your organization. Just as Copernicus' heliocentric theory started a scientific revolution, your customer-centric efforts will start an experience revolution. If you think customer experience is like a traditional marketing approach, think again. It’s not about controlling your customers and leading them where you want them to go. It might sound like heresy to some, but your customers are already in control, whether or not your company realizes and acknowledges it. And, to survive and thrive, you'll have to focus on customers by thinking outside-in and working towards a brand that is better and more authentic. We learned how Vail Resorts takes this customer-centric approach. Employees must experience the mountain themselves and understand the experience from the guest’s standpoint. This has created a culture where employees do things for guests that are not expected. We also learned a valuable lesson in designing and innovating customer-centered experiences from Kerry Bodine. First you make the thing, and then you make the thing right. In this case, the thing is customer experience. Getting customer experience right means iterative prototyping and testing of your ideas. This is where shampoo comes in—think lather, rinse, repeat. Be prepared to keep repeating until the customer experience is right. Many of these sessions will be posted to YouTube in the coming weeks so be sure to subscribe to our CX channel.

    Read the article

  • In an Entity-Component-System Engine, How do I deal with groups of dependent entities?

    - by John Daniels
    After going over a few game design patterns, I have settled on Entity-Component-System (ES System) for my game engine. I've been reading articles (mainly T-Machine) and reviewing some source code, and I think I've got enough to get started. There is just one basic idea I am struggling with: how do I deal with groups of entities that are dependent on each other? Let me use an example: assume I am making a standard overhead shooter (think Jamestown) and I want to construct a "boss entity" with multiple distinct but connected parts. The breakdown might look something like this: Ship body: Movement, Rendering Cannon: Position (locked relative to the Ship body), Tracking\Fire at hero, Taking Damage until disabled Core: Position (locked relative to the Ship body), Tracking\Fire at hero, Taking Damage until disabled, Disabling (er...destroying) all other entities in the ship group. My goal would be something that could be identified (and manipulated) as a distinct game element without having to rewrite subsystems from the ground up every time I want to build a new aggregate element. How do I implement this kind of design in an ES System? Do I implement some kind of parent-child entity relationship (entities can have children)? This seems to contradict the methodology that entities are just empty containers and makes it feel more OOP. Do I implement them as separate entities, with some kind of connecting component (BossComponent) and related system (BossSubSystem)? I can't help but think that this will be hard to implement since how components communicate seems to be a big bear trap. Do I implement them as one entity, with a collection of components (ShipComponent, CannonComponents, CoreComponent)? This one seems to veer way off the ES System intent (components here seem too much like heavyweight entities), but I'm new to this so I figured I would put that out there. Do I implement them as something else I haven't mentioned? I know that this can be implemented very easily in OOP, but my choosing ES over OOP is one that I will stick with. If I need to break with pure ES theory to implement this design I will (not like I haven't had to compromise pure design before), but I would prefer to do that for performance reasons rather than start with bad design. For extra credit, think of the same design, but where each of the "boss entities" is actually connected to a larger "BigBoss entity" made of a main body, main core and 3 "Boss Entities". This would let me see a solution for at least 3 dimensions (grandparent-parent-child)...which should be more than enough for me. Links to articles or example code would be appreciated. Thanks for your time.
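
    For what it's worth, a minimal sketch of the second option (a connecting component plus a system that resolves it) might look like this. It is my own illustration with hypothetical names, not code from the T-Machine articles:

        using System;
        using System.Collections.Generic;

        // Entities stay plain integer IDs; a small "attachment" component records
        // which entity a part is locked to, and a system resolves positions each frame.
        struct Position { public float X, Y; }
        struct AttachedTo { public int ParentId; public float OffsetX, OffsetY; }

        class World
        {
            public Dictionary<int, Position> Positions = new Dictionary<int, Position>();
            public Dictionary<int, AttachedTo> Attachments = new Dictionary<int, AttachedTo>();
        }

        class AttachmentSystem
        {
            // Each frame, child entities copy their parent's position plus an offset,
            // so the cannon and core follow the ship body without an OOP hierarchy.
            public void Update(World world)
            {
                foreach (var pair in world.Attachments)
                {
                    Position parent = world.Positions[pair.Value.ParentId];
                    world.Positions[pair.Key] = new Position
                    {
                        X = parent.X + pair.Value.OffsetX,
                        Y = parent.Y + pair.Value.OffsetY
                    };
                }
            }
        }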

    Read the article

  • Multiple country-specific domains or one global domain [closed]

    - by CJM
    Possible Duplicate: How should I structure my urls for both SEO and localization? My company currently has its main (English) site on a .com domain with a .co.uk alias. In addition, we have separate sites for certain other countries - these are also hosted in the UK but are distinct sites with a country-specific domain names (.de, .fr, .se, .es), and the sites have differing amounts of distinct but overlapping content, For example, the .es site is entirely in Spanish and has a page for every section of the UK site but little else. Whereas the .de site has much more content (but still less than the UK site), in German, and geared towards our business focus in that country. The main point is that the content in the additional sites is a subset of the UK, is translated into the local language, and although sometimes is simply only a translated version of UK content, it is usually 'tweaked' for the local market, and in certain areas, contains unique content. The other sites get a fraction of the traffic of the UK site. This is perfectly understandable since the biggest chunk of work comes from the UK, and we've been established here for over 30 years. However, we are wanting to build up our overseas business and part of that is building up our websites to support this. The Question: I posed a suggestion to the business that we might consider consolidating all our websites onto the .com domain but with /en/de/fr/se/etc sections, as plenty of other companies seem to do. The theory was that the non-english sites would benefit from the greater reputation of the parent .com domain, and that all the content would be mutually supporting - my fear is that the child domains on their own are too small to compete on their own compared to competitors who are established in these countries. Speaking to an SEO consultant from my hosting company, he feels that this move would have some benefit (for the reasons mentioned), but they would likely be significantly outweighed by the loss of the benefits of localised domains. Specifically, he said that since the Panda update, and particularly the two sets of changes this year, that we would lose more than we would gain. Having done some Panda research since, I've had my eyes opened on many issues, but curiously I haven't come across much that mentions localised domain names, though I do question whether Google would see it as duplicated content. It's not that I disagree with the consultant, I just want to know more before I make recommendations to my company. What is the prevailing opinion in this case? Would I gain anything from consolidating country-specific content onto one domain? Would Google see this as duplicate content? Would there be an even greater penalty from the loss of country-specific domains? And is there anything else I can do to help support the smaller, country-specific domains?

    Read the article

  • Can't save data for a member in a data form

    - by RahulS
    Implied sharing is an old thing; everyone knows the reasons and solutions for it, but still, a little theory about it: With Essbase implied sharing, some members are shared even if you do not explicitly set them as shared. These members are implied shared members. When an implied share relationship is created, each implied member assumes the other member's value. Essbase assumes (or implies) a shared member relationship in these situations: 1. A parent has only one child 2. A parent has only one child that consolidates to the parent. In a Planning form that contains members with an implied sharing relationship, when a value is added for the parent, the child assumes the same value after the form is saved. Likewise, if a value is added for the child, the parent usually assumes the same value after a form is saved. For example, when a calculation script or load rule populates an implied share member, the other implied share member assumes the value of the member populated by the calculation script or load rule. The last value calculated or imported takes precedence. The result is the same whether you refer to the parent or the child as a variable in a calculation script. For more information have a look at: http://docs.oracle.com/cd/E17236_01/epm.1112/hp_admin_11122/ch14s11.html Now the issue we are going to talk about is: we lose data on save even when the parent is dynamic calc and has a single child. A dynamic calc parent of a single child: if we design the form with the following selection, in the data form we will find the parent below the child member, and this is by design: whenever you make a selection using commands to select all the members below a parent, the children will always appear before the parent. Let's try to enter data and save it. Now, try to change the way we selected members. Here we go. Now the question again: why this behavior? 1. Data from a Planning data form passes to Essbase row by row. 2. Because in the data form the child member appears before the parent, 3. data first goes to Essbase for the child (SingleStoreChild). 4. Then, when Planning passes the data for the parent, there was #Missing or no data, 5. which overwrites the data with #Missing. PS: As we know, dynamic calc members are calculated on the fly and are not allocated any memory in Essbase; here the parent was dynamic calc and was pointing to the same memory as the child in the background, so when Planning passed the data to Essbase for the second row it updated the child with missing data. (A little confusing – let me know if you need more explanation.) 6. As one of the solutions, just change the order of appearance of the parent and child. Cheers..!!! Rahul S. https://www.facebook.com/pages/HyperionPlanning/117320818374228

    Read the article

  • Why is multithreading often preferred for improving performance?

    - by user1849534
    I have a question about why programmers seem to love concurrency and multi-threaded programs in general. I'm considering 2 main approaches here: an async approach basically based on signals, or just an async approach as called by many papers and languages like the new C# 5.0 for example, with a "companion thread" that manages the policy of your pipeline; and a concurrent approach or multi-threading approach. I will just say that I'm thinking about the hardware here and the worst case scenario, and I have tested these 2 paradigms myself; the async paradigm is a winner, to the point that I don't get why people 90% of the time talk about multi-threading when they want to speed things up or make good use of their resources. I have tested multi-threaded programs and async programs on an old machine with an Intel quad-core that doesn't offer a memory controller inside the CPU; the memory is managed entirely by the motherboard. Well, in this case performance is horrible with a multi-threaded application – even a relatively low number of threads like 3-4-5 can be a problem; the application is unresponsive and is just slow and unpleasant. A good async approach is, on the other hand, probably not faster, but it's not worse either; my application just waits for the result and doesn't hang, it's responsive and there is much better scaling going on. I have also discovered that a context switch in the threading world is not that cheap in a real-world scenario; it's in fact quite expensive, especially when you have more than 2 threads that need to cycle and swap among each other to be computed. On modern CPUs the situation is not really that different: the memory controller is integrated, but my point is that an x86 CPU is basically a serial machine and the memory controller works the same way as on the old machine with an external memory controller on the motherboard. The context switch is still a relevant cost in my application, and the fact that the memory controller is integrated, or that newer CPUs have more than 2 cores, is no bargain for me. From what I have experienced, the concurrent approach is good in theory but not that good in practice; with the memory model imposed by the hardware, it's hard to make good use of this paradigm, and it also introduces a lot of issues ranging from the use of my data structures to the joining of multiple threads. Also, both paradigms do not offer any guarantee about when the task or the job will be done at a certain point in time, making them really similar from a functional point of view. According to the x86 memory model, why do the majority of people suggest using concurrency with C++ and not just an async approach? Also, why not consider the worst-case scenario of a computer where the context switch is probably more expensive than the computation itself?
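
    For reference, a minimal sketch of the async style the question refers to (C# 5.0 async/await, with an assumed example URL): the caller is not blocked and no pool thread sits idle while the I/O is in flight:

        using System;
        using System.Net.Http;
        using System.Threading.Tasks;

        class AsyncDemo
        {
            // The method yields at the await instead of blocking a thread;
            // execution resumes when the download completes.
            static async Task<int> GetPageLengthAsync(string url)
            {
                using (var client = new HttpClient())
                {
                    string body = await client.GetStringAsync(url);
                    return body.Length;
                }
            }

            static void Main()
            {
                Task<int> pending = GetPageLengthAsync("http://example.com");
                Console.WriteLine("Still responsive while the request is in flight...");
                Console.WriteLine("Length: " + pending.Result); // block only at the very end
            }
        }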

    Read the article

  • What economic books would you suggest for learning about economic valuation of goods and simulations thereof?

    - by Rushyo
    I'm looking to create an economic model for a game based on goods created procedurally. Every natural resource and produced good would be procedurally generated, with certain goods being assigned certain uses. Fakesium might be used for the production of Weapon A and produced from Fakesium factories which use Dilithium and Widgets as reagents, where Widgets are also the product of Foo and Bar. The problem is not creating the resources and their various production utilities - but getting the game's AI empires and merchants to (Addendum: somewhat) correctly value the goods according to their scarcity, utility and production costs. I need to create a simulation of goods which allows the various game factions to assign a common value denominator (credits) to each resource, depending on how much it's worth to that empire. I see the simulation being something like: "I have a high requirement for Weapon A. Since I don't have much of Fakesium, which is needed for Weapon A - I must have a high demand for Fakesium. If I can acquire Fakesium, devalue it. If not, increase its value - and also increase demand for Dilithium and Widgets too." This is very naive - because it may be much much cheaper for the empire to simply purchase Dilithium and Widgets directly rather than purchasing Fakesium, for example. Another example is that two resources might allow the creation of Weapon A (Fakesium and Lieron), so we'd need to consider that. I've been scratching my head over the problem and it keeps growing. By the time the player joins the world, I'd expect enough iterations of this process to have occurred that prices would have largely normalised - and would then only trigger rarely to compensate for major changes (e.g. if the player blows up the world's only Foo mine!). Could anyone suggest resources (books, largely) which outline this style of modelling, preferably in the context of simulations? Since this problem would never occur outside fantasy worlds, I figured this is probably the most likely place to find people who have encountered similar problems, and I'm sure there are people who know of good places for games developers to start looking at less specific economic theory too. Additionally, does anyone know of any developers with blogs whose games or research applications perform similar modelling? EDIT: I think I should underline that I'm not looking for optimal solutions. I'm looking to make the actors impulsive - making rudimentary decisions based on fuzzy inputs about what they care about or don't. I'm aiming to understand the problem area better, not derive answers. All the textbooks I've found seem to be about real-world economics or how to solve complex theoretical problems, neither of which are terribly relevant to the actor's decision making.
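
    To make the described impulse concrete, here is a deliberately naive sketch (my own, with hypothetical names) of the price-nudging loop: each tick, a good's price drifts up when the faction wants more than it holds and down when it has a surplus:

        using System;
        using System.Collections.Generic;

        class Market
        {
            public Dictionary<string, double> Price  = new Dictionary<string, double>();
            public Dictionary<string, double> Demand = new Dictionary<string, double>(); // units wanted
            public Dictionary<string, double> Supply = new Dictionary<string, double>(); // units held

            // One tick of naive price discovery: shortage pushes the price up,
            // surplus pushes it down, by a fixed percentage step.
            public void Tick(double rate)
            {
                foreach (string good in new List<string>(Price.Keys))
                {
                    double shortage = Demand[good] - Supply[good];
                    Price[good] = Math.Max(0.01, Price[good] * (1.0 + rate * Math.Sign(shortage)));
                }
            }
        }

        class Demo
        {
            static void Main()
            {
                var m = new Market();
                m.Price["Fakesium"] = 10; m.Demand["Fakesium"] = 50; m.Supply["Fakesium"] = 5;
                for (int i = 0; i < 20; i++) m.Tick(0.05);
                Console.WriteLine(m.Price["Fakesium"]); // drifts upward due to scarcity
                // A fuller version would also propagate Weapon A's shortage down onto
                // Dilithium and Widgets through the production recipes.
            }
        }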

    Read the article

  • What is a good design model for my new class?

    - by user66662
    I am a beginning programmer who, after trying to manage over 2000 lines of procedural php code, has now discovered the value of OOP. I have read a few books to get me up to speed on the beginning theory, but would like some advice on practical application. So, for example, let's say there are two types of content objects - an ad and a calendar event. What my application does is scan different websites (a predefined list), and, when it finds an ad or an event, it extracts the data and saves it to a database. All of my objects will share a $title and $description. However, the Ad object will have a $price and the Event object will have a $startDate. Should I have two separate classes, one for each object? Should I have a 'superclass' with the $title and $description, with two other Ad and Event classes with their own properties? The latter is at least the direction I am on now. My second question about this design is how to handle the logic that extracts the data for $title, $description, $price, and $date. For each website in my predefined list, there is a specific regex that returns the desired value for each property. Currently, I have an extremely large switch statement in my constructor which determines what website I am on, sets the regex variables accordingly, and continues on. Not only that, but now I have to repeat the logic to determine what site I am on in the constructor of each class. This doesn't feel right. Should I create another class, Algorithms, and store the logic there for each site? Should the functions to handle that logic be in this class, or specific to the classes whose properties they set? I want to take into account two things in my design: 1) I will add different content objects in the future that share $title and $description, but will have their own properties, so I want to be able to easily grow these as needed. 2) I will add more websites constantly (each with their own algorithms for data extraction) so I would like to plan for efficiently managing and working with these now. I thought about extending the Ad or Event class with a 'websiteX' class and storing its functions there. But this didn't feel right either, as now I have to manage 100s of little website-specific class files. Note: I didn't know if this was the correct site or if stackoverflow was the better choice. If so, let me know and I'll post there.
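
    A minimal sketch of the 'superclass' direction, shown in C# for illustration only; the same shape maps directly onto PHP classes, and ISiteExtractor is a hypothetical interface standing in for the per-site regex logic:

        using System;

        // Shared fields live in the base type; each content type adds only
        // what is specific to it.
        abstract class ContentItem
        {
            public string Title;
            public string Description;
        }

        class Ad : ContentItem
        {
            public decimal Price;
        }

        class CalendarEvent : ContentItem
        {
            public DateTime StartDate;
        }

        // The per-site extraction rules move out of the constructors into one
        // extractor per website, so adding a site means adding a class rather
        // than growing a switch statement.
        interface ISiteExtractor
        {
            bool CanHandle(string url);
            ContentItem Extract(string html);
        }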

    Read the article

  • How is fundamental mathematics efficiently evaluated by programming languages?

    - by Korvin Szanto
    As I get more and more involved with the theory behind programming, I find myself fascinated and dumbfounded by seemingly simple things... I realize that my understanding of the majority of fundamental processes is justified through circular logic. Q: How does this work? A: Because it does! I hate this realization! I love knowledge, and on top of that I love learning, which leads me to my question (albeit it's a broad one). Question: How are fundamental mathematical operators evaluated by programming languages? How have current methods been improved? Example var = 5 * 5; My interpretation: $num1 = 5; $num2 = 5; $num3 = 0; while ($num2 > 0) { $num3 = $num3 + $num1; $num2 = $num2 - 1; } echo $num3; This seems to be highly inefficient. With higher factors, this method is very slow, while the standard built-in method is instantaneous. How would you simulate multiplication without iterating addition? var = 5 / 5; How is this even done? I can't think of a way to literally split 5 into 5 equal parts. var = 5 ^ 5; Iterations of iterations of addition? My interpretation: $base = 5; $mod = 5; $num1 = $base; while ($mod > 1) { $num2 = 5; $num3 = 0; while ($num2 > 0) { $num3 = $num3 + $num1; $num2 = $num2 - 1; } $num1 = $num3; $mod -=1; } echo $num3; Again, this is EXTREMELY inefficient, yet I can't think of another way to do this. This same question extends to all math-related functions that are handled automagically.
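
    For what it's worth, neither hardware nor runtimes repeat addition n times; a closer software analogy is shift-and-add multiplication and exponentiation by squaring, sketched here in C# (my own illustration), which need only a logarithmic number of steps:

        using System;

        class FastArithmetic
        {
            // "Russian peasant" multiplication: O(log b) additions instead of b of them,
            // roughly what a hardware multiplier does with shifted partial sums.
            static long Multiply(long a, long b)
            {
                long result = 0;
                while (b > 0)
                {
                    if ((b & 1) == 1) result += a; // add the current shifted copy of a
                    a <<= 1;                       // a * 2
                    b >>= 1;                       // b / 2
                }
                return result;
            }

            // Exponentiation by squaring: O(log e) multiplications instead of e of them.
            static long Power(long baseValue, long exponent)
            {
                long result = 1;
                while (exponent > 0)
                {
                    if ((exponent & 1) == 1) result *= baseValue;
                    baseValue *= baseValue;
                    exponent >>= 1;
                }
                return result;
            }

            static void Main()
            {
                Console.WriteLine(Multiply(5, 5)); // 25
                Console.WriteLine(Power(5, 5));    // 3125
            }
        }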

    Read the article

  • How to appear very professional on my first freelance project?

    - by iamserious
    I need some help on appearing very professional on my first freelance project. How / what should you do to achieve this? Background: I started as a full time web developer 18 months ago; two promotions later I am now a senior software engineer. I've never had any problems with designing / developing / coding a complicated system. I thought I could use some help for Christmas and I started bidding for a project, and now I have one - from a very reputable lawyers' association in London. I have no problem dealing with the actual implementation of the system, but I have no idea how to appear professional throughout the whole process. About the project: This lawyers' association is starting distance training courses and, in addition to having a website to show off all their clients etc., they want a students' area where their students would log in, download course materials allocated to them etc., and an admin section where they assign courses to students / create new ones / upload materials etc. Questions breakdown: 1) How should I start with the requirements gathering? Is using scrum a good idea, or should I use something like the Volere template - and what should I do with it? Should I submit a copy to the client etc.? 2) How often should I meet the client? Would once a fortnight be good? 3) What are the processes / protocols that I need to follow so that they would be satisfied with me and think that I am very professional? 4) How much should I charge for the product? 5) How should I get a quote / contract / receipt for the whole project? 6) What are the steps that professional freelancers go through during the life cycle of a project? My research so far: Looking at "How much should I charge" doesn't help... I live in London, zone 1, though I have no idea how much a project of this size would cost. Help on this would be appreciated. The "How to be professional" articles talk about work / time management etc. and not the actual process, what real people would do etc. - it's like academic theory, but not practical. If this needs revising, please let me know; do not close it for whatever reason, I will edit the question or details to fit the needs. Thanks for reading a lengthy question.

    Read the article

  • How to be successful at BDD Specifications Workshops?

    - by sigo
    Today we tried to introduce BDD into our software development process by holding a specification workshop. For this workshop we had two developers, one tester, and one business analyst. The workshop lasted an hour and a half, and by the end of it we had managed to figure out some BDD scenarios for our new feature. We tried to focus on finding the scenarios we might otherwise miss, and the difficult ones.

    At the end of the workshop some people were actually unhappy with it. One developer felt he had wasted his time, as he was used to being given the scenarios directly by the business analyst and reviewing them with her. The business analyst didn't feel confident in our scenario coverage (she had a feeling we could have missed other important cases) and, more importantly, felt the workshop was also a waste of time, because she could have figured out all these scenarios by herself in a shorter period of time.

    So my question is how that kind of workshop can actually work. In theory, given a new feature to develop, you put the three 'amigos' (developer/tester/BA) in the same room so that they can collaborate on writing the different requirements for the new feature using examples. I can see all the benefits of that, especially in terms of knowledge sharing and a common vision of the product, the end goal, and "done".

    But in practice, we still think it is more cost-effective to first have the BA work on the examples on her own, and only then have the scenarios reviewed and reworked by the three 'amigos'. By having the BA work alone, we actually feel more confident that we are less likely to miss things, and we still get to review the scenarios afterwards to double-check. We don't think that simple brainstorming or deliberate discovery is enough to seriously cover all the requirements for a feature; the business analyst is actually the best person for that kind of work. What we do is review what she wrote and see whether we have a common understanding (which can then lead to rewriting some of her scenarios or adding new ones she might have missed).

    This workshop lasted an hour and a half, and by the end of it we didn't feel confident enough about what we had done. Sure, we could have spent more time on it, but honestly most people are exhausted after 90 minutes of brainstorming. So how can you make this kind of workshop work effectively in practice?

    Read the article

  • Strategy to use two different measurement systems in software

    - by Dennis
    I have an application that needs to accept and output values in both US customary units and the metric system. Right now the conversion, input, and output are a mess: you can only enter values in the US system, but you can choose the output to be US or metric, and the code that does the conversions is everywhere. So I want to organize this and put together some simple rules. Here is what I came up with.

    Rules:
    1. The user can enter values in either US or metric units, and the user interface will take care of marking this properly.
    2. All units will be stored internally as US units, since the majority of the system already stores its data that way and depends on it. It shouldn't matter, I suppose, as long as units aren't mixed.
    3. All output will be in US or metric units, depending on the user's selection/preference.

    In theory this sounds great and seems like a solution. However, one little problem I came across is this: some data stored in code or in the database already comes back as a string like 4 x 13/16" screws, meaning four 13/16-inch screws. I need the dimension to be in either US or metric units. Where exactly do I put the conversion code for this value? The string already mixes presentation and data, but the data for the field I need to populate is that whole string. I can certainly split it up into the number 4, the 13/16", the " x ", and the " screws", but the question remains: where do I put the conversion code?

    Different locations for the conversion routines:
    1) Right now the string is built in a class, and I could put the conversion code right into that class; that may be a good solution. But then, to be consistent, I would be putting conversion routines everywhere at the data source, or right after reading from the database, and the rest of the codebase would then have to deal with two systems throughout.
    2) According to the rules, my idea was to put it in the view script, i.e. the last chance to modify the value before it is shown to the user. That may be the right thing to do, but it strikes me that it may not always be the best solution. (First, it complicates the view script a bit; second, I need to do more work on the data side to split things up, or do extra parsing, as in my case above.)
    3) Another option is to do the conversion somewhere in the data-preparation step, before the view but after the data source. This strikes me as messy, and it could be the reason my codebase is in such a mess right now.

    It seems that there is no best solution. What do I do?
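    One common way to untangle this, sketched below in TypeScript, is to keep a single canonical unit internally (matching rule 2) and confine conversion to a small value type that only the presentation layer calls (rule 3), so a string like 4 x 13/16" screws is assembled from a quantity, a dimension, and a noun instead of being stored pre-formatted. The class and function names here are hypothetical, not taken from the question's codebase.

        // Hypothetical sketch: lengths are stored canonically in inches;
        // conversion happens in exactly one place, when formatting for display.

        type UnitSystem = "us" | "metric";

        class Length {
            private constructor(private readonly inches: number) {}

            static fromInches(value: number): Length {
                return new Length(value);
            }

            static fromMillimeters(value: number): Length {
                return new Length(value / 25.4); // normalize metric input at the boundary (rule 1)
            }

            // The only method that produces a unit string; view code calls this and nothing else.
            format(system: UnitSystem): string {
                if (system === "metric") {
                    return `${(this.inches * 25.4).toFixed(1)} mm`;
                }
                return `${this.inches}"`;
            }
        }

        // The mixed 4 x 13/16" screws string is rebuilt from parts, so only the
        // dimension ever goes through the conversion path.
        function describeScrews(quantity: number, size: Length, system: UnitSystem): string {
            return `${quantity} x ${size.format(system)} screws`;
        }

        const size = Length.fromInches(13 / 16);
        console.log(describeScrews(4, size, "us"));     // 4 x 0.8125" screws
        console.log(describeScrews(4, size, "metric")); // 4 x 20.6 mm screws

    Rendering 0.8125" as the fraction 13/16" is then purely a formatting concern inside format(), which keeps option 2 (convert in the view) workable without scattering conversion code through the data layer.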

    Read the article

  • Is my JavaScript/jQuery methodology good? [migrated]

    - by absentx
    I am seeking critique on what has become my normal methodology of writing JavaScript code. I have become heavily reliant on the jQuery library, but I think this has helped me learn the native language better too. Anyway, please critique the following style of JavaScript coding. Buried in it are a lot of questions of scope; if you could point out the strengths and weaknesses of this style I would appreciate it.

        var critique = {
            start: function(){
                globalness = 'GLOBAL-GLOBAL'; // Available to all critique's methods
                var notglobalness = 'LOCAL-LOCAL'; // Only available to critique's start method
                // Am I using the "method" terminology properly here??

                $('#stuff').on('click', 'a.closer-target', function(){
                    $target = $(this);
                    if($target.hasClass('active')){
                        $target.removeClass('active');
                    }
                    else{
                        $target.addClass('active');
                        critique.madness($target);
                    }
                })

                console.log(notglobalness + ': at least I am useful at home');
                console.log('note here that: ' + notglobalness + ' is no longer available after this point, lets continue on:');
                critique.madness(notglobalness);
            },

            madness: function($e){
                // Do a bunch of awesomeness with $e,
                // but continue to keep it separate because you think it's best to keep things isolated.
                // Send to the next function when complete here
                console.log('Here is globalness, which is still available from the start method of critique!! ' + globalness);
                console.log('Let us see if the globalness carries on to a new var object!!');
                console.log('The locally isolated variable of NOTGLOBALNESS is available here, because it was passed to this method. Let us show it:' + $e);
                carryOn.start();
            }
        } // end critique

        var carryOn = {
            start: function(){
                console.log('any chance critique.globalness will work here??? lets see: ' + globalness);
                console.log('it absolutely does');
            }
        }

        $(document).ready(critique.start);

    (I always struggle with which of the Stack Exchange sites is best for "questions of theory" like this, but I think Programmers is the best; if not, as usual, a mod will move it.)
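    A brief illustration of the main scope point in the snippet above: globalness is assigned without var, so outside strict mode it becomes a property of the global object, which is exactly why carryOn.start can read it. One common alternative is to keep shared state in module scope and pass values between objects explicitly, as in this rough TypeScript sketch; the names are invented for the example, not taken from the post.

        // Sketch: shared state lives in module (file) scope instead of leaking onto
        // the global object, and anything another object needs is passed as an argument.
        let sharedState = ""; // visible to everything in this file, but not a global

        const critique = {
            start(): void {
                sharedState = "MODULE-LOCAL";
                const localOnly = "LOCAL-LOCAL"; // visible only inside start()
                critique.madness(localOnly);     // hand the value over explicitly
            },

            madness(received: string): void {
                console.log(sharedState);        // fine: same module scope
                console.log(received);           // fine: passed in as a parameter
                carryOn.start(sharedState);      // no hidden global needed
            },
        };

        const carryOn = {
            start(state: string): void {
                console.log(`carryOn received: ${state}`); // works because the value was passed, not looked up globally
            },
        };

        critique.start();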

    Read the article
