Search Results

Search found 857 results on 35 pages for 'scientific notation'.

Page 25/35

  • parsing/matching string occurrence in C

    - by David
    I have the following string: const char *str = "\"This is just some random text\" 130 28194 \"Some other string\" \"String 3\"" I would like to get the integer 28194; of course the integer varies, so I can't just do strstr("28194"). So I was wondering what would be a good way to get that part of the string. I was thinking of using #include <regex.h>, since I already have a procedure to match regexps, but I'm not sure what the regexp would look like using the POSIX-style notation ([:alpha:]+[:digit:]?) and whether performance will be an issue. Or would it be better to use strchr/strstr? Any ideas will be appreciated.
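
    A minimal non-regex sketch, assuming the wanted number is always the second one in the string (nth_integer and its position argument are illustrative, not from the question):

        #include <ctype.h>
        #include <stdio.h>
        #include <stdlib.h>

        /* Return the n-th run of digits as a long, or -1 if there are fewer
           than n numbers. Caveat: digits inside the quoted strings count too. */
        long nth_integer(const char *s, int n)
        {
            while (*s) {
                if (isdigit((unsigned char)*s)) {
                    char *end;
                    long v = strtol(s, &end, 10);
                    if (--n == 0)
                        return v;
                    s = end;        /* skip past this number */
                } else {
                    s++;
                }
            }
            return -1;
        }

        int main(void)
        {
            const char *str = "\"This is just some random text\" 130 28194 "
                              "\"Some other string\" \"String 3\"";
            printf("%ld\n", nth_integer(str, 2)); /* prints 28194 */
            return 0;
        }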

    Read the article

  • idiomatic way to take groups of n items from a list in Python?

    - by Wang
    Given a list A = [1, 2, 3, 4, 5, 6], is there any idiomatic (Pythonic) way to iterate over it as though it were B = [(1, 2), (3, 4), (5, 6)], other than indexing? That feels like a holdover from C: for a1, a2 in [(A[i], A[i+1]) for i in range(0, len(A), 2)]: I can't help but feel there should be some clever hack using itertools or slicing or something. (Of course, two at a time is just an example; I'd like a solution that works for any n.) Edit: related: http://stackoverflow.com/questions/1162592/iterate-over-a-string-2-or-n-characters-at-a-time-in-python but even the cleanest solution there (accepted, using zip) doesn't generalize well to higher n without a list comprehension and *-notation.
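
    A couple of hedged possibilities, built on the iterator-reuse idiom (grouped and grouper are illustrative names; grouper is the recipe from the itertools docs; Python 2 shown, matching the question's era):

        from itertools import izip_longest  # zip_longest on Python 3

        A = [1, 2, 3, 4, 5, 6]

        def grouped(iterable, n):
            # zip consumes the *same* iterator n times, so each tuple takes
            # n consecutive items; a trailing partial group is dropped
            return zip(*[iter(iterable)] * n)

        def grouper(iterable, n, fillvalue=None):
            # itertools-docs recipe: keeps the partial group, padded
            return izip_longest(*[iter(iterable)] * n, fillvalue=fillvalue)

        print grouped(A, 2)        # [(1, 2), (3, 4), (5, 6)]
        print grouped(A, 3)        # [(1, 2, 3), (4, 5, 6)]
        print list(grouper(A, 4))  # [(1, 2, 3, 4), (5, 6, None, None)]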

    Read the article

  • Ant Exec environment var

    - by Mike
    I have a problem where I don't want to have to call a setEnv.sh file before I call my Ant target that calls an exec task. Right now I have a way to save the environment variables in a setenv.properties file in key=value notation. The exec task for some reason does not see the variables that are set in the .properties file. (I know I could use the <env> tag, but setenv.properties is dynamically generated.) setenv.properties: HELLO=XYZ Part of my build.xml: <property file="setenv.properties"/> <target name="test" depends="setEnv"> <exec executable="/bin/ksh" newenvironment="false"> test.sh: echo ${HELLO} Any thoughts?
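
    For what it's worth, a minimal sketch of the usual workaround: <exec> never sees Ant properties as environment variables, so each value has to be handed to the child process explicitly with a nested <env> element (the mapping below is assumed from the question's files):

        <property file="setenv.properties"/>
        <target name="test">
          <exec executable="/bin/ksh" newenvironment="false">
            <arg value="test.sh"/>
            <!-- ${HELLO} is read from setenv.properties by <property>;
                 without this element the child process never sees it -->
            <env key="HELLO" value="${HELLO}"/>
          </exec>
        </target>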

    Read the article

  • Spring @Value annotation not using defaults when property is not present

    - by garyj
    Hi All, I am trying to use the @Value annotation in the parameters of a constructor as follows: @Autowired public StringEncryptor( @Value("${encryptor.password:\"\"}") String password, @Value("${encryptor.algorithm:\"PBEWithMD5AndTripleDES\"}") String algorithm, @Value("${encryptor.poolSize:10}") Integer poolSize, @Value("${encryptor.salt:\"\"}") String salt) { ... } When the properties file is present on the classpath, the properties are loaded perfectly and the test executes fine. However, when I remove the properties file from the classpath, I would have expected that the default values would be used, for example poolSize set to 10 or algorithm to PBEWithMD5AndTripleDES, but this is not the case. Running the code through a debugger (which would only work after changing @Value("${encryptor.poolSize:10}") Integer poolSize to @Value("${encryptor.poolSize:10}") String poolSize, as it was causing NumberFormatExceptions), I find that the defaults are not being set and the parameters are in the form of: poolSize = ${encryptor.poolSize:10} or algorithm = ${encryptor.algorithm:"PBEWithMD5AndTripleDES"} rather than the expected poolSize = 10 or algorithm = "PBEWithMD5AndTripleDES". Based on SPR-4785 the notation such as ${my.property:myDefaultValue} should work. Yet it's not happening for me! Thank you
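
    A hedged guess at the cause, since the symptoms match: default values in ${...} placeholders are only applied when a placeholder configurer actually processes the bean definitions; with no such bean registered, the raw literal is injected untouched. A minimal sketch of the XML fix (the filename is assumed, not from the question):

        <!-- goes in the application context; requires the
             http://www.springframework.org/schema/context namespace.
             ignore-resource-not-found lets the file be missing, so the
             ":default" part of each placeholder kicks in. -->
        <context:property-placeholder
            location="classpath:encryptor.properties"
            ignore-resource-not-found="true"/>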

    Read the article

  • Remote offscreen rendering

    - by redmoskito
    My research lab recently added a server that has a beefy NVIDIA graphics card, which we would like to use to do scientific computations. Since it isn't a workstation, we'll have to run our jobs remotely, over an ssh connection. Most of our applications require doing OpenGL rendering to an offscreen buffer, then doing image analysis on the result in CUDA. My initial investigation suggests that X11 forwarding is a bad idea, because OpenGL rendering would occur on the client machine (or rather the X11 server, what a confusing naming convention!) and would suffer network bottlenecks when sending our massive textures. We will never need to display the output, so it seems like X11 forwarding shouldn't be necessary, but OpenGL needs $DISPLAY set to something valid or our applications won't run. I'm sure render farms exist that do this, but how is it accomplished? I think this is probably a simple X11 configuration issue, but I'm too unfamiliar with it to know where to start. We're running Ubuntu Server 10.04, with no gdm, gnome, etc. installed; the xserver-xorg package is installed, however.
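
    One hedged starting point: for pure software (Mesa) rendering a virtual framebuffer is enough, while GPU rendering on the NVIDIA card needs a real X server started headless. Display numbers and the binary name below are placeholders:

        # Route 1: virtual framebuffer -- no monitor needed, but CPU-only rendering
        sudo apt-get install xvfb
        Xvfb :1 -screen 0 1280x1024x24 &
        DISPLAY=:1 ./my_gl_app    # my_gl_app is a hypothetical GL binary

        # Route 2: real X server on the NVIDIA card, no desktop session
        # (may still need an xorg.conf generated for the nvidia driver)
        sudo X :0 &
        DISPLAY=:0 ./my_gl_app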

    Read the article

  • Postfix and right-associative operators in LR(0) parsers

    - by Ian
    Is it possible to construct an LR(0) parser that could parse a language with both prefix and postfix operators? For example, if I had a grammar with the + (addition) and ! (factorial) operators with the usual precedence, then 1+3! should evaluate as 1 + 3! = 1 + 6 = 7; but surely if the parser were LR(0), then when it had 1+3 on the stack it would reduce rather than shift? Also, do right-associative operators pose a problem? For example, 2^3^4 should be 2^(3^4), but again, when the parser has 2^3 on the stack, how would it know whether to reduce or shift? If this isn't possible, is there still a way to use an LR(0) parser, possibly by converting the input into Polish or Reverse Polish notation or adding brackets in the appropriate places? Would this be done before, during or after the lexing stage?

    Read the article

  • Scala :: operator, how it works?

    - by Felix
    Hello guys, in Scala I can make a case class case class Foo(x:Int) and then put it in a list like so: List(Foo(42)) Now, nothing strange here. The following is strange to me. The operator :: is a function on a list, right? Any one-argument function in Scala can be called with infix notation; an example is 1 + 2, where + is a function on the object Int. But the class Foo I just defined does not have the :: operator, so how is the following possible: Foo(40) :: List(Foo(2))? In Scala 2.8 RC1, I get the following output from the interactive prompt: scala> case class Foo(x:Int) defined class Foo scala> Foo(40) :: List(Foo(2)) res2: List[Foo] = List(Foo(40), Foo(2)) scala> I can go on and use it, but if someone can explain it I will be glad :)
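
    A sketch of what the compiler does here (standard Scala desugaring, not specific to 2.8): any method whose name ends in a colon is right-associative in infix position, so it is looked up on the right-hand operand rather than the left:

        case class Foo(x: Int)

        val a = Foo(40) :: List(Foo(2))   // infix sugar for ...
        val b = List(Foo(2)).::(Foo(40))  // ... a plain method call on the List

        // :: ("cons") is defined on List[A] itself, so Foo needs no operator
        // of its own; both lines build List(Foo(40), Foo(2))
        println(a == b)                   // true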

    Read the article

  • Are there any benchmarks showing difference between hardware virtualisation enabled/disabled?

    - by Wil
    I have a 13" sub-laptop/large-netbook; it has an AMD Athlon Neo X2 L335, and I chose this one because it supports hardware virtualisation. In the end, I hardly do any virtualisation on it; however, when I do... it is fast. To my shock, I went into the BIOS and saw that virtualisation was disabled! I turned this on and I see no speed difference, or at least none that I can tell. I do not have time to do a full set of benchmarks, and I run quite a bit of software on the host, so it wouldn't be scientific. I have searched quite a few places and I just can not find any benchmarks showing the difference with the virtualisation bit enabled/disabled on the same hardware. Does anyone have any benchmarks they have seen that they can share? In addition, I know there was an uproar a while ago as Sony disabled hardware virtualisation on some models and only offered it in their higher models as a premium feature; however, apart from forcing an up-sell, are there any benefits to having it disabled, e.g. battery/heat? I just can't find any information and can't work out why it would be disabled by default. Edit: To add, the only thing I can find is that without it you can not perform x64 virtualisation as fast. This is the only downside I can find. However, if this is the only difference, then I am still interested in the second part of the question: why offer the option to disable it?

    Read the article

  • Library for Dataflow in C

    - by msutherl
    How can I do dataflow (pipes and filters, stream processing, flow-based) in C? And not with UNIX pipes. I recently came across stream.py. Streams are iterables with a pipelining mechanism to enable data-flow programming and easy parallelization. The idea is to take the output of a function that turns an iterable into another iterable and plug that in as the input of another such function. While you can already do this using function composition, this package provides an elegant notation for it by overloading the >> operator. I would like to duplicate a simple version of this kind of functionality in C. I particularly like the overloading of the >> operator to avoid function-composition mess. Wikipedia points to this hint from a Usenet post in 1990. Why C? Because I would like to be able to do this on microcontrollers and in C extensions for other high-level languages (Max, Pd*, Python). * (ironic, given that Max and Pd were written, in C, specifically for this purpose; I'm looking for something barebones)
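
    For what it's worth, a bare-bones sketch of one shape this can take in plain C: pull-style stages chained through function pointers. Every name here is invented for illustration; C has no operator overloading, so the composition stays explicit:

        #include <stdio.h>

        typedef struct stage {
            int (*pull)(struct stage *self, int *out); /* 1 = item, 0 = done */
            struct stage *upstream;
            int state;
        } stage;

        static int counter_pull(stage *s, int *out) {  /* source: emits 0..4 */
            if (s->state >= 5) return 0;
            *out = s->state++;
            return 1;
        }

        static int square_pull(stage *s, int *out) {   /* filter: x -> x*x */
            int v;
            if (!s->upstream->pull(s->upstream, &v)) return 0;
            *out = v * v;
            return 1;
        }

        int main(void) {
            stage src = { counter_pull, NULL, 0 };
            stage sq  = { square_pull, &src, 0 };      /* src | sq */
            int v;
            while (sq.pull(&sq, &v))
                printf("%d ", v);                      /* 0 1 4 9 16 */
            putchar('\n');
            return 0;
        }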

    Read the article

  • Why Use java.lang.reflect.Array For Anything Other Than Array Creation?

    - by dimo414
    The Java class java.lang.reflect.Array provides a set of tools for creating an array dynamically. In addition to that, however, it has a whole set of methods for accessing an array (get, set, and length). I don't understand the point of this, since you can (and presumably would) cast your dynamically generated array to an array type upon creation, which means you can use the normal array-access (bracket-notation) functionality. In fact, looking at the source code you can see that this is all the class does: cast the array, and throw an exception if the cast fails. So what's the point/usefulness of all of these extra methods?
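
    One answer, sketched: the cast only works when the component type is known at compile time. An int[] is not an Object[], so for an array received as a plain Object the reflective accessors are the only way in:

        import java.lang.reflect.Array;

        public class ArrayDemo {
            // Walks ANY array, including primitive ones, without knowing
            // its component type at compile time.
            static void dump(Object arr) {
                int len = Array.getLength(arr);
                for (int i = 0; i < len; i++) {
                    System.out.println(Array.get(arr, i)); // boxes primitives
                }
            }

            public static void main(String[] args) {
                dump(new int[] {1, 2, 3});       // a cast to Object[] would fail here
                dump(new String[] {"a", "b"});
            }
        }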

    Read the article

  • Making a PHP object behave like an array?

    - by Mark Biek
    I'd like to be able to write a PHP class that behaves like an array and uses normal array syntax for getting and setting. For example (where Foo is a PHP class of my making): $foo = new Foo(); $foo['fooKey'] = 'foo value'; echo $foo['fooKey']; I know that PHP has the __get and __set magic methods, but those don't let you use array notation to access items. Python handles it by overloading __getitem__ and __setitem__. Is there a way to do this in PHP? If it makes a difference, I'm running PHP 5.2.
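
    There is: the built-in ArrayAccess interface (present throughout PHP 5, so fine on 5.2) plays the role of Python's __getitem__/__setitem__. A minimal sketch:

        <?php
        class Foo implements ArrayAccess {
            private $items = array();

            public function offsetExists($key) { return isset($this->items[$key]); }
            public function offsetGet($key)    { return $this->items[$key]; }
            public function offsetSet($key, $value) {
                if ($key === null) { $this->items[] = $value; }   // $foo[] = ...
                else               { $this->items[$key] = $value; }
            }
            public function offsetUnset($key)  { unset($this->items[$key]); }
        }

        $foo = new Foo();
        $foo['fooKey'] = 'foo value';
        echo $foo['fooKey']; // foo value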

    Read the article

  • Interface naming convention

    - by Frederick
    This is a subjective thing of course, but I don't see anything positive in prefixing interface names with an 'I'. To me, Thing is practically always more readable than IThing. My question is, why does this convention exist then? Sure, it makes it easier to tell interfaces from other types. But wouldn't that argument extend to retaining the Hungarian notation, which is now widely censured? What's your argument for that awkward 'I'? Or, more importantly, what could be Microsoft's?

    Read the article

  • g++: Use ZIP files as input

    - by Notinlist
    We use the Boost library on our side. It consists of a huge number of files which never change, and only a tiny portion of it is used. We swap the whole boost directory if we are changing versions. Currently we have the Boost sources in our SVN, file by file, which makes checkout operations very slow, especially on Windows. It would be nice if there were a notation/plugin to address C++ files inside ZIP files, something like: // @ZIPFS ASSIGN 'boost' 'boost.zip/boost' #include <boost/smart_ptr/shared_ptr.hpp> Is there any support for compiler hooks in g++? Is there any effort regarding ZIP support? Other ideas?

    Read the article

  • JSON Syntax, what is this?

    - by danp
    I understand the concepts of JSON OK, but after starting to use eBay's API, I came across a notation which I've not seen before, and was wondering if anyone could explain what's going on with it? { "findItemsByKeywordsResponse": [ { "ack": [ "Success" ], "version": [ "1.5.0" ], "timestamp": [ "2010-06-16T08:42:21.468Z" ], "searchResult": [ { "@count": "0" } ], "paginationOutput": [ { "pageNumber": [ "0" ], "entriesPerPage": [ "10" ], "totalPages": [ "0" ], "totalEntries": [ "0" ] } ] } ] } What's the "@count" thing? I noticed that when I reference it in Chrome it throws an error, but in Firefox it doesn't. JSON Lint reports it's valid, as I'd expect... ;)
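
    A sketch of the likely explanation: "@count" is a perfectly legal JSON key but not a legal JavaScript identifier, so dot notation fails and bracket notation is required (the object below is trimmed down from the payload above):

        // "@count" is a legal JSON key but not a legal JS identifier
        var result = JSON.parse('{"searchResult":[{"@count":"0"}]}');

        // result.searchResult[0].@count   -> SyntaxError in any engine
        var count = result.searchResult[0]["@count"]; // "0" -- works everywhere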

    Read the article

  • Reflection for Class of generic parameter in Java?

    - by hatboysam
    Imagine the following scenario: class MyClass extends OtherClass<String>{ String myName; //Whatever } class OtherClass<T> { T myfield; } I am analyzing MyClass using reflection, specifically (MyClass.class).getDeclaredFields(), in which case I get the following fields (and Types, using getType() of the Field): myName --> String myField --> T I want to get the actual Type for T, which is known at runtime due to the explicit "String" in the extends notation. How do I go about getting the non-generic type of myField?
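
    A hedged sketch of the usual trick: because MyClass fixes T in its extends clause, the String argument is recorded in the class file, survives erasure, and can be read back through getGenericSuperclass():

        import java.lang.reflect.ParameterizedType;
        import java.lang.reflect.Type;

        class OtherClass<T> { T myField; }
        class MyClass extends OtherClass<String> { String myName; }

        public class GenericReflect {
            public static void main(String[] args) {
                // MyClass's superclass is the parameterized OtherClass<String>
                ParameterizedType superType =
                    (ParameterizedType) MyClass.class.getGenericSuperclass();
                Type t = superType.getActualTypeArguments()[0];
                System.out.println(t); // class java.lang.String
            }
        }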

    Read the article

  • Prove 2^sqrt(log(n))= O(n^a)

    - by user1830621
    I posted a question similar to this earlier, although it seemed like it was much easier. I know the underlying principle of Big-O notation says that to prove that 2^sqrt(log(n)) is O(n^a), there must exist a positive value c for which c*(n^a) >= 2^sqrt(log(n)) for all values n >= N. c*n^a >= 2^sqrt(log(n)) c >= 2^sqrt(log(n)) / n^a This number will always be positive so long as n > 0. I suppose I could make N = 1 just to be safe. c = 2^sqrt(log(n)) / n^a N = 1 c*n^a = 2^sqrt(log(n)) <= 2^log(n) for all values of n >= 1 Now, I know this is incorrect, because I could just as easily claim that the function 2^sqrt(log(n)) is O(n)...
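
    For reference, a sketch of the missing step: c must be a constant, not a function of n, and c = 1 suffices (logs taken base 2, any fixed a > 0):

        \[
          2^{\sqrt{\log n}} \le n^{a} = 2^{a \log n}
          \iff \sqrt{\log n} \le a \log n
          \iff 1 \le a \sqrt{\log n},
        \]
        \[
          \text{which holds whenever } \log n \ge 1/a^{2},
          \text{ i.e. for all } n \ge N = 2^{1/a^{2}} \text{ with } c = 1.
        \]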

    Read the article

  • ASP.NET Web API - Screencast series Part 2: Getting Data

    - by Jon Galloway
    We're continuing a six-part series on ASP.NET Web API that accompanies the getting started screencast series. This is an introductory screencast series that walks through from File / New Project to some more advanced scenarios like Custom Validation and Authorization. The screencast videos are all short (3-5 minutes) and the sample code for the series is both available for download and browsable online. I did the screencasts, but the samples were written by the ASP.NET Web API team. In Part 1 we looked at what ASP.NET Web API is, why you'd care, did the File / New Project thing, and did some basic HTTP testing using browser F12 developer tools. This second screencast starts to build out the Comments example - a JSON API that's accessed via jQuery. This sample uses a simple in-memory repository. At this early stage, the GET /api/values/ just returns an IEnumerable<Comment>. In part 4 we'll add on paging and filtering, and it gets more interesting. The get-by-id (e.g. GET /api/values/5) case is a little more interesting. The method just returns a Comment if the Comment ID is valid, but if it's not found we throw an HttpResponseException with the correct HTTP status code (HTTP 404 Not Found). This is an important thing to get - HTTP defines common response status codes, so there's no need to implement any custom messaging here - we tell the requestor that the resource they requested wasn't there. public Comment GetComment(int id) { Comment comment; if (!repository.TryGet(id, out comment)) throw new HttpResponseException(HttpStatusCode.NotFound); return comment; } This is great because it's standard, and any client should know how to handle it. There's no need to invent custom messaging here, and we can talk to any client that understands HTTP - not just jQuery, and not just browsers. But it's crazy easy to consume an HTTP API that returns JSON via jQuery. The example uses Knockout to bind the JSON values to HTML elements, but the thing to notice is that calling into this /api/comments is really simple, and the return from the $.get() method is just JSON data, which is really easy to work with in JavaScript (since JSON stands for JavaScript Object Notation and is the native serialization format in JavaScript). $(function() { $("#getComments").click(function () { // We're using a Knockout model. This clears out the existing comments. viewModel.comments([]); $.get('/api/comments', function (data) { // Update the Knockout model (and thus the UI) with the comments received back // from the Web API call. viewModel.comments(data); }); }); }); That's it! Easy, huh? In Part 3, we'll start modifying data on the server using POST and DELETE.

    Read the article

  • How do I (tactfully) tell my project manager or lead developer that the project's codebase needs serious work?

    - by Adam Maras
    I just joined a (relatively) small development team that's been working on a project for several months, if not a year. As with most developers joining a project, I spent my first couple of days reviewing the project's codebase. The project (a medium- to large-sized ASP.NET WebForms internal line-of-business application) is, for lack of a more descriptive term, a disaster. There are three immediately noticeable problems with the coding standards: The standard is very loose. It describes more of what not to do (don't use Hungarian notation, etc.) than what to do. The standard isn't always followed. There are inconsistencies with the code formatting everywhere. The standard doesn't follow Microsoft's style guidelines. In my opinion, there's no value in deviating from the guidelines that were set forth by the developer of the framework and the largest contributor to the language specification. As for point 3, perhaps it bothers me more because I've taken the time to get my MCPD with a focus on web applications (specifically, ASP.NET). I'm also the only Microsoft Certified Professional on the team. Because of what I learned in all of my schooling, self-teaching, and on-the-job learning (including my preparation for the certification exams), I've also spotted several instances in the project's code where things are simply not done in the best way. I've only been on this team for a week, but I see so many issues with their codebase that I imagine I'll be spending more time fighting with what's already written to do things in "their way" than I would if I were working on a project that, for example, followed more widely accepted coding standards, architecture patterns, and best practices. This brings me to my question: Should I (and if so, how do I) propose to my project manager and team lead that the project needs to be majorly renovated? I don't want to walk into their office, waving my MCTS and MCPD certificates around, saying that their project's codebase is crap. But I also don't want to have to stay silent and write kludgey code atop their kludgey code, because I actually want to write quality software and I want the end product to be stable and easily maintainable.

    Read the article

  • TechEd 2010 Day Three: The Database Designer (Isn't)

    - by BuckWoody
    Yesterday at TechEd 2010 here in New Orleans I worked the front booth, answering general SQL Server questions for the masses. I was actually a little surprised to find that most of the questions I got were from folks who wanted to know more about StreamInsight and Master Data Services. In past conferences I've been asked a lot of "free consulting" questions about problems folks have had with older products. I don't mind that a bit - in fact, I'm always happy to help in any way I can. But this time people are really interested in the new features in the product, and I like that they are thinking ahead, not just having to solve problems in production. My presentation was on "Database Design in an Hour". We had the usual fun, and Sideshow Bob made an appearance - I kid you not. The guy in the back of the room looked just like Sideshow Bob, so I quickly held a "best hair" contest, and he won. During the presentation, I explained the tools you can use to design databases. I also explained that the "Database Designer" tool in SQL Server Management Studio (SSMS) isn't truly a designer - it uses non-standard notation, doesn't have a metadata dictionary, and worst of all, it works at the physical level. In other words, whatever you do in SSMS will automatically change the field/table/relationship structures in the database. We fixed this in SSMS 2008 and higher by adding an option to block that, but the tool is not a good design tool nonetheless. To be fair, no one I know of at Microsoft recommends that it is - but I was shocked to hear so many developers in the room defending it as a good tool. I think the main issue for someone who doesn't work with relational systems a great deal is that it can be difficult to figure out foreign keys. The syntax makes them look "backwards", so it's just easier to grab a field and place it on the table you want to point to. There are options. You can download a couple of free tools (CA has a community edition of ERwin, Quest has one, and Embarcadero also has one), and if you design more than one or two databases a year, it may be worth buying a true design tool. For years I used Visio, but it was changed so that it doesn't forward-engineer (create the DDL) any more, so it isn't a true design tool either. So investigate those free and not-so-free tools. You'll find they help you in your job - but stay away from the Database Designer in SSMS. Or I'll send Sideshow Bob over there to straighten you out.

    Read the article

  • Characteristics of a Web service that promote reusability and change

    Characteristics of a Web service that promote reusability and change: standardized data exchange formats (XML, JSON), standardized communication protocols (SOAP, REST), and promotion of loosely coupled systems.

    Standardized data exchange formats (XML, JSON): W3.org defines Extensible Markup Language (XML) as a simple text format derived from SGML. XML was designed to solve challenges found in large-scale electronic publishing, and it plays an important role in the exchange of data, primarily focusing on data exchange on the web. JavaScript Object Notation (JSON) is a human-readable, text-based standard designed for data interchange. This format is used for serializing and transmitting data over a network connection in a structured format. The primary use of JSON is to transmit data between a server and a web application. JSON is an alternative to XML.

    Standardized communication protocols (SOAP, REST): W3Schools.com defines SOAP as a simple XML-based protocol that lets applications exchange data over HTTP. SOAP provides a way to communicate between applications running on different operating systems, with different technologies and programming languages. In 2007, Stefan Tilkov defined Representational State Transfer (REST) as a set of principles that outlines how Web standards are supposed to be used; using REST in an application ensures that it exploits the Web's architecture to its benefit.

    Promotes loosely coupled systems: "Loose coupling is an approach to interconnecting the components in a system or network so that those components, also called elements, depend on each other to the least extent practicable. Coupling refers to the degree of direct knowledge that one element has of another." (TechTarget.com, 2007) "A loosely coupled system can be easily broken down into definable elements. The extent of coupling in a system can be measured by mapping the maximum number of element changes that can occur without adverse effects. Examples of such changes include adding elements, removing elements, renaming elements, reconfiguring elements, modifying internal element characteristics and rearranging the way in which elements are interconnected." (TechTarget.com, 2007)

    References: W3C. (2011). Extensible Markup Language (XML). Retrieved from W3.org: http://www.w3.org/XML/ | W3Schools.com. (2011). SOAP Introduction. Retrieved from W3Schools.com: http://www.w3schools.com/soap/soap_intro.asp | Tilkov, Stefan. (2007). A Brief Introduction to REST. Retrieved from Infoq.com: http://www.infoq.com/articles/rest-introduction | TechTarget.com. (2011). loose coupling. Retrieved from TechTarget.com: http://searchnetworking.techtarget.com/definition/loose-coupling

    Read the article

  • Tools for Enterprise Architects: OmniGraffle for iPad?

    - by pat.shepherd
    Well, I have to admit to being a bit of an Apple fan and, of course, an early adopter of gadgets and technology in general. So, when FedEx showed up with my iPad 3G last week, I was a kid in a candy store. One of the apps that my "buy finger" was hovering over for a while (like all of 3 days) was OmniGraffle for the iPad. I imagined that it would be very cool to use this with a customer's EAs to sketch out Business, Application, Information and Technology architectures. Instead of using the blackboard, this seemed to offer promise as a whiteboarding tool with obvious benefits over a traditional whiteboard. I figured I'd get a VGA adapter, plug it into the customer's projector and off we would go with a great JAD tool. The touch-pad approach offered an additional hands-on kind of feel. So, I made the $49.99 purchase + the $29.99 VGA adapter and tried to give it a go. Well, I was both pleasantly and unpleasantly surprised. It is both powerful and easy to use. There are great stencils included for shapes, software icons, Visio shapes, and even UML notation. There is even a freehand tool that works well. I created some diagrams pretty quickly; the one below was just a test and took all of 10 minutes to do. The only problem was that OmniGraffle does not recognize the VGA output, so I was stopped dead in my tracks, as it were. My use case was as a collaborative diagramming tool with other architects, though I can still use it offline. I called OmniGraffle and they said that VGA support is on the feature request list, so hopefully, in a short amount of time, I can use the tool as I envisioned.
    Review:
    - Is it fun? Yes
    - Is it useful? Yes
    - Does it show promise? Yes
    - Did the VGA output work? No
    - File/diagram formats: PDF, OmniGraffle proprietary, image
    Quick Sample: OmniGraffle for iPad - Products - The Omni Group

    Read the article

  • C# Dev - I've tried Lisps, but I don't get it.

    - by Jonathan Mitchem
    After a few months of learning about and playing with Lisps, both CL and a bit of Clojure, I'm still not seeing a compelling reason to write anything in them instead of C#. I would really like some compelling reasons, or for someone to point out that I'm missing something really big. The strengths of a Lisp (per my research): Compact, expressive notation - More so than C#, yes... but I seem to be able to express those ideas in C# too. Implicit support for functional programming - C# with LINQ extension methods: mapcar = .Select( lambda ), mapcan = .Select( lambda ).Aggregate( (a,b) => a.Union(b) ), car/first = .First(), cdr/rest = .Skip(1), etc. Lambda and higher-order function support - C# has this, and the syntax is arguably simpler: "(lambda (x) ( body ))" versus "x => ( body )"; "#(" with "%", "%1", "%2" is nice in Clojure. Method dispatch separated from the objects - C# has this through extension methods. Multimethod dispatch - C# does not have this natively, but I could implement it as a function call in a few hours. Code is Data (and Macros) - Maybe I haven't "gotten" macros, but I haven't seen a single example where the idea of a macro couldn't be implemented as a function; it doesn't change the "language", but I'm not sure that's a strength. DSLs - Can only do it through function composition... but it works. Untyped "exploratory" programming - for structs/classes, C#'s autoproperties and "object" work quite well, and you can easily escalate into stronger typing as you go along. Runs on non-Windows hardware - Yeah, so? Outside of college, I've only known one person who doesn't run Windows at home, or at least a VM of Windows on *nix/Mac. (Then again, maybe this is more important than I thought and I've just been brainwashed...) The REPL for bottom-up design - OK, I admit this is really, really nice, and I miss it in C#. Things I'm missing in a Lisp (due to a mix of C#, .NET, Visual Studio, ReSharper): Namespaces - even with static methods, I like to tie them to a "class" to categorize their context (Clojure seems to have this, CL doesn't seem to). Great compile- and design-time support - the type system allows me to determine "correctness" of the data structures I pass around; anything misspelled is underlined in real time, so I don't have to wait until runtime to know; code improvements (such as using an FP approach instead of an imperative one) are autosuggested. GUI development tools - WinForms and WPF (I know Clojure has access to the Java GUI libraries, but they're entirely foreign to me). GUI debugging tools - breakpoints, step-in, step-over, value inspectors (text, XML, custom), watches, debug-by-thread, conditional breakpoints, call-stack window with the ability to jump to the code at any level in the stack. (To be fair, my stint with Emacs+SLIME seemed to provide some of this, but I'm partial to the VS GUI-driven approach.) I really like the hype surrounding Lisps and I gave them a chance. But is there anything I can do in a Lisp that I can't do as well in C#? It might be a bit more verbose in C#, but I also have autocomplete. What am I missing? Why should I use Clojure/CL?

    Read the article

  • Was I wrong about JavaScript?

    - by jboyer
    Yes, I was. Recently, I've taken a good hard look at JavaScript. I've used it before, but mostly in the capacity of web design. Using jQuery to make your web page do cool stuff is different than really creating a JavaScript application using all of the language constructs. What I'm finding as I use it more is that I may have been wrong in my assumptions about it. Let me explain. I enjoyed doing cool stuff with jQuery, but my limited experience with JavaScript as a language, coupled with the bad things I had heard about it, led me to not have any real interest in it. However, JavaScript is ubiquitous on the web, and if I want to do any web development, which I do, I need to learn it. So here I am, diving deep into the language with the help of the JavaScript Fundamentals training course at Pluralsight (great training for a low price) and the book JavaScript: The Good Parts by Douglas Crockford. Now, there are certainly parts of JavaScript that are bad. I think these are well known by any developer that uses it. The parts that I feel are especially egregious are the following: the global object; null vs. undefined; truthy and falsy; limited (nearly nonexistent) scoping; '==' and '===' (I just don't get the reason for coercion). However, what I am finding hiding under the covers of the bad things is a good language. I am finding that I legitimately enjoy JavaScript. This I was not expecting. I'm not going to go into a huge dissertation on what I like about it, but some things include: object literal notation; dynamic typing; the functional style (JavaScript: The Good Parts describes it as Lisp in C's clothing); JSON (better than XML). There are parts of JavaScript that seem strange to OOP developers like myself. However, just because it is different or seems strange does not mean it is bad. Some differences are quite interesting and useful. I feel that it is important for developers to challenge their assumptions and also to be able to admit when they are wrong on a topic. Many different situations can arise that lead to this, such as choosing the wrong technology for a problem's solution, misunderstanding the requirements, etc. I decided to challenge my assumptions about JavaScript instead of moving straight into CoffeeScript or Dart. After exploring it, I find that I am beginning to enjoy it the more I use it. As long as there are those like Crockford to help guide me in the right way to code in JavaScript, I can create elegant and efficient solutions to problems and add another 'arrow' to the 'quiver', so to speak. I do still intend to learn CoffeeScript to see what the hub-bub is about, but now I no longer have to be afraid of JavaScript as a legitimate programming language. Has something similar ever happened to you? Tell me about it in the comments below.

    Read the article

  • Cloud Infrastructure has a new standard

    - by macoracle
    I have been working for more than two years now in the DMTF working group tasked with creating a cloud management standard. That work has culminated in the release today of the Cloud Infrastructure Management Interface (CIMI) version 1.0 by the DMTF. CIMI is a single interface that a cloud consumer can use to manage their cloud infrastructure in multiple clouds. As CIMI is adopted by the cloud vendors, no longer will you need to adapt client code to each of the proprietary interfaces from these multiple vendors. Unlike a de facto standard, where typically one vendor has change control over the interface and everyone else has to reverse-engineer its inner workings, CIMI is a de jure standard that is under the change control of a standards body. One reason the standard took two years to create is that we factored in use cases, requirements and contributed APIs from multiple vendors. These vendors have products shipping today, and as a result CIMI has a strong foundation in real-world experience. What does CIMI allow? CIMI is both a model for the resources (computing, storage, networking) in the cloud and a RESTful protocol binding to HTTP. This means that to create a Machine (guest VM), for example, the client creates a "document" that represents the Machine resource and sends it to the server using HTTP. CIMI allows the resources to be encoded in either JavaScript Object Notation (JSON) or the eXtensible Markup Language (XML). CIMI provides a model for the resources that can be mapped to any existing cloud infrastructure offering on the market. There are some features in CIMI that may not be supported by every cloud, but CIMI also supports the discovery of which features are implemented. This means that you can still have a client that works across multiple clouds and is able to take full advantage of the features in each of them. Isn't it too early for a standard? A key feature of a successful standard is that it allows for compatible extensions to occur within the core framework of the interface itself. CIMI's feature discovery (through metadata) is used to convey to the client that additional features, which may be vendor-specific, have been implemented. As multiple vendors implement such features, they become candidates for future versions of CIMI. Thus innovation can continue in the cloud space without being slowed down by a lowest-common-denominator type of specification. Since CIMI was developed in the open by dozens of stakeholders who are already implementing infrastructure clouds, I expect to see CIMI adopted by these same companies and others over the next year or two. Cloud customers who can see the benefit of this standard should start to ask their cloud vendors to show a CIMI implementation in their roadmap. For more information on CIMI and the DMTF's other cloud efforts, go to: http://dmtf.org/cloud

    Read the article
