Search Results

Search found 457 results on 19 pages for 'ken liu'.


  • How and Where do I learn about how to hook up my uncreated Database to a 3rd party API

    - by raln
    I want to create a database with information that I can send to a third party. They publish their API on their website, but they don't break it down for newcomers like myself, so I'm confused. I was thinking of getting a developer to do this all for me, but I DON'T want to get ripped off and overpay. All it is is a web application, backed by a database, built on their platform where they support different APIs. I also need a function in my web app that can send various messages via XML or SMPP to their webservice. -Ken

    Read the article

  • mocking command object in grails controller results in hasErrors() return false no matter what! Plea

    - by egervari
    I have a controller that uses a command object in a controller action. When mocking this command object in a Grails controller unit test, the hasErrors() method always returns false, even when I am purposefully violating its constraints.

    def save = { RegistrationForm form ->
        if(form.hasErrors()) {
            // code block never gets executed
        } else {
            // code block always gets executed
        }
    }

    In the test itself, I do this:

    mockCommandObject(RegistrationForm)
    def form = new RegistrationForm(emailAddress: "ken.bad@gmail", password: "secret", confirmPassword: "wrong")
    controller.save(form)

    I am purposefully giving it a bad email address, and I am making sure the password and the confirmPassword properties are different. In this case, hasErrors() should return true... but it doesn't. I don't know how my testing can be reliable at all if such a basic thing does not work :/ Here is the RegistrationForm class, so you can see the constraints I am using:

    class RegistrationForm {
        def springSecurityService

        String emailAddress
        String password
        String confirmPassword

        String getEncryptedPassword() {
            springSecurityService.encodePassword(password)
        }

        static constraints = {
            emailAddress(blank: false, email: true)
            password(blank: false, minSize: 4, maxSize: 10)
            confirmPassword(blank: false, validator: { confirmPassword, form ->
                confirmPassword == form.password
            })
        }
    }

    Read the article

  • Safari/Chrome (Webkit) - Cannot hide iframe vertical scrollbar

    - by BrainCore
    Hello, I have an iframe on www.mydomain.com that points to support.mydomain.com (which is a CNAME to a foreign domain). I automatically resize the height of my iframe so that the frame will not need any scrollbars to display the contained webpage. On Firefox and IE this works great, there is no scrollbar since I use <iframe ... scrolling="no"></iframe>. However, on webkit browsers (Safari and Chrome), the vertical scrollbar persists even when there is sufficient room for the page without the scrollbar (the scrollbar is grayed out). How do I hide the scrollbar for webkit browsers? Thanks, Ken

    Read the article

  • Silverlight User Group of Switzerland (SLUGS)

    - by Laurent Bugnion
    Last Thursday, the Silverlight Firestarter event took place in Redmond and was streamed live to a large audience worldwide (around 20’000 people). Approximately 30 of them were in Wallisellen near Zurich, in Microsoft Switzerland’s offices. This was not only a great occasion to learn more about the future of Silverlight and to see great demos, it was also the very first meeting of the Silverlight User Group of Switzerland (SLUGS). Having 30 people at a first meeting was a great success, especially considering that it was REALLY cold that night and that it had snowed 20 cm the night before! We all had a good time, and 3 lucky winners went back home with a prize: one LG Optimus 7 Windows Phone and two copies of Silverlight 4 Unleashed. Congratulations to the winners! After the keynote (which went by in a whirlwind, shortest 90 minutes ever!), we all had pizza and beverages generously sponsored by the Swiss DPE team, of which no fewer than 5 guys came to the event. Thanks to Stefano, Ronnie, Sascha, Big Mike and Ken for attending! We decided to have meetings every month. Stay tuned for announcements on when and where the events will take place. We are also in the process of creating various groups online where attendees can find more information. For instance, I created a group on Flickr where the pictures taken at events will be published. The group is public, and the pictures of the first event are already online! We also have the already known page at http://www.slugs.ch/, check it out. A national group: Even though the first event was in Zurich, and 3 of the founding members live nearby, we would like to try and be a national group. That means having events sometimes in other parts of Switzerland, collaborating with other local user groups, etc. Stay tuned for more! Join! We want you, we need you: If you are doing Silverlight, for a living or as a hobby, or if you are interested in user experience, XAML, Expression Blend and many more topics, you should consider joining! This is a great occasion to exchange experiences, to learn from Silverlight experts, and to hear sessions about various topics related to Silverlight. If you want to talk about a topic that is of interest to you, if you want to propose a topic of discussion, or if you just want to hang out, then go to http://www.slugs.ch and register! Cheers, Laurent   Laurent Bugnion (GalaSoft) Subscribe | Twitter | Facebook | Flickr | LinkedIn

    Read the article

  • Scrum for Team Foundation Server 2010

    - by Martin Hinshelwood
    I will be presenting a session on “Scrum for TFS2010” not once, but twice! If you are going to be at the Aberdeen Partner Group meeting on 27th April, or DDD Scotland on 8th May, then you may be able to catch my session. Credit: I want to give special thanks to Aaron Bjork from Microsoft, who provided me with most of my material. He is a Scrum and PowerPoint genius. Updated 9th May 2010 – I have now presented at both of these sessions and posted about it. Scrum for Team Foundation Server 2010 Synopsis: Visual Studio ALM (formerly Visual Studio Team System (VSTS)) and Team Foundation Server (TFS) are the cornerstones of development on the Microsoft .NET platform. These are the best tools for a team to have successful projects and for the developers to have a focused and smooth software development process. For TFS 2010 Microsoft is heavily investing in Scrum and has already started moving some teams across to using it. Martin will not be going in depth with Scrum, but you can find out more about Scrum by reading the Scrum Guide, and you can even assess your Scrum knowledge by having a go at the Scrum Open Assessment. You can also read SSW's Rules to Better Scrum using TFS, which have been developed during our own Scrum implementations. Come and see Martin Hinshelwood, Visual Studio ALM MVP and Solution Architect from SSW, show you:
    - How to successfully gather requirements with User stories
    - How to plan a project using TFS 2010 and Scrum
    - How to work with a product backlog in TFS 2010
    - The right way to plan a sprint with TFS 2010
    - Tracking your progress
    - The right way to use work items
    - What you can use from the built-in reporting as well as the Project portals available from the SharePoint dashboard
    - The important reports to give your Product Owner / Project Manager
    Walk away knowing how to see the project health and progress. Visual Studio ALM is designed to help address many of these traditional problems faced by teams. It does so by providing a set of integrated tools to help teams improve their software development activities and to help managers better support the software development processes. During this session we will cover the lifecycle of creating work items and how this fits into Scrum using Visual Studio ALM and Team Foundation Server. If you want to know more about how to do Scrum with TFS, there is a new course that has been created in collaboration with Microsoft and Scrum.org that is going to be the official course for working with TFS 2010. SSW has Professional Scrum Developer Trainers who specialise in training your developers in implementing Scrum with Microsoft's Visual Studio ALM tools. Ken Schwaber and Sam Guckenheimer: Professional Scrum Development. Technorati Tags: Scrum, VS ALM, VS 2010, TFS 2010

    Read the article

  • SQLAuthority News – Presented Soft Skill Session on Presentation Skills at SQL Bangalore on May 3, 2014

    - by Pinal Dave
    I have presented on various database technologies for almost 10 years now. SQL, databases and NoSQL have been part of my life. Earlier this month, I had the opportunity to present on the topic of Performing an Effective Presentation. I must say it was a blast to prepare as well as present this session. This event was part of the SQL Bangalore community. If you are in Bangalore, you must be part of this group. SQL Bangalore is a wonderful community and we always have a great response when we present on technology. It is a SQL User Group and we discuss everything SQL there. This month we had a SQL Server 2014 theme and we had a community launch of SQL Server. We had the best of the best speakers presenting on SQL Server 2014 technology. The event had amazing speakers and each of them did justice to the subject. You can read about this over here. In this session I told a story from my life. I talked about who inspired me and how I learned to speak in public. I told stories about two legends who have inspired me. There is no video recording of this session. If you want the resources from this session, please sign up for my newsletter at http://bit.ly/sqllearn. Well, I had a great time at this event. Over 250 people showed up and we had a grand time together. I personally enjoyed the sessions of Amit Benerjee, Balmukund Lakhani and Vinod Kumar. Ken and Surabh also entertained the audience. Overall, this was a grand event, and if you were in Bangalore and did not make it, you missed out on a few things. Here are a few photos of this event: SQL Bangalore UG; Nupur, Chandra, Shaivi, Balmukund, Amit, Vinod; the SQL Bangalore UG audience; and Pinal Dave presenting at SQL UG in Bangalore. Here are a few of the slides from this presentation: Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, SQLAuthority News, T SQL

    Read the article

  • links for 2010-12-22

    - by Bob Rhubart
    @hajonormann: BPM: Top Seven Architectural Topics in 2010 Oracle ACE Director Hajo Normann offers details on how to design a BPM/SOA solution including: modeling human interaction, improving BPM models, orchestrating composed services, central task management, new approaches for business-IT alignment, solutions for non-deterministic processes, and choreography. (tags: oracle otn soasymposium infoq soa bpm)
    InfoQ: Simplicity, The Way of the Unusual Architect Dan North talks about the tendency developers-becoming-architects have to create bigger and more complex systems. Without trying to be simplistic, North argues for simplicity, offering strategies to extract the simple essence from complex situations. (tags: ping.fm)
    Fun with Sun Ray, 3D, Oracle VM x86 and SRIOV (Wim Coekaerts Blog) "One of the things I like about my job is that I get to play around with stuff and make use of the technologies we work on in my teams. Sort of my own little playground." - Wim Coekaerts (tags: oracle otn virtualization oraclevm)
    Oracle VM VirtualBox 4.0.0 Released! (Oracle's Virtualization Blog) And you were worried about what to get that special someone for Christmas... (tags: oracle otn virtualization virtualbox)
    Virtual Developer Day: Oracle WebLogic Server & Java EE (#OTNVDD) (Oracle Technology Network Blog (aka TechBlog)) "Virtual Developer Day is back with a vengeance! On Feb. 1, login to learn how Oracle WebLogic Server enables a whole new level of productivity for enterprise developers." Registration is open. (tags: oracle otn events webinar java)
    New Coherence 3.6 Oracle University Course (Cristóbal Soto's Blog) Cristóbal Soto shares information on the "Oracle Coherence 3.6: Share and Manage Data in Clusters" course now available through Oracle University. (tags: oracle otn grid coherence)
    The Aquarium: Oracle WebLogic Server & Java EE developer day "Oracle WebLogic is well on its way to contribute to the general Java EE 6 momentum and the OTN Blog has just announced a Virtual Developer Day for Oracle WebLogic." (tags: oracle otn weblogic java)
    Enterprise 2.0 Use Cases for Semantic Web (Reiser 2.0) "How can an enterprise improve the efficiency and effectiveness of their Knowledge and Community model leveraging semantic technologies and social networking dynamics?" - Peter Reiser (tags: oracle otn enterprise2.0 semanticweb)
    John Gøtze: European Interoperability Framework 2.0 "This week, the European Commission announced an updated interoperability policy in the EU. The Commission has committed itself to adopt a Communication that introduces the European Interoperability Strategy (EIS) and an update to the European Interoperability Framework (EIF)..." - John Gøtze (tags: entarch Interoperability)
    Andy Mulholland: Maybe Web 3.0 is quite understandable – and a natural result "The idea of Web 1.0 = content, Web 2.0 = people and Web 3.0 = services has a nice symmetrical feel to it, in fact it feels basically right as such a definition would include the two other major definitions as well. So if we put these things all together what picture do we see?" - Andy Mulholland (tags: web2.0 web3.0)
    Ken Downs: A Working Definition of Business Logic, with Implications for CRUD Code "The Wikipedia entry on 'Business Logic' has a wonderfully honest opening sentence stating that 'Business logic, or domain logic, is a non-technical term...'" (tags: businesslogic crud)

    Read the article

  • Is there any way for ME to improve routing to an overseas server?

    - by Simon Hartcher
    I am trying to make a connection to a gaming server in Asia from Australia, but my ISP routes my connection through the US.

    Tracing route to worldoftanks-sea.com [116.51.25.54] over a maximum of 30 hops:

      1    <1 ms    <1 ms    <1 ms  192.168.1.1
      2    34 ms    42 ms    45 ms  10.20.21.123
      3    40 ms    40 ms    43 ms  202.7.173.145
      4    51 ms    42 ms    36 ms  syd-sot-ken-crt1-ge-6-0-0.tpgi.com.au [202.7.171.121]
      5   175 ms   200 ms   195 ms  ge5-0-5d0.cir1.seattle7-wa.us.xo.net [216.156.100.37]
      6   212 ms   228 ms   229 ms  vb2002.rar3.sanjose-ca.us.xo.net [207.88.13.150]
      7   205 ms   204 ms   206 ms  207.88.14.226.ptr.us.xo.net [207.88.14.226]
      8   207 ms   215 ms   220 ms  xe-0.equinix.snjsca04.us.bb.gin.ntt.net [206.223.116.12]
      9   198 ms   201 ms   199 ms  ae-7.r20.snjsca04.us.bb.gin.ntt.net [129.250.5.52]
     10   396 ms   391 ms   395 ms  as-6.r20.sngpsi02.sg.bb.gin.ntt.net [129.250.3.89]
     11   383 ms   384 ms   383 ms  ae-3.r02.sngpsi02.sg.bb.gin.ntt.net [129.250.4.178]
     12   364 ms   381 ms   359 ms  wotsg1-slave-54.worldoftanks.sg [116.51.25.54]

    Trace complete.

    Since I think it is unlikely that my ISP will do anything, are there any ways to improve my routing to the server without them having to intervene? NB. The game runs predominantly over UDP, so I believe most low-ping services are out of the question, as they rely on TCP traffic.

    Read the article

  • Squirrelmail receiving duplicate emails

    - by Austin
    A client of mine is experiencing issues with his email, it appears that whenever he receives email from a certain domain it comes as duplicates. Not only are they duplicates but the duplicated items have a (+) sign next to them which usually indicates an attachment. Could this be because of a forwarding issue? Here are the headers: Return-Path: <[email protected]> Received: from bigcat.centralmasswebdesign.com (root@localhost) by tarbellconstruction.com (8.13.1/8.13.1) with ESMTP id o4OFnO23003379 for <[email protected]>; Mon, 24 May 2010 11:49:24 -0400 X-ClientAddr: 72.249.26.200 Received: from mf3.spamfiltering.com (mf3.spamfiltering.com [72.249.26.200]) by bigcat.centralmasswebdesign.com (8.13.1/8.13.1) with ESMTP id o4OFnOjF005520 for <[email protected]>; Mon, 24 May 2010 11:49:24 -0400 X-Envelope-From: [email protected] X-Envelope-To: [email protected] Received: From 67-132-16-226.dia.static.qwest.net (67.132.16.226) by mf3.spamfiltering.com (MAILFOUNDRY) id 6lzIAmdLEd+oFQAw for [email protected]; Mon, 24 May 2010 15:49:23 -0000 (GMT) Received: from mail pickup service by WMA2-EXCH1.NELCO-USA.net with Microsoft SMTPSVC; Mon, 24 May 2010 11:49:18 -0400 Content-Transfer-Encoding: 7bit Importance: normal Priority: normal X-MimeOLE: Produced By Microsoft MimeOLE V6.00.3790.4325 Content-Class: urn:content-classes:message MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="----_=_NextPart_001_01CAFB58.AAB268D0" Subject: weekly activity report for week ending May 22, 2010 Date: Mon, 24 May 2010 11:49:16 -0400 Message-ID: <15BCC4D99E8CBF48A2FA37A318CFF5C801209CCC@wma2-exch1.NELCO-USA.net> X-MS-Has-Attach: yes X-MS-TNEF-Correlator: Thread-Topic: weekly activity report for week ending May 22, 2010 thread-index: Acr7WKpdCelRCiocT1eBY2YN5Ma8DA== From: "Mike LeBlanc" <[email protected]> To: "Keith Berube" <[email protected]>, "Ken Tarbell" <[email protected]> X-OriginalArrivalTime: 24 May 2010 15:49:18.0361 (UTC) FILETIME=[AB546890:01CAFB58]

    Read the article

  • How to recover a Linksys WRT54GL router that has a blinking green power LED and no response from the

    - by Peter Mounce
    I was flashing the router with the Tomato firmware, but something went wrong; I'm not sure what. Now, the router responds to ping at 192.168.1.1 (my Mac's on a static IP 192.168.1.21), but the web-interface doesn't come up. I have read that this situation is recoverable in a [couple of places][2], but I haven't been having much success and so I wondered whether anyone could help. From my Mac (OSX 10.5) I have tried to tftp a new vanilla-Linksys firmware to the router and reboot; according to the trace, this sends it but the router behaves no differently after a reboot. I've read that if boot_wait is turned on, I'll have an easier time, but I haven't been able to find any instructions that tell me how I can tell whether I did this or not (I don't think I have, but I might have, when I tinkered the first time months ago - the router has worked since then, though). I have found a couple of references to [something called JTAG][3], which seems like some kind of [homebrew diagnostic cable thing][4], but that's a little beyond my ken. Happy to try it, with muppet-level instructions, though (I do software, not hardware!). So, I'm at a bit of a loss, really, and wondered whether anyone could provide me with the route (ha. ha.) out of this mess? Hm, I can't post all the links I wanted to until I have some more reputation.

    Read the article

  • Why Your ERP System Isn't Ready for the Next Evolution of the Enterprise

    - by [email protected]
    By ken.pulverman on March 24, 2010 8:51 AM ERP has been the backbone of enterprise software. The data held in your ERP system is the core of most companies. Efficiencies gained through the accounting and resource allocation capabilities of ERP software have literally saved companies trillions of dollars. Not only does everything seem to be fine with your ERP system, you haven't had to touch it in years. Why aren't you ready for what comes next? Well, judging by the growth rates in the space (Oracle posted only a 3% growth rate, while SAP showed a 12% decline) there hasn't been much modernization going on, just a little replacement activity. If you are like most companies, your ERP system is connected to a proprietary middleware solution that only effectively talks with a handful of other systems you might have acquired from the same vendor. Connecting your legacy system through proprietary middleware is expensive and brittle, and if you are like most companies, you were only willing to pay an SI so much before you said "enough." So your ERP is working. It's humming along. You might not be able to get Order to Promise information when you take orders in your call center, but there are workarounds that work just fine. So what's the problem? The problem is that you built your business around your ERP core, and now there is such pressure to innovate your business processes to keep up that you need a whole new slew of modern apps and you need ERP data to be accessible from everywhere. Every time you change a sales territory or a comp plan or change a benefits provider, your ERP system, literally the economic brain of your business, needs to know what's going on. And this giant need to access and provide information to your ERP is only growing. What makes matters even more challenging is that apps today come in every flavor under the Sun™: SaaS, cloud, managed, hybrid, outsourced, composite... and they all have different integration protocols. The only easy way to get ahead of all this is to modernize the way you connect and run your applications. Unlike the middleware solutions of yesteryear, modern middleware is effectively the operating system of the enterprise. In the same way that you rely on Apple, Microsoft, and Google to find a video driver for your 23" monitor or to ensure that Word or Keynote runs, modern middleware takes care of intra-application connectivity and process execution. It effectively allows you to take ERP out of the middle while ensuring connectivity to your vital data for anything you want to do. The diagram below reflects that change. In this model, the hegemony of ERP is over. It too has to become a stealthy modern app to help you quickly adapt to business changes while managing vital information. And through modern middleware it will connect to everything. So yes, ERP as we've known it is dead, but long live ERP as a connected application member of the modern enterprise. I want to thank Andrew Zoldan, Group Vice President, Oracle Manufacturing Industries Business Unit, for introducing me to how some of his biggest customers have benefited by modernizing their applications infrastructure and making ERP a connected application. by John Burke, Group Vice President, Applications Business Unit

    Read the article

  • What are the disadvantages of self-encapsulation?

    - by Dave Jarvis
    Background
    Tony Hoare's billion dollar mistake was the invention of null. Subsequently, a lot of code has become riddled with null pointer exceptions (segfaults) when software developers try to use (dereference) uninitialized variables. In 1989, Wirfs-Brock and Wikerson wrote: "Direct references to variables severely limit the ability of programmers to refine existing classes. The programming conventions described here structure the use of variables to promote reusable designs. We encourage users of all object-oriented languages to follow these conventions. Additionally, we strongly urge designers of object-oriented languages to consider the effects of unrestricted variable references on reusability."

    Problem
    A lot of software, especially in Java, but likely in C# and C++, often uses the following pattern:

    public class SomeClass {
        private String someAttribute;

        public SomeClass() {
            this.someAttribute = "Some Value";
        }

        public void someMethod() {
            if( this.someAttribute.equals( "Some Value" ) ) {
                // do something...
            }
        }

        public void setAttribute( String s ) {
            this.someAttribute = s;
        }

        public String getAttribute() {
            return this.someAttribute;
        }
    }

    Sometimes a band-aid solution is used by checking for null throughout the code base:

    public void someMethod() {
        assert this.someAttribute != null;
        if( this.someAttribute.equals( "Some Value" ) ) {
            // do something...
        }
    }

    public void anotherMethod() {
        assert this.someAttribute != null;
        if( this.someAttribute.equals( "Some Default Value" ) ) {
            // do something...
        }
    }

    The band-aid does not always avoid the null pointer problem: a race condition exists. The race condition is mitigated using:

    public void anotherMethod() {
        String someAttribute = this.someAttribute;
        assert someAttribute != null;
        if( someAttribute.equals( "Some Default Value" ) ) {
            // do something...
        }
    }

    Yet that requires two statements (assignment to a local copy and a check for null) every time a class-scoped variable is used, just to ensure it is valid.

    Self-Encapsulation
    Ken Auer's Reusability Through Self-Encapsulation (Pattern Languages of Program Design, Addison Wesley, New York, pp. 505-516, 1994) advocated self-encapsulation combined with lazy initialization. The result, in Java, would resemble:

    public class SomeClass {
        private String someAttribute;

        public SomeClass() {
            setAttribute( "Some Value" );
        }

        public void someMethod() {
            if( getAttribute().equals( "Some Value" ) ) {
                // do something...
            }
        }

        public void setAttribute( String s ) {
            this.someAttribute = s;
        }

        public String getAttribute() {
            String someAttribute = this.someAttribute;
            if( someAttribute == null ) {
                // Lazily create and assign the default before returning,
                // so the first caller never sees null.
                someAttribute = createDefaultValue();
                setAttribute( someAttribute );
            }
            return someAttribute;
        }

        protected String createDefaultValue() {
            return "Some Default Value";
        }
    }

    All duplicate checks for null are superfluous: getAttribute() ensures the value is never null at a single location within the containing class. Efficiency arguments should be fairly moot -- modern compilers and virtual machines can inline the code when possible. As long as variables are never referenced directly, this also allows for proper application of the Open-Closed Principle.

    Question
    What are the disadvantages of self-encapsulation, if any? (Ideally, I would like to see references to studies that contrast the robustness of similarly complex systems that use and don't use self-encapsulation, as this strikes me as a fairly straightforward testable hypothesis.)
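    To make the Open-Closed point above concrete, here is a small, hypothetical sketch (the names DefaultValueDemo and CustomDefault are mine, not part of the question): because every read goes through getAttribute(), a subclass can swap in a different default by overriding createDefaultValue() alone, without touching any other method.

    public class DefaultValueDemo {

        // Assumes the self-encapsulated SomeClass shown in the question above.
        static class CustomDefault extends SomeClass {
            @Override
            protected String createDefaultValue() {
                // Only the factory method changes; accessors and callers stay untouched.
                return "Another Default Value";
            }
        }

        public static void main(String[] args) {
            SomeClass object = new CustomDefault();
            object.setAttribute(null); // simulate an attribute that was never initialized
            System.out.println(object.getAttribute()); // prints "Another Default Value", never null
        }
    }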

    Read the article

  • SR Activity Summaries Via Direct Email? You Bet!

    - by PCat
    Courtesy of Ken Walker. I’m a “bottom line” kind of guy. My friends and co-workers will tell you that I’m a “Direct Communicator” when it comes to work or my social life. For example, if I were to come up with a fantastic new recipe for low-fat pan fried chicken, I’d Tweet, email, or find a way to blast the recipe directly to you so that you could enjoy it immediately. My friends would see the subject, “Awesome New Fried Chicken”, and they’d click and see the recipe there before them.

    Others are “Indirect Communicators.” My friend Joel is like this. He would post the recipe in his blog, and then Tweet or email a link back to his blog with the subject, “Fried Chicken.” Then Joel would sit back and expect his friends to read the email, AND click the link to his blog, and then read the recipe. As a fan of the “Direct” method, I wish there was a way for me to “opt in” for immediate updates from Joel so I could see the recipe without having to click over to his blog to search for it.

    The same is true for MOS. If you’ve ever opened a Service Request through My Oracle Support (MOS), you know that most of the communication between you and the Oracle Support Engineer with respect to the issue in the SR is done via email. Which type of email would you rather receive in your email account?

    Example 1: Your SR has been updated. Click HERE to see the update.

    Or

    Example 2: Your SR has been updated. Here is the update: “Hi John, Oracle Development has completed the patch we’ve been waiting for! Here’s a direct “LINK” to the patch that should resolve your issue. Please download and install the patch via the instructions (included with the link) and let me know if it does, in fact, resolve your issue!”

    Example 2 is available to you! All you need to do is to “opt in” for the direct email updates. The default is the indirect update, as seen in Example 1. To turn on “Service Request Details in Email”, simply follow these instructions (aided by the screenshot below):
    1. Log into MOS, and click on your name in the upper right corner. Select “My Account.”
    2. Make sure “My Account” is highlighted in bold on the left.
    3. Turn ON “Service Request Details in Email”.

    That’s it! You will now receive the SR updates directly in your email account without having to log into MOS, click the SR, scroll down to the updates, etc. That’s better than Fried Chicken! (Well, almost better....)

    Read the article

  • What good technology podcasts are out there?

    - by Michael Stum
    Yes, podcasts, those nice little audiobooks I can listen to on the way to work. With the current number of podcasts, it's like searching for a needle in a haystack, except that the haystack happens to be the Internet and is filled with too much of this "hot new gadgets" stuff :( Now, even though I am mainly a .NET developer nowadays, maybe someone knows some good podcasts covering the whole software lifecycle? Unit Testing, Continuous Integration, Documentation, Deployment... So - what are you guys and gals listening to? Please note that the categorizations are somewhat subjective and may not be 100% accurate as many podcasts cover several areas. Categorization is made against what is considered the "main" area.

    General Software Engineering / Productivity: Stack Overflow TekPub (Requires Paid Subscription) SE Radio 43 Folders Perspectives Dr. Dobb's (now a video feed) The Pragmatic Podcast (Inactive) IT Matters Agile Toolkit Podcast The Stack Trace (Inactive) Parleys Techzing The Startup Success Podcast Berkeley CS class lectures FOSS Weekly

    .NET / Visual Studio / Microsoft: Herding Code Hanselminutes .NET Rocks! Deep Fried Bytes Alt.Net Podcast Polymorphic Podcast Sparkling Client (The Silverlight Podcast) dnrTV! Spaghetti Code ASP.NET Podcast Channel 9 Radio TFS PowerScripting Podcast The Thirsty Developer Elegant Code ConnectedShow Crafty Coders Coding QA jQuery yayQuery The official jQuery podcast

    Java / Groovy: The Java Posse Grails Podcast Java Technology Insider

    Ruby / Rails: Railscasts Rails Envy The Ruby on Rails Podcast Rubiverse

    Web Design / JavaScript / Ajax: WebDevRadio Boagworld The Rissington podcast Ajaxian YUI Theater

    Unix / Linux / Mac / iPhone: Mac Developer Network Hacker Public Radio Linux Outlaws Mac OS Ken LugRadio Linux radio show (Inactive) The Linux Action Show! Linux Kernel Mailing List (LKML) Summary Podcast Stanford's iPhone programming class

    SysAdmin, Security or Infrastructure: RunAs Radio Security Now! Crypto-Gram Security Podcast Hak5 VMWare VMTN Windows Weekly PaulDotCom Security The Register - Semi-Coherent Computing FeatherCast

    General Tech / Business: Tekzilla This Week in Tech The Guardian Tech Weekly PCMag Radio Podcast Entrepreneurship Corner Manager Tools

    Other / Misc. / Podcast Networks: IT Conversations Retrobits Podcast No Agenda Netcast Cranky Geeks The Command Line Freelance Radio IBM developerWorks The Register - Open Season Drunk and Retired Technometria Sod This Radio4Nerds Hacker Medley

    Read the article

  • Need help limiting a join in Transact-sql

    - by MsLis
    I'm somewhat new to SQL and need help with query syntax. My issue involves 2 tables within a larger multi-table join under Transact-SQL (MS SQL Server 2000 Query Analyzer). I have ACCOUNTS and LOGINS, which are joined on 2 fields: Site & Subset. Both tables may have multiple rows for each Site/Subset combination.

    ACCOUNTS:                            | LOGINS:
    SITE   SUBSET   FIELD  FIELD  FIELD  | SITE   SUBSET   USERID   PASSWD
    alpha  bravo    blah   blah   blah   | alpha  bravo    foo      bar
    alpha  charlie  blah   blah   blah   | alpha  bravo    bar      foo
    alpha  charlie  bleh   bleh   blue   | alpha  charlie  id       ego
    delta  bravo    blah   blah   blah   | delta  bravo    john     welcome
    delta  foxtrot  blah   blah   blah   | delta  bravo    jane     welcome
                                         | delta  bravo    ken      welcome
                                         | delta  bravo    barbara  welcome

    I want to select all rows in ACCOUNTS which have LOGIN entries, but only 1 login per account.

    DESIRED RESULT:
    SITE   SUBSET   FIELD  FIELD  FIELD  USERID  PASSWD
    alpha  bravo    blah   blah   blah   foo     bar
    alpha  charlie  blah   blah   blah   id      ego
    alpha  charlie  bleh   bleh   blue   id      ego
    delta  bravo    blah   blah   blah   jane    welcome

    I don't really care which row from the LOGINS table I get, but the UserID and Password have to correspond. [Don't return invalid combinations like foo/foo or bar/bar] MS Access has a handy FIRST function, which can do this, but I haven't found an equivalent in TSQL. Also, if it makes a difference, other tables are joined to ACCOUNTS, but this is the only use of LOGINS in the structure. Thank you very much for any assistance.

    Read the article

  • How can I handle these different HTML chunks in Perl?

    - by kkchaitu
    I need to write a single regular expression that handles the three possible cases below.

    Case 1:
    <td width="100%" align="left" bgcolor="#001E5A"><font face="Arial" size="2" color="#FFFFFF"><font size="1"><b>[Punjabi Music]</font></b> <a id="listlinks" target="_scurl" href="http://www.apnaradio.com/">Apna Radio Broadcast Live 24x7: Indian - Pakistani - Punjabi - Bhangra and Hindi Music !!</a> </font></td> <td nowrap align="center" width="10" bgcolor="#001E5A">&nbsp;</td>

    Case 2:
    <td width="100%" align="left" bgcolor="#001E32"><font face="Arial" size="2" color="#FFFFFF"><font size="1"><b>[jazz]</font></b> <a id="listlinks" target="_scurl" href="http://www.dinnerjazzexcursion.com">Dinner Jazz Excursion</a> <br> <font size="1"><a id="chatstuff" href="aim:goim?screenname=NA">[ AIM ]</a>&nbsp;<font color="#FF0000">Now Playing:</font> Ken Peplowski - Indian Summer</font></font></td> <td nowrap align="center" width="10" bgcolor="#001E32">&nbsp;</td>

    Case 3:
    <td width="100%" align="left" bgcolor="#001E5A"><font face="Arial" size="2" color="#FFFFFF"><font size="1"><b>[World Bollywood Hindi]</font></b> <a id="listlinks" target="_scurl" href="http://www.bollywoodmusicradio.com/">Bollywood Music Radio :: Indian Music :: Request your Hindi Songs</a> <br> <font size="1"><font color="#FF0000">Now Playing:</font> Bollywood Music Radio - Fear (2007) - Tu Hai Ishq @ 13:46</font></font></td> <td nowrap align="center" width="10" bgcolor="#001E5A">&nbsp;</td>

    Read the article

  • How do you read a file line by line in your language of choice?

    - by Jon Ericson
    I got inspired to try out Haskell again based on a recent answer. My big block is that reading a file line by line (a task made simple in languages such as Perl) seems complicated in a functional language. How do you read a file line by line in your favorite language? So that we are comparing apples to other types of apples, please write a program that numbers the lines of the input file. So if your input is:

    Line the first.
    Next line.
    End of communication.

    The output would look like:

    1 Line the first.
    2 Next line.
    3 End of communication.

    I will post my Haskell program as an example. Ken commented that this question does not specify how errors should be handled. I'm not overly concerned about it because: Most answers did the obvious thing and read from stdin and wrote to stdout. The nice thing is that it puts the onus on the user to redirect those streams the way they want. So if stdin is redirected from a non-existent file, the shell will take care of reporting the error, for instance. The question is more aimed at how a language does IO than how it handles exceptions. But if necessary error handling is missing in an answer, feel free to either edit the code to fix it or make a note in the comments.
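    For comparison with the Haskell version mentioned above, here is a minimal sketch of the requested line-numbering program in Java (an illustration of the kind of submission the question asks for, not one of the posted answers); it reads stdin and writes numbered lines to stdout, leaving error reporting to the shell as discussed.

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;

    public class NumberLines {
        public static void main(String[] args) throws IOException {
            BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
            int lineNumber = 1;
            String line;
            // Read stdin line by line and prefix each line with its number.
            while ((line = in.readLine()) != null) {
                System.out.println(lineNumber + " " + line);
                lineNumber++;
            }
        }
    }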

    Read the article

  • Better documentation for tasks waiting on resources

    - by SQLOS Team
    The sys.dm_os_waiting_tasks DMV contains a wealth of useful information about tasks waiting on a resource, but until now detailed information about the resource being consumed - sys.dm_os_waiting_tasks.resource_description - hasn't been documented, apart from a rather self-evident "Description of the resource that is being consumed." Thanks to a recent Connect suggestion this column will get more information added. Here is a summary of the possible values that can appear in this column (current for SQL Server 2008 R2 and Denali):

    Thread-pool resource owner:
    • threadpool id=scheduler<hex-address>

    Parallel query resource owner:
    • exchangeEvent id={Port|Pipe}<hex-address> WaitType=<exchange-wait-type> nodeId=<exchange-node-id>

    <exchange-wait-type> can be one of the following:
    • e_waitNone
    • e_waitPipeNewRow
    • e_waitPipeGetRow
    • e_waitSynchronizeConsumerOpen
    • e_waitPortOpen
    • e_waitPortClose
    • e_waitRange

    Lock resource owner:
    <type-specific-description> id=lock<lock-hex-address> mode=<mode> associatedObjectId=<associated-obj-id>

    <type-specific-description> can be:
    • For DATABASE: databaselock subresource=<databaselock-subresource> dbid=<db-id>
    • For FILE: filelock fileid=<file-id> subresource=<filelock-subresource> dbid=<db-id>
    • For OBJECT: objectlock lockPartition=<lock-partition-id> objid=<obj-id> subresource=<objectlock-subresource> dbid=<db-id>
    • For PAGE: pagelock fileid=<file-id> pageid=<page-id> dbid=<db-id> subresource=<pagelock-subresource>
    • For KEY: keylock hobtid=<hobt-id> dbid=<db-id>
    • For EXTENT: extentlock fileid=<file-id> pageid=<page-id> dbid=<db-id>
    • For RID: ridlock fileid=<file-id> pageid=<page-id> dbid=<db-id>
    • For APPLICATION: applicationlock hash=<hash> databasePrincipalId=<role-id> dbid=<db-id>
    • For METADATA: metadatalock subresource=<metadata-subresource> classid=<metadatalock-description> dbid=<db-id>
    • For HOBT: hobtlock hobtid=<hobt-id> subresource=<hobt-subresource> dbid=<db-id>
    • For ALLOCATION_UNIT: allocunitlock hobtid=<hobt-id> subresource=<alloc-unit-subresource> dbid=<db-id>

    <mode> can be: Sch-S, Sch-M, S, U, X, IS, IU, IX, SIU, SIX, UIX, BU, RangeS-S, RangeS-U, RangeI-N, RangeI-S, RangeI-U, RangeI-X, RangeX-S, RangeX-U, RangeX-X

    External resource owner:
    • External ExternalResource=<wait-type>

    Generic resource owner:
    • TransactionMutex TransactionInfo Workspace=<workspace-id>
    • Mutex
    • CLRTaskJoin
    • CLRMonitorEvent
    • CLRRWLockEvent
    • resourceWait

    Latch resource owner:
    • <db-id>:<file-id>:<page-in-file>
    • <GUID>
    • <latch-class> (<latch-address>)

    Further Information:
    • Slava Oks's weblog: sys.dm_os_waiting_tasks
    • Informit.com: Identifying Blocking Using sys.dm_os_waiting_tasks - Ken Henderson

    - Guy

    Read the article

  • Mixed Emotions: Humans React to Natural Language Computer

    - by Applications User Experience
    There was a big event in Silicon Valley on Tuesday, November 15. Watson, the natural language computer developed at IBM Watson Research Center in Yorktown Heights, New York, and its inventor and principal research investigator, David Ferrucci, were guests at the Computer History Museum in Mountain View, California for another round of the television game Jeopardy. You may have read about or watched on YouTube how Watson beat Ken Jennings and Brad Rutter, two top Jeopardy competitors, last February. This time, Watson swept the floor with two Silicon Valley high-achievers, one a venture capitalist with a background  in math, computer engineering, and physics, and the other a technology and finance writer well-versed in all aspects of culture and humanities. Watson is the product of the DeepQA research project, which attempts to create an artificially intelligent computing system through advances in natural language processing (NLP), among other technologies. NLP is a computing strategy that seeks to provide answers by processing large amounts of unstructured data contained in multiple large domains of human knowledge. There are several ways to perform NLP, but one way to start is by recognizing key words, then processing  contextual  cues associated with the keyword concepts so that you get many more “smart” (that is, human-like) deductions,  rather than a series of “dumb” matches.  Jeopardy questions often require more than key word matching to get the correct answer; typically several pieces of information put together, often from vastly different categories, to come up with a satisfactory word string solution that can be rephrased as a question.  Smarter than your average search engine, but is it as smart as a human? Watson was especially fast at descrambling mixed-up state capital names, and recalling and pairing movie titles where one started and the other ended in the same word (e.g., Billion Dollar Baby Boom, where both titles used the word Baby). David said they had basically removed the variable of how fast Watson hit the buzzer compared to human contestants, but frustration frequently appeared on the faces of the contestants beaten to the punch by Watson. David explained that top Jeopardy winners like Jennings achieved their success with a similar strategy, timing their buzz to the end of the reading of the clue,  and “running the board”, being first to respond on about 60% of the clues.  Similar results for Watson. It made sense that Watson would be good at the technical and scientific stuff, so I figured the venture capitalist was toast. But I thought for sure Watson would lose to the writer in categories such as pop culture, wines and foods, and other humanities. Surprisingly, it held its own. I was amazed it could recognize a word definition of a syllogism in the category of philosophy. So what was the audience reaction to all of this? We started out expecting our formidable human contestants to easily run some of their categories; however, they started off on the wrong foot with the state capitals which Watson could unscramble so efficiently. By the end of the first round, contestants and the audience were feeling a little bit, well, …. deflated. Watson was winning by about $13,000, and the humans had gone into negative dollars. The IBM host said he was going to “slow Watson down a bit,” and the humans came back with respectable scores in Double Jeopardy. 
    This was partially thanks to a very sympathetic audience (and host, also a human) providing “group-think” on many questions, especially baseball’s most valuable players, which, by the way, couldn’t have been hard because even I knew them. Yes, that’s right, the humans cheated. Since Watson could speak but not hear us (it didn’t have speech recognition capability), it was probably unaware of this. In Final Jeopardy, the single question had to do with law. I was sure Watson would blow this one, but all contestants were able to answer correctly about a copyright law. In a career devoted to making computers more helpful to people, I think I may have seen how a computer can do too much. I’m not sure I’d want to work side-by-side with a Watson doing my job. Certainly listening and empathy are important traits we humans still have over Watson. While there was great enthusiasm in the packed room of computer scientists and their friends for this standing-room-only show, I think it made several of us uneasy (especially the poor human contestants whose egos were soundly bashed in the first round). This computer system, by the way, only took 4 years to program. David Ferrucci mentioned several practical uses for Watson, including medical diagnoses and legal strategies. Are you “the expert” in your job? Imagine NLP computing on an Oracle database. This may be the user interface of the future to enable users to better process big data. How do you think you’d like it? Postscript: There were three little boys sitting in front of me in the very first row. They looked, how shall I say it, … unimpressed!

    Read the article

  • Is there an algorithm for converting quaternion rotations to Euler angle rotations?

    - by Will Baker
    Is there an existing algorithm for converting a quaternion representation of a rotation to an Euler angle representation? The rotation order for the Euler representation is known and can be any of the six permutations (i.e. xyz, xzy, yxz, yzx, zxy, zyx). I've seen algorithms for a fixed rotation order (usually the NASA heading, bank, roll convention) but not for arbitrary rotation order. Furthermore, because there are multiple Euler angle representations of a single orientation, this result is going to be ambiguous. This is acceptable (because the orientation is still valid, it just may not be the one the user is expecting to see); however, it would be even better if there was an algorithm which took rotation limits (i.e. the number of degrees of freedom and the limits on each degree of freedom) into account and yielded the 'most sensible' Euler representation given those constraints. I have a feeling this problem (or something similar) may exist in the IK or rigid body dynamics domains.

    Solved: I solved this problem by following Ken Shoemake's algorithms from Graphics Gems, and I answered my own question at the time, but it may not be obvious that I did so. See the answer, below, for more detail.

    Just to clarify - I know how to convert from a quaternion to the so-called 'Tait-Bryan' representation - what I was calling the 'NASA' convention. This is a rotation order (assuming the convention that the 'Z' axis is up) of zxy. I need an algorithm for all rotation orders. Possibly the solution, then, is to take the zxy order conversion and derive from it five other conversions for the other rotation orders. I guess I was hoping there was a more 'overarching' solution. In any case, I am surprised that I haven't been able to find existing solutions out there.

    In addition, and this perhaps should be a separate question altogether, any conversion (assuming a known rotation order, of course) is going to select one Euler representation, but there are in fact many. For example, given a rotation order of yxz, the two representations (0,0,180) and (180,180,0) are equivalent (and would yield the same quaternion). Is there a way to constrain the solution using limits on the degrees of freedom, like you do in IK and rigid body dynamics? In the example above, if there were only one degree of freedom about the Z axis then the second representation could be disregarded.

    I have tracked down one paper which could be an algorithm in this pdf, but I must confess I find the logic and math a little hard to follow. Surely there are other solutions out there? Is arbitrary rotation order really so rare? Surely every major 3D package that allows skeletal animation together with quaternion interpolation (i.e. Maya, Max, Blender, etc.) must have solved exactly this problem?
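    As a concrete reference point for the fixed-order case mentioned above (not Shoemake's arbitrary-order algorithm from Graphics Gems), here is a minimal sketch of the common aerospace yaw-pitch-roll conversion; the class and method names are illustrative, and the quaternion is assumed to be normalized with components (w, x, y, z).

    public final class QuaternionToEuler {

        /**
         * Converts a unit quaternion (w, x, y, z) to Euler angles for ONE fixed
         * convention: intrinsic yaw about Z, then pitch about Y, then roll about X.
         * Returns {roll, pitch, yaw} in radians.
         */
        public static double[] toRollPitchYaw(double w, double x, double y, double z) {
            double roll = Math.atan2(2.0 * (w * x + y * z), 1.0 - 2.0 * (x * x + y * y));

            // Clamp to avoid NaN from tiny numerical overshoot near the +/-90 degree singularity.
            double sinPitch = 2.0 * (w * y - z * x);
            sinPitch = Math.max(-1.0, Math.min(1.0, sinPitch));
            double pitch = Math.asin(sinPitch);

            double yaw = Math.atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z));
            return new double[] { roll, pitch, yaw };
        }

        public static void main(String[] args) {
            // 90 degree rotation about Z: q = (cos 45deg, 0, 0, sin 45deg).
            double[] rpy = toRollPitchYaw(Math.sqrt(0.5), 0.0, 0.0, Math.sqrt(0.5));
            // Expect roll=0, pitch=0, yaw=pi/2 (about 1.571).
            System.out.printf("roll=%.3f pitch=%.3f yaw=%.3f%n", rpy[0], rpy[1], rpy[2]);
        }
    }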

    Read the article

  • Is Berkeley DB a NoSQL solution?

    - by Gregory Burd
    Berkeley DB is a library. To use it to store data you must link the library into your application. You can use most programming languages to access the API; the calls across these APIs generally mimic the Berkeley DB C-API, which makes perfect sense because Berkeley DB is written in C. The inspiration for Berkeley DB was the DBM library, a part of the earliest versions of UNIX written by AT&T's Ken Thompson in 1979. DBM was a simple key/value hashtable-based storage library. In the early 1990s, as BSD UNIX was transitioning from version 4.3 to 4.4 and retrofitting commercial code owned by AT&T with unencumbered code, it was the future founders of Sleepycat Software who wrote libdb (aka Berkeley DB) as the replacement for DBM. The problem it addressed was fast, reliable local key/value storage. At that time databases almost always lived on a single node; even the most sophisticated databases only had simple fail-over two-node solutions. If you had a lot of data to store you would choose between the few commercial RDBMS solutions and writing your own custom solution. Berkeley DB took the headache out of the custom approach. These basic market forces inspired other DBM implementations. There were the "New DBM" (ndbm) and the "GNU DBM" (GDBM) and a few others, but the theme was the same. Even today TokyoCabinet calls itself "a modern implementation of DBM", mimicking, and improving on, something first created over thirty years ago. In the mid-1990s, DBM was the name for what you needed if you were looking for fast, reliable local storage. Fast forward to today. What's changed? Systems are connected over fast, very reliable networks. Disks are cheap, fast, and capable of storing huge amounts of data. CPUs continued to follow Moore's Law; processing power that filled a room in 1990 now fits in your pocket. PCs, servers, and other computers proliferated both in business and the personal markets. In addition to the new hardware, entire markets, social systems, and new modes of interpersonal communication moved onto the web and started evolving rapidly. These changes caused a massive explosion of data and a need to analyze and understand that data. Taken together this resulted in an entirely different landscape for database storage; new solutions were needed. A number of novel solutions stepped up and eventually a category called NoSQL emerged. The new market forces inspired the CAP theorem and the heated debate of BASE vs. ACID, but in essence this was simply the market looking at what to trade off to meet these new demands. These new database systems shared many qualities: they were designed to address massive amounts of data and millions of requests per second, and to scale out across multiple systems. The first large-scale and successful solution was Dynamo, Amazon's distributed key/value database. Dynamo essentially took the next logical step and added a twist. Dynamo was to be the database of record; it would be distributed, data would be partitioned across many nodes, and it would tolerate failure by avoiding single points of failure. Amazon did this because they recognized that the majority of the dynamic content they provided to customers visiting their web store front didn't require the services of an RDBMS. The queries were simple key/value look-ups or simple range queries, with only a few that required more complex joins.
    They set about to use relational technology only in places where it was the best solution for the task, places like accounting and order fulfillment, but not in the myriad of other situations. The success of Dynamo, and its design, inspired the next generation of non-SQL, distributed database solutions, including Cassandra, Riak and Voldemort. The problem their designers set out to solve was "reliability at massive scale", so the first focal point was distributed database algorithms. Underneath Dynamo there is a local transactional database: either Berkeley DB, Berkeley DB Java Edition, MySQL or an in-memory key/value data structure. Dynamo was an evolution of local key/value storage onto networks. Cassandra, Riak, and Voldemort all faced similar design decisions, and one, Voldemort, chose Berkeley DB Java Edition for its node-local storage. Riak at first was entirely in-memory, but has recently added write-once, append-only, log-based on-disk storage, a similar type of storage to Berkeley DB's, except that it is based on a hash table, which must reside entirely in memory, rather than a btree, which can live in memory or on disk. Berkeley DB evolved too: we added high availability (HA) and a replication manager that makes it easy to set up replica groups. Berkeley DB's replication doesn't partition the data; every node keeps an entire copy of the database. For consistency, there is a single node where writes are committed first - a master - then those changes are delivered to the replica nodes as log records. Applications can choose to wait until all nodes are consistent, or fire and forget, allowing Berkeley DB to eventually become consistent. Berkeley DB's HA scales out quite well for read-intensive applications and also effectively eliminates the central point of failure by allowing replica nodes to be elected (using a PAXOS algorithm) to mastership if the master should fail. This implementation covers a wide variety of use cases. MemcacheDB is a server that implements the Memcache network protocol but uses Berkeley DB for storage and HA to replicate the cache state across all the nodes in the cache group. Google Accounts, the user authentication layer for all Google properties, was until recently running Berkeley DB HA, and that scaled to a globally distributed system. That said, most NoSQL solutions try to partition (shard) data across nodes in the replication group, and some allow writes as well as reads at any node; Berkeley DB HA does not. So, is Berkeley DB a "NoSQL" solution? Not really, but it certainly is a component of many of the existing NoSQL solutions out there. Forgetting all the noise about how NoSQL solutions are complex distributed databases, when you boil them down to a single node you still have to store the data in some form of stable local storage. DBMs solved that problem a long time ago. NoSQL has more to do with the layers on top of the DBM: the distributed, sometimes-consistent, partitioned, scale-out storage layers that manage key/value or document sets and generally have some form of simple HTTP/REST-style network API. Does Berkeley DB do that? Not really. Is Berkeley DB a "NoSQL" solution today? Nope, but it's the most robust solution on which to build such a system. Re-inventing the node-local data storage isn't easy. A lot of people are starting to appreciate the sophisticated features found in Berkeley DB, and even mimic them in some cases. Could Berkeley DB grow into a NoSQL solution? Absolutely.
    Our key/value API could be extended over the net using any of a number of existing network protocols, such as memcache or HTTP/REST. We could adapt our node-local data partitioning to work out over replicated nodes. We even have a nice query language and a cost-based query optimizer in our BDB XML product that we could reuse were we to build out a document-based NoSQL-style product. XML and JSON are not so different that we couldn't adapt one to work with the other interchangeably. Without too much effort we could add what's missing; we could jump into this NoSQL market within a single product development cycle. Why isn't Berkeley DB already a NoSQL solution? Why aren't we working on it? Why indeed...
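    To ground the discussion, here is a minimal sketch of the node-local key/value cycle described above, using the Berkeley DB C API (the language the library itself is written in). It is an illustration only: the file name, key, and value are made up, error handling is pared down, and a real deployment would also open a DB_ENV configured for transactions, logging, and (for HA) replication.

        #include <string.h>
        #include <stdio.h>
        #include <db.h>   /* Berkeley DB C API */

        int main(void)
        {
            DB *dbp;
            DBT key, data;
            const char *k = "customer:42";
            const char *v = "hello, dbm";
            int ret;

            /* Create a database handle and open (or create) a btree-backed file. */
            if ((ret = db_create(&dbp, NULL, 0)) != 0) {
                fprintf(stderr, "db_create: %s\n", db_strerror(ret));
                return 1;
            }
            if ((ret = dbp->open(dbp, NULL, "example.db", NULL,
                                 DB_BTREE, DB_CREATE, 0664)) != 0) {
                dbp->err(dbp, ret, "open");
                dbp->close(dbp, 0);
                return 1;
            }

            /* Store one key/value pair. DBTs are just pointer+length descriptors. */
            memset(&key, 0, sizeof(key));
            memset(&data, 0, sizeof(data));
            key.data = (void *)k;
            key.size = (u_int32_t)strlen(k) + 1;
            data.data = (void *)v;
            data.size = (u_int32_t)strlen(v) + 1;
            if ((ret = dbp->put(dbp, NULL, &key, &data, 0)) != 0)
                dbp->err(dbp, ret, "put");

            /* Read it back by key. */
            memset(&data, 0, sizeof(data));
            if ((ret = dbp->get(dbp, NULL, &key, &data, 0)) == 0)
                printf("%s -> %s\n", (char *)key.data, (char *)data.data);
            else
                dbp->err(dbp, ret, "get");

            dbp->close(dbp, 0);
            return 0;
        }

    Every NoSQL layer discussed above ultimately rests on a loop like this at each node; the distributed, partitioned, REST-fronted machinery is what gets layered on top of it.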

    Read the article

  • CodePlex Daily Summary for Tuesday, October 25, 2011

    CodePlex Daily Summary for Tuesday, October 25, 2011Popular ReleasesScrum Task Board Card Creator: TaskCardCreator 2.5.0.0: What's New: UX improvement: Loading of work items is done in a worker thread Use memory, not the file system, when creating reports for better performance UI improvement: Better parent/child relation between the TeamProjectPicker and MainWindow Microsoft Visual Studio Scrum 1.0: Dedicated Impediment card added Supported Templates: Microsoft Visual Studio Scrum 1.0 Product Backlog Item, Task, Impediment, and Bug MSF for Agile Software Development v5.0 User Story, Task, and BugSubtitleTools: SubtitleTools 1.9: - Improved: Some DVD players need a mandatory UTF8-BOM (http://en.wikipedia.org/wiki/Byteordermark) at the beginning of the file to show RTL subtitles correctly.THE NVL Maker: The NVL Maker Ver 3.06: 3.06 ??? ??3.04,???CG MODE?? ??3.05—— ??????????????????????????????? ?????????config.tjs????????(??????????????????????????) ????config.tjs???,????????????,????????????Config.tjs。(???????,?????????,???KAGConfig??) ???????“????”??,??????????????????(Data) ??????????????????? ???????????,????????,??????????? ?fadeoutbgm?????????,???????????????,?????????? ????????,???“??????”??。(??????????macro_edu.ks??) ????????,????????A??????,???“??????”。 ?????????????,???????B??????,???“...Xomega Framework: Xomega.Framework 1.2: Release 1.2 of Xomega Framework refactors projects to allow generating assemblies for different target frameworks and profiles and allows deploying it as a NuGet package. It also includes some small fixes to the service and presentation layer classes.People's Note: People's Note 0.31: Added note tag editing. Changed note edit conflict resolution to keep the latest version. To install: copy the appropriate CAB file onto your WM device and run it.Windows Azure Toolkit for Windows Phone: Windows Azure Toolkit for Windows Phone v1.3.1: Upgraded Windows Azure projects to Windows Azure Tools for Microsoft Visual Studio 2010 1.5 – September 2011 Upgraded the tools tools to support the Windows Phone Developer Tools RTW Update SQL Azure only scenarios to use ASP.NET Universal Providers (through the System.Web.Providers v1.0.1 NuGet package) Changed Shared Access Signature service interface to support more operations Refactored Blobs API to have a similar interface and usage to that provided by the Windows Azure SDK Stor...xUnit.net Contrib: xunitcontrib-resharper 0.4.4 (dotCover): xunitcontrib release 0.4.4 (ReSharper runner) This release provides a test runner plugin for Resharper 6.0 RTM, targetting all versions of xUnit.net. (See the xUnit.net project to download xUnit.net itself.) This release addresses the following issues:Support for dotCover code coverage 4132 Note that this build work against ALL VERSIONS of xunit. The files are compiled against xunit.dll 1.8 - DO NOT REPLACE THIS FILE. Thanks to xunit's version independent runner system, this package can r...BookShop: BookShop: BookShop WP7 clientRibbon Editor for Microsoft Dynamics CRM 2011: Ribbon Editor (0.1.2122.266): Added CodePlex and PayPal links New icon Bug fix: can't connect to an IFD deployment when the discovery service url has been customizedDotNet.Framework.Common: DotNet.Framework.Common 4.0: ??????????,????????????XML Explorer: XML Explorer 4.0.5: Changes in 4.0.5: Added 'Copy Attribute XPath to Address Bar' feature. Added methods for decoding node text and value from Base64 encoded strings, and copying them to the clipboard. 
Added 'ChildNodeDefinitions' to the options, which allows for easier navigation of parent-child and ID-IDREF relationships. Discovery happens on-demand, as nodes are expanded and child nodes are added. Nodes can now have 'virtual' child nodes, defined by an xpath to select an identifier (usually relative to ...Media Companion: MC 3.419b Weekly: A couple of minor bug fixes, but the important fix in this release is to tackle the extremely long load times for users with large TV collections (issue #130). A note has been provided by developer Playos: "One final note, you will have to suffer one final long load and then it should be fixed... alternatively you can delete the TvCache.xml and rebuild your library... The fix was to include the file extension so it doesn't have to look for the video file (checking to see if a file exists is a...CODE Framework: 4.0.11021.0: This build adds a lot of our WPF components, including our MVVC and MVC components as well as a "Metro" and "Battleship" style.GridLibre para Visual FoxPro: GridLibre para Visual FoxPro v3.5: GridLibre Para Visual FoxPro: esta herramienta ayudara a los usuarios y programadores en los manejos de los datos, como Filtrar, multiseleccion y el autoformato a las columnas como la asignacion del controlsource.WiX Toolset: WiX v3.6 Beta: First beta release of WiX v3.6. The primary focus is on Burn but there are also many small bug fixes to the core toolset. For more information see: http://robmensching.com/blog/posts/2011/10/24/WiX-v3.6-Beta-releasedUmbraco CMS: Umbraco 5.0 CMS Alpha 3: Umbraco 5 Alpha 3Umbraco 5 (aka Jupiter) will be the next version of everyone's favourite, friendly ASP.NET CMS that already powers over 100,000 websites worldwide. Try out the Alpha of v5 today! If you're new to Umbraco and would like to get a low-down on our popular and easy-to-learn approach to content management, check out our intro video. What's Alpha 3?This is our third Alpha release. It's intended for developers looking to become familiar with the codebase & architecture, or for thos...Vkontakte WP: Vkontakte: source codeWay2Sms Applications for Android, Desktop/Laptop & Java enabled phones: Way2SMS Desktop App v2.0: 1. Fixed issue with sending messages due to changes to Way2Sms site 2. Updated the character limit to 160 from 140GART - Geo Augmented Reality Toolkit: 1.0.1: About Release 1.0.1 Release 1.0.1 is a service release that addresses several issues and improves performance. As always, check the Documentation tab for instructions on how to get started. If you don't have the Windows Phone SDK yet, grab it here. Breaking Change Please note: There is a breaking change in this release. As noted below, the WorldCalculationMode property of ARItem has been replaced by a user-definable function. ARItem is now automatically wired up with a function that perform...Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.32: Fix for issue #16710 - string literals in "constant literal operations" which contain ASP.NET substitutions should not be considered "constant." Move the JS1284 error (Misplaced Function Declaration) so it only fires when in strict mode. 
I got a couple complaints that people didn't like that error popping up in their existing code when they could verify that the location of that function, although not strict JS, still functions as expected cross-browser.New ProjectsApiDiff.Net: API Difference/Reporting project written in C# using the wonderful Cecil engine (http://www.mono-project.com/Cecil) to show differences between versions of a public API. Something like the unsupported MS tool "libcheck"Aspect Oriented Programming with .Net: Demo project using PostSharp for AOP in .Net.CCBBAA: CCBBAACdf.Iris.MiniProjetCalculateur: Application Client / Serveur calculateur en langage CCmjSiverlight: a silverlight libraryCSharpKaartSpel: Super Kaart Spel.Developer Team Article System Management: Developer Team Article System Management Makes it easier for Authors to publish their articles . It developed in VB.NET .DirectoryMonitor: DirectoryMonitorDNN Administrador de Tareas: Un Proyecto de codigo abierto Domek: Taki sobie domekEDXGraphics: This is Edward(Shiqiu) Liu's personal code repository. Projects here are primarily about Graphics, AI and Algorithms. For more introduction, please visit [url:www.edxgraphics.com] Enterprise SSIS Framework: The Enterprise SSIS Framework is a project intended to simplify the management of an organization’s administration and ETL assets by abstracting the coordination and control flow into metadata. This project consists of the four parts: - Core SSIS Packages: A set of packages that are responsible for workflow within the framework - SQL Server 2008 DB: Holds all the metadata necessary to drive the application - Administration App: A web-based front end to facilitate managing assets running ...FIM Task Sequencer "RunJob": Runjob is a simple batch controller for FIM (MIIS and ILM) for starting and monitoring the execution of FIM Run Profiles that allows for complex (looping and branching) sequences to be put together with a simple XML 'task' file. Key features include: * Works with MIIS / ILM / FIM * Optionally Connects to the SQL database to verify that run profiles exist * Can execute arbitrary tasks via a command line. * Based on the results of command line or previous run history can loop and jump ...Ham Bone Soup amateur radio software and electronic log: Ham Bone Soup is a open source software written for satisfying the needs of Amateur Radio operators. The central feature is an especially flexible logging system for acommodating contests and general purpose logs, but will grow to include many other vital Amateur radio features. It doesn't do anything yet, we are just geting started.H? th?ng qu?n lý chi tiêu - windows phone: H? th?ng qu?n lý chi tiêu - windows phoneJavascript Editor for SharePoint: Javascript Editor for SharePoint is a lightweight, in-browser editor that lets you quickly prototype and test custom javascript code in SharePointLegoWeb: Open source Web CMS base on ASP.NET Webparts + MARCXML metadata: LegoWeb is an open source web content management solution developed base on combination of ASP.NET 2.0 Webparts and MARCXML Metadata. It is very simple and very flexible. LuaXna: An simple engine that allow to devolop in LUA for XNA. It have logging feature and cycles.Manager Game: Student project for game developement course. 
Using XNA 4.0 for Windows platform.Metroed: A Metro version of the PhoneyTools (a WP7 toolkit) for use with C#/VB WinRT applications.My Masters Sample Project for Sharepoint 2010: The feature stabling example project which is provide to deploy custom master page to personel sites on Sharepoint 2010 The solution is anwering fallowing questions : * How to deploy a custom master page ? * How to customize a masterpage ? * How to attach custom master page to personal sites using stabling feature ? NDepend TFS 2010 integration: NDepend TFS 2010 integration provides build activities, a build report customization addin, and extends the TFS Warehouse in order to leverage NDepend quality statistics for your projects. Open Source Renderer: Open Source RendererPearTunes: Peartunes is a university project.Plugin Framework Web: Lighweight plugin framework for web applicationsQTP TFS Generic Test Integration: In Microsoft Visual Studio 2010 there is a Generic Test type that allows you to integrate with automation tools like QTP. I have created a pretty simple solution that allows you to use your existing QTP automation scripts within the Microsoft Visual Studio testing framework and here's a sample below. The key part of this solution is transforming HP's QTP format to Microsoft’s Generic Test format so that you can publish the results to TFS. The added benefit of this integration is that you c...RequiemDream: the project for manage workflow records reexecute to helping developers for debugging.Rift Addon Studio 2010: Rift AddOn Studio (AOS) is an open source development environment designed to bring a Visual Studio-like experience to building Rift AddOns. For more information on exactly what AOS does, check out the list of features below. Small Database Tools: This is a project for me to create small tools I created for database related functions.SQL Azure Membership, Profile, Role provider starter kit for MVC3 project: When you use out of box MVC 3 site template the Membership, Profile and Role provider DB is created as attached MDF file by the name AspNetDB.mdf. This project has needed steps to easily migrate this DB into SQL Azure. This is a complete MVC3 Starter site project and could be used for your next 'Big Thing' as soon as properly setup. Enjoy :)SQL Server BareMetal Hands-on-Labs: The SQL Server BareMetal Hands-on-Labs project allows you to build a SQL Server Hands-on-Lab Enviroment from scratch, using Windows Server 2008 R2 SP1 Hyper-V. State Theater Website: Building a website designed to provide information about live theater throughout a state. Basically, it a port of NJTheater.com from classic ASP to Asp.Net making it customizable to any state along the way.Static web generator: Zoltar let you generate your website (blog, personl website,etc) from a set of md files. Ukulele Chord Finder: Ukulele Chord FinderUser Profile Cleaner: This application allows you to manage via GUI or command line user profiles, removed the profiles no longer used by a number of days. The application allows you to set a list of profiles that can be excluded from deletion. Virtual Visit: This project have made the virtual visite for your siteWP7 Open source project collection by eLite: ??Windows Phone????????zebrawebservice: ??AWB,??OPS??

    Read the article

  • T-SQL Tuesday #33: Trick Shots: Undocumented, Underdocumented, and Unknown Conspiracies!

    - by Most Valuable Yak (Rob Volk)
    Mike Fal (b | t) is hosting this month's T-SQL Tuesday on Trick Shots.  I love this choice because I've been preoccupied with sneaky/tricky/evil SQL Server stuff for a long time and have been presenting on it for the past year.  Mike's directives were "Show us a cool trick or process you developed…It doesn’t have to be useful", which most of my blogging definitely fits, and "Tell us what you learned from this trick…tell us how it gave you insight in to how SQL Server works", which is definitely a new concept.  I've done a lot of reading and watching on SQL Server Internals and even attended training, but sometimes I need to go explore on my own, using my own tools and techniques.  It's an itch I get every few months, and, well, it sure beats workin'.

    I've found some people to be intimidated by SQL Server's internals, and I'll admit there are A LOT of internals to keep track of, but there are tons of excellent resources that clearly document most of them, and show how knowing even the basics of internals can dramatically improve your database's performance.  It may seem like rocket science, or even brain surgery, but you don't have to be a genius to understand it. Although being an "evil genius" can help you learn some things they haven't told you about. ;)

    This blog post isn't a traditional "deep dive" into internals, it's more of an approach to find out how a program works.  It utilizes an extremely handy tool from an even more extremely handy suite of tools, Sysinternals.  I'm not the only one who finds Sysinternals useful for SQL Server: Argenis Fernandez (b | t), Microsoft employee and former T-SQL Tuesday host, has an excellent presentation on how to troubleshoot SQL Server using Sysinternals, and I highly recommend it.  Argenis didn't cover the Strings.exe utility, but I'll be using it to "hack" the SQL Server executable (DLL and EXE) files.

    Please note that I'm not promoting software piracy or applying these techniques to attack SQL Server via internal knowledge. This is strictly educational and doesn't reveal any proprietary Microsoft information.  And since Argenis works for Microsoft and demonstrated Sysinternals with SQL Server, I'll just let him take the blame for it. :P (The truth is I've used Strings.exe on SQL Server before I ever met Argenis.)

    Once you download and install Strings.exe you can run it from the command line.  For our purposes we'll want to run this in the Binn folder of your SQL Server instance (I'm referencing SQL Server 2012 RTM):

        cd "C:\Program Files\Microsoft SQL Server\MSSQL11\MSSQL\Binn"
        C:\Program Files\Microsoft SQL Server\MSSQL11\MSSQL\Binn> strings *sql*.dll > sqldll.txt
        C:\Program Files\Microsoft SQL Server\MSSQL11\MSSQL\Binn> strings *sql*.exe > sqlexe.txt

    I've limited myself to DLLs and EXEs that have "sql" in their names.  There are quite a few more but I haven't examined them in any detail. (Homework assignment for you!) If you run this yourself you'll get 2 text files, one with all the extracted strings from every SQL DLL file, and the other with the SQL EXE strings.  You can open these in Notepad, but you're better off using Notepad++, EditPad, Emacs, Vim or another more powerful text editor, as these will be several megabytes in size.

    And when you do open it…you'll find…a TON of gibberish.  (If you think that's bad, just try opening the raw DLL or EXE file in Notepad.  And by the way, don't do this in production, or even on a running instance of SQL Server.)
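    As an aside, the technique itself is simple enough to sketch. The little C program below is not how Strings.exe is implemented (the real tool also handles Unicode strings, offsets, and a number of options); it is just a hypothetical, minimal illustration of what "extracting strings" from a binary means: scan the bytes and keep any run of printable characters longer than a few bytes.

        #include <stdio.h>
        #include <ctype.h>

        /* Minimal strings-style extractor: print printable ASCII runs of
         * MIN_LEN or more characters found in an arbitrary binary file.
         * Usage: ministrings sqlmin.dll > sqlmin.txt */
        #define MIN_LEN 4

        int main(int argc, char *argv[])
        {
            FILE *f;
            int c;
            char buf[4096];
            size_t len = 0;

            if (argc != 2) {
                fprintf(stderr, "usage: %s <binary-file>\n", argv[0]);
                return 1;
            }
            if ((f = fopen(argv[1], "rb")) == NULL) {
                perror(argv[1]);
                return 1;
            }

            while ((c = fgetc(f)) != EOF) {
                if (isprint(c) && len < sizeof(buf) - 1) {
                    buf[len++] = (char)c;          /* extend the current run */
                } else {
                    if (len >= MIN_LEN) {          /* flush a long-enough run */
                        buf[len] = '\0';
                        puts(buf);
                    }
                    len = 0;
                }
            }
            if (len >= MIN_LEN) {                  /* flush the final run */
                buf[len] = '\0';
                puts(buf);
            }
            fclose(f);
            return 0;
        }

    Point this (or the real Strings.exe) at a binary, redirect the output to a text file, and the rest of the post is just searching that text.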
    Even if you don't clean up the file, you can still use your editor's search function to find a keyword like "SELECT" or some other item you expect to be there.  As dumb as this sounds, I sometimes spend my lunch break just scanning the raw text for anything interesting.  I'm boring like that.

    Sometimes though, having these files available can lead to some incredible learning experiences.  For me the most recent time was after reading Joe Sack's post on non-parallel plan reasons.  He mentions a new SQL Server 2012 execution plan element called NonParallelPlanReason, and demonstrates a query that generates "MaxDOPSetToOne".  Joe (formerly on the Microsoft SQL Server product team, so he knows this stuff) mentioned that this new element was not currently documented and tried a few more examples to see what other reasons could be generated.

    Since I'd already run Strings.exe on the SQL Server DLLs and EXE files, it was easy to run grep/find/findstr for MaxDOPSetToOne on those extracts.  Once I found which file it belonged to (sqlmin.dll) I opened the text to see if the other reasons were listed.  As you can see in my comment on Joe's blog, there were about 20 additional non-parallel reasons.  And while it's not "documentation" of this underdocumented feature, the names are pretty self-explanatory about what can prevent parallel processing.  I especially like the ones about cursors (more ammo!) and am curious about the PDW compilation and Cloud DB replication reasons.

    One reason completely stumped me: NoParallelHekatonPlan.  What the heck is a hekaton?  Google and Wikipedia were vague, and the top results were not in English.  I found one reference to Greek, stating "hekaton" can be translated as "hundredfold"; with a little more Wikipedia-ing this leads to hecto, the prefix for "one hundred" as a unit of measure.  I'm not sure why Microsoft chose hekaton for such a plan name, but having already learned some Greek I figured I might as well dig some more in the DLL text for hekaton.  Here's what I found:

        hekaton_slow_param_passing
        Occurs when a Hekaton procedure call dispatch goes to slow parameter passing code path
        The reason why Hekaton parameter passing code took the slow code path
        hekaton_slow_param_pass_reason
        sp_deploy_hekaton_database
        sp_undeploy_hekaton_database
        sp_drop_hekaton_database
        sp_checkpoint_hekaton_database
        sp_restore_hekaton_database
        e:\sql11_main_t\sql\ntdbms\hekaton\sqlhost\sqllang\hkproc.cpp
        e:\sql11_main_t\sql\ntdbms\hekaton\sqlhost\sqllang\matgen.cpp
        e:\sql11_main_t\sql\ntdbms\hekaton\sqlhost\sqllang\matquery.cpp
        e:\sql11_main_t\sql\ntdbms\hekaton\sqlhost\sqllang\sqlmeta.cpp
        e:\sql11_main_t\sql\ntdbms\hekaton\sqlhost\sqllang\resultset.cpp

    Interesting!  The first 4 entries (in red) mention parameters and "slow code".  Could this be the foundation of the mythical DBCC RUNFASTER command?  Have I been passing my parameters the slow way all this time? And what about those sp_xxxx_hekaton_database procedures (in blue)? Could THEY be the secret to a faster SQL Server? Could they promise a "hundredfold" improvement in performance?  Are these special, super-undocumented DIB (databases in black)?
    I decided to look in the SQL Server system views for any objects with hekaton in the name, or references to them, in hopes of discovering some new code that would answer all my questions:

        SELECT name FROM sys.all_objects WHERE name LIKE '%hekaton%'
        SELECT name FROM sys.all_objects WHERE object_definition(OBJECT_ID) LIKE '%hekaton%'

    Which revealed:

        name
        ------------------------
        (0 row(s) affected)

        name
        ------------------------
        sp_createstats
        sp_recompile
        sp_updatestats
        (3 row(s) affected)

    Hmm.  Well that didn't find much.  Looks like these procedures are seriously undocumented, unknown, perhaps forbidden knowledge. Maybe a part of some unspeakable evil? (No, I'm not paranoid, I just like mysteries and thought that punching this up with that kind of thing might keep you reading.  I know I'd fall asleep without it.)

    OK, so let's check out those 3 procedures and see what they reveal when I search for "Hekaton":

    sp_createstats:

        -- filter out local temp tables, Hekaton tables, and tables for which current user has no permissions
        -- Note that OBJECTPROPERTY returns NULL on type="IT" tables, thus we only call it on type='U' tables

    OK, that's interesting, let's go looking down a little further:

        ((@table_type<>'U') or (0 = OBJECTPROPERTY(@table_id, 'TableIsInMemory'))) and -- Hekaton table

    Wellllll, that tells us a few new things:

    - There's such a thing as Hekaton tables (UPDATE: I'm not the only one to have found them!)
    - They are not standard user tables and probably not in memory. UPDATE: I misinterpreted this because I didn't read all the code when I wrote this blog post.
    - The OBJECTPROPERTY function has an undocumented TableIsInMemory option

    Let's check out sp_recompile:

        -- (3) Must not be a Hekaton procedure.

    And once again go a little further:

        if (ObjectProperty(@objid, 'IsExecuted') <> 0 AND
            ObjectProperty(@objid, 'IsInlineFunction') = 0 AND
            ObjectProperty(@objid, 'IsView') = 0 AND
            -- Hekaton procedure cannot be recompiled
            -- Make them go through schema version bumping branch, which will fail
            ObjectProperty(@objid, 'ExecIsCompiledProc') = 0)

    And now we learn that hekaton procedures also exist, they can't be recompiled, there's a "schema version bumping branch" somewhere, and OBJECTPROPERTY has another undocumented option, ExecIsCompiledProc.  (If you experiment with this you'll find this option returns null, I think it only works when called from a system object.) This is neat!

    Sadly sp_updatestats doesn't reveal anything new, the comments about hekaton are the same as sp_createstats.  But we've ALSO discovered undocumented features for the OBJECTPROPERTY function, which we can now search for:

        SELECT name, object_definition(OBJECT_ID) FROM sys.all_objects WHERE object_definition(OBJECT_ID) LIKE '%OBJECTPROPERTY(%'

    I'll leave that to you as more homework.  I should add that searching the system procedures was recommended long ago by the late, great Ken Henderson, in his Guru's Guide books, as a great way to find undocumented features.  That seems to be really good advice!

    Now if you're a programmer/hacker, you've probably been drooling over the last 5 entries for hekaton (in green), because these are the names of source code files for SQL Server!  Does this mean we can access the source code for SQL Server?  As The Oracle suggested to Neo, can we return to The Source??? Actually, no. Well, maybe a little bit.
    While you won't get the actual source code from the compiled DLL and EXE files, you'll get references to source files, debugging symbols, variables and module names, error messages, and even the startup flags for SQL Server.  And if you search for "DBCC" or "CHECKDB" you'll find a really nice section listing all the DBCC commands, including the undocumented ones.  Granted those are pretty easy to find online, but you may be surprised what those web sites DIDN'T tell you! (And neither will I, go look for yourself!)  And as we saw earlier, you'll also find execution plan elements, query processing rules, and who knows what else.  It's also instructive to see how Microsoft organizes their source directories, how various components (storage engine, query processor, Full Text, AlwaysOn/HADR) are split into smaller modules. There are over 2000 source file references, go do some exploring!

    So what did we learn?  We can pull strings out of executable files, search them for known items, browse them for unknown items, and use the results to examine internal code to learn even more things about SQL Server.  We've even learned how to use command-line utilities!  We are now 1337 h4X0rz!  (Not really.  I hate that leetspeak crap.)

    Although, I must confess I might've gone too far with the "conspiracy" part of this post.  I apologize for that, it's just my overactive imagination.  There's really no hidden agenda or conspiracy regarding SQL Server internals.  It's not The Matrix.  It's not like you'd find anything like that in there:

        Attach Matrix Database
        DM_MATRIX_COMM_PIPELINES
        MATRIXXACTPARTICIPANTS
        dm_matrix_agents

    Alright, enough of this paranoid ranting!  Microsoft are not really evil!  It's not like they're The Borg from Star Trek:

        ALTER FEDERATION DROP
        ALTER FEDERATION SPLIT
        DROP FEDERATION

    #tsql2sday

    Read the article

< Previous Page | 14 15 16 17 18 19  | Next Page >