Search Results

Search found 12502 results on 501 pages for 'kind'.


  • Visual Studio 2010 RC – Silverlight 4 and WCF RIA Services Development - Updates from MIX Anno

    - by Harish Ranganathan
    MIX is happening and there is a lot of excitement around the various releases, such as the Windows Phone 7 Developer Preview, the IE9 Platform Preview and a few other announcements that have been made.  Clearly, the Windows Phone 7 Developer Preview has generated the most interest and opened a plethora of opportunities for .NET developers.  It also takes mobile development to a new generation and doesn't force developers to learn a different programming language. Along with this, a few other releases are out.  The much-anticipated Silverlight 4 RC is out, and its corresponding templates are also out there for you to download.  When VS 2010 RC was released, it was something of a disappointment that it didn't support SL4 development or SL4 Business Application Development (a.k.a. WCF RIA Services).  There were a few workarounds, though nothing concrete.  Earlier I had written about how the WCF RIA Services Preview does work with ASP.NET development using VS 2010 RC. However, with the release of the SL4 RC and the corresponding tooling updates, one can develop for both SL4 and SL4 + WCF RIA Services using VS 2010 RC.  This is kind of important and keeps the continuum going until VS 2010 RTMs.  So, the purpose of this post is to quickly give the updates and links to install the relevant tools:
    - Silverlight 4 RC Runtime (Windows Runtime or the Mac Runtime)
    - Silverlight 4 RC Developer Tools for Visual Studio 2010 RC (Silverlight 4 Tools for Visual Studio 2010 - this installs the Runtime automatically)
    - Expression Blend 4 Beta
    If you install the SL4 RC Developer Tools, they also install the WCF RIA Services Preview automatically.  You just need to install the WCF RIA Services Toolkit, which can be downloaded from "Install the WCF RIA Services Toolkit". Of course, you can also install WCF RIA Services for VS 2010 RC separately (without the SL4 Tools) from here. Kindly note, all the above-mentioned links are with respect to the Visual Studio 2010 RC edition.  If you are developing with VS 2008, then you can just target SL3 (as I write this, there seems to be no official support for developing SL4 with VS 2008) and the related tools can be downloaded from http://www.silverlight.net/getstarted/ Basically you need to download the SL3 Runtime, SDK, Expression Blend 3 and the Silverlight Toolkit.  All the links for the downloads are available on the above-mentioned page. Also, a version of WCF RIA Services that is supported in VS 2008 is available for download at "WCF RIA Services Beta for VS 2008". I know there are far too many things to keep in mind.  So, I put together a flowchart that could help depict it pictorially.  Note that this is just my own imagination and doesn't cover all scenarios.  For example, if you are developing for neither WebForms nor Silverlight, you end up nowhere, whereas in an actual scenario you may want to develop Desktop, Services, Console, Game and what not.  So, keep in mind this is just Web. Cheers !!!

    Read the article

  • Book Review (Book 10) - The Information: A History, a Theory, a Flood

    - by BuckWoody
    This is a continuation of the books I challenged myself to read to help my career - one a month, for a year. You can read my first book review here, and the entire list is here. The book I chose for March 2012 was: The Information: A History, a Theory, a Flood by James Gleick. I was traveling at the end of last month so I'm a bit late posting this review here. Why I chose this book: My personal belief about computing is this: All computing technology is simply re-arranging data. We take data in, we manipulate it, and we send it back out. That's computing. I had heard from some folks about this book and its treatment of data. I heard that it dealt with the basics of data - and the semantics of data, information and so on. It also deals with the earliest forms of the history of information, which fascinates me. It's similar, I was told, to GEB, which is a favorite book of mine as well, so that was a bonus. Some folks I talked to liked it, some didn't - so I thought I would check it out. What I learned: I liked the book. It was longer than I thought - it took quite a while to read, even though I tend to read quickly. This is the kind of book you take your time with. It does in fact deal with the earliest forms of human interaction and the basics of data. I learned, for instance, that the genesis of the binary communication system is based in the invention of telegraph (far-writing) codes, and that the earliest forms of communication were expensive. In fact, many ciphers were invented not to hide military secrets, but to compress information. A sort of early "lol-speak" to keep the cost of transmitting data low! I think the comparison with GEB is a bit over-reaching. GEB is far more specific, fanciful and so on. In fact, this book felt more like something from Richard Dawkins, and tended to wander around the subject quite a bit. I imagine the author doing his research and writing each chapter as a book that followed on from the last one. This is what possibly bothered those who tended not to like it, I think. Towards the middle of the book, I think the author tended to be a bit too fragmented even for me. He began to delve into memes, biology and more - I think he might have been better off breaking that off into another work. The existentialism just seemed jarring. All in all, I liked the book. I recommend it to any technical professional, especially ones involved with data technology. And isn't that all of us? :)

    Read the article

  • debootstrap or virt-install Ubuntu Server Maverick fails

    - by poelinca
    OK, so running any kind of variation of debootstrap I get the following error: I: Extracting zlib1g... W: Failure trying to run: chroot /lxc/iso/dodo mount -t proc proc /proc debootstrap.log: mount: permission denied If I manually chroot into the directory then I get prompted with: id: cannot find name for group ID 0 I have no name!@...# I tried addgroup but it's not installed, apt-get/aptitude: command not found, so I can't do anything with it. I've tried ubuntu-vm-builder but since it's calling debootstrap I get the same error. Played with it for a few days and then I stopped and gave virt-install a try; everything works until I get to the console to finish the install, which shows only: Escape character is ^] and nothing more, no matter what I type. So basically what I'm trying to do is build a usable chroot system so I can use it with lxc or libvirt. What are my options to get containers/virtualisation up and running? I've read somewhere that I can use OpenVZ templates with lxc or libvirt? But how? Let me know if you need additional info. (P.S. I'm doing all this on a dedicated server, so I can't access it by hand, only over ssh; plus, on my local PC running Ubuntu Desktop Maverick everything works.) EDIT: Getting closer - I managed to understand how to use an OpenVZ template with lxc; now the problem comes with the network bridge: lxc-start: invalid interface name: br0 # Use same bridge device used in your controlling host setup lxc-start: failed to process 'lxc.network.link = br0 # Use same bridge device used in your controlling host setup ' lxc-start: failed to read configuration file I followed the exact steps to create a bridge, and the lxc conf looks like: lxc.network.type = veth lxc.network.flags = up lxc.network.link = br0 # Use same bridge device used in your controlling host setup lxc.network.hwaddr = {a1:b2:c3:d4:e5:f6} # As appropriate (line only needed if you wish to dhcp later) lxc.network.ipv4 = {10.0.0.100} # (Use 0.0.0.0 if you wish to dhcp later) lxc.network.name = eth0 # could likely be whatever you want Since it's not working I know something is wrong, so could somebody guide me? EDIT: Looks like the base install was using a custom kernel (bzImage-2.6.34.6-xxxx-grs-ipv6-65) for which I didn't find the headers. I did an update-grub after I installed a new kernel and edited menu.lst, and now it's using 2.6.35-23-server, and now debootstrap is working just fine, same as ubuntu-vm-builder.

    Read the article

  • Scan Your Thumb Drive for Viruses from the AutoPlay Dialog

    - by Mysticgeek
    It's always a good idea to scan someone's flash drive for viruses when you use it on your PC. Today we look at how to use Microsoft Security Essentials to scan thumb drives via the AutoPlay dialog. Editor Note: This technique was created by our friend Ramesh Srinivasan from the winhelponline tech blog. If you haven't done so already, download and install Microsoft Security Essentials (link below), which has earned the How-To Geek official endorsement. Next, download mseautoplay.zip (link below). Unzip the file to view its contents. Then move the msescan.vbs script file into the Windows directory. Next, double-click on the mseautoplay.reg file… Click Yes to the warning dialog window asking if you're sure you want to add to the registry. After it's added you'll get a confirmation message…click OK. Now when you pop in a thumb drive and AutoPlay comes up, you will have the option to scan it with MSE first. MSE starts the scan of the thumb drive…   You can use this to scan any removable media. Here is an example of the ability to scan a DVD with MSE before opening any files. You can also go into Control Panel and set it as a default option of AutoPlay. Open Control Panel, View by Large icons, and click on AutoPlay. Notice that now when you go to change the default options for different types of media, scanning with MSE is included in the dropdown lists. Remove Settings: If you want to remove the MSE AutoPlay handler, Ramesh was kind enough to create an undo registry file. Double-click on undo.reg from the original MSE AutoPlay folder and click Yes to the message to remove the setting. Then you will need to go into the Windows directory and manually delete the msescan.vbs script file. This is an awesome trick which will allow you to scan your thumb drives and other removable media from the AutoPlay dialog. We tested it out on XP, Vista, and Windows 7 and it works perfectly on each one. Download mseautoplay.zip Download Microsoft Security Essentials Read Our Review of MSE

    Read the article

  • Easy Scaling in XAML (WPF)

    - by Robert May
    Ran into a problem that needed solving that was kind of fun.  I’m not a XAML guru, and I’m sure there are better solutions, but I thought I’d share mine. The problem was this:  Our designer had, appropriately, designed the system for a 1920 x 1080 screen resolution.  This is for a full screen, touch screen device (think Kiosk), which has that resolution, but we also wanted to demo the device on a tablet (currently using the AWESOME Samsung tablet given out at Microsoft Build).  When you’d run it on that tablet, things were ugly because it was at a lower resolution than the target device. Enter scaling.  I did some research and found out that I probably just need to monkey with the LayoutTransform of some grid somewhere.  This project is using MVVM and has a navigation container that we built that lives on a single root view.  User controls are then loaded into that view as navigation occurs. In the parent grid of the root view, I added the following XAML: <Grid.LayoutTransform> <ScaleTransform ScaleX="{Binding ScaleWidth}" ScaleY="{Binding ScaleHeight}" /> </Grid.LayoutTransform> And then in the root View Model, I added the following code: /// <summary> /// The required design width /// </summary> private const double RequiredWidth = 1920; /// <summary> /// The required design height /// </summary> private const double RequiredHeight = 1080; /// <summary>Gets the ActualHeight</summary> public double ActualHeight { get { return this.View.ActualHeight; } } /// <summary>Gets the ActualWidth</summary> public double ActualWidth { get { return this.View.ActualWidth; } } /// <summary> /// Gets the scale for the height. /// </summary> public double ScaleHeight { get { return this.ActualHeight / RequiredHeight; } } /// <summary> /// Gets the scale for the width. /// </summary> public double ScaleWidth { get { return this.ActualWidth / RequiredWidth; } } Note that View.ActualWidth and View.ActualHeight are just pointing directly at FrameworkElement.ActualWidth and FrameworkElement.ActualHeight. That’s it.  Just calculate the ratio and bind the scale transform to it. Hopefully you’ll find this useful. Technorati Tags: WPF,XAML
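    One gap worth noting, offered as a hedged addition rather than part of the original post: WPF only re-evaluates the ScaleWidth and ScaleHeight bindings if the view model raises change notifications when the view is resized. A minimal sketch of that wiring, assuming the root view model implements INotifyPropertyChanged and can reach the root view (the SizeChanged hook and the RaisePropertyChanged helper are my assumptions):

        // Sketch only: hook the root view's SizeChanged event so bindings to
        // ScaleWidth/ScaleHeight re-read ActualWidth/ActualHeight after a resize.
        // RaisePropertyChanged is assumed to be the view model's
        // INotifyPropertyChanged helper; the property names match the post.
        public void AttachResizeNotifications(System.Windows.FrameworkElement rootView)
        {
            rootView.SizeChanged += (sender, args) =>
            {
                RaisePropertyChanged("ScaleWidth");
                RaisePropertyChanged("ScaleHeight");
            };
        }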

    Read the article

  • SharePoint 2010 Data Retrieval Techniques

    - by Jayant Sharma
    In SharePoint, we have two options to perform CRUD operations:
    1. using server-side code
    2. using client-side code
    Using server-side code, we have:
    1. CAML
    2. LINQ
    Using client-side code, we have:
    1. Client Object Model
       1.1. Managed Client Object Model
       1.2. Silverlight Client Object Model
       1.3. ECMA Client Object Model
    2. SharePoint Web Services
    3. ADO.NET Data Services (based on REST web services)
    4. RPC calls (owssvr.dll)
    Which of these options is used, and when, depends upon requirements. Every option has certain advantages and disadvantages, so before starting development of any new SharePoint project, it is important to understand the limitations of the different methods. The Server Object Model is used when our application is hosted on the same server on which SharePoint is installed, while client-side code is used to access SharePoint from a client system. In SharePoint 2010, the Client Object Model (COM) was introduced specifically to perform SharePoint operations from a client system.
    Advantages of CAML:
    - It is fast.
    - It can be used from all kinds of technologies, like Silverlight or jQuery.
    - You can use the U2U CAML Query Builder to generate CAML queries.
    Disadvantages of CAML:
    - Error-prone, as we can detect errors only at runtime.
    Advantages of LINQ:
    - Object-oriented technique (object-relational model).
    - The LINQ to SharePoint provider works with strongly typed list item objects, so IntelliSense is available.
    - No knowledge of CAML is needed.
    - Less error-prone, as it uses C# syntax.
    - You can compare two fields of a SharePoint list.
    Disadvantages of LINQ:
    - List attachments are not supported by the SPMetal tool.
    - The Created By, Created, Modified and Modified By fields are not created by the SPMetal tool.
    - Custom fields are not created by the SPMetal tool.
    - External lists are not supported.
    - Since LINQ generates a CAML query at the back end, it is slower than using CAML directly in code.
    Advantages of the Client Object Model:
    - Used to access SharePoint from a client system.
    - No web server is required at the client end.
    - Can use Silverlight and JavaScript to build a better, faster user experience.
    Disadvantages of the Client Object Model:
    - You cannot use RunWithElevatedPrivileges.
    - Cross-site-collection queries are not possible.
    - Fewer APIs are available.
    ADO.NET Data Services:
    - Only list-based operations are possible; other types of operations are not.
    SharePoint Web Services and RPC calls:
    - These were used in SharePoint 2007, but after the introduction of the Client Object Model, Microsoft recommends not using web services to fetch data from SharePoint. In SharePoint 2010 they are available only for backward compatibility.
    Ref: http://msdn.microsoft.com/en-us/library/ee539764 Jayant Sharma
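    For illustration, here is a minimal server object model CAML query of the kind contrasted with LINQ above. This is a sketch rather than code from the post: the site URL, list name, and field name are hypothetical, and it assumes a reference to Microsoft.SharePoint.

        // Minimal CAML sketch ("Tasks" and "Status" are invented names).
        // Note the query is a plain string, so a typo in it only surfaces
        // at runtime - the error-proneness described above.
        using (SPSite site = new SPSite("http://server/sites/demo"))
        using (SPWeb web = site.OpenWeb())
        {
            SPList list = web.Lists["Tasks"];
            SPQuery query = new SPQuery();
            query.Query =
                "<Where><Eq><FieldRef Name='Status'/>" +
                "<Value Type='Text'>Open</Value></Eq></Where>";
            foreach (SPListItem item in list.GetItems(query))
                Console.WriteLine(item.Title);
        }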

    Read the article

  • java+netbeans+mysql+ubuntu 64 bits LTS VS C#+MS SQL for fast develop trading system

    - by crunchor
    I am going to build a trading system and use it with the API of the broker "Interactive Brokers". The API supports C++ Socket, Java Socket, DDE and ActiveX APIs on Windows. The API supports a Java Socket API and a POSIX C++ Socket API on Unix-like systems such as Ubuntu. My trading system has some long real-time calculations to do and a lot of maths for backtests. I am using the retail trading program Amibroker, which is written in C++, and I run it on Windows XP 32-bit; it takes me days to do one serious backtest with my G620 Sandy Bridge CPU and 3GB of RAM. So for my trading system, I need to have 1. speed, 2. stability, 3. fast development. I have done some research: C++ is fastest, but I am not good at it and it takes much longer to develop in. Other than C++, Java on Linux has the best speed. I also did some research on databases, and it looks like MySQL has the best speed. MySQL and PostgreSQL are both very popular open source SQL DBs - which one should I choose? I see MySQL has Workbench now, which looks similar to MS SQL Management Studio, so it looks like a good start. NetBeans should be the most popular Java IDE now, and it seems its GUI designer can be as easy to use as Visual Studio's now. I am not sure if building with NetBeans would affect speed, and if its GUI designer is really that good and easy to use. Ubuntu 64-bit LTS has good long-term support, good community support, and is stable. I will buy a new computer if I can create a good trading system for live trading and backtesting. Very likely I will buy an i7 or i5, depending on whether the i7 can really give better speed for my case. Actually, I mainly deal with C# in my jobs; I know Java but am not good at it. What would you guys recommend? Any better solution? This will be a big project and very likely a life-long project for me, so I am seriously doing research, including asking you guys, before I start and focus on what I should. Thanks!

    Read the article

  • Game development: “Play Now” via website vs. download & install

    - by Inside
    Heyo, I've spent some time looking over the various threads here on gamedev and also on the regular stackoverflow and while I saw a lot of posts and threads regarding various engines that could be used in game development, I haven't seen very much discussion regarding the various platforms that they can be used on. In particular, I'm talking about browser games vs. desktop games. I want to develop a simple 3D networked multiplayer game - roughly on the graphics level of Paper Mario and gameplay with roughly the same level of interaction as a hack & slash action/adventure game - and I'm having a hard time deciding what platform I want to target with it. I have some experience with using C++/Ogre3D and Python/Panda3D (and also some synchronized/networked programming), but I'm wondering if it's worth it to spend the extra time to learn another language and another engine/toolkit just so that the game can be played in a browser window (I'm looking at jMonkeyEngine right now). For simple & short games the newgrounds approach (go to the site, click "play now", instant gratification) seems to work well. What about for more complex games? Is there a point where the complexity of a game is enough for people to say "ok, I'm going to download and play that"? Is it worth it to go with engines that are less-mature, have less documentation, have fewer features, and smaller communities* just so that a (possibly?) larger audience can be reached? Does it make sense to even go with a web-environment for the kind of game that I want to make? Does anyone have any experiences with decisions like this? Thanks! (* With the exception of flash-based engines it seems like most of the other approaches have these downsides when compared to what is available for desktop-based environments. I'd go with flash, but I'm worried that flash's 3D capabilities aren't mature enough right now to do what I want easily. There's also Unity3D, but I'm not sure how I feel about that at all. It seems highly polished, but requires a plugin to be downloaded for the game to be played -- at that rate I might as well have players download my game.)

    Read the article

  • Is commented out code really always bad?

    - by nikie
    Practically every text on code quality I've read agrees that commented-out code is a bad thing. The usual example is that someone changed a line of code and left the old line there as a comment, apparently to confuse people who read the code later on. Of course, that's a bad thing. But I often find myself leaving commented-out code in another situation: I write a computational-geometry or image-processing algorithm. To understand this kind of code, and to find potential bugs in it, it's often very helpful to display intermediate results (e.g. draw a set of points to the screen or save a bitmap file). Looking at these values in the debugger usually means looking at a wall of numbers (coordinates, raw pixel values). Not very helpful. Writing a debugger visualizer every time would be overkill. I don't want to leave the visualization code in the final product (it hurts performance, and usually just confuses the end user), but I don't want to lose it, either. In C++, I can use #ifdef to conditionally compile that code, but I don't see much difference between this: /* // Debug Visualization: draw set of found interest points for (int i=0; i<count; i++) DrawBox(pts[i].X, pts[i].Y, 5,5); */ and this: #ifdef DEBUG_VISUALIZATION_DRAW_INTEREST_POINTS for (int i=0; i<count; i++) DrawBox(pts[i].X, pts[i].Y, 5,5); #endif So, most of the time, I just leave the visualization code commented out, with a comment saying what is being visualized. When I read the code a year later, I'm usually happy I can just uncomment the visualization code and literally "see what's going on". Should I feel bad about that? Why? Is there a superior solution? Update: S. Lott asks in a comment: Are you somehow "over-generalizing" all commented code to include debugging as well as senseless, obsolete code? Why are you making that overly-generalized conclusion? I recently read Robert Martin's "Clean Code", which says: Few practices are as odious as commenting-out code. Don't do this!. I've looked at the paragraph in the book again (p. 68); there's no qualification, no distinction made between different reasons for commenting out code. So I wondered if this rule is over-generalizing (or if I misunderstood the book) or if what I do is bad practice, for some reason I didn't know.

    Read the article

  • The Red Gate Guide to SQL Server Team based Development Free e-book

    - by Mladen Prajdic
    After about 6 months of work, the new book I've coauthored with Grant Fritchey (Blog|Twitter), Phil Factor (Blog|Twitter) and Alex Kuznetsov (Blog|Twitter) is out. They're all smart folks I talk to online, and this book is packed with good ideas backed by years of experience. The book contains a good deal of information about things you need to think of when doing any kind of multi-person database development. Although it's meant for SQL Server, the principles can be applied to any database platform out there. In the book you will find information on: writing readable code, documenting code, source control and change management, deploying code between environments, unit testing, reusing code, and searching and refactoring your code base. I've written chapter 5, about database testing, and chapter 11, about SQL refactoring. In the database testing chapter (chapter 5) I cover why you should test your database, why it is a good idea to have a database access interface composed of stored procedures, views and user-defined functions, and what and how to test. I talk about how there are many testing methods, like black- and white-box testing, unit and integration testing, error and stress testing, and why and how you should do all those. Sometimes you have to convince management to go for testing in the development lifecycle, so I give some pointers and tips on how to do that. Testing databases is a bit different from testing object-oriented code, in the sense that to have independent unit tests you need to roll back your code after each test. The chapter shows you ways to do this and also how to avoid it. At the end I show how to test various database objects and how to test access to them. In the SQL refactoring chapter (chapter 11) I cover why to refactor and where to even begin refactoring. I also show you a way to achieve a set-based mindset to solve SQL problems, which is crucial to good set-based SQL programming, and a few commonly seen problems to refactor. These problems include: using functions on columns in the WHERE clause, SELECT * problems, long stored procedures with many input parameters, one subquery per condition in the SELECT statement, the "cursors are good for anything" problem, using too-large data types everywhere, and the "using your data in code for business logic" anti-pattern. You can read more about it and download it here: The Red Gate Guide to SQL Server Team-based Development. Hope you like it, and send me feedback if you wish to.

    Read the article

  • Best pathfinding for a 2D world made by CPU Perlin Noise, with random start and destination points?

    - by Mathias Lykkegaard Lorenzen
    I have a world made by Perlin Noise. It's created on the CPU for consistency between several devices (yes, I know it takes time - I have my techniques that make it fast enough). Now, in my game you play as a fighter-ship-thingy-blob or whatever it's going to be. What matters is that this "thing" that you play as, is placed in the middle of the screen, and moves along with the camera. The white stuff in my world are walls. The black stuff is freely movable. Now, as the player moves around he will constantly see "monsters" spawning around him in a circle (a circle that's larger than the screen though). These monsters move inwards and try to collide with the player. This is the part that's tricky. I want these monsters to constantly spawn, moving towards the player, but avoid walls entirely. I've added a screenshot below that kind of makes it easier to understand (excuse me for my bad drawing - I was using Paint for this). In the image above, the following rules apply. The red dot in the middle is the player itself. The light-green rectangle is the boundaries of the screen (in other words, what the player sees). These boundaries move with the player. The blue circle is the spawning circle. At the circumference of this circle, monsters will spawn constantly. This spawncircle moves with the player and the boundaries of the screen. Each monster spawned (shown as yellow triangles) wants to collide with the player. The pink lines shows the path that I want the monsters to move along (or something similar). What matters is that they reach the player without colliding with the walls. The map itself (the one that is Perlin Noise generated on the CPU) is saved in memory as two-dimensional bit-arrays. A 1 means a wall, and a 0 means an open walkable space. The current tile size is pretty small. I could easily make it a lot larger for increased performance. I've done some path algorithms before such as A*. I don't think that's entirely optimal here though.
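    As a point of comparison for possible answers, here is a minimal breadth-first search flood over the kind of two-dimensional wall array the post describes. This is an illustrative sketch, not a proposal from the post: names are invented, Point can be XNA's integer Point struct, and since many monsters chase a single player, one flood computed from the player gives every monster its next step, which is usually cheaper than running A* once per monster.

        // BFS flood from the player over a bool[,] wall grid (true = wall).
        // After the flood, parent[x, y] points one step closer to the player,
        // so each monster simply moves toward parent[monster.X, monster.Y].
        static Point?[,] FloodFromPlayer(bool[,] walls, Point player)
        {
            int w = walls.GetLength(0), h = walls.GetLength(1);
            var parent = new Point?[w, h];
            var queue = new Queue<Point>();
            parent[player.X, player.Y] = player;
            queue.Enqueue(player);
            int[] dx = { 1, -1, 0, 0 }, dy = { 0, 0, 1, -1 };
            while (queue.Count > 0)
            {
                Point p = queue.Dequeue();
                for (int i = 0; i < 4; i++)
                {
                    int nx = p.X + dx[i], ny = p.Y + dy[i];
                    if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                    if (walls[nx, ny] || parent[nx, ny].HasValue) continue;
                    parent[nx, ny] = p;
                    queue.Enqueue(new Point(nx, ny));
                }
            }
            return parent;
        }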

    Read the article

  • Good approach for hundreds of consumers and big files

    - by ????? ???????
    I have several files (nearly 1GB each) with data. The data is string lines. I need to process each of these files with several hundred consumers. Each of these consumers does some processing that differs from the others. The consumers do not write anywhere concurrently; they only need the input string. After processing, they update their local buffers. Consumers can easily be executed in parallel. Important: with one specific file, each consumer has to process all lines (without skipping) in the correct order (as they appear in the file). The order of processing different files doesn't matter. Processing a single line by one consumer is comparably fast - I expect less than 50 microseconds on a Core i5. So now I'm looking for a good approach to this problem. This is going to be a part of a .NET project, so please let's stick with .NET only (C# is preferable). I know about TPL and Dataflow. I guess the most relevant block would be BroadcastBlock. But I think the problem is that with each line I'll have to wait for all consumers to finish in order to post the new one; I guess that would not be very efficient. I think the ideal situation would be something like this: One thread reads from the file and writes to a buffer. Each consumer, when it is ready, reads lines from the buffer concurrently and processes them. An entry in the buffer shouldn't be deleted as one consumer reads it; it can be deleted only when all consumers have processed it. TPL schedules the consumer threads itself. If one consumer outperforms the others, it shouldn't wait, and can read more recent entries from the buffer. Am I right with this kind of approach? Whether yes or no, how can I implement a good solution? A bit was already discussed on StackOverflow: link
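    To make the discussion concrete, here is a minimal sketch of one way to approximate the buffer described above using TPL Dataflow: one bounded ActionBlock per consumer, with SendAsync providing backpressure so a slow consumer only stalls the reader when its own queue fills, while faster consumers run ahead. The capacity and names are illustrative, not prescriptive.

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Linq;
        using System.Threading.Tasks;
        using System.Threading.Tasks.Dataflow;

        static async Task ProcessFileAsync(string path, IEnumerable<Action<string>> consumers)
        {
            // One bounded queue per consumer; each block processes its lines
            // sequentially, which preserves the required per-file line order.
            var blocks = consumers
                .Select(c => new ActionBlock<string>(c,
                    new ExecutionDataflowBlockOptions { BoundedCapacity = 4096 }))
                .ToList();

            foreach (string line in File.ReadLines(path))
                foreach (var block in blocks)
                    await block.SendAsync(line);   // awaits only if that queue is full

            foreach (var block in blocks) block.Complete();
            await Task.WhenAll(blocks.Select(b => b.Completion));
        }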

    Read the article

  • Am I experienced enough to learn and develop immediately using Ruby on Rails?

    - by acheong87
    General Question I understand that discussions revolving around questions of this form run the risk of becoming too specific to help others. So, perhaps a better, general question would be: What kind of experience, if any, translates easily to Ruby on Rails; and if none, then what's the learning curve like, in comparison to other popular languages? Background I have the opportunity to build a website using whatever technologies I wish to use. It's a fairly simple website, for listing products, taking payments, managing customer data, providing a back-end portal for employees to manage data, possibly hooking in flight information (the products are travel related), possibly integrating a blog and all the social-networking goodies. Specific Problem I have to let the client know by tonight whether I'm interested in taking up this project, before he talks to other potential developers, but I'm on the fence. I already work a full-time C++ development job, so the money doesn't do it for me. It's the opportunity to (be paid to) learn some new technologies and to have a real, running product in the end. I've heard and read great things about Ruby, and am really intrigued. I zipped through some introductory Ruby tutorials, no sweat. However I found the Rails tutorials a little overwhelming, especially not being able to try it out anywhere. And researching Rails hosts like Heroku and EngineYard makes me think that maybe I don't know what I'm getting myself into. The ship's leaving port! I wish I had more time to learn, better yet play with the language, but I have to decide soon! Should I venture or pass? Additional Details My experiences are in C/C++/Tcl/Perl/PHP/jQuery, and basic knowledge of Java/C#. I didn't study C.S. formally so I wasn't exposed to design principles, programming paradigms, etc., which is my greatest concern. Will my lack of understanding in this realm make RoR frustrating to learn? Will it be so incompatible with a C++ "way" of thinking that I'll wish I never started? Am I putting my client at risk by attempting this? If it helps, I'm quick to learn new things (self-taught so far) and care a great deal about correctness, using things for their intended purposes, and so on. I've read numerous recommendations of Agile Development with Rails and would love to read it (though perhaps, while developing in parallel, for shortness of time). Worse comes to worst, I'd give up and do the standard LAMP gig, of course, not charging the client for wasted time. But I'm hoping to avoid the project altogether if it's gonna come down to that! Thanks in advance for any tips, insights, votes of confidence, votes of discouragement (for the better), and such.

    Read the article

  • Why lock statements don't scale

    - by Alex.Davies
    We are going to have to stop using lock statements one day. Just like we had to stop using goto statements. The problem is similar: they're pretty easy to follow in small programs, but code with locks isn't composable. That means that small pieces of program that work in isolation can't necessarily be put together and work together. Of course actors scale fine :) Why lock statements don't scale as software gets bigger: Deadlocks. You have a program with lots of threads picking up lots of locks. You already know that if two of your threads both try to pick up a lock that the other already has, they will deadlock. Your program will come to a grinding halt, and there will be fire and brimstone. "Easy!" you say, "Just make sure all the threads pick up the locks in the same order." Yes, that works. But you've broken composability. Now, to add a new lock to your code, you have to consider all the other locks already in your code and check that they are taken in the right order. Algorithm buffs will have noticed this approach means it takes quadratic time to write a program. That's bad. Why lock statements don't scale as hardware gets bigger: Memory bus contention. There's another headache, one that most programmers don't usually need to think about, but that is going to bite us in a big way in a few years. Locking needs exclusive use of the entire system's memory bus while taking out the lock. That's not too bad for a single- or dual-core system, but already for quad-core systems it's a pretty large overhead. Have a look at this blog about the .NET 4 ThreadPool for some numbers and a weird analogy (see the author's comment). Not too bad yet, but I'm scared my 1000-core machine of the future is going to go slower than my machine today! I don't know the answer to this problem yet. Maybe some kind of per-core work queue system with hierarchical work stealing. Definitely hardware support. But what I do know is that using locks specifically prevents any solution to this. We should be abstracting our code away from the details of locks as soon as possible, so we can swap in whatever solution arrives when it does. NAct uses locks at the moment. But my advice is that you code using actors (which do scale well as software gets bigger). And when there's a better way of implementing actors that'll scale well as hardware gets bigger, only NAct needs to work out how to use it, and your program will go fast on its own.
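    For readers who haven't been bitten yet, here is a minimal sketch of the two-thread deadlock described above (illustrative only):

        using System.Threading;

        // Thread 1 takes a then b; thread 2 takes b then a. If each gets its
        // first lock before the other's second, both wait forever. The fix -
        // a global lock ordering - is exactly the whole-program, non-composable
        // rule this post objects to.
        class DeadlockSketch
        {
            static readonly object a = new object();
            static readonly object b = new object();

            static void Main()
            {
                var t1 = new Thread(() => { lock (a) { Thread.Sleep(100); lock (b) { } } });
                var t2 = new Thread(() => { lock (b) { Thread.Sleep(100); lock (a) { } } });
                t1.Start(); t2.Start();
                t1.Join(); t2.Join();   // typically never returns
            }
        }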

    Read the article

  • Equal Gifts Algorithm Problem

    - by 7Aces
    Problem Link - http://opc.iarcs.org.in/index.php/problems/EQGIFTS It is Lavanya's birthday and several families have been invited for the birthday party. As is customary, all of them have brought gifts for Lavanya as well as her brother Nikhil. Since their friends are all of the erudite kind, everyone has brought a pair of books. Unfortunately, the gift givers did not clearly indicate which book in the pair is for Lavanya and which one is for Nikhil. Now it is up to their father to divide up these books between them. He has decided that from each of these pairs, one book will go to Lavanya and one to Nikhil. Moreover, since Nikhil is quite a keen observer of the value of gifts, the books have to be divided in such a manner that the total value of the books for Lavanya is as close as possible to the total value of the books for Nikhil. Since Lavanya and Nikhil are kids, no book that has been gifted will have a value higher than 300 Rupees... For the problem, I couldn't think of anything except recursion. The code I wrote is given below. But the problem is that the code is time-inefficient and gives TLE (Time Limit Exceeded) for 9 out of 10 test cases! What would be a better approach to the problem? Code -

        #include<cstdio>
        #include<climits>
        #include<algorithm>
        using namespace std;
        int n, g[150][2];
        int diff(int a, int b, int f)
        {
            ++f;
            if (f == n)
            {
                if (a > b) return a - b;
                else return b - a;
            }
            return min(diff(a + g[f][0], b + g[f][1], f),
                       diff(a + g[f][1], b + g[f][0], f));
        }
        int main()
        {
            int i;
            scanf("%d", &n);
            for (i = 0; i < n; ++i)
            {
                scanf("%d%d", &g[i][0], &g[i][1]);
            }
            printf("%d", diff(g[0][0], g[0][1], 0));
        }

    Note - It is just a practice question, and is not part of a competition.
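    One standard alternative to the exponential recursion, sketched here rather than taken from the thread: since every book is worth at most 300 rupees, mark which totals Lavanya's share can reach with a boolean DP over the pairs, then pick the reachable total closest to half the grand total. (C# is used for consistency with the rest of this page; the idea ports directly to the C++ above.)

        // reachable[s] == true if some choice of one book per pair gives
        // Lavanya exactly s. Runs in O(n * total) time, which is small here.
        static int MinDifference(int[,] g, int n)
        {
            int total = 0;
            for (int i = 0; i < n; i++) total += g[i, 0] + g[i, 1];

            var reachable = new bool[total + 1];
            reachable[0] = true;
            for (int i = 0; i < n; i++)
            {
                var next = new bool[total + 1];
                for (int s = 0; s <= total; s++)
                {
                    if (!reachable[s]) continue;
                    next[s + g[i, 0]] = true;  // give Lavanya the first book
                    next[s + g[i, 1]] = true;  // or the second one
                }
                reachable = next;
            }

            int best = int.MaxValue;
            for (int s = 0; s <= total; s++)
                if (reachable[s])
                    best = Math.Min(best, Math.Abs(total - 2 * s));
            return best;
        }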

    Read the article

  • A Quick HLSL Question (How to modify some HLSL code)

    - by electroflame
    Thanks for wanting to help! I'm trying to create a circular, repeating ring (that moves outward) on a texture. I've achieved this, to a degree, with the following code:

        float distance = length(inTex - in_ShipCenter);
        float time = in_Time;

        // Simple distance/time combination
        float2 colorIndex = float2(distance - time, .3);

        float4 shipColor = tex2D(BaseTexture, inTex);
        float4 ringColor = tex2D(ringTexture, colorIndex);

        float4 finalColor;
        finalColor.rgb = (shipColor.rgb) + (ringColor.rgb);
        finalColor.a = shipColor.a; // Use the base texture's alpha (transparency).

        return finalColor;

    This works, and works how I want it to. The ring moves outward from the center of the texture at a steady rate, and is constrained to the edges of the base texture (i.e. it won't continue past an edge). However, there are a few issues with it that I would like some help on. They are: 1. By combining the color additively (when I set finalColor.rgb), it makes the resulting ring color much lighter than I want (which is pretty much the definition of additive blending, but I don't really want additive blending in this case). 2. I would really like to be able to pass in the color that I want the ring to be. Currently, I have to pass in a texture that contains the color of the ring, but I think that doing it that way is kind of wasteful and overly cumbersome. I know that I'm probably being an idiot over this, so I greatly appreciate the help. Some other (possibly relevant) information: I'm using XNA. I'm applying this by providing it to a SpriteBatch (as an Effect). The SpriteBatch is using BlendState.NonPremultiplied. Thanks in advance! EDIT: Thanks for the answers thus far, as they've helped me get a better grasp of the color issue. However, I'm still unsure of how to pass a color in and not use a texture. i.e. Can I create a tex2D by using a float4 instead of a texture? Or can I make a texture from a float4 and pass the texture in to the tex2D? DOUBLE EDIT: Here's some example pictures: With the effect off: With the effect on: With the effect on, but with the color weighting set to full: As you can see, the color weighting makes the base texture completely black (the background is black, so it looks transparent). You can also see the red it's supposed to be, and then the white-ish it really is when blended additively.
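    On the second issue, one common approach, offered as a sketch and not as the poster's code: replace the ring texture lookup in the shader with a float4 uniform (say, in_RingColor - a hypothetical name) and set it from XNA before drawing. The C# side would then look roughly like this:

        // Assumes the .fx file declares:  float4 in_RingColor;
        // and uses it in place of tex2D(ringTexture, colorIndex).
        ringEffect.Parameters["in_RingColor"].SetValue(Color.OrangeRed.ToVector4());
        ringEffect.Parameters["in_Time"].SetValue(time);

        spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.NonPremultiplied,
            null, null, null, ringEffect);
        spriteBatch.Draw(shipTexture, position, Color.White);
        spriteBatch.End();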

    Read the article

  • Voxel terrain rendering with marching cubes

    - by JavaJosh94
    I was working on making procedurally generated terrain using normal cubish voxels (like Minecraft), but then I read about marching cubes and decided to convert to using those. I managed to create a working marching cubes class and cycle through the densities, and everything in it seemed to be working, so I went on to work on actual terrain generation. I'm using XNA (C#) and a ported libnoise library to generate noise for the terrain generator. But instead of rendering smooth terrain I get a 64x64 chunk (I specified 64 but can change it) of seemingly random marching cubes using different triangles. This is the code I'm using to generate a "chunk" (the truncated return statement at the end has been restored):

        public MarchingCube[,,] getTerrainChunk(int size, float dMultiplyer, int stepsize)
        {
            MarchingCube[,,] temp = new MarchingCube[size / stepsize, size / stepsize, size / stepsize];
            for (int x = 0; x < size; x += stepsize)
            {
                for (int y = 0; y < size; y += stepsize)
                {
                    for (int z = 0; z < size; z += stepsize)
                    {
                        float[] densities = {
                            (float)terrain.GetValue(x, y, z) * dMultiplyer,
                            (float)terrain.GetValue(x, y + stepsize, z) * dMultiplyer,
                            (float)terrain.GetValue(x + stepsize, y + stepsize, z) * dMultiplyer,
                            (float)terrain.GetValue(x + stepsize, y, z) * dMultiplyer,
                            (float)terrain.GetValue(x, y, z + stepsize) * dMultiplyer,
                            (float)terrain.GetValue(x, y + stepsize, z + stepsize) * dMultiplyer,
                            (float)terrain.GetValue(x + stepsize, y + stepsize, z + stepsize) * dMultiplyer,
                            (float)terrain.GetValue(x + stepsize, y, z + stepsize) * dMultiplyer };
                        Vector3[] corners = {
                            new Vector3(x, y, z), new Vector3(x, y + stepsize, z),
                            new Vector3(x + stepsize, y + stepsize, z), new Vector3(x + stepsize, y, z),
                            new Vector3(x, y, z + stepsize), new Vector3(x, y + stepsize, z + stepsize),
                            new Vector3(x + stepsize, y + stepsize, z + stepsize), new Vector3(x + stepsize, y, z + stepsize) };
                        if (x == 0 && y == 0 && z == 0)
                        {
                            temp[x / stepsize, y / stepsize, z / stepsize] = new MarchingCube(densities, corners, device);
                        }
                        temp[x / stepsize, y / stepsize, z / stepsize] = new MarchingCube(densities, corners);
                    }
                }
            }
            return temp;
        }

    (terrain is Perlin noise generated using libnoise.) I'm sure there's probably an easy solution to this, but I've been drawing a blank for the past hour. I'm just wondering if the problem is how I'm reading in the data from the noise, or if I may be generating the noise wrong? Or maybe the noise is just not good for this kind of generation? If I'm reading it wrong, does anyone know the right way? The answers on Google were somewhat ambiguous, but I'm going to keep searching. Thanks in advance!
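    One thing worth checking, offered as a guess rather than a confirmed diagnosis: coherent-noise generators in the libnoise family are anchored to the integer lattice, so sampling GetValue at raw integer x, y, z coordinates can produce degenerate, noisy-looking densities. Scaling the coordinates down to a feature size is the usual first experiment (featureSize is an invented tuning constant; terrain and dMultiplyer are from the post's code):

        // Hypothetical tweak: sample between lattice points instead of on them.
        const float featureSize = 16.0f;
        float Density(int x, int y, int z)
        {
            return (float)terrain.GetValue(x / featureSize,
                                           y / featureSize,
                                           z / featureSize) * dMultiplyer;
        }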

    Read the article

  • Mounting a Microsoft Azure CloudDrive in a VMRole

    - by SeanBarlow
    Mounting a drive in a VMRole is a little more complicated than in a web or worker role.  The Web and Worker roles offer OnStart and OnStop events, which you can use to mount or unmount your drives. The VMRole does not have these same events, so you have to provide another way for the drives to be mounted or unmounted. The problem I have run into is: what if you have multiple drives and you only want to mount certain drives? How do you let your user mount the drive? I am not going to go into details on what kind of GUI to present to the user; I have done this in a simple WPF application as well as a console application. We are going to need to get the storage account details. One thing to note when you are mounting cloud drives: you cannot use HTTPS and have to use HTTP. We force the use of HTTP by passing false when we create the CloudStorageAccount.   StorageCredentialsAccountAndKey credentials = new StorageCredentialsAccountAndKey("AccountName", "AccountKey"); CloudStorageAccount storageAccount = new CloudStorageAccount(credentials, false);   Next we need to get a reference to the container.   var blobClient = storageAccount.CreateCloudBlobClient(); var container = blobClient.GetContainerReference("ContainerName");   Now we need to get a list of the drives in the container:   var drives = container.ListBlobs();   Now that we have a list of the drives in the container, we can let the user choose which drive they want to mount. I am just selecting the first drive in the list for the example and getting the Uri of the drive:   var driveUri = drives.First().Uri;   Now that we have the Uri, we need to get the reference to the drive: var drive = new CloudDrive(driveUri, storageAccount.Credentials);   Now all that is left is to mount the drive:   var driveLetter = drive.Mount(0, DriveMountOptions.None);   To unmount the drive, all you have to do is call Unmount on the drive: drive.Unmount();   You do need to make sure you unmount the drives when you are done with them. I have run into issues with the drives being locked until the VMRole is rebooted. I have also managed to have a drive be permanently locked, and I was forced to delete it and upload it again. I have been unable to reproduce the permanent lock, but I am still trying. The CloudDrive class provides a handy method to retrieve all the mounted drives in the role: foreach (var drive in CloudDrive.GetMountedDrives()) {          var mountedDrive = Account.CreateCloudDrive(drive.Value.PathAndQuery);          mountedDrive.Unmount(); }
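    Pulled together, here is a minimal end-to-end sketch of the steps above ("AccountName", "AccountKey", and "ContainerName" are placeholders, and the try/finally is my addition to make the unmount-on-exit point explicit):

        // Consolidated sketch; cloud drives must be mounted over HTTP,
        // hence the false (useHttps) argument.
        var credentials = new StorageCredentialsAccountAndKey("AccountName", "AccountKey");
        var storageAccount = new CloudStorageAccount(credentials, false);

        var container = storageAccount.CreateCloudBlobClient()
                                      .GetContainerReference("ContainerName");
        var driveUri = container.ListBlobs().First().Uri;

        var drive = new CloudDrive(driveUri, storageAccount.Credentials);
        string driveLetter = drive.Mount(0, DriveMountOptions.None);
        try
        {
            // ... read and write files under driveLetter ...
        }
        finally
        {
            drive.Unmount();   // a drive left mounted can stay locked until reboot
        }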

    Read the article

  • Free eBook with SQL Server performance tips and nuggets

    - by Claire Brooking
    I’ve often found that the kind of tips that turn out to be helpful are the ones that encourage me to make a small step outside of a routine. No dramatic changes – just a quick suggestion that changes an approach. As a languages student at university, one of the best I spotted came from outside the lecture halls and ended up saving me time (and lots of huffing and puffing) – the use of a rainbow of sticky notes for well-used pages and letter categories in my dictionary. Simple, but armed with a heavy dictionary that could double up as a step stool, those markers were surprisingly handy. When the Simple-Talk editors told me about a book they were planning that would give a series of tips for developers on how to improve database performance, we all agreed it needed to contain a good range of pointers for big-hitter performance topics. But we wanted to include some of the smaller, time-saving nuggets too. We hope we’ve struck a good balance. The 45 Database Performance Tips eBook covers different tips to help you avoid code that saps performance, whether that’s the ‘gotchas’ to be aware of when using Object to Relational Mapping (ORM) tools, or what to be aware of for indexes, database design, and T-SQL. The eBook is also available to download with SQL Prompt from Red Gate. We often hear that it’s the productivity-boosting side of SQL Prompt that makes it useful for everyday coding. So when a member of the SQL Prompt team mentioned an idea to make the most of tab history, a new feature in SQL Prompt 6 for SQL Server Management Studio, we were intrigued. Now SQL Prompt can save tabs we have been working on in SSMS as a way to maintain an active template for queries we often recycle. When we need to reuse the same code again, we search for our saved tab (and we can also customize its name to speed up the search) to get started. We hope you find the eBook helpful, and as always on Simple-Talk, we’d love to hear from you too. If you have a performance tip for SQL Server you’d like to share, email Melanie on the Simple-Talk team ([email protected]) and we’ll publish a collection in a follow-up post.

    Read the article

  • Oracle releases new Java Embedded products

    - by Henrik Stahl
    With less than one week to go to JavaOne 2012, we've spiced things up a little by releasing not one but two net new embedded Java products. This is an important step towards realizing the vision of Java as the standard platform for the Internet of Things that I outlined in a recent blog post. The two new products are: Java ME Embedded 3.2. Based on same code as the widely deployed Oracle Java Wireless Client for feature phones, this new product provides a Java ME implementation optimized for very small microcontroller-based devices and adds - among other things - a new Device Access API that enables interaction with peripherals common in edge devices such as various types of sensors. In addition to the new Java ME Embedded platform, we have also released an update of the Java ME SDK which adds support for the development of small embedded devices. Java Embedded Suite 7.0. This is an integrated middleware stack for embedded devices, incorporating Java SE Embedded and versions of JavaDB, GlassFish and a Web Services stack optimized for remote operation and small footprint. A typical Internet of Things (or M2M) infrastructure contains three types of compute nodes: The edge device which is typically a sensor or control point of some kind. These devices can be connected directly to a backend through a mobile network if they are installed in - for example - a remote vending machine; or, they can be part of a local short-range network and be connected to the backend through a more powerful gateway device. A gateway is the second type of compute node and acts as an aggregator and control point for a local network. A good example of this could be a generalized home Internet access point, or home gateway. Gateways are mostly using normal wall power and are used for multiple applications, deployed by multiple service providers. Finally, the last type of compute node is the normal enterprise or cloud backend. Java ME Embedded and Java Embedded Suite are perfect base software stacks for the edge devices and the gateway respectively, providing the Java promise of a platform independent runtime and a complete set of libraries as well as allowing a programmer to focus on the business logic rather than plumbing. We are very thrilled with these new releases that open up exciting opportunities for Java developers to extend services and enterprise applications in ways that will make organizations more efficient and touch our daily lives. To find out more, come to the JavaOne conference (for technical content) and to the Java Embedded @ JavaOne subconference (for business content). There will be plenty of cool demos showing complete end-to-end applications, provided by Oracle and our partners, as well as keynotes and numerous sessions where you can learn more about the technology and business opportunities.

    Read the article

  • The Social Business Thought Leaders - Ray Wang

    - by kellsey.ruppel
    It seems both consumers and businesses are at the peak of the social hype. Overwhelmed by social media channels, platforms, and processes both in their private and professional lives, many early adopters are starting to feel social fatigue. Mirroring what happened with email and web sites during the late 1990s - early 2000s, more and more managers are looking to move from ubiquitous social media tactics to the most appropriate business use cases and processes. This step becomes even more important considering the year-over-year contraction in IT budgets and the consequent need to maximize return on every dollar spent on new technologies. Ray Wang, CEO and Principal Analyst at Constellation Research, suggests engagement through collaborative technologies both as a conceptual model and as a transformational tool for enterprises to reap business value. Without participation - the reasoning goes - there is no value, and good technology alone is not enough to guarantee employee and customer adoption. Enterprise gamification is a new lever to succeed with Social Business by directing a critical mass of participation towards desired outcomes. What kind of outcomes? A recent study from Constellation Research (see 2012 Q1 Gamification Early Adopters Best Practices) highlights how Marketing, Customer Service and HR are leading the pack with gamification in processes such as: Sustaining long-term customer loyalty (76.4%) Improving response in campaign-to-lead (74.5%) Right-channeling incidents for resolution in social media (67.3%) Growing the number of service and support incidents resolved by the community (63.6%) Improving employee referral rates and effective recruiting (43.6%) Driving on-boarding success with new hires (20%) More than simply adding badges, points and leaderboards to existing processes, enterprise gamification should be holistically embedded into the employee and customer experience to stimulate specific behaviors. According to Ray Wang this can be done at three core levels: Measurable actions. The behaviors we want to facilitate consist of granular actions (i.e. likes, comments, posts, recommendations, etc.) and more complex actions (i.e. projects, initiatives, programmes) attributed to individuals, groups and/or external actors. Reputation. The reputation an individual has earned through his actions is a key factor in building motivation among others, and it is determined by his identity, social standing and competitiveness. Incentives, or the intrinsic and extrinsic rewards that motivate behaviors and drive actions. Listen to Ray Wang's video interview to learn more about the dynamics that are shaping the future of collaboration and how gamification can help organizations attain new levels of engagement.

    Read the article

  • What to do if I am working on a language that I don't like

    - by Sayem Ahmed
    Hi there, I really don't know if this is the right place to ask this question, but if it isn't, then I guess someone will notify me. Anyway, I am working in a software development firm which is currently using PowerBuilder to develop a mid-size ERP solution. The work environment and company management are so great that they may be the best in the whole of Bangladesh. The only problem is the technology currently being used, which is PowerBuilder. Now I am a guy who tends to prefer modern development technologies, like DI containers, ORMs, TDD, jQuery, etc. PowerBuilder is a great tool too, but I couldn't like the application techniques used to build PB applications. These techniques are so inheritance-dependent that many a time they create a great deal of suffering. I remember two days ago I had to change some processing logic in a core user object, and as a result I had to test and re-test all the forms that the application has (apparently, there are almost 20 forms there, each of them with 3-4 kinds of functionality). Also, learning PB is tough, because online material on it is very, very scarce. I can't afford to read all the documentation that PB provides because I have hard deadlines on the work that I have to do. Another thing with PB is that applications tend to rely on business logic that is implemented in the database, which makes debugging a nightmare. As a result, I don't feel motivated enough to work in this IDE/System/Framework (or whatever) anymore. My productivity has greatly decreased, and I am not delivering quality code. I think I have the following options available to me: 1. Remain in the current job, keep delivering worse code, and let my productivity decrease day by day, taking salaries and bonuses but not delivering quality code or doing my job the way I should; 2. Search for a new job. At this point number 2 seems a good option, but there are also some issues. As I mentioned before, our management may be the best in the country. Our company owner is himself a software developer with 24 years of experience in software development. He is currently our Team Leader and System Analyst. He is by far the greatest manager and boss I have ever seen. He understands a developer's mentality very well (as he IS himself a developer). He is also a great, kind and generous guy. Our company is only a start-up company with 10 developers. Among them, only 3-4 people know the business logic behind the ERP, and I am one of them. If I switch my current job, it may hamper the development of this product, which I really don't want. I couldn't decide what to do in this situation, so I turned to the community for advice.

    Read the article

  • Do I need to contact a lawyer to report a GPL violation in software distributed on Apple's App Store?

    - by Rinzwind
    Some company is selling software through Apple's App Store which uses portions of code that I released publicly under the GPL. The company is violating the licensing terms in two ways, by (1) not preserving my copyright statement, and not releasing their code under the GPL license and (2) by distributing my GPL-licensed code through Apple's App Store. (The Free Software Foundation has made clear that the terms of the GPL and those of the App Store are incompatible.) I want to report this to Apple, and ask that they take appropriate action. I have tried mailing them to ask for more information about the reporting process, and have received the automated reply quoted below. The last point in the list of things one needs to provide, the “a statement by you, made under penalty of perjury,” sounds as if they mean some kind of specific legal document. I'm not sure. Does this mean I need to contact a lawyer just to file the report? I'd like to avoid going through that hassle if at all possible. (Besides an answer to this specific question, I'd welcome comments and experience reports from anyone who has already had to deal with a GPL violation on Apple's App Store.) Thank you for contacting Apple's Copyright Agent. If you believe that your work has been copied in a way that constitutes infringement on Apple’s Web site, please provide the following information: an electronic or physical signature of the person authorized to act on behalf of the owner of the copyright interest; a description of the copyrighted work that you claim has been infringed; a description of where the material that you claim is infringing is located on the site; your address, telephone number, and email address; a statement by you that you have a good faith belief that the disputed use is not authorized by the copyright owner, its agent, or the law; a statement by you, made under penalty of perjury, that the above information in your Notice is accurate and that you are the copyright owner or authorized to act on the copyright owner’s behalf. For further information, please review Apple's Legal Information & Notices/Claims of Copyright Infringement at: http://www.apple.com/legal/trademark/claimsofcopyright.html To expedite the processing of your claim regarding any alleged intellectual property issues related to iTunes (music/music videos, podcasts, TV, Movies), please send a copy of your notice to [email protected] For claims concerning a software application, please send a copy of your notice to [email protected]. Due to the high volume of e-mails we receive, this may be the only reply you receive from [email protected]. Please be assured, however, that Apple's Copyright Agent and/or the iTunes Legal Team will promptly investigate and take appropriate action concerning your report.

    Read the article

  • Importing PKCS#12 (.p12) files into Firefox From the Command Line

    - by user11165
    I've posted this question up on the #Ubuntu and #Firefox forums, and really could do with some help... Anyone know where I could look, or can help with the answer? I'm hoping the power of social media will come through… I have a need to perform the following action: Firefox 3.6.x: Quote: open Edit - Preferences - Advanced - Encryption - View Certificates - Your Certificates - Import However, I need the same functionality from the bash command line. So far I've established that the following command is supposed to be used: Quote: certutil -A -t "u,u,u" -d /home/df001/.mozilla/firefox/qe5y5lht.tc.default/ -n "mycert" -i client.p12 This executes with no issues; however, the cert doesn't show up in any Firefox certificate store. However, I have noted that prior to running this command, I have a cert8.db, key3.db and secmod.db file in the above folder. After running the command, certutil seems to have created a cert9.db, key4.db and pkcs12.txt file. Listing the contents using the command: Quote: certutil -L -d sql:/home/df001/.mozilla/firefox/qe5y5lht.tc.default/ does seem to confirm my attempts at importing files into a certificate folder of some kind have worked, because I get: Quote: Certificate Nickname Trust Attributes SSL,S/MIME,JAR/XPI Thawte SSL CA ,, Go Daddy Secure Certification Authority ,, Thawte SGC CA ,, Entrust Certification Authority - L1C ,, My Nero CT,C,c mynero P,, davidfield - Internet Widgits Pty Ltd u,u,u So, having tried this, and heading back over to the web, I came across this command: Quote: pk12util -d /home/df001/.mozilla/firefox/qe5y5lht.tc.default/ -i client.p12 -n "David Field" -P "cert8.db" This again appears to be importing something somewhere; however, again, viewing certs from the Firefox interface doesn't show the imported cert. I'm surmising, from reading, that certutil and pk12util are creating a new NSS database which Firefox isn't reading. So my question is: how can I import the p12 cert from the command line so it displays in the Firefox certificate manager interface? Why have I posted this here? Why not post on the Firefox forum? Well, I will copy and post the same question there as well, but the ability to use the command line to do this is important, as I have potentially 2000 machines which will need a user cert imported into Firefox via a p12 file. I need to do this in the form of a script; I thought the hard part was going to be making the p12 file from the Microsoft 2003 CA - turns out that's easy. I can't just import via the GUI and copy over cert8.db x 2000, and I can't ask users to use the CA web interface, as it's for VPN access, the users are off site, and they need the VPN to get to the cert server. Is there any person out there who can help? By the way, I don't have the Tor button installed.

    Read the article

  • SQL Contests – Solution – Identify the Database Celebrity

    - by Pinal Dave
    Last week we were running the contest Identify the Database Celebrity, and we received a fantastic response to it. Thank you to the kind folks at NuoDB, who offered two USD 100 Amazon Gift Cards to the winners of the contest. We also had an additional contest where users had to download and install NuoDB and identify the sample database. You can read about the contest over here. Here are the answers to the questions which we had asked earlier in the contest. Part 1: Identify Database Celebrity. Personality 1 – Edgar Frank "Ted" Codd (August 19, 1923 – April 18, 2003) was an English computer scientist who, while working for IBM, invented the relational model for database management, the theoretical basis for relational databases. He made other valuable contributions to computer science, but the relational model, a very influential general theory of data management, remains his most mentioned achievement. (Wiki) Personality 2 – James Nicholas "Jim" Gray (born January 12, 1944; lost at sea January 28, 2007; declared deceased May 16, 2012) was an American computer scientist who received the Turing Award in 1998 "for seminal contributions to database and transaction processing research and technical leadership in system implementation." (Wiki) Personality 3 – Jim Starkey (born January 6, 1949 in Illinois) is a database architect responsible for developing InterBase, the first relational database to support multi-versioning, the blob column type, event alerts, arrays and triggers. Starkey is the founder of several companies, including the web application development and database tool company Netfrastructure, and NuoDB. (Wiki) Part 2: Identify NuoDB Sample Database Names. In this part of the contest, one had to download NuoDB and install the sample database Hockey. Hockey is a sample database and contains a few tables. Users had to install the sample database and report the names of its tables. Here is the valid answer: HOCKEY, PLAYERS, SCORING, TEAM. Once again, it was indeed fun to run this contest. I have received great feedback about it, and lots of people want me to run similar contests in the future. I promise to run similar interesting contests in the near future. Winners: Within the next two days, winners will be notified by email. Winners will have to confirm their email address, and the NuoDB team will send them their Amazon Gift Cards directly. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article
