Search Results

Search found 20573 results on 823 pages for 'pretty big scheme'.


  • What version numbering scheme to use?

    - by deamon
    I'm looking for a version numbering scheme that expresses the extent of change, especially compatibility. Apache APR, for example, uses the well-known scheme <major>.<minor>.<patch>, e.g. 4.5.11. Maven suggests a similar but more detailed schema: <major>.<minor>.<patch>-<qualifier>-<build number>, e.g. 4.5.11-RC1-3732. Where is the Maven versioning scheme defined? Are there conventions for the qualifier and build number? It is probably a bad idea to use Maven but not follow the Maven version scheme ... What other version numbering schemes do you know? Which scheme would you prefer, and why?
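    One practical consequence of the <major>.<minor>.<patch> layout is that versions must be compared component-wise as numbers, not as strings ("4.5.9" sorts after "4.5.11" lexicographically). A minimal Scheme sketch of such a comparison, assuming a string-split helper (built into Racket; easy to write in plain R5RS):

        (define (version->list v)            ; "4.5.11" -> (4 5 11)
          (map string->number (string-split v ".")))

        (define (version<? a b)              ; numeric, component-wise compare
          (let loop ((xs (version->list a)) (ys (version->list b)))
            (cond ((null? xs) (not (null? ys)))  ; shorter prefix sorts first
                  ((null? ys) #f)
                  ((< (car xs) (car ys)) #t)
                  ((> (car xs) (car ys)) #f)
                  (else (loop (cdr xs) (cdr ys))))))

        ; (version<? "4.5.9" "4.5.11") => #t, unlike a plain string compare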

    Read the article

  • How to determine if a CNF formula is satisfiable in Scheme?

    - by JJBIRAN
    Program a SCHEME function sat that takes one argument, a CNF formula represented as above. If we had evaluated (define cnf '((a (not b) c) (a (not b) (not d)) (b d))), then evaluating (sat cnf) would return #t, whereas (sat '((a) (not a))) would return (). You should have the following two functions working:

        (define comp
          (lambda (lit)
            ; Takes a literal and returns the complement literal.
            ; Examples: (comp 'a) = (not a), and (comp '(not b)) = b.
            ...))

        (define consistent
          (lambda (lit path)
            ; Takes a literal and a list of literals; returns #t whenever the
            ; complement of the first argument is not a member of the list
            ; represented by the 2nd argument; () otherwise.
            ...))

    Now for the sat function. The real searching involves the list of clauses (the CNF formula) and the path that has currently been developed. The sat function should merely invoke the real "workhorse" function, which will have 2 arguments, the current path and the clause list. In the initial call, the current path is of course empty. Hints on sat (ignore these at your own risk!):

        (define sat
          (lambda (clauselist)
            ; invoke satpath
            ...))

        (define satpath
          (lambda (path clauselist)   ; just returns #t or ()
            ; base cases:
            ;   if we're out of clauses, what then?
            ;   if there are no literals to choose in the 1st clause, what then?
            ;
            ; then in general:
            ;   if the 1st literal in the 1st clause is consistent with the
            ;   current path, and if << returns #t, then return #t.
            ;
            ;   if the 1st literal didn't work, then search <<
            ;   the CNF formula in which the 1st clause doesn't have that literal
            ...))

    Don't make this too hard. My program is a few functions averaging about 2-8 lines each. SCHEME is concise and elegant! The following expressions may help you to test your programs. All but cnf4 are satisfiable. By including them along with your function definitions, the functions themselves are automatically tested and results displayed when the file is loaded.

        (define cnf1 '((a b c) (c d) (e)))
        (define cnf2 '((a c) (c)))
        (define cnf3 '((d e) (a)))
        (define cnf4 '((a b) (a (not b)) ((not a) b) ((not a) (not b))))
        (define cnf5 '((d a) (d b c) ((not a) (not d)) (e (not d)) ((not b)) ((not d) (not e))))
        (define cnf6 '((d a) (d b c) ((not a) (not d) (not c)) (e (not c)) ((not b)) ((not d) (not e))))

        (write-string "(sat cnf1) ") (write (sat cnf1)) (newline)
        (write-string "(sat cnf2) ") (write (sat cnf2)) (newline)
        (write-string "(sat cnf3) ") (write (sat cnf3)) (newline)
        (write-string "(sat cnf4) ") (write (sat cnf4)) (newline)
        (write-string "(sat cnf5) ") (write (sat cnf5)) (newline)
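    Following those hints, one possible shape of the solution is sketched below (a sketch, not the assignment's official answer). Note that in real Scheme '() counts as true, so the recursive result has to be compared against #t explicitly:

        (define comp
          (lambda (lit)                      ; complement: a <-> (not a)
            (if (pair? lit) (cadr lit) (list 'not lit))))

        (define consistent
          (lambda (lit path)                 ; #t unless (comp lit) is on path
            (if (member (comp lit) path) '() #t)))

        (define sat
          (lambda (clauselist)
            (satpath '() clauselist)))       ; initial path is empty

        (define satpath
          (lambda (path clauselist)          ; just returns #t or ()
            (cond ((null? clauselist) #t)            ; out of clauses: satisfied
                  ((null? (car clauselist)) '())     ; empty clause: dead end
                  ((and (consistent (caar clauselist) path)
                        (eq? #t (satpath (cons (caar clauselist) path)
                                         (cdr clauselist))))
                   #t)
                  (else                              ; 1st literal failed: drop it, retry
                   (satpath path
                            (cons (cdar clauselist) (cdr clauselist)))))))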

    Read the article

  • OS choice for functional development

    - by Carsten König
    I'm mainly a .NET developer, so I normally use Windows/Visual Studio (that means: I'm spoiled), but I'm enjoying Haskell and other (mostly functional) languages in my spare time. Now, for Haskell the Windows support is OK (you can get the Haskell Platform), but lately I tried to get a basic Clojure/Scheme environment set up and it's just a pain on Windows. So I'm thinking about trying out another OS for better tooling and language support. Of course that leaves me with Mac OS or some Linux distribution. I never used Mac OS before, and of course Linux would be cheaper (free), and I don't think you can parallel-boot Mac OS on normal PC hardware (can you?). PLUS: I don't have a clue about the tools you can use on those (to me) foreign OSs. To make it short: I want to explore more Haskell, Clojure, Scala, and Scheme, and I need at least good tooling for JavaScript/HTML5/CSS. Support for .NET/Mono/F# would be great, but for this I will still have my Win7 boot. So I'd like to know: what is your preferred OS and distribution (is Ubuntu viable?), and what editor/IDE are you using? Thank you for your help! PS: I'm not sure if this is the right place for this question, but I surely hope so; if not, please let me know where I should move it (Stack Overflow doesn't seem to be the right place IMHO).

    Read the article

  • Windows Azure Recipe: Big Data

    - by Clint Edmonson
    As the name implies, what we're talking about here is the explosion of electronic data that comes from huge volumes of transactions, devices, and sensors being captured by businesses today. This data often comes in unstructured formats and/or too fast for us to effectively process in real time. Collectively, we call these the 4 big data V's: Volume, Velocity, Variety, and Variability. These qualities make this type of data best managed by NoSQL systems like Hadoop, rather than by conventional relational database management systems (RDBMS). We know that there are patterns hidden inside this data that might provide competitive insight into market trends. The key is knowing when and how to leverage these "NoSQL" tools combined with traditional business systems such as SQL-based relational databases, warehouses, and other business intelligence tools.

    Drivers: petabyte-scale data collection and storage; business intelligence and insight.

    Solution: The sketch below shows one of many big data solutions, using Hadoop's unique highly scalable storage and parallel processing capabilities combined with Microsoft Office's business intelligence components to access the data in the cluster.

    Ingredients:
    - Hadoop – this big data industry heavyweight provides both large-scale data storage infrastructure and a highly parallelized map-reduce processing engine to crunch through the data efficiently. Here are the key pieces of the environment:
      - Pig – a platform for analyzing large data sets that consists of a high-level language for expressing data analysis programs, coupled with infrastructure for evaluating these programs.
      - Mahout – a machine learning library with algorithms for clustering, classification, and batch-based collaborative filtering, implemented on top of Apache Hadoop using the map/reduce paradigm.
      - Hive – data warehouse software built on top of Apache Hadoop that facilitates querying and managing large datasets residing in distributed storage. Directly accessible to Microsoft Office and other consumers via add-ins and the Hive ODBC data driver.
      - Pegasus – a peta-scale graph mining system that runs in a parallel, distributed manner on top of Hadoop and provides algorithms for important graph mining tasks such as degree, PageRank, Random Walk with Restart (RWR), radius, and connected components.
      - Sqoop – a tool designed for efficiently transferring bulk data between Apache Hadoop and structured data stores such as relational databases.
      - Flume – a distributed, reliable, and available service for efficiently collecting, aggregating, and moving large amounts of log data to HDFS.
    - Database – directly accessible to Hadoop via the Sqoop-based Microsoft SQL Server Connector for Apache Hadoop; data can be efficiently transferred to traditional relational data stores for replication, reporting, or other needs.
    - Reporting – provides easily consumable reporting when combined with a database being fed from the Hadoop environment.

    Training: These links point to online Windows Azure training labs where you can learn more about the individual ingredients described above.
    - Hadoop Learning Resources (20+ tutorials and labs) – a huge collection of resources for learning about all aspects of Apache Hadoop-based development on Windows Azure and the Hadoop and Windows Azure ecosystems.
    - SQL Azure (7 labs) – Microsoft SQL Azure delivers on the Microsoft Data Platform vision of extending SQL Server capabilities to the cloud as web-based services, enabling you to store structured, semi-structured, and unstructured data.

    See my Windows Azure Resource Guide for more guidance on how to get started, including links to web portals, training kits, samples, and blogs related to Windows Azure.

    Read the article

  • Trouble upgrading OS X, because HD doesn't use GUID Partition Table Scheme

    - by Erik Vold
    So I have a MacBook with Mac OS X 10.5 and I'm trying to upgrade to 10.6, but when I run the upgrade installer I quickly get to a page where I am supposed to 'Select the disk where you want to install Mac OS X'. There is only the one hard drive, so it is auto-selected, and below it I see a warning message; the only button available is 'Go Back'. The warning message says: "Macintosh HD" can't be used because it doesn't use the GUID Partition Table scheme. Use Disk Utility to change the partition scheme. Select the disk, choose the Partition tab, select the Volume Scheme and then click Options. So I followed the above instructions and got to the last step, where I'm supposed to click the 'Options' button. The problem is that I cannot click that button; it is disabled. So what am I supposed to do?

    Read the article

  • Is there something better than a StringBuilder for big blocks of SQL in the code?

    - by Eduardo Molteni
    I'm just tired of making a big SQL statement, testing it, and then pasting the SQL into the code, adding all the sqlstmt.append(" at the beginning and the ") at the end. It's 2011; isn't there a better way to handle a big chunk of strings inside code? Please: don't suggest stored procedures or ORMs. Edit: Found the answer using XML literals and CData. Thanks to all the people who actually tried to answer the question without questioning me for not using ORMs or SPs, and for using VB. Edit 2: The question leaves me thinking that languages could make a better effort at supporting inline SQL with syntax coloring, etc. It would be cheaper than developing Linq2SQL. Just something like:

        dim sql = <sql>
                    SELECT * ...
                  </sql>

    Read the article

  • Hardware addressing and configurable addressing scheme

    - by Zia ur Rahman
    Basically I want to ask a question about the configurable addressing scheme for LAN interface hardware. I have read about it in a book; the main points given are: a configurable addressing scheme provides a mechanism that a customer can use to set a physical address. The mechanism can be manual (switches that must be set when the interface is first installed) or an electronic memory such as an EPROM into which the address can be downloaded from the computer (what does this mean?). Most hardware needs to be configured only once; configuration is usually done when the hardware is first installed. Question: Suppose a network administrator configures the LAN interface hardware (assigns the address) when he installs it. If he later needs to change the physical address of the device, can he change it? Or in this addressing scheme can the hardware only be configured once, so that we cannot reconfigure it later on?

    Read the article

  • Skills to Focus on to Land a Big 5 Software Engineer Position

    - by Megadeth.Metallica
    Guys, I'm in my penultimate quarter of grad school and have a software engineering internship lined up at a big 5 tech company. I have dabbled a lot recently in Python and am average at Java. I want to prepare myself for coding interviews when I apply for new-grad positions at the big 5 tech companies when I graduate at the end of this year. Since I want to have a good shot at all 5 companies (Amazon, Google, Yahoo, Microsoft, and Apple), should I focus my time and effort on mastering and improving my Java? Or is my time better spent checking out other languages and tools (I'm attracted to RoR, Clojure, Git, C#)? I am planning to spend my spring break implementing all the common algorithms and data structures out of my algorithms textbook in Java.

    Read the article

  • Skynet Big Data Demo Using Hexbug Spider Robot, Raspberry Pi, and Java SE Embedded (Part 4)

    - by hinkmond
    Here's the first sign of life of a Hexbug Spider Robot converted to become a Skynet Big Data model T-1. Yes, this is the T-1, the precursor to the Cyberdyne Systems T-101 (and you know where that will lead...). It is demonstrating a heartbeat, using a simple Java SE Embedded program to drive it. See: Skynet Model T-1 Heartbeat. It's alive!!! Well, almost alive. At least there's a pulse. We'll program more of its actions next, and then finally connect it to Skynet Big Data to do more advanced stuff, like hunt for Sarah Connor. Java SE Embedded programming makes it simple to create the first model in the long line of T-XXX robots to take on the world, and Raspberry Pi makes it easy to connect it all together on one simple device. In the next post, I'll show how the wires are connected to drive the T-1 robot. Hinkmond

    Read the article

  • How to get accepted at a big company like Google [on hold]

    - by prof
    I'm 18 years old; I started teaching myself programming when I was twelve. I've developed many projects in PHP, JavaScript, Ruby, and Ruby on Rails. I know a little about C, C++, Objective-C, and extending PHP with extensions created in the C programming language. Now I'm working as a freelance web developer with a very low salary :(. My dream is to have a good career with a very high salary, so I thought of big companies like Google or Microsoft. My question is: how do I get accepted at those big companies? What prerequisites do they want? And do you need to finish a college education?

    Read the article

  • Visual Studio 2010 code display colour scheme - where do I find some properties?

    - by truthseeker
    I'm working on my own colour scheme for displaying code in Visual Studio. I can't find the names of some text sections, so I don't know where to change their colour. :( Can anybody help me and tell me where to find them? I mean, what are the names of the following sections: 1) The grey one (documentation tag value and its quotes) – picture below. 2) The olive colour: the header of an ASP.NET document in the VB language – <% and the underline (picture below). Here is the second hyperlink, but without the beginning, because of these stupid forum rules. To write my code I use the VB.NET language.

    Read the article

  • How can I return a default at loop end in Scheme?

    - by Kufi Annan
    I'm trying to implement backtracking search in Scheme. So far, I have the following:

        (define (backtrack n graph assignment)
          (cond ((assignment-complete n assignment) assignment))
          (define u (select-u graph assignment))
          (define c 1)
          (define result 0)
          (let forLoop ()
            (when (valid-choice graph assignment c)
              (hash-set! assignment u c)
              (set! result (backtrack n graph assignment))
              (cond ((not (eq? result #f)) result))
              (hash-remove! assignment u))
            (set! c (+ c 1))
            (when (>= n c) (forLoop)))
          #f)

    My functions assignment-complete and select-u pass unit tests. The assignment argument is a hash table made with (make-hash), so it should be fine. I believe my problem is related to returning false at the end of the loop if no recursive call returns a non-false value (which would be a valid assignment).
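    The code as posted can't return early as intended: in Scheme, the value of a (cond ...) in the middle of a body is simply discarded, so neither the completed-assignment return nor the inner result return ever escapes the function, and control always falls through to the trailing #f. One idiomatic restructuring is sketched below (assuming assignment-complete, select-u, and valid-choice behave as described in the question); the named let itself returns #f once c exceeds n:

        (define (backtrack n graph assignment)
          (if (assignment-complete n assignment)
              assignment
              (let ((u (select-u graph assignment)))
                (let loop ((c 1))
                  (cond ((> c n) #f)          ; every choice failed: dead end
                        ((valid-choice graph assignment c)
                         (hash-set! assignment u c)
                         (let ((result (backtrack n graph assignment)))
                           (if result
                               result          ; success propagates upward
                               (begin (hash-remove! assignment u)
                                      (loop (+ c 1))))))
                        (else (loop (+ c 1))))))))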

    Read the article

  • car and cdr in Scheme are driving me crazy ...

    - by kristian Roger
    Hi, I'm facing a problem with the car and cdr functions. For example, first I defined a list called x:

        (define x '(a (bc) d ((ef) g)))

    so x is now equal to (a (bc) d ((ef) g)). Now, for example, I need to get the g from this list using only car and cdr (no shortcuts such as caddr or cddr!). The correct answer is:

        (car (cdr (car (cdr (cdr (cdr x))))))

    BUT how? :-( I work according to the rules (car gives the head of the list and cdr gives the tail), but instead of getting the answer above I keep reaching wrong answers. Can anyone help me understand this... give me a way to solve it step by step? Thanks in advance. I'm really sick of Scheme.
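    Working from the inside out, each cdr walks one element further along the top-level list, and each car descends one level of nesting. A trace of the accepted answer:

        (define x '(a (bc) d ((ef) g)))
        (cdr x)                                ; ((bc) d ((ef) g))
        (cdr (cdr x))                          ; (d ((ef) g))
        (cdr (cdr (cdr x)))                    ; (((ef) g))
        (car (cdr (cdr (cdr x))))              ; ((ef) g)  <- car strips a level
        (cdr (car (cdr (cdr (cdr x)))))        ; (g)
        (car (cdr (car (cdr (cdr (cdr x))))))  ; g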

    Read the article

  • Websites' color scheme generators - what to do with them in real life?

    - by Marco Demaio
    On the web there are plenty of color scheme/palette generator tools and color palette galleries. All of these tools/galleries show a palette of 3 to 5 colors as the final result. Some of these tools: http://kuler.adobe.com, http://www.colorexplorer.com. I know my question might sound ridiculous to someone involved in web design, but I don't understand what to do with these color palettes. I mean, if I have to create a website, how am I supposed to apply such a palette to it? Which color goes to the foreground text, which one to the background, which one to the links, which one to the page titles, and so on? What are these color palette generators useful for, then?! Thanks!

    Read the article

  • How can I do web programming with Lisp or Scheme?

    - by Castro
    I usually write web apps in PHP, Ruby, or Perl. I am starting to study Scheme and I want to try a web project with this language, but I can't figure out the best environment for it. I am looking for the following features: a simple way to get request parameters (something like: get-get #key, get-post #key, get-cookie #key); MySQL access; HTML form generators, processing, validators, etc.; helpers for filtering user input (something like htmlentities, escaping variables before putting them in queries, etc.); FLOSS; and GNU/Linux friendly. So, thanks in advance for all replies.

    Read the article

  • How do I pass a number from a list as a parameter in Scheme?

    - by wyatt
    I need to take a number from a list and convert it to a number so that I can pass it as a parameter. I'm trying to make a 1-bit adder in Scheme. I've written the code for the or gate and the xor gate and also the half adder, and now I'm trying to combine them all to make a full adder. I'm not sure if I'm going about it the right way. Any input will be appreciated, thank you.

        (define or-gate
          (lambda (a b)
            (if (= a 1) 1 (if (= b 1) 1 0))))

        (define xor-gate
          (lambda (a b)
            (if (= a b) 0 1)))

        (define ha
          (lambda (a b)
            (list (xor-gate a b) (and-gate a b))))

        (define fa
          (lambda (a b cin)
            (or-gate (cdr (ha cin (car (ha a b))))
                     (cdr (ha a b)))))

    The issue I get when I run the program is that the half adder (ha) function outputs a list as a value, and that makes the values incompatible with my other procedures, because they require numbers and not lists. I feel like there is a simple solution but I don't know it.
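    Since ha returns the two-element list (sum carry), the carry has to be pulled out with cadr; cdr alone yields the one-element list (carry), not the number, which is exactly the incompatibility described. A sketch of a full adder built that way, itself returning (sum carry); the and-gate shown is an assumed helper, since it isn't included in the question:

        (define and-gate                     ; assumed helper (not in the question)
          (lambda (a b)
            (if (and (= a 1) (= b 1)) 1 0)))

        (define fa
          (lambda (a b cin)
            (let* ((h1   (ha a b))           ; (sum carry) of a + b
                   (h2   (ha cin (car h1)))  ; (sum carry) of cin + that sum
                   (sum  (car h2))
                   (cout (or-gate (cadr h1) (cadr h2))))
              (list sum cout))))             ; fa also returns (sum carry)

        ; (fa 1 1 1) => (1 1)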

    Read the article

  • How do I map a macro across a list in Scheme?

    - by josh
    I have a Scheme macro and a long list, and I'd like to map the macro across the list, just as if it were a function. How can I do that using R5RS? The macro accepts several arguments: (mac a b c d). The list is (define my-list '((a1 b1 c1 d1) (a2 b2 c2 d2) ... (an bn cn dn))), and I'd like to end up with: (begin (mac a1 b1 c1 d1) (mac a2 b2 c2 d2) ... (mac an bn cn dn)). (By the way, as you can see, I'd like to splice the list of arguments too.)
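    One hedged sketch: syntax-rules can recurse over the argument groups, but only when they appear literally at the call site; a macro expands at compile time and cannot walk a list that exists only at run time (as my-list does after define). Assuming the groups are written out in the call (mac-map is a hypothetical name):

        (define-syntax mac-map
          (syntax-rules ()
            ((_ (a b c d))
             (mac a b c d))
            ((_ (a b c d) rest ...)
             (begin (mac a b c d)
                    (mac-map rest ...)))))

        ; (mac-map (a1 b1 c1 d1) (a2 b2 c2 d2)) expands into
        ; (begin (mac a1 b1 c1 d1) (mac a2 b2 c2 d2))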

    Read the article

  • Is it possible to define an XML schema with node names specified via regular expressions?

    - by MartyIX
    Hello, I know it is probably a question against XML philosophy, but still: is it possible to define a schema for XML like this:

        <Root>
          <arbitrary-name-of-node>
            <Name></Name>
            <Position></Position>
            <!-- ... -->
          </arbitrary-name-of-node>
          <arbitrary-name-of-node>
            <Name></Name>
            <Position></Position>
            <!-- ... -->
          </arbitrary-name-of-node>
        </Root>

    where arbitrary-name-of-node matches the regular expression [a-zA-Z0-9]? Thanks for an answer!

    Read the article

  • Globacom and mCentric Deploy BDA and NoSQL Database to analyze network traffic 40x faster

    - by Jean-Pierre Dijcks
    In a fast-evolving market, speed is of the essence. mCentric and Globacom leveraged Oracle Big Data Appliance and Oracle NoSQL Database to save over 35,000 call-processing minutes daily and analyze network traffic 40x faster. Here are some highlights from the profile:

    Why Oracle: "Oracle Big Data Appliance works well for very large amounts of structured and unstructured data. It is the most agile events-storage system for our collect-it-now and analyze-it-later set of business requirements. Moreover, choosing a prebuilt solution drastically reduced implementation time. We got the big data benefits without needing to assemble and tune a custom-built system, and without the hidden costs required to maintain a large number of servers in our data center. A single support license covers both the hardware and the integrated software, and we have one central point of contact for support," said Sanjib Roy, CTO, Globacom.

    Implementation Process: It took only five days for Oracle partner mCentric to deploy Oracle Big Data Appliance and perform the software install and configuration, certification, and resiliency testing. The entire process, from site planning to phase-I go-live, was executed in just over ten weeks, well ahead of the four months allocated to complete the project. Oracle partner mCentric leveraged Oracle Advanced Customer Support Services' implementation methodology to ensure configurations were tailored for peak performance, all patches were applied, and software and communications were consistently tested using proven methodologies and best practices.

    Read the entire profile here.

    Read the article

  • SQL – Migrate Database from SQL Server to NuoDB – A Quick Tutorial

    - by Pinal Dave
    Data is growing exponentially, and every organization with growing data is thinking about the next big innovation in the world of big data. Big data is indeed the future for every organization at some point in time. Just like every other next big thing, big data has its own challenges and issues. The biggest challenge associated with big data is finding the ideal platform that supports the scalability and growth of the data. If you are a regular reader of this blog, you must be familiar with NuoDB. I have been working with NuoDB for a while, and their recent release is the best thus far. NuoDB is an elastically scalable SQL database that can run on localhost, datacenter, and cloud-based resources. A key feature of the product is that it does not require sharding (read more here). Last week, I was able to install NuoDB in less than 90 seconds and have explored their Explorer and Admin sections. You can read about my experiences in these posts:

    - SQL – Step by Step Guide to Download and Install NuoDB – Getting Started with NuoDB
    - SQL – Quick Start with Admin Sections of NuoDB – Manage NuoDB Database
    - SQL – Quick Start with Explorer Sections of NuoDB – Query NuoDB Database

    Many SQL Authority readers have been following me on my journey to evaluate NuoDB. One of the frequently asked questions I've received from you is whether there is any way to migrate data from SQL Server to NuoDB. There is indeed a way to do so, and NuoDB provides a fantastic tool that can help users do it. NuoDB Migrator is a command-line utility that supports the migration of Microsoft SQL Server, MySQL, Oracle, and PostgreSQL schemas and data to NuoDB. The migration to NuoDB is a three-step process: NuoDB Migrator generates a schema for a target NuoDB database, dumps the data from the source database, and loads it into the target NuoDB database. Let's see how we can migrate our data from SQL Server to NuoDB using this simple three-step approach. But before we do that, we will create a sample database in MSSQL, which we will later migrate to NuoDB.

    Setup Step 1: Build sample data.

        CREATE DATABASE [Test];
        CREATE TABLE [Department](
            [DepartmentID] [smallint] NOT NULL,
            [Name] VARCHAR(100) NOT NULL,
            [GroupName] VARCHAR(100) NOT NULL,
            [ModifiedDate] [datetime] NOT NULL,
            CONSTRAINT [PK_Department_DepartmentID] PRIMARY KEY CLUSTERED ([DepartmentID] ASC)
        ) ON [PRIMARY];
        INSERT INTO Department
        SELECT * FROM AdventureWorks2012.HumanResources.Department;

    Note that I am using the SQL Server AdventureWorks database to build this sample table, but you can build it any way you prefer.

    Setup Step 2: Install 64-bit Java. Before you can begin the migration process to NuoDB, make sure you have 64-bit Java installed on your computer; the NuoDB Migrator tool is built in Java. You can download 64-bit Java for Windows, Mac OSX, or Linux from the following link: http://java.com/en/download/manual.jsp. One more thing to remember: make sure the JAVA_HOME environment variable is set, or else the tool will not work. Here is how you can do it: go to My Computer >> right-click >> select Properties >> click Advanced System Settings >> click Environment Variables >> click New and enter the following values (use your own Java installation directory in the Variable Value field):

        Variable Name:  JAVA_HOME
        Variable Value: C:\Program Files\Java\jre7

    Setup Step 3: Install a JDBC driver for SQL Server. There are two JDBC drivers available for SQL Server; select the one you prefer by following one of the two links below: the Microsoft JDBC Driver or the jTDS JDBC Driver. In this example we will be using the jTDS JDBC driver. Once you download the driver, move it to your NuoDB installation folder. In my case, I have moved the JAR file of the driver into the C:\Program Files\NuoDB\tools\migrator\jar folder, as this is my NuoDB installation directory. Now we are all set to start the three-step migration process from SQL Server to NuoDB.

    Migration Step 1: NuoDB schema generation. Here is the command I use to generate a schema of my SQL Server database in NuoDB. First I go to the folder C:\Program Files\NuoDB\tools\migrator\bin and execute the nuodb-migrator.bat file. Note that my database name is 'test', and my username and password are also 'test'. You can see that my SQL Server database is running on localhost on port 1433, and the schema of the table is 'dbo'.

        nuodb-migrator schema --source.driver=net.sourceforge.jtds.jdbc.Driver --source.url=jdbc:jtds:sqlserver://localhost:1433/ --source.username=test --source.password=test --source.catalog=test --source.schema=dbo --output.path=/tmp/schema.sql

    The above command generates a schema of all my SQL Server tables and puts it in the file C:\tmp\schema.sql. You can open the schema.sql file and execute it directly in your NuoDB instance. You can follow the link here to see how to execute a SQL script in NuoDB. Please note that if you have not yet created the schema in the NuoDB database, you should create it before executing this step.

    Migration Step 2: Generate the dump file of the data. Once you have recreated your schema in NuoDB from SQL Server, the next step is very easy. Here we create a CSV-format dump file which will contain all the data from all the tables of the SQL Server database. The command is very similar to the one above. Be aware that this step may take a while, depending on your database size.

        nuodb-migrator dump --source.driver=net.sourceforge.jtds.jdbc.Driver --source.url=jdbc:jtds:sqlserver://localhost:1433/ --source.username=test --source.password=test --source.catalog=test --source.schema=dbo --output.type=csv --output.path=/tmp/dump.cat

    Once the above command is successfully executed, you can find your CSV file in the C:\tmp\ folder. However, you do not have to do anything with it manually; the third and final step will complete the migration process.

    Migration Step 3: Load the data into NuoDB. After building the schema and dumping the data, the final step takes the CSV file and loads it into the NuoDB database:

        nuodb-migrator load --target.url=jdbc:com.nuodb://localhost:48004/mytest --target.schema=dbo --target.username=test --target.password=test --input.path=/tmp/dump.cat

    Please note that in the above script we are now targeting the NuoDB database, which we have already created with the name "MyTest". If the database does not exist, create it manually before executing the above script. I have kept the username and password as "test", but please make sure that you create a more secure password for your database, for security reasons.

    Voila! You're done. It took 3 setup and 3 migration steps to migrate your SQL Server database to NuoDB. You can now start exploring the database and building excellent scale-out applications. In this blog post, I have done my best to come up with a simple and easy process which you can follow to migrate your app from SQL Server to NuoDB.

    Download NuoDB: I strongly encourage you to download NuoDB and go through my 3-step migration tutorial from SQL Server to NuoDB. Additionally, here are two very important blog posts from NuoDB CTO Seth Proctor. He has written excellent posts on the concept of Administrative Domains. NuoDB has this concept of an Administrative Domain, which is a collection of hosts that can run one or multiple databases. Each database has its own TEs and SMs, but all are managed within the Admin Console for that particular domain.

    http://www.nuodb.com/techblog/2013/03/11/getting-started-provisioning-a-domain/
    http://www.nuodb.com/techblog/2013/03/14/getting-started-running-a-database/

    Reference: Pinal Dave (http://blog.sqlauthority.com). Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology. Tagged: NuoDB

    Read the article

  • MapRedux - PowerShell and Big Data

    - by Dittenhafer Solutions
    Have you been hearing about "big data", "map reduce", and other large-scale computing terms over the past couple of years and been curious to dig into more detail? Have you read some of the Apache Hadoop online documentation and unfortunately concluded that it wasn't feasible to set up a "test" Hadoop environment on your machine? More recently, I have read about some of Microsoft's work to enable Hadoop on the Azure cloud. Being a "Microsoft"-leaning technologist, I am more inclined to be successful with experimentation on the Windows platform. Of course, it is not that I am "religious" about one set of technologies over another, but rather more experienced. Anyway, within the past couple of weeks I have been thinking about PowerShell a bit more as the 2012 PowerShell Scripting Games approach, and it occurred to me that PowerShell's support for Windows Remote Management (WinRM) and some other inherent features might lend themselves particularly well to a simple implementation of the MapReduce framework. I fired up my PowerShell ISE and started writing just to see where it would take me. Quite simply, the ScriptBlock feature combined with the ability of Invoke-Command to create remote jobs on networked servers provides much of the plumbing of a distributed computing environment. There are some limiting factors, of course. Microsoft provided some default settings which prevent PowerShell from taking over a network without administrative approval first. But even with just one adjustment, a given Windows-based machine can become a node in a MapReduce-style distributed computing environment.

    Ok, so enough introduction. Let's talk about the code. First, any machine that will participate as a remote "node" will need WinRM enabled for remote access, as shown below. This is not exactly practical for hundreds of intended nodes, but for one (or five) machines in a test environment it does just fine.

        C:> winrm quickconfig
        WinRM is not set up to receive requests on this machine.
        The following changes must be made:
        Set the WinRM service type to auto start.
        Start the WinRM service.
        Make these changes [y/n]? y

    Alternatively, you could take the approach described in the "Remotely enable PSRemoting" post from the TechNet forum and use PowerShell to create remote scheduled tasks that will call Enable-PSRemoting on each intended node.

    Invoke-MapRedux. Moving on, now that you have one or more remote "nodes" enabled, you can consider the actual Map and Reduce algorithms. Consider the following snippet:

        $MyMrResults = Invoke-MapRedux -MapReduceItem $Mr -ComputerName $MyNodes -DataSet $dataset -Verbose

    Invoke-MapRedux takes an instance of a MapReduceItem, which references the Map and Reduce scriptblocks, an array of computer names which are the remote nodes, and the initial data set to be processed. As simple as that, you can start working with concepts of big data and the MapReduce paradigm. Now, how did we get there? I have published the initial version of my PsMapRedux PowerShell module on GitHub. The PsMapRedux module provides the Invoke-MapRedux function described above. Feel free to browse the underlying code and even contribute to the project! In a later post, I plan to show some of the inner workings of the module, but for now let's move on to how the Map and Reduce functions are defined.

    Map. Both the Map and Reduce functions need to follow a prescribed prototype. The prototype for a Map function in the MapRedux module is a simple scriptblock that takes one PsObject parameter and returns a hashtable. It is important to note that the PsObject $dataset parameter is a MapRedux custom object that has a "Data" property which offers an array of data to be processed by the Map function.

        $aMap = {
            Param ([PsObject] $dataset)
            # Indicate the job is running on the remote node.
            Write-Host ($env:computername + "::Map");
            # The hashtable to return
            $list = @{};
            # ... Perform the mapping work and prepare the $list hashtable
            # result with your custom PSObject. The $dataset has a single
            # 'Data' property which contains an array of data rows, a
            # subset of the originally submitted data set.
            # Return the hashtable (Key, PSObject)
            Write-Output $list;
        }

    Reduce. Likewise, the Reduce function must follow a simple prototype that takes a $key and a result $dataset from the MapRedux partitioning function (which joins the Map results by key). Again, the $dataset is a MapRedux custom object that has a "Data" property as described in the Map section.

        $aReduce = {
            Param ([object] $key, [PSObject] $dataset)
            Write-Host ($env:computername + "::Reduce - Count: " + $dataset.Data.Count)
            # The hashtable to return
            $redux = @{};
            # Return
            Write-Output $redux;
        }

    All Together Now. When everything is put together in a short example script, you implement your Map and Reduce functions, query for some starting data, build the MapReduxItem via New-MapReduxItem, and call Invoke-MapRedux to get the process started:

        # Import the MapRedux and SQL Server providers
        Import-Module "MapRedux"
        Import-Module "sqlps" -DisableNameChecking

        # Query the database for a dataset
        Set-Location SQLSERVER:\sql\dbserver1\default\databases\myDb
        $query = "SELECT MyKey, Date, Value1 FROM BigData ORDER BY MyKey";
        Write-Host "Query: $query"
        $dataset = Invoke-SqlCmd -query $query

        # Build the Map function
        $MyMap = {
            Param ([PsObject] $dataset)
            Write-Host ($env:computername + "::Map");
            $list = @{};
            foreach($row in $dataset.Data)
            {
                # Write-Host ("Key: " + $row.MyKey.ToString());
                if($list.ContainsKey($row.MyKey) -eq $true)
                {
                    $s = $list.Item($row.MyKey);
                    $s.Sum += $row.Value1;
                    $s.Count++;
                }
                else
                {
                    $s = New-Object PSObject;
                    $s | Add-Member -Type NoteProperty -Name MyKey -Value $row.MyKey;
                    $s | Add-Member -Type NoteProperty -Name Sum -Value $row.Value1;
                    $list.Add($row.MyKey, $s);
                }
            }
            Write-Output $list;
        }

        # Build the Reduce function
        $MyReduce = {
            Param ([object] $key, [PSObject] $dataset)
            Write-Host ($env:computername + "::Reduce - Count: " + $dataset.Data.Count)
            $redux = @{};
            $count = 0;
            foreach($s in $dataset.Data)
            {
                $sum += $s.Sum;
                $count += 1;
            }
            # Reduce
            $redux.Add($s.MyKey, $sum / $count);
            # Return
            Write-Output $redux;
        }

        # Create the item data
        $Mr = New-MapReduxItem "My Test MapReduce Job" $MyMap $MyReduce

        # Array of processing nodes...
        $MyNodes = ("node1", "node2", "node3", "node4", "localhost")

        # Run the MapReduce routine...
        $MyMrResults = Invoke-MapRedux -MapReduceItem $Mr -ComputerName $MyNodes -DataSet $dataset -Verbose

        # Show the results
        Set-Location C:\
        $MyMrResults | Out-GridView

    Conclusion. I hope you have seen through this article that PowerShell has a significant infrastructure available for distributed computing. While it does take some code to expose a MapReduce-style framework, much of the work is already done, and PowerShell could prove to be the easiest platform on which to develop and run big data jobs in your corporate data center, potentially in the Azure cloud, or certainly as an academic exercise at home or school.

    Follow me on Twitter to stay up to date on the continuing progress of my PowerShell MapRedux module, and thanks for reading! Daniel

    Read the article
