Search Results

Search found 62526 results on 2502 pages for 'data processing'.


  • Big Data – Buzz Words: What is NewSQL – Day 10 of 21

    - by Pinal Dave
    In yesterday’s blog post we learned the importance of the relational database. In this article we will take a quick look at what NewSQL is. What is NewSQL? NewSQL stands for the new class of scalable, high-performance SQL database vendors. The products sold by NewSQL vendors are horizontally scalable. NewSQL is not a kind of database; it describes vendors who offer emerging data products with relational database properties (ACID, transactions, etc.) along with high performance. Products from NewSQL vendors usually keep data in memory for speedy access and offer immediate scalability. The term NewSQL was coined by 451 Group analyst Matthew Aslett in a blog post. On the definition of NewSQL, Aslett writes: “NewSQL” is our shorthand for the various new scalable/high performance SQL database vendors. We have previously referred to these products as ‘ScalableSQL‘ to differentiate them from the incumbent relational database products. Since this implies horizontal scalability, which is not necessarily a feature of all the products, we adopted the term ‘NewSQL’ in the new report. And to clarify, like NoSQL, NewSQL is not to be taken too literally: the new thing about the NewSQL vendors is the vendor, not the SQL. In other words, NewSQL incorporates the concepts and principles of Structured Query Language (SQL) and NoSQL; it combines the reliability of SQL with the speed and performance of NoSQL. Categories of NewSQL There are three major categories of NewSQL: New Architecture – each node owns a subset of the data and queries are split into smaller queries that are sent to the nodes to process the data (e.g. NuoDB, Clustrix, VoltDB); MySQL Engines – highly optimized storage engines that keep the MySQL interface (e.g. InnoDB, Akiban); Transparent Sharding – systems that automatically split the database across multiple nodes (e.g. ScaleArc). Summary In simple words, NewSQL is a kind of database that follows relational database principles and provides scalability like NoSQL. Tomorrow In tomorrow’s blog post we will discuss the role of cloud computing in Big Data. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • The Best Articles for Backing Up and Syncing Your Data

    - by Lori Kaufman
    World Backup Day is March 31st and we decided to provide you with some useful information to make backing up your data easier. We’ve published articles about backing up various types of data and settings both offline and online. There are all kinds of settings on your computer to back up in addition to your personal data, such as Wi-Fi passwords, drivers, and settings for programs like web browsers, Office, and Windows Live Writer. There are also many tools available to help you keep your data and settings backed up.

    Read the article

  • Big Data Learning Resources

    - by Lara Rubbelke
    I have recently had several requests from people asking for resources to learn about Big Data and Hadoop. Below is a list of resources that I typically recommend. I'll update this list as I find more resources. Let's crowdsource this... Tell me your favorite resources and I'll get them on the list! Books and Whitepapers: Planning for Big Data (free e-book) – a great primer on the general Big Data space. This is always my recommendation for people who are new to Big Data and are trying to understand it....(read more)

    Read the article

  • Distortion in format of data in WordPad file when shifted from Windows XP to Windows 7

    - by Harpreet
    I have many data files which were set to open in WordPad in Windows XP. Those files have a particular format: the name of the data file, the number of data columns, the name of the data in each column (column_1 through column_n), and then the rows of data (column_1 column_2 column_3 ... column_n). Now my computer has been formatted and the OS changed to Windows 7; however, when I open my data files in WordPad the above format is no longer present. The layout in WordPad on Windows 7 appears distorted. Does anyone know how to restore the format shown above, which is what the data used to look like in XP? I have attached a snapshot of the new distorted layout as seen in WordPad on Windows 7. The snapshot shows 100 column names, but only 5 data columns are present when there should actually be 100 data columns.

    Read the article

  • EBS Concurrent Processing Information Center

    - by LuciaC
    Do you have problems or questions about concurrent request processing?  Do you want to know: How and when to run CP Diagnostics? What are the latest Hot Topics being looked at for Concurrent Processing? All about the Concurrent Process Analyzer self-service Health-Check script? Go to the EBS Concurrent Processing Information Center (Doc ID 1304305.1) and find out the above and lots more!

    Read the article

  • Data Loading Issues? Try the new Demantra Data Load Guided Resolution

    - by user702295
    Hello!  Do you have data loading issues?  Perhaps you are trying the new partial schema export tool.  New to Demantra is the Data Load Guided Resolution, document 1461899.1.  This interactive guide will help you quickly locate known solutions to previously discovered issues, from performance, ORA and ODPM errors to collections-related issues that have no hard error number.  The guide covers diagnosis of data being imported into Demantra as well as data being exported from Demantra.  Contact me with any questions or suggestions.  Thank You!

    Read the article

  • How to recover a deleted NTFS partition and its data entirely after installing Ubuntu 13.04

    - by Anson Varghese
    I installed Ubuntu 13.04 onto my HP 2231tx computer. During installation all of my data was erased; I didn't know that all three of my partitions would be deleted, and I was shocked to find that all of my personal data was gone. I didn't know how to resolve this problem, so I searched Google for an answer. I found a program called TestDisk and used it to recover about half of my data, but the recovered data did not include my personal photos and videos. Is there a way to recover the other half?

    Read the article

  • I want to record a screencast of a Processing sketch

    - by nathanvda
    I have created a music visualisation using Processing. I now want to convert it to a video, and the least obtrusive way I could think of is to record a screencast. Exporting to video including audio from within Processing itself on Ubuntu seems to be an unsolved issue: it is very hard and could also cause timing sync issues (since the music keeps running while images are captured). So on to the screencast method. Dead-easy, I figured. But I was wrong. The first hurdle was to find a way to record the sound from the audio output (and not the mic). I did find a tutorial for that here. In short: use gtk-recordmydesktop and PulseAudio. But, apparently, Processing does not use ALSA: when the sound is playing, it does not appear in the PulseAudio mixer. How can I record the audio now?

    Read the article

  • How to fix the error dpkg: error processing colord (--configure)

    - by ranjitpradhan
    I have upgraded my Ubuntu from 11.10 to 12.04. I have since found that when I try to install some packages it shows an error. After reading some blogs I tried to fix that error with "sudo dpkg --configure -a", but when I run this command it shows another error: Setting up colord (0.1.16-2) ... useradd: cannot lock /etc/passwd; try again later. adduser: `/usr/sbin/useradd -d /var/lib/colord -g colord -s /bin/false -u 115 colord' returned error code 1. Exiting. dpkg: error processing colord (--configure): subprocess installed post-installation script returned error exit status 1 Setting up whoopsie (0.1.32) ... useradd: cannot lock /etc/passwd; try again later. adduser: `/usr/sbin/useradd -d /nonexistent -g whoopsie -s /bin/false -u 115 whoopsie' returned error code 1. Exiting. dpkg: error processing whoopsie (--configure): subprocess installed post-installation script returned error exit status 1 Setting up lightdm (1.2.1-0ubuntu1) ... Adding system user `lightdm' (UID 115) ... Adding new user `lightdm' (UID 115) with group `lightdm' ... useradd: cannot lock /etc/passwd; try again later. adduser: `/usr/sbin/useradd -d /var/lib/lightdm -g lightdm -s /bin/false -u 115 lightdm' returned error code 1. Exiting. dpkg: error processing lightdm (--configure): subprocess installed post-installation script returned error exit status 1 dpkg: dependency problems prevent configuration of ubuntu-desktop: ubuntu-desktop depends on lightdm; however: Package lightdm is not configured yet. dpkg: error processing ubuntu-desktop (--configure): dependency problems - leaving unconfigured Errors were encountered while processing: colord whoopsie lightdm ubuntu-desktop What can I do now?

    Read the article

  • What's the best way to learn image processing?

    - by rdasxy
    I'm a senior in college who hasn't done much image processing before (except for some basic image compression on smartphones). I'm starting a research project on machine learning next semester that will require some biomedical image processing. What's the best way to get up to speed with the basics of image processing in about two months? Or is this impractical? My impression is that once I'm good with the basics, learning more from other resources will be easier.

    Read the article

  • E-Book on big data (featuring Analysts, Customers and more)

    - by Jean-Pierre Dijcks
    As we are gearing up for Openworld, here is a nice E-book on big data to start paging through. It contains Gartner's take on big data, customer and partner interviews and a lot more good info. Enjoy the read so you come prepared for Openworld!! Read the E-Book here. For those coming to Oracle Openworld (or the Americas Cup races around the same time), you can find big data sessions via this URL. Enjoy!!

    Read the article

  • Choices in Architecture, Design, Algorithms, Data Structures for effective RDF Reasoning and Querying in a Big Data Environment [on hold]

    - by user2891213
    As part of my academic project I would like to know what choices in architecture, design, algorithms, and data structures we need in order to provide effective and efficient RDF reasoning and querying in a Big Data environment. Basically I want information on the points below: What systems and software make up an appropriate architecture? What kind of API layer(s) would we need on top of the Big Data stores to make this possible? The indexing structures we will need. The appropriate algorithms, including algorithms for query planning across Big Data stores. The performance analysis and cost models we will need to justify the design decisions made along the way. Can anyone please provide pointers? Thanks, David

    Read the article

  • E-Mail Center: How to Debug Auto-Processing Issues

    - by LuciaC
    Do you use E-mail Center Auto-Processing functionality?  Auto-processing rules can be defined which will process an email and initiate actions such as sending an auto-reply, auto-acknowledgement or service request update or creation.  If you're using auto-processing in your environment, then you will have noticed that, when things go wrong, often the only indication that there is an issue is if the email goes to the Supervisor Queue for manual processing instead of being sent.  Oracle Development has developed a script to help you debug when an email is meant to be auto-processed but is failing. There are scripts for R12.0.6 through to R12.1.3. See Doc ID 1427925.1 for details of the script and how to use it.

    Read the article

  • Data Structure Behind Amazon S3's Keys (Filtering Data Structure)

    - by dimo414
    I'd like to implement a data structure similar to the lookup functionality of Amazon S3. For those of you who don't know what I'm talking about, Amazon S3 stores all files at the root, but allows you to look up groups of files by common prefixes in their names, therefore replicating the power of a directory tree without the complexity of it. The catch is, both lookup and filter operations are O(1) (or close enough that even on very large buckets - S3's disk equivalents - both operations might as well be O(1)). So in short, I'm looking for a data structure that functions like a hash map, with the added benefit of efficient (at the very least not O(n)) filtering. The best I can come up with is extending HashMap so that it also contains a (sorted) list of contents, and doing a binary search for the range that matches the prefix, and returning that set. This seems slow to me, but I can't think of any other way to do it. Does anyone know either how Amazon does it, or a better way to implement this data structure?
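
    The approach described in the question (an O(1) hash map for exact keys plus a sorted index that supports prefix range scans) can be sketched as below. This is a minimal illustration assuming Java, with a TreeMap standing in for the sorted list of contents; the class and method names are made up for the example and are not how S3 actually works.

    ```java
    import java.util.HashMap;
    import java.util.Map;
    import java.util.SortedMap;
    import java.util.TreeMap;

    // Sketch: exact-key lookups hit the HashMap in O(1); prefix filtering uses a
    // TreeMap range view, which is O(log n) to locate plus O(k) to iterate the
    // k matches, rather than O(n) over the whole store.
    public class PrefixStore<V> {
        private final Map<String, V> byKey = new HashMap<>();
        private final TreeMap<String, V> sorted = new TreeMap<>();

        public void put(String key, V value) {
            byKey.put(key, value);
            sorted.put(key, value);
        }

        public V get(String key) {
            return byKey.get(key); // O(1) exact lookup
        }

        // All entries whose key starts with the given prefix.
        public SortedMap<String, V> withPrefix(String prefix) {
            if (prefix.isEmpty()) {
                return sorted;
            }
            // Smallest string greater than every string with this prefix
            // (ignores the Character.MAX_VALUE edge case for brevity).
            int last = prefix.length() - 1;
            String upperBound = prefix.substring(0, last) + (char) (prefix.charAt(last) + 1);
            return sorted.subMap(prefix, upperBound); // inclusive from, exclusive to
        }
    }
    ```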

    Read the article

  • Best Data Structure For Time Series Data

    - by TriParkinson
    Hi all, I wonder if someone could take a minute out of their day to give their two cents on my problem. I would like some suggestions on what would be the best data structure for representing, on disk, a large data set of time series data. The main priority is speed of insertion, with the other priorities, in decreasing order: speed of retrieval, size on disk, size in memory, speed of removal. I have seen that B+ trees are often used in databases because of their fast search times, but what about insertion times? Is a linked list really the way to go? Thanks in advance for your time, Tri
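
    Since insertion speed is the stated top priority, the simplest on-disk layout to compare against is an append-only log of fixed-size records: every insert is a sequential write, and if timestamps arrive in increasing order the fixed record size allows binary search on read. The sketch below assumes Java; the class name and record format are made up for illustration, and it deliberately ignores deletes and out-of-order inserts, which is where a B+ tree or similar structure earns its keep.

    ```java
    import java.io.BufferedOutputStream;
    import java.io.Closeable;
    import java.io.DataOutputStream;
    import java.io.File;
    import java.io.FileOutputStream;
    import java.io.IOException;

    // Sketch: append-only log of (timestamp, value) pairs, 16 bytes per record.
    // Inserts are buffered sequential writes, which is about as fast as disk
    // insertion gets; retrieval by time range can binary search the file because
    // records are fixed-size and (assumed) ordered by timestamp.
    public class TimeSeriesLog implements Closeable {
        private final DataOutputStream out;

        public TimeSeriesLog(File file) throws IOException {
            this.out = new DataOutputStream(
                    new BufferedOutputStream(new FileOutputStream(file, true))); // append mode
        }

        public void append(long timestampMillis, double value) throws IOException {
            out.writeLong(timestampMillis);
            out.writeDouble(value);
        }

        @Override
        public void close() throws IOException {
            out.close();
        }
    }
    ```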

    Read the article

  • Asynchronous email processing in Java web application

    - by Denise
    Hi everyone, I would like to implement asynchronous email sending in my web application when users register for a new account. This is so that if there is a problem or delay in sending the email message (e.g. the mail server is down or the network connection to the mail server is slow) the user won't be kept waiting for the sending to complete. My web app is built using Spring and Hibernate's implementation of JPA. What would be the best and most reliable way for me to implement asynchronous email processing in this web application? I am thinking about persisting the email information in a database table which is then regularly polled by a Quartz (http://www.opensymphony.com/quartz/) scheduled job for updates and when it finds new unsent emails, it attempts to send them. Is this a reasonable way of implementing what I want? Thanks.
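
    A rough sketch of the table-polling approach described above, assuming Java with Quartz for scheduling and Spring's JavaMailSender for delivery; PendingEmail and PendingEmailDao are hypothetical application types (a queued-message entity and a DAO that returns unsent rows), not library classes.

    ```java
    import java.util.List;
    import org.quartz.Job;
    import org.quartz.JobExecutionContext;
    import org.quartz.JobExecutionException;
    import org.springframework.mail.SimpleMailMessage;
    import org.springframework.mail.javamail.JavaMailSender;

    // Hypothetical application types: a queued message row and the DAO that reads it.
    interface PendingEmailDao {
        List<PendingEmail> findUnsent();
        void markSent(PendingEmail email);
    }

    class PendingEmail {
        String recipient;
        String subject;
        String body;
    }

    // Quartz job that runs on a schedule, picks up unsent emails persisted by the
    // registration flow, and tries to send them. Failures leave the row unsent so
    // the next run retries, which keeps the user-facing request fast either way.
    public class OutboundEmailJob implements Job {

        private PendingEmailDao pendingEmailDao; // injected, e.g. via a Quartz JobFactory
        private JavaMailSender mailSender;       // configured Spring mail sender, also injected

        @Override
        public void execute(JobExecutionContext context) throws JobExecutionException {
            for (PendingEmail pending : pendingEmailDao.findUnsent()) {
                try {
                    SimpleMailMessage message = new SimpleMailMessage();
                    message.setTo(pending.recipient);
                    message.setSubject(pending.subject);
                    message.setText(pending.body);
                    mailSender.send(message);
                    pendingEmailDao.markSent(pending); // only mark as sent on success
                } catch (Exception e) {
                    // swallow and retry on the next poll; a real job would log this
                }
            }
        }
    }
    ```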

    Read the article

  • MySQL: Blank row in table after LOAD DATA INFILE

    - by Tom
    Hi, I'm uploading a large amount of data from a CSV (I'm doing it via MySQL Workbench): LOAD DATA INFILE 'C:/development/mydoc.csv' INTO TABLE mydatabase.mytable CHARACTER SET utf8 FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' LINES TERMINATED BY '\r'; However, I'm noticing that it keeps adding an empty line full of nulls/zeros after the last record. I'm guessing it's because of the "LINES TERMINATED" command. However, I need that to load the data in correctly. Is there some way around this / some better SQL to avoid the blank row in the table? Thanks

    Read the article

  • Automatic e-mail processing

    - by Jon Harrop
    I'd like to write a .NET application in F# to automate some of the processing of my e-mails. For example, when an order comes in, my program might compute a new htpasswd from the e-mail's contents, upload it to our web server and reply to the customer with login details. How do people do this? I've tried Outlook 2007 automation, but it just prompts the user with security warnings, my attempts to get it to stop doing this have failed, and so I cannot automate anything with it. Is there a .NET-friendly e-mail client I can use more easily? This has been so tedious that I'm seriously considering writing my own .NET-friendly e-mail client...

    Read the article

  • Update tableview instantly as data is pushed into Core Data (iPhone)

    - by user336685
    I need to update the tableview as soon as content is pushed into the Core Data database. For this, AppDelegate.m contains the following code: NSManagedObjectContext *moc = [self managedObjectContext]; NSFetchRequest *request = [[NSFetchRequest alloc] init]; [request setEntity:[NSEntityDescription entityForName:@"FeedItem" inManagedObjectContext:moc]]; //for loop // push data into Core Data & then save context [moc save:&error]; ZAssert(error == nil, @"Error saving context: %@", [error localizedDescription]); //for loop ends This code triggers the following code in RootViewController.m - (void)controllerWillChangeContent:(NSFetchedResultsController*)controller { [[self tableView] beginUpdates]; } But this updates the tableview only at the end of the for loop; the table does not get updated immediately after each push into the database. I tried the following code but that didn't work: - (void)controllerDidChangeContent:(NSFetchedResultsController *)controller { // In the simplest, most efficient, case, reload the table view. [self.tableView reloadData]; } I have been stuck on this problem for several days. Please help. Thanks in advance for a solution.

    Read the article

  • Image Processing, joining the small images to form the main image

    - by n0idea
    Good morning everyone. I'm having a small issue in image processing and I'm in need of some help. First of all, let me explain what I want to do: I have an image that was split into 4 smaller images, and I currently have about 6 small images and need to figure out which ones are part of the real image. What I currently know is that I should compare the images' edges, i.e. the last column of one image with the first column of the other image. I'm not sure yet what exactly should be done; is anyone able to put me on the right track, with some detailed hints on how to compare the edges of 2 images? Some links and example code would be helpful. One more thing: how am I able to read .raw images using Java, C# or Python?
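
    One way to make the edge comparison concrete is to score a candidate pair of tiles by how similar the last pixel column of one is to the first pixel column of the other, and keep the pairings with the lowest scores. The sketch below is a minimal illustration in Java using BufferedImage (per-channel sum of squared differences); the class and method names are made up for the example.

    ```java
    import java.awt.image.BufferedImage;

    // Sketch: lower return values mean the right edge of 'left' matches the left
    // edge of 'right' more closely, so the two tiles are more likely to be
    // horizontal neighbours in the original image.
    public final class EdgeMatcher {

        public static long edgeDifference(BufferedImage left, BufferedImage right) {
            if (left.getHeight() != right.getHeight()) {
                return Long.MAX_VALUE; // tiles of different heights cannot be neighbours
            }
            int lastColumn = left.getWidth() - 1;
            long score = 0;
            for (int y = 0; y < left.getHeight(); y++) {
                int a = left.getRGB(lastColumn, y);
                int b = right.getRGB(0, y);
                // Compare the blue, green and red channels and accumulate squared differences.
                for (int shift = 0; shift <= 16; shift += 8) {
                    int ca = (a >> shift) & 0xFF;
                    int cb = (b >> shift) & 0xFF;
                    score += (long) (ca - cb) * (ca - cb);
                }
            }
            return score;
        }
    }
    ```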

    Read the article

  • Core Data data type for just the date - not including time

    - by Jason
    I am new to Core Data, and it seems like a great way to manage the data store. However I am also very memory-conscious, given that the iPhone doesn't have much of it. I was a little surprised to see that the data types are so limited - e.g. there is a Date type which also includes the time, but no Date type for just the date! All the time information takes up precious bytes of memory; if I just wanted an attribute with the date (e.g. 2/15/2010 rather than 2/15/2010 02:34:48), how could I do this? Is it possible?

    Read the article

  • Processing XML file with Huge data

    - by Manish Dhanotiya
    Hi, I am working on an application which has the requirements below: 1. Download a ZIP file from a server. 2. Uncompress the ZIP file and get the content (which is in XML format) from this file into a String. 3. Pass this content to another method for parsing and further processing. Now, my concern is that the XML file may be huge, say 100MB, while my JVM has only 512MB of memory, so how can I get this content in chunks and pass it for parsing and then insert the data into PL/SQL tables? Since there can be multiple requests running at the same time, and considering the 512MB of memory, what will be the best possible way to process this? How can I get the data in chunks and pass it as a stream for XML parsing? I googled this, but didn't find any implementation. :( Thanks,
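
    The usual way to keep memory bounded here is to skip the intermediate String entirely and stream the XML straight out of the ZIP entry with a pull parser, handing off one record at a time. Below is a minimal sketch assuming Java with the JDK's built-in StAX and java.util.zip classes; the element name "record" and the insertRecord step are placeholders for the real schema and the batched PL/SQL inserts.

    ```java
    import java.io.FileInputStream;
    import java.util.zip.ZipEntry;
    import java.util.zip.ZipInputStream;
    import javax.xml.stream.XMLInputFactory;
    import javax.xml.stream.XMLStreamConstants;
    import javax.xml.stream.XMLStreamReader;

    // Sketch: StAX reads the XML as a stream directly from the ZIP entry, so only
    // the current element is held in memory regardless of the file size.
    public class ZipXmlStreamer {

        public static void process(String zipPath) throws Exception {
            try (ZipInputStream zip = new ZipInputStream(new FileInputStream(zipPath))) {
                ZipEntry entry = zip.getNextEntry(); // assumes the first entry is the XML file
                if (entry == null) {
                    return;
                }
                XMLStreamReader reader = XMLInputFactory.newInstance().createXMLStreamReader(zip);
                while (reader.hasNext()) {
                    if (reader.next() == XMLStreamConstants.START_ELEMENT
                            && "record".equals(reader.getLocalName())) {
                        // Assumes each <record> element holds plain text content.
                        insertRecord(reader.getElementText());
                    }
                }
                reader.close();
            }
        }

        private static void insertRecord(String value) {
            // placeholder for a batched JDBC insert into the PL/SQL tables
        }
    }
    ```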

    Read the article

  • Image processing custom filter 7 by 7

    - by ladiesMan217
    Let's say I have a 7 by 7 neighborhood around a pixel that looks like this:
     1  2  3  4  5  6  7
     8  9 10 11 12 13 14
    15 16 17 18 19 20 21
    22 23 24 25 26 27 28
    29 30 31 32 33 34 35
    36 37 38 39 40 41 42
    43 44 45 46 47 48 49
    I want to filter it by replacing the pixel p with the average of those neighborhood pixels whose values lie in the range p-10 <= value <= p+10. I am new to image processing; in this case p is 25, and many of the surrounding pixel values lie in that range, but I don't know exactly how to construct a filter out of it.
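
    One way to express that rule in code is a conditional (range-limited) mean: for each pixel, average only those window pixels whose value is within ±10 of the centre, so the set of contributing pixels changes per pixel and it is not a fixed convolution kernel. The sketch below is a minimal illustration on a plain 2D grey-value array, assuming Java; calling apply(img, 3, 10) gives the 7 by 7 window from the question, and the names are made up for the example.

    ```java
    // Sketch: range-limited mean filter. For each pixel p, only neighbours q in the
    // (2*radius+1) x (2*radius+1) window with |q - p| <= range contribute to the average.
    public final class RangeMeanFilter {

        public static int[][] apply(int[][] img, int radius, int range) {
            int height = img.length;
            int width = img[0].length;
            int[][] out = new int[height][width];
            for (int y = 0; y < height; y++) {
                for (int x = 0; x < width; x++) {
                    int p = img[y][x];
                    int sum = 0;
                    int count = 0;
                    for (int dy = -radius; dy <= radius; dy++) {
                        for (int dx = -radius; dx <= radius; dx++) {
                            int ny = y + dy;
                            int nx = x + dx;
                            if (ny < 0 || ny >= height || nx < 0 || nx >= width) {
                                continue; // window falls outside the image; skip
                            }
                            int q = img[ny][nx];
                            if (Math.abs(q - p) <= range) { // only values within ±range of p
                                sum += q;
                                count++;
                            }
                        }
                    }
                    out[y][x] = sum / count; // count >= 1 because p itself always qualifies
                }
            }
            return out;
        }
    }
    ```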

    Read the article

  • Clever ways of implementing different data structures in C & data structures that should be used more often

    - by Yktula
    What are some clever (not ordinary) ways of implementing data structures in C, and what are some data structures that should be used more often? For example, what is the most effective way (generating minimal overhead) to implement a directed and cyclic graph with weighted edges in C? I know that we can store the distances in an array as is done here, but what other ways are there to implement this kind of a graph?

    Read the article
