Search Results

Search found 69357 results on 2775 pages for 'data oriented design'.


  • Is the Entity Component System architecture object oriented by definition?

    - by tieTYT
    Is the Entity Component System architecture object oriented, by definition? It seems more procedural or functional to me. My opinion is that it doesn't prevent you from implementing it in an OO language, but it would not be idiomatic to do so in a staunchly OO way. It seems like ECS separates data (E & C) from behavior (S). As evidence: "The idea is to have no game methods embedded in the entity." And: "The component consists of a minimal set of data needed for a specific purpose. Systems are single purpose functions that take a set of entities which have a specific component." I think this is not object oriented because a big part of being object oriented is combining your data and behavior. As evidence: "In contrast, the object-oriented approach encourages the programmer to place data where it is not directly accessible by the rest of the program. Instead, the data is accessed by calling specially written functions, commonly called methods, which are bundled in with the data." ECS, on the other hand, seems to be all about separating your data from your behavior.
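
    A minimal sketch of that separation, with invented component and system names, just to make the distinction concrete: components are plain data holders, and all behavior sits in a system that operates on them.

        import java.util.List;

        final class Position { float x, y; }          // pure data, no game methods
        final class Velocity { float dx, dy; }        // pure data

        final class MovementSystem {                   // single-purpose behavior over entities
            void update(List<Position> positions, List<Velocity> velocities, float dt) {
                for (int i = 0; i < positions.size(); i++) {
                    positions.get(i).x += velocities.get(i).dx * dt;
                    positions.get(i).y += velocities.get(i).dy * dt;
                }
            }
        }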

    Read the article

  • Design review, class design

    - by user3651810
    I have a class design for storing patient information. Could you please review the design and let me know if anything is wrong or not correct? I have designed three interfaces: IPatient, IPatientHistory and IPrescription.

    - IPatient: Id, FirstName, LastName, DOB, BloodGroup, Mobile, List<IPatientHistory>; methods GetPatientById(), GetPatientHistory()
    - IPatientHistory: HistoryId, PatientId, DateOfVisit, Cause, List<IPrescription>; method GetPrescription()
    - IPrescription: PrescriptionId, PatientHistoryId, MedicineName, TotalQty, MorningQty, NoonQty, NightQty
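
    For review purposes, a sketch of the three interfaces as listed above, written out in Java; the property types (LocalDate for DOB, int ids, String blood group) are assumptions, and the lookup methods are kept as listed in the question.

        import java.time.LocalDate;
        import java.util.List;

        interface IPatient {
            int getId();
            String getFirstName();
            String getLastName();
            LocalDate getDob();                      // type assumed
            String getBloodGroup();                  // type assumed
            String getMobile();
            IPatient getPatientById(int id);         // as listed in the question
            List<IPatientHistory> getPatientHistory();
        }

        interface IPatientHistory {
            int getHistoryId();
            int getPatientId();
            LocalDate getDateOfVisit();
            String getCause();
            List<IPrescription> getPrescriptions();
        }

        interface IPrescription {
            int getPrescriptionId();
            int getPatientHistoryId();
            String getMedicineName();
            int getTotalQty();
            int getMorningQty();
            int getNoonQty();
            int getNightQty();
        }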

    Read the article

  • Liskov substitution and abstract classes / strategy pattern

    - by Kolyunya
    I'm trying to follow LSP in practical programming, and I wonder if different constructors of subclasses violate it. It would be great to hear an explanation instead of just yes/no. Thanks much! P.S. If the answer is no, how do I make different strategies with different input without violating LSP?

        class IStrategy {
        public:
            virtual void use() = 0;
        };

        class FooStrategy : public IStrategy {
        public:
            FooStrategy(A a, B b) { c = /* some operations with a, b */ }
            virtual void use() { std::cout << c; }
        private:
            C c;
        };

        class BarStrategy : public IStrategy {
        public:
            BarStrategy(D d, E e) { f = /* some operations with d, e */ }
            virtual void use() { std::cout << f; }
        private:
            F f;
        };

    Read the article

  • Liskov substitution principle with abstract parent class

    - by Songo
    Does the Liskov substitution principle apply to inheritance hierarchies where the parent is an abstract class the same way as when the parent is a concrete class? The Wikipedia page lists several conditions that have to be met before a hierarchy is deemed to be correct. However, I have read in a blog post that one way to make it easier to conform to LSP is to use an abstract parent instead of a concrete class. How does the choice of the parent type (abstract vs concrete) impact the LSP? Is it better to have an abstract base class whenever possible?
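
    For what it's worth, a small illustration of the scenario being asked about (names invented): LSP concerns the behavioral contract promised by the base type, and that contract exists whether the parent is abstract or concrete.

        // The contract is stated on the (abstract) parent; subclasses must honor it.
        abstract class Shape {
            /** Must return a non-negative area and must not throw for a valid shape. */
            abstract double area();
        }

        class Circle extends Shape {
            private final double radius;
            Circle(double radius) { this.radius = radius; }
            @Override double area() { return Math.PI * radius * radius; }   // honors the parent's contract
        }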

    Read the article

  • How to decide to which class does a method belong

    - by Eleeist
    I have TopicBusiness.class and PostBusiness.class. I have no problem deciding into which class methods such as addPostToDatabase() or getAllPostsFromDatabase() should go. But what about getAllPostsFromTopic(TopicEntity topic) or getNumberOfPostsInTopic(TopicEntity topic)? Should the parameter be the deciding factor, so that when a method takes TopicEntity as a parameter it should belong to TopicBusiness.class? I am quite puzzled by this.

    Read the article

  • Is it better to return NULL or empty values from functions/methods where the return value is not present?

    - by P B
    I am looking for a recommendation here. I am struggling with whether it is better to return NULL or an empty value from a method when the return value is not present or cannot be determined. Take the following two methods as examples:

        string ReverseString(string stringToReverse) // takes a string and reverses it.
        Person FindPerson(int personID)              // finds a Person with a matching personID.

    In ReverseString(), I would say return an empty string because the return type is string, so the caller is expecting that. Also, this way, the caller would not have to check to see if a NULL was returned. In FindPerson(), returning NULL seems like a better fit. Regardless of whether NULL or an empty Person object (new Person()) is returned, the caller is going to have to check to see if the Person object is NULL or empty before doing anything to it (like calling UpdateName()). So why not just return NULL here, and then the caller only has to check for NULL? Does anyone else struggle with this? Any help or insight is appreciated.
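
    A small runnable sketch of the two calling patterns being weighed; the Person type and lookup map here are stand-ins, not the original code.

        import java.util.HashMap;
        import java.util.Map;

        class Person {
            String name;
            void updateName(String newName) { this.name = newName; }
        }

        class Lookup {
            static final Map<Integer, Person> PEOPLE = new HashMap<>();

            // Empty-value style: never returns null, so callers need no null check.
            static String reverseString(String s) {
                return s == null ? "" : new StringBuilder(s).reverse().toString();
            }

            // Null style: callers must check before calling updateName().
            static Person findPerson(int personId) {
                return PEOPLE.get(personId);   // null when not found
            }

            public static void main(String[] args) {
                String reversed = reverseString("abc");   // safe to use directly
                Person p = findPerson(42);
                if (p != null) {                          // null check required here
                    p.updateName("New Name");
                }
            }
        }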

    Read the article

  • is it valid that a state machine can have more than one possible state for some transition?

    - by shankbond
    I have a requirement for a workflow which I am trying to model as a state machine, and I see that there is more than one outcome of a given transition (or activity). Is it valid for a state machine to have more than one possible state for a transition, where only one state will be true at a given time? Note: this is my first attempt to model a state machine. E.g. it might be:

        s1-t1-s2
        s1-t1-s3
        s1-t1-s4

    where s1, s2, s3, s4 are states and t1 is a transition/activity. A fictitious real-world example might be: for a human, there can be two states: hungry, not hungry. A basket can have only one item from: apple, orange. So, to model it we will have:

        hungry - pick from basket - apple found
        hungry - pick from basket - orange found
        apple found - eat - not hungry
        orange found - take juice out of it and then drink - not hungry
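
    One way to express "one transition, several possible resulting states, exactly one of which holds afterwards" is to let the transition return whichever state actually occurred; a hedged sketch with invented names:

        import java.util.Random;

        enum State { HUNGRY, APPLE_FOUND, ORANGE_FOUND, NOT_HUNGRY }

        class Basket {
            private static final Random RANDOM = new Random();

            // One transition, several possible resulting states; exactly one holds afterwards.
            static State pickFromBasket(State current) {
                if (current != State.HUNGRY) {
                    throw new IllegalStateException("Can only pick when hungry");
                }
                return RANDOM.nextBoolean() ? State.APPLE_FOUND : State.ORANGE_FOUND;
            }
        }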

    Read the article

  • Big Data – How to become a Data Scientist and Learn Data Science? – Day 19 of 21

    - by Pinal Dave
    In yesterday's blog post we learned the importance of analytics in the Big Data story. In this article we will understand how to become a Data Scientist for the Big Data story. Data Scientist is a new buzzword; everyone seems to want to become a Data Scientist. Let us go over a few key topics related to Data Scientists in this blog post. First of all we will understand what a Data Scientist is. In the new world of Big Data, I see pretty much everyone wants to become a Data Scientist, and there are lots of people I have already met who claim that they are Data Scientists. When I ask what their role is, I get a wide variety of answers.

    What is a Data Scientist? Data Scientists are the experts who understand various aspects of the business and know how to use data strategically to achieve the business goals. They should have a solid foundation in various data algorithms, modeling and statistical methodology.

    What do Data Scientists do? Data Scientists understand the data very well. They go beyond the regular data algorithms and build interesting trends from available data. They innovate and draw entirely new meaning from the existing data. They are artists in the disguise of computer analysts. They look at the data traditionally as well as explore various new ways to look at the data. Data Scientists do not wait to build their solutions from existing data. They think creatively; they think before the data has entered the system. Data Scientists are visionary experts who understand the business needs and plan ahead of time; this tremendously helps to build solutions rapidly. Besides being a data expert, the major quality of Data Scientists is "curiosity". They always wonder about what more they can get from their existing data and how to get the maximum out of future incoming data. Data Scientists do wonders with the data, which goes beyond the job description of a Data Analyst or Business Analyst.

    Skills Required for Data Scientists: Here are a few of the skills a Data Scientist must have.

    - Expert-level skills with statistical tools like SAS, Excel, R etc.
    - Understanding mathematical models
    - Hands-on with visualization tools like Tableau, PowerPivot, D3.js etc.
    - Analytical skills to understand business needs
    - Communication skills

    On the technology front any Data Scientist should know underlying technologies (Hadoop, Cloudera) as well as their entire ecosystem (programming languages, analysis and visualization tools etc.). Remember that becoming a successful Data Scientist requires excellent skills; just having a degree in a relevant education field will not suffice.

    Final Note: Data Scientist is indeed a very exciting job profile. As per research there are not enough Data Scientists in the world to handle the current data explosion. In the near future data is going to expand exponentially, and the need for Data Scientists will increase along with it. It is indeed the job one should focus on if you like data and the science of statistics. Courtesy: emc

    Tomorrow: In tomorrow's blog post we will discuss various Big Data learning resources. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • From a DDD perspective is a report generating service a domain service or an infrastructure service?

    - by Songo
    Let's assume we have the following service whose responsibility is to generate Excel reports:

        class ExcelReportService {
            public String generateReport(String fileFormatFilePath, ResultSet data) {
                ReportFormat reportFormat = new ReportFormat(fileFormatFilePath);
                ExcelDataFormatterService excelDataFormatterService = new ExcelDataFormatterService();
                FormattedData formattedData = excelDataFormatterService.format(data);
                ExcelFileService excelFileService = new ExcelFileService();
                String reportPath = excelFileService.generateReport(reportFormat, formattedData);
                return reportPath;
            }
        }

    This is pseudo code for the service I want to design, where:

    - fileFormatFilePath: path to a configuration file where I'll keep the format of my Excel file (headers, column widths, number of columns, etc.).
    - data: the actual records returned from the database. This data can't be used directly because I might need to make further calculations on the data before inserting it into the Excel file.
    - ReportFormat: value object to hold the report format; has methods like getHeaders(), getColumnWidth(), etc.
    - ExcelDataFormatterService: a service to hold any logic that needs to be applied to the data returned from the database before inserting it into the file.
    - FormattedData: value object that represents the formatted data to be inserted.
    - ExcelFileService: a wrapper on top of the 3rd party library that generates the Excel file.

    Now how do you determine whether a service is an infrastructure or domain service? I have the following 3 services here: ExcelReportService, ExcelDataFormatterService and ExcelFileService.

    Read the article

  • Unstructured Data - The future of Data Administration

    Some have claimed that there is a problem with the way data is currently managed using the relational paradigm, due to the rise of unstructured data in modern business. PCMag.com defines unstructured data as data that does not reside in a fixed location. They further explain that unstructured data refers to data in a free-text form that is not bound to any specific structure. With the rise of unstructured data in the form of emails, spreadsheets, images and documents, the critics have a right to argue that the relational paradigm is not as effective as the object-oriented data paradigm in managing this type of data. The relational paradigm relies heavily on structure and relationships in and between items of data. This type of paradigm works best in a relational database management system like Microsoft SQL, MySQL, and Oracle because data is forced to conform to a structure in the form of tables, and relations can be derived from the existence of one or more tables.

    These critics also claim that database administrators have not kept up with reality because their primary focus in regards to data administration deals with structured data and the relational paradigm. The relational paradigm was developed in the 1970s as a way to improve data management when compared to standard flat files. Little has changed since then, and modern database administrators need to know more than just how to handle structured data. That is why critics claim that today's data professionals do not have the proper skills to store and maintain data for modern systems when compared to the skills of system designers, programmers, software engineers, and data designers, due to the industry trend of object-oriented design and development.

    I think that they are wrong. I do not disagree that the industry is moving toward an object-oriented approach to development, with the potential to use more of an object-oriented approach to data. However, I think that it is business itself that is limiting database administrators from changing how data is stored, because of the potential costs and impact that might occur by altering any part of stored data. Furthermore, database administrators, like all technology workers, are constantly trying to improve their technical skills in order to excel in their jobs, so I think that accusing data professionals is not just when the root cause of the lack of innovation is controlled by business, and it is business that will suffer for its inability to keep up with technology.

    One way for database professionals to better prepare for the future of database management is to start working with data in the form of objects, so that they can extract data from the objects and use the stored information within objects in relation to the data stored using the relational paradigm. Furthermore, I think the use of pattern matching will increase with the increased use of unstructured data, because objects can be selected, filtered and altered based on the existence of a pattern found within an object.

    Read the article

  • Explanation needed for the “Ask, don't tell” approach?

    - by the_naive
    I'm taking a course on design patterns in software engineering, and I'm trying to understand the good and the bad way of design relating to "coupling" and "cohesion". I could not understand the concept described in the following image. The example code shown in the image is ambiguous to me, so I can't quite get what exactly the "Ask, don't tell!" approach means. Could you please explain?
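
    Without the image it is hard to know exactly what the slide shows, but the contrast usually drawn in coupling/cohesion discussions is between asking an object for its data (and applying logic outside it) and telling the object what to do (keeping the logic with the data). A sketch with invented names:

        class Account {
            private double balance;

            Account(double balance) { this.balance = balance; }

            // "Ask" style exposes the data; callers pull it out and apply rules themselves.
            double getBalance() { return balance; }

            // "Tell" style keeps the rule next to the data it guards.
            void withdraw(double amount) {
                if (amount > balance) {
                    throw new IllegalArgumentException("Insufficient funds");
                }
                balance -= amount;
            }
        }

        class Caller {
            static void askStyle(Account account) {
                if (account.getBalance() >= 100) {   // logic about Account lives outside Account
                    // ... manipulate the account from here
                }
            }

            static void tellStyle(Account account) {
                account.withdraw(100);               // logic stays inside Account
            }
        }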

    Read the article

  • Need advice on design in Ruby On Rails

    - by Elad
    For personal educational purposes I am making a site for a conference. One of the objects that exists in a conference is a session, which has different states, and in each state it has slightly different attributes: when submitted it has a speaker (a User in the system), a title and an abstract. When under review it has reviews and comments (in addition to the basic data). When accepted it has a defined time slot but no reviewers anymore. I feel that it is not the best thing to add a "status" attribute and start adding many if statements... So I thought it would be better to have different classes for each state, each with its own validations and behaviors. What do you think about this design? Do you have a better idea? *I must add this out of frustration: I had several edits of the question, including one major change, but no one actually gave any hint or clue on which direction I should take or where a better place to ask this would be... Hardly helpful.

    Read the article

  • how to improve design ability

    - by Cong Hui
    I recently went on a couple of interviews, and all of them asked one or two design questions, like how you would design chess, Monopoly, and so on. I didn't do well on those since I am a college student and lack the experience of implementing big and complex systems. I figure the only way to improve my design ability is to read lots of other people's code and try to implement things myself. So, for those companies that ask these questions, what are their real goals in this? I figure most college grads start off working in a team guided by a senior leader in their first jobs, and they might not have lots of design experience fresh out of college. Could anyone give pointers about how to practice those skills? Thank you very much.

    Read the article

  • What is the better design decision approach?

    - by palm snow
    I have two classes (MyFoo1 and MyFoo2) that share some common functionality. So far it does not seem like I need any polymorphic inheritance, but at this point I am considering the following options:

    - Have the common functionality in a utility class; both of these classes call those methods from that utility class.
    - Have an abstract class and implement the common methods in that abstract class, then derive MyFoo1 and MyFoo2 from that abstract class.

    Any suggestion on what would be a better design decision?
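
    The two options, as minimal sketches (the shared method here is only a placeholder):

        // Option 1: shared behavior in a stateless utility class.
        final class FooUtils {
            private FooUtils() {}
            static String normalize(String input) { return input.trim().toLowerCase(); }
        }

        // Option 2: shared behavior in an abstract base class.
        abstract class AbstractFoo {
            protected String normalize(String input) { return input.trim().toLowerCase(); }
        }

        class MyFoo1 extends AbstractFoo { /* uses normalize(...) */ }
        class MyFoo2 extends AbstractFoo { /* uses normalize(...) */ }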

    Read the article

  • Developing an analytics system processing large amounts of data - where to start

    - by Ryan
    Imagine you're writing some sort of web analytics system - you're recording raw page hits along with some extra things like tagging cookies etc., and then producing stats such as:

    - which pages got most traffic over a time period
    - which referrers sent most traffic
    - goals completed (a goal being a view of a particular page)

    and more advanced things like which referrers sent the most visitors who later hit a goal. The naive way of approaching this would be to throw it all in a relational database and run queries over it - but that won't scale. You could pre-calculate everything (have a queue of incoming 'hits' and use it to update report tables) - but what if you later change a goal? How could you efficiently re-calculate just the data that would be affected? Obviously this has been done before ;) so any tips on where to start, methods & examples, architecture, technologies etc.

    Read the article

  • Relative encapsulation design

    - by taher1992
    Let's say I am doing a 2D application with the following design: there is the Level object that manages the world, and there are world objects which are entities inside the Level object. A world object has a location and velocity, as well as a size and a texture. However, a world object only exposes get properties. The set properties are private (or protected) and are only available to inherited classes. But of course, Level is responsible for these world objects and must somehow be able to manipulate at least some of their private setters. As of now, Level has no access, meaning world objects would have to change their private setters to public (violating encapsulation). How to tackle this problem? Should I just make everything public? Currently what I'm doing is having an inner class inside the game object that does the set work. So when Level needs to update an object's location it goes something like this:

        void ChangeObject(GameObject targetObject, int newX, int newY) {
            // targetObject.SetX and targetObject.SetY cannot be set directly
            var setter = new GameObject.Setter(targetObject);
            setter.SetX(newX);
            setter.SetY(newY);
        }

    This code feels like overkill, but it doesn't feel right to have everything public so that anything can change an object's location, for example.

    Read the article

  • Help with design structure choice: Using classes or library of functions

    - by roverred
    So I have a GUI class that will call another class called ImageProcessor, which contains a bunch of functions that perform image processing algorithms like edgeDetection, gaussianBlur, contourFinding, contour map generation, etc. The GUI passes an image to ImageProcessor, which performs one of those algorithms on it and returns the image back to the GUI to display. So essentially ImageProcessor is a library of independent image processing functions right now. It is called in the GUI like so:

        Image image = ImageProcessor.EdgeDetection(oldImage);

    Some of the algorithm procedures require many functions, and some can be done in a single function or even one line. All these functions for the algorithms jam-packed into ImageProcessor can be pretty messy, and ImageProcessor doesn't sound like it should be a library. So I was thinking about making every algorithm a class with a shared interface, say IAlgorithm. Then I pass the IAlgorithm interface from the GUI to the ImageProcessor:

        public interface IAlgorithm {
            public Image Process();
        }

        public class ImageProcessor {
            public Image Process(IAlgorithm theAlgorithm) {
                return theAlgorithm.Process();
            }
        }

    Calling in the GUI like so:

        Image image = ImageProcessor.Process(new EdgeDetection(oldImage));

    I think it makes sense from an object point of view, but the problem is I'll end up with some classes that are just one function. What do you think is a better design, or are they both crap and you have a much better idea? Thanks!

    Read the article

  • OOP Design: relationship between entity classes

    - by beginner_
    I have what at first sight seems a simple issue, but I can't wrap my head around how to solve it. I have an abstract class Compound. A Compound is made up of Structures. Then there is also a Container which holds 1 Compound. A "special" implementation of Compound has Versions. For that type of Compound I want the Container to hold the Version of the Compound and not the Compound itself. You could say "just create an interface Containable" and have a Container hold 1 Containable. However, that won't work. The reason is I'm creating a framework, and the main part of that framework is to simplify storing and especially searching for a special data type held by Structure objects. Hence searching for Containers which contain a Compound made up of a specific Structure requires that the "path" from Container to Structure is well defined (number of relationships or joins). I hope this was understandable. My question is how to design the classes and relationships to be able to do what I outlined.
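
    One possible reading of the relationships described above, sketched in Java purely to make the question easier to discuss (the generic Container is an assumption):

        import java.util.List;

        abstract class Structure { }

        abstract class Compound {
            protected List<Structure> structures;      // a Compound is made up of Structures
        }

        class VersionedCompound extends Compound {
            protected List<Version> versions;          // the "special" Compound has Versions
        }

        class Version {
            protected VersionedCompound compound;      // a Version belongs to one Compound
        }

        // The open question: should this hold a Compound, or the Version of a VersionedCompound?
        class Container<T> {
            private T content;
        }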

    Read the article

  • Design review for application facing memory issues

    - by Mr Moose
    I apologise in advance for the length of this post, but I want to paint an accurate picture of the problems my app is facing and then pose some questions below. I am trying to address some self-inflicted design pain that is now leading to my application crashing due to out-of-memory errors.

    An abridged description of the problem domain is as follows:

    - The application takes in a "dataset" that consists of numerous text files containing related data.
    - An individual text file within the dataset usually contains approx 20 "headers" that contain metadata about the data it contains. It also contains a large tab-delimited section containing data that is related to data in one of the other text files contained within the dataset. The number of columns per file is very variable, from 2 to 256+ columns.

    The original application was written to allow users to load a dataset and map certain columns of each of the files, basically indicating key information on the files to show how they are related, as well as identifying a few expected column names. Once this is done, a validation process takes place to enforce various rules and ensure that all the relationships between the files are valid. Once that is done, the data is imported into a SQL Server database. The database design is an EAV (Entity-Attribute-Value) model used to cater for the variable columns per file. I know EAV has its detractors, but in this case I feel it was a reasonable choice given the disparate data and variable number of columns submitted in each dataset.

    The memory problem: Given that the combined size of all text files was at most about 5 megs, and in an effort to reduce the database transaction time, it was decided to read ALL the data from the files into memory and then perform the following:

    - Perform all the validation whilst the data is in memory.
    - Relate it using an object model.
    - Start a DB transaction and write the key columns row by row, noting the Id of the written row (all tables in the database utilise identity columns); the Id of the newly written row is then applied to all related data.
    - Once all related data has been updated with the key information to which it relates, these records are written using SqlBulkCopy. Due to our EAV model, we essentially have x columns by y rows to write, where x can be 256+ and rows are often into the tens of thousands.
    - Once all the data is written without error (which can take several minutes for large datasets), commit the transaction.

    The problem now comes from the fact we are receiving individual files containing over 30 megs of data. In a dataset, we can receive any number of files. We've started seeing datasets of around 100 megs coming in, and I expect it is only going to get bigger from here on in. With files of this size, data can't even be read into memory without the app falling over, let alone be validated and imported. I anticipate having to modify large chunks of the code to allow validation to occur by parsing files line by line, and am not exactly decided on how to handle the import and transactions.

    Potential improvements:

    - I've wondered about using GUIDs to relate the data rather than relying on identity fields. This would allow data to be related prior to writing to the database. It would certainly increase the storage required though, especially in an EAV design. Would you think this is a reasonable thing to try, or do I simply persist with identity fields (natural keys can't be trusted to be unique across all submitters)?
    - Use of staging tables to get data into the database, and only performing the transaction to copy data from the staging area to the actual destination tables.

    Questions:

    - For systems like this that import large quantities of data, how do you go about keeping transactions small? I've kept them as small as possible in the current design, but they are still active for several minutes and write hundreds of thousands of records in one transaction. Is there a better solution?
    - The tab-delimited data section is read into a DataTable to be viewed in a grid. I don't need the full functionality of a DataTable, so I suspect it is overkill. Is there any way to turn off various features of DataTables to make them more lightweight?
    - Are there any other obvious things you would do in this situation to minimise the memory footprint of the application described above?

    Thanks for your kind attention.
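
    Purely as an illustration of the line-by-line parsing mentioned above (the original application appears to be .NET; this is a generic sketch in Java of streaming a tab-delimited file rather than loading it whole):

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;

        class StreamingValidator {
            // Reads one row at a time so memory use stays flat regardless of file size.
            static long validate(Path file) throws IOException {
                long badRows = 0;
                try (BufferedReader reader = Files.newBufferedReader(file)) {
                    String line;
                    while ((line = reader.readLine()) != null) {
                        String[] columns = line.split("\t", -1);
                        if (columns.length < 2) {   // placeholder rule; real validation rules go here
                            badRows++;
                        }
                    }
                }
                return badRows;
            }
        }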

    Read the article

  • How to design a scalable notification system?

    - by Trent
    I need to write a notification system manager. Here are my requirements: I need to be able to send a Notification on different platforms, which may be totally different (for example, I need to be able to send either an SMS or an e-mail). Sometimes the notification may be the same for all recipients for a given platform, but sometimes it may be a notification per recipient (or several) per platform. Each notification can contain a platform-specific payload (for example an MMS can contain a sound or an image). The system needs to be scalable; I need to be able to send a very large number of notifications without crashing either the application or the server. It is a two-step process: first a customer may type a message and choose a platform to send to, and the notification(s) should be created to be processed either in real time or later. Then the system needs to send the notification to the platform provider. For now, I end up with some thoughts but I don't know how scalable the design will be or if it is a good design. I've thought of the following objects (in a pseudo language) - a generic Notification object:

        class Notification {
            String $message;
            Payload $payload;
            Collection<Recipient> $recipients;
        }

    The problem with this object is: what if I have 1,000,000 recipients? Even if the Recipient object is very small, it'll take too much memory. I could also create one Notification per recipient, but some platform providers require me to send in batch, meaning I need to define one Notification with several Recipients. Each created notification could be stored in persistent storage like a DB or Redis. Would it be a good idea to aggregate this later to make sure it is scalable? On the second step, I need to process this notification. But how could I dispatch the notification to the right platform provider? Should I use an object like MMSNotification extending an abstract Notification? Or something like Notification.setType('MMS')? To allow processing a lot of notifications at the same time, I think a message queue system like RabbitMQ may be the right tool. Is it? It would allow me to queue a lot of notifications and have several workers pop notifications and process them. But what if I need to batch the recipients as seen above? Then I imagine a NotificationProcessor object to which I could add NotificationHandlers; each NotificationHandler would be in charge of connecting to the platform provider and performing the notification. I can also use an EventManager to allow pluggable behavior. Any feedback or ideas? Thanks for giving your time. Note: I'm used to working in PHP and it is likely to be the language of my choice.
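
    For the "one notification, a million recipients" concern, one common shape is to persist the notification once and enqueue fixed-size batches of recipient ids rather than full recipient objects; a sketch of the chunking step (written in Java only for illustration, since the question's context is PHP):

        import java.util.ArrayList;
        import java.util.List;

        class NotificationBatcher {
            // Splits recipient ids into fixed-size chunks so each queue message stays small.
            static List<List<Long>> batch(List<Long> recipientIds, int batchSize) {
                List<List<Long>> batches = new ArrayList<>();
                for (int i = 0; i < recipientIds.size(); i += batchSize) {
                    int end = Math.min(i + batchSize, recipientIds.size());
                    batches.add(new ArrayList<>(recipientIds.subList(i, end)));
                }
                return batches;
            }
        }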

    Read the article

  • OO Design - polymorphism - how to design for handling streams of different file types

    - by Kache4
    I've little experience with advanced OO practices, and I want to design this properly as an exercise. I'm thinking of implementing the following, and I'm asking if I'm going about this the right way. I have a class PImage that holds the raw data and some information I need for an image file. Its header is currently something like this:

        #include <boost/filesystem.hpp>
        #include <vector>

        namespace fs = boost::filesystem;

        class PImage {
        public:
            PImage(const fs::path& path, const unsigned char* buffer, int bufferLen);

            const std::vector<char> data() const { return data_; }
            const char* rawData() const { return &data_[0]; }
            /*** other assorted accessors ***/

        private:
            fs::path path_;
            int width_;
            int height_;
            int filesize_;
            std::vector<char> data_;
        };

    I want to fill width_ and height_ by looking through the file's header. The trivial/inelegant solution would be to have a lot of messy control flow that identifies the type of image file (.gif, .jpg, .png, etc.) and then parses the header accordingly. Instead of using std::vector<char> data_, I was thinking of having PImage use a class, RawImageStream data_, that inherits from std::vector<char>. Each type of file I plan to support would then inherit from RawImageStream, e.g. RawGifStream, RawPngStream. Each RawXYZStream would encapsulate the respective header-parsing functions, and PImage would only have to do something like height_ = data_.getHeight();. Am I thinking this through correctly? How would I create the proper RawImageStream subclass for data_ in the PImage ctor? Is this where I could use an object factory? Anything I'm forgetting?

    Read the article

  • Big Data – Beginning Big Data – Day 1 of 21

    - by Pinal Dave
    What is Big Data? I want to learn Big Data. I have no clue where and how to start learning about it. Does Big Data really mean the data is big? What are the tools and software I need to know to learn Big Data? I often receive questions like the ones mentioned above. They are good questions, and honestly, when we search online it is hard to find authoritative and authentic answers. I have been working with Big Data and NoSQL for a while, and I have decided to discuss this subject here on the blog. In the next 21 days we will understand what is so big about Big Data.

    Big Data – Big Thing! Big Data is becoming one of the most talked about technology trends nowadays. The real challenge for big organizations is to get the maximum out of the data already available and predict what kind of data to collect in the future. How to take the existing data and make it meaningful so that it provides accurate insight into past data is one of the key discussion points in many executive meetings in organizations. With the explosion of data the challenge has gone to the next level, and now Big Data is becoming a reality in many organizations.

    Big Data – A Rubik's Cube: I like to compare Big Data with the Rubik's cube. I believe they have many similarities. Just like a Rubik's cube, it has many different solutions. Let us visualize a Rubik's cube solving challenge where there are many experts participating. If you take five Rubik's cubes, mix them up the same way and give them to five different experts to solve, it is quite possible that all five people will solve the cube in a matter of seconds, but if you pay close attention you will notice that even though the final outcome is the same, the route taken to solve the Rubik's cube is not the same. Every expert will start at a different place and will try to resolve it with different methods. Some will solve one color first and others will solve another color first. Even though they follow the same kind of algorithm to solve the puzzle, they will start and end at different places and their moves will differ at many points. It is nearly impossible to have the exact same route taken by two experts.

    Big Market and Multiple Solutions: Big Data is exactly like a Rubik's cube - even though the goal of every organization and expert is the same, to get the maximum out of the data, the route and the starting point are different for each organization and expert. As organizations evaluate and architect Big Data solutions, they are also learning the ways and opportunities which are related to Big Data. There is not a single solution to Big Data, and there is not a single vendor which can claim to know all about Big Data. Honestly, Big Data is too big a concept and there are many players - different architectures, different vendors and different technologies.

    What is Next? In this 21-day series we will be exploring many essential topics related to Big Data. I do not claim that you will be a master of the subject after 21 days, but I claim that I will be covering the following topics in easy-to-understand language:

    - Architecture of Big Data
    - Big Data Management and Implementation
    - Different Technologies – Hadoop, MapReduce
    - Real World Conversations
    - Best Practices

    Tomorrow: In tomorrow's blog post we will try to answer one of the very essential questions – What is Big Data?
Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Reading data from an Entity Framework data model through a WCF Data Service

    - by nikolaosk
    This is going to be the fourth post of a series of posts regarding ASP.Net and the Entity Framework and how we can use Entity Framework to access our datastore. You can find the first one here, the second one here and the third one here. I have a post regarding ASP.Net and EntityDataSource. You can read it here. I have 3 more posts on profiling Entity Framework applications. You can have a look at them here, here and here. With the .NET 3.0 Framework, Microsoft introduced WCF. WCF is Microsoft's...(read more)

    Read the article

  • Big Data’s Killer App…

    - by jean-pierre.dijcks
    Recently Keith spent some time talking about the cloud on this blog, and I will spare you my thoughts on the whole thing. What I do want to write down is something about the Big Data movement and what I think is the killer app for Big Data... Where is this coming from? OK, I confess... I spent 3 days in cloud land at the Cloud Connect conference in Santa Clara and it was quite a lot of fun. One of the nice things at Cloud Connect was that there was a track dedicated to Big Data, which prompted me to some extent to write this post.

    What is Big Data anyways? The most valuable point made in the Big Data track was that Big Data in itself is not very cool. Doing something with Big Data is what makes all of this cool and interesting to a business user! The other good insight I got was that a lot of people think Big Data means a single gigantic monolithic system holding gazillions of bytes or documents or log files. Well, it turns out that most people in the Big Data track are talking about a lot of collections of smaller data sets. So rather than thinking "big = monolithic" you should be thinking "big = many data sets". This is more than just theoretical; it is actually relevant when thinking about Big Data and how to process it. It is important because it means that the platform that stores data will most likely consist of multiple solutions. You may be storing logs on something like HDFS, you may store your customer information in Oracle and you may store distilled clickstream information in some distilled form in MySQL. The big question you will need to solve is not what lives where, but how to get it all together and get some value out of all that data.

    NoSQL and MapReduce: Nope, sorry, this is not the killer app... and no, I'm not saying this because my business card says Oracle and I'm therefore biased. I think language is important, but as with storage I think pragmatic is better. In other words, some questions can be answered with SQL very efficiently, others can be answered with Perl or Tcl, others with MR. History should teach us that anyone trying to solve a problem will use any and all tools around. For example, most data warehouses (Big Data 1.0?) get a lot of data in flat files. Everyone then runs a bunch of shell scripts to massage or verify those files and then shoves those files into the database. We've even built shell script support into external tables to allow for this. I think the Big Data projects will do the same. Some people will use MapReduce, although I would argue that things like Cascading are more interesting; some people will use Java. Some data is stored on HDFS, making Cascading the way to go; some data is stored in Oracle, and SQL does do a good job there. As with storage and with history, be pragmatic and use what fits; neither NoSQL nor MR will be the one and only. Also, a language, while important, does in itself not deliver business value. So while cool, it is not a killer app...

    Vertical Behavioral Analytics: This is the killer app! And you are now thinking: "what does that mean?" Let's decompose that heading. First of all, analytics. I would think you had guessed by now that this is really what I'm after, and of course you are right. But not just analytics, which has a very large scope and means many things to many people. I'm not just after Business Intelligence (analytics 1.0?) or data mining (analytics 2.0?); I'm after something more interesting that you can only do after collecting large volumes of specific data.

    That all-important data is about behavior. What do my customers do? More importantly, why do they behave like that? If you can figure that out, you can tailor web sites, stores, products etc. to that behavior and figure out how to be successful. Today's behavior that is somewhat easily tracked is web site clicks, search patterns and all of those things that a web site or web server tracks. That is where the Big Data lives and where these patterns are now emerging. Other examples, however, are emerging, and one of the examples used at the conference was about predicting churn for a telco based on the social network its members are a part of. That social network is not about LinkedIn or Facebook, but about who calls whom. I call you a lot, you switch provider, and I might/will switch too. And that just naturally brings me to the next word, vertical. Vertical in this context means per industry, e.g. communications or retail or government or any other vertical. The reason for being more specific than just behavioral analytics is that each industry has its own data sources, its own quirky logic and its own demands and priorities. Of course, the methods and some of the software will be common, and some will have both retail and service industry analytics in place (your corner coffee store, for example). But the gist of it all is that analytics that can predict customer behavior for a specific, focused group of people in a specific industry is what makes Big Data interesting.

    Building a Vertical Behavioral Analysis System: Well, that is going to be interesting. I have not seen much going on in that space, and if I had one criticism of the Cloud Connect conference it would be the lack of concrete use cases on Big Data. The telco example, while a step into the vertical behavioral part, is not really about Big Data. It used a sample of data from the customers' data warehouse. One thing I do think, and this is where I think parts of the NoSQL stuff come from, is that we will be doing this analysis where the data is. Over the past 10 years we at Oracle have called this in-database analytics. I guess we were (too) early? Now the entire market is going there, including companies like SAS. In-place, by the way, does not mean "no data movement at all"; what it means is that you will do this in the data's permanent home. For SAS that is kind of the current problem. Most of the inputs live in a data warehouse. So why move it into SAS and back? That all worked with 1 TB data warehouses, but when we are looking at 100 TB to 500 TB of distilled data...

    Comments? As it is still early days with these systems, I'm very interested in seeing reactions and thoughts to some of these thoughts...

    Read the article

  • Accelerate your SOA with Data Integration - Live Webinar Tuesday!

    - by dain.hansen
    Need to put wind in your SOA sails? Organizations are turning more and more to Real-time data integration to complement their Service Oriented Architecture. The benefit? Lowering costs through consolidating legacy systems, reducing risk of bad data polluting their applications, and shortening the time to deliver new service offerings. Join us on Tuesday April 13th, 11AM PST for our live webinar on the value of combining SOA and Data Integration together. In this webcast you'll learn how to innovate across your applications swiftly and at a lower cost using Oracle Data Integration technologies: Oracle Data Integrator Enterprise Edition, Oracle GoldenGate, and Oracle Data Quality. You'll also hear: Best practices for building re-usable data services that are high performing and scalable across the enterprise How real-time data integration can maximize SOA returns while providing continuous availability for your mission critical applications Architectural approaches to speed service implementation and delivery times, with pre-integrations to CRM, ERP, BI, and other packaged applications Register now for this live webinar!

    Read the article
