Search Results

Search found 70675 results on 2827 pages for 'master data management'.


  • SQL – Biggest Concerns in a Data-Driven World

    - by Pinal Dave
    The ongoing chaos over government agency snooping has ignited a heated debate on the privacy of personal data and its use by governments and other institutions. It has created a feeling of disapproval and distrust among users. This incident is a lesson for companies looking to leverage their business using a data-driven approach. According to analysts, the goal of gathering personal information should be to deliver benefits to both parties – the user as well as the data collector (government or business). Using data the right way is crucial, and companies need to deploy the right software applications and systems to ensure that their efforts are well directed. However, there are various issues plaguing analysts regarding the available software, which are highlighted below. According to the InformationWeek 2013 Survey of Analytics, Business Intelligence and Information Management, to which 541 business technology professionals contributed as respondents, the biggest concern was the scarcity of expertise and the high costs associated with it; this concern was voiced by as many as 38% of the participants. A close second was the issue of data warehouse appliance platforms being expensive, with 33% of respondents believing it to be a huge roadblock. Another revelation was that 31% of professionals weren't even sure how data analytics could create business opportunities for them, and another 17% shared that they found data platform technologies such as Hadoop and NoSQL hard to learn. These results clearly point out that there are awareness and expertise issues that also need much attention. Unless the demand-supply gap of Business Intelligence professionals well versed in data analysis technologies is closed, this divide is going to affect how companies make the most of their BI campaigns. One of the key actions that can be taken to salvage the situation is to provide training on data analytics concepts. Koenig Solutions offers courses on many such technologies, including a course on MCSE SQL Server 2012: BI Platform. So it's time to brush up your skills and get down to work in the data-driven world that awaits you. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • C++ and SDL resource management for 2D game

    - by KuruptedMagi
    My first question is about state managers. I do not use the singleton pattern (I've read many posts with various reasons not to use it); I have a gameStateManager which runs the pointer cCurrentGameState->render(), etc. I want to make a transitioning game; this engine should ideally cover both a platformer and a bird's-eye RPG (with some recoding, I just mean the base engine), both of which will load different levels and events, such as world map, dungeon, shops, etc. So I then thought, rather than having to store all this data within all the states, I would break the engine into gameStates and playStates... when gameState reaches gameStatePlay(), gameStatePlay simply runs the usual handleInput, logic, and render for the playStates, just as the low-level gameStateManager does. This lets me store all the player data within the base playState class without storing useless data in the gameStates. Now I have added a separate mapEditor, which uses editorStates from gameStateEditor. Is this too much usage of the gameState concept? It seems to work pretty well for me, so I was wondering if I am too far off a common implementation of this. My second question is about image resources. I have my sprite class with nothing but static members, mainly loadImage, applySurface, and my screen pointer. I also have a map pairing imageName enums with actual SDL_Surface pointers, and one pairing clipNumber enums with a wrapper class for a vector of clips, so that each entry in the map can have a different number of clips with different sizes. I thought it would be better to store all these images, and the screen, within one static body, since 20 different goblins all use the same sprite sheet and all need to print to the same screen, and of course this way I do not need to pass my screen reference to every little entity. The imageMap seems to work very well; I can even search the map at entity creation to see if a particular image already exists, creating it if it doesn't, and destroying the image when the last entity that needs it is destroyed. The vectored clip map, however, seems to take too long to initialize, so if I run past the state that initializes them too fast, the game crashes. Plus, the clip map call is half of this line =P SPRITE::applySurface( cEditorMap.cTiles[x][y].iX, cEditorMap.cTiles[x][y].iY, SPRITE::mImages[ IMAGE_TILEMAP ], SPRITE::screen, SPRITE::mImageClips[IMAGE_TILEMAP]->clips.at( cEditorMap.cTiles[x][y].iTileType ) ); Again, do I have the right idea? I like the imageMap, but am I better off with each entity storing its own clips? My last question is about collision detection. I only grasp the basics and will look at per-pixel and circular collision soon, but how can I determine which side the collision comes from with just basic rectangle collision detection? I tried breaking each entity into 4 collision zones, but that just gave me problems with walking through walls and the like. Also, is per-pixel color collision a good way to decide what collision just occurred, or is checking multiple colors for multiple entities too taxing each cycle?
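
    For the shared image/clip map idea, here is a minimal C++/SDL sketch of one way to structure it (the names ImageId, ClipSet and Sprite are invented for illustration and are not taken from the original code; treat it as a sketch under SDL 1.2 assumptions, not a drop-in implementation). Storing the clip rectangles by value and filling them once at start-up avoids both per-frame vector construction and the crash that happens when a state is skipped before initialization finishes:

        // Hypothetical names throughout; assumes SDL 1.2-style surfaces as in the question.
        #include <SDL/SDL.h>
        #include <map>
        #include <vector>

        enum ImageId { IMAGE_TILEMAP, IMAGE_GOBLIN };       // one entry per sprite sheet

        struct ClipSet {                                    // all clip rectangles for one sheet
            std::vector<SDL_Rect> clips;
        };

        struct Sprite {                                     // static holder shared by all entities
            static SDL_Surface* screen;
            static std::map<ImageId, SDL_Surface*> images;
            static std::map<ImageId, ClipSet> imageClips;   // stored by value: no pointer to manage

            // Load a sheet the first time it is requested, reuse it afterwards.
            static SDL_Surface* image(ImageId id, const char* path) {
                std::map<ImageId, SDL_Surface*>::iterator it = images.find(id);
                if (it != images.end()) return it->second;
                SDL_Surface* raw = SDL_LoadBMP(path);       // or IMG_Load from SDL_image
                SDL_Surface* opt = SDL_DisplayFormat(raw);
                SDL_FreeSurface(raw);
                images[id] = opt;
                return opt;
            }

            static void applySurface(int x, int y, SDL_Surface* src,
                                     SDL_Surface* dst, SDL_Rect* clip) {
                SDL_Rect offset;
                offset.x = (Sint16)x;
                offset.y = (Sint16)y;
                SDL_BlitSurface(src, clip, dst, &offset);
            }
        };

        SDL_Surface* Sprite::screen = 0;
        std::map<ImageId, SDL_Surface*> Sprite::images;
        std::map<ImageId, ClipSet> Sprite::imageClips;

    A reference-counted variant (count entities per ImageId, free the surface when the count hits zero) gives the create-on-demand/destroy-when-unused behaviour described above without each entity owning its own clips.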

    Read the article

  • Mercurial release management. Rejecting changes that fail testing

    - by MYou
    Researching distributed source control management (specifically Mercurial). My question is, more or less: what is the best practice for rejecting an entire set of changes that fails testing? Example: a team is working on a hello world program. They have testers and a scheduled release coming up with specific features planned. Upcoming release: add feature A, add feature B, add feature C. So the developers make their clones for their features, do the work, and merge them into a QA repo for the testers to scrutinize. Let's say the testers report back that "Feature B is incomplete and in fact dangerous", and they would like to retest A and C. End example. What's the best way to do all this so that feature B can easily be removed and you end up with a new repo that contains only features A and C merged together? Recreate the test repo? Back out B? Other magic?

    Read the article

  • How to validate selects / inserts are hitting the right server with MySQL Master/Slave

    - by bwizzy
    I've got a Rails app using the master_slave_adapter plugin (http://github.com/mauricio/master_slave_adapter/tree/master) to send all selects to a slave and all other statements to the master. Replication is set up using MySQL master/slave. I'm trying to validate that all the SQL statements are indeed going to the right place: selects to the slave (db2), inserts to the master (db1). But I'm not sure how to do it. I've tried using tcpdump on the webservers: sudo /usr/sbin/tcpdump -q -i eth0 dst port 3306 and this is the output for a page request with a ton of selects: 10:32:36.570930 IP web2.mydomain.com.57524 > db1.mydomain.com.mysql: tcp 0 10:32:36.576805 IP web2.mydomain.com.57524 > db1.mydomain.com.mysql: tcp 0 10:32:36.577201 IP web2.mydomain.com.57524 > db1.mydomain.com.mysql: tcp 0 10:32:36.577980 IP web2.mydomain.com.57524 > db1.mydomain.com.mysql: tcp 86 10:32:36.578186 IP web2.mydomain.com.57524 > db1.mydomain.com.mysql: tcp 21 10:32:36.578359 IP web2.mydomain.com.57524 > db1.mydomain.com.mysql: tcp 27 10:32:36.578522 IP web2.mydomain.com.57524 > db1.mydomain.com.mysql: tcp 5 10:32:36.578741 IP web2.mydomain.com.57524 > db1.mydomain.com.mysql: tcp 13 10:32:36.579611 IP web2.mydomain.com.57524 > db1.mydomain.com.mysql: tcp 29 10:32:36.588201 IP web2.mydomain.com.45978 > db2.mydomain.com.mysql: tcp 0 10:32:36.588323 IP web2.mydomain.com.45978 > db2.mydomain.com.mysql: tcp 0 10:32:36.588677 IP web2.mydomain.com.45978 > db2.mydomain.com.mysql: tcp 0 10:32:36.588784 IP web2.mydomain.com.45978 > db2.mydomain.com.mysql: tcp 86 It doesn't look like all the selects are going to the slave. Maybe this isn't the right way to test; does anyone know a better way?

    Read the article

  • Select multiple records from sql database table in a master-detail scenario

    - by Trex
    Hello, I have two tables in a master-detail relationship. The structure is more or less as follows:
    Master table: MasterID, DetailID, date, ...
      masterID1, detailID1, 2010/5/1, ...
      masterID2, detailID1, 2008/6/14, ...
      masterID3, detailID1, 2009/5/25, ...
      masterID4, detailID2, 2008/7/24, ...
      masterID5, detailID2, 2010/4/1, ...
      masterID6, detailID4, 2008/9/16, ...
    Details table: DetailID, ...
      detailID1, ...
      detailID2, ...
      detailID3, ...
      detailID4, ...
    I need to get all the records from the details table plus the LAST record from the master table (last by the date in the master table). Like this:
      detailID1, masterID1, 2010/5/1, ...
      detailID2, masterID5, 2010/4/1, ...
      detailID3, null, null, ...
      detailID4, masterID6, 2008/9/16, ...
    I have no idea how to do this. Can anybody help me? Thanks a lot. Jan

    Read the article

  • Store XML data in Core Data

    - by ct2k7
    Hi, is there an easy way of storing XML data in Core Data? Currently my app just pulls the values from the XML file directly; however, this isn't efficient for XML files that hold over 100 entries, so storing the data in Core Data would be the best option. The XML file is called/downloaded/parsed every time the app opens. With Core Data, the XML data would be downloaded every 3600 seconds or so and would refresh the current data in Core Data, to reduce the loading time when opening the app. Any ideas on how I can do this? Having reviewed the developer documentation, it doesn't look very tasty.

    Read the article

  • Where can I find free and open data?

    - by kitsune
    Sooner or later, coders will feel the need to have access to "open data" in one of their projects, from knowing a city's zip code to more obscure information such as the axial tilt of Pluto. I know data.un.org, which offers access to the UN's extensive array of databases that deal with human development and other socio-economic issues. The other usual suspects are NASA and the USGS for planetary data. There's an article at readwriteweb with more links; infochimps.org seems to stand out. Personally, I need to find historic commodity prices, stock values and other financial data. All these data sets seem to cost money, however. Clarification: I'm interested in all kinds of open data, because sooner or later I know I will be in a situation where I could need it. I will try to edit this answer and include the suggestions in a structured manner. A link for financial data was hidden in that readwriteweb article, doh! It's called opentick.com. Looks good so far! Update: I stumbled over semantic data in another question of mine on here. There is opencyc ('the world's largest and most complete general knowledge base and commonsense reasoning engine'). A project called UMBEL provides a light-weight, distilled version of opencyc, with semantic data in rdf/owl/skos n3 syntax. The World Bank also released a very nice API; it offers data from the last 50 years for about 200 countries.

    Read the article

  • Temporary storage for keeping data between program iterations?

    - by mr.b
    I am working on an application that works like this: (1) it fetches data from many sources, resulting in a pool of about 500,000-1,500,000 records (depending on time/day); (2) the data is parsed; (3) part of the data is processed in a way that compares it to pre-existing data (read from the database), calculations are made, and the results are stored in the database. The resulting dataset that has to be stored in the database is, however, much smaller in size (compared to the original data set), ranging from 5,000-50,000 records; this process almost always updates existing data, perhaps adding a few more records. (4) Then, the data from step 2 should be kept somehow, somewhere, so that the next time data is fetched, there is a data set which can be used to perform calculations without touching the pre-existing data in the database. I should point out that this data can be lost; it's not irreplaceable (key information can be read from the database if needed), but keeping it would speed up the process next time. Application components can (and will) be run on different computers (in the same network), so the storage has to be reachable from multiple hosts. I have considered using memcached, but I'm not quite sure I should, because one record is usually no smaller than 200 bytes, and if I have 1,500,000 records, I guess it would amount to over 300 MB of memcached cache... That doesn't seem scalable to me: what if the data were 5x that amount? What if it were to consume 1-2 GB of cache only to keep data between iterations (which could easily happen)? So, the question is: which temporary storage mechanism would be most suitable for this kind of processing? I haven't considered using MySQL temporary tables, as I'm not sure if they can persist between sessions and be used by other hosts on the network... Any other suggestions? Something I should consider?

    Read the article

  • What's the key difference between data management and data governance?

    - by Sid Xing
    I just read some articles about these two topics, and I think they have similar goals, but DG seems to be more about process management following best practices. So my first question is about the difference between DG and DM; I'm confused. There are so many concepts around data management: data quality, data security, data governance, data profiling, data integration, master data management, metadata management... It seems like none of them is exactly separate; they all overlap. My second question is a request for your suggestions to help me better understand the relationships between these concepts. Appreciate your help.

    Read the article

  • Why are data structures so important in interviews?

    - by Vamsi Emani
    I am a newbie in the corporate world, recently graduated in computers. I am a Java/Groovy developer. I am a quick learner and I can learn new frameworks, APIs or even programming languages within a considerably short amount of time. Even so, I must confess that I was not so strong in data structures when I graduated from college. Throughout the campus placements during my graduation, I witnessed that most of the big tech companies like Amazon, Microsoft etc. focused mainly on data structures. It appears as if data structures are the only thing they expect from a graduate. Adding to this, I see that there is a general perspective that a good programmer is necessarily one with good knowledge of data structures. To be honest, I felt bad about that. I write good code. I follow standard design patterns of coding, and I do use data structures, but at a superficial level, through Java's exposed APIs like ArrayLists, LinkedLists etc. But the companies usually focused on the intricate aspects of data structures, like pointer-based memory manipulation and time complexities. Probably because of my Java-ish background, back then I understood code efficiency and logic only when talked about in terms of object-oriented programming (objects, instances, etc.), but I never drilled down to the level of bits and bytes. I did not want people to look down upon me for this knowledge deficit of mine in data structures. So really, why all this emphasis on data structures? Does not having knowledge of data structures really affect one's career in programming? Or is knowledge of this subject really a sufficient basis to differentiate a good and a bad programmer?

    Read the article

  • Data structure for pattern matching.

    - by alvonellos
    Let's say you have an input file with many entries like these: date, ticker, open, high, low, close, <and some other values>, and you want to execute a pattern matching routine on the entries (rows) in that file, using a candlestick pattern, for example (see Doji). That pattern can appear on any uniform time interval (let t = 1s, 5s, 10s, 1d, 7d, 2w, 2y, and so on...). Say a pattern matching routine can take an arbitrary number of rows to perform an analysis and contain an arbitrary number of subpatterns; in other words, some patterns may require 4 entries to operate on. Say also that the routine may later have to find and classify extrema (local and global maxima and minima as well as inflection points) for the ticker over a closed interval; for example, you could say that a cubic function (x^3) has its extrema on the interval [-1, 1] (see link). What would be the most natural choice in terms of a data structure? What about an interface that conforms a Ticker object containing one row of data to a collection of Tickers, so that an arbitrary pattern can be applied to the data? What's the first thing that comes to mind? I chose a doubly-linked circular list that has the following methods: push_front(), push_back(), pop_front(), pop_back(), and an overloaded [] that can be used with negative parameters. But that data structure seems very clumsy: since so much pushing and popping is going on, I have to make a deep copy of the data structure before running an analysis on it. So, I don't know if I made my question very clear, but the main points are: What kind of data structures should be considered when analyzing sequential data points to conform to a pattern that does NOT require random access? What kind of data structures should be considered when classifying extrema of a set of data points?
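
    One common alternative to the push/pop-heavy circular list is a plain read-only sliding window over the rows, so no deep copy is needed before an analysis. A rough C++ sketch, with Ticker and the pattern signature as simplified stand-ins for whatever the real row type looks like:

        #include <cmath>
        #include <cstddef>
        #include <deque>
        #include <functional>
        #include <string>
        #include <vector>

        // Simplified stand-in for one row of the input file.
        struct Ticker {
            std::string date;
            double open, high, low, close;
        };

        // A pattern is just a predicate over a window of consecutive rows.
        using Pattern = std::function<bool(const std::deque<Ticker>&)>;

        // Slide a fixed-size window across the series; record where the pattern matches.
        std::vector<std::size_t> findMatches(const std::vector<Ticker>& rows,
                                             std::size_t windowSize,
                                             const Pattern& pattern) {
            std::vector<std::size_t> hits;
            std::deque<Ticker> window;
            for (std::size_t i = 0; i < rows.size(); ++i) {
                window.push_back(rows[i]);
                if (window.size() > windowSize) window.pop_front();
                if (window.size() == windowSize && pattern(window))
                    hits.push_back(i + 1 - windowSize);   // index of the first row in the match
            }
            return hits;
        }

        // Example pattern: a very loose Doji check on the last bar of the window.
        bool looseDoji(const std::deque<Ticker>& w) {
            const Ticker& t = w.back();
            return std::abs(t.open - t.close) <= 0.001 * t.open;
        }

    The source rows stay in one immutable std::vector, so resampling to other intervals and the later extrema/inflection classification can each be separate passes over the same data rather than mutations of it.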

    Read the article

  • Object pools for efficient resource management

    - by GameDevEnthusiast
    How can I avoid using the default new() to create each object? My previous demo had very unpleasant framerate hiccups during dynamic memory allocations (usually when arrays are resized), and creating lots of small objects which often contain one pointer to some DirectX resource seems like an awful lot of waste. I'm thinking about: creating a master look-up table to refer to objects by handles (for safety and ease of serialization), much like EntityList in the Source engine; and creating a templated object pool, which will store items contiguously (more cache-friendly, fast iteration, etc.), where the stored elements will be accessed (by external systems) via the global lookup table. The object pool will use the swap-with-last trick for fast removal (it will invoke the object's destructor first) and will update the corresponding indices in the global table accordingly (when growing/shrinking/moving elements). The elements will be copied via plain memcpy(). Is it a good idea? Will it be safe to store objects of non-POD types (e.g. pointers, vtable) in such containers? Related post: Dynamic Memory Allocation and Memory Management
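
    A bare-bones sketch of the handle-table-plus-dense-pool idea might look like the following (all names are invented for illustration; a production version would add a free list and generation counters to detect stale handles). Note that it uses ordinary copy assignment for the swap-with-last step rather than memcpy, which keeps non-POD members such as owning pointers or vtables safe; raw memcpy is only defensible for trivially copyable types:

        #include <cstddef>
        #include <cstdint>
        #include <vector>

        using Handle = std::uint32_t;                 // index into the lookup table

        template <typename T>
        class ObjectPool {
        public:
            Handle create(const T& value) {
                Handle h = static_cast<Handle>(table_.size());
                table_.push_back(items_.size());      // handle -> dense index
                items_.push_back(value);
                owner_.push_back(h);                  // dense index -> handle
                return h;
            }

            T& get(Handle h) { return items_[table_[h]]; }

            void destroy(Handle h) {
                std::size_t dense = table_[h];
                std::size_t last  = items_.size() - 1;
                items_[dense] = items_[last];         // swap-with-last via copy assignment
                owner_[dense] = owner_[last];
                table_[owner_[dense]] = dense;        // repoint the moved element's handle
                items_.pop_back();
                owner_.pop_back();
                // table_[h] is now stale; a real pool would recycle it through a free list.
            }

            std::size_t size() const { return items_.size(); }
            T* data() { return items_.data(); }       // contiguous, cache-friendly iteration

        private:
            std::vector<T>           items_;          // dense storage
            std::vector<std::size_t> table_;          // handle -> dense index
            std::vector<Handle>      owner_;          // dense index -> handle
        };

    External systems hold only Handle values, so elements can be moved freely inside the dense array without invalidating anything the rest of the engine stores.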

    Read the article

  • The Future of Project Management is Social

    - by Natalia Rachelson
    A guest post by Kazim Isfahani, Director, Product Marketing, Oracle Rapid Ascent. Breakneck Speed. Lightning Fast. Perhaps even overwhelming. No matter which set of adjectives we use to describe it, social media’s rise into the enterprise mainstream has been unprecedented. Indeed, the big 4 social media powerhouses (Facebook, Google+, LinkedIn, and Twitter), have nearly 2 Billion users between them. You may be asking (as you should really) “That’s all well and good for the consumer, but for me at my company, what’s your point? Beyond the fact that I can check and post updates, that is.” Good question, kind sir. Impact of Social and Collaboration on Project Management I’ll dovetail this discussion to the project management realm, since that’s what I’m writing about. Speed is a big challenge for project-driven organizations. Anything that can help speed up project delivery - be it a new product introduction effort or a geographical expansion project - fast is a good thing. So where does this whole social thing fit particularly since there are already a host of tools to help with traditional project execution? The fact is companies have seen improvements in their productivity by deploying departmental collaboration and other social-oriented solutions. McKinsey’s survey on social tools shows we have reached critical scale: 72% of respondents report that their companies use at least one and over 40% say they are using social networks and blogs. We don’t hear as much about the impact of social media technologies at the project and project manager level, but that does not mean there is none. Consider the new hire. The type of individual entering the workforce and executing on projects is a generation of worker expecting visually appealing, easy to use and easy to understand technology meshing hand-in-hand with business processes. Consider the project manager. The social era has enhanced the role that the project manager must play. Today’s project manager must be a supreme communicator, an influencer, a sympathizer, a negotiator, and still manage to keep all stakeholders in the loop on project progress. Social tools play a significant role in this effort. Now consider the impact to the project team. The way that a project team functions has changed, with newer, social oriented technologies making the process of information dissemination and team communications much more fluid. It’s clear that a shift is occurring where “social” is intersecting with project management. The Rise of Social Project Management We refer to the melding of project management and social networking as Social Project Management. 
Social Project Management is based upon the philosophy that the project team is one part of an integrated whole, and that valuable and unique abilities exist within the larger organization. For this reason, Social Project Management systems should be integrated into the collaborative platform(s) of an organization, allowing communication to proceed outside the project boundaries. What makes social project management "social" is an implicit awareness where distributed teams build connected links in ways that were previously restricted to teams that were co-located. Just as critical, Social Project Management embraces the vision of seamless online collaboration within a project team, but also provides for (and enhances) the use of rigorous project management techniques. Social Project Management acknowledges that projects (particularly large projects) are a social activity - people doing work with people, for other people, with commitments to yet other people. The more people (larger projects), the more interpersonal the interactions, and the more social affects the project. The Epitome of Social - Fusion Project Portfolio Management If I take this one level further to discuss Fusion Project Portfolio Management, the notion of Social Project Management is on full display. With Fusion Project Portfolio Management, project team members have a single place for interaction on projects and access to any other resources working within the Fusion ERP applications. This allows team members the opportunity to be informed with greater participation and provide better information. The application’s visual appeal and highly graphical nature make it easy to navigate information. The project activity stream adds to the intuitive user experience. The goal of productivity is pervasive throughout Fusion Project Portfolio Management. Field research conducted with Oracle customers and partners showed that users needed a way to stay in the context of their core transactions and yet easily access social networking tools. This is manifested in the application so that when a user executes a business process, they not only have the transactional application at their fingertips, but also have things like e-mail, SMS, text, instant messaging, chat – all providing a number of different ways to interact with people and/or groups of people, both internal and external to the project and enterprise. But in the end, connecting people is relatively easy. The larger issue is finding a way to serve up relevant, system-generated, actionable information, in real time, which will allow for more streamlined execution on key business processes. Fusion Project Portfolio Management’s design concept enables users to create project communities, establish discussion threads, manage event calendars as well as deliver project based work spaces to organize communications within the context of a project – all within a secure business environment. We’d love to hear from you and get your thoughts and ideas about how Social Project Management is impacting your organization. To learn more about Oracle Fusion Project Portfolio Management, please visit this link

    Read the article

  • Three Master Data Management Deployment Tips

    - by david.butler(at)oracle.com
    MDM is all about data quality and data governance. We now know that improved data quality raises all operational and analytical boats. But it's not just about deploying data quality tools. It's about deploying data quality tools within and across the IT landscape - from a thousand points of data entry to a single version of the truth. Here are three tips for deploying MDM across your applications and enterprise. #1: Identify a tactical, high-value business problem where MDM can materially help. Support a customer acquisition and retention program with a 'customer' master data solution. Accelerate new products and services to market with a 'product' master data solution. Reduce supplier exceptions or support spend control initiatives with a 'supplier' master data solution. Support new store (branch, campus, restaurant, hospital, office, well head) location analysis with a 'site' master data solution. Fix long-standing Chart of Accounts and Cost Center problems with a 'financial' master data solution. Support M&A activity, application upgrades, an SOA initiative, a cloud computing program, or a new business intelligence deployment by implementing a mix of master data solutions. #2: Incrementally expand to a full information architecture. Quite often, the measurable return on investment from tactical MDM initiatives will fund future deployments. Over time, the MDM solution expands into its full architecture to cover the entire IT landscape. Operations and analytics are united, IT flexibility is restored, and sustainable competitive advantage is achieved. #3: Bring business into every MDM deployment. To be successful, MDM must work hand in hand with data governance. In fact, Oracle MDM incorporates data governance tools for business users. IT can ensure data quality, but only after the business side has defined what quality means. The business establishes the rules for governing the master data, and then IT enforces the rules via the MDM applications. Without this business/IT collaboration, MDM initiatives seldom achieve their full potential. It is not very often that a technology comes along that can measurably assist organizations across a wide variety of top IT initiatives. Reducing costs, increasing flexibility, getting more out of existing assets, and aligning business and IT are not easy tasks for any CIO. But with MDM, success is achievable. IT can regain its place as a center for innovation. For more information on this topic, take a look at my article Master Data Management Deployment Tips in the Opinion Section of Oracle's Profit Online magazine.

    Read the article

  • Address Regulatory Mandates for Data Encryption Without Changing Your Applications

    - by Troy Kitch
    The Payment Card Industry Data Security Standard, US state-level data breach laws, and numerous data privacy regulations worldwide all call for data encryption to protect personally identifiable information (PII). However encrypting PII data in applications requires costly and complex application changes. Fortunately, since this data typically resides in the application database, using Oracle Advanced Security, PII can be encrypted transparently by the Oracle database without any application changes. In this ISACA webinar, learn how Oracle Advanced Security offers complete encryption for data at rest, in transit, and on backups, along with built-in key management to help organizations meet regulatory requirements and save money. You will also hear from TransUnion Interactive, the consumer subsidiary of TransUnion, a global leader in credit and information management, which maintains credit histories on an estimated 500 million consumers across the globe, about how they addressed PCI DSS encryption requirements using Oracle Database 11g with Oracle Advanced Security. Register to watch the webinar now.

    Read the article

  • Using Hadoop (HDInsight) with Microsoft - Two (OK, Three) Options

    - by BuckWoody
    Microsoft has many tools for “Big Data”. In fact, you need many tools – there’s no product called “Big Data Solution” in a shrink-wrapped box – if you find one, you probably shouldn’t buy it. It’s tempting to want a single tool that handles everything in a problem domain, but with large, complex data, that isn’t a reality. You’ll mix and match several systems, open and closed source, to solve a given problem. But there are tools that help with handling data at large, complex scales. Normally the best way to do this is to break up the data into parts, and then put the calculation engines for that chunk of data right on the node where the data is stored. These systems are in a family called “Distributed File and Compute”. Microsoft has a couple of these, including the High Performance Computing edition of Windows Server. Recently we partnered with Hortonworks to bring the Apache Foundation’s release of Hadoop to Windows. And as it turns out, there are actually two (technically three) ways you can use it. (There’s a more detailed set of information here: http://www.microsoft.com/sqlserver/en/us/solutions-technologies/business-intelligence/big-data.aspx, I’ll cover the options at a general level below)  First Option: Windows Azure HDInsight Service  Your first option is that you can simply log on to a Hadoop control node and begin to run Pig or Hive statements against data that you have stored in Windows Azure. There’s nothing to set up (although you can configure things where needed), and you can send the commands, get the output of the job(s), and stop using the service when you are done – and repeat the process later if you wish. (There are also connectors to run jobs from Microsoft Excel, but that’s another post)   This option is useful when you have a periodic burst of work for a Hadoop workload, or the data collection has been happening into Windows Azure storage anyway. That might be from a web application, the logs from a web application, telemetrics (remote sensor input), and other modes of constant collection.   You can read more about this option here:  http://blogs.msdn.com/b/windowsazure/archive/2012/10/24/getting-started-with-windows-azure-hdinsight-service.aspx Second Option: Microsoft HDInsight Server Your second option is to use the Hadoop Distribution for on-premises Windows called Microsoft HDInsight Server. You set up the Name Node(s), Job Tracker(s), and Data Node(s), among other components, and you have control over the entire ecostructure.   This option is useful if you want to  have complete control over the system, leave it running all the time, or you have a huge quantity of data that you have to bulk-load constantly – something that isn’t going to be practical with a network transfer or disk-mailing scheme. You can read more about this option here: http://www.microsoft.com/sqlserver/en/us/solutions-technologies/business-intelligence/big-data.aspx Third Option (unsupported): Installation on Windows Azure Virtual Machines  Although unsupported, you could simply use a Windows Azure Virtual Machine (we support both Windows and Linux servers) and install Hadoop yourself – it’s open-source, so there’s nothing preventing you from doing that.   Aside from being unsupported, there are other issues you’ll run into with this approach – primarily involving performance and the amount of configuration you’ll need to do to access the data nodes properly. 
But for a single-node installation (where all components run on one system) such as learning, demos, training and the like, this isn’t a bad option. Did I mention that’s unsupported? :) You can learn more about Windows Azure Virtual Machines here: http://www.windowsazure.com/en-us/home/scenarios/virtual-machines/ And more about Hadoop and the installation/configuration (on Linux) here: http://en.wikipedia.org/wiki/Apache_Hadoop And more about the HDInsight installation here: http://www.microsoft.com/web/gallery/install.aspx?appid=HDINSIGHT-PREVIEW Choosing the right option Since you have two or three routes you can go, the best thing to do is evaluate the need you have, and place the workload where it makes the most sense.  My suggestion is to install the HDInsight Server locally on a test system, and play around with it. Read up on the best ways to use Hadoop for a given workload, understand the parts, write a little Pig and Hive, and get your feet wet. Then sign up for a test account on HDInsight Service, and see how that leverages what you know. If you're a true tinkerer, go ahead and try the VM route as well. Oh - there’s another great reference on the Windows Azure HDInsight that just came out, here: http://blogs.msdn.com/b/brunoterkaly/archive/2012/11/16/hadoop-on-azure-introduction.aspx  

    Read the article

  • Oracle - A Leader in Gartner's MQ for Master Data Management for Customer Data

    - by Mala Narasimharajan
    The Gartner MQ report for Master Data Management of Customer Data Solutions has been released, and we're proud to say that Oracle is in the Leaders quadrant. Here's a snippet from the report itself: "Oracle has a strong, though complex, portfolio of domain-specific MDM products that include prepackaged data models. Gartner estimates that Oracle now has over 1,500 licensed MDM customers, including 650 customers managing customer data. The MDM portfolio includes three products that address MDM of customer data solution needs: Oracle Fusion Customer Hub (FCH), Oracle CDH and Oracle Siebel UCM. These three MDM products are positioned for different segments of the market and Oracle is progressively moving all three products onto a common MDM technology platform..." (Gartner, Oct 18, 2012) For more information on Oracle's solutions for customer data in Master Data Management, click here.

    Read the article

  • Almost Realtime Data and Web application

    - by Chris G.
    I have a computer that is recording 100 different data points into an OPC server. I've written a simple OPC client that can read all of this data. I have a front-end website on a different network that I would like to have consume this data. I could easily set the OPC client to send the data to a SQL server and have the website read from it, but that would be a lot of writes: if I wanted the data to be updated every 10 seconds, I'd be writing to the database every 10 seconds. (I could probably just serialize the 100 points to get 1 write per 10 seconds, but that would also limit my ability to search the data later.) This solution wouldn't scale very well; if I had 100 of these computers, the situation would quickly grow out of hand. Obviously I am well out of my league here and I have no experience with working with a large amount of data like this. What are my options and what should I research?
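
    As a sketch of the batching idea mentioned above (one bulk write per interval instead of one write per point), something along these lines could sit between the OPC client and the database; the names and the flush callback are invented for illustration, and the callback could just as well post to a message queue as issue a multi-row INSERT:

        #include <chrono>
        #include <functional>
        #include <string>
        #include <vector>

        // One OPC sample: tag name, value, and when it was read.
        struct Sample {
            std::string tag;
            double value;
            std::chrono::system_clock::time_point readAt;
        };

        // Accumulates samples in memory and hands them off in one batch per interval.
        class BatchWriter {
        public:
            BatchWriter(std::chrono::seconds interval,
                        std::function<void(const std::vector<Sample>&)> flushFn)
                : interval_(interval), flushFn_(flushFn),
                  lastFlush_(std::chrono::system_clock::now()) {}

            void add(const Sample& s) {
                pending_.push_back(s);
                auto now = std::chrono::system_clock::now();
                if (now - lastFlush_ >= interval_) {
                    flushFn_(pending_);               // e.g. one multi-row INSERT
                    pending_.clear();
                    lastFlush_ = now;
                }
            }

        private:
            std::chrono::seconds interval_;
            std::function<void(const std::vector<Sample>&)> flushFn_;
            std::chrono::system_clock::time_point lastFlush_;
            std::vector<Sample> pending_;
        };

    Keeping individual rows in each batch (rather than one serialized blob) preserves the ability to query single points later, which was the concern with serializing all 100 values into one record.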

    Read the article

  • Form Elements in ASP.NET Master Pages and Content Pages

    - by Rob Cooper
    OK, another road bump in my current project. I have never had form elements in both my master and content pages; I tend to have all the forms in the content where relevant. In the current project, however, we have a page where they want both: a login form at the top right, and a questions form in the content. Having tried to get this in, I have run into the issue of ASP.NET complaining about the need for a single form element in a master page. TBH, I really don't get why this is a requirement on ASP.NET's part, but hey ho. Does anyone know if/how I can get the master and content pages to contain form elements that work independently? If not, can you offer advice on how to proceed to get the desired look/functionality?

    Read the article

  • Pass data to Master Page with ASP.NET MVC

    - by Brian David Berman
    I have a hybrid ASP.NET WebForms/MVC project. In my Master Page, I have a "menu" User Control and a "footer" User Control. Anyway, I need to pass some data (2 strings) to my "menu" User Control on my Master Page (to select the current tab in my menu navigation, etc.). My views are strongly typed to my data model. How can I push data from my controller to my menu, or at least allow my master page to access some data pre-defined in my controller? Note: I understand this violates pure ASP.NET MVC, but like I said, it is a hybrid project. The main purpose of my introduction of ASP.NET MVC into the project was to have more control over my UI for certain situations only.

    Read the article

  • How to check if a variable is defined in a Master file in ASP.NET MVC

    - by Mortanis
    I've got a Site.Master file I've created to be my template for the majority of the site, with a navigation. This navigation is created dynamically, based on a recursive entity (called Page): Pages with a parentId of 0 are top level, and naturally each child carries its parent's Id in that field. I've created a quick little HTML helper that accepts the Id of a Page and generates the nav by doing a foreach on the children that have a parentId matching the passed Id. On the majority of the site, I want the Site.Master to use a parentId of 0, but if I'm on a strongly typed View displaying a Page, I naturally want to use the Id of that page. Is there a way to do such conditional logic in a Site.Master (and does that violate MVC rules)? "If I'm on a strongly typed Page of /Page/{Id}, use the Id to render the nav, else use 0."

    Read the article

  • BIND master/slave does not respond to queries for its slave

    - by Savas
    The systems are all CentOS 6.2. Let's say I have a masterdns with IP 10.2.1.2, authoritative for the 10.2.1.X subnet, and let's say it is the domain example.com. I have another two subnets, 10.2.2.X and 10.1.2.X. Each one has its own DNS server, dns2 and dns1 respectively, and let's say these are the domains dom2.example.com and dom1.example.com respectively. The masterdns server has slave zones for dns1 and dns2 and responds to requests OK. dns1 and dns2 have the masterdns zones as slaves too, and respond to requests OK. So the masterdns has as slave zones all the subordinate domains of example.com. Each of dns1 and dns2 uses masterdns as a forwarder (which in turn uses another DNS cache/proxy server) for resolution of public internet domain names. That works OK too. The problem is, and I cannot figure it out: why do queries at dns1 for hostnames of dom2.example.com not resolve? If I use nslookup - masterdns at the dns1 server (i.e. query masterdns directly), they resolve. If I use nslookup locally, meaning queries are sent to dns1, hosts that are in dom2.example.com do not resolve. Everything else works OK.

    Read the article

  • Setting up MySQL Linux slave with a Windows master

    - by philwilks
    I'm running a MySQL 5.0 database server on Windows Server 2008. The total size of the database is about 1Gb. I make daily backups, but I'd like to step up to having a slave server for extra protection. My thinking was that I wouldn't need the expense of a Windows machine to do this, and a Linux "cloud server" from RackSpace would do the job well for quite a low cost. However I have little experience with Linux, so I have a few questions... Does this sound like a good idea? Is there anything wrong with linking Windows and Linux MySQL servers? Does Linux have the equivalent of Remote Desktop Connection? If so can I use this from a Windows machine? Would a particular Linux distro be well suited to this task? RackSpace offer ArchLinux, CentOS, Debian, Fedora and Ubuntu. My immediate thinking is to go with Ubuntu as I've heard it's more friendly for people coming from a Windows background. Any comments you have would be very appreciated! Phil

    Read the article

  • Identity Management as a Controls Infrastructure

    - by Darin Pendergraft
    Identity systems are indispensable to managing online resources, and are becoming increasingly more complex as businesses adapt their current infrastructures to support a broad user population across a wide range of devices. Adding point products to solve problems addresses the short term need, but complicates the longer term management outlook. Download the latest whitepaper HERE to see how Oracle is taking a platform approach to building a scalable and secure controls infrastructure that enables businesses to engage customers and gives employees secure access to corporate resources from anywhere.

    Read the article
