Search Results

Search found 69877 results on 2796 pages for 'ibm data studio'.

Page 13/2796 | < Previous Page | 9 10 11 12 13 14 15 16 17 18 19 20  | Next Page >

  • Visual Studio 2010 editions - switching from Premium (not a trial) to Ultimate trial and back again

    - by Bernard Vander Beken
    I have installed Visual Studio 2010 Premium RTM (not a trial) and would like to run the Visual Studio 2010 Ultimate trial for a while. What is the fastest way to switch to the Ultimate trial and back again to Premium? My best idea: do not uninstall the Premium edition; run the Microsoft Visual Studio 2010 Ultimate Trial - Web Install; test the Ultimate trial; uninstall the Ultimate trial; then repair the Premium installation.

    Read the article

  • Visual Studio Multi-Targeting (maintaining backwards compatibility)

    - by Phillip Benages
    I know that in Visual Studio 2008 you can target a specific framework with your projects, but from what I have been told, if you open a project originally created in Visual Studio 2003 or 2005 in Visual Studio 2008, it requires you to upgrade it to a 2008 project before you can work on it. Does Visual Studio 2010 have this same type of restriction for multi-targeting? It would be very nice to be able to use the features of 2010 when working on projects that target different frameworks, but we do not want to force all of our developers to upgrade in order to continue working on these projects.

    Read the article

  • Creating a dockable tool window for Visual Studio

    - by Morgeh
    I have a web service system for managing development projects. What I would like to do is create a Visual Studio plugin that accesses the web service and returns a list of tasks for the current user (via some sort of login). Looking around the internet, I can't find any good examples or tutorials on how to create a Visual Studio plugin that can be docked to the bottom of the screen (the same place as the Error List, Test Results, etc.). Does anyone know of a good website with examples or tutorials covering the basics of creating a Visual Studio plugin, specifically for VS2008?

    Read the article

  • Visual Studio 2012 Very Slow Typing

    - by DaoCacao
    After the SP1 update, once it has been running for some time, VS 2012 becomes very, very slow when typing text. The solution is not big, and the PC is quite powerful: 16 GB of RAM, an SSD drive, and an i7-2600. I attached to it with another instance of VS and I see a lot of exceptions in the debugger:
    First-chance exception at 0x753BB9BC in devenv.exe: Microsoft C++ exception: CVcsException at memory location 0x0027DF0C.
    First-chance exception at 0x753BB9BC in devenv.exe: Microsoft C++ exception: CVcsException at memory location 0x0027DF0C.
    First-chance exception at 0x753BB9BC (KernelBase.dll) in devenv.exe: 0xE0434352 (parameters: 0x80131509, 0x00000000, 0x00000000, 0x00000000, 0x64BF0000).
    First-chance exception at 0x753BB9BC in devenv.exe: Microsoft C++ exception: CVcsException at memory location 0x0027DF0C.
    First-chance exception at 0x753BB9BC in devenv.exe: Microsoft C++ exception: CVcsException at memory location 0x0027DF0C.
    First-chance exception at 0x753BB9BC (KernelBase.dll) in devenv.exe: 0xE0434352 (parameters: 0x80131509, 0x00000000, 0x00000000, 0x00000000, 0x64BF0000).
    The thread 0x288c has exited with code 0 (0x0).
    Does anyone have any idea what CVcsException is? Googling it turns up almost nothing. How do I get rid of this problem?

    Read the article

  • Visual Studio hangs Windows

    - by Kronikarz
    I have: Windows XP Pro SP3 with the latest updates, drivers, .NET, etc.; a Pentium 4 2.8 GHz; 2 GB RAM; a 150 GB HD; and an ATI Radeon HD 3400. Recently (starting about a week ago), Visual C++ (both 2005 Pro and 2008 Express) started hanging my computer. Whenever either is running, the computer freezes after 5-20 minutes of work. Everything becomes unresponsive, including the mouse cursor; no combination of keys does anything. What's strange is that Winamp/Firefox keeps playing whatever it was playing at the time (internet radio, an mp3 playlist, etc.). The only thing I can do is a hard reboot. I've run CCleaner and a full AVG antivirus scan, both of which found nothing suspicious. Does anyone know of a solution to this problem?

    Read the article

  • Where to set Visual Studio 2013 property macros

    - by marcp
    I'm a new VS user. I've received some sample C++ projects that work with a third-party API. They were saved in VS2012 format, but I have VS2013. After conversion, I find that there is an API-specific macro defined in the project properties under "Linker | General | Additional Library Directories". If I click 'edit' I can replace the macro with an actual path, but how do I establish what the macro points to? In other words, how does one create a macro usable across multiple projects?

    Read the article

  • Decrease the height of title bar in Visual Studio 2012 on secondary screen

    - by matcheek
    I use VS2012 across two screens. There is no problem with the title bar on the main screen; on the secondary screen, however, the title bar takes up a lot of space - see the attached screenshot. In VS2010, for example, the title bar on the secondary screen is much thinner. I guess this change was made to accommodate touch interfaces, but it is highly inconvenient to waste so much space just because of that. Does anybody know how to change just the height of the title bar on the secondary screen?

    Read the article

  • How can I set the BIOS/EFI security password on IBM System x servers by script/ASU?

    - by christian123
    I want to deploy IBM System x servers (like the IBM System x 3550 M2) automatically and need to set a security password in the BIOS (actually it's UEFI). I found this nice tool named ASU: http://www-947.ibm.com/systems/support/supportsite.wss/docdisplay?brandind=5000008&lndocid=MIGR-55021 Unfortunately, I cannot see an option to set the password. Forum searches only show me people who want to reset the password using this tool. Does anybody know how to automatically deploy system passwords on IBM Intel-based servers?

    Read the article

  • IBM MQ corrupted messages

    - by Anand
    Hi, I posted the question below in the forum and now I am asking another question in the hope of getting some pointers (see my previous post). OK, let's begin. The problem is like this:
    OS: Linux. 1. I post messages to IBM MQ. 2. Some random messages in the queue get corrupted, as described in the previous Stack Overflow question.
    OS: Windows. 1. I post messages to IBM MQ. 2. Some random messages in the queue get corrupted, as described in the previous Stack Overflow question.
    OS: Windows. 1. I post messages to IBM MQ. 2. I then read the messages and write them to a file just to observe them. 3. I also allow the messages to pass through as-is after writing them to the file. In this case everything goes through fine.
    How can I resolve this problem?

    Read the article

  • NuGet package manager in Visual Studio 2012

    - by sreejukg
    NuGet is a package manager that helps developers automate the process of installing and upgrading packages in Visual Studio projects. It is free and open source; you can see the project on CodePlex at the link below. http://nuget.codeplex.com/
    These days developers need to work with several packages or libraries from various sources; a typical example is jQuery - you will hardly find a website that does not use it. When you include such packages by manually copying the files, it is a difficult task to update them as new versions are released. NuGet is a Visual Studio extension that manages such packages, and the happy news for developers is that it ships with Visual Studio 2012 by default. By using NuGet, you can add new packages to your project as well as update existing ones to their latest versions. In this article, I am going to demonstrate how you can add jQuery (or anything similar) to a .NET project using the NuGet package manager.
    I have Visual Studio 2012, and I created an empty ASP.NET web application; in the solution explorer, the project looks like the following. Now I need to add jQuery to this project, and for this I am going to use NuGet. From solution explorer, right-click the project and you will see the "Manage NuGet Packages" option. Click it and you will get the NuGet Package Manager dialog. Since there is no package installed in my project, you will see a "no packages installed" message. From the left menu, select the Online option, and in the search box (available in the top right corner) enter the name of the package you are looking for; in my case I just entered jQuery. The NuGet package manager will now search online and bring back all the available packages that match the search criteria. You can select the right package and use the Install button just next to the package details. The right pane also shows links to the project information and license terms, so you can see more details of the package you are looking at from the provided links.
    I have now selected to install jQuery. Once it is installed successfully, you will find a green icon next to it telling you that the package has been installed in your project. If you go to the Installed Packages link in the left menu of the package manager, you can see that jQuery is installed, and you can uninstall it by just clicking the Uninstall button. Now close the package manager dialog and examine the project in solution explorer. You can see some new entries in the project: a Scripts folder, where jQuery got installed, and a packages.config file. packages.config is an XML file that tells the NuGet package manager the id and the version of each package you install; based on this file, NuGet identifies the installed packages and their corresponding versions. Installing packages using the NuGet package manager saves developers a lot of time, and upgrades for the installed packages are very easy to get.

    Read the article

  • Big Data – Buzz Words: What is HDFS – Day 8 of 21

    - by Pinal Dave
    In yesterday's blog post we learned what MapReduce is. In this article we will take a quick look at one of the four most important buzz words that go around Big Data - HDFS.
    What is HDFS? HDFS stands for Hadoop Distributed File System and it is the primary storage system used by Hadoop. It provides high-performance access to data across Hadoop clusters. It is usually deployed on low-cost commodity hardware, and in commodity hardware deployments server failures are very common. For that reason HDFS is built to have high fault tolerance. The data transfer rate between compute nodes in HDFS is very high, which reduces the risk of failure. HDFS creates smaller pieces of the big data and distributes them across different nodes. It also copies each smaller piece multiple times onto different nodes. Hence, when any node holding the data crashes, the system is automatically able to use the data from a different node and continue the process. This is the key feature of the HDFS system.
    Architecture of HDFS: HDFS has a master/slave architecture. An HDFS cluster always consists of a single NameNode. This single NameNode is a master server; it manages the file system and regulates access to the various files. In addition to the NameNode there are multiple DataNodes - one DataNode for each data server. In HDFS a big file is split into one or more blocks and those blocks are stored in a set of DataNodes. The primary task of the NameNode is to open, close or rename files and directories and to regulate access to the file system, whereas the primary task of the DataNode is to read from and write to the file system. The DataNode is also responsible for the creation, deletion or replication of data based on instructions from the NameNode. In reality, the NameNode and DataNode are software designed to run on commodity machines, built in Java.
    Visual Representation of HDFS Architecture: Let us understand how HDFS works with the help of the diagram. The client app (HDFS client) connects to the NameNode as well as to the DataNodes; the client's access to a DataNode is regulated by the NameNode, which allows the client to connect to the DataNode directly. A big data file is divided into multiple data blocks (let us assume those data chunks are A, B, C and D). The client app will later write the data blocks directly to a DataNode. It does not have to write to all the nodes; it just has to write to any one of them, and the NameNode will decide on which other DataNodes the data has to be replicated. In our example the client app writes directly to DataNode 1 and DataNode 3, and the data chunks are automatically replicated to other nodes. All the information about which data block is placed on which DataNode is written back to the NameNode.
    High Availability During Disaster: Since multiple DataNodes hold the same data blocks, when any DataNode faces a disaster the entire process continues, as another DataNode assumes the role of serving the specific data blocks that were on the failed node. This system provides very high tolerance to disaster and high availability. If you notice, there is only a single NameNode in our architecture. If that node fails, our entire Hadoop application will stop performing, as it is the single node where we store all the metadata. Because this node is so critical, it is usually replicated on another cluster as well as on another data rack. Though that replicated node is not operational in the architecture, it has all the necessary data to perform the tasks of the NameNode in case the NameNode fails. The entire Hadoop architecture is built to function smoothly even when there are node failures or hardware malfunctions. It is built on the simple premise that the data is so big that it is impossible to come up with a single piece of hardware which can manage it properly. We need lots of commodity (cheap) hardware to manage our big data, and hardware failure is part of commodity servers. To reduce the impact of hardware failure, the Hadoop architecture is built to work around the limitation of non-functioning hardware.
    Tomorrow: In tomorrow's blog post we will discuss the importance of the relational database in Big Data. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
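    To make the client write path described above concrete, here is a minimal Java sketch (not from the original article) using the standard Hadoop FileSystem client; the NameNode address, file path and replication factor are illustrative values only.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    public class HdfsWriteExample {
        public static void main(String[] args) throws Exception {
            // Client-side configuration; fs.defaultFS points at the NameNode,
            // which tracks metadata and coordinates block placement.
            Configuration conf = new Configuration();
            conf.set("fs.defaultFS", "hdfs://namenode.example.com:8020"); // example address
            FileSystem fs = FileSystem.get(conf);
            // Create a file with a replication factor of 3: the client streams blocks
            // to one DataNode and HDFS replicates them to other DataNodes.
            Path file = new Path("/user/demo/bigdata.txt");
            short replication = 3;
            long blockSize = 128L * 1024 * 1024; // 128 MB blocks
            try (FSDataOutputStream out = fs.create(file, true, 4096, replication, blockSize)) {
                out.writeUTF("sample record");
            }
            // Only metadata (which blocks live on which DataNodes) goes to the NameNode;
            // the data itself travels between the client and the DataNodes.
            System.out.println("Replication of " + file + ": " + fs.getFileStatus(file).getReplication());
        }
    }
    The same calls work unchanged whether the cluster has three DataNodes or three hundred; the NameNode decides where the replicas land.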

    Read the article

  • Oracle Data Integration 12c: Perspectives of Industry Experts, Customers and Partners

    - by Irem Radzik
    As you may have seen from our recent blog posts on Oracle Data Integrator 12c and Oracle GoldenGate 12c, we are very excited to share with you the great new features the 12c release brings to Oracle's data integration solutions. And, fortunately, we are not alone in this sentiment. Since the press announcement on October 17th, which incorporates our customers' and experts' testimonials, we have seen positive comments in leading technology publications and social media as well. Here are some examples: In CIO and PCWorld you can find Joab Jackson's article, Oracle Data Integrator 12c ready for real-time analysis, where he wrote about the tight integration between Oracle Data Integrator and Oracle GoldenGate. He noted, "Heeding the call from enterprise customers who clamor for more immediacy in their data-driven reports, Oracle has updated its data-integration software portfolio so that it can more rapidly deliver data to data warehouses and analysis applications." Integration Developer News' Vance McCarthy wrote the article Oracle Ships 'Future Proofs' Integration Tools for Traditional, Cloud, Big Data, Real-Time Projects and mentioned that "Oracle Data Integrator 12c and Oracle GoldenGate 12c sport a wide range of improvements to let devs more easily deliver data integration for cloud, analytics, big data and other new projects that leverage multiple datasets for business." InformationWeek's Doug Henschen gave a great overview of several key features, including the new flow-based UI in Oracle Data Integrator. Doug said, "Oracle Data Integrator 12c introduces a complete makeover of the job-building experience, while real-time oriented GoldenGate 12c introduces performance gains." Database Trends and Applications' article Oracle Strengthens Data Integration with Release of Oracle Data Integrator 12c and Oracle GoldenGate 12c highlighted the productivity aspect of the new solution: "tight integration between Oracle Data Integrator 12c and Oracle GoldenGate 12c enables developers to leverage Oracle GoldenGate's low overhead, real-time change data capture completely within the Oracle Data Integrator Studio without additional training." We are also thrilled about what our customers and partners have to say about our products and the new release, and we are equally excited to share those perspectives with you in our upcoming launch video webcast on November 12th. SolarWorld Industries America's Senior Database Manager, Russ Toyama, will join our executives in our studio in Redwood Shores to discuss GoldenGate's core benefits and the new release, while Surren Partharb, CTO of Strategic Technology Services for BT, and Mark Rittman, CTO of Rittman Mead, will provide their comments via interviews conducted in the UK. This interactive panel discussion in the video webcast will unveil the new release with the expertise of our development executives and great insight from our customers and partners. In addition, our product experts will be available online to answer chat questions. This is really a great opportunity to learn how Oracle's data integration offering has changed the integration and replication technology space with the new release and established itself as the new leader. If you have not registered for this free event yet, you can do so via this link. We will run the live event at 8am PT/4pm GMT, followed by a replay of the event with live chat for Q&A at 10am PT/6pm GMT. The replay will be available on-demand for those who register but cannot attend either session on November 12th.

    Read the article

  • GUI to include a .props file in a VS 2010 project?

    - by jwfearn
    Visual Studio 2010 no longer uses .vsprops files and instead uses .props files. To include a .vsprops file in a Visual Studio 2008 project, one could right-click the project icon in the Solution Explorer panel, choose Properties, go to the Configuration Properties | General section, and modify the Inherited Project Property Sheets property to contain a list of .vsprops paths. One could also modify the Visual Studio 2008 project file directly. Is there a way in the Visual Studio 2010 GUI to include .props files in a project? The Inherited Project Property Sheets property seems to have been removed. If manual editing of the project file is the only way to include .props files, where can one find documentation on doing it? I'm not talking about adding a .props file to the list of files in the project; I mean, how do I tell the project to use a .props file?

    Read the article

  • How can I make VS2010 behave like VS2008 w/r/t indentation?

    - by Portman
    Situation: I have a plain text file where indentation is important.
    line 1
      line 1.1 (indented two spaces)
      line 1.2 (indented two spaces)
        line 1.2.3 (indented four spaces)
    In Visual Studio 2008, when I pressed Enter, the next line would also be indented four spaces. However, in Visual Studio 2010, when I press Enter, the next line is indented one tab. Question: does anybody know where, in the mountain of preferences under Tools | Options, I can return to the way that Visual Studio 2008 worked? Under Options | Text Editor | Plain Text | Tabs, I see the following: if I select "None", I get no indentation when I move to the next line; if I select "Block", I get TAB indentation (even though the previous line uses spaces). In Visual Studio 2008, my indentation is set to "Block", and I get spaces. I have no idea what "Smart" indenting is, or why it is disabled.

    Read the article

  • GUI to add a .props file to a VS 2010 project?

    - by jwfearn
    Visual Studio 2010 no longer uses .vsprops files and instead uses .props files. To add a .vsprops file to a Visual Studio 2008 project, one could right-click the project icon in the Solution Explorer panel, choose Properties, go to the Configuration Properties | General section, and modify the Inherited Project Property Sheets property to contain a list of .vsprops paths. One could also modify the Visual Studio 2008 project file directly. Is there a way in the Visual Studio 2010 GUI to add .props files to a project? The Inherited Project Property Sheets property seems to have been removed. If manual editing of the project file is the only way to include .props files, where can one find documentation on doing it?

    Read the article

  • Pinning Projects and Solutions with Visual Studio 2010

    - by ScottGu
    This is the twenty-fourth in a series of blog posts I'm doing on the VS 2010 and .NET 4 release. Today's blog post covers a very small, but still useful, feature of VS 2010 - the ability to "pin" projects and solutions to both the Windows 7 taskbar and the VS 2010 Start Page. This makes it easier to quickly find and open projects in the IDE. [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu]
    VS 2010 Jump List on Windows 7 Taskbar: Windows 7 added support for customizing the taskbar at the bottom of your screen. You can "pin" and re-arrange your application icons on it however you want. Most developers using Visual Studio 2010 on Windows 7 probably already know that they can "pin" the Visual Studio icon to the Windows 7 taskbar - making it always present. What you might not yet have discovered, though, is that Visual Studio 2010 also exposes a taskbar "jump list" that you can use to quickly find and load your most recently used projects as well. To activate this, simply right-click on the VS 2010 icon in the taskbar and you'll see a list of your most recent projects. Clicking one will load it within Visual Studio 2010.
    Pinning Projects on the VS 2010 Jump List with Windows 7: One nice feature also supported by VS 2010 is the ability to optionally "pin" projects to the jump list as well - which makes them always listed at the top. To enable this, simply hover over the project you want to pin and then click the "pin" icon that appears to the right of it. When you click the pin, the project will be added to a new "Pinned" list at the top of the jump list. This enables you to always display your own list of projects at the top of the list. You can optionally click and drag them to display in any order you want.
    VS 2010 Start Page and Project Pinning: VS 2010 has a new "start page" that displays by default each time you launch a new instance of Visual Studio. In addition to displaying learning and help resources, it also includes a "Recent Projects" section that you can use to quickly load previous projects that you have recently worked on. The "Recent Projects" section of the start page also supports the concept of "pinning" a link to projects you want to always keep in the list - regardless of how recently they've been accessed. To "pin" a project to the list you simply select the "pin" icon that appears when you hover over an item within the list. Once you've pinned a project to the start page list it will always show up in it (at least until you "unpin" it).
    Summary: This project pinning support is a small but nice usability improvement in VS 2010 and can make it easier to quickly find and load projects/solutions. If you work with a lot of projects at the same time it offers a nice shortcut to load them. Hope this helps, Scott

    Read the article

  • Communicating via Command Mode with IBM HS22 IMM via AMM

    - by MikeyB
    On previous model blades that contained a BMC, I was able to communicate from our external management station via pass-through commands to the BMC to do things such as power blades on/off, set VPD parameters, reboot the BMC, etc. Now on the HS22, a bunch of things happen differently. For example, we can no longer use the same pass-through commands to write VPD information pages and have them persist across reboots of the IMM - it looks as though those VPD pages are populated from information contained in the IMM. How do we use the Advanced Settings Utility from an external host to communicate with HS22 IMMs? Alternatively, what TCP Command Mode commands do we need to send to the AMM to communicate with the IMM? For our purposes, we specifically cannot communicate with the IMM from the blade itself. Specific example: when I send a pass-through IPMI command via the AMM to the blade BMC to write information (such as MTM and serial) into VPD page 0x10, it persists on blades with a BMC (HS21, for example). I can send the same IPMI command to write data to the VPD page on the HS22, but it does not persist across reboots of the IMM. What IPMI commands do I need to send to the IMM? What IPMI commands is asu sending when it sets the MTM and serial?

    Read the article

  • Upgrading memory on an IBM Power 710 Express (8231-E2B)

    - by cairnz
    We have a Power 710 Express server that was loaded with 4x4 GB of memory on a single riser card. I have replaced the four chips with 4x8 GB, put in another riser card, and loaded it with 4x8 GB more, for a total of 64 GB of memory. The firmware is AL730_078. When I power it on, the service processor boots up and I can access the ASMI. From here I can look at "Memory Serial Presence Data" and see that the system in some way detects 8x8 GB. However, when I look at Hardware Deconfiguration, and specifically Memory Deconfiguration, it is still listed with the old values, 16384 MB, and claims there are 4x4 chips in the C17 riser. How do I proceed to make the server properly recognize the amount of memory installed? I get FSPSP04 and B181B50F progress codes on booting because (I think) it hasn't been told the memory has changed. It then does not proceed to booting the operating system (VIOS) when turned on. Are there any steps I have overlooked here? Are there any commands I can run, either on the service processor or otherwise, to tell the system to configure the proper amount of memory? PS: This is a stand-alone server, not configured with an HMC or SDMC.

    Read the article

  • New Big Data Appliance Security Features

    - by mgubar
    The Oracle Big Data Appliance (BDA) is an engineered system for big data processing. It greatly simplifies the deployment of an optimized Hadoop cluster - whether that cluster is used for batch or real-time processing. The vast majority of BDA customers are integrating the appliance with their Oracle Databases and they have certain expectations - especially around security. Oracle Database customers have benefited from a rich set of security features: encryption, redaction, data masking, database firewall, label based access control - and much, much more. They want similar capabilities for their Hadoop cluster. Unfortunately, Hadoop wasn't developed with security in mind. By default, a Hadoop cluster is insecure - the antithesis of an Oracle Database. Some critical security features have been implemented - but even those capabilities are arduous to set up and configure. Oracle believes that a key element of an optimized appliance is that its data should be secure. Therefore, by default the BDA delivers the "AAA of security": authentication, authorization and auditing.
    Security Starts at Authentication: A successful security strategy is predicated on strong authentication - for both users and software services. Consider the default configuration for a newly installed Oracle Database; it's been a long time since you had a legitimate chance at accessing the database using the credentials "system/manager" or "scott/tiger". The default Oracle Database policy is to lock accounts, thereby restricting access; administrators must consciously grant access to users.
    Default Authentication in Hadoop: By default, a Hadoop cluster fails the authentication test. For example, it is easy for a malicious user to masquerade as any other user on the system. Consider the following scenario that illustrates how a user can access any data on a Hadoop cluster by masquerading as a more privileged user. In our scenario, the Hadoop cluster contains sensitive salary information in the file /user/hrdata/salaries.txt. When logged in as the hr user, you can see the following files. Notice, we're using the Hadoop command line utilities for accessing the data:
    $ hadoop fs -ls /user/hrdata
    Found 1 items
    -rw-r--r--   1 oracle supergroup         70 2013-10-31 10:38 /user/hrdata/salaries.txt
    $ hadoop fs -cat /user/hrdata/salaries.txt
    Tom Brady,11000000
    Tom Hanks,5000000
    Bob Smith,250000
    Oprah,300000000
    User DrEvil has access to the cluster - and can see that there is an interesting folder called "hrdata".
    $ hadoop fs -ls /user
    Found 1 items
    drwx------   - hr supergroup          0 2013-10-31 10:38 /user/hrdata
    However, DrEvil cannot view the contents of the folder due to lack of access privileges:
    $ hadoop fs -ls /user/hrdata
    ls: Permission denied: user=drevil, access=READ_EXECUTE, inode="/user/hrdata":oracle:supergroup:drwx------
    Accessing this data will not be a problem for DrEvil. He knows that the hr user owns the data by looking at the folder's ACLs. To overcome this challenge, he will simply masquerade as the hr user. On his local machine, he adds the hr user, assigns that user a password, and then accesses the data on the Hadoop cluster:
    $ sudo useradd hr
    $ sudo passwd
    $ su hr
    $ hadoop fs -cat /user/hrdata/salaries.txt
    Tom Brady,11000000
    Tom Hanks,5000000
    Bob Smith,250000
    Oprah,300000000
    Hadoop has not authenticated the user; it trusts that the identity that has been presented is indeed the hr user. Therefore, sensitive data has been easily compromised. Clearly, the default security policy is inappropriate and dangerous to many organizations storing critical data in HDFS.
    Big Data Appliance Provides Secure Authentication: The BDA provides secure authentication to the Hadoop cluster by default - preventing the type of masquerading described above. It accomplishes this through Kerberos integration (Figure 1: Kerberos Integration). The Key Distribution Center (KDC) is a server that has two components: an authentication server and a ticket granting service. The authentication server validates the identity of the user and service. Once authenticated, a client must request a ticket from the ticket granting service - allowing it to access the BDA's NameNode, JobTracker, etc. At installation, you simply point the BDA to an external KDC or automatically install a highly available KDC on the BDA itself. Kerberos will then provide strong authentication for not just the end user - but also for the important Hadoop services running on the appliance. You can now guarantee that users are who they claim to be - and rogue services (like fake data nodes) are not added to the system. It is common for organizations to want to leverage existing LDAP servers for common user and group management. Kerberos integrates with LDAP servers - allowing the principals and encryption keys to be stored in the common repository. This simplifies the deployment and administration of the secure environment.
    Authorize Access to Sensitive Data: Kerberos-based authentication ensures secure access to the system and the establishment of a trusted identity - a prerequisite for any authorization scheme. Once this identity is established, you need to authorize access to the data. HDFS authorizes access to files using ACLs, with the authorization specification applied using classic Linux-style commands like chmod and chown (e.g. hadoop fs -chown oracle:oracle /user/hrdata changes the ownership of the /user/hrdata folder to oracle). Authorization is applied at the user or group level - utilizing group membership found in the Linux environment (i.e. /etc/group) or in the LDAP server. For SQL-based data stores - like Hive and Impala - finer-grained access control is required. Access to databases, tables, columns, etc. must be controlled. And, you want to leverage roles to facilitate administration. Apache Sentry is a new project that delivers fine-grained access control; both Cloudera and Oracle are the project's founding members. Sentry satisfies the following three authorization requirements: Secure Authorization - the ability to control access to data and/or privileges on data for authenticated users; Fine-Grained Authorization - the ability to give users access to a subset of the data (e.g. a column) in a database; Role-Based Authorization - the ability to create/apply template-based privileges based on functional roles. With Sentry, "all", "select" or "insert" privileges are granted on an object. The descendants of that object automatically inherit that privilege. A collection of privileges across many objects may be aggregated into a role - and users/groups are then assigned that role. This leads to simplified administration of security across the system (Figure 2: Object Hierarchy - a privilege granted on the database object is inherited by its tables and views). Sentry is currently used by both Hive and Impala - but it is a framework that other data sources can leverage when offering fine-grained authorization. For example, one can expect Sentry to deliver authorization capabilities to Cloudera Search in the near future.
    Audit Hadoop Cluster Activity: Auditing is a critical component of a secure system and is oftentimes required for SOX, PCI and other regulations. The BDA integrates with Oracle Audit Vault and Database Firewall - tracking different types of activity taking place on the cluster (Figure 3: Monitored Hadoop services). At the lowest level, every operation that accesses data in HDFS is captured. The HDFS audit log identifies the user who accessed the file, the time that file was accessed, the type of access (read, write, delete, list, etc.) and whether or not that file access was successful. The other auditing features include: MapReduce - correlate the MapReduce job that accessed the file; Oozie - describe who ran what as part of a workflow; Hive - capture changes made to the Hive metadata. The audit data is captured in the Audit Vault Server - which integrates audit activity from a variety of sources, adding databases (Oracle, DB2, SQL Server) and operating systems to activity from the BDA (Figure 4: Consolidated audit data across the enterprise). Once the data is in the Audit Vault Server, you can leverage a rich set of prebuilt and custom reports to monitor all the activity in the enterprise. In addition, alerts may be defined to trigger on violations of audit policies.
    Conclusion: Security cannot be considered an afterthought in big data deployments. Across most organizations, Hadoop is managing sensitive data that must be protected; it is not simply crunching publicly available information used for search applications. The BDA provides a strong security foundation - ensuring users are only allowed to view authorized data and that data access is audited in a consolidated framework.
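    As a rough illustration (not from the original post) of what the Kerberos-authenticated path looks like from a Java HDFS client, here is a minimal sketch; the principal name, keytab path and NameNode address are hypothetical. With Kerberos enabled, a user like drevil who cannot obtain a ticket for hr is rejected, so the masquerading shown earlier no longer works.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.security.UserGroupInformation;
    public class SecureHdfsRead {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            conf.set("fs.defaultFS", "hdfs://bda-namenode.example.com:8020"); // example address
            conf.set("hadoop.security.authentication", "kerberos");          // require Kerberos
            // Prove the client's identity to the KDC before talking to HDFS.
            UserGroupInformation.setConfiguration(conf);
            UserGroupInformation.loginUserFromKeytab(
                    "hr@EXAMPLE.COM",                  // hypothetical principal
                    "/etc/security/keytabs/hr.keytab"  // hypothetical keytab path
            );
            // With a valid ticket, normal HDFS calls succeed; without one they are refused.
            try (FileSystem fs = FileSystem.get(conf)) {
                fs.open(new Path("/user/hrdata/salaries.txt")).close();
                System.out.println("Authenticated read succeeded");
            }
        }
    }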

    Read the article

  • How to present a stable data model in a public API that allows internal data structures to be changed without breaking the public view of the data?

    - by Max Palmer
    I am in the process of developing an application that allows users to write C# scripts. These scripts allow users to call selected methods and to access and manipulate data in a document. This works well; however, in the development version, scripts access the document's (internal) data structures directly. This means that if we were to change the internal data model/structure, there is a good chance that someone's script will no longer compile. We obviously want to prevent this breaking change from happening, but still want to allow the user to write sensible C# code (whilst not restricting how we develop our internal data model as a result). We therefore need to decouple our scripting API and its data structures from our internal methods and data structures. We've a few ideas as to how we might allow the user to access what is effectively a stable public version of the document's internal data*, but I wanted to throw the question out there to someone who might have some real experience of this problem. NB our internal document's data structure is quite complex and could be quite difficult to wrap. We know we want to expose as little as possible in our public API, especially as once it's out there, it's out there for good. Can anyone help? How do scripting languages/APIs decouple their public API and data structures from their internal data structures? Is there no real alternative to having to write a complex interaction layer? If we need to do this, what's a good approach or pattern for wrapping complex data structures that include nested objects, including collections? I've looked at the API facade pattern, which looks like it's trying to address these kinds of issues, but are there alternatives? *One idea is to build a data facade that is kept stable across versions of our application. The facade exposes a set of facade data objects that are used in the script code. These maintain backwards compatibility and wrap access to our internal document's data model.
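    Not part of the original question, but a minimal sketch of the footnoted facade idea (written in Java to keep a single language for the examples on this page; the same shape carries over to C#): user scripts compile only against a small, stable interface layer, and an adapter maps that layer onto whatever the internal model currently looks like.
    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;
    // Internal model: free to change between releases.
    class InternalParagraph {
        String rawText;
        int internalStyleId; // implementation detail that scripts should never see
        InternalParagraph(String rawText, int internalStyleId) {
            this.rawText = rawText;
            this.internalStyleId = internalStyleId;
        }
    }
    class InternalDocument {
        final List<InternalParagraph> paragraphs = new ArrayList<>();
    }
    // Public, versioned facade: the only types scripts are allowed to reference.
    interface ScriptParagraph {
        String getText();
        void setText(String text);
    }
    interface ScriptDocument {
        List<ScriptParagraph> getParagraphs();
    }
    // Adapter that maps the stable API onto today's internals.
    class DocumentFacade implements ScriptDocument {
        private final InternalDocument doc;
        DocumentFacade(InternalDocument doc) { this.doc = doc; }
        @Override
        public List<ScriptParagraph> getParagraphs() {
            List<ScriptParagraph> wrapped = new ArrayList<>();
            for (InternalParagraph p : doc.paragraphs) {
                final InternalParagraph para = p; // capture for the anonymous adapter
                wrapped.add(new ScriptParagraph() {
                    @Override public String getText() { return para.rawText; }
                    @Override public void setText(String text) { para.rawText = text; }
                });
            }
            return Collections.unmodifiableList(wrapped);
        }
    }
    If the internal model is later restructured, only DocumentFacade changes; scripts written against ScriptDocument and ScriptParagraph keep compiling.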

    Read the article
