Search Results

Search found 65101 results on 2605 pages for 'big data'.

  • Data structure for file search

    - by poly
    I've asked this question before and got a few answers/ideas, but I'm not sure how to implement them. I'm building a telecom messaging solution. Currently I'm using a database to save my transactions/messages for the network stack I've built, and as you know that's slower than using an in-memory data structure (hash, linked list, etc.). My problem is that the data can be really huge, and it won't fit in memory. I was thinking of saving the records in a file and keeping a key and line number in a hash; then, if I want to access some record, I can get the line number from the hash and read the record from the file. I don't know how efficient this is; I think the database is doing a way better job than this on my behalf. Please share whatever you have in mind.
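    A minimal sketch of the idea described above, with one assumption changed: the index maps a key to the record's byte offset rather than a line number, since an offset can be seeked to directly. The class and member names are illustrative, not from the original question.

      using System;
      using System.Collections.Generic;
      using System.IO;
      using System.Text;

      // Hypothetical append-only record store: record bodies live on disk, and only the
      // key -> byte-offset index has to fit in memory.
      class OffsetIndexedStore : IDisposable
      {
          private readonly Dictionary<string, long> _index = new Dictionary<string, long>();
          private readonly FileStream _file;

          public OffsetIndexedStore(string path)
          {
              _file = new FileStream(path, FileMode.OpenOrCreate, FileAccess.ReadWrite);
          }

          public void Append(string key, string record)   // assumes a record is a single line
          {
              _file.Seek(0, SeekOrigin.End);
              _index[key] = _file.Position;               // remember where this record starts
              byte[] line = Encoding.UTF8.GetBytes(record + "\n");
              _file.Write(line, 0, line.Length);
          }

          public string Read(string key)
          {
              long offset = _index[key];                  // O(1) lookup instead of scanning the file
              _file.Seek(offset, SeekOrigin.Begin);
              using (var reader = new StreamReader(_file, Encoding.UTF8, false, 4096, leaveOpen: true))
                  return reader.ReadLine();
          }

          public void Dispose() => _file.Dispose();
      }

    The trade-off, as the poster suspects, is that a database already provides this plus crash safety, caching and concurrent access; a hand-rolled offset index only wins when the access pattern is this simple.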

    Read the article

  • Scrambling Sensitive Data in E-Business Suite Release 12 Cloned Environments

    - by Elke Phelps (Oracle Development)
    Securing the Oracle E-Business Suite includes protecting the underlying E-Business data in production and non-production databases. While steps can be taken to provide a secure configuration to limit EBS access, a better approach to protecting non-production data is simply to scramble (mask) the data in the non-production copy. You can use the Oracle Data Masking Pack with Oracle Enterprise Manager today to scramble sensitive data in cloned environments. Due to data dependencies, scrambling E-Business Suite data is not a trivial task. The data needs to be scrubbed in such a way that allows the application to continue to function. Using the Data Masking Pack in E-Business Suite environments is now easier with the release of a new set of templates for E-Business Suite databases: Oracle E-Business Suite Release 12.1.3 Template for Data Masking Pack (Patch 13898999). This template works with the Oracle Data Masking Pack and Oracle Enterprise Manager to obscure sensitive E-Business Suite information that is copied from production to non-production environments.
    Is there a charge for this? Yes. You must purchase licenses for Oracle Enterprise Manager and the Oracle Data Masking Pack plug-in. The Oracle E-Business Suite 12.1.3 Template for the Data Masking Pack is included with the Oracle Data Masking Pack license. You can contact your Oracle account manager for more details about licensing.
    What does data masking do in E-Business Suite environments? Application data masking does the following:
      - De-identify the data: scramble identifiers of individuals, also known as personally identifiable information or PII. Examples include information such as name, account, address, location, and driver's license number.
      - Mask sensitive data: mask data that, if associated with personally identifiable information (PII), would cause privacy concerns. Examples include compensation, health and employment information.
      - Maintain data validity: provide a fully functional application.
    How can EBS customers use data masking? The Oracle E-Business Suite Template for Data Masking Pack can be used in situations where confidential or regulated data needs to be shared with other non-production users who need access to some of the original data, but not necessarily every table. Examples of non-production users include internal application developers or external business partners such as offshore testing companies, suppliers or customers. The Oracle E-Business Suite Template for Data Masking Pack is applied to a non-production environment with the Enterprise Manager Grid Control Data Masking Pack. When applied, the Oracle E-Business Suite Template for Data Masking Pack will create an irreversibly scrambled version of your production database for development and testing.
    References: for additional information on the Oracle E-Business Suite Template for Data Masking Pack please refer to the following:
      - Masking Sensitive Data for Non-production Use, in the Oracle Enterprise Manager Concepts 11g
      - Using the Oracle E-Business Suite, Release 12.1.3 Template for the Data Masking Pack, Note 1437485.1
    Related Articles: Webcast Replay Available: E-Business Suite Data Protection; Oracle E-Business Suite Plug-in 4.0 Released for OEM 11g (11.1.0.1)

    Read the article

  • E-Business Suite 12.1.3 Data Masking Certified with Enterprise Manager 12c

    - by Elke Phelps (Oracle Development)
    Following up on our prior announcement for EM 11g, we're pleased to announce the certification of the E-Business Suite 12.1.3 Data Masking Template for the Data Masking Pack with Enterprise Manager Cloud Control 12c. You can use the Oracle Data Masking Pack with Oracle Enterprise Manager Grid Control 12c to scramble sensitive data in cloned E-Business Suite environments. Due to data dependencies, scrambling E-Business Suite data is not a trivial task. The data needs to be scrubbed in such a way that allows the application to continue to function. You may scramble data in E-Business Suite cloned environments with EM12c using the following template: E-Business Suite 12.1.3 Data Masking Template for Data Masking Pack with EM12c (Patch 14407414).
    What does data masking do in E-Business Suite environments? Application data masking does the following:
      - De-identify the data: scramble identifiers of individuals, also known as personally identifiable information or PII. Examples include information such as name, account, address, location, and driver's license number.
      - Mask sensitive data: mask data that, if associated with personally identifiable information (PII), would cause privacy concerns. Examples include compensation, health and employment information.
      - Maintain data validity: provide a fully functional application.
    How can EBS customers use data masking? The Oracle E-Business Suite Template for Data Masking Pack can be used in situations where confidential or regulated data needs to be shared with other non-production users who need access to some of the original data, but not necessarily every table. Examples of non-production users include internal application developers or external business partners such as offshore testing companies, suppliers or customers. The template works with the Oracle Data Masking Pack and Oracle Enterprise Manager to obscure sensitive E-Business Suite information that is copied from production to non-production environments. The Oracle E-Business Suite Template for Data Masking Pack is applied to a non-production environment with the Enterprise Manager Grid Control Data Masking Pack. When applied, the Oracle E-Business Suite Template for Data Masking Pack will create an irreversibly scrambled version of your production database for development and testing.
    What's new with EM 12c? Some of the execution steps may also be performed with the EM Command Line Interface (EM CLI). Support for EM CLI is a new feature with the E-Business Suite Release 12.1.3 template for EM 12c.
    Is there a charge for this? Yes. You must purchase licenses for the Oracle Data Masking Pack plug-in. The Oracle E-Business Suite 12.1.3 Template for the Data Masking Pack is included with the Oracle Data Masking Pack license. You can contact your Oracle account manager for more details about licensing.
    References: additional details and requirements are provided in the following:
      - Using Oracle E-Business Suite Release 12.1.3 Template for the Data Masking Pack with Oracle Enterprise Manager 12.1.0.2 Data Masking Tool (My Oracle Support Note 1481916.1)
      - Masking Sensitive Data, in the Oracle Database Real Application Testing User's Guide 11g Release 2 (11.2)
    Related Articles: Scrambling Sensitive Data in E-Business Suite

    Read the article

  • Patterns for a tree of persistent data with multiple storage options?

    - by Robin Winslow
    I have a real-world problem which I'll try to abstract into an illustrative example. So imagine I have data objects in a tree, where parent objects can access children, and children can access parents:
      // Interfaces
      interface IParent<TChild> { List<TChild> Children { get; set; } }
      interface IChild<TParent> { TParent Parent { get; set; } }
      // Classes
      class Top : IParent<Middle> {}
      class Middle : IParent<Bottom>, IChild<Top> {}
      class Bottom : IChild<Middle> {}
      // Usage
      var top = new Top();
      var middles = top.Children; // List<Middle>
      foreach (var middle in middles)
      {
          var bottoms = middle.Children; // List<Bottom>
          foreach (var bottom in bottoms)
          {
              var parent = bottom.Parent;       // Access the parent
              var grandparent = parent.Parent;  // Access the grandparent
          }
      }
    All three data objects have properties that are persisted in two data stores (e.g. a database and a web service), and they need to reflect and synchronise with the stores. Some objects only request from the web service, some only write to it.
    Data Mapper: my favourite pattern for data access is Data Mapper, because it completely separates the data objects themselves from the communication with the data store:
      class TopMapper
      {
          public Top FetchById(int id)
          {
              var top = new Top(DataStore.TopDataById(id));
              top.Children = MiddleMapper.FetchForTop(top);
              return top;
          }
      }
      class MiddleMapper
      {
          public Middle FetchById(int id)
          {
              var middle = new Middle(DataStore.MiddleDataById(id));
              middle.Parent = TopMapper.FetchForMiddle(middle);
              middle.Children = BottomMapper.FetchForMiddle(middle);
              return middle;
          }
      }
    This way I can have one mapper per data store, build the object from the mapper I want, and then save it back using the mapper I want. There is a circular reference here, but I guess that's not a problem because most languages can just store memory references to the objects, so there won't actually be infinite data. The problem with this is that every time I want to construct a new Top, Middle or Bottom, it needs to build the entire object tree within that object's Parent or Children property, with all the data store requests and memory usage that that entails. And in real life my tree is much bigger than the one represented here, so that's a problem.
    Requests in the object: in this variant the objects request their Parents and Children themselves:
      class Middle
      {
          private List<Bottom> _children = null; // cache
          public List<Bottom> Children
          {
              get
              {
                  _children = _children ?? BottomMapper.FetchForMiddle(this);
                  return _children;
              }
              set
              {
                  BottomMapper.UpdateForMiddle(this, value);
                  _children = value;
              }
          }
      }
    I think this is an example of the repository pattern. Is that correct? This solution seems neat: the data only gets requested from the data store when you need it, and thereafter it's stored in the object if you want to request it again, avoiding a further request. However, I have two different data sources. There's a database, but there's also a web service, and I need to be able to create an object from the web service, save it back to the database, then request it again from the database and update the web service. This also makes me uneasy because the data objects themselves are no longer ignorant of the data source. We've introduced a new dependency, not to mention a circular dependency, making it harder to test. And the objects now mask their communication with the database.
    Other solutions: are there any other solutions which could take care of the multiple stores problem but also mean that I don't need to build / request all the data every time? (One possible direction is sketched below.)
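    One hedged sketch of a possible direction for the multiple-stores concern above: keep the lazy-loading property, but have the object talk to an injected mapper interface rather than a concrete static mapper, so the database-backed and web-service-backed implementations stay swappable. The interface and class names here are illustrative assumptions, not part of the original question.

      using System.Collections.Generic;

      class Bottom { }                                        // stand-in for the question's Bottom type

      // Hypothetical abstraction: one interface per relationship, one implementation per store.
      interface IBottomMapper
      {
          List<Bottom> FetchForMiddle(Middle middle);
          void UpdateForMiddle(Middle middle, List<Bottom> children);
      }

      class DatabaseBottomMapper : IBottomMapper              // backed by the database
      {
          public List<Bottom> FetchForMiddle(Middle middle) { return new List<Bottom>(); /* SELECT ... */ }
          public void UpdateForMiddle(Middle middle, List<Bottom> children) { /* UPDATE ... */ }
      }

      class WebServiceBottomMapper : IBottomMapper             // backed by the web service
      {
          public List<Bottom> FetchForMiddle(Middle middle) { return new List<Bottom>(); /* GET ... */ }
          public void UpdateForMiddle(Middle middle, List<Bottom> children) { /* PUT ... */ }
      }

      class Middle
      {
          private readonly IBottomMapper _mapper;              // injected, so Middle never names a store
          private List<Bottom> _children;                      // lazy cache, as in the question

          public Middle(IBottomMapper mapper) { _mapper = mapper; }

          public List<Bottom> Children
          {
              get { return _children ?? (_children = _mapper.FetchForMiddle(this)); }
              set { _mapper.UpdateForMiddle(this, value); _children = value; }
          }
      }

    The object still knows it has a mapper, so the dependency the poster worries about does not disappear, but it becomes an interface that can be faked in tests and swapped per data store.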

    Read the article

  • C# - Data Clustering approach

    - by Brett
    Hi all, I am writing a program in C# in which I have a set of 200 points displayed on an image. However, the points tend to cluster in various regions, and I am looking for a way to detect those clusters; in other words, maybe draw a circle/ellipse around the clustered points. Has anyone seen a way to do this? I have heard about k-means clustering, but I am not sure how to implement it in C#. Any favorite implementations out there? Cheers, Brett
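    A minimal k-means sketch, under the assumptions that the 200 points are available as (x, y) pairs, that k is chosen up front, and that a fixed number of iterations is good enough instead of a convergence test. It illustrates the standard algorithm rather than any particular library.

      using System;
      using System.Collections.Generic;
      using System.Linq;

      static class KMeans
      {
          // Returns, for each point, the index of the cluster it was assigned to.
          public static int[] Cluster(IList<(double X, double Y)> points, int k, int iterations = 50)
          {
              var rnd = new Random(0);
              // Start with k randomly chosen points as the initial centroids.
              var centroids = points.OrderBy(_ => rnd.Next()).Take(k).ToArray();
              var assignment = new int[points.Count];

              for (int iter = 0; iter < iterations; iter++)
              {
                  // Assignment step: each point joins its nearest centroid.
                  for (int i = 0; i < points.Count; i++)
                  {
                      assignment[i] = Enumerable.Range(0, k)
                          .OrderBy(c => Dist2(points[i], centroids[c]))
                          .First();
                  }
                  // Update step: each centroid moves to the mean of its members.
                  for (int c = 0; c < k; c++)
                  {
                      var members = points.Where((p, i) => assignment[i] == c).ToList();
                      if (members.Count > 0)
                          centroids[c] = (members.Average(p => p.X), members.Average(p => p.Y));
                  }
              }
              return assignment;
          }

          private static double Dist2((double X, double Y) a, (double X, double Y) b)
              => (a.X - b.X) * (a.X - b.X) + (a.Y - b.Y) * (a.Y - b.Y);
      }

    Once the points are grouped, an enclosing circle per cluster can be as simple as the centroid plus the distance to the farthest member; fitting ellipses would need something covariance-based, which is beyond this sketch.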

    Read the article

  • How to show or direct a business analyst to a data modelling subject?

    - by AaronLS
    Our business analysts pushed hard to collect data through a spreadsheet. I am the programmer responsible for importing that data. Usually when they push hard for something like this, I never know how well it will work out until a few weeks later, when I have time assigned to work on the task of programming the import of the data. I have tried to do as much as possible along the way (named ranges, data validations, etc.), but I usually don't have time to take a detailed look at all the data and compare it to the destination in the database to determine how well it matches up. A lot of times there will be a little table of items that somehow I have to relate to something else in the database, but there are no natural or business keys present that would allow me to do so. I make the best of it, trying to write something that can compare strings and make a best guess, and then go through the effort of creating interfaces for a user to match the imported data to the destination. I feel like if the business analysts were actually creating a data model, they would be forced to think about these relationships, and would appreciate the need for natural or business keys to be part of the spreadsheet so that the data can be imported smoothly. The closest they come is a big flat list of fields, and that would be fine if it were like any other data dictionary and included data types and relationships, but it isn't. They are just a bunch of names, with no indication of what type of data they might hold, and it is up to me to guess. When I have pushed for more detail, they say that it is just busy work. How can I explain the importance of data modelling? How can I tell them what it is and how to do it? It feels impossible, because they don't have an appreciation for its importance. They do, however, usually have an interest in helping out in whatever way they can; it's just that this in particular has never gotten a motivated response.

    Read the article

  • Ray Tracing concerns: Efficient Data Structure and Photon Mapping

    - by Grieverheart
    I'm trying to build a simple ray tracer for specific target scenes. An example of such a scene can be seen below. I'm concerned about which acceleration data structure would be most efficient in this case, since all objects are touching but, on the other hand, the scene is uniform. The objects in my ray tracer are stored as a collection of triangles, so I also have access to individual triangles. Also, when trying to find the bounding box of the scene, how should infinite planes be handled? Should one instead use the viewing frustum to calculate the bounding box? A few other questions I have are about photon mapping. I've read the original paper by Jensen and much more material. In the compact data structure for the photon they introduce, they store photon power as 4 chars, which from my understanding is 3 chars for color and 1 for flux. But I don't understand how 1 char is enough to store a flux of the order of 1/n, where n is the number of photons (I'm also a bit confused about flux vs power). The other question about photon mapping is whether it would be more efficient in my case to store photons per object (or even per object's triangle) instead of using a balanced kd-tree. Also, the same question about the bounding box of the scene, but for photon mapping: how should one find a bounding box from the POV of the light when infinite planes are involved?
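    On the 4-char power question: as far as I know, the four bytes in Jensen's compact photon are Ward's shared-exponent RGBE encoding (three mantissa bytes plus one shared exponent byte), which is what lets values on the order of 1/n survive quantisation. A rough sketch of that packing follows; the helper names are mine and corner cases are ignored.

      using System;

      static class Rgbe
      {
          // Pack a (possibly very small) RGB flux value into 4 bytes with a shared exponent.
          public static byte[] Pack(double r, double g, double b)
          {
              double v = Math.Max(r, Math.Max(g, b));
              if (v < 1e-32) return new byte[4];                 // effectively zero

              int e = (int)Math.Floor(Math.Log(v, 2.0)) + 1;     // v lies in [2^(e-1), 2^e)
              double scale = 256.0 / Math.Pow(2.0, e);           // maps v into [128, 256)
              return new byte[]
              {
                  (byte)(r * scale),
                  (byte)(g * scale),
                  (byte)(b * scale),
                  (byte)(e + 128)                                // shared exponent, biased
              };
          }

          // Recover an approximation of the original flux.
          public static (double R, double G, double B) Unpack(byte[] rgbe)
          {
              if (rgbe[3] == 0) return (0, 0, 0);
              double f = Math.Pow(2.0, rgbe[3] - 128 - 8);       // undo bias and the *256 scale
              return (rgbe[0] * f, rgbe[1] * f, rgbe[2] * f);
          }
      }

    So one byte per channel is only enough because the exponent is stored separately; without the shared exponent a single char really could not hold a flux of order 1/n.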

    Read the article

  • Graph data structures and journal format for mini-IDE

    - by matec
    Background: I am writing a small/partial IDE. Code is internally converted/parsed into a graph data structure (for fast navigation, syntax checking, etc.). Functionality to undo/redo (also between sessions) and restoring after a crash is implemented by writing to and reading from a journal. The journal records modifications to the graph (not to the source).
    Question: I am hoping for advice on the choice of data structure and journal format. For the graph I see two possible versions:
      g-a: Graph edges are implemented so that one node stores references to other nodes via memory address.
      g-b: Every node has an ID, and there is an ID-to-memory-address map. The graph uses IDs (instead of addresses) to connect nodes. Moving along an edge from one node to another each time requires a lookup in the ID-to-address map.
    And also for the journal:
      j-a: There is a current node (like the current working directory in a shell + file-system setting). The journal contains entries like "create new node and connect to current", "connect first child of current node" (relative IDs).
      j-b: The journal uses absolute IDs, e.g. "delete edge 7 - 5", "delete node 5".
    I could e.g. combine g-a with j-a, or combine g-b with j-b. In principle g-b and j-a should also be possible. [My first/original attempt was g-a with a version of j-b that used addresses, but that turned out to cause severe restrictions: nodes cannot change their addresses (or the journal would have to keep track of that), and using the journal between two sessions is a mess (or even impossible).] I wonder whether variant a, variant b, or a combination would be a good idea, what advantages and disadvantages each would have, and especially whether some variant might cause trouble later.
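    A small sketch of what the g-b + j-b combination might look like, assuming integer node IDs; the record shapes are illustrative, not taken from the question.

      using System.Collections.Generic;

      // g-b: nodes know each other only by ID; a map resolves IDs to objects.
      class Node
      {
          public int Id;
          public List<int> Children = new List<int>();   // edges stored as IDs, not references
      }

      class Graph
      {
          private readonly Dictionary<int, Node> _byId = new Dictionary<int, Node>();
          private int _nextId;

          public Node Resolve(int id) => _byId[id];       // the per-hop lookup that g-b pays for

          public int AddNode()
          {
              var node = new Node { Id = _nextId++ };
              _byId[node.Id] = node;
              return node.Id;
          }

          public void Connect(int parentId, int childId) => _byId[parentId].Children.Add(childId);
      }

      // j-b: journal entries refer to absolute IDs, so they survive across sessions
      // as long as IDs are stable (unlike raw memory addresses).
      enum Op { AddNode, AddEdge, DeleteEdge, DeleteNode }

      struct JournalEntry
      {
          public Op Kind;
          public int A;      // node ID (or edge source)
          public int B;      // edge target; unused for node operations
      }

    Because the entries are ID-based, replaying the journal after a crash only needs the ID counter restored alongside it, which the address-based variant cannot offer.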

    Read the article

  • Turning a problem into a data model

    - by Fogmeister
    OK, I have an app that I'm creating, but I'm just really not sure how to approach the problem. The idea is fairly simple; I'm just not sure how to wrap it in a data model (or even if I should). TBH I feel like I'm making it more complicated than it needs to be. How it works: the app will have circles along the top in a row that need to be connected to circles along the bottom in a row. 10 circles at the top, 10 at the bottom, one connection per pair of dots. Anyway, I can get the dots to connect; I'm just not sure how to wrap it in a data model so that I can analyse what has been connected and see if it is right or not. The circles will be questions and answers. I can make an array of question objects with question and answer properties, and I can then display these as the dot pairs. I'm just not sure how to record which questions have been connected to which answers. It is valid for a user to connect a wrong answer, as they all get checked at the end. I was thinking of using SpriteKit, but this isn't a restriction; I could use UIKit or something else. TBH, this question is fairly language-free, as I'm just after a way of modelling it.
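    A minimal sketch of one way the model could look, assuming the on-screen dots are just views over these objects; the type and property names are illustrative, and it is written in C# to match the other sketches on this page, though the shape translates directly to Swift.

      using System.Collections.Generic;
      using System.Linq;

      // One top dot per question; the bottom dots are the (shuffled) answers.
      class Question
      {
          public string Prompt;          // text behind the top circle
          public string CorrectAnswer;   // text behind the matching bottom circle
          public string ConnectedAnswer; // whatever the user actually wired it to (null until connected)

          public bool IsCorrect => ConnectedAnswer == CorrectAnswer;
      }

      class Quiz
      {
          public List<Question> Questions = new List<Question>();

          // Called by the UI layer when the user drags a line from a top dot to a bottom dot.
          public void Connect(Question question, string answer) => question.ConnectedAnswer = answer;

          // Checked once at the end, since wrong connections are allowed while playing.
          public int Score() => Questions.Count(q => q.IsCorrect);
      }

    The rendering layer (SpriteKit, UIKit or anything else) then only has to map dots to Question objects and call Connect; correctness lives entirely in the model.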

    Read the article

  • Enforcing Constraints Upon Data Documents of Various Formats

    - by Christopher Berman
    This seems like the sort of problem that must have been solved elegantly long ago, but I haven't the foggiest how to google it and find it. Suppose you're maintaining a large legacy system, which has a large collection of data (tens of GB) of various formats, including XML and two different internal configuration formats. Suppose further that there are abstract rules governing the values these files may or may not contain. EXAMPLE: File A defines the raw, mathematical data pertaining to the aerodynamics of a car for consumption by the physics component of the system. File B contains certain values from File A in an easily accessible XML hierarchy for consumption by a different component of the system. There exists, therefore, an abstract rule (or constraint) such that the values from File B must match the values from File A. This is probably the simplest constraint that can be specified, but in practice the constraints between files can become very complicated indeed. What is the best method for managing these constraints between files of arbitrary formats, short of migrating it over to an RDBMS (which simply isn't feasible for the foreseeable future)? Has this problem been solved already? To be more specific, I would expect the solution to at least produce notifications of violated constraints; the solution need not resolve the constraints.
    Sample file structures
    File A (JeepWrangler2011.emv):
      MODEL JeepWrangler2011 {
        EsotericMathValueX 11.1
        EsotericMathValueY 22.2
        EsotericMathValueZ 33.3
      }
    File B (JeepWrangler2011.xml):
      <model name="JeepWrangler2011">
        <!--These values must correspond to File A's EsotericMathValues-->
        <modelExtent x="11.1" y="22.2" z="33.3"/>
        [...]
      </model>
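    As a rough illustration of the "notify on violated constraint" idea, here is a sketch that checks the single rule in the example above: File B's modelExtent must match File A's esoteric math values. The file paths, the crude regex parsing and the tolerance are all assumptions made for the example.

      using System;
      using System.Collections.Generic;
      using System.IO;
      using System.Text.RegularExpressions;
      using System.Xml.Linq;

      static class ConstraintChecker
      {
          public static void Main()
          {
              // Pull the three values out of the internal config format with a simple regex.
              var fileA = File.ReadAllText("JeepWrangler2011.emv");
              var valuesA = new Dictionary<string, double>();
              foreach (Match m in Regex.Matches(fileA, @"(EsotericMathValue[XYZ])\s+([\d.]+)"))
                  valuesA[m.Groups[1].Value] = double.Parse(m.Groups[2].Value);

              // Pull the corresponding attributes out of the XML copy.
              var extent = XDocument.Load("JeepWrangler2011.xml").Root.Element("modelExtent");
              var valuesB = new Dictionary<string, double>
              {
                  ["EsotericMathValueX"] = (double)extent.Attribute("x"),
                  ["EsotericMathValueY"] = (double)extent.Attribute("y"),
                  ["EsotericMathValueZ"] = (double)extent.Attribute("z"),
              };

              // Report every mismatch instead of stopping at the first one.
              foreach (var pair in valuesA)
                  if (Math.Abs(pair.Value - valuesB[pair.Key]) > 1e-9)
                      Console.WriteLine($"Constraint violated: {pair.Key} is {pair.Value} in File A but {valuesB[pair.Key]} in File B");
          }
      }

    The point is less the parsing than the shape: each abstract rule becomes a small check that knows how to read both formats and emits a notification, which scales to a list of rule objects run over the whole data set.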

    Read the article

  • What would be a better approach to data retrieval in an application that needs to handle a limited amount of data?

    - by Milanix
    Just moved this question from Stack Overflow, since adding my code snippets would make the question really long. I am mainly interested in better ways of retrieving data in an application that handles a limited amount of data which isn't updated regularly. Let's take this example: I am writing an application which gets a schedule as XML from a server. I have written logic to parse the XML version and update the database only if that version is newer than the local version. Although the update is checked automatically or manually on a daily basis, depending on user preference, the actual version update happens only once every few months or so, since it is published by some other authority which doesn't provide an API but rather announces changes publicly. The actual XML contains (n groups) x (days in a week) x (n schedules); the number of groups is usually 6 and the number of schedules is usually 2, so basically there would usually be only around 100 strings. Although I am using SQLite at the moment, I want to know how best to perform the update. Should I show a progress dialog saying the application is updating, and exit the app when it's done? Since my updates are infrequent, I don't think this will really harm the user experience, but is there a better way to do it? I don't want the update to be made while the user is searching, which is done using the database; this will cause a "database already open" exception, or at least I have faced this problem before. Is it better to parse the XML every time the user wants to view certain things, or to use SQLite? Since I make a lot of use of adapters in my app to create lists, will that degrade the performance?
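    A small sketch of the version gate described above, assuming the server XML carries a version attribute and the last imported version is kept locally. The names are illustrative, and the real app is Android/SQLite rather than this generic C#.

      using System;
      using System.Xml.Linq;

      class ScheduleUpdater
      {
          private int _localVersion;                        // persisted alongside the local schedule data

          public ScheduleUpdater(int localVersion) { _localVersion = localVersion; }

          // Returns true only when the downloaded schedule actually replaced the local one.
          public bool UpdateIfNewer(string scheduleXml, Action<XElement> importIntoDatabase)
          {
              var doc = XElement.Parse(scheduleXml);
              int remoteVersion = (int)doc.Attribute("version");   // assumed attribute

              if (remoteVersion <= _localVersion)
                  return false;                             // nothing to do on the usual daily check

              importIntoDatabase(doc);                      // the rare, actual rewrite of ~100 rows
              _localVersion = remoteVersion;
              return true;
          }
      }

    With this little data the import itself is nearly instant, so a brief "updating" indicator, or running it on a background thread before the search screen opens, avoids both a blocking dialog and the already-open conflict mentioned above.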

    Read the article

  • Unmet Dependencies with kdelibs5-data

    - by Jitesh
    I was trying to install Amarok 1.4 on Ubuntu 12.04 (Gnome Classic) by following these instructions. The problem started after giving these two commands:
      dpkg -i kdelibs5-data_4.6.2-0ubuntu4_all.deb
      dpkg -i kdelibs-data_3.5.10.dfsg.1-5ubuntu2_all.deb
    Immediately after these commands, Ubuntu Updater popped up and gave me an error that the package catalog is broken and needs to be repaired, and that nothing can be installed or removed until then. It also offered a suggestion to run apt-get install -f. I tried that, but again got the same error. I also tried apt-get clean followed by apt-get install -f, and again got the following output:
      jitesh@jitesh-desktop:~$ sudo apt-get clean
      jitesh@jitesh-desktop:~$ sudo apt-get install -f
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      Correcting dependencies... Done
      The following extra packages will be installed:
        kdelibs5-data
      The following packages will be upgraded:
        kdelibs5-data
      1 upgraded, 0 newly installed, 0 to remove and 18 not upgraded.
      2 not fully installed or removed.
      Need to get 2,832 kB of archives.
      After this operation, 2,998 kB disk space will be freed.
      Do you want to continue [Y/n]? y
      Get:1 http://in.archive.ubuntu.com/ubuntu/ precise-updates/main kdelibs5-data all 4:4.8.4a-0ubuntu0.2 [2,832 kB]
      Fetched 2,832 kB in 32s (86.6 kB/s)
      dpkg: dependency problems prevent configuration of kdelibs5-data:
        libplasma3 (4:4.8.4a-0ubuntu0.2) breaks kdelibs5-data (<< 4:4.6.80~) and is installed.
          Version of kdelibs5-data to be configured is 4:4.6.2-0ubuntu4.
        kate-data (4:4.8.4-0ubuntu0.1) breaks kdelibs5-data (<< 4:4.6.90) and is installed.
          Version of kdelibs5-data to be configured is 4:4.6.2-0ubuntu4.
        katepart (4:4.8.4-0ubuntu0.1) breaks kdelibs5-data (<< 4:4.6.90) and is installed.
          Version of kdelibs5-data to be configured is 4:4.6.2-0ubuntu4.
      dpkg: error processing kdelibs5-data (--configure):
        dependency problems - leaving unconfigured
      dpkg: dependency problems prevent configuration of kdelibs-data:
        kdelibs-data depends on kdelibs5-data; however:
          Package kdelibs5-data is not configured yet.
      No apport report written because MaxReports is reached already
      dpkg: error processing kdelibs-data (--configure):
        dependency problems - leaving unconfigured
      No apport report written because MaxReports is reached already
      Errors were encountered while processing:
        kdelibs5-data
        kdelibs-data
      W: Duplicate sources.list entry http://archive.canonical.com/ubuntu/ precise/partner i386 Packages (/var/lib/apt/lists/archive.canonical.com_ubuntu_dists_precise_partner_binary-i386_Packages)
      W: You may want to run apt-get update to correct these problems
      E: Sub-process /usr/bin/dpkg returned an error code (1)
    As I thought the error was related to configuring kdelibs, I tried to configure it using dpkg, but got the following errors:
      jitesh@jitesh-desktop:~$ sudo dpkg --configure -a
      dpkg: dependency problems prevent configuration of kdelibs5-data:
        libplasma3 (4:4.8.4a-0ubuntu0.2) breaks kdelibs5-data (<< 4:4.6.80~) and is installed.
          Version of kdelibs5-data to be configured is 4:4.6.2-0ubuntu4.
        kate-data (4:4.8.4-0ubuntu0.1) breaks kdelibs5-data (<< 4:4.6.90) and is installed.
          Version of kdelibs5-data to be configured is 4:4.6.2-0ubuntu4.
        katepart (4:4.8.4-0ubuntu0.1) breaks kdelibs5-data (<< 4:4.6.90) and is installed.
          Version of kdelibs5-data to be configured is 4:4.6.2-0ubuntu4.
      dpkg: error processing kdelibs5-data (--configure):
        dependency problems - leaving unconfigured
      dpkg: dependency problems prevent configuration of kdelibs-data:
        kdelibs-data depends on kdelibs5-data; however:
          Package kdelibs5-data is not configured yet.
      dpkg: error processing kdelibs-data (--configure):
        dependency problems - leaving unconfigured
      Errors were encountered while processing:
        kdelibs5-data
        kdelibs-data
      jitesh@jitesh-desktop:~$
    Now I don't have any idea how to proceed. I am unable to install anything from the Software Centre or using the terminal now. Some basic info: Core2Duo, dual-booting Ubuntu 12.04 with Win7; fresh install of Ubuntu 12.04 (not an upgrade). Incidentally, I had first upgraded from 10.04 and had successfully installed Amarok 1.4 following this same method, but due to other issues I had to format and do a clean install of 12.04. Now, when I try to install Amarok 1.4, I'm getting these errors. I also have digiKam and k3b installed, if that can be of any help. I use digiKam a lot, so removing KDE is not feasible for me. Any help on this issue will be highly appreciated. Thanks in advance.

    Read the article

  • ASP.NET Dynamic Data Deployment Error

    - by rajbk
    You have an ASP.NET 3.5 dynamic data website that works great on your local box. When you deploy it to your production machine and turn on debug, you get the YSD:
      Server Error in '/MyPath/MyApp' Application.
      Parser Error
      Description: An error occurred during the parsing of a resource required to service this request. Please review the following specific parse error details and modify your source file appropriately.
      Parser Error Message: Unknown server tag 'asp:DynamicDataManager'.
      Source Error:
        Line 5:
        Line 6:  <asp:Content ID="Content1" ContentPlaceHolderID="ContentPlaceHolder1" Runat="Server">
        Line 7:      <asp:DynamicDataManager ID="DynamicDataManager1" runat="server" AutoLoadForeignKeys="true" />
        Line 8:
        Line 9:      <h2><%= table.DisplayName%></h2>
    Probable Causes
      - The server does not have .NET 3.5 SP1, which includes ASP.NET Dynamic Data, installed. Download it here.
      - The third tagPrefix shown below is missing from web.config:
        <pages>
          <controls>
            <add tagPrefix="asp" namespace="System.Web.UI" assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
            <add tagPrefix="asp" namespace="System.Web.UI.WebControls" assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
            <add tagPrefix="asp" namespace="System.Web.DynamicData" assembly="System.Web.DynamicData, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
          </controls>
        </pages>
    Hope that helps!

    Read the article

  • SQL Server 2012 : The Data Tools installer is now available

    - by AaronBertrand
    Last week when RC0 was released, the updated installer for "Juneau" (SQL Server Data Tools) was not available. Depending on how you tried to get it, you either ended up on a blank search page, or a page offering the CTP3 bits. Important note: the CTP3 Juneau bits are not compatible with SQL Server 2012 RC0. If you already have Visual Studio 2010 installed (meaning Standard/Pro/Premium/Ultimate), you will need to install Service Pack 1 before continuing. You can get to the installer simply by opening...(read more)

    Read the article

  • SQL Developer Data Modeler v3.3 Early Adopter: Link Model Objects Across Designs

    - by thatjeffsmith
    The third post in our “What’s New in SQL Developer Data Modeler v3.3” series: SQL Developer Data Modeler now allows you to link objects across models. If you need to catch up on the earlier posts, here are the first two: New and Improved Search, and Collaborative Design via Excel. Today's post is a very simple and straightforward discussion on how to share objects across models and designs. In previous releases you could easily copy and paste objects between models and designs: simply select your object, right-click and select ‘Copy’. Once copied, paste it into your other designs and then make changes as required. Once you paste the object, it is no longer associated with the source it was copied from. You are free to make any changes you want in the new location without affecting the source material, and it works the other way as well: make any changes to the source material and the new object is also unaffected. However, what if you want to LINK a model object instead of COPYING it? In version 3.3, you can now do this. Simply drag and drop the object instead of copying and pasting it. Select the object, in this case a relational model table, and drag it to your other model. It's as simple as it sounds; here's a little animated GIF to show you what I'm talking about. [Animated GIF: drag and drop between models/designs to LINK an object]
    Notes:
      - The ‘linked’ object cannot be modified from the destination space.
      - Updating the source object will propagate the changes forward to wherever it's been linked.
      - You can drag a linked object to another design, so dragging from A - B and then from B - C will work.
      - Linked objects are annotated in the model with a ‘Chain’ bitmap (caption: "This object has been linked from another design/model and cannot be modified.").
    A very simple feature, but I like the flexibility here. Copy and paste = new independent object. Drag and drop = linked object.

    Read the article

  • Data structures in functional programming

    - by pwny
    I'm currently playing with LISP (particularly Scheme and Clojure) and I'm wondering how typical data structures are dealt with in functional programming languages. For example, let's say I would like to solve a problem using a graph pathfinding algorithm. How would one typically go about representing that graph in a functional programming language (primarily interested in pure functional style that can be applied to LISP)? Would I just forget about graphs altogether and solve the problem some other way?

    Read the article

  • PASS Data Architecture VC presents Neil Hambly on Improve Data Quality & Integrity using Constraints

    On Tuesday, June 19th at 12:00 noon Central, Neil Hambly will discuss "Leveraging the power of constraints to improve both data quality and performance of your databases."

    Read the article

  • My favorite BI and DW blogs, part 7: Oracle Data Warehousing

    - by Fekete Zoltán
    I recommend the following substantial blog to the gentle reader's attention: The Data Warehouse Insider: http://blogs.oracle.com/datawarehousing/ It ranges from the general concepts of data warehousing to "best practice" lessons learned from implementations and design. Topics: star schemas, partitioning, OLAP, 3NF, parallel processing, data loading, ETL-ELT, data models, events, Exadata, Database Machine, compression, data mining, customer stories, ...

    Read the article

  • When does 'optimizing code' == 'structuring data'?

    - by NewAlexandria
    A recent article on ycombinator lists a comment with principles of a great programmer:
      #7. Good programmer: I optimize code. Better programmer: I structure data. Best programmer: What's the difference?
    Acknowledging that these are subjective and contentious concepts, does anyone have a position on what this means? I do, but I'd like to edit this question later with my thoughts so as not to predispose the answers.

    Read the article

  • Single Key Multiple Values Data Structure for one to many mapping

    - by nijhawan.saurabh
    Dictionaries are good; they are great for storing key/value pairs, but what if you want to store multiple values for a single key? Dictionaries do not allow duplicate keys. I came across a nice way to represent such a data structure using one of the extension methods (ToLookup) in the System.Linq namespace, which converts an IEnumerable<T> to an ILookup<TKey, TElement>.
    Now, there are two parameters this method expects (the other overload expects 3 parameters):
      - IEnumerable<TSource>: this list contains the actual data.
      - Func<TSource, TKey> keySelector: the delegate which computes the keys.
    The method returns an ILookup<TKey, TElement>. This data structure stores keys and multiple values against those keys. Let's see a small example:
      using System;
      using System.Collections.Generic;
      using System.Linq;

      internal class Program
      {
          private static void Main(string[] args)
          {
              // Create a list of strings.
              var list = new List<string> { "IceCream1", "Chocolate Moose", "IceCream2" };

              // Generate a lookup data structure keyed by string length.
              ILookup<int, string> lookupDs = list.ToLookup(item => item.Length);

              // Enumerate groupings.
              foreach (var group in lookupDs)
              {
                  foreach (string element in group)
                  {
                      Console.WriteLine(element);
                  }
              }
          }
      }

    Read the article

  • Data Education: Great Classes Coming to a City Near You

    - by Adam Machanic
    In case you haven't noticed, Data Education (the training company I started a couple of years ago) has expanded beyond the US northeast; we're currently offering courses with top trainers in both St. Louis and Chicago, as well as the Boston area. The courses are starting to fill up fast—not surprising when you consider we’re talking about experienced instructors like Kalen Delaney, Rob Farley, and Allan Hirt—but we still have some room. We’re very excited about bringing the highest quality...(read more)

    Read the article

  • Extracting the Layout of all the Data Forms from the Relational Database

    - by RahulS
    Today I came across a question from one of our clients: "What members are used on each data form, WITHOUT having to go through the report generated out of our Planning app?" We worked with the client on this and arrived at a simple query. All the form-related information is stored in the following tables:
      HSP_FORM
      HSP_FORMOBJ_DEF
      HSP_FORMOBJ_DEF_MBR
      HSP_FORM_ATTRIBUTES
      HSP_FORM_CALCS
      HSP_FORM_DV_CONDITION
      HSP_FORM_DV_PM_RULE
      HSP_FORM_DV_RULE
      HSP_FORM_DV_USER_IN_PM_RULE
      HSP_FORM_LAYOUT
      HSP_FORM_MENUS
      HSP_FORM_VARIABLES
    If we want to retrieve just the members included, we can concentrate on:
      HSP_OBJECT - to get the Object_ID for the form; Object_Type is 7 for forms (Ex: Select * from HSP_OBJECT where OBJECT_TYPE = 7).
      HSP_FORMOBJ_DEF - find the OBJDEF_ID for a particular form.
      HSP_FORMOBJ_DEF_MBR - use the above OBJDEF_ID to find the members: here Mbr_ID is the ID of the member, Query_Type is the function (like Idesc, Level0, etc.) and Sequence is your sequence.
      HSP_FORM_LAYOUT - the final table we can use: Layout_Type is 0->POV, 1->Page, 2->Row, 3->Col; DIM_ID is the dimension ID and Ordinal is the position.
    Here is the query:
      SELECT HSP_OBJECT.OBJECT_NAME AS 'Form',
             HSP_OBJECT_2.OBJECT_NAME AS 'Dimension',
             HSP_OBJECT_1.OBJECT_NAME AS 'Member',
             HSP_FORMOBJ_DEF_MBR.QUERY_TYPE
      FROM
             <DatabaseName>.dbo.HSP_FORM_LAYOUT HSP_FORM_LAYOUT,
             <DatabaseName>.dbo.HSP_FORMOBJ_DEF HSP_FORMOBJ_DEF,
             <DatabaseName>.dbo.HSP_FORMOBJ_DEF_MBR HSP_FORMOBJ_DEF_MBR,
             <DatabaseName>.dbo.HSP_MEMBER HSP_MEMBER,
             <DatabaseName>.dbo.HSP_OBJECT HSP_OBJECT,
             <DatabaseName>.dbo.HSP_OBJECT HSP_OBJECT_1,
             <DatabaseName>.dbo.HSP_OBJECT HSP_OBJECT_2
      WHERE
             HSP_OBJECT.OBJECT_ID = HSP_FORMOBJ_DEF.FORM_ID AND
             HSP_FORMOBJ_DEF_MBR.OBJDEF_ID = HSP_FORMOBJ_DEF.OBJDEF_ID AND
             HSP_MEMBER.MEMBER_ID = HSP_FORMOBJ_DEF_MBR.MBR_ID AND
             HSP_OBJECT_1.OBJECT_ID = HSP_MEMBER.MEMBER_ID AND
             HSP_OBJECT_2.OBJECT_ID = HSP_MEMBER.DIM_ID AND
             HSP_FORM_LAYOUT.DIM_ID = HSP_MEMBER.DIM_ID AND
             HSP_FORM_LAYOUT.FORM_ID = HSP_FORMOBJ_DEF.FORM_ID AND
             ((HSP_OBJECT.OBJECT_TYPE=7))
      ORDER BY HSP_OBJECT.OBJECT_NAME
    Concentrate on the Test1 data form and its actual layout. Corresponding Query_Type values for a few of the functions: 9 for Idesc, 3 for Ancestors, -9 for ILvl0Des, 8 for Desc, 4 for IAncestors. It's just a basic idea; you can do a lot on the basis of this. Cheers..!!! Rahul S. http://www.facebook.com/pages/HyperionPlanning/117320818374228

    Read the article

  • Sorting data in the SSIS Pipeline (Video)

    In this post I want to show a couple of ways to order the data that comes into the pipeline. A number of people have asked me about this, primarily because there are a number of ways to do it, but also because some components in the pipeline take sorted inputs. One of the methods I show is visually easy to understand, and the other is less visual but potentially more performant.

    Read the article

  • Application Crash cleared the content of the Folder

    - by Ameya
    Recently, while working with LinuxDC++ over the network, the application crashed while downloading files. Now my Downloads folder, which had at least 60-80 GB of data, is completely empty, but the system is not reporting the correct free space. Is there a way to restore the contents of the folder only? The solutions available are for the whole partition, and I just want to recover the contents of one folder.

    Read the article
