Search Results

Search found 60903 results on 2437 pages for 'data mapping'.


  • change custom mapping - sharp architecture/ fluent nhibernate

    - by csetzkorn
    I am using Sharp Architecture, which also deploys Fluent NHibernate (FNH). The DB schema SQL is generated during testing like this:

        [TestFixture]
        [Category("DB Tests")]
        public class MappingIntegrationTests
        {
            [SetUp]
            public virtual void SetUp()
            {
                string[] mappingAssemblies = RepositoryTestsHelper.GetMappingAssemblies();
                configuration = NHibernateSession.Init(
                    new SimpleSessionStorage(),
                    mappingAssemblies,
                    new AutoPersistenceModelGenerator().Generate(),
                    "../../../../app/XXX.Web/NHibernate.config");
            }

            [TearDown]
            public virtual void TearDown()
            {
                NHibernateSession.CloseAllSessions();
                NHibernateSession.Reset();
            }

            [Test]
            public void CanConfirmDatabaseMatchesMappings()
            {
                var allClassMetadata = NHibernateSession.GetDefaultSessionFactory().GetAllClassMetadata();
                foreach (var entry in allClassMetadata)
                {
                    NHibernateSession.Current.CreateCriteria(entry.Value.GetMappedClass(EntityMode.Poco))
                        .SetMaxResults(0).List();
                }
            }

            /// <summary>
            /// Generates and outputs the database schema SQL to the console
            /// </summary>
            [Test]
            public void CanGenerateDatabaseSchema()
            {
                System.IO.TextWriter writeFile = new StreamWriter(@"d:/XXXSqlCreate.sql");
                var session = NHibernateSession.GetDefaultSessionFactory().OpenSession();
                new SchemaExport(configuration).Execute(true, false, false, session.Connection, writeFile);
            }

            private Configuration configuration;
        }

    I am trying to use the following override to change the standard mapping of strings from NVARCHAR(255) to varchar(max):

        using FluentNHibernate.Automapping;
        using xxx.Core;
        using SharpArch.Data.NHibernate.FluentNHibernate;
        using FluentNHibernate.Automapping.Alterations;

        namespace xxx.Data.NHibernateMaps
        {
            public class x : IAutoMappingOverride<x>
            {
                public void Override(AutoMapping<x> mapping)
                {
                    mapping.Map(x => x.text, "text").CustomSqlType("varchar(max)");
                    mapping.Map(x => x.url, "url").CustomSqlType("varchar(max)");
                }
            }
        }

    This is not picked up during the SQL schema generation. I also tried:

        mapping.Map(x => x.text, "text").Length(100000);

    Any ideas? Thanks. Christian
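
    One thing worth checking, sketched below with hypothetical names (Article, ArticleMap; none of these come from the question): Fluent NHibernate only applies IAutoMappingOverride classes that are registered with the AutoPersistenceModel, typically via UseOverridesFromAssemblyOf. If the generator never registers the assembly containing the overrides, CustomSqlType is silently ignored by SchemaExport.

        // A sketch, not verified against the asker's setup: entity and override
        // names (Article, ArticleMap) are placeholders, and the generator is a
        // simplified stand-in for Sharp Architecture's AutoPersistenceModelGenerator.
        using FluentNHibernate.Automapping;
        using FluentNHibernate.Automapping.Alterations;

        public class Article
        {
            public virtual int Id { get; set; }
            public virtual string Text { get; set; }
        }

        public class ArticleMap : IAutoMappingOverride<Article>
        {
            public void Override(AutoMapping<Article> mapping)
            {
                // CustomSqlType reaches SchemaExport only if this override
                // class is actually registered with the model below.
                mapping.Map(a => a.Text).CustomSqlType("varchar(max)");
            }
        }

        public class AutoPersistenceModelGenerator
        {
            public AutoPersistenceModel Generate()
            {
                return AutoMap.AssemblyOf<Article>()
                    // Without this call, IAutoMappingOverride classes are ignored,
                    // which matches the symptom described above.
                    .UseOverridesFromAssemblyOf<ArticleMap>();
            }
        }

    If the override is registered and still ignored, it is also worth checking that the mapping-assembly list handed to NHibernateSession.Init includes the assembly holding the overrides.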

    Read the article

  • Is information a subset of data?

    - by Jason Baker
    I apologize as I don't know whether this is more of a math question that belongs on mathoverflow or if it's a computer science question that belongs here. That said, I believe I understand the fundamental difference between data, information, and knowledge. My understanding is that information carries both data and meaning. One thing that I'm not clear on is whether information is data. Is information considered a special kind of data, or is it something completely different?

    Read the article

  • MySQL data type: Text - erroring: Data Too Long

    - by nobosh
    I have a field as follows in MySQL: type Text, length 0, decimals 0. When I try to insert data around the size of 4 pages of MS Word, ColdFusion errors with "Data Too Long" from the DB. I thought the TEXT data type was able to expand and handle this size of data? What am I missing and what can I do?

    Read the article

  • Can't Compile Correct Mapping File

    - by NoOne
    Hello, I'm now developing an application in C# with Remoting objects and NHibernate. Here is how my projects are divided:

    - Views Layer: responsible for the user interface. This layer always uses the Control Layer to create and edit objects.
    - Control Layer: my persistence layer; here I'll have all the NHibernate configuration. This is my critical point, because the ListSingleton project will have only my RemoteObject. (Here I have the App.config file.)
    - Models Layer: entity layer. Here I only have the entity classes and their respective mappings.

    I've done a test solution with no remoting, using only the projects of the Control and Model layers, and it all worked. Now that I added the Views Layer and set the solution to start with the Client and Server projects (Client calls the Control Layer, which then tries to persist an object), I'm getting an error at:

        Configuration cfg = new Configuration();
        cfg.AddXmlFile("mapping/User.hbm.xml");

        InnerException: {"Could not compile the mapping document: mapping/User.hbm.xml"}
        InnerException.InnerException: {"Could not find the dialect in the configuration"}

    Stack trace in the InnerException.InnerException:

        at NHibernate.Dialect.Dialect.GetDialect(IDictionary`2 props)
        at NHibernate.Cfg.Configuration.AddValidatedDocument(NamedXmlDocument doc)

    But I know that there is no error in the mapping file, because I used it in my test application.
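
    A hedged sketch of one likely cause: with remoting, the process that runs the NHibernate code is the Server host, so the hibernate settings (dialect, connection string) must be readable in that process. If the App.config carrying them lives only in the Control Layer project, the Server executable never sees them, and GetDialect fails exactly as above. Either copy the settings into the Server's own config file or set them in code. The dialect and connection string below are assumptions, not taken from the question:

        static class ServerNHibernateBootstrap
        {
            // A sketch: supply the dialect in the process that actually builds
            // the Configuration, i.e. the remoting Server host.
            public static NHibernate.Cfg.Configuration Build()
            {
                var cfg = new NHibernate.Cfg.Configuration();
                cfg.SetProperty(NHibernate.Cfg.Environment.Dialect,
                                "NHibernate.Dialect.MsSql2008Dialect");    // assumption: SQL Server 2008
                cfg.SetProperty(NHibernate.Cfg.Environment.ConnectionDriver,
                                "NHibernate.Driver.SqlClientDriver");
                cfg.SetProperty(NHibernate.Cfg.Environment.ConnectionString,
                                "Server=.;Database=MyDb;Integrated Security=SSPI;"); // placeholder
                // With a dialect present, AddXmlFile can compile the mapping document.
                cfg.AddXmlFile("mapping/User.hbm.xml");
                return cfg;
            }
        }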

    Read the article

  • Hibernate - Problem in parsing mapping file (.hbm.xml)

    - by Yatendra Goel
    I am new to Hibernate. I get an exception while running a Hibernate-based application. The exception is as follows:

        16 [main] INFO org.hibernate.cfg.Environment - Hibernate 3.3.2.GA
        16 [main] INFO org.hibernate.cfg.Environment - hibernate.properties not found
        16 [main] INFO org.hibernate.cfg.Environment - Bytecode provider name : javassist
        31 [main] INFO org.hibernate.cfg.Environment - using JDK 1.4 java.sql.Timestamp handling
        94 [main] INFO org.hibernate.cfg.Configuration - configuring from resource: /hibernate.cfg.xml
        94 [main] INFO org.hibernate.cfg.Configuration - Configuration resource: /hibernate.cfg.xml
        219 [main] INFO org.hibernate.cfg.Configuration - Reading mappings from resource : app/data/City.hbm.xml
        266 [main] ERROR org.hibernate.util.XMLHelper - Error parsing XML: XML InputStream(12) Attribute "coloumn" must be declared for element type "property".
        266 [main] ERROR org.hibernate.util.XMLHelper - Error parsing XML: XML InputStream(13) Attribute "coloumn" must be declared for element type "property".
        266 [main] ERROR org.hibernate.util.XMLHelper - Error parsing XML: XML InputStream(14) Attribute "coloumn" must be declared for element type "property".

    It seems that it is not finding the coloumn attribute of the property element in the mapping file, but my mapping file does have the coloumn attribute. Below is the mapping file (City.hbm.xml):

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE hibernate-mapping PUBLIC
            "-//Hibernate/Hibernate Mapping DTD 3.0//EN"
            "http://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd">
        <hibernate-mapping package="app.data">
            <class name="City" table="CITY">
                <id column="CITY_ID" name="cityId">
                    <generator class="native"/>
                </id>
                <property name="cityDisplyaName" coloumn="CITY_DISPLAY_NAME" />
                <property coloumn="CITY_MEANINGFUL_NAME" name="cityMeaningFulName" />
                <property coloumn="CITY_URL" name="cityURL" />
            </class>
        </hibernate-mapping>
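
    The error message itself names the culprit: the attribute is spelled coloumn in the file, while the Hibernate mapping DTD declares it as column (the id element, which parses fine, already uses the correct spelling). The three property lines, corrected:

        <!-- "coloumn" corrected to "column", as declared by the mapping DTD -->
        <property name="cityDisplyaName" column="CITY_DISPLAY_NAME" />
        <property column="CITY_MEANINGFUL_NAME" name="cityMeaningFulName" />
        <property column="CITY_URL" name="cityURL" />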

    Read the article

  • Data Access Layer - static list objects and caching

    - by Truegilly
    Hello, I am developing a site using .NET MVC. I have a data access layer which basically consists of static list objects created from data within my database. The method that rebuilds this data first clears all the list objects; once they are empty, it re-adds the data.

    Here is an example of one of the lists I'm using. It holds all the UK postcodes; there are about 50 methods similar to this in my application that return all sorts of information, such as towns, regions, members, emails, etc.

        public static List<PostCode> AllPostCodes = new List<PostCode>();

    When the rebuild method is called, it first clears the list:

        ListPostCodes.AllPostCodes.Clear();

    Next it rebuilds the data by calling the GetAllPostCodes() method:

        /// <summary>
        /// Static method that returns all the UK postcodes
        /// </summary>
        public static void GetAllPostCodes()
        {
            using (fab_dataContextDataContext db = new fab_dataContextDataContext())
            {
                IQueryable AllPostcodeData = from data in db.PostCodeTables select data;
                IDbCommand cmd = db.GetCommand(AllPostcodeData);
                SqlDataAdapter adapter = new SqlDataAdapter();
                adapter.SelectCommand = (SqlCommand)cmd;
                DataSet dataSet = new DataSet();
                cmd.Connection.Open();
                adapter.FillSchema(dataSet, SchemaType.Source);
                adapter.Fill(dataSet);
                cmd.Connection.Close();

                // create the objects
                foreach (DataRow row in dataSet.Tables[0].Rows)
                {
                    PostCode postcode = new PostCode();
                    postcode.ID = Convert.ToInt32(row["PostcodeID"]);
                    postcode.Outcode = row["OutCode"].ToString();
                    postcode.Latitude = Convert.ToDouble(row["Latitude"]);
                    postcode.Longitude = Convert.ToDouble(row["Longitude"]);
                    postcode.TownID = Convert.ToInt32(row["TownID"]);
                    AllPostCodes.Add(postcode);
                    postcode = null;
                }
            }
        }

    The rebuild occurs every hour, which ensures the site has a fresh set of cached data. The issue I've got is that occasionally, if the server is hit by a request during a rebuild, an exception is thrown: "Index was outside the bounds of the array." It occurs while a list is being cleared (ListPostCodes.AllPostCodes.Clear(); throws the exception, although it is not always this particular list). Once this exception is thrown, the application dies and all users are affected; I have to restart the server to fix it.

    I have two questions:

    1. If I utilise caching instead of static objects, would this help?
    2. Is there any way I can say "while the rebuild is taking place, wait for it to complete before accepting requests"?

    Any help is most appreciated ;) truegilly
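
    A hedged sketch of one way out (not the poster's code; the loader below is a hypothetical stand-in): the crash comes from request threads enumerating a list while the rebuild mutates it, because List<T> is not thread-safe. Building the replacement list completely and then publishing it with a single reference assignment means readers always see either the old snapshot or the new one, never a half-cleared list:

        using System.Collections.Generic;

        public static class PostCodeCache
        {
            // volatile so readers on other threads see the new reference promptly
            private static volatile IReadOnlyList<PostCode> current = new List<PostCode>();

            // Readers get an immutable snapshot; enumeration is safe because
            // the published list is never mutated after assignment.
            public static IReadOnlyList<PostCode> AllPostCodes => current;

            public static void Rebuild()
            {
                var fresh = new List<PostCode>();
                foreach (var row in LoadPostCodeRows())   // hypothetical DB loader
                    fresh.Add(row);

                current = fresh; // atomic reference swap; no Clear(), no torn reads
            }

            private static IEnumerable<PostCode> LoadPostCodeRows()
            {
                // placeholder for the LINQ-to-SQL query in the question
                yield break;
            }
        }

        public class PostCode
        {
            public int Id { get; set; }
            public string Outcode { get; set; }
        }

    With this shape there is nothing to "wait for": requests arriving mid-rebuild simply keep reading the previous snapshot until the swap happens.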

    Read the article

  • Get value of "data"

    - by Nicole Loyal-Windham
    Hi, I need to figure out the values of data entries with jQuery, for example like this:

        { label: "Beginner", data: 2 },
        { label: "Advanced", data: 12 },
        { label: "Expert", data: 22 },

    to add them up. Something like:

        var sum = data1 + data2 + data3;
        alert(sum);

    So the result for this example would be 36. Appreciate your help! Nicole

    Read the article

  • jQuery loop through data() object

    - by bartclaeys
    Is it possible to loop through a data() object? Suppose this is my code:

        $('#mydiv').data('bar', 'lorem');
        $('#mydiv').data('foo', 'ipsum');
        $('#mydiv').data('cam', 'dolores');

    How do I loop through this? Can each() be used for this?

    Read the article

  • Prepopulate jQuery Data in html

    - by Mikael
    jQuery has the very cool feature/method .data(). I wonder if there is a way to include the data in the markup itself so that jQuery can use it once the HTML has been rendered. Suppose I have a repeater looping out children, and I want to attach some data to those children without using classes etc. Will I have to emit JavaScript in that repeater just to add stuff to jQuery's data store, or is there a better way?

    Read the article

  • Serializing persistent/functional data structures

    - by Rob
    Persistent data structures depend on the sharing of structure for efficiency. For an example, see here. How can I preserve the structure sharing when I serialize the data structures and write them to a file or database? If I just naively traverse the datastructures, I'll store the correct values, but I'll lose the structure sharing. I'd like to be able to save data-structures with shared components to a file, restore them, and still have most of the structure shared in the restored data.
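
    One standard approach, sketched below under the assumption that nodes are ordinary reference-type objects (the Node type is a hypothetical stand-in, not any particular library's API): assign an id to each node the first time it is written and emit a back-reference on every later encounter; deserialization rebuilds the id table in the same order, so shared subtrees come back shared.

        using System;
        using System.Collections.Generic;
        using System.IO;

        sealed class Node
        {
            public int Value;
            public Node Left, Right;   // children may be shared between structures
        }

        static class SharingSerializer
        {
            // Writes "ref <id>" for nodes already seen, preserving structure sharing.
            public static void Write(Node n, Dictionary<Node, int> seen, TextWriter o)
            {
                if (n == null) { o.WriteLine("null"); return; }
                if (seen.TryGetValue(n, out int id)) { o.WriteLine($"ref {id}"); return; }
                seen[n] = id = seen.Count;
                o.WriteLine($"node {id} {n.Value}");
                Write(n.Left, seen, o);
                Write(n.Right, seen, o);
            }

            public static Node Read(TextReader i, List<Node> table)
            {
                string[] t = i.ReadLine().Split(' ');
                switch (t[0])
                {
                    case "null": return null;
                    case "ref":  return table[int.Parse(t[1])];  // re-link shared subtree
                    default:
                        var n = new Node { Value = int.Parse(t[2]) };
                        table.Add(n);                            // ids follow write order
                        n.Left = Read(i, table);
                        n.Right = Read(i, table);
                        return n;
                }
            }
        }

    The dictionary keys on reference identity (Node does not override Equals), which is exactly what "same shared node" means here; structurally equal but distinct nodes stay distinct.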

    Read the article

  • Use SQL query to populate property in nHibernate mapping file

    - by brainimus
    I have an object which contains a property that is the result of an SQL statement. How do I add the SQL statement to my nHibernate mapping file?

    Example object:

        public class Library
        {
            public int BookCount { get; set; }
        }

    Example mapping file:

        <hibernate-mapping>
            <class name="Library" table="Libraries">
                <property name="BookCount" type="int">
                <!-- This is where I want the SQL query to populate the value. -->
            </class>
        </hibernate-mapping>

    Example SQL query:

        SELECT COUNT(*) FROM BOOKS WHERE BOOKS.LIBRARY_ID = LIBRARIES.ID
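
    NHibernate's mapping schema supports this directly through the formula attribute on <property>: the SQL is evaluated when the entity is loaded. A sketch using the names from the question, unverified against the actual schema; unqualified columns such as ID resolve against the class's own table:

        <hibernate-mapping>
            <class name="Library" table="Libraries">
                <!-- formula: arbitrary SQL run at load time, wrapped in parentheses;
                     the subquery correlates against the current Libraries row -->
                <property name="BookCount" type="int"
                          formula="(SELECT COUNT(*) FROM BOOKS WHERE BOOKS.LIBRARY_ID = ID)" />
            </class>
        </hibernate-mapping>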

    Read the article

  • DTOs Collections mapping Problem

    - by the_knight5000
    I'm working on a multi-tier project with the following layers: DAL, BLL, GUI layer, and DTOs shared between the BLL and GUI layers. I'm facing a problem mapping the objects from DAO to DTO. There is no problem with simple objects; the problem is with objects that have child collections of other objects, e.g.:

        Author            Category
        --Categories      --Authors

    The execution goes into an infinite loop of mapping. It gets even more complex when I want to model self-join tables, e.g.:

        Safe
        --TransferSafe (Collection<Safe>)

    where the execution again goes into an infinite loop of mapping. Any suggestions about a good solution or a practical mapping pattern?
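
    A hedged sketch of the usual pattern, an identity map of already-mapped objects (all type names below are stand-ins for the question's entities, not an existing library): register the DTO in the map before mapping its children, so a cycle like Author -> Category -> Author resolves to the DTO instance already under construction instead of recursing forever.

        using System.Collections.Generic;
        using System.Linq;

        static class DtoMapper
        {
            public static AuthorDto Map(Author src, Dictionary<object, object> visited)
            {
                if (visited.TryGetValue(src, out var done))
                    return (AuthorDto)done;          // cycle: reuse the DTO in progress

                var dto = new AuthorDto { Name = src.Name };
                visited[src] = dto;                  // register BEFORE mapping children
                dto.Categories = src.Categories.Select(c => Map(c, visited)).ToList();
                return dto;
            }

            public static CategoryDto Map(Category src, Dictionary<object, object> visited)
            {
                if (visited.TryGetValue(src, out var done))
                    return (CategoryDto)done;

                var dto = new CategoryDto { Name = src.Name };
                visited[src] = dto;
                dto.Authors = src.Authors.Select(a => Map(a, visited)).ToList();
                return dto;
            }
        }

        // hypothetical stand-ins for the question's entities and DTOs
        class Author      { public string Name; public List<Category> Categories = new List<Category>(); }
        class Category    { public string Name; public List<Author> Authors = new List<Author>(); }
        class AuthorDto   { public string Name; public List<CategoryDto> Categories; }
        class CategoryDto { public string Name; public List<AuthorDto> Authors; }

    Calling Map(author, new Dictionary<object, object>()) maps a whole graph once; the same dictionary must be reused across the entire traversal, and it also handles the self-join Safe case, since a Safe reachable from itself just resolves to its own DTO.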

    Read the article

  • Upload data in database

    - by rajson
    Hi friends, I wish to store a user's image and résumé in the database. I am using a MySQL database and PHP 5. I want to know which data type I should set for the field, and how I can set the maximum size for the uploaded data.

    Read the article

  • How does Core Data work

    - by Jason
    I've tried to gather information on how Core Data works, but can someone give me a clear explanation of all the pieces required? For instance the managed object context, fetched results controller, managed object model, persistent store coordinator... perhaps all the steps involved in getting at data. I'm also unclear about the SQLite file: how do we load the data into Core Data once we have created our entities, etc.? Thanks

    Read the article

  • Combination of Operating Mode and Commit Strategy

    - by Kevin Yang
    If you want to populate a source into multiple targets, you may also want to ensure that every row from the source affects all targets uniformly (or separately). Consider the example mapping below. Suppose a row from SOURCE causes different changes in multiple targets (TARGET_1, TARGET_2 and TARGET_3): for example, it can be successfully inserted into TARGET_1 and TARGET_3 but fails to be inserted into TARGET_2, and the current mapping property TLO (target load order) is "TARGET_1 -> TARGET_2 -> TARGET_3". What should Oracle Warehouse Builder do in order to commit the appropriate data to all affected targets at the same time? If it doesn't behave as you intended, the data could become inaccurate and possibly unusable.

    [Figure: Example Mapping]

    In OWB, we can use Mapping Configuration Commit Strategies and Operating Modes together to achieve this kind of requirement. Below we explore the combinations of these two features and how they affect the results in the target tables. Before going to the example, let's review some of the terms we will be using (details can be found in the white paper Oracle Warehouse Builder Data Modeling, ETL, and Data Quality Guide 11g Release 2).

    Operating Modes:

    - Set-Based Mode: Warehouse Builder generates a single SQL statement that processes all data and performs all operations.
    - Row-Based Mode: Warehouse Builder generates statements that process data row by row. The select statement is in a SQL cursor; all subsequent statements are PL/SQL.
    - Row-Based (Target Only) Mode: Warehouse Builder generates a cursor select statement and attempts to include as many operations as possible in the cursor. For each target, Warehouse Builder inserts each row into the target separately.

    Commit Strategies:

    - Automatic: Warehouse Builder loads and then automatically commits data based on the mapping design. If the mapping has multiple targets, Warehouse Builder commits and rolls back each target separately and independently of other targets. Use the automatic commit when the consequences of multiple targets being loaded unequally are not great or are irrelevant.
    - Automatic correlated: a specialized type of automatic commit that applies to PL/SQL mappings with multiple targets only. Warehouse Builder considers all targets collectively and commits or rolls back data uniformly across all targets. Use the correlated commit when it is important to ensure that every row in the source affects all affected targets uniformly.
    - Manual: select manual commit control for PL/SQL mappings when you want to interject complex business logic, perform validations, or run other mappings before committing data.

    Combination of the commit strategy and operating mode

    To understand the effects of each combination of operating mode and commit strategy, I'll illustrate using the example mapping above. First we insert 100 rows into the SOURCE table, making sure that the 99th row and the 100th row have the same ID value. Then we create a unique key constraint on the ID column of table TARGET_2. So when the example mapping runs, OWB tries to load all 100 rows into each of the targets, but it should fail to load the 100th row into TARGET_2, because that row violates the unique key constraint of TARGET_2.

    With different combinations of commit strategy and operating mode, here are the results:

    ¦ Set-based / Correlated Commit: A single error anywhere in the mapping triggers the rollback of all data. OWB encounters the error inserting into TARGET_2, reports an error for the table, and does not load the row. OWB rolls back all the rows inserted into TARGET_1 and does not attempt to load rows into TARGET_3. No rows are added to any of the target tables.

    ¦ Row-based / Correlated Commit: OWB evaluates each row separately and loads it into all three targets. Loading continues this way until OWB encounters an error loading the 100th row into TARGET_2. OWB reports the error and does not load the row. It rolls back the 100th row previously inserted into TARGET_1 and does not attempt to load row 100 into TARGET_3. Then, if there are remaining rows, OWB continues loading them, resuming with TARGET_1. The mapping completes with 99 rows inserted into each target.

    ¦ Set-based / Automatic Commit: When OWB encounters the error inserting into TARGET_2, it does not load any rows into that table and reports an error for it. It does, however, continue to insert rows into TARGET_3, and it does not roll back the rows previously inserted into TARGET_1. The mapping completes with one error message for TARGET_2, no rows inserted into TARGET_2, and 100 rows inserted into each of TARGET_1 and TARGET_3.

    ¦ Row-based / Automatic Commit: OWB evaluates each row separately for loading into the targets. Loading continues this way until OWB encounters an error loading row 100 into TARGET_2 and reports the error. OWB does not roll back row 100 from TARGET_1 and does insert it into TARGET_3. If there are remaining rows, it continues to load them. The mapping completes with 99 rows inserted into TARGET_2 and 100 rows inserted into each of the other targets.

    Notes: Automatic correlated commit is not applicable to row-based (target only) mode. If you design a mapping with the row-based (target only) and correlated commit combination, OWB runs the mapping but does not perform the correlated commit. In set-based mode, correlated commit may impact the size of your rollback segments; space for rollback segments may be a concern when you merge data (insert/update or update/insert). Correlated commit operates transparently with PL/SQL bulk processing code. The correlated commit strategy is not available for mappings run in any mode that are configured for Partition Exchange Loading or that include a Queue, Match Merge, or Table Function operator.

    If you want to practice in your own environment, you can follow these steps:

    1. Import the MDL file: commit_operating_mode.mdl
    2. Fix the location for the Oracle module ORCL and deploy all tables under it.
    3. Insert sample records into the SOURCE table using the PL/SQL below; note that the final insert repeats ID 99, so the 100th row is the one that violates the unique key on TARGET_2:

        begin
            for i in 1..99
            loop
                insert into source values(i, 'col_'||i);
            end loop;
            insert into source values(99, 'col_99');
        end;

    4. Configure MAPPING_1 to any combination of operating mode and commit strategy you want to test, and make sure the TLO feature of the mapping is enabled.
    5. Deploy mapping MAPPING_1.
    6. Run the mapping and check the result.

    Read the article

  • Enterprise Data Quality - New and Improved on Oracle Technology Network

    - by Mala Narasimharajan
    Looking for Enterprise Data Quality technical and developer resources for your projects? Wondering where the best place is to find the latest documentation, downloads, and even code samples and libraries? Check out the new and improved Oracle Technology Network pages for Oracle Enterprise Data Quality. This section also features developer forums for EDQ and Master Data Management, so you can connect with other technical professionals who have posted questions or shared tips and tricks, and learn from them. Here are the links to bookmark:

    - Oracle Technology Network website
    - * NEW * Installation Guide for Enterprise Data Quality Address Verification
    - Enterprise Data Quality Forum

    For more information on Oracle's software offerings for data quality and master data management, visit:

    - http://www.oracle.com/us/products/applications/master-data-management/index.html
    - http://www.oracle.com/us/products/middleware/data-integration/enterprise-data-quality/overview/index.html

    Read the article

  • Blank Cacti Graphs

    - by tortib
    I'm running Ubuntu 14.04 LTS and I'm having an issue with Cacti 0.8.8b not displaying any data in graphs. The graphs are being created and I see files in /var/lib/cacti/rra. My crontab entry for root is the following:

        */1 * * * * sudo -u www-data php -q /usr/share/cacti/site/poller.php > /dev/null

    The output of ls -la /var/lib/cacti/rra is the following:

        total 1008
        drwxrwx--- 2 www-data www-data  4096 Aug 20 19:27 .
        drwxr-xr-x 3 www-data www-data  4096 Aug 17 01:41 ..
        -rw-r--r-- 1 www-data www-data 47992 Aug 20 19:23 tortib_com_cpu_nice_34.rrd
        -rw-r--r-- 1 www-data www-data 47992 Aug 20 19:24 tortib_com_cpu_system_35.rrd
        -rw-rw-r-- 1 www-data www-data 47992 Aug 20 19:25 tortib_com_cpu_user_36.rrd
        -rw-r--r-- 1 www-data www-data 94816 Aug 20 19:27 tortib_com_hdd_used_43.rrd
        -rw-r--r-- 1 www-data www-data 94816 Aug 20 19:23 tortib_com_hdd_used_44.rrd
        -rw-r--r-- 1 www-data www-data 47992 Aug 20 19:27 tortib_com_load_15min_38.rrd
        -rw-r--r-- 1 www-data www-data 47992 Aug 20 19:26 tortib_com_load_1min_37.rrd
        -rw-r--r-- 1 www-data www-data 47992 Aug 20 19:23 tortib_com_load_5min_39.rrd
        -rw-r--r-- 1 www-data www-data 47992 Aug 20 19:24 tortib_com_mem_buffers_40.rrd
        -rw-rw-r-- 1 www-data www-data 47992 Aug 20 19:25 tortib_com_mem_cache_41.rrd
        -rw-r--r-- 1 www-data www-data 47992 Aug 20 19:26 tortib_com_mem_free_42.rrd
        -rw-r--r-- 1 www-data www-data 94816 Aug 20 19:24 tortib_com_traffic_in_45.rrd
        -rw-rw-r-- 1 www-data www-data 94816 Aug 20 19:25 tortib_com_traffic_in_46.rrd
        -rw-r--r-- 1 www-data www-data 94816 Aug 20 19:26 tortib_com_traffic_in_47.rrd
        -rw-r--r-- 1 www-data www-data 47992 Aug 20 19:27 tortib_com_users_48.rrd

    I tried to run the poller as root from the command line, but it doesn't output anything useful, nor does it graph any data. The device in Cacti shows that it's able to query SNMP and that ping is alive, yet the graphs are still empty. snmpwalk 127.0.0.1 -v2c -c public works as it should; it walks all MIBs. I'm quite perplexed as to why this isn't working any longer. It was graphing data (although only intermittently), and then it just stopped. Thank you for reading this problem and helping.

    Read the article

  • Saving game data to server [on hold]

    - by Eugene Lim
    What's the best method to save the player's data to the server?

    Method to store the game saves: which of the following should I use?

    - a database (e.g. MySQL), storing the game data as blobs
    - the server hard disk, storing the saved game data as binary files

    Method to send saved game data to the server: which should I use?

    - socket.io / WebSockets
    - a web-based scripting language that receives the game data as binary, for example a PHP script that handles the binary data and saves it to a file

    Meta-data: I read that some games store saved-game meta-data in database structures. What kind of meta-data is useful to store?

    Read the article

  • Web services, J2EE, Spring, DB integration project ideas - maybe data mining related?

    - by saral jain
    I am a graduate Computer Science student (data mining and machine learning) with good exposure to core Java (3 years). I have read up on a bunch of topics: design patterns; J2EE; web services (SOAP and REST); Spring and Hibernate; Java concurrency, including advanced features like tasks and executors. I would now like to do a project combining this stuff, over my free time of course, to get a better understanding of these technologies and to build an end-to-end piece of software (learning the best design principles, plus SVN and Maven). Any good project ideas would be really appreciated. I just want to build this to learn, so I don't mind re-inventing the wheel. Also, anything related to data mining would be an added bonus, as it fits with my research, but it is absolutely not necessary, since this project is more about learning to do large-scale software development.

    Read the article

  • Closing the Gap: 2012 IOUG Enterprise Data Security Survey

    - by Troy Kitch
    The new survey from the Independent Oracle Users Group (IOUG), titled "Closing the Security Gap: 2012 IOUG Enterprise Data Security Survey," uncovers some interesting trends in IT security among IOUG members and offers recommendations for securing data stored in enterprise databases. "Despite growing threats and enterprise data security risks, organizations that implement appropriate detective, preventive, and administrative safeguards are seeing significant results," finds the report's author, Joseph McKendrick, analyst, Unisphere Research. Produced by Unisphere Research and underwritten by Oracle, the report is based on responses from 350 IOUG members representing a variety of job roles, organization sizes, and industry verticals.

    Key findings include:

    - Corporate budgets increase, but trail. Though corporate data security budgets are increasing this year, they still have room to grow to reach the previous year's spending. Additionally, more than half of respondents say their organizations still do not have, or are unaware of, data security plans to help address contingencies as they arise.
    - Danger of unauthorized access. Less than a third of respondents encrypt data that is either stored or in motion, and at the same time, more than three-fifths say they send actual copies of enterprise production data to other sites inside and outside the enterprise.
    - Privileged user misuse. Only about a third of respondents say they are able to prevent privileged users from abusing data, and most do not have, or are not aware of, ways to prevent access to sensitive data using spreadsheets or other ad hoc tools.
    - Lack of consistent auditing. A majority of respondents actively collect native database audits, but there has not been an appreciable increase in the implementation of automated tools for comprehensive auditing and reporting across databases in the enterprise.

    IOUG Recommendations: The report's author finds that securing data requires not just the ability to monitor and detect suspicious activity, but also to prevent the activity in the first place. To achieve this comprehensive approach, the report recommends the following:

    - Apply an enterprise-wide security strategy. Database security requires multiple layers of defense that include a combination of preventive, detective, and administrative data security controls.
    - Get business buy-in and support. Data security only works if it is backed by executive support. The business needs to help determine what protection levels should be attached to data stored in enterprise databases.
    - Provide training and education. Often, business users are not familiar with the risks associated with data security. Beyond IT solutions, what is needed is a well-engaged and knowledgeable organization to help make security a reality.

    Read the IOUG Data Security Survey Now.

    Read the article

  • Video Presentation and Demo of Oracle Advanced Analytics & Data Mining

    - by Mike.Hallett(at)Oracle-BI&EPM
    For a video presentation and demonstration of Oracle Advanced Analytics & Data Mining, click here. (This plays a large MP4 file in a browser; access is from Google Docs, and it works best with Google Chrome.) This one-hour session focuses primarily on the Oracle Data Mining component of the Oracle Advanced Analytics Option, along with Oracle R Enterprise. It is tied to the Oracle SQL Developer Days virtual and onsite events and is presented by Oracle's Director for Advanced Analytics, Charlie Berger, covering:

    - Big Data + Big Data Analytics
    - Competing on analytics & the value proposition
    - What is data mining? Typical use cases
    - Oracle Data Mining: high-performance, in-database, SQL-based data mining functions
    - Exadata "smart scan" scoring
    - Oracle Data Miner GUI (an extension that ships with SQL Developer)
    - Oracle Business Intelligence EE + Oracle Data Mining results/predictions in dashboards
    - Applications "powered by Oracle Data Mining" for factory-installed predictive analytics methodologies
    - Oracle R Enterprise

    Please contact [email protected] should you have any questions.

    Read the article
