Search Results

Search found 91220 results on 3649 pages for 'data type equivalent'.


  • Maximize Performance and Availability with Oracle Data Integration

    - by Tanu Sood
    Alert: Oracle is hosting the 12c Launch Webcast for Oracle Data Integration and Oracle GoldenGate on Tuesday, November 12 (tomorrow) to discuss the new capabilities in detail and share customer perspectives. Hear directly from customer experts and executives from SolarWorld Industries America, British Telecom and Rittman Mead, and get your questions answered live by product experts. Register for this complimentary webcast today and join in the discussion tomorrow. Author: Irem Radzik, Senior Principal Product Director, Oracle. Organizations that want to use IT as a strategic point of differentiation prefer Oracle's complete application offering to drive better business performance and optimize their IT investments. These enterprise applications are at the center of business operations and contain critical data that needs to be accessed continuously, as well as analyzed and acted upon in a timely manner. These systems also need to operate with high performance and availability, which means analytical functions should not degrade application performance, and even system maintenance and upgrades should not interrupt availability. Oracle's data integration products, Oracle Data Integrator, Oracle GoldenGate, and Oracle Enterprise Data Quality, provide the core foundation for bringing data from various business-critical systems together to gain a broader, unified view. As a more advanced offering than 3rd-party products, Oracle's data integration products facilitate real-time reporting for Oracle Applications without impacting application performance, and provide the ability to upgrade and maintain the system without taking downtime. Oracle GoldenGate is certified for Oracle Applications, including E-Business Suite, Siebel CRM, PeopleSoft, and JD Edwards, for moving transactional data in real time to a dedicated operational reporting environment. This solution allows application users to offload resource-heavy queries to the reporting instance(s), reducing CPU utilization, improving OLTP performance, and extending the lifetime of existing IT assets. In addition, having a dedicated reporting instance with up-to-the-second transactional data allows optimizing the reporting environment and even decreasing costs, as GoldenGate can move only the required data from expensive mainframe environments to cost-efficient open system platforms. With its real-time data replication capabilities, GoldenGate is also certified to enable application upgrades and database/hardware/OS migrations without impacting business operations. GoldenGate is certified for Siebel CRM, Communications Billing and Revenue Management, and JD Edwards for supporting zero-downtime upgrades to the latest application version. GoldenGate synchronizes a parallel, upgraded system with the old version in real time, thus enabling continuous operations during the process.
    Oracle GoldenGate is also certified for minimal-downtime database migrations for Oracle E-Business Suite and other key applications. GoldenGate's solution also minimizes risk by offering a failback option after the switchover to the new environment. Furthermore, Oracle GoldenGate's bidirectional active-active data replication is certified for Oracle ATG Web Commerce to enable geographic load balancing and high availability for ATG customers. For enabling better business insight, Oracle Data Integration products power Oracle BI Applications with high-performance bulk and real-time data integration. Oracle Data Integrator (ODI) is embedded in Oracle BI Applications version 11.1.1.7.1 and helps to integrate data end-to-end across the full BI Applications architecture, supporting capabilities such as data lineage, which helps business users trace report data back to its sources. ODI is integrated with Oracle GoldenGate and provides Oracle BI Applications customers the option to use real-time transactional data in analytics, and to do so non-intrusively. By using Oracle GoldenGate with the latest release of Oracle BI Applications, organizations not only leverage fresh data in analytics, but also eliminate the need for an ETL batch window and minimize the impact on OLTP systems. You can learn more about the latest 12c release of the Oracle Data Integration products in our upcoming launch webcast and access the application-specific free resources in the new Data Integration for Oracle Applications Resource Center.

    Read the article

  • Data Source Security Part 2

    - by Steve Felts
    In Part 1, I introduced the default security behavior and listed the various options available to change that behavior.  One of the key topics to understand is the difference between directly using database user and password values versus mapping from WLS user and password to the associated database values.   The direct use of database credentials is relatively new to WLS, based on customer feedback.  Some of the trade-offs are covered in this article. Credential Mapping vs. Database Credentials Each WLS data source has a credential map that is a mechanism used to map a key, in this case a WLS user, to security credentials (user and password).  By default, when a user and password are specified when getting a connection, they are treated as credentials for a WLS user, validated, and are converted to a database user and password using a credential map associated with the data source.  If a matching entry is not found in the credential map for the data source, then the user and password associated with the data source definition are used.  Because of this defaulting mechanism, you should be careful what permissions are granted to the default user.  Alternatively, you can define an invalid default user to ensure that no one can accidentally get through (in this case, you would need to set the initial capacity for the pool to zero so that the pool is populated only by valid users). To create an entry in the credential map: 1) First create a WLS user.  In the administration console, go to Security realms, select your realm (e.g., myrealm), select Users, and select New.  2) Second, create the mapping.  In the administration console, go to Services, select Data sources, select your data source name, select Security, select Credentials, and select New.  See http://docs.oracle.com/cd/E24329_01/apirefs.1211/e24401/taskhelp/jdbc/jdbc_datasources/ConfigureCredentialMappingForADataSource.html for more information. The advantages of using the credential mapping are that: 1) You don’t hard-code the database user/password into a program or need to prompt for it in addition to the WLS user/password and 2) It provides a layer of abstraction between WLS security and database settings such that many WLS identities can be mapped to a smaller set of DB identities, thereby only requiring middle-tier configuration updates when WLS users are added/removed. You can cut down the number of users that have access to a data source to reduce the user maintenance overhead.  For example, suppose that a servlet has the one pre-defined, special WLS user/password for data source access, hard-wired in its code in a getConnection(user, password) call.  Every WebLogic user can reap the specific DBMS access coded into the servlet, but none has to have general access to the data source.  For instance, there may be a ‘Sales’ DBMS which needs to be protected from unauthorized eyes, but it contains some day-to-day data that everyone needs. The Sales data source is configured with restricted access and a servlet is built that hard-wires the specific data source access credentials in its connection request.  It uses that connection to deliver only the generally needed day-to-day information to any caller. The servlet cannot reveal any other data, and no WebLogic user can get any other access to the data source.  This is the approach that many large applications take and is the reasoning behind the default mapping behavior in WLS. 
The disadvantages of using the credential map are that: 1) It is difficult to manage (create, update, delete) with a large number of users; it is possible to use WLST scripts or a custom JMX client utility to manage credential map entries. 2) You can’t share a credential map between data sources so they must be duplicated. Some applications prefer not to use the credential map.  Instead, the credentials passed to getConnection(user, password) should be treated as database credentials and used to authenticate with the database for the connection, avoiding going through the credential map.  This is enabled by setting the “use-database-credentials” to true.  See http://docs.oracle.com/cd/E24329_01/apirefs.1211/e24401/taskhelp/jdbc/jdbc_datasources/ConfigureOracleParameters.html "Configure Oracle parameters" in Oracle WebLogic Server Administration Console Help. Use Database Credentials is not currently supported for Multi Data Source configurations.  When enabled, it turns off credential mapping on Generic and Active GridLink data sources for the following attributes: 1. identity-based-connection-pooling-enabled (this interaction is available by patch in 10.3.6.0). 2. oracle-proxy-session (this interaction is first available in 10.3.6.0). 3. set client identifier (this interaction is available by patch in 10.3.6.0).  Note that in the data source schema, the set client identifier feature is poorly named “credential-mapping-enabled”.  The documentation and the console refer to it as Set Client Identifier. To review the behavior of credential mapping and using database credentials: - If using the credential map, there needs to be a mapping for each WLS user to database user for those users that will have access to the database; otherwise the default user for the data source will be used.  If you always specify a user/password when getting a connection, you only need credential map entries for those specific users. - If using database credentials without specifying a user/password, the default user and password in the data source descriptor are always used.  If you specify a user/password when getting a connection, that user will be used for the credentials.  WLS users are not involved at all in the data source connection process.

    Read the article

  • How to infer the type of a derived class in base class?

    - by enzi
    I want to create a method that allows me to change arbitrary properties of classes that derive from my base class. The result should look like this: SetPropertyValue("size.height", 50); – where size is a property of my derived class and height is a property of size. I'm almost done with my implementation, but there's one final obstacle I want to solve before moving on. To describe it, I first have to explain my implementation a bit: Properties that can be modified are decorated with an attribute. There's a method in my base class that searches for all derived classes and their decorated properties. For each property I generate a "property modifier", a class that contains 2 delegates: one to set and one to get the value of the property. Property modifiers are stored in a dictionary, with the name of the property as key. In my base class, there is another dictionary that contains all property-modifier dictionaries, with the Type of the respective class as key. What the SetPropertyValue method does is this: Get the correct property-modifier dictionary, using the concrete type of the derived class (<- yet to solve). Get the property modifier of the property to change (e.g. of the property size). Use the get or set delegate to modify the property's value. Some example code to clarify further: private static Dictionary<RuntimeTypeHandle, object> EditableTypes; //property-modifier-dictionary protected void SetPropertyValue<T>(EditablePropertyMap<T> map, string property, object value) { var modifier = map[property]; // get the property modifier modifier.Set((T)this, value); // use the set delegate (encapsulated in a method) } In the above code, T is the type of the actual (derived) class. I need this type for the get/set delegates. The problem is how to get the EditablePropertyMap<T> when I don't know what T is. My current (ugly) solution is to pass the map in an overridden virtual method in the derived class: public override void SetPropertyValue(string property, object value) { base.SetPropertyValue((EditablePropertyMap<ExampleType>)EditableTypes[typeof(ExampleType)], property, value); } What this does is: get the correct dictionary containing the property modifiers of this class using the class's type, cast it to the appropriate type and pass it to the SetPropertyValue method. I want to get rid of the SetPropertyValue override in my derived classes (since there are a lot of derived classes), but I don't know yet how to accomplish that. I cannot just make a virtual GetEditablePropertyMap<T> method because I cannot infer a concrete type for T then. I also cannot access my dictionary directly with a type and retrieve an EditablePropertyMap<T> from it, because I cannot cast to it from object in the base class, since again I do not know T. I found some neat tricks to infer types (e.g. by adding a dummy T parameter), but cannot apply them to my specific problem. I'd highly appreciate any suggestions you may have.

    Read the article

  • Server hang - data loss on reboot, post mortem analysis

    - by rovangju
    A development server I'm responsible for (ext3 on RAID 5 w/ Debian Squeeze) froze up over the weekend and I was forced to reset it: it was unresponsive from KVM/physical keyboard access, no eth devices were responding, etc. Not even the backup process ran (figures, the one time I don't check for confirmation). So after the reset, it turns out that every trace of disk I/O activity that should have happened for a period of ~24h is completely gone. The log files have a big gap in the dates and times. It is as if the writes were never committed to disk and no processes had run. Luckily it was a weekend and nothing of value would have been lost, and I don't suspect a hack. What can I do in a post mortem of this event to prevent it from ever happening again? I've seen this happen before on a completely different machine running FreeBSD. I am rounding up the disk checking tools right now - but there must be more going on! Mount options: /dev/sda1 on / type ext3 (rw,errors=remount-ro) Kernel: Linux dev 2.6.32-5-686-bigmem Disk/Inodes: 13%/3%

    Read the article

  • Need deleted data back from Pen Drive, Please help

    - by Manav Sharma
    All, I am using a pen drive to transfer files from one system to another. I think I deleted some files from the pen drive that I now need back. Also, there is a chance that I might have overwritten the memory where the deleted data existed. Is there any software that can do something about that? I understand that, logically, whatever is overwritten in memory cannot be fetched back, but I would still like to trust the advances in technology. Any help is appreciated. Thanks

    Read the article

  • best data+partition recovery software

    - by Pennf0lio
    I accidentally formatted my Drive D, which contained all my backups and documents. I had separated my files onto Drive D hoping I would not harm them. When I used Acronis Recovery to install a new OS (with some pre-installed applications) to my HDD, I didn't realize I had also formatted/erased my Drive D. Now my Drive D is unpartitioned. I am really in deep trouble and need some urgent help. Please recommend software that can at least restore my old drive with the files it contained. I'm assuming most of you think this is a duplicate of some old questions here, but I'm not looking for plain file recovery; I need to recover the whole partition with the files in it. I used to use "Recuva", but it only recovers files, not whole folders with the files in them. Please advise. Thank you!

    Read the article

  • Restoring MBR, partition table, and boot sector of memory card without data loss ("USBC")

    - by Synetech
    Abstract I have a FAT32 memory card that when inserted into a computer causes Windows to prompt to format it. The card is definitely not supposed to be blank and has a bunch of files on it. Symptoms Using a hex-editor/disk-viewer, I examined the card and found that several sectors/clusters have been overwritten with something that has a signature of USBC at the start of the sector. Specifically, the master boot record (and partition table) is gone (hence Windows thinking the card is blank and needing to be formatted), as are the boot sectors (they have the USBC signature and a volume label of NO NAME and partition type of FAT32). Fortunately, it looks like both copies of the FAT are almost entirely intact (a few FAT entries at the start of a cluster here and there seem to be overwritten by USBC). The root directory is also nearly intact—I can see the volume label entry and subdirectory listings, but one sector is overwritten. (There are no more instances of USBC after the last one in the FAT2.) Hypothesis These observations seem to indicate some sort of virus that erases a few key filesystem structures, and then overwrites a few extra sectors here and there. Googling it seems to corroborate the idea of a virus, except that others report a file called USBC which does not apply here, and in fact, could not be possible since there is no filesystem to even see files. I cannot find any information about a virus with these symptoms, nor a removal tool. (I can't help but wonder if it is actually due to an autorun virus prevention tool.) Question I can likely fix the FAT corruption since they are mostly contiguous chains and maybe even the lost sector of the root directory, but does anyone know of a convenient way to restore or (re)create the MBR/partition table and boot sectors (without formatting or overwriting the data)?

    Read the article

  • Data Validation of a Comma Delimited List

    - by Brad
    I need a simple way of taking a comma-separated list in a cell and providing a drop-down box to select one of the values. For example, the cell could contain: 24, 32, 40, 48, 56, 64. And in a further cell, using Data Validation, I want to provide a drop-down list to select ONE of those values. I need to do this without VBA or macros, please. Apologies, I want this to work with Excel 2010 and later. I have been playing around with counting the number of commas in the list and then trying to split it into a number of rows of single numbers, etc., with no joy yet.

    Read the article

  • Recovering data from a Silicon Image SiI3114 RAID

    - by Isaac Truett
    I have a set of 3 disks in RAID 5, originally created with a Silicon Image SiI3114 on-board RAID controller. The old motherboard is dead. The new motherboard (which has a different RAID controller) won't boot from the array. I have no reason to believe that the drives are damaged or corrupted. I'm 99% sure that the problem is that the new controller isn't compatible or I'm not setting it up properly. Is it possible to recover data from the drives using a different controller? Would a PCI card like this one allow me to read from the array again?

    Read the article

  • Data inaccessible from WHS drive

    - by Eakraly
    Hi all, I had a Windows Home Server machine that crashed. Before doing anything, I decided to get the data off it, so I took the hard drive and connected it to a Windows 7 PC. What I get is that I cannot access almost any file! I do see the directory structure, and I can open small files like .txt and .ini, but bigger files like .iso and video are a no-go. The same goes for Ubuntu and OS X: I can see the files and even copy them, but they are corrupted. Any ideas as to what the problem is?

    Read the article

  • SQLAuthority News – Download SQL Azure Labs Codename “Data Explorer” Client

    - by pinaldave
    Microsoft SQL Azure Labs has recently released the Data Explorer client. I had been looking forward to a visualization tool for quite a while and I am delighted to see this tool. I will be trying it out in the coming week and will post my experience here. I have listed a few of the resources related to Data Explorer at the end. Please let me know if I have missed any and I will add them. With "Data Explorer" you can: Identify the data you care about from the sources you work with (e.g. Excel spreadsheets, files, SQL Server databases). Discover relevant data and services via automatic recommendations from the Windows Azure Marketplace. Enrich your data by combining it and visualizing the results. Collaborate with your colleagues to refine the data. Publish the results to share them with others or power solutions. The Data Explorer client package contains the Data Explorer workspace as well as an Office plugin that integrates Data Explorer into Excel. Resources: Download Data Explorer, Data Explorer Blog, Desktop Client Video of Contoso Bikes and Frozen Yogurt (Data Explorer). Please note that this is not the final release of the product. Please do not attempt this on a production server. Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • Big Data for Retail

    - by David Dorf
    Right up there with mobile, social, and cloud is the term "big data," which seems to be popping up a lot in the press these days. Companies like Google, Yahoo, and Facebook have popularized a new class of data technologies meant to solve the problem of processing large amounts of data quickly. I first mentioned this in a posting back in March 2009. Put simply, big data implies datasets so large they can't normally be processed using a standard transactional database. The term "NoSQL" is often used in this context as well. Actually, using parallel processing within the Oracle database combined with Exadata can achieve impressive results. Look for more from Oracle at OpenWorld, as hinted by Jean-Pierre Dijcks. McKinsey recently released a report on big data in which retail was specifically mentioned as an industry that can benefit from the new technologies. I won't rehash that report because my friend Rama already did such a good job in his posting, Impact of "Big Data" on Retail. The presentation below does a pretty good job of framing the problem, although it doesn't really get into the available technologies (e.g. Exadata, Hadoop, Cassandra, etc.) and isn't retail-specific. Determine the Right Analytic Database: A Survey of New Data Technologies. So when a retailer asks me about big data, here's what I say: Big data refers to a set of technologies for processing large volumes of structured and unstructured data. Imagine collecting everything uttered by your customers on Facebook and Twitter and combining it with all the data you can find about the products you sell (e.g. reviews, images, demonstration videos), including competitive data. Assuming you could process all that data, you could then personalize offers to specific customers based on their tastes, ensure prices are competitive, and implement better local assortments. It's really not that far off.

    Read the article

  • Data and Secularism

    - by kaleidoscope
    Ever since we've been using data we've been religious: religious about the way we represent it and equally religious about the way we access it. Be it plain old SQL, DAO, ADO, or ADO.NET, and I am just referring to religions in the MSFT world; a peek outside and I'd need a separate book to list out the data faiths. Various application areas in networked computing are converging under the HTTP umbrella, with a plausible transition to purist HTTP and in turn REST, fuelled by the Web 2.0 storm. It was time the data access faiths also gave up the religious silos wrapped around our long-worshipped data publishing and access methods. OData is the secular solution we have at hand today. It is an open protocol for sharing data. It can be exposed via REST. It is Open as in the Microsoft Open Specification Promise, which allows virtually everyone to build Data Services for any runtime. OData is one of the key standards for data publishing/subscribing on Microsoft Codename Dallas. For us .Netters, OData data sources can be exposed/consumed via WCF Data Services, and the process is very simple, elegant and intuitive. Applications exposing OData services: SharePoint 2010, IBM WebSphere, Microsoft SQL Azure, Windows Azure Table Storage, SQL Server Reporting Services. Live OData services: Netflix, Open Science Data Initiative, Open Government Data Initiatives, the Northwind database exposed as an OData service, and many others. Some may prefer to call it commoditization of data, unification of data access strategies or any other sweet name. I for one will stick to my secular definition. :)
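
    A rough sketch of what consuming one of those feeds looks like over plain HTTP, assuming the public Northwind demo service mentioned above is still reachable and that the Python requests package is installed (the entity set and field names used here are the standard Northwind ones):

        import requests

        SERVICE = "http://services.odata.org/Northwind/Northwind.svc"

        # Ask for the first five customers as JSON instead of the default AtomPub feed.
        resp = requests.get(SERVICE + "/Customers?$top=5&$format=json",
                            headers={"Accept": "application/json"})
        resp.raise_for_status()

        # Older OData services wrap results in a "d" envelope; newer ones use "value".
        payload = resp.json()
        data = payload.get("d", payload)
        rows = data.get("results", data.get("value", [])) if isinstance(data, dict) else data

        for row in rows:
            print(row.get("CustomerID"), "-", row.get("CompanyName"))

    The same standard query options ($top, $filter, $orderby and friends) work against any of the feeds listed above, which is really the point: one access idiom regardless of who publishes the data.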

    Read the article

  • SQL SERVER – Puzzle Involving NULL – Resolve – Error – Operand data type void type is invalid for sum operator

    - by pinaldave
    Today is Monday, so let us start this week with an interesting puzzle. Yesterday I had also posted a quick question here: SQL SERVER – T-SQL Scripts to Find Maximum between Two Numbers. Run the following code: SELECT SUM(data) FROM (SELECT NULL AS DATA) t It will throw the following error: Msg 8117, Level 16, State 1, Line 1 Operand data type void type is invalid for sum operator. I can easily fix it if I use the ISNULL function, as displayed below: SELECT SUM(data) FROM (SELECT ISNULL(NULL,0) AS DATA) t The above script will not throw an error. However, there is one more method by which this can be fixed. Can you come up with another method which will not generate the error? Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Read the article

  • How to enable SSIS as data source type on SQL Server Reporting Services SSRS 2008 R2

    When you create a data source in SSRS 2008 R2 (Nov CTP), you won't be able to get SSIS listed as a data source type. Therefore applications that are already using it as a data source, or applications that require it as a data source, get stuck. Let's learn how to enable SSIS and get it listed back as a data source in SSRS 2008 R2.

    Read the article

  • How to migrate user settings and data to new machine?

    - by torbengb
    I'm new to Ubuntu and recently started using it on my PC. I'm going to replace that PC with a new machine, and I want to transfer my data and settings to it. What aspects should I consider? Obviously I want to move my data over, but what things am I missing if I only copy the entire home folder? This is a home PC (not corporate), so user rights and other security issues are not a concern, except that the files should be accessible on the new machine! Please take into account that the new machine is a nettop that doesn't have an optical drive and doesn't allow me to hook the old SATA disk into it, so any data transfer must be handled via the home network (I can have both the old and the new machine turned on and connected to the home LAN), and I have a USB thumb drive with limited capacity (2GB). This sounds like it might limit the general applicability, but it would in fact make the question more general. I'll make this a wiki topic because there could be several "right" answers. Update: Or so I thought. I don't see a choice for that.
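
    A rough sketch of the simplest approach: packing the whole home folder (including the hidden dot-files and dot-directories that hold most application settings) into one archive that can then be pulled across the LAN. The excluded directories are just examples of bulky caches that are rarely worth moving:

        import os
        import tarfile

        HOME = os.path.expanduser("~")
        SKIP = (".cache", ".thumbnails", ".local/share/Trash")   # bulky, safe to leave behind

        def keep(tarinfo):
            """Drop cache-like paths, keep everything else (dot-files included)."""
            rel = os.path.relpath(tarinfo.name, os.path.basename(HOME))
            if any(rel == s or rel.startswith(s + "/") for s in SKIP):
                return None
            return tarinfo

        with tarfile.open("/tmp/home-backup.tar.gz", "w:gz") as tar:
            # arcname keeps the stored paths relative to the home directory name
            tar.add(HOME, arcname=os.path.basename(HOME), filter=keep)

    Settings that live outside the home folder (installed packages, /etc tweaks, crontabs) are the main things such a copy misses.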

    Read the article

  • Is it conceivable to have millions of lists of data in memory in Python?

    - by Codemonkey
    I have over the last 30 days been developing a Python application that utilizes a MySQL database of information (specifically about Norwegian addresses) to perform address validation and correction. The database contains approximately 2.1 million rows (43 columns) of data and occupies 640MB of disk space. I'm thinking about speed optimizations, and I've got to assume that when validating 10,000+ addresses, each validation running up to 20 queries to the database, networking is a speed bottleneck. I haven't done any measuring or timing yet, and I'm sure there are simpler ways of speed optimizing the application at the moment, but I just want to get the experts' opinions on how realistic it is to load this amount of data into a row-of-rows structure in Python. Also, would it even be any faster? Surely MySQL is optimized for looking up records among vast amounts of data, so how much help would it even be to remove the networking step? Can you imagine any other viable methods of removing the networking step? The location of the MySQL server will vary, as the application might well be run from a laptop at home or at the office, where the server would be local.
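
    A rough sketch of what such an in-memory structure could look like, assuming the MySQLdb driver and using stand-in table and column names for the real 43-column schema; the idea is to pull only the columns the validator needs and index them so each lookup is a dictionary access rather than a network round trip:

        import MySQLdb

        conn = MySQLdb.connect(host="localhost", user="app", passwd="secret", db="addresses")
        cur = conn.cursor()

        # Pull only the columns needed for matching to keep the memory footprint down.
        cur.execute("SELECT postal_code, street_name, house_number, city FROM address")

        # Index rows by (postal_code, street_name): each of the ~20 queries per
        # validation becomes an O(1) dictionary lookup.
        index = {}
        for postal_code, street_name, house_number, city in cur:
            index.setdefault((postal_code, street_name), []).append((house_number, city))

        cur.close()
        conn.close()

        def candidates(postal_code, street_name):
            """Return the candidate rows for one address, or an empty list."""
            return index.get((postal_code, street_name), [])

    Whether 2.1 million such tuples fit comfortably in RAM depends on how many columns are kept, so it is worth loading the set once and measuring before committing to the approach.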

    Read the article

  • MATLAB: What is an appropriate Data Structure for a Matrix with Random Variable Entries?

    - by user12707
    I'm working in an area related to simulation and trying to design a data structure that can include random variables within matrices. I am currently coding in MATLAB. To motivate this, let me say I have the following matrix: [a b; c d]. I want to find a data structure that will allow a, b, c, d to be either real numbers or random variables. As an example, let's say that a = 1, b = -1, c = 2, but let d be a normally distributed random variable with mean 20 and SD 40. The data structure that I have in mind will assign no value to d. However, I also want to be able to design a function that can take in the structure, simulate a uniform(0,1), obtain a value for d using an inverse CDF, and then spit out an actual matrix. I have several ideas on how to do this (all related to the MATLAB icdf function) but would like to know how more experienced programmers would do it. In this application, it's important that the structure is as "lean" as possible, since I will be working with very, very large matrices and memory will be an issue.

    Read the article

  • Migrating data from Plone to Liferay, or how could I retrieve information from Plone's Data.fs

    - by brandizzi
    Hello, all. I need to migrate data from a Plone-based portal to Liferay. Does anyone have some idea of how to do it? Anyway, I am trying to retrieve data from Data.fs and store it in a representation that is easier to work with, such as JSON. To do that, I need to know which objects I should get from Plone's Data.fs. I already got the Products.CMFPlone.Portal.PloneSite instance from the Data.fs, but I cannot get anything out of it. I would like to get the PloneSite instance and do something like this:

        >>> import ZODB
        >>> from ZODB import FileStorage, DB
        >>> path = r"C:\Arquivos de programas\Plone\var\filestorage\Data.fs"
        >>> storage = FileStorage.FileStorage(path)
        >>> db = DB(storage)
        >>> conn = db.open()
        >>> root = conn.root()
        >>> app = root['Application']
        >>> plone_site = app.getChildNodes()[13] # 13 would be index of PloneSite object
        >>> a = plone_site.get_articles()
        >>> for article in a:
        ...     print "Title:", article.title
        ...     print "Content:", article.content
        Title: <some title>
        Content: <some content>
        Title: <some title>
        Content: <some content>

    Of course, it does not need to be exactly like this. I just want some information about the structure of PloneSite and how to recover its data. Does anyone have some idea? Thank you in advance!
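
    Since a get_articles() method like the one wished for above does not exist on the site object, a rough sketch of how one might walk the site generically and dump what is found, assuming the standard Zope ObjectManager API (objectItems) is available on the objects stored in the Data.fs; portal_type and Title are the usual CMF/Plone attributes and are accessed defensively here:

        import json

        def walk(obj, path=""):
            """Recursively collect basic metadata for every child object."""
            entries = []
            for name, child in getattr(obj, "objectItems", lambda: [])():
                title = getattr(child, "Title", None)
                entries.append({
                    "path": path + "/" + name,
                    "type": getattr(child, "portal_type", child.__class__.__name__),
                    "title": title() if callable(title) else (title or ""),
                })
                entries.extend(walk(child, path + "/" + name))
            return entries

        print(json.dumps(walk(plone_site), indent=2))

    The resulting paths and portal_type values give a map of the site's structure, which makes it easier to decide which objects to export to JSON for the Liferay import.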

    Read the article

  • Using Mapping Models to migrate between Core Data Object Models

    - by westsider
    I have a fairly simple schema. Essentially, Run <-- Data (where a Run holds data, e.g., Temperature, sampled from some sort of sensor). Now, it seems that sensors can have more than one measurement (e.g., Temperature and Humidity). So, a single Run could have multiple data samples. Hence, Run <-- Sample and Sample <-- Data. (And for simplicity I am leaving Run <-- Data in place, for now.) If I create a new mapping model, then things generally work - except that no new Samples are created, and no relationships are established between Runs and Samples nor between Samples and Datas. I am trying to get a mapping model to migrate my data, but even the slightest change to the generated mapping model results in Cocoa error 134110. For example, if I take the "Sample" mapping (which has no Source) and set its Source to 'Run' (so that I can set Sample's inverse relationship 'run' appropriately), then the mapping changes its name to "RunToSample". There are two relationships handled in this mapping: data and run. The data property gets set automatically to FUNCTION($manager, "destinationInstancesForEntityMappingNamed:sourceInstances:" , "DataToData", $source.dataSet) Following this example, I set the run property to FUNCTION($manager, "destinationInstancesForEntityMappingNamed:sourceInstances:" , "RunToRun", $source) Similarly, I set the 'sample' property mapping in RunToRun to FUNCTION($manager, "destinationInstancesForEntityMappingNamed:sourceInstances:" , "RunToSample", $source) and the 'sample' property in DataToData to FUNCTION($manager, "destinationInstancesForEntityMappingNamed:sourceInstances:" , "RunToSample", $source.run) So, what, I wonder, is going wrong? I have tried various permutations, such as leaving the 'inverse' relationships unspecified, but I continue to get the same error (134110) regardless. I imagine that this is a lot easier than it seems and that I am missing some fundamental but minor piece. I have also tried subclassing NSEntityMigrationPolicy and overriding -createDestinationInstancesForSourceInstance:, but these efforts have met with much the same results. Thanks in advance for any pointers or (relevant :-) advice.

    Read the article

  • Static Data Structures on Embedded Devices (Android in particular)

    - by Mark
    I've started working on some Android applications and have a question regarding how people normally deal with situations where you have a static data set that is needed in memory as one of the standard Java collections or as an array. In my current specific case I have a spreadsheet with some pre-calculated data. It consists of ~100 rows and 3 columns: one column is a string, one column is a float, and one column is an integer. I need access to this data as an array in Java. It seems like I could: 1) Encode it in XML - this would be CPU-intensive to decode, in my experience. 2) Build it into an SQLite database - seems like a lot of overhead for static data I only need array-style access to in RAM. 3) Build it into a binary blob and read it in (never done this in Java; I miss void *). 4) Build a Python script to take the CSV version of my data and spit out a Java function that adds the values to my desired structure as hard-coded values. 5) Store a string array via Android's resource mechanism and compute the other 2 columns on application load. In my case the computation would require a lot of calls to Math.log, Math.pow and Math.floor, which I'd rather not have to do for load-time and battery-usage reasons. I mostly work on low-power embedded applications in C, and as such #4 is what I'm used to doing in these situations. It just seems like it should be far easier to gain access to static data structures in Java/Android. Perhaps I'm just being too battery-usage conscious, and in my single case I imagine the answer is that it doesn't matter much, but if every application took that stance it could begin to matter. What approaches do people usually take in this situation? Anything I missed?

    Read the article

  • Efficient data importing?

    - by Kevin
    We work with a lot of real estate, and while rearchitecting how the data is imported, I came across an interesting issue. Firstly, the way our system works (loosely speaking) is we run a ColdFusion process once a day that retrieves data provided by an IDX vendor via FTP. They push the data to us; whatever they send us is what we get. Over the years, this has proven to be rather unstable. I am rearchitecting it with PHP on the RETS standard, which uses SOAP methods of retrieving data, and which has already proven to be much better than what we had. When it comes to 'updating' existing data, my initial thought was to query only for data that was updated. There is a 'Modified' field that tells you when a listing was last updated, and the code I have will grab any listing updated within the last 6 hours (giving myself a window in case something goes wrong). However, I see a lot of real estate developers suggest creating constantly running 'batch' processes that walk through all listings regardless of their updated status. Is this the better way to do it? Or am I fine with just grabbing the data I know I need? It doesn't make a lot of sense to me to do more processing than necessary. Thoughts?
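
    A rough sketch of the incremental approach described above, in Python; fetch_modified_since() and save_listing() are hypothetical stand-ins for whatever the RETS client and the local store actually expose:

        from datetime import datetime, timedelta

        OVERLAP = timedelta(hours=6)   # safety window in case an earlier run failed

        def incremental_update(last_run, fetch_modified_since, save_listing):
            """Pull only listings whose Modified timestamp falls inside the window."""
            cutoff = last_run - OVERLAP
            seen = set()
            for listing in fetch_modified_since(cutoff):
                key = listing["ListingKey"]        # assumed unique listing id
                if key in seen:                    # the overlap window can resend rows
                    continue
                seen.add(key)
                save_listing(listing)              # insert-or-update, so replays are harmless
            return datetime.utcnow()               # becomes last_run for the next cycle

    A periodic full sweep can still be kept as an occasional reconciliation pass to catch deletions and anything the window missed, rather than being the main import path.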

    Read the article

  • Reading DATA from an OBJECT asp.net MVC C#

    - by kalyan
    Hi, I am new to MVC and I am stuck in a weird situation. I have to read the data from an object of type object; I have tried different ways and couldn't get a solution. Please help. IList<User> u = new UserRepository().Getuser(Name.ToUpper(), UserName.ToUpper(), UserCertNumber.ToUpper(), Date.ToUpper(), UserType.ToUpper(), Company.ToUpper(), PageNumber, Orderby, SearchALL.ToUpper(), PrintAllPages.ToUpper()); object[] users = new object[u.Count]; for (int i = 0; i < u.Count; i++) { users[i] = new { Id = u[i].UserId, Title = u[i].Title, FirstName = u[i].FirstName, LastName = u[i].LastName, Privileges = (from apps in u[i].UserPrivileges select new { PrivilegeId = apps.Privilege.PrivilegeId, PrivilegeName = apps.Privilege.Name, DeactiveDate = apps.DeactiveDate }), Status = (from status in u[i].UserStatus select new { StatusId = status.Status.StatusId, StatusName = status.Status.StatusName, DeactiveDate = status.DeactiveDate }), ActiveDate = u[i].ActiveDate, UserName = u[i].Email, UserCN = (from cert in u[i].UserCertificates select new { CertificateNumber = cert.CertificateNumber, DeactiveDate = cert.DeactiveDate }), Company = u[i].Company.Name }; } string x = ""; string y = ""; var report = users; foreach (var r in report) { x = r[0].....; } I want to assign the values from the report to something else, and I am not able to read the data from the report object. Please help. Thank you.

    Read the article

  • source of historical stock data

    - by rmeador
    I'm trying to make a stock market simulator (perhaps eventually growing into a predicting AI), but I'm having trouble finding data to use. I'm looking for a (hopefully free) source of historical stock market data. Ideally, it would be a very fine-grained (second or minute interval) data set with price and volume of every symbol on NASDAQ and NYSE (and perhaps others if I get adventurous). Does anyone know of a source for such info? I found this question which indicates Yahoo offers historical data in CSV format, but I've been unable to find out how to get it in a cursory examination of the site linked. I also don't like the idea of downloading the data piecemeal in CSV files... I imagine Yahoo would get upset and shut me off after the first few thousand requests. I also discovered another question that made me think I'd hit the jackpot, but unfortunately that OpenTick site seems to have closed its doors... too bad, since I think they were exactly what I wanted. I'd also be able to use data that's just open/close price and volume of every symbol every day, but I'd prefer all the data if I can get it. Any other suggestions?
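
    For the coarser fallback mentioned in the last sentence (daily open/close price and volume per symbol), a minimal sketch of loading a historical-quotes CSV once the file is already on disk, assuming ISO-style dates and a Date, Open, High, Low, Close, Volume header row; the filename is just an example:

        import csv
        from datetime import datetime

        def load_daily_bars(path):
            """Return a list of (date, open, close, volume) tuples, oldest first."""
            bars = []
            with open(path, newline="") as f:
                for row in csv.DictReader(f):
                    bars.append((
                        datetime.strptime(row["Date"], "%Y-%m-%d").date(),
                        float(row["Open"]),
                        float(row["Close"]),
                        int(row["Volume"]),
                    ))
            bars.sort(key=lambda bar: bar[0])   # sort defensively; some feeds arrive newest-first
            return bars

        bars = load_daily_bars("AAPL.csv")
        print(len(bars), "trading days loaded")

    Keeping one such list per symbol is usually enough to drive a simulator's daily replay loop, and the same structure can later be swapped for finer-grained bars if an intraday source turns up.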

    Read the article
