Search Results

Search found 60636 results on 2426 pages for 'data export'.


  • Data Virtualization: Federated and Hybrid

    - by Krishnamoorthy
    Data becomes useful when it can be leveraged at the right time. It is not only enterprise application stores that operate on data of large volume, velocity, and variety; mobile and social computing need to operate on that data as well. Replicating and transferring large swaths of data is one challenge in the field of data integration, but aggregating smaller chunks of data from a variety of sources presents an even more interesting one. Over the past few decades, technology trends have focused on best user experience, operating systems, high-performance computing, high-performance web sites, analysis of warehouse data, service-oriented architecture, social computing, cloud computing, and big data. Operating on 'dark data' will become mandatory in future technology trends, although no solution can turn dark data into useful data in a single day. Useful data can be characterized by contextual, personalized, and on-time delivery. In most cases, data from a single source does not complete the picture; data has to be combined and computed from various sources, where it may be captured as hybrid data, meaning a combination of structured and unstructured data. Since related data is often found across disparate sources, how effectively these sources are integrated determines how useful the data ultimately becomes. Technology trends in 2013 are expected to focus on big data and private cloud. Consumers are not merely interested in where data is located or how it is retrieved and computed; they are interested in how quickly the data can be leveraged. In many cases, data virtualization is the right solution, and it is expected to play a foundational role for SOA, cloud integration, and big data. The Oracle Data Integration portfolio includes a data virtualization product called ODSI (Oracle Data Service Integrator). Unlike other data virtualization solutions, ODSI can perform both read and write operations on federated/hybrid data (RDBMS, web services, delimited files, and XML). The ODSI engine is built on XQuery, so ODSI users can perform computations on data using either XQuery or SQL. Built-in data and query caching reduces latency in repetitive calls. Positioned correctly, ODSI can result in a highly scalable model, reducing spend on additional hardware infrastructure.

    Read the article

  • How can I make Maya export a mesh as double-sided?

    - by bobobobo
    I'm exporting from Maya 2009 to OBJ. The mesh I'm exporting has "Double Sided" checked in its Render Stats, but when the polygon is exported, only a single side is actually exported. What really needs to happen is that each double-sided polygon is exported as two polygons facing in opposite directions. I can do this manually, but is there a way to make the OBJ exporter do it for me?
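
    One workaround is to post-process the exported OBJ and emit a reversed copy of every face. The sketch below is a minimal, hypothetical Python example: the file names are placeholders, and any exported normals are left as-is.

        # Duplicate every face in an exported OBJ with reversed vertex order so
        # both sides render. Assumes simple "f v1 v2 v3 ..." face lines; vn
        # normals are not flipped. File names are placeholders.
        with open("mesh.obj") as src, open("mesh_doublesided.obj", "w") as dst:
            for line in src:
                dst.write(line)
                if line.startswith("f "):
                    tokens = line.split()[1:]            # keep v/vt/vn triplets intact
                    dst.write("f " + " ".join(reversed(tokens)) + "\n")  # back-facing copy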

    Read the article

  • Export with VB to Excel and update file

    - by Filipe Costa
    Hello. This is the code that I have to export data to Excel:

        Dim oExcel As Object
        Dim oBook As Object
        Dim oSheet As Object
        oExcel = CreateObject("Excel.Application")
        oBook = oExcel.Workbooks.Add
        oSheet = oBook.Worksheets(1)
        oSheet.Range("A1").Value = "ID"
        oSheet.Range("B1").Value = " Nome"
        oSheet.Range("A1:B1").Font.Bold = True
        oSheet.Range("A2").Value = CStr(Request("ID"))
        oSheet.Range("B2").Value = "John"
        oBook.SaveAs("C:\Book1.xlsx")
        oExcel.Quit()

    I can create and save the Excel file, but I can't update the contents. How can I do it? Thanks.

    Read the article

  • Export Failed Element Log(s) after deliver?

    - by cogmios
    When a deliver has been performed I can right-click an entry in the GUI version/element log and it displays a popup with the element log. Quite handy. I now have a delivery with about 25 failed elements and a couple of hundred OKs. Sadly I cannot sort on the "status" column, so I have to take screenshots wherever I find a [!] in order to sit down with the specific team and find out whether it is really OK not to deliver those. It would be handy to have this list of element logs from a deliver, so that I do not have to take screenshots or copy the contents of the popup box one by one, but instead have a list of the failed ones with their element logs. Is there a way to export the element logs from a deliver (i.e. the ones that show up when you right-click and choose "Display Element Log"), and/or only the ones that gave failures?

    Read the article

  • ASP.NET Export to Excel and Word using VB.NET and C#

    In most ASP.NET web applications there is a need to export data. This is particularly useful if the information will be used for further analysis or archiving purposes offline. This tutorial will illustrate how you can export data from your ASP.NET web page, for example data coming from an MSSQL database, to two of the most common file export formats in Windows: MS Excel and MS Word.

    Read the article

  • Python: what's the data structure for triple data?

    - by Paul
    I've got a set of data that has three attributes, say A, B, and C, where A is essentially the index (i.e., A is used to look up the other two attributes). What would be the best data structure for such data? I used two dictionaries, each keyed on A, but I get KeyErrors when a query doesn't match any instance of A.
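
    One common shape for this kind of lookup is a single dict keyed on A whose values hold B and C together; dict.get() then avoids the KeyError when a key has no matching A. A minimal sketch with placeholder names:

        # One dict keyed on attribute A; each value is a (B, C) tuple.
        records = [("a1", "b1", "c1"), ("a2", "b2", "c2")]   # placeholder data

        index = {a: (b, c) for a, b, c in records}

        print(index.get("a1"))        # ('b1', 'c1')
        print(index.get("missing"))   # None instead of raising KeyError

        # Or keep the exception but handle it explicitly:
        try:
            b, c = index["missing"]
        except KeyError:
            b, c = None, None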

    Read the article

  • Cannot export JARs in LWJGL!

    - by NerdyLegend
    I did this before, but for some reason it's giving me the same problem again. I want to export as a regular JAR file, not a folder full of files. I export it the way Oskar Veerhoak shows, and cmd says:

        Exception in thread "main" java.lang.RuntimeException: Resource not found: res/FlubberFlap.png
                at org.newdawn.slick.util.ResourceLoader.getResourceAsStream(ResourceLoader.java:69)
                at com_FlubberSpace.MainFS.main(MainFS.java:118)

    I tested it with my other project, which has pretty much the same code. This is how I load my textures:

        wood = TextureLoader.getTexture("PNG", ResourceLoader.getResourceAsStream("res/wood.png"));

    Of course it works fine in Eclipse, but not after export. I haven't tried giving the previous game its own project; I have it in a package. Can someone record how to export properly? I want a JAR file that you just double-click to start. I also want it so that nobody can extract the files and see my classes and res folder.

    Read the article

  • Export SQL Binary/BLOB Data?

    - by davemackey
    Recently a software application we utilize upgraded from ASP to ASP.NET. In the process they abandoned the old web-based product and rewrote the entire UI, using new DB tables. The old DB tables still exist in the database and contain legacy files in binary or blob formats. I'm wondering if there is an easy way to export all these legacy files from the database to the filesystem (NTFS)? Then we could delete these old unused tables and save a few GB of space in the DB backups, etc.
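
    One way to do this kind of one-off export is a small script that selects the BLOB column and writes each row to a file. The sketch below uses Python with pyodbc; the connection string, table, and column names are assumptions standing in for the real legacy schema.

        # Dump varbinary/image rows to files. Connection string, table and
        # column names are placeholders, not the actual schema.
        import os
        import pyodbc

        conn = pyodbc.connect(
            "DRIVER={SQL Server};SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes")
        cursor = conn.cursor()
        cursor.execute("SELECT FileId, FileName, FileData FROM dbo.LegacyFiles")

        out_dir = r"D:\legacy_export"
        os.makedirs(out_dir, exist_ok=True)

        for file_id, file_name, file_data in cursor:
            # BLOB columns come back as bytes objects
            with open(os.path.join(out_dir, f"{file_id}_{file_name}"), "wb") as f:
                f.write(file_data)

        conn.close()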

    Read the article

  • PHP export to Excel

    - by user1865240
    I'm having trouble exporting Japanese text to Excel (.xls). I used the following code:

        header('Content-type: application/ms-excel;charset=UTF-8');
        header('Content-Disposition: attachment; filename='.$filename);
        header("Pragma: no-cache");
        echo $contents;

    But in the Excel file the text comes out as garbled characters like this: é™?定ç‰? ã?¨ã??ã?¯ã??ã?£ã?†ã?ªã?¢å??犬ã?®ã?Œæ??ã? ’è??ã??ã?Ÿã?†ã?£ã??ã??ã??ã?? ï?? Currently I'm using hostingmanager; I tried the same code on a different server and there was no problem there. What could the problem be? The PHP version? Please help me. Thank you, Aino

    Read the article

  • Implementing a generic repository for WCF data services

    - by cibrax
    The repository implementation I am going to discuss here is not exactly what someone would call a repository in terms of DDD, but it is an abstraction layer that comes in handy when unit testing the code around it. In other words, you can easily create a mock to replace the real repository implementation. The WCF Data Services update for .NET 3.5 introduced a nice feature to support two-way data binding, which is very helpful for developing WPF or Silverlight based applications but also for implementing the repository I am going to talk about. As part of this feature, the WCF Data Services Client library introduced a new collection, DataServiceCollection<T>, that implements INotifyPropertyChanged to notify the data context (DataServiceContext) about any change in the association links. This means that it is no longer necessary to manually set or remove the links in the data context when an item is added to or removed from a collection. Before having this new collection, you basically used the following code to add a new item to a collection.

        Order order = new Order { Name = "Foo" };
        OrderItem item = new OrderItem { Name = "bar", UnitPrice = 10, Qty = 1 };

        var context = new OrderContext();
        context.AddToOrders(order);
        context.AddToOrderItems(item);
        context.SetLink(item, "Order", order);
        context.SaveChanges();

    Now, thanks to this new collection, everything is much simpler and similar to what you have in other ORMs like Entity Framework or L2S.

        Order order = new Order { Name = "Foo" };
        OrderItem item = new OrderItem { Name = "bar", UnitPrice = 10, Qty = 1 };
        order.Items.Add(item);

        var context = new OrderContext();
        context.AddToOrders(order);
        context.SaveChanges();

    In order to use this new feature, you first need to enable V2 in the data service, and then use some specific arguments in the datasvcutil tool (you can find more information about this new feature and how to use it in this post).

        DataSvcUtil /uri:"http://localhost:3655/MyDataService.svc/" /out:Reference.cs /dataservicecollection /version:2.0

    Once you use those two arguments, the generated proxy classes will use DataServiceCollection<T> rather than a simple ObjectCollection<T>, which was the default collection in V1. There are some aspects that you need to know to use this feature correctly.

    1. All the entities retrieved directly from the data context with a query track the changes and report those to the data context automatically.

    2. An entity created with "new" does not track any change in the properties or associations. In order to enable change tracking in this entity, you need to do the following trick.

        public Order CreateOrder()
        {
            var collection = new DataServiceCollection<Order>(this.context);
            var order = new Order();
            collection.Add(order);
            return order;
        }

    You basically need to create a collection, and add the entity to that collection with the "Add" method, to enable change tracking on that entity.

    3. If you need to attach an existing entity to a data context for tracking changes (for example, if you created the entity with the "new" operator rather than retrieving it from the data context with a query), you can use the "Load" method in the DataServiceCollection.

        var order = new Order { Id = 1 };

        var collection = new DataServiceCollection<Order>(this.context);
        collection.Load(order);

    In this case, the order with Id = 1 must exist in the data source exposed by the data service. Otherwise, you will get an error because the entity does not exist.

    The cool extension methods discussed by Stuart Leeks in this post, which replace all the magic strings in the "Expand" operation with expression trees, represent another feature I am going to use to implement this generic repository. Thanks to these extension methods, you can replace the following query, which uses magic strings, with a piece of code that only uses expressions.

    Magic strings:

        var customers = dataContext.Customers
            .Expand("Orders")
            .Expand("Orders/Items")

    Expressions:

        var customers = dataContext.Customers
            .Expand(c => c.Orders.SubExpand(o => o.Items))

    That query basically returns all the customers with their orders and order items. Ok, now that we have the automatic change tracking support and the expression support for explicitly loading entity associations, we are ready to create the repository. The interface for this repository looks like this:

        public interface IRepository
        {
            T Create<T>() where T : new();
            void Update<T>(T entity);
            void Delete<T>(T entity);
            IQueryable<T> RetrieveAll<T>(params Expression<Func<T, object>>[] eagerProperties);
            IQueryable<T> Retrieve<T>(Expression<Func<T, bool>> predicate, params Expression<Func<T, object>>[] eagerProperties);
            void Attach<T>(T entity);
            void SaveChanges();
        }

    The Retrieve and RetrieveAll methods are used to execute queries against the data service context. While both methods receive an array of expressions to load associations explicitly, only the Retrieve method receives a predicate representing the "where" clause. The following code represents the final implementation of this repository.

        public class DataServiceRepository : IRepository
        {
            ResourceRepositoryContext context;

            public DataServiceRepository()
                : this(new DataServiceContext())
            {
            }

            public DataServiceRepository(DataServiceContext context)
            {
                this.context = context;
            }

            private static string ResolveEntitySet(Type type)
            {
                var entitySetAttribute = (EntitySetAttribute)type.GetCustomAttributes(typeof(EntitySetAttribute), true).FirstOrDefault();
                if (entitySetAttribute != null)
                    return entitySetAttribute.EntitySet;
                return null;
            }

            public T Create<T>() where T : new()
            {
                var collection = new DataServiceCollection<T>(this.context);
                var entity = new T();
                collection.Add(entity);
                return entity;
            }

            public void Update<T>(T entity)
            {
                this.context.UpdateObject(entity);
            }

            public void Delete<T>(T entity)
            {
                this.context.DeleteObject(entity);
            }

            public void Attach<T>(T entity)
            {
                var collection = new DataServiceCollection<T>(this.context);
                collection.Load(entity);
            }

            public IQueryable<T> Retrieve<T>(Expression<Func<T, bool>> predicate, params Expression<Func<T, object>>[] eagerProperties)
            {
                var entitySet = ResolveEntitySet(typeof(T));
                var query = context.CreateQuery<T>(entitySet);
                foreach (var e in eagerProperties)
                {
                    query = query.Expand(e);
                }
                return query.Where(predicate);
            }

            public IQueryable<T> RetrieveAll<T>(params Expression<Func<T, object>>[] eagerProperties)
            {
                var entitySet = ResolveEntitySet(typeof(T));
                var query = context.CreateQuery<T>(entitySet);
                foreach (var e in eagerProperties)
                {
                    query = query.Expand(e);
                }
                return query;
            }

            public void SaveChanges()
            {
                this.context.SaveChanges(SaveChangesOptions.Batch);
            }
        }

    For instance, you can use the following code to retrieve customers with first name equal to "John", and all their orders, in a single call.

        repository.Retrieve<Customer>(
            c => c.FirstName == "John",                  // Where
            c => c.Orders.SubExpand(o => o.Items));

    In case you want to have some pre-defined queries that you are going to use in several places, you can put them in a specific class.

        public static class CustomerQueries
        {
            public static Expression<Func<Customer, bool>> LastNameEqualsTo(string lastName)
            {
                return c => c.LastName == lastName;
            }
        }

    And then use it with the repository.

        repository.Retrieve<Customer>(
            CustomerQueries.LastNameEqualsTo("foo"),
            c => c.Orders.SubExpand(o => o.Items));

    Read the article

  • data structure for counting frequencies in a database table-like format

    - by user373312
    I was wondering if there is a data structure optimized for counting frequencies in data stored in a database table-like format. For example, the data comes in a (comma-)delimited format like this:

        col1, col2, col3
        x, a, green
        x, b, blue
        ...
        y, c, green

    Now I simply want to count the frequency of col1=x, or of col1=x and col3=green. I have been storing the data in a database table, but from profiling and empirical observation, the database connection is the bottleneck. I have tried in-memory database solutions too, and they work quite well; the only problems are the memory requirements and the quirky init/destroy calls. Also, I work mainly with Java but have experience with .NET, and I was wondering if there is any API for working with "tabular" data in a LINQ-like way in Java. Any help is appreciated.
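
    If the data fits in memory, a plain hash-based counter over the columns of interest is usually enough and skips the database round trip entirely. A minimal Python sketch (file name and column choices are placeholders; the same idea maps to a HashMap keyed on a composite value in Java):

        # Count frequencies of single columns and column combinations in one pass.
        import csv
        from collections import Counter

        col1_counts = Counter()
        pair_counts = Counter()

        with open("data.csv", newline="") as f:
            for row in csv.DictReader(f):          # header: col1, col2, col3
                col1_counts[row["col1"]] += 1
                pair_counts[(row["col1"], row["col3"])] += 1

        print(col1_counts["x"])                    # frequency of col1 = x
        print(pair_counts[("x", "green")])         # col1 = x and col3 = green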

    Read the article

  • Oracle Big Data Software Downloads

    - by Mike.Hallett(at)Oracle-BI&EPM
    Companies have been making business decisions for decades based on transactional data stored in relational databases. Beyond that critical data is a potential treasure trove of less structured data: weblogs, social media, email, sensors, and photographs that can be mined for useful information. Oracle offers a broad integrated portfolio of products to help you acquire and organize these diverse data sources and analyze them alongside your existing data to find new insights and capitalize on hidden relationships.

    Oracle Big Data Connectors Downloads (here) include:
    - Oracle SQL Connector for Hadoop Distributed File System Release 2.1.0
    - Oracle Loader for Hadoop Release 2.1.0
    - Oracle Data Integrator Companion 11g
    - Oracle R Connector for Hadoop v 2.1

    Oracle Big Data Documentation: The Oracle Big Data solution offers an integrated portfolio of products to help you organize and analyze your diverse data sources alongside your existing data to find new insights and capitalize on hidden relationships.
    - Oracle Big Data, Release 2.2.0 - E41604_01 zip (27.4 MB)
    - Integrated Software and Big Data Connectors User's Guide (HTML, PDF)

    Oracle Data Integrator (ODI) Application Adapter for Hadoop: Apache Hadoop is designed to handle and process data that typically comes from non-relational data sources, in volumes beyond what relational databases handle. Typical processing in Hadoop includes data validation and transformations that are programmed as MapReduce jobs. Designing and implementing a MapReduce job usually requires expert programming knowledge. However, when you use Oracle Data Integrator with the Application Adapter for Hadoop, you do not need to write MapReduce jobs; Oracle Data Integrator uses Hive and the Hive Query Language (HiveQL), a SQL-like language, to implement MapReduce jobs. Employing familiar and easy-to-use tools and pre-configured knowledge modules (KMs), the application adapter provides the following capabilities:
    - Loading data into Hadoop from the local file system and HDFS
    - Performing validation and transformation of data within Hadoop
    - Loading processed data from Hadoop to an Oracle database for further processing and generating reports

    Oracle Database Loader for Hadoop: Oracle Loader for Hadoop is an efficient and high-performance loader for fast movement of data from a Hadoop cluster into a table in an Oracle database. It pre-partitions the data if necessary and transforms it into a database-ready format. Oracle Loader for Hadoop is a Java MapReduce application that balances the data across reducers to help maximize performance.

    Oracle R Connector for Hadoop: Oracle R Connector for Hadoop is a collection of R packages that provide:
    - Interfaces to work with Hive tables, the Apache Hadoop compute infrastructure, the local R environment, and Oracle database tables
    - Predictive analytic techniques, written in R or Java as Hadoop MapReduce jobs, that can be applied to data in HDFS files
    You install and load this package as you would any other R package. Using simple R functions, you can perform tasks such as:
    - Access and transform HDFS data using a Hive-enabled transparency layer
    - Use the R language for writing mappers and reducers
    - Copy data between R memory, the local file system, HDFS, Hive, and Oracle databases
    - Schedule R programs to execute as Hadoop MapReduce jobs and return the results to any of those locations

    Oracle SQL Connector for Hadoop Distributed File System: Using Oracle SQL Connector for HDFS, you can use an Oracle Database to access and analyze data residing in Hadoop in these formats:
    - Data Pump files in HDFS
    - Delimited text files in HDFS
    - Hive tables
    For other file formats, such as JSON files, you can stage the input in Hive tables before using Oracle SQL Connector for HDFS. Oracle SQL Connector for HDFS uses external tables to provide Oracle Database with read access to Hive tables, and to delimited text files and Data Pump files in HDFS.

    Related Documentation:
    - Cloudera's Distribution Including Apache Hadoop Library (HTML)
    - Oracle R Enterprise (HTML)
    - Oracle NoSQL Database (HTML)

    Recent Blog Posts:
    - Big Data Appliance vs. DIY Price Comparison
    - Big Data: Architecture Overview
    - Big Data: Achieve the Impossible in Real-Time
    - Big Data: Vertical Behavioral Analytics
    - Big Data: In-Memory MapReduce
    - Flume and Hive for Log Analytics
    - Building Workflows in Oozie

    Read the article

  • The Ins and Outs of Effective Smart Grid Data Management

    - by caroline.yu
    Oracle Utilities and Accenture recently sponsored a one-hour Web cast entitled, "The Ins and Outs of Effective Smart Grid Data Management." Oracle and Accenture created this Web cast to help utilities better understand the types of data collected over smart grid networks and the issues associated with mapping out a coherent information management strategy. The Web cast also addressed important points that utilities must consider with the imminent flood of data that both present and next-generation smart grid components will generate. The three speakers, including Oracle Utilities' Brad Williams, focused on the key factors associated with taking the millions of data points captured in real time and implementing the strategies, frameworks and technologies that enable utilities to process, store, analyze, visualize, integrate, transport and transform data into the information required to deliver targeted business benefits. The Web cast replay is available here. The Web cast slides are available here.

    Read the article

  • What Works in Data Integration?

    - by dain.hansen
    TDWI just recently put out this paper on "What Works in Data Integration". I invite you especially to take a look at the section on "Accelerating your Business with Real-time Data Integration" and the DIRECTV case study. The article discusses some of the technology considerations for BI/DW and how data integration plays a role to deliver timely, accessible, and high-quality data. It goes on to outline the three key requirements for how to deliver high performance, low impact, and reliability and how that can translate to faster results. The DIRECTV webinar is something you definitely want to take a look at, you'll hear how DIRECTV successfully transformed their data warehouse investments into a competitive advantage with Oracle GoldenGate.

    Read the article

  • Are there sources of email marketing data available?

    - by Gortron
    Are sources of email marketing data available to the public? I would like to see email marketing data to see what kind of content a business sends out, the frequency of sending, the number of people emailed, especially the resulting open rates and click through rates. Are businesses willing to share data on their previous email marketing campaigns without divulging their contact list? I would like to use this data to create an application to help businesses create better newsletters by using this data as a benchmark, basically sharing what works and what doesn't for each industry.

    Read the article

  • How to Achieve Real-Time Data Protection and Availability... For Real

    - by JoeMeeks
    There is a class of business and mission critical applications where downtime or data loss have substantial negative impact on revenue, customer service, reputation, cost, etc. Because the Oracle Database is used extensively to provide reliable performance and availability for this class of application, it also provides an integrated set of capabilities for real-time data protection and availability. Active Data Guard, depicted in the figure below, is the cornerstone for accomplishing these objectives because it provides the absolute best real-time data protection and availability for the Oracle Database. This is a bold statement, but it is supported by the facts. It isn’t so much that alternative solutions are bad, it’s just that their architectures prevent them from achieving the same levels of data protection, availability, simplicity, and asset utilization provided by Active Data Guard. Let’s explore further. Backups are the most popular method used to protect data and are an essential best practice for every database. Not surprisingly, Oracle Recovery Manager (RMAN) is one of the most commonly used features of the Oracle Database. But comparing Active Data Guard to backups is like comparing apples to motorcycles. Active Data Guard uses a hot (open read-only), synchronized copy of the production database to provide real-time data protection and HA. In contrast, a restore from backup takes time and often has many moving parts - people, processes, software and systems – that can create a level of uncertainty during an outage that critical applications can’t afford. This is why backups play a secondary role for your most critical databases by complementing real-time solutions that can provide both data protection and availability. Before Data Guard, enterprises used storage remote-mirroring for real-time data protection and availability. Remote-mirroring is a sophisticated storage technology promoted as a generic infrastructure solution that makes a simple promise – whatever is written to a primary volume will also be written to the mirrored volume at a remote site. Keeping this promise is also what causes data loss and downtime when the data written to primary volumes is corrupt – the same corruption is faithfully mirrored to the remote volume making both copies unusable. This happens because remote-mirroring is a generic process. It has no  intrinsic knowledge of Oracle data structures to enable advanced protection, nor can it perform independent Oracle validation BEFORE changes are applied to the remote copy. There is also nothing to prevent human error (e.g. a storage admin accidentally deleting critical files) from also impacting the remote mirrored copy. Remote-mirroring tricks users by creating a false impression that there are two separate copies of the Oracle Database. In truth; while remote-mirroring maintains two copies of the data on different volumes, both are part of a single closely coupled system. Not only will remote-mirroring propagate corruptions and administrative errors, but the changes applied to the mirrored volume are a result of the same Oracle code path that applied the change to the source volume. There is no isolation, either from a storage mirroring perspective or from an Oracle software perspective.  Bottom line, storage remote-mirroring lacks both the smarts and isolation level necessary to provide true data protection. 
Active Data Guard offers much more than storage remote-mirroring when your objective is protecting your enterprise from downtime and data loss. Like remote-mirroring, an Active Data Guard replica is an exact block for block copy of the primary. Unlike remote-mirroring, an Active Data Guard replica is NOT a tightly coupled copy of the source volumes - it is a completely independent Oracle Database. Active Data Guard’s inherent knowledge of Oracle data block and redo structures enables a separate Oracle Database using a different Oracle code path than the primary to use the full complement of Oracle data validation methods before changes are applied to the synchronized copy. These include: physical check sum, logical intra-block checking, lost write validation, and automatic block repair. The figure below illustrates the stark difference between the knowledge that remote-mirroring can discern from an Oracle data block and what Active Data Guard can discern. An Active Data Guard standby also provides a range of additional services enabled by the fact that it is a running Oracle Database - not just a mirrored copy of data files. An Active Data Guard standby database can be open read-only while it is synchronizing with the primary. This enables read-only workloads to be offloaded from the primary system and run on the active standby - boosting performance by utilizing all assets. An Active Data Guard standby can also be used to implement many types of system and database maintenance in rolling fashion. Maintenance and upgrades are first implemented on the standby while production runs unaffected at the primary. After the primary and standby are synchronized and all changes have been validated, the production workload is quickly switched to the standby. The only downtime is the time required for user connections to transfer from one system to the next. These capabilities further expand the expectations of availability offered by a data protection solution beyond what is possible to do using storage remote-mirroring. So don’t be fooled by appearances.  Storage remote-mirroring and Active Data Guard replication may look similar on the surface - but the devil is in the details. Only Active Data Guard has the smarts, the isolation, and the simplicity, to provide the best data protection and availability for the Oracle Database. Stay tuned for future blog posts that dive into the many differences between storage remote-mirroring and Active Data Guard along the dimensions of data protection, data availability, cost, asset utilization and return on investment. For additional information on Active Data Guard, see: Active Data Guard Technical White Paper Active Data Guard vs Storage Remote-Mirroring Active Data Guard Home Page on the Oracle Technology Network

    Read the article

  • Extending SSIS with custom Data Flow components (Presentation)

    Download the slides and sample code from my Extending SSIS with custom Data Flow components presentation, first presented at the SQLBits II (The SQL) Community Conference. Abstract: Get some real-world insights into developing data flow components for SSIS. This starts with an introduction to the data flow pipeline engine, and explains the real differences between adapters and the three sub-types of transformation. Understanding how the different types of component behave and manage data is key to writing components of your own, and probably should be required knowledge for anyone building packages at all. Using sample code throughout, I will show you how to write components, as well as highlighting best practice and lessons learned. The sample code includes fully working example projects for source, destination and transformation components. Presentation & Samples (358KB): Extending SSIS with custom Data Flow components.zip

    Read the article

  • How to use OO for data analysis? [closed]

    - by Konsta
    In which ways could object-orientation (OO) make my data analysis more efficient and let me reuse more of my code? The data analysis can be broken up into get data (from db or csv or similar) transform data (filter, group/pivot, ...) display/plot (graph timeseries, create tables, etc.) I mostly use Python and its Pandas and Matplotlib packages for this besides some DB connectivity (SQL). Almost all of my code is a functional/procedural mix. While I have started to create a data object for a certain collection of time series, I wonder if there are OO design patterns/approaches for other parts of the process that might increase efficiency?
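
    One answer that comes up often is to give each stage of the pipeline (source, transform, presentation) a small object with one obvious method, so stages can be swapped and reused independently. The sketch below only illustrates that idea; the class, column, and file names are made up, not a prescribed design.

        # Each stage is a small object; run() wires them together, so any stage
        # can be replaced (CsvSource -> SqlSource, LinePlot -> TablePrinter, ...).
        import pandas as pd
        import matplotlib.pyplot as plt

        class CsvSource:
            def __init__(self, path):
                self.path = path
            def load(self):
                return pd.read_csv(self.path, index_col=0, parse_dates=True)

        class MonthlyMean:
            def apply(self, frame):
                return frame.resample("M").mean()

        class LinePlot:
            def render(self, frame):
                frame.plot()
                plt.show()

        def run(source, transforms, view):
            data = source.load()
            for t in transforms:
                data = t.apply(data)
            view.render(data)

        # run(CsvSource("prices.csv"), [MonthlyMean()], LinePlot())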

    Read the article

  • Markup format or script for data files?

    - by Aaron
    The game I'm designing will be mainly written in a high level scripting language (leaning towards either Lua or Squirrel) with a C++ core. In addition to scripts I'm also going to need different data files. Many data files will be for static information such as graphical assets and monster types. I'd also want to create and update data files at runtime for user information like option settings and game saves. Can I get away with using plain script files (i.e. .lua or .nut files) for my data files, or is it better to use dedicated markup formats like XML or YAML? If I use script files, loaded separately from my true scripts, then I wouldn't need an extra library to read those files. Scripting languages like Lua also have table syntax that lend themselves towards data definition. On the other hand I'd have to write my own schema check code. These languages also don't seem to support serialization "out of the box" like the markup format libraries do.
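
    On the "write my own schema check" point above, such a check is usually only a few lines whichever format is chosen. A tiny illustrative sketch in Python, with JSON standing in for a markup-style data file; the field names and file name are made up:

        # Load a static data file and verify required fields and their types.
        import json

        REQUIRED = {"name": str, "sprite": str, "hit_points": int}

        def load_monster(path):
            with open(path) as f:
                data = json.load(f)
            for key, expected in REQUIRED.items():
                if not isinstance(data.get(key), expected):
                    raise ValueError(f"{path}: '{key}' missing or not {expected.__name__}")
            return data

        # monster = load_monster("goblin.json")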

    Read the article

  • Winner of the 2012 Government Big Data Solutions Award

    - by Jean-Pierre Dijcks
    Hot off the press: the winner of the 2012 Government Big Data Solutions Award is the National Cancer Institute!! Read all the details on CTOLabs.com. A short excerpt to whet your appetite: "... This solution, based on the Oracle Big Data Appliance with the Cloudera Distribution of Apache Hadoop (CDH), leverages capabilities available from the Big Data community today in pioneering ways that can serve a broad range of researchers. The promising approach of this solution is repeatable across many other Big Data challenges for bioinfomatics, making this approach worthy of its selection as the 2012 Government Big Data Solution Award." Read the entire post. Congrats to the entire team!!

    Read the article

  • SQL Server and the XML Data Type : Data Manipulation

    The introduction of the xml data type, with its own set of methods for processing xml data, made it possible for SQL Server developers to create columns and variables of the type xml. Deanna Dicken examines the modify() method, which provides for data manipulation of the XML data stored in the xml data type via XML DML statements.

    Read the article

  • Getting data from a webpage in a stable and efficient way

    - by Mike Heremans
    Recently I've learned that using a regex to parse the HTML of a website to get the data you need isn't the best course of action. So my question is simple: what, then, is the best / most efficient and generally stable way to get this data? I should note that:

        - There are no APIs
        - There is no other source I can get the data from (no databases, feeds and such)
        - There is no access to the source files (data from public websites)
        - Let's say the data is normal text, displayed in a table in an HTML page

    I'm currently using Python for my project, but language-independent solutions/tips would be nice. As a side question: how would you go about it when the webpage is constructed by Ajax calls?
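
    With no API available, an actual HTML parser is far less brittle than a regex. Below is a minimal sketch using only the Python standard library; the URL is a placeholder. A dedicated library such as lxml or BeautifulSoup is sturdier still, and for Ajax-built pages the underlying XHR endpoint can often be called directly for the data.

        # Collect the text of every <td>/<th> cell on a page.
        from html.parser import HTMLParser
        from urllib.request import urlopen

        class CellCollector(HTMLParser):
            def __init__(self):
                super().__init__()
                self.in_cell = False
                self.cells = []
            def handle_starttag(self, tag, attrs):
                if tag in ("td", "th"):
                    self.in_cell = True
                    self.cells.append("")
            def handle_endtag(self, tag):
                if tag in ("td", "th"):
                    self.in_cell = False
            def handle_data(self, data):
                if self.in_cell:
                    self.cells[-1] += data.strip()

        html = urlopen("http://example.com/table-page").read().decode("utf-8")
        parser = CellCollector()
        parser.feed(html)
        print(parser.cells)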

    Read the article

  • Export data from WCF Service to Excel

    - by Dave
    I need to provide an export-to-Excel feature for a large amount of data returned from a WCF web service. The code to load the data list is as below:

        List<resultSet> r = myObject.ReturnResultSet(myWebRequestUrl); //call to WCF service
        myDataList.DataSource = r;
        myDataList.DataBind();

    I am using the Response object to do the job:

        Response.Clear();
        Response.Buffer = true;
        Response.ContentType = "application/vnd.ms-excel";
        Response.AddHeader("Content-Disposition", "attachment; filename=MyExcel.xls");
        StringBuilder sb = new StringBuilder();
        StringWriter sw = new StringWriter(sb);
        HtmlTextWriter tw = new HtmlTextWriter(sw);
        myDataList.RenderControl(tw);
        Response.Write(sb.ToString());
        Response.End();

    The problem is that the WCF service times out for a large amount of data (about 5000 rows) and the result set is null. When I debug the service, I can see the window for saving/opening the Excel sheet appear before the service returns the result, and hence the Excel sheet is always empty. Please help me figure this out.

    Read the article

  • Lotus Notes - Export emails to plain text file

    - by mbeckish
    I am setting up a Lotus Notes account to accept emails from a client and automatically save each email as a plain text file to be processed by another application. So I'm trying to create my very first agent in Lotus to automatically export the emails to text. Is there a standard, best-practices way to do this? I've created a LotusScript agent that pretty much works. However, there is a bug: once the Body of the memo exceeds 32K characters, it starts inserting extra CR/LF pairs. I am using Lotus Notes 7.0.3. Here is my script:

        Sub Initialize
            On Error Goto ErrorCleanup
            Dim session As New NotesSession
            Dim db As NotesDatabase
            Dim doc As NotesDocument
            Dim uniqueID As Variant
            Dim curView As NotesView
            Dim docCount As Integer
            Dim notesInputFolder As String
            Dim notesValidOutputFolder As String
            Dim notesErrorOutputFolder As String
            Dim outputFolder As String
            Dim fileNum As Integer
            Dim bodyRichText As NotesRichTextItem
            Dim bodyUnformattedText As String
            Dim subjectText As NotesItem

            '''''''''''''''''''''''''''''''''''''''''''''''''''''''
            'INPUT OUTPUT LOCATIONS
            outputFolder = "\\PASCRIA\CignaDFS\CUser1\Home\mikebec\MyDocuments\"
            notesInputFolder = "IBEmails"
            notesValidOutputFolder = "IBEmailsDone"
            notesErrorOutputFolder = "IBEmailsError"
            '''''''''''''''''''''''''''''''''''''''''''''''''''''''

            Set db = session.CurrentDatabase
            Set curview = db.GetView(notesInputFolder)
            docCount = curview.EntryCount
            Print "NUMBER OF DOCS " & docCount
            fileNum = 1
            While (docCount > 0)
                'set current doc
                Set doc = curview.GetNthDocument(docCount)
                Set bodyRichText = doc.GetFirstItem( "Body" )
                bodyUnformattedText = bodyRichText.GetUnformattedText()
                Set subjectText = doc.GetFirstItem("Subject")
                If subjectText.Text = "LotusAgentTest" Then
                    uniqueID = Evaluate("@Unique")
                    Open "\\PASCRIA\CignaDFS\CUser1\Home\mikebec\MyDocuments\email_" & uniqueID(0) & ".txt" For Output As fileNum
                    Print #fileNum, "Subject:" & subjectText.Text
                    Print #fileNum, "Date:" & Now
                    Print #fileNum, bodyUnformattedText
                    Close fileNum
                    fileNum = fileNum + 1
                    Call doc.PutInFolder(notesValidOutputFolder)
                    Call doc.RemoveFromFolder(notesInputFolder)
                End If
                doccount = doccount - 1
            Wend
            Exit Sub
        ErrorCleanup:
            Call sendErrorEmail(db, doc.GetItemValue("From")(0))
            Call doc.PutInFolder(notesErrorOutputFolder)
            Call doc.RemoveFromFolder(notesInputFolder)
        End Sub

    Update: Apparently the 32KB issue isn't consistent; so far, it's just one document that starts getting extra carriage returns after 32K.

    Read the article
