Search Results

Search found 61779 results on 2472 pages for 'data handling'.

  • Data indexing frameworks fit for large E-Commerce applications

    - by Dabu
    We wrote, and still maintain, a large E-Commerce application. Our feature list resembles what you would expect from most shops. We'd like to improve some of our features, and right now the search/suggestion functionality (enter some letters, a JavaScript suggestion list appears) has caught our eye. Currently we use http://xapian.org/, which has some drawbacks. First, it isn't really the right tool: it was built to index documents, not the ever-changing, fine-grained data an E-Commerce application needs. Second, the load on the database is significant when we reindex all data every night. We'd like a framework that has been designed for indexing database data, that can add to the index easily and without much load, and that can push data changes made in the back office to the frontend quickly and without much delay. I'm aware that Xapian is Open Source and even Free Software, so we could adapt it to our needs if we decided to invest the time and manpower, but taking a quick look around for a better-suited solution seems fair, right? Oh, and commercial applications are fine too; FOSS is not required. Thanks a bunch.
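
    (A hedged illustration, not a specific recommendation: Lucene-based engines such as Solr support exactly this kind of incremental, per-record reindexing. The sketch below uses Lucene's updateDocument(), which replaces a single entry in place instead of rebuilding the whole index nightly; the class name and product fields are hypothetical.)

    import java.nio.file.Paths;

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;
    import org.apache.lucene.document.StringField;
    import org.apache.lucene.document.TextField;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.index.IndexWriterConfig;
    import org.apache.lucene.index.Term;
    import org.apache.lucene.store.FSDirectory;

    public class ProductIndexer {
        private final IndexWriter writer;

        public ProductIndexer(String indexDir) throws Exception {
            writer = new IndexWriter(
                    FSDirectory.open(Paths.get(indexDir)),
                    new IndexWriterConfig(new StandardAnalyzer()));
        }

        // Called whenever a product changes: updateDocument() atomically deletes
        // the old entry and adds the new one, so there is no nightly full rebuild.
        public void productChanged(String productId, String name, String description) throws Exception {
            Document doc = new Document();
            doc.add(new StringField("id", productId, Field.Store.YES));
            doc.add(new TextField("name", name, Field.Store.YES));
            doc.add(new TextField("description", description, Field.Store.NO));
            writer.updateDocument(new Term("id", productId), doc);
            writer.commit(); // or batch commits to keep the load down
        }
    }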

    Read the article

  • Compressing 2D level data

    - by Lucius
    So, I'm developing a 2D, tile-based game and a map maker thingy - all in Java. The problem is that recently I've been having some memory issues when about 4 maps are loaded. Each of these maps is composed of 128x128 tiles and has 4 layers (for details and stuff). I've already spent a good amount of time searching for solutions, and the best thing I found was run-length encoding (RLE). It seems easy enough to use with static data, but is there a way to use it with data that is constantly changing, without a big drop in performance? In my maps, supposing I'm compressing the columns, I would have 128 rows, each with some amount of data (hopefully less than it would be without RLE). Whenever I change a tile, that whole row would have to be re-encoded, and I'm afraid that would slow down production too much (and I'm on a somewhat tight schedule). Well, worst case scenario I work on each map individually and save them using RLE, but it would be really nice if I could avoid that. EDIT: What I'm currently using to store the data for the tiles is a 2D array of HashMaps that use the layer as key and store the id of the tile in that position - like this: private HashMap<Integer, Integer>[][]
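
    (One hedged way to attack this, sketched below: keep each row RLE-encoded and re-encode only the row that changed. Re-encoding a single 128-tile row is O(width), so edits in the map maker stay fast. The class and method names are illustrative, not from the original project.)

    import java.util.ArrayList;
    import java.util.List;

    public class RleRow {
        // Each run is two ints: {count, tileId}.
        private final List<int[]> runs = new ArrayList<int[]>();

        // Compress one row of tile ids into (count, tileId) runs.
        public static RleRow encode(int[] row) {
            RleRow r = new RleRow();
            int count = 1;
            for (int i = 1; i <= row.length; i++) {
                if (i < row.length && row[i] == row[i - 1]) {
                    count++;
                } else {
                    r.runs.add(new int[] { count, row[i - 1] });
                    count = 1;
                }
            }
            return r;
        }

        // Expand back to a plain array.
        public int[] decode(int width) {
            int[] row = new int[width];
            int x = 0;
            for (int[] run : runs) {
                for (int k = 0; k < run[0]; k++) {
                    row[x++] = run[1];
                }
            }
            return row;
        }

        // Edit one tile: decode, change, re-encode. Only this row is touched,
        // so a single edit costs O(width), not O(whole map).
        public RleRow setTile(int x, int tileId, int width) {
            int[] row = decode(width);
            row[x] = tileId;
            return encode(row);
        }
    }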

    Read the article

  • Non use of persisted data – Part deux

    - by Dave Ballantyne
    In my last blog I showed how persisted data may not be used if you have used the base data in an include on an index. That wasn't the only problem I've had that showed the same symptom. Using the same code as before, I was executing something similar to the below:

    select BillToAddressID, SOD.SalesOrderDetailID, SOH.CleanedGuid
    from sales.salesorderheader SOH
    join Sales.SalesOrderDetail SOD on SOH.SalesOrderID = SOD.SalesOrderID

    But, due to a distribution error in statistics, I found it necessary to use a table hint. In this case, I wanted to force a loop join:

    select BillToAddressID, SOD.SalesOrderDetailID, SOH.CleanedGuid
    from sales.salesorderheader SOH
    inner loop join Sales.SalesOrderDetail SOD on SOH.SalesOrderID = SOD.SalesOrderID

    But, being the diligent TSQL developer that I am, looking at the execution plan I noticed that the 'compute scalar' operator was again calling the function. Again, Profiler is a more graphic way to view this. All very odd: just because I've forced a join, which has NOTHING to do with my persisted data, something is causing the data to be re-evaluated. I'm not sure there is any easy fix you can make to the TSQL here, but again it's a lesson learned (or rather reinforced): examine the execution plan of every query you write to ensure that it is operating as you thought it would.

    Read the article

  • Displaying a Grid of Data in ASP.NET MVC

    One of the most common tasks we face as web developers is displaying data in a grid. In its simplest incarnation, a grid merely displays information about a set of records - the orders placed by a particular customer, perhaps; however, most grids offer features like sorting, paging, and filtering to present the data in a more useful and readable manner. In ASP.NET WebForms the GridView control offers a quick and easy way to display a set of records in a grid, and offers features like sorting, paging, editing, and deleting with just a little extra work. On page load, the GridView automatically renders as an HTML <table> element, freeing you from having to write any markup and letting you focus instead on retrieving and binding the data to display to the GridView. In an ASP.NET MVC application, however, developers are on the hook for generating the markup rendered by each view. This task can be a bit daunting for developers new to ASP.NET MVC, especially those who have a background in WebForms. This is the first in a series of articles that explores how to display grids in an ASP.NET MVC application. This installment starts with a walkthrough of creating the ASP.NET MVC application and data access code used throughout this series. Next, it shows how to display a set of records in a simple grid. Future installments examine how to create richer grids that include sorting, paging, filtering, and client-side enhancements. We'll also look at pre-built grid solutions, like the Grid component in the MvcContrib project and JavaScript-based grids like jqGrid. But first things first - let's create an ASP.NET MVC application and see how to display database records in a web page. Read on to learn more!

    Read the article

  • Generate a Word document from list data

    - by PeterBrunone
    This came up on a discussion list lately, so I threw together some code to meet the need. In short, a colleague needed to take the results of an InfoPath form survey and give them to the user in Word format. The form data was already in a list item, so it was a simple matter of using the SharePoint API to get the list item, formatting the data appropriately, and using response headers to make the client machine treat the response as MS Word content. The following rudimentary code can be run in an ASPX (or an assembly) in the 12 hive. When you link to the page, send the list name and item ID in the querystring and use them to grab the appropriate data.

    // Clear the current response headers and set them up to look like a Word doc.
    HttpContext.Current.Response.Clear();
    HttpContext.Current.Response.Charset = "";
    HttpContext.Current.Response.ContentType = "application/msword";
    string strFileName = "ThatWordFileYouWanted" + ".doc";
    HttpContext.Current.Response.AddHeader("Content-Disposition", "inline;filename=" + strFileName);

    // Using the current site, get the List by name and then the Item by ID (from the URL).
    string myListName = HttpContext.Current.Request.QueryString["listName"];
    int myID = Convert.ToInt32(HttpContext.Current.Request.QueryString["itemID"]);
    SPSite oSite = SPContext.Current.Site;
    SPWeb oWeb = oSite.OpenWeb();
    SPList oList = oWeb.Lists[myListName];
    SPListItem oListItem = oList.Items.GetItemById(myID);

    // Build a string with the data -- format it with HTML if you like.
    StringBuilder strHTMLContent = new StringBuilder();
    // *
    // Here's where you pull individual fields out of the list item.
    // *

    // Once everything is ready, spit it out to the client machine.
    HttpContext.Current.Response.Write(strHTMLContent);
    HttpContext.Current.Response.Flush();
    HttpContext.Current.Response.End();

    Read the article

  • PASS Summit Preconference and Sessions

    - by Davide Mauri
    I’m very pleased to announce that I’ll be delivering a Pre-Conference at PASS Summit 2012. I’ll speak about Business Intelligence again (as I did in 2010), but this time I’ll focus only on the Data Warehouse, since it’s a big topic even on its own. I’ll discuss not only what a Data Warehouse is and how it can be modeled and built, but also how its development can be approached using an Agile approach, bringing in the experience I’ve gathered in this field. Building the Agile Data Warehouse with SQL Server 2012 http://www.sqlpass.org/summit/2012/Sessions/SessionDetails.aspx?sid=2821 I’m sure you’ll like it, especially if you’re starting to create a BI solution and you’re wondering what a Data Warehouse is, whether it is still useful nowadays when everyone talks about Self-Service BI and In-Memory databases, and what’s the correct path to follow in order to have a successful project up and running. Besides this Preconference, I’ll also deliver a regular session, this time related to database administration, monitoring and tuning: DMVs: Power in Your Hands http://www.sqlpass.org/summit/2012/Sessions/SessionDetails.aspx?sid=3204 Here we’ll dive into the most useful DMVs, so that you’ll see how they can help in everyday management to discover, understand and optimize your SQL Server installation, from the server itself down to the single query. See you there!

    Read the article

  • Keeping a domain model consistent with actual data

    - by fstuijt
    Recently domain-driven design got my attention, and while thinking about how this approach could help us I came across the following problem. In DDD the common approach is to retrieve entities (or better, aggregate roots) from a repository, which acts as an in-memory collection of these entities. After these entities have been retrieved, they can be updated or deleted by the user; however, after retrieval they are essentially disconnected from the data source, and one must actively inform the repository to update the data source and make it consistent again with our in-memory representation. What is the DDD approach to retrieving entities that should remain connected to the data source? For example, in our situation we retrieve a series of sensors that have a specific measurement at retrieval time. Over time, these measurement values may change, and our business logic in the domain model should respond to these changes properly. E.g., domain events may be raised if a sensor value exceeds a predefined threshold. However, using the repository approach, these sensor values are just snapshots, disconnected from the data source. Does any of you have an idea on how to solve this following the DDD approach?
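
    (A common answer is to stop treating the sensor as something that watches the database: an adapter at the infrastructure edge polls or subscribes to new measurements and feeds each one into the aggregate, which raises domain events itself. A minimal, hypothetical Java sketch of that shape - all names are illustrative:)

    import java.util.ArrayList;
    import java.util.List;

    // The aggregate never holds a live connection; an infrastructure adapter
    // (polling job, message subscriber) loads it, replays new readings into it,
    // collects the events it raised, and persists it again.
    public class Sensor {
        private final String id;
        private final double threshold;
        private double lastValue;
        private final List<ThresholdExceeded> pendingEvents = new ArrayList<ThresholdExceeded>();

        public Sensor(String id, double threshold) {
            this.id = id;
            this.threshold = threshold;
        }

        public void recordMeasurement(double value) {
            this.lastValue = value;
            if (value > threshold) {
                pendingEvents.add(new ThresholdExceeded(id, value));
            }
        }

        public double lastValue() {
            return lastValue;
        }

        // The adapter drains these after each replay and dispatches them.
        public List<ThresholdExceeded> pullPendingEvents() {
            List<ThresholdExceeded> out = new ArrayList<ThresholdExceeded>(pendingEvents);
            pendingEvents.clear();
            return out;
        }

        public static class ThresholdExceeded {
            public final String sensorId;
            public final double value;

            public ThresholdExceeded(String sensorId, double value) {
                this.sensorId = sensorId;
                this.value = value;
            }
        }
    }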

    Read the article

  • How can data not stored in a DB be accessed from any activity in Android?

    - by jul
    hi, I'm passing data to a ListView to display some restaurant names. Now when clicking on an item I'd like to start another activity to display more restaurant data. I'm not sure about how to do it. Shall I pass all the restaurant data in a bundle through the intent object? Or shall I just pass the restaurant id and get the data in the other activity? In that case, how can I access my restaurantList from the other activity? In any case, how can I get the data for the restaurant I clicked on (the view only contains the name)? Any help, pointers welcome! Thanks Jul

    ListView lv = (ListView) findViewById(R.id.listview);
    lv.setAdapter(new ArrayAdapter<String>(this, android.R.layout.simple_list_item_1, restaurantList.getRestaurantNames()));
    lv.setOnItemClickListener(new OnItemClickListener() {
        public void onItemClick(AdapterView<?> parent, View view, int position, long id) {
            Intent i = new Intent(Atable.this, RestaurantEdit.class);
            Bundle b = new Bundle();
            //b.putInt("id", ? );
            startActivityForResult(i, ACTIVITY_EDIT);
        }
    });

    RestaurantList.java

    package org.digitalfarm.atable;

    import java.util.ArrayList;
    import java.util.List;

    public class RestaurantList {
        private List<Restaurant> restaurants = new ArrayList<Restaurant>();

        public List<Restaurant> getRestaurants() {
            return this.restaurants;
        }

        public void setRestaurants(List<Restaurant> restaurants) {
            this.restaurants = restaurants;
        }

        public List<String> getRestaurantNames() {
            List<String> restaurantNames = new ArrayList<String>();
            for (int i = 0; i < this.restaurants.size(); i++) {
                restaurantNames.add(this.restaurants.get(i).getName());
            }
            return restaurantNames;
        }
    }

    Restaurant.java

    package org.digitalfarm.atable;

    public class Restaurant {
        private int id;
        private String name;
        private float latitude;
        private float longitude;

        public int getId() { return this.id; }
        public void setId(int id) { this.id = id; }
        public String getName() { return this.name; }
        public void setName(String name) { this.name = name; }
        public float getLatitude() { return this.latitude; }
        public void setLatitude(float latitude) { this.latitude = latitude; }
        public float getLongitude() { return this.longitude; }
        public void setLongitude(float longitude) { this.longitude = longitude; }
    }
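
    (A hedged sketch of the second option - pass only the id and reload the record in the other activity - using the standard Intent extras API. How restaurantList becomes reachable from RestaurantEdit, e.g. via a database query or a shared Application-level holder, is left as an assumption:)

    // In Atable's onItemClick: the adapter was built from getRestaurantNames(),
    // so the row position lines up with the index into getRestaurants().
    lv.setOnItemClickListener(new OnItemClickListener() {
        public void onItemClick(AdapterView<?> parent, View view, int position, long id) {
            Restaurant clicked = restaurantList.getRestaurants().get(position);
            Intent i = new Intent(Atable.this, RestaurantEdit.class);
            i.putExtra("id", clicked.getId()); // pass only the key, not the whole object
            startActivityForResult(i, ACTIVITY_EDIT);
        }
    });

    // In RestaurantEdit.onCreate(): read the key back and reload the record
    // from wherever it lives (database, shared holder), instead of shipping
    // all the restaurant data through the Bundle.
    int restaurantId = getIntent().getIntExtra("id", -1);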

    Read the article

  • What are the algorithms that are used for working with large data in popular web applications

    - by Moss Farmer
    I am looking for some well-known algorithms that can be considered while handling very large amounts of data. (Edit: by large amounts of data I refer to records in a database, excluding blobs.) These algorithms, if not in their totality then at least in parts, may be used in big web applications like Twitter, Last.fm, Amazon, etc. Specifically, I'm looking for names of, or links to, such algorithms. My primary interest lies in developing a very deep understanding of working with large database records and writing efficient code for working with the same.

    Read the article

  • Introduction to Master Data Services in SQL Server 2008

    There are numerous databases housing the same information and it's getting quite difficult to keep everything in line. I've heard from a number of departments who want a more centralized approach to handling customer data, but I don't know where to begin. Can you steer me in the right direction?

    Read the article

  • C# Process Exited event not firing from within webservice

    - by davidpizon
    I am attempting to wrap a 3rd party command line application within a web service. If I run the following code from within a console application:

    Process process = new System.Diagnostics.Process();
    process.StartInfo.FileName = "some_executable.exe";

    // Do not spawn a window for this process
    process.StartInfo.CreateNoWindow = true;
    process.StartInfo.ErrorDialog = false;

    // Redirect input, output, and error streams
    process.StartInfo.UseShellExecute = false;
    process.StartInfo.RedirectStandardOutput = true;
    process.StartInfo.RedirectStandardError = true;
    process.StartInfo.RedirectStandardInput = true;
    process.EnableRaisingEvents = true;

    process.ErrorDataReceived += (sendingProcess, eventArgs) =>
    {
        // Make note of the error message
        if (!String.IsNullOrEmpty(eventArgs.Data))
            if (this.WarningMessageEvent != null)
                this.WarningMessageEvent(this, new MessageEventArgs(eventArgs.Data));
    };

    process.OutputDataReceived += (sendingProcess, eventArgs) =>
    {
        // Make note of the message
        if (!String.IsNullOrEmpty(eventArgs.Data))
            if (this.DebugMessageEvent != null)
                this.DebugMessageEvent(this, new MessageEventArgs(eventArgs.Data));
    };

    process.Exited += (object sender, EventArgs e) =>
    {
        // Make note of the exit event
        if (this.DebugMessageEvent != null)
            this.DebugMessageEvent(this, new MessageEventArgs("The command exited"));
    };

    process.Start();
    process.StandardInput.Close();
    process.BeginOutputReadLine();
    process.BeginErrorReadLine();
    process.WaitForExit();

    int exitCode = process.ExitCode;
    process.Close();
    process.Dispose();

    if (this.DebugMessageEvent != null)
        this.DebugMessageEvent(this, new MessageEventArgs("The command exited with code: " + exitCode));

    All events, including the process.Exited event, fire as expected. However, when this code is invoked from within a web service method, all events EXCEPT the process.Exited event fire. The execution appears to hang at the line:

    process.WaitForExit();

    Would anyone be able to shed some light as to what I might be missing?

    Read the article

  • What are best practices for collecting, maintaining and ensuring accuracy of a huge data set?

    - by Kyle West
    I am posing this question looking for practical advice on how to design a system. Sites like Amazon.com and Pandora have and maintain huge data sets to run their core business. For example, Amazon (and every other major e-commerce site) has millions of products for sale, images of those products, pricing, specifications, etc. Ignoring the data coming in from 3rd-party sellers and the user-generated content, all that "stuff" had to come from somewhere and is maintained by someone. It's also incredibly detailed and accurate. How do they do it? Is there just an army of data-entry clerks, or have they devised systems to handle the grunt work? My company is in a similar situation. We maintain a huge (tens of millions of records) catalog of automotive parts and the cars they fit. We've been at it for a while now and have come up with a number of programs and processes to keep our catalog growing and accurate; however, it seems like to grow the catalog to x items we need to grow the team to y. I need to figure out some ways to increase the efficiency of the data team, and hopefully I can learn from the work of others. Any suggestions are appreciated; better still would be links to content I could spend some serious time reading. THANKS! Kyle

    Read the article

  • jQuery Autocomplete & jTemplates - handling response

    - by Diegos Grace
    Has anyone had any experience with using jTemplates to display autocomplete results? I have the following:

    $("#address-search").autocomplete({
        source: "/Address/SearchAddress",
        minLength: 2,
        delay: 400,
        focus: function (event, ui) {
            $('#address-search').val(ui.item.name);
            return false;
        },
        parse: function (data) {
            $("#autocomplete-results").setTemplate($("#templateHolder").html());
            $("#autocomplete-results").processTemplate(data);
        },
        select: function (event, ui) {
            $('#address-search').val(ui.item.name);
            $('#search-address-id').val(ui.item.id);
            $('#search-description').html(ui.item.address);
        }
    });

    and the simple jTemplate holder:

    <script type="text/html" id="templateHolder">
    <ul class="autocomplete">
    {#foreach $T as data}
        <li>{$T.data.name}</li>
    {#/for}
    </ul>
    </script>

    Above I'm using 'parse' to format results; I've also tried the autocomplete result method but I'm not having any luck so far. The only success I've had is by using the private method ._renderItem and formatting the data that way, but we want to render the output using the jTemplate. Any advice appreciated.

    Read the article

  • About Data Objects and DAO Design when using Hibernate

    - by X. Ma
    I'm hesitating between two designs for a database project using Hibernate.

    Design #1. (1) Create a general data provider interface, including a set of DAO interfaces and general data container classes. It hides the underlying implementation: a data provider implementation could access data in a database, an XML file, a service, or something else, and the user of a data provider does not need to know about it. (2) Create a database library with Hibernate. This library implements the data provider interface in (1). The bad thing about Design #1 is that, in order to hide the implementation details, I need to create two sets of data container classes: one in the general data provider interface - let's call them DPI-Objects - and another set used in the database library, exclusively for entity/attribute mapping in Hibernate - let's call them H-Objects. In the DAO implementation, I need to read data from the database to create H-Objects (via Hibernate) and then convert H-Objects into DPI-Objects.

    Design #2. Do not create a general data provider interface. Expose H-Objects directly to components that use the database lib, so the user of the database library needs to be aware of Hibernate.

    I like Design #1 more, but I don't want to create two sets of data container classes. Is that the right way to hide H-Objects and other Hibernate implementation details from the user who uses the database-based data provider? Are there any drawbacks to Design #2? I will not implement another data provider in the near future, so should I just forget about the data provider interface and use Design #2? What do you think about this? Thanks for your time!
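
    (For Design #1, the usual shape is a DAO that maps the Hibernate entity to the provider-level container at its boundary, so the duplication stays mechanical and contained. A minimal sketch; CustomerDto, CustomerEntity and the DAO names are hypothetical, and CustomerEntity - the Hibernate-mapped H-Object - is not shown:)

    // Hypothetical provider-level container (a DPI-Object).
    class CustomerDto {
        final long id;
        final String name;

        CustomerDto(long id, String name) {
            this.id = id;
            this.name = name;
        }
    }

    // The DAO interface belongs to the general data provider layer.
    interface CustomerDao {
        CustomerDto findById(long id);
    }

    // The Hibernate implementation lives in the database library; the
    // H-Object never leaks out of it.
    class HibernateCustomerDao implements CustomerDao {
        private final org.hibernate.SessionFactory sessionFactory;

        HibernateCustomerDao(org.hibernate.SessionFactory sessionFactory) {
            this.sessionFactory = sessionFactory;
        }

        public CustomerDto findById(long id) {
            CustomerEntity e = (CustomerEntity) sessionFactory
                    .getCurrentSession()
                    .get(CustomerEntity.class, id);
            // The only extra cost of Design #1 is this mechanical translation.
            return e == null ? null : new CustomerDto(e.getId(), e.getName());
        }
    }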

    Read the article

  • How do I bind an iTunes style source list to an NSTableView using Core Data?

    - by Austin
    I have an iTunes-style interface in my application: a source list (NSOutlineView) on the left that contains different libraries and playlists, with an NSTableView on the right side of the interface displaying information for "Presentations". Similar to iTunes, I am showing the same type of information in the table view whether a library or a playlist is selected (title, author, date created, etc). I currently have an NSArrayController connected to my NSTableView and was setting the fetch predicate based on what was selected in the source list. This works fine when selecting a library, because I can just set the fetch predicate to filter by the "type" field in my Presentation Core Data entity. When I try to adjust the fetch predicate for a playlist, however, it doesn't look like there is any way to set it, because I've got a table in between Playlists and Presentations to keep track of the order within the playlist. According to the Apple docs, these types of predicates are not doable with Core Data (it basically doesn't support multiple inner joins). Below is the relevant portion of my data model. Is my data model set up incorrectly? Should I drop the NSArrayController and wire the NSTableView up by hand? I'm trying to figure out if there is a simple fix, or really a design flaw.

    Read the article

  • how to implement enhanced session handling in PHP

    - by praksant
    Hi, I'm working with sessions in PHP, and I have different applications on a single domain. The problem is that cookies are domain-specific, so session ids are sent to every page on the domain (I don't know if there is a way to make cookies work differently), and session variables are therefore visible in every page on this domain. I'm trying to implement a custom session manager to overcome this behavior, but I'm not sure if I'm thinking about it right. I want to completely avoid PHP's session system and make a global object which would store session data and save it to the database at the end of the script. On first access I would generate a unique session_id and create a cookie. At the end of the script I would save the session data with the session_id, timestamps for the session start and last access, and data from $_SERVER such as REMOTE_ADDR, REMOTE_PORT and HTTP_USER_AGENT. On every access I would check the database for the session_id sent in the cookie from the client, check the IP, port and user agent (for security), and read the data into the session variable (if not expired); if the session_id has expired, I would delete it from the database. That session variable would be implemented as a singleton (I know I would get tight coupling with this class, but I don't know of a better solution). I'm trying to get the following benefits: session variables invisible to other scripts on the same server and same domain; custom management of session expiration; a way to see open sessions (something like a list of online users). I'm not sure if I'm overlooking any disadvantages of this solution. Is there a better way? Thank you!!

    Read the article

  • How to write these two queries for a simple data warehouse, using ANSI SQL?

    - by morpheous
    I am writing a simple data warehouse that will allow me to query the table to observe periodic (say weekly) changes in data, as well as changes in the change of the data (e.g. the week-to-week change in the weekly sale amount). For the purposes of simplicity, I will present very simplified (almost trivialized) versions of the tables I am using here. The sales data table is a view and has the following structure:

    CREATE TABLE sales_data (
        sales_time date NOT NULL,
        sales_amt double NOT NULL
    );

    For the purposes of this question I have left out other fields you would expect to see - like product_id, sales_person_id, etc. - as they have no direct relevance to this question. AFAICT, the only fields that will be used in the query are the sales_time and the sales_amt fields (unless I am mistaken). I also have a date dimension table with the following structure:

    CREATE TABLE date_dimension (
        id integer NOT NULL,
        datestamp date NOT NULL,
        day_part integer NOT NULL,
        week_part integer NOT NULL,
        month_part integer NOT NULL,
        qtr_part integer NOT NULL,
        year_part integer NOT NULL
    );

    which partitions dates into reporting ranges. I need to write queries that will allow me to do the following:

    1. Return the week-on-week change in sales_amt for a specified period. For example, the change between sales today and sales N days ago - where N is a positive integer (N == 7 in this case).

    2. Return the change in the change of sales_amt for a specified period. In (1) we calculated the week-on-week change; now we want to know how that change differs from the (week-on-week) change calculated last week.

    I am stuck at this point, however, as SQL is my weakest skill. I would be grateful if an SQL master could explain how I can write these queries in a DB-agnostic way (i.e. using ANSI SQL).

    Read the article

  • [C#] How to receive uncrackable data or so? ;P

    - by Prix
    Hi, I am working on a C# application that communicates with my website and retrieves some information from it, using SSL, which is working just fine. Now what I want/need is a way to receive encrypted or encoded or obfuscated data such that, if someone cracks my application, they will not be able to decrypt the data, because decrypting it needs something from the server (API, website) - and yet the application needs to decrypt it in order to use it... Initially I was thinking of an internal RSA key pair to send and receive the encrypted data, but let's consider that someone has cracked the application: they could just replace those keys with keys they have made. So I have been looking into some methods but haven't been able to think of any way to harden this... I was learning about RSA and encryption and started developing this as self-learning, got involved with it, and now I am trying to figure out a way to receive data like that... I have considered obfuscating and compiling my code with packers etc., but this is not about packing... I am more interested in knowing a better way to secure what I described. I know it may be - or even is - impossible, but I am still looking for some approach. I would appreciate advice, suggestions and C# code samples; if you need more information or anything, please let me know.
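
    (The question itself identifies the hard limit: whoever can modify the binary can also swap any embedded key, and no pure client-side scheme prevents that. What server-side signing does buy is tamper-evidence for the data itself. A minimal sketch of the idea - written in Java for illustration, though the asker's project is C#, where RSACryptoServiceProvider offers the same operations:)

    import java.nio.charset.StandardCharsets;
    import java.security.KeyPair;
    import java.security.KeyPairGenerator;
    import java.security.Signature;

    public class SignedResponseDemo {
        public static void main(String[] args) throws Exception {
            // Server side: the private key never leaves the server.
            KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
            gen.initialize(2048);
            KeyPair pair = gen.generateKeyPair();

            byte[] payload = "response from the API".getBytes(StandardCharsets.UTF_8);
            Signature signer = Signature.getInstance("SHA256withRSA");
            signer.initSign(pair.getPrivate());
            signer.update(payload);
            byte[] sig = signer.sign();

            // Client side: verify with the embedded public key. A cracked client
            // can still swap this key -- that limitation has no client-side fix.
            Signature verifier = Signature.getInstance("SHA256withRSA");
            verifier.initVerify(pair.getPublic());
            verifier.update(payload);
            System.out.println("authentic: " + verifier.verify(sig));
        }
    }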

    Read the article

  • How do I authenticate an ADO.NET Data Service?

    - by lsb
    Hi! I've created an ADO.NET Data Service hosted in an Azure worker role. I want to pass credentials from a simple console client to the service, then validate them using a QueryInterceptor. Unfortunately, the credentials don't seem to be making it over the wire. The following is a simplified version of the code I'm using, starting with the DataService on the server:

    using System;
    using System.Data.Services;
    using System.Linq.Expressions;
    using System.ServiceModel;
    using System.Web;

    namespace Oslo.Worker
    {
        [ServiceBehavior(AddressFilterMode = AddressFilterMode.Any)]
        public class AdminService : DataService<OsloEntities>
        {
            public static void InitializeService(IDataServiceConfiguration config)
            {
                config.SetEntitySetAccessRule("*", EntitySetRights.All);
                config.SetServiceOperationAccessRule("*", ServiceOperationRights.All);
            }

            [QueryInterceptor("Pairs")]
            public Expression<Func<Pair, bool>> OnQueryPairs()
            {
                // This doesn't work!!!
                if (HttpContext.Current.User.Identity.Name != "ADMIN")
                    throw new Exception("Ooops!");

                return p => true;
            }
        }
    }

    Here's the AdminHost I'm using to instantiate the AdminService in my Azure worker role:

    using System;
    using System.Data.Services;

    namespace Oslo.Worker
    {
        public class AdminHost : DataServiceHost
        {
            public AdminHost(Uri baseAddress)
                : base(typeof(AdminService), new Uri[] { baseAddress })
            {
            }
        }
    }

    And finally, here's the client code:

    using System;
    using System.Data.Services.Client;
    using System.Net;
    using Oslo.Shared;

    namespace Oslo.ClientTest
    {
        public class AdminContext : DataServiceContext
        {
            public AdminContext(Uri serviceRoot, string userName, string password)
                : base(serviceRoot)
            {
                Credentials = new NetworkCredential(userName, password);
            }

            public DataServiceQuery<Order> Orders
            {
                get { return base.CreateQuery<Order>("Orders"); }
            }
        }
    }

    I should mention that the code works great, with the single exception that the credentials are not being passed over the wire. Any help in this regard would be greatly appreciated! Thanks....

    Read the article

  • Handling large datasets with PHP/Drupal

    - by jo
    Hi all, I have a report page that deals with ~700k records from a database table. I can display this on a webpage using paging to break up the results. However, my export to PDF/CSV functions rely on processing the entire data set at once, and I'm hitting my 256MB memory limit at around 250k rows. I don't feel comfortable increasing the memory limit, and I haven't got the ability to use MySQL's SELECT ... INTO OUTFILE to just serve a pre-generated CSV. However, I can't really see a way of serving up large data sets with Drupal using something like:

    $form = array();
    $table_headers = array();
    $table_rows = array();

    $data = db_query("a query to get the whole dataset");
    while ($row = db_fetch_object($data)) {
        $table_rows[] = $row->some_attribute;
    }

    $form['report'] = array('#value' => theme('table', $table_headers, $table_rows));

    return $form;

    Is there a way of getting around what is essentially appending to a giant array of arrays? At the moment I don't see how I can offer any meaningful report pages with Drupal because of this. Thanks

    Read the article

  • faster way to compare rows in a data frame

    - by aguiar
    Consider the data frame below. I want to compare each row with the rows below it and then take the rows that are equal in more than 3 values. I wrote the code below, but it is very slow if you have a large data frame. How could I do this faster?

    data <- as.data.frame(matrix(c(10,11,10,13,9,
                                   10,11,10,14,9,
                                   10,10,8,12,9,
                                   10,11,10,13,9,
                                   13,13,10,13,9), nrow=5, byrow=T))
    rownames(data) <- c("sample_1","sample_2","sample_3","sample_4","sample_5")

    > data
             V1 V2 V3 V4 V5
    sample_1 10 11 10 13  9
    sample_2 10 11 10 14  9
    sample_3 10 10  8 12  9
    sample_4 10 11 10 13  9
    sample_5 13 13 10 13  9

    tab <- data.frame(sample = NA, duplicate = NA, matches = NA)
    dfrow <- 1
    for (i in 1:nrow(data)) {
        sample <- data[i, ]
        for (j in (i+1):nrow(data)) if (i+1 <= nrow(data)) {
            matches <- 0
            for (V in 1:ncol(data)) {
                if (data[j, V] == sample[, V]) {
                    matches <- matches + 1
                }
            }
            if (matches > 3) {
                duplicate <- data[j, ]
                pair <- cbind(rownames(sample), rownames(duplicate), matches)
                tab[dfrow, ] <- pair
                dfrow <- dfrow + 1
            }
        }
    }

    > tab
        sample duplicate matches
    1 sample_1  sample_2       4
    2 sample_1  sample_4       5
    3 sample_2  sample_4       4

    Read the article

  • CRM@Oracle Series: Showcasing Innovation with Oracle Customer Hub

    - by tony.berk
    When is having too many customers a challenge? It is not something too many people would complain about. But from a data perspective, one challenge is to keep each customer's data consistent across multiple enterprise systems such as CRM, ERP, and all of your other related applications. Buckle your seat belts, we are going a bit technical today... If you have ever tried it, you know it isn't easy. If you haven't, don't go there alone! Customer data integration projects are challenging and, depending on the environment, require sharp, innovative people to succeed. Want to hear from some guys who have done it and succeeded? Here is an interview with Dan Lanir and Afzal Asif from Oracle's Applications IT CRM Systems group on implementing Oracle Customer Hub and innovation. For more interesting discussions on innovation, check out the Oracle Innovation Showcase.

    Read the article

  • Copying Columns from Grid to Clipboard in SQL Developer

    - by thatjeffsmith
    There are several ways to get data from a query or a table|view to the clipboard. You know the tried and true: copy and paste. But what if you only want one or more columns, not every column? There are several ways to do this; let's see if we can't identify all of them.

    Write your query to only include the data you want. Obvious? Yes. Needed to be said? Definitely. The best tuning tip is to only ask for the data you need, only when you absolutely need it. But let's look at a few more practical ways to do this.

    Hide the unwanted columns. Mouse right-click on a column header. In the context menu, select 'Columns.' Hide the columns you don't want. Copy and paste.

    Mouse-select the columns. Obvious, but a bit painful. For a very large dataset, you'll be holding down the Shift and PageDown buttons - but it works. Remember to use Ctrl+Shift+C to get the column headers with the data.

    Use the Export Wizard. This used to be called 'Unload' - agreed, not a great name, so we changed it. In a grid, right mouse click on the data, and on the context menu, select 'Export...' Select your format - I suggest 'delimited' or 'fixed' for copying data to the clipboard. You can export to the clipboard, yes you can! Click 'Next.' Click in the Columns dialog, and choose the columns you want copied. Click 'Finish.' Alt- or Ctrl-Tab to your window or application of choice. And paste!

    "FIRST_NAME" "LAST_NAME"
    "Donald" "OConnell"
    "Douglas" "Grant"
    "Jennifer" "Whalen"
    "Pat" "Fay"
    "Susan" "Mavris"
    "William" "Gietz"
    "Alexander" "Hunold"
    "Bruce" "Ernst"
    "David" "Austin"
    "Valli" "Pataballa"
    "Diana" "Lorentz"
    "Daniel" "Faviet"
    "John" "Chen"
    "Ismael" "Sciarra"
    "Jose Manuel" "Urman"
    "Luis" "Popp"
    "Alexander" "Khoo"
    "Shelli" "Baida"
    "Sigal" "Tobias"
    "Guy" "Himuro"
    "Karen" "Colmenares"
    "Matthew" "Weiss"
    "Adam" "Fripp"
    "Payam" "Kaufling"
    "Shanta" "Vollman"
    "Kevin" "Mourgos"
    "Julia" "Nayer"
    "Irene" "Mikkilineni"
    ...

    There are probably at least 2 or 3 more ways, but try these and let me know how we can improve things. I've already gotten a request to include the SQL text used to populate the dataset in the copy to clipboard, and it's now on our to-do list.

    Read the article
