Search Results

Search found 74621 results on 2985 pages for 'oracle platform migration data migration'.


  • Will migrating Android/iOS applications to Windows Phone soon be possible? A Microsoft patent reveals a transition service

    Will migrating Android/iOS applications to Windows Phone soon be possible? A Microsoft patent reveals a new service to ease the transition to its OS. A mobile platform's success also depends on the quality and quantity of the applications available in its store. Microsoft is aware of this and has already devised several strategies to attract mobile developers. The firm has, for example, released tools to help Android and iOS developers port their existing applications to Windows Phone, easing the move from another ecosystem to its platform. But the company has no intention of stopping there. According to a patent filing...

    Read the article

  • Why is selecting specified columns, and all, wrong in Oracle SQL?

    - by TomatoSandwich
    Say I have a select statement that goes:

        select * from animals

    That gives a query result of all the columns in the table. Now, say the 42nd column of the table animals is is_parent, and I want to return it in my results just after gender, so I can see it more easily. But I also want all the other columns.

        select is_parent, * from animals

    This returns ORA-00936: missing expression. The same statement works fine in Sybase, and I know that you need to add a table alias to the animals table to get it to work (select is_parent, a.* from animals a), but why does Oracle need a table alias to be able to work out the select?
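
    For reference, a minimal sketch of the failing and working forms (table and column names as in the question):

        -- Fails with ORA-00936: a bare * cannot be combined with other select-list items
        SELECT is_parent, * FROM animals;

        -- Works: qualifying * with a table alias removes the ambiguity
        SELECT a.is_parent, a.* FROM animals a;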

    Read the article

  • Geo Fence: how to find if a point or a shape is inside a polygon using Oracle Spatial

    - by Abdul
    Hi folks, how do I find whether a point or a polygon is inside another polygon using an Oracle Spatial SQL query? Here is the scenario: I have a table (STATE_TABLE) which contains a spatial type (sdo_geometry) holding a polygon (of, say, a state), and I have another table (UNIVERSITY_TABLE) whose spatial data (sdo_geometry, point or polygon) represents universities. Now, how do I find whether selected universities are in a given state using a SQL select statement? Primarily, I want to detect the existence of given objects inside a geofence. Thanks.
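
    A sketch of the kind of query that answers this, assuming both tables have a geometry column named GEOM with spatial indexes in place (the column and name fields are hypothetical); SDO_INSIDE and SDO_RELATE are standard Oracle Spatial operators:

        SELECT u.university_name
        FROM   university_table u, state_table s
        WHERE  s.state_name = 'OHIO'
        AND    SDO_INSIDE(u.geom, s.geom) = 'TRUE';

        -- To also match objects touching the state boundary:
        -- AND SDO_RELATE(u.geom, s.geom, 'mask=INSIDE+COVEREDBY') = 'TRUE'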

    Read the article

  • Oracle SQL: Multiple Subqueries Unioned Without Running Original Query Multiple Times.

    - by Bob
    So I've got a very large database and need to work on a subset, ~1% of the data, to dump into an Excel spreadsheet to make a graph. Ideally, I could select out the subset of data once and then run multiple select queries on that, which are then UNIONed together. Is this even possible? I can't seem to find anyone else trying to do this, and it would improve the performance of my current query quite a bit. Right now I have something like this:

        SELECT (
            SELECT (
                SELECT ( long list of requirements )
                UNION
                SELECT ( slightly different long list of requirements )
            )
        )

    It would be nice if I could factor out the commonalities of the two long requirement lists and keep only the simple differences between the two select statements being unioned.
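
    Oracle's subquery factoring (the WITH clause) does exactly this: the shared subset is defined once and referenced by each branch of the UNION. A sketch with invented table and column names:

        WITH subset AS (
            SELECT /*+ MATERIALIZE */ *      -- undocumented but commonly used hint to force one evaluation
            FROM   big_table
            WHERE  common_filter = 'Y'       -- the shared ~1% restriction
        )
        SELECT col_a, col_b FROM subset WHERE requirement_one = 1
        UNION
        SELECT col_a, col_b FROM subset WHERE requirement_two = 2;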

    Read the article

  • d:DesignData issue, Visual Studio 2010 can't build after adding sample design data with Expression Blend

    - by Valko
    Hi, my VS 2010 solution and Silverlight project build fine. Then I open the MyView.xaml view in Expression Blend 4 and add sample data from a class (a class defined in the same project). After adding the new sample design data with Expression Blend 4, everything looks fine: you see the added sample data in EB 4, and you also see the data in the VS 2010 designer. But after closing EB 4, the next VS 2010 build gives me these errors:

        Error 7: XAML Namespace http://schemas.microsoft.com/expression/blend/2008 is not resolved. C:\Code\source\...myview.xaml
        Error 12: Object reference not set to an instance of an object. ... TestSampleData.xaml

    When I open TestSampleData.xaml, I see that the namespace for my class used to define the sample data is not recognized, even though the namespace and the class itself exist in the same project! If I remove the design data from MyView.xaml:

        d:DataContext="{d:DesignData /SampleData/TestSampleData.xaml}"

    it builds fine, and the namespace in TestSampleData.xaml is recognized this time. If I then add the d:DataContext attribute back, I again see the sample data in the VS 2010 designer, but the next build fails, and again Visual Studio can't find the namespace in my TestSampleData.xaml containing the sample data. That cycle is driving me crazy. Am I missing something here? Is it not possible to have the class defining your sample design data in the same project as the MyView.xaml view? Cheers, Valko

    Read the article

  • SQL Server Clustered Index: (Physical) Data Page Order

    - by scherand
    I am struggling to understand what a clustered index in SQL Server 2005 is. I read the MSDN article Clustered Index Structures (among other things), but I am still unsure whether I understand it correctly. The (main) question is: what happens if I insert a row (with a "low" key) into a table with a clustered index? The above-mentioned MSDN article states:

        The pages in the data chain and the rows in them are ordered on the value of the clustered index key.

    And "Using Clustered Indexes", for example, states:

        For example, if a record is added to the table that is close to the beginning of the sequentially ordered list, any records in the table after that record will need to shift to allow the record to be inserted.

    Does this mean that if I insert a row with a very "low" key into a table that already contains a gazillion rows, literally all rows are physically shifted on disk? I cannot believe that; it would take ages, no? Or is it rather (as I suspect) that there are two scenarios, depending on how "full" the first data page is:

    A) If the page has enough free space to accommodate the record, it is placed into the existing data page, and data might be (physically) reordered within that page.

    B) If the page does not have enough free space for the record, a new data page is created (anywhere on the disk!) and "linked" into the front of the leaf level of the B-tree.

    This would then mean the "physical order" of the data is restricted to the page level (i.e. within a data page) but not to the pages residing on consecutive blocks of the physical hard drive; the data pages are just linked together in the correct order. Formulated another way: if SQL Server needs to read the first N rows of a table that has a clustered index, it can read the data pages sequentially (following the links), but these pages are not (necessarily) in sequence block-wise on disk (so the disk head has to move "randomly"). How close am I? :)
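
    For what it's worth, scenario B is essentially a page split, and its after-effects can be observed. A sketch, assuming SQL Server 2005's physical-stats DMV (database and table names hypothetical):

        -- Rising fragmentation after "low"-key inserts indicates page splits:
        -- pages linked in logical order but scattered on disk
        SELECT index_id, avg_fragmentation_in_percent, fragment_count
        FROM   sys.dm_db_index_physical_stats(
                   DB_ID(N'MyDb'), OBJECT_ID(N'dbo.MyTable'), NULL, NULL, 'LIMITED');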

    Read the article

  • What should a developer know before building a public web site?

    - by Joel Coehoorn
    What things should a programmer implementing the technical details of a web site address before making the site public? If Jeff Atwood can forget about HttpOnly cookies, sitemaps, and cross-site request forgeries all in the same site, what important thing could I be forgetting as well? I'm thinking about this from a web developer's perspective, such that someone else is creating the actual design and content for the site. So while usability and content may be more important than the platform, you the programmer have little say in that. What you do need to worry about is that your implementation of the platform is stable, performs well, is secure, and meets any other business goals (like not costing too much, not taking too long to build, and ranking as well with Google as the content supports). Think of this from the perspective of a developer who's done some work for intranet-type applications in a fairly trusted environment, and is about to have his first shot at putting out a potentially popular site for the entire big bad world wide web. Also: I'm looking for something more specific than just a vague "web standards" response. I mean, HTML, JavaScript, and CSS over HTTP are pretty much a given, especially when I've already specified that you're a professional web developer. So going beyond that: which standards? In what circumstances, and why? Provide a link to the standard's specification. This question is community wiki, so please feel free to edit the answer to add links to good articles that will help explain or teach each particular point.

    Read the article

  • storing/retrieving data for graph with long continuous stretches

    - by james
    I have a large two-dimensional data set which I would like to graph. The graph is displayed in a browser and the data is retrieved via AJAX. Long stretches of this graph will be continuous - e.g., for x=0 through x=1000, y=9; then for x=1001 through x=1100, y=80; etc. The approach I'm considering is to send (from the server) and store (in the browser) only the points where the data changes. So for the example above, I would set data[0] = 9, then data[1001] = 80. Then, given x=999 for example, retrieving data[999] would actually look up data[0]. The problem that arises is finding a dictionary-like data structure which behaves like this. The approach I'm considering is to store the data in a traditional dictionary object and also maintain a sorted array of that object's keys. When given x=999, it would look at the midpoint of this array, determine whether the nearest lower key is left or right of that midpoint, then repeat with the correct subsection, etc. Does anyone have thoughts on this problem/approach?
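
    What's described is a step function stored as its change points, plus a binary search for the largest key at or below x. A minimal JavaScript sketch of that idea (names invented):

        var data = { 0: 9, 1001: 80 };   // change points only
        var keys = [];
        for (var k in data) { keys.push(Number(k)); }
        keys.sort(function (a, b) { return a - b; });

        // Binary search for the largest key <= x
        function valueAt(x) {
            var lo = 0, hi = keys.length - 1, best = keys[0];
            while (lo <= hi) {
                var mid = (lo + hi) >> 1;
                if (keys[mid] <= x) { best = keys[mid]; lo = mid + 1; }
                else { hi = mid - 1; }
            }
            return data[best];
        }
        // valueAt(999) -> 9, valueAt(1050) -> 80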

    Read the article

  • Dojox Datagrid contains data, but shows up as empty

    - by Vivek
    I'd really appreciate any help on this. There is this Dojox DataGrid that I'm creating programmatically and supplying JSON data. As of now, I'm creating this data within JavaScript itself. Please refer to the code sample below.

        var upgradeStageStructure = [{
            cells: [
                { field: "stage", name: "Stage", width: "50%", styles: 'text-align: left;' },
                { field: "status", name: "Status", width: "50%", styles: 'text-align: left;' }
            ]
        }];

        var upgradeStageData = [
            { id: 1, stage: "Preparation", status: "Complete" },
            { id: 2, stage: "Formatting", status: "Complete" },
            { id: 3, stage: "OS Installation", status: "Complete" },
            { id: 4, stage: "OS Post-Installation", status: "In Progress" },
            { id: 5, stage: "Application Installation", status: "Not Started" },
            { id: 6, stage: "Application Post-Installation", status: "Not Started" }
        ];

        var stagestore = new dojo.data.ItemFileReadStore({
            data: { identifier: "id", items: upgradeStageData }
        });

        var upgradeStatusGrid = new dojox.grid.DataGrid({
            autoHeight: true,
            style: "width:400px;padding:0em;margin:0em;",
            store: stagestore,
            clientSort: false,
            rowSelector: '20px',
            structure: upgradeStageStructure,
            columnReordering: false,
            selectable: false,
            singleClickEdit: false,
            selectionMode: 'none',
            loadingMessage: 'Loading Upgrade Stages',
            noDataMessage: 'There is no data',
            errorMessage: 'Failed to load Upgrade Status'
        });

        dojo.byId('progressIndicator').innerHTML = '';
        dojo.byId('progressIndicator').appendChild(upgradeStatusGrid.domNode);
        upgradeStatusGrid.startup();

    The problem is that I am not seeing anything within the grid upon display (no headers, no data). But I know for sure that the data in the grid does exist and the grid is properly initialized, because I called alert(grid.domNode.innerHTML); and the resulting HTML does show a table containing header rows and the above data. (I would link an image illustrating what I see when I display the page, but I can't post images since my account is new here.) There are 6 rows for the 6 pieces of data I have created, but the grid is a mess. Please help out if you think you know what could be going wrong. Thanks in advance, Viv
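
    A header-less, collapsed grid whose markup is nonetheless intact is the classic symptom of the grid stylesheets not being loaded. A sketch of what is usually missing, assuming standard Dojo 1.x resource paths:

        <link rel="stylesheet" href="dojo/dojox/grid/resources/Grid.css" />
        <link rel="stylesheet" href="dojo/dojox/grid/resources/tundraGrid.css" />
        <!-- the matching theme class must also be on the body -->
        <body class="tundra">

    plus a dojo.require("dojox.grid.DataGrid") before the grid is constructed.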

    Read the article

  • rpy2: Converting a data.frame to a numpy array

    - by Mike Dewar
    I have a data.frame in R. It contains a lot of data: gene expression levels from many (125) arrays. I'd like the data in Python, due mostly to my incompetence in R and the fact that this was supposed to be a 30-minute job. I would like the following code to work. To understand this code, know that the variable path contains the full path to my data set which, when loaded, gives me a variable called immgen. Know that immgen is an object (a Bioconductor ExpressionSet object) and that exprs(immgen) returns a data frame with 125 columns (experiments) and tens of thousands of rows (named genes).

        robjects.r("load('%s')" % path)  # loads immgen
        e = robjects.r['data.frame']("exprs(immgen)")
        expression_data = np.array(e)

    This code runs, but expression_data is simply array([[1]]). I'm pretty sure that e doesn't represent the data frame generated by exprs(), due to things like:

        In [40]: e._get_ncol()
        Out[40]: 1

        In [41]: e._get_nrow()
        Out[41]: 1

    But then again, who knows? Even if e did represent my data.frame, the fact that it doesn't convert straight to an array would be fair enough - a data frame has more in it than an array (rownames and colnames), so maybe life shouldn't be this easy. However, I still can't work out how to perform the conversion. The documentation is a bit too terse for me, though my limited understanding of the headings in the docs implies that this should be possible. Does anyone have any thoughts?
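
    The quoted call builds a new one-cell data.frame whose single value is the literal string "exprs(immgen)" - the string is never evaluated as R code, which is why e is 1x1. Evaluating the expression in R instead should return the matrix itself. A sketch under the same variable names (conversion details may vary across rpy2 versions; rpy2's numpy conversion layer can also automate this):

        import numpy as np
        import rpy2.robjects as robjects

        robjects.r("load('%s')" % path)      # loads immgen, as in the question
        e = robjects.r("exprs(immgen)")      # evaluated in R: the expression matrix
        nrow, ncol = robjects.r("dim(exprs(immgen))")
        # R matrices are column-major, so reshape in Fortran order
        expression_data = np.array(e).reshape((int(nrow), int(ncol)), order='F')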

    Read the article

  • How to find the latest row for each group of data

    - by Jason
    Hi all, I have a tricky problem that I'm trying to find the most effective method to solve. Here's a simplified version of my view structure.

    Table: Audits

        AuditID | PublicationID | AuditEndDate | AuditStartDate
        1       | 3             | 13/05/2010   | 01/01/2010
        2       | 1             | 31/12/2009   | 01/10/2009
        3       | 3             | 31/03/2010   | 01/01/2010
        4       | 3             | 31/12/2009   | 01/10/2009
        5       | 2             | 31/03/2010   | 01/01/2010
        6       | 2             | 31/12/2009   | 01/10/2009
        7       | 1             | 30/09/2009   | 01/01/2009

    There are 3 queries that I need from this: one to get all the data; the next to get only the history data (that is, everything but the latest data item by AuditEndDate); and the last to obtain only the latest data item (by AuditEndDate). There's an added layer of complexity in that I have a date restriction (on a per user/group basis) where certain user groups can only see between certain dates. You'll notice this in the where clauses as AuditEndDate <= blah and AuditStartDate >= blah.

    For each publication, select all the data available:

        select * from Audits
        where AuditEndDate <= '31/03/10' and AuditStartDate >= '06/06/2009';

    For each publication, select all the data but exclude the latest data item (by AuditEndDate):

        select * from Audits
        left join (select AuditId as aid, publicationID as pid, max(auditEndDate) as pend
                   from Audit
                   where auditenddate <= '31/03/2009' /* user restrict */
                   group by pid) Ax
               on Ax.pid = Audit.pubid
        where pend != Audits.auditenddate
          and AuditEndDate <= '31/03/10' and AuditStartDate >= '06/06/2009' /* user restrict */

    For each publication, select only the latest data available (by AuditEndDate):

        select * from Audits
        left join (select AuditId as aid, publicationID as pid, max(auditEndDate) as pend
                   from Audit
                   where auditenddate <= '31/03/2009' /* user restrict */
                   group by pid) Ax
               on Ax.pid = Audit.pubid
        where pend = Audits.auditenddate
          and AuditEndDate <= '31/03/10' and AuditStartDate >= '06/06/2009' /* user restrict */

    So at the moment, queries 1 and 3 work fine, but query 2 just returns all the data instead of the restricted set. Can anyone help me? Thanks, Jason
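
    For what it's worth, a window function can express all three variants from one pattern. A sketch of the "latest row per publication" case (assumes a database with ROW_NUMBER support):

        select *
        from (select a.*,
                     row_number() over (partition by PublicationID
                                        order by AuditEndDate desc) as rn
              from   Audits a
              where  AuditEndDate <= '31/03/10'
                and  AuditStartDate >= '06/06/2009') x   -- user restriction
        where rn = 1;   -- use rn > 1 for the history-only variant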

    Read the article

  • Multiple calls to data service from SL3?

    - by Chris
    I have an SL3 application that makes asynchronous calls to a data service. Basically, there is a treeview that is bound to a collection of objects. The idea is that as a user selects a specific TreeViewItem, a call is made to the data service, with a parameter specific to the selected TreeViewItem being passed to the corresponding web method in the data service. The data service returns data to the SL3 client, and the client presents the data to the user. This works well. The problem is that when users start to navigate through the treeview using the arrow keys on their keyboard, they could press the down arrow key, for example, 10 times, and 10 calls will be made to the data service; each of the 10 items will then be displayed to the user momentarily, finishing with the data for the most recently selected TreeViewItem. So, onto the question: how can I put in some form of delay, so that someone can navigate quickly through the treeview and a call is made to the data service only once they stop at a certain TreeViewItem? Thanks for any suggestions. Chris
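
    The usual pattern here is a short debounce: restart a timer on every selection change and call the service only when the timer fires. A C# sketch using Silverlight's DispatcherTimer; MyPage, myTree, and LoadDataForItem are hypothetical names:

        private readonly DispatcherTimer _selectionTimer = new DispatcherTimer
        {
            Interval = TimeSpan.FromMilliseconds(400)   // tune to taste
        };

        public MyPage()
        {
            InitializeComponent();
            _selectionTimer.Tick += (s, e) =>
            {
                _selectionTimer.Stop();
                LoadDataForItem(myTree.SelectedItem);   // the single data-service call
            };
        }

        private void Tree_SelectedItemChanged(object sender,
                RoutedPropertyChangedEventArgs<object> e)
        {
            _selectionTimer.Stop();    // discard the pending call
            _selectionTimer.Start();   // re-arm for the newest selection
        }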

    Read the article

  • Unable to set up ODBC after installing ODAC (Xcopy)

    - by rwilson513
    We are trying to use the ODAC Xcopy deployment to minimize the footprint of installing the Oracle 11g client. Currently, we use the Oracle 11g Admin install (~700 MB). I've tried using the ODAC Xcopy, and that works. However, the only issue I now have is that I cannot set up an ODBC data source on the target system by just installing the ODAC Xcopy. After installing ODAC (on Windows XP, FYI), I go to Control Panel--Admin Tools--Data Sources (ODBC)--System DSN--Add--Microsoft ODBC for Oracle. I get the following error:

        "The Oracle(tm) client and networking components were not found. These components are supplied by Oracle and are part of the Oracle Version 7.3 (or greater) client software installation. You will be unable to use this driver until these components have been installed."

    I've tried editing the registry and creating the same keys that the Oracle Admin install creates, but still no luck. I'm not sure how to get past this. Any suggestions? Thanks in advance.

    Read the article

  • memcache is not storing data across requests

    - by morpheous
    I am new to using memcache, so I may be doing something wrong. I have written a wrapper class around memcache. The wrapper class has only static methods, so it is a quasi-singleton. The class looks something like this:

        class myCache
        {
            private static $memcache = null;
            private static $initialized = false;

            public static function init()
            {
                if (self::$initialized) return;
                self::$memcache = new Memcache();
                if (self::configure()) // connects to daemon
                {
                    self::store('foo', 'bar');
                }
                else
                    throw new ConnectionError('I barfed');
            }

            public static function store($key, $data, $flag = MEMCACHE_COMPRESSED, $timeout = 86400)
            {
                if (self::$memcache->get($key) !== false)
                    return self::$memcache->replace($key, $data, $flag, $timeout);
                return self::$memcache->set($key, $data, $flag, $timeout);
            }

            public static function fetch($key)
            {
                return self::$memcache->get($key);
            }
        }

        // in my index.php file, I use the class like this
        require_once('myCache.php');
        myCache::init();
        echo 'Stored value is: ' . myCache::fetch('foo');

    The problem is that the myCache::init() method is executed in full on every page request. I then remembered that static variables do not maintain state across page requests. So I decided instead to store the flag that indicates whether the server contains the start-up data (for our purposes, the key 'foo' with value 'bar') in memcache itself. Once the status flag is stored in memcache itself, it solves the problem of the initialisation data being loaded for every page request (which, quite frankly, defeats the purpose of memcache). However, having solved that problem, when I come to fetch the data from memcache, it is empty. I don't understand what's going on. Can anyone clarify how I can store my data once and retrieve it across page requests? BTW (just to clarify), the get/set is working correctly, and if I allow memcache to load the initialisation data for each page request (which is silly), then the data is available in memcache.
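
    The usual pattern is to let memcache itself be the source of truth: attempt the fetch first and rebuild only on a miss, rather than keeping a separate initialized flag. A PHP sketch against the same wrapper (key and value as in the question):

        $value = myCache::fetch('foo');
        if ($value === false) {            // miss: first request, eviction, or expiry
            $value = 'bar';                // ...rebuild from the real data source here...
            myCache::store('foo', $value);
        }
        echo 'Stored value is: ' . $value;

    (One caveat with this idiom: a legitimately stored boolean false is indistinguishable from a miss.)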

    Read the article

  • Getting zeros between data while reading a binary file in C

    - by indiajoe
    I have binary data which I am reading into an array of long integers using a C programme. A hexdump of the binary data shows that, after the first few data points, the data starts again at a location 0x20000 addresses away. The hexdump output is as shown below:

        0000000 0000 0000 0000 0000 0000 0000 0000 0000
        *
        0020000 0000 0000 0053 0000 0064 0000 006b 0000
        0020010 0066 0000 0068 0000 0066 0000 005d 0000
        0020020 0087 0000 0059 0000 0062 0000 0066 0000
        ........ and so on...

    But when I read it into an array 'data' of long integers with the typical fread command

        fread(data, sizeof(*data), filelength/sizeof(*data), fd);

    the data array fills up with all zeros until it reaches the 0x20000 location. After that, it reads the data correctly. Why is it reading regions where my file is not there? Or how can I make it read only my file, and not anything in between which is not in the file? I know it looks like a trivial problem, but I cannot figure it out even after googling all night. Can anyone suggest where I am going wrong? Other info: I am working on a GNU/Linux machine (the slax-atma distro, to be specific). My C compiler is gcc.
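
    One note worth checking first: in hexdump output, the lone '*' line means "the previous line repeats", so the dump above says the file genuinely contains 0x20000 bytes of zeros before the first non-zero values - fread is reproducing the file faithfully. If that zero block is padding to be skipped rather than data, seeking past it is one option; a C sketch, assuming the offset is fixed at 0x20000 and a hypothetical file name:

        #include <stdio.h>

        int main(void)
        {
            FILE *fd = fopen("data.bin", "rb");
            if (!fd) return 1;

            /* skip the zero padding shown by hexdump before the real data */
            fseek(fd, 0x20000L, SEEK_SET);

            long data[1024];
            size_t n = fread(data, sizeof *data, 1024, fd);
            printf("read %lu values, first = 0x%lx\n", (unsigned long)n, data[0]);

            fclose(fd);
            return 0;
        }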

    Read the article

  • Migrating from a single entity to an abstract parent entity with child entities, NSEntityMigrationPolicy not called.

    - by Jimmy Selgen Nielsen
    Hi. I'm trying to upgrade my current application to use an abstract parent entity with specialized sub-entities. I've created a custom NSEntityMigrationPolicy, and in the mapping model I've set the custom policy to the name of my class. I'm initializing my persistent store like this, which should be fairly standard:

        NSError *error = nil;
        persistentStoreCoordinator = [[NSPersistentStoreCoordinator alloc]
            initWithManagedObjectModel:[self managedObjectModel]];
        NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
            [NSNumber numberWithBool:YES], NSMigratePersistentStoresAutomaticallyOption, nil];
        if (![persistentStoreCoordinator addPersistentStoreWithType:NSSQLiteStoreType
                                                      configuration:nil
                                                                URL:storeUrl
                                                            options:options
                                                              error:&error]) {
            NSLog(@"Error adding persistent store : %@", [error description]);
            NSAssert(error == nil, [error localizedDescription]);
        }

    When I run the app, I get the following error:

        Terminating app due to uncaught exception 'NSInternalInconsistencyException',
        reason: 'The operation couldn't be completed. (Cocoa error 134140.)'

    [error userInfo] contains "reason=Can't find mapping model for migration". I've verified that version 1 of the data model will open, and if I set NSInferMappingModelAutomaticallyOption I get a migration, although my entities are not migrated correctly (as expected). I've verified that the mapping model (cdm) is in the application bundle, but somehow it refuses to find it. I've also set breakpoints and NSLog() statements in the custom migration policy, and none of it runs, with or without NSInferMappingModelAutomaticallyOption. Any hints as to why it seems unable to find the mapping model?

    Read the article

  • AJAX Post Not Sending Data?

    - by Jascha
    I can't for the life of me figure out why this is happening. This is kind of a repost, so forgive me, but I have new data. I am running a JavaScript log-out function called logOut() that makes a jQuery AJAX call to a PHP script...

        function logOut(){
            var data = new Object;
            data.log_out = true;
            $.ajax({
                type: 'POST',
                url: 'http://www.mydomain.com/functions.php',
                data: data,
                success: function() {
                    alert('done');
                }
            });
        }

    The PHP function it calls is here:

        if(isset($_POST['log_out'])){
            $query = "INSERT INTO `token_manager` (`ip_address`) VALUES('logOutSuccess')";
            $connection->runQuery($query); // <-- my own database class...
            // omitted code that clears session etc...
            die();
        }

    Now, 18 hours out of the day this works, but for some reason, every once in a while, the POST data will not trigger my query (this will last about an hour or so). I figured out that the POST data is not being set by adding this at the end of my script:

        $query = "INSERT INTO `token_manager` (`ip_address`) VALUES('POST FAIL')";
        $connection->runQuery($query);

    So now I know for certain my log-out function is being skipped, because my database fills with "POST FAIL" rows where, if it were NOT being skipped, "logOutSuccess" rows would show up instead. I know it is being skipped for two reasons: one, the die() at the end of my first function, and two, if it were a success, a "logOutSuccess" would be registered in the table. Any thoughts? One friend says it's a janky hosting company (hostgator.com). I personally like them because they are cheap and I'm a fan of cPanel. But if that's the case??? Thanks in advance. -J

    Read the article

  • How can I store a large amount of data from a database to XML (memory problem)?

    - by Andrija
    First, I had a problem with getting the data from the database; it took too much memory and failed. I've set -Xmx1500M and I'm using a scrolling ResultSet, so that was taken care of. Now I need to make an XML file from the data, but I can't put it all in one file. At the moment, I'm doing it like this:

        while(rs.next()){
            i++;
            xmlStringBuilder.append("\n\t<row>");
            xmlStringBuilder.append("\n\t\t<ID>" + Util.transformToHTML(rs.getInt("id")) + "</ID>");
            xmlStringBuilder.append("\n\t\t<JED_ID>" + Util.transformToHTML(rs.getInt("jed_id")) + "</JED_ID>");
            xmlStringBuilder.append("\n\t\t<IME_PJ>" + Util.transformToHTML(rs.getString("ime_pj")) + "</IME_PJ>");
            //etc.
            xmlStringBuilder.append("\n\t</row>");
            if (i % 100000 == 0){
                // stores the data to a file with the name i.xml
                storeKBR(xmlStringBuilder.toString(), i);
                xmlStringBuilder = null;
                xmlStringBuilder = new StringBuilder();
            }
        }

    and it works; I get 12 files of 100 MB each. Now, what I'd like to do is have all that data in one file (which I then compress), but if I just remove the if part, I run out of memory. I thought about writing to a file, closing it, then reopening it, but that wouldn't gain me much since I'd have to load the file into memory when I open it. P.S. If there's a better way to release the Builder, do let me know :)
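
    Appending to a file never requires reading it back: a buffered writer holds only its small buffer in memory, so the whole export can stream to a single file. A Java sketch reusing the question's names (rs, Util; column list abbreviated, output file name invented); wrapping the underlying stream in java.util.zip.GZIPOutputStream would compress on the fly:

        import java.io.BufferedWriter;
        import java.io.FileWriter;

        // Stream rows straight to disk instead of accumulating a StringBuilder
        BufferedWriter out = new BufferedWriter(new FileWriter("export.xml"));
        out.write("<rows>");
        while (rs.next()) {
            out.write("\n\t<row>");
            out.write("\n\t\t<ID>" + Util.transformToHTML(rs.getInt("id")) + "</ID>");
            // ... remaining columns ...
            out.write("\n\t</row>");
        }
        out.write("\n</rows>");
        out.close();   // flushes the buffered tail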

    Read the article

  • How to create a platform on top of CSLA? <-- in case it makes sense

    - by Peejay
    Hi all! Here is the scenario: I'm developing an application using CSLA 3.8 / C#.NET, and the application will have different modules. It's like an ERP; it will have accounting, daily time record, recruitment, etc. as modules. Now the requirement is to identify the common entities per module and build a "platform" (<- the boss calls it that) from them. For example, the DTR module will have an entity "Employee", and Recruitment will have "Applicant", so one common entity you can derive both from, which can be put in the platform, is "Person". "Person" will contain typical info like name, address, contact info, etc. I know it sounds like OOP 101. The thing is, I don't know how to go about it. How I wish it were just simple inheritance, but the requirement is to create an API of some sort to be used by the modules, using CSLA. In CSLA you create smart objects, inheriting from the CSLA base classes like BusinessListBase, ReadOnlyListBase, etc., right? What if, for example, I created a BusinessBase Applicant class? It would have properties like salary demand, availability date, etc. Now, for the personal info, I would need the "Person" from the "platform" and apply it to the Applicant class. So in summary I have several questions:

    1. How do I create such a platform?
    2. If such a platform is possible, how would it be implemented on each module's entities? (I'm already inheriting from the CSLA base classes.)
    3. If 1 and 2 are possible, does it have advantages for the development and maintenance of the app?

    The reason I'm asking #3 is because, the way I see it, even if I am able to create such a platform, I would still need to define the platform entity's properties on my module entities to have validation and all. I'm sorry if I'm typing nonsense; I'm really confused. I hope someone can enlighten me. Thank you all!

    Read the article

  • Cross-domain data access in JavaScript

    - by vit
    We have an ASP.Net application hosted on our network and exposed to a specific client. This client wants to be able to import data from their own server into our application. The data is retrieved with an HTTP request and is CSV formatted. The problem is that they do not want to expose their server to our network and are requesting the import to be done on the client side (all clients are from the same network as their server). So, what needs to be done is: They request an import page from our server The client script on the page issues a request to their server to get CSV formatted data The data is sent back to our application This is not a challenge when both servers are on the same domain: a simple hidden iframe or something similar will do the trick, but here what I'm getting is a cross-domain "access denied" error. They also refuse to change the data format to return JSON or XML formatted data. What I tried and learned so far is: Hidden iframe -- "access denied" XMLHttpRequest -- behaviour depends on the browser security settings: may work, may work while nagging a user with security warnings, or may not work at all Dynamic script tags -- would have worked if they could have returned data in JSON format IE client data binding -- the same "access denied" error Is there anything else I can try before giving up and saying that it will not be possible without exposing their server to our application, changing their data format or changing their browser security settings? (DNS trick is not an option, by the way).

    Read the article

  • Exporting de-aggregated data

    - by Ben
    I'm currently working on a data export feature for a survey application. We are using SQL2k8. We store data in a normalized format: QuestionId, RespondentId, Answer. We have a couple of other tables that define the question text for the QuestionId and demographics for the RespondentId... Currently I'm using some dynamic SQL to generate a pivot that joins the question table to the answer table and creates an export; it's working... The problem is that it seems slow, and we don't have that much data (fewer than 50k respondents). Right now I'm thinking, "why am I 'paying' to de-aggregate the data for each query? Why don't I cache that?" The data being exported is based on dynamic criteria. It could be "give me respondents that completed on x date (or range)" or "people that like blue", etc. Because of that, I think I have to cache at the respondent level, find out which respondents are being exported, and then select their combined cached de-aggregated data. To me, the quick and dirty fix is a totally flat table: RespondentId, Question1, Question2, etc. The problem is, we have multiple clients and that doesn't scale, AND I don't want to have to maintain the flattened table as the survey changes. So I'm thinking about putting an XML column on the respondent table and caching the results of a SELECT * FROM Data WHERE RespondentId = x FOR XML AUTO. With that in place, I would then be able to get my export with filtering and XML calls into the XML column. What are you doing to export aggregated data in a flattened format (CSV, Excel, etc.)? Does this approach seem OK? I worry about the cost of XML functions on larger result sets (think SELECT RespondentId, XmlCol.value('//data/question_1', 'nvarchar(50)') AS [Why is there air?], XmlCol.RinseAndRepeat)... Is there a better technology/approach for this? Thanks!
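
    A sketch of the two halves of that idea, assuming a Respondents table with an xml column named CachedData (hypothetical names throughout):

        -- Cache step: store each respondent's de-aggregated rows as XML
        UPDATE r
        SET    CachedData = (SELECT d.QuestionId, d.Answer
                             FROM   Data d
                             WHERE  d.RespondentId = r.RespondentId
                             FOR XML AUTO, TYPE)
        FROM   Respondents r;

        -- Export step: shred the cached XML back into flat columns
        SELECT RespondentId,
               CachedData.value('(//d[@QuestionId=1]/@Answer)[1]', 'nvarchar(50)') AS Question1
        FROM   Respondents;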

    Read the article

  • How to deal with a flaw in System.Data.DataTableExtensions.CopyToDataTable()

    - by andy
    Hey guys, so I've come across something which is perhaps a flaw in the extension method .CopyToDataTable. This method is used by importing (in VB.NET) System.Data.DataTableExtensions and then calling the method against an IEnumerable. You would do this if you want to filter a DataTable using LINQ and then restore the DataTable at the end, i.e.:

        Imports System.Data.DataRowExtensions
        Imports System.Data.DataTableExtensions

        Public Class SomeClass
            Private Shared Function GetData() As DataTable
                Dim Data As DataTable
                Data = LegacyADO.NETDBCall
                Data = Data.AsEnumerable.Where(Function(dr) dr.Field(Of Integer)("SomeField") = 5).CopyToDataTable()
                Return Data
            End Function
        End Class

    In the example above, the "Where" filtering might return no results. If this happens, CopyToDataTable throws an exception because there are no DataRows. Why? The correct behavior would be to return a DataTable with Rows.Count = 0. Can anyone think of a clean workaround, such that whoever calls CopyToDataTable doesn't have to be aware of this issue? System.Data.DataTableExtensions is a static class, so I can't override the behavior... any ideas? Have I missed something? Cheers. UPDATE: I have submitted this as an issue to Connect. I would still like some suggestions, but if you agree with me, you could vote up the issue at Connect via the link above.
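
    One clean workaround is your own extension that tolerates an empty sequence by cloning the source table's schema. A VB.NET sketch (module and method names invented):

        Imports System.Collections.Generic
        Imports System.Data
        Imports System.Linq
        Imports System.Runtime.CompilerServices

        Public Module SafeTableExtensions
            ' Behaves like CopyToDataTable, but returns an empty clone of the
            ' template's schema instead of throwing when no rows survive the filter.
            <Extension()> _
            Public Function CopyToDataTableSafe(Of T As DataRow) _
                    (ByVal source As IEnumerable(Of T), ByVal template As DataTable) As DataTable
                If source.Any() Then
                    Return source.CopyToDataTable()
                End If
                Return template.Clone()   ' same columns, Rows.Count = 0
            End Function
        End Module

    The call site then becomes Data.AsEnumerable.Where(...).CopyToDataTableSafe(Data).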

    Read the article

  • Load some data from database and hide it somewhere in a web page

    - by kwokwai
    Hi all, I am trying to load some data (which may be up to a few thousand words) from the database and store it somewhere in an HTML web page, for comparing against the data input by users. I am thinking of loading the data into a textarea inside a div tag and hiding it:

        <div id="reference" style="display:none;">
            <textarea rows="2" cols="20" id="database">
                html, htm, php, asp, jsp, aspx, ctp, thtml, xml, xsl...
            </textarea>
        </div>

        <table border=0 width="100%">
            <tr>
                <td>Username</td>
                <td>
                    <div id="username">
                        <input type="text" name="data" id="data">
                    </div>
                </td>
            </tr>
        </table>

        <script>
        $(document).ready(function(){
            // comparing the data loaded from the database with the user's input
            if($("#data").val() == $("#database").val()) { alert("error"); }
        });
        </script>

    I am not sure if this is the best way to do it, so could you give me some advice and suggest your own methods, please?

    Read the article
