Search Results

Search found 65999 results on 2640 pages for 'large data volumes'.


  • Build ATOM Feed Reader for ADO.net DATA Services feed

    - by khalil
    Hi, I have built an ADO.NET Data Service to expose data from a SQL Server database as XML. What I want to be able to do is create a feed reader for this Atom feed in .NET, or maybe a user control that subscribes to this URI-based Atom feed from the ADO.NET data service and publishes the latest information on our website.
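
    Purely to illustrate the steps such a reader performs (fetch the feed, walk the Atom entries, pull out the fields to publish), here is a rough sketch using standard XML APIs. It is written in Java with a hypothetical service URL; in .NET the same structure maps onto a SyndicationFeed-based reader.

      import java.net.URL;
      import javax.xml.parsers.DocumentBuilderFactory;
      import org.w3c.dom.Document;
      import org.w3c.dom.Element;
      import org.w3c.dom.NodeList;

      public class AtomFeedReader {
          private static final String ATOM = "http://www.w3.org/2005/Atom";

          public static void main(String[] args) throws Exception {
              // Hypothetical ADO.NET Data Services entity-set URL that returns an Atom feed
              URL feed = new URL("http://example.com/MyService.svc/Products");
              DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
              factory.setNamespaceAware(true);
              Document doc = factory.newDocumentBuilder().parse(feed.openStream());
              // Each Atom <entry> corresponds to one entity in the set
              NodeList entries = doc.getElementsByTagNameNS(ATOM, "entry");
              for (int i = 0; i < entries.getLength(); i++) {
                  Element entry = (Element) entries.item(i);
                  String title = entry.getElementsByTagNameNS(ATOM, "title").item(0).getTextContent();
                  String updated = entry.getElementsByTagNameNS(ATOM, "updated").item(0).getTextContent();
                  System.out.println(updated + "  " + title);
              }
          }
      }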

    Read the article

  • How to return plain XML from ADO.NET data service

    - by KHALIL
    Hi, I was wondering how to return plain XML from ADO.NET Data Services. I have exposed an ADO.NET data service to different departments in our company who are not so technical. The data returned is an Atom feed, which is rather hard to read and interpret because of its format, and too much information is returned. People from various departments would execute different queries (HTTP requests), and I wanted the results displayed as simple XML, or at least something more user friendly like HTML. I have tried setting the Accept header of the request to plain XML, but it still returns Atom. Thanks -- Khalil
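
    If the service keeps returning Atom regardless of the Accept header, one workaround is to reshape the feed on the client side into something friendlier, for example with an XSLT stylesheet. A rough sketch (in Java; the service URL and the atom-to-html.xsl stylesheet are hypothetical and would have to be written for the fields you expose):

      import javax.xml.transform.Transformer;
      import javax.xml.transform.TransformerFactory;
      import javax.xml.transform.stream.StreamResult;
      import javax.xml.transform.stream.StreamSource;

      public class AtomToHtml {
          public static void main(String[] args) throws Exception {
              // Hypothetical service URL; the stylesheet is one you would write yourself
              StreamSource feed = new StreamSource("http://example.com/MyService.svc/Orders");
              StreamSource stylesheet = new StreamSource("atom-to-html.xsl");
              Transformer t = TransformerFactory.newInstance().newTransformer(stylesheet);
              // Produces a plain HTML page that non-technical users can open in a browser
              t.transform(feed, new StreamResult("orders.html"));
          }
      }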

    Read the article

  • Can Core Data be used on Linux?

    - by glenc
    This might be a stupid question, but I was wondering whether or not you can use the Core Data libraries on Linux at all. I'm planning how to build the server side of an iPhone app that I'm working on, and have found that you can use PyObjC to get access to Core Data in a Python environment, e.g. using Core Data in a TurboGears web application. At this point I'm thinking that you would have to run the web server on Mac OS X, because I can't find any evidence on the internet that you can access the Objective-C libraries on Linux. I've always written webapps on Linux, but I will obviously make the jump to an OS X server if it allows me to use the same datastore implementation on the iPhone and the server, the only job remaining being the Core Data-to-web-services XML translation that has to happen on the wire.

    Read the article

  • Data Collection (Offline - no internet) and then syncing it to generate reports from server

    - by Nishant
    So, I have a new project I am planning to take on, and I need to know what skills will be required to deliver it. The project involves intensive data collection in the field, where there is no internet access. As part of the data collection, images will be uploaded and will have to be resized, etc. Once the data collection is done, the data needs to be consolidated and reported on. I can think of two ways of generating the report: (1) a PDF that can be designed, or (2) some kind of executable file (since the PDF will be huge due to the many images) that is navigation friendly, with drop-downs and so on. It might not be an executable file; it could be a web page or some other way of delivering this to the client in a friendly, professional way. A PDF will still have to be generated somehow so that it can be printed as a hard copy. What languages and skill sets will I need to accomplish this project?

    Read the article

  • Parse and transform XML with missing elements into table structure

    - by dnlbrky
    I'm trying to parse an XML file. A simplified version of it looks like this:

      x <- '<grandparent><parent><child1>ABC123</child1><child2>1381956044</child2></parent><parent><child2>1397527137</child2></parent><parent><child3>4675</child3></parent><parent><child1>DEF456</child1><child3>3735</child3></parent><parent><child1/><child3>3735</child3></parent></grandparent>'
      library(XML)
      xmlRoot(xmlTreeParse(x))
      ## <grandparent>
      ##   <parent>
      ##     <child1>ABC123</child1>
      ##     <child2>1381956044</child2>
      ##   </parent>
      ##   <parent>
      ##     <child2>1397527137</child2>
      ##   </parent>
      ##   <parent>
      ##     <child3>4675</child3>
      ##   </parent>
      ##   <parent>
      ##     <child1>DEF456</child1>
      ##     <child3>3735</child3>
      ##   </parent>
      ##   <parent>
      ##     <child1/>
      ##     <child3>3735</child3>
      ##   </parent>
      ## </grandparent>

    I'd like to transform the XML into a data.frame / data.table that looks like this:

      parent <- data.frame(child1=c("ABC123",NA,NA,"DEF456",NA),
                           child2=c(1381956044, 1397527137, rep(NA, 3)),
                           child3=c(rep(NA, 2), 4675, 3735, 3735))
      parent
      ##   child1     child2 child3
      ## 1 ABC123 1381956044     NA
      ## 2   <NA> 1397527137     NA
      ## 3   <NA>         NA   4675
      ## 4 DEF456         NA   3735
      ## 5   <NA>         NA   3735

    If each parent node always contained all of the possible elements ("child1", "child2", "child3", etc.), I could use xmlToList and unlist to flatten it, and then dcast to put it into a table. But the XML often has missing child elements. Here is an attempt with incorrect output:

      library(data.table)
      ## Flatten:
      dt <- as.data.table(unlist(xmlToList(x)), keep.rownames=T)
      setnames(dt, c("column", "value"))
      ## Add row numbers, but they're incorrect due to missing XML elements:
      dt[, row:=.SD[,.I], by=column][]
                column      value row
      1: parent.child1     ABC123   1
      2: parent.child2 1381956044   1
      3: parent.child2 1397527137   2
      4: parent.child3       4675   1
      5: parent.child1     DEF456   2
      6: parent.child3       3735   2
      7: parent.child3       3735   3
      ## Reshape from long to wide, but some values end up in the wrong row:
      dcast.data.table(dt, row~column, value.var="value", fill=NA)
      ##    row parent.child1 parent.child2 parent.child3
      ## 1:   1        ABC123    1381956044          4675
      ## 2:   2        DEF456    1397527137          3735
      ## 3:   3            NA            NA          3735

    I won't know ahead of time the names of the child elements, or the number of unique element names among the children of the grandparent, so the answer should be flexible.

    Read the article

  • Selecting element by data attribute

    - by zizo
    Is there an easy and straightforward method to select elements based on their data attributes? For example, select all anchors that have a data attribute named customerID with a value of 22. I am kind of hesitant to use rel or other attributes to store such information, but I find it much harder to select an element based on the data stored in it. Thanks!

    Read the article

  • R: convert data.frame columns from factors to characters

    - by Mike Dewar
    Hi, I have a data frame. Let's call him bob:

      > head(bob)
                                                          phenotype exclusion
      GSM399350 3- 4- 8- 25- 44+ 11b- 11c- 19- NK1.1- Gr1- TER119-
      GSM399351 3- 4- 8- 25- 44+ 11b- 11c- 19- NK1.1- Gr1- TER119-
      GSM399352 3- 4- 8- 25- 44+ 11b- 11c- 19- NK1.1- Gr1- TER119-
      GSM399353 3- 4- 8- 25+ 44+ 11b- 11c- 19- NK1.1- Gr1- TER119-
      GSM399354 3- 4- 8- 25+ 44+ 11b- 11c- 19- NK1.1- Gr1- TER119-
      GSM399355 3- 4- 8- 25+ 44+ 11b- 11c- 19- NK1.1- Gr1- TER119-

    I'd like to concatenate the rows of this data frame (that will be another question). But look:

      > class(bob$phenotype)
      [1] "factor"

    Bob's columns are factors. So, for example:

      > as.character(head(bob))
      [1] "c(3, 3, 3, 6, 6, 6)"        "c(3, 3, 3, 3, 3, 3)"
      [3] "c(29, 29, 29, 30, 30, 30)"

    I don't begin to understand this, but I guess these are indices into the levels of the factors of the columns (of the court of King Caractacus) of bob? Not what I need. Strangely, I can go through the columns of bob by hand and do

      bob$phenotype <- as.character(bob$phenotype)

    which works fine. And, after some typing, I can get a data.frame whose columns are characters rather than factors. So my question is: how can I do this automatically? How do I convert a data.frame with factor columns into a data.frame with character columns without having to go through each column manually? Bonus question: why does the manual approach work?

    Read the article

  • Suggest Cassandra data model for an existing schema

    - by Andriy Bohdan
    Hello guys! I hope someone can help me find a suitable data model to implement with the NoSQL database Apache Cassandra. On top of that, I need it to work under high load and with large amounts of data. Simplified, I have three types of objects: Product, Tag, and ProductTag.

      Product:
        key  - string key
        name - string
        .... - some other fields
      Tag:
        key  - string key
        name - unique tag words
      ProductTag:
        product_key - foreign key referring to a product
        tag_key     - foreign key referring to a tag
        rating      - the rating of this tag for this product

    Each product may have 0 or many tags. A tag may be assigned to 1 or many products, so the relation between products and tags is many-to-many in relational terms. The value of "rating" is updated "very" often. I need to run the following queries: select objects by key; select tags for a product ordered by rating; select products by tag ordered by rating; update rating by product_key and tag_key. The most important thing is to make these queries really fast on large amounts of data, considering that the rating is constantly updated.

    Read the article

  • Shipping Java code with data baked into the .jar

    - by Andrew
    I need to ship some Java code that has an associated set of data. It's a simulator for a device, and I want to be able to include all of the data used for the simulated records in a single .jar file. In this case, each simulated record contains four fields (calling party, called party, start of call, call duration). What's the best way to do that? I've gone down the path of generating the data as Java statements, but IntelliJ doesn't seem particularly happy dealing with a 100,000-line Java source file! Is there a smarter way to do this? In the C#/.NET world I'd create the data as a separate file, embed it in the assembly as a resource, and then use reflection to pull it out at runtime and access it. I'm unsure what the appropriate analogy is in the Java world. FWIW, Java 1.6, shipping for Solaris.
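
    One common approach is the classpath-resource analogy to the .NET embedded resource: package the data as a plain file inside the jar and read it with Class.getResourceAsStream. A rough sketch, assuming the records are stored as a CSV named records.csv at the root of the jar (the file name and record layout are just placeholders):

      import java.io.BufferedReader;
      import java.io.InputStream;
      import java.io.InputStreamReader;
      import java.util.ArrayList;
      import java.util.List;

      public class SimulatedRecords {
          // Loads the call records from a data file packaged inside the jar.
          public static List<String[]> load() throws Exception {
              InputStream in = SimulatedRecords.class.getResourceAsStream("/records.csv");
              BufferedReader reader = new BufferedReader(new InputStreamReader(in, "UTF-8"));
              List<String[]> records = new ArrayList<String[]>();
              String line;
              while ((line = reader.readLine()) != null) {
                  // calling party, called party, start of call, call duration
                  records.add(line.split(","));
              }
              reader.close();
              return records;
          }
      }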

    Read the article

  • Get control instance in asp.net dynamic data

    - by Ashwani K
    Hello All: I am creating a web application using ASP.NET Dynamic Data. I am using a GridView to show data from the database. In the grid view I have the following code for the columns:

      <Columns>
          <asp:DynamicField DataField="UserId" UIHint="Label" />
          <asp:DynamicField DataField="Address" UIHint="Address" />
          <asp:DynamicField DataField="CreatedDate" UIHint="Label" />
      </Columns>

    But before displaying, I want to do some processing in C# code for each row. In a normal ASP.NET GridView we can handle the OnRowDataBound event and use FindControl("controlid") to get the control instance, but in the case of Dynamic Data I don't get any id attribute for the columns, so I am not able to get the control instance and show updated data in that control depending on some conditions. Thanks, Ashwani

    Read the article

  • Neo4j Reading data / performing shortest path calculations on stored data

    - by paddydub
    I'm using the Batch_Insert example to insert data into the database. How can I read this data back from the database? I can't find any examples of how to do this.

      public static void CreateData() {
          // create the batch inserter
          BatchInserter inserter = new BatchInserterImpl( "var/graphdb",
                  BatchInserterImpl.loadProperties( "var/neo4j.props" ) );
          Map<String,Object> properties = new HashMap<String,Object>();
          properties.put( "name", "Mr. Andersson" );
          properties.put( "age", 29 );
          long node1 = inserter.createNode( properties );
          properties.put( "name", "Trinity" );
          properties.remove( "age" );
          long node2 = inserter.createNode( properties );
          inserter.createRelationship( node1, node2,
                  DynamicRelationshipType.withName( "KNOWS" ), null );
          inserter.shutdown();
      }

    I would like to store graph data in the database, e.g.:

      graph.makeEdge( "s", "c", "cost", (double) 7 );
      graph.makeEdge( "c", "e", "cost", (double) 7 );
      graph.makeEdge( "s", "a", "cost", (double) 2 );
      graph.makeEdge( "a", "b", "cost", (double) 7 );
      graph.makeEdge( "b", "e", "cost", (double) 2 );
      Dijkstra<Double> dijkstra = getDijkstra( graph, 0.0, "s", "e" );

    What is the best method to store this kind of data with tens of thousands of edges, and then run the Dijkstra algorithm to perform shortest-path calculations on the stored graph data?
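
    For reading the data back, a rough sketch with the regular embedded API (this assumes a Neo4j 1.x-style setup; the node id used below is only a placeholder for an id returned by inserter.createNode):

      import org.neo4j.graphdb.Direction;
      import org.neo4j.graphdb.DynamicRelationshipType;
      import org.neo4j.graphdb.GraphDatabaseService;
      import org.neo4j.graphdb.Node;
      import org.neo4j.graphdb.Relationship;
      import org.neo4j.kernel.EmbeddedGraphDatabase;

      public class ReadBack {
          public static void main(String[] args) {
              // Open the same store directory the batch inserter wrote to
              GraphDatabaseService db = new EmbeddedGraphDatabase("var/graphdb");
              try {
                  // Placeholder: use an id that inserter.createNode(...) actually returned
                  Node andersson = db.getNodeById(1);
                  System.out.println(andersson.getProperty("name"));
                  // Walk the KNOWS relationships created during the batch insert
                  for (Relationship r : andersson.getRelationships(
                          DynamicRelationshipType.withName("KNOWS"), Direction.OUTGOING)) {
                      System.out.println("  knows " + r.getEndNode().getProperty("name"));
                  }
              } finally {
                  db.shutdown();
              }
          }
      }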

    Read the article

  • Using MySQL as data source in Microsoft SQL Server Analysis Services

    - by coldilocks
    Hi, I have installed the latest .NET connector (http://www.mysql.com/downloads/connector/net/). I can add MySQL databases as data sources, and I can even browse through the data from Business Intelligence Studio. The problem is that I CANNOT create a data source view; or, if I do create one without tables, trying to add them after the fact gives me the same error. Specifically, it looks like the data source view wizard tries to submit queries against the MySQL database using square brackets, and the query bombs. I get an error message like:

      You have an error in your SQL syntax; check the manual that corresponds to your MySQL
      server version for the right syntax to use near '[my_db].[cheatType]' at line 2

    So, in summary, has anyone been able to create a data source view using MySQL tables and, if so, can they please show me how this can be done? Thanks for any help!

    Read the article

  • Oracle Data Pump import to a SQL file error: ORA-31655 no data or metadata objects

    - by Francisco Quiñones
    Hello, I'm using Data Pump to export/import data, and one requirement is to import the data into a SQL file. The OS is Windows. I made the following export:

      expdp system/password directory=dpump_dir dumpfile=tablesdump.dmp content=DATA_ONLY tables=user.tablename

    and it works; I can see the file TABLESDUMP.DMP in the directory path. Then, when I tried to import it to a SQL file:

      impdp system/password directory=dpump_dir dumpfile=tablesdump.dmp sqlfile=tables_export.sql

    the log shows:

      ..... ORA-31655 no data or metadata objects selected for job .....

    and the SQL file is created empty in the directory path. I'm not a DBA, I'm a Java developer. Can you help me? Thks

    Read the article

  • Generating video or images of geometrical objects from data

    - by Jonathan Barbero
    Hello, I'm working on a course project to predict the velocity and position of the solar system planets (and other objects). It would be really cool if I could visualize the predicted object data, ideally as 3D images, and if it could be video that would be amazing. Do you know any library that lets me use this data to generate an image or video? (I don't care which language.) The data consists of:

      - the simulation step (the timeline step for a video)
      - the positions of the objects
      - the radius and/or colour of each object

    Thanks in advance, any suggestion is welcome.
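
    If nothing ready-made fits, one low-tech option is to render each simulation step to an image yourself and stitch the frames into a video afterwards. A rough sketch using only the standard Java 2D and ImageIO APIs (a 2D projection only; the 800x600 canvas and frame naming are arbitrary choices):

      import java.awt.Color;
      import java.awt.Graphics2D;
      import java.awt.image.BufferedImage;
      import java.io.File;
      import javax.imageio.ImageIO;

      public class FrameRenderer {
          // Renders one simulation step as a PNG; each object becomes a filled circle.
          // xy holds 2D pixel coordinates already projected from the simulation; radii are in pixels.
          static void renderFrame(int step, double[][] xy, double[] radius, Color[] color) throws Exception {
              BufferedImage img = new BufferedImage(800, 600, BufferedImage.TYPE_INT_RGB);
              Graphics2D g = img.createGraphics();
              g.setColor(Color.BLACK);
              g.fillRect(0, 0, 800, 600);
              for (int i = 0; i < xy.length; i++) {
                  g.setColor(color[i]);
                  int r = (int) radius[i];
                  g.fillOval((int) xy[i][0] - r, (int) xy[i][1] - r, 2 * r, 2 * r);
              }
              g.dispose();
              // frame_00000.png, frame_00001.png, ... can be stitched into a video
              // afterwards with an external tool such as ffmpeg.
              ImageIO.write(img, "png", new File(String.format("frame_%05d.png", step)));
          }
      }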

    Read the article

  • Flex 3: should I provide prepared data to my component or make it to process data before display?

    - by grapkulec
    I'm starting to learn a little Flex, just for fun and maybe to prove that I can still learn something new :) I have an idea for a project, and one of its parts is a tree component which could display data in different ways depending on configuration.

    The idea: there is a list of objects having properties like id, date, time, name, and description. Sometimes the list should be displayed like this: first level: date, second level: time, third level: name. And sometimes like this: first level: year, second level: month, third level: day, fourth level: time and name. By level I mean level of nesting, of course. So we can have years that have months, that have days, that have hours, and so forth.

    The problem: what could be the best way to do this? I mean, should I prepare the data for the different ways of nesting outside of the component, or even outside of Flex? I can do it at the web service level in C#, where I plan to have the database access layer, and send Flex nice, ready-to-display XML or an array of objects. But I wonder whether that would cause additional and maybe unnecessary network traffic. I tried to hack some code in my component to convert my data objects into XML or an ArrayCollection, but I don't know enough Flex and got stuck on eliminating duplicates and on getting specific data by some key value. Usually for such things I have the STL with maps, sets, and vectors, and I find Flex arrays and even Dictionary a little bit confusing (I've read the language reference and googled without any significant luck).

    The question, to sum things up: should I give my tree component data prepared just for the chosen type of display, or should I try to do it internally inside the component (or in some helper class written in ActionScript)?

    Read the article

  • Mock Object Data

    - by Nissan Fan
    I'd like to mock up object data, not the objects themselves. In other words, I would like to generate a collection of n objects and pass it into a function which fills them with random data strings and numbers. Is there anything that does this? Think of it as a Lorem Ipsum for object data. Constraints around numerical ranges etc. are not necessary, but would be a bonus.
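
    There are libraries for this, but the core idea is small enough to sketch by hand. A rough illustration in Java (reflection-based; the field handling and value ranges here are placeholders, not any particular library's API):

      import java.lang.reflect.Field;
      import java.util.ArrayList;
      import java.util.List;
      import java.util.Random;

      public class ObjectDataFiller {
          private static final Random RND = new Random();

          // Creates n instances of the given class and fills their String and int fields
          // with random placeholder data -- a "lorem ipsum" for object data.
          public static <T> List<T> fill(Class<T> type, int n) throws Exception {
              List<T> result = new ArrayList<T>();
              for (int i = 0; i < n; i++) {
                  T obj = type.newInstance();  // requires a public no-arg constructor
                  for (Field f : type.getDeclaredFields()) {
                      f.setAccessible(true);
                      if (f.getType() == String.class) {
                          f.set(obj, "value" + RND.nextInt(10000));
                      } else if (f.getType() == int.class) {
                          f.setInt(obj, RND.nextInt(10000));
                      }
                  }
                  result.add(obj);
              }
              return result;
          }
      }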

    Read the article

  • REST service with binary data

    - by user179437
    Hi, I want to create a RESTful service which can accept binary data. I've implemented the javax.xml.ws.Provider interface, but I can't get the content of the request. If I use javax.xml.ws.Dispatch, then it sends only XML data, but I need to transfer binary data. Please suggest a solution, but I would prefer not to use JAX-RS or Restlets. Thanks.
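
    For what it's worth, a minimal sketch of the XML/HTTP-binding pattern usually suggested for this: a Provider<DataSource> published with HTTPBinding in MESSAGE mode, so the raw request body arrives untouched. The URL and the echo behaviour below are placeholders, not a complete design.

      import java.io.ByteArrayInputStream;
      import java.io.ByteArrayOutputStream;
      import java.io.InputStream;
      import java.io.OutputStream;
      import javax.activation.DataSource;
      import javax.xml.ws.BindingType;
      import javax.xml.ws.Endpoint;
      import javax.xml.ws.Provider;
      import javax.xml.ws.Service;
      import javax.xml.ws.ServiceMode;
      import javax.xml.ws.WebServiceProvider;
      import javax.xml.ws.http.HTTPBinding;

      @WebServiceProvider
      @ServiceMode(Service.Mode.MESSAGE)
      @BindingType(HTTPBinding.HTTP_BINDING)
      public class BinaryEndpoint implements Provider<DataSource> {

          // With the HTTP binding, the raw request body is handed over as a DataSource
          public DataSource invoke(DataSource request) {
              try {
                  InputStream in = request.getInputStream();
                  ByteArrayOutputStream buffer = new ByteArrayOutputStream();
                  byte[] chunk = new byte[8192];
                  int read;
                  while ((read = in.read(chunk)) != -1) {
                      buffer.write(chunk, 0, read);
                  }
                  final byte[] body = buffer.toByteArray();  // the binary payload
                  // Echo the bytes back; a real service would store or process them instead
                  return new DataSource() {
                      public InputStream getInputStream() { return new ByteArrayInputStream(body); }
                      public OutputStream getOutputStream() { throw new UnsupportedOperationException(); }
                      public String getContentType() { return "application/octet-stream"; }
                      public String getName() { return "response"; }
                  };
              } catch (Exception e) {
                  throw new RuntimeException(e);
              }
          }

          public static void main(String[] args) {
              // Publish at a hypothetical local URL
              Endpoint.create(HTTPBinding.HTTP_BINDING, new BinaryEndpoint())
                      .publish("http://localhost:8080/binary");
          }
      }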

    Read the article

  • subset a data.frame with multiple conditions

    - by pslice
    Suppose my data looks like this:

      2372  Kansas KS2000111 HUMBOLDT, CITY OF ATRAZINE    1.3  05/07/2006
      9104  Kansas KS2000111 HUMBOLDT, CITY OF ATRAZINE    0.34 07/23/2006
      9212  Kansas KS2000111 HUMBOLDT, CITY OF ATRAZINE    0.33 02/11/2007
      2094  Kansas KS2000111 HUMBOLDT, CITY OF ATRAZINE    1.4  05/06/2007
      16763 Kansas KS2000111 HUMBOLDT, CITY OF ATRAZINE    0.61 05/11/2009
      1076  Kansas KS2000111 HUMBOLDT, CITY OF METOLACHLOR 0.48 05/12/2002
      1077  Kansas KS2000111 HUMBOLDT, CITY OF METOLACHLOR 0.3  05/07/2006

    I want to be able to subset by the Analyte and a partial match on the date (namely, I just want the year). I have been trying this, but I know it isn't quite right:

      data[data$Analyte=="ATRAZINE" & grep("2006",as.character(data$Date)),]

    Any suggestions?

    Read the article

  • Essential skills of a Data Scientist

    - by harshsinghal
    I would like to know more about the relevant skills in the arsenal of a Data Scientist and, with new technologies coming in every day, how one picks and chooses the essentials. A few ideas germane to this discussion:

    - Knowing SQL and a DB such as MySQL or PostgreSQL was great until the advent of NoSQL and non-relational databases. MongoDB, CouchDB, etc. are becoming popular for working with web-scale data.
    - Knowing a stats tool like R is enough for analysis, but to create applications one may need to add Java, Python, and others to the list.
    - Data now comes in the form of text, URLs, and multimedia, to name a few, and there are different paradigms associated with manipulating each.
    - What about cluster computing, parallel computing, the cloud, Amazon EC2, Hadoop?
    - OLS regression now has artificial neural networks, random forests, and other relatively exotic machine learning / data mining algorithms for company.

    Thoughts?

    Read the article

  • PHP - Redirect and send data via POST

    - by Jeremy Rudd
    I have an online gateway which requires an HTML form to be submitted with hidden fields. I need to do this via a PHP script without any HTML forms (I have the data for the hidden fields in a DB). Sending the data via GET, I can do this:

      header('Location: http://www.provider.com/process.jsp?id=12345&name=John');

    But how do I do this sending the data via POST?

    Read the article

  • How do I know an array of structures was allocated in the Large Object Heap (LOH) in .NET?

    - by AMissico
    After some experimentation using CLR Profiler, I found that:

      Node[,] n = new Node[100,23]; // 84,028 bytes, is not placed in LOH
      Node[,] n = new Node[100,24]; // 86,428 bytes, is

      public struct Node
      {
          public int Value;
          public Point Point;
          public Color Color;
          public bool Handled;
          public Object Tag;
      }

    At run time, how do I know whether an array of structures (or any array) was allocated on the Large Object Heap (LOH)?

    Read the article

  • Producer Consumer Issue with Core Data

    - by Mugunth Kumar
    I have a Core Data application. In the producer thread, I pull data from a web service, store it in my object, and call save. My consumer object is a table view controller that displays the same data. However, the app crashes, and I get the following on the console: NSFetchedResultsController Error: expected to find object (entity: FeedEntry; id: 0xf46f40 ; data: ) in section (null) for deletion. When I debug it, everything works fine, so I understand that it's something like a race condition. How are these kinds of problems solved? What's the best way to design a producer-consumer app with Core Data?

    Read the article

  • SQL Server: export data via SQL query?

    - by rlb.usa
    I have FKs and PKs all over my db, and table data needs to be inserted in a certain order or else I get FK/PK insertion errors. I'm tired of executing the wizard again and again to transfer data one table at a time. In the SQL Server export data wizard there is an option to "Write a query to specify the data to transfer". I'd like to write the query myself and specify the correct order. Will this solve my problem? How do I do this? Can you provide a sample query (or a link to one)? The databases are on two different servers (SQL Server 2008 on each); the database names and permissions are the same; each table and column name is the same; and I need IDENTITY_INSERT for each table.

    Read the article
