Search Results

Search found 58609 results on 2345 pages for 'data extraction'.

  • Serial port : Read data problem, not reading complete data

    - by Anuj Mehta
    Hi, I have an application where I send data over a serial port from PC1 (a Java app) and read it on PC2 (a C++ app). The problem I am facing is that the C++ app on PC2 does not read all of the data sent by PC1: PC1 sends 190 bytes, but PC2 reads only about 140, even though I read in a loop. Below are code snippets from my C++ app.

    Open the connection to the serial port:

        serialfd = open(serialPortName.c_str(), O_RDWR | O_NOCTTY | O_NDELAY);
        if (serialfd == -1)
        {
            /*
             * Could not open the port.
             */
            TRACE << "Unable to open port: " << serialPortName << endl;
        }
        else
        {
            TRACE << "Connected to serial port: " << serialPortName << endl;
            fcntl(serialfd, F_SETFL, 0);
        }

    Configure the serial port parameters:

        struct termios options;

        /*
         * Get the current options for the port...
         */
        tcgetattr(serialfd, &options);

        /*
         * Set the baud rates to 9600...
         */
        cfsetispeed(&options, B38400);
        cfsetospeed(&options, B38400);

        /*
         * 8N1
         * Data bits - 8
         * Parity    - None
         * Stop bits - 1
         */
        options.c_cflag &= ~PARENB;
        options.c_cflag &= ~CSTOPB;
        options.c_cflag &= ~CSIZE;
        options.c_cflag |= CS8;

        /*
         * Enable hardware flow control
         */
        options.c_cflag |= CRTSCTS;

        /*
         * Enable the receiver and set local mode...
         */
        options.c_cflag |= (CLOCAL | CREAD);

        // Flush the earlier data
        tcflush(serialfd, TCIFLUSH);

        /*
         * Set the new options for the port...
         */
        tcsetattr(serialfd, TCSANOW, &options);

    Now I read the data:

        const int MAXDATASIZE = 512;
        std::vector<char> m_vRequestBuf;
        char buffer[MAXDATASIZE];
        int totalBytes = 0;

        fcntl(serialfd, F_SETFL, FNDELAY);

        while(1)
        {
            bytesRead = read(serialfd, &buffer, MAXDATASIZE);
            if(bytesRead == -1)
            {
                // Sleep for some time and read again
                usleep(900000);
            }
            else
            {
                totalBytes += bytesRead;

                // Add data read to vector
                for(int i = 0; i < bytesRead; i++)
                {
                    m_vRequestBuf.push_back(buffer[i]);
                }

                int newBytesRead = 0;

                // Now keep trying to read more data
                while(newBytesRead != -1)
                {
                    // clear contents of buffer
                    memset((void*)&buffer, 0, sizeof(char) * MAXDATASIZE);
                    newBytesRead = read(serialfd, &buffer, MAXDATASIZE);
                    totalBytes += newBytesRead;
                    for(int j = 0; j < newBytesRead; j++)
                    {
                        m_vRequestBuf.push_back(buffer[j]);
                    }
                } // inner while
                break;
            }
        } // while
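
    The issue the loop has to handle is language-independent: with a non-blocking descriptor, a return of -1 with EAGAIN means "nothing available yet", not "end of data", and a successful read may deliver only part of the message, so reading has to continue until the expected byte count has arrived (or the port is switched to blocking reads with VMIN/VTIME). A minimal Python sketch of that accumulate-until-complete loop, assuming a non-blocking file descriptor and a known message length (the function name and polling delay are illustrative, not part of the question's code):

        import os
        import time

        def read_exact(fd, expected, poll_delay=0.05):
            """Keep reading a non-blocking fd until `expected` bytes have arrived."""
            chunks = []
            received = 0
            while received < expected:
                try:
                    data = os.read(fd, expected - received)
                except BlockingIOError:       # EAGAIN: nothing available yet, try again
                    time.sleep(poll_delay)
                    continue
                if data:                      # a partial chunk is normal; keep accumulating
                    chunks.append(data)
                    received += len(data)
                else:
                    time.sleep(poll_delay)
            return b"".join(chunks)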

  • Join and sum not compatible matrices through data.table

    - by leodido
    My goal is to "sum" two incompatible matrices (matrices with different dimensions) using (and preserving) row and column names. I've figured out this approach: convert the matrices to data.table objects, join them and then sum the column vectors. An example:

        > M1
          1 3 4 5 7 8
        1 0 0 1 0 0 0
        3 0 0 0 0 0 0
        4 1 0 0 0 0 0
        5 0 0 0 0 0 0
        7 0 0 0 0 1 0
        8 0 0 0 0 0 0

        > M2
          1 3 4 5 8
        1 0 0 1 0 0
        3 0 0 0 0 0
        4 1 0 0 0 0
        5 0 0 0 0 0
        8 0 0 0 0 0

        > M1 %ms% M2
          1 3 4 5 7 8
        1 0 0 2 0 0 0
        3 0 0 0 0 0 0
        4 2 0 0 0 0 0
        5 0 0 0 0 0 0
        7 0 0 0 0 1 0
        8 0 0 0 0 0 0

    This is my code:

        M1 <- matrix(c(0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0),
                     byrow = TRUE, ncol = 6)
        colnames(M1) <- c(1,3,4,5,7,8)
        M2 <- matrix(c(0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0),
                     byrow = TRUE, ncol = 5)
        colnames(M2) <- c(1,3,4,5,8)

        # to data.table objects
        DT1 <- data.table(M1, keep.rownames = TRUE, key = "rn")
        DT2 <- data.table(M2, keep.rownames = TRUE, key = "rn")

        # join and sum of common columns
        if (nrow(DT1) > nrow(DT2)) {
          A <- DT2[DT1, roll = TRUE]
          A[, list(X1 = X1 + X1.1, X3 = X3 + X3.1, X4 = X4 + X4.1,
                   X5 = X5 + X5.1, X7, X8 = X8 + X8.1), by = rn]
        }

    That outputs:

           rn X1 X3 X4 X5 X7 X8
        1:  1  0  0  2  0  0  0
        2:  3  0  0  0  0  0  0
        3:  4  2  0  0  0  0  0
        4:  5  0  0  0  0  0  0
        5:  7  0  0  0  0  1  0
        6:  8  0  0  0  0  0  0

    Then I can convert this data.table back to a matrix and fix the row and column names. The questions are:

    1. How can I generalize this procedure? I need a way to automatically create list(X1 = X1 + X1.1, X3 = X3 + X3.1, X4 = X4 + X4.1, X5 = X5 + X5.1, X7, X8 = X8 + X8.1), because I want to apply this function to matrices whose dimensions (and row/column names) are not known in advance. In summary, I need a merge procedure that behaves as described.
    2. Are there other strategies/implementations that achieve the same goal and are, at the same time, faster and more general? (hoping that some data.table monster helps me)
    3. To what kind of join (inner, outer, etc.) is this procedure assimilable?

    Thanks in advance.

    p.s.: I'm using data.table version 1.8.2

    EDIT - SOLUTIONS

    @Aaron's solution. No external libraries, only base R. It also works on a list of matrices.

        add_matrices_1 <- function(...) {
          a <- list(...)
          cols <- sort(unique(unlist(lapply(a, colnames))))
          rows <- sort(unique(unlist(lapply(a, rownames))))
          out <- array(0, dim = c(length(rows), length(cols)), dimnames = list(rows, cols))
          for (m in a) out[rownames(m), colnames(m)] <- out[rownames(m), colnames(m)] + m
          out
        }

    @MadScone's solution. Uses the reshape2 package. It works only on two matrices per call.

        add_matrices_2 <- function(m1, m2) {
          m <- acast(rbind(melt(M1), melt(M2)), Var1~Var2, fun.aggregate = sum)
          mn <- unique(colnames(m1), colnames(m2))
          rownames(m) <- mn
          colnames(m) <- mn
          m
        }

    BENCHMARK (100 runs with the microbenchmark package)

        Unit: microseconds
                    expr       min         lq    median         uq       max
        1 add_matrices_1   196.009   257.5865   282.027   291.2735   549.397
        2 add_matrices_2 13737.851 14697.9790 14864.778 16285.7650 25567.448

    No need to comment on the benchmark: @Aaron's solution wins. I'll continue to investigate a similar solution for data.table objects. I'll add other solutions eventually reported or discovered.
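
    On the third question: the operation behaves like a full outer join on both row and column labels, with absent cells treated as zero. A tiny label-keyed Python sketch of that view (the sparse dict representation is only for illustration):

        def add_labelled(a, b):
            """a, b: dicts mapping (row_label, col_label) -> value (non-zero cells only)."""
            out = dict(a)
            for key, value in b.items():
                out[key] = out.get(key, 0) + value   # zero-fill is implicit for missing cells
            return out

        M1 = {("1", "4"): 1, ("4", "1"): 1, ("7", "7"): 1}
        M2 = {("1", "4"): 1, ("4", "1"): 1}
        print(add_labelled(M1, M2))   # {('1', '4'): 2, ('4', '1'): 2, ('7', '7'): 1}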

  • Design review for application facing memory issues

    - by Mr Moose
    I apologise in advance for the length of this post, but I want to paint an accurate picture of the problems my app is facing and then pose some questions below. I am trying to address some self-inflicted design pain that is now leading to my application crashing due to out-of-memory errors.

    An abridged description of the problem domain is as follows:

    - The application takes in a "dataset" that consists of numerous text files containing related data.
    - An individual text file within the dataset usually contains approx 20 "headers" that contain metadata about the data it contains. It also contains a large tab-delimited section containing data that is related to data in one of the other text files contained within the dataset. The number of columns per file is very variable, from 2 to 256+ columns.

    The original application was written to allow users to load a dataset and map certain columns of each of the files, basically indicating key information on the files to show how they are related, as well as identifying a few expected column names. Once this is done, a validation process takes place to enforce various rules and ensure that all the relationships between the files are valid. Once that is done, the data is imported into a SQL Server database. The database design is an EAV (Entity-Attribute-Value) model used to cater for the variable columns per file. I know EAV has its detractors, but in this case I feel it was a reasonable choice given the disparate data and the variable number of columns submitted in each dataset.

    The memory problem

    Given that the combined size of all text files was at most about 5 megs, and in an effort to reduce the database transaction time, it was decided to read ALL the data from the files into memory and then perform the following:

    - perform all the validation whilst the data is in memory
    - relate it using an object model
    - start a DB transaction and write the key columns row by row, noting the Id of the written row (all tables in the database utilise identity columns); the Id of the newly written row is then applied to all related data
    - once all related data has been updated with the key information to which it relates, write these records using SqlBulkCopy. Due to our EAV model, we essentially have x columns by y rows to write, where x can be 256+ and rows are often into the tens of thousands
    - once all the data is written without error (this can take several minutes for large datasets), commit the transaction

    The problem now comes from the fact that we are receiving individual files containing over 30 megs of data. In a dataset, we can receive any number of files. We've started seeing datasets of around 100 megs coming in, and I expect it is only going to get bigger from here on in. With files of this size, data can't even be read into memory without the app falling over, let alone be validated and imported. I anticipate having to modify large chunks of the code to allow validation to occur by parsing files line by line, and I am not exactly decided on how to handle the import and transactions.

    Potential improvements

    - I've wondered about using GUIDs to relate the data rather than relying on identity fields. This would allow data to be related prior to writing to the database. It would certainly increase the storage required though, especially in an EAV design. Would you think this is a reasonable thing to try, or do I simply persist with identity fields (natural keys can't be trusted to be unique across all submitters)?
    - Use of staging tables to get data into the database, and only performing the transaction to copy data from the staging area to the actual destination tables.

    Questions

    - For systems like this that import large quantities of data, how do you go about keeping transactions small? I've kept them as small as possible in the current design, but they are still active for several minutes and write hundreds of thousands of records in one transaction. Is there a better solution?
    - The tab-delimited data section is read into a DataTable to be viewed in a grid. I don't need the full functionality of a DataTable, so I suspect it is overkill. Is there any way to turn off various features of DataTables to make them more lightweight?
    - Are there any other obvious things you would do in this situation to minimise the memory footprint of the application described above?

    Thanks for your kind attention.
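
    On the line-by-line idea, a minimal sketch of streaming validation that never holds a whole file in memory, yielding fixed-size batches that could then be bulk-copied into a staging table (the column names, batch size and staging step are illustrative assumptions, not part of the original design):

        import csv

        def validate_and_batch(path, required_columns, batch_size=5000):
            """Validate a tab-delimited file row by row and yield (batch, errors) pairs."""
            with open(path, newline="") as fh:
                reader = csv.DictReader(fh, delimiter="\t")
                batch, errors = [], []
                for line_no, row in enumerate(reader, start=2):   # header is line 1
                    missing = [c for c in required_columns if not row.get(c)]
                    if missing:
                        errors.append((line_no, missing))
                        continue
                    batch.append(row)
                    if len(batch) >= batch_size:
                        yield batch, errors          # e.g. bulk-copy the batch to staging here
                        batch, errors = [], []
                if batch or errors:
                    yield batch, errors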

  • Using Transaction Logging to Recover Post-Archived Essbase data

    - by Keith Rosenthal
    Data recovery is typically performed by restoring data from an archive. Data added or removed since the last archive took place can also be recovered by enabling transaction logging in Essbase. Transaction logging works by writing transactions to a log store. The information in the log store can then be recovered by replaying the log store entries in sequence since the last archive took place. The following information is recorded within a transaction log entry:

    - Sequence ID
    - Username
    - Start Time
    - End Time
    - Request Type

    A request type can be one of the following categories:

    - Calculations, including the default calculation as well as both server and client side calculations
    - Data loads, including data imports as well as data loaded using a load rule
    - Data clears as well as outline resets
    - Locking and sending data from SmartView and the Spreadsheet Add-In. Changes from Planning web forms are also tracked since a lock and send operation occurs during this process.

    You can use the Display Transactions command in the EAS console or the query database MAXL command to view the transaction log entries.

    Enabling Transaction Logging

    Transaction logging can be enabled at the Essbase server, application or database level by adding the TRANSACTIONLOGLOCATION essbase.cfg setting. The following is the TRANSACTIONLOGLOCATION syntax:

        TRANSACTIONLOGLOCATION [appname [dbname]] LOGLOCATION NATIVE ENABLE | DISABLE

    Note that you can have multiple TRANSACTIONLOGLOCATION entries in the essbase.cfg file. For example:

        TRANSACTIONLOGLOCATION Hyperion/trlog NATIVE ENABLE
        TRANSACTIONLOGLOCATION Sample Hyperion/trlog NATIVE DISABLE

    The first statement will enable transaction logging for all Essbase applications, and the second statement will disable transaction logging for the Sample application. As a result, transaction logging will be enabled for all applications except the Sample application.

    A location on a physical disk other than the disk where ARBORPATH or the disk files reside is recommended to optimize overall Essbase performance.

    Configuring Transaction Log Replay

    Although transaction log entries are stored based on the LOGLOCATION parameter of the TRANSACTIONLOGLOCATION essbase.cfg setting, copies of data load and rules files are stored in the ARBORPATH/app/appname/dbname/Replay directory to optimize the performance of replaying logged transactions. The default is to archive client data loads, but this configuration setting can be used to archive server data loads (including SQL server data loads) or both client and server data loads.

    To change the type of data to be archived, add the TRANSACTIONLOGDATALOADARCHIVE configuration setting to the essbase.cfg file. Note that you can have multiple TRANSACTIONLOGDATALOADARCHIVE entries in the essbase.cfg file to adjust settings for individual applications and databases.

    Replaying the Transaction Log and Transaction Log Security Considerations

    To replay the transactions, use either the Replay Transactions command in the EAS console or the alter database MAXL command using the replay transactions grammar. Transactions can be replayed either after a specified log time or using a range of transaction sequence IDs.

    The default when replaying transactions is to use the security settings of the user who originally performed the transaction. However, if that user no longer exists or that user's username was changed, the replay operation will fail. Instead of using the default security setting, add the REPLAYSECURITYOPTION essbase.cfg setting to use the security settings of the administrator who performs the replay operation. REPLAYSECURITYOPTION 2 will explicitly use the security settings of the administrator performing the replay operation. REPLAYSECURITYOPTION 3 will use the administrator security settings if the original user's security settings cannot be used.

    Removing Transaction Logs and Archived Replay Data Load and Rules Files

    Transaction logs and archived replay data load and rules files are not automatically removed and are only removed manually. Since these files can consume a considerable amount of space, the files should be removed on a periodic basis.

    The transaction logs should be removed one database at a time instead of all databases simultaneously. The data load and rules files associated with the replayed transactions should be removed in chronological order from earliest to latest. In addition, do not remove any data load and rules files with a timestamp later than the timestamp of the most recent archive file.

    Partitioned Database Considerations

    For partitioned databases, partition commands such as synchronization commands cannot be replayed. When recovering data, the partition changes must be replayed manually and logged transactions must be replayed in the correct chronological order. If the partitioned database includes any @XREF commands in the calc script, the logged transactions must be selectively replayed in the correct chronological order between the source and target databases.

    References

    For additional information, please see the Oracle EPM System Backup and Recovery Guide. For EPM 11.1.2.2, the link is http://docs.oracle.com/cd/E17236_01/epm.1112/epm_backup_recovery_1112200.pdf

  • Iterating through folders and files in batch file?

    - by Will Marcouiller
    Here's my situation. A project's objective is to migrate some attachments to another system. These attachments will be located under a parent folder, let's say "Folder 0" (see this question's diagram for better understanding), and they will be zipped/compressed. I want my batch script to be called like so:

        BatchScript.bat "c:\temp\usd\Folder 0"

    I'm using 7za.exe as the command line extraction tool. What I want my batch script to do is to iterate through "Folder 0"'s subfolders and extract all of the contained ZIP files into their respective folders. It is obligatory that the extracted files end up in the same folder as their respective ZIP files. So, files contained in "File 1.zip" are needed in "Folder 1", and so forth. I have read about the FOR...DO command on Windows XP Professional Product Documentation - Using Batch Files. Here's my script:

        @ECHO OFF
        FOR /D %folder IN (%%rootFolderCmdLnParam) DO FOR %zippedFile IN (*.zip) DO 7za.exe e %zippedFile

    I guess that I would also need to change the current directory before calling 7za.exe e %zippedFile for file extraction, but I can't figure out how in this batch file (though I know how on the command line, and I know it is the same instruction, "cd"). Anyone's help is gratefully appreciated.
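
    The same walk-and-extract-in-place idea, sketched in Python rather than batch (zipfile stands in here for 7za.exe; the root path is the one from the question, everything else is illustrative):

        import zipfile
        from pathlib import Path

        def extract_in_place(root):
            """Find every .zip under `root` and extract it into its own folder."""
            for zip_path in Path(root).rglob("*.zip"):
                with zipfile.ZipFile(zip_path) as zf:
                    zf.extractall(zip_path.parent)   # extracted files land next to the zip

        extract_in_place(r"c:\temp\usd\Folder 0")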

  • OCR with Neural network: data extraction

    - by Sebastian Hoitz
    I'm using the AForge library framework and its neural network. At the moment, when I train my network I create lots of images (one image per letter per font) at a big size (30 pt), cut out the actual letter, scale this down to a smaller size (10x10 px) and then save it to my hard disk. I can then go and read all those images, creating my double[] arrays with data. At the moment I do this on a pixel basis.

    So, once I have successfully trained my network, I test it by letting it run on a sample image with the alphabet at different sizes (uppercase and lowercase). But the result is not really promising. I trained the network so that RunEpoch had an error of about 1.5 (so almost no error), but there are still some letters that do not get identified correctly in my test image.

    Now my question is: Is this caused by a faulty learning method (pixel-based vs. the suggested use of receptors in this article: http://www.codeproject.com/KB/cs/neural_network_ocr.aspx - are there other methods I can use to extract the data for the network?), or can this happen because my segmentation algorithm for extracting the letters from the image is bad? Does anyone have ideas on how to improve it?
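
    For context, a rough Python sketch of the receptor idea from the linked article (not its exact implementation): each receptor is a fixed line segment laid over the scaled glyph, and the network input is whether that segment touches any "on" pixel, which is much more compact than one input per pixel. Receptor count, seed and glyph format are illustrative assumptions:

        import random

        def make_receptors(n, width, height, seed=42):
            """Fixed random line segments (x0, y0, x1, y1) reused for every glyph."""
            rng = random.Random(seed)
            return [(rng.randrange(width), rng.randrange(height),
                     rng.randrange(width), rng.randrange(height)) for _ in range(n)]

        def receptor_features(glyph, receptors):
            """glyph: 2D list of 0/1 pixels; returns one 0/1 feature per receptor."""
            features = []
            for x0, y0, x1, y1 in receptors:
                steps = max(abs(x1 - x0), abs(y1 - y0)) + 1
                hit = 0
                for s in range(steps):                       # walk along the segment
                    x = x0 + (x1 - x0) * s // max(steps - 1, 1)
                    y = y0 + (y1 - y0) * s // max(steps - 1, 1)
                    if glyph[y][x]:
                        hit = 1
                        break
                features.append(hit)
            return features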

  • Data Structures for Logic Games / Deduction Rules / Sufficient Set of Clues?

    - by taserian
    I've been cogitating about developing a logic game similar to Einstein's Puzzle, which would have different sets of clues for every new game replay. What data structures would you use to handle the different entities (pets, colors of houses, nationalities, etc.), deduction rules, etc. to guarantee that the clues you provide point to a unique solution? I'm having a hard time thinking about how to get the deduction rules to play along with the possible clues; any insight would be appreciated.
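
    One way to guarantee uniqueness is to treat each clue as a predicate over candidate assignments and brute-force the (small) assignment space while generating clues: keep adding clues until exactly one assignment survives. A toy Python sketch with made-up, pets-only clues:

        from itertools import permutations

        PETS = ("dog", "cat", "fish")     # one pet per house, houses 0..2

        clues = [
            lambda sol: sol[0] != "fish",                           # the fish is not in the first house
            lambda sol: sol.index("dog") == sol.index("cat") - 1,   # the dog is directly left of the cat
        ]

        candidates = [p for p in permutations(PETS) if all(clue(p) for clue in clues)]
        if len(candidates) == 1:
            print("clue set has a unique solution:", candidates[0])
        else:
            print("ambiguous or contradictory clue set:", candidates)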

  • How do we keep dependent data structures up to date?

    - by Geo
    Suppose you have a parse tree, an abstract syntax tree, and a control flow graph, each one logically derived from the one before. In principle it is easy to construct each graph given the parse tree, but how can we manage the complexity of updating the graphs when the parse tree is modified? We know exactly how the tree has been modified, but how can the change be propagated to the other trees in a way that doesn't become difficult to manage?

    Naturally the dependent graph can be updated by simply reconstructing it from scratch every time the first graph changes, but then there would be no way of knowing the details of the changes in the dependent graph. I currently have four ways to attempt to solve this problem, but each one has difficulties:

    1. Nodes of the dependent tree each observe the relevant nodes of the original tree, updating themselves and the observer lists of original tree nodes as necessary. The conceptual complexity of this can become daunting.
    2. Each node of the original tree has a list of the dependent tree nodes that specifically depend upon it, and when the node changes it sets a flag on the dependent nodes to mark them as dirty, including the parents of the dependent nodes all the way down to the root. After each change we run an algorithm that is much like the algorithm for constructing the dependent graph from scratch, but it skips over any clean node and reconstructs each dirty node, keeping track of whether the reconstructed node is actually different from the dirty node. This can also get tricky.
    3. We can represent the logical connection between the original graph and the dependent graph as a data structure, like a list of constraints, perhaps designed using a declarative language. When the original graph changes we need only scan the list to discover which constraints are violated and how the dependent tree needs to change to correct the violation, all encoded as data.
    4. We can reconstruct the dependent graph from scratch as though there were no existing dependent graph, and then compare the existing graph and the new graph to discover how it has changed. I'm sure this is the easiest way because I know there are algorithms available for detecting differences, but they are all quite computationally expensive and in principle it seems unnecessary, so I'm deliberately avoiding this option.

    What is the right way to deal with these sorts of problems? Surely there must be a design pattern that makes this whole thing almost easy. It would be nice to have a good solution for every problem of this general description. Does this class of problem have a name?
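
    A minimal Python sketch of the second option above (dirty flags pushed up to the root of the dependent structure, so a later rebuild pass can skip clean subtrees); all names are illustrative:

        class DependentNode:
            """Node in a derived structure (e.g. an AST node built from parse-tree nodes)."""
            def __init__(self, parent=None):
                self.parent = parent
                self.dirty = False

            def mark_dirty(self):
                # Flag this node and every ancestor; stop early if already flagged.
                node = self
                while node is not None and not node.dirty:
                    node.dirty = True
                    node = node.parent

        class SourceNode:
            """Node in the original structure; knows which derived nodes depend on it."""
            def __init__(self):
                self.dependents = []

            def changed(self):
                for d in self.dependents:
                    d.mark_dirty()   # a later rebuild pass revisits only dirty nodes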

  • Data structures for a 2D multi-layered and multi-region map?

    - by DevilWithin
    I am working on a 2D world editor and a world format to go with it. If I were to handle the game "world" being created just as a layered set of structures, either in top or side views, it would be considerably simple to do most things. But, since this editor is meant for 3rd parties, I have no clue how big a world someone will want to make, and I need to keep in mind that eventually it will become simply too much to check, handle and compare things that are happening completely away from the player position. I know the solution for this is to subdivide my world into sub-regions and stream them on the fly, loading and unloading resources and other data. This way I know a virtually infinite game area is achievable. But, while I know theoretically what to do, I really have a few questions I'd hoped to get answered for some hints about the topic.

    1. The logical way to handle the regions is some kind of grid. Would you pick evenly distributed blocks with equal sizes, or would you let the user subdivide areas by taste with irregularly sized rectangles?
    2. In the case of even grids, would you use some kind of block/chunk neighbouring system to check when the player crosses the limit, or just put all those in a simple array?
    3. A region being a different data structure than its owning "game world": when streaming a region, would you deliver the objects to the parent structures and track them for unloading later, or retain the objects in each region for a more "hard-limit" approach?
    4. Introducing the subdivision approach to the project, and already having a multi-layered scene graph structure in place, how would I make it support the new concept? Would you have the parent node own the layers as children, and replicate in each layer node a node per region? Or the opposite: the parent node owns all the possible regions, and each region has multiple layers as children? Or would you just put the region logic outside the graph completely (compatible with the first suggestion in Q.3)?
    5. When I say virtually infinite worlds, I mean it of course under the constraints of the variable sizes and so on. Using float positions, a HUGE world can already be made. Do you think it is sane to think beyond that? Because I think it is OK to stick to this limit, since it will never be reached so easily.
    6. As for when to stream a region, I'm implementing it as a collection of watcher cameras, which the streaming system works with to know what to load/unload. The problem here is that I will need some kind of warps/teleports built in for my game, and there is a chance I will be teleporting a player to an unloaded region far away. How would you approach something like this? Is it sane to load any region to memory which can be teleported to by a warp within a radius from the player?

    Sorry for the huge question, any answers are helpful!
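
    For question 1, a minimal Python sketch of the even-grid approach: chunks keyed by integer coordinates, with the resident set recomputed from the watcher cameras each update (one option for warps is to register the destination as an extra watcher just before teleporting, so its region streams in ahead of time). Chunk size, radius and the load/unload hooks are illustrative:

        CHUNK = 64.0   # chunk edge length in world units

        class ChunkedWorld:
            """Even grid of streamed chunks keyed by integer (cx, cy) coordinates."""
            def __init__(self, radius=2):
                self.loaded = {}        # (cx, cy) -> chunk data
                self.radius = radius    # chunks kept resident around each watcher

            def chunk_of(self, x, y):
                return (int(x // CHUNK), int(y // CHUNK))

            def update(self, watcher_positions):
                wanted = set()
                for x, y in watcher_positions:
                    cx, cy = self.chunk_of(x, y)
                    for dx in range(-self.radius, self.radius + 1):
                        for dy in range(-self.radius, self.radius + 1):
                            wanted.add((cx + dx, cy + dy))
                for key in wanted - set(self.loaded):
                    self.loaded[key] = self.load_chunk(key)        # stream in
                for key in set(self.loaded) - wanted:
                    self.unload_chunk(key, self.loaded.pop(key))   # stream out

            def load_chunk(self, key):
                return {"layers": []}   # placeholder: read region file, build layer nodes

            def unload_chunk(self, key, chunk):
                pass                    # placeholder: persist and release resources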

  • Move Data into the Grid for Scalable, Predictable Response Times

    - by JuergenKress
    CloudTran is pleased to introduce the availability of the CloudTran Transaction and Persistence Manager for creating scalable, reliable data services on the Oracle Coherence In-Memory Data Grid (IMDG).

    Use of IMDG architectures has been key to handling today's web-scale loads because it eliminates database latency by storing important and frequently accessed data in memory instead of on disk. The CloudTran product lets developers easily use an IMDG for full ACID-compliant transactions without having to be concerned about the location or spread of data. The system has its own implementation of fast, scalable distributed transactions that does NOT depend on XA protocols but still guarantees all ACID properties. Plus, CloudTran asynchronously replicates data going into the IMDG to back-end datastores and back-up data centers, again ensuring ACID properties.

    CloudTran can be accessed through the Java Persistence API (JPA via TopLink Grid) and now through a new Low-Level API, or LLAPI. This is ideal for use in SOA applications that need data reliability, high availability, performance, and scalability. Still in limited beta release, the LLAPI gives developers the ability to use the standard put/remove logic available in Coherence and then wrap logic with simple Spring annotations or XML+AspectJ to start transactions. An important feature of LLAPI is the ability to join transactions. This is a common outcome for SOA applications that need to reduce network traffic by aggregating data into single cache entries and then doing SOA service processing in the node holding the data. This results in the need to orchestrate transaction processing across multiple service calls. CloudTran has the capability to handle these "multi-client" transactions at speed with no loss in ACID properties.

    Developing software around an IMDG like Oracle Coherence is an important choice for today's web-scale applications and services. But this introduces new architectural considerations to maintain scalability in light of increased network loads and data movement. Without using CloudTran, developers are faced with an incredibly difficult task to ensure data reliability, availability, performance, and scalability when working with an IMDG. Working with highly distributed data that is entirely volatile while stored in memory presents numerous edge cases where failures can result in data loss. The CloudTran product takes care of all of this, leaving developers with the confidence and peace of mind that all data is processed correctly.

    For those interested in evaluating the CloudTran product and IMDGs, take a look at this link for more information: http://www.CloudTran.com/downloadAPI.php, or send your questions to [email protected].

    WebLogic Partner Community

    For regular information become a member of the WebLogic Partner Community; please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Technorati Tags: Coherence, cloudtran, cache, WebLogic Community, Oracle, OPN, Jürgen Kress

  • Why do static data members have to be defined outside the class separately in C++ (unlike Java)?

    - by iammilind
        class A {
            static int foo () {}   // ok
            static int x;          // <--- needs to be defined separately in a .cpp file
        };

    I don't see the need for A::x to be defined separately in a .cpp file (or in the same file for templates). Why can't A::x be declared and defined at the same time? Has it been forbidden for historical reasons? My main question is: would it affect any functionality if static data members could be declared and defined at the same time (as in Java)?

  • Propel-load-data is causing an error

    - by Jon Winstanley
    I am trying to load fixtures, but my project errors at the CLI and starts the indexer process. I have tried:

    - Rebuilding the schema and model
    - Emptying the database and starting again
    - Clearing the cache
    - Validating the YML file and trying much simpler data-dumps

    My platform is Symfony 1.0 on Windows. Someone also seems to have had the same issue in the past.

        C:\web\my_project>symfony propel-load-data backend
        >> propel    load data from "C:\web\my_project\data\fixtures"
        PHP Warning:  session_start(): Cannot send session cookie - headers already sent by (output started at C:\php\PEAR\symfony\vendor\pake\pakeFunction.php:366) in C:\php\PEAR\symfony\storage\sfSessionStorage.class.php on line 77
        Warning: session_start(): Cannot send session cookie - headers already sent by (output started at C:\php\PEAR\symfony\vendor\pake\pakeFunction.php:366) in C:\php\PEAR\symfony\storage\sfSessionStorage.class.php on line 77
        PHP Warning:  session_start(): Cannot send session cache limiter - headers already sent (output started at C:\php\PEAR\symfony\vendor\pake\pakeFunction.php:366) in C:\php\PEAR\symfony\storage\sfSessionStorage.class.php on line 77
        Warning: session_start(): Cannot send session cache limiter - headers already sent (output started at C:\php\PEAR\symfony\vendor\pake\pakeFunction.php:366) in C:\php\PEAR\symfony\storage\sfSessionStorage.class.php on line 77

  • How does LinqPad support WCF Data Services?

    - by user341127
    LinqPad supports WCF Data Services. If you give it a URL such as http://services.odata.org/Northwind/Northwind.svc/, it will list all available data objects and you can query them. I guess LinqPad generates all the available data classes at run time via Reflection.Emit. I am wondering if someone can show me how to do so, or maybe someone has done it before. Any feedback is appreciated. Ying

  • Test data generators / quickest route to generating solid, non-repetitive, but not-real database sample data

    - by Jamo
    I need to build a quick feasibility test / proof-of-concept of a remote database for a client that will be populated with mostly-typical Company and People data (names, addresses, etc.); 150K records or so. The sample databases mentioned here were helpful: http://stackoverflow.com/questions/57068/good-databases-with-sample-data ...but I'd like to be able to generate sample data like this easily on less-typical datasets as well. Anyone have any recommendations for off-the-shelf (or off-the-web) solutions?
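
    If nothing off-the-shelf fits a less-typical schema, a small generator is often the quickest route: combine modest seed lists into non-repeating combinations and write the rows out in bulk. A minimal Python sketch (the seed lists, column names and CSV output are invented for illustration):

        import csv
        import random

        FIRST = ["Alice", "Ben", "Chloe", "Dev", "Erin"]
        LAST = ["Smith", "Ng", "Okafor", "Rossi", "Kumar"]
        CITY = ["Leeds", "Austin", "Lyon", "Osaka", "Perth"]

        def fake_people(n, seed=1):
            rng = random.Random(seed)
            for i in range(n):
                yield {
                    "id": i + 1,
                    "name": f"{rng.choice(FIRST)} {rng.choice(LAST)}",
                    "city": rng.choice(CITY),
                    "company": f"{rng.choice(LAST)} {rng.choice(['Ltd', 'LLC', 'GmbH'])}",
                }

        with open("people_sample.csv", "w", newline="") as fh:
            writer = csv.DictWriter(fh, fieldnames=["id", "name", "city", "company"])
            writer.writeheader()
            writer.writerows(fake_people(150000))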

  • Resources related to data-mining and gaming on social networks

    - by darren
    Hi all. I'm interested in the problem of pattern mining among players of social networking games, for example detecting cheaters of a game given a company's user database. So far I have been following the usual recipe for a data mining project:

    - construct a data warehouse that aggregates significant information
    - select a classifier, and train it with a subsection of records from the warehouse
    - validate the classifier with another test set
    - lather, rinse, repeat

    Surprisingly, I've found very little in this area regarding literature, best practices, etc. I am hoping to crowdsource the information-gathering problem here. Specifically what I'm looking for:

    - What classifiers have worked well for this type of pattern mining? (It seems highly temporal: users playing games, users receiving rewards, users transferring prizes, etc.)
    - Are there any highly agreed-upon attributes specific to social networking / gaming data?
    - What is a practical amount of information that should be considered? One problem I've run into is data overload, where queries and data cleansing may take days to complete.
    - Related to the point above, what hardware resources are required to produce results? I've found it difficult to estimate the amount of computing power I will require for production use. It has become apparent that a white box in the corner does not have enough horse-power for such a project. Are companies generally resorting to cloud solutions? Are they buying clusters?

    Basically, any resources (theoretical, academic, or practical) about implementing a social networking / gaming pattern-mining program would be very much appreciated. Thanks.
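
    To make the recipe concrete, a self-contained Python sketch of the train/validate step on invented per-player features (the feature definitions, synthetic data and the choice of a random forest are illustrative assumptions, not recommendations from the question):

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        # Hypothetical per-player features aggregated from the warehouse:
        # [sessions/day, rewards/session, prizes transferred out, prizes transferred in]
        rng = np.random.default_rng(0)
        normal = rng.normal([3, 2, 1, 1], 1.0, size=(500, 4))
        cheats = rng.normal([3, 9, 8, 1], 1.0, size=(50, 4))   # inflated reward/transfer rates
        X = np.vstack([normal, cheats])
        y = np.array([0] * 500 + [1] * 50)                     # 1 = confirmed cheater

        X_train, X_test, y_train, y_test = train_test_split(
            X, y, test_size=0.25, random_state=0, stratify=y)
        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        clf.fit(X_train, y_train)
        print("held-out accuracy:", clf.score(X_test, y_test))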

  • improve my code for collapsing a list of data.frames

    - by romunov
    Dear StackOverFlowers (flowers in short), I have a list of data.frames (walk.sample) that I would like to collapse into a single (giant) data.frame. While collapsing, I would like to mark (by adding another column) which rows came from which element of the list. This is what I've got so far. This is the data that needs to be collapsed/stacked:

        > walk.sample
        [[1]]
             walker        x         y
        1073      3 228.8756 -726.9198
        1086      3 226.7393 -722.5561
        1081      3 219.8005 -728.3990
        1089      3 225.2239 -727.7422
        1032      3 233.1753 -731.5526

        [[2]]
             walker        x         y
        1008      3 205.9104 -775.7488
        1022      3 208.3638 -723.8616
        1072      3 233.8807 -718.0974
        1064      3 217.0028 -689.7917
        1026      3 234.1824 -723.7423

        [[3]]
        [1] 3

        [[4]]
             walker        x         y
        546       2 629.9041  831.0852
        524       2 627.8698  873.3774
        578       2 572.3312  838.7587
        513       2 633.0598  871.7559
        538       2 636.3088  836.6325
        1079      3 206.3683 -729.6257
        1095      3 239.9884 -748.2637
        1005      3 197.2960 -780.4704
        1045      3 245.1900 -694.3566
        1026      3 234.1824 -723.7423

    I have written a function that adds a column to denote which element the rows came from and then appends them to an existing data.frame:

        # collapse list to a dataframe with a twist
        collapseToDataFrame <- function(x) {
          walk.df <- data.frame()
          for (i in 1:length(x)) {
            n.rows <- nrow(x[[i]])
            if (length(x[[i]]) > 1) {
              temp.df <- cbind(x[[i]], rep(i, n.rows))
              names(temp.df) <- c("walker", "x", "y", "session")
              walk.df <- rbind(walk.df, temp.df)
            } else {
              cat("Empty list", "\n")
            }
          }
          return(walk.df)
        }

        > collapseToDataFrame(walk.sample)
        Empty list
        Empty list
             walker         x          y session
        3         1 -604.5055 -123.18759       1
        60        1 -562.0078  -61.24912       1
        84        1 -594.4661  -57.20730       1
        9         1 -604.2893 -110.09168       1
        43        1 -632.2491  -54.52548       1
        1028      3  240.3905 -724.67284       1
        1040      3  232.5545 -681.61225       1
        1073      3  228.8756 -726.91980       1
        1091      3  209.0373 -740.96173       1
        1036      3  248.7123 -694.47380       1

    I'm curious whether this can be done more elegantly, perhaps with do.call() or some other more generic function?

  • VBA-Sorting the data in a listbox, sort works but data in listbox not changed

    - by Mike Clemens
    A listbox is passed, the data is placed in an array, the array is sorted, and then the data is placed back in the listbox. The part that does not work is putting the data back in the listbox. It's like the listbox is being passed by value instead of by ref. Here's the sub that does the sort and the line of code that calls the sort sub:

        Private Sub SortListBox(ByRef LB As MSForms.ListBox)
            Dim First As Integer
            Dim Last As Integer
            Dim NumItems As Integer
            Dim i As Integer
            Dim j As Integer
            Dim Temp As String
            Dim TempArray() As Variant

            ReDim TempArray(LB.ListCount)
            First = LBound(TempArray)        ' this works correctly
            Last = UBound(TempArray) - 1     ' this works correctly
            For i = First To Last
                TempArray(i) = LB.List(i)    ' this works correctly
            Next i
            For i = First To Last
                For j = i + 1 To Last
                    If TempArray(i) > TempArray(j) Then
                        Temp = TempArray(j)
                        TempArray(j) = TempArray(i)
                        TempArray(i) = Temp
                    End If
                Next j
            Next i
            ' data is now sorted
            LB.Clear                         ' this doesn't clear the items in the listbox
            For i = First To Last
                LB.AddItem TempArray(i)      ' this doesn't work either
            Next i
        End Sub

        Private Sub InitializeForm()
            ' There's code here to put data in the list box
            Call SortListBox(FieldSelect.CompleteList)
        End Sub

    Thanks for your help.

  • Problem with core data migration mapping model

    - by dpratt
    I have an iPhone app that uses Core Data for storage. I have successfully deployed it, and now I'm working on the second version. I've run into a problem with the data model that will require a few very simple data transformations at the time the persistent store gets upgraded, so I can't just use the default inferred mapping model. My object model is stored in an .xcdatamodeld bundle, with versions 1.0 and 1.1 next to each other. Version 1.1 is set as the active version.

    Everything works fine when I use the default migration behavior and set NSInferMappingModelAutomaticallyOption to YES. My sqlite storage gets upgraded from the 1.0 version of the model, and everything is good except for, of course, the few transformations I need done.

    As an additional experimental step, I added a new mapping model to the Core Data model bundle and made no changes to what Xcode generated. When I run my app (with an older version of the data store), I get the following:

        *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: 'Object's persistent store is not reachable from this NSManagedObjectContext's coordinator'

    What am I doing wrong? Here's my code to get the managed object model and the persistent store coordinator:

        - (NSPersistentStoreCoordinator *)persistentStoreCoordinator {
            if (_persistentStoreCoordinator != nil) {
                return _persistentStoreCoordinator;
            }
            _persistentStoreCoordinator = [[NSPersistentStoreCoordinator alloc]
                initWithManagedObjectModel:[self managedObjectModel]];
            NSURL *storeUrl = [NSURL fileURLWithPath:[[self applicationDocumentsDirectory]
                stringByAppendingPathComponent:@"gti_store.sqlite"]];
            NSError *error;
            NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                                     [NSNumber numberWithBool:YES], NSMigratePersistentStoresAutomaticallyOption,
                                     [NSNumber numberWithBool:YES], NSInferMappingModelAutomaticallyOption, nil];
            if (![_persistentStoreCoordinator addPersistentStoreWithType:NSSQLiteStoreType
                                                            configuration:nil
                                                                      URL:storeUrl
                                                                  options:options
                                                                    error:&error]) {
                NSLog(@"Eror creating persistent store coodinator - %@", [error localizedDescription]);
            }
            return _persistentStoreCoordinator;
        }

        - (NSManagedObjectModel *)managedObjectModel {
            if (_managedObjectModel == nil) {
                _managedObjectModel = [[NSManagedObjectModel mergedModelFromBundles:nil] retain];
                NSDictionary *entities = [_managedObjectModel entitiesByName];

                // add a sort descriptor to the 'Foo' fetched property so that it can have an ordering -
                // you can't add these from the graphical core data modeler
                NSEntityDescription *entity = [entities objectForKey:@"Foo"];
                NSFetchedPropertyDescription *fetchedProp = [[entity propertiesByName] objectForKey:@"orderedBar"];
                NSSortDescriptor *sortDescriptor = [[[NSSortDescriptor alloc] initWithKey:@"index" ascending:YES] autorelease];
                NSArray *sortDescriptors = [NSArray arrayWithObjects:sortDescriptor, nil];
                [[fetchedProp fetchRequest] setSortDescriptors:sortDescriptors];
            }
            return _managedObjectModel;
        }

  • Generic Data Structure Description Language

    - by Jon Purdy
    I am wondering whether there exists any declarative language for arbitrarily describing the format and semantics of a data structure, that can be compiled to a specific implementation of that structure in any of a set of target languages. That is, something like a generic data definition language but geared toward describing arbitrary data structures such as vectors, lists, trees, etc., and the semantics of operations on those structures. I ask because I had an idea for a feasible implementation of this concept, and I'm just wondering whether it's worth it, and, consequently, whether it's been done before. Another, slightly more abstract question: is there any real difference between the normative specification of a data structure (what it does) and its implementation (how it does it)?

  • How does cobol store and retrieve data?

    - by controlfreak123
    I'm starting to learn about COBOL. I have some experience writing programs that deal with SQL databases, and I guess I'm confused about how COBOL stores and retrieves data that is stored on a mainframe, for example. I know that it's not like relational databases, but every example program I've seen takes data straight from the command line, and I know that's not how real-world COBOL programs process data. Can someone explain this, or point me to a good resource that can?

  • Visibility of Class field-data of Mouse Clicked ImageButton located within WrapPanel

    - by Bill
    I am attempting to obtain the class data behind an ImageButton that is mouse-clicked; the ImageButton is located within a WrapPanel filled with ImageButtons. The problem I am having is obtaining the visibility of the field data within the class behind the ImageButton. Although I can see the class, I can neither see nor access the field data. Can anyone please point me in the right direction?

        // Handles the ImageButton mouseClick event within the WrapPanel.
        private void SolarSystem_Click(Object sender, RoutedEventArgs e)
        {
            FrameworkElement fe = e.OriginalSource as FrameworkElement;
            SelectedPlanet PlanetSelected = new SelectedPlanet(fe);
            PlanetSelected.Owner = this;
            MessageBox.Show(PlanetSelected.PlanetName);
        }

        // Used to initiate instance of Class and some field data.
        public SelectedPlanet(FrameworkElement fe)
        {
            InitializeComponent();
            string sPlanetName = ((PlanetClass)(fe)).PlanetName;
            return sPlanetName
        }

        // Class Data
        public class PlanetClass
        {
            string planetName;

            public PlanetClass(string planetName)
            {
                PlanetName = planetName;
            }

            public string PlanetName
            {
                set { planetName = value; }
                get { return planetName; }
            }
        }

  • Core Data vs. SQLitePersistentObjects

    - by Macatomy
    I'm creating an iPhone app and I'm trying to choose between two solutions for a persistent store: Core Data or SQLitePersistentObjects. Basically, all my app needs is a way to store an array of model objects and then load them again to display in a UITableView. It's nothing too complicated. Core Data seems to have a much higher learning curve than the simple-to-use SQLitePersistentObjects. Are there any obvious benefits to using Core Data over SQLitePersistentObjects in my case?

  • Build ATOM Feed Reader for ADO.net DATA Services feed

    - by khalil
    Hi, I have built an ADO.net data service to expose data in a SQL Server database as XML. What I want to be able to do is create a feed reader in .NET for this ATOM feed, or maybe a user control that subscribes to this URI-based ATOM feed from the ADO.net data service and publishes the latest information on our website.

  • How to return plain XML from ADO.NET data service

    - by KHALIL
    Hi, I was wondering how to return plain XML from ADO.net data services. I have exposed an ADO.net data service to different departments in our company who are not so technical. The data returned is an ATOM feed, which is kind of hard to read and interpret with its format; too much information is returned. People from various departments execute different queries (HTTP requests), and I wanted the service to return simple XML, or at least something more user-friendly like HTML. I have tried setting the ACCEPT header of the request to plain XML and it still returns ATOM. Thanks -- Khalil
