Search Results

Search found 73253 results on 2931 pages for 'enterprise data quality solutions and news'.


  • Why is Core Data not persisting these changes to disk?

    - by scott
    I added a new entity to my model and it loads fine, but no changes made in memory get persisted to disk. My values set on the car object work fine in memory but aren't getting persisted to disk on hitting the home button (in the simulator). I am using almost exactly the same code on another entity in my application and its values persist to disk fine (Core Data - SQLite). Does anyone have a clue what I'm overlooking here? car is the managed object, cars is an NSMutableArray of car objects, Car is the entity, and Visible is the attribute on the entity which I am trying to set. Thanks for your assistance. Scott

        - (void)viewDidLoad {
            myAppDelegate* appDelegate = (myAppDelegate*)[[UIApplication sharedApplication] delegate];
            NSManagedObjectContext* managedObjectContex = appDelegate.managedObjectContext;

            NSFetchRequest* request = [[NSFetchRequest alloc] init];
            NSEntityDescription* entity = [NSEntityDescription entityForName:@"Car"
                                                      inManagedObjectContext:managedObjectContex];
            [request setEntity:entity];

            NSSortDescriptor* sortDescriptor = [[NSSortDescriptor alloc] initWithKey:@"Name" ascending:YES];
            NSArray* sortDescriptors = [[NSArray alloc] initWithObjects:sortDescriptor, nil];
            [request setSortDescriptors:sortDescriptors];
            [sortDescriptors release];
            [sortDescriptor release];

            NSError* error = nil;
            cars = [[managedObjectContex executeFetchRequest:request error:&error] mutableCopy];
            if (cars == nil) {
                NSLog(@"Can't load the Cars data! Error: %@, %@", error, [error userInfo]);
            }
            [request release];
        }

        - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath*)indexPath {
            Car* car = [cars objectAtIndex:indexPath.row];
            if (car.Visible == [NSNumber numberWithBool:YES]) {
                car.Visible = [NSNumber numberWithBool:NO];
                [tableView cellForRowAtIndexPath:indexPath].accessoryType = UITableViewCellAccessoryNone;
            } else {
                car.Visible = [NSNumber numberWithBool:YES];
                [tableView cellForRowAtIndexPath:indexPath].accessoryType = UITableViewCellAccessoryCheckmark;
            }
        }
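
    A likely culprit, judging only from the code shown: the managed object context is fetched and mutated but never saved, so the edits live only in memory. A minimal sketch, assuming the standard Core Data app-delegate template (names match the code above; this is not a confirmed fix from the thread):

        // Hedged sketch: persist pending changes explicitly after mutating the object.
        // Without an explicit -save:, nothing reaches the SQLite store.
        NSError* saveError = nil;
        if (![managedObjectContex save:&saveError]) {
            NSLog(@"Save failed: %@, %@", saveError, [saveError userInfo]);
        }

    Calling this at the end of -tableView:didSelectRowAtIndexPath:, or from the app delegate when the app quits, would make the Visible changes reach disk. As an aside, car.Visible == [NSNumber numberWithBool:YES] compares object pointers; [car.Visible boolValue] is the safer test.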

    Read the article

  • How do I use Core Data with the Cocoa Text Input system?

    - by the Joel
    Hobbyist Cocoa programmer here. Have been looking around all the usual places, but this seems relatively under-explained: I am writing something a little out of the ordinary. It is much simpler than, but similar to, a desktop publishing app. I want editable text boxes on a canvas, arbitrarily placed. This is document-based and I'd really like to use Core Data. Now, the Cocoa text-handling system seems to deal with a four-class structure: NSTextStorage, NSLayoutManager, NSTextContainer and finally NSTextView. I have looked into these and know how to use them, sort of. Have been making some prototypes and it works for simple apps. The problem arrives when I get into persistency. I don't know how to, by way of Cocoa Bindings or something else, store the contents of NSTextStorage (= the actual text) in my managed object context. I have considered overriding method pairs like -words, -setWords: in these objects. This would let me link the words to a String, which I know how to store in Core Data. However, I'd have to override any method that affects the text - and that seems a little much. Thankful for any insights.
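
    One commonly suggested route (an assumption on my part, not the thread's accepted answer): since NSAttributedString adopts NSCoding, the text storage can be archived into a binary attribute on the managed object instead of overriding every mutating method. A minimal sketch, where the attribute name "content" is invented:

        // Archive the attributed text into a Core Data "Binary Data" attribute.
        NSAttributedString* text = [[textView textStorage] copy];
        NSData* blob = [NSKeyedArchiver archivedDataWithRootObject:text];
        [managedObject setValue:blob forKey:@"content"]; // hypothetical attribute
        [text release];

        // Restore it when the document is reopened:
        NSData* saved = [managedObject valueForKey:@"content"];
        NSAttributedString* restored = [NSKeyedUnarchiver unarchiveObjectWithData:saved];
        [[textView textStorage] setAttributedString:restored];

    Saving on NSTextDidEndEditingNotification (or at document save time) keeps the number of touch points small.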

    Read the article

  • Update graph in real time from server

    - by user1869421
    I'm trying to update a graph with received data, so that the height of the bars increases as more data is received from the server via a WebSocket. But my code doesn't render a graph in the browser or plot the data points. I cannot see anything wrong with the code. I really need some help here please.

        ws = new WebSocket("ws://localhost:8888/dh");
        var useData = []
        //var chart;
        var chart = d3.select("body")
            .append("svg:svg")
            .attr("class", "chart")
            .attr("width", 420)
            .attr("height", 200);

        ws.onmessage = function(evt) {
            var distances = JSON.parse(evt.data);
            data = distances.miles;
            console.log(data);
            if (useData.length <= 10) {
                useData.push(data)
            } else {
                var draw = function(data) {
                    // Set the width relative to max data value
                    var x = d3.scale.linear()
                        .domain([0, d3.max(useData)])
                        .range([0, 420]);
                    var y = d3.scale.ordinal()
                        .domain(useData)
                        .rangeBands([0, 120]);

                    var rect = chart.selectAll("rect")
                        .data(useData)

                    // enter rect
                    rect.enter().append("svg:rect")
                        .attr("y", y)
                        .attr("width", x)
                        .attr("height", y.rangeBand());

                    // update rect
                    rect.attr("y", y)
                        .attr("width", x)
                        .attr("height", y.rangeBand());

                    var text = chart.selectAll("text")
                        .data(useData)

                    // enter text
                    text.enter().append("svg:text")
                        .attr("x", x)
                        .attr("y", function (d) { return y(d) + y.rangeBand() / 2; })
                        .attr("dx", -3) // padding-right
                        .attr("dy", ".35em") // vertical-align: middle
                        .attr("text-anchor", "end") // text-align: right
                        .text(String);

                    // update text
                    text.data(useData)
                        .attr("x", x)
                        .text(String);
                }
                useData.length = 0;
            }
        }

    Thanks
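
    Two things stand out in the code as posted (observations from the snippet itself, not an accepted answer): draw is defined inside the else branch but never invoked, and useData.length = 0 throws away the buffered points before anything is rendered. A hedged sketch of the handler with those two issues addressed, assuming draw is hoisted above ws.onmessage:

        ws.onmessage = function (evt) {
            var distances = JSON.parse(evt.data);
            var data = distances.miles;
            if (useData.length <= 10) {
                useData.push(data);
            } else {
                draw(useData);      // actually render the buffered window
                useData.length = 0; // then start collecting the next one
            }
        };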

    Read the article

  • Most elegant way to break CSV columns into separate data structures using Python?

    - by Nick L
    I'm trying to pick up Python. As part of the learning process I'm porting a project I wrote in Java to Python. I'm at a section now where I have a list of CSV headers of the form:

        headers = [a, b, c, d, e, .....]

    and separate lists of groups that these headers should be broken up into, e.g.:

        headers_for_list_a = [b, c, e, ...]
        headers_for_list_b = [a, d, k, ...]
        ...

    I want to take the CSV data and turn it into dicts based on these groups, e.g.:

        list_a = [
            {b: val_1b, c: val_1c, e: val_1e, ...},
            {b: val_2b, c: val_2c, e: val_2e, ...},
            {b: val_3b, c: val_3c, e: val_3e, ...},
            ...
        ]

    where, for example, val_1b is the first row of the 'b' column, val_3c is the third row of the 'c' column, etc. My first "Java instinct" is to do something like:

        for row in data:
            for col_num, val in enumerate(row):
                col_name = headers[col_num]
                if col_name in group_a:
                    dict_a[col_name] = val
                elif headers[col_cum] in group_b:
                    dict_b[col_name] = val
            ...
            list_a.append(dict_a)
            list_b.append(dict_b)
            ...

    However, this method seems inefficient/unwieldy and doesn't possess the elegance that Python programmers are constantly talking about. Is there a more "Zen-like" way I should try, keeping with the philosophy of Python?
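
    For comparison, a more idiomatic sketch (one common approach, not the thread's accepted answer; the file name and group contents are illustrative):

        import csv

        headers_for_list_a = ["b", "c", "e"]   # example group

        with open("data.csv") as f:
            rows = list(csv.DictReader(f))     # each row becomes {header: value}

        # Keep only the columns belonging to this group, one dict per row:
        list_a = [dict((h, row[h]) for h in headers_for_list_a) for row in rows]

    csv.DictReader does the header-to-value pairing for free, so each per-group list reduces to a single comprehension.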

    Read the article

  • Avoid an "out of memory error" in Java (Eclipse) when using a large data structure?

    - by gnomed
    OK, so I am writing a program that unfortunately needs to use a huge data structure to complete its work, but it is failing with an "out of memory error" during its initialization. While I understand entirely what that means and why it is a problem, I am having trouble overcoming it, since my program needs to use this large structure and I don't know any other way to store it. The program first indexes a large corpus of text files that I provide. This works fine. Then it uses this index to initialize a large 2D array. This array will have n x n entries, where "n" is the number of unique words in the corpus of text. For the relatively small chunk I am testing it on (about 60 files) it needs to make approximately 30,000 x 30,000 entries. This will probably be bigger once I run it on my full intended corpus too. It consistently fails every time, after it indexes, while it is initializing the data structure (to be worked on later). Things I have done include: revamping my code to use a primitive int[] instead of a TreeMap, eliminating redundant structures, etc. Also, I have run Eclipse with "eclipse -vmargs -Xmx2g" to max out my allocated memory. I am fairly confident this is not going to be a simple line-of-code solution, but is most likely going to require a very new approach. I am looking for what that approach is, any ideas? Thanks, B.
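
    For scale (simple arithmetic, not from the thread): a dense 30,000 x 30,000 int array is 30,000 x 30,000 x 4 bytes, roughly 3.6 GB, so no -Xmx setting on a 32-bit JVM will hold it. Word-pair matrices like this are usually overwhelmingly sparse, which suggests storing only the cells that are actually touched. A hedged sketch of that approach (class and method names invented):

        import java.util.HashMap;
        import java.util.Map;

        // Sparse 2D counter: only nonzero (row, col) cells consume memory.
        class SparseCounts {
            private final Map<Long, Integer> cells = new HashMap<Long, Integer>();

            // Pack the two indices into a single long key.
            private long key(int row, int col) {
                return ((long) row << 32) | (col & 0xFFFFFFFFL);
            }

            void increment(int row, int col) {
                Long k = key(row, col);
                Integer v = cells.get(k);
                cells.put(k, v == null ? 1 : v + 1);
            }

            int get(int row, int col) {
                Integer v = cells.get(key(row, col));
                return v == null ? 0 : v;
            }
        }

    If most word pairs never co-occur, memory use drops from billions of cells to one map entry per observed pair.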

    Read the article

  • Multiple Solution Layout for ASP.NET Web Portal?

    - by Jared S
    At work, we've developed a custom ASP.NET Web Portal (that's very similar to iGoogle). We have "Apps" (self-contained, large web forms) and "Modules" (similar to Google Gadgets). Currently, we use a single-solution model. Right now, we have:
      - 3 core projects
      - 60 application projects
      - 80 module projects
    To reduce copying and pasting between projects, we're going to factor out common functionality (Data Access, Business Logic) into separate projects. I'd also like to introduce Unit Tests, which is going to increase the number of projects even more. We've already reached the point where Visual Studio is choking on the number of projects. We generally only load the 3 core projects and then whatever app or module project we're working on. Would a different solution structure help us out? Our number of projects is only going to increase. In general, an app or module only references the 3 core projects. Soon, apps/modules may start referencing the Data Access/Business Logic projects. But in general, apps and modules do not make references between themselves. So to recap, what is the best practice for solution structure when there are MANY projects that use a small number of core projects?

    Read the article

  • stdio data from write not making it into a file

    - by user1551209
    I'm having a problem with using stdio commands for manipulating data in a file. In short, when I write data into a file, write returns an int indicating that it was successful, but when I read it back out I only get the old data. Here's a stripped down version of the code:

        fd = open(filename, O_RDWR|O_APPEND);
        struct dE *cDE = malloc(sizeof(struct dE));

        //Read present data
        printf("\nreading values at %d\n", off);
        printf("SeekStatus <%d>\n", lseek(fd, off, SEEK_SET));
        printf("ReadStatus <%d>\n", read(fd, cDE, deSize));
        printf("current Key/Data <%d/%s>\n", cDE->key, cDE->data);

        printf("\nwriting new values\n");
        //Change the values locally
        cDE->key = //something new
        cDE->data = //something new

        //Write them back
        printf("SeekStatus <%d>\n", lseek(fd, off, SEEK_SET));
        printf("WriteStatus <%d>\n", write(fd, cDE, deSize));

        //Re-read to make sure that it got written back
        printf("\nre-reading values at %d\n", off);
        printf("SeekStatus <%d>\n", lseek(fd, off, SEEK_SET));
        printf("ReadStatus <%d>\n", read(fd, cDE, deSize));
        printf("current Key/Data <%d/%s>\n", cDE->key, cDE->data);

    Furthermore, here's the dE struct in case you're wondering:

        struct dE {
            int key;
            char data[DataSize];
        };

    This prints:

        reading values at 1072
        SeekStatus <1072>
        ReadStatus <32>
        current Key/Data <27/old>

        writing new values
        SeekStatus <1072>
        WriteStatus <32>

        re-reading values at 1072
        SeekStatus <1072>
        ReadStatus <32>
        current Key/Data <27/old>
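
    A strong suspect, judging from the open flags alone (standard POSIX semantics, not a confirmed answer from the thread): O_APPEND moves the file offset to the end of the file before every write(2), so the lseek(2) back to off is silently undone and the new record lands at EOF instead of over the old one. A minimal sketch of the change:

        /* Open without O_APPEND when records are rewritten in place;
         * O_APPEND belongs only on logs and other append-only files. */
        fd = open(filename, O_RDWR);

    The reads still show the old data because read(2) honors the lseek, while the appended copy of the record sits unnoticed at the end of the file.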

    Read the article

  • Passing XML data and user from coldfusion page to .NET page

    - by Mark Rullo
    I'd appreciate some input on this situation, I can't figure out the best way to do this. I have some data that's being prepared for me in a ColdFusion app, and in an IFrame within the CF app we want to display some graphs (not strictly an image, it's an entire page) being generated on the .NET side of things. I'd like to pass XML data from the CF side to .NET, as well as the user. On the .NET side I'm putting the data in a session so the user can sift through it without the need to have it re-queried and re-passed from CF. What I've tried: generating XML with CF, putting it in a hidden form field, and auto-submitting (with JS) the form to the .NET side. The issue I'm having with this approach is the encoding being done on the form post. The data has entries like <entry data="hello &amp; goodbye">. It's an issue because it's being URL encoded, POSTed, and when I get it on the .NET side I get <entry data="hello & goodbye">, which isn't properly formed XML. What I'd like to avoid:
      - An intermediary DB approach (dropping the data in a DB on CF, picking it up with .NET). I'd like to only display what is passed to the page. I have security concerns with the data, it's very sensitive.
      - Passing the data to a webservice, returning a GUID, and forwarding the user with a URL parameter to access the passed-in data. I think that'd be risky if someone happened on a link to that data. I can't take that risk.
    I was thinking of passing the data with JSON, but I'm very unfamiliar with it. Thoughts? Thanks for your time folks.
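
    One hedged workaround for the decoding problem described above: make the payload opaque to form/URL decoding by Base64-encoding the XML in ColdFusion before the post and decoding it on the .NET side. The field and variable names here are invented:

        // .NET side of the sketch: recover the XML exactly as CF produced it.
        string encoded = Request.Form["payload"];              // Base64 survives the POST untouched
        byte[] raw = Convert.FromBase64String(encoded);
        string xml = System.Text.Encoding.UTF8.GetString(raw); // back to well-formed XML

    On the CF side, ToBase64() (or BinaryEncode()) produces the matching string. Since the payload never looks like markup or URL-reserved characters in transit, the &amp; entities arrive intact.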

    Read the article

  • How can I read and parse chunks of data into a Perl hash of arrays?

    - by neversaint
    I have data that looks like this:

        #info #info2
        1:SRX004541
        Submitter: UT-MGS, UT-MGS
        Study: Glossina morsitans transcript sequencing project(SRP000741)
        Sample: Glossina morsitans(SRS002835)
        Instrument: Illumina Genome Analyzer
        Total: 1 run, 8.3M spots, 299.9M bases
        Run #1: SRR016086, 8330172 spots, 299886192 bases

        2:SRX004540
        Submitter: UT-MGS
        Study: Anopheles stephensi transcript sequencing project(SRP000747)
        Sample: Anopheles stephensi(SRS002864)
        Instrument: Solexa 1G Genome Analyzer
        Total: 1 run, 8.4M spots, 401M bases
        Run #1: SRR017875, 8354743 spots, 401027664 bases

        3:SRX002521
        Submitter: UT-MGS
        Study: Massive transcriptional start site mapping of human cells under hypoxic conditions.(SRP000403)
        Sample: Human DLD-1 tissue culture cell line(SRS001843)
        Instrument: Solexa 1G Genome Analyzer
        Total: 6 runs, 27.1M spots, 977M bases
        Run #1: SRR013356, 4801519 spots, 172854684 bases
        Run #2: SRR013357, 3603355 spots, 129720780 bases
        Run #3: SRR013358, 3459692 spots, 124548912 bases
        Run #4: SRR013360, 5219342 spots, 187896312 bases
        Run #5: SRR013361, 5140152 spots, 185045472 bases
        Run #6: SRR013370, 4916054 spots, 176977944 bases

    What I want to do is to create a hash of arrays with the first line of each chunk as keys and the SR## part of the lines beginning with "Run" as its array members:

        $VAR = {
            'SRX004541' => ['SRR016086'],
            # etc
        }

    But why doesn't my construct below work? And there must be a better way to do it.

        use Data::Dumper;
        my %bighash;
        my $head = "";
        my @temp = ();
        while ( <> ) {
            chomp;
            next if (/^\#/);
            if ( /^\d{1,2}:(\w+)/ ) {
                print "$1\n";
                $head = $1;
            }
            elsif (/^Run \#\d+: (\w+),.*/){
                print "\t$1\n";
                push @temp, $1;
            }
            elsif (/^$/) {
                push @{$bighash{$head}}, [@temp];
                @temp =();
            }
        }
        print Dumper \%bighash ;
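
    Two observable problems in the construct as posted (read off the code itself, not an accepted answer): push @{$bighash{$head}}, [@temp] pushes an array reference, so the result is nested one level deeper than the desired ['SRR016086'], and the final chunk is lost whenever the input doesn't end with a blank line. Pushing each run as it is seen avoids both, since no end-of-chunk bookkeeping is needed. A hedged rewrite:

        use strict;
        use warnings;
        use Data::Dumper;

        my %bighash;
        my $head;
        while (<>) {
            chomp;
            next if /^#/;                       # skip the comment lines
            if (/^\d+:(\w+)/) {                 # chunk header: remember the key
                $head = $1;
            }
            elsif (/^Run\s+#\d+:\s+(\w+)/) {    # run line: append to the current key
                push @{ $bighash{$head} }, $1;
            }
        }
        print Dumper \%bighash;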

    Read the article

  • A data structure based on the R-Tree: creating new child nodes when a node is full, but what if I have many leaves at the exact same position?

    - by Tom
    I realize my title is not very clear, but I am having trouble thinking of a better one. If anyone wants to correct it, please do. I'm developing a data structure for my 2-dimensional game with an infinite universe. The data structure is based on a simple (!) node/leaf system, like the R-Tree. This is the basic concept: you set how many children you want a node (a container) to have at maximum. If you want to add a leaf, but the node the leaf should be in is full, then it will create a new set of nodes within this node and move all current leaves to their new (more exact) node. This way, very populated areas will have a lot more subdivisions than a very big but rarely visited area. This works for normal objects. The only problem arises when I have more than maxChildsPerNode objects with the exact same X,Y location: because the node is full, it will create more exact subnodes, but the old leaves will all be put in the exact same node again because they have the exact same position -- resulting in an infinite loop of creating more and more nodes. So, what should I do when I want to add more leaves than maxChildsPerNode with the exact same position to my tree? PS: if I failed to explain my problem, please tell me, so I can try to improve the explanation.
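
    A common escape hatch for this degenerate case (a general spatial-index technique, not something from the thread): only split a node if splitting can actually separate its leaves; when every leaf shares one position, let the node keep an overflow list beyond maxChildsPerNode. A hedged sketch in Java, with Node and Leaf invented for illustration:

        // Returns true only if subdividing could separate the leaves.
        static boolean splitWouldHelp(Node node) {
            Leaf first = node.leaves.get(0);
            for (Leaf l : node.leaves) {
                if (l.x != first.x || l.y != first.y) {
                    return true;  // at least two distinct positions: a split is useful
                }
            }
            return false;         // identical positions: keep an overflow bucket instead
        }

    The insert path then becomes: if the node is full and splitWouldHelp returns false, just append to the node's list; the infinite subdivision loop can no longer trigger.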

    Read the article

  • What are your suggestions for best practises for regular data updates in a website database?

    - by bboyle1234
    My shared-hosting ASP.NET website must automatically run data update routines at regular times of day. Once it has finished running certain update routines, it can run update routines that are dependent on the previous updates. I have done this type of work before, using quite complicated setups. Some features of the framework I created are:
      - A cron job from another server makes a request which starts a data update routine on the main server
      - Each updater is loaded from web.config
      - Each updater overrides a "canRunUpdate" method that determines whether its dependencies have finished updating
      - Each updater overrides a "hasFinishedUpdate" method
      - Each updater overrides a "runUpdate" method
      - Updaters start and run in parallel threads
    The initial request from the cron job server started each updater in its own thread and then ended. As a result, the threads containing the updaters would be terminated before the updaters were finished. Therefore I had to give the updaters the ability to save partial results and continue the update job the next time they are started up. As a result, the cron server had to call the updater many times to ensure the job is done. Sometimes the cron server would continue making update requests long after all the updates were completed. Sometimes the cron server would finish calling the update requests and leave some updates uncompleted. It's not the best system. I'm looking for inspiration. Any ideas please? Thank you :)
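
    For reference, the updater contract described in the list above, reconstructed as a C# sketch (the method names come from the post; the interface shape around them is assumed):

        public interface IUpdater
        {
            // True once all updaters this one depends on have finished.
            bool CanRunUpdate();

            // True when this updater's work (possibly spread over several
            // cron-triggered partial runs) is complete.
            bool HasFinishedUpdate();

            // Does, or resumes, the actual update work.
            void RunUpdate();
        }

    A dispatcher loaded from web.config can then loop over the registered updaters, running each one whose CanRunUpdate() is true and HasFinishedUpdate() is false, which is presumably what the original framework did.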

    Read the article

  • How do I display core data on second view controller?

    - by jon
    I am working on my first Core Data iPhone application. I am using a navigation controller, and the root view controller displays 4 rows. Clicking the first row takes me to a second table view controller. However, when I click the back button, repeat the row tap, click the back button again, and tap the row a third time, I get an error. I have been researching this for a week with no success. I can reproduce the error easily: Create a new Navigation-based Application, use Core Data for storage, call it MyTest, which creates MyTestAppDelegate and RootViewController. Add a new UIViewController subclass, with UITableViewController and xib, call it ListViewController. Copy code from RootViewController.h and .m to ListViewController.h and .m, changing the file names appropriately. To simplify the code, I removed the trailing "_" from all variables. In RootViewController, I added #import "ListViewController.h", set up an array to display 4 rows, and navigate to ListViewController when clicking the first row. In ListViewController.m, I added #import "MyTestAppDelegate.h" and the following code:

        - (void)viewDidLoad {
            [super viewDidLoad];
            if (managedObjectContext == nil) {
                managedObjectContext = [(MyTestAppDelegate *)[[UIApplication sharedApplication] delegate] managedObjectContext];
            }
            ..
        }

    The sequence that causes the error is tap row, return, tap row, return, tap row - error. managedObjectContext is synthesized for the third time. I appreciate your patience and your help, as this makes no sense to me.

    ADDENDUM: I may have a partial solution: http://www.iphonedevsdk.com/forum/iphone-sdk-development/41688-accessing-app-delegates-managed-object-context.html If I do not release the managedObjectContext in the .m file, the error goes away. Is that ok or will that cause me issues?

        - (void)dealloc {
            [fetchedResultsController release];
            // [managedObjectContext release];
            [super dealloc];
        }

    ADDENDUM 2: See solution below. Sorry for the formatting issues - this was my first post.

    Read the article

  • Separating code logic from the actual data structures. Best practices?

    - by Patrick
    I have an application that loads lots of data into memory (this is because it needs to perform some mathematical simulations on big data sets). This data comes from several database tables that all refer to each other. The consistency rules on the data are rather complex, and looking up all the relevant data requires quite some hashes and other additional data structures on the data. The problem is that this data may also be changed interactively by the user in a dialog. When the user presses the OK button, I want to perform all the checks to see that he didn't introduce inconsistencies in the data. In practice all the data needs to be checked at once, so I cannot update my data set incrementally and perform the checks one by one. However, all the checking code works on the actual data set loaded in memory, and uses the hashing and other data structures. This means I have to do the following:
      - Take the user's changes from the dialog
      - Apply them to the big data set
      - Perform the checks on the big data set
      - Undo all the changes if the checks fail
    I don't like this solution since other threads are also continuously using the data set, and I don't want to halt them while performing the checks. Also, the undo means that the old situation needs to be put aside, which is also not possible. An alternative is to separate the checking code from the data set (and let it work on explicitly given data, e.g. coming from the dialog), but this means that the checking code cannot use hashing and other additional data structures, because they only work on the big data set, making the checks much slower. What is a good practice to check the user's changes on complex data before applying them to the application's data set?
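
    One pattern that addresses both complaints at once (a suggestion of mine, not from the thread): run the checks against an overlay that shadows the shared data set, so validation sees "base plus pending edits" without mutating the base, without an undo step, and without stopping the other threads. A minimal Java sketch with invented types:

        import java.util.HashMap;
        import java.util.Map;

        // Reads prefer the pending edits and fall back to the shared base.
        // Commit pending into base only after all checks pass.
        class Overlay<K, V> {
            private final Map<K, V> base;                    // shared, never written here
            private final Map<K, V> pending = new HashMap<K, V>();

            Overlay(Map<K, V> base) { this.base = base; }

            void put(K key, V value) { pending.put(key, value); }

            V get(K key) {
                V v = pending.get(key);
                return v != null ? v : base.get(key);
            }
        }

    The checking code keeps its fast lookups (the overlay can carry its own small hash indexes for the edited entries), and a failed check is discarded by simply dropping the overlay.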

    Read the article

  • Fastest way to read/store lots of multidimensional data? (Java)

    - by RemiX
    I have three questions about three nested loops:

        for (int x=0; x<400; x++) {
            for (int y=0; y<300; y++) {
                for (int z=0; z<400; z++) {
                    // compute and store value
                }
            }
        }

    And I need to store all computed values. My standard approach would be to use a 3D array:

        values[x][y][z] = 1; // test value

    but this turns out to be slow: it takes 192 ms to complete this loop, whereas a single int assignment

        int value = 1; // test value

    takes only 66 ms. 1) Why is an array so relatively slow? 2) And why does it get even slower when I put this in the inner loop:

        values[z][y][x] = 1; // (notice x and z switched)

    This takes more than 4 seconds! 3) Most importantly: Can I use a data structure that is as quick as the assignment of a single integer, but can store as much data as the 3D array?
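
    Background for questions 1 and 2 (standard JVM behavior, not quoted from the thread): a Java int[400][300][400] is an array of arrays of arrays, so every store chases two extra references, and writing values[z][y][x] with z varying innermost lands each consecutive write in a different one of the 120,000 innermost arrays, defeating the CPU cache. For question 3, the usual answer is one flat int[] indexed manually, which keeps the innermost loop walking memory sequentially. A hedged sketch:

        int nx = 400, ny = 300, nz = 400;
        int[] values = new int[nx * ny * nz];    // one contiguous block

        for (int x = 0; x < nx; x++) {
            for (int y = 0; y < ny; y++) {
                int rowBase = (x * ny + y) * nz; // hoist the index arithmetic
                for (int z = 0; z < nz; z++) {
                    values[rowBase + z] = 1;     // sequential, cache-friendly writes
                }
            }
        }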

    Read the article

  • Excel data into PowerPoint slides

    - by nqw1
    I have already found some helpful sites but I'm still unable to do what I want. My Excel file contains a few columns and multiple rows. All the data from one row would be in one slide, but data from different cells in that one row should go to specific elements of a PP slide. First of all, is it possible to export data from an Excel cell into a specific text box in PP? For example, I would like to have all data from the first column of each row go to Text box 1. Let's say I have 100 rows, so I would have 100 slides and each slide would have Text box 1 with the correct data. The text box of slide 66 would have data from the first column of row 66. Then all data from the second column of each row would go to a Text box 2, and so on. I tried to do some macros, with little success. I also tried to use Word outlines and export them into PP (New slide - Slides from Outline), but there seems to be a bug, since I got 250 pages of gibberish. I had only two paragraphs and both had one word. The first paragraph used the Heading 1 style and the second paragraph used the Normal style. The sites that I have found use VBA and/or some other programming language to create slides from Excel sheets. I have tried to add those VBA codes into my macros but none of them has worked so far. Probably I just don't know how to use them correctly :) Here are some helpful sites I found:
      - VBA: Create PowerPoint Slide for Each Row in Excel Workbook
      - Creating a Presentation Report Based on Data
      - Question in Stackoverflow
    I use Office 2011 on Mac. Any help would be appreciated!
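
    For the first question, a minimal VBA sketch of the row-to-slide idea (illustrative only: the placeholder indexes and the two-column mapping are assumptions, and Office 2011 for Mac's VBA automation can behave differently from Windows):

        ' Run from Excel: one slide per used row, column A into the first
        ' placeholder, column B into the second.
        Sub RowsToSlides()
            Dim ppt As Object, pres As Object, sld As Object
            Dim r As Long
            Set ppt = CreateObject("PowerPoint.Application")
            Set pres = ppt.Presentations.Add
            For r = 1 To ActiveSheet.UsedRange.Rows.Count
                Set sld = pres.Slides.Add(r, 2)   ' 2 = ppLayoutText
                sld.Shapes(1).TextFrame.TextRange.Text = CStr(Cells(r, 1).Value)
                sld.Shapes(2).TextFrame.TextRange.Text = CStr(Cells(r, 2).Value)
            Next r
        End Sub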

    Read the article

  • How to export SQL Server data from corrupted database (with disk write error)

    - by damitamit
    IT realised there was a disk write error on our production SQL Server 2005, which was causing the backups to fail. By the time they had realised this, the nightly backup was old, so we were not able to just restore the backup on another server. The database is still running and being used constantly. However, DBCC CHECKDB fails. Also, the SQL Server backup task fails, Copy Database fails, and the Export Data Wizard fails. However, it seems all the data can be read from the tables (i.e. using bcp etc.). Another observation I have made is that the transaction log is nearly double the size of the database. (Does that mean all the changes aren't being written to the MDF?) What would be the best plan of attack to get the database to a state where backups are working and the data is safe?
      - Take the database offline and use the MDF/LDF to somehow create the database on another SQL server?
      - Export the data from the database using bcp. Create the database (use the Generate Scripts function on the corrupt db to create the schema on the new db) on another SQL server and use bcp again to import the data.
      - Some other option that is the right course of action in this situation?
    The IT manager says the data is safe, as if the server fails the data can be restored from the mdf/ldf. I'm not sure, so I insisted that we start exporting the data each night as a failsafe (using bcp, for example). IT are also having issues on the hardware side of things, as supposedly the disk error is on a virtualized disk that can't be rebuilt like a normal RAID array (or something like that). Please excuse my use of incorrect terminology and incorrect assumptions on how SQL Server operates. I'm the application developer and have been called in to help (as it seems IT know less about SQL Server than I do). Many thanks, Amit
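
    For the nightly bcp failsafe, the round trip looks roughly like this (a hedged example: the database, table, and server names are invented, and -n native format plus -T trusted connection are just common choices):

        rem Export every row of a table to a native-format file...
        bcp ProdDb.dbo.Orders out Orders.dat -n -S PRODSERVER -T

        rem ...and load it into the rebuilt database created from Generate Scripts:
        bcp NewDb.dbo.Orders in Orders.dat -n -S NEWSERVER -T

    bcp reads the pages through the normal query path, which matches the observation that SELECTs still work even though DBCC CHECKDB and the backup task fail.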

    Read the article

  • Performance data collection for short-running, ephemeral servers

    - by ErikA
    We're building a medical image processing software stack, currently hosted on various AWS resources. As part of this application, we have a handful of long-running servers (database, load balancers, web application, etc.). Collecting performance data on those servers is quite simple - my go-to recipe of Nagios (for monitoring/notifications) and Munin (for collection of performance data and displaying trends) will work just fine. However - as part of this application, we are constantly starting up and terminating compute instances on EC2. In typical usage, these compute instances start up, configure themselves, receive a job from a message queue, and then get to work processing that job, which takes anywhere from 15 minutes to over 8 hours. After job completion, these instances get terminated, never to be heard from again. What is a decent strategy for collecting performance data on these short-lived instances? I don't necessarily need monitoring on them - if they fail for whatever reason, our application will detect this and handle re-starting the job on another instance or raising the flag so an administrator can take a look at things. However, it still would be useful to collect information like CPU (user, idle, iowait, etc.), memory usage, network traffic, disk read/write data, etc. In our internal database, we track the instance ID of the machine that runs each job, and it would be quite helpful to be able to look up performance data for a specific instance ID for troubleshooting and profiling. Munin doesn't seem like a great candidate, as it requires maintaining a list of munin nodes in a text file - far from ideal for an environment with a high amount of churn, and for the short amount of time each node will be running, I'd rather keep the full-resolution data indefinitely than have RRD water down the data over time. In the end, my guess is that this will require a monitoring engine that:
      - uses a database (MySQL, SQLite, etc.) for configuration and data storage
      - exposes an API for adding/removing hosts and services
    Are there other things I should be thinking about when evaluating options? Perhaps I'm over-thinking this, though, and just ought to run sar at 1-minute intervals on these short-lived instances and collect the sar db files prior to termination.
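
    The sar fallback mentioned at the end is only a few lines; a hedged sketch (the file paths, the INSTANCE_ID variable, and the archive destination are all invented):

        #!/bin/sh
        # Start a 1-minute-interval collector into a binary sar data file.
        # With no count argument, sar keeps sampling until it is killed.
        sar -o /tmp/perf-$INSTANCE_ID.sa 60 >/dev/null 2>&1 &
        SAR_PID=$!

        # ... run the queued job to completion ...

        # Stop sampling and ship the full-resolution file somewhere durable,
        # keyed by instance id so it can be joined against the job database.
        kill $SAR_PID
        scp /tmp/perf-$INSTANCE_ID.sa archive-host:/var/perf/

    Keeping the raw .sa files sidesteps the RRD down-sampling concern raised above, since sar's binary files never lose resolution.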

    Read the article

  • Binding data to subgrid

    - by bhargav
    I have a jqGrid with a subgrid. The data binding is done in JavaScript like this:

        <script language="javascript" type="text/javascript">
        var x = screen.width;
        $(document).ready(function () {
            $("#projgrid").jqGrid({
                mtype: 'POST',
                datatype: function (pdata) { getData(pdata); },
                colNames: ['Project ID', 'Due Date', 'Project Name', 'SalesRep', 'Organization:', 'Status', 'Active Value', 'Delete'],
                colModel: [
                    { name: 'Project ID', index: 'project_id', width: 12, align: 'left', key: true },
                    { name: 'Due Date', index: 'project_date_display', width: 15, align: 'left' },
                    { name: 'Project Name', index: 'project_title', width: 60, align: 'left' },
                    { name: 'SalesRep', index: 'Salesrep', width: 22, align: 'left' },
                    { name: 'Organization:', index: 'customer_company_name:', width: 56, align: 'left' },
                    { name: 'Status', index: 'Status', align: 'left', width: 15 },
                    { name: 'Active Value', index: 'Active Value', align: 'left', width: 10 },
                    { name: 'Delete', index: 'Delete', align: 'left', width: 10 }],
                pager: '#proj_pager',
                rowList: [10, 20, 50],
                sortname: 'project_id',
                sortorder: 'asc',
                rowNum: 10,
                loadtext: "Loading....",
                subGrid: true,
                shrinkToFit: true,
                emptyrecords: "No records to view",
                width: x - 100,
                height: "100%",
                rownumbers: true,
                caption: 'Projects',
                subGridRowExpanded: function (subgrid_id, row_id) {
                    var subgrid_table_id, pager_id;
                    subgrid_table_id = subgrid_id + "_t";
                    pager_id = "p_" + subgrid_table_id;
                    $("#" + subgrid_id).html("<table id='" + subgrid_table_id + "' class='scroll'></table><div id='" + pager_id + "' class='scroll'></div>");
                    jQuery("#" + subgrid_table_id).jqGrid({
                        mtype: 'POST',
                        postData: { entityIndex: function () { return row_id } },
                        datatype: function (pdata) { getactionData(pdata); },
                        height: "100%",
                        colNames: ['Event ID', 'Priority', 'Deadline', 'From Date', 'Title', 'Status', 'Hours', 'Contact From', 'Contact To'],
                        colModel: [
                            { name: 'Event ID', index: 'Event ID' },
                            { name: 'Priority', index: 'IssueCode' },
                            { name: 'Deadline', index: 'IssueTitle' },
                            { name: 'From Date', index: 'From Date' },
                            { name: 'Title', index: 'Title' },
                            { name: 'Status', index: 'Status' },
                            { name: 'Hours', index: 'Hours' },
                            { name: 'Contact From', index: 'Contact From' },
                            { name: 'Contact To', index: 'Contact To' }
                        ],
                        caption: "Action Details",
                        rowNum: 10,
                        pager: '#actionpager',
                        rowList: [10, 20, 30, 50],
                        sortname: 'Event ID',
                        sortorder: "desc",
                        loadtext: "Loading....",
                        shrinkToFit: true,
                        emptyrecords: "No records to view",
                        rownumbers: true,
                        ondblClickRow: function (rowid) { }
                    });
                    jQuery("#actiongrid").jqGrid('navGrid', '#actionpager', { edit: false, add: false, del: false, search: false });
                }
            });
            jQuery("#projgrid").jqGrid('navGrid', '#proj_pager', { edit: false, add: false, del: false, excel: true, search: false });
        });

        function getactionData(pdata) {
            var project_id = pdata.entityIndex();
            var ChannelContact = document.getElementById('ctl00_ContentPlaceHolder2_ddlChannelContact').value;
            var HideCompleted = document.getElementById('ctl00_ContentPlaceHolder2_chkHideCompleted').checked;
            var Scm = document.getElementById('ctl00_ContentPlaceHolder2_chkScm').checked;
            var checkOnlyContact = document.getElementById('ctl00_ContentPlaceHolder2_chkOnlyContact').checked;
            var MerchantId = document.getElementById('ctl00_ContentPlaceHolder2_ucProjectDetail_hidden_MerchantId').value;
            var nrows = pdata.rows;
            var npage = pdata.page;
            var sortindex = pdata.sidx;
            var sortdir = pdata.sord;
            var path = "project_brow.aspx/GetActionDetails"
            $.ajax({
                type: "POST",
                url: path,
                data: "{'project_id': '" + project_id + "','ChannelContact': '" + ChannelContact + "','HideCompleted': '" + HideCompleted + "','Scm': '" + Scm + "','checkOnlyContact': '" + checkOnlyContact + "','MerchantId': '" + MerchantId + "','nrows': '" + nrows + "','npage': '" + npage + "','sortindex': '" + sortindex + "','sortdir': '" + sortdir + "'}",
                contentType: "application/json; charset=utf-8",
                success: function (data, textStatus) {
                    if (textStatus == "success")
                        obj = jQuery.parseJSON(data.d)
                    ReceivedData(obj);
                },
                error: function (data, textStatus) {
                    alert('An error has occured retrieving data!');
                }
            });
        }

        function ReceivedData(data) {
            var thegrid = jQuery("#actiongrid")[0];
            thegrid.addJSONData(data);
        }

        function getData(pData) {
            var dtDateFrom = document.getElementById('ctl00_ContentPlaceHolder2_dtDateFrom_textBox').value;
            var dtDateTo = document.getElementById('ctl00_ContentPlaceHolder2_dtDateTo_textBox').value;
            var Status = document.getElementById('ctl00_ContentPlaceHolder2_ddlStatus').value;
            var Type = document.getElementById('ctl00_ContentPlaceHolder2_ddlType').value;
            var Channel = document.getElementById('ctl00_ContentPlaceHolder2_ddlChannel').value;
            var ChannelContact = document.getElementById('ctl00_ContentPlaceHolder2_ddlChannelContact').value;
            var Customers = document.getElementById('ctl00_ContentPlaceHolder2_txtCustomers').value;
            var KeywordSearch = document.getElementById('ctl00_ContentPlaceHolder2_txtKeywordSearch').value;
            var Scm = document.getElementById('ctl00_ContentPlaceHolder2_chkScm').checked;
            var HideCompleted = document.getElementById('ctl00_ContentPlaceHolder2_chkHideCompleted').checked;
            var SelectedCustomerId = document.getElementById("<%=hdnSelectedCustomerId.ClientID %>").value
            var MerchantId = document.getElementById('ctl00_ContentPlaceHolder2_ucProjectDetail_hidden_MerchantId').value;
            var nrows = pData.rows;
            var npage = pData.page;
            var sortindex = pData.sidx;
            var sortdir = pData.sord;
            PageMethods.GetProjectDetails(SelectedCustomerId, Customers, KeywordSearch, MerchantId, Channel, Status, Type, dtDateTo, dtDateFrom, ChannelContact, HideCompleted, Scm, nrows, npage, sortindex, sortdir, AjaxSucceeded, AjaxFailed);
        }

        function AjaxSucceeded(data) {
            var obj = jQuery.parseJSON(data)
            if (obj != null) {
                if (obj.records != "") {
                    ReceivedClientData(obj);
                } else {
                    alert('No Data Available to Display')
                }
            }
        }

        function AjaxFailed(data) {
            alert('An error has occured retrieving data!');
        }

        function ReceivedClientData(data) {
            var thegrid = jQuery("#projgrid")[0];
            thegrid.addJSONData(data);
        }
        </script>

    As you can see, projgrid is my parent grid and actiongrid is my subgrid, to be shown on clicking the '+' symbol. projgrid is bound and displayed, but when it comes to the subgrid I am able to get the data; the problem comes at the time of binding the data to the subgrid, which is done in the function named ReceivedData:

        function ReceivedData(data) {
            var thegrid = jQuery("#actiongrid")[0];
            thegrid.addJSONData(data);
        }

    "data" is exactly what I wanted, but it cannot be bound to actiongrid, which is the subgrid. Thanks in advance for help.

    Read the article

  • Export data to Excel from Silverlight/WPF DataGrid

    - by outcoldman
    Exporting data from a DataGrid to Excel is a very common task, and it can be solved in different ways; the chosen way depends on the kind of app you are designing. If you are developing an app for an enterprise, and it will be installed on several computers, then you can state the system requirements under which your app will work for the client, or the customer will give you the system requirements your app has to meet. In this case you can use COM for the export (using the infrastructure of Excel or OpenOffice). This approach gives you much more flexibility and the possibility to use all the features of the Excel app. I'll speak about this approach below. The other way: your app is for personal use and can be installed on any home computer; in this case it is not good to ask the user to install MS Office or OpenOffice just to use your app. Here you can use third-party tools for the export, or export to an XML/HTML format which MS Office can read (the approach used by JIRA). But in this case it is more difficult to satisfy user requirements, like creating a document with landscape orientation and defined print margins. In this article I'll show you how to work with the Excel object from .NET 4 and Silverlight 4 with dynamic objects, and give you an approach which allows you to export data from the Silverlight and WPF DataGrid controls. Read more...
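
    The COM-plus-dynamic combination the article builds on looks roughly like this (a minimal sketch of the general technique, assuming Excel is installed; this is not code taken from the article itself):

        using System;

        class ExcelExportSketch
        {
            static void Main()
            {
                // Late-bound COM: no interop assembly reference is needed in .NET 4,
                // because 'dynamic' dispatches the member calls at run time.
                Type excelType = Type.GetTypeFromProgID("Excel.Application");
                dynamic excel = Activator.CreateInstance(excelType);
                excel.Visible = true;

                dynamic workbook = excel.Workbooks.Add();
                dynamic sheet = workbook.ActiveSheet;

                sheet.Cells[1, 1] = "Product";      // header row
                sheet.Cells[1, 2] = "Quantity";
                sheet.Cells[2, 1] = "Model plane";  // one exported grid row
                sheet.Cells[2, 2] = 3;
            }
        }

    In Silverlight 4 the equivalent entry point is AutomationFactory.CreateObject("Excel.Application"), available to elevated-trust out-of-browser applications.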

    Read the article

  • Changing Endpoint URL for a Web Service Data Control

    - by vishal.s.jain(at)oracle.com
    When you move your application from development to production, there is, more often than not, a need to change the web service endpoint URL in your ADF application. If you are using a Web Service Data Control (WSDC), you can do this in more than one way. The following example illustrates how this can be done.

    At Design Time
    If the application workspace is in your control, you can quickly do this by updating the definition in the DataControl.dcx file. Along with this, you will also need to change the endpoint in connections.xml. So invoke the Edit Connections dialog: Then, change the endpoint URL.

    At Deployment
    Another way to change it is at the ear level, at deployment. So when you select Deploy -> Application Server at the application level, it will bring up a Deployment Configuration dialog, in which you can edit the WSDL URL: Also, change the Port URL.

    At Post Deployment
    If you need to change this post deployment, you can do it through Oracle Enterprise Manager. But for this, your application needs to be configured with a writable MDS repository. It is recommended you use a database MDS store during deployment. So have your application configured (by having an entry in adf-config.xml) and the server configured (by having an MDS store registered). Once done, you can configure the ADF Connection in EM for this application: Change the WSDL location here on 'Edit': Also, change the Port using Advanced Connection Configuration: Change the Endpoint Address here: Apply Changes and you are done!

    Read the article

  • The KSH data warehouse: built on Oracle Essbase and Oracle Database

    - by Fekete Zoltán
    The metadata-driven data warehouse of the Hungarian Central Statistical Office (KSH) rests on three important Oracle products. The data are available on the internet from the KSH dissemination database ("Tájékoztatási adatbázis"). Data from KSH in English. As I write these lines, at 21:36 on Friday night, 81 online users are querying the data. :)
      - Oracle Essbase multidimensional OLAP server (technical info)
      - Hyperion Interactive Reporting query tool (technical info)
      - Oracle Database Enterprise Edition
    The customer snapshot in English: Hungarian Central Statistical Office Provides 200,000 External Users with Secure Online Access to Data. The success story in Hungarian: 60 percent of KSH's statistical data is accessible, independently of browser and platform, to about 200,000 internet users a year. DSS Consulting Kft. and Oracle Consulting played a large role in the product selection and in shaping and implementing the project. The most important results achieved during the project:
      - data warehouse: 150-200 concurrent users, which means about 200,000 users a year
      - Essbase's memory-based storage structure: near-real-time access
      - The system is platform- and browser-independent, so a wide range of users can reach the statistical data.
      - A custom maintenance application built on the native Java API with XMLA support
      - The statisticians build and maintain the multidimensional databases without any special IT training
      - Oracle Hyperion Interactive Reporting: columnar, cross-tab, sectioned, charted, and web-based queries
    The following KSH presentation from the 2009 HOUG conference can be downloaded: "Hyperalea iacta est - a KSH Essbase alapú adattárház rendszere" (the KSH's Essbase-based data warehouse system). The newly published success story: in English and in Hungarian.

    Read the article

  • Now Available: Visual Studio 2010 Release Candidate Virtual Machines with Sample Data and Hands-on-Labs

    - by John Alexander
    From a message from Brian Keller: "Back in December we posted a set of virtual machines pre-configured with Visual Studio 2010 Beta 2, Visual Studio Team Foundation Server 2010 Beta 2, and 7 hands-on-labs. I am pleased to announce that today we have shipped an updated virtual machine using the Visual Studio 2010 Release Candidate bits, a brand new sample application, and 9 hands-on-labs. This VM is customer-ready and includes everything you need to learn and/or deliver demonstrations of many of my favorite application lifecycle management (ALM) capabilities in Visual Studio 2010. This VM is available in the virtualization platform of your choice (Hyper-V, Virtual PC 2007 SP1, and Windows [7] Virtual PC). Hyper-V is highly recommended because of the performance benefits and snapshotting capabilities.

    Tailspin Toys
    The sample application we are using in this virtual machine is a simple ASP.NET MVC 2 storefront called Tailspin Toys. Tailspin Toys sells model airplanes and relies on the application lifecycle management capabilities of Visual Studio 2010 to help them build, test, and maintain their storefront. Major kudos go to Dan Massey for building out this great application for us.

    Hands-on-Labs / Demo Scripts
    The 9 hands-on-labs / demo scripts which accompany this virtual machine cover several of the core capabilities of conducting application lifecycle management with Visual Studio 2010. Each document can be used by an individual in a hands-on-lab capacity, to learn how to perform a given set of tasks, or used by a presenter to deliver a demonstration or classroom-style training. Unlike the beta 2 release, 100% of these labs target Tailspin Toys to help ensure a consistent storytelling experience.

    Software quality:
      - Authoring and Running Manual Tests using Microsoft Test Manager 2010
      - Introduction to Test Case Management with Microsoft Test Manager 2010
      - Introduction to Coded UI Tests with Visual Studio 2010 Ultimate
      - Debugging with IntelliTrace using Visual Studio 2010 Ultimate
    Software architecture:
      - Code Discovery using the architecture tools in Visual Studio 2010 Ultimate
      - Understanding Class Coupling with Visual Studio 2010 Ultimate
      - Using the Architecture Explorer in Visual Studio 2010 Ultimate to Analyze Your Code
    Software Configuration Management:
      - Planning your Projects with Team Foundation Server 2010
      - Branching and Merging Visualization with Team Foundation Server 2010"

    Check out Brian's Post for more info including download instructions...

    Read the article

  • New whitepaper: Evolution from the Traditional Data Center to Exalogic: An Operational Perspective

    - by Javier Puerta
    IT organizations are struggling with the need to balance the day-to-day concerns of data center management against the business level requirements to deliver long-term value. This balancing act has proven difficult and inefficient: systems and application management tools are resource intensive and traditional infrastructure management architectures have developed over time on a project by project basis. These traditional management systems consist of multiple tools that require administrators to waste time performing too many steps to handle routine administrative tasks. Operational efficiency and agility in your enterprise are directly linked to the capabilities provided by the management layer across the entire stack, from the application, middleware, operating system, compute, network and storage. Only when this end to end capability is provided will we experience the full benefit of a scalable, efficient, responsive and secure datacenter. Managing Exalogic is substantially less complex and error prone than managing traditional systems built from individually sourced, multi-vendor components because Exalogic is designed to be administered and maintained as a single, integrated system (Figure 1). It is at the forefront of the industry-wide shift away from costly and inferior one-off platforms toward private clouds and Engineered Systems. Read the full whitepaper "Evolution from the Traditional Data Center to Exalogic: An Operational Perspective". Full document is available for download at the Exadata Partner Community Collaborative Workspace (for community members only - if you get an error message, please register for the Community first).

    Read the article

  • Video on Architecture and Code Quality using Visual Studio 2012&ndash;interview with Marcel de Vries and Terje Sandstrom by Adam Cogan

    - by terje
    Find the video HERE. Adam Cogan did a great Web TV interview with Marcel de Vries and myself on the topics of architecture and code quality. It was real fun participating in this session. Although we know each other from the MVP ALM community, Marcel, Adam and I haven't worked together before. It was very interesting to see how we agreed on so many terms, and how alike we were thinking. The basics of ensuring you have a good architecture and how you could document it is one thing. Also, we agreed on the importance of having a high quality code base, and how we used the Visual Studio 2012 tools, and some others (NDepend, for example), to measure and ensure that the code quality was where it should be. As the tools, methods and thinking popped up during the interview there was a lot of "Hey! I do that too!". The tools are not only for "after the fact" work; we use them during the coding. That way the tools become an integrated part of our coding work, and help us to find issues we may have overlooked. The video has a bunch of call-outs, pinpointing important things to remember. These are also listed on the corresponding web page. I haven't seen that touch before, but really liked this way of doing it - it makes it much easier to spot the highlights. Titus Maclaren and Raj Dhatt from SSW have done a terrific job producing this video. And thanks to Lei Xu for doing the camera and recording job. Thanks guys! Also, if you are at TechEd Amsterdam 2012, go and listen to Adam Cogan in his session on "A modern architecture review: Using the new code review tools" (Friday 29th, 10.15-11.30) and Marcel de Vries' session on "IntelliTrace, what is it and how can I use it to my benefit" (Wednesday 27th, 5-6.15).

    The highlights point out some important practices. I'll elaborate on a few of them here:

    Add instructions on how to compile the solution. You do this by adding a text file with instructions to the solution, and keep it under source control. These instructions should contain what is needed on top of a standard install of Visual Studio. I do a lot of code reviews, and more often than not, I am not even able to compile the program, because they have used some tool or library that needs to be installed. The same applies to any new developer who enters the team, so do this to increase your productivity when the team changes, or a team member switches computer. Don't forget to document what you have to configure on the computer, the IIS being a common one. The more automatically you can do this, the better. Use NuGet to pull down libraries. When the text document gets to more than, say, half a page, with a bunch of different things to do, convert it into a PowerShell script instead.

    The metrics warning levels. These are very conservatively set by Microsoft. You rarely see anything but green, and besides, you should have color scales for each of the metrics. I have a blog post describing a more appropriate set of levels, based on both research work and industry "best practices". The essential limits are:

    Cyclomatic complexity and coupling: higher numbers are worse. On method levels:
      - Green: from 0 to 10
      - Yellow: from 10 to 20 (some say 15). Acceptable, but have a look to see if there is something unneeded here.
      - Red: from 20 to 40. Action required, get these down.
      - Bleeding red: above 40. This is the real red alert. Immediate action! (My invention, as people have asked what I do when I have a cyclomatic complexity of 150. The only answer I could think of was: RUN!)

    Maintainability index: lower numbers are worse, scale from 0 to 100. On method levels:
      - Green: 60 to 100
      - Yellow: 40-60. You will always have methods here too; accept the higher ones, but take a look at those that are down near the lower limit. Check up against the other metrics.
      - Red: 20-40. Action required, fix these.
      - Bleeding red: below 20. Immediate action required.

    When doing metrics analysis, you should leave the generated code out. You do this by adding attributes; unfortunately Microsoft has "forgotten" to add these to all their stuff, so you might have to add them to some of the code. In most cases it can be done so that it is not overwritten by a new round of code generation. Take a look at my blog post here for details on how to do that. Class-level metrics might also be useful, at least for coupling and maintenance. But it is much more difficult to set any fixed limits on those. Any metric aggregations on a higher level tend to be pretty useless, as the number of methods varies quite a bit, and there is little science on what number of methods can be regarded as good or bad. NDepend has a recommendation, but they say it may vary too. And in these days of data binding, the number might be pretty high, as properties count as methods. However, if you take the worst-case situations, classes with more than 20 methods are suspicious, and coupling and cyclomatic complexity go red above 20, so any classes with more than 20x20 = 400 for these measures should be checked over.

    In the video we mention the SOLID principles, coined by "Uncle Bob" (Robert C. Martin). One of them, the Dependency Inversion principle, we discuss in the video. It is important to note that this principle is NOT about whether you should use a Dependency Inversion Container or not; it is about how you design the interfaces and interactions between your classes. The Dependency Inversion Container is just one technique which is based on this principle, whose main purpose is to isolate things you would like to change at runtime, for example if you implement a plug-in architecture. Overuse of a Dependency Inversion Container is, however, NOT a good thing. It should be used for a purpose, and not as a general DI solution. The general DI solution and thinking, however, is useful far beyond the DIC. You should always "program to an abstraction", and not to the concreteness.

    We also talk a bit about the GRASP patterns, a term coined by Craig Larman in his book Applying UML and Patterns. GRASP stands for General Responsibility Assignment Software Patterns, and these patterns describe fundamental principles of object design and responsibility assignment. What I find great with these patterns is that they are another way to focus on the responsibility of a class. One of the things I have most often found broken in software designs is that classes lack responsibility, and as a result there are a lot of classes mucking around in the internals of other classes.

    We also discuss the term "code smells". This term was invented by Kent Beck and Martin Fowler when they worked on Fowler's "Refactoring" book. A code smell is a set of "bad" coding practices, which are the drivers behind a corresponding set of refactorings. Here is a good list of the smells, and their corresponding refactoring patterns. See also this.
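
    For the generated-code exclusion mentioned above, the attribute in question is presumably the standard GeneratedCodeAttribute (the class name below is invented for illustration; where exactly to place it so code generation doesn't overwrite it is what the referenced blog post covers):

        // Code analysis and metrics skip members carrying this attribute.
        [System.CodeDom.Compiler.GeneratedCode("MyDesigner", "1.0")]
        public partial class DesignerOutput
        {
            // generated members...
        }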

    Read the article

  • How to deal with configuration style warnings occurring from the TexLive 2012 installation?

    - by JJD
    I followed the advice of izx on how to install TexLive 2012 using the texlive-backports PPA. Before I started I removed all TexLive-related packages. The installation finished and everything seems to work fine. The only thing I noticed are some warnings in the output of the installer. Here is an excerpt of the output: Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg There are more of that kind in the rest of the output: $ sudo apt-get install texlive Reading package lists... Done Building dependency tree Reading state information... Done The following extra packages will be installed: latex-beamer latex-xcolor libgraphite3 libkpathsea6 libptexenc1 lmodern pgf prosper ps2eps tex-common tex-gyre texlive-base texlive-binaries texlive-common texlive-doc-base texlive-extra-utils texlive-font-utils texlive-fonts-recommended texlive-fonts-recommended-doc texlive-generic-recommended texlive-latex-base texlive-latex-base-doc texlive-latex-recommended texlive-latex-recommended-doc texlive-pstricks texlive-pstricks-doc tipa ttf-marvosym Suggested packages: texlive-doc-en purifyeps chktex latexmk dvipng xindy dvidvi fragmaster lacheck latexdiff t1utils The following NEW packages will be installed: latex-beamer latex-xcolor libgraphite3 libkpathsea6 libptexenc1 lmodern pgf prosper ps2eps tex-common tex-gyre texlive texlive-base texlive-binaries texlive-common texlive-doc-base texlive-extra-utils texlive-font-utils texlive-fonts-recommended texlive-fonts-recommended-doc texlive-generic-recommended texlive-latex-base texlive-latex-base-doc texlive-latex-recommended texlive-latex-recommended-doc texlive-pstricks texlive-pstricks-doc tipa ttf-marvosym 0 upgraded, 29 newly installed, 0 to remove and 17 not upgraded. Need to get 0 B/274 MB of archives. After this operation, 450 MB of additional disk space will be used. Do you want to continue [Y/n]? Preconfiguring packages ... Selecting previously unselected package tex-common. (Reading database ... 290206 files and directories currently installed.) Unpacking tex-common (from .../tex-common_3.13~ubuntu12.04.1_all.deb) ... Selecting previously unselected package lmodern. Unpacking lmodern (from .../lmodern_2.004.1-5~precise1_all.deb) ... Selecting previously unselected package tex-gyre. Unpacking tex-gyre (from .../tex-gyre_2.004.1-4~precise1_all.deb) ... Selecting previously unselected package libgraphite3. Unpacking libgraphite3 (from .../libgraphite3_1%3a2.3.1-0.2build1_amd64.deb) ... Selecting previously unselected package libkpathsea6. Unpacking libkpathsea6 (from .../libkpathsea6_2012.20120628-1~ubuntu12.04.1_amd64.deb) ... Selecting previously unselected package libptexenc1. Unpacking libptexenc1 (from .../libptexenc1_2012.20120628-1~ubuntu12.04.1_amd64.deb) ... Selecting previously unselected package texlive-common. Unpacking texlive-common (from .../texlive-common_2012.20120611-3~ubuntu12.04.1_all.deb) ... Selecting previously unselected package texlive-binaries. Unpacking texlive-binaries (from .../texlive-binaries_2012.20120628-1~ubuntu12.04.1_amd64.deb) ... Selecting previously unselected package texlive-doc-base. Unpacking texlive-doc-base (from .../texlive-doc-base_2012.20120611-1~ubuntu12.04.1_all.deb) ... Selecting previously unselected package texlive-base. 
Unpacking texlive-base (from .../texlive-base_2012.20120611-3~ubuntu12.04.1_all.deb) ... Selecting previously unselected package texlive-latex-base. Unpacking texlive-latex-base (from .../texlive-latex-base_2012.20120611-3~ubuntu12.04.1_all.deb) ... Selecting previously unselected package texlive-latex-recommended. Unpacking texlive-latex-recommended (from .../texlive-latex-recommended_2012.20120611-3~ubuntu12.04.1_all.deb) ... Selecting previously unselected package latex-xcolor. Unpacking latex-xcolor (from .../latex-xcolor_2.11-1_all.deb) ... Selecting previously unselected package pgf. Unpacking pgf (from .../archives/pgf_2.10-1_all.deb) ... Selecting previously unselected package latex-beamer. Unpacking latex-beamer (from .../latex-beamer_3.10-1_all.deb) ... Selecting previously unselected package texlive-generic-recommended. Unpacking texlive-generic-recommended (from .../texlive-generic-recommended_2012.20120611-3~ubuntu12.04.1_all.deb) ... Selecting previously unselected package texlive-pstricks. Unpacking texlive-pstricks (from .../texlive-pstricks_2012.20120611-1~ubuntu12.04.1_all.deb) ... Selecting previously unselected package prosper. Unpacking prosper (from .../prosper_1.00.4+cvs.2007.05.01-4_all.deb) ... Selecting previously unselected package ps2eps. Unpacking ps2eps (from .../ps2eps_1.68-1_amd64.deb) ... Selecting previously unselected package ttf-marvosym. Unpacking ttf-marvosym (from .../ttf-marvosym_0.1+dfsg-2_all.deb) ... Selecting previously unselected package texlive-fonts-recommended. Unpacking texlive-fonts-recommended (from .../texlive-fonts-recommended_2012.20120611-3~ubuntu12.04.1_all.deb) ... Selecting previously unselected package texlive. Unpacking texlive (from .../texlive_2012.20120611-3~ubuntu12.04.1_all.deb) ... Selecting previously unselected package texlive-extra-utils. Unpacking texlive-extra-utils (from .../texlive-extra-utils_2012.20120611-1~ubuntu12.04.1_all.deb) ... Selecting previously unselected package texlive-font-utils. Unpacking texlive-font-utils (from .../texlive-font-utils_2012.20120611-1~ubuntu12.04.1_all.deb) ... Selecting previously unselected package texlive-fonts-recommended-doc. Unpacking texlive-fonts-recommended-doc (from .../texlive-fonts-recommended-doc_2012.20120611-3~ubuntu12.04.1_all.deb) ... Selecting previously unselected package texlive-latex-base-doc. Unpacking texlive-latex-base-doc (from .../texlive-latex-base-doc_2012.20120611-3~ubuntu12.04.1_all.deb) ... Selecting previously unselected package texlive-latex-recommended-doc. Unpacking texlive-latex-recommended-doc (from .../texlive-latex-recommended-doc_2012.20120611-3~ubuntu12.04.1_all.deb) ... Selecting previously unselected package texlive-pstricks-doc. Unpacking texlive-pstricks-doc (from .../texlive-pstricks-doc_2012.20120611-1~ubuntu12.04.1_all.deb) ... Selecting previously unselected package tipa. Unpacking tipa (from .../tipa_2%3a1.3-17~precise1_all.deb) ... Processing triggers for doc-base ... Processing 5 added doc-base files... Registering documents with scrollkeeper... Processing triggers for man-db ... Processing triggers for fontconfig ... Processing triggers for install-info ... Setting up tex-common (3.13~ubuntu12.04.1) ... Running mktexlsr. This may take some time... done. texlive-base is not ready, delaying updmap-sys call texlive-base is not ready, skipping fmtutil-sys --all call Setting up lmodern (2.004.1-5~precise1) ... 
Warning: Old configuration style found in /etc/texmf/updmap.d
Warning: For now these files have been included,
Warning: but expect inconsistencies.
Warning: These packages should be rebuild with tex-common.
Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz
Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg
Setting up tex-gyre (2.004.1-4~precise1) ...
Warning: Old configuration style found in /etc/texmf/updmap.d
Warning: For now these files have been included,
Warning: but expect inconsistencies.
Warning: These packages should be rebuild with tex-common.
Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz
Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg
Setting up libgraphite3 (1:2.3.1-0.2build1) ...
Setting up libkpathsea6 (2012.20120628-1~ubuntu12.04.1) ...
Setting up libptexenc1 (2012.20120628-1~ubuntu12.04.1) ...
Setting up texlive-common (2012.20120611-3~ubuntu12.04.1) ...
Setting up texlive-binaries (2012.20120628-1~ubuntu12.04.1) ...
update-alternatives: using /usr/bin/xdvi-xaw to provide /usr/bin/xdvi.bin (xdvi.bin) in auto mode.
update-alternatives: using /usr/bin/bibtex.original to provide /usr/bin/bibtex (bibtex) in auto mode.
mktexlsr: Updating /var/lib/texmf/ls-R-TEXLIVEMAIN...
mktexlsr: Updating /var/lib/texmf/ls-R-TEXLIVEDIST...
mktexlsr: Updating /var/lib/texmf/ls-R-TEXMFMAIN...
mktexlsr: Updating /var/lib/texmf/ls-R...
mktexlsr: Done.
Building format(s) --refresh. This may take some time... done.
Setting up texlive-doc-base (2012.20120611-1~ubuntu12.04.1) ...
Warning: Old configuration style found in /etc/texmf/updmap.d
Warning: For now these files have been included,
Warning: but expect inconsistencies.
Warning: These packages should be rebuild with tex-common.
Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz
Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg
Setting up ps2eps (1.68-1) ...
Setting up ttf-marvosym (0.1+dfsg-2) ...
Setting up texlive-fonts-recommended-doc (2012.20120611-3~ubuntu12.04.1) ...
Warning: Old configuration style found in /etc/texmf/updmap.d
Warning: For now these files have been included,
Warning: but expect inconsistencies.
Warning: These packages should be rebuild with tex-common.
Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz
Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg
Setting up texlive-latex-base-doc (2012.20120611-3~ubuntu12.04.1) ...
Warning: Old configuration style found in /etc/texmf/updmap.d
Warning: For now these files have been included,
Warning: but expect inconsistencies.
Warning: These packages should be rebuild with tex-common.
Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz
Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg
Setting up texlive-latex-recommended-doc (2012.20120611-3~ubuntu12.04.1) ...
Warning: Old configuration style found in /etc/texmf/updmap.d
Warning: For now these files have been included,
Warning: but expect inconsistencies.
Warning: These packages should be rebuild with tex-common.
Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz
Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg
Setting up texlive-pstricks-doc (2012.20120611-1~ubuntu12.04.1) ...
Warning: Old configuration style found in /etc/texmf/updmap.d
Warning: For now these files have been included,
Warning: but expect inconsistencies.
Warning: These packages should be rebuild with tex-common.
Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz
Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg
Processing triggers for tex-common ...
Running mktexlsr. This may take some time... done.
texlive-base is not ready, delaying updmap-sys call
Setting up texlive-base (2012.20120611-3~ubuntu12.04.1) ...
mktexlsr: Updating /var/lib/texmf/ls-R-TEXLIVEMAIN...
mktexlsr: Updating /var/lib/texmf/ls-R-TEXMFMAIN...
mktexlsr: Updating /var/lib/texmf/ls-R...
mktexlsr: Done.
/usr/bin/tl-paper: setting paper size for dvips to a4.
/usr/bin/tl-paper: setting paper size for dvipdfmx to a4.
/usr/bin/tl-paper: setting paper size for xdvi to a4.
/usr/bin/tl-paper: setting paper size for pdftex to a4.
Warning: Old configuration style found in /etc/texmf/updmap.d
Warning: For now these files have been included,
Warning: but expect inconsistencies.
Warning: These packages should be rebuild with tex-common.
Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz
Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg
Running mktexlsr. This may take some time... done.
Building format(s) --all. This may take some time... done.
Processing triggers for tex-common ...
Running updmap-sys. This may take some time... done.
Running mktexlsr /var/lib/texmf ... done.
Setting up texlive-generic-recommended (2012.20120611-3~ubuntu12.04.1) ...
Warning: Old configuration style found in /etc/texmf/updmap.d
Warning: For now these files have been included,
Warning: but expect inconsistencies.
Warning: These packages should be rebuild with tex-common.
Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz
Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg
Setting up texlive-fonts-recommended (2012.20120611-3~ubuntu12.04.1) ...
Warning: Old configuration style found in /etc/texmf/updmap.d
Warning: For now these files have been included,
Warning: but expect inconsistencies.
Warning: These packages should be rebuild with tex-common.
Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz
Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg
Setting up texlive-extra-utils (2012.20120611-1~ubuntu12.04.1) ...
Warning: Old configuration style found in /etc/texmf/updmap.d
Warning: For now these files have been included,
Warning: but expect inconsistencies.
Warning: These packages should be rebuild with tex-common.
Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz
Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg
Setting up texlive-font-utils (2012.20120611-1~ubuntu12.04.1) ...
Warning: Old configuration style found in /etc/texmf/updmap.d
Warning: For now these files have been included,
Warning: but expect inconsistencies.
Warning: These packages should be rebuild with tex-common.
Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz
Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg
Setting up texlive-latex-base (2012.20120611-3~ubuntu12.04.1) ...
Warning: Old configuration style found in /etc/texmf/updmap.d
Warning: For now these files have been included,
Warning: but expect inconsistencies.
Warning: These packages should be rebuild with tex-common.
Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz
Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg
Running mktexlsr. This may take some time... done.
Building format(s) --all --cnffile /etc/texmf/fmt.d/10texlive-latex-base.cnf. This may take some time... done.
Processing triggers for tex-common ...
Running mktexlsr. This may take some time... done.
Running updmap-sys. This may take some time... done.
Running mktexlsr /var/lib/texmf ... done.
Setting up texlive-pstricks (2012.20120611-1~ubuntu12.04.1) ...
Warning: Old configuration style found in /etc/texmf/updmap.d
Warning: For now these files have been included,
Warning: but expect inconsistencies.
Warning: These packages should be rebuild with tex-common.
Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz
Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg
Setting up tipa (2:1.3-17~precise1) ...
Warning: Old configuration style found in /etc/texmf/updmap.d
Warning: For now these files have been included,
Warning: but expect inconsistencies.
Warning: These packages should be rebuild with tex-common.
Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz
Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg
Setting up texlive-latex-recommended (2012.20120611-3~ubuntu12.04.1) ...
Warning: Old configuration style found in /etc/texmf/updmap.d
Warning: For now these files have been included,
Warning: but expect inconsistencies.
Warning: These packages should be rebuild with tex-common.
Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz
Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg
Processing triggers for tex-common ...
Running mktexlsr. This may take some time... done.
Running updmap-sys. This may take some time... done.
Running mktexlsr /var/lib/texmf ... done.
Setting up prosper (1.00.4+cvs.2007.05.01-4) ...
Warning: Old configuration style found in /etc/texmf/updmap.d
Warning: For now these files have been included,
Warning: but expect inconsistencies.
Warning: These packages should be rebuild with tex-common.
Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz
Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg
Running mktexlsr. This may take some time... done.
Setting up texlive (2012.20120611-3~ubuntu12.04.1) ...
Setting up latex-xcolor (2.11-1) ...
mktexlsr: Updating /usr/local/share/texmf/ls-R...
mktexlsr: Updating /var/lib/texmf/ls-R-TEXLIVEMAIN...
mktexlsr: Updating /var/lib/texmf/ls-R-TEXLIVEDIST...
mktexlsr: Updating /var/lib/texmf/ls-R-TEXMFMAIN...
mktexlsr: Updating /var/lib/texmf/ls-R...
mktexlsr: Done.
Setting up pgf (2.10-1) ...
Warning: Old configuration style found in /etc/texmf/updmap.d
Warning: For now these files have been included,
Warning: but expect inconsistencies.
Warning: These packages should be rebuild with tex-common.
Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz
Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg
Processing triggers for tex-common ...
Running mktexlsr. This may take some time... done.
Setting up latex-beamer (3.10-1) ...
mktexlsr: Updating /usr/local/share/texmf/ls-R...
mktexlsr: Updating /var/lib/texmf/ls-R-TEXLIVEMAIN...
mktexlsr: Updating /var/lib/texmf/ls-R-TEXLIVEDIST...
mktexlsr: Updating /var/lib/texmf/ls-R-TEXMFMAIN...
mktexlsr: Updating /var/lib/texmf/ls-R...
mktexlsr: Done.
Processing triggers for libc-bin ...
ldconfig deferred processing now taking place

What exactly is 10lmodern.cfg good for? How can I prevent these warnings? Here is the output of sudo update-updmap:

$ sudo update-updmap
Regenerating '/var/lib/texmf/updmap.cfg-DEBIAN'...
Warning: Old configuration style found in /etc/texmf/updmap.d
Warning: For now these files have been included,
Warning: but expect inconsistencies.
Warning: These packages should be rebuild with tex-common.
Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz
Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg
done.
Regenerating '/var/lib/texmf/updmap.cfg-TEXLIVEDIST'...
done.
update-updmap has updated the following file(s):
    /var/lib/texmf/updmap.cfg-DEBIAN
    /var/lib/texmf/updmap.cfg-TEXLIVEDIST
If you want to enable the map files with this new file, you should run updmap-sys or updmap.
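From /usr/share/doc/tex-common/NEWS.Debian.gz, my understanding is that files under /etc/texmf/updmap.d use the old, pre-2012 configuration style and are only tolerated for backwards compatibility; as far as I can tell, 10lmodern.cfg merely registers the Latin Modern font maps with updmap. The sketch below is what I am tempted to try in order to silence the warnings. It assumes that 10lmodern.cfg really is an obsolete leftover from the previous lmodern package and that the backported package registers its maps elsewhere (I have not verified this):

# Assumption: /etc/texmf/updmap.d/10lmodern.cfg is an obsolete leftover.
# Back it up instead of deleting it outright ...
sudo mkdir -p /root/updmap.d-backup
sudo mv /etc/texmf/updmap.d/10lmodern.cfg /root/updmap.d-backup/
# ... then regenerate the updmap configuration and the font maps.
sudo update-updmap
sudo updmap-sys

Is that the right way to do it, or is there a cleaner fix?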
