Search Results

Search found 59036 results on 2362 pages for 'fake data'.


  • Aggregating, restructuring hourly time series data in R

    - by Advait Godbole
    I have a year's worth of hourly data in a data frame in R:

        > str(df.MHwind_load)   # compactly displays structure of data frame
        'data.frame': 8760 obs. of 6 variables:
         $ Date         : Factor w/ 365 levels "2010-04-01","2010-04-02",..: 1 1 1 1 1 1 1 1 1 1 ...
         $ Time..HRs.   : int 1 2 3 4 5 6 7 8 9 10 ...
         $ Hour.of.Year : int 1 2 3 4 5 6 7 8 9 10 ...
         $ Wind.MW      : int 375 492 483 476 486 512 421 396 456 453 ...
         $ MSEDCL.Demand: int 13293 13140 12806 12891 13113 13802 14186 14104 14117 14462 ...
         $ Net.Load     : int 12918 12648 12323 12415 12627 13290 13765 13708 13661 14009 ...

    While preserving the hourly structure, I would like to know how to extract:

    - a particular month or group of months
    - the first day/first week etc. of each month
    - all Mondays, all Tuesdays etc. of the year

    I have tried using "cut" without result and, after looking online, think that "lubridate" might be able to do this, but I haven't found suitable examples. I'd greatly appreciate help on this issue.
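
    A minimal sketch of the kind of lubridate-based indexing being asked about, assuming the Date factor holds "YYYY-MM-DD" strings as in the str() output above (the variable names for the results are placeholders, not from the original):

        library(lubridate)

        # convert the Date factor to real dates
        df.MHwind_load$Date <- ymd(as.character(df.MHwind_load$Date))

        # a particular month (April) or a group of months
        apr <- subset(df.MHwind_load, month(Date) == 4)
        q2  <- subset(df.MHwind_load, month(Date) %in% 4:6)

        # the first day of each month
        first_days <- subset(df.MHwind_load, mday(Date) == 1)

        # all Mondays of the year (wday(): 1 = Sunday, 2 = Monday, ...)
        mondays <- subset(df.MHwind_load, wday(Date) == 2)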

  • Scripts to parse and download iTunes Connect and AppStore data

    - by bradhouse
    I'm looking for recommendations of a script or series of scripts that download and parse iTunes Connect sales data and App Store comments, ratings and rankings data for a defined app. I'm also aware of solutions like:

    - AppViz
    - appsales-mobile
    - iphone-stats
    - Heartbeat.app

    I'm sure I'll find a few more with more searching. I can't help but feel there must be a really decent set of open source scripts out there to do this, given how many developers are now writing apps for the App Store. I would be interested to hear about commercial offerings as well (although my personal preference is for open source, so I can at least see what it is doing with my iTunes Connect login credentials). To be clear, I'm really looking for something that hits all of the areas mentioned:

    - App Store (per store): comments, ratings, category/store rankings
    - iTunes Connect: the contents of the sales reports

    Analysis/graphs of the data are not necessary (but would be nice to have, I guess). I'm not really looking for something like AppSales Mobile above; I would like the raw data so I can do my own analysis and formatting. So far it looks like AppViz (listed above) is the best out there. Any suggestions on what is good/available, or should I just roll my own?

  • WPF tree data binding

    - by Am
    Hi, I have a well-defined tree repository, where I can rename items, move them up, down, etc., and add and delete. The data is stored in a table as follows:

        Index  Parent  Label    Left  Right
        1      0       root     1     14
        2      1       food     2     7
        3      2       cake     3     4
        4      2       pie      5     6
        5      1       flowers  8     13
        6      5       roses    9     10
        7      5       violets  11    12

    representing the following tree:

        (1) root (14)
            (2) food (7)
                (3) cake (4)
                (5) pie (6)
            (8) flowers (13)
                (9) roses (10)
                (11) violets (12)

    or

        root
            food
                cake
                pie
            flowers
                roses
                violets

    Now, my problem is how to represent this in a bindable way, so that a TreeView can handle all the possible data changes. Renaming is easy: all I need is to make the label an updatable field. But what if a user moves flowers above food? I can make the relevant data changes, but they cause a complete data change to all other items in the tree. And all the examples I found of bindable hierarchies are good for non-static trees. So my current (and bad) solution is to reload the displayed tree after a relocation change. Any direction will be good. Thanks

  • SQL Server Multi-statement UDF - way to store data temporarily required

    - by Kharlos Dominguez
    Hello, I have a relatively complex query, with several self joins, which works on a rather large table. For that query to perform faster, I need to work with only a subset of the data. Said subset can range between 12,000 and 120,000 rows depending on the parameters passed. More details can be found here: http://stackoverflow.com/questions/3054843/sql-server-cte-referred-in-self-joins-slow

    As you can see, I was using a CTE to return the data subset before, which caused some performance problems: SQL Server was re-running the SELECT statement in the CTE for every join instead of running it once and reusing its data set. The alternative, using temporary tables, worked much faster (while testing the query in a separate window outside the UDF body). However, when I tried to implement this in a multi-statement UDF, I was harshly reminded by SQL Server that multi-statement UDFs do not support temporary tables.

    UDFs do allow table variables, however, so I tried that, but the performance is absolutely horrible: it takes 1m40 for my query to complete, whereas the CTE version only took 40 seconds. I believe the table variable is slow for the reasons listed in this thread: http://stackoverflow.com/questions/1643687/table-variable-poor-performance-on-insert-in-sql-server-stored-procedure

    The temporary table version takes around 1 second, but I can't make it into a function due to the SQL Server restrictions, and I have to return a table back to the caller. Considering that the CTE and table variables are both too slow, and that temporary tables are rejected in UDFs, what are my options for making this perform quickly? Thanks a lot in advance.
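
    For reference, a minimal sketch of the stored-procedure-plus-temp-table shape being weighed against the UDF restriction; the table and column names here are placeholders, not from the original query:

        -- Sketch only: a procedure may use #temp tables where a multi-statement UDF may not.
        CREATE PROCEDURE dbo.GetWorkingSubset
            @FromDate date,
            @ToDate   date
        AS
        BEGIN
            SET NOCOUNT ON;

            SELECT Id, ParentId, Amount
            INTO #Subset
            FROM dbo.BigTable
            WHERE EventDate BETWEEN @FromDate AND @ToDate;

            CREATE CLUSTERED INDEX IX_Subset_Id ON #Subset (Id);

            -- the self-joining query runs against the indexed temp table
            SELECT a.Id, b.Amount
            FROM #Subset a
            JOIN #Subset b ON b.ParentId = a.Id;
        END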

  • Sending jQuery.ajax data simultaneous to a form submit

    - by dscher
    I have a bit of a conundrum. I have a form which has numerous fields. There is one field for links where you enter a link, click an add button, and the link (using jQuery) gets added to a link_array. I want this array to be sent via the jQuery.ajax method when the form is submitted. If I send the link_array using $.ajax like this:

        $.ajax({
            type: "POST",
            url: "add_stock",
            dataType: "json",
            data: { "links": link_array }
        });

    when the add link button is selected, the data goes no problem to the correct place and gets put in the db correctly. If I bind the above function to the submit form button using $("#stock_form").submit(...), then the rest of the form data is sent but not the link_array. I can obviously pass the link array back into a hidden field in HTML, but then I'd have to pack the array into comma-separated values and break the comma-separated string apart in PHP. It just seems 100X easier to unpack the Javascript array in PHP without any other fuss. So, how is it that you can send an array from javascript using $.ajax concurrent to the rest of the $_POST data in HTML? Please note that I'm using the Kohana 3.0 framework, but really that shouldn't make a difference; what I want to do is add this js array to the $_POST array that is already going. Thanks!
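
    A minimal sketch of one way to send the whole form plus the array in a single POST; the serialize()/$.param() combination is an assumption about the asker's setup, not something tested against Kohana:

        $("#stock_form").submit(function (e) {
            e.preventDefault();
            // serialize the normal fields, then append the links array;
            // PHP should see it as $_POST['links'] (an array)
            var payload = $(this).serialize() + "&" + $.param({ links: link_array });
            $.ajax({
                type: "POST",
                url: "add_stock",
                dataType: "json",
                data: payload
            });
        });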

  • modifying ajax returned data before showing

    - by Nick
    I'm processing a subscription form with jQuery/ajax and need to display results with the success function (they are different depending on whether the email exists in the database). But the trick is I do not need the h2 and the first "p" tag. How can I show only div#alertmsg and the second "p" tag? I've tried removing unnecessary elements with the method described here, but it didn't work. Thanks in advance. Here is my code:

        var dataString = $('#subscribe').serialize();
        $.ajax({
            type: "POST",
            url: "/templates/willrock/pommo/user/process.php",
            data: dataString,
            success: function(data){
                var result = $(data);
                $('#success').append(result);
            }
        });

    Here is the data returned:

        <h2>Subscription Review</h2>
        <p><a href="url" onClick="history.back(); return false;"><img src="/templates/willrock/pommo/themes/shared/images/icons/back.png" alt="back icon" class="navimage" /> Back to Subscription Form</a></p>
        <div id="alertmsg" class="error">
          <ul>
            <li>Email address already exists. Duplicates are not allowed.</li>
          </ul>
        </div>
        <p><a href="login.php">Update your records</a></p>
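
    A minimal sketch of filtering the returned fragment before appending, assuming the markup shown above and jQuery 1.4+ (for .last()):

        $.ajax({
            type: "POST",
            url: "/templates/willrock/pommo/user/process.php",
            data: dataString,
            success: function (data) {
                var result = $(data);
                // keep only the alert box and the last paragraph; drop the h2 and back link
                $('#success')
                    .append(result.filter('#alertmsg'))
                    .append(result.filter('p').last());
            }
        });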

  • How to share data between SSRS Security and Data Processing extension?

    - by user2904681
    I've spent a lot of time trying to solve the issue in the title and have not found a solution yet. I use MS SSRS 2012 with custom Security (based on Forms Authentication and ClaimsPrincipal) and Data Processing extensions. At the Data extension level I need to apply a filter programmatically, based on a claim which I can only access at the Security extension level. Here is the problem: I do not know how to pass the claim from the Security extension to the Data Processing extension code. What I've tried:

        IAuthenticationExtension.LogonUser(string userName, string password, string authority)
        {
            ...
            ClaimsPrincipal claimsPrincipal = CreateClaimsPrincipal(...);
            Thread.CurrentPrincipal = claimsPrincipal;
            HttpContext.Current.User = claimsPrincipal;
            ...
        };

    But it doesn't work. It seems SSRS overrides it internally with either GenericPrincipal or FormsIdentity. The possible workarounds I'm thinking about (but haven't checked yet):

    1. Create an HttpModule which will create an HttpContext with all the required information (minus: getting the claims would be invoked on every request - a huge operation)
    2. Write the logged-in users' information required by the Data extension to a custom SQL table, and then read it back
    3. Try somehow to append the claim to cookies during LogOn, then read it each time in IAuthenticationExtension.GetUserInfo and fill the HttpContext

    None of these seems to be a good solution. I would be grateful for any help/advice/comments.

  • POSTing JSON data to WCF REST

    - by Randall Sexton
    I'm trying to send data from a client application using jQuery to a REST WCF service based on the WCF REST Starter Kit. Here's what I have so far.

    Service definition:

        [WebHelp(Comment = "Save PropertyValues to the database")]
        [WebInvoke(Method = "POST",
                   UriTemplate = "PropertyValues_Save",
                   BodyStyle = WebMessageBodyStyle.WrappedRequest,
                   RequestFormat = WebMessageFormat.Json,
                   ResponseFormat = WebMessageFormat.Json)]
        [OperationContract]
        public bool PropertyValues_Save(Guid assetId, Dictionary<Guid, string> newValues)
        {
            ...
        }

    Call from the client:

        $.ajax({
            url: SVC_PROPERTYVALUES_SAVE,
            type: "POST",
            contentType: "application/json; charset=utf-8",
            data: jsonData,
            dataType: "json",
            error: function(XMLHttpRequest, textStatus, errorThrown) {
                alert(textStatus + ' ' + errorThrown);
            },
            success: function(data) {
                if (data) {
                    alert('Values saved');
                    $("#confirmSubmit").dialog('close');
                } else {
                    alert('Values failed to save');
                    $("#confirmSubmit").dialog('close');
                }
            }
        });

    Example of the JSON being passed:

        {
            "assetId": "d70714c3-e403-4cc5-b8a9-9713d05b2ee0",
            "newValues": [
                { "key": "bd01aa88-b48d-47c7-8d3f-eadf47a46680", "value": "0e9fdf34-2d12-4639-8d70-19b88e753ab1" },
                { "key": "06e8eda2-a004-450e-90ab-64df357013cf", "value": "1d490aec-f40e-47d5-865c-07fe9624f955" }
            ]
        }

    I'm using Windows Authentication on the virtual directory. When I call operations that are GETs, everything is fine. This code is prompting the browser to log in; when I enter my credentials, I simply get an alert in my browser which says "error undefined". Even if you can't help with my specific error, do you see anything that looks wrong at a glance? I've been beating my head on this nearly all day. Thanks in advance.

  • Collecting high-volume video viewing data

    - by DanK
    I want to add tracking to our Flash-based media player so that we can provide analytics showing which sections of videos are being watched (at the moment, we just register a view when a video starts playing). For example, if a viewer watches the first 30 seconds of a video and then clicks away to something else, we want the data to reflect that. Likewise, if someone watches the first 10 seconds, then scrubs the timeline to the last minute of the video and watches that, we want to register viewing on the parts watched and not the middle section.

    My first thought was to collect up the viewing data in the player and send it all to the server at the end of a viewing session. Unfortunately, Flash does not seem to have an event that you can hook into when a viewer clicks away from the page the movie is on (probably a good thing - it would be open to abuse). So, it looks like we're going to have to make regular requests to the server as the video is playing. This is obviously going to lead to a high volume of requests when there are large numbers of simultaneous viewers.

    The simple approach of dumping all these 'heartbeat' events from clients into a database feels like it will quickly become unmanageable, so I'm wondering whether I should be taking an approach where viewing sessions are cached in memory and flushed to the database when they become inactive (based on a timeout). That way, the data could be stored as time spans rather than individual heartbeats.

    So, to the question - what is the best way to approach dealing with this kind of high-volume viewing data? Are there any good existing architectures/patterns?

    Thanks, Dan.
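
    The "time spans rather than individual heartbeats" idea can be sketched concretely. A minimal, hypothetical server-side shape (class name, ping interval and flush policy are all assumptions, not part of the original design):

        import java.util.ArrayList;
        import java.util.List;

        // Hypothetical per-viewer/per-video session: player heartbeats are merged into
        // watched time spans in memory; a separate sweep flushes a session to the
        // database once it has been quiet for longer than the timeout.
        public class ViewingSession {
            private static final int HEARTBEAT_INTERVAL_SEC = 5;   // assumed client ping rate

            private final List<int[]> spans = new ArrayList<int[]>();   // {startSec, endSec}
            private long lastHeartbeatMillis;

            public void addHeartbeat(int positionSec, long nowMillis) {
                lastHeartbeatMillis = nowMillis;
                if (!spans.isEmpty()) {
                    int[] last = spans.get(spans.size() - 1);
                    // contiguous playback: extend the current span
                    if (positionSec >= last[1] && positionSec - last[1] <= HEARTBEAT_INTERVAL_SEC + 1) {
                        last[1] = positionSec;
                        return;
                    }
                }
                // first heartbeat, or the viewer scrubbed: start a new span
                spans.add(new int[] { positionSec, positionSec });
            }

            public boolean isInactive(long nowMillis, long timeoutMillis) {
                return nowMillis - lastHeartbeatMillis > timeoutMillis;
            }

            public List<int[]> watchedSpans() {
                return spans;
            }
        }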

  • In what order does DOJO handle ajax requests?

    - by mm2887
    Hi, using a form in a dialog box, I am using Dojo in a JSP to save the form to my database. After that request is completed using dojo.xhrPost(), I send another request to update a dropdown box that should include the form object that was just saved. But for some reason the request to update the dropdown is executed before the form is saved in the database, even though the form save is called first. Using Firebug I can see that the getLocations() request is completed before the sendForm() request. This is the code:

        Add Team
        sendForm("ajaxResult1", "addTeamForm");
        dijit.byId("addTeamDialog").hide();
        getLocations("locationsTeam");

        function sendForm(target, form) {
            var targetNode = dojo.byId(target);
            var xhrArgs = {
                form: form,
                url: "ajax",
                handleAs: "text",
                load: function (data) {
                    targetNode.innerHTML = data;
                },
                error: function (error) {
                    targetNode.innerHTML = "An unexpected error occurred: " + error;
                }
            }
            // Call the asynchronous xhrPost
            var deferred = dojo.xhrPost(xhrArgs);
        }

        function getLocations(id) {
            var targetNode = dojo.byId(id);
            var xhrArgs = {
                url: "ajax",
                handleAs: "text",
                content: {
                    location: "yes"
                },
                load: function (data) {
                    targetNode.innerHTML = data;
                },
                error: function (error) {
                    targetNode.innerHTML = "An unexpected error occurred: " + error;
                }
            }
            // Call the asynchronous xhrGet
            var deferred = dojo.xhrGet(xhrArgs);
        }

    Why is this happening? Is there a way to make the first request complete before the second executes? To narrow down the possibilities I tried setting the cache property in xhrGet to false, but the result is still the same. Please help!
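
    dojo.xhrPost is asynchronous, so the second request fires without waiting for the first. A minimal sketch of chaining the follow-up off the returned Deferred, reusing the same functions as above (the wrapper name and callback shape are assumptions):

        function sendFormThen(target, form, next) {
            var targetNode = dojo.byId(target);
            var deferred = dojo.xhrPost({
                form: form,
                url: "ajax",
                handleAs: "text",
                load: function (data) { targetNode.innerHTML = data; },
                error: function (error) { targetNode.innerHTML = "An unexpected error occurred: " + error; }
            });
            // run the follow-up only after the save round-trip has finished
            deferred.addCallback(function () { next(); });
        }

        // usage
        sendFormThen("ajaxResult1", "addTeamForm", function () {
            dijit.byId("addTeamDialog").hide();
            getLocations("locationsTeam");
        });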

  • Why put a DAO layer over a persistence layer (like JDO or Hibernate)

    - by Todd Owen
    Data Access Objects (DAOs) are a common design pattern, and recommended by Sun. But the earliest examples of Java DAOs interacted directly with relational databases -- they were, in essence, doing object-relational mapping (ORM). Nowadays, I see DAOs on top of mature ORM frameworks like JDO and Hibernate, and I wonder if that is really a good idea. I am developing a web service using JDO as the persistence layer, and am considering whether or not to introduce DAOs. I foresee a problem when dealing with a particular class which contains a map of other objects:

        public class Book {
            // Book description in various languages, indexed by ISO language codes
            private Map<String, BookDescription> descriptions;
        }

    JDO is clever enough to map this to a foreign key constraint between the "BOOKS" and "BOOKDESCRIPTIONS" tables. It transparently loads the BookDescription objects (using lazy loading, I believe), and persists them when the Book object is persisted. If I was to introduce a "data access layer" and write a class like BookDao, and encapsulate all the JDO code within this, then wouldn't JDO's transparent loading of the child objects be circumventing the data access layer? For consistency, shouldn't all the BookDescription objects be loaded and persisted via some BookDescriptionDao object (or a BookDao.loadDescription method)? Yet refactoring in that way would make manipulating the model needlessly complicated.

    So my question is, what's wrong with calling JDO (or Hibernate, or whatever ORM you fancy) directly in the business layer? Its syntax is already quite concise, and it is datastore-agnostic. What is the advantage, if any, of encapsulating it in Data Access Objects?

  • Non distinct Unique ID in MySQL database table.

    - by Geoff
    First off, a simplified version: I am wondering if I can create a trigger to fire during INSERT (it's actually LOAD DATA INFILE) and NOT enter records for an RMA that is already in my table. I have a table with no unique records. Some may be duplicates, but there is one field I can use to know whether the data has already been entered. For instance:

        RMA   Op      Days
        213   Repair  0.10
        213   Test    0.20
        213   Repair  0.10

    So I could put an index on the three columns together, but as you can see it's possible for an RMA to be in a step for the same amount of time twice, so it's possible to have duplicate records. This data comes from a report that I cannot edit, and this is all it provides. The key is that an RMA's data appears in the report only once, so if my database already has that RMA in its records I want to skip loading that RMA's records from the report. By all means let me know if that didn't make sense and I'll explain as needed. I'm sure it's not uncommon, but I couldn't find anything on the net.
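
    One common alternative to a trigger, sketched with assumed table and column names (rma_steps, rma, op, days are placeholders): load the report into a staging table first, then copy over only the RMAs that are not already present.

        -- Sketch only; names are assumptions, not from the original schema.
        CREATE TEMPORARY TABLE rma_staging LIKE rma_steps;

        LOAD DATA INFILE '/path/to/report.csv'
        INTO TABLE rma_staging
        FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n';

        INSERT INTO rma_steps (rma, op, days)
        SELECT s.rma, s.op, s.days
        FROM rma_staging s
        WHERE NOT EXISTS (SELECT 1 FROM rma_steps t WHERE t.rma = s.rma);

        DROP TEMPORARY TABLE rma_staging;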

  • Trying to return data asynchronously using jQuery AND jSon in MVC 2.0

    - by Calibre2010
    Hi, I am trying to use jQuery's getJSON method to return data from the server to my browser. The URL the getJSON method points to does get reached, but the result does not get returned to the browser for some odd reason. I wasn't sure if it was because I was using MVC 2.0 and jQuery 1.4.1, and it differs from the MVC 1.0 / jQuery 1.3.2 combination. These are the code sections.

    Controller:

        public JsonResult StringReturn()
        {
            NameDTO myName = new NameDTO();
            myName.nameID = 1;
            myName.name = "James";
            myName.nameDescription = "Jmaes";
            return Json(myName);
        }

    View with jQuery:

        <script type="text/javascript">
            $(document).ready(function () {
                $("#myButton").click(function () {
                    $.getJSON("Home/StringReturn/", null, function (data) {
                        alert(data.name);
                        $("#show").append($("<div>" + data.name + "</div>"));
                    });
                });
            });
        </script>

    HTML:

        <input type="button" value="clickMe" id="myButton"/>
        <div id="show">d</div>
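
    One hedged guess worth checking: ASP.NET MVC 2 blocks JSON responses to GET requests by default, so a $.getJSON call against a plain return Json(...) fails even though the action is reached. A sketch of the usual opt-in, assuming a GET is really what is wanted here:

        public JsonResult StringReturn()
        {
            NameDTO myName = new NameDTO();
            myName.nameID = 1;
            myName.name = "James";
            myName.nameDescription = "Jmaes";

            // MVC 2 requires an explicit opt-in before it will serve JSON on a GET
            return Json(myName, JsonRequestBehavior.AllowGet);
        }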

  • Best strategy for moving data between physical tiers in ASP.net

    - by Pete Lunenfeld
    Building a new ASP.net application, and planning to separate the DB, 'service' tier and Web/UI tier into separate physical layers. What is the best/easiest strategy to move serialized objects between the service tier and the UI tier?

    I was considering serializing POCOs into JSON, using simple ASP.net pages to serve the middle tier - meaning that the UI/Web tier will request data from a (hidden to the outside user) web server that will return a JSON string. This kind of JSON 'emitter' seems easily testable. It also seems easily compressible for efficiently moving data over the WAN between tiers.

    I know that some folks use .asmx web services for this kind of task, but this seems like there is excess overhead with SOAP, and the package is not as human-readable (testable) as POCOs serialized as JSON. Others are using more complex technology like WCF, which we have never used.

    Does anyone have advice for choosing a method for moving data/objects between the data (db) tier and the web (UI) tier over the WAN using .net technologies? Thanks!!!

  • check if a tree is a binary search tree

    - by TimeToCodeTheRoad
    I have written the following code to check if a tree is a binary search tree. Please help me check the code:

        class Pair {
            boolean isTrue;
            int min;
            int max;
        }

        public boolean isBst(BNode v) {
            return isBST1(v).isTrue;
        }

        public Pair isBST1(BNode v) {
            if (v == null)
                return new Pair(true, Integer.MIN_VALUE, Integer.MAX_VALUE);
            if (v.left == null && v.right == null)
                return new Pair(true, v.data, v.data);

            Pair pLeft = isBST1(v.left);
            Pair pRight = isBST1(v.right);
            boolean check = pLeft.max < v.data && v.data <= pRight.min;

            Pair p = new Pair();
            p.isTrue = check && pLeft.isTrue && pRight.isTrue;
            p.min = pLeft.min;
            p.max = pRight.max;
            return p;
        }

    Note: this function checks if a tree is a binary search tree.

  • When should I be cautious using data binding in .NET?

    - by Ben McCormack
    I just started working on a small team of .NET programmers about a month ago and recently got into a discussion with our team lead regarding why we don't use data binding at all in our code. Every time we work with a data grid, we iterate through a data table and populate the grid row by row; the code usually looks something like this:

        Dim dt As DataTable = FuncLib.GetData("spGetTheData ...")
        Dim i As Integer
        For i = 0 To dt.Rows.Length - 1
            '(not sure why we do not use a For Each here)'
            gridRow = grid.Rows.Add()
            gridRow(constantProductID).Value = dt("ProductID").Value
            gridRow(constantProductDesc).Value = dt("ProductDescription").Value
        Next
        '(I am probably missing something in the code, but that is basically it)'

    Our team lead was saying that he got burned using data binding when working with Sheridan grid controls, VB6, and ADO recordsets back in the nineties. He's not sure what the exact problem was, but he remembers that binding didn't work as expected and caused him some major problems. Since then, they haven't trusted data binding and load the data for all their controls by hand.

    The reason the conversation even came up was because I found data binding to be very simple and really liked separating the data presentation (in this case, the data grid) from the in-memory data source (in this case, the data table). "Loading" the data row by row into the grid seemed to break this distinction. I also observed that with the advent of XAML in WPF and Silverlight, data binding seems like a must-have in order to cleanly wire up a designer's XAML code with your data. When should I be cautious of using data binding in .NET?

  • Cocoa Drag and Drop, reading back the data

    - by kodai
    OK, I have an NSOutlineView set up, and I want it to capture PDFs if a PDF is dragged into the NSOutlineView. My first question: I have the following code:

        [outlineView registerForDraggedTypes:[NSArray arrayWithObjects:NSStringPboardType, NSFilenamesPboardType, nil]];

    In all the Apple docs and examples I've seen, I've also seen something like MySupportedType being registered as a type for dragging. What does this mean? Do I change the code to be:

        [outlineView registerForDraggedTypes:[NSArray arrayWithObjects:@"pdf", NSStringPboardType, NSFilenamesPboardType, nil]];

    Currently I have it set up to recognize drag and drop, and I can even make it spit out the URL of the dragged file once the drag is accepted. However, this leads me to my second question. I want to keep a copy of those PDFs app-side. I suppose, and correct me if I'm wrong, that the best way to do this is to grab the data off the clipboard and save it in some persistent store (as opposed to using some sort of copy command and literally copying the file to the app directory). That being said, I'm not sure how to do that. I have the code:

        - (BOOL)outlineView:(NSOutlineView *)ov acceptDrop:(id <NSDraggingInfo>)info item:(id)item childIndex:(NSInteger)childIndex
        {
            NSPasteboard *pboard = [info draggingPasteboard];
            NSURL *fileURL;
            if ( [[pboard types] containsObject:NSURLPboardType] ) {
                fileURL = [NSURL URLFromPasteboard:pboard];
                // Perform operation using the file's URL
            }
            NSData *data = [pboard dataForType:@"NSPasteboardTypePDF"];

    But this never actually gets any data. Like I said before, it does get the URL, just not the data. Does anyone have any advice on how to get this going? Thanks so much!

  • mySQL not saving data?

    - by tony noriega
    I have a PHP contact form that submits data and sends an email:

        <?php
        $dbh = mysql_connect("localhost", "username", "password")
            or die('I cannot connect to the database because: ' . mysql_error());
        mysql_select_db("guest");

        if (isset($_POST['submit'])) {
            if (!$_POST['name'] | !$_POST['email']) {
                echo "<div class='error'>Error<br />Please provide your Name and Email Address so we may properly contact you.</div>";
            } else {
                $age = $_POST['age'];
                $name = $_POST['name'];
                $gender = $_POST['gender'];
                $email = $_POST['email'];
                $phone = $_POST['phone'];
                $comments = $_POST['comments'];

                $query = "INSERT INTO contact_us (age,name,gender,email,phone,comments)
                          VALUES ('$age','$name','$gender','$email','$phone','$comments')";
                mysql_query($query);
                mysql_close();

                $yoursite = "Mysite ";
                $youremail = $email;
                $subject = "Website Guest Contact Us Form";
                $message = "$name would like you to contact them
                Contact PH: $phone
                Email: $email
                Age: $age
                Gender: $gender
                Comments: $comments";
                $email2 = "[email protected]";
                mail($email2, $subject, $message, "From: $email");

                echo "<div class='thankyou'>Thank you for contacting us,<br /> we will respond as soon as we can.</div>";
            }
        }
        ?>

    The email is coming through fine, but the data is not being stored in the database... am I missing something? It's the same script as I use on another contact us page; the only difference is that instead of parsing the data on the same page, I now send this data to a "thankyou.php" page... I tried changing $_POST to $_GET but that killed the page... what am I doing wrong?
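
    One hedged debugging step worth trying: check whether the INSERT itself is failing silently, and escape the input while at it. A sketch, assuming the same connection and table as above:

        <?php
        // Sketch only: surface the MySQL error instead of ignoring it, and escape the input.
        $query = sprintf(
            "INSERT INTO contact_us (age, name, gender, email, phone, comments)
             VALUES ('%s','%s','%s','%s','%s','%s')",
            mysql_real_escape_string($age),
            mysql_real_escape_string($name),
            mysql_real_escape_string($gender),
            mysql_real_escape_string($email),
            mysql_real_escape_string($phone),
            mysql_real_escape_string($comments)
        );

        if (!mysql_query($query)) {
            die('Insert failed: ' . mysql_error());
        }
        ?>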

  • Trouble when changing pixel data with alpha on png on iphone --okay on simulator

    - by Ted
    I'm trying to change the color of the pixels (lighten or darken) without changing the value of the alpha channel on a PNG, using CGDataProviderCopyData. I leave every 4th data byte untouched. It works fine on the iPhone simulator; however, on the real thing the alpha goes white as I increase the values of the other pixels. I've tried changing just the first byte, or the second, or the third. Does anybody have any idea what is going on? The basic code is borrowed from Jorge. I like this simple approach -- I'm new to this. But I want to make it work with PNG images with some transparency. Here is most of the code by Jorge:

        CFDataRef CopyImagePixels(CGImageRef inImage){
            return CGDataProviderCopyData(CGImageGetDataProvider(inImage));
        }

        CGImageRef img=originalImage.CGImage;
        CFDataRef dataref=CopyImagePixels(img);
        UInt8 *data=(UInt8 *)CFDataGetBytePtr(dataref);
        int length=CFDataGetLength(dataref);
        for(int index=0;index<length;index+=4){
            for(int i=0;i<3;i++){
                if(data[index+i]+value>255){
                    data[index+i]=255;
                }else{
                    data[index+i]+=value;
                }
            }
        }
        size_t width=CGImageGetWidth(img);
        size_t height=CGImageGetHeight(img);
        size_t bitsPerComponent=CGImageGetBitsPerComponent(img);
        size_t bitsPerPixel=CGImageGetBitsPerPixel(img);
        size_t bytesPerRow=CGImageGetBytesPerRow(img);
        CGColorSpaceRef colorspace=CGImageGetColorSpace(img);
        CGBitmapInfo bitmapInfo=CGImageGetBitmapInfo(img);
        CGImageAlphaInfo alphaInfo = kCGBitmapAlphaInfoMask(img);
        NSLog(@"bitmapinfo: %d",bitmapInfo);
        CFDataRef newData=CFDataCreate(NULL,data,length);
        CGDataProviderRef provider=CGDataProviderCreateWithCFData(newData);
        CGImageRef newImg=CGImageCreate(width,height,bitsPerComponent,bitsPerPixel,bytesPerRow,colorspace,bitmapInfo,provider,NULL,true,kCGRenderingIntentDefault);
        [iv setImage:[UIImage imageWithCGImage:newImg]];
        CGImageRelease(newImg);
        CGDataProviderRelease(provider);

  • Collapsing data frame by selecing one row per group

    - by jkebinger
    I'm trying to collapse a data frame by removing all but one row from each group of rows with identical values in a particular column - in other words, the first row from each group. For example, I'd like to convert this

        > d = data.frame(x=c(1,1,2,4), y=c(10,11,12,13), z=c(20,19,18,17))
        > d
          x  y  z
        1 1 10 20
        2 1 11 19
        3 2 12 18
        4 4 13 17

    into this:

          x  y  z
        1 1 11 19
        2 2 12 18
        3 4 13 17

    I'm using aggregate to do this currently, but the performance is unacceptable with more data:

        > d.ordered = d[order(-d$y),]
        > aggregate(d.ordered, by=list(key=d.ordered$x), FUN=function(x){x[1]})

    I've tried split/unsplit with the same function argument as here, but unsplit complains about duplicate row numbers. Is rle a possibility? Is there an R idiom to convert rle's length vector into the indices of the rows that start each run, which I can then use to pluck those rows out of the data frame?
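
    A minimal sketch of the duplicated()-based idiom, which keeps the first row per value of x after the same ordering step and avoids calling a function per group (the result variable name is a placeholder):

        d.ordered <- d[order(-d$y), ]
        collapsed <- d.ordered[!duplicated(d.ordered$x), ]
        collapsed[order(collapsed$x), ]   # optional: restore ordering by x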

  • How to publish internal data to the internet - as simple as possible

    - by mlarsen
    We have a .NET two-tier application where a desktop program talks to a database. We support MS SQL Server 2000, 2005, 2008 and Oracle 9, 10 and 11. The application is sold, not as shrink-wrap, but pretty close. It is quite important for us that installation and configuration are as easy as possible, as installation instructions are usually supplied in written form to the customer's internal IT department. Our application is usually not seen as mission-critical by the IT department, so we need to keep their work down to a minimum.

    Now we are starting to get requests for a web application built on top of the same data. The web application will be hosted by us and delivered as a SaaS application. The challenge is how to move data back and forth between the web application and the customer's internal database. As I see it, we have some requirements:

    - We must be ready to handle the situation where the customer's database is not accessible from the DMZ. I guess the easiest solution is that all communication is initiated from inside the customer's LAN.
    - As little firewall configuration as possible. The best is if we can run without any special configuration as long as outgoing traffic from the customer's LAN is not blocked. If we need something changed in the firewall, we must be able to document that the change is secure.
    - It doesn't have to be real time. Moving data in batches every ten minutes or so is OK.
    - Data moves both ways, but not in the same tables, so we don't have to support merges.
    - It would be nice if we don't have to roll our own framework completely.

    Looking forward to hearing your suggestions.

  • L10N: Trusted test data for Locale Specific Sorting

    - by Chris Betti
    I'm working on an internationalized database application that supports multiple locales in a single instance. When international users sort data in the applications built on top of the database, the database theoretically sorts the data using a collation appropriate to the locale associated with the data the user is viewing. I'm trying to find sorted lists of words that meet two criteria:

    - the sorted order follows the collation rules for the locale
    - the words listed will allow me to exercise most/all of the specific collation rules for the locale

    I'm having trouble finding such trusted test data. Are such sort-testing datasets currently available, and if so, what/where are they?

    "words.en.txt" is an example text file containing American English text:

        Andrew
        Brian
        Chris
        Zachary

    I am planning on loading the list of words into my database in randomized order and checking whether sorting the list conforms to the original input. Because I am not fluent in any language other than English, I do not know how to create sample datasets like the following one in French (call it "words.fr.txt"):

        cote
        côte
        coté
        côté

    The French prefer diacritical marks to be ordered right to left. If you sorted that using code-point order, it would likely come out like this (which is an incorrect collation):

        cote
        coté
        côte
        côté

    Thank you for the help,
    Chris

  • Adding an ADO.NET Entity Data Model throws build errors

    - by user3726262
    I am using Visual Studio 2013 Express. I create a new project and then I add a database to that project. But when I add an ADO.NET Entity Framework model to that project and then run the program, I get the four build errors listed below. To try to remedy this myself, I added the namespaces 'System.Data.Entity' and 'System.Data.Entity.Design', but that didn't help. I also uninstalled and re-installed the NuGet package, and uninstalled and re-installed Visual Studio 2013 Express for Windows Desktop, but these measures didn't help the situation either.

    Please note that I used to use the Entity Data Model just fine. But it was around the time that I did a system restore on my computer, updated VS 2013 with an update offered on the start page, and signed up for MS Azure that I started running into the problem described above. Now I would think that uninstalling and reinstalling Visual Studio 2013, and then installing the NuGet package, would solve all problems. What am I missing here?

    The errors mentioned above are:

        Error 1  The type or namespace name 'Infrastructure' does not exist in the namespace 'System.Data.Entity' (are you missing an assembly reference?)  C:\Users\John\documents\visual studio 2013\Projects\Riches\Riches\RichesModel.Context.cs  14  30  DataLayer
        Error 2  The type or namespace name 'DbContext' could not be found (are you missing a using directive or an assembly reference?)  C:\Users\John\documents\visual studio 2013\Projects\Riches\Riches\RichesModel.Context.cs  16  52  DataLayer
        Error 3  The type or namespace name 'DbModelBuilder' could not be found (are you missing a using directive or an assembly reference?)  C:\Users\John\documents\visual studio 2013\Projects\Riches\Riches\RichesModel.Context.cs  23  49  DataLayer
        Error 4  The type or namespace name 'DbSet' could not be found (are you missing a using directive or an assembly reference?)  C:\Users\John\documents\visual studio 2013\Projects\Riches\Riches\RichesModel.Context.cs  28  16  DataLayer

    Thank you, and I realize that my last attempt at this question was rather rough-draftish,
    John

  • Building a control-flow graph from an AST with a visitor pattern using Java

    - by omegatai
    Hi guys, I'm trying to figure out how to implement my LEParserCfgVisitor class so as to build a control-flow graph from an abstract syntax tree already generated with JavaCC. I know there are tools that already exist, but I'm trying to do it in preparation for my compilers final. I know I need a data structure that keeps the graph in memory, and I want to be able to keep attributes like IN, OUT, GEN, KILL in each node so that I can do a control-flow analysis later on.

    My main problem is that I haven't figured out how to connect the different blocks together, so as to have the right edge between blocks depending on their nature: branch, loops, etc. In other words, I haven't found an explicit algorithm that could help me build my visitor. Here is my empty visitor; you can see it works on basic language expressions, like if, while and basic operations (+, -, x, ^, ...). A sketch of one possible wiring follows the code.

        public class LEParserCfgVisitor implements LEParserVisitor {
            public Object visit(SimpleNode node, Object data) { return data; }
            public Object visit(ASTProgram node, Object data) {
                data = node.childrenAccept(this, data);
                return data;
            }
            public Object visit(ASTBlock node, Object data) { }
            public Object visit(ASTStmt node, Object data) { }
            public Object visit(ASTAssignStmt node, Object data) { }
            public Object visit(ASTIOStmt node, Object data) { }
            public Object visit(ASTIfStmt node, Object data) { }
            public Object visit(ASTWhileStmt node, Object data) { }
            public Object visit(ASTExpr node, Object data) { }
            public Object visit(ASTAddExpr node, Object data) { }
            public Object visit(ASTFactExpr node, Object data) { }
            public Object visit(ASTMultExpr node, Object data) { }
            public Object visit(ASTPowerExpr node, Object data) { }
            public Object visit(ASTUnaryExpr node, Object data) { }
            public Object visit(ASTBasicExpr node, Object data) { }
            public Object visit(ASTFctExpr node, Object data) { }
            public Object visit(ASTRealValue node, Object data) { }
            public Object visit(ASTIntValue node, Object data) { }
            public Object visit(ASTIdentifier node, Object data) { }
        }

    Can anyone give me a hand? Thanks!
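
    A hedged sketch of the kind of node structure and if-statement wiring that is usually enough; the CfgNode class and the child-index layout are assumptions about the grammar, not part of the original code:

        import java.util.ArrayList;
        import java.util.HashSet;
        import java.util.List;
        import java.util.Set;

        // Hypothetical CFG node: one per basic block, with the dataflow sets attached.
        class CfgNode {
            final List<CfgNode> successors = new ArrayList<CfgNode>();
            final Set<String> gen = new HashSet<String>(), kill = new HashSet<String>();
            final Set<String> in = new HashSet<String>(), out = new HashSet<String>();

            void addEdge(CfgNode target) { successors.add(target); }
        }

        // Inside LEParserCfgVisitor, each visit method takes the current entry block as
        // data and returns the block control falls through to.  For an if-statement:
        public Object visit(ASTIfStmt node, Object data) {
            CfgNode entry = (CfgNode) data;            // block that ends with the condition
            CfgNode afterIf = new CfgNode();           // join point after the statement

            // assumption about the grammar: child 0 = condition, 1 = then-branch, 2 = else-branch
            CfgNode thenEntry = new CfgNode();
            entry.addEdge(thenEntry);
            CfgNode thenExit = (CfgNode) node.jjtGetChild(1).jjtAccept(this, thenEntry);
            thenExit.addEdge(afterIf);

            if (node.jjtGetNumChildren() > 2) {
                CfgNode elseEntry = new CfgNode();
                entry.addEdge(elseEntry);
                CfgNode elseExit = (CfgNode) node.jjtGetChild(2).jjtAccept(this, elseEntry);
                elseExit.addEdge(afterIf);
            } else {
                entry.addEdge(afterIf);                // no else branch: fall through
            }
            return afterIf;
        }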

  • Array data to display in table cells where col and row are given in array

    - by Saleem
    I have an array:

        Array (
            Array ( [id] => 1, [schedule_id] => 4, [subject] => Subject 1, [classroom] => 1, [time] => 08:00:00, [col] => 1, [row] => 1 ),
            Array ( [id] => 2, [schedule_id] => 4, [subject] => Subject 2, [classroom] => 1, [time] => 08:00:00, [col] => 2, [row] => 1 ),
            Array ( [id] => 3, [schedule_id] => 4, [subject] => Subject 3, [classroom] => 2, [time] => 09:00:00, [col] => 1, [row] => 2 ),
            Array ( [id] => 4, [schedule_id] => 4, [subject] => Subject 4, [classroom] => 2, [time] => 09:00:00, [col] => 2, [row] => 2 )
        )

    I want to display it in table format according to the col and row values:

        col 1, row 1                 col 2, row 1
        1st array data               2nd array data
        Subject, room, time          Subject, room, time
        1, 1, 08:00                  2, 1, 08:00

        col 1, row 2                 col 2, row 2
        3rd array data               4th array data
        Subject, room, time          Subject, room, time
        3, 2, 09:00                  4, 2, 09:00

    I am new to arrays and need your support to sort this table. Thanks.
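
    A minimal sketch of one way to do it: index the flat rows by [row][col], then walk the grid and emit one table cell per item (the $schedule variable name is a placeholder for the array shown above):

        <?php
        // Hypothetical helper: group by row and column, then print an HTML table.
        $grid = array();
        foreach ($schedule as $item) {
            $grid[$item['row']][$item['col']] = $item;
        }
        ksort($grid);

        echo "<table>";
        foreach ($grid as $row => $cols) {
            ksort($cols);
            echo "<tr>";
            foreach ($cols as $cell) {
                echo "<td>" . htmlspecialchars($cell['subject']) . ", "
                            . htmlspecialchars($cell['classroom']) . ", "
                            . htmlspecialchars($cell['time']) . "</td>";
            }
            echo "</tr>";
        }
        echo "</table>";
        ?>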
