Search Results

Search found 58499 results on 2340 pages for 'temporal data'.


  • How to share data between SSRS Security and Data Processing extension?

    - by user2904681
    I've spent a lot of time trying to solve the issue in the title and have not found a solution yet. I use MS SSRS 2012 with a custom Security extension (based on Forms Authentication and ClaimsPrincipal) and a custom Data Processing extension. At the Data extension level I need to apply a filter programmatically based on one of the claims, which I can access only at the Security extension level. Here is the problem: I do not know how to pass the claim from the Security extension to the Data Processing extension code... What I've tried:

        IAuthenticationExtension.LogonUser(string userName, string password, string authority)
        {
            ...
            ClaimsPrincipal claimsPrincipal = CreateClaimsPrincipal(...);
            Thread.CurrentPrincipal = claimsPrincipal;
            HttpContext.Current.User = claimsPrincipal;
            ...
        };

    But it doesn't work. It seems SSRS overrides it internally with either a GenericPrincipal or a FormsIdentity. The possible workarounds I'm thinking about (but haven't checked yet):

    1. Create an HttpModule which will create an HttpContext with all the required information (downside: the claims lookup would be invoked on every request, a huge operation).
    2. Write the logged-on users' information required by the Data extension to a custom SQL table, then read it back from there.
    3. Somehow append the claim to a cookie during LogOn, then read it on each IAuthenticationExtension.GetUserInfo call and fill HttpContext.

    None of them seems to be a good solution. I would be grateful for any help/advice/comments.
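
    A sketch of workaround 2 (C#, with hypothetical table, column and connection names; SSRS 2012 implies SQL Server 2008+, so MERGE is available): persist the claim at logon so the Data Processing extension can read it back by user name on each query.

        private static void SaveClaimForUser(string connectionString, string userName, string claimValue)
        {
            using (var conn = new System.Data.SqlClient.SqlConnection(connectionString))
            using (var cmd = new System.Data.SqlClient.SqlCommand(
                "MERGE LoggedOnUsers AS t " +
                "USING (SELECT @user AS UserName) AS s ON t.UserName = s.UserName " +
                "WHEN MATCHED THEN UPDATE SET FilterClaim = @claim " +
                "WHEN NOT MATCHED THEN INSERT (UserName, FilterClaim) VALUES (@user, @claim);",
                conn))
            {
                cmd.Parameters.AddWithValue("@user", userName);
                cmd.Parameters.AddWithValue("@claim", claimValue);
                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }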

    Read the article

  • In what order does Dojo handle Ajax requests?

    - by mm2887
    Hi, using a form in a dialog box, I am using Dojo in a JSP to save the form to my database. After that request is completed using dojo.xhrPost(), I send another request to update a dropdown box that should include the form object that was just saved, but for some reason the request to update the dropdown is executed before the form is saved in the database, even though the form save is called first. Using Firebug I can see that the getLocations() request completes before the sendForm() request. This is the code:

        Add Team
        sendForm("ajaxResult1", "addTeamForm");
        dijit.byId("addTeamDialog").hide();
        getLocations("locationsTeam");
        </script>

        function sendForm(target, form) {
            var targetNode = dojo.byId(target);
            var xhrArgs = {
                form: form,
                url: "ajax",
                handleAs: "text",
                load: function(data) {
                    targetNode.innerHTML = data;
                },
                error: function(error) {
                    targetNode.innerHTML = "An unexpected error occurred: " + error;
                }
            }
            // Call the asynchronous xhrPost
            var deferred = dojo.xhrPost(xhrArgs);
        }

        function getLocations(id) {
            var targetNode = dojo.byId(id);
            var xhrArgs = {
                url: "ajax",
                handleAs: "text",
                content: { location: "yes" },
                load: function(data) {
                    targetNode.innerHTML = data;
                },
                error: function(error) {
                    targetNode.innerHTML = "An unexpected error occurred: " + error;
                }
            }
            // Call the asynchronous xhrGet
            var deferred = dojo.xhrGet(xhrArgs);
        }

    Why is this happening? Is there a way to make the first request complete before the second executes? To narrow down the possible causes I tried setting the cache property in xhrGet to false, but the result is still the same. Please help!
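
    A minimal sketch of one fix (Dojo 1.x, keeping the original function signatures above): dojo.xhrPost returns a Deferred, so the dropdown refresh can be chained to run only after the save has completed.

        function sendFormThenRefresh(target, form, dropdownId) {
            var targetNode = dojo.byId(target);
            var deferred = dojo.xhrPost({
                form: form,
                url: "ajax",
                handleAs: "text",
                load: function(data) {
                    targetNode.innerHTML = data;
                },
                error: function(error) {
                    targetNode.innerHTML = "An unexpected error occurred: " + error;
                }
            });
            // Runs only after the POST has finished, so the saved form is in
            // the database before the dropdown is rebuilt.
            deferred.addCallback(function() {
                getLocations(dropdownId);
            });
        }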

    Read the article

  • POSTing JSON data to WCF REST

    - by Randall Sexton
    I'm trying to send data from a client application using jQuery to a REST WCF service based on the WCF REST Starter Kit. Here's what I have so far.

    Service definition:

        [WebHelp(Comment = "Save PropertyValues to the database")]
        [WebInvoke(Method = "POST", UriTemplate = "PropertyValues_Save",
            BodyStyle = WebMessageBodyStyle.WrappedRequest,
            RequestFormat = WebMessageFormat.Json,
            ResponseFormat = WebMessageFormat.Json)]
        [OperationContract]
        public bool PropertyValues_Save(Guid assetId, Dictionary<Guid, string> newValues) { ... }

    Call from the client:

        $.ajax({
            url: SVC_PROPERTYVALUES_SAVE,
            type: "POST",
            contentType: "application/json; charset=utf-8",
            data: jsonData,
            dataType: "json",
            error: function(XMLHttpRequest, textStatus, errorThrown) {
                alert(textStatus + ' ' + errorThrown);
            },
            success: function(data) {
                if (data) {
                    alert('Values saved');
                    $("#confirmSubmit").dialog('close');
                } else {
                    alert('Values failed to save');
                    $("#confirmSubmit").dialog('close');
                }
            }
        });

    Example of the JSON being passed:

        {
            "assetId": "d70714c3-e403-4cc5-b8a9-9713d05b2ee0",
            "newValues": [
                { "key": "bd01aa88-b48d-47c7-8d3f-eadf47a46680", "value": "0e9fdf34-2d12-4639-8d70-19b88e753ab1" },
                { "key": "06e8eda2-a004-450e-90ab-64df357013cf", "value": "1d490aec-f40e-47d5-865c-07fe9624f955" }
            ]
        }

    I'm using Windows Authentication on the virtual directory. When I call operations that are GETs, everything is fine. This code prompts the browser to log in; when I enter my credentials, I simply get an alert in my browser that says "error undefined". Even if you can't pin down my specific error, does anything look wrong at a glance? I've been beating my head on this nearly all day. Thanks in advance.
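
    One detail worth double-checking (an assumption on my part, not something from the article): WCF's DataContractJsonSerializer normally expects a Dictionary<Guid, string> as an array of pairs with capitalized member names, so the payload may need to look like this:

        {
            "assetId": "d70714c3-e403-4cc5-b8a9-9713d05b2ee0",
            "newValues": [
                { "Key": "bd01aa88-b48d-47c7-8d3f-eadf47a46680", "Value": "0e9fdf34-2d12-4639-8d70-19b88e753ab1" },
                { "Key": "06e8eda2-a004-450e-90ab-64df357013cf", "Value": "1d490aec-f40e-47d5-865c-07fe9624f955" }
            ]
        }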

    Read the article

  • Why put a DAO layer over a persistence layer (like JDO or Hibernate)

    - by Todd Owen
    Data Access Objects (DAOs) are a common design pattern, and recommended by Sun. But the earliest examples of Java DAOs interacted directly with relational databases -- they were, in essence, doing object-relational mapping (ORM). Nowadays, I see DAOs on top of mature ORM frameworks like JDO and Hibernate, and I wonder if that is really a good idea. I am developing a web service using JDO as the persistence layer, and am considering whether or not to introduce DAOs. I foresee a problem when dealing with a particular class which contains a map of other objects:

        public class Book {
            // Book description in various languages, indexed by ISO language codes
            private Map<String, BookDescription> descriptions;
        }

    JDO is clever enough to map this to a foreign key constraint between the "BOOKS" and "BOOKDESCRIPTIONS" tables. It transparently loads the BookDescription objects (using lazy loading, I believe), and persists them when the Book object is persisted. If I were to introduce a "data access layer" and write a class like BookDao, encapsulating all the JDO code within it, then wouldn't JDO's transparent loading of the child objects circumvent the data access layer? For consistency, shouldn't all the BookDescription objects be loaded and persisted via some BookDescriptionDao object (or a BookDao.loadDescription method)? Yet refactoring that way would make manipulating the model needlessly complicated. So my question is: what's wrong with calling JDO (or Hibernate, or whatever ORM you fancy) directly in the business layer? Its syntax is already quite concise, and it is datastore-agnostic. What is the advantage, if any, of encapsulating it in Data Access Objects?
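
    For reference, a minimal sketch of the kind of DAO under discussion (hypothetical class, standard JDO API, ignoring detachment concerns for brevity):

        import javax.jdo.PersistenceManager;
        import javax.jdo.PersistenceManagerFactory;

        public class BookDao {
            private final PersistenceManagerFactory pmf;

            public BookDao(PersistenceManagerFactory pmf) {
                this.pmf = pmf;
            }

            // Hides the JDO lookup behind the DAO boundary; the returned Book's
            // descriptions map is still populated by JDO, not by this class,
            // which is exactly the circumvention the question describes.
            public Book findById(Object id) {
                PersistenceManager pm = pmf.getPersistenceManager();
                try {
                    return pm.getObjectById(Book.class, id);
                } finally {
                    pm.close();
                }
            }
        }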

    Read the article

  • Collecting high-volume video viewing data

    - by DanK
    I want to add tracking to our Flash-based media player so that we can provide analytics that show what sections of videos are being watched (at the moment, we just register a view when a video starts playing) For example, if a viewer watches the first 30 seconds of a video and then clicks away to something else, we want the data to reflect that. Likewise, if someone watches the first 10 seconds, then scrubs the timeline to the last minute of the video and watches that, we want to register viewing on the parts watched and not the middle section. My first thought was to collect up the viewing data in the player and send it all to the server at the end of a viewing session. Unfortunately, Flash does not seem to have an event that you can hook into when a viewer clicks away from the page the movie is on (probably a good thing - it would be open to abuse) So, it looks like we're going to have to make regular requests to the server as the video is playing. This is obviously going to lead to a high volume of requests when there are large numbers of simultaneous viewers. The simple approach of dumping all these 'heartbeat' events from clients to a database feels like it will quickly become unmanageable so I'm wondering whether I should be taking an approach where viewing sessions are cached in memory and flushed to database when they become inactive (based on a timeout). That way, the data could be stored as time spans rather than individual heartbeats. So, to the question - what is the best way to approach dealing with this kind of high-volume viewing data? Are there any good existing architectures/patterns? Thanks, Dan.
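
    A rough sketch of the timeout-flush idea (Java-flavored, hypothetical names; a real version would also start a new span when the viewer scrubs, and persistSpan would write to the database):

        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;

        class ViewingSessions {
            static final long TIMEOUT_MS = 60_000;
            static final class Span { long startMs, endMs, lastSeen; }

            final Map<String, Span> active = new ConcurrentHashMap<>();

            // called for each heartbeat from the player
            void heartbeat(String sessionId, long positionMs, long now) {
                Span s = active.computeIfAbsent(sessionId, k -> {
                    Span fresh = new Span();
                    fresh.startMs = positionMs;
                    return fresh;
                });
                s.endMs = positionMs;   // extend the span to the current position
                s.lastSeen = now;
            }

            // called periodically; flushes sessions that have gone quiet
            void sweep(long now) {
                active.entrySet().removeIf(e -> {
                    if (now - e.getValue().lastSeen > TIMEOUT_MS) {
                        persistSpan(e.getKey(), e.getValue());  // one row per span
                        return true;
                    }
                    return false;
                });
            }

            void persistSpan(String sessionId, Span s) { /* write startMs/endMs */ }
        }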

    Read the article

  • Non distinct Unique ID in MySQL database table.

    - by Geoff
    First off, a simplified version: I am wondering if I can create a trigger that activates during INSERT (it's actually LOAD DATA INFILE) and does NOT enter records for an RMA already in my table? I have a table that has no unique records. Some may be duplicates, but there is one field I can use to know whether the data has been entered or not. For instance:

        RMA  Op      Days
        ---------------------
        213  Repair  0.10
        213  Test    0.20
        213  Repair  0.10

    So I could put an index on the three columns together, but as you can see it's possible for an RMA to be in a step for the same amount of time twice, so duplicate records are possible. This data comes from a report that I cannot edit, and this is all it provides. The key is that an RMA's data appears in the report only once, so if my database already has that RMA in its records I want to skip loading that RMA's records from the report. By all means please let me know if that didn't make sense, I'll explain as needed. I'm sure it's not uncommon, but I couldn't find anything on the net.
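
    A sketch of one workaround (a staging table, with hypothetical table and column names), since a MySQL BEFORE INSERT trigger cannot easily skip a row silently:

        -- Load the report into a staging table, then copy across only the
        -- RMAs that are not already present in the real table.
        CREATE TABLE rma_staging LIKE rma_steps;

        LOAD DATA INFILE '/path/to/report.csv' INTO TABLE rma_staging;

        INSERT INTO rma_steps (rma, op, days)
        SELECT s.rma, s.op, s.days
        FROM rma_staging s
        WHERE NOT EXISTS (SELECT 1 FROM rma_steps r WHERE r.rma = s.rma);

        TRUNCATE TABLE rma_staging;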

    Read the article

  • Trying to return data asynchronously using jQuery and JSON in MVC 2.0

    - by Calibre2010
    Hi, I am trying to use jQuery's getJSON method to return data from the server to my browser. The URL the getJSON method points to does get reached, but the result never makes it back to the browser for some odd reason. I wasn't sure if it was because I am using MVC 2.0 and jQuery 1.4.1, which differ from MVC 1.0 and jQuery 1.3.2. These are the code sections:

    Controller:

        public JsonResult StringReturn()
        {
            NameDTO myName = new NameDTO();
            myName.nameID = 1;
            myName.name = "James";
            myName.nameDescription = "Jmaes";
            return Json(myName);
        }

    View with jQuery:

        <script type="text/javascript">
            $(document).ready(function () {
                $("#myButton").click(function () {
                    $.getJSON("Home/StringReturn/", null, function (data) {
                        alert(data.name);
                        $("#show").append($("<div>" + data.name + "</div>"));
                    });
                });
            });
        </script>

    HTML:

        <input type="button" value="clickMe" id="myButton"/>
        <div id="show">d</div>
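
    One likely culprit, worth verifying: in MVC 2.0 a JsonResult refuses HTTP GET requests by default, so a $.getJSON call gets nothing back unless GET is explicitly allowed:

        public JsonResult StringReturn()
        {
            NameDTO myName = new NameDTO();
            myName.nameID = 1;
            myName.name = "James";
            myName.nameDescription = "Jmaes";
            // JsonRequestBehavior.AllowGet permits this result to be served
            // over GET; without it MVC 2.0 blocks the request.
            return Json(myName, JsonRequestBehavior.AllowGet);
        }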

    Read the article

  • When should I be cautious using data binding in .NET?

    - by Ben McCormack
    I just started working on a small team of .NET programmers about a month ago and recently got into a discussion with our team lead about why we don't use data binding at all in our code. Every time we work with a data grid, we iterate through a data table and populate the grid row by row; the code usually looks something like this:

        Dim dt As DataTable = FuncLib.GetData("spGetTheData ...")
        Dim i As Integer
        For i = 0 To dt.Rows.Count - 1 '(not sure why we do not use a For Each here)'
            gridRow = grid.Rows.Add()
            gridRow(constantProductID).Value = dt.Rows(i)("ProductID")
            gridRow(constantProductDesc).Value = dt.Rows(i)("ProductDescription")
        Next
        '(I am probably missing something in the code, but that is basically it)'

    Our team lead said he got burned using data binding with the Sheridan grid controls, VB6, and ADO recordsets back in the nineties. He's not sure what the exact problem was, but he remembers that binding didn't work as expected and caused him major problems. Since then, they haven't trusted data binding and load the data for all their controls by hand. The conversation only came up because I found data binding very simple and really liked separating the data presentation (in this case, the data grid) from the in-memory data source (in this case, the data table). "Loading" the data row by row into the grid seemed to break this distinction. I also observed that with the advent of XAML in WPF and Silverlight, data binding seems like a must-have to cleanly wire up a designer's XAML with your data. When should I be cautious of using data binding in .NET?
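
    For contrast, a minimal sketch of the bound equivalent (assuming a standard WinForms DataGridView; third-party grids usually expose a similar DataSource property):

        Dim dt As DataTable = FuncLib.GetData("spGetTheData ...")
        ' One assignment replaces the row-by-row loop; which grid column shows
        ' which table column is configured via each column's DataPropertyName.
        grid.DataSource = dt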

    Read the article

  • Best strategy for moving data between physical tiers in ASP.net

    - by Pete Lunenfeld
    Building a new ASP.net application, and planning to separate DB, 'service' tier and Web/UI tier into separate physical layers. What is the best/easiest strategy to move serialized objects between the service tier and the UI tier? I was considering serializing POCOs into JSON using simple ASP.net pages to serve the middle tier. Meaning that the UI/Web tier will request data from a (hidden to the outside user) web server that will return a JSON string. This kind of JSON 'emitter' seems easily testable. It also seems easily compressible for efficiently moving data over the WAN between tiers. I know that some folks use .asmx webservices for this kind of task, but this seems like there is excess overhead with SOAP, and the package is not as human readable (testable) as POCOs serialized as JSON. Others are using more complex technology like WCF which we have never used. Does anyone have advice for choosing a method for moving data/objects between the data (db) tier and the web (UI) tier over the WAN using .net technologies? Thanks!!!

    Read the article

  • check if a tree is a binary search tree

    - by TimeToCodeTheRoad
    I have written the following code to check whether a tree is a binary search tree. Please help me check the code:

        class Pair {
            boolean isTrue;
            int min;
            int max;

            Pair() {}

            Pair(boolean isTrue, int min, int max) {
                this.isTrue = isTrue;
                this.min = min;
                this.max = max;
            }
        }

        public boolean isBst(BNode v) {
            return isBST1(v).isTrue;
        }

        public Pair isBST1(BNode v) {
            if (v == null)
                return new Pair(true, Integer.MIN_VALUE, Integer.MAX_VALUE);
            if (v.left == null && v.right == null)
                return new Pair(true, v.data, v.data);
            Pair pLeft = isBST1(v.left);
            Pair pRight = isBST1(v.right);
            boolean check = pLeft.max < v.data && v.data <= pRight.min;
            Pair p = new Pair();
            p.isTrue = check && pLeft.isTrue && pRight.isTrue;
            p.min = pLeft.min;
            p.max = pRight.max;
            return p;
        }

    Note: this function checks whether a tree is a binary search tree.

    Read the article

  • Cocoa Drag and Drop, reading back the data

    - by kodai
    Ok, I have an NSOutlineView set up, and I want it to capture PDFs if a PDF is dragged into it. My first question: I have the following code:

        [outlineView registerForDraggedTypes:[NSArray arrayWithObjects:NSStringPboardType, NSFilenamesPboardType, nil]];

    In all the Apple docs and examples I've seen, there is also something like MySupportedType registered for dragging. What does this mean? Do I change the code to be:

        [outlineView registerForDraggedTypes:[NSArray arrayWithObjects:@"pdf", NSStringPboardType, NSFilenamesPboardType, nil]];

    Currently I have it set up to recognize drag and drop, and I can even make it spit out the URL of the dragged file once the drag is accepted. That leads me to my second question: I want to keep a copy of those PDFs app-side. I suppose, and correct me if I'm wrong, that the best way to do this is to grab the data off the pasteboard and save it in some persistent store (as opposed to using some sort of copy command and literally copying the file to the app directory). That being said, I'm not sure how to do it. I have this code:

        - (BOOL)outlineView:(NSOutlineView *)ov acceptDrop:(id <NSDraggingInfo>)info item:(id)item childIndex:(NSInteger)childIndex
        {
            NSPasteboard *pboard = [info draggingPasteboard];
            NSURL *fileURL;
            if ( [[pboard types] containsObject:NSURLPboardType] ) {
                fileURL = [NSURL URLFromPasteboard:pboard];
                // Perform operation using the file's URL
            }
            NSData *data = [pboard dataForType:@"NSPasteboardTypePDF"];

    But this never actually gets any data. Like I said before, it does get the URL, just not the data. Does anyone have any advice on how to get this going? Thanks so much!
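
    A sketch of one way to grab the bytes (assuming the drag carries a file URL, as the working URL code above suggests): read the file's data from its URL rather than asking the pasteboard for a PDF type it may not contain.

        if ([[pboard types] containsObject:NSURLPboardType]) {
            NSURL *fileURL = [NSURL URLFromPasteboard:pboard];
            // Reads the dragged PDF's bytes straight from disk; persist this
            // NSData wherever the app keeps its app-side copies.
            NSData *pdfData = [NSData dataWithContentsOfURL:fileURL];
        }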

    Read the article

  • Trouble when changing pixel data with alpha on png on iphone --okay on simulator

    - by Ted
    I'm trying to change the color of the pixels (lighten or darken) without changing the value of the alpha channel, using CGDataProviderCopyData. I leave every 4th data byte untouched. It works fine on the iPhone simulator; however, on the real thing the alpha goes white as I increase the values of the other pixels. I've tried changing just the first byte, or the second, or the third. Does anybody have any idea what is going on? The basic code is borrowed from Jorge. I like this simple approach -- I'm new to this. But I want to make it work with PNG images with some transparency. Here is most of the code by Jorge:

        CFDataRef CopyImagePixels(CGImageRef inImage) {
            return CGDataProviderCopyData(CGImageGetDataProvider(inImage));
        }

        CGImageRef img = originalImage.CGImage;
        CFDataRef dataref = CopyImagePixels(img);
        UInt8 *data = (UInt8 *)CFDataGetBytePtr(dataref);
        int length = CFDataGetLength(dataref);
        for (int index = 0; index < length; index += 4) {
            for (int i = 0; i < 3; i++) {   // skip the 4th (alpha) byte
                if (data[index + i] + value > 255) {
                    data[index + i] = 255;
                } else {
                    data[index + i] += value;
                }
            }
        }
        size_t width = CGImageGetWidth(img);
        size_t height = CGImageGetHeight(img);
        size_t bitsPerComponent = CGImageGetBitsPerComponent(img);
        size_t bitsPerPixel = CGImageGetBitsPerPixel(img);
        size_t bytesPerRow = CGImageGetBytesPerRow(img);
        CGColorSpaceRef colorspace = CGImageGetColorSpace(img);
        CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(img);
        CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo(img);
        NSLog(@"bitmapinfo: %d", bitmapInfo);
        CFDataRef newData = CFDataCreate(NULL, data, length);
        CGDataProviderRef provider = CGDataProviderCreateWithCFData(newData);
        CGImageRef newImg = CGImageCreate(width, height, bitsPerComponent, bitsPerPixel,
            bytesPerRow, colorspace, bitmapInfo, provider, NULL, true,
            kCGRenderingIntentDefault);
        [iv setImage:[UIImage imageWithCGImage:newImg]];
        CGImageRelease(newImg);
        CGDataProviderRelease(provider);

    Read the article

  • mySQL not saving data?

    - by tony noriega
    I have a PHP contact form that submits data and an email:

        <?php
        $dbh = mysql_connect("localhost", "username", "password")
            or die('I cannot connect to the database because: ' . mysql_error());
        mysql_select_db("guest");
        if (isset($_POST['submit'])) {
            if (!$_POST['name'] || !$_POST['email']) {
                echo "<div class='error'>Error<br />Please provide your Name and Email Address so we may properly contact you.</div>";
            } else {
                $age = $_POST['age'];
                $name = $_POST['name'];
                $gender = $_POST['gender'];
                $email = $_POST['email'];
                $phone = $_POST['phone'];
                $comments = $_POST['comments'];
                $query = "INSERT INTO contact_us (age,name,gender,email,phone,comments)
                          VALUES ('$age','$name','$gender','$email','$phone','$comments')";
                mysql_query($query);
                mysql_close();
                $yoursite = "Mysite ";
                $youremail = $email;
                $subject = "Website Guest Contact Us Form";
                $message = "$name would like you to contact them
        Contact PH: $phone
        Email: $email
        Age: $age
        Gender: $gender
        Comments: $comments";
                $email2 = "[email protected]";
                mail($email2, $subject, $message, "From: $email");
                echo "<div class='thankyou'>Thank you for contacting us,<br /> we will respond as soon as we can.</div>";
            }
        }
        ?>

    The email comes through fine, but the data is not being stored in the database... am I missing something? It's the same script I use on another contact-us page; the only difference is that instead of processing the data on the same page, I now send it to a "thankyou.php" page... I tried changing $_POST to $_GET but that killed the page... what am I doing wrong?
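
    One quick diagnostic (a sketch using the same legacy mysql_* API as the code above): check the result of mysql_query instead of discarding it, which should surface the actual INSERT failure.

        $result = mysql_query($query);
        if (!$result) {
            // Prints the real reason the row was not stored, e.g. a column
            // mismatch or a quoting problem.
            die('Insert failed: ' . mysql_error());
        }

    It may also be worth escaping the user input (mysql_real_escape_string) before interpolating it into $query; an apostrophe in the comments field would otherwise break the INSERT silently here.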

    Read the article

  • Collapsing data frame by selecting one row per group

    - by jkebinger
    I'm trying to collapse a data frame by removing all but one row from each group of rows with identical values in a particular column. In other words, the first row from each group. For example, I'd like to convert this

        > d = data.frame(x=c(1,1,2,4), y=c(10,11,12,13), z=c(20,19,18,17))
        > d
          x  y  z
        1 1 10 20
        2 1 11 19
        3 2 12 18
        4 4 13 17

    into this:

          x  y  z
        1 1 11 19
        2 2 12 18
        3 4 13 17

    I'm using aggregate to do this currently, but the performance is unacceptable with more data:

        > d.ordered = d[order(-d$y),]
        > aggregate(d.ordered, by=list(key=d.ordered$x), FUN=function(x){x[1]})

    I've tried split/unsplit with the same function argument as here, but unsplit complains about duplicate row numbers. Is rle a possibility? Is there an R idiom to convert rle's lengths vector into the indices of the rows that start each run, which I can then use to pluck those rows out of the data frame?
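
    A common idiom for this kind of collapse (a sketch; duplicated() keeps the first occurrence, so order the row you want to keep to the front of each group first):

        # order so the row to keep (highest y) comes first within each x-group
        d.ordered <- d[order(d$x, -d$y), ]
        # !duplicated() marks the first row of each x-group
        d.collapsed <- d.ordered[!duplicated(d.ordered$x), ]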

    Read the article

  • How to publish internal data to the internet - as simple as possible

    - by mlarsen
    We have a .NET 2-tier application where a desktop program talks to a database. We support MS SQL Server 2000, 2005, 2008 and Oracle 9, 10 and 11. The application is sold, not as shrink-wrap, but pretty close. It is quite important to us that installation and configuration be as easy as possible, as installation instructions are usually supplied in written form to the customer's internal IT department. Our application is usually not seen as mission-critical by the IT department, so we need to keep their work down to a minimum. Now we are starting to get requests for a web application built on top of the same data. The web application will be hosted by us and delivered as a SaaS application. The challenge is how to move data back and forth between the web application and the customer's internal database. As I see it, we have these requirements:

    1. We must be ready to handle the situation where the customer's database is not accessible from the DMZ. I guess the easiest solution is for all communication to be initiated from inside the customer's LAN.
    2. As little firewall configuration as possible. Best is if we can run without any special configuration as long as outgoing traffic from the customer's LAN is not blocked. If we need something changed in the firewall, we must be able to document that the change is secure.
    3. It doesn't have to be real time. Moving data in batches every ten minutes or so is OK.
    4. Data moves both ways, but not the same tables, so we don't have to support merges.
    5. It would be nice if we don't have to roll our own framework completely.

    Looking forward to hearing your suggestions.

    Read the article

  • L10N: Trusted test data for Locale Specific Sorting

    - by Chris Betti
    I'm working on an internationalized database application that supports multiple locales in a single instance. When international users sort data in the applications built on top of the database, the database theoretically sorts the data using a collation appropriate to the locale associated with the data the user is viewing. I'm trying to find sorted lists of words that meet two criteria: the sorted order follows the collation rules for the locale, and the words listed allow me to exercise most / all of the locale's specific collation rules. I'm having trouble finding such trusted test data. Are such sort-testing datasets currently available, and if so, what / where are they? "words.en.txt" is an example text file containing American English text:

        Andrew
        Brian
        Chris
        Zachary

    I am planning on loading the list of words into my database in randomized order and checking whether sorting the list conforms to the original input. Because I am not fluent in any language other than English, I do not know how to create sample datasets like the following sample one in French (call it "words.fr.txt"):

        cote
        côte
        coté
        côté

    The French prefer diacritical marks to be ordered right to left. If you sorted that using code-point order, it likely comes out like this (which is an incorrect collation):

        cote
        coté
        côte
        côté

    Thank you for the help, Chris

    Read the article

  • Adding an ADO.NET Entity Data Model throws build errors

    - by user3726262
    I am using Visual Studio 2013 Express. I create a new project and then I add a database to that project. But when I add an ADO.NET Entity Framework model to that project and then run the program, I get the four build errors listed below. To try to remedy this myself, I added the namespaces 'System.Data.Entity' and 'System.Data.Entity.Design', but that didn't help. I also uninstalled and re-installed the NuGet package, and uninstalled and re-installed Visual Studio 2013 Express for Windows Desktop, but these measures didn't help the situation either. Please note that I used to use the Entity Data Model just fine. But it was around the time that I did a system restore on my computer, updated VS 2013 with an update offered on the start page, and signed up for MS Azure that I started running into the problem described above. Now I would think that uninstalling and reinstalling Visual Studio 2013, and then installing the NuGet package, would solve all problems. What am I missing here? The errors mentioned above are:

        Error 1  The type or namespace name 'Infrastructure' does not exist in the namespace 'System.Data.Entity' (are you missing an assembly reference?)  C:\Users\John\documents\visual studio 2013\Projects\Riches\Riches\RichesModel.Context.cs  14  30  DataLayer
        Error 2  The type or namespace name 'DbContext' could not be found (are you missing a using directive or an assembly reference?)  C:\Users\John\documents\visual studio 2013\Projects\Riches\Riches\RichesModel.Context.cs  16  52  DataLayer
        Error 3  The type or namespace name 'DbModelBuilder' could not be found (are you missing a using directive or an assembly reference?)  C:\Users\John\documents\visual studio 2013\Projects\Riches\Riches\RichesModel.Context.cs  23  49  DataLayer
        Error 4  The type or namespace name 'DbSet' could not be found (are you missing a using directive or an assembly reference?)  C:\Users\John\documents\visual studio 2013\Projects\Riches\Riches\RichesModel.Context.cs  28  16  DataLayer

    Thank you, and I realize that my last attempt at this question was rather rough-draftish, John
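
    A guess based on the error text rather than on the article: all four errors point at types that live in EntityFramework.dll (DbContext, DbSet, DbModelBuilder), so the DataLayer project is probably missing a reference to that assembly. Reinstalling the package into that specific project from the Package Manager Console may clear the errors:

        PM> Install-Package EntityFramework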

    Read the article

  • Building a control-flow graph from an AST with a visitor pattern using Java

    - by omegatai
    Hi guys, I'm trying to figure out how to implement my LEParserCfgVisitor class so as to build a control-flow graph from an abstract syntax tree already generated with JavaCC. I know there are tools that already exist, but I'm trying to do it in preparation for my compilers final. I know I need a data structure that keeps the graph in memory, and I want to be able to keep attributes like IN, OUT, GEN, KILL in each node so I can do a control-flow analysis later on. My main problem is that I haven't figured out how to connect the different blocks together so as to have the right edge between blocks depending on their nature: branch, loop, etc. In other words, I haven't found an explicit algorithm that could help me build my visitor. Here is my empty visitor. You can see it works on basic language constructs, like if, while and basic operations (+, -, x, ^, ...):

        public class LEParserCfgVisitor implements LEParserVisitor {
            public Object visit(SimpleNode node, Object data) { return data; }
            public Object visit(ASTProgram node, Object data) {
                data = node.childrenAccept(this, data);
                return data;
            }
            public Object visit(ASTBlock node, Object data) { }
            public Object visit(ASTStmt node, Object data) { }
            public Object visit(ASTAssignStmt node, Object data) { }
            public Object visit(ASTIOStmt node, Object data) { }
            public Object visit(ASTIfStmt node, Object data) { }
            public Object visit(ASTWhileStmt node, Object data) { }
            public Object visit(ASTExpr node, Object data) { }
            public Object visit(ASTAddExpr node, Object data) { }
            public Object visit(ASTFactExpr node, Object data) { }
            public Object visit(ASTMultExpr node, Object data) { }
            public Object visit(ASTPowerExpr node, Object data) { }
            public Object visit(ASTUnaryExpr node, Object data) { }
            public Object visit(ASTBasicExpr node, Object data) { }
            public Object visit(ASTFctExpr node, Object data) { }
            public Object visit(ASTRealValue node, Object data) { }
            public Object visit(ASTIntValue node, Object data) { }
            public Object visit(ASTIdentifier node, Object data) { }
        }

    Can anyone give me a hand? Thanks!
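
    A sketch of one way to wire an if-statement (hypothetical CfgNode class; the convention assumed here is that each visit receives its predecessor block as data and returns its exit block, which the JJTree jjtAccept/jjtGetChild API makes easy to thread through):

        // CfgNode is hypothetical: a graph node with an addSuccessor(CfgNode) method.
        public Object visit(ASTIfStmt node, Object data) {
            CfgNode pred = (CfgNode) data;
            CfgNode cond = new CfgNode(node);      // block holding the condition
            pred.addSuccessor(cond);
            CfgNode join = new CfgNode(null);      // empty join point after the if
            // child 0 = condition, child 1 = then-branch, child 2 = optional else
            CfgNode thenExit = (CfgNode) node.jjtGetChild(1).jjtAccept(this, cond);
            thenExit.addSuccessor(join);
            if (node.jjtGetNumChildren() > 2) {
                CfgNode elseExit = (CfgNode) node.jjtGetChild(2).jjtAccept(this, cond);
                elseExit.addSuccessor(join);
            } else {
                cond.addSuccessor(join);           // fall-through when there is no else
            }
            return join;                           // the caller chains from the join block
        }

    A while-statement is wired the same way, except the body's exit block gets an edge back to the condition block, and the condition's second successor is the loop exit.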

    Read the article

  • Display Yearly Report When Data Not Available (CI, PHP, MySQL)

    - by tegaralaga
    First of all, I apologize for my bad English, as it isn't my native language. I want to display a yearly report broken down by month. Let's say I got orders in January, August and December, but nothing in the other months, so the MySQL database only has orders for 3 months (Jan, Aug, Dec). When I run this query from CI:

        select month(order_date) as month_name, count(order_id) as amount
        from order
        where year(order_date) = 2011
        group by month(order_date)

    there are only 3 rows; let's say the 3 rows are (using $query->result_array()):

        Array (
            [0] => Array ( [month_num] => 1  [amount] => 4 )
            [1] => Array ( [month_num] => 8  [amount] => 1 )
            [2] => Array ( [month_num] => 12 [amount] => 19 )
        )

    How do I turn that into 12 entries (12 months), so the array becomes like this (when no data is available the amount is 0)?

        Array (
            [0] => Array ( [month_num] => 1 [amount] => 4 )
            [1] => Array ( [month_num] => 2 [amount] => 0 )
            [2] => Array ( [month_num] => 3 [amount] => 0 )
            etc
        )

    Thanks in advance :)
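
    A sketch of one approach (CodeIgniter-style; the key name is assumed to match the query's alias): fill a 12-slot array in PHP rather than trying to make MySQL emit the missing months.

        // 12 months, all initialized to 0
        $monthly = array_fill(1, 12, 0);
        foreach ($query->result_array() as $row) {
            $monthly[(int) $row['month_name']] = (int) $row['amount'];
        }
        // $monthly now holds an amount for every month, 0 where there were no orders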

    Read the article

  • Array data to display in table cells where col and row are given in array

    - by Saleem
    I have an array:

        Array (
            Array ( [id] => 1, [schedule_id] => 4, [subject] => Subject 1, [classroom] => 1, [time] => 08:00:00, [col] => 1, [row] => 1 ),
            Array ( [id] => 2, [schedule_id] => 4, [subject] => Subject 2, [classroom] => 1, [time] => 08:00:00, [col] => 2, [row] => 1 ),
            Array ( [id] => 3, [schedule_id] => 4, [subject] => Subject 3, [classroom] => 2, [time] => 09:00:00, [col] => 1, [row] => 2 ),
            Array ( [id] => 4, [schedule_id] => 4, [subject] => Subject 4, [classroom] => 2, [time] => 09:00:00, [col] => 2, [row] => 2 )
        )

    I want to display it in table format according to the col and row values:

        col 1 & row 1             col 2 & row 1
        1st array data            2nd array data
        Subject, room, time       Subject, room, time
        1, 1, 08:00               2, 1, 08:00

        col 1 & row 2             col 2 & row 2
        3rd array data            4th array data
        Subject, room, time       Subject, room, time
        3, 2, 09:00               4, 2, 09:00

    I am new to arrays and need your support to sort this table. Thanks
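
    A sketch of the usual approach (plain PHP; $items stands in for the array above): re-index by [row][col] first, then loop over rows and columns to emit the cells.

        <?php
        // group the flat list into $grid[row][col] => item
        $grid = array();
        foreach ($items as $item) {
            $grid[$item['row']][$item['col']] = $item;
        }
        ksort($grid);                 // keep rows in order
        echo "<table>";
        foreach ($grid as $cols) {
            ksort($cols);             // keep columns in order
            echo "<tr>";
            foreach ($cols as $cell) {
                echo "<td>{$cell['subject']}, {$cell['classroom']}, {$cell['time']}</td>";
            }
            echo "</tr>";
        }
        echo "</table>";
        ?>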

    Read the article

  • iphone coredata fetch-request sorting

    - by satyam
    I'm using Core Data and fetching the results successfully. I have a few questions regarding Core Data:

    1. When I add a record, will it be added at the end or at the start of the entity?

    2. I'm using the following code to fetch the data. The array is populated with all the records, but they are not in the same order I entered them into the entity. Why? On what basis is the default ordering applied?

        NSFetchRequest *allLatest = [[NSFetchRequest alloc] init];
        [allLatest setEntity:[NSEntityDescription entityForName:@"Latest"
                                         inManagedObjectContext:[self managedObjectContext]]];
        NSError *error = nil;
        NSArray *records = [managedObjectContext executeFetchRequest:allLatest error:&error];
        [allLatest release];

    3. The way that I enter the records: 1, 2, 3, 4... After some time, I want to delete the oldest records (the ones entered first), something like deleting the oldest two records. How do I do that?
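
    A sketch that bears on questions 2 and 3 (assuming a creationDate attribute is added to the Latest entity, since Core Data does not promise any insertion order on an unsorted fetch):

        // sort oldest-first; flip ascending to get newest-first
        NSSortDescriptor *byDate = [[NSSortDescriptor alloc] initWithKey:@"creationDate"
                                                               ascending:YES];
        [allLatest setSortDescriptors:[NSArray arrayWithObject:byDate]];
        [byDate release];
        // with this ordering, the oldest records are the first items of the
        // fetched array, so "delete the oldest two" means deleting objects 0 and 1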

    Read the article

  • Updating target workbook - extracting data from source workbook

    - by Allan
    My question is as follows: I have given a workbook to multiple people. They have this workbook in a folder of their choice. The workbook name is the same for all people, but folder locations vary. Let's assume the common file name is MyData-1.xls. Now I have updated the workbook and want to give it to these people. However, when they receive the new one (let's call it MyData-2.xls) I want specific parts of their data pulled from their file (MyData-1) and automatically put into the new one provided (MyData-2). The columns and cells to be copied/imported are identical for both workbooks. Let's assume I want to import cell data (values only) from MyData-1.xls, Sheet 1, cells B8 through C25, to the same location in the MyData-2.xls workbook. How can I specify in code (possibly attached to a macro-driven "import data now" button) that I want this data brought into the new workbook? I have tried it at my own location by opening the two workbooks and using the copy/paste-special with links process. It works really well, but it seems to create a hard link between the two physical workbooks. I changed the name of the source workbook and it still worked. This makes me believe that there is a "hard link" between the two and that this will not allow me to give the target (MyData-2.xls) workbook to others and have it find their source workbook.
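
    A sketch of such a macro (VBA; the sheet name "Sheet1" is an assumption): let each user locate their own MyData-1.xls, then copy values only, so no link back to the source file is created.

        Sub ImportDataNow()
            Dim srcPath As Variant
            Dim srcBook As Workbook
            srcPath = Application.GetOpenFilename("Excel files (*.xls), *.xls")
            If TypeName(srcPath) = "Boolean" Then Exit Sub   ' user cancelled
            Set srcBook = Workbooks.Open(srcPath, ReadOnly:=True)
            ' value-only assignment, equivalent to paste-special values
            ThisWorkbook.Sheets("Sheet1").Range("B8:C25").Value = _
                srcBook.Sheets("Sheet1").Range("B8:C25").Value
            srcBook.Close SaveChanges:=False
        End Sub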

    Read the article

  • problems with retrieving data that was saved outside of a grails webflow

    - by callie16
    Hi, this is actually connected to an earlier question of mine here. Anyway, I have 3 domains that look like this:

        class A {
            ...
            static hasMany = [ b : B ]
            ...
        }
        class B {
            ...
            static belongsTo = [ a : A ]
            static hasMany = [ c : C ]
            ...
        }
        class C {
            ...
            static belongsTo = [ b : B ]
            ...
        }

    In my GSP page, I call an action in the Controller via a remote function in a javascript block (I'm using Dojo so I'm passing the data to be saved this way... it's not a form per se so I use JSON for now to pass the data to the Controller). Let's say I'm calling something like this:

        def someAction = {
            def jsonArr = [parse the JSON here]
            def tmpA = A.get(params.id)
            ...
            def tmpB = new B()
            b.someParam = jsonArr.someParam
            ...
            def tmpC = new C()
            tmpC.cParam = jsonArr.cParam
            tmpB.addToC(tmpC)
            tmpB.save(flush: true) // this may or may not be here but I'm adding it for the sake of completeness
            tmpA.addToB(tmpB)
            tmpA.save(flush: true)
            // NOTE: If I check here via println or whatnot, tmpA has a tmpB which
            // has a tmpC... in other words, the data got saved. It's also in the DB.
            redirect(action: 'order' ...)
        }

    Then comes the fun part. Here's the webflow sample:

        def orderFlow = {
            ...
            someStateIShouldEndUpIn {
                on("next") { // or on previous... doesn't matter
                    def anId = params.id
                    def currA = A.get(anId)   // this does NOT return a null value
                    def testB = currA.b       // this DOES return a null value
                }.to("somePage")
            }
            ...
        }

    Any ideas on why this happens? Moreover, when I dump the data of currA, b=null... instead of b=[] or b=[contents of tmpB]. Any help would be seriously appreciated... been at this for a couple of days now... Thanks!

    Read the article

  • Ways of breaking down SQL transactional/call data into reports -- 'square data'?

    - by RizwanK
    I've got a large database of call-traffic information (although the question could be answered with any generic data set). For instance, a row contains:

        call endpoint server (endpoint_name)
        call endpoint status (sip_disconnect_reason)
        call destination (destination)
        call completed (duration) [duration > 0 is completed]
        call account group (account_group)

    It's pretty easy to run SQL reports against the data, i.e.

        select count(*), endpoint_name from calls where duration > 0 group by endpoint_name
        select count(*), destination from calls where blah group by destination

    I've been calling these filtering or breakdown reports (I get the number of calls per carrier, etc.). Add another breakdown, and you've got two breakdowns, a la

        select count(*), endpoint_name, sip_disconnect_reason from calls where duration = 0 group by endpoint_name, sip_disconnect_reason

    Of course, if you keep adding breakdowns, you end up making super-large reports and slicing your data so thin that you can't extract any trends from it. So my question is this: is there a name for this sort of method of report writing? (I've heard words like squares, slicing and breakdown reports applied to them.) I'm looking for a Python/Reporting toolkit that I can use to make these easier to generate for my end users. Aside: are there other ways of representing transactional data that might be useful rather than the above method? Thanks,
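
    If it helps, both breakdown levels can come out of one pass with ROLLUP (MySQL syntax shown; an illustration rather than something from the article):

        -- per-(endpoint, reason) counts plus per-endpoint subtotals and a
        -- grand total in a single query
        select endpoint_name, sip_disconnect_reason, count(*)
        from calls
        where duration = 0
        group by endpoint_name, sip_disconnect_reason with rollup;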

    Read the article

  • I need help on this data file to be edited in SOM_PAK format

    - by Mola
    Hi experts, I am working on a Self-Organizing Map (SOM) implementation and I have a microarray dataset which I am trying to read in using the som_read_data function, but I keep getting errors when I edit it into the SOM_PAK form that som_read_data recognises, such as:

        ??? Error using ==> somtoolbox\som_read_data.m
        Only 69 vector components on input file data line 1 (dimension is 70)

        Error in ==> SomMainFunction at 3
        sD = som_read_data('B_r2.txt');

    But when I try to read the data without editing, which is the original file as shown here: http://rapidshare.com/files/376239367/DLBCL.txt.html, it indicates "Data read OK", but then I get the following error:

        ??? Error using ==> unknown
        Out of memory. Type HELP MEMORY for your options.

        Error in ==> somtoolbox\som_bmus.m at 189
        Bmus = zeros(dlen,length(which_bmus));

        Error in ==> somvis\somvis_p_matrix.m at 41
        [dummy dists] = som_bmus (dat, dat, 2:datlen);

        Error in ==> SomMainFunction at 16
        [pheight rad_real perc] = somvis_p_matrix(sM,sD);

    You can get the data file from here: http://rapidshare.com/files/376239367/DLBCL.txt.html. I need someone to help me correct this data and put it in SOM_PAK format. I have tried getting it into SOM_PAK format, but it still gives me errors.
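
    For reference, the SOM_PAK data format as som_read_data expects it (to the best of my knowledge): the first line gives the input dimension, and each following line holds exactly that many whitespace-separated components, optionally followed by a label. A tiny 3-dimensional example:

        3
        0.10 0.25 0.31 sample1
        0.42 0.11 0.98 sample2

    The first error above says one data line has 69 components while the header promises 70, so a missing value or a stray delimiter on that line would be the first thing to check.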

    Read the article
