Search Results

Search found 16891 results on 676 pages for 'custom cell'.


  • How to resize div element less than its content?

    - by andDaviD
    I have a table and there is a TreeView in a cell. Here is my code: <style> #leftPanel { width:60px; height:100%; border:1px solid red; background:yellow; } </style> <table> <tr> <td> <div id="leftPanel"> <asp:PlaceHolder runat="server" ID="TreeView" /> <%-- Here is a TreeView --%> </div> </td> </tr> <%-- other code --%> </table> $(function () { $('#leftPanel').resizable( { handles: 'e, w' }); }); I can't reduce the size of my div to less than the size of my TreeView. What code do I need to write to allow leftPanel to be smaller than the TreeView? This is necessary if a TreeView node's text is too long.
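
    One possible way to let the panel shrink below the TreeView's natural width (a sketch, not tested against this exact markup) is to clip or scroll the overflowing content and give the resizable a small minWidth; the selector and handles are taken from the question:

        #leftPanel {
            width: 60px;
            height: 100%;
            border: 1px solid red;
            background: yellow;
            overflow: hidden;    /* or overflow: auto to show a scrollbar instead of clipping */
        }

        $(function () {
            $('#leftPanel').resizable({
                handles: 'e, w',
                minWidth: 10     /* lets the panel be dragged well below the TreeView's width */
            });
        });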

    Read the article

  • How can I join 3 tables with mysql & php?

    - by steven
    check out the page http://www.mujak.com/test/test3.php It pulls the user's post, username, xbc/xlk tags etc., which is perfect... BUT since I am pulling information from a MyBB bulletin board system, it's quite different. When replying, people are allowed to change the "Thread Subject" by simply replying and changing it. I don't want it to SHOW the changed subject title, just the original title for all posts in that thread. By default it replies with "RE: thread title". They can easily edit this and it will show up in the "Subject" cell, and people won't know which thread it was posted in because they changed the subject when replying to the post. So I just want to keep the original thread title when they are replying. Make sense? Tables:mybb_users Fields:uid,username Tables:mybb_userfields Fields:ufid Tables:mybb_posts Fields:pid,tid,replyto,subject,ufid,username,uid,message Tables:mybb_threads Fields:tid,fid,subject,uid,username,lastpost,lastposter,lastposteruid I have tried multiple queries with no success: $result = mysql_query(" SELECT * FROM mybb_users LEFT JOIN (mybb_posts, mybb_userfields, mybb_threads) ON ( mybb_userfields.ufid=mybb_posts.uid AND mybb_threads.tid=mybb_posts.tid AND mybb_users.uid=mybb_userfields.ufid ) WHERE mybb_posts.fid=42"); $result = mysql_query(" SELECT * FROM mybb_users LEFT JOIN (mybb_posts, mybb_userfields, mybb_threads) ON ( mybb_userfields.ufid=mybb_posts.uid AND mybb_threads.tid=mybb_posts.tid AND mybb_users.uid=mybb_posts.uid ) WHERE mybb_threads.fid=42"); $result = mysql_query(" SELECT * FROM mybb_posts LEFT JOIN (mybb_userfields, mybb_threads) ON ( mybb_userfields.ufid=mybb_posts.uid AND mybb_threads.tid=mybb_posts.tid ) WHERE mybb_posts.fid=42");
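
    One way to always show the original thread title is to take the subject from mybb_threads and simply not select mybb_posts.subject at all. A sketch using only the tables and fields listed above (the join of mybb_userfields.ufid to the user id follows the question's own queries):

        $result = mysql_query("
            SELECT p.pid, p.tid, p.uid, p.username, p.message,
                   t.subject AS thread_subject      -- the original title, never the edited 'RE:' subject
            FROM mybb_posts p
            INNER JOIN mybb_threads t     ON t.tid   = p.tid
            INNER JOIN mybb_users u       ON u.uid   = p.uid
            LEFT  JOIN mybb_userfields uf ON uf.ufid = p.uid
            WHERE t.fid = 42
            ORDER BY p.pid");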

    Read the article

  • Navigation Controller with Tab Bar only on first view

    - by dinoc
    I am seeking advice on how to start my project. I need to use a combination of the navigation controller and tabbar controller but on the second screen, I need the tabbar controller not to be there. Here is a brief description of the two main screens Screen 1 will have a tabbar controller with two tabs. The first tab is a tableview and when you tap on a table cell, it drills down to Screen 2. The second tab is just a filter view that updates the table in the first tab of Screen 1. Screen two is just a details screen from the cells of Screen 1. The catch is that I don't want the TabBar on Screen 2. I am struggling with how to get started. Do I start with a Navigation-based application since I need to be able to drill down? How do I just add a tab bar to the main screen of the navigation based app? I can't start with a Tab Bar application because if I load a navigation controller inside one of the views of the tab controller, then when I drill down inside the nav controller, the tab bar still stays on the next screen when I need it to go away. Any help would be appreciated.
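
    A common answer (sketched here, untested against this exact project) is to keep the tab bar controller as the root, put a navigation controller inside the first tab, and set hidesBottomBarWhenPushed on the detail controller before pushing it; the DetailViewController name below is just a placeholder:

        // In the first tab's table view controller:
        - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath
        {
            DetailViewController *detail = [[DetailViewController alloc] initWithNibName:@"DetailViewController"
                                                                                   bundle:nil];
            detail.hidesBottomBarWhenPushed = YES;   // the tab bar slides away on Screen 2
            [self.navigationController pushViewController:detail animated:YES];
            [detail release];                        // omit under ARC
        }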

    Read the article

  • What is the worst gotcha in WPF?

    - by David
    Hi, I've started to make myself a list of "WPF gotchas": things that bug me and that I had to write down to remember because I fall for them every time.... Now, I'm pretty sure you all stumbled upon similar situations at one point, and I would like you to share your experience on the subject: What is the gotcha that gets you all the time? The one you find the most annoying? (I have a few issues that seem to be without explanation; maybe your submissions will explain them) Here are a few of my "personal" gotchas (randomly presented): For a MouseEvent to be fired even when the click is on the "transparent" background of a control (e.g. a label) and not just on the content (the Text in this case), the control's Background has to be set to "Brushes.Transparent" and not just "null" (the default value for a label) A WPF DataGridCell's DataContext is the RowView to which the cell belongs, not the CellView When inside a ScrollViewer, a ScrollBar is managed by the ScrollViewer itself (i.e. setting properties such as ScrollBar.Value has no effect) Key.F10 is not fired when you press "F10"; instead you get Key.System and you have to go look at e.SystemKey to get the Key.F10 ... and now it's your turn.

    Read the article

  • Strange behaviour of DataTable with DataGridView

    - by Paul
    Please explain to me what is happening. I have created a WinForms .NET application which has a DataGridView on a form and should update the database when the DataGridView's inline editing is used. The form has a SqlDataAdapter _da with four SqlCommands bound to it. The DataGridView is bound directly to the DataTable _names. This CellValueChanged handler: private void dataGridView1_CellValueChanged(object sender, DataGridViewCellEventArgs e) { _da.Update(_names); } does not update the database state although the _names DataTable is updated. All the rows of _names have RowState == DataRowState.Unchanged. Ok, I modified the handler: private void dataGridView1_CellValueChanged(object sender, DataGridViewCellEventArgs e) { DataRow row = _names.Rows[e.RowIndex]; row.BeginEdit(); row.EndEdit(); _da.Update(_names); } This variant really writes the modified cell to the database, but when I attempt to insert a new row into the grid, I get an error about the absence of a row with index e.RowIndex. So, I decided to improve the handler further: private void dataGridView1_CellValueChanged(object sender, DataGridViewCellEventArgs e) { if (_names.Rows.Count<e.RowIndex) { DataRow row = _names.Rows[e.RowIndex]; row.BeginEdit(); row.EndEdit(); } else { DataRow row = _names.NewRow(); row["NameText"] = dataGridView1["NameText", e.RowIndex].Value; _names.Rows.Add(row); } _da.Update(_names); } Now really strange things happen when I insert a new row into the grid: the grid remains what it was until _names.Rows.Add(row); After this line THREE rows are inserted into the table - two rows with the same value and one with a Null value. The slightly modified code: DataRow row = _names.NewRow(); row["NameText"] = "--------------"; _names.Rows.Add(row); inserts three rows with three different values: one as entered into the grid, the second with the "--------------" value and the third with a Null value. I am really stuck guessing what is happening.
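
    For what it's worth, a common pattern (a sketch, assuming the same _da adapter and _names table) is to let the grid's data binding create and fill rows itself and only commit and push changes once a row has been validated; manually calling NewRow/Rows.Add from CellValueChanged fights the binding and is a likely source of the duplicated rows:

        // dataGridView1.DataSource = _names;   // bind once, e.g. in Form_Load

        private void dataGridView1_RowValidated(object sender, DataGridViewCellEventArgs e)
        {
            // Commit the row that was just edited so its RowState becomes Added/Modified...
            this.BindingContext[_names].EndCurrentEdit();

            // ...then push any pending changes to the database.
            if (_names.GetChanges() != null)
                _da.Update(_names);
        }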

    Read the article

  • Prototype or jQuery for DOM manipulation (client-side dynamic content)

    - by luiggitama
    I need to know which of these two JavaScript frameworks is better for client-side dynamic content modification for known DOM elements (by id), in terms of performance, memory usage, etc.: Prototype's $('id').update(content) jQuery's jQuery('#id').html(content) BTW, both libraries coexist with no conflict in my app, because I'm using RichFaces for JSF development; that's why I can use "jQuery" instead of "$". I have at least 20 updatable areas in my page, and for each one I prepare content (tables, option lists, etc.), based on some user-defined client-side criteria filtering or some AJAX event, etc., like this: var html = []; var idx = 0; ... html[idx++] = '<tr><td class="cell"><span class="link" title="View" onclick="myFunction('; html[idx++] = param; html[idx++] = ')"></span>'; html[idx++] = someText; html[idx++] = '</td></tr>'; ... So here comes the question, which is better to use: // Prototype's $('myId').update(html.join('')); // or jQuery's jQuery('#myId').html(html.join('')); Other needed functions are hide() and show(), which are present in both frameworks. Which is better? Also I need to enable/disable form controls, and to read/set their values. Note that I know my updatable areas' ids (I don't need CSS selectors at this point). And I should mention that I'm saving these queried objects in some data structure for later use, so they are requested just once when the page is rendered, like this: MyData = {div1:jQuery('#id1'), div2:$('id2'), ...}; ... MyData.div1.html('content 1'); MyData.div2.update('content 2'); So, which is the best practice?

    Read the article

  • Updating Cells in a DataTable

    - by Maxim Z.
    I'm writing a small app to do a little processing on some cells in a CSV file I have. I've figured out how to read and write CSV files with a library I found online, but I'm having trouble: the library parses CSV files into a DataTable, but, when I try to change a cell of the table, it isn't saving the change in the table! Below is the code in question. I've separated the process into multiple variables and renamed some of the things to make it easier to debug for this question. Code Inside the loop: string debug1 = readIn.Rows[i].ItemArray[numColumnToCopyTo].ToString(); string debug2 = readIn.Rows[i].ItemArray[numColumnToCopyTo].ToString().Trim(); string debug3 = readIn.Rows[i].ItemArray[numColumnToCopyFrom].ToString().Trim(); string towrite = debug2 + ", " + debug3; readIn.Rows[i].ItemArray[numColumnToCopyTo] = (object)towrite; After the loop: readIn.AcceptChanges(); When I debug my code, I see that towrite is being formed correctly and everything's OK, except that the row isn't updated: why isn't it working? I have a feeling that I'm making a simple mistake here: the last time I worked with DataTables (quite a long time ago), I had similar problems. If you're wondering why I'm adding another comma in towrite, it's because I'm combining a street address field with a zip code field - I hope that's not messing anything up. My code is kind of messy, as I'm only trying to edit one file to make a small fix, so sorry.
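
    One likely culprit (a guess based on the snippet above): DataRow.ItemArray returns a copy of the row's values, so assigning into that array never touches the row itself. Writing through the row's indexer does stick, along the lines of:

        DataRow row = readIn.Rows[i];

        string debug2 = row[numColumnToCopyTo].ToString().Trim();
        string debug3 = row[numColumnToCopyFrom].ToString().Trim();

        // Assigning via the row indexer (not via ItemArray) actually updates the DataTable.
        row[numColumnToCopyTo] = debug2 + ", " + debug3;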

    Read the article

  • jQuery Reference First Column in HTML Table

    - by Vic
    I have a table where all of the cells are INPUT tags. I have a function which looks for the first input cell and replaces it with its value. So this: <tr id="row_0" class="datarow"> <td><input class="tabcell" value="Injuries"></td> <td><input class="tabcell" value="01"></td> becomes this: <tr id="row_0" class="datarow"> <td>Injuries</td> <td><input class="tabcell" value="01"></td> Here is the first part of the function: function setRowLabels() { var row = []; $('.dataRow').each(function(i) { row.push($('td input:eq(0)', this).val() + ' - '); $('td input:eq(0)', this).replaceWith($('td input:eq(0)', this).val()); $('td input:gt(0)', this).each(function(e) { etcetera But when the page reloads, the first column is not an input type, so it changes the second column to text too! Can I tell it to only change the first column, no matter what the type is? I tried $('td:eq(0)', this).replaceWith($('td:eq(0)', this).val()); but it does not work. Any suggestions appreciated! Thanks
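
    One possible way (a sketch of just the first-column handling, not tested against the real page) is to target the first td and only replace it when it still contains an input, so a row that has already been converted is left alone; the selector below matches the class="datarow" markup shown above:

        function setRowLabels() {
            $('.datarow').each(function () {
                var firstCell = $('td:first', this);
                var input = firstCell.find('input');
                if (input.length) {            // only convert cells that still hold an <input>
                    firstCell.text(input.val());
                }
                // the remaining columns are never touched, whatever their type
            });
        }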

    Read the article

  • [DOM/JS] display table in IE

    - by budzor
    I have code like this: var xPola = 10, //how many cols yPola = 10, //how many rows bokPola = 30, //size of a cell body = document.getElementsByTagName('body')[0]; var tablica = document.createElement('table'); body.appendChild(tablica); for( var y = 0; y < yPola; y++ ) { var rzad = document.createElement('tr'); tablica.appendChild(rzad); for( var x = 0; x < xPola; x++ ) { var pole = document.createElement('td'); pole.setAttribute('width', bokPola); pole.setAttribute('height', bokPola); rzad.appendChild(pole); } }; It works fine in FF, Chrome & Opera (it displays a 10x10 table with 30px-wide and -high cells). In IE nothing happens. I checked in Firebug Lite and the table is in the HTML section, inside the BODY tag, but I see nothing. What's wrong?
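
    The usual explanation (a sketch of the fix, reusing the variables from the snippet above) is that older IE only renders rows that are appended to a tbody element, not directly to the table:

        var tablica = document.createElement('table');
        var tbody = document.createElement('tbody');   // IE needs the rows inside a tbody
        tablica.appendChild(tbody);
        body.appendChild(tablica);

        for (var y = 0; y < yPola; y++) {
            var rzad = document.createElement('tr');
            tbody.appendChild(rzad);                   // append to the tbody, not to the table
            for (var x = 0; x < xPola; x++) {
                var pole = document.createElement('td');
                pole.setAttribute('width', bokPola);
                pole.setAttribute('height', bokPola);
                rzad.appendChild(pole);
            }
        }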

    Read the article

  • Text being printed twice in uitableview

    - by user337174
    I have created a UITableView that calculates the distance to a location in the table. I used this code in cellForRowAtIndexPath: NSString *lat1 = [object valueForKey:@"Lat"]; NSLog(@"Current Spot Latitude:%@",lat1); float lat2 = [lat1 floatValue]; NSLog(@"Current Spot Latitude Float:%g", lat2); NSString *long1 = [object valueForKey:@"Lon"]; NSLog(@"Current Spot Longitude:%@",long1); float long2 = [long1 floatValue]; NSLog(@"Current Spot Longitude Float:%g", long2); //Getting current location from NSDictionary CoreDataTestAppDelegate *appDelegate = (CoreDataTestAppDelegate *) [[UIApplication sharedApplication] delegate]; NSString *locLat = [NSString stringWithFormat:appDelegate.latitude]; float locLat2 = [locLat floatValue]; NSLog(@"Lat: %g",locLat2); NSString *locLong = [NSString stringWithFormat:appDelegate.longitude]; float locLong2 = [locLong floatValue]; NSLog(@"Long: %g",locLong2); //Location of selected spot CLLocation *loc1 = [[CLLocation alloc] initWithLatitude:lat2 longitude:long2]; //Current Location CLLocation *loc2 = [[CLLocation alloc] initWithLatitude:locLat2 longitude:locLong2]; double distance = [loc1 getDistanceFrom: loc2] / 1600; NSMutableString* converted = [NSMutableString stringWithFormat:@"%.1f", distance]; [converted appendString: @" m"]; It works fine apart from a problem I have just discovered where the distance text is duplicated over the top of the detail text label when you scroll beyond the height of the page. Here's a screenshot of what I mean. Any ideas why it's doing this?
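
    That symptom usually means a new label is being added as a subview every time cellForRowAtIndexPath: runs; reused cells keep the old label, so the strings stack up once you scroll. A sketch (assuming the distance code above runs in the same method): create the cell once and only overwrite text on reuse, never add another label:

        static NSString *CellId = @"SpotCell";
        UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellId];
        if (cell == nil) {
            cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleSubtitle
                                           reuseIdentifier:CellId] autorelease];
        }
        // Overwrite the existing label instead of layering a fresh one on top of it.
        cell.detailTextLabel.text = converted;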

    Read the article

  • How to use CellRenderer for GregorianCalendar?

    - by HansDampf
    So I have been trying to use the example from the tutorial and change it so it fits my program. The getColumnValue method returns the object that holds the information that is supposed to be displayed. Is this the way to go, or should it rather return the actual String to be displayed? I guess not, because that way I would mix the presentation with the data, which I was trying to avoid. public class IssueTableFormat implements TableFormat<Appointment> { public int getColumnCount() { return 6; } public String getColumnName(int column) { if(column == 0) return "Datum"; else if(column == 1) return "Uhrzeit"; else if(column == 2) return "Nummer"; else if(column == 3) return "Name"; else if(column == 4) return "letzte Aktion"; else if(column == 5) return "Kommentar"; throw new IllegalStateException(); } public Object getColumnValue(Appointment issue, int column) { if(column == 0) return issue.getDate(); else if(column == 1) return issue.getDate(); else if(column == 2) return issue.getSample(); else if(column == 3) return issue.getSample(); else if(column == 4) return issue.getHistory(); else if(column == 5) return issue.getComment(); throw new IllegalStateException(); } } Columns 0 and 1 contain a GregorianCalendar object, but I want column 0 to show the date and column 1 to show the time. So I know using CellRenderers can help here. This is what I tried. public class DateRenderer extends DefaultTableCellRenderer { public DateRenderer() { super(); } public void setValue(Object value) { GregorianCalendar g =(GregorianCalendar) value; value=g.get(GregorianCalendar.HOUR); } } But the cell doesn't show anything; what is wrong here?
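
    One likely reason (a sketch, not tested against the table in question): DefaultTableCellRenderer shows whatever ends up in setText (or whatever is passed to super.setValue); reassigning the local value parameter changes nothing. Formatting the calendar and calling setText is one way to do it:

        import java.text.DateFormat;
        import java.text.SimpleDateFormat;
        import java.util.GregorianCalendar;
        import javax.swing.table.DefaultTableCellRenderer;

        public class DateRenderer extends DefaultTableCellRenderer {

            // "HH:mm" for the time column; a second renderer with "dd.MM.yyyy" could serve the date column
            private final DateFormat format = new SimpleDateFormat("HH:mm");

            @Override
            public void setValue(Object value) {
                if (value instanceof GregorianCalendar) {
                    setText(format.format(((GregorianCalendar) value).getTime()));
                } else {
                    super.setValue(value);   // fall back to the default behaviour
                }
            }
        }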

    Read the article

  • Java order jlist by status

    - by Takami
    I have a small problem: I don't know how to sort my JList by status, which is retrieved from the database. I want to sort by "online" and "offline"; I mean online computers go first and then offline computers. I have this code now; it just builds the icon+text for the JList. Can you tell me how I can filter/sort by status? public void acx_pc(String query) { try { Statement st = con.createStatement(); ResultSet rs = st.executeQuery(query); String comb; Map<Object, Icon> icons = new HashMap<>(); ArrayList<String> pc_list = new ArrayList<>(); int i = 0; while (rs.next()) { //Getting info from DB String pc_name = rs.getString("nombre_pc"); String pc_ip = rs.getString("IP"); String status = rs.getString("estado"); //Setting text for the jList comb = pc_name + " - " + pc_ip; //Comparing Status switch (status) { case "online": //This is just for rendering an image+text to Jlist icons.put(comb, new ImageIcon(getClass().getResource("/Imagenes/com_on_30x30.png"))); break; case "offline": //This is just for rendering an image to Jlist icons.put(comb, new ImageIcon(getClass().getResource("/Imagenes/com_off_30x30.png"))); break; } //Adding info to ArrayList pc_list.add(i, comb); i++; } con.close(); // Setting the list/text on Jlist Home.computer_jlist.setListData(pc_list.toArray()); // create a cell renderer to add the appropriate icon Home.computer_jlist.setCellRenderer(new pc_cell_render(icons)); } catch (Exception e) { System.out.println("Error aqui: " + e); } } I want to do something like this (it should order automatically): http://imageshack.us/a/img27/9018/2mx1.png and not: http://imageshack.us/a/img407/346/e9r.png
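
    One possible approach (a sketch reusing the column names above, and assuming the usual java.util imports): collect the rows together with their status first, sort that list so "online" entries come before "offline", and only then hand the labels to the JList:

        // Collect (status, label) pairs instead of adding entries to the list immediately.
        List<String[]> rows = new ArrayList<>();
        while (rs.next()) {
            String comb = rs.getString("nombre_pc") + " - " + rs.getString("IP");
            rows.add(new String[] { rs.getString("estado"), comb });
        }

        // "online" first, then "offline"; each group alphabetically.
        Collections.sort(rows, new Comparator<String[]>() {
            public int compare(String[] a, String[] b) {
                boolean aOn = "online".equals(a[0]);
                boolean bOn = "online".equals(b[0]);
                if (aOn != bOn) return aOn ? -1 : 1;
                return a[1].compareToIgnoreCase(b[1]);
            }
        });

        // The sorted labels (and their matching icons) can then be pushed to the JList as before.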

    Read the article

  • Gmail Sync on Android phone

    - by sunocky
    Android has the Gmail push feature, which means a new message arrives in the mailbox without checking or refreshing the mailbox. As I understand it, the sync process goes like this: 1) The user turns on sync 2) There will be an alert msg and the sync flag in the Gmail DB of this device will be True 3) When a new email reaches the Gmail Server, it will check the device's sync value; if it's True, it sends the email. OK, here is what I don't quite understand about how exactly it works. For a WiFi or cell signal connection, does the phone keep a TCP socket open listening to the Gmail Server, or when a new email arrives does the Server send an SMS alert to the phone, which then opens the data channel to fetch the email? Do the two kinds of connection use different approaches? And the second question is: which method has priority? Say you are in the middle of receiving data (emails), and suddenly the phone connects to a wireless network; will the data socket be closed and then reopened over WiFi? What's the behavior for the case when the carrier's data channel and WiFi flip? I have also downloaded the source code; does anyone know which part I should be looking into in order to solve my questions? I found a folder called "email" inside the folder "package"; should I be looking at its code? I know I asked quite a few questions here. I'd appreciate it if you know the answer to any of them, thanks very much!

    Read the article

  • Speed up csv export when using php from mysql database query

    - by John
    OK, so I've got a web system (built on CodeIgniter and running on MySQL) that allows people to query a database of postal address data by making selections in a series of forms until they arrive at the selection they want, pretty standard stuff. They can then buy that information and download it via that system. The queries run very fast, but when it comes to applying that query to the database and exporting it to CSV, once the datasets get to around the 30,000 record mark (each row has around 40 columns, of which about 20 are all populated with on average 20 chars of data per cell) it can take 5 or so minutes to export to CSV. So, my question is: what is the main cause for the slowness? Is it that the resultset of data from the query is so large that it is running into memory issues? Therefore should I allow much more memory to the process? Or is there a much more efficient way of exporting to CSV from a MySQL query that I'm not using? Should I save the contents of the query to a temp table and simply export the temp table to CSV? Or am I going about this all wrong? Also, is the fact that I'm using CodeIgniter's Active Record for this prohibitive due to the way that it stores the resultset? Any advice is welcome! Thank you for reading!
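
    A sketch of the streaming approach (plain PHP/mysqli rather than Active Record, with placeholder credentials, and assuming $sql holds the query the user's selections produce): write each row straight to the output with fputcsv and use an unbuffered result so the 30,000 rows are never all held in memory at once. SELECT ... INTO OUTFILE is faster still when the MySQL server is allowed to write files.

        $db  = new mysqli('localhost', 'user', 'pass', 'postal_data');   // placeholder credentials
        $out = fopen('php://output', 'w');

        header('Content-Type: text/csv');
        header('Content-Disposition: attachment; filename="export.csv"');

        // MYSQLI_USE_RESULT streams rows from the server instead of buffering the whole result set.
        $result = $db->query($sql, MYSQLI_USE_RESULT);
        while ($row = $result->fetch_row()) {
            fputcsv($out, $row);   // one row at a time; nothing accumulates in PHP memory
        }
        $result->close();
        fclose($out);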

    Read the article

  • Battery life if using GPS and background program

    - by StealthRT
    I was wondering if anyone has created an app that starts in the background and utilizes the GPS to gather the current Lat and Long every minute or so? If you have, would you please provide your battery times? As in, how long does your phone last until it's all out of juice from just running that background app alongside standard cell phone programs. I'm trying to see if it would be worth the time to create an app for myself, but if I work for 8 hours and don't have a way of charging my phone during that time, then I don't want to be going home and have it shut down on me, since the app I would create works at my house. I need the app to work since it will see when I am in range of my home (from the GPS) and then send commands to my server at the house from my phone. So that's why it would need to be able to stay in the background doing a check every 1 minute or so. Or only turn on the GPS (is this doable with iOS and Android?) whenever it's after 5pm each day so that it will minimize the load on the battery? Any help or suggestions would be great! Thanks!

    Read the article

  • Pass two string to Detail View from Search Array

    - by waqqas
    I am using a table view with a search function that is populated by a plist. I want to pass 2 strings through to a DetailView, but I can only get the string used to populate the cell to pass over. I have tried many variations but can't figure it out. Any help would be greatly appreciated. DetailView *nextLevel = [DetailView alloc]; NSString *valueString; NSString *selectedTitle; if (arraySearch == nil) { NSString *key = [arrayDataKeys objectAtIndex:indexPath.section]; NSArray *nameSection = [dictAllData objectForKey:key]; NSDictionary *dict = [nameSection objectAtIndex:indexPath.row]; valueString = [dict objectForKey:@"name"]; selectedTitle = [dict objectForKey:@"word"]; } else { valueString = [arraySearch objectAtIndex:indexPath.row]; selectedTitle = [arraySearch objectAtIndex:indexPath.row]; //Obviously this is wrong but this is the line I need help with } nextLevel.title = valueString; nextLevel.itemname = valueString; nextLevel.itemWord = selectedTitle;

    Read the article

  • Query table value aliasing in Oracle SQL

    - by Strata
    I have a homework assignment in SQL for Oracle 10g where I have to apply union to two different select statements, to return two columns. I need the values of each cell under vendor_state to indicate CA and every other value in another state to return "Outside CA", to indicate they're elsewhere. I applied the union and produced the two columns and the listings for CA, but many other state IDs were listed and I couldn't find an explanation for how to change the actual values in the query itself. Eventually, I stumbled on an answer, but I can't explain why this works. The code is as follows: SELECT vendor_name, vendor_state FROM vendors WHERE vendor_state IN 'CA' UNION SELECT vendor_name, 'Outside CA' AS vendor_state FROM vendors WHERE vendor_state NOT IN 'CA' ORDER BY vendor_name This gives me the exact answer I need, but I don't know why the aliasing in the second select statement can behave this way....no explanation is given in my textbook and nothing I've read indicates that column aliasing can be done like this. But, by switching the column name and the alias value, I have replaced the value being returned rather than the column name itself...I'm not complaining about the result, but it would help if I knew how I did it.
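
    A note on why it works, plus an alternative: 'Outside CA' AS vendor_state isn't renaming an existing column; it selects the literal string 'Outside CA' as the value of a new result column that happens to be named vendor_state, so every row in that branch of the UNION shows the literal. The same result can be had without a UNION by using a CASE expression (a sketch over the same table and columns):

        SELECT vendor_name,
               CASE vendor_state
                   WHEN 'CA' THEN vendor_state
                   ELSE 'Outside CA'
               END AS vendor_state
        FROM vendors
        ORDER BY vendor_name;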

    Read the article

  • How do you get and set a class property across multiple functions in Objective-C?

    - by editor
    Following up on this question about sharing objects between classes, I now need to figure out how to share the objects across various functions in a class. First, the setup: In my App Delegate I load menu information from JSON into a NSMutableDictionary and message that through to a view controller using a function called initWithData. I need to use this dictionary to populate a new Table View, which has methods like numberOfRowsInSection and cellForRowAtIndexPath. I'd like to use the dictionary count to return numberOfRowsInSection and info in the dictionary to populate each cell. Unfortunately, my code never gets beyond the init stage and the dictionary is empty so numberOfRowsInSection always returns zero. I thought I could create a class property, synthesize it and then set it. But it doesn't seem to want to retain the property's value. What am I doing wrong here? In the header .h: @interface FirstViewController:UIViewController <UITableViewDataSource, UITableViewDelegate, UITabBarControllerDelegate> { NSMutableDictionary *sectorDictionary; NSInteger sectorCount; } @property (nonatomic, retain) NSMutableDictionary *sectorDictionary; - (id)initWithData:(NSMutableDictionary*)data; @end in the implementation .m: - (id) testFunction:(NSMutableDictionary*)dictionary { NSLog(@"Count #1: %d", [dictionary count]); return nil; } - (id)initWithData:(NSMutableDictionary *)data { if (!(self=[super init])) { return nil; } [self testFunction:data]; // this is where I'd like to set a retained property self.sectorDictionary = data; return nil; } - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section { NSLog(@"Count #2: %d", [self.sectorDictionary count]); return [self.sectorDictionary count]; } Output from NSLog: 2010-05-04 23:00:06.255 JSONApp[15890:207] Count #1: 9 2010-05-04 23:00:06.259 JSONApp[15890:207] Count #2: 0
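
    One thing that stands out in the snippet above (offered as a guess, not a confirmed diagnosis): initWithData: returns nil at the end instead of self, so the caller may never hold the instance whose property was set. The usual shape is:

        - (id)initWithData:(NSMutableDictionary *)data
        {
            if ((self = [super init])) {
                self.sectorDictionary = data;   // retained through the declared property
            }
            return self;                        // returning nil here throws the initialized instance away
        }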

    Read the article

  • Modifying html of dom element that was created after page loaded

    - by Ben321
    I have two separate AJAX calls: one that gets a list of items from a txt file and creates an HTML table out of them, and one that talks to a database to find how much each item costs and then lists this in the corresponding table cell for each item (I know this may sound like a strange approach, but it's a good option in our case...). The issue is that the price is not getting written to the table since the table is created (or to be precise, the rows of the table are created) after the page loads. I'm not sure how to fix this. $(document).ready(function() { makeItemTable(); listPrices(); ... }); function makeItemTable() { $.ajax({ url: 'products.php', type: 'GET' }) .done(function(response) { $('.price-table > tbody').html(response); }) } function listPrices() { $.ajax({ url: 'prices.php', type: 'GET' }) .done(function(response) { priceData = $.parseJSON(response); $('.price-table tr').each(function() { var item = $(this).find('td:nth-child(1)').text(); if (priceData[item]) { var price = priceData[item]; $(this).find('td:nth-child(2)').text(price); } }); }); }
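
    One straightforward fix (a sketch using the same two functions): start listPrices only after the table rows exist, i.e. from inside the first call's done handler (or by chaining the returned promises):

        $(document).ready(function () {
            makeItemTable();              // listPrices() now runs from inside its done handler
        });

        function makeItemTable() {
            $.ajax({ url: 'products.php', type: 'GET' })
                .done(function (response) {
                    $('.price-table > tbody').html(response);
                    listPrices();         // the rows exist now, so the prices have cells to land in
                });
        }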

    Read the article

  • convert a repeating code to method

    - by Mr_Green
    In my project, I am adding ComboBox, TextBox and Link label columns to my DataGridView dgvMain. I have created different methods for the different cell templates as shown below (the code below is working): void gridLnklbl(string headerName) { DataGridViewLinkColumn col = new DataGridViewLinkColumn(); col.HeaderText = headerName; col.Name = "col" + headerName; // same code repeating in all the methods dgvMain.Columns.Add(col); } void gridCmb(string headerName) { DataGridViewComboBoxColumn col = new DataGridViewComboBoxColumn(); col.HeaderText = headerName; col.Name = "col" + headerName; dgvMain.Columns.Add(col); } void gridText(string headerName) { DataGridViewTextBoxColumn col = new DataGridViewTextBoxColumn(); col.HeaderText = headerName; col.Name = "col" + headerName; dgvMain.Columns.Add(col); } As you can see, except for the declaration of the objects, the code in every method repeats. Just curious to know: can the repeating code be converted to a single method? I don't know how to do that. It's not just about 3 lines of code; I have written many more lines which could be made common to those methods.
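
    One possible consolidation (a sketch, assuming the same dgvMain grid; the header names in the usage lines are made up) is a generic helper constrained to DataGridViewColumn with a parameterless constructor, so the per-type object creation is handled by the type parameter:

        private void AddColumn<TColumn>(string headerName) where TColumn : DataGridViewColumn, new()
        {
            TColumn col = new TColumn();
            col.HeaderText = headerName;
            col.Name = "col" + headerName;
            // ...any further shared setup is written once here...
            dgvMain.Columns.Add(col);
        }

        // Usage:
        AddColumn<DataGridViewLinkColumn>("Invoice");
        AddColumn<DataGridViewComboBoxColumn>("Status");
        AddColumn<DataGridViewTextBoxColumn>("Notes");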

    Read the article

  • Google sheet dynamic WHERE clause for query() statement

    - by jason_cant_code
    I have a data table like so: a 1 a 2 b 3 b 4 c 5 c 6 c 7 I want to pull items out of this table by dynamically telling it which letters to pull. My current formula is: =query(A1:B7,"select * where A ='" & D1 & "'"), where D1 is the cell I modify to change the query. I want to be able to input a, or a,b, or a,b,c into D1 and have the query work. I know it would involve or statements in the query, but I haven't figured out how to make the formula dynamic. I am looking for a general solution for this pattern: a -- A = 'a' a,b -- A = 'a' or A = 'b' a,b,c -- A = 'a' or A = 'b' or A='c' Or any other solution that solves the problem. Edit: So far I have =ArrayFormula(CONCATENATE("A='"&split(D3,",")&"' or ")) which gives A='a' or A='b' or A='c' or for a,b,c; I can't figure out how to remove the last or.
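
    One approach that sidesteps the trailing "or" entirely (a sketch, assuming the data in A1:B7 and the comma list in D1): turn the list into a regular expression and use the query language's matches operator:

        =query(A1:B7, "select * where A matches '" & SUBSTITUTE(D1, ",", "|") & "'")

    With a,b in D1 this becomes where A matches 'a|b', i.e. rows whose letter is a or b.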

    Read the article

  • NSUTF8StringEncoding gives me this %0A%20%20%20%20%22http://example.com/example.jpg%22%0A

    - by user1530141
    So I'm trying to load pictures from Twitter. If I just use the URL from the JSON results without encoding in dataWithContentsOfURL, I get a nil URL argument. If I encode it, I get %0A%20%20%20%20%22http://example.com/example.jpg%22%0A. I know I can use rangeOfString: or stringByReplacingOccurrencesOfString:, but can I be sure that it will always be the same? Is there another way to handle this, and why is this happening to my Twitter response and not my Instagram response? I have also tried stringByTrimmingCharactersInSet:[NSCharacterSet whitespaceAndNewlineCharacterSet] and it does nothing. This is the URL directly from the JSON... 2013-11-08 22:09:31:812 JaVu[1839:1547] -[SingleEventTableViewController tableView:cellForRowAtIndexPath:] [Line 406] ( "http://pbs.twimg.com/media/BYWHiq1IYAAwSCR.jpg" ) Here is my code: if ([post valueForKeyPath:@"entities.media.media_url"]) { NSString *twitterString = [[NSString stringWithFormat:@"%@", [post valueForKeyPath:@"entities.media.media_url"]]stringByTrimmingCharactersInSet:[NSCharacterSet whitespaceAndNewlineCharacterSet]]; twitterString = [twitterString stringByAddingPercentEscapesUsingEncoding:NSUTF8StringEncoding]; NSLog(@"%@", twitterString); if (twitterString != nil){ NSURL *twitterPhotoUrl = [NSURL URLWithString:twitterString]; NSLog(@"%@", twitterPhotoUrl); dispatch_queue_t queue = kBgQueue; dispatch_async(queue, ^{ NSError *error; NSData* data = [NSData dataWithContentsOfURL:twitterPhotoUrl options:NSDataReadingUncached error:&error]; UIImage *image = [UIImage imageWithData:data]; dispatch_sync(dispatch_get_main_queue(), ^{ [streamPhotoArray replaceObjectAtIndex:indexPath.row withObject:image]; cell.instagramPhoto.image = image; }); }); } }
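
    A possible explanation (guessing from the log output above): entities.media is an array, so valueForKeyPath: returns an NSArray of URL strings, and stringWithFormat: on the whole array produces its multi-line description, which is where the %0A/%20 noise and the quotes come from. Taking the first element instead of formatting the array is one way around it:

        id mediaUrls = [post valueForKeyPath:@"entities.media.media_url"];

        NSString *twitterString = nil;
        if ([mediaUrls isKindOfClass:[NSArray class]] && [mediaUrls count] > 0) {
            twitterString = [mediaUrls objectAtIndex:0];   // the array's first URL string
        } else if ([mediaUrls isKindOfClass:[NSString class]]) {
            twitterString = mediaUrls;                     // already a single string
        }

        NSURL *twitterPhotoUrl = twitterString ? [NSURL URLWithString:twitterString] : nil;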

    Read the article

  • Configuring UCM cache to check for external Content Server changes

    - by Martin Deh
    Recently, I was involved in a customer scenario where they were modifying the Content Server's contributor data files directly through Content Server.  This operation of course is completely supported.  However, since the contributor data file was modified through the "backdoor", a running WebCenter Spaces page, which also used the same data file, would not get the updates immediately.  This was due to two reasons.  The first reason is that the Spaces page was using Content Presenter to display the contents of the data file. The second reason is that the Spaces application was using the "cached" version of the data file.  Fortunately, there is a way to configure cache so backdoor changes can be picked up more quickly and automatically. First a brief overview of Content Presenter.  The Content Presenter task flow enables WebCenter Spaces users with Page-Edit permissions to precisely customize the selection and presentation of content in a WebCenter Spaces application.  With Content Presenter, you can select a single item of content, contents under a folder, a list of items, or query for content, and then select a Content Presenter based template to render the content on a page in a Spaces application.  In addition to displaying the folders and the files in a Content Server, Content Presenter integrates with Oracle Site Studio to allow you to create, access, edit, and display Site Studio contributor data files (Content Server Document) in either a Site Studio region template or in a custom Content Presenter display template.  More information about creating Content Presenter Display Template can be found in the OFM Developers Guide for WebCenter Portal. The easiest way to configure the cache is to modify the WebCenter Spaces Content Server service connection setting through Enterprise Manager.  From here, under the Cache Details, there is a section to set the Cache Invalidation Interval.  Basically, this enables the cache to be monitored by the cache "sweeper" utility.  The cache sweeper queries for changes in the Content Server, and then "marks" the object in cache as "dirty".  This causes the application in turn to get a new copy of the document from the Content Server that replaces the cached version.  By default the initial value for the Cache Invalidation Interval is set to 0 (minutes).  This basically means that the sweeper is OFF.  To turn the sweeper ON, just set a value (in minutes).  The mininal value that can be set is 2 (minutes): Just a note.  In some instances, once the value of the Cache Invalidation Interval has been set (and saved) in the Enterprise Manager UI, it becomes "sticky" and the interval value cannot be set back to 0.  The good news is that this value can also be updated throught a WLST command.   The WLST command to run is as follows: setJCRContentServerConnection(appName, name, [socketType, url, serverHost, serverPort, keystoreLocation, keystorePassword, privateKeyAlias, privateKeyPassword, webContextRoot, clientSecurityPolicy, cacheInvalidationInterval, binaryCacheMaxEntrySize, adminUsername, adminPassword, extAppId, timeout, isPrimary, server, applicationVersion]) One way to get the required information for executing the command is to use the listJCRContentServerConnections('webcenter',verbose=true) command.  
For example, this is the sample output from the execution: ------------------ UCM ------------------ Connection Name: UCM Connection Type: JCR External Appliction ID: Timeout: (not set) CIS Socket Type: socket CIS Server Hostname: webcenter.oracle.local CIS Server Port: 4444 CIS Keystore Location: CIS Private Key Alias: CIS Web URL: Web Server Context Root: /cs Client Security Policy: Admin User Name: sysadmin Cache Invalidation Interval: 2 Binary Cache Maximum Entry Size: 1024 The Documents primary connection is "UCM" From this information, the completed  setJCRContentServerConnection would be: setJCRContentServerConnection(appName='webcenter',name='UCM', socketType='socket', serverHost='webcenter.oracle.local', serverPort='4444', webContextRoot='/cs', cacheInvalidationInterval='0', binaryCacheMaxEntrySize='1024',adminUsername='sysadmin',isPrimary=1) Note: The Spaces managed server must be restarted for the change to take effect. More information about using WLST for WebCenter can be found here. Once the sweeper is turned ON, only cache objects that have been changed will be invalidated.  To test this out, I will go through a simple scenario.  The first thing to do is configure the Content Server so it can monitor and report on events.  Log into the Content Server console application, and under the Administration menu item, select System Audit Information.  Note: If your console is using the left menu display option, the Administration link will be located there. Under the Tracing Sections Information, add in only "system" and "requestaudit" in the Active Sections.  Check Full Verbose Tracing, check Save, then click the Update button.  Once this is done, select the View Server Output menu option.  This will change the browser view to display the log.  This is all that is needed to configure the Content Server. For example, the following is the View Server Output with the cache invalidation interval set to 2(minutes) Note the time stamp: requestaudit/6 08.30 09:52:26.001  IdcServer-68    GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.016933999955654144(secs) requestaudit/6 08.30 09:52:26.010  IdcServer-69    GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.006134999915957451(secs) requestaudit/6 08.30 09:52:26.014  IdcServer-70    GET_DOCUMENT_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.004271999932825565(secs) ... other trace info ... requestaudit/6 08.30 09:54:26.002  IdcServer-71    GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.020323999226093292(secs) requestaudit/6 08.30 09:54:26.011  IdcServer-72    GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.017928000539541245(secs) requestaudit/6 08.30 09:54:26.017  IdcServer-73    GET_DOCUMENT_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.010185999795794487(secs) Now that the tracing logs are reporting correctly, the next step is set up the Spaces app to test the sweeper. I will use 2 different pages that will use Content Presenter task flows.  Each task flow will use a different custom Content Presenter display template, and will be assign 2 different contributor data files (document that will be in the cache).  The pages at run time appear as follows: Initially, when the Space pages containing the content is loaded in the browser for the first time, you can see the tracing information in the Content Server output viewer. 
requestaudit/6 08.30 11:51:12.030 IdcServer-129 CLEAR_SERVER_OUTPUT [dUser=weblogic] 0.029171999543905258(secs) requestaudit/6 08.30 11:51:12.101 IdcServer-130 GET_SERVER_OUTPUT [dUser=weblogic] 0.025721000507473946(secs) requestaudit/6 08.30 11:51:26.592 IdcServer-131 VCR_GET_DOCUMENT_BY_NAME [dID=919][dDocName=DF_UCMCACHETESTER][dDocTitle=DF_UCMCacheTester][dUser=weblogic][RevisionSelectionMethod=LatestReleased][IsJava=1] 0.21525299549102783(secs) requestaudit/6 08.30 11:51:27.117 IdcServer-132 VCR_GET_CONTENT_TYPES [dUser=sysadmin][IsJava=1] 0.5059549808502197(secs) requestaudit/6 08.30 11:51:27.146 IdcServer-133 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.03360399976372719(secs) requestaudit/6 08.30 11:51:27.169 IdcServer-134 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.008806000463664532(secs) requestaudit/6 08.30 11:51:27.204 IdcServer-135 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.013265999965369701(secs) requestaudit/6 08.30 11:51:27.384 IdcServer-136 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.18119299411773682(secs) requestaudit/6 08.30 11:51:27.533 IdcServer-137 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.1519480049610138(secs) requestaudit/6 08.30 11:51:27.634 IdcServer-138 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.10827399790287018(secs) requestaudit/6 08.30 11:51:27.687 IdcServer-139 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.059702999889850616(secs) requestaudit/6 08.30 11:51:28.271 IdcServer-140 GET_USER_PERMISSIONS [dUser=weblogic][IsJava=1] 0.006703000050038099(secs) requestaudit/6 08.30 11:51:28.285 IdcServer-141 GET_ENVIRONMENT [dUser=sysadmin][IsJava=1] 0.010893999598920345(secs) requestaudit/6 08.30 11:51:30.433 IdcServer-142 GET_SERVER_OUTPUT [dUser=weblogic] 0.017318999394774437(secs) requestaudit/6 08.30 11:51:41.837 IdcServer-143 VCR_GET_DOCUMENT_BY_NAME [dID=508][dDocName=113_ES][dDocTitle=Landing Home][dUser=weblogic][RevisionSelectionMethod=LatestReleased][IsJava=1] 0.15937699377536774(secs) requestaudit/6 08.30 11:51:42.781 IdcServer-144 GET_FILE [dID=326][dDocName=WEBCENTERORACL000315][dDocTitle=Duke][dUser=anonymous][RevisionSelectionMethod=LatestReleased][dSecurityGroup=Public][xCollectionID=0] 0.16288499534130096(secs) The highlighted sections show where the 2 data files DF_UCMCACHETESTER (P1 page) and 113_ES (P2 page) were called by the (Spaces) VCR connection to the Content Server. The most important line to notice is the VCR_GET_DOCUMENT_BY_NAME invocation.  On subsequent refreshes of these 2 pages, you will notice (after you refresh the Content Server's View Server Output) that there are no further traces of the same VCR_GET_DOCUMENT_BY_NAME invocations.  This is because the pages are getting the documents from the cache. The next step is to go through the "backdoor" and change one of the documents through the Content Server console.  This operation can be done by first locating the data file document, and from the Content Information page, select Edit Data File menu option.   This invokes the Site Studio Contributor, where the modifications can be made. Refreshing the Content Server View Server Output, the tracing displays the operations perform on the document.  
requestaudit/6 08.30 11:56:59.972 IdcServer-255 SS_CHECKOUT_BY_NAME [dID=922][dDocName=DF_UCMCACHETESTER][dUser=weblogic][dSecurityGroup=Public] 0.05558200180530548(secs) requestaudit/6 08.30 11:57:00.065 IdcServer-256 SS_GET_CONTRIBUTOR_CONFIG [dID=922][dDocName=DF_UCMCACHETESTER][dDocTitle=DF_UCMCacheTester][dUser=weblogic][dSecurityGroup=Public][xCollectionID=0] 0.08632399886846542(secs) requestaudit/6 08.30 11:57:00.470 IdcServer-259 DOC_INFO_BY_NAME [dID=922][dDocName=DF_UCMCACHETESTER][dDocTitle=DF_UCMCacheTester][dUser=weblogic][dSecurityGroup=Public][xCollectionID=0] 0.02268899977207184(secs) requestaudit/6 08.30 11:57:10.177 IdcServer-264 GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.007652000058442354(secs) requestaudit/6 08.30 11:57:10.181 IdcServer-263 GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.01868399977684021(secs) requestaudit/6 08.30 11:57:10.187 IdcServer-265 GET_DOCUMENT_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.009367000311613083(secs) (internal)/6 08.30 11:57:26.118 IdcServer-266 File to be removed: /oracle/app/admin/domains/webcenter/ucm/cs/vault/~temp/703253295.xml (internal)/6 08.30 11:57:26.121 IdcServer-266 File to be removed: /oracle/app/admin/domains/webcenter/ucm/cs/vault/~temp/703253295.xml requestaudit/6 08.30 11:57:26.122 IdcServer-266 SS_SET_ELEMENT_DATA [dID=923][dDocName=DF_UCMCACHETESTER][dDocTitle=DF_UCMCacheTester][dUser=weblogic][dSecurityGroup=Public][xCollectionID=0][StatusCode=0][StatusMessage=Successfully checked in content item 'DF_UCMCACHETESTER'.] 0.3765290081501007(secs) requestaudit/6 08.30 11:57:30.710 IdcServer-267 DOC_INFO_BY_NAME [dID=923][dDocName=DF_UCMCACHETESTER][dDocTitle=DF_UCMCacheTester][dUser=weblogic][dSecurityGroup=Public][xCollectionID=0] 0.07942699640989304(secs) requestaudit/6 08.30 11:57:30.733 IdcServer-268 SS_GET_CONTRIBUTOR_STRINGS [dUser=weblogic] 0.0044570001773536205(secs) After a few moments and refreshing the P1 page, the updates has been applied. Note: The refresh time may very, since the Cache Invalidation Interval (set to 2 minutes) is not determined by when changes happened.  The sweeper just runs every 2 minutes. Refreshing the Content Server View Server Output, the tracing displays the important information. requestaudit/6 08.30 11:59:10.171 IdcServer-270 GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.00952600035816431(secs) requestaudit/6 08.30 11:59:10.179 IdcServer-271 GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.011118999682366848(secs) requestaudit/6 08.30 11:59:10.182 IdcServer-272 GET_DOCUMENT_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.007447000127285719(secs) requestaudit/6 08.30 11:59:16.885 IdcServer-273 VCR_GET_DOCUMENT_BY_NAME [dID=923][dDocName=DF_UCMCACHETESTER][dDocTitle=DF_UCMCacheTester][dUser=weblogic][RevisionSelectionMethod=LatestReleased][IsJava=1] 0.0786449983716011(secs) After the specifed interval time the sweeper is invoked, which is noted by the GET_ ... calls.  Since the history has noted the change, the next call is to the VCR_GET_DOCUMENT_BY_NAME to retrieve the new version of the (modifed) data file.  Navigating back to the P2 page, and viewing the server output, there are no further VCR_GET_DOCUMENT_BY_NAME to retrieve the data file.  This simply means that this data file was just retrieved from the cache.   
Upon further review of the server output, we can see that there was only 1 request for the VCR_GET_DOCUMENT_BY_NAME: requestaudit/6 08.30 12:08:00.021 Audit Request Monitor Request Audit Report over the last 120 Seconds for server webcenteroraclelocal16200****  requestaudit/6 08.30 12:08:00.021 Audit Request Monitor -Num Requests 8 Errors 0 Reqs/sec. 0.06666944175958633 Avg. Latency (secs) 0.02762500010430813 Max Thread Count 2  requestaudit/6 08.30 12:08:00.021 Audit Request Monitor 1 Service VCR_GET_DOCUMENT_BY_NAME Total Elapsed Time (secs) 0.09200000017881393 Num requests 1 Num errors 0 Avg. Latency (secs) 0.09200000017881393  requestaudit/6 08.30 12:08:00.021 Audit Request Monitor 2 Service GET_PERSONALIZED_JAVASCRIPT Total Elapsed Time (secs) 0.054999999701976776 Num requests 1 Num errors 0 Avg. Latency (secs) 0.054999999701976776  requestaudit/6 08.30 12:08:00.021 Audit Request Monitor 3 Service GET_FOLDER_HISTORY_REPORT Total Elapsed Time (secs) 0.028999999165534973 Num requests 2 Num errors 0 Avg. Latency (secs) 0.014499999582767487  requestaudit/6 08.30 12:08:00.021 Audit Request Monitor 4 Service GET_SERVER_OUTPUT Total Elapsed Time (secs) 0.017999999225139618 Num requests 1 Num errors 0 Avg. Latency (secs) 0.017999999225139618  requestaudit/6 08.30 12:08:00.021 Audit Request Monitor 5 Service GET_FILE Total Elapsed Time (secs) 0.013000000268220901 Num requests 1 Num errors 0 Avg. Latency (secs) 0.013000000268220901  requestaudit/6 08.30 12:08:00.021 Audit Request Monitor ****End Audit Report*****  

    Read the article

  • Differences Between NHibernate and Entity Framework

    - by Ricardo Peres
    Introduction NHibernate and Entity Framework are two of the most popular O/RM frameworks on the .NET world. Although they share some functionality, there are some aspects on which they are quite different. This post will describe this differences and will hopefully help you get started with the one you know less. Mind you, this is a personal selection of features to compare, it is by no way an exhaustive list. History First, a bit of history. NHibernate is an open-source project that was first ported from Java’s venerable Hibernate framework, one of the first O/RM frameworks, but nowadays it is not tied to it, for example, it has .NET specific features, and has evolved in different ways from those of its Java counterpart. Current version is 3.3, with 3.4 on the horizon. It currently targets .NET 3.5, but can be used as well in .NET 4, it only makes no use of any of its specific functionality. You can find its home page at NHForge. Entity Framework 1 came out with .NET 3.5 and is now on its second major version, despite being version 4. Code First sits on top of it and but came separately and will also continue to be released out of line with major .NET distributions. It is currently on version 4.3.1 and version 5 will be released together with .NET Framework 4.5. All versions will target the current version of .NET, at the time of their release. Its home location is located at MSDN. Architecture In NHibernate, there is a separation between the Unit of Work and the configuration and model instances. You start off by creating a Configuration object, where you specify all global NHibernate settings such as the database and dialect to use, the batch sizes, the mappings, etc, then you build an ISessionFactory from it. The ISessionFactory holds model and metadata that is tied to a particular database and to the settings that came from the Configuration object, and, there will typically be only one instance of each in a process. Finally, you create instances of ISession from the ISessionFactory, which is the NHibernate representation of the Unit of Work and Identity Map. This is a lightweight object, it basically opens and closes a database connection as required and keeps track of the entities associated with it. ISession objects are cheap to create and dispose, because all of the model complexity is stored in the ISessionFactory and Configuration objects. As for Entity Framework, the ObjectContext/DbContext holds the configuration, model and acts as the Unit of Work, holding references to all of the known entity instances. This class is therefore not lightweight as its NHibernate counterpart and it is not uncommon to see examples where an instance is cached on a field. Mappings Both NHibernate and Entity Framework (Code First) support the use of POCOs to represent entities, no base classes are required (or even possible, in the case of NHibernate). As for mapping to and from the database, NHibernate supports three types of mappings: XML-based, which have the advantage of not tying the entity classes to a particular O/RM; the XML files can be deployed as files on the file system or as embedded resources in an assembly; Attribute-based, for keeping both the entities and database details on the same place at the expense of polluting the entity classes with NHibernate-specific attributes; Strongly-typed code-based, which allows dynamic creation of the model and strongly typing it, so that if, for example, a property name changes, the mapping will also be updated. 
Entity Framework can use: Attribute-based (although attributes cannot express all of the available possibilities – for example, cascading); Strongly-typed code mappings. Database Support With NHibernate you can use mostly any database you want, including: SQL Server; SQL Server Compact; SQL Server Azure; Oracle; DB2; PostgreSQL; MySQL; Sybase Adaptive Server/SQL Anywhere; Firebird; SQLLite; Informix; Any through OLE DB; Any through ODBC. Out of the box, Entity Framework only supports SQL Server, but a number of providers exist, both free and commercial, for some of the most used databases, such as Oracle and MySQL. See a list here. Inheritance Strategies Both NHibernate and Entity Framework support the three canonical inheritance strategies: Table Per Type Hierarchy (Single Table Inheritance), Table Per Type (Class Table Inheritance) and Table Per Concrete Type (Concrete Table Inheritance). Associations Regarding associations, both support one to one, one to many and many to many. However, NHibernate offers far more collection types: Bags of entities or values: unordered, possibly with duplicates; Lists of entities or values: ordered, indexed by a number column; Maps of entities or values: indexed by either an entity or any value; Sets of entities or values: unordered, no duplicates; Arrays of entities or values: indexed, immutable. Querying NHibernate exposes several querying APIs: LINQ is probably the most used nowadays, and really does not need to be introduced; Hibernate Query Language (HQL) is a database-agnostic, object-oriented SQL-alike language that exists since NHibernate’s creation and still offers the most advanced querying possibilities; well suited for dynamic queries, even if using string concatenation; Criteria API is an implementation of the Query Object pattern where you create a semi-abstract conceptual representation of the query you wish to execute by means of a class model; also a good choice for dynamic querying; Query Over offers a similar API to Criteria, but using strongly-typed LINQ expressions instead of strings; for this, although more refactor-friendlier that Criteria, it is also less suited for dynamic queries; SQL, including stored procedures, can also be used; Integration with Lucene.NET indexer is available. As for Entity Framework: LINQ to Entities is fully supported, and its implementation is considered very complete; it is the API of choice for most developers; Entity-SQL, HQL’s counterpart, is also an object-oriented, database-independent querying language that can be used for dynamic queries; SQL, of course, is also supported. Caching Both NHibernate and Entity Framework, of course, feature first-level cache. NHibernate also supports a second-level cache, that can be used among multiple ISessionFactorys, even in different processes/machines: Hashtable (in-memory); SysCache (uses ASP.NET as the cache provider); SysCache2 (same as above but with support for SQL Server SQL Dependencies); Prevalence; SharedCache; Memcached; Redis; NCache; Appfabric Caching. Out of the box, Entity Framework does not have any second-level cache mechanism, however, there are some public samples that show how we can add this. 
ID Generators NHibernate supports different ID generation strategies, coming from the database and otherwise: Identity (for SQL Server, MySQL, and databases who support identity columns); Sequence (for Oracle, PostgreSQL, and others who support sequences); Trigger-based; HiLo; Sequence HiLo (for databases that support sequences); Several GUID flavors, both in GUID as well as in string format; Increment (for single-user uses); Assigned (must know what you’re doing); Sequence-style (either uses an actual sequence or a single-column table); Table of ids; Pooled (similar to HiLo but stores high values in a table); Native (uses whatever mechanism the current database supports, identity or sequence). Entity Framework only supports: Identity generation; GUIDs; Assigned values. Properties NHibernate supports properties of entity types (one to one or many to one), collections (one to many or many to many) as well as scalars and enumerations. It offers a mechanism for having complex property types generated from the database, which even include support for querying. It also supports properties originated from SQL formulas. Entity Framework only supports scalars, entity types and collections. Enumerations support will come in the next version. Events and Interception NHibernate has a very rich event model, that exposes more than 20 events, either for synchronous pre-execution or asynchronous post-execution, including: Pre/Post-Load; Pre/Post-Delete; Pre/Post-Insert; Pre/Post-Update; Pre/Post-Flush. It also features interception of class instancing and SQL generation. As for Entity Framework, only two events exist: ObjectMaterialized (after loading an entity from the database); SavingChanges (before saving changes, which include deleting, inserting and updating). Tracking Changes For NHibernate as well as Entity Framework, all changes are tracked by their respective Unit of Work implementation. Entities can be attached and detached to it, Entity Framework does, however, also support self-tracking entities. Optimistic Concurrency Control NHibernate supports all of the imaginable scenarios: SQL Server’s ROWVERSION; Oracle’s ORA_ROWSCN; A column containing date and time; A column containing a version number; All/dirty columns comparison. Entity Framework is more focused on Entity Framework, so it only supports: SQL Server’s ROWVERSION; Comparing all/some columns. Batching NHibernate has full support for insertion batching, but only if the ID generator in use is not database-based (for example, it cannot be used with Identity), whereas Entity Framework has no batching at all. Cascading Both support cascading for collections and associations: when an entity is deleted, their conceptual children are also deleted. NHibernate also offers the possibility to set the foreign key column on children to NULL instead of removing them. Flushing Changes NHibernate’s ISession has a FlushMode property that can have the following values: Auto: changes are sent to the database when necessary, for example, if there are dirty instances of an entity type, and a query is performed against this entity type, or if the ISession is being disposed; Commit: changes are sent when committing the current transaction; Never: changes are only sent when explicitly calling Flush(). As for Entity Framework, changes have to be explicitly sent through a call to AcceptAllChanges()/SaveChanges(). 
Lazy Loading NHibernate supports lazy loading for Associated entities (one to one, many to one); Collections (one to many, many to many); Scalar properties (think of BLOBs or CLOBs). Entity Framework only supports lazy loading for: Associated entities; Collections. Generating and Updating the Database Both NHibernate and Entity Framework Code First (with the Migrations API) allow creating the database model from the mapping and updating it if the mapping changes. Extensibility As you can guess, NHibernate is far more extensible than Entity Framework. Basically, everything can be extended, from ID generation, to LINQ to SQL transformation, HQL native SQL support, custom column types, custom association collections, SQL generation, supported databases, etc. With Entity Framework your options are more limited, at least, because practically no information exists as to what can be extended/changed. It features a provider model that can be extended to support any database. Integration With Other Microsoft APIs and Tools When it comes to integration with Microsoft technologies, it will come as no surprise that Entity Framework offers the best support. For example, the following technologies are fully supported: ASP.NET (through the EntityDataSource); ASP.NET Dynamic Data; WCF Data Services; WCF RIA Services; Visual Studio (through the integrated designer). Documentation This is another point where Entity Framework is superior: NHibernate lacks, for starters, an up-to-date API reference synchronized with its current version. It does have a community mailing list, blogs and wikis, although not much used. Entity Framework has a number of resources on MSDN and, of course, several forums and discussion groups exist. Conclusion Like I said, this is a personal list. It may come as a surprise to some that Entity Framework is so behind NHibernate in so many aspects, but it is true that NHibernate is much older and, due to its open-source nature, is not tied to product-specific timeframes and can thus evolve much more rapidly. I do like both, and I choose whichever is best for the job I have at hand. I am looking forward to the changes in EF5 which will add significant value to an already interesting product. So, what do you think? Did I forget anything important or is there anything else worth talking about? Looking forward to your comments!

    Read the article

  • Where does ASP.NET Web API Fit?

    - by Rick Strahl
With the pending release of ASP.NET MVC 4 and the new ASP.NET Web API, there has been a lot of discussion about where the new Web API technology fits in the ASP.NET Web stack. There are a lot of choices for building HTTP based applications available on the stack now - we've come a long way from when WebForms and HTTP Handlers/Modules were the only real options. Today we have WebForms, MVC, ASP.NET Web Pages, ASP.NET AJAX, WCF REST and now Web API, as well as the core ASP.NET runtime, to choose from when building HTTP content. Web API squarely addresses the 'API' aspect - building consumable services - rather than HTML content, but even to that end there are a lot of choices you have today. So where does Web API fit, and when doesn't it? But before we get into that discussion, let's talk about what a Web API is and why we should care.
What's a Web API?
HTTP 'APIs' (Microsoft's new terminology for a service, I guess) are becoming increasingly important with the rise of the many devices in use today. Most mobile devices like phones and tablets run apps that use data retrieved from the Web over HTTP. Desktop applications are also moving in this direction, with more and more online content and syncing moving into even traditional desktop applications. The pending Windows 8 release promises an app-like platform for both the desktop and other devices that also emphasizes consuming data from the Cloud. Likewise, many browser-hosted Web applications these days rely on rich client functionality to create and manipulate the browser user interface, using AJAX rather than server-generated HTML to load up the user interface with data. These mobile or rich Web applications use their HTTP connection to retrieve data rather than HTML markup, typically in the form of JSON or XML. But an API can also serve other kinds of data, like images or other binary files, or even text data and HTML (although that's less common). A Web API is what feeds rich applications with data. ASP.NET Web API aims to service this particular segment of Web development by providing easy semantics to route and handle incoming requests and an easy-to-use platform to serve HTTP data in just about any content format you choose to create and serve from the server.
But .NET already has various HTTP Platforms
The .NET stack already includes a number of technologies that provide the ability to create HTTP service back ends, and it has done so since the very beginning of the .NET platform. From raw HTTP Handlers and Modules in the core ASP.NET runtime, to high level platforms like ASP.NET MVC, Web Forms, ASP.NET AJAX and the WCF REST engine (which technically is not ASP.NET, but can integrate with it), you've always been able to handle just about any kind of HTTP request and response with ASP.NET. The beauty of the raw ASP.NET platform is that it provides you everything you need to build just about any type of HTTP application you can dream up, from low level APIs/custom engines to high level HTML generation engines. ASP.NET as a core platform has clearly stood the test of time 10+ years later, and all other frameworks like Web API are built on top of this ASP.NET core. However, although it's possible to create Web APIs/services using any of the existing out-of-the-box .NET technologies, none of them have been a really nice fit for building arbitrary HTTP based APIs.
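To make that point concrete before going further, here is a minimal sketch of what a hand-rolled JSON endpoint looks like as a raw HTTP handler; the handler name and data are invented, the handler still has to be registered in web.config, and routing, content negotiation and error handling would all be additional plumbing:

using System.Web;
using System.Web.Script.Serialization;

public class CustomerJsonHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        // Everything is manual: verbs, serialization, content type, status codes.
        if (context.Request.HttpMethod != "GET")
        {
            context.Response.StatusCode = 405;   // Method Not Allowed
            return;
        }

        var customer = new { Id = 1, Name = "Acme" };   // stand-in for real data access

        context.Response.ContentType = "application/json";
        context.Response.Write(new JavaScriptSerializer().Serialize(customer));
    }
}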
Sure, you can use an HttpHandler to create just about anything, but you have to write a lot of plumbing to build something more complex, like a comprehensive API that serves a variety of requests, handles multiple output formats and can easily accept data from the client in a variety of ways. Likewise, you can use ASP.NET MVC to handle routing and create content in various formats fairly easily, but it doesn't provide a great way to automatically negotiate content types and serve various content formats directly (it's possible with some plumbing code of your own, but it's not built in). Prior to Web API, Microsoft's main push for HTTP services had been WCF REST, which was always an awkward technology with a severe personality conflict, never being clear on whether it wanted to be part of WCF or purely a separate technology. In the end it didn't do either WCF compatibility or WCF-agnostic pure HTTP operation very well, which made for a very developer-unfriendly environment. Personally I didn't like any of the implementations at the time, so much so that I ended up building my own HTTP service engine (as part of the West Wind Web Toolkit), as have a few other third party tools that provided much better integration and ease of use. With the release of Web API, for the first time I feel that I can use the tools in the box and not have to worry about creating and maintaining my own toolkit, as Web API addresses just about all the features I implemented on my own and much more.
ASP.NET Web API provides a better HTTP Experience
ASP.NET Web API differentiates itself from the previous Microsoft in-box HTTP service solutions in that it was built from the ground up around the HTTP protocol and its messaging semantics. Unlike WCF REST or ASP.NET AJAX with ASMX, it's a brand new platform rather than a bolted-on technology that is supposed to work in the context of an existing framework. The strength of the new ASP.NET Web API is that it combines the best features of the platforms that came before it to provide a comprehensive and very usable HTTP platform. Because it's based on ASP.NET and borrows a lot of concepts from ASP.NET MVC, Web API should be immediately familiar and comfortable to most ASP.NET developers. Here are some of the features that Web API provides that I like:
- Strong support for URL routing to produce clean URLs, using familiar MVC-style routing semantics;
- Content negotiation based on Accept headers for request and response serialization;
- Support for a host of output formats including JSON, XML and ATOM;
- Strong default support for REST semantics, but they are optional;
- Easily extensible formatter support to add new input/output types;
- Deep support for more advanced HTTP features via the HttpResponseMessage and HttpRequestMessage classes and strongly typed enums to describe many HTTP operations;
- Convention-based design that drives you into doing the right thing for HTTP services;
- Very extensible, based on an MVC-like extensibility model of formatters and filters;
- Self-hostable in non-Web applications;
- Testable using testing concepts similar to MVC.
Web API is meant to handle any kind of HTTP input and produce output and status codes using the full spectrum of HTTP functionality available, in a straightforward and flexible manner. Looking at the list above you can see that a lot of the functionality is very similar to ASP.NET MVC, so many ASP.NET developers should feel quite comfortable with the concepts of Web API.
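As a quick sketch of how that looks in practice - assuming the default api/{controller}/{id} route and an invented Customer type - a Web API controller gets routing, HTTP verb mapping, status codes and Accept-header-driven JSON/XML serialization from the framework:

using System.Collections.Generic;
using System.Net;
using System.Web.Http;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class CustomersController : ApiController
{
    // GET api/customers -> serialized as JSON or XML depending on the Accept header
    public IEnumerable<Customer> Get()
    {
        return new[] { new Customer { Id = 1, Name = "Acme" } };
    }

    // GET api/customers/1
    public Customer Get(int id)
    {
        if (id != 1)
            throw new HttpResponseException(HttpStatusCode.NotFound);

        return new Customer { Id = 1, Name = "Acme" };
    }
}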
The Routing and core infrastructure of Web API are very similar to how MVC works, providing many of the benefits of MVC, but with a focus on HTTP access and manipulation in controller methods rather than HTML generation. There's much improved support for content negotiation based on HTTP Accept headers, with the framework capable of automatically detecting what content the client is sending and requesting, and serving the appropriate data format in return. This seems like such a small and obvious thing, but it's really important. Today's service backends are often used by multiple clients/applications, and being able to serve whichever data format fits the client best is very important. While previous solutions were able to accomplish this using a variety of mixed features of WCF and ASP.NET, Web API combines all this functionality into a single, robust server side HTTP framework that intrinsically understands HTTP semantics and subtly drives you in the right direction for most operations. And when you need to customize or do something that is not built in, there are lots of hooks and overrides for most behaviors, and even many low level hook points that allow you to plug in custom functionality with relatively little effort.
No Brainers for Web API
There are a few scenarios that are a slam dunk for Web API. If the primary focus of an application, or even a part of an application, is some sort of API, then Web API makes great sense.
HTTP Services
If you're building a comprehensive HTTP API that is to be consumed over the Web, Web API is a perfect fit. You can isolate the logic in Web API and build your application as a service, breaking out the logic into controllers as needed. Because the primary interface is the service, there's no confusion about what should go where (MVC or API). Perfect fit.
Primary AJAX Backends
If you're building a rich client Web application that relies heavily on AJAX callbacks to serve its data, Web API is also a slam dunk. Again, because much if not most of the business logic will probably end up in your Web API service logic, there's no confusion over where logic should go and there's no duplication. In Single Page Applications (SPA), typically there's very little HTML-based logic served other than bringing up a shell UI and then filling in the data from the server with AJAX, which means the business logic required for data retrieval, data acceptance and validation also lives in the Web API. Perfect fit.
Generic HTTP Endpoints
Another good fit is generic HTTP endpoints that serve data or handle 'utility' type functionality in typical Web applications. If you need to implement an image server or an upload handler, in the past I'd have implemented that as an HTTP handler. With Web API you now have a well defined place where you can implement these types of generic 'services', in a location that can easily grow new endpoints (via controller methods) or be separated out into more full-featured APIs. Granted, this could be done with MVC as well, but Web API seems a clearer and better defined place to keep generic application services. This is one thing I used to do a lot of in my own libraries, and Web API addresses it nicely. Great fit.
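For illustration, a hedged sketch of such a generic endpoint: an image-serving controller method returning raw binary content. The route, folder path and PNG-only handling are invented and deliberately minimal (a real handler would also validate the id):

using System.IO;
using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Web.Http;

public class ImagesController : ApiController
{
    // GET api/images/logo -> streams C:\images\logo.png (hypothetical location)
    public HttpResponseMessage Get(string id)
    {
        string path = Path.Combine(@"C:\images", id + ".png");
        if (!File.Exists(path))
            return Request.CreateResponse(HttpStatusCode.NotFound);

        HttpResponseMessage response = Request.CreateResponse(HttpStatusCode.OK);
        response.Content = new StreamContent(File.OpenRead(path));
        response.Content.Headers.ContentType = new MediaTypeHeaderValue("image/png");
        return response;
    }
}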
Mixed HTML and AJAX Applications: Not a clear Choice
For all the commonality that Web API and MVC share, they are fundamentally different platforms that are independent of each other. A lot of people have asked when it makes sense to use MVC vs. Web API when you're dealing with a typical Web application that creates HTML and also uses AJAX for rich functionality. While it's easy to say that all 'service'/AJAX logic should go into a Web API and all HTML-related generation into MVC, that can often result in a lot of code duplication. Also, MVC supports JSON and XML result data fairly easily as well, so there's some confusion about where that 'trigger point' is, when you should switch to Web API vs. just implementing the functionality as part of MVC controllers. Ultimately there's a tradeoff between isolation of functionality and duplication. A good rule of thumb, I think, is that if a large chunk of the application's functionality serves data, Web API is a good choice, but if you have a couple of small AJAX requests to serve data to a grid or autocomplete box, it'd be overkill to separate that logic out into a separate Web API controller. Web API does add overhead to your application (it's yet another framework that sits on top of core ASP.NET) so it should be worth it. Keep in mind that MVC can generate HTML, JSON/XML and just about any other content easily, and that functionality is not going away, so just because Web API is there it doesn't mean you have to use it. Web API is not a full replacement for MVC either, obviously, since there's not the same level of support to feed HTML from Web API controllers (although you can host a RazorEngine easily enough if you really want to go that route), so if HTML is part of your API or application in general, MVC is still a better choice, either alone or in combination with Web API. I suspect (and hope) that in the future Web API's functionality will merge even closer with MVC, so that you might even be able to mix functionality of both into single controllers and not have to make any trade-offs, but at the moment that's not the case.
Some Issues To Think About
Web API is similar to MVC but not the Same
Although Web API looks a lot like MVC, it's not the same, and some common functionality of MVC behaves differently in Web API. For example, the way single POST variables are handled is different from MVC and doesn't lend itself particularly well to some AJAX scenarios with POST data.
Code Duplication
I already touched on this in the Mixed HTML and Web API section, but if you build an MVC application that also exposes a Web API, it's quite likely that you end up duplicating a bunch of code and - potentially - infrastructure. You may have to create authentication logic both for the HTML application and for the Web API, which might need something different altogether. More often than not, though, the same logic is used, and there's no easy way to share it. If you implement an MVC ActionFilter and you want that same functionality in your Web API, you'll end up creating the filter twice.
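A hedged sketch of that duplication, with invented filter names and a trivial stand-in for the shared logic - the same concern has to be written once against System.Web.Mvc and once against System.Web.Http:

using System.Diagnostics;
using System.Web.Http.Controllers;
using System.Web.Mvc;

// MVC flavor (runs for HTML-serving controllers)
public class AuditMvcFilter : System.Web.Mvc.ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        Trace.WriteLine("MVC action: " + filterContext.ActionDescriptor.ActionName);
    }
}

// Web API flavor (same idea, different base class and context types)
public class AuditApiFilter : System.Web.Http.Filters.ActionFilterAttribute
{
    public override void OnActionExecuting(HttpActionContext actionContext)
    {
        Trace.WriteLine("API action: " + actionContext.ActionDescriptor.ActionName);
    }
}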
AJAX Data or AJAX HTML
On a recent post's comments, David made some really good points regarding the commonality of MVC and Web API and where each fits. One comment that caught my eye was a little more general, regarding data services vs. HTML services. David says:
I see a lot of merit in the combination of Knockout.js, client side templates and view models, calling Web API for a responsive UI, but sometimes late at night that still leaves me wondering why I would no longer be using some of the nice tooling and features that have evolved in MVC ;-)
You know what - I can totally relate to that. On the last Web based mobile app I worked on, we decided to serve HTML partials to the client via AJAX for many (but not all!) things, rather than sending down raw data to inject into the DOM on the client via templating or direct manipulation. While this definitely puts more bytes on the wire, the overhead ended up being fairly small if you keep the 'data' requests small and atomic, and the cost was often made up for by not having to render HTML on the client. Server-rendered HTML for AJAX templating gives so much better infrastructure support without having to screw around with 20 mismatched client libraries. Especially with MVC and partials it's pretty easy to break your HTML logic out into very small, atomic chunks, so it's actually easy to create small rendering islands that can be used via composition on the server, or via AJAX calls to small, tight partials that return HTML to the client. Although this approach is often frowned upon as too 'heavy', it worked really well in terms of developer effort as well as providing surprisingly good performance on devices. There's still plenty of jQuery and AJAX logic happening on the client, but it's more manageable in small doses than trying to do the entire UI composition with JavaScript and/or 'not-quite-there-yet' template engines that are very difficult to debug. This is not an issue directly related to Web API of course, but something to think about, especially for AJAX or SPA style applications.
Summary
Web API is a great new addition to the ASP.NET platform and it addresses a serious need for consolidation of a lot of half-baked HTTP service API technologies that came before it. Web API feels 'right' and hits the right combination of usability and flexibility, at least for me, and it's a good fit for true API scenarios. However, just because a new platform is available doesn't mean that other tools or tech that came before it should be discarded or even upgraded to the new platform. There's nothing wrong with continuing to use MVC controller methods to handle API tasks if that's what your app is running now - there's very little to be gained by upgrading to Web API just because. But going forward, Web API clearly is the way to go when building HTTP data interfaces, and it's good to see that Microsoft got this one right - it was sorely needed!
Resources
ASP.NET Web API
AspConf Ask the Experts Session (first 5 minutes)
© Rick Strahl, West Wind Technologies, 2005-2012
Posted in Web Api

    Read the article
