Search Results

Search found 23613 results on 945 pages for 'query parameters'.

Page 883 of 945

  • How to find unmapped properties in a NHibernate mapped class?

    - by haarrrgh
    I just had a NHibernate related problem where I forgot to map one property of a class. A very simplified example: public class MyClass { public virtual int ID { get; set; } public virtual string SomeText { get; set; } public virtual int SomeNumber { get; set; } } ...and the mapping file: <?xml version="1.0" encoding="utf-8" ?> <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" assembly="MyAssembly" namespace="MyAssembly.MyNamespace"> <class name="MyClass" table="SomeTable"> <property name="ID" /> <property name="SomeText" /> </class> </hibernate-mapping> In this simple example, you can see the problem at once: there is a property named "SomeNumber" in the class, but not in the mapping file. So NHibernate will not map it and it will always be zero. The real class had a lot more properties, so the problem was not as easy to see and it took me quite some time to figure out why SomeNumber always returned zero even though I was 100% sure that the value in the database was != zero. So, here is my question: Is there some simple way to find this out via NHibernate? Like a compiler warning when a class is mapped, but some of its properties are not. Or some query that I can run that shows me unmapped properties in mapped classes...you get the idea. (Plus, it would be nice if I could exclude some legacy columns that I really don't want mapped.)
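    There is no compiler warning for this, but one possible approach, sketched under assumptions (the helper name, the excluded-columns set and the usings shown are illustrative, not NHibernate conventions): compare the class's CLR properties against the mapped property names that ISessionFactory.GetClassMetadata exposes, for example from a unit test that runs over every mapped entity.

      // Hedged sketch: list properties of an entity that have no NHibernate mapping.
      // using System; using System.Collections.Generic; using System.Linq; using System.Reflection;
      // using NHibernate;
      public static IEnumerable<string> FindUnmappedProperties(ISessionFactory factory, Type entityType, ISet<string> excludedLegacyColumns)
      {
          var metadata = factory.GetClassMetadata(entityType); // null if the type is not mapped at all
          var mapped = new HashSet<string>(metadata.PropertyNames) { metadata.IdentifierPropertyName };
          return entityType.GetProperties(BindingFlags.Public | BindingFlags.Instance)
                           .Select(p => p.Name)
                           .Where(n => !mapped.Contains(n) && !excludedLegacyColumns.Contains(n));
      }

      // e.g. in a test, expect an empty result for typeof(MyClass), with legacy columns listed explicitly:
      // var leftovers = FindUnmappedProperties(sessionFactory, typeof(MyClass), new HashSet<string> { "SomeLegacyColumn" });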

    Read the article

  • Setting up a NAS with Citrix XenServer

    - by JasonBrown
    Just a quick query for anyone who's worked with XenServer: I want to set up a NAS at home but with virtualization (I've looked into VMware Server and KVM, and I quite like KVM!), but I was told about XenServer 5.5. I have commodity hardware (ASUS board, dual-core 2.66GHz CPU with 8GB RAM), and I need to set up a fileserver to house about 2-3TB worth of data (big chunky video - not porn!). It needs to run Linux (preferably CentOS) but also run Windows virtualised for testing. I was thinking of going the XenServer route, however I want to be able to offer a VM access to the 2-3TB of HDDs (5 HDD drives) directly so it can do its thing (maybe using FreeNAS). Would this be possible with XenServer? Or will I have to do more work - and another box - to offer this? My goals are to use FreeNAS (ZFS!) for the fileserver, CentOS for SVN and other bits we need to use (LAMP stack), and Windows for our Win32 testing, all on one box. I see the iSCSI target bits and get scared.

    Read the article

  • Very fast document similarity

    - by peyton
    Hello, I am trying to determine document similarity between a single document and each of a large number of documents (n ~= 1 million) as quickly as possible. More specifically, the documents I'm comparing are e-mails; they are grouped (i.e., there are folders or tags) and I'd like to determine which group is most appropriate for a new e-mail. Fast performance is critical. My a priori assumption is that the cosine similarity between term vectors is appropriate for this application; please comment on whether this is a good measure to use or not! I have already taken into account the following possibilities for speeding up performance: Pre-normalize all the term vectors Calculate a term vector for each group (n ~= 10,000) rather than each e-mail (n ~= 1,000,000); this would probably be acceptable for my application, but if you can think of a reason not to do it, let me know! I have a few questions: If a new e-mail has a new term never before seen in any of the previous e-mails, does that mean I need to re-compute all of my term vectors? This seems expensive. Is there some clever way to only consider vectors which are likely to be close to the query document? Is there some way to be more frugal about the amount of memory I'm using for all these vectors? Thanks!
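    Cosine similarity over weighted term vectors is a reasonable first choice for routing mail into groups; for reference, a minimal sketch under assumptions (sparse vectors represented as Dictionary<string, double>, both pre-normalized to unit length as suggested above):

      // Hedged sketch: cosine similarity between two sparse term vectors.
      // With both vectors pre-normalized to unit length, the dot product alone is the cosine.
      // using System.Collections.Generic;
      public static double CosineSimilarity(IDictionary<string, double> a, IDictionary<string, double> b)
      {
          IDictionary<string, double> smaller = a.Count <= b.Count ? a : b;
          IDictionary<string, double> larger = ReferenceEquals(smaller, a) ? b : a;
          double dot = 0.0;
          foreach (KeyValuePair<string, double> term in smaller) // iterate the smaller vector only
          {
              double weight;
              if (larger.TryGetValue(term.Key, out weight))
                  dot += term.Value * weight;
          }
          return dot; // equals cos(theta) for unit-length inputs
      }

    Comparing the new message against the ~10,000 group vectors rather than the ~1,000,000 individual messages already cuts the work by two orders of magnitude, and one consequence of the sparse dot product is that a term appearing only in the new message contributes nothing against the existing vectors, so their stored weights do not need to be recomputed immediately.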

    Read the article

  • Populating a Combobox inside a Gridview

    - by Nawait
    I'm having a few problems working with a GridView and a ComboBox inside of it. Here is the code for my ListView control: <ListView Height="139" HorizontalAlignment="Left" Margin="10,158,0,0" Name="lvAppointment" VerticalAlignment="Top" Width="250" MinWidth="350"> <ListView.View> <GridView> <GridViewColumn Header="Appointment" Width="120"> <GridViewColumn.CellTemplate> <DataTemplate> <DatePicker SelectedDate="{Binding Path=Appointment}"/> </DataTemplate> </GridViewColumn.CellTemplate> </GridViewColumn> <GridViewColumn Header="Type" Width="170"> <GridViewColumn.CellTemplate> <DataTemplate> <ComboBox ???/> </DataTemplate> </GridViewColumn.CellTemplate> </GridViewColumn> <GridViewColumn Header="Done" Width="50"> <GridViewColumn.CellTemplate> <DataTemplate> <CheckBox IsChecked="{Binding Path=Done}" IsThreeState="False"/> </DataTemplate> </GridViewColumn.CellTemplate> </GridViewColumn> </GridView> </ListView.View> I'm populating the list from a SQL CE database via C# with the following code: using (SqlCeCommand sqlCeAppointment = new SqlCeCommand("SELECT appid,appointment,done,apptype.type FROM appointment INNER JOIN apptype ON appointment.refatid = apptype.atid WHERE refeventid = @eventid;", sqlCeConn)) { sqlCeAppointment.Parameters.AddWithValue("@eventid", ((cListEventItem)lvEvent.SelectedItems[0]).id); using (SqlCeDataReader sqlCeAppointmentReader = sqlCeAppointment.ExecuteReader()) { lvAppointment.Items.Clear(); while (sqlCeAppointmentReader.Read()) { lvAppointment.Items.Add(new cListAppointmentItem { id = sqlCeAppointmentReader.GetGuid(sqlCeTerminReader.GetOrdinal("appid")), Appointment = sqlCeAppointmentReader.GetDateTime(sqlCeTerminReader.GetOrdinal("appointment")), Type = sqlCeAppointmentReader.GetString(sqlCeTerminReader.GetOrdinal("type")), Done = sqlCeAppointmentReader.GetByte(sqlCeTerminReader.GetOrdinal("done")) }); } } } I can populate the list just fine. But I want "Type" to be a ComboBox so the user can select the appropriate type of the appointment (it's a list of appointments connected to an event). This ComboBox should be filled with data that's inside a table of the SQL CE database (apptype). This table is not static; users can add and delete items from this list. I have tried a few ways I found via Google, but failed. I guess I'm having problems understanding how this works/should work. I hope someone can help me :( Thanks in advance
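    One common pattern, sketched under assumptions (the collection name TypeNames, the window setting DataContext = this, and the exact binding path are illustrative, not taken from the code above): load the apptype rows into an ObservableCollection once, and let the ComboBox in the cell template bind its ItemsSource to it while its SelectedItem binds to the row's Type.

      // Hedged sketch: expose the selectable appointment types for binding.
      // using System.Collections.ObjectModel; using System.Data.SqlServerCe;
      // Assumes the window sets DataContext = this; during initialization.
      private readonly ObservableCollection<string> typeNames = new ObservableCollection<string>();
      public ObservableCollection<string> TypeNames { get { return typeNames; } }

      private void LoadTypeNames()
      {
          typeNames.Clear();
          using (SqlCeCommand cmd = new SqlCeCommand("SELECT type FROM apptype ORDER BY type;", sqlCeConn))
          using (SqlCeDataReader reader = cmd.ExecuteReader())
          {
              while (reader.Read())
                  typeNames.Add(reader.GetString(reader.GetOrdinal("type")));
          }
      }

      // The "Type" cell template could then bind to it (XAML shown here only as a comment):
      //   <ComboBox SelectedItem="{Binding Path=Type}"
      //             ItemsSource="{Binding Path=DataContext.TypeNames,
      //                           RelativeSource={RelativeSource AncestorType=ListView}}"/>

    Because it is an ObservableCollection, a change to the apptype table only needs LoadTypeNames() to be re-run for every open ComboBox to pick up the new list.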

    Read the article

  • Best way to not update empty posts

    - by user1533106
    Hello, I'm using CodeIgniter, and the page in question just updates info about a user. If the user goes to the page and edits the values, and a post field comes back as "" or empty (same thing), then that field should not be updated; the query should just skip it. I have some logic for this, but it's a bit ugly and it will take a lot of time. $nome = "'nome' =>" . $this->input->post('nome') . "'"; $sobrenome = "'sobrenome' =>" . $this->input->post('sobrenome') . "'"; if($nome != ""){ $nome = "'nome' =>" . $this->input->post('nome') . "'"; }else{ $nome = ""; } if($sobrenome != ""){ $sobrenome = "'sobrenome' =>" . $this->input->post('sobrenome') . "'"; }else{ $sobrenome = ""; } $data = array($nome, $sobrenome); The problem is, I've got a lot of fields :( If anyone knows a smarter or better way, please let me know.

    Read the article

  • Can a second stored procedure doing the same thing finish before first one?

    - by evanmortland
    Hello, I have an audit record table that I am writing to. I am connecting to MyDb, which has a stored procedure called 'CreateAudit', which is a passthrough stored procedure to another database on the same machine called 'CreatedAudit' as well. I call the CreateAudit stored procedure from my application, using SubSonic as the DAL. The first time I call it, I call it with the following (pseudocode): Result = CreateAudit(recordId, "Opened") Right after that, I call: Result2 = CreateAudit(recordId, "Closed") The second call is supposed to mark the record that was created by CreateAudit(recordId, "Opened") with a status of closed. It works great if I run them independently of one another, but when they run in sequence in the application, the record is not marked as "Closed". When I run SQL Profiler I see that both queries ran, and if I copy the queries out and run them from Query Analyzer the record gets marked as closed 100% of the time! When I run it from the application, about once every 20 times or so, the record is successfully marked closed - the other 19 times nothing happens, but I do not get an error! Is it possible for the .NET app to skip over the output from the first stored procedure and start executing the second stored procedure before the record in the first is created? When I add a "WAITFOR DELAY '00:00:00:003'" to the top of my stored procedure, the record is also closed 100% of the time. My head is spinning - any ideas why this is happening? Thanks for any responses, very interested in hearing how this can happen.
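    One way to rule the .NET side in or out while debugging, sketched under assumptions (the parameter names @RecordId and @Status are guesses; the question only shows CreateAudit(recordId, "Opened")): call the procedure twice on the same open connection inside one transaction, so the second call cannot begin before the first has returned.

      // Hedged sketch: ExecuteNonQuery blocks until the procedure finishes, and sharing one connection
      // and transaction removes any chance of the two calls interleaving across pooled connections.
      // using System.Data; using System.Data.SqlClient;
      using (SqlConnection conn = new SqlConnection(connectionString))
      {
          conn.Open();
          using (SqlTransaction tx = conn.BeginTransaction())
          {
              foreach (string status in new[] { "Opened", "Closed" })
              {
                  using (SqlCommand cmd = new SqlCommand("CreateAudit", conn, tx))
                  {
                      cmd.CommandType = CommandType.StoredProcedure;
                      cmd.Parameters.AddWithValue("@RecordId", recordId); // assumed parameter name
                      cmd.Parameters.AddWithValue("@Status", status);     // assumed parameter name
                      cmd.ExecuteNonQuery();
                  }
              }
              tx.Commit();
          }
      }

    If the record is reliably closed this way but not through the SubSonic path, the ordering problem lives in the DAL calls rather than in the stored procedures themselves.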

    Read the article

  • How to filter List<T> with LINQ and Reflection

    - by Ehsan Sajjad
    I am getting properties via reflection, and I was iterating the list like this: private void HandleListProperty(object oldObject, object newObject, string difference, PropertyInfo prop) { var oldList = prop.GetValue(oldObject, null) as IList; var newList = prop.GetValue(newObject, null) as IList; if (prop.PropertyType == typeof(List<DataModel.ScheduleDetail>)) { List<DataModel.ScheduleDetail> ScheduleDetailsOld = oldList as List<DataModel.ScheduleDetail>; List<DataModel.ScheduleDetail> ScheduleDetailsNew = newList as List<DataModel.ScheduleDetail>; var groupOldSchedules = ScheduleDetailsOld .GroupBy(x => x.HomeHelpID) .SelectMany(s => s.DistinctBy(d => d.HomeHelpID) .Select(h => new { h.HomeHelpID, h.HomeHelpName })); } } Now I am making it generic, because different types of lists will be coming in and I don't want to put in if conditions for each one; I want to write generic code that handles any type of list. I came up with this: private void HandleListProperty(object oldObject, object newObject, string difference, PropertyInfo prop) { var oldList = prop.GetValue(oldObject, null) as IList; var newList = prop.GetValue(newObject, null) as IList; var ListType = prop.PropertyType; var MyListInstance = Activator.CreateInstance(ListType); MyListInstance = oldList; } I am able to get the items in MyListInstance, but since the element type is only known at runtime, I can't work out how to write the LINQ query to filter them. Any idea how to do this?
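    A minimal sketch of one route (an assumption about the goal, i.e. grouping by a property whose name is only known at runtime): treat the untyped list as a sequence of object and read the grouping key via reflection, so no compile-time element type is needed.

      // Hedged sketch: group any IList by a named property read through reflection.
      // using System.Collections; using System.Collections.Generic; using System.Linq;
      private static IEnumerable<IGrouping<object, object>> GroupByProperty(IList list, string keyPropertyName)
      {
          return list.Cast<object>()
                     .GroupBy(item => item.GetType()
                                          .GetProperty(keyPropertyName)
                                          .GetValue(item, null));
      }

      // Example mirroring the DistinctBy above: one representative element per HomeHelpID.
      // var firstPerGroup = GroupByProperty(oldList, "HomeHelpID").Select(g => g.First());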

    Read the article

  • Solr return whether member is in multivalued field

    - by ??iu
    Is there any way to return in the fields list whether a value exists as one of the values of a multivalued field? E.g., if your schema is <schema> ... <field name="user_name" type="text" indexed="true" stored="true" required="true" /> <field name="follower" type="integer" indexed="true" stored="true" multiValued="true" /> ... </schema> A sample document might look like: <doc> <field name="user_name">tester blah</field> <field name="follower">1</field> <field name="follower">62</field> <field name="follower">63</field> <field name="follower">64</field> </doc> I would like to be able to query for, say, "tester" and follower:62 and have it match "tester blah" and have some indication of whether 62 is a follower or not in the results.

    Read the article

  • Why use shorter VARCHAR(n) fields?

    - by chryss
    It is frequently advised to choose database field sizes to be as narrow as possible. I am wondering to what degree this applies to SQL Server 2005 VARCHAR columns: storing 10-letter English words in a VARCHAR(255) field will not take up more storage than in a VARCHAR(10) field. Are there other reasons to restrict the size of VARCHAR fields to stick as closely as possible to the size of the data? I'm thinking of: Performance: is there an advantage to using a smaller n when selecting, filtering and sorting on the data? Memory, including on the application side (C++)? Style/validation: how important do you consider restricting column size to force nonsensical data imports to fail (such as 200-character surnames)? Anything else? Background: I help data integrators with the design of data flows into a database-backed system. They have to use an API that restricts their choice of data types. For character data, only VARCHAR(n) with n <= 255 is available; CHAR, NCHAR, NVARCHAR and TEXT are not. We're trying to lay down some "good practices" rules, and the question has come up whether there is a real detriment to using VARCHAR(255) even for data whose real maximum size will never exceed 30 bytes or so. Typical data volumes for one table are 1-10 million records with up to 150 attributes. Query performance (SELECT, with frequently extensive WHERE clauses) and application-side retrieval performance are paramount.

    Read the article

  • Distinct() to return List<> returning Duplicates

    - by KDM
    I have a list of Filters that are passed into a web service; I iterate over the collection, do a LINQ query, and then add to the list of products, but when I do a GroupBy and Distinct() it doesn't remove the duplicates. I am using an IEnumerable because when you use Distinct it converts it to IEnumerable. If you know how to construct this better and make my function return a type of List<Product>, that would be appreciated, thanks. Here is my code in C#: if (Tab == "All-Items") { List<Product> temp = new List<Product>(); List<Product> Products2 = new List<Product>(); foreach (Filter filter in Filters) { List<Product> products = (from p in db.Products where p.Discontinued == false && p.DepartmentId == qDepartment.Id join f in db.Filters on p.Id equals f.ProductId join x in db.ProductImages on p.Id equals x.ProductId where x.Dimension == "180X180" && f.Name == filter.Name /*Filter*/ select new Product { Id = p.Id, Title = p.Title, ShortDescription = p.ShortDescription, Brand = p.Brand, Model = p.Model, Image = x.Path, FriendlyUrl = p.FriendlyUrl, SellPrice = p.SellPrice, DiscountPercentage = p.DiscountPercentage, Votes = p.Votes, TotalRating = p.TotalRating }).ToList<Product>(); foreach (Product p in products) { temp.Add(p); } IEnumerable temp2 = temp.GroupBy(x => x.Id).Distinct(); IEnumerator e = temp.GetEnumerator(); while (e.MoveNext()) { Product c = e.Current as Product; Products2.Add(c); } } pf.Products = Products2;// return type must be List<Product> }
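    For reference, a sketch of the usual fix (assuming the intent is one Product per Id): GroupBy on its own never drops rows, and Distinct() on a reference type compares references, so take one element from each group instead, and enumerate that result rather than the original temp list.

      // Hedged sketch: exactly one Product per Id survives, and the result is already a List<Product>.
      List<Product> distinctProducts = temp
          .GroupBy(x => x.Id)      // one group per distinct Id
          .Select(g => g.First())  // one representative per group
          .ToList();
      pf.Products = distinctProducts;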

    Read the article

  • Best way to keep a .net client app updated with status of another application

    - by rwmnau
    I have a Windows service that's running all the time, and takes some action every 15 minutes. I also have a client WinForms app that displays some information about what the service is doing. I'd like the forms application to keep itself updated with a recent status, but I'm not sure if polling every second is a good move performance-wise. When it starts, my Windows Service opens a WCF named pipe to receive queries (from my client form) Every second, a timer on the winform sends a query to the pipe, and then displays the results. If the pipe isn't there, the form displays that the service isn't running. Is that the best way to do this? If my service opens the pipe when it starts, will it always stay open (until I close it or my service stops)? In addition to polling the service, maybe there's some way for the service to notify any watching applications of certain events, like starting and stopping processing? That way, I could poll less, since I'd presumably know about big events already, and would only be polling for progress. Anything that I'm missing?
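    If push notifications are worth exploring alongside (or instead of) the one-second poll, a duplex contract over the same named pipe is the usual WCF route; below is a bare sketch in which every name is illustrative and error handling is omitted.

      // Hedged sketch: the WinForms client calls Subscribe() once over net.pipe, and the service
      // pushes start/stop events through the callback channel, so the form can poll far less often.
      // using System; using System.ServiceModel;
      [ServiceContract(CallbackContract = typeof(IStatusCallback))]
      public interface IStatusService
      {
          [OperationContract]
          void Subscribe();

          [OperationContract]
          string GetCurrentStatus();   // still available for an occasional explicit poll
      }

      public interface IStatusCallback
      {
          [OperationContract(IsOneWay = true)]
          void ProcessingStarted(DateTime startedAtUtc);

          [OperationContract(IsOneWay = true)]
          void ProcessingFinished(DateTime finishedAtUtc);
      }

    NetNamedPipeBinding supports duplex channels, so the existing pipe endpoint can carry the callbacks; the form would still treat a faulted or missing channel as "service isn't running", exactly as it treats a missing pipe today.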

    Read the article

  • Stop SQL returning the same result twice in a JOIN

    - by nbs189
    I have joined together several tables to get the data I want, but since I am new to SQL I can't figure out how to stop data being returned more than once. Here's the SQL statement: SELECT T.url, T.ID, S.status, S.ID, E.action, E.ID, E.timestamp FROM tracks T, status S, events E WHERE S.ID AND T.ID = E.ID ORDER BY E.timestamp DESC The data that is returned is something like this:
    +-----+----+--------+----+----------------------+----+-----------+
    | URL | ID | Status | ID | action               | ID | timestamp |
    +-----+----+--------+----+----------------------+----+-----------+
    | T.1 | 4  | hello  | 4  | has uploaded a track | 4  | time      |
    | T.2 | 3  | bye    | 3  | has some news        | 3  | time      |
    | t.1 | 4  | more   | 4  | has some news        | 4  | time      |
    +-----+----+--------+----+----------------------+----+-----------+
    That's a very basic example but it does outline what happens. If you look at the third row, the URL is repeated when there is a different status. This is what I want to happen:
    +---------------+----+----------------------+-----------+
    | URL or Status | ID | action               | timestamp |
    +---------------+----+----------------------+-----------+
    | T.1           | 4  | has uploaded a track | time      |
    | hello         | 3  | has some news        | time      |
    | bye           | 4  | has some news        | time      |
    +---------------+----+----------------------+-----------+
    Please notice that the URL (in this case the mock one is T.1) is shown when the action is 'has uploaded a track'. This is very important. The action in the events table is inserted on trigger of a status or track insert. If a new track is inserted the action is 'has uploaded a track', and you can guess what it is for a status. The ID and timestamp are also inserted into the events table at this point. Note: there are more tables that go into the query, 3 more in fact, but I have left them out for simplicity.

    Read the article

  • PHP & MySQL username validation and storage problem.

    - by php
    For some reason, when a user enters a brand new username, the error message <p>Username unavailable</p> is displayed and the name is not stored. I was wondering if someone can help me find the flaw in my code so I can fix this error? Thanks. Here is the PHP code: if($_POST['username'] && trim($_POST['username'])!=='') { $u = "SELECT * FROM users WHERE username = '$username' AND user_id <> '$user_id'"; $r = mysqli_query ($mysqli, $u) or trigger_error("Query: $u\n<br />MySQL Error: " . mysqli_error($mysqli)); if (mysqli_num_rows($r) == TRUE) { echo '<p>Username unavailable</p>'; $_POST['username'] = NULL; } else if(isset($_POST['username']) && mysqli_num_rows($r) == 0 && strlen($_POST['username']) <= 255) { $username = mysqli_real_escape_string($mysqli, $_POST['username']); } else if($_POST['username'] && strlen($_POST['username']) >= 256) { echo '<p>Username can not exceed 255 characters</p>'; } }

    Read the article

  • Database solution for 200 million writes/day, monthly summarization queries

    - by sb
    Hello. I'm looking for help deciding on which database system to use. (I've been googling and reading for the past few hours; it now seems worthwhile to ask for help from someone with firsthand knowledge.) I need to log around 200 million rows (or more) per 8 hour workday to a database, then perform weekly/monthly/yearly summary queries on that data. The summary queries would be for collecting data for things like billing statements, eg. "How many transactions of type A did each user run this month?" (could be more complex, but that's the general idea). I can spread the database amongst several machines, as necessary, but I don't think I can take old data offline. I'll definitely need to be able to query a month's worth of data, maybe a year. These queries would be for my own use, and wouldn't need to be generated in real-time for an end-user (they could run overnight, if needed). Does anyone have any suggestions as to which databases would be a good fit? P.S. Cassandra looks like it would have no problem handling the writes, but what about the huge monthly table scans? Is anyone familiar with Cassandra/Hadoop MapReduce performance?

    Read the article

  • How to get a category listing from Magento?

    - by alex
    I want to create a page in Magento that shows a visual representation of the categories, for example: CATEGORY product 1 product 2 ANOTHER CATEGORY product 3. My problem is that their database is organised very differently from what I've seen in the past. They have tables dedicated to data types like varchar, int, etc. I assume this is for performance or similar. I haven't found a way to use MySQL to query the database and get a list of categories. I'd then like to match these categories to products, to get a listing of products for each category. Unfortunately Magento seems to make this very difficult. Also, I have not found a method that will work from within a page block. I have created showcase.phtml and put it in the XML layout, and it displays and runs its PHP code. I was hoping for something easy like looping through $this->getAllCategories(); and then a nested loop inside with something like $category->getChildProducts(); Can anyone help me? Thank you!

    Read the article

  • PHP - advice for java HashMap alternative in php?

    - by teutara
    I know this is a super-noob question and will be answered in no time, but I could not figure it out - sorry for any inconvenience. Here is the thing:
    ID  colA  colB   Length
    1   seq1  seq11  1
    2   seq1  seq11  11
    3   seq3  seq33  21
    4   seq3  seq33  14
    I have a DB with this kind of table; it has more than 10M rows. I want to loop through colA first, get the relevant colB value, and check if there are any other occurrences of the same value. For example, in colB (seq11) there are 2 occurrences of colA (seq1); in that case I have to combine them and output the sum of the lengths. Similar to this:
    ID  colA  colB   Length
    1   seq1  seq11  12
    2   seq3  seq33  35
    I am a bit of a Java guy, but because my colleague has written everything in PHP and this will just be an addition, I need a PHP solution. In Java I would have used a HashMap, so that I would have each colA value once and just increment the value of the "Length" column. I know it is not a proper question, but thank you in advance. EDIT: I tried this query in order to group the occurrences: SELECT COUNT(*) SeqName FROM SeqTable GROUP BY SeqName HAVING COUNT(*)>0;

    Read the article

  • How to add "missing" columns in a column group in reporting services?

    - by Gimly
    Hello, I'm trying to create a report that displays, for each month of the year, the quantity of goods sold. I have a query that returns the list of goods sold for each month; it looks something like this: SELECT Seller.FirstName, Seller.LastName, SellingHistory.Month, SUM(SellingHistory.QuantitySold) FROM SellingHistory JOIN Seller on SellingHistory.SellerId = Seller.SellerId WHERE SellingHistory.Year = @Year GROUP BY Seller.FirstName, Seller.LastName, SellingHistory.Month What I want to do is display a report that has a column for each month + a total column that will display for each Seller the quantity sold in the selected month. Seller Name | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec | Total What I managed to do, using a matrix and a column group (grouped on Month), is display the columns for existing data: if I have data from January to March, it will display the first 3 columns and the total. What I would like to do is always display all the columns. I thought about doing that by adding the missing months in the SQL query, but I find that a bit weird and I'm sure there must be a cleaner solution, as this is something that must be quite common. Thanks. PS: I'm using SQL Server Express 2008

    Read the article

  • How to structure code with 2 methods, one after another, which throw the same two exceptions?

    - by dotnetdev
    Hi, I have two methods, one called straight after another, which both throw the exact same 2 exceptions (IF an erroneous condition occurs; I'm not saying that I'm currently getting exceptions). For this, should I write separate try and catch blocks with the one statement in each try block, and catch both exceptions (both of which I can handle, as I checked the MSDN class library reference and there is something I can do, e.g. re-open the SqlConnection, or run a query rather than a stored proc which does not exist)? So code like this: try { obj.Open(); } catch (SqlException) { // Take action here. } catch (InvalidOperationException) { // Take action here. } And likewise for the other method I call straight after. This seems like a very messy way of coding. The other way is to code with the exception variable (which is omitted here, as I am using AOP to log the exception details via a class-level attribute). Doing this could aid me in finding out which method caused an exception and then taking action accordingly. Is this the best approach, or is there another best practice altogether? I also assume that, as only these two exceptions are thrown, I do not need to catch Exception, as that would be for an exception I cannot handle (causes way out of my control). Thanks
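    One way to avoid repeating the two catch blocks for each call, sketched as a suggestion rather than a prescription (the helper name, the operationName parameter and obj.RunQuery() are illustrative): route both statements through a small shared handler.

      // Hedged sketch: shared handling for the two exceptions both methods can throw.
      // using System; using System.Data.SqlClient;
      private static void ExecuteWithHandling(Action action, string operationName)
      {
          try
          {
              action();
          }
          catch (SqlException)
          {
              Console.Error.WriteLine("SQL failure during " + operationName); // or your AOP logging hook
              // Take action here, e.g. re-open the SqlConnection, then rethrow or swallow as appropriate.
              throw;
          }
          catch (InvalidOperationException)
          {
              Console.Error.WriteLine("Invalid operation during " + operationName);
              // Take action here for the second failure mode.
              throw;
          }
      }

      // Usage:
      // ExecuteWithHandling(() => obj.Open(), "open connection");
      // ExecuteWithHandling(() => obj.RunQuery(), "run query"); // RunQuery is an illustrative name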

    Read the article

  • Calling compiled C from R with .C()

    - by Sarah
    I'm trying to call a program (function getNBDensities in the C executable measurementDensities_out) from R. The function is passed several arrays and the variable double runsum. Right now, the getNBDensities function basically does nothing: it prints to screen the values of passed parameters. My problem is the syntax of calling the function: array(.C("getNBDensities", hr = as.double(hosp.rate), # a vector (s x 1) sp = as.double(samplingProbabilities), # another vector (s x 1) odh = as.double(odh), # another vector (s x 1) simCases = as.integer(x[c("xC1","xC2","xC3")]), # another vector (s x 1) obsCases = as.integer(y[c("yC1","yC2","yC3")]), # another vector (s x 1) runsum = as.double(runsum), # double DUP = TRUE, NAOK = TRUE, PACKAGE = "measurementDensities_out")$f, dim = length(y[c("yC1","yC2","yC3")]), dimnames = c("yC1","yC2","yC3")) The error I get, after proper execution of the function (i.e., the right output is printed to screen), is Error in dim(data) <- dim : attempt to set an attribute on NULL I'm unclear what the dimensions are that I should be passing the function: should it be s x 5 + 1 (five vectors of length s and one double)? I've tried all sorts of combinations (including sx5+1) and have found only seemingly conflicting descriptions/examples online of what's supposed to happen here. For those who are interested, the C code is below: #include <R.h> #include <Rmath.h> #include <math.h> #include <Rdefines.h> #include <R_ext/PrtUtil.h> #define NUM_STRAINS 3 #define DEBUG void getNBDensities( double *hr, double *sp, double *odh, int *simCases, int *obsCases, double *runsum ); void getNBDensities( double *hr, double *sp, double *odh, int *simCases, int *obsCases, double *runsum ) { #ifdef DEBUG for ( int s = 0; s < NUM_STRAINS; s++ ) { Rprintf("\nFor strain %d",s); Rprintf("\n\tHospitalization rate = %lg", hr[s]); Rprintf("\n\tSimulation probability = %lg",sp[s]); Rprintf("\n\tSimulated cases = %d",simCases[s]); Rprintf("\n\tObserved cases = %d",obsCases[s]); Rprintf("\n\tOverdispersion parameter = %lg",odh[s]); } Rprintf("\nRunning sum = %lg",runsum[0]); #endif } naive solution While better (i.e., potentially faster or syntactically clearer) solutions may exist (see Dirk's answer below), the following simplification of the code works: out<-.C("getNBDensities", hr = as.double(hosp.rate), sp = as.double(samplingProbabilities), odh = as.double(odh), simCases = as.integer(x[c("xC1","xC2","xC3")]), obsCases = as.integer(y[c("yC1","yC2","yC3")]), runsum = as.double(runsum)) The variables can be accessed in >out.

    Read the article

  • Optimize INSERT / UPDATE / DELETE operation

    - by clime
    I wonder if the following script can be optimized somehow. It does write a lot to disk because it deletes possibly up-to-date rows and reinserts them. I was thinking about applying something like "insert ... on duplicate key update" and found some possibilities for single-row updates but I don't know how to apply it in the context of INSERT INTO ... SELECT query. CREATE OR REPLACE FUNCTION update_member_search_index() RETURNS VOID AS $$ DECLARE member_content_type_id INTEGER; BEGIN member_content_type_id := (SELECT id FROM django_content_type WHERE app_label='web' AND model='member'); DELETE FROM watson_searchentry WHERE content_type_id = member_content_type_id; INSERT INTO watson_searchentry (engine_slug, content_type_id, object_id, object_id_int, title, description, content, url, meta_encoded) SELECT 'default', member_content_type_id, web_member.id, web_member.id, web_member.name, '', web_user.email||' '||web_member.normalized_name||' '||web_country.name, '', '{}' FROM web_member INNER JOIN web_user ON (web_member.user_id = web_user.id) INNER JOIN web_country ON (web_member.country_id = web_country.id) WHERE web_user.is_active=TRUE; END; $$ LANGUAGE plpgsql; EDIT: Schemas of web_member, watson_searchentry, web_user, web_country: http://pastebin.com/3tRVPPVi. (content_type_id, object_id_int) in watson_searchentry is unique pair in the table but atm the index is not present (there is no use for it). This script should be run at most once a day for full rebuilds of search index.

    Read the article

  • Best ways to format LINQ queries.

    - by Aren B
    Before you ignore / vote-to-close this question, I consider this a valid question to ask because code clarity is an important topic of discussion, it's essential to writing maintainable code and I would greatly appreciate answers from those who have come across this before. I've recently run into this problem, LINQ queries can get pretty nasty real quick because of the large amount of nesting. Below are some examples of the differences in formatting that I've come up with (for the same relatively non-complex query) No Formatting var allInventory = system.InventorySources.Select(src => new { Inventory = src.Value.GetInventory(product.OriginalProductId, true), Region = src.Value.Region }).GroupBy(i => i.Region, i => i.Inventory); Elevated Formatting var allInventory = system.InventorySources .Select(src => new { Inventory = src.Value.GetInventory(product.OriginalProductId, true), Region = src.Value.Region }) .GroupBy( i => i.Region, i => i.Inventory); Block Formatting var allInventory = system.InventorySources .Select( src => new { Inventory = src.Value.GetInventory(product.OriginalProductId, true), Region = src.Value.Region }) .GroupBy( i => i.Region, i => i.Inventory ); List Formatting var allInventory = system.InventorySources .Select(src => new { Inventory = src.Value.GetInventory(product.OriginalProductId, true), Region = src.Value.Region }) .GroupBy(i => i.Region, i => i.Inventory); I want to come up with a standard for linq formatting so that it maximizes readability & understanding and looks clean and professional. So far I can't decide so I turn the question to the professionals here.

    Read the article

  • Use LINQ to Sort and Filter items in a List<ReturnItem> collection, based on the values within a List<object> property

    - by Daniel McPherson
    This is tricky to explain. We have a DataTable that contains a user configurable selection of columns, which are not known at compile time. Every column in the DataTable is of type String. We need to convert this DataTable into a strongly typed Collection of "ReturnItem" objects so that we can then sort and filter using LINQ for use in our application. We have made some progress as follows: We started with the basic DataTable. We then process the DataTable, creating a new "ReturnItem" object for each row This "ReturnItem" object has just two properties: ID ( string ) and Columns( List(object) ). The properties collection contains one entry for each column, representing a single DataRow. Each property is made Strongly Typed (int, string, datetime, etc). For example it would add a new "DateTime" object to the "ReturnItem" Columns List containing the value of the "Created" Datatable Column. The result is a List(ReturnItem) that we would then like to be able to Sort and Filter using LINQ based on the value in one of the properties, for example, sort on "Created" date. We have been using the LINQ Dynamic Query Library, which gets us so far, but it doesn't look like the way forward because we are using it over a List Collection of objects. Basically, my question boils down to: How can I use LINQ, to Sort and Filter items in a List(ReturnItem) collection, based on the values within a List(object) property which is part of the ReturnItem class?
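    A small sketch of how this can look once a column's position is known (the createdIndex value and the seven-day filter are assumptions for illustration; the real index would come from the user-configured column list, and items is the List<ReturnItem> built from the DataTable):

      // Hedged sketch: Columns already holds strongly typed boxed values, so sorting and filtering
      // reduce to indexing into Columns and casting to the known runtime type of that column.
      int createdIndex = 3; // assumed position of the "Created" column in this configuration

      List<ReturnItem> sortedByCreated = items
          .OrderBy(r => (DateTime)r.Columns[createdIndex])
          .ToList();

      List<ReturnItem> createdThisWeek = items
          .Where(r => (DateTime)r.Columns[createdIndex] >= DateTime.Today.AddDays(-7))
          .ToList();

    If the cast should not be hard-coded, OrderBy(r => r.Columns[createdIndex], Comparer<object>.Default) also works for columns whose values implement IComparable, which the strongly typed values described above (int, string, DateTime) do.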

    Read the article

  • Display a web page from another site in an ASP.NET page

    - by Daniel
    Hi all, our customer has a requirement to extend the functionality of their existing large government project. It is an ASP.NET 3.5 (recently upgraded from 2.0) project. The existing solution is quite a behemoth that is almost unmaintainable, so they have decided that they want to provide the new functionality by hosting it on another website that is shown within the existing website. I'm not quite sure right now how this is best done, or whether there are any security issues preventing it or that need to be considered. Essentially the user would log on to the existing web site as normal, and when clicking on a certain link the page would load as normal with some kind of frame or control that has within it the contents of the page from the other site. I.e., they do not want to simply redirect to the other site; they want to show it embedded within the current one such that the existing menus etc. are still available. I believe that if information needed to be passed to the embedded page, it would be done using query strings, as I'm not sure if there is even another way to accomplish this. Can anyone give me some pointers on where to start looking to implement this, or any potential pitfalls I should be aware of? Thanks

    Read the article

  • C#: Object having two constructors: how to limit which properties are set together?

    - by Dr. Zim
    Say you have a Price object that accepts either an (int quantity, decimal price) or a string containing "4/$3.99". Is there a way to limit which properties can be set together? Feel free to correct me in my logic below. The Test: A and B are equal to each other, but the C example should not be allowed. Thus the question How to enforce that all three parameters are not invoked as in the C example? AdPrice A = new AdPrice { priceText = "4/$3.99"}; // Valid AdPrice B = new AdPrice { qty = 4, price = 3.99m}; // Valid AdPrice C = new AdPrice { qty = 4, priceText = "2/$1.99", price = 3.99m};// Not The class: public class AdPrice { private int _qty; private decimal _price; private string _priceText; The constructors: public AdPrice () : this( qty: 0, price: 0.0m) {} // Default Constructor public AdPrice (int qty = 0, decimal price = 0.0m) { // Numbers only this.qty = qty; this.price = price; } public AdPrice (string priceText = "0/$0.00") { // String only this.priceText = priceText; } The Methods: private void SetPriceValues() { var matches = Regex.Match(_priceText, @"^\s?((?<qty>\d+)\s?/)?\s?[$]?\s?(?<price>[0-9]?\.?[0-9]?[0-9]?)"); if( matches.Success) { if (!Decimal.TryParse(matches.Groups["price"].Value, out this._price)) this._price = 0.0m; if (!Int32.TryParse(matches.Groups["qty"].Value, out this._qty)) this._qty = (this._price > 0 ? 1 : 0); else if (this._price > 0 && this._qty == 0) this._qty = 1; } } private void SetPriceString() { this._priceText = (this._qty > 1 ? this._qty.ToString() + '/' : "") + String.Format("{0:C}",this.price); } The Accessors: public int qty { get { return this._qty; } set { this._qty = value; this.SetPriceString(); } } public decimal price { get { return this._price; } set { this._price = value; this.SetPriceString(); } } public string priceText { get { return this._priceText; } set { this._priceText = value; this.SetPriceValues(); } } }
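    One hedged sketch of how the C case can be ruled out at compile time (a design alternative, not a correction of the class as posted): drop the public setters and construct through factory methods, so a caller can supply the text form or the numeric form but never both.

      // Hedged sketch: with no public setters there is no object-initializer route into an
      // inconsistent state; the two factory methods are the only ways to build an AdPrice.
      public class AdPrice
      {
          public int Qty { get; private set; }
          public decimal Price { get; private set; }
          public string PriceText { get; private set; }

          private AdPrice() { } // callers must go through a factory method

          public static AdPrice FromNumbers(int qty, decimal price)
          {
              var p = new AdPrice { Qty = qty, Price = price };
              p.PriceText = (qty > 1 ? qty + "/" : "") + string.Format("{0:C}", price);
              return p;
          }

          public static AdPrice FromText(string priceText)
          {
              var p = new AdPrice { PriceText = priceText };
              // parse priceText into Qty and Price here, as the original SetPriceValues() does
              return p;
          }
      }

      // var a = AdPrice.FromText("4/$3.99");   // valid
      // var b = AdPrice.FromNumbers(4, 3.99m); // valid
      // The invalid C example can no longer be expressed at all.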

    Read the article

  • SQL: find entries in 1:n relation that don't comply with condition spanning multiple rows

    - by milianw
    I'm trying to optimize SQL queries in Akonadi and came across the following problem, which is apparently not easy to solve with SQL, at least for me: Assume the following table structure (should work in SQLite, PostgreSQL, MySQL): CREATE TABLE a ( a_id INT PRIMARY KEY ); INSERT INTO a (a_id) VALUES (1), (2), (3), (4); CREATE TABLE b ( b_id INT PRIMARY KEY, a_id INT, name VARCHAR(255) NOT NULL ); INSERT INTO b (b_id, a_id, name) VALUES (1, 1, 'foo'), (2, 1, 'bar'), (3, 1, 'asdf'), (4, 2, 'foo'), (5, 2, 'bar'), (6, 3, 'foo'); Now my problem is to find entries in a that are missing name entries in table b. E.g. I need to make sure each entry in a has at least the name entries "foo" and "bar" in table b. Hence the query should return something similar to: a_id = 3 is missing name "bar" a_id = 4 is missing name "foo" and "bar" Since both tables are potentially huge in Akonadi, performance is of utmost importance. One solution in MySQL would be: SELECT a.a_id, CONCAT('|', GROUP_CONCAT(name ORDER BY NAME ASC SEPARATOR '|'), '|') as names FROM a LEFT JOIN b USING( a_id ) GROUP BY a.a_id HAVING names IS NULL OR names NOT LIKE '%|bar|foo|%'; I have yet to measure the performance tomorrow, but I severely doubt it's at all fast for tens of thousands of entries in a and thrice as many in b. Furthermore, we want to support SQLite and PostgreSQL, where to my knowledge the GROUP_CONCAT function is not available. Thanks, good night.

    Read the article
