Search Results

Search found 18272 results on 731 pages for 'ldap query'.


  • Using an embedded DB (SQLite / SQL Compact) for Message Passing within an app?

    - by wk1989
    Hello, just out of curiosity: for applications that have a fairly complicated module tree, would something like SQLite or SQL Compact Edition work well for message passing? Suppose I have modules containing data such as:

        \SubsystemA\SubSubSysB\ModuleB\ModuleDataC
        \SubSystemB\SubSubSystemC\ModuleA\ModuleDataX

    With traditional message passing/routing, you have to go through intermediate modules in order to pass a message to ModuleB to request, say, ModuleDataC. Instead of doing that, if we simply store "\SubsystemA\SubSubSysB\ModuleB\ModuleDataC" in a SQLite database, getting that data is as simple as a SQL query and needs no routing or passing messages around.

    Has anyone done this before? Even if you haven't, do you foresee any issues or performance impact? The only concern I have right now is the passing of custom types: if ModuleDataC is a custom data structure or a pointer, I'll need some way of storing the structure or the pointer in the DB. Thanks, JW

    EDIT: One use case I hadn't thought about is sending a message from ModuleA to ModuleB to get ModuleB to do something, rather than just getting/setting data. Is it possible to do this using an embedded DB? I believe a callback from the DB would be needed; how feasible is this?
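
    A minimal sketch of what such a registry table might look like, assuming module data is serialized to a string or blob before storage (the table and column names here are hypothetical):

        -- one row per addressable piece of module data
        CREATE TABLE module_data (
            path    TEXT PRIMARY KEY,   -- e.g. '\SubsystemA\SubSubSysB\ModuleB\ModuleDataC'
            payload BLOB,               -- serialized value; custom types must be serializable
            updated INTEGER             -- last-write timestamp, if staleness matters
        );

        -- a reader then fetches by path instead of routing a message
        SELECT payload FROM module_data
        WHERE path = '\SubsystemA\SubSubSysB\ModuleB\ModuleDataC';

    Note that this only covers the data-lookup case; the "make ModuleB do something" scenario from the edit would still need some notification mechanism layered on top of the database.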

    Read the article

  • Hibernate HQL m:n join problem

    - by smallufo
    I am very unfamiliar with SQL/HQL, and am currently stuck on what may be a simple problem. I have two entities in a many-to-many relationship, with an association table: Car, CarProblem, and Problem. One Car may have many Problems, one Problem may appear in many Cars, and CarProblem is the association table with other properties. Now I want to find the Car(s) with a specified Problem. How do I write such HQL? All ids are of type Long. I've tried a lot of join / inner-join combinations, but all in vain.

    Update: sorry, I forgot to mention that Car has many CarProblem, Problem has many CarProblem, and Car and Problem are not directly connected as Java objects.

    Update: Java code below.

        @Entity
        public class Car extends Model {
            @OneToMany(mappedBy="car", cascade=CascadeType.ALL)
            public Set<CarProblem> carProblems;
        }

        @Entity
        public class CarProblem extends Model {
            @ManyToOne
            public Car car;

            @ManyToOne
            public Problem problem;

            // ... other properties
        }

        @Entity
        public class Problem extends Model {
            // other properties ...
            // no link back to CarProblem; it seems unrelated to this problem

            // **This is a very stupid query, I want to get rid of it ...**
            public List<Car> findCars() {
                List<CarProblem> list = CarProblem.find(
                    "from CarProblem as cp where cp.problem.id = ? ", id).fetch();
                Set<Car> result = new HashSet<Car>();
                for (CarProblem cp : list)
                    result.add(cp.car);
                return new ArrayList<Car>(result);
            }
        }

    The Model class is from the Play! framework, so these properties are all public.
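
    Since Car and Problem are only connected through CarProblem, a single HQL query along the lines of select distinct cp.car from CarProblem cp where cp.problem.id = :problemId should be able to replace findCars(). As a rough sketch, the SQL it needs to boil down to looks like the following (the join-column names are assumptions based on typical Hibernate defaults):

        SELECT DISTINCT c.*
        FROM Car c
        JOIN CarProblem cp ON cp.car_id = c.id
        WHERE cp.problem_id = ?;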

    Read the article

  • Adding extra data to a Variable

    - by DogPooOnYourShoe
    Right now, my code only pulls out one value from MySQL. So I thought I might as well add each found result to a variable, but I don't know how to do this. This must be a very basic question, but I can't find an answer for it.

        echo '<table border="1">';
        echo "<tr><td><b>Surname</b></td><td><b>Title/Name</b></td><td><b>Numbers</b></td><td><b>Telephone</b></td><td><b>Edit</b></td><td><b>Del</b></td></tr>\n";

        while ($row = mysql_fetch_array($result)) {
            $Surname        = $row["Surname"];
            $Title          = $row["TitleName"];
            $Email          = $row["Email"];
            $Telephone      = $row["Telephone"];
            $id             = $row["id"];
            $MooringNumbers = $row['Number'];
            $Assignedto['AssignedTo'];
        }

        $MooringQuery  = "select * FROM mooring WHERE AssignedTo='$id'";
        $MooringResult = mysql_query($MooringQuery) or die("Couldn't execute query");

        while ($row1 = mysql_fetch_array($MooringResult)) {
            $AssignedTo     = $row1["AssignedTo"];
            $MooringNumbers = $row1["Number"];
            echo '<tr><td>' . $Surname . '</td><td>' . $Title . '</td><td>' . $MooringNumbers . '</td><td>'
                . $Telephone . '</td><td>'
                . '<a href="rlayCustomerUpdtForm.php?id=' . $id . '">[EDIT]</a></td>'
                . '<td><a href="deleteCustomer.php?id=' . $id . '">[x]</a></td>'
                . '</tr>';
        }
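
    One way to avoid carrying values across two separate loops is to let the database combine the rows up front. As a rough sketch, assuming the first query reads from a customer table keyed by id (that table name is an assumption, since the first query isn't shown), a single join returns one row per customer/mooring pair:

        SELECT c.Surname, c.TitleName, c.Telephone, c.id, m.Number
        FROM customer AS c
        LEFT JOIN mooring AS m ON m.AssignedTo = c.id;

    Each fetched row then carries everything the HTML row needs, so the inner query and the per-column variables are no longer required.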

    Read the article

  • Adding fields until screen is full

    - by Eric
    For the sake of this question, let us suppose that I want a row of buttons. I want to put as many buttons in that row as I can fit on the screen, but no more. In other words, as long as a prospective button will not be cut off or have its text shortened, add it. It seems that I should be able to do something like:

        HorizontalFieldManager hfm = new HorizontalFieldManager();
        int remainingWidth = Display.getWidth();
        int i = 0;
        while (true) {
            ButtonField bf = new ButtonField("B " + i);
            remainingWidth -= bf.getWidth();
            if (remainingWidth < 0)
                break;
            hfm.add(bf);
            i++;
        }
        add(hfm);

    But this doesn't work: bf.getWidth() is always 0. I suspect that this is because the button has not yet been laid out when I query for the width. So, perhaps I could just make sure the buttons are always the same size. But this won't work for a few reasons: different BB platforms have different looks for buttons, and text that will fit on a button on a Curve won't fit on a button on a Storm; 3rd-party themes may change the look of buttons, so I can't even count on buttons being a certain size on a certain platform.

    Is there no way for me to actually check the remaining space before adding a button? It feels like a fairly useful feature; am I just missing something?

    Read the article

  • Objective C: Function returns correct data the first time it is called and null on subsequent calls

    - by Kooshal Bhungy
    Hi all, I am a beginner in Objective-C. I am implementing a function that queries a web server and displays the returned string in the console. I am calling the function (getDatafromServer) repeatedly in a loop. The problem is that the first time I get the value, whereas on subsequent calls it returns (null) in the console. I've read about memory management and checked the forums, but nothing has worked. Can you please tell me where I am going wrong in the code below? Thanks in advance.

        @implementation RequestThread

        + (void)startthread:(id)param {
            while (true) {
                //NSLog(@"Test threads");
                sleep(5);
                NSLog(@"%@", [self getDatafromServer]);
            }
        }

        + (NSString *)getDatafromServer {
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
            NSString *myRequestString = @"name=Hello%20&[email protected]";
            NSData *myRequestData = [NSData dataWithBytes:[myRequestString UTF8String]
                                                   length:[myRequestString length]];
            NSMutableURLRequest *request = [[NSMutableURLRequest alloc] initWithURL:
                [NSURL URLWithString:@"http://192.168.1.32/gs/includes/widget/getcalls.php?user=asdasd&passw=asdasdasd"]];
            [request setHTTPMethod:@"POST"];
            [request setHTTPBody:myRequestData];
            [request setValue:@"application/x-www-form-urlencoded" forHTTPHeaderField:@"content-type"];
            NSData *returnData = [NSURLConnection sendSynchronousRequest:request
                                                        returningResponse:nil
                                                                    error:nil];
            NSString *myString = [NSString stringWithUTF8String:[returnData bytes]];
            [myRequestString release];
            [request release];
            [returnData release];
            return myString;
            [pool release];
        }

        @end

    Read the article

  • Cannot perform an ORDERBY against my EF4 data

    - by Jaxidian
    I have a query hitting EF4 using STEs and I'm having an issue with user-defined sorting. While debugging this, I removed the dynamic sorting and hard-coded it, and I still have the issue. If I swap/uncomment the "var results = ..." lines in GetMyBusinesses(), my results are not sorted any differently: they always come back sorted ascending. FYI, Name is a varchar(200) field in SQL 2008 on my Business table.

        private IQueryable<Business> GetMyBusinesses(MyDBContext CurrentContext)
        {
            var myBusinesses = from a in CurrentContext.A
                               join f in CurrentContext.F on a.FID equals f.id
                               join b in CurrentContext.Businesses on f.BID equals b.id
                               where a.PersonID == 52
                               select b;

            var results = from r in myBusinesses
                          orderby "Name" ascending
                          select r;

            //var results = from r in results
            //              orderby "Name" descending
            //              select r;

            return results;
        }

        private PartialEntitiesList<Business> DoStuff()
        {
            var myBusinesses = GetMyBusinesses();
            var myBusinessesCount = GetMyBusinesses().Count();
            Results = new PartialEntitiesList<Business>(
                myBusinesses.Skip((PageNumber - 1) * PageSize).Take(PageSize).ToList())
                { UnpartialTotalCount = myBusinessesCount };
            return Results;
        }

        public class PartialEntitiesList<T> : List<T>
        {
            public PartialEntitiesList() { }
            public PartialEntitiesList(int capacity) : base(capacity) { }
            public PartialEntitiesList(IEnumerable<T> collection) : base(collection) { }
            public int UnpartialTotalCount { get; set; }
        }

    Read the article

  • Persist data when the table was not mapped (JPA EclipseLink)

    - by enrique
    Hi everybody, I need some help persisting data into a table that has not been mapped. The issue is that our database has a table whose columns are all foreign keys, so when the whole database is mapped, all of the other tables are mapped correctly but this table, called "category", is not. The way we can browse its data is by going through the tables that have a relation to "category", using the @JoinTable annotation the system set on them, so we can use the collections and perform a query. But the problem comes when I want to persist data into that table, because there is no entity for it. We tried to persist through the collections, but no luck. Then I tried creating the entity, with its PK and facade, all by hand. However, when I try to persist using the merge method, the system performs an INSERT when it is supposed to perform an UPDATE, so it returns an error. Does anybody have an idea about this situation? Thanks.
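
    If mapping the join table as its own entity keeps fighting the existing @JoinTable mappings, one fallback is to write the row directly with a native query and leave the mapped collections read-only. A rough sketch, assuming "category" is a pure join table whose two foreign-key columns are called something like product_id and group_id (both column names are assumptions, since the real schema isn't shown):

        INSERT INTO category (product_id, group_id)
        VALUES (?, ?);

    In JPA this would run through entityManager.createNativeQuery(...).executeUpdate() inside a transaction; the safer long-term fix is usually to let the owning entity's collection manage the join table, so that persist/merge on the owner keeps "category" in sync.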

    Read the article

  • PROLOG - DCG parsing

    - by user2895589
    Hello, I am new to Prolog and DCGs. I want to write a DCG to parse time expressions like "10.20 am" or "12 oclock". How can I check whether "10.20 am" is a valid expression or not? For oclock I wrote some code:

        oclock --> digit1, phrase1.

        digit1 --> [T], {digit1(T)}.
        digit1(1).
        digit1(2).
        digit1(3).
        digit1(4).
        digit1(5).
        digit1(6).
        digit1(7).
        digit1(8).
        digit1(9).
        digit1(10).
        digit1(11).
        digit1(12).

        phrase1 --> [P], {phrase1(P)}.
        phrase1(Oclock).

    I am checking with the query oclock([1,oclock],[]). Can someone help me with this?

    Read the article

  • Silverlight logging out causes "Object reference not set to an instance"

    - by Alex
    I am using the Silverlight 4 Business Application Template. I've created a DomainDataSource in XAML like so:

        <riaControls:DomainDataSource x:Name="LogData" QueryName="GetLogs" AutoLoad="True" LoadSize="20">
            <riaControls:DomainDataSource.DomainContext>
                <local:AdminDomainContext />
            </riaControls:DomainDataSource.DomainContext>
            <riaControls:DomainDataSource.QueryParameters>
                <riaControls:Parameter ParameterName="UserLoginID"
                                       Value="{Binding Path=User.UserLoginID, Source={StaticResource WebContext}}" />
            </riaControls:DomainDataSource.QueryParameters>
        </riaControls:DomainDataSource>

    The problem I'm experiencing is that whenever I log out, I get:

        Load operation failed for query 'GetLogs'. Object reference not set to an instance of an object.

    I assume that because I've logged out, User.UserLoginID is now null and is causing the exception. So... anybody know a good way for me to solve this? I don't really want to set the QueryParameter programmatically.

    Read the article

  • Querying XML using node numbers

    - by CP
    Okay, so I'm writing a utility that compares two XML documents using Microsoft's XML diff/patch tool. The result looks something like this:

        <?xml version="1.0" encoding="utf-16"?>
        <xd:xmldiff version="1.0" srcDocHash="10728157883908851288"
                    options="IgnoreChildOrder IgnoreComments IgnoreWhitespace " fragments="yes"
                    xmlns:xd="http://schemas.microsoft.com/xmltools/2002/xmldiff">
          <xd:node match="1">
            <xd:node match="1">
              <xd:node match="1">
                <xd:node match="2">
                  <xd:node match="1">
                    <xd:node match="1">
                      <xd:node match="2">
                        <xd:change match="1">testi22n2123</xd:change>
                      </xd:node>
                    </xd:node>
                    <xd:add match="/1/1/1/2/1/8" opid="1" />
                    <xd:node match="7">
                      <xd:node match="1">
                        <xd:change match="1">31</xd:change>
                      </xd:node>
                      <xd:node match="2">
                        <xd:change match="1">test2ing</xd:change>
                      </xd:node>
                    </xd:node>
                    <xd:remove match="8" opid="1" />
                  </xd:node>
                </xd:node>
              </xd:node>
            </xd:node>
          </xd:node>
          <xd:descriptor opid="1" type="move" />
        </xd:xmldiff>

    What I'm trying to do is go back into the source document and get the source data that represents the difference. I initially tried creating an XPath query, but as I understand it now this XmlDiff thing works off the DOM... which seems like the dinosaur of XML objects these days. What's the best way to get at the node in the source XML by using the numbers provided in the diff result?

    Read the article

  • How to update a string property of an sqlite database item

    - by Thomas Joos
    Hi all, I'm trying to write an application that checks whether there is an active internet connection. If so, it reads an XML feed and checks every 'lastupdated' item (a PHP-generated string). It compares it to the database items, and if there is a new value, that particular item needs to be updated. My code seems to work (it compiles, no error messages, no failures), but I notice that the property in question does not change; it becomes (null). When I output the bound string value it returns the correct string I want to write into the DB. Any idea what I'm doing wrong?

        const char *sql = "update myTable Set last_updated=? Where node_id =?";
        sqlite3_stmt *statement;

        // Preparing a statement compiles the SQL query into a byte-code program in the SQLite library.
        // The third parameter is either the length of the SQL string or -1 to read up to the first null terminator.
        if (sqlite3_prepare_v2(database, sql, -1, &statement, NULL) == SQLITE_OK) {
            NSLog(@"last updated item: %@", [d lastupdated]);
            sqlite3_bind_text(statement, 1, [d lastupdated], -1, SQLITE_TRANSIENT);
            sqlite3_bind_int(statement, 2, [d node_id]);
        } else {
            NSLog(@"SQLite statement error!");
        }

        if (SQLITE_DONE != sqlite3_step(statement)) {
            NSAssert1(0, @"Error while updating. '%s'", sqlite3_errmsg(database));
        } else {
            NSLog(@"SQLite Update done!");
        }

    Read the article

  • Returning objects in php

    - by user220201
    I see similar questions asked, but I seem to have a problem with more basic stuff than what was asked: how do I declare a variable in PHP? My specific problem is that I have a function that reads a DB table and returns the record (only one) as an object.

        class User {
            public $uid;
            public $name;
            public $status;
        }

        function GetUserInfo($uid) {
            // Query DB
            $userObj = new User();
            // convert the result into the User object.
            var_dump($userObj);
            return $userObj;
        }

        // In another file I call the above function.
        ....
        $newuser = GetUserInfo($uid);
        var_dump($newuser);

    What is the problem here? I cannot understand it. Essentially, the var_dump() inside GetUserInfo() works fine, but the var_dump() after the call to GetUserInfo() does not. Thanks for any help. S

    Read the article

  • Best ways to construct Dynamic Search Conditions for Sql

    - by CoolBeans
    I have always wondered what's the best way to achieve this task. In most web-based applications you have to provide search options on many different criteria, and based on which criteria are chosen you modify your SQL behind the scenes. Generally, this is how I tend to go about it: have a base SQL template with placeholder conditions like WHERE [#PRE_COND1] AND [#PRE_COND2], and so on. An example template might look like:

        SELECT NAME, AGE
        FROM PERSONS [,#TABLE2] [,#TABLE3]
        WHERE [#PRE_COND1] AND [#PRE_COND2]
        ORDER BY [#ORD_COND1] AND [#ORD_COND2]

    At run time, after figuring out all the search criteria the user has entered, I replace the [#PRE_COND1]s and [#ORD_COND1]s with the appropriate SQL and then execute the query. I personally do not like this brute-force method, but I have never come across a better approach either. How do you accomplish such tasks, given that you are using either native JDBC or Spring JDBC? It is almost like I need C-macro-like functionality in Java to do this.
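
    One common alternative that avoids string splicing altogether is a single static statement in which every optional criterion is guarded by an "is null" check on its bind parameter. A rough sketch (the column and filter names are only illustrative):

        SELECT name, age
        FROM persons
        WHERE (? IS NULL OR name LIKE ?)   -- bind the name filter twice, or NULL to skip it
          AND (? IS NULL OR age >= ?)      -- minimum-age filter, NULL to skip it
        ORDER BY name;

    With plain JDBC each unused filter is simply bound to NULL; with Spring JDBC, NamedParameterJdbcTemplate makes the double binding less awkward. The trade-off is that one plan has to serve every combination of filters, so for large tables the template-splicing approach (or a criteria/query builder) can still win on performance.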

    Read the article

  • NHibernate Criteria Find by Id

    - by user315648
    Hello, I have 2 entities:

        public class Authority : Entity
        {
            [NotNull, NotEmpty]
            public virtual string Name { get; set; }

            [NotNull]
            public virtual AuthorityType Type { get; set; }
        }

        public class AuthorityType : Entity
        {
            [NotNull, NotEmpty]
            public virtual string Name { get; set; }

            public virtual string Description { get; set; }
        }

    Now I wish to find all authorities from the repository by type. I tried doing it like this:

        public IList<Authority> GetAuthoritiesByType(int id)
        {
            ICriteria criteria = Session.CreateCriteria(typeof(Authority));
            criteria.Add(Restrictions.Eq("Type.Id", id));
            IList<Authority> authorities = criteria.List<Authority>();
            return authorities;
        }

    However, I'm getting an error that something is wrong with the SQL ("could not execute query"). The inner exception is: {"Invalid column name 'TypeFk'.\r\nInvalid column name 'TypeFk'."} Any advice? Any other approach? Best wishes, Andrew

    Read the article

  • Performance when querying a View

    - by Nate Bross
    I'm wondering if this is a bad practice or if in general this is the correct approach. Let's say that I've created a view that combines a few attributes from a few tables. My question: what do I need to do so I can query against this view as if it were a table, without worrying about performance? All attributes in the original tables are indexed. My concern is that the resulting view will have hundreds of thousands of records, which I will want to narrow down quite a bit based on user input. What I'd like to avoid is having multiple versions of the code that generates this view floating around, each with a few extra WHERE conditions to facilitate the user-input filtering. For example, assume my view has the header VIEW(Name, Type, DateEntered) and may have 100,000+ rows (possibly millions). I'd like to be able to create this view in SQL Server and then, in my application, write queries like this:

        SELECT Name, Type, DateEntered
        FROM MyView
        WHERE DateEntered BETWEEN @date1 AND @date2;

    Basically, I am denormalizing my data for a series of reports that need to be run, and I'd like to centralize where I pull the data from. Maybe I'm not looking at this problem from the right angle, though, so I'm open to alternative ways to attack this.
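
    If the view's query turns out to be too heavy to re-evaluate on every report, one option in SQL Server is to index the view so its result set is materialized and maintained automatically. A minimal sketch, assuming the underlying column list is unique enough to support a unique clustered index (add the source table's key column if it isn't; the source table name here is hypothetical):

        CREATE VIEW dbo.MyView
        WITH SCHEMABINDING
        AS
        SELECT t.Name, t.Type, t.DateEntered
        FROM dbo.SourceTable AS t;
        GO

        CREATE UNIQUE CLUSTERED INDEX IX_MyView_DateEntered
            ON dbo.MyView (DateEntered, Name, Type);
        GO

    Indexed views come with a long list of requirements (SCHEMABINDING, two-part table names, deterministic expressions, no outer joins), so if the base query doesn't qualify, a plain view over well-indexed tables plus the WHERE filter shown above is usually fine.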

    Read the article

  • Handling large datasets with PHP/Drupal

    - by jo
    Hi all, I have a report page that deals with ~700k records from a database table. I can display this on a web page using paging to break up the results. However, my export-to-PDF/CSV functions rely on processing the entire data set at once, and I'm hitting my 256MB memory limit at around 250k rows. I don't feel comfortable increasing the memory limit, and I haven't got the ability to use MySQL's SELECT INTO OUTFILE to just serve a pre-generated CSV. However, I can't really see a way of serving up large data sets with Drupal using something like:

        $form = array();
        $table_headers = array();
        $table_rows = array();

        $data = db_query("a query to get the whole dataset");
        while ($row = db_fetch_object($data)) {
            $table_rows[] = $row->some_attribute;
        }

        $form['report'] = array('#value' => theme('table', $table_headers, $table_rows));
        return $form;

    Is there a way of getting around what is essentially appending to a giant array of arrays? At the moment I don't see how I can offer any meaningful report pages with Drupal because of this. Thanks
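
    For the CSV export specifically, one way to stay under the memory limit is to stream the file in chunks instead of building the whole result set in one array: write each chunk straight to php://output and page through the table with a keyset query. A rough sketch of the SQL side, assuming the data lives in a table with an auto-increment primary key called id (the table and column names are illustrative):

        -- repeat, feeding the last id seen back into the next query,
        -- until fewer than 10000 rows come back
        SELECT id, title, created
        FROM report_data
        WHERE id > :last_id
        ORDER BY id
        LIMIT 10000;

    Each chunk is fetched, written out, and discarded before the next one is requested, so peak memory stays at roughly one chunk regardless of the total row count.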

    Read the article

  • change postgres date format

    - by Jay
    Is there a way to change the default format of a date in Postgres? Normally when I query a Postgres database, dates come out as yyyy-mm-dd hh:mm:ss+tz, like 2011-02-21 11:30:00-05. But in one particular program the dates come out as yyyy-mm-dd hh:mm:ss.s, that is, there is no time zone and it shows tenths of a second. Apparently something is changing the default date format, but I don't know what or where. I don't think it's a server-side configuration parameter, because I can access the same database with a different program and I get the format with the timezone. I care because it appears to be ignoring my "set timezone" calls in addition to changing the format; all times come out EST.

    Additional info: if I write "select somedate from sometable" I get the "no timezone" format. But if I write "select to_char(somedate::timestamptz, 'yyyy-mm-dd hh24:mi:ss-tz')" then timezones work as I would expect. This really sounds to me like something is implicitly applying "to_char(date::timestamp, 'yyyy-mm-dd hh24:mi:ss.m')" to every timestamp. But I can't find anything in the documentation about how I would do this if I wanted to, nor can I find anything in the code that appears to do it. Though as I don't know what to look for, that doesn't prove much.
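
    Two session-level settings plus the column's own type usually account for this kind of difference, so a quick sanity check run from inside the misbehaving program might look like the following sketch (pg_typeof is available from PostgreSQL 8.4 on):

        SHOW datestyle;                                      -- output style, e.g. 'ISO, MDY'
        SHOW timezone;                                       -- session time zone
        SELECT pg_typeof(somedate) FROM sometable LIMIT 1;   -- timestamp vs timestamptz

        -- a client library (or its connection options) may issue settings like these on connect:
        SET datestyle TO 'ISO, MDY';
        SET timezone TO 'America/New_York';

    If the column turns out to be "timestamp without time zone", no DateStyle or TimeZone setting will make a zone offset appear, which would also explain why the explicit ::timestamptz cast behaves as expected.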

    Read the article

  • MSSQL 2008 - Bit Param Evaluation alters Execution Plan

    - by Nathanial Woolls
    I have been working on migrating some of our data from Microsoft SQL Server 2000 to 2008. Among the usual hiccups and whatnot, I've run across something strange. Linked below is a SQL query that returns very quickly under 2000, but takes 20 minutes under 2008. I have read quite a bit on upgrading SQL Server and went down the usual paths of checking indexes, statistics, etc. before coming to the conclusion that the following statement, found in the WHERE clause, causes the execution plan for the steps that follow it to change dramatically:

        And ( @bOnlyUnmatched = 0 -- offending line
              Or Not Exists(

    The SQL statements and execution plans are linked below. A coworker was able to rewrite a portion of the WHERE clause using a CASE statement, which seems to "trick" the optimizer into using a better execution plan. The version with the CASE statement is also contained in the linked archive. I'd like to see if someone has an explanation as to why this is happening, and whether there may be a more elegant solution than using a CASE statement. While we can work around this specific issue, I'd like to have a broader understanding of what is happening to ensure the rest of the migration is as painless as possible. Zip file with SQL statements and XML execution plans. Thanks in advance!
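
    The "@flag = 0 OR NOT EXISTS (...)" shape is a classic catch-all predicate: the cached plan has to be safe for every possible value of @bOnlyUnmatched, so the optimizer cannot assume the EXISTS branch is skippable. Besides the CASE rewrite, two common alternatives (sketched here against placeholder tables, not the actual linked query) are recompiling per execution or splitting the branches:

        -- option 1: let the optimizer see the runtime value of the flag
        SELECT t.*
        FROM dbo.SomeTable AS t
        WHERE ( @bOnlyUnmatched = 0
                OR NOT EXISTS (SELECT 1 FROM dbo.OtherTable AS o WHERE o.Id = t.Id) )
        OPTION (RECOMPILE);

        -- option 2: IF/ELSE with one statement per flag value,
        -- so each branch gets its own, simpler plan
        IF @bOnlyUnmatched = 0
            SELECT t.* FROM dbo.SomeTable AS t;
        ELSE
            SELECT t.*
            FROM dbo.SomeTable AS t
            WHERE NOT EXISTS (SELECT 1 FROM dbo.OtherTable AS o WHERE o.Id = t.Id);

    OPTION (RECOMPILE) trades a compile on every execution for a plan tailored to the actual parameter values, which is often an acceptable cost for a reporting-style query like this one.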

    Read the article

  • How can I make "month" columns in Sql?

    - by Beska
    I've got a set of data that looks something like this (VERY simplified):

        productId  Qty  dateOrdered
        ---------  ---  -----------
        1          2    10/10/2008
        1          1    11/10/2008
        1          2    10/10/2009
        2          3    10/12/2009
        1          1    10/15/2009
        2          2    11/15/2009

    Out of this, we're trying to create a query to get something like:

        productId  Year  Jan  Feb  Mar  Apr  May  Jun  Jul  Aug  Sep  Oct  Nov  Dec
        ---------  ----  ---  ---  ---  ---  ---  ---  ---  ---  ---  ---  ---  ---
        1          2008  0    0    0    0    0    0    0    0    0    2    1    0
        1          2009  0    0    0    0    0    0    0    0    0    3    0    0
        2          2009  0    0    0    0    0    0    0    0    0    3    2    0

    The way I'm doing this now, I'm doing 12 selects, one for each month, and putting those in temp tables. I then do a giant join. Everything works, but this guy is dog slow. I know this isn't much to go on, but knowing that I barely qualify as a tyro in the db world, I'm wondering if there is a better high-level approach to this that I might try. (I'm guessing there is.) (I'm using MS SQL Server, so answers that are specific to that DB are fine.) (I'm just starting to look at PIVOT as a possible help, but I don't know anything about it yet, so if someone wants to comment about that, that might be helpful as well.)
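
    PIVOT does cover this case in a single pass over the data. A rough sketch, assuming the rows live in a table called Orders with the columns shown above (swap in the real table name):

        SELECT productId,
               [Year],
               ISNULL([1], 0)  AS [Jan], ISNULL([2], 0)  AS [Feb], ISNULL([3], 0)  AS [Mar],
               ISNULL([4], 0)  AS [Apr], ISNULL([5], 0)  AS [May], ISNULL([6], 0)  AS [Jun],
               ISNULL([7], 0)  AS [Jul], ISNULL([8], 0)  AS [Aug], ISNULL([9], 0)  AS [Sep],
               ISNULL([10], 0) AS [Oct], ISNULL([11], 0) AS [Nov], ISNULL([12], 0) AS [Dec]
        FROM (
            SELECT productId,
                   YEAR(dateOrdered)  AS [Year],
                   MONTH(dateOrdered) AS [Month],
                   Qty
            FROM Orders
        ) AS src
        PIVOT (
            SUM(Qty) FOR [Month] IN ([1], [2], [3], [4], [5], [6], [7], [8], [9], [10], [11], [12])
        ) AS p
        ORDER BY productId, [Year];

    The inner SELECT reduces each order to (product, year, month, qty); PIVOT then sums Qty into one column per month, and ISNULL turns the missing months into zeros.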

    Read the article

  • Most efficient way to LIMIT results in a JOIN?

    - by johnnietheblack
    I have a fairly simple one-to-many type join in a MySQL query. In this case, I'd like to LIMIT my results by the left table. For example, let's say I have an accounts table and a comments table, and I'd like to pull 100 rows from accounts along with all the associated comments rows for each. The only way I can think to do this is with a sub-select in the FROM clause instead of simply selecting FROM accounts. Here is my current idea:

        SELECT a.*, c.*
        FROM (SELECT * FROM accounts LIMIT 100) a
        LEFT JOIN `comments` c ON c.account_id = a.id
        ORDER BY a.id

    However, whenever I need to do a sub-select of some sort, my intermediate-level SQL knowledge feels like it's doing something wrong. Is there a more efficient, or faster, way to do this, or is this pretty good? By the way, this might be the absolute simplest way to do it, which I'm okay with as an answer. I'm simply trying to figure out if there IS another way that could potentially compete with the above statement in terms of speed.
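
    The derived-table approach is the standard way to limit on the left side of a one-to-many join in MySQL; the main refinement worth making is to put an ORDER BY inside the subquery, since LIMIT without ORDER BY returns an arbitrary 100 accounts. A sketch:

        SELECT a.*, c.*
        FROM (SELECT * FROM accounts ORDER BY id LIMIT 100) AS a
        LEFT JOIN comments AS c ON c.account_id = a.id
        ORDER BY a.id;

    If accounts has many columns, selecting only the needed ones inside the derived table also keeps the intermediate result small.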

    Read the article

  • Detecting well behaved / well known bots

    - by Simon_Weaver
    I found this question very interesting: Programmatic Bot Detection. I have a very similar question, but I'm not bothered about 'badly behaved bots'. I am tracking (in addition to Google Analytics) the following per visit: entry URL, referer, UserAgent, AdWords (by means of query string), and whether or not the user made a purchase, etc.

    The problem is that to calculate any kind of conversion rate I'm ending up with lots of 'bot' visits that are greatly skewing my results. I'd like to ignore as many bot visits as possible, but I want a solution that I don't need to monitor too closely, that won't itself be a performance hog, and that preferably still works if someone has JavaScript disabled. Are there good published lists of the top 100 or so bots? I did find a list at http://www.user-agents.org/ but that appears to contain hundreds if not thousands of bots, and I don't want to check every referer against thousands of links. Here is the current Googlebot UserAgent. How often does it change?

        Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)

    Read the article

  • All connections in pool are in use

    - by veljkoz
    We currently have a little situation on our hands: it seems that someone, somewhere forgot to close a connection in code. The result is that the pool of connections gets exhausted relatively quickly. As a temporary patch we added Max Pool Size = 500; to our connection string on the web service, and we recycle the pool when all connections are spent, until we figure this out. So far we have done this:

        SELECT SPId
        FROM MASTER..SysProcesses
        WHERE DBId = DB_ID('MyDb') and last_batch < DATEADD(MINUTE, -15, GETDATE())

    to get the SPIDs that haven't been used for 15 minutes. We're now trying to get the query that was last executed on each of those SPIDs with:

        DBCC INPUTBUFFER(61)

    but the queries displayed are various, meaning either something at the base level of connection handling is broken, or our deduction is erroneous... Is there an error in our thinking here? Does the DBCC / sysprocesses approach give the results we're expecting, or is there some side effect we're missing (for example, the influence of connections sitting in the pool)? (Please stick to what we can find out using SQL, since the guys who wrote the code are many and not all present right now.)
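
    On SQL Server 2005 and later, the DMVs give the same information in one set-based query, which is easier to scan than running DBCC INPUTBUFFER per SPID. A sketch, filtering by idle time the same way as above:

        SELECT s.session_id,
               s.host_name,
               s.program_name,
               s.login_name,
               s.last_request_end_time,
               t.text AS last_sql
        FROM sys.dm_exec_sessions AS s
        JOIN sys.dm_exec_connections AS c ON c.session_id = s.session_id
        CROSS APPLY sys.dm_exec_sql_text(c.most_recent_sql_handle) AS t
        WHERE s.is_user_process = 1
          AND s.last_request_end_time < DATEADD(MINUTE, -15, GETDATE())
        ORDER BY s.last_request_end_time;

    Note that pooled connections the application has "leaked" still look idle from the server side, so the most useful columns here are host_name/program_name (to find which client is holding them) and last_sql (the last statement each orphaned session ran, which usually points at the code path that forgot to close its connection).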

    Read the article

  • Advice on Minimizing Stored Procedure Parameters

    - by RPM1984
    Hi guys, I have an ASP.NET MVC web application that interacts with a SQL Server 2008 database via Entity Framework 4.0. On a particular page, I call a stored procedure in order to pull back some results based on selections in the UI. Now, the UI has around 20 different input selections, ranging from a textbox to dropdown lists to checkboxes, and the inputs are grouped into logical sections. Example:

        Search box:  "Foo"
        Checkbox A1: ticked,  Checkbox A2: unticked
        Dropdown A:  option 3 selected
        Checkbox B1: ticked,  Checkbox B2: ticked,  Checkbox B3: unticked

    So I need to call the stored procedure like this:

        exec SearchPage_FindResults
            @SearchQuery = 'Foo',
            @IncludeA1 = 1, @IncludeA2 = 0,
            @DropDownSelection = 3,
            @IncludeB1 = 1, @IncludeB2 = 1, @IncludeB3 = 0

    The UI is not too important to this question; I just wanted to give some perspective. Essentially, I'm pulling back results for a search query and filtering those results based on a bunch of (optional) selections a user can make. My questions: What's the best way to pass these parameters to the stored procedure? Are there any tricks or newer features (e.g. SQL Server 2008) for this, such as special table parameters/arrays — can we pass through user-defined types? Keep in mind I'm using Entity Framework 4.0, but I could always use classic ADO.NET for this if required. What about XML — what are the serialization/deserialization costs there, and is it worth it? How about a parameter for each logical section, comma-separated perhaps? Just thinking out loud. This page is particularly important from a user point of view and needs to perform really well. The stored procedure is already heavy in logic, so I want to minimize the performance implications. With that said, what is the best approach here?
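
    SQL Server 2008 does support passing sets of values as table-valued parameters, which maps well onto "one parameter per logical section". A minimal sketch of what the procedure signature could look like (the type, table, and column names here are only illustrative; TVP parameters must be declared READONLY):

        CREATE TYPE dbo.IdList AS TABLE (Id int NOT NULL PRIMARY KEY);
        GO

        CREATE PROCEDURE dbo.SearchPage_FindResults
            @SearchQuery   nvarchar(200) = NULL,
            @SectionAFlags dbo.IdList READONLY,   -- ids of the ticked checkboxes in section A
            @DropDownA     int = NULL,
            @SectionBFlags dbo.IdList READONLY    -- ids of the ticked checkboxes in section B
        AS
        BEGIN
            SET NOCOUNT ON;
            SELECT r.*
            FROM dbo.Results AS r                 -- hypothetical results table
            WHERE (@SearchQuery IS NULL OR r.Name LIKE '%' + @SearchQuery + '%')
              AND (NOT EXISTS (SELECT 1 FROM @SectionAFlags)
                   OR EXISTS (SELECT 1 FROM @SectionAFlags AS f WHERE f.Id = r.SectionAOptionId));
        END
        GO

    On the client side a TVP is filled from a DataTable or an IEnumerable<SqlDataRecord> through classic ADO.NET (SqlDbType.Structured); Entity Framework 4.0 cannot bind TVPs directly, so that one call would go through a plain SqlCommand.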

    Read the article

  • LINQ to SQL: NOTing a prebuilt expression

    - by ck
    I'm building a library of functions for one of my core L2S classes, all of which return a bool to allow checking for certain situations. Example:

        Expression<Func<Account, bool>> IsSomethingX =
            a => a.AccountSupplementary != null
                 && a.AccountSupplementary.SomethingXFlag != null
                 && a.AccountSupplementary.SomethingXFlag.Value;

    Now, to query where this is not true, I CAN'T do this:

        var myAccounts = context.Accounts
                                .Where(!IsSomethingX); // does not compile

    However, using the syntax from the PredicateBuilder class, I've come up with this:

        public static IQueryable<T> WhereNot<T>(this IQueryable<T> items,
                                                Expression<Func<T, bool>> expr1)
        {
            var invokedExpr = Expression.Invoke(expr1, expr1.Parameters.Cast<Expression>());
            return items.Where(Expression.Lambda<Func<T, bool>>
                (Expression.Not(invokedExpr), expr1.Parameters));
        }

        var myAccounts = context.Accounts
                                .WhereNot(IsSomethingX); // does compile

    which actually produces the correct SQL. Does this look like a good solution, and is there anything I need to be aware of that might cause me problems in the future?

    Read the article

  • Caching queries in Django

    - by dolma33
    In a Django project I only need to cache a few queries, using a cache table instead of memcached because of server limitations. One of those queries looks like this: let's say I have a Parent object, which has a lot of Child objects, and I need to store the result of the simple query parent.children.all(). I have no problem with that, and everything works as expected with code like:

        key = "%s_children" % (parent.name)
        value = cache.get(key)
        if value is None:
            cache.set(key, parent.children.all(), CACHE_TIMEOUT)
            value = cache.get(key)

    But sometimes, just sometimes, cache.set does nothing, and after executing cache.set, cache.get(key) keeps returning None. After some testing, I've noticed that cache.set stops working when parent.children.all().count() gets larger. That means that if I'm storing (for example) 600 children objects in the key, it works fine, but it won't work with 1200 children. So my question is: is there a limit to the data that a key can store? How can I override it?

    Second question: which way is "better", the above code or the following one?

        key = "%s_children" % (parent.name)
        value = cache.get(key)
        if value is None:
            value = parent.children.all()
            cache.set(key, value, CACHE_TIMEOUT)

    The second version won't cause errors if cache.set doesn't work, so it could be a workaround for my issue, but obviously not a solution. In general, forgetting about my issue, which version would you consider "better"?

    Read the article
