Search Results

Search found 22756 results on 911 pages for 'power query'.

Page 263/911 | < Previous Page | 259 260 261 262 263 264 265 266 267 268 269 270  | Next Page >

  • How does the Select statement work in a Dynamic Linq Query?

    - by Richard77
    Hello. 1) I have a Product table with 4 columns: ProductID, Name, Category, and Price. Here's the regular LINQ query for this table: public ActionResult Index() { private ProductDataContext db = new ProductDataContext(); var products = from p in db.Products where p.Category == "Soccer" select new ProductInfo { Name = p.Name, Price = p.Price}; return View(products); } Where ProductInfo is just a class that contains 2 properties (Name and Price). The Index page inherits ViewPage<IEnumerable<ProductInfo>>. Everything works fine. 2) To dynamically execute the above query, I do this: public ActionResult Index() { var products = db.Products .Where("Category = \"Soccer\"") .Select(/* WHAT SHOULD I WRITE HERE TO SELECT NAME & PRICE? */); return View(products); } I'm using both the 'System.Linq.Dynamic' namespace and DynamicLibrary.cs (downloaded from ScottGu's blog). Here are my questions: What expression do I use to select only Name and Price? (Most importantly) How do I retrieve the data in my view? (i.e. what type should the ViewPage inherit? ProductInfo?)
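
    A hedged sketch of how this is often handled with the Dynamic LINQ sample library, assuming its string-based Select with the "new(...)" projection syntax from DynamicLibrary.cs; the typed alternative below keeps the existing ProductInfo view model so the ViewPage stays strongly typed:

        // Dynamic projection: returns a non-generic IQueryable of generated DynamicClass
        // objects, which a ViewPage<IEnumerable<ProductInfo>> cannot bind to directly.
        var dynamicProducts = db.Products
            .Where("Category = @0", "Soccer")
            .Select("new(Name, Price)");

        // Typed alternative: keep the dynamic Where string but project into ProductInfo,
        // so the view can keep inheriting ViewPage<IEnumerable<ProductInfo>>.
        var typedProducts = db.Products
            .Where("Category = @0", "Soccer")
            .Select(p => new ProductInfo { Name = p.Name, Price = p.Price });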

    Read the article

  • Why can I query with an int but not a string here? PHP MySQL Datatypes

    - by CT
    I am working on an Asset Database problem. I receive $id from $_GET["id"]; I then query the database and display the results. This works if my id is an integer like "93650" but if it has other characters like "wci1001", it displays this MySQL error: Unknown column 'text' in 'where clause' All fields in tables are of type: VARCHAR(50) What would I need to do to be able to use this query to search by id that includes other characters? Thank you. <?php <?php /* * ASSET DB FUNCTIONS SCRIPT * */ # connect to database function ConnectDB(){ mysql_connect("localhost", "asset_db", "asset_db") or die(mysql_error()); mysql_select_db("asset_db") or die(mysql_error()); } # find asset type returns $type function GetAssetType($id){ $sql = "SELECT asset.type From asset WHERE asset.id = $id"; $result = mysql_query($sql) or die(mysql_error()); $row = mysql_fetch_assoc($result); $type = $row['type']; return $type; } # query server returns $result (sql query array) function QueryServer($id){ $sql = " SELECT asset.id ,asset.company ,asset.location ,asset.purchaseDate ,asset.purchaseOrder ,asset.value ,asset.type ,asset.notes ,server.manufacturer ,server.model ,server.serialNumber ,server.esc ,server.warranty ,server.user ,server.prevUser ,server.cpu ,server.memory ,server.hardDrive FROM asset LEFT JOIN server ON server.id = asset.id WHERE asset.id = $id "; $result = mysql_query($sql); return $result; } # get server data returns $serverArray function GetServerData($result){ while($row = mysql_fetch_assoc($result)) { $id = $row['id']; $company = $row['company']; $location = $row['location']; $purchaseDate = $row['purchaseDate']; $purchaseOrder = $row['purchaseOrder']; $value = $row['value']; $type = $row['type']; $notes = $row['notes']; $manufacturer = $row['manufacturer']; $model = $row['model']; $serialNumber = $row['serialNumber']; $esc = $row['esc']; $warranty = $row['warranty']; $user = $row['user']; $prevUser = $row['prevUser']; $cpu = $row['cpu']; $memory = $row['memory']; $hardDrive = $row['hardDrive']; $serverArray = array($id, $company, $location, $purchaseDate, $purchaseOrder, $value, $type, $notes, $manufacturer, $model, $serialNumber, $esc, $warranty, $user, $prevUser, $cpu, $memory, $hardDrive); } return $serverArray; } # print server table function PrintServerTable($serverArray){ $id = $serverArray[0]; $company = $serverArray[1]; $location = $serverArray[2]; $purchaseDate = $serverArray[3]; $purchaseOrder = $serverArray[4]; $value = $serverArray[5]; $type = $serverArray[6]; $notes = $serverArray[7]; $manufacturer = $serverArray[8]; $model = $serverArray[9]; $serialNumber = $serverArray[10]; $esc = $serverArray[11]; $warranty = $serverArray[12]; $user = $serverArray[13]; $prevUser = $serverArray[14]; $cpu = $serverArray[15]; $memory = $serverArray[16]; $hardDrive = $serverArray[17]; echo "<table width=\"100%\" border=\"0\"><tr><td style=\"vertical-align:top\"><table width=\"100%\" border=\"0\"><tr><td colspan=\"2\"><h2>General Info</h2></td></tr><tr id=\"hightlight\"><td>Asset ID:</td><td>"; echo $id; echo "</td></tr><tr><td>Company:</td><td>"; echo $company; echo "</td></tr><tr id=\"hightlight\"><td>Location:</td><td>"; echo $location; echo "</td></tr><tr><td>Purchase Date:</td><td>"; echo $purchaseDate; echo "</td></tr><tr id=\"hightlight\"><td>Purchase Order #:</td><td>"; echo $purchaseOrder; echo "</td></tr><tr><td>Value:</td><td>"; echo $value; echo "</td></tr><tr id=\"hightlight\"><td>Type:</td><td>"; echo $type; echo "</td></tr><tr><td>Notes:</td><td>"; echo $notes; echo 
"</td></tr></table></td><td style=\"vertical-align:top\"><table width=\"100%\" border=\"0\"><tr><td colspan=\"2\"><h2>Server Info</h2></td></tr><tr id=\"hightlight\"><td>Manufacturer:</td><td>"; echo $manufacturer; echo "</td></tr><tr><td>Model:</td><td>"; echo $model; echo "</td></tr><tr id=\"hightlight\"><td>Serial Number:</td><td>"; echo $serialNumber; echo "</td></tr><tr><td>ESC:</td><td>"; echo $esc; echo "</td></tr><tr id=\"hightlight\"><td>Warranty:</td><td>"; echo $warranty; echo "</td></tr><tr><td colspan=\"2\">&nbsp;</td></tr><tr><td colspan=\"2\"><h2>User Info</h2></td></tr><tr id=\"hightlight\"><td>User:</td><td>"; echo $user; echo "</td></tr><tr><td>Previous User:</td><td>"; echo $prevUser; echo "</td></tr></table></td><td style=\"vertical-align:top\"><table width=\"100%\" border=\"0\"><tr><td colspan=\"2\"><h2>Specs</h2></td></tr><tr id=\"hightlight\"><td>CPU:</td><td>"; echo $cpu; echo "</td></tr><tr><td>Memory:</td><td>"; echo $memory; echo "</td></tr><tr id=\"hightlight\"><td>Hard Drive:</td><td>"; echo $hardDrive; echo "</td></tr><tr><td colspan=\"2\">&nbsp;</td></tr><tr><td colspan=\"2\">&nbsp;</td></tr><tr><td colspan=\"2\"><h2>Options</h2></td></tr><tr><td colspan=\"2\"><a href=\"#\">Edit Asset</a></td></tr><tr><td colspan=\"2\"><a href=\"#\">Delete Asset</a></td></tr></table></td></tr></table>"; } ?> __ /* * View Asset * */ # include functions script include "functions.php"; $id = $_GET["id"]; if (empty($id)):$id="000"; endif; ConnectDB(); $type = GetAssetType($id); ?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <link rel="stylesheet" type="text/css" href="style.css" /> <title>Wagman IT Asset</title> </head> <body> <div id="page"> <div id="header"> <img src="images/logo.png" /> </div> </div> <div id="content"> <div id="container"> <div id="main"> <div id="menu"> <ul> <table width="100%" border="0"> <tr> <td width="15%"></td> <td width="30%%"><li><a href="index.php">Search Assets</a></li></td> <td width="30%"><li><a href="addAsset.php">Add Asset</a></li></td> <td width="25%"></td> </tr> </table> </ul> </div> <div id="text"> <ul> <li> <h1>View Asset</h1> </li> </ul> <?php if (empty($type)):echo "<ul><li><h2>Asset ID does not match any database entries.</h2></li></ul>"; else: switch ($type){ case "Server": $result = QueryServer($id); $ServerArray = GetServerData($result); PrintServerTable($ServerArray); break; case "Desktop"; break; case "Laptop"; break; } endif; ?> </div> </div> </div> <div class="clear"></div> <div id="footer" align="center"> <p>&nbsp;</p> </div> </div> <div id="tagline"> Wagman Construction - Bridging Generations since 1902 </div> </body> </html>

    Read the article

  • NHibernate2 query is weird when fetching the collection from the proxy. Is this correct behavior?

    - by ensecoz
    These are my classes: public class User { public virtual int Id { get; set; } public virtual string Name { get; set; } public virtual IList<UserFriend> Friends { get; protected set; } } public class UserFriend { public virtual int Id { get; set; } public virtual User User { get; set; } public virtual User Friend { get; set; } } This is my mapping (Fluent NHibernate): public class UserMap : ClassMap<User> { public UserMap() { Id(x => x.Id, "UserId").GeneratedBy.Identity(); HasMany<UserFriend>(x => x.Friends); } } public class UserFriendMap : ClassMap<UserFriend> { public UserFriendMap() { Id(x => x.Id, "UserFriendId").GeneratedBy.Identity(); References<User>(x => x.User).TheColumnNameIs("UserId").CanNotBeNull(); References<User>(x => x.Friend).TheColumnNameIs("FriendId").CanNotBeNull(); } } The problem is when I execute this code: User user = repository.Load(1); User friend = repository.Load(2); UserFriend userFriend = new UserFriend(); userFriend.User = user; userFriend.Friend = friend; friendRepository.Save(userFriend); var friends = user.Friends; At the last line, NHibernate generates this query for me: SELECT friends0_.UserId as UserId1_, friends0_.UserFriendId as UserFrie1_1_, friends0_.UserFriendId as UserFrie1_6_0_, friends0_.FriendId as FriendId6_0_, friends0_.UserId as UserId6_0_ FROM "UserFriend" friends0_ WHERE friends0_.UserId=@p0; @p0 = '1' QUESTION: Why does the query look so weird? It should select only 3 fields (UserFriendId, UserId, FriendId), am I right? Or is there something going on inside NHibernate?

    Read the article

  • Need advice on comparing the performance of 2 equivalent linq to sql queries

    - by uvita
    I am working on a tool to optimize LINQ to SQL queries. Basically it intercepts the LINQ execution pipeline and makes some optimizations, for example removing a redundant join from a query. Of course, there is an overhead in the execution time before the query gets executed in the DBMS, but then the query should be processed faster. I don't want to use a SQL profiler because I know that the generated query will perform better in the DBMS than the original one; I am looking for a correct way of measuring the global time between the creation of the query in LINQ and the end of its execution. Currently, I am using the Stopwatch class and my code looks something like this: var sw = new Stopwatch(); sw.Start(); const int amount = 100; for (var i = 0; i < amount; i++) { ExecuteNonOptimizedQuery(); } sw.Stop(); Console.WriteLine("Executing the query {2} times took: {0}ms. On average, each query took: {1}ms", sw.ElapsedMilliseconds, sw.ElapsedMilliseconds / amount, amount); Basically the ExecuteNonOptimizedQuery() method creates a new DataContext, creates a query and then iterates over the results. I did this for both versions of the query, the normal one and the optimized one. I took the idea from this post from Frans Bouma. Are there any other approaches/considerations I should take? Thanks in advance!
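
    A small refinement of that harness, offered as a sketch (the method name follows the question): a warm-up call keeps JIT compilation and connection-pool setup out of the measured loop, and Stopwatch.StartNew() starts timing only when the measured work begins:

        // Warm-up run so one-time costs (JIT, connection pool) hit neither variant.
        ExecuteNonOptimizedQuery();

        const int amount = 100;
        var sw = Stopwatch.StartNew();
        for (var i = 0; i < amount; i++)
        {
            ExecuteNonOptimizedQuery();
        }
        sw.Stop();
        Console.WriteLine(
            "Executing the query {0} times took {1} ms ({2} ms per query on average).",
            amount, sw.ElapsedMilliseconds, sw.ElapsedMilliseconds / (double)amount);

    Running the optimized and non-optimized versions in the same process, in alternating order, also helps cancel out caching effects on the database side.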

    Read the article

  • "Order By" in LINQ-to-SQL Causes performance issues

    - by panamack
    I've set out to write a method in my C# application which can return an ordered subset of names from a table containing about 2000 names starting at the 100th name and returning the next 20 names. I'm doing this so I can populate a WPF DataGrid in my UI and do some custom paging. I've been using LINQ to SQL but hit a snag with this long executing query so I'm examining the SQL the LINQ query is using (Query B below). Query A runs well: SELECT TOP (20) [t0].[subject_id] AS [Subject_id], [t0].[session_id] AS [Session_id], [t0].[name] AS [Name] FROM [Subjects] AS [t0] WHERE (NOT (EXISTS( SELECT NULL AS [EMPTY] FROM ( SELECT TOP (100) [t1].[subject_id] FROM [Subjects] AS [t1] WHERE [t1].[session_id] = 1 ORDER BY [t1].[name] ) AS [t2] WHERE [t0].[subject_id] = [t2].[subject_id] ))) AND ([t0].[session_id] = 1) Query B takes 40 seconds: SELECT TOP (20) [t0].[subject_id] AS [Subject_id], [t0].[session_id] AS [Session_id], [t0].[name] AS [Name] FROM [Subjects] AS [t0] WHERE (NOT (EXISTS( SELECT NULL AS [EMPTY] FROM ( SELECT TOP (100) [t1].[subject_id] FROM [Subjects] AS [t1] WHERE [t1].[session_id] = 1 ORDER BY [t1].[name] ) AS [t2] WHERE [t0].[subject_id] = [t2].[subject_id] ))) AND ([t0].[session_id] = 1) ORDER BY [t0].[name] When I add the ORDER BY [t0].[name] to the outer query it slows down the query. How can I improve the second query? This was my LINQ stuff Nick int sessionId = 1; int start = 100; int count = 20; // Query subjects with the shoot's session id var subjects = cldb.Subjects.Where<Subject>(s => s.Session_id == sessionId); // Filter as per params var orderedSubjects = subjects .OrderBy<Subject, string>( s => s.Col_zero ); var filteredSubjects = orderedSubjects .Skip<Subject>(start) .Take<Subject>(count);
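
    For comparison, a hedged LINQ to SQL sketch of the same page (property names are assumed from the generated SQL above): ordering, projecting only the needed columns, and paging with Skip/Take in one statement, which on SQL Server 2005+ typically translates into a ROW_NUMBER-based paging query rather than the nested NOT EXISTS pattern:

        // Sketch only; Subject_id, Session_id and Name are assumed property names.
        var page = (from s in cldb.Subjects
                    where s.Session_id == sessionId
                    orderby s.Name
                    select new { s.Subject_id, s.Session_id, s.Name })
                   .Skip(start)
                   .Take(count)
                   .ToList();

    The other lever worth checking is an index on (session_id, name) covering the selected columns, since the ORDER BY is what the slow plan is paying for.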

    Read the article

  • Same query has nested loops when used with INSERT, but Hash Match without.

    - by AaronLS
    I have two tables, one has about 1500 records and the other has about 300000 child records. About a 1:200 ratio. I stage the parent table to a staging table, SomeParentTable_Staging, and then I stage all of it's child records, but I only want the ones that are related to the records I staged in the parent table. So I use the below query to perform this staging by joining with the parent tables staged data. --Stage child records INSERT INTO [dbo].[SomeChildTable_Staging] ([SomeChildTableId] ,[SomeParentTableId] ,SomeData1 ,SomeData2 ,SomeData3 ,SomeData4 ) SELECT [SomeChildTableId] ,D.[SomeParentTableId] ,SomeData1 ,SomeData2 ,SomeData3 ,SomeData4 FROM [dbo].[SomeChildTable] D INNER JOIN dbo.SomeParentTable_Staging I ON D.SomeParentTableID = I.SomeParentTableID; The execution plan indicates that the tables are being joined with a Nested Loop. When I run just the select portion of the query without the insert, the join is performed with Hash Match. So the select statement is the same, but in the context of an insert it uses the slower nested loop. I have added non-clustered index on the D.SomeParentTableID so that there is an index on both sides of the join. I.SomeParentTableID is a primary key with clustered index. Why does it use a nested loop for inserts that use a join? Is there a way to improve the performance of the join for the insert?

    Read the article

  • How do I construct a more complex single LINQ to XML query?

    - by Cyberherbalist
    I'm a LINQ newbie, so the following might turn out to be very simple and obvious once it's answered, but I have to admit that the question is kicking my arse. Given this XML: <measuresystems> <measuresystem name="SI" attitude="proud"> <dimension name="mass" dim="M" degree="1"> <unit name="kilogram" symbol="kg"> <factor name="hundredweight" foreignsystem="US" value="45.359237" /> <factor name="hundredweight" foreignsystem="Imperial" value="50.80234544" /> </unit> </dimension> </measuresystem> </measuresystems> I can query for the value of the conversion factor between kilogram and US hundredweight using the following LINQ to XML, but surely there is a way to condense the four successive queries into a single complex query? XElement mss = XElement.Load(fileName); IEnumerable<XElement> ms = from el in mss.Elements("measuresystem") where (string)el.Attribute("name") == "SI" select el; IEnumerable<XElement> dim = from e2 in ms.Elements("dimension") where (string)e2.Attribute("name") == "mass" select e2; IEnumerable<XElement> unit = from e3 in dim.Elements("unit") where (string)e3.Attribute("name") == "kilogram" select e3; IEnumerable<XElement> factor = from e4 in unit.Elements("factor") where (string)e4.Attribute("name") == "pound" && (string)e4.Attribute("foreignsystem") == "US" select e4; foreach (XElement ex in factor) { Console.WriteLine ((string)ex.Attribute("value")); }
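
    A single-query sketch of the same lookup (the factor name is written as "hundredweight" here to match the sample XML above; substitute whatever name is actually stored):

        var factors =
            from ms in mss.Elements("measuresystem")
            where (string)ms.Attribute("name") == "SI"
            from dim in ms.Elements("dimension")
            where (string)dim.Attribute("name") == "mass"
            from unit in dim.Elements("unit")
            where (string)unit.Attribute("name") == "kilogram"
            from factor in unit.Elements("factor")
            where (string)factor.Attribute("name") == "hundredweight"
                  && (string)factor.Attribute("foreignsystem") == "US"
            select (string)factor.Attribute("value");

        foreach (string value in factors)
        {
            Console.WriteLine(value);
        }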

    Read the article

  • rookie MySql question about paging; Is one query enough?

    - by Camran
    I have managed to get paging to work, almost. I want to display to the user the total number of records found and the currently displayed records. Ex: 4000 found, displaying 0-100. I am testing this with the number 2 (because I don't have that many records, have like 20). So I am using LIMIT $start, $nr_results; Do I have to make two queries in order to display the results the way I want: one query fetching all records (with mysql_num_rows to count them all) and then the one with the LIMIT? I have this: mysql_num_rows($qry_result); $total_pages = ceil($num_total / $res_per_page); //$res_per_page==2 and $num_total = 2 if ($p - 10 < 1) { $pagemin=1; } else { $pagemin = $p - 10; } if ($p + 10 > $total_pages) { $pagemax = $total_pages; } else { $pagemax = $p + 10; } Here is the query: SELECT mt.*, fordon.*, boende.*, elektronik.*, business.*, hem_inredning.*, hobby.* FROM classified mt LEFT JOIN fordon ON fordon.classified_id = mt.classified_id LEFT JOIN boende ON boende.classified_id = mt.classified_id LEFT JOIN elektronik ON elektronik.classified_id = mt.classified_id LEFT JOIN business ON business.classified_id = mt.classified_id LEFT JOIN hem_inredning ON hem_inredning.classified_id = mt.classified_id LEFT JOIN hobby ON hobby.classified_id = mt.classified_id ORDER BY modify_date DESC LIMIT 0, 2 Thanks; if you need more input let me know. Basically, the question is: do I have to make two queries?

    Read the article

  • What are good strategies for organizing single class per query service layer?

    - by KallDrexx
    Right now my Asp.net MVC application is structured as Controller - Services - Repositories. The services consist of aggregate root classes that contain methods. Each method is a specific operation that gets performed, such as retrieving a list of projects, adding a new project, or searching for a project etc. The problem with this is that my service classes are becoming really fat with a lot of methods. As of right now I am separating methods out into categories separated by #region tags, but this is quickly becoming out of control. I can definitely see it becoming hard to determine what functionality already exists and where modifications need to go. Since each method in the service classes are isolated and don't really interact with each other, they really could be more stand alone. After reading some articles, such as this, I am thinking of following the single query per class model, as it seems like a more organized solution. Instead of trying to figure out what class and method you need to call to perform an operation, you just have to figure out the class. My only reservation with the single query per class method is that I need some way to organize the 50+ classes I will end up with. Does anyone have any suggestions for strategies to best organize this type of pattern?
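
    A minimal sketch of the shape this usually takes; every name below is illustrative rather than taken from the question. Each operation becomes its own class behind a small interface, and the classes are then grouped into folders/namespaces by feature instead of piling up in one service:

        public interface IQuery<TResult>
        {
            TResult Execute();
        }

        // One file per operation, e.g. Queries/Projects/SearchProjectsByName.cs
        public class SearchProjectsByName : IQuery<IList<Project>>
        {
            private readonly ProjectDataContext _db;   // hypothetical data context
            private readonly string _searchTerm;

            public SearchProjectsByName(ProjectDataContext db, string searchTerm)
            {
                _db = db;
                _searchTerm = searchTerm;
            }

            public IList<Project> Execute()
            {
                return _db.Projects
                          .Where(p => p.Name.Contains(_searchTerm))
                          .ToList();
            }
        }

    Grouping the resulting classes by feature area (Projects, Users, and so on) keeps the 50+ classes navigable, and the controller only has to know which query class to construct.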

    Read the article

  • Inserting variables into a query string - it won't work!

    - by Jonesy
    Basically i have a query string that when i hardcode in the catalogue value its fine. when I try adding it via a variable it just doesn't pick it up. This works: Dim WaspConnection As New SqlConnection("Data Source=JURA;Initial Catalog=WaspTrackAsset_NROI;User id=" & ConfigurationManager.AppSettings("WASPDBUserName") & ";Password='" & ConfigurationManager.AppSettings("WASPDBPassword").ToString & "';") This doesn't: Public Sub GetWASPAcr() connection.Open() Dim dt As New DataTable() Dim username As String = HttpContext.Current.User.Identity.Name Dim sqlCmd As New SqlCommand("SELECT WASPDatabase FROM dbo.aspnet_Users WHERE UserName = '" & username & "'", connection) Dim sqlDa As New SqlDataAdapter(sqlCmd) sqlDa.Fill(dt) If dt.Rows.Count > 0 Then For i As Integer = 0 To dt.Rows.Count - 1 If dt.Rows(i)("WASPDatabase") Is DBNull.Value Then WASP = "" Else WASP = "WaspTrackAsset_" + dt.Rows(i)("WASPDatabase") End If Next End If connection.Close() End Sub Dim WaspConnection As New SqlConnection("Data Source=JURA;Initial Catalog=" & WASP & ";User id=" & ConfigurationManager.AppSettings("WASPDBUserName") & ";Password='" & ConfigurationManager.AppSettings("WASPDBPassword").ToString & "';") When I debug the catalog is empty in the query string but the WASP variable holds the value "WaspTrackAsset_NROI" Any idea's why? Cheers, jonesy

    Read the article

  • SQL Server service broker reporting as off when I have written a query to turn it on

    - by dotnetdev
    I have made a small ASP.NET website. It uses SqlCacheDependency, but I get this error: "The SQL Server Service Broker for the current database is not enabled, and as a result query notifications are not supported. Please enable the Service Broker for this database if you wish to use notifications." Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.InvalidOperationException: The SQL Server Service Broker for the current database is not enabled, and as a result query notifications are not supported. Please enable the Service Broker for this database if you wish to use notifications. Source Error: Line 12: System.Data.SqlClient.SqlDependency.Start(connString); This is the erroneous line in my global.asax. However, in SQL Server (2005), I enabled the Service Broker like so (I connect and run the SQL Server service when I debug my site): ALTER DATABASE mynewdatabase SET ENABLE_BROKER WITH ROLLBACK IMMEDIATE And this was successful. What am I missing? I am trying to use SQL cache dependency and have followed all procedures. Thanks

    Read the article

  • HTTP POST with URL query parameters -- good idea or not?

    - by Steven Huwig
    I'm designing an API to go over HTTP and I am wondering if using the HTTP POST command, but with URL query parameters only and no request body, is a good way to go. Considerations: "Good Web design" requires non-idempotent actions to be sent via POST. This is a non-idempotent action. It is easier to develop and debug this app when the request parameters are present in the URL. The API is not intended for widespread use. It seems like making a POST request with no body will take a bit more work, e.g. a Content-Length: 0 header must be explicitly added. It also seems to me that a POST with no body is a bit counter to most developers' and HTTP frameworks' expectations. Are there any more pitfalls or advantages to sending parameters on a POST request via the URL query rather than the request body? Edit: The reason this is under consideration is that the operations are not idempotent and have side effects other than retrieval. See the HTTP spec: In particular, the convention has been established that the GET and HEAD methods SHOULD NOT have the significance of taking an action other than retrieval. These methods ought to be considered "safe". This allows user agents to represent other methods, such as POST, PUT and DELETE, in a special way, so that the user is made aware of the fact that a possibly unsafe action is being requested. ... Methods can also have the property of "idempotence" in that (aside from error or expiration issues) the side-effects of N > 0 identical requests is the same as for a single request. The methods GET, HEAD, PUT and DELETE share this property. Also, the methods OPTIONS and TRACE SHOULD NOT have side effects, and so are inherently idempotent.
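
    For illustration, a hedged C# sketch of such a request using HttpClient (the endpoint URL is invented); passing an empty StringContent produces the Content-Length: 0 header without extra ceremony:

        using System.Net.Http;
        using System.Threading.Tasks;

        static async Task RestartJobAsync()
        {
            using (var client = new HttpClient())
            {
                // Non-idempotent action: all parameters travel in the query string,
                // and the request body stays empty.
                var uri = "https://api.example.test/jobs/123/restart?force=true";
                var response = await client.PostAsync(uri, new StringContent(string.Empty));
                response.EnsureSuccessStatusCode();
            }
        }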

    Read the article

  • Help with mysql sum and group query and managing results for jquery graph.

    - by Scarface
    I have a system I am trying to design that will retrieve information from a database, so that it can be plotted in a jquery graph. I need to retrieve the information and somehow put it in the necessary coordinates format (for example two coordinates var d = [[1269417600000, 10],[1269504000000, 15]];). My table that I am selecting from is a table that stores user votes with fields: points_id (1=vote up, 2=vote down), user_id, timestamp, topic_id What I need to do is select all the votes and somehow group them into respective days and then sum the difference between 1 votes and 2 votes for each day. I then need to somehow display the data in the appropriate plotting format shown earlier. For example April 1, 4 votes. The data needs to be separated by commas, except the last plot entry, so I am not sure how to approach that. I showed an example below of the kind of thing I need but it is not correct, echo "var d=["; $query=mysql_query( "SELECT *, SUM(IF(points_id = \"1\", 1,0))-SUM(IF([points_id = \"2\", 1,0)) AS 'total' FROM points LEFT JOIN topic ON topic.topic_id=points.topic_id WHERE topic.creator='$user' GROUP by timestamp HAVING certain time interval" ); while ($row=mysql_fetch_assoc($query)){ $timestamp=$row['timestamp']; $votes=$row['total']; echo "[$timestamp,$vote],"; } echo "];";

    Read the article

  • Why does my jQuery/YQL call not return anything?

    - by tastyapple
    I'm trying to access YQL with jQuery but am not getting a response: http://jsfiddle.net/tastyapple/grMb3/ Anyone know why? $(function(){ $.extend( { _prepareYQLQuery: function (query, params) { $.each( params, function (key) { var name = "#{" + key + "}"; var value = $.trim(this); if (!value.match(/^[0-9]+$/)) { value = '"' + value + '"'; } query = query.replace(name, value); } ); return query; }, yql: function (query) { var $self = this; var successCallback = null; var errorCallback = null; if (typeof arguments[1] == 'object') { query = $self._prepareYQLQuery(query, arguments[1]); successCallback = arguments[2]; errorCallback = arguments[3]; } else if (typeof arguments[1] == 'function') { successCallback = arguments[1]; errorCallback = arguments[2]; } var doAsynchronously = successCallback != null; var yqlJson = { url: "http://query.yahooapis.com/v1/public/yql", dataType: "jsonp", success: successCallback, async: doAsynchronously, data: { q: query, format: "json", env: 'store://datatables.org/alltableswithkeys', callback: "?" } } if (errorCallback) { yqlJson.error = errorCallback; } $.ajax(yqlJson); return $self.toReturn; } } ); $.yql( "SELECT * FROM github.repo WHERE id='#{username}' AND repo='#{repository}'", { username: "jquery", repository: "jquery" }, function (data) { if (data.results.repository["open-issues"].content > 0) { alert("Hey dude, you should check out your new issues!"); } } ); });

    Read the article

  • is it better to query database or grab from file? php & mysql

    - by pfunc
    I am keeping a large number of words in a database that I want to match up articles to. I was thinking that it would just be better to keep these words in an array and grab that array whenever needed instead of querying the database every time (since the words won't be changing that much). Is there much performance difference in doing this? And if I were to do this, how do I write a script that writes the array to a new PHP file? I tried writing the array like so: while( $row = mysql_fetch_assoc($query)) { $newArray[] = $row; } $fp = fopen('noWordsArr.php', 'w'); fwrite($fp, $newArray); fclose($fp); But all I get in the other file is "Array". So I figured I could write this and then have a cron job hit the file every few days or so in case things have changed. But I guess if there is no performance advantage then it probably won't be necessary and I can just query the database every time I need to access the words.

    Read the article

  • As3 & PHP URLEncoding problem!

    - by Jk_
    Hi everyone, I'm stuck with a stupid problem of encoding. My problem is that all my accentuated characters are displayed as weird iso characters. Example : é is displayed %E9 I send a string to my php file : XMLLoader.load(new URLRequest(online+"/query.php?Query=" + q)); XMLLoader.addEventListener(Event.COMPLETE,XMLLoaded); When I trace q, I get : "INSERT INTO hello_world (message) values('éàaà');" The GOOD query My php file look like this : <?php include("conection.php");//Conectiong to database $Q = $_GET['Query']; $query = $Q; $resultID = mysql_query($query) or die("Could not execute or probably SQL statement malformed (error): ". mysql_error()); $xml_output = "<?xml version=\"1.0\"?>\n"; // XML header $xml_output .= "<answers>\n"; $xml_output .= "<lastID id=".'"'.mysql_insert_id().'"'." />\n"; $xml_output .= "<query string=".'"'.$query.'"'." />\n"; $xml_output .= "</answers>"; echo $xml_output;//Output the XML ?> When I get back my XML into flash the $query looks like this : "INSERT INTO hello_world (message) values('%E9%E0a%E0');" And these values are then displayed into my DB which is annoying. Any help would be appreciated! Cheers. Jk_

    Read the article

  • What should I use to increase performance. View/Query/Temporary Table

    - by Shantanu Gupta
    I want to compare the performance of using views, temp tables, and direct queries in a stored procedure. I have a table that gets created every time a trigger is fired. I know this trigger will be fired very rarely, and only once at the time of setup. Now I have to use that trigger-created table in many places for fetching data, and I can confirm that no one makes any changes to that table, i.e. it is a read-only table. I have to join this table's data with multiple tables and fetch results for further queries, say select * from triggertable. By using a temp table: select ... into #tx from triggertable join t2 join t3 and so on select a,b, c from #tx --do something select d,e,f from #tx ---do something --and so on --around 6-7 queries in a row in a stored procedure. By using a view: create view viewname ( select ... from triggertable join t2 join t3 and so on ) select a,b, c from viewname --do something select d,e,f from viewname ---do something --and so on --around 6-7 queries in a row in a stored procedure. This view can be used in other places as well, so I would create it at the database level rather than inside the stored procedure. By using a direct query: select a,b, c from select ... into #tx from triggertable join t2 join t3 join ... --do something select a,b, c from select ... into #tx from triggertable join t2 join t3 join ... --do something . . --and so on --around 6-7 queries in a row in a stored procedure. Now I can use a view, a temporary table, or a direct query in all the upcoming queries. Which would be the best to use in this case?

    Read the article

  • em.createQuery keeps returning null

    - by Developer106
    I have this application which i use JSF 2.0 and EclipseLink, i have entities created for a database made in MySQL, Created these entities using netbeans 7.1.2, it gets created automaticly. Then i use session beans to work with these entities, the thing is the em.createQuery always returns a null, though I checked NamedQueries in the entities and they perfectly match a sample from the entities named queries:- @NamedQueries({ @NamedQuery(name = "Users.findAll", query = "SELECT u FROM Users u"), @NamedQuery(name = "Users.findByUserId", query = "SELECT u FROM Users u WHERE u.userId = :userId"), @NamedQuery(name = "Users.findByUsername", query = "SELECT u FROM Users u WHERE u.username = :username"), @NamedQuery(name = "Users.findByEmail", query = "SELECT u FROM Users u WHERE u.email = :email"), notice how i use this findByEmail query in the session bean :- public Users findByEmail(String email){ em.getTransaction().begin(); String find = "Users.findByEmail"; Query query = em.createNamedQuery(find); query.setParameter("email", email); Users user = (Users) query.getSingleResult(); but it always returns null from this em.createNamedQuery, i tried using .createQuery first but it also was no good. the stacktrace of the exception Caused by: java.lang.NullPointerException at com.readme.entities.sessionBeans.UsersFacade.findByEmail(UsersFacade.java:48) at com.readme.user.signup.SignupBean.checkAvailability(SignupBean.java:137) at com.readme.user.signup.SignupBean.save(SignupBean.java:146) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:601) What Seems To Be The Problem Here ?

    Read the article

  • lapply slower than for-loop when used for a BiomaRt query. Is that expected?

    - by ptocquin
    I would like to query a database using BiomaRt package. I have loci and want to retrieve some related information, let say description. I first try to use lapply but was surprise by the time needed for the task to be performed. I thus tried a more basic for-loop and get a faster result. Is that expected or is something wrong with my code or with my understanding of apply ? I read other posts dealing with *apply vs for-loop performance (Here, for example) and I was aware that improved performance should not be expected but I don't understand why performance here is actually lower. Here is a reproducible example. 1) Loading the library and selecting the database : library("biomaRt") athaliana <- useMart("plants_mart_14") athaliana <- useDataset("athaliana_eg_gene",mart=athaliana) 2) Querying the database : loci <- c("at1g01300", "at1g01800", "at1g01900", "at1g02335", "at1g02790", "at1g03220", "at1g03230", "at1g04040", "at1g04110", "at1g05240" ) I create a function for the use in lapply : foo <- function(loci) { getBM("description","tair_locus",loci,athaliana) } When I use this function on the first element : > system.time(foo(cwp_loci[1])) utilisateur système écoulé 0.020 0.004 1.599 When I use lapply to retrieve the data for all values : > system.time(lapply(loci, foo)) utilisateur système écoulé 0.220 0.000 16.376 I then created a new function, adding a for-loop : foo2 <- function(loci) { for (i in loci) { getBM("description","tair_locus",loci[i],athaliana) } } Here is the result : > system.time(foo2(loci)) utilisateur système écoulé 0.204 0.004 10.919 Of course, this will be applied to a big list of loci, so the best performing option is needed. I thank you for assistance. EDIT Following recommendation of @MartinMorgan Simply passing the vector loci to getBM greatly improves the query efficiency. Simpler is better. > system.time(lapply(loci, foo)) utilisateur système écoulé 0.236 0.024 110.512 > system.time(foo2(loci)) utilisateur système écoulé 0.208 0.040 116.099 > system.time(foo(loci)) utilisateur système écoulé 0.028 0.000 6.193

    Read the article

  • How to pick a specific object from a Linq Query using Distinct?

    - by Holli
    I have a list with two or more objects of class Agent. Name = "A" Priority = 0 ResultCount = 100 ; Name = "B" Priority = 1 ResultCount = 100 ; Both objects have the same ResultCount. In that case I only need one object and not two or more. I did this with a LINQ query with Distinct and a custom-made Comparer. IEnumerable<Agent> distinctResultsAgents = (from agt in distinctUrlsAgents select agt).Distinct(comparerResultsCount); With this query I get only one object from the list but I never know which one. But I don't want just any object, I want object "B" because the Priority is higher than object "A". How can I do that? My custom Comparer is very simple and has a method like this: public bool Equals(Agent x, Agent y) { if (x == null || y == null) return false; if (x.ResultCount == y.ResultCount) return true; return false; }
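
    A hedged sketch of one way to keep the higher-priority object from each ResultCount group without a custom comparer, by grouping and taking the top of each group:

        var distinctResultsAgents = distinctUrlsAgents
            .GroupBy(agt => agt.ResultCount)
            .Select(g => g.OrderByDescending(agt => agt.Priority).First());

    Distinct() with an IEqualityComparer makes no documented promise about which of the "equal" elements survives, so stating the preference explicitly, as above, is the safer route.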

    Read the article

  • Slow query. Wrong database structure?

    - by Tin
    I have a database with table that contains tasks. Tasks have a lifecycle. The status of the task's lifecycle can change. These state transitions are stored in a separate table tasktransitions. Now I wrote a query to find all open/reopened tasks and recently changed tasks but I already see with a rather small number of tasks (<1000) that execution time has becoming very long (0.5s). Tasks +-------------+---------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +-------------+---------+------+-----+---------+----------------+ | taskid | int(11) | NO | PRI | NULL | auto_increment | | description | text | NO | | NULL | | +-------------+---------+------+-----+---------+----------------+ Tasktransitions +------------------+-----------+------+-----+-------------------+----------------+ | Field | Type | Null | Key | Default | Extra | +------------------+-----------+------+-----+-------------------+----------------+ | tasktransitionid | int(11) | NO | PRI | NULL | auto_increment | | taskid | int(11) | NO | MUL | NULL | | | status | int(11) | NO | MUL | NULL | | | description | text | NO | | NULL | | | userid | int(11) | NO | | NULL | | | transitiondate | timestamp | NO | | CURRENT_TIMESTAMP | | +------------------+-----------+------+-----+-------------------+----------------+ Query SELECT tasks.taskid,tasks.description,tasklaststatus.status FROM tasks LEFT OUTER JOIN ( SELECT tasktransitions.taskid,tasktransitions.transitiondate,tasktransitions.status FROM tasktransitions INNER JOIN ( SELECT taskid,MAX(transitiondate) AS lasttransitiondate FROM tasktransitions GROUP BY taskid ) AS tasklasttransition ON tasklasttransition.lasttransitiondate=tasktransitions.transitiondate AND tasklasttransition.taskid=tasktransitions.taskid ) AS tasklaststatus ON tasklaststatus.taskid=tasks.taskid WHERE tasklaststatus.status IS NULL OR tasklaststatus.status=0 or tasklaststatus.transitiondate>'2013-09-01'; I'm wondering if the database structure is best choice performance wise. Could adding indexes help? I already tried to add some but I don't see great improvements. +-----------------+------------+----------------+--------------+------------------+-----------+-------------+----------+--------+------+------------+---------+---------------+ | Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment | +-----------------+------------+----------------+--------------+------------------+-----------+-------------+----------+--------+------+------------+---------+---------------+ | tasktransitions | 0 | PRIMARY | 1 | tasktransitionid | A | 896 | NULL | NULL | | BTREE | | | | tasktransitions | 1 | taskid_date_ix | 1 | taskid | A | 896 | NULL | NULL | | BTREE | | | | tasktransitions | 1 | taskid_date_ix | 2 | transitiondate | A | 896 | NULL | NULL | | BTREE | | | | tasktransitions | 1 | status_ix | 1 | status | A | 3 | NULL | NULL | | BTREE | | | +-----------------+------------+----------------+--------------+------------------+-----------+-------------+----------+--------+------+------------+---------+---------------+ Any other suggestions?

    Read the article

  • Simple aggregating query very slow in PostgreSql, any way to improve?

    - by Ash
    HI I have a table which holds files and their types such as CREATE TABLE files ( id SERIAL PRIMARY KEY, name VARCHAR(255), filetype VARCHAR(255), ... ); and another table for holding file properties such as CREATE TABLE properties ( id SERIAL PRIMARY KEY, file_id INTEGER CONSTRAINT fk_files REFERENCES files(id), size INTEGER, ... // other property fields ); The file_id field has an index. The file table has around 800k lines, and the properties table around 200k (not all files necessarily have/need a properties). I want to do aggregating queries, for example find the average size and standard deviation for all file types. But it's very slow - around 70 seconds for the latter query. I understand it needs a sequential scan, but still it seems too much. Here's the query SELECT f.filetype, avg(size), stddev(size) FROM files as f, properties as pr WHERE f.id = pr.file_id GROUP BY f.filetype; and the explain HashAggregate (cost=140292.20..140293.94 rows=116 width=13) (actual time=74013.621..74013.954 rows=110 loops=1) -> Hash Join (cost=6780.19..138945.47 rows=179564 width=13) (actual time=1520.104..73156.531 rows=179499 loops=1) Hash Cond: (f.id = pr.file_id) -> Seq Scan on files f (cost=0.00..108365.41 rows=1140941 width=9) (actual time=0.998..62569.628 rows=805270 loops=1) -> Hash (cost=3658.64..3658.64 rows=179564 width=12) (actual time=1131.053..1131.053 rows=179499 loops=1) -> Seq Scan on properties pr (cost=0.00..3658.64 rows=179564 width=12) (actual time=0.753..557.171 rows=179574 loops=1) Total runtime: 74014.520 ms Any ideas why it is so slow/how to make it faster?

    Read the article

  • How to combine the multiple part linq into one query?

    - by user2943399
    The operator should be an ‘AND’ and not an ‘OR’. I am trying to refactor the following code, and I understand that the following way of writing a LINQ query may not be the correct way. Can someone advise me how to combine the following into one query? AllCompany.Where(itm => itm != null).Distinct().ToList(); if (AllCompany.Count > 0) { //COMPANY NAME if (isfldCompanyName) { AllCompany = AllCompany.Where(company => company["Company Name"].StartsWith(fldCompanyName)).ToList(); } //SECTOR if (isfldSector) { AllCompany = AllCompany.Where(company => fldSector.Intersect(company["Sectors"].Split('|')).Any()).ToList(); } //LOCATION if (isfldLocation) { AllCompany = AllCompany.Where(company => fldLocation.Intersect(company["Location"].Split('|')).Any()).ToList(); } //CREATED DATE if (isfldcreatedDate) { AllCompany = AllCompany.Where(company => company.Statistics.Created >= createdDate).ToList(); } //LAST UPDATED DATE if (isfldUpdatedDate) { AllCompany = AllCompany.Where(company => company.Statistics.Updated >= updatedDate).ToList(); } //Allow Placements if (isfldEmployerLevel) { fldEmployerLevel = (fldEmployerLevel == "Yes") ? "1" : ""; AllCompany = AllCompany.Where(company => company["Allow Placements"].ToString() == fldEmployerLevel).ToList(); }
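
    A hedged sketch of one way to compose these filters into a single deferred query (flag and field names are kept from the code above); chained Where calls combine as AND, and nothing runs until the final ToList():

        var query = AllCompany.Where(itm => itm != null).Distinct();

        // Each enabled filter narrows the same deferred query.
        if (isfldCompanyName)
            query = query.Where(company => company["Company Name"].StartsWith(fldCompanyName));
        if (isfldSector)
            query = query.Where(company => fldSector.Intersect(company["Sectors"].Split('|')).Any());
        if (isfldLocation)
            query = query.Where(company => fldLocation.Intersect(company["Location"].Split('|')).Any());
        if (isfldcreatedDate)
            query = query.Where(company => company.Statistics.Created >= createdDate);
        if (isfldUpdatedDate)
            query = query.Where(company => company.Statistics.Updated >= updatedDate);
        if (isfldEmployerLevel)
        {
            var employerLevelValue = (fldEmployerLevel == "Yes") ? "1" : "";
            query = query.Where(company => company["Allow Placements"].ToString() == employerLevelValue);
        }

        // Evaluate once, after every enabled condition has been attached.
        var filteredCompanies = query.ToList();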

    Read the article

  • node.js: looping to email each matching row from a query, but it emails the same user once per match?

    - by udonsoup16
    I have a node.js email server that works fine; however, I noticed a problem. If the query found 4 rows, it would send four emails but only to the first result instead of to each found email address. var querystring1 = 'SELECT `date_created`,`first_name`,`last_name`,`email` FROM `users` WHERE TIMESTAMPDIFF(SECOND,date_created, NOW()) <=298 AND `email`!="" LIMIT 0,5'; connection.query(querystring1, function (err, row) { if (err) throw err; for (var i in row) { // Email content; change text to html to have html emails. row[i].column name will pull relevant database info var sendit = { to: row[i].email, from: '******@******', subject: 'Hi ' + row[i].first_name + ', Thanks for joining ******!', html: {path:__dirname + '/templates/welcome.html'} }; // Send emails transporter.sendMail(sendit,function(error,response){ if (error) { console.log(error); }else{ console.log("Message sent1: " + row[i].first_name);} transporter.close();} ) }}); .... How do I have it loop through each found row and send a custom email using each row's individual data?

    Read the article

  • Disable a form and all contained elements until an ajax query completes (or another solution to prev

    - by Max Williams
    I have a search form with inputs and selects, and when any input/select is changed i run some js and then make an ajax query with jquery. I want to stop the user from making further changes to the form while the request is in progress, as at the moment they can initiate several remote searches at once, effectively causing a race between the different searches. It seems like the best solution to this is to prevent the user from interacting with the form while waiting for the request to come back. At the moment i'm doing this in the dumbest way possible by hiding the form before making the ajax query and then showing it again on success/error. This solves the problem but looks horrible and isn't really acceptable. Is there another, better way to prevent interaction with the form? To make things more complicated, to allow nice-looking selects, the user actually interacts with spans which have js hooked up to them to tie them to the actual, hidden, selects. So, even though the spans aren't inputs, they are contained in the form and represent the actual interactive elements of the form. Grateful for any advice - max. Here's what i'm doing now: function submitQuestionSearchForm(){ //bunch of irrelevant stuff var questionSearchForm = jQuery("#searchForm"); questionSearchForm.addClass("searching"); jQuery.ajax({ async: true, data: jQuery.param(questionSearchForm.serializeArray()), dataType: 'script', type: 'get', url: "/questions", success: function(msg){ //more irrelevant stuff questionSearchForm.removeClass("searching"); }, error: function(msg){ questionSearchForm.removeClass("searching"); } }); return true; }

    Read the article
