Search Results

Search found 17627 results on 706 pages for 'hierarchical query'.


  • NoSQL Java API for MySQL Cluster: Questions & Answers

    - by Mat Keep
    The MySQL Cluster engineering team recently ran a live webinar, available now on-demand, demonstrating the ClusterJ and ClusterJPA NoSQL APIs for MySQL Cluster, and how these can be used in building real-time, high-scale Java-based services that require continuous availability. Attendees asked a number of great questions during the webinar, and I thought it would be useful to share those here, so others are also able to learn more about the Java NoSQL APIs. First, a little bit about why we developed these APIs and why they are interesting to Java developers.

    ClusterJ and ClusterJPA

    ClusterJ is a Java interface to MySQL Cluster that provides either a static or dynamic domain object model, similar to the data model used by JDO, JPA, and Hibernate. A simple API gives users extremely high performance for common operations: insert, delete, update, and query. ClusterJPA works with ClusterJ to extend functionality, including:

    - Persistent classes
    - Relationships
    - Joins in queries
    - Lazy loading
    - Table and index creation from the object model

    By eliminating data transformations via SQL, users get lower data access latency and higher throughput. In addition, Java developers have a more natural programming method to directly manage their data, with a complete, feature-rich solution for Object/Relational Mapping. As a result, the development of Java applications is simplified, with faster development cycles resulting in accelerated time to market for new services.

    MySQL Cluster offers multiple NoSQL APIs alongside Java:

    - Memcached for a persistent, high-performance, write-scalable Key/Value store
    - HTTP/REST via an Apache module
    - C++ via the NDB API for the lowest absolute latency

    Developers can use SQL as well as NoSQL APIs for access to the same data set via multiple query patterns – from simple Primary Key lookups or inserts to complex cross-shard JOINs using Adaptive Query Localization. Marrying NoSQL and SQL access to an ACID-compliant database offers developers a number of benefits. MySQL Cluster's distributed, shared-nothing architecture with auto-sharding and real-time performance makes it a great fit for workloads requiring high-volume OLTP. Users also get the added flexibility of being able to run real-time analytics across the same OLTP data set for real-time business insight.

    OK – hopefully you now have a better idea of why ClusterJ and JPA are available. Now, for the Q&A.

    Q & A

    Q. Why would I use Connector/J vs. ClusterJ?
    A. Partly it's a question of whether you prefer to work with SQL (Connector/J) or objects (ClusterJ). Performance of ClusterJ will be better as there is no need to pass through the MySQL Server. A ClusterJ operation can only act on a single table (e.g. no joins); ClusterJPA extends that capability.

    Q. Can I mix different APIs (i.e. ClusterJ, Connector/J) in our application for different query types?
    A. Yes. You can mix and match all of the API types: SQL, JDBC, ODBC, ClusterJ, Memcached, REST, C++. They all access the exact same data in the data nodes. Update through one API and new data is instantly visible to all of the others.

    Q. How many TCP connections would a SessionFactory instance create for a cluster of 8 data nodes?
    A. SessionFactory has a connection to the mgmd (management node) but otherwise is just a vehicle to create Sessions. Without using connection pooling, a SessionFactory will have one connection open with each data node. Using optional connection pooling allows multiple connections from the SessionFactory to increase throughput.

    Q. Can you give details of how ClusterJ optimizes sharding to enhance performance of distributed query processing?
    A. Each data node in a cluster runs a Transaction Coordinator (TC), which begins and ends the transaction, but also serves as a resource to operate on the result rows. While an API node (such as a ClusterJ process) can send queries to any TC/data node, there are performance gains if the TC is where most of the result data is stored. ClusterJ computes the shard (partition) key to choose the data node where the row resides as the TC.

    Q. What happens if we perform two primary key lookups within the same transaction? Are they sent to the data node in one transaction?
    A. ClusterJ will send identical PK lookups to the same data node.

    Q. How is distributed query processing handled by MySQL Cluster?
    A. If the data is split between data nodes then all of the information will be transparently combined and passed back to the application. The session will connect to a data node - typically by hashing the primary key - which then interacts with its neighboring nodes to collect the data needed to fulfil the query.

    Q. Can I use Foreign Keys with MySQL Cluster?
    A. Support for Foreign Keys is included in the MySQL Cluster 7.3 Early Access release.

    Summary

    The NoSQL Java APIs are packaged with MySQL Cluster, available for download here, so feel free to take them for a spin today!

    Key Resources

    - MySQL Cluster on-line demo
    - MySQL ClusterJ and JPA on-demand webinar
    - MySQL ClusterJ and JPA documentation
    - MySQL ClusterJ and JPA whitepaper and tutorial
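    For readers who have not used ClusterJ before, a minimal sketch of the annotated-interface domain model and Session API described above might look like the following. The customer table, its columns, and the connect string are made-up placeholders rather than anything from the webinar; the linked documentation is the authoritative reference.

```java
import java.util.Properties;

import com.mysql.clusterj.ClusterJHelper;
import com.mysql.clusterj.Session;
import com.mysql.clusterj.SessionFactory;
import com.mysql.clusterj.annotation.Column;
import com.mysql.clusterj.annotation.PersistenceCapable;
import com.mysql.clusterj.annotation.PrimaryKey;

public class ClusterJSketch {

    // Domain object: an annotated interface mapped to a hypothetical "customer" table.
    @PersistenceCapable(table = "customer")
    public interface Customer {
        @PrimaryKey
        int getId();
        void setId(int id);

        @Column(name = "name")
        String getName();
        void setName(String name);
    }

    public static void main(String[] args) {
        // The connect string points at the cluster's management node (placeholder values).
        Properties props = new Properties();
        props.put("com.mysql.clusterj.connectstring", "localhost:1186");
        props.put("com.mysql.clusterj.database", "test");

        SessionFactory factory = ClusterJHelper.getSessionFactory(props);
        Session session = factory.getSession();

        // Insert: no SQL is generated; the row is written directly to the data nodes.
        Customer c = session.newInstance(Customer.class);
        c.setId(100);
        c.setName("Alice");
        session.persist(c);

        // Primary key read, routed to the data node holding the row.
        Customer found = session.find(Customer.class, 100);
        System.out.println(found.getName());

        session.close();
        factory.close();
    }
}
```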

    Read the article

  • nHibernate Criteria API Projections

    - by Craig
    I have an entity like this: public class Customer { public Customer() { Addresses = new List<Address>(); } public int CustomerId { get; set; } public string Name { get; set; } public IList<Address> Addresses { get; set; } } And I am trying to query it using the Criteria API like this: ICriteria query = m_CustomerRepository.Query() .CreateAlias("Address", "a", NHibernate.SqlCommand.JoinType.LeftOuterJoin); var result = query .SetProjection(Projections.Distinct( Projections.ProjectionList() .Add(Projections.Alias(Projections.Property("CustomerId"), "CustomerId")) .Add(Projections.Alias(Projections.Property("Name"), "Name")) .Add(Projections.Alias(Projections.Property("Addresses"), "Addresses")) )) .SetResultTransformer(new AliasToBeanResultTransformer(typeof(Customer))) .List<Customer>() as List<Customer>; When I run this query, the Addresses property of the Customer object is null. Is there any way to add a projection for this list property?

    Read the article

  • How to send a list of objects from my MainPage.xaml to another page

    - by LivingThing
    When navigating to another page, how can I make my list of objects available to that page? For example, in my MainPage.xaml: var data2 = from query in document.Descendants("weather") select new Forecast { date = (string)query.Element("date"), tempMaxC = (string)query.Element("tempMaxC"), tempMinC = (string)query.Element("tempMinC"), weatherIconUrl = (string)query.Element("weatherIconUrl"), }; forecasts = data2.ToList<Forecast>(); .... NavigationService.Navigate(new Uri("/WeatherInfoPage.xaml", UriKind.Relative)); and then in my other class, I want to make it available so that I can use it like this: private void AddPageItem(List<Forecast> forecasts) { .. }

    Read the article

  • Beginner SQL question: arithmetic with multiple COUNT(*) results

    - by polygenelubricants
    Continuing with the spirit of using the Stack Exchange Data Explorer to learn SQL, (see: Can we become our own “Northwind” for teaching SQL / databases?), I've decided to try to write a query to answer a simple question (on meta): What % of stackoverflow users have over 10,000 rep?. Here's what I've done: Query#1 SELECT COUNT(*) FROM Users WHERE Users.Reputation >= 10000 Result: 556 Query#2 SELECT COUNT(*) FROM USERS Result: 227691 Now, how do I put them together into one query? What is this query idiom called? What do I need to write so I can get, say, a one-row three-column result like this: 556 227691 0,00244190592
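    However the two counts end up being combined into a single statement, the third column being asked for is just the ratio of the two results, worked out here from the numbers quoted above:

    \[
    \frac{556}{227691} \approx 0.00244 \;\approx\; 0.244\%
    \]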

    Read the article

  • iPhone Google Maps KML Search

    - by satyam
    I'm using Google Maps with a KML query, but my query string is a Japanese string ("??????") and I'm using http://maps.google.co.jp. When requesting data, I'm getting "0" bytes, but when I put the same query in a browser, it downloads a KML file. My code is as follows: query = [NSString stringWithFormat:@"http://maps.google.co.jp/maps?&near=%f,%f&q=??????&output=kml&num=%d", lat,lon, num]; NSURL* url = [NSURL URLWithString:query]; NSURLRequest* request = [NSURLRequest requestWithURL:url]; NSLog(@"Quering URL = %@", url); NSHTTPURLResponse *response = [[NSHTTPURLResponse alloc] autorelease]; NSData *myData = [NSURLConnection sendSynchronousRequest:request returningResponse: &response error: nil ]; NSInteger errorcode = [response statusCode]; I'm receiving "myData" with 0 bytes. Why?

    Read the article

  • Linq SqlMethods.Like fails

    - by Russell Steen
    I'm following the tips here, trying to leverage the statement that the sql doesn't get created until the enumerator is tripped. However I get the following error on the code below. I'm using Linq2Entities, not linq2sql. Is there a way to do this in Linq2entities? Method 'Boolean Like(System.String, System.String)' cannot be used on the client; it is only for translation to SQL. query = db.MyTables.Where(x => astringvar.Contains(x.Field1)); if (!String.IsNullOrEmpty(typeFilter)) { if (typeFilter.Contains('*')) { typeFilter = typeFilter.Replace('*', '%'); query = query.Where(x=> SqlMethods.Like(x.Type, typeFilter)); } else { query = query.Where(x => x.Type == typeFilter); } }

    Read the article

  • TooManyRowsAffectedException with encrypted triggers

    - by Jon Masters
    I'm using nHibernate to update 2 columns in a table that has 3 encrypted triggers on it. The triggers are not owned by me and I cannot make changes to them, so unfortunately I can't SET NOCOUNT ON inside of them. Is there another way to get around the TooManyRowsAffectedException that is thrown on commit? Update 1: So far the only way I've gotten around the issue is to step around the .Save routine with var query = session.CreateSQLQuery("update Orders set Notes = :Notes, Status = :Status where OrderId = :Order"); query.SetString("Notes", orderHeader.Notes); query.SetString("Status", orderHeader.OrderStatus); query.SetInt32("Order", orderHeader.OrderHeaderId); query.ExecuteUpdate(); It feels dirty and is not easy to extend, but it doesn't crater.

    Read the article

  • First Foray–About timeout

    - by SQLMonger
    It has been quite a while since I signed up for this blog site and high time that something was posted. I have a list of topics that I will be working through and posting. Some, I am sure, will have been posted by others, but I will be sticking to the technical problems and challenges that I've recently faced, and the solutions that worked for me. My motto when learning something new has always been "My kingdom for an example!", and I plan on delivering useful examples here so others can learn from my efforts, failures and successes.

    A bit of background about me… My name is Clayton Groom. I am a founding partner of a consulting firm in St. Louis, Missouri, Covenant Technology Partners, LLC, and focus on SQL Server Data Warehouse design, Analysis Services and Enterprise Reporting solutions. I have been working with SQL Server since the early nineties, when it still only ran on OS/2. I love solving puzzles and technical challenges.

    Enough about me… On to a real problem: SSIS connection timeouts versus command timeouts.

    Last week, I was working on automating the processing for a large Analysis Services cube. I had reworked an SSIS package and script task originally posted by Vidas Matelis that automates the process of adding new and dropping old partitions to/from an Analysis Services cube. I had the package working great, tested, and ready for deployment. It basically performs a query against the source system to determine if there is new data in the warehouse that will require a new partition to be added to the cube, and it checks the cube to see if there are any partitions present that are no longer needed in a rolling 60 month window. My client uses Tivoli for running all their production jobs, and not SQL Agent, so I had to build a command line file for Tivoli to use to run the package.

    Everything was going great. I had tested the command file from my development workstation, using an XML configuration file to pass server-specific parameters into the package when executed using the DTExec utility. With all the pieces ready, I updated the dtsconfig file to point to the UAT environment and started working with the Tivoli developer to test the job. On the first run, the job failed, and from what I could see in the SSIS log, it had failed because of a timeout. Other errors in the log made me think that perhaps the connection string had not been passed into the package correctly. We bumped the Connection Manager timeout values from 20 seconds to 120 seconds and tried again. The job still failed. After changing the command line to use the /SET option instead of the /CONFIGFILE option, we tested again, and again failure.

    After a number more failed attempts, and getting the Teradata DBA involved to monitor and see if we were connecting and failing or just failing to connect, we determined that the job was indeed connecting to the server and then disconnecting itself after 30 seconds. This seemed odd, as we had the timeout values for the connection manager set to 180 seconds by then. At this point one of the DBAs found a post on the Teradata forum that had the clues to the puzzle: there is a separate "CommandTimeout" custom property on the data source object that may need to be adjusted for longer-running queries. I opened up the SSIS package, opened the data flow task that generated the partition list table, and right-clicked on the data source. From the context menu, I selected "Show Advanced Editor" and found the property. Sure enough, it was set to 30 seconds. The CommandTimeout property can also be edited in the SSIS Properties sheet.

    In order to determine how long the timeout needed to be, I ran the query from the task in the development environment and received a response in a matter of seconds. I then tried the same query against the production database and waited several minutes for a response. This did not seem to be a reasonable response time for the query involved, and indeed it wasn't. The Teradata DBAs adjusted the query governor settings for the service account I was testing with, and we were able to get the response back down under a minute. Still, I set the CommandTimeout property to a much higher value in case the job was ever started during a time of high demand on the production server. With this change in place, the job finally completed successfully.

    The lesson learned for me was two-fold:

    - Always compare query execution times between development and production environments, and don't assume that production will always be faster. With higher user demands, query governors, and a whole lot more data, the execution time of even what might seem to be simple queries can vary greatly.
    - SSIS connection timeout settings do not affect command timeouts. Connection timeouts control how long the package will wait for a response from the server before assuming the server is not available or is not responding. Command timeouts control how long a task will wait for results to start being returned before deciding that the server is not responding.

    Both lessons seem pretty straightforward, and I felt pretty sheepish once I finally figured out what the issue was. To be fair, though, in the 5+ years that I have been working with SSIS, I could only recall one other time where I had to set the CommandTimeout property, and that memory only resurfaced while I was penning this post.

    Read the article

  • How do these user/userParam references relate to the Customer and Account lookups?

    - by plath
    In the following code example how do the user/userParam references relate to the Customer and Account lookups and what is the relationship between Customer and Account? // PersistenceManager pm = ...; Transaction tx = pm.currentTransaction(); User user = userService.currentUser(); List<Account> accounts = new ArrayList<Account>(); try { tx.begin(); Query query = pm.newQuery("select from Customer " + "where user == userParam " + "parameters User userParam"); List<Customer> customers = (List<Customer>) query.execute(user); query = pm.newQuery("select from Account " + "where parent-pk == keyParam " + "parameters Key keyParam"); for (Customer customer : customers) { accounts.addAll((List<Account>) query.execute(customer.key)); } } finally { if (tx.isActive()) { tx.rollback(); } }
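    As a point of reference, the single-string form above folds the parameter declaration into the query text itself. An equivalent sketch using the explicit JDO Query API (Customer, User, pm and user are the types and variables from the snippet; everything else is illustrative) makes the role of userParam a little more visible: it is only a named placeholder bound to the value passed to execute(). Depending on the JDO implementation, the User type may need to be fully qualified or declared via declareImports.

```java
import java.util.List;

import javax.jdo.PersistenceManager;
import javax.jdo.Query;

public class CustomerLookup {

    // "pm" and "user" correspond to the PersistenceManager and current User
    // in the snippet above; Customer is the entity class being queried.
    @SuppressWarnings("unchecked")
    static List<Customer> customersFor(PersistenceManager pm, User user) {
        Query query = pm.newQuery(Customer.class);
        query.setFilter("user == userParam");
        query.declareParameters("User userParam");

        // userParam is bound to the concrete User value supplied here.
        return (List<Customer>) query.execute(user);
    }
}
```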

    Read the article

  • How do these user/userParam references relate to the Customer and Account lookups?

    - by marmalade
    In the following code example how do the user/userParam references relate to the Customer and Account lookups and what is the relationship between Customer and Account? // PersistenceManager pm = ...; Transaction tx = pm.currentTransaction(); User user = userService.currentUser(); List<Account> accounts = new ArrayList<Account>(); try { tx.begin(); Query query = pm.newQuery("select from Customer " + "where user == userParam " + "parameters User userParam"); List<Customer> customers = (List<Customer>) query.execute(user); query = pm.newQuery("select from Account " + "where parent-pk == keyParam " + "parameters Key keyParam"); for (Customer customer : customers) { accounts.addAll((List<Account>) query.execute(customer.key)); } } finally { if (tx.isActive()) { tx.rollback(); } }

    Read the article

  • JPA getSingleResult() or null

    - by Eugene Ramirez
    Hi. I have an insertOrUpdate method which inserts an Entity when it doesn't exist or updates it if it does. To enable this, I have to findByIdAndForeignKey: if it returns null, insert; if not, update. The problem is how do I check if it exists? So I tried getSingleResult, but it throws an exception when there is no result: public Profile findByUserNameAndPropertyName(String userName, String propertyName) { String namedQuery = Profile.class.getSimpleName() + ".findByUserNameAndPropertyName"; Query query = entityManager.createNamedQuery(namedQuery); query.setParameter("name", userName); query.setParameter("propName", propertyName); Object result = query.getSingleResult(); if(result==null)return null; return (Profile)result; } Thanks
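    One common workaround (not the only one) is to call getResultList() and treat an empty list as "not found", instead of letting getSingleResult() throw NoResultException. A rough sketch of the method above rewritten that way, reusing the same entityManager field, entity class and named query from the question:

```java
public Profile findByUserNameAndPropertyName(String userName, String propertyName) {
    String namedQuery = Profile.class.getSimpleName() + ".findByUserNameAndPropertyName";

    // Typed overload of createNamedQuery (JPA 2.0+); with JPA 1.0 use the
    // untyped form and cast the list elements instead.
    List<Profile> results = entityManager
            .createNamedQuery(namedQuery, Profile.class)
            .setParameter("name", userName)
            .setParameter("propName", propertyName)
            .getResultList();

    // An empty list simply means "no such row"; return null rather than
    // letting getSingleResult() throw NoResultException.
    return results.isEmpty() ? null : results.get(0);
}
```

    Catching NoResultException around getSingleResult() works as well; the list-based form just avoids using an exception for normal control flow.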

    Read the article

  • Spring Data MongoDB Date between two Dates

    - by sics
    I'm using Spring Data for MongoDB and have the following classes: class A { List<B> b; } class B { Date startDate; Date endDate; } When I save an object of A it gets persisted like { "_id" : "DQDVDE000VFP8E39", "b" : [ { "startDate" : ISODate("2009-10-05T22:00:00Z"), "endDate" : ISODate("2009-10-29T23:00:00Z") }, { "startDate" : ISODate("2009-11-01T23:00:00Z"), "endDate" : ISODate("2009-12-30T23:00:00Z") } ] } Now I want to query the DB for documents matching entries in b where a given date is between startDate and endDate. Query query = new Query(Criteria.where("a").elemMatch( Criteria.where("startDate").gte(date) .and("endDate").lte(date) ); Which results in the following Mongo query: { "margins": { "$elemMatch": { "startDate" : { "$gte" : { "$date" : "2009-11-03T23:00:00.000Z"}}, "endDate" : { "$lte" : { "$date" : "2009-11-03T23:00:00.000Z"}} } } } but returns no resulting documents. Does anybody know what I'm doing wrong? I don't get it... Thank you very much in advance!!
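    For comparison, here is a sketch of the criteria with the element-match path pointing at the persisted field name (b in the saved document shown above, or whatever name it actually maps to, e.g. the margins seen in the generated query) and with the comparisons bracketing the date, i.e. startDate on or before it and endDate on or after it. The wrapper class and MongoTemplate wiring are assumed for illustration:

```java
import java.util.Date;
import java.util.List;

import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.query.Criteria;
import org.springframework.data.mongodb.core.query.Query;

public class DateRangeLookup {

    // Sketch only: assumes the embedded list is persisted under the name "b"
    // (adjust to the actual field name, e.g. "margins") and that "between"
    // means startDate <= date <= endDate.
    static List<A> findCovering(MongoTemplate mongoTemplate, Date date) {
        Query query = new Query(Criteria.where("b").elemMatch(
                Criteria.where("startDate").lte(date)
                        .and("endDate").gte(date)));
        return mongoTemplate.find(query, A.class);
    }
}
```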

    Read the article

  • MS Access 2003 - Formatting results in a list box problem.

    - by Justin
    So I have a list box that displays averages in a table-like format from a crosstab query. It's just what I need and the query is right; there is just one thing. I had to set the field properties in the query as format: Standard, decimal: 2, which is exactly what I needed. However, the list box will not pick up on this. First I typed the crosstab SQL into the list box's properties....and then I ran into this problem. So then I actually just created the query object, saved it, and set that as the rowsource for the list box. Still won't work....when I open the query it is in the correct format. So is there a way to further format a text box? Is there a way to tell it to limit decimal places to one or two on returned values? Thanks!

    Read the article

  • Python and Plone help

    - by Grenko
    I'm using the Plone CMS and am having trouble with a Python script. I get a NameError: "the global name 'open' is not defined". When I put the code in a separate Python script it works fine, and the information is being passed to the script because I can print the query. Code is below: #Import a standard function, and get the HTML request and response objects. from Products.PythonScripts.standard import html_quote request = container.REQUEST RESPONSE = request.RESPONSE # Insert data that was passed from the form query=request.query #print query f = open("blast_query.txt","w") for i in query: f.write(i) return printed I also have a second question: can I tell Python to open a file in a certain directory? For example, if the script is in a certain location, i.e. the home folder, but I want the script to open a file at home/some_directory/some_directory, can it be done?

    Read the article

  • JQuery and WCF - GET Method passes null

    - by user70192
    Hello, I have a WCF service that accepts requests from JQuery. Currently, I can access this service. However, the parameter value is always null. Here is my WCF Service definition: [OperationContract] [WebInvoke(Method = "GET", BodyStyle = WebMessageBodyStyle.WrappedRequest, RequestFormat = WebMessageFormat.Json, ResponseFormat = WebMessageFormat.Json)] public string ExecuteQuery(string query) { // NOTE: I get here, but the query parameter is always null string results = Engine.ExecuteQuery(query); return results; } Here is my JQuery call: var searchUrl = "/services/myService.svc/ExecuteQuery"; var json = { "query": eval("\"test query\"") }; alert(json2string(json)); // Everything is correct here if (json != null) { $.ajax({ type: "GET", url: searchUrl, contentType: "application/json; charset=utf-8", data: json2string(json), dataType: "json" }); } What am I doing wrong? It seems odd that I can call the service but the parameter is always null. Thank you

    Read the article

  • problems with unpickling an 80 megabyte file in python

    - by tipu
    I am using the pickle module to read and write large amounts of data to a file. After writing an 80 megabyte pickled file, I load it in a SocketServer using class MyTCPHandler(SocketServer.BaseRequestHandler): def handle(self): print("in handle") words_file_handler = open('/home/tipu/Dropbox/dev/workspace/search/words.db', 'rb') words = pickle.load(words_file_handler) tweets = shelve.open('/home/tipu/Dropbox/dev/workspace/search/tweets.db', 'r'); results_per_page = 25 query_details = self.request.recv(1024).strip() query_details = eval(query_details) query = query_details["query"] page = int(query_details["page"]) - 1 return_ = [] booleanquery = BooleanQuery(MyTCPHandler.words) if query.find("(") > -1: result = booleanquery.processAdvancedQuery(query) else: result = booleanquery.processQuery(query) result = list(result) i = 0 for tweet_id in result and i < 25: #return_.append(MyTCPHandler.tweets[str(tweet_id)]) return_.append(tweet_id) i += 1 self.request.send(str(return_)) However, the file never seems to load after the pickle.load line, and it eventually halts the connection attempt. Is there anything I can do to speed this up?

    Read the article

  • What does this do and why does it require a transaction?

    - by S. Palin
    What does the following code example do and why does it require a transaction? // PersistenceManager pm = ...; Transaction tx = pm.currentTransaction(); User user = userService.currentUser(); List<Account> accounts = new ArrayList<Account>(); try { tx.begin(); Query query = pm.newQuery("select from Customer " + "where user == userParam " + "parameters User userParam"); List<Customer> customers = (List<Customer>) query.execute(user); query = pm.newQuery("select from Account " + "where parent-pk == keyParam " + "parameters Key keyParam"); for (Customer customer : customers) { accounts.addAll((List<Account>) query.execute(customer.key)); } } finally { if (tx.isActive()) { tx.rollback(); } }
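    For context, the snippet follows the usual JDO transaction idiom: begin() before the datastore work, commit() on success, and a rollback() in finally if the transaction is still active, which is all the finally block above does. A stripped-down sketch of that idiom around a write, where the demarcation actually matters; the Customer entity, its setName accessor, and the key value are assumed for illustration:

```java
import javax.jdo.PersistenceManager;
import javax.jdo.Transaction;

public class TransactionSketch {

    // Customer, its setName() accessor, and the key value are illustrative;
    // only the begin/commit/rollback pattern is the point here.
    static void renameCustomer(PersistenceManager pm, Object key, String newName) {
        Transaction tx = pm.currentTransaction();
        try {
            tx.begin();
            Customer customer = pm.getObjectById(Customer.class, key);
            customer.setName(newName);   // change tracked by the PersistenceManager
            tx.commit();                 // flushed atomically here
        } finally {
            if (tx.isActive()) {
                tx.rollback();           // undo if commit was never reached
            }
        }
    }
}
```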

    Read the article

  • Querying a Single Column with LINQ

    - by Hossein Margani
    Hi everyone! I want to fetch an array of values from a single column in a table. For example, I have a table named Customer(ID, Name), and want to fetch the IDs of all customers. My query in LINQ is: var ids = db.Customers.Select(c=>c.ID).ToList(); The result of this query is correct, but I ran SQL Server Profiler and saw that the query looked like this: SELECT [t0].[ID], [t0].[Name] FROM [dbo].[Customer] AS [t0] I understood that LINQ selects all columns and then creates the integer array from the ID fields. How can I write a LINQ query which generates this query in SQL Server: SELECT [t0].[ID] FROM [dbo].[Customer] AS [t0] Thank you.

    Read the article

  • Filter zipcodes by proximity in Django with the Spherical Law of Cosines

    - by spiffytech
    I'm trying to handle proximity search for a basic store locator in Django. Rather than haul PostGIS around with my app just so I can use GeoDjango's distance filter, I'd like to use the Spherical Law of Cosines distance formula in a model query. I'd like all of the calculations to be done in the database in one query, for efficiency. An example MySQL query from the Internet implementing the Spherical Law of Cosines looks like this: SELECT id, ( 3959 * acos( cos( radians(37) ) * cos( radians( lat ) ) * cos( radians( lng ) - radians(-122) ) + sin( radians(37) ) * sin( radians( lat ) ) ) ) AS distance FROM stores HAVING distance < 25 ORDER BY distance LIMIT 0 , 20; The query needs to reference the Zipcode ForeignKey for each store's lat/lng values. How can I make all of this work in a Django model query?
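    For reference, the expression inside that query is the spherical law of cosines for great-circle distance, where 3959 is the Earth's radius in miles and (37, -122) is the fixed reference point:

    \[
    d = R \cdot \arccos\bigl(\sin\varphi_1 \sin\varphi_2 + \cos\varphi_1 \cos\varphi_2 \cos(\lambda_2 - \lambda_1)\bigr),
    \qquad R \approx 3959 \text{ miles}
    \]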

    Read the article

  • Linq to NHibernate using Queryable.Where predicate

    - by Groo
    I am querying an SQLite database using LINQ to NHibernate. Person is an entity containing an Id and a Name: public class Person { public Guid Id { get; private set; } public string Name { get; private set; } } Let's say my db table contains a single person whose name is "John". This test works as expected: var query = from item in session.Linq<Person>() where (item.Name == "Mike") select item; // no such entity should exist Assert.IsFalse(query.Any()); but this one fails: var query = from item in session.Linq<Person>() select item; query.Where(item => item.Name == "Mike"); // following line actually returns the // "John" entry Assert.IsFalse(query.Any()); What am I missing?

    Read the article

  • how to have defined connection within function for pdo communication with DB

    - by Scarface
    hey guys I just started trying to convert my query structure to PDO and I have come across a weird problem. When I call a pdo query connection within a function and the connection is included outside the function, the connection becomes undefined. Anyone know what I am doing wrong here? I was just playing with it, my example is below. include("includes/connection.php"); function query(){ $user='user'; $id='100'; $sql = 'SELECT * FROM users'; $stmt = $conn->prepare($sql); $result=$stmt->execute(array($user, $id)); // now iterate over the result as if we obtained // the $stmt in a call to PDO::query() while($r = $stmt->fetch(PDO::FETCH_ASSOC)) { echo "$r[username] $r[id] \n"; } } query();

    Read the article

  • GAE more than 3 attributes to filter?

    - by Vik
    Hi, I am using GAE JDOQL and wrote a query like: Query query = pm.newQuery(BloodDonor.class); query.setFilter(" state == :stateName && district == :distName &&" + " city == :cityName && bloodGroup == :blood"); @SuppressWarnings("unchecked") List<BloodDonor> donors = (List<BloodDonor>) query.execute(state.toLowerCase(), district.toLowerCase(), city.toLowerCase(), bloodGroup.toLowerCase()); This doesn't work, as the execute method does not support more than 3 parameters. So how do I pass more than 3?
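    One option JDO itself provides is to pass the parameter values as a map keyed by parameter name (or as an Object array via executeWithArray) instead of positionally, which sidesteps the three-argument limit of execute(). Below is a sketch reusing the BloodDonor filter from the question; the method wrapper and variable names are only for illustration:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import javax.jdo.PersistenceManager;
import javax.jdo.Query;

public class DonorLookup {

    @SuppressWarnings("unchecked")
    static List<BloodDonor> findDonors(PersistenceManager pm, String state,
            String district, String city, String bloodGroup) {
        Query query = pm.newQuery(BloodDonor.class);
        query.setFilter("state == :stateName && district == :distName && "
                + "city == :cityName && bloodGroup == :blood");

        // Values keyed by parameter name (without the leading colon), so the
        // three-argument limit of execute() no longer applies.
        Map<String, Object> params = new HashMap<String, Object>();
        params.put("stateName", state.toLowerCase());
        params.put("distName", district.toLowerCase());
        params.put("cityName", city.toLowerCase());
        params.put("blood", bloodGroup.toLowerCase());

        return (List<BloodDonor>) query.executeWithMap(params);
    }
}
```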

    Read the article

  • Help me understand Rails eager loading

    - by aaronrussell
    I'm a little confused as to the mechanics of eager loading in active record. Let's say a Book model has many Pages and I fetch a book using this query: @book = Book.find book_id, :include => :pages Now this is where I'm confused. My understanding is that @book.pages is already loaded and won't execute another query. But suppose I want to find a specific page, what would I do? @book.pages.find page_id # OR... @book.pages.to_ary.find{|p| p.id == page_id} Am I right in thinking that the first example will execute another query, thereby making the eager loading pointless, or is active record clever enough to know that it doesn't need to do another query? Also, my second question: is there an argument that in some cases eager loading is more intensive on the database, and that sometimes multiple small queries will be more efficient than a single large query? Thanks for your thoughts.

    Read the article

  • how have defined connection within function for pdo communication with DB

    - by Scarface
    hey guys I just started trying to convert my query structure to PDO and I have come across a weird problem. When I call a pdo query connection within a function and the connection is included outside the function, the connection becomes undefined. Anyone know what I am doing wrong here? I was just playing with it, my example is below. include("includes/connection.php"); function query(){ $user='user'; $id='100'; $sql = 'SELECT * FROM users'; $stmt = $conn->prepare($sql); $result=$stmt->execute(array($user, $id)); echo $count=$stmt->rowCount(); if (!$result || $stmt->rowCount()>=1){ echo 'balls'; } // now iterate over the result as if we obtained // the $stmt in a call to PDO::query() while($r = $stmt->fetch(PDO::FETCH_ASSOC)) { echo "$r[username] $r[id] \n"; } } query();

    Read the article

  • Forked Function not assigning pointer

    - by Luke Mcneice
    In the code below I have a function int GetTempString(char Query[]); calling it in main works fine. However, when calling the function from a fork, the fork hangs (stops running, no errors, no output) before this line: pch = strtok (Query," ,"); The printf shows that the pointer pch is null. Again, this only happens when the fork is executing it. What am I doing wrong? int main() { if((Timer =fork())==-1) printf("Timer Fork Failed"); else if(Timer==0) { while(1) { sleep(2); GetTempString("ch 1,2,3,4"); } } else { //CODE GetTempString("ch 1,2,3,4"); } } int GetTempString(char Query[]) { char * pch; printf("DEBUG: '%s'-'%d'\n",Query,pch); pch = strtok (Query," ,");//* PROBLEM HERE* //while loop for strtok... return 1; }

    Read the article
