Search Results

Search found 743 results on 30 pages for 'fetching'.

Page 24 of 30

  • Query to MySQL from C# returns System.Byte[]

    - by karthik
    I am using the stored procedure below to return the text of a generated INSERT statement, and it works fine when executed in Query Browser. When I try to get the value from C#, it gives me "System.Byte[]" as the return value. When I run it from MySQL Query Browser, the return value is:

        insert into admindb.accounts values("54321","2","karthik2","karthik2","1");

    I guess the problem is with the single quotes of the returned value. Is it so?

        DELIMITER $$

        DROP PROCEDURE IF EXISTS `admindb`.`InsGen` $$
        CREATE DEFINER=`root`@`localhost` PROCEDURE `InsGen`(
            in_db varchar(20),
            in_table varchar(20),
            in_ColumnName varchar(20),
            in_ColumnValue varchar(20)
        )
        BEGIN
            declare Whrs varchar(500);
            declare Sels varchar(500);
            declare Inserts varchar(2000);
            declare tablename varchar(20);
            declare ColName varchar(20);
            set tablename = in_table;

            # Comma-separated column names - used for Select
            select group_concat(concat('concat(\'"\',','ifnull(',column_name,','''')',',\'"\')'))
            INTO @Sels
            from information_schema.columns
            where table_schema = in_db and table_name = tablename;

            # Comma-separated column names - used for Group By
            select group_concat('`',column_name,'`')
            INTO @Whrs
            from information_schema.columns
            where table_schema = in_db and table_name = tablename;

            # Main Select statement for fetching comma-separated table values
            set @Inserts = concat(
                "select concat('insert into ", in_db, ".", tablename,
                " values(',concat_ws(',',", @Sels, "),');') as MyColumn from ",
                in_db, ".", tablename,
                " where ", in_ColumnName, " = ", in_ColumnValue,
                " group by ", @Whrs, ";");

            PREPARE Inserts FROM @Inserts;
            EXECUTE Inserts;
        END $$

        DELIMITER ;
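
    The quotes are probably not the culprit: concat()/group_concat() results produced through dynamic SQL are typically flagged as BLOB, and Connector/NET hands BLOB columns to C# as byte[], whose ToString() prints "System.Byte[]". One way out is to decode the bytes on the C# side - a minimal sketch (the connection string and call arguments are assumptions, not taken from the post):

        using System.Text;
        using MySql.Data.MySqlClient;

        string connectionString = "server=localhost;database=admindb;uid=root;pwd=...";

        using (MySqlConnection conn = new MySqlConnection(connectionString))
        using (MySqlCommand cmd = new MySqlCommand(
            "call admindb.InsGen('admindb', 'accounts', 'AccountId', '54321')", conn))
        {
            conn.Open();
            object value = cmd.ExecuteScalar();

            // Connector/NET returns BLOB columns as byte[]; decode instead of ToString().
            byte[] bytes = value as byte[];
            string insertStatement = (bytes != null)
                ? Encoding.UTF8.GetString(bytes)
                : (string)value;
        }

    Alternatively, wrapping the generated string in CAST(... AS CHAR) inside the procedure should avoid the binary flag altogether.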


  • iOS LocationManager is not updating location (Titanium Appcelerator module)

    - by vale4674
    I've made an Appcelerator Titanium module for fetching the device's rotation and location. The source can be found on GitHub. The problem is that it fetches only one cached location, while the device motion data is OK and keeps refreshing. I don't use a delegate; I pull that data from my Titanium JavaScript code. If I set "City Run" in Simulator - Debug - Location, nothing happens: the same cached location keeps being returned. Pulling the location itself is OK, because I tried it in a native app which does this:

        textView.text = [NSString stringWithFormat:@"%f %f\n%@",
            locationManager.location.coordinate.longitude,
            locationManager.location.coordinate.latitude,
            textView.text];

    That works in the simulator and on the device, but the same code (as you can see on GitHub) is not working as a Titanium module. Any ideas?

    EDIT: I am looking at the GeolocationModule source and I see nothing special there. As I said, the code in my module has to work, since it works in a native app. The "only" problem is that it never updates the location and always returns that cached one.


  • No expires header

    - by Tom Gullen
    I have the report from YSlow:

        (no expires) http://static3.scirra.net/avatars/128/40cfdcbd1b1ec1842e199c97c4b85a4a.png

    (and a lot more similar). In my web.config, though, I have:

        <system.webServer>
            <staticContent>
                <clientCache httpExpires="Sun, 29 Mar 2020 00:00:00 GMT" cacheControlMode="UseExpires" />
            </staticContent>
            <caching>
                <profiles>
                    <add extension=".ashx" policy="CacheForTimePeriod" kernelCachePolicy="DontCache" duration="01:00:00" />
                    <add extension=".png" policy="CacheUntilChange" kernelCachePolicy="CacheUntilChange" location="Any" />
                </profiles>
            </caching>
            <rewrite>
                <rules>
                    <rule name="Avatar">
                        <match url="avatars/([0-9]+)/(.*).png" />
                        <action type="Rewrite" url="gravatar.ashx?hash={R:2}&amp;size={R:1}" appendQueryString="false" />
                    </rule>
                </rules>
            </rewrite>
        </system.webServer>

    Should this not be adding the Expires header correctly? My objectives are:

    - Gravatar.ashx fetches the image from the Gravatar server
    - the server caches the result for 1 hour (similar to SO)
    - an Expires header is added so the client doesn't keep fetching it from my server
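
    One thing worth checking: because of the rewrite rule, those .png URLs are actually served by gravatar.ashx, a managed handler, so the <clientCache> element (which covers the static-file pipeline) likely never applies to them. The handler can emit the headers itself - a sketch of what a hypothetical gravatar.ashx implementation might add after writing the image:

        using System;
        using System.Web;

        public class GravatarHandler : IHttpHandler
        {
            public void ProcessRequest(HttpContext context)
            {
                // ... fetch the image from Gravatar (or the 1-hour server cache)
                //     and write it to context.Response ...

                // Emit far-future Expires/Cache-Control headers so clients stop re-requesting.
                context.Response.Cache.SetCacheability(HttpCacheability.Public);
                context.Response.Cache.SetExpires(DateTime.UtcNow.AddYears(1));
                context.Response.Cache.SetMaxAge(TimeSpan.FromDays(365));
            }

            public bool IsReusable
            {
                get { return true; }
            }
        }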


  • fullCalendar: Implementing a custom event fetcher using events as a function

    - by Nick-ACNB
    If I specify a JSON feed, the events are re-fetched every time, and during the fetch the items appear with a delay, which is undesirable (at least after the initial fetch is done). I am using the latest version from GitHub, 1.4.6. The lazyFetching property only seems to help when you change views and have previously fetched, for example, a month before drilling down to week/day - which is unusable for me since I only use agendaDay. Here is what I am trying to achieve:

        $("#calendar").fullCalendar({
            events: function(start, end, callback) {
                $.getJSON("/GetEvents", { start: start.valueOf() }, function(data, textStatus) {
                    $.each(data, function(i, event) {
                        // Goal here is to only add items that aren't already rendered.
                        if ($("#calendar").fullCalendar('clientEvents', event.id) == "") {
                            $("#calendar").fullCalendar('renderEvent', event, true);
                        }
                    });
                });
            }
        });

    The goal is to use the sticky flag of the renderEvent method so that events aren't re-rendered on screen if they were previously fetched. I omitted the part where I manually delete events that were deleted and modify those that were updated, since I am in a multi-user setting, but you get the point.

    My issue is that the events are fetched once and added, but once I change day and come back, they don't render, even though I used sticky... has anyone gotten this error, or did I code anything wrong? Also, is this a wrong way to go? I will consider any input. :) Thank you very much.


  • What's the best way to retrieve two pieces of data from an XML file?

    - by Morinar
    I've got an XML document, in either a pre- or post-FO-transformed state, that I need to extract some information from. In the pre case I need to pull out two tags that represent the pageWidth and pageHeight, and in the post case I need to extract the page-height and page-width parameters from a specific tag (I forget which one it is off the top of my head).

    What I'm looking for is an efficient, easily maintainable way to grab these two elements. I'd like to read the document only a single time, fetching the two things I need. I initially started writing something that would use BufferedReader + FileReader, but then I'm doing string searching, and it gets messy when the tags span multiple lines. I then looked at DOMParser, which seems like it would be ideal, but I don't want to read the entire file into memory if I can help it, as the files could potentially be large and the tags I'm looking for will nearly always be close to the top of the file. I then looked into SAXParser, but that seems like a big pile of complicated overkill for what I'm trying to accomplish.

    Anybody have any advice? Or simple implementations that would accomplish my goal? Thanks.


  • A many-to-many relation join disallows IntelliSense/lookup in the joined table

    - by BerggreenDK
    I want to be able to select a product and retrieve all sub-parts (products) within it. My approach is to find the Product id, then retrieve the list of ProductParts with that as a parent, and, while fetching those, follow the key back to the Product child to get the name and details of each part. I was hoping to use something similar to:

        part.linked_product_id.name

    I have two tables: one for [Product], and a relation [ProductPart] that has two FK IDs pointing to [Product].

        Table Product {
            int ID;      // (PRIMARY, NOT NULL)
            String Name;
            ... details removed for overview purposes ...
        }

        Table ProductPart {
            int Product_ID;       // FK (linked with relation to Product/parent)
            int Part_Product_ID;  // FK (linked with relation to Product/children)
            ... details removed for overview purposes ...
        }

    I have checked the SQL diagram and it shows the two relations (both are one-to-many), and in my DBML they also look right. Here is my LINQ to SQL snippet that doesn't work for me... I'm wondering why my joins don't work as supposed to:

        ProductDataContext db = new ProductDataContext();
        var list = (
            from part in db.ProductParts
            join prod in db.Products on part.Part_Product_ID equals prod.ID
            where part.Product_ID == product_ID
            select new
            {
                Name = part.Part_Product_ID. ?? // <-- NO details from joined table?
                // ... rest of properties from the ProductPart join... I hoped...
            }
        );
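
    For what it's worth, Part_Product_ID is the raw foreign-key int, so IntelliSense has nothing to offer past it; the navigable entity reference would be the association property LINQ to SQL generates from the FK (its name depends on the DBML). Failing that, the prod range variable from the existing join already is the child Product row - a sketch under the schema above (property names assumed):

        var list =
            from part in db.ProductParts
            join prod in db.Products on part.Part_Product_ID equals prod.ID
            where part.Product_ID == product_ID
            select new
            {
                // prod is the joined child Product, so its columns are available directly
                Name = prod.Name,
                PartProductId = part.Part_Product_ID
            };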


  • Hibernate noob fetch join problem

    - by Bruce
    Hi all. I have two classes, Test2 and Test3. Test2 has an attribute test3 that is an instance of Test3. In other words, I have a unidirectional OneToOne association, with test2 holding a reference to test3. When I select Test2 from the db, I can see that a separate select is being made to get the details of the associated Test3 instance. This is the famous 1+N selects problem.

    To get this down to a single select, I am trying to use the fetch=join annotation, which I understand to be @Fetch(FetchMode.JOIN). However, with fetch set to join, I still see separate selects. Here are the relevant portions of my setup.

    hibernate.cfg.xml:

        <property name="max_fetch_depth">2</property>

    Test2:

        public class Test2 {
            @OneToOne(cascade = CascadeType.ALL, fetch = FetchType.EAGER)
            @JoinColumn(name = "test3_id")
            @Fetch(FetchMode.JOIN)
            public Test3 getTest3() {
                return test3;
            }
        }

    NB: I set the FetchType to EAGER out of desperation, even though it defaults to EAGER anyway for OneToOne mappings, but it made no difference. Thanks for any help!

    Edit: I've pretty much given up on trying to use FetchMode.JOIN - can anyone confirm that they have got it to work, i.e. produce a left outer join? In the docs I see that "Usually, the mapping document is not used to customize fetching. Instead, we keep the default behavior, and override it for a particular transaction, using left join fetch in HQL". If I do a left join fetch instead:

        query = session.createQuery("from Test2 t2 left join fetch t2.test3");

    then I do indeed get the results I want - i.e. a left outer join in the query.


  • Updating a local sqlite db that is used for local metadata & caching from an service?

    - by Pharaun
    I've searched through the site and haven't found a question/answer that quite answers my question; the closest one I found was: Syncing objects between two disparate systems, best approach.

    To begin: because no RSS feed is available, I'm screen-scraping a webpage. It does a fetch, then goes through the page to scrape out all of the information I'm interested in, and dumps it into a sqlite database so that I can query the information at my leisure without repeatedly fetching from the website. However, I'm also storing various metadata on the data itself in the sqlite db, such as: have I looked at the data; is the data new or old; a bookmark to a chunk of data (think of the data as a collection of unrelated items, where the bookmark is just a pointer to where I am in processing/reading it).

    So right now my problem is figuring out how to update the local sqlite database with new and/or changed data from the website in a manner that is effective and straightforward. Here's my current idea:

    - Download the page itself
    - Create a temporary table for the parsed data to go into
    - Do a comparison between the official and the temporary table, and copy updates and/or new information to the official table

    This process seems kind of complicated, because I would have to figure out how to determine whether the data in the temporary table is new, updated, or unchanged. So I am wondering whether there isn't a better approach, or whether anyone has any suggestions on how to architect/structure such a system?


  • Recommendations to handle development and deployment of php web apps using shared project code

    - by Exception e
    I am wondering what the best way (for a lone developer) is to:

    - develop a project that depends on code of other projects
    - deploy the resulting project to the server

    I am planning to put my code in svn, and have the shared code as a separate project. There are problems with svn:externals which I cannot fully estimate. I've read "subversion:externals considered to be an anti-pattern" and "How do you organize your version control repository", but there is one special thing with PHP projects (and other interpreted source code): there is no final executable resulting from your libraries. External dependencies are thus always on raw source code.

    Ideally I really want to be able to develop simultaneously on one project and the projects it depends on. A possible way: check out a project's dependency in a subfolder as a working copy of the trunk. Problems I foresee:

    - When you want to deploy a project, you might want to freeze its dependencies, right?
    - The dependency code should not end up as a duplicate in the project's repository, I think.
    - (update 1: I additionally assume svn:ignore will pose problems if I cannot fall back on symlinks, see my comment)

    I am still looking for suggestions that do not require the use of junction points; they are a sort of unsupported hack in WinXP which may break some programs.

    This leads me to the last part of the question (as one has influence on the other): how do you deploy apps with such dependencies? I've looked into Buildout for Python, but it seems to be tightly related to the Python ecosystem (resolving and fetching Python modules from the web, etc.). I am very eager to learn about your best practices.


  • Deep Zoom in Ajax - Possible? Any examples out there?

    - by Phil
    I have an idea to implement a deep-zoom-type interface, hosted in a browser, for sports training data (speed, distance, heart rate, etc.). However, rather than images, I actually want to zoom into a hierarchy of information. For example, the initial display would contain a grid of years; hover over 2008, say, and spinning the mouse wheel (or clicking) will zoom into that year, but during the zoom I want 2008 to fade out and be replaced with a calendar of months. Again, zoom into a month and it is replaced with that month's calendar; zoom into a day and you finally see a chart with the training data plotted on it. All the time, only dates with actual data would be highlighted in some fashion.

    My question is whether this would even be possible, and whether anyone has seen examples of this already. I'm imagining that most of the time the next level of information could be cached in the browser (in fact, because this is calendar-based, I can calculate most of that and cache the dates to be highlighted). I could also zoom into an empty chart while an Ajax thread is fetching the data to display.

    I've never tried anything like this before, and I'm especially interested in whether DHTML is capable of this sort of zoom (I suspect not, and that I would have to resort to Silverlight), and whether the Ajax execution would run uninterrupted while the browser rendering thread is kept busy zooming.


  • FQL query using stream table doesn't accept app access token

    - by tougher
    I've searched stackoverflow all day long to find an answer, without luck, so here we go. I'm trying to fetch data from the stream table like this:

        FQL: SELECT post_id, message, created_time FROM stream WHERE source_id = 131559313586863
        URL: http://graph.facebook.com/fql?q=SELECT+post_id%2C+message%2C+created_time+FROM+stream+WHERE+source_id+%3D+131559313586863&access_token=10669xxxxx74470|PF-7GSdBx0Nxxxxxkdi1KwSQG-w

    But I get a 400 Bad Request as the response, with the error message "An access token is required to request this resource.". I'm fetching an application access token with this URL:

        https://graph.facebook.com/oauth/access_token?client_id=FACEBOOK_APP_ID&client_secret=FACEBOOK_APP_SECRET&grant_type=client_credentials

    Facebook states in this blog post that "You will need to pass a valid app or user access token to access this functionality.", where functionality refers to /feed and /posts (the stream table). Furthermore, this wiki tells the same story about using the stream table: "From June 3 2011 a token is required to query this table. You can use any application or user token to make the query.".

    Does anyone see my hopefully obvious flaw? Please note:

    - The profile in the FQL query is public.
    - I need this to run userless through a cronjob; no user interaction is possible.
    - The request works if I replace the app access token with my own user token from https://developers.facebook.com/tools/explorer


  • Good PHP / MYSQL hashing solution for large number of text values

    - by Dave
    Short description: I need a hashing-algorithm solution in PHP for a large number of text values.

    Long description:

        PRODUCT_OWNER_TABLE: serial_number (auto_inc), product_name, owner_id
        OWNER_TABLE: owner_id (auto_inc), owner_name

    I need to maintain a database of 200,000 unique products and their owners (AND all subsequent changes to ownership). Each product has one owner, but an owner may have MANY different products. Owner names are "Adam Smith", "John Reeves", etc. - just text values (quite likely to be unicode as well).

    I want to optimize the database design, so what I was thinking was: every week when I run this script, it fetches the owner of a product, then checks against a table (I suppose similar to PRODUCT_OWNER_TABLE), fetching the owner_id. It then looks up owner_id in OWNER_TABLE. If it matches, then it's the same, so it moves on. The problem is when it's different...

    To optimize the database, I think I should be checking against the other owner_name entries in OWNER_TABLE to see if that value exists there. If it does, then I should use that owner_id. If it doesn't, then I should add another entry. Note that there is nothing special about the "name": as long as I maintain the correct linkages AND make OWNER_TABLE a "read-only, append-new" type table, I should be able to create a historical archive of ownership.

    I need to do this check for 200,000 entries, with I-don't-know-how-many unique owner names (~50,000?). I think I need a hashing solution - OWNER_TABLE won't be sorted, so search algorithms won't be optimal. The programming language is PHP; the database is MySQL.


  • JAAS + authentication from database

    - by AhmedDrira
    I am trying to perform authentication from a database using JAAS. I've configured login-config.xml like this:

        <application-policy name="e-procurment_domaine">
            <authentication>
                <login-module code="org.jboss.security.auth.spi.DatabaseServerLoginModule" flag="required">
                    <module-option name="dsJndiName">BasepfeDS</module-option>
                    <module-option name="securityDomain">java:/jaas/e-procurment_domaine</module-option>
                    <module-option name="principalsQuery">SELECT pass FROM personne WHERE login=?</module-option>
                    <module-option name="rolesQuery">SELECT disc FROM personne WHERE login=?</module-option>
                </login-module>
            </authentication>
        </application-policy>

    and I've written this test:

        @Test
        public void testFindALL() {
            System.out.println("Debut test de la méthode findALL");
            // WebAuthentication wa = new WebAuthentication();
            // wa.login("zahrat", "zahrat");
            securityClient.setSimple("zahrat", "zahrat");
            try {
                securityClient.login();
            } catch (LoginException e) {
                e.printStackTrace();
            }
            Acheteur acheteur = new Acheteur();
            System.out.println("" + acheteurRemote.findAll().size());
            System.out.println("Fin test find ALL");
        }

    The test fails and I don't know why, but when I change the policy to the .properties-file method, it works. I am using these annotations on the session bean classes:

        @SecurityDomain("e-procurment_domaine")
        @DeclareRoles({"acheteur", "vendeur", "physique"})
        @RolesAllowed({"acheteur", "vendeur", "physique"})

    and this annotation on the session method:

        @RolesAllowed("physique")
        @Override
        public List<Acheteur> findAll() {
            log.debug("fetching all Acheteur");
            return daoGenerique.findWithNamedQuery("Acheteur.findAll");
        }

    I think the test needs access to my database - does it need the MySQL driver or some special configuration on JBoss?


  • Alignment in assembly

    - by jena
    Hi, I'm spending some time on assembly programming (GAS, in particular) and recently I learned about the .align directive. I think I've understood the very basics, but I would like to gain a deeper understanding of its nature and of when to use alignment.

    For instance, I wondered about the assembly code of a simple C++ switch statement. I know that under certain circumstances switch statements are based on jump tables, as in the following few lines of code:

        .section .rodata
            .align 4
            .align 4
        .L8:
            .long .L2
            .long .L3
            .long .L4
            .long .L5
        ...

    .align 4 aligns the following data on the next 4-byte boundary, which ensures that fetching these memory locations is efficient, right? I think this is done because there might be things happening before the switch statement which caused misalignment. But why are there actually two calls to .align?

    Are there any rules of thumb for when to call .align, or should it simply be done whenever a new block of data is stored in memory and something prior to it could have caused misalignment? In the case of arrays, it seems that alignment is done on 32-byte boundaries as soon as the array occupies at least 32 bytes. Is it more efficient to do it this way, or is there another reason for the 32-byte boundary?

    I'd appreciate any explanation or hint on literature.


  • Rookie MySQL question about paging; is one query enough?

    - by Camran
    I have managed to get paging to work, almost. I want to display to the user the total number of records found and the currently displayed records, e.g. "4000 found, displaying 0-100". I am testing this with the number 2 (because I don't have that many records - around 20). So I am using:

        LIMIT $start, $nr_results;

    Do I have to make two queries in order to display the results the way I want - one query fetching all records, then mysql_num_rows to count them all, and then the one with the LIMIT? I have this:

        mysql_num_rows($qry_result);
        $total_pages = ceil($num_total / $res_per_page); // $res_per_page == 2 and $num_total = 2
        if ($p - 10 < 1) {
            $pagemin = 1;
        } else {
            $pagemin = $p - 10;
        }
        if ($p + 10 > $total_pages) {
            $pagemax = $total_pages;
        } else {
            $pagemax = $p + 10;
        }

    Here is the query:

        SELECT mt.*, fordon.*, boende.*, elektronik.*, business.*, hem_inredning.*, hobby.*
        FROM classified mt
        LEFT JOIN fordon ON fordon.classified_id = mt.classified_id
        LEFT JOIN boende ON boende.classified_id = mt.classified_id
        LEFT JOIN elektronik ON elektronik.classified_id = mt.classified_id
        LEFT JOIN business ON business.classified_id = mt.classified_id
        LEFT JOIN hem_inredning ON hem_inredning.classified_id = mt.classified_id
        LEFT JOIN hobby ON hobby.classified_id = mt.classified_id
        ORDER BY modify_date DESC
        LIMIT 0, 2

    Thanks - if you need more input, let me know. Basically the question is: do I have to make two queries?


  • Maximum page fetch with maximum bandwidth

    - by Ehsan
    Hi. I want to create an application like a spider. I've implemented page fetching as in the following code, in a multi-threaded application, but there are two problems:

    1) I want to use my maximum bandwidth to send/receive requests. How should I configure my requests to do so (like download-accelerator applications and the like)? I've heard a normal application will use 66% of the available bandwidth.

    2) I don't know what exactly HttpWebRequest.KeepAlive does, but as its name implies, I think I can create a connection to a website and, without closing the connection, send another request to that website using the existing connection. Does it boost performance, or am I wrong?

        public PageFetchResult Fetch()
        {
            PageFetchResult fetchResult = new PageFetchResult();
            try
            {
                HttpWebRequest req = (HttpWebRequest)HttpWebRequest.Create(URLAddress);
                HttpWebResponse resp = (HttpWebResponse)req.GetResponse();

                Uri requestedURI = new Uri(URLAddress);
                Uri responseURI = resp.ResponseUri;
                if (Uri.Equals(requestedURI, responseURI))
                {
                    string resultHTML = "";
                    byte[] reqHTML = ResponseAsBytes(resp);
                    if (!string.IsNullOrEmpty(FetchingEncoding))
                        resultHTML = Encoding.GetEncoding(FetchingEncoding).GetString(reqHTML);
                    else if (!string.IsNullOrEmpty(resp.CharacterSet))
                        resultHTML = Encoding.GetEncoding(resp.CharacterSet).GetString(reqHTML);

                    req.Abort();
                    resp.Close();
                    fetchResult.IsOK = true;
                    fetchResult.ResultHTML = resultHTML;
                }
                else
                {
                    URLAddress = responseURI.AbsoluteUri;
                    relayPageCount++;
                    if (relayPageCount > 5)
                    {
                        fetchResult.IsOK = false;
                        fetchResult.ErrorMessage = "Maximum page redirection occured.";
                        relayPageCount = 0;
                        return fetchResult;
                    }
                    req.Abort();
                    resp.Close();
                    return Fetch();
                }
            }
            catch (Exception ex)
            {
                fetchResult.IsOK = false;
                fetchResult.ErrorMessage = ex.Message;
            }
            return fetchResult;
        }
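
    On both points, two knobs in System.Net are worth knowing about. .NET caps concurrent connections per host (two, per the HTTP/1.1 guidance), which throttles a multi-threaded spider well below the available bandwidth; and KeepAlive (true by default) keeps the TCP connection open so later requests to the same host skip the connect/handshake cost. A minimal sketch - the limit value is chosen arbitrarily, and urlAddress stands in for a page URL:

        using System.Net;

        // Raise the per-host connection cap before starting the worker threads;
        // otherwise extra parallel requests to one host just queue up.
        ServicePointManager.DefaultConnectionLimit = 20;

        string urlAddress = "http://example.com/";
        HttpWebRequest req = (HttpWebRequest)WebRequest.Create(urlAddress);
        // KeepAlive is true by default: the connection is reused for subsequent
        // requests to the same host instead of being re-established each time.
        req.KeepAlive = true;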


  • HttpWebResponse hangs on multiple requests

    - by Ehsan
    I've an application that creates many web requests to download the news pages of a web site (I've tested it with many web sites). After a while I found that the application slows down in fetching the HTML source, and then I found out that HttpWebResponse fails to get the response. I'll post only the function that does this job:

        public PageFetchResult Fetch()
        {
            PageFetchResult fetchResult = new PageFetchResult();
            try
            {
                HttpWebRequest req = (HttpWebRequest)HttpWebRequest.Create(URLAddress);
                HttpWebResponse resp = (HttpWebResponse)req.GetResponse();

                Uri requestedURI = new Uri(URLAddress);
                Uri responseURI = resp.ResponseUri;
                if (Uri.Equals(requestedURI, responseURI))
                {
                    string resultHTML = "";
                    byte[] reqHTML = ResponseAsBytes(resp);
                    if (!string.IsNullOrEmpty(FetchingEncoding))
                        resultHTML = Encoding.GetEncoding(FetchingEncoding).GetString(reqHTML);
                    else if (!string.IsNullOrEmpty(resp.CharacterSet))
                        resultHTML = Encoding.GetEncoding(resp.CharacterSet).GetString(reqHTML);

                    resp.Close();
                    fetchResult.IsOK = true;
                    fetchResult.ResultHTML = resultHTML;
                }
                else
                {
                    URLAddress = responseURI.AbsoluteUri;
                    relayPageCount++;
                    if (relayPageCount > 5)
                    {
                        fetchResult.IsOK = false;
                        fetchResult.ErrorMessage = "Maximum page redirection occured.";
                        return fetchResult;
                    }
                    return Fetch();
                }
            }
            catch (Exception ex)
            {
                fetchResult.IsOK = false;
                fetchResult.ErrorMessage = ex.Message;
            }
            return fetchResult;
        }

    Any solution would be greatly appreciated.
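
    A likely culprit: in the redirect branch above, the response is never closed, and .NET only allows two concurrent connections per host by default, so after a couple of leaked responses the next GetResponse() blocks until it times out. Wrapping the response in using (and closing error responses too) guarantees the pooled connection is released on every path - a sketch of the shape:

        HttpWebRequest req = (HttpWebRequest)WebRequest.Create(URLAddress);
        try
        {
            using (HttpWebResponse resp = (HttpWebResponse)req.GetResponse())
            {
                // ... read and decode the body exactly as before ...
            } // Dispose always returns the connection to the pool, even on redirects
        }
        catch (WebException ex)
        {
            // Error responses (404, 500, ...) hold a pooled connection as well.
            if (ex.Response != null)
                ex.Response.Close();
            throw;
        }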


  • Hibernate: Walk millions of rows and don't leak memory

    - by Autocracy
    The below code functions, but Hibernate never lets go of its grip on any object. Calling session.clear() causes exceptions regarding fetching a joined class, and calling session.evict(currentObject) before retrieving the next object also fails to free the memory. Eventually I exhaust my heap space. Checking my heap dumps, StatefulPersistenceContext is the garbage collector's root for all references pointing to my objects.

        public class CriteriaReportSource implements JRDataSource {
            private ScrollableResults sr;
            private Object currentObject;
            private Criteria c;
            private static final int scrollSize = 10;
            private int offset = 1;

            public CriteriaReportSource(Criteria c) {
                this.c = c;
                advanceScroll();
            }

            private void advanceScroll() {
                // ((Session) Main.em.getDelegate()).clear();
                this.sr = c.setFirstResult(offset)
                           .setMaxResults(scrollSize)
                           .scroll(ScrollMode.FORWARD_ONLY);
                offset += scrollSize;
            }

            public boolean next() {
                if (sr.next()) {
                    currentObject = sr.get(0);
                    if (sr.isLast()) {
                        advanceScroll();
                    }
                    return true;
                }
                return false;
            }

            public Object getFieldValue(JRField jrf) throws JRException {
                Object retVal = null;
                if (currentObject == null) {
                    return null;
                }
                try {
                    retVal = PropertyUtils.getProperty(currentObject, jrf.getName());
                } catch (Exception ex) {
                    Logger.getLogger(CriteriaReportSource.class.getName()).log(Level.SEVERE, null, ex);
                }
                return retVal;
            }
        }


  • How can I get the rank of rows relative to total number of rows based on a field?

    - by Arms
    I have a scores table that has two fields: user_id and score. I'm fetching specific rows that match a list of user_ids. How can I determine a rank for each row relative to the total number of rows, based on score? The rows in the result set are not necessarily sequential (the scores will vary widely from one row to the next). I'm not sure if this matters, but user_id is a unique field.

    Edit (@Greelmo): I'm already ordering the rows. If I fetch 15 rows, I don't want the rank to be 1-15. I need it to be the position of each row compared against the entire table by the score property. So if I have 200 rows, one row's rank may be 3 and another may be 179 (these are arbitrary numbers, for example only).

    Edit 2: I'm having some luck with this query, but I actually want to avoid ties:

        SELECT s.score, s.created_at, u.name, u.location, u.icon_id, u.photo,
               (SELECT COUNT(*) + 1 FROM scores WHERE score > s.score) AS rank
        FROM scores s
        LEFT JOIN users u ON u.uID = s.user_id
        ORDER BY s.score DESC, s.created_at DESC
        LIMIT 15

    If two or more rows have the same score, I want the latest one (or the earliest - I don't care) to be ranked higher. I tried modifying the subquery with AND id > s.id, but that ended up giving me an unexpected result set and different ties.


  • Efficiently Serving Dynamic Content in Google App Engine

    - by awegawef
    My app on Google App Engine returns content items (just text) and comments on them. It works like this (pseudo-ish code):

        query: get keys of latest content  # query to datastore
        for each item in content:
            if item_dict in memcache:
                use item_dict
            else:
                build_item_dict(item)  # by fetching from datastore
                store item_dict in memcache
        send all item_dicts to template

    Sorry if the code isn't understandable. I get all of the content dictionaries and send them to the template, which uses them to create the webpage.

    My problem is that if the memcache has expired, then for each item I want to display I have to (1) look the item up in memcache, (2) fetch the item from the datastore since no memcache entry exists, and (3) store the item in memcache. These calls add up quickly. I don't set an expiry time for the entries in the memcache, so this really only happens once in the morning, but the webpage takes long enough to load (~1 sec) that the browser reports it as not existing. Normally my webpages take about 50ms to load. This approach works decently for frequent visits, but it has its flaws, as shown above.

    How can I remedy this? The entries are dynamic enough that I don't think it would be in my best interest to cache my initial request. Thanks in advance.


  • TextBox value not updated

    - by Jignesh
    I am fetching data from the database into TextBoxes using LINQ. When I try to update the same TextBox values, the update does not work:

        DAL.TournamentsDataContext tdc = new SchoolSports.DAL.TournamentsDataContext();
        var tournamentTable = tdc.GetTable<DAL.Tournament>();
        var tournamentRecord = (from rec in tournamentTable
                                where rec.TournamentId == TournamentId
                                select rec).Single();
        tournamentRecord.Tournament_type = Tournament_type;
        tournamentRecord.Tournament_Name = Tournament_Name;
        tournamentRecord.Tournament_Level = Tournament_Level;
        tournamentRecord.Tournament_For = Tournament_For;
        tournamentRecord.Country_Code = Country_Code;
        tournamentRecord.Tournament_Status = Tournament_Status;
        tournamentRecord.Tournament_begin_date = Tournament_begin_date;
        tournamentRecord.Tournament_end_date = Tournament_end_date;
        tournamentRecord.Sponsored_By = Sponsored_By;
        tournamentRecord.Tournament_Details = Tournament_Details;

        var organiserTable = tdc.GetTable<DAL.Organiser>();
        var organiserRecord = (from rec in organiserTable
                               where rec.Tournament_Id == TournamentId
                               select rec).Single();
        organiserRecord.Name_Of_Organiser = OrName;
        organiserRecord.Telephone = OrTeleNo;
        organiserRecord.Email = OrEmail;
        organiserRecord.Mobile = OrMobile;
        organiserRecord.Fax = OrFax;

        if (Tournament_For == "School")
        {
            var invitedSchoolIdTable = tdc.GetTable<DAL.Invited_School>();
            var invitedSchoolIdRecord = (from rec in invitedSchoolIdTable
                                         where rec.Tournament_Id == TournamentId
                                         select rec).Single();
            invitedSchoolIdRecord.School_Ids = SchoolUniIds;
        }

        if (Tournament_For == "University")
        {
            var invitedUniversityTable = tdc.GetTable<DAL.Invited_University>();
            var invitedUniversityIdRecord = (from rec in invitedUniversityTable
                                             where rec.Tournament_Id == TournamentId
                                             select rec).Single();
            invitedUniversityIdRecord.University_Ids = SchoolUniIds;
        }

        tdc.SubmitChanges();
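
    The update logic itself looks sound (same DataContext, tracked entities, SubmitChanges at the end). If this runs in ASP.NET WebForms, a frequent cause of "the update does not work" is rebinding the TextBoxes from the database in Page_Load on every request, which overwrites the user's edits before the code above reads them. A sketch of the usual guard (LoadTournamentIntoTextBoxes is a hypothetical helper standing in for the existing binding code):

        protected void Page_Load(object sender, EventArgs e)
        {
            // Bind from the database only on the initial GET;
            // on postback, keep the values the user typed into the TextBoxes.
            if (!IsPostBack)
            {
                LoadTournamentIntoTextBoxes();
            }
        }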


  • KODO: how to set up a fetch plan for bidirectional relationships?

    - by BestPractices
    Running KODO 4.2 and having an issue with inefficient queries being generated by KODO. This happens when fetching an object that contains a collection where that collection has a bidirectional relationship back to the first object:

        class Classroom {
            List<Student> _students;
        }

        class Student {
            Classroom _classroom;
        }

    If we create a fetch plan to get a list of Classrooms and their corresponding Students by setting up the following:

        fetchPlan.addField(Classroom.class, "_students");

    this results in two queries (get the classrooms, then get all students that are in those classrooms), which is what we would expect. However, if we also include the reference back to the classroom in our fetch plan, so that the _classroom field gets populated, by doing

        fetchPlan.addField(Student.class, "_classroom");

    this results in X additional queries, where X is the number of students in each classroom.

    Can anyone explain how to fix this? KODO already has the original Classroom objects at the point where it executes the queries to retrieve the Student objects and set them in each Student's _classroom field, so I would expect KODO to simply set those objects in the _classroom field of each Student accordingly and not go back to the database. Once again the documentation is sorely lacking with KODO/JDO/OpenJPA, but from what I've read it should be able to do this more efficiently.

    Note: EAGER_FETCH.PARALLEL is turned on, and I have tried this with caching (query and data caches) turned on and off; there is no difference in the resulting queries.


  • How to increase PermGen memory for eclipselink StaticWeaveAntTask

    - by rayd09
    We are using EclipseLink and need to weave the code in order for lazy fetching to work properly. During the weave the build fails with:

        weave:
        BUILD FAILED
        java.lang.OutOfMemoryError: PermGen space

    I have the following tasks within my ant build file:

        <target name="define_weave_task" description="task definition for EclipseLink static weaving">
            <taskdef name="eclipse_weave" classname="org.eclipse.persistence.tools.weaving.jpa.StaticWeaveAntTask"/>
        </target>

        <target name="weave" depends="compile,define_weave_task" description="weave eclipselink code into compiled classes">
            <eclipse_weave source="${path.classes}" target="${path.classes}">
                <classpath refid="compile.classpath"/>
            </eclipse_weave>
        </target>

    It has been working great for a long time; now that the amount of code to be woven has increased, I'm getting the PermGen error. I would like to be able to raise the amount of perm space. If I were compiling, I could raise it via a compiler argument such as

        <compilerarg value="-XX:MaxPermSize=256M"/>

    but this does not appear to be a valid argument for EclipseLink weaving. How can I raise the perm space for the weave?


  • iPhone Core Data fetch-request sorting

    - by satyam
    I'm using Core Data and fetching the results successfully. I have a few questions regarding Core Data:

    1. When I add a record, will it be added at the end or at the start of the entity?

    2. I'm using the following code to fetch the data. The array is populated with all the records, but they are not in the same order as I entered them into the entity. Why? On what basis is the default sort order chosen?

        NSFetchRequest* allLatest = [[NSFetchRequest alloc] init];
        [allLatest setEntity:[NSEntityDescription entityForName:@"Latest"
                               inManagedObjectContext:[self managedObjectContext]]];
        NSError* error = nil;
        NSArray* records = [managedObjectContext executeFetchRequest:allLatest error:&error];
        [allLatest release];

    3. Given the order I enter the records in (1, 2, 3, 4, ...), after some time I want to delete the records that I entered first (I mean the oldest data) - something like "delete the oldest two records". How do I do that?


  • looking to streamline my RSS feed mashup

    - by Mark Cejas
    Hello crafty developers. I have aggregated RSS feeds from various sources with RSSOwl, fetching directly from the social mention API. The RSS feeds are categorized into the following major categories: blogs, news, Twitter, Q&A and social networking sites. Each major category is nested with a common group of RSS feeds that represent a particular client/brand ontology. Merging these feeds into the RSSOwl reader application allows me to conduct and save refined search queries (from the aggregated data) into a single file that I can then tag and further segment for analysis. This scheme is used for my own research needs and has helped me considerably.

    However, I find this RSS mashup scheme kind of clumsy: it takes quite a bit of time to initially organize all of the feeds. I would also like to do further natural language processing on the data, and eventually to rank the collected list of URLs in some order of media prominence. I don't want to pay the ridiculous Radian6 web-analytics fees, when my intuition tells me that with a bit of elbow grease I can maybe leverage some available resources online to develop a functional small-scale web mining application and get some good intelligence from it. I am now starting to learn a little about computer science - my background is in physical science/statistics - so is my thinking on the right track?

    So, I guess I am imagining an application that allows me to query in a refined manner: a manner that lets me search for keyword combinations, apply AND/OR operators, selectively focus my queries on particular sources - like a collection of blogs, or Twitter, or social networking communities - and then save the results of my queries in a structured format that can be manipulated and explored. Am I dreaming? I just had to get all of this out. Any bit of advice and insight would be hugely appreciated. My best, Mark

