Search Results

Search found 13889 results on 556 pages for 'results'.

  • How to combine a Distance and Keyword SQL query?

    - by Jason
    Hi Folks, I have two tables in my database called "points" and "category". A user will enter both a location and a keyword into text boxes. I then want to find points where the keyword matches either the "title" field in the points table or the "category", but only those within a certain distance of the user's location, and I want to order the results by distance. Here are the two queries, which both work independently:

        $mysql = "SELECT *, ( 3959 * acos( cos( radians('$search_lat') ) * cos( radians( lat ) ) * cos( radians( longi ) - radians('$search_lng') ) + sin( radians('$search_lat') ) * sin( radians( lat ) ) ) ) AS distance FROM points HAVING distance < '$radius'";

        $mysql2 = "SELECT * FROM `points` LEFT JOIN category USING ( category_id ) WHERE (point_title LIKE '%$esc_catsearch%' OR category.title LIKE '%$esc_catsearch%')";

    Here is what I tried:

        $sql_search = sprintf("SELECT *,point_id FROM points WHERE point_title LIKE '%%%s%%' UNION SELECT *, ( 3959 * acos( cos( radians('%s') ) * cos( radians( lat ) ) * cos( radians( longi ) - radians('%s') ) + sin( radians('%s') ) * sin( radians( lat ) ) ) ) AS distance FROM points HAVING distance < '%s' ORDER BY distance LIMIT %d , %d", $esc_catsearch, mysql_real_escape_string($search_lat), mysql_real_escape_string($search_lng), mysql_real_escape_string($search_lat), mysql_real_escape_string($radius), $offset, $rowsPerPage);

    But it tells me there is no known column "distance". If I remove the ORDER BY clause it works, but I'm still not sure it is giving me the results I want. I also tried the query the other way around, with the distance search first, but that seems to ignore my keyword. Any thoughts would be much appreciated!
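
    A hedged sketch of one way to combine the two: compute the distance and apply the keyword filter in a single SELECT, joining the category table so both title columns are available; the distance alias can then be reused in HAVING and ORDER BY. The Python below assumes a MySQL DB-API connection (e.g. MySQLdb or mysql-connector-python, both of which use %s placeholders) and simply reuses the column names from the question; it is untested against this schema.

        # Single query: keyword filter in WHERE, radius filter on the computed alias in HAVING.
        COMBINED_SQL = """
            SELECT p.*,
                   ( 3959 * ACOS( COS(RADIANS(%s)) * COS(RADIANS(p.lat))
                                * COS(RADIANS(p.longi) - RADIANS(%s))
                                + SIN(RADIANS(%s)) * SIN(RADIANS(p.lat)) ) ) AS distance
            FROM points p
            LEFT JOIN category c USING (category_id)
            WHERE (p.point_title LIKE %s OR c.title LIKE %s)
            HAVING distance < %s
            ORDER BY distance
            LIMIT %s, %s
        """

        def find_points(conn, lat, lng, keyword, radius, offset, per_page):
            """Run the combined keyword + distance search on a MySQL DB-API connection."""
            like = "%" + keyword + "%"
            cur = conn.cursor()
            cur.execute(COMBINED_SQL, (lat, lng, lat, like, like, radius, offset, per_page))
            return cur.fetchall()

    Binding the values as parameters also removes the need for the mysql_real_escape_string() calls used in the sprintf version.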

  • Is there any other efficient way to use table variable instead of using temporary table

    - by varta shrimali
    We are writing a script to display banners on a web page, and we are using a temporary table in a MySQL procedure. Is there a more efficient approach, such as using a table variable instead of the temporary table? We are using the following code:

        -- banner location CURSOR --
        DECLARE banner_location_cursor CURSOR FOR
            select bm.id as masterId, bm.section as masterName, bs.id as locationId, bs.sectionName as locationName
            from banner_master as bm
            inner join banner_section as bs on bm.id=bs.masterId
            where bm.section=sCode ;

        -- DECLARE banner CURSORS
        DECLARE banner_cursor CURSOR FOR
            SELECT bd.id as bannerId, bd.sectionId, bd.bannerName, bd.websiteURL, bd.paymentType, bd.status,
                   bd.startDate, bd.endDate, bd.bannerDisplayed, bs.id, bs.sectionName
            from banner_detail as bd
            inner join banner_section as bs on bs.id=bd.sectionId
            where bs.id= location_id and bd.status='A'
              and (dates between cast(bd.startDate as DATE) and cast(bd.endDate as DATE))
            order by rand(), bd.bannerDisplayed asc limit 1 ;

        DECLARE CONTINUE HANDLER FOR NOT FOUND SET no_more_rows = 1;

        SET dates = (select curdate());

        -- RESULTS TABLE WHICH WILL BE RETURNED --
        CREATE temporary TABLE test (
            b_id INT, s_id INT, b_name varchar(128), w_url varchar(128), p_type varchar(128),
            st char(1), s_date datetime, e_date datetime, b_display int, sec_id int, s_name varchar(128)
        );

        -- OPEN banner location CURSOR
        OPEN banner_location_cursor;
        the_loop: LOOP
            FETCH banner_location_cursor INTO master_id, master_name, location_id, location_name;
            IF no_more_rows THEN
                CLOSE banner_location_cursor;
                leave the_loop;
            END IF;

            OPEN banner_cursor;
            -- select FOUND_ROWS();
            the_loop2: LOOP
                FETCH banner_cursor INTO banner_id, section_id, banner_name, website_url, payment, status,
                                         start_date, end_date, banner_displayed, sec_id, section_name;
                IF no_more_rows THEN
                    set no_more_rows = 0;
                    CLOSE banner_cursor;
                    leave the_loop2;
                END IF;

                INSERT INTO test ( b_id, s_id, b_name , w_url, p_type, st, s_date, e_date, b_display, sec_id, s_name )
                VALUES ( banner_id, section_id, banner_name, website_url, payment, status, start_date, end_date, banner_displayed, sec_id, section_name );

                UPDATE banner_detail set bannerDisplayed = (banner_displayed+1) where id = banner_id;
            END LOOP the_loop2;
        END LOOP the_loop;

        -- RETURN result
        SELECT * FROM test;

        -- DROP RESULTS TABLE
        DROP TABLE test;
        END

  • accessing parsed JSON on the iPhone SDK

    - by itai alter
    Hello All! I've been following the great tutorial about the iPhone, JSON and the Flickr API, and I did manage to access the parsed JSON info just fine. Now I'm trying to do the same thing with the Twitter API; I am able to get the JSON info and parse it, but I can't seem to access it like in Flickr. I noticed that the JSON retrieved from Twitter is a little different from Flickr's: the Flickr JSON starts straight with a curly brace ({), while the Twitter JSON starts with a square bracket and then a curly brace ([{). I understand that this means it's an array inside the JSON, but I don't know how to access it. In the Flickr example, I access the objects like so (the second line takes the number of pages Flickr has reported):

        NSDictionary *results = [jsonString JSONValue];
        pagesString = [[results objectForKey:@"photos"] objectForKey:@"pages"];

    but I can't seem to access the Twitter response in the same way... Does anyone know of a solution? (Here's an example of the Twitter JSON response: api.twitter.com/1/statuses/public_timeline.json ) Thanks a bunch!
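
    For illustration only (the question is Objective-C, but here is a hedged Python sketch of the structural difference): when the top level of the JSON is an array, you pick an element first and then look up keys.

        import json

        # Flickr-style response: a JSON object at the top level.
        flickr = json.loads('{"photos": {"pages": 12}}')
        pages = flickr["photos"]["pages"]          # dict lookup straight away

        # Twitter-style response: a JSON array of status objects at the top level.
        twitter = json.loads('[{"id": 1, "text": "first"}, {"id": 2, "text": "second"}]')
        first_text = twitter[0]["text"]            # pick an array element, then look up keys

        print(pages, first_text)

    With the JSONValue category used in the question, the same shape presumably applies: the parsed value would be an NSArray, so take an object at an index (e.g. objectAtIndex:0) before calling objectForKey:.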

  • Personal Cache vs Memcache?

    - by Kerry
    I have a personal caching class, which can be seen here (based off WordPress'): http://pastie.org/988427 I recently learned about memcache, and the advice was to memcache EVERYTHING: http://highscalability.com/blog/2010/5/17/7-lessons-learned-while-building-reddit-to-270-million-page.html My first thought was just to keep my class with its current functions and make it use memcache instead -- is there any downside to doing this? The main difference I see is that memcache stays on with the server from page to page, while mine lasts for one page load. The problem I see arising, and this applies to any system, is that the data is dynamic. It changes all the time. Whether it's search results, visible products, etc., if it's all cached, won't that create a problem? Is there a way to handle this? Obviously if something brings back the same results every time it should be cached, but that's why I was doing it on a per-page-load basis. I'm sure there is a way to handle this, or is the cache time usually just set between 5 minutes and an hour?
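
    The usual pattern for dynamic data is a short TTL plus explicit invalidation when the underlying data changes. A hedged Python sketch using the python-memcached client (key names and the compute_results callback are hypothetical):

        import memcache

        mc = memcache.Client(["127.0.0.1:11211"])

        def get_search_results(query, compute_results, ttl=300):
            """Cache a dynamic result for a few minutes; recompute when it expires."""
            key = "search:" + query.replace(" ", "_")   # memcached keys cannot contain spaces
            results = mc.get(key)
            if results is None:
                results = compute_results(query)        # the expensive DB/search call
                mc.set(key, results, time=ttl)          # time= is the expiry in seconds
            return results

        def on_products_changed():
            # Explicit invalidation: delete (or version) the affected keys when data changes,
            # instead of waiting for the TTL to expire.
            mc.delete("visible_products")

    The per-request cache you already have and memcached are complementary: one saves repeated work within a single page load, the other across pages and servers.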

  • NHibernate.QueryException with dynamic-component

    - by Ken
    OK, this is going to be kind of a long shot, since it's a big system (which I don't claim to fully understand, yet), the problem might not be with NHibernate itself, and I'm even having trouble reproducing it, but... I've got a class with a <dynamic-component> section, and when I run a query on it (through my ASP.NET MVC app), it fails, but only sometimes. (Yeah, the worst kind!) The exception I'm seeing is:

        NHibernate.QueryException: could not resolve property: Attributes.MyAttributeName of: MyClassName
           at NHibernate.Persister.Entity.AbstractPropertyMapping.GetColumns(String propertyName)
           at NHibernate.Persister.Entity.AbstractPropertyMapping.ToColumns(String alias, String propertyName)
           at NHibernate.Persister.Entity.BasicEntityPropertyMapping.ToColumns(String alias, String propertyName)
           at NHibernate.Persister.Entity.AbstractEntityPersister.ToColumns(String alias, String propertyName)
           at NHibernate.Loader.Criteria.CriteriaQueryTranslator.GetColumns(String propertyName, ICriteria subcriteria)
           at NHibernate.Loader.Criteria.CriteriaQueryTranslator.GetColumnsUsingProjection(ICriteria subcriteria, String propertyName)
           at NHibernate.Criterion.CriterionUtil.GetColumnNamesUsingPropertyName(ICriteriaQuery criteriaQuery, ICriteria criteria, String propertyName, Object value, ICriterion critertion)
           at NHibernate.Criterion.CriterionUtil.GetColumnNamesForSimpleExpression(String propertyName, IProjection projection, ICriteriaQuery criteriaQuery, ICriteria criteria, IDictionary`2 enabledFilters, ICriterion criterion, Object value)
           at NHibernate.Criterion.SimpleExpression.ToSqlString(ICriteria criteria, ICriteriaQuery criteriaQuery, IDictionary`2 enabledFilters)
           at NHibernate.Loader.Criteria.CriteriaQueryTranslator.GetWhereCondition(IDictionary`2 enabledFilters)
           at NHibernate.Loader.Criteria.CriteriaJoinWalker..ctor(IOuterJoinLoadable persister, CriteriaQueryTranslator translator, ISessionFactoryImplementor factory, CriteriaImpl criteria, String rootEntityName, IDictionary`2 enabledFilters)
           at NHibernate.Loader.Criteria.CriteriaLoader..ctor(IOuterJoinLoadable persister, ISessionFactoryImplementor factory, CriteriaImpl rootCriteria, String rootEntityName, IDictionary`2 enabledFilters)
           at NHibernate.Impl.SessionImpl.List(CriteriaImpl criteria, IList results)
           at NHibernate.Impl.CriteriaImpl.List(IList results)
           at NHibernate.Impl.CriteriaImpl.UniqueResult[T]()
           ...my code below here...

    Can anybody explain exactly what this QueryException means, i.e., so I can have an idea of what exactly it thinks is going wrong? Thanks!

  • Core Data Inferred Migration – Automatic "lightweight" vs Manual

    - by ohhorob
    I've updated the model of an existing iPhone app in some simple ways (remove attribute, add attribute, remove index), and can use automatic lightweight migration to migrate the persistent store. Due to the typical size of the data set, the processing time is not insignificant and warrants feedback for the user. NSMigrationManager provides a simple but useful migrationProgress value that sends KVO notifications as the migration is performed. That forms the basis of providing feedback; however, attempting to use an inferred model ([NSMappingModel inferredMappingModelForSourceModel:destinationModel:error:]) results in drastically different timing for the exact same dataset.

    Profile results on an original iPhone (2G):

    Automatic inferred lightweight migration

        PROFILE: CacheManager -migrateStore
        PROFILE: 0.6130 (+0.6130) models loaded
        PROFILE: 1.1759 (+0.5629) delegate -CacheManagerWillMigrate:
        PROFILE: 1.2516 (+0.0757) persistent store coordinator loaded
        PROFILE: 5.1436 (+3.8920) automatic lightweight migration completed
        PROFILE: 5.5435 (+0.3999) delegate -CacheManagerDidFinishMigration:withError:

    Manual inferred migration

        PROFILE: CacheManager -migrateStore
        PROFILE: 0.6660 (+0.6660) models loaded
        PROFILE: 1.1471 (+0.4811) inferred mapping model generated
        PROFILE: 1.4046 (+0.2574) delegate -CacheManagerWillMigrate:
        PROFILE: 1.5058 (+0.1013) persistent store coordinator loaded
        PROFILE: 22.6952 (+21.1894) manual migration completed
        PROFILE: 23.1478 (+0.4525) delegate -CacheManagerDidFinishMigration:withError:

    So, with an inferred model, the manual migration takes over 5 times longer than the automatic one! It's a big inconsistency, and the lightweight option that NSPersistentStoreCoordinator -addPersistentStoreWithType:configuration:URL:options:error: provides gives absolutely no indication of progress while processing. Can anybody provide a supported way to get migrationProgress values during automatic migration, OR a way to configure an inferred mapping model to be as fast during manual processing as it is automatically?

  • MySQL error in stored procedure

    - by devuser
    This stored procedure is meant to search through all tables and columns in a database.

        DELIMITER $$

        DROP PROCEDURE IF EXISTS get_table $$

        CREATE /*[DEFINER = { user | CURRENT_USER }]*/ PROCEDURE `auradoxdb`.`get_table`(in_search varchar(50))
        READS SQL DATA
        BEGIN
            DECLARE trunc_cmd VARCHAR(50);
            DECLARE search_string VARCHAR(250);
            DECLARE db,tbl,clmn CHAR(50);
            DECLARE done INT DEFAULT 0;
            DECLARE COUNTER INT;
            DECLARE table_cur CURSOR FOR
                SELECT concat(SELECT COUNT(*) INTO @CNT_VALUE FROM `’,table_schema,’`.`’,table_name,’` WHERE `’, column_name,’` REGEXP ”’,in_search,”’)
                       ,table_schema,table_name,column_name
                FROM information_schema.COLUMNS
                WHERE TABLE_SCHEMA NOT IN (‘information_schema’,'test’,'mysql’);
            DECLARE CONTINUE HANDLER FOR NOT FOUND SET done=1;

            #
            #Truncating table for refill the data for new search.
            PREPARE trunc_cmd FROM “TRUNCATE TABLE temp_details;”
            EXECUTE trunc_cmd ;

            OPEN table_cur;
            table_loop:LOOP
                FETCH table_cur INTO search_string,db,tbl,clmn;
                #
                #Executing the search
                SET @search_string = search_string;
                SELECT search_string;
                PREPARE search_string FROM @search_string;
                EXECUTE search_string;
                SET COUNTER = @CNT_VALUE;
                SELECT COUNTER;
                IF COUNTER>0 THEN
                    #
                    # Inserting required results from search to tablehhh
                    INSERT INTO temp_details VALUES(db,tbl,clmn);
                END IF;
                IF done=1 THEN
                    LEAVE table_loop;
                END IF;
            END LOOP;
            CLOSE table_cur;

            #
            #Finally Show Results
            SELECT * FROM temp_details;
        END $$

        DELIMITER ;

    The following error occurs when I execute this:

        Error Code : 1064
        You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version
        for the right syntax to use near 'SELECT COUNT(*) INTO @CNT_VALUE FROM `’,table_schema,’`.`’,table_name,’`' at line 12 (0 ms taken)

    Could anybody please help me solve this?
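
    Two likely contributors to the 1064, kept here as observations rather than a definitive fix: the literal fragments inside CONCAT() are not quoted as string literals (MySQL parses them as a bare subquery), and the ’ / ” characters are typographic quotes, which MySQL does not treat as quote marks. For illustration only, a hedged Python sketch of the same "search every column of every table" idea, which avoids building the statement inside the procedure (connection object and helper name are hypothetical):

        # Hedged sketch (Python, MySQL DB-API): count REGEXP matches for every column
        # of every user table, which is what the dynamic CONCAT(...) is trying to build.
        def search_all_columns(conn, pattern):
            cur = conn.cursor()
            cur.execute(
                "SELECT table_schema, table_name, column_name "
                "FROM information_schema.COLUMNS "
                "WHERE table_schema NOT IN ('information_schema', 'mysql', 'test')"
            )
            hits = []
            for schema, table, column in cur.fetchall():
                # Identifiers cannot be bound as parameters, so they are interpolated
                # (they come straight from information_schema); the pattern is bound.
                count_sql = (
                    "SELECT COUNT(*) FROM `{0}`.`{1}` WHERE `{2}` REGEXP %s"
                    .format(schema, table, column)
                )
                inner = conn.cursor()
                inner.execute(count_sql, (pattern,))
                if inner.fetchone()[0] > 0:
                    hits.append((schema, table, column))
            return hits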

  • With regards to urllib AttributeError: 'module' object has no attribute 'urlopen'

    - by Matt
    import re
        import string
        import shutil
        import os
        import os.path
        import time
        import datetime
        import math
        import urllib
        from array import array
        import random

        filehandle = urllib.urlopen('http://www.google.com/') #open webpage
        s = filehandle.read() #read
        print s #display

        #what i plan to do with it once i get the first part working
        #results = re.findall('[<td style="font-weight:bold;" nowrap>$][0-9][0-9][0-9][.][0-9][0-9][</td></tr></tfoot></table>]',s)
        #earnings = '$ '
        #for money in results:
        #earnings = earnings + money[1]+money[2]+money[3]+'.'+money[5]+money[6]
        #print earnings
        #raw_input()

    This is the code I have so far. I have looked at all the other forums that give solutions (such as checking the name of the script, which is parse_Money.py), I have tried doing it with urllib.request.urlopen, AND I have tried running it on Python 2.5, 2.6, and 2.7. If anybody has any suggestions it would be really welcome, thanks everyone!! --Matt

    ---EDIT--- I also tried this code and it worked, so I'm thinking it's some kind of syntax error; if anybody with a sharp eye can point it out, I would be very appreciative.

        import shutil
        import os
        import os.path
        import time
        import datetime
        import math
        import urllib
        from array import array
        import random

        b = 3

        #find URL
        URL = raw_input('Type the URL you would like to read from[Example: http://www.google.com/] :')

        while b == 3:
            #get file name
            file1 = raw_input('Enter a file name for the downloaded code:')
            filepath = file1 + '.txt'
            if os.path.isfile(filepath):
                print 'File already exists'
                b = 3
            else:
                print 'Filename accepted'
                b = 4

        file_path = filepath

        #open file
        FileWrite = open(file_path, 'a')

        #acces URL
        filehandle = urllib.urlopen(URL)

        #display souce code
        for lines in filehandle.readlines():
            FileWrite.write(lines)
            print lines

        print 'The above has been saved in both a text and html file'

        #close files
        filehandle.close()
        FileWrite.close()
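
    For reference, that particular AttributeError is typically what urllib.urlopen raises when the script runs under Python 3 (where the function moved to urllib.request), or when a local file named urllib.py shadows the standard module. A small hedged sketch of an import that works on both major versions:

        try:
            # Python 3
            from urllib.request import urlopen
        except ImportError:
            # Python 2
            from urllib import urlopen

        filehandle = urlopen('http://www.google.com/')
        print(filehandle.read())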

  • How to associate static entity instances in a Session without database retrieval

    - by Michael Hedgpeth
    I have a simple Result class that used to be an enum but has evolved into its own class with its own table.

        public class Result
        {
            public static readonly Result Passed = new Result(StatusType.Passed) { Id = [Predefined] };
            public static readonly Result NotRun = new Result(StatusType.NotRun) { Id = [Predefined] };
            public static readonly Result Running = new Result(StatusType.Running) { Id = [Predefined] };
        }

    Each of these predefined values has a row in the database at its predefined Guid Id. There is then a failed result that has an instance per failure:

        public class FailedResult : Result
        {
            public FailedResult(string description) : base(StatusType.Failed) { . . . }
        }

    I then have an entity that has a Result:

        public class Task
        {
            public Result Result { get; set; }
        }

    When I save a Task, if the Result is a predefined one, I want NHibernate to know that it doesn't need to save it to the database, nor does it need to fetch it from the database; I just want it to save by Id. The way I get around this is that when I am setting up the session, I call a method to load the static entities:

        protected override void OnSessionOpened(ISession session)
        {
            LockStaticResults(session, Result.Passed, Result.NotRun, Result.Running);
        }

        private static void LockStaticResults(ISession session, params Result[] results)
        {
            foreach (var result in results)
            {
                session.Load(result, result.Id);
            }
        }

    The problem with the session.Load method call is that it appears to be fetching from the database (something I don't want it to do). How could I make this so it does not hit the database, but trusts that my static (immutable) Result instances are both up to date and a part of the session?

  • Foreach loop returning null values in PHP?

    - by Jascha
    Hello, I have a pretty simple problem. Basically I have an array called $list that is a list of titles. If I do a print_r($list) I get these results:

        Array
        (
            [0] => Another New Title
            [1] => Awesome Movies and stuff
            [2] => Jascha's Title
        )

    Now, I'm running a foreach loop to retrieve their values and format them in an <ul> like so...

        function get_film_list(){
            global $categories;
            $list = $categories->get_film_list();
            if(count($list)==0){
                echo 'No films are in this category';
            }else{
                echo '<ul>';
                foreach($list as $title){
                    echo '<li>' . $title . '<li>';
                }
                echo '</ul>';
            }
        }

    The problem I'm having is my loop is returning two values per value (is it the key value?). The result of the preceding function looks like this:

        Another New Title
        &nbsp;
        Awesome Movies and stuff
        &nbsp;
        Jascha's Title
        &nbsp;

    I even tried:

        foreach($list as $key => $title){
            echo '<li>' . $title . '<li>';
        }

    With the same results:

        Another New Title
        &nbsp;
        Awesome Movies and stuff
        &nbsp;
        Jascha's Title
        &nbsp;

    What am I missing here? Thanks in advance.

  • Which is better? OpenCyc or ConceptNet?

    - by Daniel Loureiro
    Hi, I'm doing a NLP project where I need to recognise concepts in sentences to find other similar concepts. I do this to infer word valences from a list I already have. I started using WordNet, but it gave many contradictory results. By contradictory results I mean word expansions that had contradictory valences. So now I'm looking into ConceptNet and OpenCyc. I've already implemented ConceptNet and it was all very easy and I love it. Problem is that OpenCyc appears to have a much larger and more logically rigid database, which is important when I found so many "contradictions" on WordNet... But I wouldn't know because I haven't tried it. Could someone tell me if it's worth going through the (considerable, for me) effort to implement OpenCyc, or is ConceptNet good enough to infer word valences? Are they that different? I'll be happy to explain myself further, if needed. Trying to keep it short for now! Thanks!

  • How to get height for NSAttributedString at a fixed width

    - by bonaldi
    I want to do some drawing of NSAttributedStrings in fixed-width boxes, but am having trouble calculating the right height they'll take up when drawn. So far, I've tried:

    - Calling - (NSSize) size, but the results are useless (for this purpose), as they'll give whatever width the string desires.
    - Calling - (void)drawWithRect:(NSRect)rect options:(NSStringDrawingOptions)options with a rect shaped to the width I want and NSStringDrawingUsesLineFragmentOrigin in the options, exactly as I'm using in my drawing. The results are ... difficult to understand; certainly not what I'm looking for. (As is pointed out in a number of places, including this Cocoa-Dev thread).
    - Creating a temporary NSTextView and doing:

        [[tmpView textStorage] setAttributedString:aString];
        [tmpView setHorizontallyResizable:NO];
        [tmpView sizeToFit];

      When I query the frame of tmpView, the width is still as desired, and the height is often correct ... until I get to longer strings, when it's often half the size that's required. (There doesn't seem to be a max size being hit: one frame will be 273.0 high (about 300 too short), the other will be 478.0 (only 60-ish too short)).

    I'd appreciate any pointers, if anyone else has managed this.

  • Design problem with callback functions in android

    - by Franz Xaver
    Hi folks! I'm currently developing an app in Android that accesses WiFi values; that is, the application needs to scan for all access points and their specific signal strengths. I know that I have to extend the class BroadcastReceiver, overriding the method BroadcastReceiver.onReceive(Context context, Intent intent), which is called when the values are ready. Perhaps there are solutions provided by the Android system itself, but I'm relatively new to Android so I could use some help. The problem I encountered is that I have one class (an activity, thus controlled by the user) that needs these scan results for two different things (either to save the values in a database or to use them for further calculations, but not both at the same moment!). So how should I design the callback system in order to "transport" the scan results from onReceive(Context context, Intent intent) to the operation intended by the user? My first solution was to define enums for each use case (save, or use for calculations) which WLAN-interested classes have to submit when querying for the values. But that would force the BroadcastReceiver-extending class to save the current enum and use it as a parameter in the callback function of the querying class (this querying class needs to know what it asked for when getting called back). That seems kind of dirty to me ;) So, does anyone have a good idea for this?

  • Jena Effects of Different Entailment Regimes

    - by blueomega
    I am trying out SPARQL and the use of entailment. As an example I used http://www.w3.org/TR/2010/WD-sparql11-entailment-20100126/#t112 and I tried to put it into Jena:

        OntClass book1 = model.createClass(NS+"book1");
        OntClass book2 = model.createClass(NS+"book2");
        OntClass book3 = model.createClass(NS+"book3");
        OntClass publication = model.createClass(NS+"publication");
        OntClass article = model.createClass(NS+"article");
        OntClass mit = model.createClass(NS+"MIT");

        ObjectProperty a = model.createObjectProperty(NS+"a");
        ObjectProperty publishes = model.createObjectProperty(NS+"publishes");

        book1.addProperty(a, publication);
        book2.addProperty(a, article);
        publication.addSubClass(article);
        publishes.addRange(publication);
        mit.addProperty(publishes, book3);

    where model is of type OntModel, and I used a query similar to the one in the problem:

        "PREFIX table: "I have correct namespace here"+
        "SELECT *"+
        "WHERE"+
        "{"+
        " ?x ?y table:publication ."+
        "}";

    The model was created like this (I hope the OntModelSpec is OK):

        OntModel m = ModelFactory.createOntologyModel(OntModelSpec.OWL_MEM_RDFS_INF, null);

    I get the following results from the query (columns x and y):

        http://www.example.com/ontologies/sample.owl#publishes   | rdfs:range
        http://www.example.com/ontologies/sample.owl#article     | rdfs:subClassOf
        http://www.example.com/ontologies/sample.owl#book1       | http://www.example.com/ontologies/sample.owl#a
        http://www.example.com/ontologies/sample.owl#publication | rdfs:subClassOf
        http://www.example.com/ontologies/sample.owl#book3       | rdf:type

    Can anyone give me an example, with and without entailment, so I can try the code and get the results right?

  • Allowing Google to bypass CAPTCHA verification - sensible or not?

    - by edanfalls
    My web site has a database lookup; filling out a CAPTCHA gives you 5 minutes of lookup time. There is also some custom code to detect any automated scripts. I do this as I don't want someone data mining my site. The problem is that Google does not see the lookup results when it crawls my site. If someone is searching for a string that is present in the result of a lookup, I would like them to find this page by Googling it. The obvious solution to me is to use the PHP variable $_SERVER['HTTP_USER_AGENT'] to bypass the CAPTCHA and custom security code for the Google bots. My question is whether this is sensible or not. People could then use Google's cache to view the lookup results without having to fill out the CAPTCHA, but would Google's own script detection methods prevent them from data mining these pages? Or would there be some way for people to make $_SERVER['HTTP_USER_AGENT'] appear as Google to bypass the security measures? Thanks in advance.
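
    One note on the user-agent approach: the header alone is trivially spoofable, and Google's documented recommendation for verifying Googlebot is a reverse DNS lookup of the requesting IP followed by a forward lookup that must resolve back to the same IP. A hedged Python sketch of that check (the PHP equivalent would use gethostbyaddr()/gethostbyname(); function name here is hypothetical):

        import socket

        def is_genuine_googlebot(ip):
            """Reverse-DNS the IP, require a googlebot.com/google.com host, then forward-confirm."""
            try:
                host = socket.gethostbyaddr(ip)[0]
            except (socket.herror, socket.gaierror):
                return False
            if not host.endswith((".googlebot.com", ".google.com")):
                return False
            # Forward-confirm: the claimed hostname must resolve back to the same IP.
            try:
                return ip in socket.gethostbyname_ex(host)[2]
            except socket.gaierror:
                return False

        # Example gate: only skip the CAPTCHA when the UA claims Googlebot AND DNS confirms it.
        # if "Googlebot" in user_agent and is_genuine_googlebot(remote_addr): skip_captcha()

    Keep in mind that serving Google different content than regular visitors (cloaking) can carry its own search-ranking risk.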

  • Opening a Unicode file with Perl

    - by Jaco Pretorius
    I'm using osql to run several SQL scripts against a database, and then I need to look at the results file to check if any errors occurred. The problem is that Perl doesn't seem to like the fact that the results files are Unicode. I wrote a little test script and the output comes out all garbled.

        $file = shift;
        open OUTPUT, $file or die "Can't open $file: $!\n";
        while (<OUTPUT>) {
            print $_;
            if (/Invalid|invalid|Cannot|cannot/) {
                push(@invalids, $file);
                print "invalid file - $inputfile - schedule for retry\n";
                last;
            }
        }

    Any ideas? I've tried decoding using decode_utf8 but it makes no difference. I've also tried to set the encoding when opening the file. I think the problem might be that osql puts the result file in UTF-16 format, but I'm not sure. When I open the file in TextPad it just tells me 'Unicode'.

    Edit: Using perl v5.8.8
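
    If the files are indeed UTF-16 (as the question suspects), the general fix in any language is to declare that encoding when opening the file; in Perl that would normally be an I/O layer on open, e.g. '<:encoding(UTF-16)'. For illustration only, a hedged Python sketch of the same scan with the encoding declared up front (paths and error words taken from the question):

        import re

        def find_invalids(paths):
            """Scan osql output files, assumed UTF-16 with a BOM, for error text."""
            invalids = []
            for path in paths:
                # 'utf-16' reads the BOM and picks the right byte order automatically.
                with open(path, encoding="utf-16") as fh:
                    for line in fh:
                        if re.search(r"invalid|cannot", line, re.IGNORECASE):
                            invalids.append(path)
                            break
            return invalids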

  • Foosball result prediction

    - by Wolf
    In our office, we regularly enjoy some rounds of foosball / table football after work. I have put together a small Java program that generates random 2vs2 lineups from the available players and stores the match results in a database afterwards. The current prediction of the outcome uses a simple average of all previous match results from the 4 involved players. This gives a very rough estimation, but I'd like to replace it with something more sophisticated, taking into account things like:

    - players may be good playing as attacker but bad as defender (or vice versa)
    - players do well against a specific opponent / bad against others
    - some teams work well together, others don't
    - skills change over time

    What would be the best algorithm to predict the game outcome as accurately as possible? Someone suggested using a neural network for this, which sounds quite interesting... but I do not have enough knowledge on the topic to say if that could work, and I also suspect it might take too many games to be reasonably trained.

    EDIT: Had to take a longer break from this due to some project deadlines. To make the question more specific: given the following MySQL table containing all matches played so far:

        table match_result
            match_id      int, pk
            match_start   datetime
            duration      int (match length in seconds)
            blue_defense  int, fk to table player
            blue_attack   int, fk to table player
            red_defense   int, fk to table player
            red_attack    int, fk to table player
            score_blue    int
            score_red     int

    How would you write a function predictResult(blueDef, blueAtk, redDef, redAtk) {...} to estimate the outcome as closely as possible, executing any SQL, doing calculations or using external libraries?
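
    One comparatively simple option (a hedged sketch, not a definitive answer) is an Elo-style rating per player and per position: each player gets an attack and a defense rating, the team rating is their average, the expected result comes from a logistic curve, and ratings are nudged after every match. That naturally tracks skill drift and needs far fewer games than a neural network. Illustrative Python with hypothetical names:

        K = 32                      # update step; higher = adapts faster to recent games
        ratings = {}                # (player_id, "attack"/"defense") -> rating

        def rating(player, role):
            return ratings.setdefault((player, role), 1500.0)

        def team_rating(defense, attack):
            return (rating(defense, "defense") + rating(attack, "attack")) / 2.0

        def expected_blue(blue_def, blue_atk, red_def, red_atk):
            """Probability that blue wins, from the rating difference (logistic curve)."""
            diff = team_rating(blue_def, blue_atk) - team_rating(red_def, red_atk)
            return 1.0 / (1.0 + 10.0 ** (-diff / 400.0))

        def update(blue_def, blue_atk, red_def, red_atk, score_blue, score_red):
            """Nudge all four positional ratings toward the observed outcome."""
            expected = expected_blue(blue_def, blue_atk, red_def, red_atk)
            actual = 1.0 if score_blue > score_red else 0.0 if score_blue < score_red else 0.5
            delta = K * (actual - expected)
            for player, role, sign in [(blue_def, "defense", +1), (blue_atk, "attack", +1),
                                       (red_def, "defense", -1), (red_atk, "attack", -1)]:
                ratings[(player, role)] += sign * delta

        # predictResult equivalent: replay the match_result rows in chronological order
        # through update(), then call expected_blue() for the lineup you want to predict.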

  • How to display my server's current response time to an average user

    - by Jason
    Sorry, I'm not really sure of the right way to ask this one, so bear with me... We have a web application that runs on a set of servers at a data center (not in our offices). We want to be able to somehow 'advertise' to our clients/users that the availability or response time of our servers has met a standard throughout the day. I am being asked to come up with a standard metric that we can easily advertise on our login screen showing the current "standard response time", checked every x minutes. My thinking is that I need to capture something like the results of a traceroute from a server (either in our office, Amazon, etc.) to one of the data center servers and come up with a red/yellow/green type of notifier for the login screen, to let the user know that our tests are responding normally and that, if they are having delay issues, it could be their network or connection to the internet. We have lots of clients in rural areas that have poor connectivity, and we are trying to let them know that any slowness might be on their end, not ours. I've got the LAMP stack to work with, but this could also be some other system altogether as long as it can update the main server with the results. I already have pingdom reports that are available, but that's a bit more than people want to read sometimes. Any ideas on what I can do?
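
    A hedged sketch of the usual shape of this: a small checker running on a machine outside the data center (office box, cheap cloud instance) times a request to the app every few minutes, classifies it, and writes a tiny status document that the login page reads. URL, thresholds, and file name below are all hypothetical:

        import json, time, urllib.request

        URL = "https://app.example.com/health"       # hypothetical lightweight endpoint
        THRESHOLDS = (0.5, 2.0)                      # seconds: < 0.5 green, < 2.0 yellow, else red

        def check_once():
            start = time.time()
            try:
                urllib.request.urlopen(URL, timeout=10).read()
                elapsed = time.time() - start
                status = "green" if elapsed < THRESHOLDS[0] else "yellow" if elapsed < THRESHOLDS[1] else "red"
            except Exception:
                elapsed, status = None, "red"
            return {"checked_at": int(time.time()), "seconds": elapsed, "status": status}

        while True:
            # The login page just reads this file (or the same JSON pushed to the web server).
            with open("server_status.json", "w") as fh:
                json.dump(check_once(), fh)
            time.sleep(300)                          # every 5 minutes

    Measuring from outside the data center is what makes the "it's your connection, not us" message credible, since the check sees roughly what an external client sees.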

  • XPATH Query: How to get two elements?

    - by Damiano
    Hello. My HTML code is:

        <table>
          <tr>
            <td class="data1"><p>1</td></td>
            <td class="data1"><p>2</td></td>
            <td class="data1"><p>3</td></td>
            <td class="data1"><p>4</td></td>
          </tr>
          <tr>
            <td class="data1"><p>5</td></td>
            <td class="data1"><p>6</td></td>
            <td class="data1"><p>7</td></td>
            <td class="data1"><p>8</td></td>
          </tr>
        </table>

    My query is:

        xpath='//tr//td[@class="data1"][4]/p'

    The result is:

        <p>4</p>
        <p>8</p>

    That result is correct! But what if I want to get, for example:

        <p>3</p>
        <p>4</p>
        <p>7</p>
        <p>8</p>

    i.e. both [3]/p and [4]/p. How do I get these two elements for each <tr>? Thank you so much!
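
    A hedged sketch of one way: a position() predicate can match several cells per row with a single expression. Shown here through Python's lxml purely for illustration (the sample markup below uses well-formed </p></td> closings):

        from lxml import html

        doc = html.fromstring("""
        <table>
          <tr><td class="data1"><p>1</p></td><td class="data1"><p>2</p></td>
              <td class="data1"><p>3</p></td><td class="data1"><p>4</p></td></tr>
          <tr><td class="data1"><p>5</p></td><td class="data1"><p>6</p></td>
              <td class="data1"><p>7</p></td><td class="data1"><p>8</p></td></tr>
        </table>""")

        # position() lets one predicate match the 3rd and 4th cell of every row.
        nodes = doc.xpath('//tr/td[@class="data1"][position()=3 or position()=4]/p')
        print([p.text for p in nodes])   # ['3', '4', '7', '8']

        # Equivalent shorthand: //tr/td[@class="data1"][position() > 2]/p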

  • why doesn't winmain set the errorlevel?

    - by Brian R. Bondy
    int APIENTRY _tWinMain(HINSTANCE hInstance,
                               HINSTANCE hPrevInstance,
                               LPTSTR    lpCmdLine,
                               int       nCmdShow)
        {
            MessageBox(NULL, _T("This should return 90 no?"), _T("OK"), MB_OK);
            return 90;
        }

    Why does the above program correctly display the message box, but not set the error level? I compile this program to the name a.exe. Then from the command prompt I type:

        c:\> a.exe
        (message box is displayed, I press OK)
        c:\> echo %ERRORLEVEL%
        0

    I get the same results if I do exit(90); right before the return. It still says 0. I also tried to start the program via CreateProcess and obtain the result with GetExitCodeProcess, but it also returns 0 to me. I did error checking to ensure it was all started correctly. I originally saw this problem in a more complex program, but made this simple program to verify it. The results are the same: both programs that have WinMain always return 0. I tried x64 and x86, and both Unicode and MBCS compile options. All give 0 as the error level/status code.

  • Why does Hibernate 2nd level cache only cache within a session?

    - by Synesso
    Using a named query in our application and with ehcache as the provider, it seems that the query results are tied to the session. Any attempt to access the value from the cache for a second time results in a LazyInitializationException. We have set lazy=true for the following mapping because this object is also used by another part of the system which does not require the reference... and we want to keep it lean.

        <class name="domain.ReferenceAdPoint" table="ad_point" mutable="false" lazy="false">
            <cache usage="read-only"/>
            <id name="code" type="long" column="ad_point_id">
                <generator class="assigned" />
            </id>
            <property name="name" column="ad_point_description" type="string"/>
            <set name="synonyms" table="ad_point_synonym" cascade="all-delete-orphan" lazy="true">
                <cache usage="read-only"/>
                <key column="ad_point_id" />
                <element type="string" column="synonym_description" />
            </set>
        </class>

        <query name="find.adpoints.by.heading">from ReferenceAdPoint adpoint left outer join fetch adpoint.synonyms where adpoint.adPointField.headingCode = ?</query>

    Here's a snippet from our hibernate.cfg.xml:

        <property name="hibernate.cache.provider_class">net.sf.ehcache.hibernate.SingletonEhCacheProvider</property>
        <property name="hibernate.cache.use_query_cache">true</property>

    It doesn't seem to make sense that the cache would be constrained to the session. Why are the cached queries not usable outside of the (relatively short-lived) sessions?

  • How can I improve this search usability?

    - by Craig Whitley
    This is my first real programming attempt, and there are some major flaws. It's a learning project, and I'm currently re-writing the entire thing as my PHP is really messy. I really want to get an idea of how I can improve the actual usability and accessibility of the site at the same time, though - so I know how to implement it correctly. The website is basically a comparison website for gameserver hosting. As I mentioned, it's a learning project and I don't actually expect any revenue from it. At the moment there's only test data in it, so in the game input box select either 'Battlefield Bad Company 2' or 'Call of Duty 4: Modern Warfare' and ignore the actual search results. http://www.laglessfrag.com I wasn't really sure how to work the search functionality. Basically, when you click a game in the drop-down box, it sends an ajax request and finds all the locations available for that specific game in the database. After selecting the country there's another ajax call to find all the cities available for the game in that country - which gives me the two unique identifiers I need to create the search results. One major and fundamental flaw is that without javascript enabled, the site ceases to function. I'll overcome that in the next re-write, but without the ajax functionality stopping the user from 'going wrong', how can I implement a search that requires two fields without creating extra steps in new pages after form submissions? I'm also no designer, so my whole layout and css is a bit rubbish, but this was mainly a learning project as I'm interested in applications / programming rather than design. It's also slow as it's on shared hosting, but if I can get it to work correctly then I'm not opposed to chucking a bit of money at it for faster hosting and maybe a bit of advertising and seeing where it goes (if anywhere!). Any info appreciated.

  • Optimal two variable linear regression SQL statement (censoring outliers)

    - by Dave Jarvis
    Problem

    I am looking to apply the y = mx + b equation (where m is SLOPE, b is INTERCEPT) to a data set, which is retrieved as shown in the SQL code below. The values from the (MySQL) query are:

        SLOPE     = 0.0276653965651912
        INTERCEPT = -57.2338357550468

    SQL Code

        SELECT
          ((sum(t.YEAR) * sum(t.AMOUNT)) - (count(1) * sum(t.YEAR * t.AMOUNT))) /
          (power(sum(t.YEAR), 2) - count(1) * sum(power(t.YEAR, 2))) as SLOPE,
          ((sum( t.YEAR ) * sum( t.YEAR * t.AMOUNT )) - (sum( t.AMOUNT ) * sum(power(t.YEAR, 2)))) /
          (power(sum(t.YEAR), 2) - count(1) * sum(power(t.YEAR, 2))) as INTERCEPT
        FROM (
          SELECT D.AMOUNT, Y.YEAR
          FROM CITY C, STATION S, YEAR_REF Y, MONTH_REF M, DAILY D
          WHERE
            -- For a specific city ... --
            C.ID = 8590 AND
            -- Find all the stations within a 15 unit radius ... --
            SQRT( POW( C.LATITUDE - S.LATITUDE, 2 ) + POW( C.LONGITUDE - S.LONGITUDE, 2 ) ) < 15 AND
            -- Gather all known years for that station ... --
            S.STATION_DISTRICT_ID = Y.STATION_DISTRICT_ID AND
            -- The data before 1900 is shaky; insufficient after 2009. --
            Y.YEAR BETWEEN 1900 AND 2009 AND
            -- Filtered by all known months ... --
            M.YEAR_REF_ID = Y.ID AND
            -- Whittled down by category ... --
            M.CATEGORY_ID = '001' AND
            -- Into the valid daily climate data. --
            M.ID = D.MONTH_REF_ID AND
            D.DAILY_FLAG_ID <> 'M'
          GROUP BY Y.YEAR
          ORDER BY Y.YEAR
        ) t

    Data

    The data is visualized in a chart (with five outliers highlighted).

    Questions

    1. How do I return the y value against all rows without repeating the same query to collect and collate the data? That is, how do I "reuse" the list of t values?
    2. How would you change the query to eliminate outliers (at an 85% confidence interval)?
    3. The following results (to calculate the start and end points of the line) appear incorrect. Why are the results off by ~10 degrees (e.g., outliers skewing the data)?

           (1900 * 0.0276653965651912) + (-57.2338357550468) = -4.66958228
           (2009 * 0.0276653965651912) + (-57.2338357550468) = -1.65405406

       I would have expected the 1900 result to be around 10 (not -4.67) and the 2009 result to be around 11.50 (not -1.65).

    Thank you!
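
    On the "reuse the list of t values" point, one SQL-side option is to materialize the inner query's rows once (temporary table or derived table) so both the regression aggregates and the per-row y = mx + b can read from it. For illustration only, a hedged Python sketch that fits the same formula client-side and crudely trims outliers by residual (the keep=0.85 fraction is a rough stand-in for the confidence-interval idea, not a real 85% interval):

        def fit_line(points):
            """Least-squares slope/intercept for (x, y) pairs - same formula as the SQL."""
            n = len(points)
            sx = sum(x for x, _ in points)
            sy = sum(y for _, y in points)
            sxy = sum(x * y for x, y in points)
            sxx = sum(x * x for x, _ in points)
            slope = (sx * sy - n * sxy) / (sx * sx - n * sxx)
            intercept = (sx * sxy - sy * sxx) / (sx * sx - n * sxx)
            return slope, intercept

        def fit_without_outliers(points, keep=0.85):
            """Fit, drop the points with the largest residuals, then refit."""
            m, b = fit_line(points)
            ranked = sorted(points, key=lambda p: abs(p[1] - (m * p[0] + b)))
            trimmed = ranked[: max(2, int(len(ranked) * keep))]
            return fit_line(trimmed)

        # points = [(year, amount), ...] fetched once from the inner query;
        # m, b = fit_without_outliers(points)
        # predicted = [(x, m * x + b) for x, _ in points]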

  • P values in wilcox.test gone mad :(

    - by Error404
    I have some code that isn't doing what it should. I am testing the p-value of a wilcox.test for a huge set of data. The code I am using is the following:

        library(MASS)

        data1 <- read.csv("file1path.csv",header=T,sep=",")
        data2 <- read.csv("file2path.csv",header=T,sep=",")
        data3 <- read.csv("file3path.csv",header=T,sep=",")
        data4 <- read.csv("file4path.csv",header=T,sep=",")

        data1$K <- with(data1,{"N"})
        data2$K <- with(data2,{"E"})
        data3$K <- with(data3,{"M"})
        data4$K <- with(data4,{"U"})

        new=rbind(data1,data2,data3,data4)

        i=3
        for (o in 1:4800){
            x1 <- data1[,i]
            x2 <- data2[,i]
            x3 <- data3[,i]
            x4 <- data4[,i]

            wt12 <- wilcox.test(x1,x2, na.omit=TRUE)
            wt13 <- wilcox.test(x1,x3, na.omit=TRUE)
            wt14 <- wilcox.test(x1,x4, na.omit=TRUE)

            if (wt12$p.value=="NaN"){
                print("This is wrong")
            } else if (wt12$p.value < 0.05){
                print(wt12$p.value)
                mypath=file.path("C:", "all1-less-05", (paste("graph-data1-data2",names(data1[i]), ".pdf", sep="-")))
                pdf(file=mypath)
                mytitle = paste("graph",names(data1[i]))
                boxplot(new[,i] ~ new$K, main = mytitle, names.arg=c("data1","data2","data3","data4"))
                dev.off()
            }

            if (wt13$p.value=="NaN"){
                print("This is wrong")
            } else if (wt13$p.value < 0.05){
                print(wt13$p.value)
                mypath=file.path("C:", "all2-less-05", (paste("graph-data1-data3",names(data1[i]), ".pdf", sep="-")))
                pdf(file=mypath)
                mytitle = paste("graph",names(data1[i]))
                boxplot(new[,i] ~ new$K, main = mytitle, names.arg=c("data1","data2","data3","data4"))
                dev.off()
            }

            if (wt14$p.value=="NaN"){
                print("This is wrong")
            } else if (wt14$p.value < 0.05){
                print(wt14$p.value)
                mypath=file.path("C:", "all3-less-05", (paste("graph-data1-data4",names(data1[i]), ".pdf", sep="-")))
                pdf(file=mypath)
                mytitle = paste("graph",names(data1[i]))
                boxplot(new[,i] ~ new$K, main = mytitle, names.arg=c("data1","data2","data3","data4"))
                dev.off()
            }

            i=i+1
        }

    I am having two problems with this long command:

    1- Without specifying a certain p-value, the code gives me around 14,000 graphs; when specifying a p-value less than 0.05, the number of graphs generated goes down to about 9,000. THE FIRST PROBLEM IS: some p-values are more than 0.05 and are still showing up!

    2- I designed the program to give me a result of "This is wrong" when the value of P is "NaN", but I am still getting results of "NaN".

    Here's a screenshot of the results. Do you know what mistake I made with the command to get these errors? Thanks in advance.
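
    One observation, hedged: in R the NaN check would normally be is.nan(wt12$p.value) (or is.na(...)) rather than a comparison against the string "NaN", and NAs are usually stripped from the samples before the test rather than via na.omit=TRUE as a wilcox.test argument. For illustration only, here is the same screening idea in Python with an explicit NaN check and a strict p < 0.05 gate, using scipy's Mann-Whitney U test (the two-sample equivalent of wilcox.test) and assuming the data sit in hypothetical pandas DataFrames:

        import math
        from scipy.stats import mannwhitneyu   # two-sample rank test, like wilcox.test(x, y)

        def screen_columns(data1, data2, alpha=0.05):
            """Yield column indices whose p-value is a real number strictly below alpha."""
            for i in range(2, data1.shape[1]):           # hypothetical: columns from the 3rd on
                x1 = data1.iloc[:, i].dropna()           # drop NAs before testing
                x2 = data2.iloc[:, i].dropna()
                if len(x1) == 0 or len(x2) == 0:
                    continue                              # nothing to test in this column
                p = mannwhitneyu(x1, x2, alternative="two-sided").pvalue
                if math.isnan(p):                         # explicit NaN check, not a string compare
                    continue
                if p < alpha:                             # strict gate: only p < alpha passes
                    yield i, p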

  • BitShifting with BigIntegers in Java

    - by ThePinkPoo
    I am implementing DES encryption in Java with use of BigIntegers. I am left-shifting binary keys with the BigInteger.shiftLeft(int n) method. Key N (Kn) is dependent on the result of the shift of Kn-1. The problem is that when I print out the results after each key is generated, the shifting does not give the expected output. The key is split into two halves, Cn and Dn (left and right respectively). I am specifically attempting this: "To do a left shift, move each bit one place to the left, except for the first bit, which is cycled to the end of the block." It seems to tack 0s onto the end depending on the shift. I am not sure how to go about correcting this. Results:

        c0:  11110101010100110011000011110
        d0:  11110001111001100110101010100
        c1:  111101010101001100110000111100
        d1:  111100011110011001101010101000
        c2:  11110101010100110011000011110000
        d2:  11110001111001100110101010100000
        c3:  1111010101010011001100001111000000
        d3:  1111000111100110011010101010000000
        c4:  111101010101001100110000111100000000
        d4:  111100011110011001101010101000000000
        c5:  11110101010100110011000011110000000000
        d5:  11110001111001100110101010100000000000
        c6:  1111010101010011001100001111000000000000
        d6:  1111000111100110011010101010000000000000
        c7:  111101010101001100110000111100000000000000
        d7:  111100011110011001101010101000000000000000
        c8:  1111010101010011001100001111000000000000000
        d8:  1111000111100110011010101010000000000000000
        c9:  111101010101001100110000111100000000000000000
        d9:  111100011110011001101010101000000000000000000
        c10: 11110101010100110011000011110000000000000000000
        d10: 11110001111001100110101010100000000000000000000
        c11: 1111010101010011001100001111000000000000000000000
        d11: 1111000111100110011010101010000000000000000000000
        c12: 111101010101001100110000111100000000000000000000000
        d12: 111100011110011001101010101000000000000000000000000
        c13: 11110101010100110011000011110000000000000000000000000
        d13: 11110001111001100110101010100000000000000000000000000
        c14: 1111010101010011001100001111000000000000000000000000000
        d14: 1111000111100110011010101010000000000000000000000000000
        c15: 11110101010100110011000011110000000000000000000000000000
        d15: 11110001111001100110101010100000000000000000000000000000
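
    A plain left shift only pads zeros; the DES key schedule needs a circular shift within each 28-bit half, i.e. the bits that overflow on the left wrap around to the end and the result is masked back to 28 bits. A hedged sketch of the idea in Python (the C0/D0 values below are hypothetical 28-bit halves, not the ones from the question):

        HALF_BITS = 28
        MASK = (1 << HALF_BITS) - 1            # keep only the low 28 bits

        def rotate_left_28(half, n):
            """Circular left shift of a 28-bit DES half-key: overflow bits wrap to the end."""
            half &= MASK
            return ((half << n) | (half >> (HALF_BITS - n))) & MASK

        # DES per-round shift schedule; each Cn/Dn is the previous half rotated, not zero-padded.
        SHIFTS = [1, 1, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 1]

        c = int("1111010101010011001100001111", 2)   # hypothetical 28-bit C0
        d = int("1111000111100110011010101010", 2)   # hypothetical 28-bit D0
        for n in SHIFTS:
            c, d = rotate_left_28(c, n), rotate_left_28(d, n)
            # C1..C16 / D1..D16 now stay 28 bits wide instead of growing with zeros

    With BigInteger the same expression looks roughly like key.shiftLeft(n).or(key.shiftRight(28 - n)).and(mask), where mask is BigInteger.ONE.shiftLeft(28).subtract(BigInteger.ONE).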
