Search Results

Search found 25852 results on 1035 pages for 'linq query syntax'.

Page 629/1035

  • high cpu in IIS

    - by Miki Watts
    Hi all. I'm developing a POS application that has a local database on each POS computer, and communicates with the server using WCF hosted in IIS. The application has been deployed at several customers for over a year now. About a week ago, we started getting reports from one of our customers that the server that IIS is hosted on is very slow. When I checked the issue, I saw the application pool with my process rocket to almost 100% CPU on an 8-CPU server. I've checked the SQL Activity Monitor and network volume, and they showed no significant overload beyond what we usually see. When checking the threads in Process Explorer, I saw lots of threads repeatedly calling CreateApplicationContext. I've tried installing .NET 2.0 SP1, according to some posts I found on the net, but it didn't solve the problem; it only replaced the function calls with CLRCreateManagedInstance. I'm about to capture a dump of the IIS processes using adplus and windbg and try to figure out what's wrong. Has anyone encountered something like this, or has an idea which direction I should check? P.S. The same version of the application is deployed at another customer, and there it works just fine. I also tried rolling back versions (even very old versions) and it still behaves exactly the same. Edit: well, problem solved. It turns out I had a SQL query in there that didn't limit the result set, and when the customer went over a certain number of rows, it started bogging down the server. It took me two days to find it, because of all the surrounding noise in the logs, but I waited for the night and took a dump then, which immediately showed me the query.
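
    For reference, a minimal sketch of the kind of fix involved: bounding the result set on the SQL Server side (the table and column names below are made up for illustration).

        -- before: SELECT * FROM OrderLines WHERE StoreId = @storeId
        -- after: return only the rows the POS screen actually needs
        SELECT TOP (500) OrderLineId, ProductId, Quantity, Price
        FROM OrderLines
        WHERE StoreId = @storeId
        ORDER BY OrderLineId DESC;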

    Read the article

  • SSIS Transaction with Sql Transaction

    - by Mike
    I started with a package to make sure transactions are working correctly. The package-level transaction is set to Required. I have two Execute SQL Tasks: one deletes rows from a table and one does 1/0 to throw an error. Both tasks are set to the Supported transaction level and the Serializable IsolationLevel. That works. Now when I replace my two SQL tasks with two separate procedure calls, the first one, ChargeInterest, runs successfully but the second one, PaymentProcess, always fails, saying: [Execute SQL Task] Error: Executing the query "Exec [proc_xx_NotesReceivable_PaymentProcess] ..." failed with the following error: "Uncommittable transaction is detected at the end of the batch. The transaction is rolled back.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly. PaymentProcess is the second stored procedure. Both procedures have their own BEGIN, COMMIT and ROLLBACK inside the SP. I believe that the transactions are being handled successfully in ChargeInterest, because I can run the following without issues or the dreaded "you started with 0 and now have 1" transaction error: EXEC [proc_XX_NotesReceivable_ChargeInterest] 'NR', 'M', 186, 300 EXEC [proc_XX_NotesReceivable_PaymentProcess] 'NR', 186, 300 --OR GO BEGIN TRAN EXEC [proc_XX_NotesReceivable_ChargeInterest] 'NR', 'M', 186, 300 EXEC [proc_XX_NotesReceivable_PaymentProcess] 'NR', 186, 300 ROLLBACK TRAN I have also noticed that DTC does get kicked off in both instances; why, I am not sure, because it is using the same connection. In the live example I can see the transaction get started, but it disappears if I put a breakpoint on the PreExecute event of the second stored procedure. What is the correct way to mingle SP transactions with SSIS transactions?
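
    One pattern that usually avoids the "uncommittable transaction" error is to make each procedure transaction-aware instead of unconditionally issuing its own BEGIN/COMMIT/ROLLBACK; a hedged T-SQL sketch (the procedure name and body are placeholders):

        CREATE PROCEDURE proc_Example
        AS
        BEGIN
            DECLARE @startedTran bit;
            SET @startedTran = 0;
            IF @@TRANCOUNT = 0
            BEGIN
                BEGIN TRANSACTION;
                SET @startedTran = 1;
            END
            BEGIN TRY
                -- ... the procedure's real work goes here ...
                IF @startedTran = 1 COMMIT TRANSACTION;
            END TRY
            BEGIN CATCH
                -- only roll back the transaction this procedure itself started
                IF @startedTran = 1 AND XACT_STATE() <> 0 ROLLBACK TRANSACTION;
                -- re-raise so SSIS still sees the failure
                DECLARE @msg nvarchar(2048);
                SET @msg = ERROR_MESSAGE();
                RAISERROR(@msg, 16, 1);
            END CATCH
        END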

    Read the article

  • FTP "PUT" fails from Virtual Machine, but not host PC: 504 Command not implemented for that parameter

    - by BrianH
    I have an FTP script I'm using to automate a file transfer. The transfer works fine on my PC (XP SP2), but when I try to run it on a VM on my PC (XP SP2), the "put" command returns: 504 Command not implemented for that parameter. FTP file: open [ftp site] [username] [password] cd [directory on FTP server] binary hash put ..\[subfolder1]\[Subfolder2]\[subfolder3]\[filename] bye The FTP site/server is around the world, and not under my control. From what I understand of a 504, that means the command should NEVER work, but since the same script DOES work on my PC (hosting the VM), that eliminates syntax, file naming, etc. The put command, when triggered from the VM, actually creates a 0-length file on the target FTP server, but doesn't populate the file.

    Read the article

  • Get the equivalent time between "dynamic" time zones

    - by doctore
    I have a table providers that has three columns (it contains more columns, but they are not important in this case): starttime, the time from which you can contact the provider; endtime, the final hour at which you can contact the provider; and region_id, the region where the provider resides. In the USA: California, Texas, etc. In the UK: England, Scotland, etc. starttime and endtime are time without time zone columns, but, indirectly, their values are in the time zone of the region in which the provider resides. For example:

        starttime | endtime  | region_id (time zone of region) | "real" st | "real" et
        ----------|----------|---------------------------------|-----------|-----------
        03:00:00  | 17:00:00 | 1 (EGT => -1)                   | 02:00:00  | 16:00:00

    Often I need to get the list of providers whose time range includes the current server time (taking into account the time zone conversion). The problem is that the time zones aren't "constant", i.e., they may change during summer time. However, this change is very specific to the region and is not always carried out at the same time: EGT/EGST, ART/ARST, etc. The questions are: 1. Is it necessary to use a web service to update the regions' time zones every so often? Does anyone know of a web service that could serve this purpose? 2. Is there a better approach to solve this problem? Thanks in advance. UPDATE: I will give an example to clarify what I'm trying to get. In the table providers I found these records:

        idproviders | starttime | endtime  | region_id
        ------------|-----------|----------|-----------
        1           | 03:00:00  | 17:00:00 | 23 (Texas)
        2           | 04:00:00  | 18:00:00 | 23 (Texas)

    If I execute the query in January, with this information: server time (UTC offset) = 0 hours, Texas providers (UTC offset) = +1 hour, server time = 02:00:00, I should get the following result: idproviders = 1. If I execute the query in June, with this information: server time (UTC offset) = 0 hours, Texas providers (UTC offset) = +2 hours (their local time has not changed, but their time zone has changed), server time = 02:00:00, I should get the following results: idproviders = 1 and 2.
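
    If the database is PostgreSQL (the time without time zone type suggests it might be), one approach is to store an IANA zone name per region and let the database's own tz data handle DST, instead of polling a web service; a sketch, assuming a regions table with a tz_name column holding values like 'America/Chicago':

        SELECT p.idproviders
        FROM providers p
        JOIN regions r ON r.id = p.region_id
        WHERE (now() AT TIME ZONE r.tz_name)::time
              BETWEEN p.starttime AND p.endtime;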

    Read the article

  • Mysql select - improve performance

    - by realshadow
    Hey, I am working on an e-shop which sells products only via loans. I display 10 products per page in any category, and each product has 3 different price tags for 3 different loan types. Everything went pretty well during testing, query execution time was perfect, but today when I transferred the changes to the production server, the site "collapsed" in about 2 minutes. The query that is used to select loan types sometimes hangs for ~10 seconds, and it happens frequently, thus it can't keep up and it's hella slow. The table that is used to store the data has approximately 2 million records, and each select looks like this:

        SELECT * FROM products_loans
        WHERE KOD IN ("X17/Q30-10", "X17/12", "X17/5-24")
          AND 369.27 BETWEEN CENA_OD AND CENA_DO;

    That is 3 loan types and the price that needs to be in range between CENA_OD and CENA_DO, so 3 rows are returned. But since I need to display 10 products per page, I need to run it through a modified select using OR, since I didn't find any other solution to this. I have asked about it here, but got no answer. As mentioned in the referenced post, this has to be done separately since there is no column that could be used in a join (except of course price and code, but that ended very, very badly). Here is the SHOW CREATE TABLE; KOD and CENA_OD/CENA_DO are indexed:

        CREATE TABLE `products_loans` (
          `KOEF_ID` bigint(20) NOT NULL,
          `KOD` varchar(30) NOT NULL,
          `AKONTACIA` int(11) NOT NULL,
          `POCET_SPLATOK` int(11) NOT NULL,
          `koeficient` decimal(10,2) NOT NULL default '0.00',
          `CENA_OD` decimal(10,2) default NULL,
          `CENA_DO` decimal(10,2) default NULL,
          `PREDAJNA_CENA` decimal(10,2) default NULL,
          `AKONTACIA_SUMA` decimal(10,2) default NULL,
          `TYP_VYHODY` varchar(4) default NULL,
          `stage` smallint(6) NOT NULL default '1',
          PRIMARY KEY (`KOEF_ID`),
          KEY `CENA_OD` (`CENA_OD`),
          KEY `CENA_DO` (`CENA_DO`),
          KEY `KOD` (`KOD`),
          KEY `stage` (`stage`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8

    Also, selecting all loan types and filtering them later through PHP doesn't work well, since each type has over 50k records and that select takes too much time as well... Any ideas about improving the speed are appreciated.
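
    One thing worth trying (an assumption to verify with EXPLAIN against your data) is a composite index that puts the equality column first and the range columns after it, so MySQL does not have to merge the separate single-column indexes:

        ALTER TABLE products_loans
            ADD INDEX idx_kod_cena (KOD, CENA_OD, CENA_DO);

        EXPLAIN SELECT * FROM products_loans
        WHERE KOD IN ('X17/Q30-10', 'X17/12', 'X17/5-24')
          AND 369.27 BETWEEN CENA_OD AND CENA_DO;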

    Read the article

  • Is it right that Strophe.addHandler reads only first node from response?

    - by markcial
    I'm starting to learn the Strophe library, and when I use addHandler to parse the response it seems to read only the first node of the XML response. So when I receive XML like this: <body xmlns='http://jabber.org/protocol/httpbind'> <presence xmlns='jabber:client' from='test2@localhost' to='test2@localhost' type='avaliable' id='5593:sendIQ'> <status/> </presence> <presence xmlns='jabber:client' from='test@localhost' to='test2@localhost' xml:lang='en'> <status /> </presence> <iq xmlns='jabber:client' from='test2@localhost' to='test2@localhost' type='result'> <query xmlns='jabber:iq:roster'> <item subscription='both' name='test' jid='test@localhost'> <group>test group</group> </item> </query> </iq> </body> With the handler testHandler used like this: connection.addHandler(testHandler,null,"presence"); function testHandler(stanza){ console.log(stanza); } It only logs: <presence xmlns='jabber:client' from='test2@localhost' to='test2@localhost' type='avaliable' id='5593:sendIQ'> <status/> </presence> What am I missing? Is this the right behaviour? Should I add more handlers to get the other stanzas? Thanks in advance.
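
    For what it's worth, a Strophe handler is deregistered unless it returns true, so a sketch like the following (using the same connection object as above) would log every matching presence stanza rather than only the first:

        connection.addHandler(function (stanza) {
            console.log(Strophe.serialize(stanza));
            return true;   // keep the handler installed for later stanzas
        }, null, "presence");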

    Read the article

  • PHP compiled on Mac OSX 10.6 - using /usr/lib when trying to start apache... rather than /opt/local/lib specified when php was configured

    - by Anthony
    PHP 5.3.3 compiled on Mac OS X 10.6 is using /usr/lib when trying to start Apache, rather than the /opt/local/lib specified when PHP was configured. Why is it trying to load from /usr/lib when I specified in my configure not to? httpd: Syntax error on line 115 of /private/etc/apache2/httpd.conf: Cannot load /usr/libexec/apache2/libphp5.so into server: dlopen(/usr/libexec/apache2/libphp5.so, 10): Library not loaded: /opt/local/lib/libiconv.2.dylib\n Referenced from: /usr/libexec/apache2/libphp5.so\n Reason: Incompatible library version: libphp5.so requires version 8.0.0 or later, but libiconv.2.dylib provides version 7.0.0 The error message above refers to /opt/local/lib, but when I run: otool -LD /opt/local/lib/libiconv.2.dylib /opt/local/lib/libiconv.2.dylib: /opt/local/lib/libiconv.2.dylib (compatibility version 8.0.0, current version 8.0.0) /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 125.0.0) it shows that the version is different from what httpd is erring out with.

    Read the article

  • injection attack (I thought I was protected!) <?php /**/eval(base64_decode( everywhere

    - by Cyprus106
    I've got a fully custom PHP site with a lot of database calls. I just got injection hacked. This little chunk of code below showed up in dozens of my PHP pages. <?php /**/ eval(base64_decode(big string of code.... I've been pretty careful about my SQL calls and such; they're all in this format: $query = sprintf("UPDATE Sales SET `Shipped`='1', `Tracking_Number`='%s' WHERE ID='%s' LIMIT 1 ;", mysql_real_escape_string($trackNo), mysql_real_escape_string($id)); $result = mysql_query($query); mysql_close(); For the record, I rarely use mysql_close() at the end though; that just happened to be the code I grabbed. I can't think of any places where I don't use mysql_real_escape_string() (although I'm sure there are probably a couple; I'll be grepping soon to find out). There are also no places where users can put in custom HTML or anything. In fact, most of the user-accessible pages, if they use SQL calls at all, are almost inevitably "SELECT * FROM" pages that use a GET or POST, depending. Obviously I need to beef up my security, but I've never had an attack like this and I'm not positive what I should do. I've decided to put limits on all my inputs and go through looking to see if I missed a mysql_real_escape_string somewhere... Anybody else have any suggestions? Also... what does this type of code do? Why is it there?
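
    As an aside, a parameterised query removes the escaping question entirely; a sketch using PDO (it assumes a $pdo connection object rather than the mysql_* functions shown above):

        // values travel separately from the SQL text, so they cannot change the query
        $stmt = $pdo->prepare(
            'UPDATE Sales SET Shipped = 1, Tracking_Number = :trackNo WHERE ID = :id LIMIT 1'
        );
        $stmt->execute(array(':trackNo' => $trackNo, ':id' => $id));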

    Read the article

  • Long string insertion with sed

    - by Luis Varca
    I am trying to use this expression to insert the contents of one text file into another after a given string. This is a simple bash script: TEXT=`cat file1.txt` sed -i "/teststring/a \ $TEXT" file2.txt This returns an error, "sed: -e expression #1, char 37: unknown command: `M'". The issue is that the contents of file1.txt are actually a private certificate, so it's a large amount of text with unusual characters, which seems to be causing the problem. If I replace $TEXT with a simple ASCII value it works, but when it reads the large content of file1.txt it fails with that error. Is there some way to carry out this action? Is my syntax off with sed, or is my quote placement wrong?
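
    A sketch of an alternative that sidesteps the quoting problem entirely: sed's r command reads file1.txt verbatim after each matching line, so the certificate text never passes through the shell or the sed expression:

        # insert the whole of file1.txt after every line matching teststring
        sed -i "/teststring/r file1.txt" file2.txt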

    Read the article

  • How to export Oracle statistics

    - by A_M
    Hi, I am writing some new SQL queries and want to check the query plans that the Oracle query optimiser would come up with in production. My development database doesn't have anything like the data volumes of the production database. How can I export database statistics from a production database and re-import them into a development database? I don't have access to the production database, so I can't simply generate explain plans on production without going through a third-party hosting organisation. This is painful. So I want a local database which is in some way representative of production on which I can try out different things. Also, this is for a legacy application. I'd like to "improve" the schema by adding appropriate indexes, constraints, etc. I need to do this in my development database first, before rolling out to test and production. If I add an index and re-generate statistics in development, then the statistics will be generated around the development data volumes, which makes it difficult to assess the impact of my changes on production. Does anyone have any tips on how to deal with this? Or is it just a case of fixing unexpected behaviour once we've discovered it on production? I do have a staging database with production volumes, but again I have to go through a third party to run queries against this, which is painful. So I'm looking for ways to cut out the middle man as much as possible. All this is using Oracle 9i. Thanks.
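
    For reference, DBMS_STATS can move optimiser statistics between databases; a sketch (the schema and stats-table names are placeholders, and the stats table still has to be copied across with exp/imp or a database link):

        -- on production
        EXEC DBMS_STATS.CREATE_STAT_TABLE('APPOWNER', 'STATS_EXPORT');
        EXEC DBMS_STATS.EXPORT_SCHEMA_STATS('APPOWNER', 'STATS_EXPORT');
        -- on development, after copying the STATS_EXPORT table over
        EXEC DBMS_STATS.IMPORT_SCHEMA_STATS('APPOWNER', 'STATS_EXPORT');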

    Read the article

  • Using wget to recursively download whole FTP directories

    - by user9406
    I want to copy all of the files and folders from one host to another. The files on the old host sit at /var/www/html, I only have FTP access to that server, and I can't tar all the files. A regular connection to the old host through FTP brings me to the /home/admin folder. I tried running the following command from my new server: wget -r ftp://username:[email protected] But all I get is a made-up index.html file. What is the right syntax for using wget recursively over FTP?
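
    A sketch of one possible command (host and credentials are placeholders): -np keeps the crawl inside the target directory, -nH drops the hostname directory, and the %2F prefix requests an absolute path per RFC 1738, since the FTP login lands in /home/admin rather than /:

        wget -r -np -nH "ftp://username:password@old-host.example.com/%2Fvar/www/html/"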

    Read the article

  • treeview dynamically populated

    - by Laziale
    Hello everyone - I have a treeview control where I want to show the files uploaded to the server. I want to be able to create the nodes and the child nodes dynamically from the database. I am using this query to get the data from the DB: SELECT c.Category, d.DocumentName FROM Categories c INNER JOIN DocumentUserFile d ON c.ID = d.CategoryId WHERE d.UserId = '9rge333a-91b5-4521-b3e6-dfb49b45237c' The result from that query is: Agendas transactions.pdf Minutes accounts.pdf I want to have the treeview organized that way too. I am trying with this piece of code: TreeNode tn = new TreeNode(); TreeNode tnSub = new TreeNode(); foreach (DataRow dt in tblTreeView.Rows) { tn.Text = dt[0].ToString(); tn.Value = dt[0].ToString(); tnSub.Text = dt[1].ToString(); tnSub.NavigateUrl = "../downloading.aspx?file=" + dt[1].ToString() +"&user=" + userID; tn.ChildNodes.Add(tnSub); tvDocuments.Nodes.Add(tn); } I am getting the treeview populated nicely for the 1st category and the document under that category, but I can't get it to work when I want to show more documents under that category, or, even more complicated, to show a new category beneath the 1st one with documents from that category. How can I solve this? I appreciate the answers a lot. Thanks, Laziale
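
    A sketch of one way to build the tree (ASP.NET TreeNode, column order as in the query above; it needs a using System.Linq directive): create the nodes inside the loop and reuse a category node when that category is already in the tree:

        foreach (DataRow row in tblTreeView.Rows)
        {
            string category = row[0].ToString();
            string document = row[1].ToString();

            // reuse the category node if this category was already added
            TreeNode categoryNode = tvDocuments.Nodes.Cast<TreeNode>()
                .FirstOrDefault(n => n.Value == category);
            if (categoryNode == null)
            {
                categoryNode = new TreeNode(category, category);
                tvDocuments.Nodes.Add(categoryNode);
            }

            // one child node per document, linked to the download page
            TreeNode docNode = new TreeNode(document);
            docNode.NavigateUrl = "../downloading.aspx?file=" + document + "&user=" + userID;
            categoryNode.ChildNodes.Add(docNode);
        }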

    Read the article

  • need help fixing unique key in rails. rails is adding id causing duplicate key

    - by railsnew
    I need some help in fixing the issue below. I had transaction blocks in my Rails code like the one below: @sqlcontact = "INSERT INTO contacts (id,\"cid\", \"hphone\", mphone, provider, cemail, email, sms , mail, phone) VALUES ('"+@id1+"','" + @id1 + "', '"+ params[:hphone] + "', '"+params[:mphone]+ "', '" + params[:provider] + "', '" + params[:cemail]+ "', '" + @varemail+ "', '"+@varsms+ "', '"+ @varmail+"', '"+@varphone+"')" My app was deployed to Heroku, so I was advised by them to remove the transaction blocks. So I changed the above to: @cont = Contact.new(:id => @id1, :cid => @id1, :hphone => params[:hphone], :mphone => params[:mphone], :provider => params[:provider], :cemail => params[:cemail], :email => @varemail, :sms => @varsms, :mail => @varmail, :phone => @varphone) @cont.save My app also already had data stored. Now the problem is that when I try to save a record I keep getting the error: duplicate key value violates unique constraint "contacts_pkey" The error also shows the SQL query trying to insert data; however, in that SQL query I do not see the id value. As you can see from my code, I am passing the id, so why is Rails not accepting it? Does it always include its own sequential id? Can I not override the default Rails magic? And if it does that, does it not look at data that is already in the DB? I am really stuck here. What should I do? Should I just go back to my transaction blocks?
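
    If the database is PostgreSQL (which Heroku uses), one likely culprit is that the id sequence is behind the rows that were inserted with explicit ids; a sketch of how to bump it past the existing data:

        -- advance the contacts id sequence to the highest id already in the table
        SELECT setval(pg_get_serial_sequence('contacts', 'id'),
                      (SELECT MAX(id) FROM contacts));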

    Read the article

  • Raising events and object persistence in Django

    - by Mridang Agarwalla
    Hi, I have a tricky Django problem which didn't occur to me when I was developing it. My Django application allows a user to sign up and store his login credentials for another site. The Django application basically allows the user to search this other site (by scraping content off it) and returns the result to the user. For each query, it does a couple of queries against the other site. This seemed to work fine, but sometimes the other site slaps me with a CAPTCHA. I've written the code to get the CAPTCHA image and I need to return this to the user so he can type it in, but I don't know how. My search request (the query, the username and the password) in my Django application gets passed to a view which in turn calls the backend that does the scraping/search. When a CAPTCHA is detected, I'd like to raise a client-side event or something along those lines, display the CAPTCHA to the user and wait for the user's input so that I can resume my search. I would somehow need to persist my backend object between calls. I've tried pickling it, but it doesn't work because I get the Can't pickle 'lock' object error. I don't know how to implement this, though. Any help/ideas? Thanks a ton.
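
    A minimal sketch of one approach, assuming a hypothetical SiteScraper backend that is cheap to rebuild per request: keep only plain, picklable state (the query, the CAPTCHA answer) in the session instead of pickling the scraper object itself.

        from django.shortcuts import render

        def search(request):
            scraper = SiteScraper(request.user)            # hypothetical backend class
            result = scraper.search(request.POST["query"])
            if result.needs_captcha:                       # hypothetical attribute
                # plain strings pickle fine, unlike the scraper's locks
                request.session["pending_query"] = request.POST["query"]
                return render(request, "captcha.html", {"image_url": result.captcha_url})
            return render(request, "results.html", {"results": result.items})

        def captcha_answer(request):
            scraper = SiteScraper(request.user)
            result = scraper.search(request.session.pop("pending_query"),
                                    captcha=request.POST["captcha"])
            return render(request, "results.html", {"results": result.items})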

    Read the article

  • Why doesn't my test run tearDownAfterTestClass when it fails

    - by Memor-X
    In a test I am writing, setUpBeforeTests creates a new customer in the database who is then used to perform the tests, so naturally when I finish the tests I should get rid of this test customer in tearDownAfterTestClass so that I can create them anew when I rerun the test and not get any false positives. Now, when the tests all run fine I have no problem, but if a test fails and I go to rerun it, my setUpBeforeTests fails because I check for MySQL errors in it like this: try { if(!mysqli_query($connection,$query)) { $this->assertTrue(false); } } catch (Exception $exc) { $msg = '[tearDownAfterTestClass] Exception Error' . PHP_EOL . PHP_EOL; $msg .= 'Could not run query - '.mysqli_error($connection). PHP_EOL;; $this->fail($msg); } The error I get is a primary key violation, which is expected because I'm trying to create a new customer using the same data (the primary key is on email, which is also used to log in). However, that means that when the test failed it didn't run tearDownAfterTestClass. Now I could just move everything in tearDownAfterTestClass to the start of setUpBeforeTests, but to me that seems like bad programming since it defeats the purpose of even having tearDownAfterTestClass. So I am wondering: why isn't my tearDownAfterTestClass running when a test fails? NOTE: the database is a fundamental part of the system I'm testing, and the database and system are on a separate development environment, not the live one. The backup files for the database are almost 2 GB and take almost half an hour to restore. The purpose of the teardown is to remove any data we have added because of the test, so that we don't have to restore the database every time we run the tests.

    Read the article

  • I am not able to update form data to MySQL using PHP and jQuery

    - by Jimson Jose
    My problem is that I am unable to update the values entered in the form. I have attached all the files. I'm using a MySQL database to fetch data. What happens is that I'm able to add and delete records from the form using jQuery and PHP scripts against the MySQL database, but I am not able to update data which was retrieved from the database. The file structure is as follows: index.php is a file with jQuery functions; it displays the form for adding new data to MySQL using the save.php file, and the list of all records is viewed without refreshing the page (calling load-list.php to view all records from index.php works fine, as does save.php to save the form data). Delete is a function called from index.php to delete a record from the MySQL database (the function calling delete.php works fine). Update is a function called from index.php to update data using update-form.php by retrieving a specific record from the MySQL table (works fine). The problem lies in updating data from update-form.php to update.php (in which the update query for MySQL is written). I have tried many ways; at last I figured out that data is not being transferred from update-form.php to update.php. There is a small problem in the jQuery AJAX function where it is not transferring data to the update.php page; something is missing in calling the update.php page and it never enters that page. I am a newbie in programming. I collected this script from many forums and made this one, so I was limited in solving this problem. I came to know that this is a good platform for me and many others where we can get help to create new things. Please find the link below to download all the files, which is 35 KB (virus-free assurance): download mysmallform files in ZIPped format, including the MySQL query
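
    A sketch of the jQuery side (the form id, list id and the parameters update.php expects are assumptions): serialize the update form, post it to update.php, and reload the list on success:

        $("#update-form").submit(function (e) {
            e.preventDefault();
            $.post("update.php", $(this).serialize(), function () {
                // refresh the record list without reloading the page
                $("#record-list").load("load-list.php");
            });
        });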

    Read the article

  • Hibernate noob fetch join problem

    - by Bruce
    Hi all, I have two classes, Test2 and Test3. Test2 has an attribute test3 that is an instance of Test3. In other words, I have a unidirectional OneToOne association, with Test2 having a reference to Test3. When I select Test2 from the db, I can see that a separate select is being made to get the details of the associated Test3 instance. This is the famous 1+N selects problem. To fix this to use a single select, I am trying to use the fetch=join annotation, which I understand to be @Fetch(FetchMode.JOIN). However, with fetch set to join, I still see separate selects. Here are the relevant portions of my setup... hibernate.cfg.xml: <property name="max_fetch_depth">2</property> Test2: public class Test2 { @OneToOne (cascade=CascadeType.ALL , fetch=FetchType.EAGER) @JoinColumn (name="test3_id") @Fetch(FetchMode.JOIN) public Test3 getTest3() { return test3; } NB: I set the FetchType to EAGER out of desperation, even though it defaults to EAGER anyway for OneToOne mappings, but it made no difference. Thanks for any help! Edit: I've pretty much given up on trying to use FetchMode.JOIN - can anyone confirm that they have got it to work, i.e. produce a left outer join? In the docs I see that "Usually, the mapping document is not used to customize fetching. Instead, we keep the default behavior, and override it for a particular transaction, using left join fetch in HQL". If I do a left join fetch instead: query = session.createQuery("from Test2 t2 left join fetch t2.test3"); then I do indeed get the results I want - i.e. a left outer join in the query.
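
    For comparison, a sketch of the Criteria equivalent of the HQL left join fetch, which overrides the mapping's fetch strategy for just this one query:

        // FetchMode is org.hibernate.FetchMode
        List<Test2> results = session.createCriteria(Test2.class)
                .setFetchMode("test3", FetchMode.JOIN)
                .list();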

    Read the article

  • How do I include a frameset under CGI.pm?

    - by neversaint
    I want to have a CGI script that does two things: take the input from a form, and generate results based on the input values in a frame. I also want the frame to exist only after the result is generated/printed. Below is the simplified code of what I want to do, but somehow it doesn't work. What's the right way to do it? #!/usr/local/bin/perl use CGI ':standard'; print header; print start_html('A Simple Example'), h1('A Simple Example'), start_form, "What's your name? ",textfield('name'), p, "What's the combination?", p, checkbox_group(-name=>'words', -values=>['eenie','meenie','minie','moe'], -defaults=>['eenie','minie']), p, "What's your favorite color? ", popup_menu(-name=>'color', -values=>['red','green','blue','chartreuse']), p, submit, end_form, hr; if (param()) { # begin create the frame print <<EOF; <html><head><title>$TITLE</title></head> <frameset rows="10,90"> <frame src="$script_name/query" name="query"> <frame src="$script_name/response" name="response"> </frameset> EOF # Finish creating frame print "Your name is: ",em(param('name')), p, "The keywords are: ",em(join(", ",param('words'))), p, "Your favorite color is: ",em(param('color')), hr; } print end_html;

    Read the article

  • Ordering by formula fields in NHibernate

    - by Darin Dimitrov
    Suppose that I have the following mapping with a formula property: <class name="Planet" table="planets"> <id name="Id" column="id"> <generator class="native" /> </id> <!-- somefunc() is a native SQL function --> <property name="Distance" formula="somefunc()" /> </class> I would like to get all planets and order them by the Distance calculated property: var planets = session .CreateCriteria<Planet>() .AddOrder(Order.Asc("Distance")) .List<Planet>(); This is translated to the following query: SELECT Id as id0, somefunc() as formula0 FROM planets ORDER BY somefunc() Desired query: SELECT Id as id0, somefunc() as formula0 FROM planets ORDER BY formula0 If I set a projection with an alias it works fine: var planets = session .CreateCriteria<Planet>() .SetProjection(Projections.Alias(Projections.Property("Distance"), "dist")) .AddOrder(Order.Asc("dist")) .List<Planet>(); SQL: SELECT somefunc() as formula0 FROM planets ORDER BY formula0 but it populates only the Distance property in the result, and I would really like to avoid projecting manually over all the other properties of my object (there could be many other properties). Is this achievable with NHibernate? As a bonus, I would like to pass parameters to the native somefunc() SQL function. Anything producing the desired SQL is acceptable (replacing the formula field with subselects, etc...); the important thing is to have the calculated Distance property in the resulting object and to order by this distance inside SQL.

    Read the article

  • mod_rewrite directory path to deeper directory

    - by DA.
    I don't usually work with LAMP and am a bit stumped getting a site working locally. The site is set up to be used via localhost: 1) http://localhost/mysite However, the way the site files physically sit on the server, the root is located at: 2) /var/www/mysite/trunk/site I'm trying to figure out a way where I could type #1 but have Apache actually look for the files in #2, so that all of the asset/page links in the web application work. Is mod_rewrite the solution? If so, I'm stumped on the syntax. I have this, but it won't work (due, I assume, to it causing an infinite loop): RewriteRule ^mysite/ mysite/trunk/site I have a hunch I need to sprinkle on some regex?
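
    A sketch of an alternative that avoids the rewrite loop altogether: an Alias maps the URL prefix straight onto the real directory (Apache 2.2 style access directives shown, since that matches the era):

        Alias /mysite /var/www/mysite/trunk/site
        <Directory /var/www/mysite/trunk/site>
            Order allow,deny
            Allow from all
        </Directory>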

    Read the article

  • [MySQL/PHP] Avoid using RAND()

    - by Andrew Ellis
    So... I have never had a need to do a random SELECT on a MySQL DB until this project I'm working on. After researching, it seems the general consensus is that using RAND() is a bad idea. I found an article that explains how to do another type of random select. Basically, if I want to select 5 random elements, should I do the following (I'm using the Kohana framework here)? If not, what is a better solution? Thanks, Andrew <?php final class Offers extends Model { /** * Loads a random set of offers. * * @param integer $limit * @return array */ public function random_offers($limit = 5) { // Find the highest offer_id $sql = ' SELECT MAX(offer_id) AS max_offer_id FROM offers '; $max_offer_id = DB::query(Database::SELECT, $sql) ->execute($this->_db) ->get('max_offer_id'); // Check to make sure we're not trying to load more offers // than there really is... if ($max_offer_id < $limit) { $limit = $max_offer_id; } $used = array(); $ids = ''; for ($i = 0; $i < $limit; ) { $rand = mt_rand(1, $max_offer_id); if (!isset($used[$rand])) { // Flag the ID as used $used[$rand] = TRUE; // Set the ID if ($i > 0) $ids .= ','; $ids .= $rand; ++$i; } } $sql = ' SELECT offer_id, offer_name FROM offers WHERE offer_id IN(:ids) '; $offers = DB::query(Database::SELECT, $sql) ->param(':ids', $ids) ->as_object() ->execute($this->_db); return $offers; } }

    Read the article

  • Show users a list of unique items on Java Google App Engine

    - by James
    I've been going round in circles with what must be a very simple challenge, but I want to do it the most efficient way from the start. So, I've watched Brett Slatkin's Google I/O videos (2008 & 2009) about building scalable apps, including http://www.youtube.com/watch?v=AgaL6NGpkB8, and read the docs, but as a n00b I'm still not sure. I'm trying to build an app on GAE/J similar to the original 'hotornot', where a user is presented with an item which they rate. Once they rate it, they are presented with another one which they haven't seen before. My question is this: is it more efficient to do a query up front to grab x items (say 100) and put them in a list (stored in memcache?), or is it better to simply make a query for a new item after each rating? To keep track of the items a user has seen, I'm planning to keep those items' keys in a list property of the user's entity. Does that sound sensible? I've really got myself confused about this, so any help would be much appreciated.

    Read the article

  • vsFTPd and iptables - how to configure them in CentOS 5.5?

    - by Vincenzo
    I've installed vsFTPd on CentOS 5.5, on TWO servers, and added this rule to their iptables: -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 21 -j ACCEPT It looks like this is not enough, since when I try to upload a file from one server to another, I get this result (IP address is masked): # ftp 99.99.99.99 Connected to …com (99.99.99.99). 220 (vsFTPd 2.0.5) Name (99.99.99.99:root): vinny 331 Please specify the password. Password: 230 Login successful. Remote system type is UNIX. Using binary mode to transfer files. ftp> ls 227 Entering Passive Mode (99,99,99,99,107,74) ftp: connect: No route to host I've found a few articles on the net about the second rule I have to add to iptables, but I didn't find the right syntax for it. Could you please help?
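
    The missing piece is usually the passive-mode data connection; a sketch that pins vsftpd's passive ports and opens that range alongside the existing port-21 rule (the port range itself is an arbitrary choice):

        # /etc/vsftpd/vsftpd.conf
        pasv_enable=YES
        pasv_min_port=50000
        pasv_max_port=50100

        # iptables, next to the existing port-21 rule
        -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 50000:50100 -j ACCEPT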

    Read the article

  • Cannot deploy reports on localhost/reports

    - by Jackson Sunuwar
    I am using Microsoft SQL Server 2008 R2 and SQL Server Reporting Services (SSRS) on an XP virtual machine. I have created a report and am trying to deploy it, but I'm getting this error: The specified report server URL http://localhost/Reports could not be found. Verify the syntax of the URL and that the report server exists. I went to look at my services: SQL Server (SQLEXPRESS) is started, but SQL Server (MSSQLSERVER) is not. When I try to start it, it says Windows could not start SQL Server on the local computer, error code 10048. I went into cmd and tried C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Binn\sqlservr.exe -sMSSQLSERVER and I get this: Server Error: 17058, Severity: 16, State: 1. Can someone please help me?

    Read the article

  • Linux Live CD for old computer

    - by Joel Coehoorn
    I have a Pentium II (that's right, Pentium II) with a scant 200 MB of RAM. This was a high-end workstation in its day. The machine currently runs DOS on a RAID array, and I need to pull some data from it. I figure my best chance at this is to use a Linux live CD to copy the data to one of our Active Directory network shares (there is a network card in the machine). Unfortunately, my Linux skills are abysmal, so I'm not sure where to get started: Where should I look to find a Linux CD that will run well on such an old system? Since I'm likely going to be command-line only, what do I need to do to configure the network card and mount the network share via the command line? Bonus points: the exact syntax needed to copy and convert the entire volume for use in VMware Server 2.0, but really just copying all the data should be enough.
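
    A sketch of the command-line part once a live CD is booted (the interface name, share path and credentials are placeholders):

        dhclient eth0                      # get an address from DHCP
        mkdir -p /mnt/share
        mount -t cifs //fileserver/backup /mnt/share \
              -o username=aduser,password=secret,domain=EXAMPLE
        cp -a /path/to/dos/data /mnt/share/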

    Read the article
