Search Results

Search found 21942 results on 878 pages for 'named query'.


  • Creating a new instance fails in PHP

    - by as3isolib
    I am relatively new to PHP and having some decent success, however I am running into this issue: if I try to create a new instance of the class GenericEntryVO, I get a 500 error with little to no helpful error information. However, if I use a generic object as the result, I get no errors. I'd like to be able to cast this object as a GenericEntryVO, as I am using AMFPHP to serialize and communicate data with a Flex client. I've read a few different ways to create constructors in PHP, but the typical 'public function Foo()' for a class Foo was what was recommended for PHP 5.4.4.

    ```php
    // in my EntryService.php class
    public function getEntryByID($id)
    {
        $link = mysqli_connect("localhost", "root", "root", "BabyTrackingAppDB");
        if (mysqli_connect_errno()) {
            printf("Connect failed: %s\n", mysqli_connect_error());
            exit();
        }
        $query = "SELECT * FROM Entries WHERE id = '$id' LIMIT 1";
        if ($result = mysqli_query($link, $query)) {
            // $entry = new GenericEntryVO(); this is where the problem lies!
            while ($row = mysqli_fetch_row($result)) {
                $entry->id = $row[0];
                $entry->entryType = $row[1];
                $entry->title = $row[2];
                $entry->description = $row[3];
                $entry->value = $row[4];
                $entry->created = $row[5];
                $entry->updated = $row[6];
            }
        }
        mysqli_free_result($result);
        mysqli_close($link);
        return $entry;
    }
    ```

    ```php
    // my GenericEntryVO.php class
    <?php
    class GenericEntryVO
    {
        public function __construct()
        {
        }

        public $id;
        public $title;
        public $entryType;
        public $description;
        public $value;
        public $created;
        public $updated;
        // public $properties;
    }
    ?>
    ```
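    A likely culprit for an otherwise silent 500 is a fatal "Class 'GenericEntryVO' not found" error, which happens whenever GenericEntryVO.php is not loaded before the `new` call. A minimal debugging sketch, assuming the class file sits next to EntryService.php (the path is an assumption):

    ```php
    <?php
    // Surface fatal errors instead of a bare 500 (development only).
    ini_set('display_errors', '1');
    error_reporting(E_ALL);

    // Make sure the class definition is loaded before instantiating it.
    require_once __DIR__ . '/GenericEntryVO.php'; // hypothetical path

    $entry = new GenericEntryVO();
    var_dump($entry); // should print an object with the declared public fields
    ```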


  • What characters are NOT escaped with a mysqli prepared statement?

    - by barfoon
    Hey everyone, I'm trying to harden some of my PHP code and use mysqli prepared statements to better validate user input and prevent injection attacks. I switched away from mysqli_real_escape_string as it does not escape % and _. However, when I create my query as a mysqli prepared statement, the same flaw is still present. The query pulls a user's salt value based on their username. I'd do something similar for passwords and other lookups.

    Code:

    ```php
    $db = new sitedatalayer();

    if ($stmt = $db->_conn->prepare("SELECT `salt` FROM admins WHERE `username` LIKE ? LIMIT 1")) {
        $stmt->bind_param('s', $username);
        $stmt->execute();
        $stmt->bind_result($salt);

        while ($stmt->fetch()) {
            printf("%s\n", $salt);
        }

        $stmt->close();
    } else {
        return false;
    }
    ```

    Am I composing the statement correctly? If I am, what other characters need to be examined? What other flaws are there? What is best practice for doing these types of selects? Thanks,
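    Worth noting when reading this one: a bound parameter is never parsed as SQL, so the prepared statement does stop injection; % and _ survive because LIKE itself treats them as wildcards inside the data. A sketch of the two usual fixes, assuming an exact username lookup is what's wanted:

    ```php
    // 1. Exact match: with `=`, wildcards are ordinary characters.
    $stmt = $db->_conn->prepare("SELECT `salt` FROM admins WHERE `username` = ? LIMIT 1");
    $stmt->bind_param('s', $username);

    // 2. If LIKE is really needed, escape the wildcards in the bound value first.
    $pattern = addcslashes($username, '%_\\');
    $stmt = $db->_conn->prepare("SELECT `salt` FROM admins WHERE `username` LIKE ? LIMIT 1");
    $stmt->bind_param('s', $pattern);
    ```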


  • How can I determine when an InnoDB table was last changed?

    - by David M
    I've had success in the past storing the (heavily) processed results of a database query in memcached, using the last update time of the underlying table(s) as part of the cache key. For MyISAM tables, that last-changed time is available in SHOW TABLE STATUS. Unfortunately, that's usually NULL for InnoDB tables. In MySQL 4.1, the ctime for an InnoDB table in its SHOW TABLE STATUS line was usually its actual last update time, but that doesn't seem to be true for MySQL 5.1. There is a DATETIME field in the table, but it only shows when a row has been modified - it cannot show the deletion time of a row that's no longer there! So I really cannot use MAX(update_time). Here's the really tricky part: I have a number of replicas that I do reads from. Can I determine the state of the table in a way that doesn't rely on when the changes have actually been applied? My conclusion after working on this for a while is that it's not going to be possible to get this information as cheaply as I'd like. I'm probably going to cache data until the time that I expect the table to change (it's updated once a day), and let the query cache help out where it can.
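    One workaround that fits the memcached scheme described above (a sketch, assuming the writers can be modified; all names are hypothetical): maintain an explicit version row per table, bump it in the same transaction as every write, and build the cache key from it. Replication then carries the version along with the data it describes:

    ```php
    <?php
    // Schema (hypothetical):
    //   CREATE TABLE table_versions (
    //       table_name VARCHAR(64) PRIMARY KEY,
    //       version    BIGINT UNSIGNED NOT NULL DEFAULT 0
    //   ) ENGINE=InnoDB;
    // Writers run, inside the same transaction as their changes:
    //   UPDATE table_versions SET version = version + 1 WHERE table_name = 'mytable';

    $mysqli = new mysqli('replica-host', 'user', 'pass', 'mydb');
    $row = $mysqli->query(
        "SELECT version FROM table_versions WHERE table_name = 'mytable'"
    )->fetch_row();

    $cacheKey = 'mytable_report_v' . $row[0]; // changes only when the data does
    ```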


  • SugarCRM REST API always returns "null"

    - by TuomasR
    I'm trying to test out the SugarCRM REST API, running the latest version of CE (6.0.1). It seems that whenever I do a query, the API returns "null", nothing else. If I omit the parameters, then the API returns the API description (which the documentation says it should). I'm trying to perform a login, passing as parameters the method (login), input_type and response_type (json) and rest_data (JSON-encoded parameters). The following code does the query:

    ```php
    $api_target = "http://example.com/sugarcrm/service/v2/rest.php";

    $parameters = json_encode(array(
        "user_auth" => array(
            "user_name" => "admin",
            "password"  => md5("adminpassword"),
        ),
        "application_name" => "Test",
        "name_value_list"  => array(),
    ));

    $postData = http_build_query(array(
        "method"        => "login",
        "input_type"    => "json",
        "response_type" => "json",
        "rest_data"     => $parameters
    ));

    echo $parameters . "\n";
    echo $postData . "\n";

    echo file_get_contents($api_target, false, stream_context_create(array(
        "http" => array(
            "method"  => "POST",
            "header"  => "Content-Type: application/x-www-form-urlencoded\r\n",
            "content" => $postData
        )
    ))) . "\n";
    ```

    I've tried different variations of parameters and using username instead of user_name, and all provide the same result: just a response "null" and that's it.


  • Solr: how do I index and search several fields?

    - by sbrattla
    Hi, I've set up my first 'installation' of Solr, where each index document represents a musical work (with properties like number (int), title (string), version (string), composers (string) and keywords (string)). I've set the field 'title' as the default search field. However, what do I do when I would like to query all fields? I'd like to give users the opportunity to search in all fields, and as far as I've understood there are at least two options for this: (1) Specify which fields the query should be made against. (2) Set up the Solr configuration with copyFields, so that values added to each of the fields are copied to a 'catch-all'-like field which can be used for searching. However, in this case I am uncertain how things would turn out, taking into consideration that the data types are not all the same for the various fields. The various fields will, to a lesser or greater degree, go through filters, but as copyField values are taken from their original fields before the values have been run through their original fields' filters, I would have to apply one single filter chain to all values in the copyField. This, again, would result in integers being 'filtered' just as strings would. Is this a case where I should use copyFields? At first glance, it seems a bit more 'flexible' to just search on all fields. However, maybe there's a cost? All feedback appreciated! Thanks!
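    For reference, the copyField route is only a few lines of schema.xml; the copy is taken from the raw source value before analysis, so the int field is simply re-analyzed as text in the catch-all, which is normally harmless for keyword search. A sketch with illustrative field names (the text type name follows the example schema and is an assumption):

    ```xml
    <!-- catch-all field; stored="false" since it is only searched, never displayed -->
    <field name="catchall" type="text_general" indexed="true" stored="false" multiValued="true"/>

    <copyField source="title" dest="catchall"/>
    <copyField source="version" dest="catchall"/>
    <copyField source="composers" dest="catchall"/>
    <copyField source="keywords" dest="catchall"/>
    <copyField source="number" dest="catchall"/>

    <!-- make it the default field for queries that don't name a field -->
    <defaultSearchField>catchall</defaultSearchField>
    ```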


  • Python program to search for specific strings in hash values (coding help)

    - by Diego
    Trying to write a program that searches hash values for specific strings (input by user) and returns the hash if the search query is present in that line. Doing this to kind of just learn Python a bit more, but it could be a real-world application used by an HR department to search a .csv resume database for specific words in each resume. I'd like this program to look through a .csv file that has three entries per line (id#;applicant name;resume text). I set it up so that it creates a hash, then created a string for the resume text hash entry, and am trying to use the .find() function to return the entire hash for each instance. What I'd like is: if the word "gpa" is used as a search query and it is found in s['resumetext'] for three applicants (rows in the .csv file), it prints the id, name, and resume for every row that has it (all three applicants). As it is right now, my program prints the first row in the .csv file (print resume['id'], resume['name'], resume['resumetext']) no matter what the search query is, whether it's in the resumetext or not. Lastly, are there better ways of doing this, by searching Word documents, PDFs and .txt files in a folder for specific words using Python? (I've just started reading about the re module and am wondering if this may be the route, rather than putting everything in a .csv file.)

    ```python
    def find_details(id2find):
        resumes_f = open("resume_data.csv")
        for each_line in resumes_f:
            s = {}
            (s['id'], s['name'], s['resumetext']) = each_line.split(";")
            resumetext = str(s['resumetext'])
            if resumetext.find(id2find):
                return(s)
            else:
                print "No data matches your search query. Please try again"

    searchquery = raw_input("please enter your search term")
    resume = find_details(searchquery)
    if resume:
        print resume['id'], resume['name'], resume['resumetext']
    ```
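    Two bugs explain the behavior described above: str.find() returns -1 (which is truthy) when the term is absent and 0 (falsy) when the term starts the text, so the test is effectively inverted, and `return` exits on the very first row either way. A sketch of a fix, keeping the original Python 2 style and collecting every match:

    ```python
    def find_details(id2find):
        matches = []
        resumes_f = open("resume_data.csv")
        for each_line in resumes_f:
            s = {}
            s['id'], s['name'], s['resumetext'] = each_line.split(";")
            # membership test instead of find(); case-insensitive comparison
            if id2find.lower() in s['resumetext'].lower():
                matches.append(s)
        resumes_f.close()
        return matches

    searchquery = raw_input("please enter your search term: ")
    results = find_details(searchquery)
    if results:
        for resume in results:
            print resume['id'], resume['name'], resume['resumetext']
    else:
        print "No data matches your search query. Please try again"
    ```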


  • How to use chain.p7b with Apache?

    - by Debianuser
    I wanted to set up an SSL website on Apache and applied for a certificate from my local ISP. All they sent me was a single file named chain.p7b. I have always used certificates from other vendors without any issues, but they usually provide two files to be configured as SSLCertificateFile and SSLCertificateChainFile in Apache. Following instructions from several online resources, I opened the p7b file in Windows and extracted 4 certificates from the file. I then tried configuring Apache with one of the files and it worked, but shows a warning: "The certificate is not trusted because no issuer chain was provided." I thought I had to use the remaining 3 files as SSLCertificateChainFile and/or SSLCACertificateFile. I tried that but it didn't work, so I am assuming it might be something completely different. Has anyone faced this issue before? The following page http://www-01.ibm.com/support/docview.wss?uid=swg21458997 talks about using a keystore, but is that relevant to Apache?
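    A PKCS#7 (.p7b) bundle can also be unpacked with openssl instead of the Windows certificate viewer; a sketch, assuming the file is PEM-encoded (add `-inform DER` if it is binary):

    ```sh
    openssl pkcs7 -print_certs -in chain.p7b -out certs.pem
    # certs.pem now holds every certificate in the bundle: the server
    # certificate goes in SSLCertificateFile, and the CA certificates
    # (in order) into one file referenced by SSLCertificateChainFile.
    ```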


  • Heroku: Postgres type operator error after migrating DB from MySQL

    - by sevennineteen
    This is a follow-up to a question I'd asked earlier, which phrased this as more of a programming problem than a database problem: http://stackoverflow.com/questions/2935985/postgres-error-with-sinatra-haml-datamapper-on-heroku

    I believe the problem has been isolated to the storage of the ID column in Heroku's Postgres database after running db:push. In short, my app runs properly on my original MySQL database, but throws Postgres errors on Heroku when executing any query on the ID column, which seems to have been stored in Postgres as TEXT even though it is stored as INT in MySQL. My question is why the ID column is being created as TEXT in Postgres on the data transfer to Heroku, and whether there's any way for me to prevent this. Here's the output from a heroku console session which demonstrates the issue:

    ```
    Ruby console for myapp.heroku.com
    >> Post.first.title
    => "Welcome to First!"
    >> Post.first.title.class
    => String
    >> Post.first.id
    => 1
    >> Post.first.id.class
    => Fixnum
    >> Post[1]
    PostgresError: ERROR: operator does not exist: text = integer
    LINE 1: ...", "title", "created_at" FROM "posts" WHERE ("id" = 1) ORDER...
    HINT: No operator matches the given name and argument type(s). You might need to add explicit type casts.
    Query: SELECT "id", "name", "email", "url", "title", "created_at" FROM "posts" WHERE ("id" = 1) ORDER BY "id" LIMIT 1
    ```

    Thanks!
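    If the column really did arrive as TEXT, a one-time repair on the Postgres side is possible; a sketch, using the table and column names visible in the error message:

    ```sql
    ALTER TABLE posts ALTER COLUMN id TYPE integer USING id::integer;
    ```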


  • Exploring another Windows 8 machine's root

    - by moswald
    (Note, this is without a domain.) I used to be able to type start \\other_machine\c$ in PowerShell and Explorer would pop up with a login dialog. Then I'd type other_machine\moswald and my password (the account is a local admin, of course) and I'm in. Now both machines have Windows 8 installed, and this no longer works (the dialog re-prompts with "Access is denied" displayed). I've verified (through whoami) that the user is named other_machine\moswald, and I'm positive I'm typing the correct password. So what gives? (edited to include the specific result)
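    One thing worth checking in exactly this setup (non-domain machines, local admin account, administrative c$ share): UAC remote restrictions filter the administrator token for local accounts coming in over the network, which produces precisely this "correct password, access denied" loop. A sketch of the commonly cited registry switch, run as administrator on the target machine, assuming this is the cause:

    ```
    reg add HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System ^
        /v LocalAccountTokenFilterPolicy /t REG_DWORD /d 1 /f
    ```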


  • How to multiplex subtitles with avi or mp4 in Linux?

    - by woofeR
    I have been searching for something which can multiplex subtitles with video files in a Linux environment. The key thing is that it should softly embed the subtitles into the video, rather than re-encoding it (as avidemux does). After this multiplexing process, the user should be able to turn the subtitles on and off, using VLC for example. While searching, I found a piece of software which can do exactly what I need, named AVI-Mux GUI, in a Windows environment. However, I need a Linux alternative to this software. Thanks.
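    For what it's worth, mkvtoolnix does this kind of soft muxing on Linux; a sketch, with the caveat that the output container is MKV rather than AVI or MP4:

    ```sh
    # remux video + subtitle track without re-encoding; VLC can then
    # switch the subtitle track on and off
    mkvmerge -o output.mkv input.avi subtitles.srt
    ```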


  • How to avoid hard coding credentials into Sharepoint webpart?

    - by SeeBees
    I am building a SharePoint web part that will be used by all users. The web part connects to a web service which needs credentials with higher privileges than common users have. I hard-coded credentials in the web part's code (`query` is an instance of the web service class):

    ```csharp
    query.Credentials = new System.Net.NetworkCredential("username", "password", "domain");
    ```

    This may not be a good approach. With regard to security, the source code of the web part is available to people who are not allowed to see the credentials. This is bad enough, but are there any other drawbacks to this approach? A web part doesn't have a .config file associated with it; the .config file lives at the application level of the SharePoint site, and I don't want to modify it for a single web part. I wonder if there is a webpart-specific way to solve this problem? Say, provide a WebBrowsable property to an admin so that he/she can set credentials. Is this possible? Thanks
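    The WebBrowsable idea from the question is indeed possible; a sketch of what the properties could look like (property names are illustrative, and note the personalization store would then hold the password in plain text, so this trades one exposure for another):

    ```csharp
    // Editable from the web part tool pane by users with edit rights;
    // PersonalizationScope.Shared makes one value apply to everyone.
    [WebBrowsable(true), Personalizable(PersonalizationScope.Shared)]
    public string ServiceUsername { get; set; }

    [WebBrowsable(true), Personalizable(PersonalizationScope.Shared)]
    public string ServicePassword { get; set; }
    ```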


  • Why S3 website redirect location is not followed by CloudFront?

    - by ychaze
    I have a website hosted on Amazon S3. It is the new version of an old website hosted on WordPress. I have set up some files with the metadata "Website Redirect Location" to handle old locations and redirect them to the new website pages. For example: I had http://www.mysite.com/solution that I want to redirect to http://mysite.s3-website-us-east-1.amazonaws.com/product.html, so I created an empty file named "solution" inside my bucket with the correct metadata: Website Redirect Location = /product.html. The S3 redirect metadata is equivalent to a 301 Moved Permanently, which is great for SEO. This works great when accessing the URL directly from the S3 domain. I have also set up a CloudFront distribution based on the website bucket, and when I try to access through my distribution, the redirect does not work, i.e. http://xxxx123.cloudfront.net/solution does not redirect but downloads the empty file instead. So my question is: how do I keep the redirection through the CloudFront distribution? Or any idea on how to handle the redirection without deteriorating SEO? Thanks
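    The detail that usually matters here: Website Redirect Location is only honored by the S3 website endpoint, not by the plain REST endpoint, so the CloudFront origin has to point at the former, configured as a custom origin. A sketch of the distinction, using this question's bucket name:

    ```
    REST endpoint     mysite.s3.amazonaws.com                      serves the raw (empty) object
    Website endpoint  mysite.s3-website-us-east-1.amazonaws.com    answers with the 301 redirect
    ```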


  • Selecting DOM elements based on attribute content

    - by sameold
    I have several types of <a> in my document. I'd like to select only the <a> whose title attribute begins with the text "more data", like <a title="more data *">. For example, given the markup below, I'd like to select the first and second <a>, but skip the third because its title doesn't begin with "more data", and skip the 4th because it doesn't even have a title attribute.

    ```html
    <a title="more data some text" href="http://mypage.com/page.html">More</a>
    <a title="more data other text" href="http://mypage.com/page.html">More</a>
    <a title="not needed" href="http://mypage.com/page.html">Not needed</a>
    <a href="http://mypage.com/page.html">Not needed</a>
    ```

    I'm using DOMXPath. How would my query look?

    ```php
    $xpath = new DOMXPath($doc);
    $q = $xpath->query('//a');
    ```
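    XPath 1.0's starts-with() expresses exactly this rule; a sketch continuing from the code above:

    ```php
    $q = $xpath->query('//a[starts-with(@title, "more data")]');
    foreach ($q as $a) {
        echo $a->getAttribute('href'), "\n"; // matches the first two links only
    }
    ```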


  • Strange Windows Server 2008 R2 (FTP Server) Error - Caused by a specific combination of characters in the filename of uploaded file

    - by Steven
    We are running Windows Server 2008 R2, which is set up to be an FTP server. Everything seemed to be working fine until one of our clients started complaining about their uploads being halted with the message "Connection with server reset". Further diagnosis revealed that a specific combination of characters in the filename will cause a repeatable error. I am hoping that a forum expert can confirm the error or perhaps provide a solution. This is an example filename that will always cause the error: REPORT_FILED_000000001 (the extension does not matter). Any help would be greatly appreciated! We need files named like this to work properly with our FTP server.


  • Install user scripts on Chrome

    - by Ahatius
    I created a JavaScript file and named it test.user.js. I tried double-clicking it, dragging & dropping it into Chrome, and opening it with Chrome, but it won't install. I've already installed the dev version of Chrome, because it's stated somewhere that it's needed, but I can't get it to work. Then I tried searching for a Chrome Greasemonkey addon, just to find that it is no longer developed, because Chrome has built-in support. Could somebody tell me what I have to do in order to get user scripts working on Chrome v21?
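    Chrome 21 is where off-store installs were tightened, which is likely what changed: a .user.js downloaded or dragged into a normal window no longer installs. The commonly cited workaround (an assumption that it applies here) is to open chrome://extensions, drag test.user.js from the file manager onto that page, and confirm the install prompt; alternatively, a user-script manager extension such as Tampermonkey can load it.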


  • Entity SQL GROUP BY problem, please help

    - by Zviadi
    Hello, help me please with this simple E-SQL query:

    ```csharp
    var qStr = "SELECT SqlServer.Month(o.DatePaid) as month, SqlServer.Sum(o.PaidMoney) as PaidMoney FROM XACCModel.OrdersIncomes as o group by SqlServer.Month(o.DatePaid)";
    ```

    Here's what I have: a simple entity called OrdersIncomes with ID, PaidMoney, DatePaid, Order_ID properties. I want to select the month and the summed PaidMoney, like this:

    ```
    month  PaidMoney
    1      500
    2      700
    3      1200
    ```

    The T-SQL looks like this and works fine:

    ```sql
    select MONTH(o.DatePaid), SUM(o.PaidMoney)
    from OrdersIncomes as o
    group by MONTH(o.DatePaid)
    ```

    results:

    ```
    3 31.0000
    4 127.0000
    5 20.0000
    (3 row(s) affected)
    ```

    But the E-SQL above does not work and I don't know what to do. It throws an exception:

    ```
    ErrorDescription = "The identifier 'o' is not valid because it is not contained either in an aggregate function or in the GROUP BY clause."
    ```

    If I include o in the GROUP BY clause, like `FROM XACCModel.OrdersIncomes as o group by o`, then I don't get the summed and aggregated results. Is this a bug, or what am I doing wrong? Here's the LINQ to Entities query, which works too:

    ```csharp
    var incomeResult = from ic in _context.OrdersIncomes
                       group ic by ic.DatePaid.Month into gr
                       select new
                       {
                           Month = gr.Key,
                           PaidMoney = gr.Sum(i => i.PaidMoney)
                       };
    ```


  • Setting minOccurs="0" (not required) on web service parameters of type int

    - by Alex Angas
    I have an ASP.NET 2.0 web method with the following signature:

    ```csharp
    [WebMethod]
    public QueryResult[] GetListData(
        string url,
        string list,
        string query,
        int noOfItems,
        string titleField)
    ```

    I'm running the disco.exe tool to generate .wsdl and .disco files from this web service for use in SharePoint. The following WSDL for the parameters is being generated:

    ```xml
    <s:element minOccurs="0" maxOccurs="1" name="url" type="s:string" />
    <s:element minOccurs="0" maxOccurs="1" name="list" type="s:string" />
    <s:element minOccurs="0" maxOccurs="1" name="query" type="s:string" />
    <s:element minOccurs="1" maxOccurs="1" name="noOfItems" type="s:int" />
    <s:element minOccurs="0" maxOccurs="1" name="titleField" type="s:string" />
    ```

    Why does the int parameter have minOccurs set to 1 instead of 0, and how do I change it? I've tried the following without success:

    - [XmlElementAttribute(IsNullable=false)] in the parameter declaration: makes no difference (as expected when you think about it)
    - [XmlElementAttribute(IsNullable=true)] in the parameter declaration: gives the error "IsNullable may not be 'true' for value type System.Int32. Please consider using Nullable instead."
    - changing the parameter type to int?: keeps minOccurs="1" and adds nillable="true"
    - [XmlIgnore] in the parameter declaration: the parameter is never output to the WSDL at all
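    Background that explains the generated schema: XmlSerializer marks a parameter optional (minOccurs="0") only when its absence is representable, which is true for reference types (null) but not for a plain int, so value-type parameters always come out as minOccurs="1"; int? only adds nillable="true" because the element, when present, may carry xsi:nil. One workaround, if a truly optional count is required at the WSDL level, is to accept the value as a string (which gets minOccurs="0") and parse it server-side, at the cost of a weaker contract.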


  • Routing connections to passthrough a local machine

    - by xiamx
    Please tell me if what I'm trying to do is feasible. I have a router named "R" which is connected to the WAN. R allows adding rules to the routing table. There are numerous machines connected to the LAN ports of R; they all have IP addresses 192.168.1.* assigned via DHCP on R. Among those machines, there's a machine C with IP address 192.168.1.100. I want all traffic of the other machines in the subnet to pass through machine C, where some filtering and logging will be done. Is this possible? Is there a name for what I'm trying to do? (so I can do more googling later)
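    This is feasible and is usually described as putting a gateway (or transparent proxy) in line: have R's DHCP hand out 192.168.1.100 as the default gateway, and let C forward the traffic. A sketch of C's side, assuming C runs Linux with its uplink on eth0:

    ```sh
    # let C route packets for the rest of the subnet
    sysctl -w net.ipv4.ip_forward=1
    # rewrite outgoing traffic so replies come back through C
    iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
    ```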


  • Bing's Search API always returns the same 10 results

    - by Dofs
    I am trying to figure out Bing's Search API. I have added the SOAP service to my solution, and I do receive results. The issue is that the displayed results are always the same, no matter what I have set request.Web to. When I do the search, it reports 98 results, so it isn't a lack of results.

    ```csharp
    BingService soapClient = new BingService();
    string resp = string.Empty;

    SearchRequest request = new SearchRequest();
    request.AppId = ConfigurationManager.AppSettings["BingKey"];
    request.Sources = new BingLiveSearchService.SourceType[] { SourceType.Web };
    request.Query = query;
    request.Web = new BingLiveSearchService.WebRequest { Count = 10, Offset = 10 };

    var response = soapClient.Search(request);
    if (response.Web != null && response.Web.Total > 0)
    {
        resp += "TOTAL COUNT:" + response.Web.Total + "<br/><br />";
        foreach (var item in response.Web.Results)
        {
            resp += "<div style='padding-bottom:10px;'>" + item.Title + "</div>";
        }
    }
    ```
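    One thing to rule out with a generated SOAP proxy like this (an assumption, since it depends on how the proxy was generated): optional value-type fields often come with a companion "...Specified" flag, and the values are silently ignored unless those flags are set; Offset also has to change per page. A sketch:

    ```csharp
    request.Web = new BingLiveSearchService.WebRequest();
    request.Web.Count = 10;
    request.Web.CountSpecified = true;   // hypothetical flag name from the proxy
    request.Web.Offset = 20;             // 0, 10, 20, ... to page through results
    request.Web.OffsetSpecified = true;  // hypothetical flag name from the proxy
    ```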


  • xcopy files and directory

    - by user1044937
    I have folders named "C:\Jobs\job#1", "C:\Jobs\job#2", "C:\Jobs\job#3", etc., with a lot of directories and sub-directories under each. I want to get all the directories under Jobs and xcopy them to C:\Backup. Then I want to xcopy all the files under each job#1, #2, #3, etc. to C:\Backup\job#N\month\*.*

    To make it clearer:

    Source dir      = C:\Jobs\job#1\"myfiles&dir"
    Destination dir = C:\Backup\job#1\month\"myfiles&dir"

    then do the next folder:

    Source dir      = C:\Jobs\job#2\"myfiles&dir"
    Destination dir = C:\Backup\job#2\month\"myfiles&dir"

    ...until all folders are backed up. Since the job folders keep increasing, by doing it this way I don't have to add extra code to this script except to modify the month. Thank you.
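    A sketch of a batch script matching that layout, assuming the month value is set by hand at the top (the folder iteration itself never needs editing as jobs are added):

    ```bat
    @echo off
    rem month is hypothetical; set it before each run
    set month=January

    rem iterate every directory directly under C:\Jobs
    for /D %%J in (C:\Jobs\*) do (
        rem %%~nxJ is the folder name, e.g. job#1
        xcopy "%%J\*" "C:\Backup\%%~nxJ\%month%\" /E /I /Y
    )
    ```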


  • jQuery mouse events masked by img in IE8 - all other platforms/browsers work fine

    - by Bruce
    The jQuery mouse events mouseenter, mouseleave and hover all work swimmingly on all Mac or Windows browsers except for IE8/Windows. I am using mouseenter and mouseleave to test for hot rectangles (absolutely positioned and dimensioned divs that have no content) over an image, used as hotspots to make the navigation buttons visible when different regions of the main enclosing rectangle (the image) are touched by the cursor. In Windows/IE, jQuery never sends notifications (mouseenter or mouseleave) as the cursor enters or exits one of the target divs. If I turn off the visibility of the image, everything works as expected (like it does in every other browser), so the image is effectively blocking all messages (the intention was for the image to be a background with all the divs floating above it, where they can be clicked on). I understand that there's a z-index gotcha (so explicitly specifying z-index for each absolutely positioned div does not work), but I'm unclear as to how to nest or order multiple divs to allow a single set of jQuery rules to support all browsers. The image tag seems to trump all other divs and always appears in front of them.

    How it's used: "mainview" is the background image, "zoneleft" and "zoneright" are the active areas; when the cursor enters them, the nav buttons "leftarrow" and "rightarrow" are supposed to appear.

    JavaScript:

    ```javascript
    $("div#zoneleft").bind("mouseenter", function () { // enters left zone, show left arrow
        arrowVisibility("left");
        document.getElementById("leftarrow").style.display = "block";
    }).bind("mouseleave", function () {
        document.getElementById("leftarrow").style.visibility = "hidden";
        document.getElementById("rightarrow").style.visibility = "hidden";
    });
    ```

    HTML:

    ```html
    <div id="zoneleft" style="position:absolute; top:44px; left:0px; width:355px; height:372px; z-index:40;">
        <div id="leftarrow" style="position:absolute; top:158px; left:0px; z-index:50;">
            <img src="images/newleft.png" width="59" height="56"/>
        </div>
    </div>
    <div id="zoneright" style="position:absolute; top:44px; left:355px; width:355px; height:372px; z-index:40;">
        <div id="rightarrow" style="position:absolute; top:158px; left:296px; z-index:50;">
            <img src="images/newright.png" width="59" height="56" />
        </div>
    </div>
    </div><!-- navbuttons -->
    <img id="mainview" style="z-index:-1;" src="images/projectPhotos/photo1.jpg" width:710px; height:372px; />
    </div><!--photo-->
    ```
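    A long-standing IE quirk fits this description: elements with no background are not hit-tested when layered over an image, so the empty zone divs are invisible to the mouse. The commonly used workaround (a sketch; the gif path is hypothetical) is to give the hot zones a background the user cannot see:

    ```css
    /* a 1x1 transparent GIF makes IE8 hit-test the zone divs themselves */
    #zoneleft, #zoneright {
        background: url(images/transparent.gif);
    }
    ```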


  • Backup Solr home

    - by user226188
    I'm new to Solr: I've successfully installed Tomcat and the Solr 4.3.1 webapp, plus two collections, on a CentOS 6.4 machine. Now my server is in production and I need to make backups of Solr, so I would like to know the best way to back it up. For the moment I'm doing: stop Tomcat -> tar up my Solr home -> start Tomcat, but I've read that this is not a good solution? Moreover, this implies stopping the whole Tomcat, which hosts other webapps besides Solr. I've also heard that there is a script named "backup" in the Solr home's bin folder, but my bin folder is empty :( I don't want to set up another slave server with replication; for me that's not a backup solution, because my backups are supposed to be sent to a Bacula backup server every night. Is there no built-in solution that I can build a script around, like mysqldump for MySQL servers? Thanks for the help!
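    Solr does ship a built-in snapshot mechanism in the ReplicationHandler that works without stopping Tomcat; a sketch, assuming the handler is enabled in solrconfig.xml and the core is named collection1 (port, core name, and paths are assumptions):

    ```sh
    # trigger an online index snapshot; the resulting snapshot directory
    # can then be picked up by Bacula
    curl 'http://localhost:8080/solr/collection1/replication?command=backup&location=/var/backups/solr'
    ```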


  • Fastest way to perform subset test operation on a large collection of sets with same domain

    - by niktech
    Assume we have trillions of sets stored somewhere. The domain for each of these sets is the same. It is also finite and discrete. So each set may be stored as a bit field (e.g. 0000100111...) of a relatively short length (e.g. 1024). That is, bit X in the bit field indicates whether item X (of 1024 possible items) is included in the given set or not. Now, I want to devise a storage structure and an algorithm to efficiently answer the query: which sets in the data store have set Y as a subset? Set Y itself is not present in the data store and is specified at run time. Now, the simplest way to solve this would be to AND the bit field for set Y with the bit field of every set in the data store, one by one, picking the ones whose AND result matches Y's bit field. How can I speed this up? Is there a tree structure (index) or some smart algorithm that would allow me to perform this query without having to AND every stored set's bit field? Are there databases that already support such operations on large collections of sets?
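    For concreteness, a sketch of the basic test and the standard inverted-index pruning, in Python with toy data (a real deployment would shard the trillions of IDs, but the idea is the same):

    ```python
    # Y is a subset of X iff every set bit of Y is also set in X.
    def is_subset(x, y):
        return (x & y) == y

    sets = {1: 0b0000100111, 2: 0b0000000101, 3: 0b1111111111}

    # Inverted index: for each bit position, the IDs of sets having that bit.
    index = {}
    for sid, bits in sets.items():
        for pos in range(1024):
            if bits >> pos & 1:
                index.setdefault(pos, set()).add(sid)

    def supersets(y):
        # Candidates must contain every bit of y: intersect the postings
        # lists, rarest bit first, so the candidate set shrinks quickly.
        positions = [p for p in range(1024) if y >> p & 1]
        positions.sort(key=lambda p: len(index.get(p, ())))
        candidates = None
        for p in positions:
            postings = index.get(p, set())
            candidates = postings if candidates is None else candidates & postings
            if not candidates:
                break
        return candidates or set()

    print(supersets(0b0000000101))  # {1, 2, 3}
    ```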


  • After a period of time, nslookup still works, but pinging, and an auto-refreshed website, fails.

    - by Mark Hurd
    Contrary to this SO question, this is for a dotted name (gw.localnet.au), and it doesn't happen straight away - only after some period of time (quite a long time, possibly days). In fact, this is for my ADSL router and its internal IP address, which I have named both within the router itself and in my Windows Server 2003 Domain Controller's DNS service. Specifically, localnet.au is an Active-Directory-backed primary domain. In fact, an ipconfig /flushdns may fix the problem, but only after a while (about the time it took me to type in this question :-) ). That doesn't explain the root cause though... Manually transferred from stackoverflow.com


  • Can one connection get details of another? Or, how can I get the most detailed pending transaction

    - by bob-the-destroyer
    Is there a MySQL statement which provides full details of any other open connection or user? For this particular case, on MyISAM tables specifically. Looking at MySQL's SHOW TABLE STATUS documentation, it's missing some very important information for my purpose. For example: remote ODBC connection one is inserting several thousand records, which, due to a slow connection speed, can take up to an hour. TCP connection two, using PHP on the server's localhost, is running SELECT queries with aggregate functions on that data. Before allowing connection two to run those queries, I'd like connection two to first check that there are no pending inserts on any other connection on those specific tables, so it can instead wait until all data is available. If the table is currently being written to, I'd like to send back to the user of connection two an approximation of how much longer to wait, based on the number of pending inserts. Ideally, per table, I'd like to get back via a query the timestamp when connection one began the write, the total inserts left to be done, and the total inserts already completed. Instead of insert counts, even knowing the number of bytes written and left to write would work just fine here. Obviously, since connection two is a TCP connection via a PHP script, all I can really use in that script is some sort of query. I suppose if I have to, since it is on localhost, I can exec() it if the only way is a mysql command-line option that outputs this info, but I'd rather not. I suppose I could simply update a custom-made transaction log before and after this massive insert task, which the PHP script can check, but hopefully there's already a built-in MySQL feature I can take advantage of.
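    The closest built-in is the process list: it shows every other connection's current statement and its runtime (seeing other users' threads needs the PROCESS privilege), though not progress in rows or bytes, so connection two can at least detect the long-running INSERT and estimate from the elapsed time. A sketch of checking it from PHP (the table name is hypothetical):

    ```php
    <?php
    // List other connections' running statements; a long-running INSERT on
    // the table of interest means "wait and retry later".
    $mysqli = new mysqli('localhost', 'user', 'pass', 'mydb');
    $res = $mysqli->query('SHOW FULL PROCESSLIST');
    while ($row = $res->fetch_assoc()) {
        $info = isset($row['Info']) ? (string)$row['Info'] : '';
        if (stripos($info, 'INSERT INTO mytable') === 0) {
            printf("connection %s has been inserting for %s seconds\n",
                   $row['Id'], $row['Time']);
        }
    }
    ```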

