Search Results

Search found 36111 results on 1445 pages for 'mysql update'.

Page 463/1445 | < Previous Page | 459 460 461 462 463 464 465 466 467 468 469 470  | Next Page >

  • Dynamic evaluation of a table column within an insert before trigger

    - by Tim Garver
    Hi All, I have 3 tables: main, types and linked. main has an id column and 32 type columns; types has id and type; linked has id, main_id and type_id. I want to create a BEFORE INSERT trigger on the main table. It needs to compare its 32 type columns to the values in the types table: if a main table column has an 'X' for its value, insert the main_id and type_id into the linked table. I have done a lot of searching, and it looks like a prepared statement would be the way to go, but I wanted to ask the experts. The issue is I don't want to write 32 IF statements, and even if I did, I would need to query the types table to get the ID for each type, which seems like a huge waste of resources. Ideally I want to do something like this inside of my trigger (I am sure my loop syntax is all wrong here):

        BEGIN
          DECLARE @types results_set;  -- (not sure if this is a valid type)
          SET @types = (SELECT * FROM types);
          for i = 0; i < types.records; i++ {
            IF NEW.[i.type] = 'X' THEN
              INSERT INTO linked (main_id, type_id) VALUES (NEW.id, i.id);
            END IF;
          }
        END;

    Anyway, this is what I was hoping to do. Maybe there is a way to dynamically set the field name inside of a results loop, but I can't find a good example of this. Thanks in advance, Tim
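    One set-based alternative avoids both the 32 IF statements and the repeated lookups by folding the column-to-name mapping into a single CASE. This is only a sketch: it assumes the 32 columns are named t01 ... t32 and that types.type stores those same column names, neither of which is stated above (MySQL forbids dynamic SQL inside triggers, so the columns do have to be named once somewhere):

        -- Hypothetical column names (t01..t32); adjust to the real schema.
        CREATE TRIGGER main_after_insert AFTER INSERT ON main
        FOR EACH ROW
          INSERT INTO linked (main_id, type_id)
          SELECT NEW.id, t.id
          FROM types t
          WHERE 'X' = CASE t.type
                        WHEN 't01' THEN NEW.t01
                        WHEN 't02' THEN NEW.t02
                        -- ... one WHEN branch per remaining type column ...
                        WHEN 't32' THEN NEW.t32
                      END;

    The CASE still names every column once, but the types lookup and all the inserts collapse into one statement per row.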

    Read the article

  • Accessing data entered into multiple Django forms and generating them onto a new URL

    - by pedjk
    I have a projects page where users can start up new projects. Each project has two forms. The two forms are:

        class ProjectForm(forms.Form):
            Title = forms.CharField(max_length=100, widget=_hfill)

        class SsdForm(forms.Form):
            Status = forms.ModelChoiceField(queryset=P.ProjectStatus.objects.all())

    With their respective models as follows:

        class Project(DeleteFlagModel):
            Title = models.CharField(max_length=100)

        class Ssd(models.Model):
            Status = models.ForeignKey(ProjectStatus)

    Now when a user fills out these two forms, the data is saved into the database. What I want to do is access this data and generate it onto a new URL. So I want to get the "Title" and the "Status" from these two forms and then show them on a new page for that one project. I don't want the "Title" and "Status" from all the projects to show up, just for one project at a time. If this makes sense, how would I do this? I'm very new to Django and Python (though I've read the Django tutorials) so I need as much help as possible. Thanks in advance. Edit: The ProjectStatus code is (under models):

        class ProjectStatus(models.Model):
            Name = models.CharField(max_length=30)

            def __unicode__(self):
                return self.Name

    Read the article

  • Scalable way of doing self join with many to many table

    - by johnathan
    I have a table structure like the following:

        user                id, name
        profile_stat        id, name
        profile_stat_value  id, name
        user_profile        user_id, profile_stat_id, profile_stat_value_id

    My question is: How do I evaluate a query where I want to find all users with profile_stat_id and profile_stat_value_id for many stats? I've tried doing an inner self join, but that quickly gets crazy when searching for many stats. I've also tried doing a count on the actual user_profile table, and that's much better, but still slow. Is there some magic I'm missing? I have about 10 million rows in the user_profile table and want the query to take no longer than a few seconds. Is that possible?
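    The count-based approach can usually be pushed further with a row-value IN list plus HAVING, so user_profile is scanned once no matter how many stats are requested. A sketch, with three made-up (stat, value) pairs standing in for the search, each stat id appearing in one pair:

        SELECT user_id
        FROM user_profile
        WHERE (profile_stat_id, profile_stat_value_id) IN ((1, 10), (2, 20), (3, 30))
        GROUP BY user_id
        HAVING COUNT(DISTINCT profile_stat_id) = 3;  -- must match all 3 pairs

    A composite index on (profile_stat_id, profile_stat_value_id, user_id) lets this run from the index alone, which matters at 10 million rows.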

    Read the article

  • get_post_meta return empty string

    - by Jean-philippe Emond
    I guess it is a small issue, but I am running a SQL query to get some post ids:

        $result = $wpdb->get_results("SELECT wppm.post_id FROM wp_postmeta wppm INNER JOIN wp_posts wpp ON wppm.post_id=wpp.ID WHERE wppm.meta_key LIKE 'activity'");

    (count: 302) After that, I take all the ids and run get_post_meta like this:

        foreach ($result as $id) {
            $activity = get_post_meta($id);
            var_dump($activity);
            foreach ($activity as $key => $value) {
                if (is_array($value) && $key == "age") {
                    var_dump($value);
                }
            }
        }

    (var_dump result: string "") Same thing if I run it with:

        $activity = get_post_meta($id, 'activity', true);

    where we should be getting a result. What is wrong? Thank you for your help!!! [Bonus Question] If the "activity" meta_key has an array value, and I fetch it directly like:

        $result = $wpdb->get_results("SELECT wppm.meta_value FROM wp_postmeta wppm INNER JOIN wp_posts wpp ON wppm.post_id=wpp.ID WHERE wppm.meta_key LIKE 'activity'");

    how do I parse it? Thanks again!

    Read the article

  • How to structure this query...?

    - by SpikETidE
    Hi Everyone... Consider the following table:

        hotel   facilities
        1       internet
        1       swimming pool
        1       wi-fi
        1       parking
        2       swimming pool
        2       sauna
        2       parking
        3       toilets
        3       bungee-jumping
        3       internet
        4       parking
        4       swimming pool

    I need to select only the hotels that have parking, swimming pool and internet. I worked out the following:

        SELECT hotel FROM table WHERE facilities IN ('internet', 'swimming pool', 'parking')

    This query selects the hotels that have at least one among the choices. But what I need is a query that selects the hotels that have ALL of the selected facilities... Thanks for your suggestions....
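    This is the classic "relational division" problem, and the usual fix is to keep the IN list but group per hotel and demand a full match. A sketch, assuming the table is actually named hotel_facilities (TABLE is a reserved word in MySQL):

        SELECT hotel
        FROM hotel_facilities
        WHERE facilities IN ('internet', 'swimming pool', 'parking')
        GROUP BY hotel
        HAVING COUNT(DISTINCT facilities) = 3;  -- all three must be present

    The HAVING count must equal the number of items in the IN list; DISTINCT guards against a facility being listed twice for the same hotel.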

    Read the article

  • Optimize master-detail insert statements

    - by Dave Jarvis
    Quest: After a day of running (against nearly 1 GB of data), a set of statements has slowed to 40 inserts per second. I am looking to increase that by an order of magnitude or two.

    SQL Code: The code to insert the information comes in two parts: a master record and detail records. The master record:

        INSERT INTO MONTH_REF (DISTRICT_ID, STATION_ID, CATEGORY_ID, YEAR, MONTH)
        VALUES ('101', '0066', '010', 1984, 07);

    The detail records:

        INSERT INTO DAILY (MONTH_REF_ID, AMOUNT, DAILY_FLAG_ID, DAY)
        VALUES ((SELECT ID FROM MONTH_REF M WHERE M.DISTRICT_ID = '101' AND M.STATION_ID = '0066' AND M.CATEGORY_ID = '010' AND M.YEAR = 1984 AND M.MONTH = 07), 0, ' ', 1);
        INSERT INTO DAILY (MONTH_REF_ID, AMOUNT, DAILY_FLAG_ID, DAY)
        VALUES ((SELECT ID FROM MONTH_REF M WHERE M.DISTRICT_ID = '101' AND M.STATION_ID = '0066' AND M.CATEGORY_ID = '010' AND M.YEAR = 1984 AND M.MONTH = 07), 0.5, ' ', 2);
        INSERT INTO DAILY (MONTH_REF_ID, AMOUNT, DAILY_FLAG_ID, DAY)
        VALUES ((SELECT ID FROM MONTH_REF M WHERE M.DISTRICT_ID = '101' AND M.STATION_ID = '0066' AND M.CATEGORY_ID = '010' AND M.YEAR = 1984 AND M.MONTH = 07), 0, 'T', 3);

    Proposed Solution:

        INSERT INTO MONTH_REF (DISTRICT_ID, STATION_ID, CATEGORY_ID, YEAR, MONTH)
        VALUES ('101', '0066', '010', 1984, 07);
        SET @month_ref_id := (SELECT LAST_INSERT_ID());
        INSERT INTO DAILY (MONTH_REF_ID, AMOUNT, DAILY_FLAG_ID, DAY) VALUES (@month_ref_id, 0, ' ', 1);
        INSERT INTO DAILY (MONTH_REF_ID, AMOUNT, DAILY_FLAG_ID, DAY) VALUES (@month_ref_id, 0.5, ' ', 2);
        INSERT INTO DAILY (MONTH_REF_ID, AMOUNT, DAILY_FLAG_ID, DAY) VALUES (@month_ref_id, 0, 'T', 3);

    Constraints: The MONTH_REF table has an AUTO_INCREMENT primary key and is indexed on it. The DAILY table has no index and no primary key. A primary key can be added to the DAILY table, if it would help.

    Question: Is there a more efficient way to execute the (billion or so) insert statements than the proposed solution? Thank you!
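    Two further refinements on top of the LAST_INSERT_ID() idea are batching the detail rows into one multi-row INSERT and wrapping each master-plus-details group in an explicit transaction, so the log is flushed once per group rather than once per statement. A sketch under those assumptions:

        START TRANSACTION;
        INSERT INTO MONTH_REF (DISTRICT_ID, STATION_ID, CATEGORY_ID, YEAR, MONTH)
        VALUES ('101', '0066', '010', 1984, 07);
        SET @month_ref_id := LAST_INSERT_ID();
        -- one multi-row INSERT instead of one statement per day
        INSERT INTO DAILY (MONTH_REF_ID, AMOUNT, DAILY_FLAG_ID, DAY) VALUES
          (@month_ref_id, 0,   ' ', 1),
          (@month_ref_id, 0.5, ' ', 2),
          (@month_ref_id, 0,   'T', 3);
        COMMIT;

    Grouping several months per transaction amplifies the effect; for a one-off bulk load of this size, writing the rows to a file and using LOAD DATA INFILE is typically faster still.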

    Read the article

  • jQuery selectable() elements update values

    - by Josh
    Basically what I'm trying to do is update the value attribute of a hidden input field contained within the selected element when the selectable() UI stops running. If the element is selected, then the input's value should be the name attribute of that particular LI, whereas if the element is not selected, the value should be updated as empty. HTML Sample:

        <ul id="selector">
            <li class="networkicon shr-digg" name="shr-digg">
                <div></div>
                <label>Digg</label>
                <input type="hidden" value="" name="bookmark[]" />
            </li>
            <li class="networkicon shr-reddit" name="shr-reddit">
                <div></div>
                <label>Reddit</label>
                <input type="hidden" value="" name="bookmark[]" />
            </li>
            <li class="networkicon shr-newsvine" name="shr-newsvine">
                <div></div>
                <label>Newsvine</label>
                <input type="hidden" value="" name="bookmark[]" />
            </li>
        </ul>

    Script Sample:

        $(function() {
            $("#selector").selectable({
                filter: 'li',
                selected: function(event, ui) {
                    $(".ui-selected").each(obj, function() {
                        $(this).children('input').val($(this).attr('name'));
                    });
                },
                unselected: function(event, ui) {
                    $(".ui-selected").each(obj, function() {
                        $(this).children('input').val('');
                    });
                }
            });
        });

    Read the article

  • Fetch image from folder via datatable does not work after placing image in subdirectory

    - by Arnold Bishkoff
    I am having trouble wrapping my head around the following. I have code that fetches an image via Smarty in a line:

        <img src="getsnap.php?picid={$data[$smarty.section.sec.index].picno|default:$nextpic}&typ=pic&width={$config.disp_snap_width}&height={$config.disp_snap_height}" class="smallpic" alt="" />

    This works if I pull the image from /temp/userimages/userid/imageNo.ext, but because an OS can segfault if you store too many folders or images in a directory, I have code that assigns the user image to a subdirectory, based upon division of a subdir per 1000 userids. So in this case I have user id 94, whose images get stored in /siteroot/temp/userimages/000000/94/pic_1.jpg (through 10) or tn_1 (through 10).jpg. Here is the code for getsnap.php:

        <?php
        ob_start();
        if ( !defined('SMARTY_DIR') ) { include_once('init.php'); }
        include('core/snaps_functions.php');

        if (isset($_REQUEST['username']) && $_REQUEST['username'] != '') {
            $userid = $osDB->getOne('select id from ! where username = ?', array(USER_TABLE, $_REQUEST['username']));
        } else {
            // include('sessioninc.php');
            if (!isset($_GET['id']) || (isset($_GET['id']) && (int)$_GET['id'] <= 0)) {
                $userid = $_SESSION['UserId'];
            } else {
                $userid = $_GET['id'];
            }
        }

        if (!isset($_GET['picid'])) {
            if ((isset($_REQUEST['type']) && $_REQUEST['type'] != 'gallery') || !isset($_REQUEST['type'])) {
                $defpic = $osDB->getOne('select picno from ! where userid = ? and ( album_id is null or album_id = ?) and default_pic = ? and active = ? ', array(USER_SNAP_TABLE, $userid, '0', 'Y', 'Y'));
                if ($defpic != '') {
                    $picid = $defpic;
                } else {
                    $picid = $osDB->getOne('select picno from ! where userid = ? and ( album_id is null or album_id = ?) and active = ? order by rand()', array(USER_SNAP_TABLE, $userid, '0', 'Y'));
                }
                unset($defpic);
            }
        } else {
            $picid = $_GET['picid'];
        }

        $typ = isset($_GET['typ']) ? $_GET['typ'] : 'pic';
        $cond = '';
        if (($config['snaps_require_approval'] == 'Y' || $config['snaps_require_approval'] == '1') && $userid != $_SESSION['UserId']) {
            $cond = " and active = 'Y' ";
        }
        $sql = 'select * from ! where userid = ? and picno = ? ' . $cond;
        // Get the pic
        $row =& $osDB->getRow($sql, array(USER_SNAP_TABLE, $userid, $picid));

        // Okay pic was found in the DB, lets actually do something
        $id = $userid;
        $dir = str_pad(($id - ($id % 1000)) / 100000, 6, '0', STR_PAD_LEFT);
        $zimg = USER_IMAGES_DIR . $dir;
        $img = getPicture($zimg, $userid, $picid, $typ, $row);
        //$img = getPicture($userid, $picid, $typ, $row);
        //$img = getPicture($dir, $userid, $picid, $typ, $row);
        $ext = ($typ = 'tn') ? $row['tnext'] : $row['picext'];

        // Now pic is built as something pic_x.ext ie pic_2.jpg
        if ($img != '' && ((hasRight('seepictureprofile') && ($config['snaps_require_approval'] == 'Y' && $row['active'] == 'Y') || $config['snaps_require_approval'] == 'N') || $userid == $_SESSION['UserId'])) {
            $img2 = $img;
            //$img2 = $dir.'/'.$img;
        } else {
            $gender = $osDB->getOne('select gender from ! where id = ?', array(USER_TABLE, $userid));
            if ($gender == 'M') {
                $nopic = SKIN_IMAGES_DIR . 'male.jpg';
            } elseif ($gender == 'F') {
                $nopic = SKIN_IMAGES_DIR . 'female.jpg';
            } elseif ($gender == 'D') {
                $nopic = SKIN_IMAGES_DIR . 'director.jpg';
            }
            $img2 = imagecreatefromjpeg($nopic);
            $ext = 'jpg';
        }

        ob_end_clean();
        header("Pragma: public");
        header("Content-Type: image/" . $ext);
        header("Content-Transfer-Encoding: binary");
        header("Cache-Control: must-revalidate");
        $ExpStr = "Expires: " . gmdate("D, d M Y H:i:s", time() - 30) . " GMT";
        header($ExpStr);
        $id = $userid;
        $dir = str_pad(($id - ($id % 1000)) / 100000, 6, '0', STR_PAD_LEFT);
        $zimg = USER_IMAGES_DIR . $dir;
        //header("Content-Disposition: attachment; filename=profile_".$userid."_".$typ.".".$ext);
        //header("Content-Disposition: attachment; filename=$dir.'/'.profile_".$userid."".$typ.".".$ext);
        //header("Content-Disposition: attachment; filename=profile"$dir".'/'.".$userid."_".$typ.".".$ext);
        header("Content-Disposition: attachment; filename=profile_" . $userid . "_" . $typ . "." . $ext);
        /*
        if ($_SESSION['browser'] != 'MSIE') {
            header("Content-Disposition: inline");
        }
        */
        if ($ext == 'jpg') {
            imagejpeg($img2);
        } elseif ($ext == 'gif') {
            imagegif($img2);
        } elseif ($ext == 'png') {
            imagepng($img2);
        } elseif ($ext == 'bmp') {
            imagewbmp($img2);
        }
        imagedestroy($img2);
        ?>

    Read the article

  • Trouble creating stored procedure

    - by MatW
    I'm messing around with stored procedures for the first time, but can't even create a simple select! I'm using phpMyAdmin and this is my SQL:

        DELIMITER //
        CREATE PROCEDURE test_select()
        BEGIN
          SELECT * FROM products LIMIT 10;
        END //
        DELIMITER ;

    After submitting that, my localhost does some thinking for a loooong time and eventually loads a page with no content called /phpmyadmin/import.php. After reloading phpMyAdmin and trying to invoke the procedure:

        CALL test_select();

    I get a "PROCEDURE doesn't exist" error. Any ideas?
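    A frequent cause of exactly this symptom is that DELIMITER is a client command, not SQL, and phpMyAdmin does not reliably parse it inside the query box. A sketch of the usual workaround: put // in phpMyAdmin's separate "Delimiter" field below the SQL box (assuming your version shows one) and submit only the statement itself:

        CREATE PROCEDURE test_select()
        BEGIN
          SELECT * FROM products LIMIT 10;
        END

    If that succeeds, SHOW PROCEDURE STATUS WHERE Db = DATABASE(); should list test_select before you call it.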

    Read the article

  • How do you create a transaction that spans multiple statements in Python with MySQLdb?

    - by Fast Fish
    I know that with an InnoDB table statements are autocommitted, but I understand that to apply only to a single statement? For example, I want to check if a user exists in a table, and then if it doesn't, create it. However, therein lies a race condition. I believe starting a transaction prior to doing the SELECT will ensure that the table remains untouched until the subsequent INSERT, when the transaction is committed. How can you do this with MySQLdb and Python?
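    For reference, this is the SQL such a check-then-insert has to issue; a plain SELECT inside a transaction does not block other sessions, so a locking read is required. A sketch with a hypothetical users table that has a unique index on name (from MySQLdb the START TRANSACTION is implicit, because it turns autocommit off, and conn.commit() issues the COMMIT):

        START TRANSACTION;
        SELECT id FROM users WHERE name = 'alice' FOR UPDATE;  -- locks the row/gap
        -- only if the SELECT returned no row:
        INSERT INTO users (name) VALUES ('alice');
        COMMIT;

    With the unique index in place, INSERT IGNORE or INSERT ... ON DUPLICATE KEY UPDATE achieves the same end in a single statement, without the explicit lock.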

    Read the article

  • Which field is explain telling me to index?

    - by shady
    I don't understand what this EXPLAIN statement is saying. Which field needs an index? The first line is confusing to me because ref is null. Here's the query I'm using:

        SELECT pp.property_id AS 'good_prop_id',
               pr.site_number AS 'pr.site_number',
               CONCAT(pr.site_street_name, ' ', pr.site_street_type) AS 'pr.partial_addr',
               pr.county
        FROM realval_newdb.preforeclosures AS pr
        INNER JOIN realval_newdb.properties_preforeclosures AS pp USE INDEX (mee_id)
            ON (pr.mee_id = pp.mee_id)
        INNER JOIN listings_copy AS lc
            ON (pr.site_number = lc.site_number)
            AND (lc.site_street_name = CONCAT(pr.site_street_name, ' ', pr.site_street_type))
        WHERE lc.site_county = pr.county
        LIMIT 1;

    Can anyone help me optimize this query?
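    A NULL ref (with no usable key on the same row) generally points at a join whose columns have no index; here the likeliest candidate is listings_copy, which is joined and filtered on site_number, site_street_name and site_county. A sketch of a composite index covering those columns (the index name is made up):

        ALTER TABLE listings_copy
          ADD INDEX idx_site (site_number, site_street_name, site_county);

    Note that the CONCAT() is computed on the preforeclosures side, which is fine; an equivalent expression on the listings_copy side would defeat the index, so keeping the bare lc columns in the join is what makes it usable.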

    Read the article

  • Incorrect string encodings

    - by James
    Note: I have read all of the related PHP, UTF-8, character encoding articles that are usually suggested, but my question relates to data inserted before I applied such techniques. I am wishing to retrospectively fix all character encoding problems. Now all connections are set as utf8 using PDO:

        PDO::MYSQL_ATTR_INIT_COMMAND => 'SET NAMES utf8'

    Unfortunately, a large amount of data was inserted that is of questionable encoding before I had implemented correct character encoding practices. As displayed by:

        $sql = "SELECT name FROM data LIMIT 3";
        foreach ($pdo->query($sql) as $row) {
            $name = $row['name'];
            echo $name . "\n";
            echo utf8_encode($name) . "\n";
            echo utf8_decode($name) . "\n";
            echo htmlspecialchars($name, ENT_QUOTES, 'UTF-8') . "\n";
            echo htmlspecialchars(utf8_encode($name), ENT_QUOTES, 'UTF-8') . "\n";
            echo htmlspecialchars(utf8_decode($name), ENT_QUOTES, 'UTF-8') . "\n";
            echo '<hr/>';
        }

    Which produces:

        Antonín Dvořák AntonÃÆín DvoÃâ¦Ãâ¢ÃÆák Anton??­n Dvo??????¡k Antonín Dvořák AntonÃÆín DvoÃâ¦Ãâ¢ÃÆák ----------
        Ô±Ö€Õ¡Õ´ Ô½Õ¡Õ¹Õ¡Õ¿Ö€ÕµÕ¡Õ¶ ñÃâ¬Ã¡Ã´ ýáùáÿÃâ¬ÃµÃ¡Ã¶ ????? ?????????? Ô±Ö€Õ¡Õ´ Ô½Õ¡Õ¹Õ¡Õ¿Ö€ÕµÕ¡Õ¶ ñÃâ¬Ã¡Ã´ ýáùáÿÃâ¬ÃµÃ¡Ã¶ ----------
        Tiësto Tiësto Tiësto Tiësto Tiësto Tiësto ----------

    When removing 'SET NAMES utf8' with PDO it produces the data:

        Antonín DvoÅák Antonín DvoÃÂák Antonín Dvorák Antonín DvoÅák Antonín DvoÃÂák Antonín Dvorák ----------
        ???? ????????? Ô±ÖÕ¡Õ´ Ô½Õ¡Õ¹Õ¡Õ¿ÖÕµÕ¡Õ¶ ???? ????????? ???? ????????? Ô±ÖÕ¡Õ´ Ô½Õ¡Õ¹Õ¡Õ¿ÖÕµÕ¡Õ¶ ???? ????????? ----------
        Tiësto Tiësto Ti?sto Tiësto Tiësto ----------

    And here is a dump of the database rows concerned:

        DROP TABLE IF EXISTS `data`;
        CREATE TABLE IF NOT EXISTS `data` (
          `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
          `name` varchar(80) NOT NULL,
          PRIMARY KEY (`id`),
          KEY `name` (`name`(10)),
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=0;

        INSERT INTO `data` (`id`, `name`) VALUES
        (0, 'Antonín Dvořák'),
        (1, 'Ô±Ö€Õ¡Õ´ Ô½Õ¡Õ¹Õ¡Õ¿Ö€ÕµÕ¡Õ¶'),
        (2, 'Tiësto');

    The 3rd and 6th lines of the 3rd row "Tiësto" are then correctly echoed. I'm just unsure what is the best way to correct encodings/detect the encodings of bad strings and correct, etc.
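    For the common case shown here (UTF-8 text that went through a latin1 connection and was encoded a second time), the repair can be done in SQL by round-tripping through latin1. A sketch; this assumes the affected rows really are double-encoded UTF-8, so test it on a copy of the table first:

        -- Undo one layer of double encoding in place:
        UPDATE data
        SET name = CONVERT(CAST(CONVERT(name USING latin1) AS BINARY) USING utf8);

    Rows that were inserted correctly will be mangled by a second pass, so restrict the update to the rows that display badly, e.g. with a WHERE name LIKE '%Ã%' style filter chosen to match the observed damage.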

    Read the article

  • How to build a SQL statement when any combination of user input to the table is possible?

    - by Greg McNulty
    Example: the user fills in everything but the product name. I need to search on what is supplied, so in this case everything but productName=. This example could be for any combination of input. Is there a way to do this? Thanks.

        $name = $_POST['n'];
        $cat = $_POST['c'];
        $price = $_POST['p'];

        if ( !($name) ) {
            $name = // some character to select all?
        }

        $sql = "SELECT * FROM products
                WHERE productCategory='$cat' and productName='$name' and productPrice='$price'";

    EDIT: Solution does not have to protect from attacks. Specifically looking at the dynamic part of it.
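    Rather than hunting for a match-all placeholder value, one option is to make each filter optional inside the SQL itself, so empty inputs simply switch their clause off. A sketch using MySQL user variables to stand in for the submitted values (NULL meaning "not supplied"):

        SET @name := NULL, @cat := 'books', @price := NULL;  -- example inputs
        SELECT *
        FROM products
        WHERE (@cat   IS NULL OR productCategory = @cat)
          AND (@name  IS NULL OR productName     = @name)
          AND (@price IS NULL OR productPrice    = @price);

    The same pattern works with bound parameters from PHP (bind each value twice), or the WHERE clause can be assembled from only the non-empty inputs.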

    Read the article

  • How to do an additional search on archive in rails if record not found, by extending model?

    - by Nick Gorbikoff
    Hello, I was wondering if somebody knows an elegant solution to the following: suppose I have a table that holds orders, with a bunch of data. I'm at 1M records, and searches are beginning to take time. So I want to speed things up by archiving data that is more than 3 years old - saving it into a table called orders-archive, and then purging those rows from the orders table. If we need to research something or a customer wants to pull older information, they still can, but 99% of the lookups are done on orders no older than a year and a half, so there is no reason to keep looking through older data all the time. These move & purge operations can then be cronned to run on a weekly basis. I have already done some tests and I know that I will cut my search times by about 4 times. So far so good, right? However, I was thinking about how to implement older archival lookups, and the only reasonable thing I can think of is some sort of if-else: if not found in orders, do a search in orders-archive. However, I have about 20 tables that I want to archive and god knows how many searches/finds are done throughout the code, which I don't want to modify. So I was wondering if there is an elegant Rails-way solution to this problem, by extending a model somehow? Has anyone dealt with a similar case before? Thank you.
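    On the database side, one way to get the if-else for free is a view that unions the live and archive tables; a model pointed at the view sees both sets of rows without any per-query changes. A sketch, assuming the two tables keep identical schemas (and a made-up view name):

        CREATE VIEW orders_with_archive AS
          SELECT * FROM orders
          UNION ALL
          SELECT * FROM orders_archive;

    The hot-path code keeps reading the small orders table, while the rare research lookups go through orders_with_archive (in Rails, e.g. a second model with self.table_name set to the view). The same pattern repeats mechanically for each of the 20 tables.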

    Read the article

  • Connection Pool Strategy: Good, Bad or Ugly?

    - by Drew
    I'm in charge of developing and maintaining a group of Web Applications that are centered around similar data. The architecture I decided on at the time was that each application would have its own database and web-root application. Each application maintains a connection pool to its own database and to a central database for shared data (logins, etc.). A co-worker has been positing that having so many different connection pools will not scale, and that we should refactor the database so that all of the different applications use a single central database, with any modifications that may be unique to a system reflected from that one database, and then use a single pool powered by Tomcat. He has posited that a lot of "meta data" goes back and forth across the network to maintain a connection pool. My understanding is that, with proper tuning to use only as many connections as necessary across the different pools (low volume apps getting fewer connections, high volume apps getting more, etc.), the number of pools doesn't matter compared to the number of connections; more formally, the difference in overhead required to maintain 3 pools of 10 connections is negligible compared to 1 pool of 30 connections. The reasoning behind initially breaking the systems into a one-app-one-database design was that there are likely to be differences between the apps and that each system could modify its schema as needed. Similarly, it eliminated the possibility of system data bleeding through to other apps. Unfortunately there is not strong leadership in the company to make a hard decision. Although my co-worker is backing up his worries only with vagueness, I want to make sure I understand the ramifications of multiple small databases/connections versus one large database/connection pool.

    Read the article

  • GROUP BY a date, with ordering by date.

    - by standard
    Take this simple query:

        SELECT DATE_FORMAT(someDate, '%y-%m-%d') AS formattedDay
        FROM someTable
        GROUP BY formattedDay

    This will select rows from a table with only 1 row per date. How do I ensure that the row selected per date is the earliest for that date, without doing an ordered subquery in the FROM? Cheers
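    One approach that avoids ordering anything: take the minimum timestamp per day in a grouped (not ordered) derived table, then join back for the full rows. A sketch against the same someTable:

        SELECT t.*
        FROM someTable t
        JOIN (SELECT MIN(someDate) AS firstOfDay
              FROM someTable
              GROUP BY DATE(someDate)) m
          ON t.someDate = m.firstOfDay;

    An index on someDate keeps both the grouping and the join-back cheap; note that two rows sharing an identical earliest timestamp would both be returned.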

    Read the article

  • Mysqli prepared insert statements always returning false

    - by user1754679
    I'm writing prepared statements that are supposed to insert data into a table, on a database that's been pre-selected in the variable $GLOBALS['mysqli']. The connection has been tested, and that's not the problem I'm having. I'm only running into trouble whenever my prepared statement involves INSERT INTO. I know the tablename and field names are correct, but $stmt is ALWAYS false. What gives?

        $stmt = $GLOBALS['mysqli']->prepare("INSERT INTO audit_RefreshCount (user, count, lastrefresh) values (?,?,?)");
        if ($stmt == TRUE) {
            $stmt->bindParam('ssi', $_SESSION['username'], '0', time());
            //$stmt->bind_Param('ssi', $_SESSION['username'], '0', time()); // Also doesn't work.
            $stmt->execute();
        }

    Read the article

  • Google App-Engine Java Batch Update

    - by Manjoor
    I need to upload a .csv file and save the records in Bigtable. My application successfully parses 200 records from the csv file and saves them to the table. Here is my code to save the data:

        for (int i = 0; i < lines.length - 1; i++) { // lines holds all records in the csv file
            String line = lines[i];
            // The record has 3 columns: integer, integer, Text
            if (line.length() > 15) {
                int n = line.indexOf(",");
                if (n > 0) {
                    int ID = Integer.parseInt(line.substring(0, n));
                    int n1 = line.indexOf(",", n + 2);
                    if (n1 > n) {
                        int Col1 = Integer.parseInt(line.substring(n + 1, n1));
                        String Col2 = line.substring(n1 + 1);
                        myTable uu = new myTable();
                        uu.setId(ID);
                        uu.setCol1(Col1);
                        Text t = new Text(Col2);
                        uu.setCol2(t);
                        PersistenceManager pm = PMF.get().getPersistenceManager();
                        pm.makePersistent(uu);
                        pm.close();
                    }
                }
            }
        }

    But when the number of records grows it gives a timeout error. The csv file may have up to 800 records. Is it possible to do this in App Engine? (Something like a batch update.)

    Read the article

  • Want to calculate the sum of the count rendered by group by option..

    - by Vijay
    I have a table with columns such as id, tid, companyid, ttype, etc. The id may be the same for many companyids but is unique within a companyid, and tid is always unique. I want to calculate the total number of transactions entered in the table; a single transaction may be inserted in more than one row. For example:

        id  tid  companyid  ttype
        1   1    1          xxx
        1   2    1          may be null
        2   3    1          yyy
        2   4    1          may be null
        2   5    1          may be null

    The above entries should be counted as only 2 transactions, and this may repeat for many companyids. So how do I calculate the total number of transactions entered in the table? I tried

        select sum(count(*)) from transaction group by id, companyid;

    but that doesn't work, and

        select count(*) from transaction group by id;

    won't work because the id may be repeated for different companyids.
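    Since a transaction is identified by the (companyid, id) pair, MySQL's multi-argument DISTINCT count fits directly; a sketch against the table as described:

        SELECT COUNT(DISTINCT companyid, id) AS total_transactions
        FROM transaction;

    For the example rows above this returns 2. The portable equivalent is to count the groups: SELECT COUNT(*) FROM (SELECT 1 FROM transaction GROUP BY companyid, id) x.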

    Read the article
