Search Results

Search found 6311 results on 253 pages for 'limit clause'.

Page 181/253 | < Previous Page | 177 178 179 180 181 182 183 184 185 186 187 188  | Next Page >

  • Code sample with buffer overflow (gets method). Why does it not behave as expected?

    - by citronas
    This is an extract from a C program that should demonstrate a buffer overflow:

        void foo() {
            char arr[8];
            printf(" enter bla bla bla");
            gets(arr);
            printf(" you entered %s\n", arr);
        }

    The question was "How many input chars can a user enter at most without creating a buffer overflow?" My initial answer was 8, because the char array is 8 bytes long. Although I was pretty certain my answer was correct, I tried a higher number of chars and found that the limit of chars I can enter before I get a segmentation fault is 11. (I'm running this on a VirtualBox Ubuntu.) So my question is: why is it possible to enter 11 chars into that 8-byte array?

    Read the article

  • Adding a Third Table to a Two-Table Join Query

    - by John
    Hello, the query below works just fine. It pulls fields from two MySQL tables, "comment" and "login". It does this for rows where "username" in the table "login" equals the variable "$profile". It also pulls fields for rows where "loginid" in the table "comment" equals the "loginid" that is also being pulled from "login". I would like to pull data from a third table called "submission", which has the following fields: submissionid, loginid, title, url, displayurl, datesubmitted. I would like to pull fields from rows in "submission" where "loginid" equals the "loginid" that is already being pulled from the other two tables, "login" and "comment". How can I do this? Thanks in advance, John. Query:

        $sqlStrc = "SELECT l.username, l.loginid, c.loginid, c.commentid,
                           c.submissionid, c.comment, c.datecommented
                    FROM comment AS c
                    INNER JOIN login AS l ON c.loginid = l.loginid
                    WHERE l.username = '$profile'
                    ORDER BY c.datecommented DESC
                    LIMIT 10";
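
    A minimal sketch of one way to add the third table, assuming submission.loginid matches login.loginid the same way the other two tables do (untested against the real schema):

        SELECT l.username, l.loginid, c.commentid, c.comment, c.datecommented,
               s.submissionid, s.title, s.url, s.displayurl, s.datesubmitted
        FROM comment AS c
        INNER JOIN login AS l ON c.loginid = l.loginid
        INNER JOIN submission AS s ON s.loginid = l.loginid
        WHERE l.username = '$profile'
        ORDER BY c.datecommented DESC
        LIMIT 10

    Note that joining both comment and submission on the same loginid yields one row per comment/submission combination for that user; if that multiplication is unwanted, the submissions are better fetched with a second query or a subquery.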

    Read the article

  • Problem in decreasing a page's queries

    - by Mac Taylor
    Hey guys, I have a tag table in my PHP/MySQL project that looks like this:

        Table name: bt_tags
        Table fields: tid, tag

    and for every story row there is a field named "tags":

        Table name: stories
        Table field: tags

    The tag ids are saved in that field separated by spaces, e.g. "1 5 6". Now the problem: when using a while loop to fetch all rows from the story table, the page uses one query to show every story's details, but to show the tag names I have to query another table to look up the names, because only the ids are stored in the story table. Right now I use a for loop inside the while loop to show the tag names, but I'm sure there is a better way to decrease the page's queries. How can I improve this script and show the tag names without using a for loop?

        $result = $db->sql_query("SELECT * FROM ".STORY_TABLE." ");
        while ($row = $db->sql_fetchrow($result)) {
            // fetching other $vars
            $tags_id = explode(" ", $row['tags']);
            $c = count($tags_id);
            for ($i = 1; $i < $c - 1; $i++) {
                list($tag_name, $slug) = $db->sql_fetchrow($db->sql_query(
                    'SELECT `tag`,`slug` FROM `bt_tags` WHERE `tid` = "'.$tags_id[$i].'" LIMIT 1'
                ));
                $show_tags .= $tag_name . ',';  // was: $sow_tags = '$tag_name,'; (typo, and single quotes don't interpolate)
            }
        }
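
    A sketch of one way to get this down to a single extra query for the whole page, assuming the same $db API as above (collect every tag id first, then fetch them all with one IN() query and render from the resulting map):

        $result  = $db->sql_query("SELECT * FROM ".STORY_TABLE." ");
        $stories = array();
        $all_ids = array();
        while ($row = $db->sql_fetchrow($result)) {
            $stories[] = $row;
            foreach (explode(" ", trim($row['tags'])) as $tid) {
                if ($tid !== '') { $all_ids[(int)$tid] = true; }
            }
        }
        $tag_map = array();
        if (!empty($all_ids)) {
            $tags = $db->sql_query("SELECT tid, tag, slug FROM bt_tags
                                    WHERE tid IN (".implode(",", array_keys($all_ids)).")");
            while ($t = $db->sql_fetchrow($tags)) {
                $tag_map[$t['tid']] = $t;   // look tags up by id while rendering each story
            }
        }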

    Read the article

  • MySQL command-line tool: How to find out number of rows affected by a DELETE?

    - by ambivalence
    I'm trying to run a script that deletes a bunch of rows in a MySQL (InnoDB) table in batches, by executing the following in a loop:

        mysql --user=MyUser --password=MyPassword MyDatabase < SQL_FILE

    where SQL_FILE contains a DELETE FROM ... LIMIT X command. I need to keep running this loop until there are no more matching rows. But unlike running in the mysql shell, the above command does not return the number of rows affected. I've tried -v and -t but neither works. How can I find out how many rows the batch script affected? Thanks!
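
    One approach is to ask the server itself within the same session: MySQL's ROW_COUNT() function returns the number of rows changed by the preceding statement. A sketch (the DELETE here is illustrative):

        -- contents of SQL_FILE
        DELETE FROM MyTable WHERE <condition> LIMIT 1000;
        SELECT ROW_COUNT();

    Running the same mysql command with --skip-column-names added then prints just the count on stdout, and the loop can stop once it reads 0.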

    Read the article

  • Best place to store large amounts of session data

    - by audiopleb
    I'm building an application that needs to store and re-use large amounts of data per session. For example, the user selects a large list of list items (say 2000 or significantly more) which have a numeric value as their key, saves that selection, goes off to another page, does something else, and then comes back to the original page, which needs to load their selections again. What is the quickest and most efficient way of storing and reusing that data? In a text file saved under the session id? In a temp db table? In the session data itself (db-backed sessions, so size isn't a limit), using a serialised string, or using gzcompress or gzencode? Any advice or insight would be great! Thank you!!!!
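
    For reference, a minimal sketch of the in-session option mentioned above (serialize and gzcompress are both core PHP; the compression level 6 is an arbitrary choice):

        session_start();
        // store: a large array of numeric keys, serialised then compressed
        $_SESSION['selection_blob'] = gzcompress(serialize($selection), 6);

        // later, on returning to the page:
        $selection = unserialize(gzuncompress($_SESSION['selection_blob']));

    Since the keys are numeric, imploding them into a plain comma-separated string before compressing shrinks the blob further, at the cost of rebuilding the array on read.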

    Read the article

  • Walking through an SQLite Table

    - by galford13x
    I would like to implement or use functionality that allows stepping through a table in SQLite. If I have a table Products that has 100k rows, I would like to retrieve perhaps 10k rows at a time, something similar to how a webpage would list data and have a < Previous .. Next > link to walk through it. Are there SELECT statements that can make this simple? I see and have tried using the ROWID in conjunction with LIMIT, which seems OK if not ordering the data.

        -- This seems to work if not ordering.
        SELECT * FROM Products WHERE ROWID BETWEEN x AND y;
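
    A sketch of the two usual patterns when the pages should follow an explicit ordering (the column name is illustrative):

        -- offset paging: page n (0-based) of 10000 rows
        SELECT * FROM Products ORDER BY Name LIMIT 10000 OFFSET 20000;  -- page 2

        -- keyset paging: remember the last key shown on the previous page;
        -- stays fast at large offsets because SQLite seeks instead of skipping rows
        SELECT * FROM Products WHERE Name > :last_seen_name ORDER BY Name LIMIT 10000;

    The keyset variant is essentially the ROWID/BETWEEN idea above, generalised to whatever column the pages are ordered by.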

    Read the article

  • Returning more than 1000 rows in classic asp adodb.recordset

    - by peg_leg
    My code in classic ASP, doing an MSSQL database query:

        rs.pagesize = 1000   ' this should enable paging
        rs.maxrecords = 0    ' 0 = unlimited maxrecords
        response.write "hello world 1<br>"
        rs.open strSql, conn
        response.write "hello world 2<br>"

    My output when there are fewer than 1000 rows returned is good. With more than 1000 rows I don't get the "hello world 2". I thought that setting pagesize sets up paging and thus allows all rows to be returned regardless of how many there are, and that without setting pagesize, paging is not enabled and the limit is 1000 rows. However my page is acting as if pagesize is not working at all. Please advise.

    Read the article

  • How do you automatically refresh part of a page using JavaScript or AJAX?

    - by Ryan
        $messages = $db->query("SELECT * FROM chatmessages
                                ORDER BY datetime DESC, displayorderid DESC LIMIT 0,10");
        while ($message = $db->fetch_array($messages)) {
            $oldmessages[] = $message['message'];
        }
        $oldmessages = array_reverse($oldmessages);
        ?>
        <div id="chat">
        <?php
        for ($count = 0; $count < 9; $count++) {
            echo $oldmessages[$count];
        }
        ?>
        <script language="javascript" type="text/javascript">
        <!--
        setInterval(
            "document.getElementById('chat').innerHTML='<NEW CONTENT OF #CHAT>'",
            1000
        );
        -->
        </script>
        </div>

    I'm trying to create a PHP chatroom script but I'm having a lot of trouble getting it to auto-refresh. The content should automatically update to show new messages; how do you make it do that? I've been searching for almost an hour.
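
    One common pattern is to move the message query into its own endpoint and have the timer fetch that URL instead of a hard-coded string. A sketch of the server side, assuming the same $db API (the file name chat_messages.php is made up):

        <?php
        // chat_messages.php - prints the latest messages as an HTML fragment
        $messages = $db->query("SELECT * FROM chatmessages
                                ORDER BY datetime DESC, displayorderid DESC LIMIT 0,10");
        $old = array();
        while ($m = $db->fetch_array($messages)) {
            $old[] = htmlspecialchars($m['message']);
        }
        echo implode('<br>', array_reverse($old));

    The setInterval callback would then request this file (XMLHttpRequest, or jQuery's $('#chat').load('chat_messages.php')) and assign the response to the #chat element, rather than embedding the markup in the interval string.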

    Read the article

  • Qt4, showing elements paginated

    - by matiit
    I am going to write an application that uses Qt4 (with C++ or Python; it isn't important at the moment). One piece of functionality is "showing all items in the database". One item has a title, author, description and photo (constant size), and there could be very many items, let's say 400. There won't be enough space to show them all at once. One row will take 200px, so I need at most 4 at a time. How do I paginate them? I have no idea. I can use LIMIT and OFFSET in SQL queries, but how do I tell the window "that's the 5th page"? Any solutions?
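
    The usual arithmetic is OFFSET = (page - 1) * per_page, so the window only needs to track the current page number. A sketch with the 4-rows-per-page figure from the question (table and column names are made up):

        -- page 5, 4 items per page: OFFSET = (5 - 1) * 4 = 16
        SELECT title, author, description, photo
        FROM items
        ORDER BY title
        LIMIT 4 OFFSET 16;

    A "Next" button then just increments the page counter and re-runs the query; the last page is reached when fewer than 4 rows come back.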

    Read the article

  • Migrating from MSSQL to Firebird: pros and cons

    - by user193655
    I am considering the migration for four reasons:

    1) SQL Server installation is a nightmare, especially for 1-user software. The software installs in 10 seconds, SQL Server in 1 hour. Firebird installation is much easier.
    2) SQL Server runs on Windows Server only.
    3) My customers all have the Express edition.
    4) I am not using any advanced features. I am now starting to use FILESTREAM, but the main reason for this is that the Express edition has a 4/10 GB database size limit.

    So these are all pros of moving to Firebird. What are the cons? I could also plan to support both platforms, but I fear this would backfire.

    Read the article

  • How to query range of data in DB2 with highest performance?

    - by Fuangwith S.
    Usually I need to retrieve data from a table in some range, for example a separate page for each search result. In MySQL I would use the LIMIT keyword, but I don't know the DB2 equivalent. Right now I use this query to retrieve a range of data:

        SELECT *
        FROM (
            SELECT SMALLINT(RANK() OVER(ORDER BY NAME DESC)) AS RUNNING_NO,
                   DATA_KEY_VALUE,
                   SHOW_PRIORITY
            FROM EMPLOYEE
            WHERE NAME LIKE 'DEL%'
            ORDER BY NAME DESC
            FETCH FIRST 20 ROWS ONLY
        ) AS TMP
        ORDER BY TMP.RUNNING_NO ASC
        FETCH FIRST 10 ROWS ONLY

    but I know it's bad style. So how should the query be written for the best performance?
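
    The common DB2 idiom is to number the rows once with ROW_NUMBER() and filter on the range directly, which makes the inner ORDER BY / FETCH FIRST pair unnecessary. A sketch against the same table, returning rows 11-20:

        SELECT *
        FROM (
            SELECT ROW_NUMBER() OVER(ORDER BY NAME DESC) AS RN,
                   DATA_KEY_VALUE,
                   SHOW_PRIORITY
            FROM EMPLOYEE
            WHERE NAME LIKE 'DEL%'
        ) AS TMP
        WHERE RN BETWEEN 11 AND 20
        ORDER BY RN

    An index on NAME helps DB2 satisfy the ORDER BY without a full sort.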

    Read the article

  • PHP preg_match: a pattern which satisfies all MySQL field names (including 'table.field' formations)

    - by gsquare567
    I need a pattern which matches MySQL field names, but also with the option of having a table name before them. Examples:

        mytable.myfield
        myfield
        my4732894__7289FiEld

    Here's what I tried:

        $pattern = "/^[a-zA-Z0-9_]*?[\.[a-zA-Z0-9_]]?$/";

    This worked for what I needed before, which was just the field name:

        $pattern = "/^[a-zA-Z0-9_]*$/";

    Any ideas why my addition isn't working? Maybe I'm making up regex, so I'll explain what I added... The first '?' is to say that it isn't greedy, i.e. it will stop if the next part, namely "[.[a-zA-Z0-9_]]?", is satisfied. Now, that second part is just the same as the first except it is optional (hence the '?' at the end) and it starts with a period (hence the '[.' and ']' wrapping my old clause). And obviously, the "^" and "$" represent the beginning and end of the string. So... any ideas? (Also, I'm a tad confused as to why I need to put in those "/"s at the beginning/end anyway, so if you could tell me why that's required, that'd be awesome.) Thanks a lot! (And thanks for reading this all, if you actually did... it's quite a ramble.)
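
    A sketch of one pattern that accepts all three examples: square brackets form a character class, not a grouping construct, so they can't be nested the way the attempt above nests them; an optional dotted part needs a (...)? group instead. (The "/"s are PCRE delimiters marking where the pattern starts and ends, which is why preg_match insists on them.)

        $pattern = '/^[A-Za-z0-9_]+(\.[A-Za-z0-9_]+)?$/';
        var_dump(preg_match($pattern, 'mytable.myfield'));      // int(1)
        var_dump(preg_match($pattern, 'my4732894__7289FiEld')); // int(1)
        var_dump(preg_match($pattern, 'a.b.c'));                // int(0) - only one dot allowed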

    Read the article

  • How to find/display the Upload File limits on IIS with ASP.NET?

    - by NVRAM
    I have a web service on which end users will upload ZIP archives that can be very large (one test file is over 200MB). I'd like to handle oversized files proactively and handle size-limited upload failures gracefully. The web app will be deployed on customers' machines, so I cannot easily ensure that the configuration matches any fixed size. I've documented how to use the appcmd command for them to set the requestLimits.maxAllowedContentLength value beyond the 30MB default. But I'd like to handle it in the web app; I'm hoping for two things:

    1. To show the current limit on the page where they initiate the file upload, something along the lines of: "Each file upload is limited to 15MB. If your archive is larger, (etc., etc., etc.)"
    2. To give a meaningful error when that size is exceeded. Currently, it takes a long time for the data to be sent, and then I see a misleading 404 page.

    Any thoughts?
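
    For reference, the value the appcmd command writes lives under request filtering; a sketch of the equivalent web.config fragment (the 209715200 figure is just an example, 200 MB in bytes):

        <system.webServer>
          <security>
            <requestFiltering>
              <requestLimits maxAllowedContentLength="209715200" />
            </requestFiltering>
          </security>
        </system.webServer>

    Reading that same attribute back at runtime is what would let the upload page print the current limit instead of a hard-coded figure.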

    Read the article

  • Which type of Rails model association should I use in this situation?

    - by jstayton
    I have two models/tables in my Rails application: discussions and comments. Each discussion has_many comments, and each comment belongs_to a discussion. My discussions table also includes a first_comment_id column and a last_comment_id column for convenience and speed. I want to be able to call discussion.last_comment for the last comment model, but the following (in my discussion model) isn't working to make this happen:

        has_one :first_comment, :class_name => "Comment"
        has_one :last_comment, :class_name => "Comment"

    When I call discussion.last_comment, the following SQL is run:

        SELECT * FROM `comments` WHERE (`comments`.discussion_id = 1) LIMIT 1

    It's using the discussions.id column to join against comments.discussion_id, when I want it to join discussions.last_comment_id against comments.id. Am I using the wrong type of association here? Thanks for your help!

    Read the article

  • Subquery making query slow.

    - by Muhammad Kashif Nadeem
    Please copy and paste the following script:

        DECLARE @MainTable TABLE(MainTablePkId int)
        INSERT INTO @MainTable SELECT 1
        INSERT INTO @MainTable SELECT 2

        DECLARE @SomeTable TABLE(SomeIdPk int, MainTablePkId int, ViewedTime1 datetime)
        INSERT INTO @SomeTable SELECT 1, 1, DATEADD(dd, -10, getdate())
        INSERT INTO @SomeTable SELECT 2, 1, DATEADD(dd, -9, getdate())
        INSERT INTO @SomeTable SELECT 3, 2, DATEADD(dd, -6, getdate())

        DECLARE @SomeTableDetail TABLE(DetailIdPk int, SomeIdPk int, Viewed INT, ViewedTimeDetail datetime)
        INSERT INTO @SomeTableDetail SELECT 1, 1, 1, DATEADD(dd, -7, getdate())
        INSERT INTO @SomeTableDetail SELECT 2, 2, NULL, DATEADD(dd, -6, getdate())
        INSERT INTO @SomeTableDetail SELECT 3, 2, 2, DATEADD(dd, -8, getdate())
        INSERT INTO @SomeTableDetail SELECT 4, 3, 1, DATEADD(dd, -6, getdate())

        SELECT m.MainTablePkId,
               (SELECT COUNT(Viewed) FROM @SomeTableDetail),
               (SELECT TOP 1 s2.ViewedTimeDetail
                FROM @SomeTableDetail s2
                INNER JOIN @SomeTable s1 ON s2.SomeIdPk = s1.SomeIdPk
                WHERE s1.MainTablePkId = m.MainTablePkId)
        FROM @MainTable m

    The script above is just a sample to explain my question. In the real query I have a long list of columns in the SELECT and around 12+ columns in the subquery, and my FROM clause joins around 8 tables. Fetching 2000 records takes 21 seconds with the full query, but only 4 seconds if I remove the subqueries. I have tried to optimize the query using the Database Engine Tuning Advisor and added the advised indexes and statistics, but those changes made the query time even worse. Note: as mentioned, this is test data; the real data has lots of tables, joins and columns, but without the subquery the results are fine. Any help, thanks.
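
    One way to avoid running the correlated subquery once per outer row is to pre-aggregate it and join the result in. A sketch against the sample tables (MAX stands in for the TOP 1, which has no ORDER BY and so returns an arbitrary row anyway):

        SELECT m.MainTablePkId,
               t.TotalViewed,
               agg.LastViewed
        FROM @MainTable m
        CROSS JOIN (SELECT COUNT(Viewed) AS TotalViewed FROM @SomeTableDetail) t
        LEFT JOIN (
            SELECT s1.MainTablePkId, MAX(s2.ViewedTimeDetail) AS LastViewed
            FROM @SomeTableDetail s2
            INNER JOIN @SomeTable s1 ON s2.SomeIdPk = s1.SomeIdPk
            GROUP BY s1.MainTablePkId
        ) agg ON agg.MainTablePkId = m.MainTablePkId

    The detail tables are then scanned once for the whole result set instead of once per row of @MainTable.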

    Read the article

  • Efficient way to combine results of two database queries.

    - by ensnare
    I have two tables on different servers, and I'd like some help finding an efficient way to combine and match the datasets. Here's an example. From server 1, which holds our stories, I perform a query like:

        query = """SELECT author_id, title, text
                   FROM stories
                   ORDER BY timestamp_created DESC
                   LIMIT 10
                """
        results = DB.getAll(query)
        for i in range(len(results)):
            # Build a string of author_ids, e.g. '1314,4134,2624,2342'

    Then I'd like to fetch some info about each author_id from server 2:

        query = """SELECT id, avatar_url
                   FROM members
                   WHERE id IN (%s)
                """
        values = (uid_list)
        results = DB.getAll(query, values)

    Now I need some way to combine these two result sets so I have a dict that has the story as well as the avatar_url and member_id. If this data were on one server, it would be a simple join:

        SELECT * FROM members, stories WHERE members.id = stories.author_id

    But since we store the data on multiple servers, this is not possible. What is the most efficient way to do this? Thanks.

    Read the article

  • Send data to webserver from C#, what's the most efficient way?

    - by Brian
    I am sending GPS coordinates from a Windows Mobile phone to a webserver using a basic program I wrote in C#. The problem is that the phone's data plan only allows 4 MB per month, and I was planning on updating the location every 10 seconds. Currently I just create a web request every 10 seconds to a PHP page on the server, with the coordinates passed in the URL, and the PHP page saves them to the database. This generates about 1 KB of data per request; at this rate I will hit my data limit in less than a day. Is there a more efficient way to do this?

    Read the article

  • Do bit operations cause programs to run slower?

    - by flashnik
    I'm dealing with a problem which needs to work with a lot of data. Currently its values are represented as unsigned int. I know that the real values do not exceed a limit of 1000 (so 10 bits would suffice, since 2^10 = 1024). Questions:

    1. I can use unsigned short to store it. An upside to this is that it'll use less storage space. Will performance suffer?
    2. If I decide to store the data as short but all the calling functions use int, I need to convert between these datatypes when storing or extracting values. Will performance suffer, and will the loss in performance be dramatic?
    3. If I decide to not use short but instead pack 10 bits per value into an array of unsigned int, what will happen in this case compared with the previous ones?

    Read the article

  • Retrieve multiple values from single-dimension value/key JSON

    - by jonnypixel
    I'm busting my head trying to work this out. Given this JSON:

        "ContentBlock1":["2","22"]

    I have been trying to get the 2 and the 22 into a comma-separated string so I can use it within a MySQL IN(2,22) query. I have tried several ways but none seem to work for me.

        $cid = json_decode($ContentBlock, true);  // $ContentBlock holds my JSON data
        foreach ($cid as $key) {
            $jsoncid = "$key ,";
        }

    And then:

        SELECT * FROM content WHERE featured=1 AND state=1
        AND catid IN($jsoncid) ORDER BY ordering ASC LIMIT 4
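
    A sketch of one way to build the list, assuming the decoded JSON is an array under the "ContentBlock1" key as shown above. (The loop above only keeps the last value, since it assigns instead of appending; implode avoids the trailing-comma problem entirely, and intval keeps anything unexpected out of the SQL.)

        $cid = json_decode($ContentBlock, true);
        $ids = array_map('intval', $cid['ContentBlock1']);  // array(2, 22)
        $jsoncid = implode(',', $ids);                      // "2,22"
        $sql = "SELECT * FROM content WHERE featured=1 AND state=1
                AND catid IN ($jsoncid) ORDER BY ordering ASC LIMIT 4";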

    Read the article

  • SSIS - Limiting Concurrent Connections

    - by Bigtoe
    Hi folks, I am using SSIS to connect to a legacy mainframe database which allows only 5 concurrent connections at a time. I have a data flow task with many tables to transfer, and it errors out because of this limitation. I have split the data flow task up into separate data flows, and this is working for the moment, but it is not optimal, as they need to be sequenced and one large transfer in a flow holds up subsequent transfers. Does anyone have an idea how to limit the number of connections in a single data flow? I had a look at using the EngineThreads property but it did not make any difference. Any help much appreciated.

    Read the article

  • Google Maps API limitations

    - by Henrik Skogmo
    I am working on a project where I am going to create a map over a specific area and include some points of interest, i.e. the ones already included in Google Maps. My first thought was that this could be done with the Google Maps API, but I've never worked with it before, so I have some questions about its limitations and capabilities. Can I limit the map to one specific area? Can I filter the points of interest, e.g. only gas stations and hotels? That's about it to get me started, at least. Thanks!

    Read the article

  • Ajax Reload same page onclick

    - by user277891
    Hi, this is my situation: I have some 10 links on a page. When the user clicks one of those links, an Ajax reload of the same page must take place. To be clear, I have something like this:

        <a href="test.php?name=one">one</a> | <a href="test.php?name=Two">Two</a>

    If JavaScript is enabled, on click an Ajax load must take place. If JavaScript is disabled, the links above should work as-is. Basically I am using "name" to limit some values on my search page.

    Read the article

  • iPhone App Store RSS failed to open

    - by suvendu
    I wrote this:

        <?php
        $opts = array(
            'http' => array(
                'user_agent' => 'PHP libxml agent',
            )
        );
        $context = stream_context_create($opts);
        libxml_set_streams_context($context);
        $doc = new DOMDocument();
        $doc->load("http://ax.itunes.apple.com/WebObjects/MZStoreServices.woa/ws/RSS/toppaidapplications/limit=25/xml");
        foreach ($doc->getElementsByTagName('item') as $node) {
            $title = $node->getElementsByTagName('title')->item(0)->nodeValue;
            $desc  = $node->getElementsByTagName('description')->item(0)->nodeValue;
            $link  = $node->getElementsByTagName('link')->item(0)->nodeValue;
            $date  = $node->getElementsByTagName('pubDate')->item(0)->nodeValue;
            echo $title;
            echo '<br>';
            echo $desc;
        }
        ?>

    Read the article

  • Reload AJAX data every X minutes/seconds, jQuery

    - by NightMICU
    Hi everyone, I programmed a CMS that has a log of who has recently logged into the system. Presently, this data is fed into a jQuery UI tab via Ajax. I would like to put this information into a sidebar on the main page and load it via Ajax every 30 seconds (or some set period of time). How would I go about doing this? Does the PHP response need to be JSON-encoded? I am fairly new to Ajax and JSON data. Here is the PHP I am currently using to pull details from the users table:

        <?php
        $loginLog = $db->query("SELECT name_f, name_l,
                                       DATE_FORMAT(lastLogin, '%a, %b %D, %Y %h:%i %p') AS lastLogin
                                FROM user_control ORDER BY lastLogin ASC LIMIT 10");
        while ($recentLogin = $loginLog->fetch()) {
            echo $recentLogin['name_f'] . " " . $recentLogin['name_l'] . " - " . $recentLogin['lastLogin'];
        }
        ?>

    Thanks!
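
    The response doesn't have to be JSON; a plain HTML fragment can be polled and dropped into the sidebar as-is. If JSON is preferred, a sketch of the same query returning it (json_encode is core PHP; the endpoint name is made up):

        <?php
        // recent_logins.php - returns the last 10 logins as JSON
        $rows = array();
        $loginLog = $db->query("SELECT name_f, name_l,
                                       DATE_FORMAT(lastLogin, '%a, %b %D, %Y %h:%i %p') AS lastLogin
                                FROM user_control ORDER BY lastLogin ASC LIMIT 10");
        while ($r = $loginLog->fetch()) {
            $rows[] = $r;
        }
        header('Content-Type: application/json');
        echo json_encode($rows);

    The sidebar then re-requests this URL on a timer (for example jQuery's setInterval with $.getJSON, or .load() for the HTML-fragment variant) and rebuilds its markup from the result.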

    Read the article

  • If my application doesn't use a lot of memory, can I ignore viewDidUnload:?

    - by iPhoneToucher
    My iPhone app generally uses under 5MB of living memory, and even in the most extreme conditions stays under 8MB. The iPhone 2G has 128MB of RAM, and from what I've read an app should only expect to have 20-30MB to use. Given that I never expect to get anywhere near the memory limit, do I need to care about memory warnings and setting objects to nil in viewDidUnload:? The only way I see my app getting memory warnings is if something else on the phone is screwing with the memory, in which case the entire phone would be acting silly. I built my app without ever using viewDidUnload:, so there are more than a hundred classes I'd need to inspect and add code to if I did need to implement it.

    Read the article
