Search Results

Search found 6172 results on 247 pages for 'limit choices to'.

Page 112/247

  • ServicedComponent not being disposed in finaliser

    - by David Gray Wright
    Questions needing answers: Does the finalizer of the client-side ServicedComponent call ServicedComponent.DisposeObject or Dispose? How should destruction (release of memory) occur in the COM server in relation to its usage in the client? Basically, we are reaching a 2 GB limit on the process size (memory) of the COM server because memory is not being released. Is the solution to explicitly call Dispose, or to use the using statement in the client?

    Read the article

  • Kohana 3 - Query builder gives 0 rows

    - by pigfox
    The following query returns one row as expected when run from phpMyAdmin:

        SELECT units.*, locations.*
        FROM units, locations
        WHERE units.id = '1' AND units.location_id = locations.id
        LIMIT 0, 30

    But when I try to do it in Kohana 3:

        $unit = DB::select('units.*', 'locations.*')
            ->from('units', 'locations')
            ->where('units.id', '=', $id)
            ->and_where('units.location_id', '=', 'locations.id')
            ->execute()->as_array();
        var_dump($unit);

    it prints array(0) { }. What am I doing wrong?
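    A likely culprit, assuming Kohana 3's query builder semantics: the third argument of where() is treated as a value to be quoted, so 'locations.id' is compared as the string literal "locations.id" rather than as a column. Wrapping it in DB::expr() keeps it raw; a sketch:

        // DB::expr() stops the builder from quoting the column reference.
        $unit = DB::select('units.*', 'locations.*')
            ->from('units', 'locations')
            ->where('units.id', '=', $id)
            ->and_where('units.location_id', '=', DB::expr('locations.id'))
            ->execute()
            ->as_array();

    If the builder keeps fighting you, DB::query(Database::SELECT, '...raw SQL...') is also available for running the working query verbatim.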

    Read the article

  • count and fetch rows in php

    - by Mac Taylor
    Hey guys, I have a table in my MySQL database named names. Everyone can save their real name, and now I want to query this table and find out how many times each name has been used. For example, the output should be: Jakob (20) Jenny (17). This is my own code:

        list($usernames) = mysql_fetch_row(mysql_query('SELECT name FROM table_user GROUP BY name ORDER BY COUNT(name) DESC LIMIT 50'));
        list($c) = mysql_num_rows(mysql_query('SELECT COUNT(name) FROM table_user GROUP BY name'));
        print $usernames.'('.$c.')';

    Is this a correct approach?
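    A minimal sketch of the usual approach, using the same mysql_* API as the question: a single GROUP BY query returns each name together with its count, so no second query or num_rows juggling is needed:

        // Each row carries the name and how often it appears, already sorted.
        $result = mysql_query(
            'SELECT name, COUNT(*) AS cnt
               FROM table_user
              GROUP BY name
              ORDER BY cnt DESC
              LIMIT 50'
        );
        while ($row = mysql_fetch_assoc($result)) {
            print $row['name'] . ' (' . $row['cnt'] . ')<br />';
        }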

    Read the article

  • Curl CONNECTION OPTIONS

    - by cinek1lol
    Hi, I'd like to know how to check the speed of a file being uploaded in real time using the curl library in C++. This is what I have written:

        curl_easy_getinfo(curl, CURLINFO_SPEED_UPLOAD, &c);

    The manual says that it shows the average speed, but even this doesn't seem to work for me, because I only ever see 0. There is one more thing: how do I set an upload limit that works? If I write this:

        curl_easy_setopt(curl, CURLOPT_MAX_SEND_SPEED_LARGE, 100);

    I get an error 502 message. Please help.

    Read the article

  • ORACLE SQL ROWNUM execution order

    - by iwan
    Dear expert, in Oracle SQL there is a pseudocolumn called ROWNUM. Can I confirm that ROWNUM is applied last, simply as a limit on the number of records returned? Or could it be evaluated before the other WHERE criteria (say, if we put the ROWNUM condition before the others)?
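    For what it's worth, a sketch of the usual answer (the foo table, its columns, and the connection details are hypothetical): ROWNUM is assigned to each row as it passes the WHERE predicates, before ORDER BY runs. So it acts together with the other WHERE criteria, not after them, and limiting a sorted result requires nesting the ordered query:

        // ROWNUM is assigned before ORDER BY, so nest the ordered query
        // to get the "first 10 rows after sorting".
        $conn = oci_connect('user', 'pass', 'mydb');
        $sql  = "SELECT *
                   FROM (SELECT * FROM foo WHERE category = :cat ORDER BY created_at)
                  WHERE ROWNUM <= 10";
        $stmt = oci_parse($conn, $sql);
        oci_bind_by_name($stmt, ':cat', $category);
        oci_execute($stmt);
        while ($row = oci_fetch_assoc($stmt)) {
            // process $row
        }

    This is also why WHERE ROWNUM > 1 can never match anything: the first candidate row would get ROWNUM 1, fail the predicate, and the counter never advances.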

    Read the article

  • iPhone Image Resources, ICO vs PNG, app bundle filesize

    - by Jasarien
    My application has a collection of around 1,940 icons that are used throughout. They're currently in ICO format, and new images provided to me come in ICO format too. I have noticed that they contain a 16x16 and a 32x32 representation of each icon in one file. Each file is roughly 4KB in file size (as reported by Finder, though ls reports that they vary from ~1,000 to 5,000 bytes). A very small number of these icons contain only the 32x32 representation, and as a result are only around 700 bytes in size.

    Currently I am bundling these icons with my application, and they are inflating the size of the app more than I would like. Altogether, the images total just about 25.5MB. Xcode must do some kind of compression, because the resulting app bundle is about 12.4MB. Compressing this further into a ZIP (as it would be when submitted to the App Store) results in a final file of 5.8MB. I'm aware that the maximum limit for over-the-air App Store downloads has been raised to 20MB since the introduction of the iPad (I'm not sure if that extends to iPhone apps as well as iPad apps; if not, the limit would be 10MB). My worry is that new icons are going to be added (sometimes up to 10 icons per week) and will continue to inflate the app bundle over time. What is the best way to distribute these icons with my app?

    Things I've tried and not had much success with:

    - Converting the icons from ICO to PNG: I tried this in the hope that the pngcrush utility would help with the file size, but it doesn't appear to make much difference between a normal PNG and a crushed PNG (I believe it just optimises the image for display on the iPhone's GPU rather than compressing its size). Also, going from ICO to PNG actually increased the size of the icon file...
    - Zipping the images and then uncompressing them on first run: while this did reduce the overall image sizes, I found that the effort needed to unzip them, copy them to the documents folder, and ensure that duplication doesn't happen on upgrades was too much hassle to be worth the benefit. Also, on original and 3G iPhones, unzipping and copying around 25MB of images takes too long and creates a bad experience...

    Things I've considered but not yet tried:

    - Instead of distributing the icons within the app bundle, host them online and download each icon on demand (it depends on the user's data which icons will actually be displayed, and when). The issue is that bandwidth costs money and image downloads will be bandwidth-intensive. However, my app currently has a small userbase of around 5,500 users (of which I estimate around 1,500 to be active, based on Flurry stats), and I have a huge unused bandwidth allowance with my current hosting package.

    So I'm open to thoughts on how to solve this tricky issue.

    Read the article

  • I'm new to OOP/PHP. What's the practicality of visibility and extensibility in classes?

    - by marcdev
    I'm obviously brand new to these concepts. I just don't understand why you would limit access to properties or methods. It seems that you would just write the code according to the intended results. Why would you create a private method instead of simply not calling that method? Is it for iterative object creation (if I'm stating that correctly), for multiple-developer situations (so you don't mess up other people's work), or just so you don't mess up your own work accidentally?
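    One way to see the point, a minimal sketch (the class and its members are invented for illustration): private members let a class enforce its own rules, so no caller, not even your own future code, can put the object into a bad state by accident:

        <?php
        class BankAccount
        {
            private $balance = 0; // cannot be modified directly from outside

            public function deposit($amount)
            {
                $this->assertPositive($amount); // every entry point enforces the rule
                $this->balance += $amount;
            }

            public function getBalance()
            {
                return $this->balance;
            }

            // Private: an implementation detail callers never need to know about,
            // which we remain free to rename or remove later.
            private function assertPositive($amount)
            {
                if ($amount <= 0) {
                    throw new InvalidArgumentException('Amount must be positive');
                }
            }
        }

        $acct = new BankAccount();
        $acct->deposit(50);
        // $acct->balance = -100;      // fatal error: cannot access private property
        // $acct->assertPositive(-1);  // fatal error: cannot call private method

    So it covers all three of your cases at once, but the deeper win is that the public methods become the only contract you have to keep stable; everything private can change freely.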

    Read the article

  • Maximum Row in DBMS

    - by Am1rr3zA
    Is there any limit to the maximum number of rows in a table in a DBMS (specifically MySQL)? I want to create a table for saving a log file, and its row count increases very fast. I want to know what I should do to prevent any problems.
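    For context, MySQL itself imposes no fixed row-count ceiling; the practical limits come from the storage engine, the filesystem's maximum file size, and the key type (an unsigned INT auto-increment runs out around 4.29 billion values). A hedged sketch of one common precaution for fast-growing log tables, with hypothetical column names: a BIGINT key plus date-range partitioning (MySQL 5.1+), so old rows can be retired cheaply:

        // Hypothetical schema: BIGINT leaves ample id headroom, and dropping
        // a whole partition is far cheaper than DELETE ... WHERE on millions of rows.
        mysql_query("CREATE TABLE logfile (
                         id        BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
                         logged_at DATETIME NOT NULL,
                         message   TEXT,
                         PRIMARY KEY (id, logged_at)
                     ) ENGINE=InnoDB
                     PARTITION BY RANGE (TO_DAYS(logged_at)) (
                         PARTITION p201005 VALUES LESS THAN (TO_DAYS('2010-06-01')),
                         PARTITION pmax    VALUES LESS THAN MAXVALUE
                     )");

        // Retire a month of logs in one cheap metadata operation.
        mysql_query("ALTER TABLE logfile DROP PARTITION p201005");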

    Read the article

  • Is it really best to make a site without using <div>, using semantic tags only?

    - by jitendra
    I found this in a Google search; see the article here: http://www.thatcssguy.com/limit-your-divs/ and his final layout here: http://www.nodivs.com/ Some quotes from the article: 1. "When I limited the use of my divs all the major browsers including both IE6 and IE7 would render the sites nearly perfectly, or with very little fixing needed." 2. "It's magic, but proves neither divs nor tables are necessary for layout." Should we try to make sites like this?

    Read the article

  • What's the purpose of the maxPostSize for Tomcat's HTTP Connector?

    - by Bytecode Ninja
    According to the Tomcat docs: "The maximum size in bytes of the POST which will be handled by the container FORM URL parameter parsing. The limit can be disabled by setting this attribute to a value less than or equal to 0. If not specified, this attribute is set to 2097152 (2 megabytes)." But what is "the container FORM URL parameter parsing"? Any idea what the purpose of maxPostSize is? Thanks in advance.
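    In other words, judging from the quoted docs, maxPostSize caps how many bytes of an application/x-www-form-urlencoded POST body Tomcat will read into memory when it parses the body into request parameters for getParameter(); it exists so a huge form POST can't exhaust server memory. A sketch of where it is set, on the HTTP Connector in conf/server.xml (the surrounding attribute values here are assumed defaults, not taken from the question):

        <!-- Raise the FORM parameter parsing cap to 10 MB -->
        <Connector port="8080" protocol="HTTP/1.1"
                   connectionTimeout="20000"
                   redirectPort="8443"
                   maxPostSize="10485760" />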

    Read the article

  • Rails 2.3: How to turn this SQL query into a named_scope

    - by randombits
    Having a bit of difficulty figuring out how to create a named_scope from this SQL query:

        SELECT * FROM foo
        WHERE id NOT IN (SELECT foo_id FROM bar)
          AND foo.category = ?
        ORDER BY RAND() LIMIT 1;

    The category should be a variable that can change. What's the most efficient way the named_scope can be written for the problem above?

    Read the article

  • Exception in thread "main" java.lang.OutOfMemoryError: how to find and fix it?

    - by or.nomore
    Hey, I'm trying to program a crossword creator, using a given dictionary txt file and a given pattern txt file. The basic idea is to use a DFS algorithm. The problem begins when the dictionary file is v-e-r-y big (about 50,000 words); then I receive: Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded. I know that there is a part of my program that wastes memory, but I don't know where it is, how to find it, or how to fix it.

    Read the article

  • What are the most useful software development metrics?

    - by kchad
    I would like to track metrics that can be used to improve my team’s software development process, improve time estimates, and detect special case variations that need to be addressed during the project execution. Please limit each answer to a single metric, describe how to use it, and vote up the good answers.

    Read the article

  • MySQL subqueries

    - by swamprunner7
    Can we do this query without subqueries?

        SELECT login, post_n,
               (SELECT SUM(vote) FROM votes
                 WHERE votes.post_n = posts.post_n) AS votes,
               (SELECT COUNT(comments.post_n) FROM comments
                 WHERE comments.post_n = posts.post_n) AS comments_count
          FROM users, posts
         WHERE posts.id = users.id AND (visibility = 2 OR visibility = 3)
         ORDER BY date DESC
         LIMIT 0, 15

    Tables: Users: id, login; Posts: post_n, id, visibility; Votes: post_n, vote. Here id is the user id, and Users is the main table.
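    Not quite without subqueries, but a common rewrite (a sketch using the question's column names; the rest is assumed) replaces the two per-row correlated subqueries with derived tables that aggregate votes and comments once each and are then joined in:

        // Each derived table is computed once, instead of once per returned row;
        // LEFT JOIN + COALESCE keeps posts that have no votes or comments yet.
        $sql = "SELECT users.login, posts.post_n,
                       COALESCE(v.votes, 0)          AS votes,
                       COALESCE(c.comments_count, 0) AS comments_count
                  FROM posts
                  JOIN users ON users.id = posts.id
                  LEFT JOIN (SELECT post_n, SUM(vote) AS votes
                               FROM votes GROUP BY post_n) v
                         ON v.post_n = posts.post_n
                  LEFT JOIN (SELECT post_n, COUNT(*) AS comments_count
                               FROM comments GROUP BY post_n) c
                         ON c.post_n = posts.post_n
                 WHERE posts.visibility IN (2, 3)
                 ORDER BY date DESC
                 LIMIT 0, 15";
        $result = mysql_query($sql);

    A plain three-way join would not work here, because joining posts to both votes and comments multiplies rows and inflates the SUM and COUNT.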

    Read the article

  • How to use Facebook graph API to retrieve fan photos uploaded to wall of fan page?

    - by Joe
    I am creating an external photo gallery using PHP and the Facebook Graph API. It pulls thumbnails as well as the large image from albums on our Facebook fan page. Everything works perfectly, except that I'm only able to retrieve photos that an ADMIN posts to our page (graph.facebook.com/myalbumid/photos). Is there a way to use the Graph API to load publicly uploaded photos from fans? I want to retrieve the pictures from the "Photos from" album, but getting the ID for the graph query is not like other albums... it looks like this: http://www.facebook.com/media/set/?set=o.116860675007039 Another note: the only way I've come close to retrieving this data is by using the "feed" option, i.e. graph.facebook.com/pageid/feed

    EDIT: This is about as far as I could get. It works, but has the issues stated below; maybe someone could expand on this or provide a better solution. (Using the FB PHP SDK)

        <?php
        require_once('config.php');

        // get all tagged content for the page
        $photos  = $facebook->api("/YourID/tagged");
        $maxitem = 10;
        $count   = 0;
        foreach ($photos['data'] as $photo) {
            if ($photo['type'] == "photo") {
                echo "<img src='{$photo['picture']}' />", "<br />";
            }
            $count += 1;
            if ($count >= $maxitem) {
                break;
            }
        }
        ?>

    Issues with this:
    1) Since I don't know a method for graph-querying specific "types" of tags, I had to use a conditional statement to display only photos.
    2) You cannot effectively use "?limit=#" with this, because as I said the "tagged" query contains all types (photo, video, and status). So if you are going for a photo gallery and wish to avoid pulling the entire query result by using ?limit, you will lose images.
    3) The only content that shows up in the "tagged" query is from people who are not admins of the page. This isn't the end of the world, but I don't understand why Facebook wouldn't allow yourself to be shown in this data, as long as you posted it "as yourself" and not as the page.

    Read the article

  • Server Error Message: No File Access

    - by iMayne
    Hello. I'm having an issue but don't know where to solve it. My template works great in XAMPP but not on the host server. I get this message:

        Warning: file_get_contents() [function.file-get-contents]: URL file-access is disabled in the server configuration in homepage/......./twitter.php

    The error is on line 64.

        <?php
        /* For use in the "Parse Twitter Feeds" code below */
        define("SECOND", 1);
        define("MINUTE", 60 * SECOND);
        define("HOUR", 60 * MINUTE);
        define("DAY", 24 * HOUR);
        define("MONTH", 30 * DAY);

        function relativeTime($time) {
            $delta = time() - $time;
            if ($delta < 2 * MINUTE) {
                return "1 min ago";
            }
            if ($delta < 45 * MINUTE) {
                return floor($delta / MINUTE) . " min ago";
            }
            if ($delta < 90 * MINUTE) {
                return "1 hour ago";
            }
            if ($delta < 24 * HOUR) {
                return floor($delta / HOUR) . " hours ago";
            }
            if ($delta < 48 * HOUR) {
                return "yesterday";
            }
            if ($delta < 30 * DAY) {
                return floor($delta / DAY) . " days ago";
            }
            if ($delta < 12 * MONTH) {
                $months = floor($delta / DAY / 30);
                return $months <= 1 ? "1 month ago" : $months . " months ago";
            } else {
                $years = floor($delta / DAY / 365);
                return $years <= 1 ? "1 year ago" : $years . " years ago";
            }
        }

        /* Parse Twitter Feeds */
        function parse_cache_feed($usernames, $limit, $type) {
            $username_for_feed = str_replace(" ", "+OR+from%3A", $usernames);
            $feed = "http://twitter.com/statuses/user_timeline.atom?screen_name="
                  . $username_for_feed . "&count=" . $limit;
            $usernames_for_file = str_replace(" ", "-", $usernames);
            $cache_file = dirname(__FILE__) . '/cache/' . $usernames_for_file
                        . '-twitter-cache-' . $type;
            if (file_exists($cache_file)) {
                $last = filemtime($cache_file);
            }
            $now = time();
            $interval = 600; // ten minutes
            // check the cache file: if it doesn't exist or is old, refresh it
            if (!$last || (($now - $last) > $interval)) {
                $cache_rss = file_get_contents($feed); // <-- this is line 64

    Any idea how to allow this access on my host server?
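    That warning means the host has allow_url_fopen switched off, so file_get_contents() cannot open remote URLs at all. A common workaround, assuming the host's cURL extension is enabled (most are), is a small helper used in place of the file_get_contents($feed) call on line 64; a sketch:

        // Fetch a remote URL without relying on allow_url_fopen.
        function fetch_url($url) {
            $ch = curl_init($url);
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return the body as a string
            curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); // follow redirects
            curl_setopt($ch, CURLOPT_TIMEOUT, 10);          // don't let a slow feed hang the page
            $body = curl_exec($ch);
            curl_close($ch);
            return $body === false ? '' : $body;
        }

        // replaces: $cache_rss = file_get_contents($feed);
        $cache_rss = fetch_url($feed);

    Alternatively, some hosts let you re-enable the option with allow_url_fopen = On in a custom php.ini, but the cURL route works regardless.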

    Read the article

  • System call time out?

    - by Arnold
    Hi, I'm using Unix system() calls to gunzip and gzip files. With very large files these sometimes get aborted (i.e. on the cluster compute nodes), while at other times (i.e. on the login nodes) they go through. Is there some soft limit on the time a system call may take? What else could it be?

    Read the article

  • apc_delete() not working in background script

    - by Jared
    I have a shell background converter on my video website, and I can't seem to get APC to delete a key as a file is uploaded and its visibility is updated. The script is structured like so:

        if (file_exists($output_file)) {
            $conn->query("UPDATE `foo` SET `bar` = 1 WHERE `id` = " . $id . " LIMIT 1");
            apc_delete('feed:' . $id);
        }

    Everything works fine except for the APC call, and this is the only script on the site that has had this problem. I'm stumped.
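    One possible explanation, offered as an assumption rather than a confirmed diagnosis: APC's user cache is per-SAPI, so a CLI background script gets its own separate cache, and apc_delete() there never touches the keys held by the web server's PHP. A common workaround is a tiny purge endpoint that the background script calls over HTTP; purge.php, the secret parameter, and the URL below are hypothetical names:

        <?php
        // purge.php (hypothetical): runs under the web server, so this
        // apc_delete() hits the same APC cache your pages read from.
        if (!isset($_GET['secret']) || $_GET['secret'] !== 'long-random-string') {
            header('HTTP/1.0 403 Forbidden'); // crude shared-secret check for the sketch
            exit;
        }
        apc_delete('feed:' . (int) $_GET['id']);
        echo 'OK';

    The background script would then swap its apc_delete() call for something like file_get_contents('http://example.com/purge.php?secret=long-random-string&id=' . $id); so the deletion happens inside the web server's PHP rather than the CLI's.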

    Read the article

  • SEC_TO_TIME() convert to java.sql.Time error

    - by chun
    Hi, I have an aggregate column holding milliseconds, and a report (with Jasper) has to show this indicator as HH:mm:ss. What I did is use SEC_TO_TIME(SUM(col)/1000), but when mapping to java.sql.Time it doesn't work when the hour value of the result passes 24 (e.g. 36:33:33). Then I thought of another way: not using SEC_TO_TIME, just mapping the milliseconds as a BigDecimal, but I don't know what Java class I should use to format it, as the default HH:mm:ss formats are limited to 24 hours...?

    Read the article

  • Question regarding MySQL indices and their functionality

    - by user281434
    Hi, say I have an ordinary table in my db like so:

        ----------------------------
        | id | username | password |
        ----------------------------
        | 24 | blah     | blah     |
        ----------------------------

    A primary key is assigned to the id column. Now when I run a MySQL query like this:

        SELECT id FROM table WHERE username = 'blah' LIMIT 1

    does that primary key index even help? If I am telling it to match usernames, then shouldn't the username column be indexed instead? Thanks for your time.
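    For reference, a sketch of the usual answer (assuming the table is really called users, since table itself is a reserved word): the primary key on id does nothing for a lookup by username, so give the username column its own secondary index:

        // Make WHERE username = '...' an index lookup instead of a full scan.
        mysql_query("ALTER TABLE users ADD INDEX idx_username (username)");

        // EXPLAIN should now list idx_username under possible_keys/key.
        $explain = mysql_query("EXPLAIN SELECT id FROM users WHERE username = 'blah' LIMIT 1");
        var_dump(mysql_fetch_assoc($explain));

    With InnoDB, the new index even covers this particular query, because each secondary index entry stores the primary key value (id) alongside it.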

    Read the article

  • Committed JDO writes do not apply on local GAE HRD, or possibly reused transaction

    - by eeeeaaii
    I'm using JDO 2.3 on App Engine. I was using the Master/Slave datastore for local testing and recently switched over to using the HRD datastore for local testing, and parts of my app are breaking (which is to be expected).

    One part of the app that's breaking is where it sends a lot of writes quickly: because of the 1-second limit, it's failing with a concurrent modification exception. Okay, so that's also to be expected, so I have the browser retry the writes later when they fail (maybe not the best hack, but I'm just trying to get it working quickly).

    But a weird thing is happening. Some of the writes which should be succeeding (the ones that DON'T get the concurrent modification exception) are also failing, even though the commit phase completes and the request returns my success code. I can see from the log that the retried requests are working okay, but the requests that seem to have committed on the first try are, I guess, never "applied". But from what I read about the apply phase, writing again to that same entity should force the apply... but it doesn't. Code follows. Some things to note:

    - I am attempting to use automatic JDO caching, where JDO uses memcache under the covers. This doesn't actually work unless you wrap everything in a transaction.
    - All the requests do is read a string out of an entity, modify part of the string, and save that string back to the entity. If these requests weren't in transactions, you'd of course have the "dirty read" problem. But with transactions, isolation is supposed to be at the "serializable" level, so I don't see what's happening here.
    - The entity being modified is a root entity (not in a group).
    - I have cross-group transactions enabled.

    Another weird thing is happening. If the concurrent modification exception occurs and I subsequently edit more than 5 more entities (the maximum for cross-group transactions), nothing happens right away, but when I stop and restart the server I get "IllegalArgumentException: operating on too many entity groups in a single transaction". Could the PMF be returning the same PersistenceManager every time, or could the PM be reusing the same transaction every time? I don't see how I could get the above error otherwise; the code inside the transaction edits just one root entity. I can't think of any other way that GAE would give me the "too many entity groups" error.
    The relevant code (this is a simplified version):

        PersistenceManager pm = PMF.getManager();
        Transaction tx = pm.currentTransaction();
        String responsetext = "";
        try {
            tx.begin();
            // I have extra calls to "makePersistent" because I found that relying
            // on pm.close didn't always write the objects to cache; maybe that
            // was only a DataNucleus 1.x issue though
            Key userkey = obtainUserKeyFromCookie();
            User u = pm.getObjectById(User.class, userkey);
            pm.makePersistent(u); // to make sure it gets cached for next time
            Key mapkey = obtainMapKeyFromQueryString();
            // this is NOT a java.util.Map, just FYI
            Map currentmap = pm.getObjectById(Map.class, mapkey);
            Text mapData = currentmap.getMapData(); // mapData is JSON stored in the entity
            Text newMapData = parseModifyAndReturn(mapData); // transform the map
            currentmap.setMapData(newMapData); // mutate the Map object
            pm.makePersistent(currentmap); // make sure to persist so there is a cache hit
            tx.commit();
            responsetext = "OK";
        } catch (JDOCanRetryException jdoe) {
            // log jdoe
            responsetext = "RETRY";
        } catch (Exception e) {
            // log e
            responsetext = "ERROR";
        } finally {
            if (tx.isActive()) {
                tx.rollback();
            }
            pm.close();
        }
        resp.getWriter().println(responsetext);

    EDIT: I have verified that it fails after exactly five transactions. Here's what I do: I create a Foo (root entity), do a bunch of concurrent operations on that Foo, and some fail and get retried, while some commit but don't apply (as described above). Then I start creating more Foos and do a few operations on those new Foos. If I only create four Foos, stopping and restarting App Engine does NOT give me the IllegalArgumentException. However, if I create five Foos (the limit for cross-group transactions), then when I stop and restart App Engine I do get the exception. So it seems that these new Foos are somehow counting toward the limit of five entity groups per transaction, even though they are supposed to be handled by separate transactions. It's as if a transaction is still open and is being reused by the servlet when it handles the requests for the 2nd through 5th Foos.

    EDIT 2: It looks like the IllegalArgument thing is independent of the other bug; in other words, it always happens when I create five Foos, even if I don't get the concurrent modification exception. I don't know whether it's a symptom of the same problem or unrelated.

    EDIT 3: I found out what was causing the (unrelated) IllegalArgumentException; it was a dumb mistake on my part. But the other issue is still happening.

    EDIT 4: Added pseudocode for the datastore access.

    EDIT 5: I am pretty sure I know why this is happening, but I will still award the bounty to anyone who can confirm it. Basically, I think the problem is that transactions are not really implemented in the local version of the datastore. References:

    https://groups.google.com/forum/?fromgroups=#!topic/google-appengine-java/gVMS1dFSpcU
    https://groups.google.com/forum/?fromgroups=#!topic/google-appengine-java/deGasFdIO-M
    https://groups.google.com/forum/?hl=en&fromgroups=#!msg/google-appengine-java/4YuNb6TVD6I/gSttMmHYwo0J

    Because transactions are not implemented, rollback is essentially a no-op. Therefore, I get a dirty read when two transactions try to modify the record at the same time. In other words, A reads the data and B reads the data at the same time; A attempts to modify one part of the data, and B attempts to modify a different part. A writes to the datastore, then B writes, obliterating A's changes. Then B is "rolled back" by App Engine, but since rollbacks are a no-op on the local datastore, B's changes stay and A's do not. Meanwhile, since B is the thread that threw the exception, the client retries B but does not retry A (since A was supposedly the transaction that succeeded).

    Read the article
