Search Results

Search found 5021 results on 201 pages for 'limit'.

Page 34 of 201

  • How can I set `less` or `more` max lines (scrollable height) limit/boundary in linux?

    - by Rudie
    (Sorry for the title. Any suggestions?) I've set my command-line PS1 to cover 3 lines:

        white space
        user, server and pwd
        $ or # to input

    I think less (or more?) is configured to break after the window height - 1, because when I do a $ git log, the first two lines are invisible at the top of the window and the rest is scrollable. I'm not sure who handles this scrolling and its configuration, but I assume Git uses less/more. Where can I configure that my scrollable window is window height - 3 lines and not window height - 1?

    More info: If I cat lines.txt | less with a 23-line file, it shows the entire file and no scrolling. If I do the same with a 24-line file, it doesn't show line 1 (and no scrolling). With 25 lines: doesn't show lines 1 and 2 (and no scrolling). With 26 lines: shows line 1 and scrolling! The less breakpoint is at the wrong height...
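
    A sketch of one likely fix, assuming less is the pager in play: less's -z (--window) option sets the scrolling window size, and a negative value means "screen height minus n lines". It can be set globally or just for Git's pager:

        # Hedged sketch: make less scroll in windows of (height - 3) lines.
        export LESS="-z-3"                            # for every less invocation
        git config --global core.pager 'less -z-3'    # or only for Git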

    Read the article

  • How can I limit other (administrator) users access to my profile?

    - by kojo
    Hi, We in our club have a computer with Windows 7 Professional that every club member may use, and everyone has their own separate account. Those accounts have to have administrator privileges, since I want everyone to be able to install any software and use any feature they want. However, there is a single thing that they shouldn't be allowed to do - look into other users' profiles.

    Right now, when anyone goes to 'c:\Users\(Any User Name)', a little prompt appears saying that this folder is secured and asking whether you really want to look inside. Simply clicking 'OK' gives you access to any profile. I tried disabling taking ownership for the Administrators group in Group Policies, but that had no effect. How can I effectively prohibit administrators from looking into each others' profiles and documents?
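
    A hedged starting point - note that a determined administrator can always take ownership again, so this raises the bar rather than making profiles airtight. Removing the Administrators group from a profile's ACL stops the casual "click OK to continue" access ("Alice" below is a placeholder profile name):

        :: Hedged sketch: break inheritance, then drop the Administrators grant.
        icacls "C:\Users\Alice" /inheritance:d
        icacls "C:\Users\Alice" /remove:g Administrators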

    Read the article

  • vb.net, How can I limit a textchanged event for a textbox to keyboard input only?

    - by Luay
    Hi everyone, Please allow me to explain what I have and what I am trying to achieve. I have a textbox (called txtb1) and a button under it (called btn_browse) on a winform in a vb.net project. When the user clicks the button, a folder browser dialog appears. The user selects his desired folder, and when he/she clicks 'ok' the dialog closes and the path of the selected folder appears in the textbox. I also want to store that value in a variable to be used somewhere else (the value will be copied to an xml file when the user clicks 'apply' on the form, but this is not related to my problem). To achieve that I have the following code:

        Public myVar As String

        Private Sub btn_browse_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles btn_browse.Click
            Dim f As New FolderBrowserDialog
            If f.ShowDialog() = DialogResult.OK Then
                txtb1.Text = f.SelectedPath
            End If
            myVar = txtb1.Text
            f.Dispose()
        End Sub

    This part works with no problems. Now, what if the user either:

    1. decides to enter the path manually rather than use the browse button, or
    2. after using the browse button and selecting the folder, decides to manually change the location?

    In trying to solve this I added a TextChanged event to the textbox as follows:

        Private Sub txtb1_TextChanged(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles txtb1.TextChanged
            myVar = txtb1.Text
        End Sub

    However, this is not working. Apparently, and I don't know if this is relevant, when the user selects the desired folder using the browse button, the TextChanged event is also triggered. And when I click on the textbox (to give it focus) and press any keyboard key, the application simply stops responding. So my questions are: am I going about this the right way? If my logic is flawed, could someone point me to how such a thing is usually achieved? Is it possible to limit the triggering events to keyboard input only, as a way around this? I tried the KeyDown and KeyPress events but I am getting the freeze. I would be grateful for your help. Thanks
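
    A hedged suggestion rather than a definitive fix: instead of mirroring the textbox into a variable on every change, read it once at the moment the value is actually needed. That makes the TextChanged handler, and the question of which event triggered it, unnecessary:

        ' Hedged sketch - btn_apply is a placeholder for the form's 'apply' button.
        Private Sub btn_apply_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles btn_apply.Click
            Dim folderPath As String = txtb1.Text ' current value, however it was entered
            ' ... write folderPath to the xml file here ...
        End Sub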

    Read the article

  • Is there a limit for the number of files in a directory on an SD card?

    - by jamesh
    I have a project written for Android devices. It generates a large number of files each day. These are all text files and images. The app uses a database to reference these files. The app is supposed to clean up these files after a little use (perhaps after a few days), but this process may or may not be working. That is not the subject of this question.

    Due to a historic accident, the organization of the files is somewhat naive: everything is in the same directory - a .hidden directory which contains a zero-byte .nomedia file to prevent the MediaScanner indexing it. Today, I am seeing an error reported:

        java.io.IOException: Cannot create: /sdcard/.hidden/file-4200.html
            at java.io.File.createNewFile(File.java:1263)

    Regarding the sdcard, I see it has plenty of storage left, but counting:

        $ cd /Volumes/NO_NAME/.hidden
        $ ls | wc -w
        9058

    Deleting a number of files seems to have allowed the file creation for today to proceed. Regrettably, I did not try touching a new file to try and reproduce the error on a command line; I also deleted several hundred files rather than a handful. However, my questions are: are there hard limits on file size or on the number of files in a directory? Am I even on the right track here?

    Nota bene: The SD card is as-is - i.e. I haven't formatted it, so I would guess it would be a FAT-* format. The FAT-32 format has hard limits of filesize of 2GB (well above the file sizes I am dealing with) and a limit on the number of files in the root directory. I am definitely not writing files in the root directory.
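
    For what it's worth, a hedged back-of-the-envelope check, assuming the card is FAT32: a FAT32 directory tops out at 65,536 32-byte entries, and every file with a long name consumes one 8.3 entry plus one extra entry per 13 characters of name, which puts a per-directory ceiling on file count:

        # Hedged sketch of the directory-entry arithmetic.
        import math

        def fat32_entries_per_file(name: str) -> int:
            # one short (8.3) entry + ceil(len/13) long-file-name entries
            return 1 + math.ceil(len(name) / 13)

        per_file = fat32_entries_per_file("file-4200.html")  # 3 entries
        print(65536 // per_file)  # roughly 21845 files of this name length

    At 9,058 files that ceiling has not been reached, so a filesystem check, or a different limit on the card's actual format, may be the better lead.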

    Read the article

  • Twitter API Rate Limit - overcoming it on an unauthenticated JSON GET with Objective-C?

    - by Cian
    I see the rate limit is 150/hr per IP. This'd be fine, but my application is on a mobile phone network (with shared IP addresses). I'd like to query twitter trends, e.g. GET /trends/1/json. This doesn't require authorization, however what if the user first authorized with my application using OAuth, then hit the JSON API? The request is built as follows:

        - (void) queryTrends:(NSString *) WOEID {
            NSString *urlString = [NSString stringWithFormat:@"http://api.twitter.com/1/trends/%@.json", WOEID];
            NSURL *url = [NSURL URLWithString:urlString];
            NSURLRequest *theRequest = [NSURLRequest requestWithURL:url
                                                        cachePolicy:NSURLRequestUseProtocolCachePolicy
                                                    timeoutInterval:10.0];
            NSURLConnection *theConnection = [[NSURLConnection alloc] initWithRequest:theRequest
                                                                             delegate:self
                                                                     startImmediately:YES];
            if (theConnection) {
                // Create the NSMutableData to hold the received data.
                theData = [[NSMutableData data] retain];
            } else {
                NSLog(@"Connection failed in Query Trends");
            }
            //NSData *data = [NSData dataWithContentsOfURL:[NSURL URLWithString:urlString]];
        }

    I have no idea how I'd build this request as an authenticated one, however, and haven't seen any examples to this effect online. I've read through the twitter OAuth documentation, but I'm still puzzled as to how it should work. I've experimented with OAuth using Ben Gottlieb's prebuilt library, calling this in my first viewDidLoad:

        OAuthViewController *oAuthVC = [[OAuthViewController alloc] initWithNibName:@"OAuthTwitterDemoViewController" bundle:[NSBundle mainBundle]];
        // [self setViewController:aViewController];
        [[self navigationController] pushViewController:oAuthVC animated:YES];

    This should store all the keys required in the app's preferences; I just need to know how to build the GET request after authorizing! Maybe this just isn't possible? Maybe I'll have to proxy the requests through a server-side application? Any insight would be appreciated!

    Read the article

  • How can I limit asp.net control actions based on user role?

    - by Duke
    I have several pages or views in my application which are essentially the same for both authenticated users and anonymous users. I'd like to limit the insert/update/delete actions in formviews and gridviews to authenticated users only, and allow read access for both authed and anon users. I'm using the asp.net configuration system for handling authentication and roles. This system limits access based on path, so I've been creating duplicate pages for authed and anon paths.

    The solution that comes to mind immediately is to check roles in the appropriate event handlers, limiting what possible actions are displayed (insert/update/delete buttons) and also limiting what actions are performed (for users that may know how to perform an action in the absence of a button). However, this solution doesn't eliminate duplication - I'd be duplicating security code on a series of pages rather than duplicating pages and limiting access based on path; the latter would be significantly less complicated. I could always build some controls that offered role-based configuration, but I don't think I have time for that kind of commitment right now.

    Is there a relatively easy way to do this (do such controls exist?) or should I just stick to path-based access and duplicate pages? Does it even make sense to use two methods of authorization? There are still some pages which are strictly for either role, so I'll be making use of path-based authorization anyway. Finally, would using something other than path-based authorization be contrary to typical asp.net design practices, at least in the context of using the asp.net configuration system?
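
    Such controls do exist in the box - a hedged sketch using ASP.NET's LoginView, which swaps its contents based on authentication state (and, via RoleGroups, role), so one page can serve both audiences; server-side checks in the event handlers are still needed for users who bypass the buttons:

        <asp:LoginView ID="EditActions" runat="server">
          <AnonymousTemplate>
            <%-- read-only markup: no action buttons rendered --%>
          </AnonymousTemplate>
          <LoggedInTemplate>
            <%-- insert/update/delete buttons for authenticated users --%>
          </LoggedInTemplate>
        </asp:LoginView>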

    Read the article

  • How to limit a user to entering 10 keywords or less using PHP & MySQL?

    - by G4TV
    I'm trying to limit my users to entering at most 10 keywords, and I was wondering how I would be able to do this using PHP & MySQL with my current keyword script. Here is the add-keywords PHP/MySQL code:

        if (isset($_POST['tag']) && trim($_POST['tag']) !== '') {
            $tags = explode(",", $_POST['tag']);
            for ($x = 0; $x < count($tags); $x++) {
                $mysqli = mysqli_connect("localhost", "root", "", "sitename");
                $query1 = "INSERT INTO tags (tag) VALUES ('" . mysqli_real_escape_string($mysqli, strtolower(htmlentities(trim(strip_tags($tags[$x]))))) . "')";
                if (!mysqli_query($mysqli, $query1)) {
                    print mysqli_error($mysqli);
                    return;
                }
                $mysqli = mysqli_connect("localhost", "root", "", "sitename");
                $dbc = mysqli_query($mysqli, "SELECT id FROM tags WHERE tag='" . mysqli_real_escape_string($mysqli, strtolower(htmlentities(trim(strip_tags($tags[$x]))))) . "'");
                if (!$dbc) {
                    print mysqli_error($mysqli);
                } else {
                    while ($row = mysqli_fetch_array($dbc)) {
                        $id = $row["id"];
                    }
                }
                $query2 = "INSERT INTO question_tags (tag_id, question_id, user_id, date_created) VALUES ('$id', '$question', '$user', NOW())";
                if (!mysqli_query($mysqli, $query2)) {
                    print mysqli_error($mysqli);
                    return;
                }
            }
        }
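
    A hedged sketch of the missing guard: count the tags right after the explode() and bail out (or truncate) before the loop ever touches the database:

        // Hedged sketch - cap the keyword count before inserting.
        $tags = array_filter(array_map('trim', explode(',', $_POST['tag'])));
        if (count($tags) > 10) {
            print 'Please enter no more than 10 keywords.';
            return;
        }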

    Read the article

  • Web Hosting: Any web host that supports more than 50,000 files?

    - by Devner
    Hi all, For my PHP & mySQL based application, I am trying to buy website hosting from a host who does not have a limit on the number of files I can carry in my hosting account. Almost all the hosts have a common limit of 50,000 files (some call it 50,000 nodes); the rest (to the extent of my search) are not even close. I have gone through the various websites, Googled a lot of information, and spoken with the customer service of the hosting companies, and they said that they have a limit of 50,000 files - that's why they call it the LIMIT.

    Now, my application is a kind of social networking website where people can upload various files of varying size. So if 50,000 users were to join the website and upload 1 file each, the limit of 50,000 would be reached very easily, and my 50,001st customer would start facing file upload problems (and so would my account). So I would like to know if there are any website hosting services that do NOT levy such restrictions. In summary, I need the following options:

    - No maximum file limit (more than 50,000 files in account).
    - No maximum file upload limit in server settings (10MB, 12MB, 15MB, 20MB, etc.).
    - Ability to upload files of various types (zip, flv, jpg, png, etc.).
    - Ability to stream audio and video (live audio & video not necessary).
    - Access to .htaccess.
    - Access to php.ini, my.cnf or my.ini (this would be a plus).
    - Supports SSL.
    - Provides dedicated hosting (& IP) as well.
    - Monthly payments without contracts are a plus.

    If you know of any such website hosting services, please post a reply (a link to the same will be appreciated). Thank you.

    Read the article

  • iptables logging not working?

    - by vps_newcomer
    OS: Ubuntu 10.04. Logging daemon: rsyslog. For some reason I'm not getting any iptables logs; even though I don't look through them very often, I'd still like to get it working for the sake of it working XD Here is my /etc/rsyslog.d/iptables.conf:

        :msg, contains, "[IPTABLES]" -/var/log/iptables.log
        & ~

    My iptables logging prefix is "[IPTABLES]" followed by whatever else (example: [IPTABLES] Denied xyz). The /var/log/iptables.log file is being created, however it's not getting any entries. I can see the logging entries in dmesg but not in syslog or messages. What's going on?

    EDIT: My iptables logging rules:

        # logging limit
        LoggingLimit=5/min
        LoggingPrefix=IPTABLES

        # Logging chain
        iptables -N LOG_REJECT
        iptables -A LOG_REJECT -j LOG

        # join INPUT to LOG_REJECT
        iptables -A INPUT -j LOG_REJECT

        # logging
        iptables -A LOG_REJECT -p tcp -m limit --limit $LoggingLimit -j LOG --log-prefix "$LoggingPrefix Denied TCP: " #--log-level 7
        iptables -A LOG_REJECT -p udp -m limit --limit $LoggingLimit -j LOG --log-prefix "$LoggingPrefix Denied UDP: " #--log-level 7
        iptables -A LOG_REJECT -p icmp -m limit --limit $LoggingLimit -j LOG --log-prefix "$LoggingPrefix Denied ICMP: " #--log-level 7

    Update: I found a thread that has the same symptoms as I do; apparently it is a kernel bug. I am using a VPS, so could anyone point me to how to upgrade my kernel or apply a workaround? I couldn't find a 2.6.34 kernel listed in apt-cache. Thread: http://www.linode.com/forums/viewtopic.php?t=5533
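
    One hedged observation from the config as posted: the rsyslog filter matches the literal string "[IPTABLES]", but the rules log with the prefix "IPTABLES Denied ..." - no brackets, since LoggingPrefix is set to the bare word IPTABLES. Aligning the two is worth trying before chasing the kernel bug:

        # Hedged sketch: make the logged prefix match the rsyslog filter.
        LoggingPrefix="[IPTABLES]"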

    Read the article

  • Is there a limit on "join"s, the "where" clause, or the length of a SQL query?

    - by Chetan sharma
    Actually I was trying to get data from the elgg database based on multiple joins. It generated a very big query with lots of JOIN statements, and the query never responds:

        SELECT distinct e.* from test_entities e
        JOIN test_metadata m1 on e.guid = m1.entity_guid
        JOIN test_metastrings ms1 on ms1.id = m1.name_id
        JOIN test_metastrings mv1 on mv1.id = m1.value_id
        JOIN test_objects_entity obj on e.guid = obj.guid
        JOIN test_metadata m2 on e.guid = m2.entity_guid
        JOIN test_metastrings ms2 on ms2.id = m2.name_id
        JOIN test_metastrings mv2 on mv2.id = m2.value_id
        JOIN test_metadata m3 on e.guid = m3.entity_guid
        JOIN test_metastrings ms3 on ms3.id = m3.name_id
        JOIN test_metastrings mv3 on mv3.id = m3.value_id
        JOIN test_metadata m4 on e.guid = m4.entity_guid
        JOIN test_metastrings ms4 on ms4.id = m4.name_id
        JOIN test_metastrings mv4 on mv4.id = m4.value_id
        JOIN test_metadata m5 on e.guid = m5.entity_guid
        JOIN test_metastrings ms5 on ms5.id = m5.name_id
        JOIN test_metastrings mv5 on mv5.id = m5.value_id
        JOIN test_metadata m6 on e.guid = m6.entity_guid
        JOIN test_metastrings ms6 on ms6.id = m6.name_id
        JOIN test_metastrings mv6 on mv6.id = m6.value_id
        where ms1.string='expire_date' and mv1.string <= 1272565800
          and ms2.string='homecity' and mv2.string LIKE "%dasf%"
          and ms3.string='schoolname' and mv3.string LIKE "%asdf%"
          and ms4.string='award_amount' and mv4.string <= 123
          and ms5.string='no_of_awards' and mv5.string <= 7
          and ms6.string='avg_rating' and mv6.string <= 2
          and e.type = 'object' and e.subtype = 5 and e.site_guid = 1
          and (obj.title like '%asdf%') OR (obj.description like '%asdf%')
          and ( (e.access_id = -2 AND e.owner_guid IN (SELECT guid_one FROM test_entity_relationships WHERE relationship='friend' AND guid_two=5))
                OR (e.access_id IN (2,1) OR (e.owner_guid = 5) OR (e.access_id = 0 AND e.owner_guid = 5))
                and e.enabled='yes')
          and ( (m1.access_id = -2 AND m1.owner_guid IN (SELECT guid_one FROM test_entity_relationships WHERE relationship='friend' AND guid_two=5))
                OR (m1.access_id IN (2,1) OR (m1.owner_guid = 5) OR (m1.access_id = 0 AND m1.owner_guid = 5))
                and m1.enabled='yes')
          and ( (m2.access_id = -2 AND m2.owner_guid IN (SELECT guid_one FROM test_entity_relationships WHERE relationship='friend' AND guid_two=5))
                OR (m2.access_id IN (2,1) OR (m2.owner_guid = 5) OR (m2.access_id = 0 AND m2.owner_guid = 5))
                and m2.enabled='yes')
          and ( (m3.access_id = -2 AND m3.owner_guid IN (SELECT guid_one FROM test_entity_relationships WHERE relationship='friend' AND guid_two=5))
                OR (m3.access_id IN (2,1) OR (m3.owner_guid = 5) OR (m3.access_id = 0 AND m3.owner_guid = 5))
                and m3.enabled='yes')
          and ( (m4.access_id = -2 AND m4.owner_guid IN (SELECT guid_one FROM test_entity_relationships WHERE relationship='friend' AND guid_two=5))
                OR (m4.access_id IN (2,1) OR (m4.owner_guid = 5) OR (m4.access_id = 0 AND m4.owner_guid = 5))
                and m4.enabled='yes')
          and ( (m5.access_id = -2 AND m5.owner_guid IN (SELECT guid_one FROM test_entity_relationships WHERE relationship='friend' AND guid_two=5))
                OR (m5.access_id IN (2,1) OR (m5.owner_guid = 5) OR (m5.access_id = 0 AND m5.owner_guid = 5))
                and m5.enabled='yes')
          and ( (m6.access_id = -2 AND m6.owner_guid IN (SELECT guid_one FROM test_entity_relationships WHERE relationship='friend' AND guid_two=5))
                OR (m6.access_id IN (2,1) OR (m6.owner_guid = 5) OR (m6.access_id = 0 AND m6.owner_guid = 5))
                and m6.enabled='yes')
        order by obj.title limit 0, 10

    This is the query that I am running.
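
    To the title question, a hedged answer: MySQL does have hard limits - 61 tables per join, and the statement size is bounded by max_allowed_packet - but this query (about 20 joined tables) is under both, so the hang is more likely the optimizer struggling with this join/filter combination than a hard limit being hit. The statement-length bound is easy to check:

        -- Hedged sketch: the statement-length bound is a server variable;
        -- the 61-table join cap is fixed in MySQL itself.
        SHOW VARIABLES LIKE 'max_allowed_packet';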

    Read the article

  • for loop with count from array, limit output? PHP

    - by Philip
        print '<div id="wrap">';
        print "<table width=\"100%\" border=\"0\" align=\"center\" cellpadding=\"3\" cellspacing=\"3\">";
        for ($i = 0; $i < count($news_comments); $i++) {
            print '
                <tr>
                    <td width="30%"><strong>' . $news_comments[$i]['comment_by'] . '</strong></td>
                    <td width="70%">' . $news_comments[$i]['comment_date'] . '</td>
                </tr>
                <tr>
                    <td></td>
                    <td>' . $news_comments[$i]['comment'] . '</td>
                </tr>
            ';
        }
        print '</table></div>';

    $news_comments is a 3-dimensional array from mysqli_fetch_assoc, returned from a function elsewhere. For some reason my for loop returns the total of the array sets such as [0][2] etc. until it reaches the max amount from the counted $news_comments var, which is a return function of LIMIT 10. My problem is: if I add any text/html/icons inside the for loop, it prints it (in this case 11 times) even though only array sets 1 and 2 have data inside them. How do I get around this? My function query is as follows:

        function news_comments() {
            require_once '../data/queries.php';
            // get newsID from the url
            $urlID = $_GET['news_id'];
            // run our query for newsID information
            $news_comments = selectQuery('*', 'news_comments', 'WHERE news_id='.$urlID.'', 'ORDER BY comment_date', 'DESC', '10'); // requires 6 params
            // check query for results
            if (!$news_comments) {
                // loop error session and initiate var
                foreach ($_SESSION['errors'] as $error => $err) {
                    print htmlentities($err) . 'for News Comments, be the first to leave a comment!';
                }
            } else {
                print '<div id="wrap">';
                print "<table width=\"100%\" border=\"0\" align=\"center\" cellpadding=\"3\" cellspacing=\"3\">";
                for ($i = 0; $i < count($news_comments); $i++) {
                    print '
                        <tr>
                            <td width="30%"><strong>' . $news_comments[$i]['comment_by'] . '</strong></td>
                            <td width="70%">' . $news_comments[$i]['comment_date'] . '</td>
                        </tr>
                        <tr>
                            <td></td>
                            <td>' . $news_comments[$i]['comment'] . '</td>
                        </tr>
                    ';
                }
                print '</table></div>';
            }
        } // End function

    Any help is greatly appreciated.
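
    A hedged guess at the cause: count() on an associative row (the shape mysqli_fetch_assoc returns) counts columns, not comments. Collecting the rows into a numerically indexed list first makes count() mean "number of comments":

        // Hedged sketch - $result stands in for whatever selectQuery()
        // actually returns; adapt to its real return shape.
        $rows = array();
        while ($row = mysqli_fetch_assoc($result)) {
            $rows[] = $row;
        }
        for ($i = 0; $i < count($rows); $i++) {
            // print one comment per $rows[$i]
        }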

    Read the article

  • C#: Object having two constructors: how to limit which properties are set together?

    - by Dr. Zim
    Say you have a Price object that accepts either an (int quantity, decimal price) or a string containing "4/$3.99". Is there a way to limit which properties can be set together? Feel free to correct me in my logic below.

    The Test: A and B are equal to each other, but the C example should not be allowed. Thus the question: how to enforce that all three parameters are not invoked as in the C example?

        AdPrice A = new AdPrice { priceText = "4/$3.99"};                         // Valid
        AdPrice B = new AdPrice { qty = 4, price = 3.99m};                        // Valid
        AdPrice C = new AdPrice { qty = 4, priceText = "2/$1.99", price = 3.99m}; // Not

    The class:

        public class AdPrice {
            private int _qty;
            private decimal _price;
            private string _priceText;

    The constructors:

            public AdPrice() : this(qty: 0, price: 0.0m) {} // Default Constructor
            public AdPrice(int qty = 0, decimal price = 0.0m) { // Numbers only
                this.qty = qty;
                this.price = price;
            }
            public AdPrice(string priceText = "0/$0.00") { // String only
                this.priceText = priceText;
            }

    The Methods:

            private void SetPriceValues() {
                var matches = Regex.Match(_priceText, @"^\s?((?<qty>\d+)\s?/)?\s?[$]?\s?(?<price>[0-9]?\.?[0-9]?[0-9]?)");
                if (matches.Success) {
                    if (!Decimal.TryParse(matches.Groups["price"].Value, out this._price))
                        this._price = 0.0m;
                    if (!Int32.TryParse(matches.Groups["qty"].Value, out this._qty))
                        this._qty = (this._price > 0 ? 1 : 0);
                    else if (this._price > 0 && this._qty == 0)
                        this._qty = 1;
                }
            }
            private void SetPriceString() {
                this._priceText = (this._qty > 1 ? this._qty.ToString() + '/' : "") + String.Format("{0:C}", this.price);
            }

    The Accessors:

            public int qty {
                get { return this._qty; }
                set { this._qty = value; this.SetPriceString(); }
            }
            public decimal price {
                get { return this._price; }
                set { this._price = value; this.SetPriceString(); }
            }
            public string priceText {
                get { return this._priceText; }
                set { this._priceText = value; this.SetPriceValues(); }
            }
        }
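
    A hedged observation: object-initializer syntax is just the default constructor followed by individual property sets, so no constructor arrangement can veto example C - each setter runs in isolation. Making the setters private and passing values only through the constructors removes the mixed form at compile time:

        // Hedged sketch: construction becomes the only write path, so the
        // mixed initializer in example C no longer compiles.
        var a = new AdPrice("4/$3.99");   // string form
        var b = new AdPrice(4, 3.99m);    // quantity/price form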

    Read the article

  • Numpy zero rank array indexing/broadcasting

    - by Lemming
    I'm trying to write a function that supports broadcasting and is fast at the same time. However, numpy's zero-rank arrays are causing trouble as usual. I couldn't find anything useful on google, or by searching here. So, I'm asking you: how should I implement broadcasting efficiently and handle zero-rank arrays at the same time? This whole post became larger than anticipated, sorry.

    Details: To clarify what I'm talking about I'll give a simple example: say I want to implement a Heaviside step-function. I.e. a function that acts on the real axis, which is 0 on the negative side, 1 on the positive side, and from case to case either 0, 0.5, or 1 at the point 0.

    Implementation - Masking: The most efficient way I found so far is the following. It uses boolean arrays as masks to assign the correct values to the corresponding slots in the output vector.

        from numpy import *

        def step_mask(x, limit=+1):
            """Heaviside step-function.

            y = 0 if x < 0
            y = 1 if x > 0
            See below for x == 0.

            Arguments:
            x      Evaluate the function at these points.
            limit  Which limit at x == 0?
                   limit > 0:  y = 1
                   limit == 0: y = 0.5
                   limit < 0:  y = 0

            Return:
            The values corresponding to x.
            """
            b = broadcast(x, limit)
            out = zeros(b.shape)
            out[x > 0] = 1
            mask = (limit > 0) & (x == 0)
            out[mask] = 1
            mask = (limit == 0) & (x == 0)
            out[mask] = 0.5
            mask = (limit < 0) & (x == 0)
            out[mask] = 0
            return out

    List Comprehension: The following-the-numpy-docs way is to use a list comprehension on the flat iterator of the broadcast object. However, list comprehensions become absolutely unreadable for such complicated functions.

        def step_comprehension(x, limit=+1):
            b = broadcast(x, limit)
            out = empty(b.shape)
            out.flat = [ ( 1   if x_ > 0 else
                         ( 0   if x_ < 0 else
                         ( 1   if l_ > 0 else
                         ( 0.5 if l_ == 0 else
                         ( 0 )))))
                         for x_, l_ in b ]
            return out

    For Loop: And finally, the most naive way is a for loop. It's probably the most readable option. However, Python for-loops are anything but fast, and hence a really bad idea in numerics.

        def step_for(x, limit=+1):
            b = broadcast(x, limit)
            out = empty(b.shape)
            for i, (x_, l_) in enumerate(b):
                if x_ > 0:
                    out[i] = 1
                elif x_ < 0:
                    out[i] = 0
                elif l_ > 0:
                    out[i] = 1
                elif l_ < 0:
                    out[i] = 0
                else:
                    out[i] = 0.5
            return out

    Test: First of all, a brief test to see if the output is correct.

        >>> x = array([-1, -0.1, 0, 0.1, 1])
        >>> step_mask(x, +1)
        array([ 0.,  0.,  1.,  1.,  1.])
        >>> step_mask(x, 0)
        array([ 0. ,  0. ,  0.5,  1. ,  1. ])
        >>> step_mask(x, -1)
        array([ 0.,  0.,  0.,  1.,  1.])

    It is correct, and the other two functions give the same output.

    Performance: How about efficiency? These are the timings:

        In [45]: xl = linspace(-2, 2, 500001)

        In [46]: %timeit step_mask(xl)
        10 loops, best of 3: 19.5 ms per loop

        In [47]: %timeit step_comprehension(xl)
        1 loops, best of 3: 1.17 s per loop

        In [48]: %timeit step_for(xl)
        1 loops, best of 3: 1.15 s per loop

    The masked version performs best as expected. However, I'm surprised that the comprehension is on the same level as the for loop.

    Zero Rank Arrays: But 0-rank arrays pose a problem. Sometimes you want to use a function on scalar input, and preferably not have to worry about wrapping all scalars in at least 1-D arrays.

        >>> step_mask(1)
        Traceback (most recent call last):
          File "<ipython-input-50-91c06aa4487b>", line 1, in <module>
            step_mask(1)
          File "script.py", line 22, in step_mask
            out[x>0] = 1
        IndexError: 0-d arrays can't be indexed.

        >>> step_for(1)
        Traceback (most recent call last):
          File "<ipython-input-51-4e0de4fcb197>", line 1, in <module>
            step_for(1)
          File "script.py", line 55, in step_for
            out[i] = 1
        IndexError: 0-d arrays can't be indexed.

        >>> step_comprehension(1)
        array(1.0)

    Only the list comprehension can handle 0-rank arrays; the other two versions would need special-case handling for 0-rank arrays. Numpy gets a bit messy when you want to use the same code for arrays and scalars. However, I really like to have functions that work on as arbitrary input as possible. Who knows which parameters I'll want to iterate over at some point.

    Question: What is the best way to implement a function like the one above? Is there a way to avoid "if scalar then..." special cases? I'm not looking for a built-in Heaviside; it's just a simplified example. In my code the above pattern appears in many places to make parameter iteration as simple as possible without littering the client code with for loops or comprehensions. Furthermore, I'm aware of Cython, or weave & Co., or implementation directly in C. However, the performance of the masked version above is sufficient for the moment, and for the moment I would like to keep things as simple as possible.
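
    A hedged sketch of one way out, not from the original post: nested np.where broadcasts like the masked version, accepts 0-d input, and a trailing out[()] collapses a 0-d result back to a scalar while leaving real arrays untouched:

        # Hedged sketch: broadcast-friendly and 0-d-safe Heaviside.
        import numpy as np

        def step_where(x, limit=+1):
            x = np.asarray(x, dtype=float)
            limit = np.asarray(limit)
            # value at exactly x == 0, chosen by the sign of limit
            at_zero = np.where(limit > 0, 1.0, np.where(limit < 0, 0.0, 0.5))
            out = np.where(x > 0, 1.0, np.where(x < 0, 0.0, at_zero))
            return out[()]  # 0-d -> scalar; n-d arrays pass through unchanged

        print(step_where([-1, 0, 1]))  # [0. 1. 1.] with the default limit
        print(step_where(1))           # 1.0 - plain scalar in, scalar out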

    Read the article

  • Proftpd on Ubuntu - Create directory permission denied (550) after upgrade to 9.10

    - by Ian
    Hi all, I am having problems with ProFTPD since I upgraded to Ubuntu 9.10 from 9.04. When I log in as my ftp user (userftp) in the terminal, I can create dirs fine in their home dir. But when I use ftp as this user, permission is denied (550 asl: permission denied) when I try to do the same operation (creating a dir). Uploading files is fine though. I am using the same config for proftpd as I was before; I can't understand what's wrong. Any help appreciated! Config follows:

        Include /etc/proftpd/modules.conf
        UseIPv6 on
        IdentLookups off
        ServerName "whatever"
        ServerType inetd
        DeferWelcome off
        MultilineRFC2228 on
        DefaultServer on
        ShowSymlinks on
        TimeoutNoTransfer 600
        TimeoutStalled 600
        TimeoutIdle 1200
        DisplayLogin welcome.msg
        DisplayChdir .message true
        ListOptions "-l"
        DenyFilter \*.*/
        DefaultRoot ~
        Port 21
        <IfModule mod_dynmasq.c>
        </IfModule>
        MaxInstances 8
        User proftpd
        Group nogroup
        Umask 022 022
        AllowOverwrite on
        TransferLog /var/log/proftpd/xferlog
        SystemLog /var/log/proftpd/proftpd.log
        <IfModule mod_quotatab.c>
            QuotaEngine off
        </IfModule>
        <IfModule mod_ratio.c>
            Ratios off
        </IfModule>
        <IfModule mod_delay.c>
            DelayEngine on
        </IfModule>
        <IfModule mod_ctrls.c>
            ControlsEngine off
            ControlsMaxClients 2
            ControlsLog /var/log/proftpd/controls.log
            ControlsInterval 5
            ControlsSocket /var/run/proftpd/proftpd.sock
        </IfModule>
        <IfModule mod_ctrls_admin.c>
            AdminControlsEngine off
        </IfModule>

        #
        # My additions
        #
        MaxLoginAttempts 5

        #
        # My user config
        #
        # VALID LOGINS
        <Limit LOGIN>
            AllowUser userftp
            DenyALL
        </Limit>

        <Directory /home/userftp>
            Umask 022 022
            AllowOverwrite off
            <Limit MKD STOR DELE XMKD RNRF RNTO RMD XRMD>
                DenyAll
            </Limit>
        </Directory>

        <Directory /home/userftp/upload/>
            Umask 022 022
            AllowOverwrite on
            <Limit READ>
                DenyAll
            </Limit>
            <Limit STOR CWD MKD RMD DELE>
                AllowAll
            </Limit>
        </Directory>
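
    One hedged reading of the config as posted: the <Directory /home/userftp> block explicitly denies MKD/XMKD, and only /home/userftp/upload/ re-allows them, so a 550 on mkdir anywhere outside upload/ is what this config asks for - and the shell login isn't subject to these <Limit> blocks, which would explain the terminal/ftp difference. If directory creation inside upload/ itself fails, allowing XMKD there too is worth a try:

        # Hedged sketch: some clients issue XMKD rather than MKD.
        <Limit STOR CWD MKD XMKD RMD DELE>
            AllowAll
        </Limit>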

    Read the article

  • Set automatic axis limits without defaulting to zero (Excel)

    - by djeidot
    I am building a bar chart in Excel with data values ranging from e.g. 10 to 20. I want the X axis limits to be automatic, but although the right limit (near 20) works correctly, the left limit always defaults to 0. I'd like the left limit to be near 10, instead of zero, without having to fix the limit manually. Is there any way to do this?

    Read the article

  • SQL Server 2008 ContainsTable, CTE, and Paging

    - by David Murdoch
    I'd like to perform efficient paging using CONTAINSTABLE. The following query selects the top 10 ranked results from my database using CONTAINSTABLE when searching for a name (first or last) that begins with "Joh":

        DECLARE @Limit int;
        SET @Limit = 10;

        SELECT TOP (@Limit) c.ChildID, c.PersonID, c.DOB, c.Gender
        FROM [Person].[vFullName] AS v
        INNER JOIN CONTAINSTABLE(
                [Person].[vFullName],
                (FullName),
                'IS ABOUT ("Joh*" WEIGHT (.4), "Joh" WEIGHT (.6))'
            ) AS k3 ON v.PersonID = k3.[KEY]
        JOIN [Child].[Details] c ON c.PersonID = v.PersonID
        JOIN [Person].[Details] p ON p.PersonID = c.PersonID
        ORDER BY k3.RANK DESC, FullName ASC, p.Active DESC, c.ChildID ASC

    I'd like to combine it with the following CTE, which returns the 10th-20th results ordered by ChildID (the primary key):

        DECLARE @Start int;
        DECLARE @Limit int;
        SET @Start = 10;
        SET @Limit = 10;

        WITH ChildEntities AS (
            SELECT ROW_NUMBER() OVER (ORDER BY ChildID) AS Row, ChildID
            FROM Child.Details
        )
        SELECT c.ChildID, c.PersonID, c.DOB, c.Gender
        FROM ChildEntities cte
        INNER JOIN Child.Details c ON cte.ChildID = c.ChildID
        WHERE cte.Row BETWEEN @Start+1 AND @Start+@Limit
        ORDER BY cte.Row ASC
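
    A hedged way to combine the two, keeping the full-text ranking as the page order: number the ranked hits with ROW_NUMBER() inside the CTE, then page on that column (using the @Start/@Limit declarations above):

        -- Hedged sketch: page over full-text rank instead of ChildID.
        WITH Ranked AS (
            SELECT c.ChildID, c.PersonID, c.DOB, c.Gender,
                   ROW_NUMBER() OVER (ORDER BY k3.RANK DESC, c.ChildID ASC) AS Row
            FROM [Person].[vFullName] AS v
            INNER JOIN CONTAINSTABLE(
                    [Person].[vFullName], (FullName),
                    'IS ABOUT ("Joh*" WEIGHT (.4), "Joh" WEIGHT (.6))'
                ) AS k3 ON v.PersonID = k3.[KEY]
            JOIN [Child].[Details] c ON c.PersonID = v.PersonID
        )
        SELECT ChildID, PersonID, DOB, Gender
        FROM Ranked
        WHERE Row BETWEEN @Start+1 AND @Start+@Limit
        ORDER BY Row ASC;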

    Read the article

  • [Android] For-Loop Performance Oddity

    - by Jack Holt
    I just noticed something concerning for-loop performance that seems to fly in the face of the recommendations given by the Google Android team. Look at the following code:

        package com.jackcholt;

        import android.app.Activity;
        import android.os.Bundle;
        import android.util.Log;

        public class Main extends Activity {
            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main);
                loopTest();
                finish();
            }

            private void loopTest() {
                final long loopCount = 1228800;
                final int[] image = new int[8 * 320 * 480];

                long start = System.currentTimeMillis();
                for (int i = 0; i < (8 * 320 * 480); i++) {
                    image[i] = i;
                }
                for (int i = 0; i < (8 * 320 * 480); i++) {
                    image[i] = i;
                }
                Log.i("loopTest", "Elapsed time (recompute loop limit): " + (System.currentTimeMillis() - start));

                start = System.currentTimeMillis();
                for (int i = 0; i < 1228800; i++) {
                    image[i] = i;
                }
                for (int i = 0; i < 1228800; i++) {
                    image[i] = i;
                }
                Log.i("loopTest", "Elapsed time (literal loop limit): " + (System.currentTimeMillis() - start));

                start = System.currentTimeMillis();
                for (int i = 0; i < loopCount; i++) {
                    image[i] = i;
                }
                for (int i = 0; i < loopCount; i++) {
                    image[i] = i;
                }
                Log.i("loopTest", "Elapsed time (precompute loop limit): " + (System.currentTimeMillis() - start));
            }
        }

    When I run this code I get the following output in logcat:

        I/loopTest( 726): Elapsed time (recompute loop limit): 759
        I/loopTest( 726): Elapsed time (literal loop limit): 755
        I/loopTest( 726): Elapsed time (precompute loop limit): 1317

    As you can see, the code that seems to recompute the loop limit on every iteration compares very well to the code that uses a literal value for the loop limit. However, the code that uses a variable containing the precomputed loop limit is significantly slower than either of the others. I'm not surprised that accessing a variable should be slower than using a literal, but why is code that looks like it should be doing two multiplications on every iteration of the loop so comparable in performance to a literal? Could it be that because literals are the only thing being multiplied, the Java compiler is optimizing out the multiplication and using a precomputed literal?
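
    A hedged reading of the numbers: 8 * 320 * 480 is a constant expression, so javac folds it to the literal 1228800 at compile time - the first two variants compile to identical loop bounds, which is why they match. The likely culprit in the third variant is not the variable access but its type: loopCount is a long, so every i < loopCount comparison promotes i to a 64-bit compare. Declaring the bound as int should make all three equivalent:

        // Hedged sketch: same precomputed bound, but as int.
        final int loopCount = 8 * 320 * 480;  // folded to 1228800 by javac
        for (int i = 0; i < loopCount; i++) {
            image[i] = i;
        }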

    Read the article

  • How to add a limit option to a Magento API call?

    - by ritesh
    I am creating a web service for my store. I am using the Magento API to collect the product list from the store, but it displays all 500 records, and I want 25 records per page. What do I add to the API call, or what filter should be applied for this?

        // create soap object
        $proxy = new SoapClient('http://localhost/magento/api/soap/?wsdl');

        // create authorized session id using api user name and api key
        // $sessionId = $proxy->login('apiUser', 'apiKey');
        $sessionId = $proxy->login('test_admin', '12345678');

        $filters = array();

        // Get list of products
        $productlist = $proxy->call($sessionId, 'product.list', array($filters));
        print_r($productlist);

    Read the article

  • Get JVM to grow memory demand as needed up to size of VM limit?

    - by Ira Baxter
    We ship a Java application whose memory demand can vary quite a lot depending on the size of the data it is processing. If you don't set the max VM (virtual memory) size, the JVM quite often quits with a GC failure on big data. What we'd like to see is the JVM requesting more memory as GC fails to provide enough, until the total available VM is exhausted - e.g., start with 128MB and increase geometrically (or by some other step) whenever GC fails.

    The JVM ("java") command line allows explicit setting of max VM sizes (the various -Xm* options), and you'd think that would be designed to be adequate. We try to do this in a .cmd file that we ship with the application. But if you pick any specific number, you get one of two bad behaviors: 1) if your number is small enough to work on most target systems (e.g., 1GB), it isn't big enough for big data, or 2) if you make it very large, the JVM refuses to run on those systems whose actual VM is smaller than specified.

    How does one set up Java to use the available VM when needed, without knowing that number in advance, and without grabbing it all on startup?
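
    A hedged note on the defaults: HotSpot already grows the heap on demand between -Xms and -Xmx (only the initial size is committed up front), and -Xmx is a reservation of address space rather than an upfront grab of physical memory, so a small initial size with a generous ceiling approximates the requested behavior - though a 32-bit JVM may still refuse an -Xmx larger than its address space, which is behavior 2 above:

        # Hedged sketch - ourapp.jar is a placeholder: start small, let
        # the heap grow up to 4 GB only as GC pressure demands it.
        java -Xms128m -Xmx4g -jar ourapp.jar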

    Read the article

  • How to limit an upload speed in a Java servlet?

    - by den-javamaniac
    Hi. I'm working on an app (based on Spring as the DI and MVC framework) that has a file upload function, currently implemented using Spring multipart upload (which in turn utilizes the commons-fileupload libs). What I'm looking for is a way to lower the upload bandwidth consumption. How can I accomplish that?
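
    One hedged approach: since commons-fileupload consumes the request as a stream, wrapping that stream in a throttling decorator caps the server-side read rate, and the client's TCP send rate then backs off with it. A minimal sketch (array reads only; single-byte read() calls bypass the cap here):

        import java.io.FilterInputStream;
        import java.io.IOException;
        import java.io.InputStream;

        /** Hedged sketch: caps read throughput at roughly maxBytesPerSecond. */
        public class ThrottledInputStream extends FilterInputStream {
            private final long maxBytesPerSecond;
            private final long start = System.currentTimeMillis();
            private long bytesRead = 0;

            public ThrottledInputStream(InputStream in, long maxBytesPerSecond) {
                super(in);
                this.maxBytesPerSecond = maxBytesPerSecond;
            }

            @Override
            public int read(byte[] b, int off, int len) throws IOException {
                int n = super.read(b, off, len);
                if (n > 0) {
                    bytesRead += n;
                    // sleep long enough that the average rate stays under the cap
                    long expectedMillis = bytesRead * 1000 / maxBytesPerSecond;
                    long actualMillis = System.currentTimeMillis() - start;
                    if (expectedMillis > actualMillis) {
                        try {
                            Thread.sleep(expectedMillis - actualMillis);
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();
                            throw new IOException("interrupted while throttling", e);
                        }
                    }
                }
                return n;
            }
        }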

    Read the article

  • Performance Enhancement in Full-Text Search Query

    - by Calvin Sun
    Ever since its first release, we have continued consolidating and developing the InnoDB Full-Text Search feature. One recent improvement is worth blogging about: an effort with the MySQL Optimizer team that simplifies the query plans of some common queries and dramatically shortens the query time. I will describe the issue, our solution, and the end result, with some performance numbers to demonstrate our continuing enhancement of the Full-Text Search capability.

    The Issue: As we have discussed in previous blogs, InnoDB implements the Full-Text index as a set of auxiliary tables forming an inverted index. A query, once parsed, is reinterpreted into several queries against the related auxiliary tables, and the results are merged and consolidated into the final result. So at the end of the query we have all matching records on hand, sorted by their ranking or by their Doc IDs. Unfortunately, MySQL's optimizer and query processing had initially been designed for the MyISAM Full-Text index, and sometimes did not fully utilize the complete result package from InnoDB. Here are a couple of examples:

    Case 1: Query result ordered by rank, with only the top N results:

        mysql> SELECT FTS_DOC_ID, MATCH (title, body) AGAINST ('database') AS SCORE
               FROM articles ORDER BY score DESC LIMIT 1;

    In this query, the user tries to retrieve the single record with the highest ranking. It should have a quick answer once we have all the matching documents on hand, especially since they are already ranked. However, before this change, MySQL would retrieve rankings for almost every row in the table, sort them, and then come up with the top-ranked result. This whole retrieve-and-sort is quite unnecessary, given that InnoDB already has the answer. In a real-life case, a user could have millions of rows, so in the old scheme millions of rankings would be retrieved and sorted even if the FTS had already found only 3 matching rows - the million-ranking retrieval is done in vain. In the above case, it should just ask for the 3 matching rows' rankings; all other rows' rankings are 0. If it wants the top ranking, it can just take the first record from our already-sorted result.

    Case 2: SELECT COUNT(*) on matching records:

        mysql> SELECT COUNT(*) FROM articles
               WHERE MATCH (title,body) AGAINST ('database' IN NATURAL LANGUAGE MODE);

    In this case, the InnoDB search can find the matching rows quickly and will have all of them on hand. However, before our change, every row in the table was requested by MySQL one by one, just to check whether its ranking was larger than 0, with a count computed afterwards. In fact, there is no need for MySQL to fetch all rows; InnoDB already has all the matching records, and the only thing needed is an InnoDB API call to retrieve the count. The difference can be huge. The following query output shows how big it can be:

        mysql> select count(*) from searchindex_inno
               where match(si_title, si_text) against ('people');
        +----------+
        | count(*) |
        +----------+
        |   666877 |
        +----------+
        1 row in set (16 min 17.37 sec)

    So the query took almost 16 minutes. Let's see how long InnoDB takes to come up with the result. In InnoDB, you can obtain extra diagnostic printout by turning on "innodb_ft_enable_diag_print"; this prints extra query info to the error log:

        keynr=2, 'people'
        NL search
        Total docs: 10954826 Total words: 0
        UNION: Searching: 'people'
        Processing time: 2 secs: row(s) 666877: error: 10
        ft_init()
        ft_init_ext()
        keynr=2, 'people'
        NL search
        Total docs: 10954826 Total words: 0
        UNION: Searching: 'people'
        Processing time: 3 secs: row(s) 666877: error: 10

    The output shows it took InnoDB only 3 seconds to get the result, while the whole query took 16 minutes to finish. A large amount of time was wasted on unneeded row fetching.

    The Solution: The solution is obvious. MySQL can skip some of its steps, optimize its plan, and obtain useful information directly from InnoDB. The savings from doing this include:

    1) Avoid redundant sorting. Since InnoDB already sorted the result according to ranking, the MySQL query processing layer does not need to sort again to get the top matching results.

    2) Avoid row-by-row fetching to get the matching count. InnoDB provides all the matching records; every row not in the result list has a ranking of 0 and need not be retrieved. And InnoDB has the count of total matching records on hand - no need to recount.

    3) Covered index scan. InnoDB results always contain the matching records' Doc IDs and their rankings. So if only the Doc ID and ranking are needed, there is no need to go to the user table to fetch the record itself.

    4) Narrow the search result early, reducing user-table access. If the user wants the top N matching records, we do not need to fetch all matching records from the user table. We can first select the top N matching Doc IDs, and then fetch only the corresponding records.

    Performance Results and comparison with MyISAM: The effect of this change is very obvious. I include six test results performed by Alexander Rubin, just to demonstrate how fast InnoDB queries now are compared to MyISAM Full-Text Search. These tests are based on the English Wikipedia data of 5.4 million rows and an approximately 16GB table. The test was performed on a machine with 1 dual-core CPU, an SSD drive, 8GB of RAM, and the InnoDB buffer pool set to 8GB.

    Table 1: SELECT with LIMIT clause

        mysql> SELECT si_title, match(si_title, si_text) against('family') as rel
               FROM si
               WHERE match(si_title, si_text) against('family')
               ORDER BY rel desc LIMIT 10;

                                InnoDB      MyISAM            Times faster
        Time for the query      1.63 sec    3 min 26.31 sec   127

    You can see that for this particular query (retrieve the top 10 records), InnoDB Full-Text Search is now approximately 127 times faster than MyISAM.

    Table 2: SELECT COUNT query

        mysql> select count(*) from si where match(si_title, si_text) against('family');
        +----------+
        | count(*) |
        +----------+
        |   293955 |
        +----------+

                                InnoDB      MyISAM            Times faster
        Time for the query      1.35 sec    28 min 59.59 sec  1289

    In this particular case, with 293k matching results, InnoDB took only 1.35 seconds to get all of them, while it took MyISAM almost half an hour - about 1289 times faster!

    Table 3: SELECT ID with ORDER BY and LIMIT clause for selected terms

        mysql> SELECT <ID>, match(si_title, si_text) against(<TERM>) as rel
               FROM si_<TB>
               WHERE match(si_title, si_text) against (<TERM>)
               ORDER BY rel desc LIMIT 10;

        Term                                        InnoDB     MyISAM     Times faster
        family                                      0.5 sec    5.05 sec   10.1
        family film                                 0.95 sec   25.39 sec  26.7
        Pizza restaurant orange county California   0.93 sec   32.03 sec  34.4
        President united states of America          2.5 sec    36.98 sec  14.8

    Table 4: SELECT title and text with ORDER BY and LIMIT clause for selected terms

        mysql> SELECT <ID>, si_title, si_text, ... as rel
               FROM si_<TB>
               WHERE match(si_title, si_text) against (<TERM>)
               ORDER BY rel desc LIMIT 10;

        Term                                        InnoDB     MyISAM     Times faster
        family                                      0.61 sec   41.65 sec  68.3
        family film                                 1.15 sec   47.17 sec  41.0
        Pizza restaurant orange county california   1.03 sec   48.2 sec   46.8
        President united states of america          2.49 sec   44.61 sec  17.9

    Table 5: SELECT ID with ORDER BY and LIMIT clause for selected terms

        mysql> SELECT <ID>, match(si_title, si_text) against(<TERM>) as rel
               FROM si_<TB>
               WHERE match(si_title, si_text) against (<TERM>)
               ORDER BY rel desc LIMIT 10;

        Term                                        InnoDB     MyISAM     Times faster
        family                                      0.5 sec    5.05 sec   10.1
        family film                                 0.95 sec   25.39 sec  26.7
        Pizza restaurant orange county califormia   0.93 sec   32.03 sec  34.4
        President united states of america          2.5 sec    36.98 sec  14.8

    Table 6: SELECT COUNT(*)

        mysql> SELECT count(*) FROM si_<TB>
               WHERE match(si_title, si_text) against (<TERM>) LIMIT 10;

        Term                                        InnoDB     MyISAM    Times faster
        family                                      0.47 sec   82 sec    174.5
        family film                                 0.83 sec   131 sec   157.8
        Pizza restaurant orange county califormia   0.74 sec   106 sec   143.2
        President united states of america          1.96 sec   220 sec   112.2

    Again, tables 3 to 6 all show InnoDB consistently outperforming MyISAM in these queries by a large margin. It has become obvious that InnoDB has a great advantage over MyISAM in handling large data search.

    Summary: These results demonstrate the great performance we can achieve by making the MySQL optimizer and InnoDB Full-Text Search more tightly coupled. I think there are still many cases where InnoDB's result info has not been fully taken advantage of, which means we still have great room to improve. We will continue to explore this area and get more dramatic results for InnoDB full-text searches.

    Jimmy Yang, September 29, 2012

    Read the article

  • (Oracle Performance) Will a query based on a view limit the view using the where clause?

    - by BestPractices
    In Oracle (10g), when I use a view (not a materialized view), does Oracle take my WHERE clause into account when it executes the view? Let's say I have:

        MY_VIEW = SELECT * FROM PERSON P, ORDERS O WHERE P.P_ID = O.P_ID

    And I then execute the following:

        SELECT * FROM MY_VIEW WHERE MY_VIEW.P_ID = '1234'

    When this executes, does Oracle first execute the query for the view and THEN filter it based on my WHERE clause (where MY_VIEW.P_ID = '1234'), or does it do this filtering as part of the execution of the view? If it does not do the latter, and P_ID has an index, would I also lose out on the indexing capability, since Oracle would be executing my query against the view (which doesn't have the index) rather than the base table (which does)?
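
    A hedged answer sketch: for a simple view like this, Oracle normally performs view merging - the view text is folded into the outer query, so the predicate is applied against the base tables and the P_ID index remains usable. The execution plan makes this easy to verify:

        -- Hedged sketch: check whether the predicate reaches the base tables.
        EXPLAIN PLAN FOR SELECT * FROM MY_VIEW WHERE P_ID = '1234';
        SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);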

    Read the article
