Search Results

Search found 5021 results on 201 pages for 'limit'.


  • Can I be too old to be just a programmer? [closed]

    - by Tigran
    Possible Duplicate: How old is "too old"? Looking at the post Can I be "too young" to get a programming job?, I would like to ask: I am 35 years old. In your country, am I too old to be just a programmer, without jumping into marketing meetings, mails, and client management? In the country where I live now, for example, I'm very close to the age limit beyond which I'd ever have a chance of getting a phone call for a plain software engineer position. What about where you are? Is there any age limit in that sense?


  • Runtime error in C code (strange double conversion)

    - by Miro Hassan
    I have a strange runtime error in my C code. The Integers comparison here works fine. But in the Decimals comparison, I always get that the second number is larger than the first number, which is false. I am pretty new to C and programming in general, so this is a complex application to me.

        #include <stdio.h>
        #include <stdbool.h>
        #include <stdlib.h>

        int choose;
        long long neLimit = -1000000000;
        long long limit = 1000000000;

        bool big(a,b) {
            if ((a >= limit) || (b >= limit))
                return true;
            else if ((a <= neLimit) || (b <= neLimit))
                return true;
            return false;
        }

        void largerr(a,b) {
            if (a > b)
                printf("\nThe First Number is larger ..\n");
            else if (a < b)
                printf("\nThe Second Number is larger ..\n");
            else
                printf("\nThe Two Numbers are Equal .. \n");
        }

        int main() {
            system("color e && title Numbers Comparison && echo off && cls");
        start:
            {
                printf("Choose a Type of Comparison :\n\t1. Integers\n\t2. Decimals \n\t\t I Choose Number : ");
                scanf("%i", &choose);
                switch(choose) {
                    case 1: goto Integers; break;
                    case 2: goto Decimals; break;
                    default:
                        system("echo Please Choose a Valid Option && pause>nul && cls");
                        goto start;
                }
            }
        Integers:
            {
                system("title Integers Comparison && cls");
                long x , y;
                printf("\nFirst Number : \t");
                scanf("%li", &x);
                printf("\nSecond Number : ");
                scanf("%li", &y);
                if (big(x,y)) {
                    printf("\nOut of Limit .. Too Big Numbers ..\n");
                    system("pause>nul && cls");
                    goto Integers;
                }
                largerr(x,y);
                printf("\nFirst Number : %li\nSecond Number : %li\n", x, y);
                goto exif;
            }
        Decimals:
            {
                system("title Decimals Comparison && cls");
                double x , y;
                printf("\nFirst Number : \t");
                scanf("%le", &x);
                printf("\nSecond Number : ");
                scanf("%le", &y);
                if (big(x,y)) {
                    printf("\nOut of Limit .. Too Big Numbers ..\n");
                    system("pause>nul && cls");
                    goto Decimals;
                }
                largerr(x,y);
                goto exif;
            }
        exif:
            {
                system("pause>nul");
                system("cls");
                main();
            }
        }
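
    The likely culprit, for what it's worth: big(a,b) and largerr(a,b) are old-style (K&R) definitions whose untyped parameters default to int, so the double arguments passed from the Decimals branch are reinterpreted as garbage integers, which is undefined behavior. A minimal sketch with typed prototypes (an assumption about the intent; double can also represent the long values used here exactly on common platforms):

        #include <stdbool.h>
        #include <stdio.h>

        /* Typed parameters: doubles now arrive as doubles. */
        bool big(double a, double b) {
            return a >= 1000000000.0 || b >= 1000000000.0 ||
                   a <= -1000000000.0 || b <= -1000000000.0;
        }

        void largerr(double a, double b) {
            if (a > b)
                printf("\nThe First Number is larger ..\n");
            else if (a < b)
                printf("\nThe Second Number is larger ..\n");
            else
                printf("\nThe Two Numbers are Equal .. \n");
        }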


  • Migrating from PostgreSQL to Oracle RAC (Japanese article)

    - by Kumiko Fujita
    [Most of this Japanese-language article was lost to character-encoding damage; the recoverable technical content is summarized below.] The article describes migrating a roughly 500 GB production database, running around the clock, from OSS PostgreSQL (whose VACUUM maintenance had become an operational concern) to Oracle Database 11gR2 Enterprise Edition with the Partitioning option, in an Active/Standby HA configuration on SAN storage. Data was exported from PostgreSQL to CSV files and loaded into Oracle Database with SQL*Loader. One pitfall noted: PostgreSQL distinguishes NULL from the empty string '', while Oracle Database treats '' as NULL, so such columns must be reconciled during migration. The data type mapping used:

        Category   PostgreSQL         Oracle Database
        --------   ----------         ---------------
        Character  CHAR(n)            CHAR(n), CLOB
                   VARCHAR(n)         VARCHAR2(n), CLOB
                   TEXT               CLOB
        Numeric    NUMERIC            NUMBER
                   INTEGER            NUMBER
                   SMALLINT           NUMBER
                   BIGINT             NUMBER
                   REAL               NUMBER
                   DOUBLE PRECISION   NUMBER
        Date/time  DATE               DATE
                   TIMESTAMP          TIMESTAMP
        Binary     bytea              BLOB
        LOB        (large objects)    BFILE / SecureFiles
        Row ID     OID                ROWID

    SQL also needs rewriting: PostgreSQL's LIMIT/OFFSET pagination has no direct equivalent in Oracle Database 11g and is expressed with ROWNUM instead (column and table names are placeholders for the original Japanese identifiers):

        /* PostgreSQL: LIMIT/OFFSET */
        SELECT column_list
        FROM   some_table
        ORDER  BY sort_key
        LIMIT  2 OFFSET 5;

        /* Oracle Database equivalent */
        SELECT column_list
        FROM  (SELECT column_list, ROWNUM line_no
               FROM  (SELECT column_list
                      FROM   some_table
                      ORDER  BY sort_key))
        WHERE  line_no BETWEEN 6 AND 7;

    The remainder of the article walks through migration rehearsal and cutover planning: trial loads of the multi-gigabyte data, performance testing and tuning against the new DBMS, and regression testing of application SQL, closing with general advice on planning DBMS migrations rather than treating them as simple data copies.


  • Modify code to change timestamp timezone in sitemap

    - by Aahan Krish
    Below is the code from a plugin I use for sitemaps. I would like to know if there's a way to "enforce" a different timezone on all date variable and functions in it. If not, how do I modify the code to change the timestamp to reflect a different timezone? Say for example, to America/New_York timezone. Please look for date to quickly get to the appropriate code blocks. Code on Pastebin. <?php /** * @package XML_Sitemaps */ class WPSEO_Sitemaps { ..... /** * Build the root sitemap -- example.com/sitemap_index.xml -- which lists sub-sitemaps * for other content types. * * @todo lastmod for sitemaps? */ function build_root_map() { global $wpdb; $options = get_wpseo_options(); $this->sitemap = '<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">' . "\n"; $base = $GLOBALS['wp_rewrite']->using_index_permalinks() ? 'index.php/' : ''; // reference post type specific sitemaps foreach ( get_post_types( array( 'public' => true ) ) as $post_type ) { if ( $post_type == 'attachment' ) continue; if ( isset( $options['post_types-' . $post_type . '-not_in_sitemap'] ) && $options['post_types-' . $post_type . '-not_in_sitemap'] ) continue; $count = $wpdb->get_var( $wpdb->prepare( "SELECT COUNT(ID) FROM $wpdb->posts WHERE post_type = %s AND post_status = 'publish' LIMIT 1", $post_type ) ); // don't include post types with no posts if ( !$count ) continue; $n = ( $count > 1000 ) ? (int) ceil( $count / 1000 ) : 1; for ( $i = 0; $i < $n; $i++ ) { $count = ( $n > 1 ) ? $i + 1 : ''; if ( empty( $count ) || $count == $n ) { $date = $this->get_last_modified( $post_type ); } else { $date = $wpdb->get_var( $wpdb->prepare( "SELECT post_modified_gmt FROM $wpdb->posts WHERE post_status = 'publish' AND post_type = %s ORDER BY post_modified_gmt ASC LIMIT 1 OFFSET %d", $post_type, $i * 1000 + 999 ) ); $date = date( 'c', strtotime( $date ) ); } $this->sitemap .= '<sitemap>' . "\n"; $this->sitemap .= '<loc>' . home_url( $base . $post_type . '-sitemap' . $count . '.xml' ) . '</loc>' . "\n"; $this->sitemap .= '<lastmod>' . htmlspecialchars( $date ) . '</lastmod>' . "\n"; $this->sitemap .= '</sitemap>' . "\n"; } } // reference taxonomy specific sitemaps foreach ( get_taxonomies( array( 'public' => true ) ) as $tax ) { if ( in_array( $tax, array( 'link_category', 'nav_menu', 'post_format' ) ) ) continue; if ( isset( $options['taxonomies-' . $tax . '-not_in_sitemap'] ) && $options['taxonomies-' . $tax . '-not_in_sitemap'] ) continue; // don't include taxonomies with no terms if ( !$wpdb->get_var( $wpdb->prepare( "SELECT term_id FROM $wpdb->term_taxonomy WHERE taxonomy = %s AND count != 0 LIMIT 1", $tax ) ) ) continue; // Retrieve the post_types that are registered to this taxonomy and then retrieve last modified date for all of those combined. $taxobj = get_taxonomy( $tax ); $date = $this->get_last_modified( $taxobj->object_type ); $this->sitemap .= '<sitemap>' . "\n"; $this->sitemap .= '<loc>' . home_url( $base . $tax . '-sitemap.xml' ) . '</loc>' . "\n"; $this->sitemap .= '<lastmod>' . htmlspecialchars( $date ) . '</lastmod>' . "\n"; $this->sitemap .= '</sitemap>' . "\n"; } // allow other plugins to add their sitemaps to the index $this->sitemap .= apply_filters( 'wpseo_sitemap_index', '' ); $this->sitemap .= '</sitemapindex>'; } /** * Build a sub-sitemap for a specific post type -- example.com/post_type-sitemap.xml * * @param string $post_type Registered post type's slug */ function build_post_type_map( $post_type ) { $options = get_wpseo_options(); ............ 
// We grab post_date, post_name, post_author and post_status too so we can throw these objects into get_permalink, which saves a get_post call for each permalink. while ( $total > $offset ) { $join_filter = apply_filters( 'wpseo_posts_join', '', $post_type ); $where_filter = apply_filters( 'wpseo_posts_where', '', $post_type ); // Optimized query per this thread: http://wordpress.org/support/topic/plugin-wordpress-seo-by-yoast-performance-suggestion // Also see http://explainextended.com/2009/10/23/mysql-order-by-limit-performance-late-row-lookups/ $posts = $wpdb->get_results( "SELECT l.ID, post_content, post_name, post_author, post_parent, post_modified_gmt, post_date, post_date_gmt FROM ( SELECT ID FROM $wpdb->posts {$join_filter} WHERE post_status = 'publish' AND post_password = '' AND post_type = '$post_type' {$where_filter} ORDER BY post_modified ASC LIMIT $steps OFFSET $offset ) o JOIN $wpdb->posts l ON l.ID = o.ID ORDER BY l.ID" ); /* $posts = $wpdb->get_results("SELECT ID, post_content, post_name, post_author, post_parent, post_modified_gmt, post_date, post_date_gmt FROM $wpdb->posts {$join_filter} WHERE post_status = 'publish' AND post_password = '' AND post_type = '$post_type' {$where_filter} ORDER BY post_modified ASC LIMIT $steps OFFSET $offset"); */ $offset = $offset + $steps; foreach ( $posts as $p ) { $p->post_type = $post_type; $p->post_status = 'publish'; $p->filter = 'sample'; if ( wpseo_get_value( 'meta-robots-noindex', $p->ID ) && wpseo_get_value( 'sitemap-include', $p->ID ) != 'always' ) continue; if ( wpseo_get_value( 'sitemap-include', $p->ID ) == 'never' ) continue; if ( wpseo_get_value( 'redirect', $p->ID ) && strlen( wpseo_get_value( 'redirect', $p->ID ) ) > 0 ) continue; $url = array(); $url['mod'] = ( isset( $p->post_modified_gmt ) && $p->post_modified_gmt != '0000-00-00 00:00:00' ) ? $p->post_modified_gmt : $p->post_date_gmt; $url['chf'] = 'weekly'; $url['loc'] = get_permalink( $p ); ............. } /** * Build a sub-sitemap for a specific taxonomy -- example.com/tax-sitemap.xml * * @param string $taxonomy Registered taxonomy's slug */ function build_tax_map( $taxonomy ) { $options = get_wpseo_options(); .......... // Grab last modified date $sql = "SELECT MAX(p.post_date) AS lastmod FROM $wpdb->posts AS p INNER JOIN $wpdb->term_relationships AS term_rel ON term_rel.object_id = p.ID INNER JOIN $wpdb->term_taxonomy AS term_tax ON term_tax.term_taxonomy_id = term_rel.term_taxonomy_id AND term_tax.taxonomy = '$c->taxonomy' AND term_tax.term_id = $c->term_id WHERE p.post_status = 'publish' AND p.post_password = ''"; $url['mod'] = $wpdb->get_var( $sql ); $url['chf'] = 'weekly'; $output .= $this->sitemap_url( $url ); } } /** * Build the <url> tag for a given URL. * * @param array $url Array of parts that make up this entry * @return string */ function sitemap_url( $url ) { if ( isset( $url['mod'] ) ) $date = mysql2date( "Y-m-d\TH:i:s+00:00", $url['mod'] ); else $date = date( 'c' ); $output = "\t<url>\n"; $output .= "\t\t<loc>" . $url['loc'] . "</loc>\n"; $output .= "\t\t<lastmod>" . $date . "</lastmod>\n"; $output .= "\t\t<changefreq>" . $url['chf'] . "</changefreq>\n"; $output .= "\t\t<priority>" . str_replace( ',', '.', $url['pri'] ) . "</priority>\n"; if ( isset( $url['images'] ) && count( $url['images'] ) > 0 ) { foreach ( $url['images'] as $img ) { $output .= "\t\t<image:image>\n"; $output .= "\t\t\t<image:loc>" . esc_html( $img['src'] ) . "</image:loc>\n"; if ( isset( $img['title'] ) ) $output .= "\t\t\t<image:title>" . 
_wp_specialchars( html_entity_decode( $img['title'], ENT_QUOTES, get_bloginfo('charset') ) ) . "</image:title>\n"; if ( isset( $img['alt'] ) ) $output .= "\t\t\t<image:caption>" . _wp_specialchars( html_entity_decode( $img['alt'], ENT_QUOTES, get_bloginfo('charset') ) ) . "</image:caption>\n"; $output .= "\t\t</image:image>\n"; } } $output .= "\t</url>\n"; return $output; } /** * Get the modification date for the last modified post in the post type: * * @param array $post_types Post types to get the last modification date for * @return string */ function get_last_modified( $post_types ) { global $wpdb; if ( !is_array( $post_types ) ) $post_types = array( $post_types ); $result = 0; foreach ( $post_types as $post_type ) { $key = 'lastpostmodified:gmt:' . $post_type; $date = wp_cache_get( $key, 'timeinfo' ); if ( !$date ) { $date = $wpdb->get_var( $wpdb->prepare( "SELECT post_modified_gmt FROM $wpdb->posts WHERE post_status = 'publish' AND post_type = %s ORDER BY post_modified_gmt DESC LIMIT 1", $post_type ) ); if ( $date ) wp_cache_set( $key, $date, 'timeinfo' ); } if ( strtotime( $date ) > $result ) $result = strtotime( $date ); } // Transform to W3C Date format. $result = date( 'c', $result ); return $result; } } global $wpseo_sitemaps; $wpseo_sitemaps = new WPSEO_Sitemaps();
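
    There is no single switch in the quoted code for this, since it formats dates in several places (the date( 'c', ... ) calls and mysql2date( ... ) in sitemap_url). A hedged sketch of the kind of change involved, using PHP's DateTime classes; to_local_w3c is a hypothetical helper, not part of the plugin:

        <?php
        // Hypothetical helper: convert a GMT datetime string (e.g. the
        // post_modified_gmt values the plugin reads) to W3C format in a
        // chosen timezone.
        function to_local_w3c( $gmt, $tz = 'America/New_York' ) {
            $d = new DateTime( $gmt, new DateTimeZone( 'UTC' ) );
            $d->setTimezone( new DateTimeZone( $tz ) );
            return $d->format( 'c' ); // e.g. 2012-05-01T09:30:00-04:00
        }

        // Then, wherever the plugin does e.g.:
        //   $date = date( 'c', strtotime( $date ) );
        // or:
        //   $date = mysql2date( "Y-m-d\TH:i:s+00:00", $url['mod'] );
        // it would instead call:
        //   $date = to_local_w3c( $date );

    Consumers of <lastmod> generally accept any valid W3C datetime, so shifting the timezone (with the correct offset suffix, as format('c') produces) should not break the sitemap.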


  • Is there a way to tell what the download speed is from a site/server

    - by Memor-X
    I'm looking into which ISP I should go with at the place I'm moving into. One ISP, which I have been told good things about, has data limits (which, when breached, drop your speed to dial-up speed) but multiple memberships which, apart from the cheapest membership, all have the same data limits (the cheapest has a 10GB data limit). In their fine print, they say that each membership has a different port speed, and one particular part jumps out at me: These speeds are the NBN (National Broadband Network) port speed and not the actual Internet data speed which will vary based on numerous factors including destination you are reaching, your network equipment, network congestion etc. I plan to use the net to download DLC and patch updates for games (particularly the insanely large update for the Wii U), games from Steam (if I find any good ones other than this one JRPG), and development resources from free sites like Deposit Files and Mediafire. Since one membership with a 1000GB data limit is $145 with a port speed of 12Mbps/1Mbps (cheapest), while another with the same limit is $190 with a port speed of 100Mbps/40Mbps (expensive), I am wondering how I can tell what the speed coming from a site is, since I don't want to be wasting money on speed that makes no difference (unlike memory, which I'd rather have to spare). NOTE: the speeds are for a fibre optic network; where my new place is, I can only connect via fixed wireless, which I may not be able to get with this ISP, but if I can get this network then good. NOTE 2: most of the resources I get from Deposit Files are about 200 MB or less; if a resource pack is bigger, it's split into multiple archives (like .7z.part), while on Mediafire I have yet to see one bigger than 150MB. NOTE 3: one update patch for a PS3 game is close to 4 GB (Disgaea 4), which I need to access the DLC, and on the weekend I downloaded 5 GB for the Final Fantasy XIV Open Beta on the PS3, which took almost 5 hours.
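
    One hedged way to sample the real-world rate from a given server, assuming a Linux or Mac box with curl (the URL is a placeholder), is to time a download and let curl report the average transfer speed:

        # Download to nowhere; print the average download rate in bytes/sec.
        curl -o /dev/null -w 'average speed: %{speed_download} bytes/s\n' \
            https://example.com/some-large-file.zip

    Repeating this against the actual servers involved (Steam, Mediafire, etc.) at different times of day gives a rough idea of whether those sources could ever saturate a 100Mbps port, or whether the 12Mbps plan would be the bottleneck anyway.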


  • COM+/Desktop Heap errors in IIS affecting sites at random?

    - by tresstylez
    We have a Win2K3 server that is hosting 30+ sites. Each site is configured to have its own unique application pool, so that we can manually recycle specific sites if needed without killing sessions for the others. From what I've read, the consequence of this type of setup is that each application pool worker process gets allocated a desktop heap (normally 512 KB), which limits the number of app pools we can serve. http://blogs.msdn.com/b/david.wang/archive/2006/01/25/security-considerations-of-usesharedwpdesktop-on-iis6.aspx PROBLEM: What we're seeing is that occasionally COM+ errors get triggered, presumably by hitting our 512 KB desktop heap limit, and certain sites become unresponsive (or throw errors) until we manually recycle that specific app pool. I know that I can increase the desktop heap limit to 1024 KB and make other tweaks/tunes, but I've been tasked with finding out what exactly causes one site's heap to max out as opposed to another. It seems that when we start seeing COM+ errors, the sites affected are random: small sites or big (heavily used) sites. Is it based on process id? Traffic? Any pointers on understanding this a little more would be excellent. Thanks! jg
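
    For reference, the non-interactive desktop heap size is the third value in the SharedSection portion of the (very long) Windows value under HKLM\System\CurrentControlSet\Control\Session Manager\SubSystems\Windows; a hedged sketch of the fragment to edit in place (do not replace the whole REG_EXPAND_SZ value):

        SharedSection=1024,3072,512      ; default: 512 KB per non-interactive desktop
        SharedSection=1024,3072,1024     ; raised to 1024 KB

    As for why particular sites are hit: desktop heap is consumed by what each worker process actually allocates (window handles, hooks, and other USER objects), not by process id or traffic as such, so which pool exhausts its heap first depends on the code running in it. Microsoft's Desktop Heap Monitor (dheapmon) is the usual tool for watching per-desktop usage on Server 2003.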


  • How to add another application to apache?

    - by Jader Dias
    I was following the Zabbix installation tutorial for Ubuntu and it requested that I add a file /etc/apache2/sites-enabled/000-default containing

        Alias /zabbix /home/zabbix/public_html/

        <Directory /home/zabbix/public_html>
            AllowOverride FileInfo AuthConfig Limit Indexes
            Options MultiViews Indexes SymLinksIfOwnerMatch IncludesNoExec
            <Limit GET POST OPTIONS PROPFIND>
                Order allow,deny
                Allow from all
            </Limit>
            <LimitExcept GET POST OPTIONS PROPFIND>
                Order deny,allow
                Deny from all
            </LimitExcept>
        </Directory>

    But I already have /etc/apache2/sites-enabled/railsapp

        NameVirtualHost *:80
        NameVirtualHost *:443

        <VirtualHost *:80>
            UseCanonicalName Off
            Include /etc/apache2/conf/railsapp.conf
        </VirtualHost>

        <VirtualHost *:443>
            SSLEngine on
            SSLCertificateFile /etc/ssl/certs/cert.pem
            Include /etc/apache2/conf/railsapp.conf
            RequestHeader set X_FORWARDED_PROTO 'https'
        </VirtualHost>

    and /etc/apache2/sites-enabled/mercurial

        NameVirtualHost *:8080

        <VirtualHost *:8080>
            UseCanonicalName Off
            ServerAdmin webmaster@localhost
            AddHandler cgi-script .cgi
            ScriptAliasMatch ^(.*) /usr/lib/cgi-bin/hgwebdir.cgi/$1
        </VirtualHost>

    I think it is because of the already existing virtual hosts that I can't access the Zabbix page. How do I circumvent this?
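
    That reading is likely right: once NameVirtualHost *:80 is in effect, every request on port 80 is routed to one of the <VirtualHost *:80> blocks, so an Alias defined outside any of them (as in the Zabbix snippet above) is never consulted. A hedged sketch of one fix, giving Zabbix its own name-based vhost (the hostname is a placeholder):

        <VirtualHost *:80>
            ServerName zabbix.example.com
            Alias /zabbix /home/zabbix/public_html/
            <Directory /home/zabbix/public_html>
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    Moving the Alias and <Directory> block inside the existing railsapp <VirtualHost *:80> would also work, if serving Zabbix under the same hostname is acceptable.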


  • Limiting interface bandwidth with tc under Linux

    - by Matt
    I have a Linux router which has a 10GbE interface on the outside and bonded gigabit ethernet interfaces on the inside. We currently have budget for 2Gbit/s. If we exceed that rate by more than 5% average for a month, we'll be charged for the whole 10Gbit/s capacity, which is quite a step up in dollar terms. So, I want to limit this to 2Gbit/s on the 10GbE interface. The tbf qdisc might be ideal, but this comment is of concern: On all platforms except for Alpha, it is able to shape up to 1mbit/s of normal traffic with ideal minimal burstiness, sending out data exactly at the configured rates. Should I be using tbf or some other qdisc to apply this rate to the interface, and how would I do it? I don't understand the example given here: Traffic Control HOWTO. In particular, "Example 9. Creating a 256kbit/s TBF":

        tc qdisc add dev eth0 handle 1:0 root dsmark indices 1 default_index 0
        tc qdisc add dev eth0 handle 2:0 parent 1:0 tbf burst 20480 limit 20480 mtu 1514 rate 32000bps

    How is the 256kbit/s rate calculated? In tc, the bps suffix means bytes per second, so rate 32000bps is 32,000 bytes/s, and 32,000 bytes/s x 8 = 256 kbit/s. I guess burst and limit come into play, but how would you go about choosing sensible numbers to reach the desired rate? This is not a mistake: I tested this and it gave a rate close to 256kbit/s but not exactly that.
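
    A hedged sketch of the same idea scaled up (tc's plain bit suffixes are bits per second, so 2gbit is the 2Gbit/s target; choosing burst of at least rate/HZ, about 1 MB for 2Gbit/s on a 250 Hz kernel, is a common rule of thumb, and the exact value should be benchmarked):

        tc qdisc add dev eth0 root tbf rate 2gbit burst 1mb latency 50ms

    The 1mbit/s warning in the HOWTO predates high-resolution kernel timers; on reasonably current kernels tbf is routinely used far above that, though at multi-gigabit rates it is worth testing, and htb with a single rate-limited class is a frequent alternative.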


  • How to use WebDAV and userdir at the same time in the same section?

    - by Louis
    Dear community, I would like to mount https://myserver/~user_account through WebDAV, but not https://myserver/. What I am doing now is:

        <IfModule mod_userdir.c>
            UserDir public_html
            UserDir disabled root

            <Directory /banonymous/data/home/*/public_html>
                DAV On
                AllowOverride FileInfo Limit AuthConfig
                Options MultiViews SymLinksIfOwnerMatch IncludesNoExec
                <Limit GET POST OPTIONS>
                    Order allow,deny
                    Allow from all
                </Limit>
                <LimitExcept GET POST OPTIONS>
                    Order deny,allow
                    Deny from all
                </LimitExcept>
            </Directory>
        </IfModule>

    and I am setting the authentication in the .htaccess of each user:

        AuthType Basic
        AuthName "Password Required"
        AuthUserFile /etc/apache2/users/htpasswd
        Require User geeky

    It does not work. Can someone tell me whether this is possible, and if so, how to do it? My dream would have been to put the DAV On in the .htaccess.


  • File download speed issue over a dedicated fibre link

    - by nixnotwin
    My ISP has installed a fibre-based dedicated internet connection at the place where I work. In the beginning, the connection terminated at one of the ISP's core routers, which resulted in a strange issue: even though the assigned speed was 5mbps, when tests were done by downloading large files over HTTP and FTP from multiple locations, the speed never went above 2mbps, yet BitTorrent downloads reached 5mbps, and file downloads from the ISP's own servers were fine too. So, at the ISP, our link was attached directly to their edge router. After this, file downloads from high-bandwidth servers like Google and MS reached the 5mbps limit. Sometimes the speed falls below 2mbps and then suddenly jumps back up to the 5mbps limit (this keeps happening within any single file download). But other downloads, like the Ubuntu apt repositories, still struggle to get above 2mbps. The engineers at the ISP have not been able to sort out the issue. After they moved us to their edge router, instead of giving us 8 public IPs they gave just 4; when we enquired about it, they told us that giving more IPs would cause ARP overload at their edge router, but somehow I was able to convince them to give us the 8 IPs we wanted. The file download issue has remained, though. What might be the reason for files from different locations downloading at different speeds, and with such heavy fluctuation? I have downloaded files from the same URLs over a connection belonging to another, smaller ISP, and the speeds were fine, reaching the full 5mbps limit.


  • Page pool memory

    - by legiwei
    I'm currently using Windows XP SP3 32-bit, on a C2D E6320 with 2GB RAM. When I am playing StarCraft 2, I encounter an error saying my system is running low on paged pool memory, even though StarCraft's graphics settings suggested high settings for me. I do not think it has to do with my graphics card, but with my RAM. I then searched for a way to rectify the problem; apparently it's something to do with my virtual memory, and the suggested solution is to edit the registry and set the paged pool memory to 384MB. However, having done so, I still could not achieve it. I've seen screenshots of Windows XP settings with 2GB RAM showing 384MB of paged pool memory. My default setting puts it at 195MB, and when I try to increase the pool limit it only goes to a max of 229MB. I tried increasing my RAM capacity to 3GB, but the pool limit still remains. I'd like to know how to increase my paged pool memory. I've tried searching for a solution, but found nothing beyond the one mentioned above (which didn't solve my problem completely).
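
    For reference, the setting usually involved is PagedPoolSize under Memory Management. A hedged .reg sketch requesting a 384 MB paged pool (0x18000000 bytes; on 32-bit XP the kernel may still cap the pool below the requested size depending on how kernel virtual address space is carved up, which could explain a 229 MB ceiling):

        Windows Registry Editor Version 5.00

        [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management]
        "PagedPoolSize"=dword:18000000

    Setting the value to dword:ffffffff instead asks the kernel for the largest paged pool it can manage, at the cost of other kernel address space; a reboot is required either way.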


  • /etc/security/limits.conf for setting program limits in Linux

    - by Flavius Akerele
    I have the following inside /etc/security/limits.conf (I have specified root separately because * will not include it):

        user2 - core unlimited
        *     - core 0
        root  - core 0
        *     - rss 512000
        root  - rss 512000
        *     - nproc 100
        root  - nproc 100
        *     - maxlogins 1
        root  - maxlogins 1

    I run a program as user2 (./programname) but /proc/3498/limits says cores are disabled:

        Limit                     Soft Limit           Hard Limit           Units
        Max cpu time              unlimited            unlimited            seconds
        Max file size             unlimited            unlimited            bytes
        Max data size             unlimited            unlimited            bytes
        Max stack size            8388608              unlimited            bytes
        Max core file size        0                    0                    bytes
        Max resident set          524288000            524288000            bytes
        Max processes             100                  100                  processes
        Max open files            1024                 1024                 files
        Max locked memory         65536                65536                bytes
        Max address space         unlimited            unlimited            bytes
        Max file locks            unlimited            unlimited            locks
        Max pending signals       14001                14001                signals
        Max msgqueue size         819200               819200               bytes
        Max nice priority         0                    0
        Max realtime priority     0                    0
        Max realtime timeout      unlimited            unlimited            us

    Both ulimit -Sa and ulimit -Ha output that cores are disabled:

        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        scheduling priority             (-e) 0
        file size               (blocks, -f) unlimited
        pending signals                 (-i) 14001
        max locked memory       (kbytes, -l) 64
        max memory size         (kbytes, -m) 512000
        open files                      (-n) 1024
        pipe size            (512 bytes, -p) 8
        POSIX message queues     (bytes, -q) 819200
        real-time priority              (-r) 0
        stack size              (kbytes, -s) unlimited
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) 100
        virtual memory          (kbytes, -v) unlimited
        file locks                      (-x) unlimited

    Why are cores disabled?


  • How to deal with the extremely big *.ost files in a Terminal Server environment which is running out of space

    - by Wolfgang Kuehne
    Our Terminal Server is running out of hard disk space, and the files occupying most of the space are the Outlook *.ost files of the users who use the Terminal Server through Remote Desktop all the time. Outlook is installed on the Terminal Server and various users can use it. What would be a solution in this case? Is there a way to limit the size of the *.ost files? I read in forums that having Outlook 2010 set up in Cached Exchange Mode isn't best practice for an environment where disk space is a major constraint. The first thing that came to my mind was folder redirection, placing the .ost files (together with the AppData folder) on a network share, but this does not help, because the .ost files are saved in a part of the AppData folder that cannot be redirected. Then I wondered whether it is possible to limit the size of the .ost file, or to limit how long it keeps e-mail cached; say, just e-mails from the last 6 months would be sufficient. Another solution that came to my mind was moving the .ost files somewhere else; this requires the old .ost file to be removed and a new one created. I am not quite sure whether the new .ost file would still have the e-mails that were cached in the old one, or whether it would start caching from where the other left off. What do you suggest?
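
    On the size question: Outlook honours a registry cap on data files that applies to .ost as well as .pst. A hedged .reg sketch for Outlook 2010 (values are in MB; "14.0" assumes Office 2010; Outlook stops growing the file at the cap rather than trimming old mail, so test on one profile first):

        Windows Registry Editor Version 5.00

        [HKEY_CURRENT_USER\Software\Policies\Microsoft\Office\14.0\Outlook\PST]
        "MaxLargeFileSize"=dword:00000800
        "WarnLargeFileSize"=dword:00000733

    That caps files at 2048 MB with a warning at 1843 MB. Time-based caching ("only the last N months") is not available in Outlook 2010's Cached Exchange Mode; disabling cached mode entirely for Terminal Server sessions is the more usual recommendation.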


  • HTTP Error: 413 Request Entity Too Large

    - by Torben Gundtofte-Bruun
    What I have: I have an iPhone app that sends HTTP POST requests (XML format) to a web service written in PHP. This is on a hosted virtual private server, so I can edit httpd.conf and other files on the server, and restart Apache.

    The problem: The web service works perfectly as long as the request is not too large, but around 1MB is the limit. After that, the server responds with:

        <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
        <html><head>
        <title>413 Request Entity Too Large</title>
        </head><body>
        <h1>Request Entity Too Large</h1>
        The requested resource<br />/<br />
        does not allow request data with POST requests, or the amount of data provided in the request exceeds the capacity limit.
        </body></html>

    The web service writes its own log file, and I can see that small messages are processed fine. Larger messages are not logged at all, so I guess that something in Apache rejects them before they even reach the web service?

    Things I've tried without success (I've restarted Apache after every change; these steps are incremental):

    1. hosting provider's web-based configuration panel: disable mod_security
    2. httpd.conf: LimitXMLRequestBody 0 and LimitRequestBody 0
    3. httpd.conf: LimitXMLRequestBody 100000000 and LimitRequestBody 100000000
    4. httpd.conf: SecRequestBodyLimit 100000000

    At this stage, Apache's error.log contains a message:

        ModSecurity: Request body no files data length is larger than the configured limit (1048576)

    It looks like my step #4 didn't really take, which is consistent with step #1, but does not explain why mod_security appears to be active after all. What more can I try, to get the web service to receive large messages?
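
    A hedged pointer: the logged message corresponds to a different ModSecurity directive than the one tried in step 4. In ModSecurity 2.x, request bodies that carry no file uploads are governed by SecRequestBodyNoFilesLimit, whose default of 1048576 bytes matches the logged limit exactly:

        # In the ModSecurity configuration (often modsecurity.conf or the
        # host's include file), alongside the limit already tried:
        SecRequestBodyLimit        104857600
        SecRequestBodyNoFilesLimit 104857600

    If the hosting panel regenerates the Apache configuration, it may also be overwriting hand edits, which would explain why step 1 appeared not to take.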


  • Sun Grid Engine (SGE) / limiting simultaneous array job sub-tasks

    - by wfaulk
    I am installing a Sun Grid Engine environment and I have a scheduler limit that I can't quite figure out how to implement. My users will create array jobs that have hundreds of sub-tasks. I would like to be able to limit those jobs to only running a set number of tasks at the same time, independent of other jobs. Like I might have one array job that I want to run 20 tasks at a time, and another I want to run 50 tasks at a time, and yet another that I'm fine running without limit. It seems like this ought to be doable, but I can't figure it out. There's a max_aj_instances configuration option, but that appears to apply globally to all array jobs. I can't see any way to use consumable resources, as I'd need a "complex attribute" that is per-job, and that feature doesn't seem to exist. It didn't look like resource quotas would work, but now I'm not so sure of that. It says "A resource quota set defines a maximum resource quota for a particular job request", but it's unclear if an array job's sub-tasks' resource requests will be aggregated for the purposes of the resource quota. I'm going to play with this, but hopefully someone already knows outright.
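
    For what it's worth, newer Grid Engine releases (6.2u3 and later, if memory serves; check qsub(1) on the installed version) expose exactly this as a per-job switch: -tc sets the maximum number of array tasks running concurrently, independently of other jobs. A hedged sketch:

        # 500-task array job, at most 20 tasks running at once:
        qsub -t 1-500 -tc 20 jobscript.sh

        # Another array job, capped at 50 concurrent tasks:
        qsub -t 1-300 -tc 50 otherjob.sh

    Jobs submitted without -tc keep the default behaviour, limited only by the global max_aj_instances.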


  • Change Windows 7 Explorer's Details Pane limits

    - by Paul
    For some reason, MS decided to completely kill the status bar's functionality in Win7 (and maybe Vista, but I don't know for sure). I have tried all possible options such as Classic Shell and so on. Basically, the one thing I miss most is seeing at a glance the total size of my selected files. I know I can press Alt+Enter or whatever, but that's not the point. The point is that the so-called 'details' pane stops providing details if more than 15 files are selected! WTH? I cannot understand the reason behind such an arbitrary limit, which doesn't seem to be user-configurable at all. Anyway, what I'm looking for is a way to change that limit, either via the registry or otherwise. If changing the limit isn't possible, would it be possible for some programming genius to create a very small, optimized, lightweight, non-resource-hungry (you get the idea!) program whose sole purpose would be to automatically click the "Show more details..." link in the details pane after every file selection? For bonus points, the program should wait for a second or two after file selection is complete before doing this, so that selecting multiple files in quick succession doesn't hit the HDD repeatedly. Any takers for the challenge? :)


  • How Would I Restrict a Linux Binary to a Limited Amount of RAM?

    - by Ken S.
    I would like to be able to limit an installed binary to only be able to use up to a certain amount of RAM. I don't want it to get killed if it exceeds that, only that that would be the maximum amount it could use. The problem I am facing is that I am running an Apache 2.2 server with PHP and some custom code that a developer is writing for us. The problem is that somewhere in their code they launch a PHP exec call that launches ImageMagick's 'convert' to create a resized image file. I'm not privy to many details of the project or the code, but I need to find a solution to keep it from killing the server until they can find a way to optimize the code. I had thought that I could do this with /etc/security/limits.conf by setting a limit on the apache user, but it seems to have no effect. This is what I used:

        www-data hard as 500

    If I understand it correctly, this should have limited any apache-user process to a maximum of 500 KB; however, when I ran a test script that would chew up a lot of RAM, it actually got up to 1.5GB before I killed it. Here is the output of 'ps auxf' after the setting change and a system reboot:

        USER       PID %CPU %MEM    VSZ     RSS TTY STAT START TIME COMMAND
        root      5268  0.0  0.0  401072   10264  ?  Ss   15:28 0:00 /usr/sbin/apache2 -k start
        www-data  5274  0.0  0.0  402468    9484  ?  S    15:28 0:00 \_ /usr/sbin/apache2 -k start
        www-data  5285  102  9.4 1633500 1503452  ?  Rl   15:29 0:58 |  \_ /usr/bin/convert ../tours/28786/.….
        www-data  5275  0.0  0.0  401072    5812  ?  S    15:28 0:00 \_ /usr/sbin/apache2 -k start

    Next I thought I could do it with Apache's RLimitMEM setting, but got the same result of it not being limited. Here is what I have in my apache.conf file:

        RLimitMEM 500000 512000

    It wasn't until many hours later that I figured out that if the process actually reached that amount, it would die with an OOM error. I would love any ideas on how to set this limit so other things can function on the server, and all of them can play together nicely.
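
    One hedged observation: /etc/security/limits.conf is applied by pam_limits at login time, and a daemon started at boot never passes through PAM, so the www-data entry would not bind to Apache at all. A sketch of applying the cap where Apache is actually started (the init-script path is distro-dependent; 524288 KB = 512 MB of address space, inherited by everything Apache forks, including convert):

        # In /etc/init.d/apache2, before the httpd binary is invoked:
        ulimit -v 524288

    Note this is RLIMIT_AS, under which allocations fail (convert exits with an error) rather than the process being killed. ImageMagick also has its own resource knobs, e.g. convert -limit memory 256MiB, which make it spill to disk instead of failing outright.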


  • Limiting bandwidth and transfer rates per user

    - by Cx03
    I searched for a while but couldn't find anything concrete; hopefully someone can help me. I'm going to be running a Debian server on a gigabit port and want to give each user his/her fair share of internet access. The first objective is easy: transfer rates (speed) per user. From what I've looked at, iptables/Shorewall could do the job easily. Is this easy to set up, or could one of you point me at a config? I was hoping to limit users to 300mbit or 650mbit each. The second objective gets complicated. Due to the usage of the boxes, most of the traffic will be internal network traffic that does NOT count towards the quota. However, I still need to limit the external traffic, and if users go over, cut off access (or throttle traffic to a very low speed, say 10mbit). Let's say the user has a 3TB external traffic limit. The IF part is: if the hostname they are exchanging the traffic with DOES NOT MATCH .ovh. or .kimsufi. (the company owns multiple TLDs), count it towards the quota. Once said quota exceeds 3TB, choke them. Where could I find a system to count that for me? It would also need to reset, or be able to be manually reset, on a monthly basis. Thanks ahead of time!
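
    A hedged sketch of the rate-limiting half with tc (one htb class per user; the IPs are placeholders and a real setup would add a catch-all default class):

        # Cap one user's egress at 300mbit and another's at 650mbit:
        tc qdisc add dev eth0 root handle 1: htb
        tc class add dev eth0 parent 1: classid 1:10 htb rate 300mbit
        tc class add dev eth0 parent 1: classid 1:20 htb rate 650mbit
        tc filter add dev eth0 parent 1: protocol ip u32 match ip src 192.0.2.10 flowid 1:10
        tc filter add dev eth0 parent 1: protocol ip u32 match ip src 192.0.2.20 flowid 1:20

    The hostname-conditional quota is the harder part: iptables matches addresses rather than hostnames, so the .ovh./.kimsufi. destinations would have to be resolved into IP ranges (for instance kept current in an ipset), with per-user byte counts accumulated from iptables counters or a tool like ulogd before the throttle kicks in.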


  • Why is MySQL unable to open hosts.allow/hosts.deny?

    - by HonoredMule
    I have a storage server running Nexenta (OpenSolaris kernel, Ubuntu userspace) with MySQL on top of a ZFS storage array, using innodb_file_per_table and ulimit -n set to 8K. mysqltuner.pl confirms the file limit and claims there are 169 files. The following command:

        pfiles `fuser -c / 2>/dev/null`

    indicates that one mysqld process holds 485 file/device descriptors (almost all of them files), so I don't know how reliable the tuning script is, but that is still way less than 8K, and this list also finds no other process close to its limit. The global total number of descriptors in use is around 1K. So what can cause mysqld to be constantly streaming the following errors?

        [date] [host] mysqld[pid]: warning: cannot open /etc/hosts.allow: Too many open files
        [date] [host] mysqld[pid]: warning: cannot open /etc/hosts.deny: Too many open files

    Everything appears to actually be operating fine, but the issue constantly floods the admin console and starts right away on a fresh boot (not only reproducible, but always from mysqld and always the hosts files, whose permissions are the default -rw-r--r-- 1 root root). I could, of course, suppress it from the admin console, but I'd rather get to the bottom of it and still allow mysqld warnings/errors to reach the admin console. EDIT: not only is the actual descriptor count well within sane limits, the issue also persists (appearing immediately) even with the file limit raised to 65535, and always only for hosts.allow/deny.


  • where are the default ulimit values set? (linux, centos)

    - by nomercysir
    I have two CentOS (5) servers with nearly identical specs. When I log in and do:

        ulimit -u

    on one machine I get unlimited, and on the other, 77824. When I run a cron job like:

        * * * * * ulimit -u > ulimit.txt

    I get the same results (unlimited, 77824). I am trying to determine where these are set so that I can alter them. They are not set in any of my profiles (.bashrc, /etc/profile, etc.; these wouldn't affect cron anyway) nor in /etc/security/limits.conf (which is empty). I have scoured Google and even gone so far as to do:

        grep -Ir 77824 /

    but nothing has turned up so far. I don't understand how these machines could have come preset with different limits. I am actually wondering not for these machines, but for a different (CentOS 6) machine which has a limit of 1024, which is far too small. I need to run cron jobs with a higher limit, and the only way I know how to set that is in the cron job itself. That's OK, but I'd rather set it system-wide so it's not as hacky. Thanks for any help. This seems like it should be easy (NOT).

    EDIT -- SOLVED: OK, I figured this out. It seems to be an issue either with CentOS 6 or with my machine's configuration. On the CentOS 5 configuration, I can set the following in /etc/security/limits.conf:

        * - nproc unlimited

    and that effectively updates the account and cron limits. However, this does not work on my CentOS 6 box. Instead, I must do:

        myname1 - nproc unlimited
        myname2 - nproc unlimited
        ...

    and things work as expected. Maybe the UID specification works too, but the wildcard (*) definitely DOES NOT here. Oddly, wildcards DO work for the 'nofile' limit. I would still love to know where the default values actually come from, because by default this file is empty and I couldn't see why I had different defaults for the two CentOS boxes, which had identical hardware and were from the same provider.
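
    A hedged pointer on where the defaults hide: besides limits.conf itself, pam_limits also reads drop-in files under /etc/security/limits.d/, and CentOS 6 ships one that is the usual source of the surprise 1024 nproc default:

        # /etc/security/limits.d/90-nproc.conf (as shipped on CentOS 6)
        # Default limit for number of user's processes to prevent
        # accidental fork bombs. See rhbz #432903 for reasoning.
        *          soft    nproc     1024

    When nothing in PAM sets nproc at all, the kernel's boot-time default applies, which is derived from available memory (roughly max_threads/2), so two boxes can legitimately disagree.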


  • Android sending lots of SMS messages

    - by Robert Parker
    I have an app which sends a lot of SMS messages to a central server. Each user will probably send ~300 texts/day. SMS messages are being used as a networking layer, because SMS is almost everywhere and mobile internet is not; the app is intended for use in a lot of 3rd-world countries where mobile internet is not ubiquitous. When I hit a limit of 100 messages, I get a prompt for each message sent. The prompt says "A large number of SMS messages are being sent". It is not OK for the user to get prompted each time to ask whether the app can send a text message; the user doesn't want to get 30 consecutive prompts. I found this Android source file with Google. It could be out of date, I can't tell. It looks like there is a limit of 100 SMS messages every 3600000 ms (1 hour) for each application.

        http://www.netmite.com/android/mydroid/frameworks/base/telephony/java/com/android/internal/telephony/gsm/SMSDispatcher.java

        /** Default checking period for SMS sent without user permit */
        private static final int DEFAULT_SMS_CHECK_PERIOD = 3600000;

        /** Default number of SMS sent in checking period without user permit */
        private static final int DEFAULT_SMS_MAX_ALLOWED = 100;

    and

        /**
         * Implement the per-application based SMS control, which only allows
         * a limit on the number of SMS/MMS messages an app can send in checking
         * period.
         */
        private class SmsCounter {
            private int mCheckPeriod;
            private int mMaxAllowed;
            private HashMap<String, ArrayList<Long>> mSmsStamp;

            /**
             * Create SmsCounter
             * @param mMax is the number of SMS allowed without user permit
             * @param mPeriod is the checking period
             */
            SmsCounter(int mMax, int mPeriod) {
                mMaxAllowed = mMax;
                mCheckPeriod = mPeriod;
                mSmsStamp = new HashMap<String, ArrayList<Long>>();
            }

            boolean check(String appName) {
                if (!mSmsStamp.containsKey(appName)) {
                    mSmsStamp.put(appName, new ArrayList<Long>());
                }
                return isUnderLimit(mSmsStamp.get(appName));
            }

            private boolean isUnderLimit(ArrayList<Long> sent) {
                Long ct = System.currentTimeMillis();
                Log.d(TAG, "SMS send size=" + sent.size() + "time=" + ct);
                while (sent.size() > 0 && (ct - sent.get(0)) > mCheckPeriod) {
                    sent.remove(0);
                }
                if (sent.size() < mMaxAllowed) {
                    sent.add(ct);
                    return true;
                }
                return false;
            }
        }

    Is this even the real Android code? It looks like it is in the package "com.android.internal.telephony.gsm"; I can't find this package on the Android website. How can I disable/modify this limit? I've been googling for solutions, but I haven't found anything.


  • Separate Query for Count

    - by Anraiki
    Hello, I am trying to get my query to grab multiple rows while also returning the maximum count for that query. My query:

        SELECT *, COUNT(*) as Max FROM tableA LIMIT 0, 30

    However, it only outputs 1 record. I would like it to return multiple records, as the following query does:

        SELECT * FROM tableA LIMIT 0, 30

    Do I have to use separate queries?
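
    A hedged sketch of one way to avoid two round trips (MySQL syntax, assuming "Max" is meant to be the total row count of tableA): mixing COUNT(*) with unaggregated columns collapses the result to a single row, so aggregate in a derived table and cross-join it onto every row instead:

        SELECT a.*, m.total_rows AS Max
        FROM tableA AS a
        CROSS JOIN (SELECT COUNT(*) AS total_rows FROM tableA) AS m
        LIMIT 0, 30;

    Alternatively, SQL_CALC_FOUND_ROWS with a follow-up SELECT FOUND_ROWS() is a common MySQL idiom when the count should ignore the LIMIT.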

