Search Results

Search found 4803 results on 193 pages for '200'.

Page 18/193 | < Previous Page | 14 15 16 17 18 19 20 21 22 23 24 25  | Next Page >

  • Custom page sizes in paging dropdown in Telerik RadGrid

    Working with Telerik RadControls for ASP.NET AJAX is actually quite easy, and the initial effort to get started with the control suite is low, meaning you can get good results in little time. But there are usually cases where you have to go a little further and dig a little deeper than the standard scenarios. In this article I am going to describe how you can customize the default values (10, 20 and 50) of the drop-down list in the paging element of RadGrid, and get control over the displayed page sizes while using numeric paging.

    The default page sizes are good, but not always good enough. The paging feature in RadGrid offers 3 (well, actually 4) possible page sizes in the drop-down element out of the box: 10, 20 or 50 items. You can get a fourth option by specifying a value different from the three standards for the PageSize attribute, e.g. 35 or 100; the drawback is that this value also becomes the initial page size. Certainly, the available choices could be more flexible, or even a little more intelligent, for example by taking the total count of records into consideration. Some interesting scenarios would justify a customized page size element:

    - A low number of records (say, 14) shouldn't offer a page size of 50.
    - A high total count of records (e.g. 300+) should offer more choices (100, 200, 500) or the display of all records regardless of count.

    I am sure you have your own requirements, and I hope the following source code snippets are helpful.

    Wiring the ItemCreated event. In order to adjust and manipulate the existing RadComboBox in the paging element, we have to handle the OnItemCreated event of RadGrid. Simply specify your code-behind method in the attribute of the RadGrid tag, like so:

        <telerik:RadGrid ID="RadGridLive" runat="server" AllowPaging="true" PageSize="20"
            AllowSorting="true" AutoGenerateColumns="false" OnNeedDataSource="RadGridLive_NeedDataSource"
            OnItemDataBound="RadGrid_ItemDataBound" OnItemCreated="RadGrid_ItemCreated">
            <ClientSettings EnableRowHoverStyle="true">
                <ClientEvents OnRowCreated="RowCreated" OnRowSelected="RowSelected" />
                <Resizing AllowColumnResize="True" AllowRowResize="false" ResizeGridOnColumnResize="false"
                    ClipCellContentOnResize="true" EnableRealTimeResize="false" AllowResizeToFit="true" />
                <Scrolling AllowScroll="true" ScrollHeight="360px" UseStaticHeaders="true" SaveScrollPosition="true" />
                <Selecting AllowRowSelect="true" />
            </ClientSettings>
            <MasterTableView DataKeyNames="AdvertID">
                <PagerStyle AlwaysVisible="true" Mode="NextPrevAndNumeric" />
                <Columns>
                    <telerik:GridBoundColumn HeaderText="Listing ID" DataField="AdvertID" DataType="System.Int32"
                        SortExpression="AdvertID" UniqueName="AdvertID">
                        <HeaderStyle Width="66px" />
                    </telerik:GridBoundColumn>
                    <!-- ... and some more columns ... -->
                </Columns>
            </MasterTableView>
        </telerik:RadGrid>

    To provide a consistent experience for your visitors, it can be helpful to always display the page size selection. This is done by setting the AlwaysVisible attribute of the PagerStyle element to true, as highlighted above.
    Customize the page size values. Your delegate method for the ItemCreated event should look like this:

        protected void RadGrid_ItemCreated(object sender, GridItemEventArgs e)
        {
            if (e.Item is GridPagerItem)
            {
                var dropDown = (RadComboBox)e.Item.FindControl("PageSizeComboBox");
                var totalCount = ((GridPagerItem)e.Item).Paging.DataSourceCount;

                var sizes = new Dictionary<string, string>() {
                    {"10", "10"},
                    {"20", "20"},
                    {"50", "50"}
                };

                if (totalCount > 100)
                {
                    sizes.Add("100", "100");
                }
                if (totalCount > 200)
                {
                    sizes.Add("200", "200");
                }
                sizes.Add("All", totalCount.ToString());

                dropDown.Items.Clear();
                foreach (var size in sizes)
                {
                    var cboItem = new RadComboBoxItem() { Text = size.Key, Value = size.Value };
                    cboItem.Attributes.Add("ownerTableViewId", e.Item.OwnerTableView.ClientID);
                    dropDown.Items.Add(cboItem);
                }

                dropDown.FindItemByValue(e.Item.OwnerTableView.PageSize.ToString()).Selected = true;
            }
        }

    It is important that we explicitly check the event arguments for GridPagerItem, as it is the control that contains the PageSizeComboBox control we want to manipulate. To keep the actual modification and exposure of possible page size values flexible, I fill a Dictionary with the requested key/value pairs based on the total number of records displayed in the grid. As a final step, ensure that the previously selected value is the active one, using the FindItemByValue() method. Of course, there might be different requirements, but I hope the snippet above provides a first insight into customized page size values in Telerik's grid. The Grid demos describe a more advanced approach to customizing the pager.

    Read the article

  • How to make Facebook like button narrower than 225px?

    - by tog22
    I'm generating a like button with Facebook's 'standard' layout for my site via https://developers.facebook.com/docs/reference/plugins/like/ . I've set its width to 200 pixels, but notice that setting it below 225 pixels has no effect, and the documentation on that page indeed specifies 225px as the minimum width for the standard layout. Unfortunately I need to make it 200 pixels wide to fit my site's design. Is there any way to force it into this width? (The site's at http://gwwc2.centreforeffectivealtruism.org/ if you want to have a play with Firebug, though the like button gets generated by JavaScript, so you'd probably have to duplicate that page and edit its source.)

    Read the article

  • facebook og:image not working [closed]

    - by zeinab
    When I try to post a comment (share a link) on an article from my page, which is actually a page I am using as an application in Facebook (https://apps.facebook.com/ids_newsletter/), Facebook chooses the feed thumbnail image of the post at random from my page. I tried using the tags below to set a custom image for the post, but nothing affected the post thumbnail. So what should I do?

        <meta property="og:title" content="IDS June Newsletter" />
        <meta property="og:description" content="Check this out" />
        <meta property="og:image" content="MYSITEURL/Images/logobig.png" />
        <meta property="og:image:type" content="image/png" />
        <meta property="og:image:width" content="200" />
        <meta property="og:image:height" content="200" />

    Read the article

  • Developing a search algorithm

    - by Richart Bremer
    I want to create a basic search engine, and I'd like some ideas on how to filter out the best results for my visitors. There are three fields regarding a product the user can search in: Title, Category and Description. I came up with these ideas, and I ask you to either competently criticize them or add to them (see the sketch after this list):

    1. If the search term occurs in all three fields, it should be among the first results.
    2. If it occurs in two of the fields, it ranks below the results of 1.
    3. Combine the number of occurrences and output a percentage value. For instance, if the term "clock" appears 50 times across all fields and there are 200 words in all fields together, the percentage is 50/200*100 = 25%. Another product entry amounts to, say, 20%, so product one (25%) is listed before product two (20%).
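
    For illustration, here is a minimal sketch (not part of the original question) of the percentage idea from point 3, in Python; the field handling and the equal weighting of fields are assumptions:

        def score(term, title, category, description):
            # Share of all words across the three fields that match the term.
            words = [w.lower() for field in (title, category, description)
                     for w in field.split()]
            if not words:
                return 0.0
            hits = sum(1 for w in words if w == term.lower())
            return 100.0 * hits / len(words)

        # Products are then listed by descending score, so 25% sorts before 20%.
        products = [("wall clock", score("clock", "Wall clock", "Clocks", "A large clock")),
                    ("alarm clock", score("clock", "Alarm clock", "Clocks", "Rings loudly"))]
        products.sort(key=lambda p: p[1], reverse=True)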

    Read the article

  • How to organize pictures on a website using CSS? [on hold]

    - by user3624023
    Here is my website without any CSS: http://www.wmcicompsci.ca/cs20/students/theglowcloud/Bare%20bones%20website/classics_bare.html . I am new to CSS and I would like to organize these pictures in this fashion: http://css-tricks.com/examples/SlideinCaptions/ . I would just like this layout for the pictures; I do not need the sliding captions (although I would like them, the effect does not work in my browser). I would like the captions to be like titles on top of the pictures. Here is my current HTML code:

        <!DOCTYPE html>
        <html>
        <head>
            <title> My favourite Fantasy books</title>
            <meta charset="utf-8">
            <link rel="stylesheet" href="css.css">
        </head>
        <body>
            <nav id="main_nav">
                <ul>
                    <li><a href="homepage_css.html"> Homepage</a></li>
                    <li><a href="science_fiction_css.html">Science Fiction</a></li>
                    <li><a href="classics_css.html">Classics</a></li>
                    <li><a href="fantasy_css.html">Fantasy</a></li>
                </ul>
            </nav>
            <h1> Fantasy Genre</h1>
            <p> Here are my favourites:</p>
            <ul>
                <li> Goblet of Fire by J.K Rowling (4th book in the Harry Potter Series) </li>
                <li><img class="displayed" src="pics/fantasy/goblet_of_fire.jpg" width="200" alt="Goblet of Fire book cover"></li>
                <li> Graceling by Kristan Cashore </li>
                <li><img src="pics/fantasy/graceling.jpg" width="200" alt=" Graceling book cover"></li>
                <li> Serpent's Shadow by Rick Riordan (3rd book in the Kane Chronicles) </li>
                <li><img src="pics/fantasy/serpents_shadow.jpg" width="200" alt="Serpent's Shadow book cover"></li>
                <li> The Hobbit by J.R.R. Tolkein </li>
                <li><img src="pics/fantasy/the_hobbit.jpg" width="200" alt="The Hobbit book cover"></li>
                <li> The False Prince by Jennifer Neilson (1st book in the Ascendance Triology) </li>
                <li><img src="pics/fantasy/the_false_prince.jpg" width="200" alt="The False Prince book cover"></li>
            </ul>

    Read the article

  • Zabbix Log Monitoring - Duplicate alerts

    - by ArunS
    I configured Zabbix to monitor my JBoss server logs for errors, excluding some known ones. This setup is working, with one issue: Zabbix sends me an alert when there is a new "ERROR" entry in the log file, but sometimes I get multiple alerts for the same event. For example, I got 5 alerts with the same timestamp, "2012-06-25 07:55:56,864 ERROR". The duplicate alert count is not constant; sometimes I get 2, sometimes 5 or 11. I checked Monitoring > Latest data in the GUI and found that there are no duplicate entries. My log monitoring configuration is given below. I am using the latest version of Zabbix server (2.0).

    Item configuration:

        Description: Server Error Monitoring.
        Key: log["/SERVER/jboss/jboss-5/server/ps/log/server.log","ERROR",UTF-8,200,skip]
        Type: Zabbix Agent (Active)
        Type of information: Log
        Interval: 30

    Trigger configuration:

        Description: Found Error in Server Log.
        Expression: (({SERVER Error Monitoring - PS:log["/SERVER/jboss/jboss-5/server/ps/log/server.log","ERROR",UTF-8,200,skip].regexp("can not execute")})=0) &
                    (({SERVER Error Monitoring - PS:log["/SERVER/jboss/jboss-5/server/ps/log/server.log","ERROR",UTF-8,200,skip].regexp("Unexpected redirect")})=0)
        Event generation: Normal + Multiple TRUE events

    Action configuration:

        Name: alert mail
        Event source: Trigger
        Enable escalations: Unchecked
        Default subject/message: Default
        Recovery message: Unchecked
        Action conditions: Trigger value = PROBLEM
        Action operations: Send message to User "Admin"

    Please help me fix this issue.

    Read the article

  • lftp cannot connect to IIS

    - by ruyrocha
    Hello, I cannot connect to IIS using lftp, as you can see here:

        <--- 200 Language is now English, UTF-8 encoding.
        ---> OPTS UTF8 ON
        <--- 200 OPTS UTF8 command successful - UTF8 encoding now ON.
        ---> HOST x.x.x.x
        <--- 504 Server cannot accept argument.
        ---> USER bla
        <--- 331 Password required for hgtrf.
        ---> PASS blabla
        <--- 230 User logged in.
        ---> PWD
        <--- 257 "/" is current directory.
        ---> PBSZ 0
        <--- 200 PBSZ command successful.
        ---> PROT P
        <--- 534 Policy denies SSL.
        ---> PASV
        <--- 227 Entering Passive Mode (x.x.x.x,194,118).
        ---- Connecting data socket to (x.x.x.x) port 49782
        **** Socket error (Connection refused) - reconnecting
        ---> LIST
        ---> ABOR
        ---- Closing aborted data socket
        ---- Closing control socket

    I could connect, list, retrieve and send files using the standard ftp command. Do you have any suggestions?

    Read the article

  • Subversion gives Error 500 until authenticating with a web browser

    - by Farseeker
    We used to use a Collabnet SVN/Apache combo on a Windows server with LDAP authentication, and whilst the performance wasn't brilliant it used to work perfectly. After switching to a fresh Ubuntu 10 install and setting up an Apache/SVN/LDAP configuration, we have HTTPS access to our repositories, using Active Directory authentication via LDAP. We're now having a very peculiar issue. Whenever a new user accesses a repository, our SVN clients (we have a few depending on the tool, but for argument's sake, let's stick to TortoiseSVN) report "Error 500 - Unknown Response". To get around this, we have to log into the repo using a web browser and navigate 'backwards' until it works, e.g.:

        SVN Checkout https://svn.example.local/SVN/MyRepo/MyModule/ - Error 500 (bad)
        Webbrowse to https://svn.example.local/SVN/MyRepo/MyModule/ - Error 500 (bad)
        Webbrowse to https://svn.example.local/SVN/MyRepo/ - Error 500 (bad)
        Webbrowse to https://svn.example.local/SVN/ - Forbidden 403 (correct)
        Webbrowse to https://svn.example.local/SVN/MyRepo/ - OK 200 (correct)
        SVN Checkout https://svn.example.local/SVN/MyRepo/MyModule/ - Error 500 (bad)
        Webbrowse to https://svn.example.local/SVN/MyRepo/MyModule/ - OK 200 (correct)
        SVN Checkout https://svn.example.local/SVN/MyRepo/MyModule/ - OK 200 (correct)

    It seems to require authentication up the tree, starting from the SVNParentPath up through to the module required. Has anyone seen anything like this before? Any ideas on where to start before I ditch it back to Collabnet's SVN server?

    Read the article

  • Could this server log mean my server is being used as a proxy?

    - by So Over It
    I came across the following entry in my access.log:

        58.218.199.147 - - [05/Jun/2012:12:56:04 +1000] "GET http://proxyproxys.com/ HTTP/1.1" 200 183 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)"

    Normally when I see a full URL entry in my access.log, I assume it is log spam from people trying to get me to access their site. These entries are normally followed by a 404 response. The above entry is followed by a 200 'success' response! Doing some searching, it would seem that this can occur when someone is trying to use your server as a proxy. This disturbed me more, especially because the URL in question has the word proxy in it. Going to the site proxyproxys.com (using hidemyass.com to protect my own identity), the site returns what appears to be some sort of 'proxy judge':

        ----------------------------------------
        HTTP_ACCEPT=text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        HTTP_ACCEPT_LANGUAGE=en-US,en;q=0.8
        HTTP_USER_AGENT=Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_4) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.53 Safari/536.5
        HTTP_CONNECTION=close
        REMOTE_PORT=56355
        REMOTE_HOST=74.63.112.142
        REMOTE_ADDR=74.63.112.142
        ----------------------------------------
        CS_ProxyJudge Result=HIGH_ANONYMITY
        ----------------------------------------

    Questions:
    1. Does the 200 success mean that someone has been able to successfully use my server as a proxy?
    2. Are there other means of confirming whether my server is being used as a proxy?
    3. Can you refer me to documentation to help 'close up' the security gap, if there is one?

    Thanks.
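
    For what it's worth, one quick practical check (a sketch; the server address is a placeholder) is to point a client at your own server as an explicit forward proxy and see whether it will fetch a foreign site:

        # If this returns example.com's content with a 200, rather than your own
        # site or an error, the server is relaying requests as an open proxy.
        curl -v -x http://your.server.ip.here:80 http://example.com/

    On Apache, forward proxying only happens when mod_proxy is loaded and ProxyRequests is On, so checking the configuration for that directive is another way to confirm.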

    Read the article

  • weird POST request in IIS logs

    - by MIrrorMirror
    I noticed weird log entries (unless there's something I don't understand) in my IIS (7.5) logs. It's an online dictionary with user-friendly URL rewriting, and most requests are GET. However, I noticed weird POST requests made by a person who is trying to crawl our content (tens of thousands of such requests):

        2013-11-09 20:39:27 GET /dict/mylang/word1 - y.y.y.y Mozilla/5.0+(compatible;+Googlebot/2.1;++http://www.google.com/bot.html) - 200 296
        2013-11-09 20:39:29 GET /dict/mylang/word2 - z.z.z.z Mozilla/5.0+(iPhone;+CPU+iPhone+OS+6_0+like+Mac+OS+X)+AppleWebKit/536.26+(KHTML,+like+Gecko)+Version/6.0+Mobile/10A5376e+Safari/8536.25+(compatible;+Googlebot-Mobile/2.1;++http://www.google.com/bot.html) - 200 468
        2013-11-09 20:39:29 POST /dict/mylang/word3 - x.x.x.x - - 200 2593

    The first two requests are legal. Now for the third request: I don't think I have allowed cross-domain POST, if that is what the third log line means. All those POST requests also take that much time, for reasons unknown to me. I would like to know how those POST requests are possible and how I can stop them. P.S. I have masked the IPs on purpose. Any help would be appreciated! Thank you in advance.

    Read the article

  • SFTP only works occasionally

    - by 82din
    I suddenly get this error using SFTP:

        Status:   Connecting to example.com...
        Response: fzSftp started
        Command:  open "[email protected]" 22
        Command:  Pass: *********
        Status:   Connected to example.com
        Status:   Retrieving directory listing...
        Command:  pwd
        Response: Current directory is: "/root"
        Command:  ls
        Status:   Listing directory /root
        Error:    Connection timed out
        Error:    Failed to retrieve directory listing

    I tried using FileZilla, Cyberduck and the shell (Terminal), with the same result. However, it worked fine today (for just a few seconds) in passive mode. I guess something changed in my network, so I have tried both active and passive mode:

        Connecting to probe.filezilla-project.org
        Response: 220 FZ router and firewall tester ready
        USER FileZilla
        Response: 331 Give any password.
        PASS 3.6.0.2
        Response: 230 logged on.
        Checking for correct external IP address
        Retrieving external IP address from http://checkip.dyndns.org:8245/
        Checking for correct external IP address
        IP <external IP> big-bf-ccc-f
        Response: 200 OK
        PREP 49565
        Response: 200 Using port 49565, data token 380352881
        PORT 186,15,222,5,193,157
        Response: 200 PORT command successful
        LIST
        Response: 150 opening data connection
        Response: 503 Failure of data connection.
        Server sent unexpected reply.
        Connection closed

    Because I'm working behind a router, I get my external IP from http://checkip.dyndns.org:8245/ . I also tested different port ranges.

    Read the article

  • High-load MySQL on Debian server stops every day. Why?

    - by Oleg Abrazhaev
    I have a Debian server with 32 GB of memory, running apache2, memcached and nginx. Memory usage is always at maximum, with only about 500 MB free, and MySQL is the biggest consumer: Apache is configured for only 70 clients, and the other services have small memory usage. When MySQL uses up all the memory it stops, nothing works, and MySQL needs a restart. MySQL is configured to use at most 24 GB of memory. I have heavyweight InnoDB databases (400000 rows, 30 GB), and a multithreaded daemon on the server makes many inserts into these tables, which is why they are InnoDB. Here is my MySQL config:

        [mysqld]
        #
        # * Basic Settings
        #
        default-time-zone = "+04:00"
        user            = mysql
        pid-file        = /var/run/mysqld/mysqld.pid
        socket          = /var/run/mysqld/mysqld.sock
        port            = 3306
        basedir         = /usr
        datadir         = /var/lib/mysql
        tmpdir          = /tmp
        language        = /usr/share/mysql/english
        skip-external-locking
        default-time-zone='Europe/Moscow'
        #
        # Instead of skip-networking the default is now to listen only on
        # localhost which is more compatible and is not less secure.
        #
        # * Fine Tuning
        #
        #low_priority_updates = 1
        concurrent_insert = ALWAYS
        wait_timeout = 600
        interactive_timeout = 600
        #normal
        key_buffer_size = 2024M
        #key_buffer_size = 1512M
        #70% hot cache
        key_cache_division_limit = 70
        #16-32
        max_allowed_packet = 32M
        #1-16M
        thread_stack = 8M
        #40-50
        thread_cache_size = 50
        #orderby groupby sort
        sort_buffer_size = 64M
        #same
        myisam_sort_buffer_size = 400M
        #temp table creates when group_by
        tmp_table_size = 3000M
        #tables in memory
        max_heap_table_size = 3000M
        #on disk
        open_files_limit = 10000
        table_cache = 10000
        join_buffer_size = 5M
        # This replaces the startup script and checks MyISAM tables if needed
        # the first time they are touched
        myisam-recover = BACKUP
        #myisam_use_mmap = 1
        max_connections = 200
        thread_concurrency = 8
        #
        # * Query Cache Configuration
        #
        #more ignored
        query_cache_limit = 50M
        query_cache_size = 210M
        #on query cache
        query_cache_type = 1
        #
        # * Logging and Replication
        #
        # Both location gets rotated by the cronjob.
        # Be aware that this log type is a performance killer.
        #log = /var/log/mysql/mysql.log
        #
        # Error logging goes to syslog. This is a Debian improvement :)
        #
        # Here you can see queries with especially long duration
        log_slow_queries = /var/log/mysql/mysql-slow.log
        long_query_time = 1
        log-queries-not-using-indexes
        #
        # The following can be used as easy to replay backup logs or for replication.
        # note: if you are setting up a replication slave, see README.Debian about
        # other settings you may need to change.
        #server-id = 1
        #log_bin = /var/log/mysql/mysql-bin.log
        server-id = 1
        log-bin = /var/lib/mysql/mysql-bin
        #replicate-do-db = gate
        log-bin-index = /var/lib/mysql/mysql-bin.index
        log-error = /var/lib/mysql/mysql-bin.err
        relay-log = /var/lib/mysql/relay-bin
        relay-log-info-file = /var/lib/mysql/relay-bin.info
        relay-log-index = /var/lib/mysql/relay-bin.index
        binlog_do_db = 24avia
        expire_logs_days = 10
        max_binlog_size = 100M
        read_buffer_size = 4024288
        innodb_buffer_pool_size = 5000M
        innodb_flush_log_at_trx_commit = 2
        innodb_thread_concurrency = 8
        table_definition_cache = 2000
        group_concat_max_len = 16M
        #binlog_do_db = gate
        #binlog_ignore_db = include_database_name
        #
        # * BerkeleyDB
        #
        # Using BerkeleyDB is now discouraged as its support will cease in 5.1.12.
        #skip-bdb
        #
        # * InnoDB
        #
        # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
        # Read the manual for more InnoDB related options. There are many!
        # You might want to disable InnoDB to shrink the mysqld process by circa 100MB.
        #skip-innodb
        #
        # * Security Features
        #
        # Read the manual, too, if you want chroot!
        # chroot = /var/lib/mysql/
        #
        # For generating SSL certificates I recommend the OpenSSL GUI "tinyca".
        #
        # ssl-ca=/etc/mysql/cacert.pem
        # ssl-cert=/etc/mysql/server-cert.pem
        # ssl-key=/etc/mysql/server-key.pem

        [mysqldump]
        quick
        quote-names
        max_allowed_packet = 500M

        [mysql]
        #no-auto-rehash # faster start of mysql but no tab completition

        [isamchk]
        key_buffer = 32M
        key_buffer_size = 512M
        #
        # * NDB Cluster
        #
        # See /usr/share/doc/mysql-server-*/README.Debian for more information.
        #
        # The following configuration is read by the NDB Data Nodes (ndbd processes)
        # not from the NDB Management Nodes (ndb_mgmd processes).
        #
        # [MYSQL_CLUSTER]
        # ndb-connectstring=127.0.0.1
        #
        # * IMPORTANT: Additional settings that can override those from this file!
        # The files must end with '.cnf', otherwise they'll be ignored.
        #
        !includedir /etc/mysql/conf.d/

    Please help me make it stable. Memory usage:

        /etc/mysql # free
                     total       used       free     shared    buffers     cached
        Mem:      32930800   32766424     164376          0     139208   23829196
        -/+ buffers/cache:    8798020   24132780
        Swap:     33553328      44660   33508668

    Maybe my problem is not memory, but MySQL still stops every day. As you can see, about 24 GB of the memory is cache. (Thanks to Michael Hampton for the correction.) Load average on the server is 3.5. Maybe it's the HDD, or another problem? Maybe my config is not optimal for 30 GB of InnoDB? I already tried mysqltuner and tuning-primer.sh, but they marked everything green.

    Mysqltuner output:

        >> MySQLTuner 1.0.1 - Major Hayden <[email protected]>
        >> Bug reports, feature requests, and downloads at http://mysqltuner.com/
        >> Run with '--help' for additional options and output filtering
        -------- General Statistics --------------------------------------------------
        [--] Skipped version check for MySQLTuner script
        [OK] Currently running supported MySQL version 5.5.24-9-log
        [OK] Operating on 64-bit architecture
        -------- Storage Engine Statistics -------------------------------------------
        [--] Status: -Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
        [--] Data in MyISAM tables: 112G (Tables: 1528)
        [--] Data in InnoDB tables: 39G (Tables: 340)
        [--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 17)
        [!!] Total fragmented tables: 344
        -------- Performance Metrics -------------------------------------------------
        [--] Up for: 8h 18m 33s (14M q [478.333 qps], 259K conn, TX: 9B, RX: 5B)
        [--] Reads / Writes: 84% / 16%
        [--] Total buffers: 10.5G global + 81.1M per thread (200 max threads)
        [OK] Maximum possible memory usage: 26.3G (83% of installed RAM)
        [OK] Slow queries: 1% (259K/14M)
        [!!] Highest connection usage: 100% (201/200)
        [OK] Key buffer size / total MyISAM indexes: 1.5G/5.6G
        [OK] Key buffer hit rate: 100.0% (6B cached / 1M reads)
        [OK] Query cache efficiency: 74.3% (8M cached / 11M selects)
        [OK] Query cache prunes per day: 0
        [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 247K sorts)
        [!!] Joins performed without indexes: 106025
        [!!] Temporary tables created on disk: 49% (351K on disk / 715K total)
        [OK] Thread cache hit rate: 99% (249 created / 259K connections)
        [!!] Table cache hit rate: 15% (2K open / 13K opened)
        [OK] Open file limit used: 15% (3K/20K)
        [OK] Table locks acquired immediately: 99% (4M immediate / 4M locks)
        [!!] InnoDB data size / buffer pool: 39.4G/5.9G
        -------- Recommendations -----------------------------------------------------
        General recommendations:
            Run OPTIMIZE TABLE to defragment tables for better performance
            MySQL started within last 24 hours - recommendations may be inaccurate
            Reduce or eliminate persistent connections to reduce connection usage
            Adjust your join queries to always utilize indexes
            Temporary table size is already large - reduce result set size
            Reduce your SELECT DISTINCT queries without LIMIT clauses
            Increase table_cache gradually to avoid file descriptor limits
        Variables to adjust:
            max_connections (> 200)
            wait_timeout (< 600)
            interactive_timeout (< 600)
            join_buffer_size (> 5.0M, or always use indexes with joins)
            table_cache (> 10000)
            innodb_buffer_pool_size (>= 39G)

    Tuning-primer output:

        -- MYSQL PERFORMANCE TUNING PRIMER --
             - By: Matthew Montgomery -
        MySQL Version 5.5.24-9-log x86_64
        Uptime = 0 days 8 hrs 20 min 50 sec
        Avg. qps = 478
        Total Questions = 14369568
        Threads Connected = 16
        Warning: Server has not been running for at least 48hrs.
        It may not be safe to use these recommendations
        To find out more information on how each of these
        runtime variables effects performance visit:
        http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html
        Visit http://www.mysql.com/products/enterprise/advisors.html
        for info about MySQL's Enterprise Monitoring and Advisory Service
        SLOW QUERIES
        The slow query log is enabled.
        Current long_query_time = 1.000000 sec.
        You have 260626 out of 14369701 that take longer than 1.000000 sec. to complete
        Your long_query_time seems to be fine
        BINARY UPDATE LOG
        The binary update log is enabled
        Binlog sync is not enabled, you could loose binlog records during a server crash
        WORKER THREADS
        Current thread_cache_size = 50
        Current threads_cached = 45
        Current threads_per_sec = 0
        Historic threads_per_sec = 0
        Your thread_cache_size is fine
        MAX CONNECTIONS
        Current max_connections = 200
        Current threads_connected = 11
        Historic max_used_connections = 201
        The number of used connections is 100% of the configured maximum.
        You should raise max_connections
        INNODB STATUS
        Current InnoDB index space = 214 M
        Current InnoDB data space = 39.40 G
        Current InnoDB buffer pool free = 0 %
        Current innodb_buffer_pool_size = 5.85 G
        Depending on how much space your innodb indexes take up it may be safe
        to increase this value to up to 2 / 3 of total system memory
        MEMORY USAGE
        Max Memory Ever Allocated : 23.46 G
        Configured Max Per-thread Buffers : 15.84 G
        Configured Max Global Buffers : 7.54 G
        Configured Max Memory Limit : 23.39 G
        Physical Memory : 31.40 G
        Max memory limit seem to be within acceptable norms
        KEY BUFFER
        Current MyISAM index space = 5.61 G
        Current key_buffer_size = 1.47 G
        Key cache miss rate is 1 : 5578
        Key buffer free ratio = 77 %
        Your key_buffer_size seems to be fine
        QUERY CACHE
        Query cache is enabled
        Current query_cache_size = 200 M
        Current query_cache_used = 101 M
        Current query_cache_limit = 50 M
        Current Query cache Memory fill ratio = 50.59 %
        Current query_cache_min_res_unit = 4 K
        MySQL won't cache query results that are larger than query_cache_limit in size
        SORT OPERATIONS
        Current sort_buffer_size = 64 M
        Current read_rnd_buffer_size = 256 K
        Sort buffer seems to be fine
        JOINS
        Current join_buffer_size = 5.00 M
        You have had 106606 queries where a join could not use an index properly
        You have had 8 joins without keys that check for key usage after each row
        join_buffer_size >= 4 M
        This is not advised
        You should enable "log-queries-not-using-indexes"
        Then look for non indexed joins in the slow query log.
        OPEN FILES LIMIT
        Current open_files_limit = 20210 files
        The open_files_limit should typically be set to at least 2x-3x
        that of table_cache if you have heavy MyISAM usage.
        Your open_files_limit value seems to be fine
        TABLE CACHE
        Current table_open_cache = 10000 tables
        Current table_definition_cache = 2000 tables
        You have a total of 1910 tables
        You have 2151 open tables.
        The table_cache value seems to be fine
        TEMP TABLES
        Current max_heap_table_size = 2.92 G
        Current tmp_table_size = 2.92 G
        Of 366426 temp tables, 49% were created on disk
        Perhaps you should increase your tmp_table_size and/or max_heap_table_size
        to reduce the number of disk-based temporary tables
        Note! BLOB and TEXT columns are not allow in memory tables.
        If you are using these columns raising these values might not impact your
        ratio of on disk temp tables.
        TABLE SCANS
        Current read_buffer_size = 3 M
        Current table scan ratio = 2846 : 1
        read_buffer_size seems to be fine
        TABLE LOCKING
        Current Lock Wait ratio = 1 : 185
        You may benefit from selective use of InnoDB.
        If you have long running SELECT's against MyISAM tables and perform
        frequent updates consider setting 'low_priority_updates=1'
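
    As a rough cross-check of those reports, the usual ceiling estimate (the same shape of formula mysqltuner and tuning-primer use, though each tool counts slightly differently) is global buffers plus per-thread buffers times max_connections. A sketch with the values from the my.cnf above, using Python as a calculator:

        # All values in MB, taken from the configuration above (approximate).
        global_buffers  = 2024 + 5000 + 210   # key_buffer + innodb_buffer_pool + query_cache
        per_thread      = 64 + 5 + 4 + 8      # sort + join + read + thread_stack
        max_connections = 200

        print((global_buffers + per_thread * max_connections) / 1024.0)
        # ~23 GB, the same ballpark as the primer's "Configured Max Memory Limit".
        # Note: tmp_table_size/max_heap_table_size (3000M) applies per implicit
        # temporary table and is outside this formula, so real usage can spike higher.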

    Read the article

  • Adding License to VMware Server 2 via scripting command?

    - by andyt25
    Hi all, I recently discovered the vimsvc/license command in vmware-vim-cmd and was trying to use it to automatically add my license key to a fresh VMware installation:

        vmware-vim-cmd -H hostip -O portnumber vimsvc/license --source file '/path/to/plaintext-file-that-contains-my-license-key.txt'

    plaintext-file-that-contains-my-license-key.txt contains my key in XXXXX-XXXXX-XXXXX-XXXXX format; I've also tried it with an extra carriage return at the end. Adding the key that way doesn't work, however. I always get the following error message:

        [200] Reading local file: /path/to/plaintext-file-that-contains-my-license-key.txt
        [200] Size of file is 24 bytes.
        returned were XXXXX-XXXXX-XXXXX-XXXXX
        [200] Changing license source to: file:/path/to/plaintext-file-that-contains-my-license-key.txt
        [500] Caught unexpected exception
        Type: N5Vmomi5Fault17NotEnoughLicenses9ExceptionE
        what() =vmodl.fault.NotEnoughLicenses
        GetMsg() = There are not enough licenses installed to perform the operation.

    It's kinda silly to require a license to be able to add a license, don't you think? ;-) So how do I go about adding the key via script? I would like to avoid any interaction, as I have the rest of the install fully scripted and non-interactive. Kind Regards, Stefan

    Read the article

  • Apache logging issues

    - by Dan
    I'm trying to parse Apache log files, but I'm finding some strange results and I'm not sure what they mean. Hopefully someone can provide some insight. (All of the IP addresses were altered; none actually start with 192. I didn't figure the search engines mattered, though.)

    In the first example, multiple IP addresses are showing up in the host field:

        192.249.71.25 - - [04/Aug/2009:04:21:44 -0500] "GET /publications/example.pdf HTTP/1.1" 200 2738
        192.0.100.93, 192.20.31.86 - - [04/Aug/2009:04:21:22 -0500] "GET /docs/another.pdf HTTP/1.0" 206 371469

    What causes this? Does it have to do with proxy servers? Is there a way to have Apache only log one?

    In the second example, a bunch of information is just completely missing! What would cause this?

        msnbot-65-55-207-50.search.msn.com - - [29/Dec/2009:15:45:16 -0600] "GET /publications/example.pdf HTTP/1.1" 200 3470073 "-" "msnbot/2.0b (+http://search.msn.com/msnbot.htm)" 266 3476792
        - - - - "-" - - "-" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; InfoPath.1)" 285 594
        - - - - "-" - - "-" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; InfoPath.1)" 285 4195
        - - - - "-" - - "-" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; InfoPath.1)" 299 109218
        crawl-17c.cuil.com - - [29/Dec/2009:15:45:46 -0600] "GET /publications/another.pdf HTTP/1.0" 200 101481 "-" "Mozilla/5.0 (Twiceler-0.9 http://www.cuil.com/twiceler/robot.html)" 253 101704

    My CustomLog configuration says:

        LogFormat "%h %l %u %t \"%r\" %s %b \"%{Referer}i\" \"%{User-agent}i\" %I %O" common
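
    One hedged guess: if some clients come through a proxy or load balancer (nothing in the question confirms this), a comma-separated pair like that is the classic X-Forwarded-For format. Logging that header in its own field would make the two cases easy to tell apart; a sketch (the format name is made up):

        # %h stays the directly connecting peer; the last quoted field shows any
        # proxy chain reported via the X-Forwarded-For request header.
        LogFormat "%h %l %u %t \"%r\" %>s %b \"%{X-Forwarded-For}i\"" proxyaware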

    Read the article

  • Public-to-Public IPSec tunnel: NAT confusion

    - by WuckaChucka
    I know this is possible -- and apparently fairly common with larger companies that don't/can't route private addresses for overlap reasons -- but I can't wrap my head around how to get this to work. I'm playing around with pfSense, Vyatta and a Cisco 5505 right now, hardware-wise. So here's my setup:

    WEST (Vyatta):
    - outside: 10.0.0.254/24
    - inside: 172.16.0.1/24
    - machine a: 172.16.0.200/24

    EAST (Cisco 5505):
    - outside: 10.0.0.210/24
    - inside: 192.168.10.1
    - machine b (webserver): 192.168.10.2

    So what we're trying to do is this: route traffic across the tunnel from machine A to machine B without using private addresses, i.e. 172.16.0.200 makes a TCP request to 10.0.0.210:80, and as far as EAST is concerned, it sees a src IP of 10.0.0.254. On WEST, I have your typical many-to-one source NAT to translate 172.16.0.0/24 to 10.0.0.254, and that's confirmed to be working. Also on WEST, I have the following IPSec config:

        Local IP: 10.0.0.254
        Peer IP: 10.0.0.210
        local subnet: 10.0.0.254/32
        remote subnet: 10.0.0.210/32

    I have the reversed configuration on EAST. What happens when I make a request from machine A to 10.0.0.210:80 is that the SNAT translates the private address of machine A to 10.0.0.254, and it's routed out (and discarded at the other end) without establishing the tunnel. What I'm assuming is happening is that the inside interface on WEST receives a packet from 172.16.0.200, and since this doesn't match the local subnet defined in the tunnel configuration, it's not processed by the IPSec engine and the tunnel is not established. How do you make this work? It seems like a chicken-and-egg thing with the NAT and IPSec, and I just can't wrap my head around how this can be done: can I say, "if a packet is received on the inside interface with a destination of 10.0.0.210, translate it to 10.0.0.254 before the IPSec engine inspects it"?

    Read the article

  • Why is the 2nd IP in my traceroute not answering ping anymore?

    - by Pedro77
    My Internet is really laggy today. I did a traceroute and realized that I'm getting no answer from an IP at the beginning of the route. See:

        Tracing route to 12.129.202.154 over a maximum of 30 hops
          1    <1 ms    <1 ms    <1 ms  192.168.0.1
          2     *        *        *     Request timed out.
          3     8 ms     8 ms     8 ms  bd044008.virtua.com.br [189.4.64.8]
          4     9 ms     8 ms     8 ms  bd044009.virtua.com.br [189.4.64.9]
          5    26 ms    26 ms    24 ms  embratel-T0-1-5-0-tacc01.cas.embratel.net.br [200.174.243.21]
          6   360 ms    15 ms    12 ms  ebt-T0-15-0-12-tcore01.ctamc.embratel.net.br [200.244.140.218]
          7   330 ms   349 ms   261 ms  ebt-Bundle-POS11942-intl04.mianap.embratel.net.br [200.230.220.10]
          8   139 ms   141 ms   139 ms  sl-st30-mia-.sprintlink.net [144.223.64.221]

    Connection diagram: PC - router configured as access point - router (192.168.0.1) - cable modem (192.168.100.1). Well, I think it is odd that the 2nd IP is not returning the ping. I looked at some old traceroute logs to see what the 2nd IP used to be: it was 10.19.0.1. So what does this 2nd IP stand for? How can I find out why it is not answering the ping? I don't understand it: if it does not answer the ping, how can the packets continue (yeah, newbie question)?

    Edit: well, because hop 3 has a ping of 8 ms, the request timeout at hop 2 should not really be a problem. But it is still odd that the 2nd hop stopped answering ping requests. So my doubts are:
    1. Where is the IP 10.19.0.1 from?
    2. Why did it stop answering ping requests?
    3. How can hop 7 be smaller than hop 6, and hop 8 smaller than 7 and 6? Shouldn't the pings be higher for each hop, like each hop's time being the sum of the hops before it plus its own time (hop 3 = 1+2+3)?

    Read the article

  • Why are external domains appearing in my apache logs?

    - by Johan
    I've got several log entries that refer to an external domain, mainly a Russian search engine (http://www.yandex.ru/). How are these appearing in my logs?

        82.146.58.53 - - [10/Jun/2010:00:49:11 +0000] "GET http://www.yandex.ru/ HTTP/1.0" 200 8859 "http://www.yandex.ru/" "Opera/9.80 (Windows NT 5.1; U; ru) Presto/2.5.22 Version/10.50"
        82.146.59.209 - - [10/Jun/2010:01:54:10 +0000] "GET http://www.yandex.ru/ HTTP/1.0" 200 8859 "http://www.yandex.ru/" "Mozilla/5.0 (Windows; U; Windows NT 5.1; ru; rv:1.9.2.2) Gecko/20100316 Firefox/3.6.2"
        82.146.41.7 - - [10/Jun/2010:02:55:34 +0000] "GET http://www.yandex.ru/ HTTP/1.0" 200 8859 "http://www.yandex.ru/" "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/532.5 (KHTML, like Gecko) Chrome/4.1.249.1045 Safari/532.5"
        125.45.109.166 - - [09/Jun/2010:11:04:17 +0000] "GET http://proxyjudge1.proxyfire.net/fastenv HTTP/1.1" 404 1010 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)"

    Read the article

  • What does this strange network/subnet mask mean?

    - by dunxd
    I'm configuring a new ASA 5505 for deployment as a VPN endpoint in a remote office. After configuring it and connecting the VPN, I get the following messages:

        WARNING: Pool (10.6.89.200) overlap with existing pool.
        ERROR: IP address,mask <10.10.0.0,93.137.70.9> doesn't pair

    10.6.89.200 is the address I configured for the ASA; it has the subnet mask 255.255.255.0. The IP address 10.10.0.0 corresponds to one of our subnets, but it certainly wouldn't have a subnet mask of 93.137.70.9. That looks more like a public IP address (and resolves to an ADSL connection somewhere). I am sure that if we had such a subnet configured, it would indeed overlap with 10.6.89.200. There is no reference to 93.137.70.9 in the config of this ASA or our head office ASA. Can anyone shed light on what is going on here? The sudden appearance of a strange subnet mask is a bit alarming.

    Read the article

  • nginx and proxy_hide_header

    - by giskard
    When I curl for a URL I get this answer back:

        < HTTP/1.1 200 OK
        < Server: nginx/0.7.65
        < Date: Thu, 04 Mar 2010 12:18:27 GMT
        < Content-Type: application/json
        < Connection: close
        < Expires: Thu, 04 Mar 2010 12:18:27 UTC
        < http.context.path: /1/
        < jersey.response: com.sun.jersey.spi.container.ContainerResponse@17646d60
        < http.custom.headers: {Content-Type=text/plain}
        < http.request.path: /2/messages/latest.json
        < http.status: 200
        < Transfer-Encoding: chunked

    I want to remove these headers:

        http.context.path: /1/
        jersey.response: com.sun.jersey.spi.container.ContainerResponse@17646d60
        http.custom.headers: {Content-Type=text/plain}
        http.request.path: /2/messages/latest.json
        http.status: 200

    So I used the proxy_hide_header directive in this way:

        location / {
            if ($arg_id) {
                proxy_pass http..authorized;
                break;
            }
            proxy_pass http..anonymous;
            proxy_hide_header http.context.path;
            proxy_hide_header jersey.response;
            proxy_hide_header http.request.path;
            proxy_hide_header http.status;
        }

    But it doesn't work. Any clues?

    Read the article

  • Microsoft Application Request Routing with Windows Authentication

    - by theplatz
    I'm running into a problem trying to get Windows Authentication working in an environment that uses Microsoft Application Request Routing (ARR), and was hoping someone might be able to help. The problem is that only some requests are authenticated, while others fail with 401 errors. I have followed the "Special Case of Running IIS 7.0 in a Web Farm" instructions found at http://blogs.msdn.com/b/webtopics/archive/2009/01/19/service-principal-name-spn-checklist-for-kerberos-authentication-with-iis-7-0.aspx to no avail. My current server setup looks like the following:

    ARR:
    - Two servers set up with IIS shared configuration, using IIS 7.5 on Windows 2008 R2
    - Anonymous authentication turned on for the Default Web Site

    Web Farm:
    - Two servers running IIS 7.5 on Windows 2008 R2
    - Three web sites set up using port binding to differentiate between virtual hosts; the ports used are 8000, 8001, and 8002
    - Application pools for Windows Authentication all use a common domain account
    - SPN added to the domain account for http/<virtualhost-name>:<port-number> and http/<virtualhost-name>.<fully-qualified-domain>:<port-number>

    The IIS logs show the following when authentication is working/failing. If I understand correctly, all requests should show DOMAIN\User_Name:

        2012-11-19 15:03:17 CLUSTER-IP-ADDRESS GET /home/stylesheets/techweb.landing.css - 8002 DOMAIN\User_Name ARR-HOST-1-IP-ADDRESS 200 0 0 62
        2012-11-19 15:03:17 CLUSTER-IP-ADDRESS GET /home/images/user-background-right.gif - 8002 - ARR-HOST-1-IP-ADDRESS 401 2 5 0
        2012-11-19 15:03:17 CLUSTER-IP-ADDRESS GET /home/images/user-background-left.gif - 8002 DOMAIN\User_Name ARR-HOST-IP-ADDRESS 200 0 0 31
        2012-11-19 15:03:17 CLUSTER-IP-ADDRESS GET /home/images/user-icon.png - 8002 - ARR-HOST-1-IP-ADDRESS 401 2 5 0
        2012-11-19 15:03:17 CLUSTER-IP-ADDRESS GET /home/images/user-icon.png - 8002 - ARR-HOST-1-IP-ADDRESS 401 1 2148074248 0
        2012-11-19 15:03:17 CLUSTER-IP-ADDRESS GET /home/images/application-icon.png - 8002 - ARR-HOST-1-IP-ADDRESS 401 1 2148074248 0
        2012-11-19 15:03:17 CLUSTER-IP-ADDRESS GET /home/images/user-background-right.gif - 8002 - ARR-HOST-1-IP-ADDRESS 401 1 3221225581 15
        2012-11-19 15:03:17 CLUSTER-IP-ADDRESS GET /home/images/building.gif - 8002 DOMAIN\User_Name ARR-HOST-2-IP-ADDRESS 200 0 0 218

    Does anyone know what might cause this problem and how I can resolve it?
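
    For reference, SPN registration for a setup like this is typically done with setspn against the shared app-pool account; a sketch with hypothetical names:

        rem Register the SPNs the farm sites need on the shared domain account
        rem (-S checks for duplicates before adding), then list what is registered.
        setspn -S http/webfarm-site:8002 DOMAIN\AppPoolAccount
        setspn -S http/webfarm-site.corp.example.com:8002 DOMAIN\AppPoolAccount
        setspn -L DOMAIN\AppPoolAccount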

    Read the article

  • How to test nginx proxy timeouts

    - by mkorszun
    Target: I would like to test all nginx proxy timeout parameters in a very simple scenario. My first approach was to create a really simple HTTP server and insert sleeps at three points:

    - between listen and accept, to test proxy_connect_timeout
    - between accept and read, to test proxy_send_timeout
    - between read and send, to test proxy_read_timeout

    Test:

    1) Server code (Python):

        import socket
        import os
        import time
        import threading

        def http_resp(conn):
            conn.send("HTTP/1.1 200 OK\r\n")
            conn.send("Content-Length: 0\r\n")
            conn.send("Content-Type: text/xml\r\n\r\n\r\n")

        def do(conn, addr):
            print 'Connected by', addr
            print 'Sleeping before reading data...'
            time.sleep(0)  # Set to test proxy_send_timeout
            data = conn.recv(1024)
            print 'Sleeping before sending data...'
            time.sleep(0)  # Set to test proxy_read_timeout
            http_resp(conn)
            print 'End of data stream, closing connection'
            conn.close()

        def main():
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            s.bind(('', int(os.environ['PORT'])))
            s.listen(1)
            print 'Sleeping before accept...'
            time.sleep(130)  # Set to test proxy_connect_timeout
            while 1:
                conn, addr = s.accept()
                t = threading.Thread(target=do, args=(conn, addr))
                t.start()

        if __name__ == "__main__":
            main()

    2) Nginx configuration: I have extended the default nginx configuration by explicitly setting proxy_connect_timeout and adding a proxy_pass pointing to my local HTTP server:

        location / {
            proxy_pass http://localhost:8888;
            proxy_connect_timeout 200;
        }

    3) Observations:

    - proxy_connect_timeout: even though I set it to 200s and sleep only 130s between listen and accept, nginx returns 504 after ~60s, which might be because of the default proxy_read_timeout value. I do not understand how proxy_read_timeout could affect the connection at so early a stage (before accept); I would expect 200 here. Please explain!
    - proxy_send_timeout: I am not sure if my approach to testing proxy_send_timeout is correct; I think I still do not understand this parameter. After all, a delay between accept and read does not trigger proxy_send_timeout.
    - proxy_read_timeout: this seems pretty straightforward; setting a delay between read and send does the job.

    So I guess my assumptions are wrong and I probably do not understand the proxy_connect and proxy_send timeouts properly. Can someone explain them to me using the above test if possible (or modify it if required)?
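
    A small companion sketch (same Python 2 style as the server above; the URL and port are assumptions) for timing on the client side how long nginx takes to fail, which hints at which timeout actually fired:

        import time
        import urllib2

        start = time.time()
        try:
            # Request through nginx; e.g. failure after ~60s suggests a
            # default proxy timeout rather than the configured 200s.
            urllib2.urlopen('http://localhost/', timeout=600)
        except Exception, e:
            print 'Failed after %.1f seconds: %s' % (time.time() - start, e)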

    Read the article

  • tc u32 --- how to match L2 protocols in recent kernels?

    - by brownian
    I have a nice shaper, with hashed filtering, built on a Linux bridge. In short, br0 connects the external and internal physical interfaces, and VLAN-tagged packets are bridged "transparently" (I mean, no VLAN interfaces are there). Now, different kernels do it differently. I may be wrong with the exact kernel version ranges; please forgive me. Thanks.

    2.6.26: So, in Debian, 2.6.26 and up (up to 2.6.32, I believe), this works:

        tc filter add dev internal protocol 802.1q parent 1:0 prio 100 \
            u32 ht 1:64 match ip dst 192.168.1.100 flowid 1:200

    Here, the kernel matches two bytes in the "protocol" field against 0x8100, but counts the beginning of the IP packet as the "zero position" (sorry for my English, if I'm a bit unclear).

    2.6.32: Again, in Debian (I've not built a vanilla kernel), 2.6.32-5, this works:

        tc filter add dev internal protocol 802.1q parent 1:0 prio 100 \
            u32 ht 1:64 match ip dst 192.168.1.100 at 20 flowid 1:200

    Here, the kernel matches the same for protocol, but counts the offset from the beginning of this protocol's header: I have to add 4 bytes to the offset (20, not 16, for the dst address). That's OK, and seems more logical to me.

    3.2.11, the latest stable now: This works, as if there were no 802.1q tag at all:

        tc filter add dev internal protocol ip parent 1:0 prio 100 \
            u32 ht 1:64 match ip dst 192.168.1.100 flowid 1:200

    The problem is that I couldn't find a way to match the 802.1q tag so far.

    Matching the 802.1q tag in the past: I could do this before as follows:

        tc filter add dev internal protocol 802.1q parent 1:0 prio 100 \
            u32 match u16 0x0ed8 0x0fff at -4 flowid 1:300

    Now I'm unable to match the 802.1q tag with at 0, at -2, at -4, at -6 or the like. The main issue is that I get a zero hit count: this filter is not being checked at all; "wrong protocol", in other words. Please, anyone, help me :-) Thanks!

    Read the article

  • Script to list users' mapped drives not giving results or errors

    - by user223631
    We are in the process of migrating two file servers to a new server. We have mapped drives via user group in Group Policy, but many users have manually mapped drives, and we need to find these mappings. I have created a PowerShell script to remotely get the drive mappings. It works on most computers, but there are many that are not returning results, and I am not getting any error messages. Each workstation on the list creates a text file, and the files for the ones not returning results are empty. I can ping these machines. If a machine is not turned on, an error message does come up saying that the RPC server is not available. My domain user account is in a group that is in the local Administrators group. I have no idea why some are not working. Here is the script:

        # Load list into variable, which will become an array of strings
        If( !(Test-Path C:\Scripts)) { New-Item C:\Scripts -ItemType directory }
        If( !(Test-Path C:\Scripts\Computers)) { New-Item C:\Scripts\Computers -ItemType directory }
        If( !(Test-Path C:\Scripts\Workstations.txt)) { "No Workstations found. Please enter a list of Workstations under Workstation.txt"; Return }
        If( !(Test-Path C:\Scripts\KnownMaps.txt)) { "No Mapping to check against. Please enter a list of Known Mappings under KnownMaps.txt"; Return }
        $computerlist = Get-Content C:\Scripts\Workstations.txt

        # Loop through each item in the array (each computer in the list of computers we loaded into the variable)
        ForEach ($computer in $computerlist)
        {
            $diskObject = Get-WmiObject Win32_MappedLogicalDisk -computerName $computer |
                Select Name,ProviderName | Out-File C:\Tester\Computers\$computer.txt -width 200
        }

        Select-String -Path C:\Tester\Computers\*.txt -Pattern cmsfiles | Out-File C:\Tester\Drivemaps-all.txt
        $strings = Get-Content C:\Tester\KnownMaps.txt
        Select-String -Path C:\Tester\Drivemaps-all.txt -Pattern $strings -notmatch -simplematch | Out-File C:\Tester\Drivemaps-nonmatch.txt -Width 200
        Select-String -Path C:\Tester\Drivemaps-all.txt -Pattern $strings -simplematch | Out-File C:\Tester\Drivemaps-match.txt -Width 200

    Read the article

  • OpenVPN + iptables / NAT routing

    - by Mikeage
    Hi, I'm trying to set up an OpenVPN VPN which will carry some (but not all) traffic from the clients to the Internet via the OpenVPN server. My OpenVPN server has a public IP on eth0 and is using tap0 to create a local network, 192.168.2.x. I have a client which connects from local IP 192.168.1.101 and gets VPN IP 192.168.2.3. On the server, I ran:

        iptables -A INPUT -i tap+ -j ACCEPT
        iptables -A FORWARD -i tap+ -j ACCEPT
        iptables -t nat -A POSTROUTING -s 192.168.2.0/24 -o eth0 -j MASQUERADE

    On the client, the default remains to route via 192.168.1.1. In order to point it to 192.168.2.1 for HTTP, I ran:

        ip rule add fwmark 0x50 table 200
        ip route add table 200 default via 192.168.2.1
        iptables -t mangle -A OUTPUT -j MARK -p tcp --dport 80 --set-mark 80

    Now, if I try accessing a website on the client (say, wget google.com), it just hangs there. On the server, I can see:

        $ sudo tcpdump -n -i tap0
        tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
        listening on tap0, link-type EN10MB (Ethernet), capture size 96 bytes
        05:39:07.928358 IP 192.168.1.101.34941 > 74.125.67.100.80: S 4254520618:4254520618(0) win 5840 <mss 1334,sackOK,timestamp 558838 0,nop,wscale 5>
        05:39:10.751921 IP 192.168.1.101.34941 > 74.125.67.100.80: S 4254520618:4254520618(0) win 5840 <mss 1334,sackOK,timestamp 559588 0,nop,wscale 5>

    where 74.125.67.100 is the IP it gets for google.com. Why isn't the MASQUERADE working? More precisely, I see the source showing up as 192.168.1.101 -- shouldn't there be something to indicate that it came from the VPN?

    Edit: Some routes [from the client]:

        $ ip route show table main
        192.168.2.0/24 dev tap0  proto kernel  scope link  src 192.168.2.4
        192.168.1.0/24 dev wlan0  proto kernel  scope link  src 192.168.1.101  metric 2
        169.254.0.0/16 dev wlan0  scope link  metric 1000
        default via 192.168.1.1 dev wlan0  proto static

        $ ip route show table 200
        default via 192.168.2.1 dev tap0
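
    To sanity-check the client-side policy routing described above, the standard iproute2/iptables inspection commands are (a sketch; table and mark numbers match the rules above):

        ip rule show                         # should include: fwmark 0x50 lookup 200
        ip route show table 200              # should show: default via 192.168.2.1 dev tap0
        iptables -t mangle -L OUTPUT -n -v   # packet counters show whether the MARK rule matches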

    Read the article

  • Windows Server 2008 IIS Random disconnect

    - by d123
    I am having a bit of a quirk with my IIS server. I'm running IIS with two sets of IPs configured, one in the 192 range and the other in the 172 range. I then have multiple apps which talk to this server for information. The server has no AV or firewalls configured. I noticed that when my apps talk to the server on the 172 range, at random intervals the server just does not respond; my apps then disconnect, try again, and everything is fine. This doesn't happen on the 192 range. So, on a Linux box, I used the watch command to wget a file every half second on both the 172 and 192 IPs. I noticed the same issue: every once in a while, wget on the 172 range would not get through, but there are no issues at all on 192. Thus I went to Wireshark and did a dump. These are the last 3 packets; no other packets were received:

        7010 100.871877 200.100.30.7 172.0.0.1 TCP 59619 > http [ACK] Seq=140 Ack=85242 Win=64128 Len=0 TSV=1072818795 TSER=1660246133
        7011 100.872238 200.100.30.7 172.0.0.1 TCP 59619 > http [FIN, ACK] Seq=140 Ack=85242 Win=64128 Len=0 TSV=1072818796 TSER=1660246133
        7013 100.873081 200.100.30.7 172.0.0.1 TCP 59619 > http [ACK] Seq=141 Ack=85243 Win=64128 Len=0 TSV=1072818796 TSER=1660246133

    So this is my issue: there is a random disconnect every once in a while, and the server doesn't receive the next SYN packet. Help?

    Read the article
