Search Results

Search found 22903 results on 917 pages for 'full length screenshots'.

Page 11/917 | < Previous Page | 7 8 9 10 11 12 13 14 15 16 17 18  | Next Page >

  • RaphaelJS HTML5 Library pathIntersection() bug or alternative optimisation (screenshots)

    - by user1236048
    I have a chart generated with the RaphaelJS library. It is just one long path:

      M 50 122 L 63.230769230769226 130 L 76.46153846153845 130 L 89.6923076923077 128 L 102.92307692307692 56 L 116.15384615384615 106 L 129.3846153846154 88 L 142.6153846153846 114 L 155.84615384615384 52 L 169.07692307692307 30 L 182.3076923076923 62 L 195.53846153846152 130 L 208.76923076923077 74 L 222 130 L 235.23076923076923 66 L 248.46153846153845 102 L 261.6923076923077 32 L 274.9230769230769 130 L 288.15384615384613 130 L 301.38461538461536 32 L 314.6153846153846 86 L 327.8461538461538 130 L 341.07692307692304 70 L 354.30769230769226 130 L 367.53846153846155 102 L 380.7692307692308 120 L 394 112 L 407.2307692307692 68 L 420.46153846153845 48 L 433.6923076923077 92 L 446.9230769230769 128 L 460.15384615384613 110 L 473.38461538461536 78 L 486.6153846153846 130 L 499.8461538461538 56 L 513.0769230769231 116 L 526.3076923076923 80 L 539.5384615384614 58 L 552.7692307692307 40 L 566 130 L 579.2307692307692 94 L 592.4615384615385 64 L 605.6923076923076 122 L 618.9230769230769 98 L 632.1538461538461 120 L 645.3846153846154 70 L 658.6153846153845 82 L 671.8461538461538 76 L 685.0769230769231 124 L 698.3076923076923 110 L 711.5384615384615 94 L 724.7692307692307 130 L 738 130 L 751.2307692307692 66 L 764.4615384615385 118 L 777.6923076923076 70 L 790.9230769230769 130 L 804.1538461538461 44 L 817.3846153846154 130 L 830.6153846153845 36 L 843.8461538461538 92 L 857.076923076923 130 L 870.3076923076923 76 L 883.5384615384614 130 L 896.7692307692307 60 L 910 88

    Below this chart I have a jQuery UI slider of the same width (860px), centered with the chart. When I move the slider, I want a dot on the chart to move according to the slider position. See the attached screenshot: as you can see, it seems to work fine. I implemented this behaviour using the pathIntersection() method: on the slide event, at each ui.value (x coordinate), I intersect my chartPath (the one above) with a vertical straight line at that x coordinate. But there are still some problems. One is that it is very slow and sometimes seems to freeze. Stranger still, sometimes it doesn't seem to intersect at all even though it should. Here are two cases I identified:

      M 499.8461538461538 0 L 499.8461538461538 140
      M 910 0 L 910 140

    Could you please explain why this intersect behaviour happens (it should return a dot)? The worst part is that it seems to happen randomly if I use other chart data. Also, if you can suggest another (better) solution to synchronise the slider position with the dot on the chart, that would be perfect. I thought about using Element.getPointAtLength(length), but I don't know how. I think I should save the path segments and, for each one, compute the start and end lengths.
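
    Since the data points above are evenly spaced along x, the dot's y value can also be found by interpolating between neighbouring vertices directly, avoiding pathIntersection() entirely. A hedged sketch of that idea follows; the names chartData and dot are illustrative assumptions (the vertex array used to build the path, and a Raphael circle element):

      // chartData: the [x, y] vertices used to build the path above (assumed available)
      // dot: a Raphael circle element repositioned on each slide event
      function moveDotTo(x) {
          // find the segment [i, i+1] whose x range contains the slider value
          var i = 0;
          while (i < chartData.length - 2 && chartData[i + 1][0] < x) { i++; }
          var p0 = chartData[i], p1 = chartData[i + 1];
          // linear interpolation of y between the two vertices
          var t = (x - p0[0]) / (p1[0] - p0[0]);
          var y = p0[1] + t * (p1[1] - p0[1]);
          dot.attr({ cx: x, cy: y });
      }

      // jQuery UI slider wiring (sketch)
      $("#slider").slider({
          min: 50, max: 910,   // the x range of the path above
          slide: function (event, ui) { moveDotTo(ui.value); }
      });

    This runs in constant-ish time per event (a short scan instead of a path intersection), which should also remove the freezes.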

    Read the article

  • What is wrong with this code ("length indicator" implementation)?

    - by cj
    Hello, this is an implementation of a length-indicator field, but it hangs - I think it is stuck in a loop - and doesn't show anything.

      // readx22.cpp : Defines the entry point for the console application.
      //
      #include "stdafx.h"
      #include "iostream"
      #include "fstream"
      #include "stdio.h"
      using namespace std;

      class Student
      {
      public:
          string id;
          size_t id_len;
          string first_name;
          size_t first_len;
          string last_name;
          size_t last_len;
          string phone;
          size_t phone_len;
          string grade;
          size_t grade_len;
          void read(fstream &ven);
          void print();
      };

      void Student::read(fstream &ven)
      {
          size_t cnt;
          ven >> cnt;
          id_len = cnt;
          id.reserve(cnt);
          while (--cnt) { id.push_back(ven.get()); }
          ven >> cnt;
          first_len = cnt;
          first_name.reserve(cnt);
          while (--cnt) { first_name.push_back(ven.get()); }
          ven >> cnt;
          last_len = cnt;
          last_name.reserve(cnt);
          while (--cnt) { last_name.push_back(ven.get()); }
          ven >> cnt;
          phone_len = cnt;
          phone.reserve(cnt);
          while (--cnt) { phone.push_back(ven.get()); }
          ven >> cnt;
          grade_len = cnt;
          grade.reserve(cnt);
          while (--cnt) { grade.push_back(ven.get()); }
      }

      void Student::print()
      {
          // string::iterator it;
          for (int i = 0; i < id_len; i++)
              cout << id[i];
      }

      int main()
      {
          fstream in;
          in.open("fee.txt", fstream::in);
          Student x;
          x.read(in);
          x.print();
          return 0;
      }

    thanks
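
    Two likely culprits, offered as a hedged reading of the code above: `while (--cnt)` reads cnt-1 characters rather than cnt, and if any `ven >> cnt` extraction fails (for example, when fee.txt fails to open), cnt stays 0 and the pre-decrement wraps the unsigned size_t around to a huge value - an effectively infinite loop, which would explain the hang. A minimal corrected read helper might look like this (the stream-state check and the separator get() are additions, not part of the original):

      // Read one length-prefixed field: a count followed by that many characters.
      static void read_field(std::fstream &ven, std::string &out, size_t &len)
      {
          size_t cnt = 0;
          if (!(ven >> cnt)) return;   // bail out if the count cannot be read
          ven.get();                   // consume the separator after the count
          len = cnt;
          out.reserve(cnt);
          while (cnt-- > 0)            // post-decrement: reads exactly cnt chars
              out.push_back(static_cast<char>(ven.get()));
      }

    Each of the five repeated blocks in Student::read could then become a single call, e.g. read_field(ven, id, id_len).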

    Read the article

  • Millions of SYN_RECV connections, no DDoS

    - by ThomK
    We have the following server structure: reverse proxy (nginx) -> worker (uwsgi) -> postgresql / memcached. All servers are on a local network behind a router, with NATed external ip:ports (http/s 80/443 to the proxy, and ssh 22 to all servers). The problem is that sometimes netstat on the proxy server reports MILLIONS of SYN_RECV connections, from the same IP / same ports. Like this:

      nginx ~ # netstat -n | grep 83.238.153.195
      tcp        0      0 192.168.1.1:80        83.238.153.195:3107     SYN_RECV
      tcp        0      0 192.168.1.1:80        83.238.153.195:3107     SYN_RECV
      tcp        0      0 192.168.1.1:80        83.238.153.195:3107     SYN_RECV
      tcp        0      0 192.168.1.1:80        83.238.153.195:3107     SYN_RECV
      tcp        0      0 192.168.1.1:80        83.238.153.195:3107     SYN_RECV
      tcp        0      0 192.168.1.1:80        83.238.153.195:3107     SYN_RECV
      tcp        0      0 192.168.1.1:80        83.238.153.195:3107     SYN_RECV
      tcp        0      0 192.168.1.1:80        83.238.153.195:3107     SYN_RECV
      tcp        0      0 192.168.1.1:80        83.238.153.195:3107     SYN_RECV
      tcp        0      0 192.168.1.1:80        83.238.153.195:3107     SYN_RECV
      [...]

    And this is not a DDoS, because all the affected IPs belong to our website users. As a side note, users say it's not affecting them. The website is online and working, but... that particular user (from the example above) told me the website is down and Firefox can't connect. I've done a tcpdump:

      19:42:14.826011 IP 83.238.153.195.zephyr-srv > 192.168.1.1.http: Flags [S], seq 1845850583, win 65535, options [mss 1412,nop,wscale 0,nop,nop,sackOK], length 0
      19:42:14.826042 IP 192.168.1.1.http > 83.238.153.195.zephyr-srv: Flags [S.], seq 2835837547, ack 1845850584, win 5840, options [mss 1460,nop,nop,sackOK,nop,wscale 7], length 0
      19:42:17.887331 IP 83.238.153.195.zephyr-srv > 192.168.1.1.http: Flags [S], seq 1845850583, win 65535, options [mss 1412,nop,wscale 0,nop,nop,sackOK], length 0
      19:42:17.887343 IP 192.168.1.1.http > 83.238.153.195.zephyr-srv: Flags [S.], seq 2835837547, ack 1845850584, win 5840, options [mss 1460,nop,nop,sackOK,nop,wscale 7], length 0
      19:42:19.065497 IP 192.168.1.1.http > 83.238.153.195.zephyr-srv: Flags [S.], seq 2835837547, ack 1845850584, win 5840, options [mss 1460,nop,nop,sackOK,nop,wscale 7], length 0
      19:42:23.918064 IP 83.238.153.195.zephyr-srv > 192.168.1.1.http: Flags [S], seq 1845850583, win 65535, options [mss 1412,nop,wscale 0,nop,nop,sackOK], length 0
      19:42:23.918076 IP 192.168.1.1.http > 83.238.153.195.zephyr-srv: Flags [S.], seq 2835837547, ack 1845850584, win 5840, options [mss 1460,nop,nop,sackOK,nop,wscale 7], length 0
      19:42:25.265499 IP 192.168.1.1.http > 83.238.153.195.zephyr-srv: Flags [S.], seq 2835837547, ack 1845850584, win 5840, options [mss 1460,nop,nop,sackOK,nop,wscale 7], length 0
      19:42:37.265501 IP 192.168.1.1.http > 83.238.153.195.zephyr-srv: Flags [S.], seq 2835837547, ack 1845850584, win 5840, options [mss 1460,nop,nop,sackOK,nop,wscale 7], length 0
      19:42:37.758051 IP 83.238.153.195.2107 > 192.168.1.1.http: Flags [S], seq 564208067, win 65535, options [mss 1412,nop,wscale 0,nop,nop,sackOK], length 0
      19:42:37.758069 IP 192.168.1.1.http > 83.238.153.195.2107: Flags [S.], seq 3188568660, ack 564208068, win 5840, options [mss 1460,nop,nop,sackOK,nop,wscale 7], length 0
      19:42:40.714360 IP 83.238.153.195.2107 > 192.168.1.1.http: Flags [S], seq 564208067, win 65535, options [mss 1412,nop,wscale 0,nop,nop,sackOK], length 0
      19:42:40.714374 IP 192.168.1.1.http > 83.238.153.195.2107: Flags [S.], seq 3188568660, ack 564208068, win 5840, options [mss 1460,nop,nop,sackOK,nop,wscale 7], length 0
      19:42:41.665503 IP 192.168.1.1.http > 83.238.153.195.2107: Flags [S.], seq 3188568660, ack 564208068, win 5840, options [mss 1460,nop,nop,sackOK,nop,wscale 7], length 0
      19:42:46.751073 IP 83.238.153.195.2107 > 192.168.1.1.http: Flags [S], seq 564208067, win 65535, options [mss 1412,nop,wscale 0,nop,nop,sackOK], length 0
      19:42:46.751087 IP 192.168.1.1.http > 83.238.153.195.2107: Flags [S.], seq 3188568660, ack 564208068, win 5840, options [mss 1460,nop,nop,sackOK,nop,wscale 7], length 0
      19:42:47.665498 IP 192.168.1.1.http > 83.238.153.195.2107: Flags [S.], seq 3188568660, ack 564208068, win 5840, options [mss 1460,nop,nop,sackOK,nop,wscale 7], length 0
      19:42:59.865499 IP 192.168.1.1.http > 83.238.153.195.2107: Flags [S.], seq 3188568660, ack 564208068, win 5840, options [mss 1460,nop,nop,sackOK,nop,wscale 7], length 0
      19:43:01.265500 IP 192.168.1.1.http > 83.238.153.195.zephyr-srv: Flags [S.], seq 2835837547, ack 1845850584, win 5840, options [mss 1460,nop,nop,sackOK,nop,wscale 7], length 0
      19:43:13.320382 IP 83.238.153.195.2114 > 192.168.1.1.http: Flags [S], seq 2136055006, win 65535, options [mss 1412,nop,wscale 0,nop,nop,sackOK], length 0
      19:43:13.320399 IP 192.168.1.1.http > 83.238.153.195.2114: Flags [S.], seq 3754336171, ack 2136055007, win 5840, options [mss 1460,nop,nop,sackOK,nop,wscale 7], length 0
      19:43:16.320556 IP 83.238.153.195.2114 > 192.168.1.1.http: Flags [S], seq 2136055006, win 65535, options [mss 1412,nop,wscale 0,nop,nop,sackOK], length 0
      19:43:16.320569 IP 192.168.1.1.http > 83.238.153.195.2114: Flags [S.], seq 3754336171, ack 2136055007, win 5840, options [mss 1460,nop,nop,sackOK,nop,wscale 7], length 0
      19:43:17.665498 IP 192.168.1.1.http > 83.238.153.195.2114: Flags [S.], seq 3754336171, ack 2136055007, win 5840, options [mss 1460,nop,nop,sackOK,nop,wscale 7], length 0
      19:43:22.250069 IP 83.238.153.195.2114 > 192.168.1.1.http: Flags [S], seq 2136055006, win 65535, options [mss 1412,nop,wscale 0,nop,nop,sackOK], length 0
      19:43:22.250080 IP 192.168.1.1.http > 83.238.153.195.2114: Flags [S.], seq 3754336171, ack 2136055007, win 5840, options [mss 1460,nop,nop,sackOK,nop,wscale 7], length 0
      19:43:23.665500 IP 192.168.1.1.http > 83.238.153.195.2114: Flags [S.], seq 3754336171, ack 2136055007, win 5840, options [mss 1460,nop,nop,sackOK,nop,wscale 7], length 0
      19:43:23.865501 IP 192.168.1.1.http > 83.238.153.195.2107: Flags [S.], seq 3188568660, ack 564208068, win 5840, options [mss 1460,nop,nop,sackOK,nop,wscale 7], length 0
      19:43:35.665498 IP 192.168.1.1.http > 83.238.153.195.2114: Flags [S.], seq 3754336171, ack 2136055007, win 5840, options [mss 1460,nop,nop,sackOK,nop,wscale 7], length 0
      19:43:37.903038 IP 83.238.153.195.2213 > 192.168.1.1.http: Flags [S], seq 2918118729, win 65535, options [mss 1412,nop,wscale 0,nop,nop,sackOK], length 0
      19:43:37.903054 IP 192.168.1.1.http > 83.238.153.195.2213: Flags [S.], seq 4145523337, ack 2918118730, win 5840, options [mss 1460,nop,nop,sackOK,nop,wscale 7], length 0
      19:43:40.772899 IP 83.238.153.195.2213 > 192.168.1.1.http: Flags [S], seq 2918118729, win 65535, options [mss 1412,nop,wscale 0,nop,nop,sackOK], length 0
      19:43:40.772912 IP 192.168.1.1.http > 83.238.153.195.2213: Flags [S.], seq 4145523337, ack 2918118730, win 5840, options [mss 1460,nop,nop,sackOK,nop,wscale 7], length 0
      19:43:41.865500 IP 192.168.1.1.http > 83.238.153.195.2213: Flags [S.], seq 4145523337, ack 2918118730, win 5840, options [mss 1460,nop,nop,sackOK,nop,wscale 7], length 0
      19:43:46.793057 IP 83.238.153.195.2213 > 192.168.1.1.http: Flags [S], seq 2918118729, win 65535, options [mss 1412,nop,wscale 0,nop,nop,sackOK], length 0
      19:43:46.793069 IP 192.168.1.1.http > 83.238.153.195.2213: Flags [S.], seq 4145523337, ack 2918118730, win 5840, options [mss 1460,nop,nop,sackOK,nop,wscale 7], length 0
      19:43:47.865500 IP 192.168.1.1.http > 83.238.153.195.2213: Flags [S.], seq 4145523337, ack 2918118730, win 5840, options [mss 1460,nop,nop,sackOK,nop,wscale 7], length 0
      19:43:49.465503 IP 192.168.1.1.http > 83.238.153.195.zephyr-srv: Flags [S.], seq 2835837547, ack 1845850584, win 5840, options [mss 1460,nop,nop,sackOK,nop,wscale 7], length 0

    Anyone have some thoughts on that?
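
    A hedged reading of the dump above: every client SYN ([S]) is answered with a SYN-ACK ([S.]), which the proxy then keeps retransmitting on the usual backoff schedule - the handshake's final ACK never arrives, so the socket stays in SYN_RECV. That pattern usually points at the return path (for example, a broken NAT device or firewall on the client side dropping the SYN-ACK) rather than at nginx itself. If the half-open connections become a resource problem on the proxy, SYN cookies are a common mitigation (a sketch; the sysctl names are standard Linux, the backlog value is illustrative):

      # inspect current settings
      sysctl net.ipv4.tcp_syncookies net.ipv4.tcp_max_syn_backlog
      # enable SYN cookies and enlarge the half-open queue
      sysctl -w net.ipv4.tcp_syncookies=1
      sysctl -w net.ipv4.tcp_max_syn_backlog=4096
      # persist the values in /etc/sysctl.conf to survive reboots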

    Read the article

  • OBIEE 11.1.1 - Introduction to OBIEE 11g Full Sample App

    - by user809526
    Isn't it nice to discover OBIEE 11g through a nice "How To" catalog of features? To observe OBI and Essbase relationships at work? To discover TimesTen?

    The OBIEE 11g Full Sample App (FSA) is a comprehensive collection of examples designed to demonstrate the latest Oracle BIEE 11g capabilities and design best practices: enhanced visualizations such as geo-spatial maps and interactive dashboards, Action Framework, BI Publisher, Scorecard and Strategy Management, mobile style sheets, semantic layer modeling, multi-source federation, and integration with products such as Essbase, Oracle OLAP, ODM, TimesTen, ODI and more.

    The FSA is intended to be comprehensive, and it is big (see CAVEAT below). The FSA is not an Oracle product; it is a goodwill free deployment of OBIEE/Essbase designed to exemplify OBIEE features, infrastructure and security around the Fusion Middleware components. Its contents and code are distributed free for demonstrative purposes only. It is neither maintained nor supported by Oracle as a licensed product. The OBIEE Full Sample App is independent of the default Sample App that comes with the OBIEE product.

    BENEFITS
    The FSA serves as a demonstrator of OBIEE 11g best practices, a tutorial, a "test & scrap" environment, an SR bench (regression, conflicts), a tuning bench, a quick ready-made POC seed for projects, a security options environment, and more. The FSA:
    - Is organized around a catalog of functional features
    - Has been deployed over 1000 times, so it should be stable

    RELEASE
    The Full Sample App (V107) is bound to OBIEE 11.1.1.5 and Essbase 11.1.2.1 (November 2011). The FSA release dates are independent of the product GA date (OBIEE). In early December 2011, a new functional patch (V110) was released. It is easily applied (in less than 15 minutes) on top of OBIEE SampleApp 11.1.1.5 (V107). The patch (V110) includes additional functional examples:
    1. Web Catalog Statistics Application: provides detailed insight into your web catalog content, dormant catalog objects, webcat impact analysis for metadata changes and more
    2. Data Inflation Scripts: a set of simple SQL procedures to quickly inflate SampleApp fact and dimension data to millions of records in a few minutes
    3. Public Content Extensions Framework: a patching framework for public examples and contributions leveraging SampleApp
    4. Additional report examples (including bridge report, external chart integrations) and bug fixes

    DISTRIBUTION as VBox image (November 2011)
    The ready-made VBox image is designed to run on VirtualBox. It can be converted to VMware (see another BLOG).
    1/ http://www.oracle.com/technetwork/middleware/bi-foundation/obiee-samples-167534.html - VBox Image Deployment Guide; Sampleapp_v107_GA.ovf is the VBox image key file. This http URL provides the user:password for the ftp URLs below.
    2/ ftp://user:[email protected]/static/SampleAppV107/ - 12 "7-zip" files, Sampleapp_v107_GA_7_20.7z.001 -> .012. We recommend the 7-zip file manager for unzipping (http://www.7-zip.org/). Select the "Unzip here" option; it will create the contents under a directory named "SampleApp_10722". On Windows, it is important to download and save the zip files under the root directory (e.g. C:\ or D:\) because of possible long pathnames.
    3/ ftp://user:[email protected]/static/SampleAppV107/Unzipped_Version/ - 4 files, Sampleapp_v107_GA-disk[1234].vmdk. Important note: check the provided checksums (md5sum). Please do it!

    DISTRIBUTION as installation files for an existing OBI 11.1.1.5 (November 2011)
    http://www.oracle.com/technetwork/middleware/bi-foundation/obiee-samples-167534.html - install files and Deployment Guide, SampleApp_10722_1.zip - 198 MB

    CAVEAT
    Many computers have RAM chip problems that often stay silent... until you manipulate big files. It is strongly advised to run a memory check program, e.g. MEMTEST from the GRUB boot manager. Running md5sum repeatedly on the very same big file must be consistent [same result]; otherwise a hardware memory problem is suspected. For VirtualBox, you should most likely enable VT-x (Vanderpool) hardware virtualization in the BIOS. A free disk space of 80 GB is required to perform the VBox image installation safely. A virtual machine with a minimum of 6 to 7 GB of memory fits the needs of combining OBIEE and Essbase execution.
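
    The md5sum consistency check mentioned in the CAVEAT can be scripted in a couple of lines (a sketch; the file name is one of the 7-zip parts listed above):

      # identical output on every pass is expected; any variation points at bad RAM
      for pass in 1 2 3; do
          md5sum Sampleapp_v107_GA_7_20.7z.001
      done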

    Read the article

  • SQL Server 2005. Full Text Search. Need Thesaurus working with NEAR/AND/OR keywords

    - by user305924
    Hi, does anyone know if it's possible to do a thesaurus search together with the NEAR or AND/OR keywords? Here is an example of the type of query I want to run:

      SELECT Title, RANK
      FROM Item
      INNER JOIN CONTAINSTABLE(Item, Title, 'FORMSOF(Thesaurus, "red" NEAR "wine")') AS KEY_TBL
          ON Item.ItemID = KEY_TBL.[KEY]
      ORDER BY RANK DESC

    ...but I get the error message: Syntax error near 'NEAR' in the full-text search condition 'FORMSOF(Thesaurus, "red" NEAR "wine")'.
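
    For context, a hedged note on why the error appears: in SQL Server full-text syntax, the operands of NEAR must be simple words or phrases, so a generation term such as FORMSOF cannot appear inside it. A common workaround is to combine the thesaurus expansions with AND, giving up the proximity requirement (a sketch against the same table):

      SELECT Title, RANK
      FROM Item
      INNER JOIN CONTAINSTABLE(Item, Title,
          'FORMSOF(Thesaurus, red) AND FORMSOF(Thesaurus, wine)') AS KEY_TBL
          ON Item.ItemID = KEY_TBL.[KEY]
      ORDER BY RANK DESC;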

    Read the article

  • How to Filter ADO.NET data using a Full Text Search (FTS) field?

    - by ActionFactory
    Hi all, we are using ADO.NET Data Services and are building URL-based filters. Example:

      /Customers?filter=City eq 'London'

    We now need to filter on a full-text 'Tags' field. WAS HOPING FOR:

      /Customers?filter=Tag like 'Friendly'

    PROBLEM:
    - ADO.NET does not have a LIKE operator.
    - ADO.NET does not seem to like FTS (it is not finding a match - because it is not parsing through the CSVs).
    Any ideas how to make this work? Thanks.
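
    One hedged way around the missing LIKE operator is a service operation that runs the text predicate server-side while staying URL-addressable; the names below (CustomersByTag, the Customer entity, the Tags property) are illustrative assumptions, not part of the original service:

      // In the DataService<T> class: expose a filtered query as a service operation.
      [WebGet]
      public IQueryable<Customer> CustomersByTag(string tag)
      {
          // Hypothetical context; the predicate could instead call a stored
          // procedure that uses CONTAINS(Tags, ...) for a true FTS match.
          return CurrentDataSource.Customers.Where(c => c.Tags.Contains(tag));
      }

      // In InitializeService, the operation must be made visible:
      // config.SetServiceOperationAccessRule("CustomersByTag", ServiceOperationRights.AllRead);

    The operation would then be callable as /CustomersByTag?tag='Friendly'.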

    Read the article

  • Will MySQL full-text-search return the results I need?

    - by mike
    I have a keyword field with a list of 5 keywords for each item, for example:

      2008, Honda, Accord, Used, Car

    Will MySQL full-text search return the item above for the following search requests?

      2008 Honda Accord
      Honda Accord
      Used Car

    If so, how well will this hold up when searching through fifty thousand plus records?
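
    A hedged sketch of what such a query looks like, with one caveat worth knowing: with the MyISAM full-text defaults, words shorter than ft_min_word_len (4 by default) - such as "Car" - are not indexed at all, and very common words are dropped as stopwords, so "Used Car" may return nothing until those settings are changed and the index rebuilt. Table and column names below are illustrative:

      -- requires a FULLTEXT index on the keywords column
      ALTER TABLE items ADD FULLTEXT INDEX ft_keywords (keywords);

      -- natural-language mode: matches rows containing any of the indexed words
      SELECT item_id, keywords,
             MATCH(keywords) AGAINST ('2008 Honda Accord') AS relevance
      FROM items
      WHERE MATCH(keywords) AGAINST ('2008 Honda Accord')
      ORDER BY relevance DESC;

    At fifty thousand rows this should remain fast, since the MATCH runs against the index rather than scanning the table.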

    Read the article

  • How to index small words (3 letters) with SQL Full-text search?

    - by Sly
    I have an Incident table with one row that has the value 'out of office' in the Description column. However, the following query does not return that row:

      SELECT *
      FROM Incident
      WHERE CONTAINS((Incident.Description), '"out*"')

    The word 'out' is not in the noise file (I cleared the noise file completely). Is it because SQL full-text search does not index small words? Is there a setting for that? Note: I'm on SQL 2005.
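
    A hedged pointer: SQL Server 2005 has no minimum word length for full-text indexing, but edits to the noise files only affect content indexed afterwards, so after clearing them the index has to be repopulated before 'out' becomes searchable. Either statement below would do it (the catalog name is an illustrative assumption):

      -- repopulate one table's full-text index
      ALTER FULLTEXT INDEX ON Incident START FULL POPULATION;

      -- or rebuild the whole catalog
      ALTER FULLTEXT CATALOG MyFtCatalog REBUILD;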

    Read the article

  • Request Limit Length Limits for IIS's requestFiltering Module

    - by Rick Strahl
    Today I updated my CodePaste.net site to MVC 3 and pushed an update to the site. The update of MVC went pretty smoothly, as did most of the update process to the live site. Short of missing a web.config change in the /views folder that caused blank pages on the server, the process was relatively painless. However, one issue that kicked my ass for about an hour - and not for the first time - was a problem with my OpenId authentication using DotNetOpenAuth. I tested the site operation fairly extensively locally and everything worked no problem, but on the server the OpenId returns resulted in a 404 response from IIS for a nice friendly OpenId return URL like this:

      http://codepaste.net/Account/OpenIdLogon?dnoa.userSuppliedIdentifier=http%3A%2F%2Frstrahl.myopenid.com%2F&dnoa.return_to_sig_handle=%7B634239223364590000%7D%7BjbHzkg%3D%3D%7D&dnoa.return_to_sig=7%2BcGhp7UUkcV2B8W29ibIDnZuoGoqzyS%2F%2FbF%2FhhYscgWzjg%2BB%2Fj10ZpNdBkUCu86dkTL6f4OK2zY5qHhCnJ2Dw%3D%3D&openid.assoc_handle=%7BHMAC-SHA256%7D%7B4cca49b2%7D%7BMVGByQ%3D%3D%7D&openid.claimed_id=http%3A%2F%2Frstrahl.myopenid.com%2F&openid.identity=http%3A%2F%2Frstrahl.myopenid.com%2F&openid.mode=id_res&openid.ns=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0&openid.ns.sreg=http%3A%2F%2Fopenid.net%2Fextensions%2Fsreg%2F1.1&openid.op_endpoint=http%3A%2F%2Fwww.myopenid.com%2Fserver&openid.response_nonce=2010-10-29T04%3A12%3A53Zn5F4r5&openid.return_to=http%3A%2F%2Fcodepaste.net%2FAccount%2FOpenIdLogon%3Fdnoa.userSuppliedIdentifier%3Dhttp%253A%252F%252Frstrahl.myopenid.com%252F%26dnoa.return_to_sig_handle%3D%257B634239223364590000%257D%257BjbHzkg%253D%253D%257D%26dnoa.return_to_sig%3D7%252BcGhp7UUkcV2B8W29ibIDnZuoGoqzyS%252F%252FbF%252FhhYscgWzjg%252BB%252Fj10ZpNdBkUCu86dkTL6f4OK2zY5qHhCnJ2Dw%253D%253D&openid.sig=h1GCSBTDAn1on98sLA6cti%2Bj1M6RffNerdVEI80mnYE%3D&openid.signed=assoc_handle%2Cclaimed_id%2Cidentity%2Cmode%2Cns%2Cns.sreg%2Cop_endpoint%2Cresponse_nonce%2Creturn_to%2Csigned%2Csreg.email%2Csreg.fullname&openid.sreg.email=rstrahl%40host.com&openid.sreg.fullname=Rick+Strahl

    A 404 of course isn't terribly helpful - normally a 404 is a resource-not-found error, but the resource is definitely there. So how the heck do you figure out what's wrong? If you're just interested in the solution, here's the short version: IIS by default allows only a 1024 byte query string, which is obviously exceeded by the above. The setting is controlled by the RequestFiltering module in IIS 7 and later, which can be configured in ApplicationHost.config (in %windir%\System32\inetsrv\config). To set the value, configure the requestLimits key like so:

      <configuration>
        <security>
          <requestFiltering>
            <requestLimits maxQueryString="2048">
            </requestLimits>
          </requestFiltering>
        </security>
      </configuration>

    This fixed me right up and made the requests work. How do you find out about problems like this? Ah yes, the troubles of an administrator. Read on and I'll take you through a quick review of how I tracked this down.

    Finding the Problem
    The issue with the error returned is that IIS returns a 404 Resource Not Found error and doesn't provide much information about it. If you're lucky enough to be able to run your site from localhost, IIS is actually very helpful and gives you the right information immediately in a nicely detailed error page; the bottom of the page actually describes exactly what needs to be fixed. One problem with this easy way to find an error: you HAVE TO run localhost. On my server, which has about 10 domains running, localhost doesn't point at the particular site I had problems with, so I didn't get the luxury of this nice error page.

    Using Failed Request Tracing to retrieve Error Info
    The first place I go with IIS errors is to turn on Failed Request Tracing in IIS to get more error information. If you have access to the server to make a configuration change, you can enable Failed Request Tracing like this: find the Failed Request Tracing Rules in the IIS Service Manager, select the option, and then Edit Site Tracing to enable tracing. Then add a rule for * (all content) and specify status codes from 100-999 to capture all errors. If you know exactly what error you're looking for, it might help to specify it exactly to keep the number of captured errors down. Then run your request and let it fail. IIS will throw error log files into a folder like C:\inetpub\logs\FailedReqLogFiles\W3SVC5, where the trailing 5 is the instance ID of the site. These files are XML, but they include an XSL stylesheet that provides some decent formatting. In this case it pointed me straight at the offending module: the RequestFilteringModule.

    Request Filtering is built into IIS 7 and later and configured in ApplicationHost.config. This module defines a few basic rules about what paths and extensions are allowed in requests and, among other things, how long a query string is allowed to be. Most of these settings are pretty sensible, but the query string value can easily become a problem, especially if you're dealing with OpenId, since those return URLs are quite extensive. Debugging failed requests is never fun, but IIS at least provides the tools that can help point us in the right direction. The error message in the FRT report isn't as nice as the IIS error page, but it at least names the offending module, which gave me the clue I needed to look at request restrictions in ApplicationHost.config. This would still be a stretch if you're not intimately familiar with it, but I think with some Google searches it would be easy to track this down in a few tries. Hope this was useful to some of you - it's useful to me to put this out as a reminder, since I've run into this issue before myself and totally forgot. Next time I've got it, right?

    © Rick Strahl, West Wind Technologies, 2005-2010
    Posted in ASP.NET, Security
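
    As a footnote to the fix above, the same setting can also be applied from the command line rather than by editing ApplicationHost.config by hand; a hedged sketch using the standard IIS 7 appcmd form:

      %windir%\system32\inetsrv\appcmd set config /section:requestFiltering /requestLimits.maxQueryString:2048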

    Read the article

  • Full-text indexing? You must read this

    - by Kyle Hatlestad
    For those of you who may have missed it, Peter Flies, Principal Technical Support Engineer for WebCenter Content, gave an excellent webcast on database searching and indexing in WebCenter Content. It's available for replay along with a download of the slide deck; look for the one titled 'WebCenter Content: Database Searching and Indexing'. One of the items he led with - and concluded with - was a recommendation to optimize your search collection if you are using full-text searching with the Oracle database. This can greatly improve your search performance, and it applies to both the Oracle Text Search and DATABASE.FULLTEXT search methods. Peter describes how a collection becomes fragmented over time as content is added, updated, and deleted. Just as you should defragment your hard drive from time to time to place files on disk optimally, you should do the same for the search collection. Optimizing the collection is just a simple procedure call that can be scheduled to run automatically:

      begin
        ctx_ddl.optimize_index('FT_IDCTEXT1', 'FULL', parallel_degree => '1');
      end;

    When I checked my own test instance, I found my collection had a row fragmentation of about 80%. After running the optimization procedure, it went down to 0%. The knowledgebase article On Index Fragmentation and Optimization When Using OracleTextSearch or DATABASE.FULLTEXT [ID 1087777.1] goes into detail on how to check your current index fragmentation, how to run the procedure, and how to schedule the procedure to run automatically. While the article mentions scheduling the job weekly, Peter says he now recommends running it daily, especially on more active systems. And just as a reminder, be sure to involve your DBA with your WebCenter Content implementation as you go to production and over time. We recently had a customer complain of slow application performance, and it was discovered the database was starving for memory. So it's always helpful to keep a watchful eye on your database.
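
    A hedged sketch of scheduling that optimize call daily with DBMS_SCHEDULER (the job name and the 2 a.m. run time are illustrative; the index name comes from the block above):

      begin
        dbms_scheduler.create_job(
          job_name        => 'OPTIMIZE_FT_INDEX_DAILY',
          job_type        => 'PLSQL_BLOCK',
          job_action      => 'begin ctx_ddl.optimize_index(''FT_IDCTEXT1'', ''FULL'', parallel_degree => ''1''); end;',
          start_date      => systimestamp,
          repeat_interval => 'FREQ=DAILY;BYHOUR=2',
          enabled         => true);
      end;
      /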

    Read the article

  • SEO impact on subdomain for full name and obscure ccTLD

    - by Dan Christian
    There have been a few questions on subdomains and their impact on SEO, mostly in comparison to subfolders. The closest question I've found is this question, but it still doesn't completely answer my query. I'm setting up a blog for 'Sam Smith'. It's imperative the SEO is based around his full name, as he is a prominent blogger and his name is his value. All TLD variations of 'samsmith' (samsmith.com, samsmith.cc etc.) are taken. However, there has been the opportunity to register an obscure ccTLD for 'smith'. Regarding SEO value purely from the URL:
    1) Will there be any negative SEO implications on searches for 'Sam Smith' when setting up the subdomain as 'sam.smith.' compared to a more regular 'samsmith.' domain? Will a search engine recognise the subdomain as the full name, as opposed to just 'smith'?
    2) Are there any negative SEO implications with an obscure ccTLD? For instance, if Sam Smith were a prominent blogger in Canada with most of his audience based there, would there be any negative SEO if he had, for example, a .co ccTLD?

    Read the article

  • SQL transaction log backups conflicting with full backups?

    - by BradC
    On our SQL servers (2000, 2005, and 2008), we run full backups once a day in the evening, and transaction log backups every 2 hrs. We haven't really worried about these two processes conflicting, but lately we've run into some of the following issues:
    - On one server, the trans log backup occasionally blocks the full backup, and must be manually stopped before the full backup can complete
    - We sometimes end up with a massively-sized trans log backup file (sometimes larger than the full backup!) that seems to occur at the same time the full backup is running
    I found a reference that indicates these are "not allowed" to run at the same time, whatever that means: SQL 2000 Books Online and SQL 2005 Books Online. I'm not sure whether that means the server will simply prevent them from running simultaneously, or if we ought to be explicitly stopping the log backups while the full backups are running. So are there known conflicts/issues between these? Does the answer differ between SQL versions? Should the trans log backup job check whether the full backup is running before it executes? (And how do I do that...?)
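
    On the last question, a hedged sketch that works on SQL 2005/2008 (SQL 2000 has no DMVs; there you would query master..sysprocesses for a cmd value of 'BACKUP DATABASE' instead). It could run as the first statement of the log backup job step:

      -- skip this cycle if a full/differential backup is in flight
      IF EXISTS (SELECT 1
                 FROM sys.dm_exec_requests
                 WHERE command LIKE 'BACKUP DATABASE%')
      BEGIN
          PRINT 'Full backup in progress - skipping log backup this cycle.';
          RETURN;   -- exits the batch before the BACKUP LOG statement below
      END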

    Read the article

  • Add a Real-Time Earth Wallpaper App to Ubuntu with xplanetFX

    - by Asian Angel
    Are you tired of the same old wallpaper on your Ubuntu desktop? Now you can go from blah to literally spacious, real-time styled views of Earth with the xplanetFX wallpaper app for Linux. You can conveniently access the "file type" downloads, screenshots, and jump-to links all on the front page. For our example we downloaded the .deb setup file on our system. The setup file will need to download three additional files to complete the setup process; after those are downloaded, all dependencies will have been met and you can complete the installation. Once that is done you can find xplanetFX by going to the Accessories section of your Ubuntu menu. This is what the main control window looks like when you start xplanetFX for the first time. You should take a few moments to look through the various tabs and tweak the settings for items like location, screen resolution, timing, auto-start, etc. When you are done, click on Execute and within a few moments your desktop will have a fresh new look! Note: it took ~30 seconds for the display to activate on our system. Have fun with xplanetFX!
    xplanetFX Homepage [via OMG! Ubuntu!]

    Read the article

  • XNA Guide text input - maximum length

    - by simonalexander2005
    So I am using Guide.BeginShowKeyboardInput to get the user to enter their username. I would like this to be limited to 20 characters, and it seems to break expected behaviour to let them input whatever they like and trim it later - so how would I go about limiting what they can input in the text box itself? I have the following code:

      public string GetKeyboardInput(string title, string description, string defaultText, int maxLength)
      {
          if (input.CheckCancel())
          {
              useKeyboardResult = false;
              KeyboardResult = null;
          }
          if (KeyboardResult == null && !Guide.IsVisible)
          {
              KeyboardResult = Guide.BeginShowKeyboardInput(PlayerIndex.One, title, description, defaultText, null, null);
              useKeyboardResult = true;
          }
          else if (KeyboardResult != null && KeyboardResult.IsCompleted)
          {
              string result = Guide.EndShowKeyboardInput(KeyboardResult);
              KeyboardResult = null;
              if (result == null)
              {
                  useKeyboardResult = false;
                  return null;
              }
              if (useKeyboardResult)
              {
                  KeyboardResult = null;
                  return result;
              }
          }
          else
          {
              // the user is still entering input
          }
          return null;
      }

    I assume the code I need would go in that final, empty else block, but I can't see any way to do this. Does anyone know how?
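
    For what it's worth, a hedged observation: Guide.BeginShowKeyboardInput takes no maximum-length parameter, so the limit can't be imposed inside the dialog itself. One workaround is to validate on completion and re-show the dialog pre-filled with the truncated text, so the user can confirm or edit it. A sketch of that check, slotted into the IsCompleted branch of the method above (the prompt text is illustrative):

      string result = Guide.EndShowKeyboardInput(KeyboardResult);
      KeyboardResult = null;
      if (result != null && result.Length > maxLength)
      {
          // Re-prompt with the overlong text cut down to the limit.
          KeyboardResult = Guide.BeginShowKeyboardInput(PlayerIndex.One, title,
              "Maximum " + maxLength + " characters", result.Substring(0, maxLength), null, null);
          return null;
      }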

    Read the article

  • URL length and content optimised for SEO [closed]

    - by Brendan Vogt
    Possible duplicate: What is the best structure of an SEO-friendly URL? I have done some reading on what URLs should look like for search engine optimisation, but I am curious to know how mine should look; I need some advice. I have a tutorial website, and my categories are something like: Web Development -> Client Side -> JavaScript. So if I have a tutorial called "What is JavaScript?", is it good to have a URL that looks something like
      www.MyWebsite.com/web-development/client-side/javascript/what-is-javascipt
    or would something like this be more appropriate:
      www.MyWebsite.com/tutorials/what-is-javascipt
    Just curious, because I also read that it is wise to have keywords in your URLs. Do I need to add the identifiers of each category in the link as well, something like
      www.MyWebsite.com/1/web-development/5/client-side/15/javascript/100/what-is-javascipt
    where 1 is the unique identifier (primary key) of the category web development, 5 of the category client side, 15 of the category javascript, and 100 of the tutorial "what is javascript"?

    Read the article

  • URL length and content optimised for SEO

    - by Brendan Vogt
    I have done some reading on what URLs should look like for search engine optimisation, but I am curious to know how mine should look; I need some advice. I have a tutorial website, and my categories are something like: Web Development -> Client Side -> JavaScript. So if I have a tutorial called "What is JavaScript?", is it good to have a URL that looks something like
      www.MyWebsite.com/web-development/client-side/javascript/what-is-javascipt
    or would something like this be more appropriate:
      www.MyWebsite.com/tutorials/what-is-javascipt
    Just curious, because I also read that it is wise to have keywords in your URLs. Do I need to add the identifiers of each category in the link as well, something like
      www.MyWebsite.com/1/web-development/5/client-side/15/javascript/100/what-is-javascipt
    where 1 is the unique identifier (primary key) of the category web development, 5 of the category client side, 15 of the category javascript, and 100 of the tutorial "what is javascript"?
    UPDATE: This is not a programming question, so can someone please migrate this to the correct Q&A site without downvoting my questions?

    Read the article

  • How do I mount a CIFS share via FSTAB and give full RW to Guest

    - by Kendor
    I want to create a Public folder that has full RW access. The problem with my configuration is that Windows users have no issues as guests (they can read, write and delete), but my Ubuntu client can't do the same: we can only write and read, but not create or delete. Here is the smb.conf from my server:

      [global]
      workgroup = WORKGROUP
      netbios name = FILESERVER
      server string = TurnKey FileServer
      os level = 20
      security = user
      map to guest = Bad Password
      passdb backend = tdbsam
      null passwords = yes
      admin users = root
      encrypt passwords = true
      obey pam restrictions = yes
      pam password change = yes
      unix password sync = yes
      passwd program = /usr/bin/passwd %u
      passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
      add user script = /usr/sbin/useradd -m '%u' -g users -G users
      delete user script = /usr/sbin/userdel -r '%u'
      add group script = /usr/sbin/groupadd '%g'
      delete group script = /usr/sbin/groupdel '%g'
      add user to group script = /usr/sbin/usermod -G '%g' '%u'
      guest account = nobody
      syslog = 0
      log file = /var/log/samba/samba.log
      max log size = 1000
      wins support = yes
      dns proxy = no
      socket options = TCP_NODELAY
      panic action = /usr/share/samba/panic-action %d

      [homes]
      comment = Home Directory
      browseable = no
      read only = no
      valid users = %S

      [storage]
      create mask = 0777
      directory mask = 0777
      browseable = yes
      comment = Public Share
      writeable = yes
      public = yes
      path = /srv/storage

    The following fstab entry doesn't yield full R/W access to the share:

      //192.168.0.5/storage /media/myname/TK-Public/ cifs rw 0 0

    This doesn't work either:

      //192.168.0.5/storage /media/myname/TK-Public/ cifs rw,guest,iocharset=utf8,file_mode=0777,dir_mode=0777,noperm 0 0

    Using the following location in Nemo/Nautilus without the share being mounted does work:

      smb://192.168.0.5/storage/

    Extra info: I just noticed that if I copy a file to the share after mounting, my Ubuntu client immediately makes "nobody" the owner, the group "no group" has read and write, and everyone else is read-only. What am I doing wrong?

    Read the article

  • C/C++ Best indentation length?

    - by Tim
    I was reading a Vim tutorial (http://www.oualline.com/vim-cook.html#drawing) and came across this: "This is very useful if you use a 4 space indentation for your C or C++ programs. (Studies at Rice University have shown this to be the best indentation size.)" Is there any truth in these studies? Note: I didn't mean to start a flame war over indentation - just to ask whether anyone else has come across this study before. EDIT: @MaR I made a poll: http://poll.fm/3d5kg

    Read the article

  • Eliminating zero-length files

    - by RhZ
    I have been having multiple crashes recently - 4-5 last night within a few hours. I posted about it before and got an answer, but I'm not sure how to proceed. The messages in my logs right before the crash are multiple complaints about valid eCryptfs headers. The cron entry might not be related, though; I don't think I saw it in previous crashes:

      xxx-desktop kernel: [ 1112.274474] Valid eCryptfs headers not found in file header region or xattr region, inode 32376924
      xxx-desktop CRON[4212]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)

    So I was sent to an answer providing this script:

      for i in $(find $(mount | grep " on $HOME type ecryptfs" | awk '{print $1}') -size 0c); do
          if ! fuser -v $i; then
              rm -f $i
          fi
      done

    I did find some zero-byte files, not in exactly the right place (a folder called .private, as I remember), but I need to fix this; it's too bad right now. So I need to delete any of them that are not in use. I am a little too clueless - can someone walk me through executing this script? I don't know how.
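
    For readers in the same spot, a hedged walkthrough of running a loop like the one above (the file name is illustrative):

      # save the loop into a file, e.g. with a text editor
      nano ~/cleanup-zero-length.sh     # paste the script above and save
      # run it with bash; use sudo only if the files aren't owned by you
      bash ~/cleanup-zero-length.sh

    It may be safer to dry-run first by replacing "rm -f $i" with "echo $i", so the script only lists what it would delete.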

    Read the article

  • Why is my disk full?

    - by Agmenor
    I installed Ubuntu 12.04 by doing a fresh install where there was previously Ubuntu 11.10. My computer now warns me that my disk is nearly full. After having run apt-get purge, run apt-get autoremove and emptied the Trash can, I still have this problem, as shown by this screenshot of GParted: the disk /dev/sda7 is indeed full. I ran the Disk Usage Analyzer (Baobab) and I am still not sure what is happening. One of my hypotheses is that when installing Ubuntu 12.04 I didn't configure my disks well, and the disk /dev/sda6 is not mounted as /home. Is this indeed the reason? What should I do to verify this and then get things fixed? Here are a few additional details to answer the questions I received (thank you, everybody):
    - My home directory is not encrypted.
    - The Backup utility (Déjà Dup) is not set for automatic backups. (I do it myself, manually.)
    - After I mount /dev/sda6, the command df -h gives:

      Filesystem      Size  Used Avail Use% Mounted on
      /dev/sda7       244G  221G   12G  96% /
      udev            3,9G  4,0K  3,9G   1% /dev
      tmpfs           1,6G  904K  1,6G   1% /run
      none            5,0M     0  5,0M   0% /run/lock
      none            3,9G  164K  3,9G   1% /run/shm
      /dev/sda6       653G  189G  433G  31% /media/8ec2fa69-039b-4c52-ab1b-034d785132a1

    - Thanks to izx's post, I realized /dev/sda6 was not even mounted before. It contains all the documents I used to have when I was running Ubuntu 11.10.
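
    If /dev/sda6 is indeed meant to be /home, the usual fix is an fstab entry, sketched here with assumptions: the UUID is inferred from the mount-point name in the df output above (Ubuntu typically names /media mount points after the partition UUID - verify with "sudo blkid"), and ext4 is assumed:

      # /etc/fstab - mount the large partition as /home at boot
      UUID=8ec2fa69-039b-4c52-ab1b-034d785132a1  /home  ext4  defaults  0  2

    Anything currently under /home on /dev/sda7 should be copied over to /dev/sda6 first (ideally from a live session), since the new mount will hide it.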

    Read the article

  • URL slugs: ideal length, and the real SEO effects of these slugs

    - by tattvamasi
    This question is addressed widely on SO and outside it, but for some reason, instead of taking it all as a good load of great advice, all this information is confusing me.

    ** Problem **
    I already had "prettified" URLs on one of my sites. I had taken out the query strings and rewritten the URLs, and the link was short enough for me, but it had a problem: the ID of the item or post in the URL isn't good for users. One of the users asked if there's a way to get rid of the numbers, and I thought it was better for users to just see a clue to the page content in the URL.

    ** Solution **
    With this in mind, I am trying it on a section of the site. Armed with 301 redirects, some parsing work, and a lot of patience, I have added URL slugs to some blog entries, and the slug of the URL reports the title of the article (something close to http://example.com/my-news/terribly-boring-and-long-url-that-replaces-the-number-I-liked-so-much/).

    ** Problems after Solution **
    The problem, as I see it, is that the URL of those blog articles is now certainly very descriptive, but it is also impossible to remember. So this brings me back to the same issue I had before: if numbers say nothing and can't be remembered, what's the use of these slugs? I prefer to see http://example.com/my-news/1/ rather than http://example.com/my-news/terribly-boring-and-long-url-that-replaces-the-number-I-liked-so-much/. To avoid forcing my users to memorize my URLs, I have added a script that finds the closest match to the URL you type and redirects there. This is something I like, because the page now acts as a sort of little search engine, and users can play with the URLs to find articles.

    ** Open questions **
    I still have some open questions and don't seem to be able to find an answer, because answers tend to contradict one another.
    1) How many characters should a URL ideally be? I've read the magic number 115 and am sticking to that, but am not sure.
    2) Is this really good for SEO? One of those blog articles I redirected, with the ID number in the URL and all, ranked second on Google. I've just found this question, and the answer seems to be consistent with what I think (URL slug and SEO - structure), but see this other question with the opposite opinion.
    3) To make the question concrete with a specific example: would this URL risk being penalized? Is it acceptable? Is it too long? StackOverflow seems to have comparably long URLs, but I'm not sure it's a winning strategy in my case. I just want to make things easier for my users without running afoul of Google's algorithms.

    Read the article

  • Length of Page Title, URL, Meta Description and total number of links on a page

    - by MJWadmin
    We've been examining a number of different SEO tools recently. Several of these tell us that some of our page titles, URLs and meta descriptions are too long. We've also been told that some of our pages have too many links on them. I guess our first question is: is any of that feedback true? Can URLs etc. actually be too long, and if so, how much does this affect ranking? Secondly, can you have too many links on a page, and if so, how many is too many? Thanks in advance...

    Read the article

  • When it Comes to SEO, Length Matters

    Search engine optimizing your website can sometimes seem counter-intuitive. The common misconception is to try and associate every potential keyword with your website. Wrong! It's not about quantity, it's about quality long tail phrases. Here's what you need to know.

    Read the article

  • Resizing screenshots/screen captures for inclusion in Beamer

    - by Stephen
    Sorry, this may or may not be a programming question directly, but I am trying to resize screenshots with ImageMagick and GIMP to include in a Beamer presentation, and the result comes out even blurrier than the resizing done by LaTeX. For instance, in Beamer I might have a command to rescale the image:

      \includegraphics[width=.5\textwidth]{fig.png}

    Using something like

      \begin{frame}
        \message{width = \the\textwidth}
        \message{height = \the\textheight}
      \end{frame}

    I have obtained the \textwidth and \textheight parameters in points (345.69548, 261.92444). So I have a Python script that sends a system call to ImageMagick:

      'convert %s -resize %.6f@ resized_%s' % (f, a, f)

    where a is calculated as \textwidth*\textheight*0.5**2. When I then go back into my Beamer presentation and include the resized figure with \includegraphics{resized_fig.png}, the size looks approximately correct but it's super-blurry. I also tried resizing in GIMP (using the GUI), but no luck either... help? Thanks...
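
    A hedged guess at the blur: resampling a screenshot down to the slide's size in points throws away pixels, and the PDF viewer then enlarges the small bitmap again. Working in pixels for a chosen output density usually behaves better than the area-based "-resize @" form. In the sketch below, the 2 px/pt density (roughly a 144 dpi target) is an illustrative assumption, as is the helper's name:

      import subprocess

      TEXTWIDTH_PT = 345.69548   # from the \message output above
      PX_PER_PT = 2.0            # assumed render density; raise it for sharper output

      def resize_for_beamer(f, frac=0.5):
          """Resize screenshot f to frac of \\textwidth at the assumed density."""
          width_px = int(TEXTWIDTH_PT * frac * PX_PER_PT)
          subprocess.check_call([
              'convert', f,
              '-filter', 'Lanczos',          # a sharper resampling filter
              '-resize', '%dx' % width_px,   # fix the width, keep the aspect ratio
              'resized_' + f,
          ])

      resize_for_beamer('fig.png')

    The resized file would then still be included with \includegraphics[width=.5\textwidth]{resized_fig.png}, so LaTeX pins the physical size while the bitmap keeps enough pixels.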

    Read the article

  • Save a binary file in SQL Server as BLOB and text (or get the text from Full-Text index)

    - by Glennular
    Currently we are saving files (PDF, DOC) into the database as BLOB fields. I would like to be able to retrieve the raw text of the file, to be able to manipulate it for hit-highlighting and other functions. Does anyone know of a simple way to parse out the files and save the raw text on save, either via SQL or .NET code? I have found that Adobe has a filtdump utility that will convert PDF to text, but filtdump seems to be a command-line tool, and I don't see a way to use a file stream. And what would the extractor be for Office documents and other file types? -or- Is there a way to pull the raw text out of the full-text index? Note: I am trying to build a .NET & MSSQL solution without having to use a third-party tool such as Lucene.

    Read the article
