Search Results

Search found 9058 results on 363 pages for 'length'.

Page 174 of 363

  • phpmyadmin port change?

    - by Rajat
    How do I change my default phpMyAdmin port to 443 or 9999? Is it possible, or do I have to use port 80 only? If it is possible, how do I do it? Apache is listening on port 9999 for sure. However, going to the URL http://<webserver>:9999/phpmyadmin/ gives the following error (in Firefox): An error occurred during a connection to webserver:9999. SSL received a record that exceeded the maximum permissible length. (Error code: ssl_error_rx_record_too_long) Does anyone have any clue what is going on?
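    The ssl_error_rx_record_too_long message usually means the browser and the server disagree about whether port 9999 is speaking plain HTTP or TLS. A small diagnostic sketch (the host name is a placeholder, not the real server) that probes the port both ways:

    ```python
    import socket
    import ssl

    HOST = "webserver.example.com"  # hypothetical name, replace with the real server
    PORT = 9999

    # 1) Plain-HTTP probe: a readable "HTTP/1.x ..." reply means the port speaks plain HTTP.
    try:
        with socket.create_connection((HOST, PORT), timeout=5) as sock:
            sock.sendall(("HEAD /phpmyadmin/ HTTP/1.0\r\nHost: %s\r\n\r\n" % HOST).encode())
            print("plain reply starts with:", sock.recv(64))
    except OSError as exc:
        print("plain HTTP probe failed:", exc)

    # 2) TLS probe: if this handshake succeeds, the vhost on 9999 expects https:// URLs.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False          # diagnostic only
    ctx.verify_mode = ssl.CERT_NONE
    try:
        with socket.create_connection((HOST, PORT), timeout=5) as raw:
            with ctx.wrap_socket(raw, server_hostname=HOST) as tls:
                print("TLS handshake succeeded:", tls.version())
    except OSError as exc:
        print("TLS handshake failed:", exc)
    ```

    If only the TLS probe answers, the vhost on 9999 is probably configured with SSLEngine on and the URL needs to start with https://; if the plain probe answers, something between the browser and Apache is likely upgrading the request to HTTPS.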


  • IIS7 Custom ASP.NET Errors

    - by Nathan
    I'm trying to set up a custom error page for the IIS 7 404.13 (Content length too large) error. Here are the relevant sections of my web.config file: <system.webServer> <httpErrors errorMode="Custom" existingResponse="Replace"> <remove statusCode="404" subStatusCode="13" /> <error statusCode="404" subStatusCode="13" prefixLanguageFilePath="" path="/FileUpload/Test.aspx" responseMode="ExecuteURL" /> </httpErrors> <security> <requestFiltering> <requestLimits maxAllowedContentLength="10240" /> </requestFiltering> </security> </system.webServer> The response that is being sent back is blank. The Test.aspx file is not blank. Any idea what's going on here?


  • kerberos5 unable to authenticate

    - by wolfgangsz
    We have a Debian file server, configured to serve up samba shares, using winbind and kerberos. This is configured to authenticate against a Windows2003 DC. All worked fine until recently when I did a maintenance update on all packages. Since then, all attempts to connect to any of the shares (and also to just log into the box) fail. The logs contain this message, which seems to be at the root of the evil: [2009/09/14 12:04:29, 10] libsmb/clikrb5.c:get_krb5_smb_session_key(685) Got KRB5 session key of length 16 [2009/09/14 12:04:29, 10] libsmb/clikrb5.c:unwrap_pac(280) authorization data is not a Windows PAC (type: 141) [2009/09/14 12:04:29, 3] libads/kerberos_verify.c:ads_verify_ticket(430) ads_verify_ticket: did not retrieve auth data. continuing without PAC From there on it fails to find the user account on the DC, subsequently remaps the user to user nobody and then (rightly) refuses to grant access to the share. However, the following works just fine: wbinfo -a user%password I was wondering whether anybody has had this problem and could provide some insight. I would be happy to provide neutralised config files.


  • SQL Server 2005: reclaiming LOB space

    - by AndrewD
    Hello all, I've got an interesting table in one of my DBs that's confusing me. The table in question has a few LOB-type columns (two nvarchar(max) and a text) and it looks like there are some strange space issues going on. From this query: SELECT type_desc, SUM(total_pages) *8 [Size in kb] FROM sys.partitions p JOIN sys.allocation_units a ON p.partition_id = a.container_id WHERE p.object_id = OBJECT_ID('asyncoperationbase') GROUP BY type_desc; I get: type_desc Size in kb IN_ROW_DATA 27936 LOB_DATA 1198144 ROW_OVERFLOW_DATA 0 (there are just under 8000 rows in the table, each row has a data length of ~10k - not counting the LOB data) Here's where it gets somewhat interesting: SELECT ( SUM(DATALENGTH(aob.WorkflowState)) + SUM(DATALENGTH(aob.[Message]))+ SUM(DATALENGTH(aob.[Data])) ) / 1024 FROM AsyncOperationBase aob returns: 76617 As I'm reading it, it looks like the ~75 MB of LOB data is using over a gig of space to be stored - I would expect some overhead, but not quite that much. Thanks, Andrew


  • nginx+php-fpm help optimize configs

    - by Dmitro
    I have 3 servers. First server (CPU - model name: 06/17, 2.66GHz, 4 cores, 8GB RAM) have nginx as load balancer with next config upstream lb_mydomain { server mydomain.ru:81 weight=2; server 66.0.0.18 weight=6; } server { listen 80; server_name ~(?!mydomain.ru)(.*); client_max_body_size 20m; location / { proxy_pass http://lb_mydomain; proxy_redirect off; proxy_set_header Connection close; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass_header Set-Cookie; proxy_pass_header P3P; proxy_pass_header Content-Type; proxy_pass_header Content-Disposition; proxy_pass_header Content-Length; } } And configs from nginx.conf: user www-data; worker_processes 5; # worker_priority -1; error_log /var/log/nginx/error.log; pid /var/run/nginx.pid; events { worker_connections 5024; # multi_accept on; } http { include /etc/nginx/mime.types; access_log /var/log/nginx/access.log; sendfile on; default_type application/octet-stream; #tcp_nopush on; keepalive_timeout 65; tcp_nodelay on; gzip on; gzip_disable "MSIE [1-6]\.(?!.*SV1)"; # PHP-FPM (backend) upstream php-fpm { server 127.0.0.1:9000; } include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; } And config php-fpm: listen = 127.0.0.1:9000 ;listen.backlog = -1 ;listen.allowed_clients = 127.0.0.1 ;listen.owner = www-data ;listen.group = www-data ;listen.mode = 0666 user = www-data group = www-data pm = dynamic pm.max_children = 80 ;pm.start_servers = 20 pm.min_spare_servers = 5 pm.max_spare_servers = 35 ;pm.max_requests = 500 pm.status_path = /status ping.path = /ping ;ping.response = pong request_terminate_timeout = 30s request_slowlog_timeout = 10s slowlog = /var/log/php-fpm.log.slow ;rlimit_files = 1024 ;rlimit_core = 0 ;chroot = chdir = /var/www ;catch_workers_output = yes ;env[HOSTNAME] = $HOSTNAME ;env[PATH] = /usr/local/bin:/usr/bin:/bin ;env[TMP] = /tmp ;env[TMPDIR] = /tmp ;env[TEMP] = /tmp ;php_admin_value[sendmail_path] = /usr/sbin/sendmail -t -i -f [email protected] ;php_flag[display_errors] = off ;php_admin_value[error_log] = /var/log/fpm-php.www.log ;php_admin_flag[log_errors] = on ;php_admin_value[memory_limit] = 32M In top I see 20 php-fpm processes which use from 1% - 15% CPU. So it's have high load averadge: top - 15:36:22 up 34 days, 20:54, 1 user, load average: 5.98, 7.75, 8.78 Tasks: 218 total, 1 running, 217 sleeping, 0 stopped, 0 zombie Cpu(s): 34.1%us, 3.2%sy, 0.0%ni, 37.0%id, 24.8%wa, 0.0%hi, 0.9%si, 0.0%st Mem: 8183228k total, 7538584k used, 644644k free, 351136k buffers Swap: 9936892k total, 14636k used, 9922256k free, 990540k cached Second server(CPU - model name: Intel(R) Xeon(R) CPU E5504 @ 2.00GHz, 8 cores, 8GB RAM). 
Nginx configs from nginx.conf: user www-data; worker_processes 5; # worker_priority -1; error_log /var/log/nginx/error.log; pid /var/run/nginx.pid; events { worker_connections 5024; # multi_accept on; } http { include /etc/nginx/mime.types; access_log /var/log/nginx/access.log; sendfile on; default_type application/octet-stream; #tcp_nopush on; keepalive_timeout 65; tcp_nodelay on; gzip on; gzip_disable "MSIE [1-6]\.(?!.*SV1)"; # PHP-FPM (backend) upstream php-fpm { server 127.0.0.1:9000; } include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; } And config of php-fpm: listen = 127.0.0.1:9000 ;listen.backlog = -1 ;listen.allowed_clients = 127.0.0.1 ;listen.owner = www-data ;listen.group = www-data ;listen.mode = 0666 user = www-data group = www-data pm = dynamic pm.max_children = 50 ;pm.start_servers = 20 pm.min_spare_servers = 5 pm.max_spare_servers = 35 ;pm.max_requests = 500 ;pm.status_path = /status ;ping.path = /ping ;ping.response = pong ;request_terminate_timeout = 0 ;request_slowlog_timeout = 0 ;slowlog = /var/log/php-fpm.log.slow ;rlimit_files = 1024 ;rlimit_core = 0 ;chroot = chdir = /var/www ;catch_workers_output = yes ;env[HOSTNAME] = $HOSTNAME ;env[PATH] = /usr/local/bin:/usr/bin:/bin ;env[TMP] = /tmp ;env[TMPDIR] = /tmp ;env[TEMP] = /tmp ;php_admin_value[sendmail_path] = /usr/sbin/sendmail -t -i -f [email protected] ;php_flag[display_errors] = off ;php_admin_value[error_log] = /var/log/fpm-php.www.log ;php_admin_flag[log_errors] = on ;php_admin_value[memory_limit] = 32M In top I see 50 php-fpm processes which use from 10% - 25% CPU. So it's have high load averadge: top - 15:53:05 up 33 days, 1:15, 1 user, load average: 41.35, 40.28, 39.61 Tasks: 239 total, 40 running, 199 sleeping, 0 stopped, 0 zombie Cpu(s): 96.5%us, 3.1%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.4%si, 0.0%st Mem: 8185560k total, 7804224k used, 381336k free, 161648k buffers Swap: 19802108k total, 16k used, 19802092k free, 5068112k cached Third server is server with database postgresql. Also i try ab -n 50 -c 5 http://www.mydomain.ru/ And I get next info: Complete requests: 50 Failed requests: 48 (Connect: 0, Receive: 0, Length: 48, Exceptions: 0) Write errors: 0 Total transferred: 9271367 bytes HTML transferred: 9247767 bytes Requests per second: 1.02 [#/sec] (mean) Time per request: 4882.427 [ms] (mean) Time per request: 976.486 [ms] (mean, across all concurrent requests) Transfer rate: 185.44 [Kbytes/sec] received Please advise how can I make lower level of load average?


  • iptables rules for botnet (UDP flood) protection

    - by Petar Simeonov
    I'm currently experiencing a massive UDP attack on my server. I host a couple of game servers, mainly TF2, CS:GO, CS 1.6 and CS:Source, and my 1.6 server is being flooded. I tried different rules in iptables, but none of them seemed to work. I'm on a 100 Mbps bandwidth tariff, but the flood I receive is 500+ Mbps. This is the log of the latest tcpdump - http://pastebin.com/HSgFVeBs Packet length varies throughout the day. Only my game server ports - 27015, 27016, 27018 - are being flooded, via UDP packets. Are there any iptables rules that might prevent this?


  • Ubuntu 10.4 Lucid Server Minimal Install: Slow terminal scrolling

    - by noname
    I have a minimal install of Ubuntu 10.4 Server for testing and learning purposes. There is a very annoying occurrence: whenever I try to "man dpkg" or run any command that loads a few screens' worth of text (e.g. "ls -al"), the redraw speed of the console is just way too slow. I can see how each new line causes the whole screen to redraw. Note that this doesn't happen inside X; no GUI is installed. I have been experimenting with adding vesafb to the GRUB line as some guides suggested, but no speedups happened. You might be able to reproduce this behaviour on your Linux system by switching to a terminal using CTRL+ALT+F1. Is there any way to speed scrolling up?


  • I've inherited a rat's nest of cabling. What now?

    - by hydroparadise
    You know, you see pictures like the one below and sort of chuckle until you actually have to deal with it. I have just inherited something that looks like the picture below. The culture of the organization does not tolerate downtime very well, yet I have been tasked to 'clean it up'. The network functions as it is, and there doesn't seem to be a rush to get it done, but I will have to tackle the bear at some point. I get the ugly eye when I mention anything about weekends. So my question is: is there a structured approach to this problem? My ideas thus far: label, label, label; make up my patch cables of the desired length ahead of time; do one subnet at a time (it appears that each subnet is for a different physical location); replace one cable at a time within each subnet; and it's easier to get forgiveness than permission?


  • How to edit semi-plaintext file and maintaining character structure?

    - by Raul
    I am using a piece of software (Groupmail from Infacta) that uses exact/absolute %PATHS% for saving some settings in a specific semi-plaintext file. This is a really bad idea because you can't move to another USER folder, and in my case it does not work after migrating to a new computer with a different language. For example: C:\Documents and Settings\USER\Local Settings\Application Data\Infacta is different from C:\Documents and Settings\USER\Configuración local\Datos de programa\Infacta Obviously, the software does not work well. I tried to solve this by doing a Find/Replace of the new PATH with Notepad++. While Groupmail loads and shows the settings correctly, it fails when trying to save data to that file. I guess this is because the length or number of replaced characters is different, and that corrupts the file. Could you please help me edit this file while maintaining its integrity/structure?
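    If the corruption comes from the way the file is edited (text-mode line-ending or encoding changes) rather than from embedded length fields, a byte-level replace touches nothing except the path itself. A minimal sketch, with a hypothetical file name and an assumed cp1252 encoding; if the file really does store string lengths internally, no plain search-and-replace will keep it consistent.

    ```python
    from pathlib import Path

    # Hypothetical file location; point this at the real Groupmail settings file.
    settings = Path(r"C:\temp\groupmail-settings.dat")

    old = r"C:\Documents and Settings\USER\Local Settings\Application Data\Infacta"
    new = r"C:\Documents and Settings\USER\Configuración local\Datos de programa\Infacta"

    # Assuming the file stores paths in the Windows ANSI code page (cp1252 here);
    # adjust the encoding if it turns out to be UTF-8 or UTF-16.
    data = settings.read_bytes()
    patched = data.replace(old.encode("cp1252"), new.encode("cp1252"))

    print(f"file grows by {len(patched) - len(data)} bytes in total "
          f"({len(new) - len(old)} characters per replaced occurrence)")

    # Write to a copy first so the original is never touched.
    settings.with_suffix(".patched").write_bytes(patched)
    ```

    If Groupmail still fails to save after a byte-exact replace like this, the file almost certainly contains per-record lengths or a checksum, and letting the application regenerate its settings is probably the safer route.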


  • How to use mencoder to speed up video ?

    - by timday
    There's a webcam streaming WMV video that I can play/record with mplayer/mencoder (Debian Lenny including Debian-multimedia) just fine. I'd like to capture the video such that, when played back, it's faster (so that, say, 100 s of real time zooms past in every 1 s of video). The "framestep" filter seems like it ought to do this, but when I try e.g. mencoder -o output.avi -ofps 25 -vf framestep=100 -ovc lavc -lavcopts vcodec=mpeg4:... <mms source URL> what I actually get is something which plays for the same length of time as it's allowed to capture for, but which only updates the displayed image every 4 seconds. Is there any way mencoder can be persuaded to do what I want? (By the way, ideally I'd like to average the frames together to obtain motion blur instead of just dropping/discarding most of them.)


  • Input not supported message when monitor is powered off then back on

    - by Jason Down
    I've been getting a message on my monitor where "Input not supported" floats around the screen. This only happens when I manually turn the monitor off and then later turn it back on. Leaving the monitor on and allowing it to go to the screen saver doesn't seem to cause the issue (but I prefer to turn the monitor off if I'm going to be away from the computer for any length of time). Any ideas what might cause this, only when the monitor is turned off manually? Specs: Acer X203w monitor; Radeon 9600 Pro video card; Linux Mint 8; resolution 1680 x 1050 (16:10, the monitor's preferred native resolution); refresh rate 60 Hz. Here is what is in my xorg.conf file: Section "Device" Identifier "Radeon 9600" Driver "ati" BusID "PCI:1:0:0" Option "XAANoOffscreenPixmaps" Option "AccelMethod" "XAA" EndSection Section "Screen" Identifier "Default Screen" Device "Radeon 9600" DefaultDepth 24 SubSection "Display" Depth 24 Modes "1680x1050" "1440x900" "1024x768" EndSubSection EndSection Section "DRI" Mode 0666 EndSection Section "Extensions" Option "Composite" "Enable" EndSection


  • Roughly cutting Fibre Cables

    - by Bryan
    I recently asked a question about which type of fibre optic cable to use for my installation. We have now found a cable type and supplier and I'm about to order the cable (OM3, Tight Buffered, 12 Core); however, I have a further question: We are shortly moving into a new building (still being constructed) and we are about to get a window in which we will be able to run any cabling that we need. How should I roughly cut the fibre? Can I just use some heavy shears to cut through the cable? I'm aware that doing so will damage the ends, but will the damage be limited to the ends, or could the cracks/fractures continue up the length of the cable? P.S. I have already allowed for several metres of extra cable at each end, and I am going to get a specialist in to terminate the fibres.


  • Extract number with regex

    - by Joey
    I have this string: HTTP/1.1 200 OK Date: Tue, 12 Nov 2013 15:26:17 GMT Server: Apache/2.2.3 (CentOS) Last-Modified: Fri, 08 Nov 2013 21:34:50 GMT ETag: "452//path/to/file" Accept-Ranges: bytes Content-Length: 26010 Connection: close Content-Type: text/plain; charset=UTF-8 I would like to extract 452, which comes after ETag and before //. What regex should I use? I am stuck. Thanks a lot
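    A short sketch of one way to pull the number out with a capturing group, anchored on the ETag header so a // elsewhere in the response cannot match; the response text below is just the example string from the question.

    ```python
    import re

    response = '''HTTP/1.1 200 OK
    Date: Tue, 12 Nov 2013 15:26:17 GMT
    Server: Apache/2.2.3 (CentOS)
    Last-Modified: Fri, 08 Nov 2013 21:34:50 GMT
    ETag: "452//path/to/file"
    Accept-Ranges: bytes
    Content-Length: 26010
    Connection: close
    Content-Type: text/plain; charset=UTF-8'''

    # Capture the digits between the opening quote of the ETag value and "//".
    match = re.search(r'ETag:\s*"(\d+)//', response)
    if match:
        print(match.group(1))  # -> 452
    ```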


  • how insecure is my short password really?

    - by rika-uehara
    Using systems like TrueCrypt, when I have to define a new password I am often informed that a short password is insecure and "very easy" to break by brute force. I always use passwords 8 characters in length, not based on dictionary words, consisting of characters from the set A-Z, a-z, 0-9, i.e. a password like sDvE98f1. How easy is it to crack such a password by brute force, i.e. how fast? I know it depends heavily on the hardware, but maybe someone could give me an estimate of how long it would take on, say, a 2 GHz dual core, just to have a frame of reference. To brute-force such a password one needs not only to cycle through all combinations but also to attempt a decryption with each guessed password, which also takes time. Also, is there software to brute-force TrueCrypt? I want to try to crack my own password to see how long it takes, if it really is that "very easy".
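    As a rough worked example (the guess rate is purely an assumption for illustration; real TrueCrypt cracking speed depends on the hardware and on its key-derivation cost), the keyspace for 8 characters drawn from A-Z, a-z, 0-9 and the time to exhaust it can be estimated like this:

    ```python
    # Rough keyspace estimate for an 8-character [A-Za-z0-9] password.
    alphabet = 26 + 26 + 10            # 62 possible characters per position
    length = 8
    keyspace = alphabet ** length      # 62**8 ≈ 2.18e14 combinations

    # Assumed guess rate; TrueCrypt's header key derivation makes real rates
    # far lower than a raw hash benchmark, so treat this as a placeholder.
    guesses_per_second = 1_000_000

    seconds = keyspace / guesses_per_second
    print(f"{keyspace:.3e} combinations")
    print(f"~{seconds / 86400 / 365:.1f} years at {guesses_per_second:,} guesses/s "
          "(half that on average)")
    ```

    At the assumed million guesses per second the full search takes on the order of seven years; at a billion guesses per second it drops to a couple of days, which is why the feasibility question really turns on how long each TrueCrypt decryption attempt takes.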


  • Apache on Win32: Slow Transfers of single, static files in HTTP, fast in HTTPS

    - by Michael Lackner
    I have a weird problem with Apache 2.2.15 on Windows 2000 Server SP4. Basically, I am trying to serve larger static files, images, videos etc. The download seems to be capped at around 550kB/s even over 100Mbit LAN. I tried other protocols (FTP/FTPS/FTP+ES/SCP/SMB), and they are all in the multi-megabyte range. The strangest thing is that, when using Apache with HTTPS instead of HTTP, it serves very fast, around 2.7MByte/s! I also tried the AnalogX SimpleWWW server just to test the plain HTTP speed of it, and it gave me a healthy 3.3Mbyte/s. I am at a total loss here. I searched the web, and tried to change the following Apache configuration directives in httpd.conf, one at a time, mostly to no avail at all: SendBufferSize 1048576 #(tried multiples of that too, up to 100Mbytes) EnableSendfile Off #(minor performance boost) EnableMMAP Off Win32DisableAcceptEx HostnameLookups Off #(default) I also tried to tune the following registry parameters, setting their values to 4194304 in decimal (they are REG_DWORD), and rebooting afterwards: HKLM\SYSTEM\CurrentControlSet\Services\AFD\Parameters\DefaultReceiveWindow HKLM\SYSTEM\CurrentControlSet\Services\AFD\Parameters\DefaultSendWindow Additionally, I tried to install mod_bw, which sets the event timer precision to 1ms, and allows for bandwidth throttling. According to some people it boosts static file serving performance when set to unlimited bandwidth for everybody. Unfortunately, it did nothing for me. So: AnalogX HTTP: 3300kB/s Gene6 FTPD, plain: 3500kB/s Gene6 FTPD, Implicit and Explicit SSL, AES256 Cipher: 1800-2000kB/s freeSSHD: 1100kB/s SMB shared folder: about 3000kB/s Apache HTTP, plain: 550kB/s Apache HTTPS: 2700kB/s Clients that were used in the bandwidth testing: Internet Explorer 8 (HTTP, HTTPS) Firefox 8 (HTTP, HTTPS) Chrome 13 (HTTP, HTTPS) Opera 11.60 (HTTP, HTTPS) wget under CygWin (HTTP, HTTPS) FileZilla (FTP, FTPS, FTP+ES, SFTP) Windows Explorer (SMB) Generally, transfer speeds are not too high, but that's because the server machine is an old quad Pentium Pro 200MHz machine with 2GB RAM. However, I would like Apache to serve at at least 2Mbyte/s instead of 550kB/s, and that already works with HTTPS easily, so I fail to see why plain HTTP is so crippled. I am using a Kerio Winroute Firewall, but no Throttling and no special filters peeking into HTTP traffic, just the plain Firewall functionality for blocking/allowing connections. The Apache error.log (Loglevel info) shows no warnings, no errors. Also nothing strange to be seen in access.log. I have already stripped down my httpd.conf to the bare minimum just to make sure nothing is interfering, but that didn't help either. If you have any idea, help would be greatly appreciated, since I am totally out of ideas! Thanks! Edit: I have now tried a newer Apache 2.2.21 to see if it makes any difference. However, the behaviour is exactly the same. Edit 2: KM01 has requested a sniff on the HTTP headers, so here comes the LiveHTTPHeaders output (an extension to Firefox). The Output is generated on downloading a single file called "elephantsdream_source.264", which is an H.264/AVC elementary video stream under an Open Source license. I have taken the freedom to edit the URL, removing folders and changing the actual servers domain name to www.mydomain.com. 
Here it is: LiveHTTPHeaders, Plain HTTP: http://www.mydomain.com/elephantsdream_source.264 GET /elephantsdream_source.264 HTTP/1.1 Host: www.mydomain.com User-Agent: Mozilla/5.0 (Windows NT 5.2; WOW64; rv:6.0.2) Gecko/20100101 Firefox/6.0.2 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3 Accept-Encoding: gzip, deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Connection: keep-alive HTTP/1.1 200 OK Date: Wed, 21 Dec 2011 20:55:16 GMT Server: Apache/2.2.21 (Win32) mod_ssl/2.2.21 OpenSSL/0.9.8r PHP/5.2.17 Last-Modified: Thu, 28 Oct 2010 20:20:09 GMT Etag: "c000000013fa5-29cf10e9-493b311889d3c" Accept-Ranges: bytes Content-Length: 701436137 Keep-Alive: timeout=15, max=100 Connection: Keep-Alive Content-Type: text/plain LiveHTTPHeaders, HTTPS: https://www.mydomain.com/elephantsdream_source.264 GET /elephantsdream_source.264 HTTP/1.1 Host: www.mydomain.com User-Agent: Mozilla/5.0 (Windows NT 5.2; WOW64; rv:6.0.2) Gecko/20100101 Firefox/6.0.2 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3 Accept-Encoding: gzip, deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Connection: keep-alive HTTP/1.1 200 OK Date: Wed, 21 Dec 2011 20:56:57 GMT Server: Apache/2.2.21 (Win32) mod_ssl/2.2.21 OpenSSL/0.9.8r PHP/5.2.17 Last-Modified: Thu, 28 Oct 2010 20:20:09 GMT Etag: "c000000013fa5-29cf10e9-493b311889d3c" Accept-Ranges: bytes Content-Length: 701436137 Keep-Alive: timeout=15, max=100 Connection: Keep-Alive Content-Type: text/plain
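    One way to take the browsers out of the picture is to time the same download over both schemes from a small script. A sketch, assuming Python is available on a client machine; it reuses the www.mydomain.com placeholder from the headers above, and certificate verification is disabled only because that host name is an edited stand-in.

    ```python
    import ssl
    import time
    import urllib.request

    URL_HTTP = "http://www.mydomain.com/elephantsdream_source.264"
    URL_HTTPS = "https://www.mydomain.com/elephantsdream_source.264"

    def unverified_context():
        # Only because the host name above is an edited placeholder.
        ctx = ssl.create_default_context()
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
        return ctx

    def measure(url, limit=50 * 1024 * 1024):
        """Download up to `limit` bytes and report the average throughput."""
        ctx = unverified_context() if url.startswith("https") else None
        start = time.time()
        received = 0
        with urllib.request.urlopen(url, context=ctx) as resp:
            while received < limit:
                chunk = resp.read(64 * 1024)
                if not chunk:
                    break
                received += len(chunk)
        elapsed = time.time() - start
        print(f"{url}: {received / elapsed / 1024:.0f} kB/s over {elapsed:.1f} s")

    measure(URL_HTTP)
    measure(URL_HTTPS)
    ```

    If plain HTTP still tops out around 550 kB/s here, the limit is clearly not in the browsers, which points back at Apache's Win32 network path (the EnableSendfile / Win32DisableAcceptEx family of settings already being tried) rather than at anything on the client side.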


  • Strange performance from RAID5 using WD RE4 disks

    - by Howard
    I've noticed a bit of a performance issue with some WD RE4 drives I'm using under AMD's hardware RAID solution. First a bit of background: Environment: Windows 7 home premium x64 HDD's: 3x 1TB WD Raid Edition 4 in a RAID 5 setup with 128 kbyte stripe (2TB usable space) Testing Tool: HD Tune, process set to "High Priority" Processor: AMD Phenom II x6 1100T Ram: 16GB DDR3/1600mhz Motherboard: MSI 970A-G45 The image below pretty much depicts the issue I'm having. Every test has the same thing, a period of similar length where the performance drops to a few megabytes a second. This can't be a TLER issue as the purpose of RE4's is to work around that. Any help would be greatly appreciated.


  • Preserving smart playlist order on iPhone

    - by Doug Harris
    I have a smart playlist configured to play my favorite podcasts during my commute. It's a list of audio-only, unheard podcasts: In iTunes, I've ordered the playlist by Release Date -- i.e. I want to listen to the oldest podcast episodes first. The order is correct in iTunes but when I sync to my iPhone the order is jumbled. Currently, I have five episodes in this playlist -- ordered by release date they're A,B,C,D,E. On my iPhone, they're ordered B,D,C,E,A. There's no sort order that's obvious to me -- not sorted by name of episode, name of podcast, length of episode, date added, etc. Any solutions to ordering smart playlists on the iPhone? Update: This is likely an iTunes 9 bug. There's lots of discussion about this over on apple.com


  • Is zip's encryption really bad?

    - by Nifle
    The standard advice for many years regarding compression and encryption has been that the encryption strength of zip is bad. Is this really the case in this day and age? I read this article about WinZip (which has had the same bad reputation). According to that article, the problem goes away provided you follow a few rules when choosing your password: at least 12 characters in length; random, not containing any dictionary words, common words or names; at least one upper-case character; at least one lower-case character; at least one numeric character; and at least one special character, e.g. $, £, *, %, &, !. This would result in roughly 475,920,314,814,253,000,000,000 possible combinations to brute-force. Please provide recent (say, past five years) links to back up your information.
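    The quoted figure is easy to sanity-check: 12 positions drawn from the roughly 94 printable ASCII characters gives 94^12 combinations, which matches the number in the article to the precision shown. A quick check (the 94-character alphabet is an assumption about what the article counted):

    ```python
    import string

    # Printable ASCII characters excluding whitespace: 26 + 26 + 10 + 32 = 94.
    alphabet = len(string.ascii_letters + string.digits + string.punctuation)
    length = 12

    combinations = alphabet ** length
    print(f"{alphabet}^{length} = {combinations:,}")
    # -> 94^12 = 475,920,314,814,253,376,475,136, i.e. ~4.76e23,
    #    which agrees with the rounded figure quoted above.
    ```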


  • Difference between tcp recv buffer and tcp receive window size?

    - by pradeepchhetri
    This command shows the TCP receive buffer size in bytes: $ cat /proc/sys/net/ipv4/tcp_rmem 4096 87380 4001344 where the three values signify the min, default and max values respectively. Then I tried to find the TCP window size using tcpdump: $ sudo tcpdump -n -i eth0 'tcp[tcpflags] & (tcp-syn|tcp-ack) == tcp-syn and port 80 and host google.com' tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes 16:15:41.465037 IP 172.16.31.141.51614 > 74.125.236.73.80: Flags [S], seq 3661804272, win 14600, options [mss 1460,sackOK,TS val 4452053 ecr 0,nop,wscale 6], length 0 I got the window size to be 14600, which is 10 times the MSS. Can anyone please tell me the relationship between the two?
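    For what it's worth, the two numbers answer different questions: tcp_rmem sizes the kernel memory a socket may use for queued data, while the win field in the SYN is what is currently being advertised to the peer. A small sketch for comparing them on a Linux box (the values printed depend on the local kernel, so they will not match the question's exactly):

    ```python
    import socket

    # Kernel-wide receive buffer settings: min, default and max, in bytes.
    with open("/proc/sys/net/ipv4/tcp_rmem") as f:
        rmem_min, rmem_default, rmem_max = map(int, f.read().split())
    print("tcp_rmem min/default/max:", rmem_min, rmem_default, rmem_max)

    # A fresh TCP socket starts out with the default as its receive buffer.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    print("SO_RCVBUF of a new socket:", s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
    s.close()

    # The buffer is the memory ceiling; the window advertised on the wire
    # (win 14600 in the SYN above) starts much smaller and is grown toward the
    # buffer limit as the connection runs, with the wscale option (6 above)
    # letting the 16-bit window field describe larger values.
    ```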


  • SSL Ajax type of certificate for the static domain (image + js)

    - by Alexl
    Hi, I have a page that is served over SSL and has a valid extended (EV) certificate (mainpage.com). But this page requests some static content from another domain (page-static.com), basically images and JS. Currently I only have a certificate for mainpage.com, so when I request this page I get an invalid SSL warning because the page contains content served by www.page-static.com. What kind of certificate do I need for www.page-static.com? Do I need the same kind as for mainpage.com? Those certificates are expensive (it's an extended certificate) - or will a cheap certificate from GoDaddy do the trick? A related question: do both certificates have to be signed by the same root provider and/or have the same encryption key length (or can it be only 128 bits)? Thanks for your help


  • VMware Workstation on Linux: Dropping core files in a shared folder...

    - by kclittle
    I'm using VMware 6.0.2 on a RHEL 4.6 host. The VMs are MontaVista CGE 5.0 (2.6.21 kernel). I'm trying to get applications running in the VMs to drop any core files on an HGFS volume, i.e. in a "shared folder". The core files get created as per the path and format given in /proc/sys/kernel/core_pattern, but they are always zero length. If I change the path to a local path (on a virtual disk in the VM), all is well. Any ideas what I have to do to get the core files written into a shared folder? Thanks for your help!


  • Squid/Kerberos authentication with only Linux

    - by user28362
    Hi, I would like to know if it is possible to let a Windows XP machine authenticate to Squid (Linux) using Kerberos without the need for an Active Directory domain. I only want to create a Kerberos ticket on the client side, which should give the client access to Squid (using IE). I only found tutorials about configuring A.D./Squid, not an environment with only Linux servers. Thanks Update: The Kerberos setup is done correctly; the proxy and client can get tickets. As for the browser (FF/IE), I get: ERROR Cache Access Denied While trying to retrieve the URL: http://www.google.com/ The following error was encountered: * Cache Access Denied. Sorry, you are not currently allowed to request: http://www.google.com/ from this cache until you have authenticated yourself. In Kerberos, I get: squid_kerb_auth: Got 'YR ElRNTVMTUABBAABAB4IIogAAAAAAAAAAAAAAAAAAAAAFASgDAAAADw==' from squid (length: 59). squid_kerb_auth: parseNegTokenInit failed with rc=101 squid_kerb_auth: received type 1 NTLM token This message is strange, as I didn't configure NTLM. It looks like the browser is using the wrong authentication method.


  • Dummy HTTP server for debugging

    - by Andrea
    This is more or less the inverse of my previous question. I need to debug some HTTP requests that I am making. Since these requests arise from the use of some external libraries, sometimes I am not sure of what is the actual data I am sending. Is there some dummy server (for Linux) that accepts HTTP requests and just prints them somewhere so that I can inspect them? I would like to be able to see in plain text the full request, like POST /foo HTTP/1.1 Host: www.example.com Accept: text/xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5 Accept-Language: en-gb,en;q=0.5 Content-Type: text/plain Content-Length: 11 Hello world
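    If installing nothing extra is preferred, Python's standard library is enough for a throwaway listener that just dumps whatever it receives. A minimal sketch (the port 8080 is arbitrary) that prints the request line, headers and body of every request and answers 200 so the client does not hang:

    ```python
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class DumpHandler(BaseHTTPRequestHandler):
        def _dump(self):
            # Print the request line and headers exactly as received.
            print(self.requestline)
            print(self.headers)
            length = int(self.headers.get("Content-Length", 0))
            if length:
                print(self.rfile.read(length).decode(errors="replace"))
            # Reply with an empty 200 so the client finishes cleanly.
            self.send_response(200)
            self.end_headers()

        # Handle the common methods the same way.
        do_GET = do_POST = do_PUT = do_DELETE = _dump

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), DumpHandler).serve_forever()
    ```

    Point the library at http://localhost:8080/ while this runs; if the traffic cannot be redirected, capturing it with tcpdump or Wireshark is the alternative.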


  • How to shuffle pieces of large video files?

    - by Gabriël
    Hi folks, for a party we want to show film fragments using a projector. We have around 50 full-length movie AVIs and would like to shuffle them into shorter pieces, so that roughly every 2-5 minutes another piece begins. For example, a sequence could be: Movie05 12m00s-14m30s, Movie08 55m50s-57m00s, Movie02 02m42s-06m40s, etc. So: random movies, at random positions within the movie, of random lengths. What kind of solution are we looking for? We were thinking of two scenarios: beforehand, use some kind of tool to cut all the AVIs into smaller pieces and put them on shuffle in a regular media player; or a sort of "VJ" program that can do this shuffle mixing in real time. Any suggestions for either of these scenarios? Thanks!
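    As a sketch of doing this without any actual cutting: a script can draw random (movie, start, length) triples and hand them to mplayer with -ss and -endpos, playing one slice after another. The folder path and the assumed runtime are placeholders, and it assumes mplayer can seek the AVIs reasonably accurately.

    ```python
    import random
    import subprocess
    from pathlib import Path

    MOVIE_DIR = Path("/home/party/movies")   # placeholder folder with the ~50 AVIs
    SKIP_EDGES = 600                          # avoid the first/last 10 minutes (credits)
    ASSUMED_RUNTIME = 90 * 60                 # rough guess instead of probing each file

    movies = sorted(MOVIE_DIR.glob("*.avi"))

    while True:
        movie = random.choice(movies)
        start = random.randint(SKIP_EDGES, ASSUMED_RUNTIME - SKIP_EDGES)
        length = random.randint(120, 300)     # 2-5 minutes per fragment
        print(f"{movie.name}: {start // 60}m{start % 60:02d}s for {length}s")
        # -ss seeks to the start position, -endpos limits how long is played,
        # -fs plays full-screen on the projector. If the seek lands past the end
        # of a shorter film, mplayer simply exits and the loop picks another piece.
        subprocess.run([
            "mplayer", "-really-quiet", "-fs",
            "-ss", str(start), "-endpos", str(length),
            str(movie),
        ])
    ```

    Probing the real runtime of each file first (midentify ships with mplayer) would avoid the hard-coded runtime guess.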


  • Apache Prepending Header Information to ALL FILES

    - by Michael Robinson
    We're in the middle of setting up new servers, and have been having some odd problems with Apache. Apache is prepending text that looks like this to all files: $15plðI‚‚?E?ðA™@?@??yeÔ|~Ÿ²?PγZ" zS€?8i³?? ,ÀŠ{ÿBHTTP/1.1 200 OK Date: Mon, 02 Feb 2009 22:28:05 GMT Server: Apache/2.2.3 (CentOS) Last-Modified: Mon, 02 Feb 2009 22:28:05 GMT ETag: W/"1238007d-2224e-fe617f40" Accept-Ranges: bytes Content-Length: 139854 Connection: close Content-Type: application/x-javascript The file I copied the above text from is the Prototype library JS file, as loaded from our server. I've searched, but couldn't find much about this problem. Maybe I don't know what I'm searching for... Anyway, if anyone has seen this behaviour before, could they please let me know either 1) how to fix it so that this content is not prepended to all files, or 2) where to look for further help. Thanks

