Search Results

Search found 1246 results on 50 pages for 'compression'.

Page 40/50

  • IIS7 Session ID rotating with Classic ASP

    - by ManiacZX
    I am trying to migrate a Classic ASP app onto a Windows 2008 R2 server. The application's features run fine, but I am having an issue with sessions. The application keeps the logged-in user information in session, and I am constantly getting knocked out as if the session had expired. While debugging I have discovered that the sessions are not expiring; instead, I am getting 2-3 different Session IDs in use by one browser. I am outputting Response.Write(Session.SessionID) on various pages in the application, and I can sit there and hit refresh over and over and watch the number change randomly between these 2-3 SessionIDs. The sessions are still valid, because when I refresh and get the Session ID that I logged in under, the page is displayed (because the security check was successful), and when I get one of the other Session IDs I get the "you aren't logged in, you need to log in" message. If I close and re-open the browser, same story, just with a new set of IDs. This happens with IE8, Firefox and Chrome from multiple computers. Things I've tried: - AppPool set to No Managed Code and Classic - Output Caching set .asp to never cache - ASP Session Properties: enabled and disabled ASP session state and confirmed it affected the page (error trying to read Session.SessionID when disabled) Things I've tried just in case, though they shouldn't have anything to do with ASP sessions: - Disabled compression - Changed ASP.Net Session State properties (InProc, StateServer, SQLServer, Cookies, URI, etc) -
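    One hypothesis consistent with these symptoms (an assumption, not something stated in the question): Classic ASP keeps session state inside the worker process, so an application pool running as a web garden (more than one worker process) hands out a separate set of session IDs per process, and successive requests bounce between them. A sketch of how to check and revert that setting with appcmd; "MyAppPool" is a placeholder for the real pool name.

      rem Show how many worker processes the pool may spawn (more than 1 = web garden)
      %windir%\system32\inetsrv\appcmd.exe list apppool "MyAppPool" /text:processModel.maxProcesses

      rem Classic ASP sessions are per-process, so a single worker process avoids the rotation
      %windir%\system32\inetsrv\appcmd.exe set apppool "MyAppPool" /processModel.maxProcesses:1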

    Read the article

  • Windows 7 - "A disk read error occurred. Press Ctrl + Alt + Del to restart"

    - by Senthil
    Problem: When I switch on my PC, after the BIOS POST, a cursor blinks for about 5 seconds and then I get this error message: A disk read error occurred. Press Ctrl + Alt + Del to restart. I am able to go into the BIOS, but the Windows loader doesn't even start. This message is shown after my motherboard logo comes and goes. Symptoms: I DID notice my system freezing for minutes at a time over the past two days. Also, in the past two days, it stopped halfway through the Windows boot process. I had to do a hard reset a couple of times to get it working. But since this morning, I only get this error message. Configuration: Operating System: Windows 7 Ultimate 32-bit only. Hard disk: 1 Physical Disk - 80GB SATA Partitions: Two (2) - C: and D: File System: NTFS No drive encryption or compression is turned on. After searching the net, I have found people mentioning these possible causes: Hard Disk is physically failing Corrupt MBR Bad Sector I am planning to buy a new hard disk, install Windows on it and continue. But I need data from the old hard disk. The data I want is in the D: drive, outside any Windows user folder, and is not encrypted, compressed or protected in any way. I think if someone/something can get the disk working again and knows NTFS, the data can hopefully be read. What steps should I follow to recover files from the defective disk? Update: I bought a new disk, installed Windows on it and added the defective one as a slave. Then I was able to read the data from the defective hard disk. Though chkdsk found lots of errors, the files I wanted were not affected and I got them back :) I am not using that hard disk anymore, though it seems to be working at the moment.

    Read the article

  • Why is uploading to S3 so slow?

    - by Tom Marthenal
    I am using s3cmd to upload to S3: # s3cmd put 1gb.bin s3://my-bucket/1gb.bin 1gb.bin -> s3://my-bucket/1gb.bin [1 of 1] 366706688 of 1073741824 34% in 371s 963.22 kB/s I am uploading from Linode, which has an outgoing bandwidth cap of 50 Mb/s according to support (roughly 6 MB/s). Why am I getting such slow upload speeds to S3, and how can I improve them? Update: Uploading the same file via SCP to an m1.medium EC2 instance (SCP from my Linode to the instance's EBS drive) gives about 44 Mb/s according to iftop (any compression done by the cipher is not a factor). Traceroute: Here's a traceroute to the server it's uploading to (according to tcpdump). # traceroute s3-1-w.amazonaws.com. traceroute to s3-1-w.amazonaws.com. (72.21.194.32), 30 hops max, 60 byte packets 1 207.99.1.13 (207.99.1.13) 0.635 ms 0.743 ms 0.723 ms 2 207.99.53.41 (207.99.53.41) 0.683 ms 0.865 ms 0.915 ms 3 vlan801.tbr1.mmu.nac.net (209.123.10.9) 0.397 ms 0.541 ms 0.527 ms 4 0.e1-1.tbr1.tl9.nac.net (209.123.10.102) 1.400 ms 1.481 ms 1.508 ms 5 0.gi-0-0-0.pr1.tl9.nac.net (209.123.11.62) 1.602 ms 1.677 ms 1.699 ms 6 equinix02-iad2.amazon.com (206.223.115.35) 9.393 ms 8.925 ms 8.900 ms 7 72.21.220.41 (72.21.220.41) 32.610 ms 9.812 ms 9.789 ms 8 72.21.222.141 (72.21.222.141) 9.519 ms 9.439 ms 9.443 ms 9 72.21.218.3 (72.21.218.3) 10.245 ms 10.202 ms 10.154 ms 10 * * * 11 * * * 12 * * * 13 * * * 14 * * * 15 * * * 16 * * * 17 * * * 18 * * * 19 * * * 20 * * * 21 * * * 22 * * * 23 * * * 24 * * * 25 * * * 26 * * * 27 * * * 28 * * * 29 * * * 30 * * * The latency looks reasonable, at least until the server stopped responding to ping requests.
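    One thing worth trying that is not covered in the question: 963 kB/s from a single TCP stream usually points at per-connection limits (window size, loss, or throttling at the far end) rather than the link itself, so checking whether several concurrent streams add up to more bandwidth narrows it down quickly. A sketch; the --multipart-chunk-size-mb option assumes an s3cmd release (1.1 or later) with multipart upload support.

      # Multipart upload in 15 MB parts (assumes s3cmd >= 1.1)
      s3cmd put --multipart-chunk-size-mb=15 1gb.bin s3://my-bucket/1gb.bin

      # Quick parallelism test: upload several pieces at once and compare the aggregate rate
      split -b 256m 1gb.bin part_
      for p in part_*; do s3cmd put "$p" "s3://my-bucket/$p" & done; wait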

    Read the article

  • Dual Monitor support rdp 7 to win 7 on esxi

    - by rphilli5
    I am trying to RDP from a Windows 7 Professional dual-monitor physical machine to a Windows 7 Professional VM hosted on ESXi 4.0. I can get the spanning option to work across both monitors, but despite trying three different methods of connecting I have not been able to use true multiple monitors. At different times, I tried checking the "use all monitors" option, the command line mstsc /multimon, and adding the line use multimon:i:1 to the .rdp file. None of these worked. Any ideas? The physical machine can connect to other Windows 7 physical machines with true multi-monitor access. I also have the same issue when going from a 32-bit RC1 machine to a Windows 7 Professional x64 machine, but not when going in the reverse direction. Here's the .rdp: screen mode id:i:2 use multimon:i:1 desktopwidth:i:1440 desktopheight:i:900 session bpp:i:16 winposstr:s:0,1,341,118,1139,568 compression:i:1 keyboardhook:i:2 audiocapturemode:i:0 videoplaybackmode:i:1 connection type:i:1 displayconnectionbar:i:1 disable wallpaper:i:1 allow font smoothing:i:0 allow desktop composition:i:0 disable full window drag:i:1 disable menu anims:i:1 disable themes:i:1 disable cursor setting:i:0 bitmapcachepersistenable:i:1 full address:s:192.168.1.5 audiomode:i:0 redirectprinters:i:1 redirectcomports:i:0 redirectsmartcards:i:1 redirectclipboard:i:1 redirectposdevices:i:0 redirectdirectx:i:1 autoreconnection enabled:i:1 authentication level:i:2 prompt for credentials:i:0 negotiate security layer:i:1 remoteapplicationmode:i:0 alternate shell:s: shell working directory:s: gatewayhostname:s: gatewayusagemethod:i:4 gatewaycredentialssource:i:4 gatewayprofileusagemethod:i:0 promptcredentialonce:i:1 use redirection server name:i:0 drivestoredirect:s:

    Read the article

  • Real-time local backup with versioning on Windows 7/8

    - by Borda
    I'm looking for a reliable backup solution on Windows, something with a feature set similar to Yadis. I've been using CrashPlan for 2 months, but their software lost more than 1TB of my data, which is why I'm looking for alternatives. Requirements: Real-time folder-to-folder backup: I don't need online features, I want to use this to duplicate my files between my local disks. Versioning support: Should be able to choose how many versions to keep of the files. Plain backup: I'd like to be able to open the backup without special software. No proprietary file format. External disk support: Shouldn't have any problem using external disks, at least on the source side. Backup every file, even locked/system files. (Yadis fails this one) Should start automatically, and have a comfortable GUI. Should use actively maintained and/or popular software. I don't want to use discontinued products. Optional requirements: Low RAM usage: I'm not really comfortable with something that eats 1GB of my RAM. Compression support: Preferably ZIP, but I'm not picky about this. Any ordinary everyday format is acceptable. Freeware or Open Source: Preferred, but not necessary. I can do a one-time payment within reasonable bounds (preferably under $100). I only need Windows support, but if it works on Linux, that's a plus. I've already searched and tried lots of software; most of them failed at the plain backup, the versioning or the locked-file backup requirements.

    Read the article

  • Forward all traffic through an ssh tunnel

    - by Eamorr
    I hope someone can follow this; I'll explain as best I can. I'm trying to forward all traffic from port 6999 on x.x.x.224, through an ssh tunnel, and onto port 7000 on x.x.x.218. Here is some ASCII art: |browser|-----|Squid on x.x.x.224|------|ssh tunnel|------<satellite link>-----|Squid on x.x.x.218|-----|www| 3128 6999 7000 80 When I remove the ssh tunnel, everything works fine. The idea is to turn off encryption on the ssh tunnel (to save bandwidth) and turn on maximum compression (to save more bandwidth), because it's a satellite link. Here's the ssh tunnel I've been using: ssh -C -f -C -o CompressionLevel=9 -o Cipher=none [email protected] -L 7000:172.16.1.224:6999 -N The trouble is, I don't know how to get data from Squid on x.x.x.224 into the ssh tunnel. Am I going about this the wrong way? Should I create the ssh tunnel on x.x.x.218 instead? I use iptables to stop Squid on x.x.x.224 from reading port 80, and to feed from port 6999 instead (i.e. via the ssh tunnel). Do I need another iptables rule? Any comments greatly appreciated. Many thanks in advance,
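    A sketch of one way to wire this up (not taken from the question): open the forward from x.x.x.224 so that a local port there leads to Squid on x.x.x.218, then point the local Squid at that port as its parent proxy. The cache_peer and never_direct lines are assumptions about the squid.conf on x.x.x.224, and the cipher is left alone here because stock OpenSSH generally refuses Cipher=none.

      # Run on x.x.x.224: local port 6999 is carried over the (compressed) tunnel to x.x.x.218:7000
      ssh -C -o CompressionLevel=9 -f -N -L 6999:127.0.0.1:7000 [email protected]

      # squid.conf on x.x.x.224 (sketch): hand every request to the tunnel endpoint instead of going direct
      #   cache_peer 127.0.0.1 parent 6999 0 no-query default
      #   never_direct allow all

    With Squid using the tunnel endpoint as its parent, no iptables redirect from port 80 should be needed.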

    Read the article

  • Server Directory Not Accessible

    - by GusDeCooL
    I am seeing strange behaviour on the live server that does not happen on my local server. My local server runs on a Mac, and my live server is Linux. Consider what happens when I try to access these files: http://redddor.babonmultimedia.com/assets/images/map-1.jpg This works correctly. http://redddor.babonmultimedia.com/assets/modules/evogallery/check.php Returns 404. I'm pretty sure the file is there and there is no typo. How come it gives me a 404? There is only one .htaccess on the root of the server, and its configuration is like this. # For full documentation and other suggested options, please see # http://svn.modxcms.com/docs/display/MODx096/Friendly+URL+Solutions # including for unexpected logouts in multi-server/cloud environments # and especially for the first three commented out rules #php_flag register_globals Off #AddDefaultCharset utf-8 #php_value date.timezone Europe/Moscow Options +FollowSymlinks RewriteEngine On RewriteBase / <IfModule mod_security.c> SecFilterEngine Off </IfModule> # Fix Apache internal dummy connections from breaking [(site_url)] cache RewriteCond %{HTTP_USER_AGENT} ^.*internal\ dummy\ connection.*$ [NC] RewriteRule .* - [F,L] # Rewrite domain.com -> www.domain.com -- used with SEO Strict URLs plugin #RewriteCond %{HTTP_HOST} . #RewriteCond %{HTTP_HOST} !^www\.example\.com [NC] #RewriteRule (.*) http://www.example.com/$1 [R=301,L] # Exclude /assets and /manager directories and images from rewrite rules RewriteRule ^(manager|assets)/*$ - [L] RewriteRule \.(jpg|jpeg|png|gif|ico)$ - [L] # For Friendly URLs RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule ^(.*)$ index.php?q=$1 [L,QSA] # Reduce server overhead by enabling output compression if supported. #php_flag zlib.output_compression On #php_value zlib.output_compression_level 5
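    The most common cause of "works on a Mac, 404 on Linux" is path case: the Mac's default filesystem is case-insensitive while Linux is case-sensitive, so a URL whose case does not exactly match the path on disk will 404 even though the file "is there". A quick check to run on the live server (a sketch, not from the question):

      # Does the path exist with exactly this case?
      ls -l assets/modules/evogallery/check.php

      # Find the file regardless of case, to spot a mismatch such as Check.php or EvoGallery
      find assets -iname 'check.php'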

    Read the article

  • Can OpenVPN invoke DHCP Client?

    - by Ency
    I have got a working VPN connection through OpenVPN, but I would like to use my own DHCP server rather than OpenVPN's push feature. Currently everything works fine, but I have to start the DHCP client manually, e.g. dhclient tap0, and I then get an IP and the other important stuff from my DHCP server. Is there any directive which starts the DHCP client when the connection is established? Here is my client's config: remote there.is.server.com float dev tap tls-client #pull port 1194 proto tcp-client persist-tun dev tap0 #ifconfig 192.168.69.201 255.255.255.0 #route-up "dhclient tap0" #dhcp-renew ifconfig 0.0.0.0 255.255.255.0 ifconfig-noexec ifconfig-nowarn ca /etc/openvpn/ca.crt cert /etc/openvpn/encyNtb_openvpn_client.crt key /etc/openvpn/encyNtb_openvpn_client.key dh /etc/openvpn/dh-openvpn.dh ping 10 ping-restart 120 comp-lzo verb 5 log-append /var/log/openvpn.log Here comes the server's config: mode server tls-server dev tap0 local servers.ip.here port 1194 proto tcp-server server-bridge # Allow comunication between clients client-to-client # Allowing duplicate users per one certificate duplicate-cn # CA Certificate, VPN Server Certificate, key, DH and Revocation list ca /etc/ssl/CA/certs/ca.crt cert /etc/ssl/CA/certs/openvpn_server.crt key /etc/ssl/CA/private/openvpn_server.key dh /etc/ssl/CA/dh/dh-openvpn.dh crl-verify /etc/ssl/CA/crl.pem # When no response is recieved within 120seconds, client is disconected keepalive 10 60 persist-tun persist-key user openvpn group openvpn # Log and Connected clients file log-append /var/log/openvpn verb 3 status /var/run/openvpn/vpn.status 10 # Compression comp-lzo #Push data to client push "route-gateway 192.168.69.1" push "redirect-gateway def1"
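    OpenVPN can run an external command once the tap interface is up via the up directive; running scripts requires script-security 2. A sketch, not taken from the question; the dhclient path and script locations are assumptions.

      # Additions to the client config (sketch)
      script-security 2
      up /etc/openvpn/dhclient-up.sh
      down /etc/openvpn/dhclient-down.sh

      # /etc/openvpn/dhclient-up.sh
      #!/bin/sh
      # OpenVPN exports the interface name in $dev (tap0 here)
      exec /sbin/dhclient "$dev"

      # /etc/openvpn/dhclient-down.sh
      #!/bin/sh
      exec /sbin/dhclient -r "$dev"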

    Read the article

  • Running git-svn with cron results in garbage in .git

    - by Paul
    I've set up a git-svn repo, with cron fetching from the svn repo daily. I have a script to do the fetching, and this is what is invoked by cron. Everything is fine with the repo, and the script works fine when executed manually. However, when it runs under cron, empty files get dropped into the .git directory. The files have names that look like they are some base64 output, e.g. juTrvjP6m8 and kcKf3hu3b4. Two of these files show up for every cron run. I thought these might be commit hashes, but they're not; git-show says it's an unknown revision. I set up the repo as follows: git svn init http://svn.ip.addr/repo git svn fetch svn-remote My script looks like this: cd /gitsvn/dir git svn fetch svn-remote git svn push pub The last line pushes the repo to a separate (bare) public repo from which others can clone. I'm piping the output from the cron job to a file, which looks like this: fatal: unable to run 'git-svn' Counting objects: 21, done. Delta compression using up to 2 threads. Compressing objects: 100% (10/10), done. Writing objects: 100% (11/11), 59.08 KiB, done. Total 11 (delta 8), reused 0 (delta 0) To /gitpub/repo.git 360faf5..a153b0d trunk -> trunk The line "fatal: unable to run 'git-svn'" is alarming, but the fetch seems to go ahead anyway. Any suggestions? Where are these empty garbage files coming from, and how do I stop them? Am I in for bigger problems in the future? BTW, I'm using git 1.6.3.3.
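    "fatal: unable to run 'git-svn'" under cron but not from an interactive shell usually points at cron's minimal environment: git itself is found, but the git-svn helper (or the perl/svn pieces it needs) is not on cron's PATH, and zero-length leftovers in .git are consistent with the helper dying partway through. A sketch; the paths are assumptions and should be adjusted to wherever git and the script actually live.

      # In the crontab, give the job the PATH your login shell has
      PATH=/usr/local/bin:/usr/bin:/bin
      0 3 * * * /home/me/bin/git-svn-sync.sh >> /home/me/git-svn-sync.log 2>&1

      # Or export it at the top of the script, before any git commands run
      export PATH=/usr/local/bin:/usr/bin:/bin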

    Read the article

  • Lotus Domino - DAOS not reducing file size?

    - by SydxPages
    I have implemented DAOS on a Lotus Domino Server (8.5.3 FP2) as follows: Lotus Domino Server Document: Store file attachments in DAOS: Enabled Minimum size of object before Domino will store in DAOS: 64000 bytes DAOS base path: E:\DAOS Defer object deletion for: 30 days Transaction logging is running, and the specific test database has the following advanced properties set: Domino Attachment and Object Service (ticked) Use LZ1 compression for attachments Compress Database Design Compress Data I have restarted the server. When I run a compact -c, it compacts the database, but does not reduce the size. I have checked the DB in Windows Explorer (60Gb) and the size is the same pre and post. I have checked the directory (E:\DAOS) and it is 35Gb in size. When I run the command 'Tell DAOSMgr Status tmp\test.nsf', I get the following response. From looking this up on the net, I believe ticket count = 0 means that the db is not really DAOS'ed? Admin Process: Searching Administration Requests database DAOSMGR: Status tmptest.nsf started DAOS database status: Database: E:\Lotus\Domino\Data\tmp\test.nsf Database state = Synchronized Last resynchronized: 03/09/2012 02:49:13 PM Ticket count: 0 DAOSMGR: Status tmp\test.nsf completed I have run fixup on the database. When I have tried to run the DAOS estimator it has always crashed. This was a problem with larger databases on earlier versions of Domino, but not anymore. Can anyone tell me why the size has not reduced? Am I missing anything?
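    A ticket count of zero means no attachments from this database have actually been moved into the DAOS repository yet, which would explain why compact -c gains nothing. Enabling the database property alone is not always enough; existing attachments are normally pushed out by a copy-style compact that explicitly requests DAOS. A sketch of the console commands, to be checked against the 8.5.3 documentation:

      tell daosmgr status
      load compact tmp\test.nsf -c -DAOS ON
      tell daosmgr status tmp\test.nsf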

    Read the article

  • Slow performance on VMWare Linux server after Tomcat install

    - by Loftx
    We have a VMWare ESXi 4.1 server hosting a number of Linux and Windows guests. Recently a new Linux guest was added to this server and seemed to be performing well. Tomcat and some other applications were then installed on this guest, which seems to have caused the server to run really slowly without any obvious resource issues. Examples of the slow performance: the password prompt over ssh now takes a few seconds to appear when it was previously instantaneous; unzipping a zip file which previously took a few seconds now takes around 30 seconds; the time taken to compile VMware Tools has increased by a similar factor. Neither the VMWare console nor the monitoring commands report any issues with high CPU or memory usage, but something is obviously slowing the server down somehow. Does anyone have any ideas what may be causing this issue and how it can be resolved? Thanks, Tom Edit: As per your questions I’ve looked at some of the performance indicators on both the VM host and the VM guest. Firstly I tried reserving the full amount of memory (3gb) for this VM – no other machines on this server have any memory reservation. The swap in rate and swap out rate for the VM host and guest are now both zero. Balloon memory on the guest is zero and on the host is 3.5gb (total memory on the host is 12gb). The swap rate for the guest is also zero. Swap used by the host is 200mb on average. Compression and decompression rates for the host and guest are zero. Command aborts for the host are zero. Read latency is very low – maximum 10ms, average 0.8ms. Write latency is higher – a few spikes to 170ms but mostly around 25ms – is this bad? Queue command latency is zero. Physical disk read latency averages 5ms but is often 10ms. Physical disk write latency averages 15ms but is often 20ms. I hope this helps - let me know if you need any more information.

    Read the article

  • Powershell Copy-Item fails silently

    - by R W
    I have a PowerShell 2.0 script running on Windows Server 2008 R2 64-bit that copies some Hyper-V .vhd files to another server as a 'backup solution'. The script gets a list of the .vhd's to copy, then iterates over that list to copy them using Copy-Item. It also writes some logging info to a file. The files are copied to another server (Windows Server 2003 SP2) into a directory compressed with NTFS compression. One of the files isn't copied. It's relatively big, ~68Gb; the others are 20Gb or less. The weird thing is that during the copy process the file appears on the destination server, and the log file generated seems to indicate the file was copied, judging by the difference in the times of the log file entries. I see no error messages in the log file and nothing in the event log of either machine. Here's the code that does the copy. Get-ChildItem $VMSource *.vhd -Recurse | foreach-object { $time = Get-Date -format HH.mm.ss Add-Content $logFileName "$time : File Copy ($_) started" $fullname = $_.FullName Add-Content $logFileName "$time : Copying $fullname to $VMDestination" Copy-Item $fullname $VMDestination -Force -ErrorAction SilentlyContinue -ErrorVariable errors foreach($error in $errors) { if ($error.Exception -ne $null) { Add-Content $logFileName "'tERROR COPYING FILE : $($error.Exception)" } } $time = Get-Date -format HH.mm.ss Add-Content $logFileName "$time : File Copy ($_) finished" } I can only think there's some problem with copying a file that big to a compressed directory. Any ideas?
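    One known trouble spot (an assumption here, not something confirmed in the question) is writing very large files into an NTFS-compressed folder on Server 2003, which can fail with "insufficient system resources" once a file reaches tens of gigabytes; -ErrorAction SilentlyContinue would then hide whatever went wrong. As a cross-check, a sketch using robocopy with unbuffered I/O and its own log, so any error on the 68Gb file is reported explicitly; the paths are placeholders and the /J switch assumes the Server 2008 R2 build of robocopy.

      rem /J = unbuffered I/O for very large files; /LOG+ appends per-file results, including error codes
      robocopy D:\VMs \\backupserver\vhd-backup *.vhd /J /R:2 /W:10 /NP /LOG+:C:\Logs\vhdcopy.log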

    Read the article

  • Is this a File Header / Magic Number?

    - by Hammer Bro.
    I've got 120,000 files (way more, actually; this is just an arbitrary subset) of an unknown type. Linux file does not identify them (not that they're necessarily Linux files), nor do any other methods I've tried. There are only two hints about them that I currently have. One is that I suspect some compression is employed -- I have metadata that claims the file sizes are always some amount larger than what I observe. The other is that in 100,000 of these files, the first 16 bytes are always: ff ee ee dd 00 00 00 00 01 00 00 00 00 00 00 00 That really looks like a file header/magic number to me, but I just can't place it. Does anyone know what kind of files this would indicate? Alternatively, can anyone convince me that these suspiciously common bytes certainly do not indicate a specific file type? UPDATE I don't know the exact reverse-engineering details, but most of the files in our case are zips after the first 29(? or so) bytes are ignored. So in practice the problem is solved (we know how to process the files) but in theory the question is still unanswered -- I don't know which application routinely prepends about 29 bytes to its zips. [I'm not sure if I should leave the question open or not at this point.]
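    Given the update that the payload is a zip once the first 29 or so bytes are skipped, a quick way to confirm the offset on any one file and to test-extract it (a sketch; the file name and the exact 29-byte prefix length are assumptions):

      # Byte offset of the first zip local-file-header signature (PK\x03\x04)
      grep -obUaP 'PK\x03\x04' sample.bin | head -n 1

      # Strip the 29-byte prefix (tail counts from byte 30) and list the archive
      tail -c +30 sample.bin > sample.zip
      unzip -l sample.zip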

    Read the article

  • Slow network file transfer (under 20KB/s) on newly built x64 Win7

    - by Mangoshake
    I am getting <20KB/s for local network file transfers. If I transfer a very small file (less than 100KB) it starts quickly, then slows down to <20KB/s; all subsequent network file transfers are then slow, and a reboot is needed to reset this. If I transfer a large file it is stuck on calculating for a long time and then begins at <20KB/s immediately. This is a newly built desktop running Windows 7 x64 SP1. Realtek gigabit LAN from the motherboard (ASRock Extreme3 gen3). The problematic speed is observed on the private LAN, both through ethernet and WiFi. The router is a D-Link DIR-655. Remote Differential Compression is off. Drivers are up to date from ASRock's website. I have tested network file transfer to and from another Windows 7 laptop and a MacBook Pro, so I am fairly certain it is the desktop's problem. The slow speed also only happens in one direction, outbound from the desktop, regardless of whether I initiate the file transfer from the origin or the destination. Inbound network file transfer and internet speeds are fine, so I don't think this is a hardware issue. I am getting 74.8MB/s internet upload speed from speedtest.net (http://www.speedtest.net/result/1852752479.png). For inbound network file transfer I can get around 10-15MB/s. I am hoping this community has some insight for me to troubleshoot this. I don't see anything obviously related in the Event Viewer, and beyond that I just don't know where else to look. Any suggestions are greatly appreciated, thank you in advance.
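    One way to separate the NIC/driver from Windows file sharing (a suggestion, not from the question) is to measure raw TCP throughput with a tool such as iperf: if iperf from the desktop to the laptop is also slow outbound, the problem sits with the adapter, driver or offload settings rather than with SMB.

      # On the receiving machine (the laptop)
      iperf -s

      # On the desktop, send for 30 seconds and report the raw TCP rate
      iperf -c <laptop-ip> -t 30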

    Read the article

  • performance wise htaccess

    - by purpler
    Here's my .htaccess template; I wonder if anything could be added to increase website performance. # Defaults AddDefaultCharset UTF-8 DefaultLanguage en-US ServerSignature Off FileETag None Header unset ETag Options -MultiViews #Options All -Indexes # Force the latest IE version or ChromeFrame <IfModule mod_setenvif.c> <IfModule mod_headers.c> BrowserMatch MSIE ie Header set X-UA-Compatible "IE=Edge,chrome=1" env=ie </IfModule> </IfModule> # Proxy X-UA Setup <IfModule mod_headers.c> Header append Vary User-Agent </IfModule> #Rewrites Options +FollowSymlinks RewriteEngine On RewriteBase / # Redirect to non-WWW RewriteCond %{HTTPS} !=on RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC] RewriteRule ^(.*)$ http://%1/$1 [R=301,L] # Redirect to WWW RewriteCond %{HTTP_HOST} ^domain.com RewriteRule (.*) http://www.domain.com/$1 [R=301,L] # Redirect index to root RewriteRule ^(.*)index\.(php|html)$ /$1 [R=301,L] # Caching ExpiresActive On ExpiresDefault A0 Header set Cache-Control "public" # 1 Year Long Cache <FilesMatch "\.(flv|fla|ico|pdf|avi|mov|ppt|doc|mp3|wmv|wav|png|jpg|jpeg|gif|swf|js|css|ttf|eot|woff|svg|svgz)$"> ExpiresDefault A31622400 </FilesMatch> # Proxy Caching <FilesMatch "\.(css|js|png)$"> ExpiresDefault A31622400 Header set Cache-Control "private" </FilesMatch> # Protect against DOS attacks by limiting file upload size LimitRequestBody 10240000 # Proper SVG serving AddType image/svg+xml svg svgz AddEncoding gzip svgz # GZip Compression <IfModule mod_deflate.c> <FilesMatch "\.(php|html|css|js|xml|txt|ttf|otf|eot|svg)$" > SetOutputFilter DEFLATE </FilesMatch> </IfModule> # Error page ErrorDocument 404 /404.html # Deny access to sensitive files <FilesMatch "\.(htaccess|ini|log|psd)$"> Order Allow,Deny Deny from all </FilesMatch>

    Read the article

  • Codec Problems with trying to edit videos with VirtualDub

    - by Roy Rico
    So, I'm a little frustrated. According to this post and various other internet sources, VirtualDub is supposed to allow users to quickly split and join video files. I am using Windows 7 64-bit and the latest version of VirtualDub (64-bit). I have tried to edit various movie files, and none of my attempts have worked. AVI file A.avi won't load, saying that it can't locate the decompressor for the "FMP4" format. I have tried this solution and this one, and neither of them works. I have tried setting the VFW Decompressor for the 'Other MPEG4' setting to XVID or LIBAVCODEC; there is no change in VirtualDub. AVI file C.avi will load in VirtualDub, but any attempt to split it gives me an error that I don't have XVID codecs installed. I've attempted to install the proper codecs (Shark's Windows 7 Codecs, CCCP) with no change. AVI file C.avi will load, and it will split, but won't split using "Direct Stream Copy", claiming the compression algorithm is incompatible. I tried the "Fast Recompress" option and it created a 27GB file out of what was supposed to be about a 300-400MB file. Can someone please give me some insight into what I'm messing up?

    Read the article

  • Git fails to push with error 'out of memory'

    - by jwir3
    I'm using gitosis on a server that has a low amount of memory, specifically around 512 MB. When I try to push a large folder (happens to be a backup from an android phone), I get: me@corellia:~/Configs/$ git push origin master Counting objects: 18, done. Delta compression using up to 8 threads. Compressing objects: 100% (14/14), done. fatal: Out of memory, malloc failed MiB | 685 KiB/s error: pack-objects died of signal 13 error: failed to push some refs to 'git@dagobah:Configs' I've been searching the web, and notably found: http://www.mail-archive.com/[email protected]/msg01747.html as well as http://git.661346.n2.nabble.com/Out-of-memory-error-during-git-push-td5443705.html but these don't seem to help me for two reasons: 1) I am not actually out of memory when I push. When I run 'top' during the push, I get: 24262 git 18 0 16204 6084 1096 S 2 1.2 0:00.12 git-unpack-obje Also, during the push if I run head /proc/meminfo, I get: MemTotal: 524288 kB MemFree: 289408 kB Buffers: 0 kB Cached: 0 kB SwapCached: 0 kB Active: 0 kB Inactive: 0 kB HighTotal: 0 kB HighFree: 0 kB LowTotal: 524288 kB So, it seems that I have enough memory free, but it's actually still failing, and I'm not enough of a git guru to figure out what is happening. I would appreciate it if someone could give me a hand here and tell me what could be causing this problem, and what I can do to solve it. Thanks! EDIT: The output of running the ulimit -a command: scottj@dagobah:~$ ulimit -a core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 204800 max locked memory (kbytes, -l) 32 max memory size (kbytes, -m) unlimited open files (-n) 1024 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 10240 cpu time (seconds, -t) unlimited max user processes (-u) 204800 virtual memory (kbytes, -v) unlimited file locks (-x) unlimited
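    The usual workaround on a small-memory server (a sketch, not taken from the question) is to cap how much memory git's pack/delta machinery may use on the receiving side, so that the pack step triggered by the push stays well inside 512 MB:

      # Run as the git/gitosis user on the server
      git config --global pack.threads 1
      git config --global pack.windowMemory 64m
      git config --global pack.deltaCacheSize 64m
      git config --global core.packedGitLimit 128m
      git config --global core.packedGitWindowSize 32m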

    Read the article

  • Is StoreJet Transcend (0x2329) an Advanced Format drive?

    - by Graham Perrin
    I use a 640 GB StoreJet Transcend (0x2329) with ZEVO Community Edition 1.1.1 on OS X 10.8.2. Question Is this drive Advanced Format? Background I submitted a request for technical support to Transcend but the first response was gibberish so I don't expect a reasonable follow-up. Models at http://www.transcend-info.com/Products/CatList.asp?LangNo=0&ModNo=293 are similar but different sizes (not 640 GB). Mine is probably 25M2 (TS640GSJ25M2): Unless I'm missing something, nothing currently in the Transcend support area tells me whether the drive is Advanced Format. From System Information in OS X 10.8.2: StoreJet Transcend: Capacity: 640.14 GB (640,135,028,736 bytes) Removable Media: Yes Detachable Drive: Yes BSD Name: disk3 Product ID: 0x2329 Vendor ID: 0x152d (JMicron Technology Corp.) Version: 0.00 Serial Number: 322549FBA004 Speed: Up to 480 Mb/sec Manufacturer: JMicron History for the ZFS pool shows creation in March 2012 –  macbookpro08-centrim:~ gjp22$ zpool history zhandy | grep create 2012-03-14.17:29:37 zpool create -f -O compression=off -O copies=1 -O casesensitivity=insensitive -O snapdir=visible zhandy /dev/dsk/GPTE_1928482A-7FE4-482D-B692-3EC6B03159BA 2012-06-22.15:51:16 zfs create zhandy/Pocket Time Machine At that time I almost certainly used ZEVO Setup Assistant to create the pool. macbookpro08-centrim:~ gjp22$ zpool get ashift zhandy NAME PROPERTY VALUE SOURCE zhandy ashift 0 default If I discover that the drive is Advanced Format, a different ashift value will be appropriate.

    Read the article

  • Why is OpenSSH not using the user specified in ssh_config?

    - by Jordan Evens
    I'm using OpenSSH from a Windows machine to connect to a Linux Mint 9 box. My Windows user name doesn't match the ssh target's user name, so I'm trying to specify the user to use for login using ssh_config. I know OpenSSH can see the ssh_config file since I'm specifying the identity file in it. The section specific to the host in ssh_config is: Host hostname HostName hostname IdentityFile ~/.ssh/id_dsa User username Compression yes If I do ssh username@hostname it works. Trying to rely on ssh_config alone gives: F:\>ssh -v hostname OpenSSH_5.6p1, OpenSSL 0.9.8o 01 Jun 2010 debug1: Connecting to hostname [XX.XX.XX.XX] port 22. debug1: Connection established. debug1: permanently_set_uid: 0/0 debug1: identity file /cygdrive/f/progs/OpenSSH/home/.ssh/id_rsa type -1 debug1: identity file /cygdrive/f/progs/OpenSSH/home/.ssh/id_rsa-cert type -1 debug1: identity file /cygdrive/f/progs/OpenSSH/home/.ssh/id_dsa type 2 debug1: identity file /cygdrive/f/progs/OpenSSH/home/.ssh/id_dsa-cert type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.3p1 Debian-3ubuntu5 debug1: match: OpenSSH_5.3p1 Debian-3ubuntu5 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.6 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr hmac-md5 none debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Host 'hostname' is known and matches the RSA host key. debug1: Found key in /cygdrive/f/progs/OpenSSH/home/.ssh/known_hosts:1 debug1: ssh_rsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey debug1: Next authentication method: publickey debug1: Trying private key: /cygdrive/f/progs/OpenSSH/home/.ssh/id_rsa debug1: Offering DSA public key: /cygdrive/f/progs/OpenSSH/home/.ssh/id_dsa debug1: Authentications that can continue: publickey debug1: No more authentication methods to try. Permission denied (publickey). I was under the impression that (as outlined in this question: How to make ssh log in as the right user?) specifying User username in ssh_config should work. Why isn't OpenSSH using the username specified in ssh_config?
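    One detail worth checking (an observation on the quoted output; the config path below is an assumption): this verbose output never shows the usual "Reading configuration data ..." line, and the identity files it tries are just the defaults under the OpenSSH home directory, so the client may not be reading the ssh_config that is being edited at all. Forcing the file makes that easy to test:

      ssh -F /cygdrive/f/progs/OpenSSH/etc/ssh_config -v hostname

    If that logs in as username, the fix is to put the Host block in the config file this OpenSSH build actually reads (or keep using -F).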

    Read the article

  • Test A SSH Connection from Windows commandline

    - by IguanaMinstrel
    I am looking for a way to test if an SSH server is available from a Windows host. I found this one-liner, but it requires a Unix/Linux host: ssh -q -o "BatchMode=yes" user@host "echo 2>&1" && echo "UP" || echo "DOWN" Telnet'ing to port 22 works, but that's not really scriptable. I have also played around with Plink, but I haven't found a way to get the functionality of the one-liner above. Does anyone know Plink enough to make this work? Are there any other Windows-based tools that would work? Please note that the SSH servers in question are behind a corporate firewall and are NOT internet accessible. Arrrg. Figured it out: C:\>plink -batch -v user@host Looking up host "host" Connecting to 10.10.10.10 port 22 We claim version: SSH-2.0-PuTTY_Release_0.62 Server version: SSH-2.0-OpenSSH_4.7p1-hpn12v17_q1.217 Using SSH protocol version 2 Server supports delayed compression; will try this later Doing Diffie-Hellman group exchange Doing Diffie-Hellman key exchange with hash SHA-256 Host key fingerprint is: ssh-rsa 1024 aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa Initialised AES-256 SDCTR client->server encryption Initialised HMAC-SHA1 client->server MAC algorithm Initialised AES-256 SDCTR server->client encryption Initialised HMAC-SHA1 server->client MAC algorithm Using username "user". Using SSPI from SECUR32.DLL Attempting GSSAPI authentication GSSAPI authentication initialised GSSAPI authentication initialised GSSAPI authentication loop finished OK Attempting keyboard-interactive authentication Disconnected: Unable to authenticate C:\>
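    For a scriptable version of the same check (a sketch; it assumes the batch file only needs to know whether an SSH daemon answered, not whether authentication succeeds), plink's verbose output can be searched for the server's version banner and turned into an exit status:

      plink -batch -v user@host exit 2>&1 | findstr /C:"Server version:" >nul && echo UP || echo DOWN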

    Read the article

  • Warning: Cannot modify header information - headers already sent

    - by Pankaj Khurana
    Hi, I am using code available on http://www.forosdelweb.com/f18/zip-lib-php-archivo-zip-vacio-431133/ for creating zip file. First file-zip.lib.php <?php /* $Id: zip.lib.php,v 1.1 2004/02/14 15:21:18 anoncvs_tusedb Exp $ */ // vim: expandtab sw=4 ts=4 sts=4: /** * Zip file creation class. * Makes zip files. * * Last Modification and Extension By : * * Hasin Hayder * HomePage : www.hasinme.info * Email : [email protected] * IDE : PHP Designer 2005 * * * Originally Based on : * * http://www.zend.com/codex.php?id=535&single=1 * By Eric Mueller <[email protected]> * * http://www.zend.com/codex.php?id=470&single=1 * by Denis125 <[email protected]> * * a patch from Peter Listiak <[email protected]> for last modified * date and time of the compressed file * * Official ZIP file format: http://www.pkware.com/appnote.txt * * @access public */ class zipfile { /** * Array to store compressed data * * @var array $datasec */ var $datasec = array(); /** * Central directory * * @var array $ctrl_dir */ var $ctrl_dir = array(); /** * End of central directory record * * @var string $eof_ctrl_dir */ var $eof_ctrl_dir = "\x50\x4b\x05\x06\x00\x00\x00\x00"; /** * Last offset position * * @var integer $old_offset */ var $old_offset = 0; /** * Converts an Unix timestamp to a four byte DOS date and time format (date * in high two bytes, time in low two bytes allowing magnitude comparison). * * @param integer the current Unix timestamp * * @return integer the current date in a four byte DOS format * * @access private */ function unix2DosTime($unixtime = 0) { $timearray = ($unixtime == 0) ? getdate() : getdate($unixtime); if ($timearray['year'] < 1980) { $timearray['year'] = 1980; $timearray['mon'] = 1; $timearray['mday'] = 1; $timearray['hours'] = 0; $timearray['minutes'] = 0; $timearray['seconds'] = 0; } // end if return (($timearray['year'] - 1980) << 25) | ($timearray['mon'] << 21) | ($timearray['mday'] << 16) | ($timearray['hours'] << 11) | ($timearray['minutes'] << 5) | ($timearray['seconds'] >> 1); } // end of the 'unix2DosTime()' method /** * Adds "file" to archive * * @param string file contents * @param string name of the file in the archive (may contains the path) * @param integer the current timestamp * * @access public */ function addFile($data, $name, $time = 0) { $name = str_replace('', '/', $name); $dtime = dechex($this->unix2DosTime($time)); $hexdtime = 'x' . $dtime[6] . $dtime[7] . 'x' . $dtime[4] . $dtime[5] . 'x' . $dtime[2] . $dtime[3] . 'x' . $dtime[0] . $dtime[1]; eval('$hexdtime = "' . $hexdtime . 
'";'); $fr = "\x50\x4b\x03\x04"; $fr .= "\x14\x00"; // ver needed to extract $fr .= "\x00\x00"; // gen purpose bit flag $fr .= "\x08\x00"; // compression method $fr .= $hexdtime; // last mod time and date // "local file header" segment $unc_len = strlen($data); $crc = crc32($data); $zdata = gzcompress($data); $zdata = substr(substr($zdata, 0, strlen($zdata) - 4), 2); // fix crc bug $c_len = strlen($zdata); $fr .= pack('V', $crc); // crc32 $fr .= pack('V', $c_len); // compressed filesize $fr .= pack('V', $unc_len); // uncompressed filesize $fr .= pack('v', strlen($name)); // length of filename $fr .= pack('v', 0); // extra field length $fr .= $name; // "file data" segment $fr .= $zdata; // "data descriptor" segment (optional but necessary if archive is not // served as file) $fr .= pack('V', $crc); // crc32 $fr .= pack('V', $c_len); // compressed filesize $fr .= pack('V', $unc_len); // uncompressed filesize // add this entry to array $this -> datasec[] = $fr; // now add to central directory record $cdrec = "\x50\x4b\x01\x02"; $cdrec .= "\x00\x00"; // version made by $cdrec .= "\x14\x00"; // version needed to extract $cdrec .= "\x00\x00"; // gen purpose bit flag $cdrec .= "\x08\x00"; // compression method $cdrec .= $hexdtime; // last mod time & date $cdrec .= pack('V', $crc); // crc32 $cdrec .= pack('V', $c_len); // compressed filesize $cdrec .= pack('V', $unc_len); // uncompressed filesize $cdrec .= pack('v', strlen($name) ); // length of filename $cdrec .= pack('v', 0 ); // extra field length $cdrec .= pack('v', 0 ); // file comment length $cdrec .= pack('v', 0 ); // disk number start $cdrec .= pack('v', 0 ); // internal file attributes $cdrec .= pack('V', 32 ); // external file attributes - 'archive' bit set $cdrec .= pack('V', $this -> old_offset ); // relative offset of local header $this -> old_offset += strlen($fr); $cdrec .= $name; // optional extra field, file comment goes here // save to central directory $this -> ctrl_dir[] = $cdrec; } // end of the 'addFile()' method /** * Dumps out file * * @return string the zipped file * * @access public */ function file() { $data = implode('', $this -> datasec); $ctrldir = implode('', $this -> ctrl_dir); return $data . $ctrldir . $this -> eof_ctrl_dir . pack('v', sizeof($this -> ctrl_dir)) . // total # of entries "on this disk" pack('v', sizeof($this -> ctrl_dir)) . // total # of entries overall pack('V', strlen($ctrldir)) . // size of central dir pack('V', strlen($data)) . // offset to start of central dir "\x00\x00"; // .zip file comment length } // end of the 'file()' method /** * A Wrapper of original addFile Function * * Created By Hasin Hayder at 29th Jan, 1:29 AM * * @param array An Array of files with relative/absolute path to be added in Zip File * * @access public */ function addFiles($files /*Only Pass Array*/) { foreach($files as $file) { if (is_file($file)) //directory check { $data = implode("",file($file)); $this->addFile($data,$file); } } } /** * A Wrapper of original file Function * * Created By Hasin Hayder at 29th Jan, 1:29 AM * * @param string Output file name * * @access public */ function output($file) { $fp=fopen($file,"w"); fwrite($fp,$this->file()); fclose($fp); } } // end of the 'zipfile' class ?> My second file newzip.php <? 
include("zip.lib.php"); $ziper = new zipfile(); $ziper->addFiles(array("index.htm")); //array of files // the next three lines force an immediate download of the zip file: header("Content-type: application/octet-stream"); header("Content-disposition: attachment; filename=test.zip"); echo $ziper -> file(); ?> I am getting this warning while executing newzip.php Warning: Cannot modify header information - headers already sent by (output started at E:\xampp\htdocs\demo\zip.lib.php:233) in E:\xampp\htdocs\demo\newzip.php on line 6 I am unable to figure out the reason for the same. Please help me on this. Thanks

    Read the article

  • gitolite on Mac doesn't add new user to authorized_keys

    - by crashbus
    I installed gitolite and everything works fine for me as admin. But when I add a new user, the new user can't connect to the server. After looking into the authorized_keys file I saw that the new user wasn't added to it. During the commit of the new public key I get some warnings: WARNING: split conf not set, gl-conf present for 'gitolite-admin' Counting objects: 6, done. Delta compression using up to 8 threads. Compressing objects: 100% (4/4), done. Writing objects: 100% (4/4), 882 bytes, done. Total 4 (delta 1), reused 0 (delta 0) remote: WARNING: split conf not set, gl-conf present for 'gitolite-admin' remote: WARNING: ?? @staff christianwaldmann markwelch remote: sh: find: command not found remote: sh: find: command not found remote: sh: sort: command not found remote: sh: find: command not found remote: /usr/local/bin/triggers/post-compile/update-gitweb-access-list: line 26: cut: command not found remote: /usr/local/bin/triggers/post-compile/update-gitweb-access-list: line 23: grep: command not found remote: /usr/local/bin/triggers/post-compile/update-gitweb-access-list: line 26: sort: command not found remote: /usr/local/bin/triggers/post-compile/update-gitweb-access-list: line 26: sed: command not found remote: sh: find: command not found remote: sh: find: command not found How can I fix it so that gitolite automatically adds the new user to authorized_keys?
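    The quoted warnings are the clue: the post-compile triggers that rebuild gitolite's configuration (and with it the authorized_keys file) are dying because basic tools such as find, sort, grep and sed are not on the PATH of the restricted environment on the Mac server, so the new key is never compiled in. A sketch of one way to address it; whether the rc file is the right place should be checked against the installed gitolite version:

      # In ~/.gitolite.rc on the server (a Perl file), prepend the standard tool directories
      $ENV{PATH} = "/usr/bin:/bin:/usr/sbin:/sbin:" . $ENV{PATH};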

    Read the article

  • Free software for backing up an attached network drive

    - by Richard
    My wireless router comes with a USB connector which allows me to plug an external hard drive in, and it'll act as Network Attached Storage. The problem is that I want to back up this hard drive to the external drive of another computer so that if the NAS drive fails, I don't lose everything. However, Windows 7 Backup refuses to include the NAS as a location to back up. I can't fool it by mapping it to a drive letter either. Google presents lots of pages on how to back up files to a NAS, but not the other way around. Can anyone advise me on free software which can do incremental backups of a NAS drive to an external drive attached to the computer it is running on? I'm aware of this question but the top answers have one or more of the following issues: They aren't free. The free version cannot back up a NAS. They cannot do incremental backups. They're just a script and therefore have limited other functionality (e.g. disk space management, scheduling, compression, etc.)
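    If a plain mirrored copy (no versioning, no compression) would be acceptable, the robocopy tool built into Windows 7 can do scheduled incremental copies from a UNC path; /FFT matters because NAS firmware usually stores FAT-style 2-second timestamps. A sketch with placeholder paths, to be run from Task Scheduler:

      rem /MIR mirrors the share (deletions on the NAS also disappear from the copy), so this is not a versioned backup
      robocopy \\ROUTER-NAS\share E:\NAS-Backup /MIR /FFT /R:2 /W:5 /LOG:C:\Logs\nas-backup.log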

    Read the article

  • Slow Local Network, Windows 7, Snow Leopard, WiFi/Wired

    - by WerkkreW
    Hello - I am experiencing really poor local network performance in my home. I was recently using a Linksys WRT54G Router with DD-WRT on it, and a couple comparable Linksys-G PCI cards for connectivity but decided to upgrade hoping it would help with my performance issues. The computers in my house are connected as follows: Comcast Business Class Commercial 25mbps/10mbps (Verified with SpeakEasy and Speedtest.net) D-Link DGL-4500 Wireless N Router Windows 7x64 - D-Link DWA-552 Wireless-N Windows 7x64 - D-Link DWA-552 Wireless-N Mac Mini 10.6.2 - AirPort Extreme N Playstation 3, Hard Wired Xbox 360, Hard Wired Essentially the problem is very specific. Web browsing and uploading/downloading files from the internet is fine, more than fine. But if I want to say, Stream a video from one of my Windows 7 computers to my PS3, or copy a large video file between either of the PC's or the Mac, I get a consistent 500-900Kbps throughput at the high end. If I open my network browser, or try to browse my homegroup the response time is horrible. Both of my Windows computers are showing Strong wireless signals with a connection speed of 300Mbps. I know I can never expect to achieve anything near those speeds, but 500Kbps? Here is what I have tried so far: Enabled Single mode N-only and N/G Only on router WPA2 with AES Encrpytion Disabled "Remote Differential Compression" in Windows 7 Disabled TCP "Auto-Tuning" Used other software for file copies such as "Teracopy" I am at the end of my rope. Unfortunately I live in a 75 year old home with plaster walls, so hard-wiring my entire house isn't really an option I can handle right now. Any ideas to help me get decent speed when transferring files across my network would be greatly appreciated.

    Read the article
