Search Results

Search found 1285 results on 52 pages for 'lossless compression'.

Page 41 of 52

  • How can I improve the battery life under 12.04 on my Inspiron 14z? [duplicate]

    - by cfogelberg
    This question already has an answer here: Tips to extend battery life for laptops and notebooks 24 answers How do I improve the battery life of my Inspiron 14z under Ubuntu 12.04? This laptop gets 4-5 hours of battery life using Windows (e.g. here). I've removed Windows, installed Ubuntu 12.04 and the initial battery life was only 2 hours. With some tweaks (described below) it's still only ~2.5 hours. For reference, the laptop is the latest model of the 14z: i5-3337U processor 32GB MSATA, 500GB HDD (5400rpm) AMD Radeon HD7570M graphics card I have put ext4 partitions on both the SSD and the HDD, and have mounted / to the SSD and /home to the HDD. I also put a 24gb linux swap partition at the start of the HDD, though I figure this won't be used all that much (the laptop has 8gb of RAM). After googling around and reading Ask Ubuntu and other sites extensively, I have done the following steps, and they have improved the battery life ~30 minutes (exact improvement not clear, but battery life is still nowhere near 4-5 hours). Installed Jupiter (and set Performance to "Power Saving") Installed laptop-mode-tools cat /proc/sys/vm/laptop_mode now outputs 5 (previously it output 0) But it's not clear that this will help: AskUbuntu question Turned down the brightness of my screen from full to 1/3 Other things I have heard about but have not tried for fear of frying the laptop or my linux install: Add "pcie_aspm=force" at the end of the line with "quiet splash" in /boot/grub/grub.cfg Enable ALPM, but it may already be enabled in 12.04? Enable i915 framebuffer compression Use a proprietary driver for the graphics card? Turn off the graphics card? (what would happen if I relied on the internal Intel bridge?) Use TLP? Spin down the HDD more aggressively (howto, but I think laptop-mode-tools does this already) The only other thing I've noticed is that the plastic just above the F5, F6 and F7 keys gets really hot. According to Jupiter my CPU temperature is only 69 Celsius and the System Monitor shows CPU load at 7% so I don't think it's the CPU. Maybe it's the graphics card? Also, I've set up MongoDB and LAMP on the machine as well. When I run powertop MongoDB is high in the list, but I'm not sure if that's relevant to battery life because I'm not actually doing anything with MongoDB most of the time. Edit - Additional info as requested $ lspci -nnk | grep -iEA3 "(graphics|vga)" 00:02.0 VGA compatible controller [0300]: Intel Corporation Ivy Bridge Graphics Controller [8086:0166] (rev 09) Subsystem: Dell Device [1028:057f] Kernel driver in use: i915 Kernel modules: i915 -- 02:00.0 VGA compatible controller [0300]: Advanced Micro Devices [AMD] nee ATI Thames [Radeon 7500M/7600M Series] [1002:6841] Subsystem: Dell Device [1028:057f] Kernel driver in use: radeon Kernel modules: radeon
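
    For the grub tweak listed above, a minimal sketch (an assumption, not a verified fix for this laptop, and pcie_aspm=force can cause instability on some hardware) is to edit /etc/default/grub rather than grub.cfg directly, so the change survives kernel updates:

        # add pcie_aspm=force to the default kernel command line (back up the file first)
        sudo sed -i 's/GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"/GRUB_CMDLINE_LINUX_DEFAULT="quiet splash pcie_aspm=force"/' /etc/default/grub
        sudo update-grub   # regenerates /boot/grub/grub.cfg
        sudo reboot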

    Read the article

  • ????????????????

    - by Yusuke.Yamamoto
    ????·???? ?????:??????·?????? ~???????????Platinum???????????~ Pickup!:????????????????????? ???????|??????????|???????? ????????:?Oracle DB?????????????????????Windows?VMware?? ???? Oracle Technology Network, ????/????, ??IT???????·?????????????? View RSS feed ????? ????? Oracle?????????????????????????!???????????? View RSS feed ????? ???? ????????? Oracle Database ?????? ??????? ?????????(????????, ???, etc) ????????(???, REDO, ????????, etc) ????·????????????????? ????·?????????(??, etc) ????????????? ???????????????????·?????? ??????? ???? ????????·??SQL Server Windows Server ??????????PL/SQL|Java|.NET|PHP ??/??? ORACLE MASTER ???? DWH(?????????)??·?? ????? ?????(SAN, NAS, SSD, etc) ??·??????? Oracle Database Oracle Database 11g Release 2(11gR2) Oracle Database Standard Edition ????????: Advanced Compression ?????????: Advanced Security Application Express(APEX) Automatic Storage Management(ASM) SSD???Oracle???: Database Smart Flash Cache ??????????: Data Guard Data Pump Oracle Data Provider for .NET(ODP.NET) Partitioning(???????/?????????) DB????: Real Application Clusters(RAC) Real Application Testing Recovery Manager(RMAN) SQL*Loader|SQL*Plus|Statspack ??????|????????|???????? Amazon EC2|Microsoft Excel MSFC/MSCS(Microsoft Cluster Service) Exadata|Database Firewall SQL Developer ?????DB: TimesTen In-Memory Database Oracle Fusion Middleware Java Oracle Coherence Oracle Data Integrator(ODI) Oracle GoldenGate Oracle JRockit JVM Oracle WebLogic Server Oracle Enterprise Manager ????????????: Oracle Application Testing Suite Oracle Solaris DTrace|ZFS|???/???? Oracle VM Server for x86 ?????? ???????? ?????????Oracle???????????????·????????????????? ?????????(??·??????) Oracle Direct Seminar(?????????) OTN??????(??????) ???????(????????) Oracle University(??) ??????! View RSS feed ????? ?????? ????? ?????? ?????|?Sun?? ???????? OTN???????? OTN(????) ?????? ???? OTN???|???? OTN????? ?????? ?????? ???????? ???? ???????

    Read the article

  • How to remove music/videos DRM protection and convert to Mobile Devices such as iPod, iPhone, PSP, Z

    - by tonywesley
    The music/video files you purchased from online music stores like iTunes, Yahoo Music or Wal-Mart are under DRM protection. So you can't convert them to the formats supported by your own mobile devices such as Nokia phone, Creative Zen player, iPod, PSP, Walkman, Zune… You also can't share your purchased music/videos with your friends. The following step by step tutorial is dedicated to instructing music lovers on how to convert DRM protected music/videos for mobile devices. Method 1: If you only want to remove DRM protection from your protected music, this method will not cost you any money. Step 1: Burn your protected music files to a CD-R/RW disc to make an audio CD Step 2: Find a free CD ripper program to convert the audio CD tracks back to MP3, WAV, WMA, M4A, AAC, RA… Method 2: This guide will show you how to crack DRM on protected wmv, wma, m4p, m4v, m4a, aac files and convert them to unprotected WMV, MP4, MP3, WMA or any video and audio formats you like, such as AVI, MP4, Flv, MPEG, MOV, 3GP, m4a, aac, wmv, ogg, wav... I have been using Media Converter software; it is the quickest and easiest solution to remove DRM from WMV, M4V, M4P, WMA, M4A, AAC, M4B, AA files by quick recording. It captures the audio and video streams at the bottom of the operating system, so the output quality is lossless and the conversion speed is fast. The process is as follows. Step 1: Download and install the software Step 2: Run the software and click the "Add…" button to load WMA or M4A, M4B, AAC, WMV, M4P, M4V, ASF files Step 3: Choose output formats. If you want to convert protected audio files, please select from the "Convert audio to" list; if you want to convert protected video files, please select from the "Convert video to" list. Step 4: You can click the "Settings" button to customize preferences for output files. Click the "Settings" button below the "Convert audio to" list for protected audio files; click the "Settings" button below the "Convert video to" list for protected video files Step 5: Start removing DRM and converting your DRM protected music and videos by clicking the "Start" button. What is DRM? DRM, which is most commonly found in movies and music files, doesn't mean just basic copy-protection of video, audio and ebooks; it means full protection for digital content, ranging from delivery to the end user's ways of using the content. We can remove the DRM from video and audio files legally by quick recording.

    Read the article

  • SSL23_WRITE:ssl handshake failure:s23_lib.c:177

    - by Armin
    When attempting to connect to an xmpp server over SSL, openssl fails with the following error: 3071833836:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake failure:s23_lib.c:177 I believe that the server uses the RC4-MD5 cipher, here is the full output: [root@localhost ~]# openssl s_client -connect 184.106.52.124:5222 -cipher RC4-MD5 CONNECTED(00000003) >>> SSL 2.0 [length 0032], CLIENT-HELLO 01 03 03 00 09 00 00 00 20 00 00 04 01 00 80 00 00 ff b0 c9 c2 3f 0b 0e 98 df b4 dc fe b7 e8 8f 17 9a 34 b5 9b 17 1b 2b ac 01 dc bd 2b a9 2d 18 44 0c 3071866604:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake failure:s23_lib.c:177: --- no peer certificate available --- No client certificate CA names sent --- SSL handshake has read 0 bytes and written 52 bytes --- New, (NONE), Cipher is (NONE) Secure Renegotiation IS NOT supported Compression: NONE Expansion: NONE --- Using gnutls-cli: [root@localhost ~]# gnutls-cli 184.106.52.124 -p 5222 Resolving '184.106.52.124'... Connecting to '184.106.52.124:5222'... *** Fatal error: A TLS packet with unexpected length was received. *** Handshake has failed GNUTLS ERROR: A TLS packet with unexpected length was received. Connecting to the same server on port 5223 works fine. Using OpenSSL 1.0.1e-fips on CentOS 6.5 and OpenSSL 1.0.1f on Ubuntu 14.04.1 Any tips on how to troubleshoot this? Thanks in advance.
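
    One thing worth checking (an educated guess, not confirmed for this particular server): XMPP on port 5222 normally expects STARTTLS negotiation inside the XMPP stream rather than a raw SSL handshake, while 5223 is the legacy direct-SSL port, which would explain why 5223 works. openssl can probe the STARTTLS path directly:

        # STARTTLS-aware probe of the XMPP port (recent openssl builds understand -starttls xmpp)
        openssl s_client -connect 184.106.52.124:5222 -starttls xmpp
        # for comparison, the legacy direct-SSL port
        openssl s_client -connect 184.106.52.124:5223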

    Read the article

  • How can I forward an application with X11 in grayscale

    - by ??????? ???????????
    I am trying to run a graphical application at home and display it on a laptop which is located about six routing hops away. The problem is that the connection is so slow (or rather there is so much GOOEY being transferred) that the mouse is unresponsive and it takes a "long time" to redraw the window even at a resolution of 800x600 pixels. The connection speeds are 10MBit up at home and about 1MBit down on the laptop, which I think should be sufficient for looking at some GUI in (almost) real time. Since this traffic is sent over a secure shell, I have enabled Compression with the highest CompressionLevel along with Ciphers set to blowfish-cbc. This has substantially improved the responsiveness of the application, making it nearly usable. However, my goal is to improve the performance even further by sacrificing colors and even frame rate. The application to be displayed is a Qemu SDL window with a graphically-oriented OS in it. This is not strictly relevant, but perhaps there are options to tweak the SDL output which I am not aware of. A possible workaround would be to run the application in a "hidden" X server and enable TigerVNC on that X server. This would automatically give me the benefits of an optimized VNC viewport, but the goal is to do without (reduce complexity). The question I'm asking is: what are my options for reducing the data rate generated on the server in order to make the graphical application more usable on the client? As mentioned, colors are not important and I could probably work with 5-16 fps. Both machines are running Gentoo with the software in question being: workstation X.Org X Server 1.10.4 OpenSSH_5.8p1-hpn13v10, OpenSSL 1.0.0e QEMU emulator version 0.15.1 (qemu-kvm-0.15.1) laptop X.Org X Server 1.12.2 OpenSSH_5.8p1-hpn13v10lpk, OpenSSL 1.0.0j
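
    For reference, a minimal sketch of the SSH invocation described above (the host name is a placeholder, and note that in stock OpenSSH the CompressionLevel option only applies to protocol 1, so it may have no effect on a protocol 2 connection):

        # compressed, cheap-cipher X forwarding (blowfish-cbc is still available in OpenSSH 5.8)
        ssh -X -C -c blowfish-cbc -o CompressionLevel=9 user@home-server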

    Read the article

  • Scripting an automated SQLServer 2008 DR move

    - by ItsAMystery
    Hi All We use the built-in log shipping in SQLServer to logship to our DR site but once a month we do a DR test which requires us to move back and forth between our Live and Backup servers. We run multiple (30) databases on the system so manually backing up the final logs and disabling the jobs is too much work and takes too long. I thought no problem, I will script it, but have run into trouble with it always complaining that the final logship is too early to apply even though I don't export the final log until putting the database into norecovery mode. Firstly, does anyone know a simple and reliable way of doing this? I have looked at some 3rd party software (redgate sqlbackup I think it was) but that didn't make it easy in this situation either. What I want to be able to do is basically run a script (a series of stored procedures) to get me to DR and run another to get me back with no data loss. My scripts are very simplistic at the moment but here they are: 2 servers Primary Paris Secondary ParisT The StartAgentJobAndWait is a script written by someone else (ta) and just checks the jobs have finished or quits it if it never ends. At the moment I am just using a test database called BOB2 but if I can get it working will pass in the database and job names. from PARIS: /* Disable backup job */ exec msdb..sp_update_job @job_name = 'LSBackup_BOB2', @enabled = 0 exec PARIST.msdb..sp_update_job @job_name = 'LSCopy_PARIS_BOB2', @enabled = 0 exec PARIST.msdb..sp_update_job @job_name = 'LSRestore_PARIS_BOB2', @enabled = 0 exec PARIST.master.dbo.DRStage2 ParisT DRStage2 DECLARE @RetValue varchar (10) EXEC @RetValue = StartAgentJobAndWait LSCopy_PARIS_BOB2 , 2 SELECT ReturnValue=@RetValue if @RetValue = 1 begin print 'The Copy Task completed Succesffuly' END ELSE print 'The Copy task failed, This may or may not be a problem, check restore state of database' SELECT @RetValue = 0 EXEC @RetValue = StartAgentJobAndWait LSRestore_PARIS_BOB2 , 2 SELECT ReturnValue=@RetValue if @RetValue = 1 begin print 'The Restore Task completed Succesffuly' END ELSE print 'The Copy task failed, This may or may not be a problem, check restore state of database' exec PARIS.master.dbo.DRStage3 /* Do the last logship and move it to Trumpington */ BACKUP log "BOB2" to disk='c:\drlogshipping\BOB2.bak' with compression, norecovery EXEC xp_cmdshell 'copy c:\drlogshipping \\192.168.7.11\drlogshipping' EXEC PARIST.master.dbo.DRTransferFinish AS BEGIN restore database "BOB2" from disk='c:\drlogshipping\bob2.bak' with recovery

    Read the article

  • ZFS - destroying deduplicated zvol or data set stalls the server. How to recover?

    - by ewwhite
    I'm using Nexentastor on a secondary storage server running on an HP ProLiant DL180 G6 with 12 Midline (7200 RPM) SAS drives. The system has an E5620 CPU and 8GB RAM. There is no ZIL or L2ARC device. Last week, I created a 750GB sparse zvol with dedup and compression enabled to share via iSCSI to a VMWare ESX host. I then created a Windows 2008 file server image and copied ~300GB of user data to the VM. Once happy with the system, I moved the virtual machine to an NFS store on the same pool. Once up and running with my VMs on the NFS datastore, I decided to remove the original 750GB zvol. Doing so stalled the system. Access to the Nexenta web interface and NMC halted. I was eventually able to get to a raw shell. Most OS operations were fine, but the system was hanging on the zfs destroy -r vol1/filesystem command. Ugly. I found the following two OpenSolaris bugzilla entries and now understand that the machine will be bricked for an unknown period of time. It's been 14 hours, so I need a plan to be able to regain access to the server. http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6924390 and http://bugs.opensolaris.org/bugdatabase/view_bug.do;jsessionid=593704962bcbe0743d82aa339988?bug_id=6924824 In the future, I'll probably take the advice given in one of the bugzilla workarounds: Workaround Do not use dedupe, and do not attempt to destroy zvols that had dedupe enabled. Update: I had to force the system to power off. Upon reboot, the system stalls at Importing zfs filesystems. It's been that way for 2 hours now.
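
    For anyone in a similar spot, a hedged sketch of how to gauge the size of the dedup table before destroying a deduplicated dataset (and therefore roughly how painful the destroy may be); the pool name is the one from the question, and both commands are read-only:

        # summarise the dedup table (DDT) histogram and its in-core/on-disk footprint
        zdb -DD vol1
        # overall dedup ratio as the pool reports it
        zpool get dedupratio vol1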

    Read the article

  • Improving Performance of RDP Over LAN

    - by Jared Brown
    Architecture: A deployment of 6 new HP thin clients (Windows XP Embedded) with TCP/IP access to several new HP servers (Windows 2003 Server). Each thin client is connected over fiber optic to a Gigabit Cisco switch, which the servers are connected to. There are 10/100 Ethernet to fiber converter boxes on each end of the fiber cables. Problem: Noticeable lag over RDP while using the Unigraphics CAD package. 3D models take .5 to 1 second to respond to mouse actions. Other Details: Network throughput on each thin client's RDP session is 7288 kbps. RDP connection settings - color setting: 15k, all themes, etc. turned off. Local and remote system performance stats are well within norms (CPU, Memory, and Network). Question: Are there newer versions of terminal services or RDP I can use on my existing OSes? Are there compression algorithms, etc. that are well suited for a high-bandwidth LAN? Are there valid alternatives that will yield higher performance (i.e. UltraVNC with drivers installed)? Are there TCP/IP tuning options I can exploit?

    Read the article

  • Is there a tool that can test what SSL/TLS cipher suites a particular website offers?

    - by Jeremy Powell
    Is there a tool that can test what SSL/TLS cipher suites a particular website offers? I've tried openssl, but if you examine the output: $ echo -n | openssl s_client -connect www.google.com:443 CONNECTED(00000003) depth=1 /C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA verify error:num=20:unable to get local issuer certificate verify return:0 --- Certificate chain 0 s:/C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com i:/C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA 1 s:/C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA i:/C=US/O=VeriSign, Inc./OU=Class 3 Public Primary Certification Authority --- Server certificate -----BEGIN CERTIFICATE----- MIIDITCCAoqgAwIBAgIQL9+89q6RUm0PmqPfQDQ+mjANBgkqhkiG9w0BAQUFADBM MQswCQYDVQQGEwJaQTElMCMGA1UEChMcVGhhd3RlIENvbnN1bHRpbmcgKFB0eSkg THRkLjEWMBQGA1UEAxMNVGhhd3RlIFNHQyBDQTAeFw0wOTEyMTgwMDAwMDBaFw0x MTEyMTgyMzU5NTlaMGgxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpDYWxpZm9ybmlh MRYwFAYDVQQHFA1Nb3VudGFpbiBWaWV3MRMwEQYDVQQKFApHb29nbGUgSW5jMRcw FQYDVQQDFA53d3cuZ29vZ2xlLmNvbTCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkC gYEA6PmGD5D6htffvXImttdEAoN4c9kCKO+IRTn7EOh8rqk41XXGOOsKFQebg+jN gtXj9xVoRaELGYW84u+E593y17iYwqG7tcFR39SDAqc9BkJb4SLD3muFXxzW2k6L 05vuuWciKh0R73mkszeK9P4Y/bz5RiNQl/Os/CRGK1w7t0UCAwEAAaOB5zCB5DAM BgNVHRMBAf8EAjAAMDYGA1UdHwQvMC0wK6ApoCeGJWh0dHA6Ly9jcmwudGhhd3Rl LmNvbS9UaGF3dGVTR0NDQS5jcmwwKAYDVR0lBCEwHwYIKwYBBQUHAwEGCCsGAQUF BwMCBglghkgBhvhCBAEwcgYIKwYBBQUHAQEEZjBkMCIGCCsGAQUFBzABhhZodHRw Oi8vb2NzcC50aGF3dGUuY29tMD4GCCsGAQUFBzAChjJodHRwOi8vd3d3LnRoYXd0 ZS5jb20vcmVwb3NpdG9yeS9UaGF3dGVfU0dDX0NBLmNydDANBgkqhkiG9w0BAQUF AAOBgQCfQ89bxFApsb/isJr/aiEdLRLDLE5a+RLizrmCUi3nHX4adpaQedEkUjh5 u2ONgJd8IyAPkU0Wueru9G2Jysa9zCRo1kNbzipYvzwY4OA8Ys+WAi0oR1A04Se6 z5nRUP8pJcA2NhUzUnC+MY+f6H/nEQyNv4SgQhqAibAxWEEHXw== -----END CERTIFICATE----- subject=/C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com issuer=/C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA --- No client certificate CA names sent --- SSL handshake has read 1777 bytes and written 316 bytes --- New, TLSv1/SSLv3, Cipher is AES256-SHA Server public key is 1024 bit Compression: NONE Expansion: NONE SSL-Session: Protocol : TLSv1 Cipher : AES256-SHA Session-ID: 748E2B5FEFF9EA065DA2F04A06FBF456502F3E64DF1B4FF054F54817C473270C Session-ID-ctx: Master-Key: C4284AE7D76421F782A822B3780FA9677A726A25E1258160CA30D346D65C5F4049DA3D10A41F3FA4816DD9606197FAE5 Key-Arg : None Start Time: 1266259321 Timeout : 300 (sec) Verify return code: 20 (unable to get local issuer certificate) --- it just shows that the cipher suite is something with AES256-SHA. I know I could grep through the hex dump of the conversation, but I was hoping for something a little more elegant. I would prefer Linux tools, but Windows (or other) would be fine. This question is motivated by the security testing I do for PCI and general penetration testing. Update: GregS points out below that the SSL server picks from the cipher suites of the client. So it seems I would need to test all cipher suites one at a time. I think I can hack something together, but is there a tool that does particularly this?
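
    The "hack something together" route mentioned at the end can be sketched in a few lines of shell: ask openssl for every cipher name it knows, then attempt one handshake per suite and record which ones the server accepts. Host and port are taken from the question; this only covers the suites the local openssl build supports:

        # probe www.google.com:443 with each locally known cipher, one handshake per suite
        for c in $(openssl ciphers 'ALL:eNULL' | tr ':' ' '); do
            if echo | openssl s_client -connect www.google.com:443 -cipher "$c" >/dev/null 2>&1; then
                echo "accepted  $c"
            else
                echo "rejected  $c"
            fi
        done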

    Read the article

  • IIS7 Session ID rotating with Classic ASP

    - by ManiacZX
    I am trying to migrate a Classic ASP app onto a Windows 2008 R2 server. The application features run fine, but I am having an issue with sessions. The application keeps the logged-in user information in session and I am constantly getting knocked out as if the session had expired. While debugging I have discovered the sessions are not expiring but instead I am getting 2-3 different Session IDs in use by one browser. I am outputting Response.Write(Session.SessionID) on various pages in the application and I can sit there and hit refresh over and over and watch the number change between these 2-3 SessionIDs randomly. The sessions are still valid because when I refresh and get the Session ID that I logged in under the page is displayed (because the security check was successful), and when I get one of the other Session IDs I get the "you aren't logged in, you need to log in" message. If I close and re-open the browser, same story, just the set of IDs is new. This happens with IE8, Firefox and Chrome from multiple computers. Things I've tried: - AppPool set to No Managed Code and Classic - Output Caching set .asp to never cache - ASP Session Properties enabled and disabled asp session state and confirmed it affected page (error trying to read Session.SessionID when disabled) Things I've tried just in case but shouldn't have anything to do with ASP Session: - Disabled compression - Changed ASP.Net Session State properties (InProc, StateServer, SQLServer, Cookies, URI, etc) -

    Read the article

  • Windows 7 - "A disk read error occurred. Press Ctrl + Alt + Del to restart"

    - by Senthil
    Problem: When I switch on my PC, after BIOS POST, a cursor blinks for about 5 seconds and then I get this error message: A disk read error occurred. Press Ctrl + Alt + Del to restart. I am able to go into the BIOS, but the Windows loader doesn't even start. This message is shown after my motherboard logo comes and goes. Symptoms: I DID notice my system freezing for minutes at a time over the past two days. Also, in the past two days, it stopped halfway through the Windows boot process. I had to do a hard reset a couple of times to get it working. But since this morning, I only get this error message. Configuration: Operating System: Windows 7 Ultimate 32-bit only. Hard disk: 1 Physical Disk - 80GB SATA Partitions: Two (2) - C: and D: File System: NTFS No drive encryption or compression is turned on. After searching on the net, I have found people mentioning these possible causes: Hard Disk is physically failing Corrupt MBR Bad Sector I am planning to buy a new hard disk, install Windows on it and continue. But I need data from the old hard disk. The data I want is on the D: drive, outside any Windows user folder, and is not encrypted, compressed or protected in any way. I think if someone/something can get the disk working again and knows NTFS, the data can hopefully be read. What steps should I follow to recover files from the defective disk? Update: I bought a new disk, installed Windows on it and added the defective one as a slave. Then I was able to read the data from the defective hard disk. Though chkdsk found lots of errors, the files I wanted were not affected and I got them back :) I am not using that hard disk anymore though it seems to be working at the moment.
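
    For the "recover files from the defective disk" part, one commonly suggested route is to image the failing drive read-only from a Linux live CD before doing anything else, then pull files out of the image. A hedged sketch (device names and the partition number are assumptions; /mnt/usb stands for any destination with 80GB+ free, and losetup -P needs a reasonably recent util-linux):

        # clone the failing 80GB disk with GNU ddrescue, keeping a map file so runs can resume
        ddrescue -n /dev/sdX /mnt/usb/disk.img /mnt/usb/disk.log
        # expose the partitions inside the image and mount the data partition read-only
        losetup -fP --show /mnt/usb/disk.img        # prints e.g. /dev/loop0
        mkdir -p /mnt/recovered
        mount -o ro /dev/loop0p2 /mnt/recovered     # p2 assumed to be the D: partition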

    Read the article

  • Why is uploading to S3 so slow?

    - by Tom Marthenal
    I am using s3cmd to upload to S3: # s3cmd put 1gb.bin s3://my-bucket/1gb.bin 1gb.bin -> s3://my-bucket/1gb.bin [1 of 1] 366706688 of 1073741824 34% in 371s 963.22 kB/s I am uploading from Linode, which has an outgoing bandwidth cap of 50 Mb/s according to support (roughly 6 MB/s). Why am I getting such slow upload speeds to S3, and how can I improve them? Update: Uploading the same file via SCP to an m1.medium EC2 instance (SCP from my Linode to the instance's EBS drive) gives about 44 Mb/s according to iftop (any compression done by the cipher is not a factor). Traceroute: Here's a traceroute to the server it's uploading to (according to tcpdump). # traceroute s3-1-w.amazonaws.com. traceroute to s3-1-w.amazonaws.com. (72.21.194.32), 30 hops max, 60 byte packets 1 207.99.1.13 (207.99.1.13) 0.635 ms 0.743 ms 0.723 ms 2 207.99.53.41 (207.99.53.41) 0.683 ms 0.865 ms 0.915 ms 3 vlan801.tbr1.mmu.nac.net (209.123.10.9) 0.397 ms 0.541 ms 0.527 ms 4 0.e1-1.tbr1.tl9.nac.net (209.123.10.102) 1.400 ms 1.481 ms 1.508 ms 5 0.gi-0-0-0.pr1.tl9.nac.net (209.123.11.62) 1.602 ms 1.677 ms 1.699 ms 6 equinix02-iad2.amazon.com (206.223.115.35) 9.393 ms 8.925 ms 8.900 ms 7 72.21.220.41 (72.21.220.41) 32.610 ms 9.812 ms 9.789 ms 8 72.21.222.141 (72.21.222.141) 9.519 ms 9.439 ms 9.443 ms 9 72.21.218.3 (72.21.218.3) 10.245 ms 10.202 ms 10.154 ms 10 * * * 11 * * * 12 * * * 13 * * * 14 * * * 15 * * * 16 * * * 17 * * * 18 * * * 19 * * * 20 * * * 21 * * * 22 * * * 23 * * * 24 * * * 25 * * * 26 * * * 27 * * * 28 * * * 29 * * * 30 * * * The latency looks reasonable, at least until the server stopped responding to ping requests.
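
    If the bottleneck is per-connection TCP throughput to S3 rather than the link itself, one thing worth trying is multipart upload with bigger chunks, or several connections in parallel. A hedged sketch (flag availability depends on the s3cmd version, and the chunk/part sizes are guesses):

        # multipart upload with a larger chunk size (s3cmd 1.1+; the default is 15 MB)
        s3cmd put --multipart-chunk-size-mb=50 1gb.bin s3://my-bucket/1gb.bin
        # crude parallelism test: split the file and upload the pieces concurrently
        split -b 256M 1gb.bin part_
        for p in part_*; do s3cmd put "$p" "s3://my-bucket/$p" & done; wait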

    Read the article

  • Dual Monitor support rdp 7 to win 7 on esxi

    - by rphilli5
    I am trying to RDP from a Windows 7 Professional dual monitor physical machine to a Windows 7 Professional VM hosted on esxi 4.0. I can get the spanning option to work to both monitors, but I have tried 3 different methods of connecting but have not been able to use true multiple monitors. At different times, I tried checking the "use all monitors" option, command line mstsc /multimon and added the line use multimon:i:1 to the .rdp file. None of these worked. Any ideas? The physical machine can connect to other Windows 7 physical machines with true multi monitor access. I also have the same issue when going from a 32bit RC1 machine to a Windows 7 Professional x64, but not when going in the reverse direction. Here's the .rdp: screen mode id:i:2 use multimon:i:1 desktopwidth:i:1440 desktopheight:i:900 session bpp:i:16 winposstr:s:0,1,341,118,1139,568 compression:i:1 keyboardhook:i:2 audiocapturemode:i:0 videoplaybackmode:i:1 connection type:i:1 displayconnectionbar:i:1 disable wallpaper:i:1 allow font smoothing:i:0 allow desktop composition:i:0 disable full window drag:i:1 disable menu anims:i:1 disable themes:i:1 disable cursor setting:i:0 bitmapcachepersistenable:i:1 full address:s:192.168.1.5 audiomode:i:0 redirectprinters:i:1 redirectcomports:i:0 redirectsmartcards:i:1 redirectclipboard:i:1 redirectposdevices:i:0 redirectdirectx:i:1 autoreconnection enabled:i:1 authentication level:i:2 prompt for credentials:i:0 negotiate security layer:i:1 remoteapplicationmode:i:0 alternate shell:s: shell working directory:s: gatewayhostname:s: gatewayusagemethod:i:4 gatewaycredentialssource:i:4 gatewayprofileusagemethod:i:0 promptcredentialonce:i:1 use redirection server name:i:0 drivestoredirect:s:

    Read the article

  • Real-time local backup with versioning on Windows 7/8

    - by Borda
    I'm looking for a reliable backup solution on Windows, something with a feature set similar to Yadis. I've been using CrashPlan for 2 months, but their software lost more than 1TB of my data, that's why I'm looking for alternatives. Requirements: Real-time folder-to-folder backup: I don't need online features, I want to use this to duplicate my files between my local disks. Versioning support: Should be able to choose how many versions to keep of the files. Plain backup: I'd like to be able to open the backup without special software. No proprietary file format. External disk support: Shouldn't have any problem using external disks, at least on the source side. Backup every file, even locked/system files. (Yadis fails this one) Should start automatically, and have a comfortable GUI. Should use actively maintained and/or popular software. I don't want to use discontinued products. Optional requirements: Low RAM usage I'm not really comfortable with something that eats 1GB of my RAM. Compression support Preferably ZIP, but I'm not picky about this. Any ordinary everyday format is acceptable. Freeware or Open Source Preferred, but not necessary. I can do a one-time payment within reasonable bounds. (preferably under $100) I only need Windows support, but if it works on Linux, that's a plus. I've already searched and tried lots of software, most of them failed at the plain backup, the versioning or the locked file backup requirements.

    Read the article

  • Forward all traffic through an ssh tunnel

    - by Eamorr
    I hope someone can follow this and I'll explain as best I can. I'm trying to forward all traffic from port 6999 on x.x.x.224, through an ssh tunnel, and onto port 7000 on x.x.x.218. Here is some ASCII art: |browser|-----|Squid on x.x.x.224|------|ssh tunnel|------<satellite link>-----|Squid on x.x.x.218|-----|www| 3128 6999 7000 80 When I remove the ssh tunnel, everything works fine. The idea is to turn off encryption on the ssh tunnel (to save bandwidth) and turn on maximum compression (to save more bandwidth). This is because it's a satellite link. Here's the ssh tunnel I've been using: ssh -C -f -C -o CompressionLevel=9 -o Cipher=none [email protected] -L 7000:172.16.1.224:6999 -N The trouble is, I don't know how to get data from Squid on x.x.x.224 into the ssh tunnel? Am I going about this the wrong way? Should I create an ssh tunnel on x.x.x.218? I use iptables to stop squid on x.x.x.224 from reading port 80, but to feed from port 6999 instead (i.e. via the ssh tunnel). Do I need another iptables rule? Any comments greatly appreciated. Many thanks in advance,
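
    One way to wire this up (a hedged sketch, not a confirmed answer; the user name, and the idea of opening the tunnel from x.x.x.224 itself, are assumptions) is to make local port 6999 on x.x.x.224 lead to Squid's port 7000 on x.x.x.218, and then point the local Squid at that port as a parent:

        # run on x.x.x.224: local port 6999 -> (compressed ssh) -> x.x.x.218 port 7000
        ssh -f -N -C -o CompressionLevel=9 -L 6999:localhost:7000 root@x.x.x.218
        # then in squid.conf on x.x.x.224 (squid directives, shown here as comments):
        #   cache_peer 127.0.0.1 parent 6999 0 no-query default
        #   never_direct allow all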

    Read the article

  • Server Directory Not Accessible

    - by GusDeCooL
    I am seeing strange behaviour on the live server that does not happen on my local server. My local server is a Mac, and my live server is Linux. For example, I try to access some files: http://redddor.babonmultimedia.com/assets/images/map-1.jpg This works correctly. http://redddor.babonmultimedia.com/assets/modules/evogallery/check.php This returns 404. I'm pretty sure my file is in there and there is no typo mistake. How come it gives me a 404? There is only one .htaccess in the root of the server and its configuration is like this. # For full documentation and other suggested options, please see # http://svn.modxcms.com/docs/display/MODx096/Friendly+URL+Solutions # including for unexpected logouts in multi-server/cloud environments # and especially for the first three commented out rules #php_flag register_globals Off #AddDefaultCharset utf-8 #php_value date.timezone Europe/Moscow Options +FollowSymlinks RewriteEngine On RewriteBase / <IfModule mod_security.c> SecFilterEngine Off </IfModule> # Fix Apache internal dummy connections from breaking [(site_url)] cache RewriteCond %{HTTP_USER_AGENT} ^.*internal\ dummy\ connection.*$ [NC] RewriteRule .* - [F,L] # Rewrite domain.com -> www.domain.com -- used with SEO Strict URLs plugin #RewriteCond %{HTTP_HOST} . #RewriteCond %{HTTP_HOST} !^www\.example\.com [NC] #RewriteRule (.*) http://www.example.com/$1 [R=301,L] # Exclude /assets and /manager directories and images from rewrite rules RewriteRule ^(manager|assets)/*$ - [L] RewriteRule \.(jpg|jpeg|png|gif|ico)$ - [L] # For Friendly URLs RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule ^(.*)$ index.php?q=$1 [L,QSA] # Reduce server overhead by enabling output compression if supported. #php_flag zlib.output_compression On #php_value zlib.output_compression_level 5

    Read the article

  • Can OpenVPN invoke DHCP Client?

    - by Ency
    I have got a working VPN connection through OpenVPN, but I would also like to use my own DHCP server rather than openvpn's push feature. Currently everything works fine, but I have to manually start the DHCP client, e.g. dhclient tap0, and I then get an IP and the other important stuff from my DHCP server. Is there any directive which starts the DHCP client when the connection is established? Here is my client's config: remote there.is.server.com float dev tap tls-client #pull port 1194 proto tcp-client persist-tun dev tap0 #ifconfig 192.168.69.201 255.255.255.0 #route-up "dhclient tap0" #dhcp-renew ifconfig 0.0.0.0 255.255.255.0 ifconfig-noexec ifconfig-nowarn ca /etc/openvpn/ca.crt cert /etc/openvpn/encyNtb_openvpn_client.crt key /etc/openvpn/encyNtb_openvpn_client.key dh /etc/openvpn/dh-openvpn.dh ping 10 ping-restart 120 comp-lzo verb 5 log-append /var/log/openvpn.log Here comes the server's config: mode server tls-server dev tap0 local servers.ip.here port 1194 proto tcp-server server-bridge # Allow comunication between clients client-to-client # Allowing duplicate users per one certificate duplicate-cn # CA Certificate, VPN Server Certificate, key, DH and Revocation list ca /etc/ssl/CA/certs/ca.crt cert /etc/ssl/CA/certs/openvpn_server.crt key /etc/ssl/CA/private/openvpn_server.key dh /etc/ssl/CA/dh/dh-openvpn.dh crl-verify /etc/ssl/CA/crl.pem # When no response is recieved within 120seconds, client is disconected keepalive 10 60 persist-tun persist-key user openvpn group openvpn # Log and Connected clients file log-append /var/log/openvpn verb 3 status /var/run/openvpn/vpn.status 10 # Compression comp-lzo #Push data to client push "route-gateway 192.168.69.1" push "redirect-gateway def1"
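
    One approach that is often suggested for this (an untested sketch; the script path is an assumption) is to let OpenVPN invoke dhclient itself through an up hook, which requires script-security 2 in the client config:

        # client config additions (OpenVPN directives, shown as comments):
        #   script-security 2
        #   up /etc/openvpn/up-dhcp.sh
        # contents of /etc/openvpn/up-dhcp.sh:
        #!/bin/sh
        dhclient "$1"   # OpenVPN passes the device name (here tap0) as the first argument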

    Read the article

  • Running git-svn with cron results in garbage in .git

    - by Paul
    I've setup a git-svn repo with cron to fetch from the svn repo daily. I have a script to do the fetching, and this is what is invoked by cron. Everything is fine with the repo, and the script works fine when executed manually. However, when it runs under cron, empty files get dropped into the .git directory. The files have names that look like they are some base64 output, e.g. juTrvjP6m8 and kcKf3hu3b4. Two of these files show up for every cron run. I thought these might be commit hashes, but they're not, git-show says it's an unknown revision. I set-up the repo as follows: git svn init http://svn.ip.addr/repo git svn fetch svn-remote My script looks like this: cd /gitsvn/dir git svn fetch svn-remote git svn push pub The last line pushes the repo to a separate (bare) public repo from which others can clone. I'm piping the output from the cron job to a file, which looks like this: fatal: unable to run 'git-svn' Counting objects: 21, done. Delta compression using up to 2 threads. Compressing objects: 100% (10/10), done. Writing objects: 100% (11/11), 59.08 KiB, done. Total 11 (delta 8), reused 0 (delta 0) To /gitpub/repo.git 360faf5..a153b0d trunk -> trunk The line "fatal: unable to run 'git-svn'" is alarming, but the fetch seems to go ahead anyway. Any suggestions? Where are these empty garbage files coming from, and how to stop them? Am I in for bigger problems in the future? BTW, I'm using git 1.6.3.3.

    Read the article

  • Lotus Domino - DAOS not reducing file size?

    - by SydxPages
    I have implemented DAOS on a Lotus Domino Server (8.5.3 FP2) as follows: Lotus Domino Server Document: Store file attachments in DAOS: Enabled Minimum size of object before Domino will store in DAOS: 64000 bytes DAOS base path: E:\DAOS Defer object deletion for: 30 days Transaction logging is running, and the specific test database has the following advanced properties set: Domino Attachment and Object Service (ticked) Use LZ1 compression for attachments Compress Database Design Compress Data I have restarted the server. When I run a compact -c, it compacts the database, but does not reduce the size. I have checked the DB in Windows Explorer (60Gb) and the size is the same pre and post. I have checked the directory (E:\DAOS) and it is 35Gb in size. When I run the command 'Tell DAOSMgr Status tmp\test.nsf', I get the following response. From looking around on the net, I believe ticket count = 0 means that the db is not really DAOS'ed? Admin Process: Searching Administration Requests database DAOSMGR: Status tmptest.nsf started DAOS database status: Database: E:\Lotus\Domino\Data\tmp\test.nsf Database state = Synchronized Last resynchronized: 03/09/2012 02:49:13 PM Ticket count: 0 DAOSMGR: Status tmp\test.nsf completed I have run fixup on the database. When I have tried to run the DAOS estimator it has always crashed. This was a problem with larger databases on earlier versions of Domino, but not anymore. Can anyone tell me why the size has not reduced? Am I missing anything?

    Read the article

  • Slow performance on VMWare Linux server after Tomcat install

    - by Loftx
    We have a VMWare ESXi 4.1 server hosting a number of Linux and Windows guests. Recently a new Linux guest was added to this server and seemed to be performing well. Tomcat and some other applications on this server were then installed which seem to have caused the server to run really slowly without any obvious resource issues. Slow performance include: The time taken to bring up the password prompt over ssh takes a few seconds when it was previously instantaneous. The time taken to unzip a zip file which was previously a few seconds now takes around 30 seconds The time taken to compile vmware tools has increased by similar factors Both the VMWare console and monitoring commands don't report any issues with high CPU or memory usage but something is obviously slowing the server down somehow. Does anyone have any ideas what may be causing this issue and how it can be resolved? Thanks, Tom Edit As per your questions I’ve looked at some of the performance indicators on both the VM host and VM guest indicated. Firstly I tried reserving the full amount of memory (3gb) for this VM – no other machines on this server have any memory reservation. The swap in rate and swap out rate for the VM host and guest are now both zero. Balloon memory on the guest is zero and on the host is 3.5gb (total memory on the host is 12gb) The swap rate for the guest is also zero. Swap used by the host is 200mb on average. Compression and decompression rates for the host and guest are zero. Command aborts for the host are zero. Read latency is very low – maximum 10ms average 0.8ms. Write latency is higher – a few spikes to 170ms but mostly around 25ms – is this bad? Queue command latency is zero . Physical disk read latency averages 5ms but often 10ms Physical disk write latency averages 15ms but is often 20ms I hope this helps - let me know if you need any more information.

    Read the article

  • Powershell Copy-Item fails silently

    - by R W
    I have a PowerShell 2.0 script running on Windows Server 2008 R2 64bit that copies some Hyper-V .vhd files to another server as a 'backup solution'. The script gets a list of the .vhd's to copy then iterates over that list to copy them using Copy-Item. It also writes some logging info to a file. The files are copied to another server (Windows Server 2003 SP2) into a directory compressed with NTFS compression. One of the files isn't copied. It's relatively big, ~68Gb. The others are 20Gb or less. The weird thing is that during the copy process the file appears on the destination server, and the log file generated seems to indicate the file is copied, judging by the difference in the times of the log file entries. I see no error messages in the log file and nothing in the event log of either machine. Here's the code that does the copy. Get-ChildItem $VMSource *.vhd -Recurse | foreach-object { $time = Get-Date -format HH.mm.ss Add-Content $logFileName "$time : File Copy ($_) started" $fullname = $_.FullName Add-Content $logFileName "$time : Copying $fullname to $VMDestination" Copy-Item $fullname $VMDestination -Force -ErrorAction SilentlyContinue -ErrorVariable errors foreach($error in $errors) { if ($error.Exception -ne $null) { Add-Content $logFileName "'tERROR COPYING FILE : $($error.Exception)" } } $time = Get-Date -format HH.mm.ss Add-Content $logFileName "$time : File Copy ($_) finished" } I can only think there's some problem with copying a file that big to a compressed directory, maybe? Any ideas?

    Read the article

  • Is this a File Header / Magic Number?

    - by Hammer Bro.
    I've got 120,000 files (way more, actually; this is just an arbitrary subset) of an unknown type. Linux file does not identify them (not that they're necessarily Linux files), nor do any other methods I've tried. There are only two hints about them that I currently have. One is that I suspect some compression is employed -- I have metadata that claims the file sizes are always some amount larger than what I observe. The other is that in 100,000 of these files, the first 16 bytes are always: ff ee ee dd 00 00 00 00 01 00 00 00 00 00 00 00 That really looks like a file header/magic number to me, but I just can't place it. Does anyone know what kind of files this would indicate? Alternatively, can anyone convince me that these suspiciously common bytes certainly do not indicate a specific file type? UPDATE I don't know the exact reverse-engineering details, but most of the files in our case are zips after the first 29(? or so) bytes are ignored. So in practice the problem is solved (we know how to process the files) but in theory the question is still unanswered -- I don't know which application routinely prepends about 29 bytes to its zips. [I'm not sure if I should leave the question open or not at this point.]
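
    A quick sanity check based on the update above: strip the suspected prefix and see whether file(1) then recognises a zip. The 29-byte offset is taken from the question and the file names are placeholders:

        # tail -c +30 starts output at byte 30, i.e. it skips the first 29 bytes
        for f in sample1 sample2; do
            tail -c +30 "$f" | file -
        done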

    Read the article

  • Slow network file transfer (under 20KB/s) on newly built x64 Win7

    - by Mangoshake
    I am getting <20KB/s for local network file transfer. If I transfer a very small file (less than 100KB) it starts quickly then slows down to <20KB/s. All subsequent network file transfers are then slow; a reboot is needed to reset this. If I transfer a large file it is stuck on calculating for a long time and then begins at <20KB/s immediately. This is a newly built desktop running Windows 7 x64 SP1. Realtek gigabit LAN from the motherboard (ASRock Extreme3 gen3). The problematic speed is observed on the private LAN, both through ethernet and WiFi. The router is a D-Link DIR-655. Remote Differential Compression is off. Drivers are up-to-date from ASRock's website. I have tested network file transfer to and from another Windows 7 laptop and a MacBook Pro, so I am fairly certain it is the desktop's problem. The slow speed also only happens in one direction, outbound from the desktop, regardless of whether I initiate the file transfer action from the origin or the destination. Inbound network file transfer and internet speeds are fine, so I don't think this is a hardware issue. I am getting 74.8MB/s internet upload speed from speedtest.net (http://www.speedtest.net/result/1852752479.png). For inbound network file transfer I can get around 10-15MB/s. I am hoping this community has some insight for me to troubleshoot this. I don't see anything obviously related in the Event Viewer, and beyond that I just don't know where else to look. Any suggestions are greatly appreciated, thank you in advance.

    Read the article

  • performance wise htaccess

    - by purpler
    Here's my htaccess template; I wonder if anything could be added to increase website performance. # Defaults AddDefaultCharset UTF-8 DefaultLanguage en-US ServerSignature Off FileETag None Header unset ETag Options -MultiViews #Options All -Indexes # Force the latest IE version or ChromeFrame <IfModule mod_setenvif.c> <IfModule mod_headers.c> BrowserMatch MSIE ie Header set X-UA-Compatible "IE=Edge,chrome=1" env=ie </IfModule> </IfModule> # Proxy X-UA Setup <IfModule mod_headers.c> Header append Vary User-Agent </IfModule> #Rewrites Options +FollowSymlinks RewriteEngine On RewriteBase / # Redirect to non-WWW RewriteCond %{HTTPS} !=on RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC] RewriteRule ^(.*)$ http://%1/$1 [R=301,L] # Redirect to WWW RewriteCond %{HTTP_HOST} ^domain.com RewriteRule (.*) http://www.domain.com/$1 [R=301,L] # Redirect index to root RewriteRule ^(.*)index\.(php|html)$ /$1 [R=301,L] # Caching ExpiresActive On ExpiresDefault A0 Header set Cache-Control "public" # 1 Year Long Cache <FilesMatch "\.(flv|fla|ico|pdf|avi|mov|ppt|doc|mp3|wmv|wav|png|jpg|jpeg|gif|swf|js|css|ttf|eot|woff|svg|svgz)$"> ExpiresDefault A31622400 </FilesMatch> # Proxy Caching <FilesMatch "\.(css|js|png)$"> ExpiresDefault A31622400 Header set Cache-Control "private" </FilesMatch> # Protect against DOS attacks by limiting file upload size LimitRequestBody 10240000 # Proper SVG serving AddType image/svg+xml svg svgz AddEncoding gzip svgz # GZip Compression <IfModule mod_deflate.c> <FilesMatch "\.(php|html|css|js|xml|txt|ttf|otf|eot|svg)$" > SetOutputFilter DEFLATE </FilesMatch> </IfModule> # Error page ErrorDocument 404 /404.html # Deny access to sensitive files <FilesMatch "\.(htaccess|ini|log|psd)$"> Order Allow,Deny Deny from all </FilesMatch>

    Read the article
