Search Results

Search found 2696 results on 108 pages for 'compression formats'.


  • mplayer (mplayerhq.hu) repeats ending audio frames

    - by kamikatze
    mplayer (from mplayerhq.hu) on windows repeats the last few audio frames upon exit. When the video ends, before you can see Exiting... (End of file) in the command prompt, you will hear the last 1/2 second or so of the audio track again. This behavior is the same for multiple containers/codecs/soundcards Vista or Windows 7. Is there a workaround for this? My playback specs: MPlayer Sherpya-MT-SVN-r31027-4.2.5 (C) 2000-2010 MPlayer Team 150 audio & 343 video codecs Playing splash_final.wmv. ASF file format detected. [asfheader] Audio stream found, -aid 1 [asfheader] Video stream found, -vid 2 VIDEO: [WMV3] 1280x720 24bpp 1000.000 fps 6291.5 kbps (768.0 kbyte/s) ========================================================================== Opening video decoder: [dmo] DMO video codecs DMO dll supports VO Optimizations 0 1 DMO dll might use previous sample when requested Decoder supports the following formats: YV12 YUY2 UYVY YVYU RGB8 [..] Decoder is capable of YUV output (flags 0x1b) Movie-Aspect is undefined - no prescaling applied. VO: [directx] 1280x720 = 1280x720 Planar YV12 Selected video codec: [wmv9dmo] vfm: dmo (Windows Media Video 9 DMO) ========================================================================== ========================================================================== Opening audio decoder: [ffmpeg] FFmpeg/libavcodec audio decoders AUDIO: 44100 Hz, 2 ch, s16le, 329.8 kbit/23.37% (ratio: 41221-176400) Selected audio codec: [ffwmav2] afm: ffmpeg (DivX audio v2 (FFmpeg)) ========================================================================== AO: [dsound] 44100Hz 2ch s16le (2 bytes per sample) Starting playback...

    Read the article

  • gitolite on mac doesn't add new user to authorized_keys

    - by crashbus
    I installed gitolite and everything works fine for me as admin. But when I add a new user, the new user can't connect to the server. After I looked into the file authorized_keys I saw that the new user wasn't added to the file. During the commit of the new public key I get some warnings: WARNING: split conf not set, gl-conf present for 'gitolite-admin' Counting objects: 6, done. Delta compression using up to 8 threads. Compressing objects: 100% (4/4), done. Writing objects: 100% (4/4), 882 bytes, done. Total 4 (delta 1), reused 0 (delta 0) remote: WARNING: split conf not set, gl-conf present for 'gitolite-admin' remote: WARNING: ?? @staff christianwaldmann markwelch remote: sh: find: command not found remote: sh: find: command not found remote: sh: sort: command not found remote: sh: find: command not found remote: /usr/local/bin/triggers/post-compile/update-gitweb-access-list: line 26: cut: command not found remote: /usr/local/bin/triggers/post-compile/update-gitweb-access-list: line 23: grep: command not found remote: /usr/local/bin/triggers/post-compile/update-gitweb-access-list: line 26: sort: command not found remote: /usr/local/bin/triggers/post-compile/update-gitweb-access-list: line 26: sed: command not found remote: sh: find: command not found remote: sh: find: command not found How can I fix this so that gitolite automatically adds the new user to authorized_keys?

    Read the article

  • Free software for backing up an attached network drive

    - by Richard
    My wireless router comes with a USB connector which allows me to plug an external hard drive in and it'll act as Network Attached Storage. The problem is that I want to back up this hard drive to the external drive of another computer so that if the NAS drive fails, I don't lose everything. However, Windows 7 Backup refuses to include the NAS as a location to back up. I can't fool it by mapping it to a drive letter either. Google presents lots of pages on how to back up files to a NAS, but not the other way around. Can anyone advise me on free software which can do incremental backups of a NAS drive to an external drive attached to the computer it is running on? I'm aware of this question but the top answers have one or more of the following issues: They aren't free. The free version cannot back up a NAS. They cannot do incremental backups. They're just a script and therefore have limited other functionality (e.g. disk space management, scheduling, compression, etc.)

    Read the article

  • Slow Local Network, Windows 7, Snow Leopard, WiFi/Wired

    - by WerkkreW
    Hello - I am experiencing really poor local network performance in my home. I was recently using a Linksys WRT54G Router with DD-WRT on it, and a couple of comparable Linksys-G PCI cards for connectivity, but decided to upgrade hoping it would help with my performance issues. The computers in my house are connected as follows: Comcast Business Class Commercial 25mbps/10mbps (Verified with SpeakEasy and Speedtest.net) D-Link DGL-4500 Wireless N Router Windows 7x64 - D-Link DWA-552 Wireless-N Windows 7x64 - D-Link DWA-552 Wireless-N Mac Mini 10.6.2 - AirPort Extreme N Playstation 3, Hard Wired Xbox 360, Hard Wired Essentially the problem is very specific. Web browsing and uploading/downloading files from the internet is fine, more than fine. But if I want to, say, stream a video from one of my Windows 7 computers to my PS3, or copy a large video file between either of the PCs or the Mac, I get a consistent 500-900Kbps throughput at the high end. If I open my network browser, or try to browse my homegroup, the response time is horrible. Both of my Windows computers are showing strong wireless signals with a connection speed of 300Mbps. I know I can never expect to achieve anything near those speeds, but 500Kbps? Here is what I have tried so far: Enabled Single mode N-only and N/G Only on router WPA2 with AES Encryption Disabled "Remote Differential Compression" in Windows 7 Disabled TCP "Auto-Tuning" Used other software for file copies such as "Teracopy" I am at the end of my rope. Unfortunately I live in a 75 year old home with plaster walls, so hard-wiring my entire house isn't really an option I can handle right now. Any ideas to help me get decent speed when transferring files across my network would be greatly appreciated.

    Read the article

  • Winamp playing sound but no video

    - by Greg Sansom
    I am having problems playing video in Winamp (the movie I am trying to play is an AVI - not sure if other formats work). I have installed the K-Lite Codec Pack, and the video does work in Winamp Classic. I can also play the video in Winamp on another machine (although I can't remember the exact configuration details of that machine - and I don't think they're relevant). There are a few symptoms: The content of the Video view is either empty, transparent, or displays rendering from other programs. Opening the Visualization view shows the following error: MILKDROP ERROR DirectX initialization failed (GetDeviceCaps). This means that no valid 3D-accelerated display adapter could be found on your computer. If you know this is not the case, it is possible that your graphics subsystem is unstable; please try rebooting your computer and then try to run the plugin again. Otherwise, please install a 3D-accelerated display adapter. Trying to open streams via the SHOUTCast TV plugin shows Error opening video output, and the video does not open. Opening the file with WMC causes the following error (although the movie still plays): Error creating DX9 allocation presenter CreateDevice failed D3DERR_NOTAVAILABLE There are no warnings displayed in Device Manager, although the display adapter is the standard Windows one. Running DxDiag shows no problems (codec for Video listed as XviD 1.1.2 Final). GSpot reports that codecs are installed. System specs: - Windows Server 2008 r2 Standard 64-bit, with latest updates; - .NET 3.5.1 installed; - Winamp v5.6.01 (latest version); - DirectX 11 (Latest version); - K-Lite Codec Pack 7.0.0 (Full); - Machine is HP DC7600 - full specs here. Please comment if there is any more information which will help to diagnose the problem.

    Read the article

  • rsnapshot - not correctly archiving mysql databases

    - by Tiffany Walker
    My rsnapshot configuration: snapshot_root /.snapshots/ backup /home/user localhost/ backup_script /usr/local/backup_mysql.sh localhost/mysql/ Using this file: NOW=$(date +"%m-%d-%Y") # mm-dd-yyyy format FILE="" # used in a loop ### Server Setup ### #* MySQL login user name *# MUSER="root" #* MySQL login PASSWORD name *# MPASS="YOUR-PASSWORD" #* MySQL login HOST name *# MHOST="127.0.0.1" #* MySQL binaries *# MYSQL="$(which mysql)" MYSQLDUMP="$(which mysqldump)" GZIP="$(which gzip)" # get all database listing DBS="$($MYSQL -u $MUSER -h $MHOST -p$MPASS -Bse 'show databases')" # start to dump database one by one for db in $DBS do FILE=$BAK/mysql-$db.$NOW-$(date +"%T").gz # gzip compression for each backup file $MYSQLDUMP --single-transaction -u $MUSER -h $MHOST -p$MPASS $db | $GZIP -9 > $FILE done It dumps the databases under / I then tried with the following: http://bash.cyberciti.biz/backup/rsnapshot-remote-mysql-backup-shell-script/ I got: rsnapshot hourly ---------------------------------------------------------------------------- rsnapshot encountered an error! The program was invoked with these options: /usr/bin/rsnapshot hourly ---------------------------------------------------------------------------- ERROR: backup_script /usr/local/backup_mysql.sh returned 1 WARNING: Rolling back "localhost/mysql/" ls -la /.snapshots/hourly.0/localhost/mysql total 8 drwxr-xr-x 2 root root 4096 Nov 23 17:43 ./ drwxr-xr-x 4 root root 4096 Nov 23 18:20 ../ What exactly am I doing wrong? EDIT: # /usr/local/backup_mysql.sh *** Dumping MySQL Database *** Database> information_schema..cphulkd..eximstats..horde..leechprotect..logaholicDB_ns1..modsec..mysql..performance_schema..roundcube..test.. *** Backup done [ files wrote to /.snapshots/tmp/mysql] *** root@ns1 [~]# ls -la /.snapshots/tmp/mysql total 8040 drwxr-xr-x 2 root root 4096 Nov 23 18:41 ./ drwxr-xr-x 3 root root 4096 Nov 23 18:41 ../ -rw-r--r-- 1 root root 1409 Nov 23 18:41 cphulkd.18_41_45pm.gz -rw-r--r-- 1 root root 113522 Nov 23 18:41 eximstats.18_41_45pm.gz -rw-r--r-- 1 root root 4583 Nov 23 18:41 horde.18_41_45pm.gz -rw-r--r-- 1 root root 71757 Nov 23 18:41 information_schema.18_41_45pm.gz -rw-r--r-- 1 root root 692 Nov 23 18:41 leechprotect.18_41_45pm.gz -rw-r--r-- 1 root root 2603 Nov 23 18:41 logaholicDB_ns1.18_41_45pm.gz -rw-r--r-- 1 root root 745 Nov 23 18:41 modsec.18_41_45pm.gz -rw-r--r-- 1 root root 138928 Nov 23 18:41 mysql.18_41_45pm.gz -rw-r--r-- 1 root root 1831 Nov 23 18:41 performance_schema.18_41_45pm.gz -rw-r--r-- 1 root root 3610 Nov 23 18:41 roundcube.18_41_45pm.gz -rw-r--r-- 1 root root 436 Nov 23 18:41 test.18_41_47pm.gz MySQL Backup seems fine.
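
    Two things stand out in the script quoted above, offered only as guesses since just fragments are shown: the dump loop writes to $BAK/mysql-$db..., but BAK is never assigned, which matches "It dumps the databases under /"; and rsnapshot reports "backup_script ... returned 1" and rolls the snapshot back, which it does whenever the script exits non-zero. A minimal sketch with those two points addressed (the destination path and credentials are placeholders, not taken from a working setup):

        #!/bin/bash
        BAK="/.snapshots/tmp/mysql"     # hypothetical destination; the original script never sets this
        mkdir -p "$BAK" || exit 1

        NOW=$(date +"%m-%d-%Y")
        MUSER="root"; MPASS="YOUR-PASSWORD"; MHOST="127.0.0.1"
        MYSQL="$(which mysql)"; MYSQLDUMP="$(which mysqldump)"; GZIP="$(which gzip)"

        DBS="$($MYSQL -u $MUSER -h $MHOST -p$MPASS -Bse 'show databases')"
        for db in $DBS; do
            FILE="$BAK/mysql-$db.$NOW-$(date +"%T").gz"
            $MYSQLDUMP --single-transaction -u $MUSER -h $MHOST -p$MPASS "$db" | $GZIP -9 > "$FILE" || exit 1
        done
        exit 0   # rsnapshot rolls back "localhost/mysql/" whenever this script exits non-zero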

    Read the article

  • Five stars of open data - example and review

    - by Joe
    (there may be a more suitable SE site for this question so feel free to shift) I have some data I'd like to make open to the public - it's a synthesis of some related data retrieved from freedom of information requests over the last year. The data itself is at http://www.cs.rhul.ac.uk/home/joseph/domesday/Domesday-Scotland.csv or, for fans of Excel, at http://www.cs.rhul.ac.uk/home/joseph/domesday/Domesday-Scotland.xlsx . It's no more than a table with about five columns. I'd like to make this properly open data, so I was looking at the 5 star deployment scheme for Open Data. Much of which is fine, but I'm confused towards the end and I could do with an explanation from people who know the answers. So to achieve the star levels I need: "make your stuff available on the Web (whatever format) under an open license" trivial - all I have to do is put the notes up on the page that will give the provenance of the data. "make it available as structured data (e.g., Excel instead of image scan of a table)"… done… "use non-proprietary formats (e.g., CSV instead of Excel)" - done… "use URIs to identify things, so that people can point at your stuff" - this is where I start to get a bit hazy - does this mean there should be a URI for every line in the table? "link your data to other data to provide context" - this isn't massively clear to me - does this mean to give the provenance of the data? One column of the data I've put out is a link to where the data came from - is that the sort of thing we're looking at? Any and all information and answers welcome… EDIT - or if anyone wants to recommend a place, SE or otherwise, to ask the question - that would be cool...

    Read the article

  • Is there a tool that can test what SSL/TLS cipher suites a particular website offers?

    - by Jeremy Powell
    Is there a tool that can test what SSL/TLS cipher suites a particular website offers? I've tried openssl, but if you examine the output: $ echo -n | openssl s_client -connect www.google.com:443 CONNECTED(00000003) depth=1 /C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA verify error:num=20:unable to get local issuer certificate verify return:0 --- Certificate chain 0 s:/C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com i:/C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA 1 s:/C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA i:/C=US/O=VeriSign, Inc./OU=Class 3 Public Primary Certification Authority --- Server certificate -----BEGIN CERTIFICATE----- MIIDITCCAoqgAwIBAgIQL9+89q6RUm0PmqPfQDQ+mjANBgkqhkiG9w0BAQUFADBM MQswCQYDVQQGEwJaQTElMCMGA1UEChMcVGhhd3RlIENvbnN1bHRpbmcgKFB0eSkg THRkLjEWMBQGA1UEAxMNVGhhd3RlIFNHQyBDQTAeFw0wOTEyMTgwMDAwMDBaFw0x MTEyMTgyMzU5NTlaMGgxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpDYWxpZm9ybmlh MRYwFAYDVQQHFA1Nb3VudGFpbiBWaWV3MRMwEQYDVQQKFApHb29nbGUgSW5jMRcw FQYDVQQDFA53d3cuZ29vZ2xlLmNvbTCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkC gYEA6PmGD5D6htffvXImttdEAoN4c9kCKO+IRTn7EOh8rqk41XXGOOsKFQebg+jN gtXj9xVoRaELGYW84u+E593y17iYwqG7tcFR39SDAqc9BkJb4SLD3muFXxzW2k6L 05vuuWciKh0R73mkszeK9P4Y/bz5RiNQl/Os/CRGK1w7t0UCAwEAAaOB5zCB5DAM BgNVHRMBAf8EAjAAMDYGA1UdHwQvMC0wK6ApoCeGJWh0dHA6Ly9jcmwudGhhd3Rl LmNvbS9UaGF3dGVTR0NDQS5jcmwwKAYDVR0lBCEwHwYIKwYBBQUHAwEGCCsGAQUF BwMCBglghkgBhvhCBAEwcgYIKwYBBQUHAQEEZjBkMCIGCCsGAQUFBzABhhZodHRw Oi8vb2NzcC50aGF3dGUuY29tMD4GCCsGAQUFBzAChjJodHRwOi8vd3d3LnRoYXd0 ZS5jb20vcmVwb3NpdG9yeS9UaGF3dGVfU0dDX0NBLmNydDANBgkqhkiG9w0BAQUF AAOBgQCfQ89bxFApsb/isJr/aiEdLRLDLE5a+RLizrmCUi3nHX4adpaQedEkUjh5 u2ONgJd8IyAPkU0Wueru9G2Jysa9zCRo1kNbzipYvzwY4OA8Ys+WAi0oR1A04Se6 z5nRUP8pJcA2NhUzUnC+MY+f6H/nEQyNv4SgQhqAibAxWEEHXw== -----END CERTIFICATE----- subject=/C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com issuer=/C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA --- No client certificate CA names sent --- SSL handshake has read 1777 bytes and written 316 bytes --- New, TLSv1/SSLv3, Cipher is AES256-SHA Server public key is 1024 bit Compression: NONE Expansion: NONE SSL-Session: Protocol : TLSv1 Cipher : AES256-SHA Session-ID: 748E2B5FEFF9EA065DA2F04A06FBF456502F3E64DF1B4FF054F54817C473270C Session-ID-ctx: Master-Key: C4284AE7D76421F782A822B3780FA9677A726A25E1258160CA30D346D65C5F4049DA3D10A41F3FA4816DD9606197FAE5 Key-Arg : None Start Time: 1266259321 Timeout : 300 (sec) Verify return code: 20 (unable to get local issuer certificate) --- it just shows that the cipher suite is something with AES256-SHA. I know I could grep through the hex dump of the conversation, but I was hoping for something a little more elegant. I would prefer Linux tools, but Windows (or other) would be fine. This question is motivated by the security testing I do for PCI and general penetration testing. Update: GregS points out below that the SSL server picks from the cipher suites of the client. So it seems I would need to test all cipher suites one at a time. I think I can hack something together, but is there a tool that does particularly this?
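
    For the "test all cipher suites one at a time" idea in the update, a rough sketch that loops openssl's own cipher list against the server; host and port are placeholders, and openssl's cipher names don't always match the names a dedicated scanner would report:

        #!/bin/bash
        HOST=www.google.com
        PORT=443
        for cipher in $(openssl ciphers 'ALL:eNULL' | tr ':' ' '); do
            if echo -n | openssl s_client -connect "$HOST:$PORT" -cipher "$cipher" >/dev/null 2>&1; then
                echo "ACCEPTED  $cipher"
            else
                echo "rejected  $cipher"
            fi
        done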

    Read the article

  • Codec Problems with trying to edit videos with VirtualDub

    - by Roy Rico
    So, I'm a little frustrated. According to this post and various other internet sources, VirtualDub is supposed to allow users to quickly split and join video files. I am using Windows 7 64-bit and the latest version of VirtualDub (64-bit). I have tried to edit various movie files, and each attempt I have made has not worked for me. AVI file A.avi won't load, saying that it can't locate the decompressor for the "FMP4" format. I have tried this solution and this one, and neither of them work. I have tried setting the VFW Decompressor for 'Other MPEG4' setting to XVID or LIBAVCODEC. There is no change in VirtualDub. AVI file C.avi will load in VirtualDub, but any attempt to split it gives me an error that I don't have XVID codecs installed. I've attempted to install the proper codecs (Shark's Windows 7 Codecs, CCCP) with no change. AVI file C.avi will load, and it will split, but won't split using the "Direct Stream Copy" option, claiming the compression algorithm is incompatible. I tried the "Fast Recompress" option and it created a 27GB file out of what was supposed to be about a 300-400MB file. Can someone please give me some insight into what I'm messing up?

    Read the article

  • How to move your Windows User Profile to another drive in Windows 8

    - by Mark
    I like to have my user folder on a different drive (D:) than my OS is (C:). Reading the following post I decided to give it a try. All went quite well, untill I found out that my Windows 8 Apps won't execute anymore (other than that I didn't noticed any problems). My apps do work, while using an account that isn't moved. In the eventviewer I've found error messages like these: App <Microsoft.MicrosoftSkyDrive> crashed with an unhandled Javascript exception. App details are as follows: Display Name:<SkyDrive>, AppUserModelId: <microsoft.microsoftskydrive_8wekyb3d8bbwe!Microsoft.MicrosoftSkyDrive> Package Identity:<microsoft.microsoftskydrive_16.4.4204.712_x64__8wekyb3d8bbwe> PID:<4452>. The details of the JavaScript exception are as follows Exception Name:<WinRT error>, Description:<Loading the state store failed. > , HTML Document Path:</modernskydrive/product/skydrive/App.html>, Source File Name:<ms-appx://microsoft.microsoftskydrive/jx/jx.js>, Source Line Number:<1>, Source Column Number:<27246>, and Stack Trace: ms-appx://microsoft.microsoftskydrive/jx/jx.js:1:27246 localSettings() ms-appx://microsoft.microsoftskydrive/jx/jx.js:1:51544 _initSettings() ms-appx://microsoft.microsoftskydrive/jx/jx.js:1:54710 getApplicationStatus(boolean) ms-appx://microsoft.microsoftskydrive/jx/jx.js:1:48180 init(object) ms-appx://microsoft.microsoftskydrive/jx/jx.js:1:45583 Application(number, boolean) ms-appx://microsoft.microsoftskydrive/modernskydrive/product/skydrive/App.html:216:13 Anonymous function(object) Using ProcMon, I see a lot of access denied messages, like these: Date & Time: 12-9-2012 9:32:20 Event Class: File System Operation: CreateFile Result: ACCESS DENIED Path: D:\Users\John\AppData\Local\Packages\microsoft.microsoftskydrive_8wekyb3d8bbwe\Settings\settings.dat TID: 2520 Duration: 0.0000149 Desired Access: Read Data/List Directory, Write Data/Add File, Read Control Disposition: OpenIf Options: Sequential Access, Synchronous IO Non-Alert, No Compression Attributes: N ShareMode: None AllocationSize: 0 Any idea how to solve this? I noticed that the app folders e.g.: D:\Users\john\AppData\Local\Packages\microsoft.microsoftskydrive_8wekyb3d8bbwe had a different owner than the old profile folder had. Old profile folder had john as owner where my new profile folder had the Administrators group as owner. Changing this didn't help unfortunately.

    Read the article

  • Network connection keeps dropping - bad hardware?

    - by Bill Sambrone
    Hello all, I've run into a bit of a wall with a client of mine. In an office of 20 people, he is the only one who experiences broken connections to his mapped network drives. I have everyone set up with about 6 mapped drives, all pointing to the same server (no DFS), and everyone else can access them lightning fast. The environment consists of a mix of Windows 7 and XP machines, all 32-bit. The server holding the data everyone is mapping to is running on Server 2008 R2, and is a domain controller. We recently swapped out their old 10/100 switch for a shiny new Dell PowerConnect gigabit switch. We have also replaced an old dying Sonicwall with a shiny new one. Everything is running on an ESX host except for the DC, where everyone is getting data from. In my client's office, we have done the following: Swapped out his computer (Win7 and XP box) Swapped out the desktop switch in his office Removed the desktop switch in his office Changed out the network cable going to the wall Ran 'net config server /autodisconnect:-1' on the server Disabled remote differential compression on his current Win7 box When we swapped out his network cable, everything seemed fine for about 4 days. Normally I would get a phone call a couple of times per day letting me know that Outlook has crashed (there is a 9GB PST living on the server he is always connected to), or that the software he is running from his L drive has crashed. I almost thought I had this solved, but after we rebooted the DC the other night he all of a sudden couldn't stay connected to his mapped network drives for more than 10 minutes. When I ran 'net use' from the command prompt, it listed all the network drives, which were randomly in a state of 'OK', 'Disconnected', or 'Reconnecting'. What else should I try? Maybe there is bad wiring in the wall, patch panel, or a bad port in the new switch I have in the server room?

    Read the article

  • Importing long numerical identifiers into Excel

    - by Niels Basjes
    I have some data in a database that uses IDs that have the form of 16-digit numbers. In some situations I need to export the data in such a way that it can be manipulated in Excel. So I export the data into a file and import it into Excel. I've tried several file formats and I'm stuck. The problem I'm facing is that when reading a file into Excel that has a cell that looks like a number, Excel treats it as a number. The catch is that (as far as I can tell) all numerical values in Excel are double precision floating point, which has a precision of less than 16 digits. So my IDs are changed: very often the last digit is changed to a 0. So far I've only been able to convince Excel to keep the ID unchanged by breaking it myself: by adding a letter or symbol to the ID. This however means that in order to use the value again it must be "unbroken". Is there a way to create a file where I can specify that Excel must treat the value as text without changing the value? Or is there a way to let Excel treat the value as a long (64-bit integer)?
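
    One common workaround, sketched under the assumption that the IDs sit in the first CSV column: wrap each ID as ="…" before import, which Excel evaluates as a formula whose result is text, so no digits are lost. The file names and column position are illustrative:

        # Turn 1234567890123456 into ="1234567890123456" so Excel keeps it as text.
        awk -F, -v OFS=, '{ $1 = "=\"" $1 "\"" } 1' export.csv > excel-safe.csv

    Alternatively, renaming the file to .txt and opening it in Excel brings up the Text Import Wizard, where the ID column can be marked as Text.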

    Read the article

  • Converting PDF eBooks into a Kindle format

    - by Ender
    Over the past couple of years I've amassed quite a collection of guides, tutorials and ebooks in PDF format. A lot of these are quite useful for work, especially PDF documentation, and rather than have to be at a computer every time I want to read how to do something in Sitecore or to read through a software testing ebook, I'd like to do it on my brand-spanking-new Kindle. However, even though there is now a native PDF reader on the Kindle, due to the nature of PDFs they are practically unreadable. The text doesn't wrap due to how PDFs are sized, and so far after a bunch of Google searches I've yet to find a viable solution to get my PDFs converted into a readable Kindle format. Sometimes these books have code or pictures/tables in them, but most of the time they're text-heavy, and to be honest I'd be surprised if there wasn't a free tool to handle converting PDFs to one of the (seemingly many) Kindle formats. So, can anyone help me out with this? EDIT: I've tried Calibre, and have checked their forums to play with some of the advanced settings, yet the solutions available seem to be extremely poor, especially if the book you're attempting to read contains equations, code, or anything outside of plain text. I've also tried Amazon's conversion service, which wasn't much help with such documents. The best way I have found so far is to build the entire thing over again in ePub or RTF format and convert to MOBI from there. This works for text-heavy books with tables, but anything technical still isn't covered. Can anyone help with this?
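
    Since Calibre has already been tried through its GUI, one thing possibly still worth a look is its command-line converter, which makes it easier to batch-test different settings; a sketch, with file names as placeholders and no promises about equation- or code-heavy PDFs:

        # Convert every PDF in the folder to MOBI with calibre's CLI.
        for f in *.pdf; do
            ebook-convert "$f" "${f%.pdf}.mobi"
        done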

    Read the article

  • JBoss https on port other than 8080 not working

    - by MilindaD
    We have a server with two JBoss instances where one runs on 8080, the other on 8081. We need to have HTTPS enabled for the 8081 server, firstly we tried enabling https on the 8080 port instance by generating the keystore and editing the server.xml and it successfully worked. However when we tried the same thing for 8081 it did not, note that we removed https for the 8080 server first before enabling it for 8081. This is what was used for both server.xml for 8080 and 8081. The only difference was that the port was changed from 8080 to 8081 when trying to enable https for 8081 port instance. What am I doing wrong and what needs to be changed? NOTE : When I meant enabled for 8080 I meant when you visit https:// URL:8484 you will actually be visiting the 8080 port instance. However when ssl is enabled for 8081 and I visit https:// URL:8484 I get that the web page is unavailable. COMMENTLESS VERSION <Server> <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" /> <Listener className="org.apache.catalina.core.JasperListener" /> <Service name="jboss.web"> <!-- https --> <Connector port="8080" address="${jboss.bind.address}" maxThreads="350" maxHttpHeaderSize="8192" emptySessionPath="true" protocol="HTTP/1.1" enableLookups="false" redirectPort="8443" acceptCount="100" connectionTimeout="20000" disableUploadTimeout="true" compression="on" ompressableMimeType="text/html,text/css,text/javascript,application/json,text/xml,text/plain,application/x-javascript,application/javascript"/> <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true" maxThreads="150" scheme="https" secure="true" clientAuth="false" sslProtocol="TLS" address="${jboss.bind.address}" keystoreFile="${jboss.server.home.dir}/conf/supun1.keystore" keystorePass="aaaaaa" truststoreFile="${jboss.server.home.dir}/conf/supun1.keystore" truststorePass="aaaaaa" /> <!-- https1 --> <Connector port="8009" address="${jboss.bind.address}" protocol="AJP/1.3" emptySessionPath="true" enableLookups="false" redirectPort="8443" /> <Engine name="jboss.web" defaultHost="localhost" jvmRoute="khms1"> <Realm className="org.jboss.web.tomcat.security.JBossSecurityMgrRealm" certificatePrincipal="org.jboss.security.auth.certs.SubjectDNMapping" allRolesMode="authOnly" /> <Host name="localhost" autoDeploy="false" deployOnStartup="false" deployXML="false" configClass="org.jboss.web.tomcat.security.config.JBossContextConfig" > <Valve className="org.jboss.web.tomcat.service.sso.ClusteredSingleSignOn" /> <Valve className="org.jboss.web.tomcat.service.jca.CachedConnectionValve" cachedConnectionManagerObjectName="jboss.jca:service=CachedConnectionManager" transactionManagerObjectName="jboss:service=TransactionManager" /> </Host> </Engine> </Service> </Server> WITH COMMENTS VERSION <Server> <!--APR library loader. Documentation at /docs/apr.html --> <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" /> <!--Initialize Jasper prior to webapps are loaded. Documentation at /docs/jasper-howto.html --> <Listener className="org.apache.catalina.core.JasperListener" /> <!-- Use a custom version of StandardService that allows the connectors to be started independent of the normal lifecycle start to allow web apps to be deployed before starting the connectors. --> <Service name="jboss.web"> <!-- A "Connector" represents an endpoint by which requests are received and responses are returned. 
Documentation at : Java HTTP Connector: /docs/config/http.html (blocking & non-blocking) Java AJP Connector: /docs/config/ajp.html APR (HTTP/AJP) Connector: /docs/apr.html Define a non-SSL HTTP/1.1 Connector on port 8080 --> <Connector port="8080" address="${jboss.bind.address}" maxThreads="350" maxHttpHeaderSize="8192" emptySessionPath="true" protocol="HTTP/1.1" enableLookups="false" redirectPort="8443" acceptCount="100" connectionTimeout="20000" disableUploadTimeout="true" compression="on" ompressableMimeType="text/html,text/css,text/javascript,application/json,text/xml,text/plain,application/x-javascript,application/javascript"/> <!-- Define a SSL HTTP/1.1 Connector on port 8443 This connector uses the JSSE configuration, when using APR, the connector should be using the OpenSSL style configuration described in the APR documentation --> <!-- <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true" maxThreads="150" scheme="https" secure="true" keystoreFile="${jboss.server.home.dir}/conf/zara.keystore" keystorePass="zara2010" clientAuth="false" sslProtocol="TLS" compression="on" /> --> <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true" maxThreads="150" scheme="https" secure="true" clientAuth="false" sslProtocol="TLS" address="${jboss.bind.address}" keystoreFile="${jboss.server.home.dir}/conf/supun1.keystore" keystorePass="aaaaaa" truststoreFile="${jboss.server.home.dir}/conf/supun1.keystore" truststorePass="aaaaaa" /> <!-- Define an AJP 1.3 Connector on port 8009 --> <Connector port="8009" address="${jboss.bind.address}" protocol="AJP/1.3" emptySessionPath="true" enableLookups="false" redirectPort="8443" /> <Engine name="jboss.web" defaultHost="localhost" jvmRoute="khms1"> <!-- The JAAS based authentication and authorization realm implementation that is compatible with the jboss 3.2.x realm implementation. - certificatePrincipal : the class name of the org.jboss.security.auth.certs.CertificatePrincipal impl used for mapping X509[] cert chains to a Princpal. - allRolesMode : how to handle an auth-constraint with a role-name=*, one of strict, authOnly, strictAuthOnly + strict = Use the strict servlet spec interpretation which requires that the user have one of the web-app/security-role/role-name + authOnly = Allow any authenticated user + strictAuthOnly = Allow any authenticated user only if there are no web-app/security-roles --> <Realm className="org.jboss.web.tomcat.security.JBossSecurityMgrRealm" certificatePrincipal="org.jboss.security.auth.certs.SubjectDNMapping" allRolesMode="authOnly" /> <!-- A subclass of JBossSecurityMgrRealm that uses the authentication behavior of JBossSecurityMgrRealm, but overrides the authorization checks to use JACC permissions with the current java.security.Policy to determine authorized access. - allRolesMode : how to handle an auth-constraint with a role-name=*, one of strict, authOnly, strictAuthOnly + strict = Use the strict servlet spec interpretation which requires that the user have one of the web-app/security-role/role-name + authOnly = Allow any authenticated user + strictAuthOnly = Allow any authenticated user only if there are no web-app/security-roles <Realm className="org.jboss.web.tomcat.security.JaccAuthorizationRealm" certificatePrincipal="org.jboss.security.auth.certs.SubjectDNMapping" allRolesMode="authOnly" /> --> <Host name="localhost" autoDeploy="false" deployOnStartup="false" deployXML="false" configClass="org.jboss.web.tomcat.security.config.JBossContextConfig" > <!-- Uncomment to enable request dumper. 
This Valve "logs interesting contents from the specified Request (before processing) and the corresponding Response (after processing). It is especially useful in debugging problems related to headers and cookies." --> <!-- <Valve className="org.apache.catalina.valves.RequestDumperValve" /> --> <!-- Access logger --> <!-- <Valve className="org.apache.catalina.valves.AccessLogValve" prefix="localhost_access_log." suffix=".log" pattern="common" directory="${jboss.server.log.dir}" resolveHosts="false" /> --> <!-- Uncomment to enable single sign-on across web apps deployed to this host. Does not provide SSO across a cluster. If this valve is used, do not use the JBoss ClusteredSingleSignOn valve shown below. A new configuration attribute is available beginning with release 4.0.4: cookieDomain configures the domain to which the SSO cookie will be scoped (i.e. the set of hosts to which the cookie will be presented). By default the cookie is scoped to "/", meaning the host that presented it. Set cookieDomain to a wider domain (e.g. "xyz.com") to allow an SSO to span more than one hostname. --> <!-- <Valve className="org.apache.catalina.authenticator.SingleSignOn" /> --> <!-- Uncomment to enable single sign-on across web apps deployed to this host AND to all other hosts in the cluster. If this valve is used, do not use the standard Tomcat SingleSignOn valve shown above. Valve uses a JBossCache instance to support SSO credential caching and replication across the cluster. The JBossCache instance must be configured separately. By default, the valve shares a JBossCache with the service that supports HttpSession replication. See the "jboss-web-cluster-service.xml" file in the server/all/deploy directory for cache configuration details. Besides the attributes supported by the standard Tomcat SingleSignOn valve (see the Tomcat docs), this version also supports the following attributes: cookieDomain see above treeCacheName JMX ObjectName of the JBossCache MBean used to support credential caching and replication across the cluster. If not set, the default value is "jboss.cache:service=TomcatClusteringCache", the standard ObjectName of the JBossCache MBean used to support session replication. --> <Valve className="org.jboss.web.tomcat.service.sso.ClusteredSingleSignOn" /> <!-- Check for unclosed connections and transaction terminated checks in servlets/jsps. Important: The dependency on the CachedConnectionManager in META-INF/jboss-service.xml must be uncommented, too --> <Valve className="org.jboss.web.tomcat.service.jca.CachedConnectionValve" cachedConnectionManagerObjectName="jboss.jca:service=CachedConnectionManager" transactionManagerObjectName="jboss:service=TransactionManager" /> </Host> </Engine> </Service> </Server>

    Read the article

  • Caching all files in varnish

    - by csgwro
    I want my varnish servers to cache all files. At the backend there is lighttpd hosting only static files, and there is an md5 in the URL in case the file changes (e.g. /gfx/Bird.b6e0bc2d6cbb7dfe1a52bc45dd2b05c4.swf). However, my hit ratio is very poor (about 0.18). My config: sub vcl_recv { set req.backend=default; ### passing health to backend if (req.url ~ "^/health.html$") { return (pass); } remove req.http.If-None-Match; remove req.http.cookie; remove req.http.authenticate; if (req.request == "GET") { return (lookup); } } sub vcl_fetch { ### do not cache wrong codes if (beresp.status == 404 || beresp.status >= 500) { set beresp.ttl = 0s; } remove beresp.http.Etag; remove beresp.http.Last-Modified; } sub vcl_deliver { set resp.http.expires = "Thu, 31 Dec 2037 23:55:55 GMT"; } I have done some performance tuning: DAEMON_OPTS="${DAEMON_OPTS} -p thread_pool_min=200 -p thread_pool_max=4000 -p thread_pool_add_delay=2 -p session_linger=100" The main URL which is missed is... /health.html. Is that forward to the backend configured correctly? Disabling health checking, the hit ratio increases to 0.45. Now mostly "/crossdomain.xml" is missed (from many domains, as it is wildcard). How can I avoid that? Should I take other headers like User-Agent or Accept-Encoding into account? I think that the default hashing mechanism uses URL + host/IP. Compression is used at the backend. What else can improve performance?
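
    To see exactly which requests are going to the backend and dragging the ratio down, varnish's own counters and log tools can help; a sketch, assuming a Varnish 2.x/3.x install matching the VCL above:

        # Overall hit/miss counters.
        varnishstat -1 | grep -E 'cache_hit|cache_miss'

        # Live ranking of URLs fetched from the backend (i.e. misses and passes).
        varnishtop -i txurl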

    Read the article

  • How to install/configure ffmpeg to compress mp4 videos for flash player delivery?

    - by Andrew Fulton
    We have a flash web-app that creates interactive video, and we are using ffmpeg to do some compression/resizing when a user "publishes" their project. The user can upload flv files and mp4 files, both of which play fine in the Flash UI before publishing. After publishing, the flv files work fine, but the mp4 files will not play in the flash player: audio will play but video won't. The mp4 files will play fine if I download them and play them in the QuickTime player, but if I attempt to open them in the Adobe Media Player it reports "The media file does not contain a supported video track". If I open the Movie Inspector in QuickTime it tells me that the original file is an "h264" video and the ffmpeg-processed ones are "mpeg-4". I have tried forcing it to h264 by adding flags like -f h264 and -vcodec h264, but I get a screenful of errors (no frame, illegal POC type, sps_id out of range) ending with Could not find codec parameters (Video: h264). h264 will show up if I run ffmpeg -formats and ffmpeg -codecs, and as I said it will play fine in QuickTime. Is there anything else I need to do to convince the flash player to play them? Is there anything else I need to tell you about the server that will help?
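
    One note, hedged since the exact publish command isn't shown: the Flash player expects H.264 video produced by an H.264 encoder such as libx264, whereas -f h264 and -vcodec h264 select ffmpeg's raw H.264 demuxer/decoder rather than an encoder, which would explain the errors above. A sketch with placeholder file names (very old ffmpeg builds may additionally need an x264 preset via -vpre):

        # Re-encode the video track as H.264 and keep the existing audio track.
        ffmpeg -i input.mp4 -vcodec libx264 -acodec copy output.mp4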

    Read the article

  • how to remove background layer of djvu file

    - by Jon
    Hello, I've downloaded some files from the Internet Archive. They come in different file formats and most of the time I use pdf. However, sometimes the scans are saved in colour instead of b/w. This makes it difficult/impossible to read on a dedicated ebook reader. In that case I downloaded the djvu files, as on the PC you can select which layer (color, bw, fore, back) one would like to see. Selecting the bw layer gives excellent results. However, the ebook reader does not have this option. The question is, how can I remove/extract a layer from the djvu file and save only this layer? So far I've tried the following two approaches: 1) Select bw in the djvu viewer on the PC and print to a PostScript file, followed by a ps2pdf conversion. This works, but generates a fairly large pdf file. Sure, I can again upload it to any2djvu but it just seems too much manual work for each file. 2) I tried the shared annotation feature and said (mode bw). This works on the PC as desired but is ignored on the ebook reader as the other layers are still present. Any help or suggestions would be greatly appreciated.
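
    djvulibre's command-line decoder can render a single layer directly, which avoids the print-to-PostScript round trip; a sketch, assuming ddjvu (djvulibre) and tiff2pdf (libtiff) are installed, with resolution and page options left out:

        # Render only the black-and-white mask layer to TIFF, then wrap it as a PDF.
        ddjvu -format=tiff -mode=black input.djvu output.tiff
        tiff2pdf -o output.pdf output.tiff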

    Read the article

  • Preserve embedded album art when converting from .flac to .ogg

    - by Profpatsch
    I want to convert my archived .flac library to .ogg for daily use. Using find ./ -iname '*.flac' -print0 | xargs -0 -n1 oggenc -q6 on the root music folder and then deleting every .flac (having copies of them in archive) seems straightforward; after trying it with one file it worked and all of the tags were transferred, too, except for one: embedded album art! I always prefer embedded covers over folder images, since I have some albums with varying covers. One possible solution is discussed here, but the script only works if the image is already extracted: Embed album art in OGG through command line in linux One possible solution I thought about was extracting album art from every song (not every song has one, though, and some even 2 or 3!), temporarily saving it and then using the script to include it into the finished .ogg. But then I want to increase the number of processes xargs runs simultaneously to save time, so the temp images need to have a distinct name. Is there a (Linux) program that knows how to handle this? Or is there a finished script floating around somewhere? It would be nice if oggenc supported adding embedded cover art, and it really is a shame, since these two formats should (in theory) share the same tag format. Edit: 15 days and no one has even tried to answer. It’s funny, most of my questions don't get answered. Too hard? Wrong SE site?
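
    For the "extract to a distinct temporary name" step, metaflac can dump whatever picture block each file carries, keyed off the file name so parallel xargs jobs don't collide; a per-file sketch (the helper name is made up, and the actual embedding step is left to the script linked above):

        #!/bin/bash
        # flac2ogg.sh — call once per file, e.g.: find . -iname '*.flac' -print0 | xargs -0 -n1 -P4 ./flac2ogg.sh
        f="$1"
        cover="${f%.flac}.cover.jpg"                    # unique per track, safe for parallel runs

        metaflac --export-picture-to="$cover" "$f" 2>/dev/null   # exports the embedded picture if one exists; errors ignored
        oggenc -q6 "$f"                                          # writes ${f%.flac}.ogg with the tags carried over
        # ...then embed "$cover" into "${f%.flac}.ogg" using the vorbiscomment-based script from the linked answer.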

    Read the article

  • GitHub - commit local changes in local branch to remote branch

    - by user62046
    I use Git Shell in Windows 7, working in a branch named Save-Rotation. Then I used git push origin Save-Rotation to commit the changes to remote. The result is posted at the end. It seems good. But when I went to my repository in GitHub site, which is https://github.com/chiapas/sumatrapdf/tree/Save-Rotation I can't see any change in the repository tree or commit tree. How can I know if the commit (to remote) is successful, and why the repository page is not updated? Here is the result in command-line C:\Users\imo\Documents\GitHub\sumatrapdf [Save-Rotation]> git push origin Save-R otation Counting objects: 167, done. Delta compression using up to 8 threads. Compressing objects: 100% (18/18), done. Writing objects: 100% (119/119), 27.43 KiB, done. Total 119 (delta 101), reused 119 (delta 101) To https://github.com/chiapas/sumatrapdf * [new branch] Save-Rotation -> Save-Rotation C:\Users\imo\Documents\GitHub\sumatrapdf [Save-Rotation +2 ~17 -0 !]> git push o rigin Save-Rotation Everything up-to-date C:\Users\imo\Documents\GitHub\sumatrapdf [Save-Rotation +2 ~17 -0 !]>
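
    The push output above ("* [new branch] Save-Rotation -> Save-Rotation", then "Everything up-to-date") already suggests the branch reached GitHub; one way to confirm from the command line and compare the local and remote tips (repository URL taken from the question):

        # Ask GitHub which commit the remote branch points to.
        git ls-remote --heads https://github.com/chiapas/sumatrapdf Save-Rotation

        # Compare with the local tip of the branch.
        git rev-parse Save-Rotation

    If the two hashes match, the push succeeded and the issue is only with how the web page is being viewed or cached.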

    Read the article

  • Why doesn't SSHFS let me look into a mounted directory?

    - by Jan
    I use SSHFS to mount a directory on a remote server. There is a user xxx on client and server. UID and GID are identical on both boxes. I use sshfs -o kernel_cache -o auto_cache -o reconnect -o compression=no \ -o cache_timeout=600 -o ServerAliveInterval=15 \ [email protected]:/mnt/content /home/xxx/path_to/content to mount the directory on the remote server. When I log in as xxx on the client I have no problems. I can cd into /home/xxx/path_to/content. But when I log in on the client as another user zzz and then $ ls -l /home/xxx/path_to I get this d????????? ? ? ? ? ? content and on $ ls -l /home/xxx/path_to/content I get ls: cannot access content: Permission denied When I do $ ls -l /mnt on the remote server I get drwxr-xr-x 6 xxx xxx 4096 2011-07-25 12:51 content What am I doing wrong? The permissions seem to be correct to me. Am I wrong?
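
    A hedged guess at the cause: FUSE mounts are by default visible only to the user who performed the mount, whatever the ordinary file permissions say, which would produce exactly the d????????? listing seen from the zzz account. The usual fix is the allow_other option (which for non-root users requires user_allow_other to be enabled in /etc/fuse.conf), e.g. with the same mount command as above:

        sshfs -o allow_other -o default_permissions \
              -o kernel_cache -o auto_cache -o reconnect -o compression=no \
              -o cache_timeout=600 -o ServerAliveInterval=15 \
              [email protected]:/mnt/content /home/xxx/path_to/content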

    Read the article

  • IIS 7 much slower than IIS 6

    - by JoeJoe
    I have an ASP.NET 3.5 web application running fine on Windows 2003 IIS 6. I published the same exact application to IIS 7.5 (Win2008R2) on a faster box (i5, 8GB RAM) and it is significantly slower: 5-6 sec per page vs. 1-2 sec per page. During that time the Task Mgr CPU is always under 10%. Both attach to the same database on another box. The benchmark is consistent from any other client browser or machine. I have connection pooling on both, compression on both. Same network subnet. Forms authentication (no SSL yet). Can you give me steps on how to troubleshoot where the delays are being inserted, or settings in IIS 7 that I may have overlooked? Just using defaults. There is only 1 web site on each box. I understand the role of an Application as defined in IIS has changed. There is no special Application defined in IIS.

    Read the article

  • How can I totally flatten a PDF in Mac OS on the command line?

    - by Matthew Leingang
    I use Mac OS X Snow Leopard. I have a PDF with form fields, annotations, and stamps on it. I would like to freeze (or "flatten") that PDF so that the form fields can't be changed and the annotations/stamps are no longer editable. Since I actually have many of these PDFs, I want to do this automatically on the command line. Some things I've tried/considered, with their degree of success: Open in Preview and Print to File. This creates a totally flat PDF without changing the file size. The only way to automate seems to be to write a kludgy UI-based AppleScript, though, which I've been trying to avoid. Open in Acrobat Pro and use a JavaScript function to flatten. Again, not sure how to automate this on the command line. Use pdftk with the flatten option. But this only flattens form fields, not stamps and other annotations. Use cupsfilter which can create PDF from many file formats. Like pdftk this flattened only the form fields. Use cups-pdf to hook into the Mac's printserver and save a PDF file instead of print. I used the macports version. The resulting file is flat but huge. I tried this on an 8MB file; the flattened PDF was 358MB! Perhaps this can be combined with a ghostscript call as in Ubuntu Tip:Howto reduce PDF file size from command line. Any other suggestions would be appreciated.
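
    For the cups-pdf route, the 358MB output can usually be shrunk back down with a Ghostscript pass along the lines of the Ubuntu tip mentioned above; a sketch, with the quality preset and file names as placeholders:

        # Re-distill the oversized cups-pdf output; /ebook targets ~150 dpi, /printer ~300 dpi.
        gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -dPDFSETTINGS=/ebook \
           -dNOPAUSE -dBATCH -sOutputFile=flattened-small.pdf flattened-huge.pdf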

    Read the article

  • How to Set Up an SMTP Submission Server on Linux

    - by Kevin Cox
    I was trying to set up a mail server with no luck. I want it to accept mail from authenticated users only and deliver them. I want the users to be able to connect over the internet. Ideally the mail server wouldn't accept any incoming mail. Essentially I want it to accept messages on a receiving port and transfer them to the intended recipient out port 25. If anyone has some good links and guides that would be awesome. I am quite familiar with linux but have never played around with MTA's and am currently running debian 6. More Specific Problem! Sorry, that was general and postfix is complex. I am having trouble enabling the submission port with encryption and authentication. What Works: Sending mail from the local machine. (sendmail [email protected]). Ports are open. (25 and 587) Connecting to 587 appears to work, I get a "need to starttls" warning and starttls appears to work. But when I try to connect with the next command I get the error below. # openssl s_client -connect localhost:587 -starttls smtp CONNECTED(00000003) depth=0 /CN=localhost.localdomain verify error:num=18:self signed certificate verify return:1 depth=0 /CN=localhost.localdomain verify return:1 --- Certificate chain 0 s:/CN=localhost.localdomain i:/CN=localhost.localdomain --- Server certificate -----BEGIN CERTIFICATE----- MIICvDCCAaQCCQCYHnCzLRUoMTANBgkqhkiG9w0BAQUFADAgMR4wHAYDVQQDExVs b2NhbGhvc3QubG9jYWxkb21haW4wHhcNMTIwMjE3MTMxOTA1WhcNMjIwMjE0MTMx OTA1WjAgMR4wHAYDVQQDExVsb2NhbGhvc3QubG9jYWxkb21haW4wggEiMA0GCSqG SIb3DQEBAQUAA4IBDwAwggEKAoIBAQDEFA/S6VhJihP6OGYrhEtL+SchWxPZGbgb VkgNJ6xK2dhR7hZXKcDtNddL3uf1YYWF76efS5oJPPjLb33NbHBb9imuD8PoynXN isz1oQEbzPE/07VC4srbsNIN92lldbRruDfjDrAbC/H+FBSUA2ImHvzc3xhIjdsb AbHasG1XBm8SkYULVedaD7I7YbnloCx0sTQgCM0Vjx29TXxPrpkcl6usjcQfZHqY ozg8X48Xm7F9CDip35Q+WwfZ6AcEkq9rJUOoZWrLWVcKusuYPCtUb6MdsZEH13IQ rA0+x8fUI3S0fW5xWWG0b4c5IxuM+eXz05DvB7mLyd+2+RwDAx2LAgMBAAEwDQYJ KoZIhvcNAQEFBQADggEBAAj1ib4lX28FhYdWv/RsHoGGFqf933SDipffBPM6Wlr0 jUn7wler7ilP65WVlTxDW+8PhdBmOrLUr0DO470AAS5uUOjdsPgGO+7VE/4/BN+/ naXVDzIcwyaiLbODIdG2s363V7gzibIuKUqOJ7oRLkwtxubt4D0CQN/7GNFY8cL2 in6FrYGDMNY+ve1tqPkukqQnes3DCeEo0+2KMGuwaJRQK3Es9WHotyrjrecPY170 dhDiLz4XaHU7xZwArAhMq/fay87liHvXR860tWq30oSb5DHQf4EloCQK4eJZQtFT B3xUDu7eFuCeXxjm4294YIPoWl5pbrP9vzLYAH+8ufE= -----END CERTIFICATE----- subject=/CN=localhost.localdomain issuer=/CN=localhost.localdomain --- No client certificate CA names sent --- SSL handshake has read 1605 bytes and written 354 bytes --- New, TLSv1/SSLv3, Cipher is DHE-RSA-AES256-SHA Server public key is 2048 bit Secure Renegotiation IS supported Compression: NONE Expansion: NONE SSL-Session: Protocol : TLSv1 Cipher : DHE-RSA-AES256-SHA Session-ID: E07926641A5EF22B15EB1D0E03FFF75588AB6464702CF4DC2166FFDAC1CA73E2 Session-ID-ctx: Master-Key: 454E8D5D40380DB3A73336775D6911B3DA289E4A1C9587DDC168EC09C2C3457CB30321E44CAD6AE65A66BAE9F33959A9 Key-Arg : None Start Time: 1349059796 Timeout : 300 (sec) Verify return code: 18 (self signed certificate) --- 250 DSN read:errno=0 If I try to connect from evolution I get the following error: The reported error was "HELO command failed: TCP connection reset by peer".

    Read the article

  • file system that allows specifying a different RAID level per directory and changing it afterward

    - by Adam Ryczkowski
    I have 5 hard drives on which I want to keep my data. Some of my files are more important, and some of them are less. So some of them I wish to put on RAID-6, and for some RAID-5 is sufficient. It is difficult to predict at the moment of creation of the arrays how much space of each type to declare. What I would do if I hadn't heard about ZFS is partition the hard drives into identical 100GB partitions and, as my needs grow, assemble those partitions into md devices using linux-raid. Then, I'd combine those devices using lvm into logical volumes where I'd put my data. So when I'd need more space of e.g. RAID-6, I'd take a 100GB partition from each hard drive, assemble them into another RAID-6 md device, and use it as physical storage for the logical volume group dedicated to RAID-6 data. Then I could grow the file system on this logical volume. On top of the RAID-6 and RAID-5 volume groups (managed by lvm) would reside completely independent file systems, which I'd later merge with multiple mount --bind into a single directory structure that would reflect the logical structure of the data rather than that of the storage. But now that I have heard about ZFS, with all its performance, data-healing and compression capabilities, I cannot stop wondering whether it can help me. If so, what do you think would be the best setup?
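
    If ZFS is used, the md/lvm plan translates fairly directly, with one caveat: redundancy in ZFS is a property of a pool's vdevs, not of individual datasets, so "RAID-6 for some directories, RAID-5 for others" means two pools built from partitions of the five drives, with datasets (each its own mountable file system) inside each. A sketch, assuming each disk has been split into two partitions; device and pool names are illustrative:

        # Double-parity pool (raidz2, comparable to RAID-6) from the first partition of each disk.
        zpool create tank6 raidz2 sda1 sdb1 sdc1 sdd1 sde1

        # Single-parity pool (raidz, comparable to RAID-5) from the second partitions.
        zpool create tank5 raidz sda2 sdb2 sdc2 sdd2 sde2

        # Datasets grow and shrink within their pool, so there is no up-front sizing per file system.
        zfs create -o compression=on tank6/important
        zfs create -o compression=on tank5/bulk

    Unlike the md/lvm approach, a raidz vdev cannot be reshaped to a different RAID level after creation, although more vdevs can be added to a pool later to grow it.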

    Read the article
