Search Results

Search found 88714 results on 3549 pages for 'data type'.


  • Using openssl command line tool to encrypt/decrypt data, DES ECB

    - by smsrecv
    Hello. How can I create a random 64-bit key for DES ECB encryption/decryption, and then use the same key for encryption/decryption many times? All this must be done using the openssl command-line tool. In all the examples I have seen, they do not use a "key", they use a "password". But I need a key - an array of bytes - because I need to send it to the other party (I don't know which API they use for cryptography). Then I need to use this key - array of bytes - to encrypt/decrypt data. Thank you
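    A minimal sketch of one way to do this with the openssl tool. The file names (des.key, plain.bin, cipher.bin) are placeholders; -K takes the key as a hex string, so no password-based key derivation is involved:

      # Generate a random 64-bit (8-byte) key and keep it as hex text
      openssl rand -hex 8 > des.key

      # Encrypt with DES in ECB mode, passing the raw key in hex via -K
      openssl enc -des-ecb -nosalt -K "$(cat des.key)" -in plain.bin -out cipher.bin

      # Decrypt with the same hex key
      openssl enc -des-ecb -d -nosalt -K "$(cat des.key)" -in cipher.bin -out decrypted.bin

    The hex string in des.key is the byte array you can hand to the other party; any library that accepts a raw 8-byte DES key should interoperate, provided both sides agree on padding (openssl pads with PKCS#7 by default).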

  • Possible to recover older, previously deleted files with R-studio?

    - by SteveO
    The files and directories on one of my NTFS partitions were wiped out recently. I used R-Studio to scan the partition, and it did find many files - actually more than the capacity of the partition, because R-Studio also found files that had been deleted even earlier. So I wonder if it is possible to select only the files and directories deleted this last time, instead of those deleted earlier, for recovery? R-Studio has a free demo version, for which scanning is free but recovery isn't. It is downloadable from http://www.data-recovery-software.net/Data_Recovery_Download.shtml and its manual is here: http://www.r-tt.com/downloads/Recovery_Manual.pdf. I have tried my best to search for answers in the manual, but failed to find one. Their technical support is not as good as their software and is usually unhelpful, in my opinion. Thanks!

  • How secure is cloud computing?

    - by Rhubarb
    By secure, I don't mean the machines themselves and access to them from the network. I mean - and I suppose this could be applied to any kind of hosting service - when you put all your intellectual property onto a hosted provider, what happens to the hard disks as they cycle through them? Say I've invested millions into my software, and the information and data that I have is valuable: how can I be sure it isn't read off old disks as they're recycled? Is there some kind of standard to look for that ensures a provider is going to use the strictest form of intellectual property protection? Is SAS 70 applicable here?

  • Ranking tables from Excel data

    - by Joe
    Hi all (asking here because this meta question told me to). I have some data in an Excel spreadsheet here. It's no more than a table with about five columns:

      Year Purchased  Manufacturer            Model         Num  Unit Price  Total Price
      2007            SMARTBOX                FuturePad XP  1    £2,915.00   £2,915.00
      2007            Attainment Company Inc  Go Talk 9+    1    £104.00     £104.00
      2007            Attainment Company Inc  Go Talk 20+   1    £114.00     £114.00

    I'd like to be able to build a 'top ten' of either manufacturers or models (and I'd like to be able to rank it by most mentions, most sales, or highest value of sales) - but I've got no idea what the best method is in Excel. Any suggestions...? The ideal output might be a set of cells that says something like:

      Company  Units
      A        5342
      B        232
      C        2
      D        1

  • No partition on USB Flash Drive?

    - by Skytunnel
    A friend gave me a corrupted USB memory stick to try to recover data from, but I've had some unusual results, so I thought I'd share to see if anyone is familiar with this problem... First off I just tried opening it from my own PC. Windows prompted to format the drive, which I of course declined. I downloaded TestDisk to analyse the drive, and right away I noticed something strange: in the list of drives it comes up as

      Disk /dev/sdc - 6144 B - USB Flash Drive

    That's right, the first USB flash drive smaller than a floppy disk!? Moving on anyway... the first analysis comes up with:

      Partition sector doesn't have the endmark 0xAA55

    TestDisk's Quick Search gave no results, so I moved on to Deeper Search:

      No partition found or selected for recovery

    This left me stumped. I tried a couple of other programs with no success. I did manage to get a backup image, but it was just as small as TestDisk indicated, so there was nothing of use on it. After a few hours trying various suggestions from other sources, I gave in and just tried formatting the drive, but it returned the message "Windows was unable to complete the format." From googling that, the suggestion was to delete the partition - but there is no partition to delete in this case. Most recently I've tried formatting from cmd, and got this result:

      Format D: /FS:FAT32
      The type of the file system is RAW
      The new file system is FAT32
      Verifying 0M
      11 bad sectors were encountered during the format. These sectors cannot be guaranteed to have been cleaned
      The volume is too small for FAT32

    Anyone got any suggestions? UPDATE: As per a suggestion from @Karen, I tried running a CLEAN from DISKPART, with this result:

      DiskPart has encountered an error: The request could not be performed because of an I/O device error.
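    A sketch of two checks on the Linux side (with /dev/sdX as a placeholder for whatever device letter the stick gets) that can help tell a fixable partition-table problem apart from failed hardware, which the 6144 B reported size and the I/O error both hint at:

      # Kernel messages right after plugging the stick in show the capacity the
      # controller is actually reporting, plus any I/O errors
      dmesg | tail -n 30

      # Read-only surface scan; a flood of errors here points to a dead
      # controller or worn-out flash rather than anything fixable in software
      sudo badblocks -sv /dev/sdX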

  • Recovered video files won't play

    - by BioGeek
    I have an SD card with pictures and video which malfunctioned. I was able to recover the files with PhotoRec. The pictures are OK, but when I try to open the video files (*.mov extension) I get the following errors in the following programs:

      Windows Media Player: "Windows Media Player encountered a problem while playing the file"
      QuickTime: "Error -2048: Couldn't open the file because it is not a file that QuickTime understands"
      VLC: it shows the first frame of the video and the sound is just white noise

    The file sizes look correct, so I presume the data is still in there. Is there any way to fix these recovered video files?
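    As a first diagnostic step (assuming ffmpeg/ffprobe is available; recovered_clip.mov stands in for one of the recovered files), inspecting the container usually shows whether the index survived recovery:

      # Print container and stream information; an error such as
      # "moov atom not found" means the index is gone and the file needs to be
      # repaired against a healthy reference clip from the same camera,
      # not just re-opened in another player
      ffprobe -hide_banner recovered_clip.mov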

  • Decrypting a TrueCrypt drive pulled from another machine

    - by Blakeg08
    I work in a corporate environment and we are now required to encrypt laptops. I have already encrypted about 5 or 6 out of 40. I still have a few questions before we go all out with TrueCrypt. Can I decrypt a hard drive by plugging it into my desktop using a data transfer kit? I tried this and the hard drive showed up asking me to format the volume before using it. If I have the TRD (TrueCrypt Rescue Disk) from each laptop backed up, do I still need to back up the volume headers? What else do I need to back up? Thanks.

  • How can I pinpoint a USB file transfer bottleneck in Unix?

    - by HankHendrix
    I'm experiencing very slow data transfer speeds over USB 2.0 on my *nix box and was wondering how I can pinpoint the cause of the problem. I've looked into iotop and top, but the CPU and memory figures look normal (compared to guides I have checked). The affected box is Ubuntu 12.04 32-bit Server running on an Asus EEE 701 2G model, and I am transferring from the OS over USB 2.0 to an external HDD (which transfers at 30 MB/s+ under Windows 7 on another machine). I get rsync write speeds of 1 MB/s from the OS to the USB HDD, which seems ridiculously slow. These speeds are consistent across other USB HDDs and sticks.
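    Two quick checks that can narrow this down, sketched with /media/usb standing in for wherever the external drive is mounted: whether the device actually negotiated USB 2.0, and what the raw write speed is without rsync in the way.

      # Show the negotiated speed of each USB device: 480M = USB 2.0, 12M = USB 1.1
      lsusb -t

      # Raw sequential write test to the mounted drive; fsync at the end so the
      # figure includes the actual flush to disk, not just the page cache
      dd if=/dev/zero of=/media/usb/testfile bs=1M count=256 conv=fsync
      rm /media/usb/testfile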

  • Tar and gzip together, but the other way round?

    - by Boldewyn
    Gzipping a tar file as a whole is drop-dead easy and even implemented as an option inside tar. So far, so good. However, from an archiver's point of view, it would be better to tar the gzipped single files. (The rationale behind this is that data loss is minimized if there is a single corrupt gzipped file, whereas the whole tarball is lost if it is corrupted by gzip or copy errors.) Does anyone have experience with this? Are there drawbacks? Are there more solid/tested solutions for this than

      find folder -exec gzip '{}' \;
      tar cf folder.tar folder
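    For comparison, a sketch of the same idea with a couple of small tweaks (only compress regular files, batch the gzip invocations), plus the reverse operation:

      # Compress every regular file in place, then collect the .gz files into one tar
      find folder -type f -exec gzip -9 {} +
      tar cf folder.tar folder

      # To undo: unpack the tar, then decompress each member
      tar xf folder.tar
      find folder -name '*.gz' -exec gunzip {} +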

  • Distributed filesystem for automated offline data mirroring

    - by Petr Pudlák
    I'd like to achieve the following setup: Every time I connect my laptop to a local network, my partition gets automatically mirrored to a partition on my local server. I only want to mirror what has changed from the last time. (I understand that it is not a proper backup solution since there is no history of the changes, it'd be more like a non-persistent network RAID.) Is there a distributed file system that allows such a setup? I've done some searching and it seems to me that most distributed file-systems are focused on data availability and distribution, not duplicating them. I'd be thankful for suggestions. Edit: Sorry, I forgot to mention: I'm using Linux.
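    One way to get this behaviour without a distributed file system, sketched under the assumptions that the laptop uses NetworkManager and that the server is reachable over SSH as 'homeserver' (both placeholders): a dispatcher script that fires a one-way rsync whenever a network interface comes up.

      #!/bin/sh
      # Saved as /etc/NetworkManager/dispatcher.d/90-mirror and made executable.
      # NetworkManager calls it with $1 = interface and $2 = action.
      [ "$2" = "up" ] || exit 0

      # Incremental one-way mirror of the data partition to the local server;
      # --delete makes it a true mirror rather than an accumulating backup
      rsync -a --delete /mnt/data/ homeserver:/srv/mirror/laptop/

    Since only changed files are transferred, reconnecting after small edits is cheap; the obvious caveat is the one already noted in the question - a deletion or corruption on the laptop is mirrored to the server on the next connect.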

  • How much data does Windows write on boot

    - by soandos
    This question was inspired by Bob's comment to my answer here. On boot, Windows writes files to the hard drive (I imagine this to be the case, as it has a way of detecting whether the previous boot was interrupted by a hard power-off, and I am sure many other things). But assuming a "smooth" boot, where there are no errors, no logon scripts that run, and so on, roughly how much data (a few KB, a few MB, a few GB) gets written to the drive? For simplicity's sake, assume that: hibernation is turned off; Windows 7; the pagefile is turned off (does this matter right at boot, or only later?). How could one go about measuring this? Are there resources that have this information?

  • Almost All Logical Volumes Disappeared - Recovery?

    - by Alex
    We had a hard disc crash of one of the two hard discs in a software RAID with LVM on top. The server is running Citrix XenServer. On the hard disk which is still intact, the volume group gets detected fine, but only one LV is left. (Some hashes are replaced by "x".)

      # lvdisplay
        --- Logical volume ---
        LV Name                /dev/VG_XenStorage-x-x-x-x-408b91acdcae/MGT
        VG Name                VG_XenStorage-x-x-x-x-408b91acdcae
        LV UUID                x-x-x-x-x-x-vQmZ6C
        LV Write Access        read/write
        LV Status              available
        # open                 0
        LV Size                4.00 MiB
        Current LE             1
        Segments               1
        Allocation             inherit
        Read ahead sectors     auto
        - currently set to     256
        Block device           253:0

      root@rescue ~ # vgdisplay
        --- Volume group ---
        VG Name               VG_XenStorage-x-x-x-x-408b91acdcae
        System ID
        Format                lvm2
        Metadata Areas        1
        Metadata Sequence No  4
        VG Access             read/write
        VG Status             resizable
        MAX LV                0
        Cur LV                1
        Open LV               0
        Max PV                0
        Cur PV                1
        Act PV                1
        VG Size               698.62 GiB
        PE Size               4.00 MiB
        Total PE              178848
        Alloc PE / Size       1 / 4.00 MiB
        Free PE / Size        178847 / 698.62 GiB
        VG UUID               x-x-x-x-x-x-53w0kL

    I could understand if a full physical volume were lost - but why only the logical volumes? Is there any explanation for this? Is there any way to recover the logical volumes?

    EDIT: We are in a rescue system here. The problem is that the whole server does not boot (GRUB error 22), and what we are trying to do is access the root filesystem. But everything was in the LVM. We have only this:

      (parted) print
      Model: ATA SAMSUNG HD753LJ (scsi)
      Disk /dev/sdb: 750GB
      Sector size (logical/physical): 512B/512B
      Partition Table: msdos

      Number  Start   End    Size   Type     File system  Flags
       1      32.3kB  750GB  750GB  primary               boot, lvm

    And this 750 GB LVM volume is exactly what we see on top.
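    If the LVM metadata archive for that volume group is still reachable (it normally lives under /etc/lvm/archive and /etc/lvm/backup on the system that ran the LVM commands - which here may be the unbootable root or the XenServer host rather than the rescue system), a sketch of the usual recovery path looks like this; the archive file name below is a placeholder:

      # List the archived metadata versions known for the volume group
      vgcfgrestore --list VG_XenStorage-x-x-x-x-408b91acdcae

      # Restore a chosen archive, then activate the LVs and check what comes back
      vgcfgrestore -f /etc/lvm/archive/VG_XenStorage-x-x-x-x-408b91acdcae_00005.vg VG_XenStorage-x-x-x-x-408b91acdcae
      vgchange -ay VG_XenStorage-x-x-x-x-408b91acdcae
      lvs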

  • In Excel, how to group data by date, and then do operations on the data?

    - by Bicou
    Hi, I have Excel 2003. My data is like this:

      01/10/2010  0.99
      02/10/2010  1.49
      02/10/2010  0.99
      02/10/2010  0.99
      02/10/2010  0.99
      03/10/2010  1.49
      03/10/2010  1.49
      03/10/2010  0.99
      etc.

    In fact it is a list of sales, day by day. I want to end up with something like this:

      01/10/2010  0.99
      02/10/2010  4.46
      03/10/2010  3.97

    I want to group by date and sum column B. I'd like to see the evolution of the sales over time and display a nice graph of it. I have managed to create pivot tables that almost do the job: they list the number of 0.99 and 1.49 sales each day, but I can't find a way to simply sum everything and group by date. Thanks for reading.

  • "Cannot allocate memory " error whle copying data from window to ubuntu

    - by John
    I have Ubuntu 9.10 installed inside a VM on Server 2008. When I try to copy data from the network and paste it inside Ubuntu, it gives an error saying "Cannot allocate memory". I have 3 GB of RAM attached to the Ubuntu VM. I tried setting the following registry key to '1': HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\LargeSystemCache and the following registry key to '3': HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters\Size, but I am still unable to copy the file from my host machine (Windows XP) to my Ubuntu machine (which is in a virtual machine). I am trying to copy the file jdk-1_5_0_22-linux-i586.bin, whose size is 47.4 MB. Is there any other workaround for this problem?
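    One workaround that sidesteps the clipboard/drag-and-drop path entirely is to pull the file from inside the Ubuntu guest over SMB. A sketch, with the host address, share and user names as placeholders (the mount helper package is smbfs on Ubuntu 9.10; newer releases call it cifs-utils):

      # Install the CIFS mount helper, then mount the Windows share
      sudo apt-get install smbfs
      sudo mkdir -p /mnt/winshare
      sudo mount -t cifs //192.168.0.10/share /mnt/winshare -o username=winuser

      # Copy the installer locally and unmount
      cp /mnt/winshare/jdk-1_5_0_22-linux-i586.bin ~/
      sudo umount /mnt/winshare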

  • Why does yum index get corrupted?

    - by TomOnTime
    Occasionally yum's cache gets corrupted and we see errors like this:

      error: db3 error(-30974) from dbenv->failchk: DB_RUNRECOVERY: Fatal error, run database recovery
      error: cannot open Packages index using db3 - (-30974)
      error: cannot open Packages database in /var/lib/rpm

    The workaround is rm -f /var/lib/rpm/__db* and then the next "yum" command regenerates the data. My question is: what is likely to be causing this? Is there some common task that ignores locks, or some other problem, that causes this? We have hundreds of CentOS machines and there is no pattern to which ones see this problem. It could be a "one in a million" issue, which at large scale is seen often. NOTE: I realize this is a very open-ended question, but if an answer finds the cause, I will go back and turn the question into something more canonical that directly relates to the specific issue.
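    Two things that may help narrow down the cause the next time it happens, sketched with the standard RPM database paths; the lsof check looks for whatever process has the Berkeley DB environment open when the errors appear:

      # See whether anything currently has the RPM database open
      # (a second rpm/yum instance, or a monitoring/inventory agent, is a common culprit)
      lsof /var/lib/rpm/Packages

      # A slightly more thorough version of the workaround: drop the stale
      # Berkeley DB environment files and rebuild the package database
      rm -f /var/lib/rpm/__db*
      rpm --rebuilddb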

  • Excel - Reuse a trend line to apply to other data

    - by milko
    I've obtained a trend line from a particular set of data. What I'd like to do now is to reuse this trend line to predict values from a given pair (x,y) of coordinates. To put it another way, I have one pair (x,y) that I know is correct for sure. I don't know any other point. Let's assume the behavior of this new set is similar to the one I've got the trend line from. Is there any way Excel could compute other points following this trend line?

  • Mirrored servers in data centers nationwide -- how?

    - by Sysadmin Evstar
    Mirrored servers in data centers nationwide -- how? I flunked my IT interview by getting this question wrong. I thought that in the various metropolitan areas, an "http://google.com" request goes to the ISP's DNS server, which somehow returns an IP address for one of several geographically-nearby http servers, and then something internally rolls over to the next available local Google server. But then, I could not explain where the table of available local Google servers is actually cached, or the details of the IP address rollover. Or how they could manually take some server out of the rotation, from anywhere. So, what should I be reading now so I can ace this question next time? Also, what daemons run on these machines 24/7 to keep all those mirrored database disks synchronized?
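    A quick way to see the DNS half of this from any machine with dig installed (the resolver address below is just a public example): geographically-aware DNS plus short TTLs is what lets different regions, and different moments in time, receive different server addresses for the same name.

      # Ask your default resolver and a third-party resolver for the same name;
      # the answers typically differ by vantage point
      dig +short www.google.com
      dig @8.8.8.8 +short www.google.com

      # The full output also shows the TTL - keeping it short is what allows a
      # server to be pulled out of rotation within minutes
      dig www.google.com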

  • Best archive format & tool for large amounts of data (50gb+)

    - by marcusstarnes
    I only realised this afternoon that the ZIP format has a limit of what appears to be around 20 GB. I am trying to automate an archive process (using Automate) to zip/rar/whatever a collection of folders/files on one of my disks, and it always appeared to bomb out with an incomplete archive at about 20 GB. So I tried using WinRAR and doing it manually as a ZIP file, but it told me of the limit. So, I was wondering, what is a recommended archive format (and tool for accomplishing the task) for archiving up a large amount of data (around 50 GB)? Thanks
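    One command-line option, sketched with the 7-Zip CLI and a hypothetical source folder: the 7z format has no practical size limit for archives this big, and the -v switch splits the result into fixed-size volumes that are easier to move around (the same syntax works with 7z.exe on Windows and p7zip on Linux).

      # Create a 7z archive split into 4 GB volumes (archive.7z.001, .002, ...)
      7z a -t7z -v4g archive.7z D:\data\folder

      # Verify the archive afterwards (point the test at the first volume)
      7z t archive.7z.001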

  • Analyse frequencies of date ranges in Google Drive

    - by wnstnsmth
    I have a Google Drive spreadsheet where I would like to compute occurrences of date ranges. As you can see in my sheet, there is a column date_utc+1 which contains almost random date data. https://docs.google.com/spreadsheet/ccc?key=0AhqMXeYxWMD_dGRkVGRqbkR3c05mWUdhYkJWcFo2Mmc What I would like to do is 1) put the date values into bins of 6 hours each, i.e. 12/5/2012 23:57:04 until 12/6/2012 0:03:17 would be in the first bin, 12/6/2012 11:20:53 until 12/6/2012 17:17:07 in the second bin, and so forth. Then, I would like to count the occurrences of those bins, such as

      bin_from            bin_to              freq
      -------------------------------------------
      12/5/2012 23:57:04  12/6/2012 0:03:17   2
      12/6/2012 11:20:53  12/6/2012 17:17:07  19
      ...                 ...                 ...

    Hope it is clear what I mean. Partial hints are very welcome as well, since I am pretty new to spreadsheeting.

  • Minimum required bandwidth for remote database server

    - by user66734
    I want to build a small warehousing application for my company. We have a central warehouse which distributes to 8 sales points across the country, and they insist on an in-house solution. I am thinking of setting up a central MySQL Linux server and having the branches connect to it to store sales. Queries to the db from the branches will be minimal, maybe 10 per hour. However, I need all the branches to be able to store each sale's data (product ID, customer ID) in the central db, at peak time at most once every five minutes. My question is: can I get away with simple 24 Mbps/768 kbps DSL lines? If not, what is the bandwidth requirement? Can I rely on a load-balancing router to combine additional lines if needed? Can you propose some server hardware specs?
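    A hedged back-of-envelope check, with the per-record size as an assumption (~300 bytes on the wire per sale INSERT, including protocol overhead), 8 branches, and the stated peak of one sale per branch every 5 minutes (300 seconds):

      echo "$(( 8 * 300 ))"            # bytes per 5-minute window across all branches: 2400
      echo "$(( 8 * 300 * 8 / 300 ))"  # average load at the central server in bits per second: 64

    At that rate even a 768 kbps upstream is essentially idle; line reliability and latency, not bandwidth, would be the things to plan around.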

  • Excel 2010 Move data from multiple columns to single row

    - by frustrated529
    So frustrating! I get data sent to me and it looks like this:

      a 1
      a --2 2
      a-------3 3
      b 1
      b-- 2 2
      b ------ 3 3
      b------------ 4 4

    and I need it to look like this:

      a 1 2 2 3 3
      b 1 2 2 3 3 4 4

    I have about 30 columns that need to move to the top value in their group, then remove the duplicates. I have been searching forums for several days and trying bits and pieces of code. I am having such a tough time with VBA!!!!

  • Removing extra commas in CSV without another data source

    - by fi-no
    We have a large database with customer addresses that was exported from an SQL database to CSV. In the event that a company has a comma in their name, it (predictably) throws the whole database out of whack. Unfortunately, there are so many instances of this (and commas in the second address line) that the whole CSV (~100k rows) is a huge mess. The obvious fix is to export the data again in a different, non comma reliant format, but access to that SQL database is more or less impossible at the moment... I've tried a few tools and brainstormed about combining things to fix this, but I figured asking couldn't hurt. Thanks!
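    Before any hand-editing, it can help to find exactly which rows are broken. A sketch using awk, under the assumptions that a well-formed row has exactly 5 comma-separated fields (adjust NF to the real column count), that the export does not quote fields, and that the file name is a placeholder:

      # Print the line number and content of every row whose field count is off
      awk -F',' 'NF != 5 {print NR": "$0}' customers.csv

      # Count how many rows are affected
      awk -F',' 'NF != 5' customers.csv | wc -l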

  • Open source monitoring tool without sending data to "Their Server"

    - by hangu
    I am trying to find an open-source server monitoring tool. I know there are a lot of them, but I couldn't find what I need. The basic process of the monitoring tools I have used before was: 1) install an agent on the server I want to monitor; 2) the agent sends data to "their server"; 3) I check the health of my server through a web page they provide. What I need is to avoid step 2. Are there any monitoring tools I can use like that? I have Windows 2008 and Linux servers, and simple features will be enough (CPU, memory, network...). Thank you
