Search Results

Search found 58499 results on 2340 pages for 'temporal data'.

Page 197/2340 | < Previous Page | 193 194 195 196 197 198 199 200 201 202 203 204  | Next Page >

  • Recovered video files won't play

    - by BioGeek
    I have an SD card with pictures and video which malfunctioned. I was able to recover the files with PhotoRec. The pictures are OK, but when I try to open the video files (*.mov extension) I get the following errors: Windows Media Player says "Windows Media Player encountered a problem while playing the file"; QuickTime says "Error -2048: Couldn't open the file because it is not a file that QuickTime understands"; VLC shows the first frame of the video, and the sound is just white noise. The file sizes look correct, so I presume the data is still in there. Is there any way to fix these recovered video files?
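
    A first diagnostic step, as a minimal Python sketch (an aid, not a fix): QuickTime files are a sequence of "atoms", each headed by a 4-byte big-endian size and a 4-byte type. A playable file normally contains both a moov atom (the index) and an mdat atom (the media data); recovered files that every player rejects are often missing a valid moov, which recovery tools cannot rebuild for fragmented files. This script only lists the top-level atoms so you can see what survived; the file path is a placeholder.

        import struct
        import sys

        def list_atoms(path):
            with open(path, "rb") as f:
                while True:
                    header = f.read(8)
                    if len(header) < 8:
                        break
                    size, kind = struct.unpack(">I4s", header)
                    header_len = 8
                    if size == 1:
                        # a 64-bit extended size follows the type field
                        size = struct.unpack(">Q", f.read(8))[0]
                        header_len = 16
                    print(kind.decode("latin-1"), size)
                    if size == 0:
                        break  # this atom extends to the end of the file
                    f.seek(size - header_len, 1)  # jump to the next atom

        if __name__ == "__main__":
            list_atoms(sys.argv[1])  # e.g. python list_atoms.py recovered.mov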

    Read the article

  • Hard Disk recovery

    - by Shaihi
    I have 3 disks of the same type, model, and year of production. All three were part of a generic IBM server solution. My problem is that all 3 disks suffered the same malfunction at the exact same time and are now non-functional. I went to two different experts' laboratories and got the same answer: to recover the data, they need another identical disk from which they can take spare parts. Can recovery really require such an exact match? Anyway, I am not sure whether this question belongs on this forum, but I am looking to buy the following disk: IBM ESERVER XSERIES, IBM P/N 24P3707, IBM FRU 24P3708, 146.8GB USCSI 10K RPM, PART NUMBER 9V2005-027. I already bought a disk with the same part number, but the labs said that apparently I need a disk that was manufactured in the same factory, meaning all the numbers have to match exactly. If anybody knows where I can purchase such a disk (the information on the lost disks is really important to me), please tell me.

    Read the article

  • Tar and gzip together, but the other way round?

    - by Boldewyn
    Gzipping a tar file as a whole is drop-dead easy, and even implemented as an option inside tar. So far, so good. However, from an archiver's point of view, it would be better to tar the individually gzipped files. (The rationale: data loss is minimized; a single corrupt gzipped file costs you one file, whereas a gzip or copy error in a compressed tarball can corrupt the whole archive.) Has anyone experience with this? Are there drawbacks? Are there more solid/tested solutions for this than running find folder -type f -exec gzip '{}' \; followed by tar cf folder.tar folder?
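
    The same compress-first layout as a minimal Python sketch (standard library only; folder and archive names are placeholders). Each file is gzipped on its own and the .gz files are then collected into an uncompressed tar, so one damaged member costs one file rather than the whole archive:

        import gzip
        import os
        import shutil
        import tarfile

        def gzip_then_tar(folder, archive):
            for root, _dirs, files in os.walk(folder):
                for name in files:
                    if name.endswith(".gz"):
                        continue  # already compressed
                    path = os.path.join(root, name)
                    with open(path, "rb") as src, gzip.open(path + ".gz", "wb") as dst:
                        shutil.copyfileobj(src, dst)
                    os.remove(path)  # keep only the compressed copy
            # mode "w", not "w:gz": the tar itself stays uncompressed on purpose
            with tarfile.open(archive, "w") as tar:
                tar.add(folder)

        gzip_then_tar("folder", "folder.tar")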

    Read the article

  • How can I pinpoint a USB file transfer bottleneck in Unix?

    - by HankHendrix
    I'm experiencing very slow data transfer speeds over USB 2.0 on my *nix box and was wondering how I can pinpoint the cause of the problem. I've looked into iotop and top, but the CPU and memory figures look normal (compared to guides I have checked). The affected box is Ubuntu 12.04 32-bit Server running on an Asus EEE 701 2G model, and I am transferring from the OS over USB 2.0 to an external HDD (which transfers at 30 MB/s+ on Windows 7 on another machine). I get rsync write speeds of 1 MB/s from the OS to the USB HDD, which seems ridiculously slow. These speeds are consistent across other USB HDDs and sticks.
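
    One way to narrow it down, as a rough sketch rather than a diagnosis: measure raw sequential write speed to the drive's mount point and compare it with the rsync figure; that separates "the USB link is slow" from "rsync overhead is slow" (on a low-powered machine like the Eee 701, rsync's checksumming alone can be the bottleneck). The mount point below is an assumption:

        import os
        import time

        def write_speed(mount_point, size_mb=100):
            block = b"\0" * (1024 * 1024)  # 1 MiB of zeros
            target = os.path.join(mount_point, "speedtest.tmp")
            start = time.time()
            with open(target, "wb") as f:
                for _ in range(size_mb):
                    f.write(block)
                f.flush()
                os.fsync(f.fileno())  # force the data out of the page cache
            elapsed = time.time() - start
            os.remove(target)
            return size_mb / elapsed

        print("%.1f MB/s" % write_speed("/mnt/usb"))  # /mnt/usb is a placeholder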

    Read the article

  • Decrypting a TrueCrypt drive pulled from another machine

    - by Blakeg08
    I work in a corporate environment and we are now required to encrypt laptops. I have already encrypted about 5 or 6 out of 40. I still have a few questions before we go all out with TrueCrypt. Can I decrypt a hard drive by plugging it into my desktop using a data transfer kit? I tried this, and the hard drive showed up asking me to format the volume before using it. If I have the TrueCrypt Rescue Disk (TRD) from each laptop backed up, do I still need to back up the volume headers? What else do I need to back up? Thanks.

    Read the article

  • Distributed filesystem for automated offline data mirroring

    - by Petr Pudlák
    I'd like to achieve the following setup: every time I connect my laptop to my local network, a partition gets automatically mirrored to a partition on my local server. I only want to mirror what has changed since the last time. (I understand that this is not a proper backup solution, since there is no history of changes; it would be more like a non-persistent network RAID.) Is there a distributed file system that allows such a setup? I've done some searching, and it seems to me that most distributed file systems focus on data availability and distribution, not on mirroring. I'd be thankful for suggestions. Edit: Sorry, I forgot to mention: I'm using Linux.
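
    If no distributed file system fits, the conventional substitute is rsync triggered on connect (from cron or a NetworkManager dispatcher hook). A minimal sketch of that trigger; the hostname, rsync module, and paths are all placeholders:

        import subprocess

        SERVER = "homeserver.local"                      # hypothetical server name
        SRC = "/home/"                                   # tree to mirror
        DST = "rsync://homeserver.local/laptop-mirror/"  # hypothetical rsync module

        def server_reachable():
            # ping exits with status 0 when the host answers within the timeout
            return subprocess.run(
                ["ping", "-c", "1", "-W", "2", SERVER],
                stdout=subprocess.DEVNULL,
            ).returncode == 0

        if server_reachable():
            # -a preserves metadata, --delete keeps the mirror exact;
            # only files changed since the last run are transferred
            subprocess.run(["rsync", "-a", "--delete", SRC, DST], check=True)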

    Read the article

  • How much data does Windows write on boot?

    - by soandos
    This question was inspired by Bob's comment on my answer here. On boot, Windows writes files to the hard drive (I imagine this to be the case, as it has a way of detecting whether the previous boot was interrupted by a hard power-off, and I am sure many other things). But assuming a "smooth" boot, where there are no errors, no logon scripts, and the like, roughly how much data (a few KB, a few MB, a few GB) gets written to the drive? For simplicity's sake, assume that: hibernation is turned off; it's Windows 7; the pagefile is turned off (does this matter right at boot, or only later?). How could one go about measuring this? Are there resources that have this information?

    Read the article

  • "Cannot allocate memory " error whle copying data from window to ubuntu

    - by John
    I have Ubuntu 9.10 installed inside a VM on Server 2008. When I try to copy data from the network and paste it inside Ubuntu, I get the error "Cannot allocate memory". The Ubuntu VM has 3 GB of RAM attached. I tried the suggestion of setting the registry key HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\LargeSystemCache to '1' and HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters\Size to '3', but I am still unable to copy the file from my host machine (Windows XP) to my Ubuntu machine (which is in a virtual machine). The file is jdk-1_5_0_22-linux-i586.bin, whose size is 47.4 MB. Is there any other workaround for this problem?

    Read the article

  • In Excel, how to group data by date, and then do operations on the data?

    - by Bicou
    Hi, I have Excel 2003. My data is like this:

        01/10/2010  0.99
        02/10/2010  1.49
        02/10/2010  0.99
        02/10/2010  0.99
        02/10/2010  0.99
        03/10/2010  1.49
        03/10/2010  1.49
        03/10/2010  0.99

    etc. In fact it is a list of sales, one row per sale per day. I want to have something like this:

        01/10/2010  0.99
        02/10/2010  4.46
        03/10/2010  3.97

    That is, group by date and sum column B. I'd like to see the evolution of the sales over time and display a nice graph of it. I have managed to create pivot tables that almost do the job: they list the number of 0.99s and 1.49s each day, but I can't find a way to simply sum everything and group by date. Thanks for reading.
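
    For what it is worth, a plain SUMIF over a column of unique dates does this without a pivot table, e.g. =SUMIF($A$2:$A$100,D2,$B$2:$B$100) with the unique dates in column D (ranges illustrative). The same grouping sketched in Python for clarity, assuming the sheet is exported to sales.csv with date,amount rows:

        import csv
        from collections import OrderedDict

        totals = OrderedDict()  # keeps the dates in file order
        with open("sales.csv", newline="") as f:
            for date, amount in csv.reader(f):
                totals[date] = totals.get(date, 0.0) + float(amount)

        for date, total in totals.items():
            print(date, round(total, 2))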

    Read the article

  • Why does yum index get corrupted?

    - by TomOnTime
    Occasionally yum's cache gets corrupted and we see errors like this:

        error: db3 error(-30974) from dbenv->failchk: DB_RUNRECOVERY: Fatal error, run database recovery
        error: cannot open Packages index using db3 - (-30974)
        error: cannot open Packages database in /var/lib/rpm

    The workaround is rm -f /var/lib/rpm/__db*, and then the next "yum" command regenerates the data. My question is: what is likely to be causing this? Is there some common task that ignores locks, or has some other problem that causes this? We have hundreds of CentOS machines and there is no pattern to which ones see the problem. It could be a "one in a million" issue, which at large scale is seen often. NOTE: I realize this is a very open-ended question, but if an answer finds the cause, I will go back and turn the question into something more canonical that directly relates to the specific issue.
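
    A hedged sketch of automating the detect-and-repair cycle described above (run as root; the paths are the stock CentOS ones from the question, and rpm --rebuilddb is the standard companion step):

        import glob
        import os
        import subprocess

        def rpmdb_healthy():
            # "rpm -qa" fails with a nonzero status when the Packages DB is wedged
            return subprocess.run(
                ["rpm", "-qa"],
                stdout=subprocess.DEVNULL,
                stderr=subprocess.DEVNULL,
            ).returncode == 0

        if not rpmdb_healthy():
            for stale in glob.glob("/var/lib/rpm/__db*"):
                os.remove(stale)  # clear the stale Berkeley DB environment
            subprocess.run(["rpm", "--rebuilddb"], check=True)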

    Read the article

  • Hosting solution for sensitive client data

    - by Mark
    Hello, we are developing a web application that will deal with highly sensitive (financial) client data (the audience is medium to large businesses). Clients will be under scrutiny from regulators and auditors and, as such, we will be too. More importantly, to give clients a level of comfort, our application and its hosting arrangement should instill a lot of confidence. We are looking into using a cloud-based service like Linode, Amazon EC2, etc. To allow for maximum flexibility, we are keen on putting everything on virtual servers and avoiding having to buy our own hardware. Does a cloud-based service make sense for our particular scenario? If not, what type of hosting should we consider? If so, what should we look out for? Thanks!

    Read the article

  • Excel - Reuse a trend line to apply to other data

    - by milko
    I've obtained a trend line from a particular set of data. What I'd like to do now is to reuse this trend line to predict values from a given pair (x,y) of coordinates. To put it another way, I have one pair (x,y) that I know is correct for sure. I don't know any other point. Let's assume the behavior of this new set is similar to the one I've got the trend line from. Is there any way Excel could compute other points following this trend line?
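
    One reading of this as a worked equation rather than an Excel feature: if the original trend line is y = m*x + b and the new series is assumed to share the slope m, then the single known point (x0, y0) fixes the new intercept, b' = y0 - m*x0, so predictions become y = m*x + (y0 - m*x0). In Excel, SLOPE(known_ys, known_xs) recovers m from the original data, after which the shifted line can be filled down as an ordinary formula. (This assumes a linear trend line; other fit types shift differently.)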

    Read the article

  • Mirrored servers in data centers nationwide -- how?

    - by Sysadmin Evstar
    I flunked my IT interview by getting this question wrong. I thought that in the various metropolitan areas, an "http://google.com" request goes to the ISP's DNS server, which somehow returns an IP address for one of several geographically nearby HTTP servers, and then something internally rolls over to the next available local Google server. But I could not explain where the table of available local Google servers is actually cached, or the details of the IP address rollover, nor how they could manually take some server out of the rotation, from anywhere. So, what should I be reading now so I can ace this question next time? Also, what daemons run on these machines 24/7 to keep all those mirrored database disks synchronized?
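
    A small demo of the first link in that chain (an illustration of resolver behavior only, not of Google's internals): the public DNS answer for google.com is a set of addresses that varies by resolver and region, and rotating or withdrawing entries from that set is one of the levers for steering load:

        import socket

        # getaddrinfo returns one entry per (address, socket type) combination;
        # collapse them to the distinct addresses the resolver handed back
        addresses = {info[4][0] for info in socket.getaddrinfo("google.com", 80)}
        for addr in sorted(addresses):
            print(addr)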

    Read the article

  • Best archive format & tool for large amounts of data (50gb+)

    - by marcusstarnes
    I only realised this afternoon that the ZIP format has a limit, which appears to be around 20 GB. I am trying to automate an archive process (using Automate) to zip/rar/whatever a collection of folders/files on one of my disks, and it always bombed out with an incomplete archive at about 20 GB. So I tried using WinRAR and doing it manually as a ZIP file, but it told me about the limit. So, I was wondering: what archive format (and tool for accomplishing the task) is recommended for archiving a large amount of data (around 50 GB)? Thanks
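
    For background, hedged: the classic ZIP structures top out at 4 GiB per file and per archive unless the tool writes the ZIP64 extensions, which the automation tool here may lack. tar+gzip has no comparable ceiling; a minimal sketch with Python's standard library, paths being placeholders:

        import tarfile

        # "w:gz" streams the tar through gzip; there is no 4 GiB-style limit
        with tarfile.open("backup.tar.gz", "w:gz") as tar:
            tar.add("D:/data-to-archive", arcname="data")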

    Read the article

  • Three-disk (possibly RAID) data recovery

    - by Martin
    I have on my desk three 160 GB disks that were once part of an HP ProLiant Windows 2003 Server. They may have been part of a RAID configuration of some sort, and they may or may not be damaged in some way. When I attach them via USB, one of them shows up as a drive, but unformatted. The others show up as uninitialized disks in the disk manager. An alternative explanation is that those two drives were simply never used. What's my first step? I've recovered data off damaged drives before, but have never dealt with RAID configurations. How can I even tell whether any type of RAID was used?

    Read the article

  • Minimum required bandwidth for a remote database server

    - by user66734
    I want to build a small warehousing application for my company. We have a central warehouse which distributes to 8 sales points across the country, and they insist on an in-house solution. I am thinking of setting up a central MySQL database on a Linux server and having the branches connect to it to store sales. Queries to the db from the branches will be minimal, maybe 10 per hour. However, I need every branch to be able to store each sale record (product ID, customer ID) in the central db, at peak times at most once every five minutes. My question is: can I get away with simple 24 Mbps/768 kbps DSL lines? If not, what is the bandwidth requirement? Can I rely on a load-balancing router to combine additional lines if needed? Can you propose some server hardware specs?
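
    A back-of-the-envelope check, with all sizes assumed: a sale record of a couple of IDs plus MySQL protocol overhead is well under 1 KB per insert. Eight branches inserting once every five minutes is about 8 KB per 300 s, roughly 27 bytes/s or ~220 bps on average, and 10 queries per hour add little unless their result sets are large. A 768 kbps upstream is therefore orders of magnitude more than the raw requirement; line reliability and latency, not bandwidth, are what to plan around.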

    Read the article

  • Archive format & tool for large amounts of data (50gb+)

    - by marcusstarnes
    I only realised this afternoon that the ZIP format has a limit of what appears to be around 20gb. I am trying to automate an archive process (using Automate) to zip/rar/whatever a collection of folders/files on one of my disks. It always appeared to bomb out with an incomplete archive at about 20gb. So I tried using WinRAR and doing it manually as a ZIP file, but it told me of the limit. So, I was wondering, what is a recommended zip format (and tool for accomplishing the task) for archiving up a large amount of data (around 50gb)?

    Read the article

  • Analyse frequencies of date ranges in Google Drive

    - by wnstnsmth
    I have a Google Drive spreadsheet in which I would like to compute occurrences of date ranges. As you can see in my sheet, there is a column date_utc+1 which contains almost random date data. https://docs.google.com/spreadsheet/ccc?key=0AhqMXeYxWMD_dGRkVGRqbkR3c05mWUdhYkJWcFo2Mmc What I would like to do is 1) put the date values into bins of 6 hours each, i.e. 12/5/2012 23:57:04 until 12/6/2012 0:03:17 would be in the first bin, 12/6/2012 11:20:53 until 12/6/2012 17:17:07 in the second bin, and so forth. Then, I would like to count the occurrences in those bins, such as:

        bin_from            bin_to              freq
        -----------------------------------------------
        12/5/2012 23:57:04  12/6/2012 0:03:17   2
        12/6/2012 11:20:53  12/6/2012 17:17:07  19
        ...                 ...                 ...

    Hope it is clear what I mean. Partial hints are very welcome as well, since I am pretty new to spreadsheets.
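
    Two hedged hints: in a spreadsheet, date values are day serial numbers, so something like =FLOOR(A2, 0.25) (0.25 day = 6 hours) floors each timestamp to a bin boundary that a COUNTIF or pivot can then tally. The same binning sketched in Python for clarity, with a few of the question's timestamps inlined:

        from collections import Counter
        from datetime import datetime, timedelta

        stamps = [
            "12/5/2012 23:57:04",
            "12/6/2012 0:03:17",
            "12/6/2012 11:20:53",
        ]

        bins = Counter()
        for s in stamps:
            t = datetime.strptime(s, "%m/%d/%Y %H:%M:%S")
            # floor the hour to a multiple of 6, zeroing minutes and seconds
            floored = t.replace(hour=t.hour - t.hour % 6, minute=0, second=0)
            bins[floored] += 1

        for start in sorted(bins):
            print(start, "to", start + timedelta(hours=6), ":", bins[start])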

    Read the article

  • Excel 2010: Move data from multiple columns to a single row

    - by frustrated529
    So frustrating! I get data sent to me that looks like this (blank cells shown as dashes):

        a  1
        a  -  2  2
        a  -  -  3  3
        b  1
        b  -  2  2
        b  -  -  3  3
        b  -  -  -  4  4

    and I need it to look like this:

        a  1  2  2  3  3
        b  1  2  2  3  3  4  4

    I have about 30 columns whose data needs to move up to the top row of its group, with the duplicate rows then removed. I have been searching forums for several days and trying bits and pieces of code, but I am having such a tough time with VBA!
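
    A hedged sketch of the same reshaping outside VBA (the file name and layout are assumptions: key in the first column, blanks as empty cells, sheet exported to data.csv): every non-empty cell in a group is appended to a single output row per key, which reproduces the desired a and b rows above.

        import csv
        from collections import OrderedDict

        merged = OrderedDict()  # one output row per key, in first-seen order
        with open("data.csv", newline="") as f:
            for row in csv.reader(f):
                key, cells = row[0], row[1:]
                # append every non-empty cell to the key's output row
                merged.setdefault(key, []).extend(c for c in cells if c != "")

        with open("merged.csv", "w", newline="") as f:
            writer = csv.writer(f)
            for key, cells in merged.items():
                writer.writerow([key] + cells)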

    Read the article

  • Open source monitoring tool without sending data to "Their Server"

    - by hangu
    I am trying to find an open-source server monitoring tool. I know there are a lot, but I couldn't find what I need. The basic process of the monitoring tools I have used before was: 1) install an agent on the server I want to monitor; 2) the agent sends data to "their server"; 3) I check the health of my server through a web page they present. What I need is to avoid step 2. Are there any monitoring tools I can use like that? I have Windows 2008 and Linux servers, and simple metrics (CPU, memory, network...) will be enough. Thank you

    Read the article

  • WUBI installation wiped hard-drive?

    - by gkaykck
    Here is what happened: I installed Xubuntu via Wubi on my D: drive. I have 2 drives, by the way, C: and D:. Basically I use the C: drive for Windows and the D: drive for everything else and backup, as everybody does, and my Wubi installation is on drive D: too. Then I tried to do a little extreme thing: I made a shortcut to the D: drive within Xubuntu. The problem is that suddenly all my files disappeared. The folders stayed the same, but the files disappeared. The drive still has the files, I know, because it is still full, but I cannot see any of them. I tried checking for errors and some basic data recovery, which didn't work at all. Any help?

    Read the article

  • Preventing an Apache 2 Server from Logging Sensitive Data

    - by jstr
    Apache 2 by default logs the entire request URI including query string of every request. What is a straight forward way to prevent an Apache 2 web server from logging sensitive data, for example passwords, credit card numbers, etc., but still log the rest of the request? I would like to log all log-in attempts including the attempted username as Apache does by default, and prevent Apache from logging the password directly. I have looked through the Apache 2 documentation and there doesn't appear to be an easy way to do this other than completely preventing logging of these requests (using SetEnvIf). How can I accomplish this?
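
    Two hedged pointers rather than a definitive recipe: logging %U instead of the full %r in a custom LogFormat drops the entire query string (the blunt fix), and Apache can also pipe its access log through a program (CustomLog "|/usr/local/bin/scrub-log" combined, path hypothetical) so that a scrubber blanks only the sensitive parameters. The core of such a scrubber, sketched in Python:

        #!/usr/bin/env python3
        import re
        import sys

        # parameter names to blank out; extend to match your own forms (assumed list)
        SENSITIVE = ("password", "passwd", "card", "cvv")
        pattern = re.compile(r"\b(%s)=[^&\s\"]*" % "|".join(SENSITIVE), re.IGNORECASE)

        for line in sys.stdin:
            sys.stdout.write(pattern.sub(r"\1=REDACTED", line))
            sys.stdout.flush()  # piped log programs must not buffer output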

    Read the article

  • Removing extra commas in CSV without another data source

    - by fi-no
    We have a large database of customer addresses that was exported from an SQL database to CSV. Whenever a company has a comma in its name, it (predictably) throws the whole file out of whack. Unfortunately, there are so many instances of this (and of commas in the second address line) that the whole CSV (~100k rows) is a huge mess. The obvious fix is to export the data again in a different, non-comma-reliant format, but access to that SQL database is more or less impossible at the moment. I've tried a few tools and brainstormed about combining things to fix this, but I figured asking couldn't hurt. Thanks!
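
    One heuristic that can help while a re-export is off the table (a sketch under loud assumptions: the true column count is known, and in an over-long row all surplus commas belong to one known column): fold the extra fields back into that column. It will misfire on rows where two columns both contain commas, so the output needs spot-checking. Column positions and file names are placeholders.

        import csv

        EXPECTED = 8   # assumed true column count of the export
        NAME_COL = 1   # assumed index of the company-name column

        with open("export.csv", newline="") as src, \
             open("fixed.csv", "w", newline="") as dst:
            writer = csv.writer(dst)
            for row in csv.reader(src):
                extra = len(row) - EXPECTED
                if extra > 0:
                    # re-join the fields that stray commas split apart
                    name = ",".join(row[NAME_COL:NAME_COL + 1 + extra])
                    row = row[:NAME_COL] + [name] + row[NAME_COL + 1 + extra:]
                writer.writerow(row)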

    Read the article

  • DataGrid height issue in a nested DataGrid (when using three DataGrids)

    - by prince23
    Hi, I have a nested DataGrid setup (three DataGrids), and the data itself shows with no issue. The first DataGrid has 5 rows. Clicking any row in the first DataGrid shows a second DataGrid (which has 10 rows). Say I click row 3 in the second grid: it shows further records in a third DataGrid. When I click row 5 in the second grid, it likewise shows further records in the third DataGrid. All the records display fine. The problem: when I collapse row 3 in the second grid, it does collapse, but the height the third DataGrid took to show its records remains, so a blank space shows between the second DataGrid and the third one. In the first column of every grid I have a button with the expand/collapse code below; this is the functionality I implement in every grid's button. Hope my question is clear. Any help would be great.

        private void btn1_Click(object sender, RoutedEventArgs e)
        {
            try
            {
                Button btnExpandCollapse = sender as Button;
                Image imgScore = (Image)btnExpandCollapse.FindName("img");
                DependencyObject dep = (DependencyObject)e.OriginalSource;
                // walk up the visual tree until we reach the clicked row
                while ((dep != null) && !(dep is DataGridRow))
                {
                    dep = VisualTreeHelper.GetParent(dep);
                }
                // if we found the clicked row, toggle its details visibility
                if (dep != null && dep is DataGridRow)
                {
                    DataGridRow row = (DataGridRow)dep;
                    if (row.DetailsVisibility == Visibility.Collapsed)
                    {
                        imgScore.Source = new BitmapImage(new Uri("/Images/a1.JPG", UriKind.Relative));
                        row.DetailsVisibility = Visibility.Visible;
                    }
                    else
                    {
                        imgScore.Source = new BitmapImage(new Uri("/Images/a2.JPG", UriKind.Relative));
                        row.DetailsVisibility = Visibility.Collapsed;
                    }
                }
            }
            catch (System.Exception)
            {
                // note: silently swallowing exceptions here hides failures
            }
        }

    The handlers for the second grid (btn2_Click) and the third grid (another btn1_Click) are identical to the above apart from their names.

    Read the article

< Previous Page | 193 194 195 196 197 198 199 200 201 202 203 204  | Next Page >