Search Results

Search found 81583 results on 3264 pages for 'open data'.

  • Free Data Recovery Software

    - by Morais
    I lost some of my very important files from my computer. I tried the Windows Vista recovery from a restore point but could not get the files back. Can someone please suggest some free data recovery tools? This is very urgent.

    Read the article

  • Does moving a file outside NTFS lose data in alternate data streams?

    - by jay
    I have a lot of files on a machine running Windows Server 2008 that I want to move to a Fedora machine. How can I keep the attributes stored in, for example, media files (date taken, rating, length, etc.) when transferring them outside the realm of NTFS's alternate data streams? I'm aware that similar metadata exists in other file systems, but what happens when you move these files? And what's the best way to retain the metadata in other file systems?

    Read the article

  • Data recovery for Mac

    - by josh3736
    I've got a Mac that won't boot and I'd like to recover whatever data I can before wiping the hard drive and reinstalling. I'm looking for something similar to TRK (which is Windows-centric) — boot from CD, mount the hard drive, and copy to a network share. I just noticed TRK does appear to support HFS+; has anyone had success with this?

    Read the article

  • Data Center Design and Preferences

    - by Warner
    When either selecting a data center as a co-location facility or designing a new one from scratch, what would your ideal specification be? Fundamentally, diversified power sources, multiple ISPs, redundant generators, UPS, cooling, and physical security are all desirable. What are the additional key requirements that someone might not consider on the first pass? What are the functional details someone might not consider during the initial high-level design?

    Read the article

  • Best practice for organizing/storing character/monster data in an RPG?

    - by eclecto
    Synopsis: I'm attempting to build a cross-platform RPG app in Adobe Flash Builder and am trying to figure out the best class hierarchy and the best way to store the static data used to build each of the individual "hero" and "monster" types. My programming experience, particularly in AS3, is embarrassingly small. My ultra-alpha method is to include a "_class" object in the constructor for each instance. The _class, in turn, is a static Object pulled from a class created specifically for that purpose, so things look something like this:

        // Character.as
        package {
          public class Character extends Sprite {
            public var _strength:int; // etc.
            public function Character(_class:Object) {
              _strength = _class._strength; // etc.
            }
          }
        }

        // MonsterClasses.as
        package {
          public final class MonsterClasses extends Object {
            public static const Monster1:Object = {
              _strength: 50 // etc.
            };
            // etc.
          }
        }

        // Some other class in which characters/monsters are created:
        var myMonster = new Character(MonsterClasses.Monster1);

    Another option I've toyed with is making each character class/monster type its own subclass of Character, but I'm not sure whether that would be efficient or even make sense, considering that these classes would only be used to store variables and would add no new methods. On the other hand, it would make creating instances as simple as var myMonster = new Monster1; and potentially cut down on the overhead of having to read a class containing the data for, at a conservative preliminary estimate, over 150 monsters just to fish out the one monster I want (assuming, and I really have no idea, that such a thing might cause any kind of slowdown in execution). Long story short, I want a system that's both efficient at compile time and easy to work with during coding. Should I stick with what I've got or try a different method? As a subquestion, I'm also assuming that the best way to store data that will be bundled with the final game and not read externally is simply to declare everything in AS3. It seems to me that if I used, say, XML or JSON I'd have to use the associated AS3 classes and methods to pull in the data, parse it, and convert it to AS3 object(s) anyway, so it would be inefficient. Right?
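
    The data-driven pattern described above ports almost verbatim to other ECMAScript-family languages, which makes it easy to prototype outside Flash Builder. A minimal JavaScript sketch of the same idea, with invented stat names, might look like this:

        // One plain object per monster type, acting as a static template
        // table (the analogue of MonsterClasses).
        var MonsterTemplates = {
          goblin: { strength: 50, hitPoints: 30 },
          troll:  { strength: 80, hitPoints: 120 }
        };

        // Character copies its stats from whichever template it is handed.
        function Character(template) {
          this.strength  = template.strength;
          this.hitPoints = template.hitPoints;
        }

        var myMonster = new Character(MonsterTemplates.goblin);

    One shared table plus a single constructor keeps the class count constant no matter how many of the 150+ monster types end up in the data.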

    Read the article

  • Changing filesystem types "safely"

    - by warren
    Back in Windows 95 OSR2 (I believe), there was a conversion tool that would take your extant FAT16 partition and change it to FAT32 non-destructively (most of the time). Are there any tools like that now for going from one file system type to another in situ, without destroying the data? For example, from ext3 to ext4? Or NTFS to XFS?

    Read the article

  • Are there negative impacts of open source on a commercial environment?

    - by Lostsoul
    I know this is not a good fit for Stack Overflow but wasn't sure if it was a good fit for this site either, so let me know if it's not and I'll delete it. I love programming for fun, but my role in my company is not technical. I have always loved the hacker culture and have been trying to drive that openness within my company from day one. My company has a very broad range of products and there are a few that are not strategic to us, so I wanted to open source them (so we can focus on what makes us unique and open source the products that every firm has). Our industry does not open source (we would be the first firm to try this) and the feedback I'm getting from my management team is that either 1) we'll destroy the industry or 2) all competitive commercial firms will unite against us and we'll be wiped out either way. I disagreed on both points because I think transparency will only grow our industry and our firm (think of McDonald's/KFC sharing their recipe openly: people may copy you, competitors may target you, but customers also may feel more comfortable buying your product. The value add, I believe, is in the delivery and experience, not in hoarding the recipe). It's a big battle in my firm right now between the IT people, who have seen the positive effects of sharing, and the business people, who think we'll be giving up everything (they prefer we sell the parts we want to open source, but in their defense this is standard when divesting something). Our industry is very secretive and I don't want to put anyone (even my competitors' employees) out of a job, yet I don't want to protect inefficient people by not being open with everyone. And I've seen so many amazing technologies created in interesting ways just by giving people the freedom to take apart code and put it back together. I'm interested in hearing people's thoughts (they don't have to be about my specific situation; I'm looking for the general lessons). It's a very stressful decision (but one I feel I must make) because if we go the open source route then there will be no going back. So what are your thoughts? Does open sourcing apply generally or is it only really applicable to software? Is it overall good for people in the industry and outside? I'm actually more interested in the negative effects (although positive ones are welcome as well).

    Update: Long story short, although code is involved, this is not so much about code as it is about the idea of open sourcing. We are a mid-sized quant hedge fund. We have some unique strategies but also have the standard long/short, arbitrage, global macro, etc. funds. We are keeping the unique funds we have, but the other stuff that everyone else has we are considering open sourcing (we have put years of work and millions of dollars into it; our funds are pretty popular and our performance is in the first or second quartile, so I suspect there will be interest, but I don't know to what extent). The goal is not to get a community to work for us or anything; the goal is to let anyone who wants to tinker with it do so and create anything they want (it will not be part of our product line, although I may unofficially allocate some of our staff's time to assist any community that grows). Although the code base is quite large, the value in this is the industry knowledge and approaches we have acquired (there are many books on artificial intelligence and quant trading, but they are often years behind what's really going on, as most firms forbid their staff from discussing what they are doing).
    We are also considering, after we move our clients out, letting the software still run and output the resulting portfolios for free so people can at least see the results (as long as we have available infrastructure). I think our main choices are: we can continue to fight for market share in products that are becoming commoditized, we can shut the funds/products down (and keep the code, but no one outside of our firm will ever learn from it), or we can open source it and let people do what they want. By open sourcing it, my idea is that the talent pool in the industry will grow, because right now most of our hires have the same background (CFA, MBA, similar school, same experience, etc., because we can't spend time training people, so the industry 'standardizes' most people and thus the firms themselves start to look/act similar), but this may allow us to identify talent that has never been in the industry before (if we use a GPL license then, as people learn from what we did, we can learn from what they do as well and maybe apply it to other areas of our firm). I see a lot of benefits but not many negatives, while my peers at the company see the opposite.

    Read the article

  • How to recover data from a partially overwritten partition

    - by shredder12
    By mistake, I configured a 900GB partition to be part of a 50GB RAID array. The sync is complete, and my understanding is that only the first 50GB of the bigger partition was overwritten. How do I recover the rest of the data? When I try to mount this partition by identifying it as ext3, it mounts only the 50GB of overwritten space. This partition was earlier divided into various logical volumes (all ext3 filesystems) through LVM. Any suggestions?

    Read the article

  • Safely reboot prior to recovering data

    - by ELO
    What is the safest way (without additional writing to the disk) to power down a computer whose deleted files you want to recover, in order to boot from a rescue medium? In the case of a desktop computer, pulling the power cord looks like the most direct solution, but are there possible side effects apart from losing unsaved data? The laptop seems more problematic, with removing the battery being the equivalent, but is that a good idea overall?

    Read the article

  • Using www-data through SSH

    - by Fluidbyte
    For development purposes I'm using www-data (on an Ubuntu 11.10 server) to SSH in and fire git commands and basic stuff against the webroot. I don't have things like command history, coloring, etc. like I do when I SSH in as any other user, so I'm curious how to get this working. I'm assuming I need a ".bashrc" file, but I'm not sure what to include or (more importantly, since I could just copy the one from another user) where it goes.

    Read the article

  • Should I keep investing in data structures and algorithms?

    - by Chiron
    These days, I'm investing heavily in data structures and algorithms and trying to solve some programming puzzles. I'm trying to code and solve them with Java and Clojure. Am I wasting my time? Should I invest more in technologies and frameworks that I already know, in order to gain deeper knowledge (the ins and outs) and be able to code with them more quickly? By studying data structures and algorithms, am I going to become a better programmer, or are those subjects only important during the college years?

    Read the article

  • Data inaccessible from WHS drive

    - by Eakraly
    Hi all, I had a Windows Home Server machine that crashed. Before doing anything else, I decided to get the data off of it, so I took the hard drive and connected it to a Windows 7 PC. What I get is that I cannot access almost any file! I do see the directory structure, and I can open small files like .txt and .ini, but bigger files like .iso and video are a no-go. The same goes for Ubuntu and OS X: I can see the files and even copy them, but they are corrupted. Any ideas as to what the problem is?

    Read the article

  • How do I implement IDataServiceMetadataProvider and tell my Data Service to use that custom provider?

    - by Pwninstein
    There's no obvious entry point for implementing a custom provider for an ADO.NET Data Service using IDataServiceMetadataProvider, and then telling a Data Service to use that provider. Has anyone had any luck in this area? I've tried implementing this interface on my Data Source class, but none of my breakpoints are hit. There is also no (obvious) way to set the provider from the Data Service's DataServiceConfiguration parameter passed in to the InitializeService function. Any help would be appreciated. Thanks!

    Related reading: Data Services Providers (ADO.NET Data Services); IDataServiceMetadataProvider Members

    Read the article

  • Decode received multipart/form-data request in Cocoa

    - by Snej
    Hi: I wonder if there is any possibility to explicitly decode an incoming multipart/form-data POST request. Is there any lib to handle this safely? Several files are embedded in this request and I want to save these files individually.

        NSData *data = [(id)CFHTTPMessageCopyBody(request) autorelease];

    The Content-Type header is:

        Content-Type: multipart/form-data; boundary=0xKhTmLbOuNdArY

    The data content is:

        --0xKhTmLbOuNdArY
        Content-Disposition: form-data; name="file1"; filename="fileName1.extension"
        Content-Type: application/octet-stream; charset=utf-8
        .........
        --0xKhTmLbOuNdArY
        Content-Disposition: form-data; name="file2"; filename="fileName2.extension"
        Content-Type: application/octet-stream; charset=utf-8
        .........
        --0xKhTmLbOuNdArY--
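
    Whatever Cocoa library ends up doing the real work, the boundary-splitting logic itself is simple to reason about. A rough JavaScript sketch of the parse, which assumes the body is text-safe; real octet-stream payloads need byte-level handling rather than string splitting:

        // Split a multipart/form-data body into its parts using the boundary
        // from the Content-Type header. For illustration only.
        function parseMultipart(body, boundary) {
          var parts = body.split("--" + boundary);
          var files = [];
          for (var i = 0; i < parts.length; i++) {
            var part = parts[i];
            // Headers and content are separated by a blank line (CRLF CRLF).
            var headerEnd = part.indexOf("\r\n\r\n");
            if (headerEnd === -1) continue; // preamble or the closing "--"
            var headers = part.slice(0, headerEnd);
            var content = part.slice(headerEnd + 4);
            var match = headers.match(/filename="([^"]*)"/);
            if (match) files.push({ filename: match[1], content: content });
          }
          return files;
        }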

    Read the article

  • jQuery DataTables question: centering column data after data insertion

    - by Chris
    I have a data table that is initially empty and is populated after a particular JavaScript call. After the data is inserted into the table, I'd like to center all of the data in one of the columns. I tried specifying this at the initialization step in this way:

        dTable = $('#dt').dataTable({
          'aoColumns': [ null, null, { "sClass" : "center" }]
        });

    The data in the third column was not centered after the insertions were complete. I tried modifying aoColumns after the insertions and redrawing the table as well:

        dTable.fnSettings().aoColumns[2].sClass = "center";
        dTable.fnDraw();

    This did not work either. So my question is simply: how should I go about telling the data table to center the data in the third column? Thanks in advance for your suggestions. Chris
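
    One thing worth checking, offered as a guess: sClass only attaches a class name to the cells, so nothing visibly changes unless a matching CSS rule actually exists. A sketch along those lines, using the legacy 1.x API shown in the question:

        /* assumed to exist in the stylesheet: td.center { text-align: center; } */

        // Declaring the class at initialisation means every row, including
        // rows inserted later (e.g. via fnAddData), is rendered with it.
        var dTable = $('#dt').dataTable({
          "aoColumns": [null, null, { "sClass": "center" }]
        });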

    Read the article

  • Using document.createDocumentFragment() with child DOM elements that contain jQuery.data

    - by taber
    I want to use document.createDocumentFragment() to create an optimized collection of HTML elements that contain ".data" coming from jQuery (v1.4.2), but I'm kind of stuck on how to get the data to surface from the HTML elements. Here's my code:

        var genres_html = document.createDocumentFragment();
        $(xmlData).find('genres').each(function(i, node) {
          var genre = document.createElement('a');
          $(genre).addClass('button')
            .attr('href', 'javascript:void(0)')
            .html( $(node).find('genreName:first').text() )
            .data('genreData', { id: $(node).find('genreID:first').text() });
          genres_html.appendChild( genre.cloneNode(true) );
        });
        $('#list').html(genres_html);
        // error: $('#list a:first').data('genreData') is null
        alert($('#list a:first').data('genreData').id);

    What am I doing wrong here? I suspect it's probably something with .cloneNode() not carrying over the data when the element is appended to the documentFragment. Sometimes there are tons of rows, so I want to keep things pretty optimized, speed-wise. Thanks!
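
    One plausible explanation: jQuery keeps .data in an internal cache keyed to the original element, so a cloneNode(true) copy arrives without its cache entry. A sketch of the workaround, appending the element itself and avoiding .html() (which would discard the data again):

        var genres_html = document.createDocumentFragment();
        $(xmlData).find('genres').each(function(i, node) {
          var genre = document.createElement('a');
          $(genre).addClass('button')
            .attr('href', 'javascript:void(0)')
            .html($(node).find('genreName:first').text())
            .data('genreData', { id: $(node).find('genreID:first').text() });
          // Append the original node; a cloneNode(true) copy loses the
          // jQuery data cache entry attached to this element.
          genres_html.appendChild(genre);
        });
        $('#list').empty().append(genres_html);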

    Read the article

  • Workflow for statistical analysis and report writing

    - by ws
    Does anyone have any wisdom on workflows for data analysis related to custom report writing? The use case is basically this:

    1. Client commissions a report that uses data analysis, e.g. a population estimate and related maps for a water district.
    2. The analyst downloads some data, munges the data and saves the result (e.g. adding a column for population per unit, or subsetting the data based on district boundaries).
    3. The analyst analyzes the data created in (2), gets close to her goal, but sees that she needs more data and so goes back to (1).
    4. Rinse and repeat until the tables and graphics meet QA/QC and satisfy the client.
    5. Write the report incorporating tables and graphics.
    6. Next year, the happy client comes back and wants an update. This should be as simple as updating the upstream data with a new download (e.g. get the building permits from the last year) and pressing a "RECALCULATE" button, unless specifications change.

    At the moment, I just start a directory and ad-hoc it the best I can. I would like a more systematic approach, so I am hoping someone has figured this out... I use a mix of spreadsheets, SQL, ArcGIS, R, and Unix tools. Thanks!

    PS: Below is a basic Makefile that checks for dependencies on various intermediate datasets (with ".RData" suffix) and scripts (".R" suffix). Make uses timestamps to check dependencies, so if you 'touch ss07por.csv', it will see that this file is newer than all the files/targets that depend on it and execute the given scripts in order to update them accordingly. This is still a work in progress, including a step for putting into an SQL database, and a step for a templating language like Sweave. Note that Make relies on tabs in its syntax, so read the manual before cutting and pasting. Enjoy and give feedback! http://www.gnu.org/software/make/manual/html_node/index.html#Top

        R=/home/wsprague/R-2.9.2/bin/R

        persondata.RData : ImportData.R ../../DATA/ss07por.csv Functions.R
                $R --slave -f ImportData.R

        persondata.Munged.RData : MungeData.R persondata.RData Functions.R
                $R --slave -f MungeData.R

        report.txt : TabulateAndGraph.R persondata.Munged.RData Functions.R
                $R --slave -f TabulateAndGraph.R > report.txt

    Read the article

  • Data Model Evolution

    - by redleafong
    Hey guys, when writing code I keep seeing requirements to change data models (e.g. adding/changing/removing data members from a class). When these data models belong to an interface, it seems difficult to change them without breaking the existing client code. So I am wondering if there is any best practice for designing interfaces/data models in a way that minimizes the impact during evolution. The closest thing I can find from Google is data contract versioning, but that seems to be a .NET-specific topic. I am wondering if the same practice applies in the Java world, or whether there is a different or more generic way to deal with data model evolution. Thanks
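
    One generic technique that applies in Java as much as anywhere is the "tolerant reader": consumers read only the fields they know, default anything missing, and ignore anything extra, so adding a member stops being a breaking change. A small JavaScript illustration of the idea (field names invented):

        // Version 1 clients know nothing about the later "middleName" field.
        // Reading with explicit defaults keeps old and new payloads
        // equally acceptable.
        function readPerson(json) {
          var raw = JSON.parse(json);
          return {
            firstName:  raw.firstName  || "",
            lastName:   raw.lastName   || "",
            middleName: raw.middleName || ""  // added in v2; absent in v1 data
          };
        }

        readPerson('{"firstName":"Ada","lastName":"Lovelace"}'); // v1 payload still works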

    Read the article

  • Should jQuery data be chainable?

    - by pedalpete
    I'm trying to add multiple jQuery data entries to a single element. I suspected that the following would work:

        jQuery('td.person#a'+personId).data('email',thisPerson.email).data('phone',thisPerson.phone);

    However, I am getting nothing but errors when I do this.

        jQuery('td.person#a'+personId).data('email',thisPerson.email);
        jQuery('td.person#a'+personId).data('phone',thisPerson.phone);

    Is there another way to get more than one data entry on an element? Hopefully chained?
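
    For what it's worth, .data(key, value) returns the jQuery set, so the chained form is legal; errors there more often mean the selector matched nothing or thisPerson is undefined. A quick sketch to isolate that, plus the object-literal form of .data, which (if memory serves) needs jQuery 1.4.3 or later:

        var cell = jQuery('td.person#a' + personId);
        console.log(cell.length);   // 0 here means the selector found nothing
        console.log(thisPerson);    // undefined here would explain the errors

        cell.data('email', thisPerson.email)
            .data('phone', thisPerson.phone); // chaining works: .data() returns the set

        // jQuery 1.4.3+ also accepts an object of key/value pairs:
        // cell.data({ email: thisPerson.email, phone: thisPerson.phone });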

    Read the article

  • Using protocol buffers for a comprehensive data strategy for Windows Mobile devices

    - by Steve
    I have started reading some of the posts related to protocol buffers. The serialization method seems very appropriate for the transfer of data to and from web servers. Has anyone considered using a method like this to save and retrieve data on the mobile device itself (i.e. as a replacement for a traditional database/ORM layer)? Where would the data be persisted? How would the data be queried? Would it make sense to store the data in a traditional database (SQL CE or SQLite) with a few "searchable" columns and then one column for the serialized data? Thoughts? Am I out on a limb here? Thank you!
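
    The hybrid layout floated at the end, a few indexed columns plus one opaque serialized column, is a recognised pattern. A rough JavaScript sketch of the record shape, using JSON.stringify as a stand-in for the protocol-buffer encoder (all names are illustrative):

        // Only the fields that need WHERE clauses get their own columns;
        // everything else travels inside the serialized blob.
        function toRow(order) {
          return {
            id:         order.id,             // searchable column
            customerId: order.customerId,     // searchable column
            createdAt:  order.createdAt,      // searchable column
            payload:    JSON.stringify(order) // stand-in for protobuf bytes
          };
        }

        // Queries hit the indexed columns; the blob is only decoded
        // after a row has been selected.
        function fromRow(row) {
          return JSON.parse(row.payload);
        }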

    Read the article

  • Estimating the boundary of arbitrarily distributed data

    - by Dave
    I have two-dimensional discrete spatial data. I would like to make an approximation of the spatial boundaries of this data so that I can produce a plot with another dataset on top of it. Ideally, this would be an ordered set of (x,y) points that matplotlib can plot with the plt.Polygon() patch. My initial attempt is very inelegant: I place a fine grid over the data, and where data is found in a cell, a square matplotlib patch is created for that cell. The resolution of the boundary thus depends on the sampling frequency of the grid. Here is an example, where the grey region is the cells containing data and black is where no data exists. OK, problem solved - why am I still here? Well... I'd like a more "elegant" solution, or at least one that is faster (i.e. I don't want to get on with "real" work, I'd like to have some fun with this!). The best way I can think of is a ray-tracing approach - e.g.:

    1. From xmin to xmax, at y = ymin, check if the data boundary is crossed in intervals of dx.
    2. At y = ymin + dy, do step 1.
    3. Do steps 1-2, but now sample in y.

    An alternative is defining a centre and sampling in r-theta space - i.e. radial spokes in dtheta increments. Both would produce a set of (x,y) points, but then how do I order/link neighbouring points to create the boundary? A nearest-neighbour approach is not appropriate as, for example (to borrow from geography), an isthmus (think of Panama connecting N&S America) could then close off and isolate regions. This also might not deal very well with the holes seen in the data, which I would like to represent as a different plt.Polygon. The solution perhaps comes from solving an area maximisation problem:

    1. For a set of points defining the data limits, what is the maximum contiguous area contained within those points?
    2. To form the enclosed area, what are the neighbouring points for the nth point?
    3. How will the holes be treated in this scheme - is this erring into topology now?

    Apologies, much of this is me thinking out loud. I'd be grateful for some hints, suggestions or solutions. I suspect this is an oft-studied problem with many solution techniques, but I'm looking for something simple to code and quick to run... I guess everyone is, really! Cheers, David
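
    As a baseline only: a convex hull is the simplest ordered-(x,y) boundary to compute, though by construction it cannot represent the isthmus or hole cases raised above. A compact JavaScript sketch of Andrew's monotone chain, on points given as [x, y] pairs:

        // Monotone chain convex hull: returns boundary points in
        // counter-clockwise order. O(n log n); concavities and holes are lost.
        function convexHull(points) {
          var pts = points.slice().sort(function(a, b) {
            return a[0] - b[0] || a[1] - b[1];
          });
          function cross(o, a, b) {
            return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0]);
          }
          var lower = [];
          for (var i = 0; i < pts.length; i++) {
            while (lower.length >= 2 &&
                   cross(lower[lower.length - 2], lower[lower.length - 1], pts[i]) <= 0)
              lower.pop();
            lower.push(pts[i]);
          }
          var upper = [];
          for (var j = pts.length - 1; j >= 0; j--) {
            while (upper.length >= 2 &&
                   cross(upper[upper.length - 2], upper[upper.length - 1], pts[j]) <= 0)
              upper.pop();
            upper.push(pts[j]);
          }
          lower.pop(); upper.pop(); // endpoints are duplicated across halves
          return lower.concat(upper);
        }

    Anything concave calls for an alpha-shape-style generalisation; the hull is still handy as an outer bound or sanity check.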

    Read the article

  • Entity data field validation and partial data submission

    - by pradeeptp
    I have an entity class that has 10 fields. I am using the MS Validation Application Block to mark all fields as mandatory (IsRequired). I am implementing a security feature in which, during an update of the data, not all the fields in the entity class will have data. For example, some users can only view 5 fields, while others see all 10 fields during an update in the GUI. I have the following options:

    1) Bring all the data for all the fields from the DB table and hide the ones not accessible to the user in the GUI. I am concerned about the performance, because the GUI will pull unnecessary data every time.

    2) Bring only the data (e.g. only 5 fields) that the user is permitted to access/view in the GUI. On submit, the validation block will throw an exception because all fields are marked as IsRequired and only data for 5 fields is sent back to the server.

    I want to know if there are any other good approaches to solving problems like this. I am using .NET 3.5. Thanks.
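
    A variation on option 2 that avoids the exception is to validate only the fields actually submitted, leaving rules for hidden fields unevaluated. Whether the Validation Application Block supports this directly I can't say, but the idea itself is small; a JavaScript illustration with an invented rule table:

        // Required-field rules keyed by field name.
        var rules = { name: true, email: true, phone: true, address: true, ssn: true };

        // Validate only the fields present in this submission; fields the
        // user was never shown are simply not checked.
        function validatePartial(submitted) {
          var errors = [];
          for (var field in submitted) {
            if (rules[field] && (submitted[field] == null || submitted[field] === "")) {
              errors.push(field + " is required");
            }
          }
          return errors;
        }

        validatePartial({ name: "Ann", email: "" }); // ["email is required"]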

    Read the article

  • Real-Time Data Streaming to Multiple Clients

    - by AriX
    Hi all, I would like to write an application which will stream data at 2400 baud over the internet from a server to multiple clients. The data will be the same for each client, and it would probably be fine to send it as a UDP stream, since exact data accuracy is not a 100% necessity, as there are checksums built-in to the data format and the data will be sent repeatedly on a loop. What is the best way to do this? I would want to write the server in C, but I don't know how to best multicast this data to the different clients that would be receiving it all over the country. I'm sure this seems like a pretty draconian way to go about my project, as opposed to just using some sort of fetch command, but I'd prefer to do it this way if possible.
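
    The sending side of that loop is only a few lines in most environments. A Node.js sketch of the multicast variant (group address, port, and pacing are placeholders); note that native multicast rarely traverses the public internet, so reaching clients all over the country usually degrades to looping sock.send over per-client unicast addresses:

        var dgram = require('dgram');

        var GROUP = '239.1.2.3'; // placeholder multicast group
        var PORT = 5000;         // placeholder port

        var sock = dgram.createSocket('udp4');
        sock.bind(function() {
          sock.setMulticastTTL(64); // let datagrams leave the local subnet
          setInterval(function() {
            // The same frame goes to every listener; the format's built-in
            // checksums and the repeat loop tolerate occasional UDP loss.
            var frame = Buffer.from('frame with embedded checksum');
            sock.send(frame, 0, frame.length, PORT, GROUP);
          }, 1000); // pace roughly to the 2400 baud budget
        });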

    Read the article

  • Data-driven charts and graphs from XML to SVG

    - by garymlewis
    I asked this question a week ago, but did not do a good job of describing the problem. Here's a second attempt. I'd like to produce data-driven charts, graphs, and other data visualizations, starting with data in an XML database and ending up with the visualizations as SVG. Here's an example from the W3C. It uses JavaScript to create a stacked bar chart as SVG from XML. I'd like to do something similar but use a graphics library (or ???) instead of JS to handle the construction of axes, labels, titles, data points, etc. My question, then: what are the options that I should consider ... things like Raphael I suppose, but initially I'd like to cast a wide net and look at many different options. My experience is all with static data visualizations using statistics packages like R, but eventually I'd like to create interactive data visualizations with HTML5/CSS3/SVG. Any help would be much appreciated. Thanks.
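
    Regardless of which library handles the axes and labels, the core mechanism behind examples like the W3C one is small: create namespaced SVG elements and size them from the data. A bare JavaScript sketch of a data-driven bar chart, with invented values standing in for the XML query results:

        var SVG_NS = 'http://www.w3.org/2000/svg';
        var data = [12, 30, 22, 8]; // invented values, e.g. parsed from XML

        var svg = document.createElementNS(SVG_NS, 'svg');
        svg.setAttribute('width', 200);
        svg.setAttribute('height', 120);

        // One rect per datum; height and y are derived from the value.
        data.forEach(function(value, i) {
          var rect = document.createElementNS(SVG_NS, 'rect');
          rect.setAttribute('x', i * 45 + 10);
          rect.setAttribute('y', 110 - value * 3);
          rect.setAttribute('width', 35);
          rect.setAttribute('height', value * 3);
          rect.setAttribute('fill', '#4682b4');
          svg.appendChild(rect);
        });

        document.body.appendChild(svg);

    A library earns its keep once axes, scales, and labels enter the picture; the element-creation core stays exactly this.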

    Read the article
