Search Results

Search found 58779 results on 2352 pages for 'cleaned data'.

  • best data+partition recovery software

    - by Pennf0lio
    I accidentally formatted my Drive D, which contained all my backups and documents. I had separated my files onto Drive D hoping to keep them safe. When I used Acronis Recovery to install a new OS with some pre-installed applications onto my HDD, I didn't realize I had also formatted/erased Drive D. Now Drive D is unpartitioned. I am in really deep trouble and need some urgent help. Please recommend software that can at least restore my old drive with the files it contained. I'm assuming most of you think this is a duplicate of some old questions here, but I'm not looking for file-by-file data recovery; I need to recover the whole partition with the files. I used to use "Recuva", but it only recovers individual files, not whole folders with the files in them. Please advise. Thank you!

    Read the article

  • Data Validation of a Comma Delimited List

    - by Brad
    I need a simple way of taking a comma-separated list in a cell and providing a drop-down box to select one of the values. For example, the cell could contain: 24, 32, 40, 48, 56, 64. In a further cell, using Data Validation, I want to provide a drop-down list to select ONE of those values. I need to do this without VBA or macros, please. Apologies, I want this to work with Excel 2010 and later. I have been playing around with counting the number of commas in the list and then trying to split it into a number of rows of single numbers, etc., with no joy yet.

    Read the article

  • Excel Macro Help - Data Input

    - by B-Ballerl
    I want to develop a macro where, in my Excel worksheet, I type a date in a specific cell and the macro goes into a folder containing text files (a database, you could say). I want it to find the corresponding file, whose name is written as a date, run the data through a delimiter, and paste it into the cells directly below where I originally put the date. I'm very new to macros, so if you answer, please try to be a little simpler than you might usually be. Thanks in advance if anyone can help!

    Read the article

  • Recovering data from a Silicon Image SiI3114 RAID

    - by Isaac Truett
    I have a set of 3 disks in RAID 5 originally created with a Silicon Image SiI3114 on-board RAID controller. The old motherboard is dead. The new motherboard (which has a different raid controller) won't boot from the array. I have no reason to believe that the drives are damaged or corrupted. I'm 99% sure that the problem is that the new controller isn't compatible or I'm not setting it up properly. Is it possible to recover data from the drives using a different controller? Would a PCI card like this one allow me to read from the array again?

    Read the article

  • Data inaccessible from WHS drive

    - by Eakraly
    Hi all, I had a Windows Home Server machine that crashed. Before doing anything else, I decided to get the data off it, so I took the hard drive and connected it to a Windows 7 PC. What I found is that I cannot access almost any file! I can see the directory structure, and I can open small files like .txt and .ini, but bigger files like .iso and video files are a no-go. The same goes for Ubuntu and OS X: I can see the files and even copy them, but they come out corrupted. Any ideas what the problem is?

    Read the article

  • How to recover data from my disk - I accidentally copied an ISO onto it with dd

    - by sijoune
    I wanted to create a bootable USB from an ISO image, and I accidentally set the output of dd to one of my hard disks instead of my USB drive. The ISO was 3.3 GB and my disk is 1 TB, and it was almost full. Can I at least restore the data that has not been overwritten? Right now I can't even mount it. I get this error: Error mounting /dev/sdd1 at /media/main/UDF Volume: Command-line `mount -t "udf" -o "uhelper=udisks2,nodev,nosuid,uid=1000,gid=1000,iocharset=utf8,umask=0077" "/dev/sdd1" "/media/main/UDF Volume"' exited with non-zero exit status 32: mount: wrong fs type, bad option, bad superblock on /dev/sdd1, missing codepage or helper program, or other error. In some cases useful info is found in syslog - try dmesg | tail or so. Also, since I know which filesystem my disk used, if I reformat it to that filesystem, is there any chance I can mount it and retrieve the rest of the files?

    Read the article

  • SQLAuthority News – Download SQL Azure Labs Codename “Data Explorer” Client

    - by pinaldave
    Microsoft SQL Azure Labs has recently released the Data Explorer client. I had been looking forward to a visualization tool for quite a while, and I am delighted to see this tool. I will be trying it out in the coming week and will post my experience here. I have listed a few of the resources related to Data Explorer at the end. Please let me know if I have missed any and I will add them. With "Data Explorer" you can: identify the data you care about from the sources you work with (e.g. Excel spreadsheets, files, SQL Server databases); discover relevant data and services via automatic recommendations from the Windows Azure Marketplace; enrich your data by combining it and visualizing the results; collaborate with your colleagues to refine the data; publish the results to share them with others or power solutions. The Data Explorer Client package contains the Data Explorer workspace as well as an Office plugin that integrates Data Explorer into Excel. Resources: Download Data Explorer, Data Explorer Blog, Desktop Client Video of Contoso Bikes and Frozen Yogurt (Data Explorer). Please note that this is not the final release of the product; please do not attempt this on a production server. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Azure, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology

    Read the article

  • Big Data for Retail

    - by David Dorf
    Right up there with mobile, social, and cloud is the term "big data," which seems to be popping up a lot in the press these days. Companies like Google, Yahoo, and Facebook have popularized a new class of data technologies meant to solve the problem of processing large amounts of data quickly. I first mentioned this in a posting back in March 2009. Put simply, big data implies datasets so large they can't normally be processed using a standard transactional database. The term "noSQL" is often used in this context as well. Actually, using parallel processing within the Oracle database combined with Exadata can achieve impressive results; look for more from Oracle at OpenWorld, as hinted by Jean-Pierre Dijcks. McKinsey recently released a report on big data in which retail was specifically mentioned as an industry that can benefit from the new technologies. I won't rehash that report because my friend Rama already did such a good job in his posting, Impact of "Big Data" on Retail. The presentation below does a pretty good job of framing the problem, although it doesn't really get into the available technologies (e.g. Exadata, Hadoop, Cassandra, etc.) and isn't retail-specific. Determine the Right Analytic Database: A Survey of New Data Technologies. So when a retailer asks me about big data, here's what I say: big data refers to a set of technologies for processing large volumes of structured and unstructured data. Imagine collecting everything uttered by your customers on Facebook and Twitter and combining it with all the data you can find about the products you sell (e.g. reviews, images, demonstration videos), including competitive data. Assuming you could process all that data, you could then personalize offers to specific customers based on their tastes, ensure prices are competitive, and implement better local assortments. It's really not that far off.

    Read the article

  • Data and Secularism

    - by kaleidoscope
    Ever since we've been using Data, we've been religious: religious about the way we represent it, and equally religious about the way we access it. Be it plain old SQL, DAO, ADO, or ADO.NET, and I am just referring to religions in the MSFT world; a peek outside and I'd need a separate book to list out the Data faiths. Various application areas in networked computing are converging under the HTTP umbrella, with a plausible transition to purist HTTP and in turn REST, fuelled by the Web 2.0 storm. It was time the Data access faiths also gave up the religious silos wrapped around our long-worshipped data publishing and access methods. OData is the secular solution we have at hand today. It is an open protocol for sharing data. It can be exposed via REST. It is Open as in the Microsoft Open Specification Promise, which allows virtually everyone to build Data Services for any runtime. OData is one of the key standards for data publishing/subscribing on Microsoft Codename "Dallas". For us .Netters, OData data sources can be exposed and consumed via WCF Data Services, and the process is very simple, elegant and intuitive. Applications exposing OData services: SharePoint 2010, IBM WebSphere, Microsoft SQL Azure, Windows Azure Table Storage, SQL Server Reporting Services. Live OData services: Netflix, Open Science Data Initiative, Open Government Data Initiatives, the Northwind database exposed as an OData service, and many others. Some may prefer to call it commoditization of data, unification of data access strategies, or any other sweet name. I for one will stick to my secular definition. :) Technorati Tags: Sarang, OData, MOSP
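
    As a rough illustration of how lightweight OData consumption is over plain HTTP, here is a minimal Python sketch using the requests library against the public Northwind demo feed mentioned above; the service URL, the $top/$format query options, and the v2-style JSON envelope are assumptions about that demo service, not something taken from the post. The same pattern (GET an entity-set URL, ask for JSON) applies to any of the feeds listed above.

        import requests

        # Public Northwind OData demo service (assumed URL; point this at your own feed).
        BASE = "https://services.odata.org/Northwind/Northwind.svc"

        def fetch_entities(entity_set, top=5):
            """Fetch a few entities from an OData entity set as JSON."""
            resp = requests.get(
                f"{BASE}/{entity_set}",
                params={"$top": str(top), "$format": "json"},
                headers={"Accept": "application/json"},
                timeout=30,
            )
            resp.raise_for_status()
            payload = resp.json()
            # OData v2 wraps results in {"d": {"results": [...]}}; newer versions use {"value": [...]}.
            body = payload.get("d", payload)
            return body.get("results", body.get("value", body))

        if __name__ == "__main__":
            for customer in fetch_entities("Customers", top=3):
                print(customer.get("CustomerID"), "-", customer.get("CompanyName"))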

    Read the article

  • Can web apps allow fast data-typists to "type-ahead"?

    - by user61852
    In some data entry contexts, I've seen data typists type really fast, know the app they use so well, and have such a mechanical quality in their work that they can "type ahead", i.e. continue typing, tabbing, and hitting Enter faster than the display updates, so that on many occasions they are typing in the data for the next form before it draws itself. Then, when this next entry form appears, their keystrokes fill the text boxes and they continue typing, selecting, etc. In contexts like this, this speed is desirable, since these people are really productive. I think this "type ahead of time" is only possible in desktop apps, but I may be wrong. My question is whether this way of handling the keyboard buffer (which in desktop apps requires no extra programming) is achievable in web apps, or is this impossible because of the way web apps work, handle sessions, etc. (network latency and the overhead of generating new web pages)? Edit: By "type ahead" I mean "keyboard type ahead" (typing faster than the next entry form can load), not suggest-as-you-type like Google. Typeahead is a feature of computers and software (and some typewriters) that enables users to continue typing regardless of program or computer operation: the user may type at whatever speed he or she desires, and if the receiving software is busy at the time, it will handle the input later. Often this means that keystrokes entered will not be displayed on the screen immediately. This programming technique for handling user input is known as a keyboard buffer.

    Read the article

  • How to migrate user settings and data to a new machine?

    - by torbengb
    I'm new to Ubuntu and recently started using it on my PC. I'm going to replace that PC with a new machine, and I want to transfer my data and settings to it. What aspects should I consider? Obviously I want to move my data over. What things am I missing if I only copy the entire home folder? This is a home PC (not corporate), so user rights and other security issues are not a concern, except that the files should be accessible on the new machine! Please take into account that the new machine is a nettop that doesn't have an optical drive and doesn't allow me to hook the old SATA disk into it, so any data transfer must be handled via the home network (I can have both the old and the new machine turned on and connected to the home LAN), and I have a USB thumb drive with limited capacity (2 GB). This sounds like it might limit the general applicability, but it would in fact make it more general. I'll make this a wiki topic because there could be several "right" answers. Update: Or so I thought. I don't see a choice for that.

    Read the article

  • Is it conceivable to have millions of lists of data in memory in Python?

    - by Codemonkey
    Over the last 30 days I have been developing a Python application that utilizes a MySQL database of information (specifically about Norwegian addresses) to perform address validation and correction. The database contains approximately 2.1 million rows (43 columns) of data and occupies 640 MB of disk space. I'm thinking about speed optimizations, and I've got to assume that when validating 10,000+ addresses, with each validation running up to 20 queries to the database, networking is a speed bottleneck. I haven't done any measuring or timing yet, and I'm sure there are simpler ways of speed-optimizing the application at the moment, but I just want to get the experts' opinions on how realistic it is to load this amount of data into a row-of-rows structure in Python. Also, would it even be any faster? Surely MySQL is optimized for looking up records among vast amounts of data, so how much help would it even be to remove the networking step? Can you imagine any other viable methods of removing the networking step? The location of the MySQL server will vary, as the application might well be run from a laptop at home or at the office, where the server would be local.
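
    For a rough sense of what the in-memory variant could look like, here is a sketch using the mysql-connector-python driver; the table and column names are invented, and the real memory cost will be noticeably higher than the 640 MB on-disk figure because every Python object carries per-object overhead.

        import mysql.connector

        # Connection details and table/column names are placeholders.
        conn = mysql.connector.connect(
            host="localhost", user="app", password="secret", database="addresses"
        )
        cur = conn.cursor()
        cur.execute("SELECT postal_code, street_name, city FROM norwegian_addresses")

        # One list of tuples for the whole table; tuples are cheaper than dicts here.
        rows = cur.fetchall()

        # Index by postal code for O(1) lookups instead of repeated network round trips.
        by_postal_code = {}
        for postal_code, street, city in rows:
            by_postal_code.setdefault(postal_code, []).append((street, city))

        cur.close()
        conn.close()
        print(f"Loaded {len(rows)} rows, {len(by_postal_code)} distinct postal codes")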

    Read the article

  • MATLAB: What is an appropriate Data Structure for a Matrix with Random Variable Entries?

    - by user12707
    I'm working in an area that is related to simulation and trying to design a data structure that can include random variables within matrices. I am currently coding in MATLAB. To motivate this, let me say I have the following matrix: [a b; c d]. I want to find a data structure that will allow a, b, c, d to be either real numbers or random variables. As an example, let's say that a = 1, b = -1, c = 2, but let d be a normally distributed random variable with mean 20 and SD 40. The data structure that I have in mind will give no value to d. However, I also want to be able to design a function that can take in the structure, simulate a uniform(0,1), obtain a value for d using an inverse CDF, and then spit out an actual matrix. I have several ideas for doing this (all related to the MATLAB icdf function) but would like to know how more experienced programmers would do it. In this application, it's important that the structure is as "lean" as possible, since I will be working with very, very large matrices and memory will be an issue.

    Read the article

  • Migrating data from Plone to Liferay, or how could I retrieve information from Plone's Data.fs

    - by brandizzi
    Hello, all. I need to migrate data from a Plone-based portal to Liferay. Does anyone have any idea how to do it? In any case, I am trying to retrieve data from Data.fs and store it in a representation that is easier to work with, such as JSON. To do that, I need to know which objects I should get from Plone's Data.fs. I already got the Products.CMFPlone.Portal.PloneSite instance from the Data.fs, but I cannot get anything out of it. I would like to get the PloneSite instance and do something like this:

        >>> import ZODB
        >>> from ZODB import FileStorage, DB
        >>> path = r"C:\Arquivos de programas\Plone\var\filestorage\Data.fs"
        >>> storage = FileStorage.FileStorage(path)
        >>> db = DB(storage)
        >>> conn = db.open()
        >>> root = conn.root()
        >>> app = root['Application']
        >>> plone_site = app.getChildNodes()[13]  # 13 would be the index of the PloneSite object
        >>> a = plone_site.get_articles()
        >>> for article in a:
        ...     print "Title:", a.title
        ...     print "Content:", a.content
        Title: <some title>
        Content: <some content>
        Title: <some title>
        Content: <some content>

    Of course, it does not need to be so straightforward. I just want some information about the structure of PloneSite and how to recover its data. Does anyone have any idea? Thank you in advance!
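
    Continuing the ZODB session above, here is a very rough sketch of walking the site with the generic Zope container API; the site id 'Plone', the _getOb()/objectValues() calls, and the presence of Title()/portal_type on content objects are assumptions about a typical Plone 3/4-era Data.fs, and the relevant Products packages must be importable for the stored objects to unpickle.

        # Continuing from the open connection in the question (root and app already defined).
        site_id = "Plone"                    # assumed id of the PloneSite object
        plone_site = app._getOb(site_id)     # OFS ObjectManager lookup by id

        def dump(container, depth=0):
            """Recursively print ids, types and titles of contained objects."""
            for obj in container.objectValues():
                title = obj.Title() if hasattr(obj, "Title") else ""
                ptype = getattr(obj, "portal_type", "?")
                print("%s%s (%s) %s" % ("  " * depth, obj.getId(), ptype, title))
                if hasattr(obj, "objectValues"):
                    dump(obj, depth + 1)

        dump(plone_site)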

    Read the article

  • Using Mapping Models to migrate between Core Data Object Models

    - by westsider
    I have a fairly simple schema. Essentially, Run <-- Data (where a Run holds data, e.g., Temperature, sampled from some sort of sensor). Now, it seems that sensors can have more than one measurement (e.g., Temperature and Humidity), so a single Run could have multiple data samples. Hence, Run <-- Sample and Sample <-- Data. (And for simplicity I am leaving Run <-- Data in place, for now.) If I create a new mapping model, then things generally work, except that no new Samples are created and no relationships are established between Runs and Samples or between Samples and Datas. I am trying to get a mapping model to migrate my store, but even the slightest change to the generated mapping model results in Cocoa error 134110. For example, if I take the "Sample" mapping (which has no Source) and set its Source to 'Run' (so that I can set Sample's inverse relationship 'run' appropriately), then the mapping changes its name to "RunToSample". There are two relationships handled in this mapping: data and run. The data property gets set automatically to FUNCTION($manager, "destinationInstancesForEntityMappingNamed:sourceInstances:", "DataToData", $source.dataSet). Following this example, I set the run property to FUNCTION($manager, "destinationInstancesForEntityMappingNamed:sourceInstances:", "RunToRun", $source). Similarly, I set the 'sample' property mapping in RunToRun to FUNCTION($manager, "destinationInstancesForEntityMappingNamed:sourceInstances:", "RunToSample", $source) and the 'sample' property in DataToData to FUNCTION($manager, "destinationInstancesForEntityMappingNamed:sourceInstances:", "RunToSample", $source.run). So, what, I wonder, is going wrong? I have tried various permutations, such as leaving the 'inverse' relationships unspecified, but I continue to get the same error (134110) regardless. I imagine that this is a lot easier than it seems and that I am missing some fundamental but minor piece. I have also tried subclassing NSEntityMigrationPolicy and overriding -createDestinationInstancesForSourceInstance:, but these efforts have met with much the same results. Thanks in advance for any pointers or (relevant :-) advice.

    Read the article

  • Static Data Structures on Embedded Devices (Android in particular)

    - by Mark
    I've started working on some Android applications and have a question regarding how people normally deal with situations where you have a static data set and an application that needs that data in memory as one of the standard Java collections or as an array. In my current specific case I have a spreadsheet with some pre-calculated data. It consists of ~100 rows and 3 columns: one column is a string, one column is a float, one column is an integer. I need access to this data as an array in Java. It seems like I could: 1) Encode it in XML - this would be CPU-intensive to decode, in my experience. 2) Build it into an SQLite database - seems like a lot of overhead for static data I only need array-style access to in RAM. 3) Build it into a binary blob and read it in (never done this in Java, I miss void *). 4) Build a Python script to take the CSV version of my data and spit out a Java function that adds the values to my desired structure as hard-coded values (see the sketch below). 5) Store a string array via Android's resource mechanism and compute the other two columns on application load. In my case the computation would require a lot of calls to Math.log, Math.pow and Math.floor, which I'd rather not do, for load-time and battery-usage reasons. I mostly work on low-power embedded applications in C, and as such #4 is what I'm used to doing in these situations. It just seems like it should be far easier to gain access to static data structures in Java/Android. Perhaps I'm just being too battery-usage conscious, and in my single case I imagine the answer is that it doesn't matter much, but if every application took that stance it could begin to matter. What approaches do people usually take in this situation? Anything I missed?
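
    Option 4 is the one that lends itself to a concrete sketch: a small, hypothetical Python script that reads the CSV once at build time and emits a Java class with three parallel static arrays, so nothing is parsed or computed at app startup. The file, class, and column names are invented for illustration.

        import csv

        # Hypothetical input: rows of "label,factor,count" (string, float, int).
        with open("table.csv", newline="") as f:
            rows = list(csv.reader(f))

        labels  = ", ".join('"%s"' % r[0] for r in rows)
        factors = ", ".join("%sf" % float(r[1]) for r in rows)
        counts  = ", ".join("%d" % int(r[2]) for r in rows)

        java = """// Generated file - do not edit by hand.
        public final class StaticTable {
            public static final String[] LABELS  = { %s };
            public static final float[]  FACTORS = { %s };
            public static final int[]    COUNTS  = { %s };
            private StaticTable() {}
        }
        """ % (labels, factors, counts)

        with open("StaticTable.java", "w") as out:
            out.write(java)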

    Read the article

  • Efficient data importing?

    - by Kevin
    We work with a lot of real estate, and while rearchitecting how the data is imported, I came across an interesting issue. Firstly, the way our system works (loosely speaking) is that we run a ColdFusion process once a day that retrieves data provided by an IDX vendor via FTP. They push the data to us; whatever they send us is what we get. Over the years, this has proven to be rather unstable. I am rearchitecting it with PHP on the RETS standard, which uses SOAP methods of retrieving data, and this has already proven to be much better than what we had. When it comes to updating existing data, my initial thought was to query only for data that was updated. There is a 'Modified' field that tells you when a listing was last updated, and the code I have will grab any listing updated within the last 6 hours (giving myself a window in case something goes wrong). However, I see a lot of real estate developers suggest creating 'batch' processes that constantly run through all listings regardless of updated status. Is this the better way to do it? Or am I fine with just grabbing the data I know I need? It doesn't make a lot of sense to me to do more processing than necessary. Thoughts?
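
    As a minimal sketch of the incremental approach described above, the filter below keeps only listings whose Modified stamp falls inside a six-hour window; the field name and timestamp format are assumptions about the feed, and the usual argument for an occasional full batch pass is that it catches deletions and any updates where the vendor failed to bump Modified.

        from datetime import datetime, timedelta

        def modified_cutoff(hours_back=6):
            """Timestamp used to filter the incremental pull."""
            return datetime.utcnow() - timedelta(hours=hours_back)

        def incremental_listings(all_listings, cutoff):
            """Keep only listings whose Modified field is newer than the cutoff."""
            fmt = "%Y-%m-%dT%H:%M:%S"  # assumed timestamp format from the feed
            return [l for l in all_listings
                    if datetime.strptime(l["Modified"], fmt) > cutoff]

        # Example: two fake listings, one stale and one fresh.
        sample = [
            {"ListingID": "A1", "Modified": "2012-01-01T00:00:00"},
            {"ListingID": "B2", "Modified": datetime.utcnow().strftime("%Y-%m-%dT%H:%M:%S")},
        ]
        print([l["ListingID"] for l in incremental_listings(sample, modified_cutoff())])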

    Read the article

  • source of historical stock data

    - by rmeador
    I'm trying to make a stock market simulator (perhaps eventually growing into a predictive AI), but I'm having trouble finding data to use. I'm looking for a (hopefully free) source of historical stock market data. Ideally, it would be a very fine-grained (second or minute interval) data set with the price and volume of every symbol on NASDAQ and NYSE (and perhaps others if I get adventurous). Does anyone know of a source for such info? I found this question, which indicates Yahoo offers historical data in CSV format, but from a cursory examination of the linked site I've been unable to find out how to get it. I also don't like the idea of downloading the data piecemeal in CSV files... I imagine Yahoo would get upset and shut me off after the first few thousand requests. I also discovered another question that made me think I'd hit the jackpot, but unfortunately that OpenTick site seems to have closed its doors... too bad, since I think they were exactly what I wanted. I'd also be able to use data that's just the open/close price and volume of every symbol every day, but I'd prefer all the data if I can get it. Any other suggestions?
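
    For whatever daily-granularity source does turn up, a common CSV layout (Yahoo's download used a similar one at the time) is one Date,Open,High,Low,Close,Volume row per trading day per symbol; here is a sketch of loading such a file, with the filename and exact column headers as assumptions.

        import csv
        from collections import namedtuple

        Bar = namedtuple("Bar", "date open high low close volume")

        def load_daily_bars(path):
            """Parse a Date,Open,High,Low,Close,Volume CSV into Bar records."""
            bars = []
            with open(path, newline="") as f:
                for row in csv.DictReader(f):
                    bars.append(Bar(
                        date=row["Date"],
                        open=float(row["Open"]),
                        high=float(row["High"]),
                        low=float(row["Low"]),
                        close=float(row["Close"]),
                        volume=int(row["Volume"]),
                    ))
            return bars

        # bars = load_daily_bars("AAPL.csv")   # hypothetical file downloaded beforehand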

    Read the article

  • Filtering data in a GridView using ASP.NET - logical error

    - by RajuBabli Abbasi
    I want to filter the data in a GridView in ASP.NET, but the data is not filtering; I have some logical error. Please help me with this case, I will be very thankful to those who help. Please consider my code and reply with code if you can. My .aspx.cs file is:

        protected void Page_Load(object sender, EventArgs e)
        {
            if (!IsPostBack)
            {
                DisplayStudentInformation();
            }
        }

        private void DisplayStudentInformation()
        {
            string filter = "%" + filterTextBox.Text + "%";
            if (filter == String.Empty)
                filter = "%";
            try
            {
                using (SqlDataReader reader = DAC.GetCompanyInformation(filter))
                {
                    //reader.Read();
                    StudentGridView.DataSource = reader;
                    StudentGridView.DataBind();
                }
            }
            catch (SqlException ex)
            {
                StatusLabel.Text = ex.Message;
            }
        }

    My .aspx file is:

        <asp:Table ID="Tabel" runat="server">
            <asp:TableRow>
                <asp:TableCell>
                    <asp:Label ID="filterLabel" runat="server" Text="Company Name Filter:" AssociatedControlID="filterTextBox" />
                </asp:TableCell>
                <asp:TableCell>
                    <asp:TextBox ID="filterTextBox" runat="server" MaxLength="50" />
                </asp:TableCell>
                <asp:TableCell>
                    <asp:Button ID="refreshButton" runat="server" Text="Filter" CausesValidation="false" />
                </asp:TableCell>
            </asp:TableRow>
        </asp:Table>

    My DAC file is:

        public static SqlDataReader GetCompanyInformation(string filter)
        {
            SqlDataReader reader;
            string sql = "SELECT * FROM Student WHERE LastName LIKE @prmLastName ";
            using (SqlCommand command = new SqlCommand(sql, ConnectionManager.GetConnection()))
            {
                // In ExecuteReader we pass CommandBehavior.SingleResult because we need a single result,
                // and CommandBehavior.CloseConnection so the connection is closed once the data is retrieved.
                // command.Parameters.Add("@prmLastName", SqlDbType.VarChar, 25).Value = filter;
                command.Parameters.AddWithValue("@prmLastName", filter);
                reader = command.ExecuteReader(CommandBehavior.SingleResult | CommandBehavior.CloseConnection);
            }
            return reader;
        }

    Note: when I don't use the if (!IsPostBack) condition and simply call DisplayStudentInformation() in Page_Load, the data can be filtered, but with the if (!IsPostBack) condition, which is also important for updating the data and for other purposes, the data cannot be filtered. The aim is to filter the data in the GridView while keeping the if (!IsPostBack) condition, i.e. without removing the IsPostBack check. I have asked others about this question but nobody has solved it, so please help me; I will be very thankful to you all.

    Read the article

  • Architecture for data layer that uses both localStorage and a REST remote server

    - by Zack
    Does anybody have ideas or references on how to implement a data persistence layer that uses both localStorage and a REST remote storage? The data of a certain client is stored with localStorage (using an ember-data IndexedDB adapter). The locally stored data is synced with the remote server (using the ember-data RESTAdapter). The server gathers all data from clients. Using mathematical set notation: Server = Client1 ∪ Client2 ∪ ... ∪ ClientN, where, in general, a record may not be unique to a certain client. Here are some scenarios: A client creates a record. The id of the record cannot be set on the client, since it may conflict with a record stored on the server; therefore a newly created record needs to be committed to the server, receive its id, and then be created in localStorage. A record is updated on the server, and as a consequence the data in localStorage and on the server go out of sync. Only the server knows that, so the architecture needs to implement some form of push (?). Would you use 2 stores (one for localStorage, one for REST) and sync between them, or use a hybrid IndexedDB/REST adapter and write the sync code within the adapter? Can you see any way to avoid implementing push (WebSockets, ...)?

    Read the article

  • $.ajax not loading data from the server every time

    - by Ted
    I have written a simple jQuery.ajax function which loads a user control from the server on the click of a button. The first time I click the button, it goes to the server and gets me the user control, but each subsequent click of the same button does not go to the server to fetch the user control. Since my user control fetches data from the db, I need to reload the user control every time I hit the button. However, if I somehow get my user control to unload from the page and re-click the button, it goes to the server and fetches the user control. Here's the code:

        $("#btnLoad").click(function() {
            if ($(this).attr("value") == "Load Control") {
                $.ajax({
                    url: "AJAXHandler.ashx",
                    data: { "lt": "loadcontrol" },
                    dataType: "html",
                    success: function(data) {
                        content.html(data);
                    }
                });
                $(this).attr("value", "Unload Control");
            } else {
                $.ajax({
                    url: "AJAXHandler.ashx",
                    data: { "lt": "unloadcontrol" },
                    dataType: "html",
                    success: function(data) {
                        content.html(data);
                    }
                });
                $(this).attr("value", "Load Control");
            }
        });

    Please let me know if there is any other way I can get my user control loaded from the server every time I click the button.

    Read the article

  • Best practices for encrypting continuous/small UDP data

    - by temp
    Hello everyone, I have an application where I have to send several small pieces of data per second through the network using UDP. The application needs to send the data in real time (no waiting). I want to encrypt this data and ensure that what I am doing is as secure as possible. Since I am using UDP, there is no way to use SSL/TLS, so I have to encrypt each packet on its own, since the protocol is connectionless/unreliable/unregulated. Right now, I am using a 128-bit key derived from a passphrase from the user, and AES in CBC mode (PBE using AES-CBC). I decided to use a random salt with the passphrase to derive the 128-bit key (to prevent a dictionary attack on the passphrase), and of course to use IVs (to prevent statistical analysis of packets). However, I am concerned about a few things: Each packet contains a small amount of data (like a couple of integer values per packet), which will make the encrypted packets vulnerable to known-plaintext attacks (which will make it easier to crack the key). Also, since the encryption key is derived from a passphrase, the key space is much smaller (I know the salt will help, but I have to send the salt through the network once and anyone can get it). Given these two things, anyone can sniff and store the sent data and try to crack the key. Although this process might take some time, once the key is cracked all the stored data will be decrypted, which will be a real problem for my application. So my question is: what are the best practices for sending/encrypting continuous small data using a connectionless protocol (UDP)? Is my way the best way to do it? ...flawed? ...overkill? Please note that I am not asking for a 100% secure solution, as there is no such thing. Cheers
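
    Here is a minimal sketch of the per-packet scheme described above (PBKDF2-derived 128-bit key, AES-CBC, random IV prepended to each datagram), written with the Python cryptography package purely to make the construction concrete; it inherits the known-plaintext and passphrase-strength concerns raised in the question and adds no integrity protection, so a per-packet MAC or an AEAD mode would still be needed to detect tampering.

        import os
        from cryptography.hazmat.primitives import hashes, padding
        from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
        from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

        def derive_key(passphrase: bytes, salt: bytes) -> bytes:
            """128-bit key from a passphrase via PBKDF2-HMAC-SHA256."""
            kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=16,
                             salt=salt, iterations=200_000)
            return kdf.derive(passphrase)

        def encrypt_packet(key: bytes, payload: bytes) -> bytes:
            """Encrypt one datagram: random IV + AES-CBC(PKCS7-padded payload)."""
            iv = os.urandom(16)
            padder = padding.PKCS7(128).padder()
            padded = padder.update(payload) + padder.finalize()
            enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
            return iv + enc.update(padded) + enc.finalize()

        def decrypt_packet(key: bytes, packet: bytes) -> bytes:
            iv, body = packet[:16], packet[16:]
            dec = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
            padded = dec.update(body) + dec.finalize()
            unpadder = padding.PKCS7(128).unpadder()
            return unpadder.update(padded) + unpadder.finalize()

        salt = os.urandom(16)                           # sent once alongside the stream
        key = derive_key(b"correct horse battery", salt)
        pkt = encrypt_packet(key, b"\x00\x01\x00\x2a")  # a couple of small integers
        assert decrypt_packet(key, pkt) == b"\x00\x01\x00\x2a"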

    Read the article

  • Export large amount of data from Oracle 10G to SQL Server 2005

    - by uniball
    Dear all, I need to export 100 million data rows (average row length ~100 bytes) from an Oracle 10g database table into SQL Server 2005 (over a WAN/VLAN with 6 Mbit/s capacity) on a regular basis. So far, these are the options I have tried, with a quick summary. Has anyone tried this before? Are there other, better options? Which option would be the best in terms of performance and reliability? The time taken has been estimated by running tests on smaller amounts of data and extrapolating. 1) Using the data import wizard on the SQL Server, or SSIS packages, to import the data; it would take around 150 hours to complete the task. 2) Using an Oracle batch job to spool the data into a comma-delimited flat file, then using an SSIS package to FTP this file to the SQL Server and load directly from the flat file; the issue here is the size of the flat file, which is expected to run into gigabytes. 3) Although this option is drastically different, I am even considering using a linked server to query the Oracle data directly at run time to avoid bringing the data in; performance is a big problem, and I have limited control over the Oracle database in terms of creating table indexes. Regards, Uniball
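
    For the flat-file route, the spool step can also be done outside the database; below is a rough sketch using the cx_Oracle driver to stream rows to CSV in large fetch batches. The connection string, table name, and batch size are placeholders, and at ~100 bytes per row the file will land in the 10 GB range, so compressing it before the FTP hop is usually worth the extra step.

        import csv
        import cx_Oracle

        # Placeholder connection string and table name.
        conn = cx_Oracle.connect("app_user/secret@dbhost:1521/ORCL")
        cur = conn.cursor()
        cur.arraysize = 10_000            # fetch in large batches to cut round trips

        cur.execute("SELECT * FROM big_source_table")
        with open("export.csv", "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(col[0] for col in cur.description)   # header row
            while True:
                rows = cur.fetchmany()
                if not rows:
                    break
                writer.writerows(rows)

        cur.close()
        conn.close()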

    Read the article
