Search Results

Search found 62870 results on 2515 pages for 'usage data'.

Page 37/2515 | < Previous Page | 33 34 35 36 37 38 39 40 41 42 43 44  | Next Page >

  • Where's my memory going?

    - by Stu2000
    My machine keeps 'freezing' before eventually logging out with all the programs exiting. This is rather annoying, and I think it's because I keep running out of memory. I am not running any custom software, just NetBeans, Chrome, etc. (stuff I usually run on other Ubuntu computers without issue). For some reason my memory usage is through the roof, as seen here, but I can't quite figure out why. Here is a screenshot which may be useful, with htop and GNOME System Monitor open as user and as root. I notice that my console-kit-daemon is taking up about a gig of 'virtual memory'. Is that normal? Any tips/advice will be helpful. In the meantime I have ordered 2 x 4 GB RAM sticks to try and just throw hardware at the issue.
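    One way to double-check where the memory is actually going, independent of the htop screenshot, is to total resident memory per process name. The following is just a small diagnostic sketch using the third-party psutil package (not something from the question itself):

        import psutil
        from collections import defaultdict

        totals = defaultdict(int)
        for proc in psutil.process_iter(["name", "memory_info"]):
            mem = proc.info["memory_info"]
            if mem is not None:
                totals[proc.info["name"] or "?"] += mem.rss  # resident set size in bytes

        # Print the ten biggest consumers of resident memory, in MiB.
        for name, rss in sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:10]:
            print("%-25s %8.1f MiB" % (name, rss / (1024.0 * 1024.0)))

    Note that summing RSS double-counts memory shared between processes, so treat the numbers as a rough ranking rather than an exact accounting.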

    Read the article

  • Make Your Own Windows 8 Start Button with Zero Memory Usage

    - by The Geek
    After using Windows 8 for a while, I've come to the conclusion that removing the Start button from the taskbar was a huge mistake. Here's how to make your own "Start" button that brings up the Metro Start screen but doesn't waste any memory at all. What we'll be doing is pretty simple: create a script that simulates pressing the Windows key, turn it into an executable, assign an icon, and pin it to the taskbar so that it sorta looks like the Start button and works the same way. Since nothing is left running, no RAM is wasted.
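    The article's actual script is not reproduced in this excerpt; as a rough sketch of the same idea (simulating a Windows key press so the Start screen opens), here is what it might look like in Python using only ctypes on Windows:

        import ctypes  # Windows-only: calls the Win32 user32 API directly

        VK_LWIN = 0x5B           # virtual-key code for the left Windows key
        KEYEVENTF_KEYUP = 0x0002

        user32 = ctypes.windll.user32
        user32.keybd_event(VK_LWIN, 0, 0, 0)                # Windows key down
        user32.keybd_event(VK_LWIN, 0, KEYEVENTF_KEYUP, 0)  # Windows key up

    Packaged as an executable (for example with a py2exe-style tool) and pinned to the taskbar, clicking it toggles the Start screen without leaving anything resident in memory.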

    Read the article

  • Where to find PHP version usage stats?

    - by Darren Cook
    My original question was: what percentage of sites are using PHP 5.4.x? (As it has some very interesting new features.) With secondary questions like how many of the cheap web hosting places have upgraded, which versions of the Linux distros include it, etc. But I'm coming up blank. http://php.net/usage.php stops at July 2007, and the nexen.net website seems to have stopped in 2008. At SecuritySpace they only list the web servers, not PHP versions. The TIOBE link isn't what I'm after (it doesn't -- and couldn't -- break down by version number). I thought php.net might show download numbers, but I cannot see them anywhere. I kind of answered the distro question, but it requires a lot of clicking around at distrowatch.com. E.g. I see here that Ubuntu offers PHP 5.4.6 in the latest snapshot, but the latest release (Ubuntu 12.04) has 5.3.10.

    Read the article

  • Is it conceivable to have millions of lists of data in memory in Python?

    - by Codemonkey
    I have over the last 30 days been developing a Python application that utilizes a MySQL database of information (specifically about Norwegian addresses) to perform address validation and correction. The database contains approximately 2.1 million rows (43 columns) of data and occupies 640MB of disk space. I'm thinking about speed optimizations, and I've got to assume that when validating 10,000+ addresses, each validation running up to 20 queries to the database, networking is a speed bottleneck. I haven't done any measuring or timing yet, and I'm sure there are simpler ways of speed optimizing the application at the moment, but I just want to get the experts' opinions on how realistic it is to load this amount of data into a row-of-rows structure in Python. Also, would it even be any faster? Surely MySQL is optimized for looking up records among vast amounts of data, so how much help would it even be to remove the networking step? Can you imagine any other viable methods of removing the networking step? The location of the MySQL server will vary, as the application might well be run from a laptop at home or at the office, where the server would be local.
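    As a rough sketch of the kind of in-memory structure being weighed here, the rows can be pulled once and indexed in a plain dict so each validation becomes a lookup instead of a network round trip. The table and column names below (address, street, number, postcode) are made up for illustration, not the real schema:

        import mysql.connector  # assumes the MySQL Connector/Python driver

        conn = mysql.connector.connect(host="localhost", user="app",
                                       password="secret", database="addresses")
        cur = conn.cursor()
        cur.execute("SELECT postcode, street, number FROM address")  # hypothetical columns

        index = {}
        for postcode, street, number in cur:
            # Group house numbers under (postcode, street) for O(1) lookups.
            index.setdefault((postcode, street), set()).add(number)

        cur.close()
        conn.close()

        def street_exists(postcode, street):
            return (postcode, street) in index

    Whether 2.1 million rows fit comfortably depends on how the columns are stored; Python objects carry noticeable per-item overhead, so the in-memory footprint will be larger than the 640 MB on disk.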

    Read the article

  • Ubuntu One Preferences does not show usage, name, e-mail, or current plan

    - by Jim
    Ubuntu One is correctly synchronizing selected files between two computers running Ubuntu 10.10. When I open Ubuntu One Preferences, on the Account tab, one computer does not display the usage, name, e-mail or current plan. On the other computer all information is shown correctly. On the Devices tab the two computers are not shown; they do show correctly on the other computer. Any ideas on how to fix this problem? I have reinstalled Ubuntu One per this link: https://wiki.ubuntu.com/UbuntuOne/FAQ/HowDoICompletelyRemoveAndReinstallUbuntuOne

    Read the article

  • Meaning of the free space indication in Deluge

    - by Tjae Beamon
    Recently I installed Ubuntu 12.04 using Wubi alongside my current Windows Vista. I have already installed all 265 updates from the Ubuntu Software Center and downloaded Deluge from there. My hard drive is 80 GB according to the Disk Usage Analyzer, which also says 31.2 GB used and 47.8 GB free. The confusion comes when I run Deluge: at the bottom it says 2.0 GB free space. Is that 2.0 GB just a limit set by the torrent client that can be changed, or am I limited to just that 2.0 GB?

    Read the article

  • The Kinect for Windows SDK beta is available for free for non-commercial use

    The Kinect for Windows SDK beta is available for free for non-commercial use. Update of 17/06/11, by Hinault Romaric. As Microsoft announced at the MiX 11 conference in Las Vegas in April (see above), the Kinect for Windows SDK is available today as a beta. This SDK will let developers create PC applications that exploit its motion sensor, port games originally designed for the Xbox 360 to the PC, or apply the technology to other uses. For Microsoft, Kinect is indeed "more than just a platform for games and the ...

    Read the article

  • AIDE on a Low-Memory System

    - by Jason Mock
    I have a Linux server running on a Linode.com VPS, where I'm trying to use aide to detect any issues. However, the nightly aide run uses up all of my available memory and swap (512MB RAM / 384MB swap). I've tried adding a script to /etc/cron.daily that would stop/start services that use a lot of memory (apache2, mysql) during the aide run. Unfortunately, it seems like aide continued to use every available byte (including the space freed up from apache2 and mysql). Here's a graph from Munin showing what happens when aide runs (note the spike of memory usage, well into swap). Any suggestions on tuning aide to not use so much memory, or is there an alternative to aide that doesn't behave this way?

    Read the article

  • GNOME Shell with very high CPU usage

    - by 501 - not implemented
    I'm running Ubuntu GNOME 13.10 on my Dell Latitude E6510 with an i5 M560. The i5 comes with an embedded Intel HD 3400 Graphics. The average CPU usage of gnome-shell is around 160%, which is too high, I think. Is there a problem with a driver? If I run the command glxinfo | grep OpenGL it returns:
    OpenGL vendor string: VMware, Inc.
    OpenGL renderer string: Gallium 0.4 on llvmpipe (LLVM 3.3, 128 bits)
    OpenGL version string: 2.1 Mesa 9.2.1
    OpenGL shading language version string: 1.30
    OpenGL extensions:
    Greetings

    Read the article

  • What data-structure/algorithm will allow me to send a list of key/value dictionaries using the least amount of bits?

    - by user12365
    I have server objects that have corresponding client objects. The data to be kept in sync is inside the server object's key/value dictionary. To keep the client objects in sync with the server objects, I want the server to send the key/value dictionary every frame for each object. What data structure/algorithm will allow me to send a list of key/value dictionaries using the least amount of bits? Bonus constraint 1: for each type of object, the values of some keys change more often than others. Bonus constraint 2: memory usage on the server side is relatively expensive.
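    One common way to exploit bonus constraint 1 is delta encoding: send the full dictionary once, then each frame send only the keys whose values changed since the last state the client acknowledged. A minimal sketch (the key names and the JSON+zlib wire format are illustrative assumptions; a fixed binary layout via struct would be smaller still for integer-heavy data):

        import json
        import zlib

        def make_delta(previous, current):
            # Only the key/value pairs that differ from the last known state.
            return {k: v for k, v in current.items() if previous.get(k) != v}

        def encode(delta):
            # Compact JSON, then compress; the receiver applies the delta to its copy.
            return zlib.compress(json.dumps(delta, separators=(",", ":")).encode())

        prev = {"x": 10, "y": 4, "hp": 100}
        curr = {"x": 10, "y": 4, "hp": 97}
        payload = encode(make_delta(prev, curr))   # only 'hp' travels this frame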

    Read the article

  • Vim and emacs usage/use case/user statistics

    - by G. Kayaalp
    I wonder if there are statistical documents/research on the use of the two major text editors, in which the amount of usage is compared to the use case, be it programming language, industry, user age, OS and/or many other things I can't think of right now. I don't need this information for an assignment/homework or anything; I'm just curious about it. I've been searching for this for some time, not very intensively, and the only thing I have found was this: Emacs user base size. Lastly, I want to note that I'm not looking for estimations. I'm not asking if one editor is better than the other, nor am I expecting help choosing between them. I'm not asking for opinions.

    Read the article

  • DNSCrypt-Proxy specific usage

    - by trekkiejonny
    I have some specific usage questions about DNSCrypt-Proxy. I followed a guide and ended up with the command dnscrypt-proxy --daemonize --user=dnscrypt. Without any further switches, does it default to using OpenDNS's resolvers? I want to use specific servers, and I found a reference to pointing it to the full path of a CSV file. What is the CSV format for use with DNSCrypt? Would it be "address:port,public key"? Does it go through the addresses in order; for example, if the first resolver doesn't connect, does it move on to the second line of the CSV, then the third, etc.? Lastly, would this question be more appropriate in another Stack Exchange section?

    Read the article

  • Tweaking Hudson memory usage

    - by rovarghe
    Hudson 3.1 has some performance optimizations that greatly reduce its memory footprint. Prior to this, Hudson used to always hold the entire data model (all jobs and all builds) in memory, which affected scalability. Some installations configured heap sizes in excess of 1GB to counteract this. Hudson 3.1.x maintains an MRU cache and only loads jobs and builds as they are required. Because of the inability to change existing APIs and be backward compatible with plugins, there were limits to how far we could go with this approach. Memory optimizations almost always come with a related cost, in this case additional I/O that has to be performed to load data on request. On a small site that has frequent traffic, this is usually not noticeable since the MRU cache will usually hold on to all the data. A large site with infrequent traffic might experience some delays when the first request hits the server after a long gap. If you have a large heap and are able to allocate more memory, the cache settings can be adjusted to take advantage of this and even go back to pre-3.1 behavior. All the cache settings can be passed as options to the JVM container (Tomcat or the default Jetty container) using the -D option. There are two caches, independent of each other: one for jobs and the other for builds.
    For the jobs cache:
    hudson.jobs.cache.evict_in_seconds (default=60): Seconds from last access (which could be because of a servlet request or a background cron thread) after which a job should be purged from the cache. Set this to 0 to never purge based on time.
    hudson.jobs.cache.initial_capacity (default=1024): Initial number of jobs the cache can accommodate. Setting this to the number of jobs you typically display on your Hudson landing page or home page will speed up consecutive access to that page. If the default is too large you may consider downsizing and using that memory for the builds cache instead.
    hudson.jobs.cache.max_entries (default=1024): Maximum number of jobs in the cache. The default is large enough for most installations, but if you see I/O activity whenever you access the Hudson home page you might consider increasing this; first verify whether the I/O is caused by frequent eviction (see above) rather than by the cache not being large enough.
    For the builds cache:
    The builds cache is used to store Build objects as they are read from storage. Typically this happens when a user drills down into the details of a particular job from the Hudson home page. The cache is shared among builds for different jobs, since in most installations all jobs are not accessed with the same frequency, so a per-job builds cache would be a waste of memory.
    hudson.job.builds.cache.evict_in_seconds (default=60): Same as the equivalent jobs cache setting, applied to builds.
    hudson.job.builds.cache.initial_capacity (default=512): Same as the equivalent jobs cache setting. Note the smaller initial size. If your site stores a large number of builds and has frequent access to more builds, you might consider bumping this up.
    hudson.job.builds.cache.max_entries (default=10240): The default maximum is large enough for most installations. The builds cache holds bigger objects, so be careful about increasing the upper limit on this. See the section on monitoring below.
    Sample usage:
    java -jar hudson-war-3.1.2-SNAPSHOT.war -Dhudson.jobs.cache.evict_in_seconds=300 \
        -Dhudson.job.builds.cache.evict_in_seconds=300
    Monitoring cache usage: the 'jmap' tool that comes with the JDK can be used to monitor cache performance in an indirect way by looking at the number of Job and Build objects in each cache. Find the PID of the Hudson instance and run:
    $ jmap -histo:live <pid> | grep 'hudson.model.*Lazy.*Key$'
    Here's a sample output:
    num     #instances   #bytes  class name
    523:            28      896  hudson.model.RunMap$LazyRunValue$Key
    1200:            3       96  hudson.model.LazyTopLevelItem$Key
    These are the keys to the jobs (LazyTopLevelItem$Key) and builds (RunMap$LazyRunValue$Key) in the caches, so counting the number of keys is a good indicator of the number of items in each cache at any given moment. The sizes in bytes can be ignored; they are just the sizes of the keys, not the actual sizes of the objects they hold. Those sizes can only be obtained with a profiler. From the output above we can conclude that there are 3 jobs and 28 builds in memory. The 28 builds could all be from 1 job or spread across all 3 jobs. Over time on an idle system, these should get evicted and the caches should be empty. In practice, because of background cron threads and triggers, jobs rarely fall to zero. Access of a job or a build by a cron thread resets the eviction timer.

    Read the article

  • MATLAB: What is an appropriate Data Structure for a Matrix with Random Variable Entries?

    - by user12707
    I'm working in an area related to simulation and am trying to design a data structure that can include random variables within matrices. I am currently coding in MATLAB. To motivate this, let me say I have the following matrix: [a b; c d]. I want to find a data structure that allows a, b, c, d to be either real numbers or random variables. As an example, let's say that a = 1, b = -1, c = 2, but let d be a normally distributed random variable with mean 20 and SD 40. The data structure that I have in mind would hold no numeric value for d. However, I also want to be able to design a function that can take in the structure, simulate a uniform(0,1), obtain a value for d using an inverse CDF, and then spit out an actual matrix. I have several ideas for doing this (all related to the MATLAB icdf function) but would like to know how more experienced programmers would do it. In this application, it's important that the structure is as "lean" as possible, since I will be working with very, very large matrices and memory will be an issue.
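    As a language-agnostic sketch of the idea (written in Python here purely for illustration; the ("normal", mean, sd) marker is an assumption, not a MATLAB construct), entries can be stored either as plain numbers or as small distribution records, and a realization function draws uniform(0,1) values and applies the inverse CDF:

        import random
        from statistics import NormalDist

        # a = 1, b = -1, c = 2 are plain numbers; d is a normal RV with mean 20, SD 40.
        matrix = [[1.0, -1.0],
                  [2.0, ("normal", 20.0, 40.0)]]

        def realize(m):
            out = []
            for row in m:
                new_row = []
                for entry in row:
                    if isinstance(entry, tuple) and entry[0] == "normal":
                        _, mean, sd = entry
                        u = random.random()                              # simulate uniform(0,1)
                        new_row.append(NormalDist(mean, sd).inv_cdf(u))  # inverse CDF
                    else:
                        new_row.append(entry)
                out.append(new_row)
            return out

        print(realize(matrix))

    In MATLAB the analogous "lean" choice is often a plain numeric matrix plus a separate small list of (row, column, distribution parameters) entries, so the common all-numeric case pays no per-element overhead.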

    Read the article

  • Migrating data from Plone to Liferay, or how could I retrieve information from Plone's Data.fs

    - by brandizzi
    Hello, all. I need to migrate data from a Plone-based portal to Liferay. Does anyone have any ideas on how to do it? In any case, I am trying to retrieve the data from Data.fs and store it in a representation that is easier to work with, such as JSON. To do that, I need to know which objects I should get from Plone's Data.fs. I have already got the Products.CMFPlone.Portal.PloneSite instance from the Data.fs, but I cannot get anything out of it. I would like to get the PloneSite instance and do something like this:

        >>> import ZODB
        >>> from ZODB import FileStorage, DB
        >>> path = r"C:\Arquivos de programas\Plone\var\filestorage\Data.fs"
        >>> storage = FileStorage.FileStorage(path)
        >>> db = DB(storage)
        >>> conn = db.open()
        >>> root = conn.root()
        >>> app = root['Application']
        >>> plone_site = app.getChildNodes()[13]  # 13 would be the index of the PloneSite object
        >>> a = plone_site.get_articles()
        >>> for article in a:
        ...     print "Title:", article.title
        ...     print "Content:", article.content
        Title: <some title>
        Content: <some content>
        Title: <some title>
        Content: <some content>

    Of course, it does not need to be so straightforward. I just want some information about the structure of PloneSite and how to recover its data. Does anyone have any ideas? Thank you in advance!
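    To get a first look at what the PloneSite object contains, one option is to walk the Zope object tree using the standard ObjectManager API; objectItems() is available on most containers stored in Data.fs. A rough sketch, continuing from the plone_site variable obtained above (Python 2, as in the session above):

        def walk(obj, depth=0, max_depth=3):
            # Print the id and meta_type of each child, a few levels deep.
            for child_id, child in obj.objectItems():
                print("  " * depth + "%s (%s)" % (child_id, getattr(child, "meta_type", "?")))
                if depth < max_depth and hasattr(child, "objectItems"):
                    walk(child, depth + 1, max_depth)

        walk(plone_site)

    From there, the actual content objects (and their title/text attributes) can be pulled out and dumped to JSON.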

    Read the article

  • Using Mapping Models to migrate between Core Data Object Models

    - by westsider
    I have a fairly simple schema. Essentially, Run <-- Data (where a Run holds data, e.g., Temperature, sampled from some sort of sensor). Now, it seems that sensors can have more than one measurement (e.g., Temperature and Humidity), so a single Run could have multiple data samples. Hence, Run <-- Sample and Sample <-- Data. (And for simplicity I am leaving Run <-- Data in place, for now.) If I create a new mapping model, then things generally work, except that no new Samples are created and no relationships are established between Runs and Samples nor between Samples and Datas. I am trying to get the mapping model to migrate my model, but even the slightest change to the generated mapping model results in Cocoa error 134110. For example, if I take the "Sample" mapping (which has no Source) and set its Source to 'Run' (so that I can set Sample's inverse relationship 'run' appropriately), then the mapping changes its name to "RunToSample". There are two relationships handled in this mapping: data and run. The data property gets set automatically to
    FUNCTION($manager, "destinationInstancesForEntityMappingNamed:sourceInstances:", "DataToData", $source.dataSet)
    Following this example, I set the run property to
    FUNCTION($manager, "destinationInstancesForEntityMappingNamed:sourceInstances:", "RunToRun", $source)
    Similarly, I set the 'sample' property mapping in RunToRun to
    FUNCTION($manager, "destinationInstancesForEntityMappingNamed:sourceInstances:", "RunToSample", $source)
    and the 'sample' property in DataToData to
    FUNCTION($manager, "destinationInstancesForEntityMappingNamed:sourceInstances:", "RunToSample", $source.run)
    So what, I wonder, is going wrong? I have tried various permutations, such as leaving the 'inverse' relationships unspecified, but I continue to get the same error (134110) regardless. I imagine that this is a lot easier than it seems and that I am missing some fundamental but minor piece. I have also tried subclassing NSEntityMigrationPolicy and overriding -createDestinationInstancesForSourceInstance: but these efforts have met with much the same results. Thanks in advance for any pointers or (relevant :-) advice.

    Read the article

  • Static Data Structures on Embedded Devices (Android in particular)

    - by Mark
    I've started working on some Android applications and have a question regarding how people normally deal with situations where you have a static data set and an application where that data is needed in memory as one of the standard Java collections or as an array. In my current specific case I have a spreadsheet with some pre-calculated data. It consists of ~100 rows and 3 columns: one column is a string, one is a float, and one is an integer. I need access to this data as an array in Java. It seems like I could:
    1) Encode it in XML - this would be CPU-intensive to decode, in my experience.
    2) Build it into an SQLite database - seems like a lot of overhead for static data I only need array-style access to in RAM.
    3) Build it into a binary blob and read it in (never done this in Java; I miss void *).
    4) Build a Python script to take the CSV version of my data and spit out a Java function that adds the values to my desired structure as hard-coded values.
    5) Store a string array via Android's resource mechanism and compute the other 2 columns on application load. In my case the computation would require a lot of calls to Math.log, Math.pow and Math.floor, which I'd rather not do, for load time and battery usage reasons.
    I mostly work on low-power embedded applications in C, and as such #4 is what I'm used to doing in these situations. It just seems like it should be far easier to gain access to static data structures in Java/Android. Perhaps I'm just being too battery-usage conscious, and in my single case I imagine the answer is that it doesn't matter much, but if every application took that stance it could begin to matter. What approaches do people usually take in this situation? Anything I missed?
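    For option 4 above, a small generator script run at build time can turn the CSV into a hard-coded Java class. The file name, class name, and column layout below are made up for illustration:

        import csv

        # Hypothetical input: data.csv with rows of "name,factor,count" (string, float, int).
        with open("data.csv") as f:
            rows = [r for r in csv.reader(f) if r]

        names   = ", ".join('"%s"' % r[0] for r in rows)
        factors = ", ".join("%sf" % r[1] for r in rows)
        counts  = ", ".join(r[2] for r in rows)

        template = (
            "// Generated from data.csv - do not edit by hand.\n"
            "public final class StaticData {\n"
            "    public static final String[] NAMES = { %s };\n"
            "    public static final float[] FACTORS = { %s };\n"
            "    public static final int[] COUNTS = { %s };\n"
            "}\n"
        )

        with open("StaticData.java", "w") as out:
            out.write(template % (names, factors, counts))

    The generated arrays are then compiled into the APK, so there is no parsing, no database, and no math at application load.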

    Read the article

  • Efficient data importing?

    - by Kevin
    We work with a lot of real estate, and while rearchitecting how the data is imported, I came across an interesting issue. Firstly, the way our system works (loosely speaking) is that we run a ColdFusion process once a day that retrieves data provided by an IDX vendor via FTP. They push the data to us; whatever they send us is what we get. Over the years, this has proven to be rather unstable. I am rearchitecting it with PHP on the RETS standard, which uses SOAP methods to retrieve data and has already proven to be much better than what we had. When it comes to 'updating' existing data, my initial thought was to query only for data that was updated. There is a 'Modified' field that tells you when a listing was last updated, and the code I have will grab any listing updated within the last 6 hours (to give myself a window in case something goes wrong). However, I see a lot of real estate developers suggest creating a constantly running 'batch' process that runs through all listings regardless of updated status. Is this the better way to do it? Or am I fine with just grabbing the data I know I need? It doesn't make a lot of sense to me to do more processing than necessary. Thoughts?

    Read the article

  • source of historical stock data

    - by rmeador
    I'm trying to make a stock market simulator (perhaps eventually growing into a predicting AI), but I'm having trouble finding data to use. I'm looking for a (hopefully free) source of historical stock market data. Ideally, it would be a very fine-grained (second or minute interval) data set with price and volume of every symbol on NASDAQ and NYSE (and perhaps others if I get adventurous). Does anyone know of a source for such info? I found this question, which indicates Yahoo offers historical data in CSV format, but I've been unable to find out how to get it from a cursory examination of the linked site. I also don't like the idea of downloading the data piecemeal in CSV files... I imagine Yahoo would get upset and shut me off after the first few thousand requests. I also discovered another question that made me think I'd hit the jackpot, but unfortunately that OpenTick site seems to have closed its doors... too bad, since I think they were exactly what I wanted. I'd also be able to use data that's just open/close price and volume of every symbol every day, but I'd prefer all the data if I can get it. Any other suggestions?

    Read the article

  • Logical error filtering data in a GridView using ASP.NET

    - by RajuBabli Abbasi
    I want to filter the data in a GridView in ASP.NET, but the data is not being filtered; I have a logical error somewhere. I would be very thankful to anyone who can help. Please consider my code. My .aspx.cs file is:

        protected void Page_Load(object sender, EventArgs e)
        {
            if (!IsPostBack)
            {
                DisplayStudentInformation();
            }
        }

        private void DisplayStudentInformation()
        {
            string filter = "%" + filterTextBox.Text + "%";
            if (filter == String.Empty)
                filter = "%";
            try
            {
                using (SqlDataReader reader = DAC.GetCompanyInformation(filter))
                {
                    //reader.Read();
                    StudentGridView.DataSource = reader;
                    StudentGridView.DataBind();
                }
            }
            catch (SqlException ex)
            {
                StatusLabel.Text = ex.Message;
            }
        }

    My .aspx file is:

        <asp:Table ID="Tabel" runat="server">
          <asp:TableRow>
            <asp:TableCell>
              <asp:Label ID="filterLabel" runat="server" Text="Company Name Filter:" AssociatedControlID="filterTextBox" />
            </asp:TableCell>
            <asp:TableCell>
              <asp:TextBox ID="filterTextBox" runat="server" MaxLength="50" />
            </asp:TableCell>
            <asp:TableCell>
              <asp:Button ID="refreshButton" runat="server" Text="Filter" CausesValidation="false" />
            </asp:TableCell>
          </asp:TableRow>
        </asp:Table>

    My DAC file is:

        public static SqlDataReader GetCompanyInformation(string filter)
        {
            SqlDataReader reader;
            string sql = "SELECT * FROM Student WHERE LastName LIKE @prmLastName";
            using (SqlCommand command = new SqlCommand(sql, ConnectionManager.GetConnection()))
            {
                // Pass CommandBehavior.SingleResult because we need a single result set,
                // and CommandBehavior.CloseConnection so the connection closes when the reader is closed.
                // command.Parameters.Add("@prmLastName", SqlDbType.VarChar, 25).Value = filter;
                command.Parameters.AddWithValue("@prmLastName", filter);
                reader = command.ExecuteReader(CommandBehavior.SingleResult | CommandBehavior.CloseConnection);
            }
            return reader;
        }

    Note: when I don't use the if (!IsPostBack) condition and simply call DisplayStudentInformation() in Page_Load, the data can be filtered. But with the if (!IsPostBack) condition, which is also important (for updating the data and for other purposes), the data is not filtered. So the aim is to filter the data in the GridView while keeping the if (!IsPostBack) condition, i.e. without removing it.

    Read the article

  • $.ajax not loading data from server every time

    - by Ted
    I have written a simple jQuery.ajax call which loads a user control from the server on the click of a button. The first time I click the button, it goes to the server and gets me the user control, but each subsequent click of the same button does not go to the server to fetch the user control. Since my user control fetches data from the database, I need to reload the user control every time I hit the button. However, if I somehow get my user control to unload from the page and re-click the button, it does go to the server and fetch the user control. Here's the code:

        $("#btnLoad").click(function() {
            if ($(this).attr("value") == "Load Control") {
                $.ajax({
                    url: "AJAXHandler.ashx",
                    data: { "lt": "loadcontrol" },
                    dataType: "html",
                    success: function(data) { content.html(data); }
                });
                $(this).attr("value", "Unload Control");
            } else {
                $.ajax({
                    url: "AJAXHandler.ashx",
                    data: { "lt": "unloadcontrol" },
                    dataType: "html",
                    success: function(data) { content.html(data); }
                });
                $(this).attr("value", "Load Control");
            }
        });

    Please let me know if there is any other way I can get my user control loaded from the server every time I click the button.

    Read the article

  • Architecture for data layer that uses both localStorage and a REST remote server

    - by Zack
    Does anybody have any ideas or references on how to implement a data persistence layer that uses both localStorage and a REST remote store? The data of a certain client is stored with localStorage (using an ember-data IndexedDB adapter). The locally stored data is synced with the remote server (using the ember-data RESTAdapter). The server gathers all data from the clients. Using mathematical set notation: Server = Client1 ∪ Client2 ∪ ... ∪ ClientN, where, in general, a record may not be unique to a certain client. Here are some scenarios: A client creates a record. The id of the record cannot be set on the client, since it may conflict with a record stored on the server. Therefore a newly created record needs to be committed to the server, receive its id, and then be created in localStorage. A record is updated on the server, and as a consequence the data in localStorage and on the server go out of sync. Only the server knows that, so the architecture needs to implement push (?). Would you use two stores (one for localStorage, one for REST) and sync between them, or use a hybrid IndexedDB/REST adapter and write the sync code within the adapter? Can you see any way to avoid implementing push (WebSockets, ...)?

    Read the article

  • Best practices for encrypting continuous/small UDP data

    - by temp
    Hello everyone, I have an application where I have to send several small pieces of data per second over the network using UDP. The application needs to send the data in real time (no waiting). I want to encrypt this data and ensure that what I am doing is as secure as possible. Since I am using UDP, there is no way to use SSL/TLS, so I have to encrypt each packet on its own, since the protocol is connectionless/unreliable/unregulated. Right now, I am using a 128-bit key derived from a passphrase from the user, and AES in CBC mode (PBE using AES-CBC). I decided to use a random salt with the passphrase to derive the 128-bit key (to prevent dictionary attacks on the passphrase), and of course IVs (to prevent statistical analysis of packets). However, I am concerned about a few things: Each packet contains a small amount of data (like a couple of integer values per packet), which will make the encrypted packets vulnerable to known-plaintext attacks (which will make it easier to crack the key). Also, since the encryption key is derived from a passphrase, this makes the key space much smaller (I know the salt will help, but I have to send the salt over the network once and anyone can get it). Given these two things, anyone can sniff and store the sent data and try to crack the key. Although this process might take some time, once the key is cracked all the stored data will be decrypted, which will be a real problem for my application. So my question is: what are the best practices for sending/encrypting continuous small data using a connectionless protocol (UDP)? Is my way the best way to do it? ...Flawed? ...Overkill? Please note that I am not asking for a 100% secure solution, as there is no such thing. Cheers
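    As a minimal sketch of the scheme described above (PBKDF2 to derive the 128-bit key from the passphrase and salt, a fresh IV per datagram, AES-CBC with PKCS7 padding), using the pyca/cryptography package; the iteration count and payload are placeholders, and no authentication tag is added here, which a real deployment would want (e.g. an HMAC or an AEAD mode):

        import os
        from cryptography.hazmat.primitives import hashes, padding
        from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
        from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

        def derive_key(passphrase, salt):
            # 128-bit key from the user's passphrase, as described in the question.
            kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=16,
                             salt=salt, iterations=200000)
            return kdf.derive(passphrase.encode())

        def encrypt_packet(key, payload):
            iv = os.urandom(16)                                # fresh IV for every datagram
            padder = padding.PKCS7(128).padder()
            padded = padder.update(payload) + padder.finalize()
            enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
            return iv + enc.update(padded) + enc.finalize()    # prepend IV to the ciphertext

        salt = os.urandom(16)                 # sent to the peer once, as in the question
        key = derive_key("correct horse battery staple", salt)
        datagram = encrypt_packet(key, b"\x00\x2a\x00\x07")    # a couple of small integers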

    Read the article
