Search Results

Search found 75233 results on 3010 pages for 'data distribution service'.

Page 56 of 3010

  • Data and Secularism

    - by kaleidoscope
    Ever since we've been using data we've been religious: religious about the way we represent it and equally religious about the way we access it. Be it plain old SQL, DAO, ADO or ADO.NET, and I am only referring to religions in the MSFT world; a peek outside and I'd need a separate book to list out the data faiths. Various application areas in networked computing are converging under the HTTP umbrella, with a plausible transition to purist HTTP and in turn REST, fuelled by the Web 2.0 storm. It was time the data access faiths also gave up the religious silos wrapped around our long-worshipped data publishing and access methods. OData is the secular solution we have at hand today. It is an open protocol for sharing data, it can be exposed via REST, and it is open as in the Microsoft Open Specification Promise, which allows virtually everyone to build data services for any runtime. OData is one of the key standards for data publishing/subscribing on Microsoft Codename "Dallas". For us .NETters, OData data sources can be exposed and consumed via WCF Data Services, and the process is simple, elegant and intuitive. Applications exposing OData services: SharePoint 2010, IBM WebSphere, Microsoft SQL Azure, Windows Azure Table Storage, SQL Server Reporting Services. Live OData services: Netflix, Open Science Data Initiative, Open Government Data Initiatives, the Northwind database exposed as an OData service, and many others. Some may prefer to call it commoditization of data, unification of data access strategies, or any other sweet name. I for one will stick to my secular definition. :) Technorati Tags: Sarang, OData, MOSP
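
    A quick way to see OData in action from Python; the public Northwind demo feed at services.odata.org mentioned above is assumed here and may change or go away:

        # Hedged sketch: read a few entities from a public OData feed over plain HTTP.
        # The service URL is the public Northwind demo feed (an assumption).
        import json
        import urllib.request

        url = ("https://services.odata.org/V3/Northwind/Northwind.svc/"
               "Customers?$top=3&$format=json")
        with urllib.request.urlopen(url) as resp:
            payload = json.load(resp)

        for customer in payload["value"]:
            print(customer["CustomerID"], customer["CompanyName"])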

    Read the article

  • How accurate is "Business logic should be in a service, not in a model"?

    - by Jeroen Vannevel
    Situation: Earlier this evening I gave an answer to a question on StackOverflow. The question: "Editing of an existing object should be done in the repository layer or in a service? For example, if I have a User that has debt and I want to change his debt, should I do it in UserRepository or in a service, for example BuyingService, by getting the object, editing it and saving it?" My answer: you should leave the responsibility of mutating an object to that same object and use the repository to retrieve this object. Example situation:

        class User {
            private int debt; // debt in cents
            private string name;

            // getters

            public void makePayment(int cents) {
                debt -= cents;
            }
        }

        class UserRepository {
            public User GetUserByName(string name) {
                // Get appropriate user from database
            }
        }

    A comment I received: "Business logic should really be in a service. Not in a model."

    What does the internet say? This got me searching, since I've never really (consciously) used a service layer. I started reading up on the Service Layer pattern and the Unit of Work pattern, but so far I can't say I'm convinced a service layer has to be used. Take for example this article by Martin Fowler on the anti-pattern of an Anemic Domain Model: "There are objects, many named after the nouns in the domain space, and these objects are connected with the rich relationships and structure that true domain models have. The catch comes when you look at the behavior, and you realize that there is hardly any behavior on these objects, making them little more than bags of getters and setters. Indeed often these models come with design rules that say that you are not to put any domain logic in the domain objects. Instead there are a set of service objects which capture all the domain logic. These services live on top of the domain model and use the domain model for data. (...) The logic that should be in a domain object is domain logic - validations, calculations, business rules - whatever you like to call it." To me, this seemed exactly what the situation was about: I advocated manipulating an object's data by introducing methods inside that class that do just that. However, I realize that this should be a given either way, and it probably has more to do with how these methods are invoked (using a repository). I also had the feeling that in that article (see below), a Service Layer is considered more as a façade that delegates work to the underlying model than as an actual work-intensive layer: "Application Layer [his name for Service Layer]: Defines the jobs the software is supposed to do and directs the expressive domain objects to work out problems. The tasks this layer is responsible for are meaningful to the business or necessary for interaction with the application layers of other systems. This layer is kept thin. It does not contain business rules or knowledge, but only coordinates tasks and delegates work to collaborations of domain objects in the next layer down. It does not have state reflecting the business situation, but it can have state that reflects the progress of a task for the user or the program." Which is reinforced here: "Service interfaces. Services expose a service interface to which all inbound messages are sent. You can think of a service interface as a façade that exposes the business logic implemented in the application (typically, logic in the business layer) to potential consumers." And here: "The service layer should be devoid of any application or business logic and should focus primarily on a few concerns. It should wrap Business Layer calls, translate your Domain in a common language that your clients can understand, and handle the communication medium between server and requesting client." This is a serious contrast to other resources that talk about the Service Layer: "The service layer should consist of classes with methods that are units of work with actions that belong in the same transaction." Or the second answer to a question I've already linked: "At some point, your application will want some business logic. Also, you might want to validate the input to make sure that there isn't something evil or nonperforming being requested. This logic belongs in your service layer."

    "Solution"? Following the guidelines in this answer, I came up with the following approach that uses a Service Layer:

        class UserController : Controller {
            private UserService _userService;

            public UserController(UserService userService) {
                _userService = userService;
            }

            public ActionResult MakeHimPay(string username, int amount) {
                _userService.MakeHimPay(username, amount);
                return RedirectToAction("ShowUserOverview");
            }

            public ActionResult ShowUserOverview() {
                return View();
            }
        }

        class UserService {
            private IUserRepository _userRepository;

            public UserService(IUserRepository userRepository) {
                _userRepository = userRepository;
            }

            public void MakeHimPay(string username, int amount) {
                _userRepository.GetUserByName(username).makePayment(amount);
            }
        }

        class UserRepository {
            public User GetUserByName(string name) {
                // Get appropriate user from database
            }
        }

        class User {
            private int debt; // debt in cents
            private string name;

            // getters

            public void makePayment(int cents) {
                debt -= cents;
            }
        }

    Conclusion: All together, not much has changed here: code from the controller has moved to the service layer, which is a good thing, so there is an upside to this approach. However, this doesn't look like it had anything to do with my original answer. I realize design patterns are guidelines, not rules set in stone to be implemented whenever possible. Yet I have not found a definitive explanation of the service layer and how it should be regarded. Is it a means to simply extract logic from the controller and put it inside a service instead? Is it supposed to form a contract between the controller and the domain? Should there be a layer between the domain and the service layer? And, last but not least, following the original comment - "Business logic should really be in a service. Not in a model." - is this correct? How would I introduce my business logic in a service instead of the model?

    Read the article

  • Can web apps allow fast data-typists to "type-ahead"?

    - by user61852
    In some data entry contexts, I've seen data typists who type really fast, know the app they use so well, and have such a mechanical quality to their work that they can "type ahead", i.e. continue typing, tabbing and entering faster than the display updates, so that on many occasions they are typing in the data for the next form before it has drawn itself. Then, when that next entry form appears, their buffered keystrokes fill the text boxes and they continue typing, selecting, etc. In contexts like this, such speed is desirable, since these people are really productive. I think this "typing ahead of time" is only possible in desktop apps, but I may be wrong. My question is whether this way of handling the keyboard buffer (which in desktop apps requires no extra programming) is achievable in web apps, or whether it is impossible because of the way web apps work, handle sessions, etc. (network latency and the overhead of generating new web pages)? Edit: By "type ahead" I mean "keyboard type-ahead" (typing faster than the next entry form can load), not suggest-as-you-type, Google-style type-ahead. Type-ahead is a feature of computers and software (and some typewriters) that enables users to continue typing regardless of program or computer operation: the user may type at whatever speed he or she desires, and if the receiving software is busy at the time, it will be called on to handle the input later. Often this means that keystrokes entered will not be displayed on the screen immediately. This programming technique for handling user input relies on what is known as a keyboard buffer.

    Read the article

  • Why isn't DSM for unstructured memory done today?

    - by sinned
    Ages ago, Dijkstra invented IPC through mutexes, which then somehow led to shared memory (SHM) in Multics (which AFAIK had the necessary mmap first). Then computer networks came up and DSM (distributed shared memory) was invented for IPC between computers. So DSM is basically a non-prestructured memory region (like SHM) that magically gets synchronized between computers without the application programmer taking action. Implementations include TreadMarks (unofficially dead now) and CRL. But then someone thought this was not the right way to do it and invented Linda and tuple spaces. Current implementations include JavaSpaces and GigaSpaces. Here, you have to structure your data into tuples. Other ways to achieve similar effects may be the use of a relational database or a key-value store like Riak. Although someone might argue otherwise, I don't consider them DSM, since there is no coherent memory region where you can put data structures as you like; instead you have to structure your data, which can be hard if it is contiguous, and administration such as locking cannot be done for hard-coded parts (= tuples, ...). Why is there no DSM implementation today, or am I just unable to find one?

    Read the article

  • How to migrate user settings and data to new machine?

    - by torbengb
    I'm new to Ubuntu and recently started using it on my PC. I'm going to replace that PC with a new machine, a nettop, and I want to transfer my data and settings to it. What aspects should I consider? Obviously I want to move my data over, but what things am I missing if I only copy the entire home folder? This is a home PC (not corporate), so user rights and other security issues are not a concern, except that the files should be accessible on the new machine. Please take into account that the new machine is a nettop that doesn't have an optical drive and doesn't allow me to hook the old SATA disk into it, so any data transfer must be handled via the home network (I can have both the old and the new machine turned on and connected to the home LAN), and I have a USB thumbdrive with limited capacity (2GB). This sounds like it might limit the general applicability, but it would in fact make it more general. I'll make this a wiki topic because there could be several "right" answers. Update: or so I thought; I don't see a choice for that.

    Read the article

  • Is it conceivable to have millions of lists of data in memory in Python?

    - by Codemonkey
    I have over the last 30 days been developing a Python application that utilizes a MySQL database of information (specifically about Norwegian addresses) to perform address validation and correction. The database contains approximately 2.1 million rows (43 columns) of data and occupies 640MB of disk space. I'm thinking about speed optimizations, and I have to assume that when validating 10,000+ addresses, with each validation running up to 20 queries against the database, networking is a speed bottleneck. I haven't done any measuring or timing yet, and I'm sure there are simpler ways of speed-optimizing the application at the moment, but I just want to get the experts' opinions on how realistic it is to load this amount of data into a list-of-rows structure in Python. Also, would it even be any faster? Surely MySQL is optimized for looking up records among vast amounts of data, so how much help would it even be to remove the networking step? Can you imagine any other viable methods of removing the networking step? The location of the MySQL server will vary, as the application might well be run from a laptop at home or at the office, where the server would be local.
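
    For scale, a minimal sketch (with made-up table and column names) of pulling the table into a Python dict once, so each validation becomes an in-memory lookup instead of a round trip:

        # Hedged sketch: load the address table into RAM and index it by a lookup key.
        # Table and column names are placeholders, not the poster's real schema.
        import MySQLdb

        conn = MySQLdb.connect(host="localhost", user="app", passwd="secret",
                               db="addresses")
        cur = conn.cursor()
        cur.execute("SELECT postal_code, street_name, house_number, city "
                    "FROM norwegian_addresses")

        index = {}
        for postal_code, street_name, house_number, city in cur:
            # Group rows by (postal_code, street_name) for fast candidate lookups.
            index.setdefault((postal_code, street_name.lower()), []).append(
                (house_number, city))
        conn.close()

        def candidates(postal_code, street_name):
            """Possible (house_number, city) rows for an address, or []."""
            return index.get((postal_code, street_name.lower()), [])

    Rough expectation: keeping only the handful of columns needed for matching typically costs a few hundred megabytes of Python objects, while keeping all 43 columns resident could run to several gigabytes, so trimming columns matters more than the row count.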

    Read the article

  • MATLAB: What is an appropriate Data Structure for a Matrix with Random Variable Entries?

    - by user12707
    I'm working in an area related to simulation and am trying to design a data structure that can include random variables within matrices. I am currently coding in MATLAB. To motivate this, say I have the following matrix: [a b; c d]. I want a data structure that allows each of a, b, c, d to be either a real number or a random variable. As an example, let's say that a = 1, b = -1, c = 2, but let d be a normally distributed random variable with mean 20 and SD 40. The data structure I have in mind will give no value to d. However, I also want to be able to design a function that can take in the structure, simulate a uniform(0,1), obtain a value for d using an inverse CDF, and then spit out an actual matrix. I have several ideas for doing this (all related to the MATLAB icdf function) but would like to know how more experienced programmers would do it. In this application, it's important that the structure is as "lean" as possible, since I will be working with very, very large matrices and memory will be an issue.
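
    The question is about MATLAB, but the underlying idea is language-agnostic; a small Python illustration (using SciPy's inverse CDF) of storing fixed entries as numbers and random entries as callables, then realizing a concrete matrix on demand:

        # Illustration only: deterministic entries are plain floats; random entries
        # are inverse-CDF callables evaluated at a uniform(0,1) draw.
        import random
        from scipy.stats import norm

        # a = 1, b = -1, c = 2 are fixed; d ~ Normal(mean 20, SD 40) stored as its icdf.
        matrix = [[1.0, -1.0],
                  [2.0, lambda u: norm.ppf(u, loc=20, scale=40)]]

        def realize(m, rng=random):
            """Replace every callable entry with a draw from its distribution."""
            return [[entry(rng.random()) if callable(entry) else entry
                     for entry in row]
                    for row in m]

        print(realize(matrix))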

    Read the article

  • Migrating data from Plone to Liferay, or how could I retrieve information from Plone's Data.fs

    - by brandizzi
    Hello, all. I need to migrate data from a Plone-based portal to Liferay. Does anyone have an idea of how to do it? In any case, I am trying to retrieve data from Data.fs and store it in a representation that is easier to work with, such as JSON. To do that, I need to know which objects I should get from Plone's Data.fs. I already got the Products.CMFPlone.Portal.PloneSite instance from the Data.fs, but I cannot get anything out of it. I would like to get the PloneSite instance and do something like this:

        >>> import ZODB
        >>> from ZODB import FileStorage, DB
        >>> path = r"C:\Arquivos de programas\Plone\var\filestorage\Data.fs"
        >>> storage = FileStorage.FileStorage(path)
        >>> db = DB(storage)
        >>> conn = db.open()
        >>> root = conn.root()
        >>> app = root['Application']
        >>> plone_site = app.getChildNodes()[13]  # 13 would be the index of the PloneSite object
        >>> articles = plone_site.get_articles()
        >>> for article in articles:
        ...     print "Title:", article.title
        ...     print "Content:", article.content
        Title: <some title>
        Content: <some content>
        Title: <some title>
        Content: <some content>

    Of course, it does not need to be this straightforward. I just want some information about the structure of PloneSite and how to recover its data. Does anyone have an idea? Thank you in advance!
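
    Continuing from that session, a heavily hedged sketch of walking the site and dumping fields to JSON; objectValues() is the standard Zope container API, while Title()/getText() are Archetypes accessors that may not exist on every content type:

        # Hedged sketch: recursively walk a PloneSite and dump Document content to JSON.
        import json

        def walk(folder):
            for obj in folder.objectValues():
                if getattr(obj, 'portal_type', None) == 'Document':
                    yield {'id': obj.getId(),
                           'title': obj.Title(),
                           'text': obj.getText()}
                # Recurse into anything that looks like a container.
                if hasattr(obj, 'objectValues'):
                    for item in walk(obj):
                        yield item

        articles = list(walk(plone_site))   # plone_site as obtained in the session above
        with open('plone_export.json', 'w') as out:
            json.dump(articles, out)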

    Read the article

  • Using Mapping Models to migrate between Core Data Object Models

    - by westsider
    I have a fairly simple schema. Essentially, Run <-- Data (where a Run holds data, e.g., Temperature, sampled from some sort of sensor). Now, it seems that sensors can have more than one measurement (e.g., Temperature and Humidity), so a single Run could have multiple data samples. Hence, Run <-- Sample and Sample <-- Data. (And for simplicity I am leaving Run <-- Data in place, for now.) If I create a new mapping model, then things generally work, except that no new Samples are created, and no relationships are established between Runs and Samples nor between Samples and Datas. I am trying to get the mapping model to migrate my model, but even the slightest change to the generated mapping model results in Cocoa error 134110. For example, if I take the "Sample" mapping (which has no Source) and set its Source to 'Run' (so that I can set Sample's inverse relationship 'run' appropriately), then the mapping changes its name to "RunToSample". There are two relationships handled in this mapping: data and run. The data property gets set automatically to

        FUNCTION($manager, "destinationInstancesForEntityMappingNamed:sourceInstances:", "DataToData", $source.dataSet)

    Following this example, I set the run property to

        FUNCTION($manager, "destinationInstancesForEntityMappingNamed:sourceInstances:", "RunToRun", $source)

    Similarly, I set the 'sample' property mapping in RunToRun to

        FUNCTION($manager, "destinationInstancesForEntityMappingNamed:sourceInstances:", "RunToSample", $source)

    and the 'sample' property in DataToData to

        FUNCTION($manager, "destinationInstancesForEntityMappingNamed:sourceInstances:", "RunToSample", $source.run)

    So, what, I wonder, is going wrong? I have tried various permutations, such as leaving the 'inverse' relationships unspecified, but I continue to get the same error (134110) regardless. I imagine that this is a lot easier than it seems and that I am missing some fundamental but minor piece. I have also tried subclassing NSEntityMigrationPolicy and overriding -createDestinationInstancesForSourceInstance:, but these efforts have met with much the same results. Thanks in advance for any pointers or (relevant :-) advice.

    Read the article

  • Static Data Structures on Embedded Devices (Android in particular)

    - by Mark
    I've started working on some Android applications and have a question regarding how people normally deal with situations where you have a static data set and an application where that data is needed in memory as one of the standard Java collections or as an array. In my current specific case I have a spreadsheet with some pre-calculated data. It consists of ~100 rows and 3 columns: one column is a string, one column is a float, and one column is an integer. I need access to this data as an array in Java. It seems like I could:
    1) Encode it in XML - in my experience this would be CPU-intensive to decode.
    2) Build it into an SQLite database - this seems like a lot of overhead for static data I only need array-style access to in RAM.
    3) Build it into a binary blob and read it in (never done this in Java; I miss void *).
    4) Build a Python script to take the CSV version of my data and spit out a Java function that adds the values to my desired structure with hard-coded values.
    5) Store a string array via Android's resource mechanism and compute the other two columns on application load. In my case the computation would require a lot of calls to Math.log, Math.pow and Math.floor, which I'd rather not have to do, for load-time and battery-usage reasons.
    I mostly work on low-power embedded applications in C, and as such #4 is what I'm used to doing in these situations. It just seems like it should be far easier to gain access to static data structures in Java/Android. Perhaps I'm just being too conscious of battery usage, and in my single case I imagine the answer is that it doesn't matter much, but if every application took that stance it could begin to matter. What approaches do people usually take in this situation? Anything I missed?
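
    Option 4 is the kind of thing a short generator script handles well; a hedged sketch (file, class and array names are made up) that turns the 3-column CSV into hard-coded parallel arrays in Java:

        # Hedged sketch of option 4: generate a Java class with hard-coded arrays
        # from a name,float,int CSV. File and class names are placeholders.
        import csv

        names, floats, ints = [], [], []
        with open("table.csv", newline="") as f:
            for name, fval, ival in csv.reader(f):
                names.append('"%s"' % name)
                floats.append("%sf" % fval)
                ints.append(ival)

        with open("StaticTable.java", "w") as out:
            out.write("public final class StaticTable {\n")
            out.write("    public static final String[] NAMES = {%s};\n" % ", ".join(names))
            out.write("    public static final float[] VALUES = {%s};\n" % ", ".join(floats))
            out.write("    public static final int[] COUNTS = {%s};\n" % ", ".join(ints))
            out.write("}\n")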

    Read the article

  • Efficient data importing?

    - by Kevin
    We work with a lot of real estate, and while re-architecting how the data is imported, I came across an interesting issue. Firstly, the way our system works (loosely speaking) is that we run a ColdFusion process once a day that retrieves data provided by an IDX vendor via FTP. They push the data to us; whatever they send us is what we get. Over the years, this has proven to be rather unstable. I am re-architecting it with PHP on the RETS standard, which uses SOAP methods of retrieving data and is already proving to be much better than what we had. When it comes to updating existing data, my initial thought was to query only for data that had been updated. There is a 'Modified' field that tells you when a listing was last updated, and the code I have will grab any listing updated within the last 6 hours (giving myself a window in case something goes wrong). However, I see a lot of real estate developers suggest creating 'batch' processes that constantly run through all listings regardless of updated status. Is this the better way to do it? Or am I fine with just grabbing the data I know I need? It doesn't make a lot of sense to me to do more processing than necessary. Thoughts?
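
    The incremental approach mostly comes down to remembering the last successful run and querying the Modified field with some overlap; a library-agnostic sketch of just that windowing logic (the query format and state handling are placeholders, not a real RETS client):

        # Hedged sketch: fetch only listings modified since the last successful run,
        # minus a safety overlap, so a failed run doesn't leave a gap.
        from datetime import datetime, timedelta

        OVERLAP = timedelta(hours=6)

        def incremental_query(last_success_utc):
            since = last_success_utc - OVERLAP
            # DMQL-style filter string; adjust to whatever the RETS server expects.
            return "(Modified=%s+)" % since.strftime("%Y-%m-%dT%H:%M:%S")

        last_success = datetime.utcnow() - timedelta(days=1)  # load from saved state in practice
        print(incremental_query(last_success))

    One reason developers still suggest periodic full batch runs is that incremental queries on the Modified date won't catch listings that were deleted or withdrawn, so an occasional full reconciliation complements the incremental pulls.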

    Read the article

  • source of historical stock data

    - by rmeador
    I'm trying to make a stock market simulator (perhaps eventually growing into a predicting AI), but I'm having trouble finding data to use. I'm looking for a (hopefully free) source of historical stock market data. Ideally, it would be a very fine-grained (second or minute interval) data set with price and volume of every symbol on NASDAQ and NYSE (and perhaps others if I get adventurous). Does anyone know of a source for such info? I found this question which indicates Yahoo offers historical data in CSV format, but I've been unable to find out how to get it in a cursory examination of the site linked. I also don't like the idea of downloading the data piecemeal in CSV files... I imagine Yahoo would get upset and shut me off after the first few thousand requests. I also discovered another question that made me think I'd hit the jackpot, but unfortunately that OpenTick site seems to have closed its doors... too bad, since I think they were exactly what I wanted. I'd also be able to use data that's just open/close price and volume of every symbol every day, but I'd prefer all the data if I can get it. Any other suggestions?
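
    For the Yahoo CSV route mentioned above, the endpoint that was commonly documented at the time looked like the following; treat the URL format as an assumption, since it may have changed or been retired:

        # Hedged sketch: download daily OHLCV history as CSV from the old Yahoo endpoint.
        # The ichart.finance.yahoo.com URL format is an assumption and may no longer work.
        import csv
        import io
        import urllib.request
        from datetime import date

        def daily_history(symbol, start, end):
            url = ("http://ichart.finance.yahoo.com/table.csv?s=%s"
                   "&a=%d&b=%d&c=%d&d=%d&e=%d&f=%d&g=d&ignore=.csv"
                   % (symbol,
                      start.month - 1, start.day, start.year,   # start month is 0-based
                      end.month - 1, end.day, end.year))
            with urllib.request.urlopen(url) as resp:
                text = resp.read().decode("ascii")
            return list(csv.DictReader(io.StringIO(text)))

        rows = daily_history("MSFT", date(2009, 1, 1), date(2009, 12, 31))
        # each row: Date, Open, High, Low, Close, Volume, Adj Close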

    Read the article

  • Filter of Data in Gridview logical error please using asp.net

    - by RajuBabli Abbasi
    I want to filter the data in a GridView in ASP.NET, but the data is not filtering; I have some logical error. Please consider my code and reply with code if you can. My code-behind (.aspx.cs) file is:

        protected void Page_Load(object sender, EventArgs e)
        {
            if (!IsPostBack)
            {
                DisplayStudentInformation();
            }
        }

        private void DisplayStudentInformation()
        {
            string filter = "%" + filterTextBox.Text + "%";
            if (filter == String.Empty)
                filter = "%";
            try
            {
                using (SqlDataReader reader = DAC.GetCompanyInformation(filter))
                {
                    StudentGridView.DataSource = reader;
                    StudentGridView.DataBind();
                }
            }
            catch (SqlException ex)
            {
                StatusLabel.Text = ex.Message;
            }
        }

    My .aspx markup is:

        <asp:Table ID="Tabel" runat="server">
            <asp:TableRow>
                <asp:TableCell>
                    <asp:Label ID="filterLabel" runat="server" Text="Company Name Filter:" AssociatedControlID="filterTextBox" />
                </asp:TableCell>
                <asp:TableCell>
                    <asp:TextBox ID="filterTextBox" runat="server" MaxLength="50" />
                </asp:TableCell>
                <asp:TableCell>
                    <asp:Button ID="refreshButton" runat="server" Text="Filter" CausesValidation="false" />
                </asp:TableCell>
            </asp:TableRow>
        </asp:Table>

    My DAC method is:

        public static SqlDataReader GetCompanyInformation(string filter)
        {
            SqlDataReader reader;
            string sql = "SELECT * FROM Student WHERE LastName LIKE @prmLastName";
            using (SqlCommand command = new SqlCommand(sql, ConnectionManager.GetConnection()))
            {
                // Pass CommandBehavior.SingleResult because we need a single result,
                // and CloseConnection so the connection is closed once the data is retrieved.
                // command.Parameters.Add("@prmLastName", SqlDbType.VarChar, 25).Value = filter;
                command.Parameters.AddWithValue("@prmLastName", filter);
                reader = command.ExecuteReader(CommandBehavior.SingleResult | CommandBehavior.CloseConnection);
            }
            return reader;
        }

    Note: when I don't use the if (!IsPostBack) condition and simply call DisplayStudentInformation() in Page_Load, the data can be filtered, but that condition is also important (for updating the data and other purposes). So the aim is to filter the data in the GridView while keeping the if (!IsPostBack) condition, i.e. without removing the IsPostBack check. I have asked others about this but nobody has solved it, so please help.

    Read the article

  • Architecture for data layer that uses both localStorage and a REST remote server

    - by Zack
    Does anybody have ideas or references on how to implement a data persistence layer that uses both localStorage and a REST remote storage? The data of a certain client is stored with localStorage (using an ember-data IndexedDB adapter). The locally stored data is synced with the remote server (using the ember-data RESTAdapter). The server gathers all data from clients. Using mathematical set notation: Server = Client1 ∪ Client2 ∪ ... ∪ ClientN, where, in general, a record may not be unique to a certain client. Here are some scenarios: A client creates a record. The id of the record cannot be set on the client, since it may conflict with a record stored on the server. Therefore a newly created record needs to be committed to the server, receive its id, and only then be created in localStorage. A record is updated on the server, and as a consequence the data in localStorage and on the server go out of sync. Only the server knows that, so the architecture needs to implement a push mechanism (?). Would you use two stores (one for localStorage, one for REST) and sync between them, or use a hybrid IndexedDB/REST adapter and write the sync code within the adapter? Can you see any way to avoid implementing push (WebSockets, ...)?

    Read the article

  • $.ajax not loading data from server every time

    - by Ted
    I have written a simple jQuery.ajax function which loads a user control from the server on click of a button. The first time I click the button, it goes to the server and gets me the user control, but each subsequent click of the same button does not go to the server to fetch the user control. Since my user control fetches data from the DB, I need to reload the user control every time I hit the button. However, if I somehow get my user control to unload from the page and re-click the button, it goes to the server and fetches me the user control. Here's the code:

        $("#btnLoad").click(function() {
            if ($(this).attr("value") == "Load Control") {
                $.ajax({
                    url: "AJAXHandler.ashx",
                    data: { "lt": "loadcontrol" },
                    dataType: "html",
                    success: function(data) {
                        content.html(data);
                    }
                });
                $(this).attr("value", "Unload Control");
            } else {
                $.ajax({
                    url: "AJAXHandler.ashx",
                    data: { "lt": "unloadcontrol" },
                    dataType: "html",
                    success: function(data) {
                        content.html(data);
                    }
                });
                $(this).attr("value", "Load Control");
            }
        });

    Please let me know if there is any other way I can get my user control loaded from the server every time I click the button.

    Read the article

  • Best practices for encrypting continuous/small UDP data

    - by temp
    Hello everyone, I have an application where I have to send several small pieces of data per second through the network using UDP. The application needs to send the data in real time (no waiting). I want to encrypt this data and ensure that what I am doing is as secure as possible. Since I am using UDP, there is no way to use SSL/TLS, so I have to encrypt each packet on its own, since the protocol is connectionless/unreliable/unregulated. Right now, I am using a 128-bit key derived from a passphrase supplied by the user, and AES in CBC mode (PBE using AES-CBC). I decided to use a random salt with the passphrase to derive the 128-bit key (to prevent dictionary attacks on the passphrase), and of course to use IVs (to prevent statistical analysis of packets). However, I am concerned about a few things: each packet contains a small amount of data (like a couple of integer values per packet), which will make the encrypted packets vulnerable to known-plaintext attacks (and so make it easier to crack the key). Also, since the encryption key is derived from a passphrase, the key space is much smaller (I know the salt helps, but I have to send the salt through the network once and anyone can get it). Given these two things, anyone can sniff and store the sent data and try to crack the key. Although this process might take some time, once the key is cracked all the stored data will be decrypted, which will be a real problem for my application. So my question is: what are the best practices for sending/encrypting continuous small data using a connectionless protocol (UDP)? Is my way the best way to do it? ...flawed? ...overkill? Please note that I am not asking for a 100% secure solution, as there is no such thing. Cheers
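
    A sketch of the mechanics described above (PBKDF2 key from passphrase plus salt, AES-128-CBC per packet, random IV prepended to each datagram), using the cryptography package; note that CBC alone gives no integrity protection, so a real design would add a per-packet MAC or use an AEAD mode:

        # Sketch only: per-packet AES-CBC with a PBKDF2-derived key and a fresh IV.
        # CBC provides confidentiality, not integrity; add a MAC or use AEAD in practice.
        import os
        from cryptography.hazmat.primitives import hashes, padding
        from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
        from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

        def derive_key(passphrase, salt):
            kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=16, salt=salt,
                             iterations=200000)
            return kdf.derive(passphrase)

        def encrypt_packet(key, payload):
            iv = os.urandom(16)
            padder = padding.PKCS7(128).padder()
            padded = padder.update(payload) + padder.finalize()
            enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
            return iv + enc.update(padded) + enc.finalize()

        salt = os.urandom(16)                                  # sent to the peer once, in the clear
        key = derive_key(b"correct horse battery staple", salt)
        datagram = encrypt_packet(key, b"\x00\x01\x00\x2a")    # a small payload per packet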

    Read the article

  • Export large amount of data from Oracle 10G to SQL Server 2005

    - by uniball
    Dear all, I need to export 100 million data rows (average row length ~100 bytes) from an Oracle 10G database table into SQL Server (over a WAN/VLAN with 6 Mbit/s capacity) on a regular basis. So far, these are the options I have tried, with a quick summary. Has anyone tried this before? Are there other, better options? Which option would be the best in terms of performance and reliability? The time taken has been estimated from tests on smaller amounts of data and then extrapolated.
    1) Using the data import wizard on the SQL Server side, or SSIS packages, to import the data: it would take around 150 hours to complete the task.
    2) Using an Oracle batch job to spool data into a comma-delimited flat file, then using an SSIS package to FTP this file to the SQL Server and load directly from the flat file: the issue here is the size of the flat file, which is expected to run into gigabytes.
    3) Although this option is drastically different, I am even considering using a Linked Server to query the Oracle data directly at run time, to avoid bringing the data in at all: performance is a big problem, and I have limited control over the Oracle database in terms of creating table indexes.
    Regards, Uniball
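
    Option 2 (spooling to a delimited flat file) can also be driven from Python with cx_Oracle, fetching in batches so 100 million rows never sit in memory at once; the connection string, table name and delimiter below are placeholders:

        # Hedged sketch of option 2: stream an Oracle table to a pipe-delimited file
        # in 10k-row batches. DSN and table name are placeholders.
        import csv
        import cx_Oracle

        conn = cx_Oracle.connect("user/password@orahost/service")
        cur = conn.cursor()
        cur.arraysize = 10000                     # rows fetched per round trip
        cur.execute("SELECT * FROM big_table")

        with open("big_table.csv", "w", newline="") as f:
            writer = csv.writer(f, delimiter="|")
            writer.writerow(col[0] for col in cur.description)   # header row
            while True:
                rows = cur.fetchmany()
                if not rows:
                    break
                writer.writerows(rows)
        conn.close()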

    Read the article

  • Using data.table to aggregate

    - by dayne
    After multiple suggestions from SO users, I am finally trying to convert my code over to using data.tables.

        library(data.table)
        DT <- data.table(plate = paste0("plate", rep(1:2, each = 5)),
                         id = rep(c("CTRL", "CTRL", "ID1", "ID2", "ID3"), 2),
                         val = 1:10)
        > DT
             plate   id val
          1: plate1 CTRL   1
          2: plate1 CTRL   2
          3: plate1  ID1   3
          4: plate1  ID2   4
          5: plate1  ID3   5
          6: plate2 CTRL   6
          7: plate2 CTRL   7
          8: plate2  ID1   8
          9: plate2  ID2   9
         10: plate2  ID3  10

    What I would like to do is take the average of DT[, val] by plate when the id is "CTRL". I would normally aggregate the data frame, then use match to map the values back to a new column, 'ctrl'. Using the data.table package I can get:

        DT[id == "CTRL", ctrl := mean(val), by = plate]
        > DT
             plate   id val ctrl
          1: plate1 CTRL   1  1.5
          2: plate1 CTRL   2  1.5
          3: plate1  ID1   3   NA
          4: plate1  ID2   4   NA
          5: plate1  ID3   5   NA
          6: plate2 CTRL   6  6.5
          7: plate2 CTRL   7  6.5
          8: plate2  ID1   8   NA
          9: plate2  ID2   9   NA
         10: plate2  ID3  10   NA

    What I need is really:

        DT <- data.table(plate = paste0("plate", rep(1:2, each = 5)),
                         id = rep(c("CTRL", "CTRL", "ID1", "ID2", "ID3"), 2),
                         val = 1:10,
                         ctrl = rep(c(1.5, 6.5), each = 5))
        > DT
             plate   id val ctrl
          1: plate1 CTRL   1  1.5
          2: plate1 CTRL   2  1.5
          3: plate1  ID1   3  1.5
          4: plate1  ID2   4  1.5
          5: plate1  ID3   5  1.5
          6: plate2 CTRL   6  6.5
          7: plate2 CTRL   7  6.5
          8: plate2  ID1   8  6.5
          9: plate2  ID2   9  6.5
         10: plate2  ID3  10  6.5

    Eventually I would like to use much more complicated selections of the values, but I do not know how to select specific values, run some function, then map those values back to the appropriate rows using data frames.

    Read the article

  • Fedora distribution update pop-up after fresh installation

    - by Sayan Ghosh
    Hi, we do a kickstart installation of FC-10 at our site. I am quite bothered by the distribution-update pop-up that appears after the OS installation. I want a keyword to put into the kickstart file that would stop Fedora from prompting me with an update pop-up. Is it possible to include such a switch in the kickstart, or a script that could be added to post.bash? Thanks, Sayan

    Read the article
