Search Results

Search found 62355 results on 2495 pages for 'data collection'.

Page 36 of 2495

  • Big Data for Retail

    - by David Dorf
    Right up there with mobile, social, and cloud is the term "big data," which seems to be popping up lots in the press these days.  Companies like Google, Yahoo, and Facebook have popularized a new class of data technologies meant to solve the problem of processing large amounts of data quickly.  I first mentioned this in a posting back in March 2009.  Put simply, big data implies datasets so large they can't normally be processed using a standard transactional database.  The term "noSQL" is often used in this context as well. Actually, using parallel processing within the Oracle database combined with Exadata can achieve impressive results.  Look for more from Oracle at OpenWorld as hinted by Jean-Pierre Dijcks. McKinsey recently released a report on big data in which retail was specifically mentioned as an industry that can benefit from the new technologies.  I won't rehash that report because my friend Rama already did such a good job in his posting, Impact of "Big Data" on Retail. The presentation below does a pretty good job of framing the problem, although it doesn't really get into the available technologies (e.g. Exadata, Hadoop, Cassandra, etc.) and isn't retail specific. Determine the Right Analytic Database: A Survey of New Data Technologies So when a retailer asks me about big data, here's what I say:  Big data refers to a set of technologies for processing large volumes of structured and unstructured data.  Imagine collecting everything uttered by your customers on Facebook and Twitter and combining it with all the data you can find about the products you sell (e.g. reviews, images, demonstration videos), including competitive data.  Assuming you could process all that data, you could then personalize offers to specific customers based on their tastes, ensure prices are competitive, and implement better local assortments.  It's really not that far off.

    Read the article

  • Data and Secularism

    - by kaleidoscope
    Ever since we’ve been using data we’ve been religious. Religious about the way we represent it and equally religious about the way we access it. Be it plain old SQL, DAO, ADO or ADO.NET, and that is just counting the religions in the MSFT world; a peek outside and I’d need a separate book to list out the data faiths. Various application areas in networked computing are converging under the HTTP umbrella, with a plausible transition to purist HTTP and in turn REST, fuelled by the Web 2.0 storm. It was time the data access faiths also gave up the religious silos wrapped around our long-worshipped data publishing and access methods. OData is the secular solution we have at hand today. It is an open protocol for sharing data. It can be exposed via REST. It is open as in the Microsoft Open Specification Promise, which allows virtually everyone to build data services for any runtime. OData is one of the key standards for data publishing/subscribing on Microsoft Codename Dallas. For us .Netters, OData data sources can be exposed and consumed via WCF Data Services, and the process is very simple, elegant and intuitive. Applications exposing OData services: SharePoint 2010, IBM WebSphere, Microsoft SQL Azure, Windows Azure Table Storage, SQL Server Reporting Services. Live OData services: Netflix, Open Science Data Initiative, Open Government Data Initiatives, the Northwind database exposed as an OData service, and many others. Some may prefer to call it commoditization of data, unification of data access strategies or any other sweet name. I for one will stick to my secular definition. :) Technorati Tags: Sarang,OData,MOSP

    Read the article

  • Is using MultiMaps code smell? If so what alternative data structures fit my needs?

    - by Pureferret
    I'm trying to model nWoD characters for a roleplaying game in a character builder program. The crux is I want to support saving to and loading from YAML documents. One aspect of the characters is their set of skills. Skills are split between exactly three 'types': Mental, Physical, and Social. Each type has a list of skills under it. My YAML looks like this:

        PHYSICAL:
          Athletics: 0
          Brawl: 3
        MENTAL:
          Academics: 2
          Computers

    My initial thought was to use a Multimap of some sort, with the skill type as an Enum serving as the key to my map, and each Skill as an element in the collection that backs the multimap. However, I've been struggling to get the YAML to work. On explaining this to a colleague outside of work, they said this was probably a sign of code smell, and he's never seen it used 'well'. Are multimaps really a code smell? If so, what alternate data structures would suit my goals?
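
    For what it's worth, the YAML above already describes a mapping from skill type to a mapping of skill name to rank, i.e. a map of maps rather than a true multimap. A minimal sketch of that shape (shown in Python with PyYAML purely for illustration; the rank values are made up):

        import yaml  # PyYAML

        # Skill ranks keyed first by type, then by skill name (values illustrative).
        skills = {
            "PHYSICAL": {"Athletics": 0, "Brawl": 3},
            "MENTAL":   {"Academics": 2, "Computers": 1},
        }

        text = yaml.safe_dump(skills, default_flow_style=False)
        print(text)                              # keys come back sorted: MENTAL before PHYSICAL
        assert yaml.safe_load(text) == skills    # round-trips cleanly

    The Java equivalent would be something like Map<SkillType, Map<String, Integer>>, which round-trips through YAML more naturally than a multimap does.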

    Read the article

  • Awesome Bookmarklet Collection Ready to Select from and Add to Your Browser

    - by Asian Angel
    Bookmarklets are extremely useful additions to have for your everyday browsing needs without the hassle (or slowdown effect) of extensions. With that in mind, tech blog Guiding Tech has put together a terrific collection of 21 bookmarklets that are ready to add to your favorite browser. Just scroll down and select/install the bookmarklets you like from the blog post and enjoy the enhanced browsing! You can see the beginning and end results from our sample use of the Search Site Bookmarklet in the screenshots above and below… Note: We altered the bookmarklet slightly to focus the search results through Google Singapore.

    Read the article

  • Desktop Fun: Underwater Theme Wallpaper Collection Series 2

    - by Akemi Iwaya
    There is a whole new world waiting to be found underneath the waves, one filled with wonders untold, adventure, mystery, and danger for the unwary. Explore the unknown depths on your desktop with the second in our series of Underwater Theme Wallpaper collections. Note: Click on the picture to see the full-size image—these wallpapers vary in size so you may need to crop, stretch, or place them on a colored background in order to best match them to your screen’s resolution. More Underwater Theme Goodness for Your Desktop: Desktop Fun: Underwater Theme Wallpaper Collection Series 1. For more great wallpapers make sure to look through our terrific collections in the Desktop Fun section.

    Read the article

  • Can web apps allow fast data-typists to "type-ahead"?

    - by user61852
    In some data entry contexts, I've seen data typists type really fast, know the app they use extremely well, and have a mechanical quality in their work, so that they can "type ahead", i.e. continue typing and "tab-bing" and "enter-ing" faster than the display updates, so that on many occasions they are typing in the data for the next form before it draws itself. Then, when this next entry form appears, their keystrokes fill the text boxes and they continue typing, selecting, etc. In contexts like this, this speed is desirable, since these people are really productive. I think this "type ahead of time" is only possible in desktop apps, but I may be wrong. My question is whether this way of handling the keyboard buffer (which in desktop apps requires no extra programming) is achievable in web apps, or is this impossible because of the way web apps work, handle sessions, etc. (network latency and the overhead of generating new web pages)? Edit: By "type ahead" I mean "keyboard type ahead" (typing faster than the next entry form can load), not suggest-as-you-type like Google. Typeahead is a feature of computers and software (and some typewriters) that enables users to continue typing regardless of program or computer operation: the user may type at whatever speed he or she desires, and if the receiving software is busy at the time it will be called upon to handle the input later. Often this means that keystrokes entered will not be displayed on the screen immediately. This programming technique for handling user input relies on what is known as a keyboard buffer.

    Read the article

  • Desktop Fun: Fantasy Warriors Wallpaper Collection

    - by Asian Angel
    Whether they are defending their homelands and the innocent, seeking fame and fortune, or out to conquer and plunder, these fantasy warriors will add a nice bit of adventure to your desktop. Get ready to journey into other realms with our Fantasy Warriors Wallpaper collection.

    Read the article

  • How to migrate user settings and data to new machine?

    - by torbengb
    I'm new to Ubuntu and recently started using it on my PC. I'm going to replace that PC with a new machine, and I want to transfer my data and settings to it. What aspects should I consider? Obviously I want to move my data over, but what things am I missing if I only copy the entire home folder? This is a home PC (not corporate), so user rights and other security issues are not a concern, except that the files should be accessible on the new machine! Please take into account that the new machine is a nettop that doesn't have an optical drive and doesn't allow me to hook the old SATA disk into it, so any data transfer must be handled via the home network (I can have both the old and the new machine turned on and connected to the home LAN) and I have a USB thumbdrive with limited capacity (2GB). This sounds like it might limit the general applicability, but it would in fact make it more general. I'll make this a wiki topic because there could be several "right" answers. Update: Or so I thought. I don't see a choice for that.

    Read the article

  • Desktop Fun: Moonlit Nights Wallpaper Collection

    - by Asian Angel
    Whether it is our own moon or ones featured in fiction and fantasy artwork, moons can be romantic, mysterious, and inspirational. Now you can add a unique touch of style and flair to your desktop with our Moonlit Nights Wallpaper collection. Note: Click on the picture to see the full-size image—these wallpapers vary in size so you may need to crop, stretch, or place them on a colored background in order to best match them to your screen’s resolution.

    Read the article

  • Improving Shopfloor Data Collection with Oracle Manufacturing Operations Center

    Successful factories around the world leverage information to drive their production and supply chains. New tools are available today to further accelerate data collection, analysis, contextualization and collaboration for the various stakeholders involved in the manufacturing process. Oracle Manufacturing Operations Center (MOC) addresses the factory's need for accurate and timely information about product and process quality, insight into shop floor operations, and performance of production assets. It solves the complex problem of connecting fragmented, disconnected shop floor data to the business context of your ERP and provides a solid foundation for running Continuous Improvement (CI) programs such as Lean and Six Sigma.

    Read the article

  • Desktop Fun: Feathered Friends Wallpaper Collection

    - by Asian Angel
    Birds can serve as wonderful sources of inspiration with their varied colors, song, and feats of flight. Today we have a beautiful flock of birds that you can add to your desktop with our Feathered Friends Wallpaper collection.

    Read the article

  • Is it conceivable to have millions of lists of data in memory in Python?

    - by Codemonkey
    I have over the last 30 days been developing a Python application that utilizes a MySQL database of information (specifically about Norwegian addresses) to perform address validation and correction. The database contains approximately 2.1 million rows (43 columns) of data and occupies 640MB of disk space. I'm thinking about speed optimizations, and I've got to assume that when validating 10,000+ addresses, each validation running up to 20 queries to the database, networking is a speed bottleneck. I haven't done any measuring or timing yet, and I'm sure there are simpler ways of speed optimizing the application at the moment, but I just want to get the experts' opinions on how realistic it is to load this amount of data into a row-of-rows structure in Python. Also, would it even be any faster? Surely MySQL is optimized for looking up records among vast amounts of data, so how much help would it even be to remove the networking step? Can you imagine any other viable methods of removing the networking step? The location of the MySQL server will vary, as the application might well be run from a laptop at home or at the office, where the server would be local.
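
    For a sense of scale, the load-once-then-look-up-in-memory approach is only a few lines. Here is a rough sketch (the driver, table and column names are assumptions for illustration; any DB-API 2.0 driver would do):

        import mysql.connector

        conn = mysql.connector.connect(host="dbhost", user="app",
                                       password="secret", database="addresses")
        cur = conn.cursor()
        cur.execute("SELECT post_code, street_name, city FROM norwegian_addresses")

        # Index rows by post code so each validation becomes a dict lookup
        # instead of a round trip over the network.
        by_post_code = {}
        for post_code, street_name, city in cur:
            by_post_code.setdefault(post_code, []).append((street_name, city))

        cur.close()
        conn.close()

        def candidates(post_code):
            return by_post_code.get(post_code, [])

    Two million small tuples typically occupy a few hundred megabytes in CPython, so the data should fit in RAM, but it is worth measuring this against a simple indexed MySQL lookup before committing to it.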

    Read the article

  • Desktop Fun: Ocean Life Wallpaper Collection

    - by Asian Angel
    Our oceans are full of beauty and wonder, a separate world waiting for us to explore its mysteries and stir our imaginations. Bring the wonders of the underwater world to your desktop with our Ocean Life Wallpaper collection. Note: Click on the picture to see the full-size image—these wallpapers vary in size so you may need to crop, stretch, or place them on a colored background in order to best match them to your screen’s resolution.

    Read the article

  • MATLAB: What is an appropriate Data Structure for a Matrix with Random Variable Entries?

    - by user12707
    I'm working in an area that is related to simulation and trying to design a data structure that can include random variables within matrices. I am currently coding in MATLAB. To motivate this, let me say I have the following matrix: [a b; c d] I want to find a data structure that will allow for a, b, c, d to be either real numbers or random variables. As an example, let's say that a = 1, b = -1, c = 2, but let d be a normally distributed random variable with mean 20 and SD 40. The data structure that I have in mind will give no value to d. However, I also want to be able to design a function that can take in the structure, simulate a uniform(0,1), obtain a value for d using an inverse CDF and then spit out an actual matrix. I have several ideas to do this (all related to the MATLAB icdf function) but would like to know how more experienced programmers would do it. In this application, it's important that the structure is as "lean" as possible since I will be working with very, very large matrices and memory will be an issue.
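
    One way to picture the "number or random variable" idea, sketched here in Python with NumPy/SciPy purely as an illustration (the MATLAB analogue would be a cell array of numbers and distribution descriptors, realized with icdf):

        import numpy as np
        from scipy import stats

        # Entries are either plain numbers or frozen distribution objects.
        M = np.array([[1, -1],
                      [2, stats.norm(loc=20, scale=40)]], dtype=object)

        def realize(matrix, rng=np.random.default_rng()):
            """Replace every distribution entry with a draw via the inverse CDF."""
            out = np.empty(matrix.shape, dtype=float)
            for idx, entry in np.ndenumerate(matrix):
                if hasattr(entry, "ppf"):        # a random-variable entry
                    out[idx] = entry.ppf(rng.uniform())
                else:                            # an ordinary number
                    out[idx] = float(entry)
            return out

        print(realize(M))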

    Read the article

  • Complex data types in WCF?

    - by Hojou
    I've run into a problem trying to return an object that holds a collection of child objects, which in turn can hold a collection of grandchild objects. I get an error, 'connection forcibly closed by host'. Is there any way to make this work? I currently have a structure resembling this: pseudo code: Person: IEnumerable<Order> Order: IEnumerable<OrderLine> All three objects have the DataContract attribute and all public properties I want exposed (including the IEnumerables) have the DataMember attribute. I have multiple OperationContracts on my service, and all the methods returning a single object OR an IEnumerable of an object work perfectly. It's only when I try to nest IEnumerables that it turns bad. Also, in my client service reference I picked the generic list as my collection type. I just want to emphasize, only one of my operations/methods fails with this error - the rest of them work perfectly. EDIT (more detailed error description): [SocketException (0x2746): An existing connection was forcibly closed by the remote host] [IOException: Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host.] [WebException: The underlying connection was closed: An unexpected error occurred on a receive.] [CommunicationException: An error occurred while receiving the HTTP response to http://myservice.mydomain.dk/MyService.svc. This could be due to the service endpoint binding not using the HTTP protocol. This could also be due to an HTTP request context being aborted by the server (possibly due to the service shutting down). See server logs for more details.] I tried looking for logs but I can't find any... also, I'm using a WSHttpBinding and an HTTP endpoint.

    Read the article

  • Migrating data from Plone to Liferay, or how could I retrieve information from Plone's Data.fs

    - by brandizzi
    Hello, all. I need to migrate data from a Plone-based portal to Liferay. Does anyone have some idea of how to do it? Anyway, I am trying to retrieve data from Data.fs and store it in a representation that is easier to work with, such as JSON. To do that, I need to know which objects I should get from Plone's Data.fs. I already got the Products.CMFPlone.Portal.PloneSite instance from the Data.fs, but I cannot get anything from it. I would like to get the PloneSite instance and do something like this:

        >>> import ZODB
        >>> from ZODB import FileStorage, DB
        >>> path = r"C:\Arquivos de programas\Plone\var\filestorage\Data.fs"
        >>> storage = FileStorage.FileStorage(path)
        >>> db = DB(storage)
        >>> conn = db.open()
        >>> root = conn.root()
        >>> app = root['Application']
        >>> plone_site = app.getChildNodes()[13]  # 13 would be index of PloneSite object
        >>> a = plone_site.get_articles()
        >>> for article in a:
        ...     print "Title:", article.title
        ...     print "Content:", article.content
        Title: <some title>
        Content: <some content>
        Title: <some title>
        Content: <some content>

    Of course, it does not need to be so straightforward. I just want some information about the structure of PloneSite and how to recover its data. Does anyone have some idea? Thank you in advance!
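
    Assuming a reasonably standard Plone site, the usual CMF/OFS accessors (objectValues, getId, Title, and getText on document-like content) are one way to walk the tree once you have the site object, continuing from the transcript above. A rough, hedged sketch of dumping everything reachable to JSON (verify the method names against your actual content types):

        import json

        def to_dict(obj):
            """Recursively collect id/title/body text for obj and its children."""
            item = {"id": obj.getId(), "title": obj.Title()}
            get_text = getattr(obj, "getText", None)
            if callable(get_text):
                item["text"] = get_text()
            get_children = getattr(obj, "objectValues", None)
            if callable(get_children):
                item["children"] = [to_dict(child) for child in get_children()]
            return item

        with open("plone_dump.json", "w") as out:
            json.dump(to_dict(plone_site), out, indent=2)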

    Read the article

  • Using Mapping Models to migrate between Core Data Object Models

    - by westsider
    I have a fairly simple schema. Essentially, Run <-- Data (where a Run holds data, e.g., Temperature, sampled from some sort of sensor). Now, it seems that sensors can have more than one measurement (e.g., Temperature and Humidity). So, a single Run could have multiple data samples. Hence, Run <-- Sample and Sample <-- Data. (And for simplicity I am leaving Run <-- Data in place, for now.) If I create a new mapping model, then things generally work - except that no new Samples are created, and no relationships are established between Runs and Samples nor between Samples and Datas. I am trying to get the mapping model to migrate my data, but even the slightest change to the generated mapping model results in Cocoa error 134110. For example, if I take the "Sample" mapping (which has no Source) and set its Source to 'Run' (so that I can set Sample's inverse relationship 'run' appropriately), then the mapping changes its name to "RunToSample". There are two relationships handled in this mapping: data and run. The data property gets set automatically to FUNCTION($manager, "destinationInstancesForEntityMappingNamed:sourceInstances:" , "DataToData", $source.dataSet) Following this example, I set the run property to FUNCTION($manager, "destinationInstancesForEntityMappingNamed:sourceInstances:" , "RunToRun", $source) Similarly, I set the 'sample' property mapping in RunToRun to FUNCTION($manager, "destinationInstancesForEntityMappingNamed:sourceInstances:" , "RunToSample", $source) and the 'sample' property in DataToData to FUNCTION($manager, "destinationInstancesForEntityMappingNamed:sourceInstances:" , "RunToSample", $source.run) So, what, I wonder, is going wrong? I have tried various permutations, such as leaving the 'inverse' relationships unspecified. But I continue to get the same error (134110) regardless. I imagine that this is a lot easier than it seems and that I am missing some fundamental but minor piece. I have also tried subclassing NSEntityMigrationPolicy and overriding -createDestinationInstancesForSourceInstance: but these efforts have met with much the same results. Thanks in advance for any pointers or (relevant :-) advice.

    Read the article

  • Static Data Structures on Embedded Devices (Android in particular)

    - by Mark
    I've started working on some Android applications and have a question regarding how people normally deal with situations where you have a static data set and an application where that data is needed in memory as one of the standard Java collections or as an array. In my current specific case I have a spreadsheet with some pre-calculated data. It consists of ~100 rows and 3 columns: one column is a string, one is a float, one is an integer. I need access to this data as an array in Java. It seems like I could:

    1) Encode it in XML - this would be CPU intensive to decode, in my experience.
    2) Build it into an SQLite database - seems like a lot of overhead for static data I only need array-style access to in RAM.
    3) Build it into a binary blob and read it in (never done this in Java; I miss void *).
    4) Build a Python script to take the CSV version of my data and spit out a Java function that adds the values to my desired structure as hard-coded values (a sketch of such a generator follows below).
    5) Store a string array via Android's resource mechanism and compute the other 2 columns on application load. In my case the computation would require a lot of calls to Math.log, Math.pow and Math.floor, which I'd rather not do for load-time and battery-usage reasons.

    I mostly work on low-power embedded applications in C, and as such #4 is what I'm used to doing in these situations. It just seems like it should be far easier to gain access to static data structures in Java/Android. Perhaps I'm just being too battery-usage conscious, and in my single case I imagine the answer is that it doesn't matter much, but if every application took that stance it could begin to matter. What approaches do people usually take in this situation? Anything I missed?
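
    For option 4, the generator can stay tiny. A minimal sketch (the file, class and field names are made up for illustration; the three CSV columns are assumed to be string, float, integer as described):

        import csv

        ROW_TEMPLATE = '        new Entry("{name}", {value:f}f, {count:d}),'

        with open("precalc.csv", newline="") as src, open("PrecalcData.java", "w") as out:
            out.write("// Generated file - do not edit by hand.\n")
            out.write("final class PrecalcData {\n")
            out.write("    static final Entry[] TABLE = {\n")
            for name, value, count in csv.reader(src):
                out.write(ROW_TEMPLATE.format(name=name, value=float(value),
                                              count=int(count)) + "\n")
            out.write("    };\n}\n")

    Re-running the script whenever the spreadsheet changes keeps the generated Java array in sync, and at ~100 rows the generated class costs nothing at load time.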

    Read the article

  • Efficient data importing?

    - by Kevin
    We work with a lot of real estate, and while rearchitecting how the data is imported, I came across an interesting issue. Firstly, the way our system works (loosely speaking) is we run a ColdFusion process once a day that retrieves data provided by an IDX vendor via FTP. They push the data to us: whatever they send us is what we get. Over the years, this has proven to be rather unstable. I am rearchitecting it in PHP on the RETS standard, which retrieves data via SOAP and has already proven to be much better than what we had. When it comes to 'updating' existing data, my initial thought was to query only for data that was updated. There is a 'Modified' field that tells you when a listing was last updated, and the code I have will grab any listing updated within the last 6 hours (giving myself a window in case something goes wrong). However, I see a lot of real estate developers suggest creating constantly running 'batch' processes that run through all listings regardless of updated status. Is this the better way to do it? Or am I fine with just grabbing the data I know I need? It doesn't make a lot of sense to me to do more processing than necessary. Thoughts?

    Read the article

  • source of historical stock data

    - by rmeador
    I'm trying to make a stock market simulator (perhaps eventually growing into a predicting AI), but I'm having trouble finding data to use. I'm looking for a (hopefully free) source of historical stock market data. Ideally, it would be a very fine-grained (second or minute interval) data set with price and volume of every symbol on NASDAQ and NYSE (and perhaps others if I get adventurous). Does anyone know of a source for such info? I found this question which indicates Yahoo offers historical data in CSV format, but I've been unable to find out how to get it in a cursory examination of the site linked. I also don't like the idea of downloading the data piecemeal in CSV files... I imagine Yahoo would get upset and shut me off after the first few thousand requests. I also discovered another question that made me think I'd hit the jackpot, but unfortunately that OpenTick site seems to have closed its doors... too bad, since I think they were exactly what I wanted. I'd also be able to use data that's just open/close price and volume of every symbol every day, but I'd prefer all the data if I can get it. Any other suggestions?
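
    Whichever source you settle on, a daily file of symbol/date/open/close/volume folds into a simulator-friendly structure in a few lines. A rough sketch (the column names are assumptions about whatever CSV you end up downloading):

        import csv
        from collections import defaultdict

        history = defaultdict(list)   # symbol -> list of (date, open, close, volume)

        with open("daily_prices.csv", newline="") as fh:
            for row in csv.DictReader(fh):
                history[row["Symbol"]].append(
                    (row["Date"], float(row["Open"]), float(row["Close"]),
                     int(row["Volume"])))

        for bars in history.values():
            bars.sort()               # chronological order per symbol

        print(len(history), "symbols loaded")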

    Read the article

  • Logical error when filtering data in a GridView (ASP.NET)

    - by RajuBabli Abbasi
    I want to filter the data shown in a GridView in ASP.NET, but the filtering is not working; I have some logical error somewhere. Please consider my code below and, if you can, reply with code.

    My code-behind (.aspx.cs) file:

        protected void Page_Load(object sender, EventArgs e)
        {
            if (!IsPostBack)
            {
                DisplayStudentInformation();
            }
        }

        private void DisplayStudentInformation()
        {
            string filter = "%" + filterTextBox.Text + "%";
            if (filter == String.Empty)
                filter = "%";
            try
            {
                using (SqlDataReader reader = DAC.GetCompanyInformation(filter))
                {
                    // reader.Read();
                    StudentGridView.DataSource = reader;
                    StudentGridView.DataBind();
                }
            }
            catch (SqlException ex)
            {
                StatusLabel.Text = ex.Message;
            }
        }

    My .aspx file:

        <asp:Table ID="Tabel" runat="server">
          <asp:TableRow>
            <asp:TableCell>
              <asp:Label ID="filterLabel" runat="server" Text="Company Name Filter:" AssociatedControlID="filterTextBox" />
            </asp:TableCell>
            <asp:TableCell>
              <asp:TextBox ID="filterTextBox" runat="server" MaxLength="50" />
            </asp:TableCell>
            <asp:TableCell>
              <asp:Button ID="refreshButton" runat="server" Text="Filter" CausesValidation="false" />
            </asp:TableCell>
          </asp:TableRow>
        </asp:Table>

    My DAC file:

        public static SqlDataReader GetCompanyInformation(string filter)
        {
            SqlDataReader reader;
            string sql = "SELECT * FROM Student WHERE LastName LIKE @prmLastName ";
            using (SqlCommand command = new SqlCommand(sql, ConnectionManager.GetConnection()))
            {
                // Pass CommandBehavior.SingleResult because we need a single result set,
                // and CommandBehavior.CloseConnection so the connection closes when the data has been read.
                // command.Parameters.Add("@prmLastName", SqlDbType.VarChar, 25).Value = filter;
                command.Parameters.AddWithValue("@prmLastName", filter);
                reader = command.ExecuteReader(CommandBehavior.SingleResult | CommandBehavior.CloseConnection);
            }
            return reader;
        }

    Note: when I remove the if (!IsPostBack) condition and simply call DisplayStudentInformation() in Page_Load, the data does get filtered, but I need the IsPostBack check, which is also important for updating the data and for other purposes. So the goal is to filter the data in the GridView while keeping the if (!IsPostBack) condition. I have asked others about this but nobody has solved it, so any help would be much appreciated.

    Read the article

  • $.ajax not loading data from server every time

    - by Ted
    I have written a simple jQuery $.ajax call which loads a user control from the server on the click of a button. The first time I click the button, it goes to the server and gets me the user control. But each subsequent click of the same button does not go to the server to fetch the user control. Since my user control fetches data from the db, I need to reload the user control every time I hit the button. But if I somehow get my user control to unload from the page and re-click the button, it does go to the server and fetches the user control. Here's the code:

        $("#btnLoad").click(function() {
            if ($(this).attr("value") == "Load Control") {
                $.ajax({
                    url: "AJAXHandler.ashx",
                    data: { "lt": "loadcontrol" },
                    dataType: "html",
                    success: function(data) { content.html(data); }
                });
                $(this).attr("value", "Unload Control");
            } else {
                $.ajax({
                    url: "AJAXHandler.ashx",
                    data: { "lt": "unloadcontrol" },
                    dataType: "html",
                    success: function(data) { content.html(data); }
                });
                $(this).attr("value", "Load Control");
            }
        });

    Please let me know if there is any other way I can get my user control loaded from the server every time I click the button.

    Read the article

  • Architecture for data layer that uses both localStorage and a REST remote server

    - by Zack
    Does anybody have any ideas or references on how to implement a data persistence layer that uses both localStorage and a REST remote storage? The data of a certain client is stored with localStorage (using an ember-data IndexedDB adapter). The locally stored data is synced with the remote server (using the ember-data RESTAdapter). The server gathers all data from clients. Using mathematical set notation: Server = Client1 ∪ Client2 ∪ ... ∪ ClientN, where, in general, a record may not be unique to a certain client. Here are some scenarios: A client creates a record. The id of the record cannot be set on the client, since it may conflict with a record stored on the server. Therefore a newly created record needs to be committed to the server, receive its id, and then be created in localStorage. A record is updated on the server, and as a consequence the data in localStorage and on the server go out of sync. Only the server knows that, so the architecture needs to implement a push mechanism (?). Would you use 2 stores (one for localStorage, one for REST) and sync between them, or use a hybrid IndexedDB/REST adapter and write the sync code within the adapter? Can you see any way to avoid implementing push (WebSockets, ...)?

    Read the article

  • Best practices for encrypting continuous/small UDP data

    - by temp
    Hello everyone, I have an application where I have to send several small pieces of data per second over the network using UDP. The application needs to send the data in real time (no waiting). I want to encrypt these data and ensure that what I am doing is as secure as possible. Since I am using UDP, there is no way to use SSL/TLS, so I have to encrypt each packet on its own, since the protocol is connectionless/unreliable/unregulated. Right now, I am using a 128-bit key derived from a passphrase supplied by the user, and AES in CBC mode (PBE using AES-CBC). I decided to use a random salt with the passphrase to derive the 128-bit key (to prevent a dictionary attack on the passphrase), and of course use IVs (to prevent statistical analysis of packets). However, I am concerned about a few things: Each packet contains a small amount of data (like a couple of integer values per packet), which will make the encrypted packets vulnerable to known-plaintext attacks (making it easier to crack the key). Also, since the encryption key is derived from a passphrase, the key space is much smaller (I know the salt will help, but I have to send the salt over the network once and anyone can get it). Given these two things, anyone can sniff and store the sent data and try to crack the key. Although this process might take some time, once the key is cracked all the stored data will be decrypted, which will be a real problem for my application. So my question is, what are the best practices for sending/encrypting a continuous stream of small data using a connectionless protocol (UDP)? Is my way the best way to do it? ...Flawed? ...Overkill? ... Please note that I am not asking for a 100% secure solution, as there is no such thing. Cheers
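
    For reference, here is a minimal sketch of the per-packet scheme described above (passphrase-derived key, fresh IV per datagram, IV prepended to the ciphertext), written with the Python 'cryptography' package purely for illustration; parameter choices such as the iteration count are assumptions:

        import os
        from cryptography.hazmat.primitives import hashes, padding
        from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
        from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

        def derive_key(passphrase, salt):
            # 128-bit key from the user's passphrase; the salt is sent once, in the clear.
            kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=16,
                             salt=salt, iterations=200000)
            return kdf.derive(passphrase)

        def encrypt_packet(key, payload):
            # Fresh random IV per datagram, prepended so the receiver can decrypt.
            iv = os.urandom(16)
            padder = padding.PKCS7(128).padder()
            padded = padder.update(payload) + padder.finalize()
            enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
            return iv + enc.update(padded) + enc.finalize()

        salt = os.urandom(16)
        key = derive_key(b"correct horse battery staple", salt)
        datagram = encrypt_packet(key, b"\x00\x01\x00\x2a")   # e.g. two small integers

    Note that CBC by itself provides no integrity check; an authenticated mode such as AES-GCM with a unique nonce per packet would also let the receiver reject tampered or replayed datagrams, which is usually desirable over UDP.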

    Read the article

  • Export large amount of data from Oracle 10G to SQL Server 2005

    - by uniball
    Dear all, I need to export 100 million data rows (avg row length ~ 100 bytes) from an Oracle 10G database table into SQL Server (over a WAN/VLAN with 6 Mbit/s capacity) on a regular basis. So far, these are the options that I have tried, with a quick summary. Has anyone tried this before? Are there other, better options? Which option would be the best in terms of performance and reliability? The time taken has been calculated using tests on smaller amounts of data and then extrapolated to estimate the time required.

    1) Using the data import wizard on the SQL Server side, or SSIS packages, to import the data. It would take around 150 hours to complete the task.
    2) Using an Oracle batch job to spool the data into a comma-delimited flat file, then using an SSIS package to FTP this file to the SQL Server and load directly from the flat file. The issue here is the size of the flat file, which is expected to run into gigabytes.
    3) Although this option is drastically different, I am even considering using a Linked Server to query the Oracle data directly at run time to avoid bringing the data over at all. Performance is a big problem, and I have limited control over the Oracle database in terms of creating table indexes.

    Regards, Uniball
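
    As a point of reference for the flat-file route, the export side can be streamed in batches so memory stays flat regardless of row count. A rough sketch using the cx_Oracle driver (connection string, table and file names are placeholders):

        import csv
        import cx_Oracle

        conn = cx_Oracle.connect("app_user/secret@oracle-host/ORCL")
        cur = conn.cursor()
        cur.arraysize = 10000                        # fetch in large batches over the WAN
        cur.execute("SELECT * FROM big_table")

        with open("big_table.csv", "w", newline="") as out:
            writer = csv.writer(out)
            writer.writerow(col[0] for col in cur.description)   # header row
            while True:
                rows = cur.fetchmany()               # uses cur.arraysize
                if not rows:
                    break
                writer.writerows(rows)

        cur.close()
        conn.close()

    The resulting file can then be bulk-loaded on the SQL Server side with bcp or BULK INSERT.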

    Read the article
