Search Results

Search found 58783 results on 2352 pages for 'data modeling'.

Page 87 of 2352

  • TDD with SQL and data manipulation functions

    - by Xophmeister
    While I'm a professional programmer, I've never been formally trained in software engineering. As I frequently visit here and SO, I've noticed a trend toward writing unit tests whenever possible, and as my software gets more complex and sophisticated, I see automated testing as a good aid to debugging. However, most of my work involves writing complex SQL and then processing the output in some way. How would you write a test to ensure your SQL is returning the correct data, for example? Then, if the data isn't under your control (e.g., it comes from a third-party system), how can you efficiently test your processing routines without hand-writing reams of dummy data? The best solution I can think of is making views of the data that, together, cover most cases. I could then join those views with my SQL to see if it returns the correct records, and manually process the views to see if my functions, etc. are doing what they're supposed to. Still, it seems excessive and flaky, particularly finding data to test against...
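
    One practical way to test the SQL itself (a sketch, not from the original question): build a small, hand-picked dataset in an in-memory database, run the production query against it, and assert on the exact rows returned. Shown here with Python's standard-library sqlite3 and unittest; the table, query, and threshold are all illustrative.

        import sqlite3
        import unittest

        # The production query under test (illustrative; in practice, load it from
        # wherever the application keeps it, so the test exercises the real SQL).
        ACTIVE_ORDERS_SQL = """
            SELECT id, customer, total
            FROM orders
            WHERE status = 'active' AND total > ?
            ORDER BY id
        """

        class ActiveOrdersQueryTest(unittest.TestCase):
            def setUp(self):
                # A fresh in-memory database per test: no shared state, no live data.
                self.db = sqlite3.connect(":memory:")
                self.db.execute(
                    "CREATE TABLE orders (id INTEGER PRIMARY KEY,"
                    " customer TEXT, status TEXT, total REAL)"
                )
                # A few hand-picked rows covering the interesting cases,
                # instead of reams of dummy data.
                self.db.executemany(
                    "INSERT INTO orders VALUES (?, ?, ?, ?)",
                    [
                        (1, "alice", "active", 50.0),   # matches
                        (2, "bob", "cancelled", 99.0),  # wrong status
                        (3, "carol", "active", 5.0),    # below threshold
                        (4, "dave", "active", 10.0),    # boundary: not strictly greater
                    ],
                )

            def test_returns_only_active_orders_above_threshold(self):
                rows = self.db.execute(ACTIVE_ORDERS_SQL, (10.0,)).fetchall()
                self.assertEqual(rows, [(1, "alice", 50.0)])

        if __name__ == "__main__":
            unittest.main()

    The same pattern extends to the processing routines: feed them the known rows and assert on the transformed output, so each edge case costs a few rows rather than reams of dummy data.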

    Read the article

  • Microsoft Offloaded Data Transfer (ODX)

    - by Charles Cline
    For all you admins and other technical people out there who have watched the Windows OS spool data from network storage to your workstation and then back to network storage, watch for Offloaded Data Transfer (ODX). I saw ODX at TechEd a few weeks ago; the data movement is kept primarily within the backend storage network. EMC and other storage vendors are already posting about when they will have this functionality. Here's some information about it: http://msdn.microsoft.com/en-us/library/windows/desktop/hh848056(v=vs.85).aspx

    Read the article

  • Options for transparent data encryption on SQL 2005 and 2008 DBs.

    - by Dan
    Recently, a law was passed (rather quietly) in Massachusetts requiring that data containing personally identifiable information be encrypted. PII is defined by the state as a resident's first and last name in combination with any of: (a) SSN, (b) driver's license or state ID number, or (c) debit or credit card number. Due to the nature of the software we make, all of our clients use SQL Server as the backend. Servers typically run SQL Server 2005 Standard or above, sometimes SQL Server 2008; almost all client machines use SQL Server 2005 Express. We use replication between client and server. Unfortunately, to get TDE you need SQL Server Enterprise on each machine, which is absolutely not an option. I'm looking for recommendations for products that will encrypt a database. Right now, I'm not interested in whole-disk encryption at all.

    Read the article

  • New CAM Editor v2.3 with Open-XDX for Open Data APIs

    - by drrwebber
    Creating actual working XML exchanges (loading data from data stores, generating XML, testing, integrating with web services, and then deployment and delivery) takes a lot of coding and effort, and then there is writing the documentation, models, and schema, running naming and design rule (NDR) checks, and packaging all of this together (such as for NIEM IEPD use). What if there were a tool that helped you do all that easily and simply? Welcome to the new Open-XDX and the CAM Editor! Open-XDX uses code-free techniques, in combination with CAM templates and visual drag-and-drop, to rapidly design your XML exchange. Open-XDX will then automatically generate all the SQL for you, read the database data, generate and populate the valid output XML, and filter with parameters. To complete the processing solution, Open-XDX works with web services and JDBC database connections as a callable module that can be deployed plug-and-play with your middleware stack, all with just a few lines of Java code (about five, actually). You can build either query/response or publish/subscribe services from existing data stores to XML literally in minutes. To see a demonstration of Open-XDX with a MySQL data store, integrating with Oracle WebLogic Server, please see this short video: http://youtube.com/user/TheCameditor. There is also a Quick Guide that provides more technical insight, along with a sample pack of templates and SQL that you can download and try for yourself. Head over to our project resource site to learn more, download the latest CAM Editor, and find links to all the resources and materials. We look forward to seeing how the developer community is able to jump-start information-sharing initiatives using this new, innovative approach.

    Read the article

  • Send less Server Data with "AFK"

    - by Oliver Schöning
    I am working on a 2D (real-time) multiplayer game, built with Construct 2 and a Socket.IO JavaScript server. Right now the code does not include an array for each player:

        var io = require("socket.io").listen(80);
        var x = 10;
        io.sockets.on("connection", function (socket) {
            socket.on("message", function(data) {
                x = x + 1;
            });
        });
        setInterval(function() {
            io.sockets.emit("message", 'Pos,' + x);
        }, 100);

    I noticed a very annoying problem with my server today. It sends my X coordinate every 100 milliseconds. When I switched to another browser tab, the browser stopped the game from running, and when I came back, I think the game had to work through all the queued packets: my offline debugging button still responded immediately, while the online button only responded after a few seconds. So I changed the code to send an update only when the server receives player input:

        var io = require("socket.io").listen(80);
        var x = 10;
        io.sockets.on("connection", function (socket) {
            socket.on("message", function(data) {
                x = x + 1;
                io.sockets.emit("message", 'Pos,' + x);
            });
        });

    Now it updates immediately, even when I have been inactive on the browser tab for a long time, confirming my suspicion that the client had to catch up on all the buffered data. Please confirm! It would be insane to send information only on client input in a real-time game. But how would I write an AFK function? I would think it is easier to run an AFK boolean check in a loop on the server. Here is what I need help with:

        playerArray[me]
        if ("no input for X seconds") {
            // don't send data
        } else {
            // send data
        }
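
    As a sketch of that AFK gate (written in Python for brevity; the same logic maps one-to-one onto the Socket.IO server above, and all names here are illustrative): record a timestamp on every player input, and let the broadcast loop skip players whose last input is older than a threshold.

        import time

        AFK_TIMEOUT = 30.0  # seconds without input before a player counts as AFK

        last_input = {}  # player id -> time.monotonic() of most recent input
        positions = {}   # player id -> latest x coordinate

        def on_message(player_id, data):
            # Equivalent of socket.on("message", ...): update state and the AFK clock.
            positions[player_id] = positions.get(player_id, 10) + 1
            last_input[player_id] = time.monotonic()

        def broadcast_tick(emit):
            # Equivalent of the setInterval callback, run every 100 ms:
            # only emit positions for players who are not AFK.
            now = time.monotonic()
            for player_id, x in positions.items():
                if now - last_input.get(player_id, 0.0) <= AFK_TIMEOUT:
                    emit("message", "Pos," + str(x))

        # Tiny demonstration without a real socket server:
        on_message("p1", None)
        broadcast_tick(lambda event, payload: print(event, payload))

    A nice property of doing the check on the server is that an AFK player stops costing outbound bandwidth for everyone, and their first new input immediately resets the clock.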

    Read the article

  • Excel 2013 Data Explorer and GeoFlow make 3-D maps quick and easy

    - by John Paul Cook
    Excel add-ins Data Explorer and GeoFlow work well together, mainly because they just work: simple, fast, and powerful. I started Excel 2013, used Data Explorer to search for, examine, and then download latitude-longitude data, and finally used GeoFlow to plot an interactive 3-D visualization. I didn't use any fancy Excel commands, and the entire process took less than 3 minutes. You can download the GeoFlow preview from here. It can also be used with Office 365. Start by clicking the DATA EXPLORER...(read more)

    Read the article

  • ubuntu 11.10 CellC data card

    - by Armien
    In 11.10, I find that I can successfully connect to the internet via a broadband connection. The problem is that I cannot have the data card attached to the machine while it boots: if I leave the data card in during startup, it is not picked up and I cannot connect to the internet. I must first boot the machine, log in, attach the data card to the USB port, and wait about 30 seconds. The broadband connection name then appears in the network dropdown at the top of the screen, and an internet connection via broadband is possible. Please let me know what must be done to fix this. Thanks, Armien

    Read the article

  • Storing non-content data in Orchard

    - by Bertrand Le Roy
    A CMS like Orchard is, by definition, designed to store content. What differentiates content from other kinds of data is rather subtle. The way I would describe it is: if you would put each instance of a kind of data on its own web page, and if it would make sense to add comments to it, or tags, or ratings, then it is content, and you can store it in Orchard using all the convenient composition options that it offers. Otherwise, it probably isn't, and you can store it using the somewhat simpler means that I will now describe.

    In one of the modules I wrote, Vandelay.ThemePicker, there is some configuration data for the module. That data is not content by the definition I gave above. Let's look at how this data is stored and queried. The configuration data in question is a set of records, each of which has a number of properties:

        public class SettingsRecord {
            public virtual int Id { get; set; }
            public virtual string RuleType { get; set; }
            public virtual string Name { get; set; }
            public virtual string Criterion { get; set; }
            public virtual string Theme { get; set; }
            public virtual int Priority { get; set; }
            public virtual string Zone { get; set; }
            public virtual string Position { get; set; }
        }

    Each property has to be virtual for nHibernate to handle it (it creates derived classes that are instrumented in all kinds of ways). We also have an Id property. The way these records will be stored in the database is described in a migration:

        public int Create() {
            SchemaBuilder.CreateTable("SettingsRecord", table => table
                .Column<int>("Id", column => column.PrimaryKey().Identity())
                .Column<string>("RuleType", column => column.NotNull().WithDefault(""))
                .Column<string>("Name", column => column.NotNull().WithDefault(""))
                .Column<string>("Criterion", column => column.NotNull().WithDefault(""))
                .Column<string>("Theme", column => column.NotNull().WithDefault(""))
                .Column<int>("Priority", column => column.NotNull().WithDefault(10))
                .Column<string>("Zone", column => column.NotNull().WithDefault(""))
                .Column<string>("Position", column => column.NotNull().WithDefault(""))
            );
            return 1;
        }

    When we enable the feature, the migration will run, which will create the table in the database. Once we've done that, all we have to do in order to use the data is inject an IRepository<SettingsRecord>, which is what I'm doing from the set of helpers I put under the SettingsService class:

        private readonly IRepository<SettingsRecord> _repository;
        private readonly ISignals _signals;
        private readonly ICacheManager _cacheManager;

        public SettingsService(
            IRepository<SettingsRecord> repository,
            ISignals signals,
            ICacheManager cacheManager) {
            _repository = repository;
            _signals = signals;
            _cacheManager = cacheManager;
        }

    The repository has a Table property, which implements IQueryable<SettingsRecord> (enabling all kinds of Linq queries) as well as methods such as Delete and Create. Here's, for example, how I'm getting all the records in the table:

        _repository.Table.ToList()

    And here's how I'm deleting a record:

        _repository.Delete(_repository.Get(r => r.Id == id));

    And here's how I'm creating one:

        _repository.Create(new SettingsRecord {
            Name = name,
            RuleType = ruleType,
            Criterion = criterion,
            Theme = theme,
            Priority = priority,
            Zone = zone,
            Position = position
        });

    In summary: you create a record class and a migration, and you're in business; you can just manipulate the data through the repository that the framework exposes. You even get ambient transactions from the work context.

    Read the article

  • R Statistical Analytics with Faster Performance for Enterprise Database Access and Big Data

    - by Mike.Hallett(at)Oracle-BI&EPM
    Further demonstrating its commitment to the open source community, Oracle has just released enhanced support of the R statistical programming language: support for Oracle Solaris and AIX in addition to Linux and Windows, connectivity to Oracle TimesTen In-Memory Database in addition to Oracle Database, and integration of hardware-specific math libraries for faster performance. Oracle's open source distribution of R is available with the Oracle Big Data Appliance and is available for download now. Oracle also offers Oracle R Enterprise, a component of Oracle Advanced Analytics that enables R processing on Oracle Database servers. All of this makes big data analytics more accessible in the enterprise and improves data scientist productivity through faster performance. Since its introduction in 1995, R has attracted more than two million users and is widely used today for developing statistical applications that analyze big data. Analyst report: Oracle Advances its Advanced Analytics Strategy

    Read the article

  • Passing data between engine layers

    - by spaceOwl
    I am building a software system (a game engine with networking support) that is made up of (roughly) these layers: Game layer, Messaging layer, Networking layer. Game-related data is passed to the messaging layer (this could be anything game-specific), where it is converted into network-specific messages (which are then serialized to byte arrays). I'm looking for a way to convert "game" data into "network" data such that no strong coupling exists between these layers. As it stands, the messaging layer sits between the game and network layers and "knows" both of them (it contains Converter objects that know how to translate between the data objects of both layers, back and forth). I am not sure this is the best solution. Is there a good design for passing objects between layers? I'd like to learn more about the different options.
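
    As one option among several (a sketch under assumed names, not the original design): let the game and network layers each own their own types, and make the messaging layer the single place where converters are registered. The coupling then lives in one registry instead of being smeared across layers.

        from dataclasses import dataclass

        # --- Game layer types: know nothing about the network ---
        @dataclass
        class PlayerMoved:
            player_id: int
            x: float
            y: float

        # --- Network layer type: knows nothing about the game ---
        @dataclass
        class NetMessage:
            opcode: int
            payload: bytes

        # --- Messaging layer: the only place both vocabularies meet ---
        _converters = {}

        def converter(game_type):
            # Decorator that registers a game-type -> NetMessage converter.
            def register(fn):
                _converters[game_type] = fn
                return fn
            return register

        @converter(PlayerMoved)
        def _player_moved_to_net(event):
            payload = ("%d,%f,%f" % (event.player_id, event.x, event.y)).encode()
            return NetMessage(opcode=1, payload=payload)

        def to_network(event):
            # Dispatch on the game object's concrete type; a missing converter
            # fails loudly here instead of leaking game types into the network layer.
            return _converters[type(event)](event)

        # The game layer hands any event to the messaging layer, which produces
        # a network message without either layer ever importing the other.
        print(to_network(PlayerMoved(player_id=7, x=1.5, y=2.0)))

    With this shape, the game layer depends only on its own event types and to_network, the network layer depends only on NetMessage, and a new message kind means one new converter registration rather than a change to either outer layer.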

    Read the article

  • How can I get data off of a damaged thumb drive?

    - by Lord Torgamus
    A guy I work with just came by to ask me about a damaged thumb drive (USB flash drive). Apparently his son dropped it on a hard surface and it won't power up anymore. They've tried plugging it into multiple machines without success, even though each port they tried was able to power other USB devices. He knows it's not a lost cause, because a local tech store is offering to recover the data for $500, but he says the files aren't worth that much. I figured someone on SU would have an idea about this; he doesn't care about using the drive in the future, he just wants to salvage a few files that would be a pain to recreate. Is this possible without advanced equipment, and if so, how? He has already tried the advice on the Internet about trying different drive letters and such, but that failed because there was no power going to the drive. He also said that he opened the case at one point, but I'm not sure what, if anything, he did inside.

    Read the article

  • Two interesting big data sessions around Openworld

    - by Jean-Pierre Dijcks
    For those who want to talk (not just listen) about big data, here are two very cool sessions. First, BOF9877, a Birds of a Feather session around all things big data, on Monday, Oct 1, 6:15 PM - 7:00 PM, Marriott Marquis, Golden Gate. While all guests on the panel are special, we will have a very special guest on the panel: a proud owner of a Big Data Appliance (see here). Then there is the Big Data SIG meeting (the invite from Gwen): "I'd like to invite everyone to our OOW12 meetup. We'll meet on Tuesday, October 2nd, 8:45 to 9:45 at Moscone West, Level 3, Overlook 3. We will network, socialize, and discuss plans for the group. Which topics interest us for webinars? Which conferences do we want to meet at? What other activities are we interested in? We can also discuss big data topics, show off our great work, and seek advice on our challenges. Other than figuring out what we are collectively interested in, the discussion will be pretty open." Here is the official invite. See you at OpenWorld!

    Read the article

  • How should I handle a redirect to an identity provider during a web api data request

    - by Erds
    Scenario: I have a single-page web app consisting purely of HTML, CSS, and JavaScript. After the initial load and during use, it updates various views with data from one or more RESTful APIs via ajax calls. The API calls return data in JSON format. Each web API may be hosted on an independent domain.

    Question: During the ajax callout, if my authorization token is not deemed valid by the web API, the web API will redirect me (302) to the identity provider for that particular API. Since this is an ajax callout for data and not necessarily for display, I need to find a way to display the identity provider's authentication page. It seems that I should trap that redirect and open up another view to display the identity provider's login page. Once the OAuth series of redirects is complete, I need to grab the token and retrigger my ajax data call with the token attached. Is this a valid approach, and if so, are there any examples showing the ajax handling of the redirects?

    Read the article

  • Data Conversion with Silverlight 3

    In this conclusion to a five-part series on data binding with Silverlight 3, we will discuss data conversion. As with the other parts, we'll explain when and why you need to convert data and go through a step-by-step process to show you how it's done...

    Read the article

  • Oracle Enterprise Data Quality: A Leader in Customer Satisfaction

    - by Mala Narasimharajan
    It's always good to hear feedback from practitioners – the ones in the trenches who have experienced both the good and the bad sides of enterprise software. Gartner recently released a report surveying 260 data quality professionals from around the world, and found that most expressed considerable satisfaction with their data quality tool vendors overall. A couple of key findings stand out, including Datanomic (acquired by Oracle) leading the pack in overall customer satisfaction among data quality tools. Read all about it right here: http://bit.ly/Ay45SG

    Read the article

  • Off the Charts: Getting Cost Data into Google Analytics

    With Analytics' new Cost Data Upload feature, users can measure and analyze non-Google cost data to calculate paid-campaign effectiveness. Developers can build solutions that upload exported cost data into Analytics, so marketers have a unified view of their campaign spend, all within the Google Analytics interface. Join Google Analytics developer advocate Pete Frisella to dive into the implementation of this new feature through the robust Analytics APIs. (Video from GoogleDevelopers; running time 30:00.)

    Read the article

  • Does data size in TCP/UDP make a difference on transmission time

    - by liortal
    While discussing the development of a network component for our game engine, a member of our team suggested that transmitting either 500 bytes or 1 KB of data over UDP makes no difference from the performance perspective of the system (the time it takes to transmit the data). He actually said that as long as you don't cross the MTU size, the size of the transmitted data doesn't really matter; it's all the same. Is that true for UDP? What about TCP? That sounds just plain wrong to me, but I am not a network expert. (I've been reading about other companies' game networking architectures, and it seems they're all trying to keep transmitted data to a minimum, which makes my colleague's claim seem even more unreasonable.)
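
    A back-of-the-envelope check of the claim (a sketch; the link speed and header overheads are illustrative): below the MTU, both payloads fit in a single datagram, so the only per-packet difference is serialization time on the wire, which is measured in microseconds, while round-trip latency is measured in milliseconds.

        # Time on the wire for one UDP datagram at a given link speed.
        def wire_time_us(payload_bytes, link_mbps, overhead_bytes=28 + 18):
            # 28 bytes of IP + UDP headers, ~18 bytes of Ethernet framing (illustrative).
            bits = (payload_bytes + overhead_bytes) * 8
            return bits / link_mbps  # Mbps is one bit per microsecond

        for size in (500, 1000):
            print("%4d bytes @ 100 Mbps: %.1f us" % (size, wire_time_us(size, 100)))

        # 500 bytes take ~43.7 us on the wire, 1000 bytes ~83.7 us. Both are tiny
        # next to a 20-100 ms WAN round trip, which is why sub-MTU payload size
        # barely moves per-packet latency. Games that minimize transmitted data are
        # usually fighting aggregate bandwidth and congestion, not per-packet time.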

    Read the article

  • Loading from Multiple Data Sources with Oracle Loader for Hadoop

    - by mannamal
    Oracle Loader for Hadoop can be used to load data from multiple data sources (for example, Hive and HBase) and data in multiple formats (for example, Apache weblogs and JSON files). There are two ways to do this:

    (1) Use an input format implementation. Oracle Loader for Hadoop includes several input format implementations; in addition, a user can develop their own input format implementation for proprietary data sources and formats.

    (2) Leverage the capabilities of Hive, and use Oracle Loader for Hadoop to load from Hive.

    These approaches are discussed in our Oracle OpenWorld 2013 presentation.

    Read the article

  • Access to Salesforce.com Data Through Tableau Desktop

    - by dataintegration
    This article will explain how to connect any of the RSSBus OData connectors to Tableau's business intelligence tool. While the example uses the Salesforce Connector, the same process can be followed for any of the OData connectors.

    Step 1: Download and install both the Salesforce Connector from RSSBus and Tableau Desktop from Tableau.

    Step 2: Configure the Salesforce Connector to connect with your Salesforce.com account. If you browse to the Help tab in the Salesforce Connector application, there is a link to the Getting Started Guide, which will walk you through the setup.

    Step 3: Once you have successfully configured the Salesforce Connector application, open Tableau and select the Connect to data option at the top left of the window.

    Step 4: Click on the option labeled OData under the section labeled On a server.

    Step 5: A new pop-up will appear. The box under Step 1 of the pop-up must contain the OData URL of the Salesforce Connector table. You can find this by clicking on the Settings tab of the Salesforce Connector. Once you have found the OData entry URL, append the name of the table you want Tableau to connect with. In this example, we will connect to the Account table, so the URL we enter is: http://localhost:8181/sfconnector/data/conn/odata.rsc/Account. You will also need to add authentication options in this step: select the Use a Username and Password option in Step 2 of the pop-up, and enter the username and password of a user who has access to the Salesforce Connector. When you are done, click the Connect button in Step 3 of the pop-up.

    Step 6: When the connection to the Salesforce Connector succeeds, give the connection a name and click the OK button.

    Step 7: The table columns will be listed on the left side, under the Dimensions section of the workspace.

    Step 8: To view your Salesforce.com data, right-click under the table name in the Data section at the top left of the dashboard and select the View Data option. Your Salesforce.com data will appear in Tableau.
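
    Before pointing Tableau at the URL in Step 5, it can help to sanity-check the OData endpoint directly. This is a sketch using Python's requests library; the URL comes from the Step 5 example, and the credentials are placeholders for whatever user you configured in the Salesforce Connector.

        import requests

        # OData entry URL for the Account table, as constructed in Step 5.
        url = "http://localhost:8181/sfconnector/data/conn/odata.rsc/Account"

        # Placeholder credentials: use the username/password you would give Tableau.
        resp = requests.get(url, auth=("connector_user", "connector_password"))
        resp.raise_for_status()  # a 401 here means the credentials are wrong

        print(resp.headers.get("Content-Type"))  # expect an Atom/XML or JSON feed
        print(resp.text[:500])                   # the first rows of the Account feed

    If this fails, fix the connector configuration first; Tableau's error messages at Step 5 are less specific than the raw HTTP response.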

    Read the article

  • Linux: Limiting data throughput (pipe) in bytes per second?

    - by sdaau
    Hi all, I was wondering if there is a Linux program that can limit the data throughput of a pipe in actual bytes per second. From what I gather, two candidates would be applicable: bfr, which has however been removed from Debian (removal candidate: bfr); and cpipe, where the lowest resolution it seems to support is kB/s, meaning that buffered writes can still reach MB/s ([SOLVED] Is there a program to limit terminal pipe speed? - Page 2 - Ubuntu Forums). What I want is to be able to specify something like cat example.txt | ratelimit -Bps 100 > /dev/ttyUSB0 ... and actually have a single byte from example.txt sent every 1/100 = 0.01 s (10 ms) to the output. Thanks in advance for any suggestions. Cheers!
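
    Absent a packaged tool, a minimal bytes-per-second limiter is a few lines of script. This is a sketch that mimics the hypothetical ratelimit -Bps 100 from the question: it forwards stdin to stdout one byte at a time, sleeping 1/rate seconds between bytes (the effective rate will be slightly lower due to sleep overhead).

        #!/usr/bin/env python
        # Usage: cat example.txt | python ratelimit.py 100 > /dev/ttyUSB0
        import sys
        import time

        rate = float(sys.argv[1])  # target bytes per second
        interval = 1.0 / rate      # seconds to wait between single-byte writes

        while True:
            byte = sys.stdin.buffer.read(1)
            if not byte:
                break  # EOF on stdin
            sys.stdout.buffer.write(byte)
            sys.stdout.buffer.flush()  # defeat block buffering so the pacing is real
            time.sleep(interval)

    The pv utility's -L option also limits throughput in bytes per second (cat example.txt | pv -L 100 > /dev/ttyUSB0), if installing pv is an option on your system.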

    Read the article
