Search Results

Search found 97400 results on 3896 pages for 'application data'.


  • Sync profile/data/software for multiple computers?

    - by ultimatebuster
    Is it possible to sync all the user settings (interface settings, program settings), as well as the installed programs, across multiple computers? The data itself is not as important; the settings (which are technically data, but not data in the sense of music or files) and the applications are. It should also not sync drivers (for example, one computer uses Bumblebee and the other uses ATI Catalyst, and the network drivers may differ, etc.).

    Read the article

  • Application use on website

    - by Palmer
    Is there a good way to run a C# application client-side in lieu of JavaScript? I have done some front-end work with JavaScript and back-end C# for web development at an old job, but I am interested in hosting a website myself soon. I have a C# application I would like to make open source, but before people bother to download it I'd like them to be able to try it. At its core it's a simple text editor, but there is much more to it in the nitty gritty. I could write it in JavaScript, but then I would have to keep documentation and changes up to date in both JavaScript and C#. I was thinking of creating an AJAX panel and somehow loading my WinForms application into a frame, but I don't know how, or what terms to google, because apart from AJAX I've never done anything like this before.

    Read the article

  • TDD with SQL and data manipulation functions

    - by Xophmeister
    While I'm a professional programmer, I've never been formally trained in software engineering. As I'm frequently visiting here and SO, I've noticed a trend for writing unit tests whenever possible, and as my software gets more complex and sophisticated, I see automated testing as a good aid to debugging. However, most of my work involves writing complex SQL and then processing the output in some way. How would you write a test to ensure your SQL was returning the correct data, for example? Then, say the data wasn't under your control (e.g., it belongs to a 3rd-party system): how can you efficiently test your processing routines without having to hand-write reams of dummy data? The best solution I can think of is making views of the data that, together, cover most cases. I can then join those views with my SQL to see if it's returning the correct records, and manually process the views to see if my functions, etc. are doing what they're supposed to. Still, it seems excessive and flaky, particularly when it comes to finding data to test against...
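    A minimal sketch of the idea hinted at above, under assumed names and using xUnit: keep the SQL itself thin, push the processing logic into code that works on plain rows, and feed that code small hand-built fixtures instead of reams of dummy data. Nothing here comes from the question; the types and the routine are hypothetical.

        using System.Collections.Generic;
        using System.Linq;
        using Xunit;

        // Shape of one row as it would come back from the SQL query (hypothetical).
        public record OrderRow(int CustomerId, decimal Amount);

        public static class OrderProcessing
        {
            // The routine under test accepts rows, not a connection,
            // so it can be exercised without touching the 3rd-party database.
            public static decimal TotalForCustomer(IEnumerable<OrderRow> rows, int customerId) =>
                rows.Where(r => r.CustomerId == customerId).Sum(r => r.Amount);
        }

        public class OrderProcessingTests
        {
            [Fact]
            public void Total_sums_only_the_requested_customer()
            {
                // A tiny fixture standing in for the SQL output.
                var rows = new[]
                {
                    new OrderRow(1, 10m),
                    new OrderRow(1, 2.5m),
                    new OrderRow(2, 99m),
                };

                Assert.Equal(12.5m, OrderProcessing.TotalForCustomer(rows, 1));
            }
        }

    The SQL itself can then be covered separately by a handful of integration tests against a known seed database or the views described above, while the bulk of the logic stays cheap to test.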

    Read the article

  • String or binary data would be truncated.

    - by Derek Dieter
    This error message is relatively straightforward. It normally happens when you are trying to insert data from a table whose values have larger data lengths than the columns of the table you are inserting into. An example of this would be trying to insert data from a permanent table, into [...]
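    As an illustration of the scenario the excerpt describes (not code from the article), here is a small ADO.NET sketch that reproduces the error by inserting from a wider source column into a narrower destination column; the connection string and table names are made up.

        using System;
        using System.Data.SqlClient;

        class TruncationDemo
        {
            static void Main()
            {
                // Hypothetical local connection string.
                const string connectionString = "Server=.;Database=Sandbox;Integrated Security=true;";
                using (var connection = new SqlConnection(connectionString))
                {
                    connection.Open();

                    // Source column is varchar(50); destination column is only varchar(10).
                    const string setup = @"
                        CREATE TABLE #Source (Name varchar(50));
                        CREATE TABLE #Destination (Name varchar(10));
                        INSERT INTO #Source (Name) VALUES ('a value longer than ten characters');";
                    using (var cmd = new SqlCommand(setup, connection))
                        cmd.ExecuteNonQuery();

                    // This INSERT...SELECT fails with "String or binary data would be truncated."
                    using (var cmd = new SqlCommand(
                        "INSERT INTO #Destination (Name) SELECT Name FROM #Source;", connection))
                    {
                        try { cmd.ExecuteNonQuery(); }
                        catch (SqlException ex) { Console.WriteLine(ex.Message); }
                    }
                }
            }
        }

    The fix is either to widen the destination column or to trim and validate the source values before the insert.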

    Read the article

  • Business Intelligence goes Big Data

    - by Alliances & Channels Redaktion
    Big Data is the next big challenge for the IT industry: masses of data from more and more sources – social networks, telecommunications and web logs, RFID readers, etc. – have to be logically linked, integrated in real time, and processed. But what about putting this into practice? A Europe-wide study by Steria Mummert Consulting shows that only 28% of companies have so far implemented an overarching, coordinated Business Intelligence strategy. What predominates are isolated BI solutions that are already operating at the limits of their capacity. In other words, data has so far been used only to a limited extent as a value-creating resource. The study's findings sound alarming, but companies can use them to their advantage: whoever tackles Big Data now can secure a profitable lead over the competition. What will the analytics environment of the future look like? How and where can Big Data be used for business success? Answers are provided by the customer event series run by Oracle and Oracle Platinum Partner Steria Mummert Consulting, where strategies are developed for how companies can use Information Discovery to expand their BI potential towards Big Data, step by step.

    Highlights from Munich: the feedback from Munich, the first stop of the event series on July 23, was thoroughly positive. It was not just the great venue, "La Villa" in the Bamberger Haus, that impressed. The 31 participants also took away a great deal in terms of content – among other things, a concrete proposal for their own roadmap towards Big Data. The opening question of the day was as simple as it was comprehensive: how can we keep an overview in a complex world? Steria Mummert Consulting presented the status quo of Business Intelligence in Europe based on the European biMA® Study 2012/13. Using examples from their own practice, the invited experts from Oracle and Steria Mummert Consulting presented various solution approaches. A very vivid Endeca demo showed, for example, how simple and flexible a dashboard can be: there are no predefined reports; instead, decision-makers can simply change the filters via drag & drop and thus get an individually structured view of their data. The session on Oracle Business Analytics for mobile applications and Real-Time Decisions offered an outlook on what is coming. The verdict: a successful mix of overview information and very concrete ideas for the customers' specific application areas.

    The "BI goes Big Data" event series continues in August with stops in Hamburg and Frankfurt. The free event is held together with Steria Mummert Consulting and is aimed at end customers. In Hamburg on August 14, 2013 – register here. In Frankfurt am Main on August 20, 2013 – register here.

    Read the article

  • Oracle Social Analytics with the Big Data Appliance

    - by thegreeneman
    Found an awesome demo put together by one of the Oracle NoSQL Database partners, eDBA, on using the Big Data Appliance to do social analytics. In this video, James Anthony shows off the BDA, Hadoop, and the Oracle Big Data Connectors, and how they can be used and integrated with the Oracle Database to do an end-to-end sentiment analysis leveraging Twitter data. A really great demo, well worth watching.

    Read the article

  • Data Everywhere - Step 3 in the Stairway to Reporting Services

    In this article, MVP Jessica Moss talks about data sources and how to connect them to your report. Learn how to add the reusable data sets and data sources for your reporting projects.

    Read the article

  • Programmer logbook application?

    - by jsoldi
    I've just released my application to the public, and I'm working on an updated version, but I really think I should keep track of ALL the code changes. If some functionality suddenly starts failing, a history of all the changes I made would make it a lot easier to figure out where I messed it up, in case the problem wasn't already there. The ideal would be a super fast computer with a huge hard drive and an application that automatically saves a backup of the whole project every time I change a line of code, with a file comparison tool that would show me every difference between any two backed-up projects, but that's not really possible for now. So, do you know any application that makes it easy for a programmer to keep track of the changes made to the source code?

    Read the article

  • Using ASP.NET 3.5 ListView in a Web Application

    This tutorial will show an example of how to use the ListView web control featuring data updating and validation before the data is inserted or updated to the MS SQL server database. Examples of how to use ListView controls to retrieve information from the data are featured in the first part of this tutorial, which appeared yesterday.
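    The tutorial itself is not reproduced here, but the kind of pre-insert validation it describes usually hangs off the ListView's ItemInserting event in the code-behind, roughly like the hedged sketch below. Control and field names are hypothetical, and the matching .aspx markup (the ListView, its data source, and a Label for errors) is assumed.

        using System;
        using System.Web.UI.WebControls;

        public partial class Products : System.Web.UI.Page
        {
            protected void ProductsListView_ItemInserting(object sender, ListViewInsertEventArgs e)
            {
                // e.Values holds the name/value pairs about to be sent to the data source.
                var name = e.Values["ProductName"] as string;

                if (string.IsNullOrWhiteSpace(name) || name.Length > 50)
                {
                    // Cancel the insert before it reaches the SQL Server database.
                    e.Cancel = true;
                    ErrorLabel.Text = "Product name is required and must be 50 characters or fewer.";
                }
            }
        }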

    Read the article

  • New CAM Editor v2.3 with Open-XDX for Open Data APIs

    - by drrwebber
    Creating actual working XML exchanges, loading data from data stores, generating XML, testing, integrating with web services, and then delivering the deployment takes a lot of coding and effort, and then there is writing the documentation, models, and schema, doing naming and design rule (NDR) checks, and packaging it all together (such as for NIEM IEPD use). What if there was a tool that helped you do all that easily and simply? Welcome to the new Open-XDX and the CAM Editor! Open-XDX uses code-free techniques in combination with CAM templates and visual drag and drop to rapidly design your XML exchange. Then Open-XDX will automatically generate all the SQL for you, read the database data, generate and populate the valid output XML, and filter with parameters. To complete the processing solution, Open-XDX works with web services and JDBC database connections as a callable module that can be deployed plug and play with your middleware stack, all with just a few lines of Java code (about 5, actually). You can build either Query/Response or Publish/Subscribe services from existing data stores to XML literally in minutes. To see a demonstration of using Open-XDX, a MySQL data store, and integration with Oracle WebLogic Server, please see this short video: http://youtube.com/user/TheCameditor There is also a Quick Guide available that provides more technical insight, along with a sample pack download of templates and SQL that you can try for yourself. Head on over to our project resource site to learn more, download the latest CAM Editor, and see links to all the resources and materials. We look forward to seeing how the developer community is able to jump-start information sharing initiatives using this new innovative approach.

    Read the article

  • Send less Server Data with "AFK"

    - by Oliver Schöning
    I am working on a 2D (real-time) multiplayer game with Construct 2 and a Socket.IO JavaScript server. Right now the code does not include an array for each player:

        var io = require("socket.io").listen(80);
        var x = 10;

        io.sockets.on("connection", function (socket) {
          socket.on("message", function (data) {
            x = x + 1;
          });
        });

        setInterval(function () {
          io.sockets.emit("message", 'Pos,' + x);
        }, 100);

    I noticed a very annoying problem with my server today. It sends my X coordinate every 100 milliseconds. The problem was that when I went to another browser tab, the browser stopped the game from running, and when I came back I think the game had to work through all the queued packets, because my offline debugging button still worked immediately while the online button only responded after some seconds. So I changed my code so that it only sends an update when it receives player input:

        var io = require("socket.io").listen(80);
        var x = 10;

        io.sockets.on("connection", function (socket) {
          socket.on("message", function (data) {
            x = x + 1;
            io.sockets.emit("message", 'Pos,' + x);
          });
        });

    Now it updates immediately, even when I have been inactive on the browser tab for a long time, confirming my suspicion that the client had to get through all the buffered data. Please confirm! It would be insane to only send information on client input in a real-time game, but how would I write an AFK function? I would think it is easier to run an AFK boolean check in a loop on the server. Here is what I need help with, in pseudocode:

        playerArray[Me]
        if ( "no input for X amount of seconds" ) {
          "don't send data"
        } else {
          "send data"
        }

    Read the article

  • Options for transparent data encryption on SQL 2005 and 2008 DBs.

    - by Dan
    Recently, a law was passed (rather silently) in Massachusetts requiring that data containing personally identifiable information must be encrypted. PII is defined by the state as containing the resident's first and last name in combination with either A. SSN, B. driver's license or ID card #, or C. debit or credit card #. Due to the nature of the software we make, all of our clients use SQL Server as the backend. Typically servers run SQL 2005 Standard or above, sometimes SQL 2008. Almost all client machines use SQL 2005 Express. We use replication between client and server. Unfortunately, to get TDE you need SQL Enterprise on each machine, which is absolutely not an option. I'm looking for recommendations of products that will encrypt a DB. Right now, I'm not interested in whole-disk encryption at all.

    Read the article

  • Microsoft Offloaded Data Transfer (ODX)

    - by Charles Cline
    For all you admins and other technical people out there who have watched the Windows OS spool data from network storage to your workstation and then back to network storage, watch for Offloaded Data Transfer (ODX). I saw ODX at TechEd a few weeks ago, and the data movement is primarily kept on the backend storage network. EMC and other storage vendors are already posting about when they will have this functionality. Here's some information about it: http://msdn.microsoft.com/en-us/library/windows/desktop/hh848056(v=vs.85).aspx

    Read the article

  • Excel 2013 Data Explorer and GeoFlow make 3-D maps quick and easy

    - by John Paul Cook
    Excel add-ins Data Explorer and GeoFlow work well together, mainly because they just work. Simple, fast, and powerful. I started Excel 2013, used Data Explorer to search for, examine, and then download latitude-longitude data, and finally used GeoFlow to plot an interactive 3-D visualization. I didn't use any fancy Excel commands, and the entire process took less than 3 minutes. You can download the GeoFlow preview from here. It can also be used with Office 365. Start by clicking the DATA EXPLORER...(read more)

    Read the article

  • R Statistical Analytics with Faster Performance for Enterprise Database Access and Big Data

    - by Mike.Hallett(at)Oracle-BI&EPM
    Further demonstrating commitment to the open source community, Oracle has just released enhanced support of the R statistical programming language for Oracle Solaris and AIX in addition to Linux and Windows, connectivity to Oracle TimesTen In-Memory Database in addition to Oracle Database, and integration of hardware-specific math libraries for faster performance. Oracle's open source distribution of R is available with the Oracle Big Data Appliance and is available for download now. Oracle also offers Oracle R Enterprise, a component of Oracle Advanced Analytics that enables R processing on Oracle Database servers. All of this makes big data analytics more accessible in the enterprise and improves data scientist productivity through faster performance. Since its introduction in 1995, R has attracted more than two million users and is widely used today for developing statistical applications that analyze big data. Analyst Report: Oracle Advances its Advanced Analytics Strategy

    Read the article

  • GeoToolkit Demo Embedded in an Application Framework via Maven

    - by Geertjan
    As a follow-on to yesterday's blog entry, here's the equivalent starter application for GeoToolkit (also known as Geotk) on the NetBeans Platform. The screenshots in the original post show a border.shp file found on-line, as well as a USA states shape file, rendered in the application. Note that the navigation bar is also included, though that could later be migrated into the menu bar of the NetBeans Platform. Download the Maven-based NetBeans Platform application with GeoToolkit integration here: http://java.net/projects/nb-api-samples/sources/api-samples/show/versions/7.3/tutorials/geospatial/geotoolkit/MyGeospatialSystem It was quite tricky getting this sample together; parts of it, especially the installer, which creates the database, come from the Puzzle GIS project, while the files come from on-line locations, and the JAI-related dependencies presented problems of their own. But it's definitely a starting point, and you now have the basic Maven structure needed for getting started with GeoToolkit in the context of all the services and components provided by the NetBeans Platform. Many thanks to Johann Sorel for his patience and help.

    Read the article

  • Ubuntu 11.10 CellC data card

    - by Armien
    In 11.10 I find that I can connect to the internet via a broadband connection successfully. The problem is that I cannot have the data card attached to the machine while it boots up. If I leave the data card in the machine during startup, it is not picked up and I then cannot connect to the internet. I must first boot the machine, log in, attach the data card to the USB port, and wait about 30 seconds. The broadband connection name will then appear in the network dropdown at the top of the screen, and an internet connection is now possible via broadband. Please let me know what must be done to fix this. Thanks, Armien

    Read the article

  • Passing data between engine layers

    - by spaceOwl
    I am building a software system (a game engine with networking support) that is made up of (roughly) these layers: a game layer, a messaging layer, and a networking layer. Game-related data is passed to the messaging layer (this could be anything game-specific), where it is converted into network-specific messages (which are then serialized to byte arrays). I'm looking for a way to convert "game" data into "network" data such that no strong coupling between these layers exists. As it stands now, the messaging layer sits between the other two (game and network) and "knows" both of them: it contains converter objects that know how to translate between the data objects of both layers, back and forth. I am not sure this is the best solution. Is there a good design for passing objects between layers? I'd like to learn more about the different options.
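    One way to express the question's converter idea, sketched here with made-up message types: only the messaging layer references both sides, translating a game-level object into a transport-level message and back, so the game and network layers never reference each other.

        using System;
        using System.Globalization;
        using System.Text;

        // Game-layer type (knows nothing about the network).
        public record PlayerMoved(int PlayerId, float X, float Y);

        // Network-layer type (knows nothing about the game).
        public record NetworkMessage(byte Opcode, byte[] Payload);

        // Messaging-layer contract: one converter per game message type.
        public interface IMessageConverter<TGame>
        {
            NetworkMessage ToNetwork(TGame message);
            TGame ToGame(NetworkMessage message);
        }

        public sealed class PlayerMovedConverter : IMessageConverter<PlayerMoved>
        {
            private const byte Opcode = 0x01;

            public NetworkMessage ToNetwork(PlayerMoved m) =>
                new NetworkMessage(Opcode, Encoding.UTF8.GetBytes(
                    string.Format(CultureInfo.InvariantCulture, "{0};{1};{2}", m.PlayerId, m.X, m.Y)));

            public PlayerMoved ToGame(NetworkMessage msg)
            {
                var parts = Encoding.UTF8.GetString(msg.Payload).Split(';');
                return new PlayerMoved(
                    int.Parse(parts[0], CultureInfo.InvariantCulture),
                    float.Parse(parts[1], CultureInfo.InvariantCulture),
                    float.Parse(parts[2], CultureInfo.InvariantCulture));
            }
        }

    Registering the converters in a lookup keyed by message type (or opcode) keeps the coupling confined to the messaging layer; swapping the string payload for a real binary or JSON serializer would not change the two outer layers.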

    Read the article

  • Storing non-content data in Orchard

    - by Bertrand Le Roy
    A CMS like Orchard is, by definition, designed to store content. What differentiates content from other kinds of data is rather subtle. The way I would describe it is this: if you would put each instance of a kind of data on its own web page, and if it would make sense to add comments, tags, or ratings to it, then it is content, and you can store it in Orchard using all the convenient composition options that it offers. Otherwise, it probably isn't, and you can store it using the somewhat simpler means that I will now describe. In one of the modules I wrote, Vandelay.ThemePicker, there is some configuration data for the module. That data is not content by the definition I gave above. Let's look at how this data is stored and queried. The configuration data in question is a set of records, each of which has a number of properties:

        public class SettingsRecord {
            public virtual int Id { get; set; }
            public virtual string RuleType { get; set; }
            public virtual string Name { get; set; }
            public virtual string Criterion { get; set; }
            public virtual string Theme { get; set; }
            public virtual int Priority { get; set; }
            public virtual string Zone { get; set; }
            public virtual string Position { get; set; }
        }

    Each property has to be virtual for NHibernate to handle it (it creates derived classes that are instrumented in all kinds of ways). We also have an Id property. The way these records will be stored in the database is described by a migration:

        public int Create() {
            SchemaBuilder.CreateTable("SettingsRecord", table => table
                .Column<int>("Id", column => column.PrimaryKey().Identity())
                .Column<string>("RuleType", column => column.NotNull().WithDefault(""))
                .Column<string>("Name", column => column.NotNull().WithDefault(""))
                .Column<string>("Criterion", column => column.NotNull().WithDefault(""))
                .Column<string>("Theme", column => column.NotNull().WithDefault(""))
                .Column<int>("Priority", column => column.NotNull().WithDefault(10))
                .Column<string>("Zone", column => column.NotNull().WithDefault(""))
                .Column<string>("Position", column => column.NotNull().WithDefault(""))
            );
            return 1;
        }

    When we enable the feature, the migration will run, which creates the table in the database. Once we've done that, all we have to do in order to use the data is inject an IRepository<SettingsRecord>, which is what I'm doing from the set of helpers I put under the SettingsService class:

        private readonly IRepository<SettingsRecord> _repository;
        private readonly ISignals _signals;
        private readonly ICacheManager _cacheManager;

        public SettingsService(
            IRepository<SettingsRecord> repository,
            ISignals signals,
            ICacheManager cacheManager) {
            _repository = repository;
            _signals = signals;
            _cacheManager = cacheManager;
        }

    The repository has a Table property, which implements IQueryable<SettingsRecord> (enabling all kinds of LINQ queries) as well as methods such as Delete and Create. Here is, for example, how I'm getting all the records in the table:

        _repository.Table.ToList()

    And here's how I'm deleting a record:

        _repository.Delete(_repository.Get(r => r.Id == id));

    And here's how I'm creating one:

        _repository.Create(new SettingsRecord {
            Name = name,
            RuleType = ruleType,
            Criterion = criterion,
            Theme = theme,
            Priority = priority,
            Zone = zone,
            Position = position
        });

    In summary, you create a record class and a migration, and you're in business: you can just manipulate the data through the repository that the framework exposes. You even get ambient transactions from the work context.

    Read the article

  • My application's bounce rate jumped from 30% to 80% overnight

    - by davidrac
    My application is provided as a service that is embedded in other sites. I have Google Analytics installed on the login popup dialog, which is a page of my application opened from the host site (OAuth). About a week ago, I noticed a sharp decrease in the number of new user registrations and a jump in the bounce rate (from ~30% to ~80%). This happened without any change in the application. I looked into technical parameters like page load time and error rates, but could not see any change in there. Any ideas what could cause this behavior?

    Read the article

  • Two interesting big data sessions around Openworld

    - by Jean-Pierre Dijcks
    For those who want to talk (not listen) about big data, here are two very cool sessions: BOF9877 - a birds-of-a-feather session around all things big data. It is on Monday, Oct 1, 6:15 PM - 7:00 PM, Marriott Marquis - Golden Gate. While all guests on the panel are special, we will have a very special guest on the panel. He is a proud owner of a Big Data Appliance (see here). Then there is a Big Data SIG meeting (the invite from Gwen): I'd like to invite everyone to our OOW12 meet-up. We'll meet on Tuesday, October 2nd, 8:45 to 9:45 at Moscone West, Level 3, Overlook 3. We will network, socialize, and discuss plans for the group. Which topics interest us for webinars? Which conferences do we want to meet at? What other activities are we interested in? We can also discuss big data topics, show off our great work, and seek advice on the challenges. Other than figuring out what we are collectively interested in, the discussion will be pretty open. Here is the official invite. See you at OpenWorld!

    Read the article

  • How to design a console application with good separation of UI from logic

    - by JavaSa
    Is it considered overkill to design a console application like MVC, MVP, or an N-tier architecture? If not, which is more common? If you can, link me to a simple example of it. I want to implement a tic-tac-toe game as a console application. I have a solution which holds two projects: TicTacToeBusinessLogic (a class library project) and TicTacToeConsoleApplication (a console application project) to represent the view logic. In TicTacToeConsoleApplication I have a Program.cs class which holds the main entry point (public static void Main). Now I face a problem: I want the game to handle its own game flow, so I can create a new GameManager class (from the BL), but this causes the view to know the BL part directly. So I'm a little confused how to write it in an acceptable way. Should I use delegates? Please show me a simple example.
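    A hedged sketch of one common answer, with made-up names that mirror the two projects described above: the class library owns the game flow through a GameManager that depends only on a small view interface, and the console project implements that interface and wires everything up in Main, so the business logic never references the console.

        using System;

        // --- TicTacToeBusinessLogic (class library) ---
        public enum Player { None, X, O }

        public interface IGameView                  // implemented by whatever UI hosts the game
        {
            void Render(Player[,] board);
            int AskForMove(Player current);         // returns a cell index 0..8
        }

        public sealed class GameManager
        {
            private readonly Player[,] _board = new Player[3, 3];
            private readonly IGameView _view;

            public GameManager(IGameView view) => _view = view;

            public void Run()
            {
                var current = Player.X;
                for (var turn = 0; turn < 9; turn++)
                {
                    _view.Render(_board);
                    var cell = _view.AskForMove(current);
                    _board[cell / 3, cell % 3] = current;    // validity/win checks omitted for brevity
                    current = current == Player.X ? Player.O : Player.X;
                }
            }
        }

        // --- TicTacToeConsoleApplication ---
        public sealed class ConsoleView : IGameView
        {
            public void Render(Player[,] board)
            {
                for (var row = 0; row < 3; row++)
                    Console.WriteLine($"{board[row, 0]} | {board[row, 1]} | {board[row, 2]}");
            }

            public int AskForMove(Player current)
            {
                Console.Write($"{current}, enter a cell (0-8): ");
                return int.Parse(Console.ReadLine() ?? "0");
            }
        }

        public static class Program
        {
            public static void Main() => new GameManager(new ConsoleView()).Run();
        }

    Delegates or events can replace the interface if the flow should be push-based instead, but the key point is the direction of the dependency: the console project references the class library, never the other way around.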

    Read the article
