Search Results

Search found 65558 results on 2623 pages for 'large data'.


  • Passing data between engine layers

    - by spaceOwl
    I am building a software system (a game engine with networking support) that is made up of (roughly) these layers: Game Layer, Messaging Layer, Networking Layer. Game-related data is passed to the messaging layer (this could be anything that is game-specific), where it is converted to network-specific messages (which are then serialized to byte arrays). I'm looking for a way to convert "game" data into "network" data without creating strong coupling between these layers. As it stands now, the Messaging layer sits between both layers (game and network) and "knows" both of them (it contains Converter objects that know how to translate data objects between the two layers, back and forth). I am not sure this is the best solution. Is there a good design for passing objects between layers? I'd like to learn more about the different options.
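    A minimal sketch of the converter-registry shape described above, in Python (all names here are hypothetical, not from the post):

        # The messaging layer owns the mapping between game-level objects and
        # wire-level messages, so the game layer and the network layer never
        # reference each other's types.

        class Converter:
            """Translates one game-side type to and from its network payload."""
            def to_message(self, game_obj) -> bytes: ...
            def from_message(self, payload: bytes): ...

        class MessagingLayer:
            def __init__(self):
                self._by_type = {}                   # game type -> Converter

            def register(self, game_type, converter: Converter):
                self._by_type[game_type] = converter

            def outbound(self, game_obj) -> bytes:
                # The game layer calls this and never sees network types.
                return self._by_type[type(game_obj)].to_message(game_obj)

    This doesn't eliminate the coupling so much as concentrate it: each Converter knows both sides, but it is the only place that does, which is usually the point of the pattern.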

    Read the article

  • Vantec NexStar NAS Enclosure - Writing large files

    - by peter
    I have one of these 'Vantec NexStar LX - NST-475LX-BK' drive enclosures. It is a NAS device. When I write a file to the device over eSATA or an SMB share, I cannot write files over 4GB. I think this is because the drive is formatted with FAT32. But when I access the device over FTP, it doesn't matter: I can write files of any size. For example, I wrote a 30GB file last night. Does this make any sense? Why? I guess the most important thing for me is data integrity.
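    The 4GB wall on eSATA/SMB is consistent with FAT32, which records each file's size in an unsigned 32-bit field; a quick check of the arithmetic (not from the post):

        # Largest single file FAT32 can represent:
        FAT32_MAX = 2**32 - 1              # 4,294,967,295 bytes, just under 4 GiB
        print(FAT32_MAX / 2**30)           # ~4.0
        print(30 * 2**30 > FAT32_MAX)      # True: a 30 GiB file can't be one FAT32 file

    Why the FTP path accepts a 30GB file is not clear from the post alone; if the disk really is FAT32, the device's firmware would have to be storing the upload some other way, which is speculation.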

    Read the article

  • How can I get data off of a damaged thumb drive?

    - by Lord Torgamus
    A guy I work with just came by to ask me about a damaged thumb drive/USB flash drive. Apparently his son dropped it on a hard surface and it won't power up anymore. They've tried plugging it into multiple machines without success, even though each port they tried was able to power other USB devices. He knows it's not a lost cause because a local tech store is offering to recover the data for $500, but he says the files aren't worth that much. I figured someone on SU would have an idea about this; he doesn't care about using the drive in the future, he just wants to salvage a few files that would be a pain to recreate. Is this possible without advanced equipment, and if so, how? He said he already tried the advice on the Internet about typing in different drive letters and such, but that failed because there was no power going to the drive. He also said that he opened the case up at one point, but I'm not sure what, if anything, he did inside.

    Read the article

  • Two interesting big data sessions around Openworld

    - by Jean-Pierre Dijcks
    For those who want to talk (not listen) about big data, here are two very cool sessions: BOF9877, a birds-of-a-feather session around all things big data. It is on Monday, Oct 1, 6:15 PM - 7:00 PM, Marriott Marquis - Golden Gate. While all guests on the panel are special, we will have a very special guest on the panel: he is a proud owner of a Big Data Appliance (see here). Then there is a Big Data SIG meeting (the invite from Gwen): I'd like to invite everyone to our OOW12 meet-up. We'll meet on Tuesday, October 2nd, 8:45 to 9:45 at Moscone West Level 3, Overlook 3. We will network, socialize and discuss plans for the group. Which topics interest us for webinars? Which conferences do we want to meet at? What other activities are we interested in? We can also discuss big data topics, show off our great work, and seek advice on the challenges. Other than figuring out what we are collectively interested in, the discussion will be pretty open. Here is the official invite. See you at Openworld!

    Read the article

  • How should I handle a redirect to an identity provider during a web api data request

    - by Erds
    Scenario: I have a single-page web app consisting purely of HTML, CSS, and JavaScript. After the initial load and during use, it updates various views with data from one or more RESTful APIs via ajax calls. The API calls return data in JSON format. Each web API may be hosted on an independent domain. Question: During the ajax callout, if my authorization token is not deemed valid by the web API, the web API will redirect me (302) to the identity provider for that particular API. Since this is an ajax callout for data and not necessarily for display, I need to find a way to display the identity provider's authentication page. It seems that I should trap that redirect and open up another view to display the identity provider's login page. Once the OAuth series of redirects is complete, I need to grab the token and retrigger my ajax data call with the token attached. Is this a valid approach, and if so, are there any examples showing the ajax handling of the redirects?
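    One way to see the shape of this flow from any HTTP client's point of view: suppress automatic redirect following, detect the 302 to the identity provider, complete the login out of band, and replay the original request. A minimal sketch in Python with the requests library (the URL and the token plumbing are hypothetical):

        import requests

        API_URL = "https://api.example.com/data"     # hypothetical data endpoint

        def fetch_with_reauth(token, acquire_token):
            # Don't follow redirects automatically: a 302 here means
            # "go log in", not "your data moved".
            headers = {"Authorization": f"Bearer {token}"}
            resp = requests.get(API_URL, headers=headers, allow_redirects=False)
            if resp.status_code == 302:
                idp_url = resp.headers["Location"]   # identity provider's login page
                token = acquire_token(idp_url)       # e.g. show a login view, run OAuth
                headers = {"Authorization": f"Bearer {token}"}
                resp = requests.get(API_URL, headers=headers, allow_redirects=False)
            resp.raise_for_status()
            return resp.json()

    In a browser the wrinkle is that XMLHttpRequest/fetch follow redirects transparently, so the ajax code never sees the 302; this is why many APIs instead return a 401 with the login URL in a header or the body, precisely so the client can trap it and open the login view.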

    Read the article

  • Data Conversion with Silverlight 3

    In this conclusion to a five-part series on data binding with Silverlight 3, we will discuss data conversion. As with the other parts, we'll explain when and why you need to convert data and go through a step-by-step process to show you how it's done.

    Read the article

  • Oracle Enterprise Data Quality: A Leader in Customer Satisfaction

    - by Mala Narasimharajan
    It's always good to hear feedback from practitioners - the ones in the trenches who have experienced both the good and the bad sides of enterprise software. Gartner recently released a report which surveyed 260 data quality professionals from around the world and found that most expressed considerable satisfaction with their data quality tool vendors as a whole. However, a couple of key findings stand out, including Datanomic (acquired by Oracle) leading the pack in terms of overall customer satisfaction among data quality tools. Read all about it right here: http://bit.ly/Ay45SG

    Read the article

  • Off the Charts: Getting Cost Data into Google Analytics

    With Analytics' new Cost Data Upload feature, users can measure and analyze non-Google cost data to calculate paid campaign effectiveness. Developers are able to build solutions to upload exported cost data into Analytics so marketers can have a unified view of their campaign spend - all within the Google Analytics interface. Join Google Analytics' Developer Advocate Pete Frisella to dive into the implementation of this new feature through the robust Analytics APIs. (Video from GoogleDevelopers, 30:00.)

    Read the article

  • Does data size in TCP/UDP make a difference in transmission time?

    - by liortal
    While discussing the development of a network component for our game engine, a member of our team suggested that transmitting either 500 bytes or 1K of data using UDP makes no difference from a performance perspective (the time it takes to transmit the data). He actually said that as long as you don't cross the MTU size, the size of the transmitted data doesn't really matter, as it's all the same. Is that true for UDP? What about TCP? That sounds just plain wrong to me, but I am not a network expert. I've been reading about other companies' game networking architectures, and it seems they're all trying to keep transmitted data to a minimum, making my colleague's claim seem even more unreasonable.
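    A crude way to test the claim is to time the sends yourself; a sketch using Python's standard socket module (loopback only, so it measures per-send overhead rather than real link behaviour, and the numbers are only illustrative):

        import socket, time

        def time_udp_sends(payload_size, count=10_000):
            recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            recv.bind(("127.0.0.1", 0))              # a sink socket so the port is open
            send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            payload = b"x" * payload_size
            start = time.perf_counter()
            for _ in range(count):
                send.sendto(payload, recv.getsockname())
            return time.perf_counter() - start

        # 1500 bytes is a typical Ethernet MTU; loopback's MTU is larger, so
        # nothing fragments here, but on a real link the 4000-byte case would.
        for size in (500, 1000, 1500, 4000):
            print(size, "bytes:", round(time_udp_sends(size), 3), "s")

    The colleague's claim does contain a kernel of truth: below one MTU, 500 bytes and 1K are each a single packet, so the per-packet costs (headers, syscalls, routing hops) are identical. But serialization delay and bandwidth still scale with byte count, TCP adds congestion-window effects on top, and on constrained links those bytes are exactly why game netcode keeps payloads small.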

    Read the article

  • Access to Salesforce.com Data Through Tableau Desktop

    - by dataintegration
    This article will explain how to connect any of the RSSBus OData Connectors to Tableau's business intelligence tool. While the example uses the Salesforce Connector, the same process can be followed for any of the OData Connectors.

    Step 1: Download and install both the Salesforce Connector from RSSBus and Tableau Desktop from Tableau.

    Step 2: Next, configure the Salesforce Connector to connect with your Salesforce.com account. If you browse to the Help tab in the Salesforce Connector application, there is a link to the Getting Started Guide, which will walk you through setting up the Salesforce Connector.

    Step 3: Once you have successfully configured the Salesforce Connector application, open Tableau and select the Connect to data option at the top left of the window.

    Step 4: Click on the option labeled OData, under the section labeled On a server.

    Step 5: A new pop-up will appear. The box under Step 1 of the pop-up must contain the OData URL of the Salesforce Connector table. You can find this by clicking on the Settings tab of the Salesforce Connector. Once you have found the OData entry URL, append the name of the table that you want Tableau to connect with to the OData entry URL. In this example, we will connect to the Account table. Thus, the URL we enter will be: http://localhost:8181/sfconnector/data/conn/odata.rsc/Account. You will also need to add authentication options in this step. To do this, select the Use a Username and Password option in Step 2 of the pop-up and enter the Username and Password of the user who has access to the Salesforce Connector. When you are done, click the Connect button in Step 3 of the pop-up.

    Step 6: When the connection to the Salesforce Connector is successful, give the connection a name and click the OK button.

    Step 7: The table columns will be listed on the left side, under the Dimensions section of the workspace.

    Step 8: To view your Salesforce.com data, right-click under the table name in the Data section at the top left of the dashboard and select the View Data option. Your Salesforce.com data will appear in Tableau.

    Read the article

  • Loading from Multiple Data Sources with Oracle Loader for Hadoop

    - by mannamal
    Oracle Loader for Hadoop can be used to load data from multiple data sources (for example, Hive and HBase) and data in multiple formats (for example, Apache weblogs and JSON files). There are two ways to do this: (1) use an input format implementation - Oracle Loader for Hadoop includes several input format implementations, and a user can also develop their own input format implementation for proprietary data sources and formats; or (2) leverage the capabilities of Hive, and use Oracle Loader for Hadoop to load from Hive. These approaches are discussed in our Oracle Open World 2013 presentation.

    Read the article

  • Data architecture for event log metrics?

    - by elliot42
    My service has a large ongoing number of user events, and we would like to do things like "count occurrences of event type T since date D." We are trying to make two basic decisions:

    1. What to store? Storing every event vs. only storing aggregates: (event-log style) log every event and count them later, vs. (time-series style) store a single aggregated "count of event E for date D" for every day.

    2. Where to store the data? In a relational database (particularly MySQL), in a non-relational (NoSQL) database, or in flat log files (collected centrally over the network via syslog-ng).

    What is standard practice, and where can I read more about comparing the different types of systems? Additional details: the total event stream is large, potentially hundreds of thousands of entries per day, but our current need is only to count certain types of events within it. We don't necessarily need real-time access to the raw data or the aggregation results. IMHO, "log all events to files, crawl them at a later time to filter and aggregate the stream" is a pretty standard UNIX way (sketched below), but my Rails-y compatriots seem to think that nothing is real unless it's in MySQL.
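    A minimal sketch of the log-then-aggregate approach, assuming one JSON event per log line with hypothetical "type" and "date" fields:

        import json
        from collections import Counter

        def count_events(log_path, event_type, since_date):
            # Aggregate on demand from the raw log: one JSON object per line,
            # e.g. {"type": "signup", "date": "2012-06-01", ...}
            per_day = Counter()
            with open(log_path) as f:
                for line in f:
                    event = json.loads(line)
                    if event["type"] == event_type and event["date"] >= since_date:
                        per_day[event["date"]] += 1
            return sum(per_day.values()), per_day    # total since D, plus per-day counts

    The time-series alternative is essentially the same loop run once per day, with each day's counts stored as rows in MySQL: raw-data flexibility traded for cheap queries.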

    Read the article

  • Linux: Limiting data throughput (pipe) in bytes per second?

    - by sdaau
    Hi all, I was wondering if there is a Linux program that can limit the data throughput of a pipe in actual bytes per second. From what I gather, two candidates would be: bfr, which however has been removed from Debian (Removal candidate: bfr); and cpipe, where the lowest resolution it seems to support is kB/s, meaning that buffer writes can still reach MB/s ([SOLVED] Is there a program to limit terminal pipe speed? - Page 2 - Ubuntu Forums). What I'd want is to be able to specify something like cat example.txt | ratelimit -Bps 100 > /dev/ttyUSB0 ... and actually have a single byte from example.txt sent every 1/100 = 0.01 sec (or 10 ms) to the output. Thanks in advance for any suggestions, Cheers!
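    For what it's worth, pv (the "pipe viewer" tool) has a rate-limit option (-L) that takes a plain bytes-per-second value, which may already fit the bill. Failing that, a byte-granular limiter is a few lines of Python; a sketch (the ratelimit command above is the poster's invention, and this is just one hypothetical implementation of it):

        #!/usr/bin/env python3
        # ratelimit.py: copy stdin to stdout at a fixed number of bytes per second.
        # Usage: cat example.txt | python3 ratelimit.py 100 > /dev/ttyUSB0
        import sys, time

        def main():
            bps = int(sys.argv[1])         # bytes per second
            interval = 1.0 / bps           # pause between single bytes
            src, dst = sys.stdin.buffer, sys.stdout.buffer
            while True:
                b = src.read(1)            # one byte at a time, as requested
                if not b:
                    break
                dst.write(b)
                dst.flush()                # defeat block buffering
                time.sleep(interval)

        if __name__ == "__main__":
            main()

    At higher rates the per-byte sleep and flush overhead dominates, so a real implementation would write small chunks and sleep between them; for 100 B/s to a serial port, the naive loop is adequate.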

    Read the article

  • kvm process has too large a memory footprint on host

    - by gucki
    I'm using the latest Ubuntu Quantal and start a KVM guest which should have 2048 MB of memory. Now, after a few hours, I can see that the kvm process of this guest is at around 2700 MB, i.e. 700 MB more than the guest should be able to consume. I mean, a small overhead like 1% would be OK, but not 30%?!

        root 8631 74.0 22.2 4767484 2752336 ? Sl Nov07 512:58 kvm -cpu kvm64 -smp sockets=1,cores=2 -cpu kvm64 -m 2048 -device virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,bootindex=100 -drive file=rbd:data/vm-disk-1,if=none,id=drive-virtio0,cache=writeback,aio=native -device virtio-net-pci,netdev=net0,bus=pci.0,addr=0x12,id=net0,mac=02:7a:86:e6:1a:6c,bootindex=200 -netdev type=tap,id=net0,vhost=on -usbdevice tablet -nodefaults -enable-kvm -daemonize -boot menu=on -vga cirrus
        root 8694 0.0 0.0 0 0 ? S Nov07 0:00 [kvm-pit/8631]

    How is this possible, and how do I prevent it?

    Read the article

  • Validation and Error Generation when using the Data Mapper Pattern

    - by AndyPerlitch
    I am working on saving the state of an object to a database using the data mapper pattern, but I am looking for suggestions/guidance on the validation and error message generation step (step 4 below). Here are the general steps as I see them:

    (1) The data mapper is used to get the current info (assoc array) about the object in the db:

        +=====================================================+
        | person_id |  name  | favorite_color | age |
        +=====================================================+
        |     1     |  Andy  |     Green      | 24  |
        +-----------------------------------------------------+

    The mapper returns an associative array, e.g. Person_Mapper::getPersonById($id):

        $person_row = array(
            'person_id'      => 1,
            'name'           => 'Andy',
            'favorite_color' => 'Green',
            'age'            => '24',
        );

    (2) The Person object constructor takes this array as an argument, populating its fields:

        class Person {
            protected $person_id;
            protected $name;
            protected $favorite_color;
            protected $age;

            function __construct(array $person_row) {
                $this->person_id      = $person_row['person_id'];
                $this->name           = $person_row['name'];
                $this->favorite_color = $person_row['favorite_color'];
                $this->age            = $person_row['age'];
            }

            // getters and setters...

            public function toArray() {
                return array(
                    'person_id'      => $this->person_id,
                    'name'           => $this->name,
                    'favorite_color' => $this->favorite_color,
                    'age'            => $this->age,
                );
            }
        }

    (3a) (GET request) Inputs of the HTML form that is used to change info about the person are populated using Person::getters:

        <form>
            <input type="text" name="name" value="<?=$person->getName()?>" />
            <input type="text" name="favorite_color" value="<?=$person->getFavColor()?>" />
            <input type="text" name="age" value="<?=$person->getAge()?>" />
        </form>

    (3b) (POST request) The Person object is altered with the POST data using Person::setters:

        $person->setName($_POST['name']);
        $person->setFavColor($_POST['favorite_color']);
        $person->setAge($_POST['age']);

    (4) Validation and error message generation on a per-field basis. Should this take place in the Person object or the Person mapper object? Should data be validated BEFORE being placed into the fields of the Person object?

    (5) The data mapper saves the Person object (updates the row in the database):

        $person_mapper->savePerson($person);
        // the savePerson method uses $person->toArray()
        // to get data in a more digestible format for the
        // db gateway used by person_mapper

    Any guidance, suggestions, criticism, or name-calling would be greatly appreciated.
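    One common answer to step 4 is to validate in the setters of the domain object itself, collecting per-field errors rather than throwing, so the mapper stays a pure persistence concern; a sketch of that shape (in Python rather than the post's PHP, with hypothetical names):

        class Person:
            def __init__(self, row):
                self.errors = {}                   # field name -> error message
                self.set_age(row["age"])
                # ... same pattern for the other fields

            def set_age(self, value):
                # Validate before the field is populated, so invalid input
                # never reaches the object's state (or the mapper).
                try:
                    age = int(value)
                except (TypeError, ValueError):
                    self.errors["age"] = "age must be a number"
                    return
                if not 0 <= age <= 150:
                    self.errors["age"] = "age out of range"
                    return
                self.age = age

            def is_valid(self):
                return not self.errors

    Under this split, savePerson() simply refuses objects whose is_valid() is false, which also answers the second question: data is validated before it settles into the object's fields.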

    Read the article

  • Get Started with Silverlight 3 Data Binding

    If you've learned about data binding from other Microsoft technologies, you'll be glad to hear that Silverlight 3 also gives you a smooth way to handle data binding. This article, the first in a multi-part series, gets you started by teaching you some of the techniques you'll need to handle data binding successfully.

    Read the article

  • Generating SQL Server Test Data with Visual Studio 2010

    As a database developer or tester, you sometimes need production-like data in your environment for development or testing, but you cannot use the production data itself because of security and privacy issues. So how can you generate test data, or replicate data similar to production, for your development or test environment?

    Read the article

  • Generating Data for Database Tests

    It is more and more essential for developers to work on development databases that have realistic data in both type and quantity, but without using real data. It isn't exactly easy, even with third-party tools to hand. Phil Factor shows how it can be done, taking the classic PUBS database and giving it a more realistic set of data.

    Read the article

  • Uploading many large files to a remote server

    - by TiernanO
    I am in the process of creating an offsite backup and need to do an initial load of data. Currently, that's about 400GB, give or take 10GB or so. The backup system is producing files which are about 4GB each, plus some other, smaller related files. So, I need to transfer all 400-ish gigs to a remote server, but how? What is the best method? I have full remote access to the server, so I can install anything I need to install. There are Windows, Linux and a Solaris VM running on the box itself, so any of those can be used there, and I have Windows and Linux at home. I have 2 internet connections in the house, with 10Mb/s upload on each, so something that could potentially split the transfer across connections would be handy (kind of like GetRight, but in reverse... PutRight?).

    Read the article
