Search Results

Search found 65101 results on 2605 pages for 'big data'.


  • MySQL: How to pull information from multiple tables based on information in other tables?

    - by Greg
    Ok, I have 5 tables which I need to pull information from based on one variable.

      gameinfo:     id | name | platforminfoid
      gamerinfo:    id | name | contact | tag
      platforminfo: id | name | abbreviation
      rosterinfo:   id | name | gameinfoid
      rosters:      id | gamerinfoid | rosterinfoid

    The one variable would be gamerinfo.id, which would pull the relevant data from gamerinfo, which would pull the relevant data from rosters, then rosterinfo, then gameinfo, and finally platforminfo. Basically it breaks down like this: gamerinfo contains the gamer's basic information; rosterinfo contains basic information about the rosters (i.e. the name and the game the roster is aimed towards); rosters contains the actual link from the gamer to the different rosters (gamers can be on multiple rosters); gameinfo contains basic information about the games (i.e. name and platform); platforminfo contains information about the different platforms the games are played on (it is possible for a game to be played on multiple platforms). I am pretty new to SQL queries involving JOINs and UNIONs, so usually I would just break it up into multiple queries, but I thought there has to be a better way. After looking around the net, I couldn't find (or maybe I just couldn't understand what I was looking at) what I was looking for. If anyone can point me in the right direction I would be most grateful.
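
    A minimal sketch of the kind of JOIN chain described above, assuming the table and column names listed in the question (the single input is the gamer's id):

      SELECT g.name        AS gamer,
             ri.name       AS roster,
             gi.name       AS game,
             pl.name       AS platform,
             pl.abbreviation
      FROM gamerinfo g
      JOIN rosters r       ON r.gamerinfoid = g.id
      JOIN rosterinfo ri   ON ri.id         = r.rosterinfoid
      JOIN gameinfo gi     ON gi.id         = ri.gameinfoid
      JOIN platforminfo pl ON pl.id         = gi.platforminfoid
      WHERE g.id = 1;   -- the one input variable

    Each JOIN follows one of the foreign keys described in the question, so a single query returns one row per roster the gamer is on, together with the game and platform for that roster.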

    Read the article

  • Passing a web service an unknown number of parameters

    - by Billyhole
    I'm relatively new to utilizing web services. I'm trying to create one that will accept data from an ASP.NET form whose input controls are created dynamically at runtime, so I don't know how many control values will be getting passed. I'm thinking I'll use jQuery's serialize() on the form to get the data, but what do I have the web service accept as a parameter? I thought maybe I could use serializeArray(), but I still don't know what type of variable to accept for the JavaScript array. Finally, I was thinking that I might need to create a simple data transfer object with the data before sending it along to the web service. I just didn't want to go through with the DTO route if there was a much simpler way or an established best practice that I should follow. Thanks in advance for any direction you can provide, and let me know if I wasn't clear enough or if you have any questions.
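
    A sketch of the client side of one common approach, assuming a hypothetical service endpoint at /MyService.asmx/SaveForm: serializeArray() produces an array of { name, value } pairs, so the service only ever has to accept a list of simple name/value items, no matter how many controls the form grew at runtime.

      $('#myForm').submit(function () {
          // serializeArray() returns [{ name: "field1", value: "a" }, ...]
          var fields = $(this).serializeArray();
          $.ajax({
              url: '/MyService.asmx/SaveForm',   // hypothetical endpoint
              type: 'POST',
              contentType: 'application/json; charset=utf-8',
              data: JSON.stringify({ fields: fields }),
              success: function (result) { /* handle response */ }
          });
          return false;
      });

    On the service side the single parameter can then be typed as a collection of a small name/value class (or a dictionary), which avoids needing a dedicated DTO per form layout.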

    Read the article

  • CakePHP repeats same queries

    - by Rytis
    I have a model structure: Category hasMany Product hasMany Stockitem belongsTo Warehouse, Manufacturer. I fetch data with this code, using Containable to be able to filter deeper in the associated models:

      $this->Category->find('all', array(
          'conditions' => array('Category.id' => $category_id),
          'contain' => array(
              'Product' => array(
                  'Stockitem' => array(
                      'conditions' => array('Stockitem.warehouse_id' => $warehouse_id),
                      'Warehouse',
                      'Manufacturer',
                  )
              )
          ),
      ));

    The data structure is returned just fine; however, I get multiple repeating queries like the one below, sometimes hundreds of such queries in a row, depending on the dataset:

      SELECT `Warehouse`.`id`, `Warehouse`.`title`
      FROM `beta_warehouses` AS `Warehouse`
      WHERE `Warehouse`.`id` = 2

    Basically, when building the data structure Cake is fetching data from MySQL over and over again, for each row. We have datasets of several thousand rows, and I have a feeling that it's going to impact performance. Is it possible to make it cache results and not repeat the same queries?
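
    Not a definitive fix, but one thing worth trying in CakePHP 1.x is the model's cacheQueries flag, which keeps identical query results in memory for the duration of the request, so a repeated lookup like the Warehouse one above should only hit MySQL once per request (a sketch, assuming the association chain from the find() call):

      // per-request, in-memory query caching on the repeatedly queried models
      $this->Category->Product->Stockitem->Warehouse->cacheQueries = true;
      $this->Category->Product->Stockitem->Manufacturer->cacheQueries = true;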

    Read the article

  • hosting simple python scripts in a container to handle concurrency, configuration, caching, etc.

    - by Justin Grant
    My first real-world Python project is to write a simple framework (or re-use/adapt an existing one) which can wrap small Python scripts (which are used to gather custom data for a monitoring tool) with a "container" to handle boilerplate tasks like: fetching a script's configuration from a file (and keeping that info up to date if the file changes, and handling decryption of sensitive config data); running multiple instances of the same script in different threads instead of spinning up a new process for each one; exposing an API for caching expensive data and storing persistent state from one script invocation to the next. Today, script authors must handle the issues above, which usually means that most script authors don't handle them correctly, causing bugs and performance problems. In addition to avoiding bugs, we want a solution which lowers the bar to create and maintain scripts, especially given that many script authors may not be trained programmers. Below are examples of the API I've been thinking of, and which I'm looking to get your feedback about. A scripter would need to build a single method which takes (as input) the configuration that the script needs to do its job, and either returns a Python object or calls a method to stream back data in chunks. Optionally, a scripter could supply methods to handle startup and/or shutdown tasks. HTTP-fetching script example (in pseudocode, omitting the actual data-fetching details to focus on the container's API):

      def run(config, context, cache):
          results = http_library_call(config.url, config.http_method, config.username, config.password, ...)
          return {'html': results.html, 'status_code': results.status, 'headers': results.response_headers}

      def init(config, context, cache):
          config.max_threads = 20                  # up to 20 URLs at one time (per process)
          config.max_processes = 3                 # launch up to 3 concurrent processes
          config.keepalive = 1200                  # keep process alive for 20 mins without another call
          config.process_recycle.requests = 1000   # restart the process every 1000 requests (to avoid leaks)
          config.kill_timeout = 600                # kill the process if any call lasts longer than 10 minutes

    A database-data-fetching script example might look like this (in pseudocode):

      def run(config, context, cache):
          expensive = context.cache["something_expensive"]
          for record in db_library_call(expensive, context.checkpoint, config.connection_string):
              context.log(record, "logDate")       # log all properties; optionally specify name of timestamp property
              last_date = record["logDate"]
              context.checkpoint = last_date       # persistent checkpoint, used next time through

      def init(config, context, cache):
          cache["something_expensive"] = get_expensive_thing()

      def shutdown(config, context, cache):
          expensive = cache["something_expensive"]
          expensive.release_me()

    Is this API appropriately "pythonic", or are there things I should do to make this more natural to the Python scripter? (I'm more familiar with building C++/C#/Java APIs, so I suspect I'm missing useful Python idioms.) Specific questions: is it natural to pass a "config" object into a method and ask the callee to set various configuration options, or is there another preferred way to do this? When a callee needs to stream data back to its caller, is a method like context.log() (see above) appropriate, or should I be using yield instead? (yield seems natural, but I worry it'd be over the head of most scripters.) My approach requires scripts to define functions with predefined names (e.g. "run", "init", "shutdown"). Is this a good way to do it? If not, what other mechanism would be more natural? I'm passing the same config, context, cache parameters into every method. Would it be better to use a single "context" parameter instead? Would it be better to use global variables instead? Finally, are there existing libraries you'd recommend to make this kind of simple "script-running container" easier to write?
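
    For what it's worth, a minimal sketch of the kind of container the question describes, assuming scripts are plain .py files that define run() (and optionally init()/shutdown()) as above; the config/context/cache objects passed in are stand-ins here, not a real API:

      import importlib.util

      class Container:
          def __init__(self, script_path, config, context, cache):
              # load the scripter's module from a .py file
              spec = importlib.util.spec_from_file_location("user_script", script_path)
              self.module = importlib.util.module_from_spec(spec)
              spec.loader.exec_module(self.module)
              self.config, self.context, self.cache = config, context, cache

          def _call(self, name):
              # init/shutdown are optional; run is required
              fn = getattr(self.module, name, None)
              if fn is not None:
                  return fn(self.config, self.context, self.cache)

          def start(self):
              self._call("init")

          def invoke(self):
              return self._call("run")

          def stop(self):
              self._call("shutdown")

    A container along these lines can then own the threading, config reloads and caching, while each script stays as three plain functions with the predefined names.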

    Read the article

  • Fastest way to do a weighted tag search in SQL Server

    - by Hasan Khan
    My table is as follows:

      ObjectID  bigint
      Tag       nvarchar(50)
      Weight    float
      Type      tinyint

    I want to search for all objects that have the tags 'big' or 'large', and I want the ObjectIDs ordered by the sum of the weights (so objects having both tags will be on top):

      select objectid,
             row_number() over (order by sum(weight) desc) as rowid
      from tags
      where tag in ('big', 'large') and type = 0
      group by objectid

    The reason for row_number() is that I want paging over the results. The query in its current form is very slow; it takes a minute to execute over 16 million tags. What should I do to make it faster? I have a non-clustered index on (objectid, tag, type). Any suggestions?
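
    Not a guaranteed fix, but since the query filters on tag and type and only reads objectid and weight, one commonly suggested change is a covering index that leads with the filtered columns, so the whole query can be answered from the index (a sketch, assuming the table is named tags as in the query; the index name is made up):

      CREATE NONCLUSTERED INDEX IX_tags_tag_type
          ON tags (tag, type)
          INCLUDE (objectid, weight);

    With the existing index on (objectid, tag, type) the tag predicate is not on the leading column, so SQL Server ends up scanning rather than seeking.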

    Read the article

  • How do I perform this XPath query with Linq?

    - by John Hansen
    In the following code I am using XPath to find all of the matching nodes and appending the values to a StringBuilder:

      StringBuilder sb = new StringBuilder();
      foreach (XmlNode node in this.Data.SelectNodes("ID/item[@id=200]/DAT[1]/line[position()>1]/data[1]/text()"))
      {
          sb.Append(node.Value);
      }
      return sb.ToString();

    How do I do the same thing, except using LINQ to XML instead? Assume that in the new version, this.Data is an XElement object.
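
    A sketch of one possible LINQ to XML translation of that XPath (assuming using System.Linq, System.Xml.Linq and System.Text, and this.Data being the XElement); each clause mirrors one XPath step, with Take(1) for [1] and Skip(1) for position()>1:

      StringBuilder sb = new StringBuilder();
      var values = this.Data.Elements("ID")
          .Elements("item")
          .Where(item => (string)item.Attribute("id") == "200")   // item[@id=200]
          .SelectMany(item => item.Elements("DAT").Take(1))       // DAT[1]
          .SelectMany(dat => dat.Elements("line").Skip(1))        // line[position()>1]
          .SelectMany(line => line.Elements("data").Take(1))      // data[1]
          .Select(data => (string)data);                          // element text
      foreach (string value in values)
      {
          sb.Append(value);
      }
      return sb.ToString();

    Note that (string)data returns the element's text content, which matches text() as long as the data elements contain only text.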

    Read the article

  • PHP Form - Edit & Delete via Text File Db

    - by Jax
    hi, I pieced together the script below from various tutorials, examples, etc... Right now the script saves Id, Name, Url with a "|" delimiter to a text file db like:

      1|John|http://www.john.com|
      2|Mark|http://www.mark.com|
      3|Fred|http://www.fred.com|

    But I'm having a hard time trying to make the "UPDATE" and "DELETE" buttons work. Can someone please post code which will: let me update/save any changed data for that row (for the UPDATE button); let me delete that row (for the DELETE button). PLEASE copy n paste the code below and try for yourself. I would like to keep the output format of the script below too. thanks D-

      $file = "data.txt";
      $name = $_POST['name'];
      $url = $_POST['url'];
      $data = file('data.txt');
      $i = 1;
      foreach ($data as $line) {
          $line = explode('|', $line);
          $i++;
      }
      if (isset($_POST['submits'])) {
          $fp = fopen($file, "a+");
          fwrite($fp, $i."|".$name."|".$url."|\n");
          fclose($fp);
      }
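
    A rough sketch of the two missing operations, assuming the same data.txt format (id|name|url| per line) and assuming the UPDATE/DELETE buttons post the row's id; this is untested glue code, so wire the parameters up to whatever your form actually posts:

      <?php
      function deleteRow($file, $id) {
          $lines = file($file, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
          $kept  = array();
          foreach ($lines as $line) {
              $fields = explode('|', $line);
              if ($fields[0] != $id) {       // keep every row except the one being deleted
                  $kept[] = $line;
              }
          }
          file_put_contents($file, implode("\n", $kept) . "\n");
      }

      function updateRow($file, $id, $name, $url) {
          $lines = file($file, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
          foreach ($lines as $i => $line) {
              $fields = explode('|', $line);
              if ($fields[0] == $id) {       // rewrite the matching row in the same format
                  $lines[$i] = $id . '|' . $name . '|' . $url . '|';
              }
          }
          file_put_contents($file, implode("\n", $lines) . "\n");
      }
      ?>

    Both functions simply rewrite the whole file, which is fine for a small text-file db but has no locking; flock() would be worth adding if two people can submit at once.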

    Read the article

  • How to get this JavaScript class member to return a value?

    - by George Edison
    I have a JavaScript class that has a method:

      function someClass() {
          this.someMethod = someClass_someMethod;
      }

      function someClass_someMethod() {
          // make an AJAX request
      }

    The problem is that someClass_someMethod() needs to return the value of the AJAX request. I'm using jQuery's $.getJSON() method to fetch the data. The data needs to be returned, but it seems the only way to get the data is through a callback. How can this be done?
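
    $.getJSON() is asynchronous, so the method can't return the response directly; a common pattern is to accept a callback and hand the data to it once the request completes. A sketch, assuming a hypothetical /data.json URL:

      function someClass() {
          this.someMethod = someClass_someMethod;
      }

      // the callback receives the JSON once the request completes
      function someClass_someMethod(callback) {
          $.getJSON('/data.json', function (data) {
              callback(data);
          });
      }

      // usage
      var obj = new someClass();
      obj.someMethod(function (data) {
          // work with data here, rather than via a return value
      });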

    Read the article

  • How to respond to any kind of interruption?

    - by mystify
    My app is playing a pretty complex animation. It's like a flipbook. What I do is: I have a huge loop with selectors, and after every delayed call the next one is called. Now someone calls the user, and the device suddenly shows this fat green status bar and maybe some big pick-up-the-phone-call overlay. Or: the alarm clock rings, and a big alert sheet appears in front of just about everything. It would be great to just pause the whole animation in case of ANY interruption. I've probably also missed about 5 more possible interruptions. How are you doing that? How do you get notified for all interruptions and then call one single -stopEverything method?
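
    A sketch of one way to catch most of these cases in one place: UIKit posts UIApplicationWillResignActiveNotification whenever the app is interrupted (incoming call UI, alarm alert, lock, etc.), so a single observer can funnel everything into -stopEverything. The -stopEverything body below is an assumption based on the delayed-selector loop described above:

      // e.g. in the object that drives the flipbook
      - (void)startObservingInterruptions {
          [[NSNotificationCenter defaultCenter] addObserver:self
                                                   selector:@selector(stopEverything)
                                                       name:UIApplicationWillResignActiveNotification
                                                     object:nil];
          // remember to call removeObserver: in dealloc
      }

      - (void)stopEverything {
          // cancel the pending delayed selector calls that drive the animation
          [NSObject cancelPreviousPerformRequestsWithTarget:self];
      }

    The UIApplicationDelegate methods applicationWillResignActive: and applicationDidBecomeActive: are the equivalent hooks if you'd rather pause and resume from the app delegate.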

    Read the article

  • Selecting DOM elements based on attribute content

    - by sameold
    I have several types of <a> in my document. I'd like to select only the <a> whose title attribute begins with the text "more data", like <a title="more data *">. For example, given the markup below, I'd like to select the first and second <a>, but skip the third because its title doesn't begin with "more data", and skip the 4th because that <a> doesn't even have a title attribute.

      <a title="more data some text" href="http://mypage.com/page.html">More</a>
      <a title="more data other text" href="http://mypage.com/page.html">More</a>
      <a title="not needed" href="http://mypage.com/page.html">Not needed</a>
      <a href="http://mypage.com/page.html">Not needed</a>

    I'm using DOMXPath. What would my query look like?

      $xpath = new DOMXPath($doc);
      $q = $xpath->query('//a');
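
    A sketch of the query using XPath's starts-with() function, which handles both the prefix match and the missing-attribute case (elements without a title simply don't match):

      $xpath = new DOMXPath($doc);
      $q = $xpath->query('//a[starts-with(@title, "more data")]');
      foreach ($q as $a) {
          echo $a->getAttribute('href'), "\n";   // first and second link only
      }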

    Read the article

  • What strategies are efficient to handle concurrent reads on heterogeneous multi-core architectures?

    - by fabrizioM
    I am tackling the challenge of using both the capabilities of an 8-core machine and a high-end GPU (Tesla 10). I have one big input file, one thread for each core, and one for the GPU handling. The GPU thread, to be efficient, needs a big number of lines from the input, while the CPU threads need only one line to proceed (storing multiple lines in a temp buffer was slower). The file doesn't need to be read sequentially. I am using boost. My strategy is to have a mutex on the input stream that each thread locks and unlocks. This is not optimal because the GPU thread should have a higher precedence when locking the mutex, being the fastest and the most demanding one. I can come up with different solutions, but before rushing into implementation I would like to have some guidelines. What approach do you use / recommend?
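
    A minimal sketch of one option: keep the single mutex, but let each consumer ask for however many lines it wants per lock acquisition, so the GPU thread grabs a large batch in one critical section while CPU threads take one line at a time (standard C++ shown here; the boost::thread equivalents are a straightforward swap):

      #include <fstream>
      #include <mutex>
      #include <string>
      #include <vector>

      class LineSource {
      public:
          explicit LineSource(const std::string& path) : in_(path) {}

          // Each call holds the lock exactly once, whether it reads 1 line or 10000.
          std::vector<std::string> getLines(std::size_t count) {
              std::lock_guard<std::mutex> lock(mutex_);
              std::vector<std::string> lines;
              std::string line;
              while (lines.size() < count && std::getline(in_, line))
                  lines.push_back(line);
              return lines;   // an empty vector signals end of file
          }

      private:
          std::ifstream in_;
          std::mutex mutex_;
      };

      // GPU thread: source.getLines(10000);   CPU threads: source.getLines(1);

    Batching this way usually matters more than lock priority, since the GPU thread then contends for the mutex far less often; if priority is still an issue, a simple "GPU waiting" flag that CPU threads check before locking is one way to add it.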

    Read the article

  • Simple C question

    - by Meko
    Hi all. I am trying to make a little program that reads data from a file which has the name of a user and some data for that user. I am new to C; how can I calculate this data for each user? By reading line by line and adding each char to an array? And how can I read a line, is there any function for that?
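
    A minimal sketch of line-by-line reading with fgets(), assuming a made-up file format of one user per line, e.g. "john 42" (a name and one number); sscanf() then splits each line. The file name and format are assumptions for illustration:

      #include <stdio.h>

      int main(void) {
          FILE *fp = fopen("users.txt", "r");   /* assumed file name */
          if (fp == NULL) {
              perror("fopen");
              return 1;
          }

          char line[256];
          char name[128];
          int value;

          /* fgets() reads one line (up to the buffer size) per call */
          while (fgets(line, sizeof line, fp) != NULL) {
              if (sscanf(line, "%127s %d", name, &value) == 2) {
                  printf("user %s has value %d\n", name, value);
              }
          }

          fclose(fp);
          return 0;
      }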

    Read the article

  • C# TCP Client/Server communication issue

    - by Jamie
    What I'm currently trying to do is make a very basic webchat for IRC using Silverlight. Basically how I'm trying to do it is: have a TCP server listening for connections from Silverlight. When a client connects, it creates a new connection to IRC, and data is passed to/from the client/IRC via the server application. I've gotten it to work fine for one client connection, but as soon as two (or more) clients connect, multiple connections are made to IRC but all data passed from the clients just goes through the latest IRC connection (if that makes sense). For example, Client1, Client2 and Client3 are all connected to IRC, but no matter who sends data it all comes through Client3. Between the client and the server app it recognises the data coming in from different clients, so I believe the problem lies within the way I've connected to IRC. When the TCP server accepts a new client, a new thread is made to listen to incoming data, and from there a new thread is made to connect to IRC. I'm sure that's where the problem exists, but I've confused myself a lot now and am wondering if anyone can help me figure out a solution.
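
    The symptom (everything flowing through the latest IRC connection) usually means the IRC connection is kept in a shared or static field that each new client overwrites. A sketch of the alternative, pairing each accepted client with its own IRC connection in a per-session object (using System.Net.Sockets and System.Threading); the class and server names here are made up for illustration:

      // one instance of this per accepted webchat client
      class ClientSession
      {
          private readonly TcpClient webClient;   // the Silverlight client
          private readonly TcpClient ircClient;   // this client's own IRC connection

          public ClientSession(TcpClient webClient)
          {
              this.webClient = webClient;
              this.ircClient = new TcpClient("irc.example.net", 6667);   // hypothetical server
          }

          public void Start()
          {
              // pump data in both directions using this session's own streams,
              // never a field shared between sessions
              new Thread(() => Pump(webClient.GetStream(), ircClient.GetStream())).Start();
              new Thread(() => Pump(ircClient.GetStream(), webClient.GetStream())).Start();
          }

          private static void Pump(NetworkStream from, NetworkStream to)
          {
              var buffer = new byte[4096];
              int read;
              while ((read = from.Read(buffer, 0, buffer.Length)) > 0)
                  to.Write(buffer, 0, read);
          }
      }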

    Read the article

  • Large scale Merge Replication strategy - what can go wrong?

    - by niidto
    Hi, I'm developing a piece of software that uses merge replication and SQL Compact on Windows Mobile 6. At the moment it is running on 5 devices reasonably well. The issues I've come up against are as follows: the schema has had to change a lot, and will continue to have to change as the application evolves, and there have been various errors replicating these schema changes down to the device, with uploads failing due to schema inconsistencies; subscriptions expiring (after 14 days) and being unable to reinitialize with upload, i.e. potential data loss of unsynced data up to that point. Basically, the worst case scenario is data loss, and when merge replication fails, there seems to be no way to get the data back off. My method until now has been to drop and recreate the subscription on the device. I don't hear of many people doing this, though it seems to solve everything. The long term plan is to roll this out to 500+ devices. Any advice from people who have undertaken similar projects, on how to minimise data loss and how to add appropriate error handling code to recover from sync failures, would be much appreciated. James

    Read the article

  • Output of System.out.println(object)

    - by Shaarad Dalvi
    I want to know what exactly the output tells me when I do the following:

      class data {
          int a = 5;
      }

      class main {
          public static void main(String[] args) {
              data dObj = new data();
              System.out.println(dObj);
          }
      }

    I know it gives something related to the object; the output in my case is data@1ae73783. I guess the '1ae73783' is a hex number. I also did some work around it and printed System.out.println(dObj.hashCode()); and got the number 415360643, an integer value. I don't know what hashCode() returns, but still, out of curiosity, when I converted 1ae73783 to decimal, I got 415360643! That's why I am curious: what exactly is this number? Is this some memory location in Java's sandbox or some other thing? Any light on this matter will be helpful.. thanks! :)
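
    What gets printed is the result of Object.toString(), whose default form is documented as getClass().getName() + "@" + Integer.toHexString(hashCode()), which is exactly why the hex part matches the hashCode() value. The number is just the object's identity hash code chosen by the JVM, not something to treat as a memory address. A small sketch that reproduces the default output by hand and shows the usual fix of overriding toString():

      class Data {
          int a = 5;

          @Override
          public String toString() {
              return "Data{a=" + a + "}";
          }
      }

      public class Main {
          public static void main(String[] args) {
              Data d = new Data();
              // the default Object.toString() form, reconstructed by hand:
              System.out.println(d.getClass().getName() + "@" + Integer.toHexString(d.hashCode()));
              // the overridden form:
              System.out.println(d);   // prints Data{a=5}
          }
      }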

    Read the article

  • Which network protocol to use for lightweight notification of remote apps (Delphi 2005)

    - by Chris Thornton
    I have this situation.... Client-initiated SOAP 1.1 communication between one server and, let's say, tens of thousands of clients. Clients are external, coming in through our firewall, authenticated by certificate, https, etc. They can be anywhere, and usually have their own firewalls, NAT routers, etc. They're truly external, not just remote corporate offices. They could be on a corporate/campus network, DSL/cable, even dialup. Currently, clients push new data to the server and pull new data from the server on a 15-minute polling loop. The server currently does not push data - the client hits the "messagecount" method to see if there is new data to pull. If it's 0, the client sleeps for another 15 min and checks again. We're trying to get that down to 7 seconds. If this were an internal app, with one or just a few dozen clients, we'd write a client "listener" SOAP service and would push data to it. But since they're external, sit behind their own firewalls, and sometimes private networks behind NAT routers, this is not practical. So we're left with polling on a much quicker loop. 10K clients, each checking their messagecount every 10 seconds, is going to be 1,000 messages/sec that will mostly just waste bandwidth, server, firewall, and authenticator resources. So I'm trying to design something better than what would amount to a self-inflicted DoS attack. I don't think it's practical to have the server send SOAP messages to the client (push), as this would require too much configuration at the client end. But I think there are alternatives that I don't know about. Such as:

    1) Is there a way for the client to make a request for GetMessageCount() via SOAP 1.1, get the response, and then perhaps "stay on the line" for perhaps 5-10 minutes to get additional responses in case new data arrives? i.e. the server says "0", then a minute later, in response to some SQL trigger (the server is C# on SQL Server, btw), knows that this client is still "on the line" and sends the updated message count of "5"?

    2) Is there some other protocol that we could use to "ping" the client, using information gathered from their last GetMessageCount() request?

    3) I don't even know. I guess I'm looking for some magic protocol where the client can send a GetMessageCount() request which would include info for "oh by the way, in case the answer changes in the next hour, ping me at this address...".

    Also, I'm assuming that any of these "keep the line open" schemes would seriously impact the server sizing, as it would need to keep many thousands of connections open simultaneously. That would likely impact the firewalls too, I think. Is there anything out there like that? Or am I pretty much stuck with polling? TIA, Chris

    Read the article

  • Backbone.js - Getting JSON back from url

    - by Brian
    While trying to learn Backbone.js, I've been trying to grab the contents of a JSON file using the following code:

      (function($){
          var MyModel = Backbone.Model.extend();
          var MyCollection = Backbone.Collection.extend({
              model: MyModel,
              url: '/backbone/data.json',
              parse: function(response) {
                  console.log(response);
                  return response;
              }
          });
          var stuff = new MyCollection;
          console.log(stuff.fetch());
          console.log(stuff.toJSON());
      })(jQuery)

    stuff.fetch() returns the entire object (with the data I'm after in responseText), stuff.toJSON() returns nothing ([]), but the console.log in the parse method is returning exactly what I want (the JSON object of my data). I feel like I'm missing something obvious here, but I just can't seem to figure out why I can't get the right data out. Could someone point me in the right direction or show me what I'm doing wrong here? Am I using a model for the wrong thing?
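
    fetch() is asynchronous, so toJSON() runs before the response has arrived and the collection is only populated later; parse() fires once the data is actually back, which is why its console.log looks right. A sketch of reading the data after the fetch completes, either via the success callback or by listening for the event fetch fires when done (reset or add, depending on your Backbone version):

      var stuff = new MyCollection();

      // option 1: success callback
      stuff.fetch({
          success: function (collection, response) {
              console.log(collection.toJSON());   // now populated
          }
      });

      // option 2: event listener
      stuff.bind('reset', function (collection) {
          console.log(collection.toJSON());
      });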

    Read the article

  • Ajax(jQuery) strange file post problem

    - by faya
    Hello, I have a problem posting a file via an ajax jQuery function. I have something like this:

      $('#my_form').submit(function() {
          var serialized = $(this).formSerialize();
          var sUrl = "xxx";
          $.ajax({
              url: sUrl,
              type: "POST",
              data: serialized,
              success: function(data) {
                  $(".main_container").html(data);
              }
          });
          return false;   // THIS return statement blocks sending the file content
      });

    When I remove the return false statement everything is okay and the server side gets the file content, but when it's there (I monitor with Firebug) the post sends only the file name. What can be wrong? P.S. - I need this return false statement, because I want to manipulate the returned data myself.
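
    File inputs can't be read by formSerialize()/$.ajax() this way; only the file name ends up in the serialized string, and removing return false "works" because the browser then performs a normal, non-ajax submit. Since formSerialize() comes from the jQuery Form plugin, one option (a sketch, assuming that plugin is already loaded) is to let its ajaxSubmit() handle the upload: it submits the real form, file field included, and still hands you the response to manipulate:

      $('#my_form').submit(function() {
          $(this).ajaxSubmit({
              url: "xxx",
              type: "POST",
              success: function(data) {
                  $(".main_container").html(data);
              }
          });
          return false;   // still prevents the normal page-reloading submit
      });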

    Read the article

  • Silverlight 4, Google Chrome, and HttpWebRequest problem

    - by synergetic
    My Silverlight 4 application, hosted in ASP.NET MVC 2, works fine when used through Internet Explorer 8, both on the development server and on the remote web server (IIS 6.0). However, when I try to browse through Google Chrome (version 5.0.375.70), it throws a "remote server returned not found" error. The code causing the problem is the following:

      public class MyWebClient
      {
          private HttpWebRequest _request;
          private Uri _uri;
          private AsyncOperation _asyncOp;

          public MyWebClient(Uri uri)
          {
              _uri = uri;
          }

          public void Start(XElement data)
          {
              _asyncOp = AsyncOperationManager.CreateOperation(null);
              _data = data;
              _request = (HttpWebRequest)WebRequest.Create(_uri);
              _request.Method = "POST";
              _request.BeginGetRequestStream(new AsyncCallback(BeginRequest), null);
          }

          private void BeginRequest(IAsyncResult result)
          {
              Stream stream = _request.EndGetRequestStream(result);
              using (StreamWriter writer = new StreamWriter(stream))
              {
                  writer.Write(((XElement)_data).ToString());
              }
              stream.Close();
              _request.BeginGetResponse(new AsyncCallback(BeginResponse), null);
          }

          private void BeginResponse(IAsyncResult result)
          {
              HttpWebResponse response = (HttpWebResponse)_request.EndGetResponse(result);
              if (response != null)
              {
                  //process returned data
                  ...
              }
          }
          ...
      }

    In short, the above code sends some XML data to the web server (to an ASP.NET MVC controller) and gets back processed data. It works when I use Internet Explorer 8. Can someone please explain what the problem is with Google Chrome?

    Read the article

  • Total Number of records required in paged .NET datagrid control

    - by sumitchauhan
    I am using a data grid and have bound a data source to it. I am trying to get the total number of records in the grid in the overridden InitializePager method, from PagedDataSource.DataSourceCount. I thought DataSourceCount returned the number of records from the SelectCountMethod of the ObjectDataSource, but DataSourceCount is giving me the page size and not the total number of records, whereas when I debug and step into SelectCountMethod it returns the correct total number of records. I am not sure how to get the data from SelectCountMethod in the DataGrid.

    Read the article

  • event .split() no response

    - by klox
    I've modified the code to look like the below:

      function addtext() {
          var barCode = this.(text);          // not valid JavaScript as written
          $("#model").change(function() {
              barCode = $this.val();          // should be $(this).val()
              var data = barCode.split("");   // split("") splits into single characters
              $("#model").val(data[0]);
              $("#serial").val(data[1]);
          });
      };

    but it still does not separate the value. Please help.
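
    A sketch of a working version, assuming the two values are separated by a delimiter in the scanned value; split("") splits into individual characters, so a real delimiter has to be passed instead (a space is assumed here purely for illustration), and $this needs to be $(this):

      function addtext() {
          $("#model").change(function() {
              var barCode = $(this).val();
              // assumption: model and serial are separated by a space, e.g. "KD55X 123456"
              var data = barCode.split(" ");
              $("#model").val(data[0]);
              $("#serial").val(data[1]);
          });
      }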

    Read the article

  • Database Programming in C#, returning output from Stored Proc

    - by jpavlov
    I am working on gaining an understanding of how to interface stored procedures with applications. My example is simple, but it doesn't display my columns and rows in the command prompt; instead it displays System.Data.SqlClient.SqlDataReader. How do I display the rows from my stored procedure?

    ---- Stored Proc ----

      ALTER PROCEDURE dbo.SelectID
      AS
          SELECT * FROM tb_User;

    Below is the code:

      using System;
      using System.Data.SqlClient;
      using System.IO;

      namespace ExecuteStoredProc
      {
          class Program
          {
              static void Main(string[] args)
              {
                  SqlConnection cnnUserMan;
                  SqlCommand cmmUser;
                  //SqlDataReader drdUser;

                  //Instantiate and open the connection
                  cnnUserMan = new SqlConnection("Data Source=.\\SQLEXPRESS;AttachDbFilename=c:\\Program Files\\Microsoft SQL Server\\MSSQL10.SQLEXPRESS\\MSSQL\\DATA\\UserDB.mdf; Integrated Security=True;Connect Timeout=30;User Instance=True");
                  cnnUserMan.Open();

                  //Instantiate and initialize command
                  cmmUser = new SqlCommand("SelectID", cnnUserMan);
                  cmmUser.CommandType = System.Data.CommandType.StoredProcedure;
                  //drdUser = cmmUser.ExecuteReader();
                  Console.WriteLine(cmmUser.ExecuteReader());
                  Console.ReadLine();
              }
          }
      }

    Thanks.
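
    The Console.WriteLine call is printing the reader object itself, which is why the type name appears. A sketch of looping over the reader and printing each row, assuming the same connection and command as above:

      using (SqlDataReader drdUser = cmmUser.ExecuteReader())
      {
          // print column headers
          for (int i = 0; i < drdUser.FieldCount; i++)
              Console.Write(drdUser.GetName(i) + "\t");
          Console.WriteLine();

          // print each row, column by column
          while (drdUser.Read())
          {
              for (int i = 0; i < drdUser.FieldCount; i++)
                  Console.Write(drdUser[i] + "\t");
              Console.WriteLine();
          }
      }
      Console.ReadLine();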

    Read the article

  • Do superfluous calls to addEventListener("event", thisSpecificFunction) waste resources?

    - by Richard Haven
    I have ItemRenderers that need to listen for events. When they hear an event (and when data changes), they dispatch an event with their current data value. As item renderers are reused, each of them is going to add its callback in set data(value...) and pass the callback function in the event, as well as the current data value. So, the listener of the item renderer's bubbling event will call someEventDispatcher.addEventListener("someEvent", itemRendererEvent.callbackListener). This will happen more than once. Does setting the same event listener on the same event for the same dispatcher waste resources? Does the dispatcher see that it already has the listener?

    Read the article
