Search Results

Search found 31483 results on 1260 pages for 'database migration'.


  • Adding a new column to a table which contains live data

    - by Ardman
    I have a large table of over 60 million records and I would like to add 2 new columns for data migration purposes. There are indexes on the table and some of them are large. By adding the 2 new columns, will I run the risk of slowing down the database while it attempts to add them, and maybe timing out? Or will it just work? I know that if I try to rearrange the columns SQL Server will ask me to drop and re-create the table, so I definitely don't want that. Is this something everyone is challenged with?
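    A minimal sketch of the usual answer (not from the original thread): in SQL Server, adding NULLable columns with no default is a metadata-only change, so the existing rows and indexes are not rewritten and the statement should complete almost instantly even on 60 million rows. Table and column names below are hypothetical.

        -- Metadata-only on SQL Server: no data pages are touched, because the
        -- new columns are NULLable and there is no default to backfill.
        ALTER TABLE dbo.BigTable
            ADD MigrationBatchId INT      NULL,
                MigratedOn       DATETIME NULL;

    By contrast, adding a column with a default that must be written into every existing row (e.g. NOT NULL with a default, on SQL Server 2005/2008) does rewrite the table, which is where the timeout risk comes from.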


  • Inconsistent accessibility

    - by user1312412
    I am getting the following error:

        Inconsistent accessibility: parameter type 'Db.Form1.ConnectionString' is less accessible than method 'Db.Form1.BuildConnectionString(Db.Form1.ConnectionString)'

        // Namespaces
        using System;
        using System.Collections.Generic;
        using System.ComponentModel;
        using System.Data;
        using System.Drawing;
        using System.Linq;
        using System.Text;
        using System.Windows.Forms;
        using Microsoft.VisualBasic;
        using System.Collections;
        using System.Diagnostics;
        using System.Data.OleDb;
        using System.IO;
        using System.Drawing.Printing;

        namespace Db
        {
            public partial class Form1 : Form
            {
                public Form1()
                {
                    InitializeComponent();
                }

                public void SetBusy()
                {
                    this.Cursor = Cursors.WaitCursor;
                    Application.DoEvents();
                }

                public void SetFree()
                {
                    this.Cursor = Cursors.Default;
                    Application.DoEvents();
                }

                // Connection string split into parts
                struct ConnectionString
                {
                    public string Provider;
                    public string DataSource;
                    public string UserId;
                    public string Password;
                    public string Database;
                }

                // Declare
                public string BuildConnectionString(ConnectionString connStr) // <-- getting the error here
                {
                    string[] parts = new string[5];
                    parts[0] = "Provider=" + connStr.Provider;
                    parts[1] = "Data Source=" + connStr.DataSource;
                    parts[2] = "User Id=" + connStr.UserId;
                    parts[3] = "Password=" + connStr.Password;
                    parts[4] = "Initial Catalog=" + connStr.Database;
                    return string.Join(";", parts);
                }

                // Settings
                public bool IsValidConnectionForPrinting()
                {
                    SetBusy();
                    ConnectionString connStr = new ConnectionString();
                    connStr.Provider = cboProvider.Text;
                    connStr.DataSource = cboDataSource.Text;
                    connStr.UserId = txtUserId.Text;
                    connStr.Password = txtPassword.Text;
                    connStr.Database = cboDatabase.Text;

                    // Connection string to database
                    string connectionString = BuildConnectionString(connStr);
                    OleDbConnection conn = new OleDbConnection(connectionString);
                    try
                    {
                        conn.Open();
                        OleDbCommand cmd = conn.CreateCommand(); // note: CreateCommand is a method and needs ()
                        cmd.CommandType = CommandType.TableDirect;
                        cmd.CommandText = "vw_pr_DL";
                        cmd.ExecuteScalar();
                        cmd.CommandText = "vw_pr_VR";
                        cmd.ExecuteScalar();
                        conn.Close();
                    }
                    // Exception messages
                    catch (Exception ex)
                    {
                        SetFree();
                        if (ex.Message.StartsWith("Invalid object name"))
                        {
                            MessageBox.Show(ex.Message.Replace("Invalid object name", "Table or view not found"), "Connection Test");
                        }
                        else
                        {
                            MessageBox.Show(ex.GetBaseException().Message, "Connection Test");
                        }
                        return false;
                    }
                    SetFree();
                    return true;
                }

                // When the user clicks the test button
                private void btnConnTest_Click(object sender, EventArgs e)
                {
                    if (IsValidConnectionForPrinting())
                    {
                        MessageBox.Show("Connection succeeded", "Connection Test");
                    }
                }
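    A minimal sketch of the standard fix for this compiler error (not from the original thread): the nested struct ConnectionString is private by default, but the public method BuildConnectionString exposes it in its signature, and a member cannot be more visible than the types it uses. Either raise the struct's accessibility or lower the method's:

        // Option 1: make the struct as accessible as the method that uses it.
        public struct ConnectionString
        {
            public string Provider;
            public string DataSource;
            public string UserId;
            public string Password;
            public string Database;
        }

        // Option 2: keep the struct private and hide the method instead.
        // private string BuildConnectionString(ConnectionString connStr) { ... }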


  • MySQL: How do I combine a stored procedure with another function?

    - by Laxmidi
    Hi, I need some help combining a stored procedure with another function. I've got a stored procedure that pulls latitudes and longitudes from a database, and another function that checks whether a point is inside a polygon. My goal is to combine the two, so that I can check whether the latitude and longitude points pulled from the db are inside a specific area. This stored procedure pulls latitudes and longitudes from the database based on offense:

        DROP PROCEDURE IF EXISTS latlongGrabber;
        DELIMITER $$
        CREATE PROCEDURE latlongGrabber(IN offense_in VARCHAR(255))
        BEGIN
            DECLARE latitude_val VARCHAR(255);
            DECLARE longitude_val VARCHAR(255);
            DECLARE no_more_rows BOOLEAN;
            DECLARE latlongGrabber_cur CURSOR FOR
                SELECT latitude, longitude
                FROM myTable
                WHERE offense = offense_in;
            DECLARE CONTINUE HANDLER FOR NOT FOUND SET no_more_rows = TRUE;

            OPEN latlongGrabber_cur;
            the_loop: LOOP
                FETCH latlongGrabber_cur INTO latitude_val, longitude_val;
                IF no_more_rows THEN
                    CLOSE latlongGrabber_cur;
                    LEAVE the_loop;
                END IF;
                SELECT latitude_val, longitude_val;
            END LOOP the_loop;
        END $$
        DELIMITER ;

    This function checks whether a point is inside a polygon. I'd like the function to test the points produced by the procedure. I can hard-code the polygon for now. (Once I know how to combine these two, I'll use the same pattern to pull the polygons from the database.)

        DROP FUNCTION IF EXISTS myWithin;
        DELIMITER $$
        CREATE FUNCTION myWithin(p POINT, poly POLYGON) RETURNS INT(1)
        DETERMINISTIC
        BEGIN
            DECLARE n INT DEFAULT 0;
            DECLARE pX DECIMAL(9,6);
            DECLARE pY DECIMAL(9,6);
            DECLARE ls LINESTRING;
            DECLARE poly1 POINT;
            DECLARE poly1X DECIMAL(9,6);
            DECLARE poly1Y DECIMAL(9,6);
            DECLARE poly2 POINT;
            DECLARE poly2X DECIMAL(9,6);
            DECLARE poly2Y DECIMAL(9,6);
            DECLARE i INT DEFAULT 0;
            DECLARE result INT(1) DEFAULT 0;

            SET pX = X(p);
            SET pY = Y(p);
            SET ls = ExteriorRing(poly);
            SET poly2 = EndPoint(ls);
            SET poly2X = X(poly2);
            SET poly2Y = Y(poly2);
            SET n = NumPoints(ls);

            WHILE i < n DO
                SET poly1 = PointN(ls, (i + 1));
                SET poly1X = X(poly1);
                SET poly1Y = Y(poly1);
                IF ( ( ( ( poly1X <= pX ) && ( pX < poly2X ) )
                    || ( ( poly2X <= pX ) && ( pX < poly1X ) ) )
                   && ( pY > ( poly2Y - poly1Y ) * ( pX - poly1X ) / ( poly2X - poly1X ) + poly1Y ) ) THEN
                    SET result = !result;
                END IF;
                SET poly2X = poly1X;
                SET poly2Y = poly1Y;
                SET i = i + 1;
            END WHILE;

            RETURN result;
        END $$
        DELIMITER ;

    This function is called as follows:

        SET @point = PointFromText('POINT(5 5)');
        SET @polygon = PolyFromText('POLYGON((0 0, 0 10, 10 10, 10 0, 0 0))');
        SELECT myWithin(@point, @polygon) AS result;

    I've tested the stored procedure and the function and they work well. I just have to figure out how to combine them. I'd like to call the procedure with the offense parameter and have it test all of the latitudes and longitudes pulled from the database to see whether they are inside or outside of the polygon. Any advice or suggestions? Thank you. -Laxmidi
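    A minimal sketch of one way to combine them (an assumption, not from the original thread): since myWithin is an ordinary SQL function, the cursor isn't actually needed; the function can be applied to every matching row directly, with the polygon passed in as a parameter. The procedure name and the sample offense value are hypothetical, and treating latitude as the X coordinate is an assumption about the data.

        DROP PROCEDURE IF EXISTS latlongChecker;
        DELIMITER $$
        CREATE PROCEDURE latlongChecker(IN offense_in VARCHAR(255), IN poly POLYGON)
        BEGIN
            -- Build a POINT from each row's coordinates and test it in one pass.
            SELECT latitude, longitude,
                   myWithin(PointFromText(CONCAT('POINT(', latitude, ' ', longitude, ')')), poly) AS inside
            FROM myTable
            WHERE offense = offense_in;
        END $$
        DELIMITER ;

        -- Usage (polygon hard-coded for now, as in the question):
        CALL latlongChecker('robbery', PolyFromText('POLYGON((0 0, 0 10, 10 10, 10 0, 0 0))'));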


  • Tracking DB changes with Zend Framework?

    - by Chad Johnson
    I am trying to decide between the Zend Framework and Ruby on Rails for my web application. If I go with ZF, I need the following: a way to incrementally track changes to my database, as with RoR's migration feature (001_something.sql, 002_something_else.sql), and a place to put SQL for the next release of my software. At work, in our custom PHP solution, we just have release.sql, which gets run, archived, and blanked out upon release. ZF has Zend_Db_Schema_Manager, which does the same thing, but I'm not interested since it's not official, complete, or maintained. Is there an official mechanism that ZF provides for doing something similar to what I described? EDIT: I ended up going with Rails. Nothing compares.


  • From comma-separated list to individual-item result set (MySQL)

    - by Raziel
    I'm doing some data migration from a horribly designed database to a less horribly designed database. There is a many-to-many relationship that has a primary key in one table that corresponds to a comma-separated list in another:

        FK_ID | data
        -------------
        1,2   | foo
        3     | bar
        1,3,2 | blarg

    Is there a way to output each comma-separated element of the FK_ID field as its own row in the result set?

        FK_ID | data
        -------------
        1     | foo
        2     | foo
        3     | bar
        1     | blarg
        2     | blarg
        3     | blarg

    I'm thinking this would require some sort of recursive query, which I don't think MySQL has. Thanks in advance.
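    A minimal sketch of the usual MySQL workaround (not from the original thread; the source table name is hypothetical): no recursion is needed if you join against a small numbers table and peel out the n-th list element with SUBSTRING_INDEX. The numbers table just has to cover the longest list.

        CREATE TABLE numbers (n INT PRIMARY KEY);
        INSERT INTO numbers VALUES (1), (2), (3), (4), (5);

        SELECT SUBSTRING_INDEX(SUBSTRING_INDEX(t.FK_ID, ',', n.n), ',', -1) AS FK_ID,
               t.data
        FROM   old_link_table t
        JOIN   numbers n
          ON   n.n <= LENGTH(t.FK_ID) - LENGTH(REPLACE(t.FK_ID, ',', '')) + 1;

    The join condition counts the commas to decide how many rows each list expands into; the nested SUBSTRING_INDEX calls pick element n out of the list.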


  • CoreData could not fulfill a fault when adding a new attribute

    - by cagreen
    I am receiving a "CoreData could not fulfill a fault for ..." error message when trying to access a new attribute in a new data model. If I work with new data I'm ok, but when I attempt to read existing data I get the error. Do I need to handle the entity differently myself if the attribute isn't in my original data? I was under the impression that Core Data could handle this for me. My new attribute is marked as optional with a default value. I have created a new .xcdatamodel (and set it to be the current version) and updated my NSPersistentStoreCoordinator initialization to take advantage of the lightweight migration as follows: NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys: [NSNumber numberWithBool:YES], NSMigratePersistentStoresAutomaticallyOption, [NSNumber numberWithBool:YES], NSInferMappingModelAutomaticallyOption, nil]; NSError *error = nil; persistentStoreCoordinator = [[NSPersistentStoreCoordinator alloc] initWithManagedObjectModel:[self managedObjectModel]]; if (![persistentStoreCoordinator addPersistentStoreWithType:NSSQLiteStoreType configuration:nil URL:storeUrl options:options error:&error]) { NSLog(@"Unresolved error %@, %@", error, [error userInfo]); abort(); } Any help is appreciated.


  • How to create an empty label without any entries in it using the Google Mail API?

    - by Pari
    Hi, I want to create an empty label in Google Apps using the Google Mail API, with the code below:

        MailItemService mailItemService = new MailItemService(domain, "Sample Migration Application");
        mailItemService.setUserCredentials(userEmail, password);

        MailItemEntry[] entries = new MailItemEntry[1];
        entries[0] = new MailItemEntry();
        entries[0].Labels.Add(new LabelElement("Empty Label"));
        entries[0].BatchData = new GDataBatchEntryData();
        entries[0].BatchData.Id = "0";

        MailItemFeed feed = mailItemService.Batch(domain, username, entries);

    The code above doesn't raise any error, but it doesn't create the label either. If I assign more values to entries it works, but that results in a mail being created inside the label, and I want an empty label. Can anyone help me out here? Thanks.


  • Is there any reason why someone would want to create a Core Data model programmatically?

    - by mystify
    I wonder in which cases it would be good to make an NSManagedObjectModel completely programmatically, with NSEntityDescription instances and all that stuff. I'm the kind of person who prefers to code programmatically, rejecting Interface Builder. But when it comes to Core Data, I have a hard time figuring out why I should kill my time NOT using the nice Xcode data modeler tool. And since data models are stuck to a given state (except when you want to do some ugly migration operations where things probably go wrong and users get mad, really mad), I see no big sense in a data model that's made programmatically for the purpose of changing it all the time. Did I miss something?
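    For reference, a minimal sketch of what the programmatic route looks like (entity and attribute names are hypothetical); it is mostly useful when the schema genuinely isn't known at compile time, e.g. when it is derived from external data at runtime:

        // Build a model with one entity ("Note") carrying one optional string attribute.
        NSManagedObjectModel *model = [[NSManagedObjectModel alloc] init];

        NSEntityDescription *entity = [[NSEntityDescription alloc] init];
        [entity setName:@"Note"];
        [entity setManagedObjectClassName:@"Note"];

        NSAttributeDescription *title = [[NSAttributeDescription alloc] init];
        [title setName:@"title"];
        [title setAttributeType:NSStringAttributeType];
        [title setOptional:YES];

        [entity setProperties:[NSArray arrayWithObject:title]];
        [model setEntities:[NSArray arrayWithObject:entity]];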


  • Using T-SQL, how can I insert from one table on a remote server into another table on my local server?

    - by GenericTypeTea
    Given the remote server 'Production' (currently accessible via an IP) and the local database 'Development', how can I run an INSERT into 'Development' from 'Production' using T-SQL? I'm using MS SQL 2005, and the table structures are a lot different between the two databases, hence the need for me to manually write some migration scripts.

    UPDATE: T-SQL really isn't my bag. I've tried the following (not knowing what I'm doing):

        EXEC sp_addlinkedserver
            @server = N'20.0.0.1\SQLEXPRESS',
            @srvproduct = N'SQL Server';
        GO

        EXEC sp_addlinkedsrvlogin '20.0.0.1\SQLEXPRESS', 'false', 'Domain\Administrator', 'sa', 'saPassword'

        SELECT * FROM [20.0.0.1\SQLEXPRESS].[DatabaseName].[dbo].[Table]

    And I get the error:

        Login failed for user ''. The user is not associated with a trusted SQL Server connection.
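    A hedged sketch of the usual resolution (not from the original thread): the login mapping above only applies when you are connected as 'Domain\Administrator'; any other local login falls through unmapped, which produces exactly this "Login failed for user ''" error. Passing NULL as @locallogin maps all local logins, and once the linked server works, the copy itself is a four-part-name INSERT ... SELECT. Table and column names below are hypothetical.

        -- Map ALL local logins to the remote SQL login (NULL @locallogin).
        -- The remote server must also allow SQL (mixed-mode) authentication.
        EXEC sp_addlinkedsrvlogin
            @rmtsrvname  = N'20.0.0.1\SQLEXPRESS',
            @useself     = 'false',
            @locallogin  = NULL,
            @rmtuser     = 'sa',
            @rmtpassword = 'saPassword';

        -- Run from the local server: pull rows from Production into Development,
        -- reshaping columns as needed since the schemas differ.
        INSERT INTO Development.dbo.Customers (Name, Email)
        SELECT c.cust_name, c.cust_email
        FROM   [20.0.0.1\SQLEXPRESS].Production.dbo.tblCustomers AS c;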


  • Agile: User Stories for Machine Learning Project?

    - by benjismith
    I've just finished up with a prototype implementation of a supervised learning algorithm, automatically assigning categorical tags to all the items in our company database (roughly 5 million items). The results look good, and I've been given the go-ahead to plan the production implementation project.

    I've done this kind of work before, so I know the functional components of the software. I need a collection of web crawlers to fetch data. I need to extract features from the crawled documents. Those documents need to be segregated into a "training set" and a "classification set", and feature-vectors need to be extracted from each document. Those feature vectors are self-organized into clusters, and the clusters are passed through a series of rebalancing operations. Etc etc etc etc.

    So I put together a plan, with about 30 unique development/deployment tasks, each with time estimates. The first stage of development -- ignoring some advanced features that we'd like to have in the long-term, but aren't high enough priority to make it into the development schedule yet -- is slated for about two months' worth of work. (Keep in mind that I already have a working prototype, so the final implementation is significantly simpler than if the project were starting from scratch.)

    My manager said the plan looked good to him, but he asked if I could reorganize the tasks into user stories, for a few reasons: (1) our project management software is totally organized around user stories; (2) all of our scheduling is based on fitting entire user stories into sprints, rather than individually scheduling tasks; (3) other teams -- like the web developers -- have made great use of agile methodologies, and they've benefited from modelling all the software features as user stories.

    So I created a user story at the top level of the project:

        As a user of the system, I want to search for items by category, so that I can easily find the most relevant items within a huge, complex database.

    Or maybe a better top-level story for this feature would be:

        As a content editor, I want to automatically create categorical designations for the items in our database, so that customers can easily find high-value data within our huge, complex database.

    But that's not the real problem. The tricky part, for me, is figuring out how to create subordinate user stories for the rest of the machine learning architecture. Case in point... I know that the algorithm requires two major architectural subdivisions: (A) training, and (B) classification. And I know that the training portion of the architecture requires construction of a cluster-space.

    All the Agile development literature I've read seems to indicate that a user story should be the "smallest possible implementation that provides any business value". And that makes a lot of sense when designing a piece of end-user software. Start small, and then incrementally add value when users demand additional functionality. But a cluster-space, in and of itself, provides zero business value. Nor does a crawler, or a feature-extractor. There's no business value (not for the end-user, or for any of the roles internal to the company) in a partial system. A trained cluster-space is only possible with the crawler and feature extractor, and only relevant if we also develop an accompanying classifier.

    I suppose it would be possible to create user stories where the subordinate components of the system act as the users in the stories:

        As a supervised-learning cluster-space construction routine, I want to consume data from a feature extractor, so that I can exist.

    But that seems really weird. What benefit does it provide me as the developer (or our users, or any other stakeholders, for that matter) to model my user stories like that? Although the main story can be easily divided along architectural-component boundaries (crawler, trainer, classifier, etc), I can't think of any useful decomposition from a user's perspective.

    What do you guys think? How do you plan Agile user stories for sophisticated, indivisible, non-user-facing components?


  • Documents stored in SQL table

    - by vradenburg
    I have a legacy FoxPro application which stores documents in an SQL table in a field with the image datatype. FoxPro accesses the image datatype as a "General" field which can be used to store various files. I have a FoxPro control which interfaces with the General field for modifying/viewing the document that was stored. I need to migrate this control to .NET and make it easy for users to view/modify documents of various types. Does anyone have any suggestions on some ways to go about this or know of things that I'll need to consider for the migration to .NET? I'm pretty sure that I'll need to migrate the field to either a varbinary(max) or FileStream data type.


  • Modify the name of the :id column to :another_id in Rails 3

    - by figuedmundo
    Well, I googled my question but couldn't find anything, or maybe it's not the correct question. The issue is that I need to rename the database primary key :id to :another_id. In my project I need to use pgrouting, which contains several plpgsql functions, and these functions use a primary key named gid; instead of modifying the plpgsql functions, it's better to change the id name. I was thinking of doing this with a migration, because I thought that's the Rails way. Is this possible, and how can I do it? Thanks in advance, and sorry for my English.
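    A minimal sketch of the migration route (an assumption, not from the original post; the ways table name is just an example of a typical pgrouting table):

        class RenameIdToGid < ActiveRecord::Migration
          def self.up
            rename_column :ways, :id, :gid
          end

          def self.down
            rename_column :ways, :gid, :id
          end
        end

        # The model then needs to be told about the non-default primary key:
        class Way < ActiveRecord::Base
          set_primary_key :gid   # Rails 3 style; in Rails >= 3.2: self.primary_key = 'gid'
        end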


  • Fastest way to convert a file from latin1 to utf-8 in Python

    - by xsaero00
    I need the fastest way to convert files from latin1 to utf-8 in Python. The files are large, ~2 GB (I am moving DB data). So far I have:

        import codecs

        infile = codecs.open(tmpfile, 'r', encoding='latin1')
        outfile = codecs.open(tmpfile1, 'w', encoding='utf-8')
        for line in infile:
            outfile.write(line)
        infile.close()
        outfile.close()

    but it is still slow. The conversion takes one fourth of the whole migration time. I could also use a Linux command-line utility if it is faster than native Python code.
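    A hedged sketch of a usually-faster approach (not from the original thread): drop the line-by-line codec streams and convert in large binary chunks. This is safe for latin1 specifically, because every byte is a complete one-byte character, so a chunk boundary can never split a character.

        # Chunked latin1 -> utf-8 conversion; the buffer size is a tunable assumption.
        def convert(src, dst, bufsize=16 * 1024 * 1024):
            with open(src, 'rb') as fin, open(dst, 'wb') as fout:
                while True:
                    chunk = fin.read(bufsize)
                    if not chunk:
                        break
                    fout.write(chunk.decode('latin1').encode('utf-8'))

        convert(tmpfile, tmpfile1)

    The command-line equivalent, often faster still:

        iconv -f latin1 -t utf-8 infile > outfile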


  • Managed C++ prospects

    - by Srikanth
    Has anyone tried coding in Managed C++? I have a few questions:

    - How productive is the language compared to C#?
    - Are there any restrictions on the type of projects that can be written? Can we write a web application in Managed C++?
    - Is it possible to mix managed and unmanaged C++ code in one application?
    - Is MFC still valid in Managed C++?
    - Will it be the best option when considering migration of a VC++ application?


  • Getting code-first Entity Framework to build tables on SQL Azure

    - by NER1808
    I am new to code-first Entity Framework. I have tried a few things now, but can't get EF to construct any tables in my SQL Azure database. Can anyone advise on some steps and settings I should check? The membership provider has no problems creating its tables. I have added PersistSecurityInfo=True in the connection string, and the connection string is using the main user account for the server. When I implement the tables in the database using SQL, everything works fine.

    I have the following in WebRole.cs:

        // Initialize the database
        Database.SetInitializer<ReykerSCPContext>(new DbInitializer());

    My DbInitializer does not get run before I get "Invalid object name 'dbo.ClientAccountIFAs'." when I try to access the table for the first time, some time after startup:

        public class DbInitializer : DropCreateDatabaseIfModelChanges<ReykerSCPContext>
        {
            protected override void Seed(ReykerSCPContext context)
            {
                using (context)
                {
                    // Add Doc Types
                    context.DocTypes.Add(new DocType() { DocTypeId = 1, Description = "Statement" });
                    context.DocTypes.Add(new DocType() { DocTypeId = 2, Description = "Contract note" });
                    context.DocTypes.Add(new DocType() { DocTypeId = 3, Description = "Notification" });
                    context.DocTypes.Add(new DocType() { DocTypeId = 4, Description = "Invoice" });
                    context.DocTypes.Add(new DocType() { DocTypeId = 5, Description = "Document" });
                    context.DocTypes.Add(new DocType() { DocTypeId = 6, Description = "Newsletter" });
                    context.DocTypes.Add(new DocType() { DocTypeId = 7, Description = "Terms and Conditions" });

                    // Add Reyker account types
                    context.ReykerAccountTypes.Add(new ReykerAccountType() { ReykerAccountTypeID = 1, Description = "ISA" });
                    context.ReykerAccountTypes.Add(new ReykerAccountType() { ReykerAccountTypeID = 2, Description = "Trading" });
                    context.ReykerAccountTypes.Add(new ReykerAccountType() { ReykerAccountTypeID = 3, Description = "SIPP" });
                    context.ReykerAccountTypes.Add(new ReykerAccountType() { ReykerAccountTypeID = 4, Description = "CTF" });
                    context.ReykerAccountTypes.Add(new ReykerAccountType() { ReykerAccountTypeID = 5, Description = "JISA" });
                    context.ReykerAccountTypes.Add(new ReykerAccountType() { ReykerAccountTypeID = 6, Description = "Direct" });
                    context.ReykerAccountTypes.Add(new ReykerAccountType() { ReykerAccountTypeID = 7, Description = "ISA & Direct" });

                    // Save the changes
                    context.SaveChanges();
                }
            }
        }

    and my DbContext class looks like:

        public class ReykerSCPContext : DbContext
        {
            // Set the connection explicitly
            public ReykerSCPContext() : base("ReykerSCPContext") { }

            // Define tables
            public DbSet<ClientAccountIFA> ClientAccountIFAs { get; set; }
            public DbSet<Document> Documents { get; set; }
            public DbSet<DocType> DocTypes { get; set; }
            public DbSet<ReykerAccountType> ReykerAccountTypes { get; set; }

            protected override void OnModelCreating(DbModelBuilder modelBuilder)
            {
                // Runs when creating the model. Can be used to define special
                // relationships, such as many-to-many.
            }
        }

    The code used to access the table is:

        public List<ClientAccountIFA> GetAllClientAccountIFAs()
        {
            using (DataContext)
            {
                var caiCollection = from c in DataContext.ClientAccountIFAs
                                    select c;
                return caiCollection.ToList();
            }
        }

    and it errors on the last line. Help!
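    A hedged guess, not a confirmed fix: SetInitializer only registers the initializer; it runs lazily the first time that context type is used against that connection, and DropCreateDatabaseIfModelChanges does nothing when the database already exists and the model hash matches. Forcing initialization at startup surfaces any failure immediately (and the `using (context)` inside Seed disposes a context the framework still owns, which is worth removing). This assumes EF 4.3 or later, where Database.Initialize(force) is available:

        // In WebRole.OnStart() or Application_Start(), after SetInitializer:
        using (var ctx = new ReykerSCPContext())
        {
            // Runs the registered initializer right now instead of lazily.
            ctx.Database.Initialize(true);
        }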


  • pitfalls with mixing storage engines in mysql with django?

    - by Dave Orr
    I'm running a Django system over MySQL in Amazon's cloud, and the database default is InnoDB. But now I want to put a fulltext index on a couple of tables for searching, which evidently requires MyISAM. The obvious solution is to just tell MySQL to ALTER TABLE to MyISAM, but are there going to be any issues with that? One that comes to mind is that I'll have to remember to do that any time I build a new version of the database, which should theoretically be rare, but there doesn't seem to be a way to tell Django to set the storage engine at the table level. I guess I could write a migration (we use South). Any other things I might be missing? What could possibly go wrong?
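    A minimal sketch of the South migration mentioned above (an assumption; the table name is hypothetical), so new environments pick up the engine change automatically:

        from south.db import db
        from south.v2 import SchemaMigration

        class Migration(SchemaMigration):
            def forwards(self, orm):
                # Switch engine so a FULLTEXT index can be added afterwards.
                db.execute("ALTER TABLE myapp_article ENGINE = MyISAM")

            def backwards(self, orm):
                db.execute("ALTER TABLE myapp_article ENGINE = InnoDB")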


  • .NET on Windows to Mono on Ubuntu

    - by Srikanth
    I am looking at the possibility of changing my ASP.NET 2.0 application to the Mono framework. I have used the Mono Migration Analyzer tool and it does detect some P/Invoke and interop dependencies. For example:

    1) We use Excel interop, and on Linux we are looking to use StarOffice/OpenOffice instead. Is there an easy way of substituting Excel with StarOffice? (I know it sounds bizarre, but I just don't want to miss out in case anyone has done it already.)

    2) LDAP authentication: what would be the best alternative on Ubuntu (or another flavour of Linux)?

    3) Is there an Ajax framework for Mono, preferably with controls similar to Atlas?

    I hope I am not too ambitious here...


  • Migrating from MSSQL to Firebird: pros and cons

    - by user193655
    I am considering the migration for 4 reasons:

    1) SQL Server installation is a nightmare, especially for 1-user software. The software installs in 10 seconds, SQL Server in 1 hour. Firebird installation is much easier.

    2) SQL Server runs on Windows Server only.

    3) My customers all have the Express edition.

    4) I am not using any advanced features. I am now starting to use FILESTREAM, but the main reason for that is the Express edition's 4/10 GB database size limit.

    So these are all pros of moving to Firebird. What are the cons? I could also plan to support both platforms, but I fear this would backfire.


  • ActionScript 3 Context Menu Per Sprite?

    - by TheDarkIn1978
    Is it not possible to have different context menus for different sprites on the stage? I've tried adding a custom context menu to a sprite, but it's applied to the entire stage:

        mySprite.contextMenu = myMenu;

    Then, after reading the documentation, where it states:

        You can attach a ContextMenu object to a specific button, movie clip, or text field object, or to an entire movie level. You use the menu property of the Button, MovieClip, or TextField class to do this.

    I thought I had to write it like:

        mySprite.menu.contextMenu = myMenu;

    only to be greeted with a nice migration issue stating that menu is legacy code and to use contextMenu instead. ??? Um, thanks for the heads-up, documentation. This process would be entirely easier if I could extend ContextMenu, but for some reason it's marked as "final" and can't be extended... I'm sure Adobe's reasons for finalizing the ContextMenu class are as good as their reasons for including misleading documentation. Thoughts?


  • What is the best way to set default values in ActiveRecord?

    - by ryw
    What is the best way to set a default value in ActiveRecord? I see a post from Pratik that describes an ugly, complicated chunk of code: http://m.onkey.org/2007/7/24/how-to-set-default-values-in-your-model

        class Item < ActiveRecord::Base
          def initialize_with_defaults(attrs = nil, &block)
            initialize_without_defaults(attrs) do
              setter = lambda { |key, value|
                self.send("#{key.to_s}=", value) unless !attrs.nil? && attrs.keys.map(&:to_s).include?(key.to_s)
              }
              setter.call('scheduler_type', 'hotseat')
              yield self if block_given?
            end
          end
          alias_method_chain :initialize, :defaults
        end

    YUCK! I have seen the following examples googling around:

        def initialize
          super
          self.status = ACTIVE unless self.status
        end

    and

        def after_initialize
          return unless new_record?
          self.status = ACTIVE
        end

    I've also seen people put it in their migration, but I'd rather see it defined in the model code. What's the best way to set a default value for fields in an ActiveRecord model?
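    A hedged sketch of the pattern that usually wins (an assumption, not from the original post; Rails 3-style callback): set the default in after_initialize only when the attribute is nil, so values loaded from the database are never clobbered; or, for simple constant defaults, declare it in the migration.

        class Item < ActiveRecord::Base
          after_initialize :set_defaults

          private

          # ||= means a value read from the DB (or passed to .new) is kept.
          def set_defaults
            self.scheduler_type ||= 'hotseat'
          end
        end

        # Migration alternative for constant defaults:
        #   add_column :items, :scheduler_type, :string, :default => 'hotseat'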


  • How to implement this .NET client and PHP server synchronization scenario?

    - by pbean
    I have a PHP web server and a .NET mobile application. The .NET application needs data from a database, which is provided (for now) by the PHP web server. I'm fairly new to this kind of scenario, so I'm not sure what the best practices are. I ran into a couple of problems and I am not certain how to overcome them.

    For now, I have the following setup. I run a PHP SOAP server which has a couple of operations that simply retrieve data from the database. For this web service I have created a WSDL file. In Visual Studio I added a web reference to my project using the WSDL file, and it generated some classes for it. I simply call something like "MyWebService.GetItems();" and I get an array of items in my .NET application, which come straight from the database. Next, I also serialize all these retrieved objects to local (permanent) storage.

    I face a couple of challenges which I don't know how to resolve properly. The idea is for the mobile client to synchronize the data once (at the start of the day) before working, then use the local storage throughout the day, and synchronize it back at the end of the day.

    At the moment all data is downloaded through SOAP, not just a subset (only what is needed). How would I know which new information should be sent to the client? Only the server knows what is new, but only the client knows for sure which data it already has.

    I seem to be doing double work now. The data transferred with SOAP basically already consists of serialized objects. But at the moment I first retrieve all objects through SOAP and the .NET framework automatically deserializes them; then I serialize all the data again myself. It would be more efficient to simply download it to storage, and then deserialize it.

    The mobile device does not have a lot of memory. In my test data this is not really a problem, but with potentially tens of thousands of records I can imagine this will become one. Currently when I call the SOAP method, it loads all data into memory (as an array), but as described above perhaps it would be better to store it directly, and only retrieve from storage the objects that are needed. But how would I store this? An array would serialize to one big XML (or binary) file from which I cannot choose which objects to load. Would I make (possibly tens of thousands of) separate files? Also, at the end of the day, when I want to send the changes back to the server, how would I know which objects to send... since they're not in memory?

    I hope my troubles are clear and I hope you are able to help me figure out how to implement this the best way. :) Some things are already fixed (like using .NET on the mobile device, using PHP, and having a MySQL database server) but other things can most certainly be changed (like using SOAP).


  • Facebook RunWithFriends redirects to homepage

    - by Clynamen
    I'm following the canvas app example from Facebook (runwithfriends), but it seems that I'm unable to log in successfully, and thus unable to use the example app, because I'm always redirected to the main page. I think I set all the parameters correctly, except for the OAuth migration, which should be enabled; there is no such option in the Facebook developer panel. (A screenshot of the main page as it appears after login accompanied the original question.) I have the same problem explained in "Canvas demo (runwithfriends) always redirects to main page under my own GAE", with the exception that even the application on Facebook's GAE redirects me to the main page even when I successfully log in. Has anyone experienced a similar problem?


  • rails: best way to store comments in mysql

    - by ciss
    Hello. Okay, I have two models: Post and Comment. As you'd expect, comments has a :post_id column. My models:

        class Comment < ActiveRecord::Base
          belongs_to :post
        end

        class Post < ActiveRecord::Base
          has_many :comments
        end

    So, this is a pretty simple association, but I have some problems with ordering comments. Originally, when I created my comments migration file, I just added a :position column, indicating the comment's position within the post. But now I'm wondering whether there is a better way, and I can't make my choice:

    1) use t.column :created_at, :datetime, :default => Time.now

    2) or use timestamps? This is undiscovered territory for me; please tell me about your experience.
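    A minimal sketch of the timestamps route (an assumption, not from the original post): let Rails fill created_at automatically rather than giving the column a fixed default (a :default => Time.now in the migration is evaluated once, at migration time, and frozen), then order the association by it.

        class CreateComments < ActiveRecord::Migration
          def self.up
            create_table :comments do |t|
              t.references :post
              t.text :body
              t.timestamps   # created_at / updated_at, maintained by Rails
            end
          end

          def self.down
            drop_table :comments
          end
        end

        class Post < ActiveRecord::Base
          has_many :comments, :order => "created_at ASC"   # Rails 3 syntax
        end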


  • What next-generation low-level language is the best bet for migrating the code base?

    - by e-satis
    Let's say you have a company running a lot of C/C++, and you want to start planning migration to new technologies so you don't end up like COBOL companies 15 years ago. For now, C/C++ runs more than fine and there are plenty of devs on the market for it. But you want to start thinking about it now because, given the huge running code base and the data sensitivity, you feel it could take 5-10 years to move to the next step without overloading the budget and the dev teams. You have heard about D, which is starting to be quite mature, and Go, which promises to be quite popular. What would be your choice, and why?


  • What kind of work benefits from OpenCL?

    - by Daniel
    Hey all. First of all:

    - I am well aware that OpenCL does not magically make everything faster
    - I am well aware that OpenCL has limitations

    So now to my question. I am used to doing various scientific calculations in my programming, and some of the things I work with are pretty intense in terms of complexity and number of calculations. So I was wondering if I could speed things up by using OpenCL. What I would love to hear from you all are answers to some of the following [bonus for links]:

    - What kind of calculations/algorithms/general problems are suitable for OpenCL?
    - What are the general guidelines for determining whether some particular code would benefit from migration to OpenCL?

    Regards

