Search Results

Search found 32538 results on 1302 pages for 'restore database'.


  • What are the correct steps to capture data with uploadify?

    - by Curtis
    Does anyone have a demo of using Uploadify with additional fields and saving to a database? The site's examples don't explain the process in depth and I am not familiar with jQuery; I can't get my head around how to integrate the two. I have the demo working, and I have my app, which uses traditional PHP/HTML forms, working. I need to do basic form collection with Uploadify in the mix: collect the data, validate it with feedback, then write it to the database and post back a confirmation. How do I feed information back to the Uploadify form from the upload.php file if there is a problem with the upload? How do I access my form data in the upload.php file? How do I redirect the upload page to a confirmation page after a successful upload?
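
    A rough sketch of the usual pattern, assuming the Uploadify 2.x API (scriptData, onComplete); the extra field names here are purely illustrative. The extra form values travel in scriptData, upload.php reads them from $_POST (the file itself arrives in $_FILES), and whatever upload.php echoes comes back as the response argument of onComplete, where you can show an error or redirect:

        $('#file_upload').uploadify({
            'uploader'   : 'uploadify.swf',
            'script'     : 'upload.php',
            // send your own form fields along with the file
            'scriptData' : {
                'title'    : $('#title').val(),
                'category' : $('#category').val()
            },
            'onComplete' : function(event, ID, fileObj, response, data) {
                if (response == 'OK') {
                    window.location = 'confirm.php';  // redirect after a successful upload
                } else {
                    $('#errors').text(response);      // feed the problem back to the form
                }
            }
        });

    On the server side, upload.php would validate $_POST['title'] etc., insert the row, and echo either 'OK' or an error message as the response.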


  • Trouble using South with Django and Heroku

    - by Dan
    I had an existing Django project that I've just added South to.

    I ran syncdb locally.
    I ran manage.py schemamigration app_name locally.
    I ran manage.py migrate app_name --fake locally.
    I committed and pushed to the Heroku master.
    I ran syncdb on Heroku.
    I ran manage.py schemamigration app_name on Heroku.
    I ran manage.py migrate app_name on Heroku.

    I then receive this:

        $ heroku run python notecard/manage.py migrate notecards
        Running python notecard/manage.py migrate notecards attached to terminal... up, run.1
        Running migrations for notecards:
         - Migrating forwards to 0005_initial.
         > notecards:0003_initial
        Traceback (most recent call last):
          File "notecard/manage.py", line 14, in <module>
            execute_manager(settings)
          File "/app/lib/python2.7/site-packages/django/core/management/__init__.py", line 438, in execute_manager
            utility.execute()
          File "/app/lib/python2.7/site-packages/django/core/management/__init__.py", line 379, in execute
            self.fetch_command(subcommand).run_from_argv(self.argv)
          File "/app/lib/python2.7/site-packages/django/core/management/base.py", line 191, in run_from_argv
            self.execute(*args, **options.__dict__)
          File "/app/lib/python2.7/site-packages/django/core/management/base.py", line 220, in execute
            output = self.handle(*args, **options)
          File "/app/lib/python2.7/site-packages/south/management/commands/migrate.py", line 105, in handle
            ignore_ghosts = ignore_ghosts,
          File "/app/lib/python2.7/site-packages/south/migration/__init__.py", line 191, in migrate_app
            success = migrator.migrate_many(target, workplan, database)
          File "/app/lib/python2.7/site-packages/south/migration/migrators.py", line 221, in migrate_many
            result = migrator.__class__.migrate_many(migrator, target, migrations, database)
          File "/app/lib/python2.7/site-packages/south/migration/migrators.py", line 292, in migrate_many
            result = self.migrate(migration, database)
          File "/app/lib/python2.7/site-packages/south/migration/migrators.py", line 125, in migrate
            result = self.run(migration)
          File "/app/lib/python2.7/site-packages/south/migration/migrators.py", line 99, in run
            return self.run_migration(migration)
          File "/app/lib/python2.7/site-packages/south/migration/migrators.py", line 81, in run_migration
            migration_function()
          File "/app/lib/python2.7/site-packages/south/migration/migrators.py", line 57, in <lambda>
            return (lambda: direction(orm))
          File "/app/notecard/notecards/migrations/0003_initial.py", line 15, in forwards
            ('user', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['auth.User'])),
          File "/app/lib/python2.7/site-packages/south/db/generic.py", line 226, in create_table
            ', '.join([col for col in columns if col]),
          File "/app/lib/python2.7/site-packages/south/db/generic.py", line 150, in execute
            cursor.execute(sql, params)
          File "/app/lib/python2.7/site-packages/django/db/backends/util.py", line 34, in execute
            return self.cursor.execute(sql, params)
          File "/app/lib/python2.7/site-packages/django/db/backends/postgresql_psycopg2/base.py", line 44, in execute
            return self.cursor.execute(query, args)
        django.db.utils.DatabaseError: relation "notecards_semester" already exists

    I have 3 models: Section, Semester, and Notecards. I've added one field to the Notecards model and I cannot get it added on Heroku. Thank you.
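
    For what it's worth, the usual way out of this particular error - syncdb on Heroku has already created the tables that the early migrations try to create - is to fake those migrations on Heroku and then run only the new one. A sketch, with the migration number inferred from the traceback:

        heroku run python notecard/manage.py migrate notecards 0004 --fake
        heroku run python notecard/manage.py migrate notecards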


  • fluent nhibernate not caching queries in asp.net mvc

    - by AWC
    I'm using Fluent NHibernate with ASP.NET MVC and I'm not seeing anything being cached when making queries against the database. I'm not currently using an L2 cache implementation. Should I see queries being cached without configuring an out-of-process L2 cache? Mappings are like this:

        Table("ApplicationCategories");
        Not.LazyLoad();
        Cache.ReadWrite().IncludeAll();
        Id(x => x.Id);
        Map(x => x.Name).Not.Nullable();
        Map(x => x.Description).Nullable();

    Example criteria:

        return session
            .CreateCriteria<ApplicationCategory>()
            .Add(Restrictions.Eq("Name", _name))
            .SetCacheable(true);

    Every time I make a request for an application category by name it hits the database. Is this expected behaviour?
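
    For the final question: yes, hitting the database every time is expected here. SetCacheable(true) only marks the query as cacheable; without a second-level cache provider configured there is nothing to store the results in. A minimal sketch of wiring one up in Fluent NHibernate (SysCacheProvider, from NHibernate.Caches.SysCache, is just one possible provider):

        var sessionFactory = Fluently.Configure()
            .Database(MsSqlConfiguration.MsSql2005
                .ConnectionString(connectionString)
                // register an L2 provider and enable the query cache
                .Cache(c => c.ProviderClass<SysCacheProvider>()
                             .UseSecondLevelCache()
                             .UseQueryCache()))
            .Mappings(m => m.FluentMappings.AddFromAssemblyOf<ApplicationCategory>())
            .BuildSessionFactory();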


  • How do I list all tables in all databases in SQL Server in a single result set?

    - by msorens
    I am looking for T-SQL code to list all tables in all databases in SQL Server (at least in SS2005 and SS2008; would be nice to also apply to SS2000). The catch, however, is that I would like a single result set. This precludes the otherwise excellent answer from Pinal Dave: sp_msforeachdb 'select "?" AS db, * from [?].sys.tables' The above stored proc generates one result set per database, which is fine if you are in an IDE like SSMS that can display multiple result sets. However, I want a single result set because I want a query that is essentially a "find" tool: if I add a clause like WHERE tablename like '%accounts' then it would tell me where to find my BillAccounts, ClientAccounts, and VendorAccounts tables regardless of which database they reside in.
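
    One common workaround, sketched below: keep sp_msforeachdb, but have each per-database query insert into a temp table, then select from it once for a single result set (sys.tables exists on SS2005/SS2008; SS2000 would need sysobjects instead):

        CREATE TABLE #AllTables (DbName sysname, SchemaName sysname, TableName sysname);

        EXEC sp_msforeachdb '
            INSERT INTO #AllTables
            SELECT ''?'', s.name, t.name
            FROM [?].sys.tables t
            JOIN [?].sys.schemas s ON t.schema_id = s.schema_id';

        SELECT * FROM #AllTables WHERE TableName LIKE '%accounts' ORDER BY DbName, TableName;
        DROP TABLE #AllTables;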


  • MySQL - Calculating fields on the fly vs storing calculated data

    - by Christian Varga
    Hi everyone, I apologise if this has been asked before, but I can't seem to find an answer to a question I have about calculating on the fly vs. storing calculated fields in a database. I've read a few articles suggesting it is preferable to calculate when you can, but I would just like to know if that still applies to the following two examples.

    Example 1: say you are storing data relating to a car. You store the fuel tank size in litres, and how many litres it uses per 100 km. You also want to know how many km it can travel, which can be calculated from the tank size and economy. I see two ways of doing this: when a car is added or updated, calculate the range in km and store it as a static field in the database; or, every time a car is accessed, calculate the range on the fly. Because the car's economy/tank size doesn't change (although it could be edited), the range is a pretty static value. I don't see why we would calculate it every single time the car is accessed. Wouldn't this waste CPU time, as opposed to simply storing it in a separate field and recalculating only when a car is added or updated?

    My next example, which is almost an entirely different question (but on the same topic), relates to counting children. Let's say we have an app which has categories and items. We have a view where we display all the categories and a count of the items inside each category. Again, I'm wondering what's better: performing a MySQL query to count all the items in each category every single time the page is accessed, or storing the count in a field in the categories table and updating it when an item is added or deleted?

    I know it is redundant to store anything that can be calculated, but I worry that calculating fields or counting records might be slow compared to storing the data in a field. If it's not, then please let me know; I just want to learn when to use either method. On a small scale I guess it wouldn't matter either way, but for apps like Facebook, would they really count the number of friends you have every time someone views your profile, or would they just store it as a field?

    I'd appreciate any responses to both of these scenarios, and any resource that might explain the benefits of calculating vs. storing. Thanks in advance, Christian
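
    To make the second example concrete, the two options look something like this in SQL (table and column names are illustrative):

        -- counting on the fly, run on every page view:
        SELECT c.id, c.name, COUNT(i.id) AS item_count
        FROM categories c
        LEFT JOIN items i ON i.category_id = c.id
        GROUP BY c.id, c.name;

        -- stored counter, touched only when an item is added or deleted:
        UPDATE categories SET item_count = item_count + 1 WHERE id = 42;

    The first stays correct by construction; the second trades a denormalised column (which can drift if an update path is missed) for cheaper reads.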


  • SQL error C# - Parameter already defined

    - by jakesankey
    Hey there. I have a C# application that parses txt files and imports the data from them into a SQL db. I was using SQLite and am now working on porting it to SQL Server. It was working fine with SQLite, but now with SQL Server I am getting an error while it processes the files. It adds the first row of data to the db and then says "parameter @PartNumber has already been declared. Variable names must be unique within a batch or stored procedure". Here is my whole code and SQL table layout; the error comes at the last insertCommand.ExecuteNonQuery() instance at the end of the code.

    SQL TABLE:

        CREATE TABLE Import
        (
            RowId int PRIMARY KEY IDENTITY,
            PartNumber text,
            CMMNumber text,
            Date text,
            FeatType text,
            FeatName text,
            Value text,
            Actual text,
            Nominal text,
            Dev text,
            TolMin text,
            TolPlus text,
            OutOfTol text,
            FileName text
        );

    CODE:

        using System;
        using System.Data;
        using System.Data.SQLite;
        using System.IO;
        using System.Text.RegularExpressions;
        using System.Threading;
        using System.Collections.Generic;
        using System.Linq;
        using System.Data.SqlClient;

        namespace JohnDeereCMMDataParser
        {
            internal class Program
            {
                public static List<string> GetImportedFileList()
                {
                    List<string> ImportedFiles = new List<string>();
                    using (SqlConnection connect = new SqlConnection(@"Server=FRXSQLDEV;Database=RX_CMMData;Integrated Security=YES"))
                    {
                        connect.Open();
                        using (SqlCommand fmd = connect.CreateCommand())
                        {
                            fmd.CommandText = @"IF (EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA = 'RX_CMMData' AND TABLE_NAME = 'Import')) BEGIN SELECT DISTINCT FileName FROM Import; END";
                            fmd.CommandType = CommandType.Text;
                            SqlDataReader r = fmd.ExecuteReader();
                            while (r.Read())
                            {
                                ImportedFiles.Add(Convert.ToString(r["FileName"]));
                            }
                        }
                    }
                    return ImportedFiles;
                }

                private static void Main(string[] args)
                {
                    Console.Title = "John Deere CMM Data Parser";
                    Console.WriteLine("Preparing CMM Data Parser... done");
                    Console.WriteLine("Scanning for new CMM data... done");
                    Console.ForegroundColor = ConsoleColor.Gray;
                    using (SqlConnection con = new SqlConnection(@"Server=FRXSQLDEV;Database=RX_CMMData;Integrated Security=YES"))
                    {
                        con.Open();
                        using (SqlCommand insertCommand = con.CreateCommand())
                        {
                            SqlCommand cmdd = con.CreateCommand();
                            string[] files = Directory.GetFiles(@"C:\Documents and Settings\js91162\Desktop\", "R303717*.txt*", SearchOption.AllDirectories);
                            List<string> ImportedFiles = GetImportedFileList();
                            foreach (string file in files.Except(ImportedFiles))
                            {
                                string FileNameExt1 = Path.GetFileName(file);
                                cmdd.CommandText = @"IF (EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA = 'RX_CMMData' AND TABLE_NAME = 'Import')) BEGIN SELECT COUNT(*) FROM Import WHERE FileName = @FileExt; END";
                                cmdd.Parameters.Add(new SqlParameter("@FileExt", FileNameExt1));
                                int count = Convert.ToInt32(cmdd.ExecuteScalar());
                                con.Close();
                                con.Open();
                                if (count == 0)
                                {
                                    Console.WriteLine("Parsing CMM data for SQL database... Please wait.");
                                    insertCommand.CommandText = @"INSERT INTO Import (FeatType, FeatName, Value, Actual, Nominal, Dev, TolMin, TolPlus, OutOfTol, PartNumber, CMMNumber, Date, FileName) VALUES (@FeatType, @FeatName, @Value, @Actual, @Nominal, @Dev, @TolMin, @TolPlus, @OutOfTol, @PartNumber, @CMMNumber, @Date, @FileName);";
                                    insertCommand.Parameters.Add(new SqlParameter("@FeatType", DbType.Decimal));
                                    insertCommand.Parameters.Add(new SqlParameter("@FeatName", DbType.Decimal));
                                    insertCommand.Parameters.Add(new SqlParameter("@Value", DbType.Decimal));
                                    insertCommand.Parameters.Add(new SqlParameter("@Actual", DbType.Decimal));
                                    insertCommand.Parameters.Add(new SqlParameter("@Nominal", DbType.Decimal));
                                    insertCommand.Parameters.Add(new SqlParameter("@Dev", DbType.Decimal));
                                    insertCommand.Parameters.Add(new SqlParameter("@TolMin", DbType.Decimal));
                                    insertCommand.Parameters.Add(new SqlParameter("@TolPlus", DbType.Decimal));
                                    insertCommand.Parameters.Add(new SqlParameter("@OutOfTol", DbType.Decimal));
                                    string FileNameExt = Path.GetFullPath(file);
                                    string RNumber = Path.GetFileNameWithoutExtension(file);
                                    string RNumberE = RNumber.Split('_')[0];
                                    string RNumberD = RNumber.Split('_')[1];
                                    string RNumberDate = RNumber.Split('_')[2];
                                    DateTime dateTime = DateTime.ParseExact(RNumberDate, "yyyyMMdd", Thread.CurrentThread.CurrentCulture);
                                    string cmmDate = dateTime.ToString("dd-MMM-yyyy");
                                    string[] lines = File.ReadAllLines(file);
                                    bool parse = false;
                                    foreach (string tmpLine in lines)
                                    {
                                        string line = tmpLine.Trim();
                                        if (!parse && line.StartsWith("Feat. Type,"))
                                        {
                                            parse = true;
                                            continue;
                                        }
                                        if (!parse || string.IsNullOrEmpty(line))
                                        {
                                            continue;
                                        }
                                        Console.WriteLine(tmpLine);
                                        foreach (SqlParameter parameter in insertCommand.Parameters)
                                        {
                                            parameter.Value = null;
                                        }
                                        string[] values = line.Split(new[] { ',' });
                                        for (int i = 0; i < values.Length - 1; i++)
                                        {
                                            SqlParameter param = insertCommand.Parameters[i];
                                            if (param.DbType == DbType.Decimal)
                                            {
                                                decimal value;
                                                param.Value = decimal.TryParse(values[i], out value) ? value : 0;
                                            }
                                            else
                                            {
                                                param.Value = values[i];
                                            }
                                        }
                                        insertCommand.Parameters.Add(new SqlParameter("@PartNumber", RNumberE));
                                        insertCommand.Parameters.Add(new SqlParameter("@CMMNumber", RNumberD));
                                        insertCommand.Parameters.Add(new SqlParameter("@Date", cmmDate));
                                        insertCommand.Parameters.Add(new SqlParameter("@FileName", FileNameExt));
                                        // insertCommand.ExecuteNonQuery();
                                    }
                                }
                            }
                            Console.WriteLine("CMM data successfully imported to SQL database...");
                        }
                        con.Close();
                    }
                }
            }
        }


  • Script.PostDeployment.sql not executing in CI build VS 2010

    - by Suirtimed
    I have a database project in VS 2010 (SQL 2008). The local Deploy Solution action works and executes all of the SQL in the Script.PostDeployment.sql file. When I check changes in, the Build Definition for the continuous integration server executes. The database is deployed into the CI environment, but the PostDeployment script doesn't get executed. I wasn't able to find anything specific to this particular scenario. I also expect I'll need to provide additional information unless this is a trivial problem that I missed somewhere. Additional Information: The build is executing vsdbcmd.exe to deploy. The deployment manifest references the PostDeployment.sql file and it's present in the path with the rest of the files. Here is a reference to a thread on social.msdn.microsoft.com regarding this problem.


  • SKU size in Magento

    - by latvian
    Hi, how can I allow the SKU to be longer than 34 characters (for simple products) for all products? When I add a new (simple) product and enter more than 34 characters, Magento cuts it to 34 after saving. In the database, the 'sku' attributes (for quote item, order item, invoice item and shipment item) from the eav_attribute table hold varchar(255). For catalog_product_entity the attribute is varchar(64). In either case, that is more than 34 characters, so I should be able to change the SKU to 64 characters without making any change to the database, correct? How do I do that? I have a good understanding of the frontend Magento code, but not the admin side. Can you suggest a good tutorial on making changes on the admin side that could help me figure this question out myself? Thank you, Margots


  • building objects from xml file at runtime and initializing, in one pass?

    - by KaluSingh Gabbar
    I have to parse an XML file and build an object representation based on it; once I have all this data I create entries in various databases for these objects. I then have to do a second pass over the file for the values, since in the first pass all I can do is build the assets in the various databases, and only in the second pass do I get the values for all the data and put them in the database. I have a feeling this can be done in a single pass, but I just want to see what your opinions are. As I am a student who has just started professional work, I'd appreciate input from experienced people. If you have ideas or have done similar work, please shed some light on the topic so that I can think over the possibilities and get a prototype going based on your suggestions. Thanks a lot for your precious time; I honestly appreciate it.


  • MS SQL Server 2000 tables

    - by klork
    We currently have an MS SQL Server 2000 database with one table containing data for multiple users. The data is keyed by memberid, which is an integer field, and the table has a clustered index on memberid. The table is now about 200 million rows, and indexing and maintenance are becoming issues. We are debating splitting the table into one table per user. This would imply that we would end up with a very large number of tables, potentially up to 2,147,483,647, considering just positive values. My questions:

    1) Does anyone have any experience with an MS SQL Server (2000/2005) installation with millions of tables?
    2) What are the implications of this architecture with regards to maintenance and access using Query Analyzer, Enterprise Manager etc.?
    3) What are the implications of having such a large number of indexes in a database instance?

    All comments are appreciated. Thanks


  • JTA Transaction: What happens if an exception happens but rollback is not called on the transaction?

    - by kellyfj
    We have some third-party code wherein they do the following:

    1) Create a user transaction, e.g.:

        txn = (UserTransaction)ctx.lookup("UserTransaction");
        txn.begin();

    2) Do some work persisting to the database (via JPA) to a MySQL database.
    3) txn.commit()

    They have exception blocks, but NONE of them call txn.rollback(). Good coding practice says they NEED to call rollback if an exception occurs, but my question is: if the txn is not committed and an exception occurs, what is the negative effect of them NOT calling rollback?
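
    For context, without the rollback the transaction simply stays open - locks held and its connection pinned - until the transaction manager's timeout eventually rolls it back. A hedged sketch of the conventional pattern (error handling trimmed for brevity):

        import javax.transaction.Status;
        import javax.transaction.UserTransaction;

        UserTransaction txn = (UserTransaction) ctx.lookup("UserTransaction");
        try {
            txn.begin();
            // ... JPA work persisting to the database ...
            txn.commit();
        } catch (Exception e) {
            // roll back anything still in flight rather than leaving it
            // to the transaction manager's timeout
            if (txn.getStatus() != Status.STATUS_NO_TRANSACTION) {
                txn.rollback();
            }
            throw e;
        }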


  • Method of documentation for SQL Stored Procedures

    - by Chapso
    I work in a location where a single person is responsible for creating and maintaining all stored procedures for SQL servers, and is the conduit between software developers and the database. There are a lot of stored procedures in place, and with a database diagram it is simple enough 90% of the time to figure out what the stored procedure needs for arguments/returns as output. For the other 10% of the time, however, it would be helpful to have a reference. Since the DBA is a busy guy (aren't we all), it would be good to have some program which documents the stored procedures to a file so that the developers can see it without being able to access the SPs themselves. The question is, does anyone know of a good program to accomplish this? Basically what we need is something that gives the name of the SP, the argument list and the output, both with datatypes and a nullable flag.
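
    In the meantime, much of that reference can be generated straight from the catalog views (SQL Server 2005+; the engine does not track nullability for parameters, so only direction is shown - a sketch):

        SELECT p.name                        AS procedure_name,
               prm.name                      AS parameter_name,
               TYPE_NAME(prm.user_type_id)   AS data_type,
               prm.max_length,
               prm.is_output
        FROM sys.procedures p
        LEFT JOIN sys.parameters prm ON p.object_id = prm.object_id
        ORDER BY p.name, prm.parameter_id;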


  • Is a many-to-many relationship with extra fields the right tool for my job?

    - by whichhand
    Previously I had a go at asking a more specific version of this question, but had trouble articulating what my question was. On reflection that made me doubt if my chosen solution was correct for the problem, so this time I will explain the problem and ask if a) I am on the right track and b) there is a way around my current brick wall.

    I am currently building a web interface to enable an existing database to be interrogated by (a small number of) users. Sticking with the analogy from the docs, I have models that look something like this:

        class Musician(models.Model):
            first_name = models.CharField(max_length=50)
            last_name = models.CharField(max_length=50)
            dob = models.DateField()

        class Album(models.Model):
            artist = models.ForeignKey(Musician)
            name = models.CharField(max_length=100)

        class Instrument(models.Model):
            artist = models.ForeignKey(Musician)
            name = models.CharField(max_length=100)

    I have one central table (Musician) and several tables of associated data that are related by either ForeignKey or OneToOne fields. Users interact with the database by creating filtering criteria to select a subset of Musicians based on the data on the main or related tables. Likewise, the users can then select what piece of data is used to rank the results that are presented to them. The results are then viewed initially as a two-dimensional table with a single row per Musician, with selected data fields (or aggregates) in each column. To give you some idea of scale, the database has ~5,000 Musicians with around 20 fields of related data.

    Up to here is fine and I have a working implementation. However, it is important that a given user can upload their own annotation data sets (more than one) and then filter and order on these in the same way they can with the existing data. The way I tried to do this was to add the models:

        class UserDataSets(models.Model):
            user = models.ForeignKey(User)
            name = models.CharField(max_length=100)
            description = models.CharField(max_length=64)
            results = models.ManyToManyField(Musician, through='UserData')

        class UserData(models.Model):
            artist = models.ForeignKey(Musician)
            dataset = models.ForeignKey(UserDataSets)
            score = models.IntegerField()

            class Meta:
                unique_together = (("artist", "dataset"),)

    I have a simple upload mechanism enabling users to upload a data set file that consists of a 1:1 relationship between a Musician and their "score". Within a given user dataset each artist will be unique, but different datasets are independent of each other and will often contain entries for the same musician. This worked fine for displaying the data; starting from a given artist I can do something like this:

        artist = Musician.objects.get(pk=1)
        dataset = UserDataSets.objects.get(pk=5)
        print artist.userdata_set.get(dataset=dataset.pk)

    However, this approach fell over when I came to implement the filtering and ordering of a query set of musicians based on the data contained in a single user data set. For example, I can easily order the query set based on all of the data in the UserData table like this:

        artists = Musician.objects.all().order_by('userdata__score')

    But that does not help me order by the results of a given single user dataset. Likewise, I need to be able to filter the query set based on the "scores" from different user data sets (e.g. find all musicians with a score of 5 in dataset1 and < 2 in dataset2). Is there a way of doing this, or am I going about the whole thing wrong?
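
    For what it's worth, one way this is usually approached with a through model like the one above (a sketch, untested against this exact schema): filter on the through table's dataset and order or filter on its score, remembering that in Django each separate .filter() call on a multi-valued relation gets its own join, while conditions inside a single .filter() must hold on the same UserData row:

        # order musicians by their score in one particular dataset
        artists = (Musician.objects
                   .filter(userdata__dataset=dataset1)
                   .order_by('userdata__score'))

        # score of 5 in dataset1 AND < 2 in dataset2: chain two filters
        # so each pair of conditions applies to the same UserData row
        artists = (Musician.objects
                   .filter(userdata__dataset=dataset1, userdata__score=5)
                   .filter(userdata__dataset=dataset2, userdata__score__lt=2))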


  • How to configure Remote BLOB Storage (RBS) with Microsoft Dynamics CRM 4.0?

    - by jk
    We have a working site for Dynamics CRM 4.0, and in it we are storing images in the database. The database is now growing very fast and the server is dying. I want to enable Remote BLOB Storage (RBS) with Dynamics CRM 4.0. I tried to install RBS for testing, but everywhere it is documented as configured with SharePoint 2010, not with Dynamics CRM. Does anybody know how to install and configure it with Dynamics CRM 4.0? Does RBS work with the Standard Edition of SQL Server 2008? I followed the path below to install, but it is for SharePoint: http://technet.microsoft.com/en-us/library/ee663474.aspx Any help is appreciated. Thanks


  • Apache basic auth, mod_authn_dbd and password salt

    - by Cristian Vrabie
    Using Apache mod_auth_basic and mod_authn_dbd you can authenticate a user by looking up that user's password in the database. I see that working if the password is stored in the clear, but what if we use a random string as a salt (also stored in the database) and then store the hash of the concatenation? mod_authn_dbd requires you to specify a query that selects the password, not one that decides whether the user is authenticated or not:

        AuthDBDUserRealmQuery "SELECT password FROM authn WHERE user = %s AND realm = %s"

    So you cannot use that query to concatenate the user-provided password with the salt and then compare against the stored hash. Is there a way to make this work?
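
    One detail worth noting: for basic auth the lookup directive is AuthDBDUserPWQuery (AuthDBDUserRealmQuery is for digest auth), and Apache itself compares the supplied password against the stored value, understanding htpasswd-style formats (crypt, $apr1$ MD5, {SHA}) in which the salt is embedded in the stored string. So instead of a separate salt column, storing an htpasswd-compatible hash lets the stock module do a salted comparison - a sketch, with an illustrative table layout:

        AuthDBDUserPWQuery "SELECT password_hash FROM authn WHERE user = %s"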


  • How to persist non-trivial fields in Play Framework

    - by AlexR
    I am trying to persist complex objects using Ebean in Play Framework (2.03). In particular, I've created a class that contains a field of type weka.classifiers.Classifier (Weka is a popular machine learning library - see http://weka.sourceforge.net/doc/weka/classifiers/Classifier.html). Classifier implements Serializable, so I hoped I could get away with something like:

        @Entity
        @Table(name = "classifiers")
        public class ClassifierData extends Model {
            @Id
            public Long id;
            public Classifier classifier;
        }

    However, the Evolutions script suggests the following database structure:

        create table classifiers (
            id bigint auto_increment not null,
            constraint pk_classifiers primary key (id)
        );

    In other words, it ignores the field of type Classifier. (The database is MySQL, if it makes any difference.) What should I do to store complex Serializable objects using Ebean/Evolutions/Play Framework?
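
    One workaround sometimes used when an ORM cannot map an arbitrary Serializable field - offered here as an assumption, not a confirmed Ebean recipe: persist the object as a @Lob byte[] column and (de)serialize it manually at the boundary (java.io imports elided):

        @Entity
        @Table(name = "classifiers")
        public class ClassifierData extends Model {
            @Id
            public Long id;

            @Lob
            public byte[] classifierBytes;  // the serialized weka Classifier

            public void setClassifier(Classifier c) throws IOException {
                ByteArrayOutputStream bos = new ByteArrayOutputStream();
                new ObjectOutputStream(bos).writeObject(c);
                classifierBytes = bos.toByteArray();
            }

            public Classifier getClassifier() throws IOException, ClassNotFoundException {
                ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(classifierBytes));
                return (Classifier) in.readObject();
            }
        }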


  • Best approach to limit users to a single node of a given content type in Drupal

    - by Chaulky
    I need to limit users to a single node of a given content type, so a user can only create one node of TypeX. I've come up with two approaches; which would be better to use?

    1) Edit the node/add/typex menu item to check the database to see if the user has already created a node of TypeX, as well as whether they have permission to create it.

    2) When a user creates a node of TypeX, assign them to a different role that doesn't have permission to create that type of node.

    In approach 1, I have to make an additional database call on every page load to see if they should be able to see the "Create TypeX" link (node/add/typex). But in approach 2, I have to maintain two separate roles. Which approach would you use?
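
    A sketch of approach 1 in Drupal 6-style code (module and callback names are illustrative, and the API details should be checked against your Drupal version):

        // Point the node/add/typex item at a custom access callback.
        function mymodule_menu_alter(&$items) {
          $items['node/add/typex']['access callback'] = 'mymodule_typex_access';
        }

        // Allow access only if the user has the permission AND owns no TypeX node yet.
        function mymodule_typex_access() {
          global $user;
          $count = db_result(db_query(
            "SELECT COUNT(nid) FROM {node} WHERE type = 'typex' AND uid = %d",
            $user->uid));
          return $count == 0 && node_access('create', 'typex');
        }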


  • System.Data.DataException thrown when launching application after install

    - by jwarzech
    I currently have a (Visual Studio 2008) installer that has a 'Primary output...' custom action under 'Install' with 'InstallerClass' set to false. During the install I get a System.Data.DataException. When I run the debugger I am getting an exception being thrown from a bit of database access code that runs when the application starts. The database access is using Windows Authentication and works correctly when the application is started manually. Is this something to do with permission settings not allowing db access when launched from the installer? (I'm out of ideas)


  • MySQL driver for Rails in Windows 7 x64

    - by Darth
    I've got a problem connecting to a MySQL database on my freshly installed Windows 7 machine. I'm getting this error when I try to migrate my database:

        !!! The bundled mysql.rb driver has been removed from Rails 2.2. Please install the mysql gem and try again: gem install mysql.
        rake aborted!
        193: %1 is not a valid Win32 application - C:/Ruby/lib/ruby/gems/1.8/gems/mysql-2.8.1-x86-mswin32/lib/1.8/mysql_api.so

    I currently have installed:

        ruby 1.8.6 (2008-08-11 patchlevel 287) [i386-mswin32]
        mysql version 5.0.86 for Win64
        gem 1.3.1
        mysql-2.8.1-x86-mswin32


  • Longish list of allowed referrers

    - by Tim
    I want to allow hotlinking only from a list of referrers (paying customers, probably a few hundred). I am on Apache 1.3 and I do not have access to the configuration (only .htaccess). What is the fastest way to implement this? My thoughts so far:

    PHP with a database and readfile()
    (SSI with) Perl and a database
    the list implemented as symlinks named after the allowed referrer, then a RewriteCond using HTTP_REFERER
    everything in .htaccess, lots of RewriteConds
    everything in .htaccess, lots of SetEnvIfs

    Any better (faster) ways to do this? Thanks!
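
    Of those options, the SetEnvIf route is probably the simplest to keep entirely inside .htaccess; a minimal sketch (customer domains are illustrative, mod_setenvif assumed available on the Apache 1.3 host):

        SetEnvIfNoCase Referer "^https?://(www\.)?customer1\.example/" allowed_ref=1
        SetEnvIfNoCase Referer "^https?://(www\.)?customer2\.example/" allowed_ref=1

        <FilesMatch "\.(gif|jpe?g|png)$">
            Order Deny,Allow
            Deny from all
            Allow from env=allowed_ref
        </FilesMatch>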


  • Remote Postgresql - extremely slow

    - by Muffinbubble
    Hi, I have set up PostgreSQL on a VPS I own - the software that accesses the database is a program called PokerTracker. PokerTracker logs all your hands and statistics whilst playing online poker. I wanted this accessible from several different computers, so I decided to install it on my VPS, and after a few hiccups I managed to get it connecting without errors. However, the performance is dreadful. I have done tons of research on 'remote postgresql slow' etc. and have yet to find an answer, so I am hoping someone is able to help.

    Things to note: the query I am trying to execute is very small. Whilst connecting locally on the VPS, the query runs instantly; while running it remotely, it takes about 1 minute and 30 seconds. The VPS is on a 100 Mbps connection and the computer I'm connecting from is on an 8 Mb line. The network communication between the two is almost instant: I am able to connect remotely with no lag whatsoever, and I am hosting several websites on it running MSSQL, where all the queries run instantly whether connected remotely or locally, so it seems specific to PostgreSQL.

    I'm running their newest version of the software and the newest compatible version of PostgreSQL. The database is new, containing hardly any data, and I've run vacuum/analyze etc., all to no avail; I see no improvements. I don't understand how MSSQL can query almost instantly yet PostgreSQL struggles so much.

    I am able to telnet to port 5432 on the VPS IP with no problems, and as I say the query does execute, it just takes an extremely long time. What I do notice on the router while the query is running is that hardly any bandwidth is being used - but then again I wouldn't expect it to be for a simple query, so I am not sure if this is the issue. I've tried connecting remotely on 3 different networks now (including different routers) but the problem remains. Connecting from another machine on the LAN is instant. I have also edited the postgresql.conf file to allow for more memory/buffers etc., but I don't think this is the problem - what I am asking it to do is very simple and shouldn't be intensive at all.

    Thanks, Ricky


  • A question about making a C# class persistent during a file load

    - by Adam
    Apologies for the unhelpful title; it's the best I could think of for the moment.

    Basically, I've written a singleton class that loads files into a database. These files are typically large and take hours to process. What I am looking for is a way to have this class running and be able to call methods from within it, even if its calling class is shut down.

    The singleton class is simple. It starts a thread that loads the file into the database, while having methods to report on the current status. In a nutshell it's a little like this:

        public sealed class BulkFileLoader
        {
            static BulkFileLoader instance = null;
            int currentCount = 0;

            BulkFileLoader() { }

            public static BulkFileLoader Instance
            {
                // Instantiate the instance class if necessary, and return it
            }

            public void Go()
            {
                // kick off the 'ProcessFile' thread
            }

            public int GetCurrentCount()
            {
                return currentCount;
            }

            private void ProcessFile()
            {
                while (more rows in the import file)  // pseudocode
                {
                    // insert the row into the database
                    currentCount++;
                }
            }
        }

    The idea is that you can get an instance of BulkFileLoader and execute it, which will process a file load, while at any time you can get realtime updates on the number of rows it has done so far using the GetCurrentCount() method. This works fine, except that the calling class needs to stay open the whole time for the processing to continue. As soon as I stop the calling class, the BulkFileLoader instance is removed and it stops processing the file. What I am after is a solution where it will continue to run independently, regardless of what happens to the calling class.

    I then tried another approach. I created a simple console application that kicks off the BulkFileLoader, and then wrapped it in a process. This fixes one problem: now when I kick off the process, the file will continue to load even if I close the class that called the process. However, now I cannot get updates on the current count, since if I try to get the instance of BulkFileLoader (which, as mentioned before, is a singleton), it creates a new instance rather than returning the instance in the executing process. It would appear that singletons don't extend into the scope of other processes running on the machine.

    In the end, I want to be able to kick off the BulkFileLoader and at any time find out how many rows it has processed, even if I close the application I used to start it. Can anyone see a solution to my problem?


  • Getting child elements that are related to a parent in same table

    - by Madawar
    I have the following database schema:

        class posts(Base):
            __tablename__ = 'xposts'
            id = Column(Integer, primary_key=True)

        class Comments(Base):
            __tablename__ = 'comments'
            id = Column(Integer, primary_key=True)
            comment_parent_id = Column(Integer, unique=True)
            # comment_id fetches the comment of a comment, i.e. the comment_parent_id
            comment_id = Column(Integer, default=None)
            comment_text = Column(String(200))

    The values in the database are:

        1  12    NULL  Hello First comment
        2  NULL  12    First Sub comment

    I want to fetch all comments and sub-comments of a post using SQLAlchemy, and I have this so far:

        qry = session.query(Comments).filter(Comments.comment_parent_id != None)
        print qry.count()

    Is there a way I can fetch all the sub-comments of a comment in one query? I have tried an outer join on the same table (comments), and it seemed stupid and it failed.
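
    For what it's worth, a common way to model this is a self-referential relationship() over a real foreign key instead of raw integer columns, which makes "sub-comments of a comment" a plain attribute access - a sketch with illustrative names, not a drop-in replacement for the schema above (Base as in the original snippet):

        from sqlalchemy import Column, ForeignKey, Integer, String
        from sqlalchemy.orm import backref, relationship

        class Comment(Base):
            __tablename__ = 'comments'
            id = Column(Integer, primary_key=True)
            post_id = Column(Integer, ForeignKey('xposts.id'))
            parent_id = Column(Integer, ForeignKey('comments.id'), nullable=True)
            comment_text = Column(String(200))

            # children of this comment; .parent walks the other way
            replies = relationship('Comment',
                                   backref=backref('parent', remote_side=[id]))

        # top-level comments of a post; sub-comments hang off .replies
        top_level = (session.query(Comment)
                     .filter(Comment.post_id == post_id,
                             Comment.parent_id == None)
                     .all())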

