Search Results

Search found 42428 results on 1698 pages for 'database query'.


  • More FP-correct way to create an update SQL query

    - by James Black
    I am working on accessing a database using F#, and my initial attempt at creating a function to build the update query is flawed:

        let BuildUserUpdateQuery (oldUser:UserType) (newUser:UserType) =
            let buf = new System.Text.StringBuilder("UPDATE users SET ");
            if (oldUser.FirstName.Equals(newUser.FirstName) = false) then
                buf.Append("SET first_name='").Append(newUser.FirstName).Append("'") |> ignore
            if (oldUser.LastName.Equals(newUser.LastName) = false) then
                buf.Append("SET last_name='").Append(newUser.LastName).Append("'") |> ignore
            if (oldUser.UserName.Equals(newUser.UserName) = false) then
                buf.Append("SET username='").Append(newUser.UserName).Append("'") |> ignore
            buf.Append(" WHERE id=").Append(newUser.Id).ToString()

    This doesn't properly put a comma between update parts after the first, which is needed to produce, for example:

        UPDATE users SET first_name='Firstname', last_name='lastname' WHERE id=...

    I could put in a mutable variable to keep track of when the first part of the SET clause is appended, but that seems wrong. I could create a list of tuples, where each tuple is (oldtext, newtext, columnname), so that I could then loop through the list and build up the query. But it seems that I should instead be passing a StringBuilder to a recursive function, returning back a boolean which is then passed as a parameter to the recursive function. Does this seem to be the best approach, or is there a better one?
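
    A filter-then-join shape removes the need to track whether a comma is needed at all. A minimal sketch of that idea (written in Python purely for brevity - the same filter-then-join shape maps to F#'s List.choose and String.concat; the field names here are assumptions, and parameter placeholders replace the string concatenation to sidestep quoting and injection problems):

        # Sketch only: collect changed columns, then join with ", ".
        def build_user_update_query(old_user, new_user):
            candidates = [
                ("first_name", old_user["first_name"], new_user["first_name"]),
                ("last_name",  old_user["last_name"],  new_user["last_name"]),
                ("username",   old_user["username"],   new_user["username"]),
            ]
            # Keep only the columns whose values actually changed.
            changed = [(col, new) for col, old, new in candidates if old != new]
            if not changed:
                return None  # nothing to update
            set_clause = ", ".join("%s=?" % col for col, _ in changed)
            params = [new for _, new in changed] + [new_user["id"]]
            return "UPDATE users SET %s WHERE id=?" % set_clause, params

    Because join only puts separators between elements, the first-element special case disappears instead of being tracked.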

    Read the article

  • Normalization of database for timesheet tool and ensure data integrity

    - by fireeyedboy
    I'm creating a timesheet application. I have the following entities (amongst others): Company, Employee (an employee associated with a company), and Client (a client associated with a company). So far I have the following (abbreviated) database setup:

        Company
          - id
          - name

        Employee
          - id
          - companyId (FK to Company.id)
          - name

        Client
          - id
          - companyId (FK to Company.id)
          - name

    Now, I want an employee to be associated with a client, but only if that client is associated with the company the employee works for. How would you guarantee this data integrity on a database level? Or should I just depend on the application to guarantee it? I thought about creating a many-to-many table like this:

        EmployeeClient
          - employeeId (FK to Employee.id)
          - companyId \  (combined FK to Client.companyId, Client.id)
          - clientId  /

    Thus, when I insert a client for an employee along with the employee's company id, the database should prevent this when the client is not associated with the employee's company id. Does this make sense? Because this still doesn't guarantee the employee is associated with the company. How do you deal with these things?

    UPDATE

    The scenario is as follows: A company has multiple employees; employees will only be linked to one company. A company also has multiple clients; clients will only be linked to one company. (Company is a sandbox, so to speak.) An employee of a company can be linked to a client of its company, but only if the client is part of the company's clientele. In other words: the application will allow a company to create/add employees and create/add clients (hence the companyId FK in the Employee and Client tables). Next, the company will be allowed to assign certain clients to certain of its employees (the EmployeeClient table). Imagine an employee working on projects for a few clients for which s/he can write billable hours, but the employee must not be allowed to write billable hours for clients they are not assigned to by their employer (the company). So employees will not automatically have access to all their company's clients, but only to those that the company has selected for them. Hopefully this sheds some more light on the matter.
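
    One standard way to get the database to enforce both halves of this is the combined-FK idea above, applied to both sides: give Employee and Client a UNIQUE (id, companyId) pair, and have EmployeeClient carry a single companyId column shared by both composite foreign keys. The Employee-side composite FK covers exactly the part the question worries about. A runnable sketch (SQLite stands in here; the same DDL shape works in most engines):

        # Sketch only: because EmployeeClient has ONE companyId column referenced
        # by BOTH composite FKs, a link can only exist when employee and client
        # belong to the same company.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("PRAGMA foreign_keys = ON")
        conn.executescript("""
            CREATE TABLE Company  (id INTEGER PRIMARY KEY, name TEXT);
            CREATE TABLE Employee (id INTEGER PRIMARY KEY,
                                   companyId INTEGER NOT NULL REFERENCES Company(id),
                                   name TEXT,
                                   UNIQUE (id, companyId));
            CREATE TABLE Client   (id INTEGER PRIMARY KEY,
                                   companyId INTEGER NOT NULL REFERENCES Company(id),
                                   name TEXT,
                                   UNIQUE (id, companyId));
            CREATE TABLE EmployeeClient (
                employeeId INTEGER NOT NULL,
                clientId   INTEGER NOT NULL,
                companyId  INTEGER NOT NULL,   -- shared by both FKs below
                PRIMARY KEY (employeeId, clientId),
                FOREIGN KEY (employeeId, companyId) REFERENCES Employee(id, companyId),
                FOREIGN KEY (clientId,   companyId) REFERENCES Client(id, companyId));
            INSERT INTO Company  VALUES (1, 'Acme'), (2, 'Globex');
            INSERT INTO Employee VALUES (1, 1, 'Alice');
            INSERT INTO Client   VALUES (1, 2, 'Bob Inc.');   -- different company!
        """)
        try:
            conn.execute("INSERT INTO EmployeeClient VALUES (1, 1, 1)")
        except sqlite3.IntegrityError as e:
            print("rejected as expected:", e)  # client 1 is not in company 1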

    Read the article

  • how to listen for changes in Contact Database

    - by hap497
    Hi, I am trying to listen for any change in the contact database. So I create my content observer, which is a child class of ContentObserver:

        private class MyContentObserver extends ContentObserver {
            public MyContentObserver() {
                super(null);
            }

            @Override
            public void onChange(boolean selfChange) {
                super.onChange(selfChange);
                System.out.println("Calling onChange");
            }
        }

        MyContentObserver contentObserver = new MyContentObserver();
        context.getContentResolver().registerContentObserver(People.CONTENT_URI, true, contentObserver);

    But when I use 'EditContactActivity' to change the contact database, my onChange function does not get called.

    Read the article

  • SQL Table Setup Advice

    - by Ozzy
    Hi all. Basically I have an XML feed from an offsite server. The feed has one parameter, ?value=n, where n can only be between 1 and 30. Whatever value I pick, there will always be 4000 rows returned from the XML file. My script will call this XML file 30 times, once for each value, each day. So that's 120,000 rows. I will be doing quite complicated queries on these rows, but the main thing is that I will always filter by value first, so SELECT * ... WHERE value = 'N' will always be used. Now, is it better to have one table where all 120k rows are stored, or 30 tables where 4k rows are stored each? EDIT: the SQL database in question will be MySQL.
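
    For what it's worth, a single table with an index on the filter column is the conventional design at this scale: the index lets a WHERE value = N query touch only that value's ~4k rows, so one 120k-row table behaves like one of the 30 small tables. A minimal sketch (table and column names are made up; sqlite3 stands in for MySQL so the snippet runs as-is):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE feed_rows (
                value   INTEGER NOT NULL,   -- the feed parameter, 1..30
                payload TEXT
            );
            -- The index means "WHERE value = ?" reads only that value's rows.
            CREATE INDEX idx_feed_value ON feed_rows (value);
        """)
        conn.execute("INSERT INTO feed_rows VALUES (7, 'example row')")
        rows = conn.execute("SELECT * FROM feed_rows WHERE value = ?", (7,)).fetchall()
        print(rows)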

    Read the article

  • Count query with 3 columns in SQL

    - by asher baig
    I have one database, Library, with a table named MEDIEN, which has multiple columns named Fname, Mname, Lname and ISBN. I want to count the database records with an ISBN and without an ISBN. I have executed the following commands:

        Select COUNT(ISBN) as Verf1 FROM library.MEDIEN where verf1 = isbn;
        Select COUNT(ISBN) as Verf2 FROM library.MEDIEN where verf2 = isbn;
        Select COUNT(ISBN) as Verf3 FROM library.MEDIEN where verf3 = isbn;
        Select COUNT(ISBN) as Ntverf1 FROM library.MEDIEN where verf1 != isbn;
        Select COUNT(ISBN) as Ntverf2 FROM library.MEDIEN where verf2 != isbn;
        Select COUNT(ISBN) as Ntverf3 FROM library.MEDIEN where verf3 != isbn;

    I am not sure whether I executed the correct commands, because some ISBN records have only Fname,Mname or Fname,Lname or Mname,Lname or Fname,Lname,Mname respectively. Please kindly help me solve this query.
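
    If the goal is simply the counts of rows with and without an ISBN, one pass with conditional aggregation avoids the six separate statements. A hedged sketch (the table layout is guessed from the question; sqlite3 stands in for MySQL so it runs self-contained, but the SELECT works the same way against MySQL):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE MEDIEN (Fname TEXT, Mname TEXT, Lname TEXT, ISBN TEXT);
            INSERT INTO MEDIEN VALUES ('A', 'B', 'C', '123'), ('D', NULL, 'E', NULL);
        """)
        with_isbn, without_isbn = conn.execute("""
            SELECT SUM(CASE WHEN ISBN IS NOT NULL THEN 1 ELSE 0 END),
                   SUM(CASE WHEN ISBN IS NULL     THEN 1 ELSE 0 END)
            FROM MEDIEN
        """).fetchone()
        print(with_isbn, without_isbn)   # -> 1 1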

    Read the article

  • Database Logon Error in crystal report

    - by yasar arafat
    I know this problem has been answered many times, but it is really above my head. I don't have any experience working with Crystal Reports or ASP.NET, but for various reasons I have to work with them. After trying very hard, I am able to design a Crystal Report connected to a MySQL database, and in the Crystal Report preview I am able to see the report and the data (screenshot omitted). But when I try to run it on the server, it says database logon failed (screenshot omitted). Now I really don't know what to do. I am pasting my aspx and aspx.vb code here - please reply in English, not computer language!

    The aspx code:

        <%@ Page Title="" Language="VB" MasterPageFile="~/Master/Site.master" AutoEventWireup="false"
            CodeFile="ManHourCostExpenditure.aspx.vb" Inherits="ManHourCostExpenditure" %>
        <%@ Register assembly="CrystalDecisions.Web, Version=13.0.2000.0, Culture=neutral, PublicKeyToken=692fbea5521e1304"
            namespace="CrystalDecisions.Web" tagprefix="CR" %>

        <asp:Content ID="Content1" ContentPlaceHolderID="head" Runat="Server">
        </asp:Content>
        <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" Runat="Server">
            <CR:CrystalReportViewer ID="CrystalReportViewer1" runat="server" AutoDataBind="True"
                EnableDatabaseLogonPrompt="False" EnableParameterPrompt="False"
                GroupTreeImagesFolderUrl="" Height="1202px" ReportSourceID="CrystalReportSource1"
                ToolbarImagesFolderUrl="" ToolPanelView="None" ToolPanelWidth="200px" Width="903px"
                BorderColor="#660033" BorderStyle="Solid" BorderWidth="1px" HasCrystalLogo="False" />
            <CR:CrystalReportSource ID="CrystalReportSource1" runat="server">
                <Report FileName="CrDiscProjManHoursActVsEst.rpt">
                </Report>
            </CR:CrystalReportSource>
        </asp:Content>

    The aspx.vb code:

        Imports CrystalDecisions.CrystalReports.Engine
        Imports CrystalDecisions.Shared
        Imports CrystalDecisions.Enterprise
        Imports CrystalDecisions.ReportSource
        Imports CrystalDecisions.Web
        Imports CrystalDecisions.Windows.Forms
        Imports MySql
        Imports MySql.Data
        Imports MySql.Data.MySqlClient
        Imports System.Collections
        Imports System.Collections.Generic
        Imports System.Text
        Imports CrystalDecisions
        Imports CrystalDecisions.CrystalReports
        Imports CrystalReportsReportDefModelLib

        Partial Class ManHourCostExpenditure
            Inherits System.Web.UI.Page

            Dim CrRpt As New CrystalDecisions.CrystalReports.Engine.ReportDocument

            Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
                Dim StrConn As String = "SERVER=localhost;DATABASE=fluor;UID=root; PWD=root;"
                Dim Conn As New MySqlConnection(StrConn)
                Conn.Open()
                Dim CrRpt As New CrystalDecisions.CrystalReports.Engine.ReportDocument
                'Session.Add("REPORT_KEY", CrRpt)
                CrRpt.Load(Server.MapPath("CrManHourCostExpenditure.rpt"))
                CrRpt.PrintOptions.PaperOrientation = PaperOrientation.Landscape
                'If IsPostBack Then
                '    CrRpt = CType(Session.Item("REPORT_KEY"), Group)
                '    CrystalReportViewer1.ReportSource = CrRpt
                'End If
                CrystalReportViewer1.ReportSource = CrRpt
                'CrystalReportViewer1.DataBind()
                CrystalReportViewer1.RefreshReport()
                Conn.Close()
                'CrRpt.Close()
                'CrRpt.Dispose()
            End Sub

            Protected Sub Page_Unload(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Unload
                'CrystalReportViewer1.ReportSource.Close()
                CrRpt.Close()
                CrRpt.Dispose()
            End Sub

            'Protected Sub CrystalReportViewer1_Unload(ByVal sender As Object, ByVal e As System.EventArgs) Handles CrystalReportViewer1.Unload
            '    CrRpt.Close()
            '    CrRpt.Dispose()
            'End Sub
        End Class

    Please help me. Thanks in advance.... yasir

    Read the article

  • How should I store a Java Enum in JavaDB?

    - by Jonas
    How should I store a Java Enum in JavaDB? Should I try to map the enums to SMALLINT and keep the values in source code only? The embedded database is only used by a single application. Or should I just store the values as DECIMAL? Neither of these solutions feels good/robust to me. Are there any better alternatives? Here is my enum:

        import java.math.BigDecimal;

        public enum Vat {
            NORMAL(new BigDecimal("0.25")),
            FOOD(new BigDecimal("0.12")),
            BOOKS(new BigDecimal("0.06")),
            NONE(new BigDecimal("0.00"));

            private final BigDecimal value;

            Vat(BigDecimal val) {
                value = val;
            }

            public BigDecimal getValue() {
                return value;
            }
        }

    I have read other similar questions on this topic, but the problem or solution doesn't match mine: "Enum storage in Database field", "Best method to store Enum in Database", and "Best way to store enum values in database - String or Int".

    Read the article

  • error opening sql 2005 database in VS 2008

    - by Ken
    When I created my database using SQL Server 2005, I was able to connect and view it in Visual Studio 2008. I then detached the database onto my flash drive and brought it home to work on in VS 2008 - that worked. Finally, when I detached it from home and brought it back to work, it would not open. It says that this version of SQL Server is not compatible with another version. I forget the exact wording of the error, as it was lengthy. Any help you guys can provide would be very helpful! Thank you in advance!

    Read the article

  • Automatically deploying changes to a web application

    - by Adrian Pritchard
    What's the best way to automatically deploy changes to a database-driven web application? Is there a single product out there that can modify the following...

        - Website (DLLs, aspx, css files etc.)
        - Database schema (add tables, columns, etc.)
        - Database data (modify table contents)
        - Reporting Services reports

    I've seen various separate products, but not one that does everything.

    Read the article

  • Searching Database by Arbitrary Date in PHP

    - by jverdi
    Suppose you have a messaging system built in PHP with a MySQL database backend, and you would like to support searching for messages using arbitrary date strings. The database includes a messages table with a 'date_created' field represented as a datetime. The arbitrary date strings accepted from the user should mirror those accepted by strtotime. For the following examples, searches are performed on March 21, 2010:

        - "January 26, 2009" would return all messages between 2009-01-26 00:00:00 and 2009-01-27 00:00:00
        - "March 8" would return all messages between 2010-03-08 00:00:00 and 2010-03-09 00:00:00
        - "Last week" would return all messages between 2010-03-14 00:00:00 and 2010-03-21 18:25:00
        - "2008" would return all messages between 2008-01-01 00:00:00 and 2008-12-31 00:00:00

    I began working with date_parse, but the number of variables grew quickly. I wonder if I am re-inventing the wheel. Does anyone have a suggestion that would work either as a general solution or one that would capture most of the possible input strings?
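
    One general trick, shown here in Python with dateutil standing in for PHP's date_parse, is to parse the string twice with two different default dates: any field that differs between the two results was not supplied by the user, which gives you the granularity and therefore the [start, end) window. Relative phrases like "last week" still need separate handling, just as they do with strtotime. A hedged sketch, not a drop-in for the PHP app:

        from datetime import datetime, timedelta
        from dateutil import parser
        from dateutil.relativedelta import relativedelta

        def date_range(text, now=None):
            now = now or datetime.now()
            d1 = parser.parse(text, default=datetime(2001, 1, 1))
            d2 = parser.parse(text, default=datetime(2002, 2, 2))
            year = d1.year if d1.year == d2.year else now.year  # "March 8" has no year
            if d1.month != d2.month:                  # year only, e.g. "2008"
                return datetime(year, 1, 1), datetime(year + 1, 1, 1)
            if d1.day != d2.day:                      # month but no day: "March 2010"
                start = datetime(year, d1.month, 1)
                return start, start + relativedelta(months=1)
            start = datetime(year, d1.month, d1.day)  # full date: "January 26, 2009"
            return start, start + timedelta(days=1)

        print(date_range("January 26, 2009"))  # (2009-01-26 00:00, 2009-01-27 00:00)

    The returned pair maps directly to a WHERE date_created >= start AND date_created < end query.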

    Read the article

  • My system administrator set up 2 databases that sync. Master-Master. However, these two databases a

    - by Alex
    DB1 and DB2. I made changes to DB1, and they do not seem to be on DB2. When I do "SHOW SLAVE STATUS\G" on DB2, there seems to be an error:

        mysql> show slave status\G
        *************************** 1. row ***************************
        Slave_IO_State: Waiting for master to send event
        Master_Host:
        Master_User:
        Master_Port:
        Connect_Retry: 60
        Master_Log_File: mysql-bin.0005496
        Read_Master_Log_Pos: 5445649315
        Relay_Log_File: mysqld-relay-bin.0041705
        Relay_Log_Pos: 1624302119
        Relay_Master_Log_File: mysql-bin.0004461
        Slave_IO_Running: Yes
        Slave_SQL_Running: No
        Replicate_Do_DB:
        Replicate_Ignore_DB:
        Replicate_Do_Table:
        Replicate_Ignore_Table:
        Replicate_Wild_Do_Table:
        Replicate_Wild_Ignore_Table:
        Last_Errno: 1062
        Last_Error: Error 'Duplicate entry '4779' for key 1' on query. Default database: 'falc'.
            Query: 'INSERT INTO `log` (`anon_id`, `created_at`, `query`, `episode_url`, `detail_id`, `ip`)
            VALUES ('fdzn1d45kMavF4qbyePv', '2009-11-19 04:19:13', 'amazon', '', '', '130.126.40.57')'
        Skip_Counter: 0
        Exec_Master_Log_Pos: 162301982
        Relay_Log_Space: 136505187184
        Until_Condition: None
        Until_Log_File:
        Until_Log_Pos: 0
        Master_SSL_Allowed: No
        Master_SSL_CA_File:
        Master_SSL_CA_Path:
        Master_SSL_Cert:
        Master_SSL_Cipher:
        Master_SSL_Key:
        Seconds_Behind_Master: NULL
        1 row in set (0.00 sec)

    Then I did SHOW TABLES, and it seems like DB2 is lacking a table that I created on DB1... that means that for some reason DB2 stopped syncing with DB1. How can I simply allow them to be in full synchronization again? All I want is DB2 to be exactly the same as DB1!

    Read the article

  • Running an existing LINQ query against a dynamic object (DataTable like)

    - by TomTom
    Hello, I am working on a generic OData provider to go against a custom data provider that we have here. This is fully dynamic, in that I query the data provider for the tables it knows. I have a basic storage structure in place so far, based on the OData sample code. My problem is: OData supports queries and expects me to hand in an IQueryable implementation. On the lower side, I don't have any query support. Not a joke - the provider returns tables, and the WHERE clause is not supported. Performance is not an issue here - the tables are small. It is OK to sort them in the OData provider. My main problem is this: I submit a SQL statement to get out the data of a table. The result is some sort of ADO.NET data reader. I need to expose an IQueryable implementation for this data to potentially allow later filtering. Any idea how best to approach that? .NET 3.5 only (no 4.0 planned for some time). I was seriously thinking of creating dynamic DTO classes for every table (emitting bytecode) so I can use standard LINQ. Right now I am using a dictionary per entry (not too efficient), but I see no real way to filter / sort based on them.

    Read the article

  • How to migrate large amounts of data from old database to new

    - by adam0101
    I need to move a huge amount of data from a couple of tables in an old database to a couple of different tables in a new database. The databases are SQL Server 2005 and are on the same box and SQL Server instance. I was told that if I try to do it all in one shot, the transaction log will fill up. Is there a way to disable the transaction log per table? If not, what is a good method for doing this? Would a cursor do it? This is just a one-time conversion.
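
    Logging cannot be turned off for a single table, but copying in batches with a commit per batch keeps the active portion of the log small, since space behind committed transactions can be reused (under SIMPLE recovery) or freed by log backups. A rough sketch of the batching idea (shown in Python with pyodbc purely for illustration; the table, column and connection names are invented):

        import pyodbc

        BATCH = 10_000
        conn = pyodbc.connect(
            "DRIVER={SQL Server};SERVER=.;DATABASE=newdb;Trusted_Connection=yes")
        cur = conn.cursor()
        while True:
            # Move the next batch of rows not yet present in the target table.
            cur.execute("""
                INSERT INTO new_table (id, col1, col2)
                SELECT TOP (?) o.id, o.col1, o.col2
                FROM olddb.dbo.old_table AS o
                WHERE NOT EXISTS (SELECT 1 FROM new_table n WHERE n.id = o.id)
            """, BATCH)
            conn.commit()              # each commit lets log space be reused
            if cur.rowcount < BATCH:
                break

    The same loop can just as well live in a T-SQL WHILE block; the point is many small transactions instead of one giant one.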

    Read the article

  • Multiprogramming in Django, writing to the Database

    - by Marcus Whybrow
    Introduction

    I have the following code, which checks to see if a similar model exists in the database, and creates the new model only if it does not:

        class BookProfile():
            # ...
            def save(self, *args, **kwargs):
                uniqueConstraint = {'book_instance': self.book_instance, 'collection': self.collection}
                # Test for other objects with identical values
                profiles = BookProfile.objects.filter(Q(**uniqueConstraint) & ~Q(pk=self.pk))
                # If none are found create the object, else fail.
                if len(profiles) == 0:
                    super(BookProfile, self).save(*args, **kwargs)
                else:
                    raise ValidationError('A Book Profile for that book instance in that collection already exists')

    I first build my constraints, then search for a model with those values which I am enforcing must be unique, Q(**uniqueConstraint). In addition, I ensure that if the save method is updating and not inserting, we do not find this object when looking for other similar objects, ~Q(pk=self.pk). I should mention that I am implementing soft delete (with a modified objects manager which only shows non-deleted objects), which is why I must check for myself rather than relying on unique_together errors.

    Problem

    Right, that's the introduction out of the way. My problem is that when multiple identical objects are saved in quick (or as near as simultaneous) succession, sometimes both get added, even though the first being added should prevent the second. I have tested the code in the shell and it succeeds every time I run it. Thus my assumption is that if, say, we have two objects being added, Object A and Object B: Object A runs its check upon save() being called. Then the process saving Object B gets some time on the processor. Object B runs that same test, but Object A has not yet been added, so Object B is added to the database. Then Object A regains control of the processor, and has already run its test; even though identical Object B is in the database, it adds Object A regardless.

    My Thoughts

    The reason I fear multiprogramming could be involved is that each of Object A and Object B is being added through an API save view, so a request to the view is made for each save - it is not a single request with multiple sequential saves on objects. It might be the case that Apache is creating a process for each request, and thus causing the problems I think I am seeing. As you would expect, the problem only occurs sometimes, which is characteristic of multiprogramming or multiprocessing errors. If this is the case, is there a way to make the test and set parts of the save() method a critical section, so that a process switch cannot happen between the test and the set?
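
    One way to get that critical section is to let the database serialize the savers: wrap the test and the save in a transaction and make the existence check a locking read. A hedged sketch (assumes a Django version with transaction.atomic and select_for_update, and an engine with row locks such as InnoDB; on some engines two concurrent inserts of a brand-new combination can still race past each other, so a unique index remains the airtight fix):

        from django.core.exceptions import ValidationError
        from django.db import transaction

        def save(self, *args, **kwargs):
            unique = {'book_instance': self.book_instance, 'collection': self.collection}
            with transaction.atomic():
                clash = (BookProfile.objects
                         .select_for_update()      # concurrent savers block here
                         .filter(**unique)
                         .exclude(pk=self.pk))
                if len(clash) > 0:                 # evaluates the locking query
                    raise ValidationError('A Book Profile for that book instance '
                                          'in that collection already exists')
                super(BookProfile, self).save(*args, **kwargs)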

    Read the article

  • Updating a specific key/value inside of an array field with MongoDB

    - by Jesta
    As a preface, I've been working with MongoDB for about a week now, so this may turn out to be a pretty simple answer. I have data already stored in my collection; we will call this collection content, as it contains articles, news, etc. Each of these articles contains another array called authors, which has all of the author's information (address, phone, title, etc). The goal: I am trying to create a query that will update the author's address on every article that the specific author exists in, and only the specified author block (not others that exist within the array) - sort of a "global update" for a specific author that affects his/her information on every piece of content that exists. Here is an example of what the content with the authors looks like:

        {
            "_id" : ObjectId("4c1a5a948ead0e4d09010000"),
            "authors" : [
                {
                    "user_id" : null,
                    "slug" : "joe-somebody",
                    "display_name" : "Joe Somebody",
                    "display_title" : "Contributing Writer",
                    "display_company_name" : null,
                    "email" : null,
                    "phone" : null,
                    "fax" : null,
                    "address" : null,
                    "address2" : null,
                    "city" : null,
                    "state" : null,
                    "zip" : null,
                    "country" : null,
                    "image" : null,
                    "url" : null,
                    "blurb" : null
                },
                {
                    "user_id" : null,
                    "slug" : "jane-somebody",
                    "display_name" : "Jane Somebody",
                    "display_title" : "Editor",
                    "display_company_name" : null,
                    "email" : null,
                    "phone" : null,
                    "fax" : null,
                    "address" : null,
                    "address2" : null,
                    "city" : null,
                    "state" : null,
                    "zip" : null,
                    "country" : null,
                    "image" : null,
                    "url" : null,
                    "blurb" : null
                }
            ],
            "tags" : [ "tag1", "tag2", "tag3" ],
            "title" : "Title of the Article"
        }

    I can find every article that this author has created by running the following command:

        db.content.find({authors: {$elemMatch: {slug: 'joe-somebody'}}});

    So theoretically I should be able to update the author record for the slug joe-somebody, but not jane-somebody (the 2nd author); I am just unsure exactly how you reach in and update every record for that author. I thought I was on the right track, and here's what I've tried:

        db.content.update(
            {authors: {$elemMatch: {slug: 'joe-somebody'}}},
            {$set: {address: '1234 Avenue Rd.'}},
            false,
            true
        );

    I just believe there's something I am missing in the $set statement to specify the correct author and point inside the correct array. Any ideas?

    UPDATE: I've also tried this now:

        db.content.update(
            {authors: {$elemMatch: {slug: 'joe-somebody'}}},
            {$set: {'authors.$.address': '1234 Avenue Rd.'}},
            false,
            true
        );
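
    The second attempt is essentially the standard recipe: match the element in the query, then address it with the positional $ operator in $set. The same thing from Python with pymongo, as a sketch (connection and database names are assumptions; note that $ targets only the first matching array element per document, which is fine as long as a slug appears once per article):

        from pymongo import MongoClient

        content = MongoClient().mydb.content           # db name is an assumption
        result = content.update_many(
            {"authors.slug": "joe-somebody"},          # docs containing this author
            {"$set": {"authors.$.address": "1234 Avenue Rd."}},  # that element only
        )
        print(result.modified_count, "articles updated")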

    Read the article

  • Inserting nil values into an SQLite database?

    - by Chris
    I am working on an iPhone app where I am pulling data from an XML file and inserting it into a SQLite database located in the app. I am able to successfully do this process, but it appears that if I try to sqlite3_bind_text with an NSString that has a value of nil, the app quickly dies. This is an example of code that fails (modified for this example):

        // idvar = 1
        // valuevar = nil
        const char *sqlStatement = "insert into mytable (id, value) VALUES(?, ?)";
        sqlite3_stmt *compiledStatement = nil;
        sqlite3_prepare_v2(database, sqlStatement, -1, &compiledStatement, NULL);
        sqlite3_bind_int(compiledStatement, 1, [idvar intValue]);
        sqlite3_bind_text(compiledStatement, 2, [valuevar UTF8String], -1, SQLITE_TRANSIENT);

    Read the article

  • Sorting the data returned by a database

    - by Rishabh Ohri
    hi all, In our project we have a requirement that when a set of records is returned by the database, the records should be sorted with respect to the TITLE field in the record. The records have to be sorted alphabetically, but if the title of a record has a number in it, then it should come after the records whose titles consist only of alphabetic characters. Details: we are using SQL Server and C#. The data from the database comes to an entity class which forwards the data to other layers. So, what would be a possible and effective solution for this requirement?
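
    Since the data lands in an entity class anyway, one option is to sort in the application with a two-part key: first "does the title contain a digit" (false sorts before true), then the title itself. Sketched in Python for brevity; the same key translates to C# as .OrderBy(r => r.Title.Any(char.IsDigit)).ThenBy(r => r.Title):

        # Minimal sketch: digit-free titles come first, each group alphabetical.
        def sort_by_title(records):
            return sorted(records,
                          key=lambda r: (any(c.isdigit() for c in r["title"]),
                                         r["title"].lower()))

        rows = [{"title": "2 Fast"}, {"title": "Alpha"},
                {"title": "Beta 1"}, {"title": "Zoo"}]
        print([r["title"] for r in sort_by_title(rows)])
        # -> ['Alpha', 'Zoo', '2 Fast', 'Beta 1']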

    Read the article

  • WMI query to check setting "Reversible Encryption" in Windows XP

    - by Mart
    In Windows XP, there are two settings in Group Policy I'm looking at:

        - Password must meet complexity requirements
        - Store password using reversible encryption

    Both of these settings are under Local Computer Policy/Computer Config/Windows Settings/Security Settings/Account Policies/Password Policy. For the first one, I have found the setting in the RSOP_SecurityEventLogSettingBoolean class in WMI. However, I can't find the latter. Does anybody know in which WMI class I can read that particular setting?

    Read the article

  • Best way to save complex Python data structures across program sessions (pickle, json, xml, database)

    - by Malcolm
    Looking for advice on the best technique for saving complex Python data structures across program sessions. Here's a list of techniques I've come up with so far:

        - pickle/cPickle
        - json
        - jsonpickle
        - xml
        - database (like SQLite)

    Pickle is the easiest and fastest technique, but my understanding is that there is no guarantee that pickle output will work across various versions of Python 2.x/3.x or across 32- and 64-bit implementations of Python. Json only works for simple data structures. Jsonpickle seems to correct this AND seems to be written to work across different versions of Python. Serializing to XML or to a database is possible, but represents extra effort since we would have to do the serialization ourselves manually. Thank you, Malcolm
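
    As a small illustration of why jsonpickle is attractive here: it round-trips structures that plain json rejects or flattens (tuples, custom classes) while producing ordinary JSON text that stays readable across Python versions. A sketch (assumes the jsonpickle package is installed):

        import jsonpickle

        state = {"open_files": ["notes.txt", "todo.txt"], "cursor": (12, 4)}
        text = jsonpickle.encode(state)   # plain JSON string, safe to write to disk
        restored = jsonpickle.decode(text)
        assert restored == state and isinstance(restored["cursor"], tuple)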

    Read the article

  • Excel and SQL, order by help

    - by perlnoob
    I'm stuck in Excel 2007, running a query. It worked until I wanted to add a 2nd column containing "field 2":

        Select "Site Updates"."Posted By", "Site Uploaded"."Site Upload Date"
        From site_info.dbo."Site Updates"
        Where ("Site Updates"."Posted By") AND "Site Uploaded"."Site Upload Date">={ts '2010-05-01 00:00:00'}), ("Site Location"='Chicago')
        Union all
        Select "Site Updates"."Posted By", "Site Uploaded"."Site Upload Date"
        From site_info.dbo."Site Updates"
        Where ("Site Updates"."Posted By") AND "Site Uploaded"."Site Upload Date">={ts '2010-05-01 00:00:00'}), ("Site Location"='Denver')
        Order By "Site Location" ASC;

    Basically I want 2 different cells for the locations, for example:

        name  - Chicago - Denver
        user1 - 100     - 20
        user2 - 34      - 1002

    Right now, for some odd reason, it's combining them like:

        name  - Chicago
        user1 - 120
        user2 - 1036

    Please note that updating to the 2010 beta is not a viable option for me at this point. Any and all input that will help me is greatly appreciated. I have read over http://www.techonthenet.com/sql/order_by.php, but it hasn't gotten me very far with this question. If you have another SQL resource you recommend for people trying to get their feet wet, I'd greatly appreciate it. If it helps, all the info is in the same table.
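
    A side note on the shape of the result: to get one column per location rather than stacked UNION rows, the usual trick is conditional aggregation, so each city becomes its own SUM. A hedged sketch of the query text (table and column names follow the question and are untested against the real schema; shown here as a Python string ready to paste into the Excel connection's command text):

        pivot_query = """
        SELECT "Posted By",
               SUM(CASE WHEN "Site Location" = 'Chicago' THEN 1 ELSE 0 END) AS Chicago,
               SUM(CASE WHEN "Site Location" = 'Denver'  THEN 1 ELSE 0 END) AS Denver
        FROM site_info.dbo."Site Updates"
        WHERE "Site Upload Date" >= {ts '2010-05-01 00:00:00'}
        GROUP BY "Posted By"
        ORDER BY "Posted By";
        """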

    Read the article

  • Load data from CSV to MySQL database with Java+Hibernate+Spring

    - by mona
    I am trying to load a CSV file into a MySQL database using Java + Hibernate + Spring. I am using the following query in the DAO to help me load into the database:

        entityManager.createQuery("LOAD DATA INFILE :fileName INTO TABLE test")
                     .setParameter("fileName", "C:\\samples\\test\\abcd.csv")
                     .executeUpdate();

    I got the idea to use this from http://dev.mysql.com/doc/refman/5.1/en/load-data.html and "how to import a csv file into a mysql from an hibernate+spring application?". But I am getting the error:

        java.lang.IllegalArgumentException: node to traverse cannot be null!

    Please help! Thanks

    Read the article

  • perl dancer: passing database info to template

    - by Bubnoff
    Following the Dancer tutorial here: http://search.cpan.org/dist/Dancer/lib/Dancer/Tutorial.pod I'm using my own sqlite3 database with this schema:

        CREATE TABLE if not exists location (location_code TEXT PRIMARY KEY, name TEXT, stations INTEGER);
        CREATE TABLE if not exists session (id INTEGER PRIMARY KEY, date TEXT, sessions INTEGER, location_code TEXT,
            FOREIGN KEY(location_code) REFERENCES location(location_code));

    My Dancer code (helloWorld.pm) for the database:

        package helloWorld;
        use Dancer;
        use DBI;
        use File::Spec;
        use File::Slurp;
        use Template;

        our $VERSION = '0.1';

        set 'template' => 'template_toolkit';
        set 'logger' => 'console';

        my $base_dir = qq(/home/automation/scripts/Area51/perl/dancer);

        # database crap
        sub connect_db {
            my $db = qw(/home/automation/scripts/Area51/perl/dancer/sessions.sqlite);
            my $dbh = DBI->connect("dbi:SQLite:dbname=$db", "", "",
                                   { RaiseError => 1, AutoCommit => 1 });
            return $dbh;
        }

        sub init_db {
            my $db = connect_db();
            my $file = qq($base_dir/schema.sql);
            my $schema = read_file($file);
            $db->do($schema) or die $db->errstr;
        }

        get '/' => sub {
            my $branch_code = qq(BPT);
            my $dbh = connect_db();
            my $sql = q(SELECT * FROM session);
            my $sth = $dbh->prepare($sql) or die $dbh->errstr;
            $sth->execute or die $dbh->errstr;
            my $key_field = q(id);
            template 'show_entries.tt', {
                'branch' => $branch_code,
                'data' => $sth->fetchall_hashref($key_field),
            };
        };

        init_db();

        true;

    Tried the example template on the site; it doesn't work:

        <% FOREACH id IN data.keys.nsort %>
        <li>Date is: <% data.$id.sessions %> </li>
        <% END %>

    It produces a page, but with no data. How do I troubleshoot this, as no clues come up in the console/CLI? Thanks, Bubnoff

    Read the article

  • Slow performance of MySQL database on one server and fast on another one, with similar configurations

    - by Alon_A
    We have a web application that runs on two GoDaddy servers. We experience slow performance on our production server, although it has stronger hardware than the testing one, and it is dedicated. I'll start with the configurations.

    Testing:

        - CentOS Linux 5.8, Linux 2.6.18-028stab101.1 on i686
        - Intel(R) Xeon(R) CPU L5609 @ 1.87GHz, 8 cores
        - 60 GB total, 6.03 GB used
        - Apache/2.2.3 (CentOS)
        - MySQL 5.5.21-log
        - PHP Version 5.3.15

    Production:

        - CentOS Linux 6.2, Linux 2.6.18-028stab101.1 on x86_64
        - Intel(R) Xeon(R) CPU L5410 @ 2.33GHz, 8 cores
        - 120 GB total, 2.12 GB used
        - Apache/2.2.15 (CentOS)
        - MySQL 5.5.27-log - MySQL Community Server (GPL) by Remi
        - PHP Version 5.3.15

    We are running the same code on both servers.

    The Problem

    We have a function that executes ~30000 PDO exec commands. On our testing server it takes about 1.5-2 minutes to complete, while on our production server it can take more than 15 minutes to complete (as seen in qcachegrind; screenshot omitted). Researching the problem, we checked the live graphs in phpMyAdmin and discovered that the MySQL server on our testing machine was performing at a steady level of 1000 execution statements per 2 seconds, while the slow production MySQL server was doing only 250 execution statements per 2 seconds - and not steadily at all, jumping between 0 and 250 every second (graphs omitted: testing server vs. production server). Comparing both MySQL configurations side by side (fast testing on the left, slow production on the right; screenshot omitted), the differences are highlighted, but I can't find anything that could cause such a difference in behavior, as the configs are mostly the same. Maybe you can see something I can't. Note that our tables are all InnoDB, so the MyISAM difference is (probably) not relevant. Maybe it is the MySQL Community Server (GPL) that is installed on the production server that causes the slow performance? Or maybe it needs to be configured differently for 64-bit? I'm currently out of ideas...

    Read the article
