Search Results

Search found 5380 results on 216 pages for 'primary'.


  • JPA @ManyToMany on only one side?

    - by Ethan Leroy
    I am trying to refresh the @ManyToMany relation but it gets cleared instead... My Project class looks like this: @Entity public class Project { ... @ManyToMany(cascade = CascadeType.ALL, fetch = FetchType.EAGER) @JoinTable(name = "PROJECT_USER", joinColumns = @JoinColumn(name = "PROJECT_ID", referencedColumnName = "ID"), inverseJoinColumns = @JoinColumn(name = "USER_ID", referencedColumnName = "ID")) private Collection<User> users; ... } But I don't have - and I don't want - the collection of Projects in the User entity. When I look at the generated database tables, they look good. They contain all columns and constraints (primary/foreign keys). But when I persist a Project that has a list of Users (and the users already exist in the database), the mapping table doesn't get updated, and when I refresh the project afterwards, the list of Users is cleared. For better understanding: Project project = ...; // new project with users that are available in the db System.out.println(project.getUsers().size()); // prints 5 em.persist(project); System.out.println(project.getUsers().size()); // prints 5 em.refresh(project); System.out.println(project.getUsers().size()); // prints 0 So, how can I refresh the relation between User and Project?
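
    A hedged sketch of one thing worth ruling out (not a confirmed fix): since the referenced User rows already exist, merging the new Project and flushing before the refresh ensures the PROJECT_USER join rows are written before they are re-read. Entity and field names are taken from the question; the EntityManager and transaction handling around it is assumed.

        em.getTransaction().begin();
        Project managed = em.merge(project);   // attach the project and its pre-existing users
        em.flush();                            // write the PROJECT_USER join rows now
        em.getTransaction().commit();

        em.refresh(managed);                   // re-reads the relation from the database
        System.out.println(managed.getUsers().size()); // expected 5 rather than 0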

    Read the article

  • Design Decision - Scaling out web based application's architecture

    - by Vadi
    This question is about a design decision. I am currently working on a web project that will have 40K users to start with and is expected to grow to 50M users within a couple of months (not concurrent users, though). I would like an architecture that can be scaled out easily without much effort. To explain, let me use a trivial scenario: User entities and services such as CreateUser, AuthenticateUser etc. are simple method calls for the page controllers. But once the traffic increases, for example, authenticating users (or similar services related to user entities) has to be moved out to a different internal server to spread the load. At the same time, using RPC calls over the network when the user count is only 40K would be overkill. My proposal was to use IPC initially and, when we need to scale out, internally switch to TCP-based RPC calls so that it can easily scale out. For example, I am referring to System.IO.Pipes.NamedPipeServerStream to start with, moving to a TcpListener later on. If we have a proper design that encapsulates the above approach, it would be easy for us to scale services out onto multiple network servers while avoiding network calls when the user count is small. Is this the best approach? Any suggestions would be great. Note: database scaling is definitely a second-phase optimization, so we already have an architectural design in place to easily partition data when traffic increases. The primary bottleneck over time will be the application servers.
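
    As a rough illustration of the proposal (all type and member names below are hypothetical, not from the project): the page controllers depend only on a service interface, so the in-process implementation can later be swapped for a networked proxy without changing the callers.

        public interface IUserService
        {
            bool AuthenticateUser(string username, string password);
        }

        // Used while everything runs in one process: a plain method call.
        public class InProcessUserService : IUserService
        {
            public bool AuthenticateUser(string username, string password)
            {
                // direct call into the business/data layer
                return username.Length > 0 && password.Length > 0;
            }
        }

        // Later: same contract, but the body serializes the call over a named
        // pipe or TCP connection to a dedicated authentication server.
        public class RemoteUserService : IUserService
        {
            public bool AuthenticateUser(string username, string password)
            {
                // open pipe/TCP connection, send request, read response (omitted in this sketch)
                throw new System.NotImplementedException();
            }
        }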

    Read the article

  • Writing robust and "modern" Fortran code

    - by Blklight
    In some scientific environments, you often cannot go without FORTRAN, as most of the developers only know that idiom and there is a lot of legacy code and related experience. And frankly, there are not many other cross-platform options for high-performance programming (C++ would do the task, but the syntax, zero-based arrays, and pointers are too much for most engineers ;-) ). I'm a C++ guy but I'm stuck with some F90 projects. So, let's assume a new project must use FORTRAN (F90), but I want to build the most modern software architecture out of it, while staying compatible with most "recent" compilers (Intel ifort, but also Sun/HP/IBM's own compilers). So I'm thinking of imposing: global variables forbidden, no gotos, no jump labels, "implicit none", etc.; "object-oriented programming" (modules with datatypes + related subroutines); modular/reusable functions, well documented, reusable libraries; assertions/preconditions/invariants (implemented using preprocessor statements); unit tests for all (most) subroutines and "objects"; an intense "debug mode" (#ifdef DEBUG) with more checks and all possible Intel compiler checks enabled (array bounds, subroutine interfaces, etc.); a uniform and enforced legible coding style, using code-processing tools; C stubs/wrappers for libpthread, libDL (and eventually GPU kernels, etc.); C/C++ implementations of utility functions (strings, file operations, sockets, memory alloc/dealloc reference counting for debug mode, etc.). (These may all seem like "evident" modern programming practices, but in a legacy FORTRAN world most of them are big changes to the typical programmer's workflow.) The goal with all that is to have trustworthy, maintainable and modular code, whereas in typical FORTRAN, modularity is often not a primary goal and code is trustworthy only if the original developer was very clever and the code has not been changed since then! (I'm partly joking here, but not much.) I searched around for references about object-oriented FORTRAN, programming-by-contract (assertions/preconditions/etc.), and found only ugly and outdated documents, syntaxes and papers by people with no large-scale project involvement, and dead projects. Any good URLs, advice, reference papers/books on the subject?
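
    For what the "modules with datatypes + related subroutines" point might look like in practice, here is a minimal F90 sketch (all names invented for illustration) combining a derived type, a constructor-style subroutine, and a precondition check:

        module particle_mod
          implicit none
          private
          public :: particle_t, particle_init

          type :: particle_t
             real :: x, y, z
             real :: mass
          end type particle_t

        contains

          ! constructor-style initializer with a precondition
          subroutine particle_init(p, mass)
            type(particle_t), intent(out) :: p
            real, intent(in) :: mass
            if (mass <= 0.0) stop "precondition failed: mass must be positive"
            p = particle_t(0.0, 0.0, 0.0, mass)
          end subroutine particle_init

        end module particle_mod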

    Read the article

  • SQL Server Index cost

    - by yellowstar
    I have read that one of the tradeoffs for adding table indexes in SQL Server is the increased cost of insert/update/delete queries to benefit the performance of select queries. I can conceptually understand what happens in the case of an insert because SQL Server has to write entries into each index matching the new rows, but update and delete are a little more murky to me because I can't quite wrap my head around what the database engine has to do. Let's take DELETE as an example and assume I have the following schema (pardon the pseudo-SQL) TABLE Foo col1 int ,col2 int ,col3 int ,col4 int PRIMARY KEY (col1,col2) INDEX IX_1 col3 INCLUDE col4 Now, if I issue the statement DELETE FROM Foo WHERE col1=12 AND col2 > 34 I understand what the engine must do to update the table (or clustered index if you prefer). The index is set up to make it easy to find the range of rows to be removed and do so. However, at this point it also needs to update IX_1 and the query that I gave it gives no obvious efficient way for the database engine to find the rows to update. Is it forced to do a full index scan at this point? Does the engine read the rows from the clustered index first and generate a smarter internal delete against the index? It might help me to wrap my head around this if I understood better what is going on under the hood, but I guess my real question is this. I have a database that is spending a significant amount of time in delete and I'm trying to figure out what I can do about it. When I display the execution plan for the deletion, it just shows an entry for "Clustered Index Delete" on table Foo which lists in the details section the other indices that need to be updated but I don't get any indication of the relative cost of these other indices. Are they all equal in this case? Is there some way that I can estimate the impact of removing one or more of these indices without having to actually try it?
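
    One way to approach the last question without dropping anything yet is to compare how often each index on the table is read versus maintained; an index that is updated constantly but rarely sought is the usual candidate for removal. A sketch (run in the database in question; note these DMV counters reset on instance restart):

        SELECT i.name,
               s.user_seeks, s.user_scans, s.user_lookups,  -- read activity
               s.user_updates                                -- maintenance cost paid on INSERT/UPDATE/DELETE
        FROM sys.indexes AS i
        LEFT JOIN sys.dm_db_index_usage_stats AS s
               ON s.object_id = i.object_id
              AND s.index_id  = i.index_id
              AND s.database_id = DB_ID()
        WHERE i.object_id = OBJECT_ID('dbo.Foo');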

    Read the article

  • How should I manage my many-to-many relationships?

    - by wes
    Hello all, I have a database containing a couple tables: files and users. This relationship is many-to-many, so I also have a table called users_files_ref which holds foreign keys to both of the above tables. Here's the schema of each table: files - file_id, file_name users - user_id, user_name users_files_ref - user_file_ref_id, user_id, file_id I'm using Codeigniter to build a file host application, and I'm right in the middle of adding the functionality that enables users to upload files. This is where I'm running into my problem. Once I add a file to the files table, I will need that new file's id to update the users_files_ref table. Right now I'm adding the record to the files table, and then I imagined I'd run a query to grab the last file added, so that I can get the ID, and then use that ID to insert the new users_files_ref record. I know this will work on a small scale, but I imagine there is a better way of managing these records, especially in a heavy-traffic scenario. I am new to relational database stuff but have been around PHP for a while, so please bear with me here :-) I have primary and foreign keys set up correctly for the files, users, and users_files_ref tables, I'm just wondering how to manage the adding of file records for this scenario? Thanks for any help provided, it's much appreciated. -Wes
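
    A minimal sketch of the usual CodeIgniter pattern for this (database loading and input validation assumed): insert the file row, read the generated id back from the same connection with insert_id(), then write the reference row. This avoids the "select the last file added" query and is safe per connection even under concurrent uploads, since insert_id() reports the id generated by this connection's last insert.

        $this->db->insert('files', array('file_name' => $file_name));
        $new_file_id = $this->db->insert_id();            // id generated by the insert above

        $this->db->insert('users_files_ref', array(
            'user_id' => $user_id,
            'file_id' => $new_file_id,
        ));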

    Read the article

  • Mysql - Help me alter this search query to get desired results

    - by sandeepan-nath
    Following is a dump of the tables and data needed to understand the system. The system consists of tutors and classes. The data in the table All_Tag_Relations stores tag relations for each tutor registered and each class created by a tutor. The tag relations are used for searching classes. CREATE TABLE IF NOT EXISTS `Tags` ( `id_tag` int(10) unsigned NOT NULL auto_increment, `tag` varchar(255) default NULL, PRIMARY KEY (`id_tag`), UNIQUE KEY `tag` (`tag`), KEY `id_tag` (`id_tag`), KEY `tag_2` (`tag`), KEY `tag_3` (`tag`), KEY `tag_4` (`tag`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1; INSERT INTO `Tags` (`id_tag`, `tag`) VALUES (1, 'Sandeepan'), (2, 'Nath'), (3, 'first'), (4, 'class'), (5, 'new'), (6, 'Bob'), (7, 'Cratchit'); CREATE TABLE IF NOT EXISTS `All_Tag_Relations` ( `id_tag` int(10) unsigned NOT NULL default '0', `id_tutor` int(10) default NULL, `id_wc` int(10) unsigned default NULL, KEY `All_Tag_Relations_FKIndex1` (`id_tag`), KEY `id_wc` (`id_wc`), KEY `id_tag` (`id_tag`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1; INSERT INTO `All_Tag_Relations` (`id_tag`, `id_tutor`, `id_wc`) VALUES (1, 1, NULL), (2, 1, NULL), (3, 1, 1), (4, 1, 1), (6, 2, NULL), (7, 2, NULL), (5, 2, 2), (4, 2, 2); Following is my query: it searches for "first class" (tag id 3 for "first" and 4 for "class" in the Tags table) and returns all those classes such that both the terms first and class are present in the class name. SELECT wtagrels.id_wc,SUM(DISTINCT( wtagrels.id_tag =3)) AS key_1_total_matches, SUM(DISTINCT( wtagrels.id_tag =4)) AS key_2_total_matches FROM all_tag_relations AS wtagrels WHERE ( wtagrels.id_tag =3 OR wtagrels.id_tag =4 ) GROUP BY wtagrels.id_wc HAVING key_1_total_matches = 1 AND key_2_total_matches = 1 LIMIT 0, 20 And it returns the class with id_wc = 1. But I want the search to show all those classes such that all the search terms are present in the class name or its tutor's name, so that searching "Sandeepan class" (wtagrels.id_tag = 1,4) or "Sandeepan Nath" also returns the class with id_wc = 1, while searching "Bob First" should not return any classes. Please modify the above query or suggest a new one, if possible using MyISAM full-text search, but somehow help me get the result.
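
    One possible direction, offered as a sketch against the dump above rather than a tuned query: treat a class's effective tag set as its own tags plus its tutor's tags by self-joining All_Tag_Relations, then require every search term to be present in that combined set. The tag ids 1 and 4 below correspond to the "Sandeepan class" example.

        SELECT crel.id_wc,
               SUM(rel.id_tag = 1) > 0 AS key_1_matches,   -- 'Sandeepan'
               SUM(rel.id_tag = 4) > 0 AS key_2_matches    -- 'class'
        FROM All_Tag_Relations AS crel                     -- one row per class tag
        JOIN All_Tag_Relations AS rel
          ON rel.id_wc = crel.id_wc                        -- tags of the class itself
          OR (rel.id_wc IS NULL AND rel.id_tutor = crel.id_tutor)  -- tags of its tutor
        WHERE crel.id_wc IS NOT NULL
        GROUP BY crel.id_wc
        HAVING key_1_matches AND key_2_matches
        LIMIT 0, 20;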

    Read the article

  • SqlBulkCopy slow as molasses

    - by Chris
    I'm looking for the fastest way to load bulk data via C#. I have this script that does the job, but it's slow. I read testimonials that SqlBulkCopy is the fastest. 1000 records take 2.5 seconds; files contain anywhere from 5,000 to 250k records. What are some of the things that can slow it down? Table Def: CREATE TABLE [dbo].[tempDispositions]( [QuotaGroup] [varchar](100) NULL, [Country] [varchar](50) NULL, [ServiceGroup] [varchar](50) NULL, [Language] [varchar](50) NULL, [ContactChannel] [varchar](10) NULL, [TrackingID] [varchar](20) NULL, [CaseClosedDate] [varchar](25) NULL, [MSFTRep] [varchar](50) NULL, [CustEmail] [varchar](100) NULL, [CustPhone] [varchar](100) NULL, [CustomerName] [nvarchar](100) NULL, [ProductFamily] [varchar](35) NULL, [ProductSubType] [varchar](255) NULL, [CandidateReceivedDate] [varchar](25) NULL, [SurveyMode] [varchar](1) NULL, [SurveyWaveStartDate] [varchar](25) NULL, [SurveyInvitationDate] [varchar](25) NULL, [SurveyReminderDate] [varchar](25) NULL, [SurveyCompleteDate] [varchar](25) NULL, [OptOutDate] [varchar](25) NULL, [SurveyWaveEndDate] [varchar](25) NULL, [DispositionCode] [varchar](5) NULL, [SurveyName] [varchar](20) NULL, [SurveyVendor] [varchar](20) NULL, [BusinessUnitName] [varchar](25) NULL, [UploadId] [int] NULL, [LineNumber] [int] NULL, [BusinessUnitSubgroup] [varchar](25) NULL, [FileDate] [datetime] NULL ) ON [PRIMARY] and here's the code: private void BulkLoadContent(DataTable dt) { OnMessage("Bulk loading records to temp table"); OnSubMessage("Bulk Load Started"); using (SqlBulkCopy bcp = new SqlBulkCopy(conn)) { bcp.DestinationTableName = "dbo.tempDispositions"; bcp.BulkCopyTimeout = 0; foreach (DataColumn dc in dt.Columns) { bcp.ColumnMappings.Add(dc.ColumnName, dc.ColumnName); } bcp.NotifyAfter = 2000; bcp.SqlRowsCopied += new SqlRowsCopiedEventHandler(bcp_SqlRowsCopied); bcp.WriteToServer(dt); bcp.Close(); } }
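
    Two settings that are commonly worth trying with this kind of load (a sketch using the same conn and dt as above): take a bulk-update table lock and send the rows in batches rather than as one unit. Dropping or disabling indexes and triggers on the destination for the duration of the load is the other usual lever.

        using (SqlBulkCopy bcp = new SqlBulkCopy(conn, SqlBulkCopyOptions.TableLock, null))
        {
            bcp.DestinationTableName = "dbo.tempDispositions";
            bcp.BulkCopyTimeout = 0;
            bcp.BatchSize = 5000;       // commit rows in chunks instead of one giant batch
            bcp.WriteToServer(dt);
        }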

    Read the article

  • Getting a users Facebook profile url

    - by Greg Pabst
    I am creating a registry site so similar people can find each other easily. I don't want to use Facebook Connect as the primary login method or use Facebook to store their information; I'll be creating a database on my end to store that info. For security reasons I won't be displaying the user's address, phone number or email address, so I wanted to provide the next best way for people to connect with each other, and this is where Facebook comes in. Normally I would just ask them to type their Facebook URL into a text box, but I don't think most people know what their URL is, which is why I think I need to use Facebook Connect. So here is my idea: when the user signs up there is a check box that, when checked, signifies they are allowing people to find them on Facebook. I assume once they click the register button that a Facebook Connect popup will show up asking for permission to access their Facebook account. When they "allow" it, then I can get their profile URL. All I need is their Facebook profile URL; I don't want any other Facebook features or information. Is Facebook Connect the best thing to use for this scenario? Is there an easier way? Several months ago on the Facebook Connect site there used to be examples of doing this, but all the documentation has been rearranged and changed and I can't seem to find the information. Any help you can provide would be great!

    Read the article

  • How can I work around SQL Server - Inline Table Value Function execution plan variation based on par

    - by Ovidiu Pacurar
    Here is the situation: I have a table value function with a datetime parameter, let's say tdf(p_date), that filters about two million rows, selecting those with the Date column smaller than p_date, and computes some aggregate values on other columns. It works great, but if p_date is a custom scalar value function (returning the end of day in my case) the execution plan is altered and the query goes from 1 sec to 1 minute execution time. A proof of concept table - 1K products, 2M rows: CREATE TABLE [dbo].[POC]( [Date] [datetime] NOT NULL, [idProduct] [int] NOT NULL, [Quantity] [int] NOT NULL ) ON [PRIMARY] The inline table value function: CREATE FUNCTION tdf (@p_date datetime) RETURNS TABLE AS RETURN ( SELECT idProduct, SUM(Quantity) AS TotalQuantity, max(Date) as LastDate FROM POC WHERE (Date < @p_date) GROUP BY idProduct ) The scalar value function: CREATE FUNCTION [dbo].[EndOfDay] (@date datetime) RETURNS datetime AS BEGIN DECLARE @res datetime SET @res=dateadd(second, -1, dateadd(day, 1, dateadd(ms, -datepart(ms, @date), dateadd(ss, -datepart(ss, @date), dateadd(mi,- datepart(mi,@date), dateadd(hh, -datepart(hh, @date), @date)))))) RETURN @res END Query 1 - working great: SELECT * FROM [dbo].[tdf] (getdate()) The end of the execution plan: Stream Aggregate Cost 13% <--- Clustered Index Scan Cost 86% Query 2 - not so great: SELECT * FROM [dbo].[tdf] (dbo.EndOfDay(getdate())) The end of the execution plan: Stream Aggregate Cost 4% <--- Filter Cost 12% <--- Clustered Index Scan Cost 86%
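
    A workaround that often restores the first plan (a sketch using the same objects as above): evaluate the scalar UDF once into a variable, then pass the variable to the inline TVF, so the optimizer sees a simple parameter rather than a scalar-function call it cannot fold.

        DECLARE @p datetime;
        SET @p = dbo.EndOfDay(GETDATE());   -- evaluated exactly once, up front

        SELECT * FROM dbo.tdf(@p);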

    Read the article

  • Are Parameters really enough to prevent Sql injections?

    - by Rune Grimstad
    I've been preaching both to my colleagues and here on SO about the goodness of using parameters in SQL queries, especially in .NET applications. I've even gone so far as to promise that they give immunity against SQL injection attacks. But I'm starting to wonder if this really is true. Are there any known SQL injection attacks that will be successful against a parameterized query? Can you, for example, send a string that causes a buffer overflow on the server? There are of course other considerations to make to ensure that a web application is safe (like sanitizing user input and all that stuff), but right now I am thinking of SQL injection. I'm especially interested in attacks against MS SQL Server 2005 and 2008 since they are my primary databases, but all databases are interesting. Edit: to clarify what I mean by parameters and parameterized queries: by using parameters I mean using "variables" instead of building the SQL query as a string. So instead of doing this: SELECT * FROM Table WHERE Name = 'a name' we do this: SELECT * FROM Table WHERE Name = @Name and then set the value of the @Name parameter on the query / command object.
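
    For concreteness, a minimal ADO.NET sketch of the parameterized form described above (connection handling assumed): the value travels to SQL Server as typed data and is never spliced into the SQL text.

        using (var cmd = new SqlCommand("SELECT * FROM Table WHERE Name = @Name", connection))
        {
            cmd.Parameters.AddWithValue("@Name", userSuppliedName);  // sent as a typed parameter
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read()) { /* consume rows */ }
            }
        }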

    Read the article

  • Rendering Swing Components to an Offscreen buffer

    - by Nick C
    I have a Java (Swing) application, running on a 32-bit Windows 2008 Server, which needs to render its output to an off-screen image (which is then picked up by another C++ application for rendering elsewhere). Most of the components render correctly, except in the odd case where a component which has just lost focus is occluded by another component, for example where there are two JComboBoxes close to each other: the user interacts with the lower one, then clicks on the upper one so its pull-down overlaps the other box. In this situation, the component which has lost focus is rendered after the one occluding it, and so appears on top in the output. It renders correctly in the normal Java display (running full-screen on the primary display), and attempting to change the layers of the components in question does not help. I am using a custom RepaintManager to paint the components to the offscreen image, and I assume the problem lies with the order in which addDirtyRegion() is called for each of the components in question, but I can't think of a good way of identifying when this particular state occurs in order to prevent it. Hacking it so that the object which has just lost focus is not repainted stops the problem, but obviously causes the bigger problem that it is not repainted in all other, normal, circumstances. Is there any way of programmatically identifying this state, or of reordering things so that it does not occur? Many thanks, Nick
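
    One avenue to consider, offered as a sketch rather than a known fix (root here stands for the top-level container being captured): instead of replaying individual dirty regions in whatever order addDirtyRegion() happened to be called, periodically paint the whole hierarchy into the offscreen image and let Swing resolve the stacking order itself.

        java.awt.image.BufferedImage image =
            new java.awt.image.BufferedImage(root.getWidth(), root.getHeight(),
                                             java.awt.image.BufferedImage.TYPE_INT_ARGB);
        java.awt.Graphics2D g = image.createGraphics();
        root.paint(g);   // children are painted in their correct stacking order
        g.dispose();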

    Read the article

  • ASP.NET webservice API security.

    - by Tejaswi Yerukalapudi
    Hi, I have an iPhone app accessing an ASP.NET webservice for data. Since I'm building both the ASP.NET end and the iPhone part of the app, and we'll shortly be publishing it in the App Store, I'd like to know what security checks I need to make. The basic flow of the program (without divulging too much info about it) is as follows: login (enter username and password in the app); a primary screen where the data is loaded from the webservice and presented; and posting data back after a few updates by the user. I'm using POST to send the data to the webservice via HTTPS. I'm sanitizing the inputs and checking the length of the inputs, but that's the limit of my knowledge as far as security goes. Any other tips are greatly appreciated! Edit: I should probably add that our service needs to be subscribed to separately and the iPhone component of it cannot be used alone, so the average user will not have login credentials. And the app itself has healthcare data in it, so I'd rather not have anyone trying attacks from my login page. Thanks, Teja.

    Read the article

  • Zend Sessions problem with IE8

    - by Emil
    I'm running a Zend Framework powered website and it seems to have serious problems with sessions. I have a 5-step process where I save the form data in the session between the steps and then save it into the database on the last step. While we were building the site, sometimes the session just went away and forced us to restart. Now it seems to work again, but recently we discovered an issue with Internet Explorer 8. It fails between steps 2 and 3 and forgets the session. It works fine in IE6, IE7, FF, Chrome, Safari and even in my mobile web browser (SE P1). We're storing our sessions in the database, and if I deactivate the session DB handler it works. What's the difference between using the database and not using it for sessions? Do I lose something if I switch back? Bootstrap: /* Start session */ $saveHandler = new Zend_Session_SaveHandler_DbTable(array( 'name' => 'sessions', 'primary' => 'id', 'modifiedColumn' => 'modified', 'dataColumn' => 'data', 'lifetimeColumn' => 'lifetime' )); Zend_Session::rememberMe((int) $config->session->lifetime); $saveHandler->setLifetime((int) $config->session->lifetime) ->setOverrideLifetime(true); Zend_Session::setSaveHandler($saveHandler); Zend_Session::start(); and in my step controller: $session = new Zend_Session_Namespace('wizard'); Then I'm just working with $session, saving data in a stdClass in $session.

    Read the article

  • db:migrate creates sequences but doesn't alter table?

    - by RewbieNewbie
    Hello, I have a migration that creates a Postgres sequence for auto-incrementing a primary identifier, and then executes a statement altering the column to specify the default value: execute 'CREATE SEQUENCE "ServiceAvailability_ID_seq";' execute <<-SQL ALTER TABLE "ServiceAvailability" ALTER COLUMN "ID" set DEFAULT NEXTVAL('ServiceAvailability_ID_seq'); SQL If I run db:migrate everything seems to work, in that no errors are returned; however, if I run the Rails application I get: null value in column "ID" violates not-null constraint I have discovered, by executing the SQL statements from the migration manually, that this error occurs because the ALTER statement isn't working, or isn't being executed. If I manually execute the following statement: CREATE SEQUENCE "ServiceAvailability_ID_seq"; I get: ERROR: relation "serviceavailability_id_seq" already exists Which means the migration successfully created the sequence! However, if I manually run: ALTER TABLE "ServiceProvider" ALTER COLUMN "ID" set DEFAULT NEXTVAL('ServiceProvider_ID_seq'); it runs successfully and creates the default NEXTVAL. So the question is: why is the migration file creating the sequence with the first execute statement, but not altering the table in the second execute? (Remember, no errors are output when running db:migrate.) Thank you, and apologies for the tl;dr

    Read the article

  • I have two choices of Master's classes this fall. Which is the most useful?

    - by ahplummer
    (For background purposes and context): I am a Software Engineer, and manage other Software Engineers currently. I kind of wear two hats right now: one of a programmer, and one as a 'team lead'. In this regard, I've started going back to school to get my Master's degree with an emphasis in Computer Science. I already have a Bachelor's in Computer Science, and have been working in the field for about 13 years. Our primary development environment is a Windows environment, writing in .NET, Delphi, and SQL Server. Choice #1: CST 798 DATA VISUALIZATION Course Description: Basically, this is a course on the "Processing" language: http://processing.org/ Choice #2: CST 711 INFORMATICS Course Description: (From catalog): Informatics is the science of the use and processing of data, information, and knowledge. This course covers a variety of applied issues from information technology, information management at a variety of levels, ranging from simple data entry, to the creation, design and implementation of new information systems, to the development of models. Topics include basic information representation, processing, searching, and organization, evaluation and analysis of information, Internet-based information access tools, ethics and economics of information sharing.

    Read the article

  • How to generate a key for a group entity?

    - by user246114
    Hi, I'm trying to make a group entity. Something like: class User { } class UserColor { } ... Key key = new KeyFactory.Builder( User.class.getSimpleName(), username). .addChild(UserColor.class.getSimpleName(), ???).getKey(); I know the unique username up-front to use for the key of the User object. But I just want app engine to generate a random unique value for the key value of the UserColor instance. I think this is described here, but I don't understand their wording: http://code.google.com/appengine/docs/java/datastore/transactions.html To create an object with a system-generated numeric ID and an entity group parent, you must use an entity group parent key field (such as customerKey, above). Assign the key of the parent to the parent key field, then leave the object's key field set to null. When the object is saved, the datastore populates the key field with the complete key, including the entity group parent. and this is their example: @Persistent @Extension(vendorName="datanucleus", key="gae.parent-pk", value="true") private Key customerKey; but I don't understand - should UserColor look like this then?: class UserColor { @Persistent @Extension(vendorName="datanucleus", key="gae.parent-pk", value="true") private Key mKeyParent; @Primary private Key mKey; // leave null } ... Key keyParent = new KeyFactory.Builder( User.class.getSimpleName(), username); UserColor uc = new UserColor(); uc.setKeyParent(keyParent); pm.makePersistent(uc); // now generated for me automatically? is that correct? Using this method, I should be able to use a User and a UserColor object in a transaction together, right? Thanks
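
    Reading the quoted documentation, a sketch of what the child entity could look like follows (JDO annotations; field and accessor names are illustrative). The system-generated key field stays null and the parent goes in the extension-annotated field, so the saved key ends up as User(username)/UserColor(generated id), keeping both objects in one entity group for transactions.

        @PersistenceCapable
        public class UserColor {
            @PrimaryKey
            @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
            private Key key;                       // leave null; filled in on makePersistent

            @Persistent
            @Extension(vendorName = "datanucleus", key = "gae.parent-pk", value = "true")
            private Key parentKey;                 // set to the User's key before saving

            public void setParentKey(Key k) { this.parentKey = k; }
            public Key getKey() { return key; }
        }

        // usage sketch:
        Key userKey = KeyFactory.createKey(User.class.getSimpleName(), username);
        UserColor uc = new UserColor();
        uc.setParentKey(userKey);
        pm.makePersistent(uc);                     // datastore generates the child id under the parent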

    Read the article

  • Replace textfields with dropdown select fields

    - by 47
    I have three model classes that look as below: class Model(models.Model): model = models.CharField(max_length=20, blank=False) manufacturer = models.ForeignKey(Manufacturer) date_added = models.DateField(default=datetime.today) def __unicode__(self): name = ''+str(self.manufacturer)+" "+str(self.model) return name class Series(models.Model): series = models.CharField(max_length=20, blank=True, null=True) model = models.ForeignKey(Model) date_added = models.DateField(default=datetime.today) def __unicode__(self): name = str(self.model)+" "+str(self.series) return name class Manufacturer(models.Model): MANUFACTURER_POPULARITY_CHOICES = ( ('1', 'Primary'), ('2', 'Secondary'), ('3', 'Tertiary'), ) manufacturer = models.CharField(max_length=15, blank=False) date_added = models.DateField(default=datetime.today) manufacturer_popularity = models.CharField(max_length=1, choices=MANUFACTURER_POPULARITY_CHOICES) def __unicode__(self): return self.manufacturer I want to have the fields for model series and manufacturer represented as dropdowns instead of text fields. I have customized the model forms as below: class SeriesForm(ModelForm): series = forms.ModelChoiceField(queryset=Series.objects.all()) class Meta: model = Series exclude = ('model', 'date_added',) class ModelForm(ModelForm): model = forms.ModelChoiceField(queryset=Model.objects.all()) class Meta: model = Model exclude = ('manufacturer', 'date_added',) class ManufacturerForm(ModelForm): manufacturer = forms.ModelChoiceField(queryset=Manufacturer.objects.all()) class Meta: model = Manufacturer exclude = ('date_added',) However, the dropdowns are populated with the unicode in the respective class...how can I further customize this to get the end result I want? Also, how can I populate the forms with the correct data for editing? Currently only SeriesForm is populated. The starting point of all this is from another class whose declaration is as below: class CommonVehicle(models.Model): year = models.ForeignKey(Year) series = models.ForeignKey(Series) .... def __unicode__(self): name = ''+str(self.year)+" "+str(self.series) return name
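
    On the label question, Django's documented hook for this is label_from_instance on a ModelChoiceField subclass; a sketch for the series dropdown (the other fields would follow the same pattern):

        from django import forms

        class SeriesChoiceField(forms.ModelChoiceField):
            def label_from_instance(self, obj):
                return obj.series          # show only the series name instead of __unicode__

        class SeriesForm(forms.ModelForm):
            series = SeriesChoiceField(queryset=Series.objects.all())

            class Meta:
                model = Series
                exclude = ('model', 'date_added',)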

    Read the article

  • CodeIgniter question: making an anchor load a page containing data from a referenced row in the DB

    - by thrice801
    Hi, I'm trying to learn the CodeIgniter library and object-oriented PHP in general and have a question. OK, so I've gotten as far as making a page which loads all of the rows from my database, and in there I'm echoing an anchor tag which is a link with the following structure: echo anchor("videos/video/$row->video_id", $row->video_title); So, I have a class called Videos which extends the controller; within that class there are index and video, which is being called correctly (when you click on the video title, it sends you to videos/video/5 for example, 5 being the primary key of the table I'm working with). So basically all I'm trying to do is pass that 5 back to the controller, and then have the particular video page output that particular row's data from the videos table. My function in my controller for video looks like this: function video() { $data['main_content'] = 'video'; $data['video_title'] = 'test'; $this->load->view('includes/template', $data); } So basically, instead of 'test', $data['video_title'] should be the returned value of a query which says: get, from the table "videos", the row with the video_id of "5", and make $data['video_title'] = the value of video_title in the database... I should have this figured out by now but don't; any help would be appreciated!
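
    A rough sketch of what the controller method could look like (CodeIgniter's URI-segment-to-argument mapping and Active Record query; the exact model/view wiring is assumed): the 5 from videos/video/5 arrives as the method argument and drives the lookup.

        function video($video_id)
        {
            $query = $this->db->get_where('videos', array('video_id' => $video_id));
            $row   = $query->row();

            $data['main_content'] = 'video';
            $data['video_title']  = $row->video_title;   // value from the matching row
            $this->load->view('includes/template', $data);
        }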

    Read the article

  • MySQL Cursor Issue

    - by James Inman
    I've got the following code - this is the first time I've really attempted using cursors. DELIMITER $$ DROP PROCEDURE IF EXISTS demo$$ DROP TABLE IF EXISTS temp$$ CREATE TEMPORARY TABLE temp( id INTEGER NOT NULL AUTO_INCREMENT, start DATETIME NOT NULL, end DATETIME NOT NULL, PRIMARY KEY(id) ) $$ CREATE PROCEDURE demo() BEGIN DECLARE done INT DEFAULT 0; DECLARE a, b DATETIME; DECLARE cur1 CURSOR FOR SELECT MAX(end) AS end FROM ( SELECT id, start, end, @r := @r + (start > @edate) AS num, @edate := GREATEST(@edate, end) FROM ( SELECT @r := 0, @edate := '0001-01-01' ) vars, student_lectures WHERE ( student_id = 1 AND start >= '2010-04-26 00:00:00' AND end <= '2010-04-30 23:59:59' ) ORDER BY start ) q GROUP BY num; DECLARE cur2 CURSOR FOR SELECT MIN(start) AS start FROM ( SELECT id, start, end, @r := @r + (start > @edate) AS num, @edate := GREATEST(@edate, end) FROM ( SELECT @r := 0, @edate := '0001-01-01' ) vars, student_lectures WHERE ( student_id = 1 AND start >= '2010-04-26 00:00:00' AND end <= '2010-04-30 23:59:59' ) ORDER BY start ) q GROUP BY num LIMIT 1, 18446744073709551615; DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1; OPEN cur1; OPEN cur2; REPEAT FETCH cur1 INTO a; FETCH cur2 INTO b; IF NOT done THEN INSERT INTO temp(start, end) VALUES(a,b); END IF; UNTIL done END REPEAT; CLOSE cur1; CLOSE cur2; END $$ SELECT * FROM temp; I'm not getting anything outputted into the temp table. Running the following query gives me output, so I know there's rows it should be matching - but I imagine I've made some obvious mistake. SELECT MAX(end) AS end FROM ( SELECT id, start, end, @r := @r + (start > @edate) AS num, @edate := GREATEST(@edate, end) FROM ( SELECT @r := 0, @edate := '0001-01-01' ) vars, student_lectures WHERE ( student_id = 1 AND start >= '2010-04-26 00:00:00' AND end <= '2010-04-30 23:59:59' ) ORDER BY start ) q GROUP BY num; The output this query returns: +---------------------+ | end | +---------------------+ | 2010-04-26 13:00:00 | | 2010-04-26 18:15:00 | | 2010-04-27 11:00:00 | | 2010-04-27 13:00:00 | | 2010-04-27 18:15:00 | | 2010-04-28 13:00:00 | | 2010-04-29 13:00:00 | | 2010-04-29 18:15:00 | | 2010-04-30 13:00:00 | | 2010-04-30 15:15:00 | | 2010-04-30 17:15:00 | +---------------------+ 11 rows in set (0.02 sec)
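
    One detail that stands out when reading the listing, offered as a guess rather than a diagnosis: demo() is created but never invoked, so nothing ever populates temp before the final SELECT. Calling it first (still under the $$ delimiter) would look like:

        CALL demo()$$

        SELECT * FROM temp$$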

    Read the article

  • Masspay and MySql

    - by Mike
    Hi, I am testing PayPal's MassPay using their 'MassPay NVP example', and I am having difficulty trying to amend the code so it inputs data from my MySQL database. Basically I have a users table in MySQL which contains an email address, a payment status (paid, unpaid) and a balance. CREATE TABLE `users` ( `user_id` int(10) unsigned NOT NULL auto_increment, `email` varchar(100) collate latin1_general_ci NOT NULL, `status` enum('unpaid','paid') collate latin1_general_ci NOT NULL default 'unpaid', `balance` int(10) NOT NULL default '0', PRIMARY KEY (`user_id`) ) ENGINE=MyISAM AUTO_INCREMENT=6 DEFAULT CHARSET=latin1 COLLATE=latin1_general_ci Data: 1 [email protected] paid 100 2 [email protected] unpaid 11 3 [email protected] unpaid 20 4 [email protected] unpaid 1 5 [email protected] unpaid 20 6 [email protected] unpaid 15 I have then created a query which selects users with an unpaid balance of $10 and above: $conn = db_connect(); $query=$conn->query("SELECT * from users WHERE balance >='10' AND status = ('unpaid')"); What I would like is for each record returned from the query to populate the code below. The code which I believe I need to amend is as follows: for($i = 0; $i < 3; $i++) { $receiverData = array( 'receiverEmail' => "[email protected]", 'amount' => "example_amount",); $receiversArray[$i] = $receiverData; } However I just can't get it to work. I have tried using mysqli_fetch_array and then replacing "[email protected]" with $row['email'] and "example_amount" with $row['balance'] in various ways, but it doesn't work. Also I need it to loop over however many rows were retrieved from the query, rather than the fixed < 3 in the for loop above. So the end result I am looking for is for the $nvpStr string to be passed with something like this: $nvpStr = "&EMAILSUBJECT=test&RECEIVERTYPE=EmailAddress&CURRENCYCODE=USD&[email protected]&L_Amt=11&[email protected]&L_Amt=11&[email protected]&L_Amt=20&[email protected]&L_Amt=20&[email protected]&L_Amt=15"; Thanks
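
    A sketch of the loop being described (object-oriented mysqli result handling from the $query above; the receiver-array layout follows the PayPal sample code quoted in the question):

        $receiversArray = array();
        $i = 0;
        while ($row = $query->fetch_assoc()) {
            $receiversArray[$i] = array(
                'receiverEmail' => $row['email'],
                'amount'        => $row['balance'],
            );
            $i++;
        }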

    Read the article

  • Hibernate Auto-Increment Setup

    - by dharga
    How do I define an entity for the following table. I've got something that isn't working and I just want to see what I'm supposed to do. USE [BAMPI_TP_dev] GO SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO SET ANSI_PADDING ON GO CREATE TABLE [dbo].[MemberSelectedOptions]( [OptionId] [int] NOT NULL, [SeqNo] [smallint] IDENTITY(1,1) NOT NULL, [OptionStatusCd] [char](1) NULL ) ON [PRIMARY] GO SET ANSI_PADDING OFF This is what I have already that isn't working. @Entity @Table(schema="dbo", name="MemberSelectedOptions") public class MemberSelectedOption extends BampiEntity implements Serializable { @Embeddable public static class MSOPK implements Serializable { private static final long serialVersionUID = 1L; @Column(name="OptionId") int optionId; @GeneratedValue(strategy=GenerationType.IDENTITY) @Column(name="SeqNo", unique=true, nullable=false) BigDecimal seqNo; //Getters and setters here... } private static final long serialVersionUID = 1L; @EmbeddedId MSOPK pk = new MSOPK(); @Column(name="OptionStatusCd") String optionStatusCd; //More Getters and setters here... } I get the following ST. [5/25/10 15:49:40:221 EDT] 0000003d JDBCException E org.slf4j.impl.JCLLoggerAdapter error Cannot insert explicit value for identity column in table 'MemberSelectedOptions' when IDENTITY_INSERT is set to OFF. [5/25/10 15:49:40:221 EDT] 0000003d AbstractFlush E org.slf4j.impl.JCLLoggerAdapter error Could not synchronize database state with session org.hibernate.exception.SQLGrammarException: could not insert: [com.bob.proj.ws.model.MemberSelectedOption] at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:90) at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:66) at org.hibernate.persister.entity.AbstractEntityPersister.insert(AbstractEntityPersister.java:2285) at org.hibernate.persister.entity.AbstractEntityPersister.insert(AbstractEntityPersister.java:2678) at org.hibernate.action.EntityInsertAction.execute(EntityInsertAction.java:79) at org.hibernate.engine.ActionQueue.execute(ActionQueue.java:279) at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:263) at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:167) at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:321) at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:50) at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1028) at org.hibernate.impl.SessionImpl.managedFlush(SessionImpl.java:366) at org.hibernate.transaction.JDBCTransaction.commit(JDBCTransaction.java:137) at com.bcbst.bamp.ws.dao.MemberSelectedOptionDAOImpl.saveMemberSelectedOption(MemberSelectedOptionDAOImpl.java:143) at com.bcbst.bamp.ws.common.AlertReminder.saveMemberSelectedOptions(AlertReminder.java:76) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)

    Read the article

  • Will rel=canonical break site: queries ?

    - by Justin Grant
    Our company publishes our software product's documentation using a custom-built content management system with a dynamic URL namespace like this: http://ourproduct.com/documentation/version/pageid where "version" is the version number to which the documentation applies, and "pageid" is a unique string which identifies that page in our back-end content management system. For example, if content (e.g. a page about configuration best practices) is unchanged between versions 3.0 and 4.0 of our product, it'd be reachable by two different URLs: http://ourproduct.com/documentation/3.0/configuration-best-practices http://ourproduct.com/documentation/4.0/configuration-best-practices This URL scheme allows us to scope Google search results to see only documentation for a particular product version, like this: configuration site:ourproduct.com/documentation/4.0 But when the user is searching across all versions, we don't want Google to arbitrarily choose one of the URLs to show in results. Instead, we always want the latest version to show up. Hence our planned use of rel=canonical, so we can prescriptively tell Google which URL we want to show up if multiple versions are being searched. (Users who do oddball things like searching 2 versions but not all of them are a corner case, so we don't care which version(s) show up there; the primary use-cases we care about are searching one version or searching all versions.) But what will happen to scoped searches if we do this? If my rel=canonical URL points to version 4.0, but my search is scoped to 3.0, will Google return a result? Even if you don't know the answer offhand, do you know a site which uses rel=canonical to redirect across folders in a URL namespace? If so, I could run a few Google searches and figure out the answer.
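
    For reference, the hint under discussion would be emitted in the head of the 3.0 page, pointing at the 4.0 URL, roughly like this:

        <link rel="canonical" href="http://ourproduct.com/documentation/4.0/configuration-best-practices" />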

    Read the article

  • SSIS: "Failure inserting into the read-only column <ColumnName>"

    - by Cory
    I have an Excel source going into an OLE DB destination. I'm inserting data into a view that has an INSTEAD OF trigger that handles all inserts. When I try to execute the package I receive this error: "Failure inserting into the read-only column ColumnName". What can I do to let SSIS know that this view is safe to insert into because there is an INSTEAD OF trigger that will handle the insert? EDIT (additional info): I have a flat file that is being inserted into a normalized database. My initial problem was how to take a flat file and insert that data into multiple tables while keeping track of all the primary/foreign key relationships. My solution was to create a VIEW that mimicked the structure of the flat file and then create an INSTEAD OF trigger on that view. In my INSTEAD OF trigger I handle the logic of maintaining all the relationships between tables. My view looks something like this: CREATE VIEW ImportView AS SELECT CONVERT(varchar(100), NULL) AS CustomerName, CONVERT(varchar(100), NULL) AS Address1, CONVERT(varchar(100), NULL) AS Address2, CONVERT(varchar(100), NULL) AS City, CONVERT(char(2), NULL) AS State, CONVERT(varchar(250), NULL) AS ItemOrdered, CONVERT(int, NULL) AS QuantityOrdered ... I will never need to select from this view; I only use it to insert data into it from this flat file I receive. I need some way to tell SQL Server that the fields aren't really read-only because there is an INSTEAD OF trigger on this view.

    Read the article

  • how can I speed up insertion of many rows to a table via ADO.NET?

    - by jcollum
    I have a table that has 5 columns: AcctId (int), Address1 (varchar), Address2 (varchar), Person1 (varchar), Person2 (varchar). I'm generating random data to insert into this table via a C# console application. I've tried doing this random data insert via SQL Server and decided it was not a good solution -- SQL is not good at randomness on a per-row basis. Generating the random data -- 975k rows of it -- takes a minimal amount of time. It's in a List of custom objects. I need to take this random data and update many rows in the database with the new random data. I tried updating the rows one at a time, which was very slow because of the repeated searching of the List object in code. So I think the best approach is to put all the randomized data into a table in the database, then update all the other tables that use this data. I.e. UPDATE t SET t.Address1=d.Address1 FROM Table1 t INNER JOIN RandomizedData d ON d.AcctId = t.Acct_ID. The database is very un-normalized, so this Acct data is sprinkled all over the place, and I have no control over the normalization. So, having decided to insert all of the randomized data into a single table, I set out to create insert scripts: USE TheDatabase Insert tmp_RandomizedData SELECT 1,'4392 EIGHTH AVE','','JENNIFER CARTER','BARBARA CARTER' UNION ALL SELECT 2,'2168 MAIN ST','HNGR F','DANIEL HERNANDEZ','SUSAN MARTIN' // etc another 98 times... // FYI, this is not real data! I'm building this INSERT script in batches of 100. It's taking on average 175 ms to run each insert. Does this seem like a long time? It's going to take about 35 mins to run the whole insert. The table doesn't have a primary key or any indexes. I was planning on adding those after all the data is inserted (thinking that would be faster). Is there a better way to do this?
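
    Since the data is already in memory, one alternative worth measuring (a sketch, assuming the List is first copied into a DataTable whose columns match tmp_RandomizedData) is to stream it with SqlBulkCopy instead of building INSERT ... UNION ALL batches:

        using (var bulk = new SqlBulkCopy(connection))
        {
            bulk.DestinationTableName = "dbo.tmp_RandomizedData";
            bulk.BatchSize = 10000;                  // avoid one huge transaction
            bulk.WriteToServer(randomizedDataTable); // DataTable built from the in-memory List
        }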

    Read the article

  • Unknown user 'app' with capistrano

    - by trobrock
    This is my first time trying to set up Capistrano to deploy a Rails application. I am deploying from my local machine to my remote server, which has the repo, web, app, and MySQL servers all on the same machine. I am following this walkthrough: http://www.capify.org/index.php/From_The_Beginning I get to the command cap deploy:start and then I get this error: *** [err :: example.com] sudo: unknown user: app command finished failed: "sh -c 'cd /var/www/example/current && sudo -p '\\''sudo password: '\\'' -u app nohup script/spin'" on example.com Am I supposed to add an 'app' user, or is there a way of changing what user the command runs as? This is my deploy.rb: set :application, "example" set :repository, "[email protected]:example.git" set :user, "trobrock" set :branch, 'master' set :deploy_to, "/var/www/example" set :scm, :git # Or: `accurev`, `bzr`, `cvs`, `darcs`, `git`, `mercurial`, `perforce`, `subversion` or `none` role :web, "example.com" # Your HTTP server, Apache/etc role :app, "example.com" # This may be the same as your `Web` server role :db, "example.com", :primary => true # This is where Rails migrations will run And obviously everywhere it says example.com is my server's hostname and everywhere it just says example is the app name.
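
    A hedged pointer rather than a verified fix: in Capistrano 2 the stock deploy:start recipe runs the spinner as the user named by the :runner variable, which defaults to "app" (hence the sudo -u app in the error). Pointing it at an account that exists on the server, or dropping sudo, in deploy.rb is the usual adjustment:

        set :runner, "trobrock"   # user that script/spin is run as
        # or
        set :use_sudo, false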

    Read the article
