Search Results

Search found 9929 results on 398 pages for 'azure tables'.

Page 346/398 | < Previous Page | 342 343 344 345 346 347 348 349 350 351 352 353  | Next Page >

  • ASP.NET MVC Membership, Get new UserID

    - by Vishal Bharakhda
    I am trying to register a new user and also understand how to get the new userID so I can start creating my own user tables with a userID mapping to the ASP.NET membership user table. Below is my code: [AcceptVerbs(HttpVerbs.Post)] public ActionResult Register(string userName, string email, string position, string password, string confirmPassword) { ViewData["PasswordLength"] = MembershipService.MinPasswordLength; ViewData["position"] = new SelectList(GetDeveloperPositionList()); if (ValidateRegistration(userName, email, position, password, confirmPassword)) { // Attempt to register the user MembershipCreateStatus createStatus = MembershipService.CreateUser(userName, password, email); if (createStatus == MembershipCreateStatus.Success) { FormsAuth.SignIn(userName, false /* createPersistentCookie */); return RedirectToAction("Index", "Home"); } else { ModelState.AddModelError("_FORM", ErrorCodeToString(createStatus)); } } // If we got this far, something failed, redisplay form return View(); } I've done some research and many sites inform me to use Membership.GetUser().ProviderUserKey; but this throws an error as Membership is NULL. I placed this line of code just above "return RedirectToAction("Index", "Home");" within the if statement. Can someone please advise me on this? Thanks in advance.

    Read the article

  • Update SQL Server 2000 to SQL Server 2008: Benefits please?

    - by Ciaran Archer
    Hi there, I'm looking for the benefits of upgrading from SQL Server 2000 to 2008. I was wondering: What database features can we leverage with 2008 that we can't now? What new TSQL features can we look forward to using? What performance benefits can we expect to see? What else will make management go for it? And the converse: What problems can we expect to encounter? What other problems have people found when migrating? Why fix something that isn't (technically) broken? We work in a Java shop, so any .NET / CLR stuff won't rock our world. We also use Eclipse as our main development environment, so any integration with Visual Studio won't be a plus. We do use SQL Server Management Studio, however. Some background: Our main database machine is a 32bit Dell Intel Xeon MP CPU 2.0GHz, 40MB of RAM with Physical Address Extension running Windows Server 2003 Enterprise Edition. We will not be changing our hardware. Our databases in total are under a TB with some having more than 200 tables. But they are busy and during busy times we see 60-80% CPU utilisation. Apart from the fact that SQL Server 2000 is coming close to end of life, why should we upgrade? Any and all contributions are appreciated!
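
    One concrete taste of the newer T-SQL surface: 2005/2008 add common table expressions and ranking functions, which replace much of the temp-table and self-join work required on 2000. A minimal sketch, not tied to this poster's schema, listing the three largest tables per schema from the catalog views (themselves new since 2005):

        WITH row_counts AS (
            SELECT s.name AS schema_name,
                   t.name AS table_name,
                   SUM(p.rows) AS row_count,
                   ROW_NUMBER() OVER (PARTITION BY s.name
                                      ORDER BY SUM(p.rows) DESC) AS rn
            FROM sys.tables t
            JOIN sys.schemas s ON s.schema_id = t.schema_id
            JOIN sys.partitions p ON p.object_id = t.object_id
                                 AND p.index_id IN (0, 1)   -- heap or clustered index rows
            GROUP BY s.name, t.name
        )
        SELECT schema_name, table_name, row_count
        FROM row_counts
        WHERE rn <= 3                                        -- top 3 tables per schema
        ORDER BY schema_name, row_count DESC;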

    Read the article

  • Linq to Entities custom ordering via position mapping table

    - by Bigfellahull
    Hi, I have a news table and I would like to implement custom ordering. I have done this before via a positional mapping table which has newsIds and a position. I then LEFT OUTER JOIN the position table ON news.newsId = position.itemId with a select case statement CASE WHEN [position] IS NULL THEN 9999 ELSE [position] END and order by position asc, articleDate desc. Now I am trying to do the same with Linq to Entities. I have set up my tables with a PK, FK relationship so that my News object has an Entity Collection of positions. Now comes the bit I can't work out: how to implement the LEFT OUTER JOIN. I have so far: var query = SelectMany (n => n.Positions, (n, s) => new { n, s }) .OrderBy(x => x.s.position) .ThenByDescending(x => x.n.articleDate) .Select(x => x.n); This kinda works; however, it uses an INNER JOIN, so it's not what I am after. I had another idea: ret = ret.OrderBy(n => n.ShufflePositions.Select(s => s.position)); However I get the error DbSortClause expressions must have a type that is order comparable. I also tried ret = ret.GroupJoin(tse.ShufflePositions, n => n.id, s => s.itemId, (n, s) => new { n, s }) .OrderBy(x => x.s.Select(z => z.position)) .ThenByDescending(x => x.n.articleDate) .Select(x => x.n); but I get the same error! If anyone can help me out, it would be much appreciated!

    Read the article

  • Getting the ranking of a photo in SQL

    - by Jake Petroules
    I have the following tables: Photos [ PhotoID, CategoryID, ... ] PK [ PhotoID ] Categories [ CategoryID, ... ] PK [ CategoryID ] Votes [ PhotoID, UserID, ... ] PK [ PhotoID, UserID ] A photo belongs to one category. A category may contain many photos. A user may vote once on any photo. A photo can be voted for by many users. I want to select the ranks of a photo (by counting how many votes it has) both overall and within the scope of the category that photo belongs to. The count of SELECT * FROM Votes WHERE PhotoID = @PhotoID being the number of votes a photo has. I want the resulting table to have generated columns for overall rank, and rank within category, so that I may order the results by either. So for example, the resulting table from the query should look like: PhotoID VoteCount RankOverall RankInCategory 1 48 1 7 3 45 2 5 19 33 3 1 2 17 4 3 7 9 5 5 ... ...you get the idea. How can I achieve this? So far I've got the following query to retrieve the vote counts, but I need to generate the ranks as well: SELECT PhotoID, UserID, CategoryID, DateUploaded, (SELECT COUNT(CommentID) AS Expr1 FROM dbo.Comments WHERE (PhotoID = dbo.Photos.PhotoID)) AS CommentCount, (SELECT COUNT(PhotoID) AS Expr1 FROM dbo.PhotoVotes WHERE (PhotoID = dbo.Photos.PhotoID)) AS VoteCount, Comments FROM dbo.Photos
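
    If this can target SQL Server 2005 or later, the ranking functions produce both rank columns in a single pass over the aggregated vote counts; a sketch under that assumption, using the Votes table as described above (the posted query references dbo.PhotoVotes instead, so substitute whichever name is real):

        WITH vote_counts AS (
            SELECT p.PhotoID,
                   p.CategoryID,
                   COUNT(v.UserID) AS VoteCount        -- photos with no votes count as 0
            FROM dbo.Photos p
            LEFT JOIN dbo.Votes v ON v.PhotoID = p.PhotoID
            GROUP BY p.PhotoID, p.CategoryID
        )
        SELECT PhotoID,
               VoteCount,
               RANK() OVER (ORDER BY VoteCount DESC)                         AS RankOverall,
               RANK() OVER (PARTITION BY CategoryID ORDER BY VoteCount DESC) AS RankInCategory
        FROM vote_counts
        ORDER BY RankOverall;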

    Read the article

  • Pre-populate iPhone Safari SQLite DB

    - by Matt Rogish
    I'm working with a PhoneGap app that uses Safari local storage (SQLite DB) via Javascript: http://developer.apple.com/safari/library/documentation/iPhone/Conceptual/SafariJSDatabaseGuide/UsingtheJavascriptDatabase/UsingtheJavascriptDatabase.html On first load, the app creates the database, tables, and populates the data via a series of INSERT statements. If the user closes the app while this processing is happening, then my app database is left in an inconsistent state. What I would prefer to do is deploy the SQLite DB as part of my iTunes App packaging so nothing must be populated at app cold start. However, I'm not sure if that is possible -- all of the Google hits for this topic that I can find refer to the Core Data-provided SQLite, which is not what we're using... If it's not possible, could I wrap the entire thing in a transaction and keep re-trying it when the app is restarted? Failing that, I guess I can create a simple table with one boolean column "is_app_db_loaded?" and set it to true after I've processed all my inserts. But that's really gross... Ideas? Thanks!!

    Read the article

  • Yii CGridView and dropdown filter

    - by Dmitriy Gunchenko
    I created a dropdown filter; it displays, but it doesn't work right. As I understand it, the trouble is in the search() method. The view: $this->widget('zii.widgets.grid.CGridView', array( 'dataProvider'=>$model->search(), 'filter' => $model, 'columns'=>array( array( 'name' => 'client_id', 'filter' => CHtml::listData(Client::model()->findAll(), 'client_id', 'name'), 'value'=>'$data->client->name' ), 'task' ) )); I have two tables, and their relations are shown in the model below: public function relations() { // NOTE: you may need to adjust the relation name and the related // class name for the relations automatically generated below. return array( 'client' => array(self::BELONGS_TO, 'Client', 'client_id'), ); } public function search() { // Warning: Please modify the following code to remove attributes that // should not be searched. $criteria=new CDbCriteria; $criteria->with = array('client'); $criteria->compare('task_id',$this->task_id); $criteria->compare('client_id',$this->client_id); $criteria->compare('name',$this->client->name); $criteria->compare('task',$this->task,true); $criteria->compare('start_date',$this->start_date,true); $criteria->compare('end_date',$this->end_date,true); $criteria->compare('complete',$this->complete); return new CActiveDataProvider($this, array( 'criteria'=>$criteria, )); }

    Read the article

  • How do I programmatically verify, create, and update SQL table structure?

    - by JYelton
    Scenario: I have an application (C#) that expects a SQL database and login, which are set by a user. Once connected, it checks for the existence of several tables and creates them if not found. I'd like to expand on this by having the program be capable of adding columns to those tables if I release a new version of the program which relies upon the new columns. Question: What is the best way to programmatically check the structure of an existing SQL table and create or update it to match an expected structure? I am planning to iterate through the list of required columns and alter the existing table whenever it does not contain the new column. I can't help but wonder if there's an approach that is different or better. Criteria: Here are some of my expectations and self-imposed rules: Newer versions of the program might no longer use certain columns, but they would be retained for data logging purposes. In other words, no columns will be removed. Existing data in the table must be preserved, so the table cannot simply be dropped and recreated. In all cases, newly added columns would allow null data, so the population of old records is taken care of by having default null values. Example: Here is a sample table (because visual examples help!): id sensor_name sensor_status x1 x2 x3 x4 1 na019 OK 0.01 0.21 1.41 1.22 Then, in a new version, I may want to add the column x5. The "x-columns" are all data-storage columns that accept null.
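
    For the "add the column only if it is missing" step, a common approach is to test INFORMATION_SCHEMA.COLUMNS before each ALTER; the application can send one such batch per required column. A T-SQL sketch (the table and column names here are hypothetical, since the sample table above is unnamed):

        IF NOT EXISTS (SELECT 1
                       FROM INFORMATION_SCHEMA.COLUMNS
                       WHERE TABLE_NAME = 'SensorData'    -- hypothetical table name
                         AND COLUMN_NAME = 'x5')
        BEGIN
            -- New columns are nullable, so existing rows simply hold NULL.
            ALTER TABLE SensorData ADD x5 REAL NULL;
        END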

    Read the article

  • Commercial web application--scalable database design

    - by Rob Campbell
    I'm designing a set of web apps to track scientific laboratory data. Each laboratory has several members, each of whom will access both their own data and that of their laboratory as a whole. Many typical queries will thus be expected to return records of multiple members (e.g. my mouse, joe's mouse and sally's mouse). I think I have the database fairly well normalized. I'm now wondering how to ensure that users can efficiently access both their own data and their lab's data set when it is mixed among (hopefully) a whole ton of records from other labs. What I've come up with so far is that most tables will end with two fields: user_id and labgroup_id. The WHERE clause of any SELECT statement will include the appropriate reference to one of the id fields ("...WHERE labgroup_id=n..." or "...WHERE user_id=n..."). My questions are: Is this an approach that will scale to 10^6 or more records? If so, what's the best way to use these fields in a query so that it most efficiently searches the relevant subset of the database? e.g. Should the first step in querying be to create a temporary table containing just the labgroup's data? Or will indexing using some combination of the id, user_id, and labgroup_id fields be sufficient at that scale? I thank any responders very much in advance.
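
    At 10^6 rows this is well within what ordinary composite indexes handle; a per-query temporary table of the lab's data would usually cost more than it saves. A sketch of the kind of indexes that back those two WHERE clauses (the table name is hypothetical):

        -- One index per access path; the optimizer picks whichever matches the filter.
        CREATE INDEX idx_samples_labgroup ON samples (labgroup_id);
        CREATE INDEX idx_samples_user     ON samples (user_id);

        -- Typical queries then filter directly, with no intermediate temp table:
        SELECT * FROM samples WHERE labgroup_id = 42;
        SELECT * FROM samples WHERE user_id = 7;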

    Read the article

  • SQL Server Long Query

    - by thormj
    Ok... I don't understand why this query is taking so long (MSSQL Server 2005): [Typical output 3K rows, 5.5 minute execution time] SELECT dbo.Point.PointDriverID, dbo.Point.AssetID, dbo.Point.PointID, dbo.Point.PointTypeID, dbo.Point.PointName, dbo.Point.ForeignID, dbo.Pointtype.TrendInterval, coalesce(dbo.Point.trendpts,5) AS TrendPts, LastTimeStamp = PointDTTM, LastValue=PointValue, Timezone FROM dbo.Point LEFT JOIN dbo.PointType ON dbo.PointType.PointTypeID = dbo.Point.PointTypeID LEFT JOIN dbo.PointData ON dbo.Point.PointID = dbo.PointData.PointID AND PointDTTM = (SELECT Max(PointDTTM) FROM dbo.PointData WHERE PointData.PointID = Point.PointID) LEFT JOIN dbo.SiteAsset ON dbo.SiteAsset.AssetID = dbo.Point.AssetID LEFT JOIN dbo.Site ON dbo.Site.SiteID = dbo.SiteAsset.SiteID WHERE onlinetrended =1 and WantTrend=1 PointData is the biggun, but I thought its definition should allow me to pick up what I want easily enough: CREATE TABLE [dbo].[PointData]( [PointID] [int] NOT NULL, [PointDTTM] [datetime] NOT NULL, [PointValue] [real] NULL, [DataQuality] [tinyint] NULL, CONSTRAINT [PK_PointData_1] PRIMARY KEY CLUSTERED ( [PointID] ASC, [PointDTTM] ASC ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] GO CREATE NONCLUSTERED INDEX [IX_PointDataDesc] ON [dbo].[PointData] ( [PointID] ASC, [PointDTTM] DESC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] GO PointData is 550M rows, and Point (source of PointID) is only 28K rows. I tried making an Indexed View, but I can't figure out how to get the Last Timestamp/Value out of it in a compatible way (no Max, no subquery, no CTE). This runs twice an hour, and after it runs I put more data into those 3K PointID's that I selected. I thought about creating LastTime/LastValue tables directly into Point, but that seems like the wrong approach. Am I missing something, or should I rebuild something? (I'm also the DBA, but I know very little about A'ing a DB!)
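
    One rewrite worth testing on 2005 is replacing the correlated MAX() join with OUTER APPLY ... TOP 1, which can seek the (PointID, PointDTTM DESC) index once per point instead of touching PointData twice; a sketch against the tables as posted (it assumes Timezone comes from Site and that onlinetrended/WantTrend live on Point):

        SELECT p.PointDriverID, p.AssetID, p.PointID, p.PointTypeID,
               p.PointName, p.ForeignID, pt.TrendInterval,
               COALESCE(p.trendpts, 5) AS TrendPts,
               pd.PointDTTM  AS LastTimeStamp,
               pd.PointValue AS LastValue,
               s.Timezone
        FROM dbo.Point p
        LEFT JOIN dbo.PointType pt ON pt.PointTypeID = p.PointTypeID
        OUTER APPLY (SELECT TOP 1 d.PointDTTM, d.PointValue
                     FROM dbo.PointData d
                     WHERE d.PointID = p.PointID
                     ORDER BY d.PointDTTM DESC) AS pd      -- should use IX_PointDataDesc
        LEFT JOIN dbo.SiteAsset sa ON sa.AssetID = p.AssetID
        LEFT JOIN dbo.Site s ON s.SiteID = sa.SiteID
        WHERE p.onlinetrended = 1 AND p.WantTrend = 1;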

    Read the article

  • Overflow in table cells

    - by Ezdaroth
    I need to create a chat layout that uses all the available space and scales nicely, but has a few fixed sizes. Here's the structure: <table style="width: 100%; height: 100%"> <tr> <td></td> <td style="width: 200px; background: red;"></td> </tr> <tr> <td style="height: 100px; background: blue"></td> <td></td> </tr> </table> However, I want to place a lot of content in the first table cell and I want it to scroll, so it won't expand the table. Is it possible to make it overflow properly, without having a fixed height for the cell? Simply adding overflow: auto doesn't seem to work. PS. I hate tables, but can't figure out a very clean and cross-browser way to do a layout like this with divs and CSS. If someone can come up with one, I'll gladly use it.

    Read the article

  • MySQL db Audit Trail Trigger

    - by Natkeeran
    I need to track changes (audit trail) in certain tables in a MySql Db. I am trying to implement the solution suggested here. I have an AuditLog Table with the following columns: AuditLogID, TableName, RowPK, FieldName, OldValue, NewValue, TimeStamp. The mysql stored procedure is the following (this executes fine, and creates the procedure): The call to the procedure such as: CALL addLogTrigger('ProductTypes', 'ProductTypeID'); executes, but does not create any triggers (see the image). SHOW TRIGGERS returns empty set. Please let me know what could be the issue, or an alternate way to implement this. DROP PROCEDURE IF EXISTS addLogTrigger; DELIMITER $ CREATE PROCEDURE addLogTrigger(IN tableName VARCHAR(255), IN pkField VARCHAR(255)) BEGIN SELECT CONCAT( 'DELIMITER $\n', 'CREATE TRIGGER ', tableName, '_AU AFTER UPDATE ON ', tableName, ' FOR EACH ROW BEGIN ', GROUP_CONCAT( CONCAT( 'IF NOT( OLD.', column_name, ' <=> NEW.', column_name, ') THEN INSERT INTO AuditLog (', 'TableName, ', 'RowPK, ', 'FieldName, ', 'OldValue, ', 'NewValue' ') VALUES ( ''', table_name, ''', NEW.', pkField, ', ''', column_name, ''', OLD.', column_name, ', NEW.', column_name, '); END IF;' ) SEPARATOR ' ' ), ' END;$' ) FROM information_schema.columns WHERE table_schema = database() AND table_name = tableName; END$ DELIMITER ;
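
    One thing to check: the procedure above only SELECTs the generated CREATE TRIGGER string back to the client; nothing in it executes that string, and as far as the dynamic route goes, CREATE TRIGGER is not among the statements MySQL of that era accepts through PREPARE/EXECUTE. So the generated DDL has to be copied out and run as an ordinary statement (or issued from application code). For a hypothetical ProductTypes table with a single audited Name column, the hand-run statement would look like this:

        DELIMITER $
        CREATE TRIGGER ProductTypes_AU AFTER UPDATE ON ProductTypes
        FOR EACH ROW BEGIN
            -- 'Name' is a hypothetical audited column on ProductTypes
            IF NOT (OLD.Name <=> NEW.Name) THEN
                INSERT INTO AuditLog (TableName, RowPK, FieldName, OldValue, NewValue)
                VALUES ('ProductTypes', NEW.ProductTypeID, 'Name', OLD.Name, NEW.Name);
            END IF;
        END$
        DELIMITER ;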

    Read the article

  • Could DataGridView be this dumb? Or is it me? lol

    - by Selase
    I am trying to bind data to a dropdown list on page load based on a condition. The code below explains further. public partial class AddExhibit : System.Web.UI.Page { string adminID, caseIDRetrieved; DataSet caseDataSet = new DataSet(); SqlDataAdapter caseSqlDataAdapter = new SqlDataAdapter(); string strConn = WebConfigurationManager.ConnectionStrings["CMSSQL3ConnectionString1"].ConnectionString; protected void Page_Load(object sender, EventArgs e) { adminID = Request.QueryString["adminID"]; caseIDRetrieved = Request.QueryString["caseID"]; if (caseIDRetrieved != null) { CaseIDDropDownList.Text = caseIDRetrieved; //CaseIDDropDownList.Enabled = false; } else { try { CreateDataSet(); DataView caseDataView = new DataView(caseDataSet.Tables[0]); CaseIDDropDownList.DataSource = caseDataView; CaseIDDropDownList.DataBind(); } catch (Exception ex) { string script = "<script>alert('" + ex.Message + "');</script>"; } } } The CreateDataSet method that is called in the if..else statement contains the following code. private void CreateDataSet() { SqlConnection caseConnection = new SqlConnection(strConn); caseSqlDataAdapter.SelectCommand = new SqlCommand("Select CaseID FROM Cases", caseConnection); caseSqlDataAdapter.Fill(caseDataSet); } However, when I load the page and, as usual, the condition that is supposed to bind the data is met, the gridview decides to display as follows... IS IT ME OR IS IT THE DATAGRID?...??

    Read the article

  • Class design when working with dataset

    - by MC
    If you have to retrieve data from a database and bring this dataset to the client, and then allow the user to manipulate the data in various ways before updating the database again, what is a good class design for this if the data tables will not have a 1:1 relationship with the class objects? Here are some I came up with: Just manipulate the DataSet itself on the client and then send it back to the database as is. This will work, though obviously the code will be very dirty and not well-structured. Same as #1, but wrap classes around the dataset code. What I mean is that you may have a class that takes a dataset or a datatable in its constructor, and then provides public methods and properties to simplify the code. Inside these methods and properties it will be reading or manipulating the dataset. Updating the database afterwards will be easy because you already have the updated dataset. Get rid of the dataset entirely on the client, convert to objects, then convert back to a dataset when needing to update the database. Are there any good resources where I can find information on this?

    Read the article

  • Getting the avg of the top 10 students from each school

    - by dave
    Hi all -- We have a school district with 38 elementary schools. The kids took a test. The averages for the schools are widely dispersed, but I want to compare the averages of JUST THE TOP 10 students from each school. Requirement: use temporary tables only. I have done this in a very work-intensive, error-prone sort of way as follows. (sch_code = e.g., 9043; -- schabbrev = e.g., "Carter"; -- totpct_stu = e.g., 61.3) DROP TEMPORARY TABLE IF EXISTS avg_top10 ; CREATE TEMPORARY TABLE avg_top10 ( sch_code VARCHAR(4), schabbrev VARCHAR(75), totpct_stu DECIMAL(5,1) ); INSERT INTO avg_top10 SELECT sch_code , schabbrev , totpct_stu FROM test_table WHERE sch_code IN ('5489') ORDER BY totpct_stu DESC LIMIT 10; -- I do that last query for EVERY school, so the total -- length of the code is well in excess of 300 lines. -- Then, finally... SELECT schabbrev, ROUND( AVG( totpct_stu ), 1 ) AS top10 FROM avg_top10 GROUP BY schabbrev ORDER BY top10 ; -- OUTPUT: ----------------------------------- schabbrev avg_top10 ---------- --------- Goulding 75.4 Garth 77.7 Sperhead 81.4 Oak_P 83.7 Spring 84.9 -- etc... Question: So this works, but isn't there a lot better way to do it? Thanks! PS -- Looks like homework, but this is, well...real.
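
    A single temporary table can replace the 38 hand-written blocks if the "top 10 per school" filter is expressed as a correlated count (keep a row only when fewer than 10 rows in the same school score higher). A sketch against the same test_table; note that ties at 10th place can let a school contribute slightly more than 10 rows:

        DROP TEMPORARY TABLE IF EXISTS avg_top10;

        CREATE TEMPORARY TABLE avg_top10
        SELECT t.sch_code, t.schabbrev, t.totpct_stu
        FROM test_table t
        WHERE (SELECT COUNT(*)
               FROM test_table x
               WHERE x.sch_code   = t.sch_code
                 AND x.totpct_stu > t.totpct_stu) < 10;   -- fewer than 10 higher scores

        SELECT schabbrev, ROUND(AVG(totpct_stu), 1) AS top10
        FROM avg_top10
        GROUP BY schabbrev
        ORDER BY top10;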

    Read the article

  • Fast, easy, and secure method to perform DB actions with GET

    - by rob - not a robber
    Hey All, Sort of a methods/best practices question here that I am sure has been addressed, yet I can't find a solution based on the vague search terms I enter. I know starting off the question with "Fast and easy" will probably draw out a few sighs, so my apologies. Here is the deal. I have a logged in area where an ADMIN can do a whole host of POST operations to input data relating to their profile. The way I have data structured is pretty distinct and well segmented in most tables as it relates to the ID of the admin. Now, I have a table where I dump one type of data into and differentiate this data by assigning the ADMIN's unique ID to each record. In other words, all ADMINs have this one type of data writing to this table. I just differentiate by the ADMIN ID with each record. I was planning on letting the ADMIN remove these records by clicking on a link with a query string - obviously using GET. Obviously, the query structure is in the link so any logged in admin could then exploit the URL and delete a competitor's records. Is the only way to safely do this through POST or should I pass through the session info that includes password and validate it against the ADMIN ID that is requesting the delete? This is obviously much more work for me. As they said in the auto repair biz I used to work in... there are 3 ways to do a job: Fast, Good, and Cheap. You can only have two at a time. Fast and cheap will not be good. Good and cheap will not have fast turnaround. Fast and good will NOT be cheap. haha I guess that applies here... can never have Fast, Easy and Secure all at once ;) Thanks in advance...
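
    Whichever verb the request uses, the cheap safeguard is to never trust the ID in the URL by itself: scope the DELETE to the admin ID held in the server-side session, so a guessed or edited query string deletes nothing that isn't theirs. A parameterized sketch (table and column names are made up for illustration):

        -- :record_id comes from the query string; :admin_id comes from the session.
        DELETE FROM admin_records
        WHERE record_id = :record_id
          AND admin_id  = :admin_id;

    GET is still a poor fit for destructive actions (link prefetchers and crawlers follow GETs), so a POSTed form or at least a confirmation step is worth the extra markup even with the ownership check in place.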

    Read the article

  • Database table relationships: Always also relate to specified value (Linq to SQL in .NET Framework)

    - by sinni800
    I really can not describe my question better in the title. If anyone has suggestions: Please tell! I use the Linq to SQL framework in .NET. I ran into something which could be easily solved if the framework supported this, it would be a lot of extra coding otherwise: I have a n to n relation with a helper table in between. Those tables are: Items, places and the connection table which relates items to places and the other way. One item can be found in many places, so can one place have many items. Now of course there will be many items which will be in ALL places. Now there is a problem: Places can always be added. So I need a place-ID which encompasses ALL places, always. Like maybe a place-id "0". If the helper table has a row with the place-id of zero, this should be visible in all places. In SQL this would be a simple "Where [...] or place-id = 0", but how do I do this in Linq relations? Also, for a little side question: How could I manage "all but this place" kind of exclusions?

    Read the article

  • Non-English domain naming issues in programming

    - by Svend
    Most programming code, I imagine, is written in English. But I'm curious how people handle the issue of naming here. A lot of programming is done within some business domain, usually with well-established terms for certain procedures and items. I'm from Denmark, for instance, and something I work a lot with has a term called "indblikskode", which sort of translates to "insight code". So, do I use the line "string indblikskode = ..." in the C# code for some web service related to this? Or do I try to use a translation, such as "insightcode"? The business I'm in isn't even consistent in its language, for instance using the term "organisatorisk enhed" (organizational unit), but just as often using the abbreviation "OU", which is obviously abbreviated from the English. How do other people handle this naming issue, while keeping consistent and sane (in everything from simple variable names in your code, to database tables, to server names)? Duplicates: Should identifiers and comments always be in English or in the native language of the application and developers? Do you use another language instead of English?

    Read the article

  • How many users are sufficient to create a heavy load for a web application?

    - by galymzhan
    I have a web application which has been suffering from high load in recent days. The application runs on a single server with an 8-core Intel CPU and 4GB of RAM. Software: Drupal 5 (Apache 2, PHP5, MySQL5) running on Debian. After reaching 500 authenticated and 200 anonymous users (simultaneous), the application drastically decreases its performance up to total failure. The biggest load comes from authenticated users, who perform activities causing inserts/updates/deletes on the db. I think MySQL is the bottleneck. Is it normal to slow down with that number of users? EDIT: I forgot to mention that I did some kind of profiling. I ran the commands top and htop, and they showed me that all memory was being used by MySQL! After some time MySQL starts to perform terribly slowly, the site goes down, and we have to restart/stop Apache to reduce load. Administrators said that there were about 200 active MySQL connections at that moment. The worst point is that we need to solve this ASAP; I can't do deep profiling analysis/code refactoring, so I'm considering 2 ways: my tables are MyISAM, and I heard they use table-level locking, which is very slow; is that right? Could I change them to InnoDB without worry? What if I take MySQL and move it to a dedicated machine with a lot of RAM?
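
    On the two concrete options: MyISAM does take table-level locks, which is painful under a mixed read/write load like this, and the conversion itself is one statement per table (it rewrites the table, so run it off-peak, and check first for anything MyISAM-specific such as FULLTEXT indexes, which InnoDB of that era does not support). A sketch:

        -- See which tables are still MyISAM.
        SHOW TABLE STATUS WHERE Engine = 'MyISAM';

        -- Convert a table; this copies and rebuilds it, so do it during a quiet window.
        ALTER TABLE node ENGINE = InnoDB;   -- 'node' is just an example Drupal table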

    Read the article

  • PHP Commercial Project Function define

    - by Shiro
    Currently I am working on a commercial project with PHP. I think this question does not really apply only to PHP but to all programming languages; I just want to discuss how you guys solve it. I work in an MVC framework (CodeIgniter), so all the database transaction code is in model classes. Previously, I separated different search criteria into different function names. Just an example: function get_student_detail_by_ID($id){} function get_student_detail_by_name($name){} As you can see, the functions can actually be merged into one by just adding a parameter. But when you are rushing a project, you don't look back at what you wrote previously to see which similar function could meet the goal with a few changes. In this case, we found that there are a lot of functions there and they are hard to maintain. Recently, we tried to group each entity into one ultimate search function, something like this: function get_ResList($is_row_count=FALSE, $record_start=0, $arr_search_criteria='', $paging_limit=20, $orderby='name', $sortdir='ASC') We try to make this function fit all the search criteria. However, our system is getting bigger and bigger, and the search criteria no longer involve just 1-2 tables; they require joins with other tables for different purposes. What we have done is use IF ELSE: if(bla bla bla) { $sql_join = JOIN_SOME_TABLE; $sql_where = CONDITION; } In the end, we found the function very hard to maintain, and it is very hard to debug as well. I would like to ask your opinion: how do commercial projects solve this kind of issue, and how should such a function be defined and revised? I think this is linked to project management skill. I hope you are willing to share with us. Thanks.

    Read the article

  • Help a CRUD programmer think about an "approval workflow"

    - by gerdemb
    I've been working on a web application that is basically a CRUD application (Create, Read, Update, Delete). Recently, I've started working on what I'm calling an "approval workflow". Basically, a request is generated for a material and then sent for approval to a manager. Depending on what is requested, different people need to approve the request or perhaps send it back to the requester for modification. The approvers need to keep track of what to approve and what has been approved, and the requesters need to see the status of their requests. As a "CRUD" developer, I'm having a hard time wrapping my head around how to design this. What database tables should I have? How do I keep track of the state of the request? How should I notify users of actions that have happened to their requests? Is there a design pattern that could help me with this? Should I be drawing state machines in my code? I think this is a generic programming question, but if it makes any difference I'm using Django with MySQL.
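
    One way to keep this CRUD-shaped is to store the workflow state as plain data: each request row carries its current status, and every submit/approve/reject/return action is appended to a history table, which doubles as the audit trail and the feed for notifications. The allowed transitions can then live in a small piece of application code (effectively the state machine) rather than in the schema. A minimal MySQL sketch with illustrative names, not a prescribed design:

        CREATE TABLE request (
            id           INT AUTO_INCREMENT PRIMARY KEY,
            requester_id INT NOT NULL,
            material     VARCHAR(255) NOT NULL,
            status       ENUM('draft','submitted','approved','rejected','returned')
                         NOT NULL DEFAULT 'draft',
            created_at   DATETIME NOT NULL
        );

        CREATE TABLE request_action (
            id         INT AUTO_INCREMENT PRIMARY KEY,
            request_id INT NOT NULL,
            actor_id   INT NOT NULL,
            action     ENUM('submit','approve','reject','return') NOT NULL,
            comment    TEXT,
            acted_at   DATETIME NOT NULL,
            FOREIGN KEY (request_id) REFERENCES request (id)
        );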

    Read the article

  • Getting the most recent post based on date

    - by camcim
    Hi guys, How do I go about displaying the most recent post when I have two tables, both containing a column called creation_date This would be simple if all I had to do was get the most recent post based on posts created_on value however if a post contains replies I need to factor this into the equation. If a post has a more recent reply I want to get the replies created_on value but also get the posts post_id and subject. The posts table structure: CREATE TABLE `posts` ( `post_id` bigint(20) unsigned NOT NULL auto_increment, `cat_id` bigint(20) NOT NULL, `user_id` bigint(20) NOT NULL, `subject` tinytext NOT NULL, `comments` text NOT NULL, `created_on` datetime NOT NULL, `status` varchar(10) NOT NULL default 'INACTIVE', `private_post` varchar(10) NOT NULL default 'PUBLIC', `db_location` varchar(10) NOT NULL, PRIMARY KEY (`post_id`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=7 ; The replies table structure: CREATE TABLE `replies` ( `reply_id` bigint(20) unsigned NOT NULL auto_increment, `post_id` bigint(20) NOT NULL, `user_id` bigint(20) NOT NULL, `comments` text NOT NULL, `created_on` datetime NOT NULL, `notify` varchar(5) NOT NULL default 'YES', `status` varchar(10) NOT NULL default 'INACTIVE', `db_location` varchar(10) NOT NULL, PRIMARY KEY (`reply_id`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=5 ; Here is my query so far. I've removed my attempt of extracting the dates. $strQuery = "SELECT posts.post_id, posts.created_on, replies.created_on, posts.subject "; $strQuery = $strQuery."FROM posts ,replies "; $strQuery = $strQuery."WHERE posts.post_id = replies.post_id "; $strQuery = $strQuery."AND posts.cat_id = '".$row->cat_id."'";

    Read the article

  • Ways to update a dependent table in the same MySQL transaction?

    - by codie
    I need to update two tables inside a single transaction. The individual queries look something like this: 1. INSERT INTO t1 (col1, col2) VALUES (val1, val2) ON DUPLICATE KEY UPDATE col2 = val2; If the above query causes an insert then I need to run the following statement on the second table: 2. INSERT INTO t2 (col1, col2) VALUES (val1, val2) ON DUPLICATE KEY UPDATE col2 = col2 + val2; otherwise, 3. UPDATE t2 SET col2 = col2 - old_val2 + val2 WHERE col1 = val1; -- old_val2 is the value of t1.col2 before it was updated Right now I run a SELECT on t1 first, to determine whether statement 1 will cause an insert or update on t1. Then I run statement 1 and either of 2 and 3 inside a transaction. What are the ways in which I can do all of these inside one transaction itself? The approach I was thinking of is the following: UPDATE t2, t1 set t2.col2 = t2.col2 - t1.col2 WHERE t1.col1 = t2.col2 and t1.col1 = val1; INSERT INTO t1 (col1, col2) VALUES (val1, val2) ON DUPLICATE KEY UPDATE col2 = val2; INSERT INTO t2, t1 (t2.col1, t2.col2) VALUES (t1.col1, t1.col2) ON DUPLICATE KEY UPDATE t2.col2 = t2.col2 + t1.col2 WHERE t1.col1 = t2.col2 and t1.col1 = val1; Unfortunately, there's no multi-table INSERT... ON DUPLICATE KEY UPDATE in MySQL 5.0. What else could I do?
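
    One way to drop the up-front SELECT is to let MySQL report what statement 1 actually did: after INSERT ... ON DUPLICATE KEY UPDATE, the affected-row count is 1 for a fresh insert, 2 for an update of an existing row, and 0 if the row was left untouched, so the branch between statements 2 and 3 can be decided inside the same transaction. A sketch with the application-side branching shown as comments (val1/val2 stand for real values, as in the statements above); note that statement 3 still needs old_val2, so if you take this route, read it with SELECT ... FOR UPDATE inside the same transaction to keep it consistent:

        START TRANSACTION;

        INSERT INTO t1 (col1, col2) VALUES (val1, val2)
            ON DUPLICATE KEY UPDATE col2 = val2;

        -- ROW_COUNT() = 1 -> new row inserted: run statement 2 against t2
        -- ROW_COUNT() = 2 -> existing row updated: run statement 3 against t2
        -- ROW_COUNT() = 0 -> row already held these values: nothing to change
        SELECT ROW_COUNT();

        COMMIT;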

    Read the article

  • JPA concatenating table names for parent/child @OneToMany

    - by Robert
    We are trying to use a basic @OneToMany relationship: @Entity @Table(name = "PARENT_MESSAGE") public class ParentMessage { @Id @Column(name = "PARENT_ID") @GeneratedValue(strategy = GenerationType.IDENTITY) private Integer parentId; @OneToMany(fetch=FetchType.LAZY) private List childMessages; public List getChildMessages() { return this.childMessages; } ... } @Entity @Table(name = "CHILD_MSG_USER_MAP") public class ChildMessage { @Id @Column(name = "CHILD_ID") @GeneratedValue(strategy = GenerationType.IDENTITY) private Integer childId; @ManyToOne(optional=false,targetEntity=ParentMessage.class,cascade={CascadeType.REFRESH}, fetch=FetchType.LAZY) private ParentMessage parentMsg; public ParentMessage getParentMsg() { return parentMsg; } ... } ChildMessage child = new ChildMessage(); em.getTransaction().begin(); ParentMessage parentMessage = (ParentMessage) em.find(ParentMessage.class, parentId); child.setParentMsg(parentMessage); List list = parentMessage.getChildMessages(); if(list == null) list = new ArrayList(); list.add(child); em.getTransaction().commit(); We receive the following error. Why is OpenJPA concatenating the table names to APP.PARENT_MESSAGE_CHILD_MSG_USER_MAP? Of course that table doesn't exist.. the tables defined are APP.PARENT_MESSAGE and APP.CHILD_MSG_USER_MAP Caused by: org.apache.openjpa.lib.jdbc.ReportingSQLException: Table/View 'APP.PARENT_MESSAGE_CHILD_MSG_USER_MAP' does not exist. {SELECT t1.CHILD_ID, t1.PARENT_ID, t1.CREATED_TIME, t1.USER_ID FROM APP.PARENT_MESSAGE_CHILD_MSG_USER_MAP t0 INNER JOIN APP.CHILD_MSG_USER_MAP t1 ON t0.CHILDMESSAGES_CHILD_ID = t1.CHILD_ID WHERE t0.PARENTMESSAGE_PARENT_ID = ?} [code=30000, state=42X05]

    Read the article

  • How to model a follower stream in App Engine?

    - by molicule
    I am trying to design tables to build out a follower relationship. Say I have a stream of 140-char records that have user, hashtag and other text. Users follow other users, and can also follow hashtags. I am outlining the way I've designed this below, but there are two limitations in my design. I was wondering if others had smarter ways to accomplish the same goal. The issues with this are: (1) the list of followers is copied in for each record; (2) if a new follower is added or one removed, 'all' the records have to be updated. The code: class HashtagFollowers(db.Model): """ This table contains the followers for each hashtag """ hashtag = db.StringProperty() followers = db.StringListProperty() class UserFollowers(db.Model): """ This table contains the followers for each user """ username = db.StringProperty() followers = db.StringListProperty() class stream(db.Model): """ This table contains the data stream """ username = db.StringProperty() hashtag = db.StringProperty() text = db.TextProperty() def save(self): """ On each save all the followers for each hashtag and user are added into another table with this record as the parent """ super(stream, self).save() hfs = HashtagFollowers.all().filter("hashtag =", self.hashtag).fetch(10) for hf in hfs: sh = streamHashtags(parent=self, followers=hf.followers) sh.save() ufs = UserFollowers.all().filter("username =", self.username).fetch(10) for uf in ufs: uh = streamUsers(parent=self, followers=uf.followers) uh.save() class streamHashtags(db.Model): """ The stream record is the parent of this record """ followers = db.StringListProperty() class streamUsers(db.Model): """ The stream record is the parent of this record """ followers = db.StringListProperty() Now, to get the stream of followed hashtags: indexes = db.GqlQuery("""SELECT __key__ from streamHashtags where followers = 'myusername'""") keys = [k.parent() for k in indexes[offset:numresults]] return db.get(keys) Is there a smarter way to do this?

    Read the article

  • Stored Procedures In Source Control - Automate Build/Deployment Process

    - by Alex
    My company provides a large .NET service-oriented solution. The services layer interact with a T-SQL back-end consisting of hundreds of tables and stored procedures. Our C# code is in version-control (SVN) but our stored procedures and schema are not. After much lobbying of expedient upper-management, I was allowed to review our (non-existent) build/deployment process to accomplish the following goals: Place schema and stored procedures under source-control. Automate the build/deployment process. I would like to proceed per the accepted answer's strategy in this post but have additional questions: I would like to use Hudson as my build server. Is this a reasonable choice for a C#/SQL solution? What better alternatives should I explore? Assuming I have all triggers, stored-procedures, schema, etc... under source control, and that they are scripted to individual files, how do I generate a build script which will take into account dependencies/references between these items? (SQL Server does this automatically, but it generates one giant script) What does the workflow of performing an update at the client look like? i.e. I have to keep existing table data. How do I roll-back schema changes? I am the only programmer. Several other pseudo-technical staff like to make changes directly inside SQL Management Studio. Is it realistic to expect others to adhere to this solution -- how can I enforce this? Thank you in advance for your help.

    Read the article
