Search Results

Search found 8267 results on 331 pages for 'insert into'.

Page 225 of 331

  • Using DataTypeAttribute to validate a date

    - by Andy Evans
    I'm having some difficulty understanding how to validate a date of birth (DOB) using MVC2. What I want to check is: 1) is the date entered a valid date, and 2) is the date at least 13 years in the past? For example, to validate an email I use the following code: [Required(ErrorMessage = "Email address is required.")] [StringLength(320, ErrorMessage = "Email must be less than 320 characters.")] [Email(ErrorMessage = "This email address is invalid.")] public string email { get; set; } The EmailAttribute it refers to is: public class EmailAttribute : RegularExpressionAttribute { public EmailAttribute() : base("insert long regex expression here") { } } Any assistance would be greatly appreciated, thanks!
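
    One possible approach (an untested sketch, assuming the standard System.ComponentModel.DataAnnotations base class that MVC2 validation uses; the MinimumAgeAttribute name is purely illustrative) is a custom ValidationAttribute that covers both checks:

        using System;
        using System.ComponentModel.DataAnnotations;

        // Hypothetical attribute: valid only if the value parses as a date that is
        // at least N years in the past.
        public class MinimumAgeAttribute : ValidationAttribute
        {
            private readonly int _years;

            public MinimumAgeAttribute(int years)
            {
                _years = years;
            }

            public override bool IsValid(object value)
            {
                if (value == null) return true;      // leave missing values to [Required]

                DateTime dob;
                if (!DateTime.TryParse(value.ToString(), out dob))
                    return false;                    // not a valid date at all

                return dob <= DateTime.Today.AddYears(-_years);
            }
        }

    It would then be applied much like the email attributes, e.g. [MinimumAge(13, ErrorMessage = "You must be at least 13 years old.")] on the DOB property.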

    Read the article

  • Asp.net cached objects staying in memory

    - by GordonB
    I have an ASP.NET Web Forms app that uses System.Web.Caching.Cache to cache XML data from a number of web services for 2 hours. webCacheObj.Remove(dataCacheKey) webCacheObj.Insert(dataCacheKey, dataToCache, Nothing, DateTime.Now.AddHours(2), Nothing) Every 90 minutes a Microsoft Search Server hits a particular (spider) page which calls the code to put the objects into the cache. The issue I have is that over a period of time, the memory usage of the application grows exponentially. Let's say that in a week, the memory usage of the application pool grows to over 1 GB. I'm using IIS7 and no application pool recycling is currently enabled.
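
    One way to narrow this down (a diagnostic sketch only, using the longer Cache.Insert overload) is to register a removal callback so you can confirm whether the old entries are actually being evicted after 2 hours or are being kept alive by something else:

        ' Sketch: the Insert call goes wherever the cache is populated; the callback
        ' is a method on the same class. It logs every eviction and the reason for it.
        webCacheObj.Insert(dataCacheKey, dataToCache, Nothing, _
                           DateTime.Now.AddHours(2), Cache.NoSlidingExpiration, _
                           CacheItemPriority.Normal, AddressOf OnCacheItemRemoved)

        Private Sub OnCacheItemRemoved(ByVal key As String, ByVal value As Object, _
                                       ByVal reason As CacheItemRemovedReason)
            System.Diagnostics.Trace.WriteLine("Cache item '" & key & "' removed: " & reason.ToString())
        End Sub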

    Read the article

  • Read binary data from a MDB-file running under LAMP

    - by BusterX
    I need to be able to connect to an MDB file in a LAMP environment (running on Linux) and ultimately insert converted data into a MySQL db. The data I need to access is stored as a BLOB (Long Binary Data according to Access) in the MDB file. I have not yet been able to actually have a look at the data, but I have been told that the BLOB consists of byte strings, something along the lines of: 0x1c 0x10 0x27 0x00 0x00 I need to parse the byte strings and convert them to a format that is human readable. I do have access to the documentation that explains the various byte strings. So this is really two questions: How do I get access to the MDB file via PHP* (running under LAMP) and read the BLOB (I do not have access to a Windows platform)? What would be the best way to parse the binary data (in PHP*) once I am able to connect to the MDB file? *Or are there other methods/languages that are more appropriate?
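
    For the second part, PHP's unpack() handles the byte-string parsing; for the first part, one hedged option is the mdbtools ODBC driver via PDO. The sketch below assumes that driver is installed and registered in odbcinst.ini, and the DSN, table and column names are placeholders:

        <?php
        // Sketch only: the DSN string may need adjusting to match the odbcinst.ini entry.
        $pdo = new PDO('odbc:DRIVER=MDBTools;DBQ=/path/to/data.mdb');

        $stmt = $pdo->query('SELECT blob_column FROM some_table');   // hypothetical names
        while ($row = $stmt->fetch(PDO::FETCH_NUM)) {
            $blob = $row[0];

            // unpack() turns the raw byte string into an array of integers (one per byte),
            // which can then be mapped to readable values via the documentation.
            $bytes = unpack('C*', $blob);
            foreach ($bytes as $b) {
                printf('0x%02x ', $b);
            }
            echo PHP_EOL;
        }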

    Read the article

  • How can I create a google-spreadsheet using C#

    - by Kev
    I've installed the latest Google Data API for .NET, but I cannot figure out how to create a spreadsheet using C# code; it's not covered in the sample programs. I've tried this: SpreadsheetsService ss = new SpreadsheetsService("Spreadsheet Example"); ss.setUserCredentials("[email protected]", "password"); SpreadsheetEntry se = new SpreadsheetEntry(); se.Title.Text = "new"; ss.Insert(new Uri("http://spreadsheets.google.com/feeds/spreadsheets/private/full"), se); However, it doesn't work! Is there some way to do this? Thank you!

    Read the article

  • setting codeigniter mysql datetime column to time() always sets it to 0

    - by Jake
    Hi guys. I'm using CodeIgniter for a small project, and my model works correctly except for the dates. I have a column defined: created_at datetime not null and my model code includes the following in the array passed to db->insert: 'created_at' => time() This produces a datetime value of 0000-00-00 00:00:00. When I change it to: 'created_at' => "from_unixtime(" . time() . ")" it still produces the 0 datetime value. What am I doing wrong? How can I set this field to the given Unix time? Also, I know MySQL sets TIMESTAMP columns automatically for you - I'm not interested in that solution here. So far I can't find a complete example of this on the web.
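
    Two possible fixes, sketched below (table and variable names are placeholders): time() returns a Unix integer, which neither matches the DATETIME format nor gets treated as a MySQL function once Active Record escapes it, so either format the value in PHP or tell Active Record not to escape the expression:

        // Option 1: format the value in PHP so it matches MySQL's DATETIME format.
        $data = array(
            // ... other columns ...
            'created_at' => date('Y-m-d H:i:s'),   // e.g. 2010-05-03 19:10:00
        );
        $this->db->insert('my_table', $data);

        // Option 2: pass the expression unescaped so MySQL evaluates FROM_UNIXTIME()
        // instead of storing the literal string (which is why the column ends up 0).
        $this->db->set($other_columns);            // the rest of the row, as an array
        $this->db->set('created_at', 'FROM_UNIXTIME(' . time() . ')', FALSE);
        $this->db->insert('my_table');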

    Read the article

  • sql server swap data between rows problem

    - by AmRoSH
    I was asking before about a swapping query to swap data between rows in the same table, and I got this query: ALTER PROCEDURE [dbo].[VehicleReservationsSwap] -- Add the parameters for the stored procedure here (@FirstVehicleID int, @secondVehicleID int, @WhereClause nvarchar(2000)) AS BEGIN Create Table #Temp ( VehicleID int ,VehicleType nvarchar(100) ,JoinId int ) DECLARE @SQL varchar(8000) SET @SQL ='Insert into #Temp (VehicleID,VehicleType,JoinId) SELECT VehicleID,VehicleType,CASE WHEN VehicleID = ' + Cast(@FirstVehicleID as varchar(10)) + ' then ' + Cast(@secondVehicleID as varchar(10)) + ' ELSE ' + Cast(@FirstVehicleID as varchar(10)) + ' END AS JoinId FROM Reservations WHERE VehicleID in ( ' + Cast(@FirstVehicleID as varchar(10)) + ' , ' + Cast(@secondVehicleID as varchar(10)) + ' )' + @WhereClause EXEC(@SQL) --swap values UPDATE y SET y.VehicleID = #Temp.VehicleID ,y.VehicleType = #Temp.VehicleType FROM Reservations y INNER JOIN #Temp ON y.VehicleID = #Temp.JoinId WHERE y.VehicleID in (@FirstVehicleID,@secondVehicleID) Drop Table #Temp END This query takes 2 parameters and swaps all the rows returned for each parameter. The problem is that the query only swaps if each parameter (foreign key) has matching rows; I need the swap to work even when one of them has none. I hope someone can help me with that. Thanks,

    Read the article

  • Rails and jQuery - how do you get server-side validation errors to your view after an ajax request

    - by adam
    I've searched this site, but the questions are usually about doing client-side validations or about different frameworks. I have a tasks list whose items can be edited inline. Upon submitting the inline edit form the item is updated, all thanks to jQuery, ajax and Rails. But I want to handle bad input from the user. HTML requests redisplay the view and errors are displayed thanks to Rails helpers. But how do I insert that information after an ajax call? Here's the update method in my controller: def update @task = Task.find(params[:id]) respond_to do |format| if @task.update_attributes(params[:task]) flash[:notice] = 'Task was successfully updated.' format.html { redirect_to(@task) } format.xml { head :ok } format.js else format.html { render :action => "edit" } format.xml { render :xml => @task.errors, :status => :unprocessable_entity } #format.js ...hmmm... either go to js.erb file or do stuff inline end end end
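
    One common pattern (a sketch, not the only way): have the failure branch hand the errors back to the ajax caller with an error status, then render them on the client:

        # In the else branch, alongside format.html and format.xml:
        format.js do
          render :json => @task.errors.full_messages, :status => :unprocessable_entity
        end

    Because of the 422 status the jQuery ajax error callback fires; parsing xhr.responseText there gives the messages to insert next to the inline edit form.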

    Read the article

  • Consing lists with user-defined type in Haskell

    - by user1319603
    I have this type I defined myself: data Item = Book String String String Int -- Title, Author, Year, Qty | Movie String String String Int -- Title, Director, Year, Qty | CD String String String Int deriving Show -- Title, Artist, Year, Qty I've created an empty list: all_Items = [] With the following function I am trying to insert a new book of type Item (Book) into all_Items: addBook all_Items = do putStrLn "Enter the title of the book" tit <- getLine putStrLn "Enter the author of the book" aut <- getLine putStrLn "Enter the year this book was published" yr <- getLine putStrLn "Enter quantity of copies for this item in the inventory" qty <- getLine Book tit aut yr (read qty::Int):all_Items return(all_Items) However, I am receiving this error: Couldn't match expected type `IO a0' with actual type `[a1]' The error points to the line where I am using the cons operator to add the new book to the list. I can gather that it is a type error, but I can't figure out what I am doing wrong and how to fix it. Thanks in advance!
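
    For reference, a sketch of the usual fix: a bare expression inside an IO do-block must itself have type IO a, which a cons expression does not. Bind the new list with let and return it instead:

        addBook :: [Item] -> IO [Item]
        addBook all_Items = do
          putStrLn "Enter the title of the book"
          tit <- getLine
          putStrLn "Enter the author of the book"
          aut <- getLine
          putStrLn "Enter the year this book was published"
          yr <- getLine
          putStrLn "Enter quantity of copies for this item in the inventory"
          qty <- getLine
          -- Name the extended list with let and hand it back; the caller keeps
          -- the result, since the original list itself is immutable.
          let all_Items' = Book tit aut yr (read qty :: Int) : all_Items
          return all_Items'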

    Read the article

  • Performance on joins in linq

    - by swapna
    Hi, I am going to rewrite a stored procedure in LINQ. What this SP does is join 12 tables, get the data, and insert it into another table. It has 7 left outer joins and 4 inner joins, and returns one row of data. Now the questions: 1) What is the best way to achieve these joins in LINQ? 2) Do you think this will affect performance (it's only retrieving one row of data at a given point in time)? Please advise. Thanks SNA.
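
    For the first question, a sketch of the two join shapes in LINQ query syntax (all table and property names below are made up for illustration): an inner join is a plain join, and a left outer join is a group join followed by DefaultIfEmpty():

        var query =
            from o in db.Orders
            join c in db.Customers on o.CustomerId equals c.Id          // inner join
            join d in db.Deliveries on o.Id equals d.OrderId into dg    // left outer join =
            from d in dg.DefaultIfEmpty()                                //   group join + DefaultIfEmpty
            select new
            {
                o.Id,
                CustomerName = c.Name,
                DeliveredOn  = d == null ? (DateTime?)null : d.DeliveredOn
            };

        var row = query.FirstOrDefault();   // the SP only returns one row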

    Read the article

  • Running ASP.NET MVC application behind a proxy with different root relative path

    - by Wiebe
    Hi all, I'm having trouble with paths in an ASP.NET MVC application that's running behind a proxy. Our IIS application root path is for example http://server/MyApp/ meaning that all URLs using the application root ("~/", Url.Action("MyAction","MyController")) are resolved to "/MyApp". Now we're running behind a proxy server that forwards all requests but changes the application root to something like this: "/Secury/Proxy/RubbishUrl/MyApp". Because the original URL is only available on the client, I thought of creating a cookie with the path prefix and inserting this before each generated URL on the server. Now the question is, what's the best location in code to modify each URL that's resolved/sent to the client (to resources, controller actions, images etc)? Every path in the application is resolved with the MVC methods (Url.Content, Url.Action etc).

    Read the article

  • Pointer-based binary heap implementation

    - by Derek Chiang
    Is it even possible to implement a binary heap using pointers rather than an array? I have searched around the internet (including SO) and no answer can be found. The main problem here is that, how do you keep track of the last pointer? When you insert X into the heap, you place X at the last pointer and then bubble it up. Now, where does the last pointer point to? And also, what happens when you want to remove the root? You exchange the root with the last element, and then bubble the new root down. Now, how do you know what's the new "last element" that you need when you remove root again?
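
    It is possible; the standard trick, sketched below in Python, is to keep an element count and use the binary representation of a node's 1-based index as its path from the root (after the leading 1, a 0 bit means go left and a 1 bit means go right). That locates both the next insertion slot and the current last element without any array:

        # Sketch of the index-to-path trick for a pointer-based heap (root has index 1).
        class Node:
            def __init__(self, value):
                self.value = value
                self.left = self.right = self.parent = None

        def node_at(root, index):
            """Return the node with 1-based index `index` by following the bits of
            the index below its leading 1: 0 -> left child, 1 -> right child."""
            path = bin(index)[3:]          # strip '0b' and the leading 1
            node = root
            for bit in path:
                node = node.left if bit == '0' else node.right
            return node

        # To insert element number n (the new size), attach it as a child of
        # node_at(root, n // 2) and bubble up; to remove the root, swap it with
        # node_at(root, size), detach that last node, and bubble the new root down.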

    Read the article

  • rails error on create action

    - by ash34
    SQL (2.0ms) SELECT task_report_requests_seq.NEXTVAL id FROM dual TaskReportRequest Create (2.2ms) INSERT INTO task_report_requests (location, created_at, updated_at, id, freq, login, task_dt) VALUES('020', TO_DATE('2010-05-25 05:02:38','YYYY-MM-DD HH24:MI:SS'), TO_DATE('2010-05-25 05:02:38','YYYY-MM-DD HH24:MI:SS'), 10023, 'M', NULL, TO_DATE('2010-05-30 00:00:00','YYYY-MM-DD HH24:MI:SS')) NoMethodError (You have a nil object when you didn't expect it! The error occurred while evaluating nil.call): app/controllers/task_report_requests_controller.rb:45:in `create' It says there was an error evaluating nil.call. Can someone tell me when I would get such an error? I am not able to figure it out from this information. Thanks, ash

    Read the article

  • How can I intercept Drupal User Registration after it has passed all validations?

    - by Senthil
    Hi, I am using Drupal 6.16. In the user registration module, I want to hook in AFTER all validations have been made and the row is about to be inserted. Here, I want to run my business logic. If my business logic fails, the Drupal registration should be stopped; I can do this by setting an error in the form. If it succeeds, Drupal registration SHOULD proceed and complete. I decided to use the validate operation in hook_user, but it is possible for Drupal registration to be stopped at the validation phase itself, by some other module that runs after mine. What I want is: when my business logic succeeds, the Drupal registration MUST succeed. Which hook and operation should I use so that I can intercept just before the Drupal user insert/update and after all validations have succeeded?
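
    One hedged option in Drupal 6 (handler order still depends on module weights, so this is a sketch rather than a guarantee): append your own validate handler to the registration form via hook_form_FORM_ID_alter(), so it runs after the handlers other modules have already added. The mymodule names and the business-logic helper are placeholders:

        <?php
        // Sketch: push a final validate handler onto the user registration form.
        function mymodule_form_user_register_alter(&$form, &$form_state) {
          $form['#validate'][] = 'mymodule_user_register_validate';
        }

        function mymodule_user_register_validate($form, &$form_state) {
          if (!mymodule_business_logic_ok($form_state['values'])) {   // hypothetical check
            form_set_error('', t('Registration cannot be completed right now.'));
          }
        }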

    Read the article

  • SQL Get Latest Unique Rows

    - by Simpleton
    I have a log table, each row representing an object logging its state. Each object has a unique, unchanging GUID. There are multiple objects logging their states, so there will be thousands of entries, with objects continually inserting new logs. Every time an object checks in, it is via an INSERT. I have the PrimaryKey, GUID, ObjectState, and LogDate columns in tblObjects. I want to select the latest (by datetime) log entry for each unique GUID from tblObjects, in effect a 'snapshot' of all the objects. How can this be accomplished?
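
    Assuming SQL Server (the question doesn't say which engine), two common ways to take that snapshot, sketched below:

        -- SQL Server 2005+: number each object's rows newest-first and keep row 1 per GUID.
        SELECT PrimaryKey, GUID, ObjectState, LogDate
        FROM (
            SELECT PrimaryKey, GUID, ObjectState, LogDate,
                   ROW_NUMBER() OVER (PARTITION BY GUID ORDER BY LogDate DESC) AS rn
            FROM tblObjects
        ) AS ranked
        WHERE rn = 1;

        -- Alternative that also works on older versions: join back to the per-GUID maximum date.
        SELECT t.PrimaryKey, t.GUID, t.ObjectState, t.LogDate
        FROM tblObjects t
        JOIN (
            SELECT GUID, MAX(LogDate) AS LatestDate
            FROM tblObjects
            GROUP BY GUID
        ) latest ON latest.GUID = t.GUID AND latest.LatestDate = t.LogDate;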

    Read the article

  • Starting self hosted WCF services on demand

    - by Pieter
    Is it possible to start self-hosted WCF services on demand? I see two options to accomplish this: insert a listener in the self-hosted WCF's web server and spin up a service host when a request for a specific service comes in, before WCF starts looking for the existence of that endpoint; or integrate a web service in process, start a service host for a request if it isn't running yet, and redirect the request to that service host (like I suspect IIS does). I cannot use IIS or WAS because the web services need to run in process with the UI business logic. Which of these is feasible, and how can I accomplish it? EDIT: I cannot just start the service hosts because there are hundreds, most (about 95%) of which are (almost) never used but need to be available. This is for exposing a business logic layer of 900 entities.

    Read the article

  • how to order a group result with Linq?

    - by Aaron
    How can I order the results from a "group ... by ... into ..." statement in LINQ? For instance: var queryResult = from records in container.tableWhatever where records.Time >= DateTime.Today group records by tableWhatever.tableHeader.UserId into userRecords select new { UserID = userRecords.Key, Records = userRecords }; The query returns records in the table "container.tableWhatever" grouped by "UserId". I want the returned results within each group ordered by time descending. How can I do that? More specifically, assume the above query returns only one group, like the following: {UserID = 1, Records= {name1 5/3/2010_7:10pm; name2 5/3/2010_8:10pm; name3 5/3/2010_9:10pm} } After inserting the orderby into the above query, the returned results should look like this: {UserID = 1, Records= {name3 5/3/2010_9:10pm; name2 5/3/2010_8:10pm; name1 5/3/2010_7:10pm} } Thanks for the help!
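
    A sketch of one way to do it: order the records inside each group when projecting the group (note the group key below references the range variable records, which is presumably what the original tableWhatever.tableHeader.UserId intends):

        var queryResult =
            from records in container.tableWhatever
            where records.Time >= DateTime.Today
            group records by records.tableHeader.UserId into userRecords
            select new
            {
                UserID  = userRecords.Key,
                // Order the members of each group newest-first while projecting it.
                Records = userRecords.OrderByDescending(r => r.Time)
            };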

    Read the article

  • converting an array of characters to a const gchar*

    - by Mark Roberts
    I've got an array of characters which contains a string: char buf[MAXBUFLEN]; buf[0] = 'f'; buf[1] = 'o'; buf[2] = 'o'; buf[3] = '\0'; I'm looking to pass this string as an argument to the gtk_text_buffer_insert function in order to insert it into a GtkTextBuffer. What I can't figure out is how to convert it to a const gchar *, which is what gtk_text_buffer_insert expects as its third argument. Can anybody help me out?
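
    For what it's worth, gchar is simply a typedef for char, so no conversion should be needed; a nul-terminated char array can be passed directly, with -1 as the length. A minimal sketch (the helper function and its name are just for illustration):

        #include <gtk/gtk.h>

        /* Appends a nul-terminated char buffer to the end of a GtkTextBuffer;
         * -1 tells GTK to use strlen() to find the length. */
        void append_buf(GtkTextBuffer *buffer, const char *buf)
        {
            GtkTextIter iter;
            gtk_text_buffer_get_end_iter(buffer, &iter);
            gtk_text_buffer_insert(buffer, &iter, buf, -1);
        }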

    Read the article

  • linq get all object in one-dimensional collection

    - by scrat789
    public class Class1 : List<Class2> { } public class Class2 : List<Class3> { } public class Class3 { string str; int i; } public class Program { Class1 c = new Class1(); //insert values.... List<Class3> all = ??; } How can I get a one-dimensional collection into my variable "all"? Please note I cannot modify Class1, Class2 and Class3...
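
    Since Class1 is a List<Class2> and Class2 is a List<Class3>, flattening one level with SelectMany should be enough; a sketch:

        using System.Collections.Generic;
        using System.Linq;

        // Flatten the list of lists into a single collection of Class3.
        List<Class3> all = c.SelectMany(c2 => c2).ToList();

        // Query-syntax equivalent:
        // List<Class3> all = (from c2 in c from c3 in c2 select c3).ToList();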

    Read the article

  • T-SQL to PL/SQL (IDENTITY)

    - by folone
    I've got a T-SQL script that converts a field to IDENTITY (in a weird way). How do I convert it to PL/SQL? (And, probably, figure out if there is a simpler way to do this - without creating a temporary table.) The T-SQL script: -- alter table ts_changes add TS_THREADID VARCHAR(100) NULL; -- Change Field TS_ID TS_NOTIFICATIONEVENTS to IDENTITY BEGIN TRANSACTION GO CREATE TABLE dbo.Tmp_TS_NOTIFICATIONEVENTS ( TS_ID int NOT NULL IDENTITY (1, 1), TS_TABLEID int NOT NULL, TS_CASEID int NULL, TS_WORKFLOWID int NULL, TS_NOTIFICATIONID int NULL, TS_PRIORITY int NULL, TS_STARTDATE int NULL, TS_TIME int NULL, TS_WAITSTATUS int NULL, TS_RECIPIENTID int NULL, TS_LASTCHANGEDATE int NULL, TS_ELAPSEDCYCLES int NULL ) ON [PRIMARY] SET IDENTITY_INSERT dbo.Tmp_TS_NOTIFICATIONEVENTS ON GO IF EXISTS(SELECT * FROM dbo.TS_NOTIFICATIONEVENTS) EXEC('INSERT INTO dbo.Tmp_TS_NOTIFICATIONEVENTS (TS_ID, TS_TABLEID, TS_CASEID, TS_WORKFLOWID, TS_NOTIFICATIONID, TS_PRIORITY, TS_STARTDATE, TS_TIME, TS_WAITSTATUS, TS_RECIPIENTID, TS_LASTCHANGEDATE, TS_ELAPSEDCYCLES) SELECT TS_ID, TS_TABLEID, TS_CASEID, TS_WORKFLOWID, TS_NOTIFICATIONID, TS_PRIORITY, TS_STARTDATE, TS_TIME, TS_WAITSTATUS, TS_RECIPIENTID, TS_LASTCHANGEDATE, TS_ELAPSEDCYCLES FROM dbo.TS_NOTIFICATIONEVENTS WITH (HOLDLOCK TABLOCKX)') GO SET IDENTITY_INSERT dbo.Tmp_TS_NOTIFICATIONEVENTS OFF GO DROP TABLE dbo.TS_NOTIFICATIONEVENTS GO EXECUTE sp_rename N'dbo.Tmp_TS_NOTIFICATIONEVENTS', N'TS_NOTIFICATIONEVENTS', 'OBJECT' GO ALTER TABLE dbo.TS_NOTIFICATIONEVENTS ADD CONSTRAINT aaaaaTS_NOTIFICATIONEVENTS_PK PRIMARY KEY NONCLUSTERED ( TS_ID ) WITH( STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] GO COMMIT
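
    A rough Oracle equivalent, sketched below under the assumption of a pre-12c database: no table rebuild is needed, because an "identity" there is conventionally a sequence plus a BEFORE INSERT trigger on the existing table. Object names are illustrative; if the table already holds rows, start the sequence above the current MAX(TS_ID):

        -- Sketch: generate TS_ID values from a sequence on insert.
        CREATE SEQUENCE ts_notificationevents_seq START WITH 1 INCREMENT BY 1;

        CREATE OR REPLACE TRIGGER ts_notificationevents_bi
        BEFORE INSERT ON ts_notificationevents
        FOR EACH ROW
        WHEN (NEW.ts_id IS NULL)
        BEGIN
          SELECT ts_notificationevents_seq.NEXTVAL INTO :NEW.ts_id FROM dual;
        END;
        /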

    Read the article

  • Best way to gather, then import data into drupal?

    - by Frank
    I am building my first database-driven website with Drupal and I have a few questions. I am currently populating a Google Docs spreadsheet with all of the data I want to eventually be able to query from the website (after it's imported). Is this the best way to start? If this is not the best way to start, what would you recommend? My plan is to populate the spreadsheet, then import it as a CSV into the MySQL db via the CCK Node. I've seen two ways to do this: http://drupal.org/node/133705 (importing data into CCK nodes) http://drupal.org/node/237574 (Inserting data using spreadsheet/csv instead of SQL insert statements) Basically my question is: what is the best way to gather and then import data into Drupal? Thanks in advance for any help or suggestions.

    Read the article

  • inserting date timestamp value to mysql thru php in godaddy hosting site

    - by Suj
    Hi all, I'm using GoDaddy's shared Linux hosting. Using PHP I am inserting into or updating the MySQL database with a create date or modified date using the variables: $datestring = "%Y:%m:%d %h:%i:%s"; $time = time(); $createdate = mdate($datestring, $time); $createdate is the variable I use when inserting into or updating the table, but it's storing the wrong value. It's not the server time or the local time; mostly it's about 30 minutes off from GoDaddy's server time. Please help.
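
    A hedged guess at the usual culprit: time() itself is timezone-neutral, but the formatting step uses whatever default timezone the shared host has configured. Pinning the timezone explicitly before formatting (the zone name below is just an example) removes that variable:

        // Sketch: set the timezone yourself instead of relying on the host default,
        // and format straight into MySQL's DATETIME layout.
        date_default_timezone_set('America/New_York');   // substitute your own zone
        $createdate = date('Y-m-d H:i:s');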

    Read the article

  • How to sanitize log messages in Log4j to save them in database

    - by Rafael
    Hello, I'm trying to save log messages to a central database. In order to do this, I configured the following appender in log4j's XML configuration: <appender name="DB" class="org.apache.log4j.jdbc.JDBCAppender"> <param name="URL" value="jdbc:postgresql://localhost/logging_test" /> <param name="user" value="test_user" /> <param name="password" value="test_password" /> <param name="sql" value="INSERT INTO log_messages ( log_level, message, log_date ) VALUES ( '%p', '%m', '%d{yyyy-MM-dd HH:mm:ss}' )" /> </appender> This works fine, except that some of the messages contain ', and then the appender fails. Is there an easy way to handle this?
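
    One hedged workaround (a sketch; a PreparedStatement-based custom appender would be the more robust fix) is to subclass JDBCAppender and build the statement in getLogStatement(), escaping single quotes in the message. The appender element in the XML would then reference this class, and the sql param becomes unnecessary:

        import java.text.SimpleDateFormat;
        import java.util.Date;
        import org.apache.log4j.jdbc.JDBCAppender;
        import org.apache.log4j.spi.LoggingEvent;

        // Sketch: build the INSERT here instead of via the pattern, doubling any
        // single quotes in the message so the SQL stays valid.
        public class EscapingJdbcAppender extends JDBCAppender {

            @Override
            protected String getLogStatement(LoggingEvent event) {
                String message = String.valueOf(event.getMessage()).replace("'", "''");
                String level = event.getLevel().toString();
                String date = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss")
                        .format(new Date(event.timeStamp));
                return "INSERT INTO log_messages (log_level, message, log_date) VALUES ('"
                        + level + "', '" + message + "', '" + date + "')";
            }
        }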

    Read the article

  • Doctrine inserts when it should update

    - by Goran Juric
    I am trying to do the simplest possible update query, but Doctrine issues an INSERT statement instead of an UPDATE. $q = Doctrine_Query::create() ->from('Image i') ->where('id = ?'); $image = $q->fetchOne($articleId, Doctrine_Core::HYDRATE_RECORD); $image->copyright = "some text"; $image->save(); I have also tried using the example from the manual, but still a new record gets inserted: $userTable = Doctrine_Core::getTable('User'); $user = $userTable->find(2); if ($user !== false) { $user->username = 'Jack Daniels'; $user->save(); } edit: This example from the manual works: $user = new User(); $user->assignIdentifier(1); $user->username = 'jwage'; $user->save(); The funny thing is that I use this on another model and there it works OK. Maybe I have to fetch the whole array graph for this to work (I have another model in a one-to-many relationship)?
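
    A couple of things worth checking, sketched below under the assumption that the bound value never actually reaches the query: fetchOne() takes its parameters as an array, and the column is less ambiguous when qualified with the alias. If fetchOne() returns false, the later save() would be operating on something that was never hydrated from the database:

        <?php
        // Sketch: bind the id as an array element and qualify it with the alias.
        $q = Doctrine_Query::create()
            ->from('Image i')
            ->where('i.id = ?');

        $image = $q->fetchOne(array($articleId), Doctrine_Core::HYDRATE_RECORD);

        if ($image !== false) {
            $image->copyright = 'some text';
            $image->save();   // a record hydrated from the DB should produce an UPDATE
        }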

    Read the article

  • Uploading a csv file to sql server - Identity problem.

    - by Doozer1979
    Given a column structure in a CSV file of: First_Name, Last_Name, Date_Of_Birth and a SQL Server table with a structure of ID(PK) | First_Name | Last_Name | Date_Of_Birth (the ID field is an identity with an auto-increment of 1), how do I arrange it so that SQL Server does not attempt to insert the First_Name column from the CSV file into the ID field? For info, the CSV is loaded into a DataTable and then copied to SQL Server using SqlBulkCopy. Should I be modifying the CSV file before the import to add the ID column (the destination table is truncated prior to import, so there is no need to worry about duplicate key values)? Or perhaps adding an ID column to the DataTable? Or is there a setting in SQL Server that I may have missed?
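
    Probably no CSV changes are needed: without explicit mappings, SqlBulkCopy matches columns by ordinal position, which is why First_Name lands in ID. A sketch with named column mappings (the connection string, destination table name and DataTable variable are assumed):

        using System.Data;
        using System.Data.SqlClient;

        // Map the three CSV columns to their named destination columns and leave the
        // ID column out entirely, so SQL Server generates the identity values itself.
        var bulk = new SqlBulkCopy(connectionString);
        bulk.DestinationTableName = "dbo.People";            // destination table name assumed

        bulk.ColumnMappings.Add("First_Name", "First_Name");
        bulk.ColumnMappings.Add("Last_Name", "Last_Name");
        bulk.ColumnMappings.Add("Date_Of_Birth", "Date_Of_Birth");

        bulk.WriteToServer(csvDataTable);                     // the DataTable built from the CSV
        bulk.Close();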

    Read the article

  • Why getting active record error when trying to work on arrays?

    - by keruilin
    I have the following association in my User model: has_and_belongs_to_many :friends, :class_name => 'User', :foreign_key => 'friend_id' I have the following uniqueness constraint in my users_users table: UNIQUE KEY `no_duplicate_friends` (`user_id`,`friend_id`) In my code, I am retrieving a user's friends -- friends = user.friends -- and friends is an array. I have a scenario where I want to add a user, along with all of that user's friends, to the friends array. Ex: friends << user_with_all_those_homies However, I get the following error: ActiveRecord::StatementInvalid: Mysql::Error: Duplicate entry '18-18' for key 'no_duplicate_friends': INSERT INTO `users_users` (`friend_id`, `user_id`) VALUES (18, 18) What gives?
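
    The failing row is (18, 18), i.e. the user ending up as their own friend (or as a friend that is already stored). A hedged sketch of a guard before appending, since << issues the INSERT immediately:

        # Sketch: skip self-references and pairs that already exist, so the unique
        # (user_id, friend_id) index never sees a duplicate.
        def add_friend(user, other)
          return if user == other                      # would insert (18, 18)
          user.friends << other unless user.friends.include?(other)
        end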

    Read the article
