Search Results

Search found 8267 results on 331 pages for 'insert into'.

Page 219/331 | < Previous Page | 215 216 217 218 219 220 221 222 223 224 225 226  | Next Page >

  • LINQ to SQL: DataContext.SubmitChanges not updating immediately

    - by aximili
    I have a funny problem. Calling DataContext.SubmitChanges() updates Count() in one place but not in the other; see my comment in the code below (DC is the DataContext):

        var compliances = c.DataCompliances.Where(x => x.ComplianceCriteria.FKElement == e.Id);
        if (compliances.Count() == 0) // Insert if not exists
        {
            DC.DataCompliances.InsertOnSubmit(new DataCompliance
            {
                FKCompany = c.Id,
                FKComplianceCriteria = criteria.Id
            });
            DC.SubmitChanges();
            compliances = c.DataCompliances.Where(x => x.ComplianceCriteria.FKElement == e.Id);
            // At this point DC.DataCompliances.Count() has increased,
            // but compliances.Count() is still 0.
            // When I refresh the page, however, it will be 1.
        }

    Why does that happen? I need to update compliances after inserting one. Does anyone have a solution?

    Read the article

  • How do I get the position of a result in the list after an order_by?

    - by Bob Bob
    I'm trying to find an efficient way to find the rank of an object in the database relative to its score. My naive solution looks like this:

        rank = 0
        for q in Model.objects.all().order_by('score'):
            if q.name == 'searching_for_this':
                return rank
            rank += 1

    It should be possible to get the database to do the filtering, using order_by:

        Model.objects.all().order_by('score').filter(name='searching_for_this')

    But there doesn't seem to be a way to retrieve the index for the order_by step after the filter. Is there a better way to do this? (Using Python/Django and/or raw SQL.) My next thought is to pre-compute ranks on insert, but that seems messy.
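    One possible approach, not taken from the question itself: under order_by('score') the rank of a row is just the number of rows that sort before it, so the database can compute it with a single COUNT query. A minimal Django sketch, assuming the Model class and the score/name fields named above (ties on score are counted only once here, which may differ slightly from the loop above):

        # hypothetical import path for the model from the question
        from myapp.models import Model

        def rank_of(name):
            target = Model.objects.get(name=name)
            # Rows with a strictly lower score come before the target
            # when ordering by 'score' ascending.
            return Model.objects.filter(score__lt=target.score).count()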

    Read the article

  • PHP: Infinite loop and time limit

    - by Jonathan
    Hi, I have a piece of code that fetches data when given an ID. If I give it an ID of 1230, for example, the code fetches the data for the article with ID 1230 from an external web site and inserts it into a DB. The problem is that I need to fetch all the articles, let's say from ID 00001 to 99999. If I do a 'for' loop, after 60 seconds the PHP internal time limit stops the loop. If I use something like header("Location: code.php?id=00001") or header("Location: code.php?id=".$ID), increase $ID++ and then redirect to the same page, the browser stops me because of the infinite loop / redirection problem. Please HELP!

    Read the article

  • RoR function help

    - by Aviatrix
    Can someone write a function for me in RoR? I simply don't have the time to study Ruby and RoR for just one-time use. The function should do the following things:

      1) Have an array with variables.
      2) For each variable in the array, execute 4-5 other functions, get the results, and insert them into another table in the same DB.

    The target table, named "refined", looks like this:

        CityName  varchar
        Subdomain varchar  (this is the variable from the array)
        Nearby    text
        State     varchar
        ZipCodes  text
        AreaCodes text

    Some of the functions return arrays. I will really appreciate the help! Thanks in advance.

    Read the article

  • Vim: Pasting from clipboard and automatically toggling :set paste

    - by Jonatan Littke
    Hey. When I paste things from the clipboard, they're normally (always) multiline, and in those cases (and those cases only) I'd like :set paste to be triggered, since otherwise the indentation increases with each line (you've all seen it!). The problem with :set paste, though, is that it doesn't behave well with set smartindent, causing the cursor to jump to the beginning of a new line instead of to the correct indent. So I'd like to enable it for this case only. I'm on a Mac, sshing to a Debian machine with Vim, and thus pasting in Insert mode using Cmd-V. Cheers.

    Read the article

  • Best data structure to use for a two-ended sorted list

    - by fmark
    I need a collection data structure that can do the following:

      - Be sorted.
      - Allow me to quickly pop values off the front and back of the list.
      - Remain sorted after I insert a new value.
      - Allow a user-specified comparison function, as I will be storing tuples and want to sort on a particular value.
      - Thread-safety is not required.
      - Optionally allow efficient haskey() lookups (I'm happy to maintain a separate hash table for this, though).

    My thoughts at this stage are that I need a priority queue and a hash table, although I don't know if I can quickly pop values off both ends of a priority queue. I'm interested in performance for a moderate number of items (I would estimate fewer than 200,000). Another possibility is simply maintaining an OrderedDictionary and doing an insertion sort every time I add more data to it. Furthermore, are there any particular implementations in Python? I would really like to avoid writing this code myself.
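    A standard-library-only sketch of one way to meet these requirements (class and method names are my own, not from the question): keep a plain list of (key, value) pairs, use bisect to keep it sorted on insert, and pop from either end. Note that pop(0) on a Python list is O(n), which may still be acceptable for fewer than 200,000 items; the third-party sortedcontainers package (SortedKeyList) is a common alternative if that becomes a bottleneck.

        import bisect

        class TwoEndedSortedList:
            def __init__(self, key):
                self._key = key
                self._items = []   # (key(value), value) pairs, kept sorted;
                                   # ties on the key fall back to comparing values

            def insert(self, value):
                bisect.insort(self._items, (self._key(value), value))

            def pop_front(self):
                return self._items.pop(0)[1]

            def pop_back(self):
                return self._items.pop()[1]

        # Usage: store tuples, sorting on their second field.
        q = TwoEndedSortedList(key=lambda t: t[1])
        q.insert(("a", 3)); q.insert(("b", 1)); q.insert(("c", 2))
        print(q.pop_front(), q.pop_back())   # ('b', 1) ('a', 3)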

    Read the article

  • Carrierwave upload to a tmp dir before saving to database

    - by user827570
    I'm trying to build a visual editor where users can click an image; they are presented with an image upload form, and once the upload is done I use Ajax to return the image and insert it back into the page. But this method inserts the image straight into the database, and I want users to be able to preview the image before it is inserted into the database. So I was wondering if the image could be uploaded with Carrierwave to a temp location and sent back to the user, and then, when the user saves the page, moved into the permanent location. Here's what I have so far:

        def edit_image
          @page = Page.find(1)
          @page.update_attributes(params[:page])
          @page.save
          return :text => @page.file
        end

    But this is what I want to achieve:

        def temp_image
          # uploads the received image to a temp location
          # returns the image to the user
        end

    And once the user clicks save:

        def save
          # moves the file in the temp folder to the permanent location
        end

    Cheers

    Read the article

  • Firefox: open local link to directory with Explorer

    - by raffael
    On a website for our internal use I show links to local files and folders. The links are like this:

        href="file://C:/example/"
        href="file://C:/example/test.odt"

    The problem now is that the link to the directory opens in Firefox itself with a useless directory listing; useless because you can only see the files or open them, but not copy, insert, delete... The link to the file works normally and the file is opened by OpenOffice. By changing the Firefox configuration and setting the following key to false, I can open the directory with explorer.exe, but then for the file I have to choose the right application:

        network.protocol-handler.expose.file

    Does someone know a way to get both to work like I want? That means the directory is shown by explorer.exe and all files are opened by the right application. This could be done by configuring Firefox or Windows, changing the links, or even by writing a small program that handles the file protocol correctly and is used as the protocol handler for it in Firefox. Thanks, Raffael

    Read the article

  • jQuery document ready with Google

    - by cf_PhillipSenn
    This is how I load jQuery:

        <script src="http://www.google.com/jsapi"></script>
        <script type="text/javascript">
            function OnLoad() {
                // insert jQuery goodness here
            };
            google.load("jquery", "1");
            google.setOnLoadCallback(OnLoad);
        </script>

    But instead of function OnLoad() {, I'd like to use $(document).ready(function() {} so that it's like every example in every book and documentation snippet. How can I define $ = jQuery?

    Read the article

  • Python and Plone help

    - by Grenko
    I'm using the Plone CMS and am having trouble with a Python script. I get a NameError: "global name 'open' is not defined". When I put the code in a separate Python script it works fine, and the information is being passed to the script, because I can print the query. The code is below:

        # Import a standard function, and get the HTML request and response objects.
        from Products.PythonScripts.standard import html_quote
        request = container.REQUEST
        RESPONSE = request.RESPONSE

        # Insert data that was passed from the form
        query = request.query
        #print query
        f = open("blast_query.txt", "w")
        for i in query:
            f.write(i)
        return printed

    I also have a second question: can I tell Python to open a file in a certain directory? For example, if the script is in a certain location, i.e. the home folder, but I want the script to open a file at home/some_directory/some_directory, can it be done?
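    For context, Script (Python) objects in Plone/Zope run in a restricted sandbox where builtins such as open are not exposed, which is why the same code works as a stand-alone script. One common way around this, sketched below with hypothetical names, is to move the file I/O into an External Method (a plain module placed in the instance's Extensions/ directory and registered in the ZMI), where the full builtins are available; building the path with os.path.join also addresses the second question about writing into a specific directory.

        # Extensions/blast.py  (hypothetical module name)
        import os

        def write_query(self, query, filename="blast_query.txt"):
            # Write into an explicit directory rather than the process's
            # current working directory.
            target_dir = os.path.join(os.path.expanduser("~"),
                                      "some_directory", "some_directory")
            path = os.path.join(target_dir, filename)
            with open(path, "w") as f:
                for chunk in query:
                    f.write(chunk)
            return path

    The restricted script would then call this through the registered External Method (for example, context.write_query(query)) instead of calling open directly.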

    Read the article

  • SQL Server 2008 dynamic query problem

    - by priyanka.sarkar
    I have a stored procedure with a dynamic query that reads like this:

        ALTER PROCEDURE dbo.mySP
            -- Add the parameters for the stored procedure here
            (
                @DBName varchar(50),
                @tblName varchar(50)
            )
        AS
        BEGIN
            -- SET NOCOUNT ON added to prevent extra result sets from
            -- interfering with SELECT statements.
            SET NOCOUNT ON;

            -- Insert statements for procedure here
            declare @string as varchar(50)
            declare @string1 as varchar(50)
            set @string1 = '[' + @DBName + ']' + '.[dbo].' + '[' + @tblName + ']'
            set @string = 'select * from ' + @string1
            exec @string
        END

    I am calling it like this:

        dbo.mySP 'dbtest1','tblTest'

    And I am getting this error: "Msg 203, Level 16, State 2, Procedure mySP, Line 27. The name 'select * from [dbtest1].[dbo].[tblTest]' is not a valid identifier." What is wrong, and how do I overcome it? Thanks in advance.

    Read the article

  • Replication - synchronizing most of the data some of the time

    - by uncle brad
    I have some data that isn't properly "partitioned" (for lack of a better word). All inserts, processing and reporting happen on the same table. The bulk of the processing happens not long after the insert, and not long after that the data becomes immutable (we're talking days). I could do all inserts and processing on a new table that I replicate to the old table. When I detect that the data has become immutable I would delete it from the new table, but I would edit the delete replication stored procedure so that the delete did not replicate. How bad an idea is this? It seems attractive at the moment (I haven't slept on it yet) because it might mitigate a performance problem with only very small changes to the application. It also seems like it might be a good way to shoot myself in the foot.

    Read the article

  • "An attempt has been made to Attach or Add an entity that is not new" LINQ to SQL error

    - by Collin Oconnor
    I have a save function for my Order entity that looks like this, and it breaks on the SubmitChanges line:

        public void SaveOrder(Order order)
        {
            if (order.OrderId == 0)
                orderTable.InsertOnSubmit(order);
            else if (orderTable.GetOriginalEntityState(order) == null)
            {
                orderTable.Attach(order);
                orderTable.Context.Refresh(RefreshMode.KeepCurrentValues, order);
            }
            orderTable.Context.SubmitChanges();
        }

    The Order entity contains two other entities: an Address entity and a credit card entity. I want these two entities to be null sometimes. My guess for why this is throwing an error is that both of these entities inside the order are null. If this is the case, how can I insert a new order into the database with both entities (Address and CreditCard) being null?

    Read the article

  • Commutative (operational transform) diffs for databases

    - by barrycarter
    What Unix program generates "diffs" between text files (or INSERT/UPDATE/DELETEs for databases) in such a way that the order in which the "diffs" are applied is irrelevant, and the result is the same regardless of order? Etherpad used to do something like this. Example (for a given document or database):

      - Adam makes a change X, then Bob makes a change Y, then Adam makes another change Z.
      - However, because of network latency, Adam sees the changes in this order: XZY, while Bob sees them in this order: YXZ.
      - However, the code/changes are written so that XZY and YXZ yield the same result.

    Note: ideally, this can be done without having to apply the inverse of X/Y/Z at any point. I have read http://stackoverflow.com/questions/2043165/operational-transformation-library but I'm not sure it really does what I want.
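    As a toy illustration (not from the question) of what "order doesn't matter" means in practice: if each change carries a unique id and is applied as an idempotent merge into the state, then any application order converges to the same result. This is the trade-off that CRDT-style approaches make, sidestepping the transformation step that operational transforms need; the sketch below only demonstrates the commutativity property.

        import itertools

        def apply_change(state, change):
            # change = (unique_id, payload); reordering and replays are harmless
            new_state = dict(state)
            new_state[change[0]] = change[1]
            return new_state

        changes = [("x", "Adam edit 1"), ("y", "Bob edit"), ("z", "Adam edit 2")]

        results = set()
        for order in itertools.permutations(changes):
            state = {}
            for change in order:
                state = apply_change(state, change)
            results.add(tuple(sorted(state.items())))

        print(len(results))   # 1 -- every application order produces the same state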

    Read the article

  • Fatal error: Function name must be a string in.. PHP error

    - by Jonesy
    Hi, I have a class called User and a method called insertUser():

        function insertUser($first_name, $last_name, $user_name, $password, $email_address, $group_house_id)
        {
            $first_name = mysql_real_escape_string($first_name);
            $last_name = mysql_real_escape_string($last_name);
            $user_name = mysql_real_escape_string($user_name);
            $password = mysql_real_escape_string($password);
            $email_address = mysql_real_escape_string($email_address);
            $query = "INSERT INTO Users (FirstName, LastName, UserName, Password, EmailAddress, GroupHouseID)
                      VALUES ('$first_name','$last_name','$user_name','$password','$email_address','$group_house_id')";
            $mysql_query($query);
        }

    And I call it like this:

        $newUser = new User();
        $newUser->insertUser($first_name, $last_name, $user_name, $email, $password, $group_house_id);

    When I run the code I get this error:

        Fatal error: Function name must be a string in /Library/WebServer/Documents/ORIOnline/includes/class_lib.php on line 33

    Does anyone know what I am doing wrong? Also, this is my first attempt at OO PHP. Cheers, Jonesy

    Read the article

  • Passing a value in Silverlight

    - by Dilse Naaz
    How can I pass a value from one page to another in Silverlight? I have a Silverlight application that contains two pages, one xaml.cs file and one asmx.cs file. I have a text box in the XAML page named Text1. My requirement is that, at run time, I can pass the textbox value to the asmx.cs file. How can this be done? My code in the asmx.cs file is:

        public string DataInsert(string emp)
        {
            SqlConnection conn = new SqlConnection("Data Source=Nisam\\OFFICESERVERS;Initial Catalog=Employee;Integrated Security=SSPI");
            SqlCommand cmd = new SqlCommand();
            conn.Open();
            cmd.Connection = conn;
            cmd.CommandText = "Insert into demo Values (@Name)";
            cmd.Parameters.AddWithValue("@Name", xxx);
            cmd.ExecuteNonQuery();
            return "Saved";
        }

    The value xxx in the code should be replaced by the value passed from the xaml.cs page. Please help me.

    Read the article

  • Will MyISAM type tables work better than InnoDB for large numbers of columns?

    - by Ethan
    I have a MySQL InnoDB table with 238 columns. 56 of them are TEXT type, 27 are VARCHAR(255). I sometimes get MySQL error 139 when users insert data. After research I found that I'm probably running into InnoDB row size/column size/column count limitations. (I'm putting it that way because the specific limits among those three things are interdependent.) The docs on InnoDB give an idea of the limits. If I switch this table to MyISAM, is it likely to solve the problem? I understand the maximum row size of 65,535 bytes. I think I'm hitting InnoDB's additional 8000-byte limit somehow. Switching to PostgreSQL is also a remote option, but would take much longer.

    Read the article

  • Question About DateCreated and DateModified Columns - SQL Server

    - by user311509
        CREATE TABLE Customer
        (
            customerID int identity (500,20) CONSTRAINT
            .
            .
            dateCreated  datetime DEFAULT GetDate() NOT NULL,
            dateModified datetime DEFAULT GetDate() NOT NULL
        );

    When I insert a record, dateCreated and dateModified get set to the default date/time. When I update/modify the record, dateModified and dateCreated remain as they are. What should I do? Obviously, I need the dateCreated value to remain as it was when the row was first inserted, while dateModified keeps changing whenever a change/modification occurs in the record fields. In other words, can you please write a quick sample trigger? I don't know much yet...

    Read the article

  • utf8 and Unicode: getting warning messages in MySQL

    - by BufordTaylor
    I have a MySQL table. When I try to insert, I get this:

        Warning: Incorrect string value: '\xAE</...' for column 'value' at row 1

        mysql> show create table Configurations;
        | Configurations | CREATE TABLE `Configurations` (
            `id` int(11) NOT NULL AUTO_INCREMENT,
            `title` varchar(255) NOT NULL,
            `ckey` varchar(255) NOT NULL,
            `value` mediumtext,
            PRIMARY KEY (`id`),
            KEY `ckey` (`ckey`),
        ) ENGINE=InnoDB AUTO_INCREMENT=29 DEFAULT CHARSET=utf8 |

        mysql> SHOW VARIABLES LIKE 'coll%';
        +----------------------+-----------------+
        | Variable_name        | Value           |
        +----------------------+-----------------+
        | collation_connection | utf8_general_ci |
        | collation_database   | utf8_general_ci |
        | collation_server     | utf8_general_ci |
        +----------------------+-----------------+

    I googled the hell out of the error, and it all seemed to boil down to utf8 being set as my default character set. It has been set that way for a while. I'm not sure what else to do. Help?
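    A common cause (my reading, not confirmed in the question): 0xAE is the registered-trademark sign in Latin-1, so the client is most likely sending Latin-1 bytes over a connection declared as utf8, and MySQL rejects the byte sequence as invalid UTF-8. A hedged sketch of the usual client-side fix, illustrated in Python: decode the bytes to text and declare the connection charset explicitly. Connection parameters and column values here are hypothetical.

        import pymysql

        raw = b"Acme\xae widget"         # bytes as received from the source
        text = raw.decode("latin-1")     # now proper text: 'Acme® widget'

        conn = pymysql.connect(host="localhost", user="app", password="secret",
                               database="app", charset="utf8mb4")
        with conn.cursor() as cur:
            cur.execute(
                "INSERT INTO Configurations (title, ckey, value) VALUES (%s, %s, %s)",
                ("example", "example-key", text),
            )
        conn.commit()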

    Read the article

  • TinyMCE SQL error when adding links

    - by Anders Kitson
    I am using TinyMCE with a script I built for uploading some content to a blog-like system. Whenever I add a link via TinyMCE I get the error below. The field type in MySQL for $content, which is the field carrying the link, is longblob, if that helps. Here is the error first and then my code:

        You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version
        for the right syntax to use near 'google test" href="http://www.google.ca" target="_blank"google est laborum' at line 4

        /* GRAB FORM DATA */
        $title = $_POST['title'];
        $date = $_POST['date'];
        $content = $_POST['content'];
        $imageName1 = $_FILES["file"]["name"];
        $date = date("Y-m-d");

        $sql = "INSERT INTO blog (title,date,content,image) VALUES (
            \"$title\",
            \"$date\",
            \"$content\",
            \"$imageName1\"
        )";
        $results = mysql_query($sql) or die(mysql_error());

    Read the article

  • Convert XML to table in SQL Server 2005.

    - by Tamim Sadikali
    If I pass an XML parameter into a stored proc which looks like this:

        <ClientKeys>
            <ck>3052</ck>
            <ck>3051</ck>
            <ck>3050</ck>
            <ck>3049</ck>
            ...
        </ClientKeys>

    ...and then convert the XML to a temp table like this:

        CREATE TABLE #ClientKeys
        (
            ClientKey varchar(36)
        )

        INSERT INTO #ClientKeys (ClientKey)
        SELECT ParamValues.ck.value('.', 'VARCHAR(36)')
        FROM @ClientKeys.nodes('/ClientKeys/ck') as ParamValues(ck)

    ...the temp table is populated and everything is good. However, the time taken to populate the table is strictly proportional to the number of 'ck' elements in the XML, which I wasn't expecting, as there is no iterative step. The time taken to populate the table thus soon becomes 'too long'. Is there a quicker way to achieve the above?

    Read the article

  • PHP: Moving a MySQL Tree Node

    - by TK
    I am having trouble trying to move sub nodes or parent nodes up or down... I'm not that good at math.

        CREATE TABLE IF NOT EXISTS `pages` (
            page-id       mediumint(8) unsigned NOT NULL AUTO_INCREMENT,
            page-left     mediumint(8) unsigned NOT NULL,
            page-right    smallint(8) unsigned NOT NULL,
            page-title    text NOT NULL,
            page-content  text NOT NULL,
            page-time     int(11) unsigned NOT NULL,
            page-slug     text NOT NULL,
            page-template text NOT NULL,
            page-parent   mediumint(8) unsigned NOT NULL,
            page-type     text NOT NULL,
            PRIMARY KEY (page-id)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 ;

        INSERT INTO pages (page-id, page-left, page-right, page-title, page-content, page-time, page-slug, page-template, page-parent, page-type) VALUES
            (17, 1, 6, '1', '', 0, 'PARENT',  '', 0,  ''),
            (18, 2, 5, '2', '', 0, 'SUB',     '', 17, ''),
            (19, 3, 4, '3', '', 0, 'SUB-SUB', '', 18, ''),
            (20, 7, 8, '5', '', 0, 'TEST',    '', 0,  '');

    As an example, how would I move TEST up above PARENT, and, say, move SUB down below SUB-SUB, by playing with the page-left/page-right IDs? Code is not required; just help with the SQL concept or the math for it would help me understand how to move nodes better...
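    One way to reason about the move (my own sketch, not from the question): the table already stores page-parent, so instead of shifting left/right ranges by hand you can reorder the nodes in the parent/child view and recompute page-left/page-right with a single depth-first renumbering pass, then write the pairs back with plain UPDATEs. The sketch below mirrors the rows above, with TEST (20) moved before PARENT (17) and SUB (18) moved below SUB-SUB (19); for large trees the usual in-place range shifting is more efficient, but the rebuild is much easier to get right.

        def renumber(children, node, counter, out):
            """Depth-first pass that assigns nested-set (left, right) to each node."""
            left = counter
            counter += 1
            for child in children.get(node, []):
                counter = renumber(children, child, counter, out)
            out[node] = (left, counter)
            return counter + 1

        # parent id -> ordered list of child ids (0 is a virtual root),
        # with TEST moved first and SUB pushed under SUB-SUB
        children = {
            0:  [20, 17],
            17: [19],
            19: [18],
        }

        out = {}
        counter = 1
        for root in children[0]:
            counter = renumber(children, root, counter, out)
        print(out)   # {20: (1, 2), 18: (5, 6), 19: (4, 7), 17: (3, 8)}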

    Read the article

  • msysGit: Why does git log output blank lines?

    - by Sam
    It appears to insert fewer blank lines the closer I type the command to the bottom of the terminal window. If I type it at the top of the terminal window, it inserts nearly a full window height of blank lines; if I type it at the very bottom, no blank lines are inserted. It seems like the pager program is pushing output to the bottom of the terminal window, but I want the output to appear right below my command, or at the top, like in Linux git. I can get the expected behavior by using git --no-pager log, but what if I want to use a pager?

    Read the article

  • Sequence Generators in T-SQL

    - by PaoloFCantoni
    We have an Oracle application that uses a standard pattern to populate surrogate keys. We have a series of extrinsic rows (that have specific values for the surrogate keys) and other rows that have intrinsic values. We use the following Oracle trigger snippet to determine what to do with the surrogate key on insert:

        IF :NEW.SurrogateKey IS NULL THEN
            SELECT SurrogateKey_SEQ.NEXTVAL
            INTO :NEW.SurrogateKey
            FROM DUAL;
        END IF;

    If the supplied surrogate key is null, get a value from the nominated sequence; else pass the supplied surrogate key through to the row. I can't seem to find an easy way to do this in T-SQL. There are all sorts of approaches, but none of them use the notion of a sequence generator the way Oracle and other SQL-92 compliant DBs do. Does anybody know of a really efficient way to do this in SQL Server T-SQL? BTW, we're using SQL Server 2008 if that's any help. TIA, Paolo

    Read the article

  • Solving a SQL Server Deadlock situation

    - by mjh41
    I am trying to find a solution that will resolve a recurring deadlock situation in SQL Server. I have done some analysis on the deadlock graph generated by the profiler trace and have come up with this information. The first process (spid 58) is running this query:

        UPDATE cds.dbo.task_core
        SET nstate = 1
        WHERE nmboxid = 89
          AND ndrawerid = 1
          AND nobjectid IN (SELECT nobjectid
                            FROM (SELECT nobjectid, count(nobjectid) AS counting
                                  FROM cds.dbo.task_core
                                  GROUP BY nobjectid) task_groups
                            WHERE task_groups.counting > 1)

    The second process (spid 86) is running this query:

        INSERT INTO task_core (…) VALUES (…)

    spid 58 is waiting for a Shared Page lock on CDS.dbo.task_core (spid 86 holds a conflicting Intent Exclusive (IX) lock). spid 86 is waiting for an Intent Exclusive (IX) page lock on CDS.dbo.task_core (spid 58 holds a conflicting Update lock).

    Read the article
