Search Results

Search found 24172 results on 967 pages for 'mongodb update'.

Page 253 of 967

  • Incorrect syntax inserting data into table

    - by SelectDistinct
    I am having some trouble with my update() method. The idea is that the user provides a recipe name, ingredients and instructions, and then selects an image using a FileStream. Once the user clicks 'Add Recipe' this calls the update method; however, as things stand, I am getting an error that mentions the contents of the text box. Here is the update() method code: private void updatedata() { // filesteam object to read the image // full length of image to a byte array try { // try to see if the image has a valid path if (imagename != "") { FileStream fs; fs = new FileStream(@imagename, FileMode.Open, FileAccess.Read); // a byte array to read the image byte[] picbyte = new byte[fs.Length]; fs.Read(picbyte, 0, System.Convert.ToInt32(fs.Length)); fs.Close(); //open the database using odp.net and insert the lines string connstr = @"Server=mypcname\SQLEXPRESS;Database=RecipeOrganiser;Trusted_Connection=True"; SqlConnection conn = new SqlConnection(connstr); conn.Open(); string query; query = "insert into Recipes(RecipeName,RecipeImage,RecipeIngredients,RecipeInstructions) values (" + textBox1.Text + "," + " @pic" + "," + textBox2.Text + "," + textBox3.Text + ")"; SqlParameter picparameter = new SqlParameter(); picparameter.SqlDbType = SqlDbType.Image; picparameter.ParameterName = "pic"; picparameter.Value = picbyte; SqlCommand cmd = new SqlCommand(query, conn); cmd.Parameters.Add(picparameter); cmd.ExecuteNonQuery(); MessageBox.Show("Image successfully saved"); cmd.Dispose(); conn.Close(); conn.Dispose(); Connection(); } } catch (Exception ex) { MessageBox.Show(ex.Message); } } Can anyone see where I have gone wrong with the insert into Recipes query, or suggest an alternative approach to this part of the code?
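
    The error most likely comes from concatenating the textbox values into the SQL string without quotes; since the image is already passed as a parameter, the simplest route is to parameterize every value. A minimal sketch in the same spirit, reusing the names from the question (connstr, picbyte, textBox1-3) and assuming using System.Data; and using System.Data.SqlClient; are in place:

        string query = "insert into Recipes (RecipeName, RecipeImage, RecipeIngredients, RecipeInstructions) " +
                       "values (@name, @pic, @ingredients, @instructions)";
        using (SqlConnection conn = new SqlConnection(connstr))
        using (SqlCommand cmd = new SqlCommand(query, conn))
        {
            // every value travels as a parameter, so quoting, escaping and injection are handled by ADO.NET
            cmd.Parameters.AddWithValue("@name", textBox1.Text);
            cmd.Parameters.Add("@pic", SqlDbType.Image).Value = picbyte;
            cmd.Parameters.AddWithValue("@ingredients", textBox2.Text);
            cmd.Parameters.AddWithValue("@instructions", textBox3.Text);
            conn.Open();
            cmd.ExecuteNonQuery();
        }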

    Read the article

  • Rapid taps on an OpenGL ES app introducing input delay

    - by Tim R.
    I am starting out writing a 2D game in OpenGL ES, and I have encountered an odd problem: if I rapidly tap the touchscreen, the input starts lagging behind the display. The more times I tap, the more delay it causes between the input and any indication of that input onscreen. It only happens if I intentionally tap very rapidly, but not from tapping and dragging with any number of fingers. What could be causing this? Excessive details follow: Both accelerometer input and taps are delayed by just tapping. The only events I am responding to are touchesBegan (below) in my EAGLView and accelerometer:(UIAccelerometer *)accelerometer didAccelerate:(UIAcceleration *)acceleration in my Game object. There doesn't seem to be any upper limit to the amount of delay: I've gotten up to 12 seconds of delay by tapping rapidly with five fingers. I have not seen any drops in framerate (it stays constantly at 60 fps) in the OpenGL ES tool in Instruments or by taking 1/the time between updates. Possibly relevant code: - (void) drawView:(id) sender { [game update:allTouches]; [renderer render:game]; } -(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event { allTouches = [event allTouches]; } allTouches is a pointer that gets passed to my Game every update, which passes it to each GameObject in their update methods.

    Read the article

  • Why do Scala maps have poor performance relative to Java?

    - by Mike Hanafey
    I am working on a Scala app that consumes large amounts of CPU time, so performance matters. The prototype of the system was written in Python, and performance was unacceptable. The application does a lot of inserting and manipulating of data in maps. Rex Kerr's Thyme was used to look at the performance of updating and retrieving data from maps. Basically "n" random Ints were stored in maps, and retrieved from the maps, with the time relative to java.util.HashMap used as a reference. The full results for a range of "n" are here. Sample (n=100,000) performance relative to Java, smaller is worse:

                    Update    Read
        Mutable     16.06%    76.51%
        Immutable   31.30%    20.68%

    I do not understand why the Scala immutable map beats the Scala mutable map in update performance. Using sizeHint on the mutable map does not help (it appears to be ignored in the tested implementation, 2.10.3). Even more surprisingly, the immutable read performance is worse than the mutable read performance, and more significantly so with larger maps. The update performance of the Scala mutable map is surprisingly bad relative to both the Scala immutable map and plain Java. What is the explanation?

    Read the article

  • Question about FK reference in the collection

    - by Ahmed
    Hi, I have two entities, (Person) and (Address), with the following mapping: <class name="Adress" table="Adress" lazy="false"> <id name="Id" column="Id"> <generator class="native" /> </id> <many-to-one name="Person" class="Person"> <column name="PersonId" /> </many-to-one> </class> <class name="Person" table="Person" lazy="false"> <id name="PersonId" column="PersonId"> <generator class="native" /> </id> <property name="Name" column="Name" type="String" not-null="true" /> <set name="Adresses" lazy="true" inverse="true" cascade="save-update"> <key> <column name="PersonId" /> </key> <one-to-many class="Adress" /> </set> </class> My problem is that when I set Adress.Person to a new Person object, the collection person.Adresses doesn't update itself. Should I update both ends of the association myself? Another thing: if I update the FK manually, via Adress.PersonId, it doesn't break or change the association. Is this NHibernate behaviour? Thanks in advance, I am waiting for your experiences.
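
    For what it's worth, NHibernate does not keep the in-memory side of a bidirectional association in sync automatically: with inverse="true" the many-to-one on Adress owns the foreign key, so the usual pattern is to set both ends yourself. A sketch, assuming entity classes that match the mapping above (the instance names are illustrative):

        // set the owning side (this is what writes PersonId) and fix up the collection by hand
        adress.Person = person;
        person.Adresses.Add(adress);
        // cascade="save-update" on the set saves the new Adress along with the Person
        session.SaveOrUpdate(person);

    That would also explain the second observation: the mapped association is the Person reference itself, so changing a separate PersonId value on its own is not something NHibernate tracks.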

    Read the article

  • EF4 + STE: Reattaching via a WCF Service? Using a new objectcontext each and every time?

    - by Martin
    Hi there, I am planning to use WCF (not RIA) in conjunction with Entity Framework 4 and STEs (self-tracking entities). If I understand this correctly, my WCF service should return an entity or collection of entities (using a List, for example, and not IQueryable) to the client (in my case Silverlight). The client can then change or update the entity. At this point I believe it is self-tracking? This is where I get a bit confused, as there are a lot of reported problems with STEs not tracking. Anyway... then to update, I just need to send the entity back to my WCF service, to another method that does the update. Should I be creating a new ObjectContext every time? In every method? If I am creating a new ObjectContext every time, in every method, on my WCF service, then don't I need to re-attach the STE to the ObjectContext? So basically this alone wouldn't work? using(var ctx = new MyContext()) { ctx.Orders.ApplyChanges(order); ctx.SaveChanges(); } Or should I be creating the ObjectContext once in the constructor of the WCF service, so that the first call and every additional call using the same WCF instance uses the same ObjectContext? I could create and destroy the WCF service in each method call from the client, hence creating in effect a new ObjectContext each time. I understand that it isn't a good idea to keep the ObjectContext alive for very long. Any insight or information would be gratefully appreciated, thanks.
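
    For what it's worth, a short-lived ObjectContext per service call is the usual pattern with self-tracking entities, and ApplyChanges (generated by the STE template) is itself the re-attach step: it attaches the detached graph and replays its recorded changes. A sketch of that shape, assuming the context and set names from the snippet above:

        public void UpdateOrder(Order order)
        {
            using (var ctx = new MyContext())
            {
                // attaches the STE graph and applies its tracked state, then saves
                ctx.Orders.ApplyChanges(order);
                ctx.SaveChanges();
            }
        }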

    Read the article

  • Can redirection of screen output to file change the result of a C++ code?

    - by Biga
    I am having some very weird behaviour with a piece of C++ code: it gives me different results when running with and without redirecting the screen output to a file (reproducible in Cygwin and Linux). I mean, if I take the same executable and run it as ./run or as ./run >out.log, I get different results! I use std::cout to output to the screen, all lines ending with endl; I use ifstream for the input file; I use ofstream for output, all lines ending with endl. I am using g++ 4. Any idea what is going on? UPDATE: I have hard-coded the input data, so 'ifstream' is not used, and the problem persists. UPDATE 2: This is getting interesting. I have probed three variables that are computed initially, and this is what I get with and without redirecting the output. Redirected to file: 0 -0.02 0; direct to screen: 0 -0.02 1.04083e-17. So there's a round-off difference in the code's variables with and without redirecting the output! Now, why would redirecting interfere with an internal computation in the code? UPDATE 3: If I redirect to /dev/null, I get the same behaviour as outputting directly to the screen, instead of the behaviour when redirecting to a file.

    Read the article

  • Adding with PHP to a MySQL database

    - by shinjuo
    I am pretty new to PHP and I am trying to make an inventory database. I have been trying to make it so that a user can enter a card ID and the amount they want to add, and have it update the inventory. For example, someone could type in test and 2342 and it would update test. Here is what I have been trying, with no success: add.html <body> <form action="add.php" method="post"> Card ID: <input type="text" name="CardID" /> Amount to Add: <input type="text" name="Add" /> <input type="submit" /> </form> </body> </html> add.php <?php $link = mysql_connect('tulsadir.ipowermysql.com', 'cbouwkamp', '!starman1'); if (!$link){ die('Could not connect: ' . mysql_error()); } mysql_select_db("tdm_inventory", $link); $add = $_POST[Add] mysql_query("UPDATE cardLists SET AmountLeft = '$add' WHERE cardID = 'Test'"); echo "test successful"; mysql_close($link); ?>

    Read the article

  • PHP Form - Edit & Delete via Text File Db

    - by Jax
    hi, I pieced together the script below from various tutorials, examples, etc... Right now the script currently: Saves Id, Name, Url with a "|" delimiter to a text file Db like: 1|John|http://www.john.com| 2|Mark|http://www.mark.com| 3|Fred|http://www.fred.com| But I'm having a hard time trying to make the "UPDATE" and "DELETE" buttons work. Can someone please post code which will: let me update/save any changed data for that row (for UPDATE button) let me delete that row (for DELETE button) PLEASE copy n paste the code below and try for yourself. I would like to keep the output format of the script below too. thanks D- $file = "data.txt"; $name = $_POST['name']; $url = $_POST['url']; $data = file('data.txt'); $i = 1; foreach ($data as $line) { $line = explode('|', $line); $i++; } if (isset($_POST['submits'])) { $fp = fopen($file, "a+"); fwrite($fp, $i."|".$name."|".$url."|\n"); fclose($fp); } ? '); } ?

    Read the article

  • GPG error occurs while using "deb file:/local-path-to-repo ..." in /etc/apt/sources.list

    - by Chandler.Huang
    I need to install packages in an environment without an internet connection. My plan is to download the dists structure from the Internet and then add a file path to /etc/apt/sources.list. So I downloaded the related structure, which includes ubunt/dists/precise, precise-backports, precise-proposed, precise-security and precise-updates, from an FTP mirror server. I then removed the original sources and added the following to my /etc/apt/sources.list: deb file:path-to-local-ubuntu-directory/ precise main restricted multiverse universe deb-src file:path-to-local-ubuntu-directory/ precise main restricted multiverse universe Then I get the following GPG error after apt-get update: root@openstack:/~# apt-get update Ign file: precise InRelease Get:1 file: precise Release.gpg [198 B] Get:2 file: precise Release [50.1 kB] Ign file: precise Release Get:3 file: precise/main TranslationIndex [3,761 B] Get:4 file: precise/multiverse TranslationIndex [2,716 B] Get:5 file: precise/restricted TranslationIndex [2,636 B] Get:6 file: precise/universe TranslationIndex [2,965 B] Reading package lists... Done W: GPG error: file: precise Release: The following signatures were invalid: BADSIG 0976EAF437D05B5 Ubuntu Archive Automatic Signing Key <[email protected]> I have tried the following steps, found via Google, but in vain: sudo apt-get clean cd /var/lib/apt sudo mv lists lists.old sudo mkdir -p lists/partial sudo apt-get update Is there any way to resolve this? And why does this error occur? Thanks a lot.

    Read the article

  • android widget unresponsive

    - by John
    I have a widget that you press, and it then updates the text on the widget. I have set an on-click listener to launch another activity to perform the text update, but for some reason it only works temporarily and then becomes unresponsive, doing nothing when pressed. Does anyone know why it might be doing that? I have posted my widget code below in case it is helpful. @Override public void onUpdate(Context context, AppWidgetManager appWidgetManager,int[] appWidgetIds) { thisWidget = new ComponentName(context, MemWidget.class); Intent intent = new Intent(context, updatewidget.class); PendingIntent pendingIntent = PendingIntent.getActivity(context, 0, intent, 0); // Get the layout for the App Widget and attach an on-click listener to the button RemoteViews views = new RemoteViews(context.getPackageName(), R.layout.widget); views.setOnClickPendingIntent(R.id.ImageButton01, pendingIntent); // Tell the AppWidgetManager to perform an update on the current App Widget appWidgetManager.updateAppWidget(thisWidget, views); } @Override public void onReceive(Context context, Intent intent) { appWidgetManager = AppWidgetManager.getInstance(context); remoteViews = new RemoteViews(context.getPackageName(), R.layout.widget); thisWidget = new ComponentName(context, MemWidget.class); // v1.5 fix that doesn't call onDelete Action final String action = intent.getAction(); if (AppWidgetManager.ACTION_APPWIDGET_DELETED.equals(action)) { final int appWidgetId = intent.getExtras().getInt( AppWidgetManager.EXTRA_APPWIDGET_ID, AppWidgetManager.INVALID_APPWIDGET_ID); if (appWidgetId != AppWidgetManager.INVALID_APPWIDGET_ID) { this.onDeleted(context, new int[] { appWidgetId }); } } else { super.onReceive(context, intent); } }

    Read the article

  • How to insert and call by row and column into sqlite3 python, great tutorial problem.

    - by user291071
    Lets say i have a simple array of x rows and y columns with corresponding values, What is the best method to do 3 things? How to insert, update a value at a specific row column? How to select a value for each row and column, import sqlite3 con = sqlite3.connect('simple.db') c = con.cursor() c.execute('''create table simple (links text)''') con.commit() dic = {'x1':{'y1':1.0,'y2':0.0},'x2':{'y1':0.0,'y2':2.0,'y3':1.5},'x3':{'y2':2.0,'y3':1.5}} ucols = {} ## my current thoughts are collect all row values and all column values from dic and populate table row and columns accordingly how to call by row and column i havn't figured out yet ##populate rows in first column for row in dic: print row c.execute("""insert into simple ('links') values ('%s')"""%row) con.commit() ##unique columns for row in dic: print row for col in dic[row]: print col ucols[col]=dic[row][col] ##populate columns for col in ucols: print col c.execute("alter table simple add column '%s' 'float'" % col) con.commit() #functions needed ##insert values into sql by row x and column y?how to do this e.g. x1 and y2 should put in 0.0 ##I tried as follows didn't work for row in dic: for col in dic[row]: val =dic[row][col] c.execute("""update simple SET '%s' = '%f' WHERE 'links'='%s'"""%(col,val,row)) con.commit() ##update value at a specific row x and column y? ## select a value at a specific row x and column y?

    Read the article

  • How to effectively execute this cron job?

    - by Lost_in_code
    I have a table with 200 rows. I'm running a cron job every 10 minutes to perform some kind of insert/update operation on the table. The operation needs to be performed on only 5 rows at a time, every time the cron job runs. So in the first 10 minutes records 1-5 are updated, records 5-10 in the 20th minute, and so on. When the cron job runs for the 20th time, all the records in the table would have been updated exactly once. This is what is to be achieved, at least. And the next cron job should repeat the process again. The problem is that every time a cron job runs, the insert/update operation should be performed on N rows (not just 5 rows). So, if N is 100, all records would've been updated by just 2 cron jobs. And the next cron job would repeat the process again. Here's an example. This is the table I currently have (200 records). Every time a cron job executes, it needs to pick N records (which I set as a variable in PHP) and update the time_md5 field with the current time's MD5 value.

        +---------+-------------------------------------+
        | id      | time_md5                            |
        +---------+-------------------------------------+
        | 10      | 971324428e62dd6832a2778582559977    |
        | 72      | 1bd58291594543a8cc239d99843a846c    |
        | 3       | 9300278bc5f114a290f6ed917ee93736    |
        | 40      | 915bf1c5a1f13404add6612ec452e644    |
        | 599     | 799671e31d5350ff405c8016a38c74eb    |
        | 56      | 56302bb119f1d03db3c9093caf98c735    |
        | 798     | 47889aa559636b5512436776afd6ba56    |
        | 8       | 85fdc72d3b51f0b8b356eceac710df14    |
        | ..      | .......                             |
        | ..      | .......                             |
        | ..      | .......                             |
        | ..      | .......                             |
        | 340     | 9217eab5adcc47b365b2e00bbdcc011a    |  <-- 200th record
        +---------+-------------------------------------+

    So, the first record (id 10) should not be updated more than once till all 200 records are updated once; the process should start over once all the records have been updated once. I have some idea of how this could be achieved, but I'm sure there are more efficient ways of doing it. Any suggestions?

    Read the article

  • ASP ListView - Eval() as formatted number, Bind() as unformatted?

    - by chucknelson
    I have an ASP ListView, and have a very simple requirement to display numbers as formatted w/ a comma (12,123), while they need to bind to the database without formatting (12123). I am using a standard setup - ListView with a datasource attached, using Bind(). I converted from some older code, so I'm not using ASP.NET controls, just form inputs...but I don't think it matters for this: <asp:SqlDataSource ID="MySqlDataSource" runat="server" ConnectionString='<%$ ConnectionStrings:ConnectionString1 %>' SelectCommand="SELECT NUMSTR FROM MY_TABLE WHERE ID = @ID" UpdateCommand= "UPDATE MY_TABLE SET NUMSTR = @NUMSTR WHERE ID = @ID"> </asp:SqlDataSource> <asp:ListView ID="MyListView" runat="server" DataSourceID="MySqlDataSource"> <LayoutTemplate> <div id="itemplaceholder" runat="server"></div> </LayoutTemplate> <ItemTemplate> <input type="text" name="NUMSTR" ID="NUMSTR" runat="server" value='<%#Bind("NUMSTR")%>' /> <asp:Button ID="UpdateButton" runat="server" Text="Update" Commandname="Update" /> </ItemTemplate> </asp:ListView> In the example above, NUMSTR is a number, but stored as a string in a SqlServer 2008 database. I'm also using the ItemTemplate as read and edit templates, to save on duplicate HTML. In the example, I only get the unformatted number. If I convert the field to an integer (via the SELECT) and use a format string like Bind("NUMSTR", "{0:###,###}"), it writes the formatted number to the database, and then fails when it tries to read it again (can't convert with the comma in there). Is there any elegant/simple solution to this? It's so easy to get the two-way binding going, and I would think there has to be a way to easily format things as well... Oh, and I'm trying to avoid the standard ItemTemplate and EditItemTemplate approach, just for sheer amount of markup required for that. Thanks!
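
    One approach that keeps the two-way Bind() intact is to format only for display (with a format string on the bound field, as tried above) and strip the separators just before the data source runs its UPDATE, in the SqlDataSource Updating event (wired up via OnUpdating="MySqlDataSource_Updating" on the data source). A hedged sketch; the handler and parameter names assume the markup above:

        protected void MySqlDataSource_Updating(object sender, SqlDataSourceCommandEventArgs e)
        {
            // "12,123" coming back from the textbox is written to the database as "12123"
            System.Data.Common.DbParameter p = e.Command.Parameters["@NUMSTR"];
            if (p != null && p.Value != null)
                p.Value = p.Value.ToString().Replace(",", "");
        }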

    Read the article

  • Error : Number of Rows In Section in UITableView in iPhone SDK

    - by Meghan
    I am getting this error while I am trying to load the data into my table view. Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'Invalid update: invalid number of rows in section 0. The number of rows contained in an existing section after the update (73) must be equal to the number of rows contained in that section before the update (71), plus or minus the number of rows inserted or deleted from that section (3 inserted, 0 deleted). What could be wrong? Thanks EDIT : I am initializing the array on ViewWillAppear and adding new objects to the same array on Tableview's didSelectRowAtIndexPath method Here is the code On viewWillAppear : cellTextArray = [[NSMutableArray alloc] init]; [cellTextArray addObjectsFromArray:newPosts]; Here is the code which modifies the array on didSelectRowAtIndexPath : [cellTextArray addObjectsFromArray:newPosts]; NSMutableArray *insertIndexPaths = [NSMutableArray array]; for (NSUInteger item = count; item < count + newCount; item++) { [insertIndexPaths addObject:[NSIndexPath indexPathForRow:item inSection:0]]; } [self.table beginUpdates]; [self.table insertRowsAtIndexPaths:insertIndexPaths withRowAnimation:UITableViewRowAnimationFade]; [self.table endUpdates]; [self.table scrollToRowAtIndexPath:indexPath atScrollPosition:UITableViewScrollPositionNone animated:YES]; NSIndexPath *selected = [self.table indexPathForSelectedRow]; if (selected) { [self.table deselectRowAtIndexPath:selected animated:YES]; } Here newPosts is an array which has the values that are added to cellTextArray on didSelectRowAtIndexPath method and viewWillAppear method.

    Read the article

  • Heroku only initializes some of my models.

    - by JayX
    So I ran heroku db:push And it returned Sending schema Schema: 100% |==========================================| Time: 00:00:08 Sending indexes schema_migrat: 100% |==========================================| Time: 00:00:00 projects: 100% |==========================================| Time: 00:00:00 tasks: 100% |==========================================| Time: 00:00:00 users: 100% |==========================================| Time: 00:00:00 Sending data 8 tables, 70,551 records groups: 100% |==========================================| Time: 00:00:00 schema_migrat: 100% |==========================================| Time: 00:00:00 projects: 100% |==========================================| Time: 00:00:00 tasks: 100% |==========================================| Time: 00:00:02 authenticatio: 100% |==========================================| Time: 00:00:00 articles: 100% |==========================================| Time: 00:08:27 users: 100% |==========================================| Time: 00:00:00 topics: 100% |==========================================| Time: 00:01:22 Resetting sequences And when I went to heroku console This worked >> Task => Task(id: integer, topic: string, content: string, This worked >> User => User(id: integer, name: string, email: string, But the rest only returned something like >> Project NameError: uninitialized constant Project /home/heroku_rack/lib/console.rb:150 /home/heroku_rack/lib/console.rb:150:in `call' /home/heroku_rack/lib/console.rb:28:in `call' >> Authentication NameError: uninitialized constant Authentication /home/heroku_rack/lib/console.rb:150 /home/heroku_rack/lib/console.rb:150:in `call' update 1: And when I typed >> ActiveRecord::Base.connection.tables it returned => ["projects", "groups", "tasks", "topics", "articles", "schema_migrations", "authentications", "users"] Using heroku's SQL console plugin I got SQL> show tables +-------------------+ | table_name | +-------------------+ | authentications | | topics | | groups | | projects | | schema_migrations | | tasks | | articles | | users | +-------------------+ So I think they are existing in heroku's database already. There is probably something wrong with rack db:migrate update 2: I ran rack db:migrate locally in both production and development modes and nothing wrong happened. But when I ran it on heroku it only returned: $ heroku rake db:migrate (in /disk1/home/slugs/389817_1c16250_4bf2-f9c9517b-bdbd-49d9-8e5a-a87111d3558e/mnt) $ Also, I am using sqlite3 update 3: so I opened up heroku console and typed in the following command class Authentication < ActiveRecord::Base;end Amazingly I was able to call Authentication class, but once I exited, nothing was changed.

    Read the article

  • I need a fast runtime expression parser

    - by Chris Lively
    I need to locate a fast, lightweight expression parser. Ideally I want to pass it a list of name/value pairs (e.g. variables) and a string containing the expression to evaluate. All I need back from it is a true/false value. The types of expressions should be along the lines of: varA == "xyz" and varB==123 Basically, just a simple logic engine whose expression is provided at runtime. UPDATE: At minimum it needs to support ==, !=, >, >=, <, <=. Regarding speed, I expect roughly 5 expressions to be executed per request. We'll see somewhere in the vicinity of 100 requests a second. Our current pages tend to execute in under 50ms. Usually there will only be 2 or 3 variables involved in any expression. However, I'll need to load approximately 30 into the parser prior to execution. UPDATE 2012/11/5: An update about performance. We implemented nCalc nearly 2 years ago. Since then we've expanded its use such that we average 40+ expressions covering 300+ variables on postbacks. There are now thousands of postbacks occurring per second with absolutely zero performance degradation. We've also extended it to include a handful of additional functions, again with no performance loss. In short, nCalc met all of our needs and exceeded our expectations.
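
    Since the updates above settle on nCalc, here is roughly what an evaluation looks like with it; the expression and parameter values are illustrative, not taken from the original code:

        // NCalc parses the expression once and evaluates it against named parameters
        var expression = new NCalc.Expression("varA == 'xyz' and varB == 123");
        expression.Parameters["varA"] = "xyz";
        expression.Parameters["varB"] = 123;
        bool matched = (bool)expression.Evaluate();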

    Read the article

  • newbie hibernate first level cache confusion

    - by Bruce
    Hi all, I'm just getting to grips with Hibernate, and I'm a little bit confused. I just wanted to watch the operation of the first-level cache, which I understood to batch up queries until the end of the session. But if I create an object, Hibernate saves it immediately, so that when I later update it in the same transaction, it has to do an update too: Session session = factory.getCurrentSession(); session.beginTransaction(); Test1 test1 = new Test1(); test1.setName("Test 1"); test1.setValue(10); // Touch it session.save(test1); System.out.println("At checkpoint 1"); test1.setValue(20); session.getTransaction().commit(); I see the SQL for the save, then 'At checkpoint 1', then the SQL for the update. Do I have something set up wrong, or am I misunderstanding Hibernate's first-level cache? Is there a good document on the first-level cache? I didn't find anything in the Hibernate docs, but I could easily have missed it. Thanks!

    Read the article

  • UpdateModelFromDatabaseException when trying to add a table to Entity Framework model

    - by Agent_9191
    I'm running into a weird issue with Entity Framework in .NET 3.5 SP1 within Visual Studio 2008. I created a database with a few tables in SQL Server and then created the associated .edmx Entity Framework model and had no issues. I then created a new table in the database that has a foreign key to an existing table and needed to be added to the .edmx. So I opened the .edmx in Visual Studio and in the models right-clicked and chose "Update Model From Database...". I saw the new table in the "add" tab, so I checked it and clicked finish. However I get an error message with the following text: --------------------------- Microsoft Visual Studio --------------------------- An exception of type 'Microsoft.Data.Entity.Design.Model.Commands.UpdateModelFromDatabaseException' occurred while attempting to update from the database. The exception message is: 'Cannot update from the database. Cannot resolve the Name Target for ScalarProperty 'ID <==> CustomerID'.'. --------------------------- OK --------------------------- For reference, here's the tables seem to be the most pertinent to the error. CustomerPreferences already exists in the .edmx. Diets is the table that was added afterwards and trying to add to the .edmx. CREATE TABLE [dbo].[CustomerPreferences]( [ID] [uniqueidentifier] NOT NULL, [LastUpdatedTime] [datetime] NOT NULL, [LastUpdatedBy] [uniqueidentifier] NOT NULL, PRIMARY KEY CLUSTERED ( [ID] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] CREATE TABLE [dbo].[Diets]( [ID] [uniqueidentifier] NOT NULL, [CustomerID] [uniqueidentifier] NOT NULL, [Description] [nvarchar](50) NOT NULL, [LastUpdatedTime] [datetime] NOT NULL, [LastUpdatedBy] [uniqueidentifier] NOT NULL, PRIMARY KEY CLUSTERED ( [ID] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] GO ALTER TABLE [dbo].[Diets] WITH CHECK ADD CONSTRAINT [FK_Diets_CustomerPreferences] FOREIGN KEY([CustomerID]) REFERENCES [dbo].[CustomerPreferences] ([ID]) GO ALTER TABLE [dbo].[Diets] CHECK CONSTRAINT [FK_Diets_CustomerPreferences] GO This seems like a fairly common use case, so I'm not sure where I'm going wrong.

    Read the article

  • Performance of stored proc when updating columns selectively based on parameters?

    - by kprobst
    I'm trying to figure out if this is relatively well-performing T-SQL (this is SQL Server 2008). I need to create a stored procedure that updates a table. The proc accepts as many parameters as there are columns in the table, and with the exception of the PK column, they all default to NULL. The body of the procedure looks like this: CREATE PROCEDURE proc_repo_update @object_id bigint ,@object_name varchar(50) = NULL ,@object_type char(2) = NULL ,@object_weight int = NULL ,@owner_id int = NULL -- ...etc AS BEGIN update object_repo set object_name = ISNULL(@object_name, object_name) ,object_type = ISNULL(@object_type, object_type) ,object_weight = ISNULL(@object_weight, object_weight) ,owner_id = ISNULL(@owner_id, owner_id) -- ...etc where object_id = @object_id return @@ROWCOUNT END So basically: Update a column only if its corresponding parameter was provided, and leave the rest alone. This works well enough, but as the ISNULL call will return the value of the column if the received parameter was null, will SQL Server optimize this somehow? This might be a performance bottleneck on the application where the table might be updated heavily (insertion will be uncommon so the performance there is not a problem). So I'm trying to figure out what's the best way to do this. Is there a way to condition the column expressions with something like CASE WHEN or something? The table will be indexed up the wazoo as well for read performance. Is this the best approach? My alternative at this point is to create the UPDATE expression in code (e.g. inline SQL) and execute it against the server. This would solve my doubts about performance, but I'd rather leave this in a stored proc if possible.

    Read the article

  • Load ascx via jQuery

    - by Raika
    Is there a way to load an ascx file with jQuery? UPDATE: thanks to @Emmett and @Yads, I am using a handler and the following Ajax call: jQuery.ajax({ type: "POST", //GET url: "Foo.ashx", data: '{}', contentType: "application/json; charset=utf-8", dataType: "json", success: function (response) { jQuery('#controlload').append(response.d); // or response }, error: function () { jQuery('#controlload').append('error'); } }); but I'm just getting an error. Is my Ajax wrong? Another update: I am using error: function (xhr, ajaxOptions, thrownError) { jQuery('#controlload').append(thrownError); } and this is what I get: Invalid JSON: Test (this 'Test' is a label inside my ascx) and my ascx file after. Error!!! Another update: my ascx file is something like this: <asp:DropDownList ID="ddl" runat="server" AutoPostBack="true"> <asp:ListItem>1</asp:ListItem> <asp:ListItem>2</asp:ListItem> </asp:DropDownList> <asp:Label ID="Label1" runat="server">Test</asp:Label> but when I call the Ajax I get this error from ASP.NET: Control 'ctl00_ddl' of type 'DropDownList' must be placed inside a form tag with runat=server.
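
    The "must be placed inside a form tag with runat=server" error comes from rendering server controls outside a server-side form. One common workaround, if the handler's job is to return the rendered markup of the .ascx, is to load the control into a throwaway Page that contains an HtmlForm and capture its output; a hedged sketch (this helper is hypothetical, not the Foo.ashx from the question):

        // Renders a user control to an HTML string inside a minimal page + form,
        // so controls such as DropDownList have the server form they require.
        public static string RenderUserControl(string virtualPath)
        {
            var page = new System.Web.UI.Page();
            var form = new System.Web.UI.HtmlControls.HtmlForm();
            form.Controls.Add(page.LoadControl(virtualPath));
            page.Controls.Add(form);

            using (var writer = new System.IO.StringWriter())
            {
                // Execute runs the page lifecycle and captures the rendered output
                System.Web.HttpContext.Current.Server.Execute(page, writer, false);
                return writer.ToString();
            }
        }

    If the handler returns raw HTML rather than a JSON envelope, the $.ajax call would also need dataType: "html" and should append response directly instead of response.d.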

    Read the article

  • Why does my binding break down on SilverLight ProgressBars?

    - by Bill Jeeves
    I asked a similar question about charts, but I have given up on that and I am using progress bars instead. Essentially, I have ten progress bars in a Silverlight control. Each shows a different value and updates every couple of seconds (it's a process monitor). Each progress bar has the same minimum and maximum value so the bars can be compared. Trying to follow the M-V-VM model, I have bound the value of each bar to a property in my ViewModel. All of the maximum values for the bars are bound to a single property. When the model updates, the values and the maximum can all update. This allows the bars to re-scale as the sizes grow. I'm finding that the binding will sometimes stop working on one or more bars. I suspect it is because a bar's value occasionally becomes higher than the maximum. This is because if I update the maximums first and they are going down, the values will be too high. If I update the values first when the maximum needs increasing, the values are too high again. Is there a way to stop this behaviour? Some way, perhaps, to tell the progress bars that it's OK to temporarily go too high? Or some way to tell the bindings that they shouldn't be disabled when this happens? Or maybe I've got this completely wrong and there's some other issue with ProgressBar binding I don't know about?
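
    One way to sidestep the ordering problem is to clamp the bound value in the view model, so that whichever property updates first, the ProgressBar never sees a Value above its Maximum. A rough sketch (class and member names are illustrative, not from the question):

        public class BarViewModel : System.ComponentModel.INotifyPropertyChanged
        {
            public event System.ComponentModel.PropertyChangedEventHandler PropertyChanged;

            private double _rawValue;
            private double _maximum = 1.0;

            public double Maximum
            {
                get { return _maximum; }
                set { _maximum = value; Raise("Maximum"); Raise("Value"); } // re-clamp when the scale changes
            }

            public double Value
            {
                get { return System.Math.Min(_rawValue, _maximum); } // never exceeds the bound Maximum
            }

            public void SetRawValue(double v) { _rawValue = v; Raise("Value"); }

            private void Raise(string name)
            {
                var handler = PropertyChanged;
                if (handler != null) handler(this, new System.ComponentModel.PropertyChangedEventArgs(name));
            }
        }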

    Read the article

  • merging two tables, while applying aggregates on the duplicates (max,min and sum)

    - by cloudraven
    I have a table (let's call it log) with a few million records. Among the fields I have Id, Count, FirstHit and LastHit:

        Id - the record id
        Count - number of times this Id has been reported
        FirstHit - earliest timestamp with which this Id was reported
        LastHit - latest timestamp with which this Id was reported

    This table has only one record for any given Id. Every day I get, into another table (let's call it feed), around half a million records with these fields among many others:

        Id
        Timestamp - entry date and time

    This table can have many records for the same Id. What I want to do is update log in the following way:

        Count - the log count value, plus the count() of records for that Id found in feed
        FirstHit - the earliest of the current value in log and the minimum value in feed for that Id
        LastHit - the latest of the current value in log and the maximum value in feed for that Id

    It should be noted that many of the Ids in feed are already in log. The simple thing that worked is to create a temporary table and insert into it the union of both, as in Select Id, Min(Timestamp) As FirstHit, MAX(Timestamp) as LastHit, Count(*) as Count FROM feed GROUP BY Id UNION ALL Select Id, FirstHit,LastHit,Count FROM log; From that temporary table I do a select that aggregates Min(FirstHit), Max(LastHit) and Sum(Count): Select Id, Min(FirstHit),Max(LastHit),Sum(Count) FROM @temp GROUP BY Id; and that gives me the end result. I could then delete everything from log and replace it with everything from temp, or craft an update for the common records and insert the new ones. However, I think both are highly inefficient. Is there a more efficient way of doing this? Perhaps doing the update in place on the log table?

    Read the article

  • Is there a better way to do updates in LinqToSQL?

    - by Vaccano
    I have a list (that comes to my middleware app from the client) that I need to put in my database. Some items in the list may already be in the db (they just need an update). Others are new inserts. This turns out to be much harder than I thought it would be. Here is my code to do that. I am hoping there is a better way: public void InsertClients(List<Client> clients) { var comparer = new LambdaComparer<Client>((x, y) => x.Id == y.Id); // Get a listing of all the ones we will be updating var alreadyInDB = ctx.Clients .Where(client => clients.Contains(client, comparer)); // Update the changes for those already in the db foreach (Client clientDB in alreadyInDB) { var clientDBClosure = clientDB; Client clientParam = clients.Find(x => x.Id == clientDBClosure.Id); clientDB.ArrivalTime = clientParam.ArrivalTime; clientDB.ClientId = clientParam.ClientId; clientDB.ClientName = clientParam.ClientName; clientDB.ClientEventTime = clientParam.ClientEventTime; clientDB.EmployeeCount = clientParam.EmployeeCount; clientDB.ManagerId = clientParam.ManagerId; } // Get a list of all clients that are not in the database. var notInDB = clients.Where(x => alreadyInDB.Contains(x, comparer) == false); ctx.Clients.InsertAllOnSubmit(notInDB); ctx.SubmitChanges(); } This seems like a lot of work to do a simple update. But maybe I am just spoiled. Anyway, if there is an easier way to do this please let me know. Note: If you are curious, the code for the LambdaComparer is here: http://gist.github.com/335780#file_lambda_comparer.cs
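
    A slightly leaner shape of the same idea is to key the existing rows by Id once, then walk the incoming list deciding between update and insert; the Contains call on a list of ids translates to a single IN clause. A sketch against the same Client/ctx names as above:

        var ids = clients.Select(c => c.Id).ToList();
        var existing = ctx.Clients
                          .Where(c => ids.Contains(c.Id))   // one query: WHERE Id IN (...)
                          .ToDictionary(c => c.Id);

        foreach (var client in clients)
        {
            Client target;
            if (existing.TryGetValue(client.Id, out target))
            {
                // update the tracked entity in place
                target.ArrivalTime = client.ArrivalTime;
                target.ClientId = client.ClientId;
                target.ClientName = client.ClientName;
                target.ClientEventTime = client.ClientEventTime;
                target.EmployeeCount = client.EmployeeCount;
                target.ManagerId = client.ManagerId;
            }
            else
            {
                ctx.Clients.InsertOnSubmit(client);
            }
        }
        ctx.SubmitChanges();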

    Read the article

  • Listening to PHP function calls to intercept the returned value

    - by Lansen Q
    I am working on making use of a Web Services API offered by the hosts of our internal system. I am accessing it via PHP with the built-in SOAP support. The API session is initiated by a remote call to a function that returns some session tokens; every call to any function thereafter will return a new session token, which must accompany the next request. I have an API Client class that is doing the bulk of the work; what I would like to do is set something up whereby any SOAP call that is made will update the API Client class's $session variable with the new session details and then pass the data along. So far the only way I can think of doing this is to create a new class extending the SoapClient class, with a __call function wrapper that executes the function, updates the stored session token, and then returns the results. I'm not sure that this will a) work or b) be the best way to go about this. Using the wrapper class would be identical to making a normal SOAP call, and it would return an identical result; it would just update the session token before you get your result back. Thanks! Hope I explained myself properly.

    Read the article

  • Can I get away with this or is it just too crude and unpractical ?

    - by The_AlienCoder
    I spent the whole of last night searching for a free ASP.NET web chat control that I could simply drag into my website. Well, the search was in vain, as I could not find a control that matched my needs, i.e. a list of users, 1-to-1 chat, and the ability to kick out users. In the end I decided to create my own control from scratch. Although it works well on my machine, I'm concerned that it may be a little crude and impractical in a shared hosting environment. Basically this is what I did: Created an SQL database that stores the chat messages. Wrote the stored procedures and included a statement that clears old messages. Then the 'crude' part: Dragged an update panel and a timer control onto my page. Dragged a Repeater, databound to the chat messages table, inside the update panel. Dragged another update panel and inside it put a textbox and a button. Configured the timer control to tick every 5 seconds. ..and then I made it all work like this. In the timer tick event I 'refreshed' the messages display by invoking DataBind() on my repeater, i.e. protected void Timer1_Tick(object sender, EventArgs e) { MyRepeater.DataBind(); } Then in my send button click event: protected void btnSend_Click(object sender, EventArgs e) { MyDataLayer.InsertMessage(Message, Sender, CurrTime); } Well, it works well on my machine, and I've got the other functionality (user list, kicking out users, etc.) to work by simply creating more tables. But like I said, it seems a little crude to me, so I need a professional opinion. Should I run with this or try another approach?

    Read the article
