Search Results

Search found 20931 results on 838 pages for 'mysql insert'.

  • SQL Server Full Text Search with CONTAINSTABLE is very slow when used in a JOIN!

    - by Bob
    Hello, I am using SQL 2008 full text search and I am having serious performance issues depending on how I use CONTAINS or CONTAINSTABLE. Here are samples. (table1 has about 5000 records, and there is a covering index on table1 which has all the fields in the where clause. I tried to simplify the statements, so forgive me if there are syntax issues.)
    Scenario 1:
        select * from table1 as t1
        where t1.field1=90 and t1.field2='something'
          and exists(select top 1 * from containstable(table1, *, 'something') as t2
                     where t2.[key] = t1.id)
    Result: 10 seconds (very slow).
    Scenario 2:
        select * from table1 as t1
        join containstable(table1, *, 'something') as t2 on t2.[key] = t1.id
        where t1.field1=90 and t1.field2='something'
    Result: 10 seconds (very slow).
    Scenario 3:
        declare @tbl table(id uniqueidentifier primary key)
        insert into @tbl
        select [key] from containstable(table1, *, 'something')

        select * from table1 as t1
        where t1.field1=90 and t1.field2='something'
          and exists(select id from @tbl as tbl where tbl.id = t1.id)
    Result: a fraction of a second (super fast).
    Bottom line: it seems that if I use CONTAINSTABLE in any kind of join or where-clause condition of a select statement that also has other conditions, the performance is really bad, and in Profiler the number of reads from the database goes through the roof. But if I first do the full text search, put the results in a table variable, and use that variable, everything goes super fast and the number of reads is much lower. In the "bad" scenarios it somehow seems to get stuck in a loop which causes it to read from the database many times, but of course I don't understand why. Now, question one: why is that happening? And question two: how scalable are table variables? If the search results in tens of thousands of records, is it still going to be fast? Any ideas? Thanks
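    On the scalability question: a table variable has no statistics, so a hedged alternative once the search matches tens of thousands of rows is a temp table, which the optimizer can cost properly. A sketch reusing the names from scenario 3:
        -- Sketch only: same shape as scenario 3, but with a #temp table,
        -- which gets statistics and an index, unlike a table variable.
        create table #keys (id uniqueidentifier primary key);
        insert into #keys (id)
        select [key] from containstable(table1, *, 'something');

        select *
        from table1 as t1
        where t1.field1 = 90 and t1.field2 = 'something'
          and exists (select 1 from #keys as k where k.id = t1.id);

        drop table #keys;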

  • PHP site keeps opening to a blank page, no errors.

    - by gene
    First, the premises: PHP loaded on IIS6 on Win2003 STD R2 SP2, PHP 5.2.13 using FastCGI, MySQL 5.1.45. The customer sent me the site files, which I unzipped to C:\InetPub\wwwroot\<site root>, then I created a new site in IIS pointed to <site root> and added test.php to the site files for testing. test.php works, but visiting index.php produces a blank page with no errors. The readme.txt file present makes reference to application.php, explains the root folder var, and sets it to a non-existent file. I don't know PHP syntax, but I tried several logical changes with zero results; at this point I'm not even sure that is the problem anymore. Between PHP, MySQL and site debugging I have put in over 20 hours. Still confused, I have resorted to heavy drug use and purchased a small firearm, loaded with a single round (even this seemed to take an inordinate amount of time). I've given up all hope. Someone please help save a new server and/or an old administrator.
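    A blank page with no output usually means a fatal error with display_errors off; a minimal diagnostic sketch (standard PHP directives, not specific to this site) to drop at the top of index.php:
        <?php
        // Force errors onto the page instead of a silent blank screen;
        // alternatively, set these in php.ini and restart IIS.
        ini_set('display_errors', '1');
        ini_set('display_startup_errors', '1');
        error_reporting(E_ALL);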

  • Strange behaviour using Drag and Drop in Word 2003 automation in headers

    - by Oliver Hanappi
    Hi! I am developing a template-based add-in for Word 2003 which allows the user to drag and drop elements from a listbox into the Word document. Unfortunately I'm getting really strange behaviour when trying to drop elements in the document's header:
    1. Open the template and type something in the header.
    2. Close the header and insert some content on the page.
    3. Add a page break.
    4. Switch to page layout mode and set the zoom level to "Two Pages".
    5. Open the header.
    6. Slowly drag and drop a list item from the list box to the header.
    7. See multiple Page Setup dialogs appear, which cause Word to crash.
    Here is my code:
        // in ThisDocument.cs
        public MyUserControl _control;
        public void Init()
        {
            _control = new MyUserControl();
            ActionsPane.Controls.Add(_control);
            ActionsPane.Visible = true;
        }

        // in MyUserControl.cs
        public void listBox1_MouseDown(object sender, MouseEventArgs e)
        {
            DoDragDrop("something", DragDropEffects.Copy);
        }
    Have I done something wrong with implementing drag and drop? Is there a workaround for this strange behaviour? Thanks in advance, Oliver Hanappi

  • Why are changes to coffeescript files not being compiled when my Rails 3.2.0 app is in development mode?

    - by ben
    Normally, any changes I make to .js.coffee files in my Rails 3.2.0 app in development mode take effect when I refresh the page. All of a sudden, this is not happening. If I do rake assets:precompile, the changes are shown, but if I then do rake assets:clean they go back to not being shown. What is causing this?
    Edit: Restarting the server makes the changes show. Why isn't this happening automatically as before?
    Edit: Here is my development.rb:
        Myapp::Application.configure do
          # Settings specified here will take precedence over those in config/application.rb

          # In the development environment your application's code is reloaded on
          # every request. This slows down response time but is perfect for development
          # since you don't have to restart the web server when you make code changes.
          config.cache_classes = false

          # Log error messages when you accidentally call methods on nil.
          config.whiny_nils = true

          # Show full error reports and disable caching
          config.consider_all_requests_local = true
          config.action_controller.perform_caching = false

          # Don't care if the mailer can't send
          config.action_mailer.raise_delivery_errors = false

          # Print deprecation notices to the Rails logger
          config.active_support.deprecation = :log

          # Only use best-standards-support built into browsers
          config.action_dispatch.best_standards_support = :builtin

          # Raise exception on mass assignment protection for Active Record models
          config.active_record.mass_assignment_sanitizer = :strict

          # Log the query plan for queries taking more than this (works
          # with SQLite, MySQL, and PostgreSQL)
          config.active_record.auto_explain_threshold_in_seconds = 0.5

          # Do not compress assets
          config.assets.compress = false

          # Expands the lines which load the assets
          config.assets.debug = true

          config.action_mailer.default_url_options = { :host => 'localhost:3000' }
          config.log_level = :warn
        end
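    One hedged lead for this symptom: leftover precompiled files in public/assets are served in preference to live compilation, and the Sprockets cache can go stale. Clearing both and restarting the server is a common first step (commands assume a standard Rails 3.2 layout):
        # Not from the original post -- a common remedy, not a guaranteed fix:
        rake assets:clean        # remove precompiled assets that shadow live ones
        rm -rf tmp/cache/assets  # clear the stale Sprockets cache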

  • Struggling with a data modeling problem

    - by rpat
    I am struggling with a data model (I use MySQL for the database). I am uneasy about what I have come up with. If someone could suggest a better approach, or point me to some reference material, I would appreciate it. The data would have organizations of many types. I am trying to do a 3-level classification (Class, Category, Type). Say I have 'Italian Restaurant'; it will have the classification Food Services > Restaurants > Italian. However, an organization may belong to multiple groups. A restaurant may serve both Chinese and Italian, so it will fit into 2 classifications: Food Services > Restaurants > Italian and Food Services > Restaurants > Chinese. The classification reference tables would be like the following:
        ORG_CLASS (RowId, ClassCode, ClassName)
            1, FOOD, Food Services
        ORG_CATEGORY (RowId, ClassCode, CategoryCode, CategoryName)
            1, FOOD, REST, Restaurants
        ORG_TYPE (RowId, ClassCode, CategoryCode, TypeCode, TypeName)
            100, FOOD, REST, ITAL, Italian
            101, FOOD, REST, CHIN, Chinese
            102, FOOD, REST, SPAN, Spanish
            103, FOOD, REST, MEXI, Mexican
            104, FOOD, REST, FREN, French
            105, FOOD, REST, MIDL, Middle Eastern
    The actual data tables would be like the following: I will allow an organization a maximum of 3 classifications, so I will have 3 GroupIds, each pointing to a row in ORG_TYPE. So I have my ORGANIZATION_TABLE:
        ORGANIZATION_TABLE (OrgGroupId1, OrgGroupId2, OrgGroupId3, OrgName, OrgAddress)
            100, 103, NULL, MyRestaurant1, MyAddr1
            100, 102, NULL, MyRestaurant2, MyAddr2
            100, 104, 105,  MyRestaurant3, MyAddr3
    During data entry, a dialog could let the user choose the class, category and type, and the corresponding GroupId could be populated with the RowId from the ORG_TYPE table. During search, if all three classification levels are chosen, it is more specific: for example, if Food Services > Restaurants > Italian is the criteria, the where clause would be 'where OrgGroupId1 = 100'. If only 2 levels are chosen (Food Services > Restaurants), I have to do 'where OrgGroupId1 in (100, 101, 102, 103, 104, 105, ...)', and there could be a hundred in that list. I will disallow class-level search; that is, I will force selection of a class and a category. The Ids would be integers. I am trying to anticipate performance and other issues. Overall, would this work, or do I need to throw this out and start from scratch?
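    For what it's worth, the two-level search can avoid the long hard-coded ID list by joining the reference table instead; a sketch using the names above (OrgAddress column name normalized):
        -- Sketch only: find organizations where any of the three group slots
        -- points at an ORG_TYPE row in the chosen class and category.
        SELECT o.*
        FROM ORGANIZATION_TABLE AS o
        JOIN ORG_TYPE AS t
          ON t.RowId IN (o.OrgGroupId1, o.OrgGroupId2, o.OrgGroupId3)
        WHERE t.ClassCode = 'FOOD'
          AND t.CategoryCode = 'REST';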

  • Is it immoral to write crappy code even if readability and correctness are not requirements?

    - by mafutrct
    There are cases when crappy (i.e. unreadable and buggy) code is not much of a problem. For instance, imagine you need to generate a big text file that mostly follows a simple pattern, with a few very complex exceptions. What do you do? You quickly write a simple algorithm and insert the exceptional bits in the output manually, to save 4 hours. The code is unreadable and the output is flawed, but it's still the right call, since it is far faster. But let's get this straight: I hate bad code. I've had to read and work with code that caused my stomach to hurt, and I care a lot about good code. And actually, I caught myself thinking that it is immoral to write bad code, even though the dirty approach is sometimes superior. I was surprised at myself and found my idea to be very irrational. Have you ever experienced this? Should I just get rid of this stupid idea and use the most efficient approach to coding?

  • How to make a Global Array?

    - by Wayfarer
    So, I read this post, and it's pretty much exactly what I was looking for. However... it doesn't work. I guess I'm not going to go with the singleton object, but rather make the array in either a Globals.h file or insert it into the _Prefix file. Both times I do that, though, I get the error "Expected specifier-qualifier-list before 'static'" and it doesn't work. So... I'm not sure how to get it to work. I can remove extern and it works, but I feel like I need that to make it a constant. The end goal is to have this mutable array be accessible from any object or any file in my project. Help would be appreciated! This is the code for my Globals.h file:
        #import <Foundation/Foundation.h>

        @interface Globals : NSObject {
            static extern NSMutableArray * myGlobalArray;
        }
        @end
    I don't think I need anything in the implementation file. If I put that in the prefix file, the error is the same.
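    For reference, the conventional Objective-C pattern (a sketch, not tied to this project) declares the variable extern at file scope in the header and defines it exactly once in an implementation file; static and extern cannot be combined inside an @interface's instance-variable block:
        // Globals.h -- declaration only, at file scope, outside any @interface.
        #import <Foundation/Foundation.h>
        extern NSMutableArray *myGlobalArray;

        // Globals.m -- the single definition; initialize it early, e.g. in the
        // app delegate, with: myGlobalArray = [[NSMutableArray alloc] init];
        #import "Globals.h"
        NSMutableArray *myGlobalArray = nil;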

  • How to write to a varchar(max) column using ODBC

    - by andyjohnson
    Summary: I'm trying to write a text string to a column of type varchar(max) using ODBC and SQL Server 2005. It fails if the length of the string is greater than 8000. Help! I have some C++ code that uses ODBC (SQL Native Client) to write a text string to a table. If I change the column from, say, varchar(100) to varchar(max) and try to write a string with length greater than 8000, the write fails with the following error:
        [Microsoft][ODBC SQL Server Driver]String data, right truncation
    So, can anyone advise me on whether this can be done, and how? Some example (not production) code that shows what I'm trying to do:
        SQLHENV hEnv = NULL;
        SQLRETURN iError = SQLAllocEnv(&hEnv);
        HDBC hDbc = NULL;
        SQLAllocConnect(hEnv, &hDbc);
        const char* pszConnStr = "Driver={SQL Server};Server=127.0.0.1;Database=MyTestDB";
        UCHAR szConnectOut[SQL_MAX_MESSAGE_LENGTH];
        SWORD iConnectOutLen = 0;
        iError = SQLDriverConnect(hDbc, NULL, (unsigned char*)pszConnStr, SQL_NTS,
                                  szConnectOut, (SQL_MAX_MESSAGE_LENGTH-1),
                                  &iConnectOutLen, SQL_DRIVER_COMPLETE);
        HSTMT hStmt = NULL;
        iError = SQLAllocStmt(hDbc, &hStmt);
        const char* pszSQL = "INSERT INTO MyTestTable (LongStr) VALUES (?)";
        iError = SQLPrepare(hStmt, (SQLCHAR*)pszSQL, SQL_NTS);
        char* pszBigString = AllocBigString(8001);
        iError = SQLSetParam(hStmt, 1, SQL_C_CHAR, SQL_VARCHAR, 0, 0,
                             (SQLPOINTER)pszBigString, NULL);
        iError = SQLExecute(hStmt); // Returns SQL_ERROR if pszBigString len > 8000
    The table MyTestTable contains a single column defined as varchar(max). The function AllocBigString (not shown) creates a string of arbitrary length. I understand that previous versions of SQL Server had an 8000-character limit on varchars, but why is this happening in SQL 2005? Thanks, Andy
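    One hedged observation: the error text names the legacy "[ODBC SQL Server Driver]", which predates varchar(max) and caps the type at 8000 bytes. A sketch of two things to try (not verified against this exact setup): connect through SQL Native Client, and bind the parameter as SQL_LONGVARCHAR with an explicit length.
        // Sketch only -- connection string and parameter binding, not a full program.
        const char* pszConnStr = "Driver={SQL Native Client};Server=127.0.0.1;"
                                 "Database=MyTestDB;Trusted_Connection=yes;";
        // ...
        SQLLEN cbValue = SQL_NTS;
        iError = SQLBindParameter(hStmt, 1, SQL_PARAM_INPUT, SQL_C_CHAR,
                                  SQL_LONGVARCHAR, strlen(pszBigString), 0,
                                  (SQLPOINTER)pszBigString, 0, &cbValue);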

  • Display last picture

    - by steve
    Hi, I'm inserting an image from the camera (taking a picture) into the MediaStore.Images.Media datastore. Does anyone know how I can go about displaying the last picture taken? I used
        Uri image = ContentUris.withAppendedId(externalContentUri, 45);
    to display an image from the datastore, but obviously 45 is not the correct image. I tried to pass the information from the previous activity (the camera) to the display activity, but I'm assuming that because the photo callback is its own thread, the value never gets set. The photo code is as follows:
        Camera.PictureCallback photoCallback = new Camera.PictureCallback() {
            public void onPictureTaken(byte[] data, Camera camera) {
                FileOutputStream fos;
                try {
                    Bitmap bm = BitmapFactory.decodeByteArray(data, 0, data.length);
                    fileUrl = MediaStore.Images.Media.insertImage(getContentResolver(), bm,
                                                                  "LastTaken", "Picture");
                    if (fileUrl == null) {
                        Log.d("Still", "Image Insert Failed");
                        return;
                    } else {
                        picUri = Uri.parse(fileUrl);
                        sendBroadcast(new Intent(Intent.ACTION_MEDIA_SCANNER_SCAN_FILE, picUri));
                    }
                } catch (Exception e) {
                    Log.d("Picture", "Error Picture: ", e);
                }
                camera.startPreview();
            }
        };
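    A sketch of the usual alternative (standard MediaStore API, not code from the question; externalContentUri above corresponds to MediaStore.Images.Media.EXTERNAL_CONTENT_URI): query for the newest row instead of hard-coding an id like 45.
        // Ask MediaStore for the most recently added image.
        Cursor c = getContentResolver().query(
                MediaStore.Images.Media.EXTERNAL_CONTENT_URI,
                new String[] { MediaStore.Images.Media._ID },
                null, null,
                MediaStore.Images.Media.DATE_ADDED + " DESC");
        if (c != null && c.moveToFirst()) {
            long id = c.getLong(0);
            Uri lastImage = ContentUris.withAppendedId(
                    MediaStore.Images.Media.EXTERNAL_CONTENT_URI, id);
            // lastImage now points at the last picture taken.
        }
        if (c != null) c.close();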

  • Looping through a SimpleXML object, or turning the whole thing into an array.

    - by Coffee Cup
    I'm trying to work out how to iterate through a returned SimpleXML object. I'm using a toolkit called Tarzan AWS, which connects to Amazon Web Services (SimpleDB, S3, EC2, etc). I'm specifically using SimpleDB. I can put data into the Amazon SimpleDB service, and I can get it back. I just don't know how to handle the SimpleXML object that is returned. The Tarzan AWS documentation says this: "Look at the response to navigate through the headers and body of the response. Note that this is an object, not an array, and that the body is a SimpleXML object." Here's a sample of the returned SimpleXML object:
        [body] => SimpleXMLElement Object (
            [QueryWithAttributesResult] => SimpleXMLElement Object (
                [Item] => Array (
                    [0] => SimpleXMLElement Object (
                        [Name] => message12413344443260
                        [Attribute] => Array (
                            [0] => SimpleXMLElement Object ( [Name] => active  [Value] => 1 )
                            [1] => SimpleXMLElement Object ( [Name] => user    [Value] => john )
                            [2] => SimpleXMLElement Object ( [Name] => message [Value] => This is a message. )
                            [3] => SimpleXMLElement Object ( [Name] => time    [Value] => 1241334444 )
                            [4] => SimpleXMLElement Object ( [Name] => id      [Value] => 12413344443260 )
                            [5] => SimpleXMLElement Object ( [Name] => ip      [Value] => 10.10.10.1 )
                        )
                    )
                    [1] => SimpleXMLElement Object (
                        [Name] => message12413346907303
                        [Attribute] => Array (
                            [0] => SimpleXMLElement Object ( [Name] => active  [Value] => 1 )
                            [1] => SimpleXMLElement Object ( [Name] => user    [Value] => fred )
                            [2] => SimpleXMLElement Object ( [Name] => message [Value] => This is another message )
                            [3] => SimpleXMLElement Object ( [Name] => time    [Value] => 1241334690 )
                            [4] => SimpleXMLElement Object ( [Name] => id      [Value] => 12413346907303 )
                            [5] => SimpleXMLElement Object ( [Name] => ip      [Value] => 10.10.10.2 )
                        )
                    )
                )
            )
        )
    So what code do I need to get through each of the object items? I'd like to loop through each of them and handle it like a returned MySQL query. For example, I can query SimpleDB and then loop through the SimpleXML so I can display the results on the page. Alternatively, how do you turn the whole shebang into an array? I'm new to SimpleXML, so I apologise if my questions aren't specific enough.
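    For the record, SimpleXMLElement is directly iterable with foreach; a sketch of both requests, assuming the Tarzan response object is in $response and using the property names from the dump above:
        // Loop over each Item and its Attribute children.
        foreach ($response->body->QueryWithAttributesResult->Item as $item) {
            echo (string) $item->Name, "\n";
            foreach ($item->Attribute as $attr) {
                echo '  ', (string) $attr->Name, ' = ', (string) $attr->Value, "\n";
            }
        }
        // The usual trick for turning the whole shebang into an array:
        $array = json_decode(json_encode($response->body), true);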

  • Algorithm to find what percentage of two texts is identical

    - by qster
    What algorithm would you suggest to identify, from 0 to 1 (as a float), how nearly identical two texts are? Note that I don't mean similar (i.e., saying the same thing in a different way); I mean the exact same words, except that one of the two texts could have extra words, slightly different words, extra new lines, and stuff like that. A good example of the algorithm I want is the one Google uses to identify duplicate content in websites ("X search results very similar to the ones shown have been omitted, click here to see them"). The reason I need it is that my website lets users post comments; similar-but-different pages currently have their own comments, so many users ended up copy&pasting their comments on all the similar pages. Now I want to merge them (all similar pages will "share" the comments, and if you post on page A it will appear on similar page B), and I would like to programmatically erase all those copy&pasted comments from the same user. I have quite a few million comments, but speed shouldn't be an issue since this is a one-time job that will run in the background. The programming language doesn't really matter (as long as it can interface with a MySQL database), but I was thinking of doing it in C++.
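    One concrete candidate (a sketch in the spirit of shingling-based duplicate detection, not the asker's code): Jaccard similarity over word 3-grams, which returns 1.0 for identical texts and degrades gracefully with extra or altered words.
        #include <sstream>
        #include <set>
        #include <string>
        #include <vector>

        // Split text into word 3-grams ("shingles").
        static std::set<std::string> shingles(const std::string& text) {
            std::istringstream in(text);
            std::vector<std::string> words;
            std::string w;
            while (in >> w) words.push_back(w);
            std::set<std::string> out;
            for (size_t i = 0; i + 3 <= words.size(); ++i)
                out.insert(words[i] + ' ' + words[i + 1] + ' ' + words[i + 2]);
            return out;
        }

        // Jaccard similarity: |shared shingles| / |distinct shingles overall|, in 0..1.
        double similarity(const std::string& a, const std::string& b) {
            std::set<std::string> sa = shingles(a), sb = shingles(b);
            if (sa.empty() || sb.empty()) return 0.0;
            size_t common = 0;
            for (std::set<std::string>::const_iterator it = sa.begin(); it != sa.end(); ++it)
                common += sb.count(*it);
            return static_cast<double>(common) / (sa.size() + sb.size() - common);
        }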

  • Appending data to NSFetchedResultsController during find or create loop

    - by Justin Williams
    I have a table view that is managed by an NSFetchedResultsController. I am having an issue with a find-or-create operation, however. When the user hits the bottom of my table view, I query my server for another batch of content. If it doesn't exist in the local cache, we create it and store it. If it does exist, however, I want to append that data to the fetched results controller and display it. I can't quite figure that part out. Here's what I'm doing thus far:
    1. Pass the returned array of values from my server to an NSOperation to process.
    2. In the operation, create a new managed object context to work with.
    3. In the operation, iterate through the array and execute a fetch request to see if each object exists (based on its server id). If the object doesn't exist, create it and insert it into the operation's managed object context.
    4. After the iteration completes, save the managed object context, which triggers a merge notification on my main thread.
    At this point, any objects that weren't locally cached in my Core Data store before will appear, but the ones that previously existed do not come along for the ride. I feel like it's something simple I'm missing, and could use a nudge in the right direction.

  • Date formats in ActiveRecord / Rails 3

    - by cbmeeks
    In my model, I have a departure_date and a return_date. I am using a text_field instead of date_select so that I can use the jQuery datepicker. My app is based in the US for now, but I do hope to get international members. So basically this is what is happening: the user (US) types in a date such as 04/01/2010 (April 1st). Of course, MySQL stores it as a datetime such as 2010-04-01. Anyway, when the user goes to edit the date later on, it shows "01/04/2010" even though I format it with strftime("%m/%d/%Y"), because the posted value was parsed as day/month/year: the app thinks it is January 4th instead of the original April 1st. It's like the only way to accurately store the data is for the user to type in 2010-04-01. I hope all of this makes sense. What I am really after is a way for the user to type in (or pick with the datepicker) a date in their native format. So someone in Europe could type in 01/04/2010 for April 1st, but someone in the US would type in 04/01/2010. Is there an easy, elegant solution to this? Thanks for any suggestions.
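    A hedged sketch of one approach (the controller wiring and parameter names here are made up for illustration): parse the posted string with an explicit, locale-dependent format instead of letting the default parser guess.
        # Date.strptime raises ArgumentError on a mismatch instead of silently
        # swapping day and month; the locale-to-format mapping is illustrative.
        format = (I18n.locale == :en ? '%m/%d/%Y' : '%d/%m/%Y')
        @booking.departure_date = Date.strptime(params[:booking][:departure_date], format)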

  • jQuery: Dynamically-generated form fields not submitted and event handlers appear to fire multiple times

    - by Bob
    Hi, I'm new to jQuery, so I apologise if there's something obvious I'm unaware of. I seem to be having a couple of issues with some jQuery I'm trying to implement. Code: http://pastebin.ca/1843496 (the editor didn't seem to like HTML tags). post_test.php simply contains <?php print_r($_POST); ?> in an attempt to find out what's actually being submitted. The first issue I'm having is that only the hidden form fields are actually being submitted. If I insert tags into the container DIV manually it works as expected, but when done as above the text field doesn't get posted. From what I've read, I gather it's to do with the fact that the DOM is being modified after it's loaded, but the thing that's puzzling me is why, in that case, there are no issues referencing the other added hidden fields or the tag. I've experimented with changing the event handler for a.save to .live('click', function ...) and also with using LiveQuery, to no avail. The other issue is that when a.save is clicked, before the form is actually submitted, as far as I can tell the event handler runs again, replacing the value entered into the text field with the value of editable.text(), which is what ultimately gets submitted. I'm sorry if any of this is unclear. Regards,

  • Wiki replacement solution for SharePoint?

    - by Jakub
    I'm trying to research a replacement for the pathetic wiki that comes with WSS (the only wiki markup it has is for creating URL links). I have looked at a few, but most 'replacements' I see are MOSS only (or at least they state MOSS in the requirements). Has anyone faced this situation? What did you end up using? I would like something that I can have all in one location (not different apps, hence WSS), with LDAP/AD integration like WSS. Thanks, I appreciate any input. I would like to see ~$3k solutions though (nothing super expensive, hence why we don't run MOSS). EDIT: Anyone else have any suggestions? EDIT2: Actually, since I haven't had much feedback (thanks to those who gave some), I installed MediaWiki under IIS with PHP enabled, and enabled the IIS AD hack for authentication. IIS prompts for authentication (user/pass) if you use a non-IE browser, then sets the $_SERVER["REMOTE_USER"] variable and grabs some AD info (groups etc). It works rather well; the only issue so far is the ugly URLs, but it's fully working. Seems like a good setup, other than having to rely on MySQL (my company strives to be mainly SQL Server).

  • Can I spread out a long-running stored proc across multiple CPUs?

    - by Russ
    [Also on SuperUser - http://superuser.com/questions/116600/can-i-spead-out-a-long-running-stored-proc-accross-multiple-cpus]
    I have a stored procedure in SQL Server that gets and decrypts a block of data (credit cards in this case). Most of the time the performance is tolerable, but there are a couple of customers where the process is painfully slow, taking literally 1 minute to complete. (Well, 59377ms to return from SQL Server to be exact, but it can vary by a few hundred ms based on load.) When I watch the process, I see that SQL is only using a single proc to perform the whole thing, and typically only proc 0. Is there a way I can change my stored proc so that SQL can multi-thread the process? Is it even feasible to cheat and break the calls in half (top 50%, bottom 50%) to spread the load, as a gross hack? (Just spit-balling here.) My stored proc:
        USE [Commerce]
        GO
        /****** Object: StoredProcedure [dbo].[GetAllCreditCardsByCustomerId] Script Date: 03/05/2010 11:50:14 ******/
        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        ALTER PROCEDURE [dbo].[GetAllCreditCardsByCustomerId]
            @companyId UNIQUEIDENTIFIER,
            @DecryptionKey NVARCHAR(MAX)
        AS
        SET NOCOUNT ON

        DECLARE @cardId uniqueidentifier
        DECLARE @tmpdecryptedCardData VarChar(MAX);
        DECLARE @decryptedCardData VarChar(MAX);
        DECLARE @tmpTable as Table
        (
            CardId uniqueidentifier,
            DecryptedCard NVarChar(Max)
        )

        DECLARE creditCards CURSOR FAST_FORWARD READ_ONLY FOR
            Select cardId from CreditCards
            where companyId = @companyId and Active=1
            order by addedBy desc --2

        OPEN creditCards --3
        FETCH creditCards INTO @cardId -- prime the cursor
        WHILE @@Fetch_Status = 0
        BEGIN
            --OPEN creditCards
            DECLARE creditCardData CURSOR FAST_FORWARD READ_ONLY FOR
                select convert(nvarchar(max), DecryptByCert(Cert_Id('Oh-Nay-Nay'), EncryptedCard, @DecryptionKey))
                FROM CreditCardData
                where cardid = @cardId
                order by valueOrder

            OPEN creditCardData
            FETCH creditCardData INTO @tmpdecryptedCardData -- prime the cursor
            WHILE @@Fetch_Status = 0
            BEGIN
                print 'CreditCardData'
                print @tmpdecryptedCardData
                set @decryptedCardData = ISNULL(@decryptedCardData, '') + @tmpdecryptedCardData
                print '@decryptedCardData'
                print @decryptedCardData;
                FETCH NEXT FROM creditCardData INTO @tmpdecryptedCardData -- fetch next
            END
            CLOSE creditCardData
            DEALLOCATE creditCardData

            insert into @tmpTable (CardId, DecryptedCard) values (@cardId, @decryptedCardData)
            set @decryptedCardData = ''
            FETCH NEXT FROM creditCards INTO @cardId -- fetch next
        END

        select CardId, DecryptedCard FROM @tmpTable

        CLOSE creditCards
        DEALLOCATE creditCards
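    A hedged sketch of the more usual cure (not a verified drop-in replacement): collapse both cursors into one set-based statement, so the optimizer is free to pick a parallel plan. The FOR XML PATH('') trick concatenates the decrypted chunks in valueOrder; the TYPE/.value() wrapper keeps characters such as & and < from being entitized.
        SELECT cc.cardId,
               (SELECT CONVERT(NVARCHAR(MAX),
                       DecryptByCert(Cert_Id('Oh-Nay-Nay'), d.EncryptedCard, @DecryptionKey))
                FROM CreditCardData AS d
                WHERE d.cardid = cc.cardId
                ORDER BY d.valueOrder
                FOR XML PATH(''), TYPE).value('.', 'nvarchar(max)') AS DecryptedCard
        FROM CreditCards AS cc
        WHERE cc.companyId = @companyId
          AND cc.Active = 1
        ORDER BY cc.addedBy DESC;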

  • Form validation using JavaScript inside PHP

    - by Mikey1980
    I have a simple problem, but no matter what I try I can't seem to get it to work. I have a form on a PHP page and I need to validate the qty value on my form so that it doesn't exceed $qty (a value pulled from MySQL) and is not less than zero. Sounds easy--hmm, wish it were, lol! I had it checking whether the value was numeric, and in my attempts to make this work I even broke that--not a good morning, lol! Here's a snip of my JavaScript function:
        <script type='text/javascript'>
        function checkQty(elem) {
            var numericExpression = /^[0-9]+$/;
            if (elem.value.match(numericExpression)) {
                return true;
            } else {
                alert("Quantity for RMA must be greater than zero and less than original order!");
                elem.focus();
                return false;
            }
        }
        </script>
    The function is called from the submit button, onclick:
        <input type="submit" name="submit" onclick="checkQty(document.getElementById('qty'));">
    I've tried:
        var numericExpression = /^[0-9]+$/;
        if (elem.value.match(numericExpression) || elem.value < 0 || elem.value > <? int($qty) ?>) {
    No dice... HELP!?!
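    A sketch of the corrected wiring (names taken from the question): the onclick needs a return so a false result actually blocks submission, and the limit has to be echoed from PHP rather than written as <? int($qty) ?>.
        <input type="submit" name="submit"
               onclick="return checkQty(document.getElementById('qty'));">

        <script type="text/javascript">
        function checkQty(elem) {
            var qty = parseInt(elem.value, 10);
            var max = <?php echo (int)$qty; ?>; // limit pulled from MySQL server-side
            if (isNaN(qty) || qty <= 0 || qty > max) {
                alert("Quantity for RMA must be greater than zero and less than original order!");
                elem.focus();
                return false;
            }
            return true;
        }
        </script>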

  • NHibernate flush should save only dirty objects

    - by Emilian
    Why does NHibernate fire an update on firstOrder when saving secondOrder in the code below? I'm using optimistic locking on Order. Is there a way to tell NHibernate to update firstOrder when saving secondOrder only if firstOrder was modified?
        // Configure
        var cfg = new Configuration();
        var configFile = Path.Combine(
            AppDomain.CurrentDomain.BaseDirectory,
            "NHibernate.MySQL.config");
        cfg.Configure(configFile);

        // Create session factory
        var sessionFactory = cfg.BuildSessionFactory();

        // Create session
        var session = sessionFactory.OpenSession();

        // Set session to flush on transaction commit
        session.FlushMode = FlushMode.Commit;

        // Create first order
        var firstOrder = new Order();
        var firstOrder_OrderLine = new OrderLine {
            ProductName = "Bicycle",
            ProductPrice = 120.00M,
            Quantity = 1
        };
        firstOrder.Add(firstOrder_OrderLine);

        // Save first order
        using (var tx = session.BeginTransaction())
        {
            try
            {
                session.Save(firstOrder);
                tx.Commit();
            }
            catch
            {
                tx.Rollback();
            }
        }

        // Create second order
        var secondOrder = new Order();
        var secondOrder_OrderLine = new OrderLine {
            ProductName = "Hat",
            ProductPrice = 12.00M,
            Quantity = 1
        };
        secondOrder.Add(secondOrder_OrderLine);

        // Save second order
        using (var tx = session.BeginTransaction())
        {
            try
            {
                session.Save(secondOrder);
                tx.Commit();
            }
            catch
            {
                tx.Rollback();
            }
        }

        session.Close();
        sessionFactory.Close();

  • C#+BDE+DBF problem

    - by Drabuna
    I have a huge problem: I have lots of .dbf files (~50000) and I need to import them into an Oracle database. I open the connection like this:
        OleDbConnection oConn = new OleDbConnection();
        OleDbCommand oCmd = new OleDbCommand();
        oConn.ConnectionString = @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + directory +
                                 ";Extended Properties=dBASE IV;User ID=Admin;Password=";
        oCmd.Connection = oConn;
        oCmd.CommandText = @"SELECT * FROM " + tablename;
        try
        {
            oConn.Open();
            resultTable.Load(oCmd.ExecuteReader());
        }
        catch (Exception ex)
        {
            MessageBox.Show(ex.Message);
        }
        oConn.Close();
        oCmd.Dispose();
        oConn.Dispose();
    I read them in a loop and then insert into Oracle. Everything's fine. BUT: there are about 1000 files that I can't open; they raise the exception "not a table". So I googled and installed the Borland Database Engine. Now everything works fine... but no. Now, while reading files, on the 1024th file an exception is raised: "System resource exceeded". But I have lots of free resources. When I remove BDE, everything's fine again (no "System resource exceeded" error), but I can't read all the files. Help please. PS: Tried using ODBC, but nothing changes.

  • Replacing .NET WebBrowser control with a better browser, like Chrome?

    - by Sylverdrag
    Is there any relatively easy way to insert a modern browser into a .NET application? As far as I understand, the WebBrowser control is a wrapper for IE, which wouldn't be a problem except that it looks like it is a very old version of IE, with all that entails in terms of CSS screw-ups, potential security risks (if the rendering engine wasn't patched, can I really expect the zillion buffer overflow problems to be fixed?), and other issues. I am using Visual Studio C# (Express edition -- does it make any difference here?). I would like to integrate a good web browser in my applications. In some, I just use it to handle the user registration process, interface with some of my website's features, and other things of that order, but I have another application in mind that will require more, err... control. I need: a browser that can integrate inside a window of my application (not a separate window); good support for CSS, JS and other web technologies, on par with any modern browser; basic browser functions like "navigate", "back", "reload"; and liberal access to the page code and output. I was thinking about Chrome, since it comes under the BSD license, but I would be just as happy with a recent version of IE. As much as possible, I would like to keep things simple. The best would be if one could patch the existing WebBrowser control, which already does about 70% of what I need, but I don't think that's possible. I have found an ActiveX control for Mozilla (http://www.iol.ie/~locka/mozilla/control.htm), but it looks like an old version, so it's not necessarily an improvement. I am open to suggestions.

  • Facebox with other jQuery in it

    - by dakemz
    So I have Facebox set up and it works. When I load an external page with a tab-based navigation (jQuery too), the modal works but the nav doesn't. If it isn't clear: I actually want the tabs to be inside the lightbox. I also have PHP/MySQL running inside the lightbox, if that changes anything. Thanks for any help.
        <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.3/jquery.min.js" type="text/javascript"></script>
        <script type="text/javascript" src="js/jquery-ui-1.7.2.custom.min.js"></script>
        <script type="text/javascript">
        $(function(){
            $('#tabs').tabs();
        });
        </script>

        <div id="tabs">
            <ul>
                <li><a href="#tabs-1">Informations</a></li>
                <li><a href="#tabs-2">Factures en attente</a></li>
                <li><a href="#tabs-3">Marché en cours</a></li>
            </ul>
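    A hedged sketch (assuming the stock Facebox build, which triggers a reveal.facebox event when it injects content): re-initialize the tabs each time the lightbox content is loaded, since the $(function(){...}) in the fetched page ran long before that HTML existed.
        jQuery(document).bind('reveal.facebox', function() {
            jQuery('#facebox #tabs').tabs();
        });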

  • Connect XAMPP to MongoDB

    - by Jhonny D. Cano -Leftware-
    Hello, I have a XAMPP 1.7.3 instance running and a MongoDB 1.2.4 server on the same machine. I want to connect them, so I have basically been following this tutorial on php.net. It seems to connect, but the cursors are always empty. I don't know what I am missing. Here is the code I am trying; $cursor->valid() always says false.
        <?php
        $m = new Mongo(); // connect
        try {
            $m->connect();
        } catch (MongoConnectionException $ex) {
            echo $ex;
        }
        echo "conecta...";
        $dbs = $m->listDBs();
        if ($dbs == NULL) {
            echo $m->lastError();
            return;
        }
        foreach ($dbs as $db) {
            echo $db;
        }
        $db = $m->selectDB("CDO");
        echo "elige bd...";
        $col = $db->selectCollection("rep_consulta");
        echo "elige col...";
        $rangeQuery = array('id' => array('$gt' => 100));
        $col->insert(array('id' => 456745764, 'nombre' => 'cosa'));
        $cursor = $col->find()->limit(10);
        echo "buscando...";
        var_dump($cursor);
        var_dump($cursor->valid());
        if ($cursor == NULL) echo 'cursor null';
        while ($cursor->hasNext()) {
            $item = $cursor->current();
            echo "en while...";
            echo $item["nombre"].'...';
        }
        ?>
    Doing this from the command line works perfectly (lots of data returned):
        use CDO
        db.rep_consulta.find()
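    For reference (legacy PHP Mongo driver): the cursor is lazy, so valid() is false until it has been advanced; iterating normally both runs the query and walks the results. A sketch:
        // MongoCursor implements Iterator, so foreach rewinds and advances it.
        foreach ($col->find()->limit(10) as $doc) {
            echo $doc['nombre'], "\n";
        }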

  • Dependency Injection Question - ASP.NET

    - by Paul
    I'm starting a web application that contains the following projects:
        Booking.Web
        Booking.Services
        Booking.DataObjects
        Booking.Data
    I'm using the repository pattern in my data project only. All services will be the same, no matter what happens. However, if a customer wants to use Access, the app will use a different data repository than if the customer wants to use SQL Server. I have StructureMap, and want to be able to do the following:
    1. The web project is unaffected. It's a web forms application that will only know about the services project and the dataobjects project.
    2. When a service is called, it will use StructureMap (by looking up the bootstrapper.cs file) to see which data repository to use.
    An example of a services class is the error logging class:
        public class ErrorLog : IErrorLog
        {
            ILogging logger;

            public ErrorLog() { }

            public ErrorLog(ILogging logger)
            {
                this.logger = logger;
            }

            public void AddToLog(string errorMessage)
            {
                try
                {
                    AddToDatabaseLog(errorMessage);
                }
                catch (Exception ex)
                {
                    AddToFileLog(ex.Message);
                }
                finally
                {
                    AddToFileLog(errorMessage);
                }
            }

            private void AddToDatabaseLog(string errorMessage)
            {
                ErrorObject error = new ErrorObject
                {
                    ErrorDateTime = DateTime.Now,
                    ErrorMessage = errorMessage
                };
                logger.Insert(error);
            }

            private void AddToFileLog(string errorMessage)
            {
                // TODO: Take this value from the web.config instead of hard coding it
                TextWriter writer = new StreamWriter(@"E:\Work\Booking\Booking\Booking.Web\Logs\ErrorLog.txt", true);
                writer.WriteLine(DateTime.Now.ToString() + " ---------- " + errorMessage);
                writer.Close();
            }
        }
    I want to be able to call this service from my web project without defining which repository to use for the data access. My bootstrapper.cs file in the services project is defined as:
        public class Bootstrapper
        {
            public static void ConfigureStructureMap()
            {
                ObjectFactory.Initialize(x =>
                {
                    x.AddRegistry(new ServiceRegistry());
                });
            }

            public class ServiceRegistry : Registry
            {
                protected override void configure()
                {
                    ForRequestedType<IErrorLog>().TheDefaultIsConcreteType<Booking.Services.Logging.ErrorLog>();
                    ForRequestedType<ILogging>().TheDefaultIsConcreteType<SqlServerLoggingProvider>();
                }
            }
        }
    What else do I need to get this to work? When I defined a test, the ILogging object was null. Thanks,
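    One hedged observation, with a sketch (StructureMap 2.x API, as used in the question): the parameterless ErrorLog() constructor leaves logger null, so the service has to be resolved through the container (which by default picks the greediest constructor and supplies ILogging from the registry), not constructed with new.
        // e.g. in Application_Start, then wherever the service is needed:
        Bootstrapper.ConfigureStructureMap();
        IErrorLog log = ObjectFactory.GetInstance<IErrorLog>();
        log.AddToLog("something failed");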
