Search Results

Search found 8267 results on 331 pages for 'insert into'.

  • Storing URLs while Spidering

    - by itemio
    I created a little web spider in Python which I'm using to collect URLs. I'm not interested in the content. Right now I'm keeping all the visited URLs in a set in memory, because I don't want my spider to visit URLs twice. Of course that's a very limited way of accomplishing this. So what's the best way to keep track of my visited URLs?

    Should I use a database?
      * Which one? MySQL, SQLite, Postgres?
      * How should I save the URLs? As a primary key, trying to insert every URL before visiting it?

    Or should I write them to a file?
      * One file?
      * Multiple files? How should I design the file structure?

    I'm sure there are books and a lot of papers on this or similar topics. Can you give me some advice on what I should read?
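
    A minimal sketch of the database route, assuming SQLite with the URL itself as the primary key, so that a duplicate insert is simply ignored:

        import sqlite3

        conn = sqlite3.connect("visited.db")
        conn.execute("CREATE TABLE IF NOT EXISTS visited (url TEXT PRIMARY KEY)")

        def mark_visited(url):
            # INSERT OR IGNORE leaves rowcount at 0 when the URL already exists,
            # so the return value doubles as an "is this URL new?" check.
            cur = conn.execute("INSERT OR IGNORE INTO visited (url) VALUES (?)", (url,))
            conn.commit()
            return cur.rowcount == 1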

  • Updating changes from one database to another on the same server

    - by Pavan Kumar
    I have a copy of a client database, say 'DBCopy', which already contains modified data. The copy of the client database (DBCopy) is attached to the SQL Server instance where the central database (DBCentral) exists. I then want to apply whatever changes are present in DBCopy to DBCentral. Both DBCopy and DBCentral have the same schema. How can I do it programmatically using C#.NET, maybe with a button click? Can you give me example code showing how to do it? I am using SQL Server 2005 Standard Edition and VS 2008 SP1.

    In the actual scenario there are about 7 client databases, all with the same schema as the central database. I bring a copy of each client database, attach it to the central server where the central database resides, and try to apply the changes present in each copy to the central database, one by one, programmatically using C#.NET. The clients and the central server are physically separate machines in different places. They are not interconnected. I only need to update and insert new data; I am not bothered about deletion of data.

    Thanks and regards, Pavan
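
    The core of one common approach is plain T-SQL run per table: UPDATE the rows that exist on both sides, then INSERT the rows that exist only in the copy. The sketch below uses Python with pyodbc purely to show the SQL shape; the asker would issue the same statements from C# via SqlCommand. The Orders table and its columns are hypothetical stand-ins:

        import pyodbc

        conn = pyodbc.connect("DSN=central")  # placeholder connection string
        cur = conn.cursor()

        # Update rows that exist in both databases (assumes a shared key ID)
        cur.execute("""
            UPDATE c SET c.Name = s.Name, c.Amount = s.Amount
            FROM DBCentral.dbo.Orders AS c
            JOIN DBCopy.dbo.Orders AS s ON s.ID = c.ID
        """)

        # Insert rows that are present only in the attached copy
        cur.execute("""
            INSERT INTO DBCentral.dbo.Orders (ID, Name, Amount)
            SELECT s.ID, s.Name, s.Amount
            FROM DBCopy.dbo.Orders AS s
            WHERE NOT EXISTS (SELECT 1 FROM DBCentral.dbo.Orders AS c WHERE c.ID = s.ID)
        """)
        conn.commit()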

  • Table Design for SystemSettings: Best Model

    - by Chris L
    Someone suggested moving away from a table full of settings, where each column is a setting name (or type) and the rows are the customers and their respective values for each setting:

        ID | IsAdmin | ImagePath
        ---+---------+----------------
        12 | 1       | \path\to\images
        34 | 0       | \path\to\images

    The downside to this is that every time we want a new setting name (or type) we alter the table (via SQL) and add the new column, then update the rows so that each customer has a value for that setting.

    The new proposal is to have one column for the setting name and another for the setting value:

        ID | SettingName | SettingValue
        ---+-------------+----------------
        12 | IsAdmin     | 1
        12 | ImagePath   | \path\to\images
        34 | IsAdmin     | 0
        34 | ImagePath   | \path\to\images

    The point they made was that adding a new setting is as easy as a simple insert statement, with no added column. But something doesn't feel right about the second design; it looks bad, but I can't come up with any arguments against it. Am I wrong?

  • Refactoring ADO.NET - SqlTransaction vs. TransactionScope

    - by marc_s
    I have "inherited" a little C# method that creates an ADO.NET SqlCommand object and loops over a list of items to be saved to the database (SQL Server 2005). Right now, the traditional SqlConnection/SqlCommand approach is used, and to make sure everything works, the two steps (delete old entries, then insert new ones) are wrapped into an ADO.NET SqlTransaction. using (SqlConnection _con = new SqlConnection(_connectionString)) { using (SqlTransaction _tran = _con.BeginTransaction()) { try { SqlCommand _deleteOld = new SqlCommand(......., _con); _deleteOld.Transaction = _tran; _deleteOld.Parameters.AddWithValue("@ID", 5); _con.Open(); _deleteOld.ExecuteNonQuery(); SqlCommand _insertCmd = new SqlCommand(......, _con); _insertCmd.Transaction = _tran; // add parameters to _insertCmd foreach (Item item in listOfItem) { _insertCmd.ExecuteNonQuery(); } _tran.Commit(); _con.Close(); } catch (Exception ex) { // log exception _tran.Rollback(); throw; } } } Now, I've been reading a lot about the .NET TransactionScope class lately, and I was wondering, what's the preferred approach here? Would I gain anything (readibility, speed, reliability) by switching to using using (TransactionScope _scope = new TransactionScope()) { using (SqlConnection _con = new SqlConnection(_connectionString)) { .... } _scope.Complete(); } What you would prefer, and why? Marc

  • JVM segmentation faults due to "Invalid memory access of location"

    - by Dan
    I have a small project written in Scala 2.9.2 with unit tests written using ScalaTest. I use SBT for compiling and running my tests. Running sbt test on my project makes the JVM segfault regularly, but just compiling and running my project from SBT works fine. Here is the exact error message:

        Invalid memory access of location 0x8 rip=0x10959f3c9
        [1]    11925 segmentation fault  sbt

    I cannot locate a core dump anywhere, but would be happy to provide it if it can be obtained. Running java -version results in this:

        java version "1.6.0_37"
        Java(TM) SE Runtime Environment (build 1.6.0_37-b06-434-11M3909)
        Java HotSpot(TM) 64-Bit Server VM (build 20.12-b01-434, mixed mode)

    But I've also got Java 7 installed (though I was never able to actually run a Java program with it, afaik).

    Another issue that may be related: some of my test case titles contain parentheses like ( and ). SBT or ScalaTest (not sure which) will consequently insert square brackets in the middle of the output. For example, a test case with the name (..)..(..) might suddenly look like (..[)..](..).

    Any help resolving these issues is much appreciated :-)

    EDIT: I installed the Java 7 JDK, so now java -version shows the right thing:

        java version "1.7.0_07"
        Java(TM) SE Runtime Environment (build 1.7.0_07-b10)
        Java HotSpot(TM) 64-Bit Server VM (build 23.3-b01, mixed mode)

    This also means that I now get a more detailed segfault error and a core dump:

        #
        # A fatal error has been detected by the Java Runtime Environment:
        #
        #  SIGSEGV (0xb) at pc=0x000000010a71a3e3, pid=16830, tid=19459
        #
        # JRE version: 7.0_07-b10
        # Java VM: Java HotSpot(TM) 64-Bit Server VM (23.3-b01 mixed mode bsd-amd64 compressed oops)
        # Problematic frame:
        # V  [libjvm.dylib+0x3cd3e3]

    And the dump.

  • SQL (MySQL): update some value in all records processed by a SELECT

    - by jdmuys
    I am using MySQL from their C API, but that shouldn't be relevant. My code must process records from a table that match some criteria, and then update the said records to flag them as processed. The rows in the table are modified/inserted/deleted by another process I don't control. I am afraid that in the following, the UPDATE might flag some records erroneously, since the set of matching records might have changed between step 1 and step 3:

        SELECT * FROM myTable WHERE <CONDITION>;  # step 1

        <iterate over the selected set of lines. This may take some time.>  # step 2

        UPDATE myTable SET processed=1 WHERE <CONDITION>  # step 3

    What's the smart way to ensure that the UPDATE updates all the lines processed, and only them? A transaction doesn't seem to fit the bill, as it doesn't provide isolation of that sort: a recently modified record not in the originally selected set might still be targeted by the UPDATE statement. For the same reason, SELECT ... FOR UPDATE doesn't seem to help, though it sounds promising :-)

    The only way I can see is to use a temporary table to memorize the set of rows to be processed, doing something like:

        CREATE TEMPORARY TABLE workOrder (jobId INT(11));
        INSERT INTO workOrder SELECT myID as jobId FROM myTable WHERE <CONDITION>;
        SELECT * FROM myTable WHERE myID IN (SELECT * FROM workOrder);
        <iterate over the selected set of lines. This may take some time.>
        UPDATE myTable SET processed=1 WHERE myID IN (SELECT * FROM workOrder);
        DROP TABLE workOrder;

    But this seems wasteful and not very efficient. Is there anything smarter? Many thanks from a SQL newbie.
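
    A lighter variant of the same idea keeps the key list in the client instead of a temporary table: remember the primary keys returned by step 1 and target the UPDATE at exactly those keys. A sketch in Python with MySQL Connector/Python, assuming myID is the primary key and using processed = 0 as a stand-in for <CONDITION>:

        import mysql.connector

        conn = mysql.connector.connect(user="app", database="mydb")  # placeholder credentials
        cur = conn.cursor()

        cur.execute("SELECT myID FROM myTable WHERE processed = 0")
        ids = [row[0] for row in cur.fetchall()]

        # ... iterate over the selected rows; this may take some time ...

        # Flag exactly the rows fetched above, even if new matches appeared meanwhile.
        if ids:
            placeholders = ", ".join(["%s"] * len(ids))
            cur.execute(f"UPDATE myTable SET processed = 1 WHERE myID IN ({placeholders})", ids)
            conn.commit()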

  • .NET: What is your confident approach in the "catch" section of a try-catch block when developing CRUD operations?

    - by odiseh
    Hi, I was wondering if there is any confident approach to use in the catch section of a try-catch block when developing CRUD operations (especially when you use a database as your data source) in .NET. Well, what's your opinion about the lines below?

        public int Insert(string name, Int32 employeeID, string createDate)
        {
            SqlConnection connection = new SqlConnection();
            connection.ConnectionString = this._ConnectionString;
            try
            {
                SqlCommand command = connection.CreateCommand();
                command.CommandType = CommandType.StoredProcedure;
                command.CommandText = "UnitInsert";

                if (connection.State != ConnectionState.Open)
                    connection.Open();

                SqlCommandBuilder.DeriveParameters(command);
                command.Parameters["@Name"].Value = name;
                command.Parameters["@EmployeeID"].Value = employeeID;
                command.Parameters["@CreateDate"].Value = createDate;

                int i = command.ExecuteNonQuery();
                command.Dispose();
                return i;
            }
            catch
            {
                // how do you "catch" any possible error here?
                return 0;
            }
            finally
            {
                connection.Close();
                connection.Dispose();
                connection = null;
            }
        }

  • How to localize an app on Google App Engine?

    - by Petri Pennanen
    What options are there for localizing an app on Google App Engine? How do you do it using Webapp, Django, web2py or [insert framework here]?

    1. Readable URLs and entity key names

    Readable URLs are good for usability and search engine optimization (Stack Overflow is a good example of how to do it). On Google App Engine, key-based queries are recommended for performance reasons. It follows that it is good practice to use the entity key name in the URL, so that the entity can be fetched from the datastore as quickly as possible. Currently I use the function below to create key names:

        import re
        import unicodedata

        def urlify(unicode_string):
            """Translates latin1 unicode strings to url friendly ASCII.

            Converts accented latin1 characters to their non-accented ASCII
            counterparts, converts to lowercase, converts spaces to hyphens
            and removes all characters that are not alphanumeric ASCII.

            Arguments
                unicode_string: Unicode encoded string.

            Returns
                String consisting of alphanumeric (ASCII) characters and hyphens.
            """
            str = unicodedata.normalize('NFKD', unicode_string).encode('ASCII', 'ignore')
            str = re.sub('[^\w\s-]', '', str).strip().lower()
            return re.sub('[-\s]+', '-', str)

    This works fine for English and Swedish, however it will fail for non-western scripts and remove letters from some western ones (like Norwegian and Danish with their œ and ø). Can anyone suggest a method that works with more languages?

    2. Translating templates

    Does Django internationalization and localization work on Google App Engine? Are there any extra steps that must be performed? Is it possible to use Django i18n and l10n for Django templates while using Webapp? The Jinja2 template language provides integration with Babel. How well does this work, in your experience? What options are available for your chosen template language?

    3. Translated datastore content

    When serving content from (or storing it to) the datastore: is there a better way than getting the accept_language parameter from the HTTP request and matching it against a language property that you have set on each entity?
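
    For question 1, one commonly suggested fix (an assumption here, not something from the question) is to transliterate with the third-party unidecode package before slugifying, since it covers œ, ø and many non-Latin scripts that the NFKD trick drops:

        import re
        from unidecode import unidecode  # third-party: pip install unidecode

        def urlify(unicode_string):
            """Like the NFKD version, but transliterates via unidecode first."""
            s = unidecode(unicode_string)           # e.g. 'smørrebrød' -> 'smorrebrod'
            s = re.sub(r'[^\w\s-]', '', s).strip().lower()
            return re.sub(r'[-\s]+', '-', s)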

  • LINQ-To-SQL and Mapping Table Deletions

    - by Jake
    I have a many-to-many relationship between two tables, let's say Friends and Foods. If a friend likes a food I stick a row into the FriendsFoods table, like this:

        ID  Friend  Food
        1   'Tom'   'Pizza'

    FriendsFoods has a primary key 'ID', and two non-null foreign keys 'Friend' and 'Food' to the 'Friends' and 'Foods' tables, respectively. Now suppose I have a Friend object tom corresponding to 'Tom', and Tom no longer likes pizza (what is wrong with him?):

        FriendsFoods ff = tblFriendsFoods.Where(x => x.Friend.Name == "Tom"
                                                  && x.Food.Name == "Pizza").Single();
        tom.FriendsFoods.Remove(ff);
        pizza.FriendsFoods.Remove(ff);

    If I try to SubmitChanges() on the DataContext, I get an exception because it attempts to insert a null into the Friend and Food columns of the FriendsFoods table. I'm sure I could put together some kind of convoluted logic to track changes to the FriendsFoods table, intercept SubmitChanges() calls, etc. to try to get this to work the way I want, but is there a nice, clean way to remove a many-to-many relationship with LINQ-to-SQL?

  • Creating nodes programmatically in Drupal 6

    - by John
    Hey, I have been searching for how to create nodes in Drupal 6. I found some entries here on Stack Overflow, but the questions seemed to either be for older versions or the solutions did not work for me. So here is my current process for trying to create a node:

        $node = new stdClass();
        $node->title = "test title";
        $node->body = "test body";
        $node->type = "story";
        $node->created = time();
        $node->changed = $node->created;
        $node->status = 1;
        $node->promote = 1;
        $node->sticky = 0;
        $node->format = 1;
        $node->uid = 1;
        node_save($node);

    When I execute this code, the node is created, but when I go to the administration page, it throws the following errors:

        warning: Invalid argument supplied for foreach() in C:\wamp\www\steelylib\includes\menu.inc on line 258.
        warning: Invalid argument supplied for foreach() in C:\wamp\www\steelylib\includes\menu.inc on line 258.
        user warning: Duplicate entry '36' for key 1 query: INSERT INTO node_comment_statistics
            (nid, last_comment_timestamp, last_comment_name, last_comment_uid, comment_count)
            VALUES (36, 1269980590, NULL, 1, 0)
            in C:\wamp\www\steelylib\sites\all\modules\nodecomment\nodecomment.module on line 409.
        warning: Invalid argument supplied for foreach() in C:\wamp\www\steelylib\includes\menu.inc on line 258.
        warning: Invalid argument supplied for foreach() in C:\wamp\www\steelylib\includes\menu.inc on line 258.

    I've looked at different tutorials, and all seem to follow the same process. I'm not sure what I am doing wrong. I am using Drupal 6.15. When I roll back the database (to right before I made the changes) the errors are gone. Any help is appreciated!

  • Debugging NSOperation EXC_BAD_ACCESS within a graphics context

    - by Joe
    I tried everything to debug this one but I can't get to the bottom of it. This code lives in a subclass of NSOperation which is processed from a queue (borders is an ivar NSArray containing 5 UIImage objects):

        NSMutableArray *images = [[NSMutableArray alloc] init];
        for (unsigned i = 0; i < 5; i++) {
            CGSize size = CGSizeMake(60, 60);
            UIGraphicsBeginImageContext(size);

            CGPoint thumbPoint = CGPointMake(6, 6);
            [controller.image drawAtPoint:thumbPoint];

            CGPoint borderPoint = CGPointMake(0, 0);
            [[borders objectAtIndex:i] drawAtPoint:borderPoint];

            [images addObject:UIGraphicsGetImageFromCurrentImageContext()];
            UIGraphicsEndImageContext();
        }
        [images release];

    The code works fine most of the time, but when I push the iPhone by accessing subviews and pressing lots of buttons on the UI, I either get this exception, which is trapped by the operation:

        Exception Load view: *** -[NSCFArray insertObject:atIndex:]: attempt to insert nil

    or I get this:

        Program received signal: "EXC_BAD_ACCESS".

    The exception is raised because UIGraphicsGetImageFromCurrentImageContext() returns nil. I don't know how to debug the EXC_BAD_ACCESS, but I'm guessing that this error (in fact both of these errors) is caused by low memory. The debugger stops at the line:

        [controller.image drawAtPoint:thumbPoint];

    As I mentioned, I've trapped the exception so I can live with that, but the EXC_BAD_ACCESS is more serious. If this is memory related, how can I tell, and is it possible to increase the memory available to NSOperation?

  • ASP.NET MVC page: image logo is not displaying when sending email

    - by Rita
    Hi, I have a page that sends an email from an ASP.NET MVC application. All the text displays, but the image does not. Any workaround? I appreciate your responses. Here is my code:

        MailMessage mailMsg = new MailMessage();
        mailMsg.IsBodyHtml = true;
        mailMsg.From = new MailAddress(ConfigurationManager.AppSettings["Email.Sender"]);
        mailMsg.To.Add(new MailAddress(email));
        mailMsg.Subject = "Test mail to display the Logo in the email";
        mailMsg.Body = "Test mail to display the Logo in the email";
        mailMsg.Body += Environment.NewLine + Environment.NewLine
            + "<html><body><img src=cid:companylogo/><br></body></html>";

        // Insert logo
        string logoPath = Server.MapPath(Links.Content.images.Amgen_MedInfo_Logo_jpg); // logo is placed in the images folder
        LinkedResource logo = new LinkedResource(logoPath);
        logo.ContentId = "companylogo";

        // HTML formatting in the next line to display the logo
        AlternateView aView = AlternateView.CreateAlternateViewFromString(
            mailMsg.Body, new System.Net.Mime.ContentType("text/html"));
        aView.LinkedResources.Add(logo);
        mailMsg.AlternateViews.Add(aView);
        mailMsg.IsBodyHtml = true;

        SmtpClient smtpClient = new SmtpClient(ConfigurationManager.AppSettings["SMTP"]);
        smtpClient.Send(mailMsg);
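
    For reference, the same cid-embedding pattern in Python's standard email library (a sketch; the addresses and logo path are hypothetical). Two details that must line up: the HTML should reference the image as src="cid:companylogo", and the attached image part declares that Content-ID, wrapped in angle brackets per MIME convention:

        import smtplib
        from email.mime.multipart import MIMEMultipart
        from email.mime.text import MIMEText
        from email.mime.image import MIMEImage

        msg = MIMEMultipart("related")
        msg["Subject"] = "Logo test"
        msg["From"] = "sender@example.com"
        msg["To"] = "recipient@example.com"
        msg.attach(MIMEText('<html><body><img src="cid:companylogo"></body></html>', "html"))

        with open("images/logo.jpg", "rb") as f:  # hypothetical path
            logo = MIMEImage(f.read())
        logo.add_header("Content-ID", "<companylogo>")  # angle brackets are required
        msg.attach(logo)

        smtplib.SMTP("localhost").send_message(msg)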

  • Doctrine does not export relations properly

    - by iggnition
    Hi, I've got a MySQL 5.1.41 database which I'm trying to fill with Doctrine, but Doctrine does not insert the relations correctly. My YAML is:

        Locatie:
          connection: doctrine
          tableName: locatie
          columns:
            loc_id: { type: integer(4), fixed: false, unsigned: false, primary: true, autoincrement: true }
            org_id: { type: integer(4), fixed: false, unsigned: false, primary: false, notnull: false, autoincrement: false }
            naam: { type: string(30), fixed: false, unsigned: false, primary: false, notnull: true, autoincrement: false }
            straat: { type: string(30), fixed: false, unsigned: false, primary: false, notnull: true, autoincrement: false }
            huisnummer: { type: integer(4), fixed: false, unsigned: false, primary: false, notnull: true, autoincrement: false }
            huisnummer_achtervoegsel: { type: string(3), fixed: false, unsigned: false, primary: false, notnull: false, autoincrement: false }
            plaats: { type: string(25), fixed: false, unsigned: false, primary: false, notnull: true, autoincrement: false }
            postcode: { type: string(6), fixed: false, unsigned: false, primary: false, notnull: true, autoincrement: false }
            telefoon: { type: string(12), fixed: false, unsigned: false, primary: false, notnull: true, autoincrement: false }
            opmerking: { type: string(), fixed: false, unsigned: false, primary: false, notnull: false, autoincrement: false }
            inloggegevens: { type: string(), fixed: false, unsigned: false, primary: false, notnull: false, autoincrement: false }
          relations:
            Organisatie:
              local: org_id
              foreign: org_id
              type: one
              onDelete: CASCADE
              onUpdate: CASCADE

        Organisatie:
          connection: doctrine
          tableName: organisatie
          columns:
            org_id: { type: integer(4), fixed: false, unsigned: false, primary: true, autoincrement: true }
            naam: { type: string(30), fixed: false, unsigned: false, primary: false, notnull: true, autoincrement: false }
            straat: { type: string(30), fixed: false, unsigned: false, primary: false, notnull: true, autoincrement: false }
            huisnummer: { type: integer(4), fixed: false, unsigned: false, primary: false, notnull: true, autoincrement: false }
            huisnummer_achtervoegsel: { type: string(3), fixed: false, unsigned: false, primary: false, notnull: false, autoincrement: false }
            plaats: { type: string(25), fixed: false, unsigned: false, primary: false, notnull: true, autoincrement: false }
            postcode: { type: string(6), fixed: false, unsigned: false, primary: false, notnull: true, autoincrement: false }
            telefoon: { type: string(12), fixed: false, unsigned: false, primary: false, notnull: true, autoincrement: false }
            opmerking: { type: string(255), fixed: false, unsigned: false, primary: false, notnull: false, autoincrement: false }
          relations:
            Locatie:
              local: org_id
              foreign: org_id
              type: many

    Now if I make an organisation and then create a location, which has a foreign key to organisation, everything is fine. But when I try to update the org_id with phpMyAdmin I get a constraint error. If I manually set the foreign key to ON UPDATE CASCADE it does work. Why does Doctrine not set this option? I got it to work in Propel, but I really want to use Doctrine for this.

  • SQL Server - Multi-Column substring matching

    - by hamlin11
    One of my clients is hooked on multi-column substring matching. I understand that CONTAINS and FREETEXT search for words (and, at least in the case of CONTAINS, word prefixes). However, based upon my understanding of the MSDN documentation, neither of these nor their variants is capable of searching substrings. I have used LIKE rather extensively (SELECT * FROM A WHERE A.B LIKE '%substr%').

    Sample table A:

        ID | Col1     | Col2     | Col3     |
        -------------------------------------
        1  | oklahoma | colorado | Utah     |
        2  | arkansas | colorado | oklahoma |
        3  | florida  | michigan | florida  |
        -------------------------------------

    The following code will give us rows 1 and 2:

        SELECT * FROM A
        WHERE Col1 LIKE '%klah%'
           OR Col2 LIKE '%klah%'
           OR Col3 LIKE '%klah%'

    This is rather ugly, probably slow, and I just don't like it very much, probably because the implementations I'm dealing with have 10+ columns that need to be searched. The following may be a slight improvement as far as code readability goes, but performance-wise we're still in the same ballpark:

        SELECT * FROM A
        WHERE (Col1 + ' ' + Col2 + ' ' + Col3) LIKE '%klah%'

    I have thought about simply adding insert, update, and delete triggers that write the concatenated version of the above columns into a separate table that shadows this table.

    Sample Shadow_Table:

        ID | searchtext                  |
        ----------------------------------
        1  | oklahoma colorado Utah      |
        2  | arkansas colorado oklahoma  |
        3  | florida michigan florida    |
        ----------------------------------

    This would allow us to perform the following query to search for '%klah%':

        SELECT * FROM Shadow_Table WHERE searchtext LIKE '%klah%'

    I really don't like having to remember that this shadow table exists and that I'm supposed to use it when performing multi-column substring matching, but it probably yields pretty quick reads at the expense of writes and storage space. My gut feeling tells me there is an existing solution built into SQL Server 2008. However, I don't seem to be able to find anything other than research papers on the subject. Any help would be appreciated.

  • Problems with an infinite loop

    - by Tom
        function addAds($n) {
            for ($i = 0; $i <= $n; $i++) {
                while ($row = mysql_fetch_array(mysql_query("SELECT * FROM users"))) {
                    $aut[] = $row['name'];
                }
                $author = $aut[rand(0, mysql_num_rows(mysql_query("SELECT * FROM users")))];
                $name = "pavadinimas" . rand(0, 3600);
                $rnd = rand(0, 1);
                if ($rnd == 0) {
                    $type = "siulo";
                } else {
                    $type = "iesko";
                }
                $text = "tekstas" . md5("tekstas" . rand(0, 8000));
                $time = time() - rand(3600, 86400);
                $catid = rand(1, 9);
                switch ($catid) {
                    case 1: $subid = rand(1, 8);   break;
                    case 2: $subid = rand(9, 16);  break;
                    case 3: $subid = rand(17, 24); break;
                    case 4: $subid = rand(25, 32); break;
                    case 5: $subid = rand(33, 41); break;
                    case 6: $subid = rand(42, 49); break;
                    case 7: $subid = rand(50, 56); break;
                    case 8: $subid = rand(57, 64); break;
                    case 9: $subid = rand(65, 70); break;
                }
                mysql_query("INSERT INTO advert(author,name,type,text,time,catid,subid)
                             VALUES('$author','$name','$type','$text','$time','$catid','$subid')")
                    or die(mysql_error());
            }
            echo "$n adverts successfully added.";
        }

    The problem with this function is that it never finishes loading. As I noticed, my while loop causes it: if I comment it out, everything is OK. It has to get a random user from my DB and set it to the variable $author.

  • Finding a TableLayout in a thread (because of the ProgressDialog)

    - by Shaulian
    Hi all, in my activity I'm getting some big data from the web, and while getting this data I want to show the user a ProgressDialog with a spinning wheel. I can only do that by putting this code into a thread, right? The problem is that after I get this data I need to insert it into my TableLayout as TableRows, and it seems impossible to access the TableLayout from the thread. What can I do to show this progress dialog and still be able to access the table layout from the thread? Is there any event that fires at the end of the thread?

    My code fails at:

        _tableLayout.addView(_tableRowVar, new TableLayout.LayoutParams(
            LayoutParams.FILL_PARENT, LayoutParams.FILL_PARENT));

    My full code is:

        final ProgressDialog dialog = ProgressDialog.show(MyActivity.this, "",
            "Getting data.\nPlease wait...", true);

        new Thread() {
            public void run() {
                try {
                    TableLayout _tableLayout;
                    _tableLayout = (TableLayout) MyActivity.this.findViewById(R.id.tableLayoutID);

                    List<String> data = getDataFromWeb();

                    // Get the data and bind it into the table
                    publishTableLayoutWithTableRows(_tableLayout, data);
                } catch (Exception e) {
                    new AlertDialog.Builder(MyActivity.this)
                        .setMessage(e.getMessage())
                        .show();
                }
                dialog.dismiss();
            }
        }.start();

  • Change Data Capture or Change Tracking - Same as Traditional Audit Trail Table?

    - by HardCode
    Before I delve any deeper into the abyss of Microsoft documentation, I'd like to know if someone experienced with Change Data Capture and Change Tracking knows whether one or both of these can be used to replace the traditional audit trail setup: a copy of the 'real table' (all of the fields of the original table, plus date/time, user ID, and DML action fields), inserted into by triggers.

    The MSDN overview documentation explains at a high level what Change Data Capture and Change Tracking are, but it isn't clear enough to me, and doesn't state outright, that these tools can be used to replace the traditional audit trail tables we've made so often (where the trigger populates the audit trail table, which is all manual work). Can someone with experience using Change Data Capture and Change Tracking save me a lot of time, or confirm that I am looking at the right tool?

    The critical part of our audit trail is capturing all changes to a table's fields (on INSERT, UPDATE, DELETE), when they happened, and who made them. These changes are commonly provided to an end user chronologically via an audit trail report. Which raises another question: if Change Data Capture or Change Tracking is the solution, can this data be queried just like data from a normal table?

    EDIT: I need a permanent audit trail, regardless of time. I see that Change Data Capture has to do with the transaction logs, so this sounds finite to me.

  • Neo4j: reading data / performing shortest-path calculations on stored data

    - by paddydub
    I'm using the batch-insert example to insert data into the database:

        public static void CreateData() {
            // create the batch inserter
            BatchInserter inserter = new BatchInserterImpl("var/graphdb",
                BatchInserterImpl.loadProperties("var/neo4j.props"));

            Map<String, Object> properties = new HashMap<String, Object>();
            properties.put("name", "Mr. Andersson");
            properties.put("age", 29);
            long node1 = inserter.createNode(properties);

            properties.put("name", "Trinity");
            properties.remove("age");
            long node2 = inserter.createNode(properties);

            inserter.createRelationship(node1, node2,
                DynamicRelationshipType.withName("KNOWS"), null);
            inserter.shutdown();
        }

    How can I read this data back from the database? I can't find any examples of how to do this. I would like to store graph data like the following:

        graph.makeEdge("s", "c", "cost", (double) 7);
        graph.makeEdge("c", "e", "cost", (double) 7);
        graph.makeEdge("s", "a", "cost", (double) 2);
        graph.makeEdge("a", "b", "cost", (double) 7);
        graph.makeEdge("b", "e", "cost", (double) 2);
        Dijkstra<Double> dijkstra = getDijkstra(graph, 0.0, "s", "e");

    What is the best method to store this kind of data with tens of thousands of edges, and then run the Dijkstra algorithm to perform shortest-path calculations using the stored graph data?

  • Dynamic SQL Server stored procedure

    - by Pinu
        ALTER PROCEDURE [dbo].[GetDocumentsAdvancedSearch]
            @SDI CHAR(10) = NULL
            ,@Client CHAR(4) = NULL
            ,@AccountNumber VARCHAR(20) = NULL
            ,@Address VARCHAR(300) = NULL
            ,@StartDate DATETIME = NULL
            ,@EndDate DATETIME = NULL
            ,@ReferenceID CHAR(14) = NULL
        AS
        BEGIN
            -- SET NOCOUNT ON added to prevent extra result sets from
            -- interfering with SELECT statements.
            SET NOCOUNT ON;

            -- DECLARE
            DECLARE @Sql NVARCHAR(4000)
            DECLARE @ParamList NVARCHAR(4000)

            SELECT @Sql = 'SELECT DISTINCT ISNULL(Documents.DocumentID, '')
                ,Person.Name1
                ,Person.Name2
                ,Person.Street1
                ,Person.Street2
                ,Person.CityStateZip
                ,ISNULL(Person.ReferenceID,'')
                ,ISNULL(Person.AccountNumber,'')
                ,ISNULL(Person.HasSetPreferences,0)
                ,Documents.Job
                ,Documents.SDI
                ,Documents.Invoice
                ,ISNULL(Documents.ShippedDate,'')
                ,ISNULL(Documents.DocumentPages,'')
                ,Documents.DocumentType
                ,Documents.Description
            FROM Person
            LEFT OUTER JOIN Documents ON Person.PersonID = Documents.PersonID
            LEFT OUTER JOIN DocumentType ON Documents.DocumentType = DocumentType.DocumentType
            LEFT OUTER JOIN Addressess ON Person.PersonID = Addressess.PersonID'

            SELECT @Sql = @Sql + ' WHERE Documents.SDI IN ( ' + QUOTENAME(@sdi) + ')
                OR (Person.AssociationID = ' + ''' 000000 + ''' + 'AND Person.Client = ' + QUOTENAME(@Client)

            IF NOT (@AccountNumber IS NULL)
                SELECT @Sql = @Sql + 'AND Person.AccountNumber LIKE' + QUOTENAME(@AccountNumber)

            IF NOT (@Address IS NULL)
                SELECT @Sql = @Sql + 'AND Person.Name1 LIKE' + QUOTENAME(@Address)
                    + 'AND Person.Name2 LIKE' + QUOTENAME(@Address)
                    + 'AND Person.Street1 LIKE' + QUOTENAME(@Address)
                    + 'AND Person.Street2 LIKE' + QUOTENAME(@Address)
                    + 'AND Person.CityStateZip LIKE' + QUOTENAME(@Address)

            IF NOT (@StartDate IS NULL)
                SELECT @Sql = @Sql + 'AND Documents.ShippedDate >=' + @StartDate

            IF NOT (@EndDate IS NULL)
                SELECT @Sql = @Sql + 'AND Documents.ShippedDate <=' + @EndDate

            IF NOT (@ReferenceID IS NULL)
                SELECT @Sql = @Sql + 'AND Documents.ReferenceID =' + QUOTENAME(@ReferenceID)

            -- Insert statements for procedure here
            -- PRINT @Sql

            SELECT @ParamList = '@Psdi CHAR(10),@PClient CHAR(4),@PAccountNumber VARCHAR(20),@PAddress VARCHAR(300),@PStartDate DATETIME,@PEndDate DATETIME,@PReferenceID CHAR(14)'

            EXEC SP_EXECUTESQL @Sql, @ParamList, @Sdi, @Client, @AccountNumber,
                @Address, @StartDate, @EndDate, @ReferenceID
            -- PRINT @Sql
        END

    ERROR:

        Msg 102, Level 15, State 1, Line 23
        Incorrect syntax near '000000'.
        Msg 105, Level 15, State 1, Line 23
        Unclosed quotation mark after the character string 'AND Person.Client = [1 ]AND Person.AccountNumber LIKE[1]'.

  • Can you notice what's wrong with my PHP or MySQL code?

    - by Jenna
    I am trying to create a category menu with subcategories. I have the following MySQL table:

        --
        -- Table structure for table `categories`
        --

        CREATE TABLE IF NOT EXISTS `categories` (
          `ID` int(11) NOT NULL AUTO_INCREMENT,
          `name` varchar(1000) NOT NULL,
          `slug` varchar(1000) NOT NULL,
          `parent` int(11) NOT NULL,
          `type` varchar(255) NOT NULL,
          PRIMARY KEY (`ID`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=66 ;

        --
        -- Dumping data for table `categories`
        --

        INSERT INTO `categories` (`ID`, `name`, `slug`, `parent`, `type`) VALUES
        (63, 'Party', '/category/party/', 0, ''),
        (62, 'Kitchen', '/category/kitchen/', 61, 'sub'),
        (59, 'Animals', '/category/animals/', 0, ''),
        (64, 'Pets', '/category/pets/', 59, 'sub'),
        (61, 'Rooms', '/category/rooms/', 0, ''),
        (65, 'Zoo Creatures', '/category/zoo-creatures/', 59, 'sub');

    And the following PHP:

        <?php
        include("connect.php");
        echo "<ul>";
        $query = mysql_query("SELECT * FROM categories");
        while ($row = mysql_fetch_assoc($query)) {
            $catId = $row['id'];
            $catName = $row['name'];
            $catSlug = $row['slug'];
            $parent = $row['parent'];
            $type = $row['type'];
            if ($type == "sub") {
                $select = mysql_query("SELECT name FROM categories WHERE ID = $parent");
                while ($row = mysql_fetch_assoc($select)) {
                    $parentName = $row['name'];
                }
                echo "<li>$parentName >> $catName</li>";
            } else if ($type == "") {
                echo "<li>$catName</li>";
            }
        }
        echo "</ul>";
        ?>

    Now here's the problem. It displays this:

        * Party
        * Rooms >> Kitchen
        * Animals
        * Animals >> Pets
        * Rooms
        * Animals >> Zoo Creatures

    I want it to display this:

        * Party
        * Rooms >> Kitchen
        * Animals >> Pets >> Zoo Creatures

    Is there something wrong with my loop? I just can't figure it out.
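
    The grouping problem is independent of PHP: the sketch below (Python, using the table's rows as plain tuples) does one pass to bucket child names under their parent IDs, then prints each top-level category once with its children chained after it. The same two-pass idea ports directly to the PHP loop:

        rows = [
            (63, "Party", 0), (62, "Kitchen", 61), (59, "Animals", 0),
            (64, "Pets", 59), (61, "Rooms", 0), (65, "Zoo Creatures", 59),
        ]

        names, children, top_level = {}, {}, []
        for cat_id, name, parent in rows:
            names[cat_id] = name
            if parent == 0:
                top_level.append(cat_id)
            else:
                children.setdefault(parent, []).append(name)

        for cat_id in top_level:
            kids = children.get(cat_id)
            if kids:
                # one line per parent, children chained after it
                print(names[cat_id], ">>", " >> ".join(kids))
            else:
                print(names[cat_id])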

  • Ibatis startBatch() only works with SqlMapClient's own start and commit transactions, not with Spring

    - by Brian
    Hi, I'm finding that even though I have code wrapped by Spring transactions, which commits/rolls back when I would expect, in order to make use of JDBC batching with Ibatis and Spring I need to use explicit SqlMapClient transaction methods. That is, this does batching as I'd expect:

        dao.getSqlMapClient().startTransaction();
        dao.getSqlMapClient().startBatch();
        int i = 0;
        for (MyObject obj : allObjects) {
            dao.storeChange(obj);
            i++;
            if (i % DB_BATCH_SIZE == 0) {
                dao.getSqlMapClient().executeBatch();
                dao.getSqlMapClient().startBatch();
            }
        }
        dao.getSqlMapClient().executeBatch();
        dao.getSqlMapClient().commitTransaction();

    but if I don't have the opening and closing transaction statements, and rely on Spring to manage things (which is what I want to do!), batching just doesn't happen. Given that Spring does otherwise seem to be handling its side of the bargain regarding transaction management, can anyone advise on any known issues here?

    (The database is MySQL; I'm aware of the issues regarding its JDBC pseudo-batch approach with INSERT statement rewriting; that's definitely not an issue here.)

  • How to parse XML in SQL Server and handle NULL values in a DATETIME column

    - by Shantanu Gupta
    I have created a sample query in SQL Server to parse data from XML and display it. Eventually I will insert this data into my table, but before that I am facing a simple problem: I want to insert NULL into the datetime field when the XML has ADDED_DATE="NULL", as shown below. But when I execute this query, it gives me the error "Conversion failed when converting datetime from character string." What mistake am I making? Please point it out.

        declare @xml varchar(1000)
        set @xml = '
        <ROOT>
          <TX_MAP FK_GUEST_ID="1" FK_CATEGORY_ID="2" ATTRIBUTE="Test" DESCRIPTION="TestDesc"
                  IS_ACTIVE="1" ADDED_BY="NULL" ADDED_DATE="NULL" MODIFIED_BY="NULL"
                  MODIFIED_DATE="NULL"></TX_MAP>
          <TX_MAP FK_GUEST_ID="2" FK_CATEGORY_ID="1" ATTRIBUTE="Test2" DESCRIPTION="TestDesc2"
                  IS_ACTIVE="1" ADDED_BY="NULL" ADDED_DATE="NULL" MODIFIED_BY="NULL"
                  MODIFIED_DATE="NULL"></TX_MAP>
        </ROOT>
        '

        declare @handle int
        exec sp_xml_preparedocument @handle output, @xml

        select * from OPENXML(@handle, '/ROOT/TX_MAP', 1)
        with (
            FK_GUEST_ID INT
            ,FK_CATEGORY_ID VARCHAR(10)
            ,ATTRIBUTE VARCHAR(100)
            ,[DESCRIPTION] VARCHAR(100)
            ,IS_ACTIVE VARCHAR(10)
            ,ADDED_BY VARCHAR(100)
            ,ADDED_DATE DATETIME NULL
            ,MODIFIED_BY VARCHAR(100)
            ,MODIFIED_DATE DATETIME NULL
        )

    I am using SQL Server 2005.
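
    Note that the attribute holds the four-character string "NULL" rather than a missing value, which is what the DATETIME conversion chokes on. A quick illustration of that distinction in Python (just a sketch of the parsing concern, not the T-SQL fix): the sentinel string has to be mapped to a real null explicitly:

        import xml.etree.ElementTree as ET

        doc = '<ROOT><TX_MAP FK_GUEST_ID="1" ADDED_DATE="NULL"/></ROOT>'
        for node in ET.fromstring(doc):
            raw = node.get("ADDED_DATE")                 # -> the literal string "NULL"
            added_date = None if raw == "NULL" else raw  # map the sentinel to a real null
            print(repr(raw), "->", repr(added_date))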

  • Avoiding CheckStyle magic number errors in JDBC queries.

    - by Dan
    Hello, I am working on a group project for class and we are trying out CheckStyle. I am fairly comfortable with Java but have never touched JDBC or done any database work before this. I was wondering if there is an elegant way to avoid magic-number errors in PreparedStatement calls. Consider:

        preparedStatement = connect.prepareStatement("INSERT INTO shows "
            + "(showid, showtitle, showinfo, genre, youtube)"
            + "values (default, ?, ?, ?, ?);");
        preparedStatement.setString(1, title);
        preparedStatement.setString(2, info);
        preparedStatement.setString(3, genre);
        preparedStatement.setString(4, youtube);
        result = preparedStatement.executeUpdate();

    The setString calls get flagged as magic numbers. So far I have just added the numbers 3-10 or so to the ignore list for magic numbers, but I was wondering if there is a better way to insert those values into the statement. I also beg you for any other advice that comes to mind on seeing this code; I'd like to avoid developing any nasty habits. E.g., should I be using Statement, or is PreparedStatement fine? Will that let me refer to column names instead? Is that ideal? Etc... Thanks!

  • MySQL: Problem when using a temporary table

    - by Alex
    Hi, I'm trying to use a temporary table to store some values I need for a query. The reason for using a temporary table is that I don't want to store the data permanently, so different users can modify it at the same time. The data is only stored for a second, so I think a temporary table is the best approach for this. The thing is that the way I'm trying to use it doesn't seem to be right (the query works if I use a permanent table). This is an example query:

        CREATE TEMPORARY TABLE SearchMatches (PatternID int not null primary key, Matches int not null)

        INSERT INTO SearchMatches (PatternID, Matches)
        VALUES ('12605','1'),('12503','1'),('12587','2'),('12456','1'),
               ('12457','2'),('12486','2'),('12704','1'),('12686','1'),
               ('12531','2'),('12549','1'),('12604','1'),('12504','1'),
               ('12586','1'),('12548','1'),('12530','1'),('12687','2'),
               ('12485','1'),('12705','1')

        SELECT pat.id, signatures.signature, products.product, versions.version,
               builds.build, pat.log_file, sig_types.sig_type, pat.notes, pat.kb
        FROM patterns AS pat
        INNER JOIN signatures ON pat.signature = signatures.id
        INNER JOIN products ON pat.product = products.id
        INNER JOIN versions ON pat.version = versions.id
        INNER JOIN builds ON pat.build = builds.id
        INNER JOIN sig_types ON pat.sig_type = sig_types.id,
        SearchMatches AS sm
        INNER JOIN patterns ON patterns.id = sm.PatternID
        WHERE sm.Matches <> 0
        ORDER BY sm.Matches DESC, products.product, versions.version, builds.build
        LIMIT 0, 50

    Any suggestions? Thanks.

  • How to upload files to Azure in the background with Delphi and OmniThread?

    - by mamcx
    I have tried to upload 100+ files to Azure with Delphi. However, the calls block the main thread, so I want to do this with an async call or a background thread. This is what I do now (as explained here):

        procedure TCloudManager.UploadTask(const input: TOmniValue; var output: TOmniValue);
        var
          FileTask: TFileTask;
        begin
          FileTask := input.AsRecord<TFileTask>;
          Upload(FileTask.BaseFolder, FileTask.LocalFile, FileTask.CloudFile);
        end;

        function TCloudManager.MassiveUpload(const BaseFolder: String;
          Files: TDictionary<String, String>): TStringList;
        var
          pipeline: IOmniPipeline;
          FileInfo: TPair<String, String>;
          FileTask: TFileTask;
        begin
          // set up pipeline
          pipeline := Parallel.Pipeline
            .Stage(UploadTask)
            .NumTasks(Environment.Process.Affinity.Count * 2)
            .Run;

          // insert URLs to be retrieved
          for FileInfo in Files do
          begin
            FileTask.LocalFile := FileInfo.Key;
            FileTask.CloudFile := FileInfo.Value;
            FileTask.BaseFolder := BaseFolder;
            pipeline.Input.Add(TOmniValue.FromRecord(FileTask));
          end;
          pipeline.Input.CompleteAdding;

          // wait for pipeline to complete
          pipeline.WaitFor(INFINITE);
        end;

    However, this blocks too (why? I don't understand).
