Search Results

Search found 10691 results on 428 pages for 'batch insert'.

Page 375/428 | < Previous Page | 371 372 373 374 375 376 377 378 379 380 381 382  | Next Page >

  • Saving temporary file name after uploading file with Uploadify

    - by mIRU
    I have this script: if (!empty($_FILES)) { $tempFile = $_FILES['Filedata']['tmp_name']; $targetPath = dirname(__FILE__).$_POST['folder'] . '/'; $pathinfoFile = pathinfo($_FILES['Filedata']['name']); $targetFile = str_replace('//','/',$targetPath) .uniqid().'.'.$pathinfoFile['extension']; move_uploaded_file($tempFile,$targetFile); } This script is from Uploadify, modified to save the file under a unique name. After the user uploads the file, I need to keep both the unique name and the original name temporarily, so that when the user submits the form I can insert them into the database. The problem is that I cannot see all of this script's actions in the Firebug console, so I cannot work out how to fix it. I tried saving the names in $_SESSION, but they are not saved; I found out why the session approach does not work here: http://uploadify.com/forum/viewtopic.php?f=5&t=43 . I tried the solution from that forum thread, but without result. Is there an easier way to solve this, or a better way to do it? Sorry for such a silly question; I have run out of ideas. Thanks a lot for helping me, and sorry again for my English.
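
    A minimal, language-agnostic sketch of one approach, written in Python (the question itself is PHP/Uploadify, and the function and field names below are illustrative, not part of Uploadify's API): generate the unique stored name in the upload handler and hand both names back to the page, which can carry them in hidden form fields until the final form submit writes them to the database, instead of relying on server-side session state.

      # Sketch only: produce a unique on-disk name for an upload and return both
      # names to the client as JSON.
      import json
      import os
      import uuid

      def register_upload(original_name, upload_dir):
          ext = os.path.splitext(original_name)[1]      # keep the original extension
          stored_name = uuid.uuid4().hex + ext          # unique name used on disk
          # ...move the uploaded temp file to os.path.join(upload_dir, stored_name)...
          return json.dumps({"stored": stored_name, "original": original_name})

      print(register_upload("report.pdf", "/var/uploads"))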

    Read the article

  • VB.NET Syntax Coding

    - by Yiu Korochko
    I know many people ask how some of these things are done, but I do not understand the context in which to use the answers, so... I'm building a code editor for a sub-version of the Python language, and I found a fairly decent way of highlighting keywords in the RichTextBox through this: bluwords.Add(KEYWORDS GO HERE) If scriptt.Text.Length > 0 Then Dim selectStart2 As Integer = scriptt.SelectionStart scriptt.Select(0, scriptt.Text.Length) scriptt.SelectionColor = Color.Black scriptt.DeselectAll() For Each oneWord As String In bluwords Dim pos As Integer = 0 Do While scriptt.Text.ToUpper.IndexOf(oneWord.ToUpper, pos) >= 0 pos = scriptt.Text.ToUpper.IndexOf(oneWord.ToUpper, pos) scriptt.Select(pos, oneWord.Length) scriptt.SelectionColor = Color.Blue pos += 1 Loop Next scriptt.SelectionStart = selectStart2 End If (scriptt is the RichTextBox.) But when any decent amount of code is typed (or loaded via OpenFileDialog), chunks of the code go missing, the syntax highlighting falls apart, and it just plain ruins it. I'm looking for a more efficient way of doing this, maybe something more like Visual Studio itself, because there is NO NEED to highlight all the text, set it black, and then redo all of the highlighting; the text also begins to overwrite itself if you go back to insert characters between existing text. In addition, in this version of Python, a hash (#) is used for comments on comment-only lines and a double hash (##) is used for comments on the same line as code. I saw that someone had asked about this exact thing, and the working answer for selecting to the end of the line was something like: ^\'[^\r\n]+$|''[^\r\n]+$ which I cannot seem to get working in practice. I also want to select the text between quotes and turn it turquoise: the text between the first quotation mark and the second, between the 3rd and 4th, and so on. Any help is appreciated!
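
    The question is about VB.NET, but as an illustrative sketch the two patterns it needs can be written out in Python's regex syntax (the RichTextBox recoloring itself is not shown, and the sample code string is made up): one pattern for a comment running from # or ## to the end of the line, and one for text between a pair of double quotes, so each match range can be selected and recolored.

      # Illustrative patterns only; in the real editor, each match's span would
      # drive the Select/SelectionColor calls.
      import re

      code = 'x = "hello"  ## same-line comment\n# full-line comment\ny = "world"'

      comment_re = re.compile(r'##?[^\r\n]*')   # '#' or '##' up to the end of the line
      string_re  = re.compile(r'"[^"\r\n]*"')   # text between a pair of double quotes

      for m in comment_re.finditer(code):
          print("comment:", m.span(), m.group())
      for m in string_re.finditer(code):
          print("string:", m.span(), m.group())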

    Read the article

  • SQL Server error handling: exceptions and the database-client contract

    - by gbn
    We're a team of SQL Server database developers. Our clients are a mixed bag of C#/ASP.NET, C# and Java web services, Java/Unix services and some Excel. Our client developers only use stored procedures that we provide, and we expect that (where sensible, of course) they treat them like web service methods. Some of our client developers don't like SQL exceptions. They understand them in their own languages, but they don't appreciate that SQL is limited in how we can communicate issues. I don't just mean SQL errors, such as trying to insert "bob" into an int column. I also mean exceptions such as telling them that a reference value is wrong, or that data has already changed, or that they can't do this because their aggregate is not zero. They don't really have any concrete alternatives: they've mentioned that we should use output parameters, but we assume an exception means "processing stopped/rolled back." How do folks here handle the database-client contract? Either generally, or where there is separation between the DB and client code monkeys. Edits:
    - we use SQL Server 2005 TRY/CATCH exclusively
    - we already log all errors after the rollback to an exception table
    - we're concerned that some of our clients won't check output parameters and will assume everything is OK; we need errors flagged up for support to look at
    - everything is an exception: the clients are expected to do some message parsing to separate information from errors; to separate our exceptions from DB engine and calling errors, they should use the error number (ours are all 50,000, of course)

    Read the article

  • Using the <h2> as the title after sent?

    - by Delan Azabani
    Currently, I have a semi-dynamic system for my website's pages. head.php has all the tags before the content body, foot.php the tags after. Any page using the main theme will include head.php, then write the content, then output foot.php. To be able to set the title, I quickly set a variable $title before inclusion: <?php $title = 'Untitled document'; include_once '../head.php'; ?> <h2>Untitled document</h2> Content here... <?php include_once '../foot.php'; ?> So that in head.php... <title><?php echo $title; ?> | Delan Azabani</title> However, this seems kludgy, as the title is, most of the time, the same as the content of the h2 tag. Is there a way I can get PHP to read the content of h2, track back and insert it, then send the whole thing at the end?
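
    An illustrative sketch of the usual shape of this, in Python rather than PHP: render the body first, pull the first <h2> out of it, and only then emit the head with that text as the title. In PHP the same idea is typically done by buffering the body output (ob_start and friends) before head.php is sent; the function below is just a stand-in for that flow.

      # Sketch only: derive the <title> from the first <h2> of an already
      # rendered body, falling back to a default when no <h2> is present.
      import re

      def render_page(body_html, default_title="Untitled document"):
          m = re.search(r"<h2>(.*?)</h2>", body_html, re.DOTALL)
          title = m.group(1) if m else default_title
          head = "<title>%s | Delan Azabani</title>" % title
          return head + "\n" + body_html

      print(render_page("<h2>Untitled document</h2>\nContent here..."))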

    Read the article

  • MYSQL JOIN WHERE ISSUES - need some kind of if condition

    - by Breezer
    Hi. This will be hard to explain, but I'll do my best. I have 4 tables, all with a specific column relating them to each other: one with users (agent_users), one with working hours (agent_pers), one with sold items (agent_stat), and one with projects (agent_pro). The user and project tables are not really relevant to the issue at hand, but I mention them so you can see why certain tables are included in my query. =) I use two pages to insert data into the working-hours table and the sold-items table, and a third page to summarize everything for the current month; the query for that is as follows: SELECT *, SUM(sv_p_kom),SUM(sv_p_gick),SUM(sv_p_lunch) FROM (( agent_users LEFT JOIN agent_pers ON agent_users.sv_aid = agent_pers.sv_p_uid) LEFT JOIN agent_stat ON agent_pers.sv_p_uid = agent_stat.sv_s_uid) LEFT JOIN agent_pro ON agent_pers.sv_p_pid=agent_pro.p_id WHERE MONTH(agent_pers.sv_p_datum) =7 GROUP BY sv_aname The problem is that I don't want sold items from previous months to be included in the results. I know I could do that by simply adding MONTH(agent_stat.sv_s_datum) = 7 to the WHERE clause, but then, if no items were sold that month, no data at all shows up, not even the hours. Any help on how to solve this is greatly appreciated. If something is not clear, don't hesitate to ask and I'll try my best to answer; after all, my English isn't the best out there :P Regards, breezer
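
    A sketch of one way to express this (the table and column names are taken from the question, the sold-amount column is hypothetical, and the surrounding Python is only there to show the query with bound parameters): move the month filter on the sold-items table into the LEFT JOIN's ON condition instead of the WHERE clause, so rows with hours but no sales in that month still come back.

      # Illustrative only: the month condition on agent_stat lives in the JOIN,
      # not in WHERE, so the LEFT JOIN can still produce NULL-extended rows.
      query = """
      SELECT agent_users.sv_aname,
             SUM(agent_pers.sv_p_kom), SUM(agent_pers.sv_p_gick), SUM(agent_pers.sv_p_lunch),
             SUM(agent_stat.sv_s_sold)                  -- hypothetical sold-amount column
      FROM agent_users
      LEFT JOIN agent_pers ON agent_users.sv_aid = agent_pers.sv_p_uid
      LEFT JOIN agent_stat ON agent_pers.sv_p_uid = agent_stat.sv_s_uid
                          AND MONTH(agent_stat.sv_s_datum) = %s
      WHERE MONTH(agent_pers.sv_p_datum) = %s
      GROUP BY agent_users.sv_aname
      """
      month = 7
      # cursor.execute(query, (month, month))   # with any DB-API MySQL driver's cursor
      print(query)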

    Read the article

  • Connection error in Ruby

    - by piemesons
    require 'rubygems' require 'mysql' db = Mysql.connect('localhost', 'root', '', 'mohit') //db.rb:4: undefined method `connect' for Mysql:Class (NoMethodError) //undefined method `real_connect' for Mysql:Class (NoMethodError) db.query("CREATE TABLE people ( id integer primary key, name varchar(50), age integer)") db.query("INSERT INTO people (name, age) VALUES('Chris', 25)") begin query = db.query('SELECT * FROM people') puts "There were #{query.num_rows} rows returned" query.each_hash do |h| puts h.inspect end rescue puts db.errno puts db.error end The error I am getting is: undefined method `connect' for Mysql:Class (NoMethodError) OR undefined method `real_connect' for Mysql:Class (NoMethodError) EDIT: The return value of Mysql.methods is ["private_class_method", "inspect", "name", "tap", "clone", "public_methods", "object_id", "__send__", "method_defined?", "instance_variable_defined?", "equal?", "freeze", "extend", "send", "const_defined?", "methods", "ancestors", "module_eval", "instance_method", "hash", "autoload?", "dup", "to_enum", "instance_methods", "public_method_defined?", "instance_variables", "class_variable_defined?", "eql?", "constants", "id", "instance_eval", "singleton_methods", "module_exec", "const_missing", "taint", "instance_variable_get", "frozen?", "enum_for", "private_method_defined?", "public_instance_methods", "display", "instance_of?", "superclass", "method", "to_a", "included_modules", "const_get", "instance_exec", "type", "<", "protected_methods", "<=>", "class_eval", "==", "class_variables", ">", "===", "instance_variable_set", "protected_instance_methods", "protected_method_defined?", "respond_to?", "kind_of?", ">=", "public_class_method", "to_s", "<=", "const_set", "allocate", "class", "new", "private_methods", "=~", "tainted?", "__id__", "class_exec", "autoload", "untaint", "nil?", "private_instance_methods", "include?", "is_a?"] The return value of Mysql.methods(false) is [], a blank array.

    Read the article

  • [IceFaces] Why are validators of unchanged components called?

    - by bitschnau
    I have an IceFaces form and several input fields. Let's say I have this: <ice:selectOneMenu id="accountMenu" value="#{accountController.account.aId}" validator="#{accountController.validateAccount}"> <f:selectItems id="accountItems" value="#{accountController.accountItems}" /> </ice:selectOneMenu> and this: <ice:selectOneMenu id="costumerMenu" value="#{customerController.customer.cId}" validator="#{customerController.validateCustomer"> <f:selectItems id="customerItems" value="#{customerController.customerItems}" /> </ice:selectOneMenu> If I change one value, the respective validator is called, which is fine. But the other validator is also called, which is not fine, because the user gets an irritating message to insert a value into a field he may not even have reached yet. It's like poking the user with a stick: "Hurry up now!". BAD! I thought the "partialSubmit" attribute controlled this behaviour, so that only the part of the DOM affected by the user interaction is submitted, but if I declare both components to be partially submitted, nothing changes. Both validators are still called if one component's value is changed. How can I prevent the whole form from being validated until it is submitted completely?

    Read the article

  • .save puts NULL in id field in Rails

    - by mathee
    Here's the model file: class ProfileTag < ActiveRecord::Base def self.create_or_update(options = {}) id = options.delete(:id) record = find_by_id(id) || new record.id = id record.attributes = options puts "record.profile_id is" puts record.profile_id record.save! record end end This gives me the correct print out in my log. But it also says that there's a call to UPDATE that sets profile_id to NULL. Here's some of the output in the log file:
      Processing ProfilesController#update (for 127.0.0.1 at 2010-05-28 18:20:54) [PUT]
      Parameter: {"commit"=>"Save", ...}
      ProfileTag Create (0.0ms)  INSERT INTO `profile_tags` (`reputation_value`, `updated_at`, `tag_id`, `id`, `profile_id`, `created_at`) VALUES(0, '2010-05-29 01:20:54', 1, NULL, 4, '2010-05-29 01:20:54')
      SQL (2.0ms)  COMMIT
      SQL (0.0ms)  BEGIN
      SQL (0.0ms)  COMMIT
      ProfileTag Load (0.0ms)  SELECT * FROM `profile_tags` WHERE (`profile_tags`.profile_id = 4)
      SQL (1.0ms)  BEGIN
      ProfileTag Update (0.0ms)  UPDATE `profile_tags` SET profile_id = NULL WHERE (profile_id = 4 AND id IN (35))
    I'm not sure I understand why the INSERT puts the value into profile_id properly, but then it sets it to NULL on an UPDATE. If you need more specifics, please let me know. I'm thinking that the save functionality does many things other than INSERTs into the database, but I don't know what I need to specify so that it will properly set profile_id.

    Read the article

  • How can I exclude words with apostrophes when reading into a table of strings?

    - by rearden
    ifstream fin; string temp; fin.open("engldict.txt"); if(fin.is_open()) { bool apos = false; while(!fin.eof()) { getline(fin, temp, '\n'); if(temp.length() > 2 && temp.length() < 7) { for(unsigned int i = 0; i < temp.length(); i++) { if(temp.c_str()[i] == '\'') apos = true; } if(!apos) dictionary.insert(temp); } } } This code gives me a runtime error: Unhandled exception at 0x00A50606 in Word Jumble.exe: 0xC0000005: Access violation reading location 0x00000014. and throws me a break point at: size_type size() const _NOEXCEPT { // return length of sequence return (this->_Mysize); } within the xstring header. This exception is thrown no matter what character I use, so long as it is present within the words I am reading in. I am aware that it is probably a super simple fix, but I just really need another set of eyes to see it. Thanks in advance.
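
    A language-agnostic sketch of the intended filtering, written in Python (the question's code is C++, and this does not address the crash itself): read the dictionary line by line and keep only entries of 3 to 6 characters that contain no apostrophe.

      # Sketch only: same file name and length bounds as in the question.
      dictionary = set()

      with open("engldict.txt") as fin:
          for line in fin:
              word = line.rstrip("\n")
              if 2 < len(word) < 7 and "'" not in word:
                  dictionary.add(word)

      print(len(dictionary), "words kept")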

    Read the article

  • In ArrayBlockingQueue, why copy the ReentrantLock field into a local final variable?

    - by mjlee
    In ArrayBlockingQueue, any method that needs the lock first copies it into a 'final' local variable before calling 'lock()'. public boolean offer(E e) { if (e == null) throw new NullPointerException(); final ReentrantLock lock = this.lock; lock.lock(); try { if (count == items.length) return false; else { insert(e); return true; } } finally { lock.unlock(); } } Is there any reason to assign a local variable 'lock' from 'this.lock' when the field 'this.lock' is itself final? Additionally, extract() copies the E[] field into a local variable before acting on it. private E extract() { final E[] items = this.items; E x = items[takeIndex]; items[takeIndex] = null; takeIndex = inc(takeIndex); --count; notFull.signal(); return x; } Is there any reason for copying to a local final variable?

    Read the article

  • Are there good reasons not to use an ORM?

    - by hangy
    During my apprenticeship, I have used NHibernate for some smaller projects which I mostly coded and designed on my own. Now, before starting some bigger project, the discussion arose how to design data access and whether or not to use an ORM layer. As I am still in my apprenticeship and still consider myself a beginner in enterprise programming, I did not really try to push in my opinion, which is that using an object relational mapper to the database can ease development quite a lot. The other coders in the development team are much more experienced than me, so I think I will just do what they say. :-) However, I do not completely understand two of the main reasons for not using NHibernate or a similar project: One can just build one’s own data access objects with SQL queries and copy those queries out of Microsoft SQL Server Management Studio. Debugging an ORM can be hard. So, of course I could just build my data access layer with a lot of SELECTs etc, but here I miss the advantage of automatic joins, lazy-loading proxy classes and a lower maintenance effort if a table gets a new column or a column gets renamed. (Updating numerous SELECT, INSERT and UPDATE queries vs. updating the mapping config and possibly refactoring the business classes and DTOs.) Also, using NHibernate you can run into unforeseen problems if you do not know the framework very well. That could be, for example, trusting the Table.hbm.xml where you set a string’s length to be automatically validated. However, I can also imagine similar bugs in a “simple” SqlConnection query based data access layer. Finally, are those arguments mentioned above really a good reason not to utilise an ORM for a non-trivial database based enterprise application? Are there probably other arguments they/I might have missed? (I should probably add that I think this is like the first “big” .NET/C# based application which will require teamwork. Good practices, which are seen as pretty normal on Stack Overflow, such as unit testing or continuous integration, are non-existing here up to now.)

    Read the article

  • How does SQLite on Android handle long strings?

    - by Levara
    I'm wondering how Android's implementation of SQLite handles long strings. The online SQLite documentation says that strings in SQLite are limited to 1 million characters; my strings are definitely smaller. I'm creating a simple RSS application, and after parsing an HTML document and extracting the text, I'm having a problem saving it to the database. I have 2 tables in the database, feeds and articles. RSS feeds are correctly saved to and retrieved from the feeds table, but when saving to the articles table, logcat says that it cannot save the extracted text to its column. I don't know if other columns are causing problems too; there is no mention of them in logcat. Since the text comes from an article on the web, are characters like (",',;) creating problems? Does Android automatically escape them, or do I have to do that? I'm using an insert technique similar to the one in the Notepad tutorial: public long insertArticle(long feedid, String title, String link, String description, String h1, String h2, String h3, String p, String image, long date) { ContentValues initialValues = new ContentValues(); initialValues.put(KEY_FEEDID, feedid); initialValues.put(KEY_TITLE, title); initialValues.put(KEY_LINK, link); initialValues.put(KEY_DESCRIPTION, description ); initialValues.put(KEY_H1, h1 ); initialValues.put(KEY_H2, h2); initialValues.put(KEY_H3, h3); initialValues.put(KEY_P, p); initialValues.put(KEY_IMAGE, image); initialValues.put(KEY_DATE, date); return mDb.insert(DATABASE_TABLE_ARTICLES,null, initialValues); } The column p is for the extracted text; h1, h2 and h3 are for headers from the page. Logcat reports only column p as the problem. The table is created with the following statement: private static final String DATABASE_CREATE_ARTICLES = "create table articles( _id integer primary key autoincrement, feedid integer, title text, link text not null, description text," + "h1 text, h2 text, h3 text, p text, image text, date integer);";
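
    For the escaping question specifically, here is an illustrative sketch with Python's sqlite3 module (the Android side is assumed to behave comparably, since values passed through ContentValues are bound rather than spliced into the SQL text): when a value is bound as a parameter, quotes and apostrophes in the article body need no manual escaping.

      # Sketch only: parameter binding stores the text verbatim, quotes and all.
      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.execute("CREATE TABLE articles (id INTEGER PRIMARY KEY AUTOINCREMENT, p TEXT)")
      body = 'He said "it\'s fine"; punctuation is stored verbatim.'
      conn.execute("INSERT INTO articles (p) VALUES (?)", (body,))   # bound, not escaped by hand
      print(conn.execute("SELECT p FROM articles").fetchone()[0])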

    Read the article

  • What technologies/tools do people use to implement live websites ?

    - by MadSeb
    Hi, I have the following situation: server A is hooked up to a piece of hardware that sends out values and information every second, and programs on that machine can read these values. Server A is in a very remote location, so the Internet connection is very slow and unreliable, but it does exist; let's say it's a weather station in the Arctic. Users at the home location want to monitor the weather values somehow. They could use a remote desktop connection to server A, but that would be far too slow. My idea is to have a website on a web server (call it server B, located at the home location), have server A connect to server B and send its values somehow, and have the web application read those values and display them. But how would I build such a system? I know I could use MySQL, have server A connect to a SQL server on server B and send INSERT queries, and have the web application running on server B read from that SQL server, but I think that would be way too slow, and I think there has to be a better solution. Any ideas? By the way, the users should also be able to send information to the weather station from the website (for example, an admin user should be allowed to shut down the weather station from the website). Best regards, MadSeb
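
    One common shape for this, shown as a Python sketch (the endpoint URL and field names are illustrative, not part of any existing system): server A batches its readings and pushes them over HTTP to a small ingest endpoint on server B, which writes them into B's local database; the website then only ever reads B's database.

      # Sketch only: push a batch of readings from the remote server to the web server.
      import json
      import urllib.request

      def push_readings(readings, endpoint="https://example.com/weather/ingest"):
          payload = json.dumps(readings).encode("utf-8")
          req = urllib.request.Request(endpoint, data=payload,
                                       headers={"Content-Type": "application/json"})
          with urllib.request.urlopen(req, timeout=30) as resp:   # slow link: generous timeout
              return resp.status

      # push_readings([{"ts": "2010-05-29T01:20:54Z", "temp_c": -21.4}])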

    Read the article

  • Options for displaying OG groups a node is published for on node page?

    - by Erik Töyrä
    What I want: I have several OG groups in which content can be published. I would like to display which OG groups a node has been published for when viewing the node page, for example: "This page is published for: Department A, Department B." The snippet below shows the data I have in the $node object in node.tpl.php; this data is generated by the OG module. Extracted data from $node: ... [og_groups] => Array ( [993] => 993 [2078] => 2078 ) [og_groups_both] => Array ( [993] => Department A [2078] => Department B ) ... I know I could loop through the og_groups_both array in node.tpl.php and generate the output from there, but it feels like quite a dirty solution. The ideal solution would be to have a $og_groups variable in node.tpl.php, similar to how $submitted is used in node.tpl.php (see below). Example of how $submitted is used: <?php if ($submitted): ?> <div class="submitted"><?php print $submitted; ?></div> <?php endif; ?> Should I use hook_load() in a custom module to insert the new variable $og_groups into $node? What options do I have and which solution would you recommend?

    Read the article

  • Can I access views when drawn in the drawRect method?

    - by GianPac
    When creating the content view of a tableViewCell, I used drawInRect:withFont:, drawAtPoint: and others inside drawRect:, since all I needed was to lay out some text. It turns out that part of the drawn text needs to be clickable URLs, so I decided to create a UIWebView and insert it in my drawRect:. Everything seems to be laid out fine; the problem is that interaction with the UIWebView is not happening. I tried enabling user interaction, but it did not work. Since drawRect: draws into the current graphics context, is there anything I can do to have my subviews interact with the user? Here is some of the code I use inside the UITableViewCell class: -(void) drawRect:(CGRect)rect{ NSString *comment = @"something"; [comment drawAtPoint:point forWidth:195 withFont:someFont lineBreakMode:UILineBreakModeMiddleTruncation]; NSString *anotherCommentWithURL = @"this comment has a URL http://twtr.us"; UIWebView *webView = [[UIWebView alloc] initWithFrame:someFrame]; webView.delegate = self; webView.text = anotherCommentWithURL; [self addSubView:webView]; [webView release]; } As I said, the view draws fine, but there is no interaction with the webView from the outside; the URL gets embedded into HTML and should be clickable, but it is not. I got it working on another view, but on that one I lay out the views rather than drawing them. Any ideas?

    Read the article

  • how to remove subsets from a given text file

    - by user324887
    I have a problem like this. Given the following transactions:
      10 20 30 40 70
      20 30 70
      30 40
      10 20
      29 70 80 90
      20 30 40
      40 45 65
      10 20 80
      45 65 20
    I want to remove all subset transactions from this file, so the output file should look like this:
      10 20 30 40 70
      29 70 80 90
      20 30 40
      40 45 65
      10 20 80
    Records like 20 30 70, 30 40, 10 20 and 45 65 20 are removed because they are subsets of other records. I am using a set for this, but I am not able to create one set per line. Does anybody know how to do this? Please help me. Here is my code: include include include using namespace std; using namespace std; set<string> s1; int main() { FILE *fp = fopen ( "abc.txt", "r" ); if ( fp != NULL ) { char line [ 128 ]; /* or other suitable maximum line size */ while ( fgets ( line, sizeof line, fp ) != NULL ) /* read a line */ { istringstream iss(line); do { string sub; iss >> sub; s1.insert(sub); } while (iss); for (set<string>::const_iterator p = s1.begin( );p != s1.end( ); ++p) cout << *p << endl; } } }
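
    For the subset test itself, here is a language-agnostic sketch in Python (the question's code is C++): read each line as a set of numbers and keep only the lines whose set is not strictly contained in another line's set. Note that strict containment also drops 20 30 40, since it is contained in the first record, which differs slightly from the expected output listed above.

      # Sketch only: keep the maximal transactions, dropping any line whose
      # numbers are a proper subset of another line's numbers.
      lines = ["10 20 30 40 70", "20 30 70", "30 40", "10 20", "29 70 80 90",
               "20 30 40", "40 45 65", "10 20 80", "45 65 20"]

      sets = [set(l.split()) for l in lines]
      kept = [l for l, s in zip(lines, sets)
              if not any(s < other for other in sets)]   # '<' is proper subset

      for l in kept:
          print(l)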

    Read the article

  • Error with MySQL Query

    - by Ken
    Okay, I must be an idiot, because this is my 3rd question for today. Here's my code: date_default_timezone_set("America/Los_Angeles"); include("mainmenu.php"); $con = mysql_connect("localhost", "root", "********"); if(!$con){ die(mysql_error()); } $usrname = $_POST['usrname']; $fname = $_POST['fname']; $lname = $_POST['lname']; $password = $_POST['password']; $email = $_POST['email']; mysql_select_db("`users`, $con) or die(mysql_error()"); $query = ("INSERT INTO `users`.`data` (`id`, `usrname`, `fname`, `lname`, `email`, `password`) VALUES (NULL, '$usrname', '$fname', '$lname', '$email', 'password'))"); mysql_query('$query') or die(mysql_error()); mysql_close($con); echo("Thank you for registering!"); I always get the error returned as: "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '$query' at line 1. Help a newbie. I'm about to stab my monitor.

    Read the article

  • [C++][OpenMP] Proper use of "atomic directive" to lock STL container

    - by conradlee
    I have a large number of sets of integers, which I have, in turn, put into a vector of pointers. I need to be able to update these sets of integers in parallel without causing a race condition. More specifically. I am using OpenMP's "parallel for" construct. For dealing with shared resources, OpenMP offers a handy "atomic directive," which allows one to avoid a race condition on a specific piece of memory without using locks. It would be convenient if I could use the "atomic directive" to prevent simultaneous updating to my integer sets, however, I'm not sure whether this is possible. Basically, I want to know whether the following code could lead to a race condition vector< set<int>* > membershipDirectory(numSets, new set<int>); #pragma omp for schedule(guided,expandChunksize) for(int i=0; i<100; i++) { set<int>* sp = membershipDirectory[5]; #pragma omp atomic sp->insert(45); } (Apologies for any syntax errors in the code---I hope you get the point) I have seen a similar example of this for incrementing an integer, but I'm not sure whether it works when working with a pointer to a container as in my case.
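
    As a language-agnostic illustration of the point (the question is C++/OpenMP; this sketch is Python): an atomic directive only covers a single scalar memory update, so mutating a shared container from several workers needs an explicit lock, for example an OpenMP critical section or a per-set lock, rather than the atomic directive alone.

      # Sketch: one lock per set guards the container mutation (threading.Lock
      # stands in for an OpenMP critical section or omp_lock_t).
      import threading

      num_sets = 10
      membership = [set() for _ in range(num_sets)]        # one distinct set per slot
      locks = [threading.Lock() for _ in range(num_sets)]  # one lock per set

      def worker():
          with locks[5]:            # guard the whole insert, not just a scalar update
              membership[5].add(45)

      threads = [threading.Thread(target=worker) for _ in range(100)]
      for t in threads: t.start()
      for t in threads: t.join()
      print(len(membership[5]))     # 1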

    Read the article

  • How is timezone handled in the lifecycle of an ADO.NET + SQL Server DateTime column?

    - by stimpy77
    Using SQL Server 2008. This is a really junior question and I could really use some elaborate information, but the information on Google seems to dance around the topic quite a bit and it would be nice if there was some detailed elaboration on how this works... Let's say I have a datetime column and in ADO.NET I set it to DateTime.UtcNow. 1) Does SQL Server store DateTime.UtcNow accordingly, or does it offset it again based on the timezone of where the server is installed, and then return it offset-reversed when queried? I think I know that the answer is "of course it stores it without offsetting it again" but want to be certain. So then I query for it and cast it from, say, an IDataReader column to a DateTime. As far as I know, System.DateTime has metadata that internally tracks whether it is a UTC DateTime or it is an offsetted DateTime, which may or may not cause .ToLocalTime() and .ToUniversalTime() to have different behavior depending on this state. So, 2) Does this casted System.DateTime object already know that it is a UTC DateTime instance, or does it assume that it has been offset? Now let's say I don't use UtcNow, I use DateTime.Now, when performing an ADO.NET INSERT or UPDATE. 3) Does ADO.NET pass the offset to SQL Server and does SQL Server store DateTime.Now with the offset metadata? So then I query for it and cast it from, say, an IDataReader column to a DateTime. 4) Does this casted System.DateTime object already know that it is an offset time, or does it assume that it is UTC?
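
    An illustrative sketch of the same pitfall in Python terms (the question is about ADO.NET and SQL Server, which this does not answer directly): a plain datetime column carries no offset or "kind" metadata, much like a naive Python datetime, so the convention of whether a stored value is UTC or local has to live in the application and be re-applied when reading.

      # Sketch only: a naive value is written, a naive value comes back; the UTC
      # convention is re-attached explicitly by the application.
      from datetime import datetime, timezone

      stored = datetime.utcnow()                       # naive value written (UTC by convention)
      read_back = stored                               # the column returns it unchanged, still naive
      as_utc = read_back.replace(tzinfo=timezone.utc)  # re-attach the convention explicitly
      print(as_utc.astimezone())                       # now safe to convert to local time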

    Read the article

  • MS SQL - Multi-Column substring matching

    - by hamlin11
    One of my clients is hooked on multi-column substring matching. I understand that Contains and FreeText search for words (and at least in the case of Contains, word prefixes). However, based upon my understanding of this MSDN book, neither of these nor their variants are capable of searching substrings. I have used LIKE rather extensively (Select * from A where A.B Like '%substr%'). Sample table A:
      ID | Col1     | Col2     | Col3     |
      -------------------------------------
      1  | oklahoma | colorado | Utah     |
      2  | arkansas | colorado | oklahoma |
      3  | florida  | michigan | florida  |
      -------------------------------------
    The following code will give us row 1 and row 2: select * from A where Col1 like '%klah%' or Col2 like '%klah%' or Col3 like '%klah%' This is rather ugly, probably slow, and I just don't like it very much, probably because the implementations that I'm dealing with have 10+ columns that need to be searched. The following may be a slight improvement as far as code readability goes, but as far as performance, we're still in the same ball park: select * from A where (Col1 + ' ' + Col2 + ' ' + Col3) like '%klah%' I have thought about simply adding insert, update, and delete triggers that add the concatenated version of the above columns into a separate table that shadows this table. Sample Shadow_Table:
      ID | searchtext                  |
      ---------------------------------
      1  | oklahoma colorado Utah      |
      2  | arkansas colorado oklahoma  |
      3  | florida michigan florida    |
      ---------------------------------
    This would allow us to perform the following query to search for '%klah%': select * from Shadow_Table where searchtext like '%klah%' I really don't like having to remember that this shadow table exists and that I'm supposed to use it when I am performing multi-column substring matching, but it probably yields pretty quick reads at the expense of write and storage space. My gut feeling tells me that there is an existing solution built into SQL Server 2008. However, I don't seem to be able to find anything other than research papers on the subject. Any help would be appreciated.

    Read the article

  • excel graphs using perl

    - by user1822725
    I am facing a problem: when I run the script it gives an error like Can't locate object method "add_chart" via package "Spreadsheet::WriteExcel" at chart_column.pl line 33. Can anyone tell me what the problem is here? I am using perl v5.8.5 built for x86_64-linux. #!/usr/bin/perl -w ############################################################################### # # A simple demo of Column charts in Spreadsheet::WriteExcel. # # reverse('©'), December 2009, John McNamara, [email protected] # use strict; use Spreadsheet::WriteExcel; my $workbook = Spreadsheet::WriteExcel->new( 'chart_column.xls' ); my $worksheet = $workbook->add_worksheet(); my $bold = $workbook->add_format( bold => 1 ); # Add the worksheet data that the charts will refer to. my $headings = [ 'Category', 'Values 1', 'Values 2' ]; my $data = [ [ 2, 3, 4, 5, 6, 7 ], [ 1, 4, 5, 2, 1, 5 ], [ 3, 6, 7, 5, 4, 3 ], ]; $worksheet->write( 'A1', $headings, $bold ); $worksheet->write( 'A2', $data ); ############################################################################### # # Example 1. A minimal chart. # my $chart1 = $workbook->add_chart( type => 'column' ); # Add values only. Use the default categories. $chart1->add_series( values => '=Sheet1!$B$2:$B$7' ); # Insert the chart into the main worksheet. $worksheet->insert_chart( 'E2', $chart1 );

    Read the article

  • php convert european datetime to mysql datetime

    - by Mathlight
    I'm really stuck on this problem. I've got a datetime string like this: 28-06-14 11:01:00 and I'm trying to convert it to 2014-06-28 11:01:00 so that I can insert it into the database (the field type is datetime). I've tried multiple things like this: $datumHolder = new DateTime($data['datum'], new DateTimeZone('Europe/Amsterdam')); $datum1 = $datumHolder -> format("Y-m-d H:i:s"); $datum2 = date( 'Y-m-d', strtotime(str_replace('-', '/', $data['datum']) ) ); $datum3 = DateTime::createFromFormat( 'Y-m-d-:Hi:s', $data['datum']); This is the output I get: datum1: 2028-06-14 11:01:00 datum2: 1970-01-01 And I get an error for datum3: echo "datum3: " . $datum3->format( 'Y-m-d H:i:s'); . '<br />'; Call to a member function format() on a non-object So my question is very clear: what am I doing wrong, and how do I get this working? Thanks in advance guys! I know this question has been asked many, many times, but whatever I try, I can't get it working...
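
    The question is PHP, but the underlying fix is the same in any language: parse the string with the format it actually has (two-digit day, month, two-digit year), then print it back out in MySQL's YYYY-MM-DD HH:MM:SS form; in PHP terms that means giving DateTime::createFromFormat the pattern 'd-m-y H:i:s' rather than 'Y-m-d-:Hi:s'. An illustrative Python sketch:

      # Parse the European day-month-year form, emit the MySQL datetime form.
      from datetime import datetime

      raw = "28-06-14 11:01:00"
      parsed = datetime.strptime(raw, "%d-%m-%y %H:%M:%S")
      print(parsed.strftime("%Y-%m-%d %H:%M:%S"))   # 2014-06-28 11:01:00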

    Read the article

  • import csv file/excel into sql database asp.net

    - by kiev
    Hi everyone! I am starting a project with ASP.NET in Visual Studio 2008 / SQL Server 2000 (2005 in future) using C#. The tricky part for me is that the existing DB schema changes often, and the import file's columns will all have to be matched up with the existing DB schema, since they may not be a one-to-one match on column names. (There is a lookup table that provides the table schema with the column names I will use.) I am exploring different ways to approach this and need some expert advice. Are there any existing controls or frameworks that I can leverage to do any of this? So far I have explored the .NET FileUpload control, as well as some 3rd-party upload controls such as SlickUpload to handle the upload, but the uploaded files should be < 500 MB. The next part is reading my CSV/Excel file and parsing it for display to the user so they can match it with our DB schema. I saw CSVReader and others, but for Excel it's more difficult since I will need to support different versions. Essentially, the user performing this import will insert and/or update several tables from this import file. There are other, more advanced requirements like record matching and a preview of the import records, but I want to understand how to do this part first. Update: I ended up using CsvReader from LumenWorks.Framework for the CSV files.
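
    A minimal sketch of the column-mapping step, shown in Python with made-up column names (nothing here comes from the actual schema or lookup table): read the file's header row, then translate each record through a file-column to database-column mapping before building the insert or update.

      # Sketch only: map file columns to DB columns via a lookup dictionary.
      import csv
      import io

      column_map = {"Cust Name": "customer_name", "Zip": "postal_code"}   # from the lookup table

      sample = "Cust Name,Zip\nAda Lovelace,12345\n"
      reader = csv.DictReader(io.StringIO(sample))
      for row in reader:
          mapped = {db_col: row[file_col] for file_col, db_col in column_map.items()}
          print(mapped)    # e.g. {'customer_name': 'Ada Lovelace', 'postal_code': '12345'}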

    Read the article

  • How do I rescue a small portion of data from a SQL Server database backup?

    - by Greg
    I have a live database that had some data deleted from it and I need that data back. I have a very recent copy of that database that has already been restored on another machine. Unrelated changes have been made to the live database since the backup, so I do not want to wipe out the live database with a full restore. The data I need is small - just a dozen rows - but those dozen rows each have a couple rows from other tables with foreign keys to it, and those couple rows have god knows how many rows with foreign keys pointing to them, so it would be complicated to restore by hand. Ideally I'd be able to tell the backup copy of the database to select the dozen rows I need, and the transitive closure of everything that they depend on, and everything that depends on them, and export just that data, which I can then import into the live database without touching anything else. What's the best approach to take here? Thanks. Everyone has mentioned sp_generate_inserts. When using this, how do you prevent Identity columns from messing everything up? Do you just turn IDENTITY INSERT on?

    Read the article

  • template files evaluation in python

    - by saminny
    I am trying to use python for translating a set of templates to a set of configuration files based on values taken from a main configuration file. However, I am having certain issues. Consider the following example of a template file. file1.cfg.template %(CLIENT1)s %(HOST1)s %(PORT1)d C %(COMPID1)s %(CLIENT2)s %(HOST2)s %(PORT2)d C %(COMPID2)s This file contains an entry for each client. There are hundreds of config files like this and I don't want to have logic for each type of config file. Python should do the replacements and generate config files automatically given a set of global values read from a main xml config file. However, in the above example, if CLIENT2 does not exist, how do I delete that line? I expect Python would generate the config file using something like this: os.open("file1.cfg.template").read() % myhash where myhash is hash of configuration parameters from the main config file which may not contain CLIENT2 at all. In the case it does not contain CLIENT2, I want that line to disappear from the file. Is it possible to insert some 'IF' block in the file and have python evaluate it? Thanks for your help. Any suggestions most welcome.

    Read the article
