Search Results

Search found 32130 results on 1286 pages for 'local search'.


  • SQL Server INSERT ... SELECT Statement won't parse

    - by Jim Barnett
    I am getting the following error message with SQL Server 2005:

        Msg 120, Level 15, State 1, Procedure usp_AttributeActivitiesForDateRange, Line 18
        The select list for the INSERT statement contains fewer items than the insert list. The number of SELECT values must match the number of INSERT columns.

    I have copied and pasted the select list and insert list into Excel and verified there are the same number of items in each list. Both tables have an additional primary key field which is not listed in either the insert statement or the select list. I am not sure if that is relevant, but suspect it may be. Here is the source for my stored procedure:

        CREATE PROCEDURE [dbo].[usp_AttributeActivitiesForDateRange]
        (
            @dtmFrom DATETIME,
            @dtmTo DATETIME
        )
        AS
        BEGIN
            SET NOCOUNT ON;

            DECLARE @dtmToWithTime DATETIME
            SET @dtmToWithTime = DATEADD(hh, 23, DATEADD(mi, 59, DATEADD(s, 59, @dtmTo)));

            -- Get uncontested DC activities
            INSERT INTO AttributedDoubleClickActivities ([Time], [User-ID], [IP], [Advertiser-ID], [Buy-ID], [Ad-ID], [Ad-Jumpto], [Creative-ID], [Creative-Version], [Creative-Size-ID], [Site-ID], [Page-ID], [Country-ID], [State Province], [Areacode], [OS-ID], [Domain-ID], [Keyword], [Local-User-ID], [Activity-Type], [Activity-Sub-Type], [Quantity], [Revenue], [Transaction-ID], [Other-Data], Ordinal, [Click-Time], [Event-ID])
            SELECT [Time], [User-ID], [IP], [Advertiser-ID], [Buy-ID], [Ad-ID], [Ad-Jumpto], [Creative-ID], [Creative-Version], [Creative-Size-ID], [Site-ID], [Page-ID], [Country-ID], [State Province], [Areacode], [OS-ID], [Domain-ID], [Keyword], [Local-User-ID] [Activity-Type], [Activity-Sub-Type], [Quantity], [Revenue], [Transaction-ID], [Other-Data], REPLACE(Ordinal, '?', '') AS Ordinal, [Click-Time], [Event-ID]
            FROM Activity_Reports
            WHERE [Time] BETWEEN @dtmFrom AND @dtmTo
              AND REPLACE(Ordinal, '?', '') IN (SELECT REPLACE(Ordinal, '?', '')
                                                FROM Activity_Reports
                                                WHERE [Time] BETWEEN @dtmFrom AND @dtmTo
                                                EXCEPT
                                                SELECT CONVERT(VARCHAR, TripID)
                                                FROM VisualSciencesActivities
                                                WHERE [Time] BETWEEN @dtmFrom AND @dtmTo);
        END
        GO

    Read the article

  • Extract a specific string from a curl'd result

    - by allentown
    Given this curl command:

        curl --user-agent "fogent" --silent -o page.html "http://www.google.com/search?q=insansiate"

    (Spelling is intentionally incorrect.) I want to grab the suggestion as my result. I want to be able to either grep into the page.html file, perhaps with grep -oE, or pipe it right from curl and never store a file. The result should be: 'instantiate'. I need only the word 'instantiate', or the phrase, whatever Google is auto-correcting, is what I am after. Here is the basic HTML that is returned:

        <span class=spell style="color:#cc0000">Did you mean: </span><a href="/search?hl=en&amp;ie=UTF-8&amp;&amp;sa=X&amp;ei=VEMUTMDqGoOINraK3NwL&amp;ved=0CB0QBSgA&amp;q=instantiate&amp;spell=1"class=spell><b><i>instantiate</i></b></a>&nbsp;&nbsp;<span class=std>Top 2 results shown</span>

    So perhaps from/to of the string below, which I hope is unique enough to cover all my bases:

        class=spell><b><i>instantiate</i></b></a>&nbsp;&nbsp;

    I keep running into issues with greedy grep; perhaps I should run it through an HTML prettify tool first to get a line break or 50 in there. I don't know of any simple way to do so in bash, which is what I would ideally like this to be in. I really don't want to deal with firing up perl, and making sure I have the correct module. Any suggestions, thank you?
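
    A minimal sketch of just the extraction step, in Python rather than the bash the asker prefers, purely to illustrate the non-greedy from/to match; it assumes the page was already saved to page.html and that the markup keeps the class=spell anchor shown above. The file name and pattern are illustrative only.

        import re

        # Read the saved results page (assumes curl already wrote page.html).
        with open("page.html", encoding="utf-8", errors="replace") as f:
            html = f.read()

        # Non-greedy match on the spelling-suggestion anchor from the snippet above.
        # Google's markup can change, so treat this pattern as an assumption.
        match = re.search(r'class=spell><b><i>(.*?)</i></b>', html)
        if match:
            print(match.group(1))  # expected: instantiate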

    Read the article

  • Impersonate SYSTEM (or equivalent) from Administrator Account

    - by KevenK
    This question is a follow up and continuation of this question about a Privilege problem I'm dealing with currently. Problem Summary: I'm running a program under a Domain Administrator account that does not have Debug programs (SeDebugPrivilege) privilege, but I need it on the local machine. Klugey Solution: The program can install itself as a service on the local machine, and start the service. Said service now runs under the SYSTEM account, which enables us to use our SeTCBPrivilege privilege to create a new access token which does have SeDebugPrivilege. We can then use the newly created token to re-launch the initial program with the elevated rights. I personally do not like this solution. I feel it should be possible to acquire the necessary privileges as an Administrator without having to make system modifications such as installing a service (even if it is only temporary). I am hoping that there is a solution that minimizes system modifications and can preferably be done on the fly (ie: Not require restarting itself). I have unsuccessfully tried to LogonUser as SYSTEM and tried to OpenProcessToken on a known SYSTEM process (such as csrss.exe) (which fails, because you cannot OpenProcess with PROCESS_TOKEN_QUERY to get a handle to the process without the privileges I'm trying to acquire). I'm just at my wit's end trying to come up with an alternative solution to this problem. I was hoping there was an easy way to grab a privileged token on the host machine and impersonate it for this program, but I haven't found a way. If anyone knows of a way around this, or even has suggestions on things that might work, please let me know. I really appreciate the help, thanks!

    Read the article

  • Rails rake test returns an error message

    - by eakkas
    I am a Rails newbie and receive the following message when I run rake test. This is an application based on the Rails Community Engine. I tried creating a test application just to make sure that my gems etc. are fine, and I am able to run rake test successfully in that application. It would be great if someone could shed some light on what is going wrong...

        /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/whiny_nil.rb:52:in `method_missing': undefined method `merge' for nil:NilClass (NoMethodError)
        from /home/eakkas/NetBeansProjects/hello_ce/vendor/plugins/community_engine/app/controllers/users_controller.rb:17
        from /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
        from /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:31:in `require'
        from /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:158:in `require_without_desert'
        from /usr/lib/ruby/gems/1.8/gems/desert-0.5.3/lib/desert/ruby/object.rb:8:in `require'
        from /usr/lib/ruby/gems/1.8/gems/desert-0.5.3/lib/desert/ruby/object.rb:32:in `__each_matching_file'
        from /usr/lib/ruby/gems/1.8/gems/desert-0.5.3/lib/desert/ruby/object.rb:7:in `require'
        from /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:265:in `require_or_load'
        from /usr/lib/ruby/gems/1.8/gems/desert-0.5.3/lib/desert/rails/dependencies.rb:27:in `depend_on'
        from /usr/lib/ruby/gems/1.8/gems/desert-0.5.3/lib/desert/rails/dependencies.rb:26:in `each'
        from /usr/lib/ruby/gems/1.8/gems/desert-0.5.3/lib/desert/rails/dependencies.rb:26:in `depend_on'
        from /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:136:in `require_dependency'
        from /usr/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/initializer.rb:414:in `load_application_classes'
        from /usr/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/initializer.rb:413:in `each'
        from /usr/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/initializer.rb:413:in `load_application_classes'
        from /usr/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/initializer.rb:411:in `each'
        from /usr/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/initializer.rb:411:in `load_application_classes'
        from /usr/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/initializer.rb:197:in `process'
        from /usr/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/initializer.rb:113:in `send'
        from /usr/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/initializer.rb:113:in `run'

    Read the article

  • Spring 2.0.0/2.0.6 to 3.0.5 migration stories

    - by Pangea
    We are in the process of migrating to Spring 3.0.5 from 2.0.x. We mainly use Spring in the scenarios below:
      - custom scope: thread local scope
      - persistence: jdbc+hibernate 3.6 (but moving to a mix of ejb 3.0+jpa 2.0+hibernate, not sure if all 3 can co-exist in 1 app)
      - transactions: local (but planning to use jta due to the necessity of using multiple persistence units, and has to use ejb+jpa+hibernate in 1 single trans), declarative trans mgmt
      - parent-child contexts
      - cxf
      - annotations+xml
      - OracleLobHandler
      - Resource/ResourceBundleMessageResource
      - JSF/Facelets with FacesSpringVariableResolver
      - ActiveMQ integration
      - Quartz integration
      - TaskExecutor
      - JMX exporter
      - HttpExporter/Invoker
    I would appreciate it if someone could share their experiences, such as:
      - what to watch out for
      - headaches/pain points
      - which ones to drop for better alternate choices in the new 3.0.5 release
      - Is it better to switch from the commons/iscreen validator to Hibernate Validator (spec impl) or Spring Validator?
      - Is there a bean mapping framework in Spring that I can use instead of Dozer?
      - XSLT transformation helper: currently we have a small homegrown framework to cache XSLTs during load. If Spring can do that for me then I would like to drop this.
      - Encryption/decryption support. Password generation support. Authentication with SALT. Any SAML (or claims based secur
      - New ideas / suggestions
      - Switch to the latest version of AspectJ
      - Upgrade guide from 2.5 to 3.0.5

    Read the article

  • How to load files in Python

    - by Alvaro
    I'm fairly new to Python and would like some help on properly loading separate files. My code's purpose is to open a given file and search for customers in that file by state or state abbreviation. However, I have a separate function to open a separate file where I have (name of state):(state abbreviation). Thanks.

        def file_state_search(fileid, state):
            z=0
            indx = 0
            while z<25:
                line=fileid.readline()
                data_list = ("Name:", "Address:", "City:", "State:", "Zipcode:")
                line_split = line.split(":")
                if state in line:
                    while indx<5:
                        print data_list[indx], line_split[indx]
                        indx = indx + 1
                elif state not in line:
                    z = z + 1

        def state_convert(fileid, state):
            line2=in_file2.readline()
            while state in line2:
                print line2

        x=1
        while x==1:
            print "Choose an option:"
            print
            print "Option '1': Search Record By State"
            print
            option = raw_input("Enter an option:")
            print
            if option == "1":
                state = raw_input("Enter A State:")
                in_file = open("AdrData.txt", 'r')
                line=in_file.readline()
                print
                in_file2 = open("States.txt", 'r')
                line2=in_file2.readline()
                converted_state = state_convert(in_file2, state)
                print converted_state
                state_find = file_state_search(in_file, state)
                print state_find
            x=raw_input("Enter '1' to continue, Enter '2' to stop: ")
            x=int(x)

    By the way, my first import statement works; for whatever reason my second one doesn't. Edit: My question is, what am I doing wrong in my state_convert function?
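
    Not the poster's code, but a minimal sketch of the state/abbreviation lookup, assuming States.txt holds one "name:abbreviation" pair per line as described above; the file name and format are assumptions.

        def load_state_abbreviations(path="States.txt"):
            # Build a dict like {"Texas": "TX"} from lines of the form "Texas:TX".
            abbreviations = {}
            with open(path) as fileid:
                for line in fileid:
                    if ":" in line:
                        name, abbrev = line.strip().split(":", 1)
                        abbreviations[name.strip()] = abbrev.strip()
            return abbreviations

        states = load_state_abbreviations()
        print(states.get("Texas"))  # -> "TX" if that pair exists in the file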

    Read the article

  • Filter large amounts of data in a table w/ jQuery

    - by Bry4n
    I work for a transit agency and I have large amounts of data (mostly times), and I need a way to filter the data using two textboxes (To and From). I found jQuery quick search, but it seems to only work with one textbox. If anyone has any ideas via jQuery or some other client-side library, that would be fantastic. Ideal example:

        To: [Textbox]  From: [Textbox]

        <table>
          <tr>
            <td>69th street</td><td>5:00pm</td><td>5:06pm</td><td>5:10pm</td><td>5:20pm</td>
          </tr>
          <tr>
            <td>Millbourne</td><td>5:09pm</td><td>5:15pm</td><td>5:20pm</td><td>5:25pm</td>
          </tr>
          <tr>
            <td>Spring Garden</td><td>6:00pm</td><td>6:15pm</td><td>6:20pm</td><td>6:25pm</td>
          </tr>
        </table>

    So if I start typing one of the stations in the To: textbox, it either displays dynamically like the quick search or I have to press a button (either or), and then in the From: textbox. Lastly it shows me the To: station and all its times on the left and the From: station and all its times on the right.

    Read the article

  • Parallel.ForEach loop creating multiple db connections throws connection errors?

    - by shawn.mek
        Login failed. The login is from an untrusted domain and cannot be used with Windows authentication

    I wanted to get my code running in parallel, so I changed my foreach loop to a parallel foreach loop. It seemed simple enough. Each loop connects to the database, looks up some stuff, performs some logic, adds some stuff, closes the connection. But I get the above error. I'm using my local SQL Server and Entity Framework (each loop uses its own context). Is there some problem with connecting multiple times using the same local login or something? How do I get around this? I have (before trying to convert to a Parallel.ForEach loop) split my list of objects that I am foreach looping through into four groups (separate csv files) and run four concurrent instances of my program (which ran faster overall than just one, thus the idea for parallel). So it seems connecting to the db shouldn't be a problem? Any ideas?

    EDIT: Here's before:

        var gtgGenerator = new CustomGtgGenerator();
        var connectionString = ConfigurationManager.ConnectionStrings["BioEntities"].ConnectionString;
        var allAccessionsFromObs = _GetAccessionListFromDataFiles(collectionId);
        ForEach(cloneIdAndAccessions in allAccessionsFromObs) DoWork(gtgGenerator, taxonId, organismId, cloneIdAndAccessions, connectionString));

    and after:

        var gtgGenerator = new CustomGtgGenerator();
        var connectionString = ConfigurationManager.ConnectionStrings["BioEntities"].ConnectionString;
        var allAccessionsFromObs = _GetAccessionListFromDataFiles(collectionId);
        Parallel.ForEach(allAccessionsFromObs, cloneIdAndAccessions => DoWork(gtgGenerator, taxonId, organismId, cloneIdAndAccessions, connectionString));

    Inside DoWork I use the BioEntities:

        using (var bioEntities = new BioEntities(connectionString)) {...}

    Read the article

  • Lucene setBoost doesn't work

    - by Keven
    Hi all, our team just upgraded Lucene from 2.3 to 3.0 and we are confused about the setBoost and getBoost of Document. What we want is just to set a boost for each document when adding them to the index; then when searching, the documents in the response should come back in a different order according to the boost I set. But it seems the order is not changed at all, and even the boost of each document in the search response is still 1.0. Could someone give me a hint? Following is our code:

        String[] a = new String[] { "schindler", "spielberg", "shawshank", "solace", "sorcerer", "stone", "soap", "salesman", "save" };
        List strings = Arrays.asList(a);
        AutoCompleteIndex index = new Index();
        IndexWriter writer = new IndexWriter(index.getDirectory(), AnalyzerFactory.createAnalyzer("en_US"), true, MaxFieldLength.LIMITED);
        float i = 1f;
        for (String string : strings) {
            Document doc = new Document();
            Field f = new Field(AutoCompleteIndexFactory.QUERYTEXTFIELD, string, Field.Store.YES, Field.Index.NOT_ANALYZED);
            doc.setBoost(i);
            doc.add(f);
            writer.addDocument(doc);
            i += 2f;
        }
        writer.close();
        IndexReader reader2 = IndexReader.open(index.getDirectory());
        for (int j = 0; j < reader2.maxDoc(); j++) {
            if (reader2.isDeleted(j)) {
                continue;
            }
            Document doc = reader2.document(j);
            Field f = doc.getField(AutoCompleteIndexFactory.QUERYTEXTFIELD);
            System.out.println(f.stringValue() + ":" + f.getBoost() + ", docBoost:" + doc.getBoost());
            doc.setBoost(j);
        }

    Read the article

  • How can I create a "dynamic" WHERE clause?

    - by TheChange
    Hello there, First: Thanks! I finished my other project and the big surprise: now everything works as it should :-) Thanks to some helpful thinkers of SO! So here I go with the next project. I'd like to get something like this:

        SELECT * FROM tablename WHERE field1=content AND field2=content2 ...

    As you noticed, this can be a very long WHERE clause. tablename is a static property which does not change. field1, field2, ... (!) and the contents can change. So I need an option to build up a SQL statement in PL/SQL within a recursive function. I don't really know what to search for, so I ask here for links or even a word to search for. Please don't start to argue about whether the recursive function is really needed or what its disadvantages are - this is not in question ;-) If you could help me to create something like an SQL string which will later be able to do a successful SELECT, this would be very nice! I am able to go through the recursive function and make a longer string each time, but I cannot make an SQL statement from it. Oh, one additional thing: I get the fields and contents from an xmlType (xmldom.domdocument etc); I can get the field and the content, for example, in a clob from the xmltype.
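
    As a language-neutral illustration of the idea (Python here, not PL/SQL), one way to build the WHERE clause from field/value pairs while keeping the values in bind variables; the table name, fields, and :n placeholder style are assumptions.

        def build_select(table, criteria):
            # criteria is a dict such as {"field1": "content", "field2": "content2"}.
            clauses = []
            binds = []
            for position, (field, value) in enumerate(criteria.items(), start=1):
                clauses.append("%s = :%d" % (field, position))
                binds.append(value)
            sql = "SELECT * FROM %s WHERE %s" % (table, " AND ".join(clauses))
            return sql, binds

        sql, binds = build_select("tablename", {"field1": "content", "field2": "content2"})
        print(sql)    # SELECT * FROM tablename WHERE field1 = :1 AND field2 = :2
        print(binds)  # ['content', 'content2']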

    Read the article

  • Are programming languages and methods inefficient? (assembler and C knowledge needed)

    - by b-gen-jack-o-neill
    Hi, for a long time I have been thinking about and studying the output of the C language compiler in assembler form, as well as CPU architecture. I know this may be silly to you, but it seems to me that something is very ineffective. Please don't be angry if I am wrong and there is some reason I do not see for all these principles. I will be very glad if you tell me why it is designed this way. I actually truly believe I am wrong; I know the genius minds of the people who put PCs together knew a reason to do so. What exactly, do you ask? I'll tell you right away, using C as an example:

    1: Stack local scope memory allocation: Typical local memory allocation uses the stack. Just copy esp to ebp and then allocate all the memory via ebp. OK, I would understand this if you explicitly needed to allocate RAM by default stack values, but if I understand it correctly, modern OSes use paging as a translation layer between the application and physical RAM, where the address you desire is further translated before reaching the actual RAM byte. So why not just say 0x00000000 is int a, 0x00000004 is int b and so on? And access them just by mov 0x00000000,#10? Because you won't actually access memory blocks 0x00000000 and 0x00000004, but those your OS set the paging tables to. Actually, since memory allocation via ebp and esp uses indirect addressing, "my" way would be even faster.

    2: Variable allocation duplicity: When you run an application, the loader loads its code into RAM. When you create a variable, or a string, the compiler generates code that pushes these values on the top of the stack when created in main. So there is an actual instruction for doing so, and that actual number in memory. So there are 2 entries of the same value in RAM: one in the form of an instruction, the second in the form of actual bytes in RAM. But why? Why not just, when declaring the variable, work out at which memory block it would be, and then when it is used, just insert that memory location?

    Read the article

  • Help with PHP simplehtmldom - Modifying a form.

    - by onemyndseye
    I've gotten some great help here and I am so close to solving my problem that I can taste it. But I seem to be stuck. I need to scrape a simple form from a local webserver and only return the lines that match a user's local email (i.e. onemyndseye@localhost). simplehtmldom makes easy work of extracting the correct form element:

        foreach($html->find('form[action*="delete"]') as $form)
            echo $form;

    Returns:

        <form action="/delete" method="post">
        <input type="checkbox" id="D1" name="D1" /><a href="http://www.linux.com/rss/feeds.php"> http://www.linux.com/rss/feeds.php </a> [email: onemyndseye@localhost (Default) ]<br />
        <input type="checkbox" id="D2" name="D2" /><a href="http://www.ubuntu.com/rss.xml"> http://www.ubuntu.com/rss.xml </a> [email: onemyndseye@localhost (Default) ]<br />

    However, I am having trouble with the next step, which is returning lines that contain 'onemyndseye@localhost' and removing it, so that only the following is returned:

        <input type="checkbox" id="D1" name="D1" /><a href="http://www.linux.com/rss/feeds.php">http://www.linux.com/rss/feeds.php</a> <br />
        <input type="checkbox" id="D2" name="D2" /><a href="http://www.ubuntu.com/rss.xml">http://www.ubuntu.com/rss.xml</a> <br />

    Thanks to the wonderful users of this site I've gotten this far and can even return just the links, but I am having trouble getting the rest... It's important that the complete <input> tags are returned EXACTLY as shown above, as the id and name values will need to be passed back to the original form in post data later on. Thanks in advance!
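
    For comparison only, a sketch of the same filtering idea in Python with BeautifulSoup rather than simplehtmldom: keep each checkbox input and its link, drop the trailing email text. The markup is the sample above; note that attribute order in the reprinted tags may differ from the original.

        from bs4 import BeautifulSoup

        html = '''<form action="/delete" method="post">
        <input type="checkbox" id="D1" name="D1" /><a href="http://www.linux.com/rss/feeds.php"> http://www.linux.com/rss/feeds.php </a> [email: onemyndseye@localhost (Default) ]<br />
        <input type="checkbox" id="D2" name="D2" /><a href="http://www.ubuntu.com/rss.xml"> http://www.ubuntu.com/rss.xml </a> [email: onemyndseye@localhost (Default) ]<br />
        </form>'''

        soup = BeautifulSoup(html, "html.parser")
        for checkbox in soup.find_all("input", type="checkbox"):
            link = checkbox.find_next("a")  # the <a> immediately after the checkbox
            print(str(checkbox) + str(link) + "<br />")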

    Read the article

  • C++ destructors causing crashes

    - by larsonator
    OK, so I have a somewhat intricate program that simulates the uni systems of students, units, and students enrolling in units. Students are stored in a binary search tree; Units are stored in a standard list. Student has a list of Unit pointers, to store which units he/she is enrolled in. Unit has a list of Student pointers, to store students which are enrolled in that unit. The unit collection (storing units in a list) was made as a static variable where the main function is, as is the binary search tree of students. When it's finally time to close the program, I call the destructors of each. But at some stage during the destructors on the unit side:

        Unhandled exception at 0x002e4200 in ClassAllocation.exe: 0xC0000005: Access violation reading location 0x00000000.

    UnitCollection destructor:

        UnitCol::~UnitCol()
        {
            list<Unit>::iterator itr;
            for(itr = UnitCollection.begin(); itr != UnitCollection.end();)
            {
                UnitCollection.pop_front();
                itr = UnitCollection.begin();
            }
        }

    Unit destructor:

        Unit::~Unit()
        {
        }

    Now I get the same sort of problem on the student side of things. BST destructors:

        void StudentCol::Destructor(const BTreeNode * r)
        {
            if(r!= 0)
            {
                Destructor(r->getLChild());
                Destructor(r->getRChild());
                delete r;
            }
        }

        StudentCol::~StudentCol()
        {
            Destructor(root);
        }

    Student destructor:

        Student::~Student()
        {
        }

    So yeah, any help would be greatly appreciated.

    Read the article

  • Parsing a JSON feed from YQL using jQuery

    - by Keith
    I am using YQL's query.multi to grab multiple feeds so I can parse a single JSON feed with jQuery and reduce the number of connections I'm making. In order to parse a single feed, I need to be able to check the type of result (photo, item, entry, etc) so I can pull out items in specific ways. Because of the way the items are nested within the JSON feed, I'm not sure the best way to loop through the results and check the type and then loop through the items to display them. Here is a YQL (http://developer.yahoo.com/yql/console/) query.multi example and you can see three different result types (entry, photo, and item) and then the items nested within them: select * from query.multi where queries= "select * from twitter.user.timeline where id='twitter'; select * from flickr.photos.search where has_geo='true' and text='san francisco'; select * from delicious.feeds.popular" or here is the JSON feed itself: http://query.yahooapis.com/v1/public/yql?q=select%20*%20from%20query.multi%20where%20queries%3D%22select%20*%20from%20flickr.photos.search%20where%20user_id%3D'23433895%40N00'%3Bselect%20*%20from%20delicious.feeds%20where%20username%3D'keith.muth'%3Bselect%20*%20from%20twitter.user.timeline%20where%20id%3D'keithmuth'%22&format=json&env=store%3A%2F%2Fdatatables.org%2Falltableswithkeys&callback=
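
    A rough sketch (Python, not jQuery) of one way to loop over a query.multi response and branch on the result type; it assumes the decoded feed exposes the types (entry, photo, item) as keys under query.results, which may not match the real nesting exactly, and the local file name is hypothetical.

        import json

        # Assume the JSON body from the YQL URL above was saved locally.
        with open("yql_response.json") as f:
            results = json.load(f)["query"]["results"]

        for result_type, items in results.items():   # e.g. "entry", "photo", "item"
            if not isinstance(items, list):           # a single result arrives as a dict
                items = [items]
            for item in items:
                if result_type == "photo":
                    print("flickr photo:", item.get("title"))
                elif result_type == "entry":
                    print("tweet:", item.get("title"))
                else:
                    print(result_type + ":", item)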

    Read the article

  • Validating Time & Date To Be At Least A Certain Amount Of Time In The Future

    - by MJH
    I've built a reservation form for a taxi company which works fine, but I'm having an issue with users making reservations that are due too soon in the future. Since the entire form is kind of long, I first want to make sure the user is not trying to make a reservation for less than an hour ahead of time, without them having to fill out the whole form. This is what I have come up with so far, but it's just not working:

        <?php
        //Set local time zone.
        date_default_timezone_set('America/New_York');
        //Get current date and time.
        $current_time = date('Y-m-d H:i:s');
        //Set reservation time variable
        $res_datetime = $_POST['res_datetime'];
        //Set event time.
        $event_time = strtotime($res_datetime);
        ?>
        <!doctype html>
        <html>
        <head>
        <meta charset="utf-8">
        <title>Check Date and Time</title>
        </head>
        <?php
        //Check to be sure reservation time is at least one hour in the future.
        if (($current_time - $event_time) <= (3600)) {
            echo "You must make a reservation at least one hour ahead of time.";
        }
        ?>
        <form name="datetime" action="" method="post">
        <input name="res_datetime" type="datetime-local" id="res_datetime">
        <input type="submit">
        </form>
        <body>
        </body>
        </html>

    How can I create a validation check to make sure the date and time of the reservation is at least one hour ahead of time?
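
    The core "at least one hour ahead" check, sketched in Python rather than PHP, comparing two parsed datetimes instead of mixing a formatted string with a timestamp; the input format string mirrors what a datetime-local field typically submits and is an assumption.

        from datetime import datetime, timedelta

        def at_least_one_hour_ahead(res_datetime, now=None):
            # datetime-local inputs usually post "YYYY-MM-DDTHH:MM" (assumed format).
            event_time = datetime.strptime(res_datetime, "%Y-%m-%dT%H:%M")
            now = now or datetime.now()
            return event_time - now >= timedelta(hours=1)

        print(at_least_one_hour_ahead("2014-06-01T18:30",
                                      now=datetime(2014, 6, 1, 17, 0)))  # True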

    Read the article

  • Which text margin does SWT Table use when drawing text?

    - by Zordid
    I got a relatively easy question - but I cannot find anything anywhere to answer it. I use a simple SWT table widget in my application that displays only text in the cells. I got an incremental search feature and want to highlight text snippets in all cells if they match. So when typing "a", all "a"s should be highlighted. To get this, I add an SWT.EraseItem listener to interfere with the background drawing. If the current cell's text contains the search string, I find the positions and calculate relative x-coordinates within the text using event.gc.stringExtent - easy. With that I just draw rectangles "behind" the occurrences. Now, there's a flaw in this. The table does not draw the text without a margin, so my x coordinate does not really match - it is slightly off by a few pixels! But how many?? Where do I retrieve the cell's text margins that table's own drawing will use? No clue. Cannot find anything. :-( Bonus question: the table's draw method also shortens text and adds "..." if it does not fit into the cell. Hmm. My occurrence finder takes the TableItem's text and thus also tries to mark occurrences that are actually not visible because they are consumed by the "...". How do I get the shortened text and not the "real" text within the EraseItem draw handler? Thanks!

    Read the article

  • Get JSON and output to a text file undecoded

    - by Gary
    Hi, I want to fetch a JSON script and write it to a txt file undecoded, exactly how it was originally. I do have a script that I use and am modifying, but I'm unsure what commands to use. This script decodes, which is what I want to avoid.

        //Get Age
        list($bstat,$bage,$bdata) = explode("\t",check_file('./advise/roadsnow.txt',60*2+15));
        //Test Age
        if ( $bage > $CacheMaxAge ) {
            //echo "The if statement evaluated to true so get new file and reset $bage";
            $bage="0";
            $file = file_get_contents('http://somesite.jsontxt');
            $out = (json_decode($file));
            $report = wordwrap($out->mainText, 100, "\n");
            //$valid = $out->validTo;
            //write the data to a text file called roadsnow.txt
            $myFile = "./advise/roadsnow.txt";
            $fh = fopen($myFile, 'w') or die("can't open file");
            $stringData = $report;
            fwrite($fh, $stringData);
        } else {
            //echo the test evaluated to false; file is not stale so read local cache
            //print "we are at the read local cache";
            $stringData = file_get_contents("./advise/roadsnow.txt");
        }
        // if/else is done carry on with processing
        //Format file
        $data = $stringData
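
    The same stale-cache idea sketched in Python rather than PHP: if the local copy is older than the allowed age, fetch the remote body and write it out verbatim with no decode step; the URL, path, and age threshold are placeholders taken from the snippet above.

        import os
        import time
        import urllib.request

        CACHE_PATH = "./advise/roadsnow.txt"    # placeholder path
        SOURCE_URL = "http://somesite.jsontxt"  # placeholder URL
        MAX_AGE = 60 * 2 + 15                   # seconds, mirroring the check above

        def cached_raw_json():
            fresh = (os.path.exists(CACHE_PATH)
                     and time.time() - os.path.getmtime(CACHE_PATH) < MAX_AGE)
            if not fresh:
                # Write the response body exactly as received -- no json decoding.
                raw = urllib.request.urlopen(SOURCE_URL).read()
                with open(CACHE_PATH, "wb") as f:
                    f.write(raw)
            with open(CACHE_PATH, "rb") as f:
                return f.read()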

    Read the article

  • Pull/Clone a svn repository into hg with new default branch name?

    - by TheLQ
    I'm forking a project's SVN repo and need to integrate it into my Mercurial repo. To keep things simple I have a local hgsubversion repo and a local hg repo. However, both the Mercurial and hgsubversion repos use default as their default branch name. My goal here is to put the original code and updates on one branch and my code on the default branch. However, I have yet to be able to do this.

        W:\programming\tcsite-svn-test>hg clone http://*HG_SITE*/hg .
        no changes found
        updating to branch default
        0 files updated, 0 files merged, 0 files removed, 0 files unresolved

        W:\programming\tcsite-svn-test>hg branch blizzard
        marked working directory as branch blizzard

        W:\programming\tcsite-svn-test>hg commit

        W:\programming\tcsite-svn-test>hg log
        changeset:   0:be13a9580df0
        branch:      blizzard
        tag:         tip
        user:        Leon Blakey <[email protected]>
        date:        Fri Jan 14 23:44:25 2011 -0500
        summary:     Created Blizzard Branch

        W:\programming\tcsite-svn-test>hg pull http://*SVN_SITE*/svn/
        pulling from http://*SVN_SITE*/svn/
        ....
        pulled 23 revisions (run 'hg update' to get a working copy)

        W:\programming\tcsite-svn-test>hg branch blizzard

        W:\programming\tcsite-svn-test>hg branches
        default                       23:93642a8890ab <------
        blizzard                       0:be13a9580df0

    Not surprisingly, hgsubversion puts pulled commits into the default branch when I really need them in the blizzard branch. From the docs, there is no way to rename the branch that a commit came from. Frustratingly, I can't even come up with a way to do it on a repo with only the hgsubversion repo being pulled from, nothing else. All commits are tied to that one branch no matter what. Are there any suggestions on how to pull changes from an SVN repo and rename the branch to something else?

    Read the article

  • Rails runner command not saving to cache

    - by mark
    Hi, I'm having a bit of a problem with a cron task generated by the Rails whenever plugin that should store remote data in the Rails cache for display. What I have is this schedule.rb:

        set :path, '/var/www/apps/tuexplore/current'
        every 1.hour do
          runner "Weather.cache_remote", :environment => :production
        end

    which calls this model:

        class Weather
          def self.cache_remote
            Rails.cache.write('weather_data', Net::HTTP.get_response(URI.parse(WEATHER_URL)).body)
          end
        end

    Calling whenever returns this:

        PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/home/deploy/.gem/ruby/1.8/bin
        0 * * * * /var/www/apps/tuexplore/current/script/runner -e production "Weather.cache_remote"

    This doesn't work. Calling the weather model method from a controller works fine, but I need to schedule it hourly. The cron task causes a "Cache write: weather_data" entry to appear in the production log, but the data isn't stored or output into the page. Additional information: I can log into the production console and run Weather.cache_remote, then read the data from the Rails cache. I'd be really appreciative if someone could point out the error of my ways. If further explanation is needed please ask. Thanks in advance for any pointers.

    Read the article

  • Problem with MANIFEST.MF in jar

    - by Atul
    Hi, I have created a jar in the following folder: /usr/local/bin/niidle.jar. And my MANIFEST.MF file is as follows:

        Manifest-Version: 1.0
        Main-Class: com.ensarm.niidle.web.scraper.NiidleScrapeManager
        Class-Path: hector-0.6.0-17.jar

    And I verified that this 'hector-0.6.0-17.jar' file is also present in the folder: /Projects/EnwelibDatedOct13/Niidle/lib/hector-0.6.0-17.jar. I don't want to give the full class-path name in the MANIFEST.MF file, because I have to run this jar on other machines, so I gave only the jar file name 'Class-Path=hector-0.6.0-17.jar' in the MANIFEST.MF file. In spite of mentioning the Class-Path in the MANIFEST.MF file, when I run this using the command:

        java -jar /usr/local/bin/niidle.jar arguments...

    it shows the error message:

        Exception in thread "main" java.lang.NoClassDefFoundError: me/prettyprint/hector/api/Serializer
            at com.ensarm.niidle.web.scraper.NiidleScrapeManager.main(NiidleScrapeManager.java:21)
        Caused by: java.lang.ClassNotFoundException: me.prettyprint.hector.api.Serializer
            at java.net.URLClassLoader$1.run(URLClassLoader.java:200)
            at java.security.AccessController.doPrivileged(Native Method)
            at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
            at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:252)
            at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:320)
            ... 1 more

    Please give me a solution for this error message.

    Read the article

  • Capistrano + Git + DreamHost

    - by Michael Sync
    Hello, I'm trying to deploy my Rails application by using Passenger and Capistrano on DreamHost. I'm using Git as version control and we bought an account from GitHub. I have installed all required gems, Passenger and Capistrano on my local machine, and I have cloned the repository of my project from GitHub to my local machine as well. According to DreamHost support, they have Passenger, Ruby, Rails, etc. on their server as well. I'm currently following this article http://github.com/guides/deploying-with-capistrano for my deployment. The following is my deploy.rb:

        default_run_options[:pty] = true
        ssh_options[:forward_agent] = true

        # be sure to change these
        set :user, 'gituser'
        set :domain, 'github.com'
        set :application, 'MyProjectOnGit'
        #[email protected]:MyProjectOnGit.git

        # the rest should be good
        set :repository, "[email protected]:MyProjectOnGit.git"
        set :deploy_to, "/ruby.michaelsync.net/"
        set :deploy_via, :remote_cache
        set :scm, 'git'
        set :branch, 'master'
        set :git_shallow_clone, 1
        set :scm_verbose, true
        set :use_sudo, false
        set :git_enable_submodules, 1

        server domain, :app, :web
        role :db, domain, :primary => true

        set :ssh_options, { :forward_agent => true }

        namespace :deploy do
          task :restart do
            run "touch #{current_path}/tmp/restart.txt"
          end
        end

    When I run "cap deploy", I'm getting the error below:

        [deploy:update_code] exception while rolling back: Capistrano::ConnectionError, connection failed for: github.com (Net::SSH::AuthenticationFailed: gituser)
        connection failed for: github.com (Net::SSH::AuthenticationFailed: gituser)

    Thanks in advance.

    Read the article

  • iPhone: Does it ever make sense for an object to retain its delegate?

    - by randombits
    According to the rules of memory management in a non-garbage-collected world, one is not supposed to retain the calling object in a delegate. Scenario goes like this: I have a class that inherits from UITableViewController and contains a search bar. I run expensive search operations in a secondary thread. This is all done with an NSOperationQueue and subclassed NSOperation instances. I pass the controller as a delegate that adheres to a callback protocol into the NSOperation. There are edge cases when the application crashes because once an item is selected from the UITableViewController, I dismiss it and thus its retain count goes to 0 and dealloc gets invoked on it. The delegate didn't get to send its message in time, as the results are being passed at about the same time the dealloc happens. Should I design this differently? Should I call retain on my controller from the delegate to ensure it exists until the NSOperation itself is dealloc'd? Will this cause a memory leak? Right now if I put a retain on the controller, the crashes go away. I don't want to leak memory though, and need to understand if there are cases where retaining the delegate makes sense. Just to recap: UITableViewController creates an NSOperationQueue and an NSOperation that gets embedded into the queue. The UITableViewController passes itself as a delegate to the NSOperation. The NSOperation calls a method on the UITableViewController when it's ready. If I retain the UITableViewController, I guarantee it's there, but I'm not sure if I'm leaking memory. If I only use an assign property, edge cases occur where the UITableViewController gets dealloc'd and objc_msgSend() gets called on an object that doesn't exist in memory, and a crash is imminent.

    Read the article

  • JMeter CSV Data Set is corrupting Japanese strings stored as proper UTF-8, I get Question Marks instead

    - by Mark Bennett
    I read in search terms from a simple text file to send to a search engine. It works fine in English, but gives me ???? for any Japanese text. Text with mixed English and Japanese does show the English text, so I know it's reading it. What I'm seeing:

        Input text: Snow Leopard ???????????????
        Turns into: Snow Leopard ???????????????

    This is in the POST field of an HTTP request. If I set JMeter to encode the data, it just puts in the percent sequence for question marks. Interesting note: in the example above there are 15 Japanese characters, and then 15 question marks, so at some point it's being seen as full characters and not just bytes.

    About the data: the CSV file is very simple in structure. There's only one field / one column, which I name TERM, and later use as ${TERM}. I don't really need full CSV because it's only one string per line. There are no commas or quotes. When I run the Unix "file" command on the file, it says UTF-8 text. I've also verified it in command line and graphical mode on two machines.

    JMeter CSV Data Set Config:

        Filename: japanese-searches.csv
        File encoding: UTF-8 (also tried without)
        Variable names: TERM
        Delimiter: ,
        Allow Quoted Data: False (I also tried True, different, but still wrong)
        Recycle at EOF: True
        Stop at EOF: False
        Sharing mode: All threads

    A few things I've tried:
      - Tried Allow Quoted Data. It changed to other strange characters.
      - -Dfile.encoding=UTF-8
      - Tried encoding the POST, but it just turned into a bunch of %nn for question marks.

    And I'm not sure how to debug just after each line of the CSV is read in. I think it's corrupted right away, but I'm not sure. If it's only mangled when I reference it, then instead of ${TERM} perhaps there's some other "to bytes" function call. I'll start checking into that. I haven't done anything with the JMeter functions yet.

    Read the article

  • Can this extension method be improved?

    - by Newbie
    I have the following extension method:

        public static class ListExtensions
        {
            public static IEnumerable<T> Search<T>(this ICollection<T> collection, string stringToSearch)
            {
                foreach (T t in collection)
                {
                    Type k = t.GetType();
                    PropertyInfo pi = k.GetProperty("Name");
                    if (pi.GetValue(t, null).Equals(stringToSearch))
                    {
                        yield return t;
                    }
                }
            }
        }

    What it does is, using reflection, find the Name property and then filter the records from the collection based on the matching string. This method is being called as:

        List<FactorClass> listFC = new List<FactorClass>();
        listFC.Add(new FactorClass { Name = "BKP", FactorValue="Book to price", IsGlobal =false });
        listFC.Add(new FactorClass { Name = "YLD", FactorValue = "Dividend yield", IsGlobal = false });
        listFC.Add(new FactorClass { Name = "EPM", FactorValue = "emp", IsGlobal = false });
        listFC.Add(new FactorClass { Name = "SE", FactorValue = "something else", IsGlobal = false });

        List<FactorClass> listFC1 = listFC.Search("BKP").ToList();

    It is working fine. But a closer look at the extension method will reveal that

        Type k = t.GetType();
        PropertyInfo pi = k.GetProperty("Name");

    is inside a foreach loop, which is not needed. I think we can take it outside the loop. But how? Please help. (C# 3.0)
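
    A rough Python analogue of the suggested change, not C#: resolve the property accessor once, before the loop, instead of per element; the class and field names below are just stand-ins for the FactorClass example.

        from operator import attrgetter

        def search(collection, string_to_search, property_name="Name"):
            get_name = attrgetter(property_name)   # resolved once, outside the loop
            for item in collection:
                if get_name(item) == string_to_search:
                    yield item

        class FactorClass:
            def __init__(self, Name, FactorValue, IsGlobal=False):
                self.Name, self.FactorValue, self.IsGlobal = Name, FactorValue, IsGlobal

        factors = [FactorClass("BKP", "Book to price"), FactorClass("YLD", "Dividend yield")]
        print([f.Name for f in search(factors, "BKP")])   # ['BKP']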

    Read the article

  • lightweight/portable VCS for server-hopping DBA?

    - by Aaron
    I'm looking for a VCS that'll help me keep all of my work scripts in sync. Requirements:
      - Portable (as in flash drive, not code-level)
      - Runs on Windows XP and Server 2003+
      - No installation dependencies (Cygwin, perl, Python)
    I use Mercurial on my work machine for version control of the various T-SQL, ksh, perl, and CMD/BAT scripts that I maintain as a MS SQL Server DBA and Unix sysadmin. So far, hg has worked for my AIX boxes - I mount my home directory as I log in, and deal with the repo as if it were local. I haven't been able to find a similar solution for the Windows machines I use. On most of them I do not have Local Admin rights; even if I did, I'd rather not install (and maintain) Python + Mercurial on all of them. I can't get to my home directory on them remotely, which leaves a client running on each machine as the only option. Bonus points for an answer that would let me use a single repo for both the Windows and Unix machines. :) I'm running WinXP, with heavy use of Cygwin and a CrunchBang VM.

    Read the article
