Search Results

Search found 6905 results on 277 pages for 'fork join'.

  • Matching a Repeating Sub Series using a Regular Expression with PowerShell

    - by Hinch
    I have a text file that lists the names of a large number of Excel spreadsheets, and the
    names of the files that are linked to from the spreadsheets. In simplified form it looks
    like this:

        "Parent File1.xls"
        Link: ChildFileA.xls
        Link: ChildFileB.xls
        "ParentFile2.xls"
        "ParentFile3.xls"
        Blah
        Link: ChildFileC.xls
        Link: ChildFileD.xls
        More Junk
        Link: ChildFileE.xls
        "Parent File4.xls"
        Link: ChildFileF.xls

    In this example, ParentFile1.xls has embedded links to ChildFileA.xls and ChildFileB.xls,
    ParentFile2.xls has no embedded links, and ParentFile3.xls has 3 embedded links. I am
    trying to write a regular expression in PowerShell that will parse the text file,
    producing output in the following form:

        ParentFile1.xls:ChildFileA.xls,ChildFileB.xls
        ParentFile3.xls:ChildFileC.xls,ChildFileD.xls,ChildFileE.xls

    and so on. The task is complicated by the fact that the text file contains a lot of junk
    between the lines, and a parent may not always have a child. Furthermore, a single file
    name may span multiple lines. However, it's not as bad as it sounds, as the parent and
    child file names are always clearly demarcated (the parent with quotes and the child with
    a prefix of "Link: "). The PowerShell code I've been using is as follows:

        $content = [string]::Join([environment]::NewLine, (Get-Content C:\Temp\text.txt))
        $regex = [regex]'(?im)\s*\"(.*)\r?\n?\s*(.*)\"[\s\S]*?Link: (.*)\r?\n?'
        $regex.Matches($content) | %{$_.Groups[1].Value + $_.Groups[2].Value + ":" + $_.Groups[3].Value}

    Using the example above, it outputs:

        ParentFile1.xls:ChildFileA.xls
        ParentFile2.xls""ParentFile3.xls:ChildFileC.xls
        ParentFile4.xls:ChildFileF.xls

    There are two issues. First, the "" is included instead of a newline whenever a parent
    without a child is processed. The second issue, which is the more important, is that only
    a single child is ever shown for each parent. I'm guessing I need to somehow capture the
    multiple child links that exist for each parent, but I'm totally stumped as to how to do
    this with a regular expression. Any help would be greatly appreciated. The file contains
    hundreds of thousands of lines, and manual processing is not an option :)
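
    Rather than fighting a single pattern, a line-oriented pass can accumulate children per
    parent; .NET does expose repeated captures via Group.Captures, but a state machine is
    easier to keep correct. A minimal sketch (same file path as above; parent names split
    across lines would still need extra handling):

        $parent = $null
        $children = @()
        switch -Regex -File 'C:\Temp\text.txt' {
            '"([^"]*)"' {
                # a quoted name starts a new parent; flush the previous one
                if ($parent -and $children.Count) { '{0}:{1}' -f $parent, ($children -join ',') }
                $parent = $Matches[1]
                $children = @()
            }
            '^Link: (.+)' { $children += $Matches[1] }
        }
        if ($parent -and $children.Count) { '{0}:{1}' -f $parent, ($children -join ',') }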

  • thread reaches end but isn't removed

    - by pstanton
    I create a bunch of threads to do some processing:

        new Thread("upd-" + id) {
            @Override
            public void run() {
                try {
                    doSomething();
                } catch (Throwable e) {
                    LOG.error("error", e);
                } finally {
                    LOG.debug("thread death");
                }
            }
        }.start();

    I know I should be using a thread pool, but I need to understand the following problem
    before I change it. I'm using Eclipse's debugger and looking at the threads in the debug
    pane, which lists active threads. Many of them complete as you would expect and are
    removed from the debug pane; however, some seem to stay in the list of active threads
    even though the log shows the "thread death" entry for them. When I attempt to debug
    these threads, they either do not pause for debugging or show an error dialog: "A timeout
    occurred while retrieving stack frames for thread: upd-...". There is some
    synchronization going on within the doSomething() call, but I'm fairly sure it's OK, and
    since the "thread death" log is being written I'm assuming these threads aren't
    deadlocked in that method. I don't do any Thread.join()s; I do call a third-party API,
    but I doubt they do either. Can anyone think of another reason these threads are
    lingering? Thanks.

    EDIT: I created this test to check the garbage collection theory:

        Thread thread = new Thread("!!!!!!!!!!!!!!!!") {
            @Override
            public void run() {
                System.out.println("running");
                ThreadUs.sleepQuiet(5000);
                System.out.println("finished"); // <-- thread removed from list here
            }
        };
        thread.start();
        ThreadUs.sleepQuiet(10000);
        System.out.println(thread.isAlive()); // <-- thread already removed from list but hasn't been GC'd
        ThreadUs.sleepQuiet(10000);

    This proves that it is nothing to do with garbage collection: Eclipse removes the thread
    from the thread list as soon as it completes and isn't waiting for the object to be
    dereferenced/GC'd.
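
    For reference, the thread-pool equivalent of the snippet above is a small change; a
    sketch (pool size is a placeholder; doSomething() and LOG are the poster's symbols), not
    a fix for the lingering-thread mystery:

        // requires java.util.concurrent.ExecutorService and Executors
        ExecutorService pool = Executors.newFixedThreadPool(8);
        pool.submit(new Runnable() {
            public void run() {
                try {
                    doSomething();
                } catch (Throwable e) {
                    LOG.error("error", e);
                } finally {
                    LOG.debug("thread death");
                }
            }
        });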

  • How to get an X11 Window from a Process ID?

    - by Adam Pierce
    Under Linux, my C++ application is using fork() and execv() to launch multiple instances
    of OpenOffice so as to view some PowerPoint slide shows. This part works. Next I want to
    be able to move the OpenOffice windows to specific locations on the display. I can do
    that with the XMoveResizeWindow() function, but I need to find the Window for each
    instance. I have the process ID of each instance; how can I find the X11 Window from
    that?

    UPDATE - Thanks to Andy's suggestion, I have pulled this off. I'm posting the code here
    to share it with the Stack Overflow community. Unfortunately OpenOffice does not seem to
    set the _NET_WM_PID property, so this doesn't ultimately solve my problem, but it does
    answer the question.

        // Attempt to identify a window by name or attribute.
        // by Adam Pierce <[email protected]>

        #include <X11/Xlib.h>
        #include <X11/Xatom.h>
        #include <cstdlib>   // for atoi
        #include <iostream>
        #include <list>

        using namespace std;

        class WindowsMatchingPid
        {
        public:
            WindowsMatchingPid(Display *display, Window wRoot, unsigned long pid)
                : _display(display)
                , _pid(pid)
            {
                // Get the PID property atom.
                _atomPID = XInternAtom(display, "_NET_WM_PID", True);
                if (_atomPID == None)
                {
                    cout << "No such atom" << endl;
                    return;
                }
                search(wRoot);
            }

            const list<Window> &result() const { return _result; }

        private:
            unsigned long _pid;
            Atom          _atomPID;
            Display      *_display;
            list<Window>  _result;

            void search(Window w)
            {
                // Get the PID for the current window.
                Atom           type;
                int            format;
                unsigned long  nItems;
                unsigned long  bytesAfter;
                unsigned char *propPID = 0;
                if (Success == XGetWindowProperty(_display, w, _atomPID, 0, 1, False,
                                                  XA_CARDINAL, &type, &format, &nItems,
                                                  &bytesAfter, &propPID))
                {
                    if (propPID != 0)
                    {
                        // If the PID matches, add this window to the result set.
                        if (_pid == *((unsigned long *)propPID))
                            _result.push_back(w);
                        XFree(propPID);
                    }
                }

                // Recurse into child windows.
                Window    wRoot;
                Window    wParent;
                Window   *wChild;
                unsigned  nChildren;
                if (0 != XQueryTree(_display, w, &wRoot, &wParent, &wChild, &nChildren))
                {
                    for (unsigned i = 0; i < nChildren; i++)
                        search(wChild[i]);
                }
            }
        };

        int main(int argc, char **argv)
        {
            if (argc < 2)
                return 1;
            int pid = atoi(argv[1]);
            cout << "Searching for windows associated with PID " << pid << endl;

            // Start with the root window.
            Display *display = XOpenDisplay(0);
            WindowsMatchingPid match(display, XDefaultRootWindow(display), pid);

            // Print the result.
            const list<Window> &result = match.result();
            for (list<Window>::const_iterator it = result.begin(); it != result.end(); it++)
                cout << "Window #" << (unsigned long)(*it) << endl;

            return 0;
        }
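
    Assuming the listing is saved as pidwin.cpp (file name made up), it builds and runs with
    the X11 client library linked in:

        g++ -o pidwin pidwin.cpp -lX11
        ./pidwin 12345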

  • Are there some cases where Python threads can safely manipulate shared state?

    - by erikg
    Some discussion in another question has encouraged me to better understand cases where
    locking is required in multithreaded Python programs. Per this article on threading in
    Python, I have several solid, testable examples of pitfalls that can occur when multiple
    threads access shared state. The example race condition provided on this page involves
    races between threads reading and manipulating a shared variable stored in a dictionary.
    I think the case for a race there is very obvious, and fortunately it is eminently
    testable. However, I have been unable to evoke a race condition with atomic operations
    such as list appends or variable increments. This test exhaustively attempts to
    demonstrate such a race:

        from threading import Thread, Lock
        import operator

        def contains_all_ints(l, n):
            l.sort()
            for i in xrange(0, n):
                if l[i] != i:
                    return False
            return True

        def test(ntests):
            results = []
            threads = []

            def lockless_append(i):
                results.append(i)

            for i in xrange(0, ntests):
                threads.append(Thread(target=lockless_append, args=(i,)))
                threads[i].start()
            for i in xrange(0, ntests):
                threads[i].join()

            if len(results) != ntests or not contains_all_ints(results, ntests):
                return False
            else:
                return True

        for i in range(0, 100):
            if test(100000):
                print "OK", i
            else:
                print "appending to a list without locks *is* unsafe"
                exit()

    I have run the test above without failure (100x 100k multithreaded appends). Can anyone
    get it to fail? Is there another class of object which can be made to misbehave via
    atomic, incremental modification by threads? Do these implicitly 'atomic' semantics apply
    to other operations in Python? Is this directly related to the GIL?
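
    For what it's worth: in CPython, the append itself happens inside a single bytecode that
    calls into C while the GIL is held, whereas an in-place increment compiles to separate
    load/add/store bytecodes, and a thread can be preempted between them. A sketch of a test
    in the same style that does fail reliably under CPython 2:

        from threading import Thread

        counter = 0

        def bump(n):
            global counter
            for _ in xrange(n):
                counter += 1  # LOAD_GLOBAL, INPLACE_ADD, STORE_GLOBAL: not atomic

        threads = [Thread(target=bump, args=(100000,)) for _ in range(4)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        print counter  # almost always less than 400000: updates are lost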

  • Should this even be a has_many :through association?

    - by GoodGets
    A Post belongs_to a User, and a User has_many Posts. A Post also belongs_to a Topic, and
    a Topic has_many Posts.

        class User < ActiveRecord::Base
          has_many :posts
        end

        class Topic < ActiveRecord::Base
          has_many :posts
        end

        class Post < ActiveRecord::Base
          belongs_to :user
          belongs_to :topic
        end

    Well, that's pretty simple and very easy to set up, but when I display a Topic, I not
    only want all of the Posts for that Topic, but also the user_name and the user_photo of
    the User that made each Post. However, those attributes are stored in the User model and
    not tied to the Topic. So how would I go about setting that up? Maybe it can already be
    called, since the Post model has two foreign keys, one for the User and one for the
    Topic? Or maybe this is some sort of "one-way" has_many :through association: the Post
    would be the join model, and a Topic would has_many :users, :through => :posts. But the
    reverse of this is not true; a User does NOT has_many :topics. So would this even need to
    be a has_many :through association? I guess I'm just a little confused on what the
    controller would look like to call both the Post and the User of that Post for a given
    Topic.

    Edit: Seriously, thank you to all that weighed in. I chose tal's answer because I used
    his code for my controller; however, I could have just as easily chosen either j.'s or
    tim's instead. Thank you both as well. This was so damn simple to implement, and I think
    today marks the day that I'm beginning to fall in love with Rails.
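
    The accepted controller code isn't reproduced here, but the usual pattern for this is
    plain eager loading rather than has_many :through; a sketch in Rails 2-era syntax (action
    and names assumed):

        class TopicsController < ApplicationController
          def show
            @topic = Topic.find(params[:id])
            # :include eager-loads each post's author, so the view can read
            # post.user.user_name and post.user.user_photo without N+1 queries
            @posts = @topic.posts.find(:all, :include => :user)
          end
        end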

  • How to limit the number of records per page in a PDF

    - by udaya
    Hi, I am exporting data from a PHP page to a PDF; there I get any number of records on
    each page. How do I set the maximum number of records that a page can contain? I want
    only 20 records on a single page. This is the code I use to export the data to PDF; in
    mysql_table.php the table for the PDF document is generated:

        <?php
        require('mysql_table.php');

        class PDF extends PDF_MySQL_Table
        {
            function Header()
            {
                // Title
                $this->SetFont('Arial', '', 18);
                $this->Cell(0, 6, 'Country details', 0, 1, 'C');
                $this->Ln(10);
                parent::Header();
            }
        }

        // Connect to database
        mysql_connect('localhost', 'root', '');
        mysql_select_db('cms');

        $pdf = new PDF();
        $pdf->AddPage();
        // First table: put all columns automatically
        $pdf->Table("SELECT (SELECT COUNT(*) FROM tblentercountry t2
                              WHERE t2.dbName <= t1.dbName AND dbIsDelete='0') AS SLNO,
                            dbName AS Namee, t3.dbCountry AS Country,
                            t4.dbState AS State, t5.dbTown AS Town
                     FROM tblentercountry t1
                     JOIN tablecountry AS t3, tablestate AS t4, tabletown AS t5
                     WHERE t1.dbIsDelete='0'
                       AND t1.dbCountryId=t3.dbCountryId
                       AND t1.dbStateId=t4.dbStateId
                       AND t1.dbTownId=t5.dbTownId
                     ORDER BY dbName");
        $pdf->AddPage();
        // Second table: specify 3 columns
        $pdf->AddCol('rank', 20, '', 'C');
        $pdf->AddCol('name', 20, 'tablecountry');
        $pdf->AddCol('pop', 20, 'Pop (2001)', 'R');
        $prop = array('HeaderColor' => array(255, 150, 100),
                      'color1'      => array(210, 245, 255),
                      'color2'      => array(255, 255, 210),
                      'padding'     => 2);
        //$pdf->Table('select dbCountry,dbCountryId from tablecountry limit 0,10', $prop);
        $pdf->Output();
        ?>

    How do I limit the number of records on each page?
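
    The PDF_MySQL_Table helper appears to loop over the whole result set itself, so the
    simplest route may be to bypass it and page manually; a sketch (column name taken from
    the query above, cell layout simplified):

        $pdf = new PDF();
        $pdf->AddPage();
        $count = 0;
        $result = mysql_query("SELECT dbName FROM tblentercountry
                               WHERE dbIsDelete='0' ORDER BY dbName");
        while ($row = mysql_fetch_assoc($result)) {
            if ($count > 0 && $count % 20 == 0) {
                $pdf->AddPage();  // start a fresh page after every 20 records
            }
            $pdf->Cell(0, 6, $row['dbName'], 1, 1);
            $count++;
        }
        $pdf->Output();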

  • SQL Server 2005 stored procedure unexpected behaviour

    - by user283405
    I have written a simple stored procedure (run as a job) that checks users' subscribed
    keyword alerts. When an article is posted, the stored procedure sends email to those
    users whose subscribed keyword matches the article title. One section of my stored
    procedure is:

        OPEN @getInputBuffer
        FETCH NEXT FROM @getInputBuffer INTO @String
        WHILE @@FETCH_STATUS = 0
        BEGIN
            --PRINT @String
            INSERT INTO #Temp(ArticleID, UserID)
            SELECT A.ID, @UserID
            FROM CONTAINSTABLE(Question, (Text), @String) QQ
            JOIN Article A WITH (NOLOCK) ON A.ID = QQ.[Key]
            WHERE A.ID > @ArticleID

            FETCH NEXT FROM @getInputBuffer INTO @String
        END
        CLOSE @getInputBuffer
        DEALLOCATE @getInputBuffer

    This job runs every 5 minutes and checks the last 50 articles. It worked fine for the
    last 3 months, but a week ago it behaved unexpectedly. The problem is that it sent
    irrelevant results. The @String contains the user's alert keyword, which is matched
    against the latest articles using full-text search. The normal execution time is 3
    minutes, but while the problem lasted the execution time was 3 days. The current status
    is that it's working fine again, but we are unable to find any reason why it sent
    irrelevant results.

    Note: I am already removing noise words from the user alert keywords. I am using SQL
    Server 2005 Enterprise Edition.

  • Comparing two date ranges within the same table

    - by Danny Herran
    I have a table with sales per store as follows:

        SQL> select * from sales;

        ID  ID_STORE  DATE        TOTAL
        --  --------  ----------  -------
         1         1  2010-01-01   500.00
         2         1  2010-01-02   185.00
         3         1  2010-01-03   135.00
         4         1  2009-01-01   165.00
         5         1  2009-01-02   175.00
         6         5  2010-01-01   130.00
         7         5  2010-01-02   135.00
         8         5  2010-01-03   130.00
         9         6  2010-01-01   100.00
        10         6  2010-01-02    12.00
        11         6  2010-01-03    85.00
        12         6  2009-01-01   135.00
        13         6  2009-01-02   400.00
        14         6  2009-01-07    21.00
        15         6  2009-01-08    45.00
        16         8  2009-01-09   123.00
        17         8  2009-01-10   581.00

        17 rows selected.

    What I need to do is to compare two date ranges within that table. Let's say I need to
    know the differences in sales between 01 Jan 2009 to 10 Jan 2009 and 01 Jan 2010 to
    10 Jan 2010. I'd like to build a query that returns something like this:

        ID_STORE_A  DATE_A      TOTAL_A  ID_STORE_B  DATE_B      TOTAL_B
        ----------  ----------  -------  ----------  ----------  -------
        1           2010-01-01  500.00   1           2009-01-01  165.00
        1           2010-01-02  185.00   1           2009-01-02  175.00
        1           2010-01-03  135.00   1           NULL        NULL
        5           2010-01-01  130.00   5           NULL        NULL
        5           2010-01-02  135.00   5           NULL        NULL
        5           2010-01-03  130.00   5           NULL        NULL
        6           2010-01-01  100.00   6           2009-01-01  135.00
        6           2010-01-02  12.00    6           2009-01-02  400.00
        6           2010-01-03  85.00    6           NULL        NULL
        6           NULL        NULL     6           2009-01-07  21.00
        6           NULL        NULL     6           2009-01-08  45.00
        6           NULL        NULL     8           2009-01-09  123.00
        6           NULL        NULL     8           2009-01-10  581.00

    So, even if there are no sales in one range or the other, it should just fill the empty
    space with NULL. So far I've come up with this quick query, but the "dates" from sales to
    sales2 sometimes differ within a row:

        SELECT sales.*, sales2.*
        FROM sales
        LEFT JOIN sales AS sales2 ON (sales.id_store = sales2.id_store)
        WHERE sales.date >= '2010-01-01' AND sales.date <= '2010-01-10'
          AND sales2.date >= '2009-01-01' AND sales2.date <= '2009-01-10'
        ORDER BY sales.id_store ASC, sales.date ASC, sales2.date ASC

    What am I missing?
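
    What the desired output describes is a full outer join of the two ranges, aligned on
    store and same-day-one-year-later. A sketch (the SQL> prompt suggests Oracle, whose
    syntax is used here; "DATE" is quoted because it's a reserved word):

        SELECT a.id_store AS id_store_a, a."DATE" AS date_a, a.total AS total_a,
               b.id_store AS id_store_b, b."DATE" AS date_b, b.total AS total_b
        FROM (SELECT * FROM sales
              WHERE "DATE" BETWEEN DATE '2010-01-01' AND DATE '2010-01-10') a
        FULL OUTER JOIN
             (SELECT * FROM sales
              WHERE "DATE" BETWEEN DATE '2009-01-01' AND DATE '2009-01-10') b
          ON a.id_store = b.id_store
         AND a."DATE" = ADD_MONTHS(b."DATE", 12)   -- same day, one year apart
        ORDER BY COALESCE(a.id_store, b.id_store), a."DATE", b."DATE";

    Two things the original query cannot do: a LEFT JOIN never produces the NULL rows on the
    2010 side, and filtering sales2 in the WHERE clause silently turns the outer join into an
    inner one.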

  • Apply a recursive CTE on grouped table rows (SQL Server 2005)

    - by Evan V.
    Hi all, I have a table (ROOMUSAGE) containing the times people check in and out of rooms,
    grouped by PERSONKEY and ROOMKEY. It looks like this:

        PERSONKEY | ROOMKEY | CHECKIN         | CHECKOUT        | ROW
        ----------------------------------------------------------------
        1         | 8       | 13-4-2010 10:00 | 13-4-2010 11:00 | 1
        1         | 8       | 13-4-2010 08:00 | 13-4-2010 09:00 | 2
        1         | 1       | 13-4-2010 15:00 | 13-4-2010 16:00 | 1
        1         | 1       | 13-4-2010 14:00 | 13-4-2010 15:00 | 2
        1         | 1       | 13-4-2010 13:00 | 13-4-2010 14:00 | 3
        13        | 2       | 13-4-2010 15:00 | 13-4-2010 16:00 | 1
        13        | 2       | 13-4-2010 15:00 | 13-4-2010 16:00 | 2

    I want to select just the consecutive rows for each PERSONKEY, ROOMKEY grouping. So the
    desired resulting table is:

        PERSONKEY | ROOMKEY | CHECKIN         | CHECKOUT        | ROW
        ----------------------------------------------------------------
        1         | 8       | 13-4-2010 10:00 | 13-4-2010 11:00 | 1
        1         | 1       | 13-4-2010 15:00 | 13-4-2010 16:00 | 1
        1         | 1       | 13-4-2010 14:00 | 13-4-2010 15:00 | 2
        1         | 1       | 13-4-2010 13:00 | 13-4-2010 14:00 | 3
        13        | 2       | 13-4-2010 15:00 | 13-4-2010 16:00 | 1

    I want to avoid using cursors, so I thought I would use a recursive CTE. Here is what I
    came up with:

        ;with CTE (PERSONKEY, ROOMKEY, CHECKIN, CHECKOUT, ROW) as
        (select RU.PERSONKEY, RU.ROOMKEY, RU.CHECKIN, RU.CHECKOUT, RU.ROW
         from ROOMUSAGE RU
         where RU.ROW = 1
         union all
         select RU.PERSONKEY, RU.ROOMKEY, RU.CHECKIN, RU.CHECKOUT, RU.ROW
         from ROOMUSAGE RU
         inner join CTE on RU.ROWNUM = CTE.ROWNUM + 1
         where CTE.CHECKIN = RU.CHECKOUT
           and CTE.PERSONKEY = RU.PERSONKEY
           and CTE.ROOMKEY = RU.ROOMKEY)

    This worked OK for very small datasets (under 100 records), but it's unusable on large
    datasets. I'm thinking that I should somehow apply the CTE recursively to each PERSONKEY,
    ROOMKEY grouping in my ROOMUSAGE table, but I am not sure how to do that. Any help would
    be much appreciated. Cheers!
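
    The recursion already walks each (PERSONKEY, ROOMKEY) group via the correlation columns;
    what usually kills performance here is that every recursive step scans the whole table. A
    sketch of the same query with the join moved onto the per-group ROW counter and an index
    to support the seek (index name assumed):

        CREATE INDEX IX_ROOMUSAGE_GROUP
            ON ROOMUSAGE (PERSONKEY, ROOMKEY, ROW)
            INCLUDE (CHECKIN, CHECKOUT);

        ;WITH CTE AS
        (
            SELECT PERSONKEY, ROOMKEY, CHECKIN, CHECKOUT, ROW
            FROM ROOMUSAGE
            WHERE ROW = 1
            UNION ALL
            SELECT RU.PERSONKEY, RU.ROOMKEY, RU.CHECKIN, RU.CHECKOUT, RU.ROW
            FROM ROOMUSAGE RU
            INNER JOIN CTE ON RU.PERSONKEY = CTE.PERSONKEY
                          AND RU.ROOMKEY   = CTE.ROOMKEY
                          AND RU.ROW       = CTE.ROW + 1
                          AND RU.CHECKOUT  = CTE.CHECKIN  -- stays only while consecutive
        )
        SELECT * FROM CTE
        ORDER BY PERSONKEY, ROOMKEY, ROW;

    If a chain can exceed 100 consecutive rows, add OPTION (MAXRECURSION 0) to the final
    SELECT.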

  • How to set different width for INPUT and DIV elements with Scriptaculous Ajax.Autocompleter?

    - by Grzegorz Gierlik
    Hello, I am working on an autocomplete box based on Scriptaculous Ajax.Autocompleter.
    Here is what my HTML/JS code looks like:

        <input type="text" maxlength="255" class="input iSearchInput" name="isearch_value"
               id="isearch" value="<wl@txt>Search</wl@txt>" onfocus="this.select()">
        <br>
        <div id='isearch_choices' class='iSearchChoices'></div>

        <script>
        function iSearchGetSelectedId(text, li) {
            console.log([text, li.innerHTML].join("\n"));
            document.location.href = li.getAttribute("url");
        }

        document.observe('dom:loaded', function() {
            new Ajax.Autocompleter("isearch", "isearch_choices", "/url", {
                paramName: "phrase",
                minChars: 1,
                afterUpdateElement: iSearchGetSelectedId
            });
            $("isearch_choices").setStyle({width: "320px"});
        });
        </script>

    and the CSS classes:

        input.iSearchInput {
            width: 155px;
            height: 26px;
            margin-top: 7px;
            line-height: 20px;
        }

        div.iSearchChoices {
            position: absolute;
            background-color: white;
            border: 1px solid #888;
            margin: 0;
            padding: 0;
            width: 320px;
        }

    It works in general, but I need the list of choices to be wider than the input box. My
    first try was to set different widths with the CSS classes (as above), but it didn't
    work -- the list of choices became as wide as the input box. According to Firebug, the
    width defined by my CSS class was overwritten by a width set in the element's style
    attribute, which seems to be set by Ajax.Autocompleter. My second try was to set the
    width of the list of choices after creating the Ajax.Autocompleter:

        $("isearch_choices").setStyle({width: "320px"});

    but that didn't work either :(. No more ideas :(. How do I set a different width for the
    list of choices with Scriptaculous Ajax.Autocompleter?
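
    The inline width appears to come from the autocompleter's default onShow callback, which
    clones the input's dimensions onto the choices div each time it is shown, so setting the
    style once after construction gets undone. If memory serves, that callback can be
    replaced via the onShow option; a sketch adapted from the library's default:

        new Ajax.Autocompleter("isearch", "isearch_choices", "/url", {
            paramName: "phrase",
            minChars: 1,
            afterUpdateElement: iSearchGetSelectedId,
            onShow: function(element, update) {
                if (!update.style.position || update.style.position == 'absolute') {
                    update.style.position = 'absolute';
                    // setWidth: false keeps the CSS width instead of the input's
                    Position.clone(element, update, {
                        setWidth: false,
                        setHeight: false,
                        offsetTop: element.offsetHeight
                    });
                }
                Effect.Appear(update, {duration: 0.15});
            }
        });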

  • Full-text search on App Engine with Whoosh

    - by Martin
    I need to do full-text searching with Google App Engine. I found the Whoosh project and
    it works really well, as long as I use the App Engine development environment... When I
    upload my application to App Engine, I get the following traceback. For my tests, I am
    using the example application provided with the project. Any idea what I am doing wrong?

        <type 'exceptions.ImportError'>: cannot import name loads
        Traceback (most recent call last):
          File "/base/data/home/apps/myapp/1.334374478538362709/hello.py", line 6, in <module>
            from whoosh import store
          File "/base/data/home/apps/myapp/1.334374478538362709/whoosh/__init__.py", line 17, in <module>
            from whoosh.index import open_dir, create_in
          File "/base/data/home/apps/myapp/1.334374478538362709/whoosh/index.py", line 31, in <module>
            from whoosh import fields, store
          File "/base/data/home/apps/myapp/1.334374478538362709/whoosh/store.py", line 27, in <module>
            from whoosh import tables
          File "/base/data/home/apps/myapp/1.334374478538362709/whoosh/tables.py", line 43, in <module>
            from marshal import loads

    Here are the imports I have in my Python file:

        # Whoosh ------------------------------------------------------------------
        sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), '..', 'utils')))

        from whoosh.fields import Schema, STORED, ID, KEYWORD, TEXT
        from whoosh.index import getdatastoreindex
        from whoosh.qparser import QueryParser, MultifieldParser

    Thank you in advance for your help!

  • Entity Framework self-referencing entity deletion

    - by Viktor
    Hello. I have a structure of folders like this:

        Folder1
          Folder1.1
          Folder1.2
        Folder2
          Folder2.1
            Folder2.1.1

    and so on. The question is how to cascade-delete them (i.e. when removing Folder2, all
    its children are also deleted). I can't set an ON DELETE action because MSSQL does not
    allow it on a self-referencing foreign key. Can you give some suggestions?

    UPDATE: I wrote this stored proc; can I just leave it as is, or does it need some
    modifications?

        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        CREATE PROCEDURE sp_DeleteFoldersRecursive
            @parent_folder_id int
        AS
        BEGIN
            SET NOCOUNT ON;
            IF @parent_folder_id = 0 RETURN;

            CREATE TABLE #temp(fid INT);
            DECLARE @Count INT;

            INSERT INTO #temp(fid)
            SELECT FolderId FROM Folders WHERE FolderId = @parent_folder_id;
            SET @Count = @@ROWCOUNT;

            WHILE @Count > 0
            BEGIN
                INSERT INTO #temp(fid)
                SELECT FolderId FROM Folders
                WHERE EXISTS (SELECT fid FROM #temp WHERE Folders.ParentId = #temp.fid)
                  AND NOT EXISTS (SELECT fid FROM #temp WHERE Folders.FolderId = #temp.fid);
                SET @Count = @@ROWCOUNT;
            END

            DELETE Folders
            FROM Folders
            INNER JOIN #temp ON Folders.FolderId = #temp.fid;

            DROP TABLE #temp;
        END
        GO
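
    The loop can also be collapsed into a single recursive CTE (SQL Server 2005+); a sketch
    reusing the procedure's table and column names:

        ;WITH Subtree AS
        (
            SELECT FolderId FROM Folders WHERE FolderId = @parent_folder_id
            UNION ALL
            SELECT f.FolderId
            FROM Folders f
            INNER JOIN Subtree s ON f.ParentId = s.FolderId
        )
        DELETE Folders
        FROM Folders
        INNER JOIN Subtree ON Folders.FolderId = Subtree.FolderId;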

  • Twitter OAuth: error when trying to POST a direct message

    - by Darxval
    So I am building a JavaScript routine, used in conjunction with my C++ application, for
    sending direct messages to users. The script does the work of building the request that I
    send. When I send a request I receive "Incorrect signature" or "Could not authenticate
    you". Does anyone see something I am missing or am doing wrong? I am continuing to
    investigate. Thank you in advance. The JavaScript:

        var nDate = new Date();
        var epoch = nDate.getTime();
        var nounce = "";
        nounce = Base64.encode(epoch + randomString());

        var Parameters = [
            "oauth_consumerkey=" + sConsumerKey,
            "oauth_nonce=" + nounce,
            "oauth_signature_method=HMAC-SHA1",
            "oauth_timestamp=" + epoch,
            "oauth_token=" + sAccessToken,
            "oauth_version=1.0",
            "text=" + sText,
            "user=" + sUser
        ];
        var SortedParameters = Parameters.sort();
        var joinParameters = SortedParameters.join("&");
        var encodeParameters = escape(joinParameters);

        signature_base_string = escape("POST&" + NormalizedURL + "&" + encodeParameters);
        signature_key = sConsumerSecret + "&" + sAccessSecret;
        signature = Base64.encode(hmacsha1(signature_base_string, signature_key));

        sAuthHeader = " OAuth realm=, oauth_nonce=" + nounce +
                      ", oauth_timestamp=" + epoch +
                      ", oauth_consumer_key=" + sConsumerKey +
                      ", oauth_signature_method=HMAC-SHA1, oauth_version=1.0" +
                      ", oauth_signature=" + signature +
                      ", oauth_token=" + sAccessToken +
                      ", text=" + sText;

        goNVOut.Set("Header.Authorization: ", sAuthHeader);
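
    A few mismatches against the OAuth 1.0a spec stand out (noted as likely causes, not a
    verified fix): the base string uses oauth_consumerkey while the header uses
    oauth_consumer_key, so the two sides can never agree; JavaScript's escape() is not
    RFC 3986 percent-encoding; and oauth_timestamp must be in seconds, whereas getTime()
    returns milliseconds. A sketch of the corrected pieces, reusing the variables above:

        function rfc3986(s) {
            return encodeURIComponent(s)
                .replace(/!/g, '%21').replace(/\*/g, '%2A').replace(/'/g, '%27')
                .replace(/\(/g, '%28').replace(/\)/g, '%29');
        }

        var epoch = Math.floor(new Date().getTime() / 1000);  // seconds, not ms

        var Parameters = [
            "oauth_consumer_key=" + rfc3986(sConsumerKey),    // note the underscores
            "oauth_nonce=" + rfc3986(nounce),
            "oauth_signature_method=HMAC-SHA1",
            "oauth_timestamp=" + epoch,
            "oauth_token=" + rfc3986(sAccessToken),
            "oauth_version=1.0",
            "text=" + rfc3986(sText),
            "user=" + rfc3986(sUser)
        ];

        var base = "POST&" + rfc3986(NormalizedURL) + "&" +
                   rfc3986(Parameters.sort().join("&"));

    The text and user values still travel in the POST body; only the oauth_* parameters
    belong in the Authorization header.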

  • Merging two Regular Expressions to Truncate Words in Strings

    - by Alix Axel
    I'm trying to come up with the following function, which truncates a string to whole
    words (if possible, otherwise it should truncate to chars):

        function Text_Truncate($string, $limit, $more = '...') {
            $string = trim(html_entity_decode($string, ENT_QUOTES, 'UTF-8'));

            if (strlen(utf8_decode($string)) > $limit) {
                $string = preg_replace('~^(.{1,' . intval($limit) . '})(?:\s.*|$)~su', '$1', $string);

                if (strlen(utf8_decode($string)) > $limit) {
                    $string = preg_replace('~^(.{' . intval($limit) . '}).*~su', '$1', $string);
                }

                $string .= $more;
            }

            return trim(htmlentities($string, ENT_QUOTES, 'UTF-8', true));
        }

    Here are some tests:

        // Iñtërnâtiônàlizætiøn and then the quick brown fox... (49 + 3 chars)
        echo dyd_Text_Truncate('Iñtërnâtiônàlizætiøn and then the quick brown fox jumped overly the lazy dog and one day the lazy dog humped the poor fox down until she died.', 50, '...');

        // Iñtërnâtiônàlizætiøn_and_then_the_quick_brown_fox_... (50 + 3 chars)
        echo dyd_Text_Truncate('Iñtërnâtiônàlizætiøn_and_then_the_quick_brown_fox_jumped_overly_the_lazy_dog and one day the lazy dog humped the poor fox down until she died.', 50, '...');

    They both work as is; however, if I drop the second preg_replace() I get the following:

        Iñtërnâtiônàlizætiøn_and_then_the_quick_brown_fox_jumped_overly_the_lazy_dog and one day the lazy dog humped the poor fox down until she died....

    I can't use substr() because it only works at byte level, and I don't have access to
    mb_substr() at the moment. I've made several attempts to join the second regex with the
    first one, but without success. Please help; I've been struggling with this for almost an
    hour.

    EDIT: I'm sorry, I've been awake for 40 hours and I shamelessly missed this:

        $string = preg_replace('~^(.{1,' . intval($limit) . '})(?:\s.*|$)?~su', '$1', $string);

    Still, if someone has a more optimized regex (or one that ignores the trailing space),
    please share:

        "Iñtërnâtiônàlizætiøn and then "
        "Iñtërnâtiônàlizætiøn_and_then_"

    EDIT 2: I still can't get rid of the trailing whitespace; can someone help me out?
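
    A lookbehind can force the match to end on a non-space character, which handles both
    cases in one pass and drops the trailing whitespace. A sketch with the same flags and
    assumptions as the original:

        // Either up to $limit chars ending in a non-space that is followed by a
        // break or the end of string, or a hard cut at $limit when a single word
        // exceeds the limit; exactly one of $1/$2 is set per match.
        $string = preg_replace(
            '~^(.{1,' . intval($limit) . '})(?<=\S)(?:\s.*)?$|^(.{' . intval($limit) . '}).*~su',
            '$1$2',
            $string
        );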

  • Deploying a Sinatra app with Passenger gives only 404 Page Not Founds, yet a simple Rack app works

    - by berkes
    I have correctly (or probably not) installed Passenger on Apache 2. Rack works, but
    Sinatra keeps giving 404s. Here is what works -- config.ru:

        app = proc do |env|
          [200, { "Content-Type" => "text/html" }, "hello <b>world</b>"]
        end
        run app

    Here is what works too: running the app.rb (see below) with ruby app.rb and then looking
    at localhost:4567/about and / gives me a correct hello world. w00t. But then there is
    Sinatra entering the building -- config.ru:

        require 'rubygems'
        require 'sinatra'

        root_dir = File.dirname(__FILE__)

        set :environment, ENV['RACK_ENV'].to_sym
        set :root, root_dir
        set :app_file, File.join(root_dir, 'app.rb')
        disable :run

        run Sinatra::Application

    and an app.rb:

        require 'rubygems'
        require 'sinatra'

        get '/' do
          "Hallo wereld!"
        end

        get '/about' do
          "Hello world, it's #{Time.now} at the server!"
        end

    This keeps giving 404s. /var/log/apache2/error.log lists these correctly as 404s, with
    something that worries me:

        83.XXXXXXXXX - - [30/May/2010 16:06:52] "GET /about " 404 18 0.0007
        83.XXXXXXXXX - - [30/May/2010 16:06:56] "GET / " 404 18 0.0007

    The thing that worries me is the space after the / and the /about. Would Apache or
    Sinatra go looking for /[space], like /%20? If anyone knows what this problem relates to,
    maybe a known bug (that I could not find) or a known gotcha? Maybe I am just being stupid
    and getting it all wrong? Otherwise, any hints on where to get, read or log more
    developer-level data on a running Rack, Sinatra or Passenger app would be helpful too: to
    see what Sinatra is looking for, for example. Some other information: running Ubuntu
    9.04, apache2-mpm-prefork (deb), mod_php5, ruby 1.8.7, passenger 2.2.11, sinatra 1.0.

  • Trouble with LINQ databind to GridView and RowDataBound

    - by Michael
    Greetings all, I am working on redesigning my personal Web site using VS 2008 and have
    chosen to use LINQ to create my data-access layer. Part of my site will be a little app
    to help manage my budget better. My first LINQ query does successfully execute and
    display in a GridView, but when I try to use a RowDataBound event to work with the
    results and refine them a bit, I get the error:

        The type or namespace name 'var' could not be found (are you missing a using
        directive or an assembly reference?)

    The interesting part is, if I just try to put var s = "s"; anywhere else in the same
    file, I get the same error too. If I go to other files in the Web project, var s = "s";
    compiles fine. Here is the LINQ query call:

        public static IQueryable pubGetRecentTransactions(int param_accountid)
        {
            clsDataContext db = new clsDataContext();

            var query = from d in db.tblMoneyTransactions
                        join p in db.tblMoneyTransactions
                            on d.iParentTransID equals p.iTransID into dp
                        from p in dp.DefaultIfEmpty()
                        where d.iAccountID == param_accountid
                        orderby d.dtTransDate descending, d.iTransID ascending
                        select new
                        {
                            d.iTransID,
                            d.dtTransDate,
                            sTransDesc = p != null ? p.sTransDesc : d.sTransDesc,
                            d.sTransMemo,
                            d.mTransAmt,
                            d.iCheckNum,
                            d.iParentTransID,
                            d.iReconciled,
                            d.bIsTransfer
                        };
            return query;
        }

        protected void Page_Load(object sender, EventArgs e)
        {
            if (!this.IsPostBack)
            {
                this.prvLoadData();
            }
        }

        internal void prvLoadData()
        {
            prvCtlGridTransactions.DataSource = clsMoneyTransactions.pubGetRecentTransactions(2);
            prvCtlGridTransactions.DataBind();
        }

        protected void prvCtlGridTransactions_RowDataBound(object sender, GridViewRowEventArgs e)
        {
            if (e.Row.RowType == DataControlRowType.DataRow)
            {
                var datarow = e.Row.DataItem;
                var s = "s";
                e.Row.Cells[0].Text = datarow.dtTransDate.ToShortDateString();
                e.Row.Cells[1].Text = datarow.sTransDesc;
                e.Row.Cells[2].Text = datarow.mTransAmt.ToString("c");
                e.Row.Cells[3].Text = datarow.iReconciled.ToString();
            } // end if
        } // end RowDataBound

    My googling to date hasn't found a good answer, so I turn it over to this trusted
    community. I appreciate your time in assisting me.
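
    A 'var' error in one file of a Web Site project usually means that file is being compiled
    against C# 2.0, so checking that web.config's <compilers> section targets the v3.5
    compiler is step one; that's configuration, not code. Separately, even under C# 3,
    e.Row.DataItem is typed object and anonymous types aren't visible across methods, so the
    row handler needs late-bound access; a sketch using DataBinder.Eval:

        protected void prvCtlGridTransactions_RowDataBound(object sender, GridViewRowEventArgs e)
        {
            if (e.Row.RowType == DataControlRowType.DataRow)
            {
                object item = e.Row.DataItem;
                // Eval reads the anonymous type's properties by name via reflection
                e.Row.Cells[0].Text = ((DateTime)DataBinder.Eval(item, "dtTransDate")).ToShortDateString();
                e.Row.Cells[1].Text = (string)DataBinder.Eval(item, "sTransDesc");
                e.Row.Cells[2].Text = string.Format("{0:c}", DataBinder.Eval(item, "mTransAmt"));
                e.Row.Cells[3].Text = DataBinder.Eval(item, "iReconciled").ToString();
            }
        }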

  • Datastore performance: my code or the datastore latency?

    - by fredrik
    I have had, for the last month, a bit of a problem with a quite basic datastore query. It
    involves two db.Models, with one referring to the other via a db.ReferenceProperty. The
    problem is that, according to the admin logs, the request takes about 2-4 seconds to
    complete. I stripped it down to a bare form and a list to display the results. The put
    works fine, but the get accumulates (in my opinion) way too much CPU time.

        # The get looks like this:
        outputData['items'] = {}
        labelsData = Label.all()
        for label in labelsData:
            labelItem = label.item.name
            if labelItem not in outputData['items']:
                outputData['items'][labelItem] = {
                    'item': labelItem,
                    'labels': []
                }
            outputData['items'][labelItem]['labels'].append(label.text)

        path = os.path.join(os.path.dirname(__file__), 'index.html')
        self.response.out.write(template.render(path, outputData))

        # And the models:
        class Item(db.Model):
            name = db.StringProperty()

        class Label(db.Model):
            text = db.StringProperty()
            lang = db.StringProperty()
            item = db.ReferenceProperty(Item)

    I've tried to do this a number of different ways, e.g. instead of the ReferenceProperty,
    storing all Label keys in the Item model as a db.ListProperty. My test data is just 10
    rows in Item and 40 in Label. So my question: is it a fool's errand to try to optimize
    this, since the high CPU usage is due to datastore latency, or have I just screwed up
    somewhere in the code?

    ..fredrik
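
    One thing worth checking: each label.item dereference issues its own datastore get, so
    the loop above costs one round trip per Label. Batch-fetching the referenced items first
    collapses that into two datastore calls; a sketch using the stock db APIs (fetch size
    assumed):

        labels = Label.all().fetch(1000)

        # Read the raw keys without dereferencing, then fetch all items in one batch.
        item_keys = list(set(Label.item.get_value_for_datastore(l) for l in labels))
        items_by_key = dict(zip(item_keys, db.get(item_keys)))

        outputData['items'] = {}
        for label in labels:
            name = items_by_key[Label.item.get_value_for_datastore(label)].name
            entry = outputData['items'].setdefault(name, {'item': name, 'labels': []})
            entry['labels'].append(label.text)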

  • Convert query with system objects from SQL 2000 to 2005/2008

    - by Dan
    I have some SQL I need to get working on SQL Server 2005/2008. The SQL is from SQL Server
    2000 and uses some system objects to make it work:

        master.dbo.spt_provider_types
        master.dbo.syscharsets
        systypes
        syscolumns
        sysobjects

    I know SQL Server 2005 no longer uses system tables and I can get the same information
    from views, but I am looking for a solution that will work for both SQL Server 2000 and
    2005/2008. Any ideas?

        select top 100 percent
            TABLE_CATALOG = db_name(),
            TABLE_SCHEMA = user_name(o.uid),
            TABLE_NAME = o.name,
            COLUMN_NAME = c.name,
            ORDINAL_POSITION = convert(int, (
                select count(*) from syscolumns sc
                where sc.id = c.id AND sc.number = c.number AND sc.colid <= c.colid)),
            IS_COMPUTED = convert(bit, c.iscomputed)
        from syscolumns c
            left join syscomments m on c.cdefault = m.id and m.colid = 1,
            sysobjects o,
            master.dbo.spt_provider_types d,
            systypes t,
            master.dbo.syscharsets a_cha /* charset/1001, not sortorder. */
        where o.name = @table_name
          and permissions(o.id, c.name) <> 0
          and (o.type in ('U','V','S') OR (o.type in ('TF', 'IF') and c.number = 0))
          and o.id = c.id
          and t.xtype = d.ss_dtype
          and c.length = case when d.fixlen > 0 then d.fixlen else c.length end
          and c.xusertype = t.xusertype
          and a_cha.type = 1001 /* type is charset */
          and a_cha.id = isnull(convert(tinyint, CollationPropertyFromID(c.collationid, 'sqlcharset')),
                                convert(tinyint, ServerProperty('sqlcharset')))
        -- make sure there's one and only one row selected for each column
        order by 2, 3, c.colorder
        ) tbl where IS_COMPUTED = 0
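
    INFORMATION_SCHEMA.COLUMNS has existed since SQL Server 2000 and covers most of what this
    digs out of the system tables; combined with COLUMNPROPERTY for the computed flag, it
    runs unchanged on 2000, 2005 and 2008. A sketch:

        SELECT TABLE_CATALOG,
               TABLE_SCHEMA,
               TABLE_NAME,
               COLUMN_NAME,
               ORDINAL_POSITION,
               IS_COMPUTED = COLUMNPROPERTY(OBJECT_ID(TABLE_SCHEMA + '.' + TABLE_NAME),
                                            COLUMN_NAME, 'IsComputed')
        FROM INFORMATION_SCHEMA.COLUMNS
        WHERE TABLE_NAME = @table_name
        ORDER BY TABLE_SCHEMA, TABLE_NAME, ORDINAL_POSITION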

  • Parsing/Tokenizing a String Containing a SQL Command

    - by Alan Storm
    Are there any open source libraries (any language, Python/PHP preferred) that will
    tokenize/parse an ANSI SQL string into its various components? That is, if I had the
    following string:

        SELECT a.foo, b.baz, a.bar
        FROM TABLE_A a
        LEFT JOIN TABLE_B b ON a.id = b.id
        WHERE baz = 'snafu';

    I'd get back a data structure/object something like:

        //fake PHPish
        $results['select-columns'] = Array[a.foo, b.baz, a.bar];
        $results['tables'] = Array[TABLE_A, TABLE_B];
        $results['table-aliases'] = Array[a => TABLE_A, b => TABLE_B];
        //etc...

    Restated, I'm looking for the code in a database package that teases the SQL command
    apart so that the engine knows what to do with it. Searching the internet turns up a lot
    of results on how to parse a string WITH SQL. That's not what I want. I realize I could
    glop through an open source database's code to find what I want, but I was hoping for
    something a little more ready-made (although if you know where in the MySQL, PostgreSQL
    or SQLite source to look, feel free to pass it along). Thanks!
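
    On the Python side, the standalone sqlparse module is one such non-validating parser
    (assuming its token-grouping output is close enough; it does not build a full semantic
    tree). A sketch:

        # pip install sqlparse
        import sqlparse

        sql = "SELECT a.foo, b.baz, a.bar FROM TABLE_A a " \
              "LEFT JOIN TABLE_B b ON a.id = b.id WHERE baz = 'snafu';"

        stmt = sqlparse.parse(sql)[0]
        for token in stmt.tokens:
            print token.ttype, repr(str(token))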

  • Python MD5 hashing: faster calculation

    - by balgan
    Hi everyone. I will try my best to explain my problem and my line of thought on how I
    think I can solve it. I use this code:

        for root, dirs, files in os.walk(downloaddir):
            for infile in files:
                f = open(os.path.join(root, infile), 'rb')
                filehash = hashlib.md5()
                while True:
                    data = f.read(10240)
                    if len(data) == 0:
                        break
                    filehash.update(data)
                print "FILENAME: ", infile
                print "FILE HASH: ", filehash.hexdigest()

    and using:

        start = time.time()
        elapsed = time.time() - start

    I measure how long it takes to calculate a hash. Pointing my code at a 653 MB file, this
    is the result:

        root@Mars:/home/tiago# python algorithm-timer.py
        FILENAME:  freebsd.iso
        FILE HASH:  ace0afedfa7c6e0ad12c77b6652b02ab
        12.624
        root@Mars:/home/tiago# python algorithm-timer.py
        FILENAME:  freebsd.iso
        FILE HASH:  ace0afedfa7c6e0ad12c77b6652b02ab
        12.373
        root@Mars:/home/tiago# python algorithm-timer.py
        FILENAME:  freebsd.iso
        FILE HASH:  ace0afedfa7c6e0ad12c77b6652b02ab
        12.540

    OK, so about 12 seconds for a 653 MB file. My problem is that I intend to use this code
    in a program that will run through multiple files, some of which might be 4/5/6 GB, and
    it will take way longer to calculate their hashes. What I am wondering is whether there
    is a faster way to calculate the hash of a file. Maybe by doing some multithreading? I
    used another script to check CPU use second by second, and I see that my code is only
    using 1 of my 2 CPUs, and only at 25% max. Any way I can change this? Thank you all in
    advance for the given help.
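
    Since a single MD5 stream is inherently sequential, the usual levers are bigger reads and
    hashing several files in parallel with processes rather than threads. A sketch
    (downloaddir is the poster's variable; the pool size of 2 assumes the two CPUs above, and
    a single slow disk may still be the real bottleneck):

        import hashlib
        import os
        from multiprocessing import Pool

        def md5_file(path, chunk=1024 * 1024):  # 1 MB reads instead of 10 KB
            h = hashlib.md5()
            with open(path, 'rb') as f:
                for data in iter(lambda: f.read(chunk), ''):
                    h.update(data)
            return path, h.hexdigest()

        if __name__ == '__main__':
            paths = [os.path.join(root, name)
                     for root, dirs, files in os.walk(downloaddir)
                     for name in files]
            pool = Pool(processes=2)
            for path, digest in pool.map(md5_file, paths):
                print "FILENAME: ", os.path.basename(path)
                print "FILE HASH: ", digest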

  • The type of field isn't supported by declared persistence strategy "OneToMany"

    - by Robert
    We are new to JPA and trying to set up a very simple one-to-many relationship, where a
    POJO called Message can have a list of integer group IDs defined by a join table called
    GROUP_ASSOC. Here is the DDL:

        CREATE TABLE "APP"."MESSAGE" (
            "MESSAGE_ID" INTEGER NOT NULL GENERATED ALWAYS AS IDENTITY (START WITH 1, INCREMENT BY 1)
        );
        ALTER TABLE "APP"."MESSAGE" ADD CONSTRAINT "MESSAGE_PK" PRIMARY KEY ("MESSAGE_ID");

        CREATE TABLE "APP"."GROUP_ASSOC" (
            "GROUP_ID" INTEGER NOT NULL,
            "MESSAGE_ID" INTEGER NOT NULL
        );
        ALTER TABLE "APP"."GROUP_ASSOC" ADD CONSTRAINT "GROUP_ASSOC_PK"
            PRIMARY KEY ("MESSAGE_ID", "GROUP_ID");
        ALTER TABLE "APP"."GROUP_ASSOC" ADD CONSTRAINT "GROUP_ASSOC_FK"
            FOREIGN KEY ("MESSAGE_ID") REFERENCES "APP"."MESSAGE" ("MESSAGE_ID");

    Here is the POJO:

        @Entity
        @Table(name = "MESSAGE")
        public class Message {
            @Id
            @Column(name = "MESSAGE_ID")
            @GeneratedValue(strategy = GenerationType.IDENTITY)
            private int messageId;

            @OneToMany(fetch = FetchType.LAZY, cascade = CascadeType.PERSIST)
            private List groupIds;

            public int getMessageId() { return messageId; }
            public void setMessageId(int messageId) { this.messageId = messageId; }
            public List getGroupIds() { return groupIds; }
            public void setGroupIds(List groupIds) { this.groupIds = groupIds; }
        }

    When we try to execute the following test code, we get:

        <openjpa-1.2.3-SNAPSHOT-r422266:907835 fatal user error>
        org.apache.openjpa.util.MetaDataException: The type of field "pojo.Message.groupIds"
        isn't supported by declared persistence strategy "OneToMany".
        Please choose a different strategy.

    The test code:

        Message msg = new Message();
        List groups = new ArrayList();
        groups.add(101);
        groups.add(102);

        EntityManager em = Persistence.createEntityManagerFactory("TestDBWeb").createEntityManager();
        em.getTransaction().begin();
        em.persist(msg);
        em.getTransaction().commit();

    Help!
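
    The error is the spec talking: in JPA 1.0, a @OneToMany can only target another @Entity,
    not a basic type like Integer. If moving to JPA 2.0 is an option, @ElementCollection maps
    a collection of basic values onto exactly this kind of join table; a sketch reusing the
    DDL's names:

        @ElementCollection
        @CollectionTable(name = "GROUP_ASSOC",
                         joinColumns = @JoinColumn(name = "MESSAGE_ID"))
        @Column(name = "GROUP_ID")
        private List<Integer> groupIds;

    On JPA 1.0, the alternatives are a vendor extension (OpenJPA has @PersistentCollection
    for collections of basic types) or promoting the group reference to a small entity mapped
    to GROUP_ASSOC.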

  • Code Golf: Shortest Turing-complete interpreter.

    - by ilya n.
    I've just tried to create the smallest possible language interpreter. Would you like to
    join and try?

    Rules of the game:

    - You should specify a programming language you're interpreting. If it's a language you
      invented, it should come with a list of commands in the comments.
    - Your code should start with an example program and data assigned to your code and data
      variables.
    - Your code should end with output of your result. It's preferable that there are debug
      statements at every intermediate step.
    - Your code should be runnable as written.
    - You can assume that data are 0s and 1s (int, string or boolean, your choice) and output
      is a single bit.
    - The language should be Turing-complete, in the sense that for any algorithm written on
      a standard model, such as a Turing machine, Markov chains, or similar of your choice,
      it's reasonably obvious (or explained) how to write a program that, after being
      executed by your interpreter, performs the algorithm.
    - The length of the code is defined as the length of the code after removal of the input
      part, output part, debug statements and non-necessary whitespace. Please add the
      resulting code and its length to the post.
    - You can't use functions that make the compiler execute code for you, such as eval(),
      exec() or similar.

    This is Community Wiki, meaning neither the question nor answers get reputation points
    from votes. But vote anyway!

  • Mapping self-table one-to-many using non-PK columns

    - by Harel Moshe
    Hey, I have a legacy DB to which a Person object is mapped, having a collection of family
    members, like this:

        class Person
        {
            ...
            string Id;            /* 9-digit string */
            IList<Person> Family;
            ...
        }

    The PERSON table looks like:

        Id:       CHAR(9), PK
        FamilyId: INT, NOT NULL

    plus several other non-relevant columns. I'm trying to map the Family collection to the
    PERSON table using the FamilyId column, which, as mentioned above, is not the PK. So I
    actually have a one-to-many which is self-table-referential. I'm getting an error saying
    'Cast is not valid' when my mapping looks like this:

        ...
        <set name="Family" table="Person" lazy="false">
          <key column="FamilyId" />
          <one-to-many class="Person" />
        </set>
        ...

    because obviously the join NHibernate is trying to make is between the PK column, Id, and
    the 'secondary' column, FamilyId, instead of joining the FamilyId column to itself. Any
    ideas, please?
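
    NHibernate can join a collection key against a non-PK property via property-ref; a sketch
    (this assumes FamilyId is also mapped as a property, which the class above would need to
    expose):

        <property name="FamilyId" column="FamilyId" />

        <set name="Family" table="Person" lazy="false">
          <key column="FamilyId" property-ref="FamilyId" />
          <one-to-many class="Person" />
        </set>

    One caveat worth testing: mapped this way, the set contains the whole family group,
    including the person themselves.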

  • More than 100 connections to SQL Server 2008 in "sleeping" status - Solved

    - by Allende
    I have big trouble here at my server. I have an ASP.NET web app (framework 4.x) running
    on my server; all the transactions/selects/updates/inserts are made with ADO.NET. My
    problem is that after it has been in use for a while (a couple of updates/selects/
    inserts), I sometimes get more than 100 connections in "sleeping" status when I check the
    connections on SQL Server with this query:

        SELECT spid, a.status, hostname, program_name, cmd, cpu, physical_io,
               blocked, b.name, loginame
        FROM master.dbo.sysprocesses a
        INNER JOIN master.dbo.sysdatabases b ON a.dbid = b.dbid
        WHERE program_name LIKE '%TMS%'
        ORDER BY spid

    I've been checking my code and closing every connection I open. I'm going to test the new
    class, but I'm afraid the problem won't be fixed. Connection pooling is supposed to keep
    connections around for re-use, but from what I see it doesn't always re-use them. Any
    idea, besides checking that all open connections are closed after use?

    SOLVED (now I have just one, beautiful connection in "sleeping" status). Besides the
    answer of David Stratton, I would like to share this link, which explains really well how
    the connection pool works: http://dinesql.blogspot.com/2010/07/sql-server-sleeping-status-and.html.
    To be short: you need to close every connection (SqlConnection object) so that the
    connection pool can re-use it, and use the same connection string everywhere; to ensure
    this, it is highly recommended to use one defined in web.config. Be careful with
    DataReaders: you should close their connections too (that was what drove me out of my
    mind for a while).
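
    The pooling contract in ADO.NET is simply that Close()/Dispose() returns the physical
    connection to the pool; a using block guarantees that even when exceptions are thrown. A
    sketch of the pattern, including the DataReader case called out above (the "TMS"
    connection-string name is made up):

        // connection string pulled from web.config so every call site shares one pool;
        // ConfigurationManager lives in System.Configuration
        string cs = ConfigurationManager.ConnectionStrings["TMS"].ConnectionString;

        using (SqlConnection conn = new SqlConnection(cs))
        using (SqlCommand cmd = new SqlCommand("SELECT ...", conn))
        {
            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    // ...
                }
            } // reader disposed here
        }     // connection returned to the pool here, even on exceptions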

  • Why does my ActivePerl program report 'Sorry. Ran out of threads'?

    - by Zaid
    Tom Christiansen's example code (à la perlthrtut) is a recursive, threaded implementation
    of finding and printing all prime numbers between 3 and 1000. Below is a mildly adapted
    version of the script:

        #!/usr/bin/perl
        # adapted from prime-pthread, courtesy of Tom Christiansen

        use strict;
        use warnings;
        use threads;
        use Thread::Queue;

        sub check_prime {

            my ($upstream, $cur_prime) = @_;
            my $child;
            my $downstream = Thread::Queue->new;

            while (my $num = $upstream->dequeue) {

                next unless ($num % $cur_prime);

                if ($child) {
                    $downstream->enqueue($num);
                } else {
                    $child = threads->create(\&check_prime, $downstream, $num);
                    if ($child) {
                        print "This is thread ", $child->tid, ". Found prime: $num\n";
                    } else {
                        warn "Sorry. Ran out of threads.\n";
                        last;
                    }
                }
            }

            if ($child) {
                $downstream->enqueue(undef);
                $child->join;
            }
        }

        my $stream = Thread::Queue->new(3 .. shift, undef);
        check_prime($stream, 2);

    When run on my machine (under ActiveState & Win32), the code was capable of spawning only
    118 threads (last prime number found: 653) before terminating with a 'Sorry. Ran out of
    threads' warning. In trying to figure out why I was limited in the number of threads I
    could create, I replaced the use threads; line with use threads (stack_size => 1);. The
    resulting code happily dealt with churning out 2000+ threads. Can anyone explain this
    behavior?
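
    A plausible mechanism (stated as a hypothesis, not verified against the ithreads source):
    each ithread reserves its own C stack up front, and with the platform default size on
    Win32, a hundred-odd reservations exhaust the process's 2 GB address space, at which
    point threads->create returns undef. Requesting a small explicit stack lets thousands of
    reservations fit:

        # the pragma rounds the value up as needed; 64 KB is an arbitrary small choice
        use threads (stack_size => 64 * 1024);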
