Search Results

  • Rails running multiple delayed_job - lock tables

    - by pepernik
    Hey. I use delayed_job for background processing. I have an 8-CPU server with MySQL, and I start 7 delayed_job processes:

        RAILS_ENV=production script/delayed_job -n 7 start

    Q1: Is it possible that two or more delayed_job processes start processing the same job (the same row in the delayed_jobs table)? I checked the code of the delayed_job plugin but cannot find a lock directive where I would expect one. I think each process should lock the table before executing an UPDATE on the locked_by column. Instead, they claim a record simply by updating the locked_by field (UPDATE delayed_jobs SET locked_by ...). Is that really enough? Why is no explicit locking needed? I know that UPDATE has a higher priority than SELECT, but I don't think that matters here. My understanding of the multi-threaded situation is:

        Process1: Get waiting job X. [OK]
        Process2: Get waiting job X. [OK]
        Process1: Update locked_by field. [OK]
        Process2: Update locked_by field. [OK]
        Process1: Get waiting job X. [Already processed]
        Process2: Get waiting job X. [Already processed]

    So it seems that in some cases two workers could fetch the same job and both start processing it.

    Q2: Is 7 delayed_job workers a good number for an 8-CPU server? Why or why not? Thanks!
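    The reason no explicit table lock is usually needed is the atomicity of a single conditional UPDATE: the database applies the WHERE test and the row update as one step, so only one worker sees "one row affected". A hedged sketch of the claim pattern (not delayed_job's exact SQL; the worker name and timeout are illustrative):

        -- each worker tries to claim job 42; the WHERE clause makes the claim conditional
        UPDATE delayed_jobs
        SET locked_by = 'worker-1', locked_at = NOW()
        WHERE id = 42
          AND (locked_at IS NULL OR locked_at < NOW() - INTERVAL 4 HOUR);
        -- a worker only runs the job if this statement reports exactly one affected row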

  • How do I get rid of these warnings?

    - by Brian Postow
    This is really several questions, but anyway... I'm working on a big project in Xcode, relatively recently ported from Metrowerks (yes, really), and there are a bunch of warnings I want to get rid of. Every so often an important warning comes up, but I never look at them because there are too many garbage ones. So if I can either figure out how to get Xcode to stop emitting each warning, or actually fix the underlying problem, that would be great. Here are the warnings:

    <map.h> is antiquated. However, when I replace it with <map> my files don't compile. Evidently there's something in map.h that isn't in map.

    this decimal constant is unsigned only in ISO C90. This is a large number being compared to an unsigned long. I have even cast it, with no effect.

    enumeral mismatch in conditional expression: <anonymous enum> vs <anonymous enum>. This appears to come from a ?: operator, possibly because the then and else branches don't evaluate to the same type. Except that in at least one case it's (matchVp == NULL ? noErr : dupFNErr), and since those are both of type OSErr, which is Mac-defined, I'm not sure what's up. It also seems to come up with other pairs of Mac constants.

    multi-character character constant. This one is obvious. The problem is that I actually NEED multi-character constants.

    -fwritable-strings not compatible with literal CF/NSString. I unchecked the "Strings are Read-Only" box in both the project and target settings, and it seems to have had no effect.

  • Scapy install issues. Nothing seems to actually be installed?

    - by Chris
    I have an Apple computer running Leopard with Python 2.6. I downloaded the latest version of Scapy and ran "python setup.py install". All went according to plan. Now, when I try to run it in interactive mode by just typing "scapy", it throws a bunch of errors. What gives? Just in case, here is the FULL error message:

        INFO: Can't import python gnuplot wrapper . Won't be able to plot.
        INFO: Can't import PyX. Won't be able to use psdump() or pdfdump().
        ERROR: Unable to import pcap module: No module named pcap/No module named pcapy
        ERROR: Unable to import dnet module: No module named dnet
        Traceback (most recent call last):
          File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/runpy.py", line 122, in _run_module_as_main
            "__main__", fname, loader, pkg_name)
          File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/runpy.py", line 34, in _run_code
            exec code in run_globals
          File "/Users/owner1/Downloads/scapy-2.1.0/scapy/__init__.py", line 10, in <module>
            interact()
          File "scapy/main.py", line 245, in interact
            scapy_builtins = __import__("all",globals(),locals(),".").__dict__
          File "scapy/all.py", line 25, in <module>
            from route6 import *
          File "scapy/route6.py", line 264, in <module>
            conf.route6 = Route6()
          File "scapy/route6.py", line 26, in __init__
            self.resync()
          File "scapy/route6.py", line 39, in resync
            self.routes = read_routes6()
          File "scapy/arch/unix.py", line 147, in read_routes6
            lifaddr = in6_getifaddr()
          File "scapy/arch/unix.py", line 123, in in6_getifaddr
            i = dnet.intf()
        NameError: global name 'dnet' is not defined
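    The errors above show that Scapy itself installed, but the optional pcap and dnet Python bindings it tries to import are missing. A quick hedged check, run with the same interpreter, to confirm which of the modules named in those messages are actually importable (module names are taken from the errors above):

        for name in ("pcap", "pcapy", "dnet", "Gnuplot", "pyx"):
            try:
                __import__(name)
                print "%s: OK" % name
            except ImportError, err:
                print "%s: missing (%s)" % (name, err)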

  • Message queue proxy in Python + Twisted

    - by gasper_k
    Hi, I want to implement a lightweight message queue proxy. Its job is to receive messages from a web application (PHP) and send them to the message queue server asynchronously. The reason for this proxy is that the MQ isn't always available and is sometimes lagging, or even down, but I want to make sure the messages are delivered and that the web application returns immediately.

    So, PHP would send the message to the MQ proxy running on the same host. That proxy would save the messages to SQLite for persistence, in case of crashes. At the same time it would send the messages from SQLite to the MQ in batches when the connection is available, and delete them from SQLite.

    The way I understand it, there are these components in this service:

    1. message listener (listens for messages from PHP and writes them to an incoming queue)
    2. DB flusher (reads messages from the incoming queue and saves them to the database; needed because of SQLite's single-threadedness)
    3. MQ connection handler (keeps the connection to the MQ server alive by reconnecting)
    4. message sender (collects messages from the SQLite DB, sends them to the MQ server, then removes them from the DB)

    I was thinking of using Twisted for #1 (TCPServer), but I'm having trouble integrating it with the other parts, which aren't event-driven. Intuition tells me that each of these parts should run in a separate thread, because they are all IO-bound and independent of each other, but I could easily put them in a single thread. Even so, I couldn't find any good, clear (to me) examples of how to run such a worker thread alongside Twisted's main loop. The example I started with is chatserver.py, which uses service.Application and internet.TCPServer objects. If I start my own thread before creating the TCPServer service, it runs a few times, but then it stops and never runs again. I'm not sure why this is happening, but it's probably because I don't use threads with Twisted correctly. Any suggestions on how to implement a separate worker thread and keep Twisted? Do you have any alternative architectures in mind?
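    Not an answer to the architecture question, but a minimal hedged sketch of one way the non-event-driven pieces are often run next to the reactor: let Twisted own the main loop and push the blocking SQLite/MQ work onto its thread pool with deferToThread, driven by a LoopingCall. The function name and interval below are made up:

        from twisted.internet import reactor, task
        from twisted.internet.threads import deferToThread

        def send_pending_messages():
            # placeholder for the blocking part: read a batch from SQLite,
            # push it to the MQ, delete the rows that were acknowledged
            pass

        def flush():
            # called in the reactor thread; hand the blocking work to the thread pool
            d = deferToThread(send_pending_messages)
            d.addErrback(lambda failure: failure.printTraceback())

        task.LoopingCall(flush).start(5.0)   # attempt a flush every 5 seconds
        reactor.run()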

  • General learning methodology

    - by momo
    I just wanted to hear about the different general learning paths people take when learning a new language or framework. The one I currently use, which is how I learned Bash and am currently learning Python, is:

    1. An "instant hacking" tutorial (a very short tutorial introducing the basic syntax, variable declaration, loops, data types, etc., and how they are generally used).
    2. An in-depth tutorial with good programming style, slightly topic-specific (e.g. Mark Pilgrim's Dive Into Python). Important topics for me personally are regex methods, file IO, and how the different data types are best used (I wrote a very primitive Bayesian spam filter using Python's dictionaries to keep track of word occurrences).
    3. Spaced repetition of syntax or short recipes (I use Anki, with questions like "create a dictionary with filename and filesize metadata, human-readable", or simpler ones like "match 0-3 occurrences of the letter M in a string", or "return/create an iterator from two sequences").

    The use of spaced repetition has been invaluable, and I credit it with the ease with which I can recall and create Python algorithms. However, I've recently started looking into Django, and I've found that spaced repetition, at least in my case, doesn't work very well for learning a framework; it works best with short code recipes (either that, or I should start looking into more basic Django framework tutorials). The problem I'm encountering is that framework programming is not only algorithms but also learning the API, which can be quite complex, since you have to learn all the methods and modules, where they live, and the sequence in which things have to be done. For example, in Django, to start a project that deals with polls (from the Django tutorial), one has to create the project, edit the settings.py file, create the polls app, edit the models.py file (which requires knowing the classes present in the models module), edit the urls.py file, etc. My spaced-repetition method didn't work very well for this type of learning, so I wanted to ask you what method(s) you use for learning different frameworks and APIs.
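    For what it's worth, a hedged sketch of what a couple of the flash-card recipes mentioned above might look like as runnable Python (the directory and test strings are illustrative):

        import os
        import re

        def human(n):
            # human-readable file size
            for unit in ("B", "KB", "MB", "GB"):
                if n < 1024:
                    return "%.1f %s" % (n, unit)
                n /= 1024.0
            return "%.1f TB" % n

        # dictionary of filename -> human-readable size for the current directory
        sizes = dict((name, human(os.path.getsize(name)))
                     for name in os.listdir(".") if os.path.isfile(name))

        # match 0-3 occurrences of the letter M at the start of a string
        print re.match(r"M{0,3}", "MMMIV").group()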

  • Why does C++ not allow multiple types in one auto statement?

    - by Walter
    The 2011 C++ standard introduced the new keyword auto, which can be used for defining variables instead of a type, i.e.

        auto p = make_pair(1, 2.5);                // pair<int,double>
        auto i = std::begin(c), end = std::end(c); // decltype(std::begin(c))

    In the second line, i and end are of the same type, referred to as auto. The standard does not allow

        auto i = std::begin(container), e = std::end(container), x = *i;

    where x would be of a different type. My question: why does the standard not allow this last line? It could be allowed by interpreting auto not as representing some to-be-deduced type, but as indicating that the type of each variable declared auto shall be deduced from its assigned value. Is there any good reason for the C++11 standard not to follow this approach?

    There is actually a use case for this, namely in the initialisation statement of for loops:

        for(auto i=std::begin(c), end=std::end(c), x=*i; i!=end; ++i, x+=*i)
        { ... }

    where the scope of the variables i, end, and x is limited to the for loop. AFAIK, this cannot be achieved in C++ unless those variables have a common type. Is this correct? (Ugly tricks of putting all the types inside a struct excluded.) There may also be use cases in some variadic template applications.

  • PowerPoint displays a "can't start the application" error when an Excel Chart object is embedded in a presentation

    - by A9S6
    This is a very common problem when an Excel worksheet or chart is embedded into Word or PowerPoint. I am seeing the problem in both Word and PowerPoint, and the cause seems to be a COM add-in attached to Excel. The COM add-in is written in C# (.NET). See the attached images for the error dialogs. I debugged the add-in and found very strange behavior: the OnConnection(...), OnDisconnection(...) etc. methods in the COM add-in work fine until I add an event handler to the code, i.e. handle the Worksheet_SheetChange, SelectionChange or any similar event available in Excel. As soon as I add even a single event handler (though my code has several), Word and PowerPoint start complaining and do not activate the embedded object. In some posts on the internet, people have been asked to remove anti-virus add-ins for Office (none in my case), so this makes me believe that the problem is somehow related to COM add-ins that are loaded when the host app activates the object. Does anyone have any idea what's happening here?

  • Very weird C file-handling anomaly

    - by KáGé
    Hello, I have a very weird issue that I can't figure out in my school project, which is a simulation of a simple filesystem in a human-readable text file. Unfortunately I don't yet have enough time to translate the comments in my code or make it less gibberish, so if that bothers you, you don't have to help; I understand. See the code HERE. Now in drive.h, at line 574, is this part:

        i = getline();
        #ifdef DEBUG
        printf("Free space in all found at %d.\n\n", i);
        if(drive.disk != NULL){
            printf("Disk OK\n\n");
        }
        #endif
        //write in data
        state = seekline(i);

    Before this it finds a place for the allocation database entry in the ALL sector (see the "image files" in the mounts folder; this issue was tested on mount_30.efs-dbf), then gets the line with i = getline() just fine (getline is in lglobal.h, line 39). But after that, any file manipulation (in this case seekline's fseek, or, if I comment that out, the first fprintf after it) crashes the program straight away. I think the file somehow gets corrupted (though the Disk OK message appears), but I can't figure out how. I've tried commenting out i = getline(), but it didn't make any difference. I've also asked at local programming forums, but they didn't really help either. The last few lines of the output before it crashes:

        Dir written. (drive.h line 562)
        Seekline entered: 268 (called at drive.h line 564)
        Getline entered. (called at drive.h line 574)
        Line got: 268.
        Free space in all found at 268. (drive.h line 576)
        Seekline entered: 268 (called at drive.h line 582; note that this exact call ran successfully less than 20 lines back. This one should set the pointer to the beginning of the line it is currently in)

    After this it crashes. Does anyone have any idea what causes this and how I could fix it? Thank you.

  • Using PLINQ to calculate and update values within the closure does not work

    - by Keith
    I recently needed to do a running total on a report, where for each group I order the rows and then calculate the running total based on the previous rows within the group. Aha, I thought, a perfect use case for PLINQ! However, when I wrote the code, I got some strange behavior. The values I was modifying showed as modified when stepping through the debugger, but when they were accessed they were always zero. Sample code:

        class Item
        {
            public int PortfolioID;
            public int TAAccountID;
            public DateTime TradeDate;
            public decimal Shares;
            public decimal RunningTotal;
        }

        List<Item> itemList = new List<Item>
        {
            new Item { PortfolioID = 1, TAAccountID = 1, TradeDate = new DateTime(2010, 5, 1), Shares = 5.335m, },
            new Item { PortfolioID = 1, TAAccountID = 1, TradeDate = new DateTime(2010, 5, 2), Shares = -2.335m, },
            new Item { PortfolioID = 2, TAAccountID = 1, TradeDate = new DateTime(2010, 5, 1), Shares = 7.335m, },
            new Item { PortfolioID = 2, TAAccountID = 1, TradeDate = new DateTime(2010, 5, 2), Shares = -3.335m, },
        };

        var found = (from i in itemList
                     where i.TAAccountID == 1
                     select new Item
                     {
                         TAAccountID = i.TAAccountID,
                         PortfolioID = i.PortfolioID,
                         Shares = i.Shares,
                         TradeDate = i.TradeDate,
                         RunningTotal = 0
                     });

        found.AsParallel().ForAll(x =>
        {
            var prevItems = found.Where(i => i.PortfolioID == x.PortfolioID &&
                                             i.TAAccountID == x.TAAccountID &&
                                             i.TradeDate <= x.TradeDate);
            x.RunningTotal = prevItems.Sum(s => s.Shares);
        });

        foreach (Item i in found)
        {
            Console.WriteLine("Running total: {0}", i.RunningTotal);
        }
        Console.ReadLine();

    If I change the select for found to end with .ToArray(), then it works fine and I get calculated results. Any ideas what I am doing wrong?
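    The usual explanation for symptoms like this is LINQ's deferred execution: the query behind found builds new Item objects every time it is enumerated, so the instances mutated inside ForAll are not the ones read in the final loop, which is exactly why ToArray() "fixes" it. A hedged sketch of the materialise-once variant, reusing the Item class above:

        // materialise the projection once so every later enumeration sees the
        // same Item instances that ForAll mutated
        var found = itemList
            .Where(i => i.TAAccountID == 1)
            .Select(i => new Item
            {
                TAAccountID = i.TAAccountID,
                PortfolioID = i.PortfolioID,
                Shares = i.Shares,
                TradeDate = i.TradeDate,
                RunningTotal = 0
            })
            .ToList();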

  • Diagnosing IIS Shutdowns

    - by Tom Ritter
    Symptoms: I attach a debugger, I wait a little while, and it automatically detaches. I watch the event log during normal operation: after a single request comes in, it waits a little bit, then shuts down.

    Diagnosing: I've followed the steps in these articles for logging shutdowns in IIS:

    http://weblogs.asp.net/scottgu/archive/2005/12/14/433194.aspx
    http://blogs.msdn.com/tess/archive/2006/08/02/asp-net-case-study-lost-session-variables-and-appdomain-recycles.aspx

    I know these are working because of what I see in the event log when I change the web.config:

        The description for Event ID 0 from source ASP.NET 2.0.50727.0 cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer. If the event originated on another computer, the display information had to be saved with the event. The following information was included with the event:
        _shutdownMessage=IIS configuration change
        HostingEnvironment initiated shutdown
        CONFIG change
        CONFIG change
        HostingEnvironment caused shutdown
        _shutdownStack=
            at System.Environment.GetStackTrace(Exception e, Boolean needFileInfo)
            at System.Environment.get_StackTrace()
            at System.Web.Hosting.HostingEnvironment.InitiateShutdownInternal()
            at System.Web.Hosting.HostingEnvironment.InitiateShutdown()
            at System.Web.Hosting.PipelineRuntime.StopProcessing()
        the message resource is present but the message is not found in the string/message table

    But it doesn't help, because the mystery shutdown doesn't tell me anything. I see the same thing as before I added the extra logging:

        The description for Event ID 0 from source ASP.NET 2.0.50727.0 cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer. If the event originated on another computer, the display information had to be saved with the event. The following information was included with the event:
        _shutdownMessage=HostingEnvironment initiated shutdown
        HostingEnvironment caused shutdown
        _shutdownStack=
            at System.Environment.GetStackTrace(Exception e, Boolean needFileInfo)
            at System.Environment.get_StackTrace()
            at System.Web.Hosting.HostingEnvironment.InitiateShutdownInternal()
            at System.Web.Hosting.HostingEnvironment.InitiateShutdown()
            at System.Web.Hosting.PipelineRuntime.StopProcessing()
        the message resource is present but the message is not found in the string/message table

    Anyone have any ideas for more debugging?

  • Core Data: Deleting causes 'NSObjectInaccessibleException' from NSOperation with a reference to a deleted object

    - by Bryan Irace
    My application has NSOperation subclasses that fetch and operate on managed objects. My application also periodically purges rows from the database, which can result in the following race condition:

    1. A background operation fetches a bunch of objects (from a thread-specific context). It will iterate over these objects and do something with their properties.
    2. A bunch of rows are deleted in the main managed object context.
    3. The background operation accesses a property on an object that was deleted from the main context. This results in an NSObjectInaccessibleException, reason: "CoreData could not fulfill a fault".

    Ideally, the objects fetched by the NSOperation could still be operated on even if one is deleted in the main context. The best ways I can think of to achieve this are to:

    1. Call [request setReturnsObjectsAsFaults:NO] to ensure that Core Data won't try to fulfill a fault for an object that no longer exists in the main context. The problem here is that I may need to access the object's relationships, which (to my understanding) will still be faulted.
    2. Iterate through the managed objects up front and copy the properties I will need into separate non-managed objects. The problem here is that (I think) I will need to synchronize/lock this part, in case an object is deleted in the main context before I can finish copying.

    Am I missing something obvious? It doesn't seem like what I'm trying to accomplish is too out of the ordinary. Thanks for your help.

  • Testing broadcasting and receiving messages

    - by Avik
    I'm having some difficulty figuring this out: I am trying to test whether my C# code to broadcast a message and receive it works. The code to send the datagram (in this case the hostname) is:

        public partial class Form1 : Form
        {
            String hostName;
            byte[] hostBuffer = new byte[1024];

            public Form1()
            {
                InitializeComponent();
                StartNotification();
            }

            public void StartNotification()
            {
                IPEndPoint notifyIP = new IPEndPoint(IPAddress.Broadcast, 6000);
                hostName = Dns.GetHostName();
                hostBuffer = Encoding.ASCII.GetBytes(hostName);
                UdpClient newUdpClient = new UdpClient();
                newUdpClient.Send(hostBuffer, hostBuffer.Length, notifyIP);
            }
        }

    And the code to receive the datagram is:

        public partial class Form1 : Form
        {
            byte[] receivedNotification = new byte[1024];
            String notificationReceived;
            StringBuilder listBox;
            UdpClient udpServer;
            IPEndPoint remoteEndPoint;

            public Form1()
            {
                InitializeComponent();
                udpServer = new UdpClient(new IPEndPoint(IPAddress.Any, 1234));
                remoteEndPoint = null;
                startUdpListener1();
            }

            public void startUdpListener1()
            {
                receivedNotification = udpServer.Receive(ref remoteEndPoint);
                notificationReceived = Encoding.ASCII.GetString(receivedNotification);
                listBox = new StringBuilder(this.listBox1.Text);
                listBox.AppendLine(notificationReceived);
                this.listBox1.Items.Add(listBox.ToString());
            }
        }

    For the receiving side I have a form that has only a listbox (listBox1). The problem is that when I run the receiving code, the program runs but the form isn't visible. However, when I comment out the call to startUdpListener1(), the purpose isn't served but the form is visible. What's going wrong?
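    Two things stand out in the snippets above: the sender broadcasts to port 6000 while the receiver binds port 1234, and udpServer.Receive() blocks the UI thread inside the constructor, which is why the form never gets a chance to appear. A hedged sketch of receiving on a background thread instead, started from the form's Load event rather than the constructor (assumes the same udpServer field and listBox1 as above):

        public void startUdpListener1()
        {
            var worker = new System.Threading.Thread(() =>
            {
                while (true)
                {
                    IPEndPoint sender = null;
                    byte[] data = udpServer.Receive(ref sender);   // blocks here, off the UI thread
                    string text = Encoding.ASCII.GetString(data);
                    // marshal the update back onto the UI thread
                    this.BeginInvoke((MethodInvoker)(() => listBox1.Items.Add(text)));
                }
            });
            worker.IsBackground = true;
            worker.Start();
        }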

  • PostgreSQL, Foreign Keys, Insert speed & Django

    - by Miles
    A few days ago, I ran into an unexpected performance problem with a pretty standard Django setup. For an upcoming feature, we have to regenerate a table hourly, containing about 100k rows of data, 9 MB on disk, 10 MB of indexes according to pgAdmin.

    The problem is that inserting them by any method literally takes ages, up to 3 minutes of 100% disk-busy time. That's not something you want on a production site. It doesn't matter whether the inserts are in a transaction or issued via plain INSERT, multi-row INSERT, COPY FROM or even INSERT INTO t1 SELECT * FROM t2.

    After noticing this wasn't Django's fault, I followed a trial-and-error route, and hey, the problem disappeared after dropping all foreign keys! Instead of 3 minutes, the INSERT INTO ... SELECT FROM took less than a second to execute, which isn't too surprising for a table <= 20 MB on disk. What is weird is that PostgreSQL manages to slow down the inserts by 180x just by using 3 foreign keys.

    Oh, and the disk activity was pure writing, as everything is cached in RAM; only writes go to the disks. It looks like PostgreSQL is working very hard to touch every row in the referenced tables, as 3 MB/s * 180 s is way more data than the 20 MB this new table takes on disk. There was no WAL for the 180 s case; I was testing in psql directly (in Django, add ~50% overhead for WAL logging). I tried @commit_on_success, same slowness; I had even implemented multi-row INSERT and COPY FROM with psycopg2. That's another weird thing: how can 10 MB worth of inserts generate 10x 16 MB log segments?

    Table layout: id serial primary key, a bunch of int32 columns, and 3 foreign keys to:
    - a small table, 198 rows, 16 kB on disk
    - a large table, 1.2M rows, 59 MB data + 89 MB index on disk
    - a large table, 2.2M rows, 198 MB data + 210 MB index

    So, am I doomed to either drop the foreign keys manually or use the table in a very un-Django way by saving bla_id x3 myself and skipping models.ForeignKey? I'd love to hear about some magical antidote / pg setting to fix this.
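    Not a fix for the underlying slowness, but since dropping the foreign keys is what made the load fast, one hedged option is to drop and re-add them around the hourly regeneration, inside the same transaction so readers never see the table without its constraints. Table, column and constraint names below are illustrative:

        BEGIN;
        ALTER TABLE hourly_table
            DROP CONSTRAINT hourly_table_small_id_fkey,
            DROP CONSTRAINT hourly_table_big1_id_fkey,
            DROP CONSTRAINT hourly_table_big2_id_fkey;
        TRUNCATE hourly_table;
        INSERT INTO hourly_table SELECT * FROM staging_table;
        ALTER TABLE hourly_table
            ADD CONSTRAINT hourly_table_small_id_fkey
                FOREIGN KEY (small_id) REFERENCES small_table (id),
            ADD CONSTRAINT hourly_table_big1_id_fkey
                FOREIGN KEY (big1_id) REFERENCES big_table_1 (id),
            ADD CONSTRAINT hourly_table_big2_id_fkey
                FOREIGN KEY (big2_id) REFERENCES big_table_2 (id);
        COMMIT;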

  • How to extract a block of XML from a log file on Linux

    - by dragonmantank
    I have a log file that looks like the following:

        2010-05-12 12:23:45 Some sort of log entry
        2010-05-12 01:45:12 Request XML:
        <RootTag>
          <Element>Value</Element>
          <Element>Another Value</Element>
        </RootTag>
        2010-05-12 01:45:32 Response XML:
        <ResponseRoot>
          <Element>Value</Element>
        </ResponseRoot>
        2010-05-12 01:45:49 Another log entry

    What I want to do is extract the Request and Response XML (and ultimately dump them into their own single files). I had a similar parser that used egrep, but there the XML was all on one line, not multiple lines like above. The log files are also somewhat large, hitting 500-600 megs per log. Smaller logs I would read in via a PHP script and use regex matching, but the amount of memory required for such a large file would more than likely kill the script. Is there an easy way, using the built-in tools on a Linux box (CentOS in this case), to extract multiple lines, or am I going to have to bite the bullet and use Perl or PHP to read in the entire file to extract it?
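    For what it's worth, a hedged sketch of a streaming approach (Python here, but the same idea works in Perl): read line by line so memory stays flat regardless of the 500-600 MB file size, start a block at a "Request XML:"/"Response XML:" marker, and end it at the next timestamped line. The file names and timestamp pattern are assumptions based on the sample above:

        import re

        stamp = re.compile(r"^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2} ")
        marker = re.compile(r"^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2} (Request|Response) XML:")

        blocks = []          # list of (kind, list_of_lines)
        current = None
        with open("app.log") as log:
            for line in log:
                m = marker.match(line)
                if m:
                    current = []
                    blocks.append((m.group(1), current))
                elif stamp.match(line):
                    current = None           # any other timestamped entry ends the block
                elif current is not None:
                    current.append(line)

        for n, (kind, lines) in enumerate(blocks):
            with open("%s_%04d.xml" % (kind.lower(), n), "w") as out:
                out.writelines(lines)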

  • Are document-oriented databases any more suitable than relational ones for persisting objects?

    - by Owen Fraser-Green
    In terms of database usage, the last decade was the age of the ORM, with hundreds competing to persist our object graphs in plain old-fashioned RDBMSs. Now we seem to be witnessing the coming of age of document-oriented databases. These databases are highly optimized for schema-free documents, but are also very attractive for their ability to scale out and query a cluster in parallel.

    Document-oriented databases also hold a couple of advantages over RDBMSs for persisting data models in object-oriented designs. As the tables are schema-free, one can store objects belonging to different classes in an inheritance hierarchy side by side. Also, as the domain model changes, so long as the code can cope with getting back objects from an old version of the domain classes, one can avoid having to migrate the whole database at every change.

    On the other hand, the performance benefits of document-oriented databases mainly appear when storing deeper documents: in object-oriented terms, classes which are composed of other classes, for example a blog post and its comments. In most of the examples I can come up with, though, such as the blog one, the gain in read access would appear to be offset by the penalty of having to write the whole blog post "document" every time a new comment is added.

    It looks to me as though document-oriented databases can bring significant benefits to object-oriented systems if one takes extreme care to organize the objects in deep graphs optimized for the way the data will be read and written, but this means knowing the use cases up front. In the real world, we often don't know until we actually have a live implementation we can profile. So is the case of relational vs. document-oriented databases one of swings and roundabouts? I'm interested in people's opinions and advice, in particular whether anyone has built any significant applications on a document-oriented database.

  • How to define a "complicated" ComputedColumn in SQL Server?

    - by Slauma
    SQL Server beginner question: I'm trying to introduce a computed column in SQL Server (2008). In the table designer of SQL Server Management Studio I can do this, but the designer only offers me a single edit cell to define the expression for the column. Since my computed column will be rather complicated (depending on several database fields and with some case differentiations), I'd like a more comfortable and maintainable way to enter the column definition (including line breaks for formatting and so on).

    I've seen that there is an option to define functions in SQL Server (scalar-valued or table-valued functions). Is it perhaps better to define such a function and use it as the column specification? And what kind of function (scalar-valued, table-valued)?

    To make a simplified example, I have two database columns:

        DateTime1 (smalldatetime, NULL)
        DateTime2 (smalldatetime, NULL)

    Now I want to define a computed column "Status" which can have four possible values. In dummy language:

        if (DateTime1 IS NULL and DateTime2 IS NULL) set Status = 0
        else if (DateTime1 IS NULL and DateTime2 IS NOT NULL) set Status = 1
        else if (DateTime1 IS NOT NULL and DateTime2 IS NULL) set Status = 2
        else set Status = 3

    Ideally I would like to have a function GetStatus() which can access the column values of the table row whose "Status" I want to compute, and then define the computed column specification as just GetStatus() without parameters. Is that possible at all? Or what is the best way to work with "complicated" computed column definitions? Thank you for any tips in advance!
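    For the simplified example, a hedged sketch of what the column specification itself could look like as a single CASE expression (the table name is illustrative); a scalar-valued function taking the two dates as parameters could wrap the same expression if reuse matters more than inlining:

        ALTER TABLE dbo.MyTable
        ADD Status AS (
            CASE
                WHEN DateTime1 IS NULL     AND DateTime2 IS NULL     THEN 0
                WHEN DateTime1 IS NULL     AND DateTime2 IS NOT NULL THEN 1
                WHEN DateTime1 IS NOT NULL AND DateTime2 IS NULL     THEN 2
                ELSE 3
            END
        );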

  • Is it legal to extend the Class class?

    - by spiralganglion
    I've been writing a system that generates some templates, and then generates some objects based on those templates. I had the idea that the templates could be extensions of the Class class, but that resulted in some magnificent errors:

        VerifyError: Error #1107: The ABC data is corrupt, attempt to read out of bounds.

    What I'm wondering is whether subclassing Class is even possible, and whether there is some case where doing this would be appropriate rather than just a gross misuse of OOP. I believe it should be possible, as ActionScript allows you to create variables of type Class. This use is described in the LiveDocs entry for Class, yet I've seen no mention of subclassing Class. Here's a pseudocode example:

        class Foo extends Class

        var a:Foo = new Foo();
        trace(a is Class); // true, right?
        var b = new a();

    I have no idea what the type of b would be. In summary: can you subclass the Class class? If so, how can you do it without errors, and what type are the instances of the instances of the Class subclass?

  • What is the best open-source Java package to filter/match URLs?

    - by Boaz
    Hi, I have a high-performance application which deals with URLs. For every URL it needs to retrieve the appropriate settings from a predefined pool. Every settings object is associated with a URL pattern which indicates which URLs should use those settings. The matching rules are as follows:

    1. A "google.com" pattern should match all URLs pointing to the google domain (thus maps.google.com and www.google.com/match are matched).
    2. A "*.google.com" pattern should match all URLs pointing to a subdomain of google.com (thus maps.google.com matches, but google.com and www.google.com don't).
    3. A "maps.google.com" pattern should match all URLs pointing to that specific subdomain.

    Apart from the above rules, every match rule can contain a path, which means that the path part of the URL should start with the match rule's path. So "*.google.com/maps" matches "maps.google.com/maps" but not "maps.google.com/advanced".

    As you can see, the rules above overlap. When two rules match the same URL, the most specific one should apply; the list above is ranked from least specific to most specific. This seems like such a standard problem that I was hoping to use a ready-made library rather than write it myself. Google reveals a couple of options, but without a clear way to choose between them. What would you recommend as a good library for this task? Thanks, Boaz

  • How to do parallel processing in a Unix shell script?

    - by Bikram Agarwal
    I have a shell script that transfers a build.xml file to a remote Unix machine (devrsp02) and executes the ANT task wldeploy on that machine. Now, this wldeploy task takes around 15 minutes to complete, and while it is running the last line at the Unix console is "task {some digit} initialized". Once the task is complete, we get a "task Completed" message, and only then is the next task in the script executed.

    But sometimes there might be a problem with the WebLogic domain and the deployment might fail internally, with no effect on the status of the wldeploy task: the Unix console will still be stuck at "task {some digit} initialized". The error from the deployment gets logged in a file called output.a.

    So, what I want now is to start a time counter before running wldeploy. If wldeploy runs for more than 15 minutes, one of the following commands should be run:

        tail -f output.a   ## without terminating the wldeploy
    or
        cat output.a       ## after terminating the wldeploy forcefully

    Note that I can't run the wldeploy task in the background, because in that case the user won't get to know when the task is complete, which is crucial for this script. Could you please suggest anything to achieve this?
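    A hedged sketch of one way to keep the user waiting at the console while still reacting after 15 minutes: run wldeploy in the background but wait on it in the same script, polling a counter, so the user still sees when it finishes (the ant invocation and file names are illustrative):

        ant -f build.xml wldeploy > wldeploy.out 2>&1 &
        deploy_pid=$!
        elapsed=0

        while kill -0 "$deploy_pid" 2>/dev/null; do
            sleep 60
            elapsed=`expr $elapsed + 60`
            if [ "$elapsed" -ge 900 ]; then
                echo "wldeploy still running after 15 minutes, dumping output.a:"
                cat output.a
                kill "$deploy_pid"      # or skip the kill and run 'tail -f output.a' instead
                break
            fi
        done

        wait "$deploy_pid"
        echo "wldeploy finished with exit status $?"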

  • With NHibernate and Transaction, do I rollback on commit failure or does it auto rollback on a single commit failure?

    - by mattcodes
    I've built the following Dispose method for my unit of work, which essentially wraps the active NHibernate session and transaction (the transaction is stored in a field right after opening the session, so that it isn't replaced if the NHibernate session gets a new transaction after an error):

        public void Dispose()
        {
            Func<ITransaction, bool> transactionStateOkayFunc =
                trans => trans != null && trans.IsActive && !trans.WasRolledBack;

            try
            {
                if (transactionStateOkayFunc(this.transaction))
                {
                    if (HasErrored)
                    {
                        transaction.Rollback();
                    }
                    else
                    {
                        try
                        {
                            transaction.Commit();
                        }
                        catch (Exception)
                        {
                            if (transactionStateOkayFunc(transaction))
                                transaction.Rollback();
                            throw;
                        }
                    }
                }
            }
            finally
            {
                if (transaction != null)
                    transaction.Dispose();
                if (session.IsOpen)
                    session.Close();
            }
        }

    I can't help feeling that this code is a little bloated. Will a transaction automatically roll back if a discrete Commit fails, in the case of non-nested transactions? Will Commit or Rollback automatically Dispose the transaction? If not, will Session.Close() automatically dispose the associated transaction?
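    For comparison, the commonly shown NHibernate idiom leans on the behaviour that disposing an ITransaction that was never committed rolls it back, which removes most of the explicit state checks. A hedged sketch, not a drop-in replacement for the unit of work above:

        using (ISession session = sessionFactory.OpenSession())
        using (ITransaction tx = session.BeginTransaction())
        {
            // ... work against the session ...
            tx.Commit();
            // if Commit() throws, control leaves the using block without a commit,
            // and disposing the uncommitted transaction rolls it back
        }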

  • Wrapping ASPX user control commands in a transaction

    - by Hans Gruber
    I'm working on a heavily dynamic and configurable CMS system. Therefore, many pages are composed of a dynamically loaded set of user controls. To enable loose coupling between containers (pages) and children (user controls), all user controls are responsible for their own persistence. Each user control is wired up to its data/service layer dependencies via IoC. They also implement an IPersistable interface, which allows the container .aspx page to issue a Save command to its children without knowledge of the number or exact nature of these user controls. Note: what follows is only pseudo-code:

        public class MyUserControl : IPersistable, IValidatable
        {
            public void Save()
            {
                throw new NotImplementedException();
            }

            public bool IsValid()
            {
                throw new NotImplementedException();
            }
        }

        public partial class MyPage
        {
            public void btnSave_Click(object sender, EventArgs e)
            {
                foreach (IValidatable control in Controls)
                {
                    if (!control.IsValid())
                    {
                        throw new Exception("error");
                    }
                }

                foreach (IPersistable control in Controls)
                {
                    control.Save();
                }
            }
        }

    I'm thinking of using declarative transactions from the System.EnterpriseServices namespace to wrap btnSave_Click in a transaction in case of an exception, but I'm not sure how this might be achieved or what pitfalls such an approach has.
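    One alternative sometimes reached for instead of System.EnterpriseServices is System.Transactions: an ambient TransactionScope around the save loop, which enlisting connections (SqlConnection enlists automatically) will join, so nothing commits unless every Save() succeeds. A hedged sketch against the pseudo-code above:

        using (var scope = new System.Transactions.TransactionScope())
        {
            foreach (IPersistable control in Controls)
            {
                control.Save();
            }
            scope.Complete();   // never reached on an exception, so the whole scope rolls back
        }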

  • Searching a column containing CSV data in a MySQL table for existence of input values

    - by Adarsh R
    Hi, I have a table, say ITEM, in MySQL that stores data as follows:

        ID   FEATURES
        --------------------
        1    AB,CD,EF,XY
        2    PQ,AC,A3,B3
        3    AB,CDE
        4    AB1,BC3
        --------------------

    As input I will get a CSV string, something like "AB,PQ". I want to get the records that contain AB or PQ. I realized that we'd have to write a MySQL function to achieve this. So, if we had this magical function MATCH_ANY defined in MySQL, I would then simply execute:

        select * from ITEM where MATCH_ANY(FEATURES, "AB,PQ") = 0

    The above query would return records 1, 2 and 3. But I'm running into all sorts of problems while implementing this function, as I realized that MySQL doesn't support arrays and there's no simple way to split strings on a delimiter. Remodeling the table is the last option for me, as it involves a lot of issues. I might also want to execute queries containing multiple MATCH_ANY calls, such as:

        select * from ITEM where MATCH_ANY(FEATURES, "AB,PQ") = 0 and MATCH_ANY(FEATURES, "CDE") = 0

    In that case we would get the intersection of records (1, 2, 3) and (3), which is just 3. Any help is deeply appreciated. Thanks
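    For what it's worth, MySQL already ships a built-in for exactly this comma-separated-list test, FIND_IN_SET(needle, csv_column). A hedged sketch of the two example queries rewritten with it (note it still cannot use an index, so remodeling into a join table remains the scalable route):

        -- records whose FEATURES contain AB or PQ  ->  1, 2, 3
        SELECT * FROM ITEM
        WHERE FIND_IN_SET('AB', FEATURES) > 0
           OR FIND_IN_SET('PQ', FEATURES) > 0;

        -- (AB or PQ) and CDE  ->  3
        SELECT * FROM ITEM
        WHERE (FIND_IN_SET('AB', FEATURES) > 0 OR FIND_IN_SET('PQ', FEATURES) > 0)
          AND FIND_IN_SET('CDE', FEATURES) > 0;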

  • Using SiteMesh with RequestDispatcher's forward()

    - by Rob Hruska
    I'm attempting to integrate SiteMesh into a legacy application, using Tomcat 5 as my container. I have a main.jsp that I'm decorating with a simple decorator. In decorators.xml I've got just one decorator defined:

        <decorators defaultdir="/decorators">
            <decorator name="layout-main" page="layout-main.jsp">
                <pattern>/jsp/main.jsp</pattern>
            </decorator>
        </decorators>

    This decorator works if I manually go to http://example.com/my-webapp/jsp/main.jsp. However, there are a few places where a servlet, instead of doing a redirect to a JSP, does a forward:

        getServletContext().getRequestDispatcher("/jsp/main.jsp").forward(request, response);

    This means that the URL remains at something like http://example.com/my-webapp/servlet/MyServlet instead of the JSP file, and the page is therefore not decorated, I presume because it doesn't match the pattern in decorators.xml. I can't use <pattern>/*</pattern> because there are other JSPs that should not be decorated by layout-main.jsp. I can't use <pattern>/servlet/MyServlet*</pattern> because MyServlet may forward to main.jsp sometimes and perhaps error.jsp at other times. Is there a way to work around this without extensive changes to how the servlets work? Since it's a legacy app I don't have much freedom to change things, so I'm hoping for something configuration-wise that will fix this. SiteMesh's documentation really isn't that great; I've been working mostly off the example application that comes with the distribution. I really like SiteMesh, and am hoping I can get it to work in this case.

  • Parsing two-dimensional text

    - by alexbw
    I need to parse text files where the relevant information is often spread across multiple lines in a nonlinear way. An example:

        1234
         1        IN THE SUPERIOR COURT OF THE STATE OF SOME STATE
         2            IN AND FOR THE COUNTY OF SOME COUNTY
         3                  UNLIMITED JURISDICTION
         4                       --o0o--
         5
         6   JOHN SMITH and JILL SMITH,          )
                                                 )
         7                  Plaintiffs,          )
                                                 )
         8        vs.                            )  No. 12345
                                                 )
         9   ACME CO, et al.,                    )
                                                 )
        10                  Defendants.          )
             ___________________________________)

    I need to pull out the plaintiff and defendant identities. These transcripts have a very wide variety of formats, so I can't always count on those nice parentheses being there, or on the plaintiff and defendant information being neatly boxed off, e.g.:

         1   SUPREME COURT OF THE STATE OF SOME OTHER STATE
             COUNTY OF COUNTYVILLE
         2   First Judicial District                 Important Litigation
         3   --------------------------------------------------X
                                                     THIS DOCUMENT APPLIES TO:
         4   JOHN SMITH,
         5                Plaintiff,                 Index No. 2000-123
         6                                           DEPOSITION
         7        - against -                        UNDER ORAL EXAMINATION
         8                                           OF JOHN SMITH,
         9                                           Volume I
        10   ACME CO, et al,
        11                Defendants.
        12   --------------------------------------------------X

    The two constants are:

    1. "Plaintiff" will occur after the name of the plaintiff(s), but not necessarily on the same line.
    2. Plaintiffs' and defendants' names will be in upper case.

    Any ideas?

  • WebFaction Apache + mod_wsgi + Django configuration issue

    - by Dmitry Guyvoronsky
    A problem that I stumbled upon recently and, even though I solved it, I would like to hear your opinion on what the correct/simple/accepted solution would be.

    I'm developing a website using Django + Python. When I run it on my local machine with "python manage.py runserver", the local address is http://127.0.0.1:8000/ by default. However, on the production server my app has a different URL, with a path, like "http://server.name/myproj/". I need to generate and use permanent URLs. If I'm using {% url view params %}, I get paths that are relative to /, since my urls.py contains this:

        urlpatterns = patterns('',
            (r'^(\d+)?$', 'myproj.myapp.views.index'),
            (r'^img/(.*)$', 'django.views.static.serve', {'document_root': settings.MEDIA_ROOT + '/img' }),
            (r'^css/(.*)$', 'django.views.static.serve', {'document_root': settings.MEDIA_ROOT + '/css' }),
        )

    So far I see two solutions:

    1. modify urls.py to include '/myproj/' in the case of a production run
    2. use request.build_absolute_uri() for creating links in views.py, or pass some variable with 'hostname:port/path' to the templates

    Are there prettier ways to deal with this problem? Thank you.

    Update: Well, the problem seems to be not in Django but in the way WebFaction configures WSGI. The Apache configuration for an application with the URL "hostname.com/myapp" contains the following line:

        WSGIScriptAlias / /home/dreamiurg/webapps/pinfont/myproject.wsgi

    So SCRIPT_NAME is empty, and the only solution I see is to move to mod_python or serve my application from the root. Any ideas?
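    One hedged possibility, since SCRIPT_NAME is empty under that WSGIScriptAlias: Django's FORCE_SCRIPT_NAME setting tells reverse() and {% url %} to prefix every generated path, and can live in a production-only settings override. The '/myproj' value below just mirrors the example URL above:

        # production settings override (a sketch; path values are illustrative)
        FORCE_SCRIPT_NAME = '/myproj'

        # static prefixes usually need the same treatment
        MEDIA_URL = '/myproj/media/'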
