Search Results

Search found 5679 results on 228 pages for 'kill processes'.


  • Which apache worker to use with passenger and how?

    - by Millisami
    I've this config in my apache2.conf:

        <IfModule mpm_prefork_module>
            StartServers 5
            MinSpareServers 5
            MaxSpareServers 10
            MaxClients 150
            MaxRequestsPerChild 0
        </IfModule>

        # worker MPM
        # StartServers: initial number of server processes to start
        # MaxClients: maximum number of simultaneous client connections
        # MinSpareThreads: minimum number of worker threads which are kept spare
        # MaxSpareThreads: maximum number of worker threads which are kept spare
        # ThreadsPerChild: constant number of worker threads in each server process
        # MaxRequestsPerChild: maximum number of requests a server process serves
        <IfModule mpm_worker_module>
            StartServers 2
            MaxClients 15
            MinSpareThreads 4
            MaxSpareThreads 5
            ThreadsPerChild 15
            MaxRequestsPerChild 50000
        </IfModule>

    Now I'm confused: which module gets loaded under which conditions? The Phusion guys have suggested using the worker module. Since both are present in the Apache conf file, do I have to comment out the mpm_prefork_module section or leave it as it is? Following is my Passenger conf file for Apache:

        LoadModule passenger_module /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/ext/apache2/mod_passenger.so
        PassengerRoot /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4
        PassengerRuby /usr/bin/ruby1.8
        PassengerMaxPoolSize 3
        PassengerPoolIdleTime 999999
        RailsFrameworkSpawnerIdleTime 0
        RailsAppSpawnerIdleTime 0

    I'm running just a single Rails 2.3.2 app on a 256MB slice at Slicehost. I'm not quite satisfied with the performance yet. Are the settings above any good?

    Read the article

  • Efficient Method for Preventing Hotlinking via .htaccess

    - by Michael Robinson
    I need to confirm something before I go accuse someone of ... well I'd rather not say.

    The problem: We allow users to upload images and embed them within text on our site. In the past we allowed users to hotlink to our images as well, but due to server load we unfortunately had to stop this.

    Current "solution": The method the programmer used to solve our "too many connections" issue was to rename the file that receives and processes image requests (image_request.php) to image_request2.php, and replace the contents of the original with:

        <?php header("HTTP/1.1 500 Internal Server Error"); ?>

    Obviously this has caused all images with their src attribute pointing to the original image_request.php to be broken, and it is also the wrong code to be sending in this case.

    Proposed solution: I feel a more elegant solution would be, in .htaccess:

        1. If the request is for image_request.php, check the referrer.
        2. If the referrer is not our site, send the appropriate header.
        3. If the referrer is our site, proceed to image_request.php and process the image request.

    What I would like to know is: compared to simply returning a 500 for each request to image_request.php, how much more load would be incurred if we were to use my proposed alternative solution outlined above? Is there a better way to do this? Our main concern is that the site stays up. I am not willing to agree that breaking all internally linked images is the best / only way to solve this. I refuse to tell our users that because of something WE changed they must now manually change the embed code in all their previously uploaded content.

    Read the article

  • Culture Sensitive GetHashCode

    - by user114928
    Hi, I'm writing a C# application that will process some text and provide basic query functions. In order to ensure the best possible support for other languages, I am allowing the users of the application to specify the System.Globalization.CultureInfo (via the "en-GB" style code) and also the full range of collation options using the System.Globalization.CompareOptions flags enum. For regular string comparison I'm then using a combination of:

    a) the String.Compare overload that accepts the culture and options;
    b) for some bulk processes, caching the byte data (KeyData) from CompareInfo.GetSortKey (the overload that accepts the options) and using a byte-by-byte comparison of the KeyData.

    This seemed fine (although please comment if you think these two methods shouldn't be mixed), but then I had reason to use the HashSet<T> class, which only has an overload for IEqualityComparer<T>. MS documentation seems to suggest that I should use StringComparer (which implements both IEqualityComparer<string> and IComparer<string>), but this only seems to support the "IgnoreCase" option from CompareOptions and not "IgnoreKanaType", "IgnoreSymbols", "IgnoreWidth" etc. I'm assuming that a StringComparer that ignores these other options could produce different hashcodes for two strings that might be considered the same using my other comparison options. I'd therefore get incorrect results from my application.

    My only thought at the moment is to create my own IEqualityComparer<string> that generates a hashcode from the SortKey.KeyData and compares equality by using the String.Compare overload. Any suggestions?

    Read the article

  • Atomic int writes on file

    - by Waneck
    Hello! I'm writing an application that will have to be able to handle many concurrent accesses to it, both by threads and by processes, so no mutexes or locks should be applied to this. To keep the use of locks to a minimum, I'm designing the file to be "append-only": all data is first appended to disk, and then the address pointing to the info it has updated is changed to refer to the new one. So I will need to implement a small lock system only to change this one int so it refers to the new address. What is the best way to do it?

    I was thinking about maybe putting a flag before the address, so that when it's set, readers use a spin lock until it's released. But I'm afraid that it isn't at all atomic, is it? For example: a reader reads the flag, and it is unset; at the same time, a writer writes the flag and changes the value of the int; the reader may read an inconsistent value!

    I'm looking for locking techniques, but all I find is either thread locking techniques, or ways to lock an entire file, not fields. Is it not possible to do this? How do append-only databases handle this? Thanks! Cauê
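    For the append-only design described above, one common shape is a fixed header slot that is overwritten with the offset of the newest record. The sketch below (Python, with invented names) shows the idea; whether a single small pwrite is atomic with respect to concurrent readers is OS- and filesystem-dependent, which is exactly the torn-read risk being asked about, so treat this as an illustration of the scheme rather than a guaranteed-atomic implementation:

        import os
        import struct

        HEADER_FMT = "<q"  # one little-endian 64-bit offset at the start of the file
        HEADER_SIZE = struct.calcsize(HEADER_FMT)

        def append_record(fd, payload):
            """Append a new record and return the offset it was written at."""
            offset = os.lseek(fd, 0, os.SEEK_END)
            os.write(fd, payload)
            os.fsync(fd)  # the record must be durable before anything points at it
            return offset

        def publish(fd, offset):
            """Swing the header pointer to the new record with one small write."""
            os.pwrite(fd, struct.pack(HEADER_FMT, offset), 0)
            os.fsync(fd)

        def current_offset(fd):
            """Readers locate the live record through the header slot."""
            return struct.unpack(HEADER_FMT, os.pread(fd, HEADER_SIZE, 0))[0]

    Because each record is complete and flushed before the pointer moves, a reader sees either the old offset or the new one, never a half-written record -- provided the header update itself is not torn, which is the part that has to be verified for the target platform.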

    Read the article

  • pluto or jetspeed on google app engine?

    - by Patrick Cornelissen
    I am trying to build something "portlet server"-ish on the Google App Engine (as open source). I'd like to use the JSR 168/286 standards, but I think that the restrictions of the App Engine will make it somewhere between tricky and impossible. Has anyone tried to run Jetspeed, or an application that uses Pluto internally, on the Google App Engine? Based on my current knowledge of portlets and the App Engine, I'm anticipating these problems:

    A war file with portlets is, from the deployment standpoint, more or less a complete webapp (yes, I know that it doesn't really work without a portal server). The war file may contain its own web.xml etc. This makes deployment on the App Engine rather difficult, because the apps are not visible to each other, so all portlet-containing archives need to be included in the war file of the deployed "App Engine based portal server".

    The "portlets" are (at least in Liferay) started as permanent servlet processes, based on their portlet.xmls and web.xmls, which are located in the same spot for every portlet archive that is loaded. I think this may be problematic in the App Engine, because everything is in one big "web app", so it may be tricky to access the portlet.xmls from each archive. This prevents 100% compatibility, in my opinion.

    Is there anyone here who has any experience with the combination of portlets and the App Engine? Do you think it's feasible to modify Jetspeed, Pluto or any other portlet container to run on the App Engine?

    Read the article

  • How can I use `pipe` to facilitate interprocess communication in Perl?

    - by Shiftbit
    Can anyone explain how I can successfully get my processes communicating? I find the perldoc on IPC confusing. What I have so far is:

        $| = 1;
        $SIG{CHLD} = {wait};

        my $parentPid = $$;

        if ($pid = fork()) {
            if ($pid == 0) {
                pipe($parentPid, $$);
                open PARENT, "<$parentPid";
                while (<PARENT>) {
                    print $_;
                }
                close PARENT;
                exit();
            }
            else {
                pipe($parentPid, $pid);
                open CHILD, ">$pid"
                    or error("\nError opening: childPid\nRef: $!\n");
                open (FH, "<list")
                    or error("\nError opening: list\nRef: $!\n");
                while (<FH>) {
                    print CHILD $_;
                }
                close FH
                    or error("\nError closing: list\nRef: $!\n");
                close CHILD
                    or error("\nError closing: childPid\nRef: $!\n");
            }
        }
        else {
            error("\nError forking\nRef: $!\n");
        }

    First: what does perldoc pipe mean by READHANDLE, WRITEHANDLE? Second: can I implement a solution without relying on CPAN or other modules?

    Read the article

  • Processing XML form input in ASP

    - by Omar Kooheji
    I'm maintaining a legacy application which consists of some ASP.NET pages with C# code-behinds and some classic ASP pages. I need to change the way the application accepts its input: instead of reading a set of parameters from some form fields, it must read one form field which contains some XML, and parse that to get the parameters out. I've written a C# class that takes the NameValueCollection from the C# HttpRequest's Form element, like so:

        NameValueCollection form = Request.Form;
        Dictionary<string, string> fieldDictionary = RequestDataExtractor.BuildFieldDictionary(form);

    The code in the class looks for a particular parameter and, if it's there, processes the XML and outputs a Dictionary; if it's not there, it just cycles through the form parameters and puts them all into the dictionary (allowing the old method to still work). How would I do this in classic ASP? Can I use my same class, or a modified version of it? Or do I have to write some new code to get this working? If I have to write ASP code, what's the best way to process the XML in ASP? Sorry if this seems like a stupid question, but I know next to nothing about classic ASP and VB.

    Read the article

  • How to migrate primary key generation from "increment" to "hi-lo"?

    - by Bevan
    I'm working with a moderately sized SQL Server 2008 database (around 120 tables, backups are around 4GB compressed) where all the table primary keys are declared as simple int columns. At present, primary key values are generated by NHibernate with the increment identity generator, which has worked well thus far, but precludes moving to a multiprocessing environment. Load on the system is growing, so I'm evaluating the work required to allow the use of multiple servers accessing a common database backend. Transitioning to the hi-lo generator seems to be the best way forward, but I can't find a lot of detail about how such a migration would work. Will NHibernate automatically create rows in the hi-lo table for me, or do I need to script these manually? If NHibernate does insert rows automatically, does it properly take account of existing key values? If NHibernate does take care of things automatically, that's great. If not, are there any tools to help?

    Update: NHibernate's increment identifier generator works entirely in-memory. It's seeded by selecting the maximum value of used identifiers from the table, but from that point on allocates new values by a simple increment, without reference back to the underlying database table. If any other process adds rows to the table, you end up with primary key collisions. You can run multiple threads within the one process just fine, but you can't run multiple processes. For comparison, the NHibernate identity generator works by configuring the database tables with identity columns, putting control over primary key generation in the hands of the database. This works well, but compromises the unit-of-work pattern. The hi-lo algorithm sits in between these: generation of primary keys is coordinated through the database, allowing for multiprocessing, but actual allocation can occur entirely in memory, avoiding problems with the unit-of-work pattern.
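    A quick sketch of the allocation side of hi-lo may help frame the migration questions above. This is an illustration in Python, not NHibernate's implementation; fetch_and_increment_hi stands in for an atomic read-and-bump of the hi table's next_hi column, and all names here are invented:

        class HiLoGenerator:
            def __init__(self, fetch_and_increment_hi, max_lo=100):
                # fetch_and_increment_hi must atomically read and increment the
                # next_hi value in the database, e.g. inside a short transaction
                self.fetch_and_increment_hi = fetch_and_increment_hi
                self.max_lo = max_lo
                self.lo = max_lo + 1  # force a database round trip on first use
                self.hi = 0

            def next_id(self):
                if self.lo > self.max_lo:
                    self.hi = self.fetch_and_increment_hi()
                    self.lo = 1
                # this block covers ids hi*max_lo+1 .. hi*max_lo+max_lo
                value = self.hi * self.max_lo + self.lo
                self.lo += 1
                return value

    One database round trip therefore hands out max_lo keys, which is what lets allocation happen in memory without breaking the unit-of-work pattern. A migration would presumably also need the hi row seeded so that hi * max_lo clears the largest existing key; whether NHibernate performs that seeding automatically is exactly the open question above.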

    Read the article

  • POST xml to php with apache2

    - by Berry
    I'm working on an application that receives XML data via POST, processes it with a PHP script, and returns an XML response. I'm getting the XML with this PHP code:

        $requestStr = file_get_contents('php://input');
        $requests = simplexml_load_string($requestStr);

    which works fine on the Linux-based product hardware using nginx as the server. However, for testing I'd like to be able to run it on my MacBook Pro, so I can avoid the "build image, install on product, reboot product, wait, test change" loop while I do targeted development on this XML processor. I enabled "web sharing", which starts up Apache, added a rewrite rule to point a convenient URI at my development source directory, and used curl to send a request to my PHP script thus:

        curl -H "Content-Type:text/xml" -d @request.xml http://localhost/test/path/testscript

    "testscript" is handled by the PHP script fine, but when it goes to read php://input I get nothing -- the empty string. Anyone have a clue why this would work under Linux with nginx and not under MacOS with Apache? I've googled and searched stackoverflow.com to no avail. Thanks for any answers.

    UPDATE: I've discovered that at least in this configuration, reading from php://stdin will work fine, while php://input will not. Who knew?

    Read the article

  • Suggestions for a Cron like scheduler in Python?

    - by jamesh
    I'm looking for a library in Python which will provide at and cron like functionality. I'd quite like to have a pure Python solution, rather than relying on tools installed on the box; that way I can run on machines with no cron. For those unfamiliar with cron: you can schedule tasks based upon an expression like:

        0 2 * * 7 /usr/bin/run-backup          # run the backups at 0200 every Sunday
        0 9-17/2 * * 1-5 /usr/bin/purge-temps  # run the purge-temps command every 2 hours between 9am and 5pm, Monday to Friday

    The cron time expression syntax is less important, but I would like to have something with this sort of flexibility. If there isn't something that does this for me out of the box, any suggestions for the building blocks to make something like this would be gratefully received.

    Edit: I'm not interested in launching processes, just "jobs" also written in Python -- Python functions. By necessity I think this would be a different thread, but not a different process. To this end, I'm looking for the expressivity of the cron time expression, but in Python. Cron has been around for years, but I'm trying to be as portable as possible. I cannot rely on its presence.
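    As a sketch of the building blocks, the following is a minimal in-process scheduler, assuming a simplified five-field expression in which each field is either * or a comma-separated list of integers (no ranges or /step syntax, which a real implementation would need to add):

        import datetime
        import threading
        import time

        def parse_field(field, lo, hi):
            """Expand one cron field into the set of matching integers."""
            if field == "*":
                return set(range(lo, hi + 1))
            return {int(v) for v in field.split(",")}

        class CronJob:
            def __init__(self, expr, func):
                minute, hour, dom, month, dow = expr.split()
                self.minutes = parse_field(minute, 0, 59)
                self.hours = parse_field(hour, 0, 23)
                self.doms = parse_field(dom, 1, 31)
                self.months = parse_field(month, 1, 12)
                self.dows = parse_field(dow, 0, 7)  # cron allows 0 or 7 for Sunday
                self.func = func

            def matches(self, t):
                dow = (t.weekday() + 1) % 7  # datetime: Monday=0; cron: Sunday=0
                return (t.minute in self.minutes and t.hour in self.hours
                        and t.day in self.doms and t.month in self.months
                        and (dow in self.dows or (dow == 0 and 7 in self.dows)))

        def run_scheduler(jobs):
            while True:
                now = datetime.datetime.now()
                for job in jobs:
                    if job.matches(now):
                        # jobs are plain Python functions run on their own thread
                        threading.Thread(target=job.func).start()
                time.sleep(60 - now.second)  # wake at the top of the next minute

    Calling run_scheduler([CronJob("0 2 * * 7", run_backup)]) would then fire run_backup in its own thread at 02:00 every Sunday, keeping everything inside one Python process.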

    Read the article

  • C# Process Binary File, Multi-Thread Processing

    - by washtik
    I have the following code that processes a binary file. I want to split the processing workload by using threads, assigning each line of the binary file to threads in the ThreadPool. Processing time for each line is only small, but when dealing with files that might contain hundreds of lines, it makes sense to split the workload. My question is regarding the BinaryReader and thread safety. First of all, is what I am doing below acceptable? I have a feeling it would be better to pass only the binary data for each line to the PROCESS_Binary_Return_lineData method. Please note the code below is conceptual; I'm looking for a bit of guidance on this, as my knowledge of multi-threading is in its infancy. Perhaps there is a better way to achieve the same result, i.e. split the processing of each binary line.

        var dic = new Dictionary<DateTime, Data>();
        var resetEvent = new ManualResetEvent(false);
        using (var b = new BinaryReader(File.Open(Constants.dataFile, FileMode.Open, FileAccess.Read, FileShare.Read)))
        {
            var lByte = b.BaseStream.Length;
            var toProcess = 0;
            while (lByte >= DATALENGTH)
            {
                b.BaseStream.Position = lByte;
                lByte = lByte - AB_DATALENGTH;
                ThreadPool.QueueUserWorkItem(delegate
                {
                    Interlocked.Increment(ref toProcess);
                    var lineData = PROCESS_Binary_Return_lineData(b);
                    lock (dic)
                    {
                        if (!dic.ContainsKey(lineData.DateTime))
                        {
                            dic.Add(lineData.DateTime, lineData);
                        }
                    }
                    if (Interlocked.Decrement(ref toProcess) == 0)
                        resetEvent.Set();
                }, null);
            }
        }
        resetEvent.WaitOne();

    Read the article

  • is there a Universal Model for languages?

    - by Smandoli
    Many programming languages share generic and even fairly universal features. For example, if you compared Java, VB6, .NET, PHP and Python, you would find common features such as control structures, numeric and string manipulation, etc. What has been done to define these features at a meta-language (or language-agnostic) level? UML offers a descriptive reference of software in every aspect, but the real-world focus seems to be data processes. Is UML relevant? I'm not asking "Why don't we have a single language that replaces the current plethora?" We need many different tools, at least in this eon. I'm not asking that all languages fit a template: assembly vs. compiled languages are different enough to make that unfeasible (and some folks call HTML a language, though I wouldn't). Any attempt would start with a properly narrow scope. In line with this, I wouldn't expect the model to cover even a small selection with full validity. I would expect, however, that such a model could be used to transpose from one language to another (with limited goals -- think gist translation).

    Read the article

  • How can I work out what events are being waited for with WinDBG in a kernel debug session

    - by Benj
    I'm a complete WinDbg newbie and I've been trying to debug a Windows XP problem that a customer has sent me, where our software and some third-party software prevent Windows from logging off. I've reproduced the problem and have verified that the logoff problem occurs only when our software and the customer's software are both installed (although not necessarily running at logoff). I've observed that WM_ENDSESSION messages are not reaching the running windows when the user tries to log off, and I know that the third-party software uses a kernel driver. I've been looking at the processes in WinDbg and I know that csrss.exe would normally send all the windows a WM_ENDSESSION message. When I ran the following to look at csrss.exe's stack:

        !process 82356020 6

    I can see:

        WARNING: Frame IP not in any known module. Following frames may be wrong.
        00000000 00000000 00000000 00000000 00000000 0x7c90e514

        THREAD 8246d998  Cid 0248.02a0  Teb: 7ffd7000  Win32Thread: e1627008  WAIT: (WrUserRequest) UserMode Non-Alertable
            8243d9f0  SynchronizationEvent
            81fe0390  SynchronizationEvent
        Not impersonating
        DeviceMap                 e1004450
        Owning Process            82356020       Image:         csrss.exe
        Attached Process          N/A            Image:         N/A
        Wait Start TickCount      1813           Ticks: 20748 (0:00:05:24.187)
        Context Switch Count      3              LargeStack
        UserTime                  00:00:00.000
        KernelTime                00:00:00.000
        Start Address 0x75b67cdf
        Stack Init f80bd000 Current f80bc9c8 Base f80bd000 Limit f80ba000 Call 0
        Priority 14 BasePriority 13 PriorityDecrement 0 DecrementCount 0
        Kernel stack not resident.
        ChildEBP RetAddr  Args to Child
        f80bc9e0 80500ce6 00000000 8246d998 804f9af2 nt!KiSwapContext+0x2e (FPO: [Uses EBP] [0,0,4])
        f80bc9ec 804f9af2 804f986e e1627008 00000000 nt!KiSwapThread+0x46 (FPO: [0,0,0])
        f80bca24 bf80a4a3 00000002 82475218 00000001 nt!KeWaitForMultipleObjects+0x284 (FPO: [Non-Fpo])
        f80bca5c bf88c0a6 00000001 82475218 00000000 win32k!xxxMsgWaitForMultipleObjects+0xb0 (FPO: [Non-Fpo])
        f80bcd30 bf87507d bf9ac0a0 00000001 f80bcd54 win32k!xxxDesktopThread+0x339 (FPO: [Non-Fpo])
        f80bcd40 bf8010fd bf9ac0a0 f80bcd64 00bcfff4 win32k!xxxCreateSystemThreads+0x6a (FPO: [Non-Fpo])
        f80bcd54 8053d648 00000000 00000022 00000000 win32k!NtUserCallOneParam+0x23 (FPO: [Non-Fpo])
        f80bcd54 7c90e514 00000000 00000022 00000000 nt!KiFastCallEntry+0xf8 (FPO: [0,0] TrapFrame @ f80bcd64)

    This waitForMultipleObjects looks interesting, because I'm wondering if csrss.exe is waiting on some event which isn't arriving to allow the logoff. Can anyone tell me how I might find out what event it's waiting for, or anything else I might do to further investigate the problem?

    Read the article

  • ActiveRecord exceptions not rescued

    - by zoopzoop
    I have the following code block:

        unless User.exists?(...)
          begin
            user = User.new(...)
            # Set more attributes of user
            user.save!
          rescue ActiveRecord::RecordInvalid, ActiveRecord::RecordNotUnique => e
            # Check if that user was created in the meantime
            user = User.exists?(...)
            raise e if user.nil?
          end
        end

    The reason is, as you can probably guess, that multiple processes might call this method at the same time to create the user (if it doesn't already exist). So while the first one enters the block and starts initializing a new user, setting the attributes and finally calling save!, the user might already be created. In that case I want to check again if the user exists, and only raise the exception if it still doesn't (= if no other process has created it in the meantime). The problem is that regularly ActiveRecord::RecordInvalid exceptions are raised from the save! and not rescued by the rescue block. Any ideas?

    EDIT: Alright, this is weird. I must be missing something. I refactored the code according to Simone's tip to look like this:

        unless User.find_by_email(...).present?
          # Here we know the user does not exist yet
          user = User.new(...)
          # Set more attributes of user
          unless user.save
            # User could not be saved for some reason, maybe created by another request?
            raise StandardError, "Could not create user for order #{self.id}." unless User.exists?(:email => ...)
          end
        end

    Now I got the following exception:

        ActiveRecord::RecordNotUnique: Mysql::DupEntry: Duplicate entry '[email protected]' for key 'index_users_on_email': INSERT INTO `users` ...

    thrown on the line where it says unless user.save. How can that be? Rails thinks the user can be created because the email is unique, but then the MySQL unique index prevents the insert? How likely is that? And how can it be avoided?

    Read the article

  • C# WinForms MultiThreading in Loop

    - by Goober
    Scenario: I have a background worker in my application that runs off and does a bunch of processing. I specifically used this implementation so as to keep my user interface fluid and prevent it from freezing up. I want to keep the background worker, but inside that thread, spawn off ONLY 3 MORE threads, making them share the processing. (Currently the worker thread just loops through and processes each asset one-by-one; however, I would like to speed this up by using only a limited number of threads.)

    Question: Given the code below, how can I get the loop to choose a thread that is free, and then essentially wait if there isn't one free before it continues?

    Code:

        foreach (KeyValuePair<int, LiveAsset> kvp in laToHaganise)
        {
            Haganise h = new Haganise(kvp.Value, busDate, inputMktSet, outputMktSet, prodType, noOfAssets, bulkSaving);
            h.DoWork();
        }

    Thoughts: I'm guessing that I would have to start off by creating 3 new threads, but my concern is that if I'm instantiating a new Haganise object each time, how can I pass the correct "h" object to the correct thread?

        Thread firstThread = new Thread(new ThreadStart(h.DoWork));
        Thread secondThread = new Thread(new ThreadStart(h.DoWork));
        Thread thirdThread = new Thread(new ThreadStart(h.DoWork));

    Help greatly appreciated.

    Read the article

  • Perl: get substring which matches regex error

    - by Michael Mao
    Hi all: I am very new to Perl, so please bear with my simple question. Here is the sample output:

        Most successful agents in the Emarket climate are (in order of success):
        1. agent10896761 ($-8008)
        2. flightsandroomsonly ($-10102)
        3. agent10479475hv ($-10663)

        Most successful agents in the Emarket climate are (in order of success):
        1. agent10896761 ($-7142)
        2. agent10479475hv ($-8982)
        3. flightsandroomsonly ($-9124)

    I am interested only in agent names as well as their corresponding balances, so I am hoping to get the following output:

        agent10896761 -8008
        flightsandroomsonly -10102
        agent10479475hv -10663
        agent10896761 -7142
        agent10479475hv -8982
        flightsandroomsonly -9124

    for later processing. This is the code I've got so far:

        #!/usr/bin/perl -w
        open(MYINPUTFILE, $ARGV[0]);
        while (<MYINPUTFILE>)
        {
            my($line) = $_;
            chomp($line);
            # regex match test
            if ($line =~ m/agent10479475/)
            {
                if ($line =~ m/($-[0-9]+)/)
                {
                    print "$1\n";
                }
            }
            if ($line =~ m/flightsandroomsonly/)
            {
                print "$line\n";
            }
        }

    The second regex match has nothing wrong, 'cause that is printing out the whole line. However, for the first regex match, I've got some other output such as:

        $ ./compareResults.pl 3.txt
        2. flightsandroomsonly ($-10102)
        0479475
        0479475
        3. flightsandroomsonly ($-9124)
        1. flightsandroomsonly ($-8053)
        0479475
        1. flightsandroomsonly ($-6126)
        0479475

    If I "escape" the braces like this:

        if ($line =~ m/\($-[0-9]+\)/) { print "$1\n"; }

    then there is never a match for the first regex... so I'm stuck with the problem of making that particular regex work. Any hints for this? Many thanks in advance.

    Read the article

  • NUnit integration programmatically with spring

    - by harkon
    Hi! I have a component-based architecture framework designed, and I use NUnit for isolated testing -- okay so far. Now I want to enable integration tests, where the tests use real implementations of the existing components. Each component has a life cycle (init, start and stop), and I created an NUnit component: in the start section, the console runner of NUnit is executed. Okay -- now if I have a test fixture class in my DLLs in the execution path, the runner executes them -- fine!

    But, and this is crucial: each to-be-tested implementation already exists in the process, and I want to use these instances for testing. If I use the NUnit runner in the current way, each instance will be created twice. Above all, I have a Spring container and an implementation registry; via this registry I can get access to all instances in the process. But how do I give the test fixture access to the existing registry? Granted, I can start the component architecture framework in the startup of the NUnit runner, but this is not what I want. My guide is the Apache Cactus framework (with JUnit and Tomcat, JBoss etc.). Can someone help? Thanks a lot! Check: http://cone.codeplex.com

    Read the article

  • Hibernate database integrity with multiple java applications

    - by Austen
    We have 2 Java web apps, both read/write, and 3 standalone Java read/write applications (one loads questions via email, one processes an XML feed, one sends email to subscribers); all use Hibernate and share a common code base. The problem we have recently come across is that questions loaded via email sometimes overwrite questions created in one of the web apps. We originally thought this to be a caching issue. We've tried turning off the second-level cache, but this doesn't make a difference. We are not explicitly opening and closing sessions, but rather letting Hibernate manage them via Util.getSessionFactory().getCurrentSession(), which, thinking about it, may actually be the issue. We'd rather not set up a clustered second-level cache at this stage, as this creates another layer of complexity and we're more than happy with the level of performance we get from the app as a whole. So does implementing an open-session-in-view pattern in the web apps and manually managing the sessions in the standalone apps sound like it would fix this? Or any other suggestions/ideas, please?

        <property name="hibernate.transaction.factory_class">org.hibernate.transaction.JDBCTransactionFactory</property>
        <property name="hibernate.current_session_context_class">thread</property>
        <property name="hibernate.cache.use_second_level_cache">false</property>

    Read the article

  • Idea for a small project, should I use Python?

    - by Robb
    I have a project idea, but I'm unsure if using Python would be a good idea. Firstly, I'm a C++ and C# developer with some SQL experience; my day job is C++. I have a project idea I'd like to create, and was considering developing it in a language I don't know. Python seems to be popular and has piqued my interest. I definitely use OOP in programming, and I understand Python would work fine with that style. I could be way off on this; I've only read small bits and pieces about the language. The project won't be public or anything, just purely something of my own creation to dabble in at home.

    The project would essentially represent a simple game idea I have. The game would consist roughly of these things:

        - Data structures to hold specific information (would be strongly typed).
        - A way to output the gamestate for the players. This is completely up in the air; it can be graphical or text based, I don't really care at this point.
        - A way to save off game data for the players, in something like a database or file system.
        - A relatively easy way for me to input information, and a 'GO' button which processes the changes and obviously creates a new gamestate.

    The game would function similarly to a board game. Really nothing out of the ordinary when I look back at that list. Would this be a fun way to learn Python, or should I select another language?
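    For a feel of how those requirements map onto Python, here is a minimal sketch, with all names invented for illustration: a game state as a dataclass, JSON persistence for saves, and a process_turn function standing in for the 'GO' button:

        import json
        from dataclasses import dataclass, field, asdict

        @dataclass
        class GameState:
            turn: int = 0
            players: dict = field(default_factory=dict)  # player name -> score

            def save(self, path):
                with open(path, "w") as f:
                    json.dump(asdict(self), f)

            @classmethod
            def load(cls, path):
                with open(path) as f:
                    return cls(**json.load(f))

        def process_turn(state, orders):
            """Apply the players' input and advance to the next gamestate."""
            for name, points in orders.items():
                state.players[name] = state.players.get(name, 0) + points
            state.turn += 1
            return state

    One caveat on the first bullet: Python's type hints are advisory rather than enforced, so "strongly typed" holds only loosely; for the save option, the sqlite3 module in the standard library would cover the database route as easily as the JSON file shown here.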

    Read the article

  • DotNetNuke and Subversion guidelines

    - by David Stratton
    I've Googled, Binged, and looked through the related questions and searched here at StackOverflow, but I'm not finding what I'm looking for. I've also searched the documentation on DNN. What I'm looking for is any guidance (tutorials, blogs, step-by-step instructions for setting up a repository, etc.) from people who are experienced in using DotNetNuke with SVN. We use SVN for all our source control, and have no problem with standard applications, because we pretty much built the repository and directory structure to work with our processes. This means when we do web sites in Visual Studio, we do file-based web sites rather than setting them up in the local IIS. It just makes things easier for us. However, with DNN it appears that even if you get the source code, it is expecting to be set up in the local IIS, which means additional headaches for us. For example, we are moving all of our source code off our local C drives and onto a shared drive on a server. This is to enable backups in addition to our normal source control. (This was a management decision.) So that means we need to change the virtual web app when we make the move. Has anyone come up with a good way to work around this? Can DNN be set up so that the developer web server in Visual Studio can be used, so that we can treat it just like any normal web app? Am I missing something obvious?

    Edit (added): I'm willing to accept answers like "We tried it and never got it to work" and "It can't be done" as answers. I'm always open to hearing "It can't be done the way you want; you need to change your procedures to match how it works" if necessary. I guess if you've got experience trying this and just couldn't get it to work, I can learn from your experience that way as well, but some detail would be good.

    Read the article

  • Relative Paths in .htaccess, how to attach to a variable?

    - by devians
    I have a very heavy .htaccess mod_rewrite file that runs my application. As we sometimes take over legacy websites, I sometimes need to support old URLs to old files, where my application processes everything after .htaccess. My ultimate goal is to have a 'demilitarized zone' for old file structures, and use mod_rewrite to check for existence there before pushing to the application. This is pretty easy to do with files, by using:

        RewriteCond %{IS_SUBREQ} true
        RewriteRule .* - [L]
        RewriteCond %{ENV:REDIRECT_STATUS} 200
        RewriteRule .* - [L]
        RewriteCond Public/DMZ/$1 -F [OR]
        RewriteRule ^(.*)$ Public/DMZ/$1 [QSA,L]

    This allows pseudo-support for relative URLs by not hardcoding my base path (I can't assume I will ever be deployed in the document root) anywhere, and by using subrequests to check for file existence. This works fine if you know the file name, e.g.:

        http://domain.com/path/to/app/legacyfolder/index.html

    However, my legacy URLs are typically:

        http://domain.com/path/to/app/legacyfolder/

    mod_rewrite will allow me to check for this by using -d, but it needs the complete path to the directory, i.e.:

        RewriteCond Public/DMZ/$1 -F [OR]
        RewriteCond /var/www/path/to/app/Public/DMZ/$1 -d
        RewriteRule ^(.*)$ Public/DMZ/$1 [QSA,L]

    I want to avoid the hardcoded base path. I can see one possible solution here: somehow determining my path, attaching it to a variable ([E=name:var]) and using it in the condition. Any implementation that allows me to check for the existence of a directory is more than welcome.

    Read the article

  • svnsync loses revision properties although hook installed

    - by roesslerj
    Hello all! I have a pretty weird problem. We have set up an SVN mirror via cronjob (because it needs to go from inside to outside of a firewall, so no post-commit hook is possible) and svnsync. We installed a pre-revprop-change hook just as told. Everything seems to work fine, except that it doesn't, e.g. when manually executing the script:

        # svnsync --non-interactive sync file://<path-to-mirror> --source-username <usr> --source-password <pwd>
        Committed revision 19817.
        Copied properties for revision 19817.

    No error, no complaints. But if checking for the revision properties, it says:

        # svnlook info <path-to-mirror>
        0

        # svn info -r HEAD file://<path-to-mirror> 2>&1
        Path: <root-of-mirror>
        URL: file://<path-to-mirror>
        Repository Root: file://<path-to-mirror>
        Repository UUID: <uid>
        Revision: 19817
        Node Kind: directory
        Last Changed Rev: 19817

    So somehow the author and timestamp information gets lost. But we need that information for our internal processes. Since no error or warning is produced, I have absolutely no idea even where to start to look. Everything is local (except for the remote master), so there are no server logs to look at. Any ideas how I could approach that problem, or even better, how to solve it? Any ideas appreciated.

    Read the article

  • Raw types and subtyping

    - by Dmitrii
    We have a generic class:

        class SomeClass<T> { }

    We can write the line:

        SomeClass s = new SomeClass<String>();

    It's OK, because the raw type is a supertype of the generic type. But

        SomeClass<String> s = new SomeClass();

    is correct too. Why is it correct? I thought that type erasure happened before type checking, but that's wrong. From the Hacker's Guide to Javac: when the Java compiler is invoked with the default compile policy, it performs the following passes:

        1. parse: Reads a set of *.java source files and maps the resulting token sequence into AST nodes.
        2. enter: Enters symbols for the definitions into the symbol table.
        3. process annotations: If requested, processes annotations found in the specified compilation units.
        4. attribute: Attributes the syntax trees. This step includes name resolution, type checking and constant folding.
        5. flow: Performs data flow analysis on the trees from the previous step. This includes checks for assignments and reachability.
        6. desugar: Rewrites the AST and translates away some syntactic sugar.
        7. generate: Generates source files or class files.

    Generics are syntactic sugar, hence type erasure happens in the desugar pass (6), after type checking, which happens in the attribute pass (4). I'm confused.

    Read the article

  • Should java try blocks be scoped as tightly as possible?

    - by isme
    I've been told that there is some overhead in using the Java try-catch mechanism. So, while it is necessary to put methods that throw checked exceptions within a try block to handle the possible exception, it is good practice performance-wise to limit the size of the try block to contain only those operations that could throw exceptions.

    I'm not so sure that this is a sensible conclusion. Consider the two implementations below of a function that processes a specified text file. Even if it is true that the first one incurs some unnecessary overhead, I find it much easier to follow. It is less clear where exactly the exceptions come from just from looking at the statements, but the comments clearly show which statements are responsible. The second one is much longer and more complicated than the first. In particular, the nice line-reading idiom of the first has to be mangled to fit the readLine call into a try block.

    What is the best practice for handling exceptions in a function where multiple exceptions could be thrown in its definition? This one contains all the processing code within the try block:

        void processFile(File f) {
            try {
                // construction of FileReader can throw FileNotFoundException
                BufferedReader in = new BufferedReader(new FileReader(f));
                // call of readLine can throw IOException
                String line;
                while ((line = in.readLine()) != null) {
                    process(line);
                }
            } catch (FileNotFoundException ex) {
                handle(ex);
            } catch (IOException ex) {
                handle(ex);
            }
        }

    This one contains only the methods that throw exceptions within try blocks:

        void processFile(File f) {
            FileReader reader;
            try {
                reader = new FileReader(f);
            } catch (FileNotFoundException ex) {
                handle(ex);
                return;
            }
            BufferedReader in = new BufferedReader(reader);
            String line;
            while (true) {
                try {
                    line = in.readLine();
                } catch (IOException ex) {
                    handle(ex);
                    break;
                }
                if (line == null) {
                    break;
                }
                process(line);
            }
        }

    Read the article

  • Generate and merge data with python multiprocessing

    - by Bobby
    I have a list of starting data. I want to apply a function to the starting data that creates a few pieces of new data for each element in the starting data. Some pieces of the new data are the same, and I want to remove them. The sequential version is essentially:

        def create_new_data_for(datum):
            """Make a list of new data from some old datum."""
            return [datum.modified_copy(k) for k in datum.k_list]

        data = [some list of data]  # some data to start with

        # generate a list of new data from the old data, we'll reduce it next
        newdata = []
        for d in data:
            newdata.extend(create_new_data_for(d))

        # now reduce the data under ".matches(other)"
        reduced = []
        for d in newdata:
            for seen in reduced:
                if d.matches(seen):
                    break
            else:
                # so we haven't seen anything like d yet
                reduced.append(d)

        # now reduced is finished and is what we want!

    I want to speed this up with multiprocessing. I was thinking that I could use a multiprocessing.Queue for the generation: each process would just put the stuff it creates on the queue, and when the processes are reducing the data, they can just get the data from the queue. But I'm not sure how to have the different processes loop over reduced and modify it without any race conditions or other issues. What is the best way to do this safely? Or is there a different way to accomplish this goal better?
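    One way to sidestep sharing reduced across processes entirely is to parallelise only the generation step and keep the reduction in the parent. Here is a minimal sketch using multiprocessing.Pool rather than a hand-rolled Queue, assuming the datum objects are picklable and create_new_data_for is defined at module level as above:

        from multiprocessing import Pool

        def generate_and_reduce(data):
            with Pool() as pool:
                # each worker expands one datum; results are sent back to the parent
                chunks = pool.map(create_new_data_for, data)
            reduced = []
            for chunk in chunks:
                for d in chunk:
                    if not any(d.matches(seen) for seen in reduced):
                        reduced.append(d)
            return reduced

    The reduction stays sequential, which is what makes it race-free; if the reduction rather than the generation dominates the runtime, the next step would be merging per-worker partial reductions instead.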

    Read the article
