Search Results

Search found 24037 results on 962 pages for 'every'.


  • A couple of questions on exceptions/flow control and the application of custom exceptions

    - by dotnetdev
    1) Custom exceptions can help make your intentions clear. How can this be? The intention is to handle or log the exception, regardless of whether the type is built-in or custom. The main reason I use custom exceptions is to avoid using one exception type to cover the same problem in different contexts (e.g. a parameter that is null in system code, which may be affected by an external factor, versus an empty shopping basket). However, the partition between system code and business-domain code using different exception types seems very obvious, and not making the most of custom exceptions. Related to this, if custom exceptions cover the business exceptions, I could also get all the places which are sources of exceptions at the business-domain level using "Find all references". Is it worth adding exceptions if you check the arguments in a method for being null, use them a few times, and then add the catch? Is it a realistic risk that an external factor or some other freak cause could make the argument null after being checked anyway?

    2) What does it mean when exceptions should not be used to control the flow of programs, and why not? I assume this is like: if (exceptionVariable != null) { }

    Is it generally good practise to fill every variable in an exception object? As a developer, do you expect every possible variable to be filled by another coder?

    Read the article

  • Anyone have BlazeDS working with WebLogic Security (j_security_check) ??

    - by Brian
    I'm working on a Flex implementation (currently using SDK 3.5) on WebLogic 10.3 (11g). We originally used GlassFish v2.1.1 with zero issues (there was an Active Directory group lookup bug, but it did not hinder our progress). Since transitioning to WebLogic we have an issue where the FlexSession is invalidated after logging in using j_security_check:

    [BlazeDS]Unexpected error encountered in Message Broker servlet
    flex.messaging.LocalizedException: The FlexSession is invalid.
        at flex.messaging.FlexSession.checkValid(FlexSession.java:943)
        at flex.messaging.FlexSession.getUserPrincipal(FlexSession.java:254)
        at flex.messaging.HttpFlexSession.getUserPrincipal(HttpFlexSession.java:286)
        at flex.messaging.MessageBrokerServlet.service(MessageBrokerServlet.java:296)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
        at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
        at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
        at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292)
        at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:175)
        at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3594)
        at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
        at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:121)
        at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2202)
        at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2108)
        at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1432)
        at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
        at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)

    I've tried almost every option in services-config.xml:

    <security>
        <login-command class="flex.messaging.security.WeblogicLoginCommand" server="Weblogic"/>
        <!-- Uncomment the correct app server
        <login-command class="flex.messaging.security.TomcatLoginCommand" server="JBoss">
        <login-command class="flex.messaging.security.JRunLoginCommand" server="JRun"/>
        <login-command class="flex.messaging.security.TomcatLoginCommand" server="Tomcat"/>
        <login-command class="flex.messaging.security.WebSphereLoginCommand" server="WebSphere"/>
        -->
    </security>

    I've even completely removed this section with no luck. The login functions correctly from a non-BlazeDS perspective. It authenticates the user correctly. Without authentication, BlazeDS works fine (no errors for any remote calls). Together it's a big ball of fail (invalid FlexSession every time). Has anyone got this working? Any tips?

    Read the article

  • Unit Testing.... a data provider ?

    - by TomTom
    Given problem: I like unit tests. I develop connectivity software to external systems, which pretty much always means using a C++ library. The output of these systems is nondeterministic. Data is received while running, but making sure it is all correctly interpreted is hard. How can I test this properly? I can run a unit test that does a connect. Sadly, it will then process a live data stream. I can say I run the test for 30 or 60 seconds before disconnecting, but getting code coverage is impossible - I simply don't even come close to hitting all code paths even once per day (error code paths are rarely run). I also can not really assert every result. Depending on the time of the day we talk of 20,000 data callbacks per second - all of which are not really well-determined enough to validate each of them for consistency. Mocking? Well, that would leave me testing an empty shell of myself, because the code handling the events basically is the case to be tested, and in many cases we are talking about a complex C-level structure - it is hard to have mocking frameworks that integrate from C# to C++. Anyone any idea? I am close to giving up on using unit tests for this part of the application.

    Read the article

  • Writing an efficient cron job script utilizing Zend_Mail_Storage_Imap.

    - by fireeyedboy
    I'm new to the IMAP protocol and Zend_Mail_Storage, and I'm writing a small PHP script for a cron job that should regularly poll an IMAP account, check for new messages, and send an e-mail if new messages have arrived. As you can imagine, I want to poll the IMAP account only for relevant messages, and I only want to send a new e-mail if new messages have arrived since the last polled new message. So I thought of keeping track of the last message I polled with some unique identifier for a message. But I'm a bit uncertain about whether the methods I want to utilize for this do what I expect them to do. So my questions are:

    Does the iterator position of Zend_Mail_Storage_Imap actually resemble some IMAP unique identifier for messages, or is it simply an internal position of Zend_Mail_Storage_Abstract? For instance, if I tell it to seek() to message 5 (which I stored from an earlier session), will it indeed seek to the appropriate message on the IMAP server, even if, for instance, messages have been deleted since the last session?

    Would keeping track of this latest polled message id in a file suffice for a cron job that, say, polls the account every 5 or 10 minutes? Or is this too naive, and should I be using a database for instance? Or is there maybe a much easier way to keep track of such state with Zend_Mail_Storage_Abstract?

    Also, do I need to poll every IMAP folder? Or is everything accumulated when I poll INBOX?

    If you could shed some light on any of these matters, I'd appreciate it. Thanks in advance.

    Read the article

  • Use XMPP instead of HTTP

    - by pavel
    Hey guys. My friend and I are working on an iPhone application. This application uses the XMPP protocol to provide chat functionality. Right now we are designing the architecture for the application. My friend is working on the iPhone side, and I am the Ruby on Rails guy. My friend suggested that we wrap every call that is usually served via HTTP into XMPP. So user registration, user search, profile editing, photo uploading - everything goes via XMPP. No HTTP at all. My friend wants to use XMPP because he says that it's much easier to implement XMPP on the client side than HTTP. As for me, this is bullshit, but we've got a product owner who has been working with my friend for a long time and he trusts him. So what I'm trying to do is to convince my friend and the product owner that using XMPP for what HTTP can do fine is totally not the best idea. I feel that if we implement everything on XMPP, we will have a pain in the ass till the end of our lives. But how do I prove it? P.S. I'm not against chat over XMPP; I am against user search, photo uploading, rankings, nearby search and various other RESTful requests. Please leave a response. Any help appreciated.

    A little update: Yesterday we had a long discussion. And it turns out it's quite hard to receive responses from both XMPP and HTTP in Objective-C, because every single object and its data should be stored in a Core Data model, while this model can't be safely modified from various places. Say, if you use HTTP transport, you always want to use only HTTP transport to update data in your model. And if you use XMPP, you should always use XMPP. So you can't use both. That's what my iPhone buddy told me. It sounds weird to me; can anyone explain it?

    Read the article

  • MKMapView loading all annotation views at once (including those that are outside the current rect)

    - by jmans
    Has anyone else run into this problem? Here's the code:

    - (MKAnnotationView *)mapView:(MKMapView *)mapView viewForAnnotation:(WWMapAnnotation *)annotation {
        // Only return an Annotation view for the placemarks. Ignore for the current location--the iPhone SDK will place a blue ball there.
        NSLog(@"Request for annotation view");
        if ([annotation isKindOfClass:[WWMapAnnotation class]]) {
            MKPinAnnotationView *browse_map_annot_view = (MKPinAnnotationView *)[mapView dequeueReusableAnnotationViewWithIdentifier:@"BrowseMapAnnot"];
            if (!browse_map_annot_view) {
                browse_map_annot_view = [[[MKPinAnnotationView alloc] initWithAnnotation:annotation reuseIdentifier:@"BrowseMapAnnot"] autorelease];
                NSLog(@"Creating new annotation view");
            } else {
                NSLog(@"Recycling annotation view");
                browse_map_annot_view.annotation = annotation;
            }
            ...

    As soon as the view is displayed, I get

    2009-08-05 13:12:03.332 xxx[24308:20b] Request for annotation view
    2009-08-05 13:12:03.333 xxx[24308:20b] Creating new annotation view
    2009-08-05 13:12:03.333 xxx[24308:20b] Request for annotation view
    2009-08-05 13:12:03.333 xxx[24308:20b] Creating new annotation view

    and on and on, for every annotation (~60) I've added. The map (correctly) only displays the two annotations in the current rect. I am setting the region in viewDidLoad:

    if (center_point.latitude == 0) {
        center_point.latitude = 35.785098;
        center_point.longitude = -78.669899;
    }
    if (map_span.latitudeDelta == 0) {
        map_span.latitudeDelta = .001;
        map_span.longitudeDelta = .001;
    }
    map_region.center = center_point;
    map_region.span = map_span;
    NSLog(@"Setting initial map center and region");
    [browse_map_view setRegion:map_region animated:NO];

    The log entry for the region being set is printed to the console before any annotation views are requested. The problem here is that since all of the annotations are being requested at once, [mapView dequeueReusableAnnotationViewWithIdentifier] does nothing, since there are unique MKAnnotationViews for every annotation on the map. This is leading to memory problems for me. One possible issue is that these annotations are clustered in a pretty small space (~1 mile radius). Although the map is zoomed in pretty tight in viewDidLoad (latitude and longitude delta .001), it still loads all of the annotation views at once. Thanks...

    Read the article

  • Django ForeignKey TemplateSyntaxError and ProgrammingError

    - by Daniel Garcia
    These are the models I want to relate; I want collection to appear in the form for occurrence.

    class Collection(models.Model):
        id = models.AutoField(primary_key=True, null=True)
        code = models.CharField(max_length=100, null=True, blank=True)
        address = models.CharField(max_length=100, null=True, blank=True)
        collection_name = models.CharField(max_length=100)
        def __unicode__(self):
            return self.collection_name
        class Meta:
            db_table = u'collection'
            ordering = ('collection_name',)

    class Occurrence(models.Model):
        id = models.AutoField(primary_key=True, null=True)
        reference = models.IntegerField(null=True, blank=True, editable=False)
        collection = models.ForeignKey(Collection, null=True, blank=True, unique=True),
        modified = models.DateTimeField(null=True, blank=True, auto_now=True)
        class Meta:
            db_table = u'occurrence'

    Every time I go to check the Occurrence object I get this error:

    TemplateSyntaxError at /admin/hotiapp/occurrence/
    Caught an exception while rendering: column occurrence.collection_id does not exist
    LINE 1: ...LECT "occurrence"."id", "occurrence"."reference", "occurrenc..

    And every time I try to add a new occurrence object I get this error:

    ProgrammingError at /admin/hotiapp/occurrence/add/
    column occurrence.collection_id does not exist
    LINE 1: SELECT (1) AS "a" FROM "occurrence" WHERE "occurrence"."coll...

    What am I doing wrong? Or how does ForeignKey work?

    Read the article

  • Building Web Application project using MSBuild from command line on 64-bit: missing targets file

    - by James Allen
    Building a solution containing a web application project using MSBuild from PowerShell like this:

    msbuild "/p:OutDir=$build_dir\" $solution_file

    works fine for me on 32-bit, but on a 64-bit machine I am running into this error:

    error MSB4019: The imported project "C:\Program Files\MSBuild\Microsoft\VisualStudio\v9.0\WebApplications\Microsoft.WebApplication.targets" was not found. Confirm that the path in the declaration is correct, and that the file exists on disk.

    I am using Visual Studio 2008 and PowerShell v2. The problem has already been documented here and here. Basically, on a 64-bit install of VS, the Microsoft.WebApplication.targets needed by MSBuild is in the Program Files (x86) dir, not the Program Files dir, but MSBuild doesn't recognise this and so looks in the wrong place. The two solutions are not ideal:

    Manually copy the file on 64-bit from Program Files (x86) to Program Files. This is a poor solution - every dev will have to do this manually.

    Manually edit the csproj file so MSBuild looks in the right place. Again not ideal: I would rather not have to get everyone on 64-bit to manually edit csproj files on every new project. e.g.

    <Import Project="$(MSBuildExtensionsPathx86)\$(WebAppTargetsSuffix)" Condition="Exists('$(MSBuildExtensionsPathx86)\$(WebAppTargetsSuffix)')" />

    Ideally I want a way to tell MSBuild to import the targets file from the right place from the command line, but I can't work out how to do that. Any solutions?

    Read the article

  • Mocking WebResponse's from a WebRequest

    - by Rob Cooper
    I have finally started messing around with creating some apps that work with RESTful web interfaces. However, I am concerned that I am hammering their servers every time I hit F5 to run a series of tests. Basically, I need to get a series of web responses so I can test that I am parsing the varying responses correctly. Rather than hit their servers every time, I thought I could do this once, save the XML and then work locally. However, I don't see how I can "mock" a WebResponse, since (AFAIK) they can only be instantiated by WebRequest.GetResponse. How do you guys go about mocking this sort of thing? Do you? I just really don't like the fact I am hammering their servers :S I don't want to change the code too much, but I expect there is an elegant way of doing this..

    Update following accept: Will's answer was the slap in the face I needed. I knew I was missing a fundamental point!

    Create an interface that will return a proxy object which represents the XML. Implement the interface twice: one that uses WebRequest, the other that returns static "responses". The interface implementation then either instantiates the return type based on the response, or the static XML. You can then pass the required class when testing or at production to the service layer.

    Once I have the code knocked up, I'll paste some samples. Thanks Will :)

    Read the article

  • Thread resource sharing

    - by David
    I'm struggling with multi-threaded programming... I have an application that talks to an external device via a CAN to USB module. I've got the application talking on the CAN bus just fine, but there is a requirement for the application to transmit a "heartbeat" message every second. This sounds like a perfect time to use threads, so I created a thread that wakes up every second and sends the heartbeat. The problem I'm having is sharing the CAN bus interface. The heartbeat must only be sent when the bus is idle. How do I share the resource? Here is pseudo code showing what I have so far:

    TMainThread {
        Init:
            CanBusApi = new TCanBusApi;
            MutexMain = CreateMutex( "CanBusApiMutexName" );
            HeartbeatThread = new THeartbeatThread( CanBusApi );
        Execution:
            WaitForSingleObject( MutexMain );
            CanBusApi->DoSomething();
            ReleaseMutex( MutexMain );
    }

    THeartbeatThread( CanBusApi ) {
        Init:
            MutexHeart = CreateMutex( "CanBusApiMutexName" );
        Execution:
            Sleep( 1000 );
            WaitForSingleObject( MutexHeart );
            CanBusApi->DoHeartBeat();
            ReleaseMutex( MutexHeart );
    }

    The problem I'm seeing is that when DoHeartBeat is called, it causes the main thread to block while waiting for MutexMain as expected, but DoHeartBeat also stops. DoHeartBeat doesn't complete until after WaitForSingleObject(MutexMain) times out in failure. Does DoHeartBeat execute in the context of the MainThread or HeartBeatThread? It seems to be executing in MainThread. What am I doing wrong? Is there a better way? Thanks, David

    Read the article

  • .NET projects build automation with NAnt/MSBuild + SVN

    - by petr k.
    Hi everyone, for quite a while now I've been trying to figure out how to set up an automated build process at our shop. I've read many posts and guides on this matter and none of them really fits my specific needs. My SVN repository is laid out as follows:

    \projects
        \projectA (a product)
            \tags
                \1.0.0.1
                \1.0.0.2
                ...
            \trunk
                \src
                    \proj1 (a VS C# project)
                    \proj2
                \documentation

    Then I have a network share, with a folder for each project (product), which in turn contains the binaries, written documentation and the generated API documentation (via NDoc - each project may have an .ndoc file in the repository) for every historical version (from the tags SVN folder) and for the latest version as well (from the trunk). Basically, what I want to do in a scheduled batch build are these steps:

    1. examine the project's SVN folder and identify tags not present in the network share
    2. for each of these tags: check out the tag folder, build (with Release config), copy the resulting binaries to the network share, search for .ndoc files, generate CHM files via NDoc, and copy the resulting CHM files to the network share
    3. do the same as in 2., but for the HEAD revision of trunk

    Now, the trouble is, I have no idea where to start. I do not keep .sln files in the repository, but I am able to replace these with MSBuild files which in turn build the C# projects belonging to the specific product. I guess the most troubling part is the examination of the repository for tags which have not been processed yet - i.e. searching the tags and comparing them to a project's directory structure on the network share. I have no idea how to do that in any of the build tools (NAnt, MSBuild). Could you please provide me with some pointers on how to approach this task as a whole and in detail as well? I do not care if I use NAnt, MSBuild, or both. I am aware that this might be rather complex, but every idea and NAnt/MSBuild snippet will be a great help. Thanks in advance.

    Read the article

  • Java random values and duplicates

    - by f-Prime
    I have an array (cards) of 52 cards (13x4), and another array (cardsOut) of 25 cards (5x5). I want to copy elements from the 52-card array into the 25-card array at random. Also, I don't want any duplicates in the 5x5 array. So here's what I have:

    double row = Math.random() * 13;
    double column = Math.random() * 4;
    boolean[][] duplicates = new boolean[13][4];
    pokerGame[][] cardsOut = new pokerGame[5][5];
    for (int i = 0; i < 5; i++)
        for (int j = 0; j < 5; j++) {
            if (duplicates[(int)row][(int)column] == false) {
                cardsOut[i][j] = cards[(int)row][(int)column];
                duplicates[(int)row][(int)column] = true;
            }
        }

    There are 2 problems in this code. First, the random values for row and column are only generated once, so the same value is copied into the 5x5 array every time. Second, since the same values are being copied every time, I'm not sure if my duplicate checker is very effective, or if it works at all. How do I fix this?
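
    For reference, a minimal sketch of one way to handle both problems at once: shuffle the 52 flat indices and deal the first 25, so no duplicate check is needed at all. The card type is simplified to String here; the pokerGame type and card contents from the question are not assumed.

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    public class DealWithoutDuplicates {
        public static void main(String[] args) {
            // Stand-in deck: 13 rows x 4 columns, stored as strings for the sketch.
            String[][] cards = new String[13][4];
            for (int r = 0; r < 13; r++)
                for (int c = 0; c < 4; c++)
                    cards[r][c] = "card-" + r + "-" + c;

            // Shuffle the 52 flat indices once, then take the first 25.
            List<Integer> indices = new ArrayList<Integer>();
            for (int i = 0; i < 52; i++) indices.add(i);
            Collections.shuffle(indices);

            String[][] cardsOut = new String[5][5];
            int k = 0;
            for (int i = 0; i < 5; i++) {
                for (int j = 0; j < 5; j++) {
                    int idx = indices.get(k++);
                    cardsOut[i][j] = cards[idx / 4][idx % 4]; // row = idx/4, column = idx%4
                }
            }
            System.out.println(cardsOut[0][0] + " ... " + cardsOut[4][4]);
        }
    }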

    Read the article

  • How can parallelism affect number of results?

    - by spender
    I have a fairly complex query that looks something like this:

    create table Items(SomeOtherTableID int, SomeField int)
    create table SomeOtherTable(Id int, GroupID int)

    with cte1 as (
        select SomeOtherTableID, COUNT(*) SubItemCount
        from Items t
        where t.SomeField is not null
        group by SomeOtherTableID
    ), cte2 as (
        select tc.SomeOtherTableID, ROW_NUMBER() over (partition by a.GroupID order by tc.SubItemCount desc) SubItemRank
        from Items t
        inner join SomeOtherTable a on a.Id=t.SomeOtherTableID
        inner join cte1 tc on tc.SomeOtherTableID=t.SomeOtherTableID
        where t.SomeField is not null
    ), cte3 as (
        select SomeOtherTableID from cte2 where SubItemRank=1
    )
    select * from cte3 t1
    inner join cte3 t2 on t1.SomeOtherTableID<t2.SomeOtherTableID
    option (maxdop 1)

    The query is such that cte3 is filled with 6222 distinct results. In the final select, I am performing a cross join of cte3 with itself (so that I can compare every value in the table with every other value in the table at a later point). Notice the final line: option (maxdop 1). Apparently, this switches off parallelism. So, with 6222 result rows in cte3, I would expect (6222*6221)/2, or 19353531 results in the subsequent cross-joining select, and with the final maxdop line in place, that is indeed the case. However, when I remove the maxdop line, the number of results jumps to 19380454. I have 4 cores on my dev box. WTF? Can anyone explain why this is? Do I need to reconsider previous queries that cross join in this way?

    Read the article

  • How should I handle persistence in a Java MUD? OptimisticLockException handling

    - by Chase
    I'm re-implementing an old BBS MUD game in Java with permission from the original developers. Currently I'm using Java EE 6 with EJB session facades for the game logic and JPA for the persistence. A big reason I picked session beans is JTA. I'm more experienced with web apps, in which, if you get an OptimisticLockException, you just catch it and tell the user their data is stale and they need to re-apply/re-submit. Responding with "try again" all the time in a multi-user game would make for a horrible experience. Given that I'd expect several people to be targeting a single monster during a fight, I think the chance of an OptimisticLockException would be high. My view code, the part presenting a telnet CLI, is the EJB client. Should I be catching the PersistenceExceptions and TransactionRolledbackLocalExceptions and just retrying? How do you decide when to stop? Should I switch to pessimistic locking? Is persisting after every user command overkill? Should I be loading the entire world into RAM and dumping the state every couple of minutes? Do I make my session facade an EJB 3.1 singleton, which would function as a choke point and therefore eliminate the need to do any type of JPA locking? EJB 3.1 singletons function as a multiple-reader/single-writer design (you annotate the methods as readers and writers). Basically, what is the best design and Java persistence API for highly concurrent data changes in an application where it is not acceptable to present resubmit/retry prompts to the user?
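
    One of the options raised above, retrying on optimistic conflicts without ever surfacing them to the player, can at least be bounded so it never loops forever. A minimal sketch of that idea, assuming a resource-local EntityManager for brevity (inside a container each attempt would instead run in its own REQUIRES_NEW transaction); the Monster entity is hypothetical, and depending on the provider the conflict may surface from flush() or be wrapped in RollbackException at commit:

    import javax.persistence.Entity;
    import javax.persistence.EntityManager;
    import javax.persistence.Id;
    import javax.persistence.OptimisticLockException;
    import javax.persistence.RollbackException;
    import javax.persistence.Version;

    // Hypothetical entity standing in for any mutable game object; the @Version
    // column is what makes JPA detect conflicting concurrent writes.
    @Entity
    class Monster {
        @Id
        public long id;

        @Version
        public long version;

        public int hitPoints;
    }

    public class AttackService {

        private static final int MAX_RETRIES = 3;   // arbitrary retry budget for the sketch

        private final EntityManager em;             // resource-local EntityManager assumed

        public AttackService(EntityManager em) {
            this.em = em;
        }

        // Applies damage, silently retrying if another player's hit bumped the
        // version first, so the player never sees a "please resubmit" prompt.
        public void attack(long monsterId, int damage) {
            for (int attempt = 1; ; attempt++) {
                try {
                    em.getTransaction().begin();
                    Monster m = em.find(Monster.class, monsterId);
                    m.hitPoints -= damage;
                    em.flush();                     // version check usually surfaces here...
                    em.getTransaction().commit();   // ...or is wrapped in RollbackException here
                    return;
                } catch (OptimisticLockException | RollbackException e) {
                    if (em.getTransaction().isActive()) {
                        em.getTransaction().rollback();
                    }
                    em.clear();                     // drop stale managed state before retrying
                    if (attempt >= MAX_RETRIES) {
                        throw e;                    // caller decides how to surface repeated conflicts
                    }
                }
            }
        }
    }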

    Read the article

  • Java complex validation in Dropwizard?

    - by miku
    I'd like to accept JSON on a REST endpoint and convert it to the correct type for immediate validation. The endpoint looks like this:

    @POST
    public Response createCar(@Valid Car car) {
        // persist to DB
        // ...
    }

    But there are many subclasses of Car, e.g. Van, SelfDrivingCar, RaceCar, etc. How could I accept the different JSON representations on the endpoint, while keeping the validation code in the Resource as concise as something like @Valid Car car? Again: I send in JSON like this (here, it's the representation of a subclass of Car, namely SelfDrivingCar):

    {
        "id" : "t1",                        // every Car has an Id
        "kind" : "selfdriving",             // every Car has a type-hint
        "max_speed" : "200 mph",            // some attribute
        "ai_provider" : "fastcarsai ltd."   // this is SelfDrivingCar-specific
    }

    and I'd like the validation machinery to look into the kind attribute, create an instance of the appropriate subclass (here e.g. SelfDrivingCar) and perform validation. I know I could create different endpoints for all kinds of cars, but that does not seem DRY. And I know that I could use a real Validator instead of the annotation and do it by hand, so I'm just asking if there's some elegant shortcut for this problem.
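
    One common shortcut, given that Dropwizard binds JSON with Jackson, is to declare the type-hint on the base class so the resource keeps its single @Valid Car parameter. A minimal sketch; the subclass fields and the "race" subtype name below are assumptions based on the example payload, not taken from the question:

    import com.fasterxml.jackson.annotation.JsonProperty;
    import com.fasterxml.jackson.annotation.JsonSubTypes;
    import com.fasterxml.jackson.annotation.JsonTypeInfo;
    import javax.validation.constraints.NotNull;

    // Jackson reads the "kind" property and instantiates the matching subclass,
    // so @Valid then checks the concrete type's constraints as well.
    @JsonTypeInfo(use = JsonTypeInfo.Id.NAME, include = JsonTypeInfo.As.PROPERTY, property = "kind")
    @JsonSubTypes({
        @JsonSubTypes.Type(value = SelfDrivingCar.class, name = "selfdriving"),
        @JsonSubTypes.Type(value = RaceCar.class, name = "race")
    })
    public abstract class Car {
        @NotNull
        @JsonProperty("id")
        public String id;
    }

    class SelfDrivingCar extends Car {
        @NotNull
        @JsonProperty("max_speed")
        public String maxSpeed;

        @JsonProperty("ai_provider")
        public String aiProvider;
    }

    class RaceCar extends Car {
        // race-specific fields would go here
    }

    With this in place the createCar method is unchanged: posting the sample payload binds to a SelfDrivingCar, and its @NotNull constraints are validated along with the base class's before the method body runs.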

    Read the article

  • Why do HttpWebRequest and IWebProxy work at weird times

    - by Mike Webb
    Another question about web proxies. Here is my code:

    IWebProxy Proxya = System.Net.WebRequest.GetSystemWebProxy();
    Proxya.Credentials = CredentialCache.DefaultNetworkCredentials;
    HttpWebRequest rqst = (HttpWebRequest)WebRequest.Create(targetServer);
    rqst.Proxy = Proxya;
    rqst.Timeout = 5000;
    try
    {
        rqst.GetResponse();
    }
    catch(WebException wex)
    {
        connectErrMsg = wex.Message;
        proxyworks = false;
    }

    This code fails the first time it is called. After that, on successive calls, it works sometimes, but not others. It also never hits the catch block. Now the weird part. If I add a MessageBox.Show(msg) call in the first section of code before the GetResponse() call, this all will work every time. Here is an example:

    try
    {
        // ========Here is where I make the call and get the response========
        System.Windows.Forms.MessageBox.Show("Getting Response");
        // ========This makes the whole thing work every time========
        rqst.GetResponse();
    }
    catch(WebException wex)
    {
        connectErrMsg = wex.Message;
        proxyworks = false;
    }

    I'm baffled about why it is behaving this way. I don't know if the timeout is not working (it's in milliseconds, not seconds, so it should time out after 5 seconds, right?...) or what is going on. The most confusing thing is that the message box call makes it all work. So any help and suggestions on what is happening are appreciated. These are the kind of bugs that drive me absolutely out of my mind.

    Read the article

  • How can I get a Silverlight application to check for an update without the user clicking a button?

    - by Edward Tanguay
    I have made an out-of-browser Silverlight application which I want to automatically update every time there is a new .xap file uploaded to the server. When the user right-clicks the application and clicks on Updates, the default is set to "Check for updates, but let me choose whether to download and install them". This leads me to believe that it is possible to make my Silverlight application automatically detect if there is a new .xap file present on the server, and if there is, the Silverlight client will automatically ask the user if he would like to install it. This, however, is not the case. I upload a new .xap file and the Silverlight application does nothing. Even if I add this to my App.xaml.cs:

    private void Application_Startup(object sender, StartupEventArgs e)
    {
        this.RootVisual = new BaseApp();
        if (Application.Current.IsRunningOutOfBrowser)
        {
            Application.Current.CheckAndDownloadUpdateAsync();
        }
    }

    and update the .xap file, the Silverlight application does nothing. This information leads me to believe that I have to make a button which the user clicks to see if there is an update. But I don't want the user to have to click a button every day to see if there is an update. I want the application to check by itself if there is a new .xap file and, if there is, let the client ask the user if he wants the update. How do I make my Silverlight application check, each time it starts, if there is a new .xap file, and if there is, pass control to the Silverlight client to ask the user if he wants to download it, as the above dialogue implies is possible?

    Read the article

  • Auto load a specific link at various time intervals

    - by user228837
    Here's what I need to do. I'm using Google Chrome. I have a page that auto-reloads every 5 seconds using a script:

    javascript:
    timeout = prompt("Set timeout [s]");
    current = location.href;
    if (timeout > 0)
        setTimeout('reload()', 1000*timeout);
    else
        location.replace(current);
    function reload() {
        setTimeout('reload()', 1000*timeout);
        fr4me = '<frameset cols=\'*\'>\n<frame src=\'' + current + '\'/>';
        fr4me += '</frameset>';
        with(document){ write(fr4me); void(close()) };
    }

    I found that script by Googling. The reason the page auto-reloads every 5 seconds is that I'm waiting for a specific link or URL to appear in the page. It appears at random times. Once I see the link I'm waiting for, I immediately click the link. That's fine. But I want more. What I want is for the page to auto-reload and auto-detect the link I'm waiting for. Once the script finds the link I'm waiting for, it should automatically load that link in a new tab or page. For example, I'm auto-reloading www.example.com. I'm waiting for a specific URL "BUY NOW". When the page auto-reloads, it checks if there's a URL "BUY NOW". If it sees one, it should automatically open that link. Thanks.

    Read the article

  • PHP Site Deployment Suggestion

    - by TheOnly92
    I'm currently quite troubled by the way of deployment my team is adopting... It's very old-fashioned and I know it doesn't work very well. But I don't exactly know how to change it, so please give some suggestions about it... Here is our current setup:

    2 webservers
    1 database server
    1 test server

    Current deployment process:

    1. We develop and work on the test server; every change is uploaded manually to the test server.
    2. When a change or feature is complete, we then commit the changes to the SVN repository.
    3. After committing the changes, we then upload our changes to the first webserver, where there is a cronjob running every minute to sync the files between the servers.

    Something very annoying is, whenever we upload a file just as the syncing job starts, the file that is synced will appear corrupted, since it is only half-uploaded. Another thing is that whenever there is a deployment fault, it is extremely difficult to revert. These are basically the problems I'm facing. What should I do?

    Read the article

  • Using {% url ??? %} in django templates

    - by user563247
    I have looked a lot on Google for answers on how to use the 'url' tag in templates, only to find many responses saying 'You just insert it into your template and point it at the view you want the URL for'. Well, no joy for me :( I have tried every permutation possible and am posting here as a last resort. So here it is. My urls.py looks like this:

    from django.conf.urls.defaults import *
    from login.views import *
    from mainapp.views import *
    import settings

    # Uncomment the next two lines to enable the admin:
    from django.contrib import admin
    admin.autodiscover()

    urlpatterns = patterns('',
        # Example:
        # (r'^weclaim/', include('weclaim.foo.urls')),
        (r'^login/', login_view),
        (r'^logout/', logout_view),
        ('^$', main_view),

        # Uncomment the admin/doc line below and add 'django.contrib.admindocs'
        # to INSTALLED_APPS to enable admin documentation:
        # (r'^admin/doc/', include('django.contrib.admindocs.urls')),

        # Uncomment the next line to enable the admin:
        (r'^admin/', include(admin.site.urls)),

        #(r'^static/(?P<path>.*)$', 'django.views.static.serve',{'document_root': '/home/arthur/Software/django/weclaim/templates/static'}),
        (r'^static/(?P<path>.*)$', 'django.views.static.serve',{'document_root': settings.MEDIA_ROOT}),
    )

    My 'views.py' in my 'login' directory looks like:

    from django.shortcuts import render_to_response, redirect
    from django.template import RequestContext
    from django.contrib import auth

    def login_view(request):
        if request.method == 'POST':
            uname = request.POST.get('username', '')
            psword = request.POST.get('password', '')
            user = auth.authenticate(username=uname, password=psword)
            # if the user logs in and is active
            if user is not None and user.is_active:
                auth.login(request, user)
                return render_to_response('main/main.html', {}, context_instance=RequestContext(request))
                #return redirect(main_view)
            else:
                return render_to_response('loginpage.html', {'box_width': '402', 'login_failed': '1',}, context_instance=RequestContext(request))
        else:
            return render_to_response('loginpage.html', {'box_width': '400',}, context_instance=RequestContext(request))

    def logout_view(request):
        auth.logout(request)
        return render_to_response('loginpage.html', {'box_width': '402', 'logged_out': '1',}, context_instance=RequestContext(request))

    and finally the main.html to which the login_view points looks like:

    <html>
    <body>
    test!
    <a href="{% url logout_view %}">logout</a>
    </body>
    </html>

    So why do I get 'NoReverseMatch' every time?

    (On a slightly different note, I had to use 'context_instance=RequestContext(request)' at the end of all my render_to_response calls because otherwise it would not recognise {{ MEDIA_URL }} in my templates and I couldn't reference any css or js files. I'm not too sure why this is. Doesn't seem right to me.)

    Read the article

  • Debugging Key-Value-Observing overflow.

    - by Paperflyer
    I wrote an audio player. Recently I started refactoring some of the communication flow to make it fully MVC-compliant. Now it crashes, which in itself is not surprising. However, it crashes after a few seconds inside the Cocoa key-value-observing routines with a HUGE stack trace of recursive calls to NSKeyValueNotifyObserver. Obviously, it is recursively observing a value and thus overflowing the NSArray that holds pending notifications. According to the stack trace, the program loops from observeValueForKeyPath to setMyValue and back. Here is the according code:

    - (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object change:(NSDictionary *)change context:(void *)context
    {
        if ([keyPath isEqual:@"myValue"] && object == myModel && [self myValue] != [myModel myValue]) {
            [self setMyValue:[myModel myValue]];
        }
    }

    and

    - (void)setMyValue:(float)value
    {
        myValue = value;
        [myModel setMyValue:value];
    }

    myModel changes myValue every 0.05 seconds, and if I log the calls to these two functions, they get called only every 0.05 seconds, just as they should be, so this is working properly. The stack trace looks like this:

    -[MyDocument observeValueForKeyPath:ofObject:change:context:]
    NSKeyValueNotifyObserver
    NSKeyValueDidChange
    -[NSObject(NSKeyValueObserverNotification) didChangeValueForKey:]
    -[MyDocument setMyValue:]
    _NSSetFloatValueAndNotify
    …repeated some ~8k times until crash

    Do you have any idea why I could still be spamming the KVO queue?

    Read the article

  • How to make dynamically generated .net service client read configuration from another location than

    - by Bryan
    Hi, I've currently written code to use the ServiceContractGenerator to generate web service client code based on a WSDL, and then compile it into an assembly in memory using the CodeDOM. I'm then using reflection to set up the binding, endpoint and service values/types, and then ultimately invoke the web service method, based on XML configuration that can be altered at run time. This all currently works fine. However, the problem I'm currently running into is that I'm hitting several exotic web services that require lots of custom binding/security settings. This is forcing me to add more and more configuration into my custom XML configuration, as well as the corresponding updates to my code to interpret and set those binding/security settings. Ultimately, this makes adding these 'exotic' services slower, and I can see myself eventually reimplementing the 'system.serviceModel' section of the web or app.config file, which is never a good thing. My question is, and this is where my lack of experience with .NET and C# shows: is there a way to define the configuration normally found in the web.config or app.config 'system.serviceModel' section somewhere else, and at run time supply this configuration to the web service client? Is there a way to attach an app.config directly to an assembly as a resource, or any other way to supply this configuration to the client? Basically, I'd like to attach an app.config containing only a 'system.serviceModel' section to the assembly containing a web service client so that it can use that configuration. This way I wouldn't need to handle every configuration under the sun; I could let .NET do it for me. FYI, it's not an option for me to put the configuration for every service in the app.config of the running application. Any help would be greatly appreciated. Thanks in advance! Bryan

    Read the article

  • CakePHP 1.3 and Uploadifive/Uploadify - Change Upload Filename to a random string

    - by CaboONE
    I have somehow implemented UPLOADIFIVE in my CakePHP application. Everything seems to work great, including uploading multiple files and inserting the correct information in the database. Based on the following code, I would like to UPLOAD AND SAVE EVERY FILE WITH A RANDOM NAME TAKING INTO ACCOUNT THE CURRENT DATE OR SOMETHING SIMILAR. How could I accomplish this? In my Photos controller I have the following function:

    // This function is called at every file upload. It uploads the file onto the server
    // and saves the corresponding image name, etc, to the database table `photos`.
    function upload() {
        $uploadDir = '/img/uploads/photos/';
        if (!empty($_FILES)) {
            debug($_FILES);
            $tempFile = $_FILES['Filedata']['tmp_name'][0];
            $uploadDir = $_SERVER['DOCUMENT_ROOT'] . $uploadDir;
            $targetFile = $uploadDir . $_FILES['Filedata']['name'][0];
            // Validate the file type
            $fileTypes = array('jpg', 'jpeg', 'gif', 'png'); // Allowed file extensions
            $fileParts = pathinfo($_FILES['Filedata']['name'][0]);
            // Validate the filetype
            if (in_array($fileParts['extension'], $fileTypes)) {
                // Save the file
                move_uploaded_file($tempFile, $targetFile);
                $_POST['image'] = $_FILES['Filedata']['name'][0];
                $this->Photo->create();
                if ($this->Photo->save($_POST)) {
                    $this->Session->setFlash($targetFile, 'default', array('class' => 'alert_success'));
                    $this->redirect(array('action' => 'index'));
                }
            } else {
                // The file type wasn't allowed
                //echo 'Invalid file type.';
                $this->Session->setFlash(__('The photo could not be saved. Please, try again.', true));
            }
        }
    }

    In my view file admin_add.ctp I have added the following function:

    $('#file_upload').uploadifive({
        'auto'           : false,
        'uploadScript'   : '/photos/upload',
        'buttonText'     : 'BROWSE FILES',
        'method'         : 'post',
        'onAddQueueItem' : function(file) {
            this.data('uploadifive').settings.formData = {
                'photocategory_id' : $('#PhotoPhotocategoryId').val()
            };
        }
    });

    <input type="file" name="file_upload" id="file_upload" />

    Read the article

  • Duration of Excessive GC Time in "java.lang.OutOfMemoryError: GC overhead limit exceeded"

    - by jilles de wit
    Occasionally, somewhere between once every 2 days and once every 2 weeks, my application crashes in a seemingly random location in the code with: java.lang.OutOfMemoryError: GC overhead limit exceeded. If I google this error I come to this SO question, and that led me to this piece of Sun documentation which explains:

    The parallel collector will throw an OutOfMemoryError if too much time is being spent in garbage collection: if more than 98% of the total time is spent in garbage collection and less than 2% of the heap is recovered, an OutOfMemoryError will be thrown. This feature is designed to prevent applications from running for an extended period of time while making little or no progress because the heap is too small. If necessary, this feature can be disabled by adding the option -XX:-UseGCOverheadLimit to the command line.

    Which tells me that my application is apparently spending 98% of the total time in garbage collection to recover only 2% of the heap. But 98% of what time? 98% of the entire two weeks the application has been running? 98% of the last millisecond? I'm trying to determine the best approach to actually solving this issue rather than just using -XX:-UseGCOverheadLimit, but I feel a need to better understand the issue I'm solving.
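
    One way to put numbers on that question for a running process is to sample the collector MXBeans and compare the growth in cumulative GC time against elapsed uptime. A minimal sketch, meant to run as a background monitoring thread inside the application (printing to stdout is just for illustration); the ratio it reports is an observation over each sampling window, not the exact internal calculation HotSpot uses for the overhead limit:

    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;

    public class GcTimeMonitor {

        // Sums getCollectionTime() over all collectors (values are cumulative milliseconds).
        private static long totalGcMillis() {
            long total = 0;
            for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                long t = gc.getCollectionTime();
                if (t > 0) {
                    total += t;
                }
            }
            return total;
        }

        public static void main(String[] args) throws InterruptedException {
            long lastGc = totalGcMillis();
            long lastUp = ManagementFactory.getRuntimeMXBean().getUptime();
            while (true) {
                Thread.sleep(10000);            // sample every 10 seconds
                long gc = totalGcMillis();
                long up = ManagementFactory.getRuntimeMXBean().getUptime();
                double pct = 100.0 * (gc - lastGc) / (up - lastUp);
                System.out.printf("GC took %.1f%% of the last %.0f seconds%n",
                        pct, (up - lastUp) / 1000.0);
                lastGc = gc;
                lastUp = up;
            }
        }
    }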

    Read the article

  • mod_rewrite to nginx rewrite rules

    - by Andrew Bestic
    I have converted most of my Apache HTTPd mod_rewrite rules over to nginx's HttpRewrite module (which calls PHP-FPM via FastCGI on every dynamic request). Simple rules which are defined by hard locations work fine: location = /favicon.ico { rewrite ^(.*)$ /_core/frontend.php?type=ico&file=include__favicon last; } I am still having trouble with regular expressions, which are parsed in mod_rewrite like this (note that I am accepting trailing slashes within the rules, as well as appending the query string to every request): mod_rewrite # File handler RewriteRule ^([a-z0-9-_,+=]+)\.([a-z]+)$ _core/frontend.php?type=$2&file=$1 [QSA,L] # Page handler RewriteRule ^([a-z0-9-_,+=]+)$ _core/frontend.php?route=$1 [QSA,L] RewriteRule ^([a-z0-9-_,+=]+)\/$ _core/frontend.php?route=$1 [QSA,L] RewriteRule ^([a-z0-9-_,+=]+)\/([a-z0-9-_,+=]+)$ _core/frontend.php?route=$1/$2 [QSA,L] RewriteRule ^([a-z0-9-_,+=]+)\/([a-z0-9-_,+=]+)\/$ _core/frontend.php?route=$1/$2 [QSA,L] RewriteRule ^([a-z0-9-_,+=]+)\/([a-z0-9-_,+=]+)\/([a-z0-9-_,+=]+)$ _core/frontend.php?route=$1/$2/$3 [QSA,L] RewriteRule ^([a-z0-9-_,+=]+)\/([a-z0-9-_,+=]+)\/([a-z0-9-_,+=]+)\/$ _core/frontend.php?route=$1/$2/$3 [QSA,L] I have come up with the following server configuration for the site, but I am met with unmatched rules after parsing a request (eg; GET /user/auth): attempted nginx rewrite location / { # File handler rewrite ^([a-z0-9-_,+=]+)\.([a-z]+)?(.*)$ /_core/frontend.php?type=$2&file=$1&$3 break; # Page handler rewrite ^([a-z0-9-_,+=]+)(\/*)?(.*)$ /_core/frontend.php?route=$1&$2 break; rewrite ^([a-z0-9-_,+=]+)\/([a-z0-9-_,+=]+)(\/*)?(.*)$ /_core/frontend.php?route=$1/$2&$3 break; rewrite ^([a-z0-9-_,+=]+)\/([a-z0-9-_,+=]+)\/([a-z0-9-_,+=]+)(\/*)?(.*)$ /_core/frontend.php?route=$1/$2/$3&$4 break; } What would you suggest for dealing with my File Handler (which is just filename.ext), and my Page Handler (which is a unique route request with up to 3 properties defined by a forward slash)? As I haven't gotten a response from this yet, I am also unsure if this will override my PHP parser which is defined with location ~ \.php {}, which is included before these rewrite rules. Bonus points if I can solve the parsing issues without the need to use a new rule for each number of route properties.

    Read the article
