Search Results

Search found 10693 results on 428 pages for 'max requests'.

  • Set size of JTable in JScrollPane and in JPanel with the size of the JFrame

    - by user1761818
    I want the table to have the same width as the frame, and when I resize the frame the table needs to be resized too. I think setSize() on a JTable doesn't work correctly. Can you help me?

        import java.awt.Color;
        import javax.swing.JFrame;
        import javax.swing.JPanel;
        import javax.swing.JScrollPane;
        import javax.swing.JTable;
        import javax.swing.SwingUtilities;

        public class Main extends JFrame {
            public Main() {
                setSize(400, 600);
                String[] columnNames = {"A", "B", "C"};
                Object[][] data = {
                    {"Moni", "adsad", 2},
                    {"Jhon", "ewrewr", 4},
                    {"Max", "zxczxc", 6}
                };
                JTable table = new JTable(data, columnNames);
                JScrollPane tableSP = new JScrollPane(table);
                int A = this.getWidth();
                int B = this.getHeight();
                table.setSize(A, B);
                JPanel tablePanel = new JPanel();
                tablePanel.add(tableSP);
                tablePanel.setBackground(Color.red);
                add(tablePanel);
                setTitle("Marks");
                setLocationRelativeTo(null);
                setDefaultCloseOperation(EXIT_ON_CLOSE);
            }

            public static void main(String[] args) {
                SwingUtilities.invokeLater(new Runnable() {
                    public void run() {
                        Main ex = new Main();
                        ex.setVisible(true);
                    }
                });
            }
        }
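
    For reference, the usual way to get this behaviour in Swing is to let a layout manager size the scroll pane instead of calling setSize() on the table; a minimal sketch reusing the question's data (this is an editor's illustration, not the original code):

    ```java
    import javax.swing.*;
    import java.awt.BorderLayout;

    public class ResizingTableFrame extends JFrame {
        public ResizingTableFrame() {
            super("Marks");
            String[] columnNames = {"A", "B", "C"};
            Object[][] data = {{"Moni", "adsad", 2}, {"Jhon", "ewrewr", 4}, {"Max", "zxczxc", 6}};
            JTable table = new JTable(data, columnNames);
            // BorderLayout.CENTER makes the scroll pane (and the table inside it)
            // track the frame as it is resized; no setSize() calls are needed.
            add(new JScrollPane(table), BorderLayout.CENTER);
            setSize(400, 600);
            setLocationRelativeTo(null);
            setDefaultCloseOperation(EXIT_ON_CLOSE);
        }

        public static void main(String[] args) {
            SwingUtilities.invokeLater(() -> new ResizingTableFrame().setVisible(true));
        }
    }
    ```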

  • cancelPreviousPerformRequestsWithTarget is not canceling my previously delayed selector scheduled with performSelector

    - by jmurphy
    Hello, I've launched a delayed call using performSelector, but the user still has the ability to hit the back button on the current view, causing dealloc to be called. When this happens my selector still seems to fire, which crashes my app because the properties it writes to have been released. To solve this I am trying to call cancelPreviousPerformRequestsWithTarget to cancel the pending request, but it doesn't seem to be working. Below are some code snippets.

        - (void)viewDidLoad {
            [self performSelector:@selector(myStopUpdatingLocation) withObject:nil afterDelay:6];
        }

        - (void)viewWillDisappear:(BOOL)animated {
            [NSObject cancelPreviousPerformRequestsWithTarget:self selector:@selector(myStopUpdatingLocation) object:nil];
        }

    Am I doing something incorrect here? The method myStopUpdatingLocation is defined in the same class from which I'm calling the perform requests. A little more background: the feature I'm implementing finds the user's location, searches Google for some locations around it, and displays several annotations on the map. In viewDidLoad I start updating the location with CLLocationManager. I've built in a timeout after 6 seconds in case I don't get my desired accuracy, and I'm using performSelector to do this. What can happen is that the user taps the back button in the view, and the selector still fires even though all my properties have been released, causing a crash. Thanks in advance! James

  • Combine Search Bar and URL Bar into One (WebView)

    - by Jay Bush
    So I'm in the midst of updating my web browser app for iOS devices from the ground up, and I'm trying to implement some more convenient features. One feature that seems to be really popular now, and that I have been getting a lot of requests for, is the combination of a Google search bar and a URL bar in one, like that of the Chrome application. In the Google Chrome app (screenshot omitted here), you can either enter a search query like "apple ipad" and it will return a Google search page for 'Apple iPad', or you can enter a URL like "http://apple.com/ipad/" and it will load that URL. I have looked all over the internet, but all I could find were tutorials on how to search Google with the value of a UITextField. I have a feeling that the best way to do this is probably to make a check: if the entered value contains 'http://', 'www.', '.com', or no spaces, then load it as a URL; if not, have the webview load a Google search page for it. If anybody could point me in the right direction, that would be great, and some example code would be even better. :) Thanks! If anyone needs part of the code, just ask.
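
    The check described above is language-agnostic; here is a rough sketch of the decision logic, written in Java for illustration (the iOS version would apply the same string tests before loading the web view, and the Google search URL here is just an example):

    ```java
    import java.net.URLEncoder;
    import java.nio.charset.StandardCharsets;

    public class AddressBarInput {
        // Hypothetical helper: decide whether typed text should be treated as a URL or a search query.
        public static String resolve(String text) {
            String trimmed = text.trim();
            boolean looksLikeUrl = !trimmed.contains(" ")
                    && (trimmed.startsWith("http://") || trimmed.startsWith("https://") || trimmed.contains("."));
            if (looksLikeUrl) {
                return trimmed.startsWith("http") ? trimmed : "http://" + trimmed;
            }
            // Anything else is handed to the search engine.
            return "https://www.google.com/search?q=" + URLEncoder.encode(trimmed, StandardCharsets.UTF_8);
        }
    }
    ```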

  • Spring 3 MVC validation BindingResult doesn't contain any errors

    - by Travelsized
    I'm attempting to get a Spring 3.0.2 Web MVC project running with the new annotated validation support. I have a Hibernate entity annotated like this:

        @Entity
        @Table(name = "client")
        public class Client implements Serializable {
            private static final long serialVersionUID = 1L;

            @Id
            @Basic(optional = false)
            @Column(name = "clientId", nullable = false)
            @NotEmpty
            private Integer clientId;

            @Basic(optional = false)
            @Column(name = "firstname", nullable = false, length = 45)
            @NotEmpty
            @Size(min=2, max=45)
            private String firstname;

            ... more fields, getters and setters
        }

    I've turned on MVC annotation support in my applicationContext.xml file:

        <mvc:annotation-driven />

    And I have a method in my controller that responds to a form submission:

        @RequestMapping(value="/addClient.htm")
        public String addClient(@ModelAttribute("client") @Valid Client client, BindingResult result) {
            if (result.hasErrors()) {
                return "addClient";
            }
            service.storeClient(client);
            return "clientResult";
        }

    When my app loads in the server, I can see in the server log that it loads a validator:

        15 [http-8084-2] INFO org.hibernate.validator.util.Version - Hibernate Validator 4.0.2.GA

    The problem I'm having is that the validator doesn't seem to kick in. I turned on the debugger, and when I get into the controller method the BindingResult contains 0 errors after I submit an empty form. (The BindingResult does show that it contains the Client object as a target.) It then proceeds to insert a record without an Id and throws an exception. If I fill out an Id but leave the name blank, it creates a record with the Id and empty fields. What steps am I missing to get the validation working?
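
    One way to narrow this down is to run the JSR-303 validator by hand, outside Spring MVC. If the standalone check reports violations, the constraints and the Hibernate Validator provider are fine and the problem is in the MVC wiring (for instance, <mvc:annotation-driven/> living in a context the DispatcherServlet never sees); if it fails fast instead, that is a clue that a constraint does not fit its field type (e.g. @NotEmpty on an Integer). A rough sketch:

    ```java
    import java.util.Set;
    import javax.validation.ConstraintViolation;
    import javax.validation.Validation;
    import javax.validation.Validator;

    public class ClientValidationCheck {
        public static void main(String[] args) {
            Validator validator = Validation.buildDefaultValidatorFactory().getValidator();
            // Validate an empty Client, roughly what Spring would do when binding the empty form.
            Set<ConstraintViolation<Client>> violations = validator.validate(new Client());
            for (ConstraintViolation<Client> v : violations) {
                System.out.println(v.getPropertyPath() + ": " + v.getMessage());
            }
        }
    }
    ```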

  • Request size limitation when using MultipartHttpServletRequest of Spring 3.0

    - by Spiderman
    I'd like to know what the size limitation is if I upload a list of files in one client form submission using the HTTP multipart content type. On the server side I am using Spring's MultipartHttpServletRequest to handle the request. My questions:

        1. Should there be a separate per-file size limitation and a total request size limitation, or is the file size the only limit, with the request able to carry hundreds of files as long as none of them is too large?
        2. Does the Spring request wrapper read the complete request and store it in the Java heap, or does it store temporary files so that it can handle a bigger quota?
        3. Would reading the HttpServletRequest in streaming mode change the size limitation compared with the application server reading the complete request at once?
        4. What is the bottleneck of this process: the Java heap size, the quota of the filesystem my web server runs on, the maximum allowed BLOB size of the database in which I am going to save the file, or Spring's internal limitations?

    Related threads that still don't have an exact answer to this: does-spring-framework-support-streaming-mode-in-mutlipart-requests, is-there-a-way-to-get-raw-http-request-stream-from-java-servlet-handler, how-to-drop-body-of-a-request-after-checking-headers-in-servlet, apache-commons-fileupload-throws-malformedstreamexception
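
    For what it's worth, with the Commons FileUpload-backed resolver the cap on the whole multipart request and the in-memory threshold are two separate settings, and parts above the threshold are spooled to temporary files rather than held on the heap. A sketch of the resolver configuration (the numbers are placeholders, not recommendations):

    ```java
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.web.multipart.commons.CommonsMultipartResolver;

    @Configuration
    public class UploadConfig {
        @Bean
        public CommonsMultipartResolver multipartResolver() {
            CommonsMultipartResolver resolver = new CommonsMultipartResolver();
            resolver.setMaxUploadSize(20 * 1024 * 1024); // limit on the whole multipart request, in bytes
            resolver.setMaxInMemorySize(256 * 1024);     // larger parts are written to temp files instead of the heap
            return resolver;
        }
    }
    ```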

  • What is the IoC / "Springy" way to handle MVP in GWT? (Hint, probably not the Spring Roo 1.1 way)

    - by Ehrann Mehdan
    This is the Spring Roo 1.1 way of doing a factory that returns a GWT Activity (yes, Spring Framework):

        public Activity getActivity(ProxyPlace place) {
            switch (place.getOperation()) {
                case DETAILS:
                    return new EmployeeDetailsActivity(
                        (EntityProxyId<EmployeeProxy>) place.getProxyId(),
                        requests,
                        placeController,
                        ScaffoldApp.isMobile() ? EmployeeMobileDetailsView.instance() : EmployeeDetailsView.instance());
                case EDIT:
                    return makeEditActivity(place);
                case CREATE:
                    return makeCreateActivity();
            }
            throw new IllegalArgumentException("Unknown operation " + place.getOperation());
        }

    It seems to me that we just went back hundreds of years if we use a switch over constants to build a factory. Now, this is the official auto-generated Spring Roo 1.1 with GWT/GAE integration, I kid you not. I can only assume this is one of those empty executive announcements, because this is definitely not Spring. It seems VMware and Google were too quick to get something out and didn't quite finish it, didn't they? Am I missing something, or is this half baked and by far not the way Spring + GWT MVP should work? Do you have a better example of how Spring, GWT (the 2.1 MVP approach) and GAE should connect? I would hate to do all the plumbing of managing history and activities like this (no annotations? IoC?). I would also hate to reinvent the wheel and write my own Spring enhancement, just to find someone else did the same, or worse, find out that SpringSource and Google will release Roo 1.2 soon and make it right.
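
    For comparison, the generated switch can at least be hidden behind GWT 2.1's own ActivityMapper contract, with activity construction injected rather than newed up inline; a rough sketch (the factory interface is invented for illustration, and the import for the scaffold's ProxyPlace is omitted):

    ```java
    import com.google.gwt.activity.shared.Activity;
    import com.google.gwt.activity.shared.ActivityMapper;
    import com.google.gwt.place.shared.Place;

    public class EmployeeActivityMapper implements ActivityMapper {

        // Hypothetical factory, supplied by GIN or constructor injection instead of direct construction.
        public interface EmployeeActivityFactory {
            Activity details(ProxyPlace place);
            Activity edit(ProxyPlace place);
            Activity create();
        }

        private final EmployeeActivityFactory factory;

        public EmployeeActivityMapper(EmployeeActivityFactory factory) {
            this.factory = factory;
        }

        @Override
        public Activity getActivity(Place place) {
            if (!(place instanceof ProxyPlace)) {
                return null;
            }
            ProxyPlace proxyPlace = (ProxyPlace) place;
            switch (proxyPlace.getOperation()) {
                case DETAILS: return factory.details(proxyPlace);
                case EDIT:    return factory.edit(proxyPlace);
                case CREATE:  return factory.create();
                default:      return null;
            }
        }
    }
    ```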

  • Flushing writes in buffer of Memory Controller to DDR device

    - by Rohit
    At some point in my code, I need to push the writes in my code all the way to the DIMM or DDR device. My requirement is to ensure the write reaches the row, bank and column of the DDR device on the DIMM. I need to read back what I've written to main memory, and I do not want caching to give me the value: after writing, I want to fetch this value from the main memory (the DIMMs). So far I've been using Intel's x86 instruction wbinvd (write back and invalidate cache). However, this means the caches and TLB are flushed and write-back requests go to main memory. There is still a reasonable amount of time this data might reside in the write buffer of the memory controller (Intel calls it the integrated memory controller, or IMC), and the memory controller might take some more time depending on the algorithm it runs to handle writes. Is there a way I can force all existing or pending writes in the write buffer of the memory controller out to the DRAM devices? What I am looking for is something more direct and lower-level than wbinvd. If you could point me to the right documents or specs that describe this, I would be grateful. Generally the IMC has several registers which can be written to and read from, but from looking at the specs for the chipset I could not find anything useful. Thanks for taking the time to read this.

  • Proper status codes for JSON responses to Ajax calls?

    - by anonymous coward
    My project returns JSON to Ajax calls from the browser. I'm wondering what the proper status code is to send back with responses to invalid (but successfully handled) data submissions. For example, jQuery has the following two callbacks when making Ajax requests: success, fired when a 200/2xx status code is delivered along with the response, and error, fired when 4xx, 5xx, etc. status codes come back with the response. If a user attempts to create a new "Person" object, I send back a JSON representation of the newly created object upon success, thus giving JavaScript access to the necessary unique IDs for the new object, etc. This, of course, is sent with a 200 status code. If a user submits malformed or invalid data (say, an invalid/incomplete "name" field), I would like to send back the validation error messages via JSON. (I don't see why this would be a bad thing.) My question is: in doing so, should I send a 200 status code, because I successfully handled their invalid data, and therefore use the jQuery success callback but check for errors? Or should I use a 4xx status code, perhaps 'Bad Request', because the data they sent me is invalid (and thus use the error callback to do the necessary client-side notifications)?
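
    As a point of reference, the common convention is to reserve 2xx for submissions that are actually accepted, and to report validation failures with a 4xx (400 Bad Request, or 422 Unprocessable Entity) plus a JSON body describing the errors, so that jQuery's error callback fires and handles them. A servlet-flavoured sketch of the failure path (helper name is made up):

    ```java
    import java.io.IOException;
    import javax.servlet.http.HttpServletResponse;

    public class ValidationErrorWriter {
        // Hypothetical helper: send validation failures with a 4xx status and a JSON body.
        public static void write(HttpServletResponse response, String errorsAsJson) throws IOException {
            response.setStatus(422); // or HttpServletResponse.SC_BAD_REQUEST (400)
            response.setContentType("application/json");
            response.getWriter().write(errorsAsJson);
        }
    }
    ```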

  • Componentizing complex functionality in an MVC web app

    - by NXT
    Hi Everyone, This is a question about MVC web-app architecture and how it can be extended to componentize moderately complex units of functionality. I have an MVC-style web app with a customer-facing credit card charge page. I've been asked to allow the admins to enter credit card payments as well, for times when credit cards are taken over the phone. The customer-facing credit card charge section of the website is currently its own controller, with approximately 3 pages and a login. That controller is responsible for: customer login credential authentication; credit card data collection; calling a library to do the actual charge; and reporting the results to the user. I would like to extract the card data collection pages into a component of some kind so that I can easily reuse the code on the admin side of the app. Right now my components are limited to single "view" pages with PHP-style embedded Perl code. This is a simple, custom MVC framework written in Perl. Right now, controllers are called directly from the framework to service web requests. My idea is to allow controllers to be called from other controllers, so that I can componentize more complex functionality. For simplicity I think I prefer composition over inheritance, even though it will require writing a bunch of pass-through methods (actions). Being Perl, I could in theory do multiple inheritance. I'm wondering if anyone with experience in other MVC web frameworks can comment on how this sort of thing is usually done. Thank you.

  • Doctrine - get the offset of an object in a collection (implementing an infinite scroll)

    - by dan
    I am using Doctrine and trying to implement an infinite scroll on a collection of notes displayed in the user's browser. The application is very dynamic, so when the user submits a new note, the note is added to the top of the collection straight away, besides being sent to (and stored on) the server. That is why I can't use a traditional pagination method, where you just send the page number to the server and the server figures out the offset and the number of results from it. To give you an example of what I mean, imagine there are 20 notes displayed, then the user adds 2 more notes, so there are 22 notes displayed. If I simply request "page 2", the first 2 items of that page will be the last two items of the page currently displayed to the user. That is why I am after a more sophisticated method, which is the one I am about to explain. Please consider the following code, which is part of the server code serving an AJAX request for more notes:

        // $lastNoteDisplayedId is coming from the AJAX request
        $lastNoteDisplayed = $repository->findBy($lastNoteDisplayedId);
        $allNotes = $repository->findBy($filter, array('createdAt' => 'desc'));
        $offset = getLastNoteDisplayedOffset($allNotes, $lastNoteDisplayedId);

        // retrieve the page to send back so that it can be appended to the listing
        $notesPerPage = 30;
        $notes = $repository->findBy(
            array(),
            array('createdAt' => 'desc'),
            $notesPerPage,
            $offset
        );
        $response = json_encode($notes);
        return $response;

    Basically I would need to write the method getLastNoteDisplayedOffset, which, given the whole set of notes and one particular note, gives me its offset, so that I can use it for the pagination in the previous Doctrine statement. I know a possible implementation would be:

        function getLastNoteDisplayedOffset($allNotes, $lastNoteDisplayedId) {
            $i = 0;
            foreach ($allNotes as $note) {
                if ($note->getId() === $lastNoteDisplayedId->getId()) {
                    break;
                }
                $i++;
            }
            return $i;
        }

    I would prefer not to loop through all notes, because performance is an important factor. I was wondering if Doctrine has a method for this itself, or if you can suggest a different approach.
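
    An alternative that avoids computing the offset at all is keyset pagination: ask for notes strictly older than the last one currently displayed. The idea is sketched here with JPA rather than Doctrine (the Note entity and its createdAt field are assumed from the question); ties on createdAt can be broken by also comparing the id.

    ```java
    import java.util.Date;
    import java.util.List;
    import javax.persistence.EntityManager;
    import javax.persistence.TypedQuery;

    public class NotePager {
        // The client sends the createdAt of the last note it shows,
        // instead of that note's position in the full collection.
        public List<Note> nextPage(EntityManager em, Date lastDisplayedCreatedAt, int pageSize) {
            TypedQuery<Note> query = em.createQuery(
                    "SELECT n FROM Note n WHERE n.createdAt < :last ORDER BY n.createdAt DESC",
                    Note.class);
            query.setParameter("last", lastDisplayedCreatedAt);
            query.setMaxResults(pageSize);
            return query.getResultList();
        }
    }
    ```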

  • How to encapsulate a third party complex object structure?

    - by tangens
    Motivation: Currently I'm using the Java parser japa to create an abstract syntax tree (AST) of a Java file. With this AST I'm doing some code generation (e.g. if there's an annotation on a method, create some other source files, ...). Problem: When my code generation becomes more complex, I have to dive deeper into the structure of the AST (e.g. I have to use visitors to extract type information of method parameters). But I'm not sure I want to stay with japa; I may change the parser library later. Because my code generator uses FreeMarker (which isn't good at automatic refactoring), I want the interface it uses to access the AST information to be stable, even if I decide to change the Java parser. Question: What's the best way to encapsulate complex data structures of third-party libraries?

        1. I could create my own datatypes and copy the parts of the AST that I need into them.
        2. I could create lots of specialized access methods that work with the AST and produce exactly the information I need (e.g. the fully qualified return type of a method as one string, or the first template parameter of a class).
        3. I could create wrapper classes for the japa data structures I currently need and embed the japa types inside, so that I can delegate requests to the japa types and transform the resulting japa types into my wrapper classes again.

    Which solution should I take? Are there other (better) solutions to this problem?
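
    One concrete shape for the third option (wrapper classes) is a small facade that names exactly the queries the generator needs; only the adapter behind it knows about japa. The method names below are invented for illustration, not taken from japa:

    ```java
    import java.util.List;

    // The code generator and the FreeMarker templates depend only on this interface.
    public interface MethodModel {
        String getName();
        String getQualifiedReturnType();   // e.g. "java.util.List<java.lang.String>"
        List<String> getParameterTypes();
        boolean hasAnnotation(String annotationFqn);
    }
    ```

    A JapaMethodModel would implement this by delegating to the japa declaration it wraps (that code is omitted here because it depends on japa's own types); switching parser libraries later then means writing one new adapter rather than touching the generator or the templates.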

  • Node & Redis: Crucial Design Issues in Production Mode

    - by Ali
    This question is a hybrid one, both technical and system-design related. I'm developing the backend of an application that will handle approx. 4K requests per second. We are using Node.js for its speed, and in terms of our database structure we are using MongoDB, with Redis as a layer between Node and MongoDB handling volatile operations. I'm quite stressed because we are expecting concurrent requests that we need to handle carefully, and we are quite close to launch. However, I do not believe I've applied the correct approach with Redis. I have a class Student, and students constantly change stages (such as 'active', 'doing homework', 'in lesson', etc.). Thus I created a Redis DB for each state (1 for 'active', 2 for 'doing homework'). Below is the structure of the 'active' students table:

        xa5p - JSON stringified object #1
        pQrW - JSON stringified object #2
        active_student_table - {{studentId:'xa5p'}, {studentId:'pQrW'}}

    Since there is no 'select all keys' method in Redis, I've been advised to use a set, such that when I run the 'smembers' command I receive the keys, and later do a 'get' for each id in order to find a specific user (let's say, age older than 15). I've also been told that I should never use KEYS in production. My question is: no matter how 'conceptual' it is, what specific things should I avoid doing with Node & Redis in production? Are there any issues related to my design? Students must be objects, and I certainly can keep them in a list, but I haven't done that yet. Is it that crucial at the production stage?

  • using jquery selector to change attribute in variable returned from ajax request

    - by Blake
    I'm trying to pull in filename.txt (which contains HTML) using AJAX and change the src path in the data variable before I load it into the target div. If I load it into the div first, the browser requests the broken image, and I don't want this, so I would like to do my processing before I load anything onto the page. I can pull the src values fine, but I can't change them. In this example the src values aren't changed. Is there a way to do this with selectors, or can they only modify DOM elements? Otherwise I may have to do a regex replace, but using a selector would be more convenient if possible.

        $.ajax({
            url: getDate + '/' + name + '.txt',
            success: function(data) {
                $('img', data).attr('src', 'new_test_src');
                $('#' + target).fadeOut('slow', function() {
                    $('#' + target).html(data);
                    $('#' + target).fadeIn('slow');
                });
            }
        });

    My reason is that I'm building a fully standalone JavaScript template system for a newsletter, and since images and other things are uploaded via a Drupal web file manager, I want the content creators to keep their paths very short and simple; I can then modify them before I load in the content. This will also be distributed on a CD, so I need to change the paths for that so they still work.

  • Ruby On Rails with HTML5 offline apps - Firefox does not cache the application.manifest but Safari does

    - by hoitomt
    I'm working from this Railscast tutorial: episode 247. I'm up to this point in the tutorial: added the rack-offline gem, added the application.manifest route, and added a reference to the manifest in the html tag, right before it starts talking about problems with caching. Safari works as intended: when the server is running, the page is served. From the server logs I can see that Safari makes a single request to the server every time for the items page. When I turn off the server the page still displays, even after shutting down the browser and restarting; it appears to be pulling from the application.manifest (cache manifest). Firefox does not work as intended: when accessing the page for the first time, Firefox lets me know that the web page wants to store something locally, and I allow it. After clicking allow, Firefox makes 5 requests to the server for the page (from the server log), and the hash is different in every request. Is it possible that the changing hash is triggering Firefox to keep trying to get the new manifest until it reaches some maximum (5 attempts)? Then, after the server is stopped, Firefox will not show the page at all. It looks like it isn't caching the application.manifest. Firefox also gives you a way to see which sites are storing content locally, under Tools/Options/Advanced/Network (Firefox/Preferences/Advanced/Network on a Mac). I see localhost there, but the size is 0 bytes. So for some reason, Firefox is not downloading my application.manifest along with the files.

  • error: invalid type argument of '->' (have 'struct node')

    - by Roshan S.A
    Why can't I access the pointer "Cells" like an array? I have allocated the appropriate memory, so why won't it act like an array here? It works like an array for a pointer to basic data types.

        #include<stdio.h>
        #include<stdlib.h>
        #include<ctype.h>
        #define MAX 10

        struct node {
            int e;
            struct node *next;
        };
        typedef struct node *List;
        typedef struct node *Position;

        struct Hashtable {
            int Tablesize;
            List Cells;
        };
        typedef struct Hashtable *HashT;

        HashT Initialize(int SIZE, HashT H) {
            int i;
            H = (HashT)malloc(sizeof(struct Hashtable));
            if (H != NULL) {
                H->Tablesize = SIZE;
                printf("\n\t%d", H->Tablesize);
                H->Cells = (List)malloc(sizeof(struct node) * H->Tablesize);
                /* should it not act like an array from here on? */
                if (H->Cells != NULL) {
                    for (i = 0; i < H->Tablesize; i++) {
                        /* the following lines are the ones that throw the error */
                        H->Cells[i]->next = NULL;
                        H->Cells[i]->e = i;
                        printf("\n %d", H->Cells[i]->e);
                    }
                }
            }
            else
                printf("\nError!Out of Space");
        }

        int main() {
            HashT H;
            H = Initialize(10, H);
            return 0;
        }

    The error I get is the one in the title: error: invalid type argument of '->' (have 'struct node').

  • Using Matlab to find maxima for data with a lot of noise

    - by jimbo
    I have a noisy data set with three peaks in Matlab and want to do some image processing on it. The peaks are about 5-9 pixels wide at the base, in a 50 x 50 array. How do I locate the peaks? Matlab is very new to me. Here is what I have so far. For my original image, let's call it "array", I tried:

        J = fspecial('gaussian', [5 5], 1.5);
        C = imfilter(array, J);
        peaks = imregionalmax(C);

    but there is still some noise along the baseline between the peaks, so I end up getting a ton of local maxima that are really just noise values. (I tried playing with the size of the filter, but that didn't help.) I also tried:

        peaks = imextendedmax(C, threshold);

    where the threshold was determined visually, which works but is definitely not a good way to do it, since it obviously isn't robust. So, how do I locate these peaks in a robust way? Thanks.

  • About C# objects and the possibilities it has

    - by user527825
    As a novice programmer, I always wonder about C#'s capabilities. I know it is still early for me to judge, but all I want to know is whether C# can do complex things, or anything outside the Windows OS.

        1. I think C# is a proprietary language (I don't know if I said that right), meaning you can't use it outside Visual Studio or Windows.
        2. Also, you can't create your own controls (called objects, right?); you are forced to use the ones available in the toolbox and their properties and methods.
        3. Can C# be used with the OpenGL API or the DirectX API?
        4. Finally, it always bothers me when I start doing things in Visual Studio. I know it sounds arrogant to say, but sometimes I don't like being forced to use something, even if it's helpful. I feel (do I have the right to feel?) that I want to do everything myself. Don't laugh, I just feel that this will give me a better understanding.
        5. Is Visual C# like using MaxScript inside 3ds Max, in that C# is exclusively for Windows, forms and Windows-related components, the way MaxScript is only for 3D editing and manipulation of various things in that software?

    If it is too difficult for a beginner, I hope you don't answer the fourth question, as I don't have much motivation and I want to keep the little I have. Thank you for your time. Note: 1. Sorry for my English; I am self-taught and have never used the language with native speakers, so expect some errors. 2. I have a lot of questions about many things; how many questions a day do you think one can ask without bothering the admins of the site and the members here? Thank you for your time.

  • When are predicates appropriate and what is the best pattern for usage

    - by Maxim Gershkovich
    When are predicates appropriate, and what is the best pattern for their usage? What are the advantages of predicates? It seems to me that in most cases where a predicate can be employed, a tight loop would accomplish the same functionality. I don't see a reusability argument, given that you will probably only implement a predicate in one method, right? They look and feel nice, but besides that it seems you would only employ them when you need a quick hack on the collection classes. UPDATE: But why would you be rewriting the tight loop again and again? In my mind/code, when it comes to collections I always end up with something like:

        Class Person
        End Class

        Class PersonList
            Inherits List(Of Person)

            Function FindByName(Name) As Person
                ' tight loop....
            End Function
        End Class

    @Ani By that same logic I could implement the method as such:

        Class PersonList
            Inherits List(Of Person)

            Function FindByName(Name) As PersonList
            End Function

            Function FindByAge(Age) As PersonList
            End Function

            Function FindBySocialSecurityNumber(SocialSecurityNumber) As PersonList
            End Function
        End Class

    And call it as such:

        Dim res As PersonList = MyList.FindByName("Max").FindByAge(25).FindBySocialSecurityNumber(1234)

    and the result, along with the amount of code and its reusability, is largely the same, no? I am not arguing, just trying to understand.
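
    The reusability argument is easier to see when the predicates are values that can be stored and combined, so one filter method serves every field and every combination. A Java analogue of the example above (an editor's illustration, not the original VB code):

    ```java
    import java.util.List;
    import java.util.function.Predicate;
    import java.util.stream.Collectors;

    class Person {
        String name; int age; long ssn;
        Person(String name, int age, long ssn) { this.name = name; this.age = age; this.ssn = ssn; }
    }

    class PersonQueries {
        static Predicate<Person> byName(String name) { return p -> p.name.equals(name); }
        static Predicate<Person> olderThan(int age)  { return p -> p.age > age; }
        static Predicate<Person> bySsn(long ssn)     { return p -> p.ssn == ssn; }

        // One method covers every combination; no FindByNameAndAgeAndSsn needed.
        static List<Person> where(List<Person> people, Predicate<Person> condition) {
            return people.stream().filter(condition).collect(Collectors.toList());
        }
    }
    ```

    Calling where(list, byName("Max").and(olderThan(24)).and(bySsn(1234))) composes the three conditions without writing a third or fourth Find method, which is where the reuse shows up.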

  • Sending buffered images between Java client and Twisted Python socket server

    - by PattimusPrime
    I have a server-side function that draws an image with the Python Imaging Library. The Java client requests an image, which is returned via the socket and converted to a BufferedImage. I prefix the data with the size of the image to be sent, followed by a CR. I then read this number of bytes from the socket input stream and attempt to use ImageIO to convert it to a BufferedImage. In abbreviated code, the client is:

        public String writeAndReadSocket(String request) {
            // Write text to the socket
            BufferedWriter bufferedWriter = new BufferedWriter(new OutputStreamWriter(socket.getOutputStream()));
            bufferedWriter.write(request);
            bufferedWriter.flush();

            // Read text from the socket
            BufferedReader bufferedReader = new BufferedReader(new InputStreamReader(socket.getInputStream()));

            // Read the prefixed size
            int size = Integer.parseInt(bufferedReader.readLine());

            // Get that many bytes from the stream
            char[] buf = new char[size];
            bufferedReader.read(buf, 0, size);
            return new String(buf);
        }

        public BufferedImage stringToBufferedImage(String imageBytes) {
            return ImageIO.read(new ByteArrayInputStream(s.getBytes()));
        }

    and the server:

        # Twisted server code here
        # The analog of the following method is called with the proper client
        # request and the result is written to the socket.
        def worker_thread():
            img = draw_function()
            buf = StringIO.StringIO()
            img.save(buf, format="PNG")
            img_string = buf.getvalue()
            return "%i\r%s" % (sys.getsizeof(img_string), img_string)

    This works for sending and receiving strings, but the image conversion (usually) fails. I'm trying to understand why the images are not being read properly. My best guess is that the client is not reading the proper number of bytes, but I honestly don't know why that would be the case. Side notes: I realize that the char[]-to-String-to-bytes-to-BufferedImage Java logic is roundabout, but reading the byte stream directly produces the same errors. I have a version of this working where the client socket isn't persistent, i.e. the request is processed and the connection is dropped. That version works fine, as I don't need to care about the image size, but I want to learn why the proposed approach doesn't work.
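
    One likely culprit in the client above is that a Reader converts bytes to characters, and a single read() call is not guaranteed to fill the buffer. For binary data the usual pattern is to stay on the byte stream and read exactly the announced number of bytes, for example (a sketch, not a drop-in replacement for the code above):

    ```java
    import java.awt.image.BufferedImage;
    import java.io.ByteArrayInputStream;
    import java.io.DataInputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import javax.imageio.ImageIO;

    public class ImageSocketClient {
        BufferedImage readImage(InputStream socketIn) throws IOException {
            DataInputStream in = new DataInputStream(socketIn);
            // Read the ASCII length prefix up to the '\r' terminator.
            StringBuilder lengthText = new StringBuilder();
            int b;
            while ((b = in.read()) != -1 && b != '\r') {
                lengthText.append((char) b);
            }
            int size = Integer.parseInt(lengthText.toString().trim());
            // readFully blocks until exactly 'size' bytes have arrived (or throws EOFException).
            byte[] imageBytes = new byte[size];
            in.readFully(imageBytes);
            return ImageIO.read(new ByteArrayInputStream(imageBytes));
        }
    }
    ```

    It is also worth checking the length the server sends: sys.getsizeof() reports the Python object's in-memory size including overhead, not the number of bytes in the string, so len(img_string) is probably the value the prefix should carry.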

  • Harvesting Dynamic HTTP Content to produce Replicating HTTP Static Content

    - by Neil Pitman
    I have a slowly evolving dynamic website served from J2EE. The response time and load capacity of the server are inadequate for client needs. Moreover, ad hoc requests can unexpectedly affect other services running on the same application server/database. I know the reasons and can't address them in the short term. I understand HTTP caching hints (expiry, etags, ...), and for the purpose of this question please assume that I have maxed out the opportunities to reduce load. I am thinking of doing a brute-force traversal of all URLs in the system to prime a cache, and then copying the cache contents to geo-dispersed cache servers near the clients. I'm thinking of Squid or Apache HTTPD mod_disk_cache. I want to prime one copy and (manually) replicate the cache contents. I don't need a federation or intelligence amongst the slaves. When the data changes, invalidating the cache, I will refresh my master cache and update the slave versions, probably once a night. Has anyone done this? Is it a good idea? Are there other technologies that I should investigate? I can program this, but I would prefer a solution built by configuring open-source technologies. Thanks

  • Server.TransferRequest returns blank page on specific server

    - by jishi
    I'm facing an issue that seems to be related to configuration. I have a web application based on MonoRail, and we use MonoRail's routing feature. On the first request after the application starts, the routing isn't initialized. To work around this, I have the following code in Application_OnError():

        public virtual void Application_OnError() {
            if ( // identified as routing error )
                Server.TransferRequest( Context.Request.RawUrl, false );
            return;
        }

    The problem is that on our development server (which runs Windows Server 2008 R2, IIS 7.5 and .NET 3.5) it returns a blank page without headers, but on my workstation (which runs Windows 7, IIS 7.5 and .NET 3.5) it works fine. What could be the cause of this? If the code in Application_OnError() throws an exception, what would the expected output be? I have verified the following: the access log shows one entry, though I'm not sure if a TransferRequest would show up as a second entry when invoked successfully; and the application actually does some work according to my internal logs, and it doesn't die, since subsequent requests work flawlessly (because routing will be active by then). Any hints on what to look for would be greatly appreciated!

  • What caused the rails application crash?

    - by so1o
    I'm sure someone can explain this. We have an application that has been in production for a year. Recently we saw an increase in the number of support requests from people having difficulty signing into the system. After scratching our heads, because we couldn't recreate the problem in development, we decided to switch on the debug logger in production for a month. That was June 5th. The application worked fine with that change and we were waiting. Then yesterday we noticed that the log files were getting huge, so we made another change in production:

        config.logger = Logger.new("#{RAILS_ROOT}/log/production.log", 50, 1048576)

    After this change, the application started crashing while processing a particular file. The particular line of code was:

        RAILS_DEFAULT_LOGGER.info "Payment Information Request: ", request.inspect

    As you can see, there was a comma instead of a plus sign. This piece of code was introduced in March. The question is this: why did the application fail now? If changing the debug level caused the application to process this line of code, it should have started failing on June 5th, not today. Please, someone help us. Are we missing the obvious here? If you don't have an answer, at least let us know we aren't the only ones who are bonkers.

  • trouble setting up anonymous login in ejabberd

    - by sofia
    Hi, in ejabberd.cfg I have the following:

        {host_config, "thisislove-MacBook-2.local",
            [{auth_method, [internal, anonymous]},
             {allow_multiple_connections, false},
             {anonymous_protocol, both}]}.

    but when using the speeqe JavaScript client (speeqe.com) to connect, I see it sends:

        <body rid='1366284187' xmlns='http://jabber.org/protocol/httpbind' to='thisislove-macbook-2.local' xml:lang='en' wait='60' hold='1' window='5' content='text/xml; charset=utf-8' ver='1.6' xmpp:version='1.0' xmlns:xmpp='urn:xmpp:xbosh'/>

    and the server responds with:

        <body xmlns='http://jabber.org/protocol/httpbind' sid='f89bf034b02fa6b884bb0c55be3f1f69e45e3866' wait='60' requests='2' inactivity='30' maxpause='120' polling='2' ver='1.8' from='thisislove-macbook-2.local' secure='true' authid='353072658' xmlns:xmpp='urn:xmpp:xbosh' xmlns:stream='http://etherx.jabber.org/streams' xmpp:version='1.0'><stream:features xmlns:stream='http://etherx.jabber.org/streams'><mechanisms xmlns='urn:ietf:params:xml:ns:xmpp-sasl'><mechanism>DIGEST-MD5</mechanism><mechanism>PLAIN</mechanism></mechanisms><register xmlns='http://jabber.org/features/iq-register'/></stream:features></body>

    Notice the mechanisms: DIGEST-MD5 and PLAIN. If I'm not mistaken, it should advertise ANONYMOUS as a mechanism as well. So what happens is that speeqe simply terminates the connection. As such, I'm thinking I must be missing something in the anonymous configuration or the MUC config. In the mod_muc config I have:

        {mod_muc, [
            %%{host, "conference.@HOST@"},
            {access, muc},
            {access_create, muc},
            {access_persistent, muc},
            {access_admin, muc_admin},
            {max_room_name, 190},
            {max_room_desc, 190},
            {max_users, 500}
        ]}

    So what am I missing? Thanks

  • What is the correct way to increment a field making up part of a composite key

    - by Tr1stan
    I have a bunch of tables whose primary key is made up of the foreign keys of other tables (a composite key). As a very cut-down version, the attributes might look like this:

        A[aPK, SomeFields]  1:M  B[bPK, aFK, SomeFields]  1:M  C[cPK, bFK, aFK, SomeFields]

    As data this could look like:

        A[aPK, SomeFields]:
        1, Foo
        2, Bar

        B[bPK, aFK, SomeFields]:
        1, 1, FooData1
        2, 1, FooData2
        1, 2, BarData1
        2, 2, BarData2

        C[cPK, bFK, aFK, SomeFields]:
        1, 1, 1, FooData1More
        2, 1, 1, FooData1More
        1, 2, 1, FooData2More
        2, 2, 1, FooData2More
        1, 1, 2, BarData1More
        2, 1, 2, BarData1More
        1, 2, 2, BarData2More
        2, 2, 2, BarData2More

    I've got this running in an MSSQL DBMS, and I'm looking for the best way to increment the left-most column in each table when a new tuple is added to it. I can't use the auto-increment Identity Specification option, as that has no idea that it is part of a composite key. I also don't want to use an aggregate function such as MAX(field)+1, as this will have adverse effects with multiple users inputting data, rolling back, etc. There might, however, be a nice trigger-based option here, but I'm not sure. This must be a common issue, so I'm hoping that someone has a lovely solution. As an aside, which may or may not affect the answer, I'm using Entity Framework 1.0 as my ORM, within a C# MVC application.

  • Implementation of MVC with SQLite and NSURLConnection, use cases?

    - by user324723
    I'm interested in knowing how others have implemented/designed database and web services in their iPhone apps, and how they simplified it across the entire application. My application is dependent on these services, and I can't figure out an efficient way to use them together due to the (semi-)complexity of my requirements. My past attempts at combining them haven't been completely successful, or at least not optimal in my mind. I'm building a database-driven iPhone app that uses a relational SQLite database and consumes web services based on missing content or user interaction. Like this hasn't been done before... right? Since I am using a relational database, any web service consumed requires normalization, parsing the result and persisting it to the database before it can be displayed in a table view controller. The application's UI consists of nested (nav controller) table views where a user can select a cell and be taken to the next table view, where it attempts to populate the table view's data source from the database. If nothing exists in the database, it sends a request via web services to download its content: download, parse, persist, query, display. Since the user can request a refresh of this data, the same process applies. Quickly describing what I've implemented and tried to run with: 1st attempt: used a singleton web service class that handled sending web service requests, parsing the result and returning it to the table view controller via delegate protocols. Once the controller received the data it was then responsible for persisting it to the database and re-returning the result. I didn't like that the only safeguard was against the case where the delegate's selector no longer exists (the delegate has been released), causing the app to crash. 2nd attempt: used NSNotificationCenter for easy access to both database and web services, but later realized it was more complex due to adding and removing observers per view (which isn't advised anyway).
