Search Results

Search found 10693 results on 428 pages for 'max requests'.

Page 363/428 | < Previous Page | 359 360 361 362 363 364 365 366 367 368 369 370  | Next Page >

  • Fiscal year, quarters, student table, and faculty table... How do I relate these?!

    - by yuudachi
    I have a student table and a faculty table. The primary key for student is studentID (SID) and faculty's primary key is facultyID, naturally. Student has an advisor column and a requested-advisor column, which are foreign keys to faculty. That's simple enough, right? However, now I have to throw in dates. I want to be able to view who a student's advisor was for a certain quarter (such as 2009 Winter) and whom they had requested. The result will be a table like this:

        Year | Term   | SID       | Current | Requested
        -----+--------+-----------+---------+----------
        2009 | Winter | 860123456 | 1       | NULL
        2009 | Winter | 860445566 | 3       | NULL
        2009 | Winter | 860369147 | 5       | 1

    And then if I feel like it, I could also go ahead and view a different year and a different term. I am not sure what these new table(s) should look like. Will there be a year table with three columns for Fall, Spring and Winter? And what would those Fall, Spring and Winter columns hold? I am new to the art of tables, so this is baffling me... Also, I feel I should clarify how the site works so far. An admin can approve student requests, and what happens is that the student's current advisor gets overwritten with their request. However, I think I should not do that anymore, right?
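    One possible shape for this (an editor's sketch, not from the question; table and column names are illustrative): keep the advisor history in its own assignment table keyed by student and term, so nothing on the student row needs to be overwritten.

        CREATE TABLE advisor_assignment (
            year         INT         NOT NULL,
            term         VARCHAR(10) NOT NULL,   -- 'Fall', 'Winter', 'Spring'
            SID          CHAR(9)     NOT NULL REFERENCES student (SID),
            advisorID    INT         NULL REFERENCES faculty (facultyID),  -- advisor for that quarter
            requestedID  INT         NULL REFERENCES faculty (facultyID),  -- pending request, if any
            PRIMARY KEY (year, term, SID)
        );

    Approving a request then becomes an UPDATE of advisorID on the matching (year, term, SID) row, so earlier quarters keep their history, and the report above is a plain SELECT filtered by year and term.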

    Read the article

  • Altering an embedded TrueType font so it will be usable by Windows GDI

    - by Ritsaert Hornstra
    I am trying to render PDF content to a GDI device context (a 24-bit bitmap, to be exact). Parsing the PDF stream into PDF objects and rendering the PDF commands from the content dictionary works well, including font rendering. Embedded fonts are decompressed from their FontFile streams and "loaded" using AddFontMemResourceEx. Now some embedded fonts omit TrueType tables that are needed by GDI, like the NAME table. Because of this, I tried to modify the font by parsing the TrueType subset font into its tables; tables with missing data are modified and missing tables are regenerated with information that is as correct as possible. I use the Microsoft Font Validator tool to see how "correct" the generated font is. I still get a few errors, for example that for the maxp table the max values are usually too large (it is a subset), or that the xAvgCharWidth field of the OS/2 table does not equal the calculated value, but this does not stop other embedded fonts from being usable. The fonts embedded using PDFCreator are the ones that are problematic. Questions: How can I determine what I need to change in the font file in order for GDI to be able to use it? Are there any other font validation tools that might give me insight into what is still wrong with the font file? If needed, I can make an original font file and an altered font file available for download somewhere.

    Read the article

  • Python Tkinter GUI

    - by Lewis Townsend
    I want to make a small Python program for yearly temperatures. I can get nearly everything working in the standard console, but I want to implement it as a GUI. The program opens a CSV file, reads it into lists, and works out the average, min and max temperatures. On closing, the application will save a summary to a new text file. I want the default start-up screen to show all years, and when a button is clicked it should show just that year's data. Here is what I want it to look like: a pretty simple layout with just the 5 buttons and the outputs for each. I can make the buttons for the top fine with:

        class App:
            def __init__(self, master):
                frame = Frame(master)
                frame.pack()
                self.hi_there = Button(frame, text="All Years", command=self.All)
                self.hi_there.pack(side=LEFT)
                self.hi_there = Button(frame, text="2011", command=self.Y1)
                self.hi_there.pack(side=LEFT)
                self.hi_there = Button(frame, text="2012", command=self.Y2)
                self.hi_there.pack(side=LEFT)
                self.hi_there = Button(frame, text="2013", command=self.Y3)
                self.hi_there.pack(side=LEFT)
                self.hi_there = Button(frame, text="Save & Exit", command=self.Exit)
                self.hi_there.pack(side=LEFT)

    I'm not sure how to make the other elements, such as the title and table. I was going to post the code of the small program but decided not to; once I have the structure/framework I think I can populate the fields, and I might learn better this way. Using Python 2.7.3.
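    A minimal sketch of the missing pieces (an editor's illustration, assuming Python 2.7's Tkinter and made-up summary numbers): a Label for the title, and a Frame in which Labels are laid out with grid() to act as the table. The button row from the question would keep using pack() in its own frame.

        from Tkinter import *

        class App:
            def __init__(self, master):
                # the button row from the question would be built here, in its own Frame

                self.title = Label(master, text="All Years", font=("Helvetica", 14, "bold"))
                self.title.pack()

                self.table = Frame(master)
                self.table.pack()
                self.show_rows([("Year", "Min", "Max", "Average"),
                                ("2011", "2.1", "31.5", "15.3")])   # placeholder data

            def show_rows(self, rows):
                # clear the previous table, then rebuild it from the given rows
                for child in self.table.winfo_children():
                    child.destroy()
                for r, row in enumerate(rows):
                    for c, value in enumerate(row):
                        Label(self.table, text=value, width=10,
                              borderwidth=1, relief="solid").grid(row=r, column=c)

        root = Tk()
        app = App(root)
        root.mainloop()

    Each year button's handler would then just call show_rows with that year's figures and update the title text.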

    Read the article

  • gen_server with a dict vs mnesia table vs ets

    - by pablo
    I'm building an Erlang server. Users send HTTP requests to the server to update their status, and the HTTP request process on the server saves the user's status message in memory. Every minute the server sends all messages to a remote server and clears the memory. If a user updates his status several times in a minute, the last message overrides the previous one. It is important that between reading all the messages and clearing them, no other process is able to write a status message. What is the best way to implement this?

    1. A gen_server holding a dict. The key will be the user id, and dict:store/3 will update or create the status. The gen_server solves the 'transaction' issue.
    2. An mnesia table with ram_copies. It handles transactions and I don't need to implement a gen_server. Is there too much overhead with this solution?
    3. An ETS table, which is more lightweight, together with a gen_server. Is it possible to do the transaction in ETS, i.e. to lock the table between reading all the messages and clearing them?

    Thanks
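    For reference, a rough sketch of option 1 (an editor's illustration, not from the thread). Because every update and the flush go through the same gen_server process, reading all messages and clearing them happens in one call and cannot interleave with a write:

        -module(status_store).
        -behaviour(gen_server).
        -export([start_link/0, set_status/2, flush/0]).
        -export([init/1, handle_call/3, handle_cast/2, handle_info/2,
                 terminate/2, code_change/3]).

        start_link() -> gen_server:start_link({local, ?MODULE}, ?MODULE, [], []).

        set_status(UserId, Msg) -> gen_server:cast(?MODULE, {status, UserId, Msg}).

        %% returns all pending messages and clears them in one atomic step
        flush() -> gen_server:call(?MODULE, flush).

        init([]) -> {ok, dict:new()}.

        handle_cast({status, UserId, Msg}, Dict) ->
            {noreply, dict:store(UserId, Msg, Dict)}.

        handle_call(flush, _From, Dict) ->
            {reply, dict:to_list(Dict), dict:new()}.

        handle_info(_Info, State) -> {noreply, State}.
        terminate(_Reason, _State) -> ok.
        code_change(_OldVsn, State, _Extra) -> {ok, State}.

    The once-a-minute job simply calls status_store:flush() and forwards the returned list to the remote server.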

    Read the article

  • How frequently are IP packets fragmented at the source host?

    - by Methos
    I know that if the IP payload is larger than the MTU, routers usually fragment the IP packet, and all the fragments are reassembled at the destination using the IP-ID field, the IP fragment offsets and the fragmentation flags. The maximum length of an IP payload is 64K, so it is quite plausible for L4 to hand over a payload as large as 64K. If the L2 protocol is Ethernet, which often is the case, the MTU will be about 1500 bytes, and hence the IP packet will be fragmented at the source host itself. However, a quick search on the IP implementation in Linux tells me that in recent kernels, L4 protocols are fragment-friendly, i.e. they try to save IP the fragmentation work by handing over buffers whose size is close to the MTU. Considering these two facts, I am wondering how frequently an IP packet actually gets fragmented at the source host. Does it occur sometimes/rarely/never? Does anyone know if there are exceptions to this in the Linux kernel (i.e. situations where L4 protocols are not fragment-friendly)? How is this handled in other common OSes like Windows? In general, how frequently are IP packets fragmented?

    Read the article

  • nginx multiple domain virtual host configuration

    - by Poe
    I'm setting up nginx with multiple-domain or wildcard support for convenience's sake, rather than setting up 50+ different sites-available/* files. Hopefully this is enough to show you what I'm trying to do. Some are static sites, some are dynamic, usually with WordPress installed. If an index.php exists, everything works as expected. If a file is requested that does not exist (missing.html), a 500 error is given due to the rewrite. The logged error is:

        *112 rewrite or internal redirection cycle while processing "/index.php/index.php/index.php/index.php/index.php/index.php/index.php/index.php/index.php/index.php/index.php/missing.html"

    The basic nginx configuration I'm currently using is:

        listen 80 default;
        server_name _;
        ...
        location / {
            root /var/www/$host;
            if (-f $request_filename) {
                expires max;
                break;
            }
            # problem: what if index.php does not exist?
            if (!-e $request_filename) {
                rewrite ^/(.*)$ /index.php/$1 last;
            }
        }
        ...

    If index.php does not exist and the requested file also does not exist, I would like it to return a 404. Currently nginx does not support multiple-condition ifs or nested ifs, so I need a workaround.
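    One workaround (an editor's sketch, not from the original post; it assumes an nginx new enough to have try_files, 0.7.27+): let try_files do the existence checks instead of chaining if blocks, and answer the missing-front-controller case in the PHP location.

        server {
            listen 80 default;
            server_name _;
            root /var/www/$host;

            location / {
                # serve the file or directory if it exists,
                # otherwise hand the original URI to index.php as path info
                try_files $uri $uri/ /index.php$uri;
            }

            location ~ ^/index\.php {
                # if this vhost has no index.php, answer 404 instead of looping
                if (!-f $document_root/index.php) {
                    return 404;
                }
                # fastcgi_pass / fastcgi_param directives for PHP would go here
            }
        }

    The internal redirect to /index.php$uri is taken at most once, so the redirection-cycle error cannot occur: either index.php handles the request, or the second location ends it with a 404.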

    Read the article

  • What type of web service should I put together?

    - by Jake
    I want to write a web service using Visual Studio. The service needs to support some type of authentication and should be able to receive commands via simple HTTP GET requests. The input would only be a method call with some parameters, and the responses will be simple status/error codes. My instinct would be to go with an ASP.NET Web Service, but this isn't an option in C# 4.0, which makes me wonder if I should be using something more up-to-date. I've looked into WCF, but it seems to require a running application on the client side - is there a way to query a WCF host by just accessing a URL? Authentication is also an important piece. Developing my own little authentication system seems like a bad idea - I've read that it's too easy to mess up. What would be the standard way of authenticating with a web service like this? I'd love to look up all of the specifics and learn this myself, but I don't even know where to begin. Some direction would be greatly appreciated!
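    For what it's worth, a rough sketch of the WCF route (an editor's illustration; the service name, URI template and return value are made up): with WCF's REST support (webHttpBinding), an operation can be bound to a plain GET URL, so any client that can fetch a URL can call it - no generated proxy needed.

        using System.ServiceModel;
        using System.ServiceModel.Web;

        [ServiceContract]
        public class CommandService
        {
            [OperationContract]
            [WebGet(UriTemplate = "run?command={command}&value={value}")]
            public int Run(string command, string value)
            {
                // perform the configured action and return a simple status/error code
                return 0;
            }
        }

    Hosted with a WebServiceHostFactory (or webHttpBinding plus the webHttp behavior in config), this responds to e.g. GET /CommandService.svc/run?command=ping&value=1, and authentication can be layered on with HTTPS plus standard HTTP authentication rather than a hand-rolled scheme.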

    Read the article

  • Building asynchronous cache pattern with JSP

    - by merweirdo
    I have a JSP that takes some 8 minutes to render. The logic itself cannot be made more efficient (it will change often and be maintained by, basically, a pointy-haired boss). I tried wrapping it with a caching layer like:

        <%@ taglib uri="/WEB-INF/classes/oscache.tld" prefix="oscache" %>
        <oscache:cache time="60">
            <div class="pagecontent">
                ..... my logic
            </div>
        </oscache:cache>

    This is nice until the 60 seconds are over: the next request after that blocks until the 8 minutes of rendering are done again. What I need is a pattern like this:

    1. If there is no version of the dynamic content in the cache, run the actual logic (and populate the cache for subsequent requests).
    2. If there is a non-expired version of the dynamic content in the cache, serve the output of the JSP logic from the cache.
    3. If there is an expired version of the dynamic content in the cache, still serve the output from the cache AND run the JSP logic in the background so that the cache gets updated transparently - the user never has to wait for the 8 minutes.

    I found out that EHCache can do some asynchronous cache updating, but sadly it did not seem to apply to the JSP tags... Also, the JSP logic takes 10-20 parameters, and some of them should be used as part of the cache key. Code examples and/or pointers would be greatly appreciated. I frankly do not care if the solution is extremely ugly; I just want simple 5-minute caching with asynchronous cache updates, taking some parameters into account as the key.
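    An editor's sketch of that stale-while-revalidate pattern, independent of OSCache/EHCache (the class and method names are made up): the JSP, or a custom tag, asks this cache for the rendered HTML, passing a key built from the 10-20 relevant parameters and a Callable that produces the expensive output.

        import java.util.concurrent.*;

        public class StaleWhileRevalidateCache {
            private static final long TTL_MS = 5 * 60 * 1000;   // 5 minutes
            private final ConcurrentMap<String, Entry> cache = new ConcurrentHashMap<String, Entry>();
            private final ExecutorService refresher = Executors.newFixedThreadPool(2);

            private static final class Entry {
                final String html;
                final long builtAt;
                volatile boolean refreshing;
                Entry(String html, long builtAt) { this.html = html; this.builtAt = builtAt; }
            }

            public String get(final String key, final Callable<String> render) throws Exception {
                Entry e = cache.get(key);
                if (e == null) {
                    // nothing cached yet: the very first request pays the full 8 minutes
                    String html = render.call();
                    cache.put(key, new Entry(html, System.currentTimeMillis()));
                    return html;
                }
                if (System.currentTimeMillis() - e.builtAt > TTL_MS && !e.refreshing) {
                    e.refreshing = true;   // best-effort guard against duplicate rebuilds
                    refresher.submit(new Callable<Void>() {
                        public Void call() throws Exception {
                            cache.put(key, new Entry(render.call(), System.currentTimeMillis()));
                            return null;
                        }
                    });
                }
                return e.html;             // serve the (possibly stale) copy immediately
            }
        }

    The cache key would simply concatenate the parameters that change the output, and the Callable would run whatever the JSP currently does to produce the pagecontent div.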

    Read the article

  • MemoryStream and the Large Object Heap

    - by Flo
    I have to transfer large files between computers over unreliable connections using WCF. Because I want to be able to resume the transfer and I don't want my file size limited by WCF, I am chunking the files into 1 MB pieces. These "chunks" are transported as streams, which works quite nicely so far. My steps are:

    1. open the FileStream
    2. read a chunk from the file into a byte[] and create a MemoryStream from it
    3. transfer the chunk
    4. back to 2, until the whole file is sent

    My problem is in step 2. I assume that when I create a MemoryStream from a byte array, the array will end up on the LOH and ultimately cause an OutOfMemoryException. I could not actually produce this error, so maybe my assumption is wrong. Now, I don't want to send the byte[] in the message, as WCF will tell me the array size is too big. I can change the maximum allowed array size and/or the size of my chunks, but I hope there is another solution. My actual questions: Will my current solution create objects on the LOH, and will that cause me problems? Is there a better way to solve this? By the way, on the receiving side I simply read smaller chunks from the arriving stream and write them directly into the file, so no large byte arrays are involved there.
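    On the LOH point: any 1 MB byte[] is well over the 85 KB large-object threshold, so a fresh buffer per chunk does land on the LOH. A sketch of one way around that (an editor's illustration; SendChunk is a placeholder, not a WCF API): allocate the buffer once and wrap it in a read-only MemoryStream for each chunk, so only a single LOH allocation is ever made.

        using System.IO;

        void SendFileInChunks(string path)
        {
            const int ChunkSize = 1024 * 1024;
            byte[] buffer = new byte[ChunkSize];      // allocated once, lives on the LOH

            using (var file = File.OpenRead(path))
            {
                int read;
                while ((read = file.Read(buffer, 0, buffer.Length)) > 0)
                {
                    // wraps the existing buffer; no new large array per chunk
                    using (var chunk = new MemoryStream(buffer, 0, read, false))
                    {
                        SendChunk(chunk);             // hypothetical call that transfers this chunk
                    }
                }
            }
        }

    Because the same buffer is reused, each chunk must be fully sent before the next read overwrites it, which matches the sequential loop above.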

    Read the article

  • Accessing global variable in multithreaded Tomcat server

    - by jwegan
    I have a singleton object that I construct like this:

        private static volatile KeyMapper mapper = null;

        public static KeyMapper getMapper() {
            if (mapper == null) {
                synchronized (Utils.class) {
                    if (mapper == null) {
                        mapper = new LocalMemoryMapper();
                    }
                }
            }
            return mapper;
        }

    The class KeyMapper is basically a synchronized wrapper around a HashMap with only two functions, one to add a mapping and one to remove a mapping. When running in Tomcat 6.24 on my 32-bit Windows machine everything works fine. However, when running on a 64-bit Linux machine (CentOS 5.4 with OpenJDK 1.6.0-b09), I add one mapping and print out the size of the HashMap used by KeyMapper to verify the mapping got added (i.e. verify size = 1). Then I try to retrieve the mapping with another request and I keep getting null, and when I check the size of the HashMap it is 0. I'm confident the mapping isn't accidentally being removed, since I've commented out all calls to remove (and I don't use clear or any other mutators, just get and put). The requests go through Tomcat 6.24 (configured to use 200 threads with a minimum of 4 threads) and I passed -Xnoclassgc to the JVM to ensure the class isn't inadvertently getting garbage collected (the JVM also runs in -server mode). I also added a finalize method to KeyMapper that prints to stderr, to verify that it never gets garbage collected. I'm at my wits' end and I can't figure out why one minute the entry in the HashMap is there and the next it isn't :(
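    For what it's worth, an editor's sketch of the initialization-on-demand holder idiom, which gives the same lazy, thread-safe singleton without double-checked locking. (It does not by itself explain the vanishing entries; with symptoms like these it is worth checking whether the class is loaded by more than one web application or classloader, since each copy of the class would then hold its own map.)

        public final class Utils {
            private Utils() {}

            // The JVM initialises MapperHolder exactly once, on the first call to
            // getMapper(), and guarantees visibility to all threads - no volatile needed.
            private static final class MapperHolder {
                static final KeyMapper INSTANCE = new LocalMemoryMapper();
            }

            public static KeyMapper getMapper() {
                return MapperHolder.INSTANCE;
            }
        }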

    Read the article

  • How can I differentiate a manual scroll (via mousewheel/scrollbar) from a Javascript/jQuery scroll?

    - by David Murdoch
    UPDATE: Here is a jsbin example demonstrating the problem.

    Basically, I have the following JavaScript, which scrolls the window to an anchor on the page:

        // get anchors with href's that start with "#"
        $("a[href^=#]").live("click", function(){
            var target = $($(this).attr("href"));
            // if the target exists: scroll to it...
            if(target[0]){
                // If the page isn't long enough to scroll to the target's position
                // we want to scroll as much as we can. This part prevents a sudden
                // stop when window.scrollTop reaches its maximum.
                var y = Math.min(target.offset().top, $(document).height() - $(window).height());
                // also, don't try to scroll to a negative value...
                y = Math.max(y, 0);
                // OK, you can scroll now...
                $("html,body").stop().animate({ "scrollTop": y }, 1000);
            }
            return false;
        });

    It works perfectly... until I manually try to scroll the window. When the scrollbar or mouse wheel is scrolled I need to stop the current scroll animation, but I'm not sure how to do this. This is probably my starting point:

        $(window).scroll(function(e){
            if(IsManuallyScrolled(e)){
                $("html,body").stop();
            }
        });

    ...but I'm not sure how to code the IsManuallyScrolled function. I've checked out e (the event object) in Google Chrome's console and AFAIK there is no way to differentiate between a manual scroll and jQuery's animate() scroll. How can I differentiate between a manual scroll and one called via jQuery's $.fn.animate function?
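    One common approach (an editor's sketch, not from the post): rather than classifying the scroll event itself, listen for input events that only a real user produces and stop the animation there. jQuery's animated scrolling changes scrollTop directly and never fires these events:

        // kill the programmatic scroll as soon as the user intervenes
        $(window).bind("mousewheel DOMMouseScroll touchmove", function () {
            $("html,body").stop();
        });

    Dragging the scrollbar itself does not fire any of these events, so catching that case still means comparing the scroll position you expect during the animation with the one you actually observe.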

    Read the article

  • Powershell scripts to backup SQL, SVN

    - by bszom
    I'm trying to use PowerShell to create some backups and then copy them to a web folder (or, in other words, upload them to a WebDAV share). At first I thought I'd do the WebDAV part from within PowerShell, but it seems this still requires a fair amount of manual labour, i.e. constructing HTTP requests. I then settled for creating a web folder from the script and letting Windows handle the WebDAV part. It seems that all it takes to create a web folder is to create a standard shortcut, as described here. What I can't figure out is how to actually copy files to the shortcut's target... Maybe I'm going about this the wrong way. It would be ideal if I could somehow encrypt the credentials for the WebDAV share in the script, then have it create the web folder, shunt over the files, and delete the web folder again. Or even better, not use a web folder at all. A third option would be to just create the web folder manually and leave it there, though I'd rather not. Any ideas/pointers/tips? :)

    Read the article

  • Performance impact when using XML columns in a table with MS SQL 2008

    - by Sam Dahan
    I am using a simple table with 6 columns, 3 of which are of XML type, not schema-constrained. When the table reaches a size of around 120,000 or 150,000 rows, I see a dramatic performance cost for any query on the table. For comparison, I have another table, which grows at about the same rate but contains only scalar types (int, datetime, a few float columns). That table performs perfectly fine even after 200,000 rows. And by the way, I am not using XQuery on the XML columns; I am only using regular SQL query statements. Some specifics: both tables contain a DateTime field called SampleTime, and a statement like (it lives in a stored procedure, but this is the actual statement)

        SELECT MAX(sampleTime) SampleTime
        FROM dbo.MyRecords
        WHERE PlacementID = @somenumber

    takes 0 seconds on the table without XML columns, and anything from 13 to 20 seconds on the table with XML columns, depending on which drive I put the database on. At the moment it sits on a separate spindle (not C:) and it takes 13 seconds. Has anyone seen this behavior before, or have any hint at what I am doing wrong? I tried this with SQL Server 2008 Express and the full-blown SQL Server 2008, and that made no difference. Oh, one last detail: I am doing this from a C# application, .NET 3.5, using SqlConnection, SqlReader, etc. I'd appreciate some insight into this, thanks! Sam
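    One thing worth ruling out (an editor's note, not from the thread): without a suitable index, that MAX query scans the table, and rows that carry XML columns are much wider, so the scan touches far more pages. A narrow covering index lets the query ignore the XML data entirely:

        CREATE NONCLUSTERED INDEX IX_MyRecords_Placement_SampleTime
            ON dbo.MyRecords (PlacementID, SampleTime);

    With this index in place, MAX(SampleTime) for one PlacementID becomes a short seek to the end of the matching index range instead of a scan over the wide rows.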

    Read the article

  • Help me to refactor this F# code to tail recursion

    - by Kev
    I wrote some code while learning F#. Here is an example:

        let nextPrime list =
            let rec loop n =
                match n with
                | _ when (list |> List.filter (fun x -> x <= (n |> double |> sqrt |> int)) |> List.forall (fun x -> n % x <> 0)) -> n
                | _ -> loop (n + 1)
            loop (List.max list + 1)

        let rec findPrimes num =
            match num with
            | 1 -> [2]
            | n ->
                let temp = findPrimes <| n - 1
                (nextPrime temp) :: temp

        // find 10 primes
        findPrimes 10 |> printfn "%A"

    I'm very happy that it just works! I'm a total beginner with recursion, and recursion is a wonderful thing, but I think findPrimes is not efficient. Could someone help me refactor findPrimes into tail recursion, if possible? By the way, is there a more efficient way to find the first n primes?
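    For reference, a minimal tail-recursive rewrite (an editor's sketch, reusing nextPrime from the question unchanged): the accumulator carries the primes found so far, so the recursive call is the last thing loop does and the compiler can turn it into a loop instead of growing the stack.

        let findPrimes num =
            let rec loop count acc =
                if count = num then acc
                else loop (count + 1) (nextPrime acc :: acc)
            loop 1 [2]

        // find 10 primes, newest first, just like the original
        findPrimes 10 |> printfn "%A"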

    Read the article

  • Exposed onsite vs IFD deployments for MS Dynamics CRM

    - by Greg McGuffey
    I'm working for the first time on an MS Dynamics CRM 4.0 project. Our company has a large number of remote employees and even more remote consultants, so it will be necessary to make the CRM solution available over the internet. As near as I can tell, I have three options:

    1. Have everyone use a VPN to access an intranet site (typical onsite deployment). However, we have found that VPNs are far from trouble-free and cause many support issues; we avoid them like the plague.
    2. Use IFD to expose the CRM on the internet. I don't know much about this except that the URL will be different from the onsite URL, which could cause some headaches (see below).
    3. Expose the CRM site by opening it to the internet, using SSL to encrypt traffic. We currently do this with our MS SharePoint sites. I'm not sure how secure this would be (one of the reasons for this question).

    I'd like to avoid using both the onsite intranet deployment and the IFD together, for a couple of reasons. One of the requests for the solution is to use email to notify users that they've been assigned a task and include the URL to the task in the email. If both deployments are used, I'd need to include two URLs and the user would need to know which to use. Which leads to the second reason: the main users of the solution split their time between being in the office and being remote, so they would need to access the solution two different ways and know when to use which. Bad. So, what are the advantages/disadvantages of any of these methods? Any other options? Is there any issue using IFD from within the intranet? Security issues? Thanks!

    Read the article

  • PHP Socket Server vs node.js: Web Chat

    - by Eliasdx
    I want to program an HTTP web chat using long-held HTTP requests (Comet), Ajax and WebSockets (depending on the browser used). The user database is in MySQL. The chat is written in PHP, except maybe the chat stream itself, which could also be written in JavaScript (node.js). I don't want to start a PHP process per user, as there is no good way to send the chat messages between those PHP children. So I thought about writing my own socket server in either PHP or node.js, which should be able to handle more than 1000 connections (chat users). As a pure web developer (PHP) I'm not very familiar with sockets, as I usually let the web server take care of connections. The chat messages won't be saved on disk nor in MySQL but in RAM, as an array or object, for best speed. As far as I know there is no way to handle multiple connections at the same time in a single PHP process (socket server); however, you can accept a large number of socket connections and process them successively in a loop (read and write: for each incoming message, write to all socket connections). The problem is that there will most likely be a lag with ~1000 users, and MySQL operations could slow the whole thing down, which would then affect all users. My question is: can node.js handle such a socket server with better performance? Node.js is event-based, but I'm not sure if it can process multiple events at the same time (wouldn't that need multi-threading?) or if there is just an event queue. With an event queue it would be just like PHP: process user after user. I could also spawn a PHP process per chat room (far fewer users), but as far as I know there are single-threaded IRC servers that are capable of handling thousands of users (written in C++ or whatever), so maybe it's also possible in PHP. I would prefer PHP over node.js because then the project would be PHP-only and not a mixture of programming languages. However, if Node can process connections simultaneously, I'd probably choose it.

    Read the article

  • Rails Tableless Model

    - by mplacona
    I'm creating a tableless Rails model and am a bit stuck on how I should use it. Basically I'm trying to create a little application using Feedzirra that scans an RSS feed every X seconds and then sends me an email with only the updates. I'm trying to use it like an ActiveRecord model, and although I can get it to work, it doesn't seem to "hold" data as expected. For example, I have an initialize method that parses the feed the first time. On subsequent requests, I would like to simply call the get_updates method, which according to Feedzirra updates the existing object (created during initialize) with only the differences. I'm finding it really hard to understand how this all works, as the object created in the initialize method doesn't seem to persist across the other methods on the model. My code looks something like:

        def initialize(feed)
          # parse here
        end

        def get_updates
          # Feedzirra update, passing the feed object here
        end

    Not sure if this is the right way of doing it, but it all seems a bit confusing and not very clear. I could be over- or under-doing things here, but I'd like your opinion about this approach. Thanks in advance

    Read the article

  • How do I make divs go onto another row when full?

    - by acidzombie24
    My code is something like the below. When there are 3 images everything is fine; once there are 4, the row gets full and the entire div.top is pushed onto another row. How do I make the divs inside .top just start a new row instead? I tried giving .top a width of 500px, but once the content hits or passes that width the images inside are squeezed together instead of each being 150x150. I tried max-width on .top instead, and in Opera and Chrome I see the border of .top at 500px wide but the images continue to render past it. (I have a Firefox problem with my div, so there the width looks fixed to something else.) So how do I make these divs wrap onto another row rather than squeezing together?

        <div class="top">
          <div><a href><img/></a></div>
          <div><a href><img/></a></div>
          <div><a href><img/></a></div>
        </div>
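    A minimal sketch of one way to get that wrapping (an editor's illustration; the 500px/150px figures are taken from the question): fix the container's width and float the inner cells, so a fourth cell wraps onto a new row instead of shrinking the others.

        .top {
            width: 500px;       /* room for three 150px cells per row */
            overflow: hidden;   /* make .top contain its floated children */
        }
        .top div {
            float: left;
            width: 150px;
            height: 150px;
        }
        .top div img {
            max-width: 100%;    /* keep each image inside its 150x150 cell */
            max-height: 100%;
        }

    display: inline-block on the inner divs works as well; the important part is that the cells have a fixed size and the container's width, not the number of images, decides where a row ends.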

    Read the article

  • Animation is slow on iPhone

    - by Anthony Chan
    I'm developing an app that displays images and changes them according to the user's actions. I've created a subclass of UIView that contains an image, an index number and an array of image names. The code is like this:

        @interface CustomPic : UIView {
            UIImageView *pic;
            NSInteger index;
            NSMutableArray *picNames;   // <-- an array of NSString
        }

    And in the implementation, it has a method to change the image using a dissolve effect:

        - (void)nextPic {
            index++;
            if (index >= [picNames count]) {
                index = 0;
            }
            UIImageView *temp = pic;
            pic.image = [UIImage imageNamed:[picNames objectAtIndex:index]];
            temp.alpha = 0;
            [UIView beginAnimations:nil context:nil];
            [UIView setAnimationCurve:UIViewAnimationCurveEaseOut];
            [UIView setAnimationDuration:0.25];
            pic.alpha = 0;
            temp.alpha = 1;
            [UIView commitAnimations];
        }

    In the view controller there are several CustomPic views which change their images depending on the user's choice. The images change as expected with the fade in/out effect, but the animation performance is really bad. I've tested it on an iPhone 3G, and Instruments shows the animation running at only 2-3 FPS! I tried many ways to simplify and modify the code, with no luck. Is there something wrong in my code or in my approach? Thanks for any help. P.S. all the images are 320*480 PNGs with a max size of 15KB.
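    For comparison, an editor's sketch of a two-view cross-fade (currentView and nextView are assumed ivars, both added as subviews up front): each image lives in its own UIImageView, so the animation only touches alpha, and the PNG decode happens once, before the fade starts.

        - (void)nextPic {
            index = (index + 1) % [picNames count];

            // decode the incoming image into the hidden view before animating
            nextView.image = [UIImage imageNamed:[picNames objectAtIndex:index]];
            nextView.alpha = 0.0;

            [UIView beginAnimations:nil context:NULL];
            [UIView setAnimationCurve:UIViewAnimationCurveEaseOut];
            [UIView setAnimationDuration:0.25];
            currentView.alpha = 0.0;
            nextView.alpha = 1.0;
            [UIView commitAnimations];

            // swap the roles for the next call
            UIImageView *swap = currentView;
            currentView = nextView;
            nextView = swap;
        }

    Note that in the posted code temp and pic refer to the same view, so the two alpha assignments inside the animation block conflict and the effect is a plain fade-in of the already-swapped image; keeping the outgoing and incoming images in separate views is what lets Core Animation composite the dissolve cheaply.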

    Read the article

  • Twitter API Rate Limit - Overcoming it for an unauthenticated JSON GET with Objective-C?

    - by Cian
    I see the rate limit is 150 requests/hour per IP. This would be fine, but my application runs on a mobile phone network (with shared IP addresses). I'd like to query Twitter trends, e.g. GET /trends/1/json. This doesn't require authorization, but what if the user first authorized my application using OAuth and then hit the JSON API? The request is built as follows:

        - (void)queryTrends:(NSString *)WOEID {
            NSString *urlString = [NSString stringWithFormat:@"http://api.twitter.com/1/trends/%@.json", WOEID];
            NSURL *url = [NSURL URLWithString:urlString];
            NSURLRequest *theRequest = [NSURLRequest requestWithURL:url
                                                        cachePolicy:NSURLRequestUseProtocolCachePolicy
                                                    timeoutInterval:10.0];
            NSURLConnection *theConnection = [[NSURLConnection alloc] initWithRequest:theRequest
                                                                             delegate:self
                                                                     startImmediately:YES];
            if (theConnection) {
                // Create the NSMutableData to hold the received data.
                theData = [[NSMutableData data] retain];
            } else {
                NSLog(@"Connection failed in Query Trends");
            }
            //NSData *data = [NSData dataWithContentsOfURL:[NSURL URLWithString:urlString]];
        }

    However, I have no idea how to build this request as an authenticated one, and I haven't seen any examples of this online. I've read through the Twitter OAuth documentation, but I'm still puzzled as to how it should work. I've experimented with OAuth using Ben Gottlieb's prebuilt library, calling this in my first viewDidLoad:

        OAuthViewController *oAuthVC = [[OAuthViewController alloc] initWithNibName:@"OAuthTwitterDemoViewController"
                                                                             bundle:[NSBundle mainBundle]];
        // [self setViewController:aViewController];
        [[self navigationController] pushViewController:oAuthVC animated:YES];

    This should store all the required keys in the app's preferences; I just need to know how to build the GET request after authorizing! Maybe this just isn't possible? Maybe I'll have to proxy the requests through a server-side application? Any insight would be appreciated!

    Read the article

  • Want to display a 3D model on the iPhone: how to get started?

    - by JeremyReimer
    I want to display and rotate a single 3D model, preferably textured, on the iPhone. It doesn't have to zoom in and out, or have a background, or anything. I have the following: an iPhone, a MacBook, the iPhone SDK, and Blender. My knowledge base:

    - I can make 3D models in various 3D programs (I'm most comfortable with 3D Studio Max, which I once took a course on, but I've used others).
    - General knowledge of procedural programming from years ago (QuickBasic - I'm old!).
    - Beginner's knowledge of object-oriented programming from going through simple Java and C# tutorials (a Head First C# book and my wife's intro-to-OOP course that used Java).
    - I have managed to display and spin a 3D textured model using a C# tutorial I found on the net (I didn't just copy and paste; I understand roughly how it works) and the XNA game development library, using Visual Studio on Windows.

    What I do not know:

    - Much about Objective-C.
    - Anything about OpenGL or OpenGL ES, which the iPhone apparently uses.
    - Anything about Xcode.

    My main problem is that I don't know where to start! All the iPhone books I found seem to be about creating GUI applications, not OpenGL apps. I found an OpenGL book, but I don't know how much of it, if any, applies to iPhone development. And I find the Objective-C syntax somewhat confusing, with the weird nested method naming, things like "id" that don't make sense, and the scary thought that I have to do manual memory management. Where is the best place to start? I couldn't find any tutorials for this sort of thing, but maybe my Google-fu is weak. Or maybe I should start with learning Objective-C? I know of books like Aaron Hillegass', but I've also read that they are outdated and much of the sample code doesn't work on the iPhone SDK, plus they seem geared towards the Model-View-Controller paradigm, which doesn't seem that well suited to 3D apps. Basically I'm confused about what my first steps should be.

    Read the article

  • How to load an external div that has dynamic content using Ajax and JSP/servlets?

    - by A.S al-shammari
    I need to use Ajax to load an external div element (an external JSP file) into the current page. That JSP page contains dynamic content, e.g. content based on values received from the current session. I have solved this, but I'm in doubt because I think my solution may be bad, or maybe there is a better solution, since I'm not an expert. I have three files. First, a JavaScript function that is triggered when a table row is clicked; it requests HTML data from a servlet:

        $("#inboxtable tbody tr").click(function(){
            var trID = $(this).attr('id');
            $.post("event?view=read", {id: trID}, function(data){
                $("#eventContent").html(data);   // load external file
            }, "html");                          // type
        });

    Second, the "event" servlet loads the data and generates the HTML content using the include method:

        String id = request.getParameter("id");
        if (id != null) {
            v.add("Test");
            v.add(id);
            session.setAttribute("readMessageVector", v);
            request.getRequestDispatcher("readMessage.jsp").include(request, response);
        }

    Third, the readMessage.jsp file looks like this:

        <p> Text: ${readMessageVector[0]} </p>
        <p> ID: ${readMessageVector[1]} </p>

    My questions: Is this solution good enough for loading an external JSP that has dynamic content? Is there a better solution?

    Read the article

  • VS2010 development web server does not use integrated-mode HTTP handlers/modules

    - by Domenic
    I am developing an ASP.NET MVC 2 web site, targeted at .NET Framework 4.0, using Visual Studio 2010. My web.config contains the following:

        <system.webServer>
          <modules runAllManagedModulesForAllRequests="true">
            <add name="XhtmlModule" type="DomenicDenicola.Website.XhtmlModule" />
          </modules>
          <handlers>
            <add name="DotLess" type="dotless.Core.LessCssHttpHandler,dotless.Core" path="*.less" verb="*" />
          </handlers>
        </system.webServer>

    When I use Build > Publish to put the web site on my local IIS7 instance, it works great. However, when I use Debug > Start Debugging, neither the HTTP handler nor the module is executed on any request. Strangely enough, when I put the handler and module <add /> tags back into <system.web /> under <httpHandlers /> and <httpModules />, they work. This seems to imply that the development web server is running in classic mode. How do I fix this?

    Read the article

  • Application Design: Single vs. Multiple Hits to the DB

    - by shyneman
    I'm building a service that performs a set of configured activities based on the type of request it receives. Each activity involves going to the database and retrieving or updating some kind of information. The logic for each activity can be generalized and reused across different request types, and the activities may need to participate in one transaction for the duration of servicing the request. One option I'm considering is having each activity maintain its own access to the DAL/database. This fully encapsulates each activity as a stand-alone, reusable piece, but hitting the database multiple times for one request doesn't seem like a viable option, and I don't really know how to implement the concept of a transaction across multiple activities this way either. The second option is to collapse ALL the activities into one big activity and hit the database once, but that prevents reusing and reconfiguring these activities for different requests. Does anyone have suggestions about the best way to approach this? Thanks for any help.
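    On the transaction-across-activities part, a sketch of one common approach in .NET (an editor's illustration; IActivity and the Request type are made up): keep each activity's own data access, but wrap the whole request in an ambient System.Transactions.TransactionScope so all of their database work commits or rolls back together.

        using System.Collections.Generic;
        using System.Transactions;

        public class RequestProcessor
        {
            public void HandleRequest(IEnumerable<IActivity> configuredActivities, Request request)
            {
                using (var scope = new TransactionScope())
                {
                    foreach (var activity in configuredActivities)
                    {
                        // each activity opens and uses its own connection as before
                        activity.Execute(request);
                    }
                    scope.Complete();   // nothing commits unless every activity succeeded
                }
            }
        }

    The trade-off is that if the activities open several separate connections, the transaction can escalate to a distributed one (MSDTC); sharing a single open connection across the activities for the duration of the scope avoids that.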

    Read the article

  • How to specify the column type in Fluent NHibernate?

    - by Bipul
    I have a class CaptionItem:

        public class CaptionItem
        {
            public virtual int SystemId { get; set; }
            public virtual int Version { get; set; }
            protected internal virtual IDictionary<string, string> CaptionValues { get; private set; }
        }

    I am using the following code for the NHibernate mapping:

        Id(x => x.SystemId);
        Version(x => x.Version);
        Cache.ReadWrite().IncludeAll();
        HasMany(x => x.CaptionValues)
            .KeyColumn("CaptionItem_Id")
            .AsMap<string>(idx => idx.Column("CaptionSet_Name"),
                           elem => elem.Column("Text"))
            .Not.LazyLoad()
            .Cascade.Delete()
            .Table("CaptionValue")
            .Cache.ReadWrite().IncludeAll();

    So two tables get created in the database, CaptionValue and CaptionItem. The CaptionValue table has three columns:

    1. CaptionItem_Id int
    2. Text nvarchar(255)
    3. CaptionSet_Name nvarchar(255)

    Now, my question is: how can I make the column type of Text nvarchar(max)? Thanks in advance.

    Read the article
