Search Results

Search found 8849 results on 354 pages for 'cloud hosting'.

Page 327/354

  • Which tools to use and how to find file descriptors leaking from Glassfish?

    - by cclark
    We release new code to production every week and Glassfish hasn't had any problems. This weekend we had to move racks at our hosting provider. There were no code changes (they just powered off, moved, re-racked and powered on), but we're on a new network infrastructure and suddenly we're leaking file descriptors like a sieve. So I'm guessing there is some sort of connection being attempted which now fails due to the network change. I'm running Glassfish v2ur2-b04/AS9.1_02 on RHEL4 with an embedded IMQ instance. After the move I started seeing:

        [#|2010-04-25T05:34:02.783+0000|SEVERE|sun-appserver9.1|javax.enterprise.system.container.web|_ThreadID=33;_ThreadName=SelectorThread-?4848;_RequestID=c4de6f6d-c1d6-416d-ac6e-49750b1a36ff;|WEB0756: Caught exception during HTTP processing.
        java.io.IOException: Too many open files
            at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) ...
        [#|2010-04-25T05:34:03.327+0000|WARNING|sun-appserver9.1|javax.enterprise.system.stream.err|_ThreadID=34;_ThreadName=Timer-1;_RequestID=d27e1b94-d359-4d90-a6e3-c7ec49a0f383;|java.lang.NullPointerException
            at com.sun.jbi.management.system.AutoAdminTask.pollAutoDirectory(AutoAdminTask.java:1031)

    Using lsof I check the number of file descriptors, and I see quite a few entries which look like:

        java 18510 root 8556u sock 0,4 1555182 can't identify protocol
        java 18510 root 8557u sock 0,4 1555320 can't identify protocol
        java 18510 root 8558u sock 0,4 1555736 can't identify protocol
        java 18510 root 8559u sock 0,4 1555883 can't identify protocol

    If I do a count of open file descriptors every minute, I see it growing by 12 every minute. I have no idea what these sockets are. I've undeployed my application so there is only a plain Glassfish instance running, and I still see it leaking 12 file descriptors a minute, so I think this leak is in Glassfish or potentially IMQ. What approach should I take to tracking down these sockets of unknown protocol? What tools can I use (or flags can I pass to lsof) to get more information about where to look? thanks, chuck
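    One way to narrow this kind of leak down, sketched below under the assumption that the box exposes a Linux /proc filesystem (as RHEL4 does): have the JVM itself log its open-descriptor count once a minute by listing /proc/self/fd, so that growth can be correlated with specific activity (for example the IMQ broker's directory polling visible in the NullPointerException above). The class name and interval are illustrative, not from the question; lsof -p <pid> remains the tool for inspecting what the descriptors actually are.

        import java.io.File;

        // Illustrative sketch only: periodically log how many file descriptors the
        // current JVM has open, by counting entries in /proc/self/fd (Linux/RHEL).
        // Run something like this inside the instance, or as a small side thread,
        // and compare the timestamps of jumps against server activity.
        public class FdCountLogger {
            public static void main(String[] args) throws InterruptedException {
                File fdDir = new File("/proc/self/fd");
                while (true) {
                    String[] fds = fdDir.list();          // one entry per open descriptor
                    int count = (fds == null) ? -1 : fds.length;
                    System.out.println(System.currentTimeMillis() + " open fds: " + count);
                    Thread.sleep(60 * 1000);              // sample once a minute
                }
            }
        }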

    Read the article

  • image appears larger than bounding rectangle?

    - by Kildareflare
    Hello. I'm implementing a custom print preview control. One of the objects it needs to display is an image which can be several pages high and wide. I've successfully divided the image into pages and displayed them in the correct order. Each "page" is drawn within the MarginBounds (that is, each page is sized to be the same as the MarginBounds). The problem I have is that the image the page represents exceeds the bottom margin. However, if I draw a rectangle with the same dimensions as the image, at the same position as the image, the rectangle matches the MarginBounds. Thus, when drawn to the page (in print preview and on the printed page), the image is larger than a rectangle drawn from the image's dimensions. I.e. the image is drawn at the same size as e.MarginBounds and is drawn at e.MarginBounds.Location, yet exceeds the bottom margin. How is this? I've checked resolutions at each step of the process and they are all 96 DPI.

        //...
        private List<Image> m_printImages = new List<Image>();
        //...
        private void ImagePrintDocument_PrintPage(object sender, PrintPageEventArgs e)
        {
            PrepareImageForPrinting(e);
            e.Graphics.DrawImage(m_printImages[m_currentPage],
                new Point(e.MarginBounds.X, e.MarginBounds.Y));
            e.Graphics.DrawRectangle(new Pen(Color.Blue, 1.0F),
                new Rectangle(
                    new Point(e.MarginBounds.X, e.MarginBounds.Y),
                    new Size(m_printImages[m_currentPage].Width, m_printImages[m_currentPage].Height)));
            // rest of handler
        }

    PS: Not able to show an image, as free image hosting is blocked here.

    Read the article

  • How to rdc to a particular machine that is member of a TS Farm?

    - by Amit Arora
    I created a Terminal Services farm comprising 3 TS hosts (say, TS1, TS2 and TS3) running Windows 2008 R2 Enterprise, a TS Connection Broker and a TS Gateway, for the purpose of hosting a Windows application as a TS RemoteApp. The setup works just fine. Now, I want to make some further configuration changes on one particular TS host, say TS2, and not on any other TS host. I try to rdc to TS2 but find myself getting connected to a randomly chosen TS host (sometimes TS1, sometimes TS2, and at other times TS3). I think the rdc connection is also going via the Connection Broker, which forwards me to whichever TS host it decides is best. Is there a way I can deterministically connect to a particular TS host using rdc? I don't have the option to log in locally on a TS host, as the entire setup is hosted in a remote data center. I think this is a very common scenario and must have a straightforward solution. It could be as easy as doing rdc to the Connection Broker server and disabling it for a while, but I don't know how to do that either. Any help will be highly appreciated.

    Read the article

  • php fopen function dies, though I have file permissions set to read and write

    - by Matthew Robert Keable
    I'm following a tutorial on PHP, and am having difficulty getting this to work. I set the appropriate directory permissions to read and write, but every time I run this, I get the die string. The code is:

        $ourFileName = "testFile.txt";
        $ourFileHandle = fopen($ourFileName, 'w') or die("can't open file");
        fclose($ourFileHandle);

    As far as my basic understanding goes, if "testFile.txt" does not exist, fopen should create that file (I have basic knowledge of Python, and remember this same principle in that language). But it doesn't. Even if I create the aforementioned file and put it up, that line of code still returns the die string. My hosting account does not give me permission to execute. Is this a problem? My server runs on Windows. I am using Dreamweaver CS5, on OS X 10.5.8. I've done some searching on this, and see other people having similar issues, but none of them keyed to exactly my range of problems. Being that I'm a beginner, I feel that it might be something I'm overlooking. Thanks!!

    Read the article

  • Display pdf file inline in Rails app

    - by Martas
    Hi, I have a pdf file attachment saved in the cloud. The file is attached using attachment_fu. All I do to display it in the view is:

        <%= image_tag @model.pdf_attachment.public_filename %>

    When I load the page with this code in the browser, it does what I want: it displays the attached pdf file. But only on Mac. On Windows, browsers will display a broken image placeholder. Chrome's Developer Tools report: "Resource interpreted as image but transferred with MIME type application/pdf." I also tried sending the file from the controller. In PdfAttachmentController:

        def send_pdf_attachment
          pdf_attachment = PdfAttachment.find params[:id]
          send_file pdf_attachment.public_filename,
                    :type => pdf_attachment.content_type,
                    :file_name => pdf_attachment.filename,
                    :disposition => 'inline'
        end

    In routes.rb:

        map.send_pdf_attachment '/pdf_attachments/send_pdf_attachment/:id',
                                :controller => 'pdf_attachments',
                                :action => 'send_pdf_attachment'

    And in the view:

        <%= send_pdf_attachment_path @model.pdf_attachment %>
        or
        <%= image_tag( send_pdf_attachment_path @model.pdf_attachment ) %>

    And that doesn't display the file on Mac (I didn't try on Windows); it displays the path: pdf_attachments/send_pdf_attachment/35. So, my question is: what do I do to properly display a pdf file inline? Thanks, martin

    Read the article

  • How to export Oracle statistics

    - by A_M
    Hi, I am writing some new SQL queries and want to check the query plans that the Oracle query optimiser would come up with in production. My development database doesn't have anything like the data volumes of the production database. How can I export database statistics from a production database and re-import them into a development database? I don't have access to the production database, so I can't simply generate explain plans on production without going through a third-party hosting organisation. This is painful. So I want a local database which is in some way representative of production, on which I can try out different things. Also, this is for a legacy application. I'd like to "improve" the schema by adding appropriate indexes, constraints, etc. I need to do this in my development database first, before rolling out to test and production. If I add an index and re-generate statistics in development, then the statistics will be generated against the development data volumes, which makes it difficult to assess the impact of my changes on production. Does anyone have any tips on how to deal with this? Or is it just a case of fixing unexpected behaviour once we've discovered it on production? I do have a staging database with production volumes, but again I have to go through a third party to run queries against this, which is painful. So I'm looking for ways to cut out the middle man as much as possible. All this is using Oracle 9i. Thanks.
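    Oracle's DBMS_STATS package can copy a schema's optimiser statistics into an ordinary holding table, which can then be moved to another database and imported there. A minimal sketch of the export side, assuming a plain JDBC connection; the schema name, stats-table name and connection details are placeholders, not taken from the question:

        import java.sql.CallableStatement;
        import java.sql.Connection;
        import java.sql.DriverManager;

        // Illustrative sketch: export schema statistics on production into a stats
        // table so the table can be copied to development and imported there.
        public class ExportStats {
            public static void main(String[] args) throws Exception {
                Connection conn = DriverManager.getConnection(
                        "jdbc:oracle:thin:@prodhost:1521:PROD", "APP_OWNER", "password");
                // 1. Create a holding table for the statistics.
                CallableStatement create = conn.prepareCall(
                        "{call dbms_stats.create_stat_table(?, ?)}");
                create.setString(1, "APP_OWNER");   // schema that owns the holding table
                create.setString(2, "MY_STATS");    // name of the holding table
                create.execute();
                create.close();
                // 2. Copy the schema's optimiser statistics into that table.
                CallableStatement export = conn.prepareCall(
                        "{call dbms_stats.export_schema_stats(?, ?)}");
                export.setString(1, "APP_OWNER");
                export.setString(2, "MY_STATS");
                export.execute();
                export.close();
                conn.close();
                // On development, after copying MY_STATS across, the mirror-image
                // call would be dbms_stats.import_schema_stats('APP_OWNER', 'MY_STATS').
            }
        }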

    Read the article

  • Using ASP.NET session state with Silverlight (PRISM)

    - by Jon Andersen
    Hi, the scenario: I have a PRISM application developed in Silverlight (4), and I'm using an ASP.NET server-side application to host several web services (which, in turn, access WCF services, but that's not really important here). The Silverlight application must be able to call the web services cross-domain (meaning that the web services aren't necessarily on the same server that hosts the Silverlight application). The Silverlight application consists of several modules, each accessing the ASP.NET web services. I do not have much experience with Silverlight and PRISM, but as far as I can see, this is not a very unusual scenario... The problem: my challenge is that when 2 different modules access the web services, I get 2 new sessions on the web server. I would have thought that since both modules live on the same HTML page (and therefore in the same browser session), they would get the same session on the web server...? I have tried to make the web-service proxy client globally available in the container (using Unity), by registering an instance (using Container.RegisterInstance) and then getting this instance whenever a module needs to make a web-service call (using Container.Resolve), but this doesn't seem to help. However, any calls made within the same module always get the same session on the server. Can anyone see what I'm missing here...? Thanks! Jon

    Read the article

  • Did anyone have this issue with a simple Facebook app or know how to solve it?

    - by Jian Lin
    I have a really simple few lines of a Facebook app, using the new Facebook API:

        <?php
        require 'facebook.php';
        // Create our Application instance.
        $facebook = new Facebook(array(
          'appId'  => '117676584930569',
          'secret' => '**********', // hidden here on the post...
          'cookie' => true,
        ));
        var_dump($facebook);
        ?>

    but it is giving me the following output. http://apps.facebook.com/woolaladev/i2.php gives out:

        object(Facebook)#1 (6) {
          ["appId:protected"]=> string(15) "117676584930569"
          ["apiSecret:protected"]=> string(32) "**********"   <--- just hidden on this post
          ["session:protected"]=> NULL
          ["sessionLoaded:protected"]=> bool(false)
          ["cookieSupport:protected"]=> bool(true)
          ["baseDomain:protected"]=> string(0) ""
        }

    Session is NULL for some reason, but I am logged in and can access my home and profile and run other apps on Facebook (to see that I am logged on). I am following the samples at:

        http://github.com/facebook/php-sdk/blob/master/examples/example.php
        http://github.com/facebook/php-sdk/blob/master/src/facebook.php
        (download using the raw URL: wget http://github.com/facebook/php-sdk/raw/master/src/facebook.php)

    Trying on both hosting companies, dreamhost.com and netfirms.com, and the results are the same.

    Read the article

  • How to send parameters with same encoding from javascript?

    - by nimcap
    I have a javascript file that lots of people have embedded in their pages. Since I am hosting the file, I have control over that javascript file; I cannot control the way it is embedded, because lots of people are using it already. This javascript file sends GET requests to my servlets, and the parameters passed with the request are recorded to a DB. For example, javascript sends a request to http://myserver.com/servlet?p1=123&p2=aString and then the servlet records 123 and aString to the DB somehow. Before sending strings I use encodeURIComponent() to encode them. But what I figured out is that every client sends the same string with a different encoding, depending on either their browser or the site they are visiting. As a result, the same strings are represented with different characters by the time they reach the servlet (so they become different strings). What I am trying to do is convert the strings to one single encoding in javascript, so that when they reach the servlet the same words are represented by the same characters. How is this possible? PS. If there is a way to convert the encoding from Java, that is also applicable.
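    For the Java side of the question: encodeURIComponent() always percent-encodes in UTF-8, so one common source of the mismatch is the servlet container decoding the query string with its own default charset. A minimal sketch, assuming you are willing to parse the raw query string yourself and decode every value explicitly as UTF-8 (class and method names are made up for illustration):

        import java.io.UnsupportedEncodingException;
        import java.net.URLDecoder;
        import java.util.HashMap;
        import java.util.Map;

        // Illustrative sketch: decode GET parameters as UTF-8 regardless of the
        // container's configured URI encoding, so every client is treated the same.
        public final class Utf8QueryParser {

            public static Map<String, String> parse(String rawQueryString)
                    throws UnsupportedEncodingException {
                Map<String, String> params = new HashMap<String, String>();
                if (rawQueryString == null || rawQueryString.length() == 0) {
                    return params;
                }
                for (String pair : rawQueryString.split("&")) {
                    int eq = pair.indexOf('=');
                    String name  = eq >= 0 ? pair.substring(0, eq) : pair;
                    String value = eq >= 0 ? pair.substring(eq + 1) : "";
                    // Decode explicitly as UTF-8, matching encodeURIComponent().
                    params.put(URLDecoder.decode(name, "UTF-8"),
                               URLDecoder.decode(value, "UTF-8"));
                }
                return params;
            }
        }

    In the servlet, something like Utf8QueryParser.parse(request.getQueryString()) would then bypass whatever charset getParameter() happens to apply.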

    Read the article

  • How to achieve high availability?

    - by tanyehzheng
    My boss wants to have a system that takes into account a continent-wide catastrophic event. He wants to have two servers in the US and two servers in Asia (1 login server and 1 worker server on each continent). In the event that an earthquake breaks the connection between the two continents, both sides should keep working alone. When the connection is revived, they should sync with each other back to normal. An external cloud system is not allowed, as he has no confidence in one. The system should take scalability into account, which means adding new servers should be easy to configure. The servers should be load balanced. The connection between the servers should be very secure (encrypted and sent through SSL, although SSL takes care of encryption). The system should let one and only one user log in with one account (beware of latency between continents; two users sharing an account may reach both login servers at the same time). Please help. I'm already at my wits' end. Thank you in advance.

    Read the article

  • Ubuntu server or Debian server (to run C++ apps developed on Ubuntu)

    - by skyeagle
    I have written a number of C++ server-side daemons for my website, using my Ubuntu 9.10 dev machine. I am now about to venture out to look for a hosting provider etc. This is my problem: I have read in many posts (admittedly old posts) that Debian server is much more robust than Ubuntu server - is this still the case? In particular, I am constantly "raising elephants" with my Ubuntu 9.10 - this is "ok" for home use, but on a website server I would not be so forgiving. Also, there seems to be a new "patch" every few weeks, which I would not like on a server (I want to leave the server well alone and let it get on with its business of serving pages). So in this instance Debian looks a more attractive proposition. I am worried that the C++ apps I have developed on Ubuntu may not be binary compatible with Debian (or I may need to install additional libraries/packages etc. to get things to work), and I have zero experience with Debian. Additionally, I don't want to be grappling with the learning curve of a new OS whilst trying to launch a new web site (I am assuming the Debian UI is quite different from Ubuntu's). In this case, the maxim "the devil you know is better than the one you don't" seems appropriate, and I find Ubuntu a more attractive proposition (at least I know my apps will run without any problems etc.). Can anyone provide some rational advice (based on actual experience) to help me decide which route to take, given the two (conflicting) trains of thought outlined above?

    Read the article

  • Go for Zend framework or Django for a modular web application?

    - by dr. squid
    I am using both the Zend Framework and Django, and they both have their strengths and weaknesses, but they are both good frameworks in their own way. I do want to create a highly modular web application, with modules like: Admin, cms, articles, sections, and so on. I also want all modules to be self-contained, with all config and template files. I have been looking into a way to solve this in Zend over the last few days, but adding another level to the module setup doesn't feel right. I am sure this could be done, but should I? I have also added Doctrine to my Zend application, which could give me even more problems in my module setup! When we are talking about Django, this is easy to implement (easy as in concept, not in implementation time or whatever) and a great way to create web apps. But one of the downsides of Django is the web hosting part. There are some web hosts offering Django support, but not that many. So then I guess the question is what has the most value: rapid modular development versus hosting options! Well, comments are welcome! Thanks

    Read the article

  • Identify server that made call to web service

    - by sleepybobos
    I am working within an intranet environment. We have both a production and a development SharePoint server (WSS 3). We have a 3rd-party workflow product which runs on top of SharePoint and is installed on both the production and development SharePoint servers. The workflow product can call web services I have written, which are hosted on our web server. How would I have the web services determine which SharePoint server made the call to the web service, be it the production or the development server? I would then use this information to serve server-specific information from web.config or a database, etc. Currently the site hosting the web services is set up to allow anonymous access, so code such as System.Web.HttpContext.Current.User.Identity.Name returns an empty string. If Windows authentication is used, it returns the identity of the currently logged-in user, which is no use in identifying the server the call was made from. I need a push in the right direction to address what I believe is probably a common scenario, please.

    Read the article

  • Why is PHP5 SQLite PDO silently failing on DB connection?

    - by danieltalsky
    I have no idea why my code is failing silently. PDO and PDO SQLite are confirmed loaded. Errors are turned on and OTHER errors display. The SQLite file exists. Perms are set correctly. If I change the filename, PHP actually DOES create the file but still fails silently. No output or commands are executed after the "$dbh = new PDO($db_conn);" line. I'm not sure what else I can possibly do to troubleshoot. What else... this is on Modwest shared hosting. The page is:

        ABOUT TO RUN
        <?php
        // Destination
        $db_name = '/confirmed/valid/path/DBName.db3';
        $db_conn = 'sqlite:' . $db_name;
        try {
            var_dump($db_conn);
            $dbh = new PDO($db_conn);
            $dbh->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
        } catch (Exception $e) {
            exit("Failed to open database: {$e->getMessage()} \n" );
        }
        ?>
        THIS NEVER OUTPUTS!

    Read the article

  • Loading a gallery of external images with JQuery

    - by pingu
    Hi guys, I'm creating a gallery of images pulled in via a twitter image hosting service, and I want to show a loading graphic before they are displayed, but the logic seems to be failing me. I'm trying to do it the following way:

        <ul>
          <li><div class="holder loading"> </div></li>
          <li><div class="holder loading"> </div></li>
          <li>...</li>
          <li>...</li>
        </ul>

    where the "loading" class has an animated spinner set as its background image. The jQuery:

        $('.holder').each(function(){
            var imgsrc = 'http://example.com/imgsrc';
            var img = new Image();
            $(img).load(function () {
                $(this).hide();
                $(this).parent('.holder').removeClass('loading');
                $(this).parent().append(this);
                $(this).fadeIn();
            })
            .error(function () { })
            .attr('src', imgsrc);
        });

    This isn't working - I'm not sure that the logic is correct, or how to access the "holder" from inside the img.load nested function. What would be the best way to build this gallery so that the images display only after they've been loaded?

    Read the article

  • Excessive httpd processes stacking up on my Rails + Apache2 + Passenger production setup?

    - by LeoAlmighty
    I have a Rails + Apache2 + Postgres + Passenger application running in production mode on OS X Snow Leopard. The application serves as a data warehouse for another application in the cloud, so I'm constantly getting API calls to my OS X production build. After a recent reboot, I'm finding a ton of httpd processes stacking up and eventually requiring an apache reboot. I haven't changed any settings; everything was running fine before. Any ideas on the best way to troubleshoot this?

        $ ps -ef|grep httpd
         0 6203    1 0 0:00.20 ?? 0:00.47 /usr/sbin/httpd -D FOREGROUND
        70 6222 6203 0 0:00.05 ?? 0:00.11 /usr/sbin/httpd -D FOREGROUND
        70 6224 6203 0 0:00.31 ?? 0:00.50 /usr/sbin/httpd -D FOREGROUND
        70 6233 6203 0 0:00.05 ?? 0:00.10 /usr/sbin/httpd -D FOREGROUND
        70 6234 6203 0 0:00.43 ?? 0:00.64 /usr/sbin/httpd -D FOREGROUND
        70 6243 6203 0 0:00.02 ?? 0:00.03 /usr/sbin/httpd -D FOREGROUND
        70 6319 6203 0 0:00.08 ?? 0:00.16 /usr/sbin/httpd -D FOREGROUND
        70 6334 6203 0 0:00.02 ?? 0:00.05 /usr/sbin/httpd -D FOREGROUND
        70 6469 6203 0 0:00.04 ?? 0:00.08 /usr/sbin/httpd -D FOREGROUND
        70 6487 6203 0 0:00.36 ?? 0:00.48 /usr/sbin/httpd -D FOREGROUND
        70 6593 6203 0 0:00.36 ?? 0:00.48 /usr/sbin/httpd -D FOREGROUND
        70 6709 6203 0 0:00.04 ?? 0:00.08 /usr/sbin/httpd -D FOREGROUND
        70 6718 6203 0 0:00.04 ?? 0:00.10 /usr/sbin/httpd -D FOREGROUND
        70 6834 6203 0 0:00.01 ?? 0:00.03 /usr/sbin/httpd -D FOREGROUND
        70 6852 6203 0 0:00.00 ?? 0:00.00 /usr/sbin/httpd -D FOREGROUND
        70 6853 6203 0 0:00.01 ?? 0:00.02 /usr/sbin/httpd -D FOREGROUND

    Read the article

  • details in one Row without any redundancy in the CATID

    - by alkitbi
    $query1 = "select * from linkat_link where emailuser = '$email2' or linkname ='$domain_name2' ORDER BY date desc LIMIT $From,$PageNO"; id   catid      discription            price ------------------------------------ 1         1     domain name     100 2         1      book                  50 3        2      hosting               20 4        2      myservice           20 in this script i have one problem , if i have an ID for Each Cantegory , i have some duplicated CATID which has different content but shares the same CATID, i need to make any duplicated CATID to show in one , and all the discription will be in the same line (Cell) on the same row . So Each CatID will have all the details in one Row without any redundancy in the CATID

    Read the article

  • Best practices, PHP, tracking millions of impressions per day.

    - by John
    What do I have to do to make 20k MySQL inserts per second possible (during peak hours; around 1k/sec during slower times)? I've been doing some research and I've seen the "INSERT DELAYED" suggestion, writing to a flat file with "fopen(file,'a')" and then running a cron job to dump the "needed" data into MySQL, etc. I've also heard you need multiple servers and "load balancers", which I've never heard of, to make something like this work. I've also been looking at these "cloud server" thing-a-ma-jigs and their automatic scalability, but I'm not sure about what's actually scalable. The application is just a tracker script, so if I have 100 websites that get 3 million page loads a day, there will be around 300 million inserts a day. The data will be run through a script every 15-30 minutes which will normalize the data and insert it into another MySQL table. How do the big dogs do it? How do the little dogs do it? I can't afford a huge server anymore, so if there are multiple intuitive ways of going at it, you smart people can think of... please let me know :)
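    One idea that comes up repeatedly for write-heavy trackers like this is buffering hits and flushing them in batches, so the database sees one round trip per few hundred rows instead of one per page load. A sketch of the batching idea follows; it is written in Java/JDBC purely to keep the example compact (the question itself concerns a PHP tracker), and the table, column and connection names are invented for illustration:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.util.List;

        // Illustrative only: flush buffered tracker events in one batched round trip
        // instead of issuing a separate INSERT per page load.
        public class ImpressionWriter {

            public static void flush(List<String[]> buffered) throws Exception {
                Connection conn = DriverManager.getConnection(
                        "jdbc:mysql://localhost/tracker", "tracker", "secret");
                conn.setAutoCommit(false);                       // one commit per batch
                String sql = "INSERT INTO impressions_raw (site_id, url, hit_time) VALUES (?, ?, ?)";
                PreparedStatement ps = conn.prepareStatement(sql);
                for (String[] row : buffered) {
                    ps.setString(1, row[0]);
                    ps.setString(2, row[1]);
                    ps.setString(3, row[2]);
                    ps.addBatch();                               // queue, don't execute yet
                }
                ps.executeBatch();                               // single round of inserts
                conn.commit();
                ps.close();
                conn.close();
            }
        }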

    Read the article

  • IIS6, ASP.NET MVC 1 and random slowdowns

    - by Mr Snuffle
    I've recently deployed an MVC application to an IIS6 web server. One strange behaviour I've been seeing is that the load times will randomly blow up to 30sec+ and then return to normal. Our tests have shown this occurring on multiple connections at the same time. Once the wait has passed, the site becomes responsive again. It's completely random when this will occur, but it will probably happen about once every 15 minutes or so. My first thought was that the application was being restarted by the web server for some reason, but I determined this wasn't the case, because process recycling is set very infrequently and I placed some logging in the application startup. It's also nothing to do with the database connection. This slowdown happens simply by moving between static pages too. I've watched the database with a SQL profiler, and nothing is hitting it when these slowdowns occur. Finally, I've placed entry and exit logging on my controller actions; the slowdown always happens outside of the controller. The entry and exit time for a controller action is always appropriately fast. Does anyone have any ideas of what could be causing this? I've tried running it locally on IIS7 and I haven't had the issue. I can only think it's something to do with our hosting provider.

    Read the article

  • I would like to interactively detect when an ActiveX component has been installed, and asynchronously refresh a portion of the page

    - by Brian Stinar
    Hello, I am working on a website, and I would like to refresh a portion of the page after an ActiveX component has been installed. I have a general idea of how to do this with polling, which I am working on getting going:

        function detectComponentThenSleep() {
            try {
                // Call what I want ActiveX for, if the method is available, or
                // ActiveXComponent.object == null --- test for existence
                document.getElementById("ActiveXComponent").someMethod();
            } catch (e) {
                // Try again, if the method is not available
                setTimeout(detectComponentThenSleep, 100);
            }
        }

    However, what I would REALLY like to do is something like this:

        ActiveXObject.addListener("onInstall", myfunction);

    I don't actually have the source for the ActiveX component, but I have complete control of the page I am hosting it on. I would like to use JavaScript, if possible, to accomplish this. So, my questions are: 1.) will this actually work with the polling method? and 2.) is there an interrupt/listener-like way of doing this? I am sure I am missing something with connecting the dots here; I can already detect if the component is present, but I am having trouble doing this asynchronously. Thank you very much for your time and help, -Brian J. Stinar-

    Read the article

  • Can this code cause a "500" internal server error ?

    - by Scott B
    A few of my customers are reporting that they are getting "500" Internal Server errors lately. I believe it might be caused by various plugins they are using, but each time, the hosting company (multiple hosts) is saying that the htaccess file had to be replaced to fix the issue. I'm submitting the code below from my custom theme because it's the only place where I trigger an htaccess write, and I want to be sure that there are no problems here that could cause an issue that might contribute to the 500 errors...

        if (file_exists(ABSPATH.'/wp-admin/includes/taxonomy.php')) {
            require_once(ABSPATH.'/wp-admin/includes/taxonomy.php');
            if(get_option('permalink_structure') !== "/%postname%/" || get_option('mycustomtheme_permalinks') !== "/%postname%/") {
                $mycustomtheme_permalinks = get_option('mycustomtheme_permalinks');
                require_once(ABSPATH . '/wp-admin/includes/misc.php');
                require_once(ABSPATH . '/wp-admin/includes/file.php');
                global $wp_rewrite;
                $wp_rewrite->set_permalink_structure($mycustomtheme_permalinks);
                $wp_rewrite->flush_rules();
            }
            if(!get_cat_ID('topMenu')){wp_create_category('topMenu');}
            if(!get_cat_ID('hidden')){wp_create_category('hidden');}
            if(!get_cat_ID('noads')){wp_create_category('noads');}
        }
        if (!is_dir(ABSPATH.'wp-content/uploads')) {
            mkdir(ABSPATH.'wp-content/uploads');
        }

    Read the article

  • What to use for version control with Visual Studio 2008 for inhouse projects?

    - by Boog
    We want to put a number of our in-house projects under version control. Our projects are C# .NET applications and assemblies. We originally decided to go Microsoft all the way (as is the norm around here) and tried installing Visual Studio Team Foundation Server. To say the least, it was way more trouble getting a successful install than it's worth, and I'm afraid we don't have an entire server to dedicate to TFS itself, as the installer seems to insist. We also considered a generic solution like Subversion or CVS, or possibly some kind of free online hosting that doesn't make our source publicly available under some license, like Google Code appears to. Does anyone have any suggestions for us that would best fit our in-house MS environment? We'd also get some benefit out of some kind of project management tools if they were nicely integrated into the solution, but this would only be a perk. I should also mention that we haven't entirely ruled out TFS, but it's looking like a pain, so anything you guys have to say for or against it would be helpful.

    Read the article

  • Java EE Website Planning Questions

    - by Tom Tresansky
    I'm a .NET programmer who is soon moving to the Java EE world. I have plenty of experience with .NET web technologies: web services, WebForms and MVC. I am also very familiar with the Java language, and have written a few servlets and modified a couple of JSP pages, but I haven't touched EE yet. I'd like to set up a public website using Java EE so I can familiarize myself with what's current. I'm thinking just a technology playground at this point, with no particular purpose in mind. What Java technologies are the current hotness for this sort of thing? (For example, if someone asked me what I'd recommend learning to set up a new .NET site, I'd say use ASP MVC instead of WebForms, and recommend LINQ-to-SQL as a quick, simple and widely used ORM.) So, what I'd like to know is: Is there a recommended technology for the presentation layer? Is JSP considered a good approach, or is there anything cleaner/newer/more widespread? Is Hibernate still widely used for persistence? Is it obsolete? Is there anything better out there? (I've worked with NHibernate some, so I wouldn't be starting from scratch.) Is cheap Java EE web hosting available? What should I know as a .NET web developer moving to the Java world?
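    On the persistence part of the question: Hibernate remains in wide use, and it is commonly driven through the standard JPA annotations and API, so code written against JPA keeps its provider options open. A tiny sketch of what JPA-style persistence looks like; the entity, fields and persistence-unit name are invented for illustration and not part of the question:

        import javax.persistence.Entity;
        import javax.persistence.EntityManager;
        import javax.persistence.GeneratedValue;
        import javax.persistence.Id;
        import javax.persistence.Persistence;

        // Minimal JPA example: an annotated entity plus the standard API calls used
        // to persist it. Hibernate (or any other JPA provider) supplies the
        // implementation behind javax.persistence.
        @Entity
        public class GuestbookEntry {

            @Id
            @GeneratedValue
            private Long id;

            private String author;
            private String message;

            protected GuestbookEntry() { }   // JPA requires a no-arg constructor

            public GuestbookEntry(String author, String message) {
                this.author = author;
                this.message = message;
            }

            public static void main(String[] args) {
                // "playground" would be a persistence unit defined in persistence.xml
                EntityManager em = Persistence
                        .createEntityManagerFactory("playground")
                        .createEntityManager();
                em.getTransaction().begin();
                em.persist(new GuestbookEntry("eric", "hello Java EE"));
                em.getTransaction().commit();
                em.close();
            }
        }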

    Read the article

  • RegisterStartupScript doesn't appear to be working on page postback within update panel

    - by Jen
    OK - so I am working on a system that uses a custom datepicker control (I know there are other ones out there, but for consistency I would like to understand why my current issue is happening and fix it). It's a custom user control with a textbox, and on Page_PreRender it does this:

        protected void Page_PreRender(object sender, EventArgs e)
        {
            string clientScript = @"
                $(function(){
                    $('#" + this.Date1.ClientID + @"').datepicker({dateFormat: 'dd/mm/yy', constrainInput: true});
                });";
            Page.ClientScript.RegisterStartupScript(this.GetType(), this.ClientID, clientScript, true);

            //Type t = this.GetType();
            //if (!Page.ClientScript.IsStartupScriptRegistered(t, this.ClientID))
            //{
            //    Page.ClientScript.RegisterStartupScript(t, this.ClientID, clientScript, true);
            //}
        }

    Ignore the commented-out code - that was me trying something different; it didn't help. My issue is that this all works fine when I load the page. But if I select something from a dropdownlist, causing a page postback, the date fields stop working when I click into them. Clicking into the textbox should make a nice calendar control appear, but after the postback there is no nice calendar control appearing! It's currently all wrapped (in the hosting page) inside an update panel. If I comment out the update panel markup, the dates work after a page postback. So it appears to be something related to that update panel. Any suggestions please? Thanks!!

    Read the article

  • divert asp.net child controls

    - by Eric
    Hi Folks, I am trying to create an ASP.NET custom control that acts as a hosting container for any other controls, similar to the existing 'Panel' control. Basically, I need to build a web control that groups a bunch of other controls. It will consist of a header and a body pane, similar to a normal window in a desktop application. The header will contain some simple text and some JavaScript-driven code that shows/hides the body pane. The body pane simply hosts any number of other controls.

        +------------------------------------------------------+
        | User Details                                Show/Hide |
        +------------------------------------------------------+
        | Name:           [Eric      ]                          |
        | Address:        [Some where]                          |
        | Date of Birth:  [01/01/1980]                          |
        |                                                       |
        | (any other fields go on)                              |
        |                                                       |
        |                                                       |
        +------------------------------------------------------+

    Ideally I want to create a control that packs the whole thing together, so at design time I could use the following markup:

        <myCtl:SuperContainer runat="server" Title="User Details">
            <asp:label id="lblName" runat="server" text="Name:"/>
            <asp:textbox id="txtName" runat="server"/>
            <asp:label id="lblDOB" runat="server" text="Date of Birth:"/>
            <asp:textbox id="txtDOB" runat="server"/>
            (...other controls definition...)
        </myCtl:SuperContainer>

    I plan to include two panels in my control, one for the header and another for the body, but as you can see, the key issue is to find a way to 'divert' the child controls that are defined in the markup to the body panel, instead of the default parent container. I feel it can somehow be done by simply overriding (manipulating) the control property, but I don't know how to do so properly. Can anyone give some idea about how to implement this 'SuperContainer' control? Many thanks, Eric

    Read the article
