Search Results

Search found 3752 results on 151 pages for 'offline caching'.

  • How to disable caching in Rails by IP address?

    - by huacnlee
    I used caches_page/caches_action for some pages, with a time-based expiry (1 hour or 1 day); I don't expire the cache when the data is updated. When the editors create or update content, they can't see the new result on the page. I want to bypass the global cache when the visitor's IP address belongs to my company. How can I do this?

  • How do I stop Opera from caching a page?

    - by nishkarr
    I am trying to get Opera to re-request a page every time instead of just serving it from the cache. I'm sending the 'Cache-Control: no-cache' and 'Pragma: no-cache' response headers, but Opera seems to be ignoring them. It works fine in the other browsers - Chrome, IE, Firefox. How do I stop Opera from caching pages? What I want is for Opera to re-request the page when the user clicks the Back button.
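    One detail worth noting here: the Back button is history navigation, and browsers (Opera in particular) will often satisfy it from the history cache even when no-cache is set; 'Cache-Control: no-store' is the stronger directive that is generally honored for Back. A minimal sketch, assuming the pages are served by ASP.NET (the header values carry over to any stack):

      using System;
      using System.Web;

      public static class NoCacheHeaders
      {
          // Applies the strongest practical anti-caching headers to a response.
          public static void Apply(HttpResponse response)
          {
              response.Cache.SetCacheability(HttpCacheability.NoCache); // Cache-Control: no-cache
              response.Cache.SetNoStore();                              // adds no-store, which covers Back navigation
              response.Cache.SetExpires(DateTime.UtcNow.AddYears(-1));  // an already-expired Expires header
              response.AppendHeader("Pragma", "no-cache");              // legacy HTTP/1.0 clients and proxies
          }
      }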

  • Is there something like a "long running offline transaction" for NHibernate or any other ORM?

    - by Vilx-
    In essence this is a follow-up of this question. I'm beginning to feel that I should give up the whole idea, but I'll give it one more shot. What I want is pretty much like a DB transaction: it should track my changes to the DB and then, in the end, allow me to either commit them or roll them back. If I insert an object, I should get it back in my next (appropriate) SELECT query. If I delete it, future SELECT queries should not return it. Etc.

    But there is one catch - this transaction would be very long running. It would start when the user opened a form (I'm talking about Windows Forms here), and the commit/rollback would happen when the user closed it (with OK/Cancel). So it could take anywhere between seconds and days. This requirement rules out a standard DB transaction, because that would lock the tables/rows it touched, and other users wouldn't be able to use the system. Also, the transaction should not commit ANY changes to the DB until it is really committed: if one user makes some changes, others don't see them until the OK button is hit. This prevents errors in case the computer crashes or is disconnected from the network.

    I'm quite OK if the solution puts constraints on my model (I'm using MSSQL 2008, btw); I can design the DB/code any way I like. I'm also fine with the idea that a commit could fail because someone already modified one of the objects my transaction touched. Is there anything like this? I looked at NHibernate.Burrow, but I'm not sure that's what I want.

    Added: It's the very beginning of the project, so I'm not tied to NHibernate. I started out with it, but I can still change easily.
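    For what it's worth, what this describes is usually handled as an "optimistic offline lock": keep the edits in memory for the life of the form, stamp each row with a version, and commit everything in one short real transaction that fails if any version changed underneath you (NHibernate's <version> mapping gives you the version check for free). A hand-rolled sketch, with a hypothetical Order entity and UpdateWithVersionCheck helper:

      using System;
      using System.Collections.Generic;
      using System.Data;

      public class Order { public int Id; public int Version; /* ...other fields... */ }

      public class OfflineUnitOfWork
      {
          private readonly List<Order> _dirty = new List<Order>();

          public void RegisterDirty(Order order) { _dirty.Add(order); }

          // "Rollback" is simply discarding this object without calling Commit.
          public void Commit(IDbConnection conn)
          {
              using (var tx = conn.BeginTransaction())
              {
                  foreach (var order in _dirty)
                  {
                      // UPDATE Orders SET ..., Version = Version + 1
                      //  WHERE Id = @id AND Version = @versionReadAtLoad
                      if (UpdateWithVersionCheck(conn, tx, order) == 0)
                          throw new InvalidOperationException(
                              "Order " + order.Id + " was changed by another user.");
                  }
                  tx.Commit(); // short-lived: row locks are held for milliseconds, not days
              }
          }

          private int UpdateWithVersionCheck(IDbConnection conn, IDbTransaction tx, Order o)
          {
              /* build and execute the parameterized UPDATE; return rows affected */
              return 1;
          }
      }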

  • Windows 2003 Server Caching

    - by pablomedok
    We're experiencing almost-daily table index corruption on Windows Server 2003. We are running an old application which uses DBF/CDX tables. Everything was fine for ages, but 6 months after we installed Advantage Database Server (which gives our website access to some of the tables) we started to get index corruption problems, and we don't know whom to blame. We've tried to exclude all possible causes of this corruption. All users now work in terminal mode, so network problems can't cause it, and OpLocks can't be a reason either. We changed hardware, network cards and switches, reinstalled the server, and even moved to a new dedicated server. The only thing we can't exclude is ADS, because it has to keep working.

    Is it possible that local read/write caching causes the problem? E.g. one user or process uses cached data, later another user/process changes it, and later the first user changes it again without knowing about the intervening change. Is that possible theoretically? Could this problem be caused by improper file server or caching settings? Is it possible that normal users use non-cached data while ADS uses cached data, or vice versa? Could each terminal user have its own cache? Or maybe RAID caching somehow interferes with Windows Server caching? Or are there some special Windows Server settings for DBF tables that are written simultaneously by several terminal users? Maybe there is a way to turn off caching for certain files to check this?

    Sometimes we get an index crash twice a day; sometimes everything is fine for 5 days in a row. Today only one user was working with the database in the evening (usually 30-50 users work simultaneously during working hours), so the server was under almost zero load. Synchronization with the website is performed every 5 minutes during work hours and every 15 minutes in the evening and on weekends. We've done file access auditing, and it shows that during website synchronizations the ADS server opens the table and index files for ReadEA and WriteEA, though it performs only SELECT queries. ADS does run UPDATE/INSERT queries, but less frequently - not during regular synchronizations, only when an order is placed by a website visitor.

    Please help me. We have been struggling with this problem for almost a year and still can't find any pattern or any clue. Here is my previous question about this issue on DBA: http://dba.stackexchange.com/questions/8646/foxpro-dbf-index-corruption

  • Instant messenger capable of offline messaging & tolerant of network interruptions

    - by Terry
    I am looking for an instant messaging solution to facilitate communications between recovery vehicles in remote rural areas. All the vehicles have internet connections, but they are intermittent depending on location. Ideally we'd like something that has the following features:

    Offline messaging: messages sent to clients who are offline will be delivered when they next come online, regardless of whether the sender is still online or not.
    Lightweight: CPU cycles are limited on the machines in these vehicles. A bloated solution will be an issue.
    Client platform: primarily win32, but support for osx/linux/mobile devices would be a bonus.
    Non-chatty: bandwidth is a precious commodity for us, so services which use a minimal amount are ideal.
    Fault tolerant: we see plenty of packet loss and high latency, so whatever we use needs to be able to function in trying network conditions.

    I'm not fussed whether we use a hosted platform like gtalk/skype/msn/icq/whatever, and likewise I can run a server if need be. Suggestions would be appreciated!

  • Offline Outlook 2007 global address book slow to update

    - by munrobasher
    Outlook 2007 talking to an Exchange 2007 server; the usual set-up of personal contacts plus a site-wide global address book. Distribution lists are often created by IT in the global address book, but it sometimes takes days for them to appear in the local offline copy. Performing a manual download of the address book doesn't help. The problem doesn't occur when Outlook 2007 runs in non-cached (online) mode. Any ideas? Server-side or client-side issue? Cheers, Rob.

  • Fake domain doesn't resolve when offline

    - by Fletcher Moore
    I have a flimsy grasp of DNS. Nonetheless, in order to install a local development copy of WordPress MU, I needed to create a fake domain, which I called local.dev. It and all its subdomains simply resolve to 127.0.0.1, and Apache then maps requests to the correct folder. I installed PowerDNS and got it working properly with a MySQL backend. I didn't feel entirely comfortable with it, but since it worked, I didn't ask any more questions. The bizarre thing is that it requires an internet connection to resolve correctly, and now I need to use it offline. If I am offline, Chrome provides the error: Error 105 (net::ERR_NAME_NOT_RESOLVED): The server could not be found. If you need more information, I am happy to provide it.

  • HP Officejet 6000 E609n unexpectedly goes offline

    - by Sajee
    My local library has a number of Windows Vista SP1 PCs connected to two HP Officejet 6000 E609n wireless printers. Each PC can print to either of the two printers, and one of the two is the default on each PC. This configuration has worked well over the last year without any trouble. Recently, the library staff have been reporting that sometimes patrons can't print. Closer inspection shows that the default wireless printer is offline. To get the printer online again, it has to be restarted. In the Control Panel Printers applet, under the Printer menu, the "Use Printer Offline" option is grayed out, and there's no way to bring the printer back online without restarting it. Does anyone know what's going on here?

  • DB auto failover in C# does not work when the principal server physically goes offline

    - by user62521
    I'm setting up DB auto failover in C# with SQL Server 2008. I have a 'high safety with automatic failover' mirror using a witness, and my connection string looks like "Server=tcp:DC01; Failover Partner=tcp:DC02; database=dbname; uid=sewebsite;pwd=somerndpwd;Connect Timeout=10;Pooling=True;". During testing, when I turn off the SQL Server service on the principal server, the auto failover works like a charm. But if I take the principal server offline (by shutting down the server or killing the network card), auto failover does not work and my website just times out. I found this article where the second-to-last post suggests that it's because named pipes are being used, which don't work when the principal goes offline - but we force TCP in our connection string. What am I missing to get this DB auto failover working?
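    One detail worth checking: a stopped SQL Server service refuses the connection immediately, but a hard-offline principal forces the client to wait out a full TCP timeout on DC01 first, so a 10-second Connect Timeout can expire before DC02 is ever attempted. Raising the timeout and retrying the open once after clearing the pool is a common mitigation - a hedged sketch, not a confirmed fix for this setup:

      using System.Data.SqlClient;

      public static class FailoverAwareDb
      {
          // The first Open may burn its whole timeout on the dead principal;
          // clear the pool and try again so the partner gets a fresh attempt.
          public static SqlConnection Open(string connectionString)
          {
              var conn = new SqlConnection(connectionString);
              try
              {
                  conn.Open();
              }
              catch (SqlException)
              {
                  SqlConnection.ClearPool(conn); // drop pooled handles that still point at DC01
                  conn.Open();                   // retry; the failover partner is tried on this attempt
              }
              return conn;
          }
      }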

  • Exchange 2003 Event ID 9337 - Offline Address Book

    - by Creepycc
    I have a support issue with an Exchange 2003 SP2 server.

    Event ID: 9337
    Description: OALGen did not find any recipients in address list '\Global Address List'. This offline address list will not be generated. - Default Offline Address List

    When you preview the Global Address List within Exchange System Manager, all is fine. Turning off cached mode on Outlook clients still errors. Public folders / system folders are fine. OABINTEG detects no issues, and PFDAVAdmin has checked all DACLs. The GAL and OAB have been deleted and recreated several times (with different names). DCDIAG, NETDIAG and ExchangeBPA all run without error. I've exhausted the Google links diagnosing this issue - any suggestions?

  • Cannot install Wireless LAN Service on Windows 2012 RTM offline

    - by user1763118
    I'm having trouble installing the Wireless LAN Service offline on a freshly installed Windows Server 2012 RTM. I tried "install-windowsfeature wireless-networking" in non-GUI mode, and used Server Manager in GUI mode to enable the Wireless LAN Service, but both show a "failure configuring windows updates" message after the installation restarts the system. I checked the event log, and I think messages saying "The WLAN AutoConfig service depends on the following service: nativewifip. This service might not be installed" are the source of the issue. Google shows it is a service called "Native WiFi Filter", but I cannot find anywhere to install that service. I don't have an Ethernet adapter for that computer, so I have to install everything offline before the wifi is working.

  • Issue with Administratively Assigned Offline Files

    - by ZnewmaN
    I need to use Administratively Assigned Offline Files in conjunction with folder redirection, but user home folders live on 26 different shares. Do I just need to add 52 file paths similar to:

      \\server\shareA\%username%\Desktop
      \\server\shareA\%username%\My Documents
      \\server\shareB\%username%\Desktop
      \\server\shareB\%username%\My Documents

    ... and so on? Or do I need to create 26 GPOs, one for each share - or is there an easier way to do it?

    Edit: The solution provided by @berniewhite in the comments, using %homeshare%, has resolved the issue, and Administratively Assigned Offline Files is now working well.

  • How to use caching to increase render performance?

    - by Christian Ivicevic
    First of all I am going to cover the basic design of my 2D tile-based engine, written in C++ with SDL, then I will point out what I am up to and where I need some hints.

    Concept of my engine

    My engine uses the concept of GameScreens, which are stored on a stack in the main game class. The main methods of a screen are usually LoadContent, Render, Update and InitMultithreading. (I use the last one because I am using v8 as a JavaScript bridge to the engine.) The main game loop renders the top screen on the stack, if there is one; otherwise it exits the game. Actually, it calls the render methods, which store all items to be rendered in a list. After gathering all this information, methods like SDL_BlitSurface are called by my GameUIRenderer, which draws the enqueued content and then draws some overlay. The code looks like this:

      while(Game is running) {
          Handle input
          if(Screens on stack == 0)
              exit
          Update timer etc.
          Clear the screen
          Peek the screen on the stack and collect information on what to render
          Actually render the enqueued screen stuff and some overlay etc.
          Flip the screen
      }

    The GameUIRenderer uses, as hinted, a std::vector<std::shared_ptr<ImageToRender>> to hold all necessary information, described by this class:

      class ImageToRender {
      private:
          SDL_Surface* image;
          int x, y, w, h, xOffset, yOffset;
      };

    This bunch of attributes is usually needed when I have a texture atlas with all tiles in one SDL_Surface and the engine should crop one specific area and draw it to the screen. The GameUIRenderer::Render() method then just iterates over all elements and renders them, something like this:

      std::for_each(
          this->m_vImageVector.begin(),
          this->m_vImageVector.end(),
          [this](std::shared_ptr<ImageToRender> pCurrentImage) {
              SDL_Rect rc = { pCurrentImage->x, pCurrentImage->y, 0, 0 };
              // For the sake of simplicity ignore offsets...
              SDL_Rect srcRect = { 0, 0, pCurrentImage->w, pCurrentImage->h };
              SDL_BlitSurface(pCurrentImage->pImage, &srcRect, g_pFramework->GetScreen(), &rc);
          }
      );
      this->m_vImageVector.clear();

    Current ideas, which need to be reviewed

    The specified approach works really well and IMHO has a good structure; however, the performance could definitely be increased. I would like to know what you suggest for implementing efficient caching of surfaces etc., so that there is no need to redraw the same scene over and over again. The map itself is almost static; only when the player moves do we need to move the map. Furthermore, animated entities would require either updates of the whole map or updates of only the specific areas the entities are currently moving in. My first approach was to include a flag IsTainted, which the GameUIRenderer would use to decide whether to redraw everything or use a cached version (or to render nothing at all, so that we don't have to clear the screen and can let the last frame persist). However, this seems quite messy if I have to manually track in my screen class's Render method whether something has changed or not.
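    The usual remedy is the dirty-flag pattern combined with an off-screen cache surface: render the static map once into a buffer, blit that buffer every frame, and re-render the buffer only when something invalidates it; moving entities then only dirty the tiles they left and entered. The idea is language-agnostic - sketched here in C#, with ISurface/IRenderer as hypothetical stand-ins for SDL_Surface and SDL_BlitSurface:

      // Hedged sketch of the dirty-flag + cached background layer.
      public interface ISurface { }

      public interface IRenderer
      {
          ISurface CreateSurface(int w, int h);
          void Blit(ISurface src, int x, int y);
      }

      public class CachedMapLayer
      {
          private ISurface _cache;      // off-screen copy of the static map
          private bool _dirty = true;   // set whenever a tile changes or the camera moves

          public void Invalidate() { _dirty = true; }

          public void Render(IRenderer target, int mapW, int mapH)
          {
              if (_dirty)
              {
                  _cache = target.CreateSurface(mapW, mapH);
                  // ... draw every tile into _cache here (the expensive part) ...
                  _dirty = false;
              }
              target.Blit(_cache, 0, 0); // per-frame cost is now a single blit
          }
      }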

  • Adding Output Caching and Expire Header in IIS7 to improve performance

    - by Renso
    The problem: images and other static files will not be cached unless you tell IIS to cache them. In IIS7 it is remarkably easy to do this. Web pages are becoming increasingly complex, with more scripts, style sheets, images, and Flash on them. A first-time visit to a page may require several HTTP requests to load all the components. By using Expires headers, these components become cacheable, which avoids unnecessary HTTP requests on subsequent page views. Expires headers are most often associated with images, but they can and should be used on all page components, including scripts, style sheets, and Flash. Without them, every image and other piece of static content like JavaScript and CSS files will be reloaded on every page request. If the content does not change frequently, why not cache it and avoid the network traffic?!

    The solution: in IIS7 there are two ways to cache content - using the web.config file to set caching for all static content, and setting caching by file extension in IIS7 itself, which gives you an extra level of granularity.

    Web.config: in IIS7, Expires headers can be enabled in the system.webServer section of the web.config file:

      <staticContent>
        <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="1.00:00:00" />
      </staticContent>

    In the above example, a cache expiration of 1 day is set: a full day will pass before the content is downloaded from the web server again. To expire the content on a specific date:

      <staticContent>
        <clientCache cacheControlMode="UseExpires" httpExpires="Sun, 31 Dec 2011 23:59:59 UTC" />
      </staticContent>

    This will expire the content on December 31st 2011, one second before midnight.

    Issues/Challenges: once a file has been set to be cached, it won't be updated in the user's browser until the cache expiration passes. So be careful here with content that may change frequently, such as during development - typically in development you don't want to cache at all, for testing purposes. You can also suffix files with a timestamp or version to force a reload into the user's browser cache.

    IIS7 Expire Web Content: open up your web app in IIS and open the sub-folders until you find the folder or file you want to add an expiration date to. In IIS6 you used to right-click and select Properties; no such luck in IIS7 - double-click HTTP Response Headers instead. Once the window loads, look at the Actions navigation bar to the right and, all the way at the top, select SET COMMON HEADERS. The "Enable HTTP keep-alive" option will already be pre-selected. Go ahead and add the appropriate expiration header to the file or folder. Note that if you selected a folder, it will apply that setting to all images inside that folder and all nested content, even subfolders.

    So, two approaches, depending on what level of granularity you need.
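    On the development-time caveat above, the version-suffix trick can be automated so that a changed file busts the browser cache by itself. A small hedged sketch of such a helper for an ASP.NET site (the helper name is made up; wire it into your script/image URLs however fits your project):

      using System.IO;
      using System.Web;

      public static class StaticUrl
      {
          // Turns "/css/site.css" into "/css/site.css?v=634093455734375000":
          // the stamp changes when the file changes, forcing a fresh download
          // even with a far-future Expires header on the old URL.
          public static string Versioned(string virtualPath)
          {
              string physicalPath = HttpContext.Current.Server.MapPath(virtualPath);
              long stamp = File.GetLastWriteTimeUtc(physicalPath).Ticks;
              return virtualPath + "?v=" + stamp;
          }
      }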

  • Is my hard drive about to fail?

    - by Cody Harlow
    I hear some squeaking noises sometimes when I use my computer, so I ran smartctl. These are the results:

      === START OF READ SMART DATA SECTION ===
      SMART Self-test log structure revision number 1
      Num  Test_Description    Status                   Remaining  LifeTime(hours)  LBA_of_first_error
      # 1  Short offline       Completed: read failure        90%        5953        37922655
      # 2  Extended offline    Completed: read failure        90%        5953        37922655
      # 3  Short offline       Completed: read failure        90%        5953        37922655
      # 4  Short offline       Completed without error        00%         429        -
      # 5  Extended offline    Aborted by host                90%         429        -
      # 6  Short offline       Completed without error        00%         429        -
      # 7  Short offline       Completed without error        00%         429        -

    Is this a bad sign?

  • How important is caching for a site's speed with PHP?

    - by benhowdle89
    I've just made a user-content-oriented website, http://www.humanisms.co.uk. It's done in PHP, MySQL and jQuery's AJAX. At the moment there are only a dozen or so submissions, and already I can feel it lagging slightly when it goes to a new page (and therefore runs a new MySQL query). Is it more important for me to optimise my MySQL queries (with prepared statements), or is it worth looking at CDNs (Amazon S3) and caching, much like the WordPress plugin WP Super Cache, which works by serving static HTML files when no new content has been submitted? Which route is the most beneficial for me, as a developer, to take - i.e. where am I better off concentrating my efforts to speed up the site?

  • Reduce Windows DNS Service caching on Windows

    - by Nick G
    I'm struggling with DNS caching issues on a Windows-based LAN. I've noticed that if I change a DNS record on a domain hosted by a third-party nameserver, I always seem to be the very last person to see the change happen. I can often query the domain using a service which checks propagation around the world, like www.whatsmydns.net, but I usually find that all other DNS servers are correct and it's only my own server which has the old IP - even 8-12 hours later. This is an issue for us: we're website developers and often make changes to DNS records, so these huge delays are frustrating. It seems to be because our primary domain controller (also Active Directory and DNS) on our LAN, which is our local DNS server, caches records for ages - way beyond their published TTL. How can I stop the Windows DNS server from caching, or reduce the caching to only an hour or so?

  • A web app provider has asked for specific browser config

    - by Matthew
    They have asked us to turn off caching in our browsers. I was aghast that they would ask such a thing. I said to them:

    To avoid caching, it is best practice to use:

      <meta http-equiv="pragma" content="no-cache" />
      <meta http-equiv="cache-control" content="no-cache" />

    This should work across all browsers.

    Their reply was: "We need to refresh javascript at runtime, this will not help us - any more ideas?"

    I replied: unsure what you mean by "refresh javascript at runtime". If you are using AJAX, browser caching can affect the XMLHttpRequest open method; adding these meta tags to the source has fixed this for me in the past. Browser caching only caches resources; it should have no effect on site scripting. These meta tags will bypass browser caching.

    This is a reasonable request, isn't it?

  • How does C#'s DateTime.Now affect query plan caching in SQL Server?

    - by Bill Paetzke
    Given: let's say we have a stored procedure that reports data back to a user on a webpage. The user can set a date range. If the user sets today's date as the "end date," which includes today's data, the web app passes DateTime.Now to the SQL proc. Let's say that one user runs a report - 5/1/2010 to now - over and over, several times. On the webpage, the user sees "5/1/2010" to "5/4/2010," but the web app passes DateTime.Now to the SQL proc as the end date. So the end date in the proc will always be different, although the user is querying a similar date range. Assume the number of records in the table and the number of users are large, so any performance gains matter. Hence the importance of the question.

    Question: does passing DateTime.Now as a parameter to a proc prevent SQL Server from caching the query plan? If so, is the web app missing out on huge performance gains?

    Possible solution: I thought DateTime.Today.AddDays(1) would be a possible solution. It would allow the user to get the latest data and always pass the same end date to the SQL proc - "5/5/2010" in this case. Please speak to this as well.

    Sample proc and execution (if that helps to understand):

      CREATE PROCEDURE GetFooData
          @StartDate datetime,
          @EndDate datetime
      AS
      SELECT *
      FROM Foo
      WHERE LogDate >= @StartDate
        AND LogDate < @EndDate

    Here's a sample execution using DateTime.Now:

      EXEC GetFooData '2010-05-01', '2010-05-04 15:41:27' -- passed in DateTime.Now

    Here's a sample execution using DateTime.Today.AddDays(1):

      EXEC GetFooData '2010-05-01', '2010-05-05' -- passed in DateTime.Today.AddDays(1)

    The same data is returned for both procs, since the current time is 2010-05-04 15:41:27.
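    For context on the question itself: SQL Server keys cached plans on the statement text (for a stored procedure, on the procedure), not on the parameter values, so an always-different @EndDate reuses the same plan as long as the call stays parameterized; it's inlining the literal into ad hoc SQL that would create one plan per timestamp. A hedged ADO.NET sketch of the parameterized call (the call site and connString are hypothetical):

      using System;
      using System.Data;
      using System.Data.SqlClient;

      public static class FooReport
      {
          public static void Run(string connString, DateTime startDate)
          {
              using (var conn = new SqlConnection(connString))
              using (var cmd = new SqlCommand("GetFooData", conn))
              {
                  cmd.CommandType = CommandType.StoredProcedure;
                  cmd.Parameters.Add("@StartDate", SqlDbType.DateTime).Value = startDate;
                  cmd.Parameters.Add("@EndDate", SqlDbType.DateTime).Value = DateTime.Now;

                  // The plan for GetFooData is compiled once and reused across
                  // calls, no matter how often the @EndDate value changes.
                  conn.Open();
                  using (SqlDataReader reader = cmd.ExecuteReader())
                  {
                      while (reader.Read()) { /* consume rows */ }
                  }
              }
          }
      }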

  • How to disable server-side caching on IIS 7.5 (ASP.NET MVC3)

    - by troebr
    I'm struggling with my IIS setup regarding caching. Here's a brief description of my problem: I'm making a site for mobile and non-mobile, sharing the same controllers, i.e. mysite/page will serve either mysite/page.cshtml or mysite/M/page.cshtml, depending on the device. Here's the catch: it worked fine in my local and integration environments (Cassini and IIS 6), but on another machine (2008 R2 / IIS 7.5) there is apparently an aggressive server-side caching policy. If I access the website from a desktop machine, I get the correct pages (desktop version). If I then use my mobile phone to access the site, I still get the desktop version, which implies a server-side cache - my phone is not on the same network. Conversely, if I restart the server and access the site with my phone first, I get the mobile version on my desktop (only for the pages I already visited, of course). I have tried two solutions so far. Disabling OutputCache in my Web.config:

      <httpModules>
        [..]
        <remove name="OutputCache" />
      </httpModules>

    And unchecking "Enable output cache" under "Output Caching" for my site in IIS. What's bugging me is that I do not have this problem on my other server (IIS 6.0), although caching is enabled there, which leads me to think it is related to the caching additions in IIS 7. My question is simple: how does one disable server-side caching on IIS 7.5? Thanks in advance for your IIS lights!
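    If some output caching has to stay enabled, another angle (a hedged sketch, not a confirmed fix for this setup) is to control it per action in MVC itself - either vary the cache entry by device or opt out entirely:

      using System.Web.Mvc;

      public class PageController : Controller
      {
          // Cache desktop and mobile renderings as separate entries by
          // keying the cache on the User-Agent header.
          [OutputCache(Duration = 300, VaryByParam = "none", VaryByHeader = "User-Agent")]
          public ActionResult Page()
          {
              return View();
          }

          // Or opt an action out of output caching entirely.
          [OutputCache(NoStore = true, Duration = 0, VaryByParam = "*")]
          public ActionResult Uncached()
          {
              return View();
          }
      }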

  • Are there any concerns with using a static read-only unit of work so that it behaves like a cache?

    - by Rowan Freeman
    Related question: How do I cache data that rarely changes?

    I'm making an ASP.NET MVC4 application. On every request, the security details about the user need to be checked against the area/controller/action they are accessing, to see if they are allowed to view it. The security information is stored in the database, for example in these tables:

      User
      Permission
      UserPermission
      Action
      ActionPermission

    A "Permission" is a token that is applied to an MVC action to indicate that the token is required in order to access the action. Once a user is given the permission (via the UserPermission table), they have the token and can therefore access the action.

    I've been looking into how to cache this data (since it rarely changes) so that I'm only querying in-memory data and not hitting the database (which is a considerable performance hit at the moment). I've tried storing things in lists and using a caching provider, but I either run into problems or performance doesn't improve. One problem I constantly run into is that I'm using lazy loading and dynamic proxies with Entity Framework. This means that even if I ToList() everything and store it somewhere static, the relationships are never populated; for example, User.Permissions is an ICollection, but it's always null. I don't want to Include() everything, because I'm trying to keep things simple and generic (and easy to modify).

    One thing I know is that an Entity Framework DbContext is a unit of work that performs first-level caching: for the duration of the unit of work, everything that is accessed is cached in memory. I want to create a read-only DbContext that will exist indefinitely and will only be used to read permission data. Upon testing, this worked perfectly; my page load times went from 200ms+ to 20ms. I can easily force the data to refresh at certain intervals, or simply leave it to refresh when the application pool is recycled. Basically it will behave like a cache. Note that the rest of the application will interact with other contexts that exist per request, as normal.

    Is there any disadvantage to this approach? Could I be doing something different?
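    One concern worth sketching around: DbContext is not thread-safe, and a single long-lived instance will be hit by concurrent requests. An alternative that keeps the speed without sharing a context is to snapshot the permission data into plain structures and cache those - a hedged sketch (SecurityContext, UserPermissions and the property names are hypothetical stand-ins for the real model):

      using System;
      using System.Collections.Generic;
      using System.Linq;
      using System.Runtime.Caching;

      public static class PermissionCache
      {
          private const string Key = "permission-snapshot";

          // Maps a user name to the set of permission tokens they hold.
          public static Dictionary<string, HashSet<string>> Get()
          {
              var cached = MemoryCache.Default.Get(Key) as Dictionary<string, HashSet<string>>;
              if (cached != null) return cached;

              using (var db = new SecurityContext()) // hypothetical EF context
              {
                  var snapshot = db.UserPermissions
                      .Select(up => new { up.User.UserName, up.Permission.Token })
                      .AsEnumerable()
                      .GroupBy(x => x.UserName)
                      .ToDictionary(g => g.Key,
                                    g => new HashSet<string>(g.Select(x => x.Token)));

                  // Plain dictionaries are safe for concurrent reads; the expiry
                  // picks up permission changes without an app-pool recycle.
                  MemoryCache.Default.Set(Key, snapshot, DateTimeOffset.UtcNow.AddMinutes(10));
                  return snapshot;
              }
          }
      }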

  • How to prevent Google Chrome from caching my inputs, especially hidden ones, when the user clicks back?

    - by melaos
    Hi there. I have an ASP.NET MVC app which has quite a few hidden inputs to keep values around, with their names formatted so that I can use model binding later when I submit the form. I've stumbled into a weird bug with Chrome which I don't get with IE or Firefox: when the user submits the form and clicks the Back button, I find that Chrome keeps my hidden input values as well. This whole chunk is generated via JavaScript, hence I believe Chrome is caching it.

      function addProductRow(productId, productName) {
          if (productName != "") {
              //use guid to ensure that the row never repeats
              var guid = $.Guid.New();
              var temp = parseFloat($(".tboProductCount").val());
              //need the span to workaround for chrome
              var szHTML = "<tr valign=\"top\" id=\"productRow\"><td class=\"productIdCol\"><input type=\"hidden\" id=productRegsID" + temp + "\" name=\"productRegs[" + temp + "].productId\" value=\"" + productId + "\"/>"
                  + "<span id=\"spanProdID" + temp + "\" name=\"spanProdID" + temp + "\" >" + productId + "</span>"
                  + "</td>"
                  //+ "<td><input type=\"text\" id=\"productRegName\" name=\"productRegs[" + temp + "].productName\" value=\"" + productName + "\" class=\"productRegName\" size=\"50\" readonly=\"readonly\"/></td>"
                  + "<td><span id=\"productRegName\" name=\"productRegs[" + temp + "].productName\" class=\"productRegName\">" + productName + "<\span></td>"
                  + "<td id=\"" + guid + "\" class=\"productrowguid\" \>"
                  + "<input type=\"text\" size=\"20\" id=\"productSerialNo" + temp + "\" name=\"productRegs[" + temp + "].serialNo\" value=\"" + "\" class=\"productSerialNo\" maxlength=\"18\" />"
                  + "<a class=\"fancybox\" id=\"btnImgSerialNo" + temp + "\" href=\"#divSerialNo" + temp + "\"><img class=\"btnImgSerialNo\" src=\"Images/landing_14.gif\" /></a>"
                  + "<span id=\"snFlag" + temp + "\" class=\"redWarning\"></span></td>"
                  + "<td><input type=\"text\" id=\"productRegDate" + temp + "\" name=\"productRegs[" + temp + "].PurchaseDate\" readonly=\"readonly\" />"
                  + "<span id=\"snRegDate" + temp + "\" class=\"redWarning\"></span></td>"
                  + "<td align=\"center\"><img style=\"cursor:pointer\" id=\"btnImgDelete\" src=\"Images/btn_remove.gif\" onclick=\"javascript:removeProductRow('" + guid + "')\" /><div style=\"display:none;\"><div id=\"divSerialNo" + temp + "\" style=\"font-family:verdana;font-size:11px;width:600px\">" + serialnumbergeneral + "<br /><br />" + getSNImageByCategory(productId) + "</div></div></td>"
                  + "</tr>";
              $(".ProductRegistrationTable").append(szHTML);
              $("a.fancybox").fancybox(); //initialization
              $("#productRegDate" + temp).datepicker({ minDate: new Date(1996, 1 - 1, 1), maxDate: 0 });
              //sanity check
              //s7test
              alert('1 ' + $("#spanProdID" + temp));
              alert('2 ' + $("#productRegsID" + temp));
          }
      } //end function addNewProductRow

    I need the id to be refreshed when the user selects a new product, but putting another span tag beside it shows that the span gets the new id while the hidden input still has the previous one. Is there an elegant way to work around this issue? Thanks.
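    For what it's worth, this is usually not the HTTP cache at all: on Back navigation, WebKit-based browsers restore the last state of form fields, hidden inputs included, and the widely used workaround is autocomplete="off" on the form or the individual inputs. A hedged Razor/C# sketch (the controller, action and model property names are made up):

      @* Sketch: autocomplete="off" asks Chrome not to restore stale field
         values when the user navigates back to the form. *@
      @using (Html.BeginForm("Register", "Products", FormMethod.Post,
                             new { autocomplete = "off" }))
      {
          @Html.Hidden("productRegs[0].productId", Model.ProductId,
                       new { autocomplete = "off" })
      }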

  • Solution for an offline server

    - by dashmug
    I'm trying to set up a development server at work that will ideally be able to test-drive a couple of projects in PHP, Rails, or Django (not always running at the same time). I develop the apps locally on a Mac and then put the projects up on this server for testing with my actual users (non-techies) before deploying to a production server. My problem is that we have a very poor internet connection (almost negligible) at work, and the usual apt-get/yum/ports (make, clean, install) processes for setting up servers always fetch their packages from online repositories somewhere. I know I could probably download the source and compile everything myself, but that's going to be too much of a hassle.

    I'm thinking about two solutions:

    Plan A: run a server VM on my Mac and then use this VM as the package source for the offline server. I've read about Ubuntu's apt-proxy and it seems good enough, though I haven't tried it yet. I'm not sure if this is possible, but could I simply do apt-get install nginx --download-only so that the package and its dependencies are downloaded into my VM, and my server can then use the VM as the source repo for apt-get?

    Plan B: run a server VM on my Mac (which I can set up and update easily when I'm home) and then clone the VM to the offline development server. Maybe I should simply make the server a VM host so I can just copy the VM over. I think this is okay for the first-time setup, but subsequent updates will take too long (cloning the VM image).

    If I were working on Windows, I imagine it would be easier, because most services have an installer file that I can download and then run on the server. If you could suggest another way, it would be much appreciated.

    Update: from Michael Hampton's answer, I found a possible solution: apt-cacher. I also found this page on Ubuntu's website. I wonder if there is a better tool than this one.
