Search Results

Search found 3760 results on 151 pages for 'multiple entries'.

Page 27 of 151

  • Multi-Document TOC showing in wrong order

    - by Jeremy DeStefano
    I had a large document that was having formatting issues, so I split it into 2 files. Chapters 1-7 are in the main doc with the TOC and a second doc has chapters 8-12. I have the following: {TOC \O "1-3" \H \Z \U} {RD \f "MCDPS Training Manual Part2.docx"} The TOC is created and has entries from both documents; however, it's showing the entries from Chapters 8-11 first and then Chapters 1-7. I've read that it should list them based on page numbers, but it's not. Chapter 8 starts at page 121, yet it's listed first. How can I get it to show the TOC from the main doc first and then the RD?

    Read the article

  • Cross-match a number of worksheets to one master worksheet

    - by Carter
    Hopefully the title is not too confusing. Basically, I have a master list of addresses, and those addresses are listed in multiple columns (Column A - street number, Column B - street name, Column C - street type, etc.). I get another set of addresses on a daily basis with the same address formatting. What I need to do is cross-match the daily changing list of addresses to the first list to remove any matching entries. So, for example, if the first list has 123 Main St on it, I have to ensure that there are no entries of 123 Main St on any subsequent daily lists. I'm using one address as an example, but the lists contain upwards of 10000 addresses that have to be cross-matched. I don't need them flagged or highlighted, just deleted from the daily lists (though if they have to be flagged or highlighted, I could work with that). Any help here would be much appreciated.
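
    A minimal Python sketch of one way to automate the cross-match, assuming both worksheets are exported to CSV (master.csv and daily.csv are hypothetical file names) with the same address columns:

        import pandas as pd

        # Column names are assumptions; adjust them to match the real worksheets.
        cols = ["street_number", "street_name", "street_type"]

        master = pd.read_csv("master.csv", usecols=cols, dtype=str)
        daily = pd.read_csv("daily.csv", dtype=str)

        # Keep only the daily rows whose address does not appear in the master list.
        merged = daily.merge(master.drop_duplicates(), on=cols,
                             how="left", indicator=True)
        filtered = merged[merged["_merge"] == "left_only"].drop(columns="_merge")

        filtered.to_csv("daily_filtered.csv", index=False)

    The same anti-join can also be done inside Excel with a COUNTIFS helper column that counts each daily address against the master columns, filtering out rows where the count is greater than zero.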

    Read the article

  • Eliminating Windows 7 user tracking registry writes

    - by caffiend
    Windows 7 continues the practice of saving user actions in the registry. I'd like to disable this practice both to avoid reg-file fragmentation and SSD wear, and because I'm uncomfortable with programs being able to quickly analyze my usage habits. Even with the "Turn off user tracking" policy enabled, there are at least two areas that still contain user tracking: HKCU\Software\Classes\Local Settings\MuiCache This key stores a cache of most-recently accessed strings, including most-recently run EXE descriptions. HKCU\Software\Classes\Local Settings\Software\Microsoft\Windows\Shell\BagMRU This key stores the most recently viewed folders along with timestamps. Are there additional policy settings/registry entries to disable these writes? If not, is it possible to make these entries volatile? Would it be practical to create a temporary hive (e.g., on a RAM disk) and map it over this location?
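
    A read-only Python sketch (standard-library winreg, Windows only) for inspecting what is accumulating under the MuiCache key mentioned above before deciding how to handle it; the key path is the one given in the question:

        import winreg

        # Path as given above; on some systems the cache sits deeper under
        # Software\Classes\Local Settings\Software\Microsoft\Windows\Shell\MuiCache.
        path = r"Software\Classes\Local Settings\MuiCache"

        with winreg.OpenKey(winreg.HKEY_CURRENT_USER, path, 0, winreg.KEY_READ) as key:
            value_count = winreg.QueryInfoKey(key)[1]   # number of values under the key
            for i in range(value_count):
                name, value, _type = winreg.EnumValue(key, i)
                print(name, "->", value)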

    Read the article

  • IIS6 Log time recording problems

    - by Hafthor
    On three separate occasions on two separate servers at nearly the same times, 6.9 hours seemingly went by without any data being written to the IIS logs, but, on closer inspection, it appears that it was all recorded all at once. Here's the facts as I know them: Windows Server 2003 R2 w/ IIS6 Logging using GMT, server local time GMT-7. Application was still operating and I have SQL data to prove that Time gaps appear in log file, not across two # headers appear at gap Load balancer pings every 30 seconds No caching Here's info on a particular case: an entry appears for 2009-09-21 18:09:27 then #headers the next entry is for 2009-09-22 01:21:54, and so are the next 1600 entries in this log file and 370 in the next log file. about half of the ~2000 entries on 2009-09-22 01:21:54 are load balancer pings (est. at 2/min for 6.9hrs = 828 pings) then entries are recorded as normal. I believe that these events may coincide with me deploying an ASP.NET application update into those machines. Here's some relevant content from the logs in question: ex090921.log line 3684 2009-09-21 17:54:40 GET /ping.aspx - 80 404 0 0 3733 122 0 2009-09-21 17:55:11 GET /ping.aspx - 80 404 0 0 3733 122 0 2009-09-21 17:55:42 GET /ping.aspx - 80 404 0 0 3733 122 0 2009-09-21 17:56:13 GET /ping.aspx - 80 404 0 0 3733 122 0 2009-09-21 17:56:45 GET /ping.aspx - 80 404 0 0 3733 122 0 #Software: Microsoft Internet Information Services 6.0 #Version: 1.0 #Date: 2009-09-21 18:04:37 #Fields: date time cs-method cs-uri-stem cs-uri-query s-port sc-status sc-substatus sc-win32-status sc-bytes cs-bytes time-taken 2009-09-22 01:04:06 GET /ping.aspx - 80 404 0 0 3733 122 3078 2009-09-22 01:04:06 GET /ping.aspx - 80 404 0 0 3733 122 109 2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 278 122 3828 2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 278 122 0 2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 278 122 0 ... continues until line 5449 2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 277 122 0 <eof> ex090922.log #Software: Microsoft Internet Information Services 6.0 #Version: 1.0 #Date: 2009-09-22 00:00:16 #Fields: date time cs-method cs-uri-stem cs-uri-query s-port sc-status sc-substatus sc-win32-status sc-bytes cs-bytes time-taken 2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 277 122 0 2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 277 122 0 ... continues until line 367 2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 277 122 0 2009-09-22 01:04:30 GET /ping.aspx - 80 200 0 0 277 122 0 ... back to normal behavior Note the seemingly correct date/time written to the #header of the new log file. Also note that /ping.aspx returned 404 then switched to 200 just as the problem started. I rename the "I'm alive page" so the load balancer stops sending requests to the server while I'm working on it. What you see here is me renaming it back so the load balancer will use the server. So, this problem definitely coincides with me re-enabling the server. Any ideas?
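
    A small Python sketch for spotting gaps like this across a set of W3C log files; it assumes the default field layout shown above, with date and time as the first two columns of each entry:

        import sys
        from datetime import datetime, timedelta

        GAP = timedelta(minutes=5)   # well above the 30-second load-balancer ping interval
        prev = None

        for path in sys.argv[1:]:
            with open(path) as log:
                for line in log:
                    if not line.strip() or line.startswith("#"):
                        continue                     # skip blanks and #Software/#Fields directives
                    date_s, time_s = line.split(" ", 2)[:2]
                    ts = datetime.strptime(date_s + " " + time_s, "%Y-%m-%d %H:%M:%S")
                    if prev is not None and ts - prev > GAP:
                        print("gap of", ts - prev, "ending at", ts, "in", path)
                    prev = ts

    Run as, for example, python find_gaps.py ex090921.log ex090922.log; a genuine traffic gap and a burst of entries all stamped with the same late time look very different in the output.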

    Read the article

  • Windows 7 automatically installs USB driver

    - by Dan
    I have tried to install a USB driver for the Samsung Arndale board on Windows 7. Now it has two entries in Device Manager, and I cannot uninstall the one with the question mark. If I uninstall both of them, Windows 7 automatically reinstalls both entries as soon as the board is plugged in, so I never get the chance to choose the right driver. When I try to update the wrong driver, it says it found the driver but cannot install it.

    Read the article

  • Some strange things in the db table of the mysql database

    - by 0al0
    I noticed some weird things in the db table of the mysql database on a client's server after the MySQL service stopped for no reason. What are the test and test_% entries? Why are there two entries for the database AQUA? Why is there an entry with a blank name? Should I worry about any of those? What should I do in each specific case? Is it safe to just delete the ones that should not be there, after backing up?
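
    A short Python sketch (assuming the mysql-connector-python package and placeholder credentials) that lists exactly what is sitting in that grant table, so each row can be reviewed before deciding what to delete:

        import mysql.connector

        # Placeholder connection details.
        conn = mysql.connector.connect(host="localhost", user="root", password="secret")
        cur = conn.cursor()

        # mysql.db holds per-database privileges; Host, Db and User identify each row.
        cur.execute("SELECT Host, Db, User FROM mysql.db ORDER BY Db, User")
        for host, db, user in cur.fetchall():
            print(repr(host), repr(db), repr(user))

        cur.close()
        conn.close()

    On stock installations the test and test_% rows typically come from the default grants that give every account access to test databases, and mysql_secure_installation is the usual way to remove them; anything else unexpected is worth comparing against a backup before deleting.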

    Read the article

  • How does Windows 7 determine that a system was not shut down correctly (Kernel-Power Event ID 41)?

    - by Erik
    I have a strange situation where Windows 7 thinks it was not shut down correctly and gives me a warning in the system event log like this: http://support.microsoft.com/kb/2028504/de What the KB article does not tell is how Windows actually determines that situation. Does it parse its own system event log after reboot? Or where does it get that information from? I am currently investigating an issue where I believe the system fails to write the system event log correctly (it stops having entries, although other logs like the application event log still have entries), and after a reboot, Windows thinks it has not been shut down correctly. Does anybody have any experience with this? And can you confirm that Windows determines whether the previous shutdown was clean by parsing its own system event log on startup?
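
    One way to dig into this is to pull the relevant records straight from the System log with the built-in wevtutil tool; a Python sketch wrapping it (event 41 is Kernel-Power, and 6008 is the "previous shutdown was unexpected" entry):

        import subprocess

        # XPath filter for Kernel-Power 41 and unexpected-shutdown 6008 records,
        # newest first (/rd:true), limited to the last 10 matches (/c:10).
        query = "*[System[(EventID=41 or EventID=6008)]]"
        cmd = ["wevtutil", "qe", "System", "/q:" + query, "/f:text", "/c:10", "/rd:true"]

        result = subprocess.run(cmd, capture_output=True, text=True)
        print(result.stdout)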

    Read the article

  • When does a cached website refresh?

    - by user142485
    If I go to example.com [placeholder for a different website], the browser creates numerous cache entries for different items on the page and two entries for example.com itself. One expires in 1.5 hours and the other has 'No expiration time.' What I am wondering is: when does the browser display the cached page, and when does it get the newest version from the server (when re-visiting the site)? What do the two different expiration times for the top-level domain mean? A web page I went to was temporarily redirecting to a different page. After it was back up, I was still getting redirected to the temp page even though the correct page was up. Would this have eventually resolved itself based on expiration times, or did the cache need to be cleared?
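
    A quick Python sketch (using the requests library; the URL is a placeholder) for seeing the caching policy the server actually sends, which is usually what decides both expiration entries and how long a stale redirect keeps being replayed:

        import requests

        resp = requests.get("http://example.com/", allow_redirects=False)

        print(resp.status_code)
        for header in ("Cache-Control", "Expires", "ETag", "Last-Modified", "Location"):
            print(header + ":", resp.headers.get(header))

    A permanent (301) redirect in particular is typically cached by browsers, so a site that temporarily answered with one can keep sending you to the old target until the cache is cleared or the cached entry expires.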

    Read the article

  • Windows Vista events: Diagnostics-Performance: how to read this information?

    - by Ice
    Hi, I am wondering how long the boot process takes, and I am looking in %SystemRoot%\system32\eventvwr.msc /s. Some entries are marked as critical, like: Startup took 184707 ms, sometimes 211855 ms or 269767 ms. Some errors read: This process performs many disk activities and lowers the performance of Windows: Filename: ntoskrnl.exe. Why are the startups marked as critical? Are these values normal for a Dell Precision M90 (Intel Centrino Duo CPU, 2 GB RAM, 80 GB disk)? Some entries marked as errors show the time taken for shutdown, but there are some like the one I quoted in this question; what is the meaning of that?

    Read the article

  • Problem with the hosts file in Windows XP

    - by waldev
    I have a computer with Windows XP SP2 that has a weird problem: the hosts file doesn't work. No matter what I do, adding or removing entries in the file makes no difference; pinging the added names times out. I tried flushing the DNS cache (using ipconfig /flushdns), but that didn't work, and I even tried restarting the DNS Client service, but that made no difference either. Removing entries also has no effect; I ping the names and still get a reply. Help!!! Edit: Thanks for your answers, guys, but the problem is more complicated than this. It seems I'll have to reinstall Windows.
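
    A small diagnostic sketch in Python that compares what the resolver actually returns against what the hosts file says, to confirm whether the file is being read at all; the path is the standard Windows location:

        import socket

        hosts_path = r"C:\Windows\System32\drivers\etc\hosts"

        entries = {}
        with open(hosts_path) as f:
            for line in f:
                line = line.split("#", 1)[0].strip()      # drop comments and whitespace
                if line:
                    ip, *names = line.split()
                    for name in names:
                        entries[name.lower()] = ip

        for name, expected in entries.items():
            try:
                actual = socket.gethostbyname(name)
            except socket.gaierror:
                actual = "unresolvable"
            marker = "" if actual == expected else "   <-- mismatch"
            print(name, "hosts:", expected, "resolver:", actual, marker)

    Common culprits when every entry mismatches are the file having been saved with a hidden .txt extension, an unusual encoding, or restrictive permissions, rather than anything in the DNS cache itself.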

    Read the article

  • Windows XP clients do not update Server 2008 DNS forward lookup zone

    - by whatsisname
    I have a Cisco 5505 working as a DHCP server and a Server 2008 DNS server running an AD domain. I am having problems with all XP computers not updating the forward lookup zone; the reverse lookup zone updates are working, and Windows Vista and 7 computers update just fine. Additionally, the DNS server accepts both secure and non-secure updates. When people are connected through the Cisco VPN, they cannot resolve machines that have only reverse lookup entries, but they can resolve entries in the forward lookup zone. I have tried ipconfig /registerdns, but the forward lookup zone entries for the XP clients are not being populated. How can I get the XP Dynamic DNS client to make the updates, or what can I do to debug what's going on? Thanks
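
    A sketch (assuming the dnspython 2.x package, with placeholder names and addresses) that queries the Server 2008 DNS server directly for one client's forward and reverse records, which makes it obvious which zone is missing the entry:

        import dns.resolver
        import dns.reversename

        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = ["192.0.2.10"]        # placeholder: the Server 2008 DNS server

        host = "xp-client01.example.local"           # placeholder client name
        ip = "192.0.2.55"                            # placeholder client address

        try:
            print("A  :", [r.to_text() for r in resolver.resolve(host, "A")])
        except dns.resolver.NXDOMAIN:
            print("A  : missing from the forward lookup zone")

        rev_name = dns.reversename.from_address(ip)
        try:
            print("PTR:", [r.to_text() for r in resolver.resolve(rev_name, "PTR")])
        except dns.resolver.NXDOMAIN:
            print("PTR: missing from the reverse lookup zone")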

    Read the article

  • ASP.NET GZip Encoding Caveats

    - by Rick Strahl
    GZip encoding in ASP.NET is pretty easy to accomplish using the built-in GZipStream and DeflateStream classes and applying them to the Response.Filter property.  While applying GZip and Deflate behavior is pretty easy there are a few caveats that you have watch out for as I found out today for myself with an application that was throwing up some garbage data. But before looking at caveats let’s review GZip implementation for ASP.NET. ASP.NET GZip/Deflate Basics Response filters basically are applied to the Response.OutputStream and transform it as data is written to it through the ASP.NET Response object. So a Response.Write eventually gets written into the output stream which if a filter is also written through the filter stream’s interface. To perform the actual GZip (and Deflate) encoding typically used by Web pages .NET includes the GZipStream and DeflateStream stream classes which can be readily assigned to the Repsonse.OutputStream. With these two stream classes in place it’s almost trivially easy to create a couple of reusable methods that allow you to compress your HTTP output. In my standard WebUtils utility class (from the West Wind West Wind Web Toolkit) created two static utility methods – IsGZipSupported and GZipEncodePage – that check whether the client supports GZip encoding and then actually encodes the current output (note that although the method includes ‘Page’ in its name this code will work with any ASP.NET output). /// <summary> /// Determines if GZip is supported /// </summary> /// <returns></returns> public static bool IsGZipSupported() { string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"]; if (!string.IsNullOrEmpty(AcceptEncoding) && (AcceptEncoding.Contains("gzip") || AcceptEncoding.Contains("deflate"))) return true; return false; } /// <summary> /// Sets up the current page or handler to use GZip through a Response.Filter /// IMPORTANT: /// You have to call this method before any output is generated! /// </summary> public static void GZipEncodePage() { HttpResponse Response = HttpContext.Current.Response; if (IsGZipSupported()) { string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"]; if (AcceptEncoding.Contains("deflate")) { Response.Filter = new System.IO.Compression.DeflateStream(Response.Filter, System.IO.Compression.CompressionMode.Compress); Response.Headers.Remove("Content-Encoding"); Response.AppendHeader("Content-Encoding", "deflate"); } else { Response.Filter = new System.IO.Compression.GZipStream(Response.Filter, System.IO.Compression.CompressionMode.Compress); Response.Headers.Remove("Content-Encoding"); Response.AppendHeader("Content-Encoding", "gzip"); } } } As you can see the actual assignment of the Filter is as simple as: Response.Filter = new DeflateStream(Response.Filter, System.IO.Compression.CompressionMode.Compress); which applies the filter to the OutputStream. You also need to ensure that your response reflects the new GZip or Deflate encoding and ensure that any pages that are cached in Proxy servers can differentiate between pages that were encoded with the various different encodings (or no encoding). 
To use this utility function now is trivially easy: In any ASP.NET code that wants to compress its Response output you simply use: protected void Page_Load(object sender, EventArgs e) { WebUtils.GZipEncodePage(); Entry = WebLogFactory.GetEntry(); var entries = Entry.GetLastEntries(App.Configuration.ShowEntryCount, "pk,Title,SafeTitle,Body,Entered,Feedback,Location,ShowTopAd", "TEntries"); if (entries == null) throw new ApplicationException("Couldn't load WebLog Entries: " + Entry.ErrorMessage); this.repEntries.DataSource = entries; this.repEntries.DataBind(); } Here I use an ASP.NET page, but the above WebUtils.GZipEncode() method call will work in any ASP.NET application type including HTTP Handlers. The only requirement is that the filter needs to be applied before any other output is sent to the OutputStream. For example, in my CallbackHandler service implementation by default output over a certain size is GZip encoded. The output that is generated is JSON or XML and if the output is over 5k in size I apply WebUtils.GZipEncode(): if (sbOutput.Length > GZIP_ENCODE_TRESHOLD) WebUtils.GZipEncodePage(); Response.ContentType = ControlResources.STR_JsonContentType; HttpContext.Current.Response.Write(sbOutput.ToString()); Ok, so you probably get the idea: Encoding GZip/Deflate content is pretty easy. Hold on there Hoss –Watch your Caching Or is it? There are a few caveats that you need to watch out for when dealing with GZip content. The fist issue is that you need to deal with the fact that some clients don’t support GZip or Deflate content. Most modern browsers support it, but if you have a programmatic Http client accessing your content GZip/Deflate support is by no means guaranteed. For example, WinInet Http clients don’t support GZip out of the box – it has to be explicitly implemented. Other low level HTTP clients on other platforms too don’t support GZip out of the box. The problem is that your application, your Web Server and Proxy Servers on the Internet might be caching your generated content. If you return content with GZip once and then again without, either caching is not applied or worse the wrong type of content is returned back to the client from a cache or proxy. The result is an unreadable response for *some clients* which is also very hard to debug and fix once in production. You already saw the issue of Proxy servers addressed in the GZipEncodePage() function: // Allow proxy servers to cache encoded and unencoded versions separately Response.AppendHeader("Vary", "Content-Encoding"); This ensures that any Proxy servers also check for the Content-Encoding HTTP Header to cache their content – not just the URL. The same thing applies if you do OutputCaching in your own ASP.NET code. If you generate output for GZip on an OutputCached page the GZipped content will be cached (either by ASP.NET’s cache or in some cases by the IIS Kernel Cache). But what if the next client doesn’t support GZip? She’ll get served a cached GZip page that won’t decode and she’ll get a page full of garbage. Wholly undesirable. 
To fix this you need to add some custom OutputCache rules by way of the GetVaryByCustom() HttpApplication method in your global_ASAX file: public override string GetVaryByCustomString(HttpContext context, string custom) { // Override Caching for compression if (custom == "GZIP") { string acceptEncoding = HttpContext.Current.Response.Headers["Content-Encoding"]; if (string.IsNullOrEmpty(acceptEncoding)) return ""; else if (acceptEncoding.Contains("gzip")) return "GZIP"; else if (acceptEncoding.Contains("deflate")) return "DEFLATE"; return ""; } return base.GetVaryByCustomString(context, custom); } In a page that use Output caching you then specify: <%@ OutputCache Duration="180" VaryByParam="none" VaryByCustom="GZIP" %> To use that custom rule. It’s all Fun and Games until ASP.NET throws an Error Ok, so you’re up and running with GZip, you have your caching squared away and your pages that you are applying it to are jamming along. Then BOOM, something strange happens and you get a lovely garbled page that look like this: Lovely isn’t it? What’s happened here is that I have WebUtils.GZipEncode() applied to my page, but there’s an error in the page. The error falls back to the ASP.NET error handler and the error handler removes all existing output (good) and removes all the custom HTTP headers I’ve set manually (usually good, but very bad here). Since I applied the Response.Filter (via GZipEncode) the output is now GZip encoded, but ASP.NET has removed my Content-Encoding header, so the browser receives the GZip encoded content without a notification that it is encoded as GZip. The result is binary output. Here’s what Fiddler says about the raw HTTP header output when an error occurs when GZip encoding was applied: HTTP/1.1 500 Internal Server Error Cache-Control: private Content-Type: text/html; charset=utf-8 Date: Sat, 30 Apr 2011 22:21:08 GMT Content-Length: 2138 Connection: close ?`I?%&/m?{J?J??t??` … binary output striped here Notice: no Content-Encoding header and that’s why we’re seeing this garbage. ASP.NET has stripped the Content-Encoding header but left our filter intact. So how do we fix this? In my applications I typically have a global Application_Error handler set up and in this case I’ve been using that. One thing that you can do in the Application_Error handler is explicitly clear out the Response.Filter and set it to null at the top: protected void Application_Error(object sender, EventArgs e) { // Remove any special filtering especially GZip filtering Response.Filter = null; … } And voila I get my Yellow Screen of Death or my custom generated error output back via uncompressed content. BTW, the same is true for Page level errors handled in Page_Error or ASP.NET MVC Error handling methods in a controller. 
Another and possibly even better solution is to check whether a filter is attached just before the headers are sent to the client as pointed out by Adam Schroeder in the comments: protected void Application_PreSendRequestHeaders() { // ensure that if GZip/Deflate Encoding is applied that headers are set // also works when error occurs if filters are still active HttpResponse response = HttpContext.Current.Response; if (response.Filter is GZipStream && response.Headers["Content-encoding"] != "gzip") response.AppendHeader("Content-encoding", "gzip"); else if (response.Filter is DeflateStream && response.Headers["Content-encoding"] != "deflate") response.AppendHeader("Content-encoding", "deflate"); } This uses the Application_PreSendRequestHeaders() pipeline event to check for compression encoding in a filter and adjusts the content accordingly. This is actually a better solution since this is generic – it’ll work regardless of how the content is cleaned up. For example, an error Response.Redirect() or short error display might get changed and the filter not cleared and this code actually handles that. Sweet, thanks Adam. It’s unfortunate that ASP.NET doesn’t natively clear out Response.Filters when an error occurs just as it clears the Response and Headers. I can’t see where leaving a Filter in place in an error situation would make any sense, but hey - this is what it is and it’s easy enough to fix as long as you know where to look. Riiiight! IIS and GZip I should also mention that IIS 7 includes good support for compression natively. If you can defer encoding to let IIS perform it for you rather than doing it in your code by all means you should do it! Especially any static or semi-dynamic content that can be made static should be using IIS built-in compression. Dynamic caching is also supported but is a bit more tricky to judge in terms of performance and footprint. John Forsyth has a great article on the benefits and drawbacks of IIS 7 compression which gives some detailed performance comparisons and impact reviews. I’ll post another entry next with some more info on IIS compression since information on it seems to be a bit hard to come by. Related Content Built-in GZip/Deflate Compression in IIS 7.x HttpWebRequest and GZip Responses © Rick Strahl, West Wind Technologies, 2005-2011Posted in ASP.NET   IIS7  

    Read the article

  • Creating HTML5 Offline Web Applications with ASP.NET

    - by Stephen Walther
    The goal of this blog entry is to describe how you can create HTML5 Offline Web Applications when building ASP.NET web applications. I describe the method that I used to create an offline Web application when building the JavaScript Reference application. You can read about the HTML5 Offline Web Application standard by visiting the following links: Offline Web Applications Firefox Offline Web Applications Safari Offline Web Applications Currently, the HTML5 Offline Web Applications feature works with all modern browsers with one important exception. You can use Offline Web Applications with Firefox, Chrome, and Safari (including iPhone Safari). Unfortunately, however, Internet Explorer does not support Offline Web Applications (not even IE 9). Why Build an HTML5 Offline Web Application? The official reason to build an Offline Web Application is so that you do not need to be connected to the Internet to use it. For example, you can use the JavaScript Reference Application when flying in an airplane, riding a subway, or hiding in a cave in Borneo. The JavaScript Reference Application works great on my iPhone even when I am completely disconnected from any network. The following screenshot shows the JavaScript Reference Application running on my iPhone when airplane mode is enabled (notice the little orange airplane):   Admittedly, it is becoming increasingly difficult to find locations where you can’t get Internet access. A second, and possibly better, reason to create Offline Web Applications is speed. An Offline Web Application must be downloaded only once. After it gets downloaded, all of the files required by your Web application (HTML, CSS, JavaScript, Image) are stored persistently on your computer. Think of Offline Web Applications as providing you with a super browser cache. Normally, when you cache files in a browser, the files are cached on a file-by-file basis. For each HTML, CSS, image, or JavaScript file, you specify how long the file should remain in the cache by setting cache headers. Unlike the normal browser caching mechanism, the HTML5 Offline Web Application cache is used to specify a caching policy for an entire set of files. You use a manifest file to list the files that you want to cache and these files are cached until the manifest is changed. Another advantage of using the HTML5 offline cache is that the HTML5 standard supports several JavaScript events and methods related to the offline cache. For example, you can be notified in your JavaScript code whenever the offline application has been updated. You can use JavaScript methods, such as the ApplicationCache.update() method, to update the cache programmatically. Creating the Manifest File The HTML5 Offline Cache uses a manifest file to determine the files that get cached. 
Here’s what the manifest file looks like for the JavaScript Reference application: CACHE MANIFEST # v30 Default.aspx # Standard Script Libraries Scripts/jquery-1.4.4.min.js Scripts/jquery-ui-1.8.7.custom.min.js Scripts/jquery.tmpl.min.js Scripts/json2.js # App Scripts App_Scripts/combine.js App_Scripts/combine.debug.js # Content (CSS & images) Content/default.css Content/logo.png Content/ui-lightness/jquery-ui-1.8.7.custom.css Content/ui-lightness/images/ui-bg_glass_65_ffffff_1x400.png Content/ui-lightness/images/ui-bg_glass_100_f6f6f6_1x400.png Content/ui-lightness/images/ui-bg_highlight-soft_100_eeeeee_1x100.png Content/ui-lightness/images/ui-icons_222222_256x240.png Content/ui-lightness/images/ui-bg_glass_100_fdf5ce_1x400.png Content/ui-lightness/images/ui-bg_diagonals-thick_20_666666_40x40.png Content/ui-lightness/images/ui-bg_gloss-wave_35_f6a828_500x100.png Content/ui-lightness/images/ui-icons_ffffff_256x240.png Content/ui-lightness/images/ui-icons_ef8c08_256x240.png Content/browsers/c8.png Content/browsers/es3.png Content/browsers/es5.png Content/browsers/ff3_6.png Content/browsers/ie8.png Content/browsers/ie9.png Content/browsers/sf5.png NETWORK: Services/EntryService.svc http://superexpert.com/resources/JavaScriptReference/ A Cache Manifest file always starts with the line of text Cache Manifest. In the manifest above, all of the CSS, image, and JavaScript files required by the JavaScript Reference application are listed. For example, the Default.aspx ASP.NET page, jQuery library, JQuery UI library, and several images are listed. Notice that you can add comments to a manifest by starting a line with the hash character (#). I use comments in the manifest above to group JavaScript and image files. Finally, notice that there is a NETWORK: section of the manifest. You list any file that you do not want to cache (any file that requires network access) in this section. In the manifest above, the NETWORK: section includes the URL for a WCF Service named EntryService.svc. This service is called to get the JavaScript entries displayed by the JavaScript Reference. There are two important things that you need to be aware of when using a manifest file. First, all relative URLs listed in a manifest are resolved relative to the manifest file. The URLs listed in the manifest above are all resolved relative to the root of the application because the manifest file is located in the application root. Second, whenever you make a change to the manifest file, browsers will download all of the files contained in the manifest (all of them). For example, if you add a new file to the manifest then any browser that supports the Offline Cache standard will detect the change in the manifest and download all of the files listed in the manifest automatically. If you make changes to files in the manifest (for example, modify a JavaScript file) then you need to make a change in the manifest file in order for the new version of the file to be downloaded. The standard way of updating a manifest file is to include a comment with a version number. The manifest above includes a # v30 comment. If you make a change to a file then you need to modify the comment to be # v31 in order for the new file to be downloaded. When Are Updated Files Downloaded? When you make changes to a manifest, the changes are not reflected the very next time you open the offline application in your web browser. Your web browser will download the updated files in the background. 
This can be very confusing when you are working with JavaScript files. If you make a change to a JavaScript file, and you have cached the application offline, then the changes to the JavaScript file won’t appear when you reload the application. The HTML5 standard includes new JavaScript events and methods that you can use to track changes and make changes to the Application Cache. You can use the ApplicationCache.update() method to initiate an update to the application cache and you can use the ApplicationCache.swapCache() method to switch to the latest version of a cached application. My heartfelt recommendation is that you do not enable your application for offline storage until after you finish writing your application code. Otherwise, debugging the application can become a very confusing experience. Offline Web Applications versus Local Storage Be careful to not confuse the HTML5 Offline Web Application feature and HTML5 Local Storage (aka DOM storage) feature. The JavaScript Reference Application uses both features. HTML5 Local Storage enables you to store key/value pairs persistently. Think of Local Storage as a super cookie. I describe how the JavaScript Reference Application uses Local Storage to store the database of JavaScript entries in a separate blog entry. Offline Web Applications enable you to store static files persistently. Think of Offline Web Applications as a super cache. Creating a Manifest File in an ASP.NET Application A manifest file must be served with the MIME type text/cache-manifest. In order to serve the JavaScript Reference manifest with the proper MIME type, I added two files to the JavaScript Reference Application project: Manifest.txt – This text file contains the actual manifest file. Manifest.ashx – This generic handler sends the Manifest.txt file with the MIME type text/cache-manifest. Here’s the code for the generic handler: using System.Web; namespace JavaScriptReference { public class Manifest : IHttpHandler { public void ProcessRequest(HttpContext context) { context.Response.ContentType = "text/cache-manifest"; context.Response.WriteFile(context.Server.MapPath("Manifest.txt")); } public bool IsReusable { get { return false; } } } } The Default.aspx file contains a reference to the manifest. The opening HTML tag in the Default.aspx file looks like this: <html manifest="Manifest.ashx"> Notice that the HTML tag contains a manifest attribute that points to the Manifest.ashx generic handler. Internet Explorer simply ignores this attribute. Every other modern browser will download the manifest when the Default.aspx page is requested. Seeing the Offline Web Application in Action The experience of using an HTML5 Web Application is different with different browsers. When you first open the JavaScript Reference application with Firefox, you get the following warning: Notice that you are provided with the choice of whether you want to use the application offline or not. Browsers other than Firefox, such as Chrome and Safari, do not provide you with this choice. Chrome and Safari will create an offline cache automatically. If you click the Allow button then Firefox will download all of the files listed in the manifest. You can view the files contained in the Firefox offline application cache by typing about:cache in the Firefox address bar: You can view the actual items being cached by clicking the List Cache Entries link: The Offline Web Application experience is different in the case of Google Chrome. 
You can view the entries in the offline cache by opening the Developer Tools (hit Shift+CTRL+I), selecting the Storage tab, and selecting Application Cache: Notice that you view the status of the Application Cache. In the screen shot above, the status is UNCACHED which means that the files listed in the manifest have not been downloaded and cached yet. The different possible values for the status are included in the HTML5 Offline Web Application standard: UNCACHED – The Application Cache has not been initialized. IDLE – The Application Cache is not currently being updated. CHECKING – The Application Cache is being fetched and checked for updates. DOWNLOADING – The files in the Application Cache are being updated. UPDATEREADY – There is a new version of the Application. OBSOLETE – The contents of the Application Cache are obsolete. Summary In this blog entry, I provided a description of how you can use the HTML5 Offline Web Application feature in the context of an ASP.NET application. I described how this feature is used with the JavaScript Reference Application to store the entire application on a user’s computer. By taking advantage of this new feature of the HTML5 standard, you can improve the performance of your ASP.NET web applications by requiring users of your web application to download your application once and only once. Furthermore, you can enable users to take advantage of your applications anywhere -- regardless of whether or not they are connected to the Internet.

    Read the article

  • Add Free Google Apps to Your Website or Blog

    - by Matthew Guay
    Would you like to have an email address from your own domain, but prefer Gmail’s interface and integration with Google Docs?  Here’s how you can add the free Google Apps Standard to your site and get the best of both worlds. Note: To signup for Google Apps and get it setup on your domain, you will need to be able to add info to your WordPress blog or change Domain settings manually. Getting Started Head to the Google Apps signup page (link below), and click the Get Started button on the right.  Note that we are signing up for the free Google Apps which allows a max of 50 users; if you need more than 50 email addresses for your domain, you can choose Premiere Edition instead for $50/year. Select that you are the Administrator of the domain, and enter the domain or subdomain you want to use with Google Apps.  Here we’re adding Google Apps to the techinch.com site, but we could instead add Apps to mail.techinch.com if needed…click Get Started. Enter your name, phone number, an existing email address, and other Administrator information.  The Apps signup page also includes some survey questions about your organization, but you only have to fill in the required fields. On the next page, enter a username and password for the administrator account.  Note that the user name will also be the administrative email address as [email protected]. Now you’re ready to authenticate your Google Apps account with your domain.  The steps are slightly different depending on whether your site is on WordPress.com or on your own hosting service or server, so we’ll show how to do it both ways.   Authenticate and Integrate Google Apps with WordPress.com To add Google Apps to a domain you have linked to your WordPress.com blog, select Change yourdomain.com CNAME record and click Continue. Copy the code under #2, which should be something like googleabcdefg123456.  Do not click the button at the bottom; wait until we’ve completed the next step.   Now, in a separate browser window or tab, open your WordPress Dashboard.  Click the arrow beside Upgrades, and select Domains from the menu. Click the Edit DNS link beside the domain name you’re adding to Google Apps. Scroll down to the Google Apps section, and paste your code from Google Apps into the verification code field.  Click Generate DNS records when you’re done. This will add the needed DNS settings to your records in the box above the Google Apps section.  Click Save DNS records. Now, go back to the Google Apps signup page, and click I’ve completed the steps above. Authenticate Google Apps on Your Own Server If your website is hosted on your own server or hosting account, you’ll need to take a few more steps to add Google Apps to your domain.  You can add a CNAME record to your domain host using the same information that you would use with a WordPress account, or you can upload an HTML file to your site’s main directory.  In this test we’re going to upload an HTML file to our site for verification. Copy the code under #1, which should be something like googleabcdefg123456.  Do not click the button at the bottom; wait until we’ve completed the next step first. Create a new HTML file and paste the code in it.  You can do this easily in Notepad: create a new document, paste the code, and then save as googlehostedservice.html.  Make sure to select the type as All Files or otherwise the file will have a .txt extension. Upload this file to your web server via FTP or a web dashboard for your site.  
Make sure it is in the top level of your site’s directory structure, and try visiting it at yoursite.com/googlehostedservice.html. Now, go back to the Google Apps signup page, and click I’ve completed the steps above. Setup Your Email on Google Apps When this is done, your Google Apps account should be activated and ready to finish setting up.  Google Apps will offer to launch a guide to step you through the rest of the process; you can click Launch guide if you want, or click Skip this guide to continue on your own and go directly to the Apps dashboard.   If you choose to open the guide, you’ll be able to easily learn the ropes of Google Apps administration.  Once you’ve completed the tutorial, you’ll be taken to the Google Apps dashboard. Most of the Google Apps will be available for immediate use, but Email may take a bit more setup.  Click Activate email to get your Gmail-powered email running on your domain.    Add Google MX Records to Your Server You will need to add Google MX records to your domain registrar in order to have your mail routed to Google.  If your domain is hosted on WordPress.com, you’ve already made these changes so simply click I have completed these steps.  Otherwise, you’ll need to manually add these records before clicking that button.   Adding MX Entries is fairly easy, but the steps may depend on your hosting company or registrar.  With some hosts, you may have to contact support to have them add the MX records for you.  Our site’s host uses the popular cPanel for website administration, so here’s how we added the MX Entries through cPanel. Add MX Entries through cPanel Login to your site’s cPanel, and click the MX Entry link under Mail. Delete any existing MX Records for your domain or subdomain first to avoid any complications or interactions with Google Apps.  If you think you may want to revert to your old email service in the future, save a copy of the records so you can switch back if you need. Now, enter the MX Records that Google listed.  Here’s our account after we added all of the entries to our account. Finally, return to your Google Apps Dashboard and click the I have completed these steps button at the bottom of the page. Activating Service You’re now officially finished activating and setting up your Google Apps account.  Google will first have to check the MX records for your domain; this only took around an hour in our test, but Google warns it can take up to 48 hours in some cases. You may then see that Google is updating its servers with your account information.  Once again, this took much less time than Google’s estimate. When everything’s finished, you can click the link to access the inbox of your new Administrator email account in Google Apps. Welcome to Gmail … at your own domain!  All of the Google Apps work just the same in this version as they do in the public @gmail.com version, so you should feel right at home. You can return to the Google Apps dashboard from the Administrative email account by clicking the Manage this domain at the top right. In the Dashboard, you can easily add new users and email accounts, as well as change settings in your Google Apps account and add your site’s branding to your Apps. Your Google Apps will work just like their standard @gmail.com counterparts.  Here’s an example of an inbox customized with the techinch logo and a Gmail theme. Links to Remember Here are the common links to your Google Apps online.  Substitute your domain or subdomain for yourdomain.com. 
Dashboard: https://www.google.com/a/cpanel/yourdomain.com
Email: https://mail.google.com/a/yourdomain.com
Calendar: https://www.google.com/calendar/hosted/yourdomain.com
Docs: https://docs.google.com/a/yourdomain.com
Sites: https://sites.google.com/a/yourdomain.com
Conclusion: Google Apps offers you great web apps and webmail for your domain, and lets you take advantage of Google's services while still maintaining the professional look of your own domain. Setting up your account can be slightly complicated, but once it's finished, it will run seamlessly and you'll never have to worry about email or collaboration with your team again. Signup for the free Google Apps Standard.

    Read the article

  • Fixing NativeHR 0x80070002 Error When Retracting or Deploying SharePoint Apps from Visual Studio

    - by Damon Armstrong
    Sometimes when App deployment fails from Visual Studio you will get the following error when trying to retract or redeploy the app: <nativehr>0×80070002</nativehr><nativestack></nativestack> There seems to be some issue with the information in the content database.  To fix the problem I just deleted all of the information from the App tables in the content database.  We only have one app in testing at a time so this worked fine for me.  Doing this in a production environment or if you have multiple apps installed is not recommended, so if you are in either of those scenarios you will probably have to dig into those tables to find the offending entries.  You may also have to remove some of the App principal entries from the App Service Application database as well (I’ll try to update this post if we find that to be true).   I’m going to post the SQL to do the full deletion, but be careful when running this: DO NOT RUN THIS IN A PRODUCTION ENVIRONMENT: DELETE FROM [dbo].[AppDatabaseMetadata] DELETE FROM [dbo].[AppInstallationProperty] DELETE FROM [dbo].[AppInstallations] DELETE FROM [dbo].[AppJobs] DELETE FROM [dbo].[AppLifecycleErrors] DELETE FROM [dbo].[AppPackages] DELETE FROM [dbo].[AppPrincipalPerms] DELETE FROM [dbo].[AppPrincipals] DELETE FROM [dbo].[AppResources] DELETE FROM [dbo].[AppRuntimeIcons] DELETE FROM [dbo].[AppRuntimeMetadata] DELETE FROM [dbo].[AppRuntimeSubstitutionDictionary] DELETE FROM [dbo].[AppSourceInfo] DELETE FROM [dbo].[AppSubscriptionCosts] DELETE FROM [dbo].[AppTaskDependencies]

    Read the article

  • NTFS and NTFS-3G

    - by MestreLion
    I have a netbook with Ubuntu Netbook 10.04 and a desktop with Mint 10 (~ Ubuntu Desktop 10.10). Both of them have read/write NTFS partitions mounted via /etc/fstab, and it works fine. I've read on the net, Google, forums, and several posts here that NTFS-3G is the driver that gives you full access to an NTFS partition, that it is new, great, powerful, yada-yada. But... my entries use plain ntfs, with no mention of -3g, and they still work perfectly for both reading and writing. Am I already using ntfs-3g? Does 10.04 onwards use it "under the hood"? How can I check that on my system? Should my /etc/fstab entries use "ntfs-3g" as the filesystem type? Why do some posts refer to mounting ntfs, while others say mount ntfs-3g? I'm really confused about where I should use filesystem-type names (ntfs) versus driver names (ntfs-3g). Or is it irrelevant now, and is ntfs always an "alias" or something for ntfs-3g nowadays? I've read some posts here, from Oct '10 and Nov '10, that "announce" that ntfs-3g "finally arrived"... that's way post-Lucid 10.04. Could someone please undo this mess in my head and explain the relation between ntfs and ntfs-3g, what the current status is (10.04 and 10.10), and where I should use each (regarding mount, fstab, etc.)? Sorry for the long, confusing, redundant text... I'm really getting sleepy.
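
    A tiny sketch that answers the "am I already using ntfs-3g?" part directly by reading /proc/mounts, which records the filesystem type each partition was actually mounted with:

        with open("/proc/mounts") as mounts:
            for line in mounts:
                device, mountpoint, fstype = line.split()[:3]
                # The in-kernel (mostly read-only) driver reports 'ntfs';
                # ntfs-3g mounts through FUSE and normally reports 'fuseblk'.
                if fstype in ("ntfs", "fuseblk"):
                    print(device, "on", mountpoint, "type", fstype)

    On recent Ubuntu releases mount.ntfs is normally a link to ntfs-3g, which is why a plain ntfs entry in /etc/fstab still gives read/write access; seeing fuseblk in /proc/mounts is the usual tell-tale that ntfs-3g is doing the work.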

    Read the article

  • How to boot live iso images?

    - by virpara
    I found that it can be done with loopback as follows menuentry "Lucid ISO" { loopback loop (hd0,1)/boot/iso/ubuntu-10.04-desktop-i386.iso linux (loop)/casper/vmlinuz boot=casper iso-scan/filename=/boot/iso/ubuntu-10.04-desktop-i386.iso noprompt noeject initrd (loop)/casper/initrd.lz } But it works only with ubuntu or its derivatives. How it should be written if I want to boot other live images like fedora, cent, opensuse etc. ? Edit: I found some other entries but all of them are probably debian based. menuentry "Linux Mint 10 Gnome ISO" { loopback loop /linuxmint10.iso linux (loop)/casper/vmlinuz file=/cdrom/preseed/mint.seed boot=casper initrd=/casper/initrd.lz iso-scan/filename=/linuxmint10.iso noeject noprompt splash -- initrd (loop)/casper/initrd.lz } menuentry "DBAN ISO" { loopback loop /dban.iso linux (loop)/DBAN.BZI nuke="dwipe" iso-scan/filename=/dban.iso silent -- } menuentry "Tinycore ISO" { loopback loop /tinycore.iso linux (loop)/boot/bzImage -- initrd (loop)/boot/tinycore.gz } menuentry "SystemRescueCd" { loopback loop /systemrescuecd.iso linux (loop)/isolinux/rescuecd isoloop=/systemrescuecd.iso setkmap=us docache dostartx initrd (loop)/isolinux/initram.igz } Edit2: How to chainload grub and syslinux from grub2? Edit3: I want to boot other live images without any removable devices and use grub2 so need menu entries specific to grub2.

    Read the article

  • Can I use Ubuntu One to sync data files between two remote computers?

    - by Sleepy John
    I've got two computers, both running Ubuntu with files in their home folders sync'd in to Ubuntu One. I'd like to know if it's possible to make Ubuntu One automatically download data changes that have been uploaded automatically to Ubuntu One from one computer to the equivalent data file in the other. Clarifying a bit further, I've installed Red Notebook in both computers and so they each have their own /.rednotebook/data folder containing a series of .txt files corresponding to the monthly entries in each of them. These are sync'd to upload any changes to those .txt files to Ubuntu One. My question is can I, and if so how, do I make Ubuntu One automatically download and replace those .txt files in the other computer after they've been updated and uploaded from the first computer? I did labouriously manage to download all those text files which had been uploaded from the first computer, from Ubuntu One one-by-one to the second computer, but what I want to do is automate this process and that's where I'm stuck. I'm aware that things could get a bit complicated if both my computers were on-line at the same time and both were simultaneously making different Red Notebook entries, so that's not the scenario I'm trying to cover. All I want to achieve is that whatever updates to the files have been uploaded by one computer, will automatically be downloaded to the same-named files in the other computer as soon as that second computer appears on line and detects that Ubuntu One has matching but more recent sync'd files than the ones it's holding.

    Read the article

  • Does the "security" repository provide anything not found in the "updates" repository?

    - by netvope
    For the limited number of packages I looked at (e.g. apache), I found that the package version in the updates repository is always newer than or equal to the version available in the security repository (provided both exist). This gives me the impression that all security patches posted to the security repository are also posted to the updates repository. If this is true, I can remove all <release_name>-security entries in my apt sources.list and the <release_name>-updates entries will still give me the security patches. This will speed up apt-get update quite a bit. The best documentation I have found regarding the repositories is on the community help page: "Important Security Updates (raring-security)": Patches for security vulnerabilities in Ubuntu packages. They are managed by the Ubuntu Security Team and are designed to change the behavior of the package as little as possible -- in fact, the minimum required to resolve the security problem. As a result, they tend to be very low-risk to apply and all users are urged to apply security updates. "Recommended Updates (raring-updates)": Updates for serious bugs in Ubuntu packaging that do not affect the security of the system. However, it does not mention whether the updates repository also includes everything in the security repository. Can anyone confirm (or disconfirm) this?
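
    A rough Python sketch (wrapping apt-cache madison; the package names are just examples) for spot-checking whether -updates really carries a version at least as new as -security for a given package:

        import subprocess

        for pkg in ("apache2", "openssl", "linux-image-generic"):   # example packages
            out = subprocess.run(["apt-cache", "madison", pkg],
                                 capture_output=True, text=True).stdout
            for line in out.splitlines():
                if "-security" in line or "-updates" in line:
                    print(line.strip())
            print()

    This only samples a few packages, so it can suggest but not prove the general claim on its own.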

    Read the article

  • What's the difference between General Ledger Transfer Program, Create Accounting and Submit Accounting?

    - by Oracle_EBS
    In Release 12, the General Ledger Transfer Program is no longer used. Use Create Accounting or Submit Accounting instead. Submit Accounting spawns the Revenue Recognition Process. The Create Accounting program does not. So if you create transactions with rules, then you would want to run Submit Accounting Process to spawn Revenue Recognition to create the distribution rows, which Create Accounting is then spawned to process to the GL. Create Accounting Submit Accounting Short Name for Concurrent Program XLAACCPB ARACCPB Specific to Receivables No Yes Runs Revenue Recognition automatically No Yes Can be run real-time for one Transaction/Receipt at a time Yes No Spawns the following Programs 1) XLAACCPB module: Create Accounting 2) XLAACCUP module: Accounting Program 3) GLLEZL module: Journal Import 1) ARTERRPM module: Revenue Recognition Master Program 2) ARTERRPW module: Revenue Recognition with parallel workers - could be numerous 3) ARREVSWP - Revenue Contingency Analyzer 4) XLAACCPB module: Create Accounting 5) XLAACCUP module: Accounting Program 5) GLLEZL module: Journal Import Keep in mind, Reports owned by application 'Subledger Accounting' cannot be seen when running the report from Receivables responsibility. You may want to request your sysadmin to attach the following SLA reports/programs to your AR responsibility as you will need these for your AR closing process: XLAPEXRPT : Subledger Period Close Exception Report - shows transactions in status final, incomplete and unprocessed. XLAGLTRN : Transfer Journal Entries to GL - transfers transactions in final status and manually created transactions to GL To add reports/programs owned by application 'Subledger Accounting' (Subledger Period Close Exception Report and Transfer Journal Entries to GL_ Add to the request group as follows: Let's use Subledger Accounting Report XLATBRPT: Open Account Balances Listing Report as an example. Responsibility: System Administrator Navigation: Security > Responsibility > Define Query the name of your Receivables Responsibility and note the Request Group (ie. Receivables All) Navigation: Security > Responsibility > Request Query the Request Group Go to Request Zone and Click on Add Record Enter the following: Type: Program Name: Open Account Balances Listing Save Responsibility: Receivables Manager Navigation: Control > Requests > Run In the list of values you should now see 'Open Account Balances Listing' report References: Note: 748999.1 How to add reports for application subledger accounting to receivables responsibiilty Note: 759534.1 R12 ARGLTP General Ledger Transfer Program Errors Out Note: 1121944.1 Understanding and Troubleshooting Revenue Recognition in Oracle Receivables

    Read the article

  • SSMS Tools Pack 1.9.3 is out!

    - by Mladen Prajdic
    This release adds a great new feature and fixes a few bugs. The new feature called Window Content History saves the whole text in all all opened SQL windows every N minutes with the default being 30 minutes. This feature fixes the shortcoming of the Query Execution History which is saved only when the query is run. If you're working on a large script and never execute it, the existing Query Execution History wouldn't save it. By contrast the Window Content History saves everything in a .sql file so you can even open it in your SSMS. The Query Execution History and Window Content History files are correlated by the same directory and file name so when you search through the Query Execution History you get to see the whole saved Window Content History for that query. Because Window Content History saves data in simple searchable .sql files there isn't a special search editor built in. It is turned ON by default but despite the built in optimizations for space minimization, be careful to not let it fill your disk. You can see how it looks in the pictures in the feature list. The fixed bugs are: SSMS 2008 R2 slowness reported by few people. An object explorer context menu bug where it showed multiple SSMS Tools entries and showed wrong entries for a node. A datagrid bug in SQL snippets. Ability to read illegal XML characters from log files. Fixed the upper limit bug of a saved history text to 5 MB. A bug when searching through result sets prevents search. A bug with Text formatting erroring out for certain scripts. A bug with finding servers where it would return null even though servers existed. Run custom scripts objects had a bug where |SchemaName| didn't display the correct table schema for columns. This is fixed. Also |NodeName| and |ObjectName| values now show the same thing.   You can download the new version 1.9.3 here. Enjoy it!

    Read the article

  • Configure Dual Boot, Windows 7 and Ubuntu 12.04 with or without EFI

    - by Keroak
    I have just installed Ubuntu 12.04 on a laptop with Windows 7, but I can't boot into Ubuntu. First, during the installation I made these partitions (maybe too many):
      /dev/sda1  FAT32  SYSTEM  200 MB  boot (EFI boot, I guess)
      /dev/sda2  unknown file system  128 MB  msftres (Windows Boot Manager)
      /dev/sda3  NTFS  OS  100 GB  (Windows 7)
      /dev/sda4  NTFS  DATOS  315 GB  (data partition)
      /dev/sda5  ext4  28 GB  (/home)
      /dev/sda8  unknown file system  1 GB  bios_grub (I'm not very sure why I made this one)
      /dev/sda6  ext4  17 GB  (/ ; Ubuntu 12.04 installed without errors, apparently)
      /dev/sda7  linux-swap  2 GB  (swap)
    I can boot into Windows perfectly. I tried to configure the Windows Boot Manager with EasyBCD, but it doesn't recognize any boot entry. Anyway, I added an Ubuntu entry and it configured it automatically. Now I have two boot entries: the Windows 7 one, which appears to work, and the Ubuntu 12.04 one, which prompts a "No application found" message. I restarted from a USB with Ubuntu and tried to fix GRUB from the command line and with boot-repair, with no results. As far as I understand, I have to tell the Windows Boot Manager where my Ubuntu boot loader is. So I have two problems: I don't know where my Ubuntu boot loader (GRUB or GRUB2 or whatever) is, and I don't know how to set my Ubuntu entry in the Windows Boot Manager. I guess I should use BCDedit.exe, as EasyBCD didn't show me the entries, but I don't know what parameters to use. I read several articles about it but didn't find anything useful.
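
    For the first problem (finding where the Ubuntu boot loader actually lives), a small Python sketch that can be run from the Ubuntu live USB; it assumes the EFI system partition (/dev/sda1 above) has been mounted at the hypothetical mount point /mnt/esp:

        import os

        esp = "/mnt/esp"   # hypothetical mount point for the EFI system partition

        # GRUB's EFI image is usually grubx64.efi (or shimx64.efi) under EFI/ubuntu
        # on the EFI system partition; list every .efi file so nothing is missed.
        for root, dirs, files in os.walk(esp):
            for name in files:
                if name.lower().endswith(".efi"):
                    print(os.path.join(root, name))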

    Read the article

  • How to install Oracle Database 11g Express Edition on Ubuntu 12.10?

    - by Praneeth Pj
    I installed the Oracle database following the steps mentioned in this blog. Downloaded 11g express edition Created a new user oracle under the group dba. Following steps are executed using this. Unzipped oracle-xe-11.2.0-1.0.x86_64.rpm.zip and then converted the rpm to the Ubuntu package by running: sudo alien --scripts -d oracle-xe-11.2.0-1.0.x86_64.rpm Created /sbin/chkconfig file and added the entries as specified there. Created /etc/sysctl.d/60-oracle.conf and added the entries as specified in the same link as above. Running the commands: ln -s /usr/bin/awk /bin/awk mkdir /var/lock/subsys touch /var/lock/subsys/listener .deb generated in step 3: sudo dpkg --install oracle-xe_11.2.0-2_amd64.deb Left the default values as it is: sudo /etc/init.d/oracle-xe configure Set the following env variables in ~/.bashrc file: export ORACLE_HOME=/u01/app/oracle/product/11.2.0/xe export ORACLE_SID=XE export NLS_LANG=`$ORACLE_HOME/bin/nls_lang.sh` export ORACLE_BASE=/u01/app/oracle export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH export PATH=$ORACLE_HOME/bin:$PATH Running the commands: chown -R oracle:dba /var/tmp/.oracle chmod -R 755 /var/tmp/.oracle chown -R oracle:dba /tmp/.oracle chmod -R 755 /tmp/.oracle Starting Oracle Database 11g Express Edition instance: sudo service oracle-xe start sqlplus / as sysdba and got the following: SQL*Plus: Release 11.2.0.2.0 Production on Thu Jan 3 09:41:58 2013 Copyright (c) 1982, 2011, Oracle. All rights reserved. Connected to an idle instance. Now when exectuting any SQL statements on SQLplus, I end up with the following error: SQL> select * from dual; select * from dual * ERROR at line 1: ORA-01034: ORACLE not available Process ID: 0 Session ID: 0 Serial number: 0 I have increased the swap memory as specified here $ free -m total used free shared buffers cached Mem: 3901 3428 473 0 182 1988 -/+ buffers/cache: 1258 2643 Swap: 5066 0 5066

    Read the article
