Search Results

Search found 3752 results on 151 pages for 'offline caching'.

Page 110/151 | < Previous Page | 106 107 108 109 110 111 112 113 114 115 116 117  | Next Page >

  • Error opening hyperlinks in Excel 2003

    - by richardtallent
    When clicking to follow hyperlinks from Excel, I'm now getting this error: Unable to open http://blah... Cannot download the information you requested. The hyperlinks in the Excel file are created using the HYPERLINK() formula. I use Google Chrome as my default browser. The web site in question uses Basic Authentication, and I've entered correct credentials when prompted (the dialog looked like an IE auth box, not Chrome's, but it's always been that way, even when it was working properly). This hasn't been an issue until recently. I'm guessing our IT department made some lame change to IE's configuration that is causing Office to not be able to open the URLs, despite having Chrome as my browser. Things I've checked already: URLs are good, they work fine when pasted manually into Chrome, IE, or Firefox. IE is not set to Work Offline (already found that suggestion on Google). I checked Program Access and Defaults and verified that Chrome is selected. Nothing in the URL requires URLEncoding, so it's not some goofy encoding issue. I've had reports from some other users now and then about the same problem, but this is the first time I've experienced it myself.

    Read the article

  • Transparent PNG menu item backgrounds in IE6 with rollovers

    - by evh
    Hi, I have been trying to do this for what feels like all my life. I have a list menu with block-display links, and each link has a sliding-doors PNG background image. I have used this JavaScript (http://www.ideashower.com/our_solutions/png-hover/) to implement the AlphaImageLoader fix for IE6 using a transparent GIF. When I test it for the first time it works, but if I click to a different page and then click back it doesn't work anymore - the menu completely disappears. I can get it to work again by duplicating the transparent GIF and changing its name, but again, if I go to another page and then come back to it, it stops working and the menu disappears. Is this a server caching issue or something like that? Any thoughts on this would be much appreciated! Thanks

    Read the article

  • netlogon errors

    - by rorr
    I have two instances of MSSQL 2005 and am using CA XOSoft replication. The master is a failover cluster and the replica is a standalone server. They are all running Server 2003 SP2 x64, with the same patch levels on all servers. This setup had worked great for several months until we recently restricted the RPC ports on both nodes of the master (ports 5000-6000, using rpccfg.exe). We have to implement egress filtering, thus the limiting of the ports. We began receiving login errors for SQL Windows authentication and NETLOGON Event ID 5719: This computer was not able to set up a secure session with a domain controller in domain due to the following: Not enough storage is available to process this command. This may lead to authentication problems. Make sure that this computer is connected to the network. If the problem persists, please contact your domain administrator. We also see group policies failing to update and cluster file shares going offline at the same time. The RPC ports were set back to default when we started seeing these problems and the servers were rebooted, but the problems persist. The domain controllers are not showing any errors, and running dcdiag and netdiag shows everything is fine. We have noticed that the XOSoft service ws_rep.exe is using a lot of handles (8-9k), about the same number that sqlserver is using. As soon as XOSoft replication is stopped, the login errors cease and everything functions correctly. I have opened a ticket with CA for XOSoft, but I'm not sure that the problem is actually XOSoft; it may just be the thing bringing the problem to light. I'm looking for tips on debugging RPC problems, specifically on limiting the ports and then reverting the changes.

    Read the article

  • Windows 7 Apache Crashes on ANY request

    - by Dan
    I have XAMPP installed. I am running Windows 7. I have WordPress installed so that I may tweak it and test things locally before putting them 'live' on a remote server. I just installed BuddyPress. The installation was rather seamless. I activated the plugin and almost immediately, Apache crashed. I have Apache running as a service so it immediately restarted itself and was running BUT if I even so much as refresh the page (or create any other request), down it goes. Listed here is the error report as generated by Windows 7:

        Problem signature:
          Problem Event Name:        APPCRASH
          Application Name:          apache.exe
          Application Version:       2.2.4.0
          Application Timestamp:     45ebef86
          Fault Module Name:         ZendOptimizer.dll
          Fault Module Version:      0.0.0.0
          Fault Module Timestamp:    45ea8fee
          Exception Code:            c0000005
          Exception Offset:          0004dc22
          OS Version:                6.1.7600.2.0.0.256.1
          Locale ID:                 1033
          Additional Information 1:  1ec0
          Additional Information 2:  1ec0fd70d07d060e5bfcf53c69ad1739
          Additional Information 3:  2c48
          Additional Information 4:  2c48940de5e7d1cb2e131ad6a0ca2feb

        Read our privacy statement online:
          http://go.microsoft.com/fwlink/?linkid=104288&clcid=0x0409
        If the online privacy statement is not available, please read our privacy statement offline:
          C:\Windows\system32\en-US\erofflps.txt

    Help?

    Read the article

  • NetApp FAS270 head doesn't see disks

    - by wfaulk
    I have an FAS270C. For months, I've been running it in a split-head manner (that is, with each head serving data totally independently, and without any clustering even being enabled) in order to facilitate moving some data around. I finally got everything situated, moved all the data to one of the heads, and was trying to get clustering set back up. Now when I try to install OnTap onto the "new" head, it cannot see any of the disks in the head shelf. (That is, the shelf into which the heads are inserted.) I've booted into maintenance mode, and it shows me that the 0b adapter, which should be the adapter that that shelf and its disks should be presented on, is in "OFFLINE (physical)" state. If I try to enable it with either "storage enable adapter 0b" or "fcadmin online 0b", it waits for about 30 seconds and then says: [fci.initialization.failed:error]: Initialization failed on Fibre Channel adapter 0b. [fci.adapter.online.failed:error]: Fibre Channel adapter 0b failed to come online. There is currently nothing attached to its external 0b port. I've tried it with and without an SFP plugged into it, and with and without its internal termination switch on. The currently active head can see those disks, and can see that two of them are assigned to the other head. Before I started reconfiguring, the "new" head could see disks on that shelf. They may even be the same disks that OnTap was installed on previously. Does anyone have any idea how to proceed?

    Read the article

  • ASP.NET MVC Response Filter + OutputCache Attribute

    - by Shane Andrade
    I'm not sure if this is an ASP.NET MVC-specific thing or ASP.NET in general, but here's what's happening. I have an action filter that removes whitespace by means of a response filter:

        public class StripWhitespaceAttribute : ActionFilterAttribute
        {
            public StripWhitespaceAttribute() { }

            public override void OnResultExecuted(ResultExecutedContext filterContext)
            {
                base.OnResultExecuted(filterContext);
                filterContext.HttpContext.Response.Filter =
                    new WhitespaceFilter(filterContext.HttpContext.Response.Filter);
            }
        }

    When used in conjunction with the OutputCache attribute, my calls to Response.WriteSubstitution for "donut hole caching" do not work. The first and second time the page loads, the callbacks passed to WriteSubstitution get called; after that they are not called anymore until the output cache expires. I've noticed this with not just this particular filter but any filter assigned to Response.Filter... am I missing something? I also forgot to mention that I've tried this without an MVC action filter attribute, by attaching to the PostReleaseRequestState event in global.asax and setting Response.Filter there... but still no luck.
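
    For reference, a minimal sketch of the global.asax approach mentioned above, assuming the same WhitespaceFilter stream wrapper; the guard on content type is illustrative, not the poster's actual code:

        // Hypothetical global.asax wiring: attach the response filter late in the
        // pipeline rather than from an action filter. WhitespaceFilter is assumed
        // to be the same stream wrapper used in the attribute above.
        protected void Application_PostReleaseRequestState(object sender, EventArgs e)
        {
            HttpApplication app = (HttpApplication)sender;

            // Only wrap HTML responses; wrapping cached or static responses is one
            // place where output caching and substitution callbacks can interfere.
            if (app.Response.ContentType == "text/html")
            {
                app.Response.Filter = new WhitespaceFilter(app.Response.Filter);
            }
        }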

    Read the article

  • How is Entity Framework 4's POCO support compared to NHibernate?

    - by Kevin Pang
    Just wondering if anyone has had any experience using Entity Framework 4's POCO support and how it stands up compared to NHibernate. If they're comparable, I'd be very interested in making Entity Framework 4 my ORM of choice, if only because it would: support both data-first AND object-first development; have a robust LINQ provider; be easier to pitch to clients (since it's developed by Microsoft); and come baked into the .NET Framework rather than requiring eight DLLs to get up and running. In other words, are there any major shortcomings to EF4? Does it support all of the basic functionality NHibernate supports (lazy loading, eager loading, first-level caching, etc.) or is it still rough around the edges? Is the syntax for setting up the mappings as easy as NHibernate's and/or Fluent NHibernate's? Edit: Please don't bring up the vote of no confidence. That was ages ago and dealt with some serious shortcomings of EF1 that really don't seem to apply anymore to EF4.
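
    For comparison, a minimal Fluent NHibernate class map; the Person entity, table, and column names here are hypothetical, shown only to illustrate the mapping syntax the question asks about:

        using FluentNHibernate.Mapping;

        // Hypothetical entity: plain C# class, no base class required.
        public class Person
        {
            public virtual int Id { get; set; }        // virtual members enable lazy-loading proxies
            public virtual string Name { get; set; }
        }

        // Fluent NHibernate mapping for the entity above.
        public class PersonMap : ClassMap<Person>
        {
            public PersonMap()
            {
                Table("People");
                Id(x => x.Id);
                Map(x => x.Name).Length(100);
            }
        }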

    Read the article

  • Editing a Windows XP installation's registry without being able to log in.

    - by Alain
    I've got a Windows XP installation that has a corrupt registry. A worm (which was removed) had hijacked the HKLM\Software\Microsoft\Windows NT\CurrentVersion\Winlogon entry (which should have a value of Userinit=C:\windows\system32\userinit.exe). When the worm was removed, the corrupt entry was deleted entirely, and now the system automatically logs off immediately after attempting to log in. Regardless of the user and boot mode, no accounts can be logged in to. The only thing required to correct this behavior is to restore the registry key, but I cannot come up with any way of editing the registry without logging in to an account. I tried remotely connecting to the registry, but the required services aren't enabled on the machine. I tried booting the same machine using the BartPE boot CD, but I could not find any way of editing the registry of the C:\Windows installation - running regedit only modifies the X:\I386\ registry in memory. So, what can I use to modify the registry of an un-login-able Windows XP instance so that I can log in again? Thanks guys. EDIT: The fix worked. The solution to the auto-logoff problem was, as hoped, to simply add the value mentioned above to the appropriate registry entry. This can be done using the BartPE boot CD, as described in the accepted answer below, but I used the Offline NT Registry Editor software mentioned in another answer. The steps were:

    1. Boot from the NT Registry Editor CD.
    2. Follow the directions until the appropriate boot sector is loaded.
    3. Instead of using one of the default options for modifying passwords or user accounts, type "software" to edit that hive.
    4. Type '9' to enter the command-line-based registry editor.
    5. Type "cd Microsoft" (enter), "cd Windows NT" (enter), "cd CurrentVersion" (enter), "cd Winlogon" (enter).
    6. Type "nv 1 Userinit" to create a new value under the Winlogon key.
    7. Type "ev Userinit" to edit the new value, and when prompted, type "C:\windows\system32\userinit.exe" (enter).
    8. Type 'q' to quit the registry editor, and as you back out of the system, follow the directions to write the hive back to disk.
    9. Restart your computer and log in - problem solved.

    (generic 'warning: back up your registry' disclaimer)

    Read the article

  • Are there any Microsoft Exchange Clients for iOS and Android that store their local data in an encrypted manner?

    - by Zac B
    I don't feel like this is a product recommendation question, more of a "does this tech even exist and is it feasible" question, but if I'm wrong, feel free to give this question the boot. Context: Our company has a bunch of traveling employees who access the company's Exchange server via their iDevices or Android phones, but because of the data protection laws in the state where our company is based (and the nature of the data our company works with), a recent security audit found that all mobile devices (laptops, phones, etc) operated by our company need to have all company correspondence and related data encrypted all the time. For laptops, that was easy: BitLocker or TrueCrypt, problem solved. For phones and tablets, however, I'm stumped. Sure, you can put lock screens/passwords on the phones, but the data is still accessible via external extraction, as law enforcement authorities already know. Question: Are there any clients for Microsoft Exchange that run on iOS or Android which store local data encrypted? The people using our mobile devices do a lot of their work while offline, so just giving them OWA access with SSL connection security isn't enough. Are there apps/technologies that present an additional login credential prompt to decrypt locally stored data in the app's storage area on the phone? My gut reaction when I started looking into this was "that doesn't sound like something Apple would allow into the App Store", but I've been wrong before...

    Read the article

  • Group Policy GPO not 'seen' at client

    - by fukawi2
    I have a new OU (natorg.local\NATO\Users) that I am trying to apply GP to. I have created a new user in this OU, and linked the three GPOs to this OU: DESKTOP - Folder Redirection (AppData), DESKTOP - Folder Redirection (Desktop), and DESKTOP - Folder Redirection (Documents). Hopefully the names are sufficient to suggest what they do exactly. The settings are under User Settings, so there is no loopback processing required (if my understanding is correct). GP Modelling for the user and specific computer says that the GPOs will/should be applied; however, on the client, gpresult doesn't even appear to see the GPOs under either "Applied" or "Not Applied":

        USER SETTINGS
        --------------
            CN=Amir,OU=Users,OU=NATO,DC=natorg,DC=local
            Last time Group Policy was applied: 25/06/2012 at 11:07:13 AM
            Group Policy was applied from:      svr-addc-01.natorg.local
            Group Policy slow link threshold:   500 kbps

            Applied Group Policy Objects
            -----------------------------
                LAPTOPS - Power Settings
                WSUS - Set Server Address
                OUTLOOK - Auto Archive
                SECURITY - Lock Screen After Idle
                Default Domain Policy
                DESKTOP - Regional Settings
                NETWORK - Proxy Configuration
                NETWORK - IE General Config
                OFFICE - Trusted Locations
                OFFICE - Increase Privacy
                OUTLOOK - Disable Junk Filter
                DESKTOP - Disable Windows Error Reporting
                DESKTOP - Hide Language Bar
                NETWORK - Disable Skype
                DESKTOP - Disable Thumbs.db Creation
                WSUS - Set Server Address

            The following GPOs were not applied because they were filtered out
            -------------------------------------------------------------------
                Local Group Policy
                    Filtering:  Not Applied (Empty)
                NETWORK - Google Chrome Configuration
                    Filtering:  Not Applied (Empty)
                SYSTEM - Event Log Configuration
                    Filtering:  Not Applied (Empty)
                SECURITY - Local Administrator Password
                    Filtering:  Not Applied (Empty)
                NETWORK - Disable Windows Messenger
                    Filtering:  Not Applied (Empty)
                SECURITY - Audit Policy
                    Filtering:  Not Applied (Empty)
                WSUS - Automatic Install
                    Filtering:  Not Applied (Empty)
                NETWORK - Firewall Configuration
                    Filtering:  Not Applied (Empty)
                DESKTOP - Enable Offline Files
                    Filtering:  Not Applied (Empty)

    I haven't altered permissions on the GPOs at all, and there is no WMI filtering... As I said, GP Modelling says that they should be applied. gpresult on the client correctly identifies itself as being in the correct OU (CN=Amir,OU=Users,OU=NATO,DC=natorg,DC=local). There are 2 x 2008 R2 DCs and one 2003 DC, the domain is at 2003 level, and the client is Windows XP SP3. Can anyone suggest why these GP Objects would be "invisible" to the client?

    Read the article

  • Member Status: Inquorate in RHEL 5.6

    - by Eugene S
    I've encountered a strange issue. I had to change the time on my Linux RHEL cluster system. I did it using the following command as the root user: date +%T -s "10:13:13". After doing this, some message appeared relating to <emerg> #1: Quorum Dissolved (however I didn't capture the message completely). In order to investigate the issue I looked at /var/log/messages and discovered these errors. Below is the output of a few commands I ran while trying to investigate the issue; however, I don't have enough knowledge to make use of this information.

        [root@system1a ~]# clustat
        Cluster Status for system4081 @ Sun Mar 25 11:45:48 2012
        Member Status: Inquorate

         Member Name          ID   Status
         ------ ----          ---- ------
         chb_sys1a            1    Online, Local
         chb_sys2a            2    Offline

        [root@system1a ~]# cman_tool nodes
        Node  Sts   Inc   Joined               Name
           1   M    872   2012-03-25 08:43:07  chb_sys1a
           2   X      0                        chb_sys2a

        [root@system1a ~]# qdiskd -f -d
        [17654] debug: Loading configuration information
        [17654] debug: 0 heuristics loaded
        [17654] debug: Quorum Daemon: 0 heuristics, 1 interval, 10 tko, 0 votes
        [17654] debug: Run Flags: 00000035
        [17654] info: Quorum Daemon Initializing
        stat: Bad address
        [17654] crit: Initialization failed

    I tried searching the internet and found a quite similar issue here. However, for some reason I am not able to access the bug on Bugzilla. The link to the bug is here

    Read the article

  • Is there a better way than this to find out if an IsolatedStorage file exists or not?

    - by Edward Tanguay
    I'm using IsolatedStorage in a Silverlight application for caching, so I need to know whether the file exists or not, which I currently do with the following method. I couldn't find a FileExists method for IsolatedStorage, so I'm just catching the exception, but it seems to be a quite general exception, and I'm concerned it will catch more than just the case where the file doesn't exist. Is there a better way to find out if a file exists in IsolatedStorage than this:

        public static string LoadTextFromIsolatedStorageFile(string fileName)
        {
            string text = String.Empty;
            using (IsolatedStorageFile isf = IsolatedStorageFile.GetUserStoreForApplication())
            {
                try
                {
                    using (IsolatedStorageFileStream isfs = new IsolatedStorageFileStream(fileName, FileMode.Open, isf))
                    {
                        using (StreamReader sr = new StreamReader(isfs))
                        {
                            string lineOfData = String.Empty;
                            while ((lineOfData = sr.ReadLine()) != null)
                                text += lineOfData;
                        }
                    }
                    return text;
                }
                catch (IsolatedStorageException ex)
                {
                    return "";
                }
            }
        }
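
    A minimal sketch of the exception-free alternative, assuming IsolatedStorageFile.FileExists is available in the targeted Silverlight runtime:

        // Sketch: check for the file up front instead of catching IsolatedStorageException.
        public static string LoadTextFromIsolatedStorageFile(string fileName)
        {
            using (IsolatedStorageFile isf = IsolatedStorageFile.GetUserStoreForApplication())
            {
                if (!isf.FileExists(fileName))
                    return String.Empty;

                using (IsolatedStorageFileStream isfs = new IsolatedStorageFileStream(fileName, FileMode.Open, isf))
                using (StreamReader sr = new StreamReader(isfs))
                {
                    // ReadToEnd returns the whole file in one call; the original
                    // concatenation of ReadLine results also drops line breaks.
                    return sr.ReadToEnd();
                }
            }
        }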

    Read the article

  • Does Hyper-V support SCSI Pass-through discs in a Server 2003 R2 VM?

    - by Peter Bernier
    I'm running into some difficulties getting pass-through disks to be accessible to a Server 2003 R2 virtual machine under Hyper-V. Host OS: Server 2008 R2 (full) with the Hyper-V role. Guest OS: Server 2003 R2 (Windows Home Server). The guest's OS disk is a pass-through disk on the IDE controller (not the best solution, but I can live with it). My storage disks will be pass-through disks on the SCSI controller. I'm able to see all of the disks that I'll be using for the VM on the host without issue. The problem that I'm having is that I can't seem to get the guest OS to 'see' the storage drives (as pass-through disks on the SCSI controller). Here's what I'm doing: On the host, the storage drive is set to 'Offline' just like the OS disk (this is required for pass-through to work). In the VM, the storage drive is on the SCSI controller. Hyper-V Integration Tools are installed in the guest. That's as far as I'm able to get. I don't see the drive in Computer Management, or in Windows Explorer (I've tried with an unformatted disk, as well as after formatting a partition). I am able to see a removable device that lists the disk's model number in the guest, but I can't seem to access the storage. (I get an entry in Device Manager that needs drivers, but nothing on the Integration Tools disc works.) Troubleshooting steps I've tried: If I put the pass-through drive on the IDE controller, I can see it in the guest. If I put the storage drive 'Online' on the host and create a VHD on it on the SCSI controller, I can see it in the guest. I suppose I could create a fixed-size VHD that consumes the entire disk, but I'd rather not have that overhead. I've also extracted the contents of the Integration Tools drivers (x86 and amd64) and tried pointing the disk controller to each of those, with no luck. Can anyone offer suggestions as to how I can get this to work properly?

    Read the article

  • Mirror a RAID0 volume

    - by Ghostrider
    I have two SSDs running in RAID0. The capacity and speed are just great. I use Windows Home Server to do incremental daily backups. This is fine and well, and I've successfully restored from these backups. However, when one of the disks physically died, I was stuck without a working system until the replacement arrived so that I could restore the array from backup. WHS restoration takes about 5 hours, which basically means that I lose an entire day in the process. Is it possible to set up a kind of recovery volume for the RAID array? Use a single mechanical HDD that would be updated with an exact clone of the RAID array on a daily basis. This way, if the array goes offline for some reason, I can just boot from the mechanical HDD, lose some perf, but still be able to work. The machine in question runs Windows 7. Creating RAID01 is not an option because of the high price of the SSDs and the fact that it still doesn't protect against failure of the RAID controller. Is there any way this can be set up?

    Read the article

  • AJAX call in for loop won't return values to correct array positions

    - by Heilemann
    I need to get a range of pages using AJAX and put them into an array, where their given place in the array is equal to the i of a for loop (it's a caching-like function for blog pages, and the range of the for loop is entirely variable). I'm doing something akin to the following:

        var bongo = new Array();

        for (i = 0; i < 10; i++) {
            jQuery.ajax({
                type: "GET",
                url: "http://localhost",
                data: queryString,
                success: function(request) {
                    bongo[i] = request;
                }
            });
        }

    The problem is that unless I add async: false to the .ajax options (which would make it... SJAX?) - which causes the requests to basically pause the browser, going against what I'm trying to do - the i in the success callback will always end up being 11, whereas I of course want it to pour the returned data into each slot of the array from 0 to 10. I've tried replacing the assignment line with this:

        bongo[i] = jQuery.ajax({
            type: "GET",
            url: "http://localhost",
            data: queryString
        }).responseText

    But that made no difference.

    Read the article

  • Timezones and the DateTimeField - Django

    - by RadiantHex
    Hi folks, I'm trying to implement a "time ago" feature for displaying items on a site. As I'm caching the pages, I wish to use JavaScript in order to render the "time ago". JavaScript knows the local time and probably the timezone of the local machine, so I could play with that, but that would require hard-coding the server's timezone. Therefore I'm trying to figure out a simple way to pass an ISO 8601 timestamp, in GMT time. Is there any simple and straightforward way of doing this? Help would be much appreciated! =)

    Read the article

  • Ajax cache control

    - by Brian
    Hello, I am having a problem with AJAX requests in Internet Explorer and in Chrome - I cannot bust the cache. Normal pages don't have the problem - it's just the AJAX requests. I know that one workaround is to append a random query string variable to the end of the URL. However, I don't want to lose all the benefits of caching; I just want the browser to pick up the new file if the version on the server is different from the cached version. I have tried manually setting the AJAX POST header, to no avail:

        xmlHttp.setRequestHeader("Cache-Control", "must-revalidate");

    Adding this to my .htaccess file doesn't work either:

        <FilesMatch "\.(js|css).*">
            Header set Cache-Control: "max-age=172800, public, must-revalidate"
        </FilesMatch>

    Any help would be greatly appreciated. Thanks, Brian

    Read the article

  • Why is my asp:Substitution control suddenly not working in ASP.NET 4.0?

    - by Steve Wortham
    I just upgraded my site from ASP.NET 3.5 to 4.0. I've been working through some breaking changes and there were more than I expected. One I can't figure out, however, is why my <asp:Substitution /> control suddenly stopped working like it should. It's supposed to ignore the output cache settings of the parent page and update upon every request. For some reason that isn't happening. It's caching for the full 10 minutes (the OutputCache setting for my home page). Any ideas?
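
    For reference, the Substitution control expects a static callback with the signature below, wired up in markup as <asp:Substitution runat="server" MethodName="GetCurrentTime" /> on a page that declares OutputCache. This is a minimal sketch; the method name is illustrative, not taken from the poster's site:

        // Intended to be called on every request, even when the rest of the page
        // is served from the output cache - the behavior the question says has
        // stopped working after the 4.0 upgrade.
        public static string GetCurrentTime(HttpContext context)
        {
            return DateTime.Now.ToString("T");
        }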

    Read the article

  • Grails / GORM, read-only cache and transient fields

    - by Stephen Swensen
    Suppose I have the following domain object, mapping to a legacy table, utilizing a read-only second-level cache, and having a transient field:

        class DomainObject {
            static def transients = ['userId']

            Long id
            Long userId

            static mapping = {
                cache usage: 'read-only'
                table 'SOME_TABLE'
            }
        }

    I have a problem: references to DomainObject instances seem to be shared due to the caching, and thus transient fields are writing over each other. For example,

        def r1 = DomainObject.get(1)
        r1.userId = 22
        def r2 = DomainObject.get(1)
        r2.userId = 34
        assert r1.userId == 34

    That is, r1 and r2 are references to the same instance. This is undesirable; I would like to cache the table data without sharing references. Any ideas?

    Read the article

  • Moving domain and keeping IMAP email - Linux Evolution, Mac Mail

    - by Douglas Squirrel
    This question is about keeping email during a server move, where the clients are Linux (me) and Mac (my wife) using IMAP. I receive email at [email protected] using a webmail service that my hosting company (1and1) provides. I read it via IMAP in evolution, so I should have copies of all the emails on my local machine. I have just moved mydomain.com from one type of account to another, and the hosting company don't move my existing email on the server when I do this - I assume they move my account to a different mailserver, and don't choose to provide a migration path for the email to move too (yes, this is annoying). Before migrating, I backed up Evolution (File - Backup settings) and did a spot-check in the evolution-backup.tar.gz file to be sure that my mail was in there. After migrating, I restored (File - Restore settings) and had hoped that I would see all my mail again. Unfortunately, Evolution just shows me new mail sent to the account, not the old mail. Is there a way to get the old mail back in the mailserver, or at least displaying in Evolution, as it was before the move? If not, can I read it in some convenient way, e.g. in Evolution offline or in a text file (then I can pick the mails I really want to keep and resend them to myself)? Also, I am about to do a similar move for my wife's domain, [email protected]. She reads her mail on a Mac using IMAP to Apple Mail. Is there anything I can do to make the move smooth for her? (I have backed up [her user]/Library/Mail already, but not sure what to do once the move is done.)

    Read the article

  • Installing FIREFOX with extensions/addons manually? (not really auto install)

    - by BrownChiLD
    I've been reading around with regards to creating Firefox installers, bundling it w/ addons, using scripts and CLI lines, and a whole bunch of stuff... but it seems that going through this route is just too complicated and time-consuming. Since I don't mind a bit of manually copying files and such, I was planning to do the following: on my test machine, 1) install Firefox on a machine AND configure it the way I want it, 2) install addons AND set the configurations for them, 3) set advanced configurations for Firefox (about:config). Then once I'm all set, I simply copy the contents of the Firefox profiles folder (for this particular test it's ...\AppData\Local\Mozilla\Firefox\Profiles\6m0mef0s.default). For deployment, all I have to do is: 1) install the same version (offline installer) of the Firefox I used, 2) overwrite the contents of the new profiles folder (randomly named by the Firefox installer as usual). This should set all my configs and addons, right? Or what other folders do I have to back up and copy manually into the new profiles folder? I don't think I need to tinker w/ any registries, right? Anyway, if this works, though it's a bit manual, it's a whole lot simpler and more straightforward than fiddling w/ installers and packages etc. PS: I do this a lot w/ other simple (and some complex) software that I use and it has worked fine for years... I'm just not sure about Firefox and how it's structured.

    Read the article

  • Change which server mailbox is associated with in Exchange 2007

    - by tacos_tacos_tacos
    I have restored and mounted an EDB file onto a new Exchange 2007 server. However, the old server is still online, and although all the mailboxes I need are in the newly mounted database, Exchange 2007 System Manager still shows the mailbox as associated with the old server. If I try to "Move" the database, it actually tries to copy the files from the old server to the new server, which is not necessary because they are already there - and it produces an error about the mailbox on the destination already existing. How can I simply tell Exchange (AD?) to use the new server to find the mailbox rather than the old? Edit: I did the restore by taking the old server offline (turning off all Exchange services), copying the EDB file to the new server, restoring it with eseutil, and mounting it on the new server. I did it this way in part because I didn't know a better way and in part because I couldn't use move-mailbox, as the source location had a horrible Internet connection (which is why Exchange is being moved to the new location). I had to copy the EDB from the old server to a hard disk, go somewhere with a better Internet connection, and upload the EDB to the new server.

    Read the article

  • using second level cache vs pushing objects into the session

    - by AhmetC
    I have some big entities which are frequently accessed in the same session. For example, in my application there is a reporting page which consists of dynamically generated chart images. For each chart image on the page, the client makes a request to the corresponding controller, and the controller generates the image using some entities. I can either use ASP.NET's session dictionary for "caching" those entities or rely on NHibernate's second-level cache support, using cached queries for example. What is your opinion? By the way, I will be using shared hosting - is the second-level cache shared-hosting friendly? Thanks.
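
    A minimal sketch of the cached-query option, assuming the second-level cache and query cache are enabled in the session factory configuration; the Report entity and property names are hypothetical:

        // Hypothetical query - shown only to illustrate NHibernate's cached-query API.
        // 'session' is an open NHibernate ISession; the query cache must be enabled in config.
        IList<Report> reports = session
            .CreateQuery("from Report r where r.Year = :year")
            .SetInt32("year", 2010)
            .SetCacheable(true)           // cache the result set in the query cache
            .SetCacheRegion("reports")    // optional named cache region
            .List<Report>();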

    Read the article

  • Upgrading PEAR from 1.9.0 to 1.9.1 fails

    - by Skelton
    Hi All, I'm willing to install phpunit 5.3 with MAMP 1.9, and therefore I need to upgrade PEAR to version 1.9.1. The current version installed is 1.9.0. When I try the upgrade I get the following:

        sudo pear channel-update pear.php.net
        sudo pear upgrade pear

        Could not get contents of package "/Applications/MAMP/bin/php5.3/bin/pear". Invalid tgz file.
        upgrade failed

    When I force the upgrade it still doesn't work:

        sudo pear upgrade --force PEAR

        downloading PEAR-1.9.1.tgz ...
        Starting to download PEAR-1.9.1.tgz (293,587 bytes)
        .............................................................done: 293,587 bytes
        upgrade ok: channel://pear.php.net/PEAR-1.9.1
        PEAR: Optional feature webinstaller available (PEAR's web-based installer)
        PEAR: Optional feature gtkinstaller available (PEAR's PHP-GTK-based installer)
        PEAR: Optional feature gtk2installer available (PEAR's PHP-GTK2-based installer)
        PEAR: To install optional features use "pear install pear/PEAR#featurename"

        sudo pear -V
        PEAR Version: 1.9.0

    As bindbn suggested:

        sudo pear install --offline /Users/tom/Downloads/PEAR-1.9.1.tgz

        Ignoring installed package pear/PEAR
        Nothing to install

        sudo pear upgrade --force --alldeps PEAR

        downloading PEAR-1.9.1.tgz ...
        Starting to download PEAR-1.9.1.tgz (293,587 bytes)
        .............................................................done: 293,587 bytes
        upgrade ok: channel://pear.php.net/PEAR-1.9.1
        PEAR: Optional feature webinstaller available (PEAR's web-based installer)
        PEAR: Optional feature gtkinstaller available (PEAR's PHP-GTK-based installer)
        PEAR: Optional feature gtk2installer available (PEAR's PHP-GTK2-based installer)
        PEAR: To install optional features use "pear install pear/PEAR#featurename"

        pear -V
        PEAR Version: 1.9.0

    I hope someone can figure this out! Thanks!

    Read the article

  • Hook Response.Cache to memcache

    - by dvr
    Has anyone done this before? I have a 32-bit Win 2003 server running .NET 2.0 and have read the MS engineers' blog about min(60%, 1800mb) for cache limits, and our site (ASP.NET 2.0 / 3.5) is caching a lot. It throws OutOfMemory exceptions when the worker process is around 1.3 GB (unfortunately it is the 2.0 apps), and I would like to push a lot over to memcache, but I'm worried because at the moment the site is efficient using Response.Cache as is (though memory is an issue). I want to move most items over to memcache and have concerns on a - how to do this (implementation of Response.Cache to read/write from memcache) and b - what will performance be like? Before I commit to doing this and possibly spending a few days running tests, I would like to hear from you whether this has been done already and get some feedback. (And please don't tell me to buy an x64 machine - I have already requested this!) By the way, I ran a test requesting a single image 1000 times, and Response.Cache was over 50% quicker than using the application cache. Does Response.Cache bypass the page lifecycle?
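
    A rough sketch of the kind of wrapper concern (a) describes, assuming the Enyim memcached client is configured in web.config; the class, method, and key names are hypothetical, and this caches rendered fragments rather than hooking Response.Cache itself:

        using System;
        using Enyim.Caching;
        using Enyim.Caching.Memcached;

        // Hypothetical fragment cache backed by memcached (Enyim client).
        // A sketch of option (a), not a drop-in replacement for Response.Cache.
        public static class FragmentCache
        {
            private static readonly MemcachedClient Client = new MemcachedClient();

            public static string GetOrAdd(string key, Func<string> render, TimeSpan ttl)
            {
                // Return the cached fragment if memcached has it.
                var cached = Client.Get<string>(key);
                if (cached != null)
                    return cached;

                // Otherwise render the fragment and store it with a time-to-live.
                string html = render();
                Client.Store(StoreMode.Set, key, html, ttl);
                return html;
            }
        }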

    Read the article
