Search Results

Search found 90459 results on 3619 pages for 'server cache'.

  • Changing the Flash player cache location in Firefox

    - by Prasenjit Chatterjee
    I want to relocate the cache used by the Flash player plugin in Firefox, to save space on my C: drive when I watch YouTube videos. I successfully moved the Firefox cache by opening about:config and creating a new string key, "browser.cache.disk.parent_directory", whose value is the new cache location. But this doesn't affect streamed content such as YouTube videos. Please guide me: where does that data get stored, and how can I move its cache to another drive?
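    For reference, the about:config change described above can also be made persistent in a user.js file in the Firefox profile; a minimal sketch (the target path is hypothetical):

        // user.js sketch: relocate the Firefox disk cache (hypothetical path)
        user_pref("browser.cache.disk.parent_directory", "D:\\FirefoxCache");

    Note that this only moves the browser's own cache; the Flash plugin of that era typically buffered streamed video into the user's temporary directory (%TEMP%, as fla*.tmp files) rather than the Firefox cache, so pointing the TEMP environment variable at another drive is the more likely fix. This is an educated guess rather than something stated in the question.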

    Read the article

  • Local network cache of PHP and Apache2 on Win Server 2008 R2

    - by Ahmed Benlahsen
    Software configuration: I have a new server with Windows Server 2008 R2 installed via VMware, with Apache 2.2, PHP 5.2, and MySQL 5.5 installed as separate packages. Issue: on the first installation of my application, everything worked. After I updated some JS and CSS files and accessed the application again from a PC on the local network, I got the old JS and CSS versions; when I access the same application on the local server itself, I get the latest versions of those files. The link to my application on the local server is http://localhost/BADIL; from the local network it is http://LOCAL_SERVER_IP/BADIL. There must be a cache somewhere, but I don't know where; maybe in Windows Server 2008 R2, or in VMware? The question is: why does everything work fine when I access the application on the server, while I do not see the updated versions of the JS and CSS files when I access it from the local network?
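    If the stale files turn out to be cached by the browsers on the LAN clients rather than by anything on the server, one workaround is to have Apache tell clients to revalidate static assets; a minimal sketch, assuming mod_headers is enabled:

        # httpd.conf sketch: force revalidation of JS and CSS responses
        <FilesMatch "\.(js|css)$">
            Header set Cache-Control "no-cache, must-revalidate"
        </FilesMatch>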

    Read the article

  • Can't connect to Windows Server 2008 shared folders via VPN connection

    - by Pearl
    I set up a VPN connection on my 2008 server using RRAS. The VPN itself seems to work fine: I can connect from outside the network, and I am also able to establish a remote access connection via the VPN IP. However, I can't access my shared folders. After connecting to the VPN I can ping the server, but it is not shown in my networks, and using \\ip or \\server-name doesn't work either; the name cannot be found. I checked ipconfig, and this is what I found for the VPN adapter: DNS suffix: (empty); Description: test; Physical address: (empty); DHCP enabled: no; Auto-config: yes; IPv4 address: 192.168.2.114; Subnet mask: 255.255.255.255; Default gateway: (empty); DNS server: 192.168.0.1; NetBIOS: enabled. To clarify my IP situation: the server is connected to a router on 192.168.0.x, the test client is in an external network connected to a router on 192.168.1.x, and the server-client connection uses static IPs on 192.168.2.x. Can anyone help me with this one? The VPN should be OK, since I am able to establish remote access.
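    One way to narrow this down is to test SMB access by the server's LAN address once the VPN is up, which bypasses name resolution entirely; a hedged sketch (the IP and share name are placeholders):

        rem From the VPN client, test reachability and then map by IP
        ping 192.168.0.10
        net use Z: \\192.168.0.10\SharedFolder

    If mapping by IP works, the problem is name resolution (NetBIOS/DNS across the VPN) rather than share permissions.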

    Read the article

  • How to invalidate cache when benchmarking?

    - by Michael Buen
    I have this code; when I swap the order of UsingAs and UsingCast, their performance also swaps.

        using System;
        using System.Diagnostics;
        using System.Linq;
        using System.IO;

        class Test
        {
            const int Size = 30000000;

            static void Main()
            {
                object[] values = new MemoryStream[Size];
                UsingAs(values);
                UsingCast(values);
                Console.ReadLine();
            }

            static void UsingCast(object[] values)
            {
                Stopwatch sw = Stopwatch.StartNew();
                int sum = 0;
                foreach (object o in values)
                {
                    if (o is MemoryStream)
                    {
                        var m = (MemoryStream)o;
                        sum += (int)m.Length;
                    }
                }
                sw.Stop();
                Console.WriteLine("Cast: {0} : {1}", sum, (long)sw.ElapsedMilliseconds);
            }

            static void UsingAs(object[] values)
            {
                Stopwatch sw = Stopwatch.StartNew();
                int sum = 0;
                foreach (object o in values)
                {
                    if (o is MemoryStream)
                    {
                        var m = o as MemoryStream;
                        sum += (int)m.Length;
                    }
                }
                sw.Stop();
                Console.WriteLine("As: {0} : {1}", sum, (long)sw.ElapsedMilliseconds);
            }
        }

    Outputs: As: 0 : 322, then Cast: 0 : 281. When calling UsingCast(values); UsingAs(values); instead, the results become Cast: 0 : 322, then As: 0 : 281. When running just UsingAs(values); the result is As: 0 : 322, and when running just UsingCast(values); the result is Cast: 0 : 322. Aside from running them independently, how do I invalidate the cache so that the second method being benchmarked doesn't benefit from the memory cached by the first? Benchmarking aside, I just love the fact that modern processors do this caching magic :-)
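    A common mitigation, sketched below on the assumption that the skew comes from one-off JIT-compilation and cache-warming costs: run unmeasured warm-up passes of both methods first, then time fresh calls, so both measured runs start from the same warm state (this equalizes the cache rather than truly invalidating it).

        // Warm-up sketch: the first pair of calls absorbs JIT and cache-fill costs;
        // only the second pair's timings are reported.
        UsingAs(values);    // warm-up, discard output
        UsingCast(values);  // warm-up, discard output
        UsingAs(values);    // measured
        UsingCast(values);  // measured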

    Read the article

  • Issues configuring CUPS print server for Ubuntu Server 9.10

    - by Tone
    I have Ubuntu Server 9.10 installed, I want to make it a print server, and I am trying to reach the CUPS browser admin page from a Windows client machine. I installed CUPS (sudo apt-get install cups), then edited /etc/cups/cupsd.conf and tried several different Listen combinations:

        Listen 192.168.1.109:631        # the IP my router gives it
        Listen /var/run/cups/cups.sock  # already in the conf file
        Listen fileserver:631           # the hostname of the server
        Port 631                        # listen for all incoming requests on 631?

    Samba is also installed (which I think is necessary to share the printer out?), and finally I added my user to the lpadmin group: sudo adduser tone lpadmin. But when I try to navigate to any of the following, I get 403 Forbidden: http://fileserver:631/admin, http://fileserver:631, http://192.168.1.109:631/admin, http://192.168.1.109:631. What did I miss?
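    The 403 usually comes not from the Listen directives but from the Location ACLs in cupsd.conf, which default to allowing localhost only; a minimal sketch of the relevant blocks (exact defaults vary by CUPS version):

        # cupsd.conf sketch: let LAN hosts reach the web interface and admin pages
        <Location />
          Order allow,deny
          Allow @LOCAL
        </Location>
        <Location /admin>
          Order allow,deny
          Allow @LOCAL
        </Location>

    Restart CUPS after editing (sudo service cups restart) for the change to take effect.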

    Read the article

  • Enabling Session Directory under Terminal Server Configuration Tool and Server Settings

    - by LPE
    Yello, I'm trying to add a Terminal Server Session Directory client to an already fully functional Session Directory cluster, which today runs two clients as well as the server. I've been reading up on Google, Microsoft KBs, and old documentation from a former employee, but to no avail. The step I'm stuck at: when I open the Terminal Server Configuration tool (tscc.msc) and choose Server Settings, there should be an option saying "Session Directory" on the right-hand side, along with Active Desktop, Licensing, and so on, but it's not there. I've logged on to both of the already functional clients and checked the same list, and there the Session Directory option is both visible and working with the specified information. This picture shows the same view I'm looking at, except that mine is missing the bottom option that says "Session Directory": http://www.inetnj.com/doc/images/TerminalServerConfiguration.jpg. Any help would be greatly appreciated. Regards, LPE

    Read the article

  • How to ignore query parameters in web cache?

    - by eduardocereto
    Google Analytics uses some query parameters to identify campaigns and to do cookie control, all handled by JavaScript code. Take a look at the following example: http://www.example.com/?utm_source=newsletter&utm_medium=email&utm_term=October%2B2008&utm_campaign=promotion. This sets cookies via JavaScript with the right campaign origin. These query parameters can have multiple, and sometimes random, values, and since they are used as cache hash keys, cache performance is heavily degraded in some scenarios. I suppose there is a not-so-hard configuration on cache servers to simply ignore all query parameters, or specific ones. Am I right? Does anyone know how hard this is to set up in popular web cache solutions? I'm not interested in one specific web cache solution; it would be great to hear about the one you use.
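    As one data point, in a cache scripted with VCL (such as Varnish) this is a few lines of URL rewriting before the cache key is computed; a hedged sketch:

        # vcl_recv sketch: strip utm_* parameters so they don't fragment the cache
        sub vcl_recv {
            set req.url = regsuball(req.url, "([?&])utm_(source|medium|term|content|campaign)=[^&]*", "\1");
            set req.url = regsuball(req.url, "&&+", "&");
            set req.url = regsub(req.url, "[?&]+$", "");
        }

    The second and third lines tidy up separators left behind by the first; the same idea applies to other caches through their own rewrite mechanics.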

    Read the article

  • SQLAuthority News – Storage and SQL Server Capacity Planning and Configuration – SharePoint Server 2010

    - by pinaldave
    Just a day ago I was asked how to plan SQL Server storage capacity. Here is an excellent article published by Microsoft regarding SQL Server capacity planning for SharePoint 2010; it touches all the vital areas of this subject. Here are its bullet points: gather storage and SQL Server space and I/O requirements; choose SQL Server version and edition; design the storage architecture based on capacity and I/O requirements; determine memory requirements; understand network topology requirements; configure SQL Server; validate storage performance and reliability. Read the original article published by Microsoft here: Storage and SQL Server Capacity Planning and Configuration – SharePoint Server 2010. A question to all SharePoint developers and administrators: do you use whitepapers and articles like this to decide on capacity, or do you just start with the application and plan the storage as you progress? Please let me know your opinion. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: SQL, SQL Authority, SQL Data Storage, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, SQLAuthority News, T SQL, Technology Tagged: SharePoint

    Read the article

  • Server Core: Best Practice for Applications on Windows Server

    - by The Official Microsoft IIS Site
    I have been talking with a number of customers, CSOs, CIOs, and industry professionals over the past few weeks, and I realized that the availability and benefits of the Server Core option of Windows Server 2008 and Windows Server 2008 R2 are not as widely known as I think they should be. Windows Server Core provides a minimal installation environment for running specific server roles, which reduces the maintenance and management requirements and the attack surface for those server roles. The following...(read more)

    Read the article

  • SQL SERVER – How to Install SQL Server 2014 – A 99 Seconds Video

    - by Pinal Dave
    Last month I presented at 3 community and 5 corporate events. Every single time, I was asked about my experience with SQL Server 2014, and every single time I told the audience that they should try it out themselves; however, the response has been very lukewarm. Everybody wants to know how SQL Server 2014 works, but no one wants to try it themselves. Upon asking why users are not installing SQL Server 2014, I received pretty much the same answer from everyone: "the fear of the unknown". People who have not installed SQL Server 2014 are not sure how the installation process works, and worry about what happens if they face an issue while installing it. If you have installed an earlier version of SQL Server, installing SQL Server 2014 is a very easy process. I have created a quick 99-second video where I explain how we can easily install SQL Server 2014; it is a straightforward default installation. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Video

    Read the article

  • SQL SERVER – OLEDB – Link Server – Wait Type – Day 23 of 28

    - by pinaldave
    When I decided to start writing about this wait type, the very first question that came to my mind was, "What does 'OLEDB' stand for?" A quick search on Wikipedia tells me that OLEDB means Object Linking and Embedding Database. (How many of you knew this?) Anyway, I found it very interesting that this wait type was among the top 10 wait types in many of the systems I have come across in my performance tuning experience. Books On-Line: OLEDB occurs when SQL Server calls the SQL Server Native Client OLE DB Provider. This wait type is not used for synchronization; instead, it indicates the duration of calls to the OLE DB provider. OLEDB explanation: this wait type primarily occurs when a linked server or remote query is executed. The most common case in which this wait type is visible is during the execution of a linked server query: when SQL Server retrieves data from the remote server, it uses the OLEDB API, and it is possible that the remote system is not quick enough, or the connection between them is not fast enough, leading SQL Server to wait for the results to return from the remote (or external) server. That waiting time is when the OLEDB wait type accrues. Reducing OLEDB waits: check the linked server configuration, and check disk-related Perfmon counters: Average Disk sec/Read (a value consistently higher than 4-8 milliseconds is not good), Average Disk sec/Write (a value consistently higher than 4-8 milliseconds is not good), and Average Disk Read/Write Queue Length (a value consistently higher than your benchmark is not good). At this point in time, I am not able to think of any more ways of reducing this wait type. Do you have any opinion about this subject? Please share it here, and I will share your comment with the rest of the community, of course with due credit unto you. Please read all the posts in the Wait Types and Queue series. Note: the information presented here is from my experience, and I make no claim that it is accurate. I suggest reading Books On-Line for further clarification. All the discussion of wait stats in this blog is generic and varies from system to system. It is recommended that you test this on a development server before implementing it on a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
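    To see how much this wait type matters on a given instance, you can query the wait statistics DMV directly; a simple sketch (SQL Server 2005 and later):

        -- How much time has this instance spent waiting on OLEDB?
        SELECT wait_type, waiting_tasks_count, wait_time_ms
        FROM sys.dm_os_wait_stats
        WHERE wait_type = 'OLEDB';

    Note that the counters are cumulative since the last service restart (or manual clear), so compare snapshots over an interval rather than reading the raw totals.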

    Read the article

  • Throttling Cache Events

    - by dxfelcey
    The real-time eventing feature in Coherence is great for relaying state changes to other systems or to users. However, sometimes not all changes need to, or can, be sent to consumers. For instance: (1) rapid changes may not be consumable or interpretable as fast as they are sent; a user looking at changing stock prices may only be able to interpret and react to one change per second. (2) A client may be using a low-bandwidth connection, so rapidly sending events will only result in them being queued and delayed. (3) A large number of clients may need to be notified of state changes, and sending 100 events p/s to 1000 clients cannot be supported with the available hardware, while 10 events p/s to 1000 clients can (note this example assumes that many of the state changes are to the same value). One simple approach to throttling Coherence cache events is to use a cache store to capture changes to one cache (the data cache) and insert those changes periodically into another cache (the events cache). Consumers interested in state changes to entries in the first cache register an interest (an event listener) against the second, events cache. With the cache store write-behind feature, rapid updates to the same cache entry are coalesced, so that updates are merged and written to the events cache at the configured interval. The time interval at which changes are written to the events cache can easily be configured using the write-behind delay time in the cache configuration, as shown below.

        <caching-schemes>
          <distributed-scheme>
            <scheme-name>CustomDistributedCacheScheme</scheme-name>
            <service-name>CustomDistributedCacheService</service-name>
            <thread-count>1</thread-count>
            <backing-map-scheme>
              <read-write-backing-map-scheme>
                <scheme-name>CustomRWBackingMapScheme</scheme-name>
                <internal-cache-scheme>
                  <local-scheme />
                </internal-cache-scheme>
                <cachestore-scheme>
                  <class-scheme>
                    <scheme-name>CustomCacheStoreScheme</scheme-name>
                    <class-name>com.oracle.coherence.test.CustomCacheStore</class-name>
                    <init-params>
                      <init-param>
                        <param-type>java.lang.String</param-type>
                        <param-value>{cache-name}</param-value>
                      </init-param>
                      <init-param>
                        <param-type>java.lang.String</param-type>
                        <!-- The name of the cache to write events to -->
                        <param-value>cqc-test</param-value>
                      </init-param>
                    </init-params>
                  </class-scheme>
                </cachestore-scheme>
                <write-delay>1s</write-delay>
                <write-batch-factor>0</write-batch-factor>
              </read-write-backing-map-scheme>
            </backing-map-scheme>
            <autostart>true</autostart>
          </distributed-scheme>
        </caching-schemes>

    The cache store implementation to perform this throttling is trivial and only involves overriding the basic cache store functions.
        public class CustomCacheStore implements CacheStore {
            private String publishingCacheName;
            private String sourceCacheName;

            public CustomCacheStore(String sourceCacheName, String publishingCacheName) {
                this.publishingCacheName = publishingCacheName;
                this.sourceCacheName = sourceCacheName;
            }

            @Override
            public Object load(Object key) {
                // Write-only store: loads always miss
                return null;
            }

            @Override
            public Map loadAll(Collection keyCollection) {
                return null;
            }

            @Override
            public void erase(Object key) {
                // Compare names with equals() to avoid publishing a cache into itself
                if (!sourceCacheName.equals(publishingCacheName)) {
                    CacheFactory.getCache(publishingCacheName).remove(key);
                    CacheFactory.log("Erasing entry: " + key, CacheFactory.LOG_DEBUG);
                }
            }

            @Override
            public void eraseAll(Collection keyCollection) {
                if (!sourceCacheName.equals(publishingCacheName)) {
                    for (Object key : keyCollection) {
                        CacheFactory.getCache(publishingCacheName).remove(key);
                        CacheFactory.log("Erasing collection entry: " + key, CacheFactory.LOG_DEBUG);
                    }
                }
            }

            @Override
            public void store(Object key, Object value) {
                if (!sourceCacheName.equals(publishingCacheName)) {
                    CacheFactory.getCache(publishingCacheName).put(key, value);
                    CacheFactory.log("Storing entry (key=value): " + key + "=" + value, CacheFactory.LOG_DEBUG);
                }
            }

            @Override
            public void storeAll(Map entryMap) {
                if (!sourceCacheName.equals(publishingCacheName)) {
                    CacheFactory.getCache(publishingCacheName).putAll(entryMap);
                    CacheFactory.log("Storing entries: " + entryMap, CacheFactory.LOG_DEBUG);
                }
            }
        }

    As you can see, each cache store operation on the data cache results in a similar operation on the events cache. This is a very simple pattern with a lot of additional possibilities, but it also has a few drawbacks you should be aware of: (1) this event throttling implementation uses additional memory, as a duplicate copy of the entries held in the data cache needs to be held in the events cache too (twice over if the events cache has backups); (2) a data cache may already use a cache store, in which case a "multiplexing cache store" pattern must also be used to send changes to both the existing cache store and the throttling one. If you would like to try out this throttling example you can download it here. I hope it's useful, and let me know if you spot any further optimizations.

    Read the article

  • SQL Server 2000 tables

    - by user40766
    We currently have a SQL Server 2000 database with one table containing data for multiple users. The data is keyed by memberid, an integer field, and the table has a clustered index on memberid. The table is now about 200 million rows, and indexing and maintenance are becoming issues. We are debating splitting the table into one table per user, which would imply a very large number of tables, potentially up to 2,147,483,647 considering just positive memberid values. My questions: does anyone have experience with a SQL Server (2000/2005) installation with millions of tables? What are the implications of this architecture with regard to maintenance and access using Query Analyzer, Enterprise Manager, etc.? What are the implications of having such a large number of indexes in a database instance? All comments are appreciated. Thanks

    Read the article

  • How do I install an HP home-use printer on Windows Home Server (Windows Server 2003)

    - by Rob Allen
    I have an HP DeskJet F4210 printer that I would like to share on my network via Windows Home Server. Unfortunately, the driver installer checks for supported OSes, detects Home Server as Windows Server 2003, and exits. The installer supports Windows XP, 2000, Vista, and 98 SE; in theory, drivers for XP or Windows 2000 should work fine with Home Server. When using the "Install Printer" tool in Home Server, I am only able to select .inf files (there are several on the install media), but the driver folders for XP and 2000 contain .sys and .dll files. How can I bypass HP's short-sighted install program and get this printer up and running on Home Server? I'll be happy with basic print functionality and will save the task of enabling scanning for another time.

    Read the article

  • SQL Server 2008 install fails with an error reading etwcls.mof

    - by YonahW
    I receive the following error when trying to install SQL Server 2008 Standard on a Windows Server 2008 box: "Error reading from file D:\x64\setup\sql_engine_core_inst_msi\PFiles\SqlServr\MSSQL.X\MSSQL\Binn\etwcls.mof. Verify that the file exists and that you can access it." When searching the web I only find information about compiling this file, not about reading it. The file exists in the requested location, and I have run the WMIDiag tool without it reporting any issues. I am not sure what else I can do to solve this, and I can't seem to find anything online about it. Cross-posted at: http://social.msdn.microsoft.com/Forums/en-US/sqlsetupandupgrade/thread/ae47c277-e822-49c1-89b8-701e23702633

    Read the article

  • Advantages of multiple SQL Server files with a single RAID array

    - by Dr Giles M
    Originally posted on Stack Overflow, but re-worded. Imagine the scenario: for a database I have RAID arrays R: (MDF) and T: (transaction log), and of course shared, transparent usage of X: (tempdb). I've been reading around and get the impression that if you are using RAID, then adding multiple SQL Server NDF files sitting on R: within a filegroup won't yield any further improvement; adding another RAID array S: and putting an NDF file on it, of course, would. However, being a reasonably savvy software person, it's not unthinkable to hypothesise that, even for smaller MDFs sitting on one RAID array, SQL Server performs growth and locking operations (for writes) on the MDF, so adding NDFs to the filegroup, even if they sat on R:, would distribute the locking and growth operations and allow more throughput. Or does the time taken to reconstruct the data from distributed filegroups outweigh the benefit of reduced locking? I'm also aware that the behaviour and benefits may differ for tables, indexes, and logs. Is there a good site that distinguishes the benefits of multiple files when RAID is already in place?
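    For anyone experimenting with this, adding a data file to an existing filegroup is a single statement; a sketch with hypothetical database, file, and path names:

        -- Add a second data file on the same array so allocations spread across files
        ALTER DATABASE MyDb
        ADD FILE (NAME = MyDb_Data2,
                  FILENAME = 'R:\Data\MyDb_Data2.ndf',
                  SIZE = 10GB, FILEGROWTH = 1GB)
        TO FILEGROUP [PRIMARY];

    Pre-sizing the file (rather than relying on autogrowth) avoids the growth operations the question is concerned about.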

    Read the article

  • Win 2008 R2 Server Not Recognizing Second Hard Drive

    - by Brian
    Hello, I just purchased a Dell server which has two hard drives and no RAID setup. I can currently see only one hard drive, and I'm not sure how to get the other one recognized; I assumed that on a new machine this wouldn't be an issue. It is running Windows Server 2008 R2, which I loaded on myself. I'm a n00b at all of this, so I'm not sure why it's failing to work. Any help appreciated. Thanks.
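    A likely explanation is that the second disk is simply offline or uninitialized: it should show up in Disk Management (diskmgmt.msc), where it can be brought online, initialized, and formatted. The same can be done from an elevated prompt with diskpart; a hedged sketch (the disk number, filesystem, and drive letter are assumptions):

        rem Inside diskpart, run line by line:
        list disk
        select disk 1
        online disk
        attributes disk clear readonly
        create partition primary
        format fs=ntfs quick
        assign letter=E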

    Read the article

  • Network transfer from host to VM very slow - VMWare Server & Windows 2003 Server

    - by barfoon
    Hey everyone, I'm trying to transfer a file from a Windows 7 host running VMware Server to a Windows Server 2003 VM, and it's painfully slow. I've tried adding and adjusting registry keys and settings found in KB articles, and still nothing. I've tried these: http://support.microsoft.com/kb/898468 and http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1619. VMware Tools are installed. Any ideas? Thanks,
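    One more thing worth trying, beyond those two articles: disabling TCP auto-tuning and chimney offload on the Windows 7 host, which often helps slow host-to-VM transfers; a hedged sketch (run from an elevated prompt; results vary by NIC driver):

        rem Disable receive window auto-tuning and TCP chimney offload
        netsh interface tcp set global autotuninglevel=disabled
        netsh interface tcp set global chimney=disabled

    Both settings can be restored with autotuninglevel=normal and chimney=automatic if they make no difference.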

    Read the article

  • SQL Server Column Level Encryption - Rotating Keys

    - by BarDev
    We are thinking about using SQL Server column-level (cell-level) encryption for sensitive data. There should be no problem when we initially encrypt the column, but we have a requirement that the encryption key change every year, and it seems this requirement may be a problem. Assumption: the table that includes the column with sensitive data will have 500 million records. Below are the steps we have thought about implementing: initially, encrypt the column; each new year, decrypt the column and encrypt it with the new key. Questions: while the column is being decrypted/encrypted, is the data online (available to query), and how long would this process take? Does SQL Server provide a feature that allows key changes while the data stays online? BarDev
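    For the rotation step, one approach is a batched job that decrypts each value with the old key and re-encrypts it with the new one, touching a limited number of rows per transaction so the rest of the table stays available; a sketch with hypothetical key, certificate, table, and column names:

        -- Re-encrypt one batch of rows under the new symmetric key;
        -- DECRYPTBYKEY picks the right open key from the ciphertext header
        OPEN SYMMETRIC KEY Key2011 DECRYPTION BY CERTIFICATE DataCert;
        OPEN SYMMETRIC KEY Key2010 DECRYPTION BY CERTIFICATE DataCert;

        UPDATE TOP (10000) dbo.Sensitive
        SET CardNumber_Enc = ENCRYPTBYKEY(KEY_GUID('Key2011'),
                                          DECRYPTBYKEY(CardNumber_Enc)),
            KeyVersion = 2011            -- hypothetical tracking column
        WHERE KeyVersion = 2010;

        CLOSE ALL SYMMETRIC KEYS;

    Repeating the batch until no rows remain keeps each transaction short; rows are only locked while their batch runs.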

    Read the article

  • User's home drive permissions don't contain system or administrators on Windows Server 2008 R2

    - by JohnyV
    I have a user whose home drive has only that user in its permissions; no administrators, no SYSTEM, etc. I have tried to take ownership as a local administrator, but I can't seem to apply the settings to the child objects; it still gives me a permission-denied error. I know there are some handy CLI utilities that can redo permissions. Any ideas, or even a way to do it through Windows? The file server is a 2008 R2 server.
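    The usual CLI route is takeown followed by icacls, run from an elevated prompt; a sketch with a hypothetical home-directory path:

        rem Take ownership recursively, then grant Administrators full control
        takeown /F D:\Home\jsmith /R /D Y
        icacls D:\Home\jsmith /grant Administrators:(OI)(CI)F /T

    takeown gets past the missing ACEs because an administrator can always seize ownership, and once the account owns the tree, icacls can rewrite the ACLs.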

    Read the article

  • Server 2003 IAS RADIUS -> Server 2012 AD DS

    - by Jordan
    I have googled this extensively but have not been able to find a good answer. Does anyone know whether Windows Server 2003 IAS RADIUS will query Windows Server 2012 AD DS and return the attributes correctly? This is just standard AD stuff (remote dial-in for VPN authentication). I hypothesize that it will work OK, but I wanted to see if anyone has first-hand knowledge. Thanks.

    Read the article
