Search Results

Search found 761 results on 31 pages for 'nfs performance'.

Page 25/31 | < Previous Page | 21 22 23 24 25 26 27 28 29 30 31  | Next Page >

  • Windows 7 "Could not reconnect all network drives" on boot

    - by Thermionix
    Windows 7 won't reconnect my network drives on startup. Once it has finished booting, opening Windows Explorer and clicking each share mounts them. The machine runs Windows 7 Enterprise N Service Pack 1 on a Crucial M4 64 GB SSD. I have tried reformatting the Windows machine; the first thing I did was map the network drives, and they were already disconnected after the next reboot. The host of the network shares is an Ubuntu Server machine (Ubuntu 11.10, GNU/Linux 3.0.0-12-server x86_64) connected through a gigabit switch. A modem provides DHCP, although both machines have static IPs defined. The drives won't reconnect regardless of whether they are Samba or NFS shares, so I believe the issue is on the Windows machine. I've tried using the IP address instead of the NetBIOS name when mapping the shares, setting EnableLinkedConnections=1, and enabling "Always wait for the network at computer startup and logon" in gpedit.msc (Computer Configuration > Administrative Templates > System > Logon).

    Read the article

  • SAN/NAS with high availability?

    - by netvope
    I have two servers that I plan to use for storage. Each of them has a few SATA disks directly attached. I want the storage to be available even if one of the storage servers is down (preferably the clients wouldn't even notice the failover, although I'm not sure if this is possible). The clients may access the storage via NFS and Samba, but this is not a must; I could use something else if needed. I found this guide, Installing and Configuring Openfiler with DRBD and Heartbeat, which apparently does what I want. It relies on three components - Openfiler, DRBD, and Heartbeat - and all three of them need to be configured separately. I'm wondering: are there simpler solutions? Is DRBD+Heartbeat the best practice for a situation like mine? I'm also interested to know if there are alternatives that don't depend on DRBD.

    Read the article

  • Debian Squeeze can't install php-pear

    - by Lennier
    I use Debian 6.0.6. Running sudo apt-get install php-pear results in: Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation: The following packages have unmet dependencies: initscripts : Breaks: console-setup (< 1.74) but 1.68+squeeze2 is to be installed; Breaks: initramfs-tools (< 0.104) but 0.98.8 is to be installed; Breaks: nfs-common (< 1:1.2.5-3) but 1:1.2.2-4squeeze2 is to be installed; keyboard-configuration : Breaks: console-setup (< 1.71) but 1.68+squeeze2 is to be installed; klibc-utils : Breaks: initramfs-tools (< 0.103) but 0.98.8 is to be installed; E: Broken packages. How can I solve it?
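
    For illustration, a quick way to check whether the unmet dependencies come from mixed release sources, which is a common cause of this particular error (a diagnostic sketch, not a confirmed fix):

    ```bash
    # list every configured APT source; look for wheezy/testing/unstable entries mixed in with squeeze
    grep -rn '^deb' /etc/apt/sources.list /etc/apt/sources.list.d/ 2>/dev/null

    apt-get update
    # show which release each candidate version would come from
    apt-cache policy php-pear initscripts console-setup initramfs-tools nfs-common
    ```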

    Read the article

  • Swap files in Cloud Infrastructures

    - by ffeldhaus
    At our company we set up an OpenStack cloud and are currently creating internal guidelines for the creation of OS templates/images. One controversial topic was whether we should provide swap inside the VM templates. Therefore I'd like to ask the following questions: From an elastic cloud provider's point of view, does it make sense to offer swap partitions/files in the VM templates, or is swap not needed when a VM can be resized? Which scenarios necessarily demand a swap file to be present? What kind of storage should be used for swap files (e.g. local/central, FC/iSCSI/NFS)? Are there any best practices for offering swap files in a performant way in cloud infrastructures?
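
    For reference, a minimal sketch of what "swap inside the template" typically looks like as a local swap file in the guest (sizes and paths are examples only):

    ```bash
    # create and enable a 2 GB swap file inside the VM image
    dd if=/dev/zero of=/swapfile bs=1M count=2048
    chmod 600 /swapfile
    mkswap /swapfile
    swapon /swapfile
    echo '/swapfile none swap sw 0 0' >> /etc/fstab   # persist across reboots
    ```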

    Read the article

  • How do I set up disk quotas over LDAP on CentOS?

    - by Noxshun
    I've been googling for some time and I haven't been able to find any resources or hints on the subject. I am wondering if it is possible to do this, and if so, how? Any nudge in the right direction will be appreciated. I do know that if you download and install "Linux Quota" from source, you get some Perl scripts which are supposed to help with the matter, but as far as I know there is absolutely no good documentation to help you along the way. I am also running an NFS server from the same machine. Note: This is for a university assignment, so I might be totally stupid for asking this question. I am trying to explore the options. If there is a better way of solving this, please do tell. Edit: Here is a link to the site of Linux Quota. They do include an LDAP schema, so it should be possible.
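
    For illustration, the non-LDAP half of the job usually looks something like this on the machine that exports the filesystem (a minimal sketch; the username and mount point are placeholders):

    ```bash
    # enable quotas on the exported filesystem and set a limit for one user
    mount -o remount,usrquota,grpquota /export
    quotacheck -cum /export                          # build the quota database files
    setquota -u alice 1048576 1153434 0 0 /export    # ~1 GB soft / ~1.1 GB hard block limit, no inode limit
    quotaon /export
    repquota /export                                 # verify
    ```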

    Read the article

  • Sharing / replicating EBS across AWS nodes

    - by skrat
    I would like to use a single EBS volume across multiple EC2 nodes (web/app servers). I've read some articles on snapshot sharing, but that doesn't suit what we need. We use the filesystem for storing DB record attachments, so when such an attachment gets created, we need it to be immediately available to all nodes (to serve). So far only NFS seems to be viable, but it's a pain to configure and maintain. Another option could be storing those attachments on S3 instead, but that would cut us off from doing any analysis on that data. This must be quite a common problem when scaling in AWS; what solutions are there?

    Read the article

  • Ways to set up a ZFS pool on a device with no way to create/manage partitions?

    - by Karl Richter
    I have a NAS where I don't have the possibility to create and manage partitions (maybe I could with some hacks that I don't want to make). What ways exist to set up multiple ZFS pools with one partition each (for starters, I just want to use deduplication)? The setup should work with the NAS, i.e. over the network (I'd mount the images via NFS or CIFS). My ideas and the associated issues so far: sparse files mounted over a loop device (specifying a sparse file directly as a ZFS vdev doesn't work, see Can I choose a sparse file as vdev for a zfs pool?): the problem is that the name/number of the assigned loop device is anything but constant, and I'm not sure how increasing the number of loop devices via a kernel parameter affects performance (there has to be a reason it's limited to 8 by default, right?)
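
    For illustration, a minimal sketch of the loop-device idea that sidesteps the unstable device name by capturing whatever name gets assigned (paths and sizes are examples; file-backed vdevs over a network mount are generally only suitable for experiments):

    ```bash
    truncate -s 100G /mnt/nas/zfs/vdev0.img                 # sparse backing file on the NAS mount

    LOOPDEV=$(losetup -f --show /mnt/nas/zfs/vdev0.img)     # attach and record the assigned loop device
    zpool create tank "$LOOPDEV"
    zfs set dedup=on tank
    ```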

    Read the article

  • Tell the Linux kernel to put a file in the disk cache?

    - by Rory
    Is there any command to force a file to be read in and loaded into the Linux disk cache? This is on an up-to-date Debian system. I know that in the general case it's better to let the Linux kernel figure this out, but I have an edge case. I have a laptop with an NFS directory mounted, and I want to play a long video file, but I don't want a network problem to interrupt the playing. I know that (largeish) file will be read in its entirety later on. I know that nothing else (really) will be running while playing this video. There is enough free memory to store this file. (I know I could just copy the file into a new tmpfs filesystem, but I'm curious if there's an even shorter way to do it.)
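
    For illustration, a minimal sketch of warming the page cache by simply reading the file once (the path is an example):

    ```bash
    # reading the file pulls it into the page cache; the output is discarded
    cat /mnt/nfs/videos/long-movie.mkv > /dev/null

    # or, with throughput reporting
    dd if=/mnt/nfs/videos/long-movie.mkv of=/dev/null bs=1M
    ```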

    Read the article

  • How to compare files/directories of two separate Solaris boxes?

    - by chz
    Hi friends, I have two Solaris boxes and I need to check certain directories (on the local filesystem and on mounted NFS) to make sure they match up on both boxes, and to delete or move the mismatches elsewhere on the local filesystem. I looked into Unix commands like rsync and tree, but it appears these commands are not available on my Solaris boxes. What is the least painful approach to this problem: use rsync or tree and diff their outputs, or use find? I have trouble limiting the find command to certain directories, since there are mounted folders containing too many XML files that I don't care about. What is the find syntax to search multiple directory paths in a single find command? Thanks. Sincerely
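
    For illustration, a minimal sketch of comparing checksums of several directory trees across the two boxes with stock tools (hostnames and paths are placeholders; it assumes a find that supports -prune and -exec ... {} +):

    ```bash
    # on each box: checksum every file under the chosen paths, skipping the noisy XML directory
    find /etc /opt/app -name xmlcache -prune -o -type f -exec cksum {} + | sort -k3 > /tmp/$(hostname).sums

    # pull the other box's list over ssh and compare
    scp boxB:/tmp/boxB.sums /tmp/
    diff /tmp/boxA.sums /tmp/boxB.sums
    ```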

    Read the article

  • CentOS server shared files

    - by Kyle Hudson
    Hi, I want to implement a multi-server specialised hosting environment. I currently have a cloud solution comprising three CentOS boxes (two LAMP web servers, one MySQL). What I want to do is implement a five-server solution with three web servers, one MySQL box and a file share. Basically I want the file share to host all the web files for the servers; caching will remain on the individual servers and sessions will be stored in MySQL. So what I am asking is: how do I map the servers to share the same "docroot"? Is it NFS? If so, what's the best way of doing this? Thanks in advance.
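
    For illustration, a minimal sketch of sharing a docroot over NFS (IP ranges, paths and options are examples, not a tuned configuration):

    ```bash
    # on the file-share box: export the docroot to the web-server subnet
    echo '/srv/docroot 10.0.0.0/24(rw,sync,no_root_squash)' >> /etc/exports
    exportfs -ra

    # on each web server: mount it where the web server expects its docroot
    mount -t nfs fileserver:/srv/docroot /var/www/html
    echo 'fileserver:/srv/docroot /var/www/html nfs defaults 0 0' >> /etc/fstab
    ```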

    Read the article

  • Can mod_rewrite Conditions/Rules be executed in random order?

    - by Tom
    I have some mod_rewrite rules that test for the presence of a file on various NFS mounts, and I would like the tests to occur randomly, as a very rudimentary way to distribute load. For example: RewriteEngine on RewriteCond %{REQUEST_URI} ^/(.+)$ RewriteCond /mnt/mount1/%1 -f RewriteRule ^/(.+)$ /mnt/mount1/$1 [L] RewriteCond %{REQUEST_URI} ^/(.+)$ RewriteCond /mnt/mount2/%1 -f RewriteRule ^/(.+)$ /mnt/mount2/$1 [L] RewriteCond %{REQUEST_URI} ^/(.+)$ RewriteCond /mnt/mount3/%1 -f RewriteRule ^/(.+)$ /mnt/mount3/$1 [L] RewriteCond %{REQUEST_URI} ^/(.+)$ RewriteCond /mnt/mount4/%1 -f RewriteRule ^/(.+)$ /mnt/mount4/$1 [L] As far as I understand mod_rewrite, Apache will look for the file on /mnt/mount1, then mount2, mount3 and so on. Can I randomize this on each request? I understand this is an odd request, but I need a creative solution to some unforeseen downtime. On a side note, do I need to redeclare RewriteCond %{REQUEST_URI} ^/(.+)$ each time like I have done? Thanks
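
    For illustration, a rough sketch of one way to randomize the choice with mod_rewrite's rnd: map type (the map file path and mount names are examples; RewriteMap must be declared in server or virtual-host context, and this variant tries one randomly chosen mount per request rather than falling through all four):

    ```apache
    # /etc/apache2/mounts.map contains one line:
    #   mounts  mount1|mount2|mount3|mount4
    RewriteMap mounts rnd:/etc/apache2/mounts.map

    RewriteEngine on
    # capture one random mount name so the same value is used in the file test and the rewrite
    RewriteCond ${mounts:mounts} ^(.+)$
    RewriteCond /mnt/%1%{REQUEST_URI} -f
    RewriteRule ^/(.+)$ /mnt/%1/$1 [L]
    ```

    As for the side note: each block of RewriteCond directives applies only to the RewriteRule that immediately follows it, so yes, the conditions do have to be repeated in the original rule-per-mount layout.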

    Read the article

  • Setting up remote filesystem access without root privileges

    - by Luke Massa
    OK, here's the situation. I have computer A, with complete admin access, and computer B (actually an account I log in to) with very limited access. I am trying to make it so I can access a device on computer A (an external hard drive) from B. If I had more access to B, I would just mount the device on B, but I can't do that. I can ssh in both directions, so theoretically I can copy data both ways, so it should be possible. I think NFS might be helpful for me, but from what I've looked at, it requires the client to perform a "mount" operation at some point, something my client can't do. Thoughts?
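
    For illustration, a minimal sketch of a common alternative, sshfs, which mounts a remote directory over SSH via FUSE and typically does not need root on the client (paths and usernames are placeholders; this assumes FUSE/sshfs is available on computer B):

    ```bash
    # on computer B, as the unprivileged user
    mkdir -p ~/remote-drive
    sshfs userA@computerA:/mnt/external ~/remote-drive   # expose A's external drive under B's home directory

    # ... work with the files under ~/remote-drive ...

    fusermount -u ~/remote-drive                         # unmount when finished
    ```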

    Read the article

  • OS X: Setup for file storage in a medium business

    - by Franatique
    In our office every machine runs OS X. In search of an ideal storage and sharing solution we decided to let OS X Server handle all account information and auth requests, while a 7 TB QNAP provides NFS shares. All shares are published as mounts in the company-wide LDAP. As it turns out, handling permissions in this situation is very clumsy (e.g. inheriting permissions on newly created files). Unfortunately, using NFSv4 in combination with ACLs did not solve the problem. As a possible solution I set up an iSCSI connection between the QNAP and the machine running OS X Server, which in turn serves the LUN as an AFP share. Permission handling works like a charm with this setup, although I am a bit concerned about its performance. As we are a fast-growing company we expect the solution to serve at least 100 clients with files of approximately 100 MB and above. Are there any known drawbacks of this solution?

    Read the article

  • Running Ubuntu off a USB drive?

    - by Solignis
    I was wondering if a USB 2.0 thumb drive has enough bandwidth to act as the primary system drive in an Ubuntu Linux server - more specifically, a SAN server. I am running an iSCSI target, ZFS and nfs-kernel-server, BIND9 (slave), and OpenLDAP (slave). I was thinking of resorting to a thumb drive because my new motherboard only has 4 SATA ports and I have 5 disks: 4 for the ZFS pool and 1 for the system. And unless I get an expansion card there is no way to get more SATA ports. This "server" leans more towards a home server; I use it in my lab with my VMware server. It provides storage, or at least it did until it died. Would it still be better to go with the SATA hard disk?

    Read the article

  • How can I set up disk quotas on a remote server in a Samba environment?

    - by graf_ignotiev
    I've been working with an install of Ubuntu 8.04 and Samba. I'm trying to set up user disk space quotas for our shared drive. That is, each user will have their own folder on our Network Attached Storage (NAS) that has a size limit. I can successfully mount the shared drive, but I can't figure out how to set quotas on it (I think smbcquota might work, but I'm not sure). It's been suggested to me to give each user a share on the shared drive or to use NFS. Unfortunately, neither option works for my purposes. Does anyone know how I could give users a quota on my NAS?

    Read the article

  • Why is RAM usage so high on an idle server? [duplicate]

    - by DeeDee
    This question already has an answer here: Why is Linux reporting “free” memory strangely? I'm investigating a server used for scientific data analysis. It's running RHEL 6.4 and has almost 200 GB of RAM. It's been running very slowly for users over SSH, and after some poking around I quickly noticed that the RAM usage was sky-high. What's odd is that even in an idle state it is still using a ton of RAM. I also looked via htop and I can't see any running process using more than 0.1% of the RAM. So I wonder what's going on. Right now the only user-initiated process running is an rsync between two NFS-mounted shares. I tried rebooting the server and it was much more responsive for a few minutes, but then memory usage shot up again. Is there any way I can pinpoint why memory usage is so high?
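
    For illustration, a minimal sketch of how to check whether the "missing" memory is simply the page cache, which is the usual explanation on a box doing heavy file I/O such as an NFS-to-NFS rsync:

    ```bash
    free -m                                          # on RHEL 6 the "-/+ buffers/cache" row shows usage excluding cache
    grep -E 'MemFree|Buffers|^Cached' /proc/meminfo

    # the cache can be dropped to prove the point, though it is normally best left alone
    sync && echo 3 > /proc/sys/vm/drop_caches
    ```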

    Read the article

  • Oracle Solaris Cluster 4.0: high-availability and disaster-recovery clustering for Oracle Solaris 11

    - by kazun
    Oracle Solaris Cluster 4.0, the high-availability (HA) and disaster-recovery (DR) clustering software for the new Oracle Solaris 11 operating system, was announced on December 6, 2011. Installation uses the Oracle Solaris 11 Image Packaging System (IPS) and the Automated Installer, so Oracle Solaris and Oracle Solaris Cluster can be deployed together. Oracle Solaris 11 Zone Clusters are supported (including Solaris 10 branded and Solaris 11 native zones), as is Oracle VM Server for SPARC 2.1, in both Zone Cluster and Failover Zone configurations. Oracle Solaris Cluster Geographic Edition provides site-to-site disaster recovery, with replication through Oracle Data Guard (Oracle Database 11.2.0.3 and later), the Availability Suite feature of Oracle Solaris (Oracle Solaris 11 SRU1 and later), and script-based plug-ins. Agents are provided for Apache, Apache Tomcat, DHCP, DNS and NFS on Oracle Solaris 11, Oracle Database 11g (including Oracle Real Application Clusters) and Oracle WebLogic Server, plus an API for building agents for other applications. A 30-day trial is available from the IPS repository, Oracle Software Delivery Cloud and OTN. Related links: Oracle Solaris Cluster, Oracle Solaris, Oracle's Sun Server and Storage Systems, Oracle Solaris Cluster Oasis blog.

    Read the article

  • Is it possible to store ObjectContext on the client when using WCF and Entity Framework?

    - by Sergey
    Hello, I have a WCF service which performs CRUD operations on my data model: Add, Get, Update and Delete. I'm using Entity Framework for my data model. On the other side there is a client application which calls methods of the WCF service. For example, I have a Customer class: [DataContract] public class Customer { public Guid CustomerId {get; set;} public string CustomerName {get; set;} } and the WCF service defines this method: public void AddCustomer(Customer c) { MyEntities _entities = new MyEntities(); _entities.AddToCustomers(c); _entities.SaveChanges(); } and the client application passes objects to the WCF service: var customer = new Customer(){CustomerId = Guid.NewGuid(), CustomerName="SomeName"}; MyService svc = new MyService(); svc.Add(customer); // or svc.Update(customer) for example But when I need to pass a large number of objects to the WCF service it could be a performance issue, because I need to create an ObjectContext each time I do Add(), Update(), Get() or Delete(). What I'm thinking of is keeping the ObjectContext on the client and passing it to the WCF methods as an additional parameter. Is it possible to create and keep the ObjectContext on the client and not recreate it for each operation? If not, how could I speed up passing a huge amount of data to the WCF service? Sergey
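
    For illustration, a minimal sketch of the usual alternative - batching - where many entities are passed per service call so a single ObjectContext and a single SaveChanges cover the whole set (the AddCustomers method name is hypothetical, not part of the original service contract):

    ```csharp
    public void AddCustomers(IEnumerable<Customer> customers)
    {
        // one context and one SaveChanges for the whole batch instead of one per entity
        using (var entities = new MyEntities())
        {
            foreach (var c in customers)
                entities.AddToCustomers(c);
            entities.SaveChanges();
        }
    }
    ```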

    Read the article

  • Easy way to observe user activity - how to improve my database structure?

    - by Thomas
    Welcome. I need some advice on improving the performance of my web application. In the beginning I had this database structure: USER -id (Primary Key) -name -password -email .... PROFILE -user Primary Key, Foreign Key (USER) -birthday -region -photoFile ... PAGES -id (Primary Key) -user Foreign Key(USER) -page -date COMMENTS -id (Primary Key) -user Foreign Key(USER) -page Foreign Key(PAGE) -comment -date FAVOURITES_PAGES -id (Primary Key) -user Foreign Key(USER) -favourite_page Foreign Key(PAGE) -date but now one of the most important pages of the website is the observatory, where everyone can observe the activity of other users. So I need to select all pages, comments and favourite pages of some users and display them in one list, sorted by date. For better performance (I think) I changed my structure to this: tables USER and PROFILE without changes ACTIVITY (additional table - has the common fields: user, date) -id (Primary Key) -user Foreign Key(USER) -date -page Foreign Key(PAGE) -comment Foreign Key(COMMENTS) -favourite_page Foreign Key(FAVOURITES_PAGES) PAGES -id (Primary Key) -page COMMENTS -id (Primary Key) -page Foreign Key(PAGE) -comment FAVOURITES_PAGES -id (Primary Key) -favourite_page Foreign Key(PAGE) So now it is very easy to get sorted records from all the tables. But I don't have only the foreign keys to PAGES, COMMENTS and FAVOURITES_PAGES in the ACTIVITY table - there are about ten foreign key fields, and in each record only one has a value; the others are None: ACTIVITY id user date page comment ... 1 2 2010-02-23 None 1 2 1 2010-02-21 1 None .... Is this a correct solution? When I display about 40 records on one page (pagination) I must wait about one second, but the database is almost empty (a few users and about 100 records in the other tables). It depends on the number of records per page - I have checked that - but why does it take so long? Because of the relationships? The website is built in Python/Django. Any advice/opinions?

    Read the article

  • Problem with closing Excel from C#

    - by phenevo
    Hi, I've got a unit test with this code: Excel.Application objExcel = new Excel.Application(); Excel.Workbook objWorkbook = (Excel.Workbook)(objExcel.Workbooks._Open(@"D:\Selenium\wszystkieSeba2.xls", true, false, Missing.Value, Missing.Value, Missing.Value, Missing.Value, Missing.Value, Missing.Value, Missing.Value, Missing.Value, Missing.Value, Missing.Value)); Excel.Worksheet ws = (Excel.Worksheet)objWorkbook.Sheets[1]; Excel.Range r = ws.get_Range("A1", "I2575"); DateTime dt = DateTime.Now; Excel.Range cellData = null; Excel.Range cellKwota = null; string cellValueData = null; string cellValueKwota = null; double dataTransakcji = 0; string dzien = null; string miesiac = null; int nrOperacji = 1; int wierszPoczatkowy = 11; int pozostalo = 526; cellData = r.Cells[wierszPoczatkowy, 1] as Excel.Range; cellKwota = r.Cells[wierszPoczatkowy, 6] as Excel.Range; if ((cellData != null) && (cellKwota != null)) { object valData = cellData.Value2; object valKwota = cellKwota.Value2; if ((valData != null) && (valKwota != null)) { cellValueData = valData.ToString(); dataTransakcji = Convert.ToDouble(cellValueData); Console.WriteLine("data transakcji to: " + dataTransakcji); dt = DateTime.FromOADate((double)dataTransakcji); dzien = dt.Day.ToString(); miesiac = dt.Month.ToString(); cellValueKwota = valKwota.ToString(); } } r.Cells[wierszPoczatkowy, 8] = "ok"; objWorkbook.Save(); objWorkbook.Close(true, @"C:\Documents and Settings\Administrator\Pulpit\Selenium\wszystkieSeba2.xls", true); objExcel.Quit(); Why, after the test finishes, is Excel still showing as a running process (it doesn't close)? And is there anything I can improve for better performance?
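
    For illustration, a rough sketch of the COM clean-up pattern that Excel interop usually needs before the process will exit (a common approach rather than a guaranteed fix; any additional Range objects such as cellData and cellKwota should be released the same way):

    ```csharp
    objWorkbook.Close(true, Missing.Value, Missing.Value);   // save and close the workbook first
    objExcel.Quit();

    // release every COM proxy that was touched, then let the finalizers run
    System.Runtime.InteropServices.Marshal.ReleaseComObject(r);
    System.Runtime.InteropServices.Marshal.ReleaseComObject(ws);
    System.Runtime.InteropServices.Marshal.ReleaseComObject(objWorkbook);
    System.Runtime.InteropServices.Marshal.ReleaseComObject(objExcel);
    GC.Collect();
    GC.WaitForPendingFinalizers();
    ```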

    Read the article

  • How can I detect message boxes popping up in another process?

    - by Frerich Raabe
    I'd like to execute some code whenever a (any!) message box (as spawned by the MessageBox function) is shown in another process. I didn't start the process I'm monitoring. I can think of three approaches: Install a global CBT hook procedure which tells me whenever a window is created on the desktop. Then check whether the window belongs to the process I'm monitoring and whether the class name is #32770 (which is the class name of dialogs according to the About Window Classes page on MSDN). This would probably work, but it would pull the DLL which contains the hook procedure into virtually every process on the desktop, and the hook procedure gets called a lot. It smells like a potential performance problem. Try to subclass the #32770 system window class (is this possible at all?) and look for WM_CREATE messages in my custom window procedure. Intercept the MessageBox function API call (even though the remote process is already running!) and call my code from the hook function. So far, I only know that the first idea is feasible, but it seems really inefficient. Can anybody think of a simpler solution to this problem?

    Read the article

  • Strange C++ performance difference?

    - by STingRaySC
    I just stumbled upon a change that seems to have counterintuitive performance ramifications. Can anyone provide a possible explanation for this behavior? Original code: for (int i = 0; i < ct; ++i) { // do some stuff... int iFreq = getFreq(i); double dFreq = iFreq; if (iFreq != 0) { // do some stuff with iFreq... // do some calculations with dFreq... } } While cleaning up this code during a "performance pass," I decided to move the definition of dFreq inside the if block, as it was only used inside the if. There are several calculations involving dFreq so I didn't eliminate it entirely, as it does save the cost of multiple run-time conversions from int to double. I expected no performance difference, or if any at all, a negligible improvement. However, the performance decreased by nearly 10%. I have measured this many times, and this is indeed the only change I've made. The code snippet shown above executes inside a couple of other loops. I get very consistent timings across runs and can definitely confirm that the change I'm describing decreases performance by ~10%. I would expect performance to increase because the int to double conversion would only occur when iFreq != 0. Changed code: for (int i = 0; i < ct; ++i) { // do some stuff... int iFreq = getFreq(i); if (iFreq != 0) { // do some stuff with iFreq... double dFreq = iFreq; // do some stuff with dFreq... } } Can anyone explain this? I am using VC++ 9.0 with /O2. I just want to understand what I'm not accounting for here.

    Read the article

  • Best architecture for a social media app

    - by Sky
    Hey guys, I'm working on a promising project to develop a new social media app for web and mobile. We are at the beginning, defining functionalities; nevertheless, I'm thinking ahead about architecture. So I'm asking: 1 - What's the best platform to develop the core of this application, which will have a REST API interface? 2 - What's the best database that will scale and grow with my application? As far as I researched, these were the answers I found most interesting. For the database: Cassandra NoSQL DB - amazing scalability, amazing write performance, good read performance (to be improved in 0.6). I think I will choose that one. ZooKeeper for transactions on Cassandra. I think those two technologies are really good for that purpose. What do you think, guys? For the front end that will serve the REST API, I don't have a final candidate. Here I have questions balancing performance vs. scalability vs. fast development/maintenance. Java or .NET: as far as I researched, they bring the best balance of these requirements. Python, Perl and Rails have the best fast development/maintenance but fall short on all the others. C or C++ I don't even consider, because their fast development/maintenance is poor... So what do you guys think about it?

    Read the article

  • Fast find object by string property

    - by Andrew Kalashnikov
    Hello, colleagues. I've got a task to quickly find an object by its string property. The object: class DicDomain { public virtual string Id { get; set; } public virtual string Name { get; set; } } For storing my objects I use a List<T> called dictionary, where T is DicDomain, for now. I've got 5-10 such lists, which contain about 500-20,000 items each. The task is to find objects by Name. I use this code now: List<T> entities = dictionary.FindAll(s => s.Name.Equals(word, StringComparison.OrdinalIgnoreCase)); I've got some questions: Is my search speed optimal? I think not. Data structure: is List good for this task? What about a hashtable, sorted collections...? The Find method: maybe I should use string interning? I don't have much experience with these tasks. Can you give me good advice on increasing performance? Thanks
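
    For illustration, a minimal sketch of indexing the objects once in a case-insensitive dictionary, so each lookup is a hash lookup instead of a scan of the whole list (the helper class and method names are made up for the example):

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.Linq;

    static class NameIndex
    {
        // build once, or whenever the source list changes
        public static Dictionary<string, List<DicDomain>> Build(IEnumerable<DicDomain> items) =>
            items.GroupBy(d => d.Name, StringComparer.OrdinalIgnoreCase)
                 .ToDictionary(g => g.Key, g => g.ToList(), StringComparer.OrdinalIgnoreCase);

        // per lookup: O(1) on average instead of scanning every element
        public static List<DicDomain> Find(Dictionary<string, List<DicDomain>> index, string word) =>
            index.TryGetValue(word, out var entities) ? entities : new List<DicDomain>();
    }
    ```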

    Read the article

  • Best way of showing more results with javascript/css

    - by Ricardo Neves
    I'm developing a website and I'm having trouble showing the search results to the user the way I want. Basically, after the user searches, the page makes a couple of AJAX requests, and as soon as a response arrives it appends the info to a specific element on my page. Each result is shown as a line... The problem is that in most cases there are going to be more than 1000 results, which would make the page very long to scroll. My idea was to show only the first 15 results, and when the user clicks "show more" the element would expand and show the next 15 results, and so on... This would be easier to do if the website weren't responsive, but because it is, I can't find the proper way of implementing what I want without lowering the website's performance. I have two ideas: The first is using something like #element .div:nth-child(-n+15) in my CSS and figuring out a way of changing the "15" to however many results I want to show... I don't know if this can be done. Is it possible to call CSS rules with parameters? Maybe with Less CSS? The second option is probably a bad one if I don't want to lower the website's performance: using JavaScript I would check if there is a specific CSS class (like .show-15, .show-30, .show-45), add that class to my element, and if it doesn't exist, create it somehow... Any help would be appreciated.

    Read the article

< Previous Page | 21 22 23 24 25 26 27 28 29 30 31  | Next Page >