Search Results

Search found 8389 results on 336 pages for 'shared calendar'.


  • Leopard Network Shares and browsing are unreliable

    - by EvilChookie
    I have two Macs running Leopard 10.5.8: a 13" MBP connected via WiFi and a 24" 2008 iMac connected via Ethernet. There are at least another 6-10 machines (Windows and Mac) awake on the network with shares at any given time, yet there are plenty of times when I cannot see any devices/shares in the "Shared" section in Finder, nor any computers under "Network" in Finder. Restarting doesn't help, and I've restarted all the networking gear in the house to no avail. Our network is a series of gigabit switches connected to a D-Link gaming router. I believe we use OpenDNS, and our provider is Cox. I hate having to use "Go - Connect to Server" to browse to commonly used file shares by IP. I'd like to know why my shares do not always and consistently appear in Leopard.

    Edit: I ran OnyX this morning and performed the cleaning and maintenance operations (including disk permissions) on both my Macs, and at least one of them has started showing network devices again (the other is still going). No idea how long this will last. Any ideas as to what is causing this issue, and how to prevent it?

    Edit 2: Aaaand there the shares go again. So running OnyX is not a permanent or reliable fix for this issue.

    Edit 3: After a clean reinstall and update, network shares are still unreliable. The smbclient command mentioned in the comments shows the information it's supposed to show, but the shares do not appear in the Shared section. They also vanish and reappear at random throughout the day.
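
    A couple of Terminal checks can help narrow down whether this is a discovery (Bonjour/NetBIOS browsing) problem or an SMB problem; a minimal sketch, with 192.168.1.20 standing in for one of the servers that should be visible:

        # watch Bonjour advertisements for SMB servers on the local network
        dns-sd -B _smb._tcp local.

        # ask a specific server for its share list, bypassing Finder browsing
        smbutil view //guest@192.168.1.20

    If smbutil lists the shares while Finder's Shared section stays empty, the problem is on the discovery side rather than with SMB itself.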

  • DFSR NTFS Permissions Not Working!??!

    - by megadood
    I have two Windows 2008 Standard servers running DFSR okay. I can create a file on one server and it is replicated to the other, etc. The namespace shared folder on each server is shared with Full Control for Administrators and Change/Read permissions for Everyone.

    I then browse to the folder on server 1, e.g. \\server1\namespace\share\folder1. I right-click the folder and configure the NTFS permissions as I would like, for example Administrators Full Control, one user with Read/Write access, and no other users in the list. I save this and then double-check the second server, e.g. \\server2\namespace\share\folder1. I right-click the same folder name as before and can see the NTFS permissions have replicated accordingly. I then go to properties - security - advanced - effective permissions and select a user that shouldn't be able to get into that folder, e.g. testuser. It agrees with the NTFS permissions and shows that testuser has no ticks next to any permissions, so access should be denied.

    I log on to any network PC or the server as testuser and browse to \\server1\namespace\share\folder1. It lets me straight in, no access denied messages. The same applies to server2. It seems as though all my NTFS permissions are being ignored. I have one DFS share, and the subfolders are a mixture of private folders and public folders, so I need the NTFS permissions to work. Any idea what's going on? Is this normal? From my tests all users can access any DFSR folder under the namespace\share, which is quite worrying. Thanks
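
    icacls can show what is actually stored on disk on each member, independent of what the Explorer dialogs report; a sketch only, with a hypothetical local path for the replicated folder and the domain name as a placeholder:

        rem ACL as stored on the replicated folder itself (run on each member)
        icacls "D:\Shares\share\folder1"

        rem list any ACEs that explicitly mention the test account
        icacls "D:\Shares\share\folder1" /findsid MYDOMAIN\testuser

    Comparing that output with the share-level permissions (net share <sharename>) can show whether the access is coming from an inherited ACE or group membership rather than the explicit entries set through the namespace path.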

  • Permission denied when running Rails app in VirtualBox Ubuntu guest with files on Windows host

    - by Ola Tuvesson
    I think I'm close to having my dev environment set up exactly the way I want, but one final snag remains. I'm running VirtualBox on a Windows 7 64-bit host, with my dev environment inside an Ubuntu 12.04 guest. I want to keep the files for my projects on the host filesystem - partly so I can access them when the Ubuntu guest is not running, but also so I can use Tortoise and other Windows-based tools (cough Photoshop), and it also eases my backup scheme somewhat.

    So I've got a folder "Rails" on my NTFS drive, which I've shared (Samba) from the host with a user specifically created for the Ubuntu guest. The mount point has been set up and an entry added to fstab (cifs), using a credentials file and the options iocharset=utf8,mode=0777,dir_mode=0777. This mounts fine and my Ubuntu user has both read and write permissions to the contents. But when I try to start my Rails app I get permission errors on any files the app needs to write to (e.g. the log file) - why is that? Are there any major conceptual flaws with this approach? Would I be better off using the VBox "shared folders" function?
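
    For what it's worth, a sketch of the kind of /etc/fstab entry that maps the mounted files onto the guest-side user Rails actually runs as; the host IP, share name, credentials path and uid/gid are all placeholders:

        //192.168.56.1/Rails  /home/dev/Rails  cifs  credentials=/home/dev/.smbcredentials,uid=1000,gid=1000,iocharset=utf8,file_mode=0664,dir_mode=0775  0  0

    With cifs against a Windows host the relevant options are usually uid/gid plus file_mode/dir_mode: ownership is whatever uid/gid the mount was given, so if that uid doesn't match the user the Rails server runs as, writes can fail even though the modes look wide open.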

  • Write access from a Windows client via a ZFS SMB, to a file created on the host in OpenIndiana

    - by Gerald Kaszuba
    I've got an OpenIndiana server running ZFS that is shared using a nobody user and group. I don't fully understand Solaris ACL permissions, but I do know Linux-style permissions. The client is Windows 8 and the server is OpenIndiana oi_148. I'm failing to work out how to make write permission work correctly for the Windows client. It is able to make new files, but cannot modify files created by the shell in OpenIndiana.

    When a file ("local file") is created locally as the user nobody in bash, and another file ("smb file") is created remotely via SMB (as nobody also), they are quite different in permissions:

        # ls -V
        -rw-r--r--   1 nobody   nobody         0 Dec  2 12:24 local file
                 owner@:rw-p--aARWcCos:-------:allow
                 group@:r-----a-R-c--s:-------:allow
              everyone@:r-----a-R-c--s:-------:allow
        -rwx------+  1 nobody   nobody         0 Dec  2 12:24 smb file
            user:nobody:rwxpdDaARWcCos:-------:allow
       group:2147483648:rwxpdDaARWcCos:-------:allow

    In bash I'm able to write to smb file, but vice versa, the Windows client is not able to write to local file. This is confusing to me because it appears that it should allow the SMB client to write to local file, because nobody is the owner and it has a w in the ACL. The sharesmb setting is fairly boring, although I'm hoping there is something that can be set here similar to a umask:

        sharesmb       name=shared,guestok=true

    How can I make these two work together and have a symmetrical permission system, where both SMB and the local user produce the same permissions? Is there some sort of ACL that can be set at the root of the filesystem so that all files are created in a similar manner?
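
    On illumos the knobs usually involved here are the dataset's ACL inheritance property plus an inheritable ACE on the share root; a sketch only, assuming the dataset is tank/shared mounted at /tank/shared:

        # keep inherited ACEs intact instead of letting mode bits strip them
        zfs set aclinherit=passthrough tank/shared

        # add an inheritable allow ACE so files created either way end up writable by nobody
        /usr/bin/chmod A+user:nobody:rwxpdDaARWcCos:fd-----:allow /tank/shared

    Whether passthrough is the right policy depends on how the share is used, but it is the setting normally reached for when SMB-created and shell-created files should end up looking alike.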

  • DFS replication and the SYSTEM user (NTFS permissions)

    - by HopelessN00b
    Question for which I'm having trouble finding an answer on Google or TechNet... Does granting the SYSTEM user permissions to DFS-shared files and folders have any effect on DFS replication? (And while we're at it, is there any good reason not to let SYSTEM have permissions to DFS-shared files?)

    It comes up because I have a collection of DFS namespaces and folders that I'm not able to make someone else's problem, and while troubleshooting a problem where one DFS replica just wasn't replicating with another for no discernible reason, I observed that SYSTEM didn't have any permissions granted to any of the files or folders in the folder in question. So I set SYSTEM to have full control and propagated it down, and our DFS health diagnostic reports went from showing a backlog of ~80 files to a backlog of ~100,000... and things started replicating, including a number of files that had been missing on one server or the other (so more than just the permissions changes started replicating).

    Naturally, this made me curious as to whether or not DFS needs the SYSTEM account to have permissions to do its work, or if perhaps it was just any change to the folder tree in question that prompted DFS to jump into action. If it matters, our DFS namespaces were set up under 2000/2003, and I have just recently finished upgrading all the servers to 2008 R2 or 2012 (with UAC enabled, blech), but have not yet gotten around to raising the DFS namespace functional levels to Server 2008. (And bonus points if anyone has an official Microsoft article on NTFS file permissions and the SYSTEM account as it pertains to DFS or network files.)
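
    One way to watch whether a change like this actually moves the backlog, rather than waiting on the periodic health report, is dfsrdiag (available where the DFS management tools are installed); a sketch, with the replication group, replicated folder and member names as placeholders:

        dfsrdiag backlog /RGName:"MyReplicationGroup" /RFName:"MyFolder" /SendingMember:SERVER1 /ReceivingMember:SERVER2

    Running it before and after a permissions change would show whether the jump to ~100,000 was a one-time rescan that then drains, or ongoing churn.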

  • Copying symbolic links and filenames with special characters to NAS

    - by Mr E
    I have a new Western Digital My Book Live NAS. I am trying to copy files from an old drive to the NAS. I'm using Ubuntu 12.04 and I've mounted the drive by browsing the network in Nautilus and choosing a shared folder configured on the NAS. The shared folder is then automatically mounted at .gvfs/files on mybooklive. There are two problems so far:

    1. File names and directory names containing certain characters (e.g. : or |). Attempting to copy these results in the error message: cp: cannot stat `/path/to/destination.filename': Invalid argument
    2. Symbolic links. In Nautilus I get the error message: Symlinks not supported by backend

    My questions are: Can I connect to the NAS or configure the NAS so that I can copy my files without this problem? (In case it matters, I don't need Windows compatibility.) If not, what can I do to identify all the problem files? Can I do anything to automatically fix my filenames? Please let me know if any of this needs clarification. I'm not too familiar with all of this so I may have left out some useful information.
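
    On the "identify all the problem files" part, a sketch of how the affected names could be listed before deciding how to rename or archive them; /media/olddrive stands in for wherever the old drive is mounted:

        # names containing characters that commonly fail over SMB/gvfs copies
        find /media/olddrive -name '*[|:*?"<>]*'

        # symbolic links, which the gvfs SMB backend refuses to copy
        find /media/olddrive -type l

    Symlinks could also be preserved by packing them into a tar archive and copying the archive to the NAS instead, at the cost of having to unpack it on the other side.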

  • PostgreSQL lots of large Arrays and Writes

    - by strife911
    Hi, I am running a Python program that spawns 8 threads, and each thread launches its own postmaster process via psycopg2. This is to maximize the use of my CPU cores (8). Each thread calls a series of SQL functions. Most of these functions go through many thousands of rows, each associated with a large FLOAT8[] array (250-300 values), using unnest() and multiplying each FLOAT8 by another FLOAT8 associated with each row. This array approach minimized the size of the indexes and the tables. Each function ends with an insert into another table of a row of the same form (pk INT4, array FLOAT8[]). Some SQL functions called by Python will update a row in these kinds of tables (with large arrays).

    I currently have PostgreSQL configured to use most of the memory for cache (effective_cache_size of 57 GB, I think) and only a small amount of it for shared memory (1 GB, I think). First, I was wondering what the difference between cache and shared memory is in regards to PostgreSQL (and my application). What I have noticed is that only about 20-40% of my total CPU processing power is used during the most read-intensive parts of the application (SELECT unnest(array) etc.). So secondly, I was wondering what I could do to improve this so that 100% of the CPU is used. Based on my observations, it does not seem to have anything to do with Python or its GIL. Thanks
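
    For reference, the "cache vs shared memory" distinction in the question roughly maps onto two postgresql.conf settings; a sketch only, where the first two values are the ones mentioned in the question and work_mem is a placeholder, not a recommendation:

        shared_buffers = 1GB           # memory PostgreSQL allocates and manages itself
        effective_cache_size = 57GB    # planner hint: how much OS page cache is assumed available; nothing is allocated
        work_mem = 64MB                # per-sort/per-hash memory, relevant for unnest()-heavy queries

    effective_cache_size never reserves memory; it only influences plan choices, which is why raising it on its own rarely changes CPU utilisation.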

  • Windows 7 - Windows XP - sharing - why isn't it working?

    - by durumdara
    Hi! This seems to be a "hardware" rather than a "software"/"programming" question, but I need to use this share in my programs, so it is close to programming. We had an XP-based wireless network: the server is XP Professional and the clients are XP Home notebooks. This was working well with folder sharing (with user rights, not simple sharing). Then we replaced one of the notebooks with a Win7/x64 notebook. At first it could reach the server, and the other client too. Later I went to other sites and connected to other servers and networks. When I returned to this network, I found I could no longer connect to this server: I see none of its resources, and when I try to double-click the computer I get a login window where nothing I enter will log me in.

    The interesting part:

    - The other XP Home machine can see the server and can log in as guest or as another user.
    - The server can see the XP Home notebook.
    - The Win7 machine can see the notebook's shared folders, and XP Home can see the Win7 shared folders.
    - The server can see the Win7 folders, BUT the Win7 machine cannot see the server's folders or resources.

    The Win7 machine uses the "work" network profile, and the workgroup name is not MSHOME. I have tried everything on the server - removing the MS client and restoring it, simple sharing, setting a guest password, etc. - but I have lost the ability to access this server from Win7. Does anyone have any idea what I need to check or set to access these resources, so I can use them in my programs? Thanks for every info or link: dd
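
    If it helps to gather more detail, two quick checks from a command prompt on the Win7 machine (SERVER stands in for the XP Professional machine's name, someuser for an account that exists on it):

        net view \\SERVER
        net use \\SERVER\IPC$ /user:SERVER\someuser

    The exact error these return (for example "System error 5" versus 53 or 1326) usually narrows the problem down to name resolution, authentication or share permissions.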

  • Nginx + uWSGI + Django performance stuck on 100rq/s

    - by dancio
    I have configured Nginx with uWSGI and Django on CentOS 6 x64 (3.06GHz i3 540, 4GB), which should easily handle 2500 rq/s, but when I run an ab test (ab -n 1000 -c 100) performance stops at 92-100 rq/s.

    Nginx:

        user nginx;
        worker_processes 2;
        events {
            worker_connections 2048;
            use epoll;
        }

    uWSGI (Emperor command line, then the app's uwsgi.ini):

        /usr/sbin/uwsgi --master --no-orphans --pythonpath /var/python --emperor /var/python/*/uwsgi.ini

        [uwsgi]
        socket = 127.0.0.2:3031
        master = true
        processes = 5
        env = DJANGO_SETTINGS_MODULE=x.settings
        env = HTTPS=on
        module = django.core.handlers.wsgi:WSGIHandler()
        disable-logging = true
        catch-exceptions = false
        post-buffering = 8192
        harakiri = 30
        harakiri-verbose = true
        vacuum = true
        listen = 500
        optimize = 2

    sysctl changes:

        # Increase TCP max buffer size settable using setsockopt()
        net.ipv4.tcp_rmem = 4096 87380 8388608
        net.ipv4.tcp_wmem = 4096 87380 8388608
        net.core.rmem_max = 8388608
        net.core.wmem_max = 8388608
        net.core.netdev_max_backlog = 5000
        net.ipv4.tcp_max_syn_backlog = 5000
        net.ipv4.tcp_window_scaling = 1
        net.core.somaxconn = 2048
        # Avoid a smurf attack
        net.ipv4.icmp_echo_ignore_broadcasts = 1
        # Optimization for port use for LBs
        # Increase system file descriptor limit
        fs.file-max = 65535

    I did sysctl -p to enable the changes. Idle server info:

        top - 13:34:58 up 102 days, 18:35,  1 user,  load average: 0.00, 0.00, 0.00
        Tasks: 118 total,   1 running, 117 sleeping,   0 stopped,   0 zombie
        Cpu(s):  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
        Mem:   3983068k total,  2125088k used,  1857980k free,   262528k buffers
        Swap:  2104504k total,        0k used,  2104504k free,   606996k cached

        free -m
                     total       used       free     shared    buffers     cached
        Mem:          3889       2075       1814          0        256        592
        -/+ buffers/cache:       1226       2663
        Swap:         2055          0       2055

    During the test:

        top - 13:45:21 up 102 days, 18:46,  1 user,  load average: 3.73, 1.51, 0.58
        Tasks: 122 total,   8 running, 114 sleeping,   0 stopped,   0 zombie
        Cpu(s): 93.5%us,  5.2%sy,  0.0%ni,  0.2%id,  0.0%wa,  0.1%hi,  1.1%si,  0.0%st
        Mem:   3983068k total,  2127564k used,  1855504k free,   262580k buffers
        Swap:  2104504k total,        0k used,  2104504k free,   608760k cached

        free -m
                     total       used       free     shared    buffers     cached
        Mem:          3889       2125       1763          0        256        595
        -/+ buffers/cache:       1274       2615
        Swap:         2055          0       2055

        iotop
        30141 be/4 nginx       0.00 B/s    7.78 K/s  0.00 %  0.00 %  nginx: wo~er process

    Where is the bottleneck? Or what am I doing wrong?
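
    One quick way to see whether the ceiling is the nginx/uWSGI plumbing or the Django view itself is to benchmark a static file alongside the application URL; a sketch, with the URLs as placeholders:

        # static file served directly by nginx - should be far above 100 rq/s
        ab -n 1000 -c 100 http://server/static/test.txt

        # the Django-backed URL from the original test
        ab -n 1000 -c 100 http://server/

    If the static file flies while the Django URL still sits near 100 rq/s with the CPUs at ~93% user time, the time is being spent in the application and uWSGI workers rather than in the network-stack settings above.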

  • Multiple PHP SAPI configuration

    - by DTest
    I'm trying to build PHP for use as an Apache shared module (--with-apxs2) but also with the 'php-cgi' binary (FastCGI) on Mac OS X 10.6. I'm using this ./configure:

        ./configure --prefix=/usr/local/PHP \
          --with-apxs2=/usr/local/apache/bin/apxs \
          --disable-ipv6 \
          --enable-cgi \
          --with-curl \
          --with-mysqli=/usr/local/mysql/bin/mysql_config \
          --with-openssl=/usr \
          --enable-ftp \
          --enable-shared \
          --enable-soap \
          --enable-sockets \
          --enable-zip \
          --with-zlib-dir

    It builds the Apache php5.so module just fine, but in /usr/local/PHP/bin there is no php-cgi file. If I build it without the --with-apxs2 option (and indeed, I don't even need the --enable-cgi option) the php-cgi file gets built with no problems.

    Background on my setup: PHP 5.3.4, Apache 2.2.14, Mac OS X 10.6, Tomcat with JavaBridge (which is why I need the php-cgi file). Without the apxs2 option, /usr/local/php/bin/php -v produces:

        PHP 5.3.4 (cli) (built: Dec 21 2010 21:35:14)
        Copyright (c) 1997-2010 The PHP Group
        Zend Engine v2.3.0, Copyright (c) 1998-2010 Zend Technologies

    and /usr/local/php/bin/php-cgi -v produces:

        PHP 5.3.4 (cgi-fcgi) (built: Dec 21 2010 21:35:12)
        Copyright (c) 1997-2010 The PHP Group
        Zend Engine v2.3.0, Copyright (c) 1998-2010 Zend Technologies

    My question is, what am I not understanding about PHP SAPIs that won't allow building the two modules at the same time? Also, can I build it --with-apxs2 the first time, then make clean and rebuild in the same PHP directory (/usr/local/php) for the php-cgi files without issue?
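
    A sketch of the two-pass approach the question ends with, which is how this combination is commonly handled since a configure run in this era of PHP produces only one server SAPI besides the CLI (which matches what you're seeing); the option lists are abbreviated with "...":

        # pass 1: Apache 2 module (installs libphp5.so through apxs)
        ./configure --prefix=/usr/local/php --with-apxs2=/usr/local/apache/bin/apxs ...
        make && sudo make install
        make clean

        # pass 2: same prefix and options, but CGI instead of apxs2
        ./configure --prefix=/usr/local/php --enable-cgi ...
        make && sudo make install   # adds bin/php-cgi alongside the existing module

    Both passes sharing the same --prefix is what makes the second install land in the same /usr/local/php tree.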

  • Windows Server 2003 SBS domain in multiple sites

    - by E3 Group
    We have about 25 employees in our current office and are looking to open another office in another capital city housing about 15 employees. In our current office we run a domain hosted by a 2003 SBS server, and I've been tasked by the boss to expand our infrastructure to the new office in the cheapest way possible (cheapest in the short run, that is, because my boss doesn't think more than 6 months ahead). So I'm looking to get a second-hand server and have it run Server 2003 Std with Exchange Server 2003. These are the things it needs to do:

    1. Replicate shared folders that are hosted on the parent LAN.
    2. Deliver emails hosted on the parent Exchange server.
    3. Somehow link up with the parent domain controller and push the AD to the remote site.

    I'm pretty sure 3 is impossible, but the DC would be available if a VPN connection is present, right? On that note, would I be looking at hardware VPN connections? I'm not sure how to deploy the new site as this is my first time doing it, and I'm making it especially difficult for myself, seeing as the AD and DC are on an SBS server. Would I first start by establishing a VPN connection and then joining the new server to the domain? Will things 'just work' if I install Exchange onto the new server and point Outlook clients to it? And how would I be able to replicate shared folders?

  • Mac Share Points automatically authenticate with matching Windows AD credentials from Windows

    - by Ron L
    I recently started administering an OS X Server (10.8) that is on the same network as our AD domain. While setting up Mac Share Points, I encountered some odd behavior that I hope someone can explain. For the purposes of this example assume the following:

    1) Local user on OS X Server: frank, password: Help.2012
    2) AD domain user: frank, password: Help.2012
    3) AD domain: mycompany
    4) OS X Server hostname: macserver (not bound to AD, not running OD)

    When joined to the domain on a Win 7 computer, logged in as frank and accessing the shares at \\macserver, it automatically authenticates using frank's OS X credentials (because they are the same). However, if I change frank's OS X password, the standard Windows authentication dialog pops up, preset to use frank's AD domain (mycompany\frank), and after entering the new OS X password it will not authenticate unless I change the domain to local (.\frank).

    Basically, if a user in AD has the same user name and password in OS X, it will authenticate automatically regardless of the domain. If the passwords differ, authenticating to the OS X shares must be done with the local account. (And slightly off topic: how come an OS X administrator can access the root drives on the Mac server from Windows when accessing the Mac shares, even when those drives aren't shared? In other words, it shows all the shared folders from "File Sharing" plus whatever drives are mounted in OS X.)

  • Mac WLAN 802.11b+g WPA1 connection issues

    - by Peto
    Hi, I have a Telewell TW-EA510v4 ADSL modem + WLAN router configured as follows:

        Mode: 802.11b+g
        Security Mode: WPA1 Pre-shared Key
        WPA Algorithms: TKIP

    Connections are allowed only from certain MAC addresses, and the MAC address of my Mac is in that list. The WLAN works just fine with an iPhone and an old Acer laptop, and it worked for about two months with my MacBook Pro (a model about a year and a half old). Occasionally I had minor problems with it, which required rebooting either the ADSL modem or the Mac. For the last week or so, however, I haven't been able to connect at all. This is what I get in the console when I try to connect:

        5.5.2010 20.54.53 airportd[73731] Apple80211Associate() failed -3924 (Invalid PMK)
        5.5.2010 20.54.53 Apple80211 framework[584] airportd MIG failed (Associate Event) = -3924 (Invalid PMK) (port = 104599)
        5.5.2010 20.54.53 SystemUIServer[584] Error joining WLAN-M: Invalid password (-3924 Invalid master key)

    The pre-shared key I use is not incorrect - I'm 100% sure of that. The router's error log only says this when I try to connect:

        May 05 21:09:54 home.gateway:i802_1x:none: <my mac address> associated
        May 05 21:10:00 home.gateway:i802_1x:none: <my mac address> disassociated
        May 05 21:10:01 home.gateway:i802_1x:none: <my mac address> disassociated

    Any ideas or tips to troubleshoot this further?
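
    An "Invalid PMK" error often just means the key the Mac has cached for that network no longer matches what it derives at join time, so one hedged thing to try is forgetting and re-adding the network from Terminal; a sketch only, assuming the AirPort interface is en1 and the SSID is WLAN-M (the exact networksetup syntax varies a little between OS X releases):

        # drop the remembered network for this interface
        networksetup -removepreferredwirelessnetwork en1 WLAN-M

        # join again, supplying the pre-shared key fresh
        networksetup -setairportnetwork en1 WLAN-M 'the-actual-passphrase'

    The network's entry in Keychain Access ("AirPort network password") may also need to be deleted so the old stored key isn't reused.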

  • Installing sqlite gem fails on AWS Linux instance with sqlite-devel libraries installed

    - by Scott
    Hi, I'm running an instance built off ami-595a0a1c. I am trying to install the sqlite3 (or sqlite) gem and it's failing with the below error:

        $ sudo gem install sqlite3
        Building native extensions.  This could take a while...
        ERROR:  Error installing sqlite3:
                ERROR: Failed to build gem native extension.

        /usr/bin/ruby extconf.rb
        checking for sqlite3.h... no
        sqlite3.h is missing. Try 'port install sqlite3 +universal'
        or 'yum install sqlite3-devel' and check your shared library search path (the
        location where your sqlite3 shared library is located).
        *** extconf.rb failed ***
        Could not create Makefile due to some reason, probably lack of
        necessary libraries and/or headers.  Check the mkmf.log file for more
        details.  You may need configuration options.

        Provided configuration options:
                --with-opt-dir
                --without-opt-dir
                --with-opt-include
                --without-opt-include=${opt-dir}/include
                --with-opt-lib
                --without-opt-lib=${opt-dir}/lib
                --with-make-prog
                --without-make-prog
                --srcdir=.
                --curdir
                --ruby=/usr/bin/ruby
                --with-sqlite3-dir
                --without-sqlite3-dir
                --with-sqlite3-include
                --without-sqlite3-include=${sqlite3-dir}/include
                --with-sqlite3-lib
                --without-sqlite3-lib=${sqlite3-dir}/lib

        Gem files will remain installed in /usr/lib64/ruby/gems/1.8/gems/sqlite3-1.3.3 for inspection.
        Results logged to /usr/lib64/ruby/gems/1.8/gems/sqlite3-1.3.3/ext/sqlite3/gem_make.out

    Typically this just means you need to install the development libraries and everything is cool. However, I have installed the sqlite-devel packages and still no dice. Since this is the Amazon Linux instance, I'd rather not add more repositories than the ones Amazon provides if possible. What can I do to get this thing to compile? Thanks for any insight!

    From a brand new instance, here's what I've done:

        $ sudo yum install rubygems ruby-devel
        $ sudo gem update --system
        $ sudo gem install rails
        $ rails new app
        $ cd app
        $ rails server
        Could not find gem 'sqlite3 (= 0)' in any of the gem sources listed in your Gemfile.
        $ sudo yum install sqlite-devel
        $ sudo gem install sqlite (or sqlite3 -- same result)

    See breakage above.
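
    Two hedged follow-ups that sometimes shake this loose, assuming the devel package really did install headers; the paths below are guesses, not known locations on this AMI:

        # confirm where the devel package actually put the header
        rpm -ql sqlite-devel | grep sqlite3.h

        # pass the locations to the gem's extconf explicitly (arguments after -- go to extconf.rb)
        sudo gem install sqlite3 -- --with-sqlite3-include=/usr/include --with-sqlite3-lib=/usr/lib64

    If rpm shows no sqlite3.h at all, the installed package may cover a different sqlite major version than the gem expects.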

  • Permissions on Mac for iTunes library with multiple users - idea

    - by John
    I currently have a lot of music on an external drive, with my iTunes library set up from there. However, periodically, when the external drive isn't connected, iTunes will default back to the library location in my home directory. I don't want to mess with an external drive, as my Mac HD is large enough to house the music collection. However, I have 4 family members - all with their own logins - using this same gob of music. I don't want 4 copies of the library, only one that all libraries reference. So what I want to do is:

    1 - Move all music files to a shared directory at /Macintosh HD/users/music. I created this directory and adjusted permissions so all four users can read and write to it.
    2 - Get all four accounts to reference this library instead of the external or local home locations.

    I am hoping I can just check the box to keep the library organized in my account, which is the admin, and let iTunes move it all, then delete the current libraries for each account and re-add from the new shared location. Will the iTunes organization process cause permissions issues, either by restricting the files to my account only, or by removing write permissions, or any other 'gotcha'? I am having a hard time coming up with a smooth solution that won't break everything and cause me to have mega duplicates or access issues. I would prefer not to do any XML library file editing if possible. Am I dreaming? Thanks for help.
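
    For the shared-folder half of this, plain Unix modes tend to break as soon as iTunes creates new files under one account, so the usual approach on a Mac is an inherited ACL on the shared directory; a sketch only, assuming the folder ends up at /Users/music and all four accounts are members of the staff group:

        sudo chmod -R +a "group:staff allow read,write,delete,add_file,add_subdirectory,file_inherit,directory_inherit" /Users/music

    With that in place, files iTunes adds later inherit group write access instead of being owned read-only by whichever account imported them.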

  • PostgreSQL failover cluster on Windows Server

    - by user36997
    We are looking for advice on how to set up a basic failover cluster for our application:

    - We will be using 4 machines running Microsoft Windows Server (most probably 2003). All four will always run our application, which is essentially a web service.
    - Load balancing is "outsourced" - somebody else handles the distribution of the web requests among the servers.
    - Only one of the servers will be running the PostgreSQL server actively at any given time. Another server (of the four) also has the DB installed, but is on standby/passive.
    - The DB data is stored on shared storage. No copying data between servers.
    - Reads are done very frequently by many end users, and in rather small chunks of data. Writes are done much less frequently, by fewer users, and in very large bulks of data.

    Now, how can one configure Microsoft Cluster Service to keep only one instance of the DB server and 4 instances (1 per server) of our application at all times? And does PostgreSQL integrate neatly with MSCS at all?

    Update: Instead of keeping the data on shared storage, I am also considering log shipping to replicate data to a second DB server. There are two issues with this option:

    1. Log shipping only makes sure that I have a second server that gets all of the data and is ready to take over. How do I implement the actual failure detection and failover switch?
    2. Switching back: suppose the master fails and the system automatically fails over to the slave, and later the master comes back online. I understand that with WAL shipping this will require reconfiguring the log shipping once again, and that switching back is far from seamless. Is that so?

  • How to configure VirtualBox server for performance at home

    - by BluJai
    I currently have two physical Ubuntu Server 10.10 servers at home: one serves as our firewall/router/DHCP/VPN server and the other performs double duty as a file server and a VirtualBox host for an Ubuntu Desktop 10.10 machine which I use from remote connections (via NoMachine) for many thin-client purposes which are irrelevant to my question.

    What I'd like to accomplish is to consolidate the two physical machines into one which is a dedicated VirtualBox host (most likely running Ubuntu Server 10.10). Note that I'd like to stick with VirtualBox (if possible) because I'm most comfortable with it and use it on a daily basis at both home and work. Specifically, I plan to have one VM set up as the file server, another as the firewall/router/DHCP/VPN (or possibly split those a bit) and a third, which is the only current VM (already VirtualBox), which is the thin-client host.

    My question comes down to performance and/or recommendations about the file server VM. The file server hosts about 6 terabytes of data across 4 drives. What I'd like to do is use raw disk access from the VM directly to the existing disks. However, I'm curious what performance advantage/disadvantage that would have as compared to using shared folders from the VM host and basically just having the whole drive served as a shared folder to the VM, which would then serve it to the other machines on the network. I don't know if virtual disks would even work in this scenario, and I certainly wouldn't want a drive to be filled with just a single file which is 1.5 TB (disk image).

    To add understanding of context, but not to get additional advice: I want to virtualize these machines because I intend to regularly use the snapshot capabilities of VirtualBox for the system disks (which will be virtual drives) of the VMs, and I have some physical space/power needs to address (as I mentioned, this is at home).
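
    For reference, the raw-disk option being weighed here is created on the host with VBoxManage; a sketch, with the device name and VMDK path as placeholders (whole-disk raw access, run as a user allowed to read the device):

        VBoxManage internalcommands createrawvmdk -filename ~/VirtualBox/fileserver-disk1.vmdk -rawdisk /dev/sdb

    The resulting .vmdk is just a small pointer file attached to the VM like any other disk, so the 6 TB of data never gets wrapped in a disk image; snapshots, however, generally don't cover raw-disk attachments, which fits the plan of snapshotting only the system disks.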

  • Windows 2008 R2 AWS CloudFormation Elastic Beanstalk configuration

    - by Webmonger
    I'm looking for some configuration advice. I need a load-balanced Windows environment with shared media across all instances that host the app. The best explanation I can give is that there will be multiple Windows 2008 servers with IIS hosting the app, going through an ELB to load balance. Users must be able to upload content (images, video, etc.) to the site. When a user uploads media it needs to be kept in a shared location so all Windows IIS instances can access the files; I can't host the files on S3 because of the app architecture, so they need to be in a place where all IIS servers will have access. In addition I need to run an update on each IIS server instance that refreshes a local memory cache when SQL data is updated. I was thinking of a configuration like this:

        [ELB] - [Win 2008 IIS (multiple servers)] - [Win 2008 File & SQL Server (possibly RDS?)]

    Does this configuration make sense? If not, could you provide an idea of how I should configure it? Thanks in advance

  • How to properly set environment variables

    - by avorum
    I've recently started using Windows (having used Ubuntu up until now) and I find myself unable to properly set environment variables. Whenever I set them they don't seem to work. I've been going to Start - Edit Environment Variables for your Account and editing the PATH value in the upper half of the GUI. Here's what I've got so far:

        ;C:\Chocolatey\bin;C:\tools\mysql\current\bin;C:\Program Files (x86)\Git\bin;C:\Program Files\MySQL\MySQL Server 5.6\bin\;C:\Python33\Scripts;

    These are each the parent directories of the executables I'd like to be able to run by name from CMD, but mysql, git, and pip aren't being recognized. Am I doing something wrong syntactically, or at a general understanding level? I'd like to be able to run these commands without having to specify the full path to the executables every time.

    EDIT: The full PATH extracted from CMD:

        PATH=C:\Python33\;C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common;C:\Program Files (x86)\AMD APP\bin\x86_64;C:\Program Files (x86)\AMD APP\bin\x86;C:\Program Files\Common Files\Microsoft Shared\Windows Live;C:\Program Files (x86)\Common Files\Microsoft Shared\Windows Live;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Program Files (x86)\GTK2-Runtime\bin;C:\Program Files\WIDCOMM\Bluetooth Software\;C:\Program Files\WIDCOMM\Bluetooth Software\syswow64;C:\Program Files (x86)\ATI Technologies\ATI.ACE\Core-Static;C:\Program Files (x86)\Microsoft ASP.NET\ASP.NET Web Pages\v1.0\;C:\Program Files\Microsoft SQL Server\110\Tools\Binn\;C:\Program Files (x86)\QuickTime\QTSystem\;C:\Program Files (x86)\Common Files\Acronis\SnapAPI\;C:\Program Files (x86)\Java\jre7\bin;C:\Program Files (x86)\Windows Kits\8.1\Windows Performance Toolkit\;C:\Program Files (x86)\Microsoft SDKs\TypeScript\;C:\Program Files (x86)\MySQL\MySQL Utilities 1.3.4\; ;C:\Chocolatey\bin;C:\tools\mysql\current\bin

    I'm being forced to use Windows by my work environment; I don't enjoy the state of affairs.
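
    A quick sanity check from a freshly opened Command Prompt (environment variable edits only apply to processes started after the change) can show which of the expected entries the shell actually resolves; nothing here is specific to this machine beyond the tool names already in the question:

        where git
        where mysql
        where pip
        echo %PATH%

    If where reports that it cannot find the file, the directory either isn't in that session's effective PATH or the executable name differs (pip, for example, only appears in C:\Python33\Scripts after it has been installed).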

  • Setup.exe called from a batch file crashes with error 0x0000006

    - by Alex
    We're going to be installing some new software on pretty much all of our computers and I'm trying to set up a GPO to do it. We're running a Windows Server 2008 R2 domain controller and all of our machines are Windows 7. The GPO calls the following script, which sits on a network share on our file server. The script itself calls an executable that sits on another network share on another server. The executable immediately crashes with an error 0x0000006. The event log just says this:

        Windows cannot access the file for one of the following reasons: there is a problem with the network
        connection, the disk that the file is stored on, or the storage drivers installed on this computer;
        or the disk is missing. Windows closed the program Setup.exe because of this error.

    Here's the script (which is stored on \\WIN2K8R2-F-01\Remote Applications):

        @ECHO OFF
        IF DEFINED ProgramFiles(x86) (
            ECHO DEBUG: 64-bit platform
            SET _path="C:\Program Files (x86)\Canam"
        ) ELSE (
            ECHO DEBUG: 32-bit platform
            SET _path="C:\Program Files\Canam"
        )
        IF NOT EXIST %_path% (
            ECHO DEBUG: Folder does not exist
            PUSHD \\WIN2K8R2-PSA-01\PSA Data\Client
            START "" "Setup.exe" "/q"
            POPD
        ) ELSE (
            ECHO DEBUG: Folder exists
        )

    Running the script manually as administrator also results in the same error. Setting up a shortcut with the same target and parameters works perfectly. Manually calling the executable also works. Not sure if it matters, but the installer is based on dotNETInstaller; I don't know what version though. I'd appreciate any suggestions on fixing this. Thanks in advance!

    UPDATE: I highly doubt this matters, but the network share the script is hosted on is a shared drive, while the network share the script references for the executable is a shared folder. Also, both shares have Domain Computers listed with full access on both the sharing and security tabs. And PUSHD works without wrapping the path in quotes.
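
    If the GPO runs this as a computer startup script, it executes as the machine's SYSTEM/computer account rather than a logged-on user, so one hedged way to reproduce that context on a test machine is a SYSTEM shell; PsExec here is a Sysinternals tool, not something built into Windows:

        psexec -i -s cmd.exe
        rem then, inside the SYSTEM command prompt:
        pushd "\\WIN2K8R2-PSA-01\PSA Data\Client"
        start "" "Setup.exe" "/q"

    If this SYSTEM-context run fails the same way while an interactive run works, it points at access for the computer account (or the installer refusing to run non-interactively) rather than the script itself.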

  • IIS 7.5 - WFF and ARR farm management

    - by smackaysmith
    We have two test web farms (IIS 7.5). The Florida web farm has two ARR servers and two content servers. The ARR servers have WFF and NLB installed, and the ARR setup uses a shared config located on a file share. The content servers do not have WFF installed. There is one web farm, and it's managed on an ARR server.

    The Illinois web farm also has two ARR servers and two content servers. The ARR servers have WFF and NLB installed, and they use a shared config located on a share. One of the content servers has WFF installed, which makes it the controller; it's also the primary content server.

    Apparently, Illinois isn't properly configured. From what we've pieced together from various IIS.net articles and this post (http://ruslany.net/2010/07/web-farm-framework-2-0-overview/), the controller should be one of the ARR servers (like our Florida setup). The thing is, Florida's controller doesn't have a primary server, nor can you set one of the content servers as primary. It also doesn't have the management piece showing the Trace messages when you click the Servers node (from the IIS console, Server Farms/FLFarm/Servers - http://ruslany.net/wp-content/uploads/2010/07/WebFarm8.png). That management piece does exist in the Illinois farm, but that's a bad configuration.

    What are we missing, such that our Florida configuration has neither the primary and secondary content servers nor the management piece? I have looked for IIS role differences, but there are none.

  • High IOWait executing JBoss 3.2.7

    - by user64205
    Server details:

        Kernel: Linux wiq31 2.4.21-9.ELsmp #1 SMP Thu Jan 8 17:08:56 EST 2004 i686 i686 i386 GNU/Linux
        CPU: 4 x Intel(R) Xeon(TM) CPU 3.06GHz
        Memory: 1028520 kB
        JBoss version: 3.2.7

    Every time I try to start JBoss, the iowait values on all CPUs start to rise and the idle values start to fall. Before starting my JBoss application, the free command returns the following output:

                     total       used       free     shared    buffers     cached
        Mem:       1028520     966400      62120          0     187756     538928
        -/+ buffers/cache:      239716     788804
        Swap:      2044072     790672    1253400

    After starting my JBoss application, the free command returns the following output:

                     total       used       free     shared    buffers     cached
        Mem:       1028520    1007648      20872          0     187116     524084
        -/+ buffers/cache:      296448     732072
        Swap:      2044072     819096    1224976

    After starting my JBoss application, and before it answers any request, the Java process's /proc/PID/status file has the following values:

        State:      S (sleeping)
        SleepAVG:   27%
        Tgid:       24022
        Pid:        24022
        PPid:       21011
        TracerPid:  0
        Uid:        500     500     500     500
        Gid:        500     500     500     500
        FDSize:     256
        Groups:     500
        VmSize:     775200 kB
        VmLck:      0 kB
        VmRSS:      156752 kB
        VmData:     696752 kB
        VmStk:      36 kB
        VmExe:      21 kB
        VmLib:      710375 kB
        StaBrk:     0804f000 kB
        Brk:        095bb000 kB
        StaStk:     bffff8c0 kB
        ExecLim:    ffffffff
        Threads:    62
        SigPnd:     0000000000000000
        ShdPnd:     0000000000000000
        SigBlk:     0000000000000000
        SigIgn:     0000000000000000
        SigCgt:     1000000180015ccf
        CapInh:     0000000000000000
        CapPrm:     0000000000000000
        CapEff:     0000000000000000

    Is this behavior being caused by memory swapping, or is the amount of memory available on the server simply too small to run my application?
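
    The free output only shows swap totals, so a hedged way to see whether pages are actively moving to and from swap while JBoss starts (which is what drives iowait up) is to watch vmstat and iostat during startup; vmstat ships with procps and iostat with the sysstat package on RHEL-era systems like this one:

        # si/so columns = KB swapped in/out per second; watch them while JBoss boots
        vmstat 5

        # per-device utilisation, to confirm the wait is on the swap/OS disk
        iostat -x 5

    Sustained non-zero si/so, with ~790 MB already in swap on a 1 GB box, would point at memory pressure rather than at JBoss itself.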

  • How to configure multiple virtual hosts for multiple users on Linux/Apache2.2

    - by authentictech
    I want to set up a virtual hosting server on Linux/Apache 2.2 that allows multiple users to set up multiple website domains, as would be appropriate for commercial shared hosting. I have seen examples (from my then perspective as a shared hosting customer) that allow users to store their web files in their user home directory, with directories that correspond to the virtual host domain, e.g.:

        /home/user1/www/example1.com
        /home/user2/www/example2.com

    instead of using /var/www. Questions:

    1. How would you configure this in your Apache configuration files? (Don't worry about DNS.)
    2. Is this the best way to manage multiple virtual hosts? Are there others?
    3. What safety or security issues do you think I should be aware of in doing this?

    Many thanks, folks.

    Edit: If you want to only answer question 1, please feel free, as that is the most urgent to me at this moment and I would consider that an answer to the question. I have done it for myself since posting, but I am not confident that it's the best solution and I would like to know how an experienced sysadmin would do it. Thanks.
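
    For question 1, a minimal sketch of the Apache 2.2 name-based setup using the two example users/domains from the question (real shared-hosting deployments usually add per-site logging, suexec or per-user PHP handling, and tighter Options):

        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName example1.com
            DocumentRoot /home/user1/www/example1.com
            <Directory /home/user1/www/example1.com>
                Options -Indexes +FollowSymLinks
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

        <VirtualHost *:80>
            ServerName example2.com
            DocumentRoot /home/user2/www/example2.com
            <Directory /home/user2/www/example2.com>
                Options -Indexes +FollowSymLinks
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    On the security side (question 3), the Apache user needs traverse (execute) rights through /home/user1 and below, which is part of why many hosts move docroots out of /home or manage access with per-user groups.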
