Search Results

Search found 4640 results on 186 pages for 'unique'.

  • Help with Pure-FTPd

    - by hai
    I set up Pure-FTPd on FreeBSD behind a firewall. I configured passive mode FTP with the port range 50400-50600 and opened those ports on the firewall (both inbound and outbound). But when I try to connect with an FTP client, the directory listing fails. The client log shows:

        Status: Connecting to 210.245.89.95:21...
        Status: Connection established, waiting for welcome message...
        Response: 220---------- Welcome to Pure-FTPd [privsep] ----------
        Response: 220-You are user number 1 of 50 allowed.
        Response: 220-Local time is now 13:20. Server port: 21.
        Response: 220-IPv6 connections are also welcome on this server.
        Response: 220 You will be disconnected after 15 minutes of inactivity.
        Command: USER bk
        Response: 331 User bk OK. Password required
        Command: PASS
        Response: 230 OK. Current directory is /
        Command: SYST
        Response: 215 UNIX Type: L8
        Command: FEAT
        Response: 211-Extensions supported:
        Response: EPRT
        Response: IDLE
        Response: MDTM
        Response: SIZE
        Response: REST STREAM
        Response: MLST type;size*;sizd*;modify*;UNIX.mode*;UNIX.uid*;UNIX.gid*;unique*;
        Response: MLSD
        Response: ESTA
        Response: PASV
        Response: EPSV
        Response: SPSV
        Response: ESTP
        Response: 211 End.
        Status: Connected
        Status: Retrieving directory listing...
        Command: PWD
        Response: 257 "/" is your current location
        Command: TYPE I
        Response: 200 TYPE is now 8-bit binary
        Command: PASV
        Response: 227 Entering Passive Mode (210,245,88,98,138,1)
        Command: MLSD
        Error: Connection timed out
        Error: Failed to retrieve directory listing
        Status: Connecting to 210.245.88.98:21...
        Status: Connection established, waiting for welcome message...

    Help me.
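
    For what it's worth, the PASV reply above advertises 210.245.88.98 on port 138×256+1 = 35329, which falls outside the opened 50400-50600 range, a hint that the configured passive range isn't actually being applied. A minimal sketch of how that range is usually pinned in the FreeBSD port's config (the public IP is taken from the question; verify the file path and directives against your own pure-ftpd.conf):

        # /usr/local/etc/pure-ftpd.conf
        # Restrict passive-mode data connections to the firewalled range:
        PassivePortRange 50400 50600
        # If the server sits behind NAT, advertise the public address:
        ForcePassiveIP 210.245.89.95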

  • arrays in puppet

    - by paweloque
    I'm wondering how to solve the following Puppet problem: I want to create several files based on an array of strings. The complication is that I want to create multiple directories containing the files:

        dir1/
          fileA
          fileB
        dir2/
          fileA
          fileB
          fileC

    The problem is that file resource titles must be unique. So if I keep the file names in an array, I need to iterate over the array in a custom way to be able to prefix the file names with the directory name:

        $file_names = ['fileA', 'fileB']
        $file_names_2 = [$file_names, 'fileC']

        file { 'dir1': ensure => directory }
        file { 'dir2': ensure => directory }

        file { $file_names:
          path   => 'dir1',
          ensure => present,
        }
        file { $file_names_2:
          path   => 'dir2',
          ensure => present,
        }

    This won't work because the file resource titles clash. So I need to append e.g. the directory name to the file title; however, interpolating the array into a string causes it to be concatenated rather than treated as multiple files... arghh...

        file { "${file_names}-dir1":
          path   => 'dir1',
          ensure => present,
        }
        file { "${file_names_2}-dir2":
          path   => 'dir2',
          ensure => present,
        }

    How can I solve this without repeating the file resource itself? Thanks
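
    One pattern that fits here is wrapping the file resource in a defined type, so each invocation can prefix its titles with the directory name. A minimal, untested sketch (the defined-type name is made up, and in real manifests the paths would need to be absolute):

        define dir_files ($files) {
          # regsubst() applied to an array transforms every element, so this
          # turns ['fileA', 'fileB'] into ['dir1/fileA', 'dir1/fileB'].
          # Each file's path then defaults to its (now unique) title.
          $titles = regsubst($files, '^', "${name}/")
          file { $titles:
            ensure => present,
          }
        }

        dir_files { 'dir1': files => $file_names }
        dir_files { 'dir2': files => $file_names_2 }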

  • Why can we change our IP address?

    - by iamstupid
    I came across some websites offering to change your IP address so you can surf the net anonymously, with a different IP address and location. Most of this software isn't free, so I haven't tried it out yet. My question is: does this mean IP addresses are no longer unique or valid for identifying which computer sent or requested the information? I thought only the ISP could determine our IP, so can we really change it with some commercial software? Case: if I change my IP address and go to a website that is supposed to be banned by my country, will the ISP let me pass the check so I can browse the blocked website? Another question: as I understand it, visiting a website follows this flow:

        My Computer → ISP → Website → ISP → My Computer

    I'm not sure that's the exact flow, but I am sure that whichever website I want to visit, the traffic goes through my ISP, doesn't it? So if we change our IP, will the ISP record the new IP or the original (ISP-assigned) one? Sorry for my bad English.

  • Transferring domain from one registrar to another

    - by Macha
    I have a domain from my old web host, which was free with my hosting account. After a few years, I am moving to a VPS. Most of my other domains were registered with Namecheap, so for those it was just a matter of changing a few DNS records. However, given that my old host does not provide me with a DNS control panel, and I don't want to be paying a full hosting bill for just domains, I'm now looking into transferring this one. My old host says there will be a charge of $15 to them. Namecheap's page seems to imply the current registrar doesn't need to do anything, but the process also seems to be based on sending an email to the address listed in whois. Of course, my old host has WhoisGuard on the domain, so the only email on it is [email protected] (and not a unique [email protected], just [email protected]), which doesn't go to me. Again, there doesn't seem to be an option to disable this. So, is it a case of paying my old host's fee and then paying Namecheap again for the domain, or is there some other way to transfer it? (I'm not really sure which of the trilogy sites this is best for.)

  • Problems with 5.1 digital out on Ubuntu 12.04

    - by user895319
    I've recently bought a new PC, installed Ubuntu, and am now unable to get 5.1 digital sound working. Simple analogue stereo works fine on both the front and rear connectors. On my old box I connected the coax connection from my sound card to my surround-sound amplifier, set Settings → Sound to "Digital Stereo Duplex", and it worked. My old sound card doesn't fit in my new machine, so I'm using the built-in sound hardware. I'm connecting the combination output socket on the back of the PC to my surround amp via the same cable as before. The motherboard is an MSI H61M-P31 with a Realtek ALC887 sound chip. When I go to Settings → Sound I only see "Headphone Built-in Audio" and "Analogue Output Built-in Audio"; there are no digital options. The output from aplay -L is:

        default
            Playback/recording through the PulseAudio sound server
        sysdefault:CARD=PCH
            HDA Intel PCH, ALC887-VD Analog
            Default Audio Device
        front:CARD=PCH,DEV=0
            HDA Intel PCH, ALC887-VD Analog
            Front speakers
        surround40:CARD=PCH,DEV=0
            HDA Intel PCH, ALC887-VD Analog
            4.0 Surround output to Front and Rear speakers
        surround41:CARD=PCH,DEV=0
            HDA Intel PCH, ALC887-VD Analog
            4.1 Surround output to Front, Rear and Subwoofer speakers
        surround50:CARD=PCH,DEV=0
            HDA Intel PCH, ALC887-VD Analog
            5.0 Surround output to Front, Center and Rear speakers
        surround51:CARD=PCH,DEV=0
            HDA Intel PCH, ALC887-VD Analog
            5.1 Surround output to Front, Center, Rear and Subwoofer speakers
        surround71:CARD=PCH,DEV=0
            HDA Intel PCH, ALC887-VD Analog
            7.1 Surround output to Front, Center, Side, Rear and Woofer speakers
        dmix:CARD=PCH,DEV=0
            HDA Intel PCH, ALC887-VD Analog
            Direct sample mixing device
        dsnoop:CARD=PCH,DEV=0
            HDA Intel PCH, ALC887-VD Analog
            Direct sample snooping device
        hw:CARD=PCH,DEV=0
            HDA Intel PCH, ALC887-VD Analog
            Direct hardware device without any conversions
        plughw:CARD=PCH,DEV=0
            HDA Intel PCH, ALC887-VD Analog
            Hardware device with all software conversions

    While googling for ALC887 I've seen some references to "ALC887-VD Analog" and some to "ALC887-VD Digital". Does anyone know if I need to force it to change mode somehow? It's worth mentioning that when I set the output to 5.1 digital surround in Windows 7 on the same machine I still don't get any sound, so it's not a Linux-specific problem. Thanks for any help.
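
    Since every PCM listed above is "Analog", the S/PDIF output simply isn't being exposed by ALSA. A hedged first check (device and control names below follow the aplay output above, but verify them on your machine): if alsamixer shows an IEC958/S/PDIF control for this codec, unmute it and test the digital PCM directly:

        # Unmute the S/PDIF switch if one exists for this codec:
        amixer -c PCH sset 'IEC958' unmute

        # Then try playing a test tone straight to the digital PCM:
        speaker-test -D iec958:CARD=PCH,DEV=0 -c 2

    If no iec958 device exists at all, the board may simply not route S/PDIF to that combo jack; the Windows result described above points the same way.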

  • How to find out what is causing a slowdown of the application on this server?

    - by Jan P.
    This is not the typical Server Fault question, but I'm out of ideas and don't know where else to go. If there are better places to ask this, just point me there in the comments. Thanks.

    Situation: We have a web application that uses Zend Framework, so it runs in PHP on an Apache web server. We use MySQL for data storage and memcached for object caching. The application has a very unique usage and load pattern. It is a mobile web application where, every full hour, a cronjob looks through the database for users that have some information waiting or an action to do, and sends this information to an (external) notification server, which pushes these notifications to them. After the users get these notifications, they go to the app and use it, mostly for a very short time. An hour later, the same thing happens.

    Problem: In the last few weeks, usage of the application really started to grow. In the last few days we encountered very high load and a doubling of application response times during and after the sending of these notifications (so basically every hour). The server doesn't crash or stop responding to requests; it just gets slower and slower, and often takes 20 minutes to recover, until the same thing starts again at the full hour. We have extensive monitoring in place (New Relic, collectd) but I can't figure out what's wrong; I can't find the bottleneck. That's where you come in: can you help me figure out what's wrong, and maybe how to fix it?

    Additional information: The server is a 16-core Intel Xeon (8 cores with hyperthreading, I think) with 12GB RAM running Ubuntu 10.04 (Linux 3.2.4-20120307 x86_64). Apache is 2.2.x and PHP is version 5.3.2-1ubuntu4.11. If any configuration information would help analyze the problem, just comment and I will add it.

    Graphs: info, phpinfo(), APC status, memcache status; collectd: Processes, CPU, Apache, Load, MySQL, Vmem, Disk; New Relic: Application performance, Server overview, Processes, Network, Disks. (Sorry the graphs are GIFs and not the same time period, but I think the most important info is in there.)
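
    Since the slowdown recurs on a fixed schedule, one cheap trick is to snapshot server state right as the spike hits. A hedged sketch using standard tools on Ubuntu 10.04 (apache2ctl status assumes mod_status is enabled; the MySQL password is a placeholder), run from cron a minute before each full hour:

        #!/bin/sh
        # Capture what the system, MySQL and Apache are doing during the spike.
        {
            date
            uptime
            top -b -n 1 | head -25
            mysqladmin --user=root --password=PLACEHOLDER processlist
            apache2ctl status
        } >> /var/log/hourly-spike.log 2>&1

    Comparing a few of these snapshots against an off-peak one usually narrows the bottleneck to CPU, MySQL queries, or Apache worker saturation.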

  • Is there a historical computer peripherals or accessories museum or even just a current list?

    - by zimmer62
    Thinking about all the unique and different peripherals I've owned over the years, from ISA capture cards to parallel-port-controlled shutter glasses for 3D games, I've seen many, many computer accessories and peripherals come and go. The nostalgia of these things is a lot of fun. I tried to find some sort of historical timeline or list, but what mostly turned up was computers themselves. I'm more interested in the mice, the scanners, the weird adapters that shouldn't exist, short-run very rare products, strange devices from computer shows in the '80s and '90s... hardware you might find in a geek's basement that would be completely useless now, but was the coolest thing around when it was new. Examples would be a drawing tablet I had for my TI-99 computer, the audio tape player accessory for a C64 that let you save files to audio tapes, or an ISA card that did the same for PCs hooked up to a VCR. Remember that IBM PCjr upgrade kit that added a floppy drive, more memory, and the AT switch in the back? I'd love to find either a wiki or an already-assembled list containing many of these weird (or common) accessories. I've had so many over the years that I suppose I could start a wiki here if such a list doesn't already exist.

  • Outlook 2010 on WinXP runs once then refuses to run again until reboot

    - by msorens
    Since I installed Outlook 2010 on a new machine (WinXP Pro SP3) a couple of months back, I have had an issue that is quite annoying: if I close Outlook and then attempt to restart it, I get a small pop-up saying only "Cannot start Microsoft Outlook". I found one workaround, but not a terribly practical one: rebooting. If I reboot and then launch Outlook, it opens fine. Here is what I know:

    - Since I can run Outlook just fine after a reboot, I do not see that a system restore, an OS reinstall, or the like would help.
    - I tried "outlook.exe /resetnavpane" and "outlook.exe /safe", but those give the same error.
    - There are no entries in the event log.
    - There is no instance of Outlook in the process list once I close the program, so this does not seem to be an alias for "Outlook is already running".
    - As far as I have found, my situation is unique among reports of similar incidents: I have uncovered no other reports saying Outlook would run fine on first launch, or that a reboot would again allow it to run.

    Suggestions?

  • Drupal 7: One-time user account

    - by Noob
    I'm going to create a survey in Drupal 7 with the webform module, installed on a Debian system that I can adapt in any way needed. The users doing the survey (personally known to me, approx. 120) will walk into a room and complete the survey in browsers on different computers. After that, they'll leave the room, other persons will enter, complete the survey on the same computers, and so on. Each user may enter only one submission. The process needs to be anonymous, i.e. I mustn't have any idea of who made which submission. My current solution is to generate random one-time passwords and hand out one password per user (without noting who got which password). Within the survey there is a password field where the one-time password is entered; webform checks the value for uniqueness. I'll get the data via CSV or Excel and verify the passwords manually in Excel by comparing them to the list of valid passwords. The problem is: I don't like the idea of manually generating the password list, copying it to Excel and doing a manual check. That's fine for one-time use, but we're going to repeat the survey every once in a while. I'd rather generate one-time logins (like user0001/fdlkjewf, user0002/dfrefnnr, ...) for each survey, hand them out to the users, and let Drupal/Debian/whatever check whether a submission is valid or not. Do you have any idea how to batch-generate about 120 users with one-time passwords in Drupal 7 and verify that each user may submit the form only once? Or do you have an even better idea of how to accomplish the task within the intranet? Thank you for your help.
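
    For the batch-generation half, a hedged sketch with drush (the user-create syntax below is the Drupal 7-era one; verify it against your drush version). Webform's own per-user submission limit ("limit each user to 1 submission") would then cover the only-once rule:

        #!/bin/sh
        # Run from the Drupal root (or add -r /path/to/drupal to each call).
        # Creates user0001..user0120 with random passwords and prints the
        # credential list for handing out on paper slips.
        for i in $(seq -w 1 120); do
            pass=$(openssl rand -base64 9)
            drush user-create "user0$i" --password="$pass" > /dev/null
            echo "user0$i $pass"
        done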

  • Servers/RAM for a social network: how many?

    - by Marty
    I am launching my social network soon and am looking into hosting. The question I am lost on is: do I need separate servers for the web app, the database, and image handling (since there is photo sharing), or does one server handle it all? Also, is more RAM always better? Is 50GB of RAM better than having 8GB? EDIT: It is PHP (CodeIgniter) and MySQL for now (I may switch to a NoSQL DB later if demand calls for it), and I will also be using memcache. Concept-wise it is similar to Yelp: geographically based, with lots of user content and image sharing, plus live feeds and privacy levels. The user plan is an open question; without testing demand I can't give a number. But the concept is unique, with no one out there offering the set of features I am releasing, so it could grow. Ideally I want to plan for handling about 1-2 million views/month from launch. If it goes beyond that, then I will upgrade.

  • Reliable custom Windows shortcut keys?

    - by Peter Baer
    I have global Windows shortcut keys assigned to several different cmd.exe instances. I do this by creating shortcuts to cmd.exe on my desktop, and assigning each one a unique shortcut key (for example, CTRL + SHIFT + U). Pretty basic stuff. I'm using Win2K8 (R1 and R2). This works just fine... most of the time. But with infuriating regularity, sometimes it doesn't. Or it will work with a long delay (many seconds). It doesn't matter what app currently has focus (it can even be one of the command prompts). It doesn't matter what keys I assign (I've tried a few variations of WIN, CTRL and SHIFT). I did notice that this is often, but not always, correlated with explorer.exe struggling in some way or another (say, an explorer window opened to a file share that's unavailable, or an app being unresponsive, or whatever). In other words the shortcut key handling appears to be very sensitive to unrelated system activity. Note that whenever I have this problem I can always successfully ALT + TAB to the window I want to get to, but that's tedious. I use the shortcuts to these command windows hundreds of times a day so even a 1% failure rate becomes really annoying. Is there a way to fix this, or is there some third-party utility out there that will RELIABLY intercept custom key combinations to bring focus to whatever apps I want, in a way that is independent of other system activity? ADDENDUM: There is a property of the Windows shortcuts that I would not want to lose if switching to a third-party hotkey tool: Windows shortcuts are idempotent. Once you've launched a shortcut to some app, pressing the shortcut key combo again takes you to the already launched process - it does not launch a new process.
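
    For the third-party route, AutoHotkey is the usual suggestion, and its WinExist/WinActivate pattern preserves the idempotence described in the addendum. A hedged sketch (AutoHotkey v1 syntax; the window title "build console" is a made-up example):

        ; Ctrl+Shift+U: focus the existing window, or launch it if absent.
        ^+u::
        IfWinExist, build console
            WinActivate
        else
            Run, %ComSpec% /k title build console
        return

    AutoHotkey installs its own low-level keyboard hook, which is generally reported to be less sensitive to a busy explorer.exe than shell shortcuts, though that claim is worth testing on the affected machines.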

  • Where is '/host' declared for mount in Wubi (Ubuntu 9.10)?

    - by Pedro
    I'm using Wubi (Ubuntu 9.10), and I couldn't find where the '/host' mountpoint is declared for mounting. There's no entry in fstab, but it's listed in /proc/mounts and mounted at boot time. Any ideas?

        pedroel@ubuntu:~$ cat /proc/mounts
        rootfs / rootfs rw 0 0
        none /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
        none /proc proc rw,nosuid,nodev,noexec,relatime 0 0
        udev /dev tmpfs rw,relatime,mode=755 0 0
        /dev/sda1 /host fuseblk rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other,blksize=4096 0 0
        /dev/loop0 / ext4 rw,relatime,errors=remount-ro,barrier=1,data=ordered 0 0
        none /sys/kernel/security securityfs rw,relatime 0 0
        none /sys/fs/fuse/connections fusectl rw,relatime 0 0
        none /sys/kernel/debug debugfs rw,relatime 0 0
        none /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
        none /dev/shm tmpfs rw,nosuid,nodev,relatime 0 0
        none /var/run tmpfs rw,nosuid,relatime,mode=755 0 0
        none /var/lock tmpfs rw,nosuid,nodev,noexec,relatime 0 0
        none /lib/init/rw tmpfs rw,nosuid,relatime,mode=755 0 0
        /dev/loop1 /home/pedroel/Downloads ext4 rw,relatime,errors=remount-ro,barrier=1,data=ordered 0 0
        binfmt_misc /proc/sys/fs/binfmt_misc binfmt_misc rw,nosuid,nodev,noexec,relatime 0 0
        gvfs-fuse-daemon /home/pedroel/.gvfs fuse.gvfs-fuse-daemon rw,nosuid,nodev,relatime,user_id=1000,group_id=1000 0 0
        /dev/mapper/isw_efhafcifi_RAID_Volume01 /media/RAID_D fuseblk rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096 0 0

        pedroel@ubuntu:~$ cat /etc/fstab
        # /etc/fstab: static file system information.
        #
        # Use 'blkid -o value -s UUID' to print the universally unique identifier
        # for a device; this may be used with UUID= as a more robust way to name
        # devices that works even if disks are added and removed. See fstab(5).
        #
        proc                           /proc                    proc  defaults                 0 0
        /host/ubuntu/disks/root.disk   /                        ext4  loop,errors=remount-ro   0 1
        /host/ubuntu/disks/pedro.disk  /home/pedroel/Downloads  ext4  loop,errors=remount-ro   0 1
        /host/ubuntu/disks/swap.disk   none                     swap  loop,sw                  0 0
        /dev/fd0                       /media/floppy0           auto  rw,user,noauto,exec,utf8 0 0

    Thanks in advance,
    Pedro
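
    A hedged pointer for the hunt: under Wubi the loop-mounted root lives on the Windows partition, so /host has to be mounted before fstab is ever read. That mount is done in the initramfs by the scripts that Wubi's lupin package installs, not by fstab. Something like the following should locate the logic (paths are the usual ones on 9.10, but may differ):

        # Search the initramfs scripts for the /host mount logic:
        grep -r "/host" /usr/share/initramfs-tools/scripts/ 2>/dev/null
        # Confirm the lupin packages are what's providing it:
        dpkg -l | grep lupin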

  • Automated Linux VMs on Hyper-V 2012

    - by Mick
    I have a requirement to create a ton of Linux VMs for our customers (we run managed infrastructure) on Hyper-V 2012 in the coming months, and I have an issue with automating it. Here is how I need it to work:

    1. A user accesses their web page and creates a VM.
    2. The VM is created with a unique IP and name.
    3. The user logs in over SSH.

    I know Hyper-V quite well, can work with PowerShell, and am a C# programmer, so the development side of things is taken care of. I also know enough about Linux to be at least competent: I have used it on and off for a number of years but not done anything enterprise-level with it. All this can be done easily by manual processes, but I need to be able to script or program this to automate it, as there could be hundreds of VMs being created, and I don't know how. My first thought is to have a database of pre-generated names and IPs, but I don't know how to get a Linux VM to boot up and grab one from the database... I suppose a kickstart script would take care of it, but I don't know what to do from there. Here is what is bouncing around in my head:

    1. Create a standard Linux build. (Easy to do.)
    2. Someone clicks "Create VM"; I pull a name and IP from the database and write them into a kickstart script. (Easy to do.)
    3. Open the template VHDX file, copy in the script, and save it. (Not sure if possible.)
    4. The user boots up the new VM and the kickstart script gives it the name and IP I assigned.

    My problem is that I don't know how to open a VHDX file and insert a kickstart script into it... I can't figure it out. I am reaching here and this solution may be miles off... I am more used to creating Windows VMs with scripts and so on, which I am more familiar with... any help would be appreciated. Thanks, Mick
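
    One catch with step 3 is that Windows can't write to the ext filesystem inside a Linux VHDX. A common workaround is to leave the template disk untouched and feed the per-VM kickstart in on a small ISO instead: Anaconda-based installers (RHEL/CentOS) automatically pick up a ks.cfg from a volume labelled OEMDRV. A hedged PowerShell sketch (Hyper-V 2012 cmdlets; the mkisofs binary, all paths, and the memory size are assumptions):

        # Build a one-file ISO carrying this VM's kickstart:
        New-Item -ItemType Directory -Force -Path "C:\ks\vm01" | Out-Null
        Set-Content -Path "C:\ks\vm01\ks.cfg" -Value $kickstartText
        & "C:\tools\mkisofs.exe" -o "C:\ks\vm01.iso" -V "OEMDRV" "C:\ks\vm01"

        # Clone the template disk and create the VM with the ISO attached:
        Copy-Item "C:\Templates\linux-template.vhdx" "C:\VMs\vm01.vhdx"
        New-VM -Name "vm01" -MemoryStartupBytes 1GB -VHDPath "C:\VMs\vm01.vhdx"
        Set-VMDvdDrive -VMName "vm01" -Path "C:\ks\vm01.iso"
        Start-VM -Name "vm01"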

  • Looking for some IIS redirect help/ideas

    - by CoreyT
    Right now we have a site with a LOT of static ASP pages, such as www.site.com/123.asp. This is due to how our current CMS builds its pages by default. I don't have an exact count, but we have roughly 6000 ASP files in the site right now. We are in the middle of a redesign and restructuring of the site, and are looking to migrate to SEO-friendly URLs. The problem we're facing is how to redirect the old pages to the new friendly URLs. I know how to do redirects; that is not the issue here. The questions I'm coming up with are listed below:

    1. Is there a limit to the number of redirects in IIS?
    2. Would having even a few thousand redirects affect IIS performance?
    3. My understanding is that we would not be passing along page rank to the new URLs; is that true? (Not a major question; I can ask on an SEO forum if nobody here is sure.)
    4. Would something like the IIS URL Rewrite 2 module for IIS 7 help us out? Or would I still need to define several thousand unique redirects in it? (See the sketch below.)

    Our server right now is running Server 2003; however, in the redesign I would be open to migrating to Server 2008 R2 if there is a good case for it (i.e. the URL Rewrite module). Thanks for any guidance or help. I have been looking for a good way to do this for a while now and keep coming up with things that sound problematic and bad (such as having 6000 redirects).
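
    On question 4: URL Rewrite 2 would still need one map entry per page, but not one rule per page. A hedged web.config sketch of the usual rewrite-map pattern (the map entry is a made-up example; the full map can be generated from the CMS database):

        <rewrite>
          <rewriteMaps>
            <rewriteMap name="LegacyRedirects">
              <add key="/123.asp" value="/products/some-friendly-url" />
              <!-- one entry per old page, generated programmatically -->
            </rewriteMap>
          </rewriteMaps>
          <rules>
            <rule name="Legacy ASP redirects" stopProcessing="true">
              <match url=".*" />
              <conditions>
                <add input="{LegacyRedirects:{REQUEST_URI}}" pattern="(.+)" />
              </conditions>
              <action type="Redirect" url="{C:1}" redirectType="Permanent" />
            </rule>
          </rules>
        </rewrite>

    Using redirectType="Permanent" (a 301) is also the variant generally reported to carry most link equity to the new URLs, which bears on question 3.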

  • central apache log analysis of many hosts

    - by Jason Antman
    We have 30+ Apache httpd servers, and are looking to perform analysis on the logs both for historical trending and near-real-time monitoring/alerting. I'm mainly interested in things like error rates (4xx/5xx), response time, and overall request rate, but it would also be very useful to pull out more compute-intensive statistics like unique client IPs and user agents per unit of time. I'm leaning towards building this as a centralized collector/server/storage, and am also considering the possibility of storing non-Apache logs (i.e. general syslog, firewall logs, etc.) in the same system. Obviously a large part of this will probably have to be custom (at least the connection between pieces and the parsing/analysis we do), but I haven't been able to find much information on people who have done stuff like this, at least at shops smaller than Google/Facebook/etc. who can throw their log data into a hundred-node compute cluster and run Map/Reduce on it. The main things I'm looking for are:

    - All open source.
    - Some way of collecting logs from the Apache machines that isn't too resource-intensive, and that transports them relatively quickly over the network.
    - Some way of storing them (NoSQL? key-value store?) on the backend for a given amount of time (and then rolling them up into historical averages).
    - In the middle of this, a way of graphing in near-real-time (probably also with some statistical analysis on it), and hopefully alerting off of those graphs.

    Any suggestions/pointers/ideas, either for "products"/projects or descriptions of how other people do this, would be greatly helpful. Unfortunately, we're not exactly a new-age-y devops shop: lots of old stuff, heterogeneous infrastructure, and strained boxes.
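
    For the collection leg, the low-resource option most shops start with is rsyslog's file-tailing module on each web server, relaying to a central box. A hedged client-side sketch (legacy rsyslog directive syntax of that era; the collector hostname is a placeholder):

        # /etc/rsyslog.d/apache-ship.conf
        $ModLoad imfile
        $InputFileName /var/log/apache2/access.log
        $InputFileTag apache-access:
        $InputFileStateFile stat-apache-access
        $InputRunFileMonitor
        # Relay over TCP (the @@ prefix) to the central collector:
        *.* @@loghost.example.com:514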

  • How can I redirect/forward all the UDP/TCP traffic on one interface to another interface in OpenWrt

    - by Sina Sou
    I am new to networking, and I have a measurement device (D) that periodically sends all its readings over a few UDP multicast sockets (with different multicast IP addresses and different port numbers). The device also listens on a TCP socket on port 7234, used to modify its configuration. Since the device has only an Ethernet interface for communication and I want to make it work wirelessly, I decided to attach a very small OpenWrt-based wireless router to the device (D) and redirect/forward all the network traffic (both UDP and TCP) to the router's wireless interface. To simplify the problem, assume that the device (D) establishes the following sockets at the same time:

    - UM_SOCK1: UDP multicast socket on 239.1.2.3, port 50620
    - UM_SOCK2: UDP multicast socket on 239.1.2.4, port 50640
    - TC_SOCK3: TCP, DHCP/static IP address 192.168.1.200, port 7234

    The device (D) is connected to the OpenWrt router (R) via interface en01 (Ethernet), and the router has its own wireless interface on wlan01. I want all the traffic to pass between the two interfaces bidirectionally:

        en01 <----> wlan01

    What would be the minimum iptables (or similar) commands needed to make this possible? I am even wondering whether the redirection could be made simpler: rather than being IP-address-based (not desirable if the device is connected via DHCP), I would prefer it to be interface (en01) based, or MAC-address based (the best solution, since my device has a unique MAC address). Thanks
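
    Given that the device sends multicast, plain NAT/port-forward rules won't carry it across a routed boundary; the usual minimal-effort answer on OpenWrt is to bridge the wired port into the same layer-2 segment as the Wi-Fi, so the multicast groups and the TCP port pass through untouched. A hedged UCI sketch (interface names are assumptions; the question's en01 would take the place of eth0 here):

        # Make the LAN a bridge containing the wired port; the Wi-Fi AP
        # joins the same bridge via its 'option network lan' setting.
        uci set network.lan.type='bridge'
        uci set network.lan.ifname='eth0'
        uci commit network
        /etc/init.d/network restart

    If the two sides must stay routed instead, a multicast routing helper such as igmpproxy (packaged for OpenWrt) is the usual next step.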

  • How to manage unprivileged administration of system services using Debian?

    - by ypnos
    At our lab, we have several services handled by different PhD students (like myself). Fluctuation is high, and people do the job alongside their research duties. Until now, services were running on different machines with different OS setups, which can turn into administration hell quickly. We want to consolidate our service setup. Our main idea is that the people responsible for the services should not meddle with the underlying system anymore. Apart from core systems like NFS and Kerberos, a typical service is already able to run as non-root. I'm talking about Apache, MySQL, Subversion, mail with Open-Xchange, and so on. Redirecting privileged ports is also no issue (see the example below). What is left is the configuration of the service and its payload. One scenario we envisioned is that every service has its own user and home directory, accessible by the corresponding admins. Backup and fallback of the service is easy, as everything needed for the service to run is found in one place. Are there established ways to create such a setup? Does a mostly uniform method exist to make services find their files (other than in system directories) while still using the corresponding Debian packages? Are there any catches with our idea that we may have overlooked? Would you maybe claim that virtualization is the answer to our problem? (In our view, it wouldn't help us keep the system setup strictly separated from the service setup.) Thank you for any advice!
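
    The port-redirect piece mentioned above is typically a one-liner per service; a hedged iptables example sending privileged port 80 to an unprivileged 8080 where a service user's daemon listens:

        # Packets arriving for port 80 are redirected to 8080 before routing:
        iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8080
        # Repeat in the OUTPUT chain for connections from the host itself, if needed:
        iptables -t nat -A OUTPUT -p tcp -o lo --dport 80 -j REDIRECT --to-ports 8080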

  • Painless deployment of a Django app (port from Drupal). Do I have to switch to a VPS?

    - by Monden
    I'm about to complete porting my Drupal-based community site to Django. My Drupal site has been hosted on shared hosting (DreamHost) for the last 4 years, and stability and performance have been satisfactory. The site gets around 5k unique visitors with 70-80k page views a day. This will be my first deployment of a Django application, and I'm not comfortable with managing my own VPS. I use Ubuntu as a dev server, but I don't have experience with it in a production environment. I do have an unrelated internal CRM app (Django) that I host with WebFaction; however, security and performance aren't an issue there, as it's only accessed by 5 people. Unfortunately, I don't have much time to learn and maintain a VPS at this moment. I would like to know: can I host a site with this much traffic in WebFaction's shared environment? How would performance differ in comparison to Linode or Slicehost? Google App Engine isn't an option at the moment, as I'll be using my current PostgreSQL database.

  • Windows/IIS Hosting :: How much is too much?

    - by bsisupport
    I have 4 Windows 2003 servers running IIS 6. These servers host a bunch of unique web sites (in that they are all different in build/architecture/etc.). The code behind these sites ranges from straight HTML to classic ASP and the 1.1/2.0/3.x flavors of .NET. Some (most) of the sites use a SQL backend, which is hosted on one or two different servers, not the IIS servers themselves. There is no virtualization on these servers and no load balancing for these particular sites. The problem I'm running into is coming up with some baseline metrics, basically a "baseline score", to know when a web server has reached its hosting limit. Today, only some basic information about each server is used: how much bandwidth the server pumps out, hard drive space availability, and basic (very basic) RAM and CPU utilization (what it looks like at peak traffic times). I would be grateful if those of you who are 1000x smarter than I am could indulge me with your methods of managing IIS environments, whether performance-monitoring specifics, "score" determination as I'm trying to do, or the obvious combination of both. Thanks in advance.
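
    For the performance-monitoring half, a hedged starting set of counters that can run unattended on Server 2003 (typeperf ships with the OS; the counter names are the standard English ones, and the 15-second interval is arbitrary):

        typeperf "\Processor(_Total)\% Processor Time" ^
                 "\Memory\Available MBytes" ^
                 "\Web Service(_Total)\Current Connections" ^
                 "\ASP.NET\Requests Queued" ^
                 -si 15 -o iis_baseline.csv -f CSV

    Trending those against bandwidth over a few weeks gives a per-server saturation picture that a one-off peak-time look can't.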

  • IIS 7 and ASP.NET State Service Configuration

    - by Shawn
    We have 2 web servers, load balanced, and we wanted to get away from sticky sessions for obvious reasons. Our attempted approach is to use the ASP.NET State Service on one of the boxes to store the session state for both. I realize it's best to have a server dedicated to storing sessions, but we don't have the resources for that. I've followed these instructions to no avail. The session still isn't being shared between the two servers, and I'm not receiving any errors. I have the same machine key on both servers, and I've set the application ID to a unique value that matches between the two. Any suggestions on how I can troubleshoot this issue? Update: I turned on the State Service on my local machine and pointed both servers at my local machine's IP address, and it worked as expected: the session was shared between both servers. This leads me to believe the problem might be that I'm not using a standalone server as my state service. Perhaps the problem is that I am using the address 127.0.0.1 on one server and a different IP address on the other. Unfortunately, when I try to use the network IP address instead of localhost, the connection doesn't seem to work from the host server. Any insight on whether my suspicions are correct would be appreciated.
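
    The "works against another box but not against itself over the network IP" symptom is consistent with the State Service's default of only accepting loopback connections: there is a documented AllowRemoteConnection value (DWORD, set to 1, then restart the service) under HKLM\SYSTEM\CurrentControlSet\Services\aspnet_state\Parameters that governs this. For reference, a hedged sketch of the web.config section both servers would share (the state-server IP is a placeholder):

        <system.web>
          <sessionState mode="StateServer"
                        stateConnectionString="tcpip=192.168.1.50:42424"
                        timeout="20" />
          <!-- plus a machineKey with identical validationKey/decryptionKey
               on both servers, as already described above -->
        </system.web>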

  • Windows/BIOS showing wrong RAM in Gateway netbook?

    - by Ael
    I've seen similar problems, but this one seems slightly unique, maybe... I have a Gateway LT21 netbook running Windows 7. I upgraded the RAM from 1GB to 2GB. It didn't work, so I updated to the latest BIOS, 1.25; then it worked, and 2GB was recognized in the BIOS and in Windows. Everything was fine. Today, though, the machine seemed slow and, to my surprise, both the BIOS and Windows show only 1GB. :/ I've run the memory diagnostic: no errors. I entered the BIOS and hit Exit and Save: still 1GB. I took out the RAM and put it back: still 1GB. :/ CPU-Z shows 2048MB (2GB) of RAM. Further testing: if I put in the old 1GB stick, turn the machine on, then put in the new 2GB stick again, the BIOS and Windows show 2GB of RAM. BUT, after any restart (even from the BIOS), it goes back to showing the incorrect 1GB again. :// (There are very few options in the BIOS, and none appear memory-related.) Any ideas?

  • Office 2010 OCT Outlook Filepaths

    - by vlannoob
    I'm playing around with customizing Office 2010 installs on my network. Normally I just do a full manual install, but as the environment grows (and the lazier I get), it's becoming a pain to do it manually every time. I've read up on and downloaded the Office 2010 OCT tool, and it looks relatively straightforward, with one exception: the Outlook profile. I can get around it by just leaving everything as default (or not enabling offline use), but I'd like to customise it slightly so that it's all set up no matter who logs onto the PC. The only issue I have, and my question, is: in the OCT's Outlook section, what do you enter into the path and filename fields for the OST file and the offline address book settings, under the "Enable offline use" section? I'm sweet with everything else, just that one section, and I think if I bugger that one up it will kill the whole Outlook profile. It would need to point into each user's unique profile path, correct? I have a fair idea of what should be there, but I'm struggling with the correct syntax. I know this is a stupid question... but it's late in the day and my brain is fried ;) As usual, any and all help/assistance is appreciated ;)
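
    A hedged sketch of the syntax in question: these OCT fields are generally documented as accepting environment variables, which is what keeps the files per-user. On Vista/7 clients the default locations look like the following (offered as the usual shape to verify against your OCT build, not as confirmed syntax; on XP the equivalent base is %userprofile%\Local Settings\Application Data):

        Path and filename of OST:  %localappdata%\Microsoft\Outlook\%username%.ost
        Directory path of OAB:     %localappdata%\Microsoft\Outlook\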

  • Error code 1005 (errno: 121) upon create table while restoring MySQL database from a dump

    - by Jonathan
    I have a Linux production machine and a Win7 64-bit dev machine. My workflow includes dumping the production MySQL database on the Linux machine and restoring it into my local MySQL database on the Windows machine (using SQLyog). This worked fine for a long time. Following some trouble, I formatted and reinstalled my Windows dev machine. Since then, I'm unable to restore the DB on it. I keep receiving the following error:

        Query: CREATE TABLE `auth_group` (
          `id` int(11) NOT NULL auto_increment,
          `name` varchar(80) collate utf8_unicode_ci NOT NULL,
          PRIMARY KEY (`id`),
          UNIQUE KEY `name` (`name`)
        ) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci

        Error occured at: 2010-06-26 17:16:14
        Line no.: 30
        Error Code: 1005 - Can't create table 'ap_site.auth_group' (errno: 121)

    Notice that this is the first CREATE TABLE statement in the SQL dump file. The error occurs both on MySQL Community Server 5.1.41 and 5.1.48, and with SQLyog Community 8.0.4 and 8.5.1. I really don't know what's different in my configuration between before the reinstall and now, or why it has this effect. Restoring from an SQL dump is something I need to keep doing, so I need a permanent fix and not a tailored workaround.
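
    For context, errno 121 is MySQL's "Duplicate key on write or update" (per perror 121), and with error 1005 on CREATE TABLE it usually means InnoDB's data dictionary already holds an entry with a clashing name, often left behind by an earlier failed import. A hedged first diagnostic, assuming the ap_site schema can be thrown away and rebuilt:

        -- Look for the dictionary's complaint (LATEST FOREIGN KEY ERROR section):
        SHOW ENGINE INNODB STATUS;

        -- Drop and recreate the target schema before re-importing the dump:
        DROP DATABASE ap_site;
        CREATE DATABASE ap_site CHARACTER SET utf8 COLLATE utf8_unicode_ci;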

  • Ways to parse NCSA combined based log files

    - by Kyle
    I've done a bit of site: searching with Google on Server Fault, Super User and Stack Overflow. I also checked non-site-specific results and didn't really see a question like this, so here goes... I did spot this question, related to grep and awk, which has some great knowledge, but I don't feel the text-qualification challenge was addressed. This question also broadens the scope to any platform and any program. I've got Squid or Apache logs based on the NCSA combined format. When I say "based", I mean the first n columns in the file are per the NCSA combined standard; there might be more columns with custom stuff. Here is an example line from a Squid combined log:

        1.1.1.1 - - [11/Dec/2010:03:41:46 -0500] "GET http://yourdomain.com:8080/en/some-page.html HTTP/1.1" 200 2142 "-" "Mozilla/5.0 (Windows; U; Windows NT 6.1; C) AppleWebKit/532.4 (KHTML, like Gecko)" TCP_MEM_HIT:NONE

    I'd like to be able to parse n logs and output specific columns, for sorting, counting, finding unique values, etc. The main challenge, what makes it a little tricky, and why I feel this question hasn't yet been asked or answered, is the text-qualification conundrum: fields like the request and the user agent are quoted strings that contain spaces, so naive whitespace splitting breaks them apart. When I spotted asql in the grep/awk question, I was very excited, but then realised that it didn't support combined out of the box, which is something I'll look at extending, I guess. Looking forward to answers and learning new stuff! Answers don't have to be limited to a platform or program/language. For the context of this question, the platforms I use most are Linux and OS X. Cheers
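
    To make the text-qualification point concrete, here is a minimal sketch in Python (standard library only) that splits NCSA combined lines with a regex honoring the quoted fields, keeps any trailing custom columns intact, and tallies unique client IPs:

        import re
        import sys
        from collections import Counter

        # Groups: host, ident, user, time, request, status, bytes,
        # referer, user-agent, and whatever custom fields follow.
        NCSA = re.compile(
            r'(\S+) (\S+) (\S+) \[([^\]]+)\] "([^"]*)" (\S+) (\S+)'
            r' "([^"]*)" "([^"]*)"\s*(.*)'
        )

        ips = Counter()
        for line in sys.stdin:
            m = NCSA.match(line)
            if m:
                ips[m.group(1)] += 1   # group 1 is the client address

        for ip, count in ips.most_common(10):
            print(ip, count)

    Swapping m.group(1) for another group index gives sorting/counting on any other column, and the same regex handles both the Apache and Squid variants shown above.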

  • How to eliminate the domain suffix off my user profile folder when migrating to a new domain?

    - by Jerry Dodge
    We have just upgraded a decade-old SBS 2003 server to a brand-new SBS 2011 machine. During the process, over 30 other client/server machines on that domain also needed to be disjoined from the old domain and rejoined to the new one. The domains have different names, and nothing was migrated; the new one was built from scratch. Since each client machine had very unique user profiles under the old domain, we needed to make sure these were all backed up and carried over to the new domain. For the most part, profiles were migrated with no hassle, just by renaming the user profile folder names. However, in one case, when I log in to my domain account, Windows creates a profile folder with a suffix of the new domain name. I have replaced all the files in the profile's root which begin with "ntuser" with the files of the new profile. The only problem is that half the applications can't find their data, because the folder name is different. How can I change this folder name and maintain this profile on the new domain? I have deleted every user account (except admin), deleted their profiles/folders, removed them from the registry, and made sure every trace of the accounts was gone; the computer was basically a dummy with only an admin account. Then I log into the machine under my new domain user account (same username as on the old domain), and it creates a profile folder named with my username plus a suffix of the new domain name. The client machine is Windows 7 Ultimate, the old server was SBS 2003, and the new server is SBS 2011.
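
    For reference, the folder-to-profile mapping lives in the ProfileImagePath value under each account's SID key in the ProfileList registry key, so the usual repair is to rename the folder and repoint that value (from a different admin account, while the profile is not loaded). A hedged sketch (the SID and username are placeholders):

        reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList" /s

        rem After renaming C:\Users\jdoe.NEWDOMAIN to C:\Users\jdoe:
        reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList\S-1-5-21-xxxx-1001" ^
            /v ProfileImagePath /t REG_EXPAND_SZ /d "C:\Users\jdoe" /f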
