Search Results

Search found 1785 results on 72 pages for 'a round tuit'.


  • Debian, How to convert filesystem from ISO-8859-1 into UTF-8?

    - by Johan
    I have an old PC running Debian stable that is in need of an upgrade. The problem is that it uses latin1 (ISO-8859-1) for everything, and since the rest of the world has moved to UTF-8 I plan to convert this computer as well. For this question I will focus on the files served with Samba, some of which have latin1 characters in the filenames (like åäö). My plan is to move all the data off this old computer onto a brand new one that is also running Debian stable (but with UTF-8). Does anybody have a good idea? Thanks, Johan. Note: later I plan to use iconv to convert the content of some files, with something like: iconv --from-code=ISO-8859-1 --to-code=UTF-8 iso.txt > utf.txt However, I don't know a good way to convert the filesystem itself. Note: normally I just scp from one computer to the next, but then I end up with latin1 characters in the UTF-8 filesystem... Update: I did a small test round with a handful of files (with funny chars in the filenames), and it looks like it works: convmv -r -f ISO-8859-1 -t UTF-8 * So it was only a matter of running it again with --notest: convmv -r -f ISO-8859-1 -t UTF-8 --notest * Nothing more to it.
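
    A minimal sketch of the whole migration, assuming the share lives under a hypothetical /srv/samba on both machines: copy the data first, then convert the names in place with convmv, dry-run before committing:

        # copy the tree as-is; filenames are still latin1 bytes at this point
        rsync -a /srv/samba/ newhost:/srv/samba/
        # on the new host: dry run first, then convert for real
        convmv -r -f ISO-8859-1 -t UTF-8 /srv/samba/
        convmv -r -f ISO-8859-1 -t UTF-8 --notest /srv/samba/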

    Read the article

  • Insufficient channel capacity of 1GBit

    - by Roman S
    There is a caching server (Varnish): it receives data from Amazon S3 on request, caches it for some time, and serves it to the client. We have run into the problem of insufficient channel capacity: peak load completely saturates the 1GBit channel for about 4 hours. Server performance is sufficient for now. Approximately 4.5TB of data is transmitted per day; more than 100TB accumulates per month. The first thought that comes to mind is simply to add one more 1GBit port and sleep peacefully until 2GBit is not enough (which may happen quite quickly) or until one server can no longer handle the load. Then we would just add new caching servers. But at that point we need a load balancer that always sends requests for one and the same URL to one and the same server (to avoid multiple copies of the same cached objects). Here are the questions: Does the balancer need bandwidth equal to the sum of the bandwidth of all the caching servers? What shall we do if the balancer runs out of ports: add more balancers, or solve the problem with round-robin DNS? What are the standard approaches to such problems? Can anyone recommend hosting companies that can solve this problem? We are interested in the American and European markets.
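
    On the URL-affinity point: one common approach is to hash the request URI in the balancer so that a given URL always lands on the same cache. A sketch with nginx (the consistent hash needs nginx 1.7.2+; host names hypothetical):

        upstream caches {
            hash $request_uri consistent;   # same URL -> same cache server
            server cache1.example.com;
            server cache2.example.com;
        }
        server {
            listen 80;
            location / {
                proxy_pass http://caches;
            }
        }

    Note that with a proxying balancer like this, all response traffic flows through it, so its bandwidth does need to match the sum of the caches; round-robin DNS avoids that, at the cost of duplicate cached objects.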

    Read the article

  • Application (was Firefox) crash on first load on Ubuntu Linux on older Dell Laptop

    - by Ira Baxter
    I've had a Dell Latitude laptop since about 2000 without managing to destroy it. A month ago the Windows 2000 system on it did something stupid to its file system and Windows was completely lost. There was no point in reinstalling Windows 2000, so I installed Ubuntu Linux on the laptop. Everything seems normal (installed, rebooted, I can log in, run GnuChess, poke about). ... but ... when I attempt to launch Firefox from the top bar menu icon, I get a bunch of disk activity, the whirling cursor icon goes round a bit, and then (WAS: everything stops: icon, mouse. Literally nothing happens for 5 minutes. Ubuntu is dead, as far as I can tell. EDIT: on further investigation, the spinning icon and the touchpad-operated mouse freeze. There's apparently a little disk activity occurring about every 5 seconds. I wait 5-10 minutes, and the behavior doesn't change.) A reboot, and I can repeat this reliably. So on the face of it, everything works but Firefox. That seems really strange. The only odd thing about this system when Firefox is starting is that while it has an Ethernet port (which worked fine under Windows), it isn't actually plugged into an Ethernet. As this is the first Firefox launch since the Ubuntu install, maybe Firefox mishandles Internet access? But why would that crash Ubuntu? (I need to go try the obvious experiment of plugging it in.) EDIT: I tried to run the Disk Manager tool; not that I cared what it was, just a menu-available application. It started up like Firefox: I get a little tag in the lower left saying Disk P*** something had started, and then the same behavior as Firefox. At this point, I don't think it's the Ethernet. Is it possible that the Ubuntu disk driver can't handle the disk controller in this older laptop? The install seemed to go fine.
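
    If the machine comes back after a forced reboot, the kernel and SMART logs are the quickest way to test the disk-controller theory; a short diagnostic sketch (device name assumed to be /dev/sda):

        dmesg | tail -50                            # look for ata/DMA errors logged around the freeze
        grep -i 'ata\|i/o error' /var/log/syslog | tail -20
        sudo smartctl -a /dev/sda                   # needs the smartmontools package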

    Read the article

  • New SSD, is the MBR broken? DISK BOOT FAILURE

    - by Shevek
    I've been running Windows 7 on a WD 500GB SATA single-drive, single-partition setup for some time with no issues. I've just installed a new Kingston V Series 64GB SSD and performed a clean install of Win7 to it, deleting the partitions on the 500GB and using that as a data drive. All was well for a few reboots, but then I started to get "DISK BOOT FAILURE, INSERT SYSTEM DISK AND PRESS ENTER" messages. If I put the Win7 install DVD back in the drive, it boots fine. I tried a clean install again, after replacing SATA cables and swapping SATA ports, with a complete partition wipe of both drives. Again, it rebooted fine a few times, then back came the "DISK BOOT FAILURE" error. I looked on the web and found some discussions about it, so I then started from scratch again. This time I wiped the MBR on both drives using MBRWork, disconnected the 500GB and reinstalled to the SSD. I removed the install DVD and installed all the drivers, which involved many reboots, all with no problem. To make sure, I also did a few cold boots as well. I reconnected the 500GB, then initialised, partitioned and formatted it. I copied data to it and did some more reboots and shutdowns. All was OK. Then out of the blue comes another "DISK BOOT FAILURE", and again, if the Win7 install DVD is in the drive it boots fine. So, is the SSD a bad'un? TIA UPDATE: It was a BIOS issue! I found a hidden-away option for HDD boot order, which was separate from the usual HDD/CDRom/FDD boot order option. The WD was set to boot before the SSD... I swapped them round and all is well. I still don't understand how it worked at first, though... Thanks, Solaris

    Read the article

  • Exchange 2010 CAS Removal == Broken???

    - by Doug
    Hi there, I recently upgraded to Exchange 2010 and have a setup with 2 of my servers running CAS roles: EXCH01 and EXCH02. EXCH02 also happens to have a mailbox role, where a lot of the users sit. EXCH01 is my front-facing CAS server: it faces the net with SSL, and incoming mail moves through it as a hub transport server as well. As I was trying to lean things out in my VM environment, I removed the CAS role from EXCH02, and all hell broke loose. All the mail users with a mailbox on EXCH02 had their homeMTA set to a deleted-items folder in AD, and so did their msExchHomeServer properties. After a complete battle I manually fixed these back to the old values, and in the meantime reinstalled CAS on EXCH02 (management was going nuts without Outlook working, so I just put things back the way they were in a hurry). As a strange aside, before I reset these to point at EXCH02, I tried EXCH01 and it failed. I still want to remove the CAS role from EXCH02, as it really should not have it (an error in install/planning on my part), and I would have thought that removing it would not cause the issues it did; I assumed that since there was another CAS server in the admin group, all would be good. Was I wrong in my assumption? What can I do to complete this successfully the second time round? Do I need to rehome all the mailboxes to the CAS server? Is this a bug in the role uninstall?
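
    Before the second attempt, it may be worth recording the current state so a repeat of the breakage is easy to spot and revert; a sketch in the Exchange Management Shell plus the AD module (the OU path and output file are hypothetical):

        # which mailboxes live on EXCH02, and in which databases
        Get-Mailbox -Server EXCH02 | Select-Object Name,Database

        # snapshot the attributes that were clobbered last time
        Import-Module ActiveDirectory
        Get-ADUser -Filter * -SearchBase "OU=Users,DC=example,DC=com" `
            -Properties homeMTA,msExchHomeServerName |
            Select-Object Name,homeMTA,msExchHomeServerName |
            Export-Csv C:\temp\mta-before.csv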

    Read the article

  • Small TCP Window on WAN between 2 Locations

    - by Brent
    Site A: Denver datacenter, 60MBPS. Site B: Chicago, 100MBPS. ICMP pings: Packets: Sent = 176, Received = 176, Lost = 0 (0% loss). Approximate round trip times in milliseconds: Minimum = 74ms, Maximum = 94ms, Average = 75ms. File transfers between the sites never go past ~7MBPS, while Windows Update downloads at 60MBPS+. Site to site: IPSec VPN using two Cisco 5520s, with CPU at 3-4% and lots of memory to spare. The latency between the two sites is very acceptable, so I can't see why it performs so slowly when transferring between the two sites. I have found that any type of transfer (FTP, HTTP, Windows file shares) will never go above ~7MBPS. When the WAN was first set up, I was able to get transfers at 50-60MBPS, which is what is expected given the 60MBPS WAN connection at Site A. Then a few days later, I was not able to get anything going faster than ~7MBPS. Is there an upstream router between Denver and Chicago causing this? I want to take the blame away from our setup, as downloads from Windows Update go blazing fast, and for the first few days after the site-to-site VPN came up I was transferring VM images at 50-60MBPS. Our stack: HP P2000 MSA - HP C7000 Chassis - HP Flex-10 - Cisco Gigabit switch - Cisco ASA - WAN
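
    One thing the numbers suggest (reading the ~7MBPS as megabits): a plain 64 KB TCP receive window over a 75 ms path tops out almost exactly at the observed speed, which is the classic signature of TCP window scaling being disabled or stripped somewhere on the path - firewall TCP normalization can remove the window-scale option. A quick sanity check, assuming Windows hosts on both ends:

        rem theoretical ceiling = window size / round-trip time
        rem 65535 bytes x 8 bits / 0.075 s = ~7.0 Mbit/s  -- matches the observed ~7MBPS
        netsh interface tcp show global
        rem look at "Receive Window Auto-Tuning Level"; "disabled" caps the window at 64 KB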

    Read the article

  • How do I upgrade Windows Server 2008 R2 Standard (OEM Key) to Enterprise (MSDN Key) using DISM?

    - by Tom Crane
    (Originally asked as "After upgrading to 2008 R2 Enterprise and installing more RAM, Windows can only see 4.00 GB", but now I know what the question really is...) My Dell server came preinstalled with 2008 R2 Standard. I upgraded to Enterprise to take advantage of more than 32GB RAM. This server is purely for dev and testing, so I want to use my MSDN product key for the upgrade. I originally tried to upgrade using the MSDN Enterprise key, but it wouldn't have it: dism /online /Set-Edition:ServerEnterprise /ProductKey:[MSDN key] => Error DISM DISM Transmog Provider: PID=5728 Product key is keyed to [], but user requested transmog to [ServerEnterprise] - CTransmogManager::ValidateTransmogrify I tried several things, including changing the current product key to the MSDN one. Eventually I used a generic KMS key, which can be found in several TechNet forum posts: dism /online /Set-Edition:ServerEnterprise /ProductKey:[KMS generic key] ...and this appeared to work. I then changed the product key again (using the Control Panel) to the MSDN key, thinking that was the end of the matter. Only later, when I tried to start up VMs, did I realise I only had 4GB of usable RAM. I didn't make the connection with the licensing changes at that point and went off on a wild goose chase of BIOS settings, memory configurations and the like. Only when I saw this... http://social.technet.microsoft.com/Forums/en/winserverTS/thread/6debc586-0977-4731-b418-ca1edb34fe8b ...did I make the connection and reapply the generic KMS key - which gave me all the RAM back. But now I have a system that isn't properly licensed; presumably I won't be able to activate it as it is, so I've got 2 days to enjoy it. With the MSDN key applied, only 4GB RAM is usable. Is there a way round this without a) rebuilding the server from scratch with the MSDN key from the start, or b) buying a retail Enterprise license?
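
    For reference, a sketch of the edition-change sequence as described above (keys elided); the open question is why the final re-keying step reintroduces the 4GB cap:

        dism /online /Get-CurrentEdition
        dism /online /Get-TargetEditions
        rem 1. transmog with the generic KMS client setup key
        dism /online /Set-Edition:ServerEnterprise /ProductKey:[KMS generic key]
        rem 2. reboot, then swap in the real key and activate
        slmgr.vbs /ipk [MSDN key]
        slmgr.vbs /ato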

    Read the article

  • General name for Macs' operating system

    - by andy124
    First of all, I hope that my question is suitable for this site. I have a website where I would like to write articles about operating systems, so I have created a main category called "Operating systems". Within a subcategory, I would like to write articles about Apple's operating system that runs on Macs. However, I do not know what to name this category. I had always thought the name was just OS X, but come to think of it, the "X" is actually part of the version (10). So I cannot exactly call my category OS X, because what about when OS 11 is released in a few years? And since Apple has gone from "Mac OS X" to just "OS X", I cannot use "Mac OS" either. If I remove the X from OS X, then I only have "OS" left, which does not seem proper. I am really looking for a meaningful, all-round name for the Macs' operating system that does not involve the versioning. I was thinking about just calling the category "Mac", but that is not precise either - though perhaps it is the closest I can get?

    Read the article

  • Apache2: Limit simultaneous requests & throttle bandwidth per IP/client?

    - by xentek
    I want to limit simultaneous requests and throttle bandwidth per IP/client on a single Apache vhost. In other words, I want to ensure that this site, which hosts large media files, doesn't get hammered by someone trying to download everything all at once (which just happened the other night). I'd like to limit the outgoing transfer speed overall for this site, as well as limit the number of connections a single IP can make to the server to a sane default (i.e. within normal browser limits for multiple requests, so page loads aren't affected too much). Bonus points if I can actually scope it to file types (i.e. leave web files alone, but apply these rules just to the media files). We're running Ubuntu 9.04 on all the servers, and have two Apache/PHP servers being load balanced via round robin by a Squid proxy server. MySQL is running on its own box as well. We've got plenty of bandwidth to give them, so I don't really want overall caps, but just want to throttle the amount of memory/CPU it takes to serve this site. There are other sites on these servers that we don't want to apply these rules to; we just want to keep this one from hogging all the resources. Let me know if you need more info! Thanks in advance for your suggestions!
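
    For the per-file-type throttle, a sketch using mod_ratelimit, which ships with Apache 2.4 - so it would need a newer Apache than stock Ubuntu 9.04; for Apache 2.2, third-party modules such as mod_bw or mod_limitipconn cover similar ground. Paths and extensions here are hypothetical; rate-limit is in KiB/s:

        <Directory "/var/www/site/media">
            <FilesMatch "\.(mp4|avi|mov|mp3|zip)$">
                SetOutputFilter RATE_LIMIT
                SetEnv rate-limit 400
            </FilesMatch>
        </Directory>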

    Read the article

  • How browsers handle multiple IPs

    - by Sandman4
    Can someone direct me to information on the exact behavior of browsers when they get multiple A records for a given hostname (say ip1 and ip2) and one of them is not accessible? I am interested in EXACT details, like (but not limited to):

    1. Will the browser get 2 IPs from the OS, or will it get only one?
    2. Which IP will the browser try first (random, or always the first one)?

    Now, let's say the browser started with the failed ip1:

    3. For how long will the browser try ip1?
    4. If the user hits "stop" while it waits for ip1, and then clicks refresh, which IP will the browser try?
    5. What will happen when it times out - will it start trying ip2, or give an error? (And if an error, which IP will the browser try when the user clicks refresh?)
    6. When the user clicks refresh, will any browser attempt a new DNS lookup?

    Now let's assume the browser tried the working ip2 first:

    7. For the next page request, will the browser still use ip2, or may it randomly switch IPs?
    8. For how long do browsers keep IPs in their cache?
    9. When the browser sends a new DNS request and gets the SAME IPs, will it CONTINUE to use the same known-to-be-working IP, or does the process start from scratch, where it may try either of the two?

    Of course it may all be browser dependent, and may also vary between versions and platforms; I'd be happy to have the maximum of details. The purpose of this: I'm trying to understand what exactly users will experience when round-robin DNS is used and one of the hosts fails. Please, I'm NOT asking about how bad DNS load balancing is, so please refrain from answering "don't do it", "it's a bad idea", "you need heartbeat/proxy/BGP/whatever" and so on.

    Read the article

  • Squid reverse proxy array - siblings not communicating with each other

    - by V. Romanov
    I want to set up 2 Squid servers to act as a reverse proxy and cache for a webserver on our intranet. The load balancing will be done with DNS round robin, or just different mappings for different clients. The thing is, I want both servers to try to contact each other, to see if they have the requested object in cache, before contacting the webserver for it (the network that serves the webserver is the bottleneck, and I'm trying to eliminate it). Both Squids are configured the same; here are the relevant config lines:

        acl dvr1_cache_it_best_tv_com dstdomain dvr1.cache.it.best-tv.com
        acl squid1_it_best_tv_com dstdomain squid1.it.best-tv.com
        acl squid2_it_best_tv_com dstdomain squid2.it.best-tv.com
        http_access allow dvr1_cache_it_best_tv_com
        http_access allow squid1_it_best_tv_com
        http_access allow squid2_it_best_tv_com
        http_access allow all
        http_port 8081 accel defaultsite=dvr1.cache.it.best-tv.com
        cache_peer dvr1.origin.it.best-tv.com parent 80 0 no-query originserver name=Proxy_dvr1_origin_it_best_tv_com
        cache_peer squid1.it.best-tv.com sibling 8081 3130 weight=10 name=Proxy_Squid1_it_best_tv_com
        cache_peer squid2.it.best-tv.com sibling 8081 3130 weight=10 name=Proxy_Squid2_it_best_tv_com
        cache_peer_access Proxy_dvr1_origin_it_best_tv_com allow dvr1_cache_it_best_tv_com
        cache_peer_access Proxy_squid1_it_best_tv_com allow squid1_it_best_tv_com
        cache_peer_access Proxy_squid1_it_best_tv_com allow squid2_it_best_tv_com
        cache_peer_access Proxy_squid1_it_best_tv_com allow dvr1_cache_it_best_tv_com
        cache_peer_access Proxy_squid2_it_best_tv_com allow squid1_it_best_tv_com
        cache_peer_access Proxy_squid2_it_best_tv_com allow squid2_it_best_tv_com
        cache_peer_access Proxy_squid2_it_best_tv_com allow dvr1_cache_it_best_tv_com

    Just to make it clear: dvr1.cache is the alias for the proxy servers, and dvr1.origin is the web server. Both servers work; both serve content, cache it, and work fine. However, when I clear the cache on one server and then access it, it gets the content from the parent (dvr1.origin) instead of going to the sibling Squid. What did I configure wrong? Or perhaps I don't understand the architecture correctly? I read the Squid manuals, and as far as I can see I did it all by the book, yet it doesn't work right. Any help will be appreciated!
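
    Two things worth double-checking, hedged as guesses from the config alone. First, the peers are declared with name=Proxy_Squid1_... / name=Proxy_Squid2_..., but the cache_peer_access lines reference Proxy_squid1_... / Proxy_squid2_... with a lowercase "s"; if those names don't match exactly, the access rules never attach to the sibling peers. Second, sibling cache lookups ride on ICP (the 3130 in the cache_peer lines), so each Squid must both listen for and permit ICP queries from the other, for example:

        icp_port 3130
        icp_access allow all    # tighten to the sibling's address in production

    If ICP queries are blocked or go unanswered, a miss goes straight to the parent, which matches the behavior described.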

    Read the article

  • Move every 3 rows into a column in Excel

    - by Eliane El Asmr
    Please, I need your help. I need to move every 3 rows into a new column. Let's suppose I have this:

        Ambassade de France
        S.E. M. Patrice PAOLI
        01-420000-420150
        Ambassade de France
        Mme. Jamilé Anan
        01-420000-420150
        Ambassade de France
        Mme. Marie Maamari
        01-420000-420150

    I need them to be like this:

        Ambassade de France | S.E. M. Patrice PAOLI | 01-420000-420150
        Ambassade de France | Mme. Jamilé Anan      | 01-420000-420150
        Ambassade de France | Mme. Marie Maamari    | 01-420000-420150

    I have this code (written for every 7 rows; I need every 3). It's giving me an "out of range" error. What should I change? Can you help me, please? It's urgent:

        Sub Every7()
            Dim i As Integer, j As Integer, cl As Range
            Dim myarray(100, 6) As Integer
            'I don't know what your data is. Mine is integer data
            'Change 100 to however many rows you have in your original data, divided by seven, round up
            'remember arrays start at zero, so 6 really is 7
            If MsgBox("Is your entire data selected?", vbYesNo, "Data selected?") <> vbYes Then
                MsgBox ("First select all your data")
            End If
            'Read data into array
            For Each cl In Selection.Cells
                Debug.Print cl.Value
                myarray(i, j) = cl.Value
                If j = 6 Then
                    i = i + 1
                    j = 0
                Else
                    j = j + 1
                End If
            Next
            'Now paste the array for your data into a new worksheet
            Worksheets.Add
            Range(Cells(1, 1), Cells(101, 7)) = myarray
        End Sub

    Thank you.
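
    A minimal corrected sketch, assuming the source data is a single selected column whose cells repeat in groups of three (organisation, person, number). It sizes the array dynamically, stores strings rather than integers, and wraps after every third cell instead of every seventh - the out-of-range error above comes from the fixed 100-row, 7-column Integer array not matching the actual selection:

        Sub Every3()
            Dim i As Long, j As Long, cl As Range
            Dim recs As Long
            Dim myarray() As String
            'One output row per three selected cells (rounded up for partial groups)
            recs = (Selection.Cells.Count + 2) \ 3
            ReDim myarray(1 To recs, 1 To 3)
            i = 1: j = 1
            For Each cl In Selection.Cells
                myarray(i, j) = CStr(cl.Value)
                If j = 3 Then   'third field done, start the next record
                    i = i + 1
                    j = 1
                Else
                    j = j + 1
                End If
            Next
            'Paste the rebuilt records into a new worksheet, 3 columns wide
            Worksheets.Add
            Range(Cells(1, 1), Cells(recs, 3)) = myarray
        End Sub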

    Read the article

  • Can't get DNS alias to work on Ubuntu 10.04 with Apache 2

    - by Johnny
    I want to use a DNS alias to point one of my domains at a specific directory on the server. Here is what I've done. First, I changed the IP address in the domain settings, and that works:

        $ ping www.example.com
        PING example.com (124.205.62.xxx): 56 data bytes
        64 bytes from 124.205.62.xxx: icmp_seq=0 ttl=48 time=53.088 ms
        64 bytes from 124.205.62.xxx: icmp_seq=1 ttl=48 time=52.125 ms
        ^C
        --- example.com ping statistics ---
        2 packets transmitted, 2 packets received, 0.0% packet loss
        round-trip min/avg/max/stddev = 52.125/52.606/53.088/0.482 ms

    Then I added the site to sites-available and sites-enabled:

        $ ls -l /etc/apache2/sites-available/
        total 16
        -rw-r--r-- 1 root root  948 2010-04-14 03:27 default
        -rw-r--r-- 1 root root 7467 2010-04-14 03:27 default-ssl
        -rw-r--r-- 1 root root  365 2010-06-09 18:27 example.com
        $ ls -l /etc/apache2/sites-enabled/
        total 0
        lrwxrwxrwx 1 root root 26 2010-06-09 15:46 000-default -> ../sites-available/default
        lrwxrwxrwx 1 root root 33 2010-06-09 18:17 001-example.com -> ../sites-available/example.com

    But it doesn't work, and when I open www.example.com in the browser it shows a 111 error: "The following error was encountered: Connection to 124.205.62.48 Failed. The system returned: (111) Connection refused". Here is example.com's config:

        $ cat /etc/apache2/sites-enabled/001-example.com
        <virtualhost *:80>
            DocumentRoot "/vhosts/example.com/htdocs/"
            ServerName www.example.com
            ServerAlias example.com
            <Location />
                Order Deny,Allow
                Deny from None
                Allow from all
            </Location>
            #Include /etc/phpmyadmin/apache.conf
            ErrorLog /vhosts/example.com/logs/error.log
            CustomLog /vhosts/example.com/logs/access.log combined

    Could you please tell me how to solve this?
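
    "Connection refused" usually means nothing is listening on port 80 at that address at all, rather than a vhost mismatch. A few hedged checks before touching the vhost file:

        apache2ctl configtest                 # syntax errors keep Apache from starting at all
        sudo netstat -tlnp | grep ':80'       # is Apache actually listening on port 80?
        curl -I -H 'Host: www.example.com' http://124.205.62.xxx/   # bypass DNS, test the vhost directly

    Also note the error page reports 124.205.62.48 - it is worth confirming that this is really the address the A record points at, and that the config file ends with a closing </VirtualHost> (the copy quoted above stops short of one).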

    Read the article

  • Good Enough Failover Strategy for DNS / MySQL / Email

    - by IMB
    I've asked and read a lot of questions regarding DNS failover, but the more I read, the more complicated it becomes; some people say it's good enough, some say it isn't, and there are no clear answers in what I've read. I was wondering if we can set it straight once and for all, at least for the requirements of most websites out there. Right now let's assume the following: we don't really need load balancing; what we need is a failover solution. We are running a website based on LAMP on a VPS, and we need to make sure that the web server, MySQL, and email are always accessible, if not 99% of the time. Basically here's my idea and my questions about it. Web server: we need at least one failover server (another VPS in a separate data center). Is DNS failover via round robin good enough, and if not, what's the best approach? How exactly do you implement it? And how do you make sure the files you upload to or delete from Server A are also on Server B? MySQL: I've only read a brief intro to MySQL replication, and I assume that I can replicate Server A to Server B and vice versa on the fly, right? So in case Server A fails and Server B takes over, it will continue to work and replicate back to Server A when it becomes available. In essence Server B is then the primary server, and will later fail back to Server A should a failure happen again. Email: if we are going to use DNS failover, relying on webmail or on emails stored on the server is probably not a good idea, right? Since some emails might be on Server A while some might be on Server B. I assume a basic email forwarder to a third party (like Gmail, for example) is good enough to ensure all emails are kept in one place. Here's a basic diagram for a better picture: http://i.stack.imgur.com/KWSIi.png
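
    On the file-sync and replication points, a hedged sketch of the usual building blocks (paths and server IDs hypothetical). Files can be pushed from the active node on a schedule, and MySQL master-master replication keeps either side writable:

        # cron job on the active node: one-way file sync to the standby
        rsync -az --delete /var/www/ serverB:/var/www/

        # my.cnf fragments for master-master replication
        # Server A:                       Server B:
        #   server-id = 1                   server-id = 2
        #   log-bin = mysql-bin             log-bin = mysql-bin
        #   auto_increment_increment = 2    auto_increment_increment = 2
        #   auto_increment_offset = 1       auto_increment_offset = 2

    The auto_increment settings keep the two masters from generating colliding primary keys while both accept writes.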

    Read the article

  • Unable to connect via NetBIOS Name

    - by grom
    I can't connect to machines/shares by NetBIOS names. Below is console output showing the problem:

        C:\>nbtstat -n

        Local Area Connection:
        Node IpAddress: [192.168.1.100] Scope Id: []

                    NetBIOS Local Name Table

            Name               Type         Status
        ---------------------------------------------
        BEAST          <00>  UNIQUE      Registered
        WORKGROUP      <00>  GROUP       Registered
        BEAST          <20>  UNIQUE      Registered
        WORKGROUP      <1E>  GROUP       Registered
        WORKGROUP      <1D>  UNIQUE      Registered
        ..__MSBROWSE__.<01>  GROUP       Registered

        C:\>nbtstat -A 192.168.1.3

        Local Area Connection:
        Node IpAddress: [192.168.1.100] Scope Id: []

                NetBIOS Remote Machine Name Table

            Name               Type         Status
        ---------------------------------------------
        BRCLAPTOP      <00>  UNIQUE      Registered
        WORKGROUP      <00>  GROUP       Registered
        BRCLAPTOP      <20>  UNIQUE      Registered
        WORKGROUP      <1E>  GROUP       Registered

        MAC Address = 00-1C-BF-14-B8-6E

        C:\>ping beast

        Pinging beast [fe80::59b8:179f:b90b:a63f%11] with 32 bytes of data:
        Reply from fe80::59b8:179f:b90b:a63f%11: time<1ms
        Reply from fe80::59b8:179f:b90b:a63f%11: time<1ms
        Reply from fe80::59b8:179f:b90b:a63f%11: time<1ms
        Reply from fe80::59b8:179f:b90b:a63f%11: time<1ms

        Ping statistics for fe80::59b8:179f:b90b:a63f%11:
            Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
        Approximate round trip times in milli-seconds:
            Minimum = 0ms, Maximum = 0ms, Average = 0ms

        C:\>ping brclaptop
        Ping request could not find host brclaptop. Please check the name and try again.

        C:\>nbtstat -a brclaptop

        Local Area Connection:
        Node IpAddress: [192.168.1.100] Scope Id: []

            Host not found.
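
    One detail in the output worth noting: "ping beast" resolved to an IPv6 link-local address, which means it went through LLMNR rather than NetBIOS, so NetBIOS name resolution may be broken in both directions. Some hedged first checks:

        rem purge and reload the remote NetBIOS name cache
        nbtstat -R
        rem release and re-register the local names
        nbtstat -RR
        rem check the Node Type and that "NetBIOS over Tcpip" is enabled on the adapter
        ipconfig /all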

    Read the article

  • Not able to access Silverlight.net and ONLY Silverlight.net - All other domains work!

    - by Sootah
    Alrighty folks, I have an extremely odd problem. I am able to surf the web fine, with one odd (and really annoying at the moment) exception: Microsoft's silverlight.net. Every other site that I go to works just fine. This is quite frustrating because I'm in the middle of programming a web app in Silverlight 4.0, and whenever I do a search for code examples, tutorials, or whatnot, at least 50% of the results are hosted in the silverlight.net forums. The error message that I get is: "Oops! Google Chrome could not find www.silverlight.net". It doesn't work in my other browsers either (both IE and Firefox). What's odd is that while the error message would lead me to assume it's a DNS error, I can ping the URL just fine:

        C:\Users\The Doot>ping silverlight.net

        Reply from 206.72.125.201: bytes=32 time=106ms TTL=106
        Reply from 206.72.125.201: bytes=32 time=106ms TTL=106
        Reply from 206.72.125.201: bytes=32 time=106ms TTL=106
        Reply from 206.72.125.201: bytes=32 time=106ms TTL=106

        Ping statistics for 206.72.125.201:
            Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
        Approximate round trip times in milli-seconds:
            Minimum = 103ms, Maximum = 110ms, Average = 106ms

    I've checked my HOSTS file, and there's nothing that refers to ANY Microsoft URL in there. What could be causing this!?? More importantly, how do I fix it? Just for kicks, I've even included the results of a traceroute here for your enjoyment. OS: Windows 7 Ultimate. Thanks in advance! -Sootah
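
    One hedged observation: the successful ping above is against silverlight.net, while the browser error names www.silverlight.net - a different record that may be failing to resolve on its own. A few quick checks:

        rem does the resolver answer for the www host specifically?
        nslookup www.silverlight.net
        rem clear any stale negative cache entry
        ipconfig /flushdns
        rem a stale system proxy setting can black-hole a single site
        netsh winhttp show proxy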

    Read the article

  • Webcam security camera software that runs as a service

    - by hurfdurf
    I've been looking for Windows webcam software that will run as a Windows service without any user login. The goal is to use the webcam as a cheap security camera and log the results to secure networked storage (a Windows share, not FTP). The requirements are: motion detection; video capture; runs as a service (should start recording immediately after reboot). Nice to have: round-robin storage, e.g. a 10GB limit, with the oldest files overwritten/deleted when space gets low. I've read the other webcam questions but still haven't stumbled across anything suitable. Evaluations thus far:

        Title                    MotionDetect  Service  Snapshots  Video  SpaceLimit   License
        Yawcam                   Yes           Yes      Yes        No     No           GPL
        WebCam ZoneTrigger       Yes           No       Yes        Yes    No           Commercial
        Dorgem                   Yes           No       Yes        Yes    No           GPL
        AbelCam                  Yes           No       Yes        Yes    No           Commercial
        Logitech                 Yes           No       Yes        Yes    No           Paired with camera
        IspyConnect              Yes           No       Yes        Yes    Yes          Free
        SecureCam (SourceForge)  Yes           No       Yes        Yes    No           GPL
        AbelCam                  Yes           No       Yes        Yes    No           Commercial
        Active WebCam            Yes           Yes(?)   Yes        Yes    Volume Free  Commercial
        WebCam Surveyor          Yes           No       Yes        Yes    No           Commercial
        WebCamsPy                NA            NA       NA         NA     NA           GPL

    Camera: Logitech Webcam Pro 9000. Windows 7 32-bit. WebCamsPy failed to initialize, so it couldn't be tested. So far, the contenders: Active Webcam comes the closest, and claims to run as a service, but I haven't been able to get it to record after a cold boot even though a service is running. Yawcam can be set up as a service but doesn't record video. IspyConnect has exactly the type of space limit I want and looks great, but doesn't run as a service (it also seems to be a bit of a CPU hog). Any other suggestions? I'm locked into Windows, so I can't use Linux Motion, which looks almost perfect. Any pointers to rich Windows webcam/motion-detection libraries out there that could easily be turned into a command-line program would also be appreciated.

    Read the article

  • CIFS Mounting Permissions

    - by malco
    I have an issue that I'm going round in circles with; I hope you can help. The setup: Server 1 (CIFS client) - CentOS 6.3, AD-integrated using Samba/Winbind and idmap_ad. Server 2 (CIFS server) - CentOS 6.3, AD-integrated using Samba/Winbind and idmap_ad. All users (apart from root) are AD-authenticated, and this, including groups etc., works happily. What's working: I have created a share on Server 2:

        [share2]
        path = /srv/samba/share2
        writeable = yes

    Permissions on the share:

        drwxrwx---. 2 root domain users 4096 Oct 12 09:21 share2

    I can log into a Windows machine as user5 (a member of domain users) and everything works as it should; for example, if I create a file, it shows the correct permissions and attributes on both the Windows and the Linux sides. Where I fall down: I mount the share on Server 1 using:

        # mount //server2/share2 /mnt/share2/ -o username=cifsmount,password=blah,domain=blah

    Or using fstab:

        //server2/share2 /mnt/share2 cifs credentials=/blah/.creds 0 0

    This mounts fine, but... if I su, or log onto Server 1 as a normal user (say user5), and try to create a file, I get:

        $ touch test
        touch: cannot touch `test': Permission denied

    Yet if I then check the folder, the file was created - but as the cifsmount user:

        -rw-r--r--. 1 cifsmount domain users 0 Oct 12 09:21 test

    I can rename, delete, move or copy stuff around as user5; I just can't create anything. What am I doing wrong? I'm guessing it's something to do with the mount action, as when I log onto Server 2 as user5 and access the folder locally, it all works as it should. Can anyone point me in the right direction?
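
    A hedged guess at the cause: with a credentials-based mount, every operation on the client goes over the wire as the cifsmount user, regardless of which local user makes the call. Two common ways round it, sketched below (the multiuser option needs a newer kernel than stock CentOS 6 shipped, plus per-user Kerberos tickets; modes are hypothetical):

        # let each accessing user authenticate to the server as themselves
        mount -t cifs //server2/share2 /mnt/share2 -o multiuser,sec=krb5,username=cifsmount

        # or, cruder: present all files locally as owned by one user, with permissive modes
        mount -t cifs //server2/share2 /mnt/share2 -o credentials=/blah/.creds,uid=user5,file_mode=0664,dir_mode=0775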

    Read the article

  • TEMP environment variable occasionally set incorrectly

    - by Roger Lipscombe
    Occasionally, I find my TEMP and TMP environment variables set to C:\Windows\TEMP. They should be set to %USERPROFILE%\AppData\Local\Temp, and are configured correctly in System Properties. This manifests itself as error messages like the following:

        ---> System.InvalidOperationException: Unable to generate a temporary class (result=1).
        error CS2001: Source file 'C:\Windows\TEMP\gb_pz65v.0.cs' could not be found
        error CS2008: No inputs specified

    ...which occur in various .NET applications (in particular Visual Studio 2010 or SQL Server Management Studio). Alternatively, SQL Server Management Studio will report:

        Value cannot be null. Parameter name: viewInfo (Microsoft.SqlServer.Management.SqlStudio.Explorer)

    If I run PowerShell elevated, then $env:TEMP is set correctly. If I run PowerShell non-elevated, then it's not. I believe that it should be set correctly in both cases; if not, it's the wrong way round. The same is true for CMD.EXE. Rebooting fixes it, temporarily, until something breaks it again. Presumably something loaded into Explorer.exe is messing with its environment variables, but what? The values in the registry are correct, even while this is happening: HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Environment has TEMP = %SYSTEMROOT%\Temp, and HKCU\Environment has TEMP = %USERPROFILE%\AppData\Local\Temp. By setting a breakpoint on shell32!RegenerateUserEnvironment, I'm able to trap it when it happens, but I still don't know why explorer.exe is reading the wrong environment variables. I can reproduce it consistently by broadcasting a WM_SETTINGCHANGE message (I wrote a one-line C++ program to do this). Watching the activity in Process Monitor shows that explorer.exe doesn't even look at HKCU\Environment. What is going on?
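
    For anyone wanting to reproduce the trigger, a sketch of the kind of one-liner described above - it broadcasts WM_SETTINGCHANGE for "Environment", prompting Explorer to re-read (and, per the report, mangle) its environment block:

        #include <windows.h>

        int main() {
            DWORD_PTR result;
            // Ask all top-level windows (including Explorer) to reload environment settings
            SendMessageTimeoutW(HWND_BROADCAST, WM_SETTINGCHANGE, 0,
                                (LPARAM)L"Environment", SMTO_ABORTIFHUNG, 5000, &result);
            return 0;
        }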

    Read the article

  • ftp.exe does not convert end of line characters while transferring to FreeBSD ftp server

    - by Jagger
    I am having problems transferring a text file from Windows 7 to a FreeBSD server using ftp.exe. After the file transfer, the end-of-line characters have not been changed from \r\n to \n; instead they keep the carriage return character, which can be seen in, for example, mcedit as ^M. The file is transferred in ASCII mode. Has anybody run into similar problems in the past? As far as I know, using ASCII mode during an FTP transfer should convert those characters automatically. Does it depend on the server configuration? EDIT: The file can be seen here. EDIT: I have also tried with ncftp.exe under Cygwin, but the result is the same: the carriage return characters are not removed even though the transfer type is ASCII. EDIT: It does not work the other way round either. I created a text file in FreeBSD and then downloaded it in ASCII mode to my Windows machine; the end-of-line characters remained LF, as they were in FreeBSD.
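
    Two hedged workarounds while the cause is unclear: force ASCII mode explicitly inside the client before transferring (some clients default to binary), or strip the carriage returns on the FreeBSD side after the fact:

        # inside ftp.exe, before the transfer:
        #   ftp> ascii
        #   ftp> put file.txt

        # or clean up on the FreeBSD end (filename hypothetical)
        tr -d '\r' < file.txt > file.unix && mv file.unix file.txt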

    Read the article

  • Nginx proxy upstream cached?

    - by Julian H. Lam
    Attempting to resolve an issue that's been annoying me for a bit; I've distilled the symptoms into a set of reproducible steps. I have two sites, siteA and siteB. They are both Node.js applications running on different ports (for the sake of example, 4567 and 4568). Both applications have their own file in sites-available (plus a symlink from sites-enabled), which contain the directives proxy_pass http://node_siteA/ and proxy_pass http://node_siteB/ respectively, inside of a location block. They also each have an upstream block (defined globally?):

        upstream node_siteA {
            server 127.0.0.1:4567;
        }
        upstream node_siteB {
            server 127.0.0.1:4568;
        }

    Site A and site B have nothing to do with each other. Yes, I am restarting (reloading, actually) nginx every time I make a change. If I take down site B and attempt to access it via the web, I am served site A. Why is this? Thoughts: other times, when I create a new site C, for example, nginx refuses to show me anything except "Welcome to nginx!" for ~5 minutes. This suggests a resolver timeout, perhaps? When I access site B after its config has been deleted, and it sends me to site A, it sounds like nginx is sending me to servers in a round-robin fashion...
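
    A hedged explanation that fits these symptoms: when no server block matches the Host header (say, site B's config is gone), nginx falls back to the default server for that listen port, which - unless one is marked explicitly - is simply the first server block it loaded, here site A. A catch-all block makes the failure visible instead, sketched below:

        server {
            listen 80 default_server;
            server_name _;
            return 444;   # nginx-specific: close the connection for unmatched hosts
        }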

    Read the article

  • Apache Server with memcache, varnish and php slow request times

    - by coolestdude1
    My issue is that these servers are taking rather long per request - about 2 seconds on average just to serve files. When we had just one server doing everything, it was noticeably faster, even with the same web app (Drupal 6 and Drupal 7). I want to get this number down to a reasonable level, and so I need some help getting to the bottom of why the request times are so slow. This can cause the web app to hang on POST or PUT, and generally leads to a bad user experience on my sites. PS: I am more of a server newbie, so this has confounded me for quite some time. The domains: collabornation.net and nptrainingworks.com (they run off the same two webservers using vhost configs). The gear: two Rackspace 4GB servers running CentOS 6.2 (Final). They have a mounted file system (Gluster) that is used to keep files the same on both machines. They are behind a Rackspace load balancer running round robin. MySQL is accessed using php-pdo and php-mysql; it runs on another instance, which also runs memcache and phpMyAdmin. Apache version 2.2.15-15.el6.centos.1 (httpd.x86_64), Varnish version 3.0.2-1.el5 (varnish.x86_64), PHP version 5.3.14-1.el6.remi (php.x86_64). Configs linked below: Apache conf, vhost conf, Varnish backends, Varnish defaults, Varnish ACL, PHP INI. Again, I need some help - much appreciated!
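
    A hedged first step is to find out where the 2 seconds actually go - DNS, connect, or time to first byte - and whether Varnish is hitting its cache at all (using one of the domains above):

        curl -so /dev/null -w 'dns=%{time_namelookup} connect=%{time_connect} ttfb=%{time_starttransfer} total=%{time_total}\n' \
            http://collabornation.net/
        varnishstat -1 | grep -E 'cache_hit|cache_miss'   # a low hit rate sends everything to Apache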

    Read the article

  • Are ZFS snapshots + S3 a viable backup system for several VMs and general fileserver storage?

    - by AllanA
    I've been tasked with setting up a backup system for my small office (around 12 people). Most of our production stuff is on the AWS cloud, so what I need to back up are some small office/development files (under 100G right now), plus our operational VMs and development, which round out to a bit under 1T. I just need something reliable, convenient, and straightforward. I'm comfortable with Linux, FreeBSD, and to some extent Solaris 10, so I'm leaning toward a full server rather than an appliance system ala Openfiler or FreeNAS. What I'm contemplating is a small fileserver for general storage and nightly backups of the virtual machines, followed up by an offsite backup to Amazon's S3 storage service. It'd be the usual incremental backups nightly and full backup weekly. My question is if using ZFS snapshots, both locally and dumped to S3 via 'zfs send [-i]', is a viable backup tool? Or should I stick to using Duplicity, or some other method entirely? ZFS snapshots on the internal fileserver/backup machine sound like a perfect way to provide quick and convenient data recovery, so I'm likely to go with that for local redundancy. (If you folks see scenarios where relying on ZFS snapshots would be worse than a more traditional archiving backup, feel free to convince me.) But are snapshots flexible enough to lean on for recovery from the loss of my backup server? Or am I better off with something more traditional? (feel free to recommend free or commercial backup solutions you favor.)
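
    On the mechanics: a hedged sketch of the nightly incremental, assuming a hypothetical pool/bucket layout and an uploader that can stream from stdin (the aws CLI can):

        # take tonight's snapshot, then send only the delta since last night
        zfs snapshot tank/vms@2012-10-13
        zfs send -i tank/vms@2012-10-12 tank/vms@2012-10-13 | gzip | \
            aws s3 cp - s3://example-backups/vms-2012-10-13.zfs.gz

    One caveat worth weighing against Duplicity: a zfs send stream is restored all-or-nothing with zfs receive, and a corrupted incremental breaks the whole chain that follows it, so periodic full streams (and test restores) matter more than with file-based archivers.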

    Read the article

  • Windows Server doesn't connect to a network share

    - by Dmitriy N. Laykom
    Windows Server doesn't connect to a network share, although the network share is up:

        Pinging 109.123.146.223 with 32 bytes of data:
        Reply from 109.123.146.223: bytes=32 time<1ms TTL=63
        Reply from 109.123.146.223: bytes=32 time<1ms TTL=63
        Reply from 109.123.146.223: bytes=32 time<1ms TTL=63

        Ping statistics for 109.123.146.223:
            Packets: Sent = 3, Received = 3, Lost = 0 (0% loss),
        Approximate round trip times in milli-seconds:
            Minimum = 0ms, Maximum = 0ms, Average = 0ms

        net view \\shareaddress
        System error 53 has occurred.
        The network path was not found.

    When I connected the network share, I observed this error message: "\\<mapped disk letter> refers to a location that is unavailable. It could be on a hard drive on this computer, or on a network. Check to make sure that the disk is properly inserted, or that you are connected to the Internet or your network, and then try again. If it still cannot be located, the information might have been moved to a different location." The network share was mounted via Group Policy. Does anyone know how I can avoid this error? When the OS was restored from disk, the problem went away.
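
    Since the host answers by IP but not by name, a hedged way to narrow this down is to separate name resolution from SMB itself (share name hypothetical):

        rem does DNS resolve the share's hostname at all?
        nslookup shareaddress
        rem connect by IP instead, bypassing name resolution
        net use Z: \\109.123.146.223\sharename
        rem if the IP mapping works, the problem is name resolution, not SMB or the share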

    Read the article

  • Apache load balancer with https real servers and client certificates

    - by Jack Scheible
    Our network requirements state that ALL network traffic must be encrypted. The network configuration looks like this: ------------ /-- https --> | server 1 | / ------------ |------------| |---------------|/ ------------ | Client | --- https --> | Load Balancer | ---- https --> | server 2 | |------------| |---------------|\ ------------ \ ------------ \-- https --> | server 3 | ------------ And it has to pass client certificates. I've got a config that can do load balancing with in-the-clear real servers: <VirtualHost *:8666> DocumentRoot "/usr/local/apache/ssl_html" ServerName vmbigip1 ServerAdmin [email protected] DirectoryIndex index.html <Proxy *> Order deny,allow Allow from all </Proxy> SSLEngine on SSLProxyEngine On SSLCertificateFile /usr/local/apache/conf/server.crt SSLCertificateKeyFile /usr/local/apache/conf/server.key <Proxy balancer://mycluster> BalancerMember http://1.2.3.1:80 BalancerMember http://1.2.3.2:80 # technically we aren't blocking anyone, but could here Order Deny,Allow Deny from none Allow from all # Load Balancer Settings # A simple Round Robin load balancer. ProxySet lbmethod=byrequests </Proxy> # balancer-manager # This tool is built into the mod_proxy_balancer module allows you # to do simple mods to the balanced group via a gui web interface. <Location /balancer-manager> SetHandler balancer-manager Order deny,allow Allow from all </Location> ProxyRequests Off ProxyPreserveHost On # Point of Balance # Allows you to explicitly name the location in the site to be # balanced, here we will balance "/" or everything in the site. ProxyPass /balancer-manager ! ProxyPass / balancer://mycluster/ stickysession=JSESSIONID </VirtualHost> What I need is for the servers in my load balancer to be BalancerMember https://1.2.3.1:443 BalancerMember https://1.2.3.2:443 But that does not work. I get SSL negotiation errors. Even when I do get that to work, I will need to pass client certificates. Any help would be appreciated.

    Read the article
