Search Results

Search found 16530 results on 662 pages for 'off shore'.


  • Is there a way in Windows 7 to disable "journaling"?

    - by Psycogeek
    C:\$extend\$Usn.Jrnl:$J:$data Here is a picture, finally. The large strip in the center of the top band is the largest chunk; the grey areas in the other band are the various clusters belonging to it. On the right, the big long grey line is $logfile (not paging), and it is 63 MB. Paging, 500 MB, is the dark cyan chunk, next to the yellow MFT reserve in the inner rings. The disk was defragged so these could be seen more easily. Not all clusters of this type of file are tagged, but the idea is there. The disk uses 4 KB clusters and is now about 12 GB in size. Each little block in the picture is 0.81 MB and represents 207 clusters. The dark green section is mostly the whole WinSxS pile, which is also interesting when they keep telling us it doesn't take much disk space. Wikipedia suggests that on previous NT systems "USN journaling" would be turned on when enabled (which implies it could also be turned off?). What aspect, service, or program is putting that stuff all over the disk in clusters tagged $jrnl$, even if it is not actual USN journaling? Is it possible on a Windows 7 system to completely disable the journaling, and what would be the ramifications of that? On a Windows XP NTFS system I do not recall seeing this quantity of disk clusters with $jrnl$ names, so I do not recall this being necessary in this quantity for an NTFS file system itself. I understand that it would not be there if it did not have a useful function :-) Explanations of how wonderful it is are fine, if that information will help track down what parts of the system create and use it. Change Journals states: "Change journals are also needed to recover file system indexing." Hmm, that might explain some of them, or why it was left on the disk. A crash while background indexing?
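    For anyone poking at the same thing: the NTFS change journal can be inspected and, if desired, deleted with the built-in fsutil tool. A minimal sketch, assuming an elevated command prompt and that C: is the volume in question; deleting the journal is reversible in the sense that services which need it will simply re-create it:

        rem Show whether a USN change journal exists on C: and how large it is
        fsutil usn queryjournal C:

        rem Delete the active change journal on C:
        fsutil usn deletejournal /d C:

    Windows Search (background indexing) and backup tools are the usual services that turn the journal back on, which would also explain the clusters reappearing after a delete.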

    Read the article

  • Homebrew large data cluster access for 2 user levels?

    - by Yegor
    The title probably makes little sense, so here is an example. I have a file hosting site that serves a large amount of semi-randomly accessed files. The setup is as follows:
      - A high-horsepower front-end + DB server that also does encoding for files that need encoding.
      - A fresh file server, which stores newly uploaded content that is probably (and usually) rapidly accessible; it has 500 GB of RAIDed SSD storage and can push over 3 Gbit of traffic.
      - 3 cheap node servers, each containing 2 x 750 GB SATA drives in RAID 1, where files older than 2 weeks are archived from the SSD server mentioned above.
    Files on each server are accessed via subdomains (via modsec) in a straightforward fashion (server1.domain.com, server2.domain.com, etc). Where I have the problem is this: I introduced a "premium" service where people pay a small fee every month and get ad-free, quick access to stuff on the site. Once they are logged in, they access the same files via premium.server1.domain.com through a different modsec script with a different pass phrase. That all works fine and dandy... except the cheap node servers are all IO bound, so accessing the files on them via a different, unsaturated network makes no difference, since they cannot read off the drive fast enough. What would be a good way to make files on the site accessible via 2 different network routes, one of which will be saturated (the "free" network) while all other files are on an unsaturated "premium" network?
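    Since the bottleneck is disk IO rather than the network, one option (a sketch only, not a tested recipe for this setup) is to keep a single route to the archive nodes and instead cap the free tier's per-connection bandwidth at the web server, so premium readers compete for fewer concurrent reads. In nginx that could look roughly like this; the server names, paths and rates are placeholders:

        # Free tier: throttle each download after the first 1 MB
        server {
            server_name server1.domain.com;
            location /files/ {
                root /srv/storage;
                limit_rate_after 1m;   # small files still go through at full speed
                limit_rate       300k; # then cap each free connection
            }
        }

        # Premium tier: same files, no throttle
        server {
            server_name premium.server1.domain.com;
            location /files/ {
                root /srv/storage;
            }
        }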

    Read the article

  • Can't write to samba share

    - by Tiddo
    I'm trying to set up a Samba file server, but whatever I do I can't get write access to work (reading works fine). This is my current situation: I have a local file server with 3 hard disks mounted at /mnt/share/disk<nr>. Two of these use the ext4 filesystem, the third one is NTFS. This file server runs Fedora 18 32-bit. The root folders of these hard disks are owned by superman:superman, and testparm outputs the following:

        [global]
            workgroup = WORKGROUP
            netbios name = FILE_SERVER
            server string = Samba Server Version %v
            interfaces = lo, eth0, 192.168.123.191/8
            log file = /var/log/samba/log.%m
            max log size = 50
            unix extensions = No
            load printers = No
            idmap config * : backend = tdb
            hosts allow = 192.168.123.
            cups options = raw
            wide links = Yes

        [share]
            comment = Home Directories
            path = /home/share/
            write list = superman, @users
            force user = superman
            read only = No
            create mask = 0777
            directory mask = 0777
            inherit permissions = Yes
            guest ok = Yes

    I've tried a lot to get this to work: the disks are chmodded to 777, I've tried turning off SELinux, I've added the samba_share_t label to the disks, and as can be seen in the above output I tried to make the smb config as permissive as I could, but still I cannot write to the share (tried from Windows 7 and another Fedora installation). What can I try to be able to write to the shares? EDIT: The replies I got so far are mostly concerned with the smb.conf. I have however tried a lot of different setups, ready-made configs, and solutions to similar problems for the smb.conf file, so I suspect that the real problem is somewhere else.
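    One thing worth double-checking on Fedora is that the SELinux labels actually persisted and that the Samba write booleans are set. A sketch of the usual commands, assuming the shares live under /mnt/share and /home/share (the paths come from the question; the boolean and type names are the standard targeted-policy ones):

        # Label the share trees persistently and apply the label
        semanage fcontext -a -t samba_share_t "/mnt/share(/.*)?"
        semanage fcontext -a -t samba_share_t "/home/share(/.*)?"
        restorecon -Rv /mnt/share /home/share

        # Allow Samba to write to the files it exports, then verify
        setsebool -P samba_export_all_rw on
        getsebool -a | grep samba

        # Watch for AVC denials while attempting a write from the client
        ausearch -m avc -ts recent

    For the NTFS disk, write access also depends on the ntfs-3g mount options (uid/gid/umask), since chmod has no effect on that filesystem.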

    Read the article

  • Ubuntu 12.04 suddenly cannot connect to WPA2/WPA Personal protected connection. Windows 7 can

    - by d4ryl3
    I have a laptop with Windows 7 and Ubuntu 12.04. I have a Cisco E1200, and when I set it up it created 2 SSIDs. Let's name them MyConnection (WPA/WPA2 Personal) and MyConnection-Guest (no authentication; the guest password is entered via a web browser). I had no problem connecting to MyConnection before, either in Windows 7 or in Ubuntu. But now I can't access MyConnection on Ubuntu. It just says "connecting..." then disconnects after a while. But I am able to access the internet (on Ubuntu) when I connect to MyConnection-Guest. MAC filtering is off (and even when it is on, its MAC address is in the white list). Any idea why I'm unable to connect to MyConnection in Ubuntu? Thanks.

    Update: My Ubuntu installation can connect to ANY Wi-Fi connection (WPA/WEP/no auth), except for MyConnection.

    Update 2: This is what "the not so easy way" returned:

        Initializing interface 'eth1' conf '/etc/wpa_supplicant.conf' driver 'default' ctrl_interface 'N/A' bridge 'N/A'
        Configuration file '/etc/wpa_supplicant.conf' -> '/etc/wpa_supplicant.conf'
        Reading configuration file '/etc/wpa_supplicant.conf'
        Priority group 0
           id=0 ssid='MyConnection'
           id=1 ssid='MyConnection'
           id=2 ssid='MyConnection'
           id=3 ssid='MyConnection'
        WEXT: cfg80211-based driver detected
        SIOCGIWRANGE: WE(compiled)=22 WE(source)=21 enc_capa=0xf
          capabilities: key_mgmt 0xf enc 0xf flags 0x0
        netlink: Operstate: linkmode=1, operstate=5
        Own MAC address: xx:xx:xx:xx:xx:xx
        wpa_driver_wext_set_key: alg=0 key_idx=0 set_tx=0 seq_len=0 key_len=0
        wpa_driver_wext_set_key: alg=0 key_idx=1 set_tx=0 seq_len=0 key_len=0
        wpa_driver_wext_set_key: alg=0 key_idx=2 set_tx=0 seq_len=0 key_len=0
        wpa_driver_wext_set_key: alg=0 key_idx=3 set_tx=0 seq_len=0 key_len=0
        wpa_driver_wext_set_key: alg=0 key_idx=4 set_tx=0 seq_len=0 key_len=0
        ioctl[SIOCSIWENCODEEXT]: Invalid argument
        Driver did not support SIOCSIWENCODEEXT
        wpa_driver_wext_set_key: alg=0 key_idx=5 set_tx=0 seq_len=0 key_len=0
        ioctl[SIOCSIWENCODEEXT]: Invalid argument
        Driver did not support SIOCSIWENCODEEXT
        wpa_driver_wext_set_countermeasures
        RSN: flushing PMKID list in the driver
        Setting scan request: 0 sec 100000 usec
        WPS: UUID based on MAC address - hexdump(len=16): 16 3b d8 47 9e 24 50 89 96 16 6d 66 35 f3 58 37
        EAPOL: SUPP_PAE entering state DISCONNECTED
        EAPOL: Supplicant port status: Unauthorized
        EAPOL: KEY_RX entering state NO_KEY_RECEIVE
        EAPOL: SUPP_BE entering state INITIALIZE
        EAP: EAP entering state DISABLED
        EAPOL: Supplicant port status: Unauthorized
        EAPOL: Supplicant port status: Unauthorized
        Added interface eth1
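    For what it's worth, when testing wpa_supplicant by hand a WPA2-PSK network normally only needs an ssid and a passphrase-derived PSK in /etc/wpa_supplicant.conf. A minimal sketch (SSID and passphrase are placeholders), which wpa_passphrase can also generate for you:

        # Generate a block with the hashed PSK:
        #   wpa_passphrase "MyConnection" "your-passphrase" >> /etc/wpa_supplicant.conf

        network={
            ssid="MyConnection"
            key_mgmt=WPA-PSK
            psk="your-passphrase"
        }

    The four identical id=0..3 entries in the debug output suggest the same network block is pasted into the config several times, which is worth cleaning up, and given the SIOCSIWENCODEEXT errors it may also be worth trying wpa_supplicant with -Dnl80211 instead of the default WEXT driver.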

    Read the article

  • SSL port didn't work on nginx

    - by Jin Lin
    I set up Unicorn and nginx on one of my EC2 machines, and my requests load fine with nginx listening on port 80. But when I enable SSL, listening on port 443, it doesn't work; port 80 still works. Here is the server block:

        server {
            listen 443 ssl;

            # replace with your domain name
            server_name domain.com;

            # replace this with your static Sinatra app files, root + public
            root /home/ubuntu/domain/public;

            ssl on;
            ssl_certificate /etc/ssl/domain.crt;
            ssl_certificate_key /etc/ssl/domain.key;

            # maximum accepted body size of client request
            client_max_body_size 4G;

            # the server will close connections after this time
            keepalive_timeout 5;

            location ~ ^/assets/ {
                add_header ETag "";
                gzip_static on;
                expires max;
                add_header Cache-Control public;
            }

            location / {
                proxy_set_header X-Forwarded-Proto https;
                try_files $uri @app;
            }

            location @app {
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                # pass to the upstream unicorn server mentioned above
                proxy_pass http://unicorn_server;
            }
        }
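    On EC2 the usual culprit is the instance's security group rather than nginx itself. A quick hedged checklist with standard tools (nothing here is specific to this config; the domain is the one from the question):

        # Is nginx actually listening on 443 on the instance?
        sudo netstat -tlnp | grep ':443'

        # Can a TLS handshake complete from outside? Run this from your own machine;
        # a timeout here usually means the security group has no inbound rule for TCP 443.
        openssl s_client -connect domain.com:443 -servername domain.com

        # Re-check the config and reload if anything changed
        sudo nginx -t && sudo service nginx reload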

    Read the article

  • Can enabling a RAID controller's writeback cache harm overall performance?

    - by Nathan O'Sullivan
    I have an 8-drive RAID 10 setup connected to an Adaptec 5805Z, running CentOS 5.5 and the deadline scheduler. A basic dd read test shows 400 MB/s, and a basic dd write test shows about the same. When I run the two simultaneously, I see the read speed drop to ~5 MB/s while the write speed stays at more or less the same 400 MB/s. The output of iostat -x, as you would expect, shows that very few read transactions are being executed while the disk is bombarded with writes. If I turn the controller's writeback cache off, I don't see a 50:50 split, but I do see a marked improvement, somewhere around 100 MB/s reads and 300 MB/s writes. I've also found that if I lower the nr_requests setting on the drive's queue (somewhere around 8 seems optimal) I can end up with 150 MB/s reads and 150 MB/s writes; i.e. a reduction in total throughput, but certainly more suitable for my workload. Is this a real phenomenon? Or is my synthetic test too simplistic? The reason this could happen seems clear enough: when the scheduler switches from reads to writes, it can run heaps of write requests because they all just land in the controller's cache, but they must be carried out at some point. I would guess the actual disk writes are occurring when the scheduler starts trying to perform reads again, resulting in very few read requests being executed. This seems a reasonable explanation, but it also seems like a massive drawback to using writeback cache on a system with non-trivial write loads. I've been searching for discussions around this all afternoon and found nothing. What am I missing?
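    For anyone wanting to reproduce the tuning described above, the scheduler and queue depth are exposed under sysfs. A sketch assuming the array appears as /dev/sda (the device name and values are illustrative):

        # Check and set the IO scheduler on the array
        cat /sys/block/sda/queue/scheduler
        echo deadline > /sys/block/sda/queue/scheduler

        # Shrink the request queue so a flood of cached writes cannot starve reads as easily
        cat /sys/block/sda/queue/nr_requests
        echo 8 > /sys/block/sda/queue/nr_requests

    These settings do not persist across reboots unless they are re-applied from rc.local or a udev rule.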

    Read the article

  • There are currently no logon servers available

    - by Ian Robinson
    I am running a Windows 7 laptop that is joined to my company's domain. When I installed Windows 7 I created an account for myself, joined it to the domain, and it had been working quite well even though I'm physically remote most of the time and not actually on the network. However, today I created a new local user account (non-admin) for my little brother. While he was using it, he decided he wanted to install a program; because his account is not an admin, he was prompted to enter administrator credentials to allow the program to make changes to the computer. I entered my credentials, and this is the first time I ran into the error message: "There are currently no logon servers available to service the logon request." I tried logging off and logging back in, rebooting, etc., and no matter what, every time I try to authenticate as my "normal" domain account I get that message. I can no longer access my computer as an administrator. I no longer know how to log in to my machine using any account aside from my little brother's non-admin account. I don't have any other local accounts created, and the default local admin account was never enabled. I'd appreciate any ideas on how I can recover access to my account. Let me know if I can provide any more information. FYI - this is a similar question, but I'm not sure any of the answers help in my case: http://serverfault.com/questions/71632/there-are-currently-no-logon-servers-available-to-service-the-logon-request

    Read the article

  • IIS and ASP.NET

    - by sam
    I'm trying to add the ASP.NET feature on Windows 7. I tried to turn it on using "Turn Windows features on or off", but it fails every time, so I downloaded the Web Platform Installer and tried it that way, and it fails as well. Next I uninstalled .NET Framework 4, restarted (again!), reinstalled it and tried the previous steps again, but it fails the same way. I need this installed so I can view my site in IIS 7. Does anyone know what I can do to get this working? I've searched and searched and everything fails. I get this error from the Web Platform Installer: "Failed with 0x80070643 - Fatal Error during installation". Please help, I can't do my work without it working :( OK, I did a few things and now I get this error:

        Server Error in '/pulse' Application.
        Parser Error
        Description: An error occurred during the parsing of a resource required to service this request.
        Please review the following specific parse error details and modify your source file appropriately.
        Parser Error Message: Could not load type 'pulsesite.MvcApplication'.
        Source Error:
        Line 1: <%@ Application Codebehind="Global.asax.vb" Inherits="pulsesite.MvcApplication" Language="VB" %>
        Source File: /pulse/global.asax    Line: 1
        Version Information: Microsoft .NET Framework Version:4.0.30319; ASP.NET Version:4.0.30319.1

    I know it's just about changing the code, but I'm not good with C#. Does anyone know how?
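    On Windows 7 / IIS 7.x, when both the features dialog and Web Platform Installer fail, re-registering ASP.NET 4.0 with IIS from an elevated command prompt is a common next step. This is a sketch, not a guaranteed fix for error 0x80070643; which Framework folder applies depends on whether the OS is 32- or 64-bit:

        rem 64-bit Windows
        %windir%\Microsoft.NET\Framework64\v4.0.30319\aspnet_regiis.exe -i

        rem 32-bit Windows
        %windir%\Microsoft.NET\Framework\v4.0.30319\aspnet_regiis.exe -i

        rem Make sure the application pool serving /pulse runs .NET 4.0
        %windir%\system32\inetsrv\appcmd.exe set apppool "DefaultAppPool" /managedRuntimeVersion:v4.0

    The "Could not load type 'pulsesite.MvcApplication'" parser error that follows usually just means the compiled assembly is missing from the site's bin folder, i.e. the MVC project needs to be built/published, not the .asax edited.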

    Read the article

  • Apache Reverse proxy Http to https

    - by Coppes
    I have a website which runs fully on HTTPS. For some reason I was given the task of finding a way to convert a URL, for example http://www.domain.com/a/e-nc/youless, to its HTTPS version without losing the HTTP POST data, i.e. the POST values in the request. So I thought (not even sure about this) let's try to make a reverse proxy in Apache and see how that works. Anyway, after a lot of struggling I came to the point of asking it here. So to be specific, my goal is: convert http://www.domain.com/a/e-nc/youless to https://www.domain.com/a/e-nc/youless without losing the POST conditions. What I have tried so far: I created a file called proxiedhosts in my apache2/sites-enabled folder with the following contents:

        SSLProxyEngine On
        SSLProxyCACertificateFile /etc/apache2/ssl/certificate****.pem
        ProxyRequests Off
        ProxyPreserveHost On

        <Proxy *>
            Order allow,deny
            Allow from all
        </Proxy>

        ProxyPass        /a/e-nc/youless/ https://www.domain.com/a/e-nc/youless/
        ProxyPassReverse /a/e-nc/youless/ https://www.domain.com/a/e-nc/youless/

    Thanks in advance!
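    A different hedged approach, in case the proxy hop turns out to be unnecessary: a 307 redirect tells the client to repeat the same method and body against the HTTPS URL, so a POST is not downgraded to GET the way it is with a plain 301/302. A minimal mod_rewrite sketch for the port-80 vhost (path and host taken from the question):

        RewriteEngine On
        # Only rewrite plain-HTTP requests for this path
        RewriteCond %{HTTPS} off
        # 307 = temporary redirect that preserves the request method and body
        RewriteRule ^/a/e-nc/youless(.*)$ https://www.domain.com/a/e-nc/youless$1 [R=307,L]

    Whether the caller actually re-sends the POST depends on it honouring 307 semantics, so for non-browser clients the ProxyPass approach above is still the safer bet.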

    Read the article

  • How to go about scaling a web application?

    - by phoenix24
    This is for someone who has been primarily a web-application developer and doesn't know much about scaling/scalability techniques. I'll start by stating that my application is written in Python, using Django; a fairly standard setup. I currently use Apache 2.2 as my web server and MySQL as my database server, both running on the same VPS. Up until now it was basically a prototype with merely 15-30 concurrent users at any given time, so I had no issues, but now that we'll be adding more users we'll have severe performance issues. So my question is: how do I go about scaling my web application? My plan is as follows:
      1. Right now I have just one VPS running Apache + MySQL.
      2. Next, I plan to add another VPS to run only MySQL, so I'll have one web server and one DB server.
      3. Next, I'll add memcached to the web server for caching data and taking some load off MySQL.
      4. Next, another web server for serving all the static content.
      5. Next, a VPS for load balancing (nginx/varnish), behind which would be my two web servers and then the DB server.
    Does that sound like a workable strategy? Please guide me here.
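    For step 5, the load-balancing tier is mostly configuration. A minimal nginx sketch of an upstream pool in front of two application servers, with static files served directly; the host names, ports and paths here are placeholders rather than the poster's actual layout:

        upstream django_app {
            server 10.0.0.11:8000;   # web server 1 (Apache/mod_wsgi or similar)
            server 10.0.0.12:8000;   # web server 2
        }

        server {
            listen 80;
            server_name example.com;

            # static files bypass the app servers entirely
            location /static/ {
                root /srv/www;
                expires 30d;
            }

            location / {
                proxy_pass http://django_app;
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }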

    Read the article

  • Is this processor burned?

    - by Jhonnytunes
    I've recently exchanged the processors in two PCChips boards. Both boards are LGA775. Board A is a P17G (Pentium 4 HyperThreading, 3 GHz) and board B is a P49G (Pentium Dual-Core, 3 GHz). I use board A to watch videos, some of which are 3 GB in size, and this is why I exchanged the CPUs. I installed the Dual-Core in board A and it worked out of the box; 3 GB videos now use 5% of the CPU instead of 50%. When I installed the Pentium in board B, I forgot to connect the 4-pin power, and when I powered on the PC the CPU fan stayed off. Then I connected everything correctly, and now the board doesn't show video. I think the CPU is not working, but I'm not sure about that. The PC turns on and the HD spins, the CPU fan spins, the network socket blinks, but there is no video and the case power LED isn't blinking either. I tried with another PSU and everything was the same. I noticed the CPU has thermal paste on it. I don't really know what's happening; I hope I don't have to buy another CPU. Is it burned?

    Read the article

  • Translating debian network configuration to gentoo

    - by thpetrus
    I just got rid of Debian on my VPS (OpenVZ) and installed Gentoo on it; however, it is a plain Gentoo image without further configuration, i.e. no working network. I'm not familiar with Debian and couldn't figure out how to translate its network setup. These are the Debian network files. /etc/network/interfaces:

        auto venet0
        iface venet0 inet manual
            up ifconfig venet0 up
            up ifconfig venet0 127.0.0.2
            up route add default dev venet0
            down route del default dev venet0
            down ifconfig venet0 down

        iface venet0 inet6 manual
            up ifconfig venet0 add ipv6addr/128
            down ifconfig venet0 del ipv6addr/128
            up route -A inet6 add default dev venet0
            down route -A inet6 del default dev venet0

        auto venet0:0
        iface venet0:0 inet static
            address external_ip
            netmask 255.255.255.255

        auto venet0:1
        iface venet0:1 inet static
            address internal_ip
            netmask 255.255.255.255

    Please note that external_ip, internal_ip and ipv6addr are placeholders. I copied /etc/resolv.conf, I know the gateway_ip, and I also have the ifconfig output from Debian, if necessary. This is what I came up with for /etc/conf.d/net:

        config_venet0="127.0.0.2 netmask 255.255.255.255 brd 0.0.0.0"
        config_venet0:0="external_ip netmask 255.255.255.255 brd 0.0.0.0"
        route_venet0:0="default via gateway_ip"
        config_venet0:1="internal_ip netmask 255.255.255.255 brd 0.0.0.0"

    The broadcast IP is taken from the Debian ifconfig output - however, it doesn't work. A symbolic link net.venet0:0 -> net.lo was created in /etc/init.d/ and I added net.venet0:0 to the boot runlevel.
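    On an OpenVZ container the venet addresses are point-to-point /32s and the default route goes via the venet0 device rather than via a gateway address, so with the newer netifrc syntax the aliases can usually be listed on the main interface instead of separate venet0:0/venet0:1 stanzas. A heavily hedged sketch of what /etc/conf.d/net might look like (external_ip and internal_ip remain placeholders, and the exact syntax depends on the netifrc/baselayout version shipped in the image):

        # /etc/conf.d/net - sketch for an OpenVZ guest using netifrc
        config_venet0="127.0.0.2/32
        external_ip/32
        internal_ip/32"
        routes_venet0="default"

        # and the matching init script / runlevel entries:
        #   ln -s net.lo /etc/init.d/net.venet0
        #   rc-update add net.venet0 default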

    Read the article

  • SVN, Samba and Symbolic Links. How to get them all to play together?

    - by Camsoft
    I've got a website project under version control that relies on files from an unversioned directory on the same server via symbolic links. I'm currently storing the symbolic links in the repository. The idea is that if someone checks out a working copy onto the same server they can edit and test the working copy of the project before committing it back to the repository. When they check out their working copy it successfully sets up the symlinks so that the entire site works when testing. The users who work on the project are Windows users, so I've set up Samba shares on the server and then mapped them to network drives in Windows. People can edit their working copies directly on the server via the network shares and then test them in the web browser before committing their changes back to the repository via TortoiseSVN. The Problem: Samba resolves the symlinks as expected, but when a user tries to commit their changes back to the repository, TortoiseSVN thinks the linked files are part of the project and tries to commit the target files to the repository rather than the symlinks themselves. I tried turning off symlink support in Samba, which means the linked files cannot be resolved, as I don't really want people to have access to the linked files, nor do I want to import the linked files into the repository. The problem with this is that I get: "Can't stat '\webserver\projects\working\project\symlinked_file.php'. Access is denied". Apart from the symlink problem everything else works 100% perfectly. Users can either check out website projects to their own machine and work on them (but can't test), or check them out to their space on the dev web server, work on them and fully test. So I don't want to change the workflow; I just need a solution to the symbolic link issue. Many thanks. Originally posted on StackOverflow: http://stackoverflow.com/questions/2400917/svn-samba-and-symbolic-links-how-to-get-them-all-to-play-together
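    For reference, the Samba side of this is controlled by three smb.conf parameters, and since Samba 3.4.6 "wide links" only takes effect when UNIX extensions are disabled. A sketch of a share that resolves links pointing outside its own tree (share name and path are placeholders):

        [global]
            # "wide links" is ignored unless UNIX extensions are off
            unix extensions = no

        [projects]
            path = /srv/projects
            read only = no
            # follow symlinks inside the share
            follow symlinks = yes
            # allow links whose targets live outside the share tree
            wide links = yes

    This only addresses the access-denied side; whether TortoiseSVN commits the link targets still depends on how the links are versioned (svn:special symlinks), which Windows clients generally cannot represent as real links.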

    Read the article

  • SQL Server Remote Connections

    - by Barry
    Hi, I am at my wits' end trying to access a remote SQL Server 2008 R2 Express instance. Here is what I have tried:
      1) I enabled remote connections in the instance properties.
      2) I enabled SQL Server and Windows authentication mode and created an account to log in using SQL Server authentication.
      3) I started the SQL Server Browser service.
      4) I forwarded ports 1433 and 1434 on the router to the IP address of the machine hosting SQL Server.
      5) I turned off the firewalls on both the machine running the instance and the router.
      6) I used http://www.yougetsignal.com/tools/open-ports/ to check whether both ports were open, and it says they are closed.
    I have the SQL Server Express instance running and the Browser running. I have configured it to allow remote connections, yet it tells me both ports are closed. I'm pretty confused at this stage. On the client machine I am trying to connect using the format machineip\SQLEXPRESS with SQL Server Management Studio Express. Thanks in advance
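    Two things the list above doesn't cover, sketched with standard Windows commands (the ports shown are the usual defaults, not confirmed from the question): SQL Server Express listens on a dynamic TCP port unless a fixed one is set in SQL Server Configuration Manager (Protocols for SQLEXPRESS -> TCP/IP -> IPAll), and the Browser service that resolves "machineip\SQLEXPRESS" answers on UDP 1434, not TCP.

        rem Check whether anything is listening on the default SQL port
        netstat -ano | findstr ":1433"

        rem If the Windows firewall is re-enabled later, open the engine and Browser ports
        netsh advfirewall firewall add rule name="SQL Server"  dir=in action=allow protocol=TCP localport=1433
        netsh advfirewall firewall add rule name="SQL Browser" dir=in action=allow protocol=UDP localport=1434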

    Read the article

  • Linux on HP Envy

    - by Oscar Godson
    OK, the Ubuntu forums aren't helping, and I thought maybe you guys here could. First off, does anyone know the best flavor of Linux to use on an HP Envy - what has the best support out of the box? If not, does anyone know how to get the following to work on Ubuntu 10.04? The touchpad: right now right-clicking doesn't work at all, and left clicks don't work while you have another finger on the pad; it jumps all over. Also, the multi-touch isn't clickable, but it is for sure a multi-touch touchpad - it works in Windows 7 and can do MacBook Pro-like gestures there. The computer also feels like it's on fire; I think I'm missing some driver. It seems odd that the random meta keys like calc, email, brightness and right click work, but not the touchpad? The video card seems fine, but I haven't tested Compiz fully yet... Thanks so much to anyone who helps. I want to get back to Linux after a couple of years on Mac. :)

    Read the article

  • Laptop battery holds charge, but won't charge any more.

    - by Jeff
    OK, I'm sure I will need to replace either my battery or my AC adapter, but I would rather not buy one if the other is the problem. My problem is this: I have a Sager laptop that gets quite a bit of use. The charging has always been a little odd. If I was in the process of using it, it would charge just fine and stay on AC power. If I left it alone, however (power settings set to ONLY turn off the monitor), in either Ubuntu or Windows 7 it would decide that it didn't want to use AC power anymore and would just start draining the battery until it died. Now, suddenly, it won't charge at all. The capacity was great up to this point; this happened in an instant. It will recognize the battery but won't see the AC power if it is plugged in while the battery is in. I can power up the laptop without the battery and it works fine. If I plug in the battery while powered up, it claims it's charging, but it stays at the same percentage. If I unplug the power, it switches over to battery fine, but I have to power down and unplug the battery to get it back on AC power. I've had dying/dead batteries before, but they typically just stopped holding a full charge - they would still wind up at 100% and then drop quickly when unplugged. This seems more like a chip problem in the battery to me, but I'm not sure. Any ideas?

    Read the article

  • Best video codec to store my own collection

    - by Jack
    Hello! I think this question has already been asked, but with different flavours. My problem lies in the fact that my camera (Canon G9) records video with an almost raw codec (I think it's plain old MPEG), so a 10-minute video is almost 900 MB. I would like to convert these to a format with a good trade-off between space and quality. I would prefer the quality to stay as close to the original as possible (lossless is of course impossible with lossy compression) while saving as much space as possible with minimal loss of quality. Which codec should I look at? H.264? It seems to be the champion of the moment... otherwise, which other ones could I try? Xvid? Which parameters should I use - I mean, how many kbit/s is a fair bitrate to keep high quality? And what about the audio codec? The video specs are 640x480 at 30 fps or 1024x768 at 15 fps. Thanks in advance!
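    As a concrete starting point, a CRF-based H.264 encode avoids having to guess a bitrate: quality is held roughly constant and the file size falls out of it. A hedged ffmpeg sketch - the input file name is a placeholder, and CRF 18-23 is the usual range (lower means better quality and bigger files):

        # Visually near-transparent H.264 + AAC in an MP4 container
        ffmpeg -i MVI_0001.AVI \
               -c:v libx264 -preset slow -crf 20 \
               -c:a aac -b:a 160k \
               output.mp4

    Depending on the ffmpeg build, the native AAC encoder may need "-strict experimental"; for 640x480 source material, CRF around 20 typically produces files far smaller than the camera originals.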

    Read the article

  • MySQL Master - Master Broken

    - by Recc
    I've inherited a MySQL master-master system. I've noticed that the second master (let's call it the slave from now on, as it's running on a 'slave' machine) stopped getting its DBs updated. On the master I saw:

        Slave_IO_Running: Yes
        Slave_SQL_Running: Yes

    and on the slave (with an error I truncated):

        Slave_IO_Running: Yes
        Slave_SQL_Running: No
        Last_Errno: 1062
        Last_Error: Error 'Duplicate entry '3' for key 'PRIMARY'' on [...]

    I don't know what caused it to process that, considering we can't get a duplicate there. What's important is to resume normal operations. Right now I've run STOP SLAVE on the master and STOP SLAVE on the slave, because I saw that if I change records on the slave the changes DO get propagated to the master, which is in active use. How do I force-sync EVERYTHING from master to slave without affecting data on the master, and then hopefully have the slave pick up replication as usual? UPDATE: OK, I tried deleting all tables on the slave; then it complained in that error section that the table doesn't exist. So I made a no-data dump of the master and made sure I have only empty tables on the secondary (slave). I ran START SLAVE on the slave, BUT now it's complaining about bloody ALTER TABLE statements, for instance:

        Last_Errno: 1060
        Last_Error: Error 'Duplicate column name [...] Query: 'ALTER TABLE [...]

    How do I skip the fracking ALTER statements? I just want to replicate the bloody data and be done with it - my tables already have the latest changes, and now it's complaining about changes made after the replication seized up weeks ago. How do I reset the log or something? OUTSTANDING: Why would this start happening? The "secondary" is propagating to the "primary", the "primary" is not propagating to the "secondary", and any fixes I tried left it in the same state: Yes-Yes, Yes-No with the same Last_Error. I think around that time the server was taken off the network - could that confuse MySQL in some way?
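    The usual way to get a broken pair back in sync without touching the live master is to re-seed the slave from a dump that carries the master's binlog coordinates, then point replication at those coordinates. A hedged sketch (credentials, file names and log positions are placeholders; --single-transaction only avoids locking for InnoDB tables):

        # On the master: consistent dump that records the binlog file/position
        mysqldump -u root -p --all-databases --single-transaction \
                  --master-data=2 --routines --triggers > master_dump.sql

        # On the slave: stop replication, load the dump, then re-point it
        mysql -u root -p -e "STOP SLAVE;"
        mysql -u root -p < master_dump.sql

        # The file and position below come from the commented "CHANGE MASTER TO"
        # line near the top of master_dump.sql
        mysql -u root -p -e "CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000123', MASTER_LOG_POS=456; START SLAVE;"
        mysql -u root -p -e "SHOW SLAVE STATUS\G"

    Skipping individual statements with sql_slave_skip_counter is possible, but once the slave is weeks behind and replaying old ALTERs it tends to be faster and safer to re-seed like this.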

    Read the article

  • Load Balancing Rails on Apache 2.x

    - by revgum
    My situation is that I need to proxy traffic to the root of my web server to port 81 for IIS, and then any traffic to a sub-directory needs to be directed to the Rails app: my-server.com/ needs to proxy to port 81, and my-server.com/myapp needs to point to the Rails app. This seems to be working all right for the Rails application, but the images, javascripts, and stylesheets are not actually working (proxied). I've tried to fiddle with the ProxyPass lines but it still doesn't work for me. Can anyone help? Here's my complete VirtualHost portion of the config:

        LoadModule proxy_module modules/mod_proxy.so
        LoadModule proxy_http_module modules/mod_proxy_http.so

        ProxyRequests off

        <Proxy balancer://myapp_cluster>
            BalancerMember http://127.0.0.1:3001
            BalancerMember http://127.0.0.1:3002
        </Proxy>

        <VirtualHost *:80>
            DocumentRoot "c:\ruby\apps\myapp\public"
            <Directory /myapp >
                Options FollowSymLinks
                AllowOverride None
            </Directory>

            ProxyPass /myapp/images !
            ProxyPass /myapp/stylesheets !
            ProxyPass /myapp/javascripts !
            ProxyPass /myapp/ balancer://myapp_cluster/
            ProxyPassReverse /myapp/ balancer://myapp_cluster/

            ProxyPreserveHost on
            ProxyPass / http://localhost:81/

            ErrorLog "c:\ruby\apps\myapp\log\error.log"
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog "c:\ruby\apps\myapp\log\access.log" combined
        </VirtualHost>
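    One hedged explanation for the broken assets: the ProxyPass exclusions make Apache serve /myapp/images itself, but with DocumentRoot set to the app's public folder Apache then looks for public\myapp\images, which doesn't exist. Mapping the /myapp prefix onto the public directory with Alias is a small sketch of a possible fix (paths copied from the config above; not verified on this setup):

        # Serve the Rails app's static assets directly for the excluded paths
        Alias /myapp "c:/ruby/apps/myapp/public"
        <Directory "c:/ruby/apps/myapp/public">
            Options FollowSymLinks
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>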

    Read the article

  • Triple monitor setting in Linux with USB-HDMI adapter

    - by Oscar Carballal
    I'm trying to set up a triple-monitor desktop at my office using Fedora 17, but it seems impossible. Let me explain the setup:
      - ASUS K53SD laptop with 2 graphics cards, Intel and nVidia (the laptop screen is controlled by the Intel card)
      - 24" Full HD monitor connected to the HDMI output (controlled by the Intel card)
      - 23" Full HD monitor connected to a USB-HDMI adapter (via framebuffer in /dev/fb2, apparently)
      - VGA output (not used), controlled by the nVidia card
    First of all, the USB-HDMI adapter works perfectly: it gives me a green screen (which means the communication is OK) and I can make it work if I set up a single-monitor configuration via framebuffer in Xorg. Here is the page where I got the instructions: http://plugable.com/2011/12/23/usb-graphics-and-linux Now I'm trying to set up the two main monitors (laptop and 24") with the Intel driver and the 23" with the framebuffer, but the most successful configuration I get is the two main monitors working and the third disconnected. Do you have any idea what I can do to make this work? Here are my xrandr output and my Xorg conf:

        -> xrandr
        Screen 0: minimum 320 x 200, current 3286 x 1080, maximum 8192 x 8192
        LVDS1 connected 1366x768+0+0 (normal left inverted right x axis y axis) 344mm x 193mm
           1366x768   60.0*+
           1024x768   60.0
           800x600    60.3  56.2
           640x480    59.9
        VGA2 disconnected (normal left inverted right x axis y axis)
        HDMI1 connected 1920x1080+1366+0 (normal left inverted right x axis y axis) 531mm x 299mm
           1920x1080  60.0*+  50.0  25.0  30.0
           1680x1050  59.9
           1680x945   60.0
           1400x1050  74.9  59.9
           1600x900   60.0
           1280x1024  75.0  60.0
           1440x900   75.0  59.9
           1280x960   60.0
           1366x768   60.0
           1360x768   60.0
           1280x800   74.9  59.9
           1152x864   75.0
           1280x768   74.9  60.0
           1280x720   50.0  60.0
           1440x576   25.0
           1024x768   75.1  70.1  60.0
           1440x480   30.0
           1024x576   60.0
           832x624    74.6
           800x600    72.2  75.0  60.3  56.2
           720x576    50.0
           848x480    60.0
           720x480    59.9
           640x480    72.8  75.0  66.7  60.0  59.9
           720x400    70.1
        DP1 disconnected (normal left inverted right x axis y axis)
           1920x1080_60.00  60.0

    The Xorg file:

        # Xorg configuration file for using a tri-head display
        Section "ServerLayout"
            Identifier  "Layout0"
            Screen 0    "HDMI" 0 0
            Screen 1    "USB" RightOf "HDMI"
            Option      "Xinerama" "on"
        EndSection

        ########### MONITORS ################
        Section "Monitor"
            Identifier  "USB1"
            VendorName  "Unknown"
            ModelName   "Acer 24as"
            Option      "DPMS"
        EndSection

        Section "Monitor"
            Identifier  "HDMI1"
            VendorName  "Unknown"
            ModelName   "Acer 23SH"
            Option      "DPMS"
        EndSection

        ########### DEVICES ##################
        Section "Device"
            Identifier  "Device 0"
            Driver      "intel"
            BoardName   "GeForce"
            BusID       "PCI:0:02:0"
            Screen      0
        EndSection

        Section "Device"
            Identifier  "USB Device 0"
            Driver      "fbdev"
            Option      "fbdev" "/dev/fb2"
            Option      "ShadowFB" "off"
        EndSection

        ############## SCREENS ######################
        Section "Screen"
            Identifier  "HDMI"
            Device      "Device 0"
            Monitor     "HDMI1"
            DefaultDepth 24
            SubSection "Display"
                Depth 24
            EndSubSection
        EndSection

        Section "Screen"
            Identifier  "USB"
            Device      "USB Device 0"
            Monitor     "USB1"
            DefaultDepth 24
            SubSection "Display"
                Depth 24
            EndSubSection
        EndSection

    Read the article

  • Router not connecting to the internet

    - by Peter
    I had a weird problem this morning with my router - it's an Alice Gate VoIP 2 Plus Wi-Fi - which stopped connecting to the internet after more or less 1 hour online (I'm always connected with the ethernet cable). The LED status lights for Power, Wi-Fi, ADSL, Internet and Service should all be on in order to be connected and browse online. The problem was that the ADSL and Internet LEDs were off, and I did not know why, because it had never happened before. I looked at the stats in the settings and the numbers for bytes/packets sent and received were there and increasing, but I couldn't connect to the internet. I called tech support; they checked and told me to keep the router on for 48 hours because they were checking it. I had reset it twice, before and after calling tech support, and it still did not work, so after about 2 hours of waiting I tried connecting over Wi-Fi and the LEDs 'magically' turned on - first ADSL, then Internet (the Internet LED always turns on last). At this point I'm curious what could have caused this, and I wonder whether the tech-support guys did something. What could have been the problem with the ethernet connection not working in the first place? It always works. And what do the tech support guys normally do when they tell me to keep the router on so they can 'check it'? PS: I'm using Ubuntu 32-bit.

    Read the article

  • Distributing processing for an application that wasn't designed with that in mind

    - by Tim
    We've got the application at work that just sits and does a whole bunch of iterative processing on some data files to perform some simulations. This is done by an "old" Win32 application that isn't multi-processor aware, so new(ish) computers and workstations are mostly sitting idle running this application. However, since it's installed by a typical Windows Install Shield installer, I can't seem to install and run multiple copies of the application. The work can be split up manually before processing, enabling the work to be distributed across multiple machines, but we still can't take advantage of multiple core CPUs. The results can be joined back together after processing to make a complete simulation. Is there a product out there that would let me "compartmentalize" an installation (or 4) so I can take advantage of a multi-core CPU? I had thought of using MS Softgrid, but I believe that still depends on a remote server to do the heavy lifting (though please correct me if I'm wrong). Furthermore, is there a way I can distribute the workload off the one machine? So an input could be split into 50 chunks, handed out to 50 machines, and worked on? All without really changing the initial application? In a perfect world, I'd get the application to take advantage of a DesktopGrid (BOINC), but like most "mission critical corporate applications", the need is there, but the money is not. Thank you in advance (and sorry if this isn't appropriate for serverfault).

    Read the article

  • My .htaccess file re-directed problems?

    - by Glenn Curtis
    I am hoping you can help me! Below is my .htaccess file for my Apache server running on top of Ubuntu Server. This is my local server, which I installed so I can develop my site on it instead of using my live site. I have all my files and the database on my localhost now, but each time I access my server, vaio-server (it's a Sony laptop), it just takes me to my live site! Everything is in the root of Apache, /var/www - it's the only site I will develop on this system, so I don't need to configure it to look at anything more than this one site. I think that's all; all the Apache files, sites-available/default etc., are standard. Please help! Many thanks, Glenn Curtis.

        DirectoryIndex index.php index.html

        # Upload sizes
        php_value upload_max_filesize 25M
        php_value post_max_size 25M

        # Avoid folder listings
        Options -Indexes

        <IfModule mod_rewrite.c>
            Options +FollowSymlinks
            RewriteEngine on
            RewriteBase /

            # Maintenance
            #RewriteCond %{REQUEST_URI} !/maintenance.html$
            #RewriteRule $ /maintenance.html [R=302,L]

            # Redirects to www
            #RewriteCond %{HTTP_HOST} !^vaio-server [NC]
            #RewriteCond %{HTTPS}s ^on(s)|off
            #RewriteRule ^(.*)$ glenns-showcase.net/$1 [R=301,QSA,L]

            # Empty string
            RewriteRule ^$ app/webroot/ [L]
            RewriteRule (.*) app/webroot/$1 [L]
        </IfModule>
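    Since the redirect-to-live rules above are commented out, the rewrite rules themselves are probably not the culprit; a hedged guess is that the name is resolving to the live host, or the browser is replaying a cached 301 from the live site. A quick way to check, assuming the local machine should win - the hostnames below come from the question, the address is illustrative:

        # Make the machine you browse from resolve the site to the local server
        # (add to /etc/hosts for testing)
        127.0.0.1   vaio-server
        127.0.0.1   glenns-showcase.net www.glenns-showcase.net

        # Confirm which address the names actually resolve to, and fetch the headers
        # fresh rather than from any cached permanent redirect
        getent hosts glenns-showcase.net
        curl -I -H "Cache-Control: no-cache" http://vaio-server/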

    Read the article

  • Can I use a Drobo FS over the internet? -Or alternative?

    - by SeniorShizzle
    I have a lot of files. A HUGE Aperture library, lots of design work from Photoshop, and a rather large iTunes library too. I really want to get a Drobo FS, the networked one, to store all of my stuff on, so that I can get to it from my MacBook Air (which obviously with its minuscule 64GB drive can't hold my Aperture library by itself) and my iMac which is my main powerhouse. The dealbreaker for me is that I NEED to be able to access my Aperture library, and especially my iTunes library from across the internet. I understand that it will probably be slow and everything, but there's nothing else I can do besides hauling around a huge hard drive with me. So, is there any way I can somehow share my Drobo across the internet, on a VPN or something? My other alternative is to upload everything to my web host, FatCow, which offers unlimited disk space (something I hope to make them regret) and then access it using Expandrive. My only thoughts with this are that with the Drobo, any work that I do locally will be much snappier than if I have to work everything off the cloud. Any suggestions about alternatives would also be welcome.

    Read the article
