Search Results

Search found 15209 results on 609 pages for 'configuration'.

  • RAID0 problem with a Sony sporting a new HDD

    - by redrock
    Sony Windows 7 PC. It originally had 2 x 300 GB HDDs. One HDD completely pancaked, so I have replaced it with a new 500 GB HDD. When both drives are connected, the 300 GB drive doesn't appear to be recognised as a separate entity. The BIOS sees it, but the operating system only sees a total of 465 GB of disk space. With both disks attached, Disk Management shows one 465 GB volume as RAID 0 and the new drive as STxxxxxx 465 GB. My question, I guess, is what total HDD space should I see, and is this configured correctly? I thought I would see two separate drives, 1 x 500 GB and 1 x 300 GB. My customer insisted that prior to the HDD crash he saw two drives, both registering as 300 GB (a C: and a D: drive).

  • .ashx cannot find type error on IIS7, no problems on webdev server

    - by Aivan Monceller
    I am trying to make AspNetComet.zip work on IIS7 (a simple comet chat implementation). Here is a portion of my web.config:

        <system.web>
          <httpHandlers>
            <add verb="POST" path="DefaultChannel.ashx" type="Server.Channels.DefaultChannelHandler, Server"/>
          </httpHandlers>
        </system.web>
        <system.webServer>
          <handlers>
            <add name="DefaultChannelHandler" verb="POST" path="DefaultChannel.ashx" type="Server.Channels.DefaultChannelHandler, Server"/>
          </handlers>
        </system.webServer>

    When I publish the website on my localhost IIS7 I receive an error:

        POST http://localhost/DefaultChannel.ashx
        500 Internal Server Error
        Could not load type 'Server.Channels.DefaultChannelHandler'

    The target framework of this project is .NET 2.0. I tried the Classic and Integrated Mode application pools for .NET 2.0 with no luck. I also tried converting the project to 4.0 and trying the Classic and Integrated Mode application pools for .NET 4.0, again with no luck. I also tried adding the managed handler through IIS Manager's Handler Mappings. If you have time, please download the source (184 KB) to reproduce the problem on your own machine. The zip contains a VS2010 solution (.NET 2.0); you could also try to convert it to .NET 4.0. I am using Windows 7, if that matters. If you need more details, please drop your comments below. This works fine, by the way, on my webdev server.

  • Installing PHP with iconv on Mac OSX 10.6 gives error: "Undefined symbols for architecture x86_64"

    - by Jason
    I am setting up PHP on my local web server with iconv, which should be there by default, but I am getting the following error:

        Undefined symbols for architecture x86_64:
          "_iconv", referenced from:
              __php_iconv_strlen in iconv.o
              _php_iconv_string in iconv.o
              __php_iconv_strpos in iconv.o
              __php_iconv_appendl in iconv.o
              _zif_iconv_substr in iconv.o
              _zif_iconv_mime_encode in iconv.o
              _php_iconv_stream_filter_append_bucket in iconv.o
              ...
          (maybe you meant: _zif_iconv_strlen, _zif_iconv_strpos, _zif_iconv_mime_decode, _php_iconv_string, _zif_iconv_set_encoding, _zif_iconv_get_encoding, _zif_iconv_mime_decode_headers, _iconv_functions, _zif_iconv_mime_encode, _zif_iconv_strrpos, _iconv_module_entry, _iconv_globals, _zif_iconv_substr, _php_if_iconv, _zif_ob_iconv_handler)
        ld: symbol(s) not found for architecture x86_64

    Now, presumably this means that my iconv is not 64-bit. I tried to download libiconv and manually reinstall it with 64-bit support, like this:

        CFLAGS='-arch i386 -arch ppc -arch ppc64 -arch x86_64' CCFLAGS='-arch i386 -arch ppc -arch ppc64 -arch x86_64' CXXFLAGS='-arch i386 -arch ppc -arch ppc64 -arch x86_64' ./configure

    But I get the following error:

        configure: error: in `/Users/jason/Downloads/libiconv-1.13.1':
        configure: error: C compiler cannot create executables
        See `config.log' for more details.

    At the bottom of my config.log file, it says:

        configure: exit 77

    Any suggestions?
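
    A possible direction, offered only as a hedged sketch (both the idea of pointing PHP at the system libiconv and the single-architecture rebuild are assumptions, not from the original post):

        # Build PHP against the system libiconv rather than a locally built copy:
        ./configure --with-iconv=/usr ...
        # Or, if rebuilding libiconv, target only the architecture actually needed;
        # the extra ppc/ppc64 -arch flags are a common cause of "C compiler cannot
        # create executables" on Snow Leopard toolchains:
        CFLAGS='-arch x86_64' CXXFLAGS='-arch x86_64' ./configure --prefix=/usr/local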

  • Is there any way to build a RAID system without all drives?

    - by xenoterracide
    I'm building a RAID 1 system (OK, it will probably be RAID 10,f2, but with 2 drives the difference isn't much) with two 1 TB drives. However, one of the drives I ordered is bad, so I'm RMA-ing it. I'm wondering if I could partition and install to the one good drive and then rebuild the array when I get the second drive (after I test it, of course). My initial investigation doesn't show me a way of creating the array without specifying all devices, and the device that will eventually become the second member currently holds data that I will need to migrate (plus it's not big enough). Is it possible to create an array without specifying all devices, or to specify false ones and reconfigure to the right ones later? Or is there some other method I'm not thinking of?
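
    For what it's worth, Linux software RAID (mdadm) can create a degraded array with a placeholder member; a minimal sketch, assuming mdadm, with device names that are only placeholders:

        # Create the mirror with one member deliberately missing:
        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 missing
        # Later, once the replacement drive arrives and tests clean, add it and
        # let the array resync:
        mdadm /dev/md0 --add /dev/sdb1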

  • Failing to achieve tunneling to a fresh Ubuntu 10.04 server

    - by user65297
    I've just set up a new 10.04 server and can't get tunneling to work. From the local machine:

        ssh -L 9090:localhost:9090 [email protected]

    The login succeeds, but when I then point the local browser at http://127.0.0.1:9090, the server terminal echoes:

        channel 3: open failed: connect failed: Connection refused

    and auth.log shows:

        sshd[24502]: error: connect_to localhost port 9090: failed.

    iptables -L:

        Chain INPUT (policy ACCEPT)
        target prot opt source destination
        Chain FORWARD (policy ACCEPT)
        target prot opt source destination
        Chain OUTPUT (policy ACCEPT)
        target prot opt source destination

    Trying port 9090 on the server directly (links http://xx.xxx.xx.xx:9090) works. sshd_config is identical to the previous 8.04 server, which works fine. What's going on? Thankful for any input. Regards, //t
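
    "Connection refused" for localhost:9090 on the server side usually means nothing is listening on the loopback address; a diagnostic sketch, not taken from the original post:

        # On the server, check which address the service is bound to:
        netstat -tlnp | grep 9090
        # If it only listens on the external address, tunnel to that address
        # instead of localhost:
        ssh -L 9090:xx.xxx.xx.xx:9090 [email protected]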

  • Mac OS X: Change default Name when connecting to server

    - by Kami
    When trying to connect to a server I get the following prompt. By default, Snow Leopard fills the Name field with the Firstname Lastname found in System Preferences - Accounts - My Account - Full Name. I don't want to change my Full Name to the username I use to log in to the server. How do you change the default Name that Snow Leopard uses when connecting to a server?

  • How to configure IIS 7.5 to allow special chars in Url for ASP.NET 3.5?

    - by Sebastian P.R. Gingter
    I'm trying to configure my IIS 7.5 to allow special chars in the URL for ASP.NET. This is important to support widespread legacy URLs on a new system. Sample URL: http://mydomain.com/FileWith%inTheName.html This would be percent-encoded and requested as http://mydomain.com/FileWith%25inTheName.html This simply works when creating a new web in IIS 7.5, placing a file with the percentage sign in the file name in the web root, and pointing the browser to it. It does not work, however, when the web site is an ASP.NET application. ASP.NET always returns a 400.0 - Bad Request error in the WindowsAuthentication module from the StaticFile handler when pointing to that URL. It does, however, display the requested URL correctly and also resolves it correctly to the correct physical file (the information in the 'Physical Path' field of the server error page points to the physically available file). There are hints on how to enable this, so I followed the instructions on these websites step by step: http://dirk.net/2008/06/09/ampersand-the-request-url-in-iis7/ http://adorr.net/2010/01/configure-iis-to-accept-url-with-special-characters.html The second one actually sums up the information from the first post and adds some more information about x64 systems (we're running x64) and about an additional web.config change for this. I tried all that, and still can't get this running from an ASP.NET web application. And yes: I rebooted after applying the registry changes. So, what do I have to do, in addition to the settings described in the above posts, to support the legacy URLs which contain percentage characters? Additional info: the application pool mode is Integrated. Bump after some days - no ideas, anyone?
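
    For reference, the IIS setting usually involved when %25 sequences are rejected is request filtering's double-escaping check; this web.config fragment is only a hedged sketch of that setting, not a confirmed fix for this particular case:

        <system.webServer>
          <security>
            <!-- allow doubly-escaped sequences such as %25 through request filtering -->
            <requestFiltering allowDoubleEscaping="true" />
          </security>
        </system.webServer>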

  • If two separate PATH directories contain a same-named executable, how does Windows choose?

    - by Coldblackice
    I'm in the process of upgrading PEAR (PHP) on my system. The upgrade script is encouraging me to add "..\PHP\PEAR" to my PATH so that I can use "pear.bat". However, I already am able to use pear.bat. Looking in my PATH, I see that I don't have any PEAR directories, only my PHP directory. Opening my PHP directory, I see that there's a "pear.bat" in the base. But there's also a pear.bat in the PEAR subfolder of PHP. I'm wondering if I borked a PEAR install. I digress. So if I leave ..\PHP in my path, but also add ..\PHP\PEAR -- both of which have a "pear.bat" in them -- which one will Windows "choose"? How does Windows decide?
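
    As an aside, cmd.exe resolves commands by searching the current directory and then the PATH directories from left to right, and the where tool lists matches in that same order; purely as an illustration:

        where pear.bat
        :: prints every pear.bat found along PATH; the first hit is the one that runs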

  • In Tripwire For Servers policy what is the difference between ACL and permissions?

    - by this.josh
    I am configuring a policy file for Tripwire For Servers for GNU/Linux (x86) version 4.8.0.167 My system has ext2 and ext3 filesystems. In the policy file the properties include "ACL settings", "permission and file mode bits", and "Flags (additional permissions on object)". What is the difference between ACL settings and permissions for ext2 and ext3 filesystems, and what additional checking does the Flags property provide?
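
    To make the distinction concrete, here is an illustrative comparison using standard Linux tools (not taken from the Tripwire documentation):

        stat -c '%A %a' /etc/passwd   # classic permission/mode bits (rwx, setuid, etc.)
        getfacl /etc/passwd           # POSIX ACL entries (extra per-user/per-group rules)
        lsattr /etc/passwd            # ext2/ext3 attribute flags (immutable, append-only, ...)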

  • Vim convert tabs to spaces for one folder

    - by Macha
    I'm working on a project with a friend soon. Given that he cares about these things more than me, I let him choose the indentation. Him being a Ruby fan, he chose two spaces. My usual is to use tabs. Is there any way I can set Vim to, only in this specific folder, insert two spaces when I press the Tab key?
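
    One common pattern is a path-scoped autocommand in ~/.vimrc; a minimal sketch, where the project path is just a placeholder:

        autocmd BufRead,BufNewFile ~/code/ruby-project/* setlocal expandtab shiftwidth=2 softtabstop=2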

  • Tomcat memory usage grows until crash with no GC run

    - by Phil
    I'm administrating a server running Tomcat that is getting a lot of traffic lately. If I monitor memory usage in Task Manager, I can see the memory usage growing, and eventually Tomcat crashes around the 1 GB mark. Here are the memory-relevant bits I've set in Tomcat Properties (this is a Windows server):

        Initial memory pool: 1024 MB
        Maximum memory pool: 1024 MB
        -XX:MaxPermSize=256M

    The weird thing is that since these problems arose I've deployed Lambda Probe to the Tomcat instance, and the memory usage values I see there are much lower; for example, Task Manager might show 467 MB used while the "Total" used in Probe is 212 MB. Also, the Maximum Total listed in Probe is 1.29 GB, when I would have expected 1 GB, the maximum memory set above. If I force the garbage collector to run using Probe, I can keep Tomcat from crashing for a while (indefinitely, AFAIK). So why doesn't the GC run automatically and stop Tomcat from crashing? Thanks.
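
    As an aside, Task Manager counts the whole process (Java heap plus PermGen plus native allocations), which is one reason it reads higher than the heap total shown in Probe. GC activity can also be made visible with standard JVM options added to Tomcat's Java settings; these flags are illustrative and not taken from the original configuration:

        -verbose:gc -XX:+PrintGCDetails -Xloggc:gc.log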

  • Postfix - how to redirect email that would otherwise be rejected?

    - by Bartosz Kowalczyk
    I have a problem with spam and Postfix + Postgrey. It generally works well, but I still get false positives, and good email gets rejected. So now I'm wondering: can I configure Postfix (and Postgrey) so that, if a message would be rejected, it is instead redirected to [email protected] (i.e. the recipient is changed)? Or, alternatively: have a copy of each email sent to [email protected] before filtering, so that if a message hits a restriction it can simply be rejected (another copy is already in [email protected])? How do I do either of these? Sorry for my English. Can you help me? Thank you
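
    For the "keep a copy of everything" variant, Postfix has a built-in option; a hedged sketch for main.cf, with the caveat that it only copies mail Postfix actually accepts, so messages rejected at SMTP time (as greylisting does) are not captured:

        # main.cf (illustrative only)
        always_bcc = [email protected]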

  • Nginx: Serve static files out of a given directory - one level too deep

    - by Joe J
    I'm pretty new to nginx configs, and I'm having some difficulty with a pretty basic problem. I'd like to host some static files at /doc (index.html, some images, etc.). The files are located in a directory called /sites/mysite/proj/doc/. The problem is that, with the nginx config below, nginx tries to look for a directory called "/sites/mysite/proj/doc/doc". Perhaps this could be fixed by setting the root to /sites/mysite/proj/, but I don't want to potentially expose other (non-static) assets in the proj/ directory, and for various reasons I can't really move the doc/ directory from where it is. I think there is a way to use a rewrite rule to solve this situation, but I don't really understand all the parts, so I'm having some difficulty formulating the rule:

        rewrite ^/doc/(.*)$ /$1 permanent;

    I've also included a working example of hosting files out of a /sites/mysite/htdocs/static/ directory.

        > vim locations.conf

        location /static {
            root /sites/mysite/htdocs/;
            access_log off;
            autoindex on;
        }

        location /doc {
            root /sites/mysite/proj/doc/;
            access_log on;
            autoindex on;
        }

        2011/11/19 23:49:00 [error] 2314#0: *42 open() "/sites/mysite/proj/doc/doc" failed (2: No such file or directory), client: 100.100.100.100, server: , request: "GET /doc HTTP/1.1", host: "myhost.com"

    Does anyone have any ideas how I might go about serving this static content? Any help is much appreciated. Thanks, Joe
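
    One likely-relevant detail, offered as a hedged sketch rather than a confirmed fix: with root, nginx appends the full request URI to the path, while alias replaces the matched location prefix, so /doc/index.html would map to /sites/mysite/proj/doc/index.html:

        location /doc/ {
            alias /sites/mysite/proj/doc/;
            autoindex on;
        }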

  • lighttpd config and rewriting/disabling attempts to access favicon.ico

    - by Kyle
    I've got lighttpd and Apache working together on an app I'm building. lighty is serving out static content. However, each time a static asset is requested, I see a "not found: favicon.ico" message in the logs. I have added the following URL rewrite:

        url.rewrite-once = ( "^/favicon.ico$" => "/assets/images/favicon.png" )

    But to no avail; I'm still getting the message. Any ideas?

  • How can I fix puppet refusing to start and asking for "master.pp"?

    - by cwd
    I'm using the very latest version of Puppet and have been following the Apress "Pro Puppet" guide step by step. I have installed Puppet:

        sudo aptitude install ruby libshadow-ruby1.8
        sudo aptitude install puppet puppetmaster facter

    I have edited /etc/puppet/puppet.conf to include certname:

        [master]
        certname=puppet.mydomain.com

    I have edited /etc/hosts and added the following line:

        127.0.0.1 puppet.mydomain.com puppet

    I have set the hostname of the server:

        echo "puppet.mydomain.com" > /etc/hostname
        hostname -F /etc/hostname

    And then I try to run Puppet from the command line:

        puppet master --verbose --no-daemonize

    And Puppet gives me this error:

        Could not parse for environment production: Could not find file /master.pp

    I'm running all commands with sudo, and the last line of the error message always says that it can't find master.pp, with the path before it being my current working directory. What am I doing wrong? I should also mention that I don't have a DNS record set up for puppet.mydomain.com - I saw some online documentation mentioning this might be a problem - however I was fairly sure that the hosts file would let me get around that.
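
    One hedged guess, not from the post: on older Debian/Ubuntu packages (pre-2.6), the bare puppet command is the standalone manifest applier, so it parses "master" as a manifest filename, which would produce exactly this "Could not find file /master.pp" error. It may be worth checking the packaged version and, if it is older, starting the master with the old daemon name:

        puppet --version
        # pre-2.6 packages ship separate binaries:
        puppetmasterd --verbose --no-daemonize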

  • HAProxy roundrobin balancing does not appear to be distributing evenly

    - by andrew
    Hello, I know that with loaded servers, roundrobin in HAProxy (1.4.4) does not distribute perfectly evenly, but my servers are currently getting NO traffic (test setup), and roundrobin balancing gives www1,www1,www1,www1,www1,...www2,www2,www2,...,www1... I'm verifying this by having the script run cat /etc/HOSTNAME on each server (Slackware). I need to have it switch back and forth each time to test some session stuff (stored in shared memcached), but I'm having trouble getting it to switch between my two web servers on each request.

        global
            log 127.0.0.1 local0 warning
            maxconn 4096
            chroot /usr/share/haproxy
            pidfile /var/run/haproxy.pid
            uid 99
            gid 99
            daemon

        defaults
            balance roundrobin
            fullconn 100
            maxconn 4096
            mode http
            option dontlognull
            option http-server-close
            option forwardfor
            option redispatch
            retries 3
            timeout connect 5000
            timeout client 20000
            timeout server 60000
            timeout queue 60000
            stats enable
            stats uri /haproxy
            stats auth ***:***

        frontend www *:80
            log global
            acl is_upload hdr_dom(host) -i uploads.site.com
            acl is_api hdr_dom(host) -i api.site.com
            acl is_dev hdr_dom(host) -i dev.site.com
            acl is_apidev hdr_dom(host) -i apidev.site.com
            use_backend uploads.site.com if is_upload
            use_backend api.site.com if is_api
            use_backend dev.site.com if is_dev !is_apidev
            default_backend site.com

        backend site.com
            option httpchk HEAD /alive.php HTTP/1.1\r\nHost:site.com
            server www1 1.1.1.1:8080 weight 10 minconn 5 maxconn 25 check inter 2000 rise 2 fall 2
            server www2 1.1.1.2:8080 weight 10 minconn 5 maxconn 25 check inter 2000 rise 2 fall 2

        backend api.site.com
            option httpchk HEAD /alive.php HTTP/1.1\r\nHost:api.site.com
            server www1 1.1.1.1:8080 weight 10 minconn 5 maxconn 25 check inter 2000 rise 2 fall 2
            server www2 1.1.1.2:8080 weight 10 minconn 5 maxconn 25 check inter 2000 rise 2 fall 2

        backend dev.site.com
            option httpchk HEAD /alive.php HTTP/1.1\r\nHost:dev.site.com
            server www1 1.1.1.1:8080 weight 10 minconn 5 maxconn 25 check inter 2000 rise 2 fall 2
            server www2 1.1.1.2:8080 weight 10 minconn 5 maxconn 25 check inter 2000 rise 2 fall 2

        backend uploads.site.com
            option httpchk HEAD /alive.php HTTP/1.1\r\nHost:uploads.site.com
            server www1 1.1.1.1:8080 weight 10 minconn 5 maxconn 25 check inter 2000 rise 2 fall 2
            server www2 1.1.1.2:8080 backup weight 10 minconn 5 maxconn 25 check inter 2000 rise 2 fall 2

    So basically, I have some different back-ends (I've verified the ACLs are working), with the default "roundrobin" option selected. I've tried removing weights, removing the minconn/maxconn/fullconn attributes for all servers (not just the backend I'm testing), tried removing the ACLs, etc. I've been testing on dev.site.com, BTW. Does anyone see a reason why I can't get something like www1,www2,www1,www2,...? Also, this is one of my first questions on here, so please let me know if I left anything needed out of my post. Thanks!

  • FTP issue - VSFTPd and connecting from FileZilla

    - by B Tyndall
    I'm trying to connect to a CentOS Linux box that I have hosted on EC2, and I think I have everything configured correctly, but when I try to connect I get this series of messages:

        Status:   Connection established, waiting for welcome message...
        Response: 220 (vsFTPd 2.0.5)
        Command:  USER tyndall
        Response: 331 Please specify the password.
        Command:  PASS *********
        Response: 230 Login successful.
        Status:   Connected
        Status:   Retrieving directory listing...
        Command:  PWD
        Response: 257 "/home/tyndall"
        Command:  TYPE I
        Response: 200 Switching to Binary mode.
        Command:  PASV
        Error:    Connection timed out
        Error:    Failed to retrieve directory listing

    I'm not sure where to start troubleshooting this issue. Any ideas? Do I need to change any permissions? I would think this ID has the ability to see my own home directory. I am able to push/pull files from the command-line FTP client on Windows.
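
    A timeout immediately after PASV is characteristic of blocked passive-mode data ports; as a hedged sketch (the port range is an arbitrary assumption), vsftpd can be pinned to a fixed range that is then opened in the EC2 security group:

        # /etc/vsftpd/vsftpd.conf (illustrative values)
        pasv_enable=YES
        pasv_min_port=50000
        pasv_max_port=50100
        pasv_address=<the instance's public IP>
        # ...and allow inbound TCP 50000-50100 in the EC2 security group.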

  • What are the possible disadvantages of enabling the "data access" server option in sys.servers for the local server?

    - by Corp. Hicks
    We plan to change the default server options of an SQL2k5 server instance by enabling data access. The reason is that we want to run "SELECT * FROM OPENQUERY(LOCALSERVER, '...')"-like statements on the server. What are the possible disadvantages of enabling the server option "data access" (alias sys.servers.is_data_access_enabled) for the local server (sys.servers.server_id = 0)? (There must be a reason for MS setting this option to disabled by default...) EDIT: it turns out that I'm not the first person to ask this question: http://sqlblogcasts.com/blogs/piotr_rodak/archive/2009/11/22/data-access-setting-on-local-server.aspx "The DATA ACCESS server option is not very well documented in my opinion - the Books Online say it is a property of linked servers. It doesn't mention at all that you actually can have it enabled on your local server to enable OPENQUERY calls. I noticed that when you disable DATA ACCESS on a linked server, you can't query any table located on it (I tested it on my loopback server), neither using OPENQUERY nor the four-part naming convention. You can still call procedures (with four-part naming) that return rowsets. Well, the interesting question is why it is disabled by default on the local server - I suppose to discourage users from using OPENQUERY against it." It also seems that the author of the post (Piotr Rodak) is a Stack Overflow user :-)
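
    For concreteness, an illustrative snippet of what enabling and using the option looks like (the server name and query are placeholders, not from the post):

        -- enable the DATA ACCESS option on the local server entry (server_id = 0)
        EXEC sp_serveroption @server = 'MYSERVER', @optname = 'data access', @optvalue = 'true';
        -- loopback query via OPENQUERY
        SELECT * FROM OPENQUERY(MYSERVER, 'SELECT name FROM sys.databases');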

  • How to configure IIS for SVG and web testing with Visual Studio?

    - by macias
    Let's say I have a simple web page with an SVG image in it:

        <img src="foobar.svg" alt="not working" />

    If I make this page a static HTML page and view it directly, the SVG is displayed. If I type the address of the SVG, it is displayed. But when I make this an .aspx page and launch it dynamically from Visual Studio, I get the alt text. If I type the address of the SVG (from localhost, not as a local file), the browser tries to download it instead of displaying it. I have already defined the MIME type in IIS (for the entire server: "image/svg+xml") and restarted IIS. Same effect as before. Question: what more should I do?

    Update: WireShark won't work (it is in the documentation), and I also tried RawCap, but it cannot trace my connection (odd); luckily Fiddler worked.

    From the client:

        GET http://127.0.0.1:1731/svg/document_edit.svg HTTP/1.1
        Host: 127.0.0.1:1731
        User-Agent: Mozilla/5.0 (Windows NT 6.1; rv:10.0.1) Gecko/20100101 Firefox/10.0.1
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip, deflate
        Connection: keep-alive

    Answer from the server:

        HTTP/1.1 200 OK
        Server: ASP.NET Development Server/10.0.0.0
        Date: Thu, 16 Feb 2012 11:14:38 GMT
        X-AspNet-Version: 4.0.30319
        Cache-Control: private
        Content-Type: application/octet-stream
        Content-Length: 87924
        Connection: Close

        <?xml version="1.0" encoding="UTF-8" standalone="no"?>
        <!-- Created with Inkscape (http://www.inkscape.org/) -->
        <svg xmlns: *** FIDDLER: RawDisplay truncated at 128 characters. Right-click to disable truncation. ***

    For the record, here is a useful Q&A for Fiddler: http://stackoverflow.com/questions/826134/how-to-display-localhost-traffic-in-fiddler-while-debugging-an-asp-net-applicati
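
    One observation worth flagging, plus a hedged sketch: the trace shows "Server: ASP.NET Development Server" (Cassini) answering rather than IIS, and the development server ignores IIS's server-level MIME list. The per-application IIS mapping would look like the fragment below, though it will not affect the development server:

        <system.webServer>
          <staticContent>
            <mimeMap fileExtension=".svg" mimeType="image/svg+xml" />
          </staticContent>
        </system.webServer>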

  • Problem in listening to multicast in multihomed Linux server

    - by Lior
    I am trying to write a multicast client on a machine with two NICs, and I can't make it work. I can see with a sniffer that once I start the program the NIC (eth4) starts receiving the multicast datagrams:

        y.y.y.y (some IP) -> z.z.z.z (multicast IP, not my eth4 NIC IP)
        UDP source port: kkk (some other port), destination port: xxx (multicast port)

    However, I can't receive those packets in my program (listening on port xxx on eth4). I also added:

        route add 224.0.0.0 netmask 240.0.0.0 dev eth4

    I searched the web for examples and explanations, but it seems like I do what everybody else does. Any help will be appreciated. Is there anything else to do with route/iptables?
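
    A sketch of the usual multihomed-host fix, in Python purely for illustration (the original program's language isn't stated, and the group, port and interface values are placeholders): join the group explicitly on the eth4 address and bind to the multicast port rather than to a specific NIC address:

        import socket, struct

        GROUP = '239.1.2.3'      # the z.z.z.z multicast group (placeholder)
        PORT = 12345             # the 'xxx' multicast port (placeholder)
        IFACE = '192.0.2.10'     # eth4's own IP address (placeholder)

        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind(('', PORT))       # bind to the port on all local addresses
        # join the group on eth4 explicitly instead of letting the kernel pick
        mreq = struct.pack('4s4s', socket.inet_aton(GROUP), socket.inet_aton(IFACE))
        s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
        data, addr = s.recvfrom(65535)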

  • How can I display additional boot and shutdown information on the Windows 7 welcome screen?

    - by Daniel Saner
    There is a small tweak, I believe it is a registry key, that allows you to display additional information on the Welcome and Shutting down screens of Windows 7 (and most likely Vista, too). I have activated this tweak on one of my systems; unfortunately I forgot how I did it, and I can't seem to find the website that originally gave me that information. Usually, the Windows 7 welcome screen will just display "Welcome" when logging in. With the tweak activated, my Welcome screen gives status information such as "Loading user settings" or "Preparing desktop". When shutting down, the default screen simply says "Shutting down". With the tweak activated, it gives additional status information such as "Stopping Windows services". This appears in the same way that Windows gives information when updates are installed or configured during the startup or shutdown procedure, and I find it quite helpful in getting a feel for what task takes how long during that process. The only setting I was able to find is the Boot log checkbox on the Boot tab of the msconfig application. However, this results in Windows displaying console logs of drivers it is loading, etc., instead of the animated Windows title. This is NOT the setting I am looking for. The "additional boot information" setting that I have activated on this system still displays the regular animated Windows logo, and only replaces the strings displayed on the blue Welcome and Shutdown screens. Could someone direct me to the registry key (or whatever setting) that is used to get this behaviour?

    Edit: Here are a few pictures of the enhanced Welcome and Shutdown screens taken with my mobile phone (they're in German though). Login screens "Waiting for User Profile Service" and "Preparing desktop"; logout screen "Stopping Windows services".
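
    If memory serves (so treat this as an unverified sketch rather than a confirmed answer), the behaviour described is Windows' "verbose status messages" policy, which corresponds to a registry value along these lines; verify the path and value before applying:

        Windows Registry Editor Version 5.00

        [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System]
        "VerboseStatus"=dword:00000001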

  • Can't get NAT to work

    - by user31738
    Hello, I bought a D-Link DIR-300 wireless router and I can't get NAT to work; I have SSH and HTTP services that I need to forward to the internet. My connection is as follows: I have an ADSL connection through an ADSL Ethernet modem, which is connected and working, but it doesn't let me put it into bridge mode. My router is connected to the ADSL modem through Ethernet and gets its IP via DHCP (and it's always the same). I have a desktop computer running Linux with Apache and OpenSSH configured and working; it has a fixed IP. I configured NAT on the modem, forwarding port 22 from the router's IP out to the internet. On the router, I set up NAT forwarding port 22 from the desktop computer's fixed IP outwards. This setup already worked with a Fonera I had before. Can anyone help me with this or tell me what kind of tests I need to do? How can I test whether the router is forwarding ports correctly before the modem?
