Search Results

Search found 1087 results on 44 pages for 'serving'.

Page 39 of 44

  • Connecting multiple ColdFusion 10 instances to a single Apache 2.2 server

    - by Adam Cameron
    This is on Windows 7 Home Premium edition. I have got two ColdFusion 10 (updater 2) instances: "cfusion" (the default one), and "scratch". I have got a single instance of Apache 2.2 running. Within Apache, I have set up two virtual hosts, each of which needs to be served by a different ColdFusion instance. Each of the CF instances serves files fine via Tomcat's internal web server. Apache serves vanilla HTML files fine too. So both CF instances, and both virtual hosts separately work OK. I can get wsconfig.exe to connect either one of the CF instances to the Apache server, and serve CF files via Apache & that instance. However I cannot find a way of connecting the second CF instance to Apache as well, so that both CF instances are connected, each serving one of the virtual hosts. WSConfig doesn't seem to understand the notion of "multiple CF instances", and the changes it makes to the httpd.conf (via mod_jk.conf) do not seem to be implemented in such a way as to accommodate multiple CF instances talking to a single Apache instance, or multiple virtual hosts. I freely admit to not being confident enough with how mod_jk (or even really httpd.conf) works to be able to guess if I can change stuff to make it work. If I try to add the second CF instance using WSConfig, I just get a message "the web server is already configured for ColdFusion". Be that as it may... not the instance of ColdFusion I want to connect it to! If I remove the existing connector to whichever instance is already connected, I can then connect the other one no problems. Not that this helps, but it demonstrates that the CF instance can connect to Apache. This all used to be fairly straightforward under older versions of CF and JRun :-( The only docs I have found are on the "Connect multiple Apache virtual hosts on a web server to a single ColdFusion server" page, but that specifically only deals with a single CF instance. There is no equivalent page for multiple CF instances. I'm kinda hoping I can move some of the mod_jk config into my virtual host entries in httpd-vhosts.conf (this is how it used to work for JRun), but I've no idea what to put where. I think I've covered all the necessary info here? If not, sing out and I'll add more. Thanks. PS: tried to specifically tag this as "ColdFusion-10" as the answer will be different from previous CF versions, but it won't let me cos my rep on this site is too low (odd how it doesn't consider my rep from other S/O sites...). If someone with sufficient rep can add it, that'd be cool: it's probably a valid tag to have. Ta.
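
    One approach worth sketching, and it is exactly the move described above: declare one AJP worker per ColdFusion instance in a workers.properties, then put per-host JkMount directives inside each virtual host entry in httpd-vhosts.conf. This is a hedged sketch: ColdFusion 10 ships a customised mod_jk, and the hostnames, paths and the 8013 port below are assumptions to be checked against each instance's runtime\conf\server.xml.

        # workers.properties -- one AJP13 worker per ColdFusion/Tomcat instance.
        # cfusion's AJP connector usually listens on 8012; 8013 for "scratch" is an
        # assumption -- confirm both ports in each instance's runtime\conf\server.xml.
        worker.list=cfusion,scratch
        worker.cfusion.type=ajp13
        worker.cfusion.host=localhost
        worker.cfusion.port=8012
        worker.scratch.type=ajp13
        worker.scratch.host=localhost
        worker.scratch.port=8013

        # httpd-vhosts.conf -- keep the LoadModule/JkWorkersFile lines from the
        # generated mod_jk.conf global, then map each virtual host to its own worker.
        <VirtualHost *:80>
            ServerName site-one.local
            DocumentRoot "C:/websites/site-one"
            JkMount /*.cfm cfusion
            JkMount /*.cfc cfusion
        </VirtualHost>
        <VirtualHost *:80>
            ServerName site-two.local
            DocumentRoot "C:/websites/site-two"
            JkMount /*.cfm scratch
            JkMount /*.cfc scratch
        </VirtualHost>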

    Read the article

  • IE and Google Chrome timeout on an IIS6 hosted SSL page that Firefox handles well

    - by Thomas
    Ok, here's the scenario: Up until a few weeks ago, none of us noticed anything wrong with the corporate website. People were using it without complaint. Then, a client complained that a specific page on the site was timing out for him, and only when he committed a POST action on a form filled with data. I checked it out, and it timed out for me, too. But, it only timed out in Google Chrome and IE, not in Firefox. Additionally, the same page, on the same server, but served from a different domain name (one not under the protection of SSL, either) does not time out under any browser. To clarify: https://www.mysite.com/changes.php times out on POST, but the same with http works fine. That distinction (SSL vs. Non-SSL) seems to be important, as nothing else has changed. Our certificate is valid, and Firefox detects no errors thrown by the page. I've looked at the Request and Response headers from the page, and they all follow the correct formats. Then, after wandering through the site, I noticed a few other things. Both IE and Chrome will frequently time out on any page that is PHP-based. They never time out on static images or html files. I've looked at the site from a variety of different servers, my home and work workstations, and my netbook. Because of that, I've discounted a viral infection, as I highly doubt a virus is going to hit every one of the machines to which I have access in exactly the same manner. My setup is: Server: Win2k3, IIS6, PHP 5.2.9-1. Clients: IE7, IE8, Chrome (regular and dev channel): Frequent timeouts on PHP pages. Firefox 2, Firefox 3: No timeouts. Firebug shows no errors or even lengthy periods serving the pages. I've spent 2 days searching for any tech knowledge that I can find, and my search parameters are all too general. Everyone has problems loading SSL pages in IE and Chrome for a wide variety of reasons. The infrequent nature of the timeouts and the fact that there are no errors being reported anywhere is starting to drive me insane. Does anyone have any insight on a problem like this?
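
    One way to take the browsers (and the virus theory) out of the picture is to replay the POST with curl against both the HTTPS and plain-HTTP URLs and compare behaviour and timings; the form field names below are placeholders, not the real form:

        # Verbose output shows the TLS handshake and response headers
        curl -v -X POST -d "field1=value1&field2=value2" "https://www.mysite.com/changes.php"
        curl -v -X POST -d "field1=value1&field2=value2" "http://www.mysite.com/changes.php"

        # Print just the total time, to compare the two transports over many runs
        curl -s -o /dev/null -w "%{time_total}\n" -X POST -d "field1=value1" "https://www.mysite.com/changes.php"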

    Read the article

  • ruby on rails gitorious setup on ubuntu

    - by dogmatic69
    I've been trying to install gitorious for a while now, which requires ruby and rails etc. I've finally got rails pages serving but can't finish the installation of gitorious because the gem version is too new. The error logs say please run 'rake ultrasphinx:configure' and that gives rake ultrasphinx:configure (in /var/www/apps/gitorious) rake aborted! uninitialized constant ActiveSupport::Dependencies::Mutex /var/www/apps/gitorious/Rakefile:10:in `require' (See full trace by running task with --trace) From searching google this is because of the wrong gem version. Can't find a way to downgrade it. Apparently sudo gem update --system 1.4.2 should do the trick but ubuntu 10.10 does not like this. ERROR: While executing gem ... (RuntimeError) gem update --system is disabled on Debian, because it will overwrite the content of the rubygems Debian package, and might break your Debian system in subtle ways. The Debian-supported way to update rubygems is through apt-get, using Debian official repositories. If you really know what you are doing, you can still update rubygems by setting the REALLY_GEM_UPDATE_SYSTEM environment variable, but please remember that this is completely unsupported by Debian. So I added export REALLY_GEM_UPDATE_SYSTEM=1 to .bashrc and reloaded it with . ~/.bashrc and still the same. I've tried various forms of setting this environment variable with no luck. I've also been told on the #gitorious irc channel to add the file config/initializers/rubygems.rb with the line require "thread" to it. This has done nothing. EDIT: Just found another way which was rvm install rubygems 1.4.2 and it gave: Removing old Rubygems files... Installing rubygems dedicated to ruby-1.8.7-p334... Retrieving rubygems-1.4.2 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 288k 100 288k 0 0 282k 0 0:00:01 0:00:01 --:--:-- 414k Extracting rubygems-1.4.2 ... Installing rubygems for /home/ubuntu/.rvm/rubies/ruby-1.8.7-p334/bin/ruby ERROR: Error running 'GEM_PATH="/home/ubuntu/.rvm/gems/ruby-1.8.7-p334:/home/ubuntu/.rvm/gems/ruby-1.8.7-p334@global:/home/ubuntu/.rvm/gems/ruby-1.8.7-p334@global" GEM_HOME="/home/ubuntu/.rvm/gems/ruby-1.8.7-p334" "/home/ubuntu/.rvm/rubies/ruby-1.8.7-p334/bin/ruby" "/home/ubuntu/.rvm/src/rubygems-1.4.2/setup.rb"', please read /home/ubuntu/.rvm/log/ruby-1.8.7-p334/rubygems.install.log WARN: Installation of rubygems did not complete successfully. TL;DR please tell me how to downgrade rubygems on ubuntu 10.10 or upgrade gitorious to work with 1.6.2 gems
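
    One detail that may explain the failed override: an export in ~/.bashrc only affects your own shell, and sudo resets the environment by default, so the variable never reaches the gem process. A minimal sketch of passing it through explicitly (the same caveat as Debian's warning still applies):

        # Hand the override straight to the command run under sudo
        sudo env REALLY_GEM_UPDATE_SYSTEM=1 gem update --system 1.4.2

        # Confirm which rubygems version is now active
        gem --version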

    Read the article

  • Windows Server 2003 W3SVC Failing, Brute Force attack possibly the cause

    - by Roaders
    This week my website has disappeared twice for no apparent reason. I logged onto my server (Windows Server 2003 Service Pack 2) and restarted the World Wide Web Publishing service, website still down. I tried restarting a few other services like DNS and ColdFusion and the website was still down. In the end I restarted the server and the website reappeared. Last night the website went down again. This time I logged on and looked at the event log. SCARY STUFF! There were hundreds of these: Event Type: Information Event Source: TermService Event Category: None Event ID: 1012 Date: 30/01/2012 Time: 15:25:12 User: N/A Computer: SERVER51338 Description: Remote session from client name a exceeded the maximum allowed failed logon attempts. The session was forcibly terminated. At a frequency of around 3-5 a minute. At about the time my website died there was one of these: Event Type: Information Event Source: W3SVC Event Category: None Event ID: 1074 Date: 30/01/2012 Time: 19:36:14 User: N/A Computer: SERVER51338 Description: A worker process with process id of '6308' serving application pool 'DefaultAppPool' has requested a recycle because the worker process reached its allowed processing time limit. Which is obviously what killed the web service. There were then a few of these: Event Type: Error Event Source: TermDD Event Category: None Event ID: 50 Date: 30/01/2012 Time: 20:32:51 User: N/A Computer: SERVER51338 Description: The RDP protocol component "DATA ENCRYPTION" detected an error in the protocol stream and has disconnected the client. Data: 0000: 00 00 04 00 02 00 52 00 ......R. 0008: 00 00 00 00 32 00 0a c0 ....2..À 0010: 00 00 00 00 32 00 0a c0 ....2..À 0018: 00 00 00 00 00 00 00 00 ........ 0020: 00 00 00 00 00 00 00 00 ........ 0028: 92 01 00 00 ... With no more of the first error type. I am concerned that someone is trying to brute force their way into my server. I have disabled all the accounts apart from the IIS ones and Administrator (which I have renamed). I have also changed the password to an even more secure one. I don't know why this brute force attack caused the web service to stop and I don't know why restarting the service didn't fix the problem. What should I do to make sure my server is secure and what should I do to make sure the web server doesn't go down any more? Thanks.
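
    Those TermService 1012 events are failed RDP logons, so beyond the password change the main fix is limiting who can reach port 3389 at all (firewall scope or a VPN). Moving RDP to a non-standard port at least cuts the automated noise; a sketch using the well-known registry value, where 3390 is just an example and the new port must be allowed through the firewall before you disconnect:

        rem Move the RDP listener off 3389; takes effect after restarting the
        rem Terminal Services service or rebooting
        reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp" /v PortNumber /t REG_DWORD /d 3390 /f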

    Read the article

  • nginx can't see MySQL

    - by user135235
    I have a fully working Joomla 2.5.6 install driven by a local MySQL server, but I'd like to test nginx to see if it's a faster web serving experience than Apache. \ PHP 5.4.6 (PHP54w) \ CentOS 6.2 \ Joomla 2.5.6 \ PHP54w-fpm.i386 (FastCGI process manager) \ php -m shows: mysql & mysqli modules loaded Nginx seems to have installed fine via yum, it can process a PHP-info file via FastCGI perfectly OK (http://37.128.190.241/php.php) but when I stop Apache, start nginx instead and visit my site I get: "Database connection error (1): The MySQL adapter 'mysqli' is not available." I've tried adjusting my Joomla configuration.php to use mysql instead of mysqli but I get the same basic error, only this time "Database connection error (1): The MySQL adapter 'mysql' is not available" of course! Can anyone think what the problem might be please? I did try explicitly setting extension = mysqli.so and extension = mysql.so in my php.ini to try and force the issue (despite php -m showing they were both successfully loaded anyway) - no difference. I have a pretty standard nginx default.conf: server { listen 80; server_name www.MYDOMAIN.com; server_name_in_redirect off; access_log /var/log/nginx/localhost.access_log main; error_log /var/log/nginx/localhost.error_log info; root /var/www/html/MYROOT_DIR; index index.php index.html index.htm default.html default.htm; # Support Clean (aka Search Engine Friendly) URLs location / { try_files $uri $uri/ /index.php?q=$uri&$args; } # deny running scripts inside writable directories location ~* /(images|cache|media|logs|tmp)/.*\.(php|pl|py|jsp|asp|sh|cgi)$ { return 403; error_page 403 /403_error.html; } location ~ \.php$ { fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include /etc/nginx/fastcgi.conf; } # caching of files location ~* \.(ico|pdf|flv)$ { expires 1y; } location ~* \.(js|css|png|jpg|jpeg|gif|swf|xml|txt)$ { expires 14d; } } Snip of output from phpinfo under nginx: Server API FPM/FastCGI Virtual Directory Support disabled Configuration File (php.ini) Path /etc Loaded Configuration File /etc/php.ini Scan this dir for additional .ini files /etc/php.d Additional .ini files parsed /etc/php.d/curl.ini, /etc/php.d/fileinfo.ini, /etc/php.d/json.ini, /etc/php.d/phar.ini, /etc/php.d/zip.ini Snip of output from phpinfo under Apache: Server API Apache 2.0 Handler Virtual Directory Support disabled Configuration File (php.ini) Path /etc Loaded Configuration File /etc/php.ini Scan this dir for additional .ini files /etc/php.d Additional .ini files parsed /etc/php.d/curl.ini, /etc/php.d/fileinfo.ini, /etc/php.d/json.ini, /etc/php.d/mysql.ini, /etc/php.d/mysqli.ini, /etc/php.d/pdo.ini, /etc/php.d/pdo_mysql.ini, /etc/php.d/pdo_sqlite.ini, /etc/php.d/phar.ini, /etc/php.d/sqlite3.ini, /etc/php.d/zip.ini Seems that with Apache, PHP is loading substantially more additional .ini files, including ones relating to mysql (mysql.ini, mysqli.ini, pdo_mysql.ini) than nginx. Any ideas how I get nginx to also load these additional .ini files? Thanks in advance, Steve
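
    The two phpinfo snips suggest the FPM build simply is not getting the mysql/mysqli extensions that the Apache SAPI has (and php -m tests the CLI SAPI, which can differ from both). The usual cause is the extension package being installed for one PHP build but not the other; a sketch of what to check, where the php54w-mysql package name is an assumption based on the Webtatic naming:

        # Which PHP packages are actually installed, and from which build?
        yum list installed | grep -i php

        # Install the mysql/mysqli/pdo_mysql extensions for the php54w build
        yum install php54w-mysql

        # Restart FPM so the pool re-reads /etc/php.d/*.ini
        service php-fpm restart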

    Read the article

  • OS X server large scale storage and backup

    - by user135217
    I really hope this question doesn't come across as trolling or asking for buying advice. It's not intended. I've just started working for a small ad agency (40 employees). I actually quit being a system administrator a few years ago (too stressful!), but the company we're currently outsourcing our IT stuff to is doing such a bad job that I've felt compelled to get involved and do what I can to improve things. At the moment, all the company's data is stored on an 8TB external firewire drive attached to a Mac Mini running OS X Server 10.6, which provides filesharing (using AFP) for the whole company. There is a single backup drive, which is actually a caddy containing two 3TB hard drives arranged in RAID 0 (arrggghhhh!), which someone brings in as and when and copies over all the data using Carbon Copy Cloner. That's the entirety of the infrastructure, and the whole backup and restore strategy. I've been having sleepless nights. I've just started augmenting the backup process with FreeBSD, ZFS, sparse bundles and snapshot sends to get everything offsite. I think this is a workable behind the scenes solution, but for people's day to day use I'm struggling. Given the quantity and importance of the data, I think we should really be looking towards enterprise level storage solutions, high availability and so on, but the whole company is all Mac all the time, and I cannot find equipment that will do what we need. No more Xserve; no rack storage; no large scale storage at all apart from that Pegasus R6 that doesn't seem all that great; the Mac Pro has fibre channel, but it's not a real server and it's ludicrously expensive; Xsan looks like it's on the way out; things like heartbeatd and failoverd have apparently been removed from Lion Server; the new Mac Mini only has thunderbolt which severely limits our choices; the list goes on and on. I'm really, really not trying to troll here. I love Macs, but I just genuinely don't know where I'm supposed to look for server stuff. I have considered Linux or FreeBSD and netatalk for serving files with all the server-y goodness those OSes bring, but some of the things I've read make me wonder if it's really the way to go. Also, in my own (admittedly quite cursory) experiments with it, I've struggled to get decent transfer speeds. I guess there's also the possibility of switching everyone off AFP and making them use SMB or NFS, but I understand that this can cause big problems with resource forks and file locks. I figure there must be plenty of all-Mac companies out there. If you're the sysadmin at one, what do you use? Any suggestions very gratefully received.
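
    For the offsite piece already in place, the incremental ZFS send/receive pattern is roughly the following (pool, dataset and host names are invented for illustration):

        # Snapshot the dataset holding the sparse bundles
        zfs snapshot tank/agency@2012-10-02

        # Ship only the delta since the previous snapshot to the offsite box
        zfs send -i tank/agency@2012-10-01 tank/agency@2012-10-02 | \
            ssh backup@offsite.example.com zfs receive -F backup/agency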

    Read the article

  • Event ID 17890 (A significant part... paged out.) with SQL Server 2008

    - by Godeke
    I have a machine that has SQL Server 2008 Standard installed. Periodically (about once an hour) I am getting Event ID 17890 several times in a row. An example: 6:28:54 "A significant part of sql server process memory has been paged out. This may result in a performance degradation. Duration: 0 seconds. Working set (KB): 10652, committed (KB): 628428, memory utilization: 1%%. 6:34:27 "A significant part of sql server process memory has been paged out. This may result in a performance degradation. Duration: 332 seconds. Working set (KB): 169780, committed (KB): 546124, memory utilization: 31%%." 6:38:55 "A significant part of sql server process memory has been paged out. This may result in a performance degradation. Duration: 600 seconds. Working set (KB): 245068, committed (KB): 546124, memory utilization: 44%%." This pattern repeated at 7:26 - 7:37, 8:26 - 8:36, 9:24 - 9:35 and so with the same increasing working set and memory utilization pattern. I don't have any (known) background tasks running at this time. Backups run at 2:00 This subsided from 11:00 at night until it resumed at 4:00 in the morning and has been continuing the intermittent 10 minute glitch periods. As this server has plenty of RAM (the commit charge has peaked at 2,871,564 of 4,194,012 physical) I disabled the paging files after reading several items I dug up searching Google and not finding any of them changing the situation. This pattern I am documented is after removing the paging files, so I'm not even sure where we are paging the SQL process could be going. I also changed the SQL process memory to have a minimum of 500MB and a maximum of 2GB of RAM (as this is a light duty database server serving only a small workgroup). Has anyone encountered this? Prior to disabling the page files this error would cause 5 minutes of disk thrashing that disabled access to the databases, files, IIS webs and so on. Since disabling the page files it just logs strange things, but I'm not seeing a performance drop at least. Any suggestions would be welcome.
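
    Two settings commonly paired with this event are an explicit memory cap and granting the SQL Server service account the "Lock pages in memory" right (whether that right is honoured depends on edition and patch level, so verify for your build). The cap itself is plain sp_configure; 2048 MB below is only an example for a lightly loaded 4 GB box:

        -- Leave headroom for the OS and other processes
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'max server memory (MB)', 2048;
        RECONFIGURE;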

    Read the article

  • Should use EXT4 or XFS to be able to 'sync'/backup to S3?

    - by Rafa
    It's my first message here, so bear with me... (I have already checked quite a few of the "Related Questions" suggested by the editor) Here's the setup, a brand new dedicated server (8GB RAM, some 140+ GB disk, Raid 1 via HW controller, 15000 RPM) it's a production web server (with MySQL in it, too, not just serving web requests); not a personal desktop computer or similar. Ubuntu Server 64bit 10.04 LTS We have an Amazon EC2+EBS setup with the EBS volume formatted as XFS for easily taking snapshots to S3, via AWS' console. We are now migrating to the dedicated server and I want to be able to backup our data to Amazon's S3. The main reason being the possibility of using the latest snapshot from an EC2 instance in case of hardware failure on the dedicated server. There are two approaches I am thinking of: do a "simple" file-based backup with rsync, dumping the database' and other files, and uploading to amazon via S3 API commands, or to an EC2 instance, or something. do a file-system "freeze" (using XFS) with the usual ebs/ec2 snapshot tool to take part of the file system, take a snapshot, and upload it to Amazon. Here's my question (or series of questions): Can I safely use XFS for the whole system as the main and only format on the dedicated server? If not, is it safe to use EXT4? Or should I use something else? would then be possible to make snapshots of the system to upload to Amazon? Is it possible/feasible/practical to do what I want to do, anyway? any recommendations? When searching around for S3/EBS/XFS, anything relevant to my problem is usually focused on taking snapshots of a XFS system that is already an EBS volume. My intention is to do it in a "real"/metal dedicated server. Update: I just saw this on Wikipedia: XFS does not provide direct support for snapshots, as it expects the snapshot process to be implemented by the volume manager. I had always assumed that I could choose 2 ways of doing snapshots: via LVM or via XFS (without LVM). After reading this, I realize these 2 options are more like it: With XFS: 1) do xfs_freeze; 2) copy the frozen files via, eg, rsync; 3) unfreeze xfs With LVM and XFS: 1) do xfs_freeze; 2) make a binary copy of the frozen fs via lvcreate and related commands; 3) unfreeze xfs; 4) somehow backup the LVM snapshot. Thanks a lot in advance, Let me know if I need to clarify something.
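
    For reference, the plain-XFS variant described above looks roughly like this, assuming the data sits on its own XFS mount such as /srv (never freeze the filesystem you are running from, and note that a freeze alone does not make MySQL's files consistent; dump the databases or flush/lock them first):

        # Quiesce the filesystem, copy it, thaw it again as quickly as possible
        xfs_freeze -f /srv
        rsync -a --delete /srv/ /backup/srv-snapshot/
        xfs_freeze -u /srv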

    Read the article

  • HTTPS request to a specific load-balanced virtual host (using Shibboleth for SSO)?

    - by Gary S. Weaver
    In one environment, we have three servers load balanced that have a single Tomcat instance on each, fronted by two different Apache virtual hosts. Each of those two virtual hosts (served by all three servers) has its own different load balancer. Internally, the first host (we'll call it barfoo) is served by port 443 (HTTPS) with its cert and the second host (we'll call it foobar) is served by port 1443 (HTTPS). When you hit foobar, it goes to the load balancer which is using IP affinity for that host, so you can easily test login/HTTPS on one of the servers serving foobar, but not the others (because you keep getting that server for the lifetime of the LB session, iirc). In addition, each of the servers are using Shibboleth v2 for authN/SSO, using mod_shib (iirc). So, a normal request to foobar hits the LB, is directed to the 3rd server (and will do that from then on for as long as the LB session lasts), then Apache, then to the Shibboleth SP which looks at the request, makes you login via negotiation with the Shibboleth IdP, then you hit Apache again which in turn hits Tomcat, renders, and returns the response. (I'm leaving out some steps there.) We'd like to hit one of the individual servers (foobar-03.acme.org which we'll say has IP 1.2.3.4) via HTTPS (skipping the load balancer), so we at first try putting this in /etc/hosts: 1.2.3.4 foobar.acme.org But since foobar.acme.org is a secondary virtual host running on 1443, it attempts to get barfoo.acme.org rather than foobar.acme.org at port 1443 and see that the cert for barfoo.acme.org is invalid for this case since it doesn't match the request's host, foobar.acme.org. I thought an ssh tunnel might be easy enough, so I tried: ssh -L 7777:foobar-03.acme.org:1443 [email protected] I tried just hitting https://localhost:7777/webappname in a browser, but when the Shibboleth login is over, it again tries to redirect to barfoo.acme.org, which is the default host for 443, and we get into an infinite redirect loop. I then tried setting up an SSH tunnel with privileged port 443 locally going to 443 of foobar-03.acme.org as the hostname for that virtual host: sudo ssh -L 443:foobar-03.acme.org:1443 [email protected] I also edited /etc/hosts to add: 127.0.0.1 foobar.acme.org This finally worked and I was able to get the browser to hit the individual HTTPS host at https://foobar.acme.org/webappname, bypassing the load balancer. This was a bit of a pain and wouldn't work for everyone, due to the requirement to use the local 443 port and ssh to the server. Is there an easier way to browse to and log into an individual host in this case?
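
    For quick checks that skip /etc/hosts and privileged ports entirely, curl's --resolve option pins the hostname to one backend for a single request, so SNI and the Host header still say foobar.acme.org; it only exercises the unauthenticated hop, though, since the full Shibboleth flow still needs a browser:

        # Pin foobar.acme.org:1443 to the third node and request the app directly
        curl -v --resolve foobar.acme.org:1443:1.2.3.4 https://foobar.acme.org:1443/webappname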

    Read the article

  • Force caching of handler output which actively resists caching

    - by deceze
    I'm trying to force caching of a very obnoxious piece of PHP script which actively tries to resist caching for no good reason by actively setting all the anti-cache headers: Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0 Content-Type: text/html; charset=UTF-8 Date: Thu, 22 May 2014 08:43:53 GMT Expires: Thu, 19 Nov 1981 08:52:00 GMT Last-Modified: Pragma: no-cache Set-Cookie: ECSESSID=...; path=/ Vary: User-Agent,Accept-Encoding Server: Apache/2.4.6 (Ubuntu) X-Powered-By: PHP/5.5.3-1ubuntu2.3 If at all avoidable I do not want to have to modify this 3rd party piece of code at all and instead just get Apache to cache the page for a while. I'm doing this very selectively to only very specific pages which have no real impact on session cookies or the like, i.e. which do not contain any personalised information. CacheDefaultExpire 600 CacheMinExpire 600 CacheMaxExpire 1800 CacheHeader On CacheDetailHeader On CacheIgnoreHeaders Set-Cookie CacheIgnoreCacheControl On CacheIgnoreNoLastMod On CacheStoreExpired On CacheStoreNoStore On CacheLock On CacheEnable disk /the/script.php Apache is caching the page alright: [cache:debug] AH00698: cache: Key for entity /the/script.php?(null) is http://example.com:80/the/script.php? [cache_disk:debug] AH00709: Recalled cached URL info header http://example.com:80/the/script.php? [cache_disk:debug] AH00720: Recalled headers for URL http://example.com:80/the/script.php? [cache:debug] AH00695: Cached response for /the/script.php isn't fresh. Adding conditional request headers. [cache:debug] AH00750: Adding CACHE_SAVE filter for /the/script.php [cache:debug] AH00751: Adding CACHE_REMOVE_URL filter for /the/script.php [cache:debug] AH00769: cache: Caching url: /the/script.php [cache:debug] AH00770: cache: Removing CACHE_REMOVE_URL filter. [cache_disk:debug] AH00737: commit_entity: Headers and body for URL http://example.com:80/the/script.php? cached. However, it is always insisting that the "cached response isn't fresh" and is never serving the cached version. I guess this has to do with the Expires header, which marks the document as expired (but I don't know whether that's the correct assumption). I've tried to overwrite and unset headers using mod_headers, but this doesn't help; whatever combination I try the cache is not impressed at all. I'm guessing that the order of operation is wrong, and headers are being rewritten after the cache sees them. early header processing doesn't help either. I've experimented with CacheQuickHandler Off and trying to set explicit filter chains, but nothing is helping. But I'm really mostly poking in the dark, as I do not have a lot of experience with configuring Apache filter chains. Is there a straight forward solution for how to cache this obnoxious piece of code?

    Read the article

  • BIND DNS server (Windows) - Unable to access my local domain from other computers on LAN

    - by Ricardo Saraiva
    I have a BIND DNS server running on my Windows 7 development machine and I'm serving pages with WAMPSERVER. My idea is to develop some tools (in PHP) for my intranet at work and I want them to be accessible via LAN in this format: http://tools.mycompany.com I've already set up BIND and I can access http://tools.mycompany.com on the machine that hosts the BIND server, but I cannot access it from other LAN computers. I've done the following on my router: defined static IPs for all LAN computers; set Port Forwarding to my server (remember: it serves DNS and Web pages); set DNS server configuration to point to my LAN server. On LAN computers, I went to Local Area Network properties and also changed the DNS server IP in order to point to my local DNS server. If it helps, here is my named.conf file: options { directory "c:\windows\SysWOW64\dns\etc"; forwarders {127.0.0.1; 8.8.8.8; 8.8.4.4;}; pid-file "run\named.pid"; allow-transfer { none; }; recursion no; }; logging{ channel my_log{ file "log\named.log" versions 3 size 2m; severity info; print-time yes; print-severity yes; print-category yes; }; category default{ my_log; }; }; zone "mycompany.com" IN { type master; file "zones\db.mycompany.com.txt"; allow-transfer { none; }; }; key "rndc-key" { algorithm hmac-md5; secret "qfApxn0NxXiaacFHpI86Rg=="; }; controls { inet 127.0.0.1 port 953 allow { 127.0.0.1; } keys { "rndc-key"; }; }; ...and a single zone I've defined - file db.mycompany.com.txt: $TTL 6h @ IN SOA tools.mycompany.com. hostmaster.mycompany.com. ( 2014042601 10800 3600 604800 86400 ) @ NS tools.mycompany.com. tools IN A 192.168.1.4 www IN A 192.168.1.4 In the file above, 192.168.1.4 is the IP of the local machine inside my LAN. Can someone help me here? I need my web pages to be accessible from other computers inside my LAN using my custom domain name. I've tried on other computers and they can access my server via http://192.168.1.4/, but not when using http://tools.mycompany.com . Please consider the following: I'm completely new to BIND, and I have only basic knowledge of Apache configuration. Thanks a lot for your help.
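
    Two quick checks from one of the other LAN machines narrow this down: query the BIND box directly, then query with the client's default resolver. If the first works and the second fails, the clients are not really using 192.168.1.4 for DNS; if the first also fails, something (typically the Windows 7 firewall on the BIND machine) is blocking port 53:

        rem Ask the BIND server directly
        nslookup tools.mycompany.com 192.168.1.4

        rem Ask whatever DNS server this client is currently configured to use
        nslookup tools.mycompany.com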

    Read the article

  • 2nd Year College - Learning - Microsoft Server Products

    - by Ryan
    As the title says, I just finished my first year of college (majoring in Software Engineering). Fortunately my school likes Microsoft enough, and I can get pretty much anything I want that Microsoft sells. I also can get IBM Websphere and the like for free as well. Earlier this year, I set up an oldish computer (2.6 Pentium D, x64) to run ubuntu server headless. I'm predominately a Java developer, so Apache, Maven, Nexus, Sonar, SVN, etc made it onto the machine. It worked really well for personal and school projects, especially team projects (quick ramp up). Anyways, I started to pick up C# to complement my Java knowledge (don't judge me :P), and am interested in working with some of the associated Microsoft equivalents. The machine currently has the Ubuntu install, as well as Windows 7 Ultimate. I do all of my actual development work off my laptop, also running Windows 7 Ultimate. I was wondering what software you would recommend putting on the machine. I’m not actually serving anything off the machine itself, but in Ubuntu I had it doing integration tests with Hudson on every commit, and profiling my applications, etc, etc. The machine would be running headless, and I would remote into it. Here is what I am currently leaning towards / wondering about: Windows 7 Ultimate vs Windows Server 2008 (R2) (no one is really clear why I should go with one over the other) Windows Team Foundation Sharepoint (Never used it before, kind of meh about it) IBM Websphere or Glassfish (Some Java EE web server) SQL Server 2008 A DVCS In order to better control product conflicts / limit resource use, I’m wondering if I should install things into virtual machines (I can get VmWare or Microsoft Virtualization Products) I also plan on installing everything I had running under Linux (it’s almost entirely Java based development software, so it’ll run on both, only reason I went with ubuntu during the year was because the apache build seemed better). I’m primarily looking to become familiar with enterprise software development tools, as well as get something functional that will help my development process. (IE, I’ll still use project and assign tasks even though I might be the only one to assign tasks to, just to practice doing so). Is there any other software / configuration details I should explore? Opinions on my current list? I primarily use C#, Java, and PHP. I'm familiar with ruby, and python as well. Thanks!

    Read the article

  • Frequent and weird wifi disconnections

    - by Sidou
    How would you explain, troubleshoot (and solve) the following problem? Wifi ADSL modem router D-link 2640R installed in living room at about 1.8m height. Working fine, synchronising and getting/serving stable internet connection. First situation: -Laptop 01 in other end of the house, let's say in room01 southern to the living room, distant by about 15m. Getting stable signal of good to very good quality. No disconnection. -Laptop 02 in room02 opposite to room01 (5m West) which makes it almost at the same distance and direction from the router located 15m North. Getting stable signal of good to very good quality. No disconnection. Second situation: -Laptop 01 moved to room03 Northern to the living room (actually just 3m behind the wall where the router lies). Getting stable signal of excellent quality. No disconnection. -Laptop 02 still in room02 but now experiences frequent disconnections (actually almost impossible to get the Internet even though the signal level is still very good. Either no Internet with the wifi icon appearing connected to access point or no connection established at all which happens every 2 minutes and that means virtually no Internet at all as I can just get a timeframe of 1 minute or so to load any website or even get to the router's web based control panel. If Laptop 01 is completely shut down or its wifi adapters shut down or even still working but its wifi MAC address forbidden, then Laptop 02 has no problem at all. If Laptop 02 is moved to a nearer location to the router, in the living room for instance, then no connection problem occurs even if Laptop 01 is also connected. And also if we move back Laptop 01 to its original location (room 01), then no problem as well. I'm completely lost and don't know how to address this issue. I tried to change the Wifi channel and even tried the auto channel scan but that didn't solve it. I know that the problem is probably coming from Laptop 01 being in its new location or some sort of interference as the problem occurs only under the described condition but I have no idea how to solve it! I also scanned the neighborhood for wifi jam using InSSIDer, there are few other access points but they don't seem to affect the situation. Any ideas about the steps to follow or tools to use ?

    Read the article

  • Lighttpd with FastCGI configuration running ViewVC - rewrite problems

    - by 0xC0000022L
    At the moment I am struggling with the configuration of lighttpd together with ViewVC. The configuration was ported from Apache 2.2.x, which is still running on the machine, serving the WebDAV/SVN stuff, being proxied through. Now, the problem I am having appears to be with the rewrite rules and I'm not really sure what I am missing here. Here's my configuration (slightly condensed to keep it concise): var.hgwebfcgi = "/var/www/vcs/bin/hgweb.fcgi" var.viewvcfcgi = "/var/www/vcs/bin/wsgi/viewvc.fcgi" var.viewvcstatic = "/var/www/vcs/templates/docroot" var.vcs_errorlog = "/var/log/lighttpd/error.log" var.vcs_accesslog = "/var/log/lighttpd/access.log" $HTTP["host"] =~ "domain.tld" { $SERVER["socket"] == ":443" { protocol = "https://" ssl.engine = "enable" ssl.pemfile = "/etc/lighttpd/ssl/..." ssl.ca-file = "/etc/lighttpd/ssl/..." ssl.use-sslv2 = "disable" setenv.add-environment = ( "HTTPS" => "on" ) url.rewrite-once += ("^/mercurial$" => "/mercurial/" ) url.rewrite-once += ("^/$" => "/viewvc.fcgi" ) alias.url += ( "/viewvc-static" => var.viewvcstatic ) alias.url += ( "/robots.txt" => var.robots ) alias.url += ( "/favicon.ico" => var.favicon ) alias.url += ( "/mercurial" => var.hgwebfcgi ) alias.url += ( "/viewvc.fcgi" => var.viewvcfcgi ) $HTTP["url"] =~ "^/mercurial" { fastcgi.server += ( ".fcgi" => ( ( "bin-path" => var.hgwebfcgi, "socket" => "/tmp/hgwebdir.sock", "min-procs" => 1, "max-procs" => 5 ) ) ) } else $HTTP["url"] =~ "^/viewvc\.fcgi" { fastcgi.server += ( ".fcgi" => ( ( "bin-path" => var.viewvcfcgi, "socket" => "/tmp/viewvc.sock", "min-procs" => 1, "max-procs" => 5 ) ) ) } expire.url = ( "/viewvc-static" => "access plus 60 days" ) server.errorlog = var.vcs_errorlog accesslog.filename = var.vcs_accesslog } } Now, when I access the domain.tld, I correctly see the index of the repositories. However, when I look at the links for each respective repository (or click them, for that matter), it's of the form https://domain.tld/viewvc.fcgi/reponame instead of the intended https://domain.tld/reponame. What do I have to change/add to achieve this? Do I have to "abuse" the index file mechanism somehow? Goal is to keep the /mercurial alias functional. So far I've tried sifting through the lighttpd book from Packt again, also through the lighttpd documentation, but found nothing that seemed to match the problem.

    Read the article

  • Htaccess strange behaviour with Nginx

    - by Termos
    I have a site running on Nginx (v1.0.14) serving as reverse proxy which proxies requests to Apache (v2.2.19). So Nginx runs on port 80, Apache is on 8080. Overall site works fine except that i cannot block access to certain directories with .htaccess file. For example i have 'my-protected-directory' on 'www.site.com' Inside it i have htaccess with following code: <Files *> order deny,allow deny from all allow from 1.2.3.4 <--- my ip address here </Files> When i try to access this page with my ip (1.2.3.4) i get 404 error which is not what i expect: http://www.site.com/my-protected-directory However everything works as expected when this page is served directly by Apache. I can see this page, everyone else can't. http://www.site.com:8080/my-protected-directory Update. Nginx config (7.1.3.7 is site ip.): user apache; worker_processes 4; error_log logs/error.log; pid logs/nginx.pid; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; sendfile on; keepalive_timeout 65; gzip on; gzip_min_length 1024; gzip_http_version 1.1; gzip_proxied any; gzip_comp_level 5; gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript image/x-icon; server { listen 80; server_name www.site.com site.com 7.1.3.7; access_log logs/host.access.log main; # serve static files location ~* ^.+.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|js)$ { root /var/www/vhosts/www.site.com/httpdocs; proxy_set_header Range ""; expires 30d; } # pass requests for dynamic content to Apache location / { proxy_redirect off; proxy_set_header X-Real-IP $remote_addr; proxy_set_header Host $http_host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Range ""; proxy_pass http://7.1.3.7:8080; } } Could please anyone tell me what is wrong and how this can be fixed ?
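
    One likely culprit: behind the proxy, Apache sees every request as coming from the nginx server's own address, so "allow from 1.2.3.4" can never match. Since nginx already sends X-Real-IP/X-Forwarded-For, a module such as mod_rpaf on the Apache 2.2 side can restore the real client address before the allow/deny check runs; a sketch, assuming mod_rpaf 0.6's directive names and the usual module filename:

        # httpd.conf -- take the client IP from the forwarded header
        LoadModule rpaf_module modules/mod_rpaf-2.0.so
        RPAFenable On
        RPAFsethostname On
        RPAFproxy_ips 127.0.0.1 7.1.3.7
        RPAFheader X-Forwarded-For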

    Read the article

  • My VPS ubuntu server is very slow

    - by askmike
    I just installed a fresh copy of Ubuntu 12.04 on my vps because my old installation was very slow, unfortunately this did not fix the problem. By slow I mean requests for my PHP websites take a long time, very slow (30 sec per request) to slow (3+ sec per request). When it's really bad SSH is also laggish. The websites are: askmike.org (pretty standard Wordpress) mvr.me (own PHP) slow? very slow: Here is a picture of loading a clean install of wordpress slow: here is a picture of loading a small PHP based website the vps The VPS has 256mb ram and a 25GB hdd. Besides serving the 2 small websites it isn't doing anything AFAIK. What have I installed Clean Ubuntu server 12.04 LAMP stack few things like git and nodejs (not using both) ossec (because I thought my server was getting hammered) munin What I already tried / done I installed munin so that I could watch io speed and such. The problem is that I don't know where to look in the munin report. I checked logs and don't see anything strange (although I don't really know where to look besides strange / repetitive errors and GET requests). I configured Apache MPM to: <IfModule mpm_prefork_module> StartServers 5 MinSpareServers 5 MaxSpareServers 10 MaxClients 40 MaxRequestsPerChild 0 </IfModule> (apache is using prefork, the default) Stats I copied the munin report as it appeared at 4:50 last night to a site hosted on a shared webhost. Note that tonight my mysql crashed somewhere after 1:00 (which is a new problem altogether), so therefore the graph for last night might look strange. Can anyone help me get my VPS up to normal speed? EDIT: Thanks for the replies. The VPS is 10 bucks a month and is from directvps.nl (Dutch host and I'm also dutch). I did two speed tests for disk IO: $ dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync 1073741824 bytes (1.1 GB) copied, 23.1506 s, 46.4 MB/s $ dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync 1073741824 bytes (1.1 GB) copied, 39.3796 s, 27.3 MB/s Anyway: how can I prove to my VPS host that it is too slow? I can understand a server being busy slowing a website down. But 5-30 sec loadtime for a normal PHP webpage?
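
    Before arguing with the host, it is worth ruling out swapping and CPU steal on the guest itself; with 256MB of RAM and MySQL resident, MaxClients 40 under prefork is very likely too high. A sketch (the prefork numbers are guesses for a 256MB box, not a recommendation):

        # Watch free memory plus the si/so (swap) and st (steal) columns
        free -m
        vmstat 5 5

        # /etc/apache2/apache2.conf -- keep Apache small enough to avoid swapping
        <IfModule mpm_prefork_module>
            StartServers          2
            MinSpareServers       2
            MaxSpareServers       4
            MaxClients            8
            MaxRequestsPerChild   500
        </IfModule>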

    Read the article

  • How should we serve files in a small bioinformatics cluster?

    - by cespinoza
    We have a small cluster of six ubuntu servers. We run bioinformatics analyses on these clusters. Each analysis takes about 24 hours to complete, each core i7 server can handle 2 at a time, takes as input about 5GB data and outputs about 10-25GB of data. We run dozens of these a week. The software is a hodgepodge of custom perl scripts and 3rd party sequence alignment software written in C/C++. Currently, files are served from two of the compute nodes (yes, we're using compute nodes as file servers)-- each node has 5 1TB sata drives mounted separately (no raid) and is pooled via glusterfs 2.0.1. They each have as 3 bonded intel ethernet pci gigabit ethernet cards, attached to a d-link DGS-1224T switch ($300 24 port consumer-level). We are not currently using jumbo frames (not sure why, actually). The two file-serving compute nodes are then mirrored via glusterfs. Each of the four other nodes mounts the files via glusterfs. The files are all large (4gb+), and are stored as bare files (no database/etc) if that matters. As you can imagine, this is a bit of a mess that grew organically without forethought and we want to improve it now that we're running out of space. Our analyses are I/O intensive and it is a bottle neck-- we're only getting 140mB/sec between the two fileservers, maybe 50mb/sec from the clients (which only have single NICs). We have a flexible budget which I can probably get up $5k or so. How should we spend our budget? We need at least 10TB of storage fast enough to serve all nodes. How fast/big does the cpu/memory of such a file server have to be? Should we use NFS, ATA over Ethernet, iSCSI, Glusterfs, or something else? Should we buy two or more servers and create some sort of storage cluster, or is 1 server enough for such a small number of nodes? Should we invest in faster NICs (say, PCI-express cards with multiple connectors)? The switch? Should we use raid, if so, hardware or software? and which raid (5, 6, 10, etc)? Any ideas appreciated. We're biologists, not IT gurus.
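
    Two cheap experiments before spending the budget: enable jumbo frames end to end (only if the switch supports them; check the DGS-1224T's spec) and measure what the gluster mount actually delivers, so you know whether the network or the storage layout is the bottleneck. A sketch, with interface and mount names assumed:

        # Raise the MTU on every node (use bond0 if that is the bonded interface)
        ifconfig eth0 mtu 9000

        # Rough sequential write test against the gluster mount (16 GB, synced)
        dd if=/dev/zero of=/mnt/gluster/ddtest bs=1M count=16384 conv=fdatasync
        rm /mnt/gluster/ddtest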

    Read the article

  • Tips for maximizing Nginx requests/sec?

    - by linkedlinked
    I'm building an analytics package, and project requirements state that I need to support 1 billion hits per day. Yep, "billion". In other words, no less than 12,000 hits per second sustained, and preferably some room to burst. I know I'll need multiple servers for this, but I'm trying to get maximum performance out of each node before "throwing more hardware at it". Right now, I have the hits-tracking portion completed, and well optimized. I pretty much just save the requests straight into Redis (for later processing with Hadoop). The application is Python/Django with a gunicorn for the gateway. My 2GB Ubuntu 10.04 Rackspace server (not a production machine) can serve about 1200 static files per second (benchmarked using Apache AB against a single static asset). To compare, if I swap out the static file link with my tracking link, I still get about 600 requests per second -- I think this means my tracker is well optimized, because it's only a factor of 2 slower than serving static assets. However, when I benchmark with millions of hits, I notice a few things -- No disk usage -- this is expected, because I've turned off all Nginx logs, and my custom code doesn't do anything but save the request details into Redis. Non-constant memory usage -- Presumably due to Redis' memory managing, my memory usage will gradually climb up and then drop back down, but it's never once been my bottleneck. System load hovers around 2-4, the system is still responsive during even my heaviest benchmarks, and I can still manually view http://mysite.com/tracking/pixel with little visible delay while my (other) server performs 600 requests per second. If I run a short test, say 50,000 hits (takes about 2m), I get a steady, reliable 600 requests per second. If I run a longer test (tried up to 3.5m so far), my r/s degrades to about 250. My questions -- a. Does it look like I'm maxing out this server yet? Is 1,200/s static files nginx performance comparable to what others have experienced? b. Are there common nginx tunings for such high-volume applications? I have worker threads set to 64, and gunicorn worker threads set to 8, but tweaking these values doesn't seem to help or harm me much. c. Are there any linux-level settings that could be limiting my incoming connections? d. What could cause my performance to degrade to 250r/s on long-running tests? Again, the memory is not maxing out during these tests, and HDD use is nil. Thanks in advance, all :)
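
    For what it's worth, the usual knobs at this level are file-descriptor limits, listen backlogs and the kernel's ephemeral-port/TIME_WAIT behaviour; long benchmark runs without keep-alive often degrade exactly as described when ports run out. A sketch of settings to experiment with, not a tuned recommendation:

        # nginx.conf
        worker_processes     4;          # roughly one per core
        worker_rlimit_nofile 65536;
        events {
            worker_connections 10240;
        }

        # /etc/sysctl.conf -- apply with: sysctl -p
        net.core.somaxconn = 4096
        net.ipv4.tcp_max_syn_backlog = 8192
        net.ipv4.ip_local_port_range = 10240 65535
        net.ipv4.tcp_tw_reuse = 1
        fs.file-max = 200000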

    Read the article

  • SQL SERVER – Signal Wait Time Introduction with Simple Example – Wait Type – Day 2 of 28

    - by pinaldave
    In this post, let’s delve a bit more in depth regarding wait stats. The very first question: when do the wait stats occur? Here is the simple answer. When SQL Server is executing any task, and if for any reason it has to wait for resources to execute the task, this wait is recorded by SQL Server with the reason for the delay. Later on we can analyze these wait stats to understand the reason the task was delayed and maybe we can eliminate the wait for SQL Server. It is not always possible to remove the wait type 100%, but there are few suggestions that can help. Before we continue learning about wait types and wait stats, we need to understand three important milestones of the query life-cycle. Running - a query which is being executed on a CPU is called a running query. This query is responsible for CPU time. Runnable – a query which is ready to execute and waiting for its turn to run is called a runnable query. This query is responsible for Signal Wait time. (In other words, the query is ready to run but CPU is servicing another query). Suspended – a query which is waiting due to any reason (to know the reason, we are learning wait stats) to be converted to runnable is suspended query. This query is responsible for wait time. (In other words, this is the time we are trying to reduce). In simple words, query execution time is a summation of the query Executing CPU Time (Running) + Query Wait Time (Suspended) + Query Signal Wait Time (Runnable). Again, it may be possible a query goes to all these stats multiple times. Let us try to understand the whole thing with a simple analogy of a taxi and a passenger. Two friends, Tom and Danny, go to the mall together. When they leave the mall, they decide to take a taxi. Tom and Danny both stand in the line waiting for their turn to get into the taxi. This is the Signal Wait Time as they are ready to get into the taxi but the taxis are currently serving other customer and they have to wait for their turn. In other word they are in a runnable state. Now when it is their turn to get into the taxi, the taxi driver informs them he does not take credit cards and only cash is accepted. Neither Tom nor Danny have enough cash, they both cannot get into the vehicle. Tom waits outside in the queue and Danny goes to ATM to fetch the cash. During this time the taxi cannot wait, they have to let other passengers get into the taxi. As Tom and Danny both are outside in the queue, this is the Query Wait Time and they are in the suspended state. They cannot do anything till they get the cash. Once Danny gets the cash, they are both standing in the line again, creating one more Signal Wait Time. This time when their turn comes they can pay the taxi driver in cash and reach their destination. The time taken for the taxi to get from the mall to the destination is running time (CPU time) and the taxi is running. I hope this analogy is bit clear with the wait stats. You can check the Signalwait stats using following query of Glenn Berry. -- Signal Waits for instance SELECT CAST(100.0 * SUM(signal_wait_time_ms) / SUM (wait_time_ms) AS NUMERIC(20,2)) AS [%signal (cpu) waits], CAST(100.0 * SUM(wait_time_ms - signal_wait_time_ms) / SUM (wait_time_ms) AS NUMERIC(20,2)) AS [%resource waits] FROM sys.dm_os_wait_stats OPTION (RECOMPILE); Higher the Signal wait stats are not good for the system. Very high value indicates CPU pressure. In my experience, when systems are running smooth and without any glitch the Signal wait stat is lower than 20%. 
Again, this number can be debated (and it is from my experience and is not documented anywhere). In other words, lower is better and higher is not good for the system. In future articles we will discuss in detail the various wait types and wait stats and their resolution. Read all the post in the Wait Types and Queue series. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL DMV, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
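
    A companion query (not from the article) that lists the heaviest wait types with their signal portion broken out, useful for deciding which waits to chase in the rest of the series:

        -- Top waits by total wait time, with the CPU (signal) portion shown separately
        SELECT TOP (10)
               wait_type,
               waiting_tasks_count,
               wait_time_ms,
               signal_wait_time_ms
        FROM   sys.dm_os_wait_stats
        ORDER  BY wait_time_ms DESC;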

    Read the article

  • Installing Visual Studio Team Foundation Server Service Pack 1

    - by Martin Hinshelwood
    As has become customary when the product team releases a new patch, SP or version I like to document the install. Although I had no errors on my main computer, my netbook did have problems. Although I am not ready to call it a Service Pack problem just yet! Update 2011-03-10 – Running the Team Foundation Server 2010 Service Pack 1 install a second time worked As per Brian's post I am installing the Team Foundation Server Service Pack first and indeed as this is a single server local deployment I need to install both. If I only install one it will leave the other product broken. This however does not affect you if you are running Visual Studio and Team Foundation Server on separate computers as is normal in a production deployment. Main workhorse I will be installing the service pack first on my main computer as I want to actually use it here. Figure: My main workhorse I will also be installing this on my netbook which is obviously of significantly lower spec, but I will do that one after. Although, as always I had my fingers crossed, I was not really worried. Figure: KB2182621 Compared to Visual Studio there are not really a lot of components to update. Figure: TFS 2010 and SQL 2008 are the main things to update There is no “web” installer for the Team Foundation Server 2010 Service Pack, but that is ok as most people will be installing it on a production server and will want to have everything local. I would have liked a Web installer, but the added complexity for the product team is not work the capability for a 500mb patch. Figure: There is currently no way to roll SP1 and RTM together Figure: No problems with the file verification, phew Figure: Although the install took a while, it progressed smoothly   Figure: I always like a success screen Well, as far as the install is concerned everything is OK, but what about TFS? Can I still connect and can I still administer it. Figure: Service Pack 1 is reflected correctly in the Administration Console I am confident that there are no major problems with TFS on my system and that it has been updated to SP1. I can do all of the things that I used before with ease, and with the new features detailed by Brian I think I will be happy. Netbook The great god Murphy has stuck, and my poor wee laptop spat the Team Foundation Server 2010 Service Pack 1 out so fast it hit me on the back of the head. That will teach me for not looking… Figure: “Installation did not succeed” I am pretty sure should not be all caps! On examining the file I found that everything worked, except the actual Team Foundation Server 2010 serving step. Action: System Requirement Checks... 
Action complete Action: Downloading and/or Verifying Items c:\757fe6efe9f065130d4838081911\VS10-KB2182621.msp: Verifying signature for VS10-KB2182621.msp c:\757fe6efe9f065130d4838081911\VS10-KB2182621.msp Signature verified successfully for VS10-KB2182621.msp c:\757fe6efe9f065130d4838081911\DACFramework_enu.msi: Verifying signature for DACFramework_enu.msi c:\757fe6efe9f065130d4838081911\DACFramework_enu.msi Signature verified successfully for DACFramework_enu.msi c:\757fe6efe9f065130d4838081911\DACProjectSystemSetup_enu.msi: Verifying signature for DACProjectSystemSetup_enu.msi Exists: evaluating Exists evaluated to false c:\757fe6efe9f065130d4838081911\DACProjectSystemSetup_enu.msi Signature verified successfully for DACProjectSystemSetup_enu.msi c:\757fe6efe9f065130d4838081911\TSqlLanguageService_enu.msi: Verifying signature for TSqlLanguageService_enu.msi c:\757fe6efe9f065130d4838081911\TSqlLanguageService_enu.msi Signature verified successfully for TSqlLanguageService_enu.msi c:\757fe6efe9f065130d4838081911\SharedManagementObjects_x86_enu.msi: Verifying signature for SharedManagementObjects_x86_enu.msi c:\757fe6efe9f065130d4838081911\SharedManagementObjects_x86_enu.msi Signature verified successfully for SharedManagementObjects_x86_enu.msi c:\757fe6efe9f065130d4838081911\SharedManagementObjects_amd64_enu.msi: Verifying signature for SharedManagementObjects_amd64_enu.msi c:\757fe6efe9f065130d4838081911\SharedManagementObjects_amd64_enu.msi Signature verified successfully for SharedManagementObjects_amd64_enu.msi c:\757fe6efe9f065130d4838081911\SQLSysClrTypes_x86_enu.msi: Verifying signature for SQLSysClrTypes_x86_enu.msi c:\757fe6efe9f065130d4838081911\SQLSysClrTypes_x86_enu.msi Signature verified successfully for SQLSysClrTypes_x86_enu.msi c:\757fe6efe9f065130d4838081911\SQLSysClrTypes_amd64_enu.msi: Verifying signature for SQLSysClrTypes_amd64_enu.msi c:\757fe6efe9f065130d4838081911\SQLSysClrTypes_amd64_enu.msi Signature verified successfully for SQLSysClrTypes_amd64_enu.msi c:\757fe6efe9f065130d4838081911\vcruntime\Vc_runtime_x86.cab: Verifying signature for vcruntime\Vc_runtime_x86.cab c:\757fe6efe9f065130d4838081911\vcruntime\Vc_runtime_x86.cab Signature verified successfully for vcruntime\Vc_runtime_x86.cab c:\757fe6efe9f065130d4838081911\vcruntime\Vc_runtime_x86.msi: Verifying signature for vcruntime\Vc_runtime_x86.msi c:\757fe6efe9f065130d4838081911\vcruntime\Vc_runtime_x86.msi Signature verified successfully for vcruntime\Vc_runtime_x86.msi c:\757fe6efe9f065130d4838081911\SetupUtility.exe: Verifying signature for SetupUtility.exe c:\757fe6efe9f065130d4838081911\SetupUtility.exe Signature verified successfully for SetupUtility.exe c:\757fe6efe9f065130d4838081911\vcruntime\Vc_runtime_x64.cab: Verifying signature for vcruntime\Vc_runtime_x64.cab c:\757fe6efe9f065130d4838081911\vcruntime\Vc_runtime_x64.cab Signature verified successfully for vcruntime\Vc_runtime_x64.cab c:\757fe6efe9f065130d4838081911\vcruntime\Vc_runtime_x64.msi: Verifying signature for vcruntime\Vc_runtime_x64.msi c:\757fe6efe9f065130d4838081911\vcruntime\Vc_runtime_x64.msi Signature verified successfully for vcruntime\Vc_runtime_x64.msi c:\757fe6efe9f065130d4838081911\NDP40-KB2468871.exe: Verifying signature for NDP40-KB2468871.exe c:\757fe6efe9f065130d4838081911\NDP40-KB2468871.exe Signature verified successfully for NDP40-KB2468871.exe Action complete Action: Performing actions on all Items Entering Function: BaseMspInstallerT >::PerformAction Action: Performing Install on MSP: 
c:\757fe6efe9f065130d4838081911\VS10-KB2182621.msp targetting Product: Microsoft Team Foundation Server 2010 - ENU Returning IDOK. INSTALLMESSAGE_ERROR [Error 1935.An error occurred during the installation of assembly 'Microsoft.TeamFoundation.WebAccess.WorkItemTracking,version="10.0.0.0",publicKeyToken="b03f5f7f11d50a3a",processorArchitecture="MSIL",fileVersion="10.0.40219.1",culture="neutral"'. Please refer to Help and Support for more information. HRESULT: 0x80070005. ] Returning IDOK. INSTALLMESSAGE_ERROR [Error 1712.One or more of the files required to restore your computer to its previous state could not be found. Restoration will not be possible.] Patch (c:\757fe6efe9f065130d4838081911\VS10-KB2182621.msp) Install failed on product (Microsoft Team Foundation Server 2010 - ENU). Msi Log: MSI returned 0x643 Entering Function: MspInstallerT >::Rollback Action Rollback changes PerformMsiOperation returned 0x643 PerformMsiOperation returned 0x643 OnFailureBehavior for this item is to Rollback. Action complete Final Result: Installation failed with error code: (0x80070643), "Fatal error during installation. " (Elapsed time: 0 00:14:09).
Figure: Error log for Team Foundation Server 2010 install shows a failure
As there is really no information in this log as to why the installation failed, I checked the event log on that box.
Figure: There are hundreds of errors, and it actually looks like there are more problems than a failed Service Pack
I am going to just run it again and see if it was because the netbook was slow to catch on to the update. Here's hoping, but even if it fails, I would question the installation of Windows (PDC laptop original install) before I question the Service Pack.
Figure: Second run through was successful
I don’t know if the laptop was just slow, or what… Did you get this error? If you did, I will push this to the product team as a problem, but unless more people have this sort of error, I will just look to write this off as a corrupted install of Windows and reinstall.

    Read the article

  • Aamir Khan’s Satyamev Jayate stirs a movement

    - by Gopinath
    Bollywood actor Aamir Khan is known for his dedication and hard work in inspiring millions of viewers through movies by discussing social problems and motivating people to solve them. His movie Rang De Basanthi seeded the Indian anti-corruption movement, Tare Zameen Par touched on the problems faced by challenged kids, and his latest movie 3 Idiots exposed how education institutions in India are producing lakhs of donkeys out of colleges every year. He extended his dedication to serving society to the small screen with the launch of the reality TV show Satyamev Jayate. Before you start misjudging it as one of those nonsense drama / entertainment reality shows, let me tell you that it is not a typical music, games, fight or dance reality show. Satyamev Jayate is all about the real people of India, their problems and how to tackle them. This is not just a reality show; it is a movement to educate people about social evils.
It has been many years since I spent a couple of hours in front of the TV, as most of the programs are too cynical or do not add much value. In my childhood I used to anxiously wait for the Mahabarath or He-Man TV shows to start, but after two decades I waited anxiously for the start of Satyamev Jayate. The wait was worth it, and the 1 hour 30 minutes spent watching it were meaningful. When was the last time you were so satisfied after watching a TV show and inspired to do something? I don’t remember.
Today, the show focused on female foeticide and its impact. It showed women who were tortured and forced to abort female foetuses. On the show a few brave women shared their experiences of giving birth to girl babies and the rough times they are going through with their in-laws & husbands. The show focused not only on the problem but also on the root cause of the evil, the inspiring people working to tackle it, and what every individual can do to solve it. The best part of the show is that it is not a blame game. When there is a problem, most people quickly get into identifying who is wrong and start blaming them instead of solving the actual problem. Aamir did not blame anyone for female foeticide – neither the government that does not impose strict rules, nor the doctors who abort girl babies to make money, nor the mothers-in-law & husbands who torture the mothers of girl babies. He carefully highlighted the problem, showed horrifying statistics and their impact on future society, and presented a few inspiring people working to tackle the problem. He touched hearts and stirred a movement against the issue.
For the first time ever I voted for a reality show through SMS, and it was for Satyamev Jayate. I am proud to have done so. Here are a few reactions of popular people, activists & media about the program:
@aamir_khan absolutely the best program I have seen on TV in recent past. Thanku for converting an idiot box into an inspirationsl medium — Kiran Bedi (@thekiranbedi) May 6, 2012
Satyamev Jayate proves tht TV 2 can b a tool of social change. — Shekhar Kapur (@shekharkapur) May 6, 2012
i absolutely loved #satyamevjayate. at least aamir is doing what all of us only talk about. — Harsha Bhogle (@bhogleharsha) May 6, 2012
Now Television will no longer be called an idiot box,the VISION of Television broadens up with#SatyamevJayate !!! — Madhur Bhandarkar (@mbhandarkar268) May 6, 2012
The Sunday 11am slot seems to have come back with a bang… #SatyamevJayate — atul kasbekar (@atulkasbekar) May 6, 2012
I was spellbound, says Prasoon Joshi – It’s a unique show. I was completely bowled over by it. It’s a never-done-before concept.
Aamir Khan strikes the right chord with Satyamev Jayate – The format is quite crisp. Talking about the emotional connect, there are moments when your eyes well up with tears, but the various segments ensure there’s more content than emotional drama.
‘Satyamev Jayate’ gutsy, sensible show: Viewers – From filmmakers to clinical psychologists to professors – everyone has given the thumbs up to Aamir Khan’s television show ‘Satyamev Jayate’, saying it is a gutsy, hard-hitting and sensible programme that strikes an emotional chord with the audiences.
Aamir Khan’s TV debut ‘Satyamev Jayate’ takes Twitter by storm – The roads of the capital sported a deserted look around 11 am on Sunday morning, as everyone was hooked on to their TV sets.
Did you watch the program? What is your opinion? I am waiting for 11 AM next Sunday. Are you?

    Read the article

  • Nerdstock 2012: A photo review of Microsoft TechEd North America 2012

    - by The Un-T Guy
    Not only could I not fathom that I would ever be attending a tech event of the magnitude of TechEd, neither could any of my co-workers.  As the least technical person in the history of Information Technology ever, I felt as though I were walking into the belly of the beast, fearing I’d not be allowed out until I could write SSIS packages, program in Visual Basic, or at least arm wrestle a DBA.  Most of my fears were unrealized.
But I made it.  I was here.  I even got to wear the Mark of the Geek neck package with schedule, eyeglass cleaners, name badge (company name obfuscated so they don’t fire me), and a pen.  The name badge was seemingly the key element, as every vendor in the place wanted to scan it to capture name, email address, and numbers to show their bosses back home.  It also let me eat the food and drink the coffee, so that’s a fair trade.
A recurring theme throughout the presentations and vendor demos was “the Cloud” and BYOD (bring your own device).  The scene below was a common sight throughout the week, as attendees from all over the world brought their own devices and were able to (seemingly) seamlessly connect to the Worldwide Innerwebs.  Apparently proof that Microsoft and the event organizers were practicing what they were preaching.
“Cavernous” is one way to describe the downstairs facility itself.  “Freaking cavernous” might be more accurate.  Work sessions were held in classrooms on the second and third floors but the real action was happening downstairs.  Microsoft bookstore, blogger hub (shoutout to Geekswithblogs.net), The Wall (sans Pink Floyd, sadly), couches, recharging stations…
…a game zone with pool and air hockey tables, pinball machines, foosball…
…vintage video games…
…and even a giant chess board.  Looked like this guy was opening with the Kaspersky parry.
The blend of technology and fantasy even went so far as to bring childhood favorites to life.  Assuming, of course, your childhood was pre-video games (like mine) and you were stuck with electric football and Rock ‘em Sock ‘em robots:
And, lest the “combatants” become unruly or – God forbid – afternoon snacks were late, Orange County’s finest was on the scene to keep the peace.  On a high-tech mode of transport, of course.
She wasn’t the only one to think this was a swell way to transition from one concourse to the next.  Given the level of support provided by the entire Orange County Convention Center staff, I knew they had to have some secret.
Here’s one entrance to the vendor zone/”Technical Learning Center.”  Couldn’t help but think of them as the remora attached to the Whale Shark that is Microsoft…
…or perhaps planets orbiting the sun. Microsoft is just that huge, and it seemed like every vendor in the industry looks forward to partnering with the tech behemoth.
Aside from the free stuff from the vendors, probably the most popular place in the house was the dining area.  Amazing spreads every day, multiple times a day.  While no attendance numbers were available at press time, literally thousands of attendees were fed, and fed well, every day.  And lest you think my post from earlier in the week exaggerated about the backpacks…
…or that I’m exaggerating about the lunch crowds.  This represents only about 25-30% of the lunch crowd – it was all my camera could capture at once.  No one went away hungry.
The only thing missing was a vat of Red Bull, but apparently organizers went old school, with probably 100 urns of the original energy drink – coffee – all around the venue.  Of course, following lunch and afternoon sessions, some preferred the even older school method of re-energizing.  There were rumors that Microsoft was serving graham crackers and milk in this area.  But they were only rumors.
Cannot overstate the wonderful service provided by the Orange County Convention Center staff.  Coffee, soft drinks, juice, and water were always available.  Buffet meals were delicious with a wide range of healthy options available, in addition to hundreds (at least) of special meal requests supported every day.  Ever tried to keep up with an estimated 9,000 hungry and thirsty IT-ers?  These folks did.  Kudos to all of the staff and many thanks!
And while I occasionally poke fun at the Whale Shark, if nothing else this experience convinced me of one thing:  Microsoft knows how to put on a professional event.  Hundreds of informative, professionally delivered sessions, covering a wide range of topics set at varying levels of expertise (some that even I was able to follow), social activities, vendor partnerships… they brought everything you could ask for to inform, educate, and inspire an entire IT industry.
So as I depart the belly of the beast, I can both take pride in the fact that I survived the week and marvel at the brilliance surrounding me.  The IT industry – or at least the segment associated with Microsoft – is in good, professional hands.  And what won’t fit in their hands can be toted in the Microsoft-provided backpacks.  Win-win.
Until New Orleans…

    Read the article

  • Windows Azure Use Case: Hybrid Applications

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx
Description: Organizations see the need for computing infrastructures that they can “rent” or pay for only when they need them. They also understand the benefits of distributed computing, but do not want to create this infrastructure themselves. However, they may have considerations that prevent them from moving all of their current IT investment to a distributed environment:
- Private data (do not want to send or store sensitive data off-site)
- High dollar investment in current infrastructure
- Applications currently running well, but may need additional periodic capacity
- Current applications not designed in a stateless fashion
In these situations, a “hybrid” approach works best. In fact, with Windows Azure, a hybrid approach is an optimal way to implement distributed computing even when the stipulations above do not apply. Keeping a majority of the computing function in an organization local while exploring and expanding that footprint into Windows and SQL Azure is a good migration or expansion strategy. A “hybrid” architecture merely means that part of a computing cycle is shared between two architectures. For instance, some level of computing might be done in a Windows Azure web-based application, while the data is stored locally at the organization.
Implementation: There are multiple methods for implementing a hybrid architecture, in a spectrum from very little interaction from the local infrastructure to Windows or SQL Azure. The patterns fall into two broad schemas, and even these can be mixed.
1. Client-Centric Hybrid Patterns
In this pattern, programs are coded such that the client system sends queries or compute requests to multiple systems. The “client” in this case might be a web-based codeset actually stored on another system (which acts as a client, the user’s device serving as the presentation layer) or a compiled program. In either case, the code on the client requestor carries the burden of defining the layout of the requests. While this pattern is often the easiest to code, it’s the most brittle. Any change in the architecture must be reflected on each client, but this can be mitigated by using a centralized system as the client, such as in the web scenario. (A minimal sketch of this pattern appears after the two pattern descriptions below.)
2. System-Centric Hybrid Patterns
Another approach is to create a distributed architecture by turning on-site systems into “services” that can be called from Windows Azure using the Service Bus or the Access Control Services (ACS) capabilities. Code calls then come from a series of in-process client applications. In this pattern you move the “client” interface into the server application logic. If you do not wish to change the application itself, you can “layer” the results of the code return using a product (such as Microsoft BizTalk) that exposes a Web Services Definition Language (WSDL) endpoint to Windows Azure using the Application Fabric. In effect, this is similar to creating a Service Oriented Architecture (SOA) environment, and has the advantage of de-coupling your computing architecture. If each system offers a “service” of the results of some software processing, the operating system or platform becomes immaterial, assuming it adheres to a service contract.
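To make the client-centric pattern a bit more concrete, here is a minimal Python sketch of how such a client might split its work. Everything specific in it is an assumption for illustration only: the endpoint URLs, the payload fields, the idea of a "scoring" call, and the use of the requests library are not from the original post.

import requests  # assumed third-party HTTP client

# Hypothetical endpoints: an Azure-hosted compute service and an on-premise data service.
AZURE_COMPUTE_URL = "https://example-app.cloudapp.net/api/score"   # assumption
LOCAL_DATA_URL = "http://intranet-gateway.local/api/customers"     # assumption

def score_customer(customer_id):
    """Client-centric hybrid call: the sensitive record stays on-premise;
    only derived, non-sensitive values are sent to the cloud for computation."""
    # 1. Read the sensitive record from the local, on-premise service.
    record = requests.get(f"{LOCAL_DATA_URL}/{customer_id}", timeout=5).json()

    # 2. Send only non-sensitive features to the hypothetical Azure endpoint.
    features = {"age_band": record["age_band"], "order_count": record["order_count"]}
    response = requests.post(AZURE_COMPUTE_URL, json=features, timeout=30)
    response.raise_for_status()

    # 3. The client stitches the two results together.
    return {"customer": customer_id, "score": response.json()["score"]}

Note that the stitching logic lives entirely in the client, which is exactly why this pattern is described as brittle: a change to either endpoint has to be reflected in every client that composes them.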
There are important considerations when you federate a system, whether to Windows or SQL Azure or any other distributed architecture. While these considerations are consistent with coding any application for distributed computing, they are especially important for a hybrid application.
- Connection resiliency - Applications on-premise normally have low-latency and good connection properties, something you’re not always guaranteed in a distributed and hybrid application. Whether a centralized client or a distributed one, the code should be able to handle extended retry logic (a sketch of such logic appears at the end of this post).
- Authorization and Access - In a single authorization environment like an Active Directory domain, security is handled at a user-password level. In a distributed computing environment, you have more options. You can mitigate this by using the Windows Azure Application Fabric feature of ACS to make the Azure application aware of the App Fabric as an ADFS provider. However, a claims-based authentication structure is often a superior choice.
- Consistency and Concurrency - When you have a Relational Database Management System (RDBMS), Consistency and Concurrency are part of the design. In a Service Architecture, you need to plan for sequential message handling and lifecycle.
Resources:
How to Build a Hybrid On-Premise/In Cloud Application: http://blogs.msdn.com/b/ignitionshowcase/archive/2010/11/09/how-to-build-a-hybrid-on-premise-in-cloud-application.aspx
General Architecture guidance: http://blogs.msdn.com/b/buckwoody/archive/2010/12/21/windows-azure-learning-plan-architecture.aspx
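As a small illustration of the "extended retry logic" called out under connection resiliency above, here is a hedged Python sketch of a retry wrapper with exponential backoff and jitter. The function name, the retry budget, the choice to treat 5xx responses as transient, and the use of the requests library are all assumptions, not anything prescribed in the post.

import random
import time
import requests  # assumed third-party HTTP client

def call_with_retry(url, payload, attempts=5):
    """Retry a call that crosses the on-premise/cloud boundary, backing off between tries."""
    for attempt in range(attempts):
        try:
            response = requests.post(url, json=payload, timeout=10)
            # Treat 5xx responses as transient; anything else goes back to the caller.
            if response.status_code < 500:
                return response
        except requests.RequestException:
            pass  # network-level failure: fall through to the backoff below
        # Back off 1s, 2s, 4s, ... plus a little jitter to avoid synchronized retries.
        time.sleep((2 ** attempt) + random.uniform(0, 0.5))
    raise RuntimeError(f"Gave up calling {url} after {attempts} attempts")

In a real hybrid deployment you would tune the timeouts, the retry budget, and the definition of "transient" to match your own latency and availability profile.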

    Read the article

  • Minidlna Directory Issues

    - by Somnambulist
    I've done my searching and can't find an answer to THIS specific issue. I have my minidlna set up and running - but it's not really done properly. First off, when I open the server on my bluray player, all of my movies are listed twice - when they are certainly not saved on my external twice. Second, when I open the server - rather than reading "Movies" "TV" "Music", etc - It just mashes all of my movies, tv, and some other folders all together with no real organization. I never had this problem when I had my Windows set up, so I know it's something configured improperly more-so than my external drive giving me gruff. Here's my minidlna.conf file: # This is the configuration file for the MiniDLNA daemon, a DLNA/UPnP-AV media # server. # # Unless otherwise noted, the commented out options show their default value. # # On Debian, you can also refer to the minidlna.conf(5) man page for # documentation about this file. media_dir=/media/somnambulist/Ghost In You # This option can be specified more than once if you want multiple directories # scanned. # # If you want to restrict a media_dir to a specific content type, you can # prepend the directory name with a letter representing the type (A, P or V), # followed by a comma, as so: # * "A" for audio (eg. media_dir=A,/var/lib/minidlna/music) # * "P" for pictures (eg. media_dir=P,/var/lib/minidlna/pictures) # * "V" for video (eg. media_dir=V,/var/lib/minidlna/videos) # # WARNING: After changing this option, you need to rebuild the database. Either # run minidlna with the '-R' option, or delete the 'files.db' file # from the db_dir directory (see below). # On Debian, you can run, as root, 'service minidlna force-reload' instead. #media_dir=/var/lib/minidlna media_dir=V,/media/somnambulist/Ghost In You/Movies media_dir=V,/media/somnambulist/Ghost In You/TV media_dir=P,/home/somnambulist/Pictures # Path to the directory that should hold the database and album art cache. db_dir=/home/somnambulist/serverart # Path to the directory that should hold the log file. log_dir=/home/somnambulist/serverlog # Minimum level of importance of messages to be logged. # Must be one of "off", "fatal", "error", "warn", "info" or "debug". # "off" turns of logging entirely, "fatal" is the highest level of importance # and "debug" the lowest. #log_level=warn # Use a different container as the root of the directory tree presented to # clients. The possible values are: # * "." - standard container # * "B" - "Browse Directory" # * "M" - "Music" # * "P" - "Pictures" # * "V" - "Video" # if you specify "B" and client device is audio-only then "Music/Folders" will be used as root root_container=B # Network interface(s) to bind to (e.g. eth0), comma delimited. #network_interface= # IPv4 address to listen on (e.g. 192.0.2.1). #listening_ip= # Port number for HTTP traffic (descriptions, SOAP, media transfer). port=8200 # URL presented to clients. # The default is the IP address of the server on port 80. #presentation_url=http://example.com:80 # Name that the DLNA server presents to clients. friendly_name=Somnambulist Media Server # Serial number the server reports to clients. serial=12345678 # Model name the server reports to clients. #model_name=Windows Media Connect compatible (MiniDLNA) # Model number the server reports to clients. model_number=1 # Automatic discovery of new files in the media_dir directory. #inotify=yes # List of file names to look for when searching for album art. Names should be # delimited with a forward slash ("/"). 
album_art_names=Cover.jpg/cover.jpg/AlbumArtSmall.jpg/albumartsmall.jpg/AlbumArt.jpg/albumart.jpg/Album.jpg/album.jpg/Folder.jpg/folder.jpg/Thumb.jpg/thumb.jpg # Strictly adhere to DLNA standards. # This allows server-side downscaling of very large JPEG images, which may # decrease JPEG serving performance on (at least) Sony DLNA products. #strict_dlna=no # Support for streaming .jpg and .mp3 files to a TiVo supporting HMO. #enable_tivo=no # Notify interval, in seconds. #notify_interval=895 # Path to the MiniSSDPd socket, for MiniSSDPd support. #minissdpdsocket=/run/minissdpd.sock` And here's the error I get in terminal when I run: sudo service minidlna restart sudo service minidlna force-reload Force restart error: Restarting DLNA/UPnP-AV media server minidlna [2013/08/12 21:19:27] minidlna.c:474: error: Media directory "/media/somnambulist/Ghost In You/Movies" not accessible! [Permission denied] [2013/08/12 21:19:27] minidlna.c:474: error: Media directory "/media/somnambulist/Ghost In You/TV" not accessible! [Permission denied] Force-reload error: Restarting DLNA/UPnP-AV media server minidlna [2013/08/12 21:19:46] minidlna.c:474: error: Media directory "/media/somnambulist/Ghost In You/Movies" not accessible! [Permission denied] [2013/08/12 21:19:46] minidlna.c:474: error: Media directory "/media/somnambulist/Ghost In You/TV" not accessible! [Permission denied] rm: cannot remove ‘/home/somnambulist/serverart/files.db’: Permission denied rm: cannot remove ‘/home/somnambulist/serverart/art_cache/media/somnambulist/Ghost In You/Movies/Slumdog Millionaire/Slumdog.Millionaire.Cover.jpg’: Permission denied rm: cannot remove ‘/home/somnambulist/serverart/art_cache/media/somnambulist/Ghost In You/Movies/Zack and Miri Make a Porno/ZackAndMiriMakeAPornoCover.jpg’: Permission denied [2013/08/12 21:19:46] minidlna.c:744: warn: Failed to clean old file cache. [ OK ] I've spent hours on this at this point, read through various files - and even had a friend who is relatively Ubuntu-savvy try to help me via chat - no such luck. Thanks in advance for any help.

    Read the article
