Search Results

Search found 89681 results on 3588 pages for 'cross server'.


  • ISA Server 2006 SP1 :: Allow unauthenticated users (non domain users) access to external (internet)

    - by Klaptrap
    Now that we have applied an internal-to-external rule blocking all users' access to the internet other than those users in a whitelist, we have the obvious issue of non-authenticated users not on our domain (i.e., domain-less guests) not being able to access the internet. Other than configuring each machine to use our alternative gateway - which would require a member of IT to be onsite every time a guest arrives - can this be done through ISA and AD?

    Read the article

  • Cherokee web server on Ubuntu Lucid

    - by Fazal
    I've been trying to find some decent tutorials on how to set up a recent release of the Cherokee web server on Ubuntu (or equivalent Linux distros) which outline how to set up the web server, MySQL, phpMyAdmin and PHP. Some already exist, such as http://www.howtoforge.com/installing-cherokee-with-php5-and-mysql-support-on-ubuntu-10.04 - however, I've found that the Cherokee version used in the tutorial is considerably out of date, and the update process has been painful to say the least. Thanks.
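
    A minimal sketch of the usual shortcut, assuming the Cherokee project's Launchpad PPA (ppa:cherokee-webserver/ppa - name unverified, check Launchpad first) carries builds for Lucid; it installs a newer Cherokee alongside the stack the HowtoForge guide covers:

      # add the PPA so apt sees a newer Cherokee than Lucid's archive carries
      sudo add-apt-repository ppa:cherokee-webserver/ppa
      sudo apt-get update
      # web server plus PHP (as FastCGI), MySQL and phpMyAdmin
      sudo apt-get install cherokee php5-cgi mysql-server phpmyadmin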

    Read the article

  • Using .htaccess to server files from Amazon S3 CloudFront

    - by Adrian A.
    My ideal setup would be to take a current client's site and upload a .htaccess with a regex inside that matches the URI, and if it finds a certain file extension, uses the same path but with an altered domain. i.e., normal paths:

      http://www.domain.com/something/images/someimage.jpeg
      http://www.domain.com/assets/js/jquery.js

    translated by .htaccess, the above would become:

      http://mycdn.other.com/something/images/someimage.jpeg
      http://mycdn.other.com/assets/js/jquery.js

    I googled this for hours in a row, no luck. Again, this is for actually making use of Amazon's CloudFront. S3 is already mounted to the website for backups and storing files using s3fs, but this doesn't solve the issue since it's using S3 directly, not CloudFront.
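
    A minimal sketch of the literal rule described above, assuming mod_rewrite is enabled and mycdn.other.com serves the same paths (domains taken from the examples). Note this issues a redirect per asset, so rewriting asset URLs in the HTML at generation time is usually the better CDN pattern:

      RewriteEngine On
      # send common static-asset extensions to the CDN host, keeping the path
      RewriteCond %{HTTP_HOST} ^www\.domain\.com$ [NC]
      RewriteRule \.(jpe?g|png|gif|css|js)$ http://mycdn.other.com%{REQUEST_URI} [R=302,L]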

    Read the article

  • Book Review: Pro SQL Server 2008 Relational Database Design and Implementation

    - by Alexander Kuznetsov
    Investing in proper database design is a very efficient way to cut maintenance costs. If we expect a system to last, we need to make sure it has a good solid foundation - high-quality database design. Surely we can, and sometimes do, cut corners and save on database design to get things done faster. Unfortunately, such corner-cutting frequently comes back and bites us: we may end up spending a lot of time solving issues caused by poor design. So, solid understanding of relational database design is...(read more)

    Read the article

  • Squid 2.7.STABLE3-4.1 as a transparent proxy on Ubuntu Server 9.04

    - by E3 Group
    Can't get this to work at all! I'm trying to get this Linux box to act as a transparent proxy and, with the help of DHCP, force everyone on the network to route through the proxy. I have two ethernet connections, both to the same switch, and I'm trying to get 192.168.1.234 to become the default gateway. The actual WAN connection is to a gateway at 192.168.1.1.

      eth0 is 192.168.1.234
      eth1 is 192.168.1.2

    Effectively I'm trying to make eth0 a LAN-only interface and eth1 a WAN interface. I've only set eth0 to have a gateway address in /etc/network/interfaces; I'm not sure whether I should set the gateway for eth1 to point to 192.168.1.234. My squid.conf file has the following directives added at the bottom:

      http_port 3128 transparent
      acl lan src 192.168.1.0/24
      acl lh src 127.0.0.1/255.255.255.0
      http_access allow lan
      http_access allow lh

    I've added the following routing commands:

      iptables -t nat -A PREROUTING -i eth0 -p tcp -m tcp --dport 80 -j DNAT --to-destination 192.168.1.2:3128
      iptables -t nat -A PREROUTING -i eth1 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 3128

    I set up a computer with TCP settings using 192.168.1.234 as the gateway and opened up google.com, but it comes up with a request error. Any ideas why this isn't working? :( Been searching continuously for a solution to no avail.

    EDIT: Managed to get it to route properly to the squid. Here's the error I get in the browser:

      ERROR: The requested URL could not be retrieved

      While trying to process the request:

      GET / HTTP/1.1
      Host: www.google.com
      User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-GB; rv:1.9.1.2) Gecko/20090729 Firefox/3.5.2
      Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
      Accept-Language: en-gb,en;q=0.5
      Accept-Encoding: gzip,deflate
      Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
      Keep-Alive: 300
      Connection: keep-alive
      Cache-Control: max-age=0

      The following error was encountered:

      * Invalid Request

      Some aspect of the HTTP Request is invalid. Possible problems:

      * Missing or unknown request method
      * Missing URL
      * Missing HTTP Identifier (HTTP/1.0)
      * Request is too large
      * Content-Length missing for POST or PUT requests
      * Illegal character in hostname; underscores are not allowed

      Your cache administrator is webmaster. Generated Mon, 26 Oct 2009 03:41:15 GMT by mjolnir.lloydharrington.local (squid/2.7.STABLE3)
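
    For reference, a minimal sketch of the common single-box intercept setup (assumptions: Squid listens on 3128 on this same box, eth0 faces the LAN, eth1 faces the WAN). The DNAT rule above hands port-80 traffic across interfaces to 192.168.1.2:3128, and Squid 2.7's "Invalid Request" page is often a sign the intercepted request did not arrive the way the transparent option expects; a local REDIRECT avoids that:

      # on the proxy box itself (192.168.1.234)
      echo 1 > /proc/sys/net/ipv4/ip_forward
      # intercept LAN web traffic locally instead of DNAT-ing to another interface
      iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-ports 3128
      # masquerade everything else leaving via the WAN side
      iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE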

    Read the article

  • HPC Server Dynamic Job Scheduling: when jobs spawn jobs

    - by JoshReuben
    HPC Job Types

    HPC has 3 types of jobs (http://technet.microsoft.com/en-us/library/cc972750(v=ws.10).aspx):

      - Task Flow – a vanilla sequence
      - Parametric Sweep – concurrently run multiple instances of the same program, each with a different work unit input
      - MPI – message passing between master & slave tasks

    But when you try to go outside the box – job tasks that spawn jobs, blocking the parent task – you run the risk of resource starvation, deadlocks, and recursive, non-converging or exponential blow-up. The solution to this is to write some performance monitoring and job scheduling code. You can do this in 2 ways:

      - Manually control scheduling – allocate/de-allocate resources, change job priorities, pause & resume tasks, restrict long-running tasks to specific compute clusters
      - Semi-automatically – set threshold params for scheduling

    How – Control Job Scheduling

    In order to manage the tasks and resources that are associated with a job, you will need to access the ISchedulerJob interface (http://msdn.microsoft.com/en-us/library/microsoft.hpc.scheduler.ischedulerjob_members(v=vs.85).aspx). This really allows you to control how a job is run – you can access & tweak the following features (see the C# sketch after this list):

      - max/min resource values, and whether job resources can grow/shrink
      - whether jobs can be pre-empted, and whether the job is exclusive per node
      - the creator process id & the job pool
      - timestamps of job creation & completion
      - job priority, hold time & run time limit
      - re-queue count
      - job progress
      - max/min number of cores, nodes, sockets, RAM
      - dynamic task list – tasks can be added/cancelled on the fly
      - job counters

    When – Poll Perf Counters

    Tweaking the job scheduler should be done on the basis of resource utilization according to PerfMon counters – HPC exposes 2 perf objects, Compute Clusters and Compute Nodes (http://technet.microsoft.com/en-us/library/cc720058(v=ws.10).aspx). You can monitor running jobs against dynamic thresholds – use your own discretion:

      - percentage processor time
      - number of running jobs
      - number of running tasks
      - total number of processors
      - number of processors in use
      - number of processors idle
      - number of serial tasks
      - number of parallel tasks

    Design Your Algorithms Correctly

    Finally, don't assume you have unlimited compute resources in your cluster – design your algorithms with the following factors in mind:

      - Branching factor (http://en.wikipedia.org/wiki/Branching_factor) – dynamically optimize the number of children per node
      - Cutoffs to prevent explosions (http://en.wikipedia.org/wiki/Limit_of_a_sequence) – not all functions converge after n attempts; you also need a threshold of "good enough" and diminishing returns
      - Heuristic shortcuts (http://en.wikipedia.org/wiki/Heuristic) – sometimes an exhaustive search is impractical and shortcuts are suitable
      - Pruning (http://en.wikipedia.org/wiki/Pruning_(algorithm)) – remove/de-prioritize unnecessary tree branches
      - Avoid local minima/maxima (http://en.wikipedia.org/wiki/Local_minima) – sometimes an algorithm can't converge because it gets stuck in a local saddle; try simulated annealing, hill climbing or genetic algorithms to get out of these ruts
      - Watch out for rounding errors (http://en.wikipedia.org/wiki/Round-off_error) – multiple iterations in parallel can quickly amplify & blow up your algorithm; use an epsilon and avoid floating point errors, truncations and approximations

    Happy Coding!
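
    As a rough illustration of the ISchedulerJob control described above, a minimal C# sketch (assumptions: the Microsoft.Hpc.Scheduler assemblies are referenced, and "HEADNODE" and worker.exe are placeholders; member names follow the MSDN link above):

      using Microsoft.Hpc.Scheduler;
      using Microsoft.Hpc.Scheduler.Properties;

      class JobControlSketch
      {
          static void Main()
          {
              // connect to the cluster head node ("HEADNODE" is a placeholder)
              IScheduler scheduler = new Scheduler();
              scheduler.Connect("HEADNODE");

              ISchedulerJob job = scheduler.CreateJob();
              job.Name = "SpawnedChildWork";
              job.Priority = JobPriority.BelowNormal;   // don't let spawned jobs starve their parents
              job.MinimumNumberOfCores = 1;             // bound resources so recursive fan-out can't blow up
              job.MaximumNumberOfCores = 8;

              ISchedulerTask task = job.CreateTask();
              task.CommandLine = "worker.exe /unit:42"; // placeholder work item
              job.AddTask(task);

              // null credentials fall back to cached or prompted credentials
              scheduler.SubmitJob(job, null, null);
          }
      }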

    Read the article

  • "Deep Fried Bytes Podcast": Lars on SQL Server Modeling

    Here's how the Deep Fried guys describe episode 45: "At PDC 2009, 'Oslo' was renamed to SQL Modeling and it left a lot of developers scratching their heads. What better way to sort it all out than to talk with someone deep into the stack. We sat down with Lars Corneliussen to see how this is all going to turn out and what it means for developers. Definitely an interesting show as it paints a different picture about where things are going with 'M', 'M' Grammar, SQL modeling, Entity Framework, Quadrant...

    Read the article

  • "Deep Fried Bytes Podcast": Lars on SQL Server Modeling

    Here's how the Deep Fried guys describe episode 45: "At PDC 2009, 'Oslo' was renamed to SQL Modeling and it left a lot of developers scratching their heads. What better way to sort it all out than to talk with someone deep into the stack. We sat down with Lars Corneliussen to see how this is all going to turn out and it what it means for developers. Definitely an interesting show as it paints a different picture about where things are going with 'M', 'M' Grammar, SQL modeling, Entity Framework, Quadrant...Did you know that DotNetSlackers also publishes .net articles written by top known .net Authors? We already have over 80 articles in several categories including Silverlight. Take a look: here.

    Read the article

  • OpenVPN access server on Amazon VPC vs free version

    - by imaginative
    Maybe I'm missing the point, but I'd like to set up simple VPN access with a software VPN to reach my private network on Amazon VPC. I thought OpenVPN would be a great solution for this, and I thought it might make sense to put it on the NAT instance that comes with VPC so I don't have to spend money on another instance. Is there any advantage to running the following: http://www.openvpn.net/index.php?option=com_content&id=493 vs sticking to the free solution of OpenVPN? What does one offer over the other? Any reason not to run this on the NAT instance itself?

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 22 (sys.dm_db_index_physical_stats)

    - by Tamarick Hill
    The sys.dm_db_index_physical_stats Dynamic Management Function is used to return information about the fragmentation levels, page counts, depth, number of levels, record counts, etc. of the indexes on your database instance. One row is returned for each level in a given index, which we will discuss more later. The function takes a total of 5 input parameters: (1) database_id, (2) object_id, (3) index_id, (4) partition_number, and (5) the mode of the scan you would like to run. Let's use this function with our AdventureWorks2012 database to better illustrate the information it provides.

      SELECT * FROM sys.dm_db_index_physical_stats(DB_ID('AdventureWorks2012'), NULL, NULL, NULL, NULL)

    As you can see from the result set, there is a lot of beneficial information returned from this DMF. The first few columns (database_id, object_id, index_id, partition_number, index_type_desc, alloc_unit_type_desc) are either self-explanatory or have been explained in previous blog sessions, so I will not go into detail about them at this time. The next column is index_depth, which represents how deep the index goes. For example, if we have a large index that contains 1 root page, 3 intermediate levels, and 1 leaf level, our index depth would be 5. The next column is index_level, which refers to the level (of the depth) a particular row is referring to. Next is probably one of the most beneficial columns in this result set, avg_fragmentation_in_percent, which shows you how fragmented a particular level of an index may be. Many people use this column within their index maintenance jobs to dynamically determine whether they should do REORGs or full REBUILDs of a given index. The fragment_count column represents the number of fragments in the leaf level, while avg_fragment_size_in_pages represents the number of pages in a fragment. The page_count column tells you how many pages are in a particular index level.

    From my result set above, you see that the remaining columns all have NULL values. This is because I did not specify a 'mode' in my query, so it used the 'LIMITED' mode by default. The LIMITED mode is meant to be lightweight, so it does not collect information for every column in the result set. I will re-run my query using the 'DETAILED' mode, and you will see we now have results for these columns.

      SELECT * FROM sys.dm_db_index_physical_stats(DB_ID('AdventureWorks2012'), NULL, NULL, NULL, 'DETAILED')

    From the remaining columns, you get even more detailed information, such as how many records are in a particular index level (record_count). The ghost_record_count column represents the number of records that have been marked for deletion but have not yet been physically removed by the background ghost cleanup process. We then see information on the MIN, MAX, and AVG record size in bytes. The forwarded_record_count column refers to records that have been updated and no longer fit within the row on the page, and thus have to be moved; a forwarded record is left in the original location with a pointer to the new location. The last column, compressed_page_count, tells you how many pages in your index have been compressed. This is a very powerful DMF that returns good information about the current indexes in your system. However, depending on the mode you select, it can be a very resource-intensive function, so be careful with how you use it.
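
    As a sketch of the maintenance-job pattern mentioned above (the 5%/30% thresholds are the commonly cited Books Online guidance, not something from this article):

      -- suggest an action per index based on leaf-level fragmentation
      SELECT OBJECT_NAME(ips.object_id) AS table_name,
             i.name AS index_name,
             ips.avg_fragmentation_in_percent,
             CASE
                 WHEN ips.avg_fragmentation_in_percent > 30 THEN 'REBUILD'
                 WHEN ips.avg_fragmentation_in_percent > 5  THEN 'REORGANIZE'
                 ELSE 'NONE'
             END AS suggested_action
      FROM sys.dm_db_index_physical_stats(DB_ID('AdventureWorks2012'), NULL, NULL, NULL, 'LIMITED') AS ips
      JOIN sys.indexes AS i
        ON i.object_id = ips.object_id AND i.index_id = ips.index_id
      WHERE ips.index_id > 0;  -- skip heaps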
For more information on this Dynamic Management Function, please see the below Books Online link: http://msdn.microsoft.com/en-us/library/ms188917.aspx Follow me on Twitter @PrimeTimeDBA

    Read the article

  • Network Printer or Share Printer on Server?

    - by Joeme
    Hi, Small office, <10 users. USB printer which also has a network port. Is it better to share the printer by plugging the USB cable into the server and doing a Windows share, or to use the built-in network port? We are using the built-in network port at the moment, but don't have control to delete jobs that get stuck in the queue. Thanks, Joe

    Read the article

  • UK SQL Server User Group Event (May)

    Our very own Darren Green is speaking at a UK user group event in Cambridge (UK) on 20.05.2009.  He will be speaking on Integration Services.  Peter Blackburn will also be there and what he doesn’t know about SSRS isn’t worth knowing.  It promises to be a good night.  We would love to see as many people there as possible so head over to the UK User Group site and register. Register Here

    Read the article

  • Snow Leopard Server, how to turn on compression for web

    - by gbrandt
    I am trying to make the web server in Snow Leopard compress all output by default. The only thing I have found is to add SetOutputFilter DEFLATE in the .htaccess file for a directory, and I really don't want to add an .htaccess file to every directory served. How can I globally get Apache2 on Snow Leopard to compress output?
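
    A minimal sketch for the stock Apache on Snow Leopard (assumptions: the usual /etc/apache2/httpd.conf location and the bundled mod_deflate - verify the module path on your install):

      # /etc/apache2/httpd.conf
      LoadModule deflate_module libexec/apache2/mod_deflate.so
      # compress common text types globally instead of per-directory
      AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css application/javascript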

    Read the article

  • Getting 400 Bad Request when requesting by server name on nginx/uwsgi

    - by Marc Hughes
    I'm trying to run 2 different sites on nginx via different ports (they each have a load balancer that points to the appropriate port). The first site works perfectly. The second site:

      If I access http://localhost:81/ it works correctly.
      If I access http://127.0.0.1:81/ it works correctly.
      If I access the hostname http://THEHOSTNAME:81/ it fails with a 400 error.
      If I access the public IP http://x.x.x.x:81/ it fails with a 400 error.

    I've set the error_log to info, but the only lines I get in the log when this happens are:

      ==> /var/log/nginx/access.log <==
      10.183.38.141 - - [24/Aug/2014:21:03:28 +0000] "GET / HTTP/1.1" 400 37 "-" "curl/7.36.0" "-"

      ==> /var/log/nginx/error.log <==
      2014/08/24 21:03:28 [info] 7029#0: *5 client 10.183.38.141 closed keepalive connection

    In my uwsgi log, I only see this:

      [pid: 6870|app: 0|req: 87/92] 10.28.23.224 () {32 vars in 380 bytes} [Sun Aug 24 21:05:21 2014] GET / => generated 26 bytes in 1 msecs (HTTP/1.1 400) 2 headers in 82 bytes (1 switches on core 2)

    What should be my next step in debugging this?
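
    A sketch of the shape the server block probably needs (names here are placeholders). Since the uwsgi log shows the 400 being generated by the app itself, also check the app's host-header validation (Django's ALLOWED_HOSTS, for example) rather than nginx alone:

      server {
          listen 81;
          # list every name/IP clients will actually use
          server_name thehostname.example.com;
          location / {
              include uwsgi_params;       # forwards HTTP_HOST to the app
              uwsgi_pass 127.0.0.1:3031;  # placeholder backend address
          }
      }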

    Read the article

  • SQL SERVER – World Shapefile Download and Upload to Database – Spatial Database

    During my recent training, I was asked by a student if I know a place where he can download spatial files for all the countries around the world, as well as if there is a way to upload shape files to a database. Here is a quick tutorial for it. VDS Technologies has all the spatial [...]

    Read the article

  • Config Server Firewall: Spamming my email | lfd on localhost: Suspicious process running under user www-data

    - by Henry Hoggard
    I have just installed and configured CSF and I am getting hundreds of spam emails containing this message:

      lfd on localhost: Suspicious process running under user www-data
      Time: Wed May 23 01:05:52 2012 +0200
      PID: 8503
      Account: www-data
      Uptime: 118 seconds
      Executable: /usr/lib/apache2/mpm-prefork/apache2
      Command Line (often faked in exploits): /usr/sbin/apache2 -k start
      Network connections by the process (if any): tcp6: 0.0.0.0:80 -> 0.0.0.0:0
      Files open by the process (if any):

    Does anyone know how to fix this?
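
    If the process is legitimate (a stock Apache worker, as it appears here), the usual fix is to whitelist it in lfd's process-ignore file; a sketch, with the paths taken from the alert itself:

      # /etc/csf/csf.pignore
      exe:/usr/lib/apache2/mpm-prefork/apache2
      user:www-data

      # then restart lfd, e.g.:
      # service lfd restart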

    Read the article

  • Geek City: SQL Server 2014 In-Memory OLTP (“Hekaton”) Whitepaper for CTP2

    - by Kalen Delaney
    Last week at the PASS Summit in Charlotte, NC, the update of my whitepaper for CTP2 was released. The manager supervising the paper at Microsoft told me that David DeWitt himself said some very nice things about the technical quality of the paper, which was one of the most ego enhancing compliments I have ever gotten! Unfortunately, Dr. DeWitt said those things at his “After-the-keynote” session, not in the keynote that was recorded, so I only have my manager’s word for it. But I’ll take what I can...(read more)

    Read the article

  • CakePHP: trouble configuring .htaccess for user directories enabled server

    - by bullettime
    I've placed the CakePHP files in a directory in /home/user/public_html/cakephp. When I try to reach localhost/~user/cakephp with my browser, there's an error message. In my case, since I'm using Chrome, it is 'Oops! This link appears to be broken.". Looking for a solution on Google, I found a few articles saying that I have to edit the .htaccess files that came with CakePHP, since it was made to work out of the box in /var/www/htdocs. Apparently I have to add a 'RewriteBase' statement to the .htaccess files. I added 'RewriteBase /' to it but it didn't work. If I change the RewriteBase statement in my user web directory to 'RewriteBase /cakephp' and then try to access localhost/~user/cakephp, the browser then shows not the copy in /home/user/public_html/cakephp but the copy in /var/www/htdocs/cakephp. What can I do to fix this?
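
    For what it's worth, a sketch of the root .htaccess with the RewriteBase adjusted for a userdir URL ("user" being the actual account name; the app/ and app/webroot/ copies need matching RewriteBase lines):

      <IfModule mod_rewrite.c>
          RewriteEngine on
          RewriteBase /~user/cakephp
          RewriteRule ^$ app/webroot/ [L]
          RewriteRule (.*) app/webroot/$1 [L]
      </IfModule>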

    Read the article

  • Mail server responds with Error 500 line too long

    - by mawimawi
    I am occasionally receiving error messages from mail servers with the above message. It seems that at least one line in the e-mail has more than 999 characters, and therefore the e-mail is bounced. Is this "by design", per an RFC? Or some weird pseudo-spam-filter? Or just a bad mail server? I googled around a bit but did not find a competent answer. Hopefully one of you guys can enlighten me.

    Read the article

  • Using dnsmasq dns server on a laptop for a locally hosted tld

    - by kimausloos
    Hi, I'm running a working Apache install with mod vhost_alias on a laptop. Everything works, but I have to manually add hosts in my /etc/hosts file. I'd like to install dnsmasq to point a TLD to my local machine (this is not the problem; I know how to add a wildcard TLD to dnsmasq), but how do I make it go look for other DNS servers upstream? Remember I'm talking about a laptop, so I will be working on various wifi hotspots with different default gateways, DNS servers, etc. I've searched for quite some time, but the only things I find are guides that require you to hardcode the upstream servers, which is a no-go for me. Any help is appreciated!
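
    A minimal sketch of one common arrangement (the .dev TLD and file paths are examples): dnsmasq answers the wildcard TLD itself and reads upstream servers from a file your DHCP client rewrites per hotspot, while /etc/resolv.conf points at dnsmasq:

      # /etc/dnsmasq.conf
      address=/dev/127.0.0.1            # wildcard: *.dev -> this machine
      resolv-file=/etc/resolv.dnsmasq   # upstreams, rewritten by the DHCP client

      # /etc/resolv.conf
      # nameserver 127.0.0.1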

    Read the article
