Search Results

Search found 19412 results on 777 pages for 'multiple resultsets'.

  • Apache redirect multiple domain names from https

    - by Cyril N.
    My server distributes two main websites, say www.google.com & www.facebook.com (yeah, I know :p). I want them served via https. Using Apache, I defined a vhost file in sites-available/enabled containing this:

        <VirtualHost *:80>
            ServerName google.com
            Redirect / https://www.google.com/
        </VirtualHost>

        <VirtualHost *:80>
            ServerName facebook.com
            Redirect / https://www.facebook.com/
        </VirtualHost>

        <VirtualHost *:80>
            DocumentRoot /srv/www/google/www/
            ServerName www.google.com
            ServerAlias www.facebook.com
            <Directory ... />
            # Google & Facebook point to the same directory (crazy, right?)
            # rest of the config
        </VirtualHost>

        <VirtualHost *:443>
            SSLEngine On
            SSLCertificateFile /path/to/google.crt
            SSLCertificateKeyFile /path/to/google.key
            DocumentRoot "/srv/www/google/www/"
            ServerName www.google.com
            <Directory .../>
            # rest of the config
        </VirtualHost>

        <VirtualHost *:443>
            SSLEngine On
            SSLCertificateFile /path/to/facebook.crt
            SSLCertificateKeyFile /path/to/facebook.key
            DocumentRoot "/srv/www/google/www/"
            ServerName www.facebook.com
            <Directory .../>
            # rest of the config
        </VirtualHost>

    If I access https://www.google.com, https works correctly. If I access https://www.facebook.com, https works correctly. If I access http://www.google.com or http://www.facebook.com, plain http works correctly (without https!). BUT: if I access https://facebook.com, it fails, saying the SSL certificate is not what was expected: google.com instead of facebook.com. Based on my configuration file I understand why, so I tried to add:

        <VirtualHost *:443>
            SSLEngine On
            ServerName facebook.com
            Redirect / https://www.facebook.com/
        </VirtualHost>

    But then I can't access facebook.com or www.facebook.com at all, via http or https. So my question is quite simple: how can I redirect all https access on facebook.com (and eventually all facebook subdomains: facebook.fr, www.facebook.fr, etc.) to www.facebook.com, staying in HTTPS? Thanks for your help! :)
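
    A hedged sketch of one likely fix, assuming a certificate is available that covers the bare domain (via a subjectAltName or wildcard): the added *:443 block enables SSLEngine without any certificate directives, and mod_ssl refuses to start a TLS vhost without a server certificate, which would explain everything going dark after the change. Giving the redirect vhost its own certificate lets the handshake complete so the Redirect can actually run:

        <VirtualHost *:443>
            SSLEngine On
            # Assumption: facebook.crt lists facebook.com (and any other bare
            # domains) in its subjectAltName; without certificate directives
            # the handshake fails before Redirect is ever reached.
            SSLCertificateFile /path/to/facebook.crt
            SSLCertificateKeyFile /path/to/facebook.key
            ServerName facebook.com
            ServerAlias facebook.fr www.facebook.fr
            Redirect permanent / https://www.facebook.com/
        </VirtualHost>

    Note that name-based vhosts sharing one IP on port 443 need SNI (Apache 2.2.12+ with a recent OpenSSL); without it, every *:443 vhost presents the first certificate loaded, and the warning on the bare domain can only be cured by a certificate that covers it.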

  • MySQL "OR MATCH" hangs (very slow) on multiple tables

    - by Kerry
    After learning how to do MySQL full-text search, the recommended solution for multiple tables was OR MATCH and then the other database call; you can see that in my query below. When I do this, it just gets stuck in a "busy" state, and I can't access the MySQL database.

        SELECT a.`product_id`, a.`name`, a.`slug`, a.`description`,
               b.`list_price`, b.`price`, c.`image`, c.`swatch`,
               e.`name` AS industry,
               MATCH( a.`name`, a.`sku`, a.`description` ) AGAINST ( '%s' IN BOOLEAN MODE ) AS relevance
        FROM `products` AS a
        LEFT JOIN `website_products` AS b ON (a.`product_id` = b.`product_id`)
        LEFT JOIN ( SELECT `product_id`, `image`, `swatch`
                    FROM `product_images`
                    WHERE `sequence` = 0 ) AS c ON (a.`product_id` = c.`product_id`)
        LEFT JOIN `brands` AS d ON (a.`brand_id` = d.`brand_id`)
        INNER JOIN `industries` AS e ON (a.`industry_id` = e.`industry_id`)
        WHERE b.`website_id` = %d
          AND b.`status` = %d
          AND b.`active` = %d
          AND MATCH( a.`name`, a.`sku`, a.`description` ) AGAINST ( '%s' IN BOOLEAN MODE )
           OR MATCH ( d.`name` ) AGAINST ( '%s' IN BOOLEAN MODE )
        GROUP BY a.`product_id`
        ORDER BY relevance DESC
        LIMIT 0, 9

    Any help would be greatly appreciated.

    EDIT: All the tables involved are MyISAM, utf8_general_ci. Here's the EXPLAIN SELECT output:

        id  select_type  table           type    possible_keys  key         key_len  ref                     rows   Extra
        1   PRIMARY      a               ALL     NULL           NULL        NULL     NULL                    16076  Using temporary; Using filesort
        1   PRIMARY      b               ref     product_id     product_id  4        database.a.product_id   2
        1   PRIMARY      e               eq_ref  PRIMARY        PRIMARY     4        database.a.industry_id  1
        1   PRIMARY      <derived2>      ALL     NULL           NULL        NULL     NULL                    23261
        1   PRIMARY      d               eq_ref  PRIMARY        PRIMARY     4        database.a.brand_id     1      Using where
        2   DERIVED      product_images  ALL     NULL           NULL        NULL     NULL                    25933  Using where

    UPDATE: it returns the query after 196 seconds (I think correctly). The query without multiple tables takes about 0.56 seconds (which I know is really slow; we plan on changing to Solr or Sphinx soon), but 196 seconds?? If we could add a number to the relevance when the term appears in the brand name (d.name), that would also work.
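
    Two observations, offered as a sketch rather than a definitive fix. First, AND binds tighter than OR in SQL, so the WHERE clause above groups as (website_id ... AND first MATCH) OR (second MATCH): the second MATCH is applied with no other filter, and because the two MATCH clauses sit on opposite sides of an OR, neither full-text index can drive the scan, which fits the full-table row counts in the EXPLAIN. The usual workaround is to run each MATCH in its own query, where its index can be used, and merge the results with UNION (columns abbreviated, 'widget' stands in for the search term):

        SELECT a.`product_id`, a.`name`,
               MATCH( a.`name`, a.`sku`, a.`description` ) AGAINST ( 'widget' IN BOOLEAN MODE ) AS relevance
        FROM `products` AS a
        WHERE MATCH( a.`name`, a.`sku`, a.`description` ) AGAINST ( 'widget' IN BOOLEAN MODE )
        UNION
        SELECT a.`product_id`, a.`name`,
               MATCH( d.`name` ) AGAINST ( 'widget' IN BOOLEAN MODE ) AS relevance
        FROM `products` AS a
        INNER JOIN `brands` AS d ON a.`brand_id` = d.`brand_id`
        WHERE MATCH( d.`name` ) AGAINST ( 'widget' IN BOOLEAN MODE )
        ORDER BY relevance DESC
        LIMIT 0, 9;

    The website_id/status/active filters and remaining joins would go back into each branch; weighting brand-name hits is then just a matter of multiplying the second branch's relevance.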

  • Multiple rack apps on nginx + passenger, one as root, the other not...config help

    - by cannikin
    So I've got two apps I want to run on a server. One app I would like to be the "default" app; that is, all URLs should be sent to this app by default, except for a certain path, let's call it /foo:

        http://mydomain.com/       -> app1
        http://mydomain.com/apples -> app1
        http://mydomain.com/foo    -> app2

    My two Rack apps are installed like so:

        /var
          /www
            /apps
              /app1
                app.rb
                config.ru
                /public
              /app2
                app.rb
                config.ru
                /public
            app1 -> apps/app1/public
            app2 -> apps/app2/public

    (app1 and app2 are symlinks to their respective apps' public directories.) This is the Passenger setup for sub-URIs described here: http://www.modrails.com/documentation/Users%20guide%20Nginx.html#deploying_rack_to_sub_uri

    With the following config I've got /foo going to app2:

        server {
            listen 80;
            server_name mydomain.com;
            root /var/www;
            passenger_enabled on;
            passenger_base_uri /app1;
            passenger_base_uri /app2;

            location /foo {
                rewrite ^.*$ /app2 last;
            }
        }

    Now, how do I get app1 to pick up everything else? I've tried the following (placed after the location /foo directive), but I get a 500 with an infinite internal redirect in error.log:

        location / {
            rewrite ^(.*)$ /app1$1 last;
        }

    I hoped that the last directive would prevent that infinite redirect, but I guess not. /foo gets the same error. Any ideas? Thanks!
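
    A hedged reading of the loop: rewrite ... last re-runs location matching with the rewritten URI, so /apples becomes /app1/apples, matches location / again, becomes /app1/app1/apples, and so on forever. A sketch that avoids re-matching (untested; assumes passenger_enabled is inherited from the server block):

        # Longest-prefix match wins, so /app1/... and /app2/... skip the
        # catch-all below and Passenger serves them directly.
        location /app1 { }
        location /app2 { }

        location / {
            # 'break' stops rewrite processing and stays in this location,
            # so the rewritten URI is never matched against locations again.
            rewrite ^(.*)$ /app1$1 break;
        }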

  • Multiple servers vs 1 big server performance

    - by pistacchio
    Hi to all! My team of developers has suggested a server structure for an upcoming project we are developing. Our structure is "logical", meaning that the various logical components of the application (it is a distributed one) rely on different servers. Some components are more critical than others and will be subjected to more load. Our proposal was to have one server per component, but the hardware guys suggested replacing the various machines with a single, bigger one running virtual servers. They're going to use blade servers. Now, I'm not an expert at all, but my question to them was: if we need, for example, three 2 GHz CPU / 2 GB RAM machines and you give me one machine with three 2 GHz CPUs and 6 GB of RAM, is it the same? They told me it is. Is this accurate? What are the advantages and disadvantages of the two solutions? What are the generally accepted best practices? Could you point out some URL reference dealing with the problem? Thank you in advance!

    EDIT: Some more info. The (internet/intranet) application is already layered. We have some servers on the DMZ that will expose pages to the internet, and the databases are on their own machines. What we want to split (and they want to join) are some web servers that mainly expose web services. One is a DAL that communicates with the database layer, one is our Single Sign On / User Profile application that gets called once per page, and one is a clone of what is seen on the Internet, to be used on our LAN.

  • Pin same app multiple times in Windows 7

    - by Mr. Shiny and New ??
    I use some programs with command line arguments and like to have shortcuts for launching those programs with those arguments. For example, I keep several Firefox profiles around and like to specify the profile name on the command line. Similarly I have several Eclipse shortcuts with a command line argument specifying the workspace to open. I would like to be able to pin these shortcuts to the start menu or taskbar in Windows 7. The problem I have is that once I've pinned one of these, no other shortcuts which launch the same exe can be launched. I'm also open to suggestions such as a suitable desktop gadget which can contain a bunch of arbitrary shortcuts, yet remain in a fixed position on my desktop somewhere, or some way of adding a secondary taskbar (this was possible in XP).

  • Rename multiple files as "Modified Date/Time" using cmd or Powershell

    - by Mehper C. Palavuzlar
    I have hundreds of JPG files in a folder. I want to rename each file so that the file name is replaced with the "modified date/time" of that file, in the form DD.MM.YYYY.HH.MM.jpg. For example:

        Before      After
        001.jpg     11.01.2011.16.58.jpg
        002.jpg     12.01.2011.09.32.jpg
        003.jpg     14.01.2011.12.41.jpg
        ...         ...

    Since a colon (:) cannot be used in file names, the colon between HH and MM must be replaced with a period. I don't want to use a 3rd-party tool. Can you provide me with the code to achieve this in PowerShell or the command line?
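
    A minimal PowerShell sketch, assuming the files are in the current folder and that no two photos share the same modified minute (collisions would make Rename-Item fail on a duplicate name):

        # Rename every .jpg to its LastWriteTime, formatted DD.MM.YYYY.HH.MM
        Get-ChildItem *.jpg | Rename-Item -NewName {
            $_.LastWriteTime.ToString('dd.MM.yyyy.HH.mm') + '.jpg'
        }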

  • Corrupted NTFS Drive showing multiple unallocated partitions

    - by volting
    My external HDD with a single NTFS partition was accidentally plugged out (kids!) and is now corrupted. I've tried running ntfsfix, with no luck; output below. When I look at the disk under Disk Management in Windows 7, it shows up as having 5 partitions, 2 of which are unallocated. None have drive letters, and it is not possible to set any (that option and most others are greyed out), so I can't run chkdsk /f. I've tried using MiniTool Partition Wizard, which was mentioned as a solution to another similar question here. It showed the whole drive as one partition, but as unallocated, and the option "Check File System" was greyed out. Is there anything else I could try?

    Output of fdisk -l:

        Disk /dev/sdb: 1500.3 GB, 1500299395072 bytes
        255 heads, 63 sectors/track, 182401 cylinders, total 2930272256 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x69205244

        This doesn't look like a partition table
        Probably you selected the wrong device.

           Device Boot       Start          End      Blocks   Id  System
        /dev/sdb1   ?    218129509   1920119918   850995205   72  Unknown
        /dev/sdb2   ?    729050177   1273024900   271987362   74  Unknown
        /dev/sdb3   ?    168653938    168653938           0   65  Novell Netware 386
        /dev/sdb4     2692939776   2692991410       25817+    0  Empty

        Partition table entries are not in disk order

    Output of ntfsfix:

        me@vaio:/dev$ sudo ntfsfix /dev/sdb
        Mounting volume... ntfs_mst_post_read_fixup_warn: magic: 0xffffffff size: 1024 usa_ofs: 65535 usa_count: 65534: Invalid argument
        Record 0 has no FILE magic (0xffffffff)
        Failed to load $MFT: Input/output error
        FAILED
        Attempting to correct errors... ntfs_mst_post_read_fixup_warn: magic: 0xffffffff size: 1024 usa_ofs: 65535 usa_count: 65534: Invalid argument
        Record 0 has no FILE magic (0xffffffff)
        Failed to load $MFT: Input/output error
        FAILED
        Failed to startup volume: Input/output error
        Checking for self-located MFT segment... ntfs_mst_post_read_fixup_warn: magic: 0xffffffff size: 1024 usa_ofs: 65535 usa_count: 65534: Invalid argument
        OK
        ntfs_mst_post_read_fixup_warn: magic: 0xffffffff size: 1024 usa_ofs: 65535 usa_count: 65534: Invalid argument
        Record 0 has no FILE magic (0xffffffff)
        Failed to load $MFT: Input/output error
        Volume is corrupt. You should run chkdsk.

    Options available with MiniTool: (screenshot omitted)

  • 389 DS Architecture for Multiple Sites

    - by Kyle Flavin
    I'm looking to deploy 389 Directory Server in my environment to replace an existing iPlanet installation. I would be using it primarily to store user account data for authentication purposes. I have two physically separate data centers that I would like to share the same directory tree. My initial thinking is to set up 389 DS as follows:

        - a master/consumer pair in data center A
        - a master/consumer pair in data center B
        - a replication agreement between both masters, to mirror the directory tree in both environments

    Does this sound like a reasonable approach? Is there a better way to do it (e.g. four masters)? Is there documentation for best practices when setting up 389 DS in situations such as this? Thanks.

  • Permissions in OS X for iTunes library with multiple users

    - by John
    I currently have a lot of music on an external drive, with my iTunes library set up from there. However, periodically, when the external drive isn't connected, iTunes will default back to the library location in my home directory. I don't want to mess with an external drive, as my Mac's HD is large enough to house the music collection. However, I have four family members, all with their own logins, using this same gob of music. I don't want four copies of the library, only one, with all libraries referencing it. So, what I want to do is:

        1. Move all music files to a shared directory at /Macintosh HD/users/music. I created this directory and adjusted permissions so all four users can read and write to it.
        2. Get all four accounts to reference this library instead of the external or local home locations.

    I am hoping I can just check the box to keep the library organized in my account, which is the admin, and let iTunes move it all; then delete the current libraries for each account and re-add from the new shared location. Will the iTunes organization process cause permissions issues, either by giving file access to my account only, or write permissions, or any other gotcha? I am having a hard time coming up with a smooth solution that won't break everything and cause me to have mega duplicates or access issues. I would prefer not to do any XML library file editing if possible. Am I dreaming?

  • Hyper-V with multiple physical networks

    - by Yaman
    I have Hyper-V on my laptop, which has a wireless and an Ethernet card; sometimes I connect using the wireless card and sometimes using the Ethernet cable. I am trying to configure Hyper-V so the virtual machine always has internet access. I have tried creating two virtual switches, one external on the wireless network card and the other external on the Ethernet card. What happens is that the wireless switch creates a bridge object under Windows 8's Network and Sharing Center > Network Connections, while the Ethernet one does not. Unfortunately, they do not work together as external switches; I have to set whichever one is connected to external, and I also have to go into the properties of the bridge and the virtual Ethernet adapter to uncheck and recheck components like:

        - Client for Microsoft Networks
        - Deterministic Network Enhancer
        - VMware Bridge Protocol
        - QoS Packet Scheduler
        - File and Printer Sharing for Microsoft Networks

    in order to make things work. Sometimes I keep the wireless network switch external, then go to another location (another wireless network) and it disconnects, and I have to reconfigure the switches. Is there a way to do the configuration once and have it keep working wherever I connect, whether over wireless or Ethernet, on any network with DHCP?
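
    For what it's worth, a sketch using the Hyper-V PowerShell module that ships with Windows 8 (adapter and VM names are examples; check yours with Get-NetAdapter). The idea is to create both external switches once, then only move the VM's virtual NIC to whichever uplink is live instead of rebuilding switches:

        New-VMSwitch -Name "External-Wired"    -NetAdapterName "Ethernet" -AllowManagementOS $true
        New-VMSwitch -Name "External-Wireless" -NetAdapterName "Wi-Fi"    -AllowManagementOS $true

        # When changing locations, point the VM at the connected uplink:
        Connect-VMNetworkAdapter -VMName "MyVM" -SwitchName "External-Wireless"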

  • Restreaming video from XSplit to multiple JustinTV/TwitchTV channels in different resolutions and bitrates

    - by lmojzis
    I have a really simple question, but the answer may be a little more complex, I guess. Okay, let's go. I have an application called XSplit Broadcaster (http://www.xsplit.com/). It supports streaming video over RTMP. Now what I want to do is this:

        XSplit --(720p RTMP)--> [MyTranscodingServer] --+--(720p)--> TwitchTV FirstChannel
                                                        +--(360p)--> TwitchTV SecondChannel

    Is there a simple way to do this? Additional info: both channels accept a standard RTMP stream on their RTMP endpoint, using either username/password or a stream key. The server operating system is GNU/Linux.
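
    One possible server-side shape, hedged: receive the feed with an RTMP server (nginx built with the rtmp module is a common choice, assumed here), then let ffmpeg pass the 720p stream through untouched and transcode a 360p copy, pushing both to Twitch. The ingest URL pattern is Twitch's standard rtmp://live.twitch.tv/app/<stream key>; the keys and application path below are placeholders:

        # Output 1: 720p passed through as-is. Output 2: re-encoded to 360p.
        ffmpeg -i rtmp://localhost/live/xsplit \
          -c copy -f flv rtmp://live.twitch.tv/app/CHANNEL1_STREAM_KEY \
          -c:v libx264 -preset veryfast -s 640x360 -b:v 800k -c:a copy \
          -f flv rtmp://live.twitch.tv/app/CHANNEL2_STREAM_KEY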

  • Routerless, house-wired network using multiple powerline adapters

    - by Cliff Arnell
    Related to the "old days" of one Ethernet cable tapped with Ts for each monitor, my question might be very simple... or not. I have an over-the-air internet provider, with a dish and powered transceiver, and Cat5 cable out of the provider-supplied modem. I'm presently connecting the output of the modem into my wireless router, which sends the internet signal all over the house. Standard stuff, I believe. My question: can I just connect the output of the modem into one powerline adapter, and tie all my equipment, such as computer, printer, laptop, TiVo recorder, etc., into local powerline adapters (one each) located near each device, resulting in a house-wired network and no router? I'm bothered by the idea that my over-the-air provider might be using something in my router to establish and keep my IP connection alive. I did have to configure the router for my IP, a router which, in my proposed scenario, would no longer exist. Thank you for your help.

  • dhclient append settings from multiple DHCP servers

    - by Brian
    I have a server with two interfaces connected to two separate networks, using DHCP for both. When dhclient writes /etc/resolv.conf, I would like it to append settings that aren't already there. For instance, if I receive from one DHCP server:

        nameserver 10.0.0.1
        search one.mydomain.com

    and from another:

        nameserver 10.1.1.254
        search two.mydomain.com

    then resolv.conf should look like this:

        search one.mydomain.com two.mydomain.com
        nameserver 10.0.0.1
        nameserver 10.1.1.254

    At the moment, it seems the last dhclient to run overwrites whatever was there. I know I can preconfigure settings in dhclient.conf using supersede or append, but then I have to hard-code the values. I've scoured the man page for dhclient, but it seems dhclient prefers to work alone (i.e. not in conjunction with any other dhclients)... or am I missing something?
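
    A hedged sketch of one way to get merge behavior: dhclient-script calls a shell function named make_resolv_conf() to rewrite /etc/resolv.conf, and an enter hook may redefine it. Something like this in /etc/dhcp/dhclient-enter-hooks.d/ (path varies by distro) appends nameservers instead of replacing the file; merging the search line is omitted for brevity:

        # $new_domain_name_servers is set by dhclient before hooks run.
        make_resolv_conf() {
            for ns in $new_domain_name_servers; do
                grep -q "^nameserver $ns$" /etc/resolv.conf 2>/dev/null \
                    || echo "nameserver $ns" >> /etc/resolv.conf
            done
        }

    On Debian and Ubuntu, the resolvconf package exists to do exactly this kind of merging across multiple interfaces, which may be less fragile than a hand-rolled hook.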

  • Session persistence between multiple Rails / Unicorn servers with Redis as session_store on AWS

    - by d_ethier
    I've got two nginx EC2 instances pointing to two Unicorn EC2 instances in a round-robin load-balanced configuration. The two nginx instances sit behind the Elastic Load Balancer. Both Unicorn instances have a Redis session_store configured, in a master/slave configuration with an Elastic IP attached to the master. I've tried configuring session stickiness on the load balancer, but sessions are lost on each page refresh. I'm using the redis-store gem for the session_store configuration and Redis support. Anyone have any ideas as to why this is not working?
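
    For reference, a sketch of the session store wiring this setup usually needs (option names vary across redis-store versions; the IP is a placeholder for the master's Elastic IP). One frequent cause of sessions vanishing on every refresh is the two app servers not sharing the same cookie secret and session key, so a cookie signed by one Unicorn is rejected by the other; both servers must agree on all of the following:

        # config/initializers/session_store.rb (Rails 3 style)
        MyApp::Application.config.session_store :redis_store,
          :servers => "redis://10.0.0.10:6379/0/session",
          :key     => "_myapp_session",
          :expire_after => 90.minutes
        # ...and config.secret_token must be identical on both app servers.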

  • multiple definition in header file

    - by Jérôme
    Here is a small code example about which I'd like to ask a question.

    complex.h:

        #ifndef COMPLEX_H
        #define COMPLEX_H

        #include <iostream>

        class Complex
        {
        public:
            Complex(float Real, float Imaginary);
            float real() const { return m_Real; };
        private:
            friend std::ostream& operator<<(std::ostream& o, const Complex& Cplx);
            float m_Real;
            float m_Imaginary;
        };

        std::ostream& operator<<(std::ostream& o, const Complex& Cplx) {
            return o << Cplx.m_Real << " i" << Cplx.m_Imaginary;
        }

        #endif // COMPLEX_H

    complex.cpp:

        #include "complex.h"

        Complex::Complex(float Real, float Imaginary) {
            m_Real = Real;
            m_Imaginary = Imaginary;
        }

    main.cpp:

        #include "complex.h"
        #include <iostream>

        int main()
        {
            Complex Foo(3.4, 4.5);
            std::cout << Foo << "\n";
            return 0;
        }

    When compiling this code, I get the following error:

        multiple definition of operator<<(std::ostream&, Complex const&)

    I've found that making this function inline solves the problem, but I don't understand why. Why does the compiler complain about multiple definitions? My header file is guarded (with #define COMPLEX_H). And, if it complains about the operator<< function, why not complain about the public real() function, which is defined in the header as well? And is there another solution besides using the inline keyword?
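
    A sketch of the usual explanation: include guards only prevent double inclusion within one translation unit, but complex.cpp and main.cpp are separate translation units, and each compiles its own copy of operator<<; the linker then sees two definitions of the same non-inline function, violating the one definition rule. real() escapes because a function defined inside the class body is implicitly inline. The inline-free alternative is to leave only a declaration in the header and define the operator once in a .cpp file:

        // complex.h: declaration only (the friend declaration stays as-is)
        std::ostream& operator<<(std::ostream& o, const Complex& Cplx);

        // complex.cpp: the single definition the linker will see
        std::ostream& operator<<(std::ostream& o, const Complex& Cplx) {
            return o << Cplx.m_Real << " i" << Cplx.m_Imaginary;
        }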

  • Nginx Multiple Domains

    - by showFocus
    I am trying to add a second virtual host to nginx. When I go to the new domain, it redirects to the old one. I have tried restarting nginx and rebooting the server. Has anyone come across this before? Care to share?

    File nginx.conf:

        user www-data www-data;
        worker_processes 4;

        events {
            worker_connections 1024;
        }

        http {
            include mime.types;
            default_type application/octet-stream;
            sendfile on;
            tcp_nopush on;
            tcp_nodelay off;
            keepalive_timeout 5;
            gzip on;
            gzip_comp_level 2;
            gzip_proxied any;
            gzip_types text/plain text/css application/x-javascript text/xml
                       application/xml application/xml+rss text/javascript;
            include /usr/local/nginx/sites-enabled/*;
        }

    File ../sites-enabled/domain1.co.uk:

        server {
            listen 80;
            server_name www.domain1.co.uk;
            rewrite ^/(.*) http://domain1.co.uk/$1 permanent;
        }

        server {
            listen 80;
            server_name domain1.co.uk;
            access_log /home/me/public_html/domain1.co.uk/log/access.log;
            error_log /home/me/public_html/domain1.co.uk/log/error.log;

            location / {
                root /home/me/public_html/domain1.co.uk/public/;
                index index.php index.html;
                # WordPress supercache & permalinks.
                include /usr/local/nginx/conf/wordpress_params.super_cache;
            }

            # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                include /usr/local/nginx/conf/fastcgi_params;
                fastcgi_param SCRIPT_FILENAME /home/me/public_html/domain1.co.uk/public/$fastcgi_script_name;
            }
        }

    File ../sites-enabled/domain2.co.uk:

        server {
            listen 80;
            server_name www.domain2.co.uk;
            rewrite ^/(.*) http://domain2.co.uk/$1 permanent;
        }

        server {
            listen 80;
            server_name domain2.co.uk;
            access_log /home/me/public_html/domain2.co.uk/log/access.log;
            error_log /home/me/public_html/domain2.co.uk/log/error.log;

            location / {
                root /home/me/public_html/domain2.co.uk/public/;
                index index.php index.html;
                # Basic version of WordPress parameters, supporting nice permalinks.
                # include /usr/local/nginx/conf/wordpress_params.regular;
                # Advanced version of WordPress parameters supporting nice permalinks
                # and WP Super Cache plugin
                include /usr/local/nginx/conf/wordpress_params.super_cache;
            }

            # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                include /usr/local/nginx/conf/fastcgi_params;
                fastcgi_param SCRIPT_FILENAME /home/me/public_html/domain2/public/$fastcgi_script_name;
            }
        }
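
    A hedged debugging note rather than a sure fix: when no server_name matches the Host header, nginx serves the request from the first server block defined for that port, which can look exactly like "the new domain redirects to the old one" if DNS or the include ordering is off. Making the fallback explicit rules that out (on nginx older than 0.8.21 the flag is spelled "default" instead of "default_server"):

        server {
            listen 80 default_server;
            server_name _;
            return 444;   # close the connection for unmatched hosts
        }

    It is also worth retesting in a fresh browser profile: the rewrite ... permanent directives issue 301s, which browsers cache aggressively, so an earlier misconfiguration can appear to persist after the config is fixed.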

  • Printer offline until spooler service is restarted multiple times

    - by Zian Choy
    When I try to print from my ThinkPad to a printer shared through a Windows 7 HomeGroup hosted by a desktop computer, I often have to restart the Print Spooler service several times before the job will go through. In particular, this problem occurs when the desktop is in sleep mode when the print job is started and is then brought out of sleep mode after the print job has been kicked off. Both computers are running Windows 7 32-bit edition with the latest patches. I have tried the following with no improvement:

        - the SNMP registry hack (see MS KB for details)
        - following the instructions in a blog post entitled "Sharing Printers on Vista 64-bit"
        - looking at "Printer offline until spooler service is restarted"

  • How to maintain long-lived python projects w.r.t. dependencies and python versions ?

    - by Gyom
    Short version: how can I get rid of the multiple-versions-of-Python nightmare?

    Long version: over the years, I've used several versions of Python and, what is worse, several extensions to Python (e.g. pygame, pylab, wxPython...). Each time it was on a different setup, with different OSes, sometimes different architectures (like my old PowerPC Mac). Nowadays I'm using a Mac (OS X 10.6 on x86-64), and it's a dependency nightmare each time I want to revive a script older than a few months. Python itself already comes in three different flavours in /usr/bin (2.5, 2.6, 3.1), but I had to install 2.4 from MacPorts for pygame, and something else (I cannot remember what) forced me to install all three others from MacPorts as well, so at the end of the day I'm the happy owner of seven (!) instances of Python on my system. But that's not the problem. The problem is that none of them has the right (i.e. same) set of libraries installed, some of them are 32-bit, some 64-bit, and now I'm pretty much lost. For example, right now I'm trying to run a three-year-old script (not written by me) which used matplotlib/numpy to draw a real-time plot within a rectangle of a wxWidgets window. But I'm failing miserably: py26-wxpython from MacPorts won't install, stock Python has wxWidgets included but also has some conflict between 32 bits and 64 bits, and it doesn't have numpy... what a mess! Obviously, I'm doing things the wrong way. How do you usually cope with all this chaos?
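
    One commonly recommended way out, sketched under the assumption that virtualenv is installed for whichever interpreter you pick: give each project an isolated environment with its own site-packages, so reviving an old script means recreating one small environment instead of reconciling seven interpreters:

        # Tie the environment to a specific interpreter (path is an example):
        virtualenv --python=/opt/local/bin/python2.6 ~/envs/oldscript
        source ~/envs/oldscript/bin/activate
        # Install only what this one script needs:
        pip install numpy matplotlib

    Packages with heavy native GUI bindings (wxPython in particular) often still need a platform installer or MacPorts, so this tames the library chaos more than the interpreter chaos.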

  • Hosting multiple websites from home

    - by dean nolan
    I have just been accepted into Microsoft's WebsiteSpark program, which I mainly got for the tools: Visual Studio, Blend. I also have a few of my own websites, personal and a couple of business ones, and I work freelance, so sometimes I would like a place to put up a demo of a client's project. The websites I currently have are all on different hosting providers and domain registrars. WebsiteSpark comes with Windows Server 2008 and SQL Server 2008. It would be really advantageous for me to have all this in one place, where I have complete control over the database and the environment. So I am thinking, over the next 4-6 months, of migrating all this to my own server that I will host from home, or maybe set up at home and then move into a proper datacentre. I was wondering what steps I should take and what to be aware of, specifically:

        1. Having all these different websites on one computer and having each URL go to the proper place.
        2. Cost effectiveness: having the server at home as opposed to a datacentre. Most solutions I see charge over £1000 a month to have a machine in a datacentre. This is mostly for my own ease of management, as the shared hosting I currently have is very limited configuration-wise. Would getting a server in-house be beneficial for later upgrading to the cloud? What measures should I take with my ISP?

    I know this is a lot I've asked, but even links to good articles would be good. Thanks.

  • NAT Policy Inbound Source Problem on SonicWall TZ-210 with Multiple DSL Lines

    - by HK1
    We recently added three more DSL connections to our SonicWall TZ-210. My NAT Policies work fine as long as I leave them set with an inbound interface of X1, which hosts our original DSL connection. However, I'd like to change some of the NAT Policies to use inbound source/interface X2, X3, X4 or Any. In my initial tests, when I change one of the policies to use an inbound interface of X2, that port forward policy does not work at all. Traffic never makes it to the internal destination. What could be the problem?

  • Switch Before Firewall / Router - Multiple public IPs

    - by rii
    I currently have a 10 Mbit full-duplex circuit connected to a small unmanaged switch, which then connects to a SonicWall firewall/router. I have several public IP addresses (a /28) that are assigned to several devices in my setup. Now, the problem is that the small switch was lent to me and needs to be returned. I have replaced it with several other switches, but for some reason any other switch I use causes the network to become extremely slow. I believe this is a problem with the autonegotiation of these switches, so I am thinking of purchasing a small managed switch (Cisco 300 series) and making the receiving port on the switch explicitly 10 Mbit full duplex to see if this works. My question is: this being a managed switch that needs an IP, will I still be able to run my public IPs through it? Say the circuit has 70.80.4.1-7; will I still be able to assign 70.80.4.2 to my firewall and 70.80.4.3 to my router connected to some other port on the switch? Will I have to assign the switch a public IP address in this range as well for it to "route" to those other devices, or does the switch not care what IPs go through it while operating as a layer 2 switch? Any help would be greatly appreciated. Thanks in advance!
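
    On the IP question: a layer-2 switch forwards frames by MAC address and does not care what IP subnets pass through it; the management IP is only for administering the switch and can sit on a separate subnet or management VLAN entirely. For the speed problem, a sketch of forcing the port (from memory; exact Cisco Small Business 300 CLI syntax may differ, so treat this as hypothetical):

        ! Force the circuit-facing port to 10 Mbit full duplex
        interface GigabitEthernet1
         no negotiation
         speed 10
         duplex full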

  • SELinux - Allow multiple services access to same /home/dir

    - by Mike Purcell
    I currently have SELinux enabled and have been able to configure Apache to allow access to /home/src/web with a chcon command granting the httpd_sys_content_t type. But now I am trying to serve the rsyslogd.conf file from the same directory, and every time I start rsyslogd I see an entry in my audit log saying that rsyslogd was denied access. My question is: is it possible to grant two applications the ability to access the same directory, while still keeping SELinux enabled?

    Current perms on /home/src:

        drwxr-xr-x. src src unconfined_u:object_r:httpd_sys_content_t:s0 src

    Audit log message:

        type=AVC msg=audit(1349113476.272:1154): avc: denied { search } for pid=9975 comm="rsyslogd" name="/" dev=dm-2 ino=2 scontext=unconfined_u:system_r:syslogd_t:s0 tcontext=system_u:object_r:home_root_t:s0 tclass=dir
        type=SYSCALL msg=audit(1349113476.272:1154): arch=c000003e syscall=2 success=no exit=-13 a0=7f9ef0c027f5 a1=0 a2=1b6 a3=0 items=0 ppid=9974 pid=9975 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="rsyslogd" exe="/sbin/rsyslogd" subj=unconfined_u:system_r:syslogd_t:s0 key=(null)

    -- Edit --

    Came across a post which is sort of what I am trying to accomplish. However, when I viewed the list of allowed sebool params, the only one relating to syslog was syslogd_disable_trans (SELinux Service Protection). It seems I could keep the current SELinux type on the /home/src/ dir and set the syslogd_disable_trans bool to false. I wonder if there is a better approach?
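
    A hedged sketch of the generic approach when no boolean fits. Note the AVC denial is a search on home_root_t, the /home directory itself, so rsyslogd cannot even traverse into /home regardless of the label on /home/src. A small local policy module generated from the actual denials grants exactly what was denied while leaving SELinux enforcing; review the generated .te file before loading it:

        grep rsyslogd /var/log/audit/audit.log | audit2allow -M rsyslog_home
        semodule -i rsyslog_home.pp

    Relabeling /home/src with a more generic type that several domains may read (public_content_t is the usual candidate) is the other route, but whether syslogd_t is allowed to read it should be verified with sesearch before switching away from httpd_sys_content_t.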
