Search Results

Search found 841 results on 34 pages for 'distribute'.


  • Internet setup for my office

    - by prakash
    We have two internet connections to our office, and our current setup is like this: the internet connections require PPPoE log-in, so I take each cable and plug it into a Wi-Fi router, configure the router to log in over PPPoE, and then run a cable from the router to a switch to distribute the internet throughout the office. The problem with this setup is that it is really hard to monitor; I am not able to see who is hogging internet usage or what he or she is actually using it for. Apart from this, we also have a NAS, which is routed through another switch. Could someone please throw a little light on how I can restructure this setup for easier monitoring and better transparency? Each WAN router is connected to a different switch and is distributed to users accordingly. We have around 40 users in the office. We want to set up a single Linux box to which I can connect the two WAN connections and from there distribute them to all our users. I'm looking for a solution where we do not have to invest more than buying a single PC and a couple of NICs.
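    For illustration, a rough sketch of what such a single Linux box might look like, assuming the two PPPoE sessions come up as ppp0 and ppp1 and the office LAN sits behind eth0 as 192.168.1.0/24 (all interface names and addresses here are assumptions, not details from the setup above):
      # balance outbound traffic across both PPPoE links (note: a dead link is not detected automatically)
      ip route replace default scope global nexthop dev ppp0 weight 1 nexthop dev ppp1 weight 1
      # NAT the office LAN out of whichever link gets picked
      iptables -t nat -A POSTROUTING -o ppp0 -j MASQUERADE
      iptables -t nat -A POSTROUTING -o ppp1 -j MASQUERADE
      # one FORWARD rule per client gives per-host byte counters for monitoring
      iptables -A FORWARD -s 192.168.1.10 -j ACCEPT
      iptables -L FORWARD -v -n   # read the per-rule byte/packet counters
    Counters like these (or a tool such as iftop or ntop on the LAN interface) cover the "who is hogging the bandwidth" question; seeing what the traffic actually is usually means adding a proxy or flow export on the same box.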

    Read the article

  • Redaction in AutoVue

    - by [email protected]
    As the trend to digitize all paper assets continues, so does the push to digitize all the processes around those assets. One such process is redaction: removing sensitive or classified information from documents. While for some this may conjure up thoughts of old CIA documents filled with nothing but blacked-out pages, there are actually many uses for redaction today beyond military and government. Many companies need to remove names, phone numbers, social security numbers, credit card numbers, etc. from documents that are being scanned in and/or released to the public or to less privileged users; insurance companies, banks and legal firms are a few examples. The process of digital redaction actually isn't that far from the old paper method:
    Step 1. Find a folder with a big red stamp on it labeled "TOP SECRET"
    Step 2. Make a copy of that document, since some folks still need to access the original contents
    Step 3. Black out the text or pages you want to hide
    Step 4. Release or distribute this new 'redacted' copy
    So where does a solution like AutoVue come in? Well, we've really been doing all of these things for years!
    1. With AutoVue's VueLink integration and iSDK, we can integrate with virtually any content management system and view documents of almost any format with a single click. Finding the document and opening it in AutoVue: CHECK!
    2. With AutoVue's markup capabilities, adding filled boxes (or other shapes) around certain text is a no-brainer. You can even leverage AutoVue's powerful APIs to automate the addition of markups over certain text or pre-defined regions. Black out the text you want to hide: CHECK!
    3. With AutoVue's conversion capabilities, you can 'burn in' the comments into a new file, either as a TIFF, JPEG or PDF document. Burning in the redactions avoids slip-ups like the recent (well-publicized) TSA one. Through our tight integrations, the newly created copies can be checked directly into the content management system with no manual intervention. Make a copy of that document: CHECK!
    4. Again, leveraging AutoVue's integrations, we can now define rules in the system based on a user's privileges. An 'authorized' user wishing to view the document from the repository will get exactly that: no redactions. An 'unauthorized' user, when requesting to view that same document, can be redirected to open the redacted copy of the same document. Release or distribute the new 'redacted' copy: CHECK!
    See this movie (WMV format, 2 mins 20 secs, no audio) for a quick illustration of AutoVue's redaction capabilities. It shows how redactions can be added based on text searches, manual input or pre-defined templates/regions. Let us know what you think in the comments. And remember: this is all in our flagship AutoVue product, with no additional software required!

    Read the article

  • Q&A: Drive Online Engagement with Intuitive Portals and Websites

    - by kellsey.ruppel
    We had a great webcast yesterday and wanted to recap the questions that were asked throughout.
    Can ECM distribute content to 3rd party sites?
    ECM, which is now called WebCenter Content, can distribute content to 3rd party sites via several means, as well as via SSXA (Site Studio for External Applications).
    Will you be able to provide more information on these means and SSXA?
    If you have an existing JSP application, you can add the SSXA libraries to the IDE where your application was built (JDeveloper, for example). You can then drop some code into your 3rd party site/application that can both create and pull dynamically contributable content out of the Content Server for inclusion in your pages. If the 3rd party site is not a JSP application, there is also the option of leveraging two Site Studio (not SSXA) specific custom WebCenter Content services to pull Site Studio XML content into a page. More information on SSXA can be found here: http://docs.oracle.com/cd/E17904_01/doc.1111/e13650/toc.htm
    Is there another way than a "gadget" to integrate applications (like a loan simulator) in WebCenter Sites?
    There are some other ways, such as leveraging the Pagelet Producer, which is a core component of WebCenter Portal. Oracle WebCenter Portal's Pagelet Producer (previously known as Oracle WebCenter Ensemble) provides a collection of useful tools and features that facilitate dynamic pagelet development. A pagelet is a reusable user interface component. Any HTML fragment can be a pagelet, but pagelet developers can also write pagelets that are parameterized and configurable, dynamically interact with other pagelets, and respond to user input. Pagelets are similar to portlets, but while portlets were designed specifically for portals, pagelets can be run on any web page, including within a portal or other web application. Pagelets can be used to expose platform-specific portlets in other web environments. More on the Pagelet Producer can be found here: http://docs.oracle.com/cd/E23943_01/webcenter.1111/e10148/jpsdg_pagelet.htm#CHDIAEHG
    Can you describe the mechanism available to achieve the context transfer of content?
    The primary goal of context transfer is to provide a uniform experience to customers as they transition from one channel to another. In the use case discussed in the webcast, for instance, a customer moves from the .com marketing website to the self-service site, where the customer wants to manage his account information. If WebCenter Sites was able to identify and segment that customer into a specific category where he is a potential target for some promotions, the same promotions should be targeted to him in the self-service site, which is managed by WebCenter Portal. The context transfer can be achieved by calling the WebCenter Sites Engage Server APIs, which will identify the segment that the customer has been bucketed into. Then, through REST APIs, WebCenter Portal can request from WebCenter Sites the specific content that needs to be targeted at the customer for the identified segment. While this can be achieved through custom integration today, Oracle is looking into productizing this integration in future releases.
    How can context be transferred from WebCenter Sites (marketing site) to WebCenter Portal (online services)?
    The WebCenter Portal Personalization server can call into the WebCenter Sites Engage Server to identify the segment for the user and then, through REST APIs, request the specific content that needs to be surfaced in the Portal.
    Still have questions? Leave them in the comments section! And you can catch a replay of the webcast here.

    Read the article

  • How do I deploy building blocks (quick parts) for Microsoft Outlook 2007?

    - by now
    I want to deploy some building blocks for Microsoft Outlook 2007. Microsoft has put up a poor solution at http://office.microsoft.com/en-us/outlook/HA102086531033.aspx#4 that asks you to save a template. That solution would require you to distribute that template to all the clients. An optimal solution would allow you to put the template containing the building blocks somewhere on the network and simply use the "Workgroup building blocks path" group policy setting for shared paths in Microsoft Office 2007. Sadly, Outlook doesn't respect that policy. Also, the solution described in the article above doesn't work: step 4 asks you to save the template as a Word template after first asking you to save it as an Outlook template. It seems that they copied and pasted the steps from the Word article and forgot to check whether it worked (and to adjust the steps accordingly). Anyway, does anyone have any suggestions for how to distribute the building blocks without distributing NormalEmail.dotm (which would overwrite the clients' own building blocks each time it is updated)? Thanks!

    Read the article

  • SQL Compact error: Unable to load DLL 'sqlceme35.dll'. The specified module could not be found

    - by Ciaran Bruen
    Hi - I'm developing a WinForms application using Visual Studio 2008 C# that uses a SQL Compact 3.5 database on the client. The clients will most likely be 32-bit XP or Vista machines. I'm using a standard Windows Installer project that creates an msi file and setup.exe to install the app on a client machine. I'm new to SQL Compact, so I haven't had to distribute a client database like this before now. When I run the setup.exe (on a new Windows XP 32-bit machine with SP2 and IE 7) it installs fine, but when I run the app I get the error below:
    Unable to load DLL 'sqlceme35.dll'. The specified module could not be found
    I spent a few hours searching this error already, but all I can find are issues relating to installing on 64-bit Windows, none relating to the normal 32-bit that I'm using. The installer copies all the dependent files that it found into the specified install directory, including the System.Data.SqlServerCe.dll file (assembly version 3.5.1.0). The database file is in a directory called 'data' off the application directory, and the connection string for it is
    <add name="Tickets.ieOutlet.Properties.Settings.TicketsLocalConnectionString" connectionString="Data Source=|DataDirectory|\data\TicketsLocal.sdf" providerName="Microsoft.SqlServerCe.Client.3.5" />
    Some questions I have:
    - should the app be able to find the DLL if it's in the same directory, i.e. local to the app, or do I need to install it in the GAC? (If so, can I use the Windows Installer to install a DLL in the GAC?)
    - is there anything else I need to distribute with the app in order to use a SQL Compact database?
    - there are other DLLs too, such as MS interop for exporting data to Excel on the client. Do these need to be installed in the GAC, or will locating them in the application directory suffice?
    TIA, Ciaran.

    Read the article

  • Are there any applications written in the Io programming language? (Or, distributing Io applications

    - by Rayne
    I've recently become interested in prototype-based OOP, and I've been playing with Io and Ioke. Distributing an application with Ioke is simple: it's on the JVM. Need I say more? However, I'm absolutely stumped as to how one would distribute an Io application, especially on Windows. It's not like you can have end-users compile Io to run your application. I was actually shocked that Io has gone for 8 years without forming some sort of standard for things like distribution. Ruby has gems, Java has jars, and so on. The worst thing about it is that I can't find a single application written in Io to maybe steal distribution ideas from. Maybe I suck at Google searching (Io is a horrible search term, by the way ;P). Is there any sort of canonical way to distribute Io applications? Are there even any Io applications in existence, or am I just missing the point? I'm not sure if this should be community wiki or not. If you think it should, comment and let me know.

    Read the article

  • PLKs and Web Service Software Factory

    - by Nix
    We found a bug in Web Service Software Factory; a description can be found here. There have been no updates on it, so we decided to download the code and fix it ourselves. It is a very simple bug and we patched it with maybe 3 lines of code. However, we have now tried to repackage it and use it, and are finding that this is seemingly an impossible process. Can someone please explain the process of PLKs to me? I have read all about them but still don't understand what is really required to distribute a VS package. I was able to get it to load and run using a PLK obtained from here, but I am assuming that you have to be a partner to get a functional PLK that will be recognized on other people's systems? Every time I try to install this on a different computer I get a "Package Load Failure". Is the reason I am getting errors that I am not using a partner key? Is there any other way around this? For instance, is there any way we can have an "internal" VS package that we can distribute?
    Edit: Files I had to change to get it to work:
    - First run devenv PostInstall.proj
    - Generate your PLKs and replace ##Package PLK## (.resx files). Just note that the package version is not the class name but is "Web Service Software Factory: Modeling Edition", and you need to remove the new lines from the key.
    - ProductDefinitionRegistryFragment.wxi, line 1252 (update the version to whatever version you used in the PLK)
    - Uncomment all // [VSShell::ProvideLoadKey("Standard", constants in the .tt files.

    Read the article

  • How to override the default init.tcl

    - by Sean Murphy
    I'm working on a project where I want to make use of Tcl as the command interpreter. I have a working C library object which I can load from within the Tcl shell, but my problem is finding a way to do this automatically while starting a tclsh. Essentially my ultimate goal is to be able to run a script and have it load my library and run some initial startup Tcl code before dropping me back to the tclsh command prompt in interactive mode, e.g.
    tclsh -f myscript.tcl --then-switch-to-interactive
    or
    EXPORT TCLINIT=myscript.tcl tclsh
    The basic goal is to avoid having to distribute tclsh and instead rely on users' local Tcl installations. All I would like to distribute is my library, a startup script and a shell command to launch the tclsh with the library preloaded. I've tried using the environment variables TCLINIT and TCL_LIBRARY, but they seem to have no effect. The only workable solutions I've found so far are to add "source myscript.tcl" to either the end of /usr/share/tcltk/tcl8.5.init.tcl or ~/.tclshrc. However, both of these "solutions" are imperfect, as they require modifying the default user workspace. It strikes me that there must be a way to handle this in Tcl, but my research so far hasn't yielded anything. Does anyone have any suggestions?

    Read the article

  • Licensing iPhone apps per user in existing system

    - by Alxandr
    I've been asked by my job to write an iPhone app for an existing system for managing work tasks. This system is proprietary and costs money, so in order to log in you need to be a customer. Now, I've got two questions about the legality of licensing iPhone apps with this system:
    1. My company would like to be able to sell the app for profit, not as a one-time payment but as a subscription fee added to the already existing one. Is it legal for us (according to the terms of distributing an iPhone app on the Apple App Store) to do this? That way we'd just add another field to the users database saying whether or not iPhone access is enabled for them, and distribute the app as a free app on the App Store.
    2. If the previous option is not legal, we'd like to just create a free app and distribute it as part of the existing system. In other words, no extra fee for users of the iPhone app, but still free distribution through the App Store.
    Since our company is not American and does not have an office in the U.S. at all, an enterprise account is not an option. Please let me know if there is anything wrong with either of the above approaches.

    Read the article

  • Licensing for the undecided

    - by Jasper
    I am creating a game in C++. I am not sure yet how I will distribute it, though I am pretty sure that I won't be asking money for it. I am looking into licensing, and I wondered if there is a license that is suited for the undecided like me. My current releases (which are really, really early versions of the game, far from full functionality) are executable only. However, I am actually thinking that I might release the source under an open source license. For now, I am the only contributor, so that would be no problem, as I only need my own permission to move to a less restrictive license. However, once I allow other people to contribute, I would need all of their permissions to do so (right?). So I was wondering if there is a license that lets me distribute the game executable-only for now, but will let me switch to a less restrictive license if I want. Basically I need a license in which contributors give permission up front to switch to a less restrictive license. Does anybody know of a license (or other construct) that would allow me to do this?

    Read the article

  • iPhone application purchase verification -- possible?

    - by Sedate Alien
    The iPhone 3.0 SDK's StoreKit.framework provides support for in-app purchases to give the user additional content, functionality and so on. It is possible for an app to send the transactionReceipt property of SKPaymentTransaction objects to the developer's server for verification of a successful purchase before granting service. Is there any analogous SDK to verify the initial application purchase itself? A developer who wishes for their server to provide services only to genuine (i.e. not pirated) applications without using IAP could do so by verifying the application in this manner, e.g. ensuring that only users with a correct transactionReceipt are catered for. I understand that this approach would still be vulnerable to replay attacks; a dedicated group of pirates could share a valid transactionReceipt. However, my server provides a consumable service to users, i.e. once they've connected and done the work, it needn't work a second time, so replay attacks are nullified. The service that my app provides is relatively niche. I could distribute it on the App Store as a free application that requires at least one IAP to do anything useful, but I am led to believe that this would be a very unpopular move among users, as it would be considered misleading. If I distribute it as a paid app, I do not know how to ensure that only genuine apps can access the web service. This is important, as every invocation of the web service costs me money! What are my options?

    Read the article

  • Why should I install Python packages into `~/.local`?

    - by Matthew Rankin
    Background
    I don't develop using OS X's system-provided Python versions (on OS X 10.6 that's Python 2.5.4 and 2.6.1). I don't install anything in the site-packages directory for the OS-provided versions of Python. (The only exception is Mercurial installed from a binary package, which installs two packages in the Python 2.6.1 site-packages directory.) I installed three versions of Python, all using the Mac OS X installer disk image:
    - Python 2.6.6
    - Python 2.7
    - Python 3.1.2
    I don't like polluting the site-packages directory of my Python installations, so I only install the following five base packages in the site-packages directory (for the actual method/commands used to install these, see SO question 4324558):
    - setuptools/ez_setup
    - distribute
    - pip
    - virtualenv
    - virtualenvwrapper
    All other packages are installed in virtualenvs. I am the only user of this MacBook.
    Questions
    Given the above background, why should I install the five base packages in ~/.local? Since I'm installing these base packages into the site-packages directories of Python distributions that I've installed, I'm isolated from OS X's Python distributions. Using this method, should I be concerned about Glyph's comment that other things could potentially break (see his comment below)? Again, I'm only interested in where to install those five base packages.
    Related Questions/Info
    I'm asking because of Glyph's comment on my answer to SO question 4314376, which stated: "NO. NEVER EVER do sudo python setup.py install whatever. Write a ~/.pydistutils.cfg that puts your pip installation into ~/.local or something. Especially files named ez_setup.py tend to suck down newer versions of things like setuptools and easy_install, which can potentially break other things on your operating system." Previously, I asked "What's the proper way to install pip, virtualenv, and distribute for Python?". However, no one answered the "why" of using ~/.local.
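    For reference, a minimal sketch of what Glyph's ~/.local suggestion might look like in practice. The package names and file contents are only an example, and on a framework build of OS X Python the per-user location may be ~/Library/Python/X.Y rather than ~/.local:
      # tell distutils-based installs to target the per-user site directory
      printf '[install]\nuser=1\n' > ~/.pydistutils.cfg
      # pip accepts the same idea per invocation
      pip install --user virtualenv virtualenvwrapper
      # confirm where the per-user site-packages actually lives for a given interpreter
      python -c "import site; print(site.USER_SITE)"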

    Read the article

  • IP failover with 2 nodes on different subnet: cannot ping virtual IP from second node?

    - by quanta
    I'm going to set up redundant failover for Redmine:
    - another instance was installed on the second server without problems
    - MySQL (running on the same machine as Redmine) was configured for master-master replication
    Because the servers are in different subnets (192.168.3.x and 192.168.6.x), it seems that VIPArip is the only choice.
    /etc/ha.d/ha.cf on node1:
      logfacility none
      debug 1
      debugfile /var/log/ha-debug
      logfile /var/log/ha-log
      autojoin none
      warntime 3
      deadtime 6
      initdead 60
      udpport 694
      ucast eth1 node2.ip
      keepalive 1
      node node1
      node node2
      crm respawn
    /etc/ha.d/ha.cf on node2:
      logfacility none
      debug 1
      debugfile /var/log/ha-debug
      logfile /var/log/ha-log
      autojoin none
      warntime 3
      deadtime 6
      initdead 60
      udpport 694
      ucast eth0 node1.ip
      keepalive 1
      node node1
      node node2
      crm respawn
    crm configure show:
      node $id="6c27077e-d718-4c82-b307-7dccaa027a72" node1
      node $id="740d0726-e91d-40ed-9dc0-2368214a1f56" node2
      primitive VIPArip ocf:heartbeat:VIPArip \
        params ip="192.168.6.8" nic="lo:0" \
        op start interval="0" timeout="20s" \
        op monitor interval="5s" timeout="20s" depth="0" \
        op stop interval="0" timeout="20s" \
        meta is-managed="true"
      property $id="cib-bootstrap-options" \
        stonith-enabled="false" \
        dc-version="1.0.12-unknown" \
        cluster-infrastructure="Heartbeat" \
        last-lrm-refresh="1338870303"
    crm_mon -1:
      ============
      Last updated: Tue Jun 5 18:36:42 2012
      Stack: Heartbeat
      Current DC: node2 (740d0726-e91d-40ed-9dc0-2368214a1f56) - partition with quorum
      Version: 1.0.12-unknown
      2 Nodes configured, unknown expected votes
      1 Resources configured.
      ============
      Online: [ node1 node2 ]
      VIPArip (ocf::heartbeat:VIPArip): Started node1
    ip addr show lo:
      1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
        inet 192.168.6.8/32 scope global lo
        inet6 ::1/128 scope host
          valid_lft forever preferred_lft forever
    I can ping 192.168.6.8 from node1 (192.168.3.x):
      # ping -c 4 192.168.6.8
      PING 192.168.6.8 (192.168.6.8) 56(84) bytes of data.
      64 bytes from 192.168.6.8: icmp_seq=1 ttl=64 time=0.062 ms
      64 bytes from 192.168.6.8: icmp_seq=2 ttl=64 time=0.046 ms
      64 bytes from 192.168.6.8: icmp_seq=3 ttl=64 time=0.059 ms
      64 bytes from 192.168.6.8: icmp_seq=4 ttl=64 time=0.071 ms
      --- 192.168.6.8 ping statistics ---
      4 packets transmitted, 4 received, 0% packet loss, time 3000ms
      rtt min/avg/max/mdev = 0.046/0.059/0.071/0.011 ms
    but I cannot ping the virtual IP from node2 (192.168.6.x) or from outside. Did I miss something?
    PS: you probably want to set IP2UTIL=/sbin/ip in the /usr/lib/ocf/resource.d/heartbeat/VIPArip resource agent script if you get something like this:
      Jun 5 11:08:10 node1 lrmd: [19832]: info: RA output: (VIPArip:stop:stderr) 2012/06/05_11:08:10 ERROR: Invalid OCF_RESKEY_ip [192.168.6.8]
    http://www.clusterlabs.org/wiki/Debugging_Resource_Failures
    Reply to @DukeLion: Which router receives RIP updates? When I start the VIPArip resource, ripd is run with the configuration file below (on node1), /var/run/resource-agents/VIPArip-ripd.conf:
      hostname ripd
      password zebra
      debug rip events
      debug rip packet
      debug rip zebra
      log file /var/log/quagga/quagga.log
      router rip
      !nic_tag
      no passive-interface lo:0
      network lo:0
      distribute-list private out lo:0
      distribute-list private in lo:0
      !metric_tag
      redistribute connected metric 3
      !ip_tag
      access-list private permit 192.168.6.8/32
      access-list private deny any

    Read the article

  • What can I do to speed up createrepo?

    - by jsd
    We are using a yum repository to distribute our software to our production instances. Unfortunately, createrepo is becoming a bottleneck, and we only have 469 packages in the repository.
      $ time createrepo /opt/tm-yum-repo
      Spawning worker 0 with 469 pkgs
      Workers Finished
      Gathering worker results
      Saving Primary metadata
      Saving file lists metadata
      Saving other metadata
      Generating sqlite DBs
      Sqlite DBs complete
      real 0m43.188s
      user 0m37.798s
      sys 0m1.296s
    What can I do to make it faster?
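    A hedged starting point, assuming a reasonably recent createrepo (the flags below exist in the 0.9.9-era releases that print the "Spawning worker" lines; check createrepo --help on the actual host):
      # reuse the previous run's metadata for unchanged rpms, keep a checksum cache,
      # and spread the work across more workers
      createrepo --update --cachedir /var/cache/createrepo --workers 4 /opt/tm-yum-repo
    Of these, --update usually makes the biggest difference when only a handful of the 469 packages change between runs.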

    Read the article

  • Fortigate 80C multi wan

    - by emamdouh
    I've got a Fortigate 80C and two internet lines from two separate ISPs. I'm trying to distribute sessions between both internet lines following http://docs-legacy.fortinet.com/cb/html/index.html#page/FOS_Cookbook/Install_advanced/routing_ecmp_basic.html, but it seems connections go through just one of the two internet lines. I have wan1, as it's configured first, and I could edit the static route table to use wan2 instead of wan1, but not both of the internet lines I have. Any ideas why this happens? Thanks in advance.

    Read the article

  • few questions on packet sniffer/analyzer

    - by user37652
    I have a few questions about packet sniffers. I'm using a ZyXEL P-600 series modem and a hub to distribute the internet connection.
    1. Can I use a packet sniffer here to determine if a user is downloading something?
    2. Can I determine if a user is downloading a file based on the modem alone (the lights blink faster)?
    3. Is there an application that I could use on the modem or the hub to limit or avoid direct downloads?
    Details: OS: Ubuntu 9.10 and Windows 7
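    As a rough illustration of the sniffing part (the interface name and client address are assumptions): because a hub repeats every frame to every port, a single machine running a capture tool with its NIC in promiscuous mode can see the other users' traffic.
      # dump traffic for one suspect client, without DNS lookups
      sudo tcpdump -i eth0 -n host 192.168.1.23
      # or watch live per-connection bandwidth to spot large downloads
      sudo iftop -i eth0
    Limiting downloads, as opposed to just observing them, is not something the modem or hub can do by itself; that normally needs a machine or router in the traffic path doing shaping or proxying.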

    Read the article

  • Replacement for MySQL proxy

    - by tostinni
    We have a standard MySQL master/slave replication setup for our database. In order to use the slave server we set up MySQL Proxy. However, we have been strongly discouraged from using it, as it is still alpha and not very well supported. Our application is built with Drupal 7, which doesn't use its slave database very effectively (see my related question on Drupal Answers). What tools could we use to act like MySQL Proxy and send SELECT queries to the slave server in order to distribute the load?

    Read the article

  • Sharing svn reposities between web servers

    - by Luke
    I have my Subversion hosting set up to be accessed through an Apache web server. Everything runs fine. Now I'd like to add another web server to distribute the load between the two. Is it safe to have my svn repositories accessed by two web servers at the same time? Does the normal FSFS Subversion repository type protect me enough, or do I need to switch to Berkeley DB for this sort of thing?

    Read the article

  • pull bandwidth from wireless WAN into local LAN

    - by cortical
    We have a local network (A) of about 50 computers connected via gigabit ethernet. We get a connection to the internet using two broadband connections to a backbone and distribute the bandwidth across the 50 computers using a Cisco 1811 router, but the bandwidth is not enough for everybody. There is a campus-wide wireless network (B) that has very high bandwidth. Is there a device or a way to set up multiple individual connections to network B and supply that bandwidth to our network A?

    Read the article

  • Postfix configuration for load balancing

    - by Naval
    Server A should take mail from a PHP script which is running on another machine. Server A should then distribute all this mail to its remaining 3 nodes (servers B, C and D, which relay the mail and have different IPs and domain names). Here is the architecture I want:
                                                                     --- B
      php script (for mail generation) ---- server A (postfix mta)  --- C
                                                                     --- D
    How should I configure the Postfix main.cf file for this? Please help me out with this.
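    One hedged sketch of a possible starting point on server A (the pool hostname is made up for this example): a relayhost written without square brackets makes Postfix do an MX lookup, so giving relaypool.example.com three equal-preference MX records that point at B, C and D lets DNS spread the outgoing mail across the three relays.
      # on server A: hand all outgoing mail to the relay pool
      postconf -e 'relayhost = relaypool.example.com'
      postfix reload
    Whether this counts as real load balancing depends on how even the spread needs to be; per-message weighting or failover policies would need more than a single relayhost line.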

    Read the article
