Search Results

Search found 23098 results on 924 pages for 'multiple processes'.

Page 622/924

  • Allow different headers on different servers using WFF

    - by Brian
    We've got multiple web servers configured in a cluster using Microsoft's Web Farm Framework. One of the things I like to do to help debugging is to create a header in IIS that identifies the server that handled the request. Unfortunately when I try to do this, WFF sets the headers to the same value on all the servers. Is there a way around this? I tried looking into using skipDirectives, but I can't find any documentation on it (other than a little bit showing how to use it to skip directories and bindings). If there is documentation on this, please link to it! I would like to be able to read up more on it in case I need to do other things as well.

    Read the article

  • Threading models when talking to hardware devices

    - by Fuzz
    When writing an interface to hardware over a communication bus, communications timing can sometimes be critical to the operation of a device. As such, it is common for developers to spin up new threads to handle communications. It can also be a terrible idea to have a whole bunch of threads in your system, and in the case that you have multiple hardware devices you may end up with many, many threads outside the control of the main application. Certainly it can be common to have two threads per device, one for reading and one for writing. I am trying to determine the pros and cons of the two different models I can think of, and would love the help of the Programmers community.
    Model 1: each device instance handles its own threads (or shares a thread for a communication device). A thread may exist for writing, and one for reading. Writes requested through the API are buffered and worked on by the writer thread. The read thread exists in the case of blocking communications, and uses callbacks to pass read data to the application. Timing of communications can be handled by the communications thread.
    Model 2: devices aren't given their own threads. Instead, read and write requests are queued/buffered. The application then calls a "DoWork" function on the interface and allows all reads and writes to take place and fire their callbacks. Timing is handled by the application, and the driver can request to be called at a given frequency.
    Pros for Model 1 include finer-grained control of timing at the communication level, at the expense of control over what's going on at the higher application level (which, for a realtime system, can be terrible). Pros for Model 2 include better control over the timing of the entire system for the application, at the expense of letting each driver handle its own business. If anyone has experience with these scenarios, I'd love to hear some ideas on the approaches used.
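
    For illustration, a minimal Python sketch of Model 1, assuming a hypothetical device object with blocking read()/write() calls: requested writes are buffered on a queue and drained by a per-device writer thread, while a reader thread passes incoming data to a callback. Model 2 would drop the threads and instead drain the same buffers inside a DoWork-style method that the application calls at its chosen rate.

        import queue
        import threading

        class DeviceIO:
            """Per-device I/O wrapper: one writer thread, one reader thread (Model 1)."""

            def __init__(self, device, on_read):
                self.device = device          # hypothetical object with blocking read()/write()
                self.on_read = on_read        # callback invoked with each chunk read
                self.write_queue = queue.Queue()
                self.running = True
                self.writer = threading.Thread(target=self._write_loop, daemon=True)
                self.reader = threading.Thread(target=self._read_loop, daemon=True)
                self.writer.start()
                self.reader.start()

            def request_write(self, data):
                """Called from the API; buffered and handled by the writer thread."""
                self.write_queue.put(data)

            def _write_loop(self):
                while self.running:
                    try:
                        data = self.write_queue.get(timeout=0.1)
                    except queue.Empty:
                        continue
                    self.device.write(data)   # communication timing is controlled here

            def _read_loop(self):
                while self.running:
                    data = self.device.read() # blocking read
                    if data:
                        self.on_read(data)    # pass read data back to the application

            def stop(self):
                self.running = False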

    Read the article

  • Configuring NVIDIA Quadro with Dell Precision M4600

    - by vsecades
    After a frustrating couple of weeks with the Dell Precision laptop I recently bought, I managed to fix an issue where Ubuntu (yes, I was NOT using Windows, get serious) would not recognize the video card and would cause all sorts of problems all over the place. I ended up one Saturday morning nearly throwing this thing away, when I managed to find a post about NVIDIA Optimus technology ( http://www.pcmag.com/article2/0,2817,2358963,00.asp ). Now I am a huge advocate of disruptive new stuff, as long as we keep the broader audience in mind. Anyhow, disabling this (which, as the BIOS settings state, only works on Windows 7 or later) effectively allows the NVIDIA-based Ubuntu driver to kick in full force. No need for a trash can anymore, thankfully. Since I saw multiple posts all over the place about this: check your BIOS, disable Optimus, and try the video again to see if it corrects your issues. Best of luck!

    Read the article

  • Is there a way to automate changing filenames in <link>, <script> tags

    - by nepsdotin
    When we use an Expires header for text files like JS and CSS, the contents are cached in the browser; to get new content we need to change, in the HTML files, the names referenced in the <link> and <script> tags. How can we automate this when we make changes? I may have a bunch of HTML files in multiple folders, also in subdirectories. There would be a text file filelist.txt:
        OldName              NewName
        oldfile1-ver-1.0.js  oldfile1-ver-2.0.js
        oldfile2-ver-1.0.js  oldfile2-ver-2.0.js
        oldfile3-ver-1.0.js  oldfile3-ver-2.0.js
        oldfile4-ver-1.0.js  oldfile4-ver-2.0.js
    The script should change every oldfile1-ver-1.0.js into oldfile1-ver-2.0.js in the HTML and PHP files. I would run this script before I start uploading. Finally, the script could create a list of the files and line numbers where it made updates. The solution can be in Perl/PHP/batch or anything that's nice and elegant.
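
    For illustration, a minimal Python sketch of the kind of pass being asked for (the root directory and file layout are assumptions): it loads the OldName/NewName pairs from filelist.txt, rewrites every .html and .php file under the root, and prints the file and line number of each change. It would be sensible to run it on a copy of the site first.

        import os
        import sys

        def load_mapping(list_path):
            mapping = {}
            with open(list_path) as fh:
                for line in fh:
                    parts = line.split()
                    if len(parts) == 2:
                        old, new = parts
                        if old == "OldName":
                            continue          # skip the header row if present
                        mapping[old] = new
            return mapping

        def rewrite_tree(root, mapping, exts=(".html", ".php")):
            report = []                       # (path, line number) of each change
            for dirpath, _dirs, files in os.walk(root):
                for name in files:
                    if not name.lower().endswith(exts):
                        continue
                    path = os.path.join(dirpath, name)
                    with open(path, encoding="utf-8", errors="ignore") as fh:
                        lines = fh.readlines()
                    changed = False
                    for i, line in enumerate(lines, start=1):
                        new_line = line
                        for old, new in mapping.items():
                            if old in new_line:
                                new_line = new_line.replace(old, new)
                        if new_line != line:
                            lines[i - 1] = new_line
                            report.append((path, i))
                            changed = True
                    if changed:
                        with open(path, "w", encoding="utf-8") as fh:
                            fh.writelines(lines)
            return report

        if __name__ == "__main__":
            root_dir, list_file = sys.argv[1], sys.argv[2]
            for path, lineno in rewrite_tree(root_dir, load_mapping(list_file)):
                print(f"{path}:{lineno}")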

    Read the article

  • Using MongoDB + Redis + Apache on the same server in production?

    - by Dayson
    I intend to launch my web app using an 8 GB VPS. It uses MongoDB + Redis for storage/caching and Apache + PHP-FPM for serving requests. Could there be any issues with running Mongo + Redis + Apache on the same server? Would it make more sense to set up 2 x 4 GB VPS servers and keep Mongo on one and Redis + Apache on the other? Or should I just start with one server and worry about scaling horizontally later, by delegating the existing server to Mongo in the future (due to its large RAM) and moving the web servers onto multiple smaller VPSes?

    Read the article

  • Upgrading from php 5.3 to php 5.4 with Macport

    - by dr.stonyhills
    PHP 5.4 has been available for some time now and MacPorts recently caught up with the release of the php54 port, but the process of upgrading is not as clear as it could be - even worse for those who are new to maintaining multiple versions of PHP on the same machine. I am keen on trying out some of the new features in PHP 5.4 like traits, the new array syntax, etc., while falling back to PHP 5.3 for other compatibility stuff. So I sudo port install php5+ (all the variants, apache2 etc.), then I tell it which PHP port to use as the default: sudo port select --set php php54. Checking which version of PHP is active in the terminal with php -v outputs PHP 5.4.3. But I seem to be having issues choosing the right non-CLI version, in that the version of the module run by Apache is still PHP 5.3.12. Do I have to change the reference to libphp5 in Apache's httpd.conf? Any advice on the right workflow for switching between PHP versions on MacPorts is greatly appreciated!

    Read the article

  • "This file came from another computer..." - how can I unblock all the files in a folder without having to unblock them individually?

    - by Schnapple
    Windows XP SP2 and Windows Vista have this deal where zone information is preserved in files downloaded to NTFS partitions, such that certain files are blocked in certain applications until you "unblock" them. So for example if you download a zip file of source code to try something out, every file will display this in the security settings of the file properties: "This file came from another computer and might be blocked to help protect this computer", along with an "Unblock" button. Some programs don't care, but Visual Studio will refuse to load projects in solutions until they've been unblocked. While it's not terribly difficult to go to every project file and unblock it individually, it's a pain, and it does not appear that you can unblock multiple selected files simultaneously. Is there any way to unblock all files in a directory without having to go to them all individually? I know you can turn this off globally for all new files, but let's say I don't want to do that.
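
    For what it's worth, the block is stored as an NTFS alternate data stream named Zone.Identifier on each file, which is what the Unblock button deletes; Sysinternals streams.exe -s -d <folder> and, on later systems, PowerShell's Unblock-File can strip it recursively. Below is a minimal Python sketch of the same idea (Windows/NTFS only, the path is a placeholder):

        import os

        def unblock_tree(root):
            """Remove the Zone.Identifier alternate data stream from every file under root."""
            for dirpath, _dirs, files in os.walk(root):
                for name in files:
                    path = os.path.join(dirpath, name)
                    try:
                        os.remove(path + ":Zone.Identifier")  # deletes only the stream, not the file
                        print("unblocked", path)
                    except OSError:
                        pass                                  # no zone information on this file

        if __name__ == "__main__":
            unblock_tree(r"C:\path\to\solution")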

    Read the article

  • Incorrect "from" account used when accepting Outlook meeting requests

    - by Greg
    I am using Outlook 2013 and I have multiple accounts configured:
        AccountA (IMAP) - default account
        AccountB (Exchange)
    (There are others but I don't think they're directly relevant.) I have been receiving Outlook meeting requests via AccountB and duly accepting them. All of my meetings, whether recorded manually or via meeting requests, are saved in the calendar for AccountA (this works fine). I have discovered today that even though meeting requests are arriving via AccountB, the accept/decline messages that Outlook generates on my behalf (when I click the accept/decline button) are addressed from AccountA. I don't believe that I have any control over the address used to reply. This seems non-intuitive at best. I understand that the underlying calendar is in AccountA, but in every other scenario the "From" address in a reply to a message defaults to the account it was sent to. Can I change this behaviour so that it works as I expect?

    Read the article

  • Understanding IDAT chunk of PNG file format

    - by DRapp
    From the sample image below, I have a border in yellow for display purposes only. The actual .png file is a simple black/white image, 3 pixels by 3 pixels. I was originally thinking to try it as a 2x2, but that would not help when trying to interpret a low/high vs high/low drawing stream. At least this way, I would have two black, one white from the top, or one white, two black from the bottom. So I read the chunks of data, get to the IDAT chunk, decode that (zlib) and come up with 12 bytes as follows: 00 20 00 40 00 80. So, my question: how does the above get broken down into the 3x3 black and white sample? Also, it is saved in palette format and properly recognizes the bit depth of 1 and a colour palette of 2... palette[0] is RGBA all zeros; palette[1] has RGBA of 255, 255, 255, 0. I'll eventually get into the multiple other depth formats later, I just wanted to start with what I would expect to be the easiest. Part II: any guidance on handling the other depth formats would help - anything special to be considered, especially regarding the alpha channel (which I am already looking for in the palette), that might trip me up.
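
    For illustration, a minimal Python sketch of how the six data bytes listed would map onto the 3x3 image, assuming each scanline is one filter-type byte (0x00 = filter None) followed by the pixel bits packed most-significant-bit first, one bit per pixel, padded to a whole byte; each bit is then an index into the palette described above.

        # Decompressed IDAT data for a 3x3, bit-depth-1, palette image:
        # each row is <filter byte> <packed pixel bits>.
        raw = bytes([0x00, 0x20, 0x00, 0x40, 0x00, 0x80])

        WIDTH, HEIGHT, BIT_DEPTH = 3, 3, 1
        stride = 1 + (WIDTH * BIT_DEPTH + 7) // 8   # filter byte + packed pixels per row

        for row in range(HEIGHT):
            line = raw[row * stride:(row + 1) * stride]
            filter_type, data = line[0], line[1:]
            assert filter_type == 0                  # filter "None", nothing to undo
            pixels = []
            for x in range(WIDTH):
                byte = data[x // 8]
                bit = (byte >> (7 - (x % 8))) & 1    # MSB-first packing
                pixels.append(bit)                   # palette index: palette[0] or palette[1]
            print(pixels)
        # prints [0, 0, 1] / [0, 1, 0] / [1, 0, 0]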

    Read the article

  • Laptop heats up very fast due to very high CPU usage by dwm.exe

    - by Sushan Shrestha
    My laptop heats up very fast, and when I check Task Manager I find dwm.exe consuming very high CPU (mostly 100%). In the beginning I found the message "The graphic adapter has been stopped and recovered". I am also getting the error message "Cidaemon.exe has stopped working" very frequently. During this period too, CPU usage is 100%, consumed by dwm.exe. I have run CCleaner and Kingsoft PC Doctor monitor to fix the registry problems, but the problem has not been solved. My display adapter is an NVIDIA NVS 5200M. After the driver update I am no longer getting "The graphic adapter has been stopped and recovered", but dwm.exe is still consuming 100% CPU about once every 5 minutes. Multiple restarts have not helped. Laptop model: Dell Latitude E6430. Thanks in advance.

    Read the article

  • Help me choose an Open-Source license

    - by Spartan-117A
    So I've done lots of open-source work. I have released many projects, most of which have fallen under GPL, LGPL, or BSD licensing. Now I have a new project (an implementation library), and I can't find a license that meets my needs (although I believe one may exist, hence this question). This is the list of things I'm looking for in the license:
        1. Appropriate credit given for ALL usage or derivative works.
        2. No warranty expressed or implied.
        3. The library may be freely used in ANY other open-source/free-software product (regardless of license: GPL, BSD, EPL, etc).
        4. The library may be used in closed-source/commercial products ONLY WITH WRITTEN PERMISSION.
    GPL - Useless to me, obviously, as it completely precludes any and all closed-source use, violating requirement (4).
    BSD/LGPL/MIT - Won't work, because they wouldn't require closed-source developers to get my permission, violating requirement (4). If it wasn't for that, BSD (FreeBSD in particular) would look like a good choice here.
    EPL/MPL - Won't work either, as the code couldn't be combined with GPL code, therefore violating requirement (3). Also I'm pretty sure they allow commercial works without asking permission, so they don't meet (4) either.
    Dual-licensing is an option, but in that case, what combination would hold to all four requirements? Basically, I want BSD minus the commercial use, plus an option to use in commercial/closed-source as long as the developer has my written permission. EDIT: At the moment, thinking something like multiple-licensing under GPL/LGPL plus something else for commercial?

    Read the article

  • Lightweight, low cost enterprise backup solution

    - by Scott
    Looking for a backup solution primarily for Windows clients (XP/7) that will either back up to 2 different servers (1 on site, 1 off site over the internet - can be our own server), or back up to 1 server which we would then need to somehow back up offsite/over the internet. By lightweight, I mean the backup client software should not eat up much memory or CPU, since some of the client machines are older. I am used to using CrashPlan for home use - the pricing is nice for the amount of backup I get, and it works great and is easy to install and get going - I can back up to my own machines locally and over the net. However, the price is going to be a little steep for enterprise-level backup, 1500+ machines. Possibly Zmanda and Bacula are good choices to consider? Are they lightweight? Can the clients/agents be set to go over the net and/or to multiple backup servers?

    Read the article

  • Error when adding to the domain: the specified server cannot perform the requested operation

    - by James
    When we add computers to the domain in Windows 7, we get the error: Changing the Primary Domain DNS name of this computer to "" failed. The name will remain "domain.com". The error was: The specified server cannot perform the requested operation. This happens on multiple computers and retrying yields the same result. Despite the error, the computer is still able to log in to the domain OK. The DCs are Windows 2003. Has anyone found a way to get rid of this error? Any help is appreciated.

    Read the article

  • Backup virtual hard disk

    - by Harshil Sharma
    I have a VM created in VMware Player. Its VHD is currently sized at 17 GB, split among multiple 2 GB files. The host OS is Windows 8. I use CrashPlan in the host OS for file backup. The problem is, whenever I use the VM, CrashPlan detects all parts of the VHD as altered and backs up the whole 17 GB VHD. What I want is software that can run on the host OS (Windows 8), treat the VHD as a physical hard disk, and create incremental backups of the VHD, including all files, programs and the OS.

    Read the article

  • SAS instead of SATA 2 for my hard drives?

    - by jasondavis
    I am building a new system soon; I will have multiple 1-2 TB hard drives for storage in it. I only have experience using SATA II drives, but I saw somewhere that I should be using something like SAS. I read that if I were going to have 20 drives, I could use 4 SAS cables vs 20 SATA cables. Can someone help me understand this better? If it were only 4 cables, then how would 20 drives hook up? Also, can a regular SATA II drive hook up to that?

    Read the article

  • Google Drive desktop client not updating existing files from other users

    - by cqm
    I've looked around and there doesn't really seem to be any troubleshooting information for the Google Drive desktop client; it all assumes you are using Google Docs on the web. Anyway, my team is trying to use Google Drive like Dropbox, where multiple people edit files shared amongst them through the desktop, such as images. Dropbox is really good at noticing when a checksum for a file has changed and syncing it. Google Drive's desktop client seems not to do this at all. It seems to only sync newly created files: it gives no notification at all that there is a modified version and will never sync it, even though going online and opening that file will show the modified version. Is there any way to fix this? (And the answer has nothing to do with proxy or firewall configurations.) The team is using computers running OS X and Windows.

    Read the article

  • What are the best customizable monitoring tools for a cluster / distributed system?

    - by Adil
    I am working on a system with multiple servers. I am interested in monitoring some server-specific data like CPU/memory usage, disk/filesystem usage, network traffic, system load, etc., and some of my own process-specific data. What open-source tools are available that can serve my purpose? Ideally it would let me customize the parameters to be monitored and monitor my own data by creating a plugin/agent. Any suggestions? I have heard of Nagios, Zabbix and Pandora but am not sure whether they provide such an interface.
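
    For illustration, most of these tools accept custom checks; Nagios-style plugins, for example, are just executables that print a status line and exit 0 (OK), 1 (WARNING) or 2 (CRITICAL). A minimal Python sketch with hypothetical thresholds:

        #!/usr/bin/env python
        # Minimal Nagios-style check: warn/critical on the 1-minute load average.
        import os
        import sys

        WARN, CRIT = 4.0, 8.0          # hypothetical thresholds for this host

        load1, _, _ = os.getloadavg()  # 1, 5 and 15 minute load averages (Unix)

        if load1 >= CRIT:
            print(f"CRITICAL - load average {load1:.2f} | load1={load1:.2f}")
            sys.exit(2)
        elif load1 >= WARN:
            print(f"WARNING - load average {load1:.2f} | load1={load1:.2f}")
            sys.exit(1)
        else:
            print(f"OK - load average {load1:.2f} | load1={load1:.2f}")
            sys.exit(0)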

    Read the article

  • Better solution for boolean mixing?

    - by Ruben Nunez
    Sorry if this question has been asked in the past, but searching Google and here didn't yield relevant results, so here goes. I'm working on a fragment shader that implements both conditional/boolean diffuse and bump mapping (that is to say, you don't need a diffuse texture or a normals texture, and if they're not present, they're simply changed to default values). My current solution is to use a uniform float to say "mix amount". For example, computing the diffuse texel works as:
        // Compute diffuse amount scaled by vCol
        // If no texture is present (mDif = 0.0), then DiffuseTexel = vCol
        // kT[0] is the diffuse texture
        // vTex is the texture co-ordinates
        // mDif is the uniform float containing the mix amount (either 0.0 or 1.0)
        vec4 DiffuseTexel = vCol*mix(vec4(1.0), texture2D(kT[0], vTex), mDif);
    While that works great and all, I was wondering if there's a better way of doing this, as I will never have any use for in-between values for funky effects. I know that perhaps the best solution is to simply write separate shaders for mDif=0.0 and mDif=1.0, but I'd like a more elegant solution than splicing shaders before compiling or writing multiple shader files and keeping each one updated. Any ideas are greatly appreciated. =)

    Read the article

  • block access to wrt from vlan using iptables dd-wrt

    - by NitroxDM
    I set up multiple isolated vlans in dd-wrt. Now I need to forward a port to vlan2. I isolated the vlans using:
        iptables -I FORWARD -i br0 -o vlan2 -j DROP
        iptables -I FORWARD -i br0 -o vlan3 -j DROP
        iptables -I FORWARD -i br0 -o vlan4 -j DROP
    Now I need to block clients on each vlan from accessing the router. This doesn't work:
        iptables -I INPUT -i br0 -o vlan2 --dport telnet -j REJECT --reject-with tcp-reset
    I'm new to iptables... am I missing something?

    Read the article

  • Help deciding on language for a complex desktop - web application

    - by user967834
    I'm about to start working on a fairly complex project needing a desktop GUI as well as a web interface, and I need to decide on a language (or languages) to use. This is from an electrical engineering/robotics background. These are the requirements:
        - The program will have to read data from multiple sensors and inputs (motion sensor, temperature sensor, capacitive sensor, infrared, magnetic sensors, etc.) through a port on a computer - so either through USB or Ethernet.
        - The program will have to be able to send control signals based on this input.
        - The program will have to continuously monitor all input signals at all times - so realtime data.
        - The program will require authentication.
        - The program will need to be controllable from a web interface, from anywhere, via logging in to a website.
        - The web interface will also need to have realtime feedback once authenticated.
    What language do you think would best accomplish this? I was thinking maybe of saving everything into a database which can be accessed by both the desktop and web app. Would Python be able to do all of this? Or something like a remote desktop app? I know this is a complex project, but let's assume I can learn any language. Has anyone done something like this, and if so, how did you accomplish it?
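
    For illustration, a minimal Python sketch of the shared-database idea mentioned above, with a hypothetical read_sensors() standing in for the real USB/Ethernet acquisition: a background thread samples the hardware and appends readings to SQLite, which both the desktop GUI and a web front end (Flask, Django, etc.) could then query; a real system would add authentication and likely a message queue or WebSockets for realtime push.

        import sqlite3
        import threading
        import time

        DB_PATH = "readings.db"                     # shared by the desktop GUI and the web app

        def read_sensors():
            """Hypothetical placeholder: replace with real USB/Ethernet acquisition."""
            return {"temperature": 21.5, "motion": 0}

        def sampler(period_s=0.5):
            conn = sqlite3.connect(DB_PATH)
            conn.execute("CREATE TABLE IF NOT EXISTS readings "
                         "(ts REAL, sensor TEXT, value REAL)")
            while True:
                now = time.time()
                for sensor, value in read_sensors().items():
                    conn.execute("INSERT INTO readings VALUES (?, ?, ?)", (now, sensor, value))
                conn.commit()
                time.sleep(period_s)

        def latest(sensor):
            """What the desktop GUI or web view would call to display current data."""
            conn = sqlite3.connect(DB_PATH)
            row = conn.execute("SELECT ts, value FROM readings WHERE sensor = ? "
                               "ORDER BY ts DESC LIMIT 1", (sensor,)).fetchone()
            conn.close()
            return row

        if __name__ == "__main__":
            threading.Thread(target=sampler, daemon=True).start()
            time.sleep(2)
            print(latest("temperature"))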

    Read the article

  • Acad 14 SOLIDS disappeared, DWG file doesn't open correctly

    - by MikeD
    I made a drawing in AutoCAD 14 (Win XP) consisting mostly of SOLIDs. I saved and re-opened it multiple times without any problem. Before I last saved it, I viewed my drawing using the SHADE functions. After re-opening, all my SOLIDs have disappeared. I spent numerous hours searching for a solution: tried SATFIX.ARX, AUDIT, RECOVER without success (ACIS error - which should be gone after applying SATFIX), changed my computer locale from German (decimal = comma) to English (decimal = dot), but my screen remains empty (and yes, I tried to recover from a .BAK too). I also tried to export the (non-displaying) drawing to DXF and can confirm that all my objects are in there, but re-opening the DXF results in a huge ACIS error list again. I am desperate - please can someone help? Thanks! Mike

    Read the article

  • Office jukebox systems

    - by Jona
    We're looking for a good office jukebox solution where staff can select songs via a web interface to be played over the central set of speakers. Must-haves:
        - Web interface.
        - RSS / easy-to-scrape display of currently playing songs.
        - Ability to play MP3s and manage an ordered playlist.
        - Good cataloguing of media.
        - Multiple OSs supported as clients - Windows, Mac, Fedora Linux (will probably be accomplished by virtue of a web interface).
    We have tried XBMC, which worked well as a proof of concept; however the web interface is just too immature and has too many bugs for a reliable multi-user solution. I believe the same will be true of Boxee. Nice-to-haves:
        - Ability to play music videos onto a monitor.
        - Ability to listen to radio streams, specifically Shoutcast and the BBC.
    Ability to run on Linux is a nice-to-have, but Windows solutions which work well would certainly be considered. I am aware of question 61404 and don't believe this to be a duplicate due to the specific requirements.

    Read the article

  • glusterfs to replicate files to other servers

    - by sbrattla
    I've got multiple servers which all need to have the same content in /home. In other words, if the file /home/user1/test.txt is updated on server A, this needs to be replicated to all other servers in the cluster. Is it possible to use GlusterFS for this purpose? That is, let each server have a full copy of all data locally - which is what that server will be working on - and solely use GlusterFS to take care of replicating this data to the other servers? I'm not interested in combined storage, but rather in having all the data on all machines and only using GlusterFS to replicate it to the other machines.

    Read the article

  • How to configure Amazon Security Groups to achieve multi-tier architecture?

    - by ks78
    What is the preferred way to configure Amazon Security Groups to achieve a multi-tier architecture? Each of my instances has its own Security Group, which I only want to use for rules specific to an instance. I'd like to keep any rules which apply to multiple instances in a separate Security Group, which can then be assigned to instance Security Groups as necessary. As an example, I've setup a group called "admin", which allows administrative access from my IP. I added the "admin" group as the source to each of my instance security groups. However, I still can't access the instances from my IP without adding the rules directly to the instance's group. Am I missing something? Although it seems a multi-tier security architecture should be possible, it doesn't seem to be working.

    Read the article
