Search Results

Search found 66801 results on 2673 pages for 'near real time'.


  • Anyone know a good mind mapper that works with a scheduler?

    - by GLycan
    TL;DR: Mind-mapping tasks to be processed into a schedule based on task metadata. I have all sorts of ideas about what to invest resources (mainly time) in, but when I actually have time to do something I usually end up browsing reddit for not knowing what to do, and the frequency with which I forget deadlines scares me. I'd love to bring order and structure into my mind, and always know what to do next.

    So, I want a mind-mapping app where I'd give each branch (types and subtypes of things I want to do) an importance score (if there were two branches, one with 60 and the other with 40, they would respectively get 60% and 40% of the parent's importance, with the root being 100) and an interval at which that branch should be revised/updated (a hobby I want to try out might be checked, say, once a week, while a school subject should be checked once a day). Each leaf (something I want/need to do) would get how much time it takes, a deadline (if any), and optionally an absolute importance, a recurrence (guitar practice might repeat once a week), and prerequisites (reading something requires that book (although that could be brought somewhere), coding requires a box, jogging requires being outside), and maybe some other flags, like whether it's enjoyable or not.

    It should either be packaged with, or work with, a scheduler app, to which I'd say: look, my day works this way (completely busy from 8 to 9:15, then 15 minutes of being inside with nothing, ..., two hours with box and possibility to go outside, etc.), and that such-and-such pattern is school and happens every weekday except such-and-such days. The output should be a schedule, fit for printing or, when I finally get an Android, mobile viewing, that schedules tasks with regard to availability of resources and importance (importance being derived from the leaf task's parent branches) and the set of flags (all work and no play makes me a dull boy). One of these tasks should be reviewing anything that should be updated on that day, including future day layouts (e.g., if the time slots of future days have changed; this should be done every day).

    Does anyone know some collection of preferably open-source (or free, or pirateable) tools, or better yet a single one, that accomplishes this task? I know Python pretty well, and should be able to write any necessary glue.
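
    For illustration, a rough Python sketch (the tree, names and numbers are made-up examples) of the importance weighting I have in mind: each branch's share of its parent's importance cascades down to its leaves.

        # Minimal sketch of the importance model described above.
        # Branch weights are relative shares of the parent's importance;
        # the root always carries 100%.

        def leaf_importance(tree, inherited=1.0):
            """Yield (task, effective_importance) pairs for every leaf."""
            total = sum(child["weight"] for child in tree.get("children", []))
            for child in tree.get("children", []):
                share = inherited * child["weight"] / total
                if "children" in child:
                    yield from leaf_importance(child, share)
                else:
                    yield child["name"], share

        # Example tree: two branches weighted 60/40, each with one task.
        mind_map = {
            "name": "root",
            "children": [
                {"name": "school", "weight": 60, "children": [
                    {"name": "essay", "weight": 1},
                ]},
                {"name": "hobbies", "weight": 40, "children": [
                    {"name": "guitar practice", "weight": 1},
                ]},
            ],
        }

        for task, importance in leaf_importance(mind_map):
            print(f"{task}: {importance:.0%}")   # essay: 60%, guitar practice: 40%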

    Read the article

  • Nginx > Varnish > Gunicorn: "Too many redirections" error

    - by kollo
    I have the following stack: Nginx > Varnish > Gunicorn > Django. I want to cache two versions of the same site (mobile & web) with Varnish.

    Gunicorn:

        WEB:    gunicorn_django --bind 127.0.0.1:8181
        MOBILE: gunicorn_django --bind 127.0.0.1:8182

    Nginx (WEB):

        server {
            listen 80;
            server_name www.mysite.com;
            location / {
                proxy_pass http://127.0.0.1:8282;   # pass to Varnish
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }

    Nginx (MOBILE):

        server {
            listen 80;
            server_name m.mysite.com;
            location / {
                proxy_pass http://127.0.0.1:8282;   # pass to Varnish
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }

    Varnish (default.vcl):

        backend mobile_mysite {
            .host = "127.0.0.1";
            .port = "8182";
        }
        backend mysite {
            .host = "127.0.0.1";
            .port = "8181";
        }

        sub vcl_recv {
            if (req.http.host ~ "(?i)^(m.)?mysite.com$") {
                set req.http.host = "m.mysite.com";
                set req.backend = mobile_mysite;
            } elsif (req.http.host ~ "(?i)^(www.)?mysite.com$") {
                set req.http.host = "mysite.com";
                set req.backend = mysite;
            }
            if (req.url ~ ".*/static") {
                /* do not cache static content */
                return (pass);
            }
        }

    The problem: in Nginx, if I point the MOBILE version at Varnish (port 8282) and leave the WEB version pointing at Gunicorn (port 8181), MOBILE is cached by Varnish and both WEB & MOBILE work, but WEB is not cached. If I set the proxy_pass of the WEB version to Varnish (port 8282) and restart Nginx, I get a "Too many redirections" error when accessing the web version (www.mysite.com). I think my problem comes from the Varnish config file, as the site works well if I point the Nginx proxy_pass at the Gunicorn ports (MOBILE & WEB).
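
    For what it's worth, a rough Python (stdlib only) sketch of how I could probe Varnish directly, assuming it listens on 8282 as in the nginx proxy_pass above, to see which Host header produces the redirect (hostnames are the placeholders used above):

        # Send one request straight to Varnish with each Host header and
        # print the status plus any Location header, to see which backend
        # is generating the redirect loop.
        import http.client

        for host in ("www.mysite.com", "m.mysite.com", "mysite.com"):
            conn = http.client.HTTPConnection("127.0.0.1", 8282, timeout=5)
            conn.request("GET", "/", headers={"Host": host})
            resp = conn.getresponse()
            print(host, resp.status, resp.getheader("Location"))
            conn.close()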

    Read the article

  • MicroSD card getting corrupted for no good reason

    - by ChaosR
    I recently bought a MicroSD card online. It's a SanDisk 16GB class 2. However, it has a nasty problem: every time I fill it with my data, the FAT tables get corrupted. I've tried reformatting it and blanking it, which doesn't seem to solve the problem. I have tried Windows and Linux (Ubuntu); both have the problem. I've used my USB MicroSD readers, and even tried putting it in my phone and putting data on it from there. All have this problem.

    Now the really odd thing is, besides the corrupted file tables, no program can find anything wrong with the hardware. I've tried both chkdsk and "badblocks -w"; neither gives any type of error. I don't know if the actual data gets corrupted, or if it's just the filesystem tables. What happens is that one or more folders start showing a load of Chinese-looking (random UTF-8 symbols, I suppose) folders and files, and it is impossible to do anything with those. All the other data (outside of the corrupted folders) seems fine.

    I've tried to test it, and the problem doesn't seem to show up until I fill the disk up to about 3-4GB. After that I can still access the data. But as soon as I eject/safely remove/unmount it, the bad things happen somehow. The next time I plug it in, the folders I most recently wrote to (but sometimes also the folders I wrote to the time before last) are all gibberish.

    Does anybody have any clue what might be going on here?

    EDIT: It seems I can't even put ext3 or ext4 on it; they both complain about a corrupted journal. Gheh, guess something is really broken here.
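
    In case it helps to reproduce this, a rough Python sketch (the mount point and sizes are placeholders) of the fill-and-verify test I've been doing by hand: write files with known checksums, remount the card, then re-read and compare.

        # Fill the card with files of known content, then (after remounting)
        # verify their checksums to see exactly what got corrupted.
        import hashlib
        import os

        MOUNT = "/media/sdcard"     # placeholder mount point
        COUNT = 40                  # 40 x 100 MB ~= 4 GB, where problems start
        CHUNK = os.urandom(1024 * 1024)

        def write_files():
            sums = {}
            for i in range(COUNT):
                path = os.path.join(MOUNT, f"test_{i:03d}.bin")
                h = hashlib.sha256()
                with open(path, "wb") as f:
                    for _ in range(100):          # 100 MB per file
                        f.write(CHUNK)
                        h.update(CHUNK)
                sums[path] = h.hexdigest()
            return sums

        def verify(sums):
            for path, expected in sums.items():
                h = hashlib.sha256()
                with open(path, "rb") as f:
                    for block in iter(lambda: f.read(1024 * 1024), b""):
                        h.update(block)
                print("OK" if h.hexdigest() == expected else "CORRUPT", path)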

    Read the article

  • Hyper-V - low CPU usage

    - by Klark
    I am very new to Hyper-V and virtual machine philosophy in general, so please expect more or less nooby questions :) I have a server that is only used as a host for virtual machines. The OS is Windows Server 2008 R2 and it is running on 16 CPUs and 48 GB of RAM. On the aforementioned server there are 8 VMs, each having 4 CPUs and 4 GB of RAM. On those VMs we are running some CPU-intensive tasks, and each machine has nearly 100% CPU usage.

    After I noticed slow performance I went to the host machine and started playing with Process Explorer. It turned out that CPU usage there is very low. I/O is also very low, and of course memory consumption is high, which is expected. Naturally, I don't expect those 4 virtual cores dedicated to a VM to work as fast as 4 real hardware cores, but I still expected higher consumption of the real hardware.

    Is this sort of behaviour normal? I see that most of the CPU usage on the host machine is marked as interrupts (which I guess is normal) and all those interrupts are passed to only one core (which is strange). Are there any out-of-the-box optimizations I could perform to finally use all that processing power that is under the hood? My knowledge of virtualization technology is near to embarrassing, so I would be grateful for any links that could enlighten me :) Thanks.

    Read the article

  • How to make VirtualBox, OpenVPN, and Win2008 Web R2 like one another?

    - by Aquitaine
    Web developer guy wearing the net admin hat again. Hopefully this is an easy one. We have two servers on a public network at a hosted facility. Server A is our public-facing web server and Server B is our database server. Both are running Windows Server 2008 R2 Web Edition. We want Server B isolated from everything except Server A, such that anyone who has to connect to Server B goes through the VPN on Server A. It's not perfect, since we have no access to do this on the router side, but it's what we've got.

    We've set up VirtualBox and OpenVPN Access Server on Server A. It has one network interface set to NAT mode, such that OpenVPN gets its IP at 10.0.2.x, and to connect to the OpenVPN interface I go to the local IP of the VirtualBox network adapter, 192.168.56.x, which works since I configured the appropriate ports using VBoxManage.

    My question is: do I need to be using bridged networking and give the VPN server its own IP, or is there some way to tell the server (either Windows or the VirtualBox OpenVPN) that "any public connection on the real external IP on port X should be directed to this internal LAN address of 192.168.1.x on port Y"? OpenVPN itself doesn't seem to be aware of the server's real external IP unless we put it in bridged networking mode; is that necessary or advisable? We're without RRAS since this is Web Edition, but I feel like what we're going for is pretty simple. Thanks! Aq

    Read the article

  • How to figure out what VirtualBox did?

    - by AndrejaKo
    I'm trying to boot a custom made-in-ASM OS on my recent laptop. The OS is intended to be installed on a floppy, and during make it creates a bootable floppy image. Since I don't have a floppy drive, I installed it on a virtual floppy. After that I used WinToFlash's "create bootable MS-DOS USB drive" option to transfer the floppy image to a USB flash drive. Then I tried to boot my computer from it, but got only a repeating broken string on screen.

    After all that, I made a virtual hard disk image from the flash drive using this tutorial and tried to boot a virtual machine from it. The first time I got the same problem as on the real computer. I then used the reset option, and the next time, and every time after that, the OS booted correctly. My question is: how do I figure out what exactly happened to the virtual machine between the first and second boot?

    UPDATE: I just created a new VM with the default settings for Windows XP and it has the same problem that I have on the real computer. I was unable to reproduce the procedure which made the first VM work correctly.

    Read the article

  • Could my local ISP capture my location whenever I connect to a VPN server?

    - by Ozgun Sunal
    I am extremely concerned that my ISP might collect information once I am connected to a VPN server. For instance, as far as I know, when I start a connection to a HotSpotShield VPN server, an IP address is assigned to me just before a successful connection. Besides, I'll have an extra IP address at the beginning with the TAP adapter. An encryption tunnel is set up between me and the VPN server. Whenever my request for a website reaches the VPN server, it decrypts the data, and later it encrypts the reply which returns from the targeted web server. Because the connection is encrypted, the ISP cannot see what I am watching, displaying or writing; the targeted websites see and record all actions, but they cannot identify my real IP address.

    I'm really concerned about whether the ISP can see "my location". OK, the websites see an IP address from another country rather than my real IP address, but what does my ISP make of the traffic going through it? Can they find out who I am? Won't they say "Hey, there is traffic here, but who is it and what is he doing right now?", since I get my Internet access from them?

    Read the article

  • Multimaster Keepalived Configuration (Virtual IP with Load Balancing)

    - by Rad Akefirad
    Here are the requirements:

    1. High availability
    2. Load balancing

    First configuration:

    1. Two Linux servers have been configured with one static IP each: 10.17.243.11 and 10.17.243.12.
    2. Keepalived has been installed and configured with one VRRP instance to provide one virtual IP (10.17.243.10 as VIP, 10.17.243.11 as master and 10.17.243.12 as backup).
    3. Everything works fine. The VIP is assigned to the master server (10.17.243.11) as long as it is up and running. As soon as it goes down, the VIP is assigned to the backup server (10.17.243.12).
    4. The problem here is that all communication goes to the master server.

    Second configuration:

    1. I found an active-active configuration for Keepalived, which is possible by defining more than one VRRP instance, so that both servers have two IPs (real 10.17.243.11 and virtual 10.17.243.10 for server #1, and real 10.17.243.12 and virtual 10.17.243.20 for server #2).
    2. Everything works fine. We have two VIPs which are accessible (HA), but all communication coming to each IP still goes to one single machine (either server #1 or #2, depending on the IP). I found some tricks on the DNS side to overcome this limitation, but they are not acceptable in our case.

    Question: Is there any way to have one virtual IP which is assigned to both servers? By that I mean both servers handling some part of the workload (like what we do in web server load balancing), using either Keepalived or some other tool? Thanks in advance.

    Read the article

  • Is VGA port hot-pluggable?

    - by Martin Bøgelund
    In meetings, I often see people detaching the VGA connector from one running laptop and connecting it to another, while the projector is still on. Is this 100% risk free, and OK by design of the VGA standard? If there's a risk involved in hot-plugging VGA, can it be removed by turning off or suspending either laptop, display, or both?

    I see this being done all the time without causing disaster, so clearly I'm not interested in answers stating "we do it all the time, so it should be OK!". I want to know if there's a risk - real or in theory - that something breaks when doing this.

    EDIT: I did an internet search on the topic, and I never found a clear statement as to why it is safe or unsafe to hot-swap VGA devices. The typical result is a forum question asking basically the same question as I did, followed by these types of statements:

        "Yes it's hot-swappable! I do it all the time!"
        "It involves some kind of risk, so don't do it!"
        "You're some kind of moron if you think there's a risk, so just do it!"

    But no explanation as to why it is safe or not. Joe Taylor's answer below contains a link to a forum post and answers that basically give me the same statements as mentioned above. But again, no good explanation why.

    So I looked for an actual manual for a projector, and found the "Lenovo C500 Projector User's Guide". It states on page 3-1:

        Connecting devices
        Computers and video devices can be connected to the projector at the same time. Check the user's manual of the connecting device to confirm that it has the appropriate output connector. [image]
        Attention: As a safety precaution, disconnect all power to the projector and devices before making connections.

    But again, no good explanation.

    Read the article

  • Online computer not responding to pings

    - by mastercork889
    I was doing a bit of scanning on my network lately, and knew the hostnames of each connected computer. But whilst pinging one of them, ping returned "Request timed out." This is strange, as I know the computer is online and that it responds correctly to pinging on a different (enterprise) network. Is there something on that computer, my network, or my computer that is interfering with this? That's just a sub-question; I don't expect it to be the main answer. The real question: why does this happen? Why does pinging the IPv4 address not work?

    EDIT: Pinging the hostname used to default to the IPv4 address, but now it defaults to the IPv6 address. Why does this happen? And now that it pings using IPv6, how come it all of a sudden works?

        > ping -6 THE_COMPUTER
        Pinging THE_COMPUTER [lengthy IPv6 address] with 32 bytes of data:
        Reply from [lengthy IPv6 address]: time=1ms
        Reply from [lengthy IPv6 address]: time=1ms
        Reply from [lengthy IPv6 address]: time=1ms
        Reply from [lengthy IPv6 address]: time=1ms
        Ping stats: Sent = 4, Received = 4, Lost = 0 (0% loss)

    But when this is done using IPv4 it doesn't work. So there are now two questions: how come IPv6 works and not IPv4? And why does IPv4 not work at all?
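
    To see what the hostname actually resolves to on my machine, a quick Python check (the hostname is the placeholder from above):

        # List every address the hostname resolves to, to see whether an
        # A (IPv4) record comes back at all or only an AAAA (IPv6) one.
        import socket

        for family, _, _, _, sockaddr in socket.getaddrinfo("THE_COMPUTER", None):
            kind = "IPv6" if family == socket.AF_INET6 else "IPv4"
            print(kind, sockaddr[0])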

    Read the article

  • How do I restrict the Open/Save dialog in Windows to one folder?

    - by MindModel
    I spend a significant amount of time helping non-techies use their PCs. I realized most of that time is spent trying to explain to them how the Windows folder hierarchy works, where the "open file" dialog is pointing right now, and how to find that Word document they saved. All this time, they're telling me they "just want to print the file". They refuse to learn how to read the PC screen, try to memorize a fixed set of steps, and end up calling me back to tell me their files have disappeared again.

    I realize it's not productive to try to restrict where PC apps (e.g. Quicken) store files. But if there were a Windows utility I could turn on or off that would restrict the Open/Save dialogs in Windows apps, my noob user friends and I would save an enormous amount of time. The goal would be to have all files whose locations are chosen by the Save dialog saved to one folder, and have the Open dialog always point to that folder, until the utility is turned off. Does such a Windows utility exist?

    Read the article

  • Backup script to FTP with timed subfolders

    - by Frederik Nielsen
    I want to make a backup script that makes a .tar.gz of a folder I define, e.g. /root/tekkit/world. This .tar.gz file should then be uploaded to an FTP server and named by the time it was uploaded, for example: 07-10-2012-13-00.tar.gz. How should such a backup script be written? I already figured out the .tar.gz part; I just need the naming and the uploading to FTP. I know that FTP is not the most secure way to do it, but as it is non-sensitive data and FTP is the only option I have, it will do.

    Edit: I ended up with this script (note the added mkdir for the finished-archive directory, which otherwise doesn't exist when the script cds into it):

        #!/bin/bash

        # have some path predefined for backup unless one is provided as first argument
        BACKUP_DIR="/root/tekkit/world/"
        TMP_DIR="/tmp/tekkitbackup/"
        FINISH_DIR="/tmp/tekkitfinished/"

        # construct name for our archive
        TIME=$(date +%d-%m-%Y-%H-%M)

        if [ "$1" ]; then
            BACKUP_DIR="$1"
        fi

        echo "Backing up dir ... $BACKUP_DIR"
        mkdir -p $TMP_DIR
        mkdir -p $FINISH_DIR   # must exist before we cd into it
        cp -R $BACKUP_DIR $TMP_DIR
        cd $FINISH_DIR
        tar czvfp tekkit-$TIME.tar.gz -C $TMP_DIR .

        # create upload script for lftp
        cat <<EOF > lftp.upload.script
        open server
        user user password
        lcd $FINISH_DIR
        mput tekkit-$TIME.tar.gz
        exit
        EOF

        # start backup using lftp and the script we created;
        # if all went well print a simple message and clean up
        lftp -f lftp.upload.script && ( echo Upload successful ; rm lftp.upload.script )

    Read the article

  • Failed to re-publish a page - Tridion 2011 SP1

    - by Wilson Yu
    We are getting a strange error when re-publishing the same page. The page was published successfully the first time and we can see it on the presentation server. It failed with the error below when we tried to publish it again (with no change to the page). The page renders OK within Template Builder and we get the correct HTML output; it fails in the last "Committing Deployment" step (Prepare Transport, Transporting, Preparing Deployment and Deploying are all successful).

    Once it fails to publish the second time, it always fails to publish, and we can't un-publish it either. Also, when we make a copy of the failed page and create a new page, we can publish the new page the first time, and then the new page fails to publish the second time with the same error. Does anyone know what would cause this error? Any help would be greatly appreciated. Here is the error message:

        Committing Deployment Failed
        Phase: Deployment Prepare Commit Phase failed,
        Unable to prepare transaction: tcm:0-4210-66560, For input string: "", For input string: "",
        Unable to prepare transaction: tcm:0-4210-66560, For input string: "", For input string: ""

    Read the article

  • How to test nginx proxy timeouts

    - by mkorszun
    Target: I would like to test all Nginx proxy timeout parameters in a very simple scenario. My first approach was to create a really simple HTTP server and put in some sleeps:

    Between listen and accept, to test proxy_connect_timeout.
    Between accept and read, to test proxy_send_timeout.
    Between read and send, to test proxy_read_timeout.

    Test:

    1) Server code (Python):

        import socket
        import os
        import time
        import threading

        def http_resp(conn):
            conn.send("HTTP/1.1 200 OK\r\n")
            conn.send("Content-Length: 0\r\n")
            conn.send("Content-Type: text/xml\r\n\r\n\r\n")

        def do(conn, addr):
            print 'Connected by', addr
            print 'Sleeping before reading data...'
            time.sleep(0)   # Set to test proxy_send_timeout
            data = conn.recv(1024)
            print 'Sleeping before sending data...'
            time.sleep(0)   # Set to test proxy_read_timeout
            http_resp(conn)
            print 'End of data stream, closing connection'
            conn.close()

        def main():
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            s.bind(('', int(os.environ['PORT'])))
            s.listen(1)
            print 'Sleeping before accept...'
            time.sleep(130)   # Set to test proxy_connect_timeout
            while 1:
                conn, addr = s.accept()
                t = threading.Thread(target=do, args=(conn, addr))
                t.start()

        if __name__ == "__main__":
            main()

    2) Nginx configuration: I have extended the default Nginx configuration by setting proxy_connect_timeout explicitly and adding a proxy_pass pointing to my local HTTP server:

        location / {
            proxy_pass http://localhost:8888;
            proxy_connect_timeout 200;
        }

    3) Observations:

    proxy_connect_timeout - Even though I set it to 200s and sleep only 130s between listen and accept, Nginx returns 504 after ~60s, which might be because of the default proxy_read_timeout value. I do not understand how proxy_read_timeout could affect the connection at such an early stage (before accept). I would expect 200 here. Please explain!

    proxy_send_timeout - I am not sure if my approach to testing proxy_send_timeout is correct; I think I still do not understand this parameter. After all, a delay between accept and read does not trigger proxy_send_timeout.

    proxy_read_timeout - This one seems pretty straightforward: setting a delay between read and send does the job.

    So I guess my assumptions are wrong and I probably do not understand the proxy_connect and proxy_send timeouts properly. Can someone explain them to me using the test above if possible (or modifying it if required)?

    Read the article

  • Understanding Red Hat's recommended tuned profiles

    - by espenfjo
    We are going to roll out tuned (and numad) on ~1000 servers, the majority of them being VMware servers on either NetApp or 3Par storage. According to Red Hat's documentation we should choose the virtual-guest profile. What it does can be seen here: tuned.conf. We are changing the I/O scheduler to NOOP, as both VMware and the NetApp/3Par should do sufficient scheduling for us. However, after investigating a bit, I am not sure why they are increasing vm.dirty_ratio and kernel.sched_min_granularity_ns.

    As far as I have understood, increasing vm.dirty_ratio to 40% means that for a server with 20GB of RAM, 8GB can be dirty at any given time unless vm.dirty_writeback_centisecs is hit first. And while flushing those 8GB, all IO for the application will be blocked until the dirty pages are freed. Increasing the dirty_ratio would probably mean higher write performance at peaks, as we now have a larger cache, but then again, when the cache fills, IO will be blocked for a considerably longer time (several seconds).

    The other question is why they are increasing sched_min_granularity_ns. If I understand it correctly, increasing this value decreases the number of time slices per epoch (sched_latency_ns), meaning that running tasks will get more time to finish their work. I can understand this being a very good thing for applications with very few threads, but for e.g. Apache or other processes with a lot of threads, would this not be counter-productive?

    Read the article

  • I'm looking for a program that can automate opening/closing a program

    - by Peterstone
    I am looking for a program to remind me of things, with these features:

    Open files or programs on my own computer at a planned time. For example, I want the program to open a particular mp3 file every morning at 8:00. But suppose that, by mistake, I only turn on my computer at 9:00; then I want the program to remind me of what was planned for 8:00.

    Show me the program as the active window on my desktop. The window of the opened program is what the user sees (it is in the foreground on the desktop) and the rest of the windows are below it.

    Close programs or files on my own computer at a planned time. For example, I want the mp3 file that was opened at 8:00 to be closed at 10:00 if it is still open at that time.

    Detection of events. For instance, if I open a particular videogame program, then an mp3 file (with a recorded message arguing why I shouldn't continue playing that videogame during work time) is opened.

    The possibility to combine the features mentioned above with each other.
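
    Roughly what I have in mind, as a minimal Python sketch (the command, file path and times are examples only, not something I expect to exist as-is):

        # Rough sketch of the "open X at a planned time, close it later" idea.
        # If the computer is turned on late, anything still inside its time
        # window is opened immediately, which covers the 8:00/9:00 case above.
        import datetime
        import subprocess
        import time

        PLAN = [
            # (start "HH:MM", end "HH:MM", command to open)
            ("08:00", "10:00", ["vlc", "/home/me/morning.mp3"]),   # placeholder
        ]

        running = {}

        while True:
            now = datetime.datetime.now().strftime("%H:%M")
            for start, end, cmd in PLAN:
                key = tuple(cmd)
                if start <= now < end and key not in running:
                    running[key] = subprocess.Popen(cmd)      # open the file/program
                elif now >= end and key in running:
                    running.pop(key).terminate()              # close it again
            time.sleep(30)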

    Read the article

  • HP Photosmart C4780 printer/scanner breaks netgear WNDR3700v3 router on connection

    - by CodeJunkie
    A few months ago, we upgraded our Netgear router's firmware. Immediately, we started having trouble connecting to the internet. Each time this problem would happen, every device in the house would stop being able to make new connections to the internet. For example, you couldn't open new pages in the browser, but if Skype was running when the problem happened, you could keep talking to people. The only solution to this problem seemed to be resetting the router to factory defaults. Eventually, we solved this problem by just downgrading the router firmware.

    A while later, we got a new Netgear router. Almost right away, the new router started having exactly the same problems as the old one did on current firmware. The network connection would stay active and the computer would say it had an internet connection, but you couldn't do anything online except for using Skype. We eventually figured out that this happens every time our HP printer gets onto the network. Any time the printer gets onto the wireless network, the whole network stops connecting to the internet almost completely. The only thing that will fix it at that point is to reset the router to factory specs, and unplug the printer so it can't get back onto the network.

    The Netgear router has the latest version of the firmware, but the printer/scanner is very old. It looks to me like this problem is probably a result of a firmware conflict between the printer and the router, but I'm not sure how to fix that problem. Here's some additional information:

        Printer: HP Photosmart C4780
        Router: WNDR3700v3
        Router firmware: V1.0.0.22_1.0.17 (stock, up-to-date firmware)

    Why would the printer getting on the network cause the router to not be able to access the internet correctly until it was reset? What can be done to allow the printer to be on the network without breaking the network for all other devices?

    Edit: One other thing that happens during this internet problem is that multiple computers in the house display an "IP conflict" message repeatedly, and extremely frequently (as often as every five to ten minutes, and every time a connection to the wireless is made).

    Read the article

  • Sudden and frequent hangs on desktop computer: mobo or CPU fault?

    - by djechelon
    I have a desktop computer equipped with an ASUS Crosshair 2 Formula and a Phenom X6 3.2GHz CPU. My problem is that the computer will often hang all of a sudden, completely stopping responding. When that occurs, the reset key is inoperative and the power button turns the computer off but is unable to turn it back on; I have to physically disconnect the power cable. The problem can occur anytime: when I'm booting Windows, when I'm logging in, when I'm listening to a song, when I'm browsing the Internet, etc. It always occurs after a very few minutes of 3D gameplay.

    I thought it was a video card fault. I had three 8800GTXs, so I could try all combinations of them: that didn't fix it. I thought it was a RAM problem: I tried running with only a subset of my DDR2 banks, but that didn't fix it either.

    Almost every time I have to reset and reconfigure the BIOS (without AHCI, Win7 won't boot, so I need to restore a few things). If I enable AMD Live, Cool'n'Quiet or other things from the CPU configuration menu, I can be sure that the computer won't reach the Windows desktop in 99% of cases (it randomly hangs somewhere in the boot process or even in the BIOS POST). Another interesting thing is that during the POST process the computer always takes an unusually long time detecting USB devices (the LCD POSTer shows USB INIT); I've also tried disconnecting all USB devices, but it didn't take less time to POST. The BIOS revision is 2702, the latest.

    Today I found a different behaviour once: during the boot screen I got a BSOD with error Stop 0x00000101, "A clock interrupt was not received on a secondary processor within the allocated time interval", which is usually related to overclocking, but I never overclocked my CPU.

    Judging from the description of my problem, hoping someone has had the same one and fixed it, and since I don't have a spare CPU or motherboard for replacement, I'd like to ask whether you think this is a problem with a faulty CPU or a faulty motherboard, and whether I can perform additional tests (I mean software tests, because of my lack of spare components) to identify the component to replace.

    Read the article

  • I deployed Flash Player via a Software Installation policy. How to upgrade?

    - by eleven81
    I have a Windows Server 2008 machine as my DC. Earlier this year I created a Software Installation GPO to deploy the Adobe Flash Player plugin MSI. I assigned the policy to the computers; about half run Windows XP x86 and the other half Windows 7 x64. That all works like clockwork.

    When I created the Software Installation policy, I disabled the Flash Player plugin's automatic update feature by editing the MSI in Orca. I did this because I wanted all of my machines to run the exact same version of the plugin. Now some time has passed and a newer version of the Flash Player plugin has been released, so it is time for me to push out the updated version. I already have the new MSI, but I am lost on what to do next.

    I see the Upgrades tab in the Software Installation GPO, but everything there reads as though it would be used for add-ons to a larger master program, not for updates released over time. I have read that it is best to create a new Software Installation policy with the new MSI, revoke the old GPO, and assign the new GPO; I feel as though, over time, I will wind up with more revoked policies than active ones. I have also read that some people have had success by replacing the old MSI with the new MSI and simply telling the GPO to redeploy. This seems like a backdoor method that will only get me into trouble.

    In short, what is the correct, best-practice, or preferred way to roll out the new version via Group Policy?

    Read the article

  • Apache HTTPS is slow

    - by raucous12
    Hey, I've set apache up to use SSL with a self signed certificate. With http (KeepAlive off), I can get over 5000 requests per second. However, with https, I can only get 13 requests per second. I know there is supposed to be a bit of an overhead, but this seems abnormal. Can anyone suggest how I might go about debugging this. Here is the ab log for https: Server Software: Apache/2.2.3 Server Hostname: 127.0.0.1 Server Port: 443 SSL/TLS Protocol: TLSv1/SSLv3,DHE-RSA-AES256-SHA,4096,256 Document Path: /hello.html Document Length: 29 bytes Concurrency Level: 5 Time taken for tests: 30.49425 seconds Complete requests: 411 Failed requests: 0 Write errors: 0 Total transferred: 119601 bytes HTML transferred: 11919 bytes Requests per second: 13.68 [#/sec] (mean) Time per request: 365.565 [ms] (mean) Time per request: 73.113 [ms] (mean, across all concurrent requests) Transfer rate: 3.86 [Kbytes/sec] received Connection Times (ms) min mean[+/-sd] median max Connect: 190 347 74.3 333 716 Processing: 0 14 24.0 1 166 Waiting: 0 11 21.6 0 165 Total: 191 361 80.8 345 716 Percentage of the requests served within a certain time (ms) 50% 345 66% 377 75% 408 80% 421 90% 468 95% 521 98% 578 99% 596 100% 716 (longest request)

    Read the article

  • Master File Table Corrupt, any way to save data?

    - by domen
    Hi. I've used search, but none of the results match my problem, so I have to ask a separate question.

    I installed Windows 7 RTM recently, and since then the partitions located on one of my HDDs have gone "crazy". They used to "freeze" and not open in Explorer for some time (a minute or two, usually); sometimes all partitions of the drive wouldn't show up until reboot; and finally, one of those partitions started showing the "disk structure is corrupted and unreadable" warning, appeared in the Disk Management window as RAW, and chkdsk reported "mft corrupt". There were no important data on that partition and I didn't have enough time to analyze the problem at the moment, so I just reformatted it and ran an antivirus scan on the system. After that the problem settled down for some time, but yesterday the problematic HDD vanished from the system again. After reboot, chkdsk identified the MFT of four partitions as corrupt, and now they are all in the same condition as the one mentioned above. The difference is that the files stored in them are extremely important.

    Just for info: I upgraded from Win7 build 7077, but had some performance issues, so I reformatted the system drive and installed a fresh Win7 RTM on it. I've downloaded TestDisk and it shows all the partitions marked as NTFS (not RAW), but my knowledge of the program wasn't sufficient to obtain any other info from it :-)

    Images that could help describe the problem (sorry, I'm not allowed to post images and more than one hyperlink):

    http:// img22.imageshack.us/img22/5909/chkdskz.jpg
    http:// img198.imageshack.us/img198/5576/computeray.jpg

    I'm interested in whether there is a way to restore the MFT, or at least access the files so I can back them up before reformatting the drive. Thanks for your time. :)

    P.S. My reformatted drive is showing no problems; could there be a problem with Windows 7 itself? I googled, but with no results.

    Read the article

  • DNS resolve .com domain on local domain

    - by Joost Verdaasdonk
    I'm building a local 2008 R2 domain as a test case, to be able to write a roadmap for the real new domain that needs to be created soon. What I would like to know is whether I can make a record in DNS that points the domain names www.example.com and example.com to one of the servers in my network. I tried creating an A record for it, but that doesn't work. To be honest, I'm not even sure if this is possible. So can I do this? That way I would be able to fully test all our services (and web app) offline before I build the real domain and switch the DNS records at the provider. Some advice, if possible on where to start, is appreciated.

    The solution (thanks Brent):

    1. Create a new forward lookup zone for example.com.
    2. Create an empty A record pointing to the IP of the web server you are targeting.
    3. If www is needed, create an A record with the name www and the IP of your web server.
    4. For sub domains, repeat the process but with names such as sub or www.sub (and the IP of your web server).

    Be aware of the DNS cache while you are in this process; things can take time. Alternatively, right-click the server and choose "Clear Cache", and in CMD run ipconfig /flushdns (to flush the client cache).
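
    To confirm the new zone answers as expected from a client, a quick Python check (the names are the test names above):

        # Resolve each test name from the client's point of view and print
        # the address it maps to, or the resolver error if it doesn't resolve.
        import socket

        for name in ("example.com", "www.example.com", "sub.example.com"):
            try:
                print(name, "->", socket.gethostbyname(name))
            except socket.gaierror as err:
                print(name, "->", err)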

    Read the article

  • Weird problem with Visual C++ 2010 Express

    - by Robert Vella
    This has happened before on my Vista Premium installation, and now it's happening on my Windows 7 Home Premium installation. Basically, every time I install Visual C++ 2010 Express, it works fine for a random amount of time but then suddenly starts to hide from my sight - that's the best way I can explain it. VS does not crash, and from what I can tell it does not freeze either; it continues to work, I can even "minimize" and "maximize" it; I simply cannot see it, nor can I interact with it in any meaningful way.

    Also:

    After the "crash" there are no logs in Root\Program Files (x86)\Microsoft Visual Studio 10.0\Microsoft Visual C++ 2010 Express - ENU, nor any other files created at the time of the crash.
    There are no traces in the Event Viewer.
    The program seems to be functioning perfectly in the process manager.
    If I reinstall Visual C++, it works normally for a seemingly random period of time before going cuckoo again.

    I am stumped. This has never happened to me before, with any other program. And yet I doubt it really is a problem with Visual C++; it seems more like something general that has picked on it for some reason. Still, after a clean install with a new OS, I'm kind of thinking there's something wrong here. Any help would be appreciated, although I suspect that the answer to this question will make me feel embarrassed.

    P.S. Not sure if it helps, but I think around the same time I started having problems (on both installations) with Windows turning off the display when I leave the computer, and then seemingly crashing when it turns it on again - in fact, when I interact with it, it seems to be responding to my commands without actually displaying anything.

    Read the article

  • Managing rolling deployments in the cloud

    - by Josh Nankin
    Recently I've been experimenting with various cloud management tools like RightScale, Scalr, and custom scripts for managing a variety of servers, each hosting several roles (app, db, load balancer, job queues, etc.). The one thing I find lacking in most solutions is a way to do rolling deployments, i.e. running deployments sequentially across a number of servers with the same role. For instance, I don't want to build all of my web servers at the same time, as that will almost definitely result in some downtime or 500s for my customers. I'd rather have one or two servers build at a time, while the other servers are still available to handle requests. The other alternative is obviously to launch new servers that automatically update themselves on boot, but this isn't as cost-effective, and most likely requires more time for the build to complete (it's faster to build on an existing server than to launch a new server and kill old ones).

    We've all heard of the big companies having the famous "push to build" button (companies like Twilio, Etsy, etc.), but it seems that they all have custom implementations of this. I'm not talking about a simple ssh-loop, clusterssh, or even mcollective - I preferably want something with a nice simple interface that allows me to specify something like a RightScript or a Scalr script to run on a set of servers with a specific role, and that builds them sequentially. Does anyone know of easy ways to get this done, or is this a candidate for a new open source project?
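
    Just to pin down the behaviour I mean by "rolling", here is the sequencing as a rough Python sketch (hostnames, deploy command and health URL are placeholders) - this is the logic I'd like a managed tool to provide rather than something I maintain myself:

        # Rough sketch of rolling-deploy sequencing: build servers of one role
        # one at a time, and only move on once each passes a health check again.
        import subprocess
        import time
        import urllib.request

        WEB_SERVERS = ["web1.example.com", "web2.example.com", "web3.example.com"]
        DEPLOY_CMD = "sudo /usr/local/bin/deploy_app.sh"   # placeholder
        HEALTH_URL = "http://{host}/healthcheck"           # placeholder

        def healthy(host, retries=30):
            for _ in range(retries):
                try:
                    resp = urllib.request.urlopen(HEALTH_URL.format(host=host), timeout=5)
                    if resp.status == 200:
                        return True
                except OSError:
                    pass
                time.sleep(10)
            return False

        for host in WEB_SERVERS:                     # one server at a time
            subprocess.run(["ssh", host, DEPLOY_CMD], check=True)
            if not healthy(host):
                raise SystemExit(f"{host} never came back healthy; stopping rollout")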

    Read the article

  • Mesh-networked servers via VPN

    - by microspino
    I have a design idea and I would like some advice from SF about it. I have 5 customers with small real-estate databases. I've built a desktop app for them, and now they would like to merge their databases to share their data. I don't want to centralize everything in one place, nor do I want to do maintenance for servers. They also told me that all of them have little servers in their offices, with maintenance guys available.

    Although everything seems suitable for a web application, I had the idea to experiment with something new: each customer's small server would be connected to the others in a sort of mesh network, through VPNs, without a single point of failure. If one of the servers went down, the customers could still connect to their databases from one of the other mesh-networked servers instead of from the local one that is down. During normal operations all the servers sync the database with the others through the VPNs. I can accept a half-day window of non-synced data; in other words, since I don't need real-time synchronization, the servers don't have to stay in sync at all times. I can migrate my data over to non-SQL technologies like CouchDB or Redis or whatever you suggest.

    As you can see, I don't have a lot of constraints, and although I could go with a web application, I would like to delegate and decentralize support, data privacy and management as much as I can to my customers' offices. Is that a crazy idea? Do you know if something similar exists? Which technology would you suggest?
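
    If I went the CouchDB route, my understanding is that the peer-to-peer sync could be kicked off through its replication API - a rough sketch (hosts and database name are placeholders):

        # Ask the local CouchDB node to continuously pull-replicate the office
        # database from every other peer in the mesh.
        import json
        import urllib.request

        PEERS = ["http://office2:5984", "http://office3:5984"]   # placeholders
        LOCAL = "http://localhost:5984"
        DB = "realestate"

        for peer in PEERS:
            body = json.dumps({
                "source": f"{peer}/{DB}",
                "target": DB,
                "continuous": True,
            }).encode()
            req = urllib.request.Request(
                f"{LOCAL}/_replicate", data=body,
                headers={"Content-Type": "application/json"})
            print(peer, urllib.request.urlopen(req).status)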

    Read the article
