Search Results

Search found 23072 results on 923 pages for 'microsoft words'.


  • Reliable custom Windows shortcut keys?

    - by Peter Baer
    I have global Windows shortcut keys assigned to several different cmd.exe instances. I do this by creating shortcuts to cmd.exe on my desktop, and assigning each one a unique shortcut key (for example, CTRL + SHIFT + U). Pretty basic stuff. I'm using Win2K8 (R1 and R2). This works just fine... most of the time. But with infuriating regularity, sometimes it doesn't. Or it will work with a long delay (many seconds). It doesn't matter what app currently has focus (it can even be one of the command prompts). It doesn't matter what keys I assign (I've tried a few variations of WIN, CTRL and SHIFT). I did notice that this is often, but not always, correlated with explorer.exe struggling in some way or another (say, an explorer window opened to a file share that's unavailable, or an app being unresponsive, or whatever). In other words the shortcut key handling appears to be very sensitive to unrelated system activity. Note that whenever I have this problem I can always successfully ALT + TAB to the window I want to get to, but that's tedious. I use the shortcuts to these command windows hundreds of times a day so even a 1% failure rate becomes really annoying. Is there a way to fix this, or is there some third-party utility out there that will RELIABLY intercept custom key combinations to bring focus to whatever apps I want, in a way that is independent of other system activity? ADDENDUM: There is a property of the Windows shortcuts that I would not want to lose if switching to a third-party hotkey tool: Windows shortcuts are idempotent. Once you've launched a shortcut to some app, pressing the shortcut key combo again takes you to the already launched process - it does not launch a new process.

    Read the article

  • Slowdown upon router/modem setup change

    - by Ollie Saunders
    I’ve been using a Belkin FSD7632-4 modem router to connect to my TalkTalk-provided ADSL internet connection for some time and have been pretty happy with it. Recently, however, the connection has been failing, so I decided to get an ASUS RT-N16 instead, which is also a much more capable router generally. The ASUS RT-N16 doesn’t come with a built-in modem, so I purchased a Zoom modem as well. I’ve set them both up and am using them to post this message. But I’m a bit miffed to find that I get a significantly and consistently slower downstream rate from the new configuration than with the old Belkin:

        Belkin modem router:       downstream 3.45 Mbps, upstream 0.73 Mbps
        ASUS router + Zoom modem:  downstream 2.71 Mbps, upstream 0.66 Mbps

    Any ideas why this is? The really weird thing about this is that the Zoom supports ADSL2 and ADSL2+ but I don’t think the old Belkin does. At first I thought it might be due to the Zoom modem being limited to PPPoE instead of PPPoA, which my ISP supports, but then I tried using PPPoE with the Belkin and that still gave a high speed. I’m using VC-Mux encapsulation with both, with a VPI of 0 and a VCI of 38. I pulled this data off the Zoom:

        Mode: ADSL2    Line Coding: Trellis On    Status: No Defect    Link Power State: L0

                                                              Downstream   Upstream
        SNR Margin (dB):                                      12.3         11.8
        Attenuation (dB):                                     43.0         24.9
        Output Power (dBm):                                   12.9         0.0
        Attainable Rate (Kbps):                               3936         844
        Rate (Kbps):                                          3194         840
        MSGc (number of bytes in overhead channel message):   59           10
        B (number of bytes in Mux Data Frame):                99           14
        M (number of Mux Data Frames in FEC Data Frame):      2            16
        T (Mux Data Frames over sync bytes):                  1            8
        R (number of check bytes in FEC Data Frame):          8            8
        S (ratio of FEC over PMD Data Frame length):          1.9833       9.0594
        L (number of bits in PMD Data Frame):                 839          219
        D (interleaver depth):                                32           2
        Delay (msec):                                         15           4
        Super Frames:                                         15808        14078
        Super Frame Errors:                                   0            4294967232
        RS Words:                                             513778       111753
        RS Correctable Errors:                                126          4294967238
        RS Uncorrectable Errors:                              0            N/A
        HEC Errors:                                           0            4294967279
        OCD Errors:                                           0            0
        LCD Errors:                                           0            0
        Total Cells:                                          1920175      237597
        Data Cells:                                           205993       392
        Bit Errors:                                           0            0
        Total ES:                                             0            0
        Total SES:                                            0            0
        Total UAS:                                            34           0

    Read the article

  • Correct use of SMTP "Sender" header?

    - by Eric Rath
    Our web application sends email messages to people when someone posts new content. Both sender and recipient have opted into receiving email messages from our application. When preparing such a message, we set the following SMTP headers: FROM: [email protected] TO: [email protected] SENDER: [email protected] We chose to use the author's email address in the FROM header in an attempt to provide the best experience for the recipient; when they see the message in their mail client, the author is clear. To avoid the appearance of spoofing, we added the SENDER header (with our own company email address) to make it clear that we sent the message on the author's behalf. After reading RFCs 822 and 2822, this seems to be an intended use of the sender header. Most receiving mail servers seem to handle this well; the email message is delivered normally (assuming the recipient mailbox exists, is not over quota, etc). However, when sending a message FROM an address in a domain TO an address in the same domain, some receiving domains reject the messages with a response like: 571 incorrect IP - psmtp (in reply to RCPT TO command) I think this means the receiving server only saw that the FROM header address was in its own domain, and that the message originated from a server it didn't consider authorized to send messages for that domain. In other words, the receiving server ignored the SENDER header. We have a workaround in place: the webapp keeps a list of such domains that seem to ignore the SENDER header, and when the FROM and TO headers are both in such a domain, it sets the FROM header to our own email address instead. But this list requires maintenance. Is there a better way to achieve the desired experience? We'd like to be a "good citizen" of the net, and all parties involved -- senders and recipients -- want to participate and receive these messages. One alternative is to always use our company email address in the FROM header, and prepend the author's name/address to the subject, but this seems a little clumsy.
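
    If it helps to see how a particular receiving server treats the From/Sender combination outside the webapp, a throwaway message can be hand-crafted with a testing tool such as swaks. This is only a rough sketch; the addresses and relay host below are placeholders, not the real ones:

        # Reproduce the same header combination against a specific MX
        # (hypothetical addresses and relay; substitute real values).
        swaks --server mx.customer-domain.example \
              --from author@customer-domain.example \
              --to recipient@customer-domain.example \
              --header "Sender: notifier@webapp.example" \
              --header "Subject: Sender-header delivery test"

    Watching the SMTP conversation this way makes it easier to tell whether the rejection happens at RCPT time (before the Sender header is ever seen) or after DATA.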

    Read the article

  • grep --color=auto with -i option disables the matching text color, why?

    - by emptyset
    I was messing around with grep and put this in my .zshenv: export GREP_OPTIONS="--color=auto" export GREP_COLORS='mt=1;34' I was bonking my head on the keyboard and changing GREP_COLORS around for a minute trying to figure out why the folder colors were working, but the matching text wasn't. I was doing this: $ grep -R -n -i -e "functionFoo\(" --include=*.cs --exclude-dir=Logs * The line number and file names were set with the default colors, but the matching text wasn't. After spending way too much time, I thought to do this: $ grep -R -n -e "functionFoo\(" --include=*.cs --exclude-dir=Logs * (I removed the -i option.) That's all it took to get the matching text to correctly show up in bold blue. This is a Cygwin on Vista setup, with rxvt running zsh. Any idea why grep colors would break on specifying a case-insensitive match? Update: Under cygwin 1.7, it's a little bit better - case insensitive search works correctly, but it only highlights the word that matches the expression exactly. In other words, "FunctionFoo" highlights "FunctionFoo" but not "functionFoo" and vice versa. Probably a grep issue so I'll be submitting it to that list.
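
    A minimal way to reproduce the behaviour outside the big recursive search is something like the sketch below (the pattern and input are just placeholders). Note that in GREP_COLORS, mt= is shorthand for the finer-grained ms= (selected match) and mc= (context match) capabilities, which can be worth trying separately:

        # Rough repro: same colour settings, with and without -i
        export GREP_OPTIONS="--color=auto"
        export GREP_COLORS='mt=1;34'
        echo "FunctionFoo functionFoo" | grep "functionFoo"      # match text coloured?
        echo "FunctionFoo functionFoo" | grep -i "functionFoo"   # same input, -i added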

    Read the article

  • TCPDump and IPTables DROP by string

    - by Tiffany Walker
    by using tcpdump -nlASX -s 0 -vvv port 80 I get something like: 14:58:55.121160 IP (tos 0x0, ttl 64, id 49764, offset 0, flags [DF], proto TCP (6), length 1480) 206.72.206.58.http > 2.187.196.7.4624: Flags [.], cksum 0x6900 (incorrect -> 0xcd18), seq 1672149449:1672150889, ack 4202197968, win 15340, length 1440 0x0000: 4500 05c8 c264 4000 4006 0f86 ce48 ce3a E....d@[email protected].: 0x0010: 02bb c407 0050 1210 63aa f9c9 fa78 73d0 .....P..c....xs. 0x0020: 5010 3bec 6900 0000 0f29 95cc fac4 2854 P.;.i....)....(T 0x0030: c0e7 3384 e89a 74fa 8d8c a069 f93f fc40 ..3...t....i.?.@ 0x0040: 1561 af61 1cf3 0d9c 3460 aa23 0b54 aac0 .a.a....4`.#.T.. 0x0050: 5090 ced1 b7bf 8857 c476 e1c0 8814 81ed P......W.v...... 0x0060: 9e85 87e8 d693 b637 bd3a 56ef c5fa 77e8 .......7.:V...w. 0x0070: 3035 743a 283e 89c7 ced8 c7c1 cff9 6ca3 05t:(>........l. 0x0080: 5f3f 0162 ebf1 419e c410 7180 7cd0 29e1 _?.b..A...q.|.). 0x0090: fec9 c708 0f01 9b2f a96b 20fe b95a 31cf ......./.k...Z1. 0x00a0: 8166 3612 bac9 4e8d 7087 4974 0063 1270 .f6...N.p.It.c.p What do I pull to use IPTables to block via string. Or is there a better way to block attacks that have something in common? Question is: Can I pick any piece from that IP packet and call it a string? iptables -A INPUT -m string --alog bm --string attack_string -j DROP In other words: In some cases I can ban with TTL=xxx and use that should an attack have the same TTL. Sure it will block some legit packets but if it means keeping the box up it works till the attack goes away but I would like to LEARN how to FIND other common things in a packet to block with IPTables
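
    For reference, the string match module spells the option --algo (bm or kmp), and other match modules can key on packet fields such as TTL or length. A rough sketch with placeholder values, assuming the kernel/iptables build includes these modules:

        # Match a payload substring (plain text or raw hex from the dump above)
        iptables -A INPUT -p tcp --dport 80 -m string --algo bm --string "attack_string" -j DROP
        iptables -A INPUT -p tcp --dport 80 -m string --algo bm --hex-string "|30 35 74 3a|" -j DROP

        # Match on other fields an attack may have in common
        iptables -A INPUT -m ttl --ttl-eq 64 -j DROP                     # specific TTL
        iptables -A INPUT -p tcp -m length --length 1400:1500 -j DROP    # packet length range

    The string match inspects every packet's payload, so it is relatively expensive; it is usually paired with -p/--dport restrictions as above to limit how much traffic it has to scan.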

    Read the article

  • Apache2: Limit simultaneous requests & throttle bandwidth per IP/client?

    - by xentek
    I want to limit simultaneous requests & throttle bandwidth per IP/client on a single Apache vhost. In other words, I want to ensure that this site, which hosts large media files, doesn't get hammered by someone trying to download everything all at once (just happened the other night). I'd like to limit the outgoing transfer speed overall for this site, as well as limit the number of connections a single IP can make to the server to a sane default (i.e. within normal browser limits for multiple requests, so page loads aren't affected too much). Bonus points if I can actually scope it to file types (i.e. leave web files alone, but apply these rules to just the media files). We're running Ubuntu 9.04 on all the servers, and have two Apache/PHP servers being load balanced via round robin by a Squid proxy server. MySQL is running on its own box as well. We've got plenty of bandwidth to give them, so I don't really want overall caps, but just want to throttle the amount of memory/CPU it takes to serve this site. There are other sites on these servers that we don't want to apply these rules to; we just want to keep this one from hogging all the resources. Let me know if you need more info! Thanks in advance for your suggestions!
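
    For what it's worth, on Apache 2.4 the bundled mod_ratelimit can cap per-connection transfer speed and can be scoped to file types; Ubuntu 9.04 ships Apache 2.2, where that module isn't available, so per-IP connection limits there usually mean a third-party module such as mod_limitipconn or mod_qos. A sketch for the 2.4 case, with placeholder values and extensions:

        # Apache 2.4 sketch: throttle only the media files, leave web files alone
        <IfModule mod_ratelimit.c>
            <FilesMatch "\.(mp4|mov|zip|iso)$">
                SetOutputFilter RATE_LIMIT
                SetEnv rate-limit 400        # KiB/s per connection
            </FilesMatch>
        </IfModule>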

    Read the article

  • can't connect 2 subnets through RRAS 2008 r2

    - by mcdwight6
    I'm working on a project for a networking class. In VMWare Workstation, I have to set up a 2008 r2 server with DHCP reservations for 2 clients on separate subnets and have them ping each other. Here is the output of the route print command:

        ===========================================================================
        Interface List
         13 ...00 50 56 2a e7 11 ...... Intel(R) PRO/1000 MT Network Connection #3
         10 ...00 0c 29 66 88 dd ...... Intel(R) PRO/1000 MT Network Connection
          1 ........................... Software Loopback Interface 1
         24 ...00 00 00 00 00 00 00 e0  Microsoft ISATAP Adapter
         11 ...02 00 54 55 4e 01 ...... Teredo Tunneling Pseudo-Interface
         14 ...00 00 00 00 00 00 00 e0  6TO4 Adapter
         16 ...00 00 00 00 00 00 00 e0  Microsoft ISATAP Adapter #2
         17 ...00 00 00 00 00 00 00 e0  isatap.{5B8FB196-616F-4168-A020-03E63A309CEC}
        ===========================================================================

        IPv4 Route Table
        ===========================================================================
        Active Routes:
        Network Destination        Netmask          Gateway    Interface   Metric
                  0.0.0.0          0.0.0.0          On-link     10.0.0.2      266
                  0.0.0.0          0.0.0.0          On-link    223.6.6.2      266
                 10.0.0.0        255.0.0.0          On-link     10.0.0.2      266
                 10.0.0.2  255.255.255.255          On-link     10.0.0.2      266
           10.255.255.255  255.255.255.255          On-link     10.0.0.2      266
                127.0.0.0        255.0.0.0          On-link    127.0.0.1      306
                127.0.0.1  255.255.255.255          On-link    127.0.0.1      306
          127.255.255.255  255.255.255.255          On-link    127.0.0.1      306
                128.6.0.0      255.255.0.0          On-link     10.0.0.2       11
            128.6.255.255  255.255.255.255          On-link     10.0.0.2      266
                223.6.6.0    255.255.255.0          On-link     10.0.0.2       11
                223.6.6.0    255.255.255.0          On-link    223.6.6.2      266
                223.6.6.2  255.255.255.255          On-link    223.6.6.2      266
              223.6.6.255  255.255.255.255          On-link     10.0.0.2      266
              223.6.6.255  255.255.255.255          On-link    223.6.6.2      266
                224.0.0.0        240.0.0.0          On-link    127.0.0.1      306
                224.0.0.0        240.0.0.0          On-link     10.0.0.2      266
                224.0.0.0        240.0.0.0          On-link    223.6.6.2      266
          255.255.255.255  255.255.255.255          On-link    127.0.0.1      306
          255.255.255.255  255.255.255.255          On-link     10.0.0.2      266
          255.255.255.255  255.255.255.255          On-link    223.6.6.2      266
        ===========================================================================
        Persistent Routes:
          Network Address          Netmask  Gateway Address  Metric
                  0.0.0.0          0.0.0.0         10.0.0.2  Default
                  0.0.0.0          0.0.0.0        128.6.0.2  Default
                  0.0.0.0          0.0.0.0        223.6.6.2  Default
                128.6.0.0      255.255.0.0         10.0.0.2  1
                223.6.6.0    255.255.255.0         10.0.0.2  1
        ===========================================================================

        IPv6 Route Table
        ===========================================================================
        Active Routes:
         If  Metric  Network Destination      Gateway
          1     306  ::1/128                  On-link
         14    1010  2002::/16                On-link
         14     266  2002:8006:2::8006:2/128  On-link
          1     306  ff00::/8                 On-link
        ===========================================================================
        Persistent Routes:
          None

    My problem is that although I have set up both dynamic and persistent static routes in my r2 server, neither of the clients can ping even the NIC outside its own subnet. For example, Client A can ping the NIC at 10.0.0.2 and vice-versa, but it gets a general transmit failure when it tries to ping the card at 223.6.6.2, let alone trying to ping the other client. I have completely disabled the firewalls on all machines and anything else I could think of, without success. What am I missing?

    Edit: Since posting this, I also noticed that the default gateways on my 2 NICs keep getting zeroed out. Does anyone know a fix for this?

    Read the article

  • Is reliability reputation of mechanical keyboards overblown?

    - by Rarst
    A while back I worked up to finally buying a mechanical keyboard (~$100 range, "black" switches) and was initially quite content with the purchase. However, just outside the first year (read: as soon as the warranty expired) it started to develop repeat issues (press once, get a chain of the letter repeated) on multiple keys. It doesn't react to generic cleaning (up to compressed air), and searching the Internet shows a noticeable number of people with similar-to-identical issues, spanning years. This makes me severely hesitant to buy another mechanical keyboard, considering: every other keyboard I ever owned, including ultra-cheap crap, managed to last longer than that; the typing experience is nice, but not lifechanging-fan-forever nice for me; my choice of mechanical keyboards is severely limited (not many brands are represented in the local market, and primarily crazy-looking gamer models); needing a Russian (ideally Russian and Ukrainian) layout excludes international ordering; and the price tag for the meek year of use I got out of it is plain demoralizing. It is obvious mechanical keyboards have their fans, but shopping around for the "best fit" or getting into multiple-hundreds price tags is probably not something I am highly interested in. Considering my constraints and bad experience with reliability, is it practical for me to sink more money into buying mechanical keyboard(s) again? In other words: manufacturers are beaming about how crazy reliable mechanical keyboards are. Are active long-time users of such keyboards confidently of the same opinion?

    Read the article

  • iTunes copy just metadata (song and album ratings, playlists) from iPod

    - by Jared Updike
    I have an iPod touch that I synched with my Windows computer (iTunes 9.0 I think) until my hard drive failed and I lost my entire library. I rebuilt the library (songs) from a year-old backup (and various other sources for songs) but my playlists and ratings are of course a year old. My iPod itself has most of the playlists and ratings I care most about (favorite songs and albums, rated 4 and 5, for example). I have a catch-22 situation where I feel nervous that I haven't backed up my iPod in around 4 months (when my drive failed) so I'd like to back it up as soon as possible... but if I back it up I have to clear all the songs and playlists and copy them back, which I can't really do since I need to rebuild my playlists on my computer first (using the data only available on my iPod!) The question: is there a better way to READ the information off my iPod than doing it manually, song by song and album by album and playlist by playlist (XML, text dump, database, spreadsheet, anything)? In other words, mostly I want the information (metadata like ratings and playlists, not songs) copied off the iPod so I can more quickly get my iTunes library ratings and playlists re-built (manually) so I can finally wipe the music and back up my apps, etc. Then I'd like to copy the music back immediately. The part I'd like to avoid is manually navigating everything on my iPod to read through all the playlists and ratings (50 GB, 6,000+ songs) as I re-enter all of that data by hand. I've done a few dozen albums and it's pretty time-consuming having to tap around on the iPod. Reading from a spreadsheet (for example, or XML which I could write a script to get into spreadsheet form) would probably help tremendously, plus then I'd have a backup of that information somewhere besides just my iPod.

    Read the article

  • Mac, VNC and multiple monitors

    - by MarqueIV
    I asked a similar question here before but apparently I wasn't as clear as I had expected by the responses. That said, I'll try again. I have a Mac Pro with quad monitors which I would like to access remotely. I've been using VNC for this (either via screen sharing or a dedicated VNC client), which works, but the VNC protocol matches the physical layout/resolutions of attached monitors. One of the things I like about Microsoft's Remote Desktop (Terminal Server) client is that when you connect, it blanks out the local screens and sets the resolution to a client-specified setting. In other words, when natively running Windows, even though I'm running a physical 30" monitor flanked by 2 24" monitors as well as a 21" Cintiq monitor, I can set the Remote Desktop resolution to match my notebook's screen giving me a native, single-monitor configuration. As soon as I disconnect (and you log back in locally), the desktop un-blanks and the resolution resets back to the four physically attached monitors. Again, VNC works and yes I know I can use 5901, 5902...n to attach VNC to a specific monitor as opposed to the entire desktop, but I'm still at the mercy of trying to look at a 2560x1600 resolution on a 1280x800 screen. I'm left with either scaling (everything's too small) or panning/scrolling (it's like playing hide-and-seek with your documents!) SO... anyone know of any Mac-based remote software (client and server) that will let me connect to my Mac Pro and reset the resolution by the client, just like you can in Windows, or am I SOL?

    Read the article

  • Linux arp cache timeout values

    - by Jak
    I'm trying to configure sane values for the Linux kernel arp cache timeout, but I can't find a detailed explanation as to how they work anywhere. Even the kernel.org documentation doesn't give a good explanation, I can only find recommended values to alleviate overflow. Here is an example of the values I have: net.ipv4.neigh.default.gc_thresh1 = 128 net.ipv4.neigh.default.gc_thresh2 = 512 net.ipv4.neigh.default.gc_thresh3 = 1024 Now, from what I've gathered so far: gc_thresh1 is the number of arp entries allowed before the garbage collector starts removing any entries at all. gc_thresh2 is the soft-limit, which is the number of entries allowed before the garbage collector actively removes arp entries. gc_thresh3 is the hard limit, where entries above this number are aggressively removed. Now, if I understand correctly, if the number of arp entries goes beyond gc_thresh1 but remains below gc_thresh2, the excess will be removed periodically with an interval set by gc_interval. My question is, if the number of entries goes beyond gc_thresh2 but below gc_thresh3, or if the number goes beyond gc_thresh3, how are the entries removed? In other words, what does "actively" and "aggressively" removed mean exactly? I assume it means they are removed more frequently than what is defined in gc_interval, but I can't find by how much.
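
    For reference, the timers that control how often the collector runs live next to the thresholds in the same sysctl tree. A sketch of inspecting the current table size and adjusting the knobs (the numbers below are illustrative only, not recommendations):

        # Rough sketch: inspect and tune neighbour-cache GC behaviour
        ip neigh show | wc -l                                  # how many entries right now
        sysctl -w net.ipv4.neigh.default.gc_thresh1=1024       # below this, GC leaves entries alone
        sysctl -w net.ipv4.neigh.default.gc_thresh2=2048       # soft limit
        sysctl -w net.ipv4.neigh.default.gc_thresh3=4096       # hard limit
        sysctl -w net.ipv4.neigh.default.gc_interval=30        # seconds between GC runs
        sysctl -w net.ipv4.neigh.default.gc_stale_time=60      # seconds before an entry is considered stale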

    Read the article

  • Problem with running a script at startup as root?

    - by Usman Ajmal
    Hi. The main question: is there a way I can run one of my scripts 'completely' when Ubuntu's desktop appears, no matter whether root, an administrator, a desktop user or an unprivileged user is logged in?

    What does the script do? The script mounts a partition, looks for a file in that partition, and finally, on the basis of that file, a decision is made about copying one partition to another. That copying is done via dd if=/dev/sda2 of=/dev/sda5

    When does the script run finely? The script runs smoothly when I run it from the terminal by sudo ./my_copying_script This command asks me for the password of the currently logged in user. I enter the password and the script starts working.

    When does the script NOT run finely? I want to run the script at startup. I set it as a startup program by using the Startup Applications utility of Ubuntu. The script ran at startup but exited at the dd command, returning the following error: dd: opening '/dev/sda2': Permission denied

    On edk's suggestion I set the owner of my_copying_script to root and set the SUID bit. Now the permissions of my_copying_script are (-rwsr-sr-x). edk's point of view was that once I set the SUID, the startup program would run with the permissions of its owner. I did that but the same /dev/sda2 permission denied error came up. I then prefixed the dd with sudo, as mentioned below: sudo dd if=/dev/sda2 of=/dev/sda5 but this returned the following error: sudo: no tty present and no askpass program specified In other words the mounting failed. If I run the script using sudo ./myProgram I don't face this problem and the drive gets mounted successfully.
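
    Two common directions, sketched below with placeholder paths and user names: run the script from root's crontab at boot (note that @reboot fires at boot, not when the desktop appears), or add a narrowly scoped sudoers rule so the Startup Applications entry can call sudo without hitting the "no tty present and no askpass program" failure.

        # Option A (sketch): run as root at boot, independent of who logs in.
        # Added to root's crontab with:  sudo crontab -e
        @reboot /home/username/my_copying_script

        # Option B (sketch): let one user run just this one script without a
        # password, so a Startup Applications entry of
        #   sudo /home/username/my_copying_script
        # works non-interactively. Added with visudo; "username" is a placeholder.
        username ALL=(root) NOPASSWD: /home/username/my_copying_script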

    Read the article

  • Wireshark WPA 4-way handshake

    - by cYrus
    From this wiki page: WPA and WPA2 use keys derived from an EAPOL handshake to encrypt traffic. Unless all four handshake packets are present for the session you're trying to decrypt, Wireshark won't be able to decrypt the traffic. You can use the display filter eapol to locate EAPOL packets in your capture. I've noticed that the decryption works with (1, 2, 4) too, but not with (1, 2, 3). As far as I know the first two packets are enough, at least for what concern unicast traffic. Can someone please explain exactly how does Wireshark deal with that, in other words why does only the former sequence work, given that the fourth packet is just an acknowledgement? Also, is it guaranteed that the (1, 2, 4) will always work when (1, 2, 3, 4) works? Test case This is the gzipped handshake (1, 2, 4) and an ecrypted ARP packet (SSID: SSID, password: password) in base64 encoding: H4sICEarjU8AA2hhbmRzaGFrZS5jYXAAu3J400ImBhYGGPj/n4GhHkhfXNHr37KQgWEqAwQzMAgx 6HkAKbFWzgUMhxgZGDiYrjIwKGUqcW5g4Ldd3rcFQn5IXbWKGaiso4+RmSH+H0MngwLUZMarj4Rn S8vInf5yfO7mgrMyr9g/Jpa9XVbRdaxH58v1fO3vDCQDkCNv7mFgWMsAwXBHMoEceQ3kSMZbDFDn ITk1gBnJkeX/GDkRjmyccfus4BKl75HC2cnW1eXrjExNf66uYz+VGLl+snrF7j2EnHQy3JjDKPb9 3fOd9zT0TmofYZC4K8YQ8IkR6JaAT0zIJMjxtWaMmCEMdvwNnI5PYEYJYSTHM5EegqhggYbFhgsJ 9gJXy42PMx9JzYKEcFkcG0MJULYE2ZEGrZwHIMnASwc1GSw4mmH1JCCNQYEF7C7tjasVT+0/J3LP gie59HFL+5RDIdmZ8rGMEldN5s668eb/tp8vQ+7OrT9jPj/B7425QIGJI3Pft72dLxav8BefvcGU 7+kfABxJX+SjAgAA Decode with: $ base64 -d | gunzip > handshake.cap Run tshark to see if it correctly decrypt the ARP packet: $ tshark -r handshake.cap -o wlan.enable_decryption:TRUE -o wlan.wep_key1:wpa-pwd:password:SSID It should print: 1 0.000000 D-Link_a7:8e:b4 - HonHaiPr_22:09:b0 EAPOL Key 2 0.006997 HonHaiPr_22:09:b0 - D-Link_a7:8e:b4 EAPOL Key 3 0.038137 HonHaiPr_22:09:b0 - D-Link_a7:8e:b4 EAPOL Key 4 0.376050 ZyxelCom_68:3a:e4 - HonHaiPr_22:09:b0 ARP 192.168.1.1 is at 00:a0:c5:68:3a:e4

    Read the article

  • Can I boot up a virtual machine natively?

    - by Anshul
    My question is: is it possible to run a virtual machine natively on your hardware if you have installed the proper drivers etc? In other words, can I use a VHD as a regular hard drive to boot from? The reason I want to do this is that I do both graphics-intensive and audio-intensive work, but my computer is not powerful enough to handle both at the same time, and many times I install a bunch of audio programs that I don't want affecting the stability of my graphics programs. Basically I wanted to have sandboxing between the two sets of applications. So I tried running the graphics-intensive programs in a VirtualBox VM and the audio-intensive work natively (simply because it's a pain to route ASIO audio devices in/out of VirtualBox). This kind-of works - the graphics-intensive stuff is tolerable, but still relatively slow, because it's running inside a VM. So my next idea was to just dual-boot and install the graphics and audio programs in separate partitions, but I frequently use them in tandem, so it wouldn't be practical to reboot my machine every time I need to use the other set of programs. But I could live with this scenario: If I need to do more audio-intensive stuff, I'll just boot up to the audio partition and run the graphics programs in a VM, and then when I'm working heavily on the graphics part, I'll just boot the graphics partition as a regular OS directly on the hardware. Is this possible? For example by booting up a VHD as a regular hard drive? Or by setting up dual-boot, and every time the audio partition is shut down, synchronize the graphics VM VHD with the native graphics partition? Is it practical, given the above scenario? And if it's not possible, barring buying another computer, can anyone suggest a best-of-all-worlds setup (the worlds being performance, sandboxing, and running in parallel) for the above scenario? Thanks in advance.
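
    For reference, Windows 7 Ultimate/Enterprise (and later) can boot natively from a VHD through the boot manager, which may cover the "use a VHD as a regular hard drive to boot from" part. A rough sketch with bcdedit; the path and the {guid} printed by the copy step are placeholders:

        rem Sketch only: add a native-VHD boot entry (run from an elevated prompt)
        bcdedit /copy {current} /d "Graphics VHD"
        bcdedit /set {guid} device   vhd=[D:]\VMs\graphics.vhd
        bcdedit /set {guid} osdevice vhd=[D:]\VMs\graphics.vhd
        bcdedit /set {guid} detecthal on

    Whether an existing VirtualBox VHD boots cleanly this way depends on the drivers and HAL inside it, so treat this as a direction to test rather than a guaranteed setup.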

    Read the article

  • Mesh Networked servers via vpn

    - by microspino
    I got a design idea and I would like to have some advice from SF about it. I have 5 customers with small real-estate databases. I've built a desktop app for them, and now they would like to merge their databases to share their data. I don't want to centralize everything in one place, nor do I want to do maintenance for servers. They also told me that all of them have little servers and maintenance guys available in their offices. Although everything seems suitable for a web application, I had the idea to experiment with something new: any customer's small server would be connected to the others in a sort of mesh network, without a single point of failure, and through VPNs. If one of the servers went down, the customers could still connect to their databases from one of the other mesh-networked servers instead of from the local one that is down. During normal operations all the servers sync the db with the others through VPNs. I can accept a half-day timing window of non-synched data; in other words, since I don't need real-time synchronization, the servers don't have to always stay in sync. I can migrate my data over to other non-SQL technologies like CouchDB or Redis or whatever you suggest. As you can see I don't have a lot of constraints, and although I could go with a web application, I would like to delegate and decentralize support, data privacy and management as much as I can to my customers' offices. Is that a crazy idea? Do you know if something similar exists? Which technology would you suggest?

    Read the article

  • How to secure an Internet-facing Elastic Search implementation in a shared hosting environment?

    - by casperOne
    (Originally asked on StackOverflow, and recommended that I move it here) I've been going over the documentation for Elastic Search and I'm a big fan and I'd like to use it to handle the search for my ASP.NET MVC app. That introduces a few interesting twists, however. If the ASP.NET MVC application was on a dedicated machine, it would be simple to spool up an instance of Elastic Search and use the TCP Transport to connect locally. However, I'm not on a dedicated machine for the ASP.NET MVC application, nor does it look like I'll move to one anytime soon. That leaves hosting Elastic Search on another machine (in the *NIX world) and I would probably go with shared hosting there. One of the biggest things lacking from Elastic Search, however, is the fact that it doesn't support HTTPS and basic authentication out of the box. If it did, then this question wouldn't exist; I'd simply host it somewhere and make sure to have an incredibly secure password and HTTPS enabled (possibly with a self-signed certificate). But that's not the case. That given, what is a good way to expose Elastic Search over the Internet in a secure way? Note, I'm looking for something that hopefully, will not require writing code to provide shims for the methods that I want (in other words, writing forwarders).
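
    A common pattern (a sketch, not the only option) is to bind Elasticsearch to 127.0.0.1 only and put a small reverse proxy in front of it that terminates HTTPS and enforces basic auth. Nginx is shown below, with placeholder host names and certificate paths; the credentials file can be created with Apache's htpasswd utility:

        # Sketch: nginx in front of an Elasticsearch node listening on 127.0.0.1:9200
        server {
            listen 443 ssl;
            server_name search.example.com;

            ssl_certificate     /etc/nginx/ssl/search.crt;
            ssl_certificate_key /etc/nginx/ssl/search.key;

            auth_basic           "Restricted";
            auth_basic_user_file /etc/nginx/.es_htpasswd;

            location / {
                proxy_pass http://127.0.0.1:9200;
            }
        }

    Whether this is doable on shared hosting depends on whether the host lets you run (or configure) such a proxy; on a VPS it is straightforward and avoids writing any forwarding code of your own.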

    Read the article

  • What exactly is a daemon? (How to run a root command from an Apache-bound script that runs as the www-data user?)

    - by user224235
    I am trying to run this command from a WSGI script: service httpd restart The problem is this command can only be run by root, and Apache uses the www-data user. It has been said the solution is to use a daemon process. I suppose the idea is to send the command to a file that will be executed by a script that runs as the "root" user. It's difficult to understand why they would call this a daemon process and try to scare me; perhaps it should have been called a proxy process. When I got the idea that this was a proxy process, I thought about adding a line to /var/spool/cron/root so that cron would execute the command for me. But of course this means I have to get the system time, add 1 second to it, and then add it to that line so cron would execute the command for me as root, and my script demands an output instantly. So I suppose I need to create a DAEMON PROCESS that works like cron. In other words it is a bash file that will execute the command found in a plain file. But will this DAEMON PROCESS be running a while loop 24/7, every second? Would that not waste resources? It only needs to activate itself when there is a command to be executed. I mean, in PHP and other programming languages, running a while statement when there is nothing to be executed could waste the resources of the server. So why should a daemon process constantly be listening for anything? I only want it to listen and execute when it is needed. I do not need a process that is constantly listening.
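
    A daemon here does not have to poll. A blocking read on a named pipe (FIFO) sleeps inside the kernel until something is written to it, so it costs essentially nothing while idle. A minimal sketch, started as root (for example from an init script); the pipe name and the accepted command word are placeholders:

        #!/bin/sh
        # Tiny root-owned listener: blocks on a FIFO, uses no CPU while idle.
        PIPE=/var/run/webctl.fifo
        [ -p "$PIPE" ] || mkfifo -m 620 "$PIPE"   # owner rw, group write-only
        chgrp www-data "$PIPE"                    # let the web user write to it
        while true; do
            if read -r cmd < "$PIPE"; then
                [ "$cmd" = "restart-httpd" ] && service httpd restart
            fi
        done

    The WSGI script then only needs write access to the pipe (echo restart-httpd > /var/run/webctl.fifo). Because the listener only ever runs the fixed command it recognises, the web process never gets to choose what runs as root.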

    Read the article

  • AMD Catalyst 13.9 installation failure

    - by Simon Verbeke
    Earlier today I installed Windows 8.1, and when I wanted to go into Catalyst Control Center, I noticed some odd error about CCC not being able to display options. I then figured I needed a driver update, so I downloaded the latest drivers, version 13.9, and tried to install them. While it was trying to install the display drivers, I got a blue screen. Tried again and got the same. Then I used an uninstall tool from AMD to remove all traces of my old drivers and tried to install the new drivers. Again, a blue screen. This is all I could think of to try. Would anyone know some other things I can try?

    EDIT: thought I might want to include the log entry for the crash:

        <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
          <System>
            <Provider Name="Microsoft-Windows-WER-SystemErrorReporting" Guid="{ABCE23E7-DE45-4366-8631-84FA6C525952}" EventSourceName="BugCheck" />
            <EventID Qualifiers="16384">1001</EventID>
            <Version>0</Version>
            <Level>2</Level>
            <Task>0</Task>
            <Opcode>0</Opcode>
            <Keywords>0x80000000000000</Keywords>
            <TimeCreated SystemTime="2013-10-19T20:59:25.000000000Z" />
            <EventRecordID>26587</EventRecordID>
            <Correlation />
            <Execution ProcessID="0" ThreadID="0" />
            <Channel>System</Channel>
            <Computer>Simon-PC</Computer>
            <Security />
          </System>
          <EventData>
            <Data Name="param1">0x0000007e (0xffffffffc0000005, 0xfffff80002a86dca, 0xffffd00025f250e8, 0xffffd00025f248f0)</Data>
            <Data Name="param2">C:\WINDOWS\MEMORY.DMP</Data>
            <Data Name="param3">101913-8953-01</Data>
          </EventData>
        </Event>

    Another edit: As it turns out, the graphics card isn't showing up any more in the device manager. But as far as I can tell, it is still working (the fans are spinning and my screen is plugged into that graphics card). This is solved: it appears that my graphics card is now running with a default Windows driver. I also tried the forced method mentioned here: AMD Graphics Drivers won't install properly. But I still get a BSOD.

    Third edit: Slight success! Managed to install version 13.4. Everything appears to be working fine now. I think I'm just going to skip version 13.9.

    Read the article

  • tc u32 --- how to match L2 protocols in recent kernels?

    - by brownian
    I have a nice shaper, with hashed filtering, built on a Linux bridge. In short, br0 connects the external and internal physical interfaces, and VLAN-tagged packets are bridged "transparently" (I mean, no VLAN interfaces are there). Now, different kernels do it differently. I may be wrong about the exact kernel version ranges, please forgive me. Thanks.

    2.6.26
    So, in Debian, 2.6.26 and up (up to 2.6.32, I believe) --- this works:

        tc filter add dev internal protocol 802.1q parent 1:0 prio 100 \
           u32 ht 1:64 match ip dst 192.168.1.100 flowid 1:200

    Here, the kernel matches two bytes in the "protocol" field with 0x8100, but counts the beginning of the IP packet as the "zero position" (sorry for my English, if I'm a bit unclear).

    2.6.32
    Again, in Debian (I've not built a vanilla kernel), 2.6.32-5 --- this works:

        tc filter add dev internal protocol 802.1q parent 1:0 prio 100 \
           u32 ht 1:64 match ip dst 192.168.1.100 at 20 flowid 1:200

    Here, the kernel matches the same for protocol, but counts the offset from the beginning of this protocol's header --- I have to add 4 bytes to the offset (20, not 16, for the dst address). It's OK, and seems more logical to me.

    3.2.11, the latest stable now
    This works --- as if there is no 802.1q tag at all:

        tc filter add dev internal protocol ip parent 1:0 prio 100 \
           u32 ht 1:64 match ip dst 192.168.1.100 flowid 1:200

    The problem is that I couldn't find a way to match the 802.1q tag so far.

    Matching the 802.1q tag in the past
    I could do this before as follows:

        tc filter add dev internal protocol 802.1q parent 1:0 prio 100 \
           u32 match u16 0x0ed8 0x0fff at -4 flowid 1:300

    Now I'm unable to match the 802.1q tag with at 0, at -2, at -4, at -6 or anything like that. The main issue is that I get a zero hit count --- this filter is not being checked at all; "wrong protocol", in other words. Please, anyone, help me :-) Thanks!

    Read the article

  • Looking for advice on Hyper-v storage replication

    - by Notre1
    I am designing a 2-host Hyper-V R2 cluster with 6-10 guests stored on a SMB iSCSI SAN device (probably Promise VessRAID). I will be getting at least two of the SAN devices and need to eliminate the storage a single point of failure. Ideally, that would involve real-time failover for the storage, like the Windows failover clustering does for the hosts. This design will be used at around six of our sites, and I would like to allow for us to eventually setup a cluster at colocation site and replicate each site's VMs there for DR. (Ideally a live multi-site cluster, but a manual import of the VMs would be fine for this sort of DR.) The tools that come with enterprise SANs, like EMC and NetApp, seem to be the most commonly used items for a Hyper-V cluster, but I can't afford their prices with my budget. Outside of them, the two tools that seem to be most common for Hyper-V storage replication are SteelEye (now SIOS) DataKeeper Cluster Edition and Double-Take Availability. Originally, I was planning on using Clustered Shared Volume(s) (CSV), but it seems like replication support for these is either not available or brand new in both these products. It looks like CSVs are supported in Double-Take 5.22, see this discussion, but I don't think I want to run something that new in production. Right now, it seems like the best option for me is not to implement CSVs, implement some sort of storage replication, and upgrade to CSVs at a later date once replicating them is more mature. I would love to have live migration, and CSVs are not required for live migration if you are using one LUN per VM, so I guess this is what I'll do. I would prefer to stick to the using the Microsoft Windows Server and Hyper-V tools and features as much as possible. From that standpoint, SteelEye looks more appealing than Double-Take because they make the DataKeeper volume(s) available to the Failover Clustering Manager and then failover clustering is all configured and managed through the native Microsoft tools. Double-Take says that "clustered Hyper-V hosts are not supported," and Double-Take Availability itself seems to be what is used for the actual clustering and failover. Does anyone know if any of these replication tools work with more than two hosts in the cluster? All the information I can find on the web only uses two hosts in their examples. Are there any better tools than SteelEye and Double-Take for doing what I am trying to do, which is eliminate the storage as as single point of failure? Neverfail, AppAssure, and DataCore all seem to offer similar functionality, but they don't seems to be as popular as SteelEye and Double-Take. I have seen a number of people suggest using Starwind iSCSI SAN software for the shared storage, which includes replication (and CSV replication at that). There are a couple of reasons I have not seriously considered this route: 1) The company I work for is exclusively a Dell shop and Dell does not have any servers with that I can pack with more than six 3.5" SATA drives. 2) In the future, it could be advantegous for us to not be locked into a particular brand or type of storage and third-party replication softwares all allow replication to heterogeneous storage devices. I am pretty new to iSCSI and clustering, so please let me know if it looks like I am planning something that goes against best practices or overlooking/missing something.

    Read the article

  • Sharing between Vista and Windows 7

    - by Metro Smurf
    Vista Ultimate 32-bit; Windows 7 Ultimate 64-bit. I've read through similar questions about sharing between Win7 and Vista, but none of them have resolved my issue of not being able to share between Win7 and Vista: Connecting to a Vista shared folder from Windows 7, Networking Windows 7 and Vista, Enable File sharing in Windows Vista.

    Previously I had my Vista and XP systems sharing back and forth without any problems. I was able to access the shares without entering a user name / password in the NT challenge prompt (note: account names and passwords were different on the Vista and XP systems). Currently I replaced my XP system with a Win7 system. Now, when I attempt to access shares to/from Vista / Win7, I am continually prompted with an NT challenge to enter my credentials.

    Things I've Verified/Tried: Both systems are on the same workgroup. Win7 is using the Home network and Vista is using the Private network; in other words, neither system is using a Public network profile. Enabled file sharing with and without password protection on both Vista and Win7. Tried HomeGroup Connections (Win7) with both "Windows to manage connections" and "Use user accounts to connect". Reviewed too many online articles to count to troubleshoot. Set the shares to have full control by Everyone. Set up the shares directly on the directory and through the share manager.

    My Question: How can I enable file sharing between Vista and Win7 without being prompted with a username/password challenge, ever?
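
    One thing that sometimes removes the prompt when the account names/passwords differ between the machines (a sketch, not a guaranteed fix; the server name, user and password below are placeholders) is storing matching credentials for the other machine in Credential Manager from a command prompt:

        rem Sketch: cache credentials for the remote machine so the share opens silently
        cmdkey /add:SERVER /user:SERVER\username /pass:password
        cmdkey /list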

    Read the article

  • Implications of Multiple JobTracker nodes in a Hadoop cluster?

    - by Jim Dennis
    I get the impression that one can, potentially, have multiple JobTracker nodes configured to share the same set of MR (TaskTracker) nodes. I know that, conventionally, all the nodes in a Hadoop cluster should have the same set of configuration files (conventionally under /etc/hadoop/conf/ --- at least for the Cloudera Distribution of Hadoop (CDH)). Can we define multiple Job Trackers in mapred-site.xml? Something like:

        <configuration>
          <property>
            <name>mapred.job.tracker</name>
            <value>jt01.mydomain.not:8021</value>
          </property>
          <property>
            <name>mapred.job.tracker</name>
            <value>jt02.mydomain.not:8021</value>
          </property>
          ...
        </configuration>

    Or is there some other allowed syntax for this? What are the implications of doing this? Does each JobTracker get information about the load on each TaskTracker node? In other words, can the two JobTrackers co-ordinate their scheduling across the TT nodes based only on the gossip information from the TTs, or would they need to talk to one another? Is this documented anywhere?

    Read the article

  • FreeNAS pool configuration - RAID1 + other drives

    - by trnelson
    Simple questions, really. I found this answer with a similar setup, but not sure it answers my question. If it does, I'm curious why since the answer seems a bit unsure: ZFS Hard Drive Configuration in FreeNAS I'm building a server which will be used primarily for backup, plus some media streaming, possibly with Plex. I seem to understand most everything I need, but I'm still a bit confused on how pools work, and how to configure them for my scenario. I will have 2x 2TB WD Red drives, which I plan on using in a mirrored set up (RAID1). This would be for backup, and I'd also like to do offsite backup to my CrashPlan account from this array. I also have a few other drives: 1.5TB, 320GB, 250GB. I'm not sure exactly what to do with them yet, but looking for options. FreeNAS OS will be running from a 16GB USB Flash drive. Would it be wise to use the 1.5TB as a backup-backup, essentially as a mirror or perhaps for snapshots of the 2TB RAID1? I'm still learning about snapshots. Should the 2TB mirrored drives be in their own pool? Should the other drives be set up in their own pools as well, or should they be JBOD in a single pool? They may or may not get much use since the 2TB array is plenty for me. Does a dataset basically mimic the idea of a partition or a network share? In other words, I would map \SERVER\Share to X: on my laptop? Let's say I wanted to use the 250GB drive as an encrypted drive to store all of my cat pictures. Would it have to be in its own pool? If I use jails apps, should they go in the backup RAID1, or in another place? Thank you!
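
    If it helps to see what the FreeNAS GUI is doing underneath, here is a rough command-line sketch (device, pool and dataset names are placeholders, not a recommendation). A pool is the redundancy boundary; datasets are the filesystems inside it that you share, quota and snapshot individually, so a dataset is roughly the thing you would map to \\SERVER\Share on your laptop:

        # Sketch: one mirrored pool for the 2 TB pair, the loose disk as its own pool
        zpool create tank mirror ada1 ada2        # the RAID1 equivalent for the WD Reds
        zpool create scratch ada3                 # e.g. the 1.5 TB drive on its own
        zfs create tank/backup                    # datasets: what you actually share
        zfs create tank/media
        zfs snapshot tank/backup@nightly          # snapshots are per-dataset, not per-disk

    Mixing very different drive sizes in one pool generally isn't worthwhile, which is why the smaller disks tend to end up either as their own pools or left out entirely.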

    Read the article

  • Share one ssl certificate between multiples vhost

    - by Cesar
    I have a setup like this:

        <VirtualHost 192.168.1.104:80>
            ServerName domain1
            DocumentRoot /home/domain/public_html
            ...
        </VirtualHost>

        <VirtualHost 192.168.1.104:80>
            ServerName domain2
            DocumentRoot /home/domain2/public_html
            ...
        </VirtualHost>

        <VirtualHost 192.168.1.104:80>
            DocumentRoot /home/domain3/public_html
            ServerName domain3
            ...
        </VirtualHost>

        <VirtualHost 192.168.1.104:443>
            ServerName domain3
            SSLCertificateFile /usr/share/ssl/certs/certificate.crt
            SSLCertificateKeyFile /usr/share/ssl/private/private.key
            SSLCACertificateFile /usr/share/ssl/certs/bundle.cabundle
            ...
        </VirtualHost>

    I want to use the domain3 certificate for the other domains, preferably without having to repeat all of the <VirtualHost 192.168.1.104:443> config. In other words, I want something like this: if a vhost has no explicit SSL config, use the cert for domain3 (/usr/share/ssl/certs/certificate.crt). Notes: 1) I will for sure be setting up more vhosts in the future. 2) I know about (and don't care about) the SSL warnings the browser will show (hostname mismatch). Is this possible? How?
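
    Apache doesn't have a "fall back to this certificate" setting for vhosts that lack SSL config, but one way to avoid repeating the directives (a sketch; the include path is a placeholder) is to factor them into a single file that each :443 vhost pulls in with Include:

        # /etc/httpd/conf/ssl-shared.conf (placeholder path): the only place the cert is named
        SSLEngine on
        SSLCertificateFile /usr/share/ssl/certs/certificate.crt
        SSLCertificateKeyFile /usr/share/ssl/private/private.key
        SSLCACertificateFile /usr/share/ssl/certs/bundle.cabundle

        # Each additional HTTPS vhost then only needs:
        <VirtualHost 192.168.1.104:443>
            ServerName domain1
            DocumentRoot /home/domain/public_html
            Include /etc/httpd/conf/ssl-shared.conf
        </VirtualHost>

    With a single IP you still end up with one :443 block per domain, and clients will be offered the domain3 certificate for all of them, which matches the hostname-mismatch warnings you already accept.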

    Read the article
