Search Results

Search found 17550 results on 702 pages for 'real world'.

Page 523/702

  • Advice needed: warm backup solution for SQL Server 2008 Express?

    - by Mikey Cee
    What are my options for achieving a warm backup server for a SQL Server Express instance running a single database? Sitting beside my production SQL Server 2008 Express box I have a second physical box currently doing nothing. I want to use this second box as a warm backup server by somehow replicating my production database in near real time (a little bit of data loss is acceptable). The database is very small and resources are utilized very lightly. In the case that the production server dies, I would manually reconfigure my application to point to the backup server instead. Although Express doesn't support log shipping natively, I am thinking that I could manually script a poor man's version of it, where I use batch files to take log backups, copy them across the network and apply them to the second server at 5-minute intervals. Does anyone have any advice on whether this is technically achievable, or if there is a better way to do what I am trying to do? Note that I want to avoid having to pay for the full version of SQL Server and configure mirroring, as I think that is overkill for this application. I understand that other DB platforms may present suitable options (e.g. a MySQL cluster), but for the purposes of this discussion, let's assume we have to stick to SQL Server.
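
    Roughly what I have in mind, sketched (this assumes the database is in FULL recovery and that a full backup has already been restored on the standby WITH NORECOVERY; all names and paths are placeholders):

        rem -- production box, scheduled every 5 minutes:
        sqlcmd -S .\SQLEXPRESS -Q "BACKUP LOG MyDb TO DISK='C:\logship\MyDb.trn' WITH INIT"
        copy /Y C:\logship\MyDb.trn \\standby\logship\

        rem -- standby box, apply the log and stay restorable:
        sqlcmd -S .\SQLEXPRESS -Q "RESTORE LOG MyDb FROM DISK='C:\logship\MyDb.trn' WITH NORECOVERY"

    A real version would presumably use timestamped file names so a missed copy can't break the log chain, and at failover time the standby would run RESTORE DATABASE MyDb WITH RECOVERY.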

    Read the article

  • Linking to network shares from Sharepoint pages

    - by Russell C
    So the place I work decided to set up a Microsoft SharePoint 2010 server for task management and I (as the lowly entry-level intern) have been tasked with "figuring it out." One thing that the end users really, really, really want is the ability to link to network shares (that are readable by anyone who will be using SharePoint) from a SharePoint web page. In order to do this, I have edited the HTML manually with several lines that look like the following: <a href="file://server/share">Server Share</a> This works (sometimes), but the link reported by SharePoint is often wrong, and editing pages that contain these links will mangle the code such that when I open it, the code no longer looks like what it did when I last hit save (breaking all those links). Obviously this is not sustainable. I've been told by coworkers that "it worked that way at the last place I worked," but I haven't found out how yet. Any ideas on how this would work, or am I barking up the wrong tree? None of the knowledge searches I've done shed any light on the situation. Thanks for any help! -Russell P.S. It should be noted that the file option in an href tag ONLY works in IE (which is a real bummer since we mostly use Firefox).
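
    One lead I've turned up but not yet verified: the two-slash form file://server/share is apparently a malformed file URL, and IE really wants the five-slash form for UNC paths, e.g.:

        <a href="file://///server/share/folder">Server Share</a>

    I've also read that links stored in a SharePoint Links list are less likely to be mangled than links hand-edited into page HTML, since the rich-text editor rewrites non-http hrefs on save. Can anyone confirm either of these?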

    Read the article

  • Lose internet connection, yet online games continue

    - by Mike
    For the past week or so, my internet connection has been anything but stable. Restarting my modem/router always fixes the problems, but since it has occurred so often, I'm noticing confusing patterns which I was hoping someone could help explain. My internet connection kicks out about 4-5 times a day. The sure-fire way to fix it is to restart my all-in-one modem/router. Sometimes I can diagnose the problem on my laptop, which resets my wireless network adapter and fixes the problem, but not always. If that doesn't fix it, the diagnostic usually reports that the connection between the modem and the internet is the problem, which requires a restart of the router. The odd thing which baffles me is that my connection is supposedly lost such that no browsers can connect to sites, yet things like online games continue to play without issue. How is this possible? I thought maybe the game was running locally on my PC, but that couldn't be the answer because I was still getting messages from other players. So my real question is: how can my internet browsers (Firefox, Chrome, even IE) lose connection to the internet, but other applications like online games not? Am I actually losing connection or am I mistaken? Edit: I'd also like to add that Netflix on my PS3, which is directly connected to the same access point, will also lose connection. So internet browsers and Netflix lose their internet connection while online games continue without an issue.
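
    Next time it drops I plan to separate name resolution from raw connectivity, since established game sessions don't need new DNS lookups while browsers (and a fresh Netflix session) do:

        rem does raw IP connectivity still work? (8.8.8.8 is just a well-known public IP)
        ping 8.8.8.8

        rem does name resolution still work?
        nslookup www.google.com

        rem what does Windows think the adapter state is?
        ipconfig /all

    If ping by IP succeeds while nslookup fails, that would suggest the router's DNS relay is dying while already-established game connections keep flowing.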

    Read the article

  • Can't seem to get Python to work

    - by Justin Johnson
    I'm just starting out in Python. The Python interpreter works from the command line (I have 2.4.3), but I can't seem to get Apache to execute Python scripts. All I end up with is a blank screen and nothing in the Apache error logs. I enabled Python via the Plesk control panel. Here's the snippet that was generated in the httpd.include:

        <Files ~ (\.py$)>
            SetHandler python-program
            PythonHandler mod_python.cgihandler
        </Files>

    My test script is one of the examples that comes with the Python downloads at http://python.org/download/

        #!/usr/local/bin/python
        """CGI test 1 - check server setup."""
        # Until you get this to work, your web server isn't set up right or
        # your Python isn't set up right.
        # If cgi0.sh works but cgi1.py doesn't, check the #! line and the file
        # permissions. The docs for the cgi.py module have debugging tips.
        print("Content-type: text/html")
        print()
        print("<h1>Hello world</h1>")
        print("<p>This is cgi1.py")

    That wasn't working, so I changed #!/usr/local/bin/python to #!/usr/bin/python, which is what 'which python' tells me, but the results were the same. Like I said, I'm ending up with a blank page. No errors that I know of, unless I'm checking the wrong error log (I'm checking the Apache error log). I'm on a MediaTemple (dv) running CentOS.
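
    One thing I've since noticed, in case it's relevant: the example script uses Python 3 style print() calls, and on Python 2.4 a bare print() emits "()" rather than a blank line, so the blank line that separates the HTTP headers from the body is never written, which could produce exactly this kind of silent blank page. A Python 2 compatible version of the test would be:

        #!/usr/bin/python
        # Python 2.x variant of cgi1.py: print is a statement here, and
        # print() would output "()" instead of the header/body separator.
        print "Content-type: text/html"
        print
        print "<h1>Hello world</h1>"
        print "<p>This is cgi1.py"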

    Read the article

  • Can't resolve offline file conflicts

    - by Bryan
    We use roaming profiles on our Server 2008 R2 domain, with folder redirection for 'Desktop', 'My Documents' and 'Application Data'. But as our network is split across two sites, we have one file server at each site, which are configured to use domain-based DFS namespaces and DFS replication to keep things in sync. The DFS path for the redirected folders is as follows: \\domain\folderredirection$\<username>\<redirected-folder-name> The real paths are \\site-1-server\folderredirection$\<username>\<redirected-folder-name> and \\site-2-server\folderredirection$\<username>\<redirected-folder-name> As our users all switch between sites (sometimes several times per day), our folder redirection policy has to redirect to the DFS roots rather than be hardcoded to a specific server. Both DFS and DFS-R have been proven to be working perfectly. On our laptops, we use offline files for the redirected folders, and this also works fine; however, the problem is as follows: when conflicts occur in offline files, it is impossible to resolve them. I'm given the usual conflict resolution options (i.e. 'Ignore', 'Keep Both', 'Keep network' and 'Keep local'), however, not one of these options will resolve any conflict, yet no error is produced. We only use offline files on laptops, which have either Windows XP Professional or Windows 7 Professional installed. The problem is not specific to any one laptop; it affects every laptop and every conflicting file in exactly the same way. I would have thought the setup we have is common for companies that have multiple sites, so I'm hoping someone will have seen this before.

    Read the article

  • Redundant OpenVPN connections with advanced Linux routing over an unreliable network

    - by konrad
    I am currently living in a country that blocks many websites and has unreliable network connections to the outside world. I have two OpenVPN endpoints (say: vpn1 and vpn2) on Linux servers that I use to circumvent the firewall. I have full access to these servers. This works quite well, except for the high packet loss on my VPN connections. This packet loss varies between 1% and 30% depending on the time of day and seems to have low correlation; most of the time it seems random. I am thinking about setting up a home router (also on Linux) that maintains OpenVPN connections to both endpoints and sends all packets twice, to both endpoints. vpn2 would forward all packets from home on to vpn1. Return traffic would be sent both directly from vpn1 to home, and also through vpn2.

        +------------+
        |    home    |
        +------------+
          |        |
          | OpenVPN |
          |  links  |
          |        |
       ~~~~~~~~~~~~~~~~~~ unreliable connection
          |        |
       +----------+   +----------+
       |   vpn1   |---|   vpn2   |
       +----------+   +----------+
            |
       +------------+
       | HTTP proxy |
       +------------+
            |
        (internet)

    For clarity: all packets between home and the HTTP proxy will be duplicated and sent over different paths, to increase the chances that one of them will arrive. If both arrive, the second one can be silently discarded. Bandwidth usage is not an issue, both on the home side and the endpoint side. vpn1 and vpn2 are close to each other (3ms ping) and have a reliable connection. Any pointers on how this could be achieved using the advanced routing policies available in Linux?
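
    My current idea, sketched but untested (this assumes I switch the tunnels to tap devices, since the bonding driver wants Ethernet-like slaves, and that the same bond is mirrored on vpn1 for the return path; addresses are placeholders):

        # bonding in broadcast mode (mode 3) transmits every frame on all
        # slaves, i.e. each packet travels through both VPN tunnels
        modprobe bonding mode=broadcast miimon=100
        ip link set bond0 up

        # assumes the two OpenVPN instances use "dev tap0" / "dev tap1"
        ifenslave bond0 tap0 tap1

        # route traffic for the proxy through the bonded tunnel pair
        ip addr add 10.8.0.2/24 dev bond0
        ip route add <proxy-ip>/32 via 10.8.0.1

    Duplicates that survive both paths would then be discarded by TCP (or by the tunneled protocol) on the receiving side. Corrections welcome if this is the wrong mechanism.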

    Read the article

  • Limiting bandwidth on a Windows 7 machine

    - by Mihai Damian
    I need to limit the bandwidth on my Windows 7 x64 machine. In the past (on XP) I was able to use NetLimiter for similar tasks. However, for some reason I can't get it to work anymore. For lower limits, the bandwidth tests are able to exceed the limit by 10-50%; higher limits seem to be ignored completely, and the bandwidth tests report download speeds of over 10 times the speed I set. I'm using speedtest.net and a similar service from my ISP for these tests. Anyway, I don't necessarily need a program as complex as NetLimiter, since I only need to throttle my machine's bandwidth, not a specific program's. In case you are wondering why in the world I'd want to cripple my internet speed, there is a funny story behind this. Long story short, my modem gets random disconnects. Tech support comes in, says my internet speed is abnormally high and I must be using some tools to somehow make it go faster than it's supposed to, and this messes up my modem. I check the connection with another computer and it seems that my PC is the only one in my network that gets abnormal speeds. I reinstall my OS; speed looks normal at first, but after I install the batch of 50 or so updates, it goes back to abnormally high speeds and the disconnect problems are not solved. Now I don't have a clue if the explanation the tech team gave me was just a strategy to lay the blame on someone else, but I was trying to give them the benefit of the doubt and see what happens if I really reduce my speed to their specification. Any help appreciated.

    Read the article

  • What's the difference between Host and HostName in SSH Config?

    - by Bill Jobs
    The man page says this:

        Host    Restricts the following declarations (up to the next Host
                keyword) to be only for those hosts that match one of the
                patterns given after the keyword. If more than one pattern
                is provided, they should be separated by whitespace. A
                single `*' as a pattern can be used to provide global
                defaults for all hosts. The host is the hostname argument
                given on the command line (i.e. the name is not converted
                to a canonicalized host name before matching). A pattern
                entry may be negated by prefixing it with an exclamation
                mark (`!'). If a negated entry is matched, then the Host
                entry is ignored, regardless of whether any other patterns
                on the line match. Negated matches are therefore useful to
                provide exceptions for wildcard matches. See PATTERNS for
                more information on patterns.

        HostName
                Specifies the real host name to log into. This can be used
                to specify nicknames or abbreviations for hosts. If the
                hostname contains the character sequence `%h', then this
                will be replaced with the host name specified on the
                command line (this is useful for manipulating unqualified
                names). The default is the name given on the command line.
                Numeric IP addresses are also permitted (both on the
                command line and in HostName specifications).

    For example, when I want to create an SSH config for GitHub, what should Host and HostName be, respectively?
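
    From my reading, Host is the alias you type on the command line and HostName is the real machine it expands to, so for GitHub I'd guess something like:

        Host gh
            HostName github.com
            User git
            # IdentityFile path below is just an example
            IdentityFile ~/.ssh/id_rsa_github

    which should make 'ssh gh' (or 'git clone gh:user/repo.git') connect to github.com. If no aliasing is wanted, 'Host github.com' with no HostName line should behave the same, since HostName defaults to the name given on the command line. Is that right?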

    Read the article

  • Unable to access my IIS website using hostname. Works fine with localhost

    - by rajugaadu
    I am unable to access my IIS website or even the default website. I did a bit of research and checked/selected the 'Integrated Windows authentication' option on the Properties > Directory Security tab. From then on I could access the website using localhost. But when I use my hostname, it asks for a domain username/password. Why is that? I don't understand why I am not able to access my website without checking this option to integrate Windows authentication. My goal is to access the website using both localhost and the hostname. More details on what I did: nothing out of the ordinary. IIS - Websites - Create new website - create a working folder - set a default page. I restart this website and then click on browse, and I do not see my default page. I had to go to the Directory Security tab and select the "Integrated Windows authentication" checkbox. Then I can see the default page. In IE too, I can see the default page when I use http://localhost. But when I use http://{hostname} it asks for a domain username and password. Why???
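
    One lead I've turned up but haven't verified: this looks like the Windows loopback check, which is known to cause credential prompts when you browse to the local machine by its real hostname with Integrated Windows Authentication enabled. The workaround documented in KB896861 is:

        reg add HKLM\SYSTEM\CurrentControlSet\Control\Lsa /v DisableLoopbackCheck /t REG_DWORD /d 1 /f

    followed by a reboot. (From other machines, IE only sends credentials automatically for sites in the Local intranet zone, so the short hostname may also need adding there.) Can anyone confirm?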

    Read the article

  • Outlook 2007 will not send/receive using RPC over HTTP to an Exchange server; works for other users

    - by bob franklin smith harriet
    I have an incredibly frustrating user issue that I have been unable to resolve for over a week; any ideas would be greatly appreciated. The user is having trouble using Outlook 2007 to send or receive emails over RPC over HTTP (Outlook Anywhere) to an Exchange server. Basically what happens: the connection is established and the user is prompted for the username and password, those are submitted, and then Outlook tries to download emails, which fails, and the connection to the Exchange server remains unavailable. The machine can ping the Exchange server and so on; there is no connectivity issue there. The setup worked fine for him up until now and also works for possibly hundreds of other users using the exact same settings; the same settings also work from the user's iPhone on the same internet connection, and from my own system using Outlook. The Exchange server has the https webmail feature, and that can be logged into to send and receive emails fine. Steps taken so far:

    - removed the .ost file for the account and allowed Office to rebuild it (fixes the issue for a short period of time, then the same symptoms occur)
    - deleted the Exchange profile and recreated it (no change)
    - uninstalled all antivirus and firewalls (no change)
    - removed all cached passwords (keymgr.dll) (no change)
    - removed all entries from the hosts file (no change)
    - uninstalled and reinstalled Office 2007 (temporary fix)

    Installing the Symantec Endpoint client caused a lot of email scan popups to be displayed; after a reboot this stopped, and a scan picked up a few trojans that it removed. This fixed the issue temporarily as well; the issue is back again now. I am completely out of ideas; there seems to be nothing that can be done to fix this issue outside of rebuilding the PC, which is a massive Pandora's box I don't want to open with this user.

    --- Update ---

    Malware scans from multiple products have been run on the machine and all updates were installed. The real problem with this user is his distance from us; there is no way to supply a spare laptop or rebuild the machine currently.

    Read the article

  • Which internet scenario would be better?

    - by JL
    I currently have an 8mbps (down) / 512kbps (up) telephone ADSL solution. I must say the reliability is excellent, and up until now it's been the fastest connection I could get, because I don't live in a cable zone. The real speed of my connection is around 7mbps, but sometimes I manage to get the full 8mbps. I use my connection for work, so it needs to be at least 99% reliable. Recently I was told by a guy who lives up the road that he has a wireless connection with an external antenna and his speeds are 20mbps / 512kbps - he's also paying about half of what I pay for my wired telephone connection. My question is: is wireless internet good enough for a power user who uses his connection for work 8 hours a day, including VPNing into servers remotely? Besides this, I also enjoy playing the odd network game; not a WoW freak, but sometimes I do pick up the odd MMORPG and at times do indulge in some semi-heavy gaming sprees. Will the wireless latency drive me crazy and seem slow in comparison? Will it be reliable enough? I also live in an area that snows heavily in winter. I guess it's a question of: should I go wireless or not? I've only had one wireless connection before, and that was years ago using iBurst technology; I remember it was terrible for VPN, but I guess the technology might have improved since then? What do you guys think?

    Read the article

  • Streaming to PS3 with NAS and built-in DLNA server?

    - by philt
    With consumer-grade hardware, is it possible to successfully stream 1080p mp4 videos to a PS3? I have a Linksys router that can only do 10/100. The PS3 is wired to it with cat5e cable, and the PS3 itself supports gigabit ethernet. I would upgrade the router and get one that supports gigabit ethernet if it could handle streaming like this. It currently does work, with minor jerkiness, streaming from my Mac to the PS3, but fast-forward/reverse and "goto" (the equivalent of scene selection) take forever and/or fail completely. And streaming from my Mac of course requires the Mac to be on at all times. When I put the movies on an external USB drive and connect it to the PS3 directly, it performs flawlessly. Fast forward and everything works great. So I was thinking about getting a NAS, but I don't know if any inexpensive NAS (i.e. Buffalo LinkStation Live, WD My Book World Edition, D-Link DNS-321, etc.) can actually deliver the performance necessary to do this, even with gigabit ethernet.

    Read the article

  • Why is Mac supposedly better than Windows for graphics?

    - by Svish
    Ok, people just keep telling me that if you're going to be working with graphics and design and stuff, you should get a Mac. And I just don't get the logic, because most of these people would be working with Adobe software, which is available for both Windows and Mac. To me it seems like their whole argument is based on "everyone else does". Like, Mac had some graphics software that Windows didn't earlier in history, so most people were using Mac. And since most people were using Mac, new people also started using Mac. And since most people were using Mac, schools and universities used Mac. Which taught new people to use Mac. So they were using Mac. And told everyone they met that everyone they knew was using Mac. And so on. Anyways... what is the deal, really? Is there actually any advantage to using a Mac for graphics and design and such things? My take is that you pretty much have the same software, and both Mac and Windows are powerful enough, support enough RAM, are stable (as long as you don't install lots of junk or faulty drivers), et cetera. So, can anyone give me a good explanation of this? Is there a real difference, or are people just brainwashed?

    Read the article

  • How to connect a VM running on an ESXi host to that host via a VMkernel NIC?

    - by Zac B
    Say I have an ESXi (5.0) host that runs a Linux distribution which hosts iSCSI targets, which contain the images for other VMs that the host will run. When it's used, I'll start the host first, then the iSCSI server, and then refresh all storage targets/HBAs in order to see the provided shares as online. I know it's a strange puzzle-box solution, but I was told to implement it. The ESXi host itself has a gigabit NIC which connects to the outside world. The guest OS (CentOS) supports VMXNET3, however, and if I can, I'd like to use its VMXNET3 NIC to host iSCSI for the ESXi host. How should I go about doing this? I went to create a new virtual network and selected "VMkernel", as it suggested that I use that type of network for SAN traffic, but it is apparently not set up for "self-hosted" SAN hosts, as the new network did not appear as an option to attach the CentOS box's VMXNET3 NIC to. How should I best connect an iSCSI host out to its "parent" ESXi host, if I need a) a 10Gb connection, and (optionally) b) a VMkernel network for it?
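
    The configuration I think I'm being pointed toward, sketched with esxcli (all names and addresses are examples): put a VM port group and a VMkernel port on the same vSwitch, so the host's initiator and the guest's VMXNET3 NIC can talk without leaving the box:

        # vSwitch carrying both the guest's iSCSI NIC and the host's VMkernel port
        esxcli network vswitch standard add --vswitch-name=vSwitch1
        esxcli network vswitch standard portgroup add --portgroup-name=iSCSI-Guest --vswitch-name=vSwitch1
        esxcli network vswitch standard portgroup add --portgroup-name=iSCSI-VMK --vswitch-name=vSwitch1
        esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI-VMK
        esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=10.0.0.1 --netmask=255.255.255.0 --type=static

    then attach the CentOS guest's VMXNET3 NIC to iSCSI-Guest and point the software iSCSI initiator at the guest's address. If I understand correctly, since this traffic never touches a physical uplink it also isn't limited by the 1Gb NIC. Is that the intended approach?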

    Read the article

  • SharePoint Search: processing filenames containing underscores

    - by Todd Owen
    We use SharePoint Server 2007 to allow employees to search network file shares, but it seems that underscores in filenames are not treated as word separators when indexing the files. As a result, a search for chocolate will: match "chocolate milkshake.doc" but not match "chocolate_cake.doc" (Of course, this is a simplified example; in practice the content of the second file might include the word "chocolate" and match on that instead of the filename. But the problem itself is real enough, because a common scenario in a corporate environment is that a user knows the partial name of the file they are looking for and expects to see matching filenames at the top of the search results. And using underscores in filenames is a widely used convention within our company.) Underscores are not treated as word separators in the file content either, although this is less of a concern for us. The root cause of this problem is possibly related to the behaviour of the word breakers that SharePoint uses (i.e. the language-specific DLLs that implement the IWordBreaker interface), although I haven't confirmed this yet. Does anyone know of a workaround for this issue? I have tested with Search Server 2008 Express too (which is based on the same technology), and it is also affected. I do not know whether the problem is fixed in SharePoint 2010 or not.

    Read the article

  • Unable to connect to CopSSH when running Windows service, works when running sshd directly

    - by Joe Enos
    I've been using CopSSH (that uses OpenSSH and Cygwin, so I don't know which of the three is the problem) as my SSH server application at home on Windows 7 Ultimate 32 bit. I have used it for about a year with no real problems, other than it sometimes takes 2 or 3 connection attempts to get through, but it's always worked within a few attempts. A few days ago, it just stopped working. The Windows service is still running, and I've rebooted, restarted the service, etc. with no change. On the client (using Putty on Windows), I get the message "Software caused connection abort". On the server, my event viewer registers the following: fatal: Write failed: Socket operation on non-socket I finally got it working, but only by executing sshd.exe directly from the command line on the server. No special flags or options, just straight execution, and then when I connect remotely, it goes through. I do have firewall and anti-virus software which appears to be configured properly, but the fact that things work when running sshd.exe also indicates that the firewall is fine. I thought the service and executable did exactly the same thing, but apparently there's some difference. Does anyone have any ideas on where I should look for the problem? If I can't find something, I suppose I can write a Windows service or scheduled task that fires off sshd.exe directly and ensures that it stays running, but that's kind of a last resort, since it's just wrapping around something that should already work. I appreciate your help.
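
    As a stopgap I'm considering having Task Scheduler launch sshd.exe directly at boot, since direct execution works (the path below is my guess at the default CopSSH install directory; adjust as needed):

        schtasks /create /tn "sshd-direct" /tr "\"C:\Program Files\ICW\bin\sshd.exe\"" /sc onstart /ru SYSTEM

    I realize this wraps the symptom rather than explaining why the service behaves differently, which is why I'd still like to find the real cause.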

    Read the article

  • Uninstall IIS on Windows 7

    - by CJM
    I've just rebuilt my development machine and installed IIS. I then installed the Web Deployment tool and used this to restore my previously-backed-up websites to the clean machine. Unfortunately the restoration didn't work correctly/fully. I couldn't easily correct the problem, so I decided to uninstall/reinstall IIS and recreate the sites manually. I uninstalled IIS and rebooted, but there was still plenty of stuff left around such as various files in /windows/system32/inetsrv/ which I tried to delete manually (with limited success!). I rebooted again and tried to reinstall IIS - it reported an error (no meaningful message) and requested another reboot. The event log includes the following errors: The World Wide Web Publishing Service (WWW Service) did not register the URL prefix http://*:80/gallery for site 1. The site has been disabled. and Unable to bind to the underlying transport for [::]:80. The IP Listen-Only list may contain a reference to an interface which may not exist on this machine. I'd like to avoid another rebuild. Can I completely remove IIS, such that I can reinstall it from scratch? Or can I 'fix' the current setup so that IIS will reinstall over what is already there?
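
    One thing I plan to check, given the second event log error: whether the HTTP.sys IP listen list survived the uninstall. From an elevated prompt:

        netsh http show iplisten
        rem if stale addresses are listed, remove them, e.g.:
        netsh http delete iplisten ipaddress=0.0.0.0
        netsh http delete iplisten ipaddress=::

    (an empty iplisten list means HTTP.sys listens on all addresses, which is the default). No idea yet whether that's actually the culprit here.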

    Read the article

  • Internal and External DNS from Different Servers, Same Zone

    - by Shane
    Hello All, I am either having trouble understanding how DNS works, or I am having trouble configuring my DNS correctly (either one isn't good). I am currently working with a domain, I'll call it webdomain.com, and I need to allow all of our internal users to get our public DNS entries from Dotster just like the rest of the world. Then, on top of that, I want to be able to supply just a few override DNS entries for testing servers and equipment that is not available publicly. As an example:

    - public.webdomain.com - should come from Dotster
    - outside.webdomain.com - should come from Dotster as well
    - testing.webdomain.com - should come from my internal DNS controller

    The problem that I seem to be running into at every turn is that if I have an internal DNS controller that contains a zone for webdomain.com, then I can get my specified internal entries but never get anything from the public DNS server. This holds true regardless of the type of DNS server I use; I have tried both Linux Bind9 and a Windows 2008 domain controller. I guess my big question is: am I being unreasonable to think that a system should be able to check my specified internal DNS and, in the case where a requested entry doesn't exist, fail over to the specified public DNS server? Or is this just not the way DNS works, and I am lost in the sauce? It seems like it should be as simple as telling my internal DNS server to forward any requests that it can't fulfill to Dotster, but that doesn't seem to work. Could this be a firewall issue? Thanks in advance
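
    The closest workaround I've found so far (untested): instead of hosting all of webdomain.com internally, host a tiny authoritative zone per overridden name, so every other name in the domain still recurses out to the public servers. For Bind9 that would look something like:

        // named.conf -- override only testing.webdomain.com
        zone "testing.webdomain.com" {
            type master;
            file "/etc/bind/db.testing.webdomain.com";
        };

    with a minimal zone file such as:

        $TTL 300
        @   IN  SOA ns1.webdomain.com. hostmaster.webdomain.com. ( 1 3600 600 86400 300 )
            IN  NS  ns1.webdomain.com.
            IN  A   192.168.1.50    ; internal testing server

    Does that sound like the right approach, or is there something cleaner?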

    Read the article

  • java.lang.OutOfMemoryError: unable to create new native thread

    - by Brad
    I consistently get this exception when trying to run my JUnit tests on my Mac:

        java.lang.OutOfMemoryError: unable to create new native thread
            at java.lang.Thread.start0(Native Method)
            at java.lang.Thread.start(Thread.java:658)
            at java.util.concurrent.ThreadPoolExecutor.addIfUnderMaximumPoolSize(ThreadPoolExecutor.java:727)
            at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:657)
            at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:92)
            at com.google.appengine.tools.development.ApiProxyLocalImpl$PrivilegedApiAction.run(ApiProxyLocalImpl.java:197)
            at com.google.appengine.tools.development.ApiProxyLocalImpl$PrivilegedApiAction.run(ApiProxyLocalImpl.java:184)
            at java.security.AccessController.doPrivileged(Native Method)
            at com.google.appengine.tools.development.ApiProxyLocalImpl.doAsyncCall(ApiProxyLocalImpl.java:172)
            at com.google.appengine.tools.development.ApiProxyLocalImpl.makeAsyncCall(ApiProxyLocalImpl.java:138)

    The same set of unit tests pass perfectly fine on Ubuntu and Windows. Some information about my system resources on the Mac:

        $ ulimit -a
        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        file size               (blocks, -f) unlimited
        max locked memory       (kbytes, -l) unlimited
        max memory size         (kbytes, -m) unlimited
        open files                      (-n) 1024
        pipe size            (512 bytes, -p) 1
        stack size              (kbytes, -s) 8192
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) 266
        virtual memory          (kbytes, -v) unlimited

        $ java -version
        java version "1.6.0_24"
        Java(TM) SE Runtime Environment (build 1.6.0_24-b07-334-10M3326)
        Java HotSpot(TM) 64-Bit Server VM (build 19.1-b02-334, mixed mode)

    The reason I don't think this is an application issue is that the same tests pass in different environments. I have tried setting the heap to 1024m and 512m, and the stack to 64k and 128k (and each of these combinations), with no luck. My open files limit was originally 256 and I have bumped it to 1024. I have been googling around for a bit and all posts say to decrease heap size and increase stack size, but that doesn't seem to help. Anyone have any more ideas?

    EDIT: Here is some environment information on my Ubuntu box:

        $ ulimit -a
        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        scheduling priority             (-e) 20
        file size               (blocks, -f) unlimited
        pending signals                 (-i) 16382
        max locked memory       (kbytes, -l) 64
        max memory size         (kbytes, -m) unlimited
        open files                      (-n) 1024
        pipe size            (512 bytes, -p) 8
        POSIX message queues     (bytes, -q) 819200
        real-time priority              (-r) 0
        stack size              (kbytes, -s) 8192
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) unlimited
        virtual memory          (kbytes, -v) unlimited
        file locks                      (-x) unlimited

        $ java -version
        java version "1.6.0_24"
        Java(TM) SE Runtime Environment (build 1.6.0_24-b07)
        Java HotSpot(TM) 64-Bit Server VM (build 19.1-b02, mixed mode)
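
    One difference I just spotted between the two boxes: max user processes is 266 on the Mac versus unlimited on Ubuntu, and I believe threads count against that limit on OS X. If that's the cause, raising it in the shell that launches the tests might be enough (1024 is an arbitrary higher value):

        # raise the per-user process/thread cap for this shell session only
        ulimit -u 1024
        # then run the tests from this same shell, e.g.:
        mvn test    # or however the suite is normally launched

    Can anyone confirm whether the 266-process cap would explain this?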

    Read the article

  • Set scheduled task last result to 0x0 manually

    - by Rogier
    Every night a task runs that checks whether any scheduled task has a Last Result not equal to 0x0. If a scheduled task has an error like 0x1, an e-mail is automatically sent to me. As some tasks run only weekly, and sometimes an error occurs that leaves the result not equal to 0x0, an e-mail with the error message is sent every night, because the Last Result column still shows 0x1. I would like to set the Last Result column to 0x0 manually once I have solved a problem, so I won't get an e-mail with the error message every night. So is it possible to set a scheduled task's Last Result to 0x0 manually (or by a script)?

    @harrymc: see the script below that sends the e-mail. I can easily add a criterion to ignore result 0x1 (or another code); however, this is not the solution, as most of the time this result is a real error and has to be e-mailed.

        set [email protected]
        set SMTPServer=SMTPserver
        set PathToScript=c:\scripts
        set [email protected]

        for /F "delims=" %%a in ('schtasks /query /v /fo:list ^| findstr /i "Taskname Result"') do call :Sub %%a
        goto :eof

        :Sub
        set Line=%*
        set BOL=%Line:~0,4%
        set MOL=%Line:~38%
        if /i %BOL%==Task (
            set name=%MOL%
            goto :eof
        )
        set result=%MOL%
        echo Task Name=%name%, Task Result=%result%
        if not %result%==0 (
            echo Task %name% failed with result %result% > %PathToScript%\taskcheckerlog.txt
            bmail %PathToScript%\taskcheckerlog.txt -t %YourEmailAddress% -a "Warning! Failed %name% Scheduled Task on %computername%" -s %SMTPServer% -f %FromAddress% -b "Task %name% failed with result %result% on CorVu scheduler %computername%"
        )
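
    As far as I can tell the Last Result value is written by the task engine itself, so short of re-running the task until it exits 0 there may be no supported way to set it directly. The fallback I'm considering is teaching the checker an "acknowledged" list, something like this inside :Sub after result is set (acknowledged.txt would be a hand-maintained list of task names):

        if exist %PathToScript%\acknowledged.txt (
            findstr /i /c:"%name%" %PathToScript%\acknowledged.txt >nul
            if not errorlevel 1 goto :eof
        )

    but a way to reset the real value would obviously be cleaner.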

    Read the article

  • Juniper NetScreen NS-5GT traffic monitoring

    - by blah
    I've done casual research into the subject and am truly dismayed at the lack of compatible tools for such a simple task. Maybe someone can provide assistance. We have a NetScreen NS-5GT in the office. I need to be able to get a glance at current traffic per endpoint; I think the equivalent of 'get session' with byte counts/rates. I don't care about bars, graphs, and reports. Something as simple as a classic software firewall display would be perfect. I can't shell out money on something real like SolarWinds products, so a free solution is essential. I'm willing to do a little work but refuse to program something from scratch. It's not prudent right now for me to install a hub or otherwise mess around physically. There must be something out there I can use, maybe in combination. I don't believe I'm asking too much. Specific answers only please, e.g. monitoring software you know will actually work with this antiquated device. I've read about general approaches to the broader problem dozens of times already.
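
    The closest I've come to a zero-cost option is just polling the CLI over SSH, since 'get session' already has the data; a crude sketch (assumes SSH management access is enabled on the NetScreen, key-based login, and that any parsing gets adjusted to your ScreenOS version's output format):

        #!/bin/sh
        # poll the NetScreen session table every 5 seconds with timestamps
        while true; do
            date
            ssh admin@192.168.1.1 'get session' | head -25
            sleep 5
        done

    but I'd still prefer something purpose-built, if anyone knows of one that works with this device.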

    Read the article

  • Best practice for Exchange 2010 HA topology considering 6 x Exchange licenses and TMG 2010

    - by MadBoy
    What would be the best topology considering that:

    - 6 x Exchange 2010 Standard licenses
    - 2 x separate locations that are supposed to provide redundancy in case of link problems
    - 4 x Forefront TMG 2010 with Forefront Security and Forefront Protection/Security
    - multiple locations worldwide using those Exchange servers, most of them connected with VPN tunnels (the ones hosting Exchange for sure)

    I was thinking something like this:

    Location MAIN (about 70-100 people):
    - 2x TMG 2010 in NLB
    - 1x Exchange 2010 CAS/HUB role
    - 2x Exchange 2010 Mailbox role (active + passive)

    Location SUPPORT (about 20 people):
    - 2x TMG 2010 in NLB
    - 1x Exchange 2010 CAS/HUB role
    - 2x Exchange 2010 Mailbox role (active + passive)

    Management wants to make sure that in case of problems in the main location (power failure, link loss, etc.) the second location can support all traffic from around the world, and vice versa. We have 6-7 locations and more coming up (not big ones, but like 10+ people per location). I do know that the CAS/HUB is a single point of failure (and no NLB), but I simply lack the licenses to add redundancy there. What do you think about this approach? What would be a better approach according to you?

    Read the article

  • Set up server at home for running svn and Bugzilla

    - by erikric
    Hi, I'm a fairly experienced developer, but when it comes to servers and network-related stuff, I'm pretty green. We are developing a web site, and I would like to set up a server that can host my Subversion repository and also host Bugzilla for when we release a test version to some users. So what do I need to accomplish this? I have an old computer that can be used. Can I run this on any OS? It currently has Windows 7 installed, but I was thinking about going for Ubuntu instead. Any reason to go for one or the other? I guess I need a web server, and I guess Apache will do fine. Do I need something else to make my computer available from all over the web, or is a web server and a standard internet connection all it takes? A link to some good online tutorials would be much appreciated. And I do mean for real dummies ;). The pages I usually find that try to explain setting up servers go way over my head.
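
    In case it helps frame answers: for the Ubuntu route, I gather the starting point would be something like the following (package names are approximate and may differ by release):

        # Apache plus Subversion served over HTTP (mod_dav_svn):
        sudo apt-get install apache2 subversion libapache2-svn

        # create a repository and hand it to Apache:
        sudo mkdir -p /var/svn
        sudo svnadmin create /var/svn/mysite
        sudo chown -R www-data:www-data /var/svn/mysite
        sudo a2enmod dav_svn    # then edit /etc/apache2/mods-available/dav_svn.conf

    and that beyond this, a standard internet connection only needs the router to forward port 80 to the box (plus a dynamic DNS name if the IP changes). Is that roughly right?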

    Read the article

  • Which Linux distribution for vehicle LCD instrument panel

    - by Brent
    I will be designing an instrument panel for a vehicle to display the common gauges that you would find in a car (speedometer, RPM, fuel level, oil pressure, etc.). We have selected a 7" LCD and are in the process of narrowing down the hardware (this will use an ARM processor). The idea is to read these values off of the CAN bus and update the UI with those values. This needs to have a fairly quick boot time; 5-10 seconds would be acceptable from the time the ignition is turned on to the time the UI is running. I have been doing a lot of research on which Linux distribution to use, but I wanted to ask the question here to get the community's suggestions. I have been a .NET programmer for years, so Linux is a new world to me. Here is what I have found so far:

    - Tizen is geared for In-Vehicle Infotainment (IVI), among other targets. However, this project is not an IVI, and I do not need the phone dialer, navigation, etc.
    - MeeGo is dead, and Tizen seems to be the replacement.
    - Angstrom, Debian... would either of these be useful?

    I am not tied to a particular programming language or IDE. Any help and direction is appreciated!

    Read the article

  • Nginx: proxy a domain to another domain without changing the URL

    - by Evgeniy
    My question is in the subject. I have one domain; this is its nginx config:

        server {
            listen 80;
            server_name connect3.domain.ru www.connect3.domain.ru;
            access_log /var/log/nginx/connect3.domain.ru.access.log;
            error_log /var/log/nginx/connect3.domain.ru.error.log;
            root /home/httpd/vhosts/html;
            index index.html index.htm index.php;

            location ~* \.(avi|bin|bmp|css|dmg|doc|docx|dpkg|exe|flv|gif|htm|html|ico|ics|img|jpeg|jpg|js|m2a|m2v|mov|mp3|mp4|mpeg|mpg|msi|pdf|pkg|png|pps|ppt|pptx|ps|rar|rss|rtf|swf|tif|tiff|txt|wmv|xhtml|xls|xml|zip)$ {
                root /home/httpd/vhosts/html;
                access_log off;
                expires 1d;
            }

            location ~ /\.(git|ht|svn) {
                deny all;
            }

            location / {
                #rewrite ^ http://connect2.domain.ru/;
                proxy_pass http://127.0.0.1:8080/;
                proxy_redirect off;
                proxy_hide_header "Cache-Control";
                add_header Cache-Control "no-store, no-cache, must-revalidate, post-check=0, pre-check=0";
                proxy_hide_header "Pragma";
                add_header Pragma "no-cache";
                expires -1;
                add_header Last-Modified $sent_http_Expires;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }

    I need to proxy the connect3.domain.ru host to connect2.domain.ru, but with no URL change in the browser's address bar. My commented-out rewrite line would redirect instead, so the URL would not stay the same. I know this question is easy, but please help. Thank you.
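
    For reference, the suggestion I keep running into (untested by me): proxy_pass straight to the other host and pin the Host header to the upstream's name, assuming connect2.domain.ru is resolvable from this server and serves the wanted content:

        location / {
            # browser keeps connect3.domain.ru in the address bar
            proxy_pass http://connect2.domain.ru;
            proxy_set_header Host connect2.domain.ru;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            # map any redirects the upstream issues back onto this host
            proxy_redirect http://connect2.domain.ru/ http://connect3.domain.ru/;
        }

    with the caveat that absolute links inside the returned HTML would still point at connect2 unless rewritten (e.g. with the sub_filter module).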

    Read the article
