Search Results


  • Changing Mac OS X 10.6 Routing after VPN'd In

    - by Matt Rogish
    I have a coffee shop around the corner that I use to do some work when I want to get away from home. They offer free wi-fi, and I then use my Mac OS X 10.6 VPN client to log into my work network, with "Send all traffic over VPN connection" checked. Their network used to be 10.0.0.x; I think they got a new router, because it's now 192.168.2.x. However, this collides with one of the subnets at work, so now I can't reach 192.168.2.x at work. So:
      1) Office network: the VPN hands out IPs on 192.168.1.x; another work network is 192.168.2.x.
      2) Coffee-shop network: hands out IPs on 192.168.2.x.
    I think if I set a route to send all 2.x traffic over the tunnel, it would blow up my routing to their system, right? What should I do? I know the individual IPs of the servers I want... maybe I could add each one, or add all of them minus the coffee shop's default gateway? How do I set that up "temporarily" on my Mac? Thanks!!
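
    A hedged sketch of the host-route approach (the work-server IPs and the VPN gateway shown are placeholders; check netstat -rn after connecting for the real gateway). Routes added with route(8) last only until reboot or disconnect, which covers the "temporarily" requirement:

      # Send only the named work servers through the VPN gateway
      # (192.168.1.1 is an assumed PPP peer; both .2.x addresses are placeholders).
      sudo route add -host 192.168.2.10 192.168.1.1
      sudo route add -host 192.168.2.11 192.168.1.1
      # Undo when leaving the coffee shop:
      sudo route delete -host 192.168.2.10
      sudo route delete -host 192.168.2.11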

  • How do I find information about a particular trojan? "W32/Smalltroj.XVGT", as reported by Norman

    - by Lasse V. Karlsen
    I tried checking the Norman antivirus page, Virus-descriptions, but sadly Norman seems to have obfuscated their search results (I tried clicking on W, and they just list viruses with a W somewhere in the description, instead of the more typical behaviour of listing all viruses whose names start with W). Is there a common virus list somewhere, or is it as I suspect: every antivirus manufacturer is free to come up with its own identification tag for each virus? Several "vshost32.exe" files, related to Microsoft Visual Studio 2008, have been quarantined on our server today, probably related to a test deployment of some internal software. Some developer machines that have grabbed the latest version of our program have also had the same files quarantined. Now, these files should not have been deployed in the first place, so I'll be looking into that, but whenever any developer now builds a program locally and attempts to debug, the same file is placed in the build output directory and promptly quarantined. Does anyone have any clues as to how I can go about verifying this before I pointedly ask the antivirus software to go take a hike on this particular virus? Edit: I've copied one of the quarantined files manually over the network to a machine that doesn't have antivirus installed, and compared the file on that machine with a local copy (on that machine) of the vshost32.exe template file, and they're bit-for-bit identical. I guess this is a false positive. I'd still like to know if I can verify this in any other way, though, since next time such a trojan might be reported in a compiled file that we won't have a pristine copy of.
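
    One hedged way to verify a suspected false positive like this (paths are placeholders; certutil -hashfile ships with Vista-era and later Windows, so check availability on older systems): byte-compare the quarantined copy against a pristine one, hash both, and look the hash up in a multi-engine service such as VirusTotal.

      :: Byte-for-byte comparison (what the poster did manually):
      fc /b C:\compare\quarantined\vshost32.exe C:\compare\pristine\vshost32.exe
      :: Hashes to search for in multi-engine scanners:
      certutil -hashfile C:\compare\quarantined\vshost32.exe SHA1
      certutil -hashfile C:\compare\pristine\vshost32.exe SHA1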

  • Juniper NetScreen NS-5GT traffic monitoring

    - by blah
    I've done casual research into the subject and am truly dismayed at the lack of compatible tools for such a simple task. Maybe someone can provide assistance. We have a NetScreen NS-5GT in the office. I need an at-a-glance view of current traffic per endpoint; I think the equivalent of 'get session' with byte counts/rates. I don't care about bars, graphs, and reports: something as simple as a classic software-firewall display would be perfect. I can't shell out money for something real like the SolarWinds products, so a free solution is essential. I'm willing to do a little work, but I refuse to program something from scratch, and it's not prudent right now for me to install a hub or otherwise mess around physically. There must be something out there I can use, maybe in combination. I don't believe I'm asking too much. Specific answers only please, e.g. monitoring software you know will actually work with this antiquated device; I've read about general approaches to the broader problem dozens of times already.
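
    Failing a proper tool, a hedged stopgap is to poll the CLI over SSH and watch the session table yourself (the address, account, and the exact 'get session' output format are assumptions to check against your ScreenOS version):

      #!/bin/sh
      # Append a session-table snapshot every 5 seconds for later inspection.
      while true; do
          date >> /tmp/ns-sessions.log
          ssh admin@192.0.2.1 "get session" >> /tmp/ns-sessions.log
          sleep 5
      done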

  • Linux: prevent VNC from swapping like mad

    - by Weezy
    I'm accessing a Mac Mini (with Mac OS X 10.4) from my Linux machine using VNC, and there's an issue that is driving me crazy... My Linux machine has 4 GB of RAM and I run a lot of different apps on it with no issue at all: everything is snappy and I don't hear the hard disk swapping/reading/writing very often. With VNC, though, the hard disk swaps like mad whenever I move things around on the OS X desktop. So I was thinking of creating a ramdisk and forcing the temporary VNC files into it, but the problem is I can't find any temp files. I attempted this:
      #!/bin/bash
      while true
      do
          lsof | grep vnc
      done
    and eyeball-parsed the output to try to find some temp file: no luck. The VNC version I'm using is this one:
      $ vncviewer -version
      VNC Viewer Free Edition 4.1.1 for X - built Jan 30 2009 19:33:16
      Copyright (C) 2002-2005 RealVNC Ltd.
    No matter how much data is coming from the Mac, there should be plenty of memory (4 GB of RAM), so there's really no reason to swap like crazy. Any help on solving this is most welcome, because it is literally driving me nuts.
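
    To confirm it really is the viewer swapping, and to make the kernel less eager to swap at all, a hedged sketch (the swappiness value is a starting point, not a tuned recommendation):

      vmstat 1                                    # watch the si/so columns while moving windows
      top -p "$(pgrep -f vncviewer | head -n1)"   # the viewer's resident vs. virtual size
      sudo sysctl vm.swappiness=10                # default is usually 60; lower discourages swapping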

  • MacBook Pro won't boot from DVD with SSD

    - by Adam Carr
    Here's the timeline of events. I had a running MBP 17" Early 2011 (Thunderbolt) with an OWC Mercury Extreme Pro SSD 115GB drive. I installed Windows 7 via Boot Camp. I have done this multiple times before, and every time I need to format the Boot Camp partition before installing. This time I think I actually deleted the partition and then selected the free space to install. This worked fine for the most part, but I wasn't able to boot the Boot Camp partition using VMware Fusion. I gave up and used the Boot Camp Assistant to revert back to one Mac partition. I was getting some odd behavior, so I rebooted the machine. It then came up with a message saying there was no bootable partition. This made me think (and still does) that installing Windows into the free space, rather than into the Boot Camp partition, installed the Windows MBR boot loader incorrectly and mucked up the OS X installation. OK, fine, I can just reinstall. But I can't seem to boot from the original MBP installation DVD. I hold down C on boot, but I never get past the all-grey screen; I hear the DVD drive spin up, but it eventually stops. With the original HD back in, everything works fine, but with the SSD in, I can't boot from the DVD drive. I have already set up an RMA with OWC to send the drive back, but considering the order of events, I feel as though it isn't a hardware issue, and I can't seem to figure out how to fix it. I can always send it back, but I figured I would check and see if anyone could offer some guidance/assistance before doing so.

  • CMS/Wiki to use for an HTML5 video site

    - by Clinton Blackmore
    Greetings. I want to put up a website with instructive screencasts and allow people to add comments to them. I would like to use the Video for Everybody technique, partly because I dislike Flash and because it helps in a small way to move the web forward (while staying backwards compatible). I recognize that HTML5 is still in draft and that support for it varies. I do have some hosting space, and can run Perl, PHP, and Ruby on Rails applications with a MySQL backend. I should mention that part of my working job involves running some web servers, and that I am a programmer by training (with only a limited familiarity with Perl and PHP, and none with Ruby). I should also mention why I don't particularly want to go with a video hosting site (like YouTube or Vimeo):
      - Flash
      - video resolution and quality (I'd like to put up 800x600 videos)
      - the videos promote a club that is not strictly non-profit (i.e. they may fall afoul of the Terms of Service)
      - I'm already paying for web hosting, and free video hosting comes with time and bandwidth limits
      - I don't want there to be two locations where you can comment on a video
    Now, having said all that, I'd be quite comfortable putting up my own HTML pages, except:
      - that's so web 1.0! :) (i.e. it does not allow for comments)
      - I also want to do some blogging and possibly put up a wiki; the site will not be entirely screencasts
    So, can anyone recommend a CMS (or wiki, or similar application) that I can customise for this purpose?
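
    For reference, a stripped-down sketch of the Video for Everybody markup mentioned above (file names are placeholders; the full pattern nests a Flash fallback object inside the video element):

      <video width="800" height="600" controls preload="none" poster="screencast.jpg">
        <source src="screencast.mp4" type="video/mp4" />
        <source src="screencast.ogv" type="video/ogg" />
        <!-- Flash fallback object and download links go here -->
        <p>Can't play it? <a href="screencast.mp4">Download the video</a> instead.</p>
      </video>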

  • Trying to delete a directory stored on a Windows server, mounted on a Mac

    - by AdamG
    I am trying to delete a directory stored on a Windows 2008 R2 server, mounted on a Mac as a network home (10.8.5). The directory was created by Safari and stores temporary internet files. I need to be able to delete this folder on logout from a Mac bash script. The Terminal on the Mac shows the directory as empty:
      36W-FacRm-02:History lwickham$ cd /home/lwickham/Library/Caches/Metadata/Safari/History
      36W-FacRm-02:History lwickham$ ls -al
      total 0
      drwx------  1 lwickham  CGPS\Domain Users  264 Nov  8 09:24 .
      drwx------  1 lwickham  CGPS\Domain Users  264 Nov  8 09:28 ..
    However, on the Windows server it has a single 0 KB file that doesn't start with a "." but yet is invisible to the Mac:
      E:\FacultyHome2\lwickham\Library\Caches\Metadata\Safari\History>dir
       Volume in drive E is FacultyUsers2
       Volume Serial Number is 8C17-4EF3

       Directory of E:\FacultyHome2\lwickham\Library\Caches\Metadata\Safari\History

      11/08/2013  09:24 AM    <DIR>          .
      11/08/2013  09:24 AM    <DIR>          ..
      11/07/2013  04:28 PM                 0 http?%2F%2Fwww.google.com%2Furl?sa=t&rct=j&q=&esrc=s&source=web&cd=6&ved=0CFsQFjAF&url=http%253A%252F%252Fwww.usbanklocations.com%252Fhsbc-bank-usa-96th-street-branch.html&ei=5vR7UtmXEPjfsATe0YCIBA&usg=AFQjCNF9ypKbpYbXRng00FY3W8Y6cF1Tiw&bvm=bv.56146854,d.
                     1 File(s)              0 bytes
                     2 Dir(s)  514,231,967,744 bytes free
    All my attempts to delete the directory from the Mac have failed:
      36W-FacRm-02:History lwickham$ rm -fr /home/lwickham/Library/Caches/Metadata/Safari/History/*
      36W-FacRm-02:History lwickham$ rm -frd /home/lwickham/Library/Caches/
      rm: /home/lwickham/Library/Caches//Metadata/Safari/History: Directory not empty
      rm: /home/lwickham/Library/Caches//Metadata/Safari: Directory not empty
      rm: /home/lwickham/Library/Caches//Metadata: Directory not empty
      rm: /home/lwickham/Library/Caches/: Directory not empty
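
    Since the stray file is visible from the server but not from the Mac, one hedged workaround is to clear it from the Windows side (the path is copied from the question; /a matches hidden and system files as well):

      del /a /f /q "E:\FacultyHome2\lwickham\Library\Caches\Metadata\Safari\History\*"
      rd /s /q "E:\FacultyHome2\lwickham\Library\Caches\Metadata\Safari\History"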

  • How to set up an IPSec/GRE tunnel on Windows Server 2008

    - by qbeuek
    I have a Windows Server 2008 that has a single network interface configured with a public IP address. My business partner has a private network. From my server, I need to access all the devices on his private network, and those devices must be able to access my server. My business partner has a standard solution for these requirements: they will set up an IPSec + GRE tunnel to my server. They told me that I will need an additional public IP address for this to work. If it really is necessary, that's not a problem; I can get an additional public IP address, although it will be assigned to the same physical network interface. I assume that my server will then have both public IP addresses and also the private IP address from the tunnel (the same one that is visible to the devices inside the private network). What alternatives do I have?
      - Is it possible to configure this tunnel on my Windows Server 2008? Can it be done using only Windows tools, or do I need additional free/commercial VPN software?
      - If it cannot be done directly on Windows, can I set up an additional virtual machine running Linux that will handle the IPSec + GRE tasks? How?
      - If it cannot be done on a virtual Linux box, will I have to buy and set up a Cisco router to handle the IPSec + GRE tasks?
    Thanks for your opinions. I'm watching this question and will clarify any issues or questions.
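
    Windows Server 2008 has no native GRE tunnelling, so the Linux-VM option is plausible. A hedged sketch of just the GRE half, using placeholder addresses (the IPSec wrapping, e.g. with Openswan, is a separate layer on top):

      # On the Linux VM:
      ip tunnel add gre1 mode gre local 203.0.113.10 remote 198.51.100.20 ttl 255
      ip addr add 10.10.10.1/30 dev gre1
      ip link set gre1 up
      # Send the partner's private network through the tunnel (placeholder subnet):
      ip route add 192.168.50.0/24 dev gre1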

  • How to set up a VPN Incoming connection with Windows to tunnel Internet traffic?

    - by Mehrdad
    I want to set up a VPN on a remote server to route all my Internet traffic, for privacy reasons. I can set up an incoming connection and connect to it successfully. The problem is that I can only see the remote computer; no other web sites will open. I want the remote server to act like a NAT. How can I do that? Note that I don't want to split Internet traffic: I actually want to send all the traffic to the remote server, but I need it to relay that traffic onwards. For the record, my remote server is Windows Web Server 2008, which does not have the Routing and Remote Access Service. Clarification: I'm mostly interested in the server configuration; I don't have any problems configuring the client. By the way, Windows Web Server 2008 seems to have only the same VPN features built into client OSes (like Vista), and specifically, it doesn't include the RRAS console in MMC. I'm also open to suggestions for third-party PPTP/L2TP daemons, if they are free.
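
    A hedged starting point for the server side: without RRAS, the usual choices are Internet Connection Sharing on the public adapter, or turning on the TCP/IP forwarding flag and handling NAT separately. The well-documented registry flag alone only forwards packets, it does not translate addresses:

      reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" /v IPEnableRouter /t REG_DWORD /d 1 /f
      :: Reboot (or restart the relevant network services) for the flag to take effect.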

  • What are the attack vectors for passwords sent over http?

    - by KevinM
    I am trying to convince a customer to pay for SSL for a web site that requires login. I want to make sure I correctly understand the major scenarios in which someone can see the passwords that are being sent. My understanding is that anyone at any of the hops along the way can use a packet analyzer to view what is being sent. This seems to require that the hacker (or their malware/botnet) be on the same subnet as one of the hops the packet takes to arrive at its destination. Is that right? Assuming some flavor of this subnet requirement holds true, do I need to worry about all the hops or just the first one? The first one I can obviously worry about if the user is on a public Wi-Fi network, since anyone could be listening in. Should I be worried about what goes on in the subnets the packets travel across beyond that? I don't know a ton about network traffic, but I would assume it's flowing through the data centers of major carriers and there aren't a lot of juicy attack vectors there; please correct me if I am wrong. Are there other vectors to worry about beyond someone listening with a packet analyzer? I am a networking and security noob, so please feel free to set me straight if I've used the wrong terminology in any of this.

  • CentOS 6.3 x86_64 RAM detection

    - by Peter
    I have a machine with 8 GB of RAM (the BIOS sees it, so my motherboard and CPU support it), and I installed CentOS 6.3 on it. When it starts up, it only sees 3.1 GB. uname says:
      2.6.32-279.1.1.el6.x86_64 #1 SMP
    The BIOS-provided physical RAM map is:
      BIOS-e820: 0000000000000000 - 000000000009fc00 (usable)
      BIOS-e820: 000000000009fc00 - 00000000000a0000 (reserved)
      BIOS-e820: 00000000000e0000 - 0000000000100000 (reserved)
      BIOS-e820: 0000000000100000 - 00000000cf65f000 (usable)
      BIOS-e820: 00000000cf65f000 - 00000000cf6e8000 (ACPI NVS)
      BIOS-e820: 00000000cf6e8000 - 00000000cf6ec000 (usable)
      BIOS-e820: 00000000cf6ec000 - 00000000cf6ff000 (ACPI data)
      BIOS-e820: 00000000cf6ff000 - 00000000cf700000 (usable)
    dmesg | grep -i memory says:
      initial memory mapped : 0 - 20000000
      init_memory_mapping: 0000000000000000-00000000cf700000
      Reserving 129MB of memory at 48MB for crashkernel (System RAM: 3319MB)
      PM: Registered nosave memory: 000000000009f000 - 00000000000a0000
      PM: Registered nosave memory: 00000000000a0000 - 00000000000e0000
      PM: Registered nosave memory: 00000000000e0000 - 0000000000100000
      PM: Registered nosave memory: 00000000cf65f000 - 00000000cf6e8000
      PM: Registered nosave memory: 00000000cf6ec000 - 00000000cf6ff000
      Memory: 3184828k/3398656k available (5152k kernel code, 1016k absent, 212812k reserved, 7166k data, 1260k init)
      please try 'cgroup_disable=memory' option if you don't want memory cgroups
      Initializing cgroup subsys memory
      Freeing initrd memory: 16136k freed
      Non-volatile memory driver v1.3
      agpgart-intel 0000:00:00.0: detected 8192K stolen memory
      crash memory driver: version 1.1
      Freeing unused kernel memory: 1260k freed
      Freeing unused kernel memory: 972k freed
      Freeing unused kernel memory: 1732k freed
    Update: Memtest sees all 8 GB, and so does dmidecode -t 17 | grep Size, but free -m still sees only 3.1 GB. Question: how can I repair/modify the system so it sees all 8 GB of RAM? Thanks in advance!
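
    Worth noting: the e820 map above ends just below 4 GB, with no usable region above 0x100000000, so the firmware is not handing the missing memory to the OS at all. That usually points at a BIOS "memory remap" / "memory hole" setting rather than at the kernel. A few hedged checks:

      dmidecode -t 17 | grep Size    # per-DIMM sizes (already shows 8 GB here)
      grep MemTotal /proc/meminfo    # what the kernel was actually given
      dmesg | grep BIOS-e820         # after enabling remapping, expect regions above 4 GB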

  • Refresh file access time under Linux / Discard disk read cache

    - by calandoa
    I am making use of file access times to analyse a build process, but it is not working the way I want: the access time is updated the first time I read a file, then it stays the same for a long while, or until the next reboot. For instance:
      $ ll -u some_file
      -rw-r--r-- 1 root root 1.3M 2010-04-07 10:03 some_file
      $ grep abcdef some_file
      $ ll -u some_file
      -rw-r--r-- 1 root root 1.3M 2010-04-07 11:24 some_file
      # The access time is updated
      # waiting a few minutes...
      $ grep abcdef some_file
      $ ll -u some_file
      -rw-r--r-- 1 root root 1.3M 2010-04-07 11:24 some_file
      # The access time has not been updated :(
    I suppose that the file is cached by Linux in free memory, and only this cached copy is accessed on subsequent reads, for speed reasons. A solution would be to discard the cached data. After searching some forums, I found:
      sync
      echo 1 > /proc/sys/vm/drop_caches
      echo 2 > /proc/sys/vm/drop_caches
      echo 3 > /proc/sys/vm/drop_caches
    But it is not working; it seems to only flush the write buffers, not the read cache. Maybe it is due to some custom kernel configuration on my distro (Fedora 9)? Or am I missing something here? Is there a way to force this access-time refresh? Note also that I do not want to simulate writes across my entire file tree: because I am using a makefile-based build system, that would cause the entire project to be built again.
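
    One thing worth ruling out first, as a hedged guess: distros of that era often mount filesystems with relatime, which deliberately skips most atime updates and would produce exactly this pattern regardless of caching. Check the mount flags, and on kernels that support the option (2.6.30 or later, so possibly not Fedora 9's), force classic atime behaviour:

      grep ' / ' /proc/mounts            # look for relatime/noatime in the mount options
      mount -o remount,strictatime /     # assumes kernel support for strictatime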

  • Web filtering (Proxy or DNS) with option for users to ignore the block

    - by Jon Rhoades
    We are struggling with our users visiting infected or "attack" sites, and with phishing in general. Most of our machines are protected by an enterprise antivirus and monitoring solution (McAfee ePO), and we try to get people to use Firefox... but no AV is perfect, we have to endure personal machines as well (albeit on their own 'plague' VLANs), and we would like to do something about phishing, as our users seem intent on disclosing their passwords to the world... To complicate matters, we don't want to implement a hard block, for many, many reasons. Instead we would like to implement something akin to Firefox's "Reported Scam/Phish/Attack Site" page, with its "Get me out of here" and, crucially, "Let me in anyway" options, giving users the choice to still infect themselves if they feel like it (or to view a site that was incorrectly blacklisted). The reason we can't just use Firefox is that we have a core enterprise app only certified on IE6 and IE7 (thank you, Oracle). Is it possible to implement this type of advisory filtering using either a proxy (in our case Squid) or DNS?
      http://serverfault.com/questions/15801/what-free-options-are-available-for-web-content-filtering
      http://serverfault.com/questions/47520/open-source-filtering-of-https-traffic
    were a good start, but they don't address the advisory aspect of the filtering.
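
    Squid can get close to this on its own: deny the blacklisted domains, but point deny_info at a warning page that receives the blocked URL and offers a "let me in anyway" link through an unfiltered path. A hedged fragment (the warning-page URL and blacklist file are placeholders, and the %s substitution should be verified against your Squid version):

      acl blocklist dstdomain "/etc/squid/blocked_domains.txt"
      deny_info http://filter.example.local/warn.cgi?url=%s blocklist
      http_access deny blocklist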

  • WinDbg Problem with ntoskrnl

    - by Wilf
    I've got a similar problem to "BSOD - Unable to verify timestamp for ntoskrnl.exe", in that I can't seem to get the correct symbols to read ntoskrnl. I've followed the advice given by BK1E, but still can't get a result. Text from the debug session below:
      Loading Dump File [C:\Users\XXXX\AppData\Local\Temp\WER9D78.tmp\Mini030610-01.dmp]
      Mini Kernel Dump File: Only registers and stack trace are available
      Symbol search path is: SRV*c:\Windows\Symbols*http://msdl.microsoft.com/download/symbols
      Executable search path is:
      Unable to load image \SystemRoot\system32\ntoskrnl.exe, Win32 error 0n2
      *** WARNING: Unable to verify timestamp for ntoskrnl.exe
      *** ERROR: Module load completed but symbols could not be loaded for ntoskrnl.exe
      Windows Server 2008/Windows Vista Kernel Version 6002 (Service Pack 2) MP (4 procs) Free x64
      Product: WinNt, suite: TerminalServer SingleUserTS Personal
      Machine Name:
      Kernel base = 0xfffff800`01e59000 PsLoadedModuleList = 0xfffff800`0201ddd0
      Debug session time: Sat Mar 6 14:08:20.516 2010 (UTC + 0:00)
      System Uptime: 0 days 0:42:01.723
      Unable to load image \SystemRoot\system32\ntoskrnl.exe, Win32 error 0n2
      *** WARNING: Unable to verify timestamp for ntoskrnl.exe
      *** ERROR: Module load completed but symbols could not be loaded for ntoskrnl.exe
      Loading Kernel Symbols
      ...............................................................
      ................................................................
      .........................
      Loading User Symbols
      Loading unloaded module list
      ....
      *******************************************************************************
      *                                                                             *
      *                        Bugcheck Analysis                                    *
      *                                                                             *
      *******************************************************************************
      Use !analyze -v to get detailed debugging information.
      BugCheck A, {11, c, 0, fffff80001ec9489}
      ***** Kernel symbols are WRONG. Please fix symbols to do analysis.
    How do I fix this issue? The OS is Windows Vista x64 SP2.
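
    A hedged sequence that often sorts this out, run in the same WinDbg session (the local store path is a placeholder):

      $$ Reset to the public Microsoft symbol server with a local downstream store:
      .symfix C:\symbols
      $$ Show exactly which symbol fetches fail, then force a reload and re-analyse:
      !sym noisy
      .reload /f
      !analyze -v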

  • Geographically distributed file system with preferred locality

    - by dpb
    Hi All -- I'm building an application that needs to distribute a standard file server across a few sites over a WAN. Basically, each site needs to write a lot of miscellaneous files of varying size (some in the 100s of MB range, but most small), and the application is written such that collisions aren't a problem. I'd like to have a system set up that meets the following qualifications:
      - Each site can store files in a shared "namespace"; that is, all the files show up in the same filesystem.
      - Each site does not send data over the WAN unless necessary, i.e. there is local storage on each side of the WAN that gets "merged" into the same logical filesystem.
      - Linux and free ($$$) are musts.
    Basically, something like a central NFS share would meet most of the requirements; however, it would not allow the locally written data to stay local: all data from the remote side of the WAN would be copied locally all the time. I have looked into Lustre, and have run some successful tests with it; however, it appears to distribute files fairly uniformly across the distributed storage. I have dug through the documentation and have not found anything that automatically "prefers" local storage over remote storage. Even something that went with the lowest-latency storage would be fine; it would work most of the time, which would meet this application's requirements. Any ideas?

  • What are "Excess Fragments" in defragmenting a hard drive?

    - by Andrew Swift
    I'm defragmenting my hard drive (XP SP3) with PerfectDisk 7.0, and it finds 816,659 excess fragments when I ask for an analysis. [update] Specifically, it shows that the 1TB disk is 14% fragmented with 19693 fragments and 816,659 excess fragments. About 20% of the disk is still free space. What does excess fragments refer to? What is the difference between fragments and excess fragments? I have had problems in the past where I defragmented a fragmented disk and many files were corrupted. It seemed as though "excess fragments" referred to orphan pieces, where the program couldn't find out where to put them. If that was true, then defragmenting a disk resulted in many incomplete files, and in fact I defragmented a disk full of MP3's and got a lot of corrupted files as a result. Instead, I started to simply format a separate disk and copy everything from one to the other. That way there were no orphan bits, and no file corruption. Does anybody know what "excess fragments" really are?

  • Undelete big files - mission impossible?

    - by johnrembo
    Hi, I've accidentally deleted an outlook.pst file (6.7 GB) while there was only 400 MB of free space left on the primary NTFS partition (WinXP). I've tried several recovery tools to get this file back. "Ontrack Easy Recovery Pro" found 0 PST files (complete scan mode), while "Recover My Files" in sector scan mode found 5 PSTs, but 4 of them were between 3 and 28 KB, while the 5th was 1 GB. I managed to successfully recover the 1 GB PST file, which was a 1-year-old copy (the one used after the latest Windows reinstall). Now I'm frustrated and confused: why was a 1-year-old file successfully recovered if there was only 400 MB left on the primary partition? Where has the 6.7 GB file gone? I did some reading (i.e. here), and it seems there's almost no chance of retrieving the file I'm looking for. But wait: none of the recovery tools I've used found a zero-sized PST file. Moreover, if fragmentation corrupted the file, we could use scanpst.exe to fix some errors and survive with 10 or 100 emails missing, whatever. Could you please recommend some more sophisticated recovery tools for this particular task? Appreciate your help; thanks in advance.

  • Server load high, CPU idle. NFS the cause?

    - by Mech Software
    I am running into a scenario where I'm seeing a high server load (sometimes upwards of 20 or 30) and very low CPU usage (98% idle). I'm wondering if these wait states are coming from an NFS filesystem connection. Here is what I see in vmstat:
      procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
       r  b   swpd    free  buff  cache   si   so    bi    bo   in    cs us sy  id wa st
       2  1      0 1298784     0      0    0    0    16     5    0     9  1  1  97  2  0
       0  1      0 1308016     0      0    0    0     0     0    0  3882  4  3  80 13  0
       0  1      0 1307960     0      0    0    0   120     0    0  2960  0  0  88 12  0
       0  1      0 1295868     0      0    0    0     4     0    0  4235  1  2  84 13  0
       6  0      0 1292740     0      0    0    0     0     0    0  5003  1  1  98  0  0
       4  0      0 1300860     0      0    0    0     0   120    0 11194  4  3  93  0  0
       4  1      0 1304576     0      0    0    0   240     0    0 11259  4  3  88  6  0
       3  1      0 1298952     0      0    0    0     0     0    0  9268  7  5  70 19  0
       3  1      0 1303740     0      0    0    0    88     8    0  8088  4  3  81 13  0
       5  0      0 1304052     0      0    0    0     0     0    0  6348  4  4  93  0  0
       0  0      0 1307952     0      0    0    0     0     0    0  7366  5  4  91  0  0
       0  0      0 1307744     0      0    0    0     0     0    0  3201  0  0 100  0  0
       4  0      0 1294644     0      0    0    0     0     0    0  5514  1  2  97  0  0
       3  0      0 1301272     0      0    0    0     0     0    0 11508  4  3  93  0  0
       3  0      0 1307788     0      0    0    0     0     0    0 11822  5  3  92  0  0
    From what I can tell, when the IO goes up, the waits go up. Could NFS be the cause here, or should I be worried about something else? This is a VPS box on a Fibre Channel SAN, so I'd think the bottleneck wouldn't be the SAN. Comments?
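
    NFS wait would be consistent with this, since tasks blocked in uninterruptible I/O count toward the load average without using any CPU. Two hedged checks:

      ps -eo state,pid,wchan:30,cmd | awk '$1 ~ /^D/'   # D-state tasks inflating the load
      nfsstat -c                                        # client-side retransmissions/timeouts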

  • Hard drive benchmark values show very slow write speed

    - by John
    I recently started to have issues with my laptop being very slow. I ran a hard drive benchmarking tool (by ATTO) that showed that the write speed was very very slow on my boot drive. I ran the same benchmark on my usb drive and it was 650 times faster than my boot drive when it came to writing. Reading is very fast/normal on both. I swapped out an identical drive and ran the same benchmark. This time the drive showed proper write speed. Thinking that I had a hard drive going bad I cloned the old one onto the new one. I managed to clone the problem too. Anyone have any ideas on what in WinXP SP3 might be causing the write issues? I am on a corporate network and we have commercial anti-virus software installed. (AVG I think) I regularly run defraggler and have about 40 gig free on a 100 gig drive. The machine has 4 gigs of memory. Any ideas? TIA J

  • Alternatives to using email (in particular, Outlook) as a knowledge store?

    - by Umber Ferrule
    I suspect that, like many people, I use my work email account (accessed via Outlook 2007) to store information. I generally try to group similar things in folders and sub-folders, but with a multitude of folders this gets very unwieldy. In particular, it can be a bind to locate things using Outlook's tree structure. (As an aside: I've yet to come across a good free search add-on for Outlook.) I realise Outlook is not the best place to store all my information and I'd prefer not to. In an ideal world I'd like to be able to organise all of the information stored in Outlook in a MindMap (my software of choice being Freemind) or Wiki. To maintain an email audit-trail, I've considered saving individual emails as files using a MindMap or Wiki to link them. What do people think of this? (I can't say I relish the thought of the exporting process!) Whatever I do is going to involve some pain (i.e. setting up a Wiki/MindMap) or sticking with what Outlook provides currently. Has anyone been in the same position? Has anyone mass-migrated information from Outlook? If so, what was the best way? Any ideas or alternative proposals?

  • Ruby Passenger + Nginx or lighttpd + FastCGI for shared hosting

    - by devnull
    I have set up a Passenger + nginx server and I plan to offer free non-commercial hosting (or, in fact, on-the-fly deployment) for Rack-based frameworks (e.g. Camping, Sinatra). I am facing an "issue" with Passenger. For each application you need to configure nginx.conf (it would be the same with Apache, so it is not an nginx issue) with:
      server {
          ...
          passenger_base_uri /app1;
          passenger_base_uri /app2;
          passenger_base_uri /app3;
      }
    Now this is not inherently bad, as in theory I could allow a user to run just one app on his webspace, but even in this case I would need to create a new server entry in nginx (e.g. user.domain.com). As this will mainly be used to deploy apps, the behavior I am looking for is the ability to automatically map several apps (e.g. app1, app2, app3, app4) under the same server (yourapp.com/app1, yourapp.com/app2) without having to update the nginx or Apache config file each time. This seems to be a limitation in Passenger. As such, I am considering an alternative with lighttpd and FastCGI. Would this allow immediate deployment without touching the lighttpd config file, i.e. if I create a new directory with app2, will it run immediately? What is your experience of the performance difference between Passenger + nginx and lighttpd + FastCGI? Thanks in advance. Scenario details:
      - On nginx + Passenger, a user cannot add a new sub-folder and run another Sinatra/Camping app without the path being declared in nginx.conf and the server restarted.
      - Wished-for behavior with the new setup: a user can add a new folder containing a new app, and it runs on lighttpd + FastCGI without any extra configuration of the web server.

  • 2008 Server randomly reboots

    - by Jeff
    I'm out of ideas here. We have a 2008 server that keeps rebooting two or three times a day, at completely random times, with an "Unexpected Shutdown" event. There are no dumps and no events leading up to it; it is just as if it loses power and then comes back online. I ran a diagnostic of the power supply, and it has had continuous power for months. In addition, the temperature of the processors maxes out at 40 degrees Celsius. Anyone have any ideas how to figure out why this is restarting all the time? This is a DMZed web server, so it doesn't do too much process-wise. Here are the specs:
      Host Name:                 ~~~
      OS Name:                   Microsoft Windows Server 2008 R2 Standard
      OS Version:                6.1.7600 N/A Build 7600
      OS Manufacturer:           Microsoft Corporation
      OS Configuration:          Standalone Server
      OS Build Type:             Multiprocessor Free
      Registered Owner:          Windows User
      Registered Organization:
      Product ID:                ~~~
      Original Install Date:     5/27/2010, 4:25:47 PM
      System Boot Time:          2/14/2011, 5:35:01 PM
      System Manufacturer:       HP
      System Model:              ProLiant DL380 G6
      System Type:               x64-based PC
      Processor(s):              1 Processor(s) Installed.
                                 [01]: Intel64 Family 6 Model 26 Stepping 5 GenuineIntel ~1586 Mhz
      BIOS Version:              HP P62, 8/16/2010
      Windows Directory:         C:\Windows
      System Directory:          C:\Windows\system32
      Boot Device:               \Device\HarddiskVolume1
      System Locale:             en-us;English (United States)
      Input Locale:              en-us;English (United States)
      Time Zone:                 (UTC-05:00) Eastern Time (US & Canada)
      Total Physical Memory:     4,086 MB
      Available Physical Memory: 2,775 MB
      Virtual Memory: Max Size:  8,170 MB
      Virtual Memory: Available: 6,691 MB
      Virtual Memory: In Use:    1,479 MB
      Page File Location(s):     C:\pagefile.sys
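
    A hedged first step before blaming hardware: pull the unexpected-shutdown (6008) and bugcheck (1001) events from the System log and see what clusters around the reboot times:

      wevtutil qe System /q:"*[System[(EventID=6008)]]" /f:text /c:5 /rd:true
      wevtutil qe System /q:"*[System[(EventID=1001)]]" /f:text /c:5 /rd:true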

  • Application to automate Windows software installation in a test lab

    - by Marc
    I have several test environments (hyper-V) which contain a variety of windows servers. Each machine needs periodically rolling back to a given snapshot and then re-installing with the latest version of our software to test. The software installs are quite complex MSI's with a fair few option screens. I know that the installs can be driven from the command line, passing in parameters to override the wizard options. At the simplest level I suppose I could just write a batch file to kick off each install with the required parameters, however the values that are passed in do need to change from time to time (and environment to environment) so a tool with a config file and simple GUI seems like a better idea. I think what makes it slightly more painful is the multiple environments. For example one environment might contain 4 servers and need a config file with all the server names, service endpoints etc. Another environment might be a 1-box install with all names and endpoints set to localhost. So, ideally I want to be able to store different setup configurations and use them to run all the required installers with the relevant settings against the relevant machines. Before I go off to write the thing, does anyone know of an existing, simple, free tool that will let me achieve this?
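
    At the simplest level, the batch-file idea might look like this hedged sketch: one .props file per environment, each line holding a PROPERTY=value pair for msiexec (the share path, file names, and property names are all placeholders, and values containing spaces would need extra quoting):

      @echo off
      setlocal enabledelayedexpansion
      rem Usage: install.cmd <environment>   (reads <environment>.props)
      set "PROPS="
      for /f "usebackq delims=" %%p in ("%~1.props") do set "PROPS=!PROPS! %%p"
      msiexec /i "\\fileserver\builds\latest\MyApp.msi" /qn /l*v "%TEMP%\%~1-install.log" !PROPS!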

  • Running multiple sites on a LAMP with secure isolation

    - by David C.
    Hi everybody, I have been administering a few LAMP servers with 2-5 sites on each of them. These are basically owned by the same user/client, so there are no security issues apart from attacks through vulnerable daemons or scripts. I am building my own server and would like to start hosting multiple sites. My first concern is... ISOLATION. How can I prevent a c99 script from defacing all the virtual hosts? Also, how should I prevent that c99 from being able to read/write the other sites' directories? (It is easy to "cat" a config.php from another site and then get into its MySQL database.) My server is a VPS with 512 MB of RAM, burstable to 1 GB. Among the free hosting control panels, is there a small one which works on my VPS (and which is maybe compatible with the security approach I would like to have)? Currently I am not planning to host more than 10 sites, but I would not accept that a client/hacker could navigate into unwanted directories or, worse, run malicious scripts. FTP management would be fine; I don't want to complicate things with SSH isolation. What is the best practice in this case? Basically, what do hosting companies do to sleep well? :) Thanks very much! David
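
    On the PHP side, the usual first line of defence against one vhost's script reading another's files is a per-vhost open_basedir, as in this hedged Apache fragment (the paths are placeholders, and this is containment, not a substitute for running each site as a separate user):

      <VirtualHost *:80>
          DocumentRoot /var/www/site1/public
          php_admin_value open_basedir "/var/www/site1:/tmp"
          php_admin_flag display_errors off
      </VirtualHost>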

  • AMD 700, 800 series chipsets. I'm lost.

    - by Shiki
    I've been an Intel/NVIDIA user ever since I started using computers. Intel's prices have really gone up, and they won't get cheaper, so I decided to get an AMD. But WHICH one? I mean... not a shopping question, but... what are the differences? For example: the 880GMA comes with only a single PCIe slot and looks like a Chinese replica (no offense), while the 890FX comes with 5 PCIe slots for quad CrossFire. Also, what's the deal with the 7xx series? It's the same price, yet it's older? Or why is it 7xx? Isn't there a single chipset in between that isn't cheaply made, yet is durable/fine for long-term usage? What it should support (desktop stuff):
      - NVIDIA GPU (Zalman AMP2 GTX 260^2 (one card))
      - Phenom II X6 1090T CPU
      - somewhat good audio
    Any ideas which chipset I'm looking for? If this sounds too much like a shopping question, feel free to edit.
