Search Results

Search found 1732 results on 70 pages for 'insight'.

Page 20 of 70

  • Where do deleted items go on the hard drive?

    - by Jerry
    After reading the quote below on the Casey Anthony trial (CNN), I am curious about where deleted files actually go on a hard drive, how they can be seen after being deleted, and to what extent the data can be recovered (fully, partially, etc.). "Earlier in the trial, experts testified that someone conducted the keyword searches on a desktop computer in the home Casey Anthony shared with her parents. The searches were found in a portion of the computer's hard drive that indicated they had been deleted, Detective Sandra Osborne of the Orange County Sheriff's Office testified Wednesday in Anthony's capital murder trial." I know some of the questions here on Super User address third-party software that can be used for this kind of thing, but I'm more interested in how this data can be seen after deletion, where it resides on the hard drive, and so on. I find the whole topic intriguing, so any additional insight is welcome.

    Read the article
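
    On most filesystems, deleting a file removes only its directory entry and marks the file's blocks as free; the bytes themselves stay on the disk until something overwrites them, which is exactly what forensic tools exploit. A minimal sketch of seeing this first-hand, assuming a Linux machine and a placeholder partition /dev/sdb1:

      # work on an image, never the live disk
      sudo dd if=/dev/sdb1 of=disk.img bs=4M
      # "deleted" contents often still turn up in the raw bytes
      strings -a disk.img | grep -i "some keyword"
      # dedicated carvers (e.g. photorec, from the testdisk package)
      # reassemble whole files from such free-but-unwritten blocks

    Recovery is complete if no block has been reused, partial if some have, and effectively impossible once the blocks are overwritten (or, on an SSD, after TRIM).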

  • Unable to configure Rails with readline

    - by Liam Berg
    1) ./configure --prefix=$HOME/.packages --with-readline-dir=$HOME/.packages
    2) configure: WARNING: unrecognized options: --with-readline-dir
    I am trying to set up the most up-to-date version of Rails on my web host (I do not have sudo access). Line 1 is the configure command I used for Ruby, and Line 2 is the first line printed after executing 'configure'. I've googled this issue and found other people with the same problem, but there aren't any real solutions. There are no warnings or errors when configuring/compiling readline-6.1. I am pretty stumped; any help/insight would be greatly appreciated. Thanks ahead of time.

    Read the article
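
    For what it's worth, autoconf prints "unrecognized options" for any --with-* flag the top-level configure script doesn't know, yet Ruby's build forwards those flags to its extensions, so the warning by itself is often harmless. If readline still doesn't get linked, one approach is to build readline into the same private prefix and point Ruby's build at it. A sketch, assuming the readline-6.1 sources and a 1.9-era Ruby tree:

      # install readline into the private prefix first
      cd readline-6.1
      ./configure --prefix=$HOME/.packages && make && make install

      # build Ruby against it (--with-opt-dir adds the prefix to the search path)
      cd ../ruby
      ./configure --prefix=$HOME/.packages --with-opt-dir=$HOME/.packages
      make && make install

      # confirm the extension compiled
      $HOME/.packages/bin/ruby -rreadline -e 'puts Readline::VERSION'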

  • Is there a way to import email from the raw email files?

    - by Chris Schmitz
    I have a client who recently switched hosts. When they switched hosts they didn't back up their email and updated their configuration settings, so they lost everything. However, I was able to log in to their old hosting control panel and download their mail folder. I am wondering if there is a way to extract their emails and/or contacts from the files. I'm not sure what type of files they are; there is no extension, but the folder directory is structured like this:
      mail/
        .Drafts/
        .Sent/
        .Trash/
        cur/
        new/
        theirdomain.com/
        tmp/
        [email protected]
        maildir
    Inside of the theirdomain.com folder, there is a folder for each account, and inside of that is a folder called "cur" which has a whole bunch of files with names like 1292945327.H169813P25958.uscentral21.myserverhosts.com,S=10117/2,S and if I preview those files I can see the actual email messages inside of them, but I have no idea how to get that information from those files to an email client. Anyone know of a way to work with these files? Thanks in advance for any insight you can share!

    Read the article
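
    That layout (cur/, new/, tmp/, and ",S=..."-style filenames) is a standard Maildir tree: every file is one complete RFC 822 message, so a Maildir-aware client (mutt, or an IMAP server such as Dovecot pointed at the directory) can read it as-is. To feed a client that only understands mbox, a sketch using formail from the procmail package, run inside one account's folder:

      # concatenate every message into a single mbox file,
      # letting formail supply the required "From " separator lines
      for msg in cur/* new/*; do
          formail < "$msg" >> inbox.mbox
      done

    The resulting inbox.mbox can then be imported into most desktop clients.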

  • Mono on Linux: Apache or Nginx

    - by Furism
    Hi, I'm developing an ASP.NET application that will be run under Linux/Mono for various reasons (mostly to stay away from IIS, quite frankly). Of course the first web server I had in mind was Apache. But Apache, for all its advantages, adds a lot of overhead. Also, the application I'm building needs to be highly scalable, and performance is one of the main concerns. Apache has, obviously, a very good reputation and its record speaks for itself, but I don't need things like reverse proxying or load balancing because dedicated network devices would be used for that. So those modules from Apache will never be used. So basically my question is: since Nginx seems to fit my needs exactly, is there any caveat I should be aware of? For instance, is Nginx renowned to be particularly secure? When security flaws are detected, how fast are they patched? Any insight on the pros and cons of using either of those servers in conjunction with Mono is welcome.

    Read the article
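
    One caveat worth knowing up front: nginx has no equivalent of mod_mono, so ASP.NET runs out-of-process behind FastCGI, typically via Mono's fastcgi-mono-server. A sketch, with /srv/www/myapp as a hypothetical application root:

      # start the Mono FastCGI backend on a loopback port
      fastcgi-mono-server4 /applications=/:/srv/www/myapp /socket=tcp:127.0.0.1:9000 &

      # nginx side (inside the relevant server {} block):
      #   location / {
      #       fastcgi_pass 127.0.0.1:9000;
      #       include      fastcgi_params;
      #   }

    The process-management story (restarts, multiple workers) then becomes yours to solve, which is the usual trade-off against Apache + mod_mono.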

  • Monitoring outgoing bandwidth of application

    - by jnolte
    I currently have a VPS that is consuming a ton of outgoing bandwidth, and I am trying to drill down to where this may be coming from. Does anyone know of a logical way to go about finding out which pages on the site are consuming the most outgoing data? We have done a ton of front-end optimizations to the site and our Google Page Speed rankings are 85%, so I feel we have done a pretty great job at optimizing the site for speed. Can someone lend some insight on how they have made similar optimizations?
    Application / Server Stack:
      LEMP running Varnish Cache / PHP5-FPM
      WordPress running W3 Total Cache
      Ubuntu 12.04 LTS

    Read the article
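
    If access logs can be enabled even briefly, the per-URL outgoing volume falls straight out of them, since the combined log format records each response's size in bytes. A sketch, assuming the default format (URL in field 7, byte count in field 10):

      # top 20 URLs by total bytes sent
      awk '{ bytes[$7] += $10 } END { for (u in bytes) print bytes[u], u }' \
          /var/log/nginx/access.log | sort -rn | head -20

    For a live per-connection view rather than a per-URL one, iftop or vnstat on the public interface is a reasonable complement.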

  • VPN server to access Samba4

    - by VisionIncision
    On my network I have an Ubuntu 12.04 server running Samba4; my domain is fully configured and functional. Now I would like to enable VPN access over the internet, and have another box to do so. I have been searching the internet for guides and information, but have not been successful. I have, however, found this guide http://www.howtogeek.com/51237/setting-up-a-vpn-pptp-server-on-debian/ and was wondering if I could adapt it somehow to enable access to my DC services. EDIT: I would need to authenticate my VPN server with my DC, if that is possible of course. Any insight would be wonderful. Regards, Jack Hunt

    Read the article
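
    The guide's pptpd setup can plausibly be made to authenticate against the Samba4 DC: join the VPN box to the domain, verify winbind, and let pppd hand authentication to ntlm_auth via its winbind plugin. A sketch (package names and paths vary by distro, so treat it as an outline):

      # join the VPN server to the AD domain
      sudo net ads join -U Administrator
      # confirm winbind can authenticate a domain user
      wbinfo -a 'DOMAIN\jack%password'

      # the pppd options file for pptpd would then carry something like:
      #   plugin winbind.so
      #   ntlm_auth-helper "/usr/bin/ntlm_auth --helper-protocol=ntlm-server-1"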

  • Apache Slow Over http, Fast Over https

    - by Josh Pennington
    I have an Apache server running on Debian. I am having this very strange situation where loading a page takes about 2 to 3 times longer over http than over https. The primary use of the website is Magento, but I am seeing similar results with other things that we have loaded on the website. I don't have the first clue where to even look on our server or what the problem could be. Does anyone have any insight as to what could be going on or where to look? Josh

    Read the article
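
    curl's timing variables usually narrow this down quickly: compare name lookup, connect, and time-to-first-byte for the same page over both schemes. A sketch:

      for proto in http https; do
          curl -o /dev/null -s -w "$proto dns:%{time_namelookup} connect:%{time_connect} ttfb:%{time_starttransfer} total:%{time_total}\n" \
               "$proto://www.example.com/"
      done

    A large gap in time_starttransfer on http only would point at the port 80 vhost configuration, or something sitting in front of it, rather than at Magento itself.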

  • Remote installation and configuration software, preferably open source

    - by Brimstedt
    Hello, I'm looking for some remote-installation software. I've looked briefly at Unattended, opsi and a bunch of other stuff, but there is nowhere near enough time to evaluate them all, and they are rather complex to set up, so some insight would be very appreciated. This is foremost for Windows clients, but Linux support would be good. Something like apt-get would've been great.
    Requirements:
      Simple to set up and use
      Set up groups of users (developers, management, sales, etc.)
      Choose which software to be installed for different groups
      Add new software to groups and have it automatically installed on clients
      Dependencies between software
    Nice-to-have:
      Linux client support
      OS-unattended installation
    Thanks in advance

    Read the article

  • Can I fix memory leaks in browsers? (Win7 64bit IE9 Chrome)

    - by Randy
    I have a habit of opening websites in their own window, then getting interested in something else in another window while leaving the first window open with the intention of returning to finish reading. Sometimes I'll leave browser windows open for quite a while. When I do, the browser locks up gigs of memory until it uses so much that IE will no longer respond. I've seen one browser window use up over 2 gigs just sitting there doing nothing. Chrome appears to do it too, but it doesn't use nearly as much as IE. Is there a way to stop this? Any ideas what may be causing it? Is it a problem with the OS or the browsers? Any insight is appreciated.

    Read the article

  • Cron Permission Denied

    - by worldthreat
    Good day. I have a bash script in my home directory that works properly from the command line (the file structure is a default Media Temple DV, noted for certain permission issues), but I receive this error from cron:
      "/home/myFile.sh: line 2: /var/www/vhosts/domain.com/subdomains/techspatch/installation.sql: Permission denied"
    NOTICE: it's just line 2... it writes to the local server just fine. Below is the bash file:
      #!/bin/bash
      mysqldump -uUSER -pPASSWORD -hHOST dbName > /var/www/vhosts/domain.com/subdomains/techspatch/installation.sql
      mysql -uadmin -pPASSWORD -hlocalhost dbName < /var/www/vhosts/domain.com/subdomains/techspatch/installation.sql
    I can't chmod from bash (lol, yeah I tried). Writing the file there and setting the permissions before the transfer is useless. I have googled the heck out of this situation and this one still seems unique... any insight is appreciated.

    Read the article
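
    Since line 2 is the only line that fails, the script itself is fine; the likely difference is that cron runs the script as a user who cannot write to that vhost directory, while the interactive shell can. A diagnostic sketch ('cronuser' is a placeholder for whichever account owns the crontab):

      # who owns the target directory, and with what mode?
      ls -ld /var/www/vhosts/domain.com/subdomains/techspatch
      # can the cron user actually create a file there?
      sudo -u cronuser touch /var/www/vhosts/domain.com/subdomains/techspatch/probe && echo writable

      # if not: give that user (or its group) write access...
      chown cronuser /var/www/vhosts/domain.com/subdomains/techspatch
      # ...or have the script write to a directory it owns and copy afterwards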

  • iTunes Title Bar is Equal to the Location of the Library file?

    - by Urda
    I have never been able to get a clear answer on why the latest edition of iTunes does this. I have my entire iTunes library located in C:\itunes\ and the library data files inside C:\itunes\!library_info for backup purposes. However, when version 9 of iTunes came out, the title bar went from showing "iTunes" to showing "!library_info". Is there any way to get around this without moving my data files? It's an annoying "feature," if that is what it is. Again, Apple support and forums were of no help to me. Anyone have insight on this? System info: Windows Vista x86 Ultimate, latest updates. iTunes version 9.0.3.15. Screenshot: http://farm5.static.flickr.com/4072/4414557544_d0b25eb64c_o.jpg Flickr page: http://www.flickr.com/photos/urda/4414557544/

    Read the article

  • Easy Deployment Split Tunnel VPN Connection

    - by Joey Harris
    I was wondering if anybody could offer some insight as to how I can mass-deploy VPN connection settings that support split tunneling. It has to work on both Mac and Windows systems, though if a script is used it can obviously be two separate scripts for the two platforms. I will be setting up a Windows server with a file server and Exchange server, and to access the file server the clients will go through the VPN because we will have sensitive data. I don't want the server's network to be bogged down with the clients' normal internet traffic, so I need some way to set up split tunneling on the clients without them having to put in a few commands every time to set up the static routes. I've looked at the Cisco VPN client, but I want to try to stick with Windows RRAS and avoid buying a Cisco VPN endpoint. I'm basically looking for a good VPN client that can support split tunneling and mass deployment.

    Read the article
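
    Client-side split tunneling generally reduces to two settings: don't use the VPN as the default gateway, and add static routes for the office subnets when the tunnel comes up. On Windows that pair is the connection's "Use default gateway on remote network" checkbox plus route additions (CMAK can package both for mass deployment); the Mac side can be scripted. A sketch for the Mac clients, with 10.0.0.0/24 and ppp0 as placeholders:

      #!/bin/bash
      # route only office traffic through the tunnel
      sudo route -n add -net 10.0.0.0/24 -interface ppp0
      # placing commands like this in /etc/ppp/ip-up runs them
      # automatically each time the VPN connects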

  • FortiGate firewall configuration with /30 and /28 networks

    - by slyderc
    I have fiber coming in from a new ISP which is being handed off via Ethernet on a single physical port. I'm having doubts about how to approach the configuration on my FortiGate 200A firewall because I've been given a /30 containing the ISP's gateway and another /28 for external IPs I can use:
      x.y.76.12/30 (.13 is the GW)
      x.y.76.64/28 (public IP space)
    How do I configure the FG200A's WAN1 interface to be aware of the two networks? As I only have one physical ISP port, will I need to plug it into a switch to break out two cables and use a DMZ port on the FG200A for setting up the /28? Thanks in advance for your insight!

    Read the article
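
    No external switch should be needed: one common pattern is to give wan1 the usable /30 address and attach the /28 as a secondary IP on the same interface. A sketch in FortiOS CLI (syntax shifts between firmware versions, so verify against the 200A's documentation):

      config system interface
          edit "wan1"
              set ip x.y.76.14 255.255.255.252
              set secondary-IP enable
              config secondaryip
                  edit 1
                      set ip x.y.76.65 255.255.255.240
                  next
              end
          next
      end
      config router static
          edit 1
              set device "wan1"
              set gateway x.y.76.13
          next
      end

    The remaining /28 addresses can then be mapped to internal hosts with VIPs or IP pools.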

  • How to establish real-time communication between a shopping cart running MySQL and an internal system running PostgreSQL [closed]

    - by Andrew
    I am thinking about ways of establishing some sort of real-time connection between a MySQL-powered shopping cart and an internal system that is running on PostgreSQL. Could you give me some insight on this topic? For example, I could write a CSV export application, then enable remote MySQL for an over-the-internet connection and import the CSV directly from a PC, or upload the CSV and run a cron job on the server. But this kind of import/export causes delays, so I would like to link the databases (or something of the sort). I have never done it before and would like to hear some opinions about this. Another thought might be to implement triggers that would initiate the update process via CSV, but again, I would like to avoid CSV. Do you have any good advice? Maybe some specific examples?

    Read the article
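
    Short of a real replication layer, a cron-driven pull that streams rows straight from the MySQL client into PostgreSQL's COPY avoids both the CSV files on disk and most of the delay. A sketch with placeholder hosts, credentials, and a hypothetical orders table:

      #!/bin/bash
      # mysql --batch emits tab-separated rows, which COPY FROM STDIN accepts;
      # caveat: NULLs need mapping (MySQL prints "NULL", COPY expects \N)
      mysql -h carthost -u shop -pPASSWORD shopdb \
            --batch --skip-column-names \
            -e "SELECT id, sku, qty FROM orders" \
      | psql -h internalhost -U internal internaldb \
             -c "COPY orders_staging (id, sku, qty) FROM STDIN"

    Run every minute this is "near real time"; for genuinely event-driven updates, triggers writing to a queue table that such a job drains, or a foreign data wrapper on the PostgreSQL side, are the usual next steps.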

  • kerberos5 unable to authenticate

    - by wolfgangsz
    We have a Debian file server, configured to serve up Samba shares, using winbind and Kerberos. It is configured to authenticate against a Windows 2003 DC. All worked fine until recently, when I did a maintenance update on all packages. Since then, all attempts to connect to any of the shares (and also to just log into the box) fail. The logs contain this message, which seems to be at the root of the evil:
      [2009/09/14 12:04:29, 10] libsmb/clikrb5.c:get_krb5_smb_session_key(685)
        Got KRB5 session key of length 16
      [2009/09/14 12:04:29, 10] libsmb/clikrb5.c:unwrap_pac(280)
        authorization data is not a Windows PAC (type: 141)
      [2009/09/14 12:04:29, 3] libads/kerberos_verify.c:ads_verify_ticket(430)
        ads_verify_ticket: did not retrieve auth data. continuing without PAC
    From there on it fails to find the user account on the DC, subsequently remaps the user to user nobody, and then (rightly) refuses to grant access to the share. However, the following works just fine:
      wbinfo -a user%password
    I was wondering whether anybody has had this problem and could provide some insight. I would be happy to provide neutralised config files.

    Read the article
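
    Before chasing the PAC messages themselves, it helps to confirm which layer the update broke: the machine account, Kerberos, or winbind. A quick checklist (realm and user are placeholders):

      sudo net ads testjoin                 # is the machine account still valid?
      kinit user@YOURREALM.LOCAL            # can we obtain a TGT at all?
      klist
      wbinfo -t                             # machine trust secret
      wbinfo -u | head                      # can winbind enumerate domain users?
      sudo net ads join -U Administrator    # re-join if the join is stale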

  • Correlating %RDY in esxtop to CPU Usage in Guest

    - by Joe
    We recently upgraded a number of our VMware hosts from 4.1 to 5.5 and noticed many of the VMs saw a step-wise jump in CPU usage as shown by the guest VM. We have not yet upgraded VMware Tools on any of the guests, but after investigating a bit more we saw many of these guests with a high %RDY value (50%) when viewed under esxtop. Unfortunately Linux (the guest) just shows "high CPU usage" without any insight into what portion of that is coming from %RDY (VMware saying, "your guest is waiting on CPU from the host"). Are there any tools, /proc entries, etc. that can shed light on that information?

    Read the article
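
    Inside a Linux guest, hypervisor-stolen time is exposed as the "steal" counter (the guest-side cousin of %RDY) on reasonably modern kernels, and the stock tools report it. A sketch:

      # 8th numeric field of the "cpu" line is cumulative steal time, in ticks
      grep '^cpu ' /proc/stat

      # the "st" column = percent of CPU time stolen by the hypervisor
      vmstat 1 5

      # top shows the same number as %st on the Cpu(s) line

    One caveat: whether that counter is actually populated depends on the hypervisor, and VMware has historically not fed it the way Xen/KVM do, so a zero there does not rule out contention; esxtop on the host remains the authoritative view.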

  • Are there any statistics on the number of networks using Ethernet Jumbo frames?

    - by Russell Heilling
    I am often told by sales engineers and product managers that a Layer 2 Ethernet service is not fit for purpose if the maximum supported frame size is in the 1518-1522 range (enough to support standard Ethernet frames or VLAN-tagged frames); or, in other words, that an MTU of 1500 is not enough (see this blog post for my definition of MTU and a short rant on how the term is often misused). I am never able to get any details from them on:
      a) What proportion of customers (Enterprise / SMB) require jumbo frames?
      b) What are typical expectations on frame size for jumbos on WAN links? 1600, 2000, 9k, etc.
    I know that in telco and data centre environments jumbos are pretty common, but I am after some insight as to how common this is within Enterprise and SMB networks.

    Read the article
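
    When a provider does quote a frame size, the path is easy to verify end to end: ping with don't-fragment set and a payload sized to the target MTU (payload = MTU minus 28 bytes of IP and ICMP headers). A sketch from a Linux host:

      # 9000-byte IP MTU: 9000 - 20 (IP) - 8 (ICMP) = 8972 bytes of payload
      ping -M do -s 8972 -c 4 far-end-host
      # standard 1500-byte MTU path for comparison
      ping -M do -s 1472 -c 4 far-end-host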

  • Two different sites, same IP, same top-level domain, on IIS 7.5 -- one works and the other displays HTTP 404 error

    - by user717236
    I'm running a Windows 2008 R2 box with IIS 7.5 as the web server. On IIS, I have two websites: mysubsite1.mysite.com and mysubsite2.mysite.com. There is only one IP on the server and both sites share this IP. Here is how I have the bindings configured (screenshot not preserved). mysubsite1.mysite.com works fine. However, mysubsite2.mysite.com gives me the following error:
      Not Found
      HTTP Error 404. The requested resource is not found.
    Now, if I change the Host name field for mysubsite1.mysite.com to blank and restart the web server, both sites work! The question is: why does the Host name field for the first site cause an HTTP 404 error for the second site when both sites' Host name fields are filled? I would appreciate any insight. Thank you.

    Read the article
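
    Dumping the bindings from the command line often exposes the mismatch faster than the GUI. A sketch with appcmd (site names are placeholders):

      %windir%\system32\inetsrv\appcmd list sites

      %windir%\system32\inetsrv\appcmd set site "mysubsite2.mysite.com" ^
          /+bindings.[protocol='http',bindingInformation='*:80:mysubsite2.mysite.com']

    With a blank host name, the first site acts as the catch-all for the shared IP, which is why clearing it makes both sites answer. With both host names filled, each request's Host header must match a binding exactly; that bare "HTTP Error 404. The requested resource is not found." is the HTTP.sys-level response, suggesting the incoming Host value isn't matching the second site's binding as typed.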

  • Issue with PHP and OS X 10.7 - runs via command line but not in browser

    - by jnolte
    I recently removed MAMP as I wanted to have more control over my machine and wanted to make use of PHP 5.4, which I installed using the script located here. I now cannot even get the default PHP that is built into OS X to work. I am testing with a simple phpinfo() script in a document in my ~/Sites directory. I am really at a loss as to why this will not work. I have PHP 5 installed in my /usr/local directory via the link provided above, and it seems like the main PHP is installed in /usr/bin. Any and all insight on how to debug this would be greatly appreciated.

    Read the article
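
    On a stock 10.7 install the built-in Apache ships with its PHP module commented out, so scripts under ~/Sites render as plain text until it is enabled; that is worth ruling out before blaming the new 5.4 build. A sketch:

      # enable the bundled PHP module (commented out by default)
      sudo sed -i '' 's|^#\(LoadModule php5_module libexec/apache2/libphp5.so\)|\1|' /etc/apache2/httpd.conf
      sudo apachectl restart

      # check which CLI binaries are in play
      which -a php    # /usr/local/bin/php should precede /usr/bin/php if the new build is meant to win
      php -v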

  • What is a 'best practice' backup plan for a website?

    - by HollerTrain
    I have a website which is very large and has a large user base. I am trying to think of a 'best practice' way to create a backup or mirror website, so if something happens on domain.com, I can quickly point the site to backup.domain.com via a 302 redirect. This would give me time to troubleshoot domain.com while everyone is viewing backup.domain.com and not knowing the difference. Is my method the ideal method, or have you enacted better methods for creating a backup site? I don't want to have the site go down and then get yelled at every minute while I'm trying to fix it. Ideally I would just 'flip the switch' and it would redirect the user to a backup. Any insight would be greatly appreciated.

    Read the article
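
    A 302 (temporary) redirect is the right flavor here, so browsers don't cache the switch after failing back. One low-tech way to get a literal "flip the switch" with Apache is a rewrite rule guarded by a flag file, sketched below with hypothetical paths:

      # in the domain.com vhost; touch /var/www/maintenance.flag to fail over,
      # remove it to fail back
      RewriteEngine On
      RewriteCond /var/www/maintenance.flag -f
      RewriteRule ^ http://backup.domain.com%{REQUEST_URI} [R=302,L]

    The harder problem is keeping backup.domain.com's data fresh (and read-only, if the site takes writes); a low-TTL DNS record pointed at the backup is the common alternative when the primary host itself is unreachable.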

  • Adding FK Index to existing table in Merge Replication Topology

    - by Refracted Paladin
    I have a table that has grown quite large that we are replicating to about 120 subscribers. An FK on that table does not have an index, and when I ran an execution plan on a query that was causing issues it had this to say:
      /*
      Missing Index Details from CaseNotesTimeoutQuerys.sql - mylocal\sqlexpress.MATRIX (WWCARES\pschaller (54))
      The Query Processor estimates that implementing the following index could improve the query cost by 99.5556%.
      */
      /*
      USE [MATRIX]
      GO
      CREATE NONCLUSTERED INDEX [<Name of Missing Index, sysname,>]
      ON [dbo].[tblCaseNotes] ([PersonID])
      GO
      */
    I would like to add this, but I am afraid it will FORCE a reinitialization. Can anyone verify or validate my concerns? Does it even work that way, or would I need to run the script on each subscriber? Any insight would be appreciated.

    Read the article
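
    For what it's worth, plain CREATE INDEX statements are not among the schema changes merge replication propagates, and adding one does not invalidate the subscription, so no reinitialization should be forced; the index simply has to be created at the publisher and at each subscriber (sp_addscriptexec can distribute the script). A sketch with a hypothetical index name, filling in the template's placeholder:

      -- run at the publisher and each subscriber; the name is illustrative
      CREATE NONCLUSTERED INDEX IX_tblCaseNotes_PersonID
          ON dbo.tblCaseNotes (PersonID);

    Worth verifying against the replication documentation for your service-pack level before rolling it out to 120 subscribers.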

  • SQL Merge Replication - Filter Sets

    - by Refracted Paladin
    I have a "working" Replication Set in SQL 2005 that we use in house to our users at remote branches on SQL Express 2005. I want to apply a filter to our biggest Set to help minimize the bandwidth impact. What I am asking is what considerations do I need to take into account before throwing a filter on there. Will it cause any issues I should be aware of? Does it affect compression adversely. Will everyone need to reinitialize after applying it? Any heads up or insight would be appreciated. Thanks,

    Read the article

  • Email alerts when hard drive fails on a Dell PowerEdge 2950 (PERC5I, SAS)?

    - by BigJoe714
    I recently purchased a used Dell PowerEdge 2950 and set up the hard drives in a RAID-5 configuration. I want to be able to get an email alert if one of the drives fails, and I have been trying to determine the easiest way to set that up. The controller card is listed as PERC5i, SAS PowerEdge. From my numerous Google searches, it looks like I need to install Dell OpenManage Essentials; however, this looks to be a giant application with tons of bells & whistles for managing many servers, when all I really want is something for this one server. Can anyone offer me any insight into what I could do?

    Read the article
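
    A lighter option than Essentials is installing just OpenManage Server Administrator (OMSA) on the 2950 and polling its CLI from cron; omreport exposes the PERC's view of each disk. A sketch, assuming the srvadmin packages are installed and outbound mail works (the address is a placeholder):

      #!/bin/bash
      # alert if any physical disk on controller 0 reports a non-Online state
      BAD=$(omreport storage pdisk controller=0 | grep '^State' | grep -v 'Online')
      if [ -n "$BAD" ]; then
          echo "$BAD" | mail -s "PERC disk alert on $(hostname)" admin@example.com
      fi

    OMSA's own alert actions can also be pointed at a mail script, which avoids the polling entirely.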

  • Spiceworks versus Request Tracker?

    - by dmackey
    We currently utilize Request Tracker for help desk ticketing and Spiceworks for asset inventorying. I am pondering whether it might be worthwhile to move from RT to Spiceworks for the help desk as well. Has anyone used both systems who can provide some insight into any benefits/problems with either system? Or general philosophical reasons why one should use one solution over the other? Of course, RT is open source and Spiceworks is not, and usually this would be a major item for me, but since Spiceworks is free and engages its community fairly actively, it's not as major of a concern for me (personally).

    Read the article

  • Nginx Static Content Server Maxing Out?

    - by Harry
    I use nginx to serve the static content for a decently busy website of mine. I have logging disabled and 4 worker processes enabled with 5,000 connections per worker (which should yield a max connection limit of 20,000). The server is only operating at about 10% CPU usage and 50% RAM, but it's very laggy, and sometimes nginx is so slow to respond to requests that it times out. For a small number of connections it's fine, but once any load starts occurring (~2,500 connections), it backs up and bogs down. Are there any other bottlenecks or limits that I might be hitting? This is a FreeBSD server, and all the static files are located locally (not NFS). The NIC is an unmetered gigabit, and it's only using around 75 megabits. Any insight would be appreciated. Thanks.

    Read the article
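
    At 10% CPU the box is probably hitting an OS ceiling rather than an nginx one; on FreeBSD the usual suspects are file-descriptor limits, the listen queue, and mbufs. A few checks worth running (sketch):

      sysctl kern.maxfiles kern.maxfilesperproc   # descriptor ceilings
      sysctl kern.ipc.somaxconn                   # maximum listen backlog
      netstat -Lan                                # per-socket listen queues (overflows show here)
      netstat -m                                  # mbuf usage/exhaustion

    On the nginx side, worker_rlimit_nofile has to be raised to match worker_connections, and the listen directive's backlog= parameter is capped by kern.ipc.somaxconn.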
