Search Results

Search found 20155 results on 807 pages for 'things'.


  • Two hosted servers, one public - VPN?

    - by Aquitaine
    Hello there,

    Web developer here who has to occasionally wear a system & network admin hat (small company). We currently have a single hosted server running Windows Server 2003 that runs both our web server (IIS/ColdFusion) and our database server (SQL Server 2008). We lock down the SQL server by allowing only specific IPs to connect to it. Not ideal, but it's worked thus far.

    We're moving up to two distinct servers, and I want to take the opportunity to 'get things right' and make only the web server face the public. What I need to be able to do is allow only a handful of people to connect to the database server. Rather than using an IP allow list, I'd prefer to use a VPN to let people through, so that access is based on the user and not simply the user's location. I'm leaning toward something like OpenVPN, just so I can stick with Server 2008 Web edition. Do I:

    1. Use the web server as a VPN server and set up the database server to only accept connections from the web server? Is there an extra step required to make connections to, say, db.mycompany.com route through the VPN rather than through a different connection? I'm ignorant of this part of network infrastructure stuff. Or,

    2. Set up a VPN server on the database server as the only public-facing server connection, so that there aren't any routing issues to deal with?

    I know this is Network 101 stuff but I thought I'd ask before just blundering through it, since it could affect the company a bit. Thanks very much!
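
    For what it's worth, the firewall side of option 2 comes down to a couple of rules on the database box. A minimal sketch only, assuming OpenVPN with its default 10.8.0.0/24 tunnel subnet and SQL Server on its default port; the rule names and subnet are illustrative, not from the question:

        rem Allow SQL Server (TCP 1433) only from addresses inside the VPN tunnel:
        netsh advfirewall firewall add rule name="SQL via VPN only" dir=in action=allow protocol=TCP localport=1433 remoteip=10.8.0.0/24

        rem Allow the OpenVPN service itself (UDP 1194) from anywhere:
        netsh advfirewall firewall add rule name="OpenVPN" dir=in action=allow protocol=UDP localport=1194

    With rules like these, db.mycompany.com can still resolve to the public address; connections simply fail unless they arrive through the tunnel.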

    Read the article

  • How do I get a Wireless N PCI card to connect to a wireless G router?

    - by Andy
    I'm having some problems setting up a new wireless PCI card on a WinXP SP3 PC. I know that the router is configured correctly. It is a Linksys WRT54GL, using 802.11b/g. Security mode is WPA2 Personal with TKIP+AES encryption. I am able to connect to this fine using my laptop (first-gen MacBook with an 802.11b built-in card).

    The new PCI card is also Linksys, but it supports 802.11n. The card seems to be installed OK (Windows sees it fine, doesn't list any errors in Device Manager); however, when it scans for available wireless networks it can't find my wireless network (the router is set to broadcast the SSID). I tried to enter the network SSID manually, but that didn't seem to help. I chose WPA2-PSK for network authentication. The only options for encryption are TKIP or AES - I've tried both; neither worked. I am sure that I typed in my wireless key correctly.

    At this point, I don't think the problem is with encryption, but something else. It almost seems like I need to switch the wireless card into G mode, but I haven't found a way to do that (if that is even possible/necessary - I thought N was fully backwards compatible with G). Also, the PC is in the same room as the router and my laptop, so I don't think that it is an interference issue.

    Any ideas what I'm doing wrong? I'm running out of things to try at this point. :(

    Read the article

  • Two audio streams - headphones and speakers

    - by Sylvester
    What I want (this is probably hard for most to answer, as this is a very unusual setup) is to have two different streams of audio - one through the headphones and one through the main speakers. (This means an audio splitter is not an option, as that would still be only one stream.) I can do the audio rerouting using Virtual Audio Cables; however, the problem is this: I cannot get both headphones AND speakers to play even just one stream, let alone two separate ones. Using "split front and back audio into separate streams" is not an option, as the actual motherboard F_PANEL header is faulty (nothing to do with the case front panel, just so you know - that works fine).

    So, first things first: I need the system to recognise the headphones as a separate audio device, so that Virtual Audio Cables will detect it and allow me to route the necessary audio to the headphones only. I also need to be able to have sound play through speakers and headphones together.

    What I want to achieve overall is this: have the ENTIRE computer's sound picked up by VAC and streamed to Line1, then have Line1 stream to the headphones. That way, whatever is being streamed is heard through the headphones, while the entire system's sounds (including those not streamed) are played through the speakers.

    Read the article

  • 403 Forbidden When Using AuthzSVNAccessFile

    - by David Osborn
    I've had a nicely functioning SVN server running on Windows that uses Apache for access. In the original setup every user had access to all repositories, but I recently needed the ability to grant a user access to only one repository. I uncommented the AuthzSVNAccessFile line in my httpd.conf file, pointed it to an access file, and set up the access file, but I get a 403 Forbidden when I go to mydomain.com/svn. If I re-comment out this line, then things work again. I also made sure I uncommented the LoadModule authz_svn_module line and verified that it was pointing to the correct file.

    Below is the Location section of my httpd.conf and my svnaccessfile.

    httpd.conf (Location section only):

        <Location /svn>
          DAV svn
          SVNParentPath C:\svn
          SVNListParentPath on
          AuthType Basic
          AuthName "Subversion repositories"
          AuthUserFile passwd
          Require valid-user
          AuthzSVNAccessFile svnaccessfile
        </Location>

    (I want a more complex policy in the long run, but just did this to test the file out.)

    svnaccessfile:

        [svn:/]
        * = rw

    I have also tried just the below for the svnaccessfile:

        [/]
        * = rw

    I also restart the service after each change, just to make sure it is taken.
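
    As an aside, the per-repository policy the question is working toward would look roughly like the following - a sketch only, with hypothetical repository and user names. With SVNParentPath, the name before the colon in a section header must match the repository folder name under C:\svn, so [svn:/] only applies if a repository is literally named "svn":

        [groups]
        everyone = alice, bob, carol

        [/]
        @everyone = rw

        [private-repo:/]
        * =
        carol = rw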

    Read the article

  • Total newb having SSH and remote MySQL access problems

    - by kscott
    I don't often work with Linux or need to SSH into remote MySQL databases, so pardon my ignorance. For months I had been using the HeidiSQL client application to remotely access a MySQL database. Today two things happened: the DB moved to a new server, and I updated HeidiSQL. Now I cannot log in to the MySQL server; when attempting, I get this message from Heidi:

        SQL Error (2003) in statement #0: Can't connect to MySQL server on 'localhost' (10061)

    If I use PuTTY, I can connect to the server and get MySQL access through the command line, including fetching data from the DB. I assume this means my credentials and address are correct, but I do not understand why putting those same details into HeidiSQL's SSH tunnel info won't work.

    I also downloaded MySQL Workbench and attempted to set up a connection through that client, and got this message:

        Cannot Connect to Database Server
        Your connection attempt failed for user 'myusername' from your host to server at localhost:3306:
        Lost connection to MySQL server at 'reading initial communication packet', system error: 0
        Please:
        1. Check that mysql is running on server localhost
        2. Check that mysql is running on port 3306 (note: 3306 is the default, but this can be changed)
        3. Check the myusername has rights to connect to localhost from your address (mysql rights define what clients can connect to the server and from which machines)
        4. Make sure you are both providing a password if needed and using the correct password for localhost connecting from the host address you're connecting from

    From Googling around I see that it could be related to the MySQL bind-address, but I am a third-party subcontractor with no access to the MySQL settings of this box, and the system admin is assuring me that I'm an idiot and need to figure it out on my end. This is completely possible, but I don't know what else to try.

    Edit 1 - The client settings I am using

    In Heidi and MySQL Workbench I am using the following:

        SSH host + port: theHostnameOfTheRemoteServer.com:22 {this is the same host I can PuTTY to}
        SSH Username: mySSHusername {the same user name I use for my PuTTY connection}
        SSH Password: mySSHpassword {the same password for the PuTTY connection}
        Local port: 3307
        MySQL host: theHostnameOfTheRemoteServer.com
        MySQL User: mySQLusername {which I can connect with once in with PuTTY}
        MySQL Password: mySQLpassword {which works once in with PuTTY}
        Port: 3306
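
    For comparison, the tunnel these clients build can be tested by hand. A minimal sketch, assuming OpenSSH is available on the local machine; the hostname and credentials are the question's own placeholders:

        # Forward local port 3307 to MySQL as seen from the server itself:
        ssh -L 3307:127.0.0.1:3306 mySSHusername@theHostnameOfTheRemoteServer.com

        # In a second terminal, connect through the tunnel:
        mysql -h 127.0.0.1 -P 3307 -u mySQLusername -p

    The detail that usually trips people up is that the "MySQL host" field is resolved at the far end of the tunnel, on the server, so 127.0.0.1 (or whatever address mysqld is bound to) typically belongs there rather than the server's public hostname.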

    Read the article

  • VMware Workstation downloads as a txt file?

    - by George Mauer
    I just went to the VMware website because I want to try Workstation over VirtualBox. I signed up for a Workstation trial and clicked download on the 64-bit Linux version. What downloaded is a 320-megabyte txt file:

        VMware-Workstation-Full-8.0.2-591240.x86_64.txt

    What gives? Is anyone familiar with this pattern of delivering software? How do I run it? Here is the beginning of that file:

        #!/usr/bin/env bash
        #
        # VMware Installer Launcher
        #
        # This is the executable stub to check if the VMware Installer Service
        # is installed and if so, launch it. If it is not installed, the
        # attached payload is extracted, the VMIS is installed, and the VMIS
        # is launched to install the bundle as normal.

        # Architecture this bundle was built for (x86 or x64)
        ARCH=x64

        if [ -z "$BASH" ]; then
            # $- expands to the current options so things like -x get passed through
            if [ ! -z "$-" ]; then
                opts="-$-"
            fi
            # dash flips out if $opts is quoted, so don't.
            exec /usr/bin/env bash $opts "$0" "$@"
            echo "Unable to restart with bash shell"
            exit 1
        fi

        set -e

        ETCDIR=/etc/vmware-installer
        OLDETCDIR="/etc/vmware"

        ### Offsets ###
        # These are offsets that are later used relative to EOF.
        FOOTER_SIZE=52

        # This won't work with non-GNU stat.
        FILE_SIZE=`stat --format "%s" "$0"`
        offset=$(($FILE_SIZE - 4))
        MAGIC_OFFSET=$offset
        offset=$(($offset - 4))
        CHECKSUM_OFFSET=$offset
        offset=$(($offset - 4))
        VERSION_OFFSET=$offset
        offset=$(($offset - 4))
        PREPAYLOAD_OFFSET=$offset
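
    Judging from that stub, the download is a self-extracting shell installer (VMware normally ships these with a .bundle extension) that the server has mislabeled as text. A minimal sketch of running it, assuming the file downloaded intact and you have root:

        # The rename is cosmetic -- the shell only cares about the execute bit:
        mv VMware-Workstation-Full-8.0.2-591240.x86_64.txt VMware-Workstation-Full-8.0.2-591240.x86_64.bundle
        chmod +x VMware-Workstation-Full-8.0.2-591240.x86_64.bundle
        sudo ./VMware-Workstation-Full-8.0.2-591240.x86_64.bundle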

    Read the article

  • Routing a single request through multiple nginx backend apps

    - by Jonathan Oliver
    I wanted to get an idea of whether anything like the following scenario is possible:

    1. Nginx handles a request and routes it to some kind of authentication application where cookies and/or other kinds of security identifiers are interpreted and verified. The app perhaps makes a few additions to the request (appending authenticated headers). Failing authentication returns an HTTP 401.

    2. Nginx then takes the request and routes it through an authorization application, which determines - based upon identity, the HTTP verb (PUT, DELETE, GET, etc.) and the URL in question - whether the actor/agent/user has permission to perform the intended action. Perhaps the authorization application modifies the request somewhat by appending another header, for example. Failing authorization returns 403.

    3. (Wash, rinse, repeat the proxy pattern for any number of services that want to participate in the request in some fashion.)

    4. Finally, Nginx routes the request into the actual application code, where the request is inspected and the requested operations are executed according to the URL in question, and where the identity of the user can be captured and understood by the application by looking at the altered HTTP request.

    Ideally, Nginx could do this natively or with a plugin. Any ideas?

    The alternative that I've considered is having Nginx hand off the initial request to the authentication application and then have this application proxy the request back through to Nginx (whether on the same box or another box). I know there are a number of application frameworks (Django, RoR, etc.) that can do a lot of this stuff "in process", but I was trying to make things a little more generic and self-contained, where different applications could "hook" the HTTP pipeline of Nginx and then participate in, short-circuit, and even modify the request accordingly.

    If Nginx can't do this, is anyone aware of other web servers that will perform in the manner described above?
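
    The first hop of this pipeline can be approximated with nginx's auth_request module (originally a third-party module; later nginx builds include it via --with-http_auth_request_module). A minimal sketch with hypothetical upstream addresses - it covers the 401 gate and passing identity onward, though chaining several request-modifying services still needs the proxy-back approach described above:

        location / {
            auth_request /_auth;                        # subrequest decides 401 vs. pass
            auth_request_set $user $upstream_http_x_user;
            proxy_set_header X-User $user;              # forward identity to the app
            proxy_pass http://127.0.0.1:8080;           # the actual application
        }

        location = /_auth {
            internal;
            proxy_pass http://127.0.0.1:9000;           # authentication service
            proxy_pass_request_body off;
            proxy_set_header Content-Length "";
            proxy_set_header X-Original-URI $request_uri;
        }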

    Read the article

  • Linux: prevent VNC from swapping like mad

    - by Weezy
    I'm accessing a Mac Mini (with Mac OS X 10.4) from my Linux machine using VNC, and there's an issue that is driving me crazy...

    My Linux machine has 4 GB of RAM, and I run a lot of various apps on it with no issue at all. It's all snappy, and I don't hear the hard disk swapping/reading/writing too often. Now with VNC, the hard disk is swapping like mad when I'm moving things on the OS X desktop.

    So I was thinking of creating a ramdisk and forcing the temp VNC files to go into that ramdisk, but the problem is I can't find any temp files. I've attempted to do this:

        #!/bin/bash
        while [ true ]
        do
            lsof | grep vnc
        done

    and eyeball-parse the output to try to find some temp file: no luck. The VNC version I'm using is this one:

        $ vncviewer -version
        VNC Viewer Free Edition 4.1.1 for X - built Jan 30 2009 19:33:16
        Copyright (C) 2002-2005 RealVNC Ltd.

    No matter how much data is coming from the Mac, there should be plenty of memory (4 GB of RAM), so there's really no reason to swap like crazy. This is driving me mad. Any help as to how I could solve this problem is most welcome, because this is literally driving me nuts.
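
    A more targeted way to look for the viewer's open files is to ask the kernel directly. A sketch, assuming the process is named vncviewer (adjust the pgrep pattern if not):

        pid=$(pgrep -n vncviewer)
        ls -l /proc/$pid/fd        # every open file descriptor, deleted temp files included
        vmstat 1                   # watch the si/so columns to confirm real swap traffic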

    Read the article

  • Best way to 'harden' embedded ext4 file server against unexpected loss of power?

    - by Jeremy Friesner
    Hi all,

    First, a little background: my company makes an audio streaming device that is a headless, rack-mounted Linux box with a couple of SSDs attached. Each SSD is formatted with ext4. The users can connect to the system using Samba/CIFS to upload new audio files or access existing ones. There is also custom software for streaming out audio over the network. This is all fine.

    The only problem is that the users are audio people, not computer people, and see the system as a 'black box', not as a computer. Which means that at the end of the day, they aren't going to ssh in to the box and enter "/sbin/shutdown -h"; they are just going to cut power to the rack and leave, and expect things to still work properly the next day. Since ext4 has journalling, journal checksumming, etc., this mostly works. The only time it doesn't work is when someone uploads a new file via Samba and then cuts power to the system before the uploaded data has been fully flushed to the disk. In that case, they come in the next day and find that their new file has been truncated or is missing entirely, and are unhappy.

    My question is, what is the best way to avoid this problem? Is there a way to get smbd to call "sync" at the end of every upload? (Performance on uploads isn't so important, since they only happen occasionally.) Or is there a way to tell ext4 to automatically flush within a few seconds of any change to a file? (Again, performance can be sacrificed for safety here.) Should I set a particular write-ordering mode, activate barriers, etc.?
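
    Both knobs asked about do exist. A minimal sketch of each, with a hypothetical share name and mount point - worth benchmarking, since both force far more synchronous I/O:

        # smb.conf -- make smbd fsync on every write to this share:
        [audio]
            path = /mnt/ssd1
            strict sync = yes
            sync always = yes

        # /etc/fstab -- shorten ext4's journal commit interval from the default
        # 5 seconds to 1, with write barriers explicitly on:
        /dev/sda1  /mnt/ssd1  ext4  defaults,commit=1,barrier=1  0  2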

    Read the article

  • Building a Media Center PC with Comcast Cable...?

    - by Rob
    Alright - so this might be a stupid question, but I've never been all that much into TV. I currently have Comcast cable. I've just got the 'basic' 2-60 package or whatever; I've just always plugged the cable into the back of my TV. I've never had a cable box.

    Recently, Comcast has been pulling channels off of my line-up. Most recently, they stole the TV Guide channel from me. I'm told this is part of a push to get customers to switch to their digital line-up. But I'm also told it requires some sort of digital receiver for each TV you've got. I don't want to buy a bunch of these digital receivers, and I don't want to pay the monthly rental fee... but I have heard of how awesome media center PCs are and some really cool things they can do. And I've got loads of PC parts sitting around.

    So, can someone guide me through this a bit? Are there computer video cards or TV tuners that are going to work with Comcast's digital cable? What kind of price range are we looking at?

    Read the article

  • ffmpeg - creating DNxHD MXF files with alphas

    - by Hugh
    Hi all,

    I'm struggling with something in FFmpeg at the moment... I'm trying to make DNxHD 1080p/24, 36Mb/s MXF files from a sequence of PNG files. My current command line is:

        ffmpeg -y -f image2 -i /tmp/temp.%04d.png -s 1920x1080 -r 24 -vcodec dnxhd -f mxf -pix_fmt rgb32 -b 36Mb /tmp/temp.mxf

    To which ffmpeg gives me the output:

        Input #0, image2, from '/tmp/temp.%04d.png':
          Duration: 00:00:01.60, start: 0.000000, bitrate: N/A
            Stream #0.0: Video: png, rgb32, 1920x1080, 25 tbr, 25 tbn, 25 tbc
        Output #0, mxf, to '/tmp/temp.mxf':
            Stream #0.0: Video: dnxhd, yuv422p, 1920x1080, q=2-31, 36000 kb/s, 90k tbn, 24 tbc
        Stream mapping:
          Stream #0.0 -> #0.0
        [mxf @ 0x1005800]unsupported video frame rate
        Could not write header for output file #0 (incorrect codec parameters ?)

    There are a few things in here that concern me:

    1. The output stream is insisting on being yuv422p, which doesn't support alpha.
    2. 24fps is an unsupported video frame rate? I've tried 23.976 too, and get the same thing.

    I then tried the same thing, but writing to a QuickTime (still DNxHD, though) with:

        ffmpeg -y -f image2 -i /tmp/temp.%04d.png -s 1920x1080 -r 24 -vcodec dnxhd -f mov -pix_fmt rgb32 -b 36Mb /tmp/temp.mov

    This gives me the output:

        Input #0, image2, from '/tmp/1274263259.28098.%04d.png':
          Duration: 00:00:01.60, start: 0.000000, bitrate: N/A
            Stream #0.0: Video: png, rgb32, 1920x1080, 25 tbr, 25 tbn, 25 tbc
        Output #0, mov, to '/tmp/1274263259.28098.mov':
            Stream #0.0: Video: dnxhd, yuv422p, 1920x1080, q=2-31, 36000 kb/s, 90k tbn, 24 tbc
        Stream mapping:
          Stream #0.0 -> #0.0
        Press [q] to stop encoding
        frame= 39 fps= 9 q=1.0 Lsize= 7177kB time=1.62 bitrate=36180.8kbits/s
        video:7176kB audio:0kB global headers:0kB muxing overhead 0.013636%

    Which obviously works, to a certain extent, but still has the issue of being yuv422p, and therefore losing the alpha.

    If I'm going to QuickTime, then I can get what I need using Shake, but my main aim here is to be able to generate .mxf files. Any thoughts? Thanks
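
    Two observations worth noting. First, the input is being detected as 25 fps (the "25 tbr" line): -r placed after -i sets the output rate, not the input rate, which may also be behind the muxer's frame-rate complaint. Second, DNxHD is inherently a 4:2:2 codec, so no DNxHD flavour will carry the alpha. A sketch of both points, using qtrle (QuickTime Animation) as a stand-in codec that does keep RGBA - a different codec than the DNxHD the question targets, and only viable if a .mov deliverable is acceptable:

        # -r before -i forces the PNG sequence to be read at 24 fps:
        ffmpeg -y -f image2 -r 24 -i /tmp/temp.%04d.png -s 1920x1080 -vcodec qtrle -pix_fmt argb /tmp/temp_alpha.mov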

    Read the article

  • Determining a realistic measure of requests per second for a web server

    - by Don
    I'm setting up an nginx stack and optimizing the configuration before going live. Running ab to stress test the machine, I was disappointed to see things topping out at 150 requests per second, with a significant number of requests taking 1 second to return. Oddly, the machine itself wasn't even breathing hard.

    I finally thought to ping the box and saw ping times around 100-125 ms. (The machine, to my surprise, is across the country.) So it seems like network latency is dominating my testing. Running the same tests from a machine on the same network as the server (ping times < 1 ms), I see 5000 requests per second, which is more in line with what I expected from the machine.

    But this got me thinking: how do I determine and report a "realistic" measure of requests per second for a web server? You always see claims about performance, but shouldn't network latency be taken into consideration? Sure, I can serve 5000 requests per second to a machine next to the server, but not to a machine across the country. If I have a lot of slow connections, they will eventually impact my server's performance, right? Or am I thinking about this all wrong?

    Forgive me if this is network engineering 101 stuff. I'm a developer by trade.

    Update: Edited for clarity.
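
    The arithmetic behind the two numbers is worth making explicit. By Little's law, throughput is roughly concurrency divided by per-request latency, so a single connection over a ~125 ms round trip caps out near 8 requests/second no matter how fast the server is. A sketch, assuming the test's concurrency was modest (the URL is hypothetical):

        # 1 connection x (1 / 0.125 s) = ~8 req/s; 100 connections = ~800 req/s.
        # Raising -c lets a remote ab run approach the server's real capacity:
        ab -n 10000 -c 100 http://www.example.com/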

    Read the article

  • Installing SilverStripe on 000webhost.com (Free web host)?

    - by benwad
    Hi,

    I'm trying to learn how to work SilverStripe, so I extracted the tar file to my free hosting account. I then went to install.php and edited the permissions to meet the requirements set out in install.php, but I still get two warnings from the 'webserver configuration' section:

        I can't tell what webserver you are running. Without Apache I can't tell if mod_rewrite is enabled.
        I can't tell whether mod_rewrite is running. You may need to configure a rewriting rule yourself.

    I looked in phpinfo() and mod_rewrite appears to be installed. I contacted the web host and they said it was to do with virtual directory paths, and that I should add 'RewriteBase /' to the top of my .htaccess file in the public_html directory. However, I did this and still had the same problem.

    The install.php script says that I can install it even with these warnings, but when I press 'install' it brings me to a page with the following errors:

        Friendly URLs are not working. This is most likely because mod_rewrite isn't configured
        correctly on your site. Please check the following things in your Apache configuration;
        you may need to get your web host or server administrator to do this for you:
          * mod_rewrite is enabled
          * AllowOverride All is set for your directory

    I also get this error message from the server:

        Warning: unlink(mysite/_config.php) [function.unlink]: Permission denied in /home/a2716553/public_html/install.php on line 701

    000webhost.com says they have successfully installed SilverStripe on their user accounts without much configuration, but I can't seem to find out how.
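
    For reference, the rewrite block a SilverStripe 2.x install normally ends up with looks roughly like this - a sketch, assuming the site lives at the web root and the host honours per-directory overrides. Separately, the final unlink() warning suggests mysite/_config.php still isn't writable by the web server user:

        # public_html/.htaccess
        <IfModule mod_rewrite.c>
            RewriteEngine On
            RewriteBase /
            RewriteCond %{REQUEST_URI} ^(.*)$
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteRule .* sapphire/main.php?url=%1&%{QUERY_STRING} [L]
        </IfModule>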

    Read the article

  • Ubuntu: how to get audio to work in both Spotify (under Wine) and Flash (in Firefox)?

    - by Jonik
    I'm running Spotify on Linux using Wine. Sound worked great (even though the sound test in winecfg failed!) until I installed the alsa-oss package yesterday to get Flash sound working in Firefox. Now Spotify says: "There is a problem with your sound card. Spotify can't play music."

    So the question is, how do I get the sound in Spotify working again, so that it also keeps working in Flash & Firefox? Tweak some ALSA settings? Spotify settings? Add/remove some packages? By the way, curiously, now that sound doesn't work in Spotify, winecfg's "Test Sound" does work!

    This is Ubuntu 8.04 (Hardy). The sound card / driver is probably an integrated AC'97. Please mention if any additional information about the system is needed!

    Update: I have Flash 10 installed (outside the packaging system, using the $MOZ_PLUGIN_PATH env variable), but also had Flash 9 from the flashplugin-nonfree package - and the earlier version was being used by Firefox! Based on what Mike Arthur said about Flash and alsa-oss, I removed the older Flash (flashplugin-nonfree package) and alsa-oss - and Flash sound still works, which is nice. But for some reason Spotify still doesn't play sound, even though things should now be like they were originally...

    Update 2: Got it working, all smoothly, finally.

    Read the article

  • Backup hardware and strategy on distributed Windows Server 2008 network

    - by CesarGon
    This question is a follow-up to this. We have a Windows Server 2008 R2 domain over a network that spans two different buildings, linked by a 100-Mbps point-to-point line. Over 60 users work in the organisation. We are planning to use DFS folders and DFS replication for file serving across the organisation. The estimated data volume is over 2 TB, and will grow at approximately 20% annually. The idea is to set up a DFS file server in each building and use DFS so that all the contents stay replicated over the 100-Mbps link.

    We are now considering backup hardware and strategies. We are Dell customers and, after browsing the online Dell catalogue, I can see a number of backup hardware options. My main doubts are the following:

    1. Would you go for a tape library, disk backup, or are there other options worth considering?
    2. Would you perform batch backups (i.e. nightly) or would you use continuous backup (i.e. while users are working)?
    3. Would you use a dedicated backup server to which the tape library (or any other backup device) is attached, or is there any other alternative way of doing things?

    My experience with backup hardware and overall setup is limited, so I appreciate any good piece of advice that you may have. Thanks.

    Read the article

  • repair partition table

    - by m.sr
    Hello. I've just overwritten the partition table of my system's hard disk. I did a cfdisk on the wrong device (/dev/sda instead of /dev/sdd), deleted all partitions, made one new primary spanning the whole device, set its type to 07 (NTFS) and hit write.

    So here I am with my system running. Until I reboot, I hope/guess nothing will change - meaning: all my data is accessible. (I'm currently making a dd backup of the whole device and plan to make a .tar.gz backup of the most important data later.) I also backed up /proc/partitions, /proc/diskstats (even though I guess this is more about throughput and stuff like that...) and /sys/block/sda/sda?/{start,size}.

    Some further things I know:

    - 4 primary partitions
    - 1st partition: ~100 MB, ext3, /boot
    - 2nd partition: ~100 MB, "Win7 Boot Partition", NTFS(?)
    - 3rd partition: ~20...30 GB, Win7, NTFS
    - 4th partition: ~20...30 GB, LUKS-encrypted device
    - The LUKS-decrypted device is an LVM PV
    - The /, /home & swap partitions are all LVs on the (VG on the) above-noted PV

    So my questions:

    - What is the simplest way to just write the kernel's partition table to the disk?
    - What is the simplest way to take the above-mentioned (and perhaps other I don't know of...) data and generate the partition table?
    - Are there any problems to take care of regarding LUKS and/or LVM?
    - Is there any data I should back up before rebooting (meaning stuff from the kernel [/sys/..., /proc/...] and so on, which could help me regenerate the partition table)?

    Thanks a lot!

    P.S.: Debian sid, kernel 2.6.34-1-amd64 from debian-experimental, 80 GB Intel SSD
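
    On the first question: the kernel's in-memory view survives in exactly the /sys data already saved, and sfdisk will accept those numbers back. A sketch only, with entirely made-up sector values - double-check every figure against the saved copies before writing anything to the disk:

        # Start/size (in 512-byte sectors) for each partition, from the kernel's view:
        for p in /sys/block/sda/sda[1-4]; do
            echo "${p##*/}: start=$(cat $p/start) size=$(cat $p/size)"
        done

        # Feed those exact numbers back as "start,size,type" lines
        # (-uS = sector units; the values below are hypothetical):
        printf '%s\n' '63,192717,83' '192780,192780,7' \
                      '385560,60000000,7' '60385560,90000000,83' \
            | sfdisk -uS --force /dev/sda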

    Read the article

  • Mac "Steam needs to be online to update" - 404 fetching *_osx.zip.*

    - by Chris Boyle
    Since yesterday evening, when I launch Steam on OS X, a self-update progress bar appears instead (at 0 of 30MB or so). This bar does not advance, and an error dialog appears:

        Steam needs to be online to update
        Please confirm your network connection and try again.

    The app then exits. This happens whether wifi or ethernet or both are connected, and pings to the outside world succeed throughout. If I look at the logs in Console, they are very similar to this example (though that's not mine). Specifically:

        Success! http://store.steampowered.com/public/client/steam_client_osx?date=718277
        [...]
        Failed! http://cdn.store.steampowered.com/public/client/breakpad_osx.zip.27f59114a86fcd50533e1d7b128f9300947f9969
        Failed! http://cdn.store.steampowered.com/public/client/steam_osx.zip.11a99384214805f2dd3be5084ba6be61d662f8ac
        Failed! http://cdn.store.steampowered.com/public/client/miles_osx.zip.d9fb546541f59c1fdd03962a605236b1021abab8

    Requesting the first URL successfully returns some data, including the filenames of the latter three, and requesting any of those gives me a 404 (I've tried multiple clients on multiple continents). Searches on Google and Twitter show about 10-20 others having this problem in the past 24 hours, but hardly the angry mob I'd expect if the problem affected all Steam OSX users.

    Things that have already been tried with no effect:

    - Switching between wifi and ethernet.
    - Killing all Steam processes including ipcserver.
    - Moving the ~/Library/Application Support/Steam/registry.vdf file away.
    - Requesting those URLs with other clients and from other locations.

    Interesting: that first URL with the date parameter returns the same content even without that parameter (thus would lead to the same 404s), suggesting that the problem is not necessarily specific to coming from a particular currently-installed version of Steam.

    Read the article

  • Windows Media-based video/audio converter?

    - by acidzombie24
    This may seem like an odd question. Right now ONLY Windows Media Player, VLC and Media Player Classic open and play my audio/video correctly. VirtualDub plays it back with the wrong framerate and loses the audio; Avidemux 2.5 seems to be able to dump the audio/video, but the video (like all other apps) has either a bad framerate or is wrong (glitches and bad framerate, or a bad dump). Nothing recognizes the audio file, and when playing the video, Avidemux (and most other things) die. FFmpeg can't seem to split the video or audio (using copy, -an, etc.), and this is getting me very angry. VLC dumps the video incorrectly when I try dumping it with that too.

    What can I use to convert the video? It's streaming, so it starts at 26 mins in and ends at 28 (this is where apps have the problem - they don't know this and fudge everything or crash). I managed to dump the audio with Avidemux, but VirtualDub and FFmpeg say unrecognized codec. Even if I can't convert it (it seems compressed enough), I want to at least attach it back into an AVI.
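
    For reference, the stream-copy invocations usually tried for this kind of remux - a sketch with hypothetical filenames; the question notes copy/-an already failed on this particular stream, but these are the baseline to compare other tools against:

        # Remux video+audio into an AVI container without re-encoding:
        ffmpeg -i input.wmv -vcodec copy -acodec copy output.avi

        # Or re-attach a separately dumped audio track to the video:
        ffmpeg -i video_only.avi -i audio_dump.wav -vcodec copy -acodec copy muxed.avi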

    Read the article

  • What's the state of the art in image upscaling?

    - by monov
    I like to collect cool pics and use them as wallpapers or for other things. Often, artists publish only low-res versions, probably for fear of theft. Example: Gabriel Pulecio's BIRDS.

    Now, if I want to use that as a wallpaper, I'd have to upscale it, and obviously that would make it look blurry because of the bicubic interpolation. I realize there's no real way to get a high-res version from a low-res pic, because the information is simply not there. That said, I'm wondering if heuristics have been developed for upscaling with less apparent loss of quality. Those would probably be optimized for specific image types: for photorealistic pictures, for cartoons with large flat areas, for pixel art...

    One algorithm I'm aware of is Seam Carving. It works for some kinds of pics, especially ones with a plain, undetailed or uninteresting background and a subject that strongly stands out. But it's far from being general-purpose. Applying it to the above pic produces this. It looks quite sharp, but the proportions are horribly distorted because the algorithm is not designed for this kind of pic.

    Another is pixel art scaling algorithms. Those are completely unfit for anything other than actual pixel art that's pixelized to begin with. For example, I tried the scale2x Windows binary on my pic, but its output was nearly indistinguishable from nearest-neighbour scaling, because the algorithm didn't detect any isolated pixely fragments to work from.

    Something else I tried: I enlarged the image in Photoshop with bicubic interpolation, then applied unsharp mask. The result looks pretty bad. The red blotch is actually resized reasonably well, but the dove is far from it.

    What I'm looking for is some app that makes a best-effort attempt at upscaling any input image while minimizing blurriness. If you know of any, I'll be thankful. Note that the subjective prettiness and sharpness of the result is what matters... the result doesn't need to be completely faithful to the original small image.
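
    The Photoshop experiment can be reproduced (and batch-tuned) from the command line. A sketch using ImageMagick, with hypothetical filenames - the filter choice and the unsharp parameters are the knobs worth sweeping:

        # Lanczos tends to ring less than bicubic at 4x; unsharp restores edge contrast:
        convert birds_small.png -filter Lanczos -resize 400% -unsharp 0x1.5+0.7+0.02 birds_4x.png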

    Read the article

  • Migrating 10-15 Websites Running Linux, LAMP, RoR, WordPress

    - by Michael
    Task is to move 10-15 websites running Linux to new servers hosted by Amazon. These boxes are currently on dedicated servers. Some sites are running WordPress, some have a custom CMS, and others might have RoR applications.

    Unfortunately, there is sparse documentation regarding each site and how services/files are dependent on each other, which means there is a lot of detective work that needs to take place. My goal is to properly document each site, what makes it work, etc., so future admins have at least something to work with.

    Currently my strategy is to download each site so I have a backup of the files, then scan through them looking for configuration files - DB connections, Apache configs, etc. - then create a nice spreadsheet with these findings and migrate these out to the new server.

    My question to Server Fault is this: are there things you would look out for? Easier ways to handle this task that I'm missing? Points will be awarded to answers that help with efficiency. Thanks in advance.
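
    The scanning step lends itself to a quick first pass with find and grep. A sketch, with a hypothetical backup path - the patterns cover the stacks the question names (WordPress, custom PHP, RoR):

        # Well-known config files first:
        find /backups/site1 -name wp-config.php -o -name database.yml \
             -o -name settings.php -o -name '.htaccess'

        # Then grep for connection strings a custom CMS may hide elsewhere
        # (-I skips binaries, -l lists matching files only):
        grep -RIl -e mysql_connect -e DB_HOST -e 'adapter:' /backups/site1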

    Read the article

  • IPSec for LAN traffic: Basic considerations?

    - by chris_l
    This is a follow-up to my "Encrypting absolutely everything..." question.

    Important: This is not about the more usual IPSec setup, where you want to encrypt traffic between two LANs. My basic goal is to encrypt all traffic within a small company's LAN. One solution could be IPSec. I have just started to learn about IPSec, and before I decide on using it and dive in more deeply, I'd like to get an overview of how this could look.

    - Is there good cross-platform support? It must work on Linux, Mac OS X and Windows clients, and Linux servers, and it shouldn't require expensive network hardware.

    - Can I enable IPSec for an entire machine (so there can be no other traffic incoming/outgoing), or for a network interface, or is it determined by firewall settings for individual ports/...?

    - Can I easily ban non-IPSec IP packets? And also "Mallory's evil" IPSec traffic that is signed by some key, but not ours? My ideal conception is to make it impossible to have any such IP traffic on the LAN.

    - For LAN-internal traffic: I would choose "ESP with authentication (no AH)", AES-256, in "Transport mode". Is this a reasonable decision?

    - For LAN-Internet traffic: How would it work with the internet gateway? Would I use "Tunnel mode" to create an IPSec tunnel from each machine to the gateway? Or could I also use "Transport mode" to the gateway? The reason I ask is that the gateway would have to be able to decrypt packets coming from the LAN, so it would need the keys to do that. Is that possible, if the destination address isn't the gateway's address? Or would I have to use a proxy in this case?

    - Is there anything else I should consider?

    I really just need a quick overview of these things, not very detailed instructions.
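
    On the "ban non-IPSec packets" point, the security policy database is where that is usually expressed. A minimal Linux sketch with ipsec-tools, assuming a hypothetical 192.168.1.0/24 LAN - the "require" level makes the kernel refuse matching traffic that isn't ESP-protected:

        # /etc/ipsec-tools.conf fragment (applied with: setkey -f):
        spdadd 192.168.1.0/24 192.168.1.0/24 any -P out ipsec esp/transport//require;
        spdadd 192.168.1.0/24 192.168.1.0/24 any -P in  ipsec esp/transport//require;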

    Read the article

  • Is there any special way to force GoBack to work with Windows Vista and 7?

    - by dfree
    Norton/Roxio's GoBack doesn't work with Vista/7, for reasons unknown. I have tried several alternatives (Norton Ghost, RollbackRX, Norton Save and Restore), none of which offer the same functionality as GoBack.

    Not only does GoBack not eat up all your hard drive space while creating a legitimate fail-safe for any PC problems, it also allows you to see ACTIVELY EXACTLY WHAT PROCESSES ARE BEING EXECUTED ON YOUR COMPUTER. This feature (called Advanced Disk Drive Restore) also allows you to troubleshoot problems and determine causes for things in about half a second by seeing what is happening on your machine. It's how I learned everything I know about computers.

    GoBack also features something called SafeTry Mode, where you can put it in SafeTry and then mess up the whole computer, and when you come out of it, your computer will be exactly how it was before. Amazing for people who like to tinker without risking their machine's stability. It also helps for that accidentally erased paper or whatever you may have erased.

    I believe GoBack installs a type-44 partition around the drive, which loads prior to Windows to allow this functionality.

    If you're going to recommend another program, please don't (unless it does all of the above). I've tried all the competition and nothing is as good. I just want my GoBack to work with 7 :) Any ideas of crazy ways to make this work?

    Read the article

  • Sniffing at work - how to detect?

    - by coffeeaddict
    Because the place I work at has some real issues (people), especially in IT and the owner, I wonder if we are being sniffed. Is there any way to tell, on a Vista 64-bit machine:

    1. Is there some identification in the system logs that would tell me that someone (such as an admin) might be logging into my PC?
    2. Is there something in the logs that would flag that maybe I'm being monitored some other way?
    3. How can I be sure that my Gmail, Hotmail, and chat are not being sniffed? I know there are things like Simp, etc. I'm talking about specific hidden system signs, either in the registry or in logs.

    Obviously I'm not going to raise any suspicion by asking our network admin. I don't trust anyone at this company. Is there a good way to basically monitor for this as an end user? Could someone log in and basically watch me work - and if so, would there be any goodies left behind for me to find out if this has happened, other than visual signs (which would not be present)... maybe some running processes?
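
    A few read-only checks can be run from a command prompt without tipping anyone off - a sketch for Vista, assuming you can open an elevated prompt; event ID 4624 is the standard logon-audit event, so remote or service logons to the machine show up there:

        rem Recent logons recorded in the Security log (event ID 4624):
        wevtutil qe Security /q:"*[System[(EventID=4624)]]" /c:20 /f:text

        rem Processes holding open network connections (-b needs elevation):
        netstat -bno

        rem Running processes and the services they host:
        tasklist /svc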

    Read the article

  • CouchDB Errors: "undefined symbol: js_fgets" on Ubuntu 10.04

    - by MattEzell
    Hello, ServerFault community. I have been wrestling with this problem for weeks without resolution and was hoping that someone here might be able to help.

    As indicated above, this is on Ubuntu 10.04 (x86) using CouchDB 0.11.0. I have built and installed CouchDB 0.11.0 from source. Everything with the dependencies and with the CouchDB install itself 'goes off without a hitch' - no errors or complaints in the configure, make or install... CouchDB seems to be running 'fine'. I can access Futon without issue and utilize all CouchDB functionality found in Futon...

    Unfortunately, when I attempt to use any shows/views for an installed couch app, I get the above js_fgets error before the terminal (and the CouchDB log) fills up with TONS of JSON. Nothing ever renders in the browser, though Firebug reports.

    I have used the official instructions (paying special attention to the 10.04 instructions) and have followed pretty much every Google thread that I can find on similar issues. I have chased SpiderMonkey (and Rhino) as well as Erlang as the culprit, but despite reinstalls and tests with these components, I still cannot get past this CouchDB issue on my system...

    Ideas? Pointers? Suggestions? Has anyone successfully installed and used CouchDB 0.11.0 on an Ubuntu 10.04 system to RUN APPLICATIONS? I have come across multiple individuals who immediately respond 'yes, I have it installed - it works great', only to have them realize in the end (as I did) that just because Futon thinks things are working doesn't mean CouchDB is properly handling ALL requests.

    Thank you for your time and assistance!
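
    An "undefined symbol" that appears only at view time (Futon's own pages don't go through the JavaScript view server) often points at couchjs resolving a different libmozjs at runtime than the one it was built against. Two quick checks - a sketch, with paths assuming a default from-source prefix of /usr/local and a distro-installed SpiderMonkey; adjust to wherever your binaries actually landed:

        # Which SpiderMonkey library does couchjs actually load?
        ldd /usr/local/lib/couchdb/bin/couchjs | grep -i mozjs

        # Does that library export the symbol couchjs wants?
        nm -D /usr/lib/libmozjs.so | grep js_fgets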

    Read the article

  • Static noise in headphones

    - by John Murdoch
    I have an Asus P6T-based system. I was using the on-board sound, plugging my Logitech X-230 2.1 analog speakers into the green "front speakers" 3.5mm analog output, then plugging my headphones into that. I was quite happy with the sound quality (I didn't hear any static noise with the volume turned down to my normal listening level).

    Then, about a week ago, I started getting terrible static noise from the left channel, and no normal audio on that channel at all. The right channel had more static noise than usual, but did carry a bit of sound. I tried using the AC'97 connector on the front of my case, but that seemed to have no signal.

    I decided my on-board sound card had gone bad and bought an internal sound card to replace it (StarTech 7.1Ch PCI). This fixed the "no sound from left channel" problem, but I had much more audible static noise. I decided the card was low quality and/or suffering interference from all the other things happening inside the computer case, and bought a Sweex SC016 external USB sound card. But even with that, I have static noise in the headphones. Positioning the USB sound card differently doesn't seem to help. Trying the other analog outputs (e.g., surround) doesn't help. The static noise in all cases is proportional to the volume. I have tried different headphones, but the situation is the same, though perhaps the flavour of the static noise changes slightly.

    So what are my options?

    a) Get another, more expensive, external USB sound card, hoping the quality will improve?
    b) Get another, more expensive, internal sound card (PCIe x1, perhaps), hoping the quality will improve?
    c) Get a dedicated DAC box?
    d) Get some hi-fi earphones?

    Suggestions?

    tl;dr - Two different sound cards both still have static noise in headphones.

    Read the article
