Search Results

Search found 19017 results on 761 pages for 'purchase order'.


  • How does Tunlr work?

    - by gravyface
    For those of you not in the US, Tunlr uses DNS witchcraft to let you access US-only services (and UK-only ones like BBC radio online) and websites like Hulu.com without traditional methods like a VPN or web proxy. From their FAQ:

    "Tunlr does not provide a virtual private network (VPN). Tunlr is a DNS (domain name system) unblocking service. We're using sophisticated technologies (a.k.a. the Tunlr Secret Sauce ©) to re-address certain data envelopes, tricking the receiver into thinking the envelope originated from within the U.S. For these data envelopes, Tunlr is transparently creating a network tunnel from your location to our U.S.-based servers. Any data that's not directly related to the video or music content providers which Tunlr supports is not only left untouched, it's also not even routed through Tunlr. In order to use Tunlr, you will have to change the DNS address. See Get started for more information."

    I can't really wrap my head around how this works; I had always assumed these services performed a geolocation lookup on your client IP. Just really curious how this works.

    EDIT 2: I believe they're only proxying the initial geo check and then modifying the data stream request to include your real IP address, so that the streaming is direct, not proxied.
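
    A quick way to convince yourself this is DNS-level trickery is to compare answers from a normal resolver with answers from the unblocking resolver, for both a supported and an unsupported hostname. A hedged sketch; the unblocker's resolver address below is a placeholder, not Tunlr's real IP:

        # Normal answer from a public resolver
        dig +short www.hulu.com @8.8.8.8
        # Same question to the unblocking resolver: expect a different
        # (proxy) address for the geo-checked hostname only
        dig +short www.hulu.com @<unblocker-dns-ip>
        # An unrelated hostname should resolve the same either way
        dig +short www.google.com @<unblocker-dns-ip>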

    Read the article

  • What should be monitored to troubleshoot file sharing problems?

    - by RyanW
    I'm running into some problems with a file share used by an ASP.NET web application. In this configuration, two web servers (Win2k8 Web) connect to a file server (Win2k8 Enterprise), reading and writing files over a file share. Recently, one of the web servers has begun encountering an error accessing the share: IOException: The specified network name is no longer available.

    There does not appear to be much info on the web explaining what causes this or how best to fix it, so I'm looking at what I can monitor in order to get clues. I'm not sure if it's hardware, load, file size, frequency, etc. With Windows perfmon, what can I monitor on the file server side? There's the "Files Open" object; any other good ones? What can I monitor on the web server side?

    EDIT: I'll add that the UNC path uses the IP address of the file server, not a name to resolve. Also, the share is a single flat directory with over 100K files.
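
    As a concrete starting point, typeperf can log a handful of SMB and disk counters to CSV on both sides of the share; the counter names below are common ones, but verify them against typeperf -q output on your own servers:

        :: On the file server: list available Server-object counters first
        typeperf -q Server
        :: Then sample a few likely suspects every 15 seconds
        typeperf "\Server\Sessions Errored Out" "\Server\Work Item Shortages" ^
                 "\PhysicalDisk(_Total)\Avg. Disk Queue Length" -si 15 -o fileserver.csv
        :: On each web server, watch the SMB client (redirector) side
        typeperf "\Redirector\Network Errors/sec" -si 15 -o webserver.csv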

    Read the article

  • How do I restore to a delta file (disk) on VMware ESXi?

    - by Oscar
    Using VMware ESXi (the free version) I have a virtual machine (Win 2k3 R2 server). When I first provisioned it I took a snapshot of it. I recently tried to clone the primary drive using my standard hardware-based method to grow a Windows disk: using Knoppix, clone the drive to a new one, make it bootable, then extend the partition via diskpart from within Windows.

    This process failed. I tried setting the cloned drive (via the VMware GUI) to replace the original drive, boot, and be done. This didn't work out so well; the machine never booted. I checked the boot order, the disk location and all the basics I usually do.

    As a failsafe, I then changed all the settings back so the machine would boot to the original drive while I figured out (as I eventually did) a better way of growing the disk. However, when I powered on the machine with the original drive, it reverted back to that initial snapshot I created; it lost all the changes made since. I looked in the file system and found a few files; I think the key file here is one named "delta", and I'm assuming that's the disk I want, but I can't find a way to make the virtual machine actually use that drive/file. It isn't available to add when I go to add an existing drive.

    Do I need to somehow commit that delta to the original drive and then boot from it again? Can you point me in the right direction? I've since discovered the proper way of growing drives using vmkfstools, but I need to get back to the original state of the machine to try this out. Any help would be greatly appreciated.
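
    For reference once the machine is recovered, the ESXi-native way to clone and then grow a disk is vmkfstools from the console; a sketch only, with datastore paths that are made up:

        # Clone the original disk, then grow the copy to 40 GB
        vmkfstools -i /vmfs/volumes/datastore1/vm/vm.vmdk \
                   /vmfs/volumes/datastore1/vm/vm-grown.vmdk
        vmkfstools -X 40G /vmfs/volumes/datastore1/vm/vm-grown.vmdk
        # Finish by extending the partition inside Windows with diskpart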

    Read the article

  • How do I convert a PDF/DjVu file to PNGs under Linux? [closed]

    - by user66732
    ImageMagick doesn't work (Fedora 14) on one PDF file:

        $ convert -density 300 INPUT.PDF out.png
        Error: /ioerror in --showpage--
        Operand stack: 1 true
        Execution stack: %interp_exit .runexec2 --nostringval-- --nostringval-- --nostringval-- 2 %stopped_push --nostringval-- --nostringval-- --nostringval-- false 1 %stopped_push 1878 1 3 %oparray_pop 1877 1 3 %oparray_pop 1861 1 3 %oparray_pop --nostringval-- --nostringval-- 141 1 319 --nostringval-- %for_pos_int_continue --nostringval-- --nostringval-- 1761 0 9 %oparray_pop --nostringval-- --nostringval--
        Dictionary stack: --dict:1157/1684(ro)(G)-- --dict:1/20(G)-- --dict:75/200(L)-- --dict:75/200(L)-- --dict:108/127(ro)(G)-- --dict:288/300(ro)(G)-- --dict:22/25(L)-- --dict:6/8(L)-- --dict:22/40(L)--
        Current allocation mode is local
        Last OS error: 27
        GPL Ghostscript 8.71: Unrecoverable error, exit code 1
        convert: Postscript delegate failed `INPUT.PDF' @ error/pdf.c/ReadPDFImage/645.
        convert: missing an image filename `out.png' @ error/convert.c/ConvertImageCommand/2953.

    And it doesn't work on a DjVu file:

        $ convert -density 300 INPUT.DJVU out.png
        convert: no decode delegate for this image format `INPUT.DJVU' @ error/constitute.c/ReadImage/532.
        convert: missing an image filename `out.png' @ error/convert.c/ConvertImageCommand/2953.

    An extra question about the output filenames: they come out as out-0.png, out-1.png, ..., out-9.png, out-10.png, out-11.png, ..., out-124.png. Is there a way to get out-000.png, out-001.png, ..., out-009.png, out-010.png, out-011.png, ..., out-124.png instead? Otherwise they sort in the wrong order: out-0.png, out-1.png, out-10.png, out-11.png, out-123.png, out-124.png, out-9.png. Thank you.
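
    On the filename question, ImageMagick accepts a printf-style format in the output name, which gives zero-padded numbers directly; and since this build clearly has no DjVu delegate, going through ddjvu from djvulibre is the usual detour. A sketch:

        # Zero-padded page numbers straight from convert
        convert -density 300 INPUT.PDF out-%03d.png
        # DjVu: render to a multi-page TIFF first, then split to PNGs
        ddjvu -format=tiff -mode=color INPUT.DJVU pages.tif
        convert pages.tif out-%03d.png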

    Read the article

  • 2 Printers 1 Queue

    - by Shazburg
    My issue: when an order is processed, the same document needs to be printed on two printers.

    My proposed solution: create a single queue in CUPS with a backend script that hands the job out to the two real printer queues.

    My problem: documentation. Maybe I'm looking at every ring around the bullseye, but I can't find anything that lays out the rules for writing a CUPS backend script. In the end, I have several questions:

    1. Is there already an option to do this in CUPS that I've missed?
    2. The line I use to add my queue is "lpadmin -p MultiPass -E -v multipass -P Generic PostScript Printer". But the DeviceURI is rejected unless I specify a directory, as in "-v multipass:/tmp". Why is this?
    3. For testing, my script does nothing but capture ARGV and write it out to a text file, one line per argument. Problem is, I'm getting nothing. The logs show the job as successful, but I'm pretty sure my meager attempt at a backend isn't even being run.

    I've tried to keep this question brief, so please ask for more info, as I'm sure I've left out the most important part in all this. Honestly, I'm just done chasing my own tail. Thank you for your time.
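
    For what it's worth, here is a minimal tee-style backend sketch; the real queue names are made up, there is no error handling, and it ignores the job options entirely. Note that CUPS invokes a backend with no arguments to let it advertise itself, and a bare word without a scheme may be why your -v multipass was rejected while multipass:/tmp was accepted:

        #!/bin/sh
        # CUPS calls backends as: job-id user title copies options [file]
        # Called with no arguments, a backend must describe itself and exit.
        if [ $# -eq 0 ]; then
            echo "direct multipass \"Unknown\" \"MultiPass tee backend\""
            exit 0
        fi
        # The job arrives either as a file ($6) or on stdin
        TMP=$(mktemp /tmp/multipass.XXXXXX)
        if [ -n "$6" ]; then cat "$6" > "$TMP"; else cat > "$TMP"; fi
        lp -d real-printer-1 "$TMP"
        lp -d real-printer-2 "$TMP"
        rm -f "$TMP"
        exit 0

    If the test backend never seems to run, ownership and permissions are a common gotcha: CUPS generally refuses to execute backends that are not root-owned with restrictive modes, so installing the script as /usr/lib/cups/backend/multipass, root:root and mode 0755 (or 0700 on newer versions), is worth verifying first.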

    Read the article

  • mod_fcgid produces random 500 errors

    - by DmitrySemenov
    PHP 5.4.7 via mod_fcgid. When I run the site, sometimes it works and sometimes it crashes with a 500 Internal Error. This is what I see in error.log every time I run the script:

        [Mon Sep 24 18:50:43 2012] [warn] [client 68.231.194.198] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server
        [Mon Sep 24 18:50:43 2012] [error] [client 68.231.194.198] Premature end of script headers: api.php

    Any ideas? Vhost config:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot "/home/www/sites/test.com/html/development"
            ServerName test.com
            ServerAlias www.test.com
            ErrorLog "/home/www/sites/test.com/logs/error_log"
            CustomLog "/home/www/sites/test.com/logs/access_log" common
            <IfModule mod_fcgid.c>
                <Directory /home/www/sites/test.com/html/development>
                    Options +ExecCGI
                    AllowOverride All
                    AddHandler fcgid-script .php
                    FCGIWrapper /home/www/php-fcgi-scripts/php-fcgi-starter .php
                    Order allow,deny
                    Allow from all
                </Directory>
                FcgidMaxRequestLen 1073741824
            </IfModule>
        </VirtualHost>

    fcgid.conf:

        LoadModule fcgid_module modules/mod_fcgid.so
        # Use FastCGI to process .fcg .fcgi & .fpl scripts
        AddHandler fcgid-script fcg fcgi fpl
        # Sane place to put sockets and shared memory file
        FcgidIPCDir /var/run/mod_fcgid
        FcgidProcessTableFile /var/run/mod_fcgid/fcgid_shm
        IdleTimeout 300
        BusyTimeout 300
        ProcessLifeTime 7200
        IPCConnectTimeout 300
        IPCCommTimeout 7200
        PHP_Fix_Pathinfo_Enable 1

    php-fcgi-starter.php:

        #!/bin/sh
        PHP_CGI=/usr/local/php547/bin/php-cgi
        PHP_INI=/etc/php547-fastcgi.ini
        export PHP_FCGI_TIMEOUT=1200
        #export PHP_FCGI_CHILDREN=6
        export PHP_FCGI_MAX_REQUESTS=1000
        exec $PHP_CGI -c $PHP_INI
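
    One mismatch above is a classic source of exactly this pair of log lines: PHP_FCGI_MAX_REQUESTS=1000 makes each php-cgi process exit after 1000 requests, but mod_fcgid is never told about it, so it can hand request 1001 to a child that is already shutting down and get the connection reset. Worth trying (an assumption, not a confirmed diagnosis): cap mod_fcgid at the same number in fcgid.conf:

        # Never reuse a php-cgi child more often than PHP itself allows
        FcgidMaxRequestsPerProcess 1000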

    Read the article

  • Ubuntu server crashes; need help figuring out why

    - by neezer
    I have a 768 slice at slicehost.com running Ubuntu Server 8.04.2 LTS (Hardy) with a LAMP stack on it that periodically crashes, though I'm not sure why. From what I can tell, a process goes rogue and consumes all the memory on the slice, suffocating all the other programs until the whole thing comes to a grinding halt and I have to do a hard reboot of the slice to get it back up and running again. I can't detect any pattern to this (it seems to happen about once a month, more or less). Here's a screenshot of my console during the last crash: [screenshot not included]

    I would assume a possible cause might be a PHP script or an Apache configuration rule that triggers the crash? How would I be able to find out which one is the offender? I've checked and rechecked all my PHP scripts, and running them doesn't seem to trigger the crash. I've also been able to log on to my system during a crash and see what's running (with top), but I can't tell how the offending process was started, so I can't trace the root of the problem!

    I know my description is overly generic, but unfortunately my expertise in tracking down the source of these glitches is very limited. If you need any additional information about my system in order to help me figure this out, please let me know in the comments, and I will append it to the question.

    My only other lead as to the culprit here is WordPress, which we have installed on this server. Here are the details: WordPress 3.0.3 with the following plugins installed and activated: Addmarx - Bookmark/Share/Email Dropdown, Akismet, All in One SEO Pack, Animated Banners, Automatically publish highlights of any website, directly to your Blog, Broken Link Checker, CMS Dashboard, Collapsing Categories, Status Updater, SubHeading, Ultimate Google Analytics, VastSubCat, WP-CMS Post Control, and WP Super Cache.
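
    Until the culprit shows itself, one low-tech option is a cron job that snapshots the biggest memory consumers every few minutes, so the log shows what was ballooning (and when) after the next crash; the path and interval here are arbitrary:

        # /etc/cron.d/memwatch -- log top memory users every 5 minutes
        */5 * * * * root (date; ps aux --sort=-%mem | head -15) >> /var/log/memwatch.log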

    Read the article

  • How to run a Fujitsu P27T-7 LED monitor at a non-native resolution with perfect font rendering

    - by Ilia Rostovtsev
    My problem is completely opposite to anything I could find, as I need to run my monitor at a NOT-native resolution and still have perfect font rendering.

    I recently got a 2560x1440 27-inch monitor (Fujitsu P27T-7 LED) and I have an issue with it. I would call it personal, but I'm afraid it's not, as a few people have already agreed with me. I do programming, and the text at 2560x1440 is way too small for comfortable use. I changed the resolution to regular Full HD (1920x1080) and it became just right, but the text now looks slightly blurry in comparison both to the native resolution and to my old 23-inch NEC.

    I'm pretty frustrated and not sure what to do or how to make the fonts look as sleek as they should. I can't work at the native resolution (my vision is 100% perfect): if calculated, the picture at 2560x1440 on 27 inches is around 30% smaller than Full HD (1920x1080) on 23 inches. To have the same font size as a Full HD 23-inch display, a 2560x1440 monitor would have to be around 32 inches. If I set my new monitor to regular Full HD 1920x1080, the font size is just perfect but the quality is not; it's blurry. Could anyone please advise how to solve this problem?

    Spec: nVidia 560 Ti with DVI-D port on Fedora 20.

    EDIT 1: Changing fonts doesn't really help, as everything else still doesn't look the way it should.

    EDIT 2: The monitor buzzes badly at 2560x1440 when there are lots of lines on the screen, like a file listing; if I type ls /usr/bin it makes a nasty, irritating sound. At 1920x1080 it's a bit better. Any idea why?
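
    One direction worth testing before giving up on the native mode: 1920x1080 on a 2560x1440 panel is always interpolated, which is where the blur comes from, so the usual fix is to stay at 2560x1440 and scale the text up instead. A sketch for X11/GNOME; the 120 DPI and 1.25 factor are guesses to taste:

        # Make Xft-based apps render fonts ~25% larger at native resolution
        echo "Xft.dpi: 120" | xrdb -merge
        # GNOME equivalent: bump the text scaling factor
        gsettings set org.gnome.desktop.interface text-scaling-factor 1.25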

    Read the article

  • HTC Touch Diamond sync problem

    - by Anders
    I have an HTC Touch Diamond with all my contacts etc. on it, but I didn't use it for six months while I was abroad. When I started the phone again, I realized that the touch screen had stopped working. I have tried restarting, soft-resetting, shutting it off, etc., but the touch screen just won't follow commands. However, I can manage the phone by its buttons, so it's not frozen; I can get into the phone and view contacts, but not use it to call, etc.

    The problem is: how do I get my 300 contacts out of the thing? When I plug in the phone, it lets me choose between "Sync with Outlook" and "Use as storage device", and it automatically selects "Use as storage device". I cannot choose the sync option with the buttons, and I cannot change this option afterwards either. In short, I have a phone with all of my contact data and am completely unable to get it out.

    Any tips/help/suggestions? If possible, preferably ones that don't involve sending the phone to a hardware workshop for three weeks to get it fixed. :)

    Read the article

  • Ubuntu and Windows and Separate HDs, oh my!

    - by LuxuryMode
    Need some major help. I'm running a Dell XPS/Dimension 630i. It came with "SATA 2 RAID 0 With Dual 500GB Hard Drives". I have installed a new, third non-RAID drive and installed Ubuntu on it, so now I have Windows on the original hard drives and Ubuntu Linux on the new one.

    When I get to the boot menu where I can select an OS, selecting Windows gives an error: "No such drive, no such disk." Also, strangely, in order to even get to the bootloader menu I have had to disable ALL ports under the RAID config; unless I do this, I just get a never-ending blinking cursor. I have tried every conceivable CMOS config and nothing else works: setting port 3 (the new drive with Ubuntu) to first hard-disk boot priority, disabling all other ports and enabling only the Ubuntu drive's port, and vice versa.

    Here's a pic of the error I get when I try to boot to Windows: http://imgur.com/TJ1mS. Also, please note that I can actually access all the files on the RAIDed Windows drives from Ubuntu. (Someone suggested just reinstalling Windows from the installation CD. Agree?)

    Read the article

  • How do I fix error 1303 during TI Connect install?

    - by smoth190
    I recently purchased a TI-84 Plus graphing calculator, and I'm trying to install the TI Connect software in order to connect the calculator to my computer via the USB cable. Unfortunately, I'm getting this error while trying to install the program:

        Error 1303. The installation has insufficient privileges to access this directory: E:\Data\Timothy\Documents\MyTIData. The installation cannot continue. Log on as administrator or contact your system administrator.

    However, my account is the only account on my PC, and it has administrative privileges. I've also tried running the installer with Run as Administrator, but with no luck. If I create the folder MyTIData manually, I receive this error:

        Error 1317. An error occurred while attempting to create the directory: E:\Data\Timothy\Documents\MyTIData

    I've reapplied the security settings on the E:\Data folder (and all its subdirectories), giving my account Full Control. I've also gone into Computer Management and given SYSTEM full privileges for the entire disk. I've also logged out, logged back in, restarted, etc., but still no luck.

    Now, I should mention that my Documents folder is not in the default location. My C: disk is a 90GB SSD, so I moved all my personal data onto the extra storage disk (~1TB). I don't know if that is causing the issue, but it can't hurt throwing it out there.

    So why can't I install this program? Googling the problem brings up this error for various other installers (such as Visual Studio and Microsoft Office), but nothing for TI Connect, and all the solutions are the same: give the folder full privileges, which I've already done! I've also tried running the installer with and without the calculator plugged in, and repeatedly clicking Retry (or waiting a few moments before clicking Retry) in the error prompt, all with no result.
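
    One thing worth checking: MSI installers frequently run their file operations as the SYSTEM account rather than as you, so the effective ACL on the relocated Documents tree matters more than your own rights. A hedged sketch from an elevated prompt (the account name Timothy is an assumption):

        :: Show who actually has rights on the target, inherited entries included
        icacls "E:\Data\Timothy\Documents"
        :: Grant SYSTEM and the user inheritable full control, then retry the installer
        icacls "E:\Data\Timothy\Documents" /grant "SYSTEM:(OI)(CI)F" "Timothy:(OI)(CI)F"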

    Read the article

  • How do I prevent my ASP.NET site from continually prompting for user credentials?

    - by gilles27
    I'm trying to get an ASP.NET website up and running on IIS6. The site runs in its own application pool and uses Windows authentication, with anonymous access turned off. When I run the app pool under NETWORK SERVICE, everything works fine. However, we need the app pool to run under a different account, because that account needs some extra privileges (we are printing Word documents). The new account is a member of the local Users group and the IIS_WPG group, and it has been granted the "Log on as a service" right.

    When I browse to the site I am prompted for credentials, not once, but several times, and when the page finally loads it looks wrong because the style sheets have not been applied. My suspicion is that I am being prompted once for each file the browser requests (e.g. all the images, styles and script files), and that for some reason the website is unable to validate those credentials in order to serve the files back. If I allow anonymous access the page loads fine; we don't want to allow it, but I mention it in case it offers any further clues.

    My theory is that perhaps the account the app pool runs under needs permission to validate domain credentials? If that is so, how do I enable this?
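
    A classic cause of exactly this prompting loop when an app pool moves from NETWORK SERVICE to a domain account is Kerberos: the HTTP service principal name is still registered to the machine account, so per-request authentication fails and the browser keeps re-prompting. A sketch of registering the SPNs against the new account (all names are placeholders):

        :: Register the HTTP SPNs for the custom app pool account
        setspn -A HTTP/myserver MYDOMAIN\AppPoolUser
        setspn -A HTTP/myserver.mydomain.local MYDOMAIN\AppPoolUser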

    Read the article

  • How to enable hotlink protection without hardcoding my domain in the Apache config file?

    - by Jeff
    Been surfing around for a solution for a couple of days now. How do I enable Apache hotlink protection without hardcoding my domain in the config file, so I can port the code to my other domains without having to update the config file every time? This is what I have so far:

        RewriteCond %{HTTP_REFERER} !^$
        RewriteCond %{HTTP_REFERER} !^http://www\.example\.com [NC]
        RewriteRule \.(gif|ico|jpe|jpeg|jpg|png)$ - [NC,F,L]

    And this is what Apache suggests:

        SetEnvIf Referer example\.com localreferer
        <FilesMatch \.(jpg|png|gif)$>
            Order deny,allow
            Deny from all
            Allow from env=localreferer
        </FilesMatch>

    Both hardcode the domain in their rules. The closest I came to finding any info that covers this is right here on ServerFault, but the conclusion was that it cannot be done. Based on my research, that appears to be true, but I didn't find any questions or commentary dedicated solely to this question. If anyone's curious, here is the link to the Apache 2 docs that cover this topic. Note that Apache variables (e.g. %{HTTP_REFERER}) can only be used in the RewriteCond test string and the RewriteRule substitution arguments.
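
    That said, one workaround does get cited for this: a RewriteCond pattern cannot expand variables, but the test string can contain several of them, so you can splice %{HTTP_HOST} and %{HTTP_REFERER} together and use a backreference to demand that they match. A sketch, untested, and note that it treats example.com and www.example.com as different hosts:

        RewriteCond %{HTTP_REFERER} !^$
        RewriteCond %{HTTP_HOST}@@%{HTTP_REFERER} !^([^@]*)@@https?://\1/ [NC]
        RewriteRule \.(gif|ico|jpe|jpeg|jpg|png)$ - [NC,F,L]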

    Read the article

  • Why can't I see all of the client certificates available when I visit my web site locally on Windows 7 IIS 7?

    - by Jay
    My team has recently moved to Windows 7 for our developer machines, and we are attempting to configure IIS for application testing. Our application requires SSL and client certificates in order to authenticate.

    What I've done: I have configured IIS to require SSL and require (I also tried accept) certificates under SSL Settings. I have created the https binding and set it to the proper server certificate. I've installed all the root and intermediate chain certificates for the soft certificates properly in the Current User and Local Machine stores.

    The problem: when I browse to the web site, the SSL connection is established and I am prompted to choose a certificate. The issue is that the only certificate offered is one created by my company that would be invalid for use in the application; I am not offered the soft certificates that I have installed using MMC and IE. We are able to use the soft certs from our development machines against the Windows 2008 servers that host the application.

    What I did: I have attempted to copy the root CA to every folder location in the Current User and Local Machine stores that the company certificate's root is in.

    My questions: Could I be mishandling the certs anywhere else? Could there be a local/group policy blocking the other certs from use? What (if anything) should be done differently on Windows 7 versus 2008 in regards to IIS? Thanks for your help.
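
    One Schannel behaviour worth ruling out: when the server sends a trusted-issuer list during the TLS handshake, the browser hides every client certificate whose chain doesn't terminate in a listed CA, and the list itself can be truncated. Telling Schannel not to send the list at all is a common test; a sketch (a registry change, so reboot afterwards, and treat it as an experiment rather than the confirmed fix):

        :: Stop Schannel from sending the trusted issuer list in the handshake
        reg add "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL" ^
            /v SendTrustedIssuerList /t REG_DWORD /d 0 /f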

    Read the article

  • Not getting IP from ISP on Multicast Network

    - by Johan Nielsen
    I'm having an odd issue with my ISP (COMX.dk). I have a managed access gateway box (Telsay) with three 8P8C ports for Internet and IPTV (on different VLANs, or so my ISP tells me). To use a port you need to register your device's MAC address through an online interface; your device is then paired with a static IP.

    I am using one port actively, and I have registered another device (a router) configured to listen for an active DHCP server on the network. When my router gets a lease, it gets a private IP, 192.168.2.2, not the one bound to my MAC, which is odd! I disconnected my router from the gateway and connected my laptop directly; the same thing happened, and I was given a private address. I did a port scan on the gateway, found port 80 open, and browsed to the IP. I was then presented with the management interface of a Belkin wireless router (by the way, not my gear).

    At this point I called the ISP to let them know of my findings, only to be told, "Well, we can't see any rogue DHCP servers" (thinking to myself: well, I can). I then decided to try the other port of my gateway, only to experience the same. So I reconnected my router and used the remaining port to set up a monitor (Wireshark, promiscuous mode, etc.). I can see my router trying to discover a DHCP server, and I can also see my ISP's IGMP and PIMv2 packets just repeating the same pattern: Hello... Hello... Hello :)

    So I called them again, only to get the same response: "We don't see any rogue DHCPs... we can't see the host you are talking about (the MAC address of the Belkin router)... you are definitely connected through wireless?!" (No, I'm not; there's no such thing as a wireless wire, I thought to myself.)

    My questions: What is going on, besides what I'm reporting here? What am I seeing that they don't? What can I tell them in order for them to resolve mine/their issue?
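
    Hard evidence would help that conversation along. A capture of the DHCP exchange on the monitoring port shows exactly which MAC address answers the DISCOVER, which you can quote back to the ISP (the interface name is an assumption):

        # Print link-level headers (-e) so the rogue server's MAC is visible
        tcpdump -i eth0 -n -e -vv 'port 67 or port 68'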

    Read the article

  • Alfa AWUS036H 1W dysfunctional driver

    - by BrainStorm
    I recently purchased an Alfa AWUS036H 1W wireless USB adapter for my notebook, in order to improve signal strength and quality. I'm currently using Linux Mint 11, which uses the RTL8187 driver for this adapter, and I'm using a 4dBi antenna, though I have others.

    The problem is that this adapter does exactly the opposite of what it should: my internal Broadcom BCM4313 adapter actually works way better than the Alfa. Browsing is slow, some network applications don't even work, and while pings against google.com run smoothly on the internal adapter, the Alfa loses 25% of packets or more! I'm less than 50 feet from my AP; the internal adapter gets 44/70 link quality, and the Alfa gets around 60/70 (iwconfig output).

    Also, the system always sets the Alfa's power to 20dBm (100mW), so I have to run sudo iw reg set BO to make it 30dBm (1000mW), but there is apparently no significant change. I've installed the compat-wireless drivers, with no change either. And worst of all, in Windows 7 it browses way more smoothly, though I couldn't test it properly there.

    I hope it's a driver problem. Even if it's a pain for a starter to find/compile Linux drivers, I prefer that to a hardware problem where I would need to buy another adapter, since I have no money left (except for the cantenna pieces).

    Read the article

  • xcopy Not Suppressing File/Directory Query

    - by Daniel Bingham
    Hey folks, I'm attempting to use xcopy to copy a file from one machine to another on our network as part of a Java program. I'm calling xcopy like this:

        xcopy "C:\Program Files\path\to\my\file" "\\othermachine\c$\Documents and Settings\<myUserName>\Desktop\Test\path\in\directory\structure\to\file" /e /y /i

    Because I'm calling it from within Java, I need all the prompts to be suppressed. For the most part, /i and /y have done exactly that. However, for this one file /i fails and I get the file-or-directory prompt. The result is that it hangs the entire program. I've also tried appending /s /t /q to the existing options, to no avail.

    Why isn't /i working to suppress the File or Directory prompt? Is there an order I need to pass the options in? Is there something else I need to do?

    EDIT: I should mention the file is a text file (a single line of text) with no extension. It looks like this: FILE-NAME
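
    Since /i only suppresses the prompt when xcopy can already tell the destination is a directory (for instance, when it ends in a backslash), the old workaround for a single extensionless file is to pipe the answer in. A sketch, with the destination path abbreviated, and note the F is locale-dependent on non-English Windows:

        :: Pre-answer xcopy's "file or directory?" prompt with F (file)
        echo F | xcopy "C:\Program Files\path\to\my\file" ^
            "\\othermachine\c$\Documents and Settings\<myUserName>\Desktop\Test\...\file" /y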

    Read the article

  • Where do vendors publish internal transfer rates of HDDs?

    - by red888
    So I've started to dig into storage fundamentals and found that in order to calculate the IOPS of an HDD you need to know the internal transfer rate of the drive (the time it takes data to move from the platters to the disk's internal cache). I went on Newegg and even a few vendor sites and could not find this figure published for any HDD. Is it sometimes called something else? Take this link to a Seagate HDD, for instance: nowhere do I see "internal transfer rate", but I do see something called "Sustained Data Rate OD". Is that the same thing?

    Just so you know where I'm getting this info (book: "Information Storage and Management: Storing, Managing..."), consider an example with the following specifications provided for a disk:

        - The average seek time is 5 ms in a random I/O environment; therefore, T = 5 ms.
        - Disk rotation speed of 15,000 revolutions per minute, or 250 revolutions per second, from which rotational latency (L) can be determined: one-half of the time taken for a full rotation, or L = (0.5/250 rps, expressed in ms).
        - 40 MB/s internal data transfer rate, from which the internal transfer time (X) is derived based on the block size of the I/O; for example, for an I/O with a block size of 32 KB, X = 32 KB/40 MB.

        Consequently, the time taken by the I/O controller to serve an I/O of block size 32 KB is TS = 5 ms + (0.5/250) + 32 KB/40 MB = 7.8 ms. Therefore, the maximum number of I/Os serviced per second, or IOPS, is (1/TS) = 1/(7.8 × 10^-3) = 128 IOPS.
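
    The book's arithmetic is easy to sanity-check in one line of shell, and the same line makes it clear that for small random I/O the transfer term barely matters, so plugging in a vendor's "Sustained Data Rate" figure as a stand-in would change the answer very little (that substitution is an assumption, not vendor guidance):

        # Ts = seek (5 ms) + rotational latency (0.5/250 s) + transfer (32 KB / 40 MB/s)
        echo '1 / (0.005 + 0.5/250 + 0.032/40)' | bc -l
        # prints 128.205..., i.e. the book's ~128 IOPS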

    Read the article

  • Broken Python installation on CentOS 5.8

    - by Beckett
    I've already searched for a solution to my problem via Google and Stack Overflow's search facility, but haven't found anything related specifically to it. Here's the problem: I needed Python 2.7.3 on a CentOS 5.8 machine, which has only Python 2.4.3 preinstalled. There's no suitable version in its repositories, and I can't upgrade the installed version, which is why I decided to build Python from source. But I made a mistake: instead of make altinstall I did make install, thus changing the default Python of the installation. That was before I found this article: How to install Python 2.7.3 on CentOS 6.2. I guess 5.8 and 6.2 aren't different to the extent that the article is inapplicable.

    After installing the new Python version I installed pip, but once I tried to invoke it, I got a "No module named pkg_resources" error. In order to solve this I installed setuptools from the repository, but that only led to another error: "Distribution Not Found". My final step was to follow the guide I linked to, but I was unable to perform the last step: the easy_install-2.7 virtualenv command threw "-bash: /usr/local/bin/easy_install-2.7: .: bad interpreter: Permission denied". Now when I try to invoke pip or pip-2.7, both commands raise the same error with different binary names after "-bash:".

    Is there any way to fix this problem, so I can install the new Python version (2.7.3) alongside the preinstalled one (2.4.3) according to the guide? Any help will be appreciated.

    P.S.: yum is working fine, although it needs Python to function, so I hope the damage I unknowingly caused isn't very severe. Also, I'm not a native English speaker, so I apologize for possible occasional grammatical and/or spelling errors.
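
    Assuming make install ran with the default /usr/local prefix (so yum's own /usr/bin/python 2.4 binary is intact and merely shadowed on $PATH), a recovery sketch looks like this; verify each path with ls before touching anything:

        # Confirm what exists where before changing anything
        ls -l /usr/bin/python* /usr/local/bin/python*
        # Move the shadowing 2.7 binary aside so the system python wins again
        mv /usr/local/bin/python /usr/local/bin/python.bak
        # Rebuild 2.7.3 the safe way: altinstall only creates python2.7
        cd Python-2.7.3 && make altinstall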

    Read the article

  • How to set up a virtual machine in Ubuntu desktop to run a Debian server

    - by stickman
    I want to run a virtual machine in my Ubuntu desktop that runs a Debian server. The purpose of this is to generate Debian packages: I have some C++ applications that were originally developed on my Ubuntu machine, and I need to (re)compile them on a Debian server in order to build .deb packages for deployment on a Debian server and to make sure the applications will definitely work there. The idea is that I can do 90% of my development on Ubuntu (where I am more comfortable) and deploy a binary package that definitely works on Debian. BTW, I am developing on Karmic Koala (Ubuntu 9.10).

    [Edit] Following the advice I got so far, I have installed debootstrap and Debian 'Lenny' in /srv/chroot/debian_lenny on my machine. I am not sure this is the server version, but in any case I don't think that matters for my purposes (though it would be useful to know how to install the server version specifically). At the moment, though, I am like a fish out of water, since there is no GUI, only a console in the chroot jail. I had a look in the home folder (I cheated, by using the KNavigator in Ubuntu), and there are no folders there, which presumably means that no users have been set up yet in the Debian "system". I would like to know how to do the following (a sketch follows below):

    1. Download and install the dev tools needed for (re)compiling my C++ apps
    2. Copy my projects from the Ubuntu "system" to the Debian "system"
    3. After building the binaries, create a Debian binary package containing all of my binaries, so that I can install the package on a Debian server (my remote server)
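
    A sketch covering the three steps with the standard Debian toolchain; the package names are the usual ones, and the project path is made up:

        # 1. Enter the chroot and install the build tools
        sudo chroot /srv/chroot/debian_lenny /bin/bash
        apt-get update && apt-get install build-essential devscripts debhelper fakeroot
        # 2. Copy the project in (run from the Ubuntu side; the chroot is just a directory)
        sudo cp -r ~/projects/myapp /srv/chroot/debian_lenny/root/
        # 3. Inside the chroot, build a binary .deb from a debianized source tree
        cd /root/myapp && dpkg-buildpackage -us -uc -b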

    Read the article

  • Pages load fine in the browser, but a 404 Not Found is reported during the GET on all pages except the index

    - by user885983
    I believe this question is more suited to Server Fault (please correct me if not). This issue appears very similar to this question, except there are no 301 Moved Permanently responses for any pages. The domain is yorkshirebadges.co.uk. For example, loading yorkshirebadges.co.uk or yorkshirebadges.co.uk/index.php reports no 404s during network inspection, but every other page (/contact.php, /products.php) reports a Not Found.

    mod_rewrite is being used on the site; I checked this out but didn't see any obvious errors. It's included below for reference:

        RewriteEngine on
        RewriteRule ^store/material/([^/\.]+)/price/?([^/\.]+)?$ products.php?prodType=$1&price=$2
        RewriteRule ^store/price/?([^/\.]+)?$ products.php?price=$1;
        RewriteRule ^store/material/?([^/\.]+)?$ products.php?prodType=$1
        RewriteRule ^store/([^/\.]+)/?$ products.php?prodCat=$1
        RewriteRule ^store/([^/\.]+)/price/([^/\.]+)$ products.php?prodCat=$1&price=$2
        RewriteRule ^store/Type/?([^/\.]+) products.php?prodType=$1
        RewriteRule ^store/([^/\.]+)/?([^/\.]+)?$ view-product-details.php?cat=$1&prodName=$2
        RewriteRule ^store/([^/\.]+)/material/?([^/\.]+)?$ products.php?prodCat=$1&prodType=$2
        RewriteRule analytics http://www.google.com/analytics

        <IfModule mod_suphp.c>
            suPHP_ConfigPath /home/yorkshir
            <Files php.ini>
                Order allow,deny
                Deny from all
            </Files>
        </IfModule>

    Chrome's network inspection (and Firebug on Firefox) report 404s on all pages except the index; the server is Apache 2. Really scratching my head on this one!

    Read the article

  • How to play individual albums in iTunes?

    - by Herb Caudill
    I know of two ways to play one specific album in iTunes:

    1. Do a search that's specific enough to include just that album and no other tracks, then press a "Play album" button. (Doesn't work in cover flow or list view.)
    2. Go to list view, turn on the column browser, make sure "Albums" is showing in View/Column Browser, and double-click an album name.

    These are fine as far as they go, but:

    - Double-clicking an album in cover flow will play the album, and then keep going (in alphabetical order). That's no good.
    - In playlists like "Purchased" or "Recently Added", you can either view and play whole albums, or sort by date added; you can't do both.
    - In general, there's no straightforward way to get from a track in a playlist to the whole album it belongs to.

    What I would really, really like would be to right-click on any song or album cover, anywhere, and choose "Play album". While I'm waiting for Apple to add that, any tips for simple album-centric listening?

    Read the article

  • Fast extraction of a time range from syslog logfile?

    - by mike
    I've got a logfile in the standard syslog format. It looks like this, except with hundreds of lines per second:

        Jan 11 07:48:46 blahblahblah...
        Jan 11 07:49:00 blahblahblah...
        Jan 11 07:50:13 blahblahblah...
        Jan 11 07:51:22 blahblahblah...
        Jan 11 07:58:04 blahblahblah...

    It doesn't roll at exactly midnight, but it'll never have more than two days in it. I often have to extract a timeslice from this file. I'd like to write a general-purpose script for this that I can call like:

        $ timegrep 22:30-02:00 /logs/something.log

    ...and have it pull out the lines from 22:30 onward, across the midnight boundary, until 2am the next day. There are a few caveats:

    1. I don't want to have to bother typing the date(s) on the command line, just the times; the program should be smart enough to figure them out.
    2. The log date format doesn't include the year, so it should guess based on the current year, but nonetheless do the right thing around New Year's Day.
    3. I want it to be fast: it should use the fact that the lines are in order to seek around in the file and use a binary search.

    Before I spend a bunch of time writing this, does it already exist?
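
    Nothing standard seems to ship with this exact behaviour, so here is a linear-scan sketch to start from; it handles the midnight rollover, but it does not do the binary-search seeking in caveat 3, it assumes the file starts before the window opens, and it ignores the year problem entirely:

        #!/bin/sh
        # Usage: timegrep HH:MM-HH:MM logfile
        S=${1%-*}; E=${1#*-}
        awk -v s="$S" -v e="$E" '
            { t = substr($3, 1, 5) }              # "HH:MM" from the syslog timestamp
            !on { if (t >= s) on = 1; else next } # wait for the window to open
            e >= s && t > e { exit }              # same-day window: done
            e <  s && w && t >= e { exit }        # overnight window: done after midnight
            { print; if (t < s) w = 1 }           # time went backwards => crossed midnight
        ' "$2"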

    Read the article

  • Why is .htaccess not allowed in a directory but is allowed in another?

    - by John Isaacks
    I have apache2 installed on Ubuntu 10.04. Inside my /var/www/ directory (among others) I have cakephp and dvdcatalog directories, each with CakePHP 1.3 installed. I can access them both via localhost/cakephp and localhost/dvdcatalog, but dvdcatalog shows up with no CSS styling. They both have these files:

        /var/www/cakephp/app/webroot/css/cake.generic.css
        /var/www/dvdcatalog/app/webroot/css/cake.generic.css

    When I go to http://localhost/cakephp/css/cake.generic.css the file is served, but it is not when I go to http://localhost/dvdcatalog/css/cake.generic.css. I think this means the cakephp folder is able to use .htaccess and dvdcatalog is not. I set up the cakephp directory last month when I was following the blog tutorial; I am setting up the dvdcatalog directory now for a different tutorial, so I am not sure if I am missing a step.

    In my /etc/apache2/apache2.conf file I have this:

        <Directory "/var/www/*">
            Order allow,deny
            Allow from all
            AllowOverride All
        </Directory>

    which I thought gave .htaccess support to everything. Does anyone have any ideas what the problem is?
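
    With AllowOverride All already covering /var/www/*, the difference is usually in the webroot itself; comparing the two installs directly tends to surface a missing or unreadable .htaccess faster than staring at the config (a quick hedged checklist):

        # Is the .htaccess actually present, readable, and identical in both?
        ls -la /var/www/cakephp/app/webroot/.htaccess /var/www/dvdcatalog/app/webroot/.htaccess
        diff /var/www/cakephp/app/webroot/.htaccess /var/www/dvdcatalog/app/webroot/.htaccess
        # And is mod_rewrite loaded at all?
        apache2ctl -M | grep rewrite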

    Read the article

  • Switch to IPv6 and get rid of NAT? Are you kidding?

    - by Ernie
    So our ISP has set up IPv6 recently, and I've been studying what the transition should entail before jumping into the fray. I've noticed three very important issues:

    1. Our office NAT router (an old Linksys BEFSR41) does not support IPv6, nor does any newer router, as far as I can tell. The book I'm reading about IPv6 tells me that it makes NAT "unnecessary" anyway. If we're supposed to just get rid of this router and plug everything directly into the Internet, I start to panic. There's no way in hell I'll put our billing database (with lots of credit card information!) on the Internet for everyone to see. Even if I were to propose setting up Windows' firewall on it to allow only six addresses to have any access to it at all, I still break out in a cold sweat. I don't trust Windows, Windows' firewall, or the network at large enough to be even remotely comfortable with that.

    2. There are a few old hardware devices (e.g. printers) with absolutely no IPv6 capability at all, likely a laundry list of security issues dating back to around 1998, likely no way to actually patch them, and no funding for new printers.

    3. I hear that IPv6 and IPsec are supposed to make all this secure somehow, but without physically separated networks that make these devices invisible to the Internet, I really can't see how; I can likewise see any defences I create being overrun in short order.

    I've been running servers on the Internet for years now and I'm quite familiar with the sort of things necessary to secure those, but putting something private like our billing database on the network has always been completely out of the question. What should I be replacing NAT with, if we don't have physically separate networks?
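
    The part the book glosses over is that NAT's protection was really just implicit statefulness, and a stateful packet filter reproduces the "no unsolicited inbound" behaviour without translating addresses. A minimal ip6tables sketch of that default-deny posture (illustrative only; a real ruleset needs management access, logging, and per-host exceptions):

        # Drop everything inbound/forwarded by default
        ip6tables -P INPUT DROP
        ip6tables -P FORWARD DROP
        # ...but allow replies to connections initiated from inside
        ip6tables -A INPUT   -m state --state ESTABLISHED,RELATED -j ACCEPT
        ip6tables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
        # ICMPv6 is not optional in IPv6 (neighbour discovery, PMTU)
        ip6tables -A INPUT -p ipv6-icmp -j ACCEPT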

    Read the article
