Search Results

Search found 18333 results on 734 pages for 'temporary directory'.


  • Java - Resize images upon class instantiation

    - by Tyler J Fisher
    Hey StackExchange GameDev community! I'm attempting to resize a series of sprites (scaling each by 2x) upon instantiation of the class they're located in. I've tried the following call, but my attempts have been unsuccessful; I haven't managed to write an implementation that even compiles, so I have no error output yet:

        wLeft.getScaledInstance(wLeft.getWidth()*2, wLeft.getHeight()*2, Image.SCALE_FAST);

    I've heard that Graphics2D is the best option. Any suggestions? I suspect I'd be better off loading the images into a Java project, resizing them, and writing them out to a new directory, so that each sprite doesn't have to be resized on every class instantiation. What do you think? Photoshopping each individual sprite is out of the question, unless I used a macro. Code:

        package game;

        //Imports
        import java.awt.Image;
        import javax.swing.ImageIcon;

        public class Mario extends Human {

            Image wLeft = new ImageIcon("sprites\\mario\\wLeft.PNG").getImage();

            //Constructor
            public Mario() {
                super("Mario", 50);
                wLeft.getScaledInstance(wLeft.getWidth()*2, wLeft.getHeight()*2, Image.SCALE_FAST);
            }
        }

    Thanks! Note: this is not homework; I just thought Mario would be a good, overused starting point in game dev.
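    For what it's worth, two things stand out in the snippet above: Image.getWidth() requires an ImageObserver argument (getWidth(null) compiles), and getScaledInstance() returns a new Image rather than resizing in place, so discarding its return value is a no-op. A minimal sketch of the Graphics2D approach the poster mentions, assuming the sprites can be loaded with ImageIO (the SpriteScaler helper and its name are illustrative):

        import java.awt.Graphics2D;
        import java.awt.image.BufferedImage;
        import java.io.File;
        import java.io.IOException;
        import javax.imageio.ImageIO;

        public final class SpriteScaler {
            // Load a sprite from disk and return a copy scaled to twice its size.
            public static BufferedImage scale2x(File spriteFile) throws IOException {
                BufferedImage src = ImageIO.read(spriteFile);
                BufferedImage dst = new BufferedImage(
                        src.getWidth() * 2, src.getHeight() * 2, BufferedImage.TYPE_INT_ARGB);
                Graphics2D g = dst.createGraphics();
                // Drawing into the larger destination rectangle performs the resize.
                g.drawImage(src, 0, 0, dst.getWidth(), dst.getHeight(), null);
                g.dispose();
                return dst;
            }
        }

    Writing the scaled copies out once (ImageIO.write(dst, "png", outFile)) and loading those at runtime, as the poster suggests, avoids repeating the work on every instantiation.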


  • Boot from VHD with Windows 7 - bcdedit trouble

    - by Michiel Overeem
    I'm running Windows 7 Enterprise, x64. I created a Windows 7 VHD file with the help of a blog post (hanselman blog), then added it to my boot menu with the help of another blog post (hanselman blog). This worked great. After that I upgraded my HDD: with the help of Clonezilla I copied the old disk to the new disk, then copied the VHD to another partition and updated the boot menu. However, the step

        bcdedit /set {guid} device vhd=[driveletter:]\<directory>\<vhd filename>

    fails with the message:

        An error has occurred setting the element data.
        The request is not supported.

    What is happening?
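    For reference, a complete VHD boot entry normally needs both the device and osdevice elements pointed at the file, with literal square brackets around the drive letter (the path below is a placeholder):

        bcdedit /set {guid} device   vhd=[D:]\VHDs\win7.vhd
        bcdedit /set {guid} osdevice vhd=[D:]\VHDs\win7.vhd

    One commonly reported cause of "The request is not supported" is the VHD sitting on a volume the boot manager cannot use for native VHD boot (it must be a local NTFS partition), so verifying how the new partition was formatted is a reasonable first check.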


  • Mac OS X file recovery

    - by Daniel
    I thought all operating systems would merge folder contents when a folder is moved to a location where one of the same name exists. Imagine my surprise when that didn't happen, and hundreds, if not thousands, of my files went missing and are nowhere to be found. Because they were not "deleted", they are not in the trash bin. I've tried to do some recovery using a program called Stellar Phoenix, but after a roughly 24-hour scan it didn't recognize any of the raw files (.dng, .arw) as image files, so I couldn't see whether they could be recovered. It also didn't show the directory structure, which would be handy. I tried a quick scan, but all it showed was files that were still on the HD; I'm not sure what the point of that is. I've used Recover 2000 on Windows and it does a good job. Does anyone know of anything that works quickly and reliably for this kind of file recovery? (I don't think I should have to do a sector-by-sector scan for this kind of file loss.)


  • Time Machine (OSX) doesn't back up files in Mount Point or Disk Image File

    - by Chris
    Hi all, I found this Q&A (http://superuser.com/questions/148849/backup-mounted-drive-of-an-image-in-time-machine) and it prompted me to ask the following question: I have two disk images which are scripted to be mounted on login. These two disk images are always mounted to the same location, and both are encrypted TrueCrypt volumes. Time Machine (TM) will only back up the disk images the first time they are mounted, but not after that. As I modify documents within the volumes throughout the day, the modified timestamps are adjusted properly; however, TM does not back them up. TM also never backs up the mount points, which are two folders within my home directory. Any ideas as to why neither the mount points nor the image files are backed up? Do the image files have to be closed (unmounted) after being modified for TM to back them up? Thanks, Chris


  • Windows 8 Media Center Pack Install Fails with DoTransmogrify failed due to error 0x80070011

    - by Conrad Frix
    When I attempt to use Add Features to install the Windows 8 Media Center Pack, I get the "Something Went Wrong" message. Checking %localappdata%\Microsoft\Windows\Windows Anytime Upgrade\Upgrade.log, I can see the following error block:

        2012-10-27 18:43:13, Error  WAU  DoTransmogrify failed due to error 0x80070011.
        2012-10-27 18:43:13, Error  WAU  UpgradeSKU failed. Exiting.
        2012-10-27 18:43:13, Error  WAU  The worker process exited unexpectedly
        2012-10-27 18:43:13, Error  WAU  Something went wrong
        2012-10-27 18:43:13, Error  WAU  Close this wizard and try again.

    My understanding is that 0x80070011 means ERROR_NOT_SAME_DEVICE. I think this may be related to the fact that C:\Users is a junction point to D:\Users. Do I have to move my Users directory back? Is there a workaround?
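    If the relocated profile directory is the suspect, the junction is easy to confirm from a command prompt before moving anything back:

        dir C:\ /aL

    This lists reparse points in the root of C: and should show Users resolving to [D:\Users]. ERROR_NOT_SAME_DEVICE fits that theory: the upgrade worker apparently expects a plain move within a single volume, which a cross-volume junction breaks.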


  • Installing the Perforce visual client (P4V) on Linux

    - by Manish
    I come from a Mac background and am trying my hand at installing the Perforce visual client (P4V) on my Linux box. I downloaded the correct version here and untarred the files, then cd'd to the directory ~/Desktop/p4v-2012-blah-blah/bin and ran chmod +x p4*. After this I try running p4v (by double-clicking), but I don't see anything. The file type is shown as a "text executable", but I don't know why it is not running. On the Mac I had done the same thing: just clicked on p4v and the client would show up (where I filled in the server address and everything). I'm not sure what is going wrong here. Can someone give me directions? FWIW, I did check out this link.
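    Since double-clicking gives no feedback, running the launcher from a terminal usually surfaces the actual error (the path follows the question):

        cd ~/Desktop/p4v-2012-blah-blah/bin
        ./p4v

    A "text executable" is typically a wrapper shell script, so if it fails silently, the underlying binary it launches may be missing shared libraries; checking with ldd narrows that down (the .bin name is a guess at the package layout):

        ldd ./p4v.bin | grep 'not found'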


  • I get a "could not create lock file" error when trying to run Postgres

    - by zermy
    I recently had to replace my postgresql.conf file, and I thought I got the settings right, but when I try to run PostgreSQL I get this error:

        FATAL: could not create lock file "/var/run/postgresql/.s.PGSQL.5432.lock": No such file or directory

    My workaround is to go in as root, create a folder called postgresql in /var/run, and then change the owner of the folder to postgres. The biggest problem is that I need to do this every single time my computer starts; the folder somehow deletes itself. I tried commenting out the external PID file bit in the conf file, but that didn't change anything.
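    The folder is not deleting itself: on most modern distributions /var/run is a tmpfs that starts empty on every boot, so anything created there has to be recreated at startup. A minimal sketch, e.g. at the end of /etc/rc.local (path and ownership per the question):

        mkdir -p /var/run/postgresql
        chown postgres:postgres /var/run/postgresql

    On systemd-based systems the equivalent is a tmpfiles.d entry, e.g. in /etc/tmpfiles.d/postgresql.conf:

        d /var/run/postgresql 2775 postgres postgres -

    Alternatively, unix_socket_directory in postgresql.conf can point the socket and lock file at a directory that survives reboots.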


  • The eval(base64_decode()) virus has infected a server. Would removing executable permissions help solve the issue?

    - by Bravo.I
    The eval(base64_decode()) infection has hit a server. This is a PHP virus that uses PHP's eval function and, as far as I can tell, replicates itself into all the PHP files on the system. Would removing executable permissions help solve the problem? Please answer really fast, and if you have any better ideas on how to stop this virus, I'm all ears. The virus has replicated itself into several folders in the directory, and most of the other folders are actually several other websites...
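    Worth noting: PHP files served through the web server are read and interpreted, not executed directly, so the execute bit is irrelevant to them and stripping it is unlikely to stop anything. As a first step, the infected files can at least be enumerated (the web root path is a placeholder):

        grep -rl --include='*.php' 'eval(base64_decode' /var/www

    That list also hints at the entry point; reinfection usually continues until the vulnerable application or the stolen credentials behind the original compromise are fixed.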


  • Why is a ZFS pool not persisting over server restart?

    - by Chance
    I have a ZFS pool with 4 drives. It also has a 3 GB ZIL and a 20 GB L2ARC, each a partition on an SSD that doubles as my Linux Mint (ver. 13) boot drive. The pool is mounted at /data. The problem I am running into is that when I restart the server, the pool/directory is completely wiped despite having data in it prior. I'm afraid I'm doing something wrong in the setup, which leads me to the following questions: What would cause this? Is there any way to get the data back? How do I stop it from happening in the future? Thank you in advance!

          pool: data
         state: ONLINE
          scan: none requested
        config:

            NAME        STATE     READ WRITE CKSUM
            data        ONLINE       0     0     0
              raidz2-0  ONLINE       0     0     0
                sda1    ONLINE       0     0     0
                sdb1    ONLINE       0     0     0
                sdc1    ONLINE       0     0     0
                sdd1    ONLINE       0     0     0
            logs
              sde4      ONLINE       0     0     0
            cache
              sde3      ONLINE       0     0     0

        errors: No known data errors
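    A common explanation on Linux is that the pool is simply never imported at boot, so /data is just an empty mountpoint directory; the data is still in the pool. A hedged first check after a reboot:

        sudo zpool import          # lists pools that exist on disk but are not imported
        sudo zpool import data     # imports the pool, which should bring the data back

    If that turns out to be the cause, the fix is making the import automatic: enable the distribution's ZFS import/mount service and make sure the pool is recorded in the cache file it reads, e.g.:

        sudo zpool set cachefile=/etc/zfs/zpool.cache data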


  • Restore Default OSX Home Folder Icons

    - by Cerales
    I want to keep folders such as Pictures and Documents in my home folder in sync between computers; at the moment I'm using Dropbox to do this. On Computer A I added symlinks inside the Dropbox folder pointing to the folders in my home directory. On Computer B I deleted the existing Pictures and Documents folders and replaced them with symlinks to the folders inside the Dropbox folder. This works very well, except that in Finder I don't see the nice little icons that Lion applies by default to folders of a particular kind. Is there any way to restore these?


  • IIS 7.5 on Windows Server 2008 R2 shows a blank page

    - by sysdmxm
    I am using IIS 7.5 on Windows Server 2008 R2, with PHP recently installed. When I browse to index.php it returns a blank page: the favicon and the header of my page load, but nothing else. An info.php file, by contrast, is not blank. If I disable Anonymous Authentication, the same index.php returns "IIS 7.5 Detailed Error - 401.2 - Unauthorized". What is strange is that I installed this exact code onto another fresh IIS install and it loads fine. Permissions for the directory are the same on both.
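    A blank page from PHP usually means a fatal error whose display is suppressed. A quick way to surface it while debugging (the php.ini location varies by install; revert display_errors afterwards on anything public):

        display_errors = On
        log_errors = On
        error_reporting = E_ALL

    After an iisreset, reloading index.php should show the underlying failure on the page or in the PHP error log; with info.php working but the application blank, a missing PHP extension or an unreadable include path are the usual suspects.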


  • Open a specific Google Chrome profile from the command line on Mac

    - by gradedcatfood
    I have been trying to open Google Chrome from the command line, but with no luck. I have tried the answers to "How do I start Chrome using a specified 'user profile'?". My goal is to open Google Chrome with a specific profile, such as "Profile 1", "Profile 2", or "Default", from the command line (bash, to be specific) on my Mac. UPDATE 6/3/14: I got this to work, BUT it only works when opening Chrome for the first time:

        open -a Google\ Chrome --args --"profile-directory"="Profile 1"

    So how do you get --args to be accepted AFTER Google Chrome has already been launched?
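    One hedged workaround: open(1) only delivers --args to a process it launches, so the arguments are ignored when Chrome is already running. Forcing a new instance with -n makes the flag take effect regardless:

        open -n -a "Google Chrome" --args --profile-directory="Profile 1"

    The trade-off is that -n starts a second Chrome instance rather than re-targeting the existing one, so this is most useful when the profiles are meant to run side by side.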


  • Subversion 1.7 on 12.04 precise: libsasl error, compiling from source?

    - by Andrew Mao
    Background: I am a longtime Gentoo user, and this is my first time using Ubuntu (installed in a VM to avoid compiling everything from scratch). I am familiar with a Linux environment but somewhat unfamiliar with Ubuntu. I am trying to install Subversion 1.7 on Ubuntu and saw this post: Where can I find a Subversion 1.7 binary? The above post recommends the PPA ppa:dominik-stadler/subversion-1.7; I also found the PPA ppa:svn/ppa from another link. They both cause problems for me. The issue is that any svn operation against the remote server produces the following error:

        svn: E170001: Unable to connect to a repository at URL 'svn+ssh://my_repo'
        svn: E170001: Could not create SASL context: generic failure: No such file or directory

    This seems to arise from a recent bug involving SVN's dependency on the libsasl library, as documented by Debian users (http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=683555) and also Mac users (https://trac.macports.org/ticket/34861). Resolution seems to involve either updating the cyrus-sasl or libsasl library to a newer version (neither of which is in the latest apt packages), or compiling Subversion without SASL support. As a Gentoo user I started looking into how to compile svn from source, but it looks far more involved on Ubuntu than I'm used to, and I'm not sure what the canonical way is. My questions: 1) Is there an obvious fix for this problem that I am overlooking? 2) Is there a way to update the dependencies for SVN to something that works, using synaptic or apt-get? 3) If I want to compile from scratch, how do I use the sources in the PPA instead of downloading my own source copy (i.e. the PPA has both binary and sources)?
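    On question 3, a sketch of the Debian/Ubuntu way to rebuild a package from the PPA's own sources (this assumes the PPA's deb-src line is enabled in the APT sources):

        sudo apt-get build-dep subversion
        apt-get source subversion        # fetches and unpacks the PPA's source package
        cd subversion-1.7.*
        # adjust the build, e.g. disable SASL via the configure flags in debian/rules, then:
        dpkg-buildpackage -us -uc
        sudo dpkg -i ../subversion_*.deb

    Unlike a plain make install, this produces a .deb that APT knows about, which keeps the Gentoo-style "build it yourself" approach compatible with the rest of the packaging system.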


  • Squid not caching

    - by Abhishek Chanda
    I am trying to configure Squid as a caching server. I have a LAN where the web server (Apache) is at 192.168.122.11, Squid is at 192.168.122.21, and my client is at 192.168.122.22. The problem is that when I look at Squid's access log, all I see are TCP_MISS messages; it seems Squid is not caching at all. I checked that the cache directory has the proper permissions. What else could be wrong here? Please let me know if I need to post the configuration.
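    Two things worth ruling out first, offered as a checklist rather than a diagnosis. One, that an on-disk cache is actually configured; a minimal sketch of the relevant squid.conf lines (sizes are placeholders):

        cache_dir ufs /var/spool/squid 1024 16 256
        cache_mem 256 MB

    Two, that the origin responses are cacheable at all: if Apache sends Cache-Control: private or no-cache, sets cookies on everything, or omits validators, Squid will legitimately log TCP_MISS for every request. Inspecting the headers with curl -I http://192.168.122.11/ from the Squid box shows which case applies.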


  • I think I have multiple PostgreSQL servers installed; how do I identify and delete the 'extra' ones?

    - by Guided33
    I seem to have a few installations of PostgreSQL on my machine somehow. I'm not sure if this is a mistake, or if Ubuntu for some odd reason duplicates directories and keeps them elsewhere. I have a postgresql directory in /etc, one in /usr/lib, and one in /opt. I'm properly confused at this point. How do I go about deleting the extra ones, and which ones are the extra ones? I also need to make sure that the pg gem in my Rails environment is pointing at the correct PostgreSQL server. Any thoughts on my issue would be a huge help.
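    Those three directories are not necessarily three installations. On Debian/Ubuntu the packaged PostgreSQL deliberately splits itself: configuration lives in /etc/postgresql and binaries in /usr/lib/postgresql, so those two are very likely one install; /opt is the conventional home of a separate manual installer (e.g. the EnterpriseDB one-click package). A few commands to confirm what is actually present and running (a hedged sketch):

        dpkg -l | grep postgresql      # what the package manager owns
        ps aux | grep postgres         # which server binary is running, and its -D data directory
        which psql && psql --version   # which client the PATH (and hence the pg gem) will find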


  • DFS keeps replicating almost all files

    - by Adrian Godong
    We have always had problems with DFS, but recently it has gotten worse (for no apparent reason), to the point where it's becoming harmful. We have one master server and DFS connections to four other servers. The four servers don't modify any files, so all replication propagates from the master to the other four. The replicated directory has about 900,000 files. In recent weeks, every time we check DFS, the backlogs contain hundreds of thousands of files. For instance, right now the master server is replicating about 700,000 files to three of the four servers, while the fourth one is fine. Sometimes only one is off, sometimes two, and this time three; it is never the same set of servers. It is inconceivable that something periodically touches all 900,000 files. The biggest change that happens is a scheduled update of several thousand files every six hours. Does anybody have the same problem? Is it a known issue?
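    To see whether the backlog is genuine change traffic, the per-connection backlog can be sampled from the command line (group, folder, and server names below are placeholders):

        dfsrdiag backlog /rgname:MyGroup /rfname:MyFolder /smem:MASTER /rmem:SERVER2

    One thing worth knowing while investigating: DFSR treats attribute, timestamp, and ACL changes as modifications, so an antivirus scan, a backup agent resetting archive bits, or a permissions change at the top of the tree can legitimately queue hundreds of thousands of files without any file contents changing.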


  • Setting up chroot SFTP on a Debian server

    - by Kevin Duke
    I'm trying to allow a user "user" to access my server by either SFTP or SSH, jailed into a directory with chroot. I read the instructions here, however it does not work. I did the following:

        1. useradd user
        2. Modified /etc/ssh/sshd_config, adding to the bottom of the file:

               Match User user
                   ForceCommand internal-sftp
                   ChrootDirectory /home/duke/aa/smart

        3. Changed the Subsystem line to: Subsystem sftp internal-sftp
        4. Restarted sshd with /etc/init.d/ssh restart
        5. Logged in over SSH as user "user" with PuTTY

    PuTTY says "Server unexpectedly closed the connection". Why is this, and how can it be fixed? EDIT: Following the suggestions below, I've made the bottom of sshd_config look like:

        Match User user
            ChrootDirectory /tmp

    yet no change. The password is accepted, but I cannot connect via either SSH or SFTP. What gives?
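    The classic cause of an immediate disconnect here is sshd's chroot safety check: every component of the ChrootDirectory path must be owned by root and not group- or world-writable, otherwise sshd kills the session right after authentication (which PuTTY reports as an unexpected close). A hedged sketch of the fix for the path in the question:

        sudo chown root:root /home/duke /home/duke/aa /home/duke/aa/smart
        sudo chmod 755 /home/duke /home/duke/aa /home/duke/aa/smart

    /var/log/auth.log normally names the offending directory ("fatal: bad ownership or modes for chroot directory ..."). Separately, ForceCommand internal-sftp permits only SFTP; a chrooted interactive SSH login additionally requires a shell and its libraries inside the jail.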


  • Flex 4 + Apache Ant, Cannot Load FlashPunk Libraries

    - by SquareCrow
    I have been searching Google, the Apache docs*, and the FlashPunk forums looking for an answer to this: I cannot get Ant/Flex to find and compile the FlashPunk libraries. Here is my build.xml:

        <!-- Fetch the JAR full of Flex tasks if it is not already in the source directory -->
        <copy file="${FLEX_HOME}/ant/lib/flexTasks.jar" todir="${SOURCE_PATH}"/>

        <!-- Add flextasks to the project -->
        <taskdef resource="flexTasks.tasks" classpath="${SOURCE_PATH}/flexTasks.jar"/>

        <!-- Release build, Flash Player 10.1 -->
        <target name="build">
            <!-- Build the FlashPunk library -->
            <echo message="building swc..." />
            <compc output="FlashPunk.swc" keep-generated-actionscript="false"
                   incremental="false" optimize="false" debug="true" use-network="false">
                <include-sources dir="${FLASHPUNK_PATH}/net"
                                 includes="**/* flashpunk/utils/* flashpunk/masks/*"
                                 excludes="**/*.TTF **/*.png"/>
                <load-config filename="${FLEX_HOME}/frameworks/flex-config.xml"/>
            </compc>

            <echo message="building swf..." />
            <mxmlc file="${SOURCE_PATH}/epOne.as" output="${OUTPUT_PATH}/epOne.swf"
                   debug="false" incremental="false" strict="true" accessible="false"
                   link-report="link_report.xml"
                   static-link-runtime-shared-libraries="true">
                <optimize>true</optimize>
            </mxmlc>
        </target>

    This results in many errors of the form "Definition net.flashpunk.masks:Grid could not be found", even though when I open the directories I can see the .as files right there. Sorry if this is very basic; I am piecing together my knowledge of Ant from docs and tutorials.

    * I decided to use Ant because neither FlashDevelop on Windows nor Eclipse on Linux seems to work for me.
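    One likely culprit, offered as an assumption rather than a confirmed diagnosis: the compiler resolves a class like net.flashpunk.masks.Grid relative to a source root, so the root handed to compc should be the directory that contains net/, not ${FLASHPUNK_PATH}/net itself. A sketch of the adjusted element:

        <compc output="FlashPunk.swc" debug="true">
            <source-path path-element="${FLASHPUNK_PATH}"/>
            <include-sources dir="${FLASHPUNK_PATH}" includes="net/**/*.as"
                             excludes="**/*.TTF **/*.png"/>
            <load-config filename="${FLEX_HOME}/frameworks/flex-config.xml"/>
        </compc>

    With the source root at the package root, the includes pattern can also collapse to a single glob instead of enumerating flashpunk/utils and flashpunk/masks separately.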


  • copSSH: how to restrict a user from going above their home root

    - by minus4
    I have installed an SFTP service on a Windows server using copSSH, and all is good and it works well. However, a user can navigate up out of their home root: for example, with a home of C:\copSSH\home\{username}, that user can go up into copSSH and into those directories too. I also have a user whose home is set to C:\inetpub\wwwroot (I have the path set as /cygdrive/c/inetpub/wwwroot), but that user can wander into the rest of the system as well. There is no write access outside the home, but there is read and download access. It would be ideal if the user could only go forward (deeper) from their start directory, rather than out and about. Thanks.
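    copSSH is stock OpenSSH running under Cygwin, so the usual OpenSSH jail should apply here too: a ChrootDirectory block in the sshd_config that copSSH installs. A hedged sketch (the username is a placeholder; the Cygwin path form follows the question):

        Match User webuser
            ChrootDirectory /cygdrive/c/inetpub/wwwroot
            ForceCommand internal-sftp

    The same OpenSSH rules apply as on Linux: the chroot target must not be writable by the jailed user, and ForceCommand internal-sftp restricts the account to SFTP only.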


  • Read-only filesystem: Recovery Mode not working

    - by purbleguy
    I have seen other posts about this before, but they didn't help. In short: today I was trying to play Colobot on my Ubuntu Trusty computer, and when I tried to access the game's directory from a terminal, bash warned me that the disk was in a read-only state. I'm like, OK... so I reboot and go into recovery mode. There I run fsck; it finds errors but apparently fails to fix them. At that point I was getting annoyed and searched the internet. Once I found an answer, I ran the grub and dpkg options in recovery mode; recovery mode said the filesystem was read/write, but when I boot in I get the same thing: read-only. So I reboot into recovery mode and, tada! It's read-only again. I can't think of anything else to do, as the other people who had the same problem had it fixed by the steps I did. I have all my important files backed up to both a separate partition and a separate computer, so no worries there. I just need help getting this to work, as my computer might as well be a brick otherwise.
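    A filesystem that keeps relapsing to read-only usually means the kernel is remounting it on the first I/O error, so the repair has to happen while the partition is not mounted at all, i.e. from a live USB rather than recovery mode on the same disk. A hedged sketch (the device name is an assumption; lsblk shows the real one):

        sudo fsck -f /dev/sda1     # against the unmounted root partition, from a live session

    If fsck keeps finding new errors on every pass, checking the drive's SMART health is the next step (sudo smartctl -a /dev/sda, from the smartmontools package), since a dying disk produces exactly this symptom.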


  • Connecting Snow Leopard 10.6.4 to a Linux shared folder using Samba

    - by Vittorio Vittori
    Hi, I'm trying to connect from a Snow Leopard 10.6.4 client to a web server running CentOS 5.5 on which I've shared a folder, without success. On CentOS I started the Samba service and created a Samba user with a password, then tried to connect to the server with smb://10.0.0.7 (the IP of the machine), entering the username and password I had created. The server returns the list of shared folders in the Finder browser, but when I click the folder I want, it returns this error (translated from Italian):

        Connection failed
        There was an error connecting to "smb://10.0.0.7". Please verify the name or the IP of the server, and try again.

    What can I do to solve the connection problem?
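    Seeing the share list but failing on the share itself usually points at the share definition or at SELinux on the CentOS side, so two hedged checks (share name and path are placeholders):

        # smb.conf - does the share map to a real path this user may read?
        [shared]
            path = /srv/share
            valid users = sambauser
            read only = no

        testparm                                   # verify smb.conf parses cleanly
        sudo setsebool -P samba_export_all_rw on   # let Samba serve paths outside its default contexts

    Running smbclient //10.0.0.7/shared -U sambauser on the server itself separates a Samba-side failure from an OS X client quirk.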


  • PostgreSQL 9.2: where is initdb located on Ubuntu?

    - by thanikkal
    I am trying to install Postgres on EC2/EBS. I am following this article and am stuck at the following step:

        sudo su - postgres
        /usr/pgsql-9.0/bin/initdb -D /pgdata

    I can't find the initdb command at the stated location; in fact, I can't find the pgsql* directory at all under /usr. Was this changed for Postgres 9.2, or is there an alternate command that would help me run initdb? Edit: I know the pgsql-9.0 folder name is version-specific, so I was expecting to see something like pgsql-9.2 or similar.
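    The /usr/pgsql-<version> layout belongs to the PGDG packages for RedHat/CentOS, which the article evidently assumes. Ubuntu's Debian packaging puts version-specific binaries under /usr/lib/postgresql instead, so the equivalent command is (version per the question):

        sudo -u postgres /usr/lib/postgresql/9.2/bin/initdb -D /pgdata

    Ubuntu's preferred wrapper does the same job while registering the cluster with the pg_* admin tools:

        sudo pg_createcluster 9.2 main -d /pgdata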


  • How can I tell which config file Apache is using?

    - by Claudiu
    I'm trying to set up virtual hosts on Mac OS X. I've been modifying httpd.conf and restarting the server, but haven't had any luck getting it to work. Furthermore, I notice that it's not serving files from the DocumentRoot mentioned in httpd.conf (/Library/WebServer/Documents) but from a different directory (/usr/local/apache2/htdocs), and I don't see that folder mentioned anywhere in httpd.conf. Also, PHP works even though the "LoadModule php5_module" line is commented out. This makes me think it's using another .conf file. How can I figure out which config is actually being loaded? Update: I just deleted that httpd.conf and Apache behaves the same after a restart, so it definitely wasn't using it!
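    Apache can report which config it was compiled to load. Given that content is coming from /usr/local/apache2/htdocs, there is probably a second, self-built Apache answering, so it's worth asking that binary directly (the binary path is inferred from the htdocs location):

        /usr/local/apache2/bin/httpd -V | grep -e HTTPD_ROOT -e SERVER_CONFIG_FILE
        ps aux | grep httpd              # confirms which httpd binary is actually running

    HTTPD_ROOT joined with SERVER_CONFIG_FILE is the file that instance reads, unless it was started with an explicit -f flag.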


  • Error installing nginx with passenger-install-nginx-module on Ubuntu 11.10 & Rails 3.1.0

    - by user938363
    Here is the error message from installing nginx with passenger-install-nginx-module (run under rvmsudo). nginx is 1.0.6, installed under /opt/nginx (the default); gem install passenger had completed successfully beforehand. Does anyone have an idea about the problem? Thanks.

        /usr/bin/ld: /home/dtt/.rvm/gems/ruby-1.9.2-p290/gems/passenger-3.0.9/ext/nginx/../common/libpassenger_common.a(aggregate.o):
          undefined reference to symbol 'round@@GLIBC_2.2.5'
        /usr/bin/ld: note: 'round@@GLIBC_2.2.5' is defined in DSO /usr/lib/gcc/x86_64-linux-gnu/4.6.1/../../../x86_64-linux-gnu/libm.so
          so try adding it to the linker command line
        /usr/lib/gcc/x86_64-linux-gnu/4.6.1/../../../x86_64-linux-gnu/libm.so: could not read symbols: Invalid operation
        collect2: ld returned 1 exit status
        make[1]: *** [objs/nginx] Error 1
        make[1]: Leaving directory `/tmp/root-passenger-2135/nginx-1.0.6'
        make: *** [build] Error 2
        --------------------------------------------
        It looks like something went wrong
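    The linker note is the clue: Ubuntu 11.10 links with --as-needed by default, and the nginx link line ends up needing libm (for round) without listing it late enough. A hedged workaround is to feed nginx's configure an extra linker flag; the installer's "customize" option accepts additional configure arguments, where one can add:

        --with-ld-opt='-lm'

    --with-ld-opt is a standard nginx configure option; whether this exact Passenger version exposes the extra-arguments prompt is an assumption worth verifying, and simply upgrading to a newer Passenger gem may avoid the issue altogether.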


  • Redirect HTTP requests based on subdomain address without changing accessed URL?

    - by tputkonen
    Let's say I have a domain, www.mydomain.com, and I ordered a new domain, abc.newdomain.com. Both domains are hosted at the same ISP, so currently requests to either address result in the same page being shown. I want to redirect all requests for abc.newdomain.com to the folder /wp, so that when users access abc.newdomain.com they see whatever is inside /wp without seeing the URL change. Questions: 1) How can I achieve this using .htaccess? 2) How can I prevent users from accessing the /wp directory directly (meaning that www.mydomain.com/wp would be blocked)?
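    A hedged mod_rewrite sketch for the shared docroot's .htaccess covering both questions (domain names from the question; the /wp folder is assumed to live inside the same docroot):

        RewriteEngine On

        # 1) serve /wp transparently for the subdomain (internal rewrite, so the URL stays put)
        RewriteCond %{HTTP_HOST} ^abc\.newdomain\.com$ [NC]
        RewriteCond %{REQUEST_URI} !^/wp/
        RewriteRule ^(.*)$ /wp/$1 [L]

        # 2) refuse direct access to /wp via the old domain
        RewriteCond %{HTTP_HOST} ^(www\.)?mydomain\.com$ [NC]
        RewriteRule ^wp(/.*)?$ - [F,L]

    Because rule 1 is an internal rewrite rather than an external redirect, the browser's address bar keeps showing abc.newdomain.com.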

