Search Results

Search found 18756 results on 751 pages for 'generate images'.


  • Compiling LaTeX document makes Google Drive crash

    - by Sander
    I have an issue where compiling a LaTeX document located inside my Google Drive folder will, after a few runs, make the OS X Google Drive application crash. As this is an important document, I want to keep it inside the Google Drive location at all times to ensure a cloud backup, but that is of course not guaranteed if compiling makes Google Drive crash every time. I can't seem to identify what is causing this, and I was hoping someone here might have an idea. We're talking about an 8-page document with 3 images, so nothing crazy big or complex.
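
    One workaround (a hedged suggestion, not from the original question) is to keep only the sources in the synced folder and send the compiler's output elsewhere, so Google Drive never has to sync the rapidly churning auxiliary files; pdflatex supports this directly:

        # Write .aux/.log/.pdf to a build directory outside Google Drive
        pdflatex -output-directory=/tmp/latex-build document.tex

    The -output-directory flag is standard in both TeX Live and MiKTeX; the finished PDF can then be copied back into the Drive folder once per compile instead of triggering dozens of intermediate writes.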

    Read the article

  • How would I put together a site requiring several TB? [closed]

    - by acidzombie24
    Let's say I have a site with unmetered 100 Mbps bandwidth (I assume that's bits?) and the RAM I require. Most plans I see offer HDDs holding 250 GB or 1 TB. But what happens if I compile/generate enough data that I require 10 TB or 25 TB? (I'd likely have two servers, but...) I wouldn't be serving all of that data (well, not to the public), so a CDN wouldn't make sense. What do I do in this scenario? Do I need to get a custom plan from a hosting provider? (If so, how do I find them?) Are there services that let me mount remote drives (that sounds wrong unless it's a CDN, so maybe not)? Are there hosts that deal specifically in unmetered bandwidth and provide lots of disk space? The math says ~1 TB is the most I'll ever need, but if I happen to need more I'd like to know my options.

    Read the article

  • Where can I legally download Windows 7 OEM ISOs?

    - by Iszi
    I have a friend who has gotten a serious virus infection and needs her computer rebuilt. However, the computer did not come with any re-installation disks from the OEM and she doesn't have any good backups. I have a TechNet subscription, but I understand that the OEM product key attached to her computer should not work with the Retail or Volume License images I can get from TechNet. Is there anywhere I can legally download a copy of Windows 7 that will work with an OEM key, directly from Microsoft? I'd rather avoid having to work with the OEM's technical support and possibly have to pay for them to ship a disc that (IMHO) the system should have come with in the first place.

    Read the article

  • SD card mounted as bootable image instead of FAT32

    - by Benny Wong
    I have an SD card that I recently used to take photos on vacation. I have taken a lot of photos with it, and it worked fine in the camera. However, I had forgotten that a few months before this trip I tried to make the SD card a bootable Xubuntu USB drive. So when I plugged the SD card in to copy the photos, the card mounts as the Xubuntu image rather than as the FAT32 volume with the images on it. The files must still be on the card. Any ideas on how to fix this? (I'm using Mac OS X.) Thanks!
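
    Before attempting a fix, it is worth seeing how OS X actually interprets the card's partition table (an inspection-only sketch; disk2 is a placeholder for whatever identifier the listing shows):

        # List every attached disk with its partition scheme and volumes
        diskutil list
        # Show details for the SD card specifically
        diskutil info /dev/disk2

    If the card still carries the Xubuntu image's partition layout, running a photo-recovery tool against the raw device before repartitioning anything is the safer order of operations.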

    Read the article

  • How/where to find all Update (msu) packages for Windows 7 (Enterprise)

    - by Rudi
    Hi, I've been using Wim2Vhd to create native-boot VHD files. (I'm using this to keep several development environments ready; I'm a developer, I know -- I need help ;-) Now the first boot always ends with several minutes of installing all of the Windows updates... I know I could avoid this if I had a complete list of QFE (.msu) packages that I could supply on the Wim2Vhd command line. Where can I find all of the updates so I can download them? I have found this: http://www.megaleecher.net/Downloading_Microsoft_Windows_7_Offline_Updates but I'd like a more official source and an easier way to download them all. I'm a developer, so if I find the information (a web service?) to get all the current packages, I'll build a tool to download them to a single location and generate a list of them for use with Wim2Vhd.

    Read the article

  • Download/update webpages listed in XML sitemap

    - by unor
    I'm searching for a FLOSS tool that downloads all pages (and embedded resources, e.g. images) linked in an XML sitemap (built according to http://www.sitemaps.org/). The tool should "crawl" the sitemap regularly, looking for new and deleted URLs and for changes in the lastmod element, so that whenever a page gets added/deleted/updated the tool applies the change. Some sitemaps list sub-sitemaps in a sitemapindex element; the tool should understand this, load all linked sub-sitemaps, and look for URLs in there. I know there are tools that let me extract all URLs from the sitemap so that I could feed them to wget or similar tools (see for example: Extract Links from a sitemap(xml)), but this wouldn't help in getting notified about updates to pages. Tracking the webpages themselves for updates doesn't work, because "secondary" content on the pages changes daily, while lastmod only gets updated when relevant content changed.
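
    For the core polling logic, a minimal sketch in Python (not an existing tool; the state-file name and URLs are made up) could look like the following. It follows sitemapindex files, compares each URL's lastmod against the previous run, and flags added, changed, and removed pages; fetching embedded resources would still need something like wget --page-requisites on top:

        import json, os, urllib.request
        import xml.etree.ElementTree as ET

        NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"
        STATE_FILE = "lastmod_state.json"  # remembers lastmod per URL

        def fetch(url):
            with urllib.request.urlopen(url) as r:
                return r.read()

        def walk_sitemap(url):
            """Yield (loc, lastmod) pairs, recursing into sub-sitemaps."""
            root = ET.fromstring(fetch(url))
            if root.tag == NS + "sitemapindex":
                for sm in root.iter(NS + "sitemap"):
                    yield from walk_sitemap(sm.find(NS + "loc").text.strip())
            else:  # a plain <urlset>
                for u in root.iter(NS + "url"):
                    mod = u.find(NS + "lastmod")
                    yield (u.find(NS + "loc").text.strip(),
                           mod.text.strip() if mod is not None else None)

        def sync(sitemap_url):
            state = json.load(open(STATE_FILE)) if os.path.exists(STATE_FILE) else {}
            seen = {}
            for loc, lastmod in walk_sitemap(sitemap_url):
                seen[loc] = lastmod
                if state.get(loc) != lastmod:    # new page, or lastmod changed
                    print("fetching", loc)
                    fetch(loc)                   # save/post-process as needed
            for loc in set(state) - set(seen):   # dropped from the sitemap
                print("removed", loc)
            with open(STATE_FILE, "w") as f:
                json.dump(seen, f)

        sync("http://example.com/sitemap.xml")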

    Read the article

  • Recommendations for VMWare web server environment with load balancer.

    - by Ben
    We run IIS websites on a VMware production server that pull image and video content from a separate IIS instance on another server (the media server). The media calls (images and video) are straight http:// calls, not a streaming application. During peak traffic periods we clone the production server five times and have a load balancer distribute traffic to all five production servers; the media server does not get scaled up, and we noticed that processing and resources on it get very taxed during these periods. Would it make sense to run the media server's IIS instance locally on the production server and have it cloned with the production servers, with a rule on the load balancer handling these media calls from the website? Or would it be better to allocate more resources (memory and CPUs) to the media server VM and not clone it with the production servers? Recommendations are sincerely appreciated.

    Read the article

  • Installing Sharepoint 2007 on Windows Server 2008 R2

    - by ronischuetz
    We are using in our lab a clean installation of Windows Server 2008 R2 running as a Hyper-V instance. Today we wanted to do a clean installation of SharePoint 2007 with SP1 on this machine and found that we are not able to install it. The setup comes up with an error which is described here: http://support.microsoft.com/kb/962935 but that is not our scenario. A screenshot of the message can be found here: http://www.ronischuetz.com/images/SP_2007SP1_Inst_Error.png @Microsoft: my personal point of view is that it cannot be that we need to first install 2008, then SharePoint 2007 with SP1, then SP2, and only then upgrade to 2008 R2. Nobody is going to be happy with that, and I hope we find a fast way for customers to install SharePoint directly on 2008 R2. Anyway, thanks in advance for any further information on how to proceed with this issue. The issue is also listed here: http://social.msdn.microsoft.com/Forums/en-US/sharepointadmin/thread/91a6be50-9009-43c8-a37c-66cfb83d738f

    Read the article

  • Backup Client/Server Software that Syncs only the Delta?

    - by Urda
    I have a co-located server and a desktop computer. I push small things and large amounts of small files (like my iTunes library) into a JungleDisk cloud. If a few files change there, no big deal; the file gets re-uploaded. For larger files, though, JungleDisk backup isn't helpful: things like movies and VMware images change a lot, and I want them backed up, just not to JungleDisk, since that would cost me even more money. I am looking for a product, closed or open source (preferably open source), that will sync only the change, or delta, to my personal server on a schedule. That way I can keep a copy of my larger items without paying JungleDisk a lot more, since they run to many gigabytes. Right now these few items are backed up over FTP and take forever. Both the client and the server are Windows environments.
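
    rsync is the canonical delta-transfer tool here (a suggestion, not something from the original question; on Windows it is typically run through wrappers such as cwRsync or DeltaCopy). A sketch of the scheduled job, with all hosts and paths as placeholders:

        # Send only the changed blocks of large files; --partial lets an
        # interrupted multi-gigabyte transfer resume instead of restarting
        rsync -avz --partial --progress /backups/vm-images/ user@myserver:/backups/vm-images/

    Because the receiver keeps the previous copy, a VMware image that changed in a few places costs only those blocks plus checksum traffic rather than a full re-upload over FTP.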

    Read the article

  • Current alternative to the old CHECKSUM program

    - by faulty
    I'm looking for an application that does an md5/sha hash check on specific files/folders periodically and stores an index file per folder for future verification. I remember such applications existing in DOS days, to detect files infected by viruses. The main purpose here is to detect corrupted copies of backups, as I understand consumer-grade hardware is not 100% error-free when backing up or transferring files from device to device. The hashes could also be used to generate a list of changed files for backup. Most of the software I can find only hashes on demand, manually. EDIT: Windows-based application, preferably a shell extension so I can right-click on a folder and checksum/verify all files in it. Even better if it can integrate with a backup/sync program like BeyondCopy.
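
    The core of such a tool is small; a minimal sketch in Python (illustrative only; the index filename is made up, and the right-click integration would be a separate shell-extension layer on top):

        import hashlib, json, os, sys

        INDEX = ".checksums.json"  # one index file per folder

        def hash_file(path, bufsize=1 << 20):
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(bufsize), b""):
                    h.update(chunk)
            return h.hexdigest()

        def build(folder):
            """Write a hash index for every regular file in the folder."""
            index = {n: hash_file(os.path.join(folder, n))
                     for n in sorted(os.listdir(folder))
                     if n != INDEX and os.path.isfile(os.path.join(folder, n))}
            with open(os.path.join(folder, INDEX), "w") as f:
                json.dump(index, f, indent=1)

        def verify(folder):
            """Report files that vanished or no longer match their hash."""
            with open(os.path.join(folder, INDEX)) as f:
                index = json.load(f)
            for name, digest in index.items():
                path = os.path.join(folder, name)
                if not os.path.exists(path):
                    print("MISSING", name)
                elif hash_file(path) != digest:
                    print("CHANGED", name)  # corruption, or a real edit

        if __name__ == "__main__":
            cmd, folder = sys.argv[1], sys.argv[2]  # e.g. build C:\backup
            build(folder) if cmd == "build" else verify(folder)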

    Read the article

  • Using WebSphere CloudBurst with PowerVM to AIX virtualization over a cloud

    - by ADD Geek
    Hi there. We are studying virtualization options to reduce our datacenter cost, and this research was assigned to me. We looked into alternatives and have almost reached the conclusion that PowerVM is the only option for virtualizing pSeries servers. We found no explicit mention of cloud support in any document, though there was mention of CloudBurst. From the videos we watched and the documents we read, CloudBurst seems oriented towards application servers (WebSphere software), but our environment does not rely only on WebSphere: we have some banking applications, Oracle databases, and MQ/Broker. The questions are: 1) can we virtualize the existing applications (all running AIX) on a cloud running on top of some of the existing servers, given that we do the sizing properly? 2) is PowerVM meant to run on top of CloudBurst? 3) if the above is applicable, is this some sort of HA solution (since a VM would run on top of multiple physical boxes, while the same physical box runs multiple live images)? Thanks for your help.

    Read the article

  • Change location of RSS Dynamic Desktops

    - by Andy
    I'm currently using CCleaner to take care of my computer, but I also have a dynamic desktop background provided by Bing (I'm running Windows 7 HP), and unfortunately the two conflict. Whenever I 'clean' my computer using CCleaner it messes up my desktop backgrounds, because they are stored in the Temporary Internet Files directory, and for some reason I can't navigate as far as the 'Enclosures' subdirectory to tell CCleaner to exclude it (I can see it in Windows Explorer but not in CCleaner's directory browser). Therefore I am looking for an alternative solution and wondered if I could change the directory to which the RSS feed's images are downloaded. If anybody knows how to do this, I would be grateful if you could share; equally, I would be grateful if anyone knows another way around CCleaner. Please note that I don't want to stop cleaning my temporary internet files altogether; I just don't want the downloaded wallpapers to be deleted... Thanks in advance!

    Read the article

  • Is it possible to upgrade Server 2008 to RDP version 6.1?

    - by Dmitri K.
    Problem: Server 2008 has RDP 6.0, Server 2008 R2 has RDP 6.1. My application works with RDP ActiveX control version 6.1 but not with version 6.0, where it generates an access violation in mstscax.dll. Is there any way to upgrade Server 2008 (not 2008 R2) to RDP 6.1? More details: my application is written in Delphi, and the ActiveX control was imported from mstscax.dll version 6.1.7601.17514. The target system where the application doesn't work has mstscax.dll version 6.0.6002.18356. When the application tries to connect, it generates an access violation exception in mstscax.dll. As I see it, this is a compatibility problem. I appreciate any ideas on how to resolve it.

    Read the article

  • How to configure squid to retrieve (and cache) my static resources directly?

    - by fabien7474
    I have an Apache/Tomcat/Spring tc Server stack running on a CentOS EC2 VM. I would like to install Squid on the same machine as a proxy that retrieves (directly, i.e. without forwarding the request to Apache/Tomcat) and caches static content ONLY, identified by the URI prefixes /images, /css, and /js. Other URIs should be forwarded to the normal web server and not cached. Since I am a newbie, I couldn't work out from the Squid documentation how to configure this desired behavior (or whether it is even possible). Could you please tell me how I should configure Squid for this purpose? Thank you.
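
    A sketch of what the squid.conf could look like (directive names as in Squid 2.6/3.x; treat this as a starting point to verify against your version's documentation). Note that Squid still fetches each object from the backend once; "directly" here effectively means subsequent hits are served from Squid's cache without touching Tomcat:

        # Squid listens as a reverse proxy (accelerator) in front of the app
        http_port 80 accel defaultsite=www.example.com
        cache_peer 127.0.0.1 parent 8080 0 no-query originserver name=backend

        # Cache only the static URI prefixes, pass everything else through
        acl static urlpath_regex ^/(images|css|js)/
        cache allow static
        cache deny all
        cache_peer_access backend allow all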

    Read the article

  • Why do we need Hash by key? [migrated]

    - by Royi Namir
    (I'm just trying to find out what I am missing...) Assume John has a cleartext message. He can create a regular hash (like MD5 or SHA-256) and then encrypt the message. John can now send Paul the message plus its (cleartext) hash, and Paul can tell whether the message was altered (decrypt, then compare hashes). Even if an attacker can change the encrypted data (without decrypting it), when Paul opens the message and recalculates the hash, it won't match the one John sent him. So why do we need a keyed hash?
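
    For contrast, this is what a keyed hash (HMAC) buys when the hash itself travels somewhere an attacker can rewrite it along with the message; a short Python illustration with made-up keys and messages:

        import hashlib, hmac

        key = b"secret shared by John and Paul"
        message = b"pay Paul 100"

        # Unkeyed hash: anyone who alters the message can recompute this
        plain = hashlib.sha256(message).hexdigest()

        # Keyed hash: recomputing a matching tag requires the secret key
        tag = hmac.new(key, message, hashlib.sha256).hexdigest()

        # Paul verifies with a constant-time comparison
        ok = hmac.compare_digest(
            tag, hmac.new(key, message, hashlib.sha256).hexdigest())
        print(ok)  # True only if neither message nor tag was tampered with

    In the scheme described above, the plaintext hash is only trustworthy as long as the attacker cannot substitute both the message and the hash together; an HMAC stays verifiable even then.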

    Read the article

  • eAccelerator ignores my new settings?

    - by Mwebe Nkrumah
    Hi, I'm using eAccelerator 0.9.5.2, CentOS 5.3, and lighttpd 1.4.22. Because eAccelerator caches in RAM it needs too much RAM, so I'm trying to cache to the hard disk instead (my website does not generate money, so I'm looking for a cheaper solution). I modified /etc/php.d/eaccelerator.ini as below:

        extension="eaccelerator.so"
        eaccelerator.shm_size="12"
        eaccelerator.cache_dir="/var/cache/eaccelerator"
        eaccelerator.enable="1"
        eaccelerator.optimizer="1"
        eaccelerator.check_mtime="0"
        eaccelerator.debug="0"
        eaccelerator.filter=""
        eaccelerator.shm_max="20M"
        eaccelerator.shm_ttl="1800"
        eaccelerator.shm_prune_period="0"
        eaccelerator.shm_only="0"
        eaccelerator.compress="0"
        eaccelerator.compress_level="9"
        eaccelerator.keys="disk_only"
        eaccelerator.sessions="disk_only"
        eaccelerator.content="disk_only"

    The output of phpinfo() is shown here: http://img175.imageshack.us/img175/1104/screenshggot.png But after switching to "disk_only" and restarting lighttpd and php-cgi with killall, RAM usage is still high for php-cgi. Rebooting the server doesn't help either. The data is created in the cache directory, but RAM usage stays high.

    Read the article

  • Remove NVIDIA Update and UpdatusUser

    - by Ove
    Does anyone know how to remove the UpdatusUser user created by installing the NVIDIA driver? Or better yet, how to install the drivers in such a way that UpdatusUser won't even be created? I have tried choosing custom installation and unchecking NVIDIA Update, but that does not work. Also, I have tried uninstalling NVIDIA update from Programs and features in Control Panel, but that won't remove the UpdatusUser account. I have also created a post in the NVIDIA forums that contains more images and details: here (please look at the screenshots there), but I didn't get a solution to my problem. I am using Windows 7 x64 and the latest NVIDIA 295.73 driver.
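
    For what it's worth, the account itself can be removed like any local user (a hedged workaround, not a confirmed fix, since the NVIDIA update service may simply recreate it on the next driver install):

        :: From an elevated command prompt
        net user UpdatusUser /delete

    which is why preventing the "NVIDIA Update" component from installing in the first place, as attempted above, remains the cleaner goal.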

    Read the article

  • How to exclude one subfolder from my .htaccess RewriteRule rules?

    - by tomaszs
    I have a .htaccess in the root of my website that looks like this:

        RewriteEngine On
        RewriteCond %{HTTP_HOST} !^www\.mydomain\.pl [NC]
        RewriteCond %{HTTP_HOST} ^(?:www\.)?([a-z0-9_-]+)\.mydomain\.pl [NC]
        RewriteRule ^/?$ /index.php?run=places/%1 [L,QSA]
        RewriteCond %{REQUEST_URI} !^/index.php$
        RewriteCond %{REQUEST_URI} !^/images/
        RewriteCond %{REQUEST_URI} !^/upload/
        RewriteCond %{REQUEST_URI} !^/javascript/
        RewriteRule ^(.*)$ /index.php?runit=$1 [L,QSA]

    I've installed a custom guest book in the folder guests, and now I would like to disable the rules above for this one specific folder, so that when I type mydomain.pl/guests I go to the actual guests folder. I understand that I need to somehow disable the rules above for the guests subfolder, but how do I do this?
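
    One standard pattern (a sketch, not from the original thread) is to short-circuit the rewrite engine for that directory before the catch-all rule runs:

        # Leave /guests requests untouched: "-" means no substitution and
        # [L] stops rule processing (assumes the guest book sits at /guests)
        RewriteRule ^guests(/.*)?$ - [L]

    Placed right after RewriteEngine On, this lets requests for mydomain.pl/guests fall through to the real folder while everything else still hits the rules above.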

    Read the article

  • How to get Google Desktop to index searchable pdf's

    - by user4941
    I've got a scanner that converts documents to PDFs, and the PDFs it produces are searchable. However, when Google Desktop indexes such a file it doesn't appear to index any of its contents (although it is indexing the content of other PDFs on the computer). I believe Google Desktop only indexes PDFs that are searchable and don't contain images. Has anyone found a good way around this problem? I'm trying to get my household and office paperless, and I'd like to just scan in documents and rely on Google Desktop to find things.

    Read the article

  • Transferring domains when registered owner's email address is incorrect

    - by www.jacob-
    Years ago I registered some domains using a now expired university email address. The other contact details for the registered owner (postal address and phone number) are still correct. In order to change/update the email address, the registrar wants to charge £20 a domain. I would like to transfer the domains away from the current registrar. I can unlock the domains and generate an auth code. However, I cannot authorise the transfer by email as any emails sent to the registered owner's address will bounce. This seems to rule out most registrars I have tried. Are there any ways to transfer these domains without paying the £20 fee to update the registered owner's details?

    Read the article

  • Send encrypted mail using GPG by command-line?

    - by Mohammad AL-Rawabdeh
    A few days ago I asked how I can secure email, and many people advised me to use a PGP tool; I read about it and now use it. Now I want to write a batch file to send encrypted email with attachments. I know how to generate a key, exchange keys with the other side, and encrypt email with PGP, but so far I don't know how to integrate the PGP tool with my mail and send the encrypted message. In other words: how can I send email encrypted with a PGP tool to the other side from the command line (a batch file)?
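
    The flow is two steps: encrypt with the gpg command line, then hand the ciphertext to any command-line mailer. A sketch in Python standing in for the batch file (every address, host, password, and filename below is a placeholder; the gpg flags are standard GnuPG):

        import smtplib, subprocess
        from email.message import EmailMessage

        # 1. Encrypt for the recipient's public key (already imported
        #    into the local keyring)
        subprocess.run(["gpg", "--batch", "--yes", "--armor",
                        "--recipient", "paul@example.com",
                        "--output", "report.txt.asc",
                        "--encrypt", "report.txt"], check=True)

        # 2. Attach the ciphertext to an ordinary email and send it
        msg = EmailMessage()
        msg["From"] = "me@example.com"
        msg["To"] = "paul@example.com"
        msg["Subject"] = "Encrypted report"
        msg.set_content("The encrypted report is attached.")
        with open("report.txt.asc", "rb") as f:
            msg.add_attachment(f.read(), maintype="application",
                               subtype="pgp-encrypted",
                               filename="report.txt.asc")

        with smtplib.SMTP("smtp.example.com", 587) as s:
            s.starttls()
            s.login("me@example.com", "app-password")
            s.send_message(msg)

    In a plain .bat file the same two steps become one gpg invocation followed by a call to whatever command-line mailer is available on the system.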

    Read the article

  • CS3, Illustrator - Where Do X/Y Coordinates Measure from?

    - by nicorellius
    I have a project where there are several PDF files. I'm using Illustrator to make these. It seems the point/line of origin is inconsistent from image to image (file to file). Where is the point of origin, by default, in CS3 Illustrator? It would be nice if, while I was positioning images, I could just say, "OK, x coordinate is 5.5 inches in this document, so it is 5.5 in that one." But it seems this is not the case. Anyone know how Illustrator sets these parameters?

    Read the article

  • Multiple clients connecting to master MySQL over SSL

    - by Bastien974
    I successfully configured MySQL replication over SSL between two servers across the internet. Now I want a second server, in the same location as the replication slave, to open a connection to the master DB over SSL. I used the same commands found here http://dev.mysql.com/doc/refman/5.1/en/secure-create-certs.html to generate a new set of client-cert.pem and client-key.pem files against the same master ca-cert/key.pem, and I also used a different Common Name. When I try to initiate a connection from this new server to the master DB, it fails:

        mysql -hmasterdb -utestssl -p --ssl-ca=/var/lib/mysql/newcerts/ca-cert.pem --ssl-cert=/var/lib/mysql/newcerts/client-cert.pem --ssl-key=/var/lib/mysql/newcerts/client-key.pem
        ERROR 2026 (HY000): SSL connection error

    It works without SSL.
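
    Two quick checks that often localize ERROR 2026 (hedged suggestions; the paths are copied from the failing command above):

        # Does the new client cert actually verify against the CA the
        # master was built with?
        openssl verify -CAfile /var/lib/mysql/newcerts/ca-cert.pem /var/lib/mysql/newcerts/client-cert.pem
        # Compare subject and issuer; self-signed setups commonly fail
        # when a certificate's Common Name duplicates the CA's or the
        # server certificate's
        openssl x509 -in /var/lib/mysql/newcerts/client-cert.pem -noout -subject -issuer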

    Read the article

  • Kickstart installation: Unable to read package metadata.

    - by yacov
    I'm trying to install CentOS with Kickstart, using HTTP as the installation source. The Kickstart server and the server being installed both run as VMs on the same machine. After the anaconda system installer starts, it fails with the message from the title: "Unable to read package metadata." I tried installing two different versions of CentOS (5.5 and 5.2), and both pass the CD-ROM media test that a manual installation offers. The only errors on the Kickstart server side are some entries in the httpd log that I consider irrelevant:

        [Sat Mar 12 23:25:19 2011] [error] [client 192.168.1.112] File does not exist: /tftpboot/linux-install/platforms/CentOS5.5/images/product.img
        [Sat Mar 12 23:25:19 2011] [error] [client 192.168.1.112] File does not exist: /tftpboot/linux-install/platforms/CentOS5.5/disc1

    I have searched the internet for days and haven't found any solution... Does anyone have any idea?
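
    "Unable to read package metadata" is the classic symptom of anaconda reaching the install tree but not finding its repodata/ directory. A hedged check, using the paths visible in the httpd log above:

        # The HTTP tree must mirror the full media layout, in particular:
        ls /tftpboot/linux-install/platforms/CentOS5.5/repodata/
        ls /tftpboot/linux-install/platforms/CentOS5.5/images/
        # If repodata/ is missing, copy the complete DVD contents into the
        # tree; regenerating it with plain createrepo omits the comps
        # group data that the CentOS 5 installer expects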

    Read the article

  • Installing Bugzilla on Ubuntu 9.04 and Plesk

    - by makeflo
    Hey guys, I'm trying to install the latest Bugzilla version on my Ubuntu server (I want to use a subdomain like bugs.domain.com). I already installed all the necessary Perl modules, and check_modules.pl doesn't show any errors, but when I run the testserver.pl script I get the following:

        TEST-OK Webserver is running under group id in $webservergroup
        TEST-FAILED Fetch of images/padlock.png failed

    I'm also not able to visit ANY file within the bugzilla folder from the browser; I always get a 404 error. The bugzilla folder and all the files it contains are owned by apache. I tried entering the Apache configuration from the installation guide both in the domain's http.include file and in the subdomain's vhosts.conf file. I don't know what to do... Playing with Plesk's suexecgroup doesn't bring any solution... I hope you can help me! Thanks in advance!
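
    For comparison, the Apache directives the Bugzilla installation guide calls for look roughly like this (the path is a placeholder for wherever Plesk puts the subdomain's document root; under Plesk they would go in the vhost.conf file that Plesk includes for the subdomain):

        <Directory /var/www/vhosts/domain.com/subdomains/bugs/httpdocs>
            AddHandler cgi-script .cgi
            Options +ExecCGI +FollowSymLinks
            DirectoryIndex index.cgi index.html
            AllowOverride Limit FileInfo Indexes Options
        </Directory>

    A 404 on every file, though, points at the vhost's DocumentRoot not matching where the Bugzilla files actually live, which is worth confirming before tuning CGI settings.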

    Read the article
