Search Results

Search found 7902 results on 317 pages for 'structure'.


  • Where does Picasa store albums?

    - by Dan
    For people searching, the question might also be phrased: how do I restore Picasa albums from backup? When I reinstalled my computer and restored my photos from backup, some of my albums showed up, but many didn't. I've found the following info: Picasa on Windows stores (stored?) album info in these places:

        Vista: C:\Users\<myaccount>\AppData\Local\Google\Picasa2Albums\
        XP: C:\Documents and Settings\<myaccount>\Local Settings\Application Data\google\Picasa2Albums\

    I restored that folder and was still missing many of my albums. That folder also contained a folder of backups, but the most recent one was from a long time ago, and I've created albums since then. According to https://support.google.com/picasa/bin/picasa.google.com/support/bin/static.py?hl=en&page=release_notes.cs, since the Dec 8, 2011 build, Picasa saves album info in .ini file(s). This probably explains the albums that I do see. http://katelharrison.blogspot.com/2012/01/how-to-restore-picasa-albums-mac.html has some great info on restoring albums on Macs, but the folder structure seems to be different there than on Windows.

    Read the article

  • htaccess Redirect / RedirectMatch with URLs that contain Special / Encoded Characters

    - by dSquared
    I'm currently in the process of applying a variety of 301 redirects in an .htaccess file for a website that recently changed its structure. Everything is working as expected, except for URLs that contain special characters; for these I get 404 errors. For example, the following directives, which contain a registered trademark symbol (®), bring up 404 pages:

        RedirectMatch 301 ^/directory/link-with®-special-character(/)?$ somelink.com
        RedirectMatch 301 ^/directory/link-with%c2%ae-special-character(/)?$ somelink.com

    I've also tried using Redirect and RewriteRule, and surrounding the URLs with double quotes, and nothing seems to work. Does anyone know what might be happening, or the proper way to handle these types of directives? Any help is greatly appreciated.
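
    One angle worth trying, offered as a hedged sketch rather than a confirmed fix: mod_alias matches RedirectMatch patterns against the already percent-decoded URL path, and editors often mangle a literal ® when saving .htaccess, so spelling out the character's two UTF-8 bytes with PCRE escapes sidesteps both problems. Untested, and somelink.com stands in for the real target:

        RedirectMatch 301 "^/directory/link-with\xc2\xae-special-character(/)?$" http://somelink.com/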

    Read the article

  • Create a playlist in iTunes based on a hard drive folder

    - by Electrons_Ahoy
    I'd like to be able to base playlists in iTunes on folders on my hard drive. For example, say I have this directory structure:

        C:\MP3s\Doctor Who Music
        C:\MP3s\Star Wars Music

    Importing all those MP3s into iTunes is really simple: at its most basic, you can just drag the MP3 folder into the iTunes window and it does the rest. But, having done that, what I'd like to be able to do is point iTunes at each of those directories and have it turn them into their own playlists, so I end up with a Doctor Who Music and a Star Wars Music playlist based on the MP3s' locations on the hard drive. Does iTunes have a way to do this, or is there a way to trick it into doing so with some other program? (I'm on Windows, but I'm sure Mac users would also appreciate answers to this as well.)
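
    One workaround, sketched here on the assumption that a bash shell is available (Git Bash or Cygwin on Windows, Terminal on a Mac): generate one .m3u playlist file per top-level folder, then import each into iTunes (File > Library > Import Playlist in recent versions), which creates a playlist named after the file:

        # write "Doctor Who Music.m3u", "Star Wars Music.m3u", etc. next to the folders
        cd /c/MP3s
        for dir in */; do
            find "$dir" -type f -name '*.mp3' | sort > "${dir%/}.m3u"
        done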

    Read the article

  • Mapping drives on MacOSX (Leopard/Snow Leopard) permanently inside LAN

    - by Shyam
    Hi, how do I "map" shared folders on a Mac permanently? By "map" I do not mean "connect", but permanently adding the share to the system so it still exists after a reboot. Since workstations tend to shut down, I also wonder about the symptoms and cures in case that happens. In Linux this can be done using the fstab file, but I noticed that volumes are mounted in a different structure than in Linux. I need this to back up some workstations, running a recursive job over a single directory that should contain the shared folders. I use Terminal to access the main system, so preferably the solution would work from a bash shell rather than the GUI. I can access all folders in Finder. Thanks!
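
    One approach that survives reboots, sketched under the assumption of an SMB share (Leopard and later ship autofs, so shares can be mounted on demand from config files alone; the server, share, and credentials below are placeholders):

        # register a new autofs map in /etc/auto_master, then describe the share in it
        echo "/Volumes/backups    auto_smb" | sudo tee -a /etc/auto_master
        echo 'share    -fstype=smbfs    ://user:password@server/share' | sudo tee /etc/auto_smb
        sudo automount -vc    # reload the maps
        # the share then appears at /Volumes/backups/share whenever it is accessed,
        # including from a bash backup job after a reboot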

    Read the article

  • why use branches in svn?

    - by ajsie
    I know that you can organize your files according to this structure in svn: trunk, branches, tags. You copy the trunk to a folder in branches if you want a separate development line, and later on you merge this branch back to trunk. But I wonder why my group and I should do this. Why should one copy the trunk to a branch and work with this copy just to merge it back to the trunk, while meanwhile the branch is frequently updated to stay in sync with the trunk? Why not just work on the trunk? What are the benefits of creating a branch? It would be great if someone could shed some light on this topic. A workflow example follows below. Thanks in advance.
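
    For concreteness, the usual workflow looks like the sketch below (svn 1.5+ merge tracking and svn 1.6's ^/ shorthand assumed; repository URL and names are placeholders). The point of the branch is isolation: trunk stays releasable while risky or long-running work proceeds on the copy, and the two lines can take commits without stepping on each other:

        # branching is a cheap, server-side copy
        svn copy ^/trunk ^/branches/my-feature -m "start work on my-feature"
        svn checkout http://svn.example.com/repo/branches/my-feature
        # ...commit to the branch as usual; periodically pull trunk changes in:
        svn merge ^/trunk
        # when the feature is finished, fold it back from a trunk working copy:
        svn merge --reintegrate ^/branches/my-feature
        svn commit -m "merge my-feature back into trunk"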

    Read the article

  • No input file specified on nginx and php-cgi

    - by Sandeep Bansal
    I'm trying to run nginx and php-cgi on my Windows PC. I've got to the point where I want to move the html directory back two directories so I can create some sort of structure. The only problem I have now is that PHP doesn't pick up any .php file. I have tried loading a static HTML file (localhost/test.html) and it works fine, but localhost/info.php doesn't work at all. Can anyone give me some guidance on this? The relevant part of the server block is below:

        server {
            listen       80;
            server_name  localhost;
            root         ../../www;
            index        index.html index.htm index.php;

            #charset koi8-r;
            #access_log  logs/host.access.log  main;

            location ~ \.php$ {
                fastcgi_pass   127.0.0.1:9123;
                fastcgi_index  index.php;
                fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
                include        fastcgi_params;
            }

    Thanks
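
    For what it's worth, "No input file specified" is php-cgi reporting that it could not open the file passed in SCRIPT_FILENAME, and a relative root (../../www) is a common cause, since $document_root then expands to a path php-cgi may not be able to resolve. A sketch of the same block with an absolute root (C:/www is a placeholder):

        server {
            listen       80;
            server_name  localhost;
            root         C:/www;    # absolute path, forward slashes on Windows
            index        index.html index.htm index.php;

            location ~ \.php$ {
                fastcgi_pass   127.0.0.1:9123;
                fastcgi_index  index.php;
                fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
                include        fastcgi_params;
            }
        }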

    Read the article

  • Is there a way to create a copy-on-write copy of a directory?

    - by BCS
    I'm thinking of a situation where I would have something that creates a copy of a directory, tweaks a few files, and then does some processing on the result. This would be done fairly often, maybe a few dozen times a day. (The exact use case is testing patch submissions: dupe the code, patch it, build/test/report/etc.) What I'm looking for could be done by creating a new directory structure and populating it with hard links from the original. However, this only works if all the tools you use delete and recreate files rather than edit them in place. Is there a way to have the file system do copy-on-write for a file? Note: I'm aware that many FSs use COW at a block level (all updates are done via writes to new blocks), but this is not what I want.
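
    If switching (or loopback-mounting) a filesystem is an option, this exists today; a sketch, assuming the tree lives on a filesystem with snapshot or reflink support (paths are placeholders):

        # on Btrfs, snapshot a subvolume: instant, and every file is copy-on-write
        btrfs subvolume snapshot /srv/code /srv/code-patch-test
        # on any reflink-capable filesystem (Btrfs, OCFS2), cp can do it file by file:
        cp -a --reflink=always /srv/code /srv/code-patch-test

    In-place edits by tools then only allocate new blocks for the changed parts, which is exactly the piece the hard-link scheme is missing.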

    Read the article

  • Apply SharePoint template to existing site?

    - by johnnyb10
    I have several similar SharePoint sites (running on WSS 3) and I have saved one of the sites as a template. I now want to make a different site (which already exists) have the same structure as this site--the same lists, document libraries, views, etc. I know I can delete the existing site and then recreate it based on this template, but is there a way to apply this template to my existing site, so that it gets rid of its existing lists, etc., and replaces them with the ones from the template? I don't have any content in the site, and I don't want to keep any of the existing structures, so I don't care if anything gets swept away. I may need to do this with a bunch of sites in the future, so being able to apply the template rather than recreating from scratch might be very helpful.

    Read the article

  • Google Mail: Import labels when switching from POP3 to IMAP

    - by toobb
    My situation is the following: I have a Google Mail email address, and I have been using this address in Thunderbird, fetching the emails with POP3 (the emails also remained on the server and were "archived"). In Thunderbird I organized my emails in folders. Now I want to switch to IMAP (with the same email address), but I want to keep the folder structure I created in Thunderbird. I could create a second account in Thunderbird that uses IMAP, and then move my folders from the old account to the new one. But the problem is that Google Mail probably does not recognize that it already has these moved emails in "All Mail"; I will probably end up with two copies of every email. Does someone have a good idea how to deal with that problem? Thank you!

    Read the article

  • Replace Whole Site by FTP

    - by Sam Machin
    Hi there, I've got a set of tools which periodically (about once a day) generate a complete set of static HTML pages for a site, with the associated folder structure etc. I then need to put those files onto the production server; my problem is that the server runs IIS (6, I think) and I only have regular FTP access. I need a way to automate the process of publishing the new site, and it needs to be a total replacement of the files each time it's published, e.g. delete the whole folder and its contents, then put the new ones up. My source server is an Ubuntu machine and I've got total control at that end. I have tried using CurlFtpFS, but it seems to be too slow for what I'm trying to do and locks up. Rgds, Sam
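
    One route from an Ubuntu box over plain FTP is lftp's mirror command; a sketch, with host, credentials, and paths as placeholders (--delete removes remote files that no longer exist locally, which gives the "total replacement" effect without a slow full delete-and-reupload):

        lftp -u username,password ftp.example.com -e "
            mirror --reverse --delete --verbose /var/site-output/ /httpdocs/;
            quit
        "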

    Read the article

  • execute a command in all subdirectories bash

    - by Luigi R. Viggiano
    I have a directory structure composed of iTunes/Music/${author}/${album}/${song}.mp3. I implemented a script to strip my MP3s' bitrate to 128 kbps using lame (which works on a single file at a time). My script, 'normalize_mp3.sh', looks like this:

        #!/bin/bash
        SAVEIFS=$IFS
        IFS=$(echo -en "\n\b")
        for f in *.mp3
        do
            lame --cbr $f __out.mp3
            mv __out.mp3 $f
        done
        IFS=$SAVEIFS

    This works fine if I go folder by folder and execute the command. But I'd like to have a "global" command, like in 4DOS, so I can run:

        $ cd iTunes/Music
        $ global normalize_mp3.sh

    and the global command would traverse all subdirs and execute normalize_mp3.sh to strip all my MP3s in all subfolders. Does anyone know if there is a Unix equivalent to the 4DOS global command? I tried to play with find -exec but I just managed to get a headache.
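
    The closest stock-Unix equivalent is find itself; a sketch that folds the per-folder loop into one command (the -b 128 flag makes the intended 128 kbps explicit, an assumption based on the description above):

        cd iTunes/Music
        find . -type f -name '*.mp3' -exec sh -c '
            for f in "$@"; do
                # re-encode in place via a temp file, as the original script does
                lame --cbr -b 128 "$f" "$f.tmp.mp3" && mv "$f.tmp.mp3" "$f"
            done
        ' sh {} +

    Quoting "$f" also removes the need for the IFS juggling in the original script.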

    Read the article

  • Cron Permission Denied

    - by worldthreat
    Good day, I have a bash script in my home directory that works properly from the command line (the file structure is the default Media Temple DV, noted because of certain permission issues), but I receive this error from cron:

        "/home/myFile.sh: line 2: /var/www/vhosts/domain.com/subdomains/techspatch/installation.sql: Permission denied"

    NOTICE: it's just line 2... it writes to the local server just fine. Below is the bash file:

        #!/bin/bash
        mysqldump -uUSER -pPASSWORD -hHOST dbName > /var/www/vhosts/domain.com/subdomains/techspatch/installation.sql
        mysql -uadmin -pPASSWORD -hlocalhost dbName < /var/www/vhosts/domain.com/subdomains/techspatch/installation.sql

    I can't chmod from bash (lol, yeah, I tried). Writing the file there and setting the permissions before the transfer is useless. I have googled the heck out of this situation and this one still seems unique... any insight is appreciated.
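
    A hedged first step rather than a fix: confirm which identity and environment cron actually runs the job under, since the redirection on line 2 is performed by that user's shell, not by mysqldump. A throwaway crontab entry such as:

        * * * * * /usr/bin/id > /tmp/cron-id.out 2>&1; touch /var/www/vhosts/domain.com/subdomains/techspatch/cron-write-test 2>> /tmp/cron-id.out

    If the touch fails the same way, the problem is plain directory permissions for cron's user on that vhost path, not anything MySQL-related.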

    Read the article

  • IIS6 host multiple websites under same sub-domain (or something similar)

    - by user28502
    I'm trying to figure out a structure for a hosted application that I'm working on. I've got a domain, let's call it app.company.com (a sub-domain of company.com, of course), that is set up to redirect to my IIS 6 web server. I would like to set up one website in IIS for each client that will use this application, and have the URL scheme be like this:

        app.company.com/clientA -- would point to the ClientA website in IIS
        app.company.com/clientB -- would point to the ClientB website in IIS

    Do you guys have any pointers or best practices for my scenario?

    Read the article

  • How to convert series of MP3 to a M4B in a batch

    - by Artem Tikhomirov
    Hello. I have a batch of MP3-based audiobooks. Some of them are divided into files according to the book's own structure, chapters and so on; some were just divided into parts of equal length. I've bought an iPhone, and I want to convert them all to the M4B format. How can I convert them in a batch? I mean: how can I set up a process once for each book and then, after a couple of weeks, receive a completely converted library? The only program I've found capable of such a conversion is Audiobook Builder for the Mac, but it is pretty slow and does not support batching at all. Solutions for any platform, please.
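
    A batch-friendly sketch using ffmpeg (it assumes one folder per book, an ffmpeg build with AAC support, and that filename order matches playback order; the 64k bitrate is a placeholder):

        # for each book folder, join its MP3s and encode to AAC in an .m4b container
        for book in */; do
            ( cd "$book"
              ls *.mp3 | sed "s/.*/file '&'/" > list.txt
              ffmpeg -f concat -safe 0 -i list.txt -vn -c:a aac -b:a 64k -f ipod "../${book%/}.m4b"
            )
        done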

    Read the article

  • Windows data backups with alternating removable drives?

    - by luke
    I have a removable disk drive (an RD1000 from Dell), and I am looking for backup software that will let me back up every night and switch to the alternate disk every morning. There is only one directory structure to back up; what I want is two copies: one which I will take home with me every night, and one which will be backed up to every night, and when I get in in the morning I will switch them. So, for instance, I have disk "a" and disk "b". On Monday night I want to go home with disk "a" and leave disk "b" in the drive, so that a scheduled backup will be written to it. On Tuesday morning I will come in and swap disks, and I will take "b" home that night, leaving "a" for the backup. And so on for the remainder of the week. FOSS software preferred, freeware acceptable, paid software as a last-ditch effort. Oh, by the way, I'm stuck with Windows 2000. Thanks in advance.
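
    Given the Windows 2000 constraint, one freeware sketch is robocopy (from the Windows Resource Kit) run as a nightly Scheduled Task; it assumes the RD1000 cartridge always mounts at the same drive letter, so the alternating disks need no script changes:

        rem mirror the data folder to whichever cartridge is currently loaded
        robocopy C:\Data E:\Backup /MIR /R:2 /W:5 /LOG:C:\backup-log.txt

    /MIR makes each cartridge an exact copy, deleting files on the cartridge that no longer exist in the source.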

    Read the article

  • How many users are "many users"?

    - by kemp
    I need to find a solution for a website which is struggling under load. The site gets ~500 simultaneous connections during peak time and counts around 42k hits per day. It's a WordPress-based site bridged with a vBulletin forum, with a lot of content and a fairly complex structure which makes intensive use of the database. I have already implemented code-level full-page caching (without this the server just crashes), and configured all other caching directives, as well as combining CSS files and the like to limit HTTP requests as much as possible. I need to understand whether there is more that can be done via software, or if the load is just too much for the server to handle and it needs to be upgraded, because the server goes down occasionally during peak times. I can't access the server right now, but it's a dedicated CentOS machine (I think 4GB RAM; can't say what CPU) running Apache/MySQL. So back to the main question: how can I know when the users are just too many?
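
    There is no universal head count; the practical test is whether the box is saturated at peak. A rough diagnostic sketch to run during a slowdown (the thresholds are judgment calls, not hard limits):

        uptime                    # load average persistently above the core count is trouble
        free -m                   # heavy swap use under load is a bad sign
        top -b -n 1 | head -20    # which processes are eating CPU and RAM
        mysqladmin -u root -p extended-status | grep -Ei 'threads_connected|slow_queries'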

    Read the article

  • Minimize the chance my email is blocked/filtered as spam

    - by justSteve
    I'm running a web-based store where order confirmations are sometimes blocked and don't reach the intended user. The structure of the business model is such that our product is marketed to the end-users by third parties: affiliates who are known entities to the end-users, and email is freely exchanged between end-users and our affiliates. Our confirmations being blocked is becoming a big enough problem that we are considering implementing a system where a "confirmations" address is created within the affiliate's domain, and we'd have our app send via the affiliate's mail server instead of our own. But that would be a lot of work. The idea has been raised to have our app use our affiliates' addresses in the FROM field but still send from our server. My thinking is that this would be detected on the end-users' side and blocked just as often; we are dealing with institutions large enough to have at least some checks up at the perimeter. Is this assumption correct (more likely to be blocked), or is there a less roundabout way to send messages under the auspices of third parties? thx
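
    A lighter-weight option to investigate before relaying through affiliate servers, offered as a hedged sketch: have each affiliate publish an SPF record authorizing your sending server, so using their address in the FROM field no longer looks forged to perimeter checks (203.0.113.10 and affiliatedomain.com are placeholders; DKIM signing is the analogous next step):

        # record the affiliate would add to their DNS zone:
        #   affiliatedomain.com.  IN TXT  "v=spf1 mx ip4:203.0.113.10 ~all"
        # verify what recipient servers will see:
        dig +short TXT affiliatedomain.com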

    Read the article

  • how to limit disk space per user in a PHP web application & CentOS

    - by solid
    We have a web application written in PHP, and we want all our users to be able to upload images up to e.g. 50MB. We will create a directory structure so that every user has their own folder, like:

        app/user1/images
        app/user2/images
        ...

    Now, every time a user uploads an image we need to check whether this is still allowed or not, but we don't want 1000 users continuously scanning our hard drive, counting the file sizes in their directories. So writing a script that counts all file sizes in a user directory is not an option, I guess? Is there an easier way to calculate the used-up space per user and limit our app accordingly?
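
    One low-tech sketch: refresh a usage cache from cron, so the upload check compares a single cached number against the 50MB quota instead of scanning the disk (paths are assumptions):

        # hourly: one "<bytes> <dir>" line per user
        cd /var/www/app
        du -sb */images > /var/cache/app/user-usage.txt

    Alternatively, keep a running total per user in the application's database, incremented on upload and decremented on delete; then no disk scan is needed at all.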

    Read the article

  • Starting/Stopping IBM WebSphere Application Server (WAS) 7 from the Command Line

    - by Christopher Parker
    I've written a script to automate the process of starting, stopping, and restarting WAS7 from the command line. Nothing starts automatically on one of our staging servers, so I have to start everything: deployment manager, node agent, app server, and Web server. The script I wrote seems to work pretty well. A coworker of mine recommended that I structure my commands differently. I'm wondering if there's a good, valid reason for doing so. First, my variables:

        WAS_HOME="/opt/IBM/WebSphere/AppServer"
        WAS_PROFILE_NAME="AppSrv01"
        WAS_APP_SERVER="server1"
        WAS_WEB_SERVER="webserver1"

    How I had the start commands:

        "${WAS_HOME}/bin/startManager.sh"
        "${WAS_HOME}/bin/startNode.sh" -profileName $WAS_PROFILE_NAME
        "${WAS_HOME}/bin/startServer.sh" -profileName $WAS_PROFILE_NAME $WAS_APP_SERVER
        "${WAS_HOME}/bin/startServer.sh" -profileName $WAS_PROFILE_NAME $WAS_WEB_SERVER

    I was told that I should do it like this instead:

        WAS_DMGR="Dmgr01"    # Added variable

        "${WAS_HOME}/profiles/${WAS_PROFILE_NAME}/bin/startNode.sh"
        "${WAS_HOME}/profiles/${WAS_DMGR}/bin/startManager.sh"
        "${WAS_HOME}/profiles/${WAS_PROFILE_NAME}/bin/startServer.sh" $WAS_APP_SERVER
        "${WAS_HOME}/profiles/${WAS_PROFILE_NAME}/bin/startServer.sh" $WAS_WEB_SERVER

    How is the second way of starting up everything for WebSphere any better or more correct than the first, original, way?

    Read the article

  • sudo displays typed password in bash script

    - by Andy
    Hullo, I have a bash script that uses sudo a few times. There are a couple of strange points about it, though. It asks me for my password a few seconds after I've already entered it for a previous command, and the second time I enter my password it's echoed to the display. Here are the relevant bits of the script:

        sudo service apache2 stop
        drush sql-dump --root="$SITE_DIR" --structure-tables-key=svn --ordered-dump | grep -iv 'dump completed on' | sudo tee "$DB_DIR/${SITE_NAME}.sql" > /dev/null
        sudo svn diff "$DB_DIR" | less
        sudo svn commit -m "$MESSAGE" "$DB_DIR"
        sudo service apache2 start

    The first password is to stop Apache, and it works as expected. As mentioned, the sudo tee doesn't "remember" that I have elevated privileges, asks for the password again, and echoes it to the screen. Given that tee is all about echoing to the screen, I've played around a little with simple scripts which have | sudo tee, and they all work as expected. Ideas?! TIA, Andy
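
    A workaround sketch, not a diagnosis: prime sudo's credential cache before the pipeline, so the sudo tee (whose prompt, buried inside a pipeline, is what collides with the terminal's echo settings) never needs to ask at all:

        sudo -v    # validate/refresh cached credentials up front
        drush sql-dump --root="$SITE_DIR" --structure-tables-key=svn --ordered-dump \
            | grep -iv 'dump completed on' \
            | sudo tee "$DB_DIR/${SITE_NAME}.sql" > /dev/null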

    Read the article

  • How to use Binary Log file for Auditing and Replicating in MySQL?

    - by Pranav
    How do I use the binary log file for auditing in MySQL? I want to track the changes in a DB using the binary log so that I can replicate these changes to another DB. Please do not give me hyperlinks to the MySQL website; please direct me to the solution. EDIT: I have looked at auditing options and created a script using triggers for that, but due to the Joomla DB structure it didn't work for me, hence I had to move on to the binary log concept. Now I am stuck initiating the concept, as I don't understand how to make the servers master/slave. Can anybody guide me how to actually initiate this via PHP?
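
    Without going near full master/slave replication, the binary log can be read back as SQL and replayed; a sketch (log paths and names are placeholders, and log-bin must first be enabled in my.cnf under [mysqld], e.g. log-bin = /var/log/mysql/mysql-bin):

        # inspect what changed, for auditing:
        mysqlbinlog /var/log/mysql/mysql-bin.000001 | less
        # replay a time window of changes against another server:
        mysqlbinlog --start-datetime="2010-06-01 00:00:00" /var/log/mysql/mysql-bin.000001 \
            | mysql -h other-host -u root -p dbName

    A PHP wrapper would just shell out to these commands or parse mysqlbinlog's output.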

    Read the article

  • Try exchange in real domain

    - by AndreaCi
    We (as a company) would like to try Exchange Server as a replacement for our mail server. I downloaded the demo version from the Microsoft website, but during the installation it wants administrator access to the domain in order to edit the Active Directory database structure. The test will last for (at least) a month, to see whether it will bring real advantages to our management systems. Here is my question: is this "dangerous"? If I uninstall the Exchange server, will everything be reverted to the previous state? I'm kind of "scared" about the changes it may apply to our domain controllers.

    Read the article

  • How to change location of Maildir/ in postfix mail system

    - by adesh
    I'm using Postfix, with IMAP and POP servers, running on Ubuntu Linux. I want to change the location of the Maildir folder from the user's home directory to some other shared folder, so that all mail is in one place. I also have a problem sending local mail to users who do not have a home directory (e.g. www-data), the system users created by default. I'm getting the following error:

        postfix/local[32123]: DCE1D221BBD: to=<www-data@********>, orig_to=<www-data>, relay=local, delay=33, delays=33/0/0/0.12, dsn=5.2.0, status=bounced (maildir delivery failed: create maildir file /var/www/Maildir/tmp/1382169296.P32123.********: Permission denied)

    I'd like to have a folder structure similar to this:

        /all-mail/<user-name>/<mail-goes-here>
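
    Postfix's local delivery agent can put every mailbox under one shared tree; a sketch, assuming local(8) does the delivery and the tree is writable by it (a value ending in "/" selects maildir-style delivery):

        postconf -e 'mail_spool_directory = /all-mail/'
        postfix reload
        # mail for each user is then delivered under /all-mail/<user-name>/,
        # independent of whether the user has a home directory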

    Read the article

  • How can I rewrite / redirect URL's in Glassfish V3?

    - by Jin Liew
    I'd like to simplify the URLs used to access a Glassfish V3 application by removing file extensions and otherwise shortening URLs. I've already set my application as the default application, so there is no need to include the context root in the URL. I'd like to:

        * Remove file extensions
        * Shorten the URLs of files deep in the folder structure

    I'd like to do this using pattern matching rather than on a per-file basis (the site is small at the moment but will change frequently and grow). Some examples of what I'd like to do:

        foo.com/bar.html -> foo.com/bar
        foo.com/folder1/folder2/bar2.html -> foo.com/bar2

    Any help would be greatly appreciated. Thanks. Cheers, Jin
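
    Glassfish V3 has no built-in rewrite engine, so one common route (an assumption here, not something V3 ships with) is Tuckey's UrlRewriteFilter: register the filter in web.xml and express the patterns in WEB-INF/urlrewrite.xml, roughly:

        <!-- WEB-INF/urlrewrite.xml: a sketch; the rules mirror the examples above -->
        <urlrewrite>
            <rule>
                <from>^/([a-z0-9]+)$</from>
                <to>/$1.html</to>
            </rule>
            <rule>
                <from>^/bar2$</from>
                <to>/folder1/folder2/bar2.html</to>
            </rule>
        </urlrewrite>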

    Read the article

  • What are the best options for a root filesystem hosted on SSD under Linux

    - by stsquad
    I'm working on an embedded system which is going to boot from, and host its rootfs on, an SSD disk. We are currently looking at using Intel X18-M SSDs. The file system structure will have a fairly static /usr section (modulo software upgrades) and an active /var and /var/log for maintaining state and logging. Given the wear-levelling done by the underlying flash, does having separate partitions help or hinder? As modern SSDs appear as straight block devices and hide their mapping magic behind their firmware, is there any point trying to optimise the choice of file system that sits on top of the SSD? Finally, does enabling SMART monitoring make any sense in this context, or are there SSD-specific ways of determining the underlying health of the storage hardware?
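
    On the filesystem side, the usual knobs are mount options that cut write traffic; a sketch of /etc/fstab entries (device names and the ext4 choice are assumptions, not recommendations for the X18-M specifically):

        /dev/sda1  /     ext4   noatime                      0  1
        /dev/sda2  /var  ext4   noatime                      0  2
        tmpfs      /tmp  tmpfs  defaults,noatime,mode=1777   0  0

    Keeping /var on its own partition at least confines the log churn, though the drive's firmware remaps blocks underneath either way.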

    Read the article
