Search Results

Search found 7902 results on 317 pages for 'structure'.

  • unreadable corrupted ntfs partition - lost clusters reported

    - by Eduardo Martinez
    Partition Magic is reporting multiple 'bad file record signature' and 'lost clusters' errors on my 250GB Samsung SATA disk (connected via USB on an XP SP3 machine). Unfortunately PM is unable to fix them. PM shows the drive as being NTFS, and detects the used space and the drive name OK. But the PM browser (right click on partition, browse...) won't show anything, as if the disk were empty. Windows Explorer is not even picking up the drive name, and reports 'the file or directory is corrupted and unreadable'. The PTDD Partition Table Doctor demo tells me the boot sector is fine, and I can see all the disk content in its browser - but crucially I cannot copy that content over to a new disk (the PTDD browser is pretty arid, to say the least).

    Also tried:

        - photorec-6.11.3 - it actually started to extract files, but wouldn't keep file names or any folder structure (maybe I missed something in the configuration options)
        - Find and Mount - the intellectual scan went well, and the only partition on the disk was detected; I then tried to mount it as p: but got this error in Windows Explorer: 'p:\ is not accessible. The media is write protected'. Find and Mount allows you to create an image from the partition, but I don't have a disk big enough at hand. Does anyone know if this will keep the extracted file/folder structure intact?

    I'm starting to think the disk is pretty screwed and my chances to recover this data are slim. Please someone enlighten me with that marvellous piece of software I am missing :-) Thanks in advance

    Read the article

  • Getting at fsid under Linux? Or an alternate way of identifying filesystems?

    - by larsks
    In an environment with automounted home directories, such that the same filesystem exported by a fileserver may be mounted multiple times on the client, I would like to be able to authoritatively identify whether two mountpoints are in fact the same filesystem. That is, if the remote server exports /home, and the local client has:

        # mount
        fileserver:/home/l/lars on /home/lars type nfs (rw...)
        fileserver:/home/b/bob on /home/bob type nfs (rw...)

    I am looking for a way to identify that both /home/lars and /home/bob are in fact the same filesystem. In theory this is what the fsid member of the statvfs structure is for, but in all cases, for both local and remote filesystems, I am finding that the value of this member is 0. Is this some sort of client-side issue? Or do most modern NFS servers simply decline to provide a useful fsid?

    The end goal of all of this is to robustly interpret the output from the quota command for NFS filesystems. For example, given the example above, running quota as myself may return something like:

        Disk quotas for user lars (uid 6580):
             Filesystem                      blocks    quota    limit grace   files      quota      limit grace
             otherserver:/vol/home0/a/alice      12 52428800 52428800             4 4294967295 4294967295
             fileserver:/home/l/lars        9353032  9728000 10240000        124018          0          0

    ...the problem here being that there exists a quota for me on otherserver which is visible in the results of the quota command, even though my home directory is actually on a different device. My plan was to look up the fsid for each mountpoint listed in the quota output and check whether it matched the fsid associated with my home directory. It looks like this won't work, so... any suggestions?
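    As an aside, a minimal sketch of the check described above using GNU coreutils stat, which reports the filesystem ID that statvfs/statfs expose (the mountpoint paths are the question's own examples):

        # %i in filesystem mode (-f) prints the filesystem ID in hex;
        # identical non-zero IDs would indicate the same underlying filesystem.
        for mp in /home/lars /home/bob; do
            printf '%s\t%s\n' "$mp" "$(stat -f -c %i "$mp")"
        done

    If the ID comes back 0, as observed above, comparing device numbers (stat -c %d) is a tempting fallback, but each NFS mount is typically assigned its own device number on the client, which is exactly the ambiguity being asked about.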

    Read the article

  • How do I Setup Multiple Sites in HostGator Shared Hosting?

    - by cillosis
    I recently decided to consolidate all of my random projects into a single hosting account, as it was starting to get very expensive to run each on an individual hosting plan. I purchased the HostGator Baby plan, which allows hosting of multiple domains. You have to set it up with a root domain name, which is fine (I used my portfolio domain name). As far as file structure, I wanted a folder for each site in /public_html, so the structure looks like this:

        public_html/
            myportfolio.com/
                ... my files ...
            anothersite.com/
                ... my files ...
            thirdsite.com/
                ... my files ...

    I set up add-on domains and pointed them to their respective folders, which works fine. My problem is that the root domain, e.g. myportfolio.com, expects its files to be contained at the root of /public_html rather than within the folder I created. I set up a redirect to point requests for myportfolio.com to myportfolio.com/myportfolio.com/, which works initially, except (at least in my WordPress installation) it still references its root folder as public_html.

    TL;DR: What is the best way to go about setting up multiple-site hosting in a shared hosting environment (i.e. I can't set up vhosts)? Does anybody know of any tutorials or videos that walk through this more clearly? Thanks.
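    A common workaround for the root-domain case is an internal rewrite in /public_html/.htaccess instead of a redirect, so the URL never changes and WordPress keeps its subfolder as its base. A hedged sketch, assuming mod_rewrite is available and using the folder names from the question:

        RewriteEngine On
        # Only touch requests for the root domain itself
        RewriteCond %{HTTP_HOST} ^(www\.)?myportfolio\.com$ [NC]
        # Don't rewrite paths already inside the site folder
        RewriteCond %{REQUEST_URI} !^/myportfolio\.com/
        # Serve everything transparently from the subfolder
        RewriteRule ^(.*)$ /myportfolio.com/$1 [L]

    WordPress would likely also need its Site URL/Home settings pointed at the bare domain so generated links don't include the subfolder.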

    Read the article

  • Routing Apache TracEnv

    - by fampinheiro
    Hello, I have a situation with many Trac instances. They all have the same structure in the filesystem:

        PATH/trac1
        PATH/trac2
        PATH/trac3

    I have this configuration:

        <Location /trac/trac1>
            SetHandler mod_python
            PythonInterpreter main_interpreter
            PythonHandler trac.web.modpython_frontend
            PythonOption TracEnv PATH/trac1
            PythonOption TracUriRoot /trac/trac1
            PythonOption PYTHON_EGG_CACHE PATH/eggs/
        </Location>

        <Location /trac/trac2>
            SetHandler mod_python
            PythonInterpreter main_interpreter
            PythonHandler trac.web.modpython_frontend
            PythonOption TracEnv PATH/trac2
            PythonOption TracUriRoot /trac/trac2
            PythonOption PYTHON_EGG_CACHE PATH/eggs/
        </Location>

        <Location /trac/trac3>
            SetHandler mod_python
            PythonInterpreter main_interpreter
            PythonHandler trac.web.modpython_frontend
            PythonOption TracEnv PATH/trac3
            PythonOption TracUriRoot /trac/trac3
            PythonOption PYTHON_EGG_CACHE PATH/eggs/
        </Location>

    I wonder if it's possible to do something like this (TracEnvParentDir is not an option):

        <Location /trac/{ENV}>
            SetHandler mod_python
            PythonInterpreter main_interpreter
            PythonHandler trac.web.modpython_frontend
            PythonOption TracEnv PATH/{ENV}
            PythonOption TracUriRoot /trac/{ENV}
            PythonOption PYTHON_EGG_CACHE PATH/eggs/
        </Location>

    Thank you for your time.

    EDIT: TracEnvParentDir is not an option because my structure is the following:

        +---projs
            +---trac1
            ¦   +---public [instance]
            ¦   +---t1
            ¦   ¦   +---common [instance]
            ¦   ¦   +---g1 [instance]
            ¦   ¦   +---g2 [instance]
            ¦   ¦   +---g3 [instance]
            ¦   ¦   +---g4 [instance]
            ¦   ¦   +---g5 [instance]
            ¦   +---t2
            ¦   ¦   +---common [instance]
            ¦   ¦   +---g1 [instance]
            ¦   ¦   +---g2 [instance]
            ¦   ¦   +---g3 [instance]
            ¦   ¦   +---g4 [instance]
            ¦   ¦   +---g5 [instance]
            ¦   +---t3
            ¦       +---common [instance]
            ¦       +---g1 [instance]
            ¦       +---g2 [instance]
            ¦       +---g3 [instance]
            ¦       +---g4 [instance]
            ¦       +---g5 [instance]
            ¦
            +---trac2
                +---public [instance]
                +---t1
                ¦   +---common [instance]
                ¦   +---g1 [instance]
                ¦   +---g2 [instance]
                ¦   +---g3 [instance]
                ¦   +---g4 [instance]
                ¦   +---g5 [instance]
                +---t2
                ¦   +---common [instance]
                ¦   +---g1 [instance]
                ¦   +---g2 [instance]
                ¦   +---g3 [instance]
                ¦   +---g4 [instance]
                ¦   +---g5 [instance]
                +---t3
                    +---common [instance]
                    +---g1 [instance]
                    +---g2 [instance]
                    +---g3 [instance]
                    +---g4 [instance]
                    +---g5 [instance]

    I use TracEnvParentDir on t1, t2 and t3, and TracEnv on trac1/public and trac2/public. I wonder if it's possible to define part of the URL as a variable.

    Read the article

  • Auto-archive IMAP mail folders on OS X

    - by Pradeep
    Hi, I am trying to achieve the following:

        1. Download all messages from the mail server (and remove downloaded messages from the server).
        2. Downloaded messages should go into a local mailbox, preserving the folder structure as it was defined on the server.
        3. The download process should be automatic and shouldn't create duplicates.

    I am on OS X and looking for solutions using Apple Mail or Thunderbird or similar. So far I have found that POP is not the way to go (as it loses the folder structure and can potentially cause duplicates). The solution described here seems very good but isn't yet available for Thunderbird or Apple Mail: http://getsatisfaction.com/mozilla_messaging/topics/auto_archive_and_keep_folder_structure. Another alternative is Outlook, which has auto-archive, but it is paid and I think it exports to pst instead of the more common mbox format. Yet another alternative is http://www.pop4.org/, which adds folder management support to POP - but I don't think it is going to become usable soon. Any other, better solutions? Thank you

    Read the article

  • Permissions won't cascade more than 1 level

    - by Jovin_
    Running Windows Small Business Server 2011. I have a file structure with a lot of subfolders (sometimes 5-6 levels deep). I have created access groups to grant access to my users, and also deny groups to deny access to others: "X Access" and "X Deny". These allow or deny access to a mapped network drive X:. On the server I put in the groups with Full Control Allow for X Access and Full Control Deny for X Deny. I also tick the box "Apply these permissions to objects and/or containers within this container only" and have ensured that "Apply to:" is "This folder, subfolders and files". But for some reason the permissions will only apply to the next level of folders and files. Example structure:

        X:
            Folder 1
                Folder 1a
            Folder 2
                Folder 2a

    If I apply the permissions to X:, they only reach Folder 1 and Folder 2, not 1a and 2a; I then need to apply the permissions to those manually too. Is this working as intended or am I doing something wrong?
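    For comparison, a hedged command-line equivalent of those ACL settings using icacls (run from an elevated prompt; the group names are the question's own):

        rem (OI)(CI) = inherit to files and subfolders at every depth;
        rem /T also reapplies the entry to everything already under X:\
        icacls X:\ /grant "X Access":(OI)(CI)F /T
        icacls X:\ /deny "X Deny":(OI)(CI)F /T

    Running it with /T can help when inheritance appears to stop partway down a tree, since it stamps the entries onto the existing subfolders rather than relying on propagation alone.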

    Read the article

  • Can an image based backup potentially corrupt data?

    - by ServerAdminGuy45
    I'm considering doing image-based backups (Acronis) on production Windows systems during non-peak hours. I'm just wondering if they can potentially lead to application data corruption. Let's say that I have a database that is getting hit pretty hard. Could the beginning blocks of the database be committed to the image, data then be inserted into the db (which changes the beginning blocks of the DB on the server, but not in the image), and then the later blocks of data be committed to the image, leading to an inconsistent state?

    Here's an example of what I'm trying to illustrate. Imagine a simple data structure which has a number at the front representing the number of "a"s in the file. The number and data are delimited by a "-". For example:

        4-ajjjjjjjajuuuuuuuaoffffa

    If an "a" is changed, the data structure resets the number at the beginning of the file, such as:

        3-ajjjjjjjajuuuuuuuboffffa

    I assume Acronis writes block by block, being a straight-up image, so here is what I'm envisioning happening with my database:

        t0: 4-ajjjjjjjajuuuuuuuaoffffa
            ^ pointer is here
        t1: 4-ajjjjjjjajuuuuuuuaoffffa
                 ^ pointer is here (all data before this is committed to the image)
        t2: 4-ajjjjjjjajuuuuuuuboffffa
                          ^ pointer is here (all data before this is committed to the image)
                            (also notice how one of the "a"s changed to a "b" - there are only 3 "a"s now)
        t3: 4-ajjjjjjjajuuuuuuuboffffa
                                     ^ pointer is here (all data before this is committed to the image)

    The final image now reads "4-ajjjjjjjajuuuuuuuboffffa", while the true data is "3-ajjjjjjjajuuuuuuuboffffa", leading to a corrupt "database". Basically, changes further down the chain of blocks could be reflected in the image, while important header and synchronization data has already been committed. The out-of-date header information doesn't accurately reflect the structure of the blocks to come.

    Read the article

  • I flashed my DS4700 with a 7 series firmware, now my DS4300 cannot read the disks I moved to that lo

    - by Daniel Hoeving
    In preparation for adding a number of 1Tb SATA disks to our DS4700, I flashed the controller firmware from a 6 series (which only supports up to 2Tb logical drives) to a 7 series (which supports logical drives larger than 2Tb). Attached to this DS4700 was an EXP710 expansion drawer that we had planned to migrate out to our co-location site to alleviate the storage issues we were having there. Unfortunately these two projects were planned in isolation from one another, so I was unaware at the time of the issue this would cause. Prior to migrating the drawer I was reading the "IBM TotalStorage DS4000 EXP700 and EXP710 Storage Expansion Enclosures Installation, User's, and Maintenance Guide" and discovered this:

        Controller firmware 6.xx or earlier has a different metadata (DACstore) data structure than controller firmware 7.xx.xx.xx. Metadata consists of the array and logical drive configuration data. These two metadata data structures are not interchangeable. When powered up and in Optimal state, the storage subsystem with controller firmware level 7.xx.xx.xx can convert the metadata from the drives configured in storage subsystems with controller firmware level 6.xx or earlier to the controller firmware level 7.xx.xx.xx metadata data structure. However, the storage subsystem with controller firmware level 6.xx or earlier cannot read the metadata from the drives configured in storage subsystems with controller firmware level 7.xx.xx.xx or later.

    I had assumed that deleting the logical drives and array information on the EXP710 prior to migrating it to the DS4300 (6.60.22 firmware) would satisfy the above; unfortunately I was wrong. So my questions are: a) Is it possible to restore the DACstore information to its factory settings? b) What tool(s) would I use to accomplish this? c) Or is this a lost cause? Daniel.

    Read the article

  • Can't reset Windows 7 Registry permissions.

    - by n10i
    Hi all, I am trying to reset the Windows 7 registry permissions using:

        secedit /configure /cfg %windir%\inf\defltbase.inf /db defltbase.sdb /verbose /areas REGKEYS

    But I am receiving the following error:

        An extended error has occurred.
        The task has completed with an error.
        See log %windir%\security\logs\scesrv.log for detail info.

    The content of the log file:

        -------------------------------------------
        Friday, April 16, 2010 1:50:43 PM
        ----Configuration engine was initialized successfully.----
        ----Reading Configuration Template info...
        ----Configure 64-bit Registry Keys...
            Configure users.default.
            Warning 5: Access is denied.
                Error taking ownership of users.default\software\SetID.
            Warning 5: Access is denied.
                Error opening users.default\software\SetID.
            Warning 5: Access is denied.
                Error setting security on users.default\software\SetID.
            Configure machine\software.
            Warning 5: Access is denied.
                Error setting security on machine\software.
            Warning 1336: The access control list (ACL) structure is invalid.
                Error setting security on machine\software\Macrovision.
            Configuration of Registry Keys was completed with one or more errors.
        ----Configure 32-bit Registry Keys...
            Configure machine\software.
            Warning 1336: The access control list (ACL) structure is invalid.
                Error setting security on machine\software\Audible.
            Configuration of Registry Keys was completed with one or more errors.
        ----Un-initialize configuration engine...

    Please help me out, guys!

    Read the article

  • Moving symlinks into a folder based on id3 tags.

    - by Reti
    I'm trying to get my music folder into something sensible. Right now, I have all my music stored in /home/foo, with all of the albums soft-linked into ~/music. I want the structure to be ~/music/<artist>/<album>. I've got all of the symlinks into ~/music already, so I just need to get the symlinks into the proper structure. I'm trying to do this by delving into each symlinked album and getting the artist name with id3info. I can do this, but I can't seem to get it to work correctly:

        for i in $( find -L $i -name "*.mp3" -printf "%h\n")
        do
            echo "$i" #testing purposes
            #find its artist
            #the stuff after read file just cuts up id3info to get just the artist name
            #$artist = find -L $i -name "*.mp3" | read file; id3info $file | grep TPE | sed "s|.*: \(.*\)|\1|"|head -n1
            #move it to correct artist folder
            #mv "$i" "$artist"
        done

    Now, it does find the correct folder, but every time there is a space in the directory name it treats the name as if it ended in a newline there. Here's a sample of what I'm working with:

        $ ls
        DJ Exortius/
        The Trance Mix 3 Wanderlust - DJ Exortius [TRANCE DEEP VOCAL TECH]@

    I'm trying to mv 'The Trance Mix 3 Wanderlust - DJ Exortius [TRANCE DEEP VOCAL TECH]' into the real directory 'DJ Exortius'. 'DJ Exortius' already exists, so it's just a matter of moving it into the correct directory based on the id3 tag of the mp3 inside. Thanks!

    PS: I've tried EasyTAG, but when I restructure the album, it moves it out of /home/foo, which is not what I want.
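    A hedged sketch of a whitespace-safe rewrite, assuming GNU find/sort and that id3info prints the artist on a "TPE" (performer) line as in the question's sed snippet (untested against these exact tags; the ~/music paths follow the question):

        #!/usr/bin/env bash
        # NUL-delimit directory names so spaces survive the pipeline.
        find -L ~/music -name '*.mp3' -printf '%h\0' | sort -zu |
        while IFS= read -r -d '' album; do
            # Take the artist from the first TPE frame of the album's mp3s.
            artist=$(id3info "$album"/*.mp3 2>/dev/null \
                     | grep TPE | head -n1 | sed 's|.*: ||')
            [ -n "$artist" ] || continue   # skip albums with no usable tag
            mkdir -p ~/music/"$artist"
            mv "$album" ~/music/"$artist"/
        done

    The key change is -printf '%h\0' with read -d '': the original for-loop split on whitespace, which is why names broke at every space.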

    Read the article

  • nginx not serving admin static files?

    - by toto_tico
    First, I want to clarify that this error is just for the admin static files; my problem is specific to the static files that correspond to the Django admin. The rest of the static files work perfectly.

    Basically my problem is that for some reason I cannot access those admin static files through the nginx server. It works perfectly with Django's development server, and collectstatic is doing its job: it puts the files in the expected place in the static folder. The URLs are correct, but I cannot even access the admin static files directly, though the others I can. So, for example, I am able to access this URL (copying it into the browser):

        myserver.com:8080/static/css/base/base.css

    but I am not able to access this other URL (copying it into the browser):

        myserver.com:8080/static/admin/css/admin.css

    I also tried copying the admin/ directory structure into other_admin_directory_name/. Then I can access:

        myserver.com:8080/static/other_admin_directory_name/css/admin.css

    So that works. I checked permissions and everything is fine. I tried changing ADMIN_MEDIA_PREFIX = '/static/admin/' to ADMIN_MEDIA_PREFIX = '/static/other_admin_directory_name/'; it doesn't work. This is a mystery in itself that I am still exploring, with no luck so far. Finally, and it seems to be an important clue: I tried copying the admin/ directory structure into admin_and_then_any_suffix/. Then I cannot access:

        myserver.com:8080/static/admin_and_then_any_suffix/css/admin.css

    So, if the name of the directory starts with "admin" (for example "administration" or "admin2"), it doesn't work.

    * added thanks to sarnold's observation
    ** the problem seems to be in the nginx configuration file /etc/nginx/sites-available/mysite:

        location /static/admin {
            alias /home/vl3/.virtualenvs/vl3/lib/python2.7/site-packages/django/contrib/admin/media/;
        }
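    That last clue fits how nginx prefix locations match: "location /static/admin" matches any URI beginning with that string, including /static/admin2/... and /static/administration/..., and aliases them all into the Django admin media directory, where those paths don't exist. A hedged sketch of the usual fix, reusing the path from the question:

        # The trailing slash stops the prefix from also matching
        # /static/admin2/, /static/administration/, etc.
        location /static/admin/ {
            alias /home/vl3/.virtualenvs/vl3/lib/python2.7/site-packages/django/contrib/admin/media/;
        }

    If /static/admin/... itself still 404s, the alias may also need to point at the directory collectstatic actually populated rather than at site-packages.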

    Read the article

  • Need Corrected htaccess File

    - by Vince Kronlein
    I'm attempting to use a WordPress plugin called WP Fast Cache, which creates static html files from all your posts, pages and categories. It creates the following directory structure inside wp-content:

        wp_fast_cache/
            example.com/
                pagename/
                    index.html
                categoryname/
                    postname/
                        index.html

    basically just a nested directory structure with a final index.html for each item. But the htaccess edits it makes are crazy:

        #start_wp_fast_cache - do not remove this comment
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteCond %{REQUEST_METHOD} ^(GET)
        RewriteCond /home/user/public_html/wp-content/wp_fast_cache/%{HTTP_HOST}%{REQUEST_URI}x__query__x%{QUERY_STRING}index.html -f
        RewriteCond %{HTTP_USER_AGENT} !(iPhone|Windows\sCE|BlackBerry|NetFront|Opera\sMini|Palm\sOS|Blazer|Elaine|^WAP.*$|Plucker|AvantGo|Nokia)
        RewriteCond %{HTTP_COOKIE} !(wordpress_logged_in) [NC]
        RewriteRule ^(.*)$ /home/user/public_html/wp-content/wp_fast_cache/%{HTTP_HOST}%{REQUEST_URI}x__query__x%{QUERY_STRING}index.html [L]
        RewriteCond %{REQUEST_METHOD} ^(GET)
        RewriteCond %{QUERY_STRING} ^$
        RewriteCond /home/user/public_html/wp-content/wp_fast_cache/%{HTTP_HOST}%{REQUEST_URI}index.html -f
        RewriteCond %{HTTP_USER_AGENT} !(iPhone|Windows\sCE|BlackBerry|NetFront|Opera\sMini|Palm\sOS|Blazer|Elaine|^WAP.*$|Plucker|AvantGo|Nokia)
        RewriteCond %{HTTP_COOKIE} !(wordpress_logged_in) [NC]
        RewriteRule ^(.*)$ /home/user/public_html/wp-content/wp_fast_cache/%{HTTP_HOST}%{REQUEST_URI}index.html [L]
        </IfModule>
        #end_wp_fast_cache

    No matter how I try to work this out, I get a 404 Not Found - and not the WordPress 404, a janky Apache 404. I need to find the correct syntax to route all requests that don't exist (i.e. as files or directories) to:

        wp-content/wp_fast_cache/hostname/request_uri/

    So for example:

        Page:     example.com/about-us/ => wp-content/wp_page_cache/example.com/about-us/index.html
        Post:     example.com/my-category/my-awesome-post/ => wp-content/wp_fast_cache/example.com/my-category/my-awesome-post/index.html
        Category: example.com/news/ => wp-content/wp_fast_cache/example.com/news/index.html

    Any help is appreciated.
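    One hedged guess at the 404: in a per-directory .htaccess, a RewriteRule substitution like /home/user/public_html/... can end up treated as a URL-path rather than a file path, which Apache then fails to serve. A sketch of the no-query-string rule rewritten to target a root-relative URL instead, keeping the on-disk existence check via %{DOCUMENT_ROOT} (assuming the .htaccess sits in the document root):

        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteCond %{REQUEST_METHOD} ^GET$
        RewriteCond %{QUERY_STRING} ^$
        RewriteCond %{HTTP_COOKIE} !(wordpress_logged_in) [NC]
        # Check for the cached file on disk, then hand Apache a URL it can serve
        RewriteCond %{DOCUMENT_ROOT}/wp-content/wp_fast_cache/%{HTTP_HOST}%{REQUEST_URI}index.html -f
        RewriteRule ^ /wp-content/wp_fast_cache/%{HTTP_HOST}%{REQUEST_URI}index.html [L]
        </IfModule>

    The x__query__x variant for non-empty query strings could be handled the same way.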

    Read the article

  • CHKDSK is unable to fix NTFS errors

    - by HackToHell
    After my PC shut down due to a power failure, I noticed several errors in Event Viewer:

        The file system structure on the disk is corrupt and unusable. Please run the chkdsk utility on the volume \Device\HarddiskVolume2.

    and

        The file system structure on the disk is corrupt and unusable. Please run the chkdsk utility on the volume C:.

    So I forced a chkdsk check at startup, and it found a stream of errors. Here is the output - it is smaller than the actual log, because Event Viewer only seems to have kept this much; the same lines were repeated thousands of times:

        Some clusters occupied by attribute of type 0x80 and instance tag 0x4 in file 0x198f2 is already in use.
        Deleting corrupt attribute record (128, "") from file record segment 104690.
        Attribute record of type 0x80 and instance tag 0x0 is cross linked

    Also, even after running CHKDSK, the same errors were being reported, so I ran CHKDSK another time, and it still loops on the same lines above without fixing the errors. Can anyone tell me how to fix this?

    Read the article

  • One Active Directory, Multiple Remote Desktop Services (Server 2012 solution)

    - by Trinitrotoluene
    What I am trying to do is quite complex, so I figured I'd throw it out to a wider audience to see if anyone can find a flaw. What I am trying to do (as an MSP/VAR) is design a solution that will give multiple companies a session-based remote desktop (companies that need to be kept completely separate), using only a handful of servers. This is how I imagine it at the moment:

        CORE SERVER 1 - Server 2012 Datacentre (all below are Hyper-V guests)
            Server 1: Cloud-DC01 (Active Directory Domain Services for mycloud.local)
            Server 2: Cloud-EX01 (Exchange Server 2010 running multi-tenant mode)
            Server 3: Cloud-SG01 (Remote Desktop Gateway)

        CORE SERVER 2 - Server 2012 Datacentre (all below are Hyper-V guests)
            Server 1: Cloud-DC02 (Active Directory Domain Services for mycloud.local)
            Server 2: Cloud-TS01 (Remote Desktop Session Host for Company A)
            Server 3: Cloud-TS02 (Remote Desktop Session Host for Company B)
            Server 4: Cloud-TS03 (Remote Desktop Session Host for Company C)

    What I thought about doing was setting up each organisation in its own OU (perhaps creating the OU structure based on the Exchange 2010 tenant OU structure, so the accounts are linked). Each company would get a Remote Desktop Session Host server that would also serve as a file server. This server would be separated from the rest on its own network range. The server Cloud-SG01 would have access to all these networks and route the traffic to the appropriate network when a client connects and is authenticated, so they are pushed onto the correct server (based on session collections in 2012).

    I won't lie, this is something I have come up with quite quickly, so there may well be something glaringly obvious that I am missing. Any feedback would be appreciated.

    Read the article

  • Add folder name to beginning of filename

    - by shekhar
    I have a directory structure as below:

        Folder
            SubFolder1
                FileName1.abc
                Filename2.abc
                .............
            SubFolder2
                FileName11.abc
                Filename12.abc
                ..............
            .......... etc.

    I want to rename the files inside the subfolders as:

        SubFolder1_FileName1.abc
        SubFolder1_Filename2.abc
        SubFolder2_FileName11.abc
        SubFolder2_Filename12.abc

    i.e. add the folder name at the beginning of the file name, with the delimiter "_". The directory structure should remain unchanged. Note: the beginning of the file names is the same, e.g. File* in the above case. I made the script below:

        for /r "PATH" %%G in (.) do (
            pushd %%G
            for %%* in (.) do set MyDir=%%~n*
            FOR %%v IN (File*.*) DO REN %%v "%MyDir%_%%v"
            popd
        )

    The problem with the above script is that it takes only one subfolder name and places it at the beginning of the file names, irrespective of the folder.
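    That symptom is typical of %MyDir% being expanded once, when the parenthesised block is parsed, rather than on each iteration. A hedged sketch of the same script using delayed expansion (untested; "PATH" stands in for the real root folder, as in the question):

        @echo off
        setlocal EnableDelayedExpansion
        for /r "PATH" %%G in (.) do (
            pushd "%%G"
            rem %%~n* trick: the name of the current directory
            for %%* in (.) do set "MyDir=%%~n*"
            rem !MyDir! is read at run time, so each folder gets its own name
            for %%v in (File*.*) do ren "%%v" "!MyDir!_%%v"
            popd
        )
        endlocal

    The only functional change is EnableDelayedExpansion plus !MyDir! in place of %MyDir%.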

    Read the article

  • How can I allow the Xbox 360 to browse movies on a network drive?

    - by Roger
    I've always been able to stream movies stored on my Windows 7 computer's local hard drive to the Xbox 360 with no problems. A few weeks ago, though, I got a Seagate Dockstar NAS device and moved my movies to it. Ever since then, I can't see them on the Xbox. I've tried several different things. When I try to add the mapped drive to the Media Player library, it says that it can't share the files (I don't have any security on the NAS drive). I've tried creating a symlink to a local drive, but that doesn't help. Neither does adding the UNC path directly. The Dockstar seems to have its own Xbox share capability, but it doesn't respect my folder structure - it brings over every video file on the drive in a single list, including several hundred home videos that aren't playable on the Xbox. Is there any way to use Media Player (or the Zune software) to share files stored on a NAS drive? Barring this, is there a lightweight, free server that will allow me to share these files while maintaining my folder structure?

    Read the article

  • WordPress permalinks not working, everything seems fine

    - by javipas
    I have a WordPress blog I've migrated from another CMS, and I've been having a lot of problems with my permalink structure: lots of articles give a 404, although they are there, somewhere, published. The site is www.muycomputerpro.com (MCP for short), and for example an article that should be found is:

        http://muycomputerpro.com/Actualidad/Especiales/2009-las-grandes-crecen-en-la-bolsa

    If I do a search with the search tool at MCP, the result is there (see EnlacesMCP-1.jpg). But when I click on the link, our 404 error page appears (see EnlacesMCP-2.jpg). The weird thing is, the article is published, and the permalink is the right one, as you can see on this screenshot of the WordPress CMS: the permalink (below the title) is correct (http://www.muycomputerpro.com/Actualidad/Especiales/2009-las-grandes-crecen-en-la-bolsa/) but it does not work. In fact, if I try to use the short link (http://www.muycomputerpro.com/?p=5023), the article does not show either. I've accessed my WordPress DB and searched for the article to see if there is something wrong there, but from what I can tell all the fields are OK; here's a screenshot:

    I really don't know what is causing this. The permalink structure should work (I'm using the "Custom Permalinks" plugin to preserve the old URLs, which had an alphanumeric code at the end of the post name), and the permalink config in WordPress is "/%postname%/". I really need help :(

    Read the article

  • Making one of the folders default in Apache

    - by OmerO
    Hello, the file & directory structure of my website is as follows:

        /Library/WebServer/mysite/joomla/ ...
        /Library/WebServer/mysite/wiki/ ...
        /Library/WebServer/mysite/forum/ ...
        /Library/WebServer/mysite/index.php

    As you see, there are various applications, each residing in a separate folder. In order to define this structure, I have made this entry in the Apache http-vhosts.config file:

        ServerName mysite.com
        DocumentRoot "/Library/WebServer/mysite"

    And I already have the DirectoryIndex defined: DirectoryIndex index.html index.php, and so on. So far so good, but I want this specific functionality: when someone visits mysite, he/she should automatically be directed to:

        /Library/WebServer/mysite/joomla (and therefore /Library/WebServer/mysite/joomla/index.php)

    I don't want to achieve that functionality by putting redirection code inside /Library/WebServer/mysite/index.php or /Library/WebServer/mysite/index.htm, because that causes time delays (because of the redirection, of course). In this case, the only proper way of achieving it seems to be to set DocumentRoot this way:

        DocumentRoot "/Library/WebServer/mysite/joomla"

    But when I set it that way, the other folders (/wiki, /forum, etc.) are simply not served by Apache. To work around it, I put in directives like:

        Alias /wiki /Library/WebServer/mysite/wiki
        Alias /forum /Library/WebServer/mysite/forum

    and it did actually work the way I wanted. But... I still cannot use it that way, because in this case I just couldn't manage to make the wiki use Short URLs (as described in link text). So, I have to set the DocumentRoot back to /Library/WebServer/mysite and should be able to assign /Library/WebServer/mysite/joomla as the "default directory" (my own terminology :). Can I do it in Apache? Is there any other way you might suggest? Thanks.
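    One way to get this "default directory" behaviour without a client-side redirect is an internal rewrite of just the bare root URL, keeping DocumentRoot at /Library/WebServer/mysite so /wiki and /forum still resolve normally. A hedged sketch for the vhost, assuming mod_rewrite is loaded:

        RewriteEngine On
        # Internally map only the site root to the Joomla front controller;
        # all other paths (/wiki, /forum, ...) are left untouched.
        RewriteRule ^/?$ /joomla/index.php [PT,L]

    Being an internal rewrite, the browser never sees a redirect, so there is no extra round trip.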

    Read the article

  • Recover data from Dynamic Disk (MBR) bigger than 2TB

    - by Helder
    Here is the situation: a Promise FastTrak TX4310 array with 3 disks (750 GB each) in RAID5. This comes to around 1500 GB of data. Last week I had the idea of expanding the RAID with an additional 750 GB disk, which would bring the volume to around 2250 GB. I plugged in the disk and used the WebPAM software to do the RAID expansion. However, I didn't count on the MBR 2TB limit: I didn't remember that the disk was using MBR instead of GPT, and I didn't check prior to the expansion.

    After a couple of days of expansion, today when I got home the disk in the Windows disk manager showed the message "Invalid disk", and when I try to activate it, it says "The operation is not allowed on the Invalid pack". From what I figured, the logical volume on the RAID expanded and passed that info up to the Windows layer, and I ended up with a "larger than 2TB" MBR disk. I'm hoping that somehow I can still recover some data from this, and I was wondering if I can "rewrite" the MBR structure back to the 1500 GB partition size, so I can access the partition in Windows. Right now I'm doing an "Analyse" with TestDisk, as I hope the program will pick up the old 1500 GB structure and allow me to somehow revert back to it. I think that even though the logical drive in the RAID is bigger than 2TB, I can somehow correct the MBR to show the 1500 GB partition again. I had a similar problem once, and I was able to recover the data using a similar method. What do you guys think? Is it a dead end? Am I totally screwed because of the extra RAID layer that I'm not accounting for? Or is there another way forward with this? Thanks all!

    Read the article

  • Multiple servers vs 1 big server performance

    - by pistacchio
    Hi to all! My team of developers has suggested a server structure for an upcoming project we are developing. Our structure is "logical", meaning that the various logical components of the application (it is a distributed one) rely on different servers. Some components are more critical than others and will be subjected to more load. Our proposal was to have 1 server per component, but the hardware guys suggested replacing the various machines with a single, bigger one running virtual servers. They're going to use blade servers. Now, I'm not an expert at all, but my question to the guys was: so if we need, for example, 3 machines with a 2GHz CPU and 2GB of RAM each, and you give me 1 machine with 3 2GHz CPUs and 6 GB of RAM, is it the same? They told me it is. Is this accurate? What are the advantages or disadvantages of each solution? What are the generally accepted best practices? Could you point out some URL references dealing with the problem? Thank you in advance!

    EDIT: Some more info. The (internet/intranet) application is already layered. We have some servers on the DMZ that will expose pages to the internet, and the databases are on their own machines. What we want to split (and they want to join) are some webservers that mainly expose webservices. One is a DAL that communicates with the database layer, one is our Single Sign On / User Profile application that gets called once per page, and one is a clone of what is seen on the Internet, to be used on our LAN.

    Read the article

  • Large, high performance object or key/value store for HTTP serving on Linux

    - by Tommy
    I have a service that serves images to end users at a very high rate using plain HTTP. The images vary between 4 and 64 kbytes, and there are 1,300,000,000 of them in total. The dataset is about 30TiB in size, and changes (new objects, updates, deletes) make up less than 1% of the requests. The number of requests per second varies from 240 to 9000 and is dispersed pretty evenly across the dataset, with few objects being especially "hot".

    As of now, these images are files on an ext3 filesystem distributed read-only across a large number of midrange servers. This poses several problems:

        - Using a filesystem is very inefficient: the metadata size is large, the inode/dentry cache is volatile on Linux, and some daemons tend to stat()/readdir() their way through the directory structure, which in my case becomes very expensive.
        - Updating the dataset is very time-consuming and requires remounting between set A and set B.
        - The only reasonable handling is operating on the block device for backup, copying, etc.

    What I would like is a daemon that:

        - speaks HTTP (get, put, delete and perhaps update);
        - stores data in an efficient structure - the index should remain in memory, and considering the number of objects, the overhead must be small;
        - can handle massive numbers of connections, with slow (if any) ramp-up time - the index should be read into memory at startup;
        - statistics would be nice, but are not mandatory.

    I have experimented a bit with riak, redis, mongodb, kyoto and varnish with persistent storage, but I haven't had the chance to dig in really deep yet.

    Read the article

  • Migrating JBoss installation and install it on a PHP server

    - by David Martinez
    I'm configuring a new dedicated server that is going to run 3 sites; 2 of them are migrating from an old server. Each site has its own domain and dedicated IP. 2 of the sites are already up and running on PHP (one of them uses CakePHP); the third site is a migration from an old server and runs on JBoss.

        1) Is it possible to have both JBoss and PHP running on the same Apache instance, or would I have to install a new one?
        2) Can I just move the old JBoss server directory to the new server and start the server with the shell script? From what I read here, JBoss is distributed as a zip/tgz file with the server structure, so moving it from the old server to the new one should be the same.

    I want to do this because the old server is already configured, and it has 2 JBoss instances. I didn't develop this site and I don't have experience with JBoss. I have some documentation for the site, but it is not much - mostly the server structure and the technology they used. The new server runs CentOS with cPanel, and I have full root access to it. This question is similar to this one: How can I run JBoss Application Server and Apache on the same server? But there he didn't have a dedicated IP for each domain.
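    For question 1, the usual pattern is to keep PHP in Apache and put the JBoss-bound domain behind mod_proxy in its own VirtualHost. A hedged sketch (the IP and hostname are placeholders; JBoss's default HTTP port is 8080):

        <VirtualHost 192.0.2.3:80>
            ServerName jboss-site.example.com
            ProxyPreserveHost On
            # Forward this domain's traffic to the local JBoss instance
            ProxyPass        / http://127.0.0.1:8080/
            ProxyPassReverse / http://127.0.0.1:8080/
        </VirtualHost>

    Since each domain here has its own dedicated IP, each VirtualHost can bind to its own address and the PHP sites stay untouched.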

    Read the article

  • file system that allow to specify different RAID level per directory and change it afterward

    - by Adam Ryczkowski
    I have 5 hard drives on which I want to keep my data. Some of my files are more important and some are less, so some of them I wish to put on RAID-6, and for some RAID-5 is sufficient. It is difficult to predict, at the moment the arrays are created, how much space of each type to declare.

    What I would do if I hadn't heard about ZFS is partition the hard drives into identical 100GB partitions and, as my needs grow, assemble those partitions into md devices using linux-raid. Then I'd combine those devices using LVM into logical volumes where I'd put my data. So when I needed more space of e.g. the RAID-6 type, I'd take a 100GB partition from each hard drive, assemble them into another RAID-6 md device, and use it as physical storage for the volume group dedicated to RAID-6 data. Then I could grow the file system on that logical volume. On top of the RAID-6 and RAID-5 volume groups (managed by LVM) would reside completely independent file systems, which I'd later merge with multiple mount --bind into a single directory structure reflecting the logical structure of the data rather than that of the storage.

    But now that I've heard about ZFS, with all its performance, data-healing and compression capabilities, I cannot stop thinking about whether it could help me. If so, what do you think would be the best setup?
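    For concreteness, a hedged sketch of the md+LVM growth step described above (device names, sizes and volume names are made up for illustration; one 100GB partition is taken from each of the five drives):

        # Assemble five 100GB partitions into a new RAID-6 md device
        mdadm --create /dev/md10 --level=6 --raid-devices=5 \
              /dev/sda7 /dev/sdb7 /dev/sdc7 /dev/sdd7 /dev/sde7

        # Hand the new array to LVM and grow the RAID-6 volume group
        pvcreate /dev/md10
        vgextend vg_raid6 /dev/md10

        # Grow a logical volume and its filesystem in one step
        lvextend -r -L +250G /dev/vg_raid6/important_data

    ZFS would fold the md and LVM layers into one, but since redundancy is a property of the vdev rather than of a directory, mixing raidz2 and raidz1 to the same end would typically mean maintaining separate pools.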

    Read the article

  • Is there a learnable filter in Thunderbird for non-spam messages (as in Opera Mail)?

    - by Debilski
    One feature I like very much about Opera Mail is that you can have learnable filters for any purpose. So not only can you filter spam messages, but also messages that your friends sent you, or info mails from web platforms, without having to enter each and every mail address you want to filter. It actually works quite satisfyingly, and you can combine it with string filtering too. It makes a few mistakes in the beginning, but improves quickly once you have removed some of the false positives.

    However, there are a couple of drawbacks with the Opera Mail module. The filters are only 'virtual': with IMAP there is no easy way of mirroring the filtered structure back to the server, and when I'm using webmail I'll see the whole unfiltered mess in the inbox folder. Opera's not using the OS X address book - and neither does it use LDAP (which is not too important for me at the moment).

    So, I'm not specifically looking for a Thunderbird solution here; a way to fix things in Opera or Apple Mail would do as well. Or some other e-mail program I don't even know about yet. (So, to be clear, I'd like to have: OS X Address Book integration, learnable filters for any type of filtering, and the ability to push the filtered folder structure to the IMAP server.) But I thought that if it is possible, then most probably there would be an add-on for Thunderbird available. Any ideas?

    Read the article
