Search Results

Search found 25998 results on 1040 pages for 'home folder'.


  • New Location for .NET 4 GAC

    - by Ricardo Peres
    .NET 4 newcomers may have realised that the old GAC location (%WINDIR%\Assembly) does not contain .NET 4 global assembly cache assemblies. Indeed, they have moved to %WINDIR%\Microsoft.NET\Assembly. It is worth noting that this folder does not use the shell extension that the older one does, the extension that prevented us from looking directly at the folder's contents; IMO the new behavior is nice. The old folder continues to host pre-.NET 4 assemblies.
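    Since the new GAC folder is just a plain directory tree, you can confirm what lives there by enumerating it. Here is a minimal C# sketch; the path is the one documented above, everything else is illustrative:

        using System;
        using System.IO;

        class Gac4Explorer
        {
            static void Main()
            {
                // Root of the .NET 4 GAC; with no shell extension in the way,
                // it can be browsed like any other folder.
                string gacRoot = Path.Combine(
                    Environment.GetFolderPath(Environment.SpecialFolder.Windows),
                    @"Microsoft.NET\assembly");

                // Lists GAC_MSIL, GAC_32, GAC_64 and the assembly folders beneath.
                foreach (string dir in Directory.EnumerateDirectories(gacRoot))
                {
                    Console.WriteLine(dir);
                }
            }
        }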


  • Wildcards not being substituted

    - by user21463
    #!/bin/bash
    loc=`echo ~/.gvfs/*/DCIM/100_FUJI`
    rm -f /mnt/fujifilmA100
    ln -s "$loc" /mnt/fujifilmA100

    For some reason the wildcard * doesn't get substituted with the only possible value, and the variable is given the literal value /home/chris/.gvfs/*/DCIM/100_FUJI. Does anyone have an idea why? Please note: if glob expansion fails, the pattern is not substituted. I ran the commands:

    chris@comp2008:~$ loc=`echo ~/.gvfs/*/DCIM/100_FUJI`
    chris@comp2008:~$ echo $loc
    /home/chris/.gvfs/gphoto2 mount on usb%3A001,008/DCIM/100_FUJI

    So we can see the expansion should work. I have now switched to using:

    loc=`find ~/.gvfs -name 100_FUJI`

    I am just curious why it doesn't work as is. Debugging output using sh -x:

    echo /home/chris/.gvfs/*/DCIM/100_FUJI
    loc=/home/chris/.gvfs/*/DCIM/100_FUJI
    rm -f /mnt/fujifilmA100
    ln -s /home/chris/.gvfs/*/DCIM/100_FUJI /mnt/fujifilmA100


  • Compiling Gnucash 2.6.3 in Ubuntu 14.04

    - by wolveryn
    Downloaded the debian file from SourceForge and followed the instructions, but these errors appear; I re-downloaded the file several times with the same error. I want to install the latest GnuCash, not the one available in the Software Center. Thank you for your support.

    /qof/gnc-date/qof print date dmy buff: There are some differences between distros in the way they name locales, and this can cause trouble with the locale-based formatting. If you get the assert in this function, run locale -a and make sure that en_US, en_GB, and fr_FR are installed and that if a suffix is needed it's in the suffixes array.
    ** ERROR:test-gnc-date.c:465:test_gnc_setlocale: code should not be reached
    FAIL
    GTester: last random seed: R02Sd8d3d0e67be954baa8ec75d81a14c0e3
    /bin/bash: line 1: 18889 Terminated MALLOC_CHECK_=2 MALLOC_PERTURB_=$((${RANDOM:-256} % 256)) gtester --verbose test-qof
    make[5]: *** [test-nonrecursive] Error 143
    make[5]: Leaving directory `/home/ahmed/gnucash/gnucash-2.6.3/src/libqof/qof/test'
    make[4]: *** [check-am] Error 2
    make[4]: Leaving directory `/home/ahmed/gnucash/gnucash-2.6.3/src/libqof/qof/test'
    make[3]: *** [check-recursive] Error 1
    make[3]: Leaving directory `/home/ahmed/gnucash/gnucash-2.6.3/src/libqof/qof'
    make[2]: *** [check-recursive] Error 1
    make[2]: Leaving directory `/home/ahmed/gnucash/gnucash-2.6.3/src/libqof'
    make[1]: *** [check-recursive] Error 1
    make[1]: Leaving directory `/home/ahmed/gnucash/gnucash-2.6.3/src'
    make: *** [check-recursive] Error 1


  • How to install Joomla in Ubuntu as localhost?

    - by Leon-TastyDev
    I have installed the lamp-server package, but I am stuck on one certain step. Where it says to remove the installation folder, if I click the "Remove installation folder" button it says "error", and I guess this is because it doesn't have root privileges. So I ran sudo nautilus, went to the folder /var/www, and removed the installation folder. Now when I try to access my localhost I get: No configuration file found and no installation code available. Exiting... I am confused: it asked me to remove the folder, and now that it is removed it needs it?


  • VirtualHost and 403 Forbidden problem with Apache

    - by Joaquín L. Robles
    Hi people, I have exactly the same problem as in "403 Forbidden Error when accessing enabled virtual host", but I tried the provided solution and am still receiving 403 Forbidden. My site is called project, and its configuration file in /etc/apache2/sites-available is the following:

    <VirtualHost *:80>
        ServerAdmin webmaster@reweb
        ServerName www.online.project.com
        ErrorLog /logs/project-errors.log
        DocumentRoot /home/joarobles/Zend/workspaces/DefaultWorkspace7/online.project.com.ar/public
        <Directory /home/joarobles/Zend/workspaces/DefaultWorkspace7/online.project.com.ar>
            Order Deny,Allow
            Allow from all
            Options Indexes
        </Directory>
    </VirtualHost>

    I tried to change the owner of /home/joarobles/Zend/workspaces/DefaultWorkspace7/online.cobico.com.ar to www-data:www-data with recursion, but I'm still getting this error in project-errors.log:

    [Mon Jan 31 01:26:01 2011] [crit] [client 127.0.0.1] (13)Permission denied: /home/joarobles/.htaccess pcfg_openfile: unable to check htaccess file, ensure it is readable

    Any idea? Thanks!


  • Html.RenderAction Failed when Validation Failed

    - by Shaun
    The RenderAction method was introduced in the MVC Futures assembly when ASP.NET MVC 1.0 was released, and was finally announced as part of ASP.NET MVC 2.0. Similar to RenderPartial, RenderAction can display HTML markup defined in a partial view inside any parent view. But RenderAction gives us the ability to populate the data from an action which may differ from the action populating the main view. For example, in Home/Index.aspx we can invoke Html.RenderPartial("MyPartialView"), but the data of MyPartialView must be populated by the Index action of the Home controller. If we need MyPartialView to be shown in Product/Create.aspx, we have to copy (or invoke) the relevant code from the Index action in the Home controller in the Create action in the Product controller, which is painful. But if we use Html.RenderAction we can tell ASP.NET MVC from which action and controller the data should be populated. That way, in both the Home/Index.aspx and Product/Create.aspx views we just call Html.RenderAction("CreateMyPartialView", "MyPartialView"), and it will invoke the CreateMyPartialView action in the MyPartialView controller regardless of which main view it is called from. But in my current project we found a bug when I implemented a RenderAction call in the master page to show something that needs to connect to the back-end data center: it failed whenever the validation logic failed on a page. I created a sample application below.

    Demo application

    I created an ASP.NET MVC 2 application where I need to display the current date and time on the master page. I created an action in the Home controller named TimeSlot and stored the current date in ViewData. This method is marked as HttpGet, as it just retrieves some data instead of changing anything.

        [HttpGet]
        public ActionResult TimeSlot()
        {
            ViewData["timeslot"] = DateTime.Now;
            return View("TimeSlot");
        }

    Next, I created a partial view under the Shared folder to display the date and time string.

        <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<dynamic>" %>
        <span>Now: <%: ViewData["timeslot"].ToString() %></span>

    Then in the master page I used Html.RenderAction to display it in front of the logon link.

        <div id="logindisplay">
            <% Html.RenderAction("TimeSlot", "Home"); %>
            <% Html.RenderPartial("LogOnUserControl"); %>
        </div>

    It's fairly simple and worked well on every page I navigated to. But when I moved to the logon page and clicked the LogOn button without entering anything in the username and password fields, validation failed and my website crashed with the beautiful yellow error page. (I really like its color style and fonts...)

    How ASP.NET MVC executes Html.RenderAction

    In this example all other pages rendered successfully, which means ASP.NET MVC found the TimeSlot action under the Home controller in every situation except this one. The only difference is that when I clicked the LogOn button, the browser sent an HttpPost request to the server. Is that the reason for this bug? I created another action in the Home controller with the same action name, but for HttpPost.

        [HttpPost]
        [ActionName("TimeSlot")]
        public ActionResult TimeSlot(object dummy)
        {
            return TimeSlot();
        }

    Or, I can use the AcceptVerbsAttribute on the TimeSlot action to let it allow both HttpGet and HttpPost.

        [AcceptVerbs("GET", "POST")]
        public ActionResult TimeSlot()
        {
            ViewData["timeslot"] = DateTime.Now;
            return View("TimeSlot");
        }

    Then I repeated what I did before, and this time it worked well.

    Why do we need an action for HttpPost here when it only retrieves data? That is because of how ASP.NET MVC executes the RenderAction method. In the ASP.NET MVC source code we can see that when performing RenderAction, ASP.NET MVC creates a RequestContext instance from the current RequestContext and a ChildActionMvcHandler instance, which inherits from the MvcHandler class. ASP.NET MVC then processes the handler through the HttpContext.Server.Execute method. That means it performs the action as a stand-alone request and flushes the result into the TextWriter being used to render the current page. Since the request was an HttpPost when I clicked LogOn, ASP.NET MVC, while processing the ChildActionMvcHandler, looked for an action that allows the current request method, HttpPost. Our HttpGet-only TimeSlot method therefore did not match.

    Summary

    In this post I described a bug in my current project regarding the new Html.RenderAction method provided with ASP.NET MVC 2 when processing an HttpPost request. In the ASP.NET MVC world, the underlying HTTP information is more important than it was in the ASP.NET WebForms world. We need to pay more attention to the kind of request currently being made and how ASP.NET MVC processes it.

    Hope this helps, Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.


  • Important Note for Enablement Service Pack 1 for UPK 3.6.1

    - by marc.santosusso
    The following was originally posted to one of the UPK communities on LinkedIn. Since this post generated some feedback that this information was not well known, I thought it would be good to repost, which I've done with permission from Earl Sullivan. This is an FYI for those who have UPK 3.6.1 and applied Enablement Pack 1: there is a manual database update that needs to be run. Here is the information:

    To correct an issue with permissioning in the Library, this Service Pack, issued in March 2010, also contains scripts to update the database on the Oracle Database or Microsoft SQL server. Once you have run the Setup.exe file for the Service Pack, the necessary script files can be found at the root of the folder where the Developer is installed. These scripts must be run manually according to the instructions below.

    To update a database located on an Oracle Database server manually:

    1. Run the Setup.exe to install the files for the Service Pack.
    2. Start SQL*Plus and log in with the system account.
    3. At the command prompt, enter the path to the AlterSchemaObjects.sql script located at the root of the folder where the Developer is installed, and append the following parameters:
       - schema_owner - there is a limit of 20 characters on the schema owner name. You can find this information in the web.config file located in Repository.WS in the folder where the server is installed.
       - password - the existing schema owner password.
       Statement with generic parameters: @C:\AlterSchemaObjects.sql schema_owner password
    4. Run the AlterSchemaObjects.sql script.

    To update a database located on a Microsoft SQL server manually:

    1. Run the Setup.exe to install the files for the Service Pack.
    2. Log in to the database using the database administrator account.
    3. Open and edit the AlterDBObjects.sql file located at the root of the folder where the Developer is installed.
    4. Replace the ODServer text with the username used when the database was installed. You can find this information in the web.config file located in the Repository.WS folder in the folder where the server is installed.
    5. Change the database from master to the name of the existing Developer database and run the AlterDBObjects.sql script.

    Note: The database name is the initial catalog in the connection string in the web.config file.

    Editor's note: The database update fixes a problem where the permissions for a user were incorrectly updated when a group that the user had been removed from had its permissions changed.


  • DFS replication initial step problem

    - by vn
    Heya, I just set up DFS on my network and it's working fine, and now I'm trying to set up DFS-R on a test folder. But at the end of the procedure (all went fine: I selected my two folders, the primary folder, the replication topology and such) I get this error message (roughly translated from French): Unable to define security on the replicated folder. The shared administration folder doesn't exist. I'm also wondering whether any particular security is required on the folders to replicate so that DFS-R can access them. I was trying to add SYSTEM to the security, but it won't find it/allow me. The folder has many, many files and folders on the primary DFS pointer, but none on the second; I just created it with much the same rights. Note that the primary DFS pointer is on a 2008 server, and the DFS service and the secondary DFS pointer are on 2008 R2. Any help is very appreciated, thanks.


  • FAVICON: Favicon not showing up

    - by Russ
    I'm using a .png favicon file and it is not showing up on my site. Doing a grep, I see the following in home.htm, which looks right to me (I have also confirmed it's in the HEAD section within home.htm):

    home.htm: <link rel="shortcut icon" type="image/png" href="favicon.png">

    The favicon.png file is in the same directory as the home.htm file. Any suggestions are welcome! Thanks all.


  • FTP Upload ftpWebRequest Proxy

    - by Rodney Vinyard
    Searchable: FTP Upload ftpWebRequest Proxy; FTP command is not supported when using HTTP proxy

    In the article below I will cover two topics:

    1. C# and Windows command-line FTP upload with no proxy server
    2. C# and Windows command-line FTP upload with a proxy server

    Not covered here: Secure FTP / SFTP

    Sample attributes:
    - UploadFilePath = "\\servername\folder\file.name"
    - Proxy Server = "ftp://proxy.server/"
    - FTP Target Server = ftp.target.com
    - FTP User = "User"
    - FTP Password = "Password"

    With no proxy server

    Windows command line:

        > ftp ftp.target.com
        > ftp User: User
        > ftp Password: Password
        > ftp put \\servername\folder\file.name
        > ftp dir          (result: file.name listed)
        > ftp del file.name
        > ftp dir          (result: file.name deleted)
        > ftp quit

    C#:

        //-----------------
        // Start FTP directly against the target server
        //-----------------
        string relPath = Path.GetFileName(@"\\servername\folder\file.name");
        // result: relPath = "file.name"

        FtpWebRequest ftpWebRequest = (FtpWebRequest)WebRequest.Create("ftp://ftp.target.com/" + relPath);
        ftpWebRequest.Method = WebRequestMethods.Ftp.UploadFile;

        //-----------------
        // user - password
        //-----------------
        ftpWebRequest.Credentials = new NetworkCredential("User", "Password");

        //-----------------
        // set proxy = null!
        //-----------------
        ftpWebRequest.Proxy = null;

        //-----------------
        // Copy the contents of the file to the request stream.
        //-----------------
        StreamReader sourceStream = new StreamReader(@"\\servername\folder\file.name");
        byte[] fileContents = Encoding.UTF8.GetBytes(sourceStream.ReadToEnd());
        sourceStream.Close();
        ftpWebRequest.ContentLength = fileContents.Length;

        //-----------------
        // Transfer the contents into the request stream.
        //-----------------
        Stream requestStream = ftpWebRequest.GetRequestStream();
        requestStream.Write(fileContents, 0, fileContents.Length);
        requestStream.Close();

        //-----------------
        // Look at the response results.
        //-----------------
        FtpWebResponse response = (FtpWebResponse)ftpWebRequest.GetResponse();
        Console.WriteLine("Upload File Complete, status {0}", response.StatusDescription);

    With a proxy server

    Windows command line:

        > ftp proxy.server
        > ftp User: User@ftp.target.com
        > ftp Password: Password
        > ftp put \\servername\folder\file.name
        > ftp dir          (result: file.name listed)
        > ftp del file.name
        > ftp dir          (result: file.name deleted)
        > ftp quit

    C#:

        //-----------------
        // Start FTP via the FTP proxy
        //-----------------
        string relPath = Path.GetFileName(@"\\servername\folder\file.name");
        // result: relPath = "file.name"

        FtpWebRequest ftpWebRequest = (FtpWebRequest)WebRequest.Create("ftp://proxy.server/" + relPath);
        ftpWebRequest.Method = WebRequestMethods.Ftp.UploadFile;

        //-----------------
        // user - password (the target user and host go into the user name)
        //-----------------
        ftpWebRequest.Credentials = new NetworkCredential("User@ftp.target.com", "Password");

        //-----------------
        // set proxy = null!
        //-----------------
        ftpWebRequest.Proxy = null;

        //-----------------
        // Copy the contents of the file to the request stream.
        //-----------------
        StreamReader sourceStream = new StreamReader(@"\\servername\folder\file.name");
        byte[] fileContents = Encoding.UTF8.GetBytes(sourceStream.ReadToEnd());
        sourceStream.Close();
        ftpWebRequest.ContentLength = fileContents.Length;

        //-----------------
        // Transfer the contents into the request stream.
        //-----------------
        Stream requestStream = ftpWebRequest.GetRequestStream();
        requestStream.Write(fileContents, 0, fileContents.Length);
        requestStream.Close();

        //-----------------
        // Look at the response results.
        //-----------------
        FtpWebResponse response = (FtpWebResponse)ftpWebRequest.GetResponse();
        Console.WriteLine("Upload File Complete, status {0}", response.StatusDescription);
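    A side note of my own (not part of the original sample): the request stream and response can be wrapped in using blocks so they are closed even if the upload throws. A minimal sketch with the same placeholder values:

        using System;
        using System.IO;
        using System.Net;
        using System.Text;

        public static void UploadWithCleanup()
        {
            FtpWebRequest ftpWebRequest = (FtpWebRequest)WebRequest.Create("ftp://ftp.target.com/file.name");
            ftpWebRequest.Method = WebRequestMethods.Ftp.UploadFile;
            ftpWebRequest.Credentials = new NetworkCredential("User", "Password");
            ftpWebRequest.Proxy = null;

            byte[] fileContents = Encoding.UTF8.GetBytes(File.ReadAllText(@"\\servername\folder\file.name"));
            ftpWebRequest.ContentLength = fileContents.Length;

            // Closed by the using block even if Write throws.
            using (Stream requestStream = ftpWebRequest.GetRequestStream())
            {
                requestStream.Write(fileContents, 0, fileContents.Length);
            }

            using (FtpWebResponse response = (FtpWebResponse)ftpWebRequest.GetResponse())
            {
                Console.WriteLine("Upload File Complete, status {0}", response.StatusDescription);
            }
        }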


  • VSDB to SSDT part 4 : Redistributable database deployment package with SqlPackage.exe

    - by Etienne Giust
    The goal here is to use the SSDT SqlPackage tool to deploy the output of a Visual Studio 2012 database project, much in the same fashion as detailed here: http://geekswithblogs.net/80n/archive/2012/09/12/vsdb-to-ssdt-part-3--command-line-deployment-with-sqlpackage.exe.aspx

    The difference is that we want to do it on an environment where Visual Studio 2012 and SSDT are not installed. This might be the case for your production server.

    Package structure

    To get started, create a folder named "DeploymentSSDTRedistributable" with the following contents: the dacpac and dll files output by your Visual Studio 2012 database project, the publish.xml profile, and a SqlDacRuntime folder. If your database project references another database project, you need to put its dacpac and dll here too, otherwise deployment will not work. The publish.xml file is the publish configuration suitable for your target environment; it holds connection strings, SQLVARS parameters and deployment options, so review it carefully. The SqlDacRuntime folder (an arbitrarily chosen name) will hold the SqlPackage executable and its supporting libraries.

    Contents of the SqlDacRuntime folder

    You will be able to find the needed files in the following locations on a machine with Visual Studio 2012 Ultimate installed:

    C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin:
    - SqlPackage.exe
    - Microsoft.Data.Tools.Schema.Sql.dll
    - Microsoft.Data.Tools.Utilities.dll
    - Microsoft.SqlServer.Dac.dll

    C:\Windows\Microsoft.NET\assembly\GAC_MSIL\Microsoft.SqlServer.TransactSql.ScriptDom\v4.0_11.0.0.0__89845dcd8080cc91:
    - Microsoft.SqlServer.TransactSql.ScriptDom.dll

    Deploying

    Now take your DeploymentSSDTRedistributable deployment package to the remote machine. In a standard command window, place yourself inside the DeploymentSSDTRedistributable folder.

    You can first perform a check of what will be updated in the target database. The DeployReport action of SqlPackage.exe will help you do that. The following command outputs an XML report of the changes:

    "SqlDacRuntime/SqlPackage.exe" /Action:DeployReport /SourceFile:./Our.Database.dacpac /Profile:./Release.publish.xml /OutputPath:./ChangesToDeploy.xml

    You might get some warnings on Log and Data files like I did; you can ignore them. The tool also warns about data loss when removing a column from a table. By default, the publish.xml options will prevent you from deploying when data loss would occur (see the BlockOnPossibleDataLoss option inside the publish.xml file). Before actual deployment, take time to carefully review the changes to be applied in the ChangesToDeploy.xml file.

    When you are satisfied, you can deploy your changes with the following command:

    "SqlDacRuntime/SqlPackage.exe" /Action:Publish /SourceFile:./Our.Database.dacpac /Profile:./Release.publish.xml

    Et voilà! Your dacpac file has been deployed to your database. I've been testing this on a SQL 2008 Server (not R2), but it should work on 2005, 2008 R2 and 2012 as well.

    Many thanks to Anuj Chaudhary for his article on the subject: http://www.anujchaudhary.com/2012/08/sqlpackageexe-automating-ssdt-deployment.html
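    As an aside, since Microsoft.SqlServer.Dac.dll ships in the SqlDacRuntime folder anyway, the same publish step can be driven from C# through the DAC API instead of the command line. A minimal sketch, where the connection string, dacpac path and database name are placeholders:

        using Microsoft.SqlServer.Dac;

        class DacpacDeployer
        {
            static void Main()
            {
                // Placeholder target; adjust the connection string for your server.
                DacServices services = new DacServices("Data Source=.;Integrated Security=true");

                using (DacPackage package = DacPackage.Load(@".\Our.Database.dacpac"))
                {
                    // upgradeExisting: true updates the database in place, like /Action:Publish.
                    services.Deploy(package, "OurDatabase", upgradeExisting: true);
                }
            }
        }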


  • Can preventing directory listings in WordPress upload folders cause Google ranking drops when they cause 403 errors in Webmaster Tools?

    - by Kelly
    I recently moved to a new host that blocks crawling of my uploads folders but (hopefully) allows the files in the folders to be crawled. I now see many 403 errors in my Webmaster Tools, one for each folder in the uploads folder. For example, http://www.rewardcharts4kids.com/wp-content/uploads/2013/07/ shows a 403 error, yet I can access this file: http://www.rewardcharts4kids.com/wp-content/uploads/2013/07/lunch-box-notes.jpg even though I cannot access the folder it is in. My rankings went down after I moved to this host and I am wondering: could this be the reason? And is this how files/folders are supposed to be set up?


  • HP-UX - custom rsync path

    - by stack_zen
    Hi. There is a range of HP-UX 11.11 hosts on which I'm unable to install rsync (I'm limited to a non-privileged user). I've extracted the rsync binary plus libpopt.sl, libiconv.sl and libintl.sl from the depots into one of that user's directories: /home/zenith/rsync/. Problem is, I can't get my RH Linux box communicating with it:

    rsync -e --rsync-path=/home/zenith/rsync/rsync --compress=9 -pgtov --filter=+rs_/'*.log' --exclude='*' [email protected]:/home/zenith/service/logs/ /u01/rsync_depot/service/192.102.14.18/
    /usr/lib/dld.sl: Can't find path for shared library: libintl.sl
    /usr/lib/dld.sl: No such file or directory
    sh: 1644 Abort(coredump)

    I've added to the remote host's .profile:

    export SHLIB_PATH=/usr/lib:/home/zenith/rsync
    export PATH=$PATH:/home/zenith/rsync

    but still, libintl.sl is not found. How can I initialize the correct environment variable / get this to work?


  • Ubuntu One music streaming

    - by iknowshall
    I'm running Ubuntu 11.10, and just signed up for Ubuntu One and Ubuntu One Music. Generally, file sync is working fine, but music has been a complete failure so far. Here's what I'm looking at:

    - After 24 hours, not a single mp3 or ogg file from my laptop has synced to the Cloud Music folder.
    - I have 4 GB of data used, which is definitely not enough to include the music files. That's about the right amount for my docs and photos; with music, it should be more like 13 GB.
    - No music shows up in the Ubuntu One Music app on my phone, nor in the web view of my Ubuntu One cloud folder.

    I followed all the online instructions I could find. Basically, on my laptop: right-click on the Music folder and select "Ubuntu One Synchronize this folder". The Music folder now has a green check mark on it indicating it is synced, and the Ubuntu One app on my laptop says "File Sync is up to date". I did provide credit card information, and on the Ubuntu One website "Music Streaming" is listed as one of my services. So what am I doing wrong?

    I just read another post describing the same problem; the answer there says that, due to the large number of files being uploaded, it could take up to a week. However, as above, I am also being told that "File sync is up to date". Can somebody confirm that this is a bug and that I just need to wait for file sync to actually be up to date? Cheers


  • Apache2's recursive directory permission requirement

    - by Sn3akyP3t3
    The experience I've had thus far is with Ubuntu 10.04 and 12.04 64-bit, so if there are differences on other OSes I'd like to know whether this is an OS-specific problem or not. The issue I've experienced is mostly confusion; once the cause of the problem is identified and corrected, there are no further related problems. The symptom is Error 403 Forbidden. Typically the cause is attempting to use a directory other than /var/www/ for content. The cause is simply permissions, but it is puzzling why the required permissions must persist from at least one level below the root onward to the current working directory where the content is stored. For example:

    Alias /example/ "/home/user/permissions/can/be/confusing/with/apache/"
    <Directory /home/user/permissions/can/be/confusing/with/apache/>
        Options FollowSymLinks MultiViews
        AllowOverride None
        Order allow,deny
        Allow from all
    </Directory>

    with www-data being the user that spawned Apache and "user" being a member of the www-data group. Thus, if ownership of /home/user/* is user:user, then all that is necessary to display content with Apache is permissions of read and execute; so d---r-x--- should suffice, but for practical purposes I'm using drwxr-x--- for most. However, if all directories in /home/user/* have permissions of drwxr-x--- and /home/user/ itself has permissions of drwx------, then content will always fail with error 403. This is strange because it doesn't follow what I would consider the traditional logic of permissions, which should only be applicable to the current working directory or a particular file in that directory, not to directories further back in the chain. Is this by design or is it a bug?


  • Is there a way to import email from the raw email files?

    - by Chris Schmitz
    I have a client who recently switched hosts. When they switched hosts they didn't back up their email, and when they updated their configuration settings they lost everything. However, I was able to log in to their old hosting control panel and download their mail folder. I am wondering if there is a way to extract their emails and/or contacts from the files. I'm not sure what type of files they are (there is no extension), but the folder directory is structured like this:

    mail/
      .Drafts/
      .Sent/
      .Trash/
      cur/
      new/
      theirdomain.com/
      tmp/
      [email protected]
      maildir

    Inside of the theirdomain.com folder there is a folder for each account, and inside of that is a folder called "cur" which has a whole bunch of files with names like 1292945327.H169813P25958.uscentral21.myserverhosts.com,S=10117/2,S and if I preview those files I can see the actual email messages inside of them, but I have no idea how to get that information from those files into an email client. Anyone know of a way to work with these files? Thanks in advance for any insight you can share!
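    For what it's worth, that cur/new/tmp layout and those file names match the Maildir format, in which each file is a single plain-text RFC 822 message, so the messages can be read with ordinary file tools. As a rough illustration, a minimal C# sketch (the backup path is hypothetical) that lists the subject line of every message in one account's cur folder:

        using System;
        using System.IO;

        class MaildirPeek
        {
            static void Main()
            {
                // Hypothetical path to one account's "cur" folder inside the downloaded backup.
                string curFolder = @"C:\mailbackup\theirdomain.com\someuser\cur";

                foreach (string file in Directory.EnumerateFiles(curFolder))
                {
                    // Headers end at the first blank line; grab the Subject header if present.
                    foreach (string line in File.ReadLines(file))
                    {
                        if (line.Length == 0)
                            break;
                        if (line.StartsWith("Subject:", StringComparison.OrdinalIgnoreCase))
                        {
                            Console.WriteLine("{0}: {1}", Path.GetFileName(file), line);
                            break;
                        }
                    }
                }
            }
        }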


  • Spotlight Infinite Indexing issue (external data drive)

    - by Manca Weeks
    This is an external drive, formerly a boot drive, which is now in use only to access music files (Sibelius, audio, MIDI, Live, Logic etc.) without transferring the data into a new boot system, partly because of the issue I am about to describe, but mostly because the majority of the data is mainly there for archival purposes. The user is a composer and prominent musician and needs to be able to rehash the data at will. I have tried several things; here is a list:

    - make a complete filesystem clone with Antonio Diaz's ddrescue
    - run DiskWarrior on the copy, repairing whatever errors occurred
    - wipe out all ACLs on the entire drive
    - set all permissions to the same value, wide open 777
    - remove any system data (applications, system files, including hidden files to the best of my knowledge) by selecting only non-system/app data and using Carbon Copy Cloner to put only the data of interest onto a newly formatted drive
    - transfer data to a newly formatted drive folder by folder, resetting the Spotlight index between each addition to watch for issues (interesting here is that no issues occurred except with the Documents folder; when I transferred only the Documents folder to a newly formatted drive on its own, there was no trouble. It appears as though it may not be the content but the quantity or specific combination of data that results in problems)
    - use Data Rescue to transfer the data to yet another newly formatted drive to expose any missed hidden files

    Between each of the above steps I stopped Spotlight (searching for anything beginning with md in Activity Monitor - All Processes and quitting it) and deleted the .Spotlight-V100 directory from the affected drive, then restarted Spotlight indexing by adding the drive to the Spotlight privacy list and removing it again. In each case the same issue occurs: Spotlight begins indexing normally (or so it seems), then the estimated time increases, usually to 4 hours remaining. This is where it gets stuck; it continues to predict 4 hours remaining but never finishes. Sometimes I can't eject the drive and have to quit the md.. processes from Activity Monitor to be able to eject the drive without Force Eject. Once I disconnect the drive after the 4-hours-remaining situation, if I reattach it, Spotlight forever estimates the remaining time and never gets going again. So there it is. It is apparently not a filesystem issue, not a permissions issue, and not tied to any particular piece of hardware or protocol (I used USB and FW drives). I have tried this on several machines (3 to be precise) and on 10.5.8 and 10.6.5. Simply disabling Spotlight on this volume is not an option because the owner has no clue where things are, as the data on the volume dates back to music projects and compositions from 2003 and before. He needs to be able to query for results. Anyone got any ideas?

    ---update 2-6-11
    Since I have not received any responses except the one below, which appears to misunderstand my point, I am updating this post hoping to get more responses. I have used the terminal command sudo opensnoop -p PID, where PID is the mdworker process ID, to try to determine what Spotlight is doing and hopefully find the files it's having trouble with. Here's what happens: after indexing for a few hours, mdworker is gone. It no longer shows up in Activity Monitor under "All Processes", and the Terminal window with the opensnoop output stops moving. I then proceeded to execute the same command on mds to see what it was doing, and here's what I get, repeatedly:

    501 57 mds 21 /
    501 57 mds 21 /Volumes/Sno Leppard
    501 57 mds 21 /Volumes/Tiger
    501 57 mds 21 /Volumes/Leppard
    501 57 mds 21 /Volumes/Disk Warrior
    501 57 mds 21 /Volumes/ONM Data

    These represent all the volumes currently mounted in the system. All except ONM Data, which is the one I am trying to index, are currently excluded from Spotlight indexing. The sequence above repeats over and over, with slight variation, sometimes skipping one of the volumes. Questions: what happened to mdworker? What is mds doing? I will let this run until tomorrow morning and throughout the day and monitor for any changes. Any input would be very much appreciated. Even if you're not sure what the ultimate answer is, please alert me to anything you think I may be missing. Hopefully at some point we will figure this out... Thanks, M

    __final edit__
    I finally resolved the issue, and here is how I did it. I used the terminal command "sudo opensnoop -p PID", where PID is the process id of the processes I was monitoring; I was looking at all instances of mds and mdworker running in the system. After the third time through indexing the same data set (see info above), I contacted Apple and got to their highest level of support. They were flabbergasted as well and advised me to install yet another default 10.6.6 system and try again. The same pattern repeated: mds and mdworker(s) would start indexing, and eventually the Spotlight icon would say 6 hours remaining, all mdworkers were gone, and mds was at 90% or so of CPU. But I did finally figure out that the first time mdworker stopped like that, the last file it touched was always in the same folder. I excluded that folder from Spotlight search, and the rest of the data set indexed within about 2 hours with no strange behavior or failures. I copied that folder to another machine and Spotlight barfed immediately. Exclude that folder and all is well again. I have no clue what is causing this behavior, but I did find a functional solution to the problem. Anyone with a similar problem: run opensnoop on all instances of mds and mdworker and wait patiently for mdworker to exit. Look at the last file it touched and exclude the enclosing folder from being indexed. I was able to reproduce the issue and the solution on 2 different installs and 2 different copies of the data set. Hope this helps. If we find the actual cause of the folder being such a problem (it is called MICHAEL BRECKER RECORD SOLOS and contains almost 1 GB of audio-related files: Performer, Live, SD2, things like that), I will edit again to let you all know. Thanks for any attempts to help, M


  • Installing 12.04 within 11.04

    - by user288752
    I recently installed 11.04 from an installation disk (overwriting Windows in the process). I know 11.04 is no longer supported, but I had no problems upgrading it to 12.04 (via 11.10) a couple of months ago on another device. This time, though, things are different. I can't upgrade through Update Manager because Ubuntu tells me I have no internet connection, which is obviously incorrect. I have tried to circumvent the problem by downloading the 12.04 ISO from ubuntu.com directly, but now I'm troubled by something else. The download is successful, but after mounting the ISO I can't interact with it. When I try to access the Wubi installer it gives me the following message:

    Archive: /home/lars/.cache/.fr-7g75Fe/wubi.exe
    [/home/lars/.cache/.fr-7g75Fe/wubi.exe]
    End-of-central-directory signature not found. Either this file is not a zipfile, or it constitutes one disk of a multi-part archive. In the latter case the central directory and zipfile comment will be found on the last disk(s) of this archive.
    zipinfo: cannot find zipfile directory in one of /home/lars/.cache/.fr-7g75Fe/wubi.exe or /home/lars/.cache/.fr-7g75Fe/wubi.exe.zip, and cannot find /home/lars/.cache/.fr-7g75Fe/wubi.exe.ZIP, period.

    What am I doing wrong here?


  • So you want a French Site?

    - by juanlarios
    I thought I would write a quick write-up of how to create a French site in SharePoint 2007. I'm not talking about a Variation, but just a plain French site from the ground up. There were some gotchas that I felt were worth blogging about.

    First: go to the Microsoft TechNet article and follow the install instructions. Make sure that when you get to the download page you select "French" in the drop-down and then download and install the right language pack. I noticed that if I did not click the "change" button, even though I had selected the French language pack, it reverted back to the English language pack.

    Second: You will notice a couple of things. When you go to Central Admin you can now pick between a French or an English site. You will get this for any other language packs you install, and they will be listed in the drop-down. You will also notice that you now have French headings and French listings of sites. You see "Publishing" as a heading because I have a custom site definition that I deployed as a French site.

    Third: As you start navigating around and trying to create document libraries or sites, you will start getting errors like the following: "Cannot make a cache safe URL for "SelectorControls.js", file not found. Please verify that the file exists under the layouts directory. Troubleshoot issues with Windows SharePoint Services." Once you resolve the issue with this js file, you will find that other js files are missing as well. The only problem is that if you are not fluent in French, or whatever language you are trying to deploy, you'll have a tough time understanding the error messages, as they will all be in the new language. So let's just talk about what happened when you installed the language pack. In the 12 hive, under 12/Template, you will now see a 1033 folder and a 1036 folder. The 1036 folder is the one created by the language pack. What the above error is saying is that now that SharePoint is looking at the 1036 folder, well, it's missing some files. The nice thing is that these files are included in the 1033 folder (the English language pack). Simply copy the missing controls from the one folder to the other. There will be more than one conflict, so you will have to move several controls over; I can't remember how many, so simply add them as error messages come up. I had to add some navigation controls and some content selectors.

    That's all you need to install the French language pack and create site collections that are entirely in another language. Do not mistake this with Variations, where you can have multiple language sites. For those of you doing a little bit extra with this, let me share what I was doing and what I needed to get it working. I had a custom site definition which was obviously not showing up in my selection of French sites. I was under the impression that all English sites would show up in French and that the sites were simply routed to a new resource file for French content. That is the case, but there is a little extra that needs to be done if you have a custom site definition deployed:

    First: Under hive 12/Template/1033/XML there is a listing of the site definition files deployed for the English side of things. If you navigate to 12/Template/1036/XML and open one of the site definitions, you will see that they are similar and reference the existing site definitions installed on the server, except that they have some French added to the descriptions and names. Simply copy the XML file of your custom template to the 1036 folder to have it show up as a selection when you create a site collection with French selected in the drop-down. You can go ahead and change the description and name to suit the language it's under.

    Second: As part of my site definition, I packaged up several list templates that were saved as STP files. When you navigate to the list template listing, well, the templates are for English sites, not French, so I cannot create document libraries based on them. What now? Well, here comes KWizCom to the rescue! They have put out an "STP language converter" where you can take a site template or list template and convert it to any target language you are after. It's a free download; use it and you're good to go. One thing I will mention is that when I converted the English documents, I went ahead and converted them to French-Canadian. And it didn't work! I finally figured out that the French version the site was expecting was "French-France". I don't know why that is; it's just what needs to be done to get it working. When I did that, I was able to use the list templates that I had created in the English site for the French site.

    Hope it helps, good luck!


  • Unsatisfied Link Error and missing .so files when starting Eclipse

    - by Keidax
    I upgraded to the 12.04 beta yesterday. Now, when I try to start Eclipse, I get the splash screen and then this error message:

    An error has occurred. See the log file /home/gabriel/.eclipse/org.eclipse.platform_3.7.0_155965261/configuration/1335382319394.log.

    The log file says something like this:

    java.lang.UnsatisfiedLinkError: Could not load SWT library. Reasons:
    no swt-gtk-3740 in java.library.path
    no swt-gtk in java.library.path
    Can't load library: /home/gabriel/.swt/lib/linux/x86_64/libswt-gtk-3740.so
    Can't load library: /home/gabriel/.swt/lib/linux/x86_64/libswt-gtk.so

    followed by many more error messages. The /home/gabriel/.swt/lib/linux/x86_64/ directory exists, but is empty. I also tried reinstalling Eclipse, with no success. Any ideas?


  • How do I compile lbflow 1.1?

    - by Ali.A
    After using "sh ./configure" command, I encountered another error during lbflow package installation (a scientific one). The sequence of operations is here with error: ./configure --disable-gts sudo make [sudo] password for alireza: make all-recursive make[1]: Entering directory `/home/alireza/lbflow-1.1' Making all in src make[2]: Entering directory `/home/alireza/lbflow-1.1/src' source='lbflow.cpp' object='lbflow-lbflow.o' libtool=no \ DEPDIR=.deps depmode=none /bin/bash ../depcomp \ g++ -DHAVE_CONFIG_H -I. -I. -I.. -c -o lbflow-lbflow.o `test -f 'lbflow.cpp' || echo './'`lbflow.cpp **../depcomp: line 432: exec: g++: not found** **make[2]: *** [lbflow-lbflow.o] Error 127 make[2]: Leaving directory `/home/alireza/lbflow-1.1/src' make[1]: *** [all-recursive] Error 1 make[1]: Leaving directory `/home/alireza/lbflow-1.1' make: *** [all] Error 2** Do you have any idea to troubleshoot this problem? (And please notice that i have installed both g++ and gcc. it says g++: not found, but i have installed g++ from Ubuntu Software Center!)


  • How to start /usr/bin/bitcoind on boot?

    - by André
    I'm trying to get /usr/bin/bitcoind to start on boot, but without success. I have this script at /etc/init/bitcoind.conf:

    description "bitcoind"

    start on filesystem
    stop on runlevel [!2345]

    oom never
    expect daemon
    respawn
    respawn limit 10 60 # 10 times in 60 seconds

    script
    user=andre
    home=/home/$user
    cmd=/usr/bin/bitcoind
    pidfile=$home/.bitcoin/bitcoind.pid
    # Don't change anything below here unless you know what you're doing
    [[ -e $pidfile && ! -d "/proc/$(cat $pidfile)" ]] && rm $pidfile
    [[ -e $pidfile && "$(cat /proc/$(cat $pidfile)/cmdline)" != $cmd* ]] && rm $pidfile
    exec start-stop-daemon --start -c $user --chdir $home --pidfile $pidfile --startas $cmd -b -m
    end script

    After creating this script I ran the command: sudo initctl reload-configuration. When I restart Ubuntu, bitcoind does not start; I can only start it by manually running: sudo start bitcoind. Any clues on how to start bitcoind on boot?


  • Is there a way to group desktop icons on the taskbar?

    - by Memor-X
    I have a folder on the desktop which holds a bunch of programs I use frequently. I can't pin all these programs to the taskbar themselves, as there are too many for the screen width and it would just make the taskbar scrollable. I am wondering if I can do one of the following:

    1. Pin the icons to the taskbar under a single icon.
    2. Pin the folder to the taskbar separately from the Windows Explorer button, which opens the Libraries when no folders are open and otherwise shows me the open folders. That way, if I have 5 folders open plus my frequently used programs folder, I can just click on the frequently used programs folder icon on the taskbar and be given that folder only.

    I'm trying to reduce the number of clicks, the scrolling, and the scanning across the taskbar I need to do in order to find a program.


  • Copy New Files Only in .NET

    - by psheriff
    Recently I had a client that needed to copy files from one folder to another. However, there was a process running that would dump new files into the original folder every minute or so. So we needed to be able to copy over all the files one time, then go back a little later and grab just the new files. After looking into the System.IO namespace, none of the classes there met my needs exactly. Of course I could build it out of the various File and Directory classes, but then I remembered back to my old DOS days (yes, I am that old!). The XCopy command in DOS (or the command prompt for you pure Windows people) is very powerful. One of the options you can pass to this command is to grab only newer files when copying from one folder to another. So instead of writing a ton of code I decided to simply call the XCopy command using the Process class in .NET. The command I needed to run at the command prompt looked like this:

    XCopy C:\Original\*.* D:\Backup\*.* /q /d /y

    What this command does is copy all files from the Original folder on the C drive to the Backup folder on the D drive. The /q option says to do it quietly, without repeating all the file names as it copies them. The /d option says to get any newer files it finds in the Original folder that are not in the Backup folder, or any files that have a newer date/time stamp. The /y option will automatically overwrite any existing files without prompting the user to press the "Y" key to overwrite the file. To translate this into code that we can call from our .NET programs, you can write the CopyFiles method presented below.

    C#

    using System.Diagnostics;

    public void CopyFiles(string source, string destination)
    {
        ProcessStartInfo si = new ProcessStartInfo();
        string args = @"{0}\*.* {1}\*.* /q /d /y";

        args = string.Format(args, source, destination);

        si.FileName = "xcopy";
        si.Arguments = args;
        Process.Start(si);
    }

    VB.NET

    Imports System.Diagnostics

    Public Sub CopyFiles(source As String, destination As String)
        Dim si As New ProcessStartInfo()
        Dim args As String = "{0}\*.* {1}\*.* /q /d /y"

        args = String.Format(args, source, destination)

        si.FileName = "xcopy"
        si.Arguments = args
        Process.Start(si)
    End Sub

    The CopyFiles method first creates a ProcessStartInfo object. This object is where you fill in the name of the command you wish to run and the arguments you wish to pass to it. I created a string with the arguments, then filled in the source and destination folders using the string.Format() method. Finally you call the Start method of the Process class, passing in the ProcessStartInfo object. That's all there is to calling any command in the operating system. Very simple, and much less code than it would have taken had I coded it using the various File and Directory classes.

    Good Luck with your Coding,
    Paul Sheriff

    ** SPECIAL OFFER FOR MY BLOG READERS **
    Visit http://www.pdsa.com/Event/Blog for a free video on Silverlight entitled Silverlight XAML for the Complete Novice - Part 1.
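    One caveat worth noting (my own addition, not from the original article): Process.Start returns as soon as XCopy launches, so the copy may still be running when CopyFiles returns. If the caller needs to know the outcome, a sketch like this waits for the process and checks its exit code:

        using System.Diagnostics;

        public bool CopyFilesAndWait(string source, string destination)
        {
            ProcessStartInfo si = new ProcessStartInfo();
            si.FileName = "xcopy";
            si.Arguments = string.Format(@"{0}\*.* {1}\*.* /q /d /y", source, destination);
            si.UseShellExecute = false;
            si.CreateNoWindow = true;

            using (Process p = Process.Start(si))
            {
                p.WaitForExit();
                // XCopy exit codes: 0 = files copied, 1 = no files found to copy,
                // anything higher indicates an error.
                return p.ExitCode <= 1;
            }
        }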


  • Partitioning During Installation, Reinstalling Ubuntu, Backup Downloaded Repositories

    - by user209645
    I am new to Ubuntu and I have just finished installing 13.10 amd64 on my PC. This will replace Windows 7 as my only OS. I just want to clarify some issues that have been bugging me. I tried reading posts on the same topics, but I just can't wrap my head around it yet. I partitioned my 80GB drive into:

    /root: 30GB
    /home: 40GB
    /var : 3GB
    swap : 4GB (2GB of mem)

    Please correct me if I'm wrong about these:

    1. All of the users' documents are saved in their respective folders in /home. So say I want to clean install (format) Ubuntu: I don't need to make backups of /home and /var, as they are on separate partitions. But when re-installing, do I just choose /root and format it, and it will recognize all the partitions (not making another /home and /var inside /root)?
    2. Downloaded packages (from all the repositories) and all their dependencies are saved in /var. So after re-installing on the same PC (assuming I'm offline), it will just use the latest updates in /var if I choose to update? And if all the installed apps and their dependencies are all in there, all I need to do is re-install them without encountering errors?
    3. I have also read that you can back them up using aptoncd and then add the DVD to the sources. So if I download all the high-ranking apps using Synaptic, could I then have an all-in-one DVD installer?
    4. Is 30GB for / excessive, given that the bulk of files will be either in /home (personal documents, downloads, music, videos) or /var (updates, packages, installed apps)?

    Please excuse me for asking such questions, but I really want to explore and learn more.

