Search Results

Search found 14206 results on 569 pages for 'compressed folder'.


  • How to back up iTunes on Windows to a folder/share, i.e. without "Back Up to Disc"? No DVD writer available

    - by Chris W. Rea
    (Surprised I didn't already find an answer here to this!) I've got a computer on which I'd like to back up the iTunes library – music, movies, apps, everything. We're talking multiple gigabytes. Unfortunately, it seems that iTunes' own built-in "Back Up to Disc" feature (the only backup feature I can find in iTunes) only functions with a CD or DVD writer/burner. The computer in question does not have a DVD burner. While it has a CD burner, attempting to back up to CDs would require dozens of discs plus more time than I'm willing to spend swapping them. So: What is the recommended way to back up an entire iTunes library on a Windows computer, to a non-CD/DVD location such as an external hard drive or a network shared folder? Then, once such a backup has been performed, what is the process for restoring the library – e.g. after the computer has been repaved with a new version of Windows – so that iTunes is resurrected whole and recognizes devices it syncs with? Thank you!

    Read the article

  • Sharing my home folders with other users on the same PC

    - by Stephen Myall
    After reviewing similar questions on the same subject I'm still none the wiser. I want to share my music, pictures and video folders with the other users on my PC. I am using 11.10 and will be upgrading to 12.04. The method I have tried is to right-click on the folder (as administrator), select "Sharing Options", check all the necessary fields and give the share a name like "music-shared". Another dialog then pops up and I select "Set Nautilus Permissions". When the other user logs on, they go to their Home folder, click on the network and can see the "music-shared" folder, but they get a message that they do not have the necessary permissions to view its contents. I'm sure I'm missing something simple. My Home folder is encrypted and I am willing to decrypt it to make this work. Unlike other questions on this site, I don't have a separate partition. I would be grateful for any help.
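
    Not the Nautilus sharing dialog itself, but one commonly suggested workaround sketched in shell form, assuming the media is moved out of the encrypted home into a location both accounts can reach; the user names, group name and path below are placeholders, not values taken from the question:

        # Create a shared location outside the encrypted home (path is a placeholder)
        sudo mkdir -p /home/shared/Music
        # Put both accounts in a common group (group and user names are placeholders)
        sudo groupadd mediashare
        sudo usermod -aG mediashare firstuser
        sudo usermod -aG mediashare seconduser
        # Group ownership plus the setgid bit so new files inherit the group
        sudo chgrp -R mediashare /home/shared/Music
        sudo chmod -R 2775 /home/shared/Music

    Both users would need to log out and back in for the new group membership to take effect.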

    Read the article

  • Cannot save tar.gz file to /usr/local

    - by ATMathew
    I'm using the following instructions to install and configure Hadoop on Ubuntu 10.10: http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/#installation I tried to save the compressed tar.gz file to /usr/local/ but it just won't save. I've tried saving the tar.gz in my home folder and on the desktop and then copying the files to the desired folder, but I get an error telling me I don't have permission. How do I save and extract a tar.gz archive to /usr/local/hadoop?
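
    A minimal sketch of the usual way around the permission error, assuming the archive was downloaded into the home directory and is named hadoop-0.20.2.tar.gz (the file name is a placeholder; adjust it to whatever version the tutorial uses). /usr/local is writable only by root, so the extraction is run through sudo and ownership is then handed to the regular user:

        cd ~
        sudo mkdir -p /usr/local/hadoop
        # Extract straight into /usr/local/hadoop, dropping the archive's top-level directory
        sudo tar -xzf hadoop-0.20.2.tar.gz -C /usr/local/hadoop --strip-components=1
        # Give your own user ownership so later Hadoop configuration doesn't need root
        sudo chown -R "$USER" /usr/local/hadoop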

    Read the article

  • How to assign hotkeys in OS X Snow Leopard

    - by 2816
    I am trying to assign a hotkey to open a folder in Snow Leopard. For example, in Windows I could simply press Windows+E to open the My Computer folder (or manually assign whatever folder I wanted to open). Is there a way to get this same behaviour in OS X? I want to be able to launch applications and open folders with my own keyboard mappings. For launching applications I use Automator: create a service that receives no input and uses "Launch Application" (from the Utilities library). Then I can assign a keyboard shortcut to this service, and I can launch applications with keyboard shortcuts. I still don't know how to open a folder. I know this can be done using Quicksilver, but I am looking for a built-in approach that does not require any additional installs.

    Read the article

  • Windows Live SkyDrive: How To Move or Copy Files Between Folders

    - by Gopinath
    Microsoft has a very simple and easy-to-use interface for moving files between folders in the Windows operating system, but their own cloud storage service, Windows Live SkyDrive, complicates these simple, everyday operations. We need a guide just to figure out how to perform basic copy/move operations. A couple of years ago we wrote about moving files between folders in an old version of SkyDrive, but that guide no longer holds up, as SkyDrive has gone through many user interface changes in the recent past. Today one of our readers asked us how to move/copy files in the latest version of SkyDrive, and here are the steps to follow:

    1. Log in to your Windows Live SkyDrive.
    2. Select the file you want to move or copy by clicking on the information icon (see 2 in the image below).
    3. After selecting the information icon, expand the Information section displayed on the right-side panel to access the Move and Copy options (see 3 in the image below).
    4. To move the selected file to another folder, select the Move option and SkyDrive will guide you through the folder selection user interface for choosing the target folder.
    5. Once you navigate to the target folder where you want to move the file, click on "Move this file into <<Target Folder>>".
    6. You are done.

    Dear Microsoft, SkyDrive provides us tons of free storage, but please make its user interface a bit better so that we don't need to write guides to perform basic operations. Hope you listen to your customers.

    Read the article

  • What You Said: Your Tech Spring Cleaning Routines

    - by Jason Fitzpatrick
    Earlier this week we asked you to share your tech spring cleaning routine; now we're back to highlight your tips, tricks, and techniques. What tools rule the spring cleaning roost? Compressed air and microfiber cloths are the tools of choice by a wide margin. D^Angelo highlights the software and physical tools he uses:

    Back up all the important stuff just to be safe (c:/, my documents, desktop, drivers).
    Clean the dust with some Office Depot compressed air, avoiding spinning the fans.
    Use a brush for the small places (fans, memory, capacitors, etc.).
    Use some dielectric spray on the motherboard.
    If the PC turned on without problems, it's time to use CCleaner; check if there is a toolbar installed or other unusual software that I don't want.
    Run my antivirus software or Malwarebytes; some Defraggler maybe.

    Read the article

  • How to flip a BC6/BC7 texture?

    - by postgoodism
    I have some code to load DDS image files into OpenGL textures, and I'd like to extend it to support the BC6 and BC7 compressed formats introduced in D3D11. Since DirectX and OpenGL disagree about whether a texture's origin is in the upper-left or lower-left corner, my DDS loader flips each image's pixels along the Y axis before passing the pixels to OpenGL. Flipping compressed textures presents an additional wrinkle: in addition to flipping each row of 4x4-pixel blocks, you also need to flip the pixels within each block. I found code here to flip BC1/BC2/BC3 blocks, and from the block diagrams on MSDN it was easy to adapt the BC3-flipping code to handle BC4 and BC5. The BC6 and BC7 formats look significantly more intimidating, though. Is there a similar bit-twiddling trick to flip these formats, or would I have to fully decompress and recompress each block?

    Read the article

  • Guidance: How to lay out your files for an Ideal Solution

    - by Martin Hinshelwood
    Creating a solution and having it maintainable over time is an art, not a science. I like being pedantic and having a place for everything, no matter how small. For setting up the Areas to run multiple projects under one solution see my post on When should I use Areas in TFS instead of Team Projects, and for an explanation of branching see Guidance: A Branching strategy for Scrum Teams.

    Update 17th May 2010 – We are currently trialling running a single Sprint branch to improve our history.

    Whenever I set up a new Team Project I implement the basic version control structure. I put "readme.txt" files in the folder structure explaining the different levels, and a solution file called "[Client].[Product].sln" located at "$/[Client]/[Product]/DEV/Main" within version control. Developers should add any projects you need to create to that solution in the format "[Client].[Product].[ProductArea].[Assembly]", and they will automatically be picked up and built when you set up Automated Builds using Team Foundation Build. All test projects need to be done using MSTest to get proper IDE and Team Foundation Build integration out-of-the-box, and be named for the assembly they are testing with a naming convention of "[Client].[Product].[ProductArea].[Assembly].Tests".

    Here is a description of the folder layout; this content should be replicated in readme files under version control in the relevant locations so that even developers new to the project can see how to do it.

    Figure: The Team Project level – at this level there should be a folder for each of the products that you are building, if you are using Areas correctly in TFS 2010. You should try very hard to avoid spaces, as these things always end up in a URL eventually, e.g. "Code Auditor" should be "CodeAuditor".

    Figure: Product level – at this level there should be only three folders (DEV, RELEASE and SAFE), all of which should be in capitals. These folders represent the three stages of your application production line. Each of them may contain multiple branches, but this format leaves all of your branches at the same level.

    Figure: The DEV folder is where all of the development branches reside. The DEV folder will contain the "Main" branch and all feature branches, if they are being used. The DEV designation specifies that all code in every branch under this folder has not been released or made ready for release, and feature branches MUST merge (Forward Integrate) from Main and stabilise prior to merging (Reverse Integration) back down into Main and being decommissioned.

    Figure: In the feature branching scenario only merges are allowed onto Main; no development can be done there. Once we have a mature product it is important that new features being developed in parallel are kept separate. This would most likely be used if we had more than one Scrum team working on a single product.

    Figure: When we are ready to do a release of our software we will create a release branch that is then stabilised prior to deployment. This protects the serviceability of our released code, allowing developers to fix bugs and re-release an existing version.

    Figure: All bugs found in a release are fixed on the release branch and a new deployment is created. After the deployment is created, the bug fixes are then merged (Reverse Integration) into the Main branch. We do this so that we separate out our development from our production-ready code.

    Figure: SAFE or RTM is a read-only record of what you actually released. Labels are not immutable, so they are useless in this circumstance. When we have completed stabilisation of the release branch and we are ready to deploy to production, we create a read-only copy of the code for reference. In some cases this could be a regulatory concern, but in most cases it protects the company building the product from legal entanglements based on what you did or did not release.

    Figure: This allows us to reference any particular version of our application that was ever shipped.

    In addition, I am an advocate of having a single solution with all the project folders directly under the "Trunk"/"Main" folder and using the full name for the project folders.

    Figure: The ideal solution.

    If you must have multiple solutions, because you need to use more than one version of Visual Studio, name the solutions "[Client].[Product][VSVersion].sln" and have them reside in the same folder as the other solution. This makes automated builds easier and improves the discoverability of your code and its dependencies. Send me your feedback!

    Read the article

  • Using Sandcastle to build code contracts documentation

    - by DigiMortal
    In my last posting about code contracts I showed how code contracts are documented in XML documents. In this posting I will show you how to get code contracts documented with Sandcastle and Sandcastle Help File Builder. Before we start, let's download the Sandcastle tools we need: Sandcastle and Sandcastle Help File Builder. Install Sandcastle first and then Sandcastle Help File Builder. Because we are generating only HTML-based documentation to upload to a server, we don't need any other tools. Of course, we need Cassini or IIS, but I expect that to already be on your machine.

    Open your project and turn on XML documentation for the project and for contracts. Now let's run Sandcastle Help File Builder. We have to create a new project and add our Visual Studio solution to it. Now set the HelpFileFormat parameter value to Website and let the builder build the help. You have to wait about two or three minutes until the help is ready. Take a look at the documentation that Sandcastle generated – you will not see much information there about code contracts and their rules.

    Enabling code contracts documentation

    Now let's include code contracts in the documentation. Follow these steps:

    1. Open the Sandcastle folder and make a copy of the vs2005 folder.
    2. Open the CodeContracts folder (c:\program files\microsoft\contracts\) and unzip the archive from its sandcastle folder.
    3. Copy all unzipped files to the Sandcastle folder.
    4. Create (yes, create a new one) and build your Sandcastle Help File Builder documentation project again.
    5. Open the help.

    In my case I see something like this now. As you can see, the contracts are documented pretty well. We can easily turn on code contracts XML documentation generation and all our contracts are documented automatically. To get the documentation to work we had to use the Sandcastle help file fixes that are installed with Code Contracts, and if we had a previous Sandcastle Help File Builder project we had to create it from scratch to get the new rules accepted. Once the documentation support for contracts works, we have to do nothing more to get contracts documented.

    Read the article

  • Commands get "permission denied" when using an absolute path

    - by Markos
    I have folders set up this way: /srv/samba/video

        getfacl /srv/samba/video
        # file: srv/samba/video
        # owner: root
        # group: nogroup
        user::rwx
        group::---
        group:sambaclients:rwx
        group:deluge:rwx
        mask::rwx
        other::---
        default:user::rwx
        default:group::---
        default:group:sambaclients:rwx
        default:group:deluge:rwx
        default:mask::rwx
        default:other::---

    That means user deluge has rwx on the folder /srv/samba/video. However, when running commands as user deluge, I am getting weird permission errors. When in the folder /srv/samba/video, sudo -u deluge mkdir foo works flawlessly. But when using an absolute path, sudo -u deluge mkdir /srv/samba/video/foo gives me permission denied. When running sudo -u deluge id, I get the output uid=113(deluge) gid=124(deluge) skupiny=124(deluge), which shows that user deluge is indeed in group deluge. The behaviour was also the same when I gave the permissions to user deluge rather than just group deluge. When executing as a non-system user, it does work. The reason I want to use absolute paths is that I am using an automatically triggered post-download script which extracts some files into the folder. I have spent way too many hours trying to solve this problem myself. mkdir isn't the only command that fails; touch does the same thing, so I suspect that it's not mkdir's fault. If you need more info, I will try to put it in here, just ask. Thanks in advance.

    Edit: It seems that the root of the problem is the ACL set on the parent folder /srv/samba, which does not grant permissions to deluge (but does not deny them either):

        getfacl /srv/samba
        # file: srv/samba
        # owner: root
        # group: nogroup
        user::rwx
        group::---
        group:sambaclients:rwx
        mask::rwx
        other::---
        default:user::rwx
        default:group::---
        default:group:sambaclients:rwx
        default:mask::rwx
        default:other::---

    If I grant the permission on this folder as well, it suddenly starts to work, so I believe that the ACL on /srv/samba is somehow denying permissions to deluge. So the question is: how do I set ACLs on both /srv/samba and /srv/samba/video so that sambaclients have access to the whole of /srv/samba and its subdirectories, and deluge has access only to /srv/samba/video and its subdirectories?
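
    A sketch of one way to do that, assuming the diagnosis in the edit is right: the deluge group only needs traverse (execute) rights on the parent /srv/samba, not read access, while sambaclients keeps its existing full access everywhere.

        # Let deluge pass through the parent without being able to list or read it
        sudo setfacl -m g:deluge:x /srv/samba
        # Full access plus a matching default ACL for deluge on the video folder only
        sudo setfacl -R -m g:deluge:rwx,d:g:deluge:rwx /srv/samba/video
        # sambaclients already has rwx (and a default ACL) at both levels, so it is unchanged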

    Read the article

  • Strategy for managing lots of pictures for a website

    - by Nate
    I'm starting a new website that will (hopefully) have a lot of user-generated pictures. I'm trying to figure out the best way to store and serve these pictures. The CMS I'm using (Umbraco) has a media library that puts a folder on the server for each image. Inside of there you can have different sizes of that same image. That folder has an ID on it and the database has additional information for that image along with the ID of the folder. This works great for small sites, but what if the pictures get up to 10,000, 100,000 or 1,000,000? It seems like the lookup on the directory would take a long time to find the correct folder. I'm on Windows 2008 if that makes a difference. I'm not so worried about load. I can load balance my server pretty easily and replicate the images across the servers. The nature of the site won't have a lot of users on it either, but it could have a lot of pics. Thanks. -Nate

    EDIT: After some thought I think I'm going to create a directory for each user under a root image folder, then have the user's pictures under that. I would be pretty stoked if I had even 5,000 users, so that shouldn't be too bad of a linear lookup. If it does get slow I will break it down into folders like /media/a/adam/image123.png. If it ever gets really big I will expand the above method to build a bigger tree. That would take a LOT of content though.

    Read the article

  • Installing Midnight Commander from sources (no root privileges)

    - by ouroboros
    I tried to configure and build glib-2.26.1/ with ./configure --prefix=/localfolder followed by make and make install, but it fails at the make stage. Trying to configure mc-4.6.1/ and run make obviously doesn't work either. What are the steps I need to take in order to install Midnight Commander for my local user in a custom folder? make for glib gives me these errors:

        /usr/bin/msgfmt: found 2 fatal errors
        cp: cannot stat `test.mo': No such file or directory
        gmake[4]: *** [test.mo] Error 1
        gmake[4]: Leaving directory `/remote/folder/mc/glib-2.26.1/gio/tests'
        gmake[3]: *** [all-recursive] Error 1
        gmake[3]: Leaving directory `/remote/folder/mc/glib-2.26.1/gio'
        gmake[2]: *** [all] Error 2
        gmake[2]: Leaving directory `/remote/folder/mc/glib-2.26.1/gio'
        gmake[1]: *** [all-recursive] Error 1
        gmake[1]: Leaving directory `/remote/folder/mc/glib-2.26.1'
        gmake: *** [all] Error 2
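
    For reference, the general shape of a no-root install into a prefix you own looks roughly like the sketch below. $HOME/local is an arbitrary choice of prefix, both source trees are assumed to be unpacked in the current directory, and the msgfmt failure above appears to be a separate gettext problem rather than something caused by the prefix:

        PREFIX=$HOME/local        # any directory you can write to
        mkdir -p "$PREFIX"

        # Build and install glib into the private prefix first
        cd glib-2.26.1
        ./configure --prefix="$PREFIX"
        make && make install
        cd ..

        # Point mc's configure at that glib, then build and install it
        cd mc-4.6.1
        PKG_CONFIG_PATH="$PREFIX/lib/pkgconfig:$PKG_CONFIG_PATH" ./configure --prefix="$PREFIX"
        make && make install

        # Make the result usable from your shell (add these to ~/.bashrc to persist)
        export PATH="$PREFIX/bin:$PATH"
        export LD_LIBRARY_PATH="$PREFIX/lib:$LD_LIBRARY_PATH"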

    Read the article

  • Easy Profiling Point Insertion

    - by Geertjan
    One really excellent feature of NetBeans IDE is its Profiler. What's especially cool is that you can analyze code fragments: you can right-click in a Java file and then choose Profiling | Insert Profiling Point. When you do that, you're able to analyze code fragments, i.e., from one statement to another statement, e.g., how long a particular piece of code takes to execute: https://netbeans.org/kb/docs/java/profiler-profilingpoints.html

    However, right-clicking a Java file and then going all the way down a longish list of menu items to find "Profiling", and then "Insert Profiling Point", is a lot less easy than right-clicking in the sidebar (known as the glyphgutter) and setting a profiling point in exactly the same way as a breakpoint. That's much easier and more intuitive, and makes it far more likely that I'll use the Profiler at all. Once profiling points have been set then, as always, another menu item is added for managing the profiling point.

    To achieve this, I added the following to the "layer.xml" file:

        <folder name="Editors">
            <folder name="AnnotationTypes">
                <file name="profiler.xml" url="profiler.xml"/>
                <folder name="ProfilerActions">
                    <file name="org-netbeans-modules-profiler-ppoints-ui-InsertProfilingPointAction.shadow">
                        <attr name="originalFile" stringvalue="Actions/Profile/org-netbeans-modules-profiler-ppoints-ui-InsertProfilingPointAction.instance"/>
                        <attr name="position" intvalue="300"/>
                    </file>
                </folder>
            </folder>
        </folder>

    Notice that a "profiler.xml" file is referred to in the above, in the same location as where the "layer.xml" file is found. Here is its content:

        <!DOCTYPE type PUBLIC '-//NetBeans//DTD annotation type 1.1//EN' 'http://www.netbeans.org/dtds/annotation-type-1_1.dtd'>
        <type name='editor-profiler'
              description_key='HINT_PROFILER'
              localizing_bundle='org.netbeans.eppi.Bundle'
              visible='true'
              type='line'
              actions='ProfilerActions'
              severity='ok'
              browseable='false'/>

    The only disadvantage is that this registers the profiling point insertion in the glyphgutter for all file types. But that's true for the debugger too, i.e., there's no MIME-type-specific glyphgutter; instead, it is shared by all MIME types. It is a little confusing that the profiling point insertion can now, in theory, be set for all MIME types, but that's also true for the debugger, even though it doesn't apply to all MIME types. That probably explains why the profiling point insertion can only be done, officially, from the right-click popup menu of Java files, i.e., the developers wanted to avoid confusion and make it available to Java files only. However, since I'm already aware that I can't set the Java debugger in an HTML file, I'm also aware that the Java profiler can't be set that way either. If you find this useful too, you can download and install the NBM from here: http://plugins.netbeans.org/plugin/55002

    Read the article

  • How can I get something like Nautilus's "spatial" behaviour in 11.10 and later?

    - by Dylan McCall
    Until recently, Nautilus had an optional "spatial" mode as an alternative to its usual browser mode. Users could enable it by opening Nautilus's preferences and selecting "Open each folder in its own window." Choosing this option resulted in a stripped down interface, and folders would always open in the same place on the screen. It looked something like this: That checkbox is still there, but it doesn't do the same thing: while every folder opens in a new window, those windows are all the same size and positioned the same way. Nautilus doesn't restore the size and position of each folder, which is what spatial mode was all about. This difference really shows if someone opens a folder in a new window, and then opens the same folder again. Nautilus will create another window instead of focusing the existing window. I help someone who is currently using Ubuntu 10.04, and he will be upgrading to 12.04 in the future. He is very comfortable with Nautilus's spatial behaviour and I think he would be disappointed to lose it. Is there an alternative file manager, or perhaps some less obvious option, that will give him an interface similar to the one he is comfortable with?

    Read the article

  • PowerShell: Install-dotNET4 function

    - by marc dekeyser
    This function will download and install .NET 4.0. It uses the Get-Framework-Versions function to determine whether the installation is necessary. Internet connectivity is required, as the script auto-downloads the setup file (and then sleeps for 360 seconds... I had a function in there to monitor for install completion at first, but it turns out the setup file spawns so many child processes that the function just got confused and locked up -_-). Alternatively you could drop the installation file in the folder specified in the $folderPath variable; that will skip the download and use the file. This function easily adapts to other versions, e.g. I use it for PowerShell 3 installs as well!

        Function install-dotNet4 () {
            if (($InstalledDotNET -eq "4.0") -or ($InstalledDotNET -eq "4.0c")) {
                write-host ".NET 4.0 Framework is already installed" -foregroundcolor Green
            } else {
                # Set a var for the folder you are looking for
                $folderPath = 'C:\Temp'

                # Check if the folder exists; if not, create it
                if (Test-Path $folderPath) {
                    Write-Host "The folder $folderPath exists." -ForeGroundColor Green
                } else {
                    Write-Host "The folder $folderPath does not exist, creating..." -NoNewline -ForegroundColor Red
                    New-Item $folderPath -type directory | Out-Null
                    Write-Host " - done!" -ForegroundColor Green
                }

                # Check if the file exists; if not, download it
                $file = $folderPath + "\dotNetFx40_Full_x86_x64.exe"
                if (Test-Path $file) {
                    write-host "The file $file exists." -ForeGroundColor Green
                } else {
                    # Download Microsoft .NET 4.0 Framework
                    Write-Host "Downloading Microsoft .Net 4.0 Framework..." -nonewline -ForeGroundColor DarkYellow
                    $clnt = New-Object System.Net.WebClient
                    $url = "http://download.microsoft.com/download/9/5/A/95A9616B-7A37-4AF6-BC36-D6EA96C8DAAE/dotNetFx40_Full_x86_x64.exe"
                    $clnt.DownloadFile($url, $file)
                    Write-Host " - done!" -ForegroundColor Green
                }

                # Install Microsoft .NET Framework
                Write-Host "Installing Microsoft .Net Framework..." -nonewline -ForegroundColor DarkYellow
                $dotNET4 = $folderPath + "\dotNetFx40_Full_x86_x64.exe /quiet /norestart"
                Invoke-Expression $dotNET4
                write-host " - done!" -ForegroundColor Green
                start-sleep -seconds 360
            }
        }

    Read the article

  • Is there a clean separation of my layers with this attempt at Domain Driven Design in XAML and C#

    - by Buddy James
    I'm working on an application. I'm using a mixture of TDD and DDD, and I'm working hard to separate the layers of my application, which is where my question comes in. My solution is laid out as follows:

        Solution
          MyApp.Domain (WinRT class library)
            Entity (folder)
              Interfaces (folder)
                IPost.cs (interface)
              BlogPosts.cs (implementation of IPost)
            Service (folder)
              Interfaces (folder)
                IDataService.cs (interface)
              BlogDataService.cs (implementation of IDataService)
          MyApp.Presentation (Windows 8 XAML + C# application)
            ViewModels (folder)
              BlogViewModel.cs
            App.xaml
            MainPage.xaml (contains a property of BlogViewModel)
          MyApp.Tests (WinRT unit testing project used for my TDD)

    So I'm planning to use my ViewModel with the XAML UI. I'm writing a test and defining my interfaces in my system, and I have the following code thus far:

        [TestMethod]
        public void Get_Zero_Blog_Posts_From_Presentation_Layer_Returns_Empty_Collection()
        {
            IBlogViewModel viewModel = _container.Resolve<IBlogViewModel>();
            viewModel.LoadBlogPosts(0);
            Assert.AreEqual(0, viewModel.BlogPosts.Count, "There should be 0 blog posts.");
        }

    viewModel.BlogPosts is an ObservableCollection<IPost>. Now, my first thought is that I'd like the LoadBlogPosts method on the ViewModel to call a static method on the BlogPost entity. My problem is that I feel I need to inject the IDataService into the entity object so that it promotes loose coupling. Here are the two options I'm struggling with:

    1. Not use a static method and instead use a member method on the BlogPost entity. Have the BlogPost take an IDataService in the constructor and use dependency injection to resolve the BlogPost instance and the IDataService implementation.
    2. Don't use the entity to call the IDataService. Put the IDataService in the constructor of the ViewModel and use my container to resolve the IDataService when the ViewModel is instantiated.

    With option one the layers will look like this: ViewModel (presentation layer) - Entity (domain layer) - IDataService (service layer). With option two: ViewModel (presentation layer) - IDataService (service layer).

    Read the article

  • apt-get update bzip2 errors

    - by Tejas Kale
    I installed Ubuntu 11.10 today on my Lenovo W500. After that, when I tried running sudo apt-get update, this is the error I am getting:

        Get:117 http://ftp.jaist.ac.jp oneiric-security/universe TranslationIndex [73 B]
        99% [48 Sources bzip2 0 B] [22 Sources bzip2 5,294 kB]    1,983 kB/s 0s
        bzip2: Compressed file ends unexpectedly; perhaps it is corrupted? *Possible* reason follows.
        bzip2: Inappropriate ioctl for device
        Input file = (stdin), output file = (stdout)
        It is possible that the compressed file(s) have become corrupted.
        You can use the -tvv option to test integrity of such files.
        You can use the `bzip2recover' program to attempt to recover data from undamaged sections of corrupted files.

    I found the following similar question: Errors while updating Ubuntu 11.10. I tried the solutions mentioned there (changing the download server, running apt-get clean, apt-get autoclean), and I have also tried removing the /var/cache/apt/archives/lists directory. I am still unable to install any new packages.
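
    For what it's worth, the package indexes that bzip2 is complaining about live under /var/lib/apt/lists rather than /var/cache/apt/archives, so one hedged thing to try is clearing that directory and letting apt rebuild it:

        # Remove the (possibly corrupted) downloaded package indexes and re-fetch them
        sudo rm -rf /var/lib/apt/lists/*
        sudo apt-get clean
        sudo apt-get update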

    Read the article

  • How to do simple multitasked loop processing over filenames with PowerShell?

    - by Ville Koskinen
    I'm batch transcoding some 50 GB of video files on a USB hard disk which is connected to a WLAN router. The drive is mapped as a network drive on my Windows 7 laptop. The speed handicap of the WLAN causes some parts of the processing to become unnecessarily slow, so I would like to do the following with PowerShell:

    1. List the names of the files on the network drive to be transcoded.
    2. Copy the first file to a temporary folder on my laptop.
    3. Simultaneously: transcode the file in the folder and begin copying the next file from the network drive to the temporary folder.
    4. After the transcoding and the copy have both finished: delete the file which has been transcoded from the temporary folder and begin transcoding the next file in the temporary folder.
    5. Loop until all files have been processed.

    How would I be able to do this with PowerShell? The multitasking part is an obstacle for my skill/persistence combination.
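
    The question is about PowerShell, but the shape of the loop may be easier to see in a small bash sketch of the same overlap (the paths and the ffmpeg transcode step are placeholders, not part of the question): copy file N+1 in the background while file N is being transcoded, then wait for both before moving on.

        #!/bin/bash
        SRC=/mnt/networkdrive/videos   # mapped network drive (placeholder)
        TMP=/tmp/transcode             # local temporary folder (placeholder)
        mkdir -p "$TMP"

        files=("$SRC"/*.avi)
        cp "${files[0]}" "$TMP/"
        for ((i = 0; i < ${#files[@]}; i++)); do
            cur="$TMP/$(basename "${files[$i]}")"
            copy_pid=""
            # Start copying the next file while the current one transcodes
            if (( i + 1 < ${#files[@]} )); then
                cp "${files[$((i + 1))]}" "$TMP/" &
                copy_pid=$!
            fi
            ffmpeg -i "$cur" "${cur%.*}.mp4"   # placeholder transcode command
            [ -n "$copy_pid" ] && wait "$copy_pid"
            rm -f "$cur"                       # done with the local copy
        done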

    Read the article

  • How can I tweak this split-tar-gz-archive script?

    - by Sai
    I came across this shell script that can be used to split an entire directory across multiple compressed files. I am interested in making a minor tweak to it. Let's say I have n GB of files in a directory. I do not want the contents of one particular folder to be split across multiple tar files; they should stay inside a single file. In other words, I want that n MB folder to fit within a single compressed file and the remaining (n GB - n MB) split across multiple files. I am not sure whether this is possible with this script, and I would appreciate some suggestions. Though the script is well documented, it is quite complex for me to understand.
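
    A minimal sketch of the idea without modifying the linked script, assuming the directory is ./data, the folder that must stay whole is ./data/keep-together, and 500 MB chunks are acceptable (all names and sizes are placeholders):

        # Archive the folder that must not be split, on its own
        tar -czf keep-together.tar.gz -C data keep-together

        # Archive everything else and split the stream into fixed-size pieces
        tar -czf - -C data --exclude=keep-together . | split -b 500M - rest.tar.gz.part-

        # To restore later, concatenate the pieces back into a single stream
        cat rest.tar.gz.part-* | tar -xzf -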

    Read the article

  • Empty /var/log after running cron bash script

    - by Ortix92
    I wrote a little bash script and all of a sudden my /var/log folder is completely empty, except for the log I created for the bash script. This is the script I'm running every hour with cron:

        #!/bin/bash
        STL_DIR=/path/to/some/folder/i/hid
        LOGFILE=/var/log/stl_upload.log
        now=`date`
        echo "----------Start of Transmission----------" 2>&1 | tee -a $LOGFILE
        echo "Starting transfer at $now" 2>&1 | tee -a $LOGFILE
        rsync -av -e ssh $STL_DIR [email protected]:/users/path/folder 2>&1 | tee -a $LOGFILE
        echo "----------End of transmission----------" 2>&1 | tee -a $LOGFILE
        printf "\n" 2>&1 | tee -a $LOGFILE

    I want to be clear that I'm not 100% certain this is related to the empty log folder, so if anyone could give me a pointer as to why my log folder is empty, that'd be great.

    Read the article

  • 'bzip2' error while downloading OpenCV2.4.2 in 12.04

    - by Saurabh Deshpande
    While downloading OpenCV 2.4.2 in 12.04, I get the following error message:

        bzip2: Compressed file ends unexpectedly; perhaps it is corrupted? *Possible* reason follows.
        bzip2: Inappropriate ioctl for device
        Input file = (stdin), output file = (stdout)
        It is possible that the compressed file(s) have become corrupted.
        You can use the -tvv option to test integrity of such files.
        You can use the `bzip2recover' program to attempt to recover data from undamaged sections of corrupted files.
        tar: Child returned status 2
        tar: Error is not recoverable: exiting now
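
    That message usually means the downloaded archive is truncated or corrupted, so a reasonable first check is the integrity test the error itself suggests, followed by re-downloading if it fails (the archive file name below is a placeholder for whatever was actually downloaded):

        # Test the bzip2 stream of the downloaded archive
        bzip2 -tvv OpenCV-2.4.2.tar.bz2
        # If errors are reported, delete it and download it again from the same
        # source, e.g.:
        # wget -c <original download URL>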

    Read the article

  • Unexpected deletion of directory

    - by Anubhav Chaturvedi
    I find that somehow the Downloads directory in /home/user/ has been deleted. Running locate Downloads shows the existence of the directory but no sign of the files that were in it. Now, when I manually create a directory named Downloads, locate Downloads shows the directory as well as the files the original folder had. Also, there is no hidden Downloads folder, nor can I access the folder or its files. This behaviour is quite unexpected. Please help.
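
    One thing to keep in mind when reading that output: locate searches a periodically rebuilt database, not the live filesystem, so it can report files that no longer exist. A quick sketch of how to compare the two (the home path is assumed from the question):

        # What the filesystem actually contains
        ls -la /home/user/ | grep -i downloads
        # Refresh the locate database, then query it again
        sudo updatedb
        locate Downloads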

    Read the article

  • Symlink is using both locations?

    - by Tiago Rossi
    I've done some research and didn't find any answers, so I decided to ask here. For background: the /dev/sda2 disk of my WHM/cPanel web server got 100% full. The /var/ folder is on /dev/sda2, and I've found that the cause of the issue is the /var/lib/mysql folder. To fix it I need to move the /var/lib/mysql folder from /dev/sda2 to /home/, where I have plenty of disk space. So I used these commands:

        service mysql stop
        cp -r -p /var/lib/mysql/ /home/databasesmysql/
        mv /var/lib/mysql /var/lib/mysql.backup/
        ln -s /home/databasesmysql/ /var/lib/mysql
        service mysql start

    OK, now to check whether it is running from the new location, I just renamed /var/lib/mysql to /var/lib/mysql.backup and MySQL stopped working. Also, when I rename the /home/databasesmysql/ folder, MySQL stops working too. I don't know what's happening. Is the symlink using both locations? Thanks very much.
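
    A small hedged check that may help untangle which location MySQL is really using: ask the running server for its data directory and confirm where the symlink points (this assumes the mysql client can connect, e.g. as root with a password).

        # Which data directory is the running server actually using?
        mysql -u root -p -e "SHOW VARIABLES LIKE 'datadir';"
        # Is /var/lib/mysql really a symlink to the new location?
        ls -ld /var/lib/mysql /var/lib/mysql.backup /home/databasesmysql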

    Read the article

  • Project of Projects with Team Foundation Server 2010

    - by Martin Hinshelwood
    It is pretty much accepted that you should use Areas instead of having many small Team Projects when you are using Team Foundation Server 2010. I have implemented this scenario many times and this is the current iteration of layout and considerations. If, like me, you work with many customers, you will find that you get into a groove for how to set these things up to make them as easily understandable for everyone, while giving the best functionality. The trick is in making it as intuitive as possible for both you and the developers that need to work with it.

    There are five main places where you need to have the Product or Project name in prominence over any other value:

    Area
    Iteration
    Source Code
    Work Item Queries
    Build

    Once you decide how you are doing this in each of these places you need to keep to it religiously. Even if you have only one source code file to keep, make sure it is in the right place. This makes your developers and others working with the format familiar with where everything should go, as well as building up muscle memory. This prevents the neat system degenerating into a nasty mess.

    Areas

    Areas are traditionally used to separate out parts of your product/project so that you can see how much effort has gone into each.

    Figure: The top-level areas are for reporting and work item separation.

    There are massive advantages to using this method. You can:

    move work from one project to another
    rename a project/product

    It is far more likely that a project or product gets renamed than a department.

    Tip: If you have many projects, over 100, you should consider categorising them here, but make sure that the actual project name always sits at the same level so you know which is which.

    Figure: Always keep things that are the same at the same level.

    Note: You may use these categories only at the Area/Iteration level to make it easier to select on drop-down lists. You may not want to use them everywhere. On the other hand, for consistency it would be better to.

    Iterations

    Iterations are usually tied to some sort of time-based consideration. Here I am splitting into iterations with periodic releases.

    Figure: Each product needs to be able to have its own cadence.

    The ability to have each project run at its own pace and to enable them to have their own release schedule is often of paramount importance, and you don't want to fix your 100+ projects to all be released on the same date.

    Source Code

    Having a good structure for your source, even if you are not branching or having multiple products under the same structure, is always a good idea.

    Figure: Separate out your products' source.

    You need to think about both your branches and the structure of your source. All your code should be under "Source", and everything you need to build your solution, including build scripts and third-party tools, should be under your "Main" (branch) folder. This should then be branched by "Quality", "Release" or both to get the most out of your branching structure. The important thing is to make sure you branch (or are able to branch) everything you need to build, test and deploy your application to an environment. That environment may be development, test or even production, but I can't stress enough the importance of having everything you need.

    Note: You usually will not be able to install custom software on your build server. Store any *.dll's or *.exe's that you need under the "Tools\Tool1" folder.

    Note: Consult the Branching Guidance for Team Foundation Server 2010 for more on branching.

    Figure: Adding a category may be a necessary evil. Even if you have to have a couple of categories called "Default", it is better than not knowing the difference between a folder, a Product and a Branch.

    Work Item Queries

    Queries are used to load lists of Work Items out of TFS so you can see what work you have. This means that you also want to separate queries out by Product/Project to make them easier to find.

    Figure: Again you have the same first-level structure.

    Having folders in Work Item Tracking as well, we do the same thing. We put all the queries under a folder named for the Product/Project and change each query to have "AreaPath=[TeamProject]\[ProductX]" in the query instead of the standard "Project=@Project".

    Tip: Don't have a folder with new queries for each iteration. Instead have a single "Current" folder that has queries that point to the current iteration. Just change the queries as you move from one iteration to another.

    Tip: You can Ctrl+drag the "Product1" folder to create your "Product2" folder.

    Builds

    You may have many builds, both for individual products and for different qualities. This can be further complicated by having some builds that action "Gated Check-In" and others that are specifically for "Release", "Test" or another purpose.

    Figure: There are no folders, yet, for the builds, so you need a good naming convention.

    It's a pity that there are no folders under Builds; some way to categorise would be nice. In lieu of that, at the moment you can use a functional naming convention that at least allows you to find what you want.

    Conclusion

    It is really easy both to achieve and to stick to this format if you take the time to do it. Unless you have 1000+ builds or 100+ products you are unlikely to run into any issues. Even then there are things you can do to mitigate the issues and I have described some of them above. Let me know if you can think of any other things to make this easier.

    Read the article
