Search Results

Search found 3241 results on 130 pages for 'extract'.

  • Error installing Sony Remote Play

    - by Iszi Rory or Isznti
    I'm trying to install the Remote Play software to connect my laptop to my PS3. I've found a guide whose instructions seem to be in fairly wide use (I found similar walk-throughs on numerous other sites) for running the software on a non-Vaio PC: Tech-Recipes: Playstation 3 – Use Remote Play on any Windows 7 PC. The setup essentially goes like this:

    1. Download the Remote Play software.
    2. Download the patch by NTAuthority.
    3. Install Remote Play as normal.
    4. Reboot.
    5. Extract the NTAuthority patch to the Remote Play program folder.
    6. Manually register the patched DLLs via the command line.
    7. Run the Remote Play software.

    Sadly, my problem comes early, at step 3. I had to use Google to find the software download, as the link from Tech-Recipes seems broken. I found the download on Sony's site here: Sony eSupport: Remote Play with PlayStation®3. After downloading and running the software, I hit "Next" at the welcome screen and "I Agree" at the EULA screen. After this, a popup informs me that Setup is checking my computer's information. Then, Setup terminates with an error (screenshot). I'm running Windows 7 Ultimate x64. Is anyone familiar with this error in this software? Is there a way to work around it? Did I perhaps pick the wrong download from Sony's site?

  • bind9 "error sending response: host unreachable"

    - by wolfgangsz
    I have a number of DNS servers, all running bind9 (9.5.1, to be specific) under Fedora. Four of them are slaves, fed by a common master for our public DNS. These are all located on the public gateways of our various offices. One of them has tons of messages in its log files similar to this:

        Jul 21 17:26:18 gateway named[3487]: client 10.171.3.8#52500: view internal: error sending response: host unreachable

    I wonder where that comes from. The firewall is open on port 53 between the two machines (10.171.3.8 is an internal DNS server located on a Windows domain controller). The internal domains do NOT list the gateway as a name server (so there should not be any attempts to replicate the domains), and the gateway does not handle any internal DNS. The clients in these messages vary between the two domain controllers on the internal network and a third internal name server (running bind9 on Debian in a different segment of the network). Any pointers are highly welcome.

    In response to the first reply: the issue with this really is that tcpdump doesn't show any problems. Here is an extract from "tcpdump -i any port 53":

        09:13:38.283308 IP valine.aminocom.com.61815 > ns-pri.ripe.net.domain: 14075 PTR? 166.225.58.95.in-addr.arpa. (44)
        09:13:42.007410 IP gateway-eng.aminocom.com.37047 > alanine.aminocom.com.domain: 35410+ PTR? 12.3.172.10.in-addr.arpa. (42)

    At the same time, the DNS log shows:

        Jul 22 09:13:38 gateway named[3487]: client 10.171.3.6#61300: view internal: error sending response: host unreachable
        Jul 22 09:13:40 gateway named[3487]: client 10.172.3.12#56230: view internal: error sending response: host unreachable
        Jul 22 09:13:40 gateway named[3487]: client 10.171.3.8#55221: view internal: error sending response: host unreachable
        Jul 22 09:13:49 gateway named[3487]: client 10.171.3.8#51342: view internal: error sending response: host unreachable

    So clearly at 09:13:40 there were two unsuccessful attempts to respond to internal machines (10.172.3.12 and 10.171.3.8, both DNS servers), but nothing in the tcpdump output.
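
    Since "host unreachable" here usually means the kernel refused to send the response (typically a missing route or ARP entry, or an ICMP unreachable coming back), one diagnostic worth adding, a suggestion of mine rather than something from the thread, is to capture ICMP and ARP next to the DNS traffic:

        # watch for ICMP unreachables and ARP trouble around the failing responses
        tcpdump -ni any 'icmp or arp'
        # and confirm the gateway actually has a route to the complaining clients
        ip route get 10.171.3.8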

  • How should secret files be pushed to an EC2 (on AWS) Ruby on Rails application?

    - by nikc
    How should secret files be pushed to an EC2 Ruby on Rails application using Amazon Web Services with their Elastic Beanstalk? I add the files to a git repository, and I push to GitHub, but I want to keep my secret files out of the git repository. I'm deploying to AWS using:

        git aws.push

    The following files are in the .gitignore:

        /config/database.yml
        /config/initializers/omniauth.rb
        /config/initializers/secret_token.rb

    Following this link I attempted to add an S3 file to my deployment: http://docs.amazonwebservices.com/elasticbeanstalk/latest/dg/customize-containers.html

    Quoting from that link: "Example Snippet: The following example downloads a zip file from an Amazon S3 bucket and unpacks it into /etc/myapp:"

        sources:
          /etc/myapp: http://s3.amazonaws.com/mybucket/myobject

    Following those directions I uploaded a file to an S3 bucket and added the following to a private.config file in the .elasticbeanstalk .ebextensions directory:

        sources:
          /var/app/current/: https://s3.amazonaws.com/mybucket/config.tar.gz

    That config.tar.gz file will extract to:

        /config/database.yml
        /config/initializers/omniauth.rb
        /config/initializers/secret_token.rb

    However, when the application is deployed the config.tar.gz file on the S3 host is never copied or extracted. I still receive errors that the database.yml couldn't be located, and the EC2 log has no record of the config file. Here is the error message:

        Error message: No such file or directory - /var/app/current/config/database.yml
        Exception class: Errno::ENOENT
        Application root: /var/app/current
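
    One way to narrow this down is to replicate by hand what the sources: key is supposed to do; a rough sketch, assuming ssh access to the instance (bucket path taken from the question):

        # fetch the archive the way the deployment container would
        # (if this returns 403, the unauthenticated sources: download fails the same way)
        curl -fo /tmp/config.tar.gz https://s3.amazonaws.com/mybucket/config.tar.gz
        # unpack into the application root and re-check the missing file
        sudo tar xzvf /tmp/config.tar.gz -C /var/app/current/
        ls -l /var/app/current/config/database.yml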

  • WRTU54G-TM router with 3rd party firmware; Can custom firmware include stock binary portions?

    - by dlamblin
    I've been doing a lot of reading online about the Linksys WRTU54G-TM router model that I now own. It seems getting a custom firmware onto it is not a problem. But no one is talking about retaining the VoIP features (yet); so far they're all disappointed that it's not a SIP machine and instead uses GSM over IPsec. Personally I don't care about using it with non-T-Mobile service. If I take the original firmware, shouldn't I be able to extract it and its SquashFS image, and then move all of the T-Mobile-specific binaries that enable the calling features over to a custom firmware installation (maybe OpenWRT)? You might ask why, and the reason is that if I do this I could retain my calling features, which I do want, and ssh to the router and use it to run additional software, as any OpenWRT router could. Does anyone know if this can be done, and how the firmware's binaries could be gotten at and installed correctly?

    Update: I have found someone working on 3rd-party WRTU54G-TM firmware. I am still interested in the second part of my question, that is: can't the stock firmware images be pulled apart and the closed-source binary kernel modules, if any, moved into another more flexible custom firmware?
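
    On the extraction half of the question: firmware images with an embedded SquashFS can usually be pulled apart with generic carving tools. A rough sketch, assuming binwalk and squashfs-tools are installed (the file names are placeholders, not the actual stock image name):

        # carve the embedded images out of the firmware dump
        binwalk -e wrtu54g-tm-stock.bin
        # unpack the SquashFS root that binwalk carved out (name varies by offset)
        unsquashfs -d rootfs _wrtu54g-tm-stock.bin.extracted/*.squashfs
        # the closed-source kernel modules would then be somewhere under here
        ls rootfs/lib/modules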

  • Create new folder for new sender name and move message into new folder

    - by Dave Jarvis
    Background: I'd like to have Outlook 2010 automatically move e-mails into folders designated by the sender's name. For example:

    1. Click Rules
    2. Click Manage Rules & Alerts
    3. Click New Rule
    4. Select "Move messages from someone to a folder"
    5. Click Next

    The following dialog is shown (screenshot).

    Problem: the next part usually looks as follows:

    1. Click people or public group
    2. Select the desired person
    3. Click specified
    4. Select the desired folder

    Question: how would you automate those problematic manual tasks? Here's the logic for the new rule I'd like to create:

    1. Receive a new message.
    2. Extract the name of the sender.
    3. If a folder for that sender does not exist, create a new folder under Inbox.
    4. Move the new message into the folder assigned to that sender's name.

    I think this will require a VBA macro.

    Related links:

    http://www.experts-exchange.com/Software/Office_Productivity/Groupware/Outlook/A_420-Extending-Outlook-Rules-via-Scripting.html
    http://msdn.microsoft.com/en-us/library/office/ee814735.aspx
    http://msdn.microsoft.com/en-us/library/office/ee814736.aspx
    http://stackoverflow.com/questions/11263483/how-do-i-trigger-a-macro-to-run-after-a-new-mail-is-received-in-outlook
    http://en.kioskea.net/faq/6174-outlook-a-macro-to-create-folders
    http://blogs.iis.net/robert_mcmurray/archive/2010/02/25/outlook-macros-part-1-moving-emails-into-personal-folders.aspx

    Update #1: The code might resemble something like:

        Public WithEvents myOlApp As Outlook.Application

        Sub Initialize_handler()
            Set myOlApp = CreateObject("Outlook.Application")
        End Sub

        Private Sub myOlApp_NewMail()
            Dim myInbox As Outlook.MAPIFolder
            Dim myDestinationFolder As Outlook.MAPIFolder
            Dim myItems As Outlook.Items
            Dim myItem As Outlook.MailItem
            Dim mySenderName As String

            Set myInbox = myOlApp.GetNamespace("MAPI").GetDefaultFolder(olFolderInbox)
            ' NewMail passes no item reference, so take the newest message in the Inbox
            Set myItems = myInbox.Items
            myItems.Sort "[ReceivedTime]", True
            Set myItem = myItems.GetFirst
            mySenderName = myItem.SenderName

            ' Reuse the sender's folder if it already exists, otherwise create it
            On Error Resume Next
            Set myDestinationFolder = myInbox.Folders(mySenderName)
            On Error GoTo 0
            If myDestinationFolder Is Nothing Then
                Set myDestinationFolder = myInbox.Folders.Add(mySenderName)
            End If

            myItem.Move myDestinationFolder
        End Sub

    Update #2: Split the code as shown in the screenshots. Sent a test message and nothing happened. The instructions for actually triggering the macro when a new message arrives are a little light on details (for example, no mention is made of ThisOutlookSession and how to use it). Thank you.

  • Connect bluetooth headphones both to PC and phone at the same time

    - by Sergiy Byelozyorov
    I have recently bought the Philips SHB6110. An extract from page 13 of the manual:

        Therefore you can connect your Bluetooth stereo headset with a Bluetooth stereo enabled phone to both listen to music and lead calls, or with a Bluetooth phone that does not support Bluetooth stereo (A2DP) to lead calls and at the same time to a Bluetooth audio device (Bluetooth enabled MP3 player, Bluetooth audio adapter etc.) to listen to music. Make sure to pair the phone first with your Bluetooth headset, then turn both the phone and headset off to then pair the Bluetooth audio device. With the SwitchStream feature you can listen to music and monitor your calls at the same time. Even while listening to music, you will hear a ring tone when receiving a call and can switch to the call simply by tapping the button.

    The manual, however, doesn't specify how I connect to both devices at the same time. I use a Toshiba Satellite Pro P300-1CG laptop with a Belkin Mini Bluetooth Adapter and a Nokia N95 phone. The operating system is Windows 7 64-bit, and I have Skype installed. Both the phone and the computer can be used for listening to music and talking on the phone (on the PC via Skype). The best solution would be if I could connect to the PC and the phone at the same time and monitor both mobile and Skype calls while listening to music from Winamp. If that is not possible, then I would like at least to be able to listen to music from the PC while monitoring calls from the mobile. So, please tell me: how do I connect both the PC and the phone to the headphones?

  • How to use my computer as a Headset device for my phone with Bluetooth?

    - by TheJelly
    I want to extract the audio from my phone (mainly the analog TV and FM/AM receiver) and play it through my computer speakers. There is a headphone jack, but it is of non-standard size (probably a micro-jack) and I do not have access to a shop that sells that kind of equipment in my area, so doing this over Bluetooth is the only solution I can foresee.

    Both my laptop and my phone support A2DP, but for some reason the service (from the phone) does not show up while I add a new connection, and the phone does not let me initiate a connection with any profile except FTP (although it detects other services in the service list, like A2DP, and works perfectly fine with other profiles like DUN, HID, OPP, SSP if the connection is started from the computer).

    I am currently using the latest version of the Toshiba stack. I have tried WIDCOMM, but it refuses to install drivers for both the internal Bluetooth (which is a Broadcom device) and the USB Bluetooth that I use on my desktop. The standard Microsoft stack (generic driver) does install, but it does not work with either of my devices, as they do not detect any Bluetooth devices when scanning.

    With BlueSoleil (the default stack that came with the USB Bluetooth) I could set my device type to "headset" instead of only "laptop/desktop", and this allowed both my phones to detect my laptop as a device they can use as a headset. The problems with this stack were that only the older phone could actually connect to my laptop and that the internal Bluetooth could not be used.

    Basically, I want to set the device type to "headset" for my phone using the Toshiba stack, like I did with BlueSoleil. Is there any way this can be done? Thanks.

    Image: Device type selection http://i.stack.imgur.com/drjC6.jpg

  • Resource reference passing in puppet

    - by paweloque
    Is it possible to pass puppet resource references to other resources? My use case is to build a Jenkins build pipeline with puppet. To chain Jenkins jobs into a pipeline I need to pass the successor job to a job. A subset of the definition is:

        jobs::build { "Build ${release_name}":
          release           => $release_name,
          jenkins_jobs_path => $jenkins_jobs_path,
          successors        => 'Deploy',
        }

        jobs::deploy { "Deploy ${release_name}":
          release           => $release_name,
          jenkins_jobs_path => $jenkins_jobs_path,
          successors        => 'Smoke Test',
        }

    In the definition you see that I define the successors by name, i.e. 'Deploy' and, in the case of the second job, 'Smoke Test'. What I'd like to do is to pass a reference to a resource and extract the name from it:

        jobs::build { "Build ${release_name}":
          release           => $release_name,
          jenkins_jobs_path => $jenkins_jobs_path,
          successors        => Jobs::Deploy["Deploy ${release_name}"],
        }

        jobs::deploy { "Deploy ${release_name}":
          release           => $release_name,
          jenkins_jobs_path => $jenkins_jobs_path,
          successors        => Jobs::Smoke_test["Smoke Test ${release_name}"],
        }

    And then within the jobs::deploy and jobs::build definitions I'd access the resource by reference and query for its type, etc. Is it possible to achieve this in puppet?

  • "cannot receive new filesystem stream: invalid backup stream" error when unpacking flash archive on solaris 10

    - by Bovril
    I've searched around but I'm having no luck with some peculiar behavior of a flash archive. I'm using HP Server Automation 9.14 to deploy the OS. I'm creating a Solaris 10 flash archive to create a snapshot default build in our environment. I create the flash archive with:

        # flar create -c -S -n g8-solaris10-u10 g8-solaris10-u10.flar

    It seems to create the file without any problems (exit status 0). When deploying to a new system (same hardware), it extracts to a point and then bails. The last error in the log I can see is:

        Extracted 2047.00 MB ( 82% of 2488.98 MB archive)
        ERROR: Could not read file (172.27.118.100:/media/opsware/sunos/flar/g8-solaris10-u10.flar
        ERROR: Errors occurred during the extraction of flash archive.
        The file /tmp/flash_errors contains the list of errors encountered
        ERROR: Could not extract Flash archive
        ERROR: Flash installation failed

    The error log contained the following message:

        cannot receive new filesystem stream: invalid backup stream

    A previous version of this flash archive (1.8 GB) worked OK, so I suspect size may be a factor. The source system (the one the flash archive is an image of) is an HP BL460c Gen8; some more info below.

    OS version info:

        # uname -a
        SunOS testhostname 5.10 Generic_147441-01 i86pc i386 i86pc
        # who -r
        . run-level 3 Oct 15 08:15 3 0 S

    Disks:

        # echo | format
        Searching for disks...done
        AVAILABLE DISK SELECTIONS:
        0. c0t0d0 <DEFAULT cyl 17841 alt 2 hd 255 sec 63>
           /pci@0,0/pci8086,3c06@2,2/pci103c,3355@0/sd@0,0
        Specify disk (enter its number):

    zpools:

        # zpool list
        NAME   SIZE  ALLOC  FREE  CAP  HEALTH  ALTROOT
        rpool  136G  24.6G  111G  18%  ONLINE  -

    Zones:

        # zoneadm list -cv
        ID NAME    STATUS   PATH  BRAND   IP
         0 global  running  /     native  shared

    The file size of 2047 seems suspiciously close to 2048, which is concerning. Any help would be greatly appreciated. Thanks.
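
    Given that extraction dies just shy of 2 GB, it may be worth comparing the archive's own accounting with the file on disk; a small check of my own suggestion, not from the original post:

        # identification section recorded at archive creation time
        flar info g8-solaris10-u10.flar
        # actual size on disk, to spot a 2 GB truncation on the NFS path
        ls -l g8-solaris10-u10.flar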

  • How to write re-usable puppet definitions?

    - by Oliver Probst
    I'd like to write a puppet manifest to install and configure an application on target servers. Parts of this manifest shall be re-usable, so I used define for my re-usable functionality. Doing so, I always have the problem that there are parts of the definition which are not re-usable. A simple example is a bunch of configuration files to be created. These files must be placed in the same directory, and this directory must be created only once.

    Example, nodes.pp:

        node 'myNode.in.a.domain' {
          mymodule::addconfig {'configfile1.xml':
            param => 'somevalue',
          }
          mymodule::addconfig {'configfile2.xml':
            param => 'someothervalue',
          }
        }

    mymodule.pp:

        define mymodule::addconfig ($param) {
          $config_dir = "/the/directory/"

          # ensure that the directory exists:
          file { $config_dir:
            ensure => directory,
          }

          # create the configuration file:
          file { $name:
            path    => "${config_dir}/${name}",
            content => template('a_template.erb'),
            require => File[$config_dir],
          }
        }

    This example will fail, because now the resource file { $config_dir: is defined twice. As far as I understood, it is required to extract these parts into a class. Then it looks like this:

    nodes.pp:

        node 'myNode.in.a.domain' {
          class { 'mymodule::createConfigurationDirectory': }

          mymodule::addconfig {'configfile1.xml':
            param   => 'somevalue',
            require => Class['mymodule::createConfigurationDirectory'],
          }
          mymodule::addconfig {'configfile2.xml':
            param   => 'someothervalue',
            require => Class['mymodule::createConfigurationDirectory'],
          }
        }

    But this makes my interface hard to use. Every user of my module has to know that there is a class which is additionally required. For this simple use case the additional class might be acceptable, but with growing module complexity (lots of definitions) I'm a bit afraid of confusing the module's users. So I'd like to know: is there a better way to handle these dependencies? Ideally, classes like createConfigurationDirectory are hidden from the user of the module's API. Or are there some other "Best Practices"/patterns for handling such dependencies?

  • How do I export address book from N97

    - by mplungjan
    Hi, I need to copy my numbers from my private Nokia to my office BlackBerry. I have not found a way to export my phone numbers from Ovi or elsewhere. On the Mac, iSync stopped working with Snow Leopard, and Ovi on Windows does not export. I do not mind using a Windows suggestion. I lost a description of how to use the Ovi backup files in another program.

    What I have done so far:

    1. Terminal: sudo open -a iSync.app - it launched, but iSync said "this device is not supported by iSync".
    2. Went here: http://europe.nokia.com/support/product-support/isync/compatibility-and-download and found a plugin (I am sure that was not there a while ago :| ).
    3. Checked the software version (22.0.110) and installed the plugin.
    4. Ran iSync, which found and installed my N97 device successfully. Synced. It stopped with "The connection was lost while talking to the phone."
    5. Found http://discussions.europe.nokia.com/t5/Nseries-and-S60-Smartphones/N97-iSync-Multimedia-Transfer-Modem/m-p/568560 - no news since Jan 2010.
    6. Tried to download and install http://best-vcard.en.softonic.com/symbian but the installer fails :(

    I simply do not understand why Nokia is giving us such a hard time. I would not have considered switching from Nokia if the Mac had been better supported. It is so frustrating that they just seem not to care about losing Nokia fanbois like me, especially since I am this outspoken on the net and what I say on popular forums gets indexed by Google fast. I am very close to just going iPhone here. Hope someone has Nokia's ears.

    UPDATE: I downloaded NbuExplorer from SourceForge. It will extract everything from an Ovi backup into VCF, VCS and VMG files. Very useful software, and free.

  • Archive software for big files and fast index

    - by AkiRoss
    I'm currently using tar for archiving some files. Problem is: the archives are pretty big, contain a lot of data, and tar is very slow when listing and extracting. I often need to extract single files or folders from the archive, but I don't currently have an external index of the files. So, is there an alternative for Linux that allows me to build uncompressed archive files, preserving the file attributes AND having a fast-access list table? I'm talking about archives of 10 to 100 GB, and it's pretty impractical to wait several minutes to access a single file. Anyway, any trick to solve this problem is welcome (but single archives are non-optional, so no rsync or similar). Thanks in advance!

    EDIT: I'm not compressing the archives, and using tar I think they are too slow. To be precise about "slow", I'd like that:

    - listing the archive content should take time linear in the file count inside the archive, but with a very small constant (e.g. if a list of all the files is included at the head of the archive, it could be very fast);
    - extraction of a target file/directory should (filesystem permitting) take time linear in the target size (e.g. if I'm extracting a 2 MB PDF file from a 40 GB directory, I'd really like it to take less than a few minutes... if not seconds).

    Of course, this is just my idea and not a requirement. I guess such performance could be achievable if the archive contained an index of all the files with their respective offsets, and such an index were well organized (e.g. a tree structure).
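
    For the listing half at least, a cheap workaround is to save an index file next to each archive at creation time. A minimal sketch of my own, not a full solution (single-file extraction is still a linear scan unless the archiver itself supports random access):

        # record the member list while creating the archive
        tar cvf photos.tar photos/ > photos.tar.idx
        # later: near-instant "listing" without touching the 100 GB archive
        grep 'IMG_1234' photos.tar.idx
        # extraction of one member still walks the tar sequentially
        tar xf photos.tar photos/2009/IMG_1234.jpg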

  • Font for Wine that supports the entire character set of the Win32 Console?

    - by Brian Campbell
    I would like to be able to display in the Wine console all characters that the Win32 console can display. I've written a small test program to print out all 8-bit characters:

        #include <stdio.h>

        int main(int argc, char *argv[])
        {
            int i, j;
            for (i = 0; i <= 0xF0; i += 0x10) {
                for (j = i; j <= i + 0x0F; ++j)
                    printf("%2x:%c", j, (char)j);
                printf("\n");
            }
            getchar();
            return 0;
        }

    Under Wine, the best I can do so far is using Andale Mono (screenshot), while Windows Server 2008 displays the full set (screenshot). Is there anywhere I can legally download a font that will allow me to view all of those characters under Wine?

    edit: I've found a set of DOS fonts that includes a CP437 font, which should cover the character set I'm interested in. However, even if I install this font, wineconsole doesn't seem to recognize it. Is there any way I can get wineconsole to use this font, or convert this font to a format that wineconsole can use? Or is there any way I can extract fonts from DOSEMU for use in Wine? Oh, and I should probably mention that I'm on Mac OS X 10.6.2, installing Wine via MacPorts, using the wine-devel package.

    more information: I have tried installing some console fonts that should cover the full character set as Mac OS X fonts (such as the NewDOS font listed above, and a font I tried converting from the fonts supplied by DOSEMU). Wine does not seem to pick up on new fonts installed in Mac OS X. Is there a way to register new fonts I've installed with Wine? Would manually editing the system.reg file that seems to contain font mappings work, or is there something else I'd need to do?

    bump: Bounty ends soon, I'm still looking for an answer for this. Does anyone use the Wine console for complex text user interfaces?
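
    On the registration question: one low-tech experiment (an assumption of mine, not a confirmed fix) is to bypass the Mac OS X font system entirely and drop a TrueType version of the CP437 font into the Wine prefix's own Fonts directory, which Wine scans itself:

        # paths are for a default Wine prefix; the font file name is a placeholder
        cp "Perfect DOS VGA 437.ttf" ~/.wine/drive_c/windows/Fonts/
        # then restart wineconsole and pick the font from its properties dialog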

  • Windows xp recovery console without Ntfs.sys? (0x00000024 BSOD)

    - by Kalle
    I have two physical disks in a computer; for simplicity let's call them C and D. C: has Windows XP, and D: has some data. The problem is that whenever I have D: connected I can't boot Windows. I get a BSOD called 0x00000024/NTFS_FILE_SYSTEM. The same thing happens if I boot up Windows with D: disconnected and then connect it once Windows has loaded. The KB article about this problem says that I have to run chkdsk, but I can't get to somewhere where I can run it, because I get a BSOD whenever the disk is connected! Even the recovery console BSODs if D: is connected. The final option in the KB is to boot the computer with the Windows 2000 Setup disks, where you edit some file to manually disable the ntfs.sys driver and then run chkdsk. The problem is that I don't have any floppy drive. Is there any way to boot the built-in recovery console with ntfs.sys disabled, or to burn the floppy version to a CD after you've extracted and modified it on the hard drive? Right now the Windows XP bootable floppy creator(2) is asking me which floppy drive to extract to, which I can't answer because I have none :/ Other solutions to the root problem are also appreciated :)

    (2) http://www.microsoft.com/downloads/details.aspx?FamilyID=55820edb-5039-4955-bcb7-4fed408ea73f&displaylang=en

  • Batch deletion of smaller files from group of files via unix command line

    - by artlung
    I have a large number (more than 400) of directories full of photos. What I want to do is keep the larger sizes of these photos. Each directory has 31 to 66 files in it. Each directory has thumbnails and larger versions, plus a file called example.jpg. I dispatched the example.jpg files easily with: rm */example.jpg. I initially thought that it would be easy to delete the thumbnails too, but the problem is that they are not consistently named. The typical pattern was photo1.jpg and photo1s.jpg. I did rm */photo*s.jpg, but it turned out that some of the files named photoXs.jpg were actually the larger versions, not the smaller ones. Argh. So what I want to do is scan each directory for file size and delete (or move) the thumbnails; a sketch of this follows below. I initially thought I'd just ls -R every file, extract the size of each file, and save those under a threshold. The problem? In one directory the large version will be 1.1 MB and the thumb 200 KB; in another the large is 200 KB and the small 30 KB. Even worse, the files really are mostly named photo1.jpg, so simply putting them all in the same folder, sorting by size, and deleting in groups would not work without renaming them first, and if possible I'd prefer to keep them in their folders. I was almost resolved to just doing this all manually, but then thought I'd ask here. How would you do this task?
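
    One scriptable reading of "keep the larger of each pair": for every photoNs.jpg that has a photoN.jpg sibling, compare the two sizes and delete whichever file is smaller. A rough sketch along those lines (assumptions on my part: the pairing really is name vs. name+s, and GNU stat is available for stat -c%s):

        for d in */; do
          for s in "$d"photo*s.jpg; do
            big="${s%s.jpg}.jpg"        # photo1s.jpg -> photo1.jpg
            [ -f "$big" ] || continue   # skip unpaired files (and unmatched globs)
            if [ "$(stat -c%s "$s")" -lt "$(stat -c%s "$big")" ]; then
              rm -- "$s"                # the 's' file is the thumbnail here
            else
              rm -- "$big"              # this directory has the names swapped
            fi
          done
        done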

  • Unknown user in terminal

    - by Giles B
    I'm having a strange problem with the terminal in OS X. When I open the terminal, the username at the command prompt is:

        unknown-04-0c-ce-e3-0d-c2:~

    I can't pinpoint when this first started or why, unfortunately. I usually use iTerm for web development purposes, but this also occurs in the normal OS X Terminal app. Any ideas/help would be really appreciated. Thanks.

    Update: thanks to @fayadfami and @aliasgar for the correct answers and steering me in the right direction. This forum post also helped: http://forums.macrumors.com/showthread.php?t=152407. The extract from the relevant post:

        Having run into the exact same issue myself, and having come across this thread while attempting to figure it out, I thought I'd post the answer. OS X is initially setting your hostname to what's set for your Computer Name in Sharing; however, if you're set up for DHCP and you match a current lease on your DHCP server (i.e., match the IP address of another recent user), OS X will then set your hostname to whatever the DHCP server currently has for that lease. This freaked me out incredibly at first, as I had just reformatted (having just purchased my first Mac and wanting to see how the installer worked) and knew I had not yet changed the Computer Name in Sharing -- yet my system hostname at the Terminal prompt was indeed changed to what I had previously set, pre-format. I grepped around, not finding the name anywhere save log entries; I thought either the format didn't actually properly wipe everything, or I was losing my mind. Finally I logged into my router (it's a Linksys WRT54GS running OpenWRT), and found the hostname in the current leases file. I then manually set my Mac's IP to something different, and voila! -- the hostname was back to what I expected. I hope this helps save someone from the same paranoia I went through.
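
    For anyone who wants to pin the name rather than dodge the DHCP lease, OS X also lets you set the hostname explicitly from the terminal (a side note of mine, not from the quoted post):

        # set an explicit hostname so the prompt no longer follows DHCP leases
        sudo scutil --set HostName mymac.local
        # the computer name and Bonjour name can be pinned the same way
        sudo scutil --set ComputerName "mymac"
        sudo scutil --set LocalHostName "mymac"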

  • Iterating through folders and files in batch file?

    - by Will Marcouiller
    Here's my situation. A project's objective is to migrate some attachments to another system. These attachments will be located under a parent folder, let's say "Folder 0" (see this question's diagram for a better understanding), and they will be zipped/compressed. I want my batch script to be called like so:

        BatchScript.bat "c:\temp\usd\Folder 0"

    I'm using 7za.exe as the command-line extraction tool. What I want my batch script to do is iterate through "Folder 0"'s subfolders and extract all of the ZIP files they contain into their respective folders. It is obligatory that the extracted files end up in the same folder as their respective ZIP files. So, files contained in "File 1.zip" are needed in "Folder 1", and so forth. I have read about the FOR...DO command on Windows XP Professional Product Documentation - Using Batch Files. Here's my script:

        @ECHO OFF
        FOR /D %%D IN ("%~1\*") DO (
            FOR %%Z IN ("%%D\*.zip") DO 7za.exe e "%%Z"
        )

    I guess that I would also need to change the actual directory before calling 7za.exe e "%%Z" for the file extraction, but I can't figure out how in this batch file (though I know how on the command line, and I know it is the same instruction, "cd"). Anyone's help is gratefully appreciated.

  • Resources for Smartphone Security

    - by Shial
    My organization is currently working on improving our data and network security due to increasingly strict HIPAA requirements and a general need to get a better grasp on controlling our health-related information. We are a non-profit working with people with developmental disabilities, so we handle a lot of medical-related information. One area that has been identified as a risk is our use of smartphones, specifically (at this time) Windows Mobile 6.1 devices from T-Mobile. We do not use the VPNs on the phones, so there isn't any way they can access our databases or file servers (the VPN username/password is not the domain logon). What would be exposed, however, is the particular user's e-mail account, since you could extract the username/password and access the e-mail either on the device or in our web e-mail (Exchange 2003), which could contain HIPAA-protected confidential information about clients and services; that would be an incident that would have to be reported. What resources or ideas would help us secure these devices? I'm not worried about data interception (we use SSL) but more about physical theft or loss of the device. Are there websites that I just have not found with guidelines and suggestions, or particular products that would help protect us? I also don't want to limit the discussion to Windows Mobile. I myself am looking at an Android 2.0 device, and there is always the eventual possibility we could get pushed to enable the VPNs. I know this is a subject that likely won't have any particular correct answer, and it is something we should all be aware of, since these devices sit outside of our immediate control most of the time.

  • Recover LVM2 volume group after one HDD failed

    - by Bernd
    I had two HDDs, each containing an LVM partition; together they formed a volume group. On top of that I had two LVs, one for my / directory and one for my /home directory. Yesterday the drive holding my / directory failed. I'm trying to recover at least my /home directory.

    What I've done so far:

    1. Boot a live system.
    2. Extract the LVM2 metadata from the working HDD using dd.
    3. Copy the metadata to /etc/lvm/backup/vg0.

    Now I'm trying to do this:

        pvcreate --restore /etc/lvm/backup/vg0 --uuid "[uuid of my working hdd]" /dev/sdb2

    But I always get:

        Couldn't find device with uuid '[uuid of broken hdd]'.
        Couldn't find device with uuid '[uuid of working hdd]'.
        Device /dev/sdb2 not found (or ignored by filtering).

    I confirmed that /dev/sdb2 exists, and I've commented out all filtering settings in /etc/lvm/lvm.conf, so I don't know what might be causing pvcreate not to find the device. So: what might be the problem? Is it even possible to restore this partition? (As I'm writing this I'm starting to think it's impossible D:)

    Edit: Okay, looks like I've got it figured out. I was using an Ubuntu 8.10 CD (yeah, I know it's not supported anymore) and it seems that was the problem. When I started from an Ubuntu 10.04 CD everything worked 'fine': I could mount my LVM partitions partially without problems. (I will answer the question in 4 hours. But if anyone still has some hints/tips, please share! :)
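
    For reference, the usual sequence for this kind of recovery pairs pvcreate --restorefile with vgcfgrestore; a generic sketch, not the poster's exact commands (the UUID must be the one recorded in the metadata backup):

        # recreate the PV label using the UUID from the metadata backup
        pvcreate --uuid "<uuid-of-working-hdd>" --restorefile /etc/lvm/backup/vg0 /dev/sdb2
        # write the VG metadata back, then activate what survives and mount it
        vgcfgrestore -f /etc/lvm/backup/vg0 vg0
        vgchange -ay vg0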

  • Analyze a BSOD (irql_less_than_or_equal)

    - by Bruno Reis
    Hello. About 2 months ago I bought a new system and built it at home:

        Motherboard: XFX X58i
        Processor: Core i7 920, using the stock cooler
        Memory: 3x2GB Corsair DDR3 1600
        Video card: NVIDIA GTS 250 (1GB)
        Hard disk: 2x WD 500GB, 7200rpm

    I have 2 screens plugged into the video card, and the system is connected to a 550W PSU. Nothing is overclocked. After building the system, I stressed it a lot with Prime95 and rthdribl to check its stability. All my tests were perfect. So I reinstalled Windows 7 x64 Professional and started using it normally. The first week (2010-03-15) I got the infamous irql_less_than_or_equal BSOD. Ten days after (2010-03-24) I got another one. Then more on 2010-04-09 and 2010-05-04. Since 2 days ago it has become worse: I get one bluescreen per day! (2010-05-12, 2010-05-13, 2010-05-14.)

    I installed BlueScreenView to try to obtain some information, but I'm not able to extract anything useful apart from the bug check string (irql_less_than_or_equal) and that it was caused by ntoskrnl.exe (the first three at ntoskrnl.exe+71f00, the last 4 at ntoskrnl.exe+70600 -- which I suspect could be the same thing, as Microsoft could have patched this file in the meantime, so the address of the function causing it changed).

    Then I stressed my memory sticks with memtest; they worked perfectly. After booting, I stressed my GPU with FurMark and rthdribl, and everything was fine. Then I stressed the CPU with 4 instances of Prime95 while monitoring the temperature -- which never exceeded 85°C with the case closed -- and everything was fine. Finally I stressed the whole system with HeavyLoad for a looooong time, and everything worked just fine.

    So, I have stressed most of the components of the system, but couldn't get any useful information from it. Do you have any hint on what else I can do to find the culprit?

    Thanks
    Bruno

  • How do I install the latest version of packages in Ubuntu?

    - by Roman
    For example, I want to install the latest version of "numpy". I type the following: "sudo apt-get install python-numpy". When I type this the first time it installs something, and if I type it a second time it says that I already have the latest version of numpy. However, I see that my version of numpy is 1.1.1, and I know that this is NOT the latest version. Why does this happen, and how can the problem be solved? I can find the *.tar.gz file with the latest version, I can extract the files from the archive, and then I need to run one of the scripts that will be somewhere among the extracted files. But I do not like this way. It is too complicated. I do not know where I should put all these files, I do not know which dependencies I should install before I run the script for the installation of numpy, I do not know where numpy will be put after installation, and so on. Is there an easy way to get the latest version of numpy?
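
    The behavior itself is expected: apt-get can only install the newest version packaged for your Ubuntu release, not the newest upstream release. A common way around it, sketched here under the assumption that python-pip is packaged for your release, is to install from PyPI instead:

        # build dependencies for numpy, then pull the current release from PyPI
        sudo apt-get install python-pip python-dev build-essential
        sudo pip install --upgrade numpy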

  • Where to get grub files without using grub-install

    - by Jacky
    I am in a particular situation. I have a MacBook Pro with no internal CD drive, and both Mac OS X (a minimal setup) and Linux (my main system) are installed. During a cross-upgrade to Ubuntu 12.04 I messed up grub, so my /boot/grub directory is basically empty. This means I can't boot Linux on the laptop anymore; I only get into grub rescue. Normally this would be no issue, as you'd just boot from a rescue CD or USB stick, but unfortunately with a MacBook Pro this is not possible (I have rEFIt installed and it attempts to boot, but it fails, and the manual says that Apple's EFI firmware cannot handle this situation). From Mac OS X, however, I still have write access to the Linux partition.

    I've now been trying to figure out how to populate the /boot/grub folder with the necessary files, to no avail so far. The ISO image of Ubuntu 12.04 contains an EFI folder, which is not what I am looking for; instead I need the normal.mod files for the grub version of Ubuntu 12.04. I do not have any other machine on which to set up an Ubuntu 12.04 virtual machine and extract the files from it after a grub-install, so I am asking for ideas here on how to solve this mess.

    P.S.: I installed the Linux previously, when I still had a working internal CD drive. This is gone now.
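
    One idea, an untested assumption on my part: the BIOS-boot grub modules ship in Ubuntu's grub-pc-bin package, and a .deb is just an ar archive, so the modules could be pulled out on the Mac OS X side and copied onto the mounted Linux partition:

        # download grub-pc-bin for 12.04 (precise) from packages.ubuntu.com, then:
        ar x grub-pc-bin_*.deb                    # yields data.tar.gz (or data.tar.xz)
        tar xzf data.tar.gz ./usr/lib/grub/i386-pc
        cp usr/lib/grub/i386-pc/* /Volumes/linux/boot/grub/   # hypothetical mount point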

  • Error upgrading Ubuntu server from Intrepid to Jaunty

    - by Martin
    I'm trying to upgrade an old Ubuntu server from 8.10 (Intrepid) to 9.04 (Jaunty), but it fails:

        root@server1:/# do-release-upgrade
        Checking for a new ubuntu release
        Failed Upgrade tool signature
        Failed Upgrade tool
        Done downloading
        extracting 'jaunty.tar.gz'
        Failed to extract
        Extracting the upgrade failed. There may be a problem with the network or with the server.

    Does anyone have an idea why I get this error and how to fix it?

    UPDATE: I think I might have tracked the problem down. My /etc/update-manager/meta-release looks like this:

        [METARELEASE]
        URI = http://changelogs.ubuntu.com/meta-release
        URI_LTS = http://changelogs.ubuntu.com/meta-release-lts
        URI_UNSTABLE_POSTFIX = -development
        URI_PROPOSED_POSTFIX = -proposed

    If I go to http://changelogs.ubuntu.com/meta-release it has this info for Jaunty:

        Dist: jaunty
        Name: Jaunty Jackalope
        Version: 9.04
        Date: Thu, 23 Apr 2009 12:00:00 UTC
        Supported: 0
        Description: This is the 9.04 release
        Release-File: http://archive.ubuntu.com/ubuntu/dists/jaunty/Release
        ReleaseNotes: http://changelogs.ubuntu.com/EOLReleaseAnnouncement
        UpgradeTool: http://archive.ubuntu.com/ubuntu/dists/jaunty-proposed/main/dist-upgrader-all/0.111.8/jaunty.tar.gz
        UpgradeToolSignature: http://archive.ubuntu.com/ubuntu/dists/jaunty-proposed/main/dist-upgrader-all/0.111.8/jaunty.tar.gz.gpg

    Those links starting with archive.ubuntu.com are broken, since Jaunty is EOL. I guess I could fix this by copying this file, replacing "archive" with "old-releases", hosting the modified file somewhere, and changing the URL in the meta-release file. Is this a good solution, or will it make me run into worse problems?
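
    The workaround described in the last paragraph would look roughly like this (a sketch; where you host the modified file is up to you):

        # rewrite the dead archive.ubuntu.com links to old-releases.ubuntu.com
        curl http://changelogs.ubuntu.com/meta-release |
          sed 's/archive\.ubuntu\.com/old-releases.ubuntu.com/g' > meta-release.fixed
        # host meta-release.fixed somewhere reachable, then point the URI line in
        # /etc/update-manager/meta-release at that copy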

  • `:Zone.Identifier` files keep on appearing in Windows XP virtual machine

    - by Jonathan Reno
    I have a Windows XP Home Edition guest and a Linux Mint 13 host. I use VirtualBox, and the ~/Public directory is shared with the guest. I sometimes use IE on the guest system to download files (until I get a better Windows browser). All of the downloaded files go to the L:\ drive (the ~/Public directory). When they have finished downloading, Windows Explorer adds a :Zone.Identifier file for each file I downloaded. When I extract a downloaded ZIP archive on the guest (on drive L:\), Windows creates a :Zone.Identifier file for every file in the extracted directory. This even occurs if I use the host to move a file into the ~/Public directory. The shared ~/Public directory is on an ext4 partition, and the colon character is supposed to be illegal in file names on Windows, but not on the ext4 partition. Is there any way to stop Windows from putting all this rubbish on my filesystem? (Otherwise I might have to create a shell script to clean up after Windows' act; see the sketch below.) Here is what I see in Windows Explorer (screenshot). By the way, if I were running a Mac OS X host (where colons are illegal file name characters) this would be even more horrendous.
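
    The clean-up script mentioned above can be a one-liner on the host side (it only touches files whose names end in the literal :Zone.Identifier suffix):

        # remove the stray Zone.Identifier stream files from the shared folder
        find ~/Public -type f -name '*:Zone.Identifier' -delete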
