Search Results

Search found 23658 results on 947 pages for 'mixed case'.


  • Viewing zip archive contents using 'less' on OS X.

    - by multihead
    I couldn't help but notice that the 'less' program on all of the recent Linux distributions I've used (Ubuntu and Gentoo in this case) lets me view the contents of ZIP and TAR archives, while the install of 'less' that I have on OS X (and Solaris) instead produces a "foo.zip may be a binary file. See it anyway?" prompt and then spits out the raw binary data instead of a nice file-structure listing. Google has not produced much in the way of helpful results -- it's tricky to search for 'less' in this context. I downloaded and built the latest version from greenwoodsoftware.com, but even it refuses to show the contents of these archives. I didn't come across any related configure/build options either. Any ideas? Thanks!
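
    On Linux this behaviour usually comes from a preprocessor script (lesspipe) wired up through the LESSOPEN environment variable, not from 'less' itself, which would explain why rebuilding 'less' changed nothing. A minimal sketch of reproducing it on OS X, assuming a lesspipe script is available somewhere (the path below is an assumption; it could come from Homebrew/MacPorts or be copied from a Linux box):

        # Check whether a preprocessor is already configured (on Ubuntu/Gentoo it usually is):
        echo "LESSOPEN=$LESSOPEN"

        # Point LESSOPEN at a lesspipe script -- the path is hypothetical, adjust to where yours lives:
        export LESSOPEN="| /usr/local/bin/lesspipe.sh %s"

        # 'less' should now show an archive listing instead of raw bytes:
        less foo.zip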

    Read the article

  • How to automatically copy a file uploaded by a user via FTP in Linux (CentOS)?

    - by Buttle Butkus
    Outside contractor says they need read/write/execute permissions on part of the filesystem so they can run a script. I'm ok with that, but I want to know what they're running, in case it turns out there is some nefarious code. I assume they are going to upload the file, run it, and then delete it to prevent me from finding out what they've done. How can I find out exactly what they've done? My question specifically asks for a way of automatically copying the file, which would be one way. But if you have another solution, that's fine. For example, if the file could be automatically copied to /home/root/uploaded_files/ that would be awesome.
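
    One way to get that automatic copy, sketched below, is to watch the contractor's directory with inotify and archive every file as soon as it is written. The paths are assumptions, and inotify-tools would need to be installed (e.g. from EPEL on CentOS):

        #!/bin/bash
        # Archive every file the contractor uploads, as soon as the FTP server finishes writing it.
        WATCH_DIR=/home/contractor/upload        # hypothetical upload directory
        ARCHIVE_DIR=/home/root/uploaded_files
        mkdir -p "$ARCHIVE_DIR"

        inotifywait -m -r -e close_write --format '%w%f' "$WATCH_DIR" |
        while read -r file; do
            # --backup=numbered keeps older versions if the same name is uploaded twice
            cp -p --backup=numbered "$file" "$ARCHIVE_DIR/"
        done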

    Read the article

  • I cannot change the grub Default item from OS-1, but I can from OS-2 (dual-boot 10.04 on both)

    - by fred.bear
    My 10.04 system (OS-1) got into a tangle the other day, so I installed a second, dual-boot 10.04 (OS-2) so that I could troubleshoot the hung system... In case it is relevant to my question, I'll mention that since I got OS-1 working again, it has shown a few battle wounds from its ordeal (.. actually the ordeal was mine ... trying to figure it all out ;) ... I lost some custom settings, but not all. (For the curious: the hangup was caused by rsync writing 600 GB to OS-1's 320 GB drive. The destination drive was unmounted at the time, and rsync dutifully wrote directly to /media/usb_back, filling the disk to capacity... I have since amended my script :) Because the dual-boot MBR was prepared by OS-2, it is first on the grub list. However, I want OS-1 to be the default OS to boot... From OS-1, I tried two methods to change the grub menu's default OS: (1) directly editing /etc/default/grub (then update-grub), and (2) running 'Startup Manager' (then update-grub). Neither of these methods had any effect... so I started OS-2 and tried method 1... It worked! Why can I not change the grub menu from OS-1? .. or if it can be done, how?
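
    A likely explanation (an assumption, not verified against this exact setup) is that the GRUB installed to the MBR belongs to OS-2, so at boot it reads OS-2's /boot/grub/grub.cfg; running update-grub inside OS-1 only rewrites OS-1's copy, which nothing ever reads. A minimal sketch of making the change from OS-2:

        # List the generated menu entries to find OS-1's position (entries are numbered from 0):
        sudo grep ^menuentry /boot/grub/grub.cfg

        # Set that index (or the exact menu title) as the default -- "4" here is only an example:
        sudo sed -i 's/^GRUB_DEFAULT=.*/GRUB_DEFAULT=4/' /etc/default/grub

        # Regenerate the grub.cfg that the boot-sector GRUB actually reads:
        sudo update-grub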

    Read the article

  • Is it too late to start your career as a programmer at the age of 30?

    - by Matt
    Assuming one graduated college at 30 years old and has 5 years of experience (no real job experience, just contributing to open source and doing personal projects) with various tools and programming languages, how would he or she be looked upon by hiring managers? Will it be harder to find a job, considering that (from what I gathered looking at various websites, user profiles on SO and here, etc.) the average person gets hired in this field at around 20 years old? I know that it's never too late to do what you're passionate about and the like, but sometimes it is too late to start a career. Is this the case? Managers are always looking for fresh people, and I often read job descriptions specifically asking for young people. I don't need answers of encouragement; I know the community here is great and I wouldn't get offended by even the coldest answers. Please don't close this as being too localized -- I'm not referring to any specific country or region; talk about the region you're in. I would also appreciate it if you justified your answer.

    Read the article

  • Varnish cache and PHP session; setting header?

    - by StCee
    Varnish by default will not cache pages with cookies. I read in some posts that one workaround for PHP pages is to set header('Cache-Control: public, s-maxage=60'); in those pages. But would that make Varnish cache the page along with the session cookie? A session is started on that page, and although there is nothing personal on it, I still want the session to persist in case the user does something private later. So is there a way to cache the page without the session cookie, and still be able to pass the session between pages? I can imagine some sort of weird solution with a hidden form, but I would prefer if it can be done with VCL configuration or header settings. Thanks a lot!

    Read the article

  • What browser feature is this exploiting and how to stop it?

    - by ldigas
    http://raffa991.ra.funpic.de/lol/ Warning: it is some kind of annoying "you are an idiot" sign combined with a lot of popup message boxes. Open with care! In any case, it crashed my Firefox 3.5.4 (or, to be more precise, made it unusable) ... I don't know about other browsers. Since it's been a while since something that stupid did something like that, I'm wondering ... what weakness is that thing using (JavaScript?), and how does one protect oneself from it?

    Read the article

  • How to add network printer remotely without knowing the IP?

    - by Steve
    Assume your friend from over 100 km away asked you to add a network printer to his computer since you're so tech savvy. How would you add a network printer remotely in this case? You would need:
    0. A remote connection to your friend's computer
    1. The printer's IP and brand/model name
    2. The respective drivers, downloaded either from the manufacturer's website or via Windows Update
    The question is: how would you find out the IP address of the printer without bothering your friend too much with technical steps? Since your friend isn't as tech savvy as you, they wouldn't know which buttons to press to get the IP address.
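
    One way to find the printer's IP without walking your friend through menus, sketched below, is to scan the LAN for printer ports from the remote session. This assumes nmap is installed on the friend's machine and that the subnet is 192.168.1.0/24 (adjust to the actual network):

        # Scan the local subnet for hosts listening on common printer ports
        # (9100 = JetDirect/raw, 631 = IPP, 515 = LPD):
        nmap -p 9100,631,515 --open 192.168.1.0/24

        # A lower-tech fallback: list devices the machine has already talked to and try their web UIs:
        arp -a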

    Read the article

  • Incremental search for un/accented characters

    - by user38983
    Does emacs have an incremental search mode where searching for a character will match both the character itself and any accented versions of it, similar to what Google Chrome (at least v27) does when searching in a page? Alternatively, is there an additional library or piece of elisp code that can put incremental search into such a mode? For example, an incremental search for 'manana' would find 'manana' or 'mañana', and 'motley crue' would also find 'Mötley Crüe' (with case-sensitivity off). Even a solution that only covers a subset of these characters would be helpful.

    Read the article

  • Motherboard with embedded hdmi problems with Windows 8 Consumer Preview (64bit)

    - by duluca
    I'm specifically referring to a GIGABYTE GA-E7AUM-DS2H LGA 775 NVIDIA GeForce 9400 HDMI Micro ATX board with a Core 2 Duo chip. This computer is connected to a Sharp Aquos TV using HDMI. It all worked fine with Windows 7 64-bit. In Device Manager I see the GeForce 9400 and have installed the latest NVidia drivers (295.73 WHQL). However, when I go to change the screen resolution to 1920x1080, I see that Windows 8 thinks it's using the Microsoft Basic Display Driver rather than the GeForce card. This was made clear when I tried to launch the NVidia tools and they claimed that the current monitor (in this case my TV) wasn't attached to the GeForce 9400 card. In Device Manager, there's a "Coprocessor" and an "Unknown device" without drivers, but I've no idea what they are. I've run the original CD that the motherboard came with, but with no success. Any ideas?

    Read the article

  • Many user stories share the same technical tasks: what to do?

    - by d3prok
    A little introduction to my case: As part of a bigger product, my team is asked to build a small IDE for a DSL. The user of this product will be able to make function calls in the code, and we are also asked to provide some useful function libraries. The team, together with the PO, put on the wall a certain number of user stories regarding the various libraries for the IDE user. When estimating the first of those stories, the team decided that the function call mechanism would be an engaging but not completely obvious task, so the estimate for that user story rose from a simple 3 to a more dangerous 5. Coming to the problem: The team then moved to the user stories regarding the other libraries, actually 10 stories, and added those 2 points for the "function call mechanism" thing to each of those user stories. This immediately raised the total points for the product by 20! Everyone in the team knows that each user story could be picked up by the PO for the next iteration at any time, so we shouldn't isolate that part in one user story, but those 20 points feel so awfully unrealistic! I've proposed a solution, but I'm absolutely not satisfied: we created a "design story" and put those annoying 2 points on it. However, when we came to implement it and demonstrate it to our customers, we were unable to show them anything really valuable from that story! The problem here is whether we should ignore the principle of having isolated user stories (without any dependency between them). What would you do, or even better what have you done, in situations like this? (A small footnote: following a suggestion, I've moved this question from stackoverflow.)

    Read the article

  • Ultimate way to use Picasa in a home network

    - by luisfarzati
    I've been trying a lot of approaches but still haven't found an effective solution. I keep gigs of photos on a network drive (an Iomega Home Media Network Drive plugged into my wifi router). I'd like to do 2 things:
    1. Do a Picasa import of all the photos on the drive, making Picasa physically organize the files into a year/month folder structure. Ideally, the import target directory should be the same network drive; otherwise I would have to move all the imported files from my local computer back to the drive myself.
    2. Share the Picasa database over the network by uploading it to the network drive. Have me and other members of the family point our Picasas at the network database, see the photos, and make changes (tag faces, create logical albums, etc.) to it.
    Is there ANY possibility to accomplish this? Or should I be looking for another photo-management app, and in that case do you know of one? Thank you!

    Read the article

  • Is there a way to create a copy-on-write copy of a directory?

    - by BCS
    I'm thinking of a situation where I would have something that creates a copy of a directory, tweaks a few files, and then does some processing on the result. This would be done fairly often, maybe a few dozen times a day. (The exact use case is testing patch submissions: dupe the code, patch it, build/test/report/etc.) What I'm looking for could be done by creating a new directory structure and populating it with hard links from the original. However, this only works if all the tools you use delete and recreate files rather than edit them in place. Is there a way to have the file system do copy-on-write for a file? Note: I'm aware that many FSs use COW at a block level (all updates are done via writes to new blocks), but this is not what I want.
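
    A sketch of what this can look like on filesystems that expose copy-on-write file cloning (btrfs, or XFS formatted with reflink support -- an assumption about the setup, since the question doesn't name a filesystem). GNU cp can request a reflink clone and fall back to a plain copy elsewhere:

        # Clone the whole tree copy-on-write where the filesystem supports it;
        # --reflink=auto silently falls back to a normal copy if it doesn't:
        cp -a --reflink=auto /srv/patch-queue/clean-tree /srv/patch-queue/work-tree

        # The hard-link variant mentioned in the question, for comparison
        # (breaks as soon as a tool edits a file in place):
        cp -al /srv/patch-queue/clean-tree /srv/patch-queue/work-tree-links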

    Read the article

  • Losing file permissions after rebooting Windows 7

    - by SMTF
    I have a user directory full of files that are not accessible, permission-wise, to the user whose home directory it is. Said user can't run Explorer; for example, it gives an error complaining that permission is not available for required files. I tried various ways to give said user permission to his home directory, and things are fine until after rebooting the machine; the permissions reset to the previous state and the problem persists. I followed the solution outlined here. And again, things worked until I rebooted the machine. I'm in this mess because I replaced a corrupted user profile as outlined here. The original user and the new replacement are/were both admin accounts. In case it is relevant, I will mention that the Users directory is not on the C volume but on a D volume on the same machine. Any insight is appreciated.

    Read the article

  • How to revert to "last known good configuration"

    - by Ripley
    Hi guys. I failed to install Ubuntu 10.04 with Wubi; for some reason it's telling me the root partition is not defined. I'm tired of fighting with it, so I just removed Ubuntu from within Windows. However, this installation crippled my original Windows XP: a normal boot now ends in a blue screen with error code 7E. I'm still able to boot with the 'last known good configuration', though. My understanding is that booting like this is supposed to recover things so the next reboot is fine, but that is not the case for me: I have to choose 'boot from last known good configuration' each and every time to work around the blue screen. Could you suggest how I could resolve this? It feels foolish having to waste 10 more seconds every time I start the OS.

    Read the article

  • FTP blocked by firewall on Windows 8.1 Update 1 public network

    - by amik
    I've recently upgraded to Windows 8.1 Update 1. I connect by VPN to one of my projects, and over that connection I connect to an FTP server (using Total Commander 8.51a). Now, when I try to connect, Total Commander hangs on "Download" (or, for a passive connection, on the "PASV" command). I've figured out that the problem is somehow caused by the firewall, because it works if I disable the firewall or set the VPN network location to "private" (which I don't want; the network isn't trusted enough for that). I tried to add a firewall exception for Total Commander, with both inbound and outbound rules, but with no success. I have no more ideas how to configure the firewall to make FTP work properly. Can you please help me? Thanks in advance.

    Read the article

  • Can GnomeKeyring store passwords unencrypted?

    - by antimeme
    I have a Fedora 15 laptop with the root and home partitions encrypted using LUKS. When it boots I have to enter a pass phrase to unlock the master key, so I have it configured to automatically log me in to my account. However, GnomeKeyring remains locked, so I have to enter another pass phrase for that. This is unpleasant and completely pointless since the entire disk is encrypted. I've not been able to find a way to configure GnomeKeyring to store its pass phrases without encryption. For example, I was not able to find an answer here: http://library.gnome.org/users/seahorse-plugins/stable/index.html.en Is there a solution? If not, is there a mailing list where it would be appropriate to plead my case?

    Read the article

  • Can I install a fresh Linux across partitions (LUKS & LVM) and preserve/use the existing home user?

    - by xtian
    With an existing LUKS-encrypted, LVM-partitioned hard disk that dual-boots Windows and Linux (Fedora 15), is it necessary to "start over" with the LUKS setup when upgrading the system? I recall a note that dividing the Linux installation over different partitions would help to preserve the home data in a future upgrade (I can't find it now). Before I try it, is this a possible and intended use case for partitioning a Linux installation?

        # lsblk -fa
        NAME                                                  FSTYPE       LABEL  MOUNTPOINT
        sda [80G]
        +-sda1 [system W95 FAT 32]                            vfat
        +-sda2                                                ext4                /boot
        +-sda3 [52.4G]                                        crypto_LUKS
          +-luks-de25ac97-6a32-4b79-a6a0-296a39376b3b (dm-0)  LVM2_member
            +-cryptVG-root (dm-1) [21.5G]                     ext4                /
            +-cryptVG-swap (dm-2) [5.4MB]                     swap                [SWAP]
            +-cryptVG-data (dm-3) [25.6G]                     ext4                /home
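
    Keeping /home on its own logical volume and reusing it across installs is a common approach. A rough sketch, under the assumption that the installer is told to reuse cryptVG-data as /home without formatting it, of verifying the layout from a live system before reinstalling:

        # From a live CD/USB: open the LUKS container and activate the volume group
        sudo cryptsetup luksOpen /dev/sda3 luks-sda3
        sudo vgchange -ay cryptVG

        # Confirm the home LV still mounts and holds the expected data
        sudo mount /dev/cryptVG/data /mnt && ls /mnt

        # During the install: reuse cryptVG-data as /home WITHOUT formatting it,
        # and format only cryptVG-root (and swap) for the fresh system.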

    Read the article

  • Hard Disk recovery

    - by Shaihi
    I have 3 disks of the same type, model, and year of production. All the disks were used as part of a generic IBM server solution. My problem is that all 3 disks suffered the same malfunction at exactly the same time and are now non-functional. I went to two different experts' laboratories and got the same answer: to recover the data they need another identical disk from which they can take spare parts. Can my case really be that clinical? Anyway, I am not sure if this question belongs on this forum, but I am looking to buy the following disk: IBM ESERVER XSERIES, IBM P/N 24P3707, IBM FRU 24P3708, 146.8GB USCSI 10K RPM, PART NUMBER 9V2005-027. I already bought a disk with the same part number, but the labs said that apparently I need a disk that was manufactured in the same factory. That means that all the numbers have to be exactly the same. If anybody knows where I can purchase such a disk (the information on the lost disks is really important to me), please tell me.

    Read the article

  • xen 4.0 squeeze fails to start guests with: launch_vm: SETVCPUCONTEXT failed

    - by mcr
    As Chris Benninger says over at http://www.benninger.ca/?p=58, lots and lots of people have the problem of Squeeze and Xen 4.0 telling them: launch_vm: SETVCPUCONTEXT failed (rc=-1), but nobody seems to know what the solution is. I don't know either, but at least here a solution might get recorded. In my case, I can start one guest machine. An identical configuration for a second machine fails. Whichever one I start first is the one that runs; the other gets the error. I've got at least a dozen other systems (at my work) running great with Squeeze and 64-bit Xen, but not this new machine at home.

    Read the article

  • Should I use my real name in my open source project?

    - by Jardo
    I developed a few freeware programs in the past which I signed with my pseudonym Jardo. I'm now planning to release my first open source project and was thinking of using my full real name in the project files (as the "author"). I thought it would be good to use my name as my "trademark", so if someone (perhaps a future headhunter) googles my name, they'll find my projects. But on the other hand, I feel a bit paranoid about disclosing my name (at the very least I could get a lot of spam to my email; it's not that hard to guess someone's private email address from their name). What do you think can be "dangerous" about disclosing your full name? What are the pros and cons? Do you use your real name or a pseudonym in your projects? I read this question: What are the advantages and disadvantages to using your real name online? but that doesn't apply to me because it's about using your real name online (internet discussions, profiles, etc.), where I personally see no reason to use my real name... And there is also this question: Copyrighting software, templates, etc. under real name or screen name? which deals with creating a business or a brand, which also doesn't apply to me because I will never sell/give away my open source project, and if someone else joins in, they can add their name as co-author without any problems...

    Read the article

  • Computer to act as keyboard

    - by Joe
    The title explains it. Imagine this example: a host computer connects to a client computer via a male/male USB connection. The client computer acknowledges this connection as a new device, in this case a keyboard. The host computer can now send key events to the client computer, and the client computer processes them as normal keyboard events. I did a whole lot of searching on the internet and have driven down many dead ends. Any tips would be appreciated. Note: this is a physical connection. The client computer should not have to install any software for this to function (the host will completely spoof being a keyboard).

    Read the article

  • Make nginx avoid cache if response contains Vary Accept-Language

    - by gioele
    The cache module of nginx version 1.1.19 does not take the Vary header into account. This means that nginx will serve the same cached response even if the content of one of the request fields named in the Vary header has changed. In my case I only care about the Accept-Language header; all the others have been taken care of. How can I make nginx cache everything except responses whose Vary header contains Accept-Language? I suppose I should have something like

        location / {
            proxy_cache cache;
            proxy_cache_valid 10m;
            proxy_cache_valid 404 1m;

            if ($some_header ~ "Accept-Language") {    # WHAT IS THE HEADER TO USE?
                set $contains_accept_language          # HOW SHOULD THIS VARIABLE BE SET?
            }

            proxy_no_cache $contains_accept_language;
            proxy_http_version 1.1;
            proxy_pass http://localhost:8001;
        }

    but I do not know the variable name for "the Vary header received from the backend".

    Read the article

  • Store system passwords with easy and secure access

    - by CodeShining
    I have to handle several VPSes/services, and I always set passwords to be different and random. What kind of storage do you suggest for keeping these passwords safe while letting me access them easily? These passwords are used for services like databases, the webserver user, and so on that run customers' services, so it's really important to keep them strong and in a safe place. I'm currently storing them in a Google Drive spreadsheet, describing user, password, role, and service. Do you know of better solutions? I'd like to keep them on a remote service so I don't have to make backup copies (in case my HDD fails somehow). I work on *nix platforms (so Windows-specific solutions are not an option here).
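
    One commonly suggested option (a sketch, not a recommendation specific to this setup) is pass, the command-line password manager that keeps each password as a GPG-encrypted file in a directory you can put under git and push to a private remote for the off-site copy; the remote URL and entry names below are hypothetical:

        # One-time setup: initialize the store against an existing GPG key and put it under git
        pass init "YOUR-GPG-KEY-ID"
        pass git init
        pass git remote add origin git@example.com:secrets/password-store.git

        # Day-to-day use: generate, store and retrieve per-service passwords
        pass generate customers/acme/mysql-root 24   # create and save a random 24-character password
        pass customers/acme/mysql-root               # decrypt and print it when needed
        pass git push                                # off-site, still-encrypted backup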

    Read the article

  • Is it safe to set MySQL isolation to "Read Uncommitted" (dirty reads) for typical Web usage? Even with replication?

    - by Continuation
    I'm working on a website with a typical CRUD web usage pattern, similar to blogs or forums where users create/update content and other users read it. It seems like it's OK to set the database's isolation level to "Read Uncommitted" (dirty reads) in this case. My understanding of the general drawback of "Read Uncommitted" is that a reader may read uncommitted data that will later be rolled back. In a CRUD blog/forum usage pattern, will there ever be any rollback? And even if there is, is there any major problem with reading uncommitted data? Right now I'm not using any replication, but in the future if I want to use replication (row-based, not statement-based), will a "Read Uncommitted" isolation level prevent me from doing so? What do you think? Has anyone tried using "Read Uncommitted" on their RDBMS?
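
    For anyone wanting to experiment with this, a sketch of how the isolation level can be changed in MySQL, either per session (safer for testing) or server-wide; the variable name shown is the older pre-5.7.20 spelling, and the example assumes the mysql command-line client is available:

        # Per-session: only affects this connection, handy for measuring the difference
        mysql -e "SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
                  SELECT @@session.tx_isolation;"

        # Server-wide: add to my.cnf under [mysqld] and restart
        #   transaction-isolation = READ-UNCOMMITTED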

    Read the article

  • Using mod_rewrite for a Virtual Filesystem vs. Real Filesystem

    - by philtune
    I started working in a department that uses a CMS in which the entire "filesystem" works like this:
    1. Create a named file or folder -- the file is given a unique node (e.g. 2345) as well as a default "filename" (e.g. /WelcomeToOurProductsPage), and a template is applied.
    2. Assign one or more aliases to the file for a URL redirect (e.g. /home-page-products -- it can also be accessed as /home-page-products.aspx).
    3. A new Rewrite command is written in the .htaccess file for each and every alias.
    4. The server accesses either /WelcomeToOurProductsPage or /home-page-products and redirects to something like /template.aspx?tmp=2&node=2345 (here I'm guessing what it does -- I only have front-end access for now -- but I have enough clues to strongly assume).
    5. Node 2345 grabs content stored in a SQL DB and applies it to the template.
    Note: there are no actual files being created on the filesystem. It's entirely virtual. This is probably a very common thing, but since I had never run across this kind of system until two months ago, I wanted to explain it in case it isn't common. I'm not a fan at all of ASP or closed-source systems, so it may be that this is common practice for ASP developers. My question, which has taken far too long to ask, is: what are the benefits of this kind of system, as opposed to creating an actual file hierarchy? Are there any drawbacks to having every single file server call redirected? Or to having the .htaccess file hold rewrite rules for every single alias?

    Read the article
