Search Results

Search found 33445 results on 1338 pages for 'single instance storage'.

Page 754/1338

  • How to rename everything matching a certain string in a folder

    - by lostiniceland
    Hello everyone. I am running Linux and I have some basic console knowledge, but my current problem is quite difficult and I don't know how to achieve this. I want/need to rename everything within a folder that matches a given string - and by everything I mean folder and file names, content within files, and content in hidden files. Basically I want to refactor a Java project. Sure, I could use Eclipse to handle the replacing, but this leaves out the folders or resources outside of my workspace. I was thinking of a script that could do the job for me, but this seems rather tricky. For instance, when it comes to folder/file renames I want to replace only the part of the name that matches my string; the rest should remain untouched. Maybe someone already has something like this in his/her script collection :-) Thanks in advance, Marc
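
    A minimal sketch of such a script (in Python rather than shell; the search/replace strings and root path are placeholders to adjust before use; it skips files it cannot read as text, and renames bottom-up so directory paths stay valid - only the matching part of each name is replaced):

        import os

        OLD, NEW = "oldname", "newname"   # placeholder search/replace strings
        ROOT = "/path/to/project"         # placeholder project root

        # Pass 1: replace matches inside file contents (hidden files included).
        for dirpath, dirnames, filenames in os.walk(ROOT):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    with open(path, encoding="utf-8") as f:
                        text = f.read()
                except (UnicodeDecodeError, OSError):
                    continue  # skip binary or unreadable files
                if OLD in text:
                    with open(path, "w", encoding="utf-8") as f:
                        f.write(text.replace(OLD, NEW))

        # Pass 2: rename files and folders, bottom-up so parent paths stay valid.
        for dirpath, dirnames, filenames in os.walk(ROOT, topdown=False):
            for name in filenames + dirnames:
                if OLD in name:
                    os.rename(os.path.join(dirpath, name),
                              os.path.join(dirpath, name.replace(OLD, NEW)))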

    Read the article

  • Any way to program a system which automatically restores home / sql database

    - by Mirage
    I have made two shell scripts. Script 1 does all home directory backups, named username_home_date.tar.gz. Script 2 does SQL backups of all sites every 3 hrs, named username_database_date.sql.gz. Currently, if I want to restore a site, I have to copy the tar file to /home/username, untar it there with all the permissions, and then manually import the database. Is there any way (for instance a program, system or script) that lets me just select which backup I want to restore and does the rest automatically? Maybe like a cPanel addon thing.
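
    A rough sketch of such a restore step in Python (the file names, user and database are hypothetical placeholders following the patterns above; tarfile restores the permissions stored in the archive, and the dump is streamed straight into MySQL):

        import subprocess
        import tarfile

        home_backup = "/backups/alice_home_2012-10-01.tar.gz"     # placeholder
        sql_backup = "/backups/alice_database_2012-10-01.sql.gz"  # placeholder
        user, database = "alice", "alice_db"                      # placeholders

        # Restore the home directory; permissions stored in the tar are kept.
        with tarfile.open(home_backup) as tar:
            tar.extractall(path="/home/" + user)

        # Decompress the SQL dump and pipe it into the database.
        subprocess.run("gunzip -c %s | mysql %s" % (sql_backup, database),
                       shell=True, check=True)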

    Read the article

  • It could be worse....

    - by Darryl Gove
    As "guest" pointed out, in my file I/O test I didn't open the file with O_SYNC, so in fact the time was spent in OS code rather than in disk I/O. It's a straightforward change to add O_SYNC to the open() call, but it's also useful to reduce the iteration count - since the cost per write is much higher: ... #define SIZE 1024 void test_write() { starttime(); int file = open("./test.dat",O_WRONLY|O_CREAT|O_SYNC,S_IWGRP|S_IWOTH|S_IWUSR); ... Running this gave the following results: Time per iteration 0.000065606310 MB/s Time per iteration 2.709711563906 MB/s Time per iteration 0.178590114758 MB/s Yup, disk I/O is way slower than the original I/O calls. However, it's not a very fair comparison since disks get written in large blocks of data and we're deliberately sending a single byte. A fairer result would be to look at the I/O operations per second; which is about 65 - pretty much what I'd expect for this system. It's also interesting to examine at the profiles for the two cases. When the write() was trapping into the OS the profile indicated that all the time was being spent in system. When the data was being written to disk, the time got attributed to sleep. This gives us an indication how to interpret profiles from apps doing I/O. It's the sleep time that indicates disk activity.

    Read the article

  • DDDNorth2 Bradford, 13th October 2012 - Async Patterns presentation and source code

    - by Liam Westley
    Many thanks to Andy Westgarth and his team for organising a fantastic conference at the rather elegant Bradford University School of Management. Also, a big congratulations to all the delegates who gave up their free time to come and hear us speak, and who were, in general, enthusiastic and asked some cracking questions to keep us speakers on our toes. For those who attended my Async Patterns session, my source code and presentation are now available on GitHub: https://github.com/westleyl/DDDNorth2-AsyncPatterns If you are new to Git then the easiest client to install is GitHub for Windows, a graphical UI for accessing GitHub. Personally, I also have TortoiseGit installed - the file explorer add-in that works in a familiar manner to TortoiseSVN. As I mentioned during the presentation, I have not included the sample data (the music files) in the source code placed on GitHub, but I have included instructions on how to download them from http://silents.bandcamp.com and place them in the correct folders. What I forgot to mention is that Windows Media Player by default does not play Ogg Vorbis and FLAC music files; however, you can download the codec installer for these, for free, from http://xiph.org/dshow. I am planning to break down this little project into a series of blog posts over several weeks, with each pattern being a single blog post. In these I will flesh out the background behind the pattern, the basic goal being achieved, and how to monitor the progress of the sample data being processed - basically, what I said during the presentation but is missing from the slides.

    Read the article

  • Can't detect collision properly using Rectangle.Intersects()

    - by Daniel Ribeiro
    I'm using a single sprite sheet image as the main texture for my breakout game. [The sprite sheet image is not reproduced here.] My code is a little confusing, since I'm creating two elements from the same texture using a Point to represent the element size and its position on the sheet, a Vector to represent its position on the viewport, and a Rectangle that represents the element itself.

        Texture2D sheet;
        Point paddleSize = new Point(112, 24);
        Point paddleSheetPosition = new Point(0, 240);
        Vector2 paddleViewportPosition;
        Rectangle paddleRectangle;
        Point ballSize = new Point(24, 24);
        Point ballSheetPosition = new Point(160, 240);
        Vector2 ballViewportPosition;
        Rectangle ballRectangle;
        Vector2 ballVelocity;

    My initialization is a little confusing as well, but it works as expected:

        paddleViewportPosition = new Vector2((GraphicsDevice.Viewport.Bounds.Width - paddleSize.X) / 2, GraphicsDevice.Viewport.Bounds.Height - (paddleSize.Y * 2));
        paddleRectangle = new Rectangle(paddleSheetPosition.X, paddleSheetPosition.Y, paddleSize.X, paddleSize.Y);
        Random random = new Random();
        ballViewportPosition = new Vector2(random.Next(GraphicsDevice.Viewport.Bounds.Width), random.Next(GraphicsDevice.Viewport.Bounds.Top, GraphicsDevice.Viewport.Bounds.Height / 2));
        ballRectangle = new Rectangle(ballSheetPosition.X, ballSheetPosition.Y, ballSize.X, ballSize.Y);
        ballVelocity = new Vector2(3f, 3f);

    The problem is I can't detect the collision properly, using this code:

        if (ballRectangle.Intersects(paddleRectangle))
        {
            ballVelocity.Y = -ballVelocity.Y;
        }

    What am I doing wrong?
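
    A plausible reading of the excerpt (not a confirmed answer): both rectangles are built from sprite-sheet coordinates and never updated from the viewport positions, so the intersection test always compares two fixed sheet locations. A minimal sketch of the idea in Python, with hypothetical field names:

        # Hypothetical axis-aligned rectangle test: the collision rectangles
        # must be rebuilt from *viewport* positions each frame, not left at
        # their sprite-sheet positions.
        def intersects(a, b):
            return (a["x"] < b["x"] + b["w"] and b["x"] < a["x"] + a["w"] and
                    a["y"] < b["y"] + b["h"] and b["y"] < a["y"] + a["h"])

        def update(ball, paddle):
            if intersects(ball, paddle):
                ball["vy"] = -ball["vy"]  # bounce the ball off the paddle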

    Read the article

  • Mobile Web Applications – A guide for professional development

    - by JuergenKress
    (Tobias Bosch, Stefan Scheidt, Torsten Winterberg / Opitz Consulting Deutschland GmbH). There is a real hype around mobile solutions. Smartphones and tablets are everywhere. Frontend architecture is changing quickly to adopt cross-browser technologies like HTML5 and extensive JavaScript-based development. In this book we introduce our software development process for building test-driven single-page JavaScript web applications, which will be the future next to native apps. We start with a short introduction of our RYLC showcase (known from our SOA articles), give a very short introduction to JavaScript, then talk about jQuery Mobile, AngularJS, testing and backend communication, and we close with deploying our RYLC web app as a hybrid app using the PhoneGap (Cordova) framework. Don't expect too much theory - it's a practical guide explaining how the RYLC web app was built, to kickstart your own development. Currently only available in German, as paperback and eBook. WebLogic Partner Community: for regular information, become a member of the WebLogic Partner Community; please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

    Read the article

  • Connecting a USB laptop to a RJ45 serial port

    - by Jon
    We are about to get our first managed switch at work (Procurve 2520G-24-PoE), and this lowly programmer gets to put on his admin hat and try to configure it. The switch has an RJ45 serial port for console access. My laptop has USB ports but no serial port. In fact, there isn't a single computer in the office with a serial port. I've seen USB-to-DB9 adapters, but I need to go from USB to RJ45 (serial). How would I go about accomplishing this? Do I need two adapters? Will USB-to-DB9 and then DB9-to-RJ45 work? Thanks in advance.

    Read the article

  • Do you leverage the benefits of the open-closed principle?

    - by Kaleb Pederson
    The open-closed principle (OCP) states that an object should be open for extension but closed for modification. I believe I understand it and use it in conjunction with SRP to create classes that do only one thing. And I try to create many small methods that make it possible to extract out all the behavior controls into methods that may be extended or overridden in some subclass. Thus, I end up with classes that have many extension points, be it through dependency injection and composition, events, delegation, etc. Consider the following simple, extendable class:

        class PaycheckCalculator {
            // ...
            protected decimal GetOvertimeFactor() { return 2.0M; }
        }

    Now say, for example, that the OvertimeFactor changes to 1.5. Since the above class was designed to be extended, I can easily subclass and return a different OvertimeFactor. But... despite the class being designed for extension and adhering to OCP, I'll modify the single method in question, rather than subclassing and overriding the method in question and then re-wiring my objects in my IoC container. As a result I've violated part of what OCP attempts to accomplish. It feels like I'm just being lazy, because the above is a bit easier. Am I misunderstanding OCP? Should I really be doing something different? Do you leverage the benefits of OCP differently? Update: based on the answers, it looks like this contrived example is a poor one for a number of different reasons. The main intent of the example was to demonstrate that the class was designed to be extended by providing methods that, when overridden, would alter the behavior of public methods without the need for changing internal or private code. Still, I definitely misunderstood OCP.
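
    For reference, the OCP-style alternative the question contrasts against - subclass and override rather than edit the original - in a short sketch (Python stands in for the C# above):

        class PaycheckCalculator:
            def overtime_factor(self) -> float:
                return 2.0

        # Extension without modification: the original class is untouched,
        # and the IoC wiring would now hand out the subclass instead.
        class RevisedPaycheckCalculator(PaycheckCalculator):
            def overtime_factor(self) -> float:
                return 1.5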

    Read the article

  • IIS 6 on x64 and long URLs

    - by mausch
    I have a very long URL on a site hosted on Windows 2003 x64 that looks like this: http://myhost/a_very_very_long_url_around_300_chars_long (i.e. a single, very long segment, around 300 chars long). Problem is, I'm getting a 400 Bad Request response from HTTP.SYS (it doesn't even reach IIS). I can tell because these requests show up in system32\LogFiles\HTTPERR, e.g.:

        2009-09-17 19:51:29 200.123.179.9 3636 192.168.129.50 80 HTTP/1.1 GET /a_very_very_long_url_around_300_chars_long 400 - URL -

    I tried setting UrlSegmentMaxLength in the registry, and this fixes the issue on my Windows 2003 x86 box but not on the x64 production server. I tried this on another Win2k3 x64 server and it also failed. Any hints?
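
    For reference, a sketch of the registry change the question mentions (the value name comes from the question; the key path is the standard HTTP.SYS parameters location, the DWORD value of 2048 is a hypothetical choice, and the HTTP service typically needs a restart for it to take effect):

        # Assumed sketch using Python's winreg; run as Administrator.
        import winreg

        key = winreg.CreateKey(
            winreg.HKEY_LOCAL_MACHINE,
            r"SYSTEM\CurrentControlSet\Services\HTTP\Parameters")
        # Allow URL segments up to 2048 characters (hypothetical value).
        winreg.SetValueEx(key, "UrlSegmentMaxLength", 0,
                          winreg.REG_DWORD, 2048)
        winreg.CloseKey(key)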

    Read the article

  • What is the maximum number of TCP connections I can have in Windows Server 2008?

    - by evilfred
    I would like to have as many connections (single connections from many different clients) as humanly possible on a server running Windows Server 2008, in order to support a Comet-style application. The application is written in C#. The connections will not be chatty; they just need to be open (and stay open). Buying boatloads of memory and fast CPUs is not a problem. As far as I can tell, I will be limited to 65k simultaneous open connections per NIC - the maximum number of ports. Is this accurate? Or can I go beyond 65k connections per NIC somehow? It seems like there are server products, for Linux at least, that support hundreds of thousands of connections. How do they do this?

    Read the article

  • Master Data Management – A Foundation for Big Data Analysis

    - by Manouj Tahiliani
    While Master Data Management has crossed the proverbial chasm and is on its way to becoming mainstream, businesses are being hammered by a new megatrend called Big Data. Big Data is characterized by massive volumes, high frequency, a variety of less structured data sources such as email, sensors, smart meters, social networks, and weblogs, and the need to analyze vast amounts of data to determine value and improve management decisions. Businesses that have embraced MDM to get a single, enriched and unified view of master data - by resolving semantic discrepancies and augmenting the explicit master data information from within the enterprise with implicit data from outside the enterprise, like social profiles - will have a leg up in embracing Big Data solutions. This is especially true for large and medium-sized businesses in industries like retail, communications, and financial services that would find it very challenging to get comprehensive analytical coverage and derive long-term success without resolving the limitations of the heterogeneous topology that leads to disparate, fragmented and incomplete master data. For analytical success from Big Data - in other words, ROI from Big Data investments - businesses need to acquire, organize and analyze the deluge of data to make better decisions. There will need to be a coexistence of structured and unstructured data, and a tight link must be maintained between the two to extract maximum insights. MDM is the catalyst that helps maintain that tight linkage by providing an understanding of the identity and characteristics of the persons, companies, products, suppliers, etc. associated with the Big Data, and thereby helps accelerate ROI. In my next post I will discuss patterns for coexisting Big Data solutions and MDM. Feel free to provide comments and thoughts on the above, as well as integration or architectural patterns.

    Read the article

  • How to calculate bandwidth limits per user on WiFi network

    - by Lars
    A typical 802.11g access point can provide around 25 Mbps of bandwidth. How is the bandwidth shared among the users? Furthermore, how many users can be served by a single access point using 802.11g in an environment with low interference and average web activity from the users? The goal is to use bandwidth limitation to avoid starvation for some users in case some of the users start to download a file, stream HD video, or do some other bandwidth-intensive activity. Can someone break down the math on this?
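
    A rough back-of-the-envelope model (all numbers below are assumptions, not measurements):

        # Assumed figures: ~25 Mbps usable 802.11g throughput (from the
        # question), a hypothetical 1 Mbps per-user cap, and ~10% of users
        # transmitting at any instant for "average web activity".
        total_mbps = 25.0
        per_user_cap_mbps = 1.0
        active_fraction = 0.10

        users = total_mbps / (per_user_cap_mbps * active_fraction)
        print("users per access point: %d" % users)  # -> 250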

    Read the article

  • Slap the App on the VM for every private cloud solution! Really?

    - by Anand Akela
    One of the key attractions of the general session "Managing Enterprise Private Cloud" at Oracle OpenWorld 2012 was an interactive role play depicting how to address some of the key challenges of planning, deploying and managing an enterprise private cloud. It was a face-off between Don DeVM, IT manager at a fictitious enterprise 'Vulcan', and Ed Muntz, the Enterprise Manager hero. Don DeVM is really excited about the efficiency and cost savings from virtualization. The success he enjoyed from infrastructure virtualization made him believe that for all cloud service delivery models (database, testing or applications as-a-service) he has a single solution - slap the app on the VM and there you go. However, Ed Muntz believes in delivering cloud services that allow the business units and enterprise users to manage the complete lifecycle of the cloud services they are providing - for example, setting up the cloud, provisioning it to users through a self-service portal, managing and tuning the performance, and monitoring and applying patches for databases or applications. Watch the video of the face-off, see how Don and Ed address some of the key challenges of planning, deploying and managing an enterprise private cloud, and be the judge!

    Read the article

  • Cursor seems to freeze in the first attempt of typing - Unity 3D, 12.04

    - by Denis
    It happens on the first attempt at typing, no matter whether it's right after startup, 5 minutes later, or later than that. The cursor (or maybe it's the system) seems to freeze, no matter which application I use, taking up to 5 sec for what is typed to appear. Subsequently, everything is normal, whichever application I use. @Anwar Shah suggested it could be a daemon waiting to run before the launching of the first application. Turning off Zeitgeist didn't help. It occurs only with Unity 3D. Tested with Unity 2D, everything is fine. I tried to change some Compiz settings; nothing worked, although I did not test every single parameter. I also deactivated the ATI proprietary driver, with no effect. My system: AMD E350 1.6GHz, 2G RAM, ATI graphics - Ubuntu 12.04, 64 bits. Update 1: the cursor is blinking normally before I start typing. After the first character (which is not shown), it seems to freeze, taking 5 seconds to get back to normal. Very annoying, especially when you want to access login sites. Update 2: I tested on a different, old machine (Athlon 64 4800 x2, 4GB RAM): no problem - it takes 2 seconds, which is acceptable. I think it could be related to my specific hardware (Samsung RV415), but I'm not sure about it. Anyone experiencing something similar? Is this what I should expect, or can it be fixed or improved? Thanks.

    Read the article

  • Multiplayer online game engine/pipeline

    - by Slav
    I am implementing an online multiplayer game where the client must be written in AS3 (Flash), to embed the game into a browser, and the server in C++ (an abstract part of which is already written and used with other games). Networking models may differ from each other, but currently I'm looking toward having the game's logic run on both the client and server parts, even though they're written in different languages - though that's not the main problem. My previous game (a pretty big one - implemented by ~5 programmers over 1.5 years) was mainly "written" within spreadsheets, as structured objects with inheritance: we wrote a standalone tool which generated AS3 and C++ (the languages of the platforms the game was published to) from a specified spreadsheet file (.xls or .ods). That file contained ~50 tables with ~50 rows and ~50 columns each, and was mainly written by game designers who do not know any programming languages. But that game was single-player. Having stated the problem with my currently implemented MMO, I'm looking toward some broad pipeline in which the following problems are solved: game object descriptions (which starships exist within the game, how much HP they have, how fast they move, what damage they deal...); action descriptions (what players or NPCs can do: attack each other, collect resources, build structures, move, teleport, cast spells) - actions are transmitted through the server between clients; and influences (what happens when a specified action is applied to a specified object, i.e. "Ship A attacked Ship B: field 'HP' of Ship B is reduced by the amount of field 'damage' of Ship A"). Influences can be much more complicated, yes, e.g. "damage is twice its size when the Ship has >=5 allies around it in a 200-unit range during night", and so on. If it were possible to write such logic within some "design document", it would easily be possible to: let designers do their job without a programmer's intervention or any bug-prone programming; validate the described logic; and transfer (transform, convert) it to any programming language where it will be executed. A sketch of the idea follows. Has somebody worked on something like that? Are there some tools/engines/pipelines concerned with this? How do I handle all of these problems simultaneously in the best way, and do I properly understand my tasks and problems?
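
    One common shape for such a pipeline is to keep the designer-authored rules as pure data and interpret them identically on client and server. A tiny illustrative sketch (Python stands in for both target languages; all field names are hypothetical):

        # An "influence" row as a designer might author it in a spreadsheet:
        # when ACTION fires, change TARGET_FIELD of the target by the value
        # of SOURCE_FIELD on the actor, with the given sign.
        RULES = [
            {"action": "attack", "target_field": "hp",
             "source_field": "damage", "sign": -1},
        ]

        def apply_action(action, actor, target):
            for rule in RULES:
                if rule["action"] == action:
                    target[rule["target_field"]] += (
                        rule["sign"] * actor[rule["source_field"]])

        ship_a = {"damage": 10}
        ship_b = {"hp": 100}
        apply_action("attack", ship_a, ship_b)  # ship_b["hp"] -> 90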

    Read the article

  • What is an appropriate language for expressing initial stages of algorithm refinement?

    - by hydroparadise
    First, this is not a homework assignment, but you can treat it as such ;). I found the following question in the published paper The Camel Has Two Humps. I was not a CS major in college (I majored in MIS/Management), but I have a job where I find myself coding quite often. For a non-trivial programming problem, which one of the following is an appropriate language for expressing the initial stages of algorithm refinement?

    (a) A high-level programming language.
    (b) English.
    (c) Byte code.
    (d) The native machine code for the processor on which the program will run.
    (e) Structured English (pseudocode).

    What I do know is that you usually want to start your design implementation by writing down pseudocode and then moving to/writing in the desired technology (because we all do that, right?). But I never thought about it in terms of refinement. I mean, if you were the original designer, then you might have access to the original pseudocode. But realistically, when I have to maintain/refactor/refine somebody else's code, I just keep trucking with the language it currently resides in. Anybody have a definitive answer to this? As a side note, I did a quick scan of the paper, as I haven't read every single detail. It presents various score statistics; I can't find where the answers are within the paper.

    Read the article

  • What's the proper term for a function inverse to a constructor - to unwrap a value from a data type?

    - by Petr Pudlák
    Edit: I'm rephrasing the question a bit. Apparently I caused some confusion, because I didn't realize that the term destructor is used in OOP for something quite different - it's a function invoked when an object is being destroyed. In functional programming we (try to) avoid mutable state, so there is no equivalent to it. (I added the proper tag to the question.) Instead, I've seen that the record field for unwrapping a value (especially for single-valued data types such as newtypes) is sometimes called a destructor or perhaps a deconstructor. For example, let's have (in Haskell):

        newtype Wrap = Wrap { unwrap :: Int }

    Here Wrap is the constructor and unwrap is - what? The questions are: How do we call unwrap in functional programming? Deconstructor? Destructor? Or some other term? And to clarify, is this or other terminology applicable to other functional languages, or is it used just in Haskell? Perhaps also, is there any terminology for this in general, in non-functional languages? I've seen both terms, for example: "... Most often, one supplies smart constructors and destructors for these to ease working with them. ..." at the Haskell wiki, or "... The general theme here is to fuse constructor - deconstructor pairs like ..." in the Haskell wikibook (here it's probably meant in a bit more general sense), or

        newtype DList a = DL { unDL :: [a] -> [a] }

    "The unDL function is our deconstructor, which removes the DL constructor." in Real World Haskell.

    Read the article

  • Configuring BitLocker on existing Windows Server 2008

    - by neildeadman
    On our file server, we want to enable BitLocker so that we can encrypt a single drive, and I know that we also have to encrypt the drive the system is installed on. In my tests with VMs, I have to create a partition on the same disk Windows is installed on, either before I install Windows or after (using the BitLocker Drive Preparation Tool). This tool shrinks the C: drive to make way for a 1.5GB partition. I'm a little concerned about doing this on a live system, so I wanted to see if it is possible to get BitLocker to use a partition on another drive.

    Read the article

  • LiveMeeting VC PowerShell PASS – Troubleshooting SQL Server with PowerShell

    - by Laerte Junior
    Guys, join me on Wednesday, July 18th at 12 noon EDT (GMT -4) for a presentation called Troubleshooting SQL Server With PowerShell. It will be in English, so please make allowances for this. I'm sure that you're aware that my English is not perfect, but it is not so bad. I will do my best, you can be sure. The registration link will be available soon from PowerShell.sqlpass.org, so I hope to see you there. It will be a session without slides. Just code; pure PowerShell code. Trust me, we will see a lot of COOL stuff. Big thanks to Aaron Nelson (@sqlvariant) for the opportunity! Here are some more details about the presentation, "Troubleshooting SQL Server with PowerShell - The Next Level": It is normal for us to have to face poorly performing queries or even complete failure in our SQL Server environments. This can happen for a variety of reasons, including poor database designs, hardware failure, improperly configured systems and OS updates applied without testing. As database administrators, we need to take precautions to minimize the impact of these problems when they occur, and so we need the tools and methodology required to identify and solve issues quickly. In this session we will use PowerShell to explore some common troubleshooting techniques used in our day-to-day work as a DBA. This will include a variety of activities, including gathering performance counters on several servers at the same time using background jobs, identifying blocked sessions, and reading and filtering the SQL error log even if the instance is offline. The approach will use some advanced PowerShell techniques that allow us to scale the code for multiple servers and run the data collection in asynchronous mode.

    Read the article

  • LDoms with Solaris 11

    - by Orgad Kimchi
    Oracle VM Server for SPARC (LDoms) release 2.2 came out on May 24. You can get the software and see the release notes, reference manual, and admin guide on the Oracle VM for SPARC page. Oracle VM Server for SPARC enables you to create multiple virtual systems on a single physical system. Each virtual system is called a logical domain and runs its own instance of Oracle Solaris 10 or Oracle Solaris 11. The version of the Oracle Solaris OS software that runs on a guest domain is independent of the Oracle Solaris OS version that runs on the primary domain. So, if you run the Oracle Solaris 10 OS in the primary domain, you can still run the Oracle Solaris 11 OS in a guest domain, and if you run the Oracle Solaris 11 OS in the primary domain, you can still run the Oracle Solaris 10 OS in a guest domain. In addition, starting with the Oracle VM Server for SPARC 2.2 release, you can migrate a guest domain even if the source and target machines have a different processor type: you can migrate a guest domain from a system with an UltraSPARC T2+ or SPARC T3 CPU to a system with a SPARC T4 CPU. The guest domain on the source and target systems must run Solaris 11 in order to enable cross-CPU migration, and in addition you need to change the cpu-arch property value on the source system. For more information about Oracle VM Server for SPARC (LDoms) with Solaris 11 and cross-CPU migration, refer to the following white paper.

    Read the article

  • Are the only types of data "sources" static and dynamic?

    - by blunders
    Thinking that there might be others, but not sure - but before getting into that, let me explain what I mean by static and dynamic data sources. Static (or datastore) - meaning that the data's state is non-changing; if it was changed, that would be a new state, and the old data would be considered stateless, meaning it is no longer known to exist, or not exist. Another way of looking at a static data source might be that if it is read and written back without modification, the checksum before and after should be exactly the same, regardless of the duration of time between the reading and rewriting of the data. Examples: photos, files, database records. Dynamic (or datastream) - meaning that the data's state is known to be in flux and never expected to be the same per input. Examples: live video/audio feeds, stock market feeds. First let me say, the above is a very loose mapping of the concepts, and I'd welcome any feedback. Next, onto the core of the question, that being: are these the only two types of data sources? My guess is that yes, they are - but that there are hybrid versions of the two. That being, streaming data that has a fixed state. For example, the data being streamed has a checksum given, and each unique checksum is known to be a single instance of static data. On the flip side, static data could be chained via, say, a version control system; when played back, each version might be viewed as a segment of a stream; thing is, the very fact that it can be played back makes the data source static. Another type might be that the data source is being organically discovered, and it's simply unknown what the state is. Questions, feedback, requests - just comment, thanks!!
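
    The checksum test described above, as a minimal sketch (the file name is a placeholder):

        import hashlib

        def checksum(path):
            with open(path, "rb") as f:
                return hashlib.sha256(f.read()).hexdigest()

        # A static source: reading and writing back without modification
        # leaves the checksum unchanged, regardless of elapsed time.
        before = checksum("photo.jpg")
        with open("photo.jpg", "rb") as f:
            data = f.read()
        with open("photo.jpg", "wb") as f:
            f.write(data)
        assert checksum("photo.jpg") == before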

    Read the article

  • Modular Database Structures

    - by John D
    I have been examining the code base we use at work and I am worried about the size the packages have grown to. The actual code is modular; procedures have been broken down into small functional (and testable) parts. The issue I see is that we have 100 procedures in a single package - almost an entire domain model. I had thought of breaking these packages down - creating subdomains that are centered around the procedures' relationships to other objects: group a bunch of procedures that have 80% of their relationships to three tables, etc. The end result would be a lot more packages, but the packages would be smaller, and I feel the entire code base would be more readable - when procedures cross between two domain models, it is less of a struggle to figure out which package they belong to. The problem I now have is what the actual benefit of all this would really be. I looked at the general advantages of modularity: re-usability, asynchronous development, and maintainability. Yet when I consider our latest development, the procedures within the packages are already reusable. At this advanced stage we rarely require asynchronous development - and when it is required, we simply ladder the stories across iterations. So I guess my question is: do people know of reasons why you would break down classes rather than just the methods inside of classes? Right now I do believe there is an issue with these mega-packages forming, but the only benefit I can really pin down for breaking them down is readability - something that experience gained from working with them would solve.

    Read the article

  • Timeouts when connecting to SQL Server since installing SP1 for Windows 7

    - by Julien
    Hi, I just installed SP1 for Windows 7 and I have had severe performance degradation when connecting to SQL Server 2005 since then. Establishing a connection takes more than 30 seconds, while it's instantaneous on another computer. The firewall is disabled and I didn't make any change to the configuration. It happens both when trying to connect with a hostname and with an IP address. Everything else seems to be fine (for instance, I have no issue connecting to other computers with Remote Desktop). What can cause such a problem? Thanks in advance! Edit: uninstalling SP1 solves the issue instantly.

    Read the article

  • How would I batch rename a lot of files using command-line?

    - by Whisperity
    I have a problem which I am unable to solve: I need to rename a great dump of files using patterns. I tried using this, but I always get an error. I have a folder with a lot of files inside; running ls -1 | wc -l returns that I have around 160000 files. The problem is that I wish to move these files to a Windows system, but most of them have characters like : and ? in their names, which makes the files inaccessible on said Windows-based systems. (As a "do not solve but deal with" method, I tried booting up a LiveCD on the Windows system and moving the files using the live OS. Under that Ubuntu, the files were readable and writable on the mounted NTFS partition, but when I booted back into Windows, it showed that the file was there but Windows was unable to access it in any fashion: rename, delete or open.) I tried running rename 's/\:/_' * inside the folder, but I got an "Argument list too long" error. Some searching revealed that it happens because I have so many files, and then I arrived here. The problem is that I don't know how to alter the command to suit my needs, as I always end up with various errors. Trying

        find -name '*:*' | xargs rename : _

    gives

        xargs: unmatched single quote; by default quotes are special to xargs unless you use the -0 option
        syntax error at (eval 1) line 1, near ":"
        xargs: rename: exited with status 255; aborting

    Adding the -0 after xargs turns the error message into

        xargs: argument line too long

    These files are archive files generated by various PHP scripts. The best solution would be having a chance to rename them before they are moved to Windows, but if there is no way to do it, we might have a way to rename the files while they are moved to Windows. I use Samba and ProFTPD to move the files. Unfortunately, graphical software is out of the question, as the server containing the files is what it is - a server, with only a command-line interface.
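
    One way to sidestep the shell's argument-list limit entirely is to do the renaming in a short Python script instead of relying on * expansion (a sketch, run from inside the folder; : and ? are the assumed offending characters, extend UNSAFE as needed):

        import os

        UNSAFE = ':?'  # characters Windows filenames cannot contain

        for name in os.listdir('.'):
            fixed = ''.join('_' if c in UNSAFE else c for c in name)
            if fixed != name:
                os.rename(name, fixed)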

    Read the article

  • Keyring no longer prompts for password when SSH-ing

    - by Lie Ryan
    I remember that I used to be able to do ssh user@host and have a prompt ask me for a password to unlock the keyring for the whole GNOME session, so subsequent ssh calls wouldn't need the keyring password any longer (not quite sure if this was in Ubuntu or another distro). But nowadays doing ssh user@host asks me, in the terminal, for my keyring password every single time, which defeats the purpose of using SSH keys. I checked:

        $ cat /etc/pam.d/lightdm | grep keyring
        auth optional pam_gnome_keyring.so
        session optional pam_gnome_keyring.so auto_start

    which looks fine, and:

        $ pgrep keyring
        1784 gnome-keyring-d

    so the keyring daemon is alive. I finally found that the SSH_AUTH_SOCK variable (along with GNOME_KEYRING_CONTROL, GPG_AGENT_INFO and GNOME_KEYRING_PID) is not being set properly. What is the proper way to set these variables, and why aren't they being set in my environment (i.e. shouldn't they be set in a default install)? I guess I can set them in .bashrc, but then the variables would only be defined in bash sessions; while that is fine for ssh, I believe the other environment variables are necessary for GUI apps to use the keyring.

    Read the article
