Search Results

Search found 54055 results on 2163 pages for 'multiple files'.


  • scp all files starting with 'file' from a server

    - by user209691
    Hi, I use this command to copy all files whose names start with 'file' from a server: scp -vp me@server:/location/files* ./ But I got a 'No Match' error, probably because of the '*' in the command. How can I protect the '*' so that ssh understands it refers to a list of files rather than taking it as a literal filename? Thx, August

    Read the article

  • Excel - Chart that sums the values in multiple rows for each series

    - by Chaulky
    Suppose I have a spreadsheet that looks something like this... Now I'd like to create a column chart that has 3 series, one for each country. Then I want a series for each category, but I want to plot the total, not each individual order total. So, something like this (excuse the horrible artwork)... The data label placement isn't all that important; the key is that for each category (Bikes and Clothes) I chart the total for each country, not individual values from the "Order Total" column. Is this possible? Is it possible to do the same thing, but with Country and Category switched around?

    Read the article

  • mysql is not using multiple cpus

    - by mhost
    Our MySQL server has been using a lot of CPU lately (it has reached 100% several times and stays there for a while), and I noticed that the CPU load is all on one core of one CPU. I was hoping to spread that out across all 4 cores on my server. I have been tweaking the MySQL settings to use more RAM and less CPU, but it still occasionally reaches very high CPU usage. It seems like everything written about the topic refers to thread_concurrency (which I've read is a Solaris-only setting). What can I do in Linux? Thanks.

    Read the article

  • How do I effectively write to 146 output files in C++ using cstdlib library

    - by Elpezmuerto
    I have a very large binary file and I need to create separate files based on the id within the input file. There are 146 output files and I am using cstdlib and fopen and fwrite. FOPEN_MAX is 20, so I can't keep all 146 output files open at the same time. I also want to minimize the number of times I open and close an output file. How can I write to the output files effectively? I also must use the cstdlib library due to legacy code.
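
    One common approach, sketched below in C++ (a sketch only: the file-naming scheme, the flush threshold, and the write_record() interface are assumptions for illustration, not taken from the question), is to accumulate output per id in memory and append each buffer to its file in large chunks, so that at most one FILE* is open at any moment and each file is opened far fewer times than it is written to:

      #include <cstddef>
      #include <cstdio>
      #include <string>
      #include <vector>

      const std::size_t kFlushThreshold = 1 << 20;    // flush a buffer at ~1 MiB
      const int kNumOutputs = 146;

      std::vector<std::string> buffers(kNumOutputs);  // one pending buffer per output id

      // Append the bytes accumulated for one id to its file, then close it again,
      // so no more than one output FILE* is ever open at a time.
      void flush_buffer(int id) {
          if (buffers[id].empty()) return;
          char name[32];
          std::sprintf(name, "out_%03d.bin", id);     // hypothetical naming scheme
          if (std::FILE* f = std::fopen(name, "ab")) {
              std::fwrite(buffers[id].data(), 1, buffers[id].size(), f);
              std::fclose(f);
          }
          buffers[id].clear();
      }

      // Called once per record parsed from the large input file.
      void write_record(int id, const char* data, std::size_t len) {
          buffers[id].append(data, len);
          if (buffers[id].size() >= kFlushThreshold) flush_buffer(id);
      }

      // After the whole input has been read, push out whatever is left.
      void flush_all() {
          for (int id = 0; id < kNumOutputs; ++id) flush_buffer(id);
      }

    An alternative under the same FOPEN_MAX limit would be a small pool of open handles that keeps the most recently used files open and closes only the least recently used one when the pool is full; which of the two wins depends on how the ids are distributed in the input.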

    Read the article

  • Boost Include Files in VC++

    - by Dr. K
    For the last few years, I have been exclusively a C# developer. Previously, I developed in C++ and have a C++ application that I built about 3 years ago using VS2005. It made extensive use of the Boost libraries. I recently decided to brush off the old app and rebuild it in VS2008 with the latest version of Boost (the latest version with the "easy" installation program from BoostPro Computing), 1.39. Previously, when I had the program running, I was at 1.33. Also, the last time the program was running was at least 2 OS installations ago. The Boost installation is located on my machine at "C:\Program Files\boost\boost_1_39". Anyway, I have done the following:
      - Set the project's "Additional Include Directories" to "C:\Program Files\boost\boost_1_39"
      - Added "C:\Program Files\boost\boost_1_39" to VS2008's Tools - Options - Projects and Solutions - VC++ Directories - Include Files
    I have a number of Boost includes in my stdafx.h file. The compiler fails upon attempting to open the first one: #include <boost/algorithm/string/string.hpp>. I have confirmed that this file is indeed located at "C:\Program Files\boost\boost_1_39\boost\algorithm\string\string.hpp", yet I continue to get: fatal error C1083: Cannot open include file: 'boost/algorithm/string/string.hpp': No such file or directory. Any tips on what else to check would be greatly appreciated. Again, this is an application that compiled fine a few years ago, but the source has now been moved to a new machine/compiler.

    Read the article

  • Apache2 WebServer not allowing me to view website/files in /var/www

    - by CitadelCSAlum
    I used to be able to access websites/files that were stored in the directory /var/www. I have not used this for a while, but now I need to store media in this directory or in the directory /var/www/images. I noticed that my Apache web server wasn't running correctly, so I did a complete package removal and then reinstalled, but I am still unable to access a test page index.html at /var/www/index.html by going to http://myipaddresshere/index.html. Is there some initial configuration I need to do to allow me to store HTML and media files in this directory and access them from the browser? I don't remember having to do anything before.

    Read the article

  • How to share datastores between multiple exchange servers?

    - by Johan
    I have an Exchange 2003 box that is seriously overstressed. I want to transfer its duties to a new and faster box. I cannot suffer downtime, so I have to do this live. Here's what I plan to do:
      - Install Exchange 2003 on the new server
      - Set up the new server so it will accept requests from users for their mailboxes (I want to do as little manual setup as possible, because that'll eat up my time and is too error prone)
      - Then transfer my datastores one by one to the new server and have those users (once the datastore on the new server is up and running) get their data from the new server, without them noticing
      - I don't have to transfer all the datastores; some of them need to stay on the old box (because I'm still waiting for extra HD space to arrive from the supplier)
    What steps do I need to follow to do this? The new box has never seen this domain before, the old Exchange server is not the DC, and we have a dedicated DC.

    Read the article

  • Program that converts Windows 7 install files into iso

    - by Stephen R
    I recently got to build a custom image of Windows 7, and I can successfully install it on other computers. The problem is that right now I have to do the installs from a bootable hard drive; I would much rather do this from a disc or an ISO. I know there is a program that will take the Windows 7 system files and convert them to an ISO image, but I can't think of it. Otherwise, could I simply burn the files to a DVD? Would that work?

    Read the article

  • VS: Separating headers from source files?

    - by jco
    I know this is completely subjective, but I'm curious: do you use separate filters for headers and source files in your Visual Studio solutions? Visual Studio creates "Header Files" and "Source Files" filters by default. To me, this dichotomy causes more annoyance than anything else. What's your take on this?

    Read the article

  • Separate the multiple ipv6 addresses

    - by Ruriko
    I have a VPS that comes with 16 IPv6 addresses. I used one of the addresses and tried testing the proxy at http://ipv6-test.com/, but it detected that it is using the first IPv6 address instead of the specific IPv6 address that I provided. For example, I used 2607:f358:0001:fed5:0022:772f:6ed3:7f3c as my proxy, but it was detected as 2607:f358:0001:fed5:0022:0000:db61:85e1. Why is it doing that? The proxy server I am using is Polipo.

    Read the article

  • Does Microsoft make available the .obj files for its CRT versions to enable whole program optimization?

    - by Leeks and Leaks
    Given the potential performance improvements from LTCG (link-time code generation, or whole program optimization), which requires the availability of .obj files, does Microsoft make available the .obj files for the various flavors of its MSVCRT releases? One would think this would be a good place for some potential gain. I'm not sure what they would have to lose, since the IL generated in the .obj files is undocumented and processor-specific.

    Read the article

  • subversion: how to manage tweaked files

    - by punk4funk
    Our group is considering moving to SVN, but I can't seem to find a way to do the following: I need to make minor tweaks locally to about 20 files in the repository without having SVN consider them "changed" and include them in the commit (changes like communication time-outs and logging levels). Ideally, I would want to merge the tweaked files to newer versions in the repository (keeping the tweaked local files up to date with committed changes from other users). I can't imagine we're unique in wanting/needing this. Are there best practices around this type of use case? One thing I'm considering is putting all the tweaked files into a branched "tweaked" working copy, then merging my tweaked files into my "official" working copy, then using a script, which compares the "tweaked" and "official" working copies, to update my ignore list. The script would also un-ignore and alert me to any files that had tweaks and other changes that, presumably, needed to be committed to the repository. This seems kinda hacky, and I can't imagine there's not a better way.

    Read the article

  • not able to open files in windows partition from linux on a dual boot system

    - by user1237244
    I have a dual boot installed on my laptop: Windows XP and Fedora Linux. For some unknown reason Windows XP is not booting up. Through Fedora I'm able to see the Windows XP partition and the files in it, but the problem is that I'm not able to open those files from Linux. Has anybody hit this kind of issue? If so, and in case you figured out the solution, can you please share it? Thanks in advance.

    Read the article

  • testing directory S_ISDIR acts inconsistently

    - by coubeatczech
    Hi, I'm doing simple tests on all the files in a directory, but for some reason they sometimes behave wrongly. What's bad with my code?

      #include <dirent.h>
      #include <sys/stat.h>
      #include <iostream>
      #include <string>

      using namespace std;

      int main() {
          string s = "/home/";
          struct dirent * file;
          DIR * dir = opendir(s.c_str());
          while ((file = readdir(dir)) != NULL) {
              struct stat * file_info = new (struct stat);
              stat(file->d_name, file_info);
              if ((file_info->st_mode & S_IFMT) == S_IFDIR)
                  cout << "dir" << endl;
              else
                  cout << "other" << endl;
          }
          closedir(dir);
      }
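
    For comparison, here is a minimal sketch of the same loop that builds the full path before calling stat() (readdir() only returns names relative to the opened directory) and checks the stat() result; it illustrates the usual pattern and is not a statement about what the original code was intended to do:

      #include <dirent.h>
      #include <sys/stat.h>
      #include <sys/types.h>
      #include <iostream>
      #include <string>

      int main() {
          std::string s = "/home/";
          DIR* dir = opendir(s.c_str());
          if (dir == NULL) return 1;

          struct dirent* file;
          while ((file = readdir(dir)) != NULL) {
              // readdir() yields bare names such as "user", so prepend the directory.
              std::string full_path = s + file->d_name;
              struct stat file_info;                         // stack buffer, nothing leaked
              if (stat(full_path.c_str(), &file_info) != 0)
                  continue;                                  // skip entries stat() cannot reach
              if (S_ISDIR(file_info.st_mode))
                  std::cout << full_path << ": dir" << std::endl;
              else
                  std::cout << full_path << ": other" << std::endl;
          }
          closedir(dir);
          return 0;
      }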

    Read the article

  • Multiple wifi cards and Internet connections on Win-7

    - by Dpp
    Hello, I have two wifi cards and two separate Internet connections. I connect to the Internet with both of them, but one of them handles all of the Internet traffic (and I have not seen any place where I can specify which one I would prefer to use!). What I would like to do is use one of them for the browser and Skype only, and the other one for stock exchange software, for instance. Any idea if this is possible?

    Read the article

  • File IO with Streams - Best Memory Buffer Size

    - by AJ
    I am writing a small IO library to assist with a larger (hobby) project. A part of this library performs various functions on a file, which is read/written via the FileStream object. On each StreamReader.Read(...) pass, I fire off an event which will be used in the main app to display progress information. The processing that goes on in the loop is varied, but is not too time-consuming (it could just be a simple file copy, for example, or may involve encryption...). My main question is: what is the best memory buffer size to use? Thinking about physical disk layouts, I could pick 2k, which would cover a CD sector size and is a nice multiple of a 512-byte hard disk sector. Higher up the abstraction tree, you could go for a larger buffer which could read an entire FAT cluster at a time. I realise that with today's PCs I could go for a more memory-hungry option (a couple of MiB, for example), but then I increase the time between UI updates and the user perceives a less responsive app. As an aside, I'm eventually hoping to provide a similar interface to files hosted on FTP/HTTP servers (over a local network / fastish DSL). What would be the best memory buffer size for those (again, a "best-case" tradeoff between perceived responsiveness and performance)?
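
    Purely to illustrate the loop being described (sketched here in C++ rather than against the .NET FileStream API the question uses), here is a chunked copy with a tunable buffer size and a per-chunk progress callback; the callback and whatever buffer size is passed in are placeholders, not recommendations:

      #include <cstddef>
      #include <cstdio>
      #include <functional>
      #include <vector>

      // Copy 'in' to 'out' in buffer_size chunks, reporting the running total
      // after each chunk (analogous to the event fired after each Read() pass).
      void copy_with_progress(std::FILE* in, std::FILE* out,
                              std::size_t buffer_size,
                              const std::function<void(std::size_t)>& on_progress) {
          std::vector<char> buffer(buffer_size);
          std::size_t total = 0;
          std::size_t n;
          while ((n = std::fread(buffer.data(), 1, buffer.size(), in)) > 0) {
              std::fwrite(buffer.data(), 1, n, out);
              total += n;
              on_progress(total);
          }
      }

    With the chunk size as an argument, the 2 KiB vs. multi-MiB tradeoff discussed above can be measured rather than guessed.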

    Read the article

  • Download multiple files in background in Android

    - by Addev
    Basically I'm trying to make a little app for watching offline content. So there's a moment where the user chooses to download the contents (and the app should download about 300 small files and images). I'd like to show the user how the process is going if he enters the proper activity: a list of all the files, showing which have already been downloaded, which are in progress, and which are waiting for download. My problem is that I really don't know what approach to take to achieve this. Since the download should last until finished, I imagine the solution is a Service, but what's best: an IntentService, a bound Service, or a standard Service with a startService() call for each download? And how can I keep my objects updated for displaying them later? Should I use a database or objects in memory? Thanks

    Read the article

  • unlock database files when SQL server is idle

    - by Andy
    In my development/test environment on my laptop, I can't back up the SQL Server database files because the file handles are kept permanently open by SQL Server (VSS doesn't work because the drive is TrueCrypted). I was hoping there might be some setting in SQL Server that can make it unlock the data files after a certain period of inactivity and automatically open them again on demand, but I can't find anything. I don't really want to be dumping the database out every night because it's only a development environment. Apart from stopping SQL Server before I do the backup, is there any other solution?

    Read the article

  • Windows Remote Desktop with multiple monitors

    - by BDW
    I'm looking for a remote desktop product that will allow me to switch on the fly between how many monitors the desktop is using. This would be going from one Windows 7 machine to another; one is running Ultimate, the other Enterprise. I know I can use Windows RDP with multi-monitor support, but it seems that only allows me to use ALL monitors, and if I switch to a windowed view, I get huge scrollbars because I'm going from 3 monitors to 1 window. What I'm looking for is something that will allow me to switch between 1-, 2-, or 3-monitor displays without restarting the RDP session and end up with a true 1/2/3 display - i.e., if I go from 3 monitors to 1, or to a window, I don't get scrollbars. Is there any such product?

    Read the article

  • Passwordless ssh on multiple machines

    - by Phil
    Hi all, I'm quite confused by all the ssh security stuff. I am trying to reconfigure a system that is currently broken for reasons unknown. Machine A is your personal computer that you use whenever you're at home. Machine B is the head node of an HPC cluster, and the other machines C are all identically configured machines which share the home directories of machine B. This is an HPC cluster, if you haven't guessed. How would I configure passwordless ssh between any of the nodes B or C? A can only get to C by sshing into B.

    Read the article

  • Uploading files to a server that has Real Time Antivirus scan running

    - by zecougar
    I need to allow users to upload files onto a server that has an antivirus program running with real-time scanning switched on. What would be a good design to ensure that infected files are not uploaded to the server? Questions:
      - Would large files be copied onto disk and then immediately scanned, or would they be scanned as they are copied and not allowed to appear on disk if infected?
      - Should I build a separate infrastructure around this to specifically invoke a scan on the copied file? This might be an issue if the file is deleted by the real-time scan.

    Read the article

  • Combine multiple network interfaces to connect to a dedicated server

    - by Dženis Macanovic
    this is an underpaid employee writing, who's apparently responsible for all the IT stuff in a very small (non-IT) company. Today said company got a bunch of PCs/workstations, a switch, a computer that's supposed to be used as a router, two DSL connections (each 16 MBit/s downstream and 1 MBit/s upstream) and a dedicated server which is hosted and managed professionally by a larger local company with some decent connection speed (1 GBit/s both directions if I'm not mistaken). This is what I've set up (note I'm not making use of the second DSL connection at all)...

                        ETH0                     ETH1
             [ SWITCH ]------[LINUX DEBIAN ROUTER]------[DSL MODEM 1]---[INTERNET]
              |   |   |
             PC1 PC2 ...

    ... when my boss asked me if it was somehow possible to get 32 MBit/s downstream and 2 MBit/s upstream. At that time I replied "no" without thinking too much about it. Now I've just had the following idea...

                        ETH0                     ETH1
             [ SWITCH ]------[LINUX DEBIAN ROUTER]------[DSL MODEM 1 (NON-STATIC IP)]---,                             ETH0
              |   |   |                   |                                             [INTERNET]---[LINUX DEBIAN SERVER]---[INTERNET]
             PC1 PC2 ...                  '------------[ DSL MODEM 2 (STATIC IP) ]------'
                                         ETH2

    ... but I have absolutely no clue how to implement that. Would that even be possible? What would the masquerading rules look like on the router? What about the server? I didn't find anything on the internet, mainly because I couldn't come up with any good keywords to search for to begin with. English obviously isn't my first language. Thanks in advance for your time!

    Read the article

  • Installing multiple versions of a shared library

    - by nsfyn55
    I am running Ubuntu 10.04 and I want to use tmux 1.6. tmux has a dependency on libevent 2. My solution was to compile libevent2 and drop it into /usr/local/lib, then compile tmux against this lib and drop it into /usr/local/bin. This works great until...I restart. This is just an assumption on my part, but it seems that other binaries are now linking to the libevent2 library, presumably because it's on the library path. Because there are 60+ packages with libevent1 dependencies, this causes my install to basically lose its mind. Is there an idiomatic way to approach running an application that has a core library dependency on a different version? Should I just statically link the lib?

    Read the article
