Search Results

Search found 9235 results on 370 pages for 'disk cloning'.

Page 298/370

  • Mongodb using db.help() on a particular db command

    - by user1325696
    When I type db.help() it returns a list of DB methods:

        db.addUser(username, password[, readOnly=false])
        db.auth(username, password)
        ...
        db.printShardingStatus()
        ...
        db.fsyncLock()    flush data to disk and lock server for backups
        db.fsyncUnlock()  unlocks server following a db.fsyncLock()

    I'd like to find out how to get more detailed help for a particular command. The problem was with printShardingStatus, as it returned "too many chunks to print, use verbose if you want to force print":

        mongos> db.printShardingStatus()
        --- Sharding Status ---
          sharding version: { "_id" : 1, "version" : 3 }
          shards:
            { "_id" : "shard0000", "host" : "localhost:10001" }
            { "_id" : "shard0001", "host" : "localhost:10002" }
          databases:
            { "_id" : "admin", "partitioned" : false, "primary" : "config" }
            { "_id" : "dbTest", "partitioned" : true, "primary" : "shard0000" }
                dbTest.things chunks:
                        shard0001  12
                        shard0000  19
                too many chunks to print, use verbose if you want to force print

    I found that for that particular command I can pass a boolean parameter, db.printShardingStatus(true), which wasn't shown by db.help().
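
    A hedged tip for cases like this (assuming the legacy mongo shell, where most db.* helpers are plain JavaScript functions): evaluating a helper's name without parentheses makes the shell print the function's source, which is one way to discover optional parameters that db.help() leaves out.

        mongos> db.printShardingStatus        // no parentheses: prints the helper's source,
                                              // revealing its optional verbose parameter
        mongos> db.printShardingStatus(true)  // force the full chunk listing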

    Read the article

  • Sharing [config] data across modules, functions

    - by williamstw
    I have some configuration data in a config file that I read off disk when the app starts. I need to make that configuration data available to the other functions/modules in the application. I started down the path of looking into ets/mnesia to store the data on startup and make it shared among all processes, but then my inner voice cautioned me that there must be a more functional, erlang-y way to do this. The only alternative approach I've come up with so far is setting up a module with an actor loop that reads the data on startup and responds to messages like {Key, From} by replying with From ! {ok, Value}. Then I gave up and decided to ask... Thanks, --tim
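
    One idiomatic option that avoids hand-rolling an actor loop (a minimal sketch; the application name 'myapp' and the key 'max_retries' are made up, and the value could just as well be loaded from the config file into the application environment at startup):

        %% In myapp.app (or sys.config): {env, [{max_retries, 5}]}
        %% Anywhere in the application's code:
        get_max_retries() ->
            case application:get_env(myapp, max_retries) of
                {ok, Value} -> Value;
                undefined   -> 5    % fall back to a default
            end.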

    Read the article

  • Restoring Sharepoint content database

    - by jude
    Hi, my WSS_Content database got corrupted, and my PC was infected by a virus. I had no backup of the WSS_Content database, so I copied the corrupt database to a separate disk, formatted, and reinstalled SharePoint with SQL Server 2005 as before (I'm using SharePoint 2007). I used the Sytools Sharepoint Recovery tool, which I found on the net and which helped me restore my corrupt WSS_Content database. Now I want to set this content database as "the content database" for my newly installed SharePoint. I tried the steps in this link: http://www.stationcomputing.com/scblogspace/Lists/Posts/Post.aspx?ID=40 but I get stuck at step 3. Can anybody help me? I am really in a big mess and would appreciate any help. Thanks, Jude Aloysius

    Read the article

  • Referencing View directory in asp.net mvc

    - by ooo
    I have some HTML files that are part of a regular website which has been ported over to ASP.NET MVC. My code needs to read and write these HTML files and load them into a TinyMCE editor. To read and write the files from disk I used to have a hard-coded path, but that doesn't seem to work in ASP.NET MVC unless I do something like this:

        Writing:

        string _urlDirectory = @"c:\hosting\MySite\Views\Members\newsletters\test.html";
        System.IO.File.WriteAllText(_urlDirectory, htmlData_);

        Reading:

        string url = @"c:\hosting\MySite\Views\Members\newsletters\test.html";
        var req = WebRequest.Create(url);
        var response = req.GetResponse();
        StreamReader sr = new StreamReader(response.GetResponseStream());
        string htmlData_ = sr.ReadToEnd();

    I am moving my site from one data center to another and the directory structure is changing. Instead of just swapping one hard-coded path for another, I wanted to see if there is a more relative way to reference these files.
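
    A minimal sketch of one way to drop the hard-coded path (assuming the files stay under the application's own Views folder; the app-relative path below is taken from the question, everything else is illustrative):

        // Inside a controller action: map an app-relative path to a physical path.
        string path = Server.MapPath("~/Views/Members/newsletters/test.html");

        // Outside a controller, HostingEnvironment can do the same mapping:
        // string path = System.Web.Hosting.HostingEnvironment.MapPath("~/Views/Members/newsletters/test.html");

        string htmlData_ = System.IO.File.ReadAllText(path);   // read
        System.IO.File.WriteAllText(path, htmlData_);          // write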

    Read the article

  • Copying data from STDOUT to a remote machine using SFTP

    - by freddie
    In order to back up large database partitions to a remote machine, I'd like to use the database's dump command and send the output directly to a remote location over SFTP. This is useful when dumping large data sets and you don't have enough local disk space to create the backup file first and then copy it to the remote location. I've tried using Python + paramiko, which provides this functionality, but the performance is much worse than using the native OpenSSH/sftp binary to transfer files. Does anyone have any idea how to do this, either with the native sftp client on Linux or with some library like paramiko (but one that performs close to the native sftp client)?
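
    A rough sketch of the usual streaming workaround (an assumption on my part that plain OpenSSH is acceptable in place of the sftp subsystem, since the data travels over the same encrypted channel; the dump command, host and remote path are placeholders):

        import subprocess

        # Stream the dump straight to the remote machine; nothing touches local disk.
        dump = subprocess.Popen(["mysqldump", "mydb"], stdout=subprocess.PIPE)
        ssh = subprocess.Popen(["ssh", "user@backuphost", "cat > /backups/mydb.sql"],
                               stdin=dump.stdout)
        dump.stdout.close()   # so the dump gets SIGPIPE if ssh dies early
        ssh.wait()
        dump.wait()

    With paramiko the equivalent idea is to open the remote file with sftp.open(remote_path, "wb") and write the dump's stdout into it in chunks, but the native client above is typically faster.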

    Read the article

  • Trying to right click on code in VS2008 causes lockup.

    - by Adam Haile
    I'm working on a Win32 DLL using Visual Studio 2008 SP1, and since yesterday, whenever I try to right-click on code (to go to a variable definition, for example), VS completely locks up and I have to kill the process manually. To make it even weirder, whenever this happens the devenv.exe process uses exactly 25% of the CPU. And I mean exactly: never 24%, never 26%, always 25%. I've also run ProcMon to see if devenv is actually doing something, but it's doing absolutely nothing external to the process. No disk, network, or registry access. Nothing. This is getting really aggravating because I have a large code base to deal with, and the only other way of jumping to a definition is to search for it first. Has anyone run into a similar issue? And, better yet, does anyone know a fix?

    Read the article

  • Recreation of DB using "mysql mydb < mydb.sql" is really slow when the table has tens of millions of records

    - by Jian Lin
    It seems that when a MySQL database containing a table with tens of millions of records is backed up with

        mysqldump some_db > some_db.sql

    the dump contains very big INSERT INTO statements (is it one insert statement that handles all the records?). So when reconstructing the DB using

        mysql some_db < some_db.sql

    the CPU is hardly busy (about 1.8% usage by the mysql process; I don't see a mysqld either?) and the hard disk doesn't seem to be too busy either. Last time, the whole restore process took 5 hours. Is there a way to make it faster? For example, when doing mysqldump, can it break the INSERT statement into shorter ones, so that mysql doesn't need to parse such long lines when restoring the DB?
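
    A hedged note: splitting the dump into shorter INSERTs usually makes the restore slower rather than faster, because each statement is parsed and committed separately; what tends to help is relaxing per-row checks for the duration of the load. A sketch of the kind of session settings often used (worth verifying against your MySQL version and storage engine first):

        -- inside the mysql client, before loading the dump
        SET autocommit = 0;
        SET unique_checks = 0;
        SET foreign_key_checks = 0;

        SOURCE some_db.sql

        COMMIT;
        SET unique_checks = 1;
        SET foreign_key_checks = 1;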

    Read the article

  • Download and write .tar.gz files without corruption.

    - by arbales
    I've tried numerous ways of downloading files, specifically .zip and .tar.gz, with Ruby and writing them to disk. I've found that the file appears to be the same as the reference (in size), but the archives refuse to extract. What I'm attempting now is below. Thanks!

        def download_request(url, filePath:path, progressIndicator:progressBar)
          file = File.open(path, "w+")
          begin
            Net::HTTP.get_response URI.parse(url) do |response|
              if response['Location'] != nil
                puts 'Direct to: ' + response['Location']
                return download_request(response['Location'], filePath:path, progressIndicator:progressBar)
              end
              # some stuff
              response.read_body do |segment|
                file.write(segment)
                # some progress stuff.
              end
            end
          ensure
            file.close
          end
        end

        download_request("http://github.com/jashkenas/coffee-script/tarball/master", filePath:"tarball.tar.gz", progressIndicator:nil)
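
    One thing that commonly produces exactly this symptom (a guess based only on the code shown, not a confirmed diagnosis): the output file is opened in text mode, so the archive's bytes can be mangled on the way to disk. Opening it in binary mode is a cheap thing to try:

        # "wb" instead of "w+": write the response body as raw binary
        file = File.open(path, "wb")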

    Read the article

  • What are the advantages of using StringBuilder versus XmlDocument (or related classes) to create XML documents?

    - by Rob
    This might be a bit of a code smell, but I have seen it in some production code, namely the use of StringBuilder as opposed to XmlDocument when creating XML documents. In some cases these are write-once operations (e.g. create the document and save it to disk), whereas in others the built string is passed to an XmlDocument to perform an XSL transform into a document that is returned to the client. So the obvious question: is there merit to doing things this way, is it something that should be decided case by case, or is this simply the wrong way of doing things?
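
    For comparison, a minimal sketch of the same kind of write-once document built with XmlWriter, which streams to disk and handles escaping and well-formedness for you (the element names here are invented):

        using System.Xml;

        var settings = new XmlWriterSettings { Indent = true };
        using (XmlWriter writer = XmlWriter.Create("output.xml", settings))
        {
            writer.WriteStartElement("orders");
            writer.WriteStartElement("order");
            writer.WriteAttributeString("id", "42");
            writer.WriteElementString("customer", "Contoso & Sons");  // '&' is escaped automatically
            writer.WriteEndElement();
            writer.WriteEndElement();
        }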

    Read the article

  • Strategy to keep track of stored files in the documents directory?

    - by mystify
    In my app, the user can save his input to disk. This is done with NSKeyedArchiver. Currently I simply name the files with a timestamp, but of course the user may want to load one of them back in to keep editing it. What would be the most reliable and safe strategy for keeping track of those files? I need to present the user a list of the files so that he can choose one to open. Currently I'm thinking of an archived NSMutableArray which simply stores the file names, but I feel that this strategy is not really good. We all know that saving files sometimes goes wrong, and this seems likely to get corrupted easily, or not? How would you do it?
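
    One alternative to maintaining a separate archived array (a sketch, assuming the saved archives live directly in the Documents directory): ask NSFileManager for the directory contents whenever the list is shown, so the file system itself stays the single source of truth.

        // Build the list of saved files on demand instead of tracking it separately.
        NSString *docs = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,
                                                              NSUserDomainMask, YES) objectAtIndex:0];
        NSError *error = nil;
        NSArray *files = [[NSFileManager defaultManager] contentsOfDirectoryAtPath:docs
                                                                             error:&error];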

    Read the article

  • How do I track images embedded in HTML?

    - by ycseattle
    Hi, I'd like to track the views/impressions of images on web pages while still allowing the images to be embedded in HTML, as in <img src="http://mysite.com/upload/myimage.jpg"/>. I know that on Windows I can write a handler for ".jpg" so the URL actually triggers a handling function instead of loading the image from disk. Is it possible to do that with Python/Django on an Ubuntu server? Can web browsers still cache the .jpg files if the URL is not a straight file path? It looks to me like this is how Google picasaweb handles the image file name. I'd like to get some ideas on how to implement this. Thanks! -Yi
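
    A rough Django sketch of the idea (the URL pattern, paths and caching header are all placeholders): serve the image through a view so each request can be counted, while the URL still ends in ".jpg" so it works inside an <img> tag; browsers decide caching from the response headers, not from whether a real file backs the path.

        import os
        from django.http import HttpResponse, Http404

        UPLOAD_ROOT = "/srv/mysite/upload"

        def serve_image(request, filename):
            path = os.path.join(UPLOAD_ROOT, os.path.basename(filename))
            if not os.path.exists(path):
                raise Http404
            # record the impression here (database row, log line, ...)
            with open(path, "rb") as f:
                response = HttpResponse(f.read(), content_type="image/jpeg")
            response["Cache-Control"] = "max-age=3600"   # let browsers cache it
            return response

        # urls.py (old-style pattern):
        #   url(r'^upload/(?P<filename>[\w.-]+\.jpg)$', serve_image)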

    Read the article

  • remote function with pthread

    - by user311130
    Hi all, I wrote some code in C using pthreads (I configured the linker and compiler in the Eclipse IDE first):

        #include <pthread.h>
        #include "starter.h"
        #include "UI.h"

        Page* MM;
        Page* Disk;
        PCB* all_pcb_array;

        void* display_prompt(void *id) {
            printf("Hello111\n");
            return NULL;
        }

        int main(int argc, char** argv) {
            printf("Hello\n");
            pthread_t *thread = (pthread_t*) malloc(sizeof(pthread_t));
            pthread_create(thread, NULL, display_prompt, NULL);
            printf("Hello\n");
            return 1;
        }

    That works fine. However, when I move display_prompt to UI.h, no "Hello111" output is printed. Does anyone know how to solve that? Elad
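
    One thing worth checking regardless of where display_prompt lives (an observation about the code as posted, not a confirmed diagnosis): main returns immediately after pthread_create, so the process can exit before the new thread ever runs, and whether "Hello111" appears becomes a matter of timing. A minimal sketch that waits for the thread:

        #include <pthread.h>
        #include <stdio.h>

        void* display_prompt(void *id) {
            printf("Hello111\n");
            return NULL;
        }

        int main(void) {
            pthread_t thread;
            if (pthread_create(&thread, NULL, display_prompt, NULL) != 0) {
                perror("pthread_create");
                return 1;
            }
            pthread_join(thread, NULL);   /* wait for the thread before exiting */
            return 0;
        }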

    Read the article

  • LBA48 in Linux SCSI ATA Passthrough

    - by Ben Englert
    I am writing a custom disk monitoring/diagnostics app which, among other things, needs to talk to SATA disks behind a SAS PCI card under Linux. So far I am following this guide as well as the example code in sg_utils to pass ATA taskfiles through the SCSI layer, and that seems to be working okay. However, in both cases the CDB data structure (pointed to by the cmdp member of the sg_io argument to the ioctl) has only one unsigned char worth of space for the number of sectors. If you look at the ata_taskfile structure in linux/ata.h you'll see that it has an "nsect" and a "hob_nsect" field, i.e. high-order bits for the sector count, to support LBA48. It turns out that in my application I need LBA48 support. So, does anyone know how to set up an sg_io_hdr structure with an LBA48 sector count?
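
    A sketch of the direction that usually works (hedged: the byte offsets below follow my reading of the SAT ATA PASS-THROUGH (16) layout and should be double-checked against the SAT spec and the sg3_utils sources): the 16-byte CDB with opcode 0x85 carries a 16-bit sector count and a 48-bit LBA, with each "previous"/high-order register byte sitting just before its current counterpart.

        #include <string.h>

        /* Fill a 16-byte ATA PASS-THROUGH CDB for READ DMA EXT (0x25).
           Point sg_io_hdr.cmdp at this buffer and set cmd_len = 16 as before. */
        static void build_ata16_read_dma_ext(unsigned char cdb[16],
                                             unsigned long long lba,
                                             unsigned int nsect /* 0 means 65536 */)
        {
            memset(cdb, 0, 16);
            cdb[0]  = 0x85;                 /* ATA PASS-THROUGH (16) */
            cdb[1]  = (6 << 1) | 1;         /* protocol 6 = DMA, extend = 1 (48-bit) */
            cdb[2]  = 0x0e;                 /* t_dir = from device, byt_blok, t_length = count field */
            cdb[5]  = (nsect >> 8) & 0xff;  /* sector count 15:8 */
            cdb[6]  =  nsect       & 0xff;  /* sector count 7:0  */
            cdb[7]  = (lba >> 24) & 0xff;   /* LBA 31:24 */
            cdb[8]  =  lba        & 0xff;   /* LBA  7:0  */
            cdb[9]  = (lba >> 32) & 0xff;   /* LBA 39:32 */
            cdb[10] = (lba >> 8)  & 0xff;   /* LBA 15:8  */
            cdb[11] = (lba >> 40) & 0xff;   /* LBA 47:40 */
            cdb[12] = (lba >> 16) & 0xff;   /* LBA 23:16 */
            cdb[13] = 0x40;                 /* device register: LBA mode */
            cdb[14] = 0x25;                 /* READ DMA EXT */
        }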

    Read the article

  • Display Computer Info on an ASP.NET Page

    - by Gene
    I want to build a page for end users to visit (on our MPLS network) that shows the following information about them: computer name, OS, disk space, memory, IP address, Active Directory user name, and password expiration time (as defined by Global Policy). Maybe a few other things in the future, such as the current Trend Micro Office version vs. their version, the number of MS updates needed (we utilize WSUS), and so on. My question is: how would I pull this information from the user when they visit the page? What is the proper function for this? Does anyone have examples they are willing to share that I could learn from?
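
    A hedged starting point: only part of this is visible to the web server itself. With Windows authentication enabled, the request carries the client's AD account and IP address; hardware details such as disk space and memory generally have to be collected on the client or queried separately (AD, WMI, an inventory agent). A sketch of the server-side portion (the label control is invented; the Request/User properties are standard ASP.NET):

        // Code-behind of an ASP.NET page running with Windows authentication.
        protected void Page_Load(object sender, EventArgs e)
        {
            string clientIp  = Request.UserHostAddress;   // client IP as seen by the server
            string clientDns = Request.UserHostName;      // may just repeat the IP without reverse DNS
            string adUser    = User.Identity.Name;        // DOMAIN\user when Windows auth is on

            lblInfo.Text = string.Format("{0} ({1}) - {2}", clientDns, clientIp, adUser);
        }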

    Read the article

  • Download office document without the web server trying to render it

    - by Dan Revell
    I'm trying to download an InfoPath template that's hosted on SharePoint. If I hit the URL in Internet Explorer it asks me where to save the file, and I get the correct file on my disk. If I try to do this programmatically with WebClient or HttpWebRequest, I get HTML back instead. How can I make my request so that the web server returns the actual .xsn file and doesn't try to render it as HTML? If Internet Explorer can do this, it's logical to think that I can too. I've tried setting the Accept property of the request to application/x-microsoft-InfoPathFormTemplate, but that hasn't helped; it was a shot in the dark.
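
    One thing that may be worth trying (hedged: this is based on the WebDAV/FrontPage convention where a "Translate: f" header asks the server for the raw file rather than a rendered version, and it is not guaranteed to apply to this SharePoint setup; the URL and target path are placeholders):

        using (var client = new System.Net.WebClient())
        {
            client.UseDefaultCredentials = true;      // authenticate as the current Windows user
            client.Headers.Add("Translate", "f");     // ask for the source file, not a rendering
            client.DownloadFile("http://server/site/Library/template.xsn", @"C:\temp\template.xsn");
        }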

    Read the article

  • Images to video - converting to IplImage makes video blue

    - by user891908
    I want to create a video from images using OpenCV. The strange problem is that if I write an image (bmp) to disk and then load it (cv.LoadImage), it renders fine, but when I load the image from a StringIO and convert it to an IplImage, the video turns blue. Here's the code:

        import StringIO
        import cv
        from PIL import Image, ImageDraw, ImageFont

        output = StringIO.StringIO()
        FOREGROUND = (0, 0, 0)
        TEXT = 'MY TEXT'
        font_path = 'arial.ttf'
        font = ImageFont.truetype(font_path, 18, encoding='unic')
        text = TEXT.decode('utf-8')
        (width, height) = font.getsize(text)

        # Create the background with a place for text
        w, h = (600, 600)
        contentimage = Image.open('0.jpg')
        background = Image.open('background.bmp')
        x, y = contentimage.size

        # put content onto background
        background.paste(contentimage, (((w - x) / 2), 0))
        draw = ImageDraw.Draw(background)
        draw.text((0, 0), text, font=font, fill=FOREGROUND)

        pi = background
        pi.save(output, "bmp")
        # pi.show()  # shows image in full color
        output.seek(0)
        pi = Image.open(output)
        print pi, pi.format, "%dx%d" % pi.size, pi.mode

        cv_im = cv.CreateImageHeader(pi.size, cv.IPL_DEPTH_8U, 3)
        cv.SetData(cv_im, pi.tostring())
        print pi.size, cv.GetSize(cv_im)

        w = cv.CreateVideoWriter("2.avi", cv.CV_FOURCC('M', 'J', 'P', 'G'), 1,
                                 (cv.GetSize(cv_im)[0], cv.GetSize(cv_im)[1]), is_color=1)
        for i in range(1, 5):
            cv.WriteFrame(w, cv_im)
        del w
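
    A plausible cause for the blue tint (hedged, going only by the code shown): PIL's tostring() hands back pixels in RGB order while IplImage/OpenCV assume BGR, so red and blue are swapped; the disk round-trip looks right because cv.LoadImage already returns BGR data. One way to compensate just before cv.SetData:

        # Swap the R and B channels so OpenCV's BGR interpretation comes out correct.
        r, g, b = pi.split()
        pi = Image.merge("RGB", (b, g, r))

        cv_im = cv.CreateImageHeader(pi.size, cv.IPL_DEPTH_8U, 3)
        cv.SetData(cv_im, pi.tostring())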

    Read the article

  • Read and write the same file from different processes

    - by muruga
    I have written two programs. One program writes content to a text file continuously; the other is supposed to read that content at the same time. Both programs should run simultaneously. The program that writes the file works correctly for me, but the other program does not read the file. I know that once the write process has completed, the data is stored on the hard disk and another process can read it. But I want to read and write at the same time, from different processes, on a single file. How can I do that? Please help me.
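
    The language isn't stated, so purely as an illustration, a small Java sketch of the usual pattern: the writer flushes after every line, and the reader keeps polling from where it left off instead of stopping at the first end-of-file (the file name and sleep interval are arbitrary).

        import java.io.RandomAccessFile;

        // Reader side: follow a file another process is still writing ("tail -f" style).
        public class FollowReader {
            public static void main(String[] args) throws Exception {
                RandomAccessFile in = new RandomAccessFile("shared.txt", "r");
                long pos = 0;
                while (true) {
                    in.seek(pos);
                    String line;
                    while ((line = in.readLine()) != null) {
                        System.out.println("read: " + line);
                    }
                    pos = in.getFilePointer();   // remember where we stopped
                    Thread.sleep(200);           // give the writer time to add more
                }
            }
        }
        // Writer side: open the file in append mode and flush() after every write,
        // otherwise the data sits in a buffer and the reader never sees it.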

    Read the article

  • Reverse massive text file in Java

    - by DanJanson
    What would be the best approach to reversing a large text file that is uploaded asynchronously to a servlet, in a scalable and efficient way? The text file can be massive (gigabytes long); I can assume a multiple-server/clustered environment, so doing this in a distributed manner is an option; open-source libraries are encouraged suggestions. I was thinking of using Java NIO to treat the file as an array on disk (so that I don't have to treat the file as a string buffer in memory). I am also thinking of using MapReduce to break up the file and process it on separate machines. Any input is appreciated. Thanks. Daniel
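
    As a starting point for the single-machine piece, a sketch of the NIO idea (assumptions: "reverse" means byte-level reversal, which also reverses the characters inside each line, and the window size is arbitrary); distributing it would then be a matter of assigning byte ranges to workers and concatenating their outputs in reverse order.

        import java.io.RandomAccessFile;
        import java.nio.MappedByteBuffer;
        import java.nio.channels.FileChannel;

        // Reverse a file's bytes without loading it all into the heap,
        // by mapping fixed-size windows from the end of the file backwards.
        public class ReverseFile {
            static final int WINDOW = 64 * 1024 * 1024;   // 64 MB per mapped window

            public static void main(String[] args) throws Exception {
                try (RandomAccessFile in = new RandomAccessFile(args[0], "r");
                     RandomAccessFile out = new RandomAccessFile(args[1], "rw")) {
                    FileChannel src = in.getChannel();
                    long remaining = src.size();
                    byte[] buf = new byte[WINDOW];
                    while (remaining > 0) {
                        int len = (int) Math.min(WINDOW, remaining);
                        long start = remaining - len;
                        MappedByteBuffer map = src.map(FileChannel.MapMode.READ_ONLY, start, len);
                        map.get(buf, 0, len);
                        for (int i = 0, j = len - 1; i < j; i++, j--) {   // reverse this window
                            byte t = buf[i]; buf[i] = buf[j]; buf[j] = t;
                        }
                        out.write(buf, 0, len);                           // append to the output
                        remaining = start;
                    }
                }
            }
        }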

    Read the article

  • Efficient persistent storage for simple id to table of values map for java

    - by wds
    I need to store some data that follows the simple pattern of mapping an "id" to a full table (with multiple rows) of several columns (i.e. some integer values [u, v, w]). The size of one of these tables would be a couple of KB. Basically what I need is to store a persistent cache of some intermediate results. This could quite easily be implemented as simple SQL, but there are a couple of problems. Namely, I need to compress the size of this structure on disk as much as possible (because of the number of values I'm storing). Also, it's not transactional; I just need to write once and then read the contents of the entire table, so a relational DB isn't actually a very good fit. I was wondering if anyone had any good suggestions? For some reason I can't seem to come up with something decent at the moment. Something with a Java API would be especially nice.
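
    One low-tech sketch that matches the write-once/read-everything pattern (the on-disk format and class names are invented, and the compression ratio would need measuring against real data): one gzip-compressed file per id, holding the row count followed by the raw int triples.

        import java.io.*;
        import java.util.zip.GZIPInputStream;
        import java.util.zip.GZIPOutputStream;

        // Write-once, read-everything storage for a single id -> int[rows][3] table.
        public class TableStore {
            public static void write(File f, int[][] rows) throws IOException {
                try (DataOutputStream out = new DataOutputStream(
                        new GZIPOutputStream(new BufferedOutputStream(new FileOutputStream(f))))) {
                    out.writeInt(rows.length);
                    for (int[] row : rows) {        // each row is [u, v, w]
                        out.writeInt(row[0]);
                        out.writeInt(row[1]);
                        out.writeInt(row[2]);
                    }
                }
            }

            public static int[][] read(File f) throws IOException {
                try (DataInputStream in = new DataInputStream(
                        new GZIPInputStream(new BufferedInputStream(new FileInputStream(f))))) {
                    int[][] rows = new int[in.readInt()][3];
                    for (int[] row : rows) {
                        row[0] = in.readInt();
                        row[1] = in.readInt();
                        row[2] = in.readInt();
                    }
                    return rows;
                }
            }
        }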

    Read the article

  • accessing local exe file from asp.net application

    - by sansknwoledge
    Hi, I have an intranet ASP.NET application, and every client connected to it has a specific .exe file on its local disk. I want to execute that .exe file from the .NET application hosted on the intranet server. I used this code to do it:

        Dim psi As New System.Diagnostics.ProcessStartInfo()
        psi.WorkingDirectory = "C:\\"
        psi.FileName = "c:\\Project1.exe"    '"file:///c:/Project1.exe"
        psi.Arguments = iApqpId              'cTimeMaster.APQPID
        psi.UseShellExecute = False
        System.Diagnostics.Process.Start(psi)

    but it throws an error: the system cannot find the file specified. Is this a permission issue on the local system, or something else? Any help would be greatly appreciated. Thanks and regards

    Read the article

  • Rename PHP Downloaded File in File Downloader/Accelerator Applications

    - by Joe
    I have the following "download" script in PHP which basically makes an address on my website download a file for the user, e.g. mysite.com/download.php?fileid=10. The question I have is: how can I send the file name to the user when they download the .php address with a file downloader/accelerator? E.g. "Content-Disposition" in this case leaves the file called download.php, whereas I want it to be renamed to $downloadFileName as usual.

        // Set headers
        header("Cache-Control: must-revalidate, post-check=0, pre-check=0");
        header("Content-Description: File Transfer");
        header("Content-Type: application/force-download");
        header("Content-Disposition: attachment; filename=\"".$downloadFileName."\"");
        header("Content-Transfer-Encoding: binary");

        // Read the file from disk
        readfile($downloadLocation);
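
    One workaround sometimes used for this (hedged: it assumes Apache with mod_rewrite, and that the download manager derives the name from the URL path rather than honouring Content-Disposition): expose URLs whose last segment is the real file name and rewrite them internally to the script, e.g. /download/10/report.zip.

        # .htaccess sketch; the URL shape and file name are illustrative only
        RewriteEngine On
        RewriteRule ^download/(\d+)/[^/]+$ download.php?fileid=$1 [L,QSA]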

    Read the article

  • mounting without -o loop

    - by jumpinjoe
    Hi, I have written a dummy (RAM disk) block device driver for the Linux kernel. When the driver is loaded, I can see it as /dev/mybd. I can successfully transfer data onto it using the dd command and compare the copied data successfully. The problem is that when I create an ext2/3 filesystem on it, I have to use the -o loop option with the mount command; otherwise mount fails with the following result:

        mount: wrong fs type, bad option, bad superblock on mybd,
               missing codepage or helper program, or other error

    What could be the problem? Please help. Thanks.

    Read the article

  • Is there a more correct type for passing in the file path and file name to a method

    - by Rihan Meij
    Hi. What I mean by this question is: when you need to store or pass a URL around, using a string is probably bad practice, and a better approach would be to use a URI type. However, it is so easy to make complex things more complex and bloated. So if I am going to be writing to a file on disk, do I pass in a string as the file name and file path, or is there a better type that is better suited to the requirement? This code seems clunky and error-prone: I would also need to do a whole lot of checking that it is a valid file name, that the string contains data, and the list goes on.

        private void SaveFile(string fileNameAndPath)
        {
            //The normal stuff to save the file
        }
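
    A small sketch of the FileInfo-based alternative, mirroring the method above (illustrative only; whether the extra type is worth it depends on how much validation the callers already do):

        using System.IO;

        // Accept a FileInfo instead of a raw string: it can only be constructed from a
        // syntactically valid path and carries Exists/Directory/Name with it.
        private void SaveFile(FileInfo file)
        {
            if (!file.Directory.Exists)
            {
                file.Directory.Create();
            }
            File.WriteAllText(file.FullName, "contents to save");
        }

        // Caller:
        // SaveFile(new FileInfo(Path.Combine(@"C:\data", "report.txt")));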

    Read the article

  • How to roll my own index in c#?

    - by bill seacham
    I need a faster way to create an index file. The application generates pairs of items to be indexed. I currently add each pair, as it is generated, to a sorted dictionary and then write it out to a disk file. This works well until the number of items added exceeds one million, at which point it slows down to an unacceptable degree. There can be as many as three million data items to be indexed. I prefer to avoid a database because I do not want to significantly increase the size of the deployment package, which is now less than half a megabyte. I tried Access, but it is even slower than the sorted dictionary; if it had an efficient bulk-load utility then that might work, but I have not found such a tool for Access. Is there a better way to roll my own index?
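
    A sketch of one way to avoid paying for sorted insertion on every add (names and the output format are invented; it assumes the pairs fit in memory, which three million small items usually do): collect everything in a List, sort once at the end, then stream it to disk.

        using System.Collections.Generic;
        using System.IO;

        static void WriteIndex(IEnumerable<KeyValuePair<string, string>> pairs, string path)
        {
            var entries = new List<KeyValuePair<string, string>>(pairs);
            entries.Sort((a, b) => string.CompareOrdinal(a.Key, b.Key));   // one sort at the end
            using (var writer = new StreamWriter(path))
            {
                foreach (var e in entries)
                {
                    writer.WriteLine("{0}\t{1}", e.Key, e.Value);
                }
            }
        }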

    Read the article

  • Netbeans C++ not finding standard libraries (Macintosh)

    - by Grue
    Hello everyone! I am trying to use NetBeans 6.7 (on a Mac) to create C++ applications. I started out with the standard "Hello World," just to test whether everything was working correctly. On the first try, std and the standard headers could not be found, so I tried reinstalling the developer tools from my Mac OS X disk. After that, NetBeans updated its C++ compiler information, but it still cannot find std or the standard headers. Odder still, Xcode seems to work with C++ perfectly fine. Any help fixing this would be greatly appreciated.

    Read the article
