Search Results

Search found 10693 results on 428 pages for 'raw disk'.

Page 202/428 | < Previous Page | 198 199 200 201 202 203 204 205 206 207 208 209  | Next Page >

  • Caching with AVPlayer and AVAssetExportSession

    - by tba
    I would like to cache progressive-download videos using AVPlayer. How can I save an AVPlayer's item to disk? I'm trying to use AVAssetExportSession on the player's currentItem (which is fully loaded). This code is giving me "AVAssetExportSessionStatusFailed (The operation could not be completed)":

        AVAsset *mediaAsset = self.player.currentItem.asset;
        AVAssetExportSession *es = [[AVAssetExportSession alloc] initWithAsset:mediaAsset presetName:AVAssetExportPresetLowQuality];
        NSString *outPath = [NSTemporaryDirectory() stringByAppendingPathComponent:@"out.mp4"];
        NSFileManager *fileManager = [NSFileManager defaultManager];
        [fileManager removeItemAtPath:outPath error:NULL];
        es.outputFileType = @"com.apple.quicktime-movie";
        es.outputURL = [[[NSURL alloc] initFileURLWithPath:outPath] autorelease];
        NSLog(@"exporting to %@", outPath);
        [es exportAsynchronouslyWithCompletionHandler:^{
            NSString *status = @"";
            if( es.status == AVAssetExportSessionStatusUnknown ) status = @"AVAssetExportSessionStatusUnknown";
            else if( es.status == AVAssetExportSessionStatusWaiting ) status = @"AVAssetExportSessionStatusWaiting";
            else if( es.status == AVAssetExportSessionStatusExporting ) status = @"AVAssetExportSessionStatusExporting";
            else if( es.status == AVAssetExportSessionStatusCompleted ) status = @"AVAssetExportSessionStatusCompleted";
            else if( es.status == AVAssetExportSessionStatusFailed ) status = @"AVAssetExportSessionStatusFailed";
            else if( es.status == AVAssetExportSessionStatusCancelled ) status = @"AVAssetExportSessionStatusCancelled";
            NSLog(@"done exporting to %@ status %d = %@ (%@)", outPath, es.status, status, [[es error] localizedDescription]);
        }];

    How can I export successfully? I'm looking into copying mediaAsset into an AVMutableComposition, but haven't had much luck with that either. Thanks!

    PS: Here are some questions from people trying to accomplish the same thing (but with MPMoviePlayerController):
    - Cache Progressive downloaded content in MPMoviePlayerController
    - Simultaneously stream and save a video?
    - Caching videos to disk after successful preload by MPMoviePlayerController


  • Android SQLite database gets corrupted

    - by Seu
    There are about 100 people using my Android app right now and every once in a while I get a crash report to the server with this stack trace:

        android.database.sqlite.SQLiteDatabaseCorruptException: database disk image is malformed
            at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2596)
            at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2621)
            at android.app.ActivityThread.access$2200(ActivityThread.java:126)
            at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1932)
            at android.os.Handler.dispatchMessage(Handler.java:99)
            at android.os.Looper.loop(Looper.java:123)
            at android.app.ActivityThread.main(ActivityThread.java:4595)
            at java.lang.reflect.Method.invokeNative(Native Method)
            at java.lang.reflect.Method.invoke(Method.java:521)
            at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:860)
            at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:618)
            at dalvik.system.NativeStart.main(Native Method)
        Caused by: android.database.sqlite.SQLiteDatabaseCorruptException: database disk image is malformed
            at android.database.sqlite.SQLiteQuery.native_fill_window(Native Method)
            at android.database.sqlite.SQLiteQuery.fillWindow(SQLiteQuery.java:75)
            at android.database.sqlite.SQLiteCursor.fillWindow(SQLiteCursor.java:295)
            at android.database.sqlite.SQLiteCursor.getCount(SQLiteCursor.java:276)
            at android.database.AbstractCursor.moveToPosition(AbstractCursor.java:171)
            at android.database.AbstractCursor.moveToFirst(AbstractCursor.java:248)

    The result is the app crashing and all the data in the DB being lost. One thing to note is that every time I read or write to the database I get a new SQLiteDatabase and close it as soon as I'm done. I thought this would simplify things, but perhaps that's causing the problem? Is it possible this is just a SQLite bug?
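
    In case it helps, here is a minimal Java sketch of the usual mitigation, assuming the corruption comes from several threads each opening and closing their own connection: funnel all access through one application-wide SQLiteOpenHelper so a single cached connection is shared. The class, file and table names below are hypothetical.

        import android.content.Context;
        import android.database.sqlite.SQLiteDatabase;
        import android.database.sqlite.SQLiteOpenHelper;

        // One helper for the whole process; getWritableDatabase() caches the
        // connection, so callers never open or close the database themselves.
        public class AppDbHelper extends SQLiteOpenHelper {
            private static AppDbHelper instance;

            private AppDbHelper(Context context) {
                super(context, "app.db", null, 1);
            }

            public static synchronized AppDbHelper get(Context context) {
                if (instance == null) {
                    // Use the application context so the helper outlives any single Activity.
                    instance = new AppDbHelper(context.getApplicationContext());
                }
                return instance;
            }

            @Override
            public void onCreate(SQLiteDatabase db) {
                db.execSQL("CREATE TABLE IF NOT EXISTS items (_id INTEGER PRIMARY KEY, name TEXT)");
            }

            @Override
            public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
                // No schema migrations in this sketch.
            }
        }

    Callers then use AppDbHelper.get(context).getWritableDatabase() for every read and write and never close it per operation, so concurrent operations share one connection instead of racing to open and close the file.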


  • Recover files from corrupt filesystem

    - by Emile 81
    My situation: I have an older 80GB IDE internal hdd with a few files on it that I would very much like to recover:
    - some Word documents
    - some LaTeX documents (text files) and pictures (png, jpg, eps files)
    - some other text documents and Visual Studio project files

    I had backed them up (not the LaTeX ones, though) using svn, but have not committed lately, and would lose a lot of work if I can't recover them. The hdd seems to have lost its filesystem; I have no idea how it came about. I know it has/had 3 NTFS partitions, and I know the files I want are on the second or third partition.

    I read http://superuser.com/questions/81877/recover-hard-disk-data. Partition Find and Mount did not see all the partitions using intelligent scan. TestDisk does (I think); I followed the step by step instructions there, but when I try to list the files it says: "Can't open filesystem, filesystem seems damaged." I'm not sure how to proceed here, as TestDisk's wiki does not contain this error message afaik.

    I don't know if the hdd is going to fail or if some program has caused the filesystem to be corrupt; the hdd doesn't make a sound, so I guess that's good. I would like some guidance so I don't accidentally cause more damage. (E.g. is it ok to let TestDisk write the filesystem to disk? I'm pretty sure the partitions are listed ok, but not 100%.)


  • Deployed Qt5 Application Doesn't Print or Show Print Dialog

    - by MustacheMcLimey
    I'm experiencing Qt4 to Qt5 troubles. In my application, when the user clicks the print button two things should happen: one is that a PDF gets written to disk (which still works fine in the new version, so I know that some of the printing functions are working properly) and the other is that a QPrintDialog should exec() and then send to a connected printer. I see the dialog when I launch from my development machine. The application launches on the deployed machine, but the QPrintDialog never shows and the document never prints.

    I am including print support:

        QT += core gui network webkitwidgets widgets printsupport

    I have been using Process Explorer to see what DLLs the application uses on my development machine, and I believe that everything is present. My application bundle includes:

        {myAppPath}\MyApp [MyApp.exe, Qt5PrintSupport.dll, ...]
        {myAppPath}\plugins\printsupport\windowsprintersupport.dll
        {myAppPath}\plugins\imageformats [qgif.dll, qico.dll, qjpeg.dll, qmng.dll, qtga.dll, qtiff.dll, qwbmp.dll]

    The following is the relevant code snippet:

        void PrintableForm::printFile()
        {
            // Writes the PDF to disk in every environment
            pdfCopy();

            // Paper copy only works on my dev machine
            QPrinter paperPrinter;
            QPrintDialog printDialog(&paperPrinter, this);
            if( printDialog.exec() == QDialog::Accepted ) {
                view->print(&paperPrinter);
            }
            this->accept();
        }

    My first thought is that the relevant DLLs are not being found come print time, which would mean my application file layout is incorrect, but I have not found anything that shows me a different file structure. Am I on the right track, or is there something else wrong with this setup?


  • XenServer 5.5 local storage problem

    - by Jason Nerer
    Hi community, I have the following problem with a Citrix XenServer 5.5. I had to physically move the host, so I shut down all machines via console:

        xe vm-shutdown force=true vm=my-machine-uuids

    After that I shut down the machine itself by issuing:

        halt

    After the reboot today the local storage repository is unplugged. I was trying to repair it via XenCenter, but I don't trust this one. So I tried:

        [root@xenserver ~]# xe pbd-list
        uuid ( RO)               : ef6e2f3b-5825-393c-23e1-391d105c87ec
        host-uuid ( RO)          : c4bcf09c-2e52-448f-8210-df5d13bd33a9
        sr-uuid ( RO)            : 2fb3be9c-075c-53ed-acb6-42f0c4ad0614
        device-config (MRO)      : device: /dev/disk/by-id/scsi-SATA_WDC_WD5001ABYS-_WD-WCAS83698154,/dev/disk/by-id/scsi-SATA_WDC_WD5001ABYS-_WD-WCAS83694262
        currently-attached ( RO) : false

    To reattach the storage I issued:

        xe pbd-plug uuid=ef6e2f3b-5825-393c-23e1-391d105c87ec

    That one has now been running for a while but not talking to me. The local repo has around 1TB. Should I wait, or are there any other options to reattach the local repo? What could have caused this problem? Any ideas? Thx. J


  • Unable to fetch initial output of "defrag" command in Windows Server 2008 R2 in WOW64 environment

    - by Ganesh
    Hi All,

    [Application & code background] I have an MFC application which is executing on Windows Server 2008 R2 in a WOW64 environment, in which, on user input, it defragments the selected drive on the disk. I initiated the process (i.e. cmd /c defrag -v c:) using the CreateProcess() API; along with this, to display the output of the process on the main screen, I created a pipe using the CreatePipe() API. I used the PeekNamedPipe() and ReadFile() APIs to get the output and display it.

    [Problem Area] When the process is launched I am not getting the initial output, which should be:

        Microsoft Disk Defragmenter
        Copyright (c) 2007 Microsoft Corp.
        Invoking defragmentation on (C:)...

    I constantly monitor the output of the process while it is in progress but am not able to get anything as output in the pipe. It seems the process is not doing anything, and it appears as if the application is not responding. But after a certain period of time, once the process is about to complete, I get the result along with the initial data.

    [Sample code]

        // Pipe created
        if(0 == ::CreatePipe(&l_hStdOutRead, &l_hStdOutWrite, &l_SecurityAttribute, (DWORD)NULL))
        {
            // Error code
        }

        // Process created/launched
        if (0 == ::CreateProcess(NULL, (LPTSTR)f_csProcessName, &l_stSecurityAttributes, NULL, TRUE,
                                 CREATE_NO_WINDOW, NULL, NULL, &l_StartupInfo, &l_CmdPI))
        {
            // Error code
        }

        // Read output
        if (0 == ::PeekNamedPipe(m_hStdOutRead, l_cArrPeekBuffer, (DWORD)NULL, (LPDWORD)NULL, &l_dwAvailable, (LPDWORD)NULL))
        {
            // Return to read again
        }
        if (MPLUSFALSE == ::ReadFile(m_hStdOutRead, l_cArrOutput, min(BUFFER_SIZE - 2, l_dwAvailable), &l_dwRead, NULL) || !l_dwRead)
        {
            // error code
        }
        // Display data.

    If anyone is aware of a similar problem or has worked on a similar issue, please let me know the solution. Thanks in advance, Ganesh


  • C# File IO with Streams - Best Memory Buffer Size

    - by AJ
    Hi, I am writing a small IO library to assist with a larger (hobby) project. A part of this library performs various functions on a file, which is read / written via the FileStream object. On each StreamReader.Read(...) pass, I fire off an event which will be used in the main app to display progress information. The processing that goes on in the loop is varied, but is not too time consuming (it could just be a simple file copy, for example, or may involve encryption...).

    My main question is: what is the best memory buffer size to use? Thinking about physical disk layouts, I could pick 2k, which would cover a CD sector size and is a nice multiple of a 512-byte hard disk sector. Higher up the abstraction tree, you could go for a larger buffer which could read an entire FAT cluster at a time. I realise that with today's PCs I could go for a more memory hungry option (a couple of MiB, for example), but then I increase the time between UI updates and the user perceives a less responsive app.

    As an aside, I'm eventually hoping to provide a similar interface to files hosted on FTP / HTTP servers (over a local network / fastish DSL). What would be the best memory buffer size for those (again, a "best-case" tradeoff between perceived responsiveness vs. performance)? Thanks in advance for any ideas, Adam
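
    To make the shape of the loop concrete, here is a rough sketch in Java rather than C#, with invented names; the buffer-size constant is the knob being asked about, trading per-call overhead against how often the progress event fires:

        import java.io.*;

        public class BufferedCopy {
            // 64 KiB is a common middle ground: large enough to amortize
            // per-read overhead, small enough for frequent progress updates.
            private static final int BUFFER_SIZE = 64 * 1024;

            public static void copy(File src, File dst, ProgressListener listener) throws IOException {
                long total = src.length();
                long done = 0;
                byte[] buffer = new byte[BUFFER_SIZE];
                try (InputStream in = new FileInputStream(src);
                     OutputStream out = new FileOutputStream(dst)) {
                    int read;
                    while ((read = in.read(buffer)) != -1) {
                        out.write(buffer, 0, read);        // per-chunk processing goes here
                        done += read;
                        listener.onProgress(done, total);  // one UI event per buffer
                    }
                }
            }

            public interface ProgressListener {
                void onProgress(long bytesDone, long bytesTotal);
            }
        }

    At local-disk speeds a 64 KiB buffer still produces many progress events per second, so responsiveness usually only becomes a concern once the buffer grows to several megabytes.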


  • Write asynchronously to file in perl

    - by Stefhen
    Basically I would like to:
    1. Read a large amount of data from the network into an array in memory.
    2. Asynchronously write this array data, running it through bzip2 before it hits the disk.
    3. Repeat.

    Is this possible? If it is, I know that I will have to somehow read the next pass of data into a different array, as the AIO docs say that this array must not be altered before the async write is complete. I would like to background all of my writes to disk in order, as the bzip2 pass is going to take much longer than the network read. Is this doable? Below is a simple example of what I think is needed, but this just reads a file into array @a for testing.

        use warnings;
        use strict;
        use EV;
        use IO::AIO;
        use Compress::Bzip2;
        use FileHandle;
        use Fcntl;

        my @a;
        print "loading to array...\n";
        while(<>) {
            $a[$. - 1] = $_;
        }
        print "array loaded...\n";

        my $aio_w = EV::io IO::AIO::poll_fileno, EV::WRITE, \&IO::AIO::poll_cb;

        aio_open "./out", O_WRONLY || O_NONBLOCK, 0, sub {
            my $fh = shift or die "error while opening: $!\n";
            aio_write $fh, undef, undef, $a, -1, sub {
                $_[0] > 0 or die "error: $!\n";
                EV::unloop;
            };
        };

        EV::loop EV::LOOP_NONBLOCK;
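
    The same pipelining idea can be sketched in Java, purely as an illustration (GZIP stands in for bzip2, since the JDK has no built-in bzip2 codec; the class and file names are made up): a bounded queue hands buffers from the reading side to a single writer thread that compresses and writes them in order.

        import java.io.*;
        import java.util.Arrays;
        import java.util.concurrent.*;
        import java.util.zip.GZIPOutputStream;

        // The reader keeps filling buffers while one writer thread compresses and
        // writes them in order, so the slow compress/write stage runs in the background.
        public class PipelinedWriter implements Closeable {
            private static final byte[] POISON = new byte[0];
            private final BlockingQueue<byte[]> queue = new ArrayBlockingQueue<>(16);
            private final Thread writer;

            public PipelinedWriter(File out) throws IOException {
                OutputStream sink = new GZIPOutputStream(new FileOutputStream(out));
                writer = new Thread(() -> {
                    try (OutputStream os = sink) {
                        while (true) {
                            byte[] chunk = queue.take();
                            if (chunk == POISON) break;
                            os.write(chunk);          // compressed on the way out
                        }
                    } catch (IOException | InterruptedException e) {
                        throw new RuntimeException(e);
                    }
                });
                writer.start();
            }

            // Called by the network-reading side; a copy of the data is queued,
            // so the caller is free to reuse its own buffer immediately.
            public void submit(byte[] data, int len) throws InterruptedException {
                queue.put(Arrays.copyOf(data, len));
            }

            @Override
            public void close() throws IOException {
                try {
                    queue.put(POISON);
                    writer.join();
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }

    Because submit() copies each chunk, the reading side can keep refilling its own buffer, which mirrors the AIO constraint that the data handed to an async write must not be altered until the write completes.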


  • java distributed cache for low latency, high availability

    - by Shahbaz
    I've never used distributed caches/DHTs like memcached, JBoss Cache, Ehcache, etc. I'm wondering which, if any, is appropriate for my use. First, I'm not doing web applications (as most of these projects seem to be geared towards web apps). I write servers (Order Management Systems, actually) for financial trading firms. The servers themselves are not too complicated. They need to receive information (market data, orders, executions, etc.) and route it to its destination while possibly transforming some of these messages.

    I am looking at these products to solve the following problems:

    - Safe repository of the state of the server. I'd rather build the logic of my application as a bunch of transformers (similar to Apache Camel) and store the state in a 'safe' place.
    - This repository should be distributed: in case one of these data stores crashes, one or two more should be up and I should be able to switch to them seamlessly.
    - This repository should be fast. Single-digit milliseconds count here; in other words, the systems which consume/process this data are automated systems, not humans clicking on links. This system needs to have high throughput and low latency. By sending my data outside the process, I am necessarily slowing performance, but I am trying to balance absolute raw speed and absolute protection of data.
    - This repository should be safe. Similar to the point about several on-line backups, this system needs to write data to disk (potentially more than one disk).

    I'd really like to stop writing my own 'transaction servers.' Am I correct to be looking into projects such as JBoss Cache, Ehcache, etc.? Thanks


  • PHP, jQuery and Ajax Object Orientation

    - by pastylegs
    I'm a fairly experienced programmer getting my head around PHP and Ajax for the first time, and I'm having a bit of trouble figuring out how to incorporate object-oriented PHP into my Ajax webapp. I have an admin page (admin.php) that will load and write information (info.xml) from an XML file depending on the user's selection in a form on the admin page. I have decided to use an object (ContentManager.php) to manage the loading and writing of the XML file to disk, i.e.:

        class ContentManager {
            var $xml_attribute_1;
            ...

            function __construct() {
                // load the xml file from disk and save its contents into variables
                $xml_attribute = simplexml_load_file('/path/to/xml');
            }

            function get_xml_contents() {
                return $xml_attribute;
            }

            function write_xml($contents) {
            }

            function print_xml() {
            }
        }

    I create the ContentManager object in admin.php like so:

        <?php
        include '../includes/CompetitionManager.php';
        $cm = new CompetitionManager();
        ?>
        <script>
        ...all my jquery
        </script>
        <html>
        ... all my form elements
        </html>

    So now I want to use Ajax to allow the user to retrieve information from the XML file via the ContentManager object using an interface (ajax_handler.php) like so:

        <?php
        if(_POST[]=="get_a"){
        }else if(){
        }
        ...
        ?>

    I understand how this would work if I wasn't using objects, i.e. the handler PHP file would do a certain action depending on a variable in the .post request, but with my setup I can't see how I can get a reference to the ContentManager object I have created in admin.php from the ajax_handler.php file. Maybe my understanding of PHP object scope is flawed. Anyway, if anyone can make sense of what I'm trying to do, I would appreciate some help!


  • SQL Server 2005 standard filegroups / files for performance on SAN

    - by Blootac
    Ok so I've just been on a SQL Server course and we discussed the usage scenarios of multiple filegroups and files when in use over local RAID and local disks, but we didn't touch SAN scenarios, so my question is as follows.

    I currently have a 250 gig database running on SQL Server 2005 where some tables have a huge number of writes and others are fairly static. The database and all objects reside in a single filegroup with a single data file. The log file is also on the same volume. My interpretation is that separate data files should be used across different disks to lessen disk contention and that filegroups should be used for partitioning of data. However, with a SAN you obviously don't really have the same issue of disk contention that you do with a small RAID setup (or at least we don't at the moment), and standard edition doesn't support partitioning.

    So in order to improve parallelism, what should I do? My understanding of various Microsoft publications is that if I increase the number of data files, separate threads can act across each file separately. Which leads me to the question: how many files should I have? One per core? Should I be putting tables and indexes with high levels of activity in separate filegroups, each with the same number of data files as we have cores? Thank you


  • Problem with stack based implementation of function 0x42 of int 0x13

    - by IceCoder
    I'm trying a new approach to int 0x13 (just to learn more about the way the system works): using the stack to create a DAP. Assuming that DL contains the disk number, AX contains the address of the bootable entry in the PT, DS is updated to the right segment and the stack is correctly set, this is the code:

        push DWORD 0x00000000
        add ax, 0x0008
        mov si, ax
        push DWORD [ds:(si)]
        push DWORD 0x00007c00
        push WORD 0x0001
        push WORD 0x0010
        push ss
        pop ds
        mov si, sp
        mov sp, bp
        mov ah, 0x42
        int 0x13

    As you can see, I push the DAP structure onto the stack, update DS:SI in order to point to it, DL is already set, then set AH to 0x42 and call int 0x13. The result is error 0x01 in AH and obviously CF set. No sectors are transferred. I checked the stack trace endlessly and it is ok, and the partition table is ok too. I cannot figure out what I'm missing...

    This is the stack trace portion of the disk address packet:

        0x000079ea: 10 00          adc %al,(%bx,%si)
        0x000079ec: 01 00          add %ax,(%bx,%si)
        0x000079ee: 00 7c 00       add %bh,0x0(%si)
        0x000079f1: 00 00          add %al,(%bx,%si)
        0x000079f3: 08 00          or  %al,(%bx,%si)
        0x000079f5: 00 00          add %al,(%bx,%si)
        0x000079f7: 00 00          add %al,(%bx,%si)
        0x000079f9: 00 a0 07 be    add %ah,-0x41f9(%bx,%si)

    I'm using the latest version of qemu and trying to read from the hard drive (0x80). I have also tried a 4-byte alignment for the structure with the same result (CF 1, AH 0x01), and the extensions are present.


  • How to create resource manager in ASP.NET

    - by Bart
    Hello, I would like to create a resource manager on my page and use some data stored in my resource files (default.aspx.resx and default.aspx.en.resx). The code looks like this:

        System.Resources.ResourceManager myResourceManager =
            System.Resources.ResourceManager.CreateFileBasedResourceManager(
                "resource",
                Server.MapPath("App_LocalResources") + Path.DirectorySeparatorChar,
                null);

        if (User.Identity.IsAuthenticated)
        {
            Welcome.Text = myResourceManager.GetString("LoggedInWelcomeText");
        }
        else
        {
            Welcome.Text = myResourceManager.GetString("LoggedOutWelcomeText");
        }

    But when I compile and run it on my local server I get this type of error:

        Could not find any resources appropriate for the specified culture (or the neutral culture) on disk.
        baseName: resource  locationInfo: <null>  fileName: resource.resources

        Description: An unhandled exception occurred during the execution of the current web request.
        Please review the stack trace for more information about the error and where it originated in the code.

        Exception Details: System.Resources.MissingManifestResourceException: Could not find any resources
        appropriate for the specified culture (or the neutral culture) on disk.
        baseName: resource  locationInfo: <null>  fileName: resource.resources

        Source Error:
        Line 89:  else
        Line 90:  {
        Line 91:      Welcome.Text = myResourceManager.GetString("LoggedOutWelcomeText");
        Line 92:  }
        Line 93:

    Can you please assist me with this issue?


  • Error doing an MSBuild on a CLR stored procedure project on a build server

    - by CraftyFella
    Hi, when building a CLR stored procedure project using MSBuild on our build server (TeamCity) we're getting the following error:

        error MSB4019: The imported project "C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\SqlServer.targets" was not found.
        Confirm that the path in the <Import> declaration is correct, and that the file exists on disk.

    I've checked to see if the file exists on disk and sure enough it doesn't. I've checked on my own machine and it does exist. I don't really want to start copying files over to the build server manually. Here are the lines from the csproj file which are being imported into the proj file:

        <Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" />
        <Import Project="$(MSBuildToolsPath)\SqlServer.targets" />

    Here's the line from the proj file which is being run by our TeamCity server:

        <Import Project="..\$(ProjectName).csproj"/>

    My question is really: where does this file come from? Is it part of the Visual Studio install, for example? Or is there some redistribution package somewhere to allow me to compile this project on our build server?

    Thanks. BTW, if I just copy the file onto the build server it does actually work. Dave


  • Securely erasing a file using simple methods?

    - by Jason
    Hello, I am using C# .NET Framework 2.0. I have a question relating to file shredding. My target operating systems are Windows 7, Windows Vista and Windows XP, and possibly Windows Server 2003 or 2008, but I'm guessing they should be the same as the first three.

    My goal is to securely erase a file. I don't believe using File.Delete is secure at all. I read somewhere that the operating system simply marks the raw hard-disk data for deletion when you delete a file; the data is not erased at all. That's why there exist so many working methods to recover supposedly "deleted" files. I also read that's why it's much more useful to overwrite the file, because then the data on disk actually has to be changed. Is this true? Is this generally what's needed? If so, I believe I can simply write the file full of 1's and 0's a few times.

    I've read:
    - http://www.codeproject.com/KB/files/NShred.aspx
    - http://blogs.computerworld.com/node/5756
    - http://blogs.computerworld.com/node/5687
    - http://stackoverflow.com/questions/4147775/securely-deleting-a-file-in-c-net
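
    As a rough illustration of the overwrite-then-delete idea, here is a Java sketch with hypothetical names; note that journaling filesystems, SSD wear levelling and shadow copies can still keep stale copies of the data, so overwriting in place is not an absolute guarantee.

        import java.io.File;
        import java.io.IOException;
        import java.io.RandomAccessFile;
        import java.security.SecureRandom;

        public class SimpleShredder {
            // Overwrite the file's bytes in place a few times, then delete it.
            public static void shred(File file, int passes) throws IOException {
                SecureRandom random = new SecureRandom();
                byte[] block = new byte[64 * 1024];
                try (RandomAccessFile raf = new RandomAccessFile(file, "rws")) {
                    long length = raf.length();
                    for (int pass = 0; pass < passes; pass++) {
                        raf.seek(0);
                        long remaining = length;
                        while (remaining > 0) {
                            random.nextBytes(block);
                            int chunk = (int) Math.min(block.length, remaining);
                            raf.write(block, 0, chunk);
                            remaining -= chunk;
                        }
                        raf.getFD().sync();   // push each pass to the device, not just the cache
                    }
                }
                if (!file.delete()) {
                    throw new IOException("could not delete " + file);
                }
            }
        }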


  • Updating permissions on Amazon S3 files that were uploaded via JungleDisk

    - by Simon_Weaver
    I am starting to use JungleDisk to upload files to an Amazon S3 bucket which corresponds to a CloudFront distribution, i.e. I can access it via an http:// URL and I am using Amazon as a CDN. The problem I am facing is that JungleDisk doesn't set 'read' permissions on the files, so when I go to the corresponding URL in a browser I get an Amazon 'AccessDenied' error. If I use a tool like BucketExplorer to set the ACL then that URL now returns a 200.

    I really, really like the simplicity of dragging files to a network drive. JungleDisk is the best program I've found to do this reliably without tripping over itself and getting confused. However, it doesn't seem to have an option to make the files readable. I really don't want to have to go to a different tool (especially if I have to buy it) just to change the permissions, and this seems really slow anyway because these tools generally seem to traverse the whole directory structure. JungleDisk provides some kind of 'web access', but this is a paid feature and I'm not sure whether it will work or not.

    S3 doesn't appear to propagate permissions down, which is a real pain. I'm considering writing a manual tool to traverse my tree and set everything to 'read', but I'd rather not do this if this is a problem someone else has already solved.
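
    For what it's worth, the "manual tool" idea is only a few lines with an S3 client library. Here is a sketch using the AWS SDK for Java (v1-style API; the bucket name is hypothetical) that walks every key and applies a public-read canned ACL:

        import com.amazonaws.services.s3.AmazonS3;
        import com.amazonaws.services.s3.AmazonS3ClientBuilder;
        import com.amazonaws.services.s3.model.CannedAccessControlList;
        import com.amazonaws.services.s3.model.ObjectListing;
        import com.amazonaws.services.s3.model.S3ObjectSummary;

        public class MakeBucketReadable {
            public static void main(String[] args) {
                String bucket = "my-cloudfront-origin-bucket";   // hypothetical bucket name
                AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

                ObjectListing listing = s3.listObjects(bucket);
                while (true) {
                    for (S3ObjectSummary summary : listing.getObjectSummaries()) {
                        // Grant everyone read access so the CDN/browser stops getting AccessDenied.
                        s3.setObjectAcl(bucket, summary.getKey(), CannedAccessControlList.PublicRead);
                    }
                    if (!listing.isTruncated()) break;
                    listing = s3.listNextBatchOfObjects(listing);   // handle buckets with >1000 keys
                }
            }
        }

    Newly uploaded objects would still need the same treatment (or a bucket policy), since S3 ACLs are per object and are not inherited down a prefix.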


  • Cassandra random read speed

    - by Jody Powlette
    We're still evaluating Cassandra for our data store. As a very simple test, I inserted a value for 4 columns into the Keyspace1/Standard1 column family on my local machine, amounting to about 100 bytes of data. Then I read it back as fast as I could by row key. I can read it back at 160,000/second. Great.

    Then I put in a million similar records, all with keys in the form of X.Y where X in (1..10) and Y in (1..100,000), and I queried for a random record. Performance fell to 26,000 queries per second. This is still well above the number of queries we need to support (about 1,500/sec).

    Finally I put ten million records in, from 1.1 up through 10.1000000, and randomly queried for one of the 10 million records. Performance is abysmal at 60 queries per second and my disk is thrashing around like crazy. I also verified that if I ask for a subset of the data, say the 1,000 records between 3,000,000 and 3,001,000, it returns slowly at first and then as they cache, it speeds right up to 20,000 queries per second and my disk stops going crazy.

    I've read all over that people are storing billions of records in Cassandra and fetching them at 5-6k per second, but I can't get anywhere near that with only 10 million records. Any idea what I'm doing wrong? Is there some setting I need to change from the defaults? I'm on an overclocked Core i7 box with 6 gigs of RAM, so I don't think it's the machine.

    Here's my code to fetch records, which I'm spawning into 8 threads to ask for one value from one column via row key:

        ColumnPath cp = new ColumnPath();
        cp.Column_family = "Standard1";
        cp.Column = utf8Encoding.GetBytes("site");
        string key = (1+sRand.Next(9)) + "." + (1+sRand.Next(1000000));
        ColumnOrSuperColumn logline = client.get("Keyspace1", key, cp, ConsistencyLevel.ONE);

    Thanks for any insights


  • What's the fastest way to bulk insert a lot of data in SQL Server (C# client)

    - by Andrew
    I am hitting some performance bottlenecks with my C# client inserting bulk data into a SQL Server 2005 database and I'm looking for ways to speed up the process. I am already using SqlClient.SqlBulkCopy (which is based on TDS) to speed up the data transfer across the wire, which helped a lot, but I'm still looking for more. I have a simple table that looks like this:

        CREATE TABLE [BulkData](
            [ContainerId] [int] NOT NULL,
            [BinId] [smallint] NOT NULL,
            [Sequence] [smallint] NOT NULL,
            [ItemId] [int] NOT NULL,
            [Left] [smallint] NOT NULL,
            [Top] [smallint] NOT NULL,
            [Right] [smallint] NOT NULL,
            [Bottom] [smallint] NOT NULL,
            CONSTRAINT [PKBulkData] PRIMARY KEY CLUSTERED
            (
                [ContainerId] ASC,
                [BinId] ASC,
                [Sequence] ASC
            ))

    I'm inserting data in chunks that average about 300 rows, where ContainerId and BinId are constant in each chunk, the Sequence value is 0..n, and the values are pre-sorted based on the primary key. The %Disk Time performance counter spends a lot of time at 100%, so it is clear that disk IO is the main issue, but the speeds I'm getting are several orders of magnitude below a raw file copy. Does it help any if I:

    1. Drop the primary key while I am doing the inserting and recreate it later?
    2. Do inserts into a temporary table with the same schema and periodically transfer them into the main table to keep the size of the table where insertions are happening small?
    3. Anything else?

    Based on the responses I have gotten, let me clarify a little bit:

    Portman: I'm using a clustered index because when the data is all imported I will need to access data sequentially in that order. I don't particularly need the index to be there while importing the data. Is there any advantage to having a nonclustered PK index while doing the inserts as opposed to dropping the constraint entirely for import?

    Chopeen: The data is being generated remotely on many other machines (my SQL Server can only handle about 10 currently, but I would love to be able to add more). It's not practical to run the entire process on the local machine because it would then have to process 50 times as much input data to generate the output.

    Jason: I am not doing any concurrent queries against the table during the import process; I will try dropping the primary key and see if that helps. ~ Andrew


  • Optimize Use of Ramdisk for Eclipse Development

    - by Eric J.
    We're developing Java/SpringSource applications with Eclipse on 32-bit Vista machines with 4GB RAM. The OS exposes roughly 3.3GB of RAM due to reservations for hardware etc. in the virtual address space. I came across several ramdisk drivers that can create a virtual disk from the OS-hidden RAM, and I am looking for suggestions on how best to use the 740MB virtual disk to speed development in our environment. The slowest parts of development for us are compiling and launching the SpringSource dm Server.

    One option is to configure Vista to swap to the ramdisk. That works, and noticeably speeds up development in low-memory situations. However, the 3.3GB available to the OS is often sufficient, and there are many situations where we do not use the swap file(s) much.

    Another option is to use the ramdisk as a location for temporary files. Using the Vista mklink command, I created a hard link from where the SpringSource dm Server's work area normally resides to the ramdisk. That significantly improves server startup times but does nothing for compile times. There are roughly 500MB still free on the ramdisk when the work directory is fully utilized, so there is room for plenty more.

    What other files/directories might be candidates to place on the ramdisk? Eclipse-related files? (Parts of) the JDK? Is there a free/open source tool for Vista that will show me which files are used most frequently during a period of time, to reduce the guesswork?


  • Creating a single CRM plugin DLL to store in the CRM database

    - by Shaamaan
    Since the suggested way of storing plugins in MS CRM is via the CRM database, I figured it's about time to do something about the method I'm currently using, which is storing the DLLs on disk. The trouble, however, is that I don't know how to embed all the other various bits that are needed by the DLL: the localization resource files (which are kept in another folder) and some referenced DLLs from the latest SDK (which had to be manually placed in the bin\assembly folder). At this point, I'm not even entirely sure this is possible.

    So far I've tried to solve the localization problem by changing the build action on the resource files to "Content" or "Resource" and tested this solution (still keeping the location on disk, but without the added localization folder). This didn't work: when I purposely generated a validation error in one of the plugins, I got the default language message (English) despite having a different language selected in the CRM.

    I've faced a similar problem when trying to add some of the referenced DLL files (namely the new SDK DLLs: xrm.portal, xrm.portal.files and xrm.client). When I tried to store the plugin in the database (skipping for a moment the localization issue), I got a CRM error saying it cannot find the XRM.Client assembly or one of its dependencies.

    I know I could use ILMerge to put the whole thing together, but I've got a gut feeling telling me this isn't really a good idea. Any hints or suggestions on this issue would be great.


  • Can a large transaction log cause cpu hikes to occur

    - by Simon Rigby
    Hello all, I have a client with a very large database on SQL Server 2005. The total space allocated to the db is 15Gb, with roughly 5Gb to the db and 10Gb to the transaction log. Just recently a web application that is connecting to that db has been timing out.

    I have traced the actions on the web page and examined the queries that execute whilst these web operations are performed. There is nothing untoward in the execution plan. The query itself uses multiple joins but completes very quickly. However, the db server's CPU hikes to 100% for a few seconds. The issue occurs when several simultaneous users are working on the system (when I say multiple, read about 5). Under this load, timeouts start to occur.

    I suppose my question is: can a large transaction log cause issues with CPU performance? There is about 12Gb of free space on the disk currently. The configuration is a little out of my hands, but the db and log are both on the same physical disk.

    I appreciate that the log file is massive and needs attending to, but I'm just looking for a heads up as to whether this may cause CPU spikes (i.e. trying to find the correlation). The timeouts are a recent thing and this app has been responsive for a few years (i.e. it's a recent manifestation). Many thanks.


  • Help Me With This Access Query

    - by yae
    I have 2 tables: "products" and "pieces".

        PRODUCTS: idProd | product | price
        PIECES:   id | idProdMain | idProdChild | quant

    idProdMain and idProdChild are related to the "products" table. Another consideration is that one product can have several pieces, and one product can itself be a piece. A product's price equals the sum of quantity * price over all of its pieces. The "products" table contains all products.

    EXAMPLE:

        TABLE PRODUCTS (idProd - product - price)
        1 - Computer - 300€
        2 - Hard Disk - 100€
        3 - Memory - 50€
        4 - Main Board - 100€
        5 - Software - 50€
        6 - CDroms 100 un. - 30€

        TABLE PIECES (id - idProdMain - idProdChild - Quant.)
        1 - 1 - 2 - 1
        2 - 1 - 3 - 2
        3 - 1 - 4 - 1

    WHAT I NEED? I need to update the price of the main product when the price of a child product (piece) is changed. Following the previous example, if I change the price of the product "Memory" (which is a piece too) to 60€, then the product "Computer" must change its price to 320€. How can I do it using queries?

    Already I have tried this to obtain the price of the main product, but it does not run; this query returns no value:

        SELECT Sum(products.price*pieces.quant) AS Expr1
        FROM products LEFT JOIN pieces
            ON (products.idProd = pieces.idProdChild)
            AND (products.idProd = pieces.idProdChild)
            AND (products.idProd = pieces.idProdMain)
        WHERE (((pieces.idProdMain)=5));

    MORE INFO: The table "products" contains all the products for sale in the shop. The table "pieces" is to keep control of the compound products, to know which are the child products. An example of a compound product: computers. This product is composed of other products (motherboard, hard disk, memory, CPU, etc.).


  • Cucumber-rails on JRuby installs gem into my app's root directory with Bundler

    - by brad
    Just installed cucumber 0.7.2 and cucumber-rails 0.3.1 with jruby-1.4.0 on OS X. When I run a bundle install, it places a cucumber-rails directory in my main app with all of the gem code/dependencies. First off, this is definitely not what I want, and I'm not sure why this happens for cucumber-rails only. Second, if I delete this folder and just manually install cucumber-rails, when I run script/generate feature blah I get:

        /Users/bradrobertson/.rvm/rubies/jruby-1.4.0/lib/ruby/site_ruby/1.8/rubygems/source_index.rb:344:in `refresh!': source index not created from disk (RuntimeError)
            from /Users/bradrobertson/.rvm/gems/jruby-1.4.0/gems/rails-2.3.5/lib/rails/vendor_gem_source_index.rb:34:in `refresh!'
            from /Users/bradrobertson/.rvm/gems/jruby-1.4.0/gems/rails-2.3.5/lib/rails/vendor_gem_source_index.rb:29:in `initialize'
            from /Users/bradrobertson/.rvm/gems/jruby-1.4.0/gems/rails-2.3.5/lib/rails/gem_dependency.rb:21:in `new'
            from /Users/bradrobertson/.rvm/gems/jruby-1.4.0/gems/rails-2.3.5/lib/rails/gem_dependency.rb:21:in `add_frozen_gem_path'
            from /Users/bradrobertson/.rvm/gems/jruby-1.4.0/gems/rails-2.3.5/lib/initializer.rb:298:in `add_gem_load_paths'
            from /Users/bradrobertson/.rvm/gems/jruby-1.4.0/gems/rails-2.3.5/lib/initializer.rb:132:in `process'
            from /Users/bradrobertson/.rvm/gems/jruby-1.4.0/gems/rails-2.3.5/lib/initializer.rb:113:in `run'
            from /Users/bradrobertson/Repos/app/source/trunk/config/environment.rb:13
            from /Users/bradrobertson/Repos/app/source/trunk/config/environment.rb:1:in `require'
            from /Users/bradrobertson/.rvm/gems/jruby-1.4.0/gems/rails-2.3.5/lib/commands/generate.rb:1
            from /Users/bradrobertson/.rvm/gems/jruby-1.4.0/gems/rails-2.3.5/lib/commands/generate.rb:3:in `require'
            from script/generate:3

    Similarly, running rake cucumber I get:

        rake aborted!
        source index not created from disk

    So something obviously doesn't work. If I add that cucumber-rails directory back in, then rake cucumber actually runs. Can someone tell me why it would need to install the gem right in my rails app? I've never seen this before.

    Setup:
    - jruby-1.4.0
    - cucumber-0.7.2
    - cucumber-rails 0.3.1
    - bundler 0.9.23
    - webrat 0.7.1


  • Why would I be seeing execution timeouts when setting $_SESSION values?

    - by Kev
    I'm seeing the following errors in my PHP error logs:

        PHP Fatal error: Maximum execution time of 60 seconds exceeded in D:\sites\s105504\www\index.php on line 3
        PHP Fatal error: Maximum execution time of 60 seconds exceeded in D:\sites\s105504\www\search.php on line 4

    The lines in question are:

    index.php:

        01 <?php
        02 session_start();
        03 ob_start();
        04 error_reporting(E_All);
        05 $_SESSION['nav'] = "range"; // <-- Error generated here

    search.php:

        01 <?php
        02
        03 session_start();
        04 $_SESSION['nav'] = "range";
        05 $_SESSION['navselected'] = 21; // <-- Error generated here

    Would it really take as long as 60+ seconds to assign a $_SESSION[] value? The platform is:

    - Windows 2003 SP2 + IIS 6
    - FastCGI for Windows 2003 (original RTM build)
    - PHP 5.2.6, non-thread-safe build

    There aren't any issues with session data files being cleared up on the server as sessions expire. The oldest sess_XXXXXXXXXXXXXX file I'm seeing is around 2 hours old. There are no disk timeouts evident in the event logs or other such disk health issues that might suggest difficulty creating session data files. The site is also on a server that isn't under heavy load. The site is busy but not being hammered, and it is very responsive. It's just that we get these errors, three or four in a row, every three or four hours.

    I should also add that I'm not the original developer of this code and that it belongs to a customer whose developer has long since departed.


  • Mercurial central server file discrepancy (using 'diff to local')

    - by David Montgomery
    Newbie alert! OK, I have a working central Mercurial repository that I've been working with for several weeks. Everything has been great until I hit a really bizarre problem: my central server doesn't seem to be synced to itself? I only have one file that seems to be out of sync right now, but I really need to know how this happened to prevent it from happening in the future.

    Scenario:

    1. Created a Mercurial repository on the server using an existing project directory. The directory contained the file 'mypage.aspx'.
    2. On my workstation, I cloned the central repository.
    3. I made an edit to mypage.aspx.
    4. hg commit, then hg push from my workstation to the central server.
    5. Now if I look at mypage.aspx in the server's repository using TortoiseHg's repository explorer, I see the change history for mypage.aspx -- an initial check-in and one edit. However, when I select 'Diff to local', it shows the current version on the server's disk is the original version, not the edited version!

    I have not experimented with branching at all yet, so I'm sure I'm not getting a branch problem. 'hg status' on the server or client returns no pending changes. If I create a clone of the server's repository to a new location, I see the same change history as I would expect, but the file on disk doesn't contain my edit.

    So, to recap:
    - Central repository = original file, but shows change in revision history (bad)
    - Local repository 'A' = updated file, shows change in revision history (good)
    - Local repository 'B' = original file, but shows change in revision history (bad)

    Help please! Thanks, David

