Search Results

Search found 20904 results on 837 pages for 'disk performance'.

  • Convert Ubuntu back to Windows 8

    - by alex0112
    I recently wiped a Windows 8 machine and installed Ubuntu. I now need to sell this machine, and the people I want to sell it to would like it to be running Windows 8 again. I've been looking around online, and I was under the impression that I could simply order a recovery disk or a re-install disk or something similar, but I'm getting a lot of different answers. What is the right (legal, cheap, in that order) way to do this?

    Read the article

  • Recovery partition is 'visible' while hidden

    - by jroeleveld
    For some reason the Windows recovery partition is appearing even though it is hidden; by hidden I mean hidden in Disk Management and DiskPart. This partition sometimes causes the screen to flicker, as it continuously switches between being visible and hidden. Sometimes, however, it is simply visible. In the attached screenshots you can see the configuration. Translation: partitie = partition, In orde = OK, Verborgen = hidden, Lokale schijf = local disk, Herstelpartitie = recovery partition

    Read the article

  • Create a primary partition on Windows 7

    - by TutorialPoint
    I have Windows 7 installed. At the moment I have the following partitions on one 300 GB hard disk:

        C: (Windows) (Primary, System, Boot, Page File, Crash Dump, Active) - 100 GB
        D: (Data) (Logical) - 100 GB
        E: (System Reserved, created after a boot repair) (Primary, System) - 100 MB

    Now I want to create a new primary partition on this disk for a new OS, since I have roughly 100 GB left. However, when I try to create a new partition, it is made a logical partition, not a primary one. How do I make it primary?

    Read the article

  • Speed up MySQL for inserts (for testing purposes)

    - by Alex N
    I have a bit of software that needs to do a lot of INSERTs. In the production environment there will be some serious tweaking and testing, but right now, when I just need to test it, I'd like to speed up inserts as much as possible. Hence my question: is there a way to tweak MySQL so that it does little disk I/O, keeps everything in RAM, and syncs to disk only rarely (say, once every n seconds)?
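
    A minimal sketch of the usual knobs, assuming MySQL 5.x with InnoDB (these settings trade durability for speed, so they are strictly test-only):

        # Test-only: relax InnoDB/binlog durability so inserts rarely hit disk
        mysql -u root -e "SET GLOBAL innodb_flush_log_at_trx_commit = 0"  # flush/sync the log ~once per second
        mysql -u root -e "SET GLOBAL sync_binlog = 0"                     # let the OS decide when to sync the binlog
        # Wrapping many rows in one transaction also avoids a sync per INSERT
        # (assumes a table 't' already exists in schema 'test'):
        mysql -u root test -e "START TRANSACTION; INSERT INTO t VALUES (1,'a'),(2,'b'); COMMIT"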

    Read the article

  • How to run benchmarks on MySQL?

    - by HexaHow
    My server has MySQL Server 5.1 installed. I would like to benchmark it, but I couldn't find sql-bench, the benchmark suite provided by MySQL, and that suite seems complicated to install and set up on my server anyway. I need something I can set up and run directly, without Perl scripts like MySQL's own suite uses. Does anyone know the most popular benchmarking tools for measuring MySQL performance? I need to measure the performance of the SQL issued by my ASP.NET application that connects to MySQL, so that I can optimize it. Ideally the tool would run each of my SQL statements many times and report the query times for comparison.
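
    One candidate worth a look: mysqlslap ships with MySQL 5.1 and can replay a file of your own statements under concurrency. A sketch (the query file path and schema name below are placeholders):

        # 10 concurrent clients, 5 rounds, timing reported at the end
        mysqlslap --user=root --password \
                  --create-schema=mydb \
                  --query=/path/to/my_queries.sql --delimiter=";" \
                  --concurrency=10 --iterations=5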

    Read the article

  • SQL Server: Database stuck in "Restoring" state

    - by Ian Boyd
    I backed up a database:

        BACKUP DATABASE MyDatabase
            TO DISK = 'MyDatabase.bak'
            WITH INIT --overwrite existing

    And then tried to restore it:

        RESTORE DATABASE MyDatabase
            FROM DISK = 'MyDatabase.bak'
            WITH REPLACE --force restore over specified database

    And now the database is stuck in the restoring state. Some people have theorized that it's because there was no log file in the backup, and it needed to be rolled forward using:

        RESTORE DATABASE MyDatabase WITH RECOVERY

    Except that, of course, fails:

        Msg 4333, Level 16, State 1, Line 1
        The database cannot be recovered because the log was not restored.
        Msg 3013, Level 16, State 1, Line 1
        RESTORE DATABASE is terminating abnormally.

    And exactly what you want in a catastrophic situation is a restore that won't work. The backup contains both a data file and a log file:

        RESTORE FILELISTONLY FROM DISK = 'MyDatabase.bak'

        Logical Name    PhysicalName
        ============    ============
        MyDatabase      C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\MyDatabase.mdf
        MyDatabase_log  C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\MyDatabase_log.LDF

    Read the article

  • NedMalloc / DlMalloc experiences

    - by Suma
    I am currently evaluating a few scalable memory allocators, namely nedmalloc and ptmalloc (both built on top of dlmalloc), as replacements for the default malloc/new because of significant contention seen in a multithreaded environment. Their published performance figures look good, but I would like to hear from people who have actually used them. Were your performance goals satisfied? Did you experience any unexpected or hard-to-solve issues (like heap corruption)? If you have tried both ptmalloc and nedmalloc, which of the two would you recommend, and why (ease of use, performance)?
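
    For what it's worth, a common way to trial a candidate allocator on Linux without touching the code is the LD_PRELOAD trick; a sketch, assuming the allocator has been built as a shared library (the .so path and program name here are made up):

        # Swap in the candidate allocator for a single run via the dynamic linker
        LD_PRELOAD=/usr/local/lib/libnedmalloc.so ./my_multithreaded_app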

    Read the article

  • AnkhSVN Commits Are Very Slow

    - by jakdep
    Recently I had to move my SVN repositories to a different server, and I have been experiencing performance problems since the move. I am using Visual Studio 2005, AnkhSVN 2.1.7819.411 and TortoiseSVN 1.6.6 on my workstation, and VisualSVN Server on the server, which runs Windows Server 2008. Whenever I commit a file or view a file's history in Visual Studio, it takes twenty-odd seconds. I confirmed that an exception has been made for VisualSVN Server in the server's firewall, yet when I disable the firewall entirely, performance returns to normal (1-2 seconds for a commit). Commits and log checks on a file in TortoiseSVN perform fine as well. To ensure the problem was not related to the move itself, I ran these tests against a new repository created on the new server. So I reckon the problem lies with AnkhSVN, but I am at a loss as to how to diagnose it further. Any help would be greatly appreciated.

    Read the article

  • Help make sense of a KillDisk error/log

    - by user284194
    I have a hard drive that I've been trying to reformat. I tried reformatting it with the Windows XP and Windows 7 installers, and from an Ubuntu live CD with GParted. I also tried using dd to 'zero' the drive, with no success. Finally I ran across KillDisk in a search. I tried to zero the disk again with KillDisk, and after 8 hours of zeroing I get the following errors in the log:

        ---------------------------------------- Erase Session Begin ----------------------------------------
        2010-03-23 19:35:54 Active@ KILLDISK for Windows Build 5.1.39 started
        Target: WDC WD2500KS-00MJB0 232.9 GB
        Located on: WDC WD2500KS-00MJB0 (Serial number: WD-WCANK9604799)
        Erase method: One Pass Zeros (1 pass)
        Passes: 1
        Bad (unwritable) sectors detected from 1701 to 488397167 on Hard Disk 1.
        Error (the handle is invalid) refreshing device Hard Disk 1.
        Error (the handle is invalid) reading sector 0 on 81h.
        2010-03-24 02:28:25 Total number of erased device(s): 0, partition(s): 0
        ----------------------------------------- Erase Session End ----------------------------------------

    Is the drive dead?
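
    One way to sanity-check the hardware itself, as a sketch (assumes smartmontools is installed, e.g. on the Ubuntu live CD, and that the drive appears as /dev/sdb):

        # Query the drive's own SMART self-assessment and the attributes that flag dying media
        smartctl -H /dev/sdb
        smartctl -A /dev/sdb | grep -Ei 'Reallocated|Pending|Uncorrectable'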

    Read the article

  • Large file upload into WSS v3

    - by Rubens Farias
    I built a WSSv3 application which uploads files in small chunks; as each piece of data arrives, I temporarily keep it in a SQL 2005 image-type field, for performance reasons**. The problem comes when the upload ends: I need to move the data from SQL Server into a SharePoint document library through the WSSv3 object model. Right now I can think of two approaches:

        SPFileCollection.Add(string, (byte[])reader[0]); // OutOfMemoryException

    and:

        SPFile file = folder.Files.Add("filename", new byte[] { });
        using (Stream stream = file.OpenBinaryStream())
        {
            // ... init vars and stuff ...
            while ((bytes = reader.GetBytes(0, offset, buffer, 0, BUFFER_SIZE)) > 0)
            {
                stream.Write(buffer, 0, (int)bytes); // Timeout issues
            }
            file.SaveBinary(stream);
        }

    Is there any other way to complete this task successfully?

    ** Performance reasons: if you try to write every chunk directly to SharePoint, you'll notice performance degrading as the file grows (100 MB).

    Read the article

  • Alternative to udev functionality on OSX

    - by S1syphus
    I'm trying to create a custom check-in/check-out script for external hard drives. Part of the script comes from a Linux machine, where I have tested that it works fine, but it uses udevinfo, and OS X doesn't have udev. Is there anything that offers the same functionality?

        #!/bin/bash
        declare -a EXTERNAL_DISKS
        declare -a INTERNAL_DISKS
        for disk in /dev/[sh]d[a-z]; do
            eval `udevinfo -q env -n $disk`
            [ "$ID_BUS" = "usb" ]  && EXTERNAL_DISKS=( ${EXTERNAL_DISKS[@]} $disk )
            [ "$ID_BUS" = "scsi" ] && INTERNAL_DISKS=( ${INTERNAL_DISKS[@]} $disk )
        done
        echo "Internal disks: ${INTERNAL_DISKS[@]}"
        echo "External disks: ${EXTERNAL_DISKS[@]}"

    Does anybody know any alternatives, or a way this could be accomplished on OS X using bash?
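
    A minimal sketch of one possible macOS equivalent, assuming 'diskutil info' prints a 'Protocol:' line (USB, SATA, ...) for each whole disk:

        #!/bin/bash
        # Classify whole disks by bus using diskutil instead of udevinfo (macOS sketch)
        declare -a EXTERNAL_DISKS INTERNAL_DISKS
        for disk in /dev/disk[0-9]; do
            proto=$(diskutil info "$disk" 2>/dev/null | awk -F': *' '/Protocol/ {print $2}')
            case "$proto" in
                USB*)            EXTERNAL_DISKS+=("$disk") ;;
                SATA*|ATA*|SAS*) INTERNAL_DISKS+=("$disk") ;;
            esac
        done
        echo "Internal disks: ${INTERNAL_DISKS[@]}"
        echo "External disks: ${EXTERNAL_DISKS[@]}"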

    Read the article

  • How are interrupts handled by dual processor machines?

    - by jeffD
    I have an idea of how interrupts are handled by a dual-core CPU, but I was wondering how interrupt handling is implemented on a board with more than one physical processor. Is any of the interrupt responsibility determined by the physical board's configuration? Each processor must be able to handle some types of interrupts, like disk I/O. Or is there some circuitry that manages and dispatches interrupts to the appropriate processor? My guess is that the scheme must be processor-neutral, so that any processor and core can run the interrupt handler. If a core is waiting on a disk read, will that core be the one to run the interrupt handler when the disk is ready?
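
    On Linux, at least, this dispatching is visible and even adjustable from userspace; a quick illustration (the IRQ number 14 is just an example):

        # Columns CPU0, CPU1, ... show how many of each IRQ each processor has handled
        cat /proc/interrupts
        # Each IRQ's set of eligible CPUs (its SMP affinity mask) can be inspected or changed
        cat /proc/irq/14/smp_affinity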

    Read the article

  • Detect block size for quota in Linux

    - by Chen Levy
    The limit placed on disk quota in Linux is counted in blocks. However, I have found no reliable way to determine the block size. Tutorials I found refer to a block size of 512 bytes, and sometimes 1024 bytes. I got confused reading a post on LinuxForum.org about what a block size really means, so I tried to pin down its meaning in the context of quota. I found a "Determine the block size on hard disk filesystem for disk quota" tip on NixCraft that suggested the commands:

        dumpe2fs /dev/sdXN | grep -i 'Block size'

    or

        blockdev --getbsz /dev/sdXN

    But on my system those commands returned 4096, while the real quota block size on the same system turned out to be 1024 bytes. Is there a scriptable way to determine the quota block size on a device, short of creating a file of known size and checking its quota usage?
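
    For what it's worth, the Linux quota utilities conventionally report usage in 1 KiB blocks regardless of the filesystem block size, which would explain the 4096-vs-1024 mismatch. The empirical approach can also be scripted; a rough sketch, assuming quotas are enabled for $USER and that the blocks count is the second field of quota's last output line:

        # Write a file of known size and see how many quota blocks it adds
        before=$(quota -u "$USER" | awk 'END {print $2}')
        dd if=/dev/zero of=probe.bin bs=1024 count=10240 2>/dev/null   # exactly 10 MiB
        sync
        after=$(quota -u "$USER" | awk 'END {print $2}')
        echo "quota block size = $(( 10240 * 1024 / (after - before) )) bytes"
        rm -f probe.bin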

    Read the article

  • Perl - Internal File (create and execute)

    - by drewrockshard
    I have a quick question about creating files with Perl and executing them. I want to know whether it is possible to generate a file using Perl (I actually need a .bat script) and then execute that file from within the program. I know I can create files with Perl, and I have, but here I want the program to create the batch script internally (nothing actually written to disk, everything remaining in memory), execute it, and then discard it once it completes. Essentially I'm trying to build a batch script on the fly so that I end up with only the text files its run produces, rather than writing the batch script to disk, executing it, and then deleting it when it's done. Can this be done, and how would I go about doing it? Regards, Drew
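
    As far as I know, cmd.exe will only run a batch script that exists as an actual file, so the closest fit is a self-deleting temporary file rather than a purely in-memory one. The shape of that pattern, sketched in shell (the script content and names are made up; Perl's File::Temp supports the same flow):

        # Generate a script on the fly, run it, capture its output, discard it
        tmp=$(mktemp /tmp/job.XXXXXX) || exit 1
        printf '#!/bin/sh\necho "hello from a generated script"\n' > "$tmp"
        chmod +x "$tmp"
        "$tmp" > output.txt     # keep only the output file
        rm -f "$tmp"            # the generated script disappears when done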

    Read the article

  • Storing varchar(max) & varbinary(max) together - Problem?

    - by Tony Basallo
    I have an app that will have entries of both the varchar(max) and varbinary(max) data types. I was considering putting both in a separate table together, even though only one of the two will be used at any given time. The question is whether storing them together has any impact on performance. Considering that they are stored in the heap, I'm thinking that having them together will not be a problem. However, the varchar(max) column will probably have the 'text in row' table option set. I couldn't find any performance testing or profiling while "googling bing"; probably too specific a question? The SQL Server 2008 table looks like this:

        Id
        ParentId
        Version
        VersionDate
        StringContent  - varchar(max)
        BinaryContent  - varbinary(max)

    The app will decide which of the two columns to select when the data is queried. The string column will be used much more frequently than the binary column; will this have any impact on performance?

    Read the article

  • Why darcs instead of git?

    - by Ctrl Alt D-1337
    Using pure functional languages can have a lot of benefits over using impure imperative ones, but low-level systems languages will generally let you achieve much greater performance, especially when they are imperative, because they allow you to specify the exact steps by which the CPU should compute the result. If there were ever a list of tools where high performance is an absolute must, I would put source version control systems right at the top of it, and git achieves this very well; performance is not its only advantage over many other types of version control system, either. The git team handle the unsafe C code very well, and I never worry about the type system or any other feature of the language git is written in. So why is it that a lot of Haskell developers insist on darcs, when they will only be using the finished product?

    Read the article

  • Touch screens for kiosk applications

    - by Micah
    I'm developing a kiosk-style touchscreen application in Qt. Currently I'm using an Elo Touch surface acoustic wave touchmonitor which works well except for one thing: drag performance is way too poor to provide a good user experience. As this is the case for the cursor in X as well as in my application, it seems to be either the fault of X (probably not) or the touchmonitor. Since mobile platforms are able to achieve very high performance in this regard, it seems like it should be possible for vastly more powerful desktop systems. Does anybody have experience with getting good drag performance out of desktop touchmonitors? What hardware have you used? Is X to blame?
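
    One way to check whether the bottleneck is the monitor's event stream rather than X itself, sketched with the stock xinput utility (the device id is whatever 'xinput list' reports for the touchscreen):

        # Watch raw motion events while dragging; a sparse or slow stream points at the hardware
        xinput list                # find the touch device's numeric id
        xinput test <device-id>    # prints events as you drag a finger across the screen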

    Read the article

  • Make it more OOPey - good structure?

    - by Tom
    Hi, I just want advice on whether I could improve the structure around a particular class which handles all disk-access functions. The structure of my program is that I have a class called Disk which gets data from flat files and databases on a, you guessed it, hard disk drive. I have functions like LoadTextFileToStringList, WriteStringToTextFile, DeleteLineInTextFile, etc., which are kind of "generic" methods. In the same class I also have some more specific methods, such as GetXFromDisk, where X might be a particular field in a database table/query. Should I separate the generic methods from the specialised ones? Should I make another class which inherits the generic methods? At the moment my class is static, as there is no need for it to hold internal state. I'm not really OOPing, am I? Thanks, Thomas

    Read the article

  • How can I get read-ahead bytes?

    - by Bruno Martinez
    Operating systems read more from disk than a program actually requests, because the program is likely to need nearby information in the future. In my application, when I fetch an item from disk, I would like to show an interval of information around the element. There's a trade-off between how much information I request and show, and speed; but since the OS already reads more than I requested, accessing those bytes that are already in memory is free. What API can I use to find out what's in the OS caches? Alternatively, I could use memory-mapped files; in that case, the problem reduces to finding out whether a page is swapped to disk or not. Can this be done in any common OS?
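
    On POSIX systems the memory-mapped variant of this maps onto the mincore() syscall, which reports page residency; at the tool level a quick probe might look like this (assumes the third-party vmtouch utility is installed, and the file path is a placeholder):

        # Show which pages of a file are currently resident in the page cache
        vmtouch -v /path/to/datafile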

    Read the article

  • FORMSOF Thesaurus in SQL Server

    - by Coolcoder
    Has anyone done any performance measurements on this, in terms of speed, where there is a high number of substitutes for a given word? For instance, I want to use the thesaurus to store common misspellings, expecting 4-10 variations of a word:

        <expansion>
            <sub>administration</sub>
            <sub>administraton</sub>
            <sub>aministraton</sub>
        </expansion>

    When you run a full-text search, how does performance degrade with that number of variations? For instance, I assume it has to perform a separate full-text search for each one and OR the results together? Also, does having, say, 20-30K entries in the thesaurus XML file impact performance?

    Read the article

  • Database caching on a shared host

    - by tau
    Anyone have any ideas how to increase MySQL performance on a shared host? My question has less to do with overall database performance and more to do with simply retrieving user-submitted data. Currently my database creates caches at timed intervals, and the PHP selectively accesses the static files it needs. This has given me a noticeable performance boost, but I am worried about a time when I have so much data that reading big files in PHP will actually be slower. I am just looking for ideas for shared-hosting setups; I am not going to get my own server anytime soon. Thanks!

    Read the article

  • Sending SMTP e-mail at a high rate in .NET

    - by Martin Liversage
    I have a .NET service that processes a queue on a background thread and, from the items in the queue, sends out a large number of small e-mail messages at a very high rate (say 100 messages per second, if that is even possible). Currently I use SmtpClient.Send(), but I'm afraid it may hamper performance: each call to Send() goes through a full cycle of opening the socket, performing the SMTP conversation (HELO, MAIL FROM, RCPT TO, DATA) and closing the socket. In pseudocode:

        for each message {
            open socket
            send HELO
            send MAIL FROM
            send RCPT TO
            send DATA
            close socket
        }

    I would think that the following would be more optimal:

        open socket
        send HELO
        for each message {
            send MAIL FROM
            send RCPT TO
            send DATA
        }
        send QUIT
        close socket

    Should I be concerned about the performance of SmtpClient.Send() when sending e-mail at a high rate? What are my options for optimizing performance?

    Read the article

  • Slowness of Netbeans Platform Apps - how to mitigate?

    - by user559298
    Hi, we are developing a commercial application (pretty complex) in Java using the NetBeans IDE. We have two options in NetBeans for creating it: 1. a plain Java desktop app, or 2. a NetBeans Platform app. Our requirements are that application startup and response times be very fast and that the application be modular. We did a proof of technology, creating apps using both approaches, and found that NetBeans Platform apps are very slow during startup and during screen navigation compared to pure Swing-based desktop apps. We tried to implement the suggestions at http://wiki.netbeans.org/Category:Performance:FAQ and in other blogs and forums to improve speed, but were not successful. We feel that for a complex desktop app a NetBeans Platform app would be better suited, but it is not meeting our performance requirements (startup and response times, memory footprint, CPU usage guidelines, etc.). Can anyone guide us on how to mitigate our problem and improve the performance of NetBeans Platform apps? Thanks in advance for your help. -bhan
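
    For reference, platform apps read their JVM switches from the generated launcher config, and that is usually where startup and memory tuning begins; an illustrative (not prescriptive) line from etc/<appname>.conf, with example values only:

        # -J passes the flag through to the JVM; values here are examples, not recommendations
        default_options="--branding myapp -J-Xms256m -J-Xmx512m -J-XX:+UseConcMarkSweepGC"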

    Read the article

  • C program for this question

    - by sashi
    Suppose that a disk drive has 5000 cylinders, numbered 0 to 4999. The drive is currently serving a request at cylinder 143, and the previous request was at cylinder 125. The queue of pending requests, in the given order, is 86, 1470, 913, 1774, 948, 1509, 1022, 1750, 130. Write a C program to find the total distance (in cylinders) that the disk arm moves to satisfy all the pending requests from the current head position, using the SSTF scheduling algorithm. Seek time is the time for the disk arm to move the head to the cylinder containing the desired sector; the SSTF algorithm selects the request with the minimum seek time from the current head position.
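
    The exercise asks for C, but the SSTF logic it describes (repeatedly service the pending cylinder closest to the current head position, summing the moves) fits in a few lines; a runnable shell sketch using the numbers above:

        #!/bin/bash
        # SSTF: always service the closest pending cylinder next, accumulating head movement
        head=143
        queue=(86 1470 913 1774 948 1509 1022 1750 130)
        total=0
        while (( ${#queue[@]} > 0 )); do
            best=-1; bestdist=0
            for i in "${!queue[@]}"; do
                d=$(( queue[i] > head ? queue[i] - head : head - queue[i] ))
                if (( best < 0 || d < bestdist )); then best=$i; bestdist=$d; fi
            done
            total=$(( total + bestdist ))
            head=${queue[best]}
            unset "queue[best]"
            queue=("${queue[@]}")      # re-pack array indices after removal
        done
        echo "Total head movement: $total cylinders"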

    Read the article

  • Succinct code over verbose?

    - by WeNeedAnswers
    With C# becoming more and more declarative, and becoming the new Swiss Army knife of programming, is it better to be succinct, thus reducing the actual code base, or long-winded and verbose? Is there a performance cost to succinctness, or does it improve performance because you are putting more of your code in the hands of the compiler (LINQ, when used correctly, being an example)? I know that verbosity should win out over succinctness where code would become less readable, but is that a good idea when your style could affect performance?

    Read the article
