Search Results

Search found 20904 results on 837 pages for 'disk performance'.

Page 57/837 | < Previous Page | 53 54 55 56 57 58 59 60 61 62 63 64  | Next Page >

  • How do I mount a sparse disk image permanently?

    - by Mike
    On Mac OS X 10.6.7, when I mount a sparse disk image (either by double-clicking it or using hdid from the command line), the image appears on my desktop but needs to be re-mounted every time I log in. I'd like to set up the equivalent of an /etc/fstab entry which will mount the image when the system boots and make it permanent, so I don't have to worry about whether my symbolic links will resolve or not. Is this more trouble than it's worth on a Mac? I noticed that there is no /etc/fstab, and /etc/fstab.hd contains a dire warning: IGNORE THIS FILE. This file does nothing, contains no useful data, and might go away in future releases. Do not depend on this file or its contents. I tried sudo hdid -notremovable <image>, which seemed like half of what I wanted (according to man hdid), but it failed with an error: hdid: attach failed - no mountable file systems.
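
    One possible direction (a hedged sketch, not a tested recipe: the label, mount point and image path below are placeholders) is a small launchd job that runs hdiutil attach at startup. Save something like the following as /Library/LaunchDaemons/local.mount-sparseimage.plist:

        <?xml version="1.0" encoding="UTF-8"?>
        <plist version="1.0">
        <dict>
            <!-- arbitrary label for this job -->
            <key>Label</key><string>local.mount-sparseimage</string>
            <key>ProgramArguments</key>
            <array>
                <string>/usr/bin/hdiutil</string>
                <string>attach</string>
                <string>-mountpoint</string><string>/Volumes/Data</string>
                <string>/Users/mike/Images/data.sparseimage</string>
            </array>
            <key>RunAtLoad</key><true/>
        </dict>
        </plist>

    Then load it once with sudo launchctl load /Library/LaunchDaemons/local.mount-sparseimage.plist. Because it is a LaunchDaemon it runs at boot rather than at login, which is the closest analogue to an fstab entry on this platform.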

    Read the article

  • Can't read from the source file or disk

    - by Wanna coffee
    I have two WD external hard disks, each with a capacity of 1 TB. I'm trying to copy a 250 GB SAP file with a .vmdk extension from one disk to the other, but partway through the copy it shows me this error message. Both disks are formatted as NTFS, yet I still get the error. Is the problem with the OS, the hard disk, or the data being copied? What might be wrong? Please give me your suggestions and recommendations.

    Read the article

  • Macbook Pro - Disk Locked HD needs repair, can't reinstall OSX

    - by Rob
    I basically have the same problem. A friend's HD was acting badly. I ran Disk Utility's repair many times; it says the disk needs repair but it won't repair it. I reformatted and installed 10.8, but that hasn't fixed it. I've tried partitioning it and that won't work. Now I'm trying to get 10.8 to reinstall and it says the disk is locked. Your fix requires Terminal, is that right? How do I open Terminal to run your fix? Please help if you can. DiskWarrior couldn't fix it. I think the disk is bad. What do you think?

    Read the article

  • Hard Disk recovery

    - by Shaihi
    I have 3 disks of the same type, model, and year of production. All the disks were used as part of an IBM server solution. My problem is that all 3 disks suffered the same malfunction at exactly the same time and are now non-functional. I went to two different data-recovery laboratories and got the same answer: to recover the data they need another identical disk from which they can take spare parts. Can my case really require something that specific? Anyway, I am not sure if this question belongs on this forum, but I am looking to buy the following disk: IBM ESERVER XSERIES, IBM P/N 24P3707, IBM FRU 24P3708, 146.8 GB USCSI 10K RPM, PART NUMBER 9V2005-027. I already bought a disk with the same part number, but the labs said that apparently I need a disk that was manufactured in the same factory, meaning that all the numbers have to match exactly. If anybody knows where I can purchase such a disk (the information on the lost disks is really important to me), please tell me where.

    Read the article

  • Bootcamp: setup of Windows 7 (64 bit) hanging at "disk.sys"

    - by Skade
    I am having trouble getting Windows 7 Professional (64-bit) to install on a MacBook Pro (late 2010 model). The installer hangs when loading the setup from disk. When restarting, I get an option to boot Windows from disk. The installer then starts loading files from disk and suddenly hangs. Using "Safe Mode" (from the advanced menu), it tells me that the installer hangs when loading "disk.sys". The installation is being made on a fresh Boot Camp partition, and the disk uses GPT. Has anyone seen this before and maybe found a solution?

    Read the article

  • What are my options for a disk with what seems to be a corrupted filesystem?

    - by CT
    I have a friend with an old Dell that will not boot into Windows. It has an IDE drive. It spins up. I have an IDE-to-USB adapter and have attached the drive via that device to a working laptop, but the drive does not mount. If I go into Disk Management I can see the drive, but it will not initialize and says "Drive not ready." I've also booted into a Linux live CD to see if the drive mounts; it does not. I am just trying to recover some pictures from the drive. The data is not important enough to send to a professional; the issue is more a curiosity about how to recover data if and when these situations occur in the future.
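
    A common first move in this situation, sketched below with placeholder device names (this assumes the drive at least enumerates; it is not the asker's actual workflow), is to pull whatever the disk will still give up into an image with GNU ddrescue and then carve the photos out of that image with photorec from the testdisk package:

        sudo apt-get install gddrescue testdisk
        # /dev/sdb is assumed to be the failing drive on the USB adapter; check dmesg first
        sudo ddrescue -d -r3 /dev/sdb dell-disk.img dell-disk.log
        # photorec can carve JPEGs and other files straight out of the image
        photorec dell-disk.img

    Working from the image rather than the disk itself means the failing hardware only has to survive one full read.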

    Read the article

  • how to reduce size (disk space) of windows 8?

    - by humanityANDpeace
    This question is about what I can do to reduce the amount of disk space that Windows 8 uses. Background: at present, with only one program installed (MS Access 2007), about 15 GB of my hard disk space is used, and I have little room to spare (it's a 17 GB partition on an SSD). I would like solutions along these lines: removing files that are not really needed (drivers for hardware the system doesn't actually have), removing help files that are not really needed (documentation), removing pagefile.sys (assuming I have 4 GB of RAM and no real need for swapping), and removing hiberfil.sys (used for hibernate and sleep; I do use that, though removing it would regain about 4 GB of space). Ideally I would delete the files I am least likely to need, though I have no good idea where to start. Since my hardware setup will not change, I would be willing to delete all the drivers that Windows 8 ships for hardware I do not have. The question is about ways to reduce the space that Windows 8 uses.
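
    As a hedged illustration of two of the items above (built-in commands, run from an elevated command prompt; whether they are appropriate depends on how the machine is used):

        REM Deletes hiberfil.sys, freeing roughly the size of installed RAM
        powercfg /hibernate off
        REM Trims superseded components from the WinSxS store on Windows 8
        Dism /Online /Cleanup-Image /StartComponentCleanup

    Disabling hibernation obviously trades away the hibernate/sleep-to-disk feature mentioned above.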

    Read the article

  • How to create a RAM Drive (RAM Disk) in Windows 2008 R2?

    - by Mark
    There are lots of tools for creating RAM drives, but none of them seem to work on Windows 2008 R2. Does anyone know if this is possible and, if so, how? Does anyone know of a tool that does work? I've tried the Gavotte RAM disk; it doesn't work. When I try to install it, it just says "Failed", and I don't see log files anywhere. I've tried a couple of others (I forget the names) to no avail. Any ideas? Thanks
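
    One driver that is frequently reported to work on Server 2008 R2 is the ImDisk Virtual Disk Driver; a minimal sketch, assuming ImDisk is installed and with the size and drive letter as placeholders, run from an elevated command prompt:

        REM Create a 512 MB RAM disk mounted as R: and format it as NTFS
        imdisk -a -s 512M -m R: -p "/fs:ntfs /q /y"
        REM Detach it again when finished
        imdisk -d -m R: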

    Read the article

  • Black screen on login, can get thru decrypt disk and access command line but no GUI

    - by t3lf3c
    Running a fresh 12.04 64-bit alternate install, with disk encryption, on a new Lenovo laptop. The install didn't connect to the network and install modules, even though I had the network cable plugged in and don't have any whacky proxy settings; I had to manually define sources and install ubuntu-desktop after the initial installation, which seemed a bit weird (the ISO matched its MD5 sum though). I unplug the network cable, otherwise I get a black screen that I can do nothing with. So I turn the laptop on, type the password at the Ubuntu disk-decryption prompt, get a "set up successfully" message, then "Waiting for network configuration ...", then "Waiting for up to 60 more seconds for network configuration". At this stage (a) if I wait for it, I get a black screen that I can do nothing with; (b) if I interrupt the process by pressing Escape, I break through to the command line. From the command line, I can go ahead and log in, then plug my network cable in to run apt-get commands. As a precaution I do some housekeeping, which takes a few minutes to run: sudo apt-get update, sudo apt-get upgrade. Running startx to get to the GUI gives: Fatal server error: no screens found. The .Xauthority file is being created in my home directory but it's empty. I review my order and note the system graphics: Intel HD Graphics (WWAN or mSATA capable). So it's weird that I can't get to GNOME; it looks like the drivers aren't working. Is there a way of getting Intel drivers from the command line? Or do you have any other suggestions on what to try next?
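
    For reference, a hedged sketch of reinstalling the stock Intel X.Org driver stack from that console (package names are the standard Ubuntu 12.04 ones; this is a guess at the fix, not a confirmed one):

        sudo apt-get update
        sudo apt-get install --reinstall xserver-xorg xserver-xorg-video-intel ubuntu-desktop
        # Check what X actually complained about before retrying startx
        less /var/log/Xorg.0.log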

    Read the article

  • cPanel's Web Disk - Security issues?

    - by Tim Sparks
    I'm thinking of using Web Disk (built into the later versions of cPanel) to allow a Windows or Mac computer to map a network drive that is actually a folder on our website (above the public_html folder). We currently use an antiquated local server to store information, but it is only accessible from one location, and we would like to be able to access it from other locations as well. I understand that folders above public_html are not accessible via HTTP, but I want to know how secure access to these folders is when they are mapped as a network drive. There is potentially sensitive information, and we need to decide whether it is appropriate to store it here. The map-network-drive option seems to work well, as it behaves as if the files are on your own computer (i.e. you can open and save files without then having to upload them, as that happens automatically). We have used Dropbox for similar purposes, but space is an issue with them, as is accountability, so we haven't used it for sensitive information. Are there any notable security concerns with using Web Disk as a secure file server?
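
    For context, Web Disk is just WebDAV, so the main transport-security question is whether the drive is mapped over SSL. A hedged sketch from a Windows client (hostname, drive letter and user are placeholders, and 2078 is assumed to be the server's SSL Web Disk port; verify the port in cPanel before relying on it):

        REM Map Web Disk over HTTPS so credentials and file contents are encrypted in transit
        net use W: https://example.com:2078 /user:cpaneluser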

    Read the article

  • VirtualBox : increase hard disk size of the virtual machine

    - by wim
    I have run out of space on my WinXP virtual machine, which I only gave 10 GB when I created it. Is there an easy way to increase it to, say, 20 GB? I can't see any obvious option in the VirtualBox settings.

    edit: the suggestion below gives this error:

        wim@wim-ubuntu:/media/data/winxp_vm$ VBoxManage modifyhd wim.vdi --resize 20000
        VBoxManage: error: Cannot register the hard disk '/media/data/winxp_vm/wim.vdi' {46284957-2c09-4e70-8a49-bfbe0f7f681d} because a hard disk '/home/wim/VirtualBox VMs/winxp_vm/wim.vdi' with UUID {46284957-2c09-4e70-8a49-bfbe0f7f681d} already exists
        VBoxManage: error: Details: code NS_ERROR_INVALID_ARG (0x80070057), component VirtualBox, interface IVirtualBox, callee nsISupports
        Context: "OpenMedium(Bstr(pszFilenameOrUuid).raw(), enmDevType, AccessMode_ReadWrite, fForceNewUuidOnOpen, pMedium.asOutParam())" at line 210 of file VBoxManageDisk.cpp

    edit2: removing the .vdi from VirtualBox before calling the VBoxManage command, then adding it back in, was successful. But now I can't boot the virtual machine; I get a worrying screen that says FATAL: Could not read from the boot medium! System halted.

    edit3: the .vdi must be reattached to the VM after the VBoxManage command. Further, the partition needs to be resized from within Windows, because otherwise the new space is simply unallocated. I was able to resize the partition easily using a bit of freeware called EASEUS Partition Master 9.1.0 Home Edition.
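
    Putting the edits together, the working sequence looks roughly like this (a sketch; the VM name, storage controller name, paths and target size are placeholders that must match the VM's actual settings):

        # Release the disk from VirtualBox's Media Manager so the UUID conflict goes away
        VBoxManage closemedium disk /media/data/winxp_vm/wim.vdi
        # Grow the virtual disk to ~20 GB (size is in MB)
        VBoxManage modifyhd /media/data/winxp_vm/wim.vdi --resize 20480
        # Re-attach it to the VM so it boots again
        VBoxManage storageattach "winxp_vm" --storagectl "IDE Controller" \
            --port 0 --device 0 --type hdd --medium /media/data/winxp_vm/wim.vdi
        # The extra space still has to be claimed by a partition tool inside Windows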

    Read the article

  • Disk failure is imminent Laptop Hard drive ~5 months old

    - by Drew
    There's another post about this, but I don't have enough 'points' to say anything on that thread, so I'll start my own ... with more details! My computer still boots, but the GNOME desktop reports SMART problems with the HDD. This has been confirmed in the BIOS, as it now makes me press F1 to boot. I tried running the HDD check in the BIOS, but it fails to run the tests; as in, running the tests failed, not that the tests themselves indicated a failed drive. Here is what Disk Utility reports as failing:

        Reallocated Sector Count      FAILING   Normalized: 132  Worst: 132  Threshold: 140  Value: 544
        Current Pending Sector Count  WARNING   Normalized: 200  Worst: 1    Threshold: 0    Value: 2

    Is this related to the insane number of DRDY errors on the drive?

        kernel: [51345.233069] ata1.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0
        kernel: [51345.233076] ata1.00: BMDMA stat 0x4
        kernel: [51345.233081] ata1.00: failed command: READ DMA
        kernel: [51345.233090] ata1.00: cmd c8/00:00:00:8b:4a/00:00:00:00:00/e0 tag 0 dma 131072 in
        kernel: [51345.233092]          res 51/40:00:a8:8b:4a/10:04:00:00:00/e0 Emask 0x9 (media error)
        kernel: [51345.233097] ata1.00: status: { DRDY ERR }
        kernel: [51345.233103] ata1.00: error: { UNC }
        kernel: [51345.291929] ata1.00: configured for UDMA/100
        kernel: [51345.291944] ata1: EH complete
        kernel: [51347.682748] ata1.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0
        kernel: [51347.682754] ata1.00: BMDMA stat 0x4
        kernel: [51347.682759] ata1.00: failed command: READ DMA
        kernel: [51347.682768] ata1.00: cmd c8/00:00:00:8b:4a/00:00:00:00:00/e0 tag 0 dma 131072 in
        kernel: [51347.682770]          res 51/40:00:a8:8b:4a/10:04:00:00:00/e0 Emask 0x9 (media error)
        kernel: [51347.682774] ata1.00: status: { DRDY ERR }
        kernel: [51347.682777] ata1.00: error: { UNC }

    Did Ubuntu 10.10 and/or ext4 eat my work laptop? What steps can I take to back up my important information, which is probably the home folder? Please include steps to recover my data onto the new hard drive as well. It does me little good to have backups I can't use.
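
    As a hedged illustration of the backup step while the machine still boots (the external drive's device name and mount point below are placeholders):

        sudo mkdir -p /media/backup
        sudo mount /dev/sdb1 /media/backup          # assumed external USB drive
        # -a preserves ownership and permissions so the copy is usable after restoring
        sudo rsync -a --progress /home/ /media/backup/home/

    If the drive starts refusing reads, switching to ddrescue and imaging the whole disk is the usual next step.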

    Read the article

  • Lubuntu Full Install on USB Drive with Full Disk Encryption and Grub2

    - by vivi
    I apologise for the wall of text, but I want you to scrutinise my thought process to make sure there are no mistakes and no other way around it: I wish to have a full install of Lubuntu with full disk encryption on one of my USB drives. The laptop I would be booting it from also has Windows 7, and I want to keep that OS. From what I've read, I must place GRUB2 on the USB drive so that: if the USB drive is plugged in, the laptop starts Lubuntu (with USB HDD in the BIOS boot options); if it isn't plugged in, the laptop starts Windows 7 normally. That's exactly what I want it to do. But if I install from the normal .iso: clicking "Install Lubuntu alongside them" would install it onto my normal HD; clicking "Erase disk and install Lubuntu" would delete all the stuff I have on my HD and install Lubuntu on it; clicking "Something else" would allow me to choose to install Lubuntu and GRUB2 onto the USB drive, but would not provide encryption. So the normal .iso won't work for what I want. Then I found the alternate .iso and this tutorial: it allows me to install Lubuntu with all the options I want and gives me the option to choose where to place GRUB2! Hopefully there are no flaws in my train of thought. If there aren't, I have a few questions regarding that tutorial. The author says that in his case choosing "Yes" to "install GRUB to your MBR" installed GRUB to the USB drive's MBR. I can't rely on "in his case"; I need to be sure that's what it will do, so that it doesn't mess up the Windows boot loader. Choosing "No" would open a dialog that lets me choose where to install GRUB, but unfortunately I don't understand which option I should type in the box to point it at the USB drive. Would removing my laptop's hard drive ensure that GRUB is installed onto the USB drive if I picked the first option, "Yes"? I apologise once again for the wall of text and appreciate any help you guys can offer me.
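
    For what it's worth, a hedged sketch of pointing GRUB2 at the USB stick manually after (or instead of) the installer's prompt; /dev/sdX is a placeholder for the USB drive, so double-check with lsblk first so the internal disk's MBR is left untouched:

        # Identify which device node is the USB stick before touching anything
        lsblk -o NAME,SIZE,MODEL
        # Install GRUB2 to the USB stick's MBR, keeping its files on the stick itself
        sudo grub-install --boot-directory=/media/usbroot/boot /dev/sdX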

    Read the article

  • Not enough disk space '/' in AWS instance

    - by Sumant
    I am running an Ubuntu 11.04 instance for my web server on the AWS cloud, and now there is no disk space left in the / partition of my server. df -ah says this:

        Filesystem    Size  Used Avail Use% Mounted on
        /dev/xvda1    7.9G  7.8G   97M  99% /
        proc             0     0     0   -  /proc
        none             0     0     0   -  /sys
        fusectl          0     0     0   -  /sys/fs/fuse/connections
        none             0     0     0   -  /sys/kernel/debug
        none             0     0     0   -  /sys/kernel/security
        none          3.7G  112K  3.7G   1% /dev
        none             0     0     0   -  /dev/pts
        none          3.7G     0  3.7G   0% /dev/shm
        none          3.7G   80K  3.7G   1% /var/run
        none          3.7G     0  3.7G   0% /var/lock
        /dev/xvdb     414G   16G  377G   4% /mnt

    I have already tried these things to get some extra space on the / partition: cleaned up all log files for Apache, removed all unnecessary files from the server, and cleaned up the home directory. But I still don't have enough space. The instance type is m1.large with an 8 GB EBS root volume. I do have plenty of disk space on /dev/xvdb. Is there a way I can allocate some disk space to / from /dev/xvdb, or any other way? Please suggest a possible solution. Is it possible to use the same /dev/xvdb partition with another instance?
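
    One common workaround, sketched below with placeholder paths (and with the caveat that /dev/xvdb mounted on /mnt is normally ephemeral instance storage, so only reproducible data such as caches and logs belongs there), is to relocate a space-hungry directory onto the larger volume and leave a symlink behind:

        # First see what is actually filling the 8 GB root volume
        sudo du -xh --max-depth=1 / | sort -h
        # Example: move Apache logs onto the big /mnt volume
        sudo service apache2 stop
        sudo mv /var/log/apache2 /mnt/apache2-logs
        sudo ln -s /mnt/apache2-logs /var/log/apache2
        sudo service apache2 start

    Growing the EBS root volume itself (stop the instance, snapshot the volume, create a larger volume from the snapshot, and swap it in) is the more permanent fix.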

    Read the article

  • No space left on disk

    - by Ned
    Folks, I'm trying to copy/move files to an external 1 TB hard drive with about 50 GB of space remaining, but I receive a "no space left on disk" error when I try. I've moved files off and retried, but still get the same message. Disk Usage Analyzer, Properties, and the freeware TreeSize all report available hard drive space of about 50 GB. I've tried df -i (50 GB available) and df -k, with the latter reporting only 1% inode usage. I've also been able to save files from Firefox to the drive, yet I can't even rename files without getting the message. Yesterday, in the midst of trying to figure this out, I tried to move 4 files to the drive and got the message; today, I found them on the drive. What's up with that? (That's the only time that has happened, to my knowledge.) Is this an Ubuntu problem, or is my hard drive about to fail because of something like a controller problem? Any thoughts would be appreciated.

    Read the article

  • Which hard disk drive is which?

    - by djeikyb
    I want to know which hard disk drive corresponds to which device path. It's trivial to match the hard disk stats (brand, size) with the dev path, but I want more: I want to know which drive is which inside my case. What's a good way to go about getting this info? Rules: I am a lazy bum. I don't want to tear apart my server to remove all the drives and then add them back one by one. A reboot is acceptable. The drives are inconveniently scrunched together in the case, so all label information is hidden. The case can be opened. Most disks are SATA, so theoretically hot-swappable; unplugging cables is fair game. Bonus: for a CLI-only solution. I'll award the answer to the best/easiest GUI or CLI answer, and give a bounty to the next-best answer of the other kind. Or maybe the other way around, because the bounty is worth more points.
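
    A hedged sketch of the CLI route (device names are placeholders): read each drive's model and serial number from software, then either match the serial to the label once the case is open, or make a specific drive's activity LED blink so it can be spotted:

        # Model and serial for every block device, without opening the case
        ls -l /dev/disk/by-id/ | grep -v part
        sudo hdparm -I /dev/sda | grep -i 'serial\|model'
        # Or generate sustained reads on one drive and watch which activity LED flashes
        sudo dd if=/dev/sdb of=/dev/null bs=1M count=4096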

    Read the article

  • SQL SERVER – Introduction to SQL Server 2014 In-Memory OLTP

    - by Pinal Dave
    In SQL Server 2014 Microsoft has introduced a new database engine component called In-Memory OLTP, aka project "Hekaton", which is fully integrated into the SQL Server Database Engine. It is optimized for OLTP workloads accessing memory-resident data. In-Memory OLTP helps us create memory-optimized tables, which in turn offer significant performance improvements for a typical OLTP workload. The main objective of a memory-optimized table is to ensure that highly transactional tables can live in memory and remain in memory without losing even a single record. The most significant part is that it still supports the majority of our Transact-SQL statements, and Transact-SQL stored procedures can be compiled to machine code for further performance improvements on memory-optimized tables. This engine is designed to ensure higher concurrency and minimal blocking: In-Memory OLTP alleviates the issue of locking by using a new type of multi-version optimistic concurrency control, and it substantially reduces waiting for log writes by generating far less log data and needing fewer log writes.

    Points to remember: memory-optimized tables are tables using the new data structures and keywords added as part of In-Memory OLTP. Disk-based tables are the normal tables we have created in SQL Server since its inception; these use fixed-size 8 KB pages that need to be read from and written to disk as a unit. Natively compiled stored procedures are a new object type, supported by the In-Memory OLTP engine, that is converted into machine code to further improve data access performance for memory-optimized tables; they can only reference memory-optimized tables and can't be used to reference any disk-based table. Interpreted Transact-SQL stored procedures are what SQL Server has always used. Cross-container transactions are transactions that reference both memory-optimized tables and disk-based tables. Interop refers to interpreted Transact-SQL that references memory-optimized tables.

    Using In-Memory OLTP: the In-Memory OLTP engine has been available as part of SQL Server 2014 since the June 2013 CTPs, and its installation is part of the SQL Server setup application. The In-Memory OLTP components can only be installed with a 64-bit edition of SQL Server 2014; they are not available with the 32-bit editions.

    Creating databases: any database that will store memory-optimized tables must have a MEMORY_OPTIMIZED_DATA filegroup. This filegroup is specifically designed to store the checkpoint files needed by SQL Server to recover the memory-optimized tables, and although the syntax for creating the filegroup is almost the same as for creating a regular filestream filegroup, it must also specify the option CONTAINS MEMORY_OPTIMIZED_DATA. Here is an example of a CREATE DATABASE statement for a database that can support memory-optimized tables:

        CREATE DATABASE InMemoryDB
        ON PRIMARY (NAME = [InMemoryDB_data], FILENAME = 'D:\data\InMemoryDB_data.mdf', SIZE = 500MB),
        FILEGROUP [SampleDB_mod_fg] CONTAINS MEMORY_OPTIMIZED_DATA
            (NAME = [InMemoryDB_mod_dir],  FILENAME = 'S:\data\InMemoryDB_mod_dir'),
            (NAME = [InMemoryDB_mod_dir2], FILENAME = 'R:\data\InMemoryDB_mod_dir')
        LOG ON (NAME = [SampleDB_log], FILENAME = 'L:\log\InMemoryDB_log.ldf', SIZE = 500MB)
        COLLATE Latin1_General_100_BIN2;

    The example code above creates files on three different drives (D:, S: and R:) for the data files and the in-memory storage, so if you would like to run this code, change the drive and folder locations to suit your machine. Also notice that a binary collation was specified as Windows (non-SQL); BIN2 collation is the only collation supported at this point for any indexes on memory-optimized tables. It is also possible to add a MEMORY_OPTIMIZED_DATA filegroup to an existing database; use the command below to achieve the same:

        ALTER DATABASE AdventureWorks2012 ADD FILEGROUP hekaton_mod CONTAINS MEMORY_OPTIMIZED_DATA;
        GO
        ALTER DATABASE AdventureWorks2012 ADD FILE (NAME = 'hekaton_mod', FILENAME = 'S:\data\hekaton_mod') TO FILEGROUP hekaton_mod;
        GO

    Creating tables: there is no major syntactical difference between creating a disk-based table and a memory-optimized table, but there are a few restrictions and a few new essential extensions. Essentially, any memory-optimized table should use the MEMORY_OPTIMIZED = ON clause, as shown in the CREATE TABLE example below. A memory-optimized table should always be defined with a DURABILITY value, which can be either SCHEMA_AND_DATA or SCHEMA_ONLY, the former being the default. A memory-optimized table defined with DURABILITY = SCHEMA_ONLY will not persist its data to disk, which means data durability is compromised, whereas DURABILITY = SCHEMA_AND_DATA ensures that the data is persisted along with the schema. A memory-optimized table created with DURABILITY = SCHEMA_AND_DATA must always have an index, and this can be achieved by declaring a PRIMARY KEY constraint at the time the table is created. The following example shows a PRIMARY KEY index created as a HASH index, for which a bucket count must also be specified:

        CREATE TABLE Mem_Table
        (
            [Name] VARCHAR(32) NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 100000),
            [City] VARCHAR(32) NULL,
            [State_Province] VARCHAR(32) NULL,
            [LastModified] DATETIME NOT NULL
        ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

    As you can see in the query above, we used the clause MEMORY_OPTIMIZED = ON to make sure it is treated as a memory-optimized table and not just a normal table, and the DURABILITY clause SCHEMA_AND_DATA, which means it will persist data along with the metadata; you can also notice this table has a PRIMARY KEY declared up front, which is a mandatory clause for durable memory-optimized tables. We will talk more about HASH indexes and BUCKET_COUNT in later articles on this topic, which will focus on row and index storage for memory-optimized tables, so stay tuned for that as well. Now that we have covered the basics of memory-optimized tables and the key things to remember while using them, let's explore some examples to understand the performance gains from memory-optimized tables. I will be using the database created earlier in this article, i.e. InMemoryDB, in the demo exercise below.

        USE InMemoryDB
        GO
        -- Create a disk-based table
        CREATE TABLE dbo.Disktable
        (
            Id INT IDENTITY,
            Name CHAR(40)
        )
        GO
        CREATE NONCLUSTERED INDEX IX_ID ON dbo.Disktable (Id)
        GO
        -- Create a memory-optimized table with a similar structure and DURABILITY = SCHEMA_AND_DATA
        CREATE TABLE dbo.Memorytable_durable
        (
            Id INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
            Name CHAR(40)
        ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
        GO
        -- Create another memory-optimized table with a similar structure but DURABILITY = SCHEMA_ONLY
        CREATE TABLE dbo.Memorytable_nondurable
        (
            Id INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
            Name CHAR(40)
        ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY)
        GO
        -- Insert 100000 records into dbo.Disktable and observe the time taken
        DECLARE @i_t BIGINT
        SET @i_t = 1
        WHILE @i_t <= 100000
        BEGIN
            INSERT INTO dbo.Disktable(Name) VALUES('sachin' + CONVERT(VARCHAR, @i_t))
            SET @i_t += 1
        END
        GO
        -- Do the same inserts for the memory table dbo.Memorytable_durable and observe the time taken
        DECLARE @i_t BIGINT
        SET @i_t = 1
        WHILE @i_t <= 100000
        BEGIN
            INSERT INTO dbo.Memorytable_durable VALUES(@i_t, 'sachin' + CONVERT(VARCHAR, @i_t))
            SET @i_t += 1
        END
        GO
        -- Finally, do the same inserts for the memory table dbo.Memorytable_nondurable and observe the time taken
        DECLARE @i_t BIGINT
        SET @i_t = 1
        WHILE @i_t <= 100000
        BEGIN
            INSERT INTO dbo.Memorytable_nondurable VALUES(@i_t, 'sachin' + CONVERT(VARCHAR, @i_t))
            SET @i_t += 1
        END
        GO

    The three inserts above took 1.20 minutes, 54 seconds, and 2 seconds respectively to insert 100000 records on my machine with 8 GB of RAM. This proves the point that memory-optimized tables can definitely help businesses achieve better performance for their highly transactional tables, and a memory-optimized table with durability SCHEMA_ONLY is even faster, as it does not bother persisting its data to disk, which makes it supremely fast. Koenig Solutions is one of the few organizations which offer IT training on SQL Server 2014 and all its updates. Now I leave the decision on using memory-optimized tables to you; I hope you like this article and that it helped you understand the fundamentals of In-Memory OLTP. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Koenig

    Read the article

  • What performance degradation to expect with Nginx over raw Gunicorn+Gevent?

    - by bouke
    I'm trying to get a very high-performance webserver setup for handling long polling, websockets, etc. I have a VM running (Rackspace) with 1 GB RAM / 4 cores. I've set up a very simple gunicorn 'hello world' application with (async) gevent workers. In front of gunicorn, I put nginx with a simple proxy to Gunicorn. Using ab, Gunicorn spits out 7700 requests/sec, whereas nginx only does 5000 requests/sec. Is such a performance degradation expected?

    Hello world:

        #!/usr/bin/env python
        def application(environ, start_response):
            start_response("200 OK", [("Content-type", "text/plain")])
            return [ "Hello World!" ]

    Gunicorn:

        gunicorn -w8 -k gevent --keep-alive 60 application:application

    Nginx (stripped):

        user www-data;
        worker_processes 4;
        pid /var/run/nginx.pid;
        events { worker_connections 768; }
        http {
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 65;
            types_hash_max_size 2048;
            upstream app_server { server 127.0.0.1:8000 fail_timeout=0; }
            server {
                listen 8080 default;
                keepalive_timeout 5;
                root /home/app/app/static;
                location / {
                    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                    proxy_set_header Host $http_host;
                    proxy_redirect off;
                    proxy_pass http://app_server;
                }
            }
        }

    Benchmark (results: nginx TCP, nginx UNIX, gunicorn):

        ab -c 32 -n 12000 -k http://localhost:[8000|8080]/

    Running gunicorn over a unix socket gives somewhat higher throughput (5500 r/s), but it still doesn't match raw gunicorn's performance.
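
    Part of the gap is simply the extra proxy hop, but one hedged tweak worth trying (standard nginx directives, though they need nginx 1.1.4 or newer; the pool size and socket path are placeholders) is to keep persistent connections from nginx to the Gunicorn upstream instead of opening a new one per request:

        upstream app_server {
            server unix:/tmp/gunicorn.sock fail_timeout=0;
            keepalive 32;                  # idle connections kept open to the upstream
        }
        location / {
            proxy_http_version 1.1;        # required for upstream keepalive
            proxy_set_header Connection "";
            proxy_pass http://app_server;
        }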

    Read the article

  • SQL Server 2000, large transaction log, almost empty, performance issue?

    - by Mafu Josh
    This is for a company whose database I have been helping to troubleshoot. It's SQL Server 2000, and the database is about 120 GB. Something caused the transaction log to grow much larger than normal, to over 100 GB: a hung transaction that didn't commit or roll back for a few days. That has been resolved, and the log now stays around 1% full or less thanks to its hourly transaction log backups. It is my understanding that a growing transaction log file can cause performance issues, but what I am a little paranoid about is the size. Although it is mostly empty, might it be having a negative effect on performance? I haven't found any documentation that suggests this is true. I did find this link: http://www.bigresource.com/MS_SQL-Large-Transaction-Log-dramatically-Slows-down-processing-any-idea-why--2ahzP5wK.html but in that post I can't tell whether their log was full or empty, and there are no replies to it. So I am guessing it is not a problem; does anyone know for sure?
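
    If the decision ends up being simply to reclaim the space, a hedged SQL Server 2000 sketch (the database name, logical log file name and target size are placeholders; sp_helpfile shows the real name):

        USE MyDatabase
        GO
        EXEC sp_helpfile                         -- confirm the logical name of the log file
        GO
        BACKUP LOG MyDatabase TO DISK = 'E:\backup\MyDatabase_log.bak'
        GO
        DBCC SHRINKFILE (MyDatabase_Log, 10240)  -- target size in MB (about 10 GB here)
        GO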

    Read the article

  • Changing time intervals for vSphere performance monitoring, and is there a better way?

    - by user991710
    I have a set of experiments running on a cluster node which is running ESXi 5.1, and I want to monitor the resource consumption on the node itself. Specifically, I am currently running experiments on a subset of the VMs on the ESXi host and wish to monitor resource consumption on those specific VMs. Right now, since I'm using only a single ESXi host, I am using vSphere to access it and the performance reports. Ideally, I would like to get these reports for different time intervals. I can already get the charts for a time interval of 1h, but these are rather long-running experiments and something like 2h, 3h,... would be preferable. However, I cannot seem to change the time interval. Here is an example of what my Customize Performance Chart dialog shows: I am also running on a trial key at the moment. How can I change this interval? Do I need a standard license, or do I just need to turn off the VM (unlikely, but I haven't attempted it yet as these are long-running experiments)? Any help (or pointers to documentation which deals with the above -- I've already looked but did not find much) would be greatly appreciated.

    Read the article

  • SQL SERVER – Fundamentals of Columnstore Index

    - by pinaldave
    There are two kinds of storage in a database: row store and column store. A row store does exactly as the name suggests, storing rows of data on a page, while a column store stores all the data in a column on the same page. These columns are much easier to search: instead of a query searching all the data in an entire row whether the data is relevant or not, column store queries need only search a much smaller number of columns. This means major increases in search speed and much more efficient use of the disk. Additionally, column store indexes are heavily compressed, which translates to even greater memory savings and faster searches. I am sure this sounds very exciting, but it does not mean that you should convert every single index from row store to column store; one has to understand the proper places to use row store or column store indexes. Let us understand in this article how the columnstore type of index is different.

    Column store indexes are run by Microsoft's VertiPaq technology. However, all you really need to know is that this method of storing data as columns on a single page is much faster and more efficient. Creating a column store index is very easy, and you don't have to learn new syntax to create one: you just need to specify the keyword "COLUMNSTORE" and enter the data as you normally would. Keep in mind that once you add a column store index to a table, though, you cannot delete, insert or update the data; it is READ ONLY. However, since column store will be mainly used for data warehousing, this should not be a big problem, and you can always use partitioning to avoid rebuilding the index.

    A columnstore index stores each column in a separate set of disk pages, rather than storing multiple rows per page as data traditionally has been stored. With row store indexes, multiple pages contain multiple rows, with the columns spanning across those pages; with column store indexes, multiple pages each contain a single column. As a result, only the columns needed to solve a query are fetched from disk. Additionally, there is a good chance that there will be redundant data within a single column, which further helps to compress the data; this has a positive effect on the buffer hit rate, as most of the data will stay in memory and will not need to be retrieved again.

    Let us see a small example of how a columnstore index improves the performance of a query on a large table. As a first step, let us create a data set large enough to show the performance impact of a columnstore index. The time taken to create the sample data may vary on different computers based on their resources.

        USE AdventureWorks
        GO
        -- Create New Table
        CREATE TABLE [dbo].[MySalesOrderDetail](
            [SalesOrderID] [int] NOT NULL,
            [SalesOrderDetailID] [int] NOT NULL,
            [CarrierTrackingNumber] [nvarchar](25) NULL,
            [OrderQty] [smallint] NOT NULL,
            [ProductID] [int] NOT NULL,
            [SpecialOfferID] [int] NOT NULL,
            [UnitPrice] [money] NOT NULL,
            [UnitPriceDiscount] [money] NOT NULL,
            [LineTotal] [numeric](38, 6) NOT NULL,
            [rowguid] [uniqueidentifier] NOT NULL,
            [ModifiedDate] [datetime] NOT NULL
        ) ON [PRIMARY]
        GO
        -- Create clustered index
        CREATE CLUSTERED INDEX [CL_MySalesOrderDetail] ON [dbo].[MySalesOrderDetail] ([SalesOrderDetailID])
        GO
        -- Create Sample Data
        -- WARNING: This query may run for 2-10 minutes depending on your system's resources
        INSERT INTO [dbo].[MySalesOrderDetail]
        SELECT S1.* FROM Sales.SalesOrderDetail S1
        GO 100

    Now let us do a quick performance test. I have kept STATISTICS IO ON to measure how much IO the following queries take. First I run the query using the regular index and note its IO usage; after that, we create the columnstore index and measure the IO of the same query.

        -- Performance Test
        -- Comparing Regular Index with ColumnStore Index
        USE AdventureWorks
        GO
        SET STATISTICS IO ON
        GO
        -- Select from the table with the regular index
        SELECT ProductID, SUM(UnitPrice) SumUnitPrice, AVG(UnitPrice) AvgUnitPrice,
               SUM(OrderQty) SumOrderQty, AVG(OrderQty) AvgOrderQty
        FROM [dbo].[MySalesOrderDetail]
        GROUP BY ProductID
        ORDER BY ProductID
        GO
        -- Table 'MySalesOrderDetail'. Scan count 1, logical reads 342261, physical reads 0, read-ahead reads 0.
        -- Create the ColumnStore Index
        CREATE NONCLUSTERED COLUMNSTORE INDEX [IX_MySalesOrderDetail_ColumnStore]
        ON [MySalesOrderDetail] (UnitPrice, OrderQty, ProductID)
        GO
        -- Select from the table with the Columnstore Index
        SELECT ProductID, SUM(UnitPrice) SumUnitPrice, AVG(UnitPrice) AvgUnitPrice,
               SUM(OrderQty) SumOrderQty, AVG(OrderQty) AvgOrderQty
        FROM [dbo].[MySalesOrderDetail]
        GROUP BY ProductID
        ORDER BY ProductID
        GO

    It is very clear from the results that the query runs extremely fast after the columnstore index is created: the number of pages it has to read is drastically reduced, because the columns needed by the query are stored on the same pages and the query does not have to go through every single page to read them. If we enable the execution plan and compare, we can see that the columnstore index performs far better than the regular index in this case. Let us clean up the database:

        -- Cleanup
        DROP INDEX [IX_MySalesOrderDetail_ColumnStore] ON [dbo].[MySalesOrderDetail]
        GO
        TRUNCATE TABLE dbo.MySalesOrderDetail
        GO
        DROP TABLE dbo.MySalesOrderDetail
        GO

    In future posts we will see cases where a columnstore index is not the appropriate solution, as well as a few other tricks and tips for columnstore indexes. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Index, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • In the Firing Line: The impact of project and portfolio performance on the CEO

    - by Melissa Centurio Lopes
    What are the primary measurements for rating CEO performance? For corporate boards, business analysts, investors, and the trade press, the metrics they deploy are relatively binary in nature: what is being done to generate earnings, and what is being done to build and sustain high performance? As for the market, interest is primarily aroused when operational and financial performance falls outside planned commitments for the year. When organizations announce better-than-predicted results, they usually experience an immediate increase in share price. Likewise, poor results have an obviously negative impact on the share price and affect the role and tenure of the incumbent CEO. The danger for the CEO is that the risk of failure is ever present, ranging from manufacturing delays and supply chain issues to labor shortages and scope creep. This risk is enhanced by the involvement of secondary suppliers providing services critical to overall work schedules, and magnified further across a portfolio of programs and projects underway at any one time, all set within a global context. All can impact planned return on investment and have an inevitable impact on the share price, the primary empirical measure of day-to-day performance. Read this complimentary report, In the Firing Line, and explore the direct link between the health of the portfolio and CEO performance. This report will provide an overview of the responsibility the CEO has for implementing and maintaining a culture of accountability, offer examples of some of the higher-profile project failings in recent years, and detail the capabilities available to the CEO to mitigate the risks residing in their own portfolios.

    Read the article

  • What is better for the overall performance and feel of the game: one setInterval performing all the work, or many of them doing individual tasks?

    - by Bane
    This question is, I suppose, not limited to JavaScript, but it is the language I use to create my game, so I'll use it as an example. For now, I have structured my HTML5 game like this:

        var fps = 60;
        var game = new Game();
        setInterval(game.update, 1000/fps);

    And game.update looks like this:

        this.update = function() {
            this.parseInput();
            this.logic();
            this.physics();
            this.draw();
        }

    This seems a bit inefficient; maybe I don't need to do all of those things at once. An obvious alternative would be to have more intervals performing individual tasks, but is it worth it?

        var fps = 60;
        var game = new Game();
        setInterval(game.draw, 1000/fps);
        setInterval(game.physics, 1000/a); // where "a" is some constant, performing the same function as "fps"
        ...

    Which approach should I go with, and why? Is there a better alternative? Also, in case the second approach is the best, how frequently should I perform the tasks?

    Read the article

  • Repairing Damage to VMWare Virtual Disk

    - by Lachlan McDonald
    Evening all. I've got a considerable problem I'm hoping to get some resolution on. I had two VMware 6.5 virtual machines, one running Ubuntu 9.10 and the other Ubuntu 10.04. I used 9.10 as a testing server, so I could install a LAMP environment to prepare some code. Over the months I took a number of snapshots of this VM just in case something went wrong, and did a full copy of the entire VM a month ago. I created the 10.04 VM when Lucid Lynx launched so I could continue development on a fresh install. To get the files over, I simply added the 9.10 virtual disk to the 10.04 VM, grabbed some of the files I needed, and dismounted it. Unknown to me at the time, the changes to the 9.10 virtual disk meant that I could no longer boot it with the 9.10 VM; I'd always get the "The parent virtual disk has been modified since the child was created." error. I decided this was a good time to back up all the critical files, but now whenever I open the 9.10 disk to get the data, it isn't in the same state as it was earlier. My question is: is it possible that when I'm mounting the virtual disk I'm not seeing the most recent snapshot, or, in my blundering, have I lost the virtual disk? Cheers

    Read the article

< Previous Page | 53 54 55 56 57 58 59 60 61 62 63 64  | Next Page >