Search Results

Search found 6879 results on 276 pages for 'azure storage blobs'.


  • How to save and retrieve data as key-value pairs or files in isolated storage?

    - by kaleidoscope
    One can use isolated storage to store data locally on the user's computer. There are two ways to use isolated storage. The first way is to save or retrieve data as key/value pairs by using the IsolatedStorageSettings class. The second way is to save or retrieve files by using the IsolatedStorageFile class. More details can be found at http://silverlight.net/learn/quickstarts/isolatedstorage/   Rituraj, J

    Read the article

  • Filezilla/Puttygen doesn't recognize private key file

    - by devzoner
    I have generated a key for an Ubuntu Virtual Machine running on Azure Cloud Services (http://www.windowsazure.com/en-us/manage/linux/how-to-guides/ssh-into-linux/):

      openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout myPrivateKey.key -out myCert.pem

    When loading the private key into FileZilla, it asks me to convert the format; however, the conversion fails. The same happens with puttygen from a Linux console, using this:

      puttygen myPrivateKey.key -o myKey.ppk

    In both cases I get the following error:

      puttygen: error loading `myPrivateKey.key': unrecognised key type

    By the way, this key doesn't have a passphrase. I found an old thread about it, but I'm using version 0.6.3, which is newer than what that thread recommends: http://fixunix.com/ssh/541874-puttygen-unable-import-openssh-key.html. I've managed to solve this issue on a Mac by using another GUI client, Fugu, but one of my co-workers uses Windows and I still have to figure this out. Since FileZilla is the de facto FTP client, I thought it would be easier to solve it there. Thanks
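    The usual culprit here is the key's PEM encoding: openssl req with -nodes typically writes the private key in PKCS#8 form ("-----BEGIN PRIVATE KEY-----"), which older puttygen releases report as an unrecognised key type. A minimal sketch of the conversion, assuming OpenSSL is available (the output file name is arbitrary):

      # rewrite the key in traditional RSA PEM form ("-----BEGIN RSA PRIVATE KEY-----")
      openssl rsa -in myPrivateKey.key -out myPrivateKey-rsa.key
      # puttygen (and FileZilla's import dialog) should accept the converted key
      puttygen myPrivateKey-rsa.key -o myKey.ppk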

    Read the article

  • DNS settings for SaaS in the cloud?

    - by Jeremy
    I am building a SaaS product. When a user signs up for an account they must select an alias for their site: --------.getlaunchpoint.com. Right now I have a wildcard A record, *.getlaunchpoint.com, that points to the server's IP address. However, with Azure I am not given an IP address; the suggested implementation is to make use of a CNAME. So I need to create a CNAME mapping *.getlaunchpoint.com to getlaunchpoint.cloudapp.net, but GoDaddy does not support wildcard CNAMEs. Searching on Google I'm getting conflicting information... are wildcard CNAMEs bad practice? I run into the same problem with Amazon EC2 if I want to make use of load balancers, because you cannot tie a public IP address to an Amazon Load Balancer; Amazon also suggests the use of a CNAME. Any help would be appreciated.
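    For reference, this is the shape of the record in question and a quick way to verify it once it is hosted with a DNS provider that accepts wildcard CNAMEs; "anytenant" is a hypothetical alias used only for the check:

      # desired zone entry (BIND-style notation)
      #   *.getlaunchpoint.com.   IN   CNAME   getlaunchpoint.cloudapp.net.

      # confirm that an arbitrary alias resolves through the wildcard
      dig +short anytenant.getlaunchpoint.com CNAME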

    Read the article

  • Windows Server (2012) ASP.NET

    - by alexus
    I honestly don't even know where to start. I created a Windows Server 2012 (Azure) VM, and inside the VM I thought I had done everything required to run an ASP.NET application, but that wasn't the case. I'm stuck and unable to run a simple ASP.NET HelloWorld app (and I blame Microsoft! I had everything running before they decided to wipe out my VM, which is why I'm rebuilding it). What can I do to resolve this? Where should I look? My HelloWorld application returns Internal Server Error 500.19. I must have missed something somewhere, but I need someone to help me pinpoint it. Please help!
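    A 500.19 on a freshly built VM usually means IIS can't honour something in web.config, most often because a feature the config expects (ASP.NET itself, or an extra module such as URL Rewrite) isn't installed yet. A minimal sketch of what I'd verify first, assuming the app targets .NET 4.5, run from an elevated PowerShell prompt:

      # install IIS plus ASP.NET 4.5 support; features that are already present are skipped
      Install-WindowsFeature Web-Server, Web-Asp-Net45 -IncludeManagementTools

    The detailed 500.19 error page also names the exact configuration file and line it choked on, which narrows things down quickly.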

    Read the article

  • How frequent are network partitions on cloud services?

    - by roja
    Much is made of the CAP trade-off for data storage, where conflicts can be introduced if there is a network partition. My question: is there any evidence that this is a problem that arises with any significant frequency in modern cloud IaaS services, e.g. EC2, Azure, Rackspace? Is it a problem which, despite being a theoretical roadblock in constructing idealised distributed systems, is in fact a non-issue for all practical concerns? Has anyone experienced a network partition within one of these systems (within a single data centre)? If so, would you be willing to share any details?

    Read the article

  • How to trigger chef-client on all nodes from my workstation

    - by divyanshm
    I have 5 nodes and all of them have one setup cookbook in common. Now I would like to add another task to this common cookbook that would configure SQL Server for me on all the nodes. Is there a way/command to manually trigger this change across all clients right away? I use Azure VMs; all the nodes are Windows Server 2012 machines. I could do a knife winrm machine-name chef-client -m -x username -P password on all the machines, but I'm sure there should be a better way of doing this. I'm new to using Chef, so I might be missing a very basic command here.
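    For what it's worth, knife can resolve the node list for you via a Chef search instead of taking machine names one at a time. A minimal sketch, assuming the nodes are registered with the Chef server and reachable over WinRM (the search query and credentials are placeholders):

      # run chef-client on every Windows node returned by the search, in one command
      knife winrm 'platform:windows' 'chef-client' -x username -P 'password'

    Narrowing the query (for example to a role that only the SQL Server nodes carry) keeps the run targeted.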

    Read the article

  • Detach Disk from deleted virtual machine

    - by user1628043
    I had a Virtual Machine running in Azure for a couple of weeks and suddenly it stopped responding. I shut it down and tried to restart it, but that failed, saying the VM faulted. I then deleted the VM, which leaves the VHD file intact on my storage account. I was intending to recreate a new VM using the VHD from the first VM; however, the OS disk and data disk are both still marked as being attached to the original VM, which no longer exists. Is there any way to detach these disks so I can use them to create a new VM?

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #039

    - by Pinal Dave
    Here is the list of selected articles of SQLAuthority.com across all these years. Instead of just listing all the articles I have selected a few of my most favorite articles and have listed them here with additional notes below it. Let me know which one of the following is your favorite article from memory lane. 2007 FQL – Facebook Query Language Facebook list following advantages of FQL: Condensed XML reduces bandwidth and parsing costs. More complex requests can reduce the number of requests necessary. Provides a single consistent, unified interface for all of your data. It’s fun! UDF – Get the Day of the Week Function The day of the week can be retrieved in SQL Server by using the DatePart function. The value returned by the function is between 1 (Sunday) and 7 (Saturday). To convert this to a string representing the day of the week, use a CASE statement. UDF – Function to Get Previous And Next Work Day – Exclude Saturday and Sunday While reading ColdFusion blog of Ben Nadel Getting the Previous Day In ColdFusion, Excluding Saturday And Sunday, I realize that I use similar function on my SQL Server Database. This function excludes the Weekends (Saturday and Sunday), and it gets previous as well as next work day. Complete Series of SQL Server Interview Questions and Answers Data Warehousing Interview Questions and Answers – Introduction Data Warehousing Interview Questions and Answers – Part 1 Data Warehousing Interview Questions and Answers – Part 2 Data Warehousing Interview Questions and Answers – Part 3 Data Warehousing Interview Questions and Answers Complete List Download 2008 Introduction to Log Viewer In SQL Server all the windows event logs can be seen along with SQL Server logs. Interface for all the logs is same and can be launched from the same place. This log can be exported and filtered as well. DBCC SHRINKFILE Takes Long Time to Run If you are DBA who are involved with Database Maintenance and file group maintenance, you must have experience that many times DBCC SHRINKFILE operations takes a long time but any other operations with Database are relatively quicker. mssqlsystemresource – Resource Database The purpose of resource database is to facilitates upgrading to the new version of SQL Server without any hassle. In previous versions whenever version of SQL Server was upgraded all the previous version system objects needs to be dropped and new version system objects to be created. 2009 Puzzle – Write Script to Generate Primary Key and Foreign Key In SQL Server Management Studio (SSMS), there is no option to script all the keys. If one is required to script keys they will have to manually script each key one at a time. If database has many tables, generating one key at a time can be a very intricate task. I want to throw a question to all of you if any of you have scripts for the same purpose. Maximizing View of SQL Server Management Studio – Full Screen – New Screen I had explained the following two different methods: 1) Open Results in Separate Tab - This is a very interesting method as result pan shows up in a different tab instead of the splitting screen horizontally. 2) Open SSMS in Full Screen - This works always and to its best. Not many people are aware of this method; hence, very few people use it to enhance performance. 2010 Find Queries using Parallelism from Cached Plan T-SQL script gets all the queries and their execution plan where parallelism operations are kicked up. 
Pay attention there is TOP 10 is used, if you have lots of transactional operations, I suggest that you change TOP 10 to TOP 50 This is the list of the all the articles in the series of computed columns. SQL SERVER – Computed Column – PERSISTED and Storage This article talks about how computed columns are created and why they take more storage space than before. SQL SERVER – Computed Column – PERSISTED and Performance This article talks about how PERSISTED columns give better performance than non-persisted columns. SQL SERVER – Computed Column – PERSISTED and Performance – Part 2 This article talks about how non-persisted columns give better performance than PERSISTED columns. SQL SERVER – Computed Column and Performance – Part 3 This article talks about how Index improves the performance of Computed Columns. SQL SERVER – Computed Column – PERSISTED and Storage – Part 2 This article talks about how creating index on computed column does not grow the row length of table. SQL SERVER – Computed Columns – Index and Performance This article summarized all the articles related to computed columns. 2011 SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Data Warehousing Concepts – Day 21 of 31 What is Data Warehousing? What is Business Intelligence (BI)? What is a Dimension Table? What is Dimensional Modeling? What is a Fact Table? What are the Fundamental Stages of Data Warehousing? What are the Different Methods of Loading Dimension tables? Describes the Foreign Key Columns in Fact Table and Dimension Table? What is Data Mining? What is the Difference between a View and a Materialized View? SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Data Warehousing Concepts – Day 22 of 31 What is OLTP? What is OLAP? What is the Difference between OLTP and OLAP? What is ODS? What is ER Diagram? SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Data Warehousing Concepts – Day 23 of 31 What is ETL? What is VLDB? Is OLTP Database is Design Optimal for Data Warehouse? If denormalizing improves Data Warehouse Processes, then why is the Fact Table is in the Normal Form? What are Lookup Tables? What are Aggregate Tables? What is Real-Time Data-Warehousing? What are Conformed Dimensions? What is a Conformed Fact? How do you Load the Time Dimension? What is a Level of Granularity of a Fact Table? What are Non-Additive Facts? What is a Factless Facts Table? What are Slowly Changing Dimensions (SCD)? SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Data Warehousing Concepts – Day 24 of 31 What is Hybrid Slowly Changing Dimension? What is BUS Schema? What is a Star Schema? What Snow Flake Schema? Differences between the Star and Snowflake Schema? What is Difference between ER Modeling and Dimensional Modeling? What is Degenerate Dimension Table? Why is Data Modeling Important? What is a Surrogate Key? What is Junk Dimension? What is a Data Mart? What is the Difference between OLAP and Data Warehouse? What is a Cube and Linked Cube with Reference to Data Warehouse? What is Snapshot with Reference to Data Warehouse? What is Active Data Warehousing? What is the Difference between Data Warehousing and Business Intelligence? What is MDS? Explain the Paradigm of Bill Inmon and Ralph Kimball. SQL SERVER – Azure Interview Questions and Answers – Guest Post by Paras Doshi – Day 25 of 31 Paras Doshi has submitted 21 interesting question and answers for SQL Azure. 1.What is SQL Azure? 2.What is cloud computing? 
3.How is SQL Azure different than SQL server? 4.How many replicas are maintained for each SQL Azure database? 5.How can we migrate from SQL server to SQL Azure? 6.Which tools are available to manage SQL Azure databases and servers? 7.Tell me something about security and SQL Azure. 8.What is SQL Azure Firewall? 9.What is the difference between web edition and business edition? 10.How do we synchronize On Premise SQL server with SQL Azure? 11.How do we Backup SQL Azure Data? 12.What is the current pricing model of SQL Azure? 13.What is the current limitation of the size of SQL Azure DB? 14.How do you handle datasets larger than 50 GB? 15.What happens when the SQL Azure database reaches Max Size? 16.How many databases can we create in a single server? 17.How many servers can we create in a single subscription? 18.How do you improve the performance of a SQL Azure Database? 19.What is code near application topology? 20.What were the latest updates to SQL Azure service? 21.When does a workload on SQL Azure get throttled? SQL SERVER – Interview Questions and Answers – Guest Post by Malathi Mahadevan – Day 26 of 31 Malachi had asked a simple question which has several answers. Each answer makes you think and ponder about the reality of the IT world. Look at the simple question – ‘What is the toughest challenge you have faced in your present job and how did you handle it’? and its various answers. Each answer has its own story. SQL SERVER – Interview Questions and Answers – Guest Post by Rick Morelan – Day 27 of 31 Rick Morelan of Joes2Pros has written an excellent blog post on the subject how to find top N values. Most people are fully aware of how the TOP keyword works with a SELECT statement. After years preparing so many students to pass the SQL Certification I noticed they were pretty well prepared for job interviews too. Yes, they would do well in the interview but not great. There seemed to be a few questions that would come up repeatedly for almost everyone. Rick addresses similar questions in his lucid writing skills. 2012 Observation of Top with Index and Order of Resultset SQL Server has lots of things to learn and share. It is amazing to see how people evaluate and understand different techniques and styles differently when implementing. The real reason may be absolutely different but we may blame something totally different for the incorrect results. Read the blog post to learn more. How do I Record Video and Webcast How to Convert Hex to Decimal or INT Earlier I asked regarding a question about how to convert Hex to Decimal. I promised that I will post an answer with Due Credit to the author but never got around to post a blog post around it. Read the original post over here SQL SERVER – Question – How to Convert Hex to Decimal. Query to Get Unique Distinct Data Based on Condition – Eliminate Duplicate Data from Resultset The natural reaction will be to suggest DISTINCT or GROUP BY. However, not all the questions can be solved by DISTINCT or GROUP BY. Let us see the following example, where a user wanted only latest records to be displayed. Let us see the example to understand further. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Deploying an SSL Application to Windows Azure – The Dark Secret

    - by ToStringTheory
    When working on an application that had been in production for some time, but was about to have a shopping cart added to it, the necessity for SSL certificates came up. When ordering the certificates through the vendor, the certificate signing request (CSR) was generated through the provider's (http://register.com) web interface, and within a day, we had our certificate. At first, I thought that the certification process would be the hard part… Little did I know that my fun was just beginning…

    The Problem

    I'll be honest, I had never really secured a site before with SSL. This was a learning experience for me in the first place, but little did I know that I would be learning more than the simple procedure. I understood a bit about SSL already, the mechanisms in how it works – the secure handshake, CA's, chains, etc… What I didn't realize was the importance of the CSR in the whole process. Apparently, when the CSR is created, a public key is created at the same time, as well as a private key that is stored locally on the PC that generated the request. When the certificate comes back and you import it back into IIS (assuming you used IIS to generate the CSR), all of the information is combined together and the SSL certificate is added into your store. Since, at the time the certificate had been ordered for our site, the selection to use the online interface to generate the CSR was chosen, the certificate came back to us in 5 separate files:

      A root certificate – (*.crt file)
      An intermediate certificate – (*.crt file)
      Another intermediate certificate – (*.crt file)
      The SSL certificate for our site – (*.crt file)
      The private key for our certificate – (*.key file)

    Well, in case you don't know much about Windows Azure and SSL certificates, the first thing you should learn is that certificates can only be uploaded to Azure if they are in a PFX package – securable by a password. Also, in the case of our SSL certificate, you need to include the Private Key with the file. As you can see, we didn't have a PFX file to upload. If you don't get the simple PFX from your hosting provider, but rather the multiple files, you will soon find out that the process has turned from something that should be simple – to one that borders on a circle of hell… Probably between the fifth and seventh somewhere…

    The Solution

    The solution is to take the files that make up the certificate's chain and key, and combine them into a file that can be imported into your local computer's store, as well as uploaded to Windows Azure. I cannot take the credit for this information, as I simply researched a while before finding out how to do this.

      Download the OpenSSL for Windows toolkit (Win32 OpenSSL v1.0.1c)
      Install the OpenSSL for Windows toolkit
      Download and move all of your certificate files to an easily accessible location (you'll be pointing to them in the command prompt, so I put them in a subdirectory of the OpenSSL installation)
      Open a command prompt
      Navigate to the folder where you installed OpenSSL
      Run the following command:

      openssl pkcs12 -export -out {outcert.pfx} -inkey {keyfile.key} -in {sslcert.crt} -certfile {ca1.crt} -certfile {ca2.crt}

    From this command, you will get a file, outcert.pfx, with the sum total of your SSL certificate (sslcert.crt), private key {keyfile.key}, and as many CA/chain files as you need {ca1.crt, ca2.crt}. Taking this file, you can then import it into your own IIS in one operation, instead of importing each certificate individually.
You can also upload the PFX to Azure, and once you add the SSL certificate links to the cloud project in Visual Studio, you're good to go! Conclusion When I first looked around for a solution to this problem, there were not many places online that had the information that I was looking for. While what I ended up having to do may seem obvious, it isn't for everyone, and I hope that this can at least help one developer out there solve the problem without hours of work!

    Read the article

  • problem storing a hash in DB using Storage::nfreeze Perl

    - by Sam
    Hello, I want to insert a hash in the DB using Storable::nfreeze, but the data is not inserted properly. The code is as follows:

      %rec = ();
      $rec{'name'} = 'my name';
      $rec{'address'} = 'my address';
      my $order1 = new Order();
      $order1->set_session(\%rec);
      $self->createOrder($order1);

      sub createOrder {
          my $self = $_[0];
          my $order = $_[1];
          # Retrieve the fields to insert into the database.
          my $st = $dbh->prepare("insert into order (session,.......) values(?,........)");
          my $session = %{$order->get_session()};
          $st->execute(&Storable::nfreeze(\%session),.....);
          $st->finish();
      }

      sub getOrder {
          ...
          my $session = &Storable::thaw( $ref->{'session'} );
          .....
      }

    The thaw is working fine, because I tested it with some rows that had been inserted correctly. But when I try to get a row that was inserted using the createOrder subroutine, I get an error saying:

      Storable binary image v36.65 more recent than I am (v2.7) at blib/lib/Storable.pm (autosplit into blib/lib/auto/Storable/thaw.al) line 415

    The error comes from the line that has thaw; the nfreeze did not store the hash properly. Can someone point me to what I'm doing wrong in the createOrder subroutine? Thanks in advance. I know the module version has nothing to do with the problem.

    Read the article

  • NoSQL for filesystem storage organization and replication?

    - by wheaties
    We've been discussing design of a data warehouse strategy within our group for meeting testing, reproducibility, and data syncing requirements. One of the suggested ideas is to adapt a NoSQL approach using an existing tool rather than try to re-implement a whole lot of the same on a file system. I don't know if a NoSQL approach is even the best approach to what we're trying to accomplish but perhaps if I describe what we need/want you all can help. Most of our files are large, 50+ Gig in size, held in a proprietary, third-party format. We need to be able to access each file by a name/date/source/time/artifact combination. Essentially a key-value pair style look-up. When we query for a file, we don't want to have to load all of it into memory. They're really too large and would swamp our server. We want to be able to somehow get a reference to the file and then use a proprietary, third-party API to ingest portions of it. We want to easily add, remove, and export files from storage. We'd like to set up automatic file replication between two servers (we can write a script for this.) That is, sync the contents of one server with another. We don't need a distributed system where it only appears as if we have one server. We'd like complete replication. We also have other smaller files that have a tree type relationship with the Big files. One file's content will point to the next and so on, and so on. It's not a "spoked wheel," it's a full blown tree. We'd prefer a Python, C or C++ API to work with a system like this but most of us are experienced with a variety of languages. We don't mind as long as it works, gets the job done, and saves us time. What you think? Is there something out there like this?

    Read the article

  • Monitoring Windows Azure Service Bus Endpoint with BizTalk 360?

    - by Michael Stephenson
    I'm currently working with a customer who is undergoing an initiative to expose some of their line of business applications to external partners and SAAS applications and as part of this we have been looking at using the Windows Azure Service Bus. For the first part of the project we were focused on some synchronous request response scenarios where an external application would use the Service Bus relay functionality to get data from some internal applications. When we were looking at the operational monitoring side of the solution it was obvious that although most of the normal server monitoring capabilities would be required for the on premise components we would have to look at new approaches to validate that the operation of the service from outside of the organization was working as expected. A number of months ago one of my colleagues Elton Stoneman wrote about an approach I have introduced with a number of clients in the past where we implement a diagnostics service in each service component we build. This service would allow us to make a call which would flex some of the working parts of the system to prove it was working within any SLA. This approach is discussed on the following article: http://geekswithblogs.net/EltonStoneman/archive/2011/12/12/the-value-of-a-diagnostics-service.aspx In our solution we wanted to take the same approach but we had to consider that the service clients were external to the service. We also had to consider that by going through Windows Azure Service Bus it's not that easy to make most of your standard monitoring solutions just give you an easy way to do this. In a previous article I have described how you can use BizTalk 360 to monitor things using a custom extension to the Web Endpoint Manager and I felt that we could use this approach to provide an excellent way to monitor our service bus endpoint. The previous article is available on the following link: http://geekswithblogs.net/michaelstephenson/archive/2012/09/12/150696.aspx   The Monitoring Solution BizTalk 360 currently has an easy way to hook up the endpoint manager to a url which it will then call and if a successful response is returned it then considers the endpoint to be in a healthy state. We would take advantage of this by creating an ASP.net web page which would be called by BizTalk 360 and behind this page we would implement the functionality to call the diagnostics service on our Service Bus endpoint. The ASP.net page could include logic to work out how to handle the response from the diagnostics service. For example if the overall result of the diagnostics service was successful but the call to the diagnostics service was longer than a certain amount of time then we could return an error and indicate the service is taking too long. The following diagram illustrates the monitoring pattern.   The diagnostics service which is hosted in the line of business application allows us to ping a simple message through the Azure Service Bus relay to the WCF services in the LOB application and we they get a response back indicating that the service is working fine. To implement this I used the exact same approach I described in my previous post to create a custom web page which calls the diagnostics service and then it would return an HTTP response code which would depend on the error condition returned or a 200 if it was successful. 
One of the limitations of this approach is that the competing consumer pattern for listening to messages from service bus means that you cannot guarantee which server would process your diagnostics check message but with BizTalk 360 you could simply add multiple endpoint checks so that it could access the individual on-premise web servers directly to ensure that each server is working fine and then check that messages can also be processed through the cloud. Conclusion It took me about 15 minutes to get a proof of concept of this up and running which was able to monitor our web services which had been exposed via Windows Azure Service Bus. I was then able to inherit all of the monitoring benefits of BizTalk 360 to provide an enterprise class monitoring solution for our cloud enabled API.

    Read the article

  • Android - Where to store generated bitmaps?

    - by Josh
    I've got an app which dynamically generates anywhere from 6 to 100 small bitmaps for the user to move around the screen in a given session. I currently generate them in onCreate and store them to the SD card, so that after an orientation change I can grab them out of external storage and display them again. However, this takes time (the loading) and I'd like to keep the bitmap references around between lifecycle changes for quicker access. My question is, is there a better place to store my generated bitmaps? I was thinking about creating a static storage library in my base activity, something that would only need to be reloaded when the app is completely removed from memory (shutdown, other apps need resources, 30 minute restart, etc). Ideally, I'd like the user to be able to back out to the title screen, click a "Resume" button, and in onCreate I just have access to those resident bitmap references instead of having to load them from storage again. For this reason I don't think Activity.onRetainNonConfigurationInstance is what I need. Alternatively, is there a better way to handle multiple generated bitmaps than what I'm doing or the plan I described?

    Read the article

  • Best choice for a personal "online backup" in Europe

    - by marc_s
    I'm looking for an online backup solution for personal use - besides all the usual requirements (like not too expensive, since it's for personal use), I'd like to add two requirements to it:

      the data center should be in Europe (I don't want my personal data stored in the US, when the next crazed president comes along and wants to confiscate and rifle through everybody's files.....)
      the online backup store should be accessible through a drive letter in cmd.exe

    So far, I've looked at a few services, but none have totally convinced me:

      Dropbox is looking ok, but they insist on creating a silly "My Dropbox" directory in my data path - and there's no way I can choose that name. Sorry - "My everything" is for dummies - I don't like that, I like to name my files and folders according to my liking.
      LiveDrive is OK, too - they offer European storage, drive letter and all - but those drive letters are only available in the Windows Explorer - and not on the cmd.exe command line :-( and since I do 99% of my work on the command line, this is a major drawback.....

    Any other services I haven't looked at worth checking out? Marc

    Read the article

  • How to check CPU temperature on a HP P2000?

    - by Pavel
    I have a HP StorageWorks MSA Storage P2000 G3 SAS. show sensor-status gives something like:

      # show sensor-status
      Sensor Name                      Value   Status
      ----------------------------------------------------
      On-Board Temperature 1-Ctlr A    53 C    OK
      On-Board Temperature 1-Ctlr B    52 C    OK
      On-Board Temperature 2-Ctlr A    61 C    OK
      On-Board Temperature 2-Ctlr B    63 C    OK
      On-Board Temperature 3-Ctlr A    53 C    OK
      On-Board Temperature 3-Ctlr B    53 C    OK
      Disk Controller Temp-Ctlr A      34 C    OK
      Disk Controller Temp-Ctlr B      32 C    OK
      Memory Controller Temp-Ctlr A    66 C    OK
      Memory Controller Temp-Ctlr B    67 C    OK
      [...]
      Overall Unit Status              OK      OK
      Temperature Loc: upper-IOM A     40 C    OK
      Temperature Loc: lower-IOM B     38 C    OK
      Temperature Loc: left-PSU        36 C    OK
      Temperature Loc: right-PSU       40 C    OK
      [...]

    Is one of the values the CPU/FPGA temperature? Or, if not, how do I get it? Thanks!

    Read the article

  • Single/Mulitple LUN for vmware vm hosting

    - by Yucong Sun
    I'm building an iSCSI storage system for hosting about ~500 VMware VMs running concurrently, and I have a disk array with 15 disks. I only need moderate write performance, but preferably not SPOFed. So that leaves me with RAID1/RAID10, and I have a couple of choices:

      1) 3x LUN, 4-disk RAID10 + 3 hot-swap
      2) 1x LUN, 14-disk RAID10 + 1 hot-swap
      3) 7x LUN, 2-disk RAID1 + 1 hot-swap

    Which way is better? Is there a real problem running 500 VMs on a single LUN? And would it be better to resort to 7 LUNs so each VM is better isolated from the others?

    Read the article

  • Improving SAS multipath to JBOD performance on Linux

    - by user36825
    Hello all, I'm trying to optimize a storage setup on some Sun hardware with Linux. Any thoughts would be greatly appreciated. We have the following hardware:

      Sun Blade X6270
      2* LSISAS1068E SAS controllers
      2* Sun J4400 JBODs with 1 TB disks (24 disks per JBOD)
      Fedora Core 12
      2.6.33 release kernel from FC13 (also tried with latest 2.6.31 kernel from FC12, same results)

    Here's the datasheet for the SAS hardware: http://www.sun.com/storage/storage_networking/hba/sas/PCIe.pdf It's using PCI Express 1.0a, 8x lanes. With a bandwidth of 250 MB/sec per lane, we should be able to do 2000 MB/sec per SAS controller. Each controller can do 3 Gb/sec per port and has two 4-port PHYs. We connect both PHYs from a controller to a JBOD. So between the JBOD and the controller we have 2 PHYs * 4 SAS ports * 3 Gb/sec = 24 Gb/sec of bandwidth, which is more than the PCI Express bandwidth. With write caching enabled and when doing big writes, each disk can sustain about 80 MB/sec (near the start of the disk). With 24 disks, that means we should be able to do 1920 MB/sec per JBOD.

    The multipath configuration:

      multipath {
          rr_min_io 100
          uid 0
          path_grouping_policy multibus
          failback manual
          path_selector "round-robin 0"
          rr_weight priorities
          alias somealias
          no_path_retry queue
          mode 0644
          gid 0
          wwid somewwid
      }

    I tried values of 50, 100, 1000 for rr_min_io, but it doesn't seem to make much difference. Along with varying rr_min_io I tried adding some delay between starting the dd's to prevent all of them writing over the same PHY at the same time, but this didn't make any difference, so I think the I/O's are getting properly spread out. According to /proc/interrupts, the SAS controllers are using a "IR-IO-APIC-fasteoi" interrupt scheme. For some reason only core #0 in the machine is handling these interrupts. I can improve performance slightly by assigning a separate core to handle the interrupts for each SAS controller:

      echo 2 > /proc/irq/24/smp_affinity
      echo 4 > /proc/irq/26/smp_affinity

    Using dd to write to the disk generates "Function call interrupts" (no idea what these are), which are handled by core #4, so I keep other processes off this core too. I run 48 dd's (one for each disk), assigning them to cores not dealing with interrupts like so:

      taskset -c somecore dd if=/dev/zero of=/dev/mapper/mpathx oflag=direct bs=128M

    oflag=direct prevents any kind of buffer cache from getting involved. None of my cores seem maxed out. The cores dealing with interrupts are mostly idle and all the other cores are waiting on I/O as one would expect.
      Cpu0  : 0.0%us, 1.0%sy, 0.0%ni, 91.2%id,  7.5%wa, 0.0%hi, 0.2%si, 0.0%st
      Cpu1  : 0.0%us, 0.8%sy, 0.0%ni, 93.0%id,  0.2%wa, 0.0%hi, 6.0%si, 0.0%st
      Cpu2  : 0.0%us, 0.6%sy, 0.0%ni, 94.4%id,  0.1%wa, 0.0%hi, 4.8%si, 0.0%st
      Cpu3  : 0.0%us, 7.5%sy, 0.0%ni, 36.3%id, 56.1%wa, 0.0%hi, 0.0%si, 0.0%st
      Cpu4  : 0.0%us, 1.3%sy, 0.0%ni, 85.7%id,  4.9%wa, 0.0%hi, 8.1%si, 0.0%st
      Cpu5  : 0.1%us, 5.5%sy, 0.0%ni, 36.2%id, 58.3%wa, 0.0%hi, 0.0%si, 0.0%st
      Cpu6  : 0.0%us, 5.0%sy, 0.0%ni, 36.3%id, 58.7%wa, 0.0%hi, 0.0%si, 0.0%st
      Cpu7  : 0.0%us, 5.1%sy, 0.0%ni, 36.3%id, 58.5%wa, 0.0%hi, 0.0%si, 0.0%st
      Cpu8  : 0.1%us, 8.3%sy, 0.0%ni, 27.2%id, 64.4%wa, 0.0%hi, 0.0%si, 0.0%st
      Cpu9  : 0.1%us, 7.9%sy, 0.0%ni, 36.2%id, 55.8%wa, 0.0%hi, 0.0%si, 0.0%st
      Cpu10 : 0.0%us, 7.8%sy, 0.0%ni, 36.2%id, 56.0%wa, 0.0%hi, 0.0%si, 0.0%st
      Cpu11 : 0.0%us, 7.3%sy, 0.0%ni, 36.3%id, 56.4%wa, 0.0%hi, 0.0%si, 0.0%st
      Cpu12 : 0.0%us, 5.6%sy, 0.0%ni, 33.1%id, 61.2%wa, 0.0%hi, 0.0%si, 0.0%st
      Cpu13 : 0.1%us, 5.3%sy, 0.0%ni, 36.1%id, 58.5%wa, 0.0%hi, 0.0%si, 0.0%st
      Cpu14 : 0.0%us, 4.9%sy, 0.0%ni, 36.4%id, 58.7%wa, 0.0%hi, 0.0%si, 0.0%st
      Cpu15 : 0.1%us, 5.4%sy, 0.0%ni, 36.5%id, 58.1%wa, 0.0%hi, 0.0%si, 0.0%st

    Given all this, the throughput reported by running "dstat 10" is in the range of 2200-2300 MB/sec. Given the math above I would expect something in the range of 2*1920 ~= 3600+ MB/sec. Does anybody have any idea where my missing bandwidth went? Thanks!
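    One diagnostic that would help narrow this down is watching per-device throughput and utilization while the dd's run, to see whether individual disks, individual paths, or one of the two controllers is the part that saturates. A minimal sketch, assuming the sysstat package is installed:

      # extended per-device statistics in MB/s, refreshed every 5 seconds
      iostat -xm 5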

    Read the article

  • Do Seagate Momentus XT SSD Hybrid drives perform better than a good hard drive + flash on ReadyBoost

    - by Chris W. Rea
    Seagate has released a product called the Momentus XT Solid State Hybrid Drive. At a glance, this looks exactly like what Windows ReadyBoost attempts to do with software at the OS level: Pairing the benefits of a large hard drive together with the performance of solid-state flash memory. Does the Momentus XT out-perform a similar ad-hoc pairing of a decent hard drive with similar flash memory storage under Windows ReadyBoost? Other than the obvious "a hardware implementation ought to be faster than a software implementation", why would ReadyBoost not be able to perform as well as such a hybrid device?

    Read the article

  • Gluster bricks are offline and errors in logs

    - by Roman Newaza
    I have substituted all the IP addresses with hostnames and renamed the configs (IP to hostname) in /var/lib/glusterd with my shell script. After that I restarted the Gluster daemon and the volume. Then I checked if all the peers are connected:

      root@GlusterNode1a:~# gluster peer status
      Number of Peers: 3

      Hostname: gluster-1b
      Uuid: 47f469e2-907a-4518-b6a4-f44878761fd2
      State: Peer in Cluster (Connected)

      Hostname: gluster-2b
      Uuid: dc3a3ff7-9e30-44ac-9d15-00f9dab4d8b9
      State: Peer in Cluster (Connected)

      Hostname: gluster-2a
      Uuid: 72405811-15a0-456b-86bb-1589058ff89b
      State: Peer in Cluster (Connected)

    I could see the mounted volume's size change on all the nodes when I execute the df command, so new data is coming in. But recently I noticed error messages in the application log:

      copy(/storage/152627/dat): failed to open stream: Structure needs cleaning
      readfile(/storage/1438227/dat): failed to open stream: Input/output error
      unlink(/storage/189457/23/dat): No such file or directory

    Finally, I found out some bricks are offline:

      root@GlusterNode1a:~# gluster volume status
      Status of volume: storage
      Gluster process                             Port    Online  Pid
      ------------------------------------------------------------------------------
      Brick gluster-1a:/storage/1a                24009   Y       1326
      Brick gluster-1b:/storage/1b                24009   N       N/A
      Brick gluster-2a:/storage/2a                24009   N       N/A
      Brick gluster-2b:/storage/2b                24009   N       N/A
      Brick gluster-1a:/storage/3a                24011   Y       1332
      Brick gluster-1b:/storage/3b                24011   N       N/A
      Brick gluster-2a:/storage/4a                24011   N       N/A
      Brick gluster-2b:/storage/4b                24011   N       N/A
      NFS Server on localhost                     38467   Y       24670
      Self-heal Daemon on localhost               N/A     Y       24676
      NFS Server on gluster-2b                    38467   Y       4339
      Self-heal Daemon on gluster-2b              N/A     Y       4345
      NFS Server on gluster-2a                    38467   Y       1392
      Self-heal Daemon on gluster-2a              N/A     Y       1402
      NFS Server on gluster-1b                    38467   Y       2435
      Self-heal Daemon on gluster-1b              N/A     Y       2441

    What can I do about that? I need to fix it. Note: the CPU and network usage of all four nodes is about the same.
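    One thing worth trying first, assuming the brick directories themselves are still intact after the rename: ask glusterd to respawn the brick processes that are marked offline. A minimal sketch (run on any node of the cluster):

      # "force" restarts brick processes for an already-started volume without touching the data
      gluster volume start storage force

    If the bricks still refuse to start, the brick logs under /var/log/glusterfs/bricks/ usually say why; a leftover reference to the old IP-based brick path would show up there.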

    Read the article

  • How do I use an internal SSD as a scratch disk for FCP X?

    - by andrewb
    I'm contemplating setting up my MacBook Air as a video editing machine. If I do this, I'll upgrade to a 256 GB SSD, and I should be able to keep around 100 GB or more free for video editing. The video files would of course be stored externally, but save purchasing some expensive Thunderbolt RAID device (which I suppose is gradually becoming more of an option), it will be slow for read/writes. How can I have a set up where I take advantage of my SSD's speed for a scratch disk/cache for FCP X, but still have the TB(s) of storage of externals? I don't want to have to be moving files constantly back and forth, this is about saving time not wasting it.

    Read the article

  • Is a "failed" RAID 5 disk really no good?

    - by GregH
    This is my first venture into setting up RAID on my home system. After installing 3 x 1TB drives in RAID 5, everything was running well for about 10 days. Then the Intel Rapid Storage Technology software that monitors the disks and RAID on my system told me that I had a failed drive. I marked the drive as good, and the array rebuilt. Then a day or so later I got a notification again that the drive had failed. I'm just wondering if this drive really is no good, or if there is something I can do to get it working again? Or do I just need to return it to the store where I bought it and get a replacement?
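    One way to settle whether the disk itself is failing is to read its own SMART counters rather than relying on the RAID software's verdict. A minimal sketch, assuming smartmontools is installed and the drive shows up as /dev/sda (the device name is hypothetical; reallocated or pending sector counts climbing across the two failures would point at a genuinely bad disk):

      # full SMART report, including reallocated/pending sector counts and the drive's error log
      smartctl -a /dev/sda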

    Read the article

  • How to use new disk space after extend attached SAN disk

    - by Edu Lomeli
    I have extended the space of my SAN vDisk from 1TB to 1.2TB, but Windows Explorer doesn't show the new size. After resizing the vDisk in the SAN Manager, the Disk Management utility showed the 200GB of unallocated space, so I resized the partition to use the unallocated space and get a 1.2TB partition. The process completed successfully, but in Windows File Explorer the disk still shows 1TB of total space. Win version: Windows Storage Server Enterprise 2007. Do I need to restart the server? How can I use the new extra space without rebooting?
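    This symptom (partition grown in Disk Management, Explorer still reporting the old size) usually means the NTFS file system inside the volume was never extended. A sketch of what could be tried before any reboot, typed at an elevated command prompt; the volume number is hypothetical, so pick the right one from list volume:

      diskpart
      DISKPART> list volume
      DISKPART> select volume 3
      DISKPART> extend filesystem

    The file system grows online, so no downtime should be needed.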

    Read the article

  • Why does StackExchange store images in imgur rather than its own servers? [migrated]

    - by martin's
    I am trying to understand the technical (and business) logic behind taking such an approach. Certainly SE isn't short of server or bandwidth resources. I don't think imgur is a CDN, so that can't be the reason. On the one hand one is giving up local control (meaning your files, your hardware) of the content. On the other, you don't have to use your own bandwidth, storage and resources. Then again, you depend on someone else for the reliability and up-time of your service.

    Read the article

  • Linux Disk Setup for VMs

    - by zjherner
    I've been trying to find the ideal way to set up disks/partitions for Linux guests on ESXi. It seems as though Linux is falling behind when it comes to easily adding disk space. The end goal is to be able to add disk space to a Linux server without rebooting the server or taking it offline. Ideally, I would expect adding disk space to a Linux machine to be as easy as adding it to a Windows machine: I expand the vmdk file from vSphere, open Disk Management, find the disk, and extend the volume. Having to use command-line tools on Linux is no big deal, but I haven't been able to find a solid way to expand filesystems on the fly. What is everyone else using for disk setups on their Linux guests? Has anyone been able to achieve adding storage space to Linux without downtime? Can it be done without using LVM?
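    For the record, this can be done online with LVM. A minimal sketch, assuming the new space is presented to the guest as an additional virtual disk (/dev/sdb) and the file system is ext4 on a logical volume named vg0/data; all of these names are hypothetical:

      # make the guest notice the new disk without a reboot (host0 is the relevant SCSI host)
      echo "- - -" > /sys/class/scsi_host/host0/scan
      pvcreate /dev/sdb                      # initialise the new disk for LVM
      vgextend vg0 /dev/sdb                  # add it to the existing volume group
      lvextend -l +100%FREE /dev/vg0/data    # grow the logical volume into the new space
      resize2fs /dev/vg0/data                # grow ext4 while it stays mounted

    Without LVM you would be growing a partition in place, which generally does require unmounting or a reboot, so LVM is what makes the no-downtime part practical.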

    Read the article

  • Flushing disk cache for performance benchmarks?

    - by Ido Hadanny
    I'm doing some performance benchmarking of a heavy SQL script running on Postgres 8.4 on an Ubuntu box (Natty). I'm experiencing some pretty unstable performance, even though I'm supposed to be the only one running on the machine (the same script on the exact same data might run in 20m and then 40m for no specific reason). So, remembering my distant DBA training, I decided I should flush the Postgres cache, using sudo /etc/init.d/postgresql restart, but it's still shaky! My question: maybe I'm missing some caches in my disk/OS? I'm using a NetApp appliance as my storage. Am I on the right track? Do I even want to make sure I get repeatable performance before I start tuning?
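    Restarting Postgres only empties its shared buffers; Linux keeps its own page cache, which is usually the larger one on a benchmark box. A minimal sketch of clearing it between runs, assuming root access (note this does nothing about the cache inside the NetApp itself):

      sync                                  # flush dirty pages to disk first
      echo 3 > /proc/sys/vm/drop_caches     # drop page cache, dentries and inodes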

    Read the article
