Search Results

Search found 6770 results on 271 pages for 'azure storage'.


  • Strategy to store/average logs of pings

    - by José Tomás Tocino
    I'm developing a site to monitor web services. The most basic type of check sends a ping and stores the response time in a CheckLog object. By default, PingCheck objects are triggered every minute, so in one hour you get 60 CheckLogs and in one day you get 1440. That's a lot of them, and I don't need that level of detail, so I've set up a collapsing mechanism that periodically takes the uncollapsed CheckLogs older than 24h and collapses (averages) them into 30-minute intervals. So if you have 360 CheckLogs saved from 0:00 to 6:00, after collapsing you retain just 12 of them. The problem is this: after averaging the response times, the graph changes drastically. What can I do to improve this? I guess one option could be narrowing the interval duration to 15 min. I've seen the graphs on the GitHub status page and they don't seem to suffer from this problem. I'd appreciate any information you could give me about this area.
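
    The collapsing step itself is straightforward; a minimal sketch follows, assuming a CheckLog reduces to a (timestamp, response_time) pair (the function and bucket size are illustrative, not the site's actual model). Keeping min/max alongside the mean is one way to stop spikes from disappearing once the graph switches to averaged data.

        from datetime import datetime, timedelta
        from statistics import mean

        INTERVAL = timedelta(minutes=30)

        def collapse(logs, interval=INTERVAL):
            """Average (timestamp, response_time) pairs into fixed-size buckets.

            Returns one (bucket_start, mean, min, max) tuple per interval."""
            epoch = datetime(1970, 1, 1)
            buckets = {}
            for ts, rt in logs:
                # Align each timestamp to the start of its interval.
                bucket_start = epoch + (ts - epoch) // interval * interval
                buckets.setdefault(bucket_start, []).append(rt)
            collapsed = []
            for start in sorted(buckets):
                values = buckets[start]
                # The min/max columns preserve the spikes a plain average flattens.
                collapsed.append((start, mean(values), min(values), max(values)))
            return collapsed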

    Read the article

  • Microsoft adds a new Build service to the hosted version of Team Foundation Server, enabling builds on Windows Azure

    Microsoft adds a new Build service to the hosted version of Team Foundation Server, enabling source code to be compiled on Windows Azure. Microsoft has updated the hosted version of Team Foundation Server on Windows Azure. Team Foundation Server is a collaborative work solution covering source control, builds, work item tracking, planning and performance analysis. A year ago, Microsoft brought the solution to its Windows Azure cloud hosting platform. The company has now announced that a new Build service has been added to Team Foundation. This service reduces the...

    Read the article

  • Azure VS Tools and SDK - systray already running…

    - by Shawn Cicoria
    If you get the message “Systray already running…” when you start the Compute Emulator from within Visual Studio, one fix is to check the image name the process is loading under. For some reason, on 2 of my machines the image was loading in the 8.3 format, which caused the logic in the VS tools to not find the process. So, to fix it, I just did a little copy/rename magic:

        C:\Program Files\Windows Azure SDK\v1.3\bin>copy csmonitor.exe csmonitor-a.exe
                1 file(s) copied.
        C:\Program Files\Windows Azure SDK\v1.3\bin>del csmonitor.exe
        C:\Program Files\Windows Azure SDK\v1.3\bin>copy csmonitor-a.exe csmonitor.exe
                1 file(s) copied.

    If you bring up Task Manager and see something like CSMON~1.EXE in the Image Name column, you probably have this issue.

    Read the article

  • How should a team share/store game content during development?

    - by irwinb
    Other than Dropbox, what out there has been especially useful for storing and sharing game content like images during development (with a similar feature set to Dropbox: working offline, automatic syncing, and support for Windows/OS X)? We are looking into hosting our own SharePoint server, but it seems to be really focused on documents... Maybe Box.net would work? EDIT: For code, we are using Git. To be more precise, I was looking for an easy, automatic way for content produced by artists/audio engineers to be available to everyone. Features like approvals of assets don't hurt either. Following the answer linked by Tetrad, Alienbrain looked pretty interesting but... is way out of our budget (may be something to invest in in the future). What we ended up doing: we were going to go with Box.net, but downloading the sync apps for desktop use required us to wait to be contacted by them for some reason. We did not have much time to wait, so we ended up going with Dropbox Teams. Box.net has a nice feature set, but we never really felt held back without them. Thanks for the help :).

    Read the article

  • Storing Projects on Google Drive (Cloud)

    - by JamesKraw
    I've started using Google Drive for my cloud needs and backing up pretty much everything. I've got the app installed, so it auto-syncs all my content in most things. My question is this: I am currently coding for iOS (although this applies to any coding project) and am split on storing my project files on Google Drive while using sync. My theory is that if I use it, I'd never have to worry about system crashes or code lost before backups; but if I do use it, it will be syncing a lot, and I thought there might be problems with it detecting changes and trying to sync, for example, halfway through compiling. Bandwidth isn't an issue, as I have a fast connection and an unlimited monthly allowance. Has anyone used this, or similar cloud-based syncing (Dropbox etc.), for this, and do you know whether it works or whether there are any potential problems?

    Read the article

  • Top Exastack Partner Headlines - 7 Nov

    - by Roxana Babiciu
    R-Style Softlab recommends RS-solutions with Exadata and SuperCluster to improve business performance for their customers. Read more. WingArc, a subsidiary of 1st Holdings, Inc., finds Oracle Exadata's substantial computing power an enabler for their enterprise output management solution (SVF/RD). It helps solve the problem of providing an integrated output platform for high-volume businesses. Read more.

    Read the article

  • The new Windows Azure has been available since this morning; the 90-day evaluation version is still valid

    New Windows Azure: persistent virtual machines running Linux. IaaS, and even more open-source technologies supported. Edit of 08/06/12: the new Windows Azure has been available since this morning (0:15 Paris time). Windows Azure, Microsoft's cloud platform for developers, continues to ramp up. Since yesterday, several new services have been officially announced. Among them, one of the most anticipated (and the one that fueled the most rumors) is the arrival of persistent virtual machines capable of running Linux distributions (Ubuntu, OpenSuse, CentOS, SUSE Linux Enterprise Server). Az...

    Read the article

  • Why use binary files to stack up different versions on DMSs?

    - by edgarator
    I've used both Liferay and Alfresco, trying to use them as the Document Management System for an intranet. I noticed the following: they use the file system and the database to store files; they use a GUID to name the file on the filesystem, and that GUID is used as an id in the database; the GUID-named file is a binary file; the GUID-named binary file stores all versions of a given file; the path for the file in the DMS doesn't match the one in the file system; and the URL references the GUID when a certain file is requested. What I want to know is why this is, and what the best way of doing it would be. For instance, how would you create the binary file (zip?), and what parts would you keep in the binary file versus store in the database (metadata, path?). I can assume some of the benefits of doing it like this, such as having the same URL for a file regardless of its current document path, and having only one file even if the file has changed names over time.
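
    One possible shape for that split (this is a sketch under my own assumptions - a SQLite table for the metadata and a GUID-named zip per document as the version container - not how Alfresco or Liferay actually implement it):

        import sqlite3
        import uuid
        import zipfile
        from pathlib import Path

        STORE = Path("filestore")
        STORE.mkdir(exist_ok=True)

        db = sqlite3.connect("dms.db")
        db.execute("""CREATE TABLE IF NOT EXISTS documents
                      (guid TEXT PRIMARY KEY, dms_path TEXT UNIQUE, versions INTEGER)""")

        def add_version(dms_path, content: bytes):
            """Append a new version of the document at dms_path.

            Content goes into a GUID-named zip on disk (one member per version);
            the logical path and version count live only in the database row."""
            row = db.execute("SELECT guid, versions FROM documents WHERE dms_path = ?",
                             (dms_path,)).fetchone()
            if row is None:
                guid, version = str(uuid.uuid4()), 1
                db.execute("INSERT INTO documents VALUES (?, ?, ?)", (guid, dms_path, 1))
            else:
                guid, version = row[0], row[1] + 1
                db.execute("UPDATE documents SET versions = ? WHERE guid = ?", (version, guid))
            with zipfile.ZipFile(STORE / guid, "a") as archive:
                archive.writestr(f"v{version}", content)
            db.commit()
            return guid, version

    Whatever the container format, the effect is the one you observed: the URL and the DMS path resolve to a GUID in the database, so renaming or moving the document only touches the metadata row, never the binary file.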

    Read the article

  • Oracle Virtual Compute Appliance Launch Channel Update Webcast - May 28

    - by Roxana Babiciu
    Join us for an Oracle Virtual Compute Appliance (OVCA) launch update for the channel. This training webcast is a follow up to the OVCA launch on April 16. We will provide a brief product overview of OVCA followed by some great OPN program content, resell criteria, OPN Incentive Program and Demo Equipment Program details. There will be two sessions to accommodate each region. Additionally, don’t miss the latest Oracle Virtual Compute Appliance article packed with great information!

    Read the article

  • Bing search API and Azure

    - by Gapton
    I am trying to programmatically perform a search on the Microsoft Bing search engine. Here is my understanding: There was a Bing Search API 2.0, which will be replaced soon (1st Aug 2012). The new API is known as Windows Azure Marketplace. You use different URLs for the two. In the old API (Bing Search API 2.0), you specify a key (Application ID) in the URL, and that key is used to authenticate the request. As long as you have the key as a parameter in the URL, you can obtain the results. In the new API (Windows Azure Marketplace), you do NOT include the key (Account Key) in the URL. Instead, you put in a query URL, and the server will then ask for your credentials. When using a browser, there will be a pop-up asking for an account name and password. The instructions were to leave the account name blank and insert your key in the password field. Okay, I have done all that and I can see JSON-formatted results of my search on my browser page. How do I do this programmatically in PHP? I tried searching for documentation and sample code in the Microsoft MSDN library, but I was either searching in the wrong place, or there are extremely limited resources there. Would anyone be able to tell me how you do the "enter the key in the password field in the pop-up" part in PHP, please? Thanks a lot in advance.
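
    The asker wants PHP, but the pattern is just HTTP Basic authentication with a blank user name and the Account Key as the password. Below is a minimal sketch of it in Python; the endpoint path and result fields follow the Azure Marketplace Bing Search schema as I recall it, so treat them as assumptions to verify against the service's own documentation. In PHP the same thing is typically done with curl, e.g. setting CURLOPT_USERPWD to ':' followed by the key.

        import base64
        import json
        import urllib.parse
        import urllib.request

        ACCOUNT_KEY = "YOUR-AZURE-MARKETPLACE-ACCOUNT-KEY"   # placeholder
        query = urllib.parse.quote("'azure storage'")        # the service expects the term in quotes
        url = ("https://api.datamarket.azure.com/Bing/Search/v1/Web"
               f"?Query={query}&$format=json")

        request = urllib.request.Request(url)
        # Same trick as the browser pop-up: blank user name, Account Key as password.
        credentials = base64.b64encode(f":{ACCOUNT_KEY}".encode()).decode()
        request.add_header("Authorization", f"Basic {credentials}")

        with urllib.request.urlopen(request) as response:
            results = json.load(response)
        for item in results["d"]["results"]:
            print(item["Title"], item["Url"])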

    Read the article

  • Cloud Backup: Getting the Users' Backs Up

    - by Tony Davis
    On Wednesday last week, Microsoft announced that as of July 1, all data transfers into its Microsoft Azure cloud will be free (though you have to pay for transferring data out). On Thursday last week, SQL Azure in Western Europe went down. It was a relatively short outage, but since SQL Azure currently provides no easy way to take a standard backup of a database and store it locally, many people had no recourse but to wait patiently for their cloud-based app to resume. It seems that Microsoft are very keen to encourage developers to move their data onto their cloud, but are developers ready to do it, given that such basic backup capabilities are lacking? Recently on Simple-Talk, Mike Mooney described a perfect use case for the Microsoft Cloud. They had a simple web-based application with a SQL Server backend; they could move the application to Windows Azure and the data into SQL Azure, and in the process free themselves from much of the hassle surrounding management and scaling of the hardware, network and so on. It was a great fit, and yet it nearly didn't happen; lack of support for the BACKUP command almost proved a show-stopper. Of course, backups of Azure databases are, and always have been, taken automatically for disaster recovery purposes, but these are strictly on-cloud copies and, as of now, it is not possible to use them to restore a database to a particular point in time. It seems that none of those clever Microsoft people managed to predict the need to perform basic backups of Azure databases so that copies could be stored locally, outside the Azure universe. At the very least, as Mike points out, performing a local backup before a new deployment is more or less mandatory. Microsoft did at least note the sound of gnashing teeth and, as a stop-gap measure, offered SQL Azure Database Copy, which basically allows you to create an online clone of your database, but this doesn't allow for storing local archives of the data. To that end MS has provided SQL Azure Import/Export, to package up and export a database and its data using BACPACs. These BACPACs do not guarantee transactional consistency; for example, if a child table is modified after the parent is copied, then the copied database will be in an inconsistent state (meaning, to add to the fun, BACPACs need to be created from a database copy). In any event, widespread problems with BACPAC's evil cousin, the DACPAC, have been well documented, and it seems likely that many will also give the BACPAC the bum's rush. Finally, in a TechEd 2011 presentation tagged "SQL Azure Advanced Administration", it was announced that "backup and restore" were coming in the next SQL Azure CTP. And yet this still doesn't mean that we'll get simple backups as DBAs know and love them. What it does mean, at least, is the ability to restore any given database to a point in time within a 2-week window. For the time being, if you want a local copy of your data and don't want to brave the BACPAC, you are left with SSIS or BCP, creative use of schema and data comparison tools, or use of SQL Azure Backup (currently in beta) in order to perform this simple but vital task. Cheers, Tony.

    Read the article

  • Connecting Dell PowerVault NAS to ESXi

    - by Matt Fitz
    I just got a Dell PowerVault NAS storage device running Windows Storage Server Standard, which includes NFS. When I try to connect the ESXi server, I get the following message: Call "HostDatastoreSystem.CreateNasDatastore" for object "ha-datastoresystem" on ESXi "powerhouse" failed. Operation failed, diagnostics report: Unable to complete Sysinfo operation. Please see the VMkernel log file for more details. I am pretty sure it is a username/password type thing, but I'm not sure where to begin. Also, I am planning on using "Username Mapping" instead of Active Directory. Any ideas would be greatly appreciated.

    Read the article

  • many partitions on a single filegroup? Does it make sense?

    - by river0
    Hi, I'm designing a data warehouse solution and I'm a newbie in disk configuration issues, so let me explain. Our storage is spread over 6 storage enclosures, each of them having 5 RAID-1 disk arrays, with 2 LUNs defined per disk array, which makes a total of 48 LUNs (this follows the Microsoft Fast Track recommendations for data warehouse architectures). I would like to partition my data; on other projects I have worked on before, we always followed a 1 partition - 1 filegroup rule. The Microsoft Fast Track recommendations advise creating a filegroup and then, for that filegroup, a data file per LUN... but I intend to have week-level partitioning; if I apply that rule, I think I'll get too many files and a complex layout. I'm thinking of creating just one filegroup (with the 48 LUN data files), but still creating the partitions, since I want to keep some of the benefits of partitioning, like partition switching... Is this scenario not recommended? What would you suggest?

    Read the article

  • Question regarding a NAS server and remote users

    - by JB
    I have a client that requires a massive amount of storage space but: doesn't want to spend very much money, and needs remote users (across the country) to be able to pull data from it and store to it as well. Can this be done with a NAS server such as the Western Digital Sharespace Network Storage System? I do not believe the client wants to spend over $1400, and HP is offering 8TB for $1299. Also, if anyone has any other ideas besides using NAS, please let me know. Thanks in advance for any help.

    Read the article

  • Windows 2008 terminal server - How to restrict access to DVD/floppy?

    - by test1839
    I have a very simple task. I need to block access to removable media (CD, DVD, floppy, USB drives, etc.) on a Windows 2008 R2 Terminal Server for users, and allow it for admins. I tried to enable the following policy in GPO: User Configuration/Administrative Templates/System/Removable Storage Access - "All Removable Storage classes: Deny all access" = Enabled. But it did not work. I tried different physical and virtual 2008 servers with the same result. It works on Windows 7 but not on Windows 2008. Has anyone had success with this setting on Windows 2008? Thank you

    Read the article

  • Server vendor that allows 3rd party disks

    - by Alvin S
    As noted here, Dell is no longer allowing 3rd party disks to be used with their latest servers. As in, they don't work, period. Which means that if you buy one of these boxes and want to upgrade the storage later, you have to buy disks from Dell at a significant premium. Dell has just given me a very strong reason to take my server business elsewhere. My company buys (instead of leasing) our servers, and typically uses them for 5 years. I need to be able to upgrade/repurpose storage periodically, and do not want to be locked in to whatever Dell might have in stock, at inflated prices to boot. As you will see in the comments of the above link, it seems HP is doing the same thing. I am looking for a server vendor that offers a 3-5 year warranty with same day/next day onsite service, and allows me to use 3rd party disks. Suggestions?

    Read the article

  • best filesystem for an AWS S3-like service

    - by gucki
    Hi! I need to build a fault-tolerant, highly available key/value storage (no POSIX, only the same functionality as S3) using cheap existing hardware. The storage should be able to handle several billion items. The maximum size of items is around 1GB; most are only several KB. What's the best software/filesystem for this task? I already had a brief look at MogileFS, MongoDB (GridFS) & GlusterFS, but I'm not really sure which is stable & fault tolerant enough. The simpler the setup and later expansion, the better :). Corin

    Read the article

  • Advantage of using nexenta vs. OpenSolaris

    - by jotango
    I am currently building a NAS for about 24 TB of storage. Video files, slow access, long-term storage. No performance issues. I am currently undecided between buying a JBOD case and installing OpenSolaris (because of ZFS), or purchasing a Nexenta license. The difference is about $12.500 in licenses over three years. What would you see as the main advantage in purchasing a Nexenta license, besides the support? Did Nexenta really enhance the basic OpenSolaris, or is it just a lot of marketing speak? No one really wanted to answer that question.

    Read the article

  • Alternatives to FTP

    - by Jack Hickerson
    I need to share files with clients outside of my business, and unfortunately our FTP server is becoming too much of a hassle (with regard to clients' use of an FTP client and creating password-protected downloads based on customized account privileges). Essentially, I need: a remote service that mimics an FTP server with a web interface (easy for basic internet users to comprehend), over 100GB of storage, file transfer sizes over 2GB, customizable user account privileges (password-protected downloads), secure storage and data transfer, and preferably less than $100/mo. I have already looked into some services that almost meet my requirements (StreamFile.com, box.net, onehub.com, filesanywhere.com) - has anyone used a service they would recommend? cheers, jack

    Read the article

  • Efficient organization of spare cables and hardware

    - by Jake Wharton
    As many of you also likely do, I have a growing collection of cables, hardware, and spare parts (screws, connectors, etc.). I'm looking for a good system of organization so that everything isn't a tangled mess, mismatched, and liable to be damaged. Since the three things listed above all have varying sizes and degrees of delicacy, this poses an interesting problem. Presently I have those cheap plastic storage bins you find at Wal-Mart for everything. Cables that were once wrapped neatly have become tangled due to numerous "I know I have a cable for this" moments. Hardware is mixed into other bins with odds and ends, with no protection from each other. NICs, CPUs, and HDDs are all interacting and likely causing damage. Finally, there are stray parts sprinkled amongst these two, both in plastic bags and loose. I'm looking to unify this storage into a controlled chaos. Here are my thoughts: Odds and ends are the easiest. Screws, connectors, and small electronic parts lend themselves perfectly to tackle boxes and jewelry boxes. Since these are usually dynamically compartmentalized, I can adjust for the contents and label them on the outside or inside of the lid. Cables are easily wrangled with short velcro strips, but that doesn't stop them from getting all mixed in together. Hardware is the worst offender. Size, shape, and degree of delicacy change with nearly every piece. I'm willing to sacrifice a bit of organization for something reasonably efficient. What are your thoughts? What is the best type of tackle or jewelry box to use? Most of them are cheap and flimsy. Is there a better alternative? How can I organize cables so I know exactly (within reason) where one is? What about associating cables with hardware (wall adapter to router, etc.)? What kind of storage unit lends itself to all shapes of hardware? Do I need to separate by size or degree of delicacy for better organization?

    Read the article

  • VMware ESXi 4 On-Disk Data Deduplication - possible and supported?

    - by hurikhan77
    Environment: We are running multiple web, database, and application servers which usually share a pretty common installation (Gentoo Linux) and similar configuration in VMware ESXi 4. The differences are usually only some installed features or differing component versions. To create a new server, I usually choose the most similar (by features) running server, rsync a copy of it into freshly mounted filesystems, run grub, reconfigure and reboot. Problem: Over time this duplicates many on-disk data blocks, which probably add up to several tens of gigabytes. I suppose that if I could use a base system as a template, with the actual machines layered on top of it and only writing changed blocks to some sort of "diff image", performance should improve (increased cache hit rate) and storage efficiency should increase (deduplicated storage space). This would be similar to what ESXi already supports for RAM deduplication (page sharing). Question: Is there any way to easily do this on ESXi 4? I already share the portage tree via NFS, but this would not work for the rootfs.

    Read the article

  • Fast, reliable data transfers from/to China

    - by Nils
    We are a small company and we will need to transfer rather large amounts of data (10GB+ each time) between Europe and China in the near future. As many may have experienced, Internet connections to or from China can be rather unreliable and slow at times without any apparent reason. For example, while sending data to China via FTP generally works well, it can be painfully slow in the other direction. Currently, we are investigating new ways to get high transfer rates in both directions. So far we have tried: FTP (see above); FTP over VPN services (generally slower than direct connections); F2F (like Retroshare or Freenet - slow!!); Aspera (fast but expensive!); BitTorrent (unreachable end nodes, because of firewalls which we must not configure). We would like to try: cloud storage (e.g. Amazon S3, Google Storage) - are those services always and reliably reachable from inside China? Point-to-point VPN (currently not possible, because of the network, see above). I'd be especially grateful to hear from people who have already dealt with this kind of problem.
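
    One cheap way to answer the "are S3 / Google Storage reliably reachable from inside China?" part is to leave a small probe running on a machine on the Chinese side for a few days. A sketch is below; the endpoint URLs are my assumption, and a single GET only measures reachability and latency, not sustained throughput - for that you'd time a real multi-megabyte transfer with the providers' SDKs at different hours.

        import time
        import urllib.error
        import urllib.request

        # Endpoints assumed for illustration; substitute the regions you'd actually use.
        ENDPOINTS = {
            "Amazon S3": "https://s3.amazonaws.com/",
            "Google Cloud Storage": "https://storage.googleapis.com/",
        }

        def probe(url, timeout=10):
            """Time one HTTPS round trip. Any HTTP status counts as reachable;
            only network-level failures (DNS, reset, timeout) count as unreachable."""
            start = time.monotonic()
            try:
                urllib.request.urlopen(url, timeout=timeout).close()
            except urllib.error.HTTPError:
                pass  # got a status code back, so the endpoint is reachable
            except OSError as exc:
                return None, exc
            return time.monotonic() - start, None

        for name, url in ENDPOINTS.items():
            elapsed, error = probe(url)
            if elapsed is None:
                print(f"{name}: unreachable ({error})")
            else:
                print(f"{name}: {elapsed:.2f}s")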

    Read the article

  • What disk setup is needed / best practice for hypervisor-only servers?

    - by Luke404
    Planning to buy some servers to run an hypervisor (Citrix XenServer or VMware vSphere, still have to decide between the two) we'd like to boot off the local redundant SD card module offered by various vendors (eg. Dell, HP, etc...). The actual VMs will run from an existing iSCSI SAN (which, by the way, can't support booting the servers directly off the SAN). What are the reasons, if any, to choose completely diskless servers VS having some local storage? And what would be the guidelines to choose that local storage? (number of spindles, raid level, etc)

    Read the article

  • Network Drive Via Ethernet Port for Speed?

    - by Yar
    I have a MacBook with FireWire 400 and USB 2.0, so the only way I can get fast external storage is through the Ethernet port. A really fast FireWire 800 drive on ANOTHER computer is actually much faster than the built-in drive (according to XBench). So I thought I would try to go one better and buy an Ethernet-ready drive. I bought a Seagate GoFlex™ Home Network Storage System, and it seems like the only way to get it to work is to plug it into a router. Can this drive be used without a router (i.e., direct to computer)? Are there any drives that can be plugged directly into the Ethernet port for fast access? I don't want the drive on my router: I want it on my computer. Ideally I'd need 7200rpm or faster, too... Update: Just chatted with Seagate, and they said that this particular drive will not work that way. Will any others?

    Read the article

  • Any experience with SATA SAS Interposer Cards?

    - by korkman
    Driven by the current price difference between SATA and SAS disks on one side, and the potentially bad behaviour of SATA disks in bigger storage arrays on the other, I have found so-called SATA-to-SAS interposer cards. Advertised as "seamlessly adding SAS capabilities to existing SATA disk drives", I wonder if anyone here has had some experience with these or similar products. The major benefits I can identify are the increased cable voltage (if all drives are SAS connected), the ability to power-cycle the drive, and multipath (if desired). Obviously the SATA drive will still have to be a RAID edition. The question is: do these cards indeed increase the overall reliability of a storage system, or will failing SATA disks cause trouble nevertheless? Edit: I'm not asking for hypothetical answers, only actual experience please. I'm well aware that the typical 10k SAS drive is more reliable (and better performing) than a 7200 rpm SATA drive. But how does a nearline SAS drive, which is physically the same disk as its SATA counterpart, compare to the SATA version with an interposer?

    Read the article
