Search Results

Search found 6317 results on 253 pages for 'persistent storage'.


  • Why are live USB Linux images not 100% persistent?

    - by gcb
    I understand that non-persistence served a purpose back in the CD-R days, but what purpose does it serve now that pretty much everyone uses USB flash drives? Not to mention that USB 3 sticks are roughly 4x faster than my HDD RAID. I'm writing this while taking a break from working through the Linux From Scratch guide, and I'm still baffled that persistence is not already the norm for all live images. So, is there any reason (besides a historical one) that I'm missing and that will bite me after I finish this ext3-rw image?

    Read the article

  • Is it possible to make a persistent USB serial port regardless of the existence of the USB device?

    - by Keith Nicholas
    USB serial ports, especially devices that emulate serial ports, don't quite behave the same as old serial ports, which causes a few problems with some software. Old serial ports always existed; they never came and went out of existence. USB devices that emulate serial ports, by contrast, come and go depending on whether they are powered, reset, and so on. Is there a way under Windows to make a USB serial port exist permanently regardless of the presence of the device (not just come back under the same name as before)?

    Read the article

  • How can I create a persistent SSH connection to "stream" commands over a period of time?

    - by Darth
    Say that I have an application running on one PC that sends commands via SSH to another PC on the network (both machines running Linux). For example, every time something happens on #1, I want to run a task on #2. In this setup I have to open a new SSH connection for every single command. Is there any simple way to do this with basic Unix tools, without programming a custom client/server application? Basically, all I want is to establish one connection over SSH and then send one command after another.

    Read the article

  • How to create persistent static route on Mac OS X 10.6?

    - by kopobamypa
    I need to add a static route on Mac OS X. I found a good description here, Permanent Static Route Mac OS X 10.4.0, and followed Roark Holz's (roarkh) solution. Now my problem: sometimes this solution works, sometimes it does not. When it doesn't work I see these messages after boot in the Console Messages log: 06.05.10 9:34:13 com.apple.launchd[1] *** launchd[1] has started up. *** 06.05.10 9:34:46 com.apple.SystemStarter[30] Adding Static Route to 10.152 06.05.10 9:34:46 com.apple.SystemStarter[30] route: writing to routing socket: Network is unreachable 06.05.10 9:34:46 com.apple.SystemStarter[30] add net 10.152.0.0: gateway 192.168.1.234: Network is unreachable I want to know what is going on. How can I troubleshoot this kind of problem?

    Read the article

  • How to make a persistent acknowledgment in Icinga/Nagios?

    - by blerontin
    I am using Icinga (Nagios fork) to also monitor uptime of external hosts and services. Currently when looking at the "Critical" count I find it difficult to decide if an internal service is affected (I should take immediate action) or an external service (I just acknowledge the problem). Is there a way to keep a problem acknowledgement for future down-times of the checked host/service? Is there some way to auto-acknowledge the state change of external hosts/services?

    Read the article

  • How to make RHEL use a persistent local HDD name?

    - by Mxx
    I have 2 identical Dell R720 servers running identical Oracle Enterprise Linux (RHEL) 6.4. Both servers are (supposedly) configured in exactly the same way. However, one of the servers is behaving differently: every other reboot, its local HDD name (and the related partitions) flips from /dev/sda to /dev/sdj. This is problematic because both servers are configured for multipathd, and when this flip happens their configs no longer match and Oracle DB (or its clusterware) complains that the nodes are not configured identically. Why does one server have consistent device names while the other keeps flipping back and forth? How can I make the local HDD consistently show up as /dev/sda? I suspect this might have something to do with udev, but I'm not sure.

    Read the article

  • How can I keep persistent cookies from certain domains only?

    - by Mike L.
    This question is similar to this one, which covers Firefox, but I want to know how to do it in Chrome: I want Chrome to clear cookies from all sites except those from certain domains. In the Cookies section of Content Settings I've made the following selections: (*) Allow local data to be set (recommended) ( ) Allow local data to be set for the current session only ( ) Block sites from setting any data [ ] Block third-party cookies and site data [x] Clear cookies and other site and plug-in data when I close my browser. After logging in to my preferred website(s), I find the required domains listed when I click on All cookies and site data. Let's say I find some cookies for mysite.com and www.mysite.com. Now I click on Manage exceptions and enter these items: Hostname Pattern Behavior ------------------------------------------- mysite.com Allow www.mysite.com Allow Unfortunately, this does not seem to work, because when I close Chrome and reopen it all cookies are gone, even those from the configured mysite.com hosts.

    Read the article

  • Is there a way to store and access a SQL CE (.sdf) database in isolated storage?

    - by phredtalkpointcom
    I have a .NET application which must store all its local data in isolated storage. I want to start using SQL CE to store this data, but I can't find any documentation on how (or even if) this is possible. Is it possible to use isolated storage to store a SQL CE database? If so, what would the connection string look like (or is there some other way you would need to open the database)?

    Read the article

  • Is there a way to use SharePoint as the back-end versioning and storage for my custom document management application?

    - by Aaron Palmer
    I want to build a custom document management web application that ties in with SharePoint for the actual document versioning and storage. I'm hoping for something like a SharePoint widget that I can plug into my web application that would let me download documents, make edits to them, and upload them back, with SharePoint handling all of the versioning and storage. If WSS is the answer to this, are there licensing issues that I need to consider? Thanks.

    Read the article

  • Personal cloud storage options... HELP!

    - by rhaddan
    I'm looking for some personal cloud storage options. My biggest concern about moving to a hosted storage solution is the long-term viability of the provider. Has anyone used a cloud service that you're crazy about? I'm a Mac user, so I need to have something that will work on the Mac OS and ideally the iPhone as well.

    Read the article

  • When using SQL Compact on Windows Mobile, do you store the sdf file on a storage card?

    - by Michal Drozdowicz
    Having had some SQL Compact database corruption issues in the past and gone through the article on these, I got the impression that storing the database .sdf file on a storage card significantly increases the risk of data loss due to corruption. Do you store the .sdf file on a storage card? Have you had any issues caused by it? What should I pay attention to when recommending a particular brand or model of SD card with regard to stability and security for use with SQL Compact?

    Read the article

  • MVC: Why put the business logic in the model? What happens when I have multiple types of storage?

    - by Steffen Winkler
    I always thought that the business logic has to be in the controller and that the controller, since it is the 'middle' part, stays static, while the model and view have to be encapsulated via interfaces. That way you could change the business logic without affecting anything else, write multiple models (one for each database/type of storage), and write dozens of views (for different platforms, for example). Now I read in this question that you should always put the business logic into the model and that the controller is deeply connected with the view. To me that doesn't really make sense; it implies that each time I want to support another database/type of storage I have to rewrite my whole model, including the business logic, and if I want another view I have to rewrite both the view and the controller. Can someone explain why that is, or where I went wrong? Currently the whole thing doesn't really make sense to me.
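
    To make the trade-off concrete, here is a minimal JavaScript sketch (all names are hypothetical and not taken from any particular framework): the business rules live in the model, and the model depends only on a small storage contract, so supporting another type of storage means writing another adapter rather than rewriting the logic.

```javascript
// Hypothetical sketch: the model owns the business rules and only talks to a
// storage interface, so swapping databases means writing a new adapter,
// not rewriting the model.

// Two interchangeable storage adapters exposing the same load/save contract.
function makeMemoryStore() {
  const rows = new Map();
  return {
    load: (id) => rows.get(id) || null,
    save: (id, data) => { rows.set(id, data); },
  };
}

function makeLocalStorageStore(prefix) {
  return {
    load: (id) => JSON.parse(window.localStorage.getItem(prefix + id) || "null"),
    save: (id, data) => window.localStorage.setItem(prefix + id, JSON.stringify(data)),
  };
}

// The model: business logic lives here and is storage-agnostic.
function makeAccountModel(store) {
  return {
    deposit(id, amount) {
      if (amount <= 0) throw new Error("Deposit must be positive"); // business rule
      const account = store.load(id) || { balance: 0 };
      account.balance += amount;
      store.save(id, account);
      return account.balance;
    },
  };
}

// The controller would just wire a request to the model; it never touches storage.
const accounts = makeAccountModel(makeMemoryStore());
console.log(accounts.deposit("alice", 50)); // 50
```

    Swapping makeMemoryStore() for makeLocalStorageStore("acct:") changes the storage type without touching the deposit rule, which is the point of keeping the business logic in the model behind an interface.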

    Read the article

  • CodePlex Daily Summary for Monday, February 21, 2011

    CodePlex Daily Summary for Monday, February 21, 2011Popular ReleasesA2Command: 2011-02-21 - Version 1.0: IntroductionThis is the full release version of A2Command 1.0, dated February 21, 2011. These notes supersede any prior version's notes. All prior releases may be found on the project's website at http://a2command.codeplex.com/releases/ where you can read the release notes for older versions as well as download them. This version of A2Command is intended to replace any previous version you may have downloaded in the past. There were several bug fixes made after Release Candidate 2 and all...Chiave File Encryption: Chiave 0.9: Application for file encryption and decryption using 512 Bit rijndael encyrption algorithm with simple to use UI. Its written in C# and compiled in .Net version 3.5. It incorporates features of Windows 7 like Jumplists, Taskbar progress and Aero Glass. Feedbacks are Welcome!....Rawr: Rawr 4.0.20 Beta: Rawr is now web-based. The link to use Rawr4 is: http://elitistjerks.com/rawr.phpThis is the Cataclysm Beta Release. More details can be found at the following link http://rawr.codeplex.com/Thread/View.aspx?ThreadId=237262 As of the 4.0.16 release, you can now also begin using the new Downloadable WPF version of Rawr!This is a pre-alpha release of the WPF version, there are likely to be a lot of issues. If you have a problem, please follow the Posting Guidelines and put it into the Issue Trac...Azure Storage Samples: Version 1.0 (February 2011): These downloads contain source code. Each is a complete sample that fully exercises Windows Azure Storage across blobs, queues, and tables. The difference between the downloads is implementation approach. Storage DotNet CS.zip is a .NET StorageClient library implementation in the C# language. This library come with the Windows Azure SDK. Contains helper classes for accessing blobs, queues, and tables. Storage REST CS.zip is a REST implementation in the C# language. The code to implement R...MiniTwitter: 1.66: MiniTwitter 1.66 ???? ?? ?????????? 2 ??????????????????? User Streams ?????????View Layout Replicator for Microsoft Dynamics CRM 2011: View Layout Replicator (1.0.119.18): Initial releaseWindows Phone 7 Isolated Storage Explorer: WP7 Isolated Storage Explorer v1.0 Beta: Current release features:WPF desktop explorer client Visual Studio integrated tool window explorer client (Visual Studio 2010 Professional and above) Supported operations: Refresh (isolated storage information), Add Folder, Add Existing Item, Download File, Delete Folder, Delete File Explorer supports operations running on multiple remote applications at the same time Explorer detects application disconnect (1-2 second delay) Explorer confirms operation completed status Explorer d...Advanced Explorer for Wp7: Advanced Explorer for Wp7 Version 1.4 Test8: Added option to run under Lockscreen. 
Fixed a bug when you open a pdf/mobi file without starting adobe reader/amazon kindle first boost loading time for folders added \Windows directory (all devices) you can now interact with the filesystem while it is loading!Game Files Open - Map Editor: Game Files Open - Map Editor Beta 2 v1.0.0.0: The 2° beta release of the Map Editor, we have fixed a big bug of the files regen.Document.Editor: 2011.6: Whats new for Document.Editor 2011.6: New Left to Right and Left to Right support New Indent more/less support Improved Home tab Improved Tooltips/shortcut keys Minor Bug Fix's, improvements and speed upsCatel - WPF and Silverlight MVVM library: 1.2: Catel history ============= (+) Added (*) Changed (-) Removed (x) Error / bug (fix) For more information about issues or new feature requests, please visit: http://catel.codeplex.com =========== Version 1.2 =========== Release date: ============= 2011/02/17 Added/fixed: ============ (+) DataObjectBase now supports Isolated Storage out of the box: Person.Save(myStream) stores a whole object graph in Silverlight (+) DataObjectBase can now be converted to Json via Person.ToJson(); (+)...??????????: All-In-One Code Framework ??? 2011-02-18: ?????All-In-One Code Framework?2011??????????!!http://i3.codeplex.com/Project/Download/FileDownload.aspx?ProjectName=1code&DownloadId=128165 ?????,?????AzureBingMaps??????,??Azure,WCF, Silverlight, Window Phone????????,????????????????????????。 ???: Windows Azure SQL Azure Windows Azure AppFabric Windows Live Messenger Connect Bing Maps ?????: ??????HTML??? ??Windows PC?Mac?Silverlight??? ??Windows Phone?Silverlight??? ?????:http://blog.csdn.net/sjb5201/archive/2011...Image.Viewer: 2011: First version of 2011Silverlight Toolkit: Silverlight for Windows Phone Toolkit - Feb 2011: Silverlight for Windows Phone Toolkit OverviewSilverlight for Windows Phone Toolkit offers developers additional controls for Windows Phone application development, designed to match the rich user experience of the Windows Phone 7. Suggestions? Features? Questions? Ask questions in the Create.msdn.com forum. Add bugs or feature requests to the Issue Tracker. Help us shape the Silverlight Toolkit with your feedback! Please clearly indicate that the work items and issues are for the phone t...VsTortoise - a TortoiseSVN add-in for Microsoft Visual Studio: VsTortoise Build 29 Beta: Note: This release does not work with custom VsTortoise toolbars. These get removed every time when you shutdown Visual Studio. (#7940) Build 29 (beta)New: Added VsTortoise Solution Explorer integration for Web Project Folder, Web Folder and Web Item. Fix: TortoiseProc was called with invalid parameters, when using TSVN 1.4.x or older #7338 (thanks psifive) Fix: Add-in does not work, when "TortoiseSVN/bin" is not added to PATH environment variable #7357 Fix: Missing error message when ...Sense/Net CMS - Enterprise Content Management: SenseNet 6.0.3 Community Edition: Sense/Net 6.0.3 Community Edition We are happy to introduce you the latest version of Sense/Net with integrated ECM Workflow capabilities! In the past weeks we have been working hard to migrate the product to .Net 4 and include a workflow framework in Sense/Net built upon Windows Workflow Foundation 4. 
This brand new feature enables developers to define and develop workflows, and supports users when building and setting up complicated business processes involving content creation and response...thinktecture WSCF.blue: WSCF.blue V1 Update (1.0.11): Features Added a new option that allows properties on data contract types to be marked as virtual. Bug Fixes Fixed a bug caused by certain project properties not being available on Web Service Software Factory projects. Fixed a bug that could result in the WrapperName value of the MessageContractAttribute being incorrect when the Adjust Casing option is used. The menu item code now caters for CommandBar instances that are not available. For example the Web Item CommandBar does not exist ...AllNewsManager.NET: AllNewsManager.NET 1.3: AllNewsManager.NET 1.3. This new version provide several new features, improvements and bug fixes. Some new features: Online Users. Avatars. Copy function (to create a new article from another one). SEO improvements (friendly urls). New admin buttons. And more...Facebook Graph Toolkit: Facebook Graph Toolkit 0.8: Version 0.8 (15 Feb 2011)moved to Beta stage publish photo feature "email" field of User object added new Graph Api object: Group, Event new Graph Api connection: likes, groups, eventsDJME - The jQuery extensions for ASP.NET MVC: DJME2 -The jQuery extensions for ASP.NET MVC beta2: The source code and runtime library for DJME2. For more product info you can goto http://www.dotnetage.com/djme.html What is new ?The Grid extension added The ModelBinder added which helping you create Bindable data Action. The DnaFor() control factory added that enabled Model bindable extensions. Enhance the ListBox , ComboBox data binding.New ProjectsAuto-mobile Forum: Automobileforum is a website regarding cars where users can share their ideas about cars and whatever information they want about a car they can get that from this website. The language used for development of this website is ASP.NET Azure Storage Samples: Azure Storage Samples are Windows Azure Storage code samples. They show how to perform every Windows Azure Storage operation for blobs, queues, and tables. There are 2 identical implementations, one using REST and the other using the .NET Storage Client Library (both in C#).b1234: b1234 is b1234betrayal: bahothBungee Jumper: This is a community website for a sport known as Bungee Jumping which is one of the most dangerous adventure sport. Chiave File Encryption: Application for file encryption and decryption using 512 Bit Rijndael encryption algorithm with simple UI to use. Its written in C# and compiled in .Net version 3.5, it incorporates features of Windows 7 like Jumplists, Taskbar progress and Aero Glass.DecEncIT: DecEncIT rend l’expérience de l’utilisateur commun avec la cryptographie (cryptage) plus aisée. Pas de soucies pour les réglages et les termes techniques, vagues et difficiles à comprendre ; inclus également une option pour automatiser toutes les tâches recommandées. VB.Net/3.5ELearningTutorial: A list of tutorialsExtjs Practice - Open our extj journey: ????extjs 1.demo ?? 2.partice ?? 3.work ??Functional: This class library for .NET 3.5 and 4.0 provides extension methods for delegates, to help with functional style programming.HauntedNetworking: HauntedNetworking is a site where people can share their paranormal experiences and related photographs. 
They can also share pictures of haunted and scary places across the world.KharaSoft Nexus: Nexus is the last inbox you'll ever need.Kojax: kojax projectLEAD - Learn Execute And Develop: It’s a portal to share your knowledge, learn from blogs, wikis, post your questions in query board, learn new things under video training, Execute and develop these, helping gaining knowledge.Luac For 5.1 ????: luac??????? Lunar Lander: Simple WPF Lunar Lander simulator, with reactive intelligent agent autopilot.MasterSchool: Tworzenie swiadectwmovies blogspot: The Movies blog- spot is an interactive site for the information related to movies, actors, personnel and fictional characters features. MvcTemplates: MvcTemplates is a project devoted to creating a complete suite of Razor Editor and Display templates that utilize attributes, built in models, and jQuery. It is developed in C# and packages the templates into an easy to use dll.MySearcher: Quickly and easily search the major search engines with MySearcher. No clicking around, no switching windows; No matter how many windows you have open, or what you're doing, Searching the major search engines is as simple as moving your mouse to the side of the screen!Photography in Kerala: Photography in Kerala's a website which makes easier for people who are into naturistic Photography to share their photos and other Info of Kerala(also called God's own country).You'll no longer have to search about Kerala,Or share your own info/pics here.Developed in ASP.Net.Pixels(Storage Engine): Pixels(Storage Engine)PortsManager: C# Project to Simplify working with Ports Currnetly it's working with COMs in Receive only and soon it'll be updated to work in send and expand the library to handle send informations even in network.Public Web Browser: Public Web Browser is a webbrowser client with alot of featuresRealStatus - MS Communicator 2007 & A Traffic Light Indicator Hardware: A demonstration of a traffic light indicator hardware corresponding to status events of MS Communicator in real-time. A demo video can be found here: http://www.youtube.com/watch?v=GwgfSVeXVksSABnzbdNet: A SABnzbd monitor and administrator. Built in C# WPF.Sendill: Kerfi fyrir sendibílastöðvarT4 Metadata and Data Annotations Template: This T4 template handles generating metadata classes from an Entity Framework 4 model and decorates primitive and navigation properties with data annotation attributes such as [Required] and [StringLength]. The [DataType] attribute is also applied when appropriate.urlshorten service: urlshorten service need storoom @ http://storoom.codeplex.com/ developed in vb.netuVersionClientCache: uVersionClientCache is a custom macro to always automatically version (URL querstring parameter) your files based on a MD5 hash of the file contents or file last modified date to prevent issues with client browsers caching an old file after you have changed it.Video Game Network Library: A lightweight network library written in C++ for video games.View Layout Replicator for Microsoft Dynamics CRM 2011: View Layout Replicator make it easier for Microsoft Dynamics CRM 2011 customizers to copying the layout of a view and paste it to the layout of other views in the same entityVLC MakeIT: This is virtual learning center yogaasana4you: The website is on Yoga Asana for the community who want to feel the amazing experience of Yoga asana - Yoga Exercises and Postures.

    Read the article

  • Are ZFS snapshots + S3 a viable backup system for several VMs and general fileserver storage?

    - by AllanA
    I've been tasked with setting up a backup system for my small office (around 12 people). Most of our production stuff is on the AWS cloud, so what I need to back up are some small office/development files (under 100 GB right now), plus our operational and development VMs, which round out to a bit under 1 TB. I just need something reliable, convenient, and straightforward. I'm comfortable with Linux, FreeBSD, and to some extent Solaris 10, so I'm leaning toward a full server rather than an appliance system like Openfiler or FreeNAS. What I'm contemplating is a small fileserver for general storage and nightly backups of the virtual machines, followed up by an offsite backup to Amazon's S3 storage service. It'd be the usual incremental backups nightly and a full backup weekly. My question is whether ZFS snapshots, kept locally and also dumped to S3 via 'zfs send [-i]', are viable as a backup tool, or should I stick with Duplicity or some other method entirely? ZFS snapshots on the internal fileserver/backup machine sound like a perfect way to provide quick and convenient data recovery, so I'm likely to go with that for local redundancy. (If you folks see scenarios where relying on ZFS snapshots would be worse than a more traditional archiving backup, feel free to convince me.) But are snapshots flexible enough to lean on for recovery from the loss of my backup server, or am I better off with something more traditional? (Feel free to recommend free or commercial backup solutions you favor.)

    Read the article

  • How to format my external HDD back to "removable storage"?

    - by user990106
    Recently I formatted my Seagate FreeAgent GoFlex external HDD in Mac OS X using a GUID partition table, since I wanted to install another copy of Mac OS X onto it. However, I changed my mind after the drive had been formatted. Now I want to format the external HDD back to NTFS so that I can use it with Windows 7. After I connected the drive via USB it didn't show up in "Computer", so I used Disk Management to check what was wrong with it. In Disk Management I saw that the drive had one partition called "EFI partition", which I could not delete there. So I used "diskpart" in cmd, selected the external HDD, and ran the "clean" command. The EFI partition was then gone and I created a new volume on the drive. After the volume was created the drive did show up in "Computer", but under "Hard Disk Drives" rather than under "Devices with Removable Storage" as it used to. I'm wondering if I can do anything to make it be recognized as a device with removable storage again?

    Read the article

  • Is Berkeley DB a NoSQL solution?

    - by Gregory Burd
    Berkeley DB is a library. To use it to store data you must link the library into your application. You can use most programming languages to access the API; the calls across these APIs generally mimic the Berkeley DB C API, which makes perfect sense because Berkeley DB is written in C. The inspiration for Berkeley DB was the DBM library, a part of the earliest versions of UNIX, written by AT&T's Ken Thompson in 1979. DBM was a simple key/value, hashtable-based storage library. In the early 1990s, as BSD UNIX was transitioning from version 4.3 to 4.4 and retrofitting commercial code owned by AT&T with unencumbered code, it was the future founders of Sleepycat Software who wrote libdb (aka Berkeley DB) as the replacement for DBM. The problem it addressed was fast, reliable local key/value storage. At that time databases almost always lived on a single node; even the most sophisticated databases only had simple two-node fail-over solutions. If you had a lot of data to store you would choose between the few commercial RDBMS solutions and writing your own custom solution. Berkeley DB took the headache out of the custom approach. These basic market forces inspired other DBM implementations. There was the "New DBM" (ndbm) and the "GNU DBM" (GDBM) and a few others, but the theme was the same. Even today TokyoCabinet calls itself "a modern implementation of DBM", mimicking, and improving on, something first created over thirty years ago. In the mid-1990s, DBM was the name for what you needed if you were looking for fast, reliable local storage. Fast forward to today. What's changed? Systems are connected over fast, very reliable networks. Disks are cheap, fast, and capable of storing huge amounts of data. CPUs continued to follow Moore's Law; processing power that filled a room in 1990 now fits in your pocket. PCs, servers, and other computers proliferated both in business and in the personal market. In addition to the new hardware, entire markets, social systems, and new modes of interpersonal communication moved onto the web and started evolving rapidly. These changes caused a massive explosion of data and a need to analyze and understand that data. Taken together, this resulted in an entirely different landscape for database storage, and new solutions were needed. A number of novel solutions stepped up and eventually a category called NoSQL emerged. The new market forces inspired the CAP theorem and the heated debate of BASE vs. ACID, but in essence this was simply the market looking at what to trade off to meet these new demands. These new database systems shared many qualities. They were designed to address massive amounts of data, handle millions of requests per second, and scale out across multiple systems. The first large-scale and successful solution was Dynamo, Amazon's distributed key/value database. Dynamo essentially took the next logical step and added a twist: Dynamo was to be the database of record; it would be distributed, data would be partitioned across many nodes, and it would tolerate failure by avoiding single points of failure. Amazon did this because they recognized that the majority of the dynamic content they provided to customers visiting their web storefront didn't require the services of an RDBMS. The queries were simple key/value look-ups or simple range queries, with only a few queries that required more complex joins.
    They set about to use relational technology only in places where it was the best solution for the task, places like accounting and order fulfillment, but not in the myriad of other situations. The success of Dynamo, and its design, inspired the next generation of non-SQL, distributed database solutions, including Cassandra, Riak, and Voldemort. The problem their designers set out to solve was "reliability at massive scale", so the first focal point was distributed database algorithms. Underneath Dynamo there is a local transactional database: either Berkeley DB, Berkeley DB Java Edition, MySQL, or an in-memory key/value data structure. Dynamo was an evolution of local key/value storage onto networks. Cassandra, Riak, and Voldemort all faced similar design decisions, and one of them, Voldemort, chose Berkeley DB Java Edition for its node-local storage. Riak at first was entirely in-memory, but has recently added write-once, append-only, log-based on-disk storage, a similar type of storage to Berkeley DB except that it is based on a hash table which must reside entirely in memory, rather than a btree which can live in memory or on disk. Berkeley DB evolved too: we added high availability (HA) and a replication manager that makes it easy to set up replica groups. Berkeley DB's replication doesn't partition the data; every node keeps an entire copy of the database. For consistency, there is a single node where writes are committed first (a master), then those changes are delivered to the replica nodes as log records. Applications can choose to wait until all nodes are consistent, or fire and forget, allowing Berkeley DB to become eventually consistent. Berkeley DB's HA scales out quite well for read-intensive applications and also effectively eliminates the central point of failure by allowing replica nodes to be elected (using a Paxos algorithm) to mastership if the master should fail. This implementation covers a wide variety of use cases. MemcacheDB is a server that implements the memcache network protocol but uses Berkeley DB for storage and HA to replicate the cache state across all the nodes in the cache group. Google Accounts, the user authentication layer for all Google properties, was until recently running Berkeley DB HA; that scaled to a globally distributed system. That said, most NoSQL solutions try to partition (shard) data across the nodes in the replication group, and some allow writes as well as reads at any node; Berkeley DB HA does not. So, is Berkeley DB a "NoSQL" solution? Not really, but it certainly is a component of many of the existing NoSQL solutions out there. Forgetting all the noise about how NoSQL solutions are complex distributed databases, when you boil them down to a single node you still have to store the data in some form of stable local storage. DBMs solved that problem a long time ago. NoSQL has more to do with the layers on top of the DBM: the distributed, sometimes-consistent, partitioned, scale-out storage that manages key/value or document sets and generally has some form of simple HTTP/REST-style network API. Does Berkeley DB do that? Not really. Is Berkeley DB a "NoSQL" solution today? Nope, but it's the most robust solution on which to build such a system. Re-inventing node-local data storage isn't easy. A lot of people are starting to appreciate the sophisticated features found in Berkeley DB, and even mimic them in some cases. Could Berkeley DB grow into a NoSQL solution? Absolutely.
    Our key/value API could be extended over the net using any of a number of existing network protocols, such as memcache or HTTP/REST. We could adapt our node-local data partitioning out over replicated nodes. We even have a nice query language and cost-based query optimizer in our BDB XML product that we could reuse were we to build out a document-based NoSQL-style product. XML and JSON are not so different that we couldn't adapt one to work with the other interchangeably. Without too much effort we could add what's missing; we could jump into this NoSQL market within a single product development cycle. Why isn't Berkeley DB already a NoSQL solution? Why aren't we working on it? Why indeed...

    Read the article

  • Microsoft Offloaded Data Transfer (ODX)

    - by Charles Cline
    For all you admins and other technical people out there who have watched the Windows OS spool data from network storage to your workstation and then back to network storage, watch for Offloaded Data Transfer (ODX). I saw ODX at TechEd a few weeks ago, and the data movement is kept primarily on the back-end storage network. EMC and other storage vendors are already posting about when they will have this functionality. Here's some information about it: http://msdn.microsoft.com/en-us/library/windows/desktop/hh848056(v=vs.85).aspx

    Read the article

  • Oracle storage seminar report (9 August 2012)

    - by kazun
    Notes from an Oracle Japan storage seminar held on 9 August 2012 (the original post was in Japanese and most of the text has not survived in this copy). The recoverable points: (1) rapid data growth and the Sun ZFS Storage Appliance line, which passed the 100-petabyte mark as of 9 August; (2) long-term archiving to tape with the StorageTek SL150 modular tape library announced in July, scaling from 30 to 300 slots (figures of roughly 450 TB and 10.1 TB are quoted); (3) storage designed around Oracle Database, with the Sun ZFS Storage Appliance and Pillar Axiom providing features such as Hybrid Columnar Compression, snapshot/clone, and I/O Quality of Service, plus Oracle Optimized Solutions connecting a Sun ZFS Storage 7120 Appliance to Exadata over InfiniBand; (4) Hybrid Columnar Compression (HCC), previously tied to Exadata storage, is now supported on the Sun ZFS Storage Appliance and Pillar Axiom, reducing database size and I/O.

    Read the article

  • Exchange 2007 Standard Edition

    - by Phrontiste
    We have: Exchange 2007 Standard Edition, IBM System x3650, 2 x Intel Xeon 5430 2.66 GHz, Version 8.1 Build 240.6, with the Mailbox, Hub Transport, and Client Access roles installed on one box. Total number of mailboxes: 110 - 130. 6 physical disks: Disk 0,1 (68 GB) = RAID-1, OS partition (C: partition); Disk 2,3 (279 GB) = RAID-1, Exchange database (First and Second Storage Groups) (D: partition); Disk 4,5 (68 GB) = RAID-1, Exchange transaction logs (E: partition). Setup: Storage Groups: D:\First Storage group\Mailbox database.edb; Storage Groups: D:\Second Storage Group\Public Folder Database.edb; Transaction Logs: E partition. Problem 1: On our D partition (the mailbox database partition), the total size is 279 GB and the free space remaining is 64.7 GB. When I select the First Storage Group and Second Storage Group folders and right-click Properties, they report a size of 165 GB. The mailbox database reports a size of 157 GB when right-clicked for Properties, whereas the size displayed in the folder is 164,893,456 KB. So we are missing around 50-54 GB; there is nothing else on these drives, no page file, nothing at all. The partition housing the transaction logs is reporting its sizes accurately. Any suggestions/fixes for the above? Problem 2: As you may have already read in Problem 1, the size of the mailbox database is 157 GB (or 164 GB as reported), which is not recommended. a) What would you suggest we do to divide mailboxes into storage groups on this same server? b) How would we move mailboxes into different storage groups? c) Is this the information store size? (Am I right in thinking that this is not recommended?) d) Would having multiple storage groups with one mailbox DB in each reduce the size of the information store? e) Any suggestions on how to reduce the size of the information store? We didn't install this, we inherited it; what other recommendations can you make to keep us better prepared for any server disaster? We are backing up with Yosemite Backup to an RD1000 (320 GB) at the moment, which is backing up successfully and flushing the logs daily. We haven't done a test restore YET. I have tried to provide as much info as possible; please let me know if you need more. Also, we haven't yet faced any problems with mail flow or access speeds, everything is working fine, and we have two to five people accessing OWA or Outlook via VPN only. Thanks for your time reading the above; I look forward to your expert suggestions.

    Read the article

  • How can I write JavaScript cookies to keep the data persistent after page reloads on my form?

    - by Johhny Thero
    Hello, I am trying to learn how to write cookies to keep the data in my CookieButton1 button persistent and to survive refreshes and page reloads. How can I do this in JavaScript? I have supplied my source code. Any advise, links or tutorials will be very helpful. If you navigate to http://iqlusion.net/test.html and click on Empty1, it will start to ask you questions. When finished it stores everything into CookieButton1. But when I refresh my browser the data resets and goes away. Thanks! <html> <head> <title>no_cookies> </head> <script type="text/javascript" > var Can1Set = "false"; function Can1() { if (Can1Set == "false") { Can1Title = prompt("What do you want to name this new canned response?",""); Can1State = prompt("Enter a ticket state (open or closed)","closed"); Can1Response = prompt("Enter the canned response:",""); Can1Points = prompt("What point percentage do you want to assign? (0-10)","2.5"); // Set the "Empty 1" button text to the new name the user specified document.CookieTest.CookieButton1.value = Can1Title; // Set the cookie here, and then set the Can1Set variable to true document.CookieTest.CookieButton1 = "CookieButton1"; Can1Set = true; }else{ document.TestForm.TestStateDropDownBox.value = Can1State; document.TestForm.TestPointsDropDownBox.value = Can1Points; document.TestForm.TestTextArea.value = Can1Response; // document.TestForm.submit(); } } </script> <form name=TestForm> State: <select name=TestStateDropDownBox> <option value=new selected>New</option> <option value=open selected>Open</option> <option value=closed>Closed</option> </select> Points: <select name=TestPointsDropDownBox> <option value=1>1</option> <option value=1.5>1.5</option> <option value=2>2</option> <option value=2.5>2.5</option> <option value=3>3</option> <option value=3.5>3.5</option> <option value=4>4</option> <option value=4.5>4.5</option> <option value=5>5</option> <option value=5.5>5.5</option> <option value=6>6</option> <option value=6.5>6.5</option> <option value=7>7</option> <option value=7.5>7.5</option> <option value=8>8</option> <option value=8.5>8.5</option> <option value=9>9</option> <option value=9.5>9.5</option> <option value=10>10</option> </select> <p> Ticket information:<br> <textarea name=TestTextArea cols=50 rows=7></textarea> </form> <form name=CookieTest> <input type=button name=CookieButton1 value="Empty 1" onClick="javascript:Can1()"> </form>
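
    As a starting point, here is a minimal sketch of the cookie-writing approach (the helper names are made up and are not part of the page above): persist each prompted value with document.cookie and an expiry date, then read it back on page load to restore the button.

```javascript
// Minimal sketch of persisting the canned-response fields with document.cookie
// so they survive a page reload. Helper names are hypothetical; the page's
// Can1() function could call them after the prompts.
function setCookie(name, value, days) {
  var expires = new Date(Date.now() + days * 24 * 60 * 60 * 1000).toUTCString();
  document.cookie = name + "=" + encodeURIComponent(value) +
                    "; expires=" + expires + "; path=/";
}

function getCookie(name) {
  var match = document.cookie.match(new RegExp("(?:^|; )" + name + "=([^;]*)"));
  return match ? decodeURIComponent(match[1]) : null;
}

// After the prompts in Can1(): persist the title for 30 days.
// setCookie("Can1Title", Can1Title, 30);

// On page load: restore the button label if a cookie exists.
// var savedTitle = getCookie("Can1Title");
// if (savedTitle) document.CookieTest.CookieButton1.value = savedTitle;
```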

    Read the article

  • How to emulate thread-local storage in user space in C++?

    - by vprajan
    I am working on a mobile platform on top of Nucleus RTOS. It uses the Nucleus threading system, but it doesn't have support for explicit thread-local storage, i.e. the TlsAlloc, TlsSetValue, TlsGetValue, TlsFree APIs. The platform doesn't have user-space pthreads either. I found that the __thread storage modifier is present in most C++ compilers, but I don't know how to make it work for my kind of usage. How can the __thread keyword be mapped to explicit thread-local storage? I read many articles but none of them clearly answered these basic questions: Is a __thread variable different for each thread? How do I write to it and read from it? Does each thread have exactly one copy of the variable? The following is the pthread-based implementation: pthread_key_t m_key; struct Data : Noncopyable { Data(T* value, void* owner) : value(value), owner(owner) {} T* value; void* owner; }; inline ThreadSpecific() { int error = pthread_key_create(&m_key, destroy); if (error) CRASH(); } inline ~ThreadSpecific() { pthread_key_delete(m_key); // Does not invoke destructor functions. } inline T* get() { Data* data = static_cast<Data*>(pthread_getspecific(m_key)); return data ? data->value : 0; } inline void set(T* ptr) { ASSERT(!get()); pthread_setspecific(m_key, new Data(ptr, this)); } How can I make the above code set and get the thread-specific value the __thread way? Where/when do the create and delete happen? If this is not possible, how do I write custom pthread_setspecific/pthread_getspecific-style APIs? I tried using a C++ global map, indexing it uniquely for each thread, and retrieving data from it, but it didn't work well.

    Read the article

  • HTML5 Local Storage of audio element source - is it possible?

    - by andrewdotcom
    Hi Stack Overflow experts, I've been experimenting with the audio and local storage features of HTML5 of late and have run into something that has me stumped. I'd like to be able to cache or store the source of the audio element locally to enable speedier and offline playback. The problem is I can't see how this is possible with the current implementation. I have tried the following using WebKit: creating a manifest file to set up local caching, but the audio file appears not to be a cacheable item, maybe due to the way it is streamed. I have also attempted to use JavaScript to put an audio object into local storage, but the size of the MP3 makes this impossible due to memory issues (I think). I have tried to use a data URI and base64 to use the HTML itself as an audio transport that can be cached, but again the file size makes this prohibitive; also, the audio element does not seem to like this in WebKit (it works fine in Mozilla). I have tried several methods of putting the data into the local database store, again suffering the same issues as the other cases. I'd love to hear any other ideas anyone may have as to how I could achieve my goal of offline playback using caching/local storage in WebKit.
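
    For what it's worth, a rough sketch of the data-URI idea from the question looks like the following (the URL, storage key, and element id are placeholders); it is only practical for short clips, since localStorage is typically capped at around 5 MB per origin, which matches the file-size problem described above.

```javascript
// Sketch of caching a small clip in localStorage as a data URI and feeding it
// to an <audio> element. Placeholder names throughout; not a full solution.
function cacheAudio(url, key, onReady) {
  var cached = localStorage.getItem(key);
  if (cached) { onReady(cached); return; }          // already stored locally

  var xhr = new XMLHttpRequest();
  xhr.open("GET", url, true);
  xhr.responseType = "blob";
  xhr.onload = function () {
    var reader = new FileReader();
    reader.onloadend = function () {
      try {
        localStorage.setItem(key, reader.result);   // "data:audio/mpeg;base64,..."
      } catch (e) {
        // Quota exceeded: fall back to streaming from the network.
      }
      onReady(reader.result);
    };
    reader.readAsDataURL(xhr.response);             // blob -> base64 data URI
  };
  xhr.send();
}

// Usage: point the <audio> element at either the cached or freshly fetched URI.
cacheAudio("clip.mp3", "audio:clip", function (dataUri) {
  document.getElementById("player").src = dataUri;
});
```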

    Read the article

  • Mozilla Weave can't sync Firefox. What's wrong?

    - by Mehper C. Palavuzlar
    For the last few days, Mozilla Weave can't sync. Below is the activity log. Any ideas? 2010-05-02 20:47:15 Service.Main WARN Unknown error while downloading metadata record. Aborting sync. 2010-05-02 20:47:15 Service.Main CONFIG Starting backoff, next sync at:Sun May 02 2010 21:16:09 GMT+0300 (GTB Yaz Saati) 2010-05-02 20:47:15 Service.Main DEBUG Exception: aborting sync, remote setup failed No traceback available 2010-05-02 21:16:09 Service.Main DEBUG Idle timer created for sync, will sync after 5 seconds of inactivity. 2010-05-02 21:16:30 Net.Resource DEBUG GET success 200 https://sj-weave03.services.mozilla.com/1.0/mehper/storage/meta/global 2010-05-02 21:16:30 Service.Main DEBUG Weave Version: 1.2.3 Local Storage: 2 Remote Storage: 2 2010-05-02 21:26:50 Net.Resource DEBUG GET success 200 https://sj-weave03.services.mozilla.com/1.0/mehper/info/collections 2010-05-02 21:26:50 Engine.Clients INFO 0 outgoing items pre-reconciliation 2010-05-02 21:26:50 Engine.Clients INFO Records: 0 applied, 0 reconciled, 0 left to fetch 2010-05-02 21:26:50 Engine.Clients DEBUG Total (ms): sync 6, processIncoming 3, uploadOutgoing 0, syncStartup 3, syncFinish 0 2010-05-02 21:26:50 Engine.Bookmarks INFO 0 outgoing items pre-reconciliation 2010-05-02 21:26:50 Engine.Bookmarks INFO Records: 0 applied, 0 reconciled, 0 left to fetch 2010-05-02 21:26:50 Engine.Bookmarks DEBUG Total (ms): sync 13, processIncoming 5, uploadOutgoing 0, syncStartup 3, syncFinish 3 2010-05-02 21:26:50 Engine.Forms INFO 1 outgoing items pre-reconciliation 2010-05-02 21:26:50 Engine.Forms INFO Records: 0 applied, 0 reconciled, 0 left to fetch 2010-05-02 21:26:50 Engine.Forms INFO Uploading all of 1 records 2010-05-02 21:26:50 Collection DEBUG POST Length: 388 2010-05-02 21:27:06 Collection DEBUG POST success 200 https://sj-weave03.services.mozilla.com/1.0/mehper/storage/forms 2010-05-02 21:27:06 Engine.Forms DEBUG Total (ms): sync 15924, processIncoming 3, uploadOutgoing 15918, syncStartup 3, syncFinish 0, createRecord 1 2010-05-02 21:27:06 Engine.History INFO 55 outgoing items pre-reconciliation 2010-05-02 21:27:06 Engine.History INFO Records: 0 applied, 0 reconciled, 0 left to fetch 2010-05-02 21:27:09 Engine.History INFO Uploading all of 55 records 2010-05-02 21:27:09 Collection DEBUG POST Length: 35337 2010-05-02 21:27:32 Collection DEBUG POST success 200 https://sj-weave03.services.mozilla.com/1.0/mehper/storage/history 2010-05-02 21:27:32 Engine.History DEBUG Total (ms): sync 25588, processIncoming 4, uploadOutgoing 25580, syncStartup 3, syncFinish 0, createRecord 2540 2010-05-02 21:27:32 Engine.Passwords INFO 0 outgoing items pre-reconciliation 2010-05-02 21:27:32 Engine.Passwords INFO Records: 0 applied, 0 reconciled, 0 left to fetch 2010-05-02 21:27:32 Engine.Passwords DEBUG Total (ms): sync 8, processIncoming 4, uploadOutgoing 0, syncStartup 4, syncFinish 0 2010-05-02 21:27:32 Engine.Prefs INFO 0 outgoing items pre-reconciliation 2010-05-02 21:27:32 Engine.Prefs INFO Records: 0 applied, 0 reconciled, 0 left to fetch 2010-05-02 21:27:32 Engine.Prefs DEBUG Total (ms): sync 8, processIncoming 3, uploadOutgoing 0, syncStartup 4, syncFinish 0 2010-05-02 21:27:32 Engine.Tabs INFO 1 outgoing items pre-reconciliation 2010-05-02 21:27:32 Engine.Tabs INFO Records: 0 applied, 0 reconciled, 0 left to fetch 2010-05-02 21:27:32 Engine.Tabs INFO Uploading all of 1 records 2010-05-02 21:27:32 Collection DEBUG POST Length: 393 2010-05-02 21:27:54 Collection DEBUG POST success 200 
https://sj-weave03.services.mozilla.com/1.0/mehper/storage/tabs 2010-05-02 21:27:54 Engine.Tabs DEBUG Total (ms): sync 21943, processIncoming 3, uploadOutgoing 21936, syncStartup 3, syncFinish 0, createRecord 8 2010-05-02 21:27:54 Service.Main INFO Sync completed successfully 2010-05-02 22:27:53 Service.Main DEBUG Idle timer created for sync, will sync after 5 seconds of inactivity. 2010-05-02 22:28:14 Net.Resource DEBUG GET success 200 https://sj-weave03.services.mozilla.com/1.0/mehper/storage/meta/global 2010-05-02 22:28:14 Service.Main DEBUG Weave Version: 1.2.3 Local Storage: 2 Remote Storage: 2 2010-05-02 22:28:16 Net.Resource DEBUG GET fail 503 https://sj-weave03.services.mozilla.com/1.0/mehper/info/collections 2010-05-02 22:28:16 Service.Main DEBUG Exception: aborting sync, failed to get collections No traceback available 2010-05-02 23:28:15 Service.Main DEBUG Idle timer created for sync, will sync after 5 seconds of inactivity. 2010-05-03 00:26:42 Service.Main DEBUG Exception: Could not acquire lock No traceback available 2010-05-03 00:31:03 RecordMgr DEBUG Failed to import record: App. Quitting JS Stack trace: Res__request(...)@resource.js:208 < Res_get()@resource.js:271 < RecordMgr_import("https://sj-weave03.services.mozilla.com/1.0/mehper/storage/meta/global")@wbo.js:119 < WeaveSvc__remoteSetup()@service.js:824 < ()@service.js:1187 < WrappedNotify()@util.js:114 < WrappedLock()@util.js:86 < WrappedCatch()@util.js:65 < sync(false)@service.js:1146 < ([object Object])@service.js:414 < notify([object XPCWrappedNative_NoHelper])@util.js:629 2010-05-03 00:31:03 Service.Main DEBUG Weave Version: 1.2.3 Local Storage: 2 Remote Storage: 2010-05-03 00:31:03 Service.Main WARN Unknown error while downloading metadata record. Aborting sync. 2010-05-03 00:31:03 Service.Main DEBUG Exception: aborting sync, remote setup failed No traceback available 2010-05-03 17:26:25 Service.Main INFO Loading Weave 1.2.3 2010-05-03 17:26:25 Engine.Bookmarks DEBUG Engine initialized 2010-05-03 17:26:25 Engine.Forms DEBUG Engine initialized 2010-05-03 17:26:25 Engine.History DEBUG Engine initialized 2010-05-03 17:26:25 Engine.Passwords DEBUG Engine initialized 2010-05-03 17:26:25 Engine.Prefs DEBUG Engine initialized 2010-05-03 17:26:25 Engine.Tabs DEBUG Engine initialized 2010-05-03 17:26:25 Engine.Tabs DEBUG Resetting tabs last sync time 2010-05-03 17:26:25 Service.Main INFO Mozilla/5.0 (Windows; U; Windows NT 6.1; tr; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 (.NET CLR 3.5.30729) 2010-05-03 17:26:26 Service.Main DEBUG Caching URLs under storage user base: https://sj-weave03.services.mozilla.com/1.0/mehper/ 2010-05-03 17:26:30 Service.Main DEBUG Autoconnecting in 3 seconds 2010-05-03 17:26:36 Service.Main INFO Logging in user mehper 2010-05-03 17:45:46 Service.Main DEBUG Exception: Could not acquire lock No traceback available 2010-05-03 17:53:18 Service.Main DEBUG Exception: Could not acquire lock No traceback available

    Read the article

  • Building an HTML5 App with ASP.NET

    - by Stephen Walther
    I’m teaching several JavaScript and ASP.NET workshops over the next couple of months (thanks everyone!) and I thought it would be useful for my students to have a really easy to use JavaScript reference. I wanted a simple interactive JavaScript reference and I could not find one so I decided to put together one of my own. I decided to use the latest features of JavaScript, HTML5 and jQuery such as local storage, offline manifests, and jQuery templates. What could be more appropriate than building a JavaScript Reference with JavaScript? You can try out the application by visiting: http://Superexpert.com/JavaScriptReference Because the app takes advantage of several advanced features of HTML5, it won’t work with Internet Explorer 6 (but really, you should stop using that browser). I have tested it with IE 8, Chrome 8, Firefox 3.6, and Safari 5. You can download the source for the JavaScript Reference application at the end of this article. Superexpert JavaScript Reference Let me provide you with a brief walkthrough of the app. When you first open the application, you see the following lookup screen: As you type the name of something from the JavaScript language, matching results are displayed: You can click the details link for any entry to view details for an entry in a modal dialog: Alternatively, you can click on any of the tabs -- Objects, Functions, Properties, Statements, Operators, Comments, or Directives -- to filter results by type of syntax. For example, you might want to see a list of all JavaScript built-in objects: You can login to the application to make modification to the application: After you login, you can add, update, or delete entries in the reference database: HTML5 Local Storage The application takes advantage of HTML5 local storage to store all of the reference entries on the local browser. IE 8, Chrome 8, Firefox 3.6, and Safari 5 all support local storage. When you open the application for the first time, all of the reference entries are transferred to the browser. The data is stored persistently. Even if you shutdown your computer and return to the application many days later, the data does not need to be transferred again. Whenever you open the application, the app checks with the server to see if any of the entries have been updated on the server. If there have been updates, then only the updates are transferred to the browser and the updates are merged with the existing entries in local storage. After the reference database has been transferred to your browser once, only changes are transferred in the future. You get two benefits from using local storage. First, the application loads very fast and works very fast after the data has been loaded once. The application does not query the server whenever you filter or view entries. All of the data is persisted in the browser. Second, you can browse the JavaScript reference even when you are not connected to the Internet (when you are on the proverbial airplane). The JavaScript Reference works as an offline application for browsers that support offline applications (unfortunately, not IE). When using Google Chrome, you can easily view the contents of local storage by selecting Tools, Developer Tools (CTRL-SHIFT I) and selecting Storage, Local Storage: The JavaScript Reference app stores two items in local storage: entriesLastUpdated and entries. 
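
    The sync behaviour described above boils down to a pattern like this hedged sketch; fetchEntriesSince() and the entry shape are assumptions for illustration, not the application's actual API, but the two localStorage keys (entries and entriesLastUpdated) are the ones named in the article.

```javascript
// Rough sketch of the sync pattern described above: keep the full entry list in
// localStorage and only ask the server for records changed since the last sync.
function syncEntries(fetchEntriesSince, done) {
  var entries = JSON.parse(localStorage.getItem("entries") || "{}");
  var lastUpdated = localStorage.getItem("entriesLastUpdated") || "1970-01-01T00:00:00Z";

  fetchEntriesSince(lastUpdated, function (changes) {
    changes.forEach(function (entry) {
      if (entry.deleted) {
        delete entries[entry.id];    // drop entries deleted on the server
      } else {
        entries[entry.id] = entry;   // add or overwrite changed entries
      }
    });
    localStorage.setItem("entries", JSON.stringify(entries));
    localStorage.setItem("entriesLastUpdated", new Date().toISOString());
    done(entries);
  });
}
```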
HTML5 Offline App For browsers that support HTML5 offline applications – Chrome 8 and Firefox 3.6 but not Internet Explorer – you do not need to be connected to the Internet to use the JavaScript Reference. The JavaScript Reference can execute entirely on your machine just like any other desktop application. When you first open the application with Firefox, you are presented with the following warning: Notice the notification bar that asks whether you want to accept offline content. If you click the Allow button then all of the files (generated ASPX, images, CSS, JavaScript) needed for the JavaScript Reference will be stored on your local computer. Automatic Script Minification and Combination All of the custom JavaScript files are combined and minified automatically whenever the application is built with Visual Studio. All of the custom scripts are contained in a folder named App_Scripts: When you perform a build, the combine.js and combine.debug.js files are generated. The Combine.config file contains the list of files that should be combined (importantly, it specifies the order in which the files should be combined). Here’s the contents of the Combine.config file:   <?xml version="1.0"?> <combine> <scripts> <file path="compat.js" /> <file path="storage.js" /> <file path="serverData.js" /> <file path="entriesHelper.js" /> <file path="authentication.js" /> <file path="default.js" /> </scripts> </combine>   jQuery and jQuery UI The JavaScript Reference application takes heavy advantage of jQuery and jQuery UI. In particular, the application uses jQuery templates to format and display the reference entries. Each of the separate templates is stored in a separate ASP.NET user control in a folder named Templates: The contents of the user controls (and therefore the templates) are combined in the default.aspx page: <!-- Templates --> <user:EntryTemplate runat="server" /> <user:EntryDetailsTemplate runat="server" /> <user:BrowsersTemplate runat="server" /> <user:EditEntryTemplate runat="server" /> <user:EntryDetailsCloudTemplate runat="server" /> When the default.aspx page is requested, all of the templates are retrieved in a single page. WCF Data Services The JavaScript Reference application uses WCF Data Services to retrieve and modify database data. The application exposes a server-side WCF Data Service named EntryService.svc that supports querying, adding, updating, and deleting entries. jQuery Ajax calls are made against the WCF Data Service to perform the database operations from the browser. The OData protocol makes this easy. Authentication is handled on the server with a ChangeInterceptor. Only authenticated users are allowed to update the JavaScript Reference entry database. JavaScript Unit Tests In order to build the JavaScript Reference application, I depended on JavaScript unit tests. I needed the unit tests, in particular, to write the JavaScript merge functions which merge entry change sets from the server with existing entries in browser local storage. In order for unit tests to be useful, they need to run fast. I ran my unit tests after each build. For this reason, I did not want to run the unit tests within the context of a browser. Instead, I ran the unit tests using server-side JavaScript (the Microsoft Script Control). The source code that you can download at the end of this blog entry includes a project named JavaScriptReference.UnitTests that contains all of the JavaScripts unit tests. 
JavaScript Integration Tests Because not every feature of an application can be tested by unit tests, the JavaScript Reference application also includes integration tests. I wrote the integration tests using Selenium RC in combination with ASP.NET Unit Tests. The Selenium tests run against all of the target browsers for the JavaScript Reference application: IE 8, Chrome 8, Firefox 3.6, and Safari 5. For example, here is the Selenium test that checks whether authenticating with a valid user name and password correctly switches the application to Admin Mode: [TestMethod] [HostType("ASP.NET")] [UrlToTest("http://localhost:26303/JavaScriptReference")] [AspNetDevelopmentServerHost(@"C:\Users\Stephen\Documents\Repos\JavaScriptReference\JavaScriptReference\JavaScriptReference", "/JavaScriptReference")] public void TestValidLogin() { // Run test for each controller foreach (var controller in this.Controllers) { var selenium = controller.Value; var browserName = controller.Key; // Open reference page. selenium.Open("http://localhost:26303/JavaScriptReference/default.aspx"); // Click login button displays login form selenium.Click("btnLogin"); Assert.IsTrue(selenium.IsVisible("loginForm"), "Login form appears after clicking btnLogin"); // Enter user name and password selenium.Type("userName", "Admin"); selenium.Type("password", "secret"); selenium.Click("btnDoLogin"); // Should set adminMode == true selenium.WaitForCondition("selenium.browserbot.getCurrentWindow().adminMode==true", "30000"); } }   The results for running the Selenium tests appear in the Test Results window just like the unit tests: The Selenium tests take much longer to execute than the unit tests. However, they provide test coverage for actual browsers. Furthermore, if you are using Visual Studio ALM, you can run the tests automatically every night as part of your standard nightly build. You can view the Selenium tests by opening the JavaScriptReference.QATests project. Summary I plan to write more detailed blog entries about this application over the next week. I want to discuss each of the features – HTML5 local storage, HTML5 offline apps, jQuery templates, automatic script combining and minification, JavaScript unit tests, Selenium tests -- in more detail. You can download the source control for the JavaScript Reference Application by clicking the following link: Download You need Visual Studio 2010 and ASP.NET 4 to build the application. Before running the JavaScript unit tests, install the Microsoft Script Control. Before running the Selenium tests, start the Selenium server by running the StartSeleniumServer.bat file located in the JavaScriptReference.QATests project.

    Read the article

  • T42 Thinkpad, USB boot, CF for programs and storage?

    - by Tom K.
    Is this feasible? I have a ThinkPad T40. I'd like to get one of the tiny USB drives (like the Verbatim Store 'n' Stay series) of sufficient size to handle basic Linux OS booting (8 GB?), then put a 16 GB or larger memory card (SD or CF) in the PCMCIA slot with the appropriate adapter for additional application programs and data storage. I know I could get a used or refurbished HDD, but I've had reliability issues in the past, and I don't believe new IDE drives are available.

    Read the article
