Search Results

Search found 20582 results on 824 pages for 'double array'.


  • DocumentDB - Another Azure NoSQL Storage Service

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2014/08/25/documentdb---another-azure-nosql-storage-service.aspx

    Microsoft released a batch of new Azure features on the 22nd, and the one I was most interested in is DocumentDB, a document NoSQL database service in the cloud.

    Quick Look at DocumentDB

    We can try DocumentDB from the new Azure preview portal. Just click the NEW button and select the item named DocumentDB to create a new account. Specify the name of the DocumentDB account, which will be the endpoint we are going to use to connect later. Select the capacity unit, resource group and subscription. In the resource group section we can select the region where our DocumentDB will be located. As with other Azure services, select the same location as the consumers of the DocumentDB, for example the website, web services, etc. After several minutes the DocumentDB account will be ready. Click the KEYS button to find the URI and primary key, which will be used when connecting.

    Now let's open Visual Studio and try to use the DocumentDB we just created. Create a new console application and install the DocumentDB .NET client library from NuGet with the keyword "DocumentDB". You need to select "Include Prerelease" in the NuGet Package Manager window, since this library has not yet been released.

    Next we will create a new database and document collection under our DocumentDB account. The code below creates an instance of DocumentClient with the URI and primary key we just copied from the Azure portal, then creates a database and a collection. It also prints the database and collection link strings, which will be used later to insert and query documents.

        static void Main(string[] args)
        {
            var endpoint = new Uri("https://shx.documents.azure.com:443/");
            var key = "LU2NoyS2fH0131TGxtBE4DW/CjHQBzAaUx/mbuJ1X77C4FWUG129wWk2oyS2odgkFO2Xdif9/ZddintQicF+lA==";

            var client = new DocumentClient(endpoint, key);
            Run(client).Wait();

            Console.WriteLine("done");
            Console.ReadKey();
        }

        static async Task Run(DocumentClient client)
        {
            var database = new Database() { Id = "testdb" };
            database = await client.CreateDatabaseAsync(database);
            Console.WriteLine("database link = {0}", database.SelfLink);

            var collection = new DocumentCollection() { Id = "testcol" };
            collection = await client.CreateDocumentCollectionAsync(database.SelfLink, collection);
            Console.WriteLine("collection link = {0}", collection.SelfLink);
        }

    Below is the result from the console window. We need to copy the collection link string for later use. Now if we go back to the portal we will find a database listed with the name we specified in the code.

    Next we will insert a document into the database and collection we just created. In the code below we paste the collection link copied in the previous step and create a dynamic object with several properties defined. As you can see, we can add simple properties containing strings and integers, and we can also add complex properties such as an array, a dictionary or an object reference, as long as they can be serialized to JSON.
        static void Main(string[] args)
        {
            var endpoint = new Uri("https://shx.documents.azure.com:443/");
            var key = "LU2NoyS2fH0131TGxtBE4DW/CjHQBzAaUx/mbuJ1X77C4FWUG129wWk2oyS2odgkFO2Xdif9/ZddintQicF+lA==";

            var client = new DocumentClient(endpoint, key);

            // collection link pasted from the result in the previous demo
            var collectionLink = "dbs/AAk3AA==/colls/AAk3AP6oFgA=/";

            // document we are going to insert into the database
            dynamic doc = new ExpandoObject();
            doc.firstName = "Shaun";
            doc.lastName = "Xu";
            doc.roles = new string[] { "developer", "trainer", "presenter", "father" };

            // insert the document
            InsertADoc(client, collectionLink, doc).Wait();

            Console.WriteLine("done");
            Console.ReadKey();
        }

    The insert code is very simple, as shown below; we just provide the collection link and the object we are going to insert.

        static async Task InsertADoc(DocumentClient client, string collectionLink, dynamic doc)
        {
            var document = await client.CreateDocumentAsync(collectionLink, doc);
            Console.WriteLine(await JsonConvert.SerializeObjectAsync(document, Formatting.Indented));
        }

    Below is the result after the object has been inserted.

    Finally we will query the document from the database and collection. Similar to the insert code, we just need to specify the collection link so that the .NET SDK can retrieve all documents in it.

        static void Main(string[] args)
        {
            var endpoint = new Uri("https://shx.documents.azure.com:443/");
            var key = "LU2NoyS2fH0131TGxtBE4DW/CjHQBzAaUx/mbuJ1X77C4FWUG129wWk2oyS2odgkFO2Xdif9/ZddintQicF+lA==";

            var client = new DocumentClient(endpoint, key);

            var collectionLink = "dbs/AAk3AA==/colls/AAk3AP6oFgA=/";

            SelectDocs(client, collectionLink);

            Console.WriteLine("done");
            Console.ReadKey();
        }

        static void SelectDocs(DocumentClient client, string collectionLink)
        {
            var docs = client.CreateDocumentQuery(collectionLink + "docs/").ToList();
            foreach (var doc in docs)
            {
                Console.WriteLine(doc);
            }
        }

    Since there's only one document in my collection, below is the result when I executed the code. As you can see, all properties, including the array, were retrieved at the same time. DocumentDB also attached some properties we didn't specify, such as "_rid", "_ts" and "_self", which are controlled by the service.

    DocumentDB Benefits

    DocumentDB is a document NoSQL database service. Unlike a traditional relational database, a document database is truly schema-free. In a nutshell, you can save anything in the same database and collection as long as it can be serialized to JSON. When you query the document database, all sub-documents are retrieved at the same time. This means you don't need to join other tables as you would with a traditional database. A document database is very useful when building a high-performance system with a hierarchical data structure. For example, suppose we need to build a blog system: there will be many blog posts, and each of them contains content and comments. A comment can be commented on as well. If we were using a traditional database, say SQL Server, the database schema might be defined as below. When we need to display a post, we load the post content from the Posts table as well as the comments from the Comments table, and we also need to build the comment tree based on the CommentID field. But if we were using DocumentDB, all we need to do is save the post as a single document with a list containing all of its comments.
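    For illustration only (these class names are mine, not from the original post), that post-and-comments shape could be modeled with two small classes and handed to the same CreateDocumentAsync call shown earlier, since the SDK serializes the whole object graph to JSON:

        using System;
        using System.Collections.Generic;

        // Hypothetical sketch of a blog post stored as a single document.
        public class BlogPost
        {
            public string Id { get; set; }
            public string Title { get; set; }
            public string Content { get; set; }
            public DateTime PostedOn { get; set; }
            public List<BlogComment> Comments { get; set; }
        }

        public class BlogComment
        {
            public string Id { get; set; }
            public string Content { get; set; }
            public DateTime CommentedOn { get; set; }
            public string CommentedBy { get; set; }
            // replies simply nest the same type again
            public List<BlogComment> Comments { get; set; }
        }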
    Under each comment, its sub-comments are simply another nested list. When we display the post we only need to query the post document, and the content and all comments are loaded in the proper structure.

        {
            "id": "xxxxx-xxxxx-xxxxx-xxxxx",
            "title": "xxxxx",
            "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.",
            "postedOn": "08/25/2014 13:55",
            "comments":
            [
                {
                    "id": "xxxxx-xxxxx-xxxxx-xxxxx",
                    "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.",
                    "commentedOn": "08/25/2014 14:00",
                    "commentedBy": "xxx"
                },
                {
                    "id": "xxxxx-xxxxx-xxxxx-xxxxx",
                    "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.",
                    "commentedOn": "08/25/2014 14:10",
                    "commentedBy": "xxx",
                    "comments":
                    [
                        {
                            "id": "xxxxx-xxxxx-xxxxx-xxxxx",
                            "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.",
                            "commentedOn": "08/25/2014 14:18",
                            "commentedBy": "xxx",
                            "comments":
                            [
                                {
                                    "id": "xxxxx-xxxxx-xxxxx-xxxxx",
                                    "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.",
                                    "commentedOn": "08/25/2014 18:22",
                                    "commentedBy": "xxx"
                                }
                            ]
                        },
                        {
                            "id": "xxxxx-xxxxx-xxxxx-xxxxx",
                            "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.",
                            "commentedOn": "08/25/2014 15:02",
                            "commentedBy": "xxx"
                        }
                    ]
                },
                {
                    "id": "xxxxx-xxxxx-xxxxx-xxxxx",
                    "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.",
                    "commentedOn": "08/25/2014 14:30",
                    "commentedBy": "xxx"
                }
            ]
        }

    DocumentDB vs. Table Storage

    DocumentDB and Table Storage are both NoSQL services in Microsoft Azure. One common question is "when should we use DocumentDB rather than Table Storage?" Here are some thoughts from me and a few MVPs.

    First of all, they are different kinds of NoSQL databases: DocumentDB is a document database, while Table Storage is a key-value store. Second, Table Storage is cheaper. DocumentDB supports scaling out from one capacity unit to five during the preview period, and each capacity unit provides 10 GB of local SSD storage at $0.73/day (a price that includes a 50% preview discount). For Table Storage the highest price is $0.061/GB, which is almost 10% of DocumentDB's. Third, Table Storage provides local replication, geo-replication and read-access geo-replication, while DocumentDB doesn't. Fourth, there is a local emulator for Table Storage but none for DocumentDB, so we have to connect to DocumentDB in the cloud even when developing locally.

    On the other hand, DocumentDB supports some cool features that Table Storage doesn't. It supports stored procedures, triggers and user-defined functions. It supports rich indexing, while Table Storage only indexes the partition key and row key. It supports transactions; Table Storage does as well, but restricted to the scope of an Entity Group Transaction. And last, Table Storage is generally available while DocumentDB is still in preview.

    Summary

    In this post I gave a quick introduction to and demonstration of the new DocumentDB service in Azure. It's very easy to interact with through .NET, and it also supports a REST API, a Node.js SDK and a Python SDK. I then explained the concept and benefits of using a document database and compared it with Table Storage.

    Hope this helps,
    Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • 12.04 ext4 - cannot create regular file/No space left - with a lot of space and inodes

    - by user1434058
    This seems similar: EXT4 "No space left on device (28)" incorrect, but there is no explanation there. I created an ext4 filesystem on a RAID 1 array with:

    mke2fs -t ext4 -T small /dev/md0

    Copying a single directory with many tiny files I get:

    cp: cannot create regular file `/mnt/raid1_new/pics/pic3412.jpg': No space left on device
    space used 5%
    inodes used 1%

    I manually tried:

    cp /source/test1.jpg /mnt/raid1_new/pics/test1.jpg --- error
    cp /source/test1.jpg /mnt/raid1_new/pics/test2.jpg --- ERROR
    cp /source/test1.jpg /mnt/raid1_new/pics/test3.jpg --- no error

    Notes: the RAID 1 disks are error-free. I tried mv instead of cp and got the same thing. I tried omitting -T small with no effect. Can somebody help me understand this magic?

    Read the article

  • grow/shrink a zfs RAIDZ

    - by c2h2
    I'm going to build a FreeNAS server and would like to make sure what I can do with such a magical and advanced filesystem as ZFS. Suppose I have 5 x 3TB disks in RAIDZ (12TB of storage in total), and I now want to add another 2 x 3TB disks to this existing array. Q: Am I able to do this without affecting or touching any existing data on the RAIDZ volume? What about taking away an existing disk, say removing 1 disk out of the 5, assuming only a very small portion of data exists on the RAIDZ?

    Read the article

  • Grub install fails while installing Ubuntu on RAID

    - by Warren Pena
    I'm trying to install Ubuntu 9.10 using the alternate install CD, but I keep getting stuck. I get through the first few steps of the install process easily enough (telling it what partition to install to, what user ID and password to create, time zone, etc.), but then it suddenly pops up a menu asking me what the next step in the install process is. It has "Install the GRUB boot loader on a hard disk" selected by default. When I select it, it goes to another screen with a progress bar and a label "Installing the 'grub2' package." The progress bar gets to 16%, and then I get returned to the same menu. No matter how many times I try to install grub, the exact same thing happens. I'm trying to install Ubuntu on a two disk RAID-1 array. This is the RAID card I'm using: http://www.siig.com/ViewProduct.aspx?pn=SC-SAER12-S2. Any ideas what may be causing this to happen and how I can fix it? Thanks!

    Read the article

  • Using XNA's XML content pipeline to read arrays of objects with different subtypes

    - by Mcguirk
    Using XNA's XML content importer, is it possible to read in an array of objects with different subtypes? For instance, assume these are my class definitions:

        public abstract class MyBaseClass
        {
            public string MyBaseData;
        }

        public class MySubClass0 : MyBaseClass
        {
            public int MySubData0;
        }

        public class MySubClass1 : MyBaseClass
        {
            public bool MySubData1;
        }

    And this is my XML file:

        <XnaContent>
            <Asset Type="MyBaseClass[]">
                <Item>
                    <!-- I want this to be an instance of MySubClass0 -->
                    <MyBaseData>alpha</MyBaseData>
                    <MySubData0>314</MySubData0>
                </Item>
                <Item>
                    <!-- I want this to be an instance of MySubClass1 -->
                    <MyBaseData>bravo</MyBaseData>
                    <MySubData1>true</MySubData1>
                </Item>
            </Asset>
        </XnaContent>

    How do I specify that I want the first Item to be an instance of MySubClass0 and the second Item to be an instance of MySubClass1?

    Read the article

  • SQL SERVER – Installing SQL Server Data Tools and SSRS

    - by Pinal Dave
    This example is from the Beginning SSRS book by Kathi Kellenberger. Supporting files are available with a free download from the www.Joes2Pros.com web site.

    If you have installed SQL Server, but are missing the Data Tools or Reporting Services: Double-click the SQL Server 2012 installation media. Click the Installation link on the left to view the Installation options. Click the top link, New SQL Server stand-alone installation or add features to an existing installation. Follow the SQL Server Setup wizard until you get to the Installation Type screen. At that screen, select Add features to an existing instance of SQL Server 2012. Click Next to move to the Feature Selection page. Select Reporting Services – Native and SQL Server Data Tools. If the Management Tools have not been installed, go ahead and choose them as well. Continue through the wizard and reboot the computer at the end of the installation if instructed to do so.

    Configure Reporting Services

    If you installed Reporting Services during the installation of the SQL Server instance, SSRS will be configured automatically for you. If you install SSRS later, then you will have to go back and configure it as a subsequent step.

    Click Start > All Programs > Microsoft SQL Server 2012 > Configuration Tools > Reporting Services Configuration Manager, then click Connect on the Reporting Services Configuration Connection dialog box. On the left-hand side of the Reporting Services Configuration Manager, click Database, then click the Change Database button on the right side of the screen. Select Create a new report server database and click Next. Click through the rest of the wizard accepting the defaults. This wizard creates two databases: ReportServer, used to store report definitions and security, and ReportServerTempDB, which is used as scratch space when preparing reports for user requests.

    Now click Web Service URL on the left-hand side of the Reporting Services Configuration Manager. Click the Apply button to accept the defaults. If the Apply button is grayed out, move on to the next step. This step sets up the SSRS web service; the web service is the program that runs in the background and communicates between the web page, which you will set up next, and the databases.

    The final configuration step is to select the Report Manager URL link on the left. Accept the default settings and click Apply. If the Apply button was already grayed out, this means SSRS was already configured. This step sets up the Report Manager web site where you will publish reports. You may be wondering if you also must install a web server on your computer. SQL Server does not require that Internet Information Services (IIS), the Microsoft web server, be installed to run Report Manager. Click Exit to dismiss the Reporting Services Configuration Manager dialog box.

    Tomorrow’s Post

    Tomorrow’s blog post will show how to create your first report using the Report Wizard. If you want to learn SSRS in simple words, I strongly recommend you get the Beginning SSRS book from Joes 2 Pros.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
    Tagged: Reporting Services, SSRS

    Read the article

  • PostgreSQL server: 10k RPM SAS or Intel 520 Series SSD drives?

    - by Vlad
    We will be expanding the storage for a PostgreSQL server and one of the things we are considering is using SSDs (Intel 520 Series) instead of rotating discs (10k RPM). Price per GB is comparable and we expect improved performance, however we are concerned about longevity since our database usage pattern is quite write-heavy. We are also concerned about data corruption in case of power failure (due to SSDs write cache not flushing properly). We currently use RAID10 with 4 active HDDs (10k 146GB) and 1 spare configured in the controller. It's a HP DL380 G6 server with P410 Smart Array Controller and BBWC. What makes more sense: upgrading the drives to 300GB 10k RPM or using Intel 520 Series SSDs (240GB)?

    Read the article

  • SQL Server source control from Visual Studio

    - by David Atkinson
    Developers have long had to context switch between two IDEs: Visual Studio for application code development and SQL Server Management Studio for database development. While this is accepted, especially given the richness of the database development feature set in SSMS, loading a separate tool can seem a little overkill. This is where SQL Connect comes in. It is an add-in to Visual Studio that provides a connected development experience for the SQL Server developer.

    Connected database development involves modifying a development sandbox database, as opposed to offline development, where SQL text files are modified independently of the database. One of the main complaints about Data Dude (VS DBPro) is that it enforces the offline approach. This gripe is what SQL Connect addresses.

    If you don't already use SQL Source Control, you can get up and running with SQL Connect by adding a new project to your Visual Studio solution, then choosing your existing development database, and you're ready to go. If you already use SQL Source Control, you will need to link SQL Connect to your existing database scripts folder repository, so SQL Connect and SQL Source Control can be used collaboratively (note that SQL Source Control v3.0.9.18 or later is required). Locate the repository (this can be found in the Setup tab in SQL Source Control) and create a working folder for it (here I'm using TortoiseSVN).

    Back in Visual Studio, locate the SQL Connect panel (in the View menu if it hasn't auto-loaded) and select Import SQL Source Control project. Locate your working folder and click Import. This creates a Red Gate database project under your solution. From here you can modify your development database and manage your changes in source control. To associate your development database with the project, right-click on the project node, select Properties, set the database and Save.

    Now you're ready to make some changes. Locate the object you'd like to modify in the Solution Explorer and double-click it to invoke a query window or table designer. You also have the option to edit the creation SQL directly using Edit SQL File in Project. Keeping the development database and the Visual Studio project in sync is as easy as clicking on a button.

    Once you've made your change, you can use whichever mechanism you choose to commit to source control. Here I'm using the free open-source AnkhSVN to integrate Subversion with Visual Studio. Maintaining your database in a Visual Studio solution means that you can commit database changes and application code changes in the same changeset. This is desirable if you have continuous integration set up, as you want to ensure that all files related to a change are committed atomically, so you avoid an interim "broken build".

    More discussion of SQL Connect and its benefits can be found in the following article on Simple Talk: No More Disconnected SQL Development in Visual Studio. The SQL Connect project team is currently assessing the backlog for the next development effort, and they'd appreciate your feature suggestions, as well as your votes on their suggestions site: http://redgate.uservoice.com/forums/140800-sql-connect-for-visual-studio-

    A 28-day free trial of SQL Connect is available from the Red Gate website.

    Technorati Tags: SQL Server

    Read the article

  • Compiling Gnucash 2.6.3 in Ubuntu 14.04

    - by wolveryn
    I downloaded the Debian file from SourceForge and followed the instructions, but the errors below appear. I re-downloaded the file several times with the same error. I want to install the latest GnuCash, not the one available in the Software Center. Thank you for your support.

    /qof/gnc-date/qof print date dmy buff:
    There are some differences between distros in the way they name locales, and this can cause trouble with the locale-based formatting. If you get the assert in this function, run locale -a and make sure that en_US, en_GB, and fr_FR are installed and that if a suffix is needed it's in the suffixes array.
    ** ERROR:test-gnc-date.c:465:test_gnc_setlocale: code should not be reached
    FAIL
    GTester: last random seed: R02Sd8d3d0e67be954baa8ec75d81a14c0e3
    /bin/bash: line 1: 18889 Terminated MALLOC_CHECK_=2 MALLOC_PERTURB_=$((${RANDOM:-256} % 256)) gtester --verbose test-qof
    make[5]: *** [test-nonrecursive] Error 143
    make[5]: Leaving directory `/home/ahmed/gnucash/gnucash-2.6.3/src/libqof/qof/test'
    make[4]: *** [check-am] Error 2
    make[4]: Leaving directory `/home/ahmed/gnucash/gnucash-2.6.3/src/libqof/qof/test'
    make[3]: *** [check-recursive] Error 1
    make[3]: Leaving directory `/home/ahmed/gnucash/gnucash-2.6.3/src/libqof/qof'
    make[2]: *** [check-recursive] Error 1
    make[2]: Leaving directory `/home/ahmed/gnucash/gnucash-2.6.3/src/libqof'
    make[1]: *** [check-recursive] Error 1
    make[1]: Leaving directory `/home/ahmed/gnucash/gnucash-2.6.3/src'
    make: *** [check-recursive] Error 1

    Read the article

  • Slow NFS transfer performance of small files

    - by Arie K
    I'm using Openfiler 2.3 on an HP ML370 G5, Smart Array P400, with SAS disks combined using RAID 1+0. I set up an NFS share from an ext3 partition using Openfiler's web-based configuration, and I succeeded in mounting the share from another host. Both hosts are connected using a dedicated gigabit link.

    Simple benchmark using dd:

    $ dd if=/dev/zero of=outfile bs=1000 count=2000000
    2000000+0 records in
    2000000+0 records out
    2000000000 bytes (2.0 GB) copied, 34.4737 s, 58.0 MB/s

    I see it can achieve a moderate transfer speed (58.0 MB/s). But if I copy a directory containing many small files (.php and .jpg, around 1-4 kB per file) with a total size of ~300 MB, the cp process takes about 10 minutes to finish.

    Is NFS not suitable for small file transfers like the above case? Or are there some parameters that must be adjusted?

    Read the article

  • Gamify your Web

    - by Isabel F. Peñuelas
    Yesterday Valencia welcomed the Gamification World Congress, which I followed virtually through #GWC2012. BBVA, Iberia, Ligeresa, Axe, Wayra, ESADE, GlaxoSmithKline, Macmillan, Gamisfaction, Nomaders and Blaffin were among the companies presenting success stories on gaming. It has been proved that people remember things easily when an emotion is created. The marketing expectations around gamification techniques have a lot to do with neuromarketing theories. There are also high expectations for internal enterprise gamification. On the public Web, some sectors are taking the lead in following the trend. The Gartner analyst Brian Burke opened another recent gamification event in Madrid by reminding the audience that "Gamification is mostly about Engagement", and this can be applied both to customers and to employees.

    Gamification and Banking

    The experience of the Spanish financial group BBVA, which just launched BBVA Game, was also presented a week ago at the BBVA Innovation Centre during the event "Gamification & Banking: a fad or a serious business?". One of the objectives of the BBVA Game was to double the number of registered users. "People who like the efficiency of the online channel want to keep a one-to-one contact with the brand", explained Bernardo Crespo. Another interesting data point from the BBVA presentation was that "only 20% of Spanish users, out of the total holders of bank accounts in the country, are familiar with the use of a web site to consult their bank accounts"; the project also aims to reverse this situation by helping people to learn, making heavy use of video in the gaming context. In general, the banking presenters seemed to agree that gamification techniques are helping to increase the time spent on the Web.

    Gamification and Health

    Using gamification techniques for chronic illness rehabilitation was another topic of the World Congress. Here you can find some ideas and experiences: What can games do for the health (in Spanish). I have personally started my own mental-health gaming project at http://www.lumosity.com/

    Gamification in the Enterprise

    I really recommend reading this excellent post by Ultan ÓBroin, my Introduction to Gamification and Applications. Employee motivation and learning are experiencing a 360º turn, and it looks like some of us will soon become the Dragon of the Year instead of the Employee of the Year.

    Using Web 2.0 Tools for Gamification Projects

    What type of tools do we need for a quick-win gamification project? To a certain extent, gamification can be considered an evolution of the participative Web. Badging, avatars, points and awards, leaderboards, progress charts, virtual currencies, gifting, and giving challenges and quests are common components and elements. The Web is offering new development frameworks for this purpose, such as this Avatar Framework from PayPal or Badgeville, to include in web applications. Besides this, tools to create communities around a game are required to comment, share and vote on players, as well as for efficient multimedia management. Its entirely open architecture, its community features, and its multimedia and imaging solutions are why I see WebCenter as a tool helping brands to succeed.

    Link to Sources & Recommended Readings

    YouTube video of the BBVA Game presentation
    Where To Apply Gamification In Your Incentive
    Jim Calhoun Cancer Challenge Ride and Walk
    For my Spanish readers: El aburrimiento es el enemigo número uno del éxito

    Read the article

  • Making a Case For The Command Line

    - by Jesse Taber
    Originally posted on: http://geekswithblogs.net/GruffCode/archive/2013/06/30/making-a-case-for-the-command-line.aspx

    I have had an idea percolating in the back of my mind for over a year now that I’ve just recently started to implement. This idea relates to building out “internal tools” to ease the maintenance and on-going support of a software system. The system that I currently work on is (mostly) web-based, so traditionally we have built these internal tools in the form of pages within the app that are only accessible by our developers and support personnel. These pages allow us to perform tasks within the system that, for one reason or another, we don’t want to let our end users perform (e.g. mass create/update/delete operations on data, flipping switches that turn paid modules of the system on or off, etc). When we try to build new tools like this we often struggle with the level of effort required to build them.

    Effort Required

    Creating a whole new page in an existing web application can be a fairly large undertaking. You need to create the page and ensure it will have a layout that is consistent with the other pages in the app. You need to decide what types of input controls need to go onto the page. You need to ensure that everything uses the same style as the rest of the site. You need to figure out what the text on the page should say. Then, when you figure out that you forgot about an input that should really be present, you might have to go back and re-work the entire thing. Oh, and in addition to all of that, you still have to, you know, write the code that actually performs the task. Everything other than the code that performs the task at hand is just overhead. We don’t need a fancy date picker control in a nicely styled page for the vast majority of our internal tools. We don’t even really need a page, for that matter. We just need a way to issue a command to the application and have it, in turn, execute the code that we’ve written to accomplish a given task. All we really need is a simple console application!

    Plumbing Problems

    A former co-worker of mine, John Sonmez, always advocated the Unix philosophy for building internal tools: start with something that runs at the command line, and then build a UI on top of that if you need to. John’s idea has a lot of merit, and we tried building out some internal tools as simple console applications. Unfortunately, this was often easier said than done. Doing a “File -> New Project” to build out a tool for a mature system can be pretty daunting because that new project is totally empty. In our case, the web application code had a lot of “plumbing” built in: it managed authentication and authorization, it handled database connection management for our multi-tenanted architecture, and it managed all of the context that needs to follow a user around the application, such as their timezone and regional/language settings. In addition, the configuration file for the web application (a web.config in our case, because this is an ASP.NET application) is large and would need to be reproduced into a similar configuration file for a console application. While most of these problems could be solved pretty easily with some refactoring of the codebase, building console applications for internal tools still potentially suffers from one pretty big drawback: you’d have to execute them on a machine with network access to all of the needed resources.
    Obviously, our web servers can easily communicate with the database servers and can publish messages to our service bus, but the same is not true for all of our developer and support personnel workstations. We could have everyone run these tools remotely via RDP or SSH, but that’s a bit cumbersome and certainly a lot less convenient than having the tools built into the web application that is so easily accessible.

    Mix and Match

    So we need a way to build tools that are easily accessible via the web application but also don’t require the overhead of creating a user interface. This is where my idea comes into play: why not just build a command line interface into the web application? If it’s part of the web application we get all of the plumbing that comes along with that code, and we’re executing everything on the web servers, which means we’ll have access to any external resources that we might need. Rather than having to incur the overhead of creating a brand new page for each tool that we want to build, we can create one new page that simply accepts a command in text form and executes it as a request on the web server. In this way, we can focus on writing the code to accomplish the task. If the tool ends up being heavily used, then (and only then) should we consider spending the time to build a better user experience around it.

    To be clear, I’m not trying to downplay the importance of building great user experiences into your system; we should all strive to provide the best UX possible to our end users. I’m only advocating this sort of bare-bones interface for internal consumption by the technical staff that builds and supports the software. This command line interface should be the “back end” to a highly polished and eye-pleasing public face.

    Implementation

    As I mentioned at the beginning of this post, this is an idea that I’ve had for a while but have only recently started building out. I’ve outlined some general guidelines and design goals for this effort as follows:

    Text in, text out: In the interest of keeping things as simple as possible, I want this interface to be purely text-based. Users will submit commands as plain text, and the application will provide responses in plain text. Obviously this text will be “wrapped” within the context of HTTP requests and responses, but I don’t want to have to think about HTML or CSS when taking input from the user or displaying responses back to the user.

    Task-oriented code only: After building the initial “harness” for this interface, the only code that should need to be written to create a new internal tool should be code that is expressly needed to accomplish the task that the tool is intended to support. If we want to encourage and enable ourselves to build good tooling, we need to lower the barriers to entry as much as possible.

    Built-in documentation: One of the great things about most command line utilities is the ‘help’ switch that provides usage guidelines and details about the arguments that the utility accepts. Our web-based command line utility should allow us to build the documentation for these tools directly into the code of the tools themselves.

    I finally started trying to implement this idea when I heard about a fantastic open-source library called CLAP (Command Line Auto Parser) that lets me meet the guidelines outlined above. CLAP lets you define classes with public methods that can be easily invoked from the command line.
    Here’s a quick example of the code that would be needed to create a new tool to do something within your system:

        public class CustomerTools
        {
            [Verb]
            public void UpdateName(int customerId, string firstName, string lastName)
            {
                // invoke internal services/domain objects/whatever to perform the update
            }
        }

    This is just a regular class with a single public method (though you could have as many methods as you want). The method is decorated with the ‘Verb’ attribute that tells the CLAP library that it is a method that can be invoked from the command line. Here is how you would invoke that code:

        Parser.Run(args, new CustomerTools());

    Note that ‘args’ is just a string[] that would normally be passed in from the static Main method of a console application. Also, CLAP allows you to pass in multiple classes that define [Verb] methods, so you can opt to organize the code that CLAP will invoke in any way that you like. You can invoke this code from a command line application like this:

        SomeExe UpdateName -customerId:123 -firstName:Jesse -lastName:Taber

    ‘SomeExe’ in this example just represents the name of the .exe that would be created from our console application. CLAP then interprets the arguments passed in order to find the method that should be invoked and automatically parses out the parameters that need to be passed in.

    After a quick spike, I’ve found that invoking the ‘Parser’ class can be done from within the context of a web application just as easily as it can from within the ‘Main’ method entry point of a console application. There are, however, a few sticking points that I’m working around:

    Splitting arguments into the ‘args’ array like the command line: When you invoke a standard .NET console application you get the arguments that were passed in by the user split into a handy array (this is the ‘args’ parameter referenced above). Generally speaking they get split by whitespace, but it’s also clever enough to handle things like ignoring whitespace in a phrase that is surrounded by quotes. We’ll need to re-create this logic within our web application so that we can give the ‘args’ value to CLAP just like a console application would (a rough sketch of what that might look like follows this list).

    Providing a response to the user: If you were writing a console application, you might just use Console.WriteLine to provide responses to the user as to the progress and eventual outcome of the command. We can’t use Console.WriteLine within a web application, so I’ll need to find another way to provide feedback to the user. Preferably this approach would allow me to use the same handler classes from both a console application and a web application, so some kind of strategy pattern will likely emerge from this effort.

    Submitting files: Often an internal tool needs to support doing some kind of operation in bulk, and the easiest way to submit the data needed to support the bulk operation is in a file. Getting the file uploaded and available to the CLAP handler classes will take a little bit of effort.

    Mimicking the console experience: This isn’t really a requirement so much as a “nice to have”. To start out, the command-line interface in the web application will probably be a single ‘textarea’ control with a button to submit the contents to a handler that will pass it along to CLAP to be parsed and run. I think it would be interesting to use some javascript and CSS trickery to change that page into something with more of a “shell” interface look and feel.
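    As a rough sketch of that first sticking point (this is my own hypothetical helper, not part of CLAP, and a real implementation would also need to handle escaped quotes and other edge cases), a quote-aware splitter could look something like this:

        using System.Collections.Generic;
        using System.Text;

        public static class CommandLineSplitter
        {
            // Splits a raw command string into an args array, treating quoted phrases as single arguments.
            public static string[] Split(string commandText)
            {
                var args = new List<string>();
                var current = new StringBuilder();
                var inQuotes = false;

                foreach (var c in commandText)
                {
                    if (c == '"')
                    {
                        inQuotes = !inQuotes; // toggle quoted mode, drop the quote itself
                    }
                    else if (char.IsWhiteSpace(c) && !inQuotes)
                    {
                        if (current.Length > 0)
                        {
                            args.Add(current.ToString());
                            current.Clear();
                        }
                    }
                    else
                    {
                        current.Append(c);
                    }
                }

                if (current.Length > 0)
                {
                    args.Add(current.ToString());
                }

                return args.ToArray();
            }
        }

    With something like this in place, the web handler could call Parser.Run(CommandLineSplitter.Split(commandText), new CustomerTools()) using the text posted from the page.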
I’ll be blogging more about this effort in the future and will include some code snippets (or maybe even a full blown example app) as I progress. I also think that I’ll probably end up either submitting some pull requests to the CLAP project or possibly forking/wrapping it into a more web-friendly package and open sourcing that.

    Read the article

  • RAID1 Hard drives with Intel Controller

    - by Dave_H
    I have an older server with 2 35GB hard drives arranged as a RAID-1 drive. I have 3 open bays, I'd like to add 2 more hard drives to the open bays and have all 4 drives configured as a RAID-1 drive that Windows sees as 1 larger drive. Is this possible? If not, the new drives are 2x as big as the old drives, is it possible to replace 1 of the old drives and rebuild then replace the second drive and rebuild and somehow expand the array to use all of the available space? The server is using an Intel hardware raid controller, the software interface is Intel Storage Console v2.12.

    Read the article

  • ZFS SAS/SATA controller recommendations

    - by ewwhite
    I've been working with OpenSolaris and ZFS for 6 months, primarily on a Sun Fire x4540 and standard Dell and HP hardware. One downside to standard Perc and HP Smart Array controllers is that they do not have a true "passthrough" JBOD mode to present individual disks to ZFS. One can configure multiple RAID 0 arrays and get them working in ZFS, but it impacts hotswap capabilities (thus requiring a reboot upon disk failure/replacement). I'm curious as to what SAS/SATA controllers are recommended for home-brewed ZFS storage solutions. In addition, what effect does battery-backed write cache (BBWC) have in ZFS storage?

    Read the article

  • Postfix/Procmail mailing list software

    - by Jason Antman
    I'm looking for suggestions on mailing list software to use on an existing server running Postfix/Procmail. Something relatively simple. Requirements:

    - 1 list, < 50 subscribers
    - list members dumped into a certain file by a script (being pulled from LDAP or MySQL on another box)
    - handles MIME, images, etc.
    - moderation features
    - no subscription/unsubscription - just goes by the file or database

    Mailman is far too heavy-weight, and doesn't seem to play (easily) with Postfix/Procmail. I'm currently using a PHP script that just receives mail as a user, reads a list of members from a serialized array (a file dumped on the box via cron from the machine with the MySQL database containing the members) and re-mails it to everyone. Unfortunately, we now need moderation capabilities, and I don't quite feel like adding that to the PHP script if there's already something out there that does it. Thanks for any tips. -Jason

    Read the article

  • Switching BIOS SATA RAID/AHCI setting causes BSOD at Windows Start - Why?

    - by thephatp
    I just changed my disk setup from:

    - 1 SATA HDD: primary OS disk
    - 2x SATA HDD: backup disks in RAID 1

    to:

    - 1 SATA SSD: primary OS disk
    - 1 SATA HDD: backup disk [no RAID]

    Everything worked great, no problem. So, since I don't have a RAID array anymore, I decided that I could change my BIOS setting to AHCI instead of RAID. I have a Gigabyte GA-P35-DS3R v1.0 mobo. These are my steps:

    - BIOS settings > Integrated Peripherals > "SATA RAID/AHCI Mode" = RAID -- changed this setting to AHCI
    - Reboot. The Windows start screen shows up, but as the color orbs are spinning into focus, BSOD and immediate restart
    - Repeated the reboot several times, same outcome

    Next step:

    - Launch BIOS settings > Integrated Peripherals > "Onboard SATA/IDE Ctrl Mode" = RAID -- changed this setting to AHCI
    - Reboot. The Windows start screen shows up, but as the color orbs are spinning into focus, BSOD and immediate restart
    - Repeated the reboot several times, same outcome

    Switch both settings back to RAID, reboot, and Windows starts up just fine, no issues. What am I missing? Why can't I set it to AHCI mode without BSODs?

    Read the article

  • Adding an ASP website in IIS7.5 on Windows 7

    - by birdus
    I'm trying to add an ASP website under IIS 7.5 on Windows 7 and am having no luck so far. This site is just for me to hit locally. I need to make some changes to some of the HTML in some of the ASP files, and I just need to be able to test my changes as I make them. I installed IIS and checked the box for ASP. Next, I added an Application Pool which I called ASP and which has "No Managed Code" and "ASP" set. Next, I added the website by right-clicking "Sites" then clicking "Add Web Site...". I gave it a name, set it to use the ASP app pool, pointed it to the path where the ASP code is (I left it at pass-through authentication), and typed in 5555 as the port, so as not to interfere with the default website. The code is sitting on my server and the path simply uses the mapped drive that I always use to access files on that drive array. When I type in http://mysite:5555, I get "could not find mysite:5555". I don't really know if all these settings are correct or what else I should try. What am I missing? Thanks, Jay

    Read the article

  • Uploading files to https mediawiki

    - by zac
    I am unable to upload files (images) to my MediaWiki install. I think it may have something to do with it being hosted over secure HTTP (HTTPS). I carefully followed the instructions here. I updated write permissions on the /images/ dir:

    drwxrwxrwx 2 apache apache 4096 Apr 13 19:04 images

    In php.ini:

    file_uploads = On

    In LocalSettings.php:

    $wgEnableUploads = true;
    $wgFileExtensions = array('png','jpg','jpeg','gif');

    When I try to upload, it quickly refreshes the page without any errors or any indication anything went wrong, other than how fast it refreshed. When I navigate to the history of uploads it is empty. How can I troubleshoot this? Is it related to the secure HTTP?

    Read the article

  • Gain Total Control of Systems running Oracle Linux

    - by Anand Akela
    Oracle Linux is the best Linux for enterprise computing needs, and Oracle Enterprise Manager enables enterprises to gain total control over systems running Oracle Linux. Linux management functionality is available as part of Oracle Enterprise Manager 12c and is available to Oracle Linux Basic and Premier Support customers at no cost. The solution provides an integrated and cost-effective approach to complete Linux systems lifecycle management and delivers comprehensive provisioning, patching, monitoring, and administration capabilities via a single, web-based user interface, thus significantly reducing the complexity and cost associated with managing Linux operating system environments.

    Many enterprises are transforming their IT infrastructure from multiple independent datacenters to an Infrastructure-as-a-Service (IaaS) model, in which shared pools of compute and storage are made available to end-users on a self-service basis. While providing significant improvements when implemented properly, this strategy introduces change and complexity at a time when datacenters are already understaffed and overburdened. To aid in this transformation, IT managers need the proper tools to help them provide the array of IT capabilities required throughout the organization without stretching their staff and budget to the limit. Oracle Enterprise Manager 12c offers the advanced capabilities to enable IT departments and end-users to take advantage of the many benefits and cost savings of IaaS. Oracle Enterprise Manager Ops Center 12c addresses this challenge with a converged approach that integrates systems management across the infrastructure stack, helping organizations to streamline operations, increase productivity, and reduce system downtime.

    You can see the Linux management functionality in action by watching the latest integrated Linux management demo.

    Stay Connected with Oracle Enterprise Manager: Twitter | Facebook | YouTube | LinkedIn | Newsletter

    Read the article

  • Is a "failed" RAID 5 disk really no good?

    - by GregH
    This is my first venture in to setting up RAID on my home system. After installing 3 x 1TB drives in RAID 5, everything was running well for about 10 days. Then, the Intel Rapid Storage Technology software that monitors the disks and RAID on my system, told me that I had a failed drive. I marked the drive as good, and the array rebuilt. Then a day or so later I got a notification again, that the drive failed. I'm just wondering if this drive really is no good or if there is something I can do to get it working again? Or, do I just need to return it to the store where I bought it and get a replacement?

    Read the article

  • Code Trivia #4

    - by João Angelo
    Got the inspiration for this one in a recent stackoverflow question. What should the following code output and why?

        class Program
        {
            class Author
            {
                public string FirstName { get; set; }
                public string LastName { get; set; }

                public override string ToString()
                {
                    return LastName + ", " + FirstName;
                }
            }

            static void Main()
            {
                Author[] authors = new[]
                {
                    new Author { FirstName = "John", LastName = "Doe" },
                    new Author { FirstName = "Jane", LastName = "Doe" }
                };
                var line1 = String.Format("Authors: {0} and {1}", authors);
                Console.WriteLine(line1);

                string[] serial = new string[] { "AG27H", "6GHW9" };
                var line2 = String.Format("Serial: {0}-{1}", serial);
                Console.WriteLine(line2);

                int[] version = new int[] { 1, 0 };
                var line3 = String.Format("Version: {0}.{1}", version);
                Console.WriteLine(line3);
            }
        }

    Update: The code will print the first two lines

        // Authors: Doe, John and Doe, Jane
        // Serial: AG27H-6GHW9

    and then throw an exception on the third call to String.Format because array covariance is not supported in value types. Given this, the third call of String.Format will not resolve to String.Format(string, params object[]), like the previous two, but to String.Format(string, object), which fails to provide the second argument for the specified format and will then cause the exception.
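    One way to sidestep the exception, shown here only as a sketch and not part of the original post, is to box the int elements yourself so that the params object[] overload is picked again:

        // Requires 'using System.Linq;' for Cast<T>().
        // Boxing the elements yields an object[], so String.Format resolves to
        // String.Format(string, params object[]) just like the first two calls.
        int[] version = new int[] { 1, 0 };
        var line3 = String.Format("Version: {0}.{1}", version.Cast<object>().ToArray());
        Console.WriteLine(line3); // prints "Version: 1.0"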

    Read the article

  • SQL SERVER – Why Do We Need Data Quality Services – Importance and Significance of Data Quality Services (DQS)

    - by pinaldave
    Databases are awesome.  I’m sure my readers know my opinion about this – I have made SQL Server my life’s work after all!  I love technology and all things computer-related.  Of course, even with my love for technology, I have to admit that it has its limits.  For example, it takes a human brain to notice that data has been input incorrectly.  Computer “brains” might be faster than humans, but human brains are still better at pattern recognition.  For example, a human brain will notice that “300” is a ridiculous age for a human to be, but to a computer it is just a number.  A human will also notice similarities between “P. Dave” and “Pinal Dave,” but this would stump most computers.

    In a database, these sorts of anomalies are incredibly important.  Databases are often used by multiple people who rely on this data to be true and accurate, so data quality is key.  That is why the improved SQL Server features for Master Data Management include Data Quality Services.  This service has the ability to recognize and flag anomalies like out-of-range numbers and similarities between data.  This allows a human brain, with its pattern recognition abilities, to double-check and ensure that P. Dave is the same as Pinal Dave.

    A nice feature of Data Quality Services is that once you set the rules for the program to follow, it will not only keep your data organized in the future, but also go back and “fix up” any data that has already been entered.  It also allows you to combine data from multiple places and will apply these rules across the board, so that you don’t have any weird issues that crop up when trying to fit a round peg into a square hole.

    There are two parts of Data Quality Services that help you accomplish all these neat things.  The first part is the DQS Server, which you can think of as the workhorse of the system.  It is installed alongside SQL Server (it needs to be installed separately after SQL Server is installed) and runs quietly in the background, performing all its cleanup services.

    DQS Client is the user interface that you interact with to set the rules and check over your data.  There are three main aspects of the Client: knowledge base management, data quality projects and administration.  Knowledge base management is the part of the system that allows you to set the rules, or program the “knowledge base,” so that your database is clean and consistent.  Data quality projects are what run in the background and clean up the data that is already present.  The administration area allows you to check on what the DQS Client is doing, change rules, and generally oversee the entire process.  The whole process is user-friendly and a pleasure to use.  I highly recommend implementing Data Quality Services in your database.

    Here are a few of my blog posts related to Data Quality Services, and I encourage you to try this out.
    SQL SERVER – Installing Data Quality Services (DQS) on SQL Server 2012
    SQL SERVER – Step by Step Guide to Beginning Data Quality Services in SQL Server 2012 – Introduction to DQS
    SQL SERVER – DQS Error – Cannot connect to server – A .NET Framework error occurred during execution of user-defined routine or aggregate “SetDataQualitySessions” – SetDataQualitySessionPhaseTwo
    SQL SERVER – Configuring Interactive Cleansing Suggestion Min Score for Suggestions in Data Quality Services (DQS) – Sensitivity of Suggestion
    SQL SERVER – Unable to DELETE Project in Data Quality Projects (DQS)

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
    Tagged: Data Quality Services, DQS

    Read the article

  • How can I get the palette of an 8-bit surface in SDL.NET/Tao.SDL?

    - by lolmaster
    I'm looking to get the palette of an 8-bit surface in SDL.NET if possible, or (more than likely) using Tao.SDL. This is because I want to do palette swapping with the palette directly, instead of blitting surfaces together to replace colours like you would do it with a 32-bit surface. I've gotten the SDL_Surface and the SDL_PixelFormat, however when I go to get the palette in the same way, I get a System.ExecutionEngineException:

        private Tao.Sdl.Sdl.SDL_Palette GetPalette(Surface surf)
        {
            // Get surface.
            Tao.Sdl.Sdl.SDL_Surface sdlSurface = (Tao.Sdl.Sdl.SDL_Surface)System.Runtime.InteropServices.Marshal.PtrToStructure(surf.Handle, typeof(Tao.Sdl.Sdl.SDL_Surface));

            // Get pixel format.
            Tao.Sdl.Sdl.SDL_PixelFormat pixelFormat = (Tao.Sdl.Sdl.SDL_PixelFormat)System.Runtime.InteropServices.Marshal.PtrToStructure(sdlSurface.format, typeof(Tao.Sdl.Sdl.SDL_PixelFormat));

            // Execution exception here.
            Tao.Sdl.Sdl.SDL_Palette palette = (Tao.Sdl.Sdl.SDL_Palette)System.Runtime.InteropServices.Marshal.PtrToStructure(pixelFormat.palette, typeof(Tao.Sdl.Sdl.SDL_Palette));

            return palette;
        }

    When I used unsafe code to get the palette, I got a compile-time error: "Cannot take the address of, get the size of, or declare a pointer to a managed type ('Tao.Sdl.Sdl.SDL_Palette')". My unsafe code to get the palette was this:

        unsafe
        {
            Tao.Sdl.Sdl.SDL_Palette* pal = (Tao.Sdl.Sdl.SDL_Palette*)pixelFormat.palette;
        }

    From what I've read, a managed type in this case is when a structure has some sort of reference inside it as a field. The SDL_Palette structure happens to have an array of SDL_Color's, so I'm assuming that's the reference type that is causing issues. However, I'm still not sure how to work around that to get the underlying palette. So if anyone knows how to get the palette from an 8-bit surface, whether it's through safe or unsafe code, the help would be greatly appreciated.

    Read the article

  • 3ware RAID in Windows 7

    - by scribbles
    Just moved a 3ware 9500S-8 RAID card and a few drives from my Linux box into my Win7 box. Windows Update picked up the 3ware driver and installed it. After a reboot I checked Disk Management and it shows the array as one solid drive with the correct size. The only issue is that it's not assigned a drive letter, and when I right-click on it my only option is to Convert to Dynamic Disk... From what I've read this is not something I want to do. What are my options?

    Read the article

  • Snap Server 18000 connection help!

    - by sicko666
    I wonder if anyone here can help me. I have a home server setup made up of old secondhand computers: 2 servers running Windows Server 2003, 1 workstation running Windows 7, a 16-port switch and an ADSL Ethernet modem. All of these connect and talk to each other fine, but then I got a "Snap Server 18000" and a "Snap Disk 30sa" SATA array. When I turn the Snap on, it boots past the BIOS, runs a kernel, then displays:

    This device cannot be managed via the video/kbd/mouse interface. The video is now disabled. You may access the management functions from your web browser.

    Only, none of the other PCs detect it, so no browser can find it! I have checked all cables, and all LEDs indicate there's a connection. I have installed the Windows "iSCSI" and the Adaptec "Snap Server Manager" software on all PCs, but still it's not detected. I don't know what else to do, please advise!

    Read the article
