Search Results

Search found 58636 results on 2346 pages for 'text services framework'.


  • When page loads display 1st image text - hide all other text

    - by Jonah1289
    Hi, I have created http://techavid.com/design/test3.html and when you load the page you see there are 3 images. The sun image is focused (in color), while the others are greyed out until clicked. That is how it should be for the images. Also, when you load the page you see under each image its own text (i.e. 1st: Sun, 2nd: Airplane, 3rd: Nano), but on page load I only want "1st: Sun" to display, hiding all the other text until its respective image is clicked. Any idea how to do this? thanks :) J

    Read the article

  • Modules and custom routes

    - by Dennis Haarbrink
    I'm building a website using Zend Framework and am having trouble implementing modules and custom routes. There are basically two rules:
    Select a module based on the domain (multiple domains can select a single module)
    Regardless of domain, select one specific module based on the path
    Examples:
    domain1.com selects module domain1
    domain1.net selects module domain1
    domain2.com selects module domain2
    both domain1.com/admin and domain2.com/admin select module admin
    This is the first project where I use ZF, so my experience with the framework is basically non-existent. I have done some dirty hacking in my bootstrapper where I check the domain and then execute Zend_Layout::startMVC() to get the correct layout, but that gets messed up when I implement custom routes. So I was wondering: what is the best way to go about implementing this?

    Read the article

  • Rendering only a part of text FTGL, OpenGL

    - by Mosquito
    I'm using the FTGL library to render text in my C++ project. I can easily render text by using: CFontManager::Instance().renderWrappedText(font, lineLength, position, text); Unfortunately, there is a situation in which a Button that displays text is partly hidden, because the container it sits in has been resized. I can draw the Button's background to fit the container without any problem, but I have trouble doing the same with the text. Is it possible to draw only the portion of the text that fits a given width and ignore the rest? This is a screen which presents my problem: As you can see, the Button "Click here" is being drawn properly, but I can't do the same with the "Click here" text.

    Read the article

  • Rich Text Editor in JavaScript

    - by chanthou
    iframe .text-bold {
        border: 1px solid orange;
        background-color: #ccc;
        width: 16px;
        height: 16px;
        font-weight: bold;
        cursor: pointer;
    }
    .active {
        border-color: #9DAECD #E8F1FF #E8F1FF #9DAECD;
        background-color: yellow;
    }

    function init() {
        iframe = document.createElement("iframe");
        document.body.appendChild(iframe);
        iframe.onload = setIframeEditable;
        isBold = false;
        div = document.getElementById("bold");
    }
    var setIframeEditable = function () {
        iframe.contentDocument.designMode = 'on';
        iframe.focus();
    };
    function makeBold() {
        if (!isBold) {
            //console.log(iframe.contentDocument.execCommand("bold", false, null));
            iframe.contentDocument.execCommand("bold", false, null);
            div.className += " active";
            isBold = true;
            iframe.focus();
        } else {
            //console.log(iframe.contentDocument.execCommand("bold", true, null));
            iframe.contentDocument.execCommand("bold", false, null);
            div.className = "text-bold";
            isBold = false;
            iframe.focus();
        }
    }
    </script>
    </head>
    <body onload="init()">
    <div id="bold" class="text-bold" onclick="makeBold()">B</div>
    </body>

    Read the article

  • Improve Efficiency for This Text Processing Code

    - by johnv
    I am writing a program that counts the number of words in a text file; the file is already lowercase and separated by spaces. I want to use a dictionary and only count a word if it's within the dictionary. The problem is that the dictionary is quite large (~100,000 words) and each text document also has ~50,000 words. As such, the code below gets very slow (it takes about 15 seconds to process one document on a quad-core i7 machine). I'm wondering if there's something wrong with my code and whether the efficiency of the program can be improved. Thanks so much for your help. Code below:

    public static string WordCount(string countInput)
    {
        string[] keywords = ReadDic(); /* read dictionary txt file */

        /* then read the main text file */
        Dictionary<string, int> dict = ReadFile(countInput).Split(' ')
            .Select(c => c)
            .Where(c => keywords.Contains(c))
            .GroupBy(c => c)
            .Select(g => new { word = g.Key, count = g.Count() })
            .OrderBy(g => g.word)
            .ToDictionary(d => d.word, d => d.count);

        int s = dict.Sum(e => e.Value);
        string k = s.ToString();
        return k;
    }
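
    One likely culprit, offered as an unprofiled guess: keywords.Contains(c) scans a 100,000-element array for every one of the ~50,000 words. A hedged sketch of the same method with the dictionary held in a HashSet<string>, so each lookup is O(1) (ReadDic and ReadFile are assumed to exist as in the question; requires System.Collections.Generic and System.Linq):

    public static string WordCount(string countInput)
    {
        // Build the lookup once as a HashSet: Contains() is O(1) here,
        // instead of a linear scan over a 100,000-element array.
        var keywords = new HashSet<string>(ReadDic());

        var counts = new Dictionary<string, int>();
        foreach (string word in ReadFile(countInput).Split(' '))
        {
            if (!keywords.Contains(word))
                continue;
            int n;
            counts.TryGetValue(word, out n);
            counts[word] = n + 1;
        }

        // Same result as the original: the total number of matched words.
        return counts.Values.Sum().ToString();
    }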

    Read the article

  • Finding Common Phrases in SQL Server TEXT Column

    - by regex
    Short Desc: I'm curious to see if I can use SQL Server Analysis Services or some other SQL Server service to mine some data for me and show commonalities between SQL TEXT fields in a dataset. Long Desc: I am looking at a subset of data that consists of about 10,000 rows of TEXT blobs which are used as a notes column in an issue-tracking (ticketing) application. I would like to use something out of the box (without having to build anything) that can parse through all of the rows and find commonly used byte sequences in the "Notes" column. In other words, I want to find commonly used phrases (two- to three-word phrases, so roughly 9-20 character sections of the TEXT blob). This will help me better determine whether associates' notes contain similar phrases (troubleshooting techniques) that we could standardize in our troubleshooting process flow. Closing Note: I'd really rather not build an application to do this, as my method would probably not be the most efficient. Hopefully all this makes sense. Please let me know in the comments if anything needs clarification.
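
    For what it's worth, the closest out-of-the-box fit may be the Term Extraction transformation in SQL Server Integration Services rather than Analysis Services proper. If a small script turns out to be unavoidable after all, the underlying idea is just n-gram counting; a hedged C# sketch, where the notes parameter is assumed to be streamed from the Notes column by whatever data access you already have:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    static class PhraseMiner
    {
        // Counts every two- and three-word phrase across all notes and
        // returns the ones that occur at least minCount times.
        public static IEnumerable<KeyValuePair<string, int>> CommonPhrases(
            IEnumerable<string> notes, int minCount)
        {
            var counts = new Dictionary<string, int>();
            foreach (string note in notes)
            {
                string[] words = note.ToLowerInvariant().Split(
                    new[] { ' ', '\t', '\r', '\n', '.', ',', ';' },
                    StringSplitOptions.RemoveEmptyEntries);
                for (int n = 2; n <= 3; n++)
                {
                    for (int i = 0; i + n <= words.Length; i++)
                    {
                        string phrase = string.Join(" ", words, i, n);
                        int c;
                        counts.TryGetValue(phrase, out c);
                        counts[phrase] = c + 1;
                    }
                }
            }
            return counts.Where(p => p.Value >= minCount)
                         .OrderByDescending(p => p.Value);
        }
    }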

    Read the article

  • Normalize whitespace and other plain-text formatting routines

    - by dreftymac
    Background: The language is JavaScript. The goal is to find a library or pre-existing code to do low-level plain-text formatting. I can write it myself, but why reinvent the wheel? The issue is that it is tough to determine whether a "wheel" is out there, since any search for JavaScript libraries pulls up an ocean of HTML-centric stuff. I am not necessarily interested in HTML, just text. Example: I need a JavaScript function that changes this:
    BEFORE:
    nisi ut aliquip | ex ea commodo consequat duis |aute irure dolor in esse cillum dolore | eu fugiat nulla pariatur |excepteur sint occa in culpa qui | officia deserunt mollit anim id |est laborum
    ... into this ...
    AFTER:
    nisi ut aliquip | ex ea commodo consequat duis | aute irure dolor in esse cillum dolore | eu fugiat nulla pariatur | excepteur sint occa in culpa qui | officia deserunt mollit anim id | est laborum
    Question: Does a JavaScript library exist that is not HTML/web-development-centric and has functions for normalizing spaces in delimited plain text, and for justifying and spacing plain text? Rationale: Investigating JavaScript for use in a programmer's text editor.

    Read the article

  • jQuery onblur text

    - by Isis
    Hello,

    <div id="servisesmenu">
        <span class="services" title="???????">???????</span>
    </div>
    <div id="services_menu" class="hiddenmenu">
        <div class="framemenu">
            <div class="itemmenu"><a href="/flights_booking/" class="u" title="??????? ??????????? ??????">??????? ??????????? ??????</a></div>
            <div class="itemmenu"><a href="/hotels/" class="u" title="???????????? ???????? ??????">???????????? ???????? ??????</a></div>
            <div class="itemmenu"><a href="/sea_cruises_search/" class="u" title="????? ???????">????? ???????</a></div>
            <div class="itemmenu"><a href="/flights_panel/" class="u" title="????? ??????????">????? ??????????</a></div>
        </div>
    </div>

    $('.services').click(function() {
        $('#services_menu').attr('class') == 'hiddenmenu'
            ? $('#services_menu').attr('class', 'visiblemenu')
            : $('#services_menu').attr('class', 'hiddenmenu');
    });

    This works. But how can I make the menu disappear (get the hiddenmenu class back) when the user clicks anywhere else on the page? Sorry for my bad English. Thank you!

    Read the article

  • Search implementation dilemma: full text vs. plain SQL

    - by Ethan
    I have a MySQL/Rails app that needs search. Here's some info about the data:
    Users search within their own data only, so searches are narrowed down by user_id to begin with.
    Each user will have up to about five thousand records (they accumulate over time). I wrote out a typical user's records to a text file; the file size is 2.9 MB.
    Search has to cover two columns: title and body. title is a varchar(255) column; body is of column type text.
    This will be lightly used. If I averaged a few searches per second, that would be surprising.
    It's running on a 500 MB CentOS 5 VPS machine.
    I don't want relevance ranking or any kind of fuzziness. Searches should be for exact strings and reliably return all records containing the string. Simple date order: newest to oldest.
    I'm using the InnoDB table type.
    I'm looking at plain SQL search (through the searchlogic gem) or full-text search using Sphinx and the Thinking Sphinx gem. Sphinx is very fast and Thinking Sphinx is cool, but it adds complexity: a daemon to maintain, and cron jobs to maintain the index. Can I get away with plain SQL search for a small-scale app?

    Read the article

  • How to get user input before saving a file in Sublime Text

    - by EddieJessup
    I'm making a plugin in Sublime Text that prompts the user for a password to encrypt a file before it's saved. There's a hook in the API that's executed before a save, so my naïve implementation is:

    class TranscryptEventListener(sublime_plugin.EventListener):

        def on_pre_save(self, view):
            # If document is set to encode on save
            if view.settings().get('ON_SAVE'):
                self.view = view
                # Prompt user for password
                message = "Create a Password:"
                view.window().show_input_panel(message, "", self.on_done, None, None)

        def on_done(self, password):
            self.view.run_command("encode", {"password": password})

    The problem with this is that by the time the input panel appears for the user to enter their password, the document has already been saved (despite the trigger being 'on_pre_save'). Once the user hits enter, the document is encrypted fine, but the result is a saved plaintext file and a modified buffer filled with the encrypted text. So I need to make Sublime Text wait until the user has input the password before carrying out the save. Is there a way to do this? At the moment I'm just manually re-saving once the encryption has been done:

    def on_pre_save(self, view, encode=False):
        if view.settings().get('ON_SAVE') and not view.settings().get('ENCODED'):
            self.view = view
            message = "Create a Password:"
            view.window().show_input_panel(message, "", self.on_done, None, None)

    def on_done(self, password):
        self.view.run_command("encode", {"password": password})
        self.view.settings().set('ENCODED', True)
        self.view.run_command('save')
        self.view.settings().set('ENCODED', False)

    but this is messy, and if the user cancels the encryption then the plaintext file gets saved, which isn't ideal. Any thoughts? Edit: I think I could do it cleanly by overriding the default save command. I hoped to do this using the on_text_command or on_window_command triggers, but it seems the save command doesn't trigger either of these (maybe it's an application command? But there's no on_application_command). Is there just no way to override the save function?

    Read the article

  • Creating a smart text generator

    - by royrules22
    I'm doing this for fun (or, as 4chan says, "for teh lolz"), and if I learn something on the way, all the better. I took an AI course almost two years ago now and I really enjoyed it, but I managed to forget everything, so this is a way to refresh that. Anyway, I want to be able to generate text given a set of inputs. Basically this will read forum posts (or maybe Twitter tweets) and then generate a comment based on the learning. Now, the simplest way would be to use a Markov chain text generator, but I want something a little more complex than that, as a Markov chain basically only learns by word order (which word is more likely to appear after word x given the input text). I'm trying to see if there's something I can do to make it a little bit smarter. For example, I want it to do something like this (a baseline sketch follows this list):
    Learn from a large selection of posts in a message board, but don't weight them too much
    For each post: learn from the other comments in that post and weight these inputs higher
    Generate a comment and post it
    See what other users' reaction to the post was. If good, weight it positively so you make more posts that are similar to the one made, and vice versa if negative.
    It's the weighting and learning-from-mistakes part that I'm not sure how to implement. I thought about artificial neural networks (mainly because I remember enjoying that chapter), but as far as I can tell they are mainly used to classify things (i.e. given a finite set of choices [x1...xn], which x is this given input), not really to generate anything. I'm not even sure if this is possible, and if it is, what should I go about learning/figuring out? What algorithm is best suited for this? To those worried that I will use this as a bot to spam or provide bad answers to SO: I promise that I will not use this to provide (bad) advice or to spam for profit. I definitely will not post its nonsensical thoughts on SO. I plan to use it for my own amusement. Thanks!
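
    For reference, the Markov-chain baseline being discussed is small enough to sketch. A hedged C# illustration (class and method names are made up, and the weight parameter is only a hook for the feedback idea, not an established design):

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class MarkovGenerator
    {
        // word -> (next word -> weight); feedback could later adjust weights.
        readonly Dictionary<string, Dictionary<string, int>> chain =
            new Dictionary<string, Dictionary<string, int>>();
        readonly Random rng = new Random();

        public void Learn(string text, int weight = 1)
        {
            string[] w = text.Split(' ');
            for (int i = 0; i + 1 < w.Length; i++)
            {
                Dictionary<string, int> next;
                if (!chain.TryGetValue(w[i], out next))
                    chain[w[i]] = next = new Dictionary<string, int>();
                int c;
                next.TryGetValue(w[i + 1], out c);
                next[w[i + 1]] = c + weight;   // weight > 1 favours well-received text
            }
        }

        public string Generate(string seed, int length)
        {
            var result = new List<string> { seed };
            string cur = seed;
            while (result.Count < length)
            {
                Dictionary<string, int> next;
                if (!chain.TryGetValue(cur, out next))
                    break;
                int pick = rng.Next(next.Values.Sum());   // weighted random choice
                foreach (var kv in next)
                {
                    pick -= kv.Value;
                    if (pick < 0) { cur = kv.Key; break; }
                }
                result.Add(cur);
            }
            return string.Join(" ", result);
        }
    }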

    Read the article

  • Concatenate 2 text elements on a line with full-width border using CSS only

    - by Michael Horne
    Okay, I'm a newbie to CSS3, so please be gentle. ;-) I'm working with some WordPress code (the WooCommerce plugin, to be exact), and I'm trying to format a line in a sidebar so that two separate text items (one in an <a>, the other in a <span>) sit on the same line, span the full width of the column, and have a bottom border. It looks something like this (except that the bottom border on each text item does not go all the way across the enclosing sidebar box): http://www.dalluva.com/temp/browse-catalog.JPG (sorry, I'm new and can't post inline images yet). Here's the code fragment I'm trying to live with (i.e. I don't want to change it):

    <div class="widget">
    ...
    <ul class="product-categories">
        <li class="cat-item">
            <a href="http://localhost/dalluva/shop/product-category/books/">Books</a>
            <span class="count">(5)</span>
        </li>
    ...

    And here's the CSS I have now:

    .widget ul li a {
        border-bottom: 1px solid #e9e9e9;
        line-height: 1.0;
        padding: 5px 0 5px 22px;
        display: inline-block;
    }
    .widget ul li span {
        border-bottom: 1px solid #e9e9e9;
        line-height: 1.0;
        padding: 5px 0 5px 0;
        display: inline-block;
    }

    The output in the image above looks right for this CSS, but when I change the span CSS to include width: 100%, the span element wraps to the next line, looking like this: http://www.dalluva.com/temp/browse-catalog-2.JPG. I've played with white-space: nowrap, overflow: hidden, etc., but I can't seem to find a way to have both the <a> and the <span> text on the same line with the border extending the full width of the column. Any suggestions on getting the desired effect through CSS only? Thanks. Michael

    Read the article

  • Full Text Search like Google

    - by Eduardo
    I would like to implement full-text search in my offline (Android) application to search the user-generated list of notes. I would like it to behave just like Google (since most people are already used to querying Google). My initial requirements are:
    Fast: like Google, or as fast as possible, with 100,000 documents of 200 words each.
    Searching for two words should only return documents that contain both words (not just one word), unless the OR operator is used.
    Case insensitive (aka normalization): if I have the word 'Hello' and I search for 'hello', it should match.
    Diacritical-mark insensitive: if I have the word 'así', a search for 'asi' should match. In Spanish, many people incorrectly either leave out diacritical marks or fail to place them correctly.
    Stop-word elimination: to avoid a huge index, meaningless words like 'and', 'the' or 'for' should not be indexed at all.
    Dictionary substitution (aka stemming): similar words should be indexed as one. For example, instances of 'hungrily' and 'hungry' should be replaced with 'hunger'.
    Phrase search: if I have the text 'Hello world!', a search for '"world hello"' should not match it, but a search for '"hello world"' should.
    Search all fields (in multi-field documents) if no field is specified (not just a default field).
    Auto-completion in search results while typing, to suggest popular searches (just like Google Suggest).
    How may I configure a full-text search engine to behave as much as possible like Google? (I am mostly interested in open source, Java and, in particular, Lucene.)

    Read the article

  • C#: extract email addresses from inside hundreds of text files

    - by Developer
    My SMTP server got hundreds of errors when sending lots of emails. I now have lots of .BAD files, each containing an error message and, somewhere in the middle, the actual email address the message was supposed to be sent to. What is the easiest way to extract from each file just the email address, so that I can have a list of the failed recipients? I can code in C#, and any suggestion will be truly welcome. BAD SAMPLE TEXT:

    From: [email protected]
    To: [email protected]
    Date: Tue, 25 Sep 2012 12:12:09 -0700
    MIME-Version: 1.0
    Content-Type: multipart/report; report-type=delivery-status; boundary="9B095B5ADSN=_01CD9B35032DF58000000066my.server.co"
    X-DSNContext: 7ce717b1 - 1386 - 00000002 - C00402D1
    Message-ID:
    Subject: Delivery Status Notification (Failure)

    This is a MIME-formatted message. Portions of this message may be unreadable without a MIME-capable mail program.

    --9B095B5ADSN=_01CD9B35032DF58000000066my.server.co
    Content-Type: text/plain; charset=unicode-1-1-utf-7

    This is an automatically generated Delivery Status Notification. Unable to deliver message to the following recipients, due to being unable to connect successfully to the destination mail server.

    [email protected]

    --9B095B5ADSN=_01CD9B35032DF58000000066my.server.com
    Content-Type: message/delivery-status

    Reporting-MTA: dns;my.server.com
    Received-From-MTA: dns;Social
    Arrival-Date: Tue, 25 Sep 2012 11:45:15 -0700

    Final-Recipient: rfc822;[email protected]
    Action: failed
    Status: 4.4.7

    --9B095B5ADSN=_01CD9B35032DF58000000066my.server.com
    Content-Type: message/rfc822

    Received: from Social ([127.0.0.1]) by my.server.com with Microsoft SMTPSVC(7.5.7601.17514); Tue, 25 Sep 2012 11:45:15 -0700

    ====================================== ...and lots more text after =====================

    Mainly I want to find the "[email protected]" email right in the middle...
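
    Assuming the failed address always follows the "Final-Recipient: rfc822;" header, as it does in the sample above (an assumption worth checking against a few of your files), a minimal C# sketch could look like this; the folder path is hypothetical:

    using System;
    using System.IO;
    using System.Linq;
    using System.Text.RegularExpressions;

    class BadMailScanner
    {
        static void Main()
        {
            // Hypothetical location; point this at your Badmail folder.
            const string folder = @"C:\inetpub\mailroot\Badmail";

            // The failed address follows the 'Final-Recipient: rfc822;' header
            // in the delivery-status part of each .BAD file.
            var rx = new Regex(@"Final-Recipient:\s*rfc822;\s*(\S+@\S+)",
                               RegexOptions.IgnoreCase);

            var addresses = Directory.EnumerateFiles(folder, "*.BAD")
                .Select(file => rx.Match(File.ReadAllText(file)))
                .Where(m => m.Success)
                .Select(m => m.Groups[1].Value)
                .Distinct()
                .ToList();

            File.WriteAllLines("failed-emails.txt", addresses);
        }
    }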

    Read the article

  • How To Read A Remote Text File

    - by XcodeDev
    Hi, I would like to read a remote text file called posts.txt on my website. An example of the contents of the posts.txt file would be this:

    <div style="width : 300px; position : relative"><font face="helvetica, geneva, sans serif" size="6"><b>2</b></font><font face="helvetica, geneva, sans serif" size="4"><i> scored by iSDK</i></font><br><img src="Bar.png" /></div>
    <div style="width : 300px; position : relative"><font face="helvetica, geneva, sans serif" size="6"><b>2</b></font><font face="helvetica, geneva, sans serif" size="4"><i> scored by martin</i></font><br><img src="Bar.png" /></div>

    What I want to know is: how can I get the score and the "scored by" text from the .txt file? The score is (in this case) the <b>2</b>, and the "scored by" text in this case would be "scored by iSDK". Any code showing me how to do this is twice as helpful! Thanks in advance, XcodeDev

    Read the article

  • Testing Entity Framework applications, pt. 3: NDbUnit

    - by Thomas Weller
    This is the third of a three-part series that deals with the issue of faking test data in the context of a legacy app that was built with Microsoft's Entity Framework (EF) on top of an MS SQL Server database – a scenario that can be found very often. Please read the first part for a description of the sample application, a discussion of some general aspects of unit testing in a database context, and of some more specific aspects of the EF/MSSQL combination discussed here.

    Lately, I wondered how you would 'mock' the data layer of a legacy application when this data layer is made up of an MS Entity Framework (EF) model in combination with an MS SQL Server database. Originally, this question came up in the context of how you could enable higher-level integration tests (automated UI tests, to be exact) for a legacy application that uses this EF/MSSQL combo as its data store mechanism – a not so uncommon scenario. The question sparked my interest, and I decided to dig into it somewhat deeper. What I've found out is, in short, that it's not very easy and straightforward to do – but it can be done. The two strategies best suited to fit the bill involve using either the (commercial) Typemock Isolator tool or the (free) NDbUnit framework. The use of Typemock was discussed in the previous post; this post will now present the NDbUnit approach...

    NDbUnit is an Apache 2.0-licensed open-source project and, like so many other Nxxx tools and frameworks, is basically a C#/.NET port of the corresponding Java version (namely DbUnit). In short, it helps you flexibly manage the state of a database: it lets you easily perform basic operations (e.g. Insert, Delete, Refresh, DeleteAll) against your database and, most notably, lets you feed it with data from external xml files. Let's have a look at how things can be done with the help of this framework.

    Preparing the test data

    Compared to Typemock, using NDbUnit implies a totally different approach to meeting our testing needs. The testing scenario described here requires a running instance of an SQL Server database, and it also means that the Entity Framework model that sits on top of this database is completely unaffected. First things first: for its interactions with the database, NDbUnit relies on a .NET Dataset xsd file. See Step 1 of their Quick Start Guide for a description of how to create one.
    With this prerequisite in place, the test fixture's setup code could look something like this:

    [TestFixture, TestsOn(typeof(PersonRepository))]
    [Metadata("NDbUnit Quickstart URL",
              "http://code.google.com/p/ndbunit/wiki/QuickStartGuide")]
    [Description("Uses the NDbUnit library to provide test data to a local database.")]
    public class PersonRepositoryFixture
    {
        #region Constants

        private const string XmlSchema = @"..\..\TestData\School.xsd";

        #endregion // Constants

        #region Fields

        private SchoolEntities _schoolContext;
        private PersonRepository _personRepository;
        private INDbUnitTest _database;

        #endregion // Fields

        #region Setup/TearDown

        [FixtureSetUp]
        public void FixtureSetUp()
        {
            var connectionString = ConfigurationManager.ConnectionStrings["School_Test"].ConnectionString;
            _database = new SqlDbUnitTest(connectionString);
            _database.ReadXmlSchema(XmlSchema);
            var entityConnectionStringBuilder = new EntityConnectionStringBuilder
            {
                Metadata = "res://*/School.csdl|res://*/School.ssdl|res://*/School.msl",
                Provider = "System.Data.SqlClient",
                ProviderConnectionString = connectionString
            };
            _schoolContext = new SchoolEntities(entityConnectionStringBuilder.ConnectionString);
            _personRepository = new PersonRepository(this._schoolContext);
        }

        [FixtureTearDown]
        public void FixtureTearDown()
        {
            _database.PerformDbOperation(DbOperationFlag.DeleteAll);
            _schoolContext.Dispose();
        }

        ...

    As you can see, there is slightly more fixture setup code involved if your tests are using NDbUnit to provide the test data: because we're dealing with a physical database instance here, we first need to pick up the test-specific connection string from the test assembly's App.config, then initialize an NDbUnit helper object with this connection along with the provided xsd file, and also set up the SchoolEntities and the PersonRepository instances accordingly. The _database field (an instance of the INDbUnitTest interface) will be our single access point to the underlying database: we use it to perform all the required operations against the data store. To have a flexible mechanism to easily insert data into the database, we can write a helper method like this:

    private void InsertTestData(params string[] dataFileNames)
    {
        _database.PerformDbOperation(DbOperationFlag.DeleteAll);
        if (dataFileNames == null)
        {
            return;
        }
        try
        {
            foreach (string fileName in dataFileNames)
            {
                if (!File.Exists(fileName))
                {
                    throw new FileNotFoundException(Path.GetFullPath(fileName));
                }
                _database.ReadXml(fileName);
                _database.PerformDbOperation(DbOperationFlag.InsertIdentity);
            }
        }
        catch
        {
            _database.PerformDbOperation(DbOperationFlag.DeleteAll);
            throw;
        }
    }

    This lets us easily insert test data from xml files, in any number and in a controlled order (which is important because we eventually must fulfill referential constraints, or we must account for some other stuff that imposes a specific ordering on data insertion). Again, as with Typemock, I won't go into API details here. Unfortunately, there isn't much documentation for NDbUnit anyway, other than the already mentioned Quick Start Guide (and the source code itself, of course) - a not so uncommon problem with smaller open source projects.
    Last but not least, we need to provide the required test data in xml form. A snippet of data for the People table might look like this, for example:

    <?xml version="1.0" encoding="utf-8" ?>
    <School xmlns="http://tempuri.org/School.xsd">
      <Person>
        <PersonID>1</PersonID>
        <LastName>Abercrombie</LastName>
        <FirstName>Kim</FirstName>
        <HireDate>1995-03-11T00:00:00</HireDate>
      </Person>
      <Person>
        <PersonID>2</PersonID>
        <LastName>Barzdukas</LastName>
        <FirstName>Gytis</FirstName>
        <EnrollmentDate>2005-09-01T00:00:00</EnrollmentDate>
      </Person>
      <Person>
        ...

    You can also have data from various tables in one single xml file, if that's appropriate for you (but beware of the already mentioned ordering issues). It's true that your test assembly may end up with dozens of such xml files, each containing quite a big amount of text data. But because the files are of very low complexity, and with the help of a little bit of Copy/Paste and Excel magic, this appears to be well manageable.

    Executing some basic tests

    Here are some of the possible tests that can be written with the above preparations in place:

    private const string People = @"..\..\TestData\School.People.xml";
    ...

    [Test, MultipleAsserts, TestsOn("PersonRepository.GetNameList")]
    public void GetNameList_ListOrdering_ReturnsTheExpectedFullNames()
    {
        InsertTestData(People);
        List<string> names =
            _personRepository.GetNameList(NameOrdering.List);
        Assert.Count(34, names);
        Assert.AreEqual("Abercrombie, Kim", names.First());
        Assert.AreEqual("Zheng, Roger", names.Last());
    }

    [Test, MultipleAsserts, TestsOn("PersonRepository.GetNameList")]
    [DependsOn("RemovePerson_CalledOnce_DecreasesCountByOne")]
    public void GetNameList_NormalOrdering_ReturnsTheExpectedFullNames()
    {
        InsertTestData(People);
        List<string> names =
            _personRepository.GetNameList(NameOrdering.Normal);
        Assert.Count(34, names);
        Assert.AreEqual("Alexandra Walker", names.First());
        Assert.AreEqual("Yan Li", names.Last());
    }

    [Test, TestsOn("PersonRepository.AddPerson")]
    public void AddPerson_CalledOnce_IncreasesCountByOne()
    {
        InsertTestData(People);
        int count = _personRepository.Count;
        _personRepository.AddPerson(new Person { FirstName = "Thomas", LastName = "Weller" });
        Assert.AreEqual(count + 1, _personRepository.Count);
    }

    [Test, TestsOn("PersonRepository.RemovePerson")]
    public void RemovePerson_CalledOnce_DecreasesCountByOne()
    {
        InsertTestData(People);
        int count = _personRepository.Count;
        _personRepository.RemovePerson(new Person { PersonID = 33 });
        Assert.AreEqual(count - 1, _personRepository.Count);
    }

    Not much difference here compared to the corresponding Typemock versions, except that we had to do a bit more preparational work (and also it was harder to get the required knowledge). But this picture changes quite dramatically if we look at some more demanding test cases:

    Ok, and what if things become somewhat more complex?

    Tests like the above ones represent the 'easy' scenarios. They may account for the biggest portion of real-world use cases of the application, and they are important to make sure that it is generally sound. But usually, all these nasty little bugs originate from the more complex parts of our code, or they occur when something goes wrong. So, for a testing strategy to be of real practical use, it is especially important to see how easy or difficult it is to mimic a scenario which represents a more complex or exceptional case.
    The following test, for example, deals with the case in which there is some sort of invalid input from the caller:

    [Test, MultipleAsserts, TestsOn("PersonRepository.GetCourseMembers")]
    [Row(null, typeof(ArgumentNullException))]
    [Row("", typeof(ArgumentException))]
    [Row("NotExistingCourse", typeof(ArgumentException))]
    public void GetCourseMembers_WithGivenVariousInvalidValues_Throws(string courseTitle, Type expectedInnerExceptionType)
    {
        var exception = Assert.Throws<RepositoryException>(() =>
                                _personRepository.GetCourseMembers(courseTitle));
        Assert.IsInstanceOfType(expectedInnerExceptionType, exception.InnerException);
    }

    Apparently, this test doesn't need an 'Arrange' part at all (see here for the same test with the Typemock tool). It acts just like any other client code, and all the required business logic comes from the database itself. This doesn't always necessarily mean that there is less complexity, but only that the complexity happens in a different part of your test resources (namely in the xml files, where you sometimes have to spend a lot of effort on carefully preparing the required test data). Another example, which relies on an underlying 1-n relationship, might be this:

    [Test, MultipleAsserts, TestsOn("PersonRepository.GetCourseMembers")]
    public void GetCourseMembers_WhenGivenAnExistingCourse_ReturnsListOfStudents()
    {
        InsertTestData(People, Course, Department, StudentGrade);
        List<Person> persons = _personRepository.GetCourseMembers("Macroeconomics");
        Assert.Count(4, persons);
        Assert.ForAll(
            persons,
            @p => new[] { 10, 11, 12, 14 }.Contains(@p.PersonID),
            "Person has none of the expected IDs.");
    }

    If you compare this test to its corresponding Typemock version, you immediately see that the test itself is much simpler, easier to read, and thus much more intention-revealing. The complexity here lies hidden behind the call to the InsertTestData() helper method and the content of the xml files with the test data. And also note that you might have to provide additional data which are not even directly relevant to your test, but are required only to fulfill some integrity needs of the underlying database.

    Conclusion

    The first thing to notice when comparing the NDbUnit approach to its Typemock counterpart obviously deals with performance: of course, NDbUnit is much slower than Typemock. Technically, it doesn't even make sense to compare the two tools. But practically, it may well play a role and could or could not be an issue, depending on how many tests you have of this kind, how often you run them, and what role they play in your development cycle. Also, because the dataset from the required xsd file must fully match the database schema (even in parts that otherwise wouldn't be relevant to you), it can be quite cumbersome to be in a team where different people are working with the database in parallel.
    My personal experience is – as already said in the first part – that Typemock gives you a better development experience in a 'dynamic' scenario (when you're working in some kind of TDD style, you're oftentimes executing the tests from your dev box, and your database schema changes frequently), whereas the NDbUnit approach is a good and solid solution in more 'static' development scenarios (when you need to execute the tests less frequently or only on a separate build server, and/or the underlying database schema can be kept relatively stable), for example some variations of higher-level integration or user-acceptance tests. But in any case, opening Entity Framework-based applications for testing requires a fair amount of resources, planning, and preparational work – it's definitely not the kind of stuff that you would call 'easy to test'. Hopefully, future versions of EF will take testing concerns into account. Otherwise, I don't see too much of a future for the framework in the long run, even though it's quite popular at the moment...

    The sample solution

    A sample solution (VS 2010) with the code from this article series is available via my Bitbucket account from here (Bitbucket is a hosting site for Mercurial repositories. The repositories may also be accessed with the Git and Subversion SCMs - consult the documentation for details. In addition, it is possible to download the solution simply as a zipped archive, via the 'get source' button on the very right.). The solution contains some more tests against the PersonRepository class, which are not shown here. Also, it contains database scripts to create and fill the School sample database. To compile and run, the solution expects the Gallio/MbUnit framework to be installed (which is free and can be downloaded from here), the NDbUnit framework (which is also free and can be downloaded from here), and the Typemock Isolator tool (a fully functional 30-day trial is available here). Moreover, you will need an instance of the Microsoft SQL Server DBMS, and you will have to adapt the connection strings in the test projects' App.config files accordingly.

    Read the article

  • Custom Text and Binary Payloads using WebSocket (TOTD #186)

    - by arungupta
    TOTD #185 explained how to process text and binary payloads in a WebSocket endpoint. In summary, a text payload may be received as:

    public void receiveTextMessage(String message) {
        . . .
    }

    And a binary payload may be received as:

    public void receiveBinaryMessage(ByteBuffer message) {
        . . .
    }

    As you realize, both of these methods receive the text and binary data in raw format. However, you may like to receive and send the data using a POJO. This marshaling and unmarshaling can be done in the method implementation, but the JSR 356 API provides a cleaner way. For encoding and decoding text payloads into POJOs, the Decoder.Text (for inbound payloads) and Encoder.Text (for outbound payloads) interfaces need to be implemented. A sample implementation below shows how a text payload consisting of JSON structures can be encoded and decoded:

    public class MyMessage implements Decoder.Text<MyMessage>, Encoder.Text<MyMessage> {

        private JsonObject jsonObject;

        @Override
        public MyMessage decode(String string) throws DecodeException {
            this.jsonObject = new JsonReader(new StringReader(string)).readObject();
            return this;
        }

        @Override
        public boolean willDecode(String string) {
            return true;
        }

        @Override
        public String encode(MyMessage myMessage) throws EncodeException {
            return myMessage.jsonObject.toString();
        }

        public JsonObject getObject() {
            return jsonObject;
        }
    }

    In this implementation, the decode method decodes the incoming text payload to MyMessage, the encode method encodes MyMessage for the outgoing text payload, and the willDecode method returns true or false depending on whether the message can be decoded. The encoder and decoder implementation classes need to be specified in the WebSocket endpoint as:

    @WebSocketEndpoint(value="/endpoint", encoders={MyMessage.class}, decoders={MyMessage.class})
    public class MyEndpoint {
        public MyMessage receiveMessage(MyMessage message) {
            . . .
        }
    }

    Notice the updated method signature, where the application is working with MyMessage instead of the raw string. Note that the encoder and decoder implementations just illustrate the point and provide no validation or exception handling. Similarly, the Encoder.Binary and Decoder.Binary interfaces need to be implemented for encoding and decoding binary payloads. Here are some references for you:
    JSR 356: Java API for WebSocket - Specification (Early Draft) and Implementation (already integrated in GlassFish 4 promoted builds)
    TOTD #183 - Getting Started with WebSocket in GlassFish
    TOTD #184 - Logging WebSocket Frames using Chrome Developer Tools, Net-internals and Wireshark
    TOTD #185 - Processing Text and Binary (Blob, ArrayBuffer, ArrayBufferView) Payload in WebSocket
    Subsequent blogs will discuss the following topics (not necessarily in that order):
    Error handling
    Interface-driven WebSocket endpoint
    Java client API
    Client and Server configuration
    Security
    Subprotocols
    Extensions
    Other topics from the API

    Read the article

  • What NAS/Celerra services should be monitored?

    - by wildchild
    We have a monitoring tool called SCOM which mainly monitors different OS-related services. However, being part of the storage team, we would also like some of our services to be monitored. We have an HP NAS, and I am wondering which services I can ask the other team to monitor for us and alert us about if something goes wrong. The same goes for Celerra and Centera: what important services can be monitored? I did search, but to no avail; I'm not finding any of the useful services. Any help in this regard is greatly appreciated. Thanks!

    Read the article

  • LLBLGen Pro v3.0 with Entity Framework v4.0 (12m video)

    - by FransBouma
    Today I recorded a video in which I illustrate some of the database-first functionality available in LLBLGen Pro v3.0. LLBLGen Pro v3.0 also supports model-first functionality, which I hope to illustrate in an upcoming video. LLBLGen Pro v3.0 is currently in beta and is scheduled to RTM some time in May 2010. It supports the following frameworks out of the box, with more scheduled to follow in the coming year: LLBLGen Pro RTL (our own O/R mapper framework), Linq to Sql, NHibernate and Entity Framework (v1 and v4). The video I linked to below illustrates the creation of an entity model for Entity Framework v4, by reverse engineering the SQL Server 2008 example database 'AdventureWorks'. The following topics (among others) are included in the video:
    Abbreviation support (example: convert 'Qty' into 'Quantity' during name construction)
    Flexible, framework-specific settings
    Attribute definitions for various elements (so no requirement for buddy classes or messing with generated code or templates)
    Retrieval of relational model data from a database
    Reverse engineering of tables into entities, automatically placed in groups
    Auto-creation of inheritance hierarchies
    Refactoring of entity fields into Value Type Definitions (DDD)
    Mapping a Typed View onto a stored procedure resultset
    Creation of a Typed List (definition of a query with a projection) on a set of related entities
    Validation and correction of found inconsistencies and errors
    Generating code using one of the pre-defined presets
    Illustration of the code in VS.NET 2010
    It also gives a good overview of what it takes with LLBLGen Pro v3.0 to start from a new project, point it to a database, get an entity model, perform tweaks and validation, and generate code which is ready to run. I am no video-recording expert, so there's no audio and some mouse movements might be a little too quick. If that's the case, please pause the video. It's rather big (52MB). Click here to open the HTML page with the video (Flash). Opens in a new window. LLBLGen Pro v3.0 is currently in beta (available for v2.x customers) and scheduled to be released somewhere in May 2010.

    Read the article

  • Storing SCA Metadata in the Oracle Metadata Services Repository by Nicolás Fonnegra Martinez and Markus Lohn

    - by JuergenKress
    The advantages of using the Oracle Metadata Services Repository as a central storage for metadata. SCA has been available since the release of the Oracle SOA Suite 11g. This technology combines and orchestrates several SOA components inside an SCA composite, making design, development, deployment, and maintenance easier. SCA development is metadata-driven, meaning that metadata artifacts, such as Web Services Description Language (WSDL), XML Schema Definition (XSD), XML, and others, define the composite's behavior. With the increased number of composites and the dependencies among them, it became necessary to manage all the metadata in an adequate way. This article addresses the advantages of using the Oracle Metadata Services (MDS) repository as a central storage for the metadata. The MDS repository is a central part of the Oracle Fusion Middleware landscape, managing the metadata for several technologies, such as Oracle Application Development Framework (Oracle ADF), Oracle WebCenter, and the Oracle SOA Suite. This article is divided into three parts. The first part provides an overview of SCA and MDS. The second part describes some MDS tasks that help in the management of the SCA metadata files inside the repository. The third part shows how to develop SCA composites in combination with an MDS repository. Read the full article here. SOA & BPM Partner Community: For regular information on the Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration, please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

    Read the article

  • Anonymous Access and Sharepoint Web Services

    - by Stacy Vicknair
    A month or so ago I was working on a feature for a project that required a level of anonymity on the SharePoint site in order to function. At the same time, I was also working on another feature that required access to the SharePoint search.asmx web service. I found out, the hard way, that the SharePoint web services do not operate in an expected way while the IIS site is under anonymous access. Even though these web services expect requests with certain permissions (in theory), they never attempt to request those credentials when the web service is contacted. As a result, the services return a 401 Unauthorized response. The fix for my situation was to restrict anonymous access to the area that needed it (in this case, the control in question had support for being used in an ASP.NET app that I could throw in a virtual directory). After that, I removed anonymous access from IIS for the site itself, and the QueryService requests were working once more. Here's a related article with a bit more depth about a similar experience: http://chrisdomino.com/Blog/Post/401-Reasons-Why-SharePoint-Web-Services-Don-t-Work-Anonymously?Length=4

    Read the article

  • SQLAuthority News – Download Whitepaper – SQL Server 2008 R2 Analysis Services Operations Guide

    - by pinaldave
    SQL Server Analysis Services (SSAS) has always been an interesting subject for research. Analysis Services cubes are a very powerful tool in the hands of the business intelligence (BI) developer. They provide an easy way to expose even large data models directly to business users. Microsoft has published a very informative white paper, the Analysis Services Operations Guide, authored by Thomas Kejser, John Sirmon, and Denny Lee. In this guide you will find information on how to test and run Microsoft SQL Server Analysis Services in SQL Server 2005, SQL Server 2008, and SQL Server 2008 R2 in a production environment. The focus of the guide is how you can test, monitor, diagnose, and remove production issues on even the largest scaled cubes. The paper also provides guidance on how to configure the server for the best possible performance. It is the goal of the guide to make your operations processes as painless as possible, and to have you run with the best possible performance without any additional development effort to your deployed cubes. In this guide, you will learn how to get the best out of your existing data model by making changes transparent to the data model and by making configuration changes that improve the user experience of the cube. Download SQL Server 2008 R2 Analysis Services Operations Guide. Note: Abstract taken from the white paper. Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Read the article

  • Visual Studio Talk Show #115 is now online - Entity Framework 4 (French)

    - by guybarrette
    http://www.visualstudiotalkshow.com Matthieu Mezil: Entity Framework 4. We talk with Matthieu Mezil about version 4 of the Entity Framework (EF4). Among other things, we look at how this new version, which will ship with Visual Studio 2010, makes it possible to build an ORM (Object Relational Mapper) with an Agile implementation. Matthieu Mezil is a consultant and trainer at Access IT in Paris. A C# MVP and INETA speaker, he specializes in the Entity Framework. He regularly gives talks on the subject, notably at Microsoft events. An MCT, Matthieu has also written several training courses on OOP, the C# language and, of course, the Entity Framework, which he frequently teaches. As part of his work, he often collaborates with the Microsoft Technology Center in Paris. Matthieu is also a prominent blogger: in French at http://blogs.codes-sources.com/matthieu and in English at http://msmvps.com/blogs/matthieu. Download the show: For direct access to the audio file in MP3 format, download the file using one of the buttons below. To download the show via the RSS feed, or through the iTunes Podcast directory, subscribe using the corresponding button below.

    Read the article
