Search Results

Search found 5579 results on 224 pages for 'behavioral pattern'.


  • An open plea to Microsoft to fix the serializers in WCF.

    - by Scott Wojan
    I simply DO NOT understand how Microsoft can be this far along with a tool like WCF and STILL tout it as an "Enterprise" tool. For example, the following is a simple xsd schema with a VERY simple data contract that any enterprise would expect an "enterprise system" to be able to handle:

    ```xml
    <?xml version="1.0" encoding="utf-8"?>
    <xs:schema id="Sample"
        targetNamespace="http://tempuri.org/Sample.xsd"
        elementFormDefault="qualified"
        xmlns="http://tempuri.org/Sample.xsd"
        xmlns:mstns="http://tempuri.org/Sample.xsd"
        xmlns:xs="http://www.w3.org/2001/XMLSchema">
      <xs:element name="SomeDataElement">
        <xs:annotation>
          <xs:documentation>This documents the data element. This sure would be nice for consumers to see!</xs:documentation>
        </xs:annotation>
        <xs:complexType>
          <xs:all>
            <xs:element name="Description" minOccurs="0">
              <xs:simpleType>
                <xs:restriction base="xs:string">
                  <xs:minLength value="0"/>
                  <xs:maxLength value="255"/>
                </xs:restriction>
              </xs:simpleType>
            </xs:element>
          </xs:all>
          <xs:attribute name="IPAddress" use="required">
            <xs:annotation>
              <xs:documentation>Another explanation! WOW!</xs:documentation>
            </xs:annotation>
            <xs:simpleType>
              <xs:restriction base="xs:string">
                <xs:pattern value="(([1-9]?[0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])\.){3}([1-9]?[0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])"/>
              </xs:restriction>
            </xs:simpleType>
          </xs:attribute>
        </xs:complexType>
      </xs:element>
    </xs:schema>
    ```

    A minimal example xml document would be:

    ```xml
    <?xml version="1.0" encoding="utf-8" ?>
    <SomeDataElement xmlns="http://tempuri.org/Sample.xsd" IPAddress="1.1.168.10">
    </SomeDataElement>
    ```

    With the maximal example being:

    ```xml
    <?xml version="1.0" encoding="utf-8" ?>
    <SomeDataElement xmlns="http://tempuri.org/Sample.xsd" IPAddress="1.1.168.10">
      <Description>ddd</Description>
    </SomeDataElement>
    ```

    This schema simply CANNOT be exposed by WCF. Let's list why:

    - svcutil.exe will not generate classes for you, because it can't read an xsd containing xs:annotation.
    - Even if you remove the documentation, the DataContractSerializer DOES NOT support attributes, so IPAddress would become an element, thus not meeting the contract.
    - xsd.exe could generate classes, but it is a very legacy tool that generates legacy code, and you still suffer from the following issues:
      - NONE of the serializers support emitting the xs:annotation documentation. You'd think a consumer would really like to have as much documentation as possible!
      - NONE of the serializers support enforcement of xs:restriction, so you can forget about xs:minLength, xs:maxLength, or xs:pattern being enforced.

    Microsoft... please, please, please, please look at putting the work into your serializers so that they support the very basics of designing enterprise data contracts!
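    For what it's worth, the usual partial workaround is to switch the contract over to the older XmlSerializer, which at least understands XML attributes. A minimal sketch (the type and service names here are illustrative, and the complaints above still stand: nothing below emits the annotations or enforces the restriction facets):

    ```csharp
    using System.ServiceModel;
    using System.Xml.Serialization;

    // XmlSerializerFormat tells WCF to use the XmlSerializer instead of the
    // DataContractSerializer, which is what makes the attribute mapping possible.
    [ServiceContract, XmlSerializerFormat]
    public interface ISampleService
    {
        [OperationContract]
        void Submit(SomeDataElement data);
    }

    [XmlRoot("SomeDataElement", Namespace = "http://tempuri.org/Sample.xsd")]
    public class SomeDataElement
    {
        // Serialized as an XML attribute, as the xsd requires.
        [XmlAttribute]
        public string IPAddress { get; set; }

        // The minLength/maxLength/pattern facets are NOT enforced here;
        // any validation has to be written by hand.
        [XmlElement("Description")]
        public string Description { get; set; }
    }
    ```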

    Read the article

  • Get Fanatical About Your Followers

    - by Mike Stiles
    In the fourth of our series of discussions with Aberdeen’s Trip Kucera, we touch on what fans of your brand have come to expect in exchange for their fandom.

    Spotlight: Around the Oracle Social office, we live for football. So when we think of a true “fan” of a brand, something on the level of a football fan is what comes to mind. But are brands trying to invest in fans on that same level?

    Trip: Yeah, if you’re a football fan, this is definitely your time of year. And if you’ve been to any NFL games recently, especially if you hadn’t been for a few years previously, you may have noticed that from the cup holders to in-stadium Wi-Fi, there’s an increasing emphasis being placed on “fan-focused” accommodations. That’s what they’re known as in the stadium business.

    Spotlight: How are brands doing in that fan-focused arena?

    Trip: Remember fan is short for “fanatical.” Brands can definitely learn from the way teams have become fanatical about their fans, or in the social media world, their followers. Many companies consider a segment of their addressable social audience as true fans; I’ve even heard the term “super-fans” used. So just as fans know and can tell you nearly everything about their favorite team, our research shows that there’s a lot of value in getting to know your social audience—your followers—at a deeper level.

    Spotlight: So did your research show there’s a lot to be gained by making fandom a two-way street?

    Trip: Aberdeen’s new social relationship management research suggests that companies should develop capabilities to better analyze their social audience at a more granular level. Countless “ripped from the headlines” examples, from “United Breaks Guitars” to the most recent British Airways social fiasco we talked about a few weeks ago, show how social can magnify the impact of a single customer voice.

    Spotlight: So how do the companies who are executing social most successfully do that?

    Trip: Leaders, which are the top-performing companies in Aberdeen’s study, are showing the value of identifying and categorizing your social audience. You should certainly treat every customer as if they have 10,000 followers, because they just might, but you can also proactively engage with high-value customers and high-value influencers. Getting back to the football analogy, it’s like how teams strive to give every guest a great experience, but they really roll out the red carpet for those season ticket and luxury box holders.

    Spotlight: I’m not allowed in luxury boxes, so you’ll have to tell me what that’s like. But what is the brand equivalent of rolling out the red carpet?

    Trip: Leaders are nearly three times more likely than Followers to have a process in place that identifies key social influencers for engagement, and more than twice as likely to identify customer advocates for social outreach. This is the kind of knowledge that gives companies the ability to better target social messaging and promotions like we talked about in our last discussion, as well as a basis for understanding how to measure the impact of their social media programs. I’ll give you an example. I hosted an event at one of my favorite restaurants recently. I had mentioned them in a Tweet several weeks before the event, and on the day of the event, they Tweeted out that they were looking forward to seeing me that evening. It’s a small thing, but it had a big impact and I’d certainly go back as a result.

    Spotlight: So what specifically can brands use and look at to determine where their potential super-fans are?

    Trip: Social graph analysis, which looks at both the demographic/psychographic trends as well as the behavioral connections, can surface important brand value. Aberdeen’s PR and Brand Management research indicated that top-performing companies are more than three times more likely than Followers to both determine demographic trends through social listening (44% vs. 13%), and to identify meaningful customer segments through social (44% vs. 12%). This kind of brand-level insight can complement and enrich traditional market research. But perhaps even more importantly, it can serve as an early warning system for customer experience failures.

    @mikestiles
    Photo: freedigitalphotos.net

    Read the article

  • Converting .docx to pdf (or .doc to pdf, or .doc to odt, etc.) with libreoffice on a webserver on the fly using php

    - by robertphyatt
    Ok, so I needed to convert .docx files to .pdf files on the fly, but none of the free php libraries that were available let me do it on my server (a webservice was not good enough). Basically either I needed to pay for a library (and have it maybe suck) or just deal with the free ones that didn't convert the formatting well enough. Not good enough! I found that LibreOffice (OpenOffice's successor) allows command-line conversion using the LibreOffice conversion engine (which DID preserve the formatting like I wanted and generally worked great). I loaded the latest version of Ubuntu (http://www.ubuntu.com/download/ubuntu/download) onto my Virtual Box (https://www.virtualbox.org/wiki/Downloads) on my computer and found that I was able to easily convert files from the command line like this:

    ```sh
    libreoffice --headless --convert-to pdf fileToConvert.docx --outdir output/path/for/pdf
    ```

    I thought: sweet... but I don't have admin rights on my host's web server. I tried to use a "portable" version of LibreOffice that I obtained from http://portablelinuxapps.org/ but I was unable to get it to work on my host's webserver, because my host's webserver didn't have all the dependencies (Dependency Hell! http://en.wikipedia.org/wiki/Dependency_hell). I was at a loss of how to make it work, until I ran across a cool project made by a Ph.D. student (Philip J. Guo) at Stanford called CDE: http://www.stanford.edu/~pgbovine/cde.html I will let you look at his explanations of how it works (I followed what he did in http://www.youtube.com/watch?feature=player_embedded&v=6XdwHo1BWwY, starting at about 32:00, as well as the directions on his site), but in short, it allows one to avoid dependency hell by copying all the files used when you run certain commands, recreating the linux environment where the command worked. I was able to use this to run LibreOffice without having to resort to someone's portable version of it, and it worked just like it did when I did it on Ubuntu with the command above, with a tweak: I needed to run the LibreOffice wrapper that CDE generated.

    So, below is my PHP code that calls it. In this code snippet, the filename to be copied is passed in as $_POST["filename"]. I copy the file to the same spot where I originally converted the file, convert it, copy it back and then delete all the files (so that it doesn't start growing exponentially). I did it this way because I wasn't able to make it work otherwise on the webserver. If there is a linux + webserver ninja out there that can figure out how to make it work without doing this, I would be interested to know what you did. Please post a comment or something if you did that.

    ```php
    <?php
    // first copy the file to the magic place where we can convert it to a pdf on the fly
    // ($time is a prefix defined elsewhere; note that $_POST["filename"] is used
    // unsanitized here -- at minimum basename()/escapeshellarg() would be wise)
    copy($time.$_POST["filename"], "../LibreOffice/cde-package/cde-root/home/robert/Desktop/".$_POST["filename"]);

    // change to that directory
    chdir('../LibreOffice/cde-package/cde-root/home/robert');

    // the magic command that does the conversion
    $myCommand = "./libreoffice.cde --headless --convert-to pdf Desktop/".$_POST["filename"]." --outdir Desktop/";
    exec($myCommand);

    // copy the file back
    copy("Desktop/".str_replace(".docx", ".pdf", $_POST["filename"]),
         "../../../../../documents/".str_replace(".docx", ".pdf", $_POST["filename"]));

    // delete all the files out of the magic place where we converted the pdf;
    // my files that I generated all happened to start with a number
    $files1 = scandir('Desktop');
    $pattern = '/^[0-9]/';
    foreach ($files1 as $value) {
        preg_match($pattern, $value, $matches);
        if (count($matches) > 0) {
            unlink("Desktop/".$value);
        }
    }

    // changing the header to the location of the file makes it work well on androids
    header('Location: '.str_replace(".docx", ".pdf", $_POST["filename"]));
    ?>
    ```

    And here is the tar.gz file I generated with CDE. To duplicate what I did exactly, put the tar.gz file in a folder somewhere. I will call that folder the "root". Make a new folder called "documents" in the "root" folder. Unpack the tar.gz and run the php script above from the "documents" folder. Success! I made a truly portable version of LibreOffice that can convert files on the fly on a webserver using 100% free, open source software!

    Read the article

  • Fast Data - Big Data's Achilles' heel

    - by thegreeneman
    At OOW 2013, in Mark Hurd and Thomas Kurian's keynote, they discussed Oracle's Fast Data software solution stack and a number of customers deploying Oracle's Big Data / Fast Data solutions, in particular Oracle's NoSQL Database. Since that time, there have been a large number of requests seeking clarification on how the Fast Data software stack works together to deliver on the promise of real-time Big Data solutions.

    Fast Data is a software solution stack that deals with one aspect of Big Data: high velocity. The software in the Fast Data solution stack involves 3 key pieces and their integration: Oracle Event Processing, Oracle Coherence, and Oracle NoSQL Database. All three of these technologies address a high-throughput, low-latency data management requirement.

    Oracle Event Processing enables continuous query to filter the Big Data fire hose, enables intelligent chained events for real-time service invocation, and augments the data stream to provide Big Data enrichment. Extended SQL syntax allows the definition of sliding windows of time, so that SQL statements can look for triggers on events like the breach of a weighted moving average on a real-time data stream.

    Oracle Coherence is a distributed grid caching solution which is used to provide very low latency access to cached data when the data is too big to fit into a single process, so it is spread around in a grid architecture to provide memory-latency-speed access. It also has some special capabilities to deploy remote behavioral execution for "near data" processing.

    The Oracle NoSQL Database is designed to ingest simple key-value data at a controlled throughput rate while providing data redundancy in a cluster to facilitate highly concurrent, low latency reads. For example, when large sensor networks are generating data that needs to be captured while analysts are simultaneously extracting the data using range-based queries for upstream analytics. Another example might be storing cookies from user web sessions for ultra low latency user profile management, while also leveraging that data in holistic MapReduce operations with your Hadoop cluster to do segmented site analysis. In short, NoSQL plays a critical role in Big Data capture and enrichment while simultaneously providing a low latency and scalable data management infrastructure through clustered, always-on, parallel processing in a shared-nothing architecture, and a NoSQL cluster can be deployed easily to provide essential services in industry-specific Fast Data solutions; for example, all of these technologies work together in a location-based personalization service.

    The question then becomes how these things work together to deliver an end-to-end Fast Data solution. The answer is that while different applications will exhibit unique requirements that may drive the need for one or the other of these technologies, often when it comes to Big Data you may need to use them together. You may have the need for the memory latencies of the Coherence cache, but just have too much data to cache, so you use a combination of Coherence and Oracle NoSQL to handle extreme-speed cache overflow and retrieval. Here is a great reference to how these two technologies are integrated and work together: Coherence & Oracle NoSQL Database.

    On the stream processing side, it is similar to the Coherence case. As your sliding windows get larger, holding all the data in the stream can become difficult, and out-of-band data may need to be offloaded into persistent storage. OEP needs an extreme-speed database like Oracle NoSQL Database to help it continue to perform in the real-time loop while dealing with persistent spill in the data stream. Here is a great resource to learn more about how OEP and Oracle NoSQL Database are integrated and work together: OEP & Oracle NoSQL Database.

    Read the article

  • Monitoring Windows Azure Service Bus Endpoint with BizTalk 360?

    - by Michael Stephenson
    I'm currently working with a customer who is undergoing an initiative to expose some of their line-of-business applications to external partners and SAAS applications, and as part of this we have been looking at using the Windows Azure Service Bus. For the first part of the project we were focused on some synchronous request-response scenarios where an external application would use the Service Bus relay functionality to get data from some internal applications. When we were looking at the operational monitoring side of the solution, it was obvious that although most of the normal server monitoring capabilities would be required for the on-premise components, we would have to look at new approaches to validate that the operation of the service from outside of the organization was working as expected.

    A number of months ago one of my colleagues, Elton Stoneman, wrote about an approach I have introduced with a number of clients in the past, where we implement a diagnostics service in each service component we build. This service allows us to make a call which flexes some of the working parts of the system to prove it is working within any SLA. This approach is discussed in the following article: http://geekswithblogs.net/EltonStoneman/archive/2011/12/12/the-value-of-a-diagnostics-service.aspx

    In our solution we wanted to take the same approach, but we had to consider that the service clients were external to the service. We also had to consider that when you go through the Windows Azure Service Bus, most standard monitoring solutions don't give you an easy way to check the service. In a previous article I described how you can use BizTalk 360 to monitor things using a custom extension to the Web Endpoint Manager, and I felt that we could use this approach to provide an excellent way to monitor our Service Bus endpoint. The previous article is available at the following link: http://geekswithblogs.net/michaelstephenson/archive/2012/09/12/150696.aspx

    The Monitoring Solution

    BizTalk 360 currently has an easy way to hook up the endpoint manager to a url which it will then call, and if a successful response is returned it considers the endpoint to be in a healthy state. We would take advantage of this by creating an ASP.net web page which would be called by BizTalk 360, and behind this page we would implement the functionality to call the diagnostics service on our Service Bus endpoint. The ASP.net page could include logic to work out how to handle the response from the diagnostics service. For example, if the overall result of the diagnostics service was successful but the call took longer than a certain amount of time, then we could return an error and indicate the service is taking too long. The following diagram illustrates the monitoring pattern.

    The diagnostics service which is hosted in the line-of-business application allows us to ping a simple message through the Azure Service Bus relay to the WCF services in the LOB application, and we then get a response back indicating that the service is working fine. To implement this I used the exact same approach I described in my previous post to create a custom web page which calls the diagnostics service and then returns an HTTP response code which depends on the error condition returned, or a 200 if it was successful.
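    To make that concrete, the diagnostics contract and the check page can be as simple as the following sketch (the names and the 5-second SLA threshold are illustrative, not from the actual solution):

    ```csharp
    using System;
    using System.Diagnostics;
    using System.ServiceModel;

    [ServiceContract]
    public interface IDiagnosticsService
    {
        // Flexes the working parts of the LOB service and reports health.
        [OperationContract]
        bool RunHealthCheck();
    }

    // Code-behind for the ASP.net page that BizTalk 360 polls.
    public partial class ServiceBusHealthCheck : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // The endpoint configuration is assumed to route the call
            // through the Windows Azure Service Bus relay.
            var factory = new ChannelFactory<IDiagnosticsService>("diagnosticsEndpoint");
            IDiagnosticsService client = factory.CreateChannel();

            var stopwatch = Stopwatch.StartNew();
            bool healthy = client.RunHealthCheck();
            stopwatch.Stop();

            // Healthy-but-slow still counts as a failure against the SLA.
            if (!healthy || stopwatch.ElapsedMilliseconds > 5000)
                Response.StatusCode = 500;
            else
                Response.StatusCode = 200; // BizTalk 360 treats this as healthy
        }
    }
    ```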
    One of the limitations of this approach is that the competing consumer pattern for listening to messages from the Service Bus means you cannot guarantee which server will process your diagnostics check message. With BizTalk 360, however, you can simply add multiple endpoint checks, so that it accesses the individual on-premise web servers directly to ensure that each server is working fine, and then also checks that messages can be processed through the cloud.

    Conclusion

    It took me about 15 minutes to get a proof of concept of this up and running which was able to monitor our web services which had been exposed via Windows Azure Service Bus. I was then able to inherit all of the monitoring benefits of BizTalk 360 to provide an enterprise-class monitoring solution for our cloud-enabled API.

    Read the article

  • Patterns for a tree of persistent data with multiple storage options?

    - by Robin Winslow
    I have a real-world problem which I'll try to abstract into an illustrative example. So imagine I have data objects in a tree, where parent objects can access children, and children can access parents:

    ```csharp
    // Interfaces
    interface IParent<TChild>
    {
        List<TChild> Children { get; set; }
    }

    interface IChild<TParent>
    {
        TParent Parent { get; set; }
    }

    // Classes
    class Top : IParent<Middle> {}
    class Middle : IParent<Bottom>, IChild<Top> {}
    class Bottom : IChild<Middle> {}

    // Usage
    var top = new Top();
    var middles = top.Children; // List<Middle>
    foreach (var middle in middles)
    {
        var bottoms = middle.Children; // List<Bottom>
        foreach (var bottom in bottoms)
        {
            var parent = bottom.Parent;       // Access the parent
            var grandparent = parent.Parent;  // Access the grandparent
        }
    }
    ```

    All three data objects have properties that are persisted in two data stores (e.g. a database and a web service), and they need to reflect and synchronise with the stores. Some objects only request from the web service, some only write to it.

    Data Mapper

    My favourite pattern for data access is Data Mapper, because it completely separates the data objects themselves from the communication with the data store:

    ```csharp
    class TopMapper
    {
        public Top FetchById(int id)
        {
            var top = new Top(DataStore.TopDataById(id));
            top.Children = MiddleMapper.FetchForTop(top);
            return top;
        }
    }

    class MiddleMapper
    {
        public Middle FetchById(int id)
        {
            var middle = new Middle(DataStore.MiddleDataById(id));
            middle.Parent = TopMapper.FetchForMiddle(middle);
            middle.Children = BottomMapper.FetchForMiddle(middle);
            return middle;
        }
    }
    ```

    This way I can have one mapper per data store, build the object from the mapper I want, and then save it back using the mapper I want. There is a circular reference here, but I guess that's not a problem because most languages can just store memory references to the objects, so there won't actually be infinite data. The problem with this is that every time I want to construct a new Top, Middle or Bottom, it needs to build the entire object tree within that object's Parent or Children property, with all the data store requests and memory usage that that entails. And in real life my tree is much bigger than the one represented here, so that's a problem.

    Requests in the object

    In this approach the objects request their Parents and Children themselves:

    ```csharp
    class Middle
    {
        private List<Bottom> _children = null; // cache

        public List<Bottom> Children
        {
            get
            {
                _children = _children ?? BottomMapper.FetchForMiddle(this);
                return _children;
            }
            set
            {
                BottomMapper.UpdateForMiddle(this, value);
                _children = value;
            }
        }
    }
    ```

    I think this is an example of the repository pattern. Is that correct? This solution seems neat - the data only gets requested from the data store when you need it, and thereafter it's stored in the object if you want to request it again, avoiding a further request. However, I have two different data sources. There's a database, but there's also a web service, and I need to be able to create an object from the web service and save it back to the database and then request it again from the database and update the web service. This also makes me uneasy because the data objects themselves are no longer ignorant of the data source. We've introduced a new dependency, not to mention a circular dependency, making it harder to test. And the objects now mask their communication with the database.

    Other solutions

    Are there any other solutions which could take care of the multiple stores problem but also mean that I don't need to build / request all the data every time?
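    One direction worth sketching (only a sketch, reusing the question's Bottom and BottomMapper types) is to keep the lazy loading of the second approach but inject the fetch behaviour, so the object stays ignorant of which store is behind it and the dependency becomes swappable in tests:

    ```csharp
    using System;
    using System.Collections.Generic;

    class Middle
    {
        private readonly Func<Middle, List<Bottom>> _loadChildren;
        private List<Bottom> _children; // cache, filled on first access

        // The mapper (database- or web-service-backed) supplies the loader,
        // so Middle never references a concrete data store type.
        public Middle(Func<Middle, List<Bottom>> loadChildren)
        {
            _loadChildren = loadChildren;
        }

        public List<Bottom> Children
        {
            get { return _children ?? (_children = _loadChildren(this)); }
        }
    }

    // The database mapper wires itself in at construction time:
    //   var middle = new Middle(m => BottomMapper.FetchForMiddle(m));
    // A unit test can instead pass:
    //   var middle = new Middle(m => new List<Bottom>());
    ```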

    Read the article

  • Where to place web.xml outside WAR file for secure redirect?

    - by Silverhalide
    I am running Tomcat 7 and am deploying a bunch of applications delivered to me by a third party as WAR files. I'd like to force some of those apps to always use SSL. (All the "SSL" apps are in one service; other apps outside this discussion are in another service.) I've figured out how to use conf\web.xml to redirect apps from HTTP to HTTPS, but that applies to all applications hosted by Tomcat. I've also figured out how to put web.xml in an unpacked app's WEB-INF directory; that does the trick for that specific app, but runs the risk of being overwritten if our vendor gives us a new WAR file to deploy. I've also tried placing the web.xml file in various places under conf\service\host, or under appbase, but none seem to work. Is it possible to redirect some apps to SSL without forcing all apps to redirect, or to put the web.xml file inside the extracted WAR file? Here's my server.xml:

    ```xml
    <Service name="secure">
      <Connector port="80"
                 connectionTimeout="20000"
                 redirectPort="443"
                 URIEncoding="UTF-8"
                 enableLookups="false"
                 compression="on"
                 protocol="org.apache.coyote.http11.Http11Protocol"
                 compressableMimeType="text/html,text/xml,text/plain,text/javascript,application/json,text/css"/>
      <Connector port="443"
                 URIEncoding="UTF-8"
                 enableLookups="false"
                 compression="on"
                 protocol="org.apache.coyote.http11.Http11Protocol"
                 compressableMimeType="text/html,text/xml,text/plain,text/javascript,application/json,text/css"
                 scheme="https"
                 secure="true"
                 SSLEnabled="true"
                 sslProtocol="TLS"
                 keystoreFile="..."
                 keystorePass="..."
                 keystoreType="PKCS12"
                 truststoreFile="..."
                 truststorePass="..."
                 truststoreType="JKS"
                 clientAuth="false"
                 ciphers="SSL_RSA_WITH_RC4_128_MD5,SSL_RSA_WITH_RC4_128_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_DSS_WITH_AES_128_CBC_SHA,SSL_RSA_WITH_AES_128_CBC_SHA"/>
      <Engine name="secure" defaultHost="localhost">
        <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
               resourceName="UserDatabase"/>
        <Host name="localhost" appBase="webapps"
              unpackWARs="false" autoDeploy="true"
              xmlValidation="false" xmlNamespaceAware="false">
        </Host>
      </Engine>
    </Service>

    <Service name="mutual-secure">
      ...
    </Service>
    ```

    The content of the web.xml files I'm playing with is:

    ```xml
    <web-app xmlns="http://java.sun.com/xml/ns/javaee"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd"
             version="3.0"
             metadata-complete="true">
      <security-constraint>
        <web-resource-collection>
          <web-resource-name>All applications</web-resource-name>
          <url-pattern>/*</url-pattern>
        </web-resource-collection>
        <user-data-constraint>
          <description>Redirect all requests to HTTPS</description>
          <transport-guarantee>CONFIDENTIAL</transport-guarantee>
        </user-data-constraint>
      </security-constraint>
    </web-app>
    ```

    (For conf\web.xml the security-constraint is added just before the end of the existing file, rather than creating a new file.) My webapps directory (currently) contains only the WAR files.

    Read the article

  • Oracle Applications Cloud Release 8 Customization: Your User Interface, Your Text

    - by ultan o'broin
    Introducing the User Interface Text Editor

    In Oracle Applications Cloud Release 8, there’s an addition to the customization tool set, called the User Interface Text Editor (UITE). When signed in with an application administrator role, users launch this new editing feature from the Navigator's Tools > Customization > User Interface Text menu option. See how the editor is in there with other customization tools?

    [Figure: User Interface Text Editor is launched from the Navigator Customization menu]

    Applications customers need a way to make changes to the text that appears in the UI, without having to initiate an IT project. Business users can now easily change labels on fields, for example. Using a composer and an activated sandbox, these users can take advantage of the Oracle Metadata Services (MDS), add a key to a text resource bundle, and then type in their preferred label and its description (as a best practice for further work, I’d recommend always completing that description).

    [Figure: Changing a simplified UI field label using Oracle Composer]

    In Release 8, the UITE enables business users to easily change UI text on a much wider basis. As with composers, the UITE requires an activated sandbox where users can make their changes safely, before committing them for others to see. The UITE is used for editing UI text that comes from Oracle ADF resource bundles or from the Message Dictionary (or FND_MESSAGE_% tables, if you’re old enough to remember such things). Functionally, the Message Dictionary is used for the text that appears in business rule-type error, warning or information messages, or as a text source when ADF resource bundles cannot be used. In the UITE, these Message Dictionary texts are referred to as Multi-part Validation Messages. If the text comes from ADF resource bundles, then it’s categorized as User Interface Text in the UITE. This category refers to the text that appears in embedded help in the UI or in simple error, warning, confirmation, or information messages. The embedded help types used in the application are explained in an Oracle Fusion Applications User Experience (UX) design pattern set. The message types have a UX design pattern set too.

    Using UITE

    The UITE enables users to search and replace text in UI strings using case-sensitive options, as well as by type. Users select singular and plural options for text changes, should they apply.

    [Figure: Searching and replacing text in the UITE]

    The UITE also provides users with a way to preview and manage changes on an exclusion basis, before committing to the final result. There might, for example, be situations where a phrase or word needs to remain different from how it’s generally used in the application, depending on the context.

    [Figure: Previewing replacement text changes. Changes can be excluded where required.]

    Multi-Part Messages

    The Message Dictionary table architecture has been inherited from Oracle E-Business Suite days. However, there are important differences in the Oracle Applications Cloud version, notably the additional message text components, as explained in the UX design patterns. Message Dictionary text has a broad range of uses as indicated, and it can also be reserved for internal application use, for use by PL/SQL and C programs, and so on. Message Dictionary text may even concatenate together at run time, where required. The UITE handles the flexibility of such text architecture by enabling users to drill down on each message and see how it’s constructed in total. That way, users can ensure that any text changes being made are consistent throughout the different message parts.

    [Figure: Multi-part (Message Dictionary) message components in the UITE]

    Message Dictionary messages may also use supportability-related numbers, the ones that appear appended to the message text in the application’s UI. However, should you have the requirement to remove these numbers from users' view, the UITE is not the tool for the job. Instead, see my blog about using the Manage Messages UI.

    Read the article

  • Don’t string together XML

    - by KyleBurns
    XML has been a pervasive tool in software development for over a decade. It provides a way to communicate data in a manner that is simple to understand and free of platform dependencies. Also pervasive in software development is what I consider to be the anti-pattern of using string manipulation to create XML. This usually starts with a “quick and dirty” approach because you need an XML document, and looks like this (for all of the examples here, we’ll assume we’re writing the body of a method intended to take a Contact object and return an XML string):

    ```csharp
    return string.Format("<Contact><BusinessName>{0}</BusinessName></Contact>",
                         contact.BusinessName);
    ```

    In the code example, I created (or at least believe I created) an XML document representing a simple contact object in one line of code with very little overhead. Work’s done, right? No it’s not. You see, what I didn’t realize was that this code would be used in the real world instead of my fantasy world where I own all the data and can prevent any of it containing problematic values. If I use this code to create a contact record for the business “Sanford & Son”, any XML parser will be incapable of processing the data because the ampersand is special in XML and should have been encoded as &amp;.

    Following the pattern that I have seen many times over, my next step as a developer is going to be to do what any developer in his right mind would do – instruct the user that ampersands are “bad” and they cannot be used without breaking computers. This may work in many cases and is often accompanied by logic at the UI layer of applications to block these “bad” characters, but sooner or later someone is going to figure out that other applications allow for them and will want the same. This often leads to the creation of “cleaner” functions that perform a replace on the strings for every special character that the person writing the function can think of. The cleaner function will usually grow over time as support requests reveal characters that were missed in the initial cut. Sooner or later you end up writing your own somewhat functional XML engine.

    I have never been told by anyone paying me to write code that they would like to buy a somewhat functional XML engine. My employer/customer’s needs have always been for something that may use XML, but ultimately is functionality that drives business value. I’m not going to build an XML engine. So how can I generate XML that is always well-formed without writing my own engine? Easy – use one of the ones provided to you for free! If you’re in a shop that still supports VB6 applications, you can use the DomDocument or MXXMLWriter object (of the two I prefer MXXMLWriter, but I’m not going to fully describe either here).
    For .Net Framework applications prior to the 3.5 framework, the code is a little more verbose than I would like, but easy once you understand what pieces are required:

    ```csharp
    using (StringWriter sw = new StringWriter())
    {
        using (XmlTextWriter writer = new XmlTextWriter(sw))
        {
            writer.WriteStartDocument();
            writer.WriteStartElement("Contact");
            writer.WriteElementString("BusinessName", contact.BusinessName);
            writer.WriteEndElement(); // end Contact element
            writer.WriteEndDocument();
            writer.Flush();
            return sw.ToString();
        }
    }
    ```

    Looking at that code, it’s easy to understand why people are drawn to the initial one-liner. Lucky for us, the 3.5 .Net Framework added the System.Xml.Linq.XElement object. This object takes away a lot of the complexity present in the XmlTextWriter approach and allows us to generate the document as follows:

    ```csharp
    return new XElement("Contact",
        new XElement("BusinessName", contact.BusinessName)).ToString();
    ```

    While it is very common for people to use string manipulation to create XML, I’ve discussed here reasons not to use this method and introduced powerful APIs that are built into the .Net Framework as an alternative. I’ve given a very simplistic example here to highlight the most basic XML generation task. For more information on the XmlTextWriter and XElement APIs, check out the MSDN library.
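    As a quick sanity check that the API really does take care of the “Sanford & Son” problem from earlier, here is a small sketch (DisableFormatting just keeps the output on one line):

    ```csharp
    using System.Xml.Linq;

    var xml = new XElement("Contact",
        new XElement("BusinessName", "Sanford & Son"))
        .ToString(SaveOptions.DisableFormatting);

    // xml == "<Contact><BusinessName>Sanford &amp; Son</BusinessName></Contact>"
    // The ampersand is encoded for free; no home-grown cleaner function required.
    ```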

    Read the article

  • MVVM/Presentation Model With WinForms

    - by Erik Ashepa
    Hi, I'm currently working on a brownfield application. It's written with winforms, and as a preparation to use WPF in a later version, our team plans to at least use the MVVM/Presentation Model and bind it against winforms. I've explored the subject, including the posts in this site (which I love very much). When boiled down, the main advantages of WPF are:

    1. binding controls to properties in xaml.
    2. binding commands to command objects in the viewmodel.

    The first feature is easy to implement (in code), or with a generic control binder which binds all the controls in the form. The second feature is a little harder to implement, but works if you inherit from all your controls and add a command property (which is triggered by an internal event such as Click), which is bound to a command instance in the ViewModel.

    The challenges I'm currently aware of are:

    1. implementing a commandmanager, which will trigger the CanInvoke method of the commands as necessary.
    2. winforms only supports one level of databinding (DataSource, DataMember), while WPF is much more flexible.

    Am I missing any other major features that winforms lacks in comparison with WPF, when attempting to implement this design pattern? I'm sure many of you will recommend some sort of MVP pattern, but MVVM/Presentation Model is the way to go for me, because I'll want future WPF support. Thanks in advance, Erik.
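    For reference, a minimal version of the command-aware control described above might look like the following sketch (it borrows WPF's ICommand interface for familiarity; any equivalent interface of your own would do, and CanExecuteChanged handling is omitted for brevity):

    ```csharp
    using System;
    using System.Windows.Forms;
    using System.Windows.Input; // ICommand; substitute your own interface if preferred

    // A Button that routes its Click event to a command on the ViewModel.
    public class CommandButton : Button
    {
        private ICommand _command;

        public ICommand Command
        {
            get { return _command; }
            set
            {
                _command = value;
                // Crude stand-in for a real commandmanager:
                // refresh the enabled state whenever the command is assigned.
                Enabled = _command == null || _command.CanExecute(null);
            }
        }

        protected override void OnClick(EventArgs e)
        {
            base.OnClick(e);
            if (_command != null && _command.CanExecute(null))
                _command.Execute(null);
        }
    }
    ```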

    Read the article

  • Axis2 embedded in my web app is not working

    - by Shamik
    Okay, I lost almost the whole day on this. I have a webapp where I would like to add Axis2 and start working. I added the AxisServlet in the web.xml file like this:

    ```xml
    <servlet>
      <servlet-name>AxisServlet</servlet-name>
      <display-name>Apache-Axis Servlet</display-name>
      <servlet-class>org.apache.axis2.transport.http.AxisServlet</servlet-class>
      <load-on-startup>2</load-on-startup>
    </servlet>
    <servlet-mapping>
      <servlet-name>AxisServlet</servlet-name>
      <url-pattern>/services/*</url-pattern>
    </servlet-mapping>
    ```

    I also added a services.xml file like this:

    ```xml
    <service name="ReportViewerService">
      <description>
        This is a sample Web Service for illustrating Attachments API of Axis2
      </description>
      <parameter name="ServiceClass">myclass</parameter>
      <operation name="getReport">
        <actionMapping>urn:getReport</actionMapping>
        <messageReceiver class="org.apache.axis2.rpc.receivers.RPCMessageReceiver"/>
      </operation>
    </service>
    ```

    The directory structure is as follows:

    ```
    WEB-INF
    |- conf
    |  |- axis2.xml
    |- lib
    |  |- all libs
    |- services
    |  |- ReportViewerService
    |     |- META-INF
    |        |- services.xml
    |- web.xml
    ```

    The problem is: after all of this, the service endpoint does not come up, and I can not see the WSDL file at http://localhost:8080/BOReportingServer/services/ReportViewerService?wsdl; this gives an exception like:

    ```
    Throwable occurred: javax.servlet.ServletException: File "/axis2-web/listSingleService.jsp" not found
    ```

    Read the article

  • Unknown Entity namespace alias in symfony2

    - by Zoha Ali Khan
    Hey, I have two bundles in my symfony2 project: one is Bundle and the other one is PatentBundle. My app/config/route.yml file is:

    ```yaml
    MunichInnovationGroupPatentBundle:
        resource: "@MunichInnovationGroupPatentBundle/Controller/"
        type: annotation
        prefix: /
        defaults: { _controller: "MunichInnovationGroupPatentBundle:Default:index" }

    MunichInnovationGroupBundle:
        resource: "@MunichInnovationGroupBundle/Controller/"
        type: annotation
        prefix: /v1
        defaults: { _controller: "MunichInnovationGroupBundle:Patent:index" }

    login_check:
        pattern: /login_check

    logout:
        pattern: /logout
    ```

    Inside my controller I have:

    ```php
    <?php
    namespace MunichInnovationGroup\PatentBundle\Controller;

    use Symfony\Component\HttpFoundation\Response;
    use Symfony\Component\HttpFoundation\Request;
    use JMS\SecurityExtraPatentBundle\Annotation\Secure;
    use Symfony\Component\Security\Core\Exception\AccessDeniedException;
    use Symfony\Bundle\FrameworkBundle\Controller\Controller;
    use Sensio\Bundle\FrameworkExtraBundle\Configuration\Method;
    use Sensio\Bundle\FrameworkExtraBundle\Configuration\Route;
    use Sensio\Bundle\FrameworkExtraBundle\Configuration\Template;
    use Symfony\Component\Security\Core\SecurityContext;
    use MunichInnovationGroup\PatentBundle\Entity\Log;
    use MunichInnovationGroup\PatentBundle\Entity\UserPatent;
    use MunichInnovationGroup\PatentBundle\Entity\PmPortfolios;
    use MunichInnovationGroup\PatentBundle\Entity\UmUsers;
    use MunichInnovationGroup\PatentBundle\Entity\PmPatentgroups;
    use MunichInnovationGroup\PatentBundle\Form\PortfolioType;
    use MunichInnovationGroup\PatentBundle\Util\SecurityHelper;
    use Exception;

    /**
     * Portfolio controller.
     * @Route("/portfolio")
     */
    class PortfolioController extends Controller
    {
        /**
         * Index action.
         *
         * @Route("/", name="v2_pm_portfolio")
         * @Template("MunichInnovationGroupPatentBundle:Portfolio:index.html.twig")
         */
        public function indexAction(Request $request)
        {
            $portfolios = $this->getDoctrine()
                ->getRepository('MunichInnovationGroupPatentBundle:PmPortfolios')
                ->findBy(array('user' => '$user_id'));
            // rest of the method
        }
    }
    ```

    When I try to load localhost/web/app_dev.php/portfolio it says:

    ```
    Unknown Entity namespace alias 'MunichInnovationGroupPatentBundle'.
    500 Internal Server Error - ORMException
    ```

    I am unable to figure out this error. Please help me if anyone has any idea; I googled it a lot :( Thanks in advance.

    Read the article

  • MVC: Nested Views, and Controllers (for a website)

    - by incrediman
    I'm doing a PHP website using the MVC pattern. I am not using a framework, as the site is fairly simple and I feel that this will give me a good opportunity to learn about the pattern directly. I have a couple of questions.

    Question 1: How should I organize my views? I'm thinking of having a Page view which will have the header and footer, and which will allow for a Content view to be nested between them.

    Question 2: If I have 5 content pages, should I make 5 different views that can be used as the content that is nested within the Page view? Or should I make them all extend an abstract view called AbstractContent?

    Question 3: What about controllers? I think there should be one main controller at least. But then where does the request go from there? To another controller? Or should I just call the Page view and leave it at that? I thought that controllers were supposed to handle input, possibly modify a model, and select a view. But what if one of the views nested within the view that a controller calls requires additional input to be parsed?

    Question 4: Are controllers allowed to pass parameters into the view? Or should the controller simply modify the model, which will then affect the view? Or is the model only for DB access and other such things?

    Read the article

  • Code Golf: Code 39 Bar Code

    - by gwell
    The challenge

    The shortest code by character count to draw an ASCII representation of a Code 39 bar code. Wikipedia article about Code 39: http://en.wikipedia.org/wiki/Code_39

    Input

    The input will be a string of legal characters for Code 39 bar codes. This means 43 characters are valid: 0-9, A-Z, (space) and -.$/+%. The * character will not appear in the input as it is used as the start and stop character.

    Output

    Each character encoded in a Code 39 bar code has nine elements: five bars and four spaces. Bars will be represented with # characters, and spaces will be represented with the space character. Three of the nine elements will be wide. The narrow elements will be one character wide, and the wide elements will be three characters wide. An inter-character space of a single space should be added between each character pattern. The pattern should be repeated so that the height of the bar code is eight characters. The start/stop character * (bWbwBwBwb) would be represented like this:

    ```
    #   # ### ### #
    #   # ### ### #
    #   # ### ### #
    #   # ### ### #
    #   # ### ### #
    #   # ### ### #
    #   # ### ### #
    #   # ### ### #
    ^ ^ ^^ ^ ^ ^ ^^^
    | | || | | | |||
    +---------------- narrow bar
    | +-------------- wide space
    | | +------------ narrow bar
    | | |+----------- narrow space
    | | || +--------- wide bar
    | | || | +------- narrow space
    | | || | | +----- wide bar
    | | || | | | +--- narrow space
    | | || | | | |+-- narrow bar
    | | || | | | ||+- inter-character space
    ```

    The start and stop character * will need to be output at the start and end of the bar code. No quiet space will need to be included before or after the bar code. No check digit will need to be calculated. Full ASCII Code 39 encoding is not required, just the standard 43 characters. No text needs to be printed below the ASCII bar code representation to identify the output contents. The character # can be replaced with another character of higher density if wanted. Using the full block character U+2588 would allow the bar code to actually scan when printed.
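    Purely by way of illustration (this is not a golfed entry), expanding one nine-element pattern such as bWbwBwBwb into a single row can be done like this:

    ```csharp
    using System;
    using System.Linq;

    static class Code39
    {
        // b/B produce bars (#), w/W produce spaces; upper case means wide (3 chars).
        public static string Row(string pattern)
        {
            return string.Concat(pattern.Select(c =>
                new string(char.ToLower(c) == 'b' ? '#' : ' ',
                           char.IsUpper(c) ? 3 : 1)));
        }
    }

    // Code39.Row("bWbwBwBwb") == "#   # ### ### #"
    ```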
    Test cases

    Input: ABC
    Output:

    ```
    #   # ### ### # ### # #   # ### # ### #   # ### ### ### #   # # #   # ### ### #
    #   # ### ### # ### # #   # ### # ### #   # ### ### ### #   # # #   # ### ### #
    #   # ### ### # ### # #   # ### # ### #   # ### ### ### #   # # #   # ### ### #
    #   # ### ### # ### # #   # ### # ### #   # ### ### ### #   # # #   # ### ### #
    #   # ### ### # ### # #   # ### # ### #   # ### ### ### #   # # #   # ### ### #
    #   # ### ### # ### # #   # ### # ### #   # ### ### ### #   # # #   # ### ### #
    #   # ### ### # ### # #   # ### # ### #   # ### ### ### #   # # #   # ### ### #
    #   # ### ### # ### # #   # ### # ### #   # ### ### ### #   # # #   # ### ### #
    ```

    Input: 1/3
    Output:

    ```
    #   # ### ### # ### #   # # ### #   #   # #   # ### ###   # # # #   # ### ### #
    #   # ### ### # ### #   # # ### #   #   # #   # ### ###   # # # #   # ### ### #
    #   # ### ### # ### #   # # ### #   #   # #   # ### ###   # # # #   # ### ### #
    #   # ### ### # ### #   # # ### #   #   # #   # ### ###   # # # #   # ### ### #
    #   # ### ### # ### #   # # ### #   #   # #   # ### ###   # # # #   # ### ### #
    #   # ### ### # ### #   # # ### #   #   # #   # ### ###   # # # #   # ### ### #
    #   # ### ### # ### #   # # ### #   #   # #   # ### ###   # # # #   # ### ### #
    #   # ### ### # ### #   # # ### #   #   # #   # ### ###   # # # #   # ### ### #
    ```

    Input: - $ (minus space dollar)
    Output:

    ```
    #   # ### ### # #   # # # ### ### #   ### # ### # #   #   #   # # #   # ### ### #
    #   # ### ### # #   # # # ### ### #   ### # ### # #   #   #   # # #   # ### ### #
    #   # ### ### # #   # # # ### ### #   ### # ### # #   #   #   # # #   # ### ### #
    #   # ### ### # #   # # # ### ### #   ### # ### # #   #   #   # # #   # ### ### #
    #   # ### ### # #   # # # ### ### #   ### # ### # #   #   #   # # #   # ### ### #
    #   # ### ### # #   # # # ### ### #   ### # ### # #   #   #   # # #   # ### ### #
    #   # ### ### # #   # # # ### ### #   ### # ### # #   #   #   # # #   # ### ### #
    #   # ### ### # #   # # # ### ### #   ### # ### # #   #   #   # # #   # ### ### #
    ```

    Code count includes input/output (full program).

    Read the article

  • How do I configure CI Hudson with PHPUnit, and how do I run PHPUnit using Hudson?

    - by Vinesh
    Hi, I am getting the error mentioned below. Help is much needed for this... kindly go through the errors:

    ```
    Started by an SCM change
    Updating https://suppliesguys.unfuddle.com/svn/suppliesguys_frontend2/Frontend-Texity/src
    U sites\all\modules\print\print_pdf\print_pdf.pages.inc
    At revision 1134
    [workspace] $ sh -xe C:\WINDOWS\TEMP\hudson6292587174545072503.sh
    The system cannot find the file specified
    FATAL: command execution failed
    java.io.IOException: Cannot run program "sh" (in directory "E:\Projects\Hudson\.hudson\jobs\TSG\workspace"): CreateProcess error=2, The system cannot find the file specified
        at java.lang.ProcessBuilder.start(Unknown Source)
        at hudson.Proc$LocalProc.(Proc.java:149)
        at hudson.Proc$LocalProc.(Proc.java:121)
        at hudson.Launcher$LocalLauncher.launch(Launcher.java:636)
        at hudson.Launcher$ProcStarter.start(Launcher.java:271)
        at hudson.Launcher$ProcStarter.join(Launcher.java:278)
        at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:83)
        at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:58)
        at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:19)
        at hudson.model.AbstractBuild$AbstractRunner.perform(AbstractBuild.java:584)
        at hudson.model.Build$RunnerImpl.build(Build.java:174)
        at hudson.model.Build$RunnerImpl.doRun(Build.java:138)
        at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:416)
        at hudson.model.Run.run(Run.java:1244)
        at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
        at hudson.model.ResourceController.execute(ResourceController.java:88)
        at hudson.model.Executor.run(Executor.java:122)
    Caused by: java.io.IOException: CreateProcess error=2, The system cannot find the file specified
        at java.lang.ProcessImpl.create(Native Method)
        at java.lang.ProcessImpl.(Unknown Source)
        at java.lang.ProcessImpl.start(Unknown Source)
        ... 17 more
    Publishing Javadoc
    Publishing Clover coverage report...
    No Clover report will be published due to a Build Failure
    [xUnit] Starting to record.
    [xUnit] [PHPUnit] - Use the embedded style sheet.
    [xUnit] [ERROR] - No test report file(s) were found with the pattern 'build/logs/phpunit.xml' relative to 'E:\Projects\Hudson\.hudson\jobs\TSG\workspace' for the testing framework 'PHPUnit'. Did you enter a pattern relative to the correct directory? Did you generate the result report(s) for 'PHPUnit'?
    [xUnit] Stopping recording.
    Finished: FAILURE
    ```

    Read the article

  • log4j:WARN No appenders could be found for logger in web.xml

    - by cometta
    I already put the log4jConfigLocation in web.xml, but still, I get the warning:

    ```
    log4j:WARN No appenders could be found for logger (org.springframework.web.context.ContextLoader).
    log4j:WARN Please initialize the log4j system properly.
    ```

    What did I miss?

    ```xml
    <context-param>
      <param-name>contextConfigLocation</param-name>
      <param-value>/WEB-INF/applicationContext.xml</param-value>
    </context-param>

    <context-param>
      <param-name>log4jConfigLocation</param-name>
      <param-value>/WEB-INF/classes/log4j.properties</param-value>
    </context-param>

    <listener>
      <listener-class>org.springframework.web.util.Log4jConfigListener</listener-class>
    </listener>

    <listener>
      <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
    </listener>

    <servlet>
      <servlet-name>suara2</servlet-name>
      <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
    </servlet>

    <servlet-mapping>
      <servlet-name>suara2</servlet-name>
      <url-pattern>/*</url-pattern>
    </servlet-mapping>
    ```

    Read the article

  • Configuring xUnit test output in Hudson

    - by graham.reeds
    I have a simple PoC project in Hudson. The PoC has unit tests written via UnitTest++ and outputs the results as XML for consumption by xUnit to munge into jUnit format. Here are the salient details: I have my project configured to use MSBuild to build the 2008 solution. The project contains both the dll it is to build and the unit tests, which are run as a post-build step. My workspace in Hudson is set to c:\develop\money (Money is the name of the project) and in the Hudson console I can see the workspace folders, the solution file and output folders (/bin, /doc, etc). The test console app outputs its file 'money_unit_tests.xml' to the folder 'reports' (making c:\develop\money\reports). I've restarted Hudson since installing xUnit and setting the workspace. However when I start Hudson building I am given the following message:

    ```
    [xUnit] Starting to record.
    [xUnit] [UnitTest] - Use the embedded style sheet.
    [xUnit] [ERROR] - No test report file(s) were found with the pattern 'reports/money_unit_tests.xml' relative to 'C:\.hudson\jobs\Money\workspace' for the testing framework 'UnitTest'. Did you enter a pattern relative to the correct directory? Did you generate the result report(s) for 'UnitTest'?
    [xUnit] Stopping recording.
    Finished: FAILURE
    ```

    Why does Hudson seem to think the workspace is in C:\.hudson\... and not C:\Develop\...? What can I do to change it? If I can't change it, what can I do to mitigate these changes? (I don't exactly want to hardcode the output path for the xml to C:\.hudson\...)

    Read the article

  • Is HttpContextWrapper all that....useful?

    - by bakasan
    I've been going through the process of cleaning up our controller code to make each action testable. Generally speaking, this hasn't been too difficult--where we have the opportunity to use a fixed object, like say FormsAuthentication, we generally introduce some form of wrapper as appropriate and be on our merry way. For reasons not particularly germane to this conversation, when it came to dealing with usage of HttpContext, we decided to use the newly created HttpContextWrapper class rather than inventing something homegrown. One thing we did introduce was the ability to swap in an HttpContextWrapper (like say, for unit testing). This was wholly inspired by the way Oren Eini handles unit testing with DateTimes (see article, a pattern we also use):

    ```csharp
    public static class FooHttpContext
    {
        public static Func<HttpContextWrapper> Current =
            () => new HttpContextWrapper(HttpContext.Current);

        public static void Reset()
        {
            Current = () => new HttpContextWrapper(HttpContext.Current);
        }
    }
    ```

    Nothing particularly fancy. And it works just fine in our controller code. The kicker came when we went to write unit tests. We're using Moq as our mocking framework, but alas

    ```csharp
    var context = new Mock<HttpContextWrapper>()
    ```

    breaks, since HttpContextWrapper doesn't have a parameterless ctor. And what does it take as a ctor parameter? An HttpContext object. So I find myself in a catch-22. I'm using the prescribed way to decouple HttpContext--but I can't mock a value in because the original HttpContext object was sealed and therefore difficult to test. I can mock HttpContextBase, which both derive from--but that doesn't really get me what I'm after. Am I just missing the point somewhere with regard to HttpContextWrapper? I can find ways to work around the issue, but we are kind of fond of remaining consistent in decoupling using the Function delegate pattern--but it seems like we're not fully grokking the intent of the wrapper.
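    For what it's worth, the intent seems to be that HttpContextWrapper is only the production-side adapter, and the type you program (and mock) against is the shared abstract base, HttpContextBase. If the delegate is typed to the base class, Moq has no trouble; a sketch:

    ```csharp
    using System;
    using System.Web;
    using Moq;

    public static class FooHttpContext
    {
        // Typed to the abstract base so tests can substitute a mock.
        public static Func<HttpContextBase> Current =
            () => new HttpContextWrapper(HttpContext.Current);

        public static void Reset()
        {
            Current = () => new HttpContextWrapper(HttpContext.Current);
        }
    }

    // In a unit test:
    //   var context = new Mock<HttpContextBase>();
    //   FooHttpContext.Current = () => context.Object;
    ```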

    Read the article

  • How to let Tomcat publish WSDL for the WS it provides (CXF 2.2, Spring 3, Tomcat6)

    - by Zwei Steinen
    Hi, I am trying to implement a simple web service provider using Tomcat 6, CXF 2.2 and Spring 3, and actually the service itself runs fine (I can call web methods using the original WSDL and SoapUI). However, Tomcat returns a blank page on "?wsdl" requests. Also, when I try to manipulate the (would-be) published WSDL by adding a publishedEndpointURL property to the jaxws:endpoint element, Tomcat will issue an XML parse exception (something like "property publishedEndpointURL is not allowed in element jaxws:endpoint"):

    ```xml
    <jaxws:endpoint id="service"
                    implementor="org.sample.ServiceImpl"
                    implementorClass="org.sample.ServiceImpl"
                    address="/service"
                    publishedEndpointURL="http://localhost:8080/MyService/service">
    ```

    I used "contract first" style.

    EDIT: What I did so far:

    1. Set up Tomcat 6 with Spring 3.
    2. Generate the CXF implementation class by using maven.
    3. Provide web.xml (only the relevant part is shown):

    ```xml
    <listener>
      <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
    </listener>
    <servlet>
      <servlet-name>cxf</servlet-name>
      <servlet-class>org.apache.cxf.transport.servlet.CXFServlet</servlet-class>
      <load-on-startup>1</load-on-startup>
    </servlet>
    <servlet-mapping>
      <servlet-name>cxf</servlet-name>
      <url-pattern>/*</url-pattern>
    </servlet-mapping>
    ```

    4. Provide applicationContext.xml (only the relevant part is shown).
    5. Package generated stuff into a war and deploy.

    Read the article

  • Websphere 7 simple realm (like tomcat-users.xml)

    - by Heavy Bytes
    I am trying to port a J2EE app from Tomcat to Websphere and I'm not too familiar with Websphere. The only problem I am having is authorization (I use basic authentication in my web.xml). In Tomcat I use the tomcat-users.xml file to define my users/passwords and to what roles they belong. How do I do this "simply" in Websphere? When deploying the EAR to Websphere it also asks me to map my role from web.xml to a user or group. Do I have to set up some sort of realm? Custom user registry? Thanks.

    UPDATE: I configured a standalone custom registry; however, I can't get a log-in prompt for username/password. It works just fine in Tomcat, and it doesn't in Websphere. Code from web.xml:

    ```xml
    <security-constraint>
      <web-resource-collection>
        <web-resource-name>basic-auth security</web-resource-name>
        <url-pattern>/*</url-pattern>
      </web-resource-collection>
      <auth-constraint>
        <role-name>HELLO_USER</role-name>
      </auth-constraint>
      <user-data-constraint>NONE</user-data-constraint>
    </security-constraint>
    <login-config>
      <auth-method>BASIC</auth-method>
    </login-config>
    <security-role>
      <role-name>HELLO_USER</role-name>
    </security-role>
    ```

    Read the article

  • jax-ws on glassfish3 init method

    - by Alex
    Hi all, I've created a simple JAX-WS (annotated Java 6 class to web service) service and deployed it on Glassfish v3. The web.xml:

    ```xml
    <?xml version="1.0" encoding="ISO-8859-1"?>
    <web-app>
      <servlet>
        <servlet-name>MyServiceName</servlet-name>
        <description>Blablabla</description>
        <servlet-class>com.foo-bar.somepackage.TheService</servlet-class>
        <load-on-startup>1</load-on-startup>
      </servlet>
      <servlet-mapping>
        <servlet-name>MyServiceName</servlet-name>
        <url-pattern>/MyServiceName</url-pattern>
      </servlet-mapping>
      <session-config>
        <session-timeout>30</session-timeout>
      </session-config>
    </web-app>
    ```

    There is no sun-jaxws.xml in the war. The service works fine but I have 2 issues. I'm using the apache commons configuration package to read my configuration, so I have an init function that calls configuration stuff.

    1. How can I configure an init method for the JAX-WS service (like I can do for servlets, for example)?
    2. The load-on-startup parameter is not affecting the service; I see that the init function (and the constructor) are called again for every request. How can I set the scope for my service?

    Thanks a lot.

    Read the article

  • Atomic UPSERT in SQL Server 2005

    - by rabidpebble
    What is the correct pattern for doing an atomic "UPSERT" (UPDATE where exists, INSERT otherwise) in SQL Server 2005? I see a lot of code on SO (e.g. see http://stackoverflow.com/questions/639854/tsql-check-if-a-row-exists-otherwise-insert) with the following two-part pattern:

    ```sql
    UPDATE ...
    FROM ...
    WHERE <condition>
    -- race condition risk here
    IF @@ROWCOUNT = 0
        INSERT ...
    ```

    or

    ```sql
    IF (SELECT COUNT(*) FROM ... WHERE <condition>) = 0
        -- race condition risk here
        INSERT ...
    ELSE
        UPDATE ...
    ```

    where <condition> will be an evaluation of natural keys. None of the above approaches seem to deal well with concurrency. If I cannot have two rows with the same natural key, it seems like all of the above risk inserting rows with the same natural keys in race condition scenarios. I have been using the following approach but I'm surprised not to see it anywhere in people's responses, so I'm wondering what is wrong with it:

    ```sql
    INSERT INTO <table>
    SELECT <natural keys>, <other stuff...>
    FROM <table>
    WHERE NOT EXISTS -- race condition risk here?
        ( SELECT 1 FROM <table> WHERE <natural keys> )

    UPDATE ... WHERE <natural keys>
    ```

    (Note: I'm assuming that rows will not be deleted from this table. Although it would be nice to discuss how to handle the case where they can be deleted -- are transactions the only option? Which level of isolation?) Is this atomic? I can't locate where this would be documented in SQL Server documentation.

    Read the article

  • Improving long-polling Ajax performance

    - by Bears will eat you
    I'm writing a webapp (Firefox-compatible only) which uses long polling (via jQuery's ajax abilities) to send more-or-less constant updates from the server to the client. I'm concerned about the effects of leaving this running for long periods of time, say, all day or overnight. The basic code skeleton is this:

    ```javascript
    function processResults(xml) {
        // do stuff with the xml from the server
    }

    function fetch() {
        setTimeout(function () {
            $.ajax({
                type: 'GET',
                url: 'foo/bar/baz',
                dataType: 'xml',
                success: function (xml) {
                    processResults(xml);
                    fetch();
                },
                error: function (xhr, type, exception) {
                    if (xhr.status === 0) {
                        console.log('XMLHttpRequest cancelled');
                    } else {
                        console.debug(xhr);
                        fetch();
                    }
                }
            });
        }, 500);
    }
    ```

    (The half-second "sleep" is so that the client doesn't hammer the server if the updates are coming back to the client quickly - which they usually are.) After leaving this running overnight, it tends to make Firefox crawl. I'd been thinking that this could be partially caused by a large stack depth, since I've basically written an infinitely recursive function. However, if I use Firebug and throw a breakpoint into fetch, it looks like this is not the case. The stack that Firebug shows me is only about 4 or 5 frames deep, even after an hour.

    One of the solutions I'm considering is changing my recursive function to an iterative one, but I can't figure out how I would insert the delay in between Ajax requests without spinning. I've looked at the JS 1.7 "yield" keyword but I can't quite wrap my head around it, to figure out if it's what I need here. Is the best solution just to do a hard refresh on the page periodically, say, once every hour? Is there a better/leaner long-polling design pattern that won't put a hurt on the browser even after running for 8 or 12 hours? Or should I just skip the long polling altogether and use a different "constant update" pattern, since I usually know how frequently the server will have a response for me?

    Read the article

  • Design Patterns Recommendation for Filtering Option

    - by Tarik
    Hi people, I am thinking of creating a filter object which filters and deletes things like html tags from a piece of content. But I want it to be independent, which means the design pattern I apply should let me add more filters in the future without affecting the current code. I thought Abstract Factory, but it seems it ain't gonna work out the way I want. So maybe Builder, but it looks the same. I don't know, I am kinda confused; someone please recommend me a design pattern which can solve my problem. But before that, let me elaborate the problem a little bit.

    Let's say I have a class which has a Description field, or property, whatever. And I need filters which remove the things I want from this Description property. So whenever I apply the filter, I can add more filters in the underlying tier, and all the filters will run over the Description field and delete whatever they are supposed to delete from the Description content. I hope I could describe my problem. I think some of you ran into the same situation before. Thanks in advance...
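    What's being described maps closely to a Chain of Responsibility / Decorator-style pipeline: one small filter interface, many implementations, applied in sequence, so new filters can be added without touching existing ones. A minimal sketch (C# purely for illustration; the names are made up):

    ```csharp
    using System.Collections.Generic;
    using System.Text.RegularExpressions;

    public interface ITextFilter
    {
        string Apply(string input);
    }

    // Deletes anything that looks like an html tag.
    public class HtmlTagFilter : ITextFilter
    {
        public string Apply(string input)
        {
            return Regex.Replace(input, "<[^>]*>", string.Empty);
        }
    }

    // Runs every registered filter over the text, in order.
    // Adding a new filter never requires touching the existing ones.
    public class FilterPipeline : ITextFilter
    {
        private readonly List<ITextFilter> _filters = new List<ITextFilter>();

        public FilterPipeline Add(ITextFilter filter)
        {
            _filters.Add(filter);
            return this;
        }

        public string Apply(string input)
        {
            foreach (var filter in _filters)
                input = filter.Apply(input);
            return input;
        }
    }

    // Usage:
    //   description = new FilterPipeline().Add(new HtmlTagFilter()).Apply(description);
    ```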

    Read the article

  • Sync Vs. Async Sockets Performance in C#

    - by Michael Covelli
    Everything that I read about sockets in .NET says that the asynchronous pattern gives better performance (especially with the new SocketAsyncEventArgs, which saves on the allocation). I think this makes sense if we're talking about a server with many client connections, where it's not possible to allocate one thread per connection. Then I can see the advantage of using the ThreadPool threads and getting async callbacks on them.

    But in my app, I'm the client and I just need to listen to one server sending market tick data over one tcp connection. Right now, I create a single thread, set the priority to Highest, and call Socket.Receive() with it. My thread blocks on this call and wakes up once new data arrives. If I were to switch this to an async pattern so that I get a callback when there's new data, I see two issues:

    1. The threadpool threads will have default priority, so it seems they will be strictly worse than my own thread, which has Highest priority.
    2. I'll still have to send everything through a single thread at some point. Say that I get N callbacks at almost the same time on N different threadpool threads notifying me that there's new data. The N byte arrays that they deliver can't be processed on the threadpool threads, because there's no guarantee that they represent N unique market data messages, because TCP is stream based. I'll have to lock and put the bytes into an array anyway and signal some other thread that can process what's in the array. So I'm not sure what having N threadpool threads is buying me.

    Am I thinking about this wrong? Is there a reason to use the async pattern in my specific case of one client connected to one server?
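    For reference, the single dedicated blocking thread being described is only a few lines (a sketch; the connected socket and a ProcessTicks method are assumed to exist elsewhere):

    ```csharp
    using System.Net.Sockets;
    using System.Threading;

    // One dedicated, high-priority thread blocking on the single tick stream.
    var receiveThread = new Thread(() =>
    {
        var buffer = new byte[64 * 1024];
        while (true)
        {
            // Blocks until data arrives; no thread-pool hop on wake-up.
            int read = socket.Receive(buffer);
            if (read == 0) break;        // server closed the connection
            ProcessTicks(buffer, read);  // parse the stream-based messages
        }
    });
    receiveThread.Priority = ThreadPriority.Highest;
    receiveThread.IsBackground = true;
    receiveThread.Start();
    ```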

    Read the article
