Search Results

Search found 7651 results on 307 pages for 'pattern matching'.


  • Caching factory design

    - by max
    I have a factory class XFactory that creates objects of class X. Instances of X are very large, so the main purpose of the factory is to cache them, as transparently to the client code as possible. Objects of class X are immutable, so the following code seems reasonable:

        # module xfactory.py
        import x

        class XFactory:
            _registry = {}

            def get_x(self, arg1, arg2, use_cache=True):
                if not use_cache:
                    return x.X(arg1, arg2)
                hash_id = hash((arg1, arg2))
                if hash_id not in self._registry:
                    self._registry[hash_id] = x.X(arg1, arg2)
                return self._registry[hash_id]

        # module x.py
        class X:
            # ...

    Is this a good pattern? (I know it's not the actual Factory pattern.) Is there anything I should change?

    Now, I find that sometimes I want to cache X objects to disk. I'll use pickle for that purpose, and store in _registry the filenames of the pickled objects instead of references to the objects. Of course, _registry itself would have to be stored persistently (perhaps in a pickle file of its own, in a text file, in a database, or simply by naming the pickle files after the hash_id they contain).

    Except now the validity of a cached object depends not only on the parameters passed to get_x(), but also on the version of the code that created the object. Strictly speaking, even a memory-cached object could become invalid if someone modifies x.py or any of its dependencies and reloads it while the program is running. So far I have ignored this danger, since it seems unlikely for my application, but I certainly cannot ignore it when my objects are cached to persistent storage.

    What can I do? I suppose I could make the hash_id more robust by hashing a tuple that contains arg1 and arg2 as well as the filename and last-modified date of x.py and of every module and data file it (recursively) depends on. To help delete cache files that won't ever be useful again, I'd also store in _registry the unhashed modification dates for each record. But even this solution isn't 100% safe, since theoretically someone might load a module dynamically, and I wouldn't know about it from statically analyzing the source code. And if I go all out and assume every file in the project is a dependency, the mechanism will still break if some module grabs data from an external website, etc. In addition, the frequency of changes in x.py and its dependencies is quite high, leading to heavy cache invalidation.

    Thus, I figured I might as well give up some safety and invalidate the cache only when there is an obvious mismatch. This means that class X would have a class-level cache-validation identifier that should be changed whenever the developer believes a change happened that should invalidate the cache. (With multiple developers, a separate invalidation identifier is required for each.) This identifier is hashed along with arg1 and arg2 and becomes part of the hash keys stored in _registry.

    Since developers may forget to update the validation identifier, or may not realize that they have invalidated existing cache entries, it would seem better to add another validation mechanism: class X can have a method that returns all the known "traits" of X. For instance, if X is a table, I might add the names of all the columns. The hash calculation would include the traits as well.

    I can write this code, but I am afraid that I'm missing something important; I'm also wondering if perhaps there's a framework or package that can do all of this already. Ideally, I'd like to combine in-memory and disk-based caching.
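    One direction for the disk-cache half of the question is to fold a manually bumped version token into the key and to name the pickle files after a stable digest. A minimal sketch of that idea follows; the CACHE_TOKEN argument and the one-pickle-per-key layout are assumptions, not part of the code above. Note also that the built-in hash() is salted per interpreter run for many types, so it is unsafe as an on-disk key; hashlib gives stable ones.

        # sketch: disk-backed cache keyed on (args, class-level cache token)
        import hashlib
        import os
        import pickle

        CACHE_DIR = "x_cache"  # assumed layout: one pickle per key

        def cache_key(arg1, arg2, cache_token):
            # stable digest over a stable representation (hash() would not be stable)
            raw = repr((arg1, arg2, cache_token)).encode("utf-8")
            return hashlib.sha1(raw).hexdigest()

        def get_x_cached(arg1, arg2, make_x, cache_token):
            os.makedirs(CACHE_DIR, exist_ok=True)
            path = os.path.join(CACHE_DIR, cache_key(arg1, arg2, cache_token) + ".pkl")
            if os.path.exists(path):
                with open(path, "rb") as f:
                    return pickle.load(f)
            obj = make_x(arg1, arg2)
            with open(path, "wb") as f:
                pickle.dump(obj, f)
            return obj

    Bumping the token (say, X.CACHE_TOKEN = 3) then orphans all older entries, which matches the "obvious mismatch" invalidation strategy described above.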


  • Optimizing data downloaded via 'link' media queries and asynchronous loading

    - by adam-asdf
    I have a website that tries to make sensible use of media queries and avoid 'expensive' CSS for users of mobile devices. My eventual goal is to make it 'mobile-first', but for now, since it is based on Twitter Bootstrap, it isn't. I included some background images (Base64-encoded) and styles that apply only to "full-size" browsers in a separate stylesheet loaded asynchronously via modernizr.load. In Firefox (but not WebKit browsers), if you navigate away from the homepage and then return, the content (specifically, all those extras) 'blinks' when it finishes loading (or maybe I should say reloading). If, instead of using modernizr.load, I include that stylesheet via a link element in the head with a media-query attribute, will that prevent the data from being downloaded by non-matching browsers (mobile, based on screen size) to which it is inapplicable?
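    For reference, the markup being asked about would look something like this (a sketch; the breakpoint and filename are placeholders):

        <!-- applied only when the media query matches; note that non-matching
             browsers may still fetch the file at low priority, they just
             neither apply it nor block rendering on it -->
        <link rel="stylesheet" href="fullsize.css"
              media="screen and (min-width: 768px)">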


  • An open plea to Microsoft to fix the serializers in WCF.

    - by Scott Wojan
    I simply DO NOT understand how Microsoft can be this far along with a tool like WCF and STILL tout it as an "Enterprise" tool. For example, the following is a simple xsd schema with a VERY simple data contract that any enterprise would expect an "enterprise system" to be able to handle:

        <?xml version="1.0" encoding="utf-8"?>
        <xs:schema id="Sample"
            targetNamespace="http://tempuri.org/Sample.xsd"
            elementFormDefault="qualified"
            xmlns="http://tempuri.org/Sample.xsd"
            xmlns:mstns="http://tempuri.org/Sample.xsd"
            xmlns:xs="http://www.w3.org/2001/XMLSchema">
          <xs:element name="SomeDataElement">
            <xs:annotation>
              <xs:documentation>This documents the data element. This sure would be nice for consumers to see!</xs:documentation>
            </xs:annotation>
            <xs:complexType>
              <xs:all>
                <xs:element name="Description" minOccurs="0">
                  <xs:simpleType>
                    <xs:restriction base="xs:string">
                      <xs:minLength value="0"/>
                      <xs:maxLength value="255"/>
                    </xs:restriction>
                  </xs:simpleType>
                </xs:element>
              </xs:all>
              <xs:attribute name="IPAddress" use="required">
                <xs:annotation>
                  <xs:documentation>Another explanation! WOW!</xs:documentation>
                </xs:annotation>
                <xs:simpleType>
                  <xs:restriction base="xs:string">
                    <xs:pattern value="(([1-9]?[0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])\.){3}([1-9]?[0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])"/>
                  </xs:restriction>
                </xs:simpleType>
              </xs:attribute>
            </xs:complexType>
          </xs:element>
        </xs:schema>

    A minimal example xml document would be:

        <?xml version="1.0" encoding="utf-8"?>
        <SomeDataElement xmlns="http://tempuri.org/Sample.xsd" IPAddress="1.1.168.10">
        </SomeDataElement>

    with the maximal example being:

        <?xml version="1.0" encoding="utf-8"?>
        <SomeDataElement xmlns="http://tempuri.org/Sample.xsd" IPAddress="1.1.168.10">
          <Description>ddd</Description>
        </SomeDataElement>

    This schema simply CANNOT be exposed by WCF. Let's list why:

    - svcutil.exe will not generate classes for you, because it can't read an xsd containing xs:annotation.
    - Even if you remove the documentation, the DataContractSerializer DOES NOT support attributes, so IPAddress would become an element, thus not meeting the contract.
    - xsd.exe could generate classes, but it is a very legacy tool and generates legacy code, and you still suffer from the following issues:
      - NONE of the serializers support emitting the xs:annotation documentation. You'd think a consumer would really like to have as much documentation as possible!
      - NONE of the serializers support enforcement of xs:restriction, so you can forget about xs:minLength, xs:maxLength, or xs:pattern enforcement.

    Microsoft... please, please, please, please look at putting the work into your serializers so that they support the very basics of designing enterprise data contracts!!
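    For what it's worth, the XmlSerializer route can at least preserve the attribute shape of the contract; here is a minimal hand-written sketch (the class and property names are inferred from the schema above, and it still gives no enforcement of the restrictions):

        using System.Xml.Serialization;

        [XmlRoot("SomeDataElement", Namespace = "http://tempuri.org/Sample.xsd")]
        public class SomeDataElement
        {
            // XmlSerializer, unlike DataContractSerializer, can map this as an XML attribute
            [XmlAttribute("IPAddress")]
            public string IPAddress { get; set; }

            // still no serializer-level enforcement of the xs:pattern or length restrictions
            [XmlElement("Description")]
            public string Description { get; set; }
        }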


  • Converting .docx to pdf (or .doc to pdf, or .doc to odt, etc.) with libreoffice on a webserver on the fly using php

    - by robertphyatt
    Ok, so I needed to convert .docx files to .pdf files on the fly, but none of the free PHP libraries that were available let me do it on my server (a web service was not good enough). Basically, either I needed to pay for a library (and have it maybe suck) or just deal with the free ones that didn't convert the formatting well enough. Not good enough!

    I found that LibreOffice (OpenOffice's successor) allows command-line conversion using the LibreOffice conversion engine (which DID preserve the formatting like I wanted and generally worked great). I loaded the latest version of Ubuntu (http://www.ubuntu.com/download/ubuntu/download) into VirtualBox (https://www.virtualbox.org/wiki/Downloads) on my computer and found that I was able to easily convert files from the command line like this:

        libreoffice --headless -convert-to pdf fileToConvert.docx -outdir output/path/for/pdf

    I thought: sweet... but I don't have admin rights on my host's web server. I tried to use a "portable" version of LibreOffice that I obtained from http://portablelinuxapps.org/ but I was unable to get it to work on my host's web server, because that server didn't have all the dependencies (Dependency Hell! http://en.wikipedia.org/wiki/Dependency_hell).

    I was at a loss for how to make it work until I ran across a cool project made by a Ph.D. student (Philip J. Guo) at Stanford called CDE: http://www.stanford.edu/~pgbovine/cde.html. I will let you look at his explanations of how it works (I followed what he did in http://www.youtube.com/watch?feature=player_embedded&v=6XdwHo1BWwY, starting at about 32:00, as well as the directions on his site), but in short, it avoids dependency hell by copying all the files used when you run certain commands, recreating the Linux environment where the command worked. I was able to use this to run LibreOffice without having to resort to someone's portable version of it, and it worked just like it did on Ubuntu with the command above, with one tweak: I needed to run the LibreOffice wrapper that CDE generated.

    So, below is my PHP code that calls it. In this code snippet, the filename to be copied is passed in as $_POST["filename"]. I copy the file to the same spot where I originally converted the file, convert it, copy it back, and then delete all the generated files (so that the directory doesn't keep growing). I did it this way because I wasn't able to make it work otherwise on the web server. If there is a Linux + web server ninja out there who can figure out how to make it work without doing this, I would be interested to know what you did; please post a comment if so.

        <?php
        // first copy the file to the magic place where we can convert it to a pdf on the fly
        // ($time is assumed to be set earlier in the script; also note $_POST["filename"]
        // is used unsanitized here, so validate it before doing this in production)
        copy($time.$_POST["filename"], "../LibreOffice/cde-package/cde-root/home/robert/Desktop/".$_POST["filename"]);

        // change to that directory
        chdir('../LibreOffice/cde-package/cde-root/home/robert');

        // the magic command that does the conversion
        $myCommand = "./libreoffice.cde --headless -convert-to pdf Desktop/".$_POST["filename"]." -outdir Desktop/";
        exec($myCommand);

        // copy the file back
        copy("Desktop/".str_replace(".docx", ".pdf", $_POST["filename"]),
             "../../../../../documents/".str_replace(".docx", ".pdf", $_POST["filename"]));

        // delete all the files out of the magic place where we convert on the fly
        // (the files I generated all happened to start with a number)
        $files1 = scandir('Desktop');
        $pattern = '/^[0-9]/';
        foreach ($files1 as $value) {
            preg_match($pattern, $value, $matches);
            if (count($matches) > 0) {
                unlink("Desktop/".$value);
            }
        }

        // changing the header to the location of the file makes it work well on androids
        header('Location: '.str_replace(".docx", ".pdf", $_POST["filename"]));
        ?>

    And here is the tar.gz file I generated with CDE. To duplicate what I did exactly, put the tar.gz file in a folder somewhere; I will call that folder the "root". Make a new folder called "documents" in the "root" folder. Unpack the tar.gz and run the PHP script above from the "documents" folder. Success! I made a truly portable version of LibreOffice that can convert files on the fly on a web server using 100% free, open source software!


  • What's the difference between 'killall' and 'pkill'?

    - by jgbelacqua
    After using just plain kill <some_pid> on Unix systems for many years, I learned pkill from a younger, Linux-savvy colleague [1]. I soon accepted the Linux way, pgrep-ing and pkill-ing through many days and nights, through slow-downs and race conditions. This was all well and good. But now I see nothing but killall. How-tos seem to mention only killall, and I'm not sure if this is some kind of parallel development, or if killall is a successor to pkill, or something else. It seems to function as a more targeted pkill, but I'm sure I'm missing something. Can an Ubuntu/Debian-savvy person explain when (or why) killall should be used, and especially whether it should be preferred to pkill (which often seems easier, because I can be sloppier with name matching, at least by default)?

    [1] 'colleague' is a free upgrade from 'co-worker', so might as well.
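    For context, the basic difference in matching behaviour looks like this (the process names are just examples):

        # killall matches the exact process name by default (-r enables regex)
        killall firefox

        # pkill treats its argument as a regex matched against process names,
        # so partial names work
        pkill fire

        # pkill -f matches against the full command line instead
        pkill -f runaway-script.sh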


  • NetBeans 7.3 Beta2 is Out!

    - by Ondrej Brejla
    NetBeans 7.3 Beta2 was published today and is available for download. You can read about the PHP features added to the NetBeans 7.3 release here on the blog; the main features added or improved are:

    - Parsers for namespaced annotations (Symfony 2, Doctrine 2, etc.)
    - Basic Composer integration (dependency manager for PHP)
    - Twig code completion (with documentation)
    - Smarty braces matching for related tags
    - Smarty parser errors for unmatched tags

    As always, you can help us test the build: just try it, and if you find an issue or error, please report it. Thanks for your help.


  • A quick list of all SharePoint 2010 Powershell commandlets

    - by Sahil Malik
    SharePoint 2010 Training: more information

    Ever wonder what PowerShell cmdlets exist on your SharePoint 2010 installation? Easy! Just run the SharePoint 2010 Management Shell and issue the following command:

        Get-Command -Module Microsoft.SharePoint.PowerShell

    And if you wish to find the matching commands for a certain task (for instance, all commands that have anything to do with "Update"), issue the following:

        Get-Command -Module Microsoft.SharePoint.PowerShell | where {$_.Name -match "Update"}

    And if you want to do exactly the same for stsadm, you could do something like this - Read full article ....


  • What is the best way to have the same website in multiple domains?

    - by Daniel Magliola
    I would like to have the same website, selling a specific product, on multiple domains, to take advantage of keywords matching the domain name for several different searches. However, I understand that having the same content on multiple sites will unleash the wrath of Google. If I redirect all the domains but one to that last one, do I still get any bonus from the "magic exact domain match jackpot"? The same question applies to canonical URLs. What's the best way to approach this? Thanks!
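    If the redirect route is taken, the usual mechanism is a permanent (301) redirect per extra domain; a minimal Apache sketch, with placeholder domain names:

        <VirtualHost *:80>
            ServerName keyword-domain.com
            ServerAlias www.keyword-domain.com
            # 301 so search engines consolidate ranking onto the canonical domain
            Redirect permanent / http://main-domain.com/
        </VirtualHost>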


  • O'Reilly deal of the week to 23:59 PT 4/Sept/2012 - Master Regular Expressions

    - by TATWORTH
    O'Reilly, at http://shop.oreilly.com/category/deals/regular-expressions-owo.do?code=WKRGEX, is offering 50% off a range of e-books on mastering regular expressions: "Take the guesswork out of using regular expressions. Learn powerful tips for matching, extracting, and transforming text as well as the gotchas to avoid. For one week only, SAVE 50% on these e-books and discover a whole new world of mastery over your code." I recommend Mastering Regular Expressions to .NET developers, as it covers the use of regular expressions across a number of environments, including .NET.


  • Google I/O 2012 - Gaming in the Cloud

    Google I/O 2012 - Gaming in the Cloud, presented by Fred Sauer. Many game developers are finding the easy development and deployment experience of Google App Engine ideal for building cloud-based state storage, matchmaking services, and collaboration services. When you have a hit game, the last thing you want to do is worry about your server provisioning. App Engine has an always-free tier to get you started and then scales seamlessly to any size of usage. Game developers also use Google Cloud Storage to easily store and quickly deliver media files to clients around the world. For all I/O 2012 sessions, go to developers.google.com. (From: GoogleDevelopers. Running time: 01:02:17.)


  • Google Analytics: Why does "/" appear in goal funnel visualization?

    - by Lauren
    This is the goal funnel for checkout. Does anyone have any idea where the "/" is coming from? The cart page is at site: game on glove dot com (I don't want this stackoverflow page being indexed in Google particularly well). Go to the site, click on the order button, make your selection, and click the button to enter the cart (it resolves to /Cart and /Shop-Cart). I believe I used regular-expression matching to match "cart". So why the "/"? I don't know what would cause the home page to reload while users are on the cart page, which sits inside a Colorbox lightbox whose only way back to home ("/") is the exit button in the top right. Here's my one guess for the former question, but it doesn't seem likely: see the "check out with paypal" button? If you hover over it, it defaults to the home page, which might be the "/"... but it really redirects the user to the paypal.com page, so it shouldn't also load the home page.


  • Monitoring Windows Azure Service Bus Endpoint with BizTalk 360?

    - by Michael Stephenson
    I'm currently working with a customer who is undergoing an initiative to expose some of their line-of-business applications to external partners and SaaS applications, and as part of this we have been looking at using the Windows Azure Service Bus. For the first part of the project we were focused on some synchronous request-response scenarios where an external application would use the Service Bus relay functionality to get data from some internal applications.

    When we looked at the operational monitoring side of the solution, it was obvious that although most of the normal server-monitoring capabilities would be required for the on-premise components, we would have to look at new approaches to validate that the service was working as expected when called from outside the organization. A number of months ago one of my colleagues, Elton Stoneman, wrote about an approach I have introduced with a number of clients in the past, where we implement a diagnostics service in each service component we build. This service lets us make a call which flexes some of the working parts of the system to prove it is working within any SLA. This approach is discussed in the following article: http://geekswithblogs.net/EltonStoneman/archive/2011/12/12/the-value-of-a-diagnostics-service.aspx

    In our solution we wanted to take the same approach, but we had to consider that the service clients were external to the service. We also had to consider that once you go through the Windows Azure Service Bus, most standard monitoring solutions don't give you an easy way to make this kind of check. In a previous article I described how you can use BizTalk 360 to monitor things using a custom extension to the Web Endpoint Manager, and I felt we could use that approach to provide an excellent way to monitor our Service Bus endpoint. The previous article is available at the following link: http://geekswithblogs.net/michaelstephenson/archive/2012/09/12/150696.aspx

    The Monitoring Solution

    BizTalk 360 currently has an easy way to hook the endpoint manager up to a URL which it will then call; if a successful response is returned, it considers the endpoint to be in a healthy state. We take advantage of this by creating an ASP.NET web page which is called by BizTalk 360, and behind this page we implement the functionality to call the diagnostics service on our Service Bus endpoint. The ASP.NET page can include logic to work out how to handle the response from the diagnostics service. For example, if the overall result of the diagnostics service is successful but the call took longer than a certain amount of time, we can return an error and indicate that the service is taking too long. The following diagram illustrates the monitoring pattern.

    The diagnostics service hosted in the line-of-business application allows us to ping a simple message through the Azure Service Bus relay to the WCF services in the LOB application, and we get a response back indicating that the service is working fine. To implement this I used the exact same approach described in my previous post: a custom web page which calls the diagnostics service and returns an HTTP response code depending on the error condition, or a 200 if the call was successful.

    One of the limitations of this approach is that the competing-consumer pattern for listening to messages from Service Bus means you cannot guarantee which server will process your diagnostics check message. But with BizTalk 360 you can simply add multiple endpoint checks, so that it accesses the individual on-premise web servers directly to ensure each server is working fine, and then also checks that messages can be processed through the cloud.

    Conclusion

    It took me about 15 minutes to get a proof of concept of this up and running, which was able to monitor our web services exposed via Windows Azure Service Bus. I was then able to inherit all of the monitoring benefits of BizTalk 360 to provide an enterprise-class monitoring solution for our cloud-enabled API.
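    As an illustration of the check page, here is a minimal sketch of an ASP.NET handler that BizTalk 360 could poll; DiagnosticsServiceClient, Ping(), and the five-second threshold are assumptions for the sketch, not the actual implementation from the project:

        using System;
        using System.Diagnostics;
        using System.Web;

        // hypothetical check page polled by BizTalk 360
        public class DiagnosticsCheckHandler : IHttpHandler
        {
            public void ProcessRequest(HttpContext context)
            {
                var stopwatch = Stopwatch.StartNew();
                try
                {
                    // call the diagnostics operation exposed through the Service Bus relay
                    // (DiagnosticsServiceClient is an assumed WCF client proxy)
                    var client = new DiagnosticsServiceClient();
                    client.Ping();
                    stopwatch.Stop();

                    // healthy but too slow still counts as a failure
                    context.Response.StatusCode =
                        stopwatch.ElapsedMilliseconds < 5000 ? 200 : 500;
                }
                catch (Exception)
                {
                    context.Response.StatusCode = 500;
                }
            }

            public bool IsReusable { get { return false; } }
        }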


  • Using Google Analytics to determine how much time a visitor spends in each section of my site

    - by flossfan
    I have a site with various pages, like:

        /about/history
        /about/team
        /contact/email-us
        /contact

    I want to figure out how much time people are spending on the entire /about section, and how much on the /contact section. If I run a query on the Google Analytics API and set the dimension to ga:pagePathLevel1 and the metric to ga:avgTimeOnPage, I get results like this:

        { pagePathLevel1: /about, avgTimeOnPage: 28 },
        { pagePathLevel1: /contact, avgTimeOnPage: 10 }

    This looks roughly like what I want, but I'm not sure how to interpret it: is the value of avgTimeOnPage the average time spent by any user on all pages that match that path, or the average time spent by any user on any single page matching that path? I'm looking for the average time spent across all pages matching that path, but the time estimates look shorter than I'd expect.
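    For reference, the query being described maps onto the v3 Core Reporting API roughly like this (the profile ID and date range are placeholders, and the exact endpoint should be checked against the current API docs):

        GET https://www.googleapis.com/analytics/v3/data/ga
            ?ids=ga:12345678
            &dimensions=ga:pagePathLevel1
            &metrics=ga:avgTimeOnPage
            &start-date=2012-01-01
            &end-date=2012-01-31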


  • Quick path jumping

    - by Sebastian P.
    I was just at a lecture, where I noticed the lecturer using a command (probably aliased) to jump to a specific folder. Example:

        ~/code$ j sciproj
        ~/projects/sciproj2011/$

    This looked quite slick, so I started wondering: is this a standard utility, and if so, what is its name? I have two theories as to how it works:

    1. It can create, delete, and jump to aliases directly from the command line in the style of the example, without having to set up aliases manually in a configuration file or script.
    2. It searches the home directory for a folder matching the name and jumps to it.

    The second option seems a bit slow, however, so the first would be preferred.


  • Debian package libmarkdown-php, how can I use it? [closed]

    - by JamesM-SiteGen
    Hello all, I am just wondering how you implement libmarkdown-php in a PHP script. By this, I mean:

    - What code do I run to use the markdown library?
    - Does it simply just add one function?
    - Does it allow me to convert Markdown to HTML and vice versa?
    - Where is the documentation for this package? I can't find it. :(

    Okay, so it turns out that I had found the docs, I just did not match them up: the project page did not contain any info about it being the Squeeze package libmarkdown-php. (Sad to know it is not in Lenny.) Thanks @palhmbus for matching them. :)
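    A minimal usage sketch, assuming the package ships Michel Fortin's PHP Markdown and that the Debian package installs it under /usr/share/php (both worth verifying locally):

        <?php
        // the library exposes a Markdown() function that renders Markdown to HTML;
        // note it is one-way, there is no built-in HTML-to-Markdown conversion
        require_once '/usr/share/php/markdown.php';

        echo Markdown("Hello *world*");
        // => <p>Hello <em>world</em></p>
        ?>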


  • What is the general definition of something that can be included or excluded?

    - by gutch
    When an application presents a user with a list of items, it commonly permits the user to filter the items. Often a 'filter' feature is implemented as a set of include or exclude rules. For example:

        include all emails from [email protected], and exclude those emails without attachments

    I've seen this include/exclude pattern often; for example, Maven and Google Analytics filter things this way. But now that I'm implementing something like this myself, I don't know what to call something that could be either included or excluded. In specific terms:

    - If I have a database table of filter rules, each of which either includes or excludes matching items, what is an appropriate name for the field that stores include or exclude?
    - When displaying a list of filters to a user, what is a good way to label the include or exclude value?

    (As a bonus, can anyone recommend a good implementation of this kind of filtering for inspiration?)
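    One possible shape for such a rules table, just to make the first question concrete ("action" is only one candidate name for the field being asked about):

        CREATE TABLE filter_rules (
            id      INTEGER PRIMARY KEY,
            pattern TEXT NOT NULL,   -- what the rule matches against
            action  TEXT NOT NULL    -- the include-or-exclude field in question
                    CHECK (action IN ('include', 'exclude'))
        );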



  • "// ..." comments at end of code block after } - good or bad?

    - by gablin
    I've often seen comments like these used:

        function foo() {
            ...
        } // foo

        while (...) {
            ...
        } // while

        if (...) {
            ...
        } // if

    and sometimes even going as far as:

        if (condition) {
            ...
        } // if (condition)

    I've never understood this practice and thus never applied it. If your code is so long that you need to know what this ending } belongs to, then perhaps you should consider splitting it up into separate functions. Also, most developer tools are able to jump to the matching bracket. And finally, the last example is, for me, a clear violation of the DRY principle; if you change the condition, you have to remember to change the comment as well (or else it could get messy for the maintainer, or even for you). So why do people use this? Should we use it, or is it bad practice?


  • Search Engine Query Word Order

    - by EoghanM
    I have pages with titles like 'Alpha with Beta'. For every such page, there is an inverse page, 'Beta with Alpha'. Both pages link to each other. When someone on Google searches for 'Beta with Alpha', I'd like them to land on the correct page, but sometimes 'Alpha with Beta' ranks higher (or vice versa). I was thinking of inspecting the referral link when a visitor arrives on my site and silently redirecting them to the correct page based on what they actually searched for. I'm just wondering whether this could be penalized by Google as 'cloaking/sneaky redirects', or is there a better way to ensure that the correct page on my site ranks higher for the matching query?


  • Patterns for a tree of persistent data with multiple storage options?

    - by Robin Winslow
    I have a real-world problem which I'll try to abstract into an illustrative example. So imagine I have data objects in a tree, where parent objects can access children and children can access parents:

        // Interfaces
        interface IParent<TChild> {
            List<TChild> Children { get; set; }
        }

        interface IChild<TParent> {
            TParent Parent { get; set; }
        }

        // Classes
        class Top : IParent<Middle> {}
        class Middle : IParent<Bottom>, IChild<Top> {}
        class Bottom : IChild<Middle> {}

        // Usage
        var top = new Top();
        var middles = top.Children; // List<Middle>
        foreach (var middle in middles) {
            var bottoms = middle.Children; // List<Bottom>
            foreach (var bottom in bottoms) {
                var parent = bottom.Parent;      // access the parent
                var grandparent = parent.Parent; // access the grandparent
            }
        }

    All three data objects have properties that are persisted in two data stores (e.g. a database and a web service), and they need to reflect and synchronise with the stores. Some objects only request from the web service; some only write to it.

    Data Mapper

    My favourite pattern for data access is Data Mapper, because it completely separates the data objects themselves from the communication with the data store:

        class TopMapper {
            public Top FetchById(int id) {
                var top = new Top(DataStore.TopDataById(id));
                top.Children = MiddleMapper.FetchForTop(top);
                return top;
            }
        }

        class MiddleMapper {
            public Middle FetchById(int id) {
                var middle = new Middle(DataStore.MiddleDataById(id));
                middle.Parent = TopMapper.FetchForMiddle(middle);
                middle.Children = BottomMapper.FetchForMiddle(middle);
                return middle;
            }
        }

    This way I can have one mapper per data store, build the object from the mapper I want, and then save it back using the mapper I want. There is a circular reference here, but I guess that's not a problem, because most languages just store memory references to the objects, so there won't actually be infinite data.

    The problem with this is that every time I want to construct a new Top, Middle or Bottom, it needs to build the entire object tree within that object's Parent or Children property, with all the data-store requests and memory usage that entails. And in real life my tree is much bigger than the one represented here, so that's a problem.

    Requests in the object

    In this approach the objects request their Parents and Children themselves:

        class Middle {
            private List<Bottom> _children = null; // cache

            public List<Bottom> Children {
                get {
                    _children = _children ?? BottomMapper.FetchForMiddle(this);
                    return _children;
                }
                set {
                    BottomMapper.UpdateForMiddle(this, value);
                    _children = value;
                }
            }
        }

    I think this is an example of the Repository pattern. Is that correct?

    This solution seems neat: the data only gets requested from the data store when you need it, and thereafter it's stored in the object if you want to request it again, avoiding a further request. However, I have two different data sources. There's a database, but there's also a web service, and I need to be able to create an object from the web service, save it back to the database, then request it again from the database and update the web service. This also makes me uneasy because the data objects themselves are no longer ignorant of the data source. We've introduced a new dependency, not to mention a circular dependency, making it harder to test. And the objects now mask their communication with the database.

    Other solutions

    Are there any other solutions which could take care of the multiple-stores problem but also mean that I don't need to build / request all the data every time?


  • What is the proper way to create a cross-fade effect? [closed]

    - by Starx
    When creating an image slider, a cross-fade is one of the more popular effects. Various sliders use differing techniques to create it. Two techniques I've found so far are:

    1. Use an overlay and underlay <div> and fade each one's visibility in and out.
    2. Create a <div> matching the exact size of the slider during initialization, play with its z-index property, and then fade between them.

    Is there a better way to create this effect?
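    For comparison, a minimal CSS-only sketch of the stacked-slides idea (the class names are made up):

        /* stack the slides; cross-fade by toggling opacity on the top one */
        .slider        { position: relative; }
        .slider .slide { position: absolute; top: 0; left: 0; transition: opacity 1s; }
        .slider .slide.faded { opacity: 0; }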


  • how to have 'find' not return the current directory

    - by Pinpin
    I'm currently trying to find (and copy) all files and folder structure matching a specific pattern in a specified directory, and I'm so nearly there! Specifically, I want to recursively copy all folders not beginning with a '_' character from a specified path:

        find /source/path/with/directories -maxdepth 1 -type d ! -name _\* -exec cp -R {} /destination/path \;

    In the /source/path/with/directories/ path are machine-specific directories beginning with '_', and others; I'm only interested in copying the others. For a reason beyond me, the find command returns the /source/path/with/directories/ directory itself, and therefore copies its content, directories beginning with '_' included. Anyone have a hint as to why that is? Thanks, Pascal
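    For what it's worth, find evaluates its tests against the starting path itself as well, and /source/path/with/directories matches -type d and does not begin with '_', so it is returned too. The usual fix is -mindepth 1 (a sketch, untested against the poster's exact layout):

        find /source/path/with/directories -mindepth 1 -maxdepth 1 -type d ! -name '_*' \
            -exec cp -R {} /destination/path \;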


  • Abandonment to blame for the last JavaScript file not always being loaded?

    - by Larsenal
    I have a code snippet for an app that users are loading as a 3rd-party script on their site. The general sequence is as follows:

    1. The site loads http://www.example.com/foo.js
    2. foo.js does stuff
    3. 1 to 2 seconds later, foo.js loads bar.js

    Now in a perfect world, I'd want to see matching counts for the calls to foo.js and bar.js. However, bar.js loads only about 94% of the time. I'm wondering how much of this discrepancy might be attributable to site abandonment, given that bar.js is delayed by 1 or 2 seconds. I posted here instead of StackOverflow since I think it's more a question about the typical time on page when users abandon a page.


  • which way should I look at visits by region in Google Analytics?

    - by Drai
    I need to generate a report for only the Americas in Google Analytics. When I create an advanced segment that includes Continent Exactly Matching "Americas", I get one number. If I create a segment that includes Sub-Continent Region containing "America", I get a slightly different number. And if I look at all visits but choose Demographics > Location and segment by sub-continent region, I get yet a third number! (Note: this is because it also includes the Caribbean.) All differ by only around 1% of traffic. What is the most accurate way to do this, or should I just pick a way and be consistent?


  • Where to place web.xml outside WAR file for secure redirect?

    - by Silverhalide
    I am running Tomcat 7 and am deploying a bunch of applications delivered to me by a third party as WAR files. I'd like to force some of those apps to always use SSL. (All the "SSL" apps are in one service; other apps outside this discussion are in another service.)

    I've figured out how to use conf\web.xml to redirect apps from HTTP to HTTPS, but that applies to all applications hosted by Tomcat. I've also figured out how to put web.xml in an unpacked app's WEB-INF directory; that does the trick for that specific app, but runs the risk of being overwritten if our vendor gives us a new WAR file to deploy. I've also tried placing the web.xml file in various places under conf\service\host, or under appbase, but none seem to work.

    Is it possible to redirect some apps to SSL without forcing all apps to redirect, or to put the web.xml file inside the extracted WAR file? Here's my server.xml:

        <Service name="secure">
          <Connector port="80" connectionTimeout="20000" redirectPort="443"
                     URIEncoding="UTF-8" enableLookups="false" compression="on"
                     protocol="org.apache.coyote.http11.Http11Protocol"
                     compressableMimeType="text/html,text/xml,text/plain,text/javascript,application/json,text/css"/>
          <Connector port="443" URIEncoding="UTF-8" enableLookups="false" compression="on"
                     protocol="org.apache.coyote.http11.Http11Protocol"
                     compressableMimeType="text/html,text/xml,text/plain,text/javascript,application/json,text/css"
                     scheme="https" secure="true" SSLEnabled="true" sslProtocol="TLS"
                     keystoreFile="..." keystorePass="..." keystoreType="PKCS12"
                     truststoreFile="..." truststorePass="..." truststoreType="JKS"
                     clientAuth="false"
                     ciphers="SSL_RSA_WITH_RC4_128_MD5,SSL_RSA_WITH_RC4_128_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_DSS_WITH_AES_128_CBC_SHA,SSL_RSA_WITH_AES_128_CBC_SHA"/>
          <Engine name="secure" defaultHost="localhost">
            <Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="UserDatabase"/>
            <Host name="localhost" appBase="webapps" unpackWARs="false" autoDeploy="true"
                  xmlValidation="false" xmlNamespaceAware="false">
            </Host>
          </Engine>
        </Service>
        <Service name="mutual-secure">
          ...
        </Service>

    The content of the web.xml files I'm playing with is:

        <web-app xmlns="http://java.sun.com/xml/ns/javaee"
                 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                 xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd"
                 version="3.0" metadata-complete="true">
          <security-constraint>
            <web-resource-collection>
              <web-resource-name>All applications</web-resource-name>
              <url-pattern>/*</url-pattern>
            </web-resource-collection>
            <user-data-constraint>
              <description>Redirect all requests to HTTPS</description>
              <transport-guarantee>CONFIDENTIAL</transport-guarantee>
            </user-data-constraint>
          </security-constraint>
        </web-app>

    (For conf\web.xml the security-constraint is added just before the end of the existing file, rather than creating a new file.) My webapps directory (currently) contains only the WAR files.

