Search Results

Search found 381 results on 16 pages for 'the wind'.

  • WCF 3.5 - Remove SVC Extension - Special Case

    - by Brandon
    I have several RESTful endpoints like such: System.Security.Role.svc System.Security.User.svc etc. This is meant to be a namespace so our RESTful URL's would look like: /rest/{class namespace}/{actions} I have tried a few examples to get the SVC extension removed when my endpoint has multiple periods in it, however, nothing seems to work. I have tested with the WCF REST Contrib package (http://wcfrestcontrib.codeplex.com/), this example (http://www.west-wind.com/weblog/posts/570695.aspx), and another StackOverflow post (http://stackoverflow.com/questions/355165/how-to-remove-thie-svc-extension-in-restful-wcf-service). This works great when my endpoint is something like this: Echo.svc It will properly remove the SVC extension. Any ideas on how to handle endpoints with multiple periods in the endpoint name? EDIT: After some further testing, I found out that it is failing because whenever you do: string path = HttpContext.Current.Request.AppRelativeCurrentExecutionFilePath; If the endpoint contains multiple periods, it strips off everything after the endpoint causing all of the standard IHttpModule's to fail. Example: If I call http://localhost/services/Echo/test, my relative app file path has a returned value of: ~/echo/test However, if I make a call as http://localhost/services/System.Security.User/test, then my relative app file path has a returned value of: ~/system.security.user I am missing the '/test' on the end in that situation.
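
    One possible direction (a sketch only, not a confirmed fix; the module name, the "/rest/" prefix and the registration are all assumptions) is to rewrite the incoming URL from an IHttpModule using Request.Path, which still carries the trailing "/test" segment, and re-insert the ".svc" extension before WCF sees the request:

        using System;
        using System.Web;

        // Hypothetical module: re-inserts ".svc" for extensionless REST URLs such as
        // /rest/System.Security.User/test. It works from Request.Path rather than
        // AppRelativeCurrentExecutionFilePath, which drops "/test" when the endpoint
        // name contains periods.
        public class SvcRewriteModule : IHttpModule
        {
            public void Init(HttpApplication app)
            {
                app.BeginRequest += (sender, e) =>
                {
                    HttpContext ctx = ((HttpApplication)sender).Context;
                    string path = ctx.Request.Path;          // e.g. /rest/System.Security.User/test
                    const string prefix = "/rest/";          // assumed REST root
                    if (path.StartsWith(prefix, StringComparison.OrdinalIgnoreCase) &&
                        !path.Contains(".svc"))
                    {
                        int slash = path.IndexOf('/', prefix.Length);   // end of the endpoint name
                        string rewritten = slash < 0
                            ? path + ".svc"
                            : path.Substring(0, slash) + ".svc" + path.Substring(slash);
                        ctx.RewritePath(rewritten);
                    }
                };
            }

            public void Dispose() { }
        }

    The module would still need to be registered under system.webServer/modules in web.config; that part is omitted here.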

  • Linq to XML: create an anonymous object with element attributes and values

    - by Phil Scholtes
    I'm new to Linq and I'm trying to query an XML document to find a list of account managers for a particular user. (I realize it might make more sense to put this in a database or something else, but this scenario calls for an XML document.) <user emailAddress='[email protected]'> <accountManager department='Customer Service' title='Manager'>[email protected]</accountManager> <accountManager department='Sales' title='Account Manager'>[email protected]</accountManager> <accountManager department='Sales' title='Account Manager'>[email protected]</accountManager> </user> I'm trying to create a list of objects (an anonymous type?) with properties consisting of both XElement attributes (department, title) and values (email). I know that I can get either of the two, but my problem is selecting both. Here is what I'm trying: var managers = _xDoc.Root.Descendants("user") .Where(d => d.Attribute("emailAddress").Value == "[email protected]") .SelectMany(u => u.Descendants("accountManager").Select(a => a.Value)); foreach (var manager in managers) { //do stuff } I can get at a.Value and a.Attribute but I can't figure out how to get both and store them in an object. I have a feeling it would wind up looking something like: select new { department = u.Attribute("department").Value, title = u.Attribute("title").Value, email = u.Value };
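
    The closing guess is essentially on the right track. A sketch along those lines (the property names are assumptions) selects the accountManager elements themselves instead of just their values, so both the attributes and the element value are available:

        var managers = _xDoc.Root
            .Descendants("user")
            .Where(d => (string)d.Attribute("emailAddress") == "[email protected]")
            .SelectMany(u => u.Descendants("accountManager"))
            .Select(a => new
            {
                Department = (string)a.Attribute("department"),
                Title = (string)a.Attribute("title"),
                Email = a.Value
            });

        foreach (var manager in managers)
        {
            // manager.Department, manager.Title and manager.Email are all usable here
        }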

  • Integrating ASP.NET MVC 2 with classic ASP

    - by David Lively
    I'm in the process of moving a large classic ASP application to ASP.NET MVC 2. Questions: My question is about project organization. I would prefer to not mix the MVC code with the ASP code in the same VS project. I'd like to have an MVC WAP with areas that match the parts of the website that I'm migrating. For instance, the old site has a folder /products/default.asp..... /products/productName/default.asp etc. In the MVC WAP, I'd like to have an area called "products", which I could then, either through a rewrite, routing, or preferably through some IIS configuration, point the "products" folder on the ASP site to. In this way, I could gradually move root folders from the ASP site to the MVC application. However, if I create the MVC WAP in a virtual folder, then my routes wind up looking like http://localhost/virtualFolder/products instead of http://localhost/products Any suggestions on how to conquer this? I know that, during deployment, I could deploy the MVC WAP into the root of the ASP site, but this doesn't help with debugging.

  • Aggregating, restructuring hourly time series data in R

    - by Advait Godbole
    I have a year's worth of hourly data in a data frame in R: > str(df.MHwind_load) # compactly displays structure of data frame 'data.frame': 8760 obs. of 6 variables: $ Date : Factor w/ 365 levels "2010-04-01","2010-04-02",..: 1 1 1 1 1 1 1 1 1 1 ... $ Time..HRs. : int 1 2 3 4 5 6 7 8 9 10 ... $ Hour.of.Year : int 1 2 3 4 5 6 7 8 9 10 ... $ Wind.MW : int 375 492 483 476 486 512 421 396 456 453 ... $ MSEDCL.Demand: int 13293 13140 12806 12891 13113 13802 14186 14104 14117 14462 ... $ Net.Load : int 12918 12648 12323 12415 12627 13290 13765 13708 13661 14009 ... While preserving the hourly structure, I would like to know how to extract a particular month/group of months the first day/first week etc of each month all mondays, all tuesdays etc of the year I have tried using "cut" without result and after looking online think that "lubridate" might be able to do so but haven't found suitable examples. I'd greatly appreciate help on this issue.
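
    A sketch of the lubridate route (column names taken from the str() output above; the Date factor is converted first):

        library(lubridate)

        df <- df.MHwind_load
        df$Date <- as.Date(df$Date)            # factor -> Date, hourly rows are preserved

        # a particular month, or a group of months
        june   <- df[month(df$Date) == 6, ]
        summer <- df[month(df$Date) %in% 6:8, ]

        # the first day / first week of each month
        first_day  <- df[day(df$Date) == 1, ]
        first_week <- df[day(df$Date) <= 7, ]

        # all Mondays, all Tuesdays, ... (with lubridate's default week start, 1 = Sunday)
        mondays  <- df[wday(df$Date) == 2, ]
        tuesdays <- df[wday(df$Date) == 3, ]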

  • Efficient splitting of elements in a field

    - by Gary
    I have a field in a text file exported from a database. The field contains addresses but sometimes they are quite long and the database allows them to contain multiple lines. When exported, the newline character gets replaced with a dollar sign like this: first part of very long address$second part of very long address$third part of very long address Not every address has multiple lines and no address contains more than three lines. The length of each line is variable. I'm massaging the data for import into MS Access which is used for a mailmerge. I want to split the field on the $ sign if it's there but if the field only contains 1 line, I want to set my two extra output fields to a zero length string so that I don't wind up with blank lines in the address when it gets printed. I have an awk file that's working correctly on all the other data in the textfile but I need to get this last bit working. I tried the below code. Aside from the fact that I get a syntax error at the else, I'm not sure this is a good way to do what I want. This is being done with gawk on Windows. BEGIN { FS = "|" } $1 != "HEADER" { if ($6 ~ /\$/) split($6, arr, "$") address = arr[1] addresstwo = arr[2] addressthree = arr[3] addressLength = length(address) addressTwoLength = length(addresstwo) addressThreeLength = length(addressthree) else { address = $6 addressLength = length($6) addresstwo = "" addressTwoLength = length(addresstwo) addressthree = "" addressThreeLength = length(addressthree) } printf("%*s\t%*s\t\%*s\n", addressLength, address, addressTwoLength, addresstwo, addressThreeLength, addressthree) }
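
    The reported syntax error comes from the if branch spanning several statements without braces. A corrected sketch of the same logic is below (gawk, field 6 holding the address); the %*s width arguments are dropped because a width equal to the string's own length adds no padding, so plain %s prints the same thing:

        BEGIN { FS = "|" }

        $1 != "HEADER" {
            if ($6 ~ /\$/) {
                split($6, arr, "$")
                address      = arr[1]
                addresstwo   = arr[2]
                addressthree = arr[3]
            } else {
                address      = $6
                addresstwo   = ""
                addressthree = ""
            }
            printf("%s\t%s\t%s\n", address, addresstwo, addressthree)
        }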

  • image not displaying in midlet

    - by user316843
    Hello all,am new here. i have a slight problem; PLease look at the following code and tell me if am doing something wrong because the image is not displaying. i have made it really small so it should fit but its not displaying. i have images displaying in other screens but this main midlet would not. Here is the code: import java.io.IOException; import javax.microedition.midlet.*; import javax.microedition.lcdui.*; /** * @author jay */ public class WShop extends MIDlet implements CommandListener { /* Declare display variables*/ private Form mainForm; private Display display; private Command OK,Exit,wView, mView, myView; /* */ Categories categories = new Categories(this); Image image; public WShop() { /* initialize Screen and Command buttons that will be used when the application starts in the class constructor*/ mainForm = new Form("Wind Shopper"); OK = new Command("OK", Command.OK, 2); Exit = new Command("Exit", Command.EXIT, 0); wview= new Command("wview", Command.OK, 0); mview= new Command("mview", Command.OK, 0); try { /* retrieving the main image of the application*/ image = Image.createImage("/main.png"); } catch (IOException ex) { ex.printStackTrace(); } mainForm.addCommand(OK); mainForm.addCommand(Exit); mainForm.addCommand(wView); mainForm.addCommand(mView); mainForm.setCommandListener(this); } public void startApp() { /* checks to see if the display is currently empty and then sets it to the current screen */ if (display == null) { display = Display.getDisplay(this); } display.setCurrent(mainForm); } /* paused state of the application*/ public void pauseApp() { } /* Destroy Midlet state*/ public void destroyApp(boolean unconditional) { } Thanks in advance.
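
    As posted, the constructor builds the Image but never places it on the Form (and note that wview/mview are assigned while wView/mView are what was declared, which looks like a transcription slip). A likely fix, sketched against MIDP and assuming the rest of the class compiles, is to append the image after loading it:

        try {
            image = Image.createImage("/main.png");
            // An Image has to be appended to the Form (directly or via an ImageItem)
            // before it is drawn.
            mainForm.append(new ImageItem(null, image, ImageItem.LAYOUT_CENTER, "main image"));
        } catch (IOException ex) {
            ex.printStackTrace();
        }

    Form.append(image) on its own would also work, since it wraps the Image in an ImageItem internally.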

  • Google Analytics Event Tracking and Variable visibility.

    - by Jeow
    Hi guys, I have added the standard latest snippet to my HTML page to get Google Analytics to work: ... ... var _gaq = _gaq || []; _gaq.push(['_setAccount', 'UA-15080849-1']); _gaq.push(['_trackPageview']); (function() { var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true; ga.src = 'http://www.google-analytics.com/ga.js'; (document.getElementsByTagName('head')[0] || document.getElementsByTagName('body')[0]).appendChild(ga); })(); Now, looking at the official 'event tracking guide', Google says to add a snippet such as: pageTracker._trackEvent('Videos', 'Play', 'Gone With the Wind'); My question is: where is pageTracker coming from? Is it a global object in ga.js? And if it is, why doesn't Google mention that they run the risk of breaking some script... I must be missing something. Any help really appreciated.
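
    pageTracker comes from the older synchronous snippet, where it is created explicitly with var pageTracker = _gat._getTracker('UA-XXXXX-X'). With the asynchronous snippet shown above there is no pageTracker object; the equivalent call is pushed onto _gaq instead, for example:

        // Asynchronous syntax: the command is queued on _gaq and executed once ga.js loads.
        _gaq.push(['_trackEvent', 'Videos', 'Play', 'Gone With the Wind']);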

  • ExecutorService memory leak on exception

    - by TofuBeer
    I am having a hard time tracking this down since the profiler keeps crashing (hotspot error). Before I go too deep into figuring it out I'd like to know if I really have a problem or not :-) I have a few thread pools created via: Executors.newFixedThreadPool(10); The threads connect to different web sites and, on occasion, I get connection refused and wind up throwing an exception. When I later on call Future.get() to get the result it will then catch the ExecutionException that wraps the exception that was thrown when the connection could not be made. The program uses a fairly constant amount of memory up until the point in time that the exceptions get thrown (they tend to happen in batches when a particular site is overloaded). After that point the memory again remains constant but at a higher level. So my question is along the lines of is the memory behaviour (reported by "top" on Unix) expected because the exceptions just triggered something or do I probably have an actual leak that I'll need to track down? Additionally when Future.get() throws an exception is there anything else I need to do besides catch the exception (such as call Future.cancel() on it)?

  • When is a good time to start thinking about scaling?

    - by Slokun
    I've been designing a site over the past couple days, and been doing some research into different aspects of scaling a site horizontally. If things go as planned, in a few months (years?) I know I'd need to worry about scaling the site up and out, since the resources it would end up consuming would be huge. So, this got me to thinking, when is the best time to start thinking about, and designing for, scalability? If you start too early on, you could easily over complicate your design, and make it impossible to actually build. You could also get too caught up in the details, the architecture, whatever, and wind up getting nothing done. Also, if you do get it working, but the site never takes off, you may have wasted a good chunk of extra effort. On the other hand, you could be saving yourself a ton of effort down the road. Designing it from the ground up to be big would make it much easier later on to let it grow big, with very little rewriting going on. I know for what I'm working on, I've decided to make at least a few choices now on the side of scaling, but I'm not going to do a complete change of thinking to get it to scale completely. Notably, I've redesigned my database from a conventional relational design to one similar to what was suggested on the Reddit site linked below, and I'm going to give memcache a try. So, the basic question, when is a good time to start thinking or worrying about scaling, and what are some good designs, tips, etc. for when doing so? A couple of things I've been reading, for those who are interested: http://www.codinghorror.com/blog/2009/06/scaling-up-vs-scaling-out-hidden-costs.html http://highscalability.com/blog/2010/5/17/7-lessons-learned-while-building-reddit-to-270-million-page.html http://developer.yahoo.com/performance/rules.html

  • Why do black borders appear when I rotate the image? PHP GD

    - by EAGhost
    This code generates two images using GD and rotates one of them. When I rotate the image black borders begin to appear. Anyone have an idea of how to resolve this? imagefilledrectangle($im, 0, 0, 300, 400, $black); imagefilledrectangle($im, 1, 1, 298, 398, $grey); imagefilledrectangle($im, 49, 69, 251, 271, $black); imagefilledrectangle($im, 50, 70, 250, 270, $white); imagefttext($im, 13, 0, 90, 30, $black, $font_file, "Wind Direcction"); $source=imagecreatetruecolor(100, 100); imagefilledrectangle($source, 0, 0, 100, 100, $white); $values = array( 20, 30, // Point 1 (x, y) 50, 0, 80, 30, 65, 30, 65, 100, 35, 100, 35, 30 // Point 7 (x, y) ); imagefilledpolygon($source, $values, 7, $black1); $asd=imagerotate($source, $rotate, 0); imagecolortransparent($asd, $black); imageantialias($asd, true); $insert_x = imagesx($asd); $insert_y = imagesy($asd); if($rotate==0 || $rotate==90 || $rotate==180 || $rotate==270){ imagecopymerge ( $im , $asd , 100 , 130 , 0 , 0 , $insert_x , $insert_y , 100 ); } if($rotate==45 || $rotate==135 || $rotate==225 || $rotate==315){ imagecopymerge ( $im , $asd , 85 , 110 , 0 , 0 , $insert_x , $insert_y , 100 ); } imageantialias($im, true); header('Content-Type: image/png'); imagepng($im); imagedestroy($im); ?
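
    One common fix (a sketch, not verified against the exact code above) is to give imagerotate() a fully transparent background colour for the uncovered corners and to preserve the alpha channel when copying, since imagecopymerge() tends to flatten transparency:

        // Allocate a fully transparent colour on the source image.
        $transparent = imagecolorallocatealpha($source, 0, 0, 0, 127);

        $asd = imagerotate($source, $rotate, $transparent);
        imagealphablending($asd, false);
        imagesavealpha($asd, true);

        // imagecopy() keeps the alpha channel intact.
        imagecopy($im, $asd, 100, 130, 0, 0, imagesx($asd), imagesy($asd));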

  • Using git filter-branch to remove commits by their commit message

    - by machineghost
    In our repository we have a convention where every commit message starts with a certain pattern: Redmine #555: SOME_MESSAGE We also do a bit of rebasing to bring in the potential release branch's changes to a specific issue's branch. In other words, I might have branch "foo-555", but before I merge it in to branch "pre-release" I need to get any commits that pre-release has that foo-555 doesn't (so that foo-555 can fast-forward merge in to pre-release). However, because pre-release sometimes changes, we sometimes wind up with situations where you bring in a commit from pre-release, but then that commit later gets removed from pre-release. It's easy to identify commits that came from pre-release, because the number from their commit message won't match the branch number; for instance, if I see "Redmine #123: ..." in my foo-555 branch, I know that its not a commit from my branch. So now the question: I'd like to remove all of the commits that "don't belong" to a branch; in other words, any commit that: Is in my foo-555 branch, but not in the pre-release branch (pre-release..foo-555) Has a commit message that doesn't start with "Redmine #555" but of course "555" will vary from branch to branch. Is there any way to use filter-branch (or any other tool) to accomplish this? Currently the only way I can see to do it is to do go an interactive rebase ("git rebase -i") and manually remove all the "bad" commits.
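
    Before reaching for filter-branch, a lower-risk sketch (the branch name and issue number are taken from the example above) is to list the out-of-place commits first and then drop them with an interactive rebase:

        # Commits reachable from foo-555 but not from pre-release whose subject does
        # not reference this branch's issue number:
        git log --format='%h %s' pre-release..foo-555 | grep -v 'Redmine #555'

        # Then drop exactly those commits, e.g. interactively:
        git rebase -i pre-release foo-555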

  • parsing xml using dom4j

    - by D3GAN
    My XML structure is like this: <rss> <channel> <yweather:location city="Paris" region="" country="France"/> <yweather:units temperature="C" distance="km" pressure="mb" speed="km/h"/> <yweather:wind chill="-1" direction="40" speed="11.27"/> <yweather:atmosphere humidity="87" visibility="9.99" pressure="1015.92" rising="0"/> <yweather:astronomy sunrise="8:30 am" sunset="4:54 pm"/> </channel> </rss> when I tried to parse it using dom4j SAXReader xmlReader = createXmlReader(); Document doc = null; doc = xmlReader.read( inputStream );//inputStream is input of function log.info(doc.valueOf("/rss/channel/yweather:location/@city")); private SAXReader createXmlReader() { Map<String,String> uris = new HashMap<String,String>(); uris.put( "yweather", "http://xml.weather.yahoo.com/ns/rss/1.0" ); uris.put( "geo", "http://www.w3.org/2003/01/geo/wgs84_pos#" ); DocumentFactory factory = new DocumentFactory(); factory.setXPathNamespaceURIs( uris ); SAXReader xmlReader = new SAXReader(); xmlReader.setDocumentFactory( factory ); return xmlReader; } But I got nothing in cmd but when I print doc.asXML(), my XML structure print correctly!
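
    If the factory-level namespace map is not being applied to that valueOf() call, an alternative sketch (same prefix and URI as above; DocumentHelper and XPath are org.dom4j types) is to build the XPath object explicitly and attach the namespaces to it before evaluating:

        Map<String, String> uris = new HashMap<String, String>();
        uris.put("yweather", "http://xml.weather.yahoo.com/ns/rss/1.0");

        XPath xpath = DocumentHelper.createXPath("/rss/channel/yweather:location/@city");
        xpath.setNamespaceURIs(uris);

        String city = xpath.valueOf(doc);   // "Paris" for the sample feed above
        log.info(city);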

  • Deal with undefined values in code or in the template?

    - by David
    I'm writing a web application (in Python, not that it matters). One of the features is that people can leave comments on things. I have a class for comments, basically like so: class Comment: user = ... # other stuff where user is an instance of another class, class User: name = ... # other stuff And of course in my template, I have <div>${comment.user.name}</div> Problem: Let's say I allow people to post comments anonymously. In that case comment.user is None (undefined), and of course accessing comment.user.name is going to raise an error. What's the best way to deal with that? I see three possibilities: Use a conditional in the template to test for that case and display something different. This is the most versatile solution, since I can change the way anonymous comments are displayed to, say, "Posted anonymously" (instead of "Posted by ..."), but I've often been told that templates should be mindless display machines and not include logic like that. Also, other people might wind up writing alternate templates for the same application, and I feel like I should be making things as easy as possible for the template writer. Implement an accessor method for the user property of a Comment that returns a dummy user object when the real user is undefined. This dummy object would have user.name = 'Anonymous' or something like that and so the template could access it and print its name with no error. Put an actual record in my database corresponding to a user with user.name = Anonymous (or something like that), and just assign that user to any comment posted when nobody's logged in. I know I've seen some real-world systems that operate this way. (phpBB?) Is there a prevailing wisdom among people who write these sorts of systems about which of these (or some other solution) is the best? Any pitfalls I should watch out for if I go one way vs. another? Whoever gives the best explanation gets the checkmark.
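
    Option 2 above amounts to the null-object pattern; a minimal sketch of what that accessor could look like (class and attribute names assumed from the snippets above):

        class AnonymousUser(object):
            """Stand-in returned whenever a comment has no real author."""
            name = 'Anonymous'

        class Comment(object):
            def __init__(self, user=None):
                self._user = user

            @property
            def user(self):
                # The template can always rely on comment.user.name existing.
                return self._user if self._user is not None else AnonymousUser()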

  • How to track the touch vector?

    - by mystify
    I need to calculate the direction of dragging a touch, to determine if the user is dragging up the screen, or down the screen. Actually pretty simple, right? But: 1) Finger goes down, you get -touchesBegan:withEvent: called 2) Must wait until finger moves, and -touchesMoved:withEvent: gets called 3) Problem: At this point it's dangerous to tell if the user did drag up or down. My thoughts: Check the time and accumulate calculates vectors until it's secure to tell the direction of touch. Easy? No. Think about it: What if the user holds the finger down for 5 minutes on the same spot, but THEN decides to move up or down? BANG! Your code would fail, because it tried to determine the direction of touch when the finger didn't move really. Problem 2: When the finger goes down and stays at the same spot for a few seconds because the user is a bit in the wind and thinks about what to do now, you'll get a lot of -touchesMoved:withEvent: calls very likely, but with very minor changes in touch location. So my next thought: Do the accumulation in -touchesMoved:withEvent:, but only if a certain threshold of movement has been exceeded. I bet you have some better concepts in place?
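
    A rough sketch of the accumulate-until-threshold idea (the threshold value and the accumulatedY ivar are made up): compare each move against the previous location and only commit to a direction once the summed vertical delta is large enough to rule out jitter:

        static const CGFloat kDirectionThreshold = 10.0f;   // points, arbitrary

        - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
        {
            UITouch *touch = [touches anyObject];
            CGFloat dy = [touch locationInView:self.view].y
                       - [touch previousLocationInView:self.view].y;
            accumulatedY += dy;   // reset to 0 in touchesBegan:withEvent:

            if (fabs(accumulatedY) > kDirectionThreshold) {
                BOOL draggingDown = (accumulatedY > 0);
                // Commit to the direction here; small wobbles never get this far.
            }
        }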

  • How do I put all the types matching a particular C# interface in an IDictionary?

    - by Kevin Brassen
    I have a number of classes all in the same interface, all in the same assembly, all conforming to the same generic interface: public class AppleFactory : IFactory<Apple> { ... } public class BananaFactory : IFactory<Banana> { ... } // ... It's safe to assume that if we have an IFactory<T> for a particular T that it's the only one of that kind. (That is, there aren't two things that implement IFactory<Apple>.) I'd like to use reflection to get all these types, and then store them all in an IDictionary, where the key is typeof(T) and the value is the corresponding IFactory<T>. I imagine eventually we would wind up with something like this: _map = new Dictionary<Type, object>(); foreach(Type t in [...]) { object factoryForType = System.Reflection.[???](t); _map[t] = factoryForType; } What's the best way to do that? I'm having trouble seeing how I'd do that with the System.Reflection interfaces.
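
    A sketch of the missing piece (scanning the executing assembly is an assumption): look for the closed IFactory<> interface on each concrete type, use its type argument as the key, and an instance of the type as the value.

        var map = new Dictionary<Type, object>();

        foreach (Type t in Assembly.GetExecutingAssembly().GetTypes())
        {
            if (t.IsAbstract || t.IsInterface)
                continue;

            foreach (Type iface in t.GetInterfaces())
            {
                if (iface.IsGenericType &&
                    iface.GetGenericTypeDefinition() == typeof(IFactory<>))
                {
                    Type productType = iface.GetGenericArguments()[0];  // e.g. typeof(Apple)
                    map[productType] = Activator.CreateInstance(t);     // e.g. new AppleFactory()
                }
            }
        }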

  • create a class attribute without going through __setattr__

    - by eric.frederich
    Hello, What I have below is a class I made to easily store a bunch of data as attributes. They wind up getting stored in a dictionary. I override __getattr__ and __setattr__ to store and retrieve the values back in different types of units. When I started overriding __setattr__ I was having trouble creating that initial dicionary in the 2nd line of __init__ like so... super(MyDataFile, self).__setattr__('_data', {}) My question... Is there an easier way to create a class level attribute with going through __setattr__? Also, should I be concerned about keeping a separate dictionary or should I just store everything in self.__dict__? #!/usr/bin/env python from unitconverter import convert import re special_attribute_re = re.compile(r'(.+)__(.+)') class MyDataFile(object): def __init__(self, *args, **kwargs): super(MyDataFile, self).__init__(*args, **kwargs) super(MyDataFile, self).__setattr__('_data', {}) # # For attribute type access # def __setattr__(self, name, value): self._data[name] = value def __getattr__(self, name): if name in self._data: return self._data[name] match = special_attribute_re.match(name) if match: varname, units = match.groups() if varname in self._data: return self.getvaras(varname, units) raise AttributeError # # other methods # def getvaras(self, name, units): from_val, from_units = self._data[name] if from_units == units: return from_val return convert(from_val, from_units, units), units def __str__(self): return str(self._data) d = MyDataFile() print d # set like a dictionary or an attribute d.XYZ = 12.34, 'in' d.ABC = 76.54, 'ft' # get it back like a dictionary or an attribute print d.XYZ print d.ABC # get conversions using getvaras or using a specially formed attribute print d.getvaras('ABC', 'cm') print d.XYZ__mm
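
    To answer the narrower question: both spellings below bypass the overridden __setattr__, so either is a reasonable way to create _data (a sketch of the constructor only):

        def __init__(self, *args, **kwargs):
            super(MyDataFile, self).__init__(*args, **kwargs)
            # Either line avoids the overridden __setattr__:
            object.__setattr__(self, '_data', {})
            # self.__dict__['_data'] = {}

    As for keeping a separate dictionary versus using self.__dict__ directly: storing values straight in __dict__ also works, because __getattr__ is only invoked when normal attribute lookup fails, but the separate _data dict keeps the unit-carrying values clearly apart from ordinary attributes.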

  • Cannot access NSDictionary

    - by michael blaize
    I created a JSON file using a PHP script. I am reading the JSON and can see that the data has been read correctly. However, when it comes to accessing the objects I get "unrecognized selector sent to instance"... I cannot seem to find out why after too many hours! Any help would be great! My code looks like this: NSDictionary *json = [[NSDictionary alloc] init]; json = [NSJSONSerialization JSONObjectWithData:receivedData options:kNilOptions error:&error]; NSLog(@"raw json = %@,%@",json,error); NSMutableArray *name = [[NSMutableArray alloc] init]; [name addObjectsFromArray: [json objectForKey:@"name"]]; The code crashes when it reaches the last line above. The output looks like this: raw json = ( { category = vacancies; link = "http://blablabla.com"; name = "name 111111"; tagline = "tagline 111111"; }, { category = vacancies; link = "http://blobloblo.com"; name = "name 222222222"; tagline = "tagline 222222222"; } ),(null) 2012-06-23 21:46:57.539 Wind expert[4302:15203] -[__NSCFArray objectForKey:]: unrecognized selector sent to instance 0xdcfb970 HELP!
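
    The log output points at the cause: the top level of that JSON is an array of objects, so NSJSONSerialization returns an NSArray of NSDictionary instances, and objectForKey: is being sent to the array. A sketch of reading it accordingly:

        NSError *error = nil;
        NSArray *items = [NSJSONSerialization JSONObjectWithData:receivedData
                                                          options:kNilOptions
                                                            error:&error];

        NSMutableArray *names = [[NSMutableArray alloc] init];
        for (NSDictionary *item in items) {
            [names addObject:[item objectForKey:@"name"]];
        }

        // or, more compactly:
        // NSArray *names = [items valueForKey:@"name"];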

  • Should I *always* import my file references into the database in drupal?

    - by sprugman
    I have a cck type with an image field, and a unique_id text field. The file name of the image is based on the unique_id. All of the content, including the image itself is being generated automatically via another process, and I'm parsing what that generates into nodes. Rather than creating separate fields for the id and the image, and doing an official import of the image into the files table, I'm tempted to only create the id field and create the file reference in the theme layer. I can think of pros and cons: 1) Theme Layer Approach Pros: makes the import process much less complex don't have to worry about syncing the db with the file system as things change more flexible -- I can move my images around more easily if I want Cons: maybe not as much The Drupal Way™ not as pure -- I'll wind up with more logic on the theme side. 2) Import Approach Pros: import method is required if we ever wanted to make the files private (we won't.) safer? Maybe I'll know if there's a problem with the image at import time, rather than view time. Since I'll be bulk importing, that might make a difference. if I delete a node through the admin interface, drupal might be able to delete the file for me, as well. Con: more complex import and maintenance All else being equal, simpler is always better, so I'm leaning toward #1. Are there any other issues I'm missing? (Since this is an open ended question, I guess I'll make it a community wiki, whatever that means.)

  • The Koyal Group Info Mag News | Charged building material could make the renewable grid a reality

    - by Chyler Tilton
    What if your cell phone didn’t come with a battery? Imagine, instead, if the material from which your phone was built was a battery. The promise of strong load-bearing materials that can also work as batteries represents something of a holy grail for engineers. And in a letter published online in Nano Letters last week, a team of researchers from Vanderbilt University describes what it says is a breakthrough in turning that dream into an electrocharged reality. The researchers etched nanopores into silicon layers, which were infused with a polyethylene oxide-ionic liquid composite and coated with an atomically thin layer of carbon. In doing so, they created small but strong supercapacitor battery systems, which stored electricity in a solid electrolyte, instead of using corrosive chemical liquids found in traditional batteries. These supercapacitors could store and release about 98 percent of the energy that was used to charge them, and they held onto their charges even as they were squashed and stretched at pressures up to 44 pounds per square inch. Small pieces of them were even strong enough to hang a laptop from—a big, fat Dell, no less. Although the supercapacitors resemble small charcoal wafers, they could theoretically be molded into just about any shape, including a cell phone’s casing or the chassis of a sedan. They could also be charged—and evacuated of their charge—in less time than is the case for traditional batteries. “We’ve demonstrated, for the first time, the simple proof-of-concept that this can be done,” says Cary Pint, an assistant professor in the university’s mechanical engineering department and one of the authors of the new paper. “Now we can extend this to all kinds of different materials systems to make practical composites with materials specifically tailored to a host of different types of applications. We see this as being just the tip of a very massive iceberg.” Pint says potential applications for such materials would go well beyond “neat tech gadgets,” eventually becoming a “transformational technology” in everything from rocket ships to sedans to home building materials. “These types of systems could range in size from electric powered aircraft all the way down to little tiny flying robots, where adding an extra on-board battery inhibits the potential capability of the system,” Pint says. And they could help the world shift to the intermittencies of renewable energy power grids, where powerful batteries are needed to help keep the lights on when the sun is down or when the wind is not blowing. “Using the materials that make up a home as the native platform for energy storage to complement intermittent resources could also open the door to improve the prospects for solar energy on the U.S. grid,” Pint says. “I personally believe that these types of multifunctional materials are critical to a sustainable electric grid system that integrates solar energy as a key power source.”

  • Seriously, It’s Time to Get Your Content Act Together

    - by Mike Stiles
    Branded content, content marketing, social content, brand journalism, we’re seeing those terms more and more. Why? The technology tools are coming together. We should know. We can gather big data, crunch it, listen to the public, moderate, respond, get to know the customer intimately, know what they like, know what they want, we can target, distribute, amplify, measure engagement and reaction, modify strategy and even automate a great deal of all that. An amazing machine, a sleek, smooth-running engine has been built such that all the parts can interact and work together to deliver peak performance and maximum output. But that engine isn’t going anywhere without any gas. Content is the gas. Yes, we curate other people’s content. We can siphon their gas. There’s tech to help with that too. But as for the creation of original, worthwhile content made for a specific audience, our audience, machines can’t do that…at least not yet. Curated content is great. But somebody has to originate the content for it to be curated and shared. And since the need for good, curated content is obviously large and the desire to share is there, it’s a winning proposition for a brand to be a consistent producer of original content. And yet, it feels like content is an issue we’re avoiding. There’s a reluctance to build a massive pipeline if you have no idea what you’re going to run through it. The C-suite often doesn’t know what content is, that it’s different from ads, where to get it, who makes it, how long it should be, what the point of it is if there’s no hard sell of the product, what it costs, how to use it, how to measure it, how to make sure it’s good, or how to make sure it will keep flowing. It could be the reason many brands aren’t pulling the trigger on socially enabling the enterprise. And that’s a shame, because there are a lot of creative, daring, experimental, uniquely talented entertainers and journalists chomping at the bit to execute content for brands. But for many corporate executives, content is “weird,” and the people who make it are even weirder. The content side of the equation is human. It’s art, but art that can be informed by data. The natural inclination is for brands to turn to their agencies for such creative endeavors. But agencies are falling into one of two categories. They’re failing to transition from ads to content. In “Content Era, What’s the Role of Agencies?” Alexander Jutkowitz says agencies were made for one-hit campaigns, not ongoing content. Or, they’re ready and capable but can’t get clients to do the right things. Agencies have to make money, even if it means continuing to do the wrong things because that’s all the client will agree to. So what we wind up with in the pipeline is advertising, marketing-heavy content, content that was obviously created or spearheaded by non-creative executives, random & inconsistent content, copy written for SEO bots, and other completely uninteresting nightmares. Frank Rose, author of “The Art of Immersion,” writes, “Content without story and excitement is noise pollution.” In the old days, you made an ad and inserted it into shows made by people who knew what they were doing. You could bask in that show’s success and leverage their audience. Now, you are tasked with attracting, amassing and holding your own audience. You may just want to make, advertise and sell your widgets. But now there’s a war on for a precious commodity, attention. People are busy. They have filters to keep uninteresting and irrelevant things out. 
They value their time and expect value back when they give it up. Joe Pulizzi, founder of the Content Marketing Institute, says, "Your customers don't care about you, your products, your services…they care about themselves, their wants and their needs." Is it worth getting serious about content and doing it right? 61% of consumers feel better about a company that delivers custom content (Custom Content Council). Interesting content is one of the top 3 reasons people follow brands on social (Content+). 78% of consumers think organizations that provide custom content want to build good relationships with them (TMG Custom Media). On the B2B side, 80% of business decision makers prefer to get company info in a series of articles vs. an ad. So what’s the hang-up? Cited barriers to content marketing are lack of human resources (42%) and lack of budget (35%). 54% of brands don’t have a single on-site, dedicated content creator. And only 38% of brands have a content marketing strategy. Tech has built the biggest, most incredible stage for brands that’s ever been built. Putting something on that stage is your responsibility. Do a bad show, or no show at all, and you’ll be the beautiful, talented actress that never got discovered. @mikestilesPhoto: Gabriella Fabbri, stock.xchng

  • Enabling HTTP caching and compression in IIS 7 for ASP.NET websites

    - by anil.kasalanati
    Caching – There are two ways to set HTTP caching: (1) the max-age property, or (2) the Expires header.

    Doing the changes via the IIS console:
    1. Select the website for which you want to enable caching and then select HTTP Response Headers in the features tab.
    2. Select "Expire Web content" and, by changing the "After" setting, you generate the max-age property for the cache control.
    3. (The original post showed a screenshot of the resulting headers here.) You can then use a tool like Fiddler and see the 304 (Not Modified) responses coming from the server.

    Doing it the web.config way – we can add a staticContent section inside system.webServer:

        <system.webServer>
          <staticContent>
            <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="365.00:00:00" />
          </staticContent>
        </system.webServer>

    Compression – By default, static compression is enabled on IIS 7.0, but the only content that falls under that category is CSS, which is not enough for most websites that use a lot of JavaScript. If you thought that simply enabling dynamic compression would fix this, you are wrong, so please follow the steps below.

    On some machines dynamic compression is not installed; the steps to add it are:
    1. Open Server Manager.
    2. Roles > Web Server (IIS).
    3. Role Services (scroll down) > Add Role Services.
    4. Add the desired role (Web Server > Performance > Dynamic Content Compression).
    5. Next, Install, Wait…Done!

    To enable it:
    1. Open Server Manager.
    2. Roles > Web Server (IIS) > Internet Information Services (IIS) Manager.
    3. Next pane: Sites > Default Web Site > Your Web Site.
    4. Main pane: IIS > Compression.

    Then comes the custom configuration for compressing JavaScript resources. The problem is that compression in IIS 7 works entirely on MIME types, and by default there is a mismatch in the MIME types. Go to C:\Windows\System32\inetsrv\config and open applicationHost.config. The mime map is as follows:

        <mimeMap fileExtension=".js" mimeType="application/javascript" />

    So the corresponding entry in the staticTypes section should be changed to:

        <add mimeType="application/javascript" enabled="true" />

    Doing it the web.config way – we can add the following section inside system.webServer:

        <system.webServer>
          <urlCompression doDynamicCompression="false" doStaticCompression="true" />
        </system.webServer>

    More information/references:
    http://weblogs.asp.net/owscott/archive/2009/02/22/iis-7-compression-good-bad-how-much.aspx
    http://www.west-wind.com/weblog/posts/98538.aspx

  • Goodbye jQuery Templates, Hello JsRender

    - by SGWellens
    A funny thing happened on my way to the jQuery website, I blinked and a feature was dropped: jQuery Templates have been discontinued. The new pretender to the throne is JsRender. jQuery Templates looked pretty useful when they first came out. Several articles were written about them but I stayed away because being on the bleeding edge of technology is not a productive place to be. I wanted to wait until it stabilized…in retrospect, it was a serendipitous decision. This time however, I threw all caution to the wind and took a close look at JSRender. Why? Maybe I'm having a midlife crisis; I'll go motorcycle shopping tomorrow. Caveat, here is a message from the site: Warning: JsRender is not yet Beta, and there may be frequent changes to APIs and features in the coming period. Fair enough, we've been warned. The first thing we need is some data to render. Below is some JSON formatted data. Typically this will come from an asynchronous call to a web service. For simplicity, I hard coded a variable:     var Golfers = [         { ID: "1", "Name": "Bobby Jones", "Birthday": "1902-03-17" },         { ID: "2", "Name": "Sam Snead", "Birthday": "1912-05-27" },         { ID: "3", "Name": "Tiger Woods", "Birthday": "1975-12-30" }         ]; We also need some templates, I created two. Note: The script blocks have the id property set. They are needed so JsRender can locate them.     <script id="GolferTemplate1" type="text/html">         {{=ID}}: <b>{{=Name}}</b> <i>{{=Birthday}}</i> <br />     </script>       <script id="GolferTemplate2" type="text/html">         <tr>             <td>{{=ID}}</td>             <td><b>{{=Name}}</b></td>             <td><i>{{=Birthday}}</i> </td>         </tr>     </script> Including the correct JavaScript files is trivial:     <script src="Scripts/jquery-1.7.js" type="text/javascript"></script>     <script src="Scripts/jsrender.js" type="text/javascript"></script> Of course we need some place to render the output:     <div id="GolferDiv"></div><br />     <table id="GolferTable"></table> The code is also trivial:     function Test()     {         $("#GolferDiv").html($("#GolferTemplate1").render(Golfers));         $("#GolferTable").html($("#GolferTemplate2").render(Golfers));           // you can inspect the rendered html if there are poblems.         // var html = $("#GolferTemplate2").render(Golfers);     } And here's what it looks like with some random CSS formatting that I had laying around.    Not bad, I hope JsRender lasts longer than jQuery Templates. One final warning, a lot of jQuery code is ugly, butt-ugly. If you do look inside the jQuery files, you may want to cover your keyboard with some plastic in case you get vertigo and blow chunks. I hope someone finds this useful. Steve Wellens CodeProject

  • Spotlight on Claims: Serving Customers Under Extreme Conditions

    - by [email protected]
    Oracle Insurance's director of marketing for EMEA, John Sinclair, recently attended the CII Spotlight on Claims event in London. Bad weather and its implications for the insurance industry have become very topical as the frequency and diversity of natural disasters - including rains, wind and snow - has surged across Europe this winter. On England's wettest day on record, the county of Cumbria was flooded with 12 inches of rain within 24 hours. Freezing temperatures wreaked havoc on European travel, causing high speed TVG trains to break down and stranding hundreds of passengers under the English Chanel in a tunnel all night long without heat or electricity. A storm named Xynthia thrashed France and surrounding countries with hurricane force, flooding ports and killing 51 people. After the Spring Equinox, insurers may have thought the worst had past. Then came along Eyjafjallajökull, spewing out vast quantities of volcanic ash in what is turning out to be one of most costly natural disasters in history. Such extreme events challenge insurance companies' ability to service their customers just when customers need their help most. When you add economic downturn and competitive pressures to the mix, insurers are further stretched and required to continually learn and innovate to meet high customer expectations with reduced budgets. These and other issues were hot topics of discussion at the recent "Spotlight on Claims" seminar in London, focused on how weather is affecting claims and the insurance industry. The event was organized by the CII (Chartered Insurance Institute), a group with 90,000 members. CII has been at the forefront in setting professional standards for the insurance industry for over a century. Insurers came to the conference to hear how they could better serve their customers under extreme weather conditions, learn from the experience of their peers, and hear about technological breakthroughs in climate modeling, geographic intelligence and IT. Customer case studies at the conference highlighted the importance of effective and constant communication in handling the overflow of catastrophe related claims. First and foremost is the need to rapidly establish initial communication with claimants to build their confidence in a positive outcome. Ongoing communication then needs to be continued throughout the claims cycle to mange expectations and maintain ownership of the process from start to finish. Strong internal communication to support frontline staff was also deemed critical to successful crisis management, as was communication with the broader insurance ecosystem to tap into extended resources and business intelligence. Advances in technology - such web based systems to access policies and enter first notice of loss in the field - as well as customer-focused self-service portals and multichannel alerts, are instrumental in improving customer satisfaction and helping insurers to deal with the claims surge, which often can reach four or more times normal workloads. Dynamic models of the global climate system can now be used to better understand weather-related risks, and as these models mature it is hoped that they will soon become more accurate in predicting the timing of catastrophic events. Geographic intelligence is also being used within a claims environment to better assess loss reserves and detect fraud. 
Despite these advances in dealing with catastrophes and predicting their occurrence, there will never be a substitute for qualified front line staff to deal with customers. In light of pressures to streamline efficiency, there was debate as to whether outsourcing was the solution, or whether it was better to build on the people you have. In the final analysis, nearly everybody agreed that in the future insurance companies would have to work better and smarter to keep on top. An appeal was also made for greater collaboration amongst industry participants in dealing with the extreme conditions and systematic stress brought on by natural disasters. It was pointed out that the public oftentimes judged the industry as a whole rather than the individual carriers when it comes to freakish events, and that all would benefit at such times from the pooling of limited resources and professional skills rather than competing in silos for competitive advantage - especially the end customer. One case study that stood out was on how The Motorists Insurance Group was able to power through one of the most devastating catastrophes in recent years - Hurricane Ike. The keys to Motorists' success were superior people, processes and technology. They did a lot of upfront planning and invested in their people, creating a healthy team environment that delivered "max service" even when they were experiencing the same level of devastation as the rest of the population. Processes were rapidly adapted to meet the challenge of the catastrophe and continually adapted to Ike's specific conditions as they evolved. Technology was fundamental to the execution of their strategy, enabling them anywhere access, on the fly reassigning of resources and rapid training to augment the work force. You can learn more about the Motorists experience by watching this video. John Sinclair is marketing director for Oracle Insurance in EMEA. He has more than 20 years of experience in insurance and financial services.

  • Red Gate does Byte Night 2012

    - by red(at)work
    On the 5th of October 2012, a team of nine plucky Red Gaters braved the howling wind and the driving rain to sleep outside. No tents or mattresses were allowed – all we took for protection were sleeping bags, groundsheets, plastic sacks and Colin’s enormous fishing umbrella (a godsend in umbrella-y disguise). Why would we do such a thing? For Byte Night, an annual tech sector sleepout in support of Action for Children, who tackle the causes as well as the consequences of youth homelessness. Byte Night encourages technology professionals to do for one night a year what thousands of young people have to do every night – sleep rough.  We signed up for Byte Night in the warm, heady midst of the British summer, thinking it couldn’t possibly be all that bad. Even on the night itself – before the rain began to fall, sat in the comfort and warmth of a company canteen, drinking wine and eating chill and preparing to win the pub quiz – we were excited and optimistic about the night that lay ahead of us. All of that changed as soon as we stepped out into one of the worst rainstorms of the year. Brian, the team’s birthday boy, describes it best: Picture the scene: it’s 3 am on a Friday. I’m lying outside, fully clothed in a sleeping bag, wearing a raincoat, trussed up inside a large plastic pocket, on a ground sheet beneath a giant umbrella, wedged so tightly between two of my colleagues that I can’t move my arms. I’m wide awake, staring up at the grey sky beyond the edge of the umbrella; a limp, flickering white glow hints at a moon somewhere behind the drifting clouds. I haven’t slept since we first moved outside at 11 pm. Outside. Did I mention we were outside? I’m hung over. I need the loo. But there is no way on earth that I’m getting out of this sleeping bag. It’s cold. It’s raining. Not just raining, but chucking it down. It’s been doing this non-stop since 10pm. The rain sounds like a hyperactive drummer on the fishing umbrella, and the noise is loud and relentless. Puddles of water are forming all over the groundsheet, and, despite being ensconced inside the plastic pouch, I am wet. The fishing umbrella is protecting me from the worst of the driving rain, but not all of me is under it, and five hours of rain is no match for it. Everything is wet. My left side has become horribly damp. My trainers, which I placed next to my sleeping bag, are now completely soaked through. Mmm. That’ll be fun in the morning. My head is next to Colin’s head on one side, and a multi-pack of McCoy’s cheddar and onion crisps on the other. Don’t ask about the tub of hummus. That’s somewhere down by my ankles, abandoned to the night. Jess, who is lying next to me, rolls over onto her side. A mini waterfall cascades from her rain-pouch onto my face. Bah. I continue to stare into the heavens, willing the dawn to hurry up. Something lands on my face. It’s a mosquito. Great. Midnight, when this still seemed like fun – when we opened some champagne and my colleagues presented me with a caterpillar birthday cake, when everyone was drunk and jolly and full of stoic resolve – feels like a long time ago. Did I mention that today is my birthday? The remains of the caterpillar cake endure the same fate as the hummus, left out in the rain like a metaphor for sadness. It’s getting colder. I can see my breath. Silence has descended on the group, apart from the rustle of plastic. And the rain, obviously. Someone snores, and I envy whoever it is the sweet escape of sleep. 
I try to wriggle a bit further down inside my sleeping bag, but it doesn’t want to be wriggled into. Only 3 hours till dawn. 180 minutes. I begin to count them off, one at a time.  All nine of us got to go home in the morning, but thousands of children across the UK don’t have that luxury. If you’d like to sponsor the Red Gate Byte Night team, our JustGiving page can be found here.   Chris, before the outside bit actually happened. More photos from Byte Night Cambridge 2012 can be found here.
