Search Results

Search found 5203 results on 209 pages for 'rules of thumb'.


  • How to create a Windows 7 installation USB from Linux or Mac?

    - by Shane
    I have a Windows 7 installation DVD that came with a computer with no optical drive. I have an empty USB thumb drive. I have access to two computers with optical drives, one running Linux and the other running Mac OS X. Notably, I do not have access to any Windows computer at this time. With the tools that I have, how can I create a thumb drive that I can boot with and install Windows 7? Do I have to look out for anything when making the ISO from the DVD (DRM or anything)? After the ISO is made, will UNetbootin work? How about dd?
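
    For the dd route, the core operation is just a raw byte-for-byte copy of the ISO onto the stick. A minimal Python sketch of that step (the ISO filename and device node are assumed, and note the caveat: Windows 7 ISOs are not hybrid images, so a raw copy alone is not guaranteed to be BIOS-bootable):

        # Raw image copy, equivalent to `dd if=win7.iso of=/dev/sdX bs=4M`.
        # /dev/sdX is hypothetical -- verify the device with lsblk first;
        # writing to the wrong node destroys its contents. Needs root.
        import shutil

        ISO_PATH = "win7.iso"   # assumed name of the ISO ripped from the DVD
        DEVICE = "/dev/sdX"     # assumed device node of the thumb drive

        with open(ISO_PATH, "rb") as src, open(DEVICE, "wb") as dst:
            shutil.copyfileobj(src, dst, length=4 * 1024 * 1024)  # 4 MiB blocks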


  • emacs keybindings

    - by Max
    I read a lot about vim and emacs and how they make you much more productive, but I didn't know which one to pick. Finally, when I decided to teach myself Common Lisp, the decision was straightforward: everybody says that there's no better editor for Common Lisp than emacs + SLIME. So I started with the emacs tutorial and immediately ran into something that seems very unproductive to me. I'm talking about the key bindings for cursor movement: forward/backward: Ctrl+f, Ctrl+b; up/down: Ctrl+p, Ctrl+n. I find these bindings very strange. I assume that fingers should be on their home rows (am I wrong here?), so to move the cursor forward or backward I should use my left index finger, and for up and down my right pinky and right index fingers. When working with any of the Windows IDEs and text editors, to navigate text I usually place my right hand so that my thumb is on the right Ctrl and my index, middle and ring fingers are on the cursor keys. From this position it is very easy and comfortable to move the cursor: I can do one-character moves with my three right fingers, or I can press Ctrl with my right thumb and do word-moves instead. I can also press Shift with my left pinky and do single-character or word selections. It is also a very comfortable position for reaching the PgUp, PgDn, Home, End, Delete and Backspace keys with my right hand, so I have even more navigation and selection possibilities. I understand that the decision not to use cursor keys was made so that emacs could be used on remote terminal sessions where those keys are not supported, but I still find the choice of keys very unfortunate. Why not use j, k, i, l instead? That way I could use my right hand without much finger stretching. So how is emacs more productive? What am I doing wrong?


  • MongoDB and datasets that don't fit in RAM no matter how hard you shove

    - by sysadmin1138
    This is very system-dependent, but chances are near certain we'll scale past some arbitrary cliff and get into Real Trouble. I'm curious what kind of rules of thumb exist for a good RAM to disk-space ratio. We're planning our next round of systems, and need to make some choices regarding RAM, SSDs, and how much of each the new nodes will get. But now for some performance details! During the normal workflow of a single project-run, MongoDB is hit with a very high percentage of writes (70-80%). Once the second stage of the processing pipeline hits, it's extremely read-heavy, as it needs to deduplicate records identified in the first half of processing. This is the workflow that "keep your working set in RAM" is made for, and we're designing around that assumption. The entire dataset is continually hit with random queries from end-user-derived sources; though the frequency is irregular, the size is usually pretty small (groups of 10 documents). Since this is user-facing, the replies need to be under the "bored-now" threshold of 3 seconds. This access pattern is much less likely to be in cache, so it is very likely to incur disk hits. A secondary processing workflow is high read of previous processing runs that may be days, weeks, or even months old; it is run infrequently but still needs to be zippy. Up to 100% of the documents in the previous processing run will be accessed. No amount of cache-warming can help with this, I suspect. Finished document sizes vary widely, but the median size is about 8K. The high-read portion of the normal project processing strongly suggests the use of replicas to help distribute the read traffic. I have read elsewhere that a 1:10 RAM-GB to HD-GB ratio is a good rule of thumb for slow disks. As we are seriously considering using much faster SSDs, I'd like to know if there is a similar rule of thumb for fast disks. I know we're using Mongo in a way where cache-everything really isn't going to fly, which is why I'm looking at ways to engineer a system that can survive such usage. The entire dataset will likely be most of a TB within half a year and keep growing.
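
    To make the cited rule of thumb concrete, here is the arithmetic as a Python sketch; the 1:10 figure is the one quoted above, while the SSD ratio is a pure assumption for illustration, not an established number:

        # Back-of-the-envelope RAM sizing from a RAM:disk rule of thumb.
        DATASET_GB = 1024            # "most of a TB within half a year"
        HDD_RATIO = 1 / 10           # the cited 1:10 RAM-GB to HD-GB rule for slow disks
        SSD_RATIO = 1 / 30           # assumed: faster random reads tolerate a smaller cache

        print(f"HDD rule:  {DATASET_GB * HDD_RATIO:.0f} GB RAM")   # ~102 GB
        print(f"SSD guess: {DATASET_GB * SSD_RATIO:.0f} GB RAM")   # ~34 GB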


  • High load average threshold in Linux

    - by user2481010
    One of my friends said that his server's load average sometimes goes above 500-1000, which to me is a strange value, because I have never seen a load average of more than 10. I asked him to give me some snapshots of top and memory usage, and he gave the following details:

    TOP USAGE
    top - 06:06:03 up 117 days, 23:02, 2 users, load average: 147.37, 44.57, 15.95
    Tasks: 116 total, 2 running, 113 sleeping, 0 stopped, 1 zombie
    Cpu(s): 16.6%us, 6.9%sy, 0.0%ni, 9.2%id, 66.5%wa, 0.0%hi, 0.8%si, 0.0%st
    Mem: 8161648k total, 7779528k used, 382120k free, 3296k buffers
    Swap: 5242872k total, 1293072k used, 3949800k free, 168660k cached

    Free
    $ free -gt
                 total    used    free   shared   buffers   cached
    Mem:             7       6       1        0         0        4
    -/+ buffers/cache:       1       5
    Swap:            4       0       4
    Total:          12       6       6

    Total CPUs
    $ nproc
    8

    My question: is a load average of more than 100 possible on an 8-core, 12 GB RAM server? I have read many tutorials and articles on load average, and they say the rule of thumb is "number of cores = max load"; by that rule the maximum load average here would be 16, so how is his server running with a load of 147.37? And he said 147.37 is the low end; it sometimes goes above 500.
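
    For what it's worth, yes, it is possible: on Linux the load average counts not only runnable processes but also processes blocked in uninterruptible I/O wait, so a box drowning in disk I/O (note the 66.5% wa above) can report a load far above its core count. A small Python sketch for checking the ratio on any Linux machine:

        # Compare the 1-minute load average against the core count.
        import os

        with open("/proc/loadavg") as f:
            load1 = float(f.read().split()[0])   # 1-minute average

        cores = os.cpu_count()
        print(f"load {load1:.2f} on {cores} cores -> "
              f"{load1 / cores:.1f}x the 'cores = max load' rule of thumb")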


  • Wyse Simple Imager. Unable to Create Product Directory

    - by Steve
    I am trying to submit a post on www.technicalhelp.de, but I receive an error: Invalid Session. Please resubmit the form. This happens if I delete temporary internet files and log out and back in, if I use a different browser, and if I use a proxy browser. Perhaps someone on this forum can help. I am trying to push a Wyse device image to a USB thumb drive. The image is on a remote server, and the thumb drive is connected to my desktop PC. I am using Wyse Simple Imager to do this. When I select the following: Product: V90 Image Version: 5.010627.512 Image File: \\servername\folder\OLD_Rapport\V90-withusb\9V90.i2d then almost instantly, without attempting any action, I receive the message: WyseImager Unable to Create Product Directory. Add Image Failed I have completely formatted the USB drive with FAT32. It is new out of the box, and I can create folders on it. How do I fix this?


  • Dead USB flash drive

    - by Unsliced
    So a friend has come to me with a problem. They have a dead USB thumb drive which no longer responds when plugged into a machine. I've tried it in a Mac and it doesn't respond at all; a Windows XP machine at least sees that something is plugged in, but can't show it in Explorer and just reports that whatever is plugged in has malfunctioned. There is obviously current, because the activity light on the drive illuminates. I'm looking for suggestions, please. I have access to Mac or Windows hardware and am happy to experiment (and even to pay if the solution works!). It's a bit late to recommend regular backups, but in the absence of those, what's the next best forensic advice? Edit: I should stress that, if possible, we're trying to rescue the data; after all, thumb drives are basically disposable and hardly worth the bother if there's no emotional or functional reason for wanting to rescue one!
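
    If the drive ever enumerates again, the usual first forensic step is to take a raw image and experiment only on the copy. A rough Python sketch of a skip-on-error dump in the spirit of ddrescue (device and file names are assumed; a real recovery should use ddrescue itself, which retries and logs):

        # Dump a flaky device block by block, zero-filling unreadable blocks.
        import os

        DEVICE, IMAGE, BLOCK = "/dev/sdX", "rescue.img", 512 * 1024  # assumed names

        fd = os.open(DEVICE, os.O_RDONLY)
        size = os.lseek(fd, 0, os.SEEK_END)
        with open(IMAGE, "wb") as out:
            for offset in range(0, size, BLOCK):
                try:
                    os.lseek(fd, offset, os.SEEK_SET)
                    out.write(os.read(fd, BLOCK))
                except OSError:                      # unreadable region: keep going
                    out.write(b"\x00" * min(BLOCK, size - offset))
        os.close(fd)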


  • Running Ubuntu off a USB drive?

    - by Solignis
    I was wondering if a USB 2.0 thumb drive has enough bandwidth to act as the primary system drive in an Ubuntu Linux server, more specifically a SAN server. I am running an iSCSI target, ZFS and nfs-kernel-server, BIND9 (slave), and OpenLDAP (slave). I was thinking of resorting to a thumb drive because my new motherboard only has 4 SATA ports and I have 5 disks: 4 (ZFS pool) + 1 (system). Unless I get an expansion card, there is no way to get more SATA ports. This "server" leans more towards a home server; I use it in my lab with my VMware server. It provides storage, or at least it did until it died. Would it still be better to go with the SATA hard disk?


  • Interfaces: the benefits of using them

    - by Zapadlo
    First of all, my ubiquitous language is PHP, and I'm thinking about learning Java. So let me split my question into two closely related parts. Here goes the first part. Say I have a domain-model class. It has some getters, setters, some query methods, etc. And one day I want to be able to compare instances. So it looks like:

    class MyEntity extends AbstractEntity
    {
        public function getId()
        {
            // get id property
        }

        public function setId($id)
        {
            // set id property
        }

        // plenty of other methods that set or retrieve data

        public function compareTo(MyEntity $anotherEntity)
        {
            // some compare logic
        }
    }

    If this were Java, I should have implemented a Comparable interface. But why? Polymorphism? Readability? Or something else? And since it is PHP -- should I create a Comparable interface for myself? So here goes the second part. My colleague told me that it is a rule of thumb in Java to create an interface for every behavioral aspect of the class. For example, if I wanted to present this object as a string, I should state this behaviour with something like implements Stringable, where in the PHP case Stringable would look like:

    interface Stringable
    {
        public function __toString();
    }

    Is that really a rule of thumb? What benefits are gained with this approach? And is it worth it in PHP? And in Java?
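
    For what it's worth, PHP has no built-in Comparable, but rolling your own makes the polymorphism payoff visible: sorting code can depend on the interface rather than on any concrete entity class. A minimal sketch (names and the AbstractEntity base are taken from the question; requires PHP 7+ for the spaceship operator and 7.4+ for arrow functions):

        <?php
        interface Comparable
        {
            public function compareTo($other): int;  // negative / zero / positive
        }

        class MyEntity extends AbstractEntity implements Comparable
        {
            public function compareTo($other): int
            {
                return $this->getId() <=> $other->getId();  // spaceship, PHP 7+
            }
        }

        // Works for *any* Comparable, not just MyEntity -- that is the benefit.
        function sortByComparison(array $items): array
        {
            usort($items, fn(Comparable $a, Comparable $b) => $a->compareTo($b));
            return $items;
        }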


  • Progress bar in a Flash MP3 Player

    - by Deryck
    Hi, I have coded a simple XML-driven MP3 player. I have used the Sound and SoundChannel objects and methods, but I can't find a way to make a progress bar. I don't need loading progress; I need a song progress status bar. Can anybody help me? Thanks. UPDATE: Here is the code.

    var musicReq:URLRequest;
    var thumbReq:URLRequest;
    var music:Sound = new Sound();
    var sndC:SoundChannel;
    var currentSnd:Sound = music;
    var position:Number;
    var currentIndex:Number = 0;
    var songPaused:Boolean;
    var songStopped:Boolean;
    var lineClr:uint;
    var changeClr:Boolean;
    var xml:XML;
    var songList:XMLList;

    var loader:URLLoader = new URLLoader();
    loader.addEventListener(Event.COMPLETE, Loaded);
    loader.load(new URLRequest("musiclist.xml"));

    var thumbHd:MovieClip = new MovieClip();
    thumbHd.x = 50;
    thumbHd.y = 70;
    addChild(thumbHd);

    function Loaded(e:Event):void {
        xml = new XML(e.target.data);
        songList = xml.song;
        musicReq = new URLRequest(songList[0].url);
        thumbReq = new URLRequest(songList[0].thumb);
        music.load(musicReq);
        sndC = music.play();
        title_txt.text = songList[0].title + " - " + songList[0].artist;
        loadThumb();
        sndC.addEventListener(Event.SOUND_COMPLETE, nextSong);
    }

    function loadThumb():void {
        var thumbLoader:Loader = new Loader();
        thumbReq = new URLRequest(songList[currentIndex].thumb);
        thumbLoader.load(thumbReq);
        thumbLoader.contentLoaderInfo.addEventListener(Event.COMPLETE, thumbLoaded);
    }

    function thumbLoaded(e:Event):void {
        var thumb:Bitmap = (Bitmap)(e.target.content);
        var holder:MovieClip = thumbHd;
        holder.addChild(thumb);
    }

    prevBtn.addEventListener(MouseEvent.CLICK, prevSong);
    nextBtn.addEventListener(MouseEvent.CLICK, nextSong);
    playBtn.addEventListener(MouseEvent.CLICK, playSong);

    function prevSong(e:Event):void {
        if (currentIndex > 0) {
            currentIndex--;
        } else {
            currentIndex = songList.length() - 1;
        }
        var prevReq:URLRequest = new URLRequest(songList[currentIndex].url);
        var prevPlay:Sound = new Sound(prevReq);
        sndC.stop();
        title_txt.text = songList[currentIndex].title + " - " + songList[currentIndex].artist;
        sndC = prevPlay.play();
        currentSnd = prevPlay;
        songPaused = false;
        loadThumb();
        sndC.addEventListener(Event.SOUND_COMPLETE, nextSong);
    }

    function nextSong(e:Event):void {
        if (currentIndex < songList.length() - 1) {
            currentIndex++;
        } else {
            currentIndex = 0;
        }
        // ... (the post is truncated here; the body presumably mirrors prevSong:
        // load songList[currentIndex].url, stop sndC, play, update title_txt,
        // loadThumb, re-attach the SOUND_COMPLETE listener)
    }

    And here is the code for the length and position. It's inside a MovieClip; that's why I use an absolute path to find the Sound object.

    this.addEventListener(Event.ENTER_FRAME, moveSpeaker);
    var initWidth:Number = this.SpkCone.width;
    var initHeight:Number = this.SpkCone.height;
    var rootObj:Object = root;

    function moveSpeaker(eventArgs:Event) {
        var average:Number = ((rootObj.audioPlayer_mc.sndC.leftPeak + rootObj.audioPlayer_mc.sndC.rightPeak) / 2) * 10;
        // trace(average);
        // trace(initWidth + ":" + initHeight);
        trace(rootObj.audioPlayer_mc.sndC.position + "/" + rootObj.audioPlayer_mc.music.length);
        this.SpkCone.width = initWidth + average;
        this.SpkCone.height = initHeight + average;
    }
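
    Since sndC.position and music.length are already being traced above, a progress bar is mostly a matter of scaling a clip by their ratio each frame. A minimal ActionScript sketch (progressBar is an assumed MovieClip drawn at full width on the stage):

        // Update the bar every frame from playback position / total length.
        addEventListener(Event.ENTER_FRAME, updateProgressBar);

        function updateProgressBar(e:Event):void {
            if (music.length > 0) {
                // note: Sound.length grows while the MP3 is still streaming in,
                // so the ratio is only an estimate until loading finishes
                progressBar.scaleX = sndC.position / music.length;
            }
        }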


  • jQuery: querying an XML document with a where clause

    - by user578406
    I have an XML document that I want to search for a specific date, getting the information for just that date. My XML looks like this:

    <month id="01">
      <day id="1">
        <eitem type="dayinfo">
          <caption>
            <text lang="cy">f. 3 r.</text>
            <text lang="en">f. 3 r.</text>
          </caption>
          <ref href="link" id="3"/>
          <thumb href="link" id="3"/>
        </eitem>
      </day>
      <day id="7">
        <eitem type="dayinfo">
          <caption>
            <text lang="cy">f. 5 v.</text>
            <text lang="en">f. 5 v.</text>
          </caption>
          <ref href="link" id="4"/>
          <thumb href="link" id="4"/>
        </eitem>
      </day>
      <day id="28">
        <eitem type="dayinfo2">
          <caption id="1">
            <text lang="cy">test</text>
            <text lang="en">test2</text>
          </caption>
          <ref href="link" id="1"/>
          <thumb href="link" id="1"/>
        </eitem>
      </day>
      <day id="28">
        <eitem type="dayinfo">
          <caption>
            <text lang="cy">f. 14 v.</text>
            <text lang="en">f. 14 v.</text>
          </caption>
          <ref href="link" id="20"/>
          <thumb href="link" id="20"/>
        </eitem>
      </day>
    </month>

    My jQuery looks like this:

    $(xml).find('month[id=01]').each(function() {
        $(xml).find("day").each(function() {
            var day = $(this).attr('id');
            alert(day);
        });
    });

    In the XML example above I've only shown one month; there are many more. In my jQuery I've tried to do 'where month = 1' and get all the day info for that month, but that jQuery brings back days for every month. How do I do a where clause with jQuery/JavaScript on an XML document? Thanks.
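
    One thing worth noting: the inner loop searches the whole document ($(xml).find("day")) rather than the month that was just matched, which is exactly why every month's days come back. Scoping the second find to this expresses the where clause (a sketch):

        // Only the days belonging to month 01 are visited here.
        $(xml).find('month[id="01"]').each(function () {
            $(this).find('day').each(function () {
                alert($(this).attr('id'));
            });
        });

        // Equivalent single-selector form, scoped to the month element:
        $(xml).find('month[id="01"] > day').each(function () {
            alert($(this).attr('id'));
        });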


  • Grabbing all <a> tags in a specific div and displaying them

    - by Taylor Swyter
    So I've got a small problem with my portfolio site (you can see it here). When you click on a portfolio piece, a top section opens up to reveal the details (title, year, role, description) as well as the photos. I've been able to get each project to replace all the text data, but I can't seem to get the images to load into the thumbnails. I have been able to get the last image I'm looking for out of all the images on the site, but not display each photo for each project. Here's the HTML I'm working with:

    <section id="details">
      <div class="pagewrapper">
        <section id="main-img">
          <article id="big-img">
            <img src="" alt="big-img" />
          </article>
          <article class="small-img-container">
            <a href="#"><img src="#" alt="smallimg" class="small-img" /></a>
          </article>
        </section>
        <section id="description">
          <h3></h3>
          <h4></h4>
          <h5></h5>
          <p></p>
        </section>
      </div>
      <div class="clear"></div>
    </section>
    <section id="portfolio">
      <div class="pagewrapper">
        <h2 class="sectionTitle">Portfolio</h2>
        <div class="thumb">
          <a class="small" href="#" title="David Lockwood Music" data-year="2010" data-role="Sole Wordpress Developer" data-description="David Lockwood is a musician and an educator based in New Hampshire who came to me needing a website for his musical career. I fully developed his site using Wordpress as a CMS, creating a custom template based on the design by Jeremiah Louf. Jeremiah and I worked together on the website's UX design.">
            <img src="images_original/davidcover.png" alt="thumb" />
            <div class="hide">
              <a href="images/davidlockwood/homepage.png"></a>
              <a href="images/davidlockwood/blog.png"></a>
              <a href="images/davidlockwood/shows.png"></a>
              <a href="images/davidlockwood/bio.png"></a>
              <a href="images/davidlockwood/photos.png"></a>
            </div>
            <h3>David Lockwood Music</h3>
            <div class="clear"></div>
          </a>
        </div><!--thumb-->

    and here's the jQuery:

    $(document).ready(function(){
        var proj = {};
        $('.thumb a').click(function() {
            $('#details').slideDown(1000);
            $('.hide a').each(function() {
                proj.img = $(this).attr("href");
                $('.small-img-container img').attr('src', proj.img);
            });
            alert("the image is " + proj.img); // is it getting the image URLs?
            proj.title = $(this).attr("title");
            proj.year = $(this).attr("data-year");
            proj.role = $(this).attr("data-role");
            proj.description = $(this).attr("data-description");
            $('#description h3').text(proj.title);
            $('#description h4').text(proj.year);
            $('#description h5').text(proj.role);
            $('#description p').text(proj.description);
        });
    });

    Anyone have any idea how I grab just the images for the specific project, display them all as thumbnails and then make those thumbnails clickable to see the bigger image? Thanks!
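
    Two things stand out: $('.hide a') selects the hidden links of every project, and the loop keeps overwriting the src of the single existing img, so only the last image survives. Scoping to the clicked anchor and appending one thumbnail per link is one way out (a sketch, using only the class names from the markup above):

        $('.thumb a.small').click(function (e) {
            e.preventDefault();
            var $container = $('.small-img-container').empty();

            // Only this project's hidden links, not every .hide on the page.
            $(this).find('.hide a').each(function () {
                $('<img class="small-img" alt="smallimg">')
                    .attr('src', $(this).attr('href'))
                    .click(function () {                  // thumbnail -> big image
                        $('#big-img img').attr('src', $(this).attr('src'));
                    })
                    .appendTo($container);
            });
        });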


  • Django 1.2 + South 0.7 + django-annoying's AutoOneToOneField leads to TypeError: 'LegacyConnection'

    - by konrad
    I'm using Django 1.2 trunk with South 0.7 and an AutoOneToOneField copied from django-annoying. South complained that the field does not have rules defined and the new version of South no longer has an automatic field type parser. So I read the South documentation and wrote the following definition (basically an exact copy of the OneToOneField rules): rules = [ ( (AutoOneToOneField), [], { "to": ["rel.to", {}], "to_field": ["rel.field_name", {"default_attr": "rel.to._meta.pk.name"}], "related_name": ["rel.related_name", {"default": None}], "db_index": ["db_index", {"default": True}], }, ) ] from south.modelsinspector import add_introspection_rules add_introspection_rules(rules, ["^myapp"]) Now South raises the following error when I do a schemamigration. Traceback (most recent call last): File "manage.py", line 11, in <module> execute_manager(settings) File "django/core/management/__init__.py", line 438, in execute_manager utility.execute() File "django/core/management/__init__.py", line 379, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "django/core/management/base.py", line 196, in run_from_argv self.execute(*args, **options.__dict__) File "django/core/management/base.py", line 223, in execute output = self.handle(*args, **options) File "South-0.7-py2.6.egg/south/management/commands/schemamigration.py", line 92, in handle (k, v) for k, v in freezer.freeze_apps([migrations.app_label()]).items() File "South-0.7-py2.6.egg/south/creator/freezer.py", line 33, in freeze_apps model_defs[model_key(model)] = prep_for_freeze(model) File "South-0.7-py2.6.egg/south/creator/freezer.py", line 65, in prep_for_freeze fields = modelsinspector.get_model_fields(model, m2m=True) File "South-0.7-py2.6.egg/south/modelsinspector.py", line 322, in get_model_fields args, kwargs = introspector(field) File "South-0.7-py2.6.egg/south/modelsinspector.py", line 271, in introspector arg_defs, kwarg_defs = matching_details(field) File "South-0.7-py2.6.egg/south/modelsinspector.py", line 187, in matching_details if any([isinstance(field, x) for x in classes]): TypeError: 'LegacyConnection' object is not iterable Is this related to a recent change in Django 1.2 trunk? How do I fix this? I use this field as follows: class Bar(models.Model): foo = AutoOneToOneField("foo.Foo", primary_key=True, related_name="bar") For reference the field code from django-tagging: class AutoSingleRelatedObjectDescriptor(SingleRelatedObjectDescriptor): def __get__(self, instance, instance_type=None): try: return super(AutoSingleRelatedObjectDescriptor, self).__get__(instance, instance_type) except self.related.model.DoesNotExist: obj = self.related.model(**{self.related.field.name: instance}) obj.save() return obj class AutoOneToOneField(OneToOneField): def contribute_to_related_class(self, cls, related): setattr(cls, related.get_accessor_name(), AutoSingleRelatedObjectDescriptor(related))
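
    One thing worth checking before digging into LegacyConnection itself: in the rules triple, (AutoOneToOneField) is just the bare class, since parentheses without a comma don't create a tuple, so South ends up trying to iterate over the field class in matching_details(). Whether or not that explains this exact traceback, South expects an iterable of classes there; a sketch of the corrected shape:

        rules = [
            (
                (AutoOneToOneField,),   # trailing comma: a real 1-tuple, not a bare class
                [],
                {
                    "to": ["rel.to", {}],
                    "to_field": ["rel.field_name", {"default_attr": "rel.to._meta.pk.name"}],
                    "related_name": ["rel.related_name", {"default": None}],
                    "db_index": ["db_index", {"default": True}],
                },
            )
        ]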


  • Is F# a good language for card game AI?

    - by Anthony Brien
    I'm writing a Mahjong game in C# (the Chinese traditional game, not the solitaire kind). While writing the code for the bot player's AI, I'm wondering if a functional language like F# would be a more suitable language than what I currently use, which is C# with a lot of LINQ. I don't know much about F#, which is why I ask here. To illustrate what I'm trying to solve, here's a quick summary of Mahjong: Mahjong plays a bit like Gin Rummy. You have 13 tiles in your hand, and each turn you draw a tile and discard another one, trying to improve your hand towards a winning Mahjong hand, which consists of 4 sets and a pair. Sets can be a 3 of a kind (pungs), a 4 of a kind (kongs) or a sequence of 3 consecutive tiles (chows). You can also steal another player's discard if it can complete one of your sets. The code I had to write to detect whether the bot can declare a 3-consecutive-tile set (chow) is pretty tedious: I have to find all the unique tiles in the hand, and then start checking if there's a sequence of 3 tiles in the hand containing each one. Detecting whether the bot can go Mahjong is even more complicated, since it's a combination of detecting whether there are 4 sets and a pair in its hand. And that's just a standard Mahjong hand; there are also numerous "special" hands that break those rules but are still a Mahjong hand. For example, "13 unique wonders" consists of 13 specific tiles, "Jade Empire" consists of only green-colored tiles, etc. In a perfect world, I'd love to be able to just state the 'rules' of Mahjong and have the language match a set of 13 tiles against those rules to retrieve which rules it fulfills, for example checking if it's a Mahjong hand or if it includes a 4 of a kind. Is this something F#'s pattern matching feature can help solve?
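
    For a sense of scale, the tedious chow check compresses to a few lines in any language once tiles are plain (suit, rank) pairs; an illustrative sketch in Python (tile representation assumed, not from the question):

        from collections import Counter

        def chows(hand):
            """Yield the (suit, rank) start of every 3-tile run in a hand
            of (suit, rank) tuples."""
            tiles = Counter(hand)
            for suit, rank in sorted(tiles):
                if tiles[(suit, rank + 1)] and tiles[(suit, rank + 2)]:
                    yield (suit, rank)

        print(list(chows([("bamboo", 2), ("bamboo", 3), ("bamboo", 4), ("dot", 7)])))
        # [('bamboo', 2)]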


  • Inner join and outer join options in Entity Framework 4.0

    - by bigb
    I am using EF 4.0 and I need to implement a query with one inner join and N outer joins. I started implementing this with different approaches but got into trouble at some point. Here are two examples of how I started, using ObjectQuery<T> and LINQ to Entities.

    1) Using ObjectQuery<T> I implemented flexible outer joins, but I don't know how to perform an inner join with the Rules entity in that case (by default, Include("Rules") does an outer join, but I need an inner join by Id).

    public static IEnumerable<Race> GetRace(List<string> includes, DateTime date)
    {
        IRepository repository = new Repository(new BEntities());
        ObjectQuery<Race> result = (ObjectQuery<Race>)repository.AsQueryable<Race>();

        // perform outer joins with related entities
        if (includes != null)
            foreach (string include in includes)
                result = result.Include(include);

        // here I need an inner join instead of the default outer join
        result = result.Include("Rules");
        return result.ToList();
    }

    2) Using LINQ to Entities I need the same kind of outer joins (a list of entities to include) and also a correct inner join with the Rules entity.

    public static IEnumerable<Race> GetRace2(List<string> includes, DateTime date)
    {
        IRepository repository = new Repository(new BEntities());
        IEnumerable<Race> result = from o in repository.AsQueryable<Race>()
                                   from b in o.RaceBetRules
                                   select new { o };
        // I need here:
        // 1. to perform inner joins with related entities the same way as with ObjectQuery above
        // Here I get a List<AnonymousType> which I can't cast to IEnumerable<Race>.
        // When I tried a cast like (IEnumerable<Race>)result.ToList(); I got the error:
        // Unable to cast object of type
        // 'System.Collections.Generic.List`1[<>f__AnonymousType0`1[BetsTipster.Entity.Tip.Types.Race]]'
        // to type
        // 'System.Collections.Generic.IEnumerable`1[BetsTipster.Entity.Tip.Types.Race]'.
        return result.ToList();
    }

    Maybe someone has some ideas about this.
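
    For the inner-join half, one option in LINQ to Entities is an explicit join that projects back to Race, so the result stays IEnumerable<Race> rather than an anonymous type (a sketch; the Rule entity and its RaceId key are assumed names, not from the original model):

        // Inner join Races to Rules by key, keeping the element type as Race.
        var result = (from race in repository.AsQueryable<Race>()
                      join rule in repository.AsQueryable<Rule>()
                          on race.Id equals rule.RaceId
                      select race).Distinct();

        return result.ToList();

    Separately, the cast error in the second snippet goes away if the query selects the entity itself (select o) instead of wrapping it in an anonymous type (select new { o }).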


  • What is the base open source java package to filter/match URLs?

    - by Boaz
    Hi, I have a high-performance application which deals with URLs. For every URL it needs to retrieve the appropriate settings from a predefined pool. Every settings object is associated with a URL pattern which indicates which URLs should use these settings. The matching rules are as follows: the "google.com" pattern should match all URLs pointing to the google domain (thus, maps.google.com and www.google.com/match are matched). "*.google.com" should match all URLs pointing to a subdomain of google.com (thus, maps.google.com matches, but google.com and www.google.com don't). "maps.google.com" should match all URLs pointing to this specific subdomain. Apart from the above rules, every match rule can contain a path, which means that the path part of the URL should start with the match rule's path. So: "*.google.com/maps" matches "maps.google.com/maps" but not "maps.google.com/advanced". As you can see, the rules above overlap. In case two rules match the same URL, the most specific should apply; the list above is ranked from least specific to most specific. This seems to be such a standard problem that I was hoping to use a ready-made library rather than program it myself. Google reveals a couple of options, but without a clear way to choose between them. What would you recommend as a good library for this task? Thanks, Boaz
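
    For reference, the stated rules are small enough to sketch directly (in Python here, only to pin down the semantics). Two caveats: www is folded into the apex so the question's www.google.com examples hold, and the dot-count heuristic mis-handles multi-label suffixes like co.uk, which is exactly where a library backed by the public-suffix list earns its keep:

        from urllib.parse import urlsplit

        def matches(pattern: str, url: str) -> bool:
            pat_host, slash, pat_path = pattern.partition("/")
            parts = urlsplit(url if "//" in url else "//" + url)
            host, path = parts.hostname or "", parts.path
            if host.startswith("www."):      # treat www.google.com like google.com,
                host = host[4:]              # matching the examples in the question
            if pat_host.startswith("*."):                 # subdomains only
                ok = host.endswith("." + pat_host[2:])
            elif pat_host.count(".") == 1:                # bare domain: apex + subdomains
                ok = host == pat_host or host.endswith("." + pat_host)
            else:                                         # one specific subdomain
                ok = host == pat_host
            return ok and (not slash or path.startswith("/" + pat_path))

        # "most specific wins" can then be, e.g., trying patterns longest-first:
        assert matches("google.com", "http://maps.google.com/match")
        assert not matches("*.google.com", "http://www.google.com/")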


  • GNU Makefile: multiple outputs from single rule + preventing intermediate files from being deleted

    - by makesaurus
    This is sort of a continuation of the question from link text. The problem is that there is a rule generating multiple outputs from a single input, and the command is time-consuming, so we would prefer to avoid recomputation. Now there is an additional twist: we want to keep the files from being deleted as intermediate files, and the rules involve wildcards to allow for parameters. The solution suggested was that we set up the following rules:

    file-a.out: program file.in
    	./program file.in file-a.out file-b.out file-c.out

    file-b.out: file-a.out
    	@

    file-c.out: file-b.out
    	@

    Then, calling make file-c.out creates all three files, and we avoid issues with running make in parallel with the -j switch. All fine so far. The problem is the following. Because the above solution sets up a chain in the DAG, make considers it differently: the files file-a.out and file-b.out are treated as intermediate files, and by default they get deleted as unnecessary as soon as file-c.out is ready. A way of avoiding that was mentioned somewhere here, and consists of adding file-a.out and file-b.out as dependencies of a target .SECONDARY, which keeps them from being deleted. Unfortunately, this does not solve my case, because my rules use wildcard patterns; specifically, my rules look more like this:

    file-a-%.out: program file.in
    	./program $* file.in file-a-$*.out file-b-$*.out file-c-$*.out

    file-b-%.out: file-a-%.out
    	@

    file-c-%.out: file-b-%.out
    	@

    so that one can pass a parameter that gets included in the file name, for example by running make file-c-12.out. The solution the make documentation suggests is to add these implicit rules to the list of dependencies of .PRECIOUS, thus keeping the files from being deleted. The solution with .PRECIOUS works, but it also prevents these files from being deleted when a rule fails and the files are incomplete. Is there any other way to make this work?
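
    One documented alternative worth trying: per the GNU make manual, .SECONDARY with no prerequisites treats every target as secondary, so pattern-rule outputs are kept without .PRECIOUS's keep-partial-files-on-failure side effect. A sketch against the rules above (recipe lines must start with a tab):

        # A bare .SECONDARY keeps all targets from being deleted as intermediates,
        # while failed or interrupted outputs are still cleaned up as usual.
        .SECONDARY:

        file-a-%.out: program file.in
        	./program $* file.in file-a-$*.out file-b-$*.out file-c-$*.out

        file-b-%.out: file-a-%.out
        	@

        file-c-%.out: file-b-%.out
        	@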


  • Preserving Permalinks

    - by Daniel Moth
    One of the things that gets me on a rant is websites that break permalinks. If you have posted something somewhere and there is a public URL pointing to it, that URL should never ever return a 404. You are breaking all websites that ever linked to you and you are breaking all search engine links to your content (that others will try and follow). It is a pet peeve of mine. So when I had to move my blog, obviously I would preserve the root URL (www.danielmoth.com/Blog/), but I also wanted to preserve every URL my blog has generated over the years. To be clear, our focus here is on the URL formatting, not the content migration which I'll talk about in my next post. In this post, I'll describe my solution first and then what it solves. 1. The IIS7 Rewrite Module and web.config There are a few ways you can map an old URL to a new one (so when requests to the old URL come in, they get redirected to the new one). The new blog engine I use (dasBlog) has built-in functionality to do that (Scott refers to it here). Instead, the way I chose to address the issue was to use the IIS7 rewrite module. The IIS7 rewrite module allows redirecting URLs based on pattern matching, regular expressions and, of course, hardcoded full URLs for things that don't fall into any pattern. You can configure it visually from IIS Manager using a handy dialog that allows testing patterns against input URLs. Here is what mine looked like after configuring a few rules: To learn more about this technology check out this video, the reference page and this overview blog post; all 3 pages have a collection of related resources at the bottom worth checking out too. All the visual configuration ends up in a web.config file at the root folder of your website. If you are on a shared hosting service, probably the only way you can use the Rewrite Module is by directly editing the web.config file. Next, I'll describe the URLs I had to map and how that manifested itself in the web.config file. What I did was create the rules locally using the GUI, and then took the generated web.config file and uploaded it to my live site. You can view my web.config here. 2. Monthly Archives Observe the difference between the way the two blog engines generate this type of URL Blogger: /Blog/2004_07_01_mothblog_archive.html dasBlog: /Blog/default,month,2004-07.aspx In my web.config file, the rule that deals with this is the one named "monthlyarchive_redirect". 3. Categories Observe the difference between the way the two blog engines generate this type of URL Blogger: /Blog/labels/Personal.html dasBlog: /Blog/CategoryView,category,Personal.aspx In my web.config file the rule that deals with this is the one named "category_redirect". 4. Posts Observe the difference between the way the two blog engines generate this type of URL Blogger: /Blog/2004/07/hello-world.html dasBlog: /Blog/Hello-World.aspx In my web.config file the rule that deals with this is the one named "post_redirect". Note: The decision is taken to use dasBlog URLs that do not include the date info (see the description of my Appearance settings). If we included the date info then it would have to include the day part, which blogger did not generate. This makes it impossible to redirect correctly and to have a single permalink for blog posts moving forward. An implication of this decision, is that no two blog posts can have the same title. The tool I will describe in my next post (inelegantly) deals with duplicates, but not with triplicates or higher. 5. 
Unhandled by a generic rule. Unfortunately, the two blog engines use different rules for generating URLs for blog posts. Most of the time the conversion is as simple as the example of the previous section, where a post titled "Hello World" generates a URL with the words separated by a hyphen. Sometimes that is not the case, for example: /Blog/2006/05/medc-wrap-up.html /Blog/MEDC-Wrapup.aspx or /Blog/2005/01/best-of-moth-2004.html /Blog/Best-Of-The-Moth-2004.aspx or /Blog/2004/11/more-windows-mobile-2005-details.html /Blog/More-Windows-Mobile-2005-Details-Emerge.aspx In short, blogger does not add words to the title beyond ~39 characters, it drops some words from the title generation (e.g. a, an, on, the), and it preserves hyphens that appear in the title. For this reason, we need to detect these and explicitly list them for redirects (no regular expression can help here because the full set of rules is not listed anywhere). In my web.config file the rule that deals with this is the one named "Redirect rule1 for FullRedirects" combined with the rewriteMap named "StaticRedirects". Note: The tool I describe in my next post will detect all the URLs that need to be explicitly redirected and will list them in a file ready for you to copy to your web.config rewriteMap. 6. C# code doing the same as the web.config I wrote some naive code that does the same thing as the web.config: given a string, it will return a new string converted according to the 3 rules above. It does not take into account the 4th case, where an explicit hard-coded conversion is needed (the tool I present in the next post does take that into account).

static string REGEX_post_redirect = "[0-9]{4}/[0-9]{2}/([0-9a-z-]+).html";
static string REGEX_category_redirect = "labels/([_0-9a-z-% ]+).html";
static string REGEX_monthlyarchive_redirect = "([0-9]{4})_([0-9]{2})_[0-9]{2}_mothblog_archive.html";

static string Redirect(string oldUrl)
{
    GroupCollection g;
    if (RunRegExOnIt(oldUrl, REGEX_post_redirect, 2, out g))
        return string.Concat(g[1].Value, ".aspx");
    if (RunRegExOnIt(oldUrl, REGEX_category_redirect, 2, out g))
        return string.Concat("CategoryView,category,", g[1].Value, ".aspx");
    if (RunRegExOnIt(oldUrl, REGEX_monthlyarchive_redirect, 3, out g))
        return string.Concat("default,month,", g[1].Value, "-", g[2], ".aspx");
    return string.Empty;
}

static bool RunRegExOnIt(string toRegEx, string pattern, int groupCount, out GroupCollection g)
{
    if (pattern.Length == 0) { g = null; return false; }
    g = new Regex(pattern, RegexOptions.IgnoreCase | RegexOptions.Compiled).Match(toRegEx).Groups;
    return (g.Count == groupCount);
}

Comments about this post welcome at the original blog.


  • Transformation of Product Management in Telecommunications for Rapid Launch of Next Generation Products

    - by raul.goycoolea
    The Telecom industry continues to evolve through disruptive products, uncertain markets, shorter product lifecycles and convergence of technologies. Today's market has moved from network-centric to consumer-centric and focuses primarily on the customer experience. This has resulted in several product management challenges, such as increased complexity and volume of offerings, creating product variants, accelerating time-to-market, the ability to provide multiple product views for varied stakeholders, leveraging OSS intelligence at the BSS layer, product co-creation and increasing audit and security concerns for service providers. This document discusses how enterprise product management, enabled by PLM-based product catalogue solutions, helps to launch next-generation products rapidly in the context of the telecommunications industry.

    1.0. Introduction

    Figure 1: Business Scenario

    Modern business demands the launch of complex products in a very short timeframe, and effecting changes in the price plan faster without IT intervention. One of the key transformation initiatives companies are focusing on is product management transformation and operational efficiency improvement. As part of these initiatives, companies are investing in best-in-class COTS-based product management solutions developed on industry-wide standards. The new COTS packages are planned to integrate with existing or new B/OSS systems to provide a strategic end-to-end agile solution for reduced time-to-market and order journey time. In addition, system rationalization is being undertaken to phase out legacy systems and migrate to strategic systems.

    2.0. An Overview of Product Management in Telecom

    Product data in telecom is multi-dimensional and difficult to manage. It has increased significantly due to the complexity of the product, product offerings on the converged network, increased volume of offerings, bundled offering structures and ever-increasing regulatory requirements. In addition, the shrinking product lifecycle in telecom makes it difficult to manage the dynamic product data. Mergers and acquisitions coupled with organic growth pose major challenges in product portfolio management and are a roadblock in the journey towards becoming an agile organization.
    Figure 2: Complexity in Product Management

    'Network technology' is the new dimension in telecom product management, where the same products are realized through different networks, i.e., from siloed networks to a converged network. Consequently, the product solution is different.

    Figure 3: Current Scenario - Pain Points in Product Management

    The major business implications arising out of the current scenario are slow time-to-market and an inefficient process that affects innovation.

    3.0. Transformation of Next Generation Product Management

    Companies must focus on their product management transformation journey in the areas of:

    · Management of a single truth of product information across the organization/geographies, which is currently managed in heterogeneous systems
    · Management of the Intellectual Property (IP) on the product concept, and partnership in the design of discrete components to integrate into the system
    · Leveraging structured and unstructured product data within the extended enterprise to extract consumer insights and drive innovation
    · Management of effective operational separation to comply with regulatory bodies
    · Reuse of existing designs, adding relevant features such as value-added services to enable effective product bundling

    Figure 4: Next generation needs

    PLM-based Enterprise Product Catalogue solutions efficiently address the above requirements and act as an enabler of product management transformation and rapid product launch.

    4.0. PLM-based Enterprise Product Management

    Figure 5: PLM-based Enterprise Product Mastering

    Enterprise Product Management (EPM) enables the business to manage complex product data in complex environments. Product mastering helps create a 'single view' of the product by creating a business-driven, IT-supported environment where a global 'single truth record' is created, managed and reused.

    4.1 The Business Case for Telco PLM-based Solutions for Enterprise Product Management

    · Telco PLM-based product mastering solutions provide a centralized authoring environment for product definition and control of all product data and rules
    · PLM packages are designed to support multiple perspectives of product data (ordering perspective, billing perspective, provisioning perspective)
    · They maintain relationships/links between the different elements of the entire product definition
    · Telco PLM packages specialize in next-generation lifecycle management requirements of products (such as revision and state management, test and release management, role management and impact analysis)
    · They take into consideration all aspects of OSS product requirements, compared to CRM product catalogue solutions, where the product data managed is mostly order-oriented and transactional
    · The new breed of Telco PLM packages is designed with 'open' standards such as SID and eTOM; they are interoperable and support integration frameworks such as subscription and notification
    · Telco PLM packages have developed good collaboration frameworks to integrate suppliers and partners into the product development value chain

    4.2 Various Architectures/Approaches for Product Mastering Using Telco PLM Systems

    4.2.a Single Central Product Management (Mastering) Approach

    Figure 6: Single Central Product Management (Master) Approach

    This approach is implemented across verticals such as aerospace and automotive.
    It focuses on a physically centralized product master on which the other sources depend. The product definition data (product bundles, service bundles, price plans, offers and discounts, product configuration rules and market campaigns) is created and maintained physically in a centralized environment, and the product definition/authoring environment is centralized as well. The existing legacy product definition data available in the CRM product catalogue, the billing catalogue and the legacy product catalogue is migrated to the centralized PLM-based Enterprise Product Management solution.

    Architectural changes must be made in the existing business landscape of applications to create and revise data, because the applications have to refer to the central repository for approvals and validation of product configurations. This is achieved by modifying how the applications write data, or by adapting the applications to use the rules to be managed and published.

    Complete product configuration validation is done in the enterprise/central product catalogue, and the final configuration is sent to the B/OSS systems through the SOA-compliant product distribution architecture. This approach/architecture enables greater control in terms of product data management and product data governance.

    4.2.b Federated Product Management (Mastering) Architecture

    Figure 7: Federated Product Management (Mastering) Architecture

    In the federated product mastering approach, the basic unique product definition data (product id, description, product hierarchy, basic price plans and simple product design rules) is created and maintained centrally, while the advanced product definition (product bundling, promotions, offers and discount plans) is created in the respective downstream OSS systems.

    For example, basic product definitions such as attributes, product hierarchy and basic price plans are created and maintained in the enterprise/central product reference catalogue and distributed to the downstream OSS systems. The respective downstream OSS systems build product bundles, promotions and advanced price plans over the basic product definition and master the advanced product definition. The central reference database accesses the respective source product master data and assembles a point-in-time consolidated view of the product. This approach is typically adopted in some merger and acquisition scenarios where there is a low probability of a central physical authority managing the data. In addition, the migration effort in this case is minimal and there are no big architectural changes to the organization's application landscape. However, this approach will not result in better product data management and data governance.

    5.0 Customer Scenario - Before EPC Deployment

    A leading global telecommunications service provider wanted to launch quad-play and triple-play service offerings in the shortest possible lead time. The service provider was offering broadband and VoIP services to customers. The company wanted to reuse the majority of the broadband services and price plans and bundle them with new wireless and IPTV services for quad play and triple play.
    The challenges in launching the new service offerings were:

    Figure 8: Triple Play Plan

    · Broadband product data was stored in multiple product catalogues (CRM catalogue, billing catalogue, spreadsheets)
    · Product managers spent a lot of time performing tasks involving duplication or re-keying of data; the manual effort caused errors and cost and time overruns
    · There was no effective product and price data governance mechanism; price change issues arising from the lack of data consistency across systems resulted in leakage of customer value and revenue
    · Product data had reusability issues and was not in a structured format, which resulted in uncontrolled product portfolio creation and product management issues
    · The lack of an enterprise product model resulted in product distribution challenges and thus delays in product launch
    · Designers were constrained by the existing legacy product management solutions when modelling product/service requirements and product configuration rules such as upgrading, downgrading and cross-selling

    5.1 Customer Scenario - After EPC Deployment

    Figure 9: SOA-based End-to-End EPC Solution

    The company deployed a PLM-based Enterprise Product Catalogue solution to launch the quad-play service after evaluating various product catalogues. The broadband product offering, service and price data were migrated to the new system, and the product and price plan hierarchy for the new offerings was created using the entities defined in the enterprise product model. Supplier product catalogue data such as routers and set-top boxes was loaded onto the new solution through a SOA-based web service. Price plans and configuration rules were built in the new system. The validated final product configurations were extracted from the product catalogue in SID format and distributed to the downstream B/OSS systems through exposed SOA-based web services. The transformations required for the B/OSS systems were handled by the transformation layer that is part of the solution.

    6.0 How PLM Enabled Product Management Transformation

    Figure 10: Product Management Transformation

    The PLM-based product catalogue solution helped the customer reduce the product launch cycle time by 30% and enabled the transformation of product management for next-generation services.

    7.0 Conclusion

    On the one hand, the telecom industry is undergoing changes due to disruptions, uncertain product markets and the increased complexity of products; on the other hand, ARPU is decreasing year-on-year. Communications service providers are embarking on convergence, bundled service offerings, flexibility to cross-sell and up-sell, introducing new value-added services, and leveraging Web 2.0 concepts and network capabilities. Consequently, large-scale IT transformation initiatives to improve ARPU, supporting network and business transformations, are a business imperative. Product management has become a focus area, and companies are investing in best-in-class COTS solutions to reduce time-to-market, ensure rapid service delivery and improve operational efficiency. An efficient PLM-based enterprise product mastering solution plays a key role in achieving zero-touch automation and rapid product launch.


  • Wildcard mapping in IIS 7.0 not working

    - by jmoney
    I can't seem to get the ASP.NET engine to handle ALL wildcard mapping. When I try to make a request that is supposed to be handled by the ASP.NET engine, I get a 404 error from the StaticFile handler. Here is the content of my web.config file; you will notice that the last entry contains the wildcard mapping rules. <handlers> <clear /> <add name="LanapCaptchaHandler" path="LanapCaptcha.aspx" verb="*" type="Lanap.BotDetect.CaptchaHandler, Lanap.BotDetect" modules="ManagedPipelineHandler" scriptProcessor="" resourceType="Unspecified" requireAccess="Script" allowPathInfo="false" preCondition="integratedMode" responseBufferLimit="4194304" /> <add name="ScriptHandlerFactory" path="*.asmx" verb="*" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" modules="ManagedPipelineHandler" scriptProcessor="" resourceType="Unspecified" requireAccess="Script" allowPathInfo="false" preCondition="integratedMode" responseBufferLimit="4194304" /> <add name="ScriptHandlerFactoryAppServices" path="*_AppService.axd" verb="*" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" modules="ManagedPipelineHandler" scriptProcessor="" resourceType="Unspecified" requireAccess="Script" allowPathInfo="false" preCondition="integratedMode" responseBufferLimit="4194304" /> <add name="ScriptResource" path="ScriptResource.axd" verb="GET,HEAD" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" modules="ManagedPipelineHandler" scriptProcessor="" resourceType="Unspecified" requireAccess="Script" allowPathInfo="false" preCondition="integratedMode" responseBufferLimit="4194304" /> <add name="PHP5" path="*.php" verb="*" type="" modules="IsapiModule" scriptProcessor="C:\php\php5isapi.dll" resourceType="Unspecified" requireAccess="Script" allowPathInfo="false" preCondition="bitness32" responseBufferLimit="4194304" /> <add name="rules-Integrated" path="*.rules" verb="*" type="System.ServiceModel.Activation.HttpHandler, System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" modules="ManagedPipelineHandler" scriptProcessor="" resourceType="Unspecified" requireAccess="Script" allowPathInfo="false" preCondition="integratedMode" responseBufferLimit="4194304" /> <add name="rules-ISAPI-2.0" path="*.rules" verb="*" type="" modules="IsapiModule" scriptProcessor="C:\Windows\Microsoft.NET\Framework\v2.0.50727\aspnet_isapi.dll" resourceType="Unspecified" requireAccess="Script" allowPathInfo="false" preCondition="classicMode,runtimeVersionv2.0,bitness32" responseBufferLimit="4194304" /> <add name="rules-64-ISAPI-2.0" path="*.rules" verb="*" type="" modules="IsapiModule" scriptProcessor="C:\Windows\Microsoft.NET\Framework64\v2.0.50727\aspnet_isapi.dll" resourceType="Unspecified" requireAccess="Script" allowPathInfo="false" preCondition="classicMode,runtimeVersionv2.0,bitness64" responseBufferLimit="4194304" /> <add name="xoml-Integrated" path="*.xoml" verb="*" type="System.ServiceModel.Activation.HttpHandler, System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" modules="ManagedPipelineHandler" scriptProcessor="" resourceType="Unspecified" requireAccess="Script" allowPathInfo="false" preCondition="integratedMode" responseBufferLimit="4194304" /> <add name="xoml-ISAPI-2.0" path="*.xoml" verb="*" type=""
modules="IsapiModule" scriptProcessor="C:\Windows\Microsoft.NET\Framework\v2.0.50727\aspnet_isapi.dll" resourceType="Unspecified" requireAccess="Script" allowPathInfo="false" preCondition="classicMode,runtimeVersionv2.0,bitness32" responseBufferLimit="4194304" /> <add name="xoml-64-ISAPI-2.0" path="*.xoml" verb="*" type="" modules="IsapiModule" scriptProcessor="C:\Windows\Microsoft.NET\Framework64\v2.0.50727\aspnet_isapi.dll" resourceType="Unspecified" requireAccess="Script" allowPathInfo="false" preCondition="classicMode,runtimeVersionv2.0,bitness64" responseBufferLimit="4194304" /> <add name="svc-ISAPI-2.0-64" path="*.svc" verb="*" type="" modules="IsapiModule" scriptProcessor="%SystemRoot%\Microsoft.NET\Framework64\v2.0.50727\aspnet_isapi.dll" resourceType="Unspecified" requireAccess="Script" allowPathInfo="false" preCondition="classicMode,runtimeVersionv2.0,bitness64" responseBufferLimit="4194304" /> <add name="svc-ISAPI-2.0" path="*.svc" verb="*" type="" modules="IsapiModule" scriptProcessor="%SystemRoot%\Microsoft.NET\Framework\v2.0.50727\aspnet_isapi.dll" resourceType="Unspecified" requireAccess="Script" allowPathInfo="false" preCondition="classicMode,runtimeVersionv2.0,bitness32" responseBufferLimit="4194304" /> <add name="svc-Integrated" path="*.svc" verb="*" type="System.ServiceModel.Activation.HttpHandler, System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" modules="ManagedPipelineHandler" scriptProcessor="" resourceType="Unspecified" requireAccess="Script" allowPathInfo="false" preCondition="integratedMode" responseBufferLimit="4194304" /> <add name="ASPClassic" path="*.asp" verb="GET,HEAD,POST" type="" modules="IsapiModule" scriptProcessor="%windir%\system32\inetsrv\asp.dll" resourceType="File" requireAccess="Script" allowPathInfo="false" preCondition="" responseBufferLimit="4194304" /> <add name="SecurityCertificate" path="*.cer" verb="GET,HEAD,POST" type="" modules="IsapiModule" scriptProcessor="%windir%\system32\inetsrv\asp.dll" resourceType="File" requireAccess="Script" allowPathInfo="false" preCondition="" responseBufferLimit="4194304" /> <add name="ISAPI-dll" path="*.dll" verb="*" type="" modules="IsapiModule" scriptProcessor="" resourceType="File" requireAccess="Execute" allowPathInfo="true" preCondition="" responseBufferLimit="4194304" /> <add name="TraceHandler-Integrated" path="trace.axd" verb="GET,HEAD,POST,DEBUG" type="System.Web.Handlers.TraceHandler" modules="ManagedPipelineHandler" scriptProcessor="" resourceType="Unspecified" requireAccess="Script" allowPathInfo="false" preCondition="integratedMode" responseBufferLimit="4194304" /> <add name="WebAdminHandler-Integrated" path="WebAdmin.axd" verb="GET,DEBUG" type="System.Web.Handlers.WebAdminHandler" modules="ManagedPipelineHandler" scriptProcessor="" resourceType="Unspecified" requireAccess="Script" allowPathInfo="false" preCondition="integratedMode" responseBufferLimit="4194304" /> <add name="AssemblyResourceLoader-Integrated" path="WebResource.axd" verb="GET,DEBUG" type="System.Web.Handlers.AssemblyResourceLoader" modules="ManagedPipelineHandler" scriptProcessor="" resourceType="Unspecified" requireAccess="Script" allowPathInfo="false" preCondition="integratedMode" responseBufferLimit="4194304" /> <add name="PageHandlerFactory-Integrated" path="*.aspx" verb="GET,HEAD,POST,DEBUG" type="System.Web.UI.PageHandlerFactory" modules="ManagedPipelineHandler" scriptProcessor="" resourceType="Unspecified" requireAccess="Script" allowPathInfo="false" preCondition="integratedMode" 
responseBufferLimit="4194304" /> <add name="SimpleHandlerFactory-Integrated" path="*.ashx" verb="GET,HEAD,POST,DEBUG" type="System.Web.UI.SimpleHandlerFactory" modules="ManagedPipelineHandler" scriptProcessor="" resourceType="Unspecified" requireAccess="Script" allowPathInfo="false" preCondition="integratedMode" responseBufferLimit="4194304" /> <add name="HttpRemotingHandlerFactory-rem-Integrated" path="*.rem" verb="GET,HEAD,POST,DEBUG" type="System.Runtime.Remoting.Channels.Http.HttpRemotingHandlerFactory, System.Runtime.Remoting, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" modules="ManagedPipelineHandler" scriptProcessor="" resourceType="Unspecified" requireAccess="Script" allowPathInfo="false" preCondition="integratedMode" responseBufferLimit="4194304" /> <add name="HttpRemotingHandlerFactory-soap-Integrated" path="*.soap" verb="GET,HEAD,POST,DEBUG" type="System.Runtime.Remoting.Channels.Http.HttpRemotingHandlerFactory, System.Runtime.Remoting, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" modules="ManagedPipelineHandler" scriptProcessor="" resourceType="Unspecified" requireAccess="Script" allowPathInfo="false" preCondition="integratedMode" responseBufferLimit="4194304" /> <add name="AXD-ISAPI-2.0-64" path="*.axd" verb="GET,HEAD,POST,DEBUG" type="" modules="IsapiModule" scriptProcessor="%windir%\Microsoft.NET\Framework64\v2.0.50727\aspnet_isapi.dll" resourceType="Unspecified" requireAccess="Script" allowPathInfo="false" preCondition="classicMode,runtimeVersionv2.0,bitness64" responseBufferLimit="0" /> <add name="PageHandlerFactory-ISAPI-2.0-64" path="*.aspx" verb="GET,HEAD,POST,DEBUG" type="" modules="IsapiModule" scriptProcessor="%windir%\Microsoft.NET\Framework64\v2.0.50727\aspnet_isapi.dll" resourceType="Unspecified" requireAccess="Script" allowPathInfo="false" preCondition="classicMode,runtimeVersionv2.0,bitness64" responseBufferLimit="0" /> <add name="SimpleHandlerFactory-ISAPI-2.0-64" path="*.ashx" verb="GET,HEAD,POST,DEBUG" type="" modules="IsapiModule" scriptProcessor="%windir%\Microsoft.NET\Framework64\v2.0.50727\aspnet_isapi.dll" resourceType="Unspecified" requireAccess="Script" allowPathInfo="false" preCondition="classicMode,runtimeVersionv2.0,bitness64" responseBufferLimit="0" /> <add name="WebServiceHandlerFactory-ISAPI-2.0-64" path="*.asmx" verb="GET,HEAD,POST,DEBUG" type="" modules="IsapiModule" scriptProcessor="%windir%\Microsoft.NET\Framework64\v2.0.50727\aspnet_isapi.dll" resourceType="Unspecified" requireAccess="Script" allowPathInfo="false" preCondition="classicMode,runtimeVersionv2.0,bitness64" responseBufferLimit="0" /> <add name="HttpRemotingHandlerFactory-rem-ISAPI-2.0-64" path="*.rem" verb="GET,HEAD,POST,DEBUG" type="" modules="IsapiModule" scriptProcessor="%windir%\Microsoft.NET\Framework64\v2.0.50727\aspnet_isapi.dll" resourceType="Unspecified" requireAccess="Script" allowPathInfo="false" preCondition="classicMode,runtimeVersionv2.0,bitness64" responseBufferLimit="0" /> <add name="HttpRemotingHandlerFactory-soap-ISAPI-2.0-64" path="*.soap" verb="GET,HEAD,POST,DEBUG" type="" modules="IsapiModule" scriptProcessor="%windir%\Microsoft.NET\Framework64\v2.0.50727\aspnet_isapi.dll" resourceType="Unspecified" requireAccess="Script" allowPathInfo="false" preCondition="classicMode,runtimeVersionv2.0,bitness64" responseBufferLimit="0" /> <add name="AXD-ISAPI-2.0" path="*.axd" verb="GET,HEAD,POST,DEBUG" type="" modules="IsapiModule" scriptProcessor="%windir%\Microsoft.NET\Framework\v2.0.50727\aspnet_isapi.dll" 
resourceType="Unspecified" requireAccess="Script" allowPathInfo="false" preCondition="classicMode,runtimeVersionv2.0,bitness32" responseBufferLimit="0" /> <add name="PageHandlerFactory-ISAPI-2.0" path="*.aspx" verb="GET,HEAD,POST,DEBUG" type="" modules="IsapiModule" scriptProcessor="%windir%\Microsoft.NET\Framework\v2.0.50727\aspnet_isapi.dll" resourceType="Unspecified" requireAccess="Script" allowPathInfo="false" preCondition="classicMode,runtimeVersionv2.0,bitness32" responseBufferLimit="0" /> <add name="SimpleHandlerFactory-ISAPI-2.0" path="*.ashx" verb="GET,HEAD,POST,DEBUG" type="" modules="IsapiModule" scriptProcessor="%windir%\Microsoft.NET\Framework\v2.0.50727\aspnet_isapi.dll" resourceType="Unspecified" requireAccess="Script" allowPathInfo="false" preCondition="classicMode,runtimeVersionv2.0,bitness32" responseBufferLimit="0" /> <add name="WebServiceHandlerFactory-ISAPI-2.0" path="*.asmx" verb="GET,HEAD,POST,DEBUG" type="" modules="IsapiModule" scriptProcessor="%windir%\Microsoft.NET\Framework\v2.0.50727\aspnet_isapi.dll" resourceType="Unspecified" requireAccess="Script" allowPathInfo="false" preCondition="classicMode,runtimeVersionv2.0,bitness32" responseBufferLimit="0" /> <add name="HttpRemotingHandlerFactory-rem-ISAPI-2.0" path="*.rem" verb="GET,HEAD,POST,DEBUG" type="" modules="IsapiModule" scriptProcessor="%windir%\Microsoft.NET\Framework\v2.0.50727\aspnet_isapi.dll" resourceType="Unspecified" requireAccess="Script" allowPathInfo="false" preCondition="classicMode,runtimeVersionv2.0,bitness32" responseBufferLimit="0" /> <add name="HttpRemotingHandlerFactory-soap-ISAPI-2.0" path="*.soap" verb="GET,HEAD,POST,DEBUG" type="" modules="IsapiModule" scriptProcessor="%windir%\Microsoft.NET\Framework\v2.0.50727\aspnet_isapi.dll" resourceType="Unspecified" requireAccess="Script" allowPathInfo="false" preCondition="classicMode,runtimeVersionv2.0,bitness32" responseBufferLimit="0" /> <add name="CGI-exe" path="*.exe" verb="*" type="" modules="CgiModule" scriptProcessor="" resourceType="File" requireAccess="Execute" allowPathInfo="true" preCondition="" responseBufferLimit="4194304" /> <add name="TRACEVerbHandler" path="*" verb="TRACE" type="" modules="ProtocolSupportModule" scriptProcessor="" resourceType="Unspecified" requireAccess="None" allowPathInfo="false" preCondition="" responseBufferLimit="4194304" /> <add name="OPTIONSVerbHandler" path="*" verb="OPTIONS" type="" modules="ProtocolSupportModule" scriptProcessor="" resourceType="Unspecified" requireAccess="None" allowPathInfo="false" preCondition="" responseBufferLimit="4194304" /> <add name="StaticFile" path="*" verb="*" type="" modules="StaticFileModule,DefaultDocumentModule,DirectoryListingModule" scriptProcessor="" resourceType="Either" requireAccess="Read" allowPathInfo="false" preCondition="" responseBufferLimit="4194304" /> <add name="WILDCARD MAPPING 32 BIT" path="*" verb="*" type="" modules="IsapiModule" scriptProcessor="%windir%\Microsoft.NET\Framework\v2.0.50727\aspnet_isapi.dll" resourceType="Unspecified" requireAccess="None" allowPathInfo="false" preCondition="classicMode,runtimeVersionv2.0,bitness32" responseBufferLimit="4194304" /> </handlers>

    Read the article

  • GRUB won't boot Windows after update from 11.10 to 12.04

    - by Holger
    Thanks for your time and for reading this; here's the deal: I upgraded from 11.10 to 12.04 and everything worked until I rebooted. Before the upgrade I had 11.10 running successfully as a dual boot with Windows Vista. After the reboot my GRUB was shot to hell: whatever option I selected, it said "partition not found" or something similar. Booting a live session from a thumb drive and running Boot-Repair from there fixed the issue, but only for Ubuntu; when I try to boot into Windows, it only goes back to GRUB. I'm not at home, and here's a list of what I have with me: one 4 GB thumb drive (empty), one 8 GB thumb drive (bootable Windows Vista installer), one old laptop (the one I'm trying to save; its optical drive is nonexistent), and a 2 Mbps internet connection. Can you help me get back into my Windows without having to reinstall it, or at least show me a way to use my Illustrator through a virtual machine or something? Here's my grub.cfg:

    #
    # DO NOT EDIT THIS FILE
    #
    # It is automatically generated by grub-mkconfig using templates
    # from /etc/grub.d and settings from /etc/default/grub
    #
    ### BEGIN /etc/grub.d/00_header ###
    if [ -s $prefix/grubenv ]; then
      set have_grubenv=true
      load_env
    fi
    set default="0"
    if [ "${prev_saved_entry}" ]; then
      set saved_entry="${prev_saved_entry}"
      save_env saved_entry
      set prev_saved_entry=
      save_env prev_saved_entry
      set boot_once=true
    fi
    function savedefault {
      if [ -z "${boot_once}" ]; then
        saved_entry="${chosen}"
        save_env saved_entry
      fi
    }
    function recordfail {
      set recordfail=1
      if [ -n "${have_grubenv}" ]; then if [ -z "${boot_once}" ]; then save_env recordfail; fi; fi
    }
    function load_video {
      insmod vbe
      insmod vga
      insmod video_bochs
      insmod video_cirrus
    }
    insmod part_msdos
    insmod ext2
    set root='(hd0,msdos2)'
    search --no-floppy --fs-uuid --set=root 1063e402-b14f-45e5-92b6-d20a2e3a717e
    if loadfont /usr/share/grub/unicode.pf2 ; then
      set gfxmode=auto
      load_video
      insmod gfxterm
      insmod part_msdos
      insmod ext2
      set root='(hd0,msdos2)'
      search --no-floppy --fs-uuid --set=root 1063e402-b14f-45e5-92b6-d20a2e3a717e
      set locale_dir=($root)/boot/grub/locale
      set lang=de_DE
      insmod gettext
    fi
    terminal_output gfxterm
    if [ "${recordfail}" = 1 ]; then
      set timeout=-1
    else
      set timeout=10
    fi
    ### END /etc/grub.d/00_header ###
    ### BEGIN /etc/grub.d/05_debian_theme ###
    set menu_color_normal=white/black
    set menu_color_highlight=black/light-gray
    if background_color 44,0,30; then
      clear
    fi
    ### END /etc/grub.d/05_debian_theme ###
    ### BEGIN /etc/grub.d/10_linux ###
    function gfxmode {
      set gfxpayload="${1}"
      if [ "${1}" = "keep" ]; then
        set vt_handoff=vt.handoff=7
      else
        set vt_handoff=
      fi
    }
    if [ "${recordfail}" != 1 ]; then
      if [ -e ${prefix}/gfxblacklist.txt ]; then
        if hwmatch ${prefix}/gfxblacklist.txt 3; then
          if [ ${match} = 0 ]; then
            set linux_gfx_mode=keep
          else
            set linux_gfx_mode=text
          fi
        else
          set linux_gfx_mode=text
        fi
      else
        set linux_gfx_mode=keep
      fi
    else
      set linux_gfx_mode=text
    fi
    export linux_gfx_mode
    if [ "${linux_gfx_mode}" != "text" ]; then load_video; fi
    menuentry 'Ubuntu, mit Linux 3.2.0-24-generic' --class ubuntu --class gnu-linux --class gnu --class os {
      recordfail
      gfxmode $linux_gfx_mode
      insmod gzio
      insmod part_msdos
      insmod ext2
      set root='(hd0,msdos2)'
      search --no-floppy --fs-uuid --set=root 1063e402-b14f-45e5-92b6-d20a2e3a717e
      linux /boot/vmlinuz-3.2.0-24-generic root=UUID=1063e402-b14f-45e5-92b6-d20a2e3a717e ro quiet splash $vt_handoff
      initrd /boot/initrd.img-3.2.0-24-generic
    }
    menuentry 'Ubuntu, mit Linux 3.2.0-24-generic (Wiederherstellungsmodus)' --class ubuntu --class gnu-linux --class gnu --class os {
      recordfail
      insmod gzio
      insmod part_msdos
      insmod ext2
      set root='(hd0,msdos2)'
      search --no-floppy --fs-uuid --set=root 1063e402-b14f-45e5-92b6-d20a2e3a717e
      echo 'Linux 3.2.0-24-generic wird geladen …'
      linux /boot/vmlinuz-3.2.0-24-generic root=UUID=1063e402-b14f-45e5-92b6-d20a2e3a717e ro recovery nomodeset
      echo 'Initiale Ramdisk wird geladen …'
      initrd /boot/initrd.img-3.2.0-24-generic
    }
    submenu "Previous Linux versions" {
      menuentry 'Ubuntu, mit Linux 3.0.0-19-generic' --class ubuntu --class gnu-linux --class gnu --class os {
        recordfail
        gfxmode $linux_gfx_mode
        insmod gzio
        insmod part_msdos
        insmod ext2
        set root='(hd0,msdos2)'
        search --no-floppy --fs-uuid --set=root 1063e402-b14f-45e5-92b6-d20a2e3a717e
        linux /boot/vmlinuz-3.0.0-19-generic root=UUID=1063e402-b14f-45e5-92b6-d20a2e3a717e ro quiet splash $vt_handoff
        initrd /boot/initrd.img-3.0.0-19-generic
      }
      menuentry 'Ubuntu, mit Linux 3.0.0-19-generic (Wiederherstellungsmodus)' --class ubuntu --class gnu-linux --class gnu --class os {
        recordfail
        insmod gzio
        insmod part_msdos
        insmod ext2
        set root='(hd0,msdos2)'
        search --no-floppy --fs-uuid --set=root 1063e402-b14f-45e5-92b6-d20a2e3a717e
        echo 'Linux 3.0.0-19-generic wird geladen …'
        linux /boot/vmlinuz-3.0.0-19-generic root=UUID=1063e402-b14f-45e5-92b6-d20a2e3a717e ro recovery nomodeset
        echo 'Initiale Ramdisk wird geladen …'
        initrd /boot/initrd.img-3.0.0-19-generic
      }
    }
    ### END /etc/grub.d/10_linux ###
    ### BEGIN /etc/grub.d/20_linux_xen ###
    ### END /etc/grub.d/20_linux_xen ###
    ### BEGIN /etc/grub.d/20_memtest86+ ###
    menuentry "Memory test (memtest86+)" {
      insmod part_msdos
      insmod ext2
      set root='(hd0,msdos2)'
      search --no-floppy --fs-uuid --set=root 1063e402-b14f-45e5-92b6-d20a2e3a717e
      linux16 /boot/memtest86+.bin
    }
    menuentry "Memory test (memtest86+, serial console 115200)" {
      insmod part_msdos
      insmod ext2
      set root='(hd0,msdos2)'
      search --no-floppy --fs-uuid --set=root 1063e402-b14f-45e5-92b6-d20a2e3a717e
      linux16 /boot/memtest86+.bin console=ttyS0,115200n8
    }
    ### END /etc/grub.d/20_memtest86+ ###
    ### BEGIN /etc/grub.d/30_os-prober ###
    menuentry "Windows Vista (loader) (on /dev/sda1)" --class windows --class os {
      insmod part_msdos
      insmod ntfs
      set root='(hd0,msdos1)'
      search --no-floppy --fs-uuid --set=root 2C9E66B39E6674EC
      chainloader +1
    }
    ### END /etc/grub.d/30_os-prober ###
    ### BEGIN /etc/grub.d/40_custom ###
    # This file provides an easy way to add custom menu entries. Simply type the
    # menu entries you want to add after this comment. Be careful not to change
    # the 'exec tail' line above.
    ### END /etc/grub.d/40_custom ###
    ### BEGIN /etc/grub.d/41_custom ###
    if [ -f $prefix/custom.cfg ]; then
      source $prefix/custom.cfg;
    fi
    ### END /etc/grub.d/41_custom ###
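    A quick way to isolate whether the Windows chainload itself is broken (as opposed to the generated menu entry) is to try it by hand from the GRUB shell (press c at the menu). This sketch reuses the partition and modules from the os-prober entry in the grub.cfg above; adjust (hd0,msdos1) if the Vista partition lives elsewhere:

        grub> insmod part_msdos
        grub> insmod ntfs
        grub> set root=(hd0,msdos1)
        grub> chainloader +1
        grub> boot

    If that fails too, the Vista boot sector itself is probably damaged, and the fix would then be the recovery console on the Vista installer thumb drive (bootrec /fixboot) rather than anything on the GRUB side.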

    Read the article

  • ZFS/Btrfs/LVM2-like storage with advanced features on Linux?

    - by Easter Sunshine
    I have 3 identical internal 7200 RPM SATA hard disk drives on a Linux machine. I'm looking for a storage set-up that will give me all of this:
    - Different data sets (filesystems or subtrees) can have different RAID levels, so I can choose performance, space overhead, and risk trade-offs differently for different data sets while keeping the number of physical disks small (very important data can be 3xRAID1, important data can be 3xRAID5, unimportant reproducible data can be 3xRAID0).
    - If each data set has an explicit size or size limit, the ability to grow and shrink that limit (offline if need be).
    - No out-of-kernel modules.
    - R/W or read-only COW snapshots. If it's a block-level snapshot, the filesystem should be synced and quiesced during the snapshot.
    - The ability to add physical disks and then grow/redistribute RAID1, RAID5, and RAID0 volumes to take advantage of the new spindle and make sure no spindle is hotter than the rest (e.g., in NetApp, growing a RAID-DP raid group by a few disks will not balance the I/O across them without an explicit redistribution).
    Not required, but nice to have:
    - Transparent compression, per-file or per-subtree. Even better if, like NetApp, it analyzes the data first for compressibility and only compresses compressible data.
    - Deduplication that doesn't have huge performance penalties or require obscene amounts of memory (NetApp does scheduled deduplication on weekends, which is good).
    - Resistance to silent data corruption, like ZFS (this is not required because I have never seen ZFS report any data corruption on these specific disks).
    - Storage tiering, either automatic (based on caching rules) or via user-defined rules (yes, I have all-identical disks now, but this will let me add a read/write SSD cache in the future). If it's user-defined rules, they should be able to promote to SSD at the file level, not the block level.
    - Space-efficient packing of small files.
    I tried ZFS on Linux, but the limitations were:
    - Upgrading is additional work because the package is in an external repository and is tied to specific kernel versions; it is not integrated with the package manager.
    - Write IOPS does not scale with the number of devices in a raidz vdev.
    - Disks cannot be added to raidz vdevs.
    - Select data cannot be put on RAID0 to reduce overhead and improve performance without additional physical disks or giving ZFS a single partition per disk.
    ext4 on LVM2 looks like an option, except that I can't tell whether I can shrink, extend, and redistribute RAID-type logical volumes onto new spindles (of course, I can experiment with LVM on a bunch of files; see the sketch below). As far as I can tell, it doesn't have any of the nice-to-haves, so I was wondering if there is something better out there. I did look at LVM dangers and caveats, but then again, no system is perfect.
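    For the first requirement (different RAID levels per data set over the same three spindles), LVM2's raid segment types can at least be sketched. Assumptions: a reasonably recent LVM2 with the MD-backed raid types, /dev/sdb through /dev/sdd as the three disks, and the volume group name and sizes as placeholders:

        # one volume group across all three disks
        pvcreate /dev/sdb /dev/sdc /dev/sdd
        vgcreate tank /dev/sdb /dev/sdc /dev/sdd
        # very important data: 3-way mirror (3xRAID1)
        lvcreate --type raid1 -m 2 -L 50G -n critical tank
        # important data: RAID5 across all three disks (2 data + 1 parity)
        lvcreate --type raid5 -i 2 -L 200G -n important tank
        # unimportant reproducible data: plain 3-way stripe (3xRAID0)
        lvcreate --type striped -i 3 -L 100G -n scratch tank

    Whether lvextend can then grow and rebalance those raid LVs onto a fourth spindle is exactly the part I have not been able to confirm from the documentation.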

    Read the article

  • broken apache .htaccess (mod_rewrite)

    - by Tim
    Hey there, I'm running into an Apache mod_rewrite configuration issue on one of our machines. Has anyone encountered or overcome any of these issues? URL1 ( http://www.uppereast.com ) is not being redirected to URL2 ( http://www.nyclocalliving.com ). This definitely worked in my test environment, where a localhost address was rewritten to URL2 ( RewriteRule ^http://upe.localhost$ http://www.nyclocalliving.com ). I'm trying to get all of the redirect rules working ( 2200+ ), but the http://www.nyclocalliving.com site encounters a server error if I use 1000 or more rules.
    A) .htaccess file: I've tried the simplest approach, which worked in a local environment.
    75 # Various rewrite rules.
    76 <IfModule mod_rewrite.c>
    77 RewriteEngine on
    78
    79 # BEGIN new URL Mapping rules
    80 #RewriteRule ^http://www.uppereast.com/$ http://www.nyclocalliving.com
    ...
    2307 #RewriteRule ^http://www.uppereast.com/zipcodechange.html$ http://www.nyclocalliving.com/zip-code-change
    fig. 1
    B) /var/log/httpd/error_log file: there are these segmentation fault errors when I enable the first rule (line 80); there are no error logs otherwise.
    [Fri Sep 25 17:53:46 2009] [notice] Digest: generating secret for digest authentication ...
    [Fri Sep 25 17:53:46 2009] [notice] Digest: done
    [Fri Sep 25 17:53:46 2009] [notice] Apache/2.2.3 (CentOS) configured -- resuming normal operations
    [Fri Sep 25 17:53:47 2009] [notice] child pid 29774 exit signal Segmentation fault (11)
    [Fri Sep 25 17:53:47 2009] [notice] child pid 29775 exit signal Segmentation fault (11)
    [Fri Sep 25 17:53:47 2009] [notice] child pid 29776 exit signal Segmentation fault (11)
    [Fri Sep 25 17:53:47 2009] [notice] child pid 29777 exit signal Segmentation fault (11)
    [Fri Sep 25 17:53:47 2009] [notice] child pid 29778 exit signal Segmentation fault (11)
    [Fri Sep 25 17:53:47 2009] [notice] child pid 29779 exit signal Segmentation fault (11)
    fig. 2
    C) Some more debugging information from the shell; mod_rewrite is loaded, and this is the machine architecture.
    # apachectl -t -D DUMP_MODULES | more
    Loaded Modules:
     core_module (static)
     ...
     rewrite_module (shared)
    # uname -a
    Linux RegionalWeb 2.6.24-23-xen #1 SMP Mon Jan 26 03:09:12 UTC 2009 x86_64 x86_64 x86_64 GNU/Linux
    fig. 3
    I looked into some previous posts ( http://serverfault.com/questions/18744/htaccess-not-working-modrewrite ) but didn't find a solution for this. I'm sure there's a small switch somewhere that I'm missing. Thanks in advance, Tim
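    Two observations worth recording here. First, on a live server a RewriteRule pattern is matched against the URL path only, never against http://host/..., so rules of the form ^http://www.uppereast.com/$ can never match outside a test harness; the host part has to be checked with a RewriteCond on %{HTTP_HOST}. Second, for 2200+ static one-to-one redirects, a RewriteMap avoids compiling thousands of rules per request (and, very likely, whatever is blowing up the child processes). A sketch, with the map name and map file path as assumptions, and noting that the RewriteMap directive itself must live in the server or vhost config, not in .htaccess:

        # httpd.conf / vhost
        RewriteMap uppereast txt:/etc/httpd/redirects.txt

        # vhost or .htaccess
        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^(www\.)?uppereast\.com$ [NC]
        RewriteCond ${uppereast:%{REQUEST_URI}} ^(.+)$
        RewriteRule ^ %1 [R=301,L]

        # /etc/httpd/redirects.txt holds one "old-path new-URL" pair per line, e.g.
        # /zipcodechange.html http://www.nyclocalliving.com/zip-code-change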

    Read the article

  • Is it possible to allow port forwarding only for specific public IP addresses

    - by adopilot
    I have a FreeBSD router hosting a public IP address, and I am using ipnat.rules to configure port forwarding from the public network into my private network. Now I'm wondering: can I restrict my port forwarding so that only specific public IP addresses can pass through it? What I want is for only my specific public source addresses to be able to reach specific ports inside my network. Here is what my ipnat.rules file currently looks like:
    rdr fxp0 217.199.XXX.XXX/32 port 7900 -> 192.168.1.12 port 80 tcp
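    One hedged approach, relying on the fact that IPFilter applies inbound NAT before the inbound filter rules: leave the rdr exactly as it is and do the source restriction in ipf.rules, matching on the already-translated inside address. Here 203.0.113.10 is a placeholder for whichever public source address should be allowed:

        # /etc/ipf.rules: pass the allowed source first, then block everyone else
        pass in quick on fxp0 proto tcp from 203.0.113.10/32 to 192.168.1.12 port = 80 flags S keep state
        block in quick on fxp0 proto tcp from any to 192.168.1.12 port = 80

    Because quick makes the first matching rule win, the pass rule has to come before the block.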

    Read the article

  • Debugger for Iptables

    - by chris_l
    Hi, I'm looking for an easy way to follow a packet through the iptables rules. This is not so much about logging, because I don't want to log all traffic (and I only want to have LOG targets for very few rules). Something like Wireshark for iptables, or maybe even something similar to a debugger for a programming language. Thanks, Chris
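    The closest thing iptables itself offers to a per-packet debugger is the TRACE target in the raw table: it logs one kernel-log line for every rule a matching packet traverses, and because it is attached to a narrow match, nothing else gets logged. A sketch, assuming the packet of interest is HTTP from the placeholder address 198.51.100.7, on an older kernel where the logging backend is ipt_LOG (newer kernels use nf_log_ipv4):

        modprobe ipt_LOG
        iptables -t raw -A PREROUTING -s 198.51.100.7 -p tcp --dport 80 -j TRACE
        # each rule the packet traverses now shows up in the kernel log as
        #   TRACE: <table>:<chain>:<type>:<rulenum>
        dmesg | grep TRACE

    Remember to delete the raw-table rule again afterwards, e.g. iptables -t raw -D PREROUTING 1.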

    Read the article
